* Large memory usage by gdb
@ 2017-07-25 20:20 Alex Lindsay
2017-07-25 20:28 ` Philippe Waroquiers
2017-07-26 7:28 ` Yao Qi
0 siblings, 2 replies; 12+ messages in thread
From: Alex Lindsay @ 2017-07-25 20:20 UTC (permalink / raw)
To: gdb
My OS is Ubuntu 17.04. Using both gdb 7.12 and 8.0, I experience large
memory usage when debugging my executable. As I add breakpoints and run
the executable multiple times in a single session, memory usage grows
continuously, regularly hitting 10s of GBs. I don't recall experiencing
this issue with earlier Ubuntu versions (and also likely earlier
versions of gdb). When I debug the same executable with `lldb`, memory
usage is pretty much constant at around 2 GB. Does anyone have any
suggestions?
Alex
^ permalink raw reply [flat|nested] 12+ messages in thread
* Re: Large memory usage by gdb
2017-07-25 20:20 Large memory usage by gdb Alex Lindsay
@ 2017-07-25 20:28 ` Philippe Waroquiers
2017-07-31 22:11 ` Alex Lindsay
2017-07-26 7:28 ` Yao Qi
1 sibling, 1 reply; 12+ messages in thread
From: Philippe Waroquiers @ 2017-07-25 20:28 UTC (permalink / raw)
To: Alex Lindsay; +Cc: gdb
Run gdb under Valgrind, and make a heap profiling dump at regular
intervals (e.g. after each run).
With valgrind 3.12 or before, you can do a leak report to show
the delta (increase or decrease) compared to the previous leak search,
including the reachable blocks. So, you will be able to see what
increases the memory.
If you compile the latest Valgrind (3.13), you can e.g. use memcheck
and produce heap profiling reports readable with kcachegrind.
You will need a gdb compiled with debug info, or to install gdb's
debug info package, to get understandable stack traces.
Philippe
On Tue, 2017-07-25 at 15:20 -0500, Alex Lindsay wrote:
> My OS is Ubuntu 17.04. Using both gdb 7.12 and 8.0, I experience large
> memory usage when debugging my executable. As I add breakpoints and run
> the executable multiple times in a single session, memory usage grows
> continuously, regularly hitting 10s of GBs. I don't recall experiencing
> this issue with earlier Ubuntu versions (and also likely earlier
> versions of gdb). When I debug the same executable with `lldb`, memory
> usage is pretty much constant at around 2 GB. Does anyone have any
> suggestions?
>
> Alex
* Re: Large memory usage by gdb
2017-07-25 20:20 Large memory usage by gdb Alex Lindsay
2017-07-25 20:28 ` Philippe Waroquiers
@ 2017-07-26 7:28 ` Yao Qi
[not found] ` <4fc14853-b066-4fd7-f0c9-b98f442a9a95@gmail.com>
1 sibling, 1 reply; 12+ messages in thread
From: Yao Qi @ 2017-07-26 7:28 UTC (permalink / raw)
To: Alex Lindsay; +Cc: gdb
Alex Lindsay <alexlindsay239@gmail.com> writes:
> My OS is Ubuntu 17.04. Using both gdb 7.12 and 8.0, I experience large
> memory usage when debugging my executable. As I add breakpoints and
> run the executable multiple times in a single session, memory usage
> grows continuously, regularly hitting 10s of GBs. I don't recall
> experiencing this issue with earlier Ubuntu versions (and also likely
> earlier versions of gdb). When I debug the same executable with
> `lldb`, memory usage is pretty much constant at around 2 GB. Does
> anyone have any suggestions?
What is your executable? Can you give us some characteristics of your
executable to help us reproduce this problem? Is it a multi-threaded
program? Is it a C or C++ program? Does it load many shared libraries?
--
Yao (齐尧)
* Re: Large memory usage by gdb
[not found] ` <4fc14853-b066-4fd7-f0c9-b98f442a9a95@gmail.com>
@ 2017-07-26 15:55 ` Yao Qi
0 siblings, 0 replies; 12+ messages in thread
From: Yao Qi @ 2017-07-26 15:55 UTC (permalink / raw)
To: Alex Lindsay; +Cc: GDB
[Add gdb@ back]
On Wed, Jul 26, 2017 at 2:50 PM, Alex Lindsay <alexlindsay239@gmail.com> wrote:
> Thanks for your suggestion Philippe. I hope to try that this weekend.
>
> Yao,
Hi Alex, thanks for your information. I'll write a small
program which needs many small libraries, and see
if I can find some leaks.
>>
>> What is your executable?
>
> My project is here: github.com/arfc/moltres
>>
>> Can you give us some characteristics of your
>> executable to help us reproduce this problem? Is it a multi-threaded
>> program?
>
> It can be parallelized with threads or MPI, but when running with gdb I run
> single thread, single process.
>>
>> Is it a C or C++ program?
>
> C++
>>
>> Does it load many shared libraries?
>
> Yes it does load *a lot* of shared libraries, so I do expect a fairly large
> memory footprint, but I don't expect it to grow by large chunks with time.
> In case it's of use, here's the output from `ldd moltres-dbg`:
>
> linux-vdso.so.1 => (0x00007ffdaf953000)
> libmoltres-dbg.so.0 =>
> /home/lindsayad/projects/moltres/lib/libmoltres-dbg.so.0
> (0x00007f4fa02de000)
> libsquirrel-dbg.so.0 =>
> /home/lindsayad/projects/moltres/squirrel/lib/libsquirrel-dbg.so.0
> (0x00007f4f9ff4a000)
> libmodule_loader_with_fp_rdg_ns_tm_pf-dbg.so.0 =>
> /home/lindsayad/projects/moose/modules/module_loader/lib/libmodule_loader_with_fp_rdg_ns_tm_pf-dbg.so.0
> (0x00007f4f9fd0b000)
> libphase_field-dbg.so.0 =>
> /home/lindsayad/projects/moose/modules/phase_field/lib/libphase_field-dbg.so.0
> (0x00007f4f9f124000)
> libtensor_mechanics-dbg.so.0 =>
> /home/lindsayad/projects/moose/modules/tensor_mechanics/lib/libtensor_mechanics-dbg.so.0
> (0x00007f4f9e733000)
> libnavier_stokes-dbg.so.0 =>
> /home/lindsayad/projects/moose/modules/navier_stokes/lib/libnavier_stokes-dbg.so.0
> (0x00007f4f9e0d4000)
> librdg-dbg.so.0 =>
> /home/lindsayad/projects/moose/modules/rdg/lib/librdg-dbg.so.0
> (0x00007f4f9dd9e000)
> libfluid_properties-dbg.so.0 =>
> /home/lindsayad/projects/moose/modules/fluid_properties/lib/libfluid_properties-dbg.so.0
> (0x00007f4f9da48000)
> libmoose-dbg.so.0 =>
> /home/lindsayad/projects/moose/framework/libmoose-dbg.so.0
> (0x00007f4f9b79c000)
> libpcre-dbg.so.0 =>
> /home/lindsayad/projects/moose/framework/contrib/pcre/libpcre-dbg.so.0
> (0x00007f4f9b567000)
> libgcc_s.so.1 => /opt/moose/gcc-6.2.0/lib64/libgcc_s.so.1
> (0x00007f4f9b351000)
> libmesh_dbg.so.0 =>
> /home/lindsayad/projects/moose/scripts/../libmesh/installed/lib/libmesh_dbg.so.0
> (0x00007f4f98f16000)
> libnetcdf.so.11 => /usr/lib/x86_64-linux-gnu/libnetcdf.so.11
> (0x00007f4f95bb0000)
> libvtkIOCore-7.1.so.1 =>
> /opt/moose/VTK-7.1.0/clang-opt/lib/libvtkIOCore-7.1.so.1
> (0x00007f4f958e4000)
> libvtkCommonCore-7.1.so.1 =>
> /opt/moose/VTK-7.1.0/clang-opt/lib/libvtkCommonCore-7.1.so.1
> (0x00007f4f95052000)
> libvtkCommonDataModel-7.1.so.1 =>
> /opt/moose/VTK-7.1.0/clang-opt/lib/libvtkCommonDataModel-7.1.so.1
> (0x00007f4f948a0000)
> libvtkFiltersCore-7.1.so.1 =>
> /opt/moose/VTK-7.1.0/clang-opt/lib/libvtkFiltersCore-7.1.so.1
> (0x00007f4f9404b000)
> libvtkIOXML-7.1.so.1 =>
> /opt/moose/VTK-7.1.0/clang-opt/lib/libvtkIOXML-7.1.so.1 (0x00007f4f93ce4000)
> libvtkImagingCore-7.1.so.1 =>
> /opt/moose/VTK-7.1.0/clang-opt/lib/libvtkImagingCore-7.1.so.1
> (0x00007f4f9389a000)
> libvtkIOImage-7.1.so.1 =>
> /opt/moose/VTK-7.1.0/clang-opt/lib/libvtkIOImage-7.1.so.1
> (0x00007f4f93452000)
> libvtkImagingMath-7.1.so.1 =>
> /opt/moose/VTK-7.1.0/clang-opt/lib/libvtkImagingMath-7.1.so.1
> (0x00007f4f931fd000)
> libvtkIOParallelXML-7.1.so.1 =>
> /opt/moose/VTK-7.1.0/clang-opt/lib/libvtkIOParallelXML-7.1.so.1
> (0x00007f4f92fb6000)
> libvtkParallelMPI-7.1.so.1 =>
> /opt/moose/VTK-7.1.0/clang-opt/lib/libvtkParallelMPI-7.1.so.1
> (0x00007f4f92d98000)
> libvtkParallelCore-7.1.so.1 =>
> /opt/moose/VTK-7.1.0/clang-opt/lib/libvtkParallelCore-7.1.so.1
> (0x00007f4f92aee000)
> libz.so.1 => /lib/x86_64-linux-gnu/libz.so.1 (0x00007f4f928d2000)
> libtbb.so.2 =>
> /home/lindsayad/embree-2.15.0.x86_64.linux/lib/libtbb.so.2
> (0x00007f4f92679000)
> libtbbmalloc.so.2 =>
> /home/lindsayad/embree-2.15.0.x86_64.linux/lib/libtbbmalloc.so.2
> (0x00007f4f92425000)
> libslepc.so.3.7 =>
> /opt/moose/slepc/slepc-3.7.3-mpich-clang/lib/libslepc.so.3.7
> (0x00007f4f91ff4000)
> libpetsc.so.3.7 =>
> /opt/moose/petsc/mpich_petsc-3.7.5/clang-opt-superlu/lib/libpetsc.so.3.7
> (0x00007f4f90c66000)
> libsuperlu_dist.so.5 =>
> /opt/moose/petsc/mpich_petsc-3.7.5/clang-opt-superlu/lib/libsuperlu_dist.so.5
> (0x00007f4f909b8000)
> libparmetis.so =>
> /opt/moose/petsc/mpich_petsc-3.7.5/clang-opt-superlu/lib/libparmetis.so
> (0x00007f4f9076d000)
> libmetis.so =>
> /opt/moose/petsc/mpich_petsc-3.7.5/clang-opt-superlu/lib/libmetis.so
> (0x00007f4f904ec000)
> libX11.so.6 => /usr/lib/x86_64-linux-gnu/libX11.so.6
> (0x00007f4f901b3000)
> libhwloc.so.5 => /usr/lib/x86_64-linux-gnu/libhwloc.so.5
> (0x00007f4f8ff78000)
> libmpifort.so.12 =>
> /opt/moose/mpich/mpich-3.2/clang-opt/lib/libmpifort.so.12
> (0x00007f4f8fd3c000)
> libgfortran.so.3 => /opt/moose/gcc-6.2.0/lib64/libgfortran.so.3
> (0x00007f4f8fa16000)
> libgomp.so.1 => /opt/moose/gcc-6.2.0/lib64/libgomp.so.1
> (0x00007f4f8f7e9000)
> libquadmath.so.0 => /opt/moose/gcc-6.2.0/lib64/libquadmath.so.0
> (0x00007f4f8f5a8000)
> libmpicxx.so.12 =>
> /opt/moose/mpich/mpich-3.2/clang-opt/lib/libmpicxx.so.12
> (0x00007f4f8f37e000)
> libmpi.so.12 => /opt/moose/mpich/mpich-3.2/clang-opt/lib/libmpi.so.12
> (0x00007f4f8ee09000)
> librt.so.1 => /lib/x86_64-linux-gnu/librt.so.1 (0x00007f4f8ec01000)
> libomp.so => /opt/moose/llvm-3.9.0/lib/libomp.so (0x00007f4f8e944000)
> libpthread.so.0 => /lib/x86_64-linux-gnu/libpthread.so.0
> (0x00007f4f8e726000)
> libdl.so.2 => /lib/x86_64-linux-gnu/libdl.so.2 (0x00007f4f8e520000)
> libstdc++.so.6 => /opt/moose/gcc-6.2.0/lib64/libstdc++.so.6
> (0x00007f4f8e1a0000)
> libm.so.6 => /lib/x86_64-linux-gnu/libm.so.6 (0x00007f4f8de97000)
> libc.so.6 => /lib/x86_64-linux-gnu/libc.so.6 (0x00007f4f8dad0000)
> libhdf5_serial_hl.so.100 =>
> /usr/lib/x86_64-linux-gnu/libhdf5_serial_hl.so.100 (0x00007f4f8d882000)
> libhdf5_serial.so.100 => /usr/lib/x86_64-linux-gnu/libhdf5_serial.so.100
> (0x00007f4f8d32b000)
> libcurl-gnutls.so.4 => /usr/lib/x86_64-linux-gnu/libcurl-gnutls.so.4
> (0x00007f4f8d0b9000)
> libvtkCommonExecutionModel-7.1.so.1 =>
> /opt/moose/VTK-7.1.0/clang-opt/lib/libvtkCommonExecutionModel-7.1.so.1
> (0x00007f4f8cd89000)
> libvtkCommonMisc-7.1.so.1 =>
> /opt/moose/VTK-7.1.0/clang-opt/lib/libvtkCommonMisc-7.1.so.1
> (0x00007f4f8cb41000)
> libvtkzlib-7.1.so.1 =>
> /opt/moose/VTK-7.1.0/clang-opt/lib/libvtkzlib-7.1.so.1 (0x00007f4f8c922000)
> libvtkCommonTransforms-7.1.so.1 =>
> /opt/moose/VTK-7.1.0/clang-opt/lib/libvtkCommonTransforms-7.1.so.1
> (0x00007f4f8c6e4000)
> libvtkCommonMath-7.1.so.1 =>
> /opt/moose/VTK-7.1.0/clang-opt/lib/libvtkCommonMath-7.1.so.1
> (0x00007f4f8c4b7000)
> libvtksys-7.1.so.1 =>
> /opt/moose/VTK-7.1.0/clang-opt/lib/libvtksys-7.1.so.1 (0x00007f4f8c227000)
> libvtkCommonSystem-7.1.so.1 =>
> /opt/moose/VTK-7.1.0/clang-opt/lib/libvtkCommonSystem-7.1.so.1
> (0x00007f4f8c009000)
> libvtkIOXMLParser-7.1.so.1 =>
> /opt/moose/VTK-7.1.0/clang-opt/lib/libvtkIOXMLParser-7.1.so.1
> (0x00007f4f8bde0000)
> libvtkDICOMParser-7.1.so.1 =>
> /opt/moose/VTK-7.1.0/clang-opt/lib/libvtkDICOMParser-7.1.so.1
> (0x00007f4f8bb8a000)
> libvtkmetaio-7.1.so.1 =>
> /opt/moose/VTK-7.1.0/clang-opt/lib/libvtkmetaio-7.1.so.1
> (0x00007f4f8b86a000)
> libvtkpng-7.1.so.1 =>
> /opt/moose/VTK-7.1.0/clang-opt/lib/libvtkpng-7.1.so.1 (0x00007f4f8b633000)
> libvtktiff-7.1.so.1 =>
> /opt/moose/VTK-7.1.0/clang-opt/lib/libvtktiff-7.1.so.1 (0x00007f4f8b3a3000)
> libvtkjpeg-7.1.so.1 =>
> /opt/moose/VTK-7.1.0/clang-opt/lib/libvtkjpeg-7.1.so.1 (0x00007f4f8b173000)
> libvtkIOLegacy-7.1.so.1 =>
> /opt/moose/VTK-7.1.0/clang-opt/lib/libvtkIOLegacy-7.1.so.1
> (0x00007f4f8ae6e000)
> libxcb.so.1 => /usr/lib/x86_64-linux-gnu/libxcb.so.1
> (0x00007f4f8ac4c000)
> libnuma.so.1 => /usr/lib/x86_64-linux-gnu/libnuma.so.1
> (0x00007f4f8aa41000)
> libltdl.so.7 => /usr/lib/x86_64-linux-gnu/libltdl.so.7
> (0x00007f4f8a837000)
> /lib64/ld-linux-x86-64.so.2 (0x000056155f77e000)
> libsz.so.2 => /usr/lib/x86_64-linux-gnu/libsz.so.2 (0x00007f4f8a632000)
> libidn2.so.0 => /usr/lib/x86_64-linux-gnu/libidn2.so.0
> (0x00007f4f8a410000)
> librtmp.so.1 => /usr/lib/x86_64-linux-gnu/librtmp.so.1
> (0x00007f4f8a1f4000)
> libpsl.so.5 => /usr/lib/x86_64-linux-gnu/libpsl.so.5
> (0x00007f4f89fe6000)
> libnettle.so.6 => /usr/lib/x86_64-linux-gnu/libnettle.so.6
> (0x00007f4f89db0000)
> libgnutls.so.30 => /usr/lib/x86_64-linux-gnu/libgnutls.so.30
> (0x00007f4f89a50000)
> libgssapi_krb5.so.2 => /usr/lib/x86_64-linux-gnu/libgssapi_krb5.so.2
> (0x00007f4f89804000)
> liblber-2.4.so.2 => /usr/lib/x86_64-linux-gnu/liblber-2.4.so.2
> (0x00007f4f895f6000)
> libldap_r-2.4.so.2 => /usr/lib/x86_64-linux-gnu/libldap_r-2.4.so.2
> (0x00007f4f893a4000)
> libvtkexpat-7.1.so.1 =>
> /opt/moose/VTK-7.1.0/clang-opt/lib/libvtkexpat-7.1.so.1 (0x00007f4f8916f000)
> libXau.so.6 => /usr/lib/x86_64-linux-gnu/libXau.so.6
> (0x00007f4f88f69000)
> libXdmcp.so.6 => /usr/lib/x86_64-linux-gnu/libXdmcp.so.6
> (0x00007f4f88d63000)
> libaec.so.0 => /usr/lib/x86_64-linux-gnu/libaec.so.0
> (0x00007f4f88b5b000)
> libunistring.so.0 => /usr/lib/x86_64-linux-gnu/libunistring.so.0
> (0x00007f4f88845000)
> libhogweed.so.4 => /usr/lib/x86_64-linux-gnu/libhogweed.so.4
> (0x00007f4f88612000)
> libgmp.so.10 => /usr/lib/x86_64-linux-gnu/libgmp.so.10
> (0x00007f4f88392000)
> libp11-kit.so.0 => /usr/lib/x86_64-linux-gnu/libp11-kit.so.0
> (0x00007f4f8812b000)
> libidn.so.11 => /lib/x86_64-linux-gnu/libidn.so.11 (0x00007f4f87ef8000)
> libtasn1.so.6 => /usr/lib/x86_64-linux-gnu/libtasn1.so.6
> (0x00007f4f87ce5000)
> libkrb5.so.3 => /usr/lib/x86_64-linux-gnu/libkrb5.so.3
> (0x00007f4f87a10000)
> libk5crypto.so.3 => /usr/lib/x86_64-linux-gnu/libk5crypto.so.3
> (0x00007f4f877de000)
> libcom_err.so.2 => /lib/x86_64-linux-gnu/libcom_err.so.2
> (0x00007f4f875da000)
> libkrb5support.so.0 => /usr/lib/x86_64-linux-gnu/libkrb5support.so.0
> (0x00007f4f873cd000)
> libresolv.so.2 => /lib/x86_64-linux-gnu/libresolv.so.2
> (0x00007f4f871b2000)
> libsasl2.so.2 => /usr/lib/x86_64-linux-gnu/libsasl2.so.2
> (0x00007f4f86f97000)
> libgssapi.so.3 => /usr/lib/x86_64-linux-gnu/libgssapi.so.3
> (0x00007f4f86d55000)
> libffi.so.6 => /usr/lib/x86_64-linux-gnu/libffi.so.6
> (0x00007f4f86b4d000)
> libkeyutils.so.1 => /lib/x86_64-linux-gnu/libkeyutils.so.1
> (0x00007f4f86947000)
> libheimntlm.so.0 => /usr/lib/x86_64-linux-gnu/libheimntlm.so.0
> (0x00007f4f8673e000)
> libkrb5.so.26 => /usr/lib/x86_64-linux-gnu/libkrb5.so.26
> (0x00007f4f864b1000)
> libasn1.so.8 => /usr/lib/x86_64-linux-gnu/libasn1.so.8
> (0x00007f4f8620e000)
> libhcrypto.so.4 => /usr/lib/x86_64-linux-gnu/libhcrypto.so.4
> (0x00007f4f85fd7000)
> libroken.so.18 => /usr/lib/x86_64-linux-gnu/libroken.so.18
> (0x00007f4f85dc1000)
> libwind.so.0 => /usr/lib/x86_64-linux-gnu/libwind.so.0
> (0x00007f4f85b96000)
> libheimbase.so.1 => /usr/lib/x86_64-linux-gnu/libheimbase.so.1
> (0x00007f4f85987000)
> libhx509.so.5 => /usr/lib/x86_64-linux-gnu/libhx509.so.5
> (0x00007f4f8573c000)
> libsqlite3.so.0 => /usr/lib/x86_64-linux-gnu/libsqlite3.so.0
> (0x00007f4f85435000)
> libcrypt.so.1 => /lib/x86_64-linux-gnu/libcrypt.so.1
> (0x00007f4f851fd000)
--
Yao (齐尧)
* Re: Large memory usage by gdb
2017-07-25 20:28 ` Philippe Waroquiers
@ 2017-07-31 22:11 ` Alex Lindsay
2017-08-01 19:12 ` Philippe Waroquiers
0 siblings, 1 reply; 12+ messages in thread
From: Alex Lindsay @ 2017-07-31 22:11 UTC (permalink / raw)
To: Philippe Waroquiers; +Cc: gdb
Philippe,
Is memcheck a better tool to use here compared to massif?
Alex
On 07/25/2017 03:28 PM, Philippe Waroquiers wrote:
> Run gdb under Valgrind, and make a heap profiling dump at regular
> intervals (e.g. after each run).
>
> With valgrind 3.12 or before, you can do a leak report to show
> the delta (increase or decrease) compared to the previous leak search,
> including the reachable blocks. So, you will be able to see what
> increases the memory.
>
> If you compile the latest Valgrind (3.13), you can e.g. use memcheck
> and produce heap profiling reports readable with kcachegrind.
>
> You will need a gdb compiled with debug info, or to install gdb's
> debug info package, to get understandable stack traces.
>
> Philippe
>
> On Tue, 2017-07-25 at 15:20 -0500, Alex Lindsay wrote:
>> My OS is Ubuntu 17.04. Using both gdb 7.12 and 8.0, I experience large
>> memory usage when debugging my executable. As I add breakpoints and run
>> the executable multiple times in a single session, memory usage grows
>> continuously, regularly hitting 10s of GBs. I don't recall experiencing
>> this issue with earlier Ubuntu versions (and also likely earlier
>> versions of gdb). When I debug the same executable with `lldb`, memory
>> usage is pretty much constant at around 2 GB. Does anyone have any
>> suggestions?
>>
>> Alex
>
* Re: Large memory usage by gdb
2017-07-31 22:11 ` Alex Lindsay
@ 2017-08-01 19:12 ` Philippe Waroquiers
[not found] ` <420b109c-1610-d687-ae9a-b172542fafca@gmail.com>
0 siblings, 1 reply; 12+ messages in thread
From: Philippe Waroquiers @ 2017-08-01 19:12 UTC (permalink / raw)
To: Alex Lindsay; +Cc: gdb
On Mon, 2017-07-31 at 17:11 -0500, Alex Lindsay wrote:
> Philippe,
>
> Is memcheck a better tool to use here compared to massif?
In valgrind 3.13, memcheck provides a quite detailed/precise
way to see delta memory increase/decrease.
Typically, you will give --xtree-memory=full argument,
and then e.g. use vgdb to launch a (delta) leak search
after each run.
You can then use kcachegrind to visualise the resulting
memory increase.
Massif is IMO less precise, but automatically produces
memory status reports at regular intervals.
So, in summary, I would use valgrind 3.13 and memcheck
(with an additional benefit that if ever your use case
causes real memory leaks, memcheck will detect them).
Philippe
>
> Alex
>
> On 07/25/2017 03:28 PM, Philippe Waroquiers wrote:
> > Run gdb under Valgrind, and make a heap profiling dump at regular
> > intervals (e.g. after each run).
> >
> > With valgrind 3.12 or before, you can do a leak report to show
> > the delta (increase or decrease) compared to the previous leak search,
> > including the reachable blocks. So, you will be able to see what
> > increases the memory.
> >
> > If you compile the latest Valgrind (3.13), you can e.g. use memcheck
> > and produce heap profiling reports readable with kcachegrind.
> >
> > You will need a gdb compiled with debug info, or to install gdb's
> > debug info package, to get understandable stack traces.
> >
> > Philippe
> >
> > On Tue, 2017-07-25 at 15:20 -0500, Alex Lindsay wrote:
> >> My OS is Ubuntu 17.04. Using both gdb 7.12 and 8.0, I experience large
> >> memory usage when debugging my executable. As I add breakpoints and run
> >> the executable multiple times in a single session, memory usage grows
> >> continuously, regularly hitting 10s of GBs. I don't recall experiencing
> >> this issue with earlier Ubuntu versions (and also likely earlier
> >> versions of gdb). When I debug the same executable with `lldb`, memory
> >> usage is pretty much constant at around 2 GB. Does anyone have any
> >> suggestions?
> >>
> >> Alex
> >
>
* Re: Large memory usage by gdb
[not found] ` <420b109c-1610-d687-ae9a-b172542fafca@gmail.com>
@ 2017-08-04 21:43 ` Alex Lindsay
2017-08-07 9:16 ` Yao Qi
2017-08-07 18:19 ` Philippe Waroquiers
1 sibling, 1 reply; 12+ messages in thread
From: Alex Lindsay @ 2017-08-04 21:43 UTC (permalink / raw)
To: gdb
So I wanted to share what I've been doing to make sure that I'm not
wasting my time (or the list's). Since this is my first deep dive into
valgrind, I started "simple". I've been running more or less:
valgrind --xtree-leak=yes gdb --args ./hello
Where `hello` is just a hello world program. My initial leak report was:
==5923== LEAK SUMMARY:
==5923== definitely lost: 734,882 bytes in 6,231 blocks
==5923== indirectly lost: 42,581 bytes in 6 blocks
==5923== possibly lost: 112,422 bytes in 327 blocks
==5923== still reachable: 8,514,469 bytes in 16,790 blocks
==5923== suppressed: 0 bytes in 0 blocks
with the initial call-graph (call-graphs viewable at
https://github.com/lindsayad/markdown-notebooks/blob/master/gdb/images-for-list.md).
Consequently, I made some changes in the `cp_canonicalize_string`
function of `cp-support.c` (viewable at
https://github.com/lindsayad/gdb/pull/1/files). Subsequent running of
the same valgrind command resulted in the new summary:
==1748== LEAK SUMMARY:
==1748== definitely lost: 74,226 bytes in 21 blocks
==1748== indirectly lost: 42,581 bytes in 6 blocks
==1748== possibly lost: 111,142 bytes in 324 blocks
==1748== still reachable: 8,515,463 bytes in 16,791 blocks
==1748== suppressed: 0 bytes in 0 blocks
and a new call-graph. I was fairly pleased that I reduced the number of
definitely lost bytes by an order of magnitude. So iterating again, I
made some changes to `elfread.c` and generated the new summary:
==30129== LEAK SUMMARY:
==30129== definitely lost: 37,538 bytes in 15 blocks
==30129== indirectly lost: 0 bytes in 0 blocks
==30129== possibly lost: 111,142 bytes in 324 blocks
==30129== still reachable: 8,512,473 bytes in 16,788 blocks
==30129== suppressed: 0 bytes in 0 blocks
and new call-graph. So my question is, is what I'm doing valuable? I
haven't done any profiling yet to see how these changes affect my real
use case where I'm debugging an executable with lots of shared
libraries. Nevertheless, these leaks do seem to be very real. I know
that GDB developers are way better programmers than I am, so the fact
that these leaks haven't been found yet makes me wonder whether they
matter in real use cases or not. I am using a gdb built from the git
repository (GNU gdb (GDB) 8.0.50.20170803-git).
Thanks for your time,
Alex
On 08/01/2017 02:11 PM, Philippe Waroquiers wrote:
> On Mon, 2017-07-31 at 17:11 -0500, Alex Lindsay wrote:
>> Philippe,
>>
>> Is memcheck a better tool to use here compared to massif?
> In valgrind 3.13, memcheck provides a quite detailed/precise
> way to see delta memory increase/decrease.
> Typically, you will give --xtree-memory=full argument,
> and then e.g. use vgdb to launch a (delta) leak search
> after each run.
> You can then use kcachegrind to visualise the resulting
> memory increase.
>
> Massif is IMO less precise, but automatically produces
> memory status reports at regular intervals.
>
> So, in summary, I would use valgrind 3.13 and memcheck
> (with an additional benefit that if ever your use case
> causes real memory leaks, memcheck will detect them).
>
> Philippe
>
>> Alex
>>
>> On 07/25/2017 03:28 PM, Philippe Waroquiers wrote:
>>> Run gdb under Valgrind, and make a heap profiling dump at regular
>>> intervals (e.g. after each run).
>>>
>>> With valgrind 3.12 or before, you can do a leak report to show
>>> the delta (increase or decrease) compared to the previous leak search,
>>> including the reachable blocks. So, you will be able to see what
>>> increases the memory.
>>>
>>> If you compile the latest Valgrind (3.13), you can e.g. use memcheck
>>> and produce heap profiling reports readable with kcachegrind.
>>>
>>> You will need a gdb compiled with debug info, or to install gdb's
>>> debug info package, to get understandable stack traces.
>>>
>>> Philippe
>>>
>>> On Tue, 2017-07-25 at 15:20 -0500, Alex Lindsay wrote:
>>>> My OS is Ubuntu 17.04. Using both gdb 7.12 and 8.0, I experience large
>>>> memory usage when debugging my executable. As I add breakpoints and run
>>>> the executable multiple times in a single session, memory usage grows
>>>> continuously, regularly hitting 10s of GBs. I don't recall experiencing
>>>> this issue with earlier Ubuntu versions (and also likely earlier
>>>> versions of gdb). When I debug the same executable with `lldb`, memory
>>>> usage is pretty much constant at around 2 GB. Does anyone have any
>>>> suggestions?
>>>>
>>>> Alex
* Re: Large memory usage by gdb
2017-08-04 21:43 ` Alex Lindsay
@ 2017-08-07 9:16 ` Yao Qi
2017-08-07 19:53 ` Philippe Waroquiers
0 siblings, 1 reply; 12+ messages in thread
From: Yao Qi @ 2017-08-07 9:16 UTC (permalink / raw)
To: Alex Lindsay; +Cc: gdb
Alex Lindsay <alexlindsay239@gmail.com> writes:
> and new call-graph. So my question is, is what I'm doing valuable? I
Oh, definitely yes! Thanks a lot for the investigation.
> haven't done any profiling yet to see how these changes affect my real
> use case where I'm debugging an executable with lots of shared
> libraries. Nevertheless, these leaks do seem to be very real. I know
> that GDB developers are way better programmers than I am, so the fact
> that these leaks haven't been found yet makes me wonder whether they
> matter in real use cases or not. I am using a gdb built from the git
> repository (GNU gdb (GDB) 8.0.50.20170803-git).
Leaks are bugs, and we should fix them. I can find these leaks in
valgrind too:
==21225== 463 (336 direct, 127 indirect) bytes in 1 blocks are definitely lost in loss record 10,770 of 10,949
==21225== at 0x4C2DB8F: malloc (in /usr/lib/valgrind/vgpreload_memcheck-amd64-linux.so)
==21225== by 0x6C6DA2: bfd_malloc (libbfd.c:193)
==21225== by 0x6C6F4D: bfd_zmalloc (libbfd.c:278)
==21225== by 0x6D252E: elf_x86_64_get_synthetic_symtab (elf64-x86-64.c:6846)
==21225== by 0x4B397A: elf_read_minimal_symbols (elfread.c:1124)
==21225== by 0x4B397A: elf_symfile_read(objfile*, enum_flags<symfile_add_flag>) (elfread.c:1182)
==21225== by 0x63AC94: read_symbols(objfile*, enum_flags<symfile_add_flag>) (symfile.c:861)
==21225== by 0x63A773: syms_from_objfile_1 (symfile.c:1062)
and
==21225== 32 bytes in 1 blocks are definitely lost in loss record 6,063 of 10,949
==21225== at 0x4C2DB8F: malloc (in /usr/lib/valgrind/vgpreload_memcheck-amd64-linux.so)
==21225== by 0x4C2FDEF: realloc (in /usr/lib/valgrind/vgpreload_memcheck-amd64-linux.so)
==21225== by 0x76CB31: d_growable_string_resize (cp-demangle.c:3963)
==21225== by 0x76CB31: d_growable_string_init (cp-demangle.c:3942)
==21225== by 0x76CB31: cplus_demangle_print (cp-demangle.c:4308)
==21225== by 0x4C9535: cp_comp_to_string(demangle_component*, int) (cp-name-parser.y:1972)
==21225== by 0x53EF14: cp_canonicalize_string[abi:cxx11](char const*) (cp-support.c:569)
==21225== by 0x561B75: dwarf2_canonicalize_name(char const*, dwarf2_cu*, obstack*) [clone .isra.210] (dwarf2read.c:20159)
==21225== by 0x566B77: read_partial_die (dwarf2read.c:16264)
Can you post your two patches
https://github.com/lindsayad/gdb/pull/1/files separately to
gdb-patches@sourceware.org?
--
Yao (齐尧)
* Re: Large memory usage by gdb
[not found] ` <420b109c-1610-d687-ae9a-b172542fafca@gmail.com>
2017-08-04 21:43 ` Alex Lindsay
@ 2017-08-07 18:19 ` Philippe Waroquiers
1 sibling, 0 replies; 12+ messages in thread
From: Philippe Waroquiers @ 2017-08-07 18:19 UTC (permalink / raw)
To: Alex Lindsay; +Cc: gdb
On Fri, 2017-08-04 at 16:14 -0500, Alex Lindsay wrote:
> So I wanted to share what I've been doing to make sure that I'm not
> wasting my time (or the list's).
For sure, fixing leaks is not wasted time.
> Since this is my first deep dive into valgrind, I started "simple".
> I've been running more or less:
> valgrind --xtree-leak=yes gdb --args ./hello
If you are only interested in leaks (definite and/or possible leaks)
and you do not have zillions of different leaks, then using
the classical text output for leak search might be easier.
If you use kcachegrind to visualise xtree leak reports,
you might have to tune the way the graph is shown by using
menus in the graph such as:
right click -> Graph -> Caller Depth -> ...
-> Callee Depth -> ...
-> Min Node Cost -> ...
Once the leaks are solved, then to visualise the increase
of memory caused by a run in gdb, you might do (from a shell):
vgdb leak_check xtleak kinds all any
In kcachegrind, you can then analyse various 'events'
(typically for your case, you might look first at
'increase Reachable Bytes').
Alternatively, to see memory increase in a textual output,
you might do:
vgdb leak_check full kinds all increased
(If the output is too large, you can add
limited 100
after increased
to output only the 100 (or whatever number) biggest increases.)
Philippe
* Re: Large memory usage by gdb
2017-08-07 9:16 ` Yao Qi
@ 2017-08-07 19:53 ` Philippe Waroquiers
2017-08-07 21:04 ` Alex Lindsay
0 siblings, 1 reply; 12+ messages in thread
From: Philippe Waroquiers @ 2017-08-07 19:53 UTC (permalink / raw)
To: Yao Qi; +Cc: Alex Lindsay, gdb
On Mon, 2017-08-07 at 10:14 +0100, Yao Qi wrote:
> leaks are bugs, and we should fix them. I can find these leaks in
> valgrind too,
When running valgrind + gdb on a small program, I also get
many errors like the one below (GDB 8.0, Debian 8).
Do you also see that?
Philippe
==9360== Invalid read of size 4
==9360== at 0x58AD9F3: PyObject_Free (in /usr/lib/x86_64-linux-gnu/libpython2.7.so.1.0)
==9360== by 0x4C5E7F: gdb_Py_DECREF (python-internal.h:194)
==9360== by 0x4C5E7F: decref (py-ref.h:36)
==9360== by 0x4C5E7F: ~ref_ptr (gdb_ref_ptr.h:91)
==9360== by 0x4C5E7F: unicode_to_encoded_string(_object*, char const*) (py-utils.c:74)
==9360== by 0x4C5F9C: python_string_to_host_string(_object*) (py-utils.c:158)
==9360== by 0x4BBDDD: get_doc_string(_object*, _object*) (py-param.c:314)
==9360== by 0x4BC11D: parmpy_init(_object*, _object*, _object*) (py-param.c:707)
==9360== by 0x580AD5B: ??? (in /usr/lib/x86_64-linux-gnu/libpython2.7.so.1.0)
==9360== by 0x5899BE2: PyObject_Call (in /usr/lib/x86_64-linux-gnu/libpython2.7.so.1.0)
==9360== by 0x58CD441: PyEval_EvalFrameEx (in /usr/lib/x86_64-linux-gnu/libpython2.7.so.1.0)
==9360== by 0x594218F: PyEval_EvalCodeEx (in /usr/lib/x86_64-linux-gnu/libpython2.7.so.1.0)
==9360== by 0x589132B: ??? (in /usr/lib/x86_64-linux-gnu/libpython2.7.so.1.0)
==9360== by 0x5899BE2: PyObject_Call (in /usr/lib/x86_64-linux-gnu/libpython2.7.so.1.0)
==9360== by 0x58DC0E4: ??? (in /usr/lib/x86_64-linux-gnu/libpython2.7.so.1.0)
==9360== Address 0x6f0c020 is 1,280 bytes inside a block of size 3,133 free'd
==9360== at 0x4C29B8A: realloc (vg_replace_malloc.c:785)
==9360== by 0x5862625: _PyString_Resize (in /usr/lib/x86_64-linux-gnu/libpython2.7.so.1.0)
==9360== by 0x57E40AC: PyUnicodeUCS4_EncodeUTF8 (in /usr/lib/x86_64-linux-gnu/libpython2.7.so.1.0)
==9360== by 0x5848A98: ??? (in /usr/lib/x86_64-linux-gnu/libpython2.7.so.1.0)
==9360== by 0x5899BE2: PyObject_Call (in /usr/lib/x86_64-linux-gnu/libpython2.7.so.1.0)
==9360== by 0x59416E6: PyEval_CallObjectWithKeywords (in /usr/lib/x86_64-linux-gnu/libpython2.7.so.1.0)
==9360== by 0x5906C4D: PyCodec_Encode (in /usr/lib/x86_64-linux-gnu/libpython2.7.so.1.0)
==9360== by 0x57E4AB4: PyUnicodeUCS4_AsEncodedString (in /usr/lib/x86_64-linux-gnu/libpython2.7.so.1.0)
==9360== by 0x4C5E44: unicode_to_encoded_string(_object*, char const*) (py-utils.c:74)
==9360== by 0x4C5F9C: python_string_to_host_string(_object*) (py-utils.c:158)
==9360== by 0x4BBDDD: get_doc_string(_object*, _object*) (py-param.c:314)
==9360== by 0x4BC11D: parmpy_init(_object*, _object*, _object*) (py-param.c:707)
==9360== Block was alloc'd at
==9360== at 0x4C27BF5: malloc (vg_replace_malloc.c:299)
==9360== by 0x5864249: PyString_FromStringAndSize (in /usr/lib/x86_64-linux-gnu/libpython2.7.so.1.0)
==9360== by 0x57E41C6: PyUnicodeUCS4_EncodeUTF8 (in /usr/lib/x86_64-linux-gnu/libpython2.7.so.1.0)
==9360== by 0x5848A98: ??? (in /usr/lib/x86_64-linux-gnu/libpython2.7.so.1.0)
==9360== by 0x5899BE2: PyObject_Call (in /usr/lib/x86_64-linux-gnu/libpython2.7.so.1.0)
==9360== by 0x59416E6: PyEval_CallObjectWithKeywords (in /usr/lib/x86_64-linux-gnu/libpython2.7.so.1.0)
==9360== by 0x5906C4D: PyCodec_Encode (in /usr/lib/x86_64-linux-gnu/libpython2.7.so.1.0)
==9360== by 0x57E4AB4: PyUnicodeUCS4_AsEncodedString (in /usr/lib/x86_64-linux-gnu/libpython2.7.so.1.0)
==9360== by 0x4C5E44: unicode_to_encoded_string(_object*, char const*) (py-utils.c:74)
==9360== by 0x4C5F9C: python_string_to_host_string(_object*) (py-utils.c:158)
==9360== by 0x4BBDDD: get_doc_string(_object*, _object*) (py-param.c:314)
==9360== by 0x4BC11D: parmpy_init(_object*, _object*, _object*) (py-param.c:707)
^ permalink raw reply [flat|nested] 12+ messages in thread
* Re: Large memory usage by gdb
2017-08-07 19:53 ` Philippe Waroquiers
@ 2017-08-07 21:04 ` Alex Lindsay
2017-08-07 21:34 ` Simon Marchi
0 siblings, 1 reply; 12+ messages in thread
From: Alex Lindsay @ 2017-08-07 21:04 UTC (permalink / raw)
To: Philippe Waroquiers, Yao Qi; +Cc: gdb
Yes, I've also seen all those errors. I wrote them off to errors in the
Python library, but maybe I should have looked more closely.
On 08/07/2017 02:53 PM, Philippe Waroquiers wrote:
> On Mon, 2017-08-07 at 10:14 +0100, Yao Qi wrote:
>
>> leaks are bugs, and we should fix them. I can find these leaks in
>> valgrind too,
> When running valgrind + gdb on a small program, I also get
> many errors like the below (GDB 8.0, Debian 8).
>
> Do you also see that?
>
>
> Philippe
>
> ==9360== Invalid read of size 4
> ==9360== at 0x58AD9F3: PyObject_Free (in /usr/lib/x86_64-linux-gnu/libpython2.7.so.1.0)
> ==9360== by 0x4C5E7F: gdb_Py_DECREF (python-internal.h:194)
> ==9360== by 0x4C5E7F: decref (py-ref.h:36)
> ==9360== by 0x4C5E7F: ~ref_ptr (gdb_ref_ptr.h:91)
> ==9360== by 0x4C5E7F: unicode_to_encoded_string(_object*, char const*) (py-utils.c:74)
> ==9360== by 0x4C5F9C: python_string_to_host_string(_object*) (py-utils.c:158)
> ==9360== by 0x4BBDDD: get_doc_string(_object*, _object*) (py-param.c:314)
> ==9360== by 0x4BC11D: parmpy_init(_object*, _object*, _object*) (py-param.c:707)
> ==9360== by 0x580AD5B: ??? (in /usr/lib/x86_64-linux-gnu/libpython2.7.so.1.0)
> ==9360== by 0x5899BE2: PyObject_Call (in /usr/lib/x86_64-linux-gnu/libpython2.7.so.1.0)
> ==9360== by 0x58CD441: PyEval_EvalFrameEx (in /usr/lib/x86_64-linux-gnu/libpython2.7.so.1.0)
> ==9360== by 0x594218F: PyEval_EvalCodeEx (in /usr/lib/x86_64-linux-gnu/libpython2.7.so.1.0)
> ==9360== by 0x589132B: ??? (in /usr/lib/x86_64-linux-gnu/libpython2.7.so.1.0)
> ==9360== by 0x5899BE2: PyObject_Call (in /usr/lib/x86_64-linux-gnu/libpython2.7.so.1.0)
> ==9360== by 0x58DC0E4: ??? (in /usr/lib/x86_64-linux-gnu/libpython2.7.so.1.0)
> ==9360== Address 0x6f0c020 is 1,280 bytes inside a block of size 3,133 free'd
> ==9360== at 0x4C29B8A: realloc (vg_replace_malloc.c:785)
> ==9360== by 0x5862625: _PyString_Resize (in /usr/lib/x86_64-linux-gnu/libpython2.7.so.1.0)
> ==9360== by 0x57E40AC: PyUnicodeUCS4_EncodeUTF8 (in /usr/lib/x86_64-linux-gnu/libpython2.7.so.1.0)
> ==9360== by 0x5848A98: ??? (in /usr/lib/x86_64-linux-gnu/libpython2.7.so.1.0)
> ==9360== by 0x5899BE2: PyObject_Call (in /usr/lib/x86_64-linux-gnu/libpython2.7.so.1.0)
> ==9360== by 0x59416E6: PyEval_CallObjectWithKeywords (in /usr/lib/x86_64-linux-gnu/libpython2.7.so.1.0)
> ==9360== by 0x5906C4D: PyCodec_Encode (in /usr/lib/x86_64-linux-gnu/libpython2.7.so.1.0)
> ==9360== by 0x57E4AB4: PyUnicodeUCS4_AsEncodedString (in /usr/lib/x86_64-linux-gnu/libpython2.7.so.1.0)
> ==9360== by 0x4C5E44: unicode_to_encoded_string(_object*, char const*) (py-utils.c:74)
> ==9360== by 0x4C5F9C: python_string_to_host_string(_object*) (py-utils.c:158)
> ==9360== by 0x4BBDDD: get_doc_string(_object*, _object*) (py-param.c:314)
> ==9360== by 0x4BC11D: parmpy_init(_object*, _object*, _object*) (py-param.c:707)
> ==9360== Block was alloc'd at
> ==9360== at 0x4C27BF5: malloc (vg_replace_malloc.c:299)
> ==9360== by 0x5864249: PyString_FromStringAndSize (in /usr/lib/x86_64-linux-gnu/libpython2.7.so.1.0)
> ==9360== by 0x57E41C6: PyUnicodeUCS4_EncodeUTF8 (in /usr/lib/x86_64-linux-gnu/libpython2.7.so.1.0)
> ==9360== by 0x5848A98: ??? (in /usr/lib/x86_64-linux-gnu/libpython2.7.so.1.0)
> ==9360== by 0x5899BE2: PyObject_Call (in /usr/lib/x86_64-linux-gnu/libpython2.7.so.1.0)
> ==9360== by 0x59416E6: PyEval_CallObjectWithKeywords (in /usr/lib/x86_64-linux-gnu/libpython2.7.so.1.0)
> ==9360== by 0x5906C4D: PyCodec_Encode (in /usr/lib/x86_64-linux-gnu/libpython2.7.so.1.0)
> ==9360== by 0x57E4AB4: PyUnicodeUCS4_AsEncodedString (in /usr/lib/x86_64-linux-gnu/libpython2.7.so.1.0)
> ==9360== by 0x4C5E44: unicode_to_encoded_string(_object*, char const*) (py-utils.c:74)
> ==9360== by 0x4C5F9C: python_string_to_host_string(_object*) (py-utils.c:158)
> ==9360== by 0x4BBDDD: get_doc_string(_object*, _object*) (py-param.c:314)
> ==9360== by 0x4BC11D: parmpy_init(_object*, _object*, _object*) (py-param.c:707)
>
>
^ permalink raw reply [flat|nested] 12+ messages in thread
* Re: Large memory usage by gdb
2017-08-07 21:04 ` Alex Lindsay
@ 2017-08-07 21:34 ` Simon Marchi
0 siblings, 0 replies; 12+ messages in thread
From: Simon Marchi @ 2017-08-07 21:34 UTC (permalink / raw)
To: Alex Lindsay; +Cc: Philippe Waroquiers, Yao Qi, gdb
On 2017-08-07 23:04, Alex Lindsay wrote:
> Yes, I've also seen all those errors. I wrote them off to errors in
> the python library but maybe I should have looked more closely
>
> On 08/07/2017 02:53 PM, Philippe Waroquiers wrote:
>> On Mon, 2017-08-07 at 10:14 +0100, Yao Qi wrote:
>>
>>> leaks are bugs, and we should fix them. I can find these leaks in
>>> valgrind too,
>> When running valgrind + gdb on a small program, I also get
>> many errors like the below (GDB 8.0, Debian 8).
>>
>> Do you also see that?
>>
>>
>> Philippe
>>
>> ==9360== Invalid read of size 4
>> ==9360== at 0x58AD9F3: PyObject_Free (in
>> /usr/lib/x86_64-linux-gnu/libpython2.7.so.1.0)
>> ==9360== by 0x4C5E7F: gdb_Py_DECREF (python-internal.h:194)
>> ==9360== by 0x4C5E7F: decref (py-ref.h:36)
>> ==9360== by 0x4C5E7F: ~ref_ptr (gdb_ref_ptr.h:91)
>> ==9360== by 0x4C5E7F: unicode_to_encoded_string(_object*, char
>> const*) (py-utils.c:74)
>> ==9360== by 0x4C5F9C: python_string_to_host_string(_object*)
>> (py-utils.c:158)
>> ==9360== by 0x4BBDDD: get_doc_string(_object*, _object*)
>> (py-param.c:314)
>> ==9360== by 0x4BC11D: parmpy_init(_object*, _object*, _object*)
>> (py-param.c:707)
>> ==9360== by 0x580AD5B: ??? (in
>> /usr/lib/x86_64-linux-gnu/libpython2.7.so.1.0)
>> ==9360== by 0x5899BE2: PyObject_Call (in
>> /usr/lib/x86_64-linux-gnu/libpython2.7.so.1.0)
>> ==9360== by 0x58CD441: PyEval_EvalFrameEx (in
>> /usr/lib/x86_64-linux-gnu/libpython2.7.so.1.0)
>> ==9360== by 0x594218F: PyEval_EvalCodeEx (in
>> /usr/lib/x86_64-linux-gnu/libpython2.7.so.1.0)
>> ==9360== by 0x589132B: ??? (in
>> /usr/lib/x86_64-linux-gnu/libpython2.7.so.1.0)
>> ==9360== by 0x5899BE2: PyObject_Call (in
>> /usr/lib/x86_64-linux-gnu/libpython2.7.so.1.0)
>> ==9360== by 0x58DC0E4: ??? (in
>> /usr/lib/x86_64-linux-gnu/libpython2.7.so.1.0)
>> ==9360== Address 0x6f0c020 is 1,280 bytes inside a block of size
>> 3,133 free'd
>> ==9360== at 0x4C29B8A: realloc (vg_replace_malloc.c:785)
>> ==9360== by 0x5862625: _PyString_Resize (in
>> /usr/lib/x86_64-linux-gnu/libpython2.7.so.1.0)
>> ==9360== by 0x57E40AC: PyUnicodeUCS4_EncodeUTF8 (in
>> /usr/lib/x86_64-linux-gnu/libpython2.7.so.1.0)
>> ==9360== by 0x5848A98: ??? (in
>> /usr/lib/x86_64-linux-gnu/libpython2.7.so.1.0)
>> ==9360== by 0x5899BE2: PyObject_Call (in
>> /usr/lib/x86_64-linux-gnu/libpython2.7.so.1.0)
>> ==9360== by 0x59416E6: PyEval_CallObjectWithKeywords (in
>> /usr/lib/x86_64-linux-gnu/libpython2.7.so.1.0)
>> ==9360== by 0x5906C4D: PyCodec_Encode (in
>> /usr/lib/x86_64-linux-gnu/libpython2.7.so.1.0)
>> ==9360== by 0x57E4AB4: PyUnicodeUCS4_AsEncodedString (in
>> /usr/lib/x86_64-linux-gnu/libpython2.7.so.1.0)
>> ==9360== by 0x4C5E44: unicode_to_encoded_string(_object*, char
>> const*) (py-utils.c:74)
>> ==9360== by 0x4C5F9C: python_string_to_host_string(_object*)
>> (py-utils.c:158)
>> ==9360== by 0x4BBDDD: get_doc_string(_object*, _object*)
>> (py-param.c:314)
>> ==9360== by 0x4BC11D: parmpy_init(_object*, _object*, _object*)
>> (py-param.c:707)
>> ==9360== Block was alloc'd at
>> ==9360== at 0x4C27BF5: malloc (vg_replace_malloc.c:299)
>> ==9360== by 0x5864249: PyString_FromStringAndSize (in
>> /usr/lib/x86_64-linux-gnu/libpython2.7.so.1.0)
>> ==9360== by 0x57E41C6: PyUnicodeUCS4_EncodeUTF8 (in
>> /usr/lib/x86_64-linux-gnu/libpython2.7.so.1.0)
>> ==9360== by 0x5848A98: ??? (in
>> /usr/lib/x86_64-linux-gnu/libpython2.7.so.1.0)
>> ==9360== by 0x5899BE2: PyObject_Call (in
>> /usr/lib/x86_64-linux-gnu/libpython2.7.so.1.0)
>> ==9360== by 0x59416E6: PyEval_CallObjectWithKeywords (in
>> /usr/lib/x86_64-linux-gnu/libpython2.7.so.1.0)
>> ==9360== by 0x5906C4D: PyCodec_Encode (in
>> /usr/lib/x86_64-linux-gnu/libpython2.7.so.1.0)
>> ==9360== by 0x57E4AB4: PyUnicodeUCS4_AsEncodedString (in
>> /usr/lib/x86_64-linux-gnu/libpython2.7.so.1.0)
>> ==9360== by 0x4C5E44: unicode_to_encoded_string(_object*, char
>> const*) (py-utils.c:74)
>> ==9360== by 0x4C5F9C: python_string_to_host_string(_object*)
>> (py-utils.c:158)
>> ==9360== by 0x4BBDDD: get_doc_string(_object*, _object*)
>> (py-param.c:314)
>> ==9360== by 0x4BC11D: parmpy_init(_object*, _object*, _object*)
>> (py-param.c:707)
>>
>>
This is expected with Python:
https://svn.python.org/projects/python/trunk/Misc/README.valgrind
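As that README explains, CPython's pymalloc allocator deliberately reads
memory it does not own, so Memcheck reports "Invalid read" errors like the
ones above unless pymalloc is suppressed or disabled. A sketch of the two
usual workarounds (paths and the `./myprog` target are illustrative; the
suppression file ships in the CPython source tree as
Misc/valgrind-python.supp and should match the installed libpython):

```shell
# Option 1: suppress the known pymalloc false positives while
# profiling gdb under Valgrind.
valgrind --suppressions=/path/to/valgrind-python.supp \
         --leak-check=full \
         gdb ./myprog

# Option 2: rebuild the Python that gdb links against without
# pymalloc (or with --with-valgrind), so Memcheck sees every
# allocation directly and no suppressions are needed.
./configure --without-pymalloc && make
```

With the false positives silenced, any remaining errors or leak-report
deltas are far more likely to point at gdb itself.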
Simon
^ permalink raw reply [flat|nested] 12+ messages in thread
end of thread
Thread overview: 12+ messages
-- links below jump to the message on this page --
2017-07-25 20:20 Large memory usage by gdb Alex Lindsay
2017-07-25 20:28 ` Philippe Waroquiers
2017-07-31 22:11 ` Alex Lindsay
2017-08-01 19:12 ` Philippe Waroquiers
[not found] ` <420b109c-1610-d687-ae9a-b172542fafca@gmail.com>
2017-08-04 21:43 ` Alex Lindsay
2017-08-07 9:16 ` Yao Qi
2017-08-07 19:53 ` Philippe Waroquiers
2017-08-07 21:04 ` Alex Lindsay
2017-08-07 21:34 ` Simon Marchi
2017-08-07 18:19 ` Philippe Waroquiers
2017-07-26 7:28 ` Yao Qi
[not found] ` <4fc14853-b066-4fd7-f0c9-b98f442a9a95@gmail.com>
2017-07-26 15:55 ` Yao Qi