Hello GDB experts,

I wonder if somebody can help me understand which GDB MI behavior is supposed to be correct. I've included the test case, the MI commands used, and the outputs from two debuggers:

  - the native FC5 Linux x86 GNU gdb (Red Hat Linux 6.3.0.0-1.134.fc5rh), and
  - ours, GNU gdb 6.5 (Xtensa Tools 7.1.0-development).

Our GNU gdb 6.5 is consistent with the top of the FSF tree.

PROBLEM DESCRIPTION:
====================

When we hit the breakpoint inside f11() a second time:

With 6.3 we get:

228^done,changelist=[{name="var3",in_scope="true",type_changed="false"}]
(gdb)
229^done,changelist=[{name="var4",in_scope="true",type_changed="false"}]
(gdb)
230^done,value="3"
(gdb)
231^done,value="2"
(gdb)

With 6.5+ we get:

228^done,changelist=[{name="var3",in_scope="false"}]
(gdb)
229^done,changelist=[{name="var4",in_scope="false"}]
(gdb)
230^done,value="2"
(gdb)
231^done,value="1"
(gdb)

So "var3" and "var4" are out of scope. Our GUI front end relies on the 6.3-style behavior, which is consistent with what we had in our previous releases based on GNU gdb 5.2.1.

QUESTIONS
=========

1) Is the 6.5(+)-style behavior incorrect? If it is in fact correct:
   - Are we supposed to recreate the variable objects each time we enter the function?
   - Is this efficient?

2) Where can I find good documentation describing these aspects of GDB MI? None of the docs I found on the Internet were quite helpful.

Thanks in advance for any help.

-- Maxim