Hi Andrew,

I don't know about gdb, as I don't use it (in fact, I've never used a debugger on 
the MSP - with the setup I had when I started here it was impossible, and later I 
did fine without one).
But reading the TI BBS I learned that the new versions of the CCE debugger 
have an option to stop the clocks while under debugger control.
That makes some sense, as it won't do you any good to have an ISR fire every 
millisecond while debugging. You'd forever be stuck in the ISRs and never reach 
your breakpoint in main :)
(In my case, I use real-time external signals, so debugging with or without 
stopped timers wouldn't work, as I cannot bring the rest of reality under 
debugger control.)

Did you check the changelog of gdb? Maybe clock control is a newly implemented 
feature. Maybe the new gdb uses the Embedded Emulation Module (EEM) rather than 
just plain JTAG, to overcome the older versions' issues.
My 5438 errata sheet says that the clock control does not work for the VLO. 
Does your CPU have a VLO too? It runs at about 10 kHz (with ±75% tolerance), so 
you could perhaps switch ACLK to it for testing. If the interrupts 
then come at the expected rate (roughly 6.5 seconds per overflow, plus 
tolerance), it is the EEM that's stopping the clocks while tracing (provided 
that your CPU has a VLO and EEM and the same erratum).

JMGross

----- Original Message -----
From: Andrew McLaren
To: [email protected]
Sent: 10 Apr 2010 01:42:10
Subject: [Mspgcc-users] ACLK appears to slow dramatically under gdb control?

I have recently upgraded to 7.0.1 of the msp430 gdb, to get around some
issues with automatic breakpoints (for example the lack of a Step Over).
The new version appears to have solved these, except that I am now
having some issues with timing.
 
I have reduced it to the following scenario. I have Timer_A (2131
processor) clocked from a 32 kHz ACLK, with simple interrupts raised on TAR
overflow (approx. 2 seconds), and an intermediate compare at a fraction
of a second (TACCR1 at 1024). This all works fine unless it's under the
control of the debugger, in which case the interrupts take forever to
fire (the expected 2-second TAR overflow takes 10-15 minutes). This is
irrespective of whether any manual or automatic breakpoints are defined
- just say go. Freeing the processor from GDB control (a simple reset of
the processor) immediately restores the expected timing.
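[For context, a setup like the one described would look roughly like this on a 2xx device - a sketch reconstructed from the description, not the actual code; register names and the mspgcc interrupt attribute are from the msp430 headers, and the counter variables are illustrative:]

```c
#include <msp430.h>

volatile unsigned int overflow_count;   /* TAIFG ticks, ~2 s each at 32768 Hz */
volatile unsigned int compare_count;    /* TACCR1 ticks */

int main(void)
{
    WDTCTL = WDTPW | WDTHOLD;           /* stop the watchdog */

    TACCR1  = 1024;                     /* intermediate compare, ~31 ms in */
    TACCTL1 = CCIE;                     /* enable CCR1 interrupt */
    TACTL   = TASSEL_1 | MC_2 | TAIE;   /* ACLK, continuous mode, overflow IRQ */

    __bis_SR_register(GIE | LPM3_bits); /* sleep; ACLK keeps running in LPM3 */
    return 0;
}

/* TAIFG and CCR1 share the TIMERA1 vector; TAIV tells them apart. */
void __attribute__((interrupt(TIMERA1_VECTOR))) timer_a1_isr(void)
{
    switch (TAIV) {
    case 2:  compare_count++;  break;   /* TACCR1 CCIFG */
    case 10: overflow_count++; break;   /* TAIFG (overflow) */
    }
}
```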
 
I've reverted to the older mspgcc toolchain (the 30 Dec 08
version), and everything goes back to the way it was (timing works 100%,
but with the GDB issues). I'm guessing that this is somehow related to the
debugger effectively single-stepping the logic - I could understand it
playing with the system clock to achieve that, but not ACLK? Or am I
missing something entirely?
 
The full toolchain I am using is mspgcc running under Eclipse. I picked
up the 7.0.1 msp430 gdb as part of an 18 Feb 10 release (4.4.3?), but
not totally sure how I got to this, as the SourceForge release still
appears to be the 30 Dec 08 version. Am I simply using something not yet
released??
 
I will have more of a play with this - the whole installation has got a
bit messy, so I'd like to redo this with a clean install so I know
exactly what I have got, but any other ideas or comments welcome.
 
Andrew

