Re: [Mspgcc-users] Building MSPGCC from Source Code - The Build procedure
(a) For a ticketing approach, it might be better to create tickets on the sourceforge project instead of emailing everybody with each one. Though I don't know whether anybody's going to maintain the how-to for the existing mspgcc infrastructure; my preference is to leave packaging to downstream and just provide the patches.

(b) Not sure what the last problem was; the failure to find msp430-ar? Probably you installed binutils in a directory that is not in your PATH. sudo can contribute to confusion there, since the path it uses is not the same as the path you've got in your main shell; contrast msp430-ar -v with sudo msp430-ar -v.

(c) You might be better off using mspgcc4 until we get the two projects back together. Run the buildgcc.pl script it comes with.

Peter

On Wed, Aug 11, 2010 at 5:43 PM, Errol errol.kow...@gmail.com wrote:

Being a newbie to this, I wasn't sure if each problem should be treated as something unique, or as part of the wider picture. I opted for the former, and more of a ticketing approach. Please accept my humblest apologies for erring on the wrong side. However, that said, the last (unanswered) problem has, in my opinion, little to do with typos and the source code build, and more likely with the compiler I'm using in my Ubuntu Jaunty installation. This one is the show stopper. Anyone got any ideas?

- regards, Errol Kowald
Engenia Pty Ltd, 2 Clear St., Palmerston, ACT 2913, Australia pH: +612 6242 0351

On Wed, 2010-08-11 at 16:00 -0400, John Porubek wrote:

(hope no-one is too put out by my pedantic nit-picking)

I've got no problems with the nit-picking, but I am a little put off by small changes to each subsequent Subject line, so that each message shows up as the start of a new thread (instead of being properly nested as a single thread).
Hope no one is too put off by my curmudgeonly response,

John

--
This SF.net email is sponsored by: Make an app they can't live without. Enter the BlackBerry Developer Challenge. http://p.sf.net/sfu/RIM-dev2dev

___
Mspgcc-users mailing list
Mspgcc-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/mspgcc-users
Re: [Mspgcc-users] Updates to MSP430 toolchain: binutils
Thanks for your input. (Note: detailed discussion of the mechanics of reconciling mspgcc/mspgcc4/msp430x is intended to be held on mspgcc-devel. Please subscribe to that list if you're specifically interested in this topic.)

Re errata: to whatever extent binutils or mspgcc{3,4} already supports chip-specific errata, I'll have to retain that. To whatever extent I can come up with a way to add support, I'll do so. But I expect this'll be an on-demand sort of thing: having gotten away from the need to read every data sheet to figure out where the flash sections start, I'm not going to start reading every errata document to figure out what's broken on which revision of which chip.

Names are TBD, but yes, I foresee something like: -msp430=cpu (cpux, cpuxv2) -mpy=none (mpy, mpy32) if I choose to be cute and use the m from the machine-specific option as part of the option name, at least within gcc. Gonna have to review prior art to see what the best solution is.

I'm going to do what I can to delay resolving peripheral addresses to the final link phase. Having control of the standard header files should make that easier.

I'm very sympathetic to your views on watchdogs, and would like to make it easier to control that function. However, a large portion of the MSPGCC user community is completely ignorant of the presence of the WDT because it's always been automatically turned off for them. I'm just not willing to deal with the fallout from changing the existing default behavior.

Peter

On Tue, Aug 10, 2010 at 5:48 AM, JMGross msp...@grossibaer.de wrote:

- Ursprüngliche Nachricht -
Von: Peter Bigot
Gesendet am: 10 Aug 2010 01:17:03

What I propose to do instead is reduce the machines to those required to reflect the chip CPU architecture (MSP430, MSP430X, MSP430XV2), and not try to indicate things like available peripherals and the like as distinct machine types.
The only other feature that affects generated code is presence/absence of hardware multiply support (none, MPY, MPY32), and I think that can be done without having to add another axis to the architecture matrix.

This is AFAIK the only part that needs care, besides the X/XV2 thing. Of course there are the errata, which should be taken care of, or the compiler will possibly create code that will simply crash. This is mostly X/XV2-related stuff (forbidden sequences of instructions, necessary NOPs, not doing some PC/SP manipulations, etc.). Unfortunately, this is machine-dependent. Sometimes it applies to whole families or groups of processors, sometimes only single ones are affected. I fear this could break the 'slim' concept you want to introduce.

There can be, however, basic support for yet-unsupported machines by supporting the base architecture and MPY through parameters or generic machine types. This allows compiler/assembler support for new processors. (Of course, linker scripts and include files are necessary, but these can be written by 'normal' people.) I propose that an unknown mmcu will recognize a -core and -mpy parameter and use the generic machines if given (or throw an error, or assume non-X, no MPY, if not set). This way it would be compatible with the current mechanisms of forking inside header files etc.

If this works, the guts of binutils (the assembler and linker) become independent of specific chip details, except for the location of ld sections (RAM, ROM, etc.) and the base address of a few peripherals. Since TI's now providing me with a table containing, for each chip, the necessary addresses all the way down to individual info sections, it should be much easier to do this; plus, I believe it can be done in a way that allows new chips to be targeted without having to rebuild the toolchain.

Because of the silicon errata this can prove difficult to impossible, unless there is a way to manually set errata flags.
But I don't know whether TI has a 'global' errata naming. I fear the errata names are independent for each group of processors, and the same name may mean different things for different processors.

Some of the questions that arise include:

*) Is it necessary to retain support for the existing machine definitions in binutils, at least to allow linking to legacy libraries that can't be rebuilt?

I don't think existing libraries are a problem. If they cannot be rebuilt, they'll work or not. Machine-dependent defines are already resolved during compilation. All that's left is resolving the symbols, and this is a machine-independent process. Either they are provided in the (properly) compiled project object files, or they are missing and linking is impossible. Some special code (like the startup) depends on machine-dependent information, but this is usually part of the linker scripts.

*) Can critical values like the address of the watchdog reset or multiply peripheral registers be correctly resolved using separate compilation, in such
Re: [Mspgcc-users] Updates to MSP430 toolchain: MSP430X
I'm not convinced that the presence of a single far pointer requires that every address kick up to 32 bits. If near/far is maintained as an attribute on the symbol, as apparently IAR does with a __data20 qualifier, it should be possible to manage things properly. Whether GCC+binutils makes it reasonably easy to do this, I don't know yet, but it's certainly not a new problem.

If a function address is qualified as near/far, adapting the CALL instruction to the right form at the time the executable is generated should be straightforward. Some email from Dmitry Diky to the binutils reflector back in the mid-2000s, when MSP430X was first added, implied the linker (as opposed to the assembler) was already handling that sort of thing in some cases where PC-relative jump instructions were selected based on the size of intervening instructions. Locating and updating the corresponding RET may be a bit trickier, though again this isn't a problem that only applies to the MSP430. Maybe this isn't doable, but I think it's worth putting some serious effort into trying.

If putting printf format strings in far memory means doubling the size of every address in my application, I'm not sure I'm going to want to enable that feature. But for those who need it, the fallback will be distinct near and far models for code and data, independently, as currently done in the MSP430X branch.

(BTW: Anybody know how to get ahold of Dmitry Diky? He's officially the maintainer of binutils for msp430, though I haven't been able to contact him at any known address and haven't seen anything from him in the archives since maybe 2006.)

Peter

On Wed, Aug 11, 2010 at 7:01 AM, JMGross msp...@grossibaer.de wrote:

Von: Michiel Konstapel
Gesendet am: 10 Aug 2010 14:40:06

Generally, using 20 bit registers isn't advisable, as it increases code size and execution times, and only a fraction of MSP users need the additional code size. Many (including me) just need the higher speed, more RAM, or better peripherals.
On the other hand, many (including me) are mainly interested in the extra flash, and are happy to take the step BACK in RAM going from a 1611 to a 2418. 'It's faster' doesn't help if you're stuck at 'but it doesn't fit' :)

Some do, some don't. Indeed, if you NEED the space, then you need it. My experience at e2e.ti.com is that many only think they need it, while they actually need better programming skills and more imagination :) And even if so, often putting the functions far is good enough.

Constant data, too, would be nice, but I agree with the advantages of sticking to 16 bit data pointers (unless 20 bits are requested by a switch). The problem is that it affects code. If you use a single 20 bit pointer, then all code in the whole project needs to be written to support 20 bit pointers, as it needs to save the registers as 32 bit on the stack (including ISRs), needs to deal with stack parameters differently, etc.

For bulk data, you can fall back to a manual loading routine. That's what I do with my dynamically loaded firmware update and for some constants: a hand-crafted 'getter' and 'setter' method. Those who are already using C++ objects won't notice the difference or the overhead :)

Being able to automatically split code into both low and far memory would be great but, as far as I understand, is fairly impossible for ld.

Yes, it's a rather complex task.

*) How much unhappiness would there be if the default compiler behavior was to support near-mode compilation, with far-mode for code and/or data as an option specified either by attributes in the source or compiler flags?

I don't mind which becomes the default, if it's easy to (globally) change. The -mdata-64k and -mcode-64k flags of mspgcc3's MSP430X branch, or their inverse, would do the trick. But it is necessary to note the usage of these flags in the object file. You may not mix code compiled with or without a flag; it will result in erratic program behaviour.
So my proposal is: a far attribute for functions and constants (automatic for strings, e.g. in printf), but for constants only if enabled with an additional command-line switch.

Then the command line switch would select a different, precompiled library with far data pointers?

Yes.

If I understand correctly, the overhead for 20 bit code addresses is localized to the functions that use it (so libraries can be compiled once and will work either near or far), but 20 bit data pointers force that overhead on the whole program?

Basically yes. 20 bit program code needs to be called with the 20 bit CALLA instruction, but the calling function and all surroundings may be completely 16 bit, except for the function call itself. ISRs work transparently (the missing 4 bits are automatically saved in an unused area of the status word).

A problem is all library functions which take (callback) function pointers, as the pointers (and pointer variables) are 16 bit. If these
[Mspgcc-users] Breakpoints on EZ430
Hi, I have moved to mspgcc4, I like it. This should have nothing to do with the move. I'm using Eclipse to debug. When I have the USB debugger plugged in with a 149 everything works fine. When I plug in the EZ-430 I can do everything but set a breakpoint. The breakpoint usually gets a little flag when set but not with the EZ. I do believe I have the project set up the same as the 149 project where it matters, but that doesn't mean I've missed something. Has anyone seen this before? Thanks, Dan.
[Mspgcc-users] mspgcc-devel mailing list
That's https://lists.sourceforge.net/lists/subscribe/mspgcc-devel, by the way - it doesn't show up on the list-of-mailing-lists at http://sourceforge.net/mail/?group_id=42303. From: Peter Bigot [mailto:p...@peoplepowerco.com] Sent: donderdag 12 augustus 2010 2:29 To: mspgcc-de...@lists.sourceforge.net; GCC for MSP430 - http://mspgcc.sf.net Subject: Re: [Mspgcc-devel] [Mspgcc-users] Updates to MSP430 toolchain:binutils Thanks for your input. (Note: Detailed discussion of the mechanics of reconciling mspgcc/mspgcc4/msp430x is intended to be held on mspgcc-devel. Please subscribe to that list if you're specifically interested in this topic.)
Re: [Mspgcc-users] Building MSPGCC from Source Code - The Build procedure
Thanks for your help, Peter. I didn't realise that the root PATH would be different to the user PATH, so after trying to learn how to work around that, I took your advice at (c) and tried building mspgcc4, following the procedure at http://mspgcc4.sourceforge.net/

What could go wrong? It's only 3 lines:

~$ svn checkout https://mspgcc4.svn.sourceforge.net/svnroot/mspgcc4
checked out revision 139
~$ cd mspgcc4
~/mspgcc4$ sh buildgcc.sh

I accepted all of the default choices and started the build right away, with the result:

Selected GCC 3.2.3
GDB version: 7.0.1
Insight version: 6.8-1
Target location: /opt/msp430-gcc-3.2.3
Binary package name: msp430-gcc-3.2.3_gdb_7.0.1.tar.bz2
- Do you want to start build right now? (y/n) [n] y
Running sh do-binutils.sh /opt/msp430-gcc-3.2.3 2.20.1 http://ftp.uni-kl.de build
=== makeinfo is missing from path, but required for the binutils build. Please install texinfo. Aborting. ===
sh do-binutils.sh /opt/msp430-gcc-4.4.3 2.20.1 http://ftp.uni-kl.de build exited with status code 1.
Failed to execute sh do-binutils.sh /opt/msp430-gcc-4.4.3 2.20.1 http://ftp.uni-kl.de build at ./buildgcc.pl line 247, STDIN line 9.

Where is makeinfo? What should PATH be? What should the root PATH be?

Errol

On Wed, 2010-08-11 at 18:08 -0500, Peter Bigot wrote:

(a) For a ticketing approach, it might be better to create tickets on the sourceforge project instead of emailing everybody with each one. Though I don't know whether anybody's going to maintain the how-to for the existing mspgcc infrastructure; my preference is to leave packaging to downstream and just provide the patches.

(b) Not sure what the last problem was; the failure to find msp430-ar? Probably you installed binutils in a directory that is not in your PATH. sudo can contribute to confusion there, since the path it uses is not the same as the path you've got in your main shell; contrast msp430-ar -v with sudo msp430-ar -v.

(c) You might be better off using mspgcc4 until we get the two projects back together.
Run the buildgcc.pl script it comes with.

Peter

On Wed, Aug 11, 2010 at 5:43 PM, Errol errol.kow...@gmail.com wrote:

Being a newbie to this, I wasn't sure if each problem should be treated as something unique, or as part of the wider picture. I opted for the former, and more of a ticketing approach. Please accept my humblest apologies for erring on the wrong side. However, that said, the last (unanswered) problem has, in my opinion, little to do with typos and the source code build, and more likely with the compiler I'm using in my Ubuntu Jaunty installation. This one is the show stopper. Anyone got any ideas?

- regards, Errol Kowald
Engenia Pty Ltd, 2 Clear St., Palmerston, ACT 2913, Australia pH: +612 6242 0351

On Wed, 2010-08-11 at 16:00 -0400, John Porubek wrote:

(hope no-one is too put out by my pedantic nit-picking)

I've got no problems with the nit-picking, but I am a little put off by small changes to each subsequent Subject line, so that each message shows up as the start of a new thread (instead of being properly nested as a single thread).

Hope no one is too put off by my curmudgeonly response,

John
Re: [Mspgcc-users] Updates to MSP430 toolchain: binutils
Von: Peter Bigot
Gesendet am: 30 Dez 1899 00:00:00

Re errata: to whatever extent binutils or mspgcc{3,4} already supports chip-specific errata, I'll have to retain that. To whatever extent I can come up with a way to add support, I'll do so. But I expect this'll be an on-demand sort of thing: having gotten away from the need to read every data sheet to figure out where the flash sections start, I'm not going to start reading every errata document to figure out what's broken on which revision of which chip.

Understandable. It is mostly the CPUx bugs which are compiler-specific. Some others (EEMx) may be of interest for the debugger or the JTAG connection. I think it is vital for mspgcc to take care of this, or the (IMHO worse) IAR and CCS will make it vanish from the (new) users' horizon. Once a product has the stigma of not supporting this or that, people tend to ignore it completely, even if it is a feature they don't need.

Names are TBD, but yes, I foresee something like: -msp430=cpu (cpux, cpuxv2) -mpy=none (mpy, mpy32)

That will mostly do the trick. Yet some mechanism is needed to ensure you don't mix incompatible object files; at least a default warning you can deactivate manually with another compiler/linker option. Imagine what happens if you mix MSP and MSPX code: everything from fully functional to occasionally or always crashing can result. Same for the MPY. This can be left to the user, but then newbies (or those not aware of it) will start complaining too.

I'm going to do what I can to delay resolving peripheral addresses to the final link phase. Having control of the standard header files should make that easier.

There should be a separate set of 'generic' header files which do not define any address, just extern volatile unsigned char variables. There are only a few registers needed for the libraries: MPY, WD, maybe a few more for certain types of processors.
I'm very sympathetic to your views on watchdogs, and would like to make it easier to control that function. However, a large portion of the MSPGCC user community is completely ignorant of the presence of the WDT because it's always been automatically turned off for them. I'm just not willing to deal with the fallout from changing the existing default behavior.

There could be a weak generic version which simply turns off the watchdog. But I won't recommend it. Every example from TI contains deactivating the watchdog as the first line, so almost every piece of code I've ever seen from users on e2e.ti.com deactivated the WD. Using a default startup sequence that triggers the WD instead of deactivating it will do no harm. Yes, it would be a bit slower, but who cares about a few microseconds of startup time? Those who do will program a DMA for copying :)

Actually, it took quite some time until I discovered that the WD was deactivated in all of the projects I took over. The code was triggering the WD all the time, but nobody (before me) ever noticed that the WD was actually off (a HUGE security violation) from the beginning. All TI datasheets said that it is on after a PUC, yet it wasn't. Finally I discovered the startup sequence as the culprit and wrote my own. I guess there are MANY people out there who constantly trigger the WD and rely on it, not knowing that it is off all the time.

JMGross

On Tue, Aug 10, 2010 at 5:48 AM, JMGross msp...@grossibaer.de wrote:

- Ursprüngliche Nachricht -
Von: Peter Bigot
Gesendet am: 10 Aug 2010 01:17:03

What I propose to do instead is reduce the machines to those required to reflect the chip CPU architecture (MSP430, MSP430X, MSP430XV2), and not try to indicate things like available peripherals and the like as distinct machine types.

The only other feature that affects generated code is presence/absence of hardware multiply support (none, MPY, MPY32), and I think that can be done without having to add another axis to the architecture matrix.
This is AFAIK the only part that needs care, besides the X/XV2 thing. Of course there are the errata, which should be taken care of, or the compiler will possibly create code that will simply crash. This is mostly X/XV2-related stuff (forbidden sequences of instructions, necessary NOPs, not doing some PC/SP manipulations, etc.). Unfortunately, this is machine-dependent. Sometimes it applies to whole families or groups of processors, sometimes only single ones are affected. I fear this could break the 'slim' concept you want to introduce.

There can be, however, basic support for yet-unsupported machines by supporting the base architecture and MPY through parameters or generic machine types. This allows compiler/assembler support for new processors. (Of course, linker scripts and include files are necessary, but these can be written by 'normal' people.) I propose that an unknown mmcu will recognize a -core and -mpy parameter and use the generic machines if given (or throw an error
Re: [Mspgcc-users] Updates to MSP430 toolchain: binutils
People will turn away from a product when it doesn't fit their purpose now, or in the foreseeable future. (If I could ever get it built, I might be able to pass judgement.)

Errol Kowald

On Thu, 2010-08-12 at 12:42 +0200, JMGross wrote:

Von: Peter Bigot
Gesendet am: 30 Dez 1899 00:00:00

Re errata: to whatever extent binutils or mspgcc{3,4} already supports chip-specific errata, I'll have to retain that. To whatever extent I can come up with a way to add support, I'll do so. But I expect this'll be an on-demand sort of thing: having gotten away from the need to read every data sheet to figure out where the flash sections start, I'm not going to start reading every errata document to figure out what's broken on which revision of which chip.

Understandable. It is mostly the CPUx bugs which are compiler-specific. Some others (EEMx) may be of interest for the debugger or the JTAG connection. I think it is vital for mspgcc to take care of this, or the (IMHO worse) IAR and CCS will make it vanish from the (new) users' horizon. Once a product has the stigma of not supporting this or that, people tend to ignore it completely, even if it is a feature they don't need.

Names are TBD, but yes, I foresee something like: -msp430=cpu (cpux, cpuxv2) -mpy=none (mpy, mpy32)

That will mostly do the trick. Yet some mechanism is needed to ensure you don't mix incompatible object files; at least a default warning you can deactivate manually with another compiler/linker option. Imagine what happens if you mix MSP and MSPX code: everything from fully functional to occasionally or always crashing can result. Same for the MPY. This can be left to the user, but then newbies (or those not aware of it) will start complaining too.

I'm going to do what I can to delay resolving peripheral addresses to the final link phase. Having control of the standard header files should make that easier.
There should be a separate set of 'generic' header files which do not define any address, just extern volatile unsigned char variables. There are only a few registers needed for the libraries: MPY, WD, maybe a few more for certain types of processors.

I'm very sympathetic to your views on watchdogs, and would like to make it easier to control that function. However, a large portion of the MSPGCC user community is completely ignorant of the presence of the WDT because it's always been automatically turned off for them. I'm just not willing to deal with the fallout from changing the existing default behavior.

There could be a weak generic version which simply turns off the watchdog. But I won't recommend it. Every example from TI contains deactivating the watchdog as the first line, so almost every piece of code I've ever seen from users on e2e.ti.com deactivated the WD. Using a default startup sequence that triggers the WD instead of deactivating it will do no harm. Yes, it would be a bit slower, but who cares about a few microseconds of startup time? Those who do will program a DMA for copying :)

Actually, it took quite some time until I discovered that the WD was deactivated in all of the projects I took over. The code was triggering the WD all the time, but nobody (before me) ever noticed that the WD was actually off (a HUGE security violation) from the beginning. All TI datasheets said that it is on after a PUC, yet it wasn't. Finally I discovered the startup sequence as the culprit and wrote my own. I guess there are MANY people out there who constantly trigger the WD and rely on it, not knowing that it is off all the time.
JMGross

On Tue, Aug 10, 2010 at 5:48 AM, JMGross msp...@grossibaer.de wrote:

- Ursprüngliche Nachricht -
Von: Peter Bigot
Gesendet am: 10 Aug 2010 01:17:03

What I propose to do instead is reduce the machines to those required to reflect the chip CPU architecture (MSP430, MSP430X, MSP430XV2), and not try to indicate things like available peripherals and the like as distinct machine types.

The only other feature that affects generated code is presence/absence of hardware multiply support (none, MPY, MPY32), and I think that can be done without having to add another axis to the architecture matrix.

This is AFAIK the only part that needs care, besides the X/XV2 thing. Of course there are the errata, which should be taken care of, or the compiler will possibly create code that will simply crash. This is mostly X/XV2-related stuff (forbidden sequences of instructions, necessary NOPs, not doing some PC/SP manipulations, etc.). Unfortunately, this is machine-dependent. Sometimes it applies to whole families or groups of processors, sometimes only single ones are affected. I fear this could break the 'slim' concept you want to introduce.

There can be, however, basic support for yet-unsupported machines by supporting the base architecture
Re: [Mspgcc-users] Updates to MSP430 toolchain: MSP430X
Von: Peter Bigot
Gesendet am: 12 Aug 2010 03:04:29

I'm not convinced that the presence of a single far pointer requires that every address kick up to 32 bits. If near/far is maintained as an attribute on the symbol, as apparently IAR does with a __data20 qualifier, it should be possible to manage things properly. Whether GCC+binutils makes it reasonably easy to do this, I don't know yet, but it's certainly not a new problem.

Unfortunately, it is necessary. As soon as a single register holds a 20 bit address, ALL functions in EVERY part of the application need to save 20 bit registers instead of 16 bit. Including all ISRs. Imagine the pointer being held as a register variable in R11. You call a function that IS 20-bit-aware, yet it does not use R11. But it calls a function that is not, but does need R11 locally. So it pushes 16 bits of R11 to the stack and restores them later, and the original function will have its local far register pointer truncated to 16 bits.

Things are even more difficult for the call if the pointer in question is a function pointer and not a variable pointer. At some point you'll need to assemble the address into the required 20 bit address pointer. If a non-20-bit-aware ISR then interferes, you're calling low memory without noticing. The only way to circumvent this would be putting the address into a local variable on the stack and calling the function with CALL x(SP). Quite a performance hit, even if you pass the pointer as a 32 bit value in R14/R15. And it needs some 'smartness' from the compiler, as users won't expect to have to do such 'optimizations' (if you can at all force the compiler to not use a register for a local variable or parameter).

If a function address is qualified as near/far, adapting the CALL instruction to the right form at the time the executable is generated should be straightforward.

If a function is qualified near/far, the type of CALL needed is already set by this qualifier.
Far functions require a far call (and will do a far return). Near functions don't require a far call, but if it is allowed to call them from within a far function, they need a far return too, and therefore need to be called with a far call too. So once you are using far code, all code needs to use RETA and CALLA; no place for any single RET or CALL instruction in this code anymore. Unfortunately, the required code not only depends on the type of function called, but also on how and from where it is called. So it's all or nothing.

Some email from Dmitry Diky to the binutils reflector back in the mid 2000s when MSP430X was first added implied the linker (as opposed to assembler) was already handling that sort of thing in some cases where PC-relative jump instructions were selected based on the size of intervening instructions. Locating and updating the corresponding RET may be a bit trickier, though again this isn't a problem that only applies to the MSP430.

Jump instructions are different, as they are relative and do not consist of a call/ret pair that has to match.

Maybe this isn't doable, but I think it's worth putting some serious effort into trying. If putting printf format strings in far memory means doubling the size of every address in my application, I'm not sure I'm going to want to enable that feature.

Maybe it would be sufficient to provide a far version of printf that is optimized to handle this without 20 bit pointers, and/or use assembly language with GIE cleared during the critical fetches. Atmel's AVR does something similar for format strings in program code (as its Harvard architecture separates data and program space with different access instructions): there's a printf and a printf_P, and the only difference is the access to the format string. So there would need to be two versions of printf which take a far or near format string. Alternatively, the format string parameter could always be 32 bit. It would slow down normal printf slightly, but would be transparent.
The compiler needs to check and warn. Also, an additional format qualifier is needed, so you can pass 32 bit pointers to string parameters.

JMGross

Von: Michiel Konstapel
Gesendet am: 10 Aug 2010 14:40:06

Generally, using 20 bit registers isn't advisable, as it increases code size and execution times, and only a fraction of MSP users need the additional code size. Many (including me) just need the higher speed, more RAM, or better peripherals.

On the other hand, many (including me) are mainly interested in the extra flash, and are happy to take the step BACK in RAM going from a 1611 to a 2418. 'It's faster' doesn't help if you're stuck at 'but it doesn't fit' :)

Some do, some don't. Indeed, if you NEED the space, then you need it. My experience at e2e.ti.com is that many only think they need it, while they actually need better programming skills and more imagination :) And even if so, often putting the functions far is good enough. Constant data, too, would be nice, but I agree
Re: [Mspgcc-users] Building MSPGCC from Source Code - The Build procedure
As noted in earlier email to this list, svn for mspgcc4 is left in its legacy state in support of the WASP project. Download either the release tarball of mspgcc4, or clone the git repository. (Sorry if there is still how-to documentation somewhere that implies otherwise. Somebody needs to take charge of new-user step-by-step instructions; my time is overcommitted on getting the two forks back into one. Again, this is why I want downstream packaging: so y'all don't *have* to build this stuff yourselves.)

The following is more than you need, but should add the packages required for building mspgcc.

Peter

apt-get install -y \
  emacs build-essential checkinstall \
  git git-core texinfo \
  file patch perl python autoconf automake autotools-dev \
  dh-make debhelper devscripts fakeroot gnupg xutils-dev \
  lintian pbuilder patchutils \
  ncurses-dev libz-dev tk8.4-dev

On Thu, Aug 12, 2010 at 3:37 AM, Errol errol.kow...@gmail.com wrote:

Thanks for your help, Peter. I didn't realise that the root PATH would be different to the user PATH, so after trying to learn how to work around that, I took your advice at (c) and tried building mspgcc4, following the procedure at http://mspgcc4.sourceforge.net/

What could go wrong? It's only 3 lines:

~$ svn checkout https://mspgcc4.svn.sourceforge.net/svnroot/mspgcc4
checked out revision 139
~$ cd mspgcc4
~/mspgcc4$ sh buildgcc.sh

I accepted all of the default choices and started the build right away, with the result:

Selected GCC 3.2.3
GDB version: 7.0.1
Insight version: 6.8-1
Target location: /opt/msp430-gcc-3.2.3
Binary package name: msp430-gcc-3.2.3_gdb_7.0.1.tar.bz2
- Do you want to start build right now? (y/n) [n] y
Running sh do-binutils.sh /opt/msp430-gcc-3.2.3 2.20.1 http://ftp.uni-kl.de build
=== makeinfo is missing from path, but required for the binutils build. Please install texinfo. Aborting. ===
sh do-binutils.sh /opt/msp430-gcc-4.4.3 2.20.1 http://ftp.uni-kl.de build exited with status code 1.
Failed to execute sh do-binutils.sh /opt/msp430-gcc-4.4.3 2.20.1 http://ftp.uni-kl.de build at ./buildgcc.pl line 247, STDIN line 9.

Where is makeinfo? What should PATH be? What should the root PATH be?

Errol

On Wed, 2010-08-11 at 18:08 -0500, Peter Bigot wrote:

(a) For a ticketing approach, it might be better to create tickets on the sourceforge project instead of emailing everybody with each one. Though I don't know whether anybody's going to maintain the how-to for the existing mspgcc infrastructure; my preference is to leave packaging to downstream and just provide the patches.

(b) Not sure what the last problem was; the failure to find msp430-ar? Probably you installed binutils in a directory that is not in your PATH. sudo can contribute to confusion there, since the path it uses is not the same as the path you've got in your main shell; contrast msp430-ar -v with sudo msp430-ar -v.

(c) You might be better off using mspgcc4 until we get the two projects back together. Run the buildgcc.pl script it comes with.

Peter

On Wed, Aug 11, 2010 at 5:43 PM, Errol errol.kow...@gmail.com wrote:

Being a newbie to this, I wasn't sure if each problem should be treated as something unique, or as part of the wider picture. I opted for the former, and more of a ticketing approach. Please accept my humblest apologies for erring on the wrong side. However, that said, the last (unanswered) problem has, in my opinion, little to do with typos and the source code build, and more likely with the compiler I'm using in my Ubuntu Jaunty installation. This one is the show stopper. Anyone got any ideas?
- regards, Errol Kowald
Engenia Pty Ltd, 2 Clear St., Palmerston, ACT 2913, Australia pH: +612 6242 0351

On Wed, 2010-08-11 at 16:00 -0400, John Porubek wrote:

(hope no-one is too put out by my pedantic nit-picking)

I've got no problems with the nit-picking, but I am a little put off by small changes to each subsequent Subject line, so that each message shows up as the start of a new thread (instead of being properly nested as a single thread).

Hope no one is too put off by my curmudgeonly response,

John