Re: [OMPI users] Problems building Open MPI 1.4.1 with Pathscale

2010-02-18 Thread Jeff Squyres
Thanks George. I assume we need this in 1.4.2 and 1.5, right? On Feb 17, 2010, at 6:15 PM, George Bosilca wrote: > I usually prefer the expanded notation: > > unsigned char ret; > __asm__ __volatile__ ( > "lock; cmpxchgl %3,%4 \n\t" >

Re: [OMPI users] Problems building Open MPI 1.4.1 with Pathscale

2010-02-17 Thread George Bosilca
I usually prefer the expanded notation: unsigned char ret; __asm__ __volatile__ ( "lock; cmpxchgl %3,%4 \n\t" "sete %0 \n\t" : "=qm" (ret), "=a" (oldval), "=m" (*addr)
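The expanded notation George quotes above, filled out into a compilable sketch (x86/x86_64 with GCC-style inline asm). The operand list is reconstructed from the fragments quoted in this thread, so treat it as an illustration rather than the exact committed patch:

```c
#include <stdint.h>

/* Compare-and-swap: if *addr == oldval, store newval into *addr.
 * Returns nonzero on success. Reconstructed from the thread's
 * fragments; x86/x86_64 with GCC-compatible inline asm only. */
static inline int opal_atomic_cmpset_32(volatile int32_t *addr,
                                        int32_t oldval, int32_t newval)
{
    unsigned char ret;
    __asm__ __volatile__ (
        "lock; cmpxchgl %3,%4   \n\t"  /* if (eax == *addr) *addr = newval */
        "sete %0                \n\t"  /* ret = ZF (1 if the swap happened) */
        : "=qm" (ret), "=a" (oldval), "=m" (*addr)
        : "q" (newval), "m" (*addr), "1" (oldval)
        : "memory", "cc");             /* the clobbers discussed below */
    return (int) ret;
}
```

The "memory" and "cc" clobbers are the point of contention later in the thread: they tell the compiler that memory may have changed and that the flags register is dirtied, which prevents reordering around the atomic.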

Re: [OMPI users] Problems building Open MPI 1.4.1 with Pathscale

2010-02-10 Thread Ake Sandgren
On Wed, 2010-02-10 at 08:42 -0700, Barrett, Brian W wrote: > Adding the memory and cc will certainly do no harm, and someone tried to > remove them as an optimization. I wouldn't change the input and output lines > - the differences are mainly syntactic sugar. Gcc actually didn't like the

Re: [OMPI users] Problems building Open MPI 1.4.1 with Pathscale

2010-02-10 Thread Barrett, Brian W
Adding the memory and cc will certainly do no harm, and someone tried to remove them as an optimization. I wouldn't change the input and output lines - the differences are mainly syntactic sugar. Brian On Feb 10, 2010, at 7:04 AM, Ake Sandgren wrote: > On Wed, 2010-02-10 at 08:21 -0500, Jeff

Re: [OMPI users] Problems building Open MPI 1.4.1 with Pathscale

2010-02-10 Thread Ake Sandgren
On Wed, 2010-02-10 at 08:21 -0500, Jeff Squyres wrote: > On Feb 10, 2010, at 7:47 AM, Ake Sandgren wrote: > > > According to people who know asm statements fairly well (compiler > > developers), it should be > > > static inline int opal_atomic_cmpset_32( volatile int32_t *addr, > >

Re: [OMPI users] Problems building Open MPI 1.4.1 with Pathscale

2010-02-10 Thread Jeff Squyres
On Feb 10, 2010, at 7:47 AM, Ake Sandgren wrote: > According to people who know asm statements fairly well (compiler > developers), it should be > static inline int opal_atomic_cmpset_32( volatile int32_t *addr, > int32_t oldval, int32_t newval) > { >

Re: [OMPI users] Problems building Open MPI 1.4.1 with Pathscale

2010-02-10 Thread Ake Sandgren
On Tue, 2010-02-09 at 14:44 -0800, Mostyn Lewis wrote: > The old opal_atomic_cmpset_32 worked: > > static inline int opal_atomic_cmpset_32( volatile int32_t *addr, > unsigned char ret; > __asm__ __volatile__ ( > SMPLOCK "cmpxchgl %1,%2 \n\t" >

Re: [OMPI users] Problems building Open MPI 1.4.1 with Pathscale

2010-02-10 Thread Terry Dontje
Jeff Squyres wrote: Iain did the genius for the new assembly. Iain -- can you respond? Iain is on vacation right now so he probably won't be able to respond until next week. --td On Feb 9, 2010, at 5:44 PM, Mostyn Lewis wrote: The old opal_atomic_cmpset_32 worked: static inline int

Re: [OMPI users] Problems building Open MPI 1.4.1 with Pathscale

2010-02-09 Thread Iain Bason
Well, I am by no means an expert on the GNU-style asm directives. I believe someone else (George Bosilca?) tweaked what I had suggested. That being said, I think the memory "clobber" is harmless. Iain On Feb 9, 2010, at 5:51 PM, Jeff Squyres wrote: Iain did the genius for the new

Re: [OMPI users] Problems building Open MPI 1.4.1 with Pathscale

2010-02-09 Thread Jeff Squyres
Iain did the genius for the new assembly. Iain -- can you respond? On Feb 9, 2010, at 5:44 PM, Mostyn Lewis wrote: > The old opal_atomic_cmpset_32 worked: > > static inline int opal_atomic_cmpset_32( volatile int32_t *addr, > unsigned char ret; > __asm__ __volatile__ ( >

Re: [OMPI users] Problems building Open MPI 1.4.1 with Pathscale

2010-02-09 Thread Mostyn Lewis
The old opal_atomic_cmpset_32 worked: static inline int opal_atomic_cmpset_32( volatile int32_t *addr, unsigned char ret; __asm__ __volatile__ ( SMPLOCK "cmpxchgl %1,%2 \n\t" "sete %0 \n\t" : "=qm"

Re: [OMPI users] Problems building Open MPI 1.4.1 with Pathscale

2010-02-09 Thread Åke Sandgren
On Tue, 2010-02-09 at 13:42 -0500, Jeff Squyres wrote: > Perhaps someone with a pathscale compiler support contract can investigate > this with them. > > Have them contact us if they want/need help understanding our atomics; we're > happy to explain, etc. (the atomics are fairly localized to a

Re: [OMPI users] Problems building Open MPI 1.4.1 with Pathscale

2010-02-09 Thread Jeff Squyres
Perhaps someone with a pathscale compiler support contract can investigate this with them. Have them contact us if they want/need help understanding our atomics; we're happy to explain, etc. (the atomics are fairly localized to a small part of OMPI). On Feb 9, 2010, at 11:42 AM, Mostyn

Re: [OMPI users] Problems building Open MPI 1.4.1 with Pathscale

2010-02-09 Thread Mostyn Lewis
All, FWIW, Pathscale is dying in the new atomics in 1.4.1 (and svn trunk) - actually looping - from gdb: opal_progress_event_users_decrement () at ../.././opal/include/opal/sys/atomic_impl.h:61 61 } while (0 == opal_atomic_cmpset_32(addr, oldval, oldval - delta)); Current language:
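The gdb frame above shows the retry-loop pattern that spins forever when the compiler miscompiles the cmpset primitive. A minimal sketch of that loop, modeled here with GCC's `__sync_bool_compare_and_swap` builtin rather than OMPI's own asm (the function name is borrowed from the backtrace, so it is illustrative only):

```c
#include <stdint.h>

/* Atomic subtract built on compare-and-swap, mirroring the loop in
 * opal_progress_event_users_decrement(). Returns the new value.
 * Uses the GCC __sync builtin in place of OMPI's inline-asm cmpset. */
static int32_t opal_atomic_sub_32(volatile int32_t *addr, int delta)
{
    int32_t oldval;
    do {
        oldval = *addr;
        /* Retry if another thread changed *addr between the read and
         * the swap. If the cmpset primitive always reports failure
         * (the miscompilation described above), this loop never exits. */
    } while (0 == __sync_bool_compare_and_swap(addr, oldval, oldval - delta));
    return oldval - delta;
}
```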

Re: [OMPI users] Problems building Open MPI 1.4.1 with Pathscale

2010-02-09 Thread Ake Sandgren
On Tue, 2010-02-09 at 08:49 -0500, Jeff Squyres wrote: > FWIW, I have had terrible luck with the Pathscale compiler over the years. > Repeated attempts to get support from them -- even when I was a paying > customer -- resulted in no help (e.g., a pathCC bug with the OMPI C++ > bindings that I

Re: [OMPI users] Problems building Open MPI 1.4.1 with Pathscale

2010-02-09 Thread Jeff Squyres
FWIW, I have had terrible luck with the Pathscale compiler over the years. Repeated attempts to get support from them -- even when I was a paying customer -- resulted in no help (e.g., a pathCC bug with the OMPI C++ bindings that I filed years ago was never resolved). Is this compiler even

Re: [OMPI users] Problems building Open MPI 1.4.1 with Pathscale

2010-02-08 Thread Rafael Arco Arredondo
Hello, It does work with version 1.4. This is the hello world that hangs with 1.4.1: #include <stdio.h> #include <mpi.h> int main(int argc, char **argv) { int node, size; MPI_Init(&argc, &argv); MPI_Comm_rank(MPI_COMM_WORLD, &node); MPI_Comm_size(MPI_COMM_WORLD, &size); printf("Hello World from Node %d of %d.\n", node,

Re: [OMPI users] Problems building Open MPI 1.4.1 with Pathscale

2010-01-25 Thread Åke Sandgren
1 - Do you have problems with openmpi 1.4 too? (I don't, haven't built 1.4.1 yet) 2 - There is a bug in the pathscale compiler with -fPIC and -g that generates incorrect dwarf2 data so debuggers get really confused and will have BIG problems debugging the code. I'm chasing them to get a fix... 3 -

Re: [OMPI users] Problems building Open MPI 1.4.1 with Pathscale

2010-01-25 Thread Jeff Squyres
I'm afraid I don't have any clues offhand. We *have* had problems with the Pathscale compiler in the past that were never resolved by their support crew. However, they were of the "variables weren't initialized and the process generally aborts" kind of failure, not a "persistent hang" kind of