Re: [OMPI devel] === CREATE FAILURE (trunk) ===

2014-07-28 Thread Ralph Castain
Fix is coming

On Jul 28, 2014, at 6:11 PM, MPI Team  wrote:

> 
> ERROR: Command returned a non-zero exist status (trunk):
>   make -j 8 distcheck
> 
> Start time: Mon Jul 28 21:05:01 EDT 2014
> End time:   Mon Jul 28 21:11:02 EDT 2014
> 
> ===
> [... previous lines snipped ...]
>  CC   class/opal_ring_buffer.lo
>  CC   class/opal_rb_tree.lo
>  CC   class/ompi_free_list.lo
>  CC   memoryhooks/memory.lo
>  CC   runtime/opal_progress.lo
>  CC   runtime/opal_finalize.lo
>  CC   runtime/opal_init.lo
>  CC   runtime/opal_params.lo
>  CC   runtime/opal_cr.lo
>  CC   runtime/opal_info_support.lo
>  CC   runtime/opal_progress_threads.lo
>  CC   threads/condition.lo
>  CC   threads/mutex.lo
>  CC   threads/thread.lo
>  CC   threads/tsd.lo
>  CC   dss/dss_internal_functions.lo
>  CC   dss/dss_compare.lo
>  CC   dss/dss_copy.lo
>  CC   dss/dss_dump.lo
>  CC   dss/dss_load_unload.lo
>  CC   dss/dss_lookup.lo
>  CC   dss/dss_pack.lo
>  CC   dss/dss_peek.lo
>  CC   dss/dss_print.lo
>  CC   dss/dss_register.lo
>  CC   dss/dss_unpack.lo
>  CC   dss/dss_open_close.lo
>  CCLD libopen-pal.la
> make[3]: Leaving directory 
> `/home/mpiteam/openmpi/nightly-tarball-build-root/trunk/create-r32335/ompi/openmpi-1.9a1r32335/_build/opal'
> Making all in mca/common/sm
> make[3]: Entering directory 
> `/home/mpiteam/openmpi/nightly-tarball-build-root/trunk/create-r32335/ompi/openmpi-1.9a1r32335/_build/opal/mca/common/sm'
>  CC   common_sm.lo
>  LN_S libmca_common_sm.la
> ../../../../../opal/mca/common/sm/common_sm.c:54:27: error: common_sm_rml.h: 
> No such file or directory
> ../../../../../opal/mca/common/sm/common_sm.c: In function 'attach_and_init':
> ../../../../../opal/mca/common/sm/common_sm.c:83: warning: implicit 
> declaration of function 'opal_shmem_segment_attach'
> ../../../../../opal/mca/common/sm/common_sm.c:83: warning: cast to pointer 
> from integer of different size
> ../../../../../opal/mca/common/sm/common_sm.c:90: warning: implicit 
> declaration of function 'opal_shmem_segment_detach'
> ../../../../../opal/mca/common/sm/common_sm.c:96: warning: implicit 
> declaration of function 'opal_shmem_ds_copy'
> ../../../../../opal/mca/common/sm/common_sm.c: In function 
> 'mca_common_sm_module_create_and_attach':
> ../../../../../opal/mca/common/sm/common_sm.c:172: warning: implicit 
> declaration of function 'opal_shmem_segment_create'
> ../../../../../opal/mca/common/sm/common_sm.c: In function 
> 'mca_common_sm_module_unlink':
> ../../../../../opal/mca/common/sm/common_sm.c:208: warning: implicit 
> declaration of function 'opal_shmem_unlink'
> make[3]: *** [common_sm.lo] Error 1
> make[3]: Leaving directory 
> `/home/mpiteam/openmpi/nightly-tarball-build-root/trunk/create-r32335/ompi/openmpi-1.9a1r32335/_build/opal/mca/common/sm'
> make[2]: *** [all-recursive] Error 1
> make[2]: Leaving directory 
> `/home/mpiteam/openmpi/nightly-tarball-build-root/trunk/create-r32335/ompi/openmpi-1.9a1r32335/_build/opal'
> make[1]: *** [all-recursive] Error 1
> make[1]: Leaving directory 
> `/home/mpiteam/openmpi/nightly-tarball-build-root/trunk/create-r32335/ompi/openmpi-1.9a1r32335/_build'
> make: *** [distcheck] Error 1
> ===
> 
> Your friendly daemon,
> Cyrador
> ___
> testing mailing list
> test...@open-mpi.org
> http://www.open-mpi.org/mailman/listinfo.cgi/testing



Re: [OMPI devel] === CREATE FAILURE (trunk) ===

2012-08-01 Thread Jeff Squyres
r26918 was the previous nightly tarball -- the create failure was in making the 
current nightly tarball.  So the failure was caused by something that happened 
during the day yesterday (e.g., someone forgot to add a header file to a 
Makefile.am).

I think Ralph fixed it in r26934.  I'll spin up a new tarball now, just to 
verify...
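
For anyone reproducing this outside the nightly environment, the check that failed boils down to Automake's distcheck target; a rough local equivalent (treat this as a sketch -- the regeneration script's name varies across Open MPI versions):

```shell
# Sketch only -- run from a trunk checkout.
./autogen.sh          # regenerate configure/Makefile.in from Makefile.am
./configure
make -j 8 distcheck   # rolls the tarball, then configures, builds, and
                      # checks from that tarball in a scratch directory,
                      # which is where a header missing from a Makefile.am
                      # first surfaces as "No such file or directory"
```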


On Aug 1, 2012, at 12:43 AM, Paul Hargrove wrote:

> I've just built from openmpi-1.9a1r26918.tar.bz2 on a Linux/x86-64 machine 
> w/o any problems.
> So, that would seem to discount something missing from the tarball.
> 
> -Paul
> 
> On Tue, Jul 31, 2012 at 9:07 PM, Ralph Castain  wrote:
> Agreed - but I checked and didn't see anything missing.  Will take 
> another look...
> 
> On Jul 31, 2012, at 9:04 PM, Paul Hargrove  wrote:
> 
>> The most likely reason for a "distcheck" to fail in this manner when a 
>> checkout is fine would be a header not getting included in the tarball due 
>> to some omission from Makefile.am
>> 
>> -Paul
>> 
>> On Tue, Jul 31, 2012 at 9:00 PM, Ralph Castain  wrote:
>> I'm not sure what to make of this one. I checked the code out and built it 
>> just fine on a linux box, including watching this file build. For whatever 
>> reason, the tarball maker just didn't find some include file?
>> 
>> 
>> On Jul 31, 2012, at 6:33 PM, MPI Team  wrote:
>> 
>> >
>> > ERROR: Command returned a non-zero exist status (trunk):
>> >   make distcheck
>> >
>> > Start time: Tue Jul 31 21:00:01 EDT 2012
>> > End time:   Tue Jul 31 21:33:05 EDT 2012
>> >
>> > ===
>> > [... previous lines snipped ...]
>> > ../../../../../ompi/mca/io/ompio/io_ompio_nbc.c:453: error: `buf' 
>> > undeclared (first use in this function)
>> > ../../../../../ompi/mca/io/ompio/io_ompio_nbc.c:454: error: `max_data' 
>> > undeclared (first use in this function)
>> > ../../../../../ompi/mca/io/ompio/io_ompio_nbc.c:455: error: `iov' 
>> > undeclared (first use in this function)
>> > ../../../../../ompi/mca/io/ompio/io_ompio_nbc.c:456: error: `iovec_count' 
>> > undeclared (first use in this function)
>> > ../../../../../ompi/mca/io/ompio/io_ompio_nbc.c: At top level:
>> > ../../../../../ompi/mca/io/ompio/io_ompio_nbc.c:466: error: syntax error 
>> > before "ompi_file_t"
>> > ../../../../../ompi/mca/io/ompio/io_ompio_nbc.c: In function 
>> > `mca_io_ompio_datatype_is_contiguous':
>> > ../../../../../ompi/mca/io/ompio/io_ompio_nbc.c:468: error: 
>> > `mca_io_ompio_data_t' undeclared (first use in this function)
>> > ../../../../../ompi/mca/io/ompio/io_ompio_nbc.c:468: error: `data' 
>> > undeclared (first use in this function)
>> > ../../../../../ompi/mca/io/ompio/io_ompio_nbc.c:469: error: 
>> > `mca_io_ompio_file_t' undeclared (first use in this function)
>> > ../../../../../ompi/mca/io/ompio/io_ompio_nbc.c:469: error: `fh' 
>> > undeclared (first use in this function)
>> > ../../../../../ompi/mca/io/ompio/io_ompio_nbc.c:471: error: syntax error 
>> > before ')' token
>> > ../../../../../ompi/mca/io/ompio/io_ompio_nbc.c:474: error: `datatype' 
>> > undeclared (first use in this function)
>> > ../../../../../ompi/mca/io/ompio/io_ompio_nbc.c:475: error: 
>> > `OMPIO_CONTIGUOUS_MEMORY' undeclared (first use in this function)
>> > ../../../../../ompi/mca/io/ompio/io_ompio_nbc.c: At top level:
>> > ../../../../../ompi/mca/io/ompio/io_ompio_nbc.c:483: error: syntax error 
>> > before '*' token
>> > ../../../../../ompi/mca/io/ompio/io_ompio_nbc.c: In function 
>> > `mca_io_ompio_set_aggregator_props':
>> > ../../../../../ompi/mca/io/ompio/io_ompio_nbc.c:488: error: 
>> > `mca_io_ompio_data_t' undeclared (first use in this function)
>> > ../../../../../ompi/mca/io/ompio/io_ompio_nbc.c:488: error: `data' 
>> > undeclared (first use in this function)
>> > ../../../../../ompi/mca/io/ompio/io_ompio_nbc.c:489: error: 
>> > `mca_io_ompio_file_t' undeclared (first use in this function)
>> > ../../../../../ompi/mca/io/ompio/io_ompio_nbc.c:489: error: `fh' 
>> > undeclared (first use in this function)
>> > ../../../../../ompi/mca/io/ompio/io_ompio_nbc.c:491: error: syntax error 
>> > before ')' token
>> > ../../../../../ompi/mca/io/ompio/io_ompio_nbc.c:494: error: 
>> > `num_aggregators' undeclared (first use in this function)
>> > ../../../../../ompi/mca/io/ompio/io_ompio_nbc.c:495: error: 
>> > `bytes_per_proc' undeclared (first use in this function)
>> > ../../../../../ompi/mca/io/ompio/io_ompio_nbc.c: At top level:
>> > ../../../../../ompi/mca/io/ompio/io_ompio_nbc.c:504: error: syntax error 
>> > before '*' token
>> > ../../../../../ompi/mca/io/ompio/io_ompio_nbc.c: In function 
>> > `mca_io_ompio_generate_current_file_view':
>> > ../../../../../ompi/mca/io/ompio/io_ompio_nbc.c:510: error: 
>> > `mca_io_ompio_data_t' undeclared (first use in this function)
>> > ../../../../../ompi/mca/io/ompio/io_ompio_nbc.c:510: error: `data' 
>> > undeclared (first use in this function)
>> > ../../../../../ompi/mca/io/ompio/io_ompio_nbc.c:511: 

Re: [OMPI devel] === CREATE FAILURE (trunk) ===

2012-08-01 Thread Paul Hargrove
I've just built from openmpi-1.9a1r26918.tar.bz2 on a Linux/x86-64 machine
w/o any problems.
So, that would seem to discount something missing from the tarball.

-Paul

On Tue, Jul 31, 2012 at 9:07 PM, Ralph Castain  wrote:

> Agreed - but I checked and didn't see anything missing.  Will take
> another look...
>
> On Jul 31, 2012, at 9:04 PM, Paul Hargrove  wrote:
>
> The most likely reason for a "distcheck" to fail in this manner when a
> checkout is fine would be a header not getting included in the tarball due
> to some omission from Makefile.am
>
> -Paul
>
> On Tue, Jul 31, 2012 at 9:00 PM, Ralph Castain  wrote:
>
>> I'm not sure what to make of this one. I checked the code out and built
>> it just fine on a linux box, including watching this file build. For
>> whatever reason, the tarball maker just didn't find some include file?
>>
>>
>> On Jul 31, 2012, at 6:33 PM, MPI Team  wrote:
>>
>> >
>> > ERROR: Command returned a non-zero exist status (trunk):
>> >   make distcheck
>> >
>> > Start time: Tue Jul 31 21:00:01 EDT 2012
>> > End time:   Tue Jul 31 21:33:05 EDT 2012
>> >
>> > ===
>> > [... previous lines snipped ...]
> > [... identical compiler output snipped; see the first quoted copy of
> > this log earlier in the digest ...]

Re: [OMPI devel] === CREATE FAILURE (trunk) ===

2012-08-01 Thread Ralph Castain
Agreed - but I checked and didn't see anything missing.  Will take 
another look...

On Jul 31, 2012, at 9:04 PM, Paul Hargrove  wrote:

> The most likely reason for a "distcheck" to fail in this manner when a 
> checkout is fine would be a header not getting included in the tarball due to 
> some omission from Makefile.am
> 
> -Paul
> 
> On Tue, Jul 31, 2012 at 9:00 PM, Ralph Castain  wrote:
> I'm not sure what to make of this one. I checked the code out and built it 
> just fine on a linux box, including watching this file build. For whatever 
> reason, the tarball maker just didn't find some include file?
> 
> 
> On Jul 31, 2012, at 6:33 PM, MPI Team  wrote:
> 
> >
> > ERROR: Command returned a non-zero exist status (trunk):
> >   make distcheck
> >
> > Start time: Tue Jul 31 21:00:01 EDT 2012
> > End time:   Tue Jul 31 21:33:05 EDT 2012
> >
> > ===
> > [... previous lines snipped ...]
> > [... identical compiler output snipped; see the first quoted copy of
> > this log earlier in the digest ...]

Re: [OMPI devel] === CREATE FAILURE (trunk) ===

2012-08-01 Thread Paul Hargrove
The most likely reason for a "distcheck" to fail in this manner when a
checkout is fine would be a header not getting included in the tarball due
to some omission from Makefile.am

-Paul
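
As an illustration of the failure mode described above (file and variable names below are hypothetical, not taken from the actual commit):

```makefile
# Hypothetical Makefile.am fragment.  A plain "make" in a checkout still
# compiles, because the compiler finds foo_internal.h in the source tree;
# but "make dist" only ships files automake knows about, so distcheck's
# build from the tarball fails with "No such file or directory".
libfoo_la_SOURCES = \
        foo.c \
        foo.h
# Missing: foo_internal.h must also appear in _SOURCES (or in
# noinst_HEADERS / EXTRA_DIST) for it to land in the tarball.
```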

On Tue, Jul 31, 2012 at 9:00 PM, Ralph Castain  wrote:

> I'm not sure what to make of this one. I checked the code out and built it
> just fine on a linux box, including watching this file build. For whatever
> reason, the tarball maker just didn't find some include file?
>
>
> On Jul 31, 2012, at 6:33 PM, MPI Team  wrote:
>
> >
> > ERROR: Command returned a non-zero exist status (trunk):
> >   make distcheck
> >
> > Start time: Tue Jul 31 21:00:01 EDT 2012
> > End time:   Tue Jul 31 21:33:05 EDT 2012
> >
> > ===
> > [... previous lines snipped ...]
> > [... identical compiler output snipped; see the first quoted copy of
> > this log earlier in the digest ...]

Re: [OMPI devel] === CREATE FAILURE (trunk) ===

2012-08-01 Thread Ralph Castain
I'm not sure what to make of this one. I checked the code out and built it just 
fine on a linux box, including watching this file build. For whatever reason, 
the tarball maker just didn't find some include file?


On Jul 31, 2012, at 6:33 PM, MPI Team  wrote:

> 
> ERROR: Command returned a non-zero exist status (trunk):
>   make distcheck
> 
> Start time: Tue Jul 31 21:00:01 EDT 2012
> End time:   Tue Jul 31 21:33:05 EDT 2012
> 
> ===
> [... previous lines snipped ...]
> [... identical compiler output snipped; see the first quoted copy of
> this log earlier in the digest ...]

Re: [OMPI devel] === CREATE FAILURE (trunk) ===

2011-10-20 Thread Ralph Castain
regenerating now...

On Oct 20, 2011, at 7:14 PM, MPI Team wrote:

> 
> ERROR: Command returned a non-zero exist status (trunk):
>   make distcheck
> 
> Start time: Thu Oct 20 21:00:02 EDT 2011
> End time:   Thu Oct 20 21:14:13 EDT 2011
> 
> ===
> [... previous lines snipped ...]
> make[2]: Entering directory 
> `/home/mpiteam/openmpi/nightly-tarball-build-root/trunk/create-r25345/ompi/ompi/mca/vprotocol'
> make[2]: Leaving directory 
> `/home/mpiteam/openmpi/nightly-tarball-build-root/trunk/create-r25345/ompi/ompi/mca/vprotocol'
> (cd mca/common/mx && make  top_distdir=../../../../openmpi-1.7a1r25345 
> distdir=../../../../openmpi-1.7a1r25345/ompi/mca/common/mx \
> am__remove_distdir=: am__skip_length_check=: am__skip_mode_fix=: distdir)
> make[2]: Entering directory 
> `/home/mpiteam/openmpi/nightly-tarball-build-root/trunk/create-r25345/ompi/ompi/mca/common/mx'
> make[2]: Leaving directory 
> `/home/mpiteam/openmpi/nightly-tarball-build-root/trunk/create-r25345/ompi/ompi/mca/common/mx'
> (cd mca/common/cuda && make  top_distdir=../../../../openmpi-1.7a1r25345 
> distdir=../../../../openmpi-1.7a1r25345/ompi/mca/common/cuda \
> am__remove_distdir=: am__skip_length_check=: am__skip_mode_fix=: distdir)
> make[2]: Entering directory 
> `/home/mpiteam/openmpi/nightly-tarball-build-root/trunk/create-r25345/ompi/ompi/mca/common/cuda'
> make[2]: Leaving directory 
> `/home/mpiteam/openmpi/nightly-tarball-build-root/trunk/create-r25345/ompi/ompi/mca/common/cuda'
> (cd mca/common/sm && make  top_distdir=../../../../openmpi-1.7a1r25345 
> distdir=../../../../openmpi-1.7a1r25345/ompi/mca/common/sm \
> am__remove_distdir=: am__skip_length_check=: am__skip_mode_fix=: distdir)
> make[2]: Entering directory 
> `/home/mpiteam/openmpi/nightly-tarball-build-root/trunk/create-r25345/ompi/ompi/mca/common/sm'
> make[2]: Leaving directory 
> `/home/mpiteam/openmpi/nightly-tarball-build-root/trunk/create-r25345/ompi/ompi/mca/common/sm'
> (cd mca/common/portals && make  top_distdir=../../../../openmpi-1.7a1r25345 
> distdir=../../../../openmpi-1.7a1r25345/ompi/mca/common/portals \
> am__remove_distdir=: am__skip_length_check=: am__skip_mode_fix=: distdir)
> make[2]: Entering directory 
> `/home/mpiteam/openmpi/nightly-tarball-build-root/trunk/create-r25345/ompi/ompi/mca/common/portals'
> make[2]: Leaving directory 
> `/home/mpiteam/openmpi/nightly-tarball-build-root/trunk/create-r25345/ompi/ompi/mca/common/portals'
> (cd mca/allocator/bucket && make  top_distdir=../../../../openmpi-1.7a1r25345 
> distdir=../../../../openmpi-1.7a1r25345/ompi/mca/allocator/bucket \
> am__remove_distdir=: am__skip_length_check=: am__skip_mode_fix=: distdir)
> make[2]: Entering directory 
> `/home/mpiteam/openmpi/nightly-tarball-build-root/trunk/create-r25345/ompi/ompi/mca/allocator/bucket'
> make[2]: Leaving directory 
> `/home/mpiteam/openmpi/nightly-tarball-build-root/trunk/create-r25345/ompi/ompi/mca/allocator/bucket'
> (cd mca/allocator/basic && make  top_distdir=../../../../openmpi-1.7a1r25345 
> distdir=../../../../openmpi-1.7a1r25345/ompi/mca/allocator/basic \
> am__remove_distdir=: am__skip_length_check=: am__skip_mode_fix=: distdir)
> make[2]: Entering directory 
> `/home/mpiteam/openmpi/nightly-tarball-build-root/trunk/create-r25345/ompi/ompi/mca/allocator/basic'
> make[2]: Leaving directory 
> `/home/mpiteam/openmpi/nightly-tarball-build-root/trunk/create-r25345/ompi/ompi/mca/allocator/basic'
> (cd mca/bml/r2 && make  top_distdir=../../../../openmpi-1.7a1r25345 
> distdir=../../../../openmpi-1.7a1r25345/ompi/mca/bml/r2 \
> am__remove_distdir=: am__skip_length_check=: am__skip_mode_fix=: distdir)
> make[2]: Entering directory 
> `/home/mpiteam/openmpi/nightly-tarball-build-root/trunk/create-r25345/ompi/ompi/mca/bml/r2'
> make[2]: Leaving directory 
> `/home/mpiteam/openmpi/nightly-tarball-build-root/trunk/create-r25345/ompi/ompi/mca/bml/r2'
> (cd mca/btl/self && make  top_distdir=../../../../openmpi-1.7a1r25345 
> distdir=../../../../openmpi-1.7a1r25345/ompi/mca/btl/self \
> am__remove_distdir=: am__skip_length_check=: am__skip_mode_fix=: distdir)
> make[2]: Entering directory 
> `/home/mpiteam/openmpi/nightly-tarball-build-root/trunk/create-r25345/ompi/ompi/mca/btl/self'
> make[2]: Leaving directory 
> `/home/mpiteam/openmpi/nightly-tarball-build-root/trunk/create-r25345/ompi/ompi/mca/btl/self'
> (cd mca/btl/mx && make  top_distdir=../../../../openmpi-1.7a1r25345 
> distdir=../../../../openmpi-1.7a1r25345/ompi/mca/btl/mx \
> am__remove_distdir=: am__skip_length_check=: am__skip_mode_fix=: distdir)
> make[2]: Entering directory 
> `/home/mpiteam/openmpi/nightly-tarball-build-root/trunk/create-r25345/ompi/ompi/mca/btl/mx'
> make[2]: Leaving directory 
> `/home/mpiteam/openmpi/nightly-tarball-build-root/trunk/create-r25345/ompi/ompi/mca/btl/mx'
> (cd mca/btl/ofud && make  top_distdir=../../../../openmpi-1.7a1r25345 
> distdir=../../..

Re: [OMPI devel] === CREATE FAILURE (trunk) ===

2010-11-03 Thread Jeff Squyres
I confirm -- I was able to run "make distcheck" successfully on my systems.  I 
will kick off the nightly script right now to see if it works in the official 
build environment.


On Nov 3, 2010, at 12:12 PM, Shiqing Fan wrote:

> 
> Really sorry for the trouble. I forgot that opal_config.h has been included 
> only for Windows for the bool definition. 
> 
> It's already been reverted early today in the trunk.
> 
> 
> On 2010-11-3 4:11 PM, Barrett, Brian W wrote:
>> I'm pretty sure it was Shiqing's patch.  The problem is that OPAL_DECLSPEC 
>> was added to event.h to export a couple of symbols, but none of the libevent 
>> files include opal_config.h, so OPAL_DECLSPEC isn't properly defined on Unix 
>> systems.  I ran into this last night and was going to send an e-mail this 
>> morning, but it looks like the nightly build beat me to it.  Unfortunately, 
>> I don't have time to fix the bug.
>> 
>> Brian
>> 
>> On Nov 3, 2010, at 7:20 AM, Ralph Castain wrote:
>> 
>>> Hmmm...but it -did- work after your last change - we got a nice tarball 
>>> that night. Perhaps it was Shiqing's commit that broke it?
>>> 
>>> 
>>> On Wed, Nov 3, 2010 at 4:36 AM, Jeff Squyres (jsquyres) 
>>>  wrote:
>>> Yep - I get these mails, too. 
>>> 
>>> My only comment is: %}%%€<>~|%>€€!!!
>>> 
>>> I swear I actually do test these things and they *do* work before I commit 
>>> them. There must be some difference between my env and the nightly creation 
>>> env. I'll investigate...
>>> 
>>> Sent from my PDA. No type good. 
>>> 
>>> On Nov 3, 2010, at 2:12 AM, "Mike Dubman"  wrote:
>>> 
 
 Hi,
 ompi/trunk (r23985) build still fails with compilation errors (attached).
 
 Regards
 M
 
 On Mon, Nov 1, 2010 at 11:10 PM, Jeff Squyres  wrote:
 Sorry for the delay on this -- the issue was quite subtle and the holiday 
 weekend got in the way.
 
 I have a fix that will be committed a little after 6pm US Eastern.  It 
 seems to allow a fresh SVN checkout (with my patch applied) to pass "make 
 distcheck".  Hopefully we'll finally get a new trunk tarball tonight.
 
 
 On Oct 31, 2010, at 9:16 PM, MPI Team wrote:
 
 >
 > ERROR: Command returned a non-zero exit status (trunk):
 >   make distcheck
 >
 > Start time: Sun Oct 31 21:00:12 EDT 2010
 > End time:   Sun Oct 31 21:16:33 EDT 2010
 >
 > ===
 > [... previous lines snipped ...]
 > checking for OPAL CXXFLAGS... -pthread
 > checking for OPAL CXXFLAGS_PREFIX...
 > checking for OPAL LDFLAGS...
 > checking for OPAL LIBS... -ldl   -Wl,--export-dynamic -lrt -lnsl -lutil 
 > -lm -ldl
 > checking for OPAL extra include dirs...
 > checking for ORTE CPPFLAGS...
 > checking for ORTE CXXFLAGS... -pthread
 > checking for ORTE CXXFLAGS_PREFIX...
 > checking for ORTE CFLAGS... -pthread
 > checking for ORTE CFLAGS_PREFIX...
 > checking for ORTE LDFLAGS...
 > checking for ORTE LIBS...  -ldl   -Wl,--export-dynamic -lrt -lnsl -lutil 
 > -lm -ldl
 > checking for ORTE extra include dirs...
 > checking for OMPI CPPFLAGS...
 > checking for OMPI CFLAGS... -pthread
 > checking for OMPI CFLAGS_PREFIX...
 > checking for OMPI CXXFLAGS... -pthread
 > checking for OMPI CXXFLAGS_PREFIX...
 > checking for OMPI FFLAGS... -pthread
 > checking for OMPI FFLAGS_PREFIX...
 > checking for OMPI FCFLAGS... -pthread
 > checking for OMPI FCFLAGS_PREFIX...
 > checking for OMPI LDFLAGS...
 > checking for OMPI LIBS...   -ldl   -Wl,--export-dynamic -lrt -lnsl 
 > -lutil -lm -ldl
 > checking for OMPI extra include dirs...
 >
 > *** Final output
 > configure: creating ./config.status
 > config.status: creating ompi/include/ompi/version.h
 > config.status: creating orte/include/orte/version.h
 > config.status: creating opal/include/opal/version.h
 > config.status: creating opal/mca/backtrace/Makefile
 > config.status: creating opal/mca/backtrace/printstack/Makefile
 > config.status: creating opal/mca/backtrace/execinfo/Makefile
 > config.status: creating opal/mca/backtrace/darwin/Makefile
 > config.status: creating opal/mca/backtrace/none/Makefile
 > config.status: creating opal/mca/carto/Makefile
 > config.status: creating opal/mca/carto/auto_detect/Makefile
 > config.status: creating opal/mca/carto/file/Makefile
 > config.status: creating opal/mca/compress/Makefile
 > config.status: creating opal/mca/compress/gzip/Makefile
 > config.status: creating opal/mca/compress/bzip/Makefile
 > config.status: creating opal/mca/crs/Makefile
 > config.status: creating opal/mca/crs/none/Makefile
 > config.status: creating opal/mca/crs/self/Makefile
 > config.status: creating opal/mca/crs/blcr/Makefile
 > config.status: creating opal/mca/event/Makefile
 > config.status: creating op

Re: [OMPI devel] === CREATE FAILURE (trunk) ===

2010-11-03 Thread Shiqing Fan


Really sorry for the trouble. I forgot that opal_config.h is included only 
on Windows, for the bool definition.


It's already been reverted early today in the trunk.
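
A minimal sketch (with invented header names, not the actual OMPI files) of the pattern described above: when a config header is included only on Windows, anything it supplies -- here a bool fallback -- is silently missing from Unix builds unless it arrives some other way.

```c
#include <assert.h>

/* Invented names; not the real OMPI headers. */
#ifdef _WIN32
#  include "my_config.h"   /* on Windows, would supply a bool typedef */
#else
#  include <stdbool.h>     /* on Unix, bool must come from somewhere else */
#endif

/* Without the #else branch above, this line would fail to compile on
 * Unix, mirroring how a Windows-only opal_config.h include breaks Linux. */
bool always_true(void) { return true; }
```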



Re: [OMPI devel] === CREATE FAILURE (trunk) ===

2010-11-03 Thread Barrett, Brian W
I'm pretty sure it was Shiqing's patch.  The problem is that OPAL_DECLSPEC was 
added to event.h to export a couple of symbols, but none of the libevent files 
include opal_config.h, so OPAL_DECLSPEC isn't properly defined on Unix systems. 
 I ran into this last night and was going to send an e-mail this morning, but 
it looks like the nightly build beat me to it.  Unfortunately, I don't have 
time to fix the bug.

Brian
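
The failure mode Brian describes can be sketched in a few lines of C. The names below are invented stand-ins (MY_DECLSPEC rather than the real OPAL_DECLSPEC, and a dummy function instead of the libevent API); the point is only the mechanism: an export macro used in a header compiles fine when its defining config header is seen first, and turns into an unknown identifier when it is not.

```c
/* Stand-in for opal_config.h: the only place the export macro is defined. */
#define MY_DECLSPEC __attribute__((visibility("default")))

/* Stand-in for event.h. If this translation unit were compiled WITHOUT the
 * macro definition above -- which is what happened when the libevent files
 * did not include opal_config.h -- the next declaration fails with
 *   error: expected '=', ',', ';', 'asm' or '__attribute__' before 'int'
 * because MY_DECLSPEC is then just an unrecognized identifier. */
MY_DECLSPEC int ev_loop_once(void);

int ev_loop_once(void) { return 0; }
```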


Re: [OMPI devel] === CREATE FAILURE (trunk) ===

2010-11-03 Thread Ralph Castain
Hmmm...but it -did- work after your last change - we got a nice tarball that
night. Perhaps it was Shiqing's commit that broke it?



Re: [OMPI devel] === CREATE FAILURE (trunk) ===

2010-11-03 Thread Jeff Squyres (jsquyres)
Yep - I get these mails, too. 

My only comment is: %}%%€<>~|%>€€!!!

I swear I actually do test these things and they *do* work before I commit 
them. There must be some difference between my env and the nightly creation 
env. I'll investigate...

Sent from my PDA. No type good. 



Re: [OMPI devel] === CREATE FAILURE (trunk) ===

2010-11-03 Thread Mike Dubman
Hi,
ompi/trunk (r23985) build still fails with compilation errors (attached).

Regards
M

Making all in .
make[5]: Entering directory
   `/net/amd6/scrap/mtt/mtt-scratch/20101102_211254_12013/mpi-install/vFqD/
   src/trunk/opal/mca/event/libevent207/libevent'
  CC event.lo
  CC evthread.lo
  CC buffer.lo
  CC bufferevent.lo
  CC bufferevent_sock.lo
  CC bufferevent_filter.lo
  CC bufferevent_pair.lo
  CC listener.lo
  CC bufferevent_ratelim.lo
In file included from bufferevent_pair.c:39:
   ./include/event2/event.h:324: error: expected ‘=’, ‘,’, ‘;’, ‘asm’ or ‘__attribute__’ before ‘int’
   ./include/event2/event.h:472: error: expected ‘=’, ‘,’, ‘;’, ‘asm’ or ‘__attribute__’ before ‘int’
   ./include/event2/event.h:526: error: expected ‘=’, ‘,’, ‘;’, ‘asm’ or ‘__attribute__’ before ‘int’
   ./include/event2/event.h:539: error: expected ‘=’, ‘,’, ‘;’, ‘asm’ or ‘__attribute__’ before ‘int’
In file included from bufferevent_ratelim.c:34:
   ./include/event2/event.h:324: error: expected ‘=’, ‘,’, ‘;’, ‘asm’ or ‘__attribute__’ before ‘int’
   ./include/event2/event.h:472: error: expected ‘=’, ‘,’, ‘;’, ‘asm’ or ‘_

Re: [OMPI devel] === CREATE FAILURE (trunk) ===

2010-11-01 Thread Jeff Squyres
Sorry for the delay on this -- the issue was quite subtle and the holiday 
weekend got in the way.

I have a fix that will be committed a little after 6pm US Eastern.  It seems to 
allow a fresh SVN checkout (with my patch applied) to pass "make distcheck".  
Hopefully we'll finally get a new trunk tarball tonight.


On Oct 31, 2010, at 9:16 PM, MPI Team wrote:

> 
> ERROR: Command returned a non-zero exit status (trunk):
>   make distcheck
> 
> Start time: Sun Oct 31 21:00:12 EDT 2010
> End time:   Sun Oct 31 21:16:33 EDT 2010
> 
> ===
> [... previous lines snipped ...]
> checking for OPAL CXXFLAGS... -pthread 
> checking for OPAL CXXFLAGS_PREFIX...  
> checking for OPAL LDFLAGS...   
> checking for OPAL LIBS... -ldl   -Wl,--export-dynamic -lrt -lnsl -lutil -lm 
> -ldl 
> checking for OPAL extra include dirs... 
> checking for ORTE CPPFLAGS... 
> checking for ORTE CXXFLAGS... -pthread 
> checking for ORTE CXXFLAGS_PREFIX...  
> checking for ORTE CFLAGS... -pthread 
> checking for ORTE CFLAGS_PREFIX...  
> checking for ORTE LDFLAGS...
> checking for ORTE LIBS...  -ldl   -Wl,--export-dynamic -lrt -lnsl -lutil -lm 
> -ldl 
> checking for ORTE extra include dirs... 
> checking for OMPI CPPFLAGS... 
> checking for OMPI CFLAGS... -pthread 
> checking for OMPI CFLAGS_PREFIX...  
> checking for OMPI CXXFLAGS... -pthread 
> checking for OMPI CXXFLAGS_PREFIX...  
> checking for OMPI FFLAGS... -pthread 
> checking for OMPI FFLAGS_PREFIX...  
> checking for OMPI FCFLAGS... -pthread 
> checking for OMPI FCFLAGS_PREFIX...  
> checking for OMPI LDFLAGS... 
> checking for OMPI LIBS...   -ldl   -Wl,--export-dynamic -lrt -lnsl -lutil -lm 
> -ldl 
> checking for OMPI extra include dirs... 
> 
> *** Final output
> configure: creating ./config.status
> config.status: creating ompi/include/ompi/version.h
> config.status: creating orte/include/orte/version.h
> config.status: creating opal/include/opal/version.h
> config.status: creating opal/mca/backtrace/Makefile
> config.status: creating opal/mca/backtrace/printstack/Makefile
> config.status: creating opal/mca/backtrace/execinfo/Makefile
> config.status: creating opal/mca/backtrace/darwin/Makefile
> config.status: creating opal/mca/backtrace/none/Makefile
> config.status: creating opal/mca/carto/Makefile
> config.status: creating opal/mca/carto/auto_detect/Makefile
> config.status: creating opal/mca/carto/file/Makefile
> config.status: creating opal/mca/compress/Makefile
> config.status: creating opal/mca/compress/gzip/Makefile
> config.status: creating opal/mca/compress/bzip/Makefile
> config.status: creating opal/mca/crs/Makefile
> config.status: creating opal/mca/crs/none/Makefile
> config.status: creating opal/mca/crs/self/Makefile
> config.status: creating opal/mca/crs/blcr/Makefile
> config.status: creating opal/mca/event/Makefile
> config.status: creating opal/mca/event/libevent207/Makefile
> config.status: error: cannot find input file: 
> `opal/mca/event/libevent207/libevent/include/event2/event-config.h.in'
> make: *** [distcheck] Error 1
> ===
> 
> Your friendly daemon,
> Cyrador
> ___
> testing mailing list
> test...@open-mpi.org
> http://www.open-mpi.org/mailman/listinfo.cgi/testing


-- 
Jeff Squyres
jsquy...@cisco.com
For corporate legal information go to:
http://www.cisco.com/web/about/doing_business/legal/cri/




Re: [OMPI devel] === CREATE FAILURE (trunk) ===

2010-10-29 Thread Jeff Squyres
I have fixes for this, but they're .m4 changes (stupid VPATH stuff; sorry) -- 
so I'll commit them tonight after 6pm US Eastern.



On Oct 28, 2010, at 9:16 PM, MPI Team wrote:

> 
> ERROR: Command returned a non-zero exist status (trunk):
>   make distcheck
> 
> Start time: Thu Oct 28 21:00:05 EDT 2010
> End time:   Thu Oct 28 21:16:19 EDT 2010
> 
> ===
> [... previous lines snipped ...]
> checking for OPAL CXXFLAGS... -pthread 
> checking for OPAL CXXFLAGS_PREFIX...  
> checking for OPAL LDFLAGS...   
> checking for OPAL LIBS... -ldl   -Wl,--export-dynamic -lrt -lnsl -lutil -lm 
> -ldl 
> checking for OPAL extra include dirs... 
> checking for ORTE CPPFLAGS... 
> checking for ORTE CXXFLAGS... -pthread 
> checking for ORTE CXXFLAGS_PREFIX...  
> checking for ORTE CFLAGS... -pthread 
> checking for ORTE CFLAGS_PREFIX...  
> checking for ORTE LDFLAGS...
> checking for ORTE LIBS...  -ldl   -Wl,--export-dynamic -lrt -lnsl -lutil -lm 
> -ldl 
> checking for ORTE extra include dirs... 
> checking for OMPI CPPFLAGS... 
> checking for OMPI CFLAGS... -pthread 
> checking for OMPI CFLAGS_PREFIX...  
> checking for OMPI CXXFLAGS... -pthread 
> checking for OMPI CXXFLAGS_PREFIX...  
> checking for OMPI FFLAGS... -pthread 
> checking for OMPI FFLAGS_PREFIX...  
> checking for OMPI FCFLAGS... -pthread 
> checking for OMPI FCFLAGS_PREFIX...  
> checking for OMPI LDFLAGS... 
> checking for OMPI LIBS...   -ldl   -Wl,--export-dynamic -lrt -lnsl -lutil -lm 
> -ldl 
> checking for OMPI extra include dirs... 
> 
> *** Final output
> configure: creating ./config.status
> config.status: creating ompi/include/ompi/version.h
> config.status: creating orte/include/orte/version.h
> config.status: creating opal/include/opal/version.h
> config.status: creating opal/mca/backtrace/Makefile
> config.status: creating opal/mca/backtrace/printstack/Makefile
> config.status: creating opal/mca/backtrace/execinfo/Makefile
> config.status: creating opal/mca/backtrace/darwin/Makefile
> config.status: creating opal/mca/backtrace/none/Makefile
> config.status: creating opal/mca/carto/Makefile
> config.status: creating opal/mca/carto/auto_detect/Makefile
> config.status: creating opal/mca/carto/file/Makefile
> config.status: creating opal/mca/compress/Makefile
> config.status: creating opal/mca/compress/gzip/Makefile
> config.status: creating opal/mca/compress/bzip/Makefile
> config.status: creating opal/mca/crs/Makefile
> config.status: creating opal/mca/crs/none/Makefile
> config.status: creating opal/mca/crs/self/Makefile
> config.status: creating opal/mca/crs/blcr/Makefile
> config.status: creating opal/mca/event/Makefile
> config.status: creating opal/mca/event/libevent207/Makefile
> config.status: error: cannot find input file: 
> `opal/mca/event/libevent207/libevent/include/event2/event-config.h.in'
> make: *** [distcheck] Error 1
> ===
> 
> Your friendly daemon,
> Cyrador
> ___
> testing mailing list
> test...@open-mpi.org
> http://www.open-mpi.org/mailman/listinfo.cgi/testing


-- 
Jeff Squyres
jsquy...@cisco.com
For corporate legal information go to:
http://www.cisco.com/web/about/doing_business/legal/cri/




Re: [OMPI devel] === CREATE FAILURE (trunk) ===

2010-02-08 Thread Jeff Squyres
The build machine at IU was running out of disk space.

I'm not 100% sure that this is the underlying error, but I'm going to kick off 
another build and see if we get the same dist error.  If so, I'll dig more.


On Feb 7, 2010, at 9:13 PM, MPI Team wrote:

> 
> ERROR: Command returned a non-zero exist status (trunk):
>make distcheck
> 
> Start time: Sun Feb  7 21:00:06 EST 2010
> End time:   Sun Feb  7 21:13:33 EST 2010
> 
> ===
> [... previous lines snipped ...]
>   CC opal_convertor_raw.lo
>   CC opal_copy_functions.lo
>   CC opal_copy_functions_heterogeneous.lo
>   CC opal_datatype_add.lo
>   CC opal_datatype_clone.lo
>   CC opal_datatype_copy.lo
>   CC opal_datatype_create.lo
>   CC opal_datatype_create_contiguous.lo
>   CC opal_datatype_destroy.lo
>   CC opal_datatype_dump.lo
>   CC opal_datatype_fake_stack.lo
>   CC opal_datatype_get_count.lo
>   CC opal_datatype_module.lo
>   CC opal_datatype_optimize.lo
>   CC opal_datatype_pack.lo
>   CC opal_datatype_position.lo
>   CC opal_datatype_resize.lo
>   CC opal_datatype_unpack.lo
>   CCLD   libdatatype.la
> make[3]: Leaving directory 
> `/home/mpiteam/openmpi/nightly-tarball-build-root/trunk/create-r22568/ompi/openmpi-1.7a1r22568/_build/opal/datatype'
> Making all in etc
> make[3]: Entering directory 
> `/home/mpiteam/openmpi/nightly-tarball-build-root/trunk/create-r22568/ompi/openmpi-1.7a1r22568/_build/opal/etc'
> make[3]: Nothing to be done for `all'.
> make[3]: Leaving directory 
> `/home/mpiteam/openmpi/nightly-tarball-build-root/trunk/create-r22568/ompi/openmpi-1.7a1r22568/_build/opal/etc'
> Making all in event
> make[3]: Entering directory 
> `/home/mpiteam/openmpi/nightly-tarball-build-root/trunk/create-r22568/ompi/openmpi-1.7a1r22568/_build/opal/event'
> Making all in compat
> make[4]: Entering directory 
> `/home/mpiteam/openmpi/nightly-tarball-build-root/trunk/create-r22568/ompi/openmpi-1.7a1r22568/_build/opal/event/compat'
> Making all in sys
> make[5]: Entering directory 
> `/home/mpiteam/openmpi/nightly-tarball-build-root/trunk/create-r22568/ompi/openmpi-1.7a1r22568/_build/opal/event/compat/sys'
> make[5]: Nothing to be done for `all'.
> make[5]: Leaving directory 
> `/home/mpiteam/openmpi/nightly-tarball-build-root/trunk/create-r22568/ompi/openmpi-1.7a1r22568/_build/opal/event/compat/sys'
> make[5]: Entering directory 
> `/home/mpiteam/openmpi/nightly-tarball-build-root/trunk/create-r22568/ompi/openmpi-1.7a1r22568/_build/opal/event/compat'
> make[5]: Nothing to be done for `all-am'.
> make[5]: Leaving directory 
> `/home/mpiteam/openmpi/nightly-tarball-build-root/trunk/create-r22568/ompi/openmpi-1.7a1r22568/_build/opal/event/compat'
> make[4]: Leaving directory 
> `/home/mpiteam/openmpi/nightly-tarball-build-root/trunk/create-r22568/ompi/openmpi-1.7a1r22568/_build/opal/event/compat'
> make[4]: Entering directory 
> `/home/mpiteam/openmpi/nightly-tarball-build-root/trunk/create-r22568/ompi/openmpi-1.7a1r22568/_build/opal/event'
>   CC event.lo
>   CC log.lo
>   CC evutil.lo
> ../../../opal/event/evutil.c:201:2: #error "I don't know how to parse 64-bit 
> integers."
> make[4]: *** [evutil.lo] Error 1
> make[4]: Leaving directory 
> `/home/mpiteam/openmpi/nightly-tarball-build-root/trunk/create-r22568/ompi/openmpi-1.7a1r22568/_build/opal/event'
> make[3]: *** [all-recursive] Error 1
> make[3]: Leaving directory 
> `/home/mpiteam/openmpi/nightly-tarball-build-root/trunk/create-r22568/ompi/openmpi-1.7a1r22568/_build/opal/event'
> make[2]: *** [all-recursive] Error 1
> make[2]: Leaving directory 
> `/home/mpiteam/openmpi/nightly-tarball-build-root/trunk/create-r22568/ompi/openmpi-1.7a1r22568/_build/opal'
> make[1]: *** [all-recursive] Error 1
> make[1]: Leaving directory 
> `/home/mpiteam/openmpi/nightly-tarball-build-root/trunk/create-r22568/ompi/openmpi-1.7a1r22568/_build'
> make: *** [distcheck] Error 1
> ===
> 
> Your friendly daemon,
> Cyrador
> ___
> testing mailing list
> test...@open-mpi.org
> http://www.open-mpi.org/mailman/listinfo.cgi/testing
> 


-- 
Jeff Squyres
jsquy...@cisco.com

For corporate legal information go to:
http://www.cisco.com/web/about/doing_business/legal/cri/




Re: [OMPI devel] === CREATE FAILURE (trunk) ===

2010-02-05 Thread Jeff Squyres
This was a mistake in the libevent refresh (two files disappeared from libevent 
that were still listed in our Makefile.am).  Fixed in r22562.


On Feb 4, 2010, at 9:35 PM, MPI Team wrote:

> 
> ERROR: Command returned a non-zero exist status (trunk):
>make distcheck
> 
> Start time: Thu Feb  4 21:27:00 EST 2010
> End time:   Thu Feb  4 21:35:05 EST 2010
> 
> ===
> [... previous lines snipped ...]
>  am__remove_distdir=: am__skip_length_check=: am__skip_mode_fix=: distdir)
> make[2]: Entering directory 
> `/home/mpiteam/openmpi/nightly-tarball-build-root/trunk/create-r22560/ompi/opal/asm'
> make  \
>   top_distdir="../../openmpi-1.7a1r22560" 
> distdir="../../openmpi-1.7a1r22560/opal/asm" \
>   dist-hook
> make[3]: Entering directory 
> `/home/mpiteam/openmpi/nightly-tarball-build-root/trunk/create-r22560/ompi/opal/asm'
> mkdir "../../openmpi-1.7a1r22560/opal/asm/generated"
> perl "../../opal/asm/generate-all-asm.pl" "perl" "." 
> "../../openmpi-1.7a1r22560/opal/asm"
> --> Generating assembly for "ALPHA" "default-.text-.globl-:--$-@-1-1-1-1-1"
> --> Generating assembly for "AMD64" "default-.text-.globl-:--.L-@-1-0-1-1-1"
> --> Generating assembly for "AMD64" "default-.text-.globl-:--.L-@-1-0-1-1-0"
> --> Generating assembly for "IA32" "default-.text-.globl-:--.L-@-1-0-1-1-1"
> --> Generating assembly for "IA32" "default-.text-.globl-:--.L-@-1-0-1-1-0"
> --> Generating assembly for "IA32" "default-.text-.globl-:-_-L--0-1-1-1-0"
> --> Generating assembly for "IA32" "default-.text-.globl-:-_-L--0-0-1-1-1"
> --> Generating assembly for "IA32" "default-.text-.globl-:-_-L--0-0-1-1-0"
> --> Generating assembly for "IA64" "default-.text-.globl-:--.L-@-1-0-1-1-1"
> --> Generating assembly for "IA64" "default-.text-.globl-:--.L-@-1-0-1-1-0"
> --> Generating assembly for "POWERPC32" 
> "default-.text-.globl-:-_-L--0-1-1-0-0"
> --> Generating assembly for "POWERPC32" 
> "default-.text-.globl-:--.L-@-1-1-0-0-1"
> --> Generating assembly for "POWERPC32" 
> "default-.text-.globl-:--.L-@-1-1-0-0-0"
> --> Generating assembly for "POWERPC32" "aix-.csect 
> .text[PR]-.globl-:-.-L--0-1-0-0-0"
> --> Generating assembly for "POWERPC32" 
> "default-.text-.globl-:-_-L--0-1-1-1-0"
> --> Generating assembly for "POWERPC64" 
> "default-.text-.globl-:-_-L--0-1-1-1-0"
> --> Generating assembly for "POWERPC64" 
> "default-.text-.globl-:-.-.L-@-1-1-0-1-1"
> --> Generating assembly for "POWERPC64" 
> "default-.text-.globl-:-.-.L-@-1-1-0-1-0"
> --> Generating assembly for "POWERPC64" "aix-.csect 
> .text[PR]-.globl-:-.-L--0-1-0-1-0"
> --> Generating assembly for "SPARC" "default-.text-.globl-:--.L-#-1-0-1-0-0"
> --> Generating assembly for "SPARCV9_32" 
> "default-.text-.globl-:--.L-#-1-0-1-1-0"
> --> Generating assembly for "SPARCV9_64" 
> "default-.text-.globl-:--.L-#-1-0-1-1-0"
> --> Generating assembly for "MIPS" "default-.text-.globl-:--L--1-1-1-1-0"
> --> Generating assembly for "MIPS" "default-.text-.globl-:--L--1-1-1-1-0"
> make[3]: Leaving directory 
> `/home/mpiteam/openmpi/nightly-tarball-build-root/trunk/create-r22560/ompi/opal/asm'
> make[2]: Leaving directory 
> `/home/mpiteam/openmpi/nightly-tarball-build-root/trunk/create-r22560/ompi/opal/asm'
>  (cd datatype && make  top_distdir=../../openmpi-1.7a1r22560 
> distdir=../../openmpi-1.7a1r22560/opal/datatype \
>  am__remove_distdir=: am__skip_length_check=: am__skip_mode_fix=: distdir)
> make[2]: Entering directory 
> `/home/mpiteam/openmpi/nightly-tarball-build-root/trunk/create-r22560/ompi/opal/datatype'
> make[2]: Leaving directory 
> `/home/mpiteam/openmpi/nightly-tarball-build-root/trunk/create-r22560/ompi/opal/datatype'
>  (cd etc && make  top_distdir=../../openmpi-1.7a1r22560 
> distdir=../../openmpi-1.7a1r22560/opal/etc \
>  am__remove_distdir=: am__skip_length_check=: am__skip_mode_fix=: distdir)
> make[2]: Entering directory 
> `/home/mpiteam/openmpi/nightly-tarball-build-root/trunk/create-r22560/ompi/opal/etc'
> make[2]: Leaving directory 
> `/home/mpiteam/openmpi/nightly-tarball-build-root/trunk/create-r22560/ompi/opal/etc'
>  (cd event && make  top_distdir=../../openmpi-1.7a1r22560 
> distdir=../../openmpi-1.7a1r22560/opal/event \
>  am__remove_distdir=: am__skip_length_check=: am__skip_mode_fix=: distdir)
> make[2]: Entering directory 
> `/home/mpiteam/openmpi/nightly-tarball-build-root/trunk/create-r22560/ompi/opal/event'
> make[2]: *** No rule to make target `WIN32-Code/misc.c', needed by `distdir'. 
>  Stop.
> make[2]: Leaving directory 
> `/home/mpiteam/openmpi/nightly-tarball-build-root/trunk/create-r22560/ompi/opal/event'
> make[1]: *** [distdir] Error 1
> make[1]: Leaving directory 
> `/home/mpiteam/openmpi/nightly-tarball-build-root/trunk/create-r22560/ompi/opal'
> make: *** [distdir] Error 1
> ===
> 
> Your friendly daemon,
> Cyrador
> ___
> testing mailing list
> test...@open-mpi.org
> http://www.open-mpi.org/mailman/listinfo.cgi/testing

Re: [OMPI devel] === CREATE FAILURE (trunk) ===

2009-05-27 Thread Josh Hursey

This should be fixed in r21293.

Sorry about that.

On May 27, 2009, at 9:10 AM, Josh Hursey wrote:

It seems that the MPI Extensions commit broke the tarball build
(distcleancheck) last night. This shouldn't affect too many people,
but I am working on a fix, so hopefully everything will be better
tonight.


-- Josh

On May 26, 2009, at 9:53 PM, MPI Team wrote:



ERROR: Command returned a non-zero exist status (trunk):
 make distcheck

Start time: Tue May 26 21:26:07 EDT 2009
End time:   Tue May 26 21:53:45 EDT 2009

===

[... previous lines snipped ...]
test -z "" || rm -f
rm -f class/.deps/.dirstamp
rm -f class/.dirstamp
rm -f dss/.deps/.dirstamp
rm -f dss/.dirstamp
rm -f memoryhooks/.deps/.dirstamp
rm -f memoryhooks/.dirstamp
rm -f runtime/.deps/.dirstamp
rm -f runtime/.dirstamp
rm -f threads/.deps/.dirstamp
rm -f threads/.dirstamp
rm -f win32/.deps/.dirstamp
rm -f win32/.dirstamp
rm -f TAGS ID GTAGS GRTAGS GSYMS GPATH tags
make[3]: Leaving directory `/home/mpiteam/openmpi/nightly-tarball- 
build-root/trunk/create-r21285/ompi/openmpi-1.4a1r21285/_build/opal'
rm -rf class/.deps dss/.deps memoryhooks/.deps runtime/.deps  
threads/.deps win32/.deps

rm -f Makefile
make[2]: Leaving directory `/home/mpiteam/openmpi/nightly-tarball- 
build-root/trunk/create-r21285/ompi/openmpi-1.4a1r21285/_build/opal'

Making distclean in contrib
make[2]: Entering directory `/home/mpiteam/openmpi/nightly-tarball- 
build-root/trunk/create-r21285/ompi/openmpi-1.4a1r21285/_build/ 
contrib'

test -z "*~ .#*" || rm -f *~ .#*
rm -rf .libs _libs
rm -f *.lo
test -z "" || rm -f
rm -f Makefile
make[2]: Leaving directory `/home/mpiteam/openmpi/nightly-tarball- 
build-root/trunk/create-r21285/ompi/openmpi-1.4a1r21285/_build/ 
contrib'

Making distclean in config
make[2]: Entering directory `/home/mpiteam/openmpi/nightly-tarball- 
build-root/trunk/create-r21285/ompi/openmpi-1.4a1r21285/_build/ 
config'

test -z "*~ .#*" || rm -f *~ .#*
rm -rf .libs _libs
rm -f *.lo
test -z "" || rm -f
rm -f Makefile
make[2]: Leaving directory `/home/mpiteam/openmpi/nightly-tarball- 
build-root/trunk/create-r21285/ompi/openmpi-1.4a1r21285/_build/ 
config'

Making distclean in .
make[2]: Entering directory `/home/mpiteam/openmpi/nightly-tarball- 
build-root/trunk/create-r21285/ompi/openmpi-1.4a1r21285/_build'

test -z "*~ .#*" || rm -f *~ .#*
rm -rf .libs _libs
rm -f *.lo
test -z "ompi/include/ompi/version.h orte/include/orte/version.h  
opal/include/opal/version.h" || rm -f ompi/include/ompi/version.h  
orte/include/orte/version.h opal/include/opal/version.h

rm -f libtool
rm -f TAGS ID GTAGS GRTAGS GSYMS GPATH tags
make[2]: Leaving directory `/home/mpiteam/openmpi/nightly-tarball- 
build-root/trunk/create-r21285/ompi/openmpi-1.4a1r21285/_build'
rm -f config.status config.cache config.log configure.lineno  
config.status.lineno

rm -f Makefile
ERROR: files left in build directory after distclean:
./ompi/include/mpi-ext.h
make[1]: *** [distcleancheck] Error 1
make[1]: Leaving directory `/home/mpiteam/openmpi/nightly-tarball- 
build-root/trunk/create-r21285/ompi/openmpi-1.4a1r21285/_build'

make: *** [distcheck] Error 2
===


Your friendly daemon,
Cyrador
___
testing mailing list
test...@open-mpi.org
http://www.open-mpi.org/mailman/listinfo.cgi/testing


___
devel mailing list
de...@open-mpi.org
http://www.open-mpi.org/mailman/listinfo.cgi/devel




Re: [OMPI devel] === CREATE FAILURE (trunk) ===

2009-05-27 Thread Josh Hursey
It seems that the MPI Extensions commit broke the tarball build
(distcleancheck) last night. This shouldn't affect too many people,
but I am working on a fix, so hopefully everything will be better
tonight.


-- Josh

On May 26, 2009, at 9:53 PM, MPI Team wrote:



ERROR: Command returned a non-zero exist status (trunk):
  make distcheck

Start time: Tue May 26 21:26:07 EDT 2009
End time:   Tue May 26 21:53:45 EDT 2009

===

[... previous lines snipped ...]
test -z "" || rm -f
rm -f class/.deps/.dirstamp
rm -f class/.dirstamp
rm -f dss/.deps/.dirstamp
rm -f dss/.dirstamp
rm -f memoryhooks/.deps/.dirstamp
rm -f memoryhooks/.dirstamp
rm -f runtime/.deps/.dirstamp
rm -f runtime/.dirstamp
rm -f threads/.deps/.dirstamp
rm -f threads/.dirstamp
rm -f win32/.deps/.dirstamp
rm -f win32/.dirstamp
rm -f TAGS ID GTAGS GRTAGS GSYMS GPATH tags
make[3]: Leaving directory `/home/mpiteam/openmpi/nightly-tarball- 
build-root/trunk/create-r21285/ompi/openmpi-1.4a1r21285/_build/opal'
rm -rf class/.deps dss/.deps memoryhooks/.deps runtime/.deps  
threads/.deps win32/.deps

rm -f Makefile
make[2]: Leaving directory `/home/mpiteam/openmpi/nightly-tarball- 
build-root/trunk/create-r21285/ompi/openmpi-1.4a1r21285/_build/opal'

Making distclean in contrib
make[2]: Entering directory `/home/mpiteam/openmpi/nightly-tarball- 
build-root/trunk/create-r21285/ompi/openmpi-1.4a1r21285/_build/ 
contrib'

test -z "*~ .#*" || rm -f *~ .#*
rm -rf .libs _libs
rm -f *.lo
test -z "" || rm -f
rm -f Makefile
make[2]: Leaving directory `/home/mpiteam/openmpi/nightly-tarball- 
build-root/trunk/create-r21285/ompi/openmpi-1.4a1r21285/_build/ 
contrib'

Making distclean in config
make[2]: Entering directory `/home/mpiteam/openmpi/nightly-tarball- 
build-root/trunk/create-r21285/ompi/openmpi-1.4a1r21285/_build/config'

test -z "*~ .#*" || rm -f *~ .#*
rm -rf .libs _libs
rm -f *.lo
test -z "" || rm -f
rm -f Makefile
make[2]: Leaving directory `/home/mpiteam/openmpi/nightly-tarball- 
build-root/trunk/create-r21285/ompi/openmpi-1.4a1r21285/_build/config'

Making distclean in .
make[2]: Entering directory `/home/mpiteam/openmpi/nightly-tarball- 
build-root/trunk/create-r21285/ompi/openmpi-1.4a1r21285/_build'

test -z "*~ .#*" || rm -f *~ .#*
rm -rf .libs _libs
rm -f *.lo
test -z "ompi/include/ompi/version.h orte/include/orte/version.h  
opal/include/opal/version.h" || rm -f ompi/include/ompi/version.h  
orte/include/orte/version.h opal/include/opal/version.h

rm -f libtool
rm -f TAGS ID GTAGS GRTAGS GSYMS GPATH tags
make[2]: Leaving directory `/home/mpiteam/openmpi/nightly-tarball- 
build-root/trunk/create-r21285/ompi/openmpi-1.4a1r21285/_build'
rm -f config.status config.cache config.log configure.lineno  
config.status.lineno

rm -f Makefile
ERROR: files left in build directory after distclean:
./ompi/include/mpi-ext.h
make[1]: *** [distcleancheck] Error 1
make[1]: Leaving directory `/home/mpiteam/openmpi/nightly-tarball- 
build-root/trunk/create-r21285/ompi/openmpi-1.4a1r21285/_build'

make: *** [distcheck] Error 2
===


Your friendly daemon,
Cyrador
___
testing mailing list
test...@open-mpi.org
http://www.open-mpi.org/mailman/listinfo.cgi/testing




Re: [OMPI devel] === CREATE FAILURE (trunk) ===

2009-03-23 Thread Ralph Castain
Wow, sometimes I even amaze myself! Two for two on create failures in  
a single night!!


:-)

Anyway, both are fixed or shortly will be. However, there will be no  
MTT runs tonight as neither branch successfully generated a tarball.


Ralph


On Mar 23, 2009, at 7:30 PM, MPI Team wrote:



ERROR: Command returned a non-zero exist status (trunk):
  make distcheck

Start time: Mon Mar 23 21:22:33 EDT 2009
End time:   Mon Mar 23 21:30:20 EDT 2009

===
{ test ! -d openmpi-1.4a1r20848 || { find openmpi-1.4a1r20848 -type  
d ! -perm -200 -exec chmod u+w {} ';' && rm -fr  
openmpi-1.4a1r20848; }; }

test -d openmpi-1.4a1r20848 || mkdir openmpi-1.4a1r20848
list='config contrib opal orte ompi test'; for subdir in $list; do \
 if test "$subdir" = .; then :; else \
   test -d "openmpi-1.4a1r20848/$subdir" \
   || /bin/mkdir -p "openmpi-1.4a1r20848/$subdir" \
   || exit 1; \
   distdir=`CDPATH="${ZSH_VERSION+.}:" && cd openmpi-1.4a1r20848 &&  
pwd`; \
   top_distdir=`CDPATH="${ZSH_VERSION+.}:" && cd openmpi-1.4a1r20848  
&& pwd`; \

   (cd $subdir && \
 make  \
   top_distdir="$top_distdir" \
   distdir="$distdir/$subdir" \
am__remove_distdir=: \
am__skip_length_check=: \
   distdir) \
 || exit 1; \
 fi; \
done
make[1]: Entering directory `/home/mpiteam/openmpi/nightly-tarball- 
build-root/trunk/create-r20848/ompi/config'
make[1]: Leaving directory `/home/mpiteam/openmpi/nightly-tarball- 
build-root/trunk/create-r20848/ompi/config'
make[1]: Entering directory `/home/mpiteam/openmpi/nightly-tarball- 
build-root/trunk/create-r20848/ompi/contrib'
make[1]: *** No rule to make target `platform/lanl/rr-class/ 
debug.conf', needed by `distdir'.  Stop.
make[1]: Leaving directory `/home/mpiteam/openmpi/nightly-tarball- 
build-root/trunk/create-r20848/ompi/contrib'

make: *** [distdir] Error 1
===


Your friendly daemon,
Cyrador
___
testing mailing list
test...@open-mpi.org
http://www.open-mpi.org/mailman/listinfo.cgi/testing




Re: [OMPI devel] === CREATE FAILURE (trunk) ===

2009-02-17 Thread Jeff Squyres
This should be fixed in r20576; I'll go start a new tarball build  
right now...



On Feb 16, 2009, at 9:33 PM, MPI Team wrote:



ERROR: Command returned a non-zero exist status (trunk):
  ./configure --enable-dist

Start time: Mon Feb 16 21:26:57 EST 2009
End time:   Mon Feb 16 21:33:03 EST 2009

===

[... previous lines snipped ...]
+++ Configuring MCA framework memory
checking for no configure components in framework memory...
checking for m4 configure components in framework memory...  
ptmalloc2, malloc_solaris, mallopt


--- MCA component memory:ptmalloc2 (m4 configuration macro)
checking for MCA component memory:ptmalloc2 compile mode... static
checking if ptmalloc2 should be part of libopen-pal... no
checking for malloc.h... (cached) yes
checking whether __malloc_initialize_hook is declared... yes
checking whether sbrk is declared... yes
checking syscall.h usability... yes
checking syscall.h presence... yes
checking for syscall.h... yes
checking for syscall... yes
checking for __munmap... no
checking for __mmap... no
checking for dlsym in -ldl... yes
checking for dlsym... yes
checking if MCA component memory:ptmalloc2 can compile... yes

--- MCA component memory:malloc_solaris (m4 configuration macro)
checking for MCA component memory:malloc_solaris compile mode...  
static

checking if MCA component memory:malloc_solaris can compile... no

--- MCA component memory:mallopt (m4 configuration macro)
checking for MCA component memory:mallopt compile mode... static
checking if MCA component memory:mallopt can compile... no

+++ Configuring MCA framework paffinity
checking for no configure components in framework paffinity...
checking for m4 configure components in framework paffinity...  
linux, solaris, windows, darwin, posix


--- MCA component paffinity:linux (m4 configuration macro)
checking for MCA component paffinity:linux compile mode... dso
checking for syscall... (cached) yes
checking sys/syscall.h usability... yes
checking sys/syscall.h presence... yes
checking for sys/syscall.h... yes
checking for unistd.h... (cached) yes
checking for __NR_sched_setaffinity... yes
checking for __NR_sched_getaffinity... yes
checking for PLPA building mode... included
checking if want PLPA maintainer support... enabled (SVN checkout  
default)

checking for PLPA config prefix... opal/mca/paffinity/linux/plpa
checking for PLPA symbol prefix... opal_paffinity_linux_plpa_
checking valgrind/valgrind.h usability... yes
checking valgrind/valgrind.h presence... yes
checking for valgrind/valgrind.h... yes
checking for VALGRIND_CHECK_MEM_IS_ADDRESSABLE... no
configure: error: Need Valgrind version 3.2.0 or later.
===


Your friendly daemon,
Cyrador
___
testing mailing list
test...@open-mpi.org
http://www.open-mpi.org/mailman/listinfo.cgi/testing



--
Jeff Squyres
Cisco Systems



Re: [OMPI devel] === CREATE FAILURE (trunk) ===

2008-08-22 Thread Tim Mattox
Just so everyone knows, the nightly OMPI tarballs from last night all
failed due to unfortunately timed system maintenance at IU. Think of it
this way: the resulting "duplicate" MTT runs from last night might help
with diagnosing any intermittent bugs. :-)

On Thu, Aug 21, 2008 at 9:00 PM, MPI Team  wrote:
>
> ERROR: Command returned a non-zero exist status (trunk):
>   svn co http://svn.open-mpi.org/svn/ompi//trunk -r  ompi
>
> Start time: Thu Aug 21 21:00:33 EDT 2008
> End time:   Thu Aug 21 21:00:33 EDT 2008
>
> ===
> ./create_tarball.sh: line 112: svn: command not found
> ===
>
> Your friendly daemon,
> Cyrador
> ___
> testing mailing list
> test...@open-mpi.org
> http://www.open-mpi.org/mailman/listinfo.cgi/testing
>



-- 
Tim Mattox, Ph.D. - http://homepage.mac.com/tmattox/
 tmat...@gmail.com || timat...@open-mpi.org
 I'm a bright... http://www.the-brights.net/