Re: [OMPI users] opal_cr_tmp_dir

2010-05-12 Thread Ralph Castain
Define them in your environment prior to executing any of those commands.
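
For example, a minimal sketch for a bash shell, reusing the paths from this
thread (the application name is a placeholder). MCA parameters are read from
OMPI_MCA_-prefixed environment variables, so mpirun, ompi-checkpoint, and
ompi-restart all see the same values:

$ export OMPI_MCA_opal_cr_tmp_dir=/home/ananda/OPAL
$ export OMPI_MCA_orte_tmpdir_base=/home/ananda/ORTE
$ mpirun -np 4 ./myapp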

On May 12, 2010, at 4:43 PM,  wrote:

> Ralph
> 
> When you say manually, do you mean setting these parameters on the command 
> line while calling mpirun, ompi-restart, and ompi-checkpoint? Or is there 
> another way to set these parameters?
> 
> Thanks
> 
> Ananda
> 
> ==
> 
> Subject: Re: [OMPI users] opal_cr_tmp_dir
> From: Ralph Castain (rhc_at_[hidden])
> Date: 2010-05-12 18:09:17
> 
> Previous message: ananda.mudar_at_[hidden]: "Re: [OMPI users] opal_cr_tmp_dir"
> In reply to: ananda.mudar_at_[hidden]: "Re: [OMPI users] opal_cr_tmp_dir"
> You shouldn't have to, but there may be a bug in the system. Try manually 
> setting both envars and see if it fixes the problem.
> 
> On May 12, 2010, at 3:59 PM,  wrote:
> 
> > Ralph 
> > 
> > I have these parameters set in ~/.openmpi/mca-params.conf file 
> > 
> > $ cat ~/.openmpi/mca-params.conf 
> > 
> > orte_tmpdir_base = /home/ananda/ORTE 
> > 
> > opal_cr_tmp_dir = /home/ananda/OPAL 
> > 
> > $ 
> > 
> > 
> > 
> > Should I be setting OMPI_MCA_opal_cr_tmp_dir? 
> > 
> > 
> > 
> > FYI, I am using openmpi 1.3.4 with blcr 0.8.2 
> > 
> > 
> > Thanks 
> > 
> > Ananda 
> > 
> > = 
> > 
> > Subject: Re: [OMPI users] opal_cr_tmp_dir 
> > From: Ralph Castain (rhc_at_[hidden]) 
> > Date: 2010-05-12 16:47:16 
> > 
> > Previous message: Jeff Squyres: "Re: [OMPI users] getc in openmpi" 
> > In reply to: ananda.mudar_at_[hidden]: "Re: [OMPI users] opal_cr_tmp_dir" 
> > ompi-restart just does a fork/exec of the mpirun, so it should get the 
> > param if it is in your environ. How are you setting it? Have you tried 
> > adding OMPI_MCA_opal_cr_tmp_dir= to your environment? 
> > 
> > On May 12, 2010, at 12:45 PM,  wrote: 
> > 
> > > Thanks Ralph. 
> > > 
> > > Another question. Even though I am setting opal_cr_tmp_dir to a directory 
> > > other than /tmp while calling ompi-restart command, this setting is not 
> > > getting passed to the mpirun command that gets generated by ompi-restart. 
> > > How do I overcome this constraint? 
> > > 
> > > 
> > > 
> > > Thanks 
> > > 
> > > Ananda 
> > > 
> > > == 
> > > 
> > > Subject: Re: [OMPI users] opal_cr_tmp_dir 
> > > From: Ralph Castain (rhc_at_[hidden]) 
> > > Date: 2010-05-12 14:38:00 
> > > 
> > > Previous message: ananda.mudar_at_[hidden]: "[OMPI users] 
> > > opal_cr_tmp_dir" 
> > > In reply to: ananda.mudar_at_[hidden]: "[OMPI users] opal_cr_tmp_dir" 
> > > It's a different MCA param: orte_tmpdir_base 
> > > 
> > > On May 12, 2010, at 12:33 PM,  wrote: 
> > > 
> > > > I am setting the MCA parameter “opal_cr_tmp_dir” to a directory other 
> > > > than /tmp while calling “mpirun”, “ompi-restart”, and “ompi-checkpoint” 
> > > > commands so that I don’t fill up /tmp filesystem. But I see that 
> > > > openmpi-sessions* directory is still getting created under /tmp. How do 
> > > > I overcome this problem so that openmpi-sessions* directory also gets 
> > > > created under the same directory I have defined for “opal_cr_tmp_dir”? 
> > > > 
> > > > Is there a way to clean up these temporary files after their 
> > > > requirement is over? 
> > > > 
> > > > Thanks 
> > > > Ananda 

Re: [OMPI users] opal_cr_tmp_dir

2010-05-12 Thread ananda.mudar
Ralph

When you say manually, do you mean setting these parameters on the
command line while calling mpirun, ompi-restart, and ompi-checkpoint? Or
is there another way to set these parameters?

Thanks

Ananda

==

Subject: Re: [OMPI users] opal_cr_tmp_dir
From: Ralph Castain (rhc_at_[hidden])
Date: 2010-05-12 18:09:17

*   Previous message: ananda.mudar_at_[hidden]: "Re: [OMPI users]
opal_cr_tmp_dir"

*   In reply to: ananda.mudar_at_[hidden]: "Re: [OMPI users]
opal_cr_tmp_dir"




You shouldn't have to, but there may be a bug in the system. Try
manually setting both envars and see if it fixes the problem.

On May 12, 2010, at 3:59 PM,  wrote:

> Ralph
>
> I have these parameters set in ~/.openmpi/mca-params.conf file
>
> $ cat ~/.openmpi/mca-params.conf
>
> orte_tmpdir_base = /home/ananda/ORTE
>
> opal_cr_tmp_dir = /home/ananda/OPAL
>
> $
>
>
>
> Should I be setting OMPI_MCA_opal_cr_tmp_dir?
>
>
>
> FYI, I am using openmpi 1.3.4 with blcr 0.8.2
>
>
> Thanks
>
> Ananda
>
> =
>
> Subject: Re: [OMPI users] opal_cr_tmp_dir
> From: Ralph Castain (rhc_at_[hidden])
> Date: 2010-05-12 16:47:16
>
> Previous message: Jeff Squyres: "Re: [OMPI users] getc in openmpi"
> In reply to: ananda.mudar_at_[hidden]: "Re: [OMPI users]
opal_cr_tmp_dir"
> ompi-restart just does a fork/exec of the mpirun, so it should get the
param if it is in your environ. How are you setting it? Have you tried
adding OMPI_MCA_opal_cr_tmp_dir= to your environment?
>
> On May 12, 2010, at 12:45 PM,  wrote:
>
> > Thanks Ralph.
> >
> > Another question. Even though I am setting opal_cr_tmp_dir to a
directory other than /tmp while calling ompi-restart command, this
setting is not getting passed to the mpirun command that gets generated
by ompi-restart. How do I overcome this constraint?
> >
> >
> >
> > Thanks
> >
> > Ananda
> >
> > ==
> >
> > Subject: Re: [OMPI users] opal_cr_tmp_dir
> > From: Ralph Castain (rhc_at_[hidden])
> > Date: 2010-05-12 14:38:00
> >
> > Previous message: ananda.mudar_at_[hidden]: "[OMPI users]
opal_cr_tmp_dir"
> > In reply to: ananda.mudar_at_[hidden]: "[OMPI users]
opal_cr_tmp_dir"
> > It's a different MCA param: orte_tmpdir_base
> >
> > On May 12, 2010, at 12:33 PM,  wrote:
> >
> > > I am setting the MCA parameter "opal_cr_tmp_dir" to a directory
other than /tmp while calling "mpirun", "ompi-restart", and
"ompi-checkpoint" commands so that I don't fill up /tmp filesystem. But
I see that openmpi-sessions* directory is still getting created under
/tmp. How do I overcome this problem so that openmpi-sessions* directory
also gets created under the same directory I have defined for
"opal_cr_tmp_dir"?
> > >
> > > Is there a way to clean up these temporary files after their
requirement is over?
> > >
> > > Thanks
> > > Ananda
>
>
>
> Ananda B Mudar, PMP
> Senior Technical 

Re: [OMPI users] opal_cr_tmp_dir

2010-05-12 Thread Ralph Castain
You shouldn't have to, but there may be a bug in the system. Try manually 
setting both envars and see if it fixes the problem.


On May 12, 2010, at 3:59 PM,  wrote:

> Ralph
> 
> I have these parameters set in ~/.openmpi/mca-params.conf file
> 
> $ cat ~/.openmpi/mca-params.conf
> 
> orte_tmpdir_base = /home/ananda/ORTE
> 
> opal_cr_tmp_dir = /home/ananda/OPAL
> 
> $
> 
>  
> 
> Should I be setting OMPI_MCA_opal_cr_tmp_dir?
> 
>  
> 
> FYI, I am using openmpi 1.3.4 with blcr 0.8.2
> 
> 
> Thanks
> 
> Ananda
> 
> =
> 
> Subject: Re: [OMPI users] opal_cr_tmp_dir
> From: Ralph Castain (rhc_at_[hidden])
> Date: 2010-05-12 16:47:16
> 
> Previous message: Jeff Squyres: "Re: [OMPI users] getc in openmpi"
> In reply to: ananda.mudar_at_[hidden]: "Re: [OMPI users] opal_cr_tmp_dir"
> ompi-restart just does a fork/exec of the mpirun, so it should get the param 
> if it is in your environ. How are you setting it? Have you tried adding 
> OMPI_MCA_opal_cr_tmp_dir= to your environment?
> 
> On May 12, 2010, at 12:45 PM,  wrote:
> 
> > Thanks Ralph. 
> > 
> > Another question. Even though I am setting opal_cr_tmp_dir to a directory 
> > other than /tmp while calling ompi-restart command, this setting is not 
> > getting passed to the mpirun command that gets generated by ompi-restart. 
> > How do I overcome this constraint? 
> > 
> > 
> > 
> > Thanks 
> > 
> > Ananda 
> > 
> > == 
> > 
> > Subject: Re: [OMPI users] opal_cr_tmp_dir 
> > From: Ralph Castain (rhc_at_[hidden]) 
> > Date: 2010-05-12 14:38:00 
> > 
> > Previous message: ananda.mudar_at_[hidden]: "[OMPI users] opal_cr_tmp_dir" 
> > In reply to: ananda.mudar_at_[hidden]: "[OMPI users] opal_cr_tmp_dir" 
> > It's a different MCA param: orte_tmpdir_base 
> > 
> > On May 12, 2010, at 12:33 PM,  wrote: 
> > 
> > > I am setting the MCA parameter “opal_cr_tmp_dir” to a directory other 
> > > than /tmp while calling “mpirun”, “ompi-restart”, and “ompi-checkpoint” 
> > > commands so that I don’t fill up /tmp filesystem. But I see that 
> > > openmpi-sessions* directory is still getting created under /tmp. How do I 
> > > overcome this problem so that openmpi-sessions* directory also gets 
> > > created under the same directory I have defined for “opal_cr_tmp_dir”? 
> > > 
> > > Is there a way to clean up these temporary files after their requirement 
> > > is over? 
> > > 
> > > Thanks 
> > > Ananda 
> 
>  
>  
> Ananda B Mudar, PMP
> Senior Technical Architect
> Wipro Technologies
> Ph: 972 765 8093
> ananda.mu...@wipro.com
>  

Re: [OMPI users] opal_cr_tmp_dir

2010-05-12 Thread ananda.mudar
Ralph

I have these parameters set in ~/.openmpi/mca-params.conf file

$ cat ~/.openmpi/mca-params.conf

orte_tmpdir_base = /home/ananda/ORTE

opal_cr_tmp_dir = /home/ananda/OPAL

$



Should I be setting OMPI_MCA_opal_cr_tmp_dir?



FYI, I am using openmpi 1.3.4 with blcr 0.8.2


Thanks

Ananda

=

Subject: Re: [OMPI users] opal_cr_tmp_dir
From: Ralph Castain (rhc_at_[hidden])
Date: 2010-05-12 16:47:16

*   Previous message: Jeff Squyres: "Re: [OMPI users] getc in
openmpi"

*   In reply to: ananda.mudar_at_[hidden]: "Re: [OMPI users]
opal_cr_tmp_dir"




ompi-restart just does a fork/exec of the mpirun, so it should get the
param if it is in your environ. How are you setting it? Have you tried
adding OMPI_MCA_opal_cr_tmp_dir= to your environment?

On May 12, 2010, at 12:45 PM,  wrote:

> Thanks Ralph.
>
> Another question. Even though I am setting opal_cr_tmp_dir to a
directory other than /tmp while calling ompi-restart command, this
setting is not getting passed to the mpirun command that gets generated
by ompi-restart. How do I overcome this constraint?
>
>
>
> Thanks
>
> Ananda
>
> ==
>
> Subject: Re: [OMPI users] opal_cr_tmp_dir
> From: Ralph Castain (rhc_at_[hidden])
> Date: 2010-05-12 14:38:00
>
> Previous message: ananda.mudar_at_[hidden]: "[OMPI users]
opal_cr_tmp_dir"
> In reply to: ananda.mudar_at_[hidden]: "[OMPI users] opal_cr_tmp_dir"
> It's a different MCA param: orte_tmpdir_base
>
> On May 12, 2010, at 12:33 PM,  wrote:
>
> > I am setting the MCA parameter "opal_cr_tmp_dir" to a directory
other than /tmp while calling "mpirun", "ompi-restart", and
"ompi-checkpoint" commands so that I don't fill up /tmp filesystem. But
I see that openmpi-sessions* directory is still getting created under
/tmp. How do I overcome this problem so that openmpi-sessions* directory
also gets created under the same directory I have defined for
"opal_cr_tmp_dir"?
> >
> > Is there a way to clean up these temporary files after their
requirement is over?
> >
> > Thanks
> > Ananda





Ananda B Mudar, PMP
Senior Technical Architect
Wipro Technologies
Ph: 972 765 8093
ananda.mu...@wipro.com






Re: [OMPI users] opal_cr_tmp_dir

2010-05-12 Thread Ralph Castain
ompi-restart just does a fork/exec of the mpirun, so it should get the param if 
it is in your environ. How are you setting it? Have you tried adding 
OMPI_MCA_opal_cr_tmp_dir= to your environment?

On May 12, 2010, at 12:45 PM,  wrote:

> Thanks Ralph.
> 
> Another question. Even though I am setting opal_cr_tmp_dir to a directory 
> other than /tmp while calling ompi-restart command, this setting is not 
> getting passed to the mpirun command that gets generated by ompi-restart. How 
> do I overcome this constraint?
> 
>  
> 
> Thanks
> 
> Ananda
> 
> ==
> 
> Subject: Re: [OMPI users] opal_cr_tmp_dir
> From: Ralph Castain (rhc_at_[hidden])
> Date: 2010-05-12 14:38:00
> 
> Previous message: ananda.mudar_at_[hidden]: "[OMPI users] opal_cr_tmp_dir"
> In reply to: ananda.mudar_at_[hidden]: "[OMPI users] opal_cr_tmp_dir"
> It's a different MCA param: orte_tmpdir_base
> 
> On May 12, 2010, at 12:33 PM,  wrote:
> 
> > I am setting the MCA parameter “opal_cr_tmp_dir” to a directory other than 
> > /tmp while calling “mpirun”, “ompi-restart”, and “ompi-checkpoint” commands 
> > so that I don’t fill up /tmp filesystem. But I see that openmpi-sessions* 
> > directory is still getting created under /tmp. How do I overcome this 
> > problem so that openmpi-sessions* directory also gets created under the 
> > same directory I have defined for “opal_cr_tmp_dir”? 
> > 
> > Is there a way to clean up these temporary files after their requirement is 
> > over? 
> > 
> > Thanks 
> > Ananda 



Re: [OMPI users] getc in openmpi

2010-05-12 Thread Jeff Squyres
On May 12, 2010, at 3:01 PM, Fernando Lemos wrote:

> Please correct me if I'm wrong, but I believe OpenMPI forwards mpirun's
> stdin to rank 0 and routes the other ranks' stdout/stderr back to
> mpirun. Otherwise it wouldn't be possible to even see the output from
> the other ranks. I guess that could make things slower.

Correct, OMPI does this.  The original question asked if we overrode getc(); we 
do not.  Open MPI sends stdin and receives stdout/stderr via pipes to MPI 
processes.

The phrasing of the original question also led me to believe that the program 
was being run without mpirun, but that could be a bad assumption on my part.  
Another relevant data point here would be exactly what "4x slower" means.

-- 
Jeff Squyres
jsquy...@cisco.com
For corporate legal information go to:
http://www.cisco.com/web/about/doing_business/legal/cri/




Re: [OMPI users] getc in openmpi

2010-05-12 Thread Fernando Lemos
On Wed, May 12, 2010 at 2:51 PM, Jeff Squyres  wrote:
> On May 12, 2010, at 1:48 PM, Hanjun Kim wrote:
>
>> I am working on parallelizing my sequential program using OpenMPI.
>> Although I got a performance speedup when using many threads, there was
>> a slowdown with a small number of threads, such as 4.
>> I found that this is because getc ran much more slowly than in the
>> sequential version. Does OpenMPI override or wrap the getc function?
>
> No.

Please correct me if I'm wrong, but I believe OpenMPI forwards mpirun's
stdin to rank 0 and routes the other ranks' stdout/stderr back to
mpirun. Otherwise it wouldn't be possible to even see the output from
the other ranks. I guess that could make things slower.

MPICH-2 had a command-line option that told mpiexec which processes
would receive stdin (all of them or only some) so that you could do
things like mpiexec 
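
For what it's worth, newer Open MPI releases appear to expose a similar
control on mpirun; the option below is taken from mpirun's help text and is
worth verifying against the installed version before relying on it:

$ mpirun --stdin all  -np 4 ./myapp    # forward mpirun's stdin to every rank
$ mpirun --stdin none -np 4 ./myapp    # forward stdin to no rank (default: rank 0)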

Re: [OMPI users] opal_cr_tmp_dir

2010-05-12 Thread ananda.mudar
Thanks Ralph.

Another question. Even though I am setting opal_cr_tmp_dir to a
directory other than /tmp while calling ompi-restart command, this
setting is not getting passed to the mpirun command that gets generated
by ompi-restart. How do I overcome this constraint?



Thanks

Ananda

==

Subject: Re: [OMPI users] opal_cr_tmp_dir
From: Ralph Castain (rhc_at_[hidden])
Date: 2010-05-12 14:38:00

*   Previous message: ananda.mudar_at_[hidden]: "[OMPI users]
opal_cr_tmp_dir"

*   In reply to: ananda.mudar_at_[hidden]: "[OMPI users]
opal_cr_tmp_dir"




It's a different MCA param: orte_tmpdir_base

On May 12, 2010, at 12:33 PM,  wrote:

> I am setting the MCA parameter "opal_cr_tmp_dir" to a directory other
than /tmp while calling "mpirun", "ompi-restart", and "ompi-checkpoint"
commands so that I don't fill up /tmp filesystem. But I see that
openmpi-sessions* directory is still getting created under /tmp. How do
I overcome this problem so that openmpi-sessions* directory also gets
created under the same directory I have defined for "opal_cr_tmp_dir"?
>
> Is there a way to clean up these temporary files after their
requirement is over?
>
> Thanks
> Ananda




Re: [OMPI users] opal_cr_tmp_dir

2010-05-12 Thread Ralph Castain
It's a different MCA param: orte_tmpdir_base
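
For instance, a sketch of passing both on the command line (any MCA
parameter can be set per-run with -mca; the application name is a
placeholder):

$ mpirun -mca orte_tmpdir_base /home/ananda/ORTE \
         -mca opal_cr_tmp_dir /home/ananda/OPAL \
         -np 4 ./myapp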



On May 12, 2010, at 12:33 PM,  wrote:

> I am setting the MCA parameter “opal_cr_tmp_dir” to a directory other than 
> /tmp while calling “mpirun”, “ompi-restart”, and “ompi-checkpoint” commands 
> so that I don’t fill up /tmp filesystem. But I see that openmpi-sessions* 
> directory is still getting created under /tmp. How do I overcome this problem 
> so that openmpi-sessions* directory also gets created under the same 
> directory I have defined for “opal_cr_tmp_dir”?
>  
> Is there a way to clean up these temporary files after their requirement is 
> over?
>  
> Thanks
> Ananda



[OMPI users] opal_cr_tmp_dir

2010-05-12 Thread ananda.mudar
I am setting the MCA parameter "opal_cr_tmp_dir" to a directory other
than /tmp while calling "mpirun", "ompi-restart", and "ompi-checkpoint"
commands so that I don't fill up /tmp filesystem. But I see that
openmpi-sessions* directory is still getting created under /tmp. How do
I overcome this problem so that openmpi-sessions* directory also gets
created under the same directory I have defined for "opal_cr_tmp_dir"?



Is there a way to clean up these temporary files after their requirement
is over?
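
For reference, a hedged sketch of one full checkpoint/restart cycle with
both temporary trees redirected, assuming the BLCR-backed C/R support is
built in (the application name, PID, and snapshot name are placeholders):

$ mpirun -am ft-enable-cr \
         -mca opal_cr_tmp_dir /home/ananda/OPAL \
         -mca orte_tmpdir_base /home/ananda/ORTE \
         -np 4 ./myapp &
$ ompi-checkpoint <pid-of-mpirun>
$ export OMPI_MCA_opal_cr_tmp_dir=/home/ananda/OPAL   # so the restarted mpirun inherits it
$ ompi-restart ompi_global_snapshot_<pid-of-mpirun>.ckpt

As for cleanup, the orte-clean utility that ships with Open MPI is meant to
remove stale session files and processes; its man page is worth checking
before relying on it.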



Thanks

Ananda




Re: [OMPI users] getc in openmpi

2010-05-12 Thread Jeff Squyres
On May 12, 2010, at 1:48 PM, Hanjun Kim wrote:

> I am working on parallelizing my sequential program using OpenMPI.
> Although I got a performance speedup when using many threads, there was
> a slowdown with a small number of threads, such as 4.
> I found that this is because getc ran much more slowly than in the
> sequential version. Does OpenMPI override or wrap the getc function?

No.

> To find the cause, I added mpi.h to the program and compiled it with
> mpicc. There are two versions of the program:
> 1. The sequential program without MPI_Init_thread and MPI_Finalize.
> 2. The sequential program with MPI_Init_thread and MPI_Finalize.
> Version 2 was 4x slower than version 1. I think it is because of slow
> getc. Does MPI_Init_thread have some relation to the getc function
> call?

Not that I'm aware of.  Perhaps something about linking in threading libraries 
changes getc...?
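
That guess is easy to check: Open MPI's wrapper compiler can show what it
links. A sketch (the output is abridged and varies by installation):

$ mpicc -showme
gcc ... -lmpi -lopen-rte -lopen-pal -ldl -lnsl -lutil -lm -lpthread

If -lpthread shows up, one plausible mechanism is that stdio's per-FILE
locking turns into real lock traffic once a threading library is linked in,
so every getc() pays a lock/unlock; timing the same loop with
getc_unlocked() would confirm or rule that out.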

-- 
Jeff Squyres
jsquy...@cisco.com
For corporate legal information go to:
http://www.cisco.com/web/about/doing_business/legal/cri/




[OMPI users] getc in openmpi

2010-05-12 Thread Hanjun Kim
Hi,

I am working on parallelizing my sequential program using OpenMPI.
Although I got a performance speedup when using many threads, there was
a slowdown with a small number of threads, such as 4.
I found that this is because getc ran much more slowly than in the
sequential version. Does OpenMPI override or wrap the getc function?

To find the cause, I added mpi.h to the program and compiled it with
mpicc. There are two versions of the program:
1. The sequential program without MPI_Init_thread and MPI_Finalize.
2. The sequential program with MPI_Init_thread and MPI_Finalize.
Version 2 was 4x slower than version 1. I think it is because of slow
getc. Does MPI_Init_thread have some relation to the getc function
call?

Thank you in advance.

Hanjun


Re: [OMPI users] Fortran issues on Windows and 1.5 Trunk version

2010-05-12 Thread Damien Hocking

Absolutely.  I'll get a package of stuff put together.

Damien

On 12/05/2010 2:24 AM, Shiqing Fan wrote:


Hi Damien,

I know there will be more problems, and your feedback is always 
helpful.  :-)


Could you please provide me a Visual Studio solution file for MUMPS? I 
would like to test it a little.



Thanks,
Shiqing

On 2010-5-12 6:11 AM, Damien wrote:

Hi all,

Me again (poor Shiqing, I know...).  I've been trying to get the 
MUMPS solver running on Windows with Open-MPI.  I can only use the 
1.5 branch because that has Fortran support on Windows and 1.4.2 
doesn't.  There's a couple of things going wrong:


First, calls to MPI_Initialized from Fortran report that MPI isn't 
initialised (MUMPS has a MPI_Initialized check).  If I call 
MPI_Initialized from C or C++, it is initialized.  I'm not sure what 
this means for MPI calls from Fortran, but it could be the cause of 
the second problem, which is:  If I bypass the MPI_Initialized check 
in MUMPS, I can get the solver to start and run in one process.  If I 
try and run 2 or more processes, all the processes ramp to 100% CPU 
in the first parallel section, and sit there with no progress.  If I 
break in with the debugger, I can usually land on some MPI_IProbe 
calls, presumably looking for receives that don't exist, possibly 
because the Fortran MPI environment really isn't initialised.  After 
many debugger break-ins, I end up in a small group of calls, so it's 
a loop waiting for something.


For reference, it was yesterday's 1.5 svn trunk, MUMPS 4.9.2, and 
Intel Math libraries, and a 32-bit build.  MUMPS is Fortran 90/95 but 
uses the F77 MPI interfaces.  It does run with MPICH2.  I realise 
that 1.5 is a dev branch, so it might just be too early for this to 
work.  I'd be grateful for suggestions though.  I can build and test 
this on Linux if that would help narrow this down.


Damien






[OMPI users] Question on virtual memory allocated

2010-05-12 Thread Olivier Riff
Hello,

My question is about the virtual memory allocated by an Open MPI program. I
am not familiar with memory management, so I would be grateful if you could
explain what I am observing when I launch my Open MPI program on several
machines.

My program runs on a server machine that communicates with 72 client
machines. When I run "top" in a Linux shell on the server machine, I
observe:

Mem:   2074468k total,   777848k used,  1296628k free,     4224k buffers
Swap:  4192924k total,       52k used,  4192872k free,   339184k cached

  PID USER      PR  NI  VIRT    RES   SHR S %CPU %MEM    TIME+ COMMAND
28211 realtime  20   0 *2104m*  158m  29m S  100  4.6  1:04.14 MyOpenMPIProgram

What I do not understand is where the 2104m of virtual memory comes from.
When I add the used memory (777848k) to the cache (339184k), the sum is far
smaller than the virtual memory (2104m). Is part of the memory allocated by
the clients counted here? Where are these 2104m of data physically
allocated? I assumed that a process cannot allocate more than 2 GB of RAM
on a 32-bit machine, which would mean that part of this 2104m lives on the
disk or somewhere else...
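
A quick way to see what makes up those 2104m, assuming a Linux box with the
procps tools (28211 is the PID from the top output above): VIRT counts
every mapping in the address space (shared libraries, thread stacks, and
memory that is reserved but never touched), not just resident RAM, so it is
normally much larger than RES.

$ grep -i '^Vm' /proc/28211/status    # VmSize ~ VIRT, VmRSS ~ RES
$ pmap -x 28211 | sort -n -k2 | tail  # biggest mappings by size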

My configuration is:
OpenMPI 1.4 on Mandriva 2008 (32bit)
Program is started using: mpirun --mca btl_tcp_eager_limit 5000 -v
-machinefile machinefile.txt MyOpenMPIProgram

Thanks in advance for any help or tips (and sorry if this is not completely
related to Open MPI).

Olivier


Re: [OMPI users] Fortran issues on Windows and 1.5 Trunk version

2010-05-12 Thread Shiqing Fan


Hi Damien,

I know there will be more problems, and your feedback is always 
helpful.  :-)


Could you please provide me a Visual Studio solution file for MUMPS? I 
would like to test it a little.



Thanks,
Shiqing

On 2010-5-12 6:11 AM, Damien wrote:

Hi all,

Me again (poor Shiqing, I know...).  I've been trying to get the MUMPS 
solver running on Windows with Open-MPI.  I can only use the 1.5 
branch because that has Fortran support on Windows and 1.4.2 doesn't.  
There's a couple of things going wrong:


First, calls to MPI_Initialized from Fortran report that MPI isn't 
initialised (MUMPS has a MPI_Initialized check).  If I call 
MPI_Initialized from C or C++, it is initialized.  I'm not sure what 
this means for MPI calls from Fortran, but it could be the cause of 
the second problem, which is:  If I bypass the MPI_Initialized check 
in MUMPS, I can get the solver to start and run in one process.  If I 
try and run 2 or more processes, all the processes ramp to 100% CPU in 
the first parallel section, and sit there with no progress.  If I 
break in with the debugger, I can usually land on some MPI_IProbe 
calls, presumably looking for receives that don't exist, possibly 
because the Fortran MPI environment really isn't initialised.  After 
many debugger break-ins, I end up in a small group of calls, so it's a 
loop waiting for something.


For reference, it was yesterday's 1.5 svn trunk, MUMPS 4.9.2, and 
Intel Math libraries, and a 32-bit build.  MUMPS is Fortran 90/95 but 
uses the F77 MPI interfaces.  It does run with MPICH2.  I realise that 
1.5 is a dev branch, so it might just be too early for this to work.  
I'd be grateful for suggestions though.  I can build and test this on 
Linux if that would help narrow this down.


Damien




--
Shiqing Fan                     http://www.hlrs.de/people/fan
High Performance Computing      Tel.: +49 711 685 87234
Center Stuttgart (HLRS)         Fax.: +49 711 685 65832
Address: Allmandring 30         email: f...@hlrs.de
70569 Stuttgart


Re: [OMPI users] Dynamic libraries in OpenMPI

2010-05-12 Thread jody
Just to be sure:
Is there a copy of  the shared library on the other host (hpcnode1) ?

jody
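
Two quick checks along those lines, as a sketch (the binary path is a
placeholder; mpirun's -x flag re-exports an environment variable to the
remote processes):

$ ssh hpcnode1 'ldd /path/to/montecarlo | grep itpp'
$ mpirun -x LD_LIBRARY_PATH -np 2 -H localhost,hpcnode1 montecarlo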

On Mon, May 10, 2010 at 5:20 PM, Prentice Bisbal  wrote:
> Are you running these jobs through a queuing system like PBS, Torque, or SGE?
>
> Prentice
>
> Miguel Ángel Vázquez wrote:
>> Hello Prentice,
>>
>> Thank you for your advice but that doesn't solve the problem.
>>
>> The non-login bash updates properly the $LD_LIBRARY_PATH value.
>>
>> Any other idea?
>>
>> Thanks,
>>
>> Miguel
>>
>> 2010/5/7 Prentice Bisbal >
>>
>>
>>
>>     Miguel Ángel Vázquez wrote:
>>     > Dear all,
>>     >
>>     > I am trying to run a C++ program which uses dynamic libraries
>>     under mpi.
>>     >
>>     > The compilation command looks like:
>>     >
>>     >  mpiCC `pkg-config --cflags itpp`  -o montecarlo  montecarlo.cpp
>>     > `pkg-config --libs itpp`
>>     >
>>     > And it works if I execute it on one machine:
>>     >
>>     > mpirun -np 2 -H localhost montecarlo
>>     >
>>     > I tested this both in the "master node" and in the "compute nodes" and
>>     > it works. However, when I try to run it with two different machines:
>>     >
>>     > mpirun -np 2 -H localhost,hpcnode1 montecarlo
>>     >
>>     > The program claims that it can't find the shared libraries:
>>     >
>>     > montecarlo: error while loading shared libraries: libitpp.so.6: cannot
>>     > open shared object file: No such file or directory
>>     >
>>     > The LD_LIBRARY_PATH is set properly at every machine, any idea
>>     where the
>>     > problem is? I attached you the config.log and the result of the
>>     omp-info
>>     > --all
>>     >
>>     > Thank you in advance,
>>     >
>>     > Miguel
>>
>>     Miguel,
>>
>>     Shells behave differently depending on whether it is an interactive
>>     login shell or a non-interactive shell. For example, the bash shell uses
>>     .bash_profile in one case, but .bashrc in the other. Check the documentation
>>     for your shell and see what files it uses in each case, and make sure
>>     the non-login config file has the necessary settings for your MPI jobs.
>>      It sounds like your login shell environment is okay, but your non-login
>>     environment isn't setup correctly. This is a common problem.
>>
>>     I use bash, and to keep it simple, my .bash_profile is just a symbolic
>>     link to .bashrc. That way, both shell types have the same environment.
>>     This isn't always a good idea, but in my case it's fine.
>>
>>     --
>>     Prentice



[OMPI users] Fortran issues on Windows and 1.5 Trunk version

2010-05-12 Thread Damien

Hi all,

Me again (poor Shiqing, I know...).  I've been trying to get the MUMPS 
solver running on Windows with Open-MPI.  I can only use the 1.5 branch 
because that has Fortran support on Windows and 1.4.2 doesn't.  There's 
a couple of things going wrong:


First, calls to MPI_Initialized from Fortran report that MPI isn't 
initialised (MUMPS has a MPI_Initialized check).  If I call 
MPI_Initialized from C or C++, it is initialized.  I'm not sure what 
this means for MPI calls from Fortran, but it could be the cause of the 
second problem, which is:  If I bypass the MPI_Initialized check in 
MUMPS, I can get the solver to start and run in one process.  If I try 
and run 2 or more processes, all the processes ramp to 100% CPU in the 
first parallel section, and sit there with no progress.  If I break in 
with the debugger, I can usually land on some MPI_IProbe calls, 
presumably looking for receives that don't exist, possibly because the 
Fortran MPI environment really isn't initialised.  After many debugger 
break-ins, I end up in a small group of calls, so it's a loop waiting 
for something.


For reference, it was yesterday's 1.5 svn trunk, MUMPS 4.9.2, and Intel 
Math libraries, and a 32-bit build.  MUMPS is Fortran 90/95 but uses the 
F77 MPI interfaces.  It does run with MPICH2.  I realise that 1.5 is a 
dev branch, so it might just be too early for this to work.  I'd be 
grateful for suggestions though.  I can build and test this on Linux if 
that would help narrow this down.


Damien