[OMPI users] HugeTLB messages from mpi code

2014-07-01 Thread Brock Palen
We are getting the following on our RHEL6 cluster using openmpi 1.8.1 with meep
http://ab-initio.mit.edu/wiki/index.php/Meep

WARNING: at fs/hugetlbfs/inode.c:940 hugetlb_file_setup+0x227/0x250() (Tainted: 
P   ---   )
Hardware name: C6100   
Using mlock ulimits for SHM_HUGETLB deprecated
Modules linked in: rdma_ucm(U) openafs(P)(U) autofs4 mgc(U) lustre(U) lov(U) 
mdc(U) lquota(U) osc(U) ksocklnd(U) ko2iblnd(U) rdma_cm(U) iw_cm(U) ib_addr(U) 
ptlrpc(U) obdclass(U) lnet(U) lvfs(U) libcfs(U) nfs lockd fscache auth_rpcgss 
nfs_acl sunrpc acpi_cpufreq freq_table mperf ipt_REJECT nf_conntrack_ipv4 
nf_defrag_ipv4 xt_state nf_conntrack xt_multiport iptable_filter ip_tables 
ip6_tables ib_ipoib(U) ib_cm(U) ipv6 ib_uverbs(U) ib_umad(U) iw_nes(U) 
libcrc32c cxgb3 mdio mlx4_vnic(U) mlx4_vnic_helper(U) ib_sa(U) mlx4_ib(U) 
mlx4_en(U) mlx4_core(U) ib_mthca(U) ib_mad(U) ib_core(U) mic(U) vhost_net 
macvtap macvlan tun kvm ipmi_devintf igb ptp pps_core dcdbas microcode i2c_i801 
i2c_core sg iTCO_wdt iTCO_vendor_support ioatdma dca i7core_edac edac_core 
shpchp ext4 jbd2 mbcache sd_mod crc_t10dif ahci dm_mirror dm_region_hash dm_log 
dm_mod [last unloaded: scsi_wait_scan]
Pid: 14367, comm: meep-mpi Tainted: P   ---
2.6.32-358.23.2.el6.x86_64 #1
Call Trace:
 [] ? warn_slowpath_common+0x87/0xc0
 [] ? warn_slowpath_fmt+0x46/0x50
 [] ? user_shm_lock+0x9c/0xc0
 [] ? hugetlb_file_setup+0x227/0x250
 [] ? sprintf+0x40/0x50
 [] ? newseg+0x152/0x290
 [] ? ipcget+0x61/0x200
 [] ? remove_vma+0x6e/0x90
 [] ? sys_shmget+0x59/0x60
 [] ? newseg+0x0/0x290
 [] ? shm_security+0x0/0x10
 [] ? shm_more_checks+0x0/0x20
 [] ? system_call_fastpath+0x16/0x1b
---[ end trace 375c130ede6f14a0 ]---


Doing some googling, it looks like this could be hurting our performance, but I'm 
not sure what to do about it. There is nothing about it on this list, but there was one 
reference to another MPI library. Does anyone have an idea what would cause this?
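
For context, the trace above goes through sys_shmget -> newseg ->
hugetlb_file_setup -> user_shm_lock, i.e. a System V shared-memory segment
created with the SHM_HUGETLB flag. Below is a minimal standalone sketch of that
call path (not Open MPI's actual shared-memory code); it assumes 2 MiB huge
pages and that a few huge pages have been reserved via vm.nr_hugepages:

#include <stdio.h>
#include <sys/ipc.h>
#include <sys/shm.h>

#ifndef SHM_HUGETLB
#define SHM_HUGETLB 04000   /* value from linux/shm.h, in case libc headers lack it */
#endif

int main(void)
{
    size_t len = 2 * 1024 * 1024;   /* one 2 MiB huge page */

    /* shmget(SHM_HUGETLB) is the path seen in the kernel trace:
       sys_shmget -> newseg -> hugetlb_file_setup -> user_shm_lock */
    int id = shmget(IPC_PRIVATE, len, SHM_HUGETLB | IPC_CREAT | SHM_R | SHM_W);
    if (id < 0) {
        perror("shmget(SHM_HUGETLB)");   /* e.g. ENOMEM if no huge pages are reserved */
        return 1;
    }

    void *p = shmat(id, NULL, 0);       /* attach the huge-page-backed segment */
    if (p == (void *) -1)
        perror("shmat");
    else
        shmdt(p);

    shmctl(id, IPC_RMID, NULL);         /* mark the segment for removal */
    return 0;
}

On a node with no huge pages reserved, the shmget call above typically fails
with ENOMEM, which is one quick way to see whether huge-page-backed shared
memory is actually being granted.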


Brock Palen
www.umich.edu/~brockp
CAEN Advanced Computing
XSEDE Campus Champion
bro...@umich.edu
(734)936-1985







Re: [OMPI users] Missing -enable-crdebug option in configure step

2014-07-01 Thread Josh Hursey
The C/R Debugging feature (the ability to do reversible debugging or
backward stepping with gdb and/or DDT) was added on 8/10/2010 in the commit
below:
  https://svn.open-mpi.org/trac/ompi/changeset/23587

This feature never made it into a release, so it was only ever available on
the trunk. However, since that time the C/R functionality has fallen into
disrepair, and it is most likely broken in the trunk today.

There is an effort to bring back the checkpoint/restart functionality in
the Open MPI trunk. Once that is stable, we may revisit this feature if
there is time and interest.

-- Josh



On Mon, Jun 30, 2014 at 8:35 AM, Ralph Castain  wrote:

> I don't recall ever seeing such an option in Open MPI - what makes you
> believe it should exist?
>
> On Jun 29, 2014, at 9:25 PM, Đỗ Mai Anh Tú  wrote:
>
> Hi all,
>
> I am trying to run the checkpoint/restart-enabled debugging code in Open
> MPI. This requires configuring with the following option at the setup step:
>
> ./configure --with-ft=cr --enable-crdebug
>
> But no matter which version of Open MPI I try, I cannot find any option named
> --enable-crdebug (I have tried every version from 1.5 to the newest one,
> 1.8.1). Could anyone help me figure out this problem? Has this option been
> removed, or has it been replaced by another one?
>
> I appreciate all your helps and thanks all.
>
> --
> Đỗ Mai Anh Tú - Student ID 51104066
> Department of Computer Engineering
> Faculty of Computer Science and Engineering
> HCMC University of Technology.
> Viet Nam National University



-- 
Joshua Hursey
Assistant Professor of Computer Science
University of Wisconsin-La Crosse
http://cs.uwlax.edu/~jjhursey


[OMPI users] EuroMPI/Asia 2014 Call for Participation

2014-07-01 Thread George Bosilca

EuroMPI/ASIA 2014 Call for participation

EuroMPI/ASIA 2014, held in cooperation with ACM SIGHPC, takes place in
Kyoto, Japan, September 9-12, 2014. It is the prime annual meeting for
researchers, developers, and students in message-passing parallel
computing with MPI and related paradigms.

*** Deadline of early registration is July 31, 2014 ***
www.eurompi2014.org

The conference will feature 19 strong technical paper presentations,
3 invited talks, 2 tutorials, 2 workshops, posters, and
exhibitions. The list of accepted papers is included in this CFP.
Detailed conference information is being incrementally updated at
www.eurompi2014.org; please check the site for the latest
information.

TUTORIALS
--- 
- Advanced MPI: New Features of MPI-3
Torsten Hoefler, ETH, Switzerland
- Practical Parallel Application Performance Engineering
Marc-Andre Hermanns, German Research School for Simulation Sciences, Germany
Allen Malony, University of Oregon
Matthias Weber, TU Dresden, Germany

INVITED TALKS
--- 
- Enabling Scientific Discovery Through High Performance Computing and Extreme 
Data Science with the Cori System
Katie Antypas, National Energy Research Scientific Computing Center (NERSC), USA
- Large Scale System Design and Application for Tianhe-2
Yutong Lu, National University of Defense Technology (NUDT), China
- NESUS: Looking for sustainability in ultrascale computing systems
Jesus Carretero, Universidad Carlos III de Madrid, Spain

WORKSHOPS
--- 
- Challenges in Data-Centric Computing
https://www.hlrs.de/index.php?id=2032
- ESAA, Workshop on Enhancing Parallel Scientific Applications with
Accelerated HPC
http://www.arcos.inf.uc3m.es/~esaa2014/

IMPORTANT DATES
--- 
- Early Registration Deadline: July 31, 2014
- Visa support Deadline: August 8, 2014
- Pre-Registration Deadline: August 25, 2014 
- Tutorials: September 9th, 2014 
- Conference: September 10th-12th, 2014


ACCEPTED PAPERS
--- 
- MPI Collectives and Datatypes for hierarchical All-to-all Communication
Jesper Larsson Traff and Antoine Rougier
- Optimal MPI datatype normalization for vector and index-block types
Jesper Larsson Traff
- Zero-copy, hierarchical Gather is not possible with MPI Datatypes and 
Collectives
Jesper Larsson Träff and Antoine Rougier
- GPU-Aware Intranode MPI_Allreduce
Iman Faraji and Ahmad Afsahi
- Toward Local Failure Local Recovery Resilience Model using MPI-ULFM
Keita Teranishi and Michael Heroux
- Evaluating User-Level Fault Tolerance for MPI Applications
Ignacio Laguna, David F. Richards, Todd Gamblin, Martin Schulz and Bronis R. de 
Supinski
- Comparing, Contrasting, Generalizing, and Integrating Two Current Designs for 
Fault-Tolerant MPI
Amin Hassani, Anthony Skjellum and Ron Brightwell
- Implementing the MPI-3.0 Fortran 2008 Binding
Junchao Zhang, Bill Long, Kenneth Raffenetti and Pavan Balaji
- Scalable MPI3 RMA on the Blue Gene/Q Supercomputer
Sameer Kumar and Michael Blocksome
- Intra-Epoch Message Scheduling To Exploit Unused or Residual Overlapping 
Potential
Judicael A. Zounmevo and Ahmad Afsahi
- PMI Extensions for Scalable MPI Startup
Sourav Chakraborty, Hari Subramoni, Jonathan Perkins, Adam Moody, Mark Arnold 
and Dhabaleswar Panda
- Exploring the Capabilities of the New MPI_T Interface
Tanzima Islam, Kathryn Mohror and Martin Schulz
- Understanding the Memory-Utilization of MPI Libraries: Challenges and Designs 
in Implementing the MPI_T Interface
Raghunath Rajachandrasekar, Jonathan Perkins, Khaled Hamidouche, Mark Arnold 
and Dhabaleswar K. Panda
- Catching Idlers with Ease: A Lightweight Wait-State Profiler for MPI Programs
Guoyong Mao, David Boehme, Marc-Andre Hermanns, Markus Geimer, Daniel Lorenz 
and Felix Wolf
- Distributed Behavioral Cartography of Timed Automata
Etienne Andre, Camille Coti and Sami Evangelista
- Reproducible MPI Micro-Benchmarking Isn't As Easy As You Think
Sascha Hunold, Alexandra Carpen-Amarie and Jesper Larsson Träff
- Exploring the effect of noise on the performance benefit of non-blocking 
MPI_Allreduce
Patrick Widener, Kurt Ferreira and Scott Levy
- A Portable Petascale Framework for Efficient Particle Methods with Custom 
Interactions
Andreas Schafer and Dietmar Fey