Hi,

> On May 15, 2017, at 3:46 PM, Cody Permann <[email protected]> wrote:
> 
> 
> 
> On Mon, May 15, 2017 at 1:21 PM Ata ollah Mesgarnejad 
> <[email protected] <mailto:[email protected]>> wrote:
> Hello again,
> 
> I just got back to working on the application that needs big meshes. I
> tried to split my mesh using the splitter on my work computer and use it in
> a big job. Unfortunately, mesh.read fails on the cluster, for example on 20
> CPUs, with:
> 
> Attempted to utilize a checkpoint file on 20 processors but it was written
> using 0!!
> [1] ../../src/mesh/checkpoint_io.C, line 561, compiled Dec  5 2016 at
> 14:23:30
> 
> 
> I've received that exact same error, as have several other users who have
> tried to use this capability. You might try the latest version to see if it
> has been fixed. I can tell you that it's on our radar and will be fixed soon.

OK. I’ll pull and rebuild libMesh.

>  
> Is there something I'm doing wrong here?
> 
> Probably not, however if you get a chance to try the latest version of 
> libMesh, let us know if you still have the same problem. What system are you 
> working on?

I was trying it on our local cluster, which runs CentOS 6.5. If it works, I’ll do 
my production runs on LSU’s SuperMIC through XSEDE.

Ata

>  
> 
> PS: I'm using a slightly older libMesh, from commit
> cbab3c34bb5e988339e1bb4aa7cf4828362eb494.
> 
> Best,
> 
> Ata
> 
> 
> 
> 
> 
> 
> On Wed, Jan 11, 2017 at 6:37 PM, Ata Mesgarnejad <[email protected] 
> <mailto:[email protected]>>
> wrote:
> 
> > Thank you Derek,
> >
> > This is really helpful.
> >
> > Ata
> >
> > On Jan 11, 2017, at 6:15 PM, Derek Gaston <[email protected] 
> > <mailto:[email protected]>> wrote:
> >
> > splitter also has the ability to do M->N partitioning.  What this means is
> > that splitter itself can be run using MPI... and can generate partitions
> > for any number of processors.  In fact, you can give it a list of n_procs
> > you want it to partition for and it will do all of them simultaneously.
> >
> > If you have one handy I highly recommend running splitter on a cluster
> > that has a good parallel filesystem.
> >
> > BTW: I've used splitter to split huge multi-GB Exodus files up into
> > ~20,000 or so partitions and it's been pretty solid.
> >
> > Derek
> >
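For reference, the M->N split Derek describes above can be sketched in libMesh code. The class and accessor names below (CheckpointIO, binary(), parallel(), Partitioner::partition) are my reading of the libMesh headers, and "big_mesh.e" / the target count of 20 are made-up stand-ins, so treat this as an outline rather than a tested program:

```cpp
// Sketch: read an unsplit mesh, partition it for N target processors, and
// write one checkpoint chunk per processor. API names are assumptions from
// the libMesh headers, not verified against the version discussed here.
#include "libmesh/libmesh.h"
#include "libmesh/replicated_mesh.h"
#include "libmesh/checkpoint_io.h"

using namespace libMesh;

int main (int argc, char ** argv)
{
  LibMeshInit init (argc, argv);

  // Read the unsplit mesh. Running this step itself under MPI is what
  // lets the splitting work be distributed, as Derek describes.
  ReplicatedMesh mesh (init.comm());
  mesh.read ("big_mesh.e");

  // Partition for the target processor count (here 20, as in the error
  // message above), then write per-processor chunks into big_mesh.cpr.
  mesh.partitioner()->partition (mesh, 20);

  CheckpointIO cpr (mesh);
  cpr.binary() = true;     // binary .cpr files; ASCII is also supported
  cpr.parallel() = true;   // one chunk per target processor
  cpr.write ("big_mesh.cpr");

  return 0;
}
```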
> > On Wed, Jan 11, 2017 at 4:18 PM gmail <[email protected] 
> > <mailto:[email protected]>> wrote:
> >
> >> Perfect. I will give it a try.
> >>
> >> Ata
> >> > On Jan 11, 2017, at 4:14 PM, John Peterson <[email protected] 
> >> > <mailto:[email protected]>>
> >> wrote:
> >> >
> >> >
> >> >
> >> > On Wed, Jan 11, 2017 at 2:04 PM, gmail <[email protected] 
> >> > <mailto:[email protected]> <
> >> mailto:[email protected] <mailto:[email protected]>>> wrote:
> >> > Hi John,
> >> >
> >> > This is the first time I've seen this, so I'm not sure what this .cpr
> >> format is.
> >> >
> >> > It's the CheckpointIO format.  It can write either binary or ASCII
> >> files and is similar to the xdr/xda format that libMesh uses to write
> >> meshes in serial.  The only thing it can't really do is restart from N
> >> files on M procs, but that shouldn't affect your use case.
> >> >
> >> >
> >> > Can each processor read its own chunk with mesh.read when libMesh
> >> is compiled with parallel mesh?
> >> >
> >> >
> >> > Yes, that's how it should work.  You can use one of the Partitioner
> >> classes that come with libMesh, or partition the mesh yourself in some
> >> application-specific way.
> >> >
> >> > --
> >> > John
> >>
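The read side John describes can be sketched as follows. As above, the names here (DistributedMesh, the .cpr dispatch in mesh.read) are assumptions from the libMesh API, and "big_mesh.cpr" is a hypothetical path, so this is a sketch rather than a verified program:

```cpp
// Sketch: with a distributed (parallel) mesh, each processor reads only
// its own pre-split chunk. Names are assumed from the libMesh API.
#include "libmesh/libmesh.h"
#include "libmesh/distributed_mesh.h"

using namespace libMesh;

int main (int argc, char ** argv)
{
  LibMeshInit init (argc, argv);

  // DistributedMesh keeps only local elements on each rank. mesh.read()
  // dispatches on the .cpr suffix to the CheckpointIO reader, and each of
  // the N ranks pulls in the chunk the splitter wrote for it.
  DistributedMesh mesh (init.comm());
  mesh.read ("big_mesh.cpr");

  return 0;
}
```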
> >> _______________________________________________
> >> Libmesh-users mailing list
> >> [email protected] 
> >> <mailto:[email protected]>
> >> https://lists.sourceforge.net/lists/listinfo/libmesh-users 
> >> <https://lists.sourceforge.net/lists/listinfo/libmesh-users>
> >
> >
> >
> 
> 
> --
> Best,
> Ata
