Thanks, I might end up doing that. However, I need to write the node positions in order according to their global ids, and looping through the nodes this way doesn't provide them in order. So I would need to store the node data, sort it, and then write it. That is certainly possible; I am just exploring simpler alternatives.
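
In case it helps for comparison, below is a rough, untested sketch of that store-sort-write approach built on the element loop. It collects the ghosted node coordinates in a std::map keyed by global node id, so the map ordering gives the sorted write for free. The function name and output stream are just placeholders, and the accessor names (get_node(), id(), the iterator typedefs) may need adjusting for your libMesh version.

#include <map>
#include <ostream>

#include "libmesh/mesh_base.h"
#include "libmesh/elem.h"
#include "libmesh/node.h"
#include "libmesh/point.h"

using namespace libMesh;

// Gather the ghosted node coordinates through the local element loop and
// write them in ascending global-id order.
void write_local_node_positions(const MeshBase & mesh, std::ostream & out)
{
  // std::map keeps its keys sorted, so inserting by global node id
  // produces the ordered output automatically.
  std::map<unsigned int, Point> node_positions;

  // Each local element can see all of its own nodes, including the ones
  // owned by a neighboring processor, so the shared nodes are picked up here.
  MeshBase::const_element_iterator       el     = mesh.local_elements_begin();
  const MeshBase::const_element_iterator end_el = mesh.local_elements_end();
  for (; el != end_el; ++el)
    {
      const Elem * elem = *el;
      for (unsigned int n = 0; n < elem->n_nodes(); ++n)
        {
          const Node * node = elem->get_node(n);
          node_positions[node->id()] = *node;  // a Node is a Point
        }
    }

  // Write "id x y z", one node per line, in global-id order.
  std::map<unsigned int, Point>::const_iterator it = node_positions.begin();
  for (; it != node_positions.end(); ++it)
    out << it->first << " "
        << it->second(0) << " " << it->second(1) << " " << it->second(2)
        << "\n";
}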
Thanks,
Andrew

On Sat, Nov 24, 2012 at 11:44 AM, Cody Permann <[email protected]> wrote:

> On Sat, Nov 24, 2012 at 8:34 AM, Andrew E Slaughter <[email protected]> wrote:
>
>> Roy, I need to loop over the local nodes as well as the elements. When I
>> did this with local_nodes_begin() and local_nodes_end() it works, except
>> that the nodes did not repeat for elements. For example, if element 1 is
>> on processor 1, that processor may contain all the nodes for the element.
>> If element 2 is on another processor that shares nodes with element 1,
>> only the non-shared nodes are on processor 2. Also, the local node
>> connectivity that should be available from the pid_mesh would make
>> writing the local component easier, although this is not necessary as I
>> already wrote code to localize the node connectivity. Is there a method
>> to get the shared nodes ghosted across the processors? If so, that would
>> be ideal. I was just looking for the simplest solution for writing the
>> files at the moment.
>
> When writing out the nodal information, instead of looping over local
> nodes, you may try looping over the local elements to obtain the ghosted
> node information. For each element, you can loop over the nodes on that
> element. So in your scenario above, processor 1 will have access to ALL
> the nodes on element 1 and processor 2 will have access to ALL the nodes
> on element 2, including the nodes that are owned by processor 1.
>
> Hope that helps,
> Cody
>
>> Thanks,
>> Andrew
>>
>> On Tue, Nov 20, 2012 at 5:04 PM, Roy Stogner <[email protected]> wrote:
>>
>>> On Tue, 20 Nov 2012, Andrew E Slaughter wrote:
>>>
>>>> I think I might have solved my problem by disabling the partitioning
>>>> on the pid_mesh (pid_mesh.skip_partitioning(true)). I still need to do
>>>> some testing.
>>>
>>> Why create the pid_mesh in the first place? If you want each
>>> processor to work on its local elements, just loop from
>>> local_elements_begin() to local_elements_end().
>>> ---
>>> Roy

--
Andrew E. Slaughter, PhD
[email protected]

Materials Process Design and Control Laboratory
Sibley School of Mechanical and Aerospace Engineering
169 Frank H. T. Rhodes Hall
Cornell University
Ithaca, NY 14853-3801
(607) 229-1829
http://aeslaughter.github.com/
