Hi Derek,

Iterating over elements is NOT what I want to do. I want to operate on the i-th element,
given that I know the index i. I cannot use Mesh.elem(i) because it is unknown which
processor owns element i.

My current solution is to iterate over all the local elements on each processor in order
to find the i-th element, which is cumbersome. I am asking whether libMesh provides a
function for this purpose.
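
For reference, here is roughly the workaround I described above. It is only a sketch:
the function name find_local_elem and the argument target_id are placeholder names, and
only the processor that owns the element returns a non-NULL pointer.

#include "libmesh/parallel_mesh.h"
#include "libmesh/mesh_base.h"
#include "libmesh/elem.h"

using namespace libMesh;

// Search this processor's active local elements for the one whose id matches
// target_id.  Non-owning processors fall through the loop and return NULL.
const Elem * find_local_elem (const ParallelMesh & mesh,
                              const dof_id_type target_id)
{
  MeshBase::const_element_iterator       el  = mesh.active_local_elements_begin();
  const MeshBase::const_element_iterator end = mesh.active_local_elements_end();

  for (; el != end; ++el)
    if ((*el)->id() == target_id)
      return *el;

  return NULL;
}

Ideally I would replace this loop with a single call such as mesh.query_elem(i), if that
is indeed its intended use.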

Thanks,
Dafang

On 9/1/2015 6:48 PM, Derek Gaston wrote:
> What are you actually trying to do?
>
> I figured you were iterating over elements and then needing to know if the
> element you are currently on is owned by the local processor or not...
>
> Derek
>
> On Tue, Sep 1, 2015 at 6:24 PM Dafang Wang <dafang.w...@jhu.edu> wrote:
>
>     Hi Derek,
>
>     Thanks for your quick response. What if the elem pointer is not available, i.e.,
>     how can I obtain the i-th element in a parallel mesh without iterating over the
>     mesh? Shall I use mesh.query_elem()?
>
>     Cheers,
>     Dafang
>
>     On 9/1/2015 5:17 PM, Derek Gaston wrote:
>     > if (elem->processor_id() == mesh.processor_id())
>     >
>     > On Tue, Sep 1, 2015 at 5:12 PM Dafang Wang <dafang.w...@jhu.edu> wrote:
>     >
>     >     Hi,
>     >
>     >     I am wondering how to query whether a processor owns a given element in a
>     >     parallel mesh, assuming that the finite-element mesh is partitioned across
>     >     multiple processors. One way I can think of is to use an iterator over
>     >     ParallelMesh::active_local_elements() on each processor, but this way is
>     >     quite cumbersome.
>     >
>     >     Apparently the function ParallelMesh::query_element(const dof_id_type elementID)
>     >     seems to do the job, but it didn't work when I ran in parallel.
>     >
>     >     Any suggestions will be greatly appreciated. Thanks!
>     >
>     >     Cheers,
>     >     Dafang
>

-- 
Dafang Wang, Ph.D.
Postdoctoral Fellow
Institute of Computational Medicine
Department of Biomedical Engineering
Hackerman Hall 218
Johns Hopkins University, Baltimore, 21218