>> DistributedVector<Real> vec(n_global, n_local, array_of_remote_indices);
>>
>> Where "vec" is a vector which stores n_local entries, and also has
>> "ghost storage" for array_of_remote_indices.size() entries.
>
> We'd also need an "unsigned int first_local_index" argument for the
> offset of n_local, right?  In any case that sounds like an excellent
> idea.
Is there ever a case where you actually need it?  I was thinking it can
always be computed from the partial sum of n_local over all the
processors ranked below you.  Similarly, n_global really would not be
used except to assert that it equals the sum of n_local.  (Rough sketch
at the end of this mail.)

...

> While you're explaining the fundamentals of our parallel vector
> structures to me, could you explain the VecAssemblyBegin /
> VecAssemblyEnd pair in PetscVector::close?  That's what does all the
> communication required by VecSetValues, right?  What's the reason for
> the dual API call; does the communication get started asynchronously
> by Begin() and then End() blocks waiting for completion?

That is absolutely the case.  I fretted for a while about exposing the
two phases separately, but I feared there would be a million places
where you could get out of sync.  For example, you call
vec.close_start() at the end of your matrix assembly routine, then
vec.close_finish() just before the linear solver?  What computation
would you actually want to slip in between those two?  I figured we'd
be setting ourselves up for a most frequently asked question, and
keeping the pair together inside close() doesn't seem to have hurt
anything to date...

-Ben
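P.S.  To make the offset computation concrete, here is roughly what I
have in mind, written against raw MPI.  This is just a sketch, not
anything in the tree; the function name and the choice of MPI_Exscan
are mine.

  // Sketch only: compute the offset of our first entry from the
  // per-processor sizes, and use n_global purely as a sanity check.
  #include <mpi.h>
  #include <cassert>

  unsigned int compute_first_local_index (unsigned int n_local,
                                          unsigned int n_global,
                                          MPI_Comm comm)
  {
    // Exclusive prefix sum: the sum of n_local over all lower-ranked
    // processors is exactly the index of our first local entry.
    unsigned int first_local_index = 0;
    MPI_Exscan (&n_local, &first_local_index, 1,
                MPI_UNSIGNED, MPI_SUM, comm);

    // MPI_Exscan leaves the result undefined on rank 0, so force it.
    int rank;
    MPI_Comm_rank (comm, &rank);
    if (rank == 0)
      first_local_index = 0;

    // n_global is redundant; only use it to assert that the local
    // sizes really do add up.
    unsigned int sum = 0;
    MPI_Allreduce (&n_local, &sum, 1, MPI_UNSIGNED, MPI_SUM, comm);
    assert (sum == n_global);

    return first_local_index;
  }

With something like that the constructor would not need an explicit
first_local_index argument at all.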
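P.P.S.  And for completeness, the pairing in close() boils down to the
following.  This is a simplified sketch, not the actual
PetscVector::close source, and the error handling is abbreviated.

  // Sketch only: split-phase assembly of values stashed by
  // VecSetValues().
  #include <petscvec.h>

  PetscErrorCode close (Vec vec)
  {
    PetscErrorCode ierr;

    // Begin() starts sending any off-processor values stashed by
    // VecSetValues(); it returns without waiting for completion.
    ierr = VecAssemblyBegin (vec); CHKERRQ(ierr);

    // End() blocks until those values have arrived and been inserted
    // (or added) on their owning processors.  A close_start() /
    // close_finish() split would put these two calls in separate
    // methods; keeping them back-to-back means nobody can forget the
    // second half.
    ierr = VecAssemblyEnd (vec); CHKERRQ(ierr);

    return 0;
  }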