Chenchen,

All the work you do inside the multithreaded block is guarded by a mutex, which means only a single thread can execute that code at a time. Of course it won't be faster than the single-threaded version: the lock serializes the entire loop, so the threads simply take turns. You only need a lock around writes to genuinely shared data; here each thread writes to the history of its own cells, so the whole-function lock serves no purpose.
On Wed, May 18, 2016 at 10:36 PM, Chenchen Liu <[email protected]> wrote:
> Dear all,
>
> I am doing the multiscale finite element method (FE^2) using deal.II. That
> is, we have a mesh at the macro level, and at each quadrature point we
> attach a representative volume element (RVE) as a micro mesh. Since each
> RVE problem is independent, I want to use parallel computation for the RVE
> problems.
>
> Let's simplify the problem: we have a 1-D macro mesh, and each element
> (interval) has 2 quadrature points. At each quadrature point we generate a
> new Vector, i.e., solution_x. Then I want to store this Vector by writing
> it into local_quadrature_points_history.
>
> I have written the following code, expecting that it divides the elements
> into 4 sub-ranges and writes the new Vector into
> local_quadrature_points_history in parallel. But it turns out that it
> does not save any computational time. Can anyone give me any suggestions?
> Thank you!
>
> void Step3::Write_problem_on_cell_range (DoFHandler<1>::cell_iterator &range_begin,
>                                          DoFHandler<1>::cell_iterator &range_end)
> {
>   const unsigned int n_q_points = quadrature_formula_MACRO.size();
>   static Threads::Mutex mutex;
>   Threads::Mutex::ScopedLock lock (mutex);
>
>   DoFHandler<1>::cell_iterator cell_MACRO = range_begin, endc_MACRO = range_end;
>   for (; cell_MACRO != endc_MACRO; ++cell_MACRO)
>     {
>       // build history
>       PointHistory<1> *local_quadrature_points_history_MACRO
>         = reinterpret_cast<PointHistory<1> *> (cell_MACRO->user_pointer());
>
>       // BEGIN RVE problem
>       for (unsigned int q_index = 0; q_index < n_q_points; ++q_index)
>         {
>           solution_x_RVE[0] = 1; solution_x_RVE[1] = 2;
>           // Write
>           local_quadrature_points_history_MACRO[q_index].solution_x_RVE_h = solution_x_RVE;
>         }
>     }
> }
>
> void Step3::Write_problem ()
> {
>   static unsigned int n_virtual_cores = MultithreadInfo::n_threads();
>   Threads::ThreadGroup<void> threads;
>
>   std::vector<std::pair<DoFHandler<1>::cell_iterator, DoFHandler<1>::cell_iterator> >
>     sub_ranges = Threads::split_range<DoFHandler<1>::cell_iterator> (dof_handler_MACRO.begin_active(),
>                                                                     dof_handler_MACRO.end(),
>                                                                     n_virtual_cores);
>
>   for (unsigned int t = 0; t < n_virtual_cores; ++t)
>     threads += Threads::new_thread (&Step3::Write_problem_on_cell_range, *this,
>                                     sub_ranges[t].first, sub_ranges[t].second);
>   threads.join_all();
> }
>
> Best,
> Chenchen
>
> --
> The deal.II project is located at http://www.dealii.org/
> For mailing list/forum options, see
> https://groups.google.com/d/forum/dealii?hl=en
> ---
> You received this message because you are subscribed to the Google Groups
> "deal.II User Group" group.
> To unsubscribe from this group and stop receiving emails from it, send an
> email to [email protected].
> For more options, visit https://groups.google.com/d/optout.

--
Timo Heister
http://www.math.clemson.edu/~heister/
