On 3 Jul 2015, at 22:38, Erik Schnetter <[email protected]> wrote:

> I ran the Simfactory benchmark for ML_BSSN on both the current version and 
> the "rewrite" branch to see whether this branch is ready for production use. 
> I ran this benchmark on a single node of Shelob at LSU. In both cases, using 
> 2 OpenMP threads and 8 MPI processes per node was fastest, so I am reporting 
> these results below. Since I was interested in the performance of McLachlan, 
> this is a unigrid vacuum benchmark using fourth order differencing.
> 
> One noteworthy difference is that dissipation as implemented in the "rewrite" 
> branch is finally approximately as fast as thorn Dissipation, and I have thus 
> used this option for the "rewrite" branch.
> 
> Here are the high-level results:
> 
> current: 3.03136e-06 sec per grid point
> rewrite: 2.85734e-06 sec per grid point
> 
> That is, the rewrite branch is about 5% faster.
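For reference, those per-grid-point times work out to roughly a 5.7% reduction in runtime, i.e. about a 6.1% throughput gain. A quick check of the arithmetic in Python (numbers taken from the quoted benchmark):

```python
# Sanity-check the reported speedup from the quoted per-grid-point times.
current = 3.03136e-06  # sec per grid point, current version
rewrite = 2.85734e-06  # sec per grid point, "rewrite" branch

reduction = 1.0 - rewrite / current   # fractional reduction in time per point
speedup = current / rewrite - 1.0     # equivalent gain in points per second

print(f"time reduced by {reduction:.1%}, throughput up {speedup:.1%}")
```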

Hi Erik,

That is very reassuring!  However, for production use, I would be more 
interested in 6th or 8th order finite differencing (where the advection 
stencils become very large), and with Jacobians.  If 8th order with Jacobians 
is at least as fast on the rewrite branch, then I would be happy to 
switch.
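To illustrate why the stencils grow with order: a centered first-derivative stencil of accuracy 2n spans 2n+1 points, so 4th order uses 5 points but 8th order uses 9, and the one-sided upwinded advection stencils are shifted and can require even more ghost points. A small sketch of the standard Taylor-matching construction (generic finite-difference math, not McLachlan's actual generated code):

```python
import numpy as np

def centered_first_derivative_coeffs(order):
    """Coefficients c_j such that f'(0) ~ (1/h) * sum_j c_j f(j*h),
    on a centered stencil of accuracy `order` (even): 2*(order//2)+1 points."""
    n = order // 2
    offsets = np.arange(-n, n + 1, dtype=float)
    # Taylor matching: require sum_j c_j * j**k == 1 for k=1, else 0.
    A = np.vander(offsets, increasing=True).T  # A[k, j] = offsets[j]**k
    b = np.zeros(2 * n + 1)
    b[1] = 1.0  # pick out the first derivative, annihilate other moments
    return np.linalg.solve(A, b)

for order in (4, 6, 8):
    c = centered_first_derivative_coeffs(order)
    print(f"order {order}: {len(c)}-point stencil", np.round(c, 4))
```

At 4th order this reproduces the familiar coefficients (1/12, -2/3, 0, 2/3, -1/12); the 8th-order stencil already touches 9 points per direction, which is what drives the cost (and the Jacobian overhead for multipatch) at high order.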

-- 
Ian Hinder
http://members.aei.mpg.de/ianhin

_______________________________________________
Users mailing list
[email protected]
http://lists.einsteintoolkit.org/mailman/listinfo/users