Hi,

I understand that, as mentioned in the FAQ, the scaling is not linear due to memory limitations. So I am trying to write a proposal to use a supercomputer.

Its specs are:

Compute nodes: 82,944 (SPARC64 VIIIfx; 16 GB of memory per node)

8 cores / processor

Interconnect: Tofu (6-dimensional mesh/torus)

Each cabinet contains 96 compute nodes.

One of the requirements is to report the performance of my current code with my current data set, and there is a formula to calculate the estimated parallel efficiency when using the new, larger data set.

There are two ways to report performance:
1. Strong scaling, which is defined as how the elapsed time varies with the number of processors for a fixed problem size.
2. Weak scaling, which is defined as how the elapsed time varies with the number of processors for a fixed problem size per processor.

I ran my cases on 48 and 96 cores of my current cluster, taking 140 and 90 minutes respectively. This is classified as strong scaling.
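To make that concrete, here is the back-of-the-envelope arithmetic for those two runs (a quick Python check; taking the 48-core run as the baseline is my own choice, since I have no single-core timing):

    # Relative strong-scaling figures from the two measured runs.
    t48, t96 = 140.0, 90.0             # elapsed minutes on 48 and 96 cores
    speedup = t48 / t96                # ~1.56 (ideal would be 2.0)
    rel_eff = speedup / (96 / 48)      # ~0.78, i.e. ~78% going from 48 to 96 cores
    print(speedup, rel_eff)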

Cluster specs:

CPU: AMD 6234, 2.4 GHz

8 cores / processor (CPU)

6 CPUs / node

So 48 cores / node

Not sure about the memory per node.


The parallel efficiency 'En' for a given degree of parallelism 'n' indicates how efficiently the program is accelerated by parallel processing. 'En' is given by a formula in the proposal guidelines; although the derivations differ for strong and weak scaling, the derived formula is the same.

From the estimated times, my parallel efficiency on the current (old) cluster, computed using Amdahl's law, was 52.7%.
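In case anyone wants to check my arithmetic, here is a minimal sketch of the fit as I understand it: solve Amdahl's law T(n) = T1*(s + (1 - s)/n) for the serial fraction s and the serial time T1 from the two timings, then evaluate En = T1/(n*T(n)) = 1/(n*s + 1 - s). The notation, and using En = T1/(n*Tn) as the definition, are my assumptions, not necessarily the proposal's exact formula:

    # Fit Amdahl's law T(n) = T1*(s + (1 - s)/n) to the two measurements
    # T(48) = 140 min and T(96) = 90 min (variable names are mine).
    n1, n2 = 48, 96
    tn1, tn2 = 140.0, 90.0
    # Subtracting the two equations eliminates the serial term:
    t1_par = (tn1 - tn2) / (1.0 / n1 - 1.0 / n2)   # T1*(1 - s) = 4800
    t1_ser = tn2 - t1_par / n2                     # T1*s       = 40
    t1 = t1_par + t1_ser                           # estimated serial time, ~4840 min
    s = t1_ser / t1                                # serial fraction, ~0.83%

    def efficiency(n):
        # En = T1 / (n * T(n)) = 1 / (n*s + 1 - s)
        return 1.0 / (n * s + 1.0 - s)

    print(efficiency(96))    # ~0.56, the same ballpark as the 52.7% above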

So, are my results acceptable?

For the large data set, using 2205 nodes (2205 x 8 = 17,640 cores), my expected parallel efficiency is only 0.5%. The proposal recommends a value of > 50%.

Is this kind of scaling (> 50%) possible with PETSc when using 17,640 (2205 x 8) cores?
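For context, extending the same fit to the proposed core count (assuming, perhaps wrongly, that the serial fraction measured on the small data set carries over to the large one):

    # Reusing s and efficiency() from the sketch above:
    print(efficiency(17640))     # ~0.0068, i.e. under 1%
    # For En > 0.5 one needs n*s + (1 - s) < 2, i.e. s < 1/(n - 1),
    # so at n = 17640 the serial fraction would have to be below ~0.006%.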

Btw, I do not have access to the system.



