Ok, so:

Jed,
I've installed PETSc through the Ubuntu Software Center, so there is no
Makefile in my PETSC_DIR (or I am looking in the wrong directory), which is
why I can't run the "make streams" command.

John,
I think my code is running in parallel, because when I put a "cout"
statement in and run with 4 processors, each one prints just a fourth of the
full expected output. Also, when I use an fstream file to print some strain
values, the output is right only when I use 1 processor (with more than one
it becomes a mess, with spaces and line breaks not structured the way the
code predicts).

This is part of the -log_summary output:

---------------------------------------------- PETSc Performance Summary: ----------------------------------------------

./result-opt on a linux-gnu-c-opt named rodrigo-HP-Pavilion-g6-Notebook-PC with 4 processors, by rodrigo Fri Sep 12 17:44:22 2014
Using Petsc Release Version 3.4.2, Jul, 02, 2013

                         Max       Max/Min        Avg      Total
Time (sec):           4.356e+02      1.00000   4.356e+02
Objects:              1.420e+02      1.00000   1.420e+02
Flops:                1.693e+11      1.01246   1.684e+11  6.737e+11
Flops/sec:            3.886e+08      1.01246   3.867e+08  1.547e+09
MPI Messages:         1.461e+04      1.99781   1.096e+04  4.385e+04
MPI Message Lengths:  1.397e+08      1.81949   9.882e+03  4.333e+08
MPI Reductions:       1.448e+04      1.00000

Flop counting convention: 1 flop = 1 real number operation of type (multiply/divide/add/subtract)
                            e.g., VecAXPY() for real vectors of length N --> 2N flops
                            and VecAXPY() for complex vectors of length N --> 8N flops

Summary of Stages:   ----- Time ------  ----- Flops -----  --- Messages ---  -- Message Lengths --  -- Reductions --
                        Avg     %Total     Avg     %Total   counts   %Total     Avg         %Total   counts   %Total
 0:      Main Stage: 4.3556e+02 100.0%  6.7368e+11 100.0%  4.385e+04 100.0%  9.882e+03      100.0%  1.447e+04 100.0%

------------------------------------------------------------------------------------------------------------------------

Is there any other information you need?

Thanks,




On Fri, Sep 12, 2014 at 5:31 PM, John Peterson <[email protected]> wrote:

> On Fri, Sep 12, 2014 at 1:56 AM, Rodrigo Broggi <[email protected]>
> wrote:
> > Hi guys,
> >
> > I have a problem in the execution of my program:
> >
> > For some reason I get the same running time in the linear system.solve
> > function whether I run with 1, 2, 3 or 4 processors. PETSc is installed
> > and it was recognized when I installed libmesh. Other processes do run
> > in parallel, but not the solution of the linear system (the most
> > expensive phase). Any idea on how to tackle the problem?
>
> I think we need more information to figure out exactly what the problem
> is...
>
> Can you run with -log_summary so we can see more information about
> what solvers PETSc is using?  I suppose it's possible that you
> accidentally built a serial PETSc and an MPI-enabled libmesh, but it
> seems unlikely that combination would even run...  How do you know
> that "other processes" do run in parallel?
>
> --
> John
>
_______________________________________________
Libmesh-users mailing list
[email protected]
https://lists.sourceforge.net/lists/listinfo/libmesh-users
