Sorry for the last message; it was not finished yet. I
unintentionally pressed the send button.

Once again:


Hello,

I haven't tried Derek's suggestion yet. Instead, I wanted to
split a smaller mesh with Nemesis and see whether my code
runs in parallel (libMesh was configured with
--enable-parmesh).

I split this smaller mesh (about 1,000,000 nodes) with the
following code:

---------------------------------------
//CODE 1

Mesh mesh;

TetGenIO TETGEN(mesh);
TETGEN.read("meshfile.ele");

set_subdomains(&TETGEN, mesh); // I manually set subdomain_id
set_boundary_ids(mesh);        // I manually set boundary_id

mesh.find_neighbors();
MetisPartitioner().partition(mesh,4);

mesh.print_info();

Nemesis_IO NEM ( mesh );
NEM.write("main_alm.nemesis"); // writes one file per processor


//END CODE 1
---------------------------------------

After this I get 4 files (running on 4 processors):
xxx.4.0, xxx.4.1, xxx.4.2, xxx.4.3, where xxx stands for the
file name.



Now, in a new program I want to carry out my calculations: 



---------------------------------------

// CODE 2
Mesh new_mesh;
Nemesis_IO NEMESIS (new_mesh);

NEMESIS.read("xxx");

if (!new_mesh.is_prepared())
  new_mesh.prepare_for_use();

new_mesh.print_info();

// MeshRefinement meshrefine (new_mesh);
// meshrefine.uniformly_refine(1);
// new_mesh.print_info();

EquationSystems new_equation_systems (new_mesh);

TransientLinearImplicitSystem & T_system =
  new_equation_systems.add_system<TransientLinearImplicitSystem> ("Diffusion");

T_system.add_variable ("T", FIRST);

T_system.attach_assemble_function (assemble_cd);
T_system.attach_init_function (init_geothermal_gradient);

new_equation_systems.init ();
new_equation_systems.print_info();

// ********** TIME LOOP *******

int t_write = 0;

std::cout << "t_step " << t_step
          << "  t_next_pulse[actual_pulse]  " << t_next_pulse[actual_pulse]
          << "  dt  " << dt << std::endl;

for (; t_step < t_next_pulse[actual_pulse]/dt; t_step++)
  {
    time += dt;

    new_equation_systems.parameters.set<Real> ("time") = time;

    TransientLinearImplicitSystem & T_system =
      new_equation_systems.get_system<TransientLinearImplicitSystem> ("Diffusion");

    *T_system.old_local_solution = *T_system.current_local_solution;

    new_equation_systems.get_system("Diffusion").solve();
  }


// END CODE 2
---------------------------------------


The assemble function is similar to the one in example 9. 
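For reference, here is a stripped-down sketch of that structure. It
follows example 9's assemble_cd; the actual diffusion coefficient and
boundary terms in my code are left out, and dt is read from the
EquationSystems parameters as in the example:

---------------------------------------
// SKETCH (assemble function, structure as in example 9)

void assemble_cd (EquationSystems & es, const std::string & system_name)
{
  const MeshBase & mesh = es.get_mesh();
  const unsigned int dim = mesh.mesh_dimension();

  TransientLinearImplicitSystem & system =
    es.get_system<TransientLinearImplicitSystem> (system_name);

  const DofMap & dof_map = system.get_dof_map();
  FEType fe_type = system.variable_type(0);

  AutoPtr<FEBase> fe (FEBase::build(dim, fe_type));
  QGauss qrule (dim, fe_type.default_quadrature_order());
  fe->attach_quadrature_rule (&qrule);

  const std::vector<Real> & JxW = fe->get_JxW();
  const std::vector<std::vector<Real> > & phi = fe->get_phi();
  const std::vector<std::vector<RealGradient> > & dphi = fe->get_dphi();

  const Real dt = es.parameters.get<Real> ("dt"); // assumed set elsewhere

  DenseMatrix<Number> Ke;
  DenseVector<Number> Fe;
  std::vector<unsigned int> dof_indices;

  // loop over the elements owned by this processor
  MeshBase::const_element_iterator       el     = mesh.active_local_elements_begin();
  const MeshBase::const_element_iterator end_el = mesh.active_local_elements_end();

  for ( ; el != end_el; ++el)
    {
      const Elem * elem = *el;

      dof_map.dof_indices (elem, dof_indices);
      fe->reinit (elem);

      Ke.resize (dof_indices.size(), dof_indices.size());
      Fe.resize (dof_indices.size());

      for (unsigned int qp=0; qp<qrule.n_points(); qp++)
        {
          // value of the old solution at this quadrature point
          Number T_old = 0.;
          for (unsigned int l=0; l<phi.size(); l++)
            T_old += phi[l][qp]*system.old_solution (dof_indices[l]);

          for (unsigned int i=0; i<phi.size(); i++)
            {
              Fe(i) += JxW[qp]*T_old*phi[i][qp];

              // mass matrix + dt * stiffness matrix (implicit Euler)
              for (unsigned int j=0; j<phi.size(); j++)
                Ke(i,j) += JxW[qp]*(phi[i][qp]*phi[j][qp]
                                    + dt*(dphi[i][qp]*dphi[j][qp]));
            }
        }

      dof_map.constrain_element_matrix_and_vector (Ke, Fe, dof_indices);

      system.matrix->add_matrix (Ke, dof_indices);
      system.rhs->add_vector (Fe, dof_indices);
    }
}

// END SKETCH
---------------------------------------
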
At every time step I get the following warning: 

Warning:  This MeshInput subclass only support meshes which
have been serialized!
[0] /home/ThermoPaine/libmesh_svn/include/mesh/mesh_input.h,
line 148, compiled Oct 30 2011 at 11:09:26
Warning:  This MeshOutput subclass only support meshes which
have been serialized!
[0]
/home/ThermoPaine/libmesh_svn/include/mesh/mesh_output.h,
line 181, compiled Oct 30 2011 at 11:09:26

-----------
QUESTION 1: 
-----------
What does this warning mean? Can I simply ignore it, or does
it indicate that there is still a problem with my parallel
configuration?

-----------
QUESTION 2:
-----------
There is another thing that bothers me:

When I partition the mesh with the first code, I get 
  n_subdomains()=4
  n_partitions()=4
  n_processors()=4

in mesh.print_info().

But after reading the mesh back in with the second code, I get
  n_subdomains()=4
  n_partitions()=1
  n_processors()=4

So, although I read in one file per processor, n_partitions()
has changed.


-----------
QUESTION 3:
-----------
In several discussions I read that there are still serious
bugs in ParallelMesh. Do these bugs also affect the code I
have above? AFAIK there are bugs with adaptive refinement and
coarsening. In my code I don't use the ParallelMesh class
explicitly, so I was wondering whether the data structure I
get with --enable-parmesh allows refinement and coarsening.




Thank you, 
Robert



