Hi Evan,

Are you using parallel::distributed::Triangulation or the normal (old) one?
1. For a parallel::distributed::Triangulation see step-40 (or step-32
for a more complex example): this generates one .vtu file per process
plus a .pvtu/.visit master record (to be loaded in ParaView or VisIt).
2. For a normal Triangulation you can:
  a) just write everything on processor 0
  b) write a .vtu file containing only the cells owned by each process
(the result would look like 1). Here you need to do some work: create a
class derived from DataOut and override first_cell() and next_cell();
see the documentation of DataOut. You also need a .pvtu master record,
which deal.II can generate for you. This can then be loaded in
ParaView/VisIt.
  c) what you are doing now with step-19

In general I would say 1 or 2b) would perform best, and
ParaView/VisIt are the most powerful visualization tools.

If c) does not work for you, can you construct a minimal example that I can run?

Best,
Timo

On Sat, Apr 14, 2012 at 8:42 PM, pleramorphyllactic
<[email protected]> wrote:
> Hi guys,
>
> Quick question, I hope someone can help with. I'm using an FESystem with
> two components: one is a scalar component, the other a vector
> component. In serial I'm having no trouble outputting data doing the
> following sort of thing:
>
>
>   DataOut<dim, DoFHandler<dim> > data_out;
>   data_out.attach_dof_handler (*(dof_handler[k]));
>   data_out.add_data_vector (subdomain_solution[k],
>                             hd<dim>::component_names (k),
>                             DataOut<dim, DoFHandler<dim> >::type_dof_data,
>                             hd<dim>::component_interpretation ());
>
>   std::ostringstream filename;
>   filename << "solution0-"
>            << cycle
>            << ".vtk";
>
>   std::ofstream output (filename.str().c_str());
>   data_out.write_vtk (output);
>
> where the component interpretation works fine, and here's the pseudo flow:
>
>   if (k >= 0) {
>     std::vector<std::string> names;
>     std::ostringstream parser;
>     parser << "alpha_" << k;
>     names.push_back(parser.str());
>     for (unsigned int j = 0; j < dim; j++) {
>       std::ostringstream parser;
>       parser << "sigma_" << k << "_<" << j << ">";
>       names.push_back(parser.str());
>     }
>
>     std::vector<DataComponentInterpretation::DataComponentInterpretation>
>       data_component_interpretation;
>     data_component_interpretation.push_back(
>       DataComponentInterpretation::component_is_scalar);
>     for (unsigned int j = 0; j < dim; j++) {
>       data_component_interpretation.push_back(
>         DataComponentInterpretation::component_is_part_of_vector);
>     }
>     return data_component_interpretation;
>   }
>
> The problem is, when I'm running over many processors using PETSc and
> MPI, this output gives processor-specific data, and isn't really
> useful. I've tried the style used in step-40 and step-18, but in the
> former case I get an error on the .vtu/.pvtu files where ParaView can't
> load them, and in the latter case step-19 throws an exception and can't
> convert. I'm guessing it might have something to do with the fact
> that the FESystem has one scalar component and one vector component?
> My preference would be to have something like what is used in step-18.
> Here's what I've tried:
>
>  for(unsigned int k=0; k<alphadim; k++){
>
>
>  if(k==0){
>
>
>      FilteredDataOut<dim> data_out0(this_mpi_process);
>
>      data_out0.attach_dof_handler (*(dof_handler[k]));
>      data_out0.add_data_vector (subdomain_solution[k],
>                                 rmhd<dim>::component_names (k),
>                                 DataOut<dim, DoFHandler<dim> >::type_dof_data,
>                                 rmhd<dim>::component_interpretation ());
>
>      std::vector<unsigned int> partition_int (triangulation.n_active_cells());
>      GridTools::get_subdomain_association (triangulation, partition_int);
>      const Vector<double> partitioning(partition_int.begin(),
>                                        partition_int.end());
>      data_out0.add_data_vector (partitioning, "partitioning");
>
>      data_out0.build_patches ();
>
>      std::ostringstream filename;
>      filename << "solution-";
>      filename << std::setfill('0');
>      filename.setf(std::ios::fixed, std::ios::floatfield);
>      filename << std::setw(9) << std::setprecision(4) << cycle;
>
>      if (n_mpi_processes != 1)
>        {
>          AssertThrow (n_mpi_processes < 1000, ExcNotImplemented());
>
>          filename << '-';
>          filename << std::setfill('0');
>          filename << std::setw(3) << this_mpi_process;
>        }
>
>      filename << data_out0.default_suffix(DataOut<dim>::deal_II_intermediate);
>
>      std::ofstream output (filename.str().c_str());
>      data_out0.write_deal_II_intermediate (output);
>    }
>  }
>
> The *.d2 files print out, and look reasonable to me when I open them,
> but ../../../step-19/step-19 solution-000000000.d1 -x gmv test1.gmv
> aborts without converting:
>
> An error occurred in line <580> of file <step-19.cc> in function
>    void Step19::convert()
> The violated condition was:
>    input
> The name and call sequence of the exception was:
>    ExcIO()
> Additional Information:
> (none)
>
> I'm not sure at present how to fix this.
>
> Many thanks,
> Evan
> _______________________________________________
> dealii mailing list http://poisson.dealii.org/mailman/listinfo/dealii



-- 
Timo Heister
http://www.math.tamu.edu/~heister/