Hi Timo,

Thanks for responding.
The first issue, it turned out, was that I had compiled PETSc using mixed compilers: mvapich was built with intel, while PETSc was built with a mix of intel and gcc. After a lot of messing around, it appears, as far as I can tell, that PETSc doesn't mind this sort of thing as long as you use the right flags, but the deal.II configure seems to get confused, and even if you force the compile it causes weird issues downstream. Something people might want to bear in mind. My recommendation would be to compile mvapich with the gnu compilers from the outset and avoid intel for deal.II, though that might be wrong ... it solved my problems, at any rate.

So right now I have things working ... sort of. The vtu files output fine for multiple processors using the step-40 method, but the pvtu file is unreadable in paraview (maybe I need a newer version of paraview? I've tried up to paraview 3.14 thus far). I'm trying to output an FESystem, and the error in paraview is:

  ERROR: In /home/utkarsh/Kitware/superbuild/paraview/src/paraview/VTK/IO/vtkXMLPDataReader.cxx, line 364
  vtkXMLPUnstructuredGridReader (0x5ebb550): File for piece 0 cannot be read.

It might be related to the variable name for the vector component, which deal.II writes into the pvtu file as:

  <PDataArray type="Float64" Name="sigma_0_0__sigma_0_1" NumberOfComponents="3" format="ascii"/>

I've no idea why this is confusing it, and online documentation for the paraview file formats seems pretty sparse (maybe I'm missing it).

I can get the output working for any of the formats supported via data_out.default_suffix(DataOut<dim>::whatever). When I use step-19 on the *.d2 intermediate files, however, the output is always missing one cell (I don't know why). The tecplot output doesn't set the SOLUTIONTIME variable, which should maybe be fixed (I can look into doing this, should be simple). I have the vtk files from the individual processors, and they look fine.
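Since the SOLUTIONTIME fix came up: the kind of post-processing script I mean looks roughly like this. This is just a minimal sketch, assuming the ASCII Tecplot output has a single zone header line beginning with "zone" (the function name and file handling are my own, not anything from deal.II):

```python
import re

def add_solutiontime(path, time, out_path=None):
    """Append a SOLUTIONTIME entry to the zone header of an ASCII
    Tecplot file, so Tecplot can order a set of files in time.
    Assumes a single zone whose header line starts with 'zone'."""
    with open(path) as f:
        lines = f.readlines()
    for i, line in enumerate(lines):
        if re.match(r"\s*zone\b", line, re.IGNORECASE):
            # Tack the solution time onto the zone record in place.
            lines[i] = line.rstrip("\n") + f", SOLUTIONTIME={time:g}\n"
            break
    with open(out_path or path, "w") as f:
        f.writelines(lines)

# e.g. add_solutiontime("solution-0001.plt", 0.25)
```

Running something like that over each output file with the step's time lets tecplot load the whole series at once.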
I wrote a script to make them into pvtk files, but again, timestepping doesn't really work right, as paraview no longer supports that format. So getting a pvtu file that can load the vtu files and remember their timestepping would be nice. My present hack is a script for the tecplot output that inserts the SOLUTIONTIME variable into each file, so tecplot can open the files simultaneously. It would be nice if I could get things a little more streamlined, though, and only open a single file in the vis software.

Any help is greatly appreciated,
Evan

On Fri, Apr 20, 2012 at 5:56 PM, Timo Heister <[email protected]> wrote:
> Hi Evan,
>
> are you using distributed::Triangulation or the normal (old) one?
> 1. For the distributed::Triangulation see step-40 (or step-32 for a
> more complex example): this generates .vtu files and a .pvtu/.visit
> (to be loaded in paraview or visit)
> 2. for a normal Triangulation you can:
> a) just write everything on processor 0
> b) write a vtu file only for the own cells (this would be as in 1);
> here you need to do some work: you need to create a class, derive from
> DataOut, and override first_cell() and last_cell(). See the
> documentation of DataOut. You also need a pvtu, which deal can generate
> for you. This can then be loaded in paraview/visit.
> c) what you are doing now with step-19
>
> In general I would say 1 or 2b) would perform the best, and
> paraview/visit are the most powerful visualization tools.
>
> If c) does not work for you, can you construct a minimal example that I can
> run?
>
> Best,
> Timo
>
> On Sat, Apr 14, 2012 at 8:42 PM, pleramorphyllactic
> <[email protected]> wrote:
>> Hi guys,
>>
>> Quick question, I hope someone can help with. I'm using FESystem with
>> two components: one is a scalar component, the other a vector
>> component.
>> In serial I'm having no trouble outputting data doing the
>> following type of thing:
>>
>>   DataOut<dim, DoFHandler<dim> > data_out;
>>   data_out.attach_dof_handler (*(dof_handler[k]));
>>   data_out.add_data_vector (subdomain_solution[k],
>>                             hd<dim>::component_names (k),
>>                             DataOut<dim, DoFHandler<dim> >::type_dof_data,
>>                             hd<dim>::component_interpretation ());
>>
>>   std::ostringstream filename;
>>   filename << "solution0-" << cycle << ".vtk";
>>
>>   std::ofstream output (filename.str().c_str());
>>   data_out.write_vtk (output);
>>
>> where the component interpretation works fine; here's the pseudo flow
>> for building the names and the interpretation:
>>
>>   if(k>=0){
>>     std::vector<std::string> names;
>>     std::ostringstream parser;
>>     parser << "alpha_" << (k);
>>     names.push_back(parser.str());
>>     for(unsigned int j=0; j<dim; j++){
>>       std::ostringstream parser;
>>       parser << "sigma_" << (k) << "_<" << (j) << ">";
>>       names.push_back(parser.str());
>>     }
>>
>>     std::vector<DataComponentInterpretation::DataComponentInterpretation>
>>       data_component_interpretation;
>>     data_component_interpretation.push_back(DataComponentInterpretation::component_is_scalar);
>>     for(unsigned int j=0; j<dim; j++){
>>       data_component_interpretation.push_back(DataComponentInterpretation::component_is_part_of_vector);
>>     }
>>     return data_component_interpretation;
>>   }
>>
>> The problem is, when I'm running over many processors using petsc and
>> mpi, this output gives processor-specific data and isn't really
>> useful. I've tried the style used in step-40 and step-18, but in the
>> former case I get an error on the vtu/pvtu files where paraview can't
>> load them, and in the latter case step-19 throws an exception and can't
>> convert. I'm guessing it might have something to do with the fact
>> that the FESystem has one scalar component and one vector component?
>> My preference would be to have something like what is used in step-18.
>> Here's what I've tried:
>>
>>   for(unsigned int k=0; k<alphadim; k++){
>>     if(k==0){
>>       FilteredDataOut<dim> data_out0(this_mpi_process);
>>
>>       data_out0.attach_dof_handler (*(dof_handler[k]));
>>       data_out0.add_data_vector (subdomain_solution[k],
>>                                  rmhd<dim>::component_names (k),
>>                                  DataOut<dim, DoFHandler<dim> >::type_dof_data,
>>                                  rmhd<dim>::component_interpretation ());
>>
>>       std::vector<unsigned int> partition_int (triangulation.n_active_cells());
>>       GridTools::get_subdomain_association (triangulation, partition_int);
>>       const Vector<double> partitioning(partition_int.begin(),
>>                                         partition_int.end());
>>       data_out0.add_data_vector (partitioning, "partitioning");
>>
>>       data_out0.build_patches ();
>>
>>       std::ostringstream filename;
>>       filename << "solution-";
>>       filename << std::setfill('0');
>>       filename.setf(std::ios::fixed, std::ios::floatfield);
>>       filename << std::setw(9) << std::setprecision(4) << cycle;
>>
>>       if (n_mpi_processes != 1)
>>         {
>>           AssertThrow (n_mpi_processes < 1000, ExcNotImplemented());
>>           filename << '-';
>>           filename << std::setfill('0');
>>           filename << std::setw(3) << this_mpi_process;
>>         }
>>
>>       filename << data_out0.default_suffix(DataOut<dim>::deal_II_intermediate);
>>
>>       std::ofstream output (filename.str().c_str());
>>       data_out0.write_deal_II_intermediate (output);
>>     }
>>   }
>>
>> The *.d2 files print out, and look reasonable to me when I open them,
>> but
>>
>>   ../../../step-19/step-19 solution-000000000.d2 -x gmv test1.gmv
>>
>> aborts without converting:
>>
>>   An error occurred in line <580> of file <step-19.cc> in function
>>       void Step19::convert()
>>   The violated condition was:
>>       input
>>   The name and call sequence of the exception was:
>>       ExcIO()
>>   Additional Information:
>>       (none)
>>
>> I'm not sure at present how to fix this.
>>
>> Many thanks,
>> Evan
>> _______________________________________________
>> dealii mailing list
>> http://poisson.dealii.org/mailman/listinfo/dealii
>
>
> --
> Timo Heister
> http://www.math.tamu.edu/~heister/
_______________________________________________
dealii mailing list
http://poisson.dealii.org/mailman/listinfo/dealii
