Re: [Meep-discuss] Continued with hdf5 thing
Bruck Roman roman.br...@... writes:

> We have already had this topic: HDF5 output on NFS is very slow. Please see Steven's answer in the thread "[Meep-discuss] HDF5 creating file slow on NFS (fwd)". We also use NFS and output is very slow, but so far I have not had time to change the file system, so I cannot say whether another file system is faster.
>
> Best regards,
> Roman

Thank you all. I installed meep-mpi with serial HDF5, which runs well, but I did not see much speed improvement. Is that a normal observation? Perhaps I should try a professional supercomputer cluster.

___
meep-discuss mailing list
meep-discuss@ab-initio.mit.edu
http://ab-initio.mit.edu/cgi-bin/mailman/listinfo/meep-discuss
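Since the slowdown seems to track the file system rather than the HDF5 build, a quick sanity check is to compare raw write throughput on the NFS mount against a local disk, independent of Meep. A minimal sketch using standard `dd`; the `NFS_DIR` path is a placeholder, substitute your actual NFS export:

```shell
# Compare raw write throughput: local disk vs. NFS mount.
# NFS_DIR is a placeholder -- point it at your actual NFS export.
NFS_DIR=/mnt/nfs
LOCAL_DIR=/tmp

for dir in "$LOCAL_DIR" "$NFS_DIR"; do
    [ -d "$dir" ] || { echo "$dir: not mounted, skipping"; continue; }
    echo "Write test in $dir:"
    # Write 128 MiB of zeros; conv=fsync flushes data to disk before
    # dd reports a rate, so cached writes don't inflate the number.
    dd if=/dev/zero of="$dir/meep_io_test.bin" bs=1M count=128 conv=fsync 2>&1 | tail -n 1
    rm -f "$dir/meep_io_test.bin"
done
```

If the NFS rate is far below the local one, no choice of HDF5 library will help much; the advice in the earlier thread about writing output to a local disk applies.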
Re: [Meep-discuss] Continued with hdf5 thing
I guess your problem is NFS, which does not play well with parallel HDF5. Have a look at some of the related threads on the mailing list: http://www.mail-archive.com/search?l=meep-discuss%40ab-initio.mit.edu&q=NFS

Best,
Matt

On Sun, 24 Jan 2010, Lingyun Wang wrote:

> In the Meep installation instructions it is mentioned that the parallel HDF5 library does not work with serial code. Does this mean I should use the serial HDF5 library instead for this scenario?
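Since having both mpich2 and openmpi installed was part of the confusion here, it is worth confirming which MPI stack is actually first on the PATH before rebuilding anything. A small sketch using only standard commands, nothing Meep-specific:

```shell
# Report which MPI launcher and compiler wrapper the shell will use.
# A leftover mpirun from one MPI paired with libraries from the other
# is a classic cause of jobs that start but then hang on I/O.
for tool in mpirun mpicc; do
    command -v "$tool" || echo "$tool: not found on PATH"
done

# Both MPICH and Open MPI accept --version on mpirun, and the first
# line of output names the implementation.
command -v mpirun >/dev/null && mpirun --version 2>&1 | head -n 1 || true
```

If the launcher reports one implementation while meep-mpi was compiled against the other, uninstalling one of the two (as the original poster eventually did) is the cleanest fix.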
[Meep-discuss] Continued with hdf5 thing
I solved this issue with the help of one of the mailing list members (I accidentally deleted his/her email). Since I had installed both mpich2 and openmpi, meep-mpi got confused, so I deleted both and then installed only openmpi. But openmpi still did not work as expected: the HDF5 file took extremely long to finish writing, and the whole program just idled at that point without progress. So I switched back to mpich2 and compiled parallel HDF5. The same thing happened; it was still stuck on writing the HDF5 file.

The cluster itself works OK. It is two quad-core PCs sharing mpich2 over NFS, linked by Gigabit Ethernet (a crossover cable). If I just run "mpirun -np 8 uptime", it returns perfectly.

IP information: PC1: 192.168.2.x1, PC2: 192.168.2.x2. Via ssh/NFS, each machine can log on to the other without a password. If I run meep-mpi on a single node with -np 4, HDF5 is happy with it.

However, this HDF5 thing bothers me a lot. In the Meep installation instructions it is mentioned that the parallel HDF5 library does not work with serial code. Does this mean I should use the serial HDF5 library instead for this scenario?

Thank you!
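For reference, the two-node `mpirun -np 8 uptime` check described above can be driven from a host file. The addresses below are the placeholders from the post, not real IPs, and the `-f` flag is mpich2's (hydra) spelling; Open MPI would use `mpirun --hostfile hosts.txt` instead:

```shell
# Placeholder addresses from the post -- substitute your real node IPs.
cat > hosts.txt <<'EOF'
192.168.2.x1
192.168.2.x2
EOF

# "uptime" is a cheap payload: it confirms all 8 ranks launch across
# both nodes (passwordless ssh, shared NFS path) before risking a long
# meep-mpi run that may hang on HDF5 output.
if command -v mpiexec >/dev/null; then
    mpiexec -f hosts.txt -np 8 uptime || echo "launch failed: check ssh and the shared NFS path"
else
    echo "mpiexec not found on PATH"
fi
```

Seeing eight `uptime` lines split across both hosts separates launcher problems (MPI misconfiguration) from the HDF5-on-NFS write stall, which only shows up once the job starts producing output.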