The behaviour sounds similar to what I've observed. I was executing a code path which had not been made to respect options->calculate_statistics_and_exit. The result was that the master was doing all the computation, as well as farming it out to the workers. The workers did work, but only for a fraction of the time, since the DB was many times bigger than memory.

In my case I was getting failed mallocs after eating too much memory, and the patch was specific to the -l switch. Perhaps for aliased databases there is some similar magic which causes your 'File write error'? Keep an eye on your disk space and number of file handles during the next run.
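The monitoring suggested above can be sketched as a small shell helper. This is only a sketch, assuming a Linux node with /proc available; the function name and sampling approach are mine, not part of mpiBLAST:

```shell
# Sample free disk space and open file handles for a given process.
# Linux-specific: relies on /proc/<pid>/fd for the descriptor count.
sample_run() {
    pid="$1"
    # Last line of df: free space on the filesystem holding the cwd,
    # where mpiblast is presumably writing its output.
    df -h . | tail -n 1
    # Number of file descriptors currently open by the process.
    ls /proc/"$pid"/fd 2>/dev/null | wc -l
}
```

You could then sample every few seconds while the run is alive, e.g. `while kill -0 "$pid" 2>/dev/null; do sample_run "$pid"; sleep 5; done`, and see whether either number blows up before the 'File write error' appears.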


If you can provide a query and specifics on the DB I'd be happy to try and reproduce what you're seeing.

Mike Cariaso







Aaron Darling wrote:
the startup phase.  As Joe mentioned, perhaps this is related to -a 2?

I may be wrong in my assessment, but it appears that when I run an mpiblast job, the master node (chosen by the algorithm) does all the work. I see a constant load average of 1 on that node while the program is running, and barely any activity on the other nodes. For small database searches I get results, but larger ones either take so long that my patience gives out, or they finish with errors. The last large job I ran ended with this message after producing a few results:

[NULL_Caption] FATAL ERROR: CoreLib [002.005] 000049_0498_0531: File write error 0 34645.5 Bailing out with signal -1
[0] MPI Abort by user Aborting program !
[0] Aborting program!






_______________________________________________
Mpiblast-users mailing list
[email protected]
https://lists.sourceforge.net/lists/listinfo/mpiblast-users
