Hi Lucas,
After some time I can see that the process is also running on the other machines, but after 2-3 hours it stopped and printed the output below on the screen (captured in the file mpirun-mpiblast-out). In /var/log/messages I found the following, so I think this is a memory problem.
Apr 12 17:29:54 master kernel: Mem-info:
Apr 12 17:29:54 master kernel: Zone:DMA freepages: 0 min: 0 low: 0 high: 0
Apr 12 17:29:54 master kernel: Zone:Normal freepages: 765 min: 766 low: 4603 high: 6649
Apr 12 17:29:54 master kernel: Zone:HighMem freepages: 0 min: 0 low: 0 high: 0
Apr 12 17:29:54 master kernel: Zone:DMA freepages: 2521 min: 0 low: 0 high: 0
Apr 12 17:29:54 master kernel: Zone:Normal freepages: 765 min: 766 low: 4541 high: 6556
Apr 12 17:29:54 master kernel: Zone:HighMem freepages: 0 min: 0 low: 0 high: 0
Apr 12 17:29:54 master kernel: Free pages: 4051 ( 0 HighMem)
Apr 12 17:29:54 master kernel: ( Active: 483579/7376, inactive_laundry: 0, inactive_clean: 0, free: 4051 )
Apr 12 17:29:54 master kernel: aa:0 ac:0 id:0 il:0 ic:0 fr:0
Apr 12 17:29:54 master kernel: aa:241015 ac:1979 id:3233 il:1 ic:0 fr:765
Apr 12 17:29:54 master kernel: aa:0 ac:0 id:0 il:0 ic:0 fr:0
Apr 12 17:29:57 master kernel: aa:0 ac:0 id:0 il:0 ic:0 fr:2521
Apr 12 17:29:57 master kernel: aa:238782 ac:1947 id:3999 il:0 ic:0 fr:765
Apr 12 17:29:57 master kernel: aa:0 ac:0 id:0 il:0 ic:0 fr:0
Apr 12 17:29:57 master kernel: 1*4kB 2*8kB 0*16kB 1*32kB 1*64kB 1*128kB 1*256kB 1*512kB 0*1024kB 1*2048kB 0*4096kB = 3060kB)
Apr 12 17:29:57 master kernel: Swap cache: add 798540, delete 798504, find 10751/50082, race 0+0
Apr 12 17:29:57 master kernel: 3892 pages of slabcache
Apr 12 17:29:57 master kernel: 198 pages of kernel stacks
Apr 12 17:29:57 master kernel: 1395 lowmem pagetables, 1982 highmem pagetables
Apr 12 17:29:57 master kernel: Free swap: 0kB
Apr 12 17:29:57 master kernel: 524143 pages of RAM
Apr 12 17:29:57 master kernel: 7116 free pages
Apr 12 17:29:57 master kernel: 17937 reserved pages
Apr 12 17:29:57 master kernel: 7644 pages shared
Apr 12 17:29:57 master kernel: 36 pages swap cached
Apr 12 17:29:57 master kernel: Buffer memory: 6432kB
Apr 12 17:29:57 master kernel: Cache memory: 10048kB
Apr 12 17:29:57 master kernel: CLEAN: 1047 buffers, 4170 kbyte, 74 used (last=587), 0 locked, 0 dirty 0 delay
Apr 12 17:30:07 master kernel: LOCKED: 32 buffers, 128 kbyte, 32 used (last=32), 0 locked, 0 dirty 0 delay
Apr 12 17:30:08 master kernel: DIRTY: 1 buffers, 4 kbyte, 1 used (last=1), 0 locked, 1 dirty 0 delay
Apr 12 17:30:08 master kernel: Out of Memory: Killed process 28476 (httpd).
Apr 12 17:30:08 master kernel: Out of Memory: Killed process 28476 (httpd).
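The key lines above are "Free swap: 0kB" followed by the OOM killer firing, which supports the memory-problem diagnosis. As a rough sketch (the sample file under /tmp is only illustrative — on a real node you would point grep at /var/log/messages), you can pull the OOM killer's victims out of a syslog file like this:

```shell
# Sketch: list processes the OOM killer reported in a syslog file.
# The sample line below is copied from the log above.
cat > /tmp/messages.sample <<'EOF'
Apr 12 17:30:08 master kernel: Out of Memory: Killed process 28476 (httpd).
EOF
grep 'Out of Memory: Killed process' /tmp/messages.sample \
    | sed 's/.*Killed process \([0-9]*\) (\([^)]*\)).*/pid=\1 name=\2/' \
    | sort -u
```

This prints one `pid=... name=...` line per distinct victim, which makes it easy to see whether it was mpiblast itself or some other daemon that got killed.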
--- Lucas Carey <[EMAIL PROTECTED]> wrote:
> Hi Ankit,
> mpiBLAST needs to process the query file before it can start BLASTing. This is unfortunately currently a serial process, so with large query files the master node may spend a long time before the workers start doing their thing.
> If you try it with a small query file first, you should see the workers start up right away. Give the master node some time.
> You can look in the mailing list archives for other query-file-size related discussions.
> -Lucas
>
> On Wednesday, April 12, 2006 at 23:55 -0700, ankit patel wrote:
> > Hi Lucas,
> >
> > Using mpirun with the -v option, the output said that mpiblast was running on all machines, but when I checked each machine I found that mpiblast was running only on my master machine. On the other machines the mpiblast process was not running.
> >
> > In the attachment I am sending you the lamboot -d and lamhalt -d output (when lamd is running), lamnodes, the mpirun and mpiblast command output with the -v option, and all files in the local directory specified by the MPIBLAST_LOCAL variable on all machines.
> >
> > --- Lucas Carey <[EMAIL PROTECTED]> wrote:
> >
> > > Hi Ankit,
> > > This is probably a problem with the way LAM is set up.
> > > What is the output of:
> > > lamhalt -d (if it's already running)
> > > lamboot -d
> > > lamnodes
> > > lamhalt -d
> > >
> > > If LAM is set up correctly, then mpiblast should run on all nodes.
> > > To get a list of the nodes that mpiblast is being run on:
> > > mpirun -v -np 24 .... mpiblast ....
> > >
> > > -Lucas
> > >
> > >
> > > On Wednesday, April 12, 2006 at 03:01 -0700,
> ankit
> > > patel wrote:
> > > >
> > > > Hi Lucas,
> > > > Thanks for your help. Now i can run my
> > > program mpiblast. But still i
> > > > have question why mpiblast is not run on
> all
> > > machines? I have 12 Sun
> > > > fire v20z machines and each has two
> > > processors. But when i am running
> > > > my my job for 24 processors than it
> > > execute randomaly not on all
> > > > machines. It used hardely 8-9 processors at
> one
> > > time.
> > > > I used lam-mpi 7.0. Is there any
> problem
> > > with lam environment for
> > > > mpiblast?
> > > > Lucas Carey <[EMAIL PROTECTED]> wrote:
> > > >
> > > > Hi Ankit,
> > > > Could you post the contents of your .ncbirc file?
> > > > Also, you can try passing them as environment variables:
> > > > mpirun -np 24 -x MPIBLAST_SHARED=/where/ever,MPIBLAST_LOCAL=/local/storage mpiblast
> > > > -Lucas
> > > > On Sunday, April 09, 2006 at 23:39 -0700,
> > > ankit patel wrote:
> > > > > i think i have a problem to set the
> > > environment
> > > > > variable for mpiblast. MPIBLAST_SHARED
> and
> > > > > MPIBLAST_LOCAL. I am already set in
> .ncbirc
> > > file
> > > > > created by me at home directory as well
> as
> > > in ncbi
> > > > > directory but i failed to get output.
> so
> > > can u tell me
> > > > > when i have to make .ncbirc file and
> what i
> > > have to
> > > > > write in it.
> > > > >
> > > > > With Best Regards,
> > > > >
> > > > > Ankit Patel
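For reference, a minimal .ncbirc along the lines Lucas asks about might look like the sketch below. The paths are placeholders, and the section/key names should be checked against the README of your mpiBLAST version — treat this as an assumption, not a definitive template:

```ini
[mpiBLAST]
Shared=/path/to/shared/storage
Local=/path/to/local/storage
```

The file is usually placed in the home directory of the user running mpiblast, and it must be readable on every node (or the same settings passed via mpirun -x, as Lucas suggests).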
> > > _______________________________________________
> > > Mpiblast-users mailing list
> > > [email protected]
> > > https://lists.sourceforge.net/lists/listinfo/mpiblast-users
> >
> > With Best Regards,
> >
> > Ankit Patel
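On the "only 8-9 processors used" question above, one thing worth double-checking is that the LAM host file declares both CPUs on each node, otherwise lamboot will schedule fewer slots than expected. A hedged sketch (hostnames node1..node12 are placeholders for the Sun Fire V20z machines):

```shell
# Sketch: generate a LAM bhost file declaring 2 CPUs per node, so
# "mpirun -np 24" can place two processes on each of 12 machines.
# Hostnames node1..node12 are placeholders for the real node names.
for i in $(seq 1 12); do
    echo "node$i cpu=2"
done > /tmp/lamhosts
cat /tmp/lamhosts
# Boot LAM with it (not run here): lamboot -d /tmp/lamhosts
```

After lamboot, lamnodes should list all 12 nodes with 2 CPUs each; if it does not, mpiblast cannot spread across them no matter what -np is.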
> >
>
With Best Regards,
Ankit Patel