On Tue, Oct 22, 2013 at 07:01:47PM +0000, Lee, Brett wrote:
> Andrew,
> 
> If I recall correctly, "FSTYPE=zfs /usr/lib64/lustre/tests/llmount.sh" will 
> create and start a sample ZFS-backed Lustre file system using loopback 
> devices.

That's not entirely true with ZFS. It'll create ZFS pools backed by
ordinary files. No need for loopback devices.
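As a quick sanity check (a sketch; the pool names llmount.sh creates may differ on your system):

```shell
# File-backed pools list plain file paths as vdevs in the pool
# config, rather than block devices like /dev/loop0.
zpool status

# If loopback devices were in use they would be listed here;
# empty output means none are configured.
losetup -a
```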

-- 
Cheers, Prakash

> 
> Could you please check to see if there are loopback devices mounted as Lustre 
> storage targets?  If so, unmounting these and stopping the Lustre file system 
> should (could?) clean things up.
> 
> --
> Brett Lee
> Sr. Systems Engineer
> Intel High Performance Data Division
> 
> 
> > -----Original Message-----
> > From: [email protected] [mailto:lustre-discuss-
> > [email protected]] On Behalf Of Andrew Holway
> > Sent: Tuesday, October 22, 2013 10:44 AM
> > To: [email protected]
> > Cc: [email protected]
> > Subject: Re: [Lustre-discuss] [zfs-discuss] ZFS/Lustre echo 0 >> 
> > max_cached_mb
> > chewing 100% cpu
> > 
> > On 22 October 2013 16:21, Prakash Surya <[email protected]> wrote:
> > > This probably belongs on the Lustre mailing list.
> > 
> > I cross-posted :)
> > 
> > > Regardless, I don't
> > > think you want to do that (do you?). It'll prevent any client side
> > > caching, and more importantly, I don't think it's a case that's been
> > > tested/optimized. What are you trying to achieve?
> > 
> > Sorry, I was not clear: I didn't do this myself, and I can't kill the
> > process. It seemed to start directly after running:
> > 
> > "FSTYPE=zfs /usr/lib64/lustre/tests/llmount.sh"
> > 
> > I have tried to kill it with signals from -2 up to -9, but the process
> > will not budge.
> > 
> > Here are the top lines from perf top:
> > 
> >  37.39%  [osc]              [k] osc_set_info_async
> >  27.14%  [lov]              [k] lov_set_info_async
> >   4.13%  [kernel]           [k] kfree
> >   3.57%  [ptlrpc]           [k] ptlrpc_set_destroy
> >   3.14%  [kernel]           [k] mutex_unlock
> >   3.10%  [lustre]           [k] ll_wr_max_cached_mb
> >   3.00%  [kernel]           [k] mutex_lock
> >   2.82%  [ptlrpc]           [k] ptlrpc_prep_set
> >   2.52%  [kernel]           [k] __kmalloc
> > 
> > Thanks,
> > 
> > Andrew
> > 
> > >
> > > Also, just curious, where's the CPU time being spent? What process
> > > and/or kernel thread? What are the top entries listed when you run "perf
> > top"?
> > >
> > > --
> > > Cheers, Prakash
> > >
> > > On Tue, Oct 22, 2013 at 12:53:44PM +0100, Andrew Holway wrote:
> > >> Hello,
> > >>
> > >> I have just setup a "toy" lustre setup using this guide here:
> > >> http://zfsonlinux.org/lustre and have this process chewing 100% cpu.
> > >>
> > >> sh -c echo 0 >>
> > >> /proc/fs/lustre/llite/lustre-ffff88006b0c7c00/max_cached_mb
> > >>
> > >> Until I get something more beasty I am using my desktop machine with
> > >> KVM. Using standard Centos 6.4 with latest kernel. (2.6.32-358.23.2).
> > >> my machine has 2GB ram
> > >>
> > >> Any ideas?
> > >>
> > >> Thanks,
> > >>
> > >> Andrew
> > >>
> > >> To unsubscribe from this group and stop receiving emails from it, send an
> > email to [email protected].
> > >
> > _______________________________________________
> > Lustre-discuss mailing list
> > [email protected]
> > http://lists.lustre.org/mailman/listinfo/lustre-discuss
> 
