[zfs-discuss] ARC Ghost lists, why have them and how much ram is used to keep track of them? [long]

2009-11-28 Thread Tomas Ögren
Hello.

We have a file server running S10u8 which is the disk backend to a caching
ftp/http frontend cluster (homebrew); it currently holds about 4.4 TB of
data, which obviously doesn't fit in the 8 GB of RAM the machine has.

arc_summary currently says:
System Memory:
 Physical RAM:  8055 MB
 Free Memory :  1141 MB
 LotsFree:  124 MB
ARC Size:
 Current Size: 3457 MB (arcsize)
 Target Size (Adaptive):   3448 MB (c)
 Min Size (Hard Limit):878 MB (zfs_arc_min)
 Max Size (Hard Limit):7031 MB (zfs_arc_max)
ARC Size Breakdown:
 Most Recently Used Cache Size:  93%3231 MB (p)
 Most Frequently Used Cache Size: 6%217 MB (c-p)
...
CACHE HITS BY CACHE LIST:
  Anon:                        3%     377273490               [ New Customer, First Cache Hit ]
  Most Recently Used:          9%    1005243026 (mru)         [ Return Customer ]
  Most Frequently Used:       81%    9113681221 (mfu)         [ Frequent Customer ]
  Most Recently Used Ghost:    2%     284232070 (mru_ghost)   [ Return Customer Evicted, Now Back ]
  Most Frequently Used Ghost:  3%     361458550 (mfu_ghost)   [ Frequent Customer Evicted, Now Back ]

And some info from echo ::arc | mdb -k:
arc_meta_used  = 2863 MB
arc_meta_limit = 3774 MB
arc_meta_max   = 4343 MB


Now to the questions. As I've understood it, the ARC keeps a list of data
newly evicted from the ARC in the ghost lists, for example to be used
for the L2ARC (or?).

In mdb -k:
> ARC_mfu_ghost::print
...
arcs_lsize = [ 0x2341ca00, 0x4b61d200 ]
arcs_size = 0x6ea39c00
...
> ARC_mru_ghost::print
arcs_lsize = [ 0x65646400, 0xd24e00 ]
arcs_size = 0x6636b200
> ARC_mru::print
arcs_lsize = [ 0x2b9ae600, 0x38646e00 ]
arcs_size = 0x758ae800
> ARC_mfu::print
arcs_lsize = [ 0, 0x4d200 ]
arcs_size = 0x1043a000

Does this mean that, currently, 1770 MB + 1635 MB is wasted just on
statistics and 1880 MB + 260 MB is used for actual cached data, or do these
numbers just refer to how much data they keep stats for?
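Purely as arithmetic on the mdb output above, the MB figures in the question come from converting the arcs_size hex values (bytes) to MB:

```python
# arcs_size values (bytes) copied from the ::print output above
sizes = {
    "mru":       0x758ae800,   # ~1880 MB resident
    "mru_ghost": 0x6636b200,   # ~1635 MB tracked by the ghost list
    "mfu":       0x1043a000,   # ~260 MB resident
    "mfu_ghost": 0x6ea39c00,   # ~1770 MB tracked by the ghost list
}
for name, nbytes in sizes.items():
    print(f"{name:9s} {nbytes / 2**20:7.1f} MB")
```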

So basically, what is the point of the ghost lists, and how much RAM are
they actually using?

Also, since this machine has just two purposes in life (sharing data over
NFS and taking backups of the same data), I'd like those 1141 MB of free
memory to actually be used. Can I set zfs_arc_max to 8 GB? (I can't find
any runtime tunable, only the /etc/system one, right?) If it runs out of
memory, it'll set no_grow and shrink a little, right?
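For reference, the /etc/system route, and the unsupported runtime poke sometimes used instead, look roughly like this (a sketch; the value is an example, not a recommendation):

```shell
# /etc/system entry (takes effect on reboot); 0x200000000 = 8 GB, example value:
#   set zfs:zfs_arc_max = 0x200000000

# Unsupported runtime alternative: write arc_c_max directly with the
# kernel debugger (at your own risk; /Z writes an 8-byte value):
echo "arc_c_max/Z 0x200000000" | mdb -kw
```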

Currently, data can use all of the ARC if it wants, but metadata can use a
maximum of $arc_meta_max. Since there's no chance of caching all of the
data, but there is a high chance of caching a large proportion of the
metadata, I'd like the reverse limits: cap data at 1 GB or so (because of
the buffers currently being handled; setting primarycache=metadata gave
crap performance in my testing) and let metadata take as much as it likes.
Is there any chance of getting something like this?
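For reference, the knobs that do exist in this area are the metadata limit in /etc/system and the per-dataset primarycache property mentioned above (a sketch; values and the pool name "tank" are examples):

```shell
# /etc/system (reboot required); raise the ARC metadata limit - example value:
#   set zfs:zfs_arc_meta_limit = 0x100000000

# Per-dataset: cache only metadata in the ARC ("all" is the default)
zfs set primarycache=metadata tank
zfs get primarycache tank
```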

/Tomas
-- 
Tomas Ögren, st...@acc.umu.se, http://www.acc.umu.se/~stric/
|- Student at Computing Science, University of Umeå
`- Sysadmin at {cs,acc}.umu.se
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS CIFS, smb.conf (smb/server) and LDAP

2009-11-28 Thread Richard Elling

On Nov 27, 2009, at 11:32 PM, Steven Sim wrote:


All;

I am deeply sorry if this topic has been rehashed, checksummed,
de-duplicated and archived before.


But I just need a small clarification.

/etc/sfw/smb.conf is necessary only for smb/server to function properly,
but is the smb/server SMF service necessary for ZFS sharesmb to work?


I am trying to set up an OpenSolaris file server acting as a Windows PDC,
with Samba/LDAP integration on the OpenSolaris box. (With ZFS, of
course...)


I read a blog which says ZFS CIFS has nothing to do with smb/server, but
it seems I cannot get ZFS sharesmb to work without the smb/server SMF
service.


The CIFS service uses the ZFS sharesmb property, but only on OpenSolaris.
It is a nop on Solaris 10.

Samba works on both Solaris 10 and OpenSolaris.  Configure it as per the
directions for other file systems.
 -- richard
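For context, the kernel-CIFS path on OpenSolaris looks roughly like this (a sketch; the dataset name tank/share and the resource name "myshare" are made up):

```shell
# Enable the kernel CIFS server (the smb/server SMF service) and dependencies
svcadm enable -r smb/server

# Share a dataset through the kernel CIFS service
zfs set sharesmb=name=myshare tank/share

# List what is currently shared
sharemgr show -vp
```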



What exactly is the dependency here?

On a separate note, I've actually gotten the shares to work and also
successfully gotten the Windows Previous Versions tab working with ZFS
snapshots. It's awesome!


But now I'm facing a heck of a lot of problems getting Samba integrated
with LDAP. (Not this list, I know..)


For anyone interested and willing to advise: I am using Sun DSEE 7.0, and
I'm facing a heck of a lot of problems with the LDAP DIT structure.


Warmest Regards
Steven Sim




Re: [zfs-discuss] Full Disk Encryption (FDE) support

2009-11-28 Thread ChrisS
Until ZFS encryption is available, is there a way to software-encrypt a
filesystem?  I need unmounted files to be unreadable.  I don't want to encrypt
on a file-by-file basis.  Mounted files need to be shared with Windows
machines.  I'm using FreeBSD's geli and Samba now.  Thanks.
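For reference, the geli-plus-Samba setup mentioned looks roughly like this on the FreeBSD side (a sketch; /dev/ada1 and the /secure mountpoint are example names):

```shell
# One-time: write encryption metadata onto the provider (example device)
geli init -s 4096 /dev/ada1

# Attach (prompts for the passphrase); the cleartext device gets a .eli suffix
geli attach /dev/ada1
newfs -U /dev/ada1.eli
mount /dev/ada1.eli /secure    # then export /secure via Samba

# Detaching makes the on-disk data unreadable again until reattached
umount /secure && geli detach /dev/ada1
```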
-- 
This message posted from opensolaris.org


[zfs-discuss] single pool with 2 raidz arrays

2009-11-28 Thread Muhammed Syyid
Hi
I currently have one ZFS pool (tank) set up with a 4-drive raidz array (c9t4d0
c9t5d0 c7t0d0 c7t1d0) and want to add another 4-drive array. I wasn't able to
figure out what the best practice for that would be.
Should the two raidzs be in the same pool, and if so, what would be the
command to do so:
zpool create tank raidz c9t1d0 c9t2d0 c9t3d0 c9t4d0
or should I create a separate pool:
zpool create tank2 raidz c9t1d0 c9t2d0 c9t3d0 c9t4d0
What would be the pros and cons of both? (Presuming the first is an actual
option.)
I don't want to expand the size of the raidz itself, so that if needed I can
pull 4 disks independently of the rest (upgrade them, or remove half my
storage, etc.).
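For what it's worth, growing the existing pool with a second raidz vdev would use zpool add rather than zpool create (a sketch using the device names from the question):

```shell
# Add a second raidz top-level vdev to the existing pool "tank"
zpool add tank raidz c9t1d0 c9t2d0 c9t3d0 c9t4d0

# The pool should now list two raidz vdevs
zpool status tank
```

Note that on builds of this era a top-level vdev cannot be removed from a pool once added, so only the separate-pool option would let those four disks be pulled out again later.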


Re: [zfs-discuss] ZFS CIFS, smb.conf (smb/server) and LDAP

2009-11-28 Thread Venkatesh K
 All;

 I am deeply sorry if this topic has been rehashed, checksummed, de-duplicated and archived before.

 But I just need a small clarification.

 /etc/sfw/smb.conf is necessary only for smb/server to function properly but is smb/server SMF service necessary for ZFS sharesmb to work

 I am trying to setup an open solaris file server acting as a Windows PDC with SAMBA/LDAP integration on the open solaris box. (with ZFS of course...)

 I read a BLOG which says ZFS CIFS has nothing to do with smb/server but it seems i cannot get ZFS sharesmb to work without smb/server SMF service.

 What exactly is the dependency here?

 On a separate note, I've actually gotten the shares to work and also successfully gotten the Windows Previous Version tab relating to ZFS snapshots. It's awesome!

 But now I'm facing a heck of a lot of problems getting SAMBA integrated with LDAP. (not this list i know..)

Sorry for discussing an off-topic subject here; we will try to find some other
forum soon. I am also exploring options for integrating
OpenSolaris/ZFS/OpenDS/Samba as a domain controller to replace our current
Linux/OpenLDAP/Samba DC. I have yet to find useful material. If you are
interested, we can find a suitable forum to discuss it.

 
 For any interested and willing to advice I am using
 Sun DSEE 7.0 and I'm 
 facing a heck of a lot of problems with the LDAP DIT
 structure.
 

Let me know how and where we can discuss this.

Thanks,

Venkatesh K