Re: [zfs-discuss] kernel memory and zfs

2008-03-28 Thread Matt Cohen
[EMAIL PROTECTED]:~ #mdb -k
Loading modules: [ unix krtld genunix specfs dtrace uppc pcplusmp ufs md mpt ip 
hook neti sctp arp usba uhci fcp fctl qlc nca lofs zfs random fcip crypto 
logindmux ptm nfs ]
> ::memstat
Page Summary                Pages                MB  %Tot
------------     ----------------  ----------------  ----
Kernel                    4314678             16854   51%
Anon                      3538066             13820   42%
Exec and libs                9249                36    0%
Page cache                  29347               114    0%
Free (cachelist)            89647               350    1%
Free (freelist)            404276              1579    5%

Total                     8385263             32754
Physical                  8176401             31939
> ::quit

[EMAIL PROTECTED]:~ #kstat -m zfs
module: zfs                             instance: 0
name:   arcstats                        class:    misc
        c                               12451650535
        c_max                           33272295424
        c_min                           1073313664
        crtime                          175.759605187
        deleted                         26773228
        demand_data_hits                89284658
        demand_data_misses              1995438
        demand_metadata_hits            1139759543
        demand_metadata_misses          5671445
        evict_skip                      5105167
        hash_chain_max                  15
        hash_chains                     296214
        hash_collisions                 75773190
        hash_elements                   995458
        hash_elements_max               1576353
        hits                            1552496231
        mfu_ghost_hits                  4321964
        mfu_hits                        1263340670
        misses                          11984648
        mru_ghost_hits                  474500
        mru_hits                        57043004
        mutex_miss                      106728
        p                               9304845931
        prefetch_data_hits              10792085
        prefetch_data_misses            3571943
        prefetch_metadata_hits          312659945
        prefetch_metadata_misses        745822
        recycle_miss                    2775287
        size                            12451397120
        snaptime                        2410363.20494097

So it looks like our kernel is using 16GB and ZFS is using ~12GB of that for its ARC 
cache.  Is roughly 4GB of kernel memory for everything else normal?  It still seems like a lot of 
memory to me, but I don't know how all the zones affect the amount of memory 
the kernel needs.
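
One way to see what the remaining ~4GB is spent on is to break kernel memory down per kmem cache. A rough sketch, assuming root access; ZFS allocations generally show up under cache names like zio_buf_*/zio_data_buf_*, dnode_t, dmu_buf_impl_t and arc_buf_hdr_t, though the exact names vary by release:

# per-cache kernel memory usage; check the "memory in use" column
echo '::kmastat' | mdb -k

# current ARC size in bytes (the same number as "size" in arcstats above)
kstat -p zfs:0:arcstats:size

Whatever large caches are left over after the ZFS ones account for the non-ARC part of the Kernel figure.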
 
 


[zfs-discuss] kernel memory and zfs

2008-03-27 Thread Matt Cohen
We have a 32 GB RAM server running about 14 zones. There are multiple 
databases, application servers, web servers, and ftp servers running in the 
various zones.

I understand that using ZFS will increase kernel memory usage; however, I am a 
bit concerned at this point.

[EMAIL PROTECTED]:~/zonecfg #mdb -k

Loading modules: [ unix krtld genunix specfs dtrace uppc pcplusmp ufs md mpt ip 
hook neti sctp arp usba uhci fcp fctl qlc nca lofs zfs random fcip crypto 
logindmux ptm nfs ]

> ::memstat
Page Summary                Pages                MB  %Tot
------------     ----------------  ----------------  ----
Kernel                    4108442             16048   49%
Anon                      3769634             14725   45%
Exec and libs                9098                35    0%
Page cache                  29612               115    0%
Free (cachelist)            99437               388    1%
Free (freelist)            369040              1441    4%

Total                     8385263             32754
Physical                  8176401             31939

Out of 32GB of RAM, 16GB is being used by the kernel. Is there a way to find 
out how much of that kernel memory is due to ZFS?

It just seems like an excessive amount of our memory is going to the kernel, 
even with ZFS in use on the server.
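
The bulk of ZFS's kernel footprint is normally the ARC, and its current size is exported as a kstat. A minimal check, assuming a stock Solaris/OpenSolaris kstat setup:

# ARC size and its configured ceiling, in bytes
kstat -p zfs:0:arcstats:size
kstat -p zfs:0:arcstats:c_max

Subtracting the ARC size from the Kernel line in ::memstat gives a rough idea of how much kernel memory is going to things other than ZFS caching.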
 
 


[zfs-discuss] Replacing failing drive

2008-03-05 Thread Matt Cohen
Hi.  We have a hard drive failing in one of our production servers.

The server has two drives, mirrored.  The disks are split between UFS under SVM and ZFS.

Both drives are set up as follows.  The drives are c0t0d0 and c0t1d0; c0t1d0 is 
the failing drive.

slice 0 - 3.00GB UFS  (root partition)
slice 1 - 1.00GB swap
slice 3 - 4.00GB UFS  (var partition)
slice 4 - 60GB ZFS  (mirrored slice in our zfs pool)
slice 6 - 54MB metadb
slice 7 - 54MB metadb

I think I have a plan to replace the hard drive without interrupting either 
the SVM mirrors on slices 0, 1, and 3 or the ZFS pool, which is mirrored on slice 4.  
I am hoping someone can take a quick look and let me know if I missed anything:

1)  Detach the SVM mirrors on the failing drive
===
metadetach -f d0 d20
metaclear d20
metadetach -f d1 d21
metaclear d21
metadetach -f d3 d23
metaclear d23

2)  Remove the metadbs from the failing drive:
===
metadb -f -d c0t1d0s6
metadb -f -d c0t1d0s7

3)  Offline the ZFS mirror slice
===
zpool offline hrlpool c0t1d0s4

4)  At this point it should be safe to remove the drive.  All SVM mirrors are 
detached, the metadbs on the failed drive are deleted, and the ZFS slice is 
offline.

5)  Insert and partition the new drive so its partitions match the working 
drive's (one way to copy the label is sketched after this plan).

6)  Create the SVM metadevices and attach them
===
metainit d20 1 1 c0t1d0s0
metattach d0 d20
metainit d21 1 1 c0t1d0s1
metattach d1 d21
metainit d23 1 1 c0t1d0s3
metattach d3 d23

7)  Add the metadbs back to the new drive
===
metadb -a -f -c2 c0t1d0s6 c0t1d0s7

8)  Add the ZFS slice back into the mirrored pool
===
zpool replace hrlpool c0t1d0s4
zpool online hrlpool c0t1d0s4

DONE

The drive should be functioning at this point.
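
For step 5, the usual way to clone the label from the good disk, plus a few sanity checks for the end of the procedure. This is only a sketch; it assumes the pool is really named hrlpool (as in step 8) and the metadevices are d0/d1/d3 as above:

# copy the partition table from the healthy disk to the replacement (step 5)
prtvtoc /dev/rdsk/c0t0d0s2 | fmthard -s - /dev/rdsk/c0t1d0s2

# after step 8: submirrors should show Resyncing/Okay, replicas should exist
# on both disks, and the pool should be resilvering onto c0t1d0s4
metastat d0 d1 d3
metadb
zpool status hrlpool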

Does this look correct?  Have I missed anything obvious?

I know this isn't totally ZFS related, but I wasn't sure where to put it since 
it has both SVM and ZFS mirrored slices.

Thanks in advance for any input.
 
 


[zfs-discuss] Replacing a drive using ZFS

2007-02-21 Thread Matt Cohen
We have a system with two drives in it, part UFS, part ZFS.  It's a software-mirrored 
system with slices 0, 1, and 3 set up as small UFS slices, and slice 4 on 
each drive being the ZFS slice.

One of the drives is failing and we need to replace it.

I just want to make sure I have the correct order of things before I do this.

This is our pool:
NAME          STATE     READ WRITE CKSUM
mainpool      ONLINE       0     0     0
  mirror      ONLINE       0     0     0
    c0t0d0s4  ONLINE       0     0   243
    c0t1d0s4  ONLINE       0     0     0

1)  zpool detach mainpool c0t0d0s4
2)  power down the system, replace the faulty drive
3)  reboot the system, set up the slices to match the current layout
4)  zpool attach mainpool c0t1d0s4 c0t0d0s4

This will add the new drive back into the mirrored pool and sync the new slice 
4 back into the mirror, correct?
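
A note on step 4, offered as a sketch rather than a definitive answer: rejoining the replacement to the surviving side of the mirror is done with zpool attach (naming the surviving device c0t1d0s4), because zpool add would create a new top-level vdev and stripe it alongside the existing disk instead of resilvering into the mirror. The resilver can then be watched with:

# the newly attached c0t0d0s4 should appear under the mirror and resilver
zpool status mainpool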
 
 