Re: [zfs-discuss] zfs global hot spares?

2011-06-24 Thread Fred Liu
 
 zpool status -x output would be useful. These error reports do not
 include a pointer to the faulty device. fmadm can also give more info.

Yes. Thanks.
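
For anyone who hits the same thing, the checks suggested here boil down to
roughly the following (fmadm needs sufficient privileges):

 zpool status -x
 fmadm faulty

zpool status -x prints only pools that have problems, and fmadm faulty lists
the current FMA diagnoses, which normally name the suspect device.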

 mpathadm can be used to determine the device paths for this disk.
 
 Notice how the disk goes offline multiple times. There is some sort of
 recovery going on here that continues to fail later. I call these
 wounded soldiers because they take a lot more care than a dead soldier.
 You would be better off if the drive completely died.

I think that only works with mpt_sas (SAS2), where multipathing is forcibly
enabled. I agree the disk was in a critical state before it died. The
difficult point is that the OS could NOT automatically offline the wounded
disk in the middle of the night (which may be what leads to the ensuing SCSI
reset storm), and nobody was around to do it by hand.
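
When someone is around, the manual workaround is simple enough (device names
below are only placeholders):

 zpool offline tank c0t1d0
 zpool replace tank c0t1d0 c0t2d0   # once a replacement disk is in place

but of course that does not help at 3am, which is exactly where a global hot
spare kicking in automatically would earn its keep.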

 
 In my experience they start randomly and in some cases are not
 reproducible.
 

So it seems fairly unpredictable, doesn't it? :-)

 
 Are you asking for fault tolerance?  If so, then you need a fault-tolerant
 system like a Tandem. If you are asking for a way to build a cost-effective
 solution using commercial, off-the-shelf (COTS) components, then that is
 far beyond what can be easily said in a forum posting.
  -- richard

Yeah, high availability is another topic, and one with even more technical challenges.

Anyway, thank you very much.

Fred


[zfs-discuss] Fixing txg commit frequency

2011-06-24 Thread Sašo Kiselkov
Hi All,

I'd like to ask whether there is a way to enforce a fixed txg commit
frequency on ZFS. I'm doing a large amount of video streaming from a storage
pool while also slowly and continuously writing a constant volume of data to
it (through a normal file descriptor, *not* in O_SYNC). When the read volume
goes over a certain threshold (and average pool load exceeds ~50%), ZFS
thinks it's running out of steam on the storage pool and starts committing
transactions more often, which results in even greater load on the pool.
This leads to a sudden spike in I/O utilization on the pool, roughly as
follows:

 # streaming clients    pool load [%]
15  8%
20 11%
40 22%
60 33%
80 44%
--- around here txg timeouts start to shorten ---
85 60%
90 70%
95 85%

My application does a fair bit of caching and prefetching, so I have zfetch
disabled and primarycache set to metadata only. Also, reads happen (on a
per-client basis) relatively infrequently, so I can easily tolerate the pool
pausing reads for a few seconds and just writing data. The problem is that
ZFS starts alternating between reads and writes very quickly, which in turn
starves me of IOPS and results in a huge load spike. Judging by the load
numbers at up to around 80 concurrent clients, I suspect this pool could
handle 150 concurrent clients, but because of this spike I top out at around
95-100.
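
To make the question concrete: what I'm effectively after is a way to pin
the txg interval, along the lines of the usual (unsupported) tunables; the
values below are only illustrative:

 # in /etc/system, takes effect at next boot
 set zfs:zfs_txg_timeout = 30

 # or live, on a running system
 echo zfs_txg_timeout/W0t30 | mdb -kw

What I don't know is whether holding zfs_txg_timeout fixed actually stops
the write throttle from shortening the commits under load, hence the
question.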

Regards,
--
Saso


Re: [zfs-discuss] Why can zfs receive an incremental stream?

2011-06-24 Thread Maurice Volaski
This is a known bug, CR 7043668.


Re: [zfs-discuss] Cannot format 2.5TB ext disk (EFI)

2011-06-24 Thread Brandon High
On Thu, Jun 23, 2011 at 1:20 PM, Richard Elling
richard.ell...@gmail.com wrote:
 2TB limit for 32-bit Solaris. If you hit this, then you'll find a lot of
 complaints at boot. By default, an Ultra-24 should boot 64-bit. Dunno about
 the HBA, though...

I think the limit is 1TB for 32-bit. I tried to use 2TB drives on an Atom
N270-based board (which can only run a 32-bit kernel) and they were not
recognized, though they worked fine under FreeBSD.
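
Either way, it's worth confirming which kernel the Ultra-24 actually booted
before blaming the HBA; something like:

 isainfo -b      # prints 64 or 32
 isainfo -kv     # e.g. "64-bit amd64 kernel modules"

If that reports 32-bit, the disk size limit applies regardless of the
controller.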

-B

-- 
Brandon High : bh...@freaks.com


[zfs-discuss] Question: adding a single drive to a mirrored zpool

2011-06-24 Thread alex stun
Hello,
I have a zpool consisting of several mirrored vdevs. I was in the middle of 
adding another mirrored vdev today, but found out one of the new drives is bad. 
I will be receiving the replacement drive in a few days. In the mean time, I 
need the additional storage on my zpool.

Is the command to add a single drive to a mirrored zpool:
zpool add -f tank drive1?

Does the -f command cause any issues?
I realize that there will be no redundancy on that drive for a few days, and I 
can live with that as long as the rest of my zpool remains intact.

Thanks


Re: [zfs-discuss] Question: adding a single drive to a mirrored zpool

2011-06-24 Thread Craig Cory
Alex,

alex stun wrote:
 Hello,
 I have a zpool consisting of several mirrored vdevs. I was in the middle of
 adding another mirrored vdev today, but found out one of the new drives is
 bad. I will be receiving the replacement drive in a few days. In the mean
 time, I need the additional storage on my zpool.

 Is the command to add a single drive to a mirrored zpool:
 zpool add -f tank drive1?

 Does the -f command cause any issues?
 I realize that there will be no redundancy on that drive for a few days, and I
 can live with that as long as the rest of my zpool remains intact.

 Thanks

This is exactly what you'll need to do. Without the -f, zpool will refuse
and warn you about the mismatched replication level. So, to get the space:

 zpool add -f poolname single-disk

Then later, when the replacement disk arrives, turn that vdev back into a
mirror with:

 zpool attach poolname single-disk newdisk
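
With the pool named in the original post, that would look something like
this (device names are just placeholders):

 zpool add -f tank c2t5d0
 zpool attach tank c2t5d0 c2t6d0

and zpool status should then show the new c2t5d0/c2t6d0 mirror resilvering.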

HTH

Craig


-- 
Craig Cory
 Senior Instructor :: ExitCertified
 : Oracle/Sun Certified System Administrator
 : Oracle/Sun Certified Network Administrator
 : Oracle/Sun Certified Security Administrator
 : Symantec/Veritas Certified Instructor
 : RedHat Certified Systems Administrator

+-+
 ExitCertified :: Excellence in IT Certified Education

  Certified training with Oracle, Sun Microsystems, Apple, Symantec, IBM,
   Red Hat, MySQL, Hitachi Storage, SpringSource and VMWare.

 1.800.803.EXIT (3948)  |  www.ExitCertified.com
+-+


Re: [zfs-discuss] Question: adding a single drive to a mirrored zpool

2011-06-24 Thread Freddie Cash
On Fri, Jun 24, 2011 at 2:25 PM, alex stun a...@stundzia.com wrote:

 I have a zpool consisting of several mirrored vdevs. I was in the middle of
 adding another mirrored vdev today, but found out one of the new drives is
 bad. I will be receiving the replacement drive in a few days. In the mean
 time, I need the additional storage on my zpool.

 Is the command to add a single drive to a mirrored zpool:
 zpool add -f tank drive1?

 Does the -f command cause any issues?
 I realize that there will be no redundancy on that drive for a few days,
 and I can live with that as long as the rest of my zpool remains intact.


Note:  you will have 0 redundancy on the ENTIRE POOL, not just that one
vdev.  If that non-redundant vdev dies, you lose the entire pool.

Are you willing to take that risk, if one of the new drives is already DoA?

-- 
Freddie Cash
fjwc...@gmail.com


[zfs-discuss] External Backup with Time Slider Manager

2011-06-24 Thread Alex
Hi all,

I'm trying to understand the external backup feature of Time Slider Manager
in Solaris 11 Express. In its window, I have "Replicate backups to an external
drive" checked. The "Backup Device" is set to the mount point of my backup
drive. Under "File Systems To Back Up", "Select" and "Replicate" are checked
for everything except the backup filesystem itself.

After having set this up a few times, only two filesystems have been backed
up: my root fs, and an fs for one of my zones. These are located at
/backup/TIMESLIDER/hostname/fsname.

I let it sit for a few weeks and checked again, thinking maybe it would start
with a weekly snapshot or something, but that does not seem to be the case.
Restarting the time-slider and auto-snapshot services doesn't seem to do
anything either, and there is nothing useful in any of the relevant SMF log
files.

My understanding is that this feature is supposed to replicate via rsync all 
Time Slider snapshots for all filesystems that have been selected to replicate, 
to the backup drive. I wonder if this understanding is incorrect, or if I'm 
doing something wrong.

The relevant time-slider services appear as follows:

online Jun_11   svc:/application/time-slider:default
online Jun_11   svc:/system/filesystem/zfs/auto-snapshot:daily
online Jun_11   svc:/system/filesystem/zfs/auto-snapshot:monthly
online Jun_11   svc:/system/filesystem/zfs/auto-snapshot:frequent
online Jun_11   svc:/system/filesystem/zfs/auto-snapshot:weekly
online Jun_11   svc:/system/filesystem/zfs/auto-snapshot:hourly
online Jun_11   svc:/application/time-slider/plugin:rsync
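
Is there anything else worth checking from the command line? The only other
things I can think of are dumping the rsync plugin's SMF configuration and
tailing its log, roughly:

 svcprop svc:/application/time-slider/plugin:rsync
 svcs -xv svc:/application/time-slider/plugin:rsync
 tail -50 `svcs -L svc:/application/time-slider/plugin:rsync`

in case some property has to be set there for the replication to kick in.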

The capacity of the backup drive is 928GB; the total capacity of all
filesystems to back up is 1161GB, but only 410GB of that is actually used.

Thanks,
Alex