Re: [zfs-discuss] Using multiple logs on single SSD devices

2010-08-03 Thread Jonathan Loran
On Aug 2, 2010, at 8:18 PM, Edward Ned Harvey wrote: Because you're at pool v15, it does not matter if the log device fails while you're running, or you're offline and trying

[zfs-discuss] Does zpool clear delete corrupted files

2009-06-01 Thread Jonathan Loran
? I'm going to perform a full backup of this guy (not so easy on my budget), and I would rather only get the good files. Thanks, Jon

Re: [zfs-discuss] Does zpool clear delete corrupted files

2009-06-01 Thread Jonathan Loran
, Paul Choi wrote: zpool clear just clears the list of errors (and # of checksum errors) from its stats. It does not modify the filesystem in any manner. You run zpool clear to make the zpool forget that it ever had any issues. -Paul Jonathan Loran wrote: Hi list, First off: # cat /etc
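
A minimal illustration of the sequence Paul describes, assuming a hypothetical pool named tank:

    # zpool status -v tank    <- lists the files with unrecoverable errors
    # zpool clear tank        <- resets the error counters; the data itself is untouched
    # zpool scrub tank        <- re-reads every block; real corruption gets flagged again

A clear followed by a scrub is the honest test: errors that reappear point at files that are genuinely damaged and should be excluded from the backup.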

Re: [zfs-discuss] Inexpensive ZFS home server

2008-11-12 Thread Jonathan Loran
is not good from a ZFS perspective. How many SATA plugs are there on the MB in this guy? Jon

Re: [zfs-discuss] [storage-discuss] ZFS Success Stories

2008-10-20 Thread Jonathan Loran
on this list! :-) Thanks!

Re: [zfs-discuss] ZFS poor performance on Areca 1231ML

2008-09-26 Thread Jonathan Loran

Re: [zfs-discuss] pulling disks was: ZFS hangs/freezes after disk failure,

2008-08-28 Thread Jonathan Loran
: Fe = 46% failures/month * 12 months = 5.52 failures/year. Jon

Re: [zfs-discuss] x4500 dead HDD, hung server, unable to boot.

2008-08-11 Thread Jonathan Loran
Jorgen Lundman wrote: # /usr/X11/bin/scanpci | /usr/sfw/bin/ggrep -A1 vendor 0x11ab device 0x6081 pci bus 0x0001 cardnum 0x01 function 0x00: vendor 0x11ab device 0x6081 Marvell Technology Group Ltd. MV88SX6081 8-port SATA II PCI-X Controller But it claims resolved for our version: SunOS

Re: [zfs-discuss] The best motherboard for a home ZFS fileserver

2008-07-31 Thread Jonathan Loran
Miles Nordin wrote: s == Steve [EMAIL PROTECTED] writes: s http://www.newegg.com/Product/Product.aspx?Item=N82E16813128354 no ECC: http://en.wikipedia.org/wiki/List_of_Intel_chipsets#Core_2_Chipsets This MB will take these:

Re: [zfs-discuss] Supermicro AOC-SAT2-MV8 hang when drive removed

2008-07-30 Thread Jonathan Loran
be considerably different since NFS requests that its data be committed to disk. Bob

Re: [zfs-discuss] Supermicro AOC-SAT2-MV8 hang when drive removed

2008-07-29 Thread Jonathan Loran

Re: [zfs-discuss] ZFS deduplication

2008-07-08 Thread Jonathan Loran
profile is just like Tim's: terabytes of satellite data. I'm going to guess that the d11p (dedup) ratio won't be fantastic for us. I sure would like to measure it though. Jon

Re: [zfs-discuss] ZFS deduplication

2008-07-08 Thread Jonathan Loran
/erickustarz/entry/how_dedupalicious_is_your_pool Unfortunately we are on Solaris 10 :( Can I get a zdb for ZFS v4 that will dump those checksums? Jon
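
If your zdb build supports the -S option that blog post describes (an assumption; older Solaris 10 releases may not ship it), the measurement itself is a one-liner against a hypothetical pool name:

    # zdb -S tank    <- walks the pool, hashing blocks to estimate duplicate-block savings

It is read-only, but it touches every block, so expect it to take about as long as a scrub.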

Re: [zfs-discuss] ZFS deduplication

2008-07-08 Thread Jonathan Loran
reference count. If a block has few references, it should expire first, and vice versa: blocks with many references should be the last out. With all the savings on disks, think how much RAM you could buy ;) Jon

Re: [zfs-discuss] ZFS deduplication

2008-07-07 Thread Jonathan Loran
be very excited to see block-level ZFS deduplication roll out, especially since we already have the infrastructure in place using Solaris/ZFS. Cheers, Jon

Re: [zfs-discuss] Cannot delete errored file

2008-06-13 Thread Jonathan Loran
, but make sure your power supply is running clean. I can't tell you how many times I've seen very strange and intermittent system errors occur from a flaky power supply. Jon

Re: [zfs-discuss] Inconsistencies with scrub and zdb

2008-05-06 Thread Jonathan Loran
Jonathan Loran wrote: Since no one has responded to my thread, I have a question: Is zdb suitable to run on a live pool? Or should it only be run on an exported or destroyed pool? In fact, I see that it has been asked before on this forum, but is there a user's guide to zdb

Re: [zfs-discuss] Inconsistencies with scrub and zdb

2008-05-05 Thread Jonathan Loran

[zfs-discuss] Inconsistencies with scrub and zdb

2008-05-04 Thread Jonathan Loran
Hi List, First of all: S10u4 120011-14. So I have a weird situation. Earlier this week, I finally mirrored up two iSCSI-based pools. I had been wanting to do this for some time, because the availability of the data in these pools is important. One pool mirrored just fine, but the other
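
For context, converting a single-device pool into a mirror is a one-liner; a sketch with hypothetical device names:

    # zpool attach tank c2t0d0 c3t0d0    <- c2t0d0 = the existing iSCSI LUN, c3t0d0 = the new one
    # zpool status tank                  <- watch the resilver progress

Once the resilver completes, the pool survives the loss of either side of the mirror.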

Re: [zfs-discuss] share zfs hierarchy over nfs

2008-04-30 Thread Jonathan Loran
the Solaris map, thus: auto_home: * zfs-server:/home/& Sorry to be so off (ZFS) topic. Jon
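
A minimal sketch of that setup, assuming the hostname zfs-server from the post and the stock Solaris autofs files (the & expands to the matched key, i.e. the username):

    # /etc/auto_master -- present by default:
    /home   auto_home

    # /etc/auto_home -- one wildcard line replaces per-user entries:
    *       zfs-server:/home/&

With one ZFS filesystem per user under /home on the server, each login triggers its own NFS mount on demand.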

Re: [zfs-discuss] ZFS - Implementation Successes and Failures

2008-04-29 Thread Jonathan Loran
Dominic Kay wrote: Hi. Firstly, apologies for the spam if you got this email via multiple aliases. I'm trying to document a number of common scenarios where ZFS is used as part of the solution, such as email server, home server, RDBMS, and so forth, but taken from real implementations where

Re: [zfs-discuss] ZFS for write-only media?

2008-04-22 Thread Jonathan Loran
Bob Friesenhahn wrote: The problem here is that by putting the data away from your machine, you lose the chance to scrub it on a regular basis, i.e. there is always the risk of silent corruption. Running a scrub is pointless since the media is not writeable. :-) But that's the

Re: [zfs-discuss] ZFS for write-only media?

2008-04-22 Thread Jonathan Loran
Bob Friesenhahn wrote: On Tue, 22 Apr 2008, Jonathan Loran wrote: But that's the point. You can't correct silent errors on write once media because you can't write the repair. Yes, you can correct the error (at time of read) due to having both redundant media, and redundant blocks

Re: [zfs-discuss] 24-port SATA controller options?

2008-04-15 Thread Jonathan Loran
Luke Scharf wrote: Maurice Volaski wrote: Perhaps providing the computations rather than the conclusions would be more persuasive on a technical list; 2 16-disk SATA arrays in RAID 5, 2 16-disk SATA arrays in RAID 6, 1 9-disk SATA array in RAID 5. 4 drive failures over

Re: [zfs-discuss] OpenSolaris ZFS NAS Setup

2008-04-10 Thread Jonathan Loran
Chris Siebenmann wrote: | What you're saying is independent of the iqn id? Yes. SCSI objects (including iSCSI ones) respond to specific SCSI INQUIRY commands with various 'VPD' pages that contain information about the drive/object, including serial number info. Some Googling turns up:

Re: [zfs-discuss] OpenSolaris ZFS NAS Setup

2008-04-09 Thread Jonathan Loran
Just to report back to the list... Sorry for the lengthy post. So I've tested the iSCSI-based ZFS mirror on Sol 10u4, and it does more or less work as expected. If I unplug one side of the mirror - unplug or power down one of the iSCSI targets - I/O to the zpool stops for a while, perhaps a

Re: [zfs-discuss] [storage-discuss] OpenSolaris ZFS NAS Setup

2008-04-06 Thread Jonathan Loran
kristof wrote: If you have a mirrored iSCSI zpool, it will NOT panic when 1 of the submirrors is unavailable. zpool status will hang for some time, but after, I think, 300 seconds it will mark the device as unavailable. The panic was the default in the past, and it only occurs if all

Re: [zfs-discuss] OpenSolaris ZFS NAS Setup

2008-04-04 Thread Jonathan Loran
This guy seems to have had lots of fun with iSCSI :) http://web.ivy.net/~carton/oneNightOfWork/20061119-carton.html This is scaring the heck out of me. I have a project to create a zpool mirror out of two iSCSI targets, and if the failure of one of them will panic my system, that will

Re: [zfs-discuss] Backup-ing up ZFS configurations

2008-03-25 Thread Jonathan Loran
Bob Friesenhahn wrote: On Tue, 25 Mar 2008, Robert Milkowski wrote: As I wrote before - it's not only about RAID config - what if you have hundreds of file systems, with some share{nfs|iscsi|cifs} enabled with specific parameters, then specific file system options, etc. Some zfs-related
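
One low-tech way to capture that state alongside the backups, as a sketch (pool name hypothetical):

    # zpool history tank > tank.pool-history       <- every zpool/zfs command run against the pool
    # zfs get -rH all tank > tank.fs-properties    <- per-filesystem properties, incl. share* settings

Neither output restores anything by itself, but together they document the layout and options well enough to rebuild by hand.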

Re: [zfs-discuss] zfs backups to tape

2008-03-14 Thread Jonathan Loran
CIFS compatibility, and it is the way the industry will be moving. Jon

Re: [zfs-discuss] zfs backups to tape

2008-03-14 Thread Jonathan Loran
Robert Milkowski wrote: Hello Jonathan, Friday, March 14, 2008, 9:48:47 PM, you wrote: Carson Gaspar wrote: Bob Friesenhahn wrote: On Fri, 14 Mar 2008, Bill Shannon wrote: What's the best way to back up a ZFS filesystem to tape, where the size of the filesystem is

Re: [zfs-discuss] Mirroring to a smaller disk

2008-03-04 Thread Jonathan Loran
Patrick Bachmann wrote: Jonathan, On Tue, Mar 04, 2008 at 12:37:33AM -0800, Jonathan Loran wrote: I'm not sure I follow how this would work. The keyword here is thin provisioning. The sparse zvol only uses as much space as the actual data needs. So, if you use a sparse zvol, you
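
A sketch of the thin-provisioning side, with hypothetical names and sizes:

    # zfs create -s -V 500G tank/backing    <- -s makes the zvol sparse: no space reserved up front
    # zfs get volsize,refreservation,used tank/backing

For a sparse zvol the refreservation reads "none", and "used" grows only as real data lands, which is what lets the volume advertise more capacity than the smaller disk behind it.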

[zfs-discuss] Does a mirror increase read performance

2008-02-28 Thread Jonathan Loran
Quick question: If I create a ZFS mirrored pool, will the read performance get a boost? In other words, will the data/parity be read round robin between the disks, or do both mirrored sets of data and parity get read off of both disks? The latter case would have a CPU expense, so I would

Re: [zfs-discuss] Does a mirror increase read performance

2008-02-28 Thread Jonathan Loran
Roch Bourbonnais wrote: On 28 Feb 2008, at 20:14, Jonathan Loran wrote: Quick question: If I create a ZFS mirrored pool, will the read performance get a boost? In other words, will the data/parity be read round robin between the disks, or do both mirrored sets of data and parity get

Re: [zfs-discuss] Does a mirror increase read performance

2008-02-28 Thread Jonathan Loran
Roch Bourbonnais wrote: On 28 Feb 2008, at 21:00, Jonathan Loran wrote: Roch Bourbonnais wrote: On 28 Feb 2008, at 20:14, Jonathan Loran wrote: Quick question: If I create a ZFS mirrored pool, will the read performance get a boost? In other words, will the data/parity be read

Re: [zfs-discuss] Can ZFS be event-driven or not?

2008-02-25 Thread Jonathan Loran
David Magda wrote: On Feb 24, 2008, at 01:49, Jonathan Loran wrote: In some circles, CDP is big business. It would be a great ZFS offering. ZFS doesn't have it built-in, but AVS may be an option in some cases: http://opensolaris.org/os/project/avs/ Point in time copy (as AVS offers

Re: [zfs-discuss] Which DTrace provider to use

2008-02-14 Thread Jonathan Loran
Marion Hakanson wrote: [EMAIL PROTECTED] said: It's not that old. It's a Supermicro system with a 3ware 9650SE-8LP. Open-E iSCSI-R3 DOM module. The system is plenty fast. I can pretty handily pull 120MB/sec from it, and write at over 100MB/sec. It falls apart more on random I/O. The

Re: [zfs-discuss] Which DTrace provider to use

2008-02-14 Thread Jonathan Loran
be a good provider to hit up for the VFS layer. I'd also check syscall latencies - it might be too obvious, but it can be worth checking (e.g., if you discover those long latencies are only on the open syscall)... Brendan

Re: [zfs-discuss] Which DTrace provider to use

2008-02-14 Thread Jonathan Loran
[EMAIL PROTECTED] wrote: On Tue, Feb 12, 2008 at 10:21:44PM -0800, Jonathan Loran wrote: Thanks for any help anyone can offer. I have faced a similar problem (although not exactly the same) and was going to monitor the disk queue with DTrace but couldn't find any docs/urls about

Re: [zfs-discuss] Which DTrace provider to use

2008-02-13 Thread Jonathan Loran
Marion Hakanson wrote: [EMAIL PROTECTED] said: ... I know, I know, I should have gone with a JBOD setup, but it's too late for that in this iteration of this server. When we set this up, I had the gear already, and it's not in my budget to get new stuff right now. What kind of

[zfs-discuss] Which DTrace provider to use

2008-02-12 Thread Jonathan Loran
Hi List, I'm wondering if one of you expert DTrace gurus can help me. I want to write a DTrace script to print out a histogram of how long IO requests sit in the service queue. I can output the results with the quantize method. I'm not sure which provider I should be using for this.
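
One well-worn io-provider idiom for this, as a sketch: note it measures issue-to-completion latency, i.e. queue wait plus device service time, rather than the queue wait alone.

    #!/usr/sbin/dtrace -s

    /* stash the issue timestamp, keyed on the buf pointer */
    io:::start
    {
            ts[arg0] = timestamp;
    }

    /* on completion, feed the elapsed time into a power-of-two histogram */
    io:::done
    /ts[arg0]/
    {
            @lat["I/O latency (ns)"] = quantize(timestamp - ts[arg0]);
            ts[arg0] = 0;
    }

Run it, let the workload settle, then Ctrl-C prints the quantize histogram.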

Re: [zfs-discuss] OpenSolaris, ZFS and Hardware RAID,

2008-02-10 Thread Jonathan Loran
Anton B. Rang wrote: Careful here. If your workload is unpredictable, RAID 6 (and RAID 5, for that matter) will break down under highly randomized write loads. Oh? What precisely do you mean by break down? RAID 5's write performance is well-understood and it's used successfully in

Re: [zfs-discuss] OpenSolaris, ZFS and Hardware RAID, a recipe for success?

2008-02-09 Thread Jonathan Loran
Richard Elling wrote: Nick wrote: Using the RAID card's capability for RAID 6 sounds attractive? Assuming the card works well with Solaris, this sounds like a reasonable solution. Careful here. If your workload is unpredictable, RAID 6 (and RAID 5, for that matter)

Re: [zfs-discuss] ZIL controls in Solaris 10 U4?

2008-02-02 Thread Jonathan Loran
is that the requirement for this very stability is why we haven't seen the features in the ZFS code we need in Solaris 10. Thanks, Jon Mike Gerdts wrote: On Jan 30, 2008 2:27 PM, Jonathan Loran [EMAIL PROTECTED] wrote: Before ranting any more, I'll do the test of disabling the ZIL. We may have to build out
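
For Solaris 10 of that era, the test Jon mentions is typically done with the zil_disable tunable; a sketch, strictly for benchmarking, since it breaks synchronous-write guarantees for NFS clients:

    # live, applies to filesystems mounted after the change:
    echo zil_disable/W0t1 | mdb -kw

    # or persistent across reboots -- add to /etc/system:
    set zfs:zil_disable = 1

Comparing NFS write throughput before and after shows how much of the pain is ZIL latency and therefore how much a fast slog device would buy back.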

Re: [zfs-discuss] ZIL controls in Solaris 10 U4?

2008-01-31 Thread Jonathan Loran
10 U? as a preferred method. Jon

Re: [zfs-discuss] ZIL controls in Solaris 10 U4?

2008-01-30 Thread Jonathan Loran
Neil Perrin wrote: Roch - PAE wrote: Jonathan Loran writes: Is it true that Solaris 10 u4 does not have any of the nice ZIL controls that exist in the various recent Open Solaris flavors? I would like to move my ZIL to solid state storage, but I fear I can't do it until I

Re: [zfs-discuss] ZIL controls in Solaris 10 U4?

2008-01-30 Thread Jonathan Loran

[zfs-discuss] ZIL controls in Solaris 10 U4?

2008-01-29 Thread Jonathan Loran
off to see how my NFS-on-ZFS performance is affected before spending the $'s. Anyone know when we will see this in Solaris 10? Thanks, Jon

Re: [zfs-discuss] hardware for zfs home storage

2008-01-14 Thread Jonathan Loran

Re: [zfs-discuss] rename(2) (mv(1)) between ZFS filesystems in the same zpool

2008-01-03 Thread Jonathan Loran
Joerg Schilling wrote: Carsten Bormann [EMAIL PROTECTED] wrote: On Dec 29, 2007, at 08:33, Jonathan Loran wrote: We snapshot the file as it exists at the time of the mv in the old file system until all referring file handles are closed, then destroy the single-file snap. I know

Re: [zfs-discuss] rename(2) (mv(1)) between ZFS filesystems in the same zpool

2007-12-30 Thread Jonathan Loran
with the semantics. It's not just a path change, as in a directory mv. Jon

Re: [zfs-discuss] rename(2) (mv(1)) between ZFS filesystems in the same zpool

2007-12-28 Thread Jonathan Loran

Re: [zfs-discuss] Is round-robin I/O correct for ZFS?

2007-12-18 Thread Jonathan Loran
Gary Mills wrote: On Fri, Dec 14, 2007 at 10:55:10PM -0800, Jonathan Loran wrote: This is the same configuration we use on 4 separate servers (T2000, two X4100, and a V215). We do use a different iSCSI solution, but we have the same multi path config setup with scsi_vhci. Dual GigE

Re: [zfs-discuss] Is round-robin I/O correct for ZFS?

2007-12-18 Thread Jonathan Loran
Jonathan Loran wrote: Gary Mills wrote: On Fri, Dec 14, 2007 at 10:55:10PM -0800, Jonathan Loran wrote: This is the same configuration we use on 4 separate servers (T2000, two X4100, and a V215). We do use a different iSCSI solution, but we have the same multi path config setup

Re: [zfs-discuss] Is round-robin I/O correct for ZFS?

2007-12-14 Thread Jonathan Loran
devices, of course, but by two different paths. Is this a correct configuration for ZFS? I assume it's safe, but I thought I should check.

Re: [zfs-discuss] HAMMER

2007-10-17 Thread Jonathan Loran

Re: [zfs-discuss] HAMMER

2007-10-17 Thread Jonathan Loran
Richard Elling wrote: Jonathan Loran wrote: snip... Do not assume that a compressed file system will send compressed. IIRC, it does not. Let's say, if it were possible to detect the remote compression support, couldn't we send it compressed? With higher compression rates, wouldn't
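
In the meantime, the usual workaround is to compress the stream in transit rather than relying on zfs send itself; a minimal sketch, with hypothetical dataset and host names:

    # send a snapshot compressed over the wire; gzip -1 keeps the CPU cost low
    zfs send tank/data@today | gzip -1 | \
        ssh backuphost 'gunzip | zfs receive backup/data'

The stream is decompressed before zfs receive, so this saves bandwidth, not storage on the far side.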

Re: [zfs-discuss] Direct I/O ability with zfs?

2007-10-05 Thread Jonathan Loran
Nicolas Williams wrote: On Thu, Oct 04, 2007 at 10:26:24PM -0700, Jonathan Loran wrote: I can envision a highly optimized, pipelined system, where writes and reads pass through checksum, compression, and encryption ASICs that also locate data properly on disk. ... I've argued before

Re: [zfs-discuss] Direct I/O ability with zfs?

2007-10-04 Thread Jonathan Loran

Re: [zfs-discuss] enterprise scale redundant Solaris 10/ZFS server providing NFSv4/CIFS

2007-09-24 Thread Jonathan Loran
Paul B. Henson wrote: On Sat, 22 Sep 2007, Jonathan Loran wrote: My gut tells me that you won't have much trouble mounting 50K file systems with ZFS. But who knows until you try. My question for you is: can you lab this out? Yeah, after this research phase has been completed

Re: [zfs-discuss] enterprise scale redundant Solaris 10/ZFS server providing NFSv4/CIFS

2007-09-22 Thread Jonathan Loran
a user's files are when they want to access them :(.

Re: [zfs-discuss] hardware sizing for a zfs-based system?

2007-09-13 Thread Jonathan Loran