[zfs-discuss] ZFS checksum error detection

2007-03-16 Thread Thomas Nau
Hi all. A quick question about the checksum error detection routines in ZFS. Surely ZFS can detect checksum errors in a redundant environment, but what about a non-redundant one? We connected a single RAID5 array to a v440 as an NFS server and while doing backups and the like we see the
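
For reference, even a non-redundant pool reports such errors; a minimal check, assuming a pool named tank (the pool name is an assumption):

    # list per-device checksum error counters and any files
    # affected by unrecoverable errors
    zpool status -v tank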

Re: [zfs-discuss] Re: ZFS checksum error detection

2007-03-17 Thread Thomas Nau
On Fri, 16 Mar 2007, Anton B. Rang wrote: It's possible (if unlikely) that you are only getting checksum errors on metadata. Since ZFS always internally mirrors its metadata, even on non-redundant pools, it can recover from metadata corruption which does not affect all copies. (If there is
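
One way to extend similar protection to user data on a non-redundant pool is the copies property; a hedged sketch, dataset name is an assumption (it only applies to data written after the change):

    # store two copies of every data block for this dataset
    zfs set copies=2 tank/data
    zfs get copies tank/data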

[zfs-discuss] ZFS over iSCSI question

2007-03-23 Thread Thomas Nau
Dear all. I've set up the following scenario: Galaxy 4200 running OpenSolaris build 59 as iSCSI target; the remaining disk space of the two internal drives, 90GB in total, is used as a zpool for the two 32GB volumes exported via iSCSI. The initiator is an up-to-date Solaris 10 11/06 x86 box
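
A rough sketch of such a setup on builds of that era, assuming the shareiscsi property is available; pool, volume, and address names are made up:

    # target: carve a 32GB zvol out of the pool and export it via iSCSI
    zfs create -V 32g tank/iscsivol0
    zfs set shareiscsi=on tank/iscsivol0

    # initiator: discover the target and build a pool on top of it
    iscsiadm add discovery-address 192.168.1.10:3260
    iscsiadm modify discovery --sendtargets enable
    devfsadm -i iscsi
    zpool create itank c2t<IQN>d0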

Re: [zfs-discuss] ZFS over iSCSI question

2007-03-23 Thread Thomas Nau
On Fri, 23 Mar 2007, Roch - PAE wrote: I assume the rsync is not issuing fsyncs (and its files are not opened O_DSYNC). If so, rsync just works against the filesystem cache and does not commit the data to disk. You might want to run sync(1M) after a successful rsync. A larger rsync would
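
A minimal illustration of that suggestion, with made-up paths:

    # rsync itself issues no fsync(), so push the cached data out afterwards
    rsync -a /data/ /tank/backup/
    sync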

Re: [zfs-discuss] ZFS over iSCSI question

2007-03-23 Thread Thomas Nau
Dear Fran Casper I'd tend to disagree with that. POSIX/SUS does not guarantee data makes it to disk until you do an fsync() (or open the file with the right flags, or other techniques). If an application REQUIRES that data get to disk, it really MUST DTRT. Indeed; want your data safe?

Re: [zfs-discuss] ZFS over iSCSI question

2007-03-23 Thread Thomas Nau
Richard, Like this? disk--zpool--zvol--iscsitarget--network--iscsiclient--zpool--filesystem--app exactly. I'm in a way still hoping that it's an iSCSI-related problem, as detecting dead hosts in a network can be a non-trivial problem and it takes quite some time for TCP to time out and inform

Re[2]: [zfs-discuss] ZFS over iSCSI question

2007-03-25 Thread Thomas Nau
Hi Robert, On Sun, 25 Mar 2007, Robert Milkowski wrote: The problem is that the failure modes are very different for networks and presumably reliable local disk connections. Hence NFS has a lot of error handling code and provides well understood error handling semantics. Maybe what you really

[zfs-discuss] Q: grow zpool build on top of iSCSI devices

2008-07-02 Thread Thomas Nau
Hi all. We are currently rolling out a number of iSCSI servers based on Thumpers (x4500) running both Solaris 10 and OpenSolaris build 90+. The targets on the machines are based on ZVOLs. Some of the clients use those iSCSI disks to build mirrored zpools. As the volume size on the x4500 can easily
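
A hedged sketch of growing such a setup; the autoexpand property and 'zpool online -e' exist only on later builds, and all names are assumptions:

    # on the x4500 target: grow the backing zvol
    zfs set volsize=64G tank/iscsivol0

    # on the client, once the initiator sees the larger LUN (later builds only)
    zpool set autoexpand=on clientpool
    zpool online -e clientpool c2t<IQN>d0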

[zfs-discuss] problems accessing ZFS snapshots

2008-07-30 Thread Thomas Nau
Dear all. I stumbled over an issue triggered by Samba while accessing ZFS snapshots. As soon as a Windows client tries to open the .zfs/snapshot folder it issues the Microsoft equivalent of ls: dir *. This gets translated by Samba all the way down into stat64(/pool/.zfs/snapshot*). The
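
For reference, the snapshot directory can be inspected directly from the shell; the dataset name is an assumption:

    # make the .zfs directory visible and list the per-snapshot subdirectories
    zfs set snapdir=visible tank/share
    ls /tank/share/.zfs/snapshot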

Re: [zfs-discuss] problems accessing ZFS snapshots

2008-07-31 Thread Thomas Nau
Tim, On Wed, 30 Jul 2008, Tim Haley wrote: Ah, ignore my previous question. We believe we found the problem and filed: 6731778 'ls *' in empty zfs snapshot directory returns EILSEQ vs. the ENOENT we get in other empty directories. The fix will likely go back today or tomorrow and be present in

[zfs-discuss] checksum errors after online'ing device

2008-08-02 Thread Thomas Nau
Dear all. As we wanted to patch one of our iSCSI Solaris servers, we had to offline the ZFS submirrors on the clients connected to that server. The devices connected to the second server stayed online, so the pools on the clients were still available, but in degraded mode. When the server came
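
The offline/online/scrub cycle described here roughly corresponds to the following, with made-up pool and device names:

    zpool offline tank c3t0d0     # before patching the iSCSI server
    zpool online tank c3t0d0      # after it is back; resilver starts
    zpool scrub tank              # verify all data
    zpool status -v tank          # watch the checksum counters
    zpool clear tank              # reset the counters once satisfied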

Re: [zfs-discuss] checksum errors after online'ing device

2008-08-02 Thread Thomas Nau
Miles On Sat, 2 Aug 2008, Miles Nordin wrote: tn == Thomas Nau [EMAIL PROTECTED] writes: tn Nevertheless during the first hour of operation after onlining tn we recognized numerous checksum errors on the formerly tn offlined device. We decided to scrub the pool and after tn

[zfs-discuss] Problem with zfs mounting in b114?

2009-05-28 Thread Thomas Nau
Dear all. We use iSCSI quite a lot, e.g. as backend for our OpenSolaris-based fileservers. After updating the machine to b114 we ran into a strange problem. The pool gets imported (listed by 'zpool list') but none of its ZFS filesystems get mounted. Exporting and reimporting manually fixes the
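
The manual workaround mentioned above boils down to (pool name assumed):

    # work around filesystems not being mounted at import time
    zpool export tank
    zpool import tank
    zfs mount -a       # mount anything still missing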

Re: [zfs-discuss] Problem with zfs mounting in b114?

2009-05-28 Thread Thomas Nau
Miles, Miles Nordin wrote: tn == Thomas Nau thomas@uni-ulm.de writes: tn After updating the machine to b114 we ran into a strange tn problem. The pool gets imported (listed by 'zpool list') but tn none of its ZFS filesystems get mounted. Exporting and tn reimporting

Re: [zfs-discuss] Set New File/Folder ZFS ACLs Automatically through Samba?

2009-07-29 Thread Thomas Nau
Jeff, On Tue, 28 Jul 2009, Jeff Hulen wrote: Do any of you know how to set the default ZFS ACLs for newly created files and folders when those files and folders are created through Samba? I want to have all new files and folders only inherit extended (non-trivial) ACLs that are set on the
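
A hedged sketch of inheritable ACLs on the ZFS side; the group name and dataset are assumptions, and whether passthrough inheritance is what Samba needs here depends on the setup:

    # keep inherited ACEs intact instead of recomputing them from the mode
    zfs set aclinherit=passthrough tank/share

    # add an ACE that new files (f) and directories (d) will inherit
    chmod A+group:staff:rwxpdDaARWcCos:fd:allow /tank/share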

Re: [zfs-discuss] Install and boot from USB stick?

2009-07-30 Thread Thomas Nau
Hi. I've tried to find any hard information on how to install, and boot, OpenSolaris from a USB stick. I've seen a few people write successful stories about this, but I can't seem to get it to work. The procedure: boot from LiveCD, insert USB drive, find it using `format', start

[zfs-discuss] ZFS dedup memory usage for DDT

2009-12-21 Thread Thomas Nau
Dear all. We use an old 48TB x4500 aka Thumper as iSCSI server based on snv_129. As the machine has only 16GB of RAM we are wondering whether that is sufficient for holding the bigger part of the DDT in memory without affecting performance by limiting the ARC. Any hints about scaling memory vs. disk space
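
A rough back-of-the-envelope check, assuming roughly 300-400 bytes of core per DDT entry (a commonly quoted figure, not an exact one) and a zdb build that supports dedup simulation:

    # simulate dedup on the existing pool and print a DDT histogram
    zdb -S tank

    # rough sizing: 48TB of 8K zvol blocks would be about
    # 48e12 / 8192 = ~5.9e9 entries; at ~320 bytes each that is
    # far beyond 16GB of RAM, so only a fraction of the DDT would fit.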

[zfs-discuss] panic after zfs mount

2010-06-13 Thread Thomas Nau
Dear all. We ran into a nasty problem the other day. One of our mirrored zpools hosts several ZFS filesystems. After a reboot (all FS mounted and in use at that time) the machine panicked (console output further down). After detaching one of the mirrors the pool fortunately imported automatically in

Re: [zfs-discuss] panic after zfs mount

2010-06-13 Thread Thomas Nau
Thanks for the link Arne. On 06/13/2010 03:57 PM, Arne Jansen wrote: Thomas Nau wrote: Dear all. We ran into a nasty problem the other day. One of our mirrored zpools hosts several ZFS filesystems. After a reboot (all FS mounted and in use at that time) the machine panicked (console output

Re: [zfs-discuss] panic after zfs mount

2010-06-13 Thread Thomas Nau
Arne, On 06/13/2010 03:57 PM, Arne Jansen wrote: Thomas Nau wrote: Dear all. We ran into a nasty problem the other day. One of our mirrored zpools hosts several ZFS filesystems. After a reboot (all FS mounted and in use at that time) the machine panicked (console output further down). After

[zfs-discuss] JBOD recommendation for ZFS usage

2011-05-30 Thread Thomas Nau
Dear all. Sorry if it's kind of off-topic for the list, but after talking to lots of vendors I'm running out of ideas... We are looking for JBOD systems which (1) hold 20+ 3.5" SATA drives (2) are rack-mountable (3) have all the nice hot-swap stuff (4) allow 2 hosts to connect via SAS (4+ lines

Re: [zfs-discuss] JBOD recommendation for ZFS usage

2011-05-30 Thread Thomas Nau
Thanks Jim and all the others who have replied so far. On 05/30/2011 11:37 AM, Jim Klimov wrote: ... So if your application can live with the unit of failover being a bunch of 21 or 24 disks - that might be a way to go. However each head would only have one connection to each backplane,

Re: [zfs-discuss] 512b vs 4K sectors

2011-07-04 Thread Thomas Nau
Richard, On 07/04/2011 03:58 PM, Richard Elling wrote: On Jul 4, 2011, at 6:42 AM, Lanky Doodle wrote: Hiya, I've been doing a lot of research surrounding this and ZFS, including some posts on here, though I am still left scratching my head. I am planning on using slow RPM drives for a
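
For reference, the sector-size assumption a pool was built with can be checked via zdb (pool name assumed):

    # ashift=9 means 512-byte sectors, ashift=12 means 4K
    zdb -C tank | grep ashift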

[zfs-discuss] ZFS performance question over NFS

2011-08-18 Thread Thomas Nau
Dear all. We finally got all the parts for our new fileserver, following several recommendations we got over this list. We use: Dell R715, 96GB RAM, dual 8-core Opterons; 1 10GE Intel dual-port NIC; 2 LSI 9205-8e SAS controllers; 2 DataON DNS-1600 JBOD chassis; 46 Seagate Constellation SAS drives; 2
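
For completeness, the NFS side of such a test can be set up along these lines; share options, hostnames, and paths are assumptions:

    # server
    zfs set sharenfs=rw tank/export

    # client
    mount -F nfs -o vers=3 server:/tank/export /mnt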

Re: [zfs-discuss] ZFS performance question over NFS

2011-08-18 Thread Thomas Nau
Tim, the client is identical to the server but with no SAS drives attached. Also, right now only one 1Gbit Intel NIC is available. Thomas. On 18.08.2011 at 17:49, Tim Cook t...@cook.ms wrote: What are the specs on the client? On Aug 18, 2011 10:28 AM, Thomas Nau thomas@uni-ulm.de wrote: Dear

Re: [zfs-discuss] ZFS performance question over NFS

2011-08-19 Thread Thomas Nau
Hi Bob. I don't know what the request pattern from filebench looks like, but it seems like your ZeusRAM devices are not keeping up, or else many requests are bypassing them. Note that very large synchronous writes will bypass your ZeusRAM device and go directly to a log in
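
One knob related to this behaviour is the logbias property; a hedged example with an assumed dataset name (whether it helps depends entirely on the workload):

    # latency (the default): small sync writes go through the slog
    # throughput: sync writes bypass the slog and go to the main pool
    zfs set logbias=latency tank/nfs
    zfs get logbias tank/nfs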

[zfs-discuss] does log device (ZIL) require a mirror setup?

2011-12-11 Thread Thomas Nau
Dear all. We use a STEC ZeusRAM as a log device for a 200TB RAID-Z2 pool. As they are supposed to be read only after a crash or when booting, and those nice things are pretty expensive, I'm wondering if mirroring the log devices is a must / highly recommended. Thomas
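
For reference, a mirrored log can be added in one go, or a second device attached to an existing slog later; device names are made up:

    # add a mirrored slog
    zpool add tank log mirror c4t0d0 c4t1d0

    # or turn an existing single slog into a mirror
    zpool attach tank c4t0d0 c4t1d0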

Re: [zfs-discuss] Stress test zfs

2012-01-07 Thread Thomas Nau
Hi Grant. On 01/06/2012 04:50 PM, Richard Elling wrote: Hi Grant, On Jan 4, 2012, at 2:59 PM, grant lowe wrote: Hi all, I've got Solaris 10 9/10 running on a T3. It's an Oracle box with 128GB memory. Right now oracle . I've been trying to load test the box with bonnie++. I can seem
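
A typical bonnie++ invocation for such a test might look like the following; sizes, paths, and the user are assumptions, and the file size should be well above RAM to defeat caching:

    # 256GB of file I/O, skip the small-file tests, run unprivileged
    bonnie++ -d /tank/bench -s 256g -n 0 -u nobody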

[zfs-discuss] need hint on pool setup

2012-01-31 Thread Thomas Nau
Dear all. We have two JBODs with 20 or 21 drives available per JBOD hooked up to a server. We are considering the following setups: RAIDZ2 vdevs made of 4 drives, or RAIDZ2 vdevs made of 6 drives. The first option wastes more disk space but can survive a JBOD failure, whereas the second is more space-efficient but
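
A sketch of the first variant, with two disks of each 4-disk RAIDZ2 taken from each JBOD so either enclosure can fail; device names are made up:

    # c1* = JBOD 1, c2* = JBOD 2; each raidz2 survives the loss of one JBOD
    zpool create tank \
      raidz2 c1t0d0 c1t1d0 c2t0d0 c2t1d0 \
      raidz2 c1t2d0 c1t3d0 c2t2d0 c2t3d0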

Re: [zfs-discuss] need hint on pool setup

2012-02-01 Thread Thomas Nau
Bob, On 01/31/2012 09:54 PM, Bob Friesenhahn wrote: On Tue, 31 Jan 2012, Thomas Nau wrote: Dear all We have two JBODs with 20 or 21 drives available per JBOD hooked up to a server. We are considering the following setups: RAIDZ2 made of 4 drives RAIDZ2 made of 6 drives The first

Re: [zfs-discuss] ZFS error accessing past end of object

2012-03-04 Thread Thomas Nau
Dear all. I'm about to answer my own question with some really useful hints from Steve, thanks for that!!! On 03/02/2012 07:43 AM, Thomas Nau wrote: Dear all. I asked before but without much feedback. As the issue is persistent I want to give it another try. We disabled panicking
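
"Disabling panicking" in this context usually refers to the ZFS recovery tunables in /etc/system; treat the exact settings below as an assumption and use them only under guidance:

    * /etc/system: allow ZFS to continue past certain assertion failures
    set zfs:zfs_recover=1
    set aok=1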

Re: [zfs-discuss] Solaris 11 System Reboots Continuously Because of a ZFS-Related Panic (7191375)

2012-12-12 Thread Thomas Nau
Jamie, we ran into the same issue and had to migrate the pool while it was imported read-only. On top of that, we were advised NOT to use an L2ARC. Maybe you should consider that as well. Thomas. On 12.12.2012 at 19:21, Jamie Krier jamie.kr...@gmail.com wrote: I've hit this bug on four of my Solaris 11 servers.
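
A hedged outline of such a migration, assuming a recursive snapshot already exists on the affected pool (names are made up; no new snapshots can be taken while it is imported read-only):

    # import the damaged pool read-only, then replicate the last snapshot
    zpool import -o readonly=on tank
    zfs send -R tank@last | zfs receive -d -F newtank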

[zfs-discuss] iSCSI access patterns and possible improvements?

2013-01-16 Thread Thomas Nau
Dear all. I have a question concerning possible performance tuning for both iSCSI access and replicating a ZVOL through zfs send/receive. We export ZVOLs with the default volblocksize of 8k to a bunch of Citrix Xen Servers through iSCSI. The pool is made of SAS2 disks (11 x 3-way mirrored) plus
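
Two of the knobs discussed in this thread, shown as a sketch with assumed names (volblocksize can only be set at creation time):

    # create a zvol with a larger block size than the 8k default
    zfs create -V 100G -o volblocksize=32K tank/xenvol

    # incremental replication of the zvol to another host
    zfs snapshot tank/xenvol@tue
    zfs send -i @mon tank/xenvol@tue | ssh backuphost zfs receive tank/xenvol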

Re: [zfs-discuss] iSCSI access patterns and possible improvements?

2013-01-17 Thread Thomas Nau
Thanks for all the answers (more inline). On 01/18/2013 02:42 AM, Richard Elling wrote: On Jan 17, 2013, at 7:04 AM, Bob Friesenhahn bfrie...@simple.dallas.tx.us wrote: On Wed, 16 Jan 2013, Thomas Nau wrote: Dear all. I've a question concerning possible

Re: [zfs-discuss] Freeing unused space in thin provisioned zvols

2013-02-12 Thread Thomas Nau
Darren, On 02/12/2013 11:25 AM, Darren J Moffat wrote: On 02/10/13 12:01, Koopmann, Jan-Peter wrote: Why should it? Unless you do a shrink on the vmdk and use a ZFS variant with SCSI UNMAP support (I believe currently only Nexenta, but correct me if I am wrong) the blocks will not be freed,
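
Without SCSI UNMAP, one commonly used workaround is to zero the freed space from inside the guest while compression is enabled on the backing zvol, so the all-zero blocks are stored as holes; a hedged sketch with assumed names:

    # on the ZFS host: enable compression on the backing zvol
    zfs set compression=on tank/vmvol

    # inside the guest: overwrite free space with zeros, then remove the file
    dd if=/dev/zero of=/zerofile bs=1M; rm /zerofile

    # check how much space the zvol references afterwards
    zfs get referenced tank/vmvol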