Re: [zfs-discuss] partioned cache devices

2013-03-19 Thread Ian Collins
Andrew Werchowiecki wrote: Thanks for the info about slices, I may give that a go later on. I’m not keen on that because I have clear evidence (as in zpools set up this way, right now, working, without issue) that GPT partitions of the style shown above work and I want to see why it doesn’t

Re: [zfs-discuss] partioned cache devices

2013-03-15 Thread Ian Collins
Andrew Werchowiecki wrote: Hi all, I'm having some trouble with adding cache drives to a zpool, anyone got any ideas? muslimwookie@Pyzee:~$ sudo zpool add aggr0 cache c25t10d1p2 Password: cannot open '/dev/dsk/c25t10d1p2': I/O error muslimwookie@Pyzee:~$ I have two SSDs in the system,
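A likely culprit discussed later in the thread: pN device nodes refer to fdisk partitions, which zpool often cannot open, while Solaris slices (sN) generally work. A minimal sketch of the slice-based alternative, with illustrative device names:

```shell
# Give the SSD an SMI/VTOC label and carve out a slice for L2ARC
# (interactively: format -> fdisk -> partition -> label), then add
# the slice rather than the fdisk partition:
sudo zpool add aggr0 cache c25t10d1s0

# Confirm the cache device shows up online
zpool status aggr0
```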

Re: [zfs-discuss] ZFS Distro Advice

2013-02-26 Thread Ian Collins
Robert Milkowski wrote: Solaris 11.1 (free for non-prod use). But a ticking bomb if you use a cache device. -- Ian. ___ zfs-discuss mailing list zfs-discuss@opensolaris.org http://mail.opensolaris.org/mailman/listinfo/zfs-discuss

Re: [zfs-discuss] ZFS Distro Advice

2013-02-26 Thread Ian Collins
Robert Milkowski wrote: Robert Milkowski wrote: Solaris 11.1 (free for non-prod use). But a ticking bomb if you use a cache device. It's been fixed in SRU (although this is only for customers with a support contract - still, will be in 11.2 as well). Then, I'm sure there are other bugs

Re: [zfs-discuss] SVM ZFS

2013-02-26 Thread Ian Collins
Alfredo De Luca wrote: On Wed, Feb 27, 2013 at 10:36 AM, Paul Kraus p...@kraus-haus.org mailto:p...@kraus-haus.org wrote: On Feb 26, 2013, at 6:19 PM, Jim Klimov jimkli...@cos.ru mailto:jimkli...@cos.ru wrote: Ah, I forgot to mention - ufsdump|ufsrestore was at some time also

Re: [zfs-discuss] ZFS Distro Advice

2013-02-26 Thread Ian Collins
Bob Friesenhahn wrote: On Tue, 26 Feb 2013, Richard Elling wrote: Consider using different policies for different data. For traditional file systems, you had relatively few policy options: readonly, nosuid, quota, etc. With ZFS, dedup and compression are also policy options. In your case,

Re: [zfs-discuss] ZFS Distro Advice

2013-02-26 Thread Ian Collins
Bob Friesenhahn wrote: On Wed, 27 Feb 2013, Ian Collins wrote: I am finding that rsync with the right options (to directly block-overwrite) plus zfs snapshots is providing me with pretty amazing deduplication for backups without even enabling deduplication in zfs. Now backup storage goes

Re: [zfs-discuss] Is there performance penalty when adding vdev to existing pool

2013-02-20 Thread Ian Collins
Peter Wood wrote: I'm using OpenIndiana 151a7, zpool v28, zfs v5. When I bought my storage servers I intentionally left hdd slots available so I can add another vdev when needed and delay immediate expenses. After reading some posts on the mailing list I'm getting concerned about degrading

Re: [zfs-discuss] Is there performance penalty when adding vdev to existing pool

2013-02-20 Thread Ian Collins
Bob Friesenhahn wrote: On Thu, 21 Feb 2013, Sašo Kiselkov wrote: On 02/21/2013 12:27 AM, Peter Wood wrote: Will adding another vdev hurt the performance? In general, the answer is: no. ZFS will try to balance writes to top-level vdevs in a fashion that assures even data distribution. If your

Re: [zfs-discuss] Is there performance penalty when adding vdev to existing pool

2013-02-20 Thread Ian Collins
Peter Wood wrote: Currently the pool is about 20% full: # zpool list pool01 NAME SIZE ALLOC FREE EXPANDSZ CAP DEDUP HEALTH ALTROOT pool01 65.2T 15.4T 49.9T - 23% 1.00x ONLINE - # So you will be about 15% full after adding a new vdev. Unless you are likely to

Re: [zfs-discuss] zfs-discuss mailing list opensolaris EOL

2013-02-17 Thread Ian Collins
Edward Ned Harvey (opensolarisisdeadlongliveopensolaris) wrote: From: Tim Cook [mailto:t...@cook.ms] We can agree to disagree. I think you're still operating under the auspices of Oracle wanting to have an open discussion. This is patently false. I'm just going to respond to this by saying

Re: [zfs-discuss] HELP! RPool problem

2013-02-16 Thread Ian Collins
Sašo Kiselkov wrote: On 02/16/2013 09:49 PM, John D Groenveld wrote: Boot with kernel debugger so you can see the panic. Sadly, though, without access to the source code, all he can do at that point is log a support ticket with Oracle (assuming he has paid his support fees) and hope it will

Re: [zfs-discuss] zfs-discuss mailing list opensolaris EOL

2013-02-16 Thread Ian Collins
Toby Thain wrote: Signed up, thanks. The ZFS list has been very high value and I thank everyone whose wisdom I have enjoyed, especially people like you Sašo, Mr Elling, Mr Friesenhahn, Mr Harvey, the distinguished Sun and Oracle engineers who post here, and many others. Let the Illumos list

Re: [zfs-discuss] zfs-discuss mailing list opensolaris EOL

2013-02-16 Thread Ian Collins
Richard Elling wrote: On Feb 16, 2013, at 10:16 PM, Bryan Horstmann-Allen b...@mirrorshades.net wrote: +-- | On 2013-02-17 18:40:47, Ian Collins wrote: | One of its main advantages is it has been platform agnostic

Re: [zfs-discuss] Slow zfs writes

2013-02-12 Thread Ian Collins
Ram Chander wrote: Hi Roy, You are right. So it looks like a re-distribution issue. Initially there were two vdevs with 24 disks (disks 0-23) for close to a year, after which we added 24 more disks and created additional vdevs. The initial vdevs are filled up and so write speed declined.

Re: [zfs-discuss] Slow zfs writes

2013-02-12 Thread Ian Collins
Jim Klimov wrote: On 2013-02-12 10:32, Ian Collins wrote: Ram Chander wrote: Hi Roy, You are right. So it looks like a re-distribution issue. Initially there were two vdevs with 24 disks (disks 0-23) for close to a year, after which we added 24 more disks and created additional vdevs

[zfs-discuss] Bizarre receive error

2013-02-08 Thread Ian Collins
I recently had to recover a lot of data from my backup pool which is on a Solaris 11 system. I'm now sending regular snapshots back to the pool and all was well until the pool became nearly full. I then started getting receive failures: receiving incremental stream of

Re: [zfs-discuss] RFE: Un-dedup for unique blocks

2013-01-23 Thread Ian Collins
Jim Klimov wrote: On 2013-01-23 09:41, casper@oracle.com wrote: Yes and no: the system reserves a lot of additional memory (Solaris doesn't over-commit swap) and swap is needed to support those reservations. Also, some pages are dirtied early on and never touched again; those pages should

Re: [zfs-discuss] RFE: Un-dedup for unique blocks

2013-01-22 Thread Ian Collins
Darren J Moffat wrote: It is a mechanism for part of the storage system above the disk (eg ZFS) to inform the disk that it is no longer using a given set of blocks. This is useful when using an SSD - see Saso's excellent response on that. However it can also be very useful when your disk is an

[zfs-discuss] Odd snapshots exposed in Solaris 11.1

2013-01-21 Thread Ian Collins
Since upgrading to Solaris 11.1, I've started seeing snapshots like tank/vbox/shares%VMs appearing with zfs list -t snapshot. I thought snapshots with a % in their name were private objects created during a send/receive operation. These snapshots don't have many properties: zfs get all

Re: [zfs-discuss] Solaris 11 System Reboots Continuously Because of a ZFS-Related Panic (7191375)

2013-01-14 Thread Ian Collins
Cindy Swearingen wrote: Hi Jamie, Yes, that is correct. The S11u1 version of this bug is: https://bug.oraclecorp.com/pls/bug/webbug_print.show?c_rptno=15852599 and has this notation which means Solaris 11.1 SRU 3.4: Changeset pushed to build 0.175.1.3.0.4.0 Hello Cindy, I really really

Re: [zfs-discuss] Zpool error in metadata:0x0

2012-12-08 Thread Ian Collins
Jim Klimov wrote: I've had this error on my pool since over a year ago, when I posted and asked about it. The general consent was that this is only fixable by recreation of the pool, and that if things don't die right away, the problem may be benign (i.e. in some first blocks of MOS that are in

Re: [zfs-discuss] zfs on SunFire X2100M2 with hybrid pools

2012-11-28 Thread Ian Collins
Edward Ned Harvey (opensolarisisdeadlongliveopensolaris) wrote: From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss- boun...@opensolaris.org] On Behalf Of Jim Klimov I really hope someone better versed in compression - like Saso - would chime in to say whether gzip-9 vs. lzjb (or lz4)

Re: [zfs-discuss] ZFS Appliance as a general-purpose server question

2012-11-22 Thread Ian Collins
On 11/23/12 05:50, Jim Klimov wrote: On 2012-11-22 17:31, Darren J Moffat wrote: Is it possible to use the ZFS Storage appliances in a similar way, and fire up a Solaris zone (or a few) directly on the box for general-purpose software; or to shell-script administrative tasks such as the backup

Re: [zfs-discuss] Intel DC S3700

2012-11-21 Thread Ian Collins
On 11/14/12 12:28, Jim Klimov wrote: On 2012-11-13 22:56, Mauricio Tavares wrote: Trying again: Intel just released those drives. Any thoughts on how nicely they will play in a zfs/hardware raid setup? Seems interesting - fast, assumed reliable and consistent in its IOPS (according to

[zfs-discuss] Woeful performance from an iSCSI pool

2012-11-21 Thread Ian Collins
I look after a remote server that has two iSCSI pools. The volumes for each pool are sparse volumes and a while back the target's storage became full, causing weird and wonderful corruption issues until they managed to free some space. Since then, one pool has been reasonably OK, but the

Re: [zfs-discuss] Woeful performance from an iSCSI pool

2012-11-21 Thread Ian Collins
On 11/22/12 10:15, Ian Collins wrote: I look after a remote server that has two iSCSI pools. The volumes for each pool are sparse volumes and a while back the target's storage became full, causing weird and wonderful corruption issues until they managed to free some space. Since then, one pool

Re: [zfs-discuss] Dedicated server running ESXi with no RAID card, ZFS for storage?

2012-11-13 Thread Ian Collins
On 11/14/12 15:20, Dan Swartzendruber wrote: Well, I think I give up for now. I spent quite a few hours over the last couple of days trying to get gnome desktop working on bare-metal OI, followed by virtualbox. Supposedly that works in headless mode with RDP for management, but nothing but

Re: [zfs-discuss] Strange mount -a problem in Solaris 11.1

2012-10-31 Thread Ian Collins
On 10/31/12 23:35, Edward Ned Harvey (opensolarisisdeadlongliveopensolaris) wrote: From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss- boun...@opensolaris.org] On Behalf Of Ian Collins I have a recently upgraded (to Solaris 11.1) test system that fails to mount its filesystems

[zfs-discuss] Strange mount -a problem in Solaris 11.1

2012-10-30 Thread Ian Collins
I have a recently upgraded (to Solaris 11.1) test system that fails to mount its filesystems on boot. Running zfs mount -a results in the odd error #zfs mount -a internal error Invalid argument truss shows the last call as ioctl(3, ZFS_IOC_OBJECT_STATS, 0xF706BBB0) The system boots up

Re: [zfs-discuss] zfs send to older version

2012-10-18 Thread Ian Collins
On 10/18/12 21:09, Michel Jansens wrote: Hi, I've been using a Solaris 10 update 9 machine for some time to replicate filesystems from different servers through zfs send|ssh zfs receive. This was done to store disaster recovery pools. The DR zpools are made from sparse files (to allow for

Re: [zfs-discuss] ZFS best practice for FreeBSD?

2012-10-13 Thread Ian Collins
On 10/13/12 22:13, Jim Klimov wrote: 2012-10-13 0:41, Ian Collins wrote: On 10/13/12 02:12, Edward Ned Harvey (opensolarisisdeadlongliveopensolaris) wrote: There are at least a couple of solid reasons *in favor* of partitioning. #1 It seems common, at least to me, that I'll build a server

Re: [zfs-discuss] Using L2ARC on an AdHoc basis.

2012-10-13 Thread Ian Collins
On 10/14/12 10:02, Michael Armstrong wrote: Hi Guys, I have a portable pool i.e. one that I carry around in an enclosure. However, any SSD I add for L2ARC, will not be carried around... meaning the cache drive will become unavailable from time to time. My question is Will random removal
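Cache devices can be removed and re-added at will without endangering pool data, so one option is to detach the L2ARC cleanly before moving the enclosure. A sketch with illustrative pool and device names:

```shell
# Cleanly drop the cache device before the SSD goes away...
sudo zpool remove portable c5t0d0

# ...and re-add it (empty, to be re-warmed) when it is back
sudo zpool add portable cache c5t0d0
```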

Re: [zfs-discuss] ZFS best practice for FreeBSD?

2012-10-12 Thread Ian Collins
On 10/13/12 02:12, Edward Ned Harvey (opensolarisisdeadlongliveopensolaris) wrote: There are at least a couple of solid reasons *in favor* of partitioning. #1 It seems common, at least to me, that I'll build a server with let's say, 12 disk slots, and we'll be using 2T disks or something like

Re: [zfs-discuss] Building an On-Site and Off-Size ZFS server, replication question

2012-10-08 Thread Ian Collins
On 10/08/12 20:08, Tiernan OToole wrote: Ok, so, after reading a bit more of this discussion and after playing around at the weekend, i have a couple of questions to ask... 1: Do my pools need to be the same? for example, the pool in the datacenter is 2 1Tb drives in Mirror. in house i have 5

Re: [zfs-discuss] Building an On-Site and Off-Size ZFS server, replication question

2012-10-05 Thread Ian Collins
On 10/05/12 21:36, Jim Klimov wrote: 2012-10-05 11:17, Tiernan OToole wrote: Also, as a follow up question, but slightly unrelated, when it comes to the ZFS Send, i could use SSH to do the send, directly to the machine... Or i could upload the compressed, and possibly encrypted dump to the

Re: [zfs-discuss] Building an On-Site and Off-Size ZFS server, replication question

2012-10-05 Thread Ian Collins
On 10/06/12 07:57, Edward Ned Harvey (opensolarisisdeadlongliveopensolaris) wrote: From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss- boun...@opensolaris.org] On Behalf Of Frank Cusack On Fri, Oct 5, 2012 at 3:17 AM, Ian Collinsi...@ianshome.com wrote: I do have to suffer a slow,

[zfs-discuss] Should clearing the share property on a clone unshare the origin?

2012-09-25 Thread Ian Collins
I've noticed on a Solaris 11 system that when I clone a filesystem and change the share property: #zfs clone -p -o atime=off filesystem@snapshot clone #zfs set -c share=name=old share clone #zfs set share=name=new NFS share clone #zfs set sharenfs=on clone The origin filesystem is no longer

Re: [zfs-discuss] all in one server

2012-09-19 Thread Ian Collins
On 09/19/12 02:38 AM, Sašo Kiselkov wrote: On 09/18/2012 04:31 PM, Eugen Leitl wrote: Can I actually have a year's worth of snapshots in zfs without too much performance degradation? Each additional dataset (not sure about snapshots, though) increases boot times slightly, however, I've seen

Re: [zfs-discuss] Zvol vs zfs send/zfs receive

2012-09-14 Thread Ian Collins
On 09/15/12 04:46 PM, Dave Pooser wrote: I need a bit of a sanity check here. 1) I have a a RAIDZ2 of 8 1TB drives, so 6TB usable, running on an ancient version of OpenSolaris (snv_134 I think). On that zpool (miniraid) I have a zvol (RichRAID) that's using almost the whole FS. It's shared out

Re: [zfs-discuss] scripting incremental replication data streams

2012-09-12 Thread Ian Collins
On 09/13/12 07:44 AM, Edward Ned Harvey (opensolarisisdeadlongliveopensolaris) wrote: I send a replication data stream from one host to another. (and receive). I discovered that after receiving, I need to remove the auto-snapshot property on the receiving side, and set the readonly property

Re: [zfs-discuss] scripting incremental replication data streams

2012-09-12 Thread Ian Collins
On 09/13/12 10:23 AM, Timothy Coalson wrote: Unless i'm missing something, they didn't solve the matching snapshots thing yet, from their site: To Do: Additional error handling for mismatched snapshots (last destination snap no longer exists on the source) walk backwards through the remote

Re: [zfs-discuss] what have you been buying for slog and l2arc?

2012-08-29 Thread Ian Collins
On 08/ 4/12 09:50 PM, Eugen Leitl wrote: On Fri, Aug 03, 2012 at 08:39:55PM -0500, Bob Friesenhahn wrote: Extreme write IOPS claims in consumer SSDs are normally based on large write caches which can lose even more data if there is a power failure. Intel 311 with a good UPS would seem to be a

Re: [zfs-discuss] Benefits of enabling compression in ZFS for the zones

2012-07-10 Thread Ian Collins
On 07/10/12 09:25 PM, Jordi Espasa Clofent wrote: Hi all, By default I'm using ZFS for all the zones: admjoresp@cyd-caszonesrv-15:~$ zfs list NAME USED AVAIL REFER MOUNTPOINT opt 4.77G 45.9G 285M /opt opt/zones 4.49G

Re: [zfs-discuss] Scenario sanity check

2012-07-09 Thread Ian Collins
On 07/10/12 05:26 AM, Brian Wilson wrote: Yep, thanks, and to answer Ian with more detail on what TruCopy does. TruCopy mirrors between the two storage arrays, with software running on the arrays, and keeps a list of dirty/changed 'tracks' while the mirror is split. I think they call it

Re: [zfs-discuss] Scenario sanity check

2012-07-06 Thread Ian Collins
On 07/ 7/12 08:34 AM, Brian Wilson wrote: Hello, I'd like a sanity check from people more knowledgeable than myself. I'm managing backups on a production system. Previously I was using another volume manager and filesystem on Solaris, and I've just switched to using ZFS. My model is -

Re: [zfs-discuss] Scenario sanity check

2012-07-06 Thread Ian Collins
On 07/ 7/12 11:29 AM, Brian Wilson wrote: On 07/ 6/12 04:17 PM, Ian Collins wrote: On 07/ 7/12 08:34 AM, Brian Wilson wrote: Hello, I'd like a sanity check from people more knowledgeable than myself. I'm managing backups on a production system. Previously I was using another volume manager

Re: [zfs-discuss] Sol11 missing snapshot facility

2012-07-05 Thread Ian Collins
On 07/ 5/12 06:52 PM, Carsten John wrote: Hello everybody, for some reason I can not find the zfs-autosnapshot service facility any more. I already reinstalled time-slider, but it refuses to start: RuntimeError: Error reading SMF schedule instances Details: ['/usr/bin/svcs', '-H', '-o',

Re: [zfs-discuss] Sol11 missing snapshot facility

2012-07-05 Thread Ian Collins
On 07/ 5/12 09:25 PM, Carsten John wrote: Hi Ian, yes, I already checked that: svcs -a | grep zfs disabled 11:50:39 svc:/application/time-slider/plugin:zfs-send is the only service I get listed. Odd. How did you install? Is the manifest there

Re: [zfs-discuss] Sol11 missing snapshot facility

2012-07-05 Thread Ian Collins
On 07/ 5/12 11:32 PM, Carsten John wrote: -Original message- To: Carsten Johncj...@mpi-bremen.de; CC: zfs-discuss@opensolaris.org; From: Ian Collinsi...@ianshome.com Sent: Thu 05-07-2012 11:35 Subject:Re: [zfs-discuss] Sol11 missing snapshot facility On 07/ 5/12

Re: [zfs-discuss] Has anyone used a Dell with a PERC H310?

2012-07-02 Thread Ian Collins
On 05/29/12 08:42 AM, Richard Elling wrote: On May 28, 2012, at 2:48 AM, Ian Collins wrote: On 05/28/12 08:55 PM, Sašo Kiselkov wrote: .. If the drives show up at all, chances are you only need to work around the power-up issue in Dell HDD firmware. Here's what I had to do to get the drives

Re: [zfs-discuss] Very sick iSCSI pool

2012-07-02 Thread Ian Collins
On 07/ 1/12 08:57 PM, Ian Collins wrote: On 07/ 1/12 10:20 AM, Fajar A. Nugraha wrote: On Sun, Jul 1, 2012 at 4:18 AM, Ian Collinsi...@ianshome.com wrote: On 06/30/12 03:01 AM, Richard Elling wrote: Hi Ian, Chapter 7 of the DTrace book has some examples of how to look at iSCSI target

Re: [zfs-discuss] Very sick iSCSI pool

2012-07-01 Thread Ian Collins
On 07/ 1/12 10:20 AM, Fajar A. Nugraha wrote: On Sun, Jul 1, 2012 at 4:18 AM, Ian Collinsi...@ianshome.com wrote: On 06/30/12 03:01 AM, Richard Elling wrote: Hi Ian, Chapter 7 of the DTrace book has some examples of how to look at iSCSI target and initiator behaviour. Thanks Richard, I'll

Re: [zfs-discuss] Very sick iSCSI pool

2012-06-30 Thread Ian Collins
On 06/30/12 03:01 AM, Richard Elling wrote: Hi Ian, Chapter 7 of the DTrace book has some examples of how to look at iSCSI target and initiator behaviour. Thanks Richard, I'll have a look. I'm assuming the pool is hosed? -- richard On Jun 28, 2012, at 10:47 PM, Ian Collins wrote: I'm

[zfs-discuss] Very sick iSCSI pool

2012-06-28 Thread Ian Collins
I'm trying to work out the cause and a remedy for a very sick iSCSI pool on a Solaris 11 host. The volume is exported from an Oracle storage appliance and there are no errors reported there. The host has no entries in its logs relating to the network connections. Any zfs or zpool commands the

Re: [zfs-discuss] Has anyone used a Dell with a PERC H310?

2012-05-28 Thread Ian Collins
On 05/ 7/12 04:08 PM, Ian Collins wrote: On 05/ 7/12 03:42 PM, Greg Mason wrote: I am currently trying to get two of these things running Illumian. I don't have any particular performance requirements, so I'm thinking of using some sort of supported hypervisor, (either RHEL and KVM or VMware

Re: [zfs-discuss] Has anyone used a Dell with a PERC H310?

2012-05-28 Thread Ian Collins
On 05/28/12 08:55 PM, Sašo Kiselkov wrote: On 05/28/2012 10:48 AM, Ian Collins wrote: To follow up, the H310 appears to be useless in non-raid mode. The drives do show up in Solaris 11 format, but they show up as unknown, unformatted drives. One oddity is the box has two SATA SSDs which also

Re: [zfs-discuss] Has anyone used a Dell with a PERC H310?

2012-05-28 Thread Ian Collins
On 05/28/12 10:53 PM, Sašo Kiselkov wrote: On 05/28/2012 11:48 AM, Ian Collins wrote: On 05/28/12 08:55 PM, Sašo Kiselkov wrote: On 05/28/2012 10:48 AM, Ian Collins wrote: To follow up, the H310 appears to be useless in non-raid mode. The drives do show up in Solaris 11 format, but they show

Re: [zfs-discuss] Has anyone used a Dell with a PERC H310?

2012-05-28 Thread Ian Collins
On 05/28/12 11:01 PM, Sašo Kiselkov wrote: On 05/28/2012 12:59 PM, Ian Collins wrote: On 05/28/12 10:53 PM, Sašo Kiselkov wrote: On 05/28/2012 11:48 AM, Ian Collins wrote: On 05/28/12 08:55 PM, Sašo Kiselkov wrote: On 05/28/2012 10:48 AM, Ian Collins wrote: To follow up, the H310 appears

Re: [zfs-discuss] need information about ZFS API

2012-05-28 Thread Ian Collins
On 05/29/12 08:32 AM, Richard Elling wrote: Hi Dhiraj, On May 27, 2012, at 11:28 PM, Dhiraj Bhandare wrote: Hi All I would like to create a sample application for ZFS using C++/C and libzfs. I am very new to ZFS, I would like to have an some information about ZFS API. Even some sample

Re: [zfs-discuss] Hard Drive Choice Question

2012-05-17 Thread Ian Collins
On 05/17/12 02:53 AM, Paul Kraus wrote: I have a small server at home (HP Proliant Micro N36) that I use for file, DNS, DHCP, etc. services. I currently have a zpool of four mirrored 1 TB Seagate ES2 SATA drives. Well, it was a zpool of four until last night when one of the drives died. ZFS

[zfs-discuss] Unexpected error adding a cache device to existing pool

2012-05-14 Thread Ian Collins
On a Solaris 11 system I have a pool that was originally built with a log and a cache device on a single SSD. The SSD died and I realised I should have a mirrored log, so I've just tried to replace the log and cache with a pair of SSDs. Adding the log was OK: zpool add -f export log mirror

Re: [zfs-discuss] Unexpected error adding a cache device to existing pool

2012-05-14 Thread Ian Collins
On 05/14/12 10:32 PM, Carson Gaspar wrote: On 5/14/12 2:02 AM, Ian Collins wrote: Adding the log was OK: zpool add -f export log mirror c10t3d0s0 c10t4d0s0 But adding the cache fails: zpool add -f export cache c10t3d0s1 c10t4d0s1 invalid vdev specification the following errors must
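When adding a slice fails with "invalid vdev specification" like this, inspecting the disk label for overlapping slices is a reasonable first step. A sketch using the device names from the post (prtvtoc prints each slice's first sector and sector count):

```shell
# Dump the VTOC of each SSD and check that the log slice (s0) and
# cache slice (s1) do not overlap or run past the end of the disk:
prtvtoc /dev/rdsk/c10t3d0s2
prtvtoc /dev/rdsk/c10t4d0s2
```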

Re: [zfs-discuss] Strange hang during snapshot receive

2012-05-11 Thread Ian Collins
On 05/11/12 02:01 AM, Mike Gerdts wrote: On Thu, May 10, 2012 at 5:37 AM, Ian Collinsi...@ianshome.com wrote: I have an application I have been using to manage data replication for a number of years. Recently we started using a new machine as a staging server (not that new, an x4540) running

[zfs-discuss] Strange hang during snapshot receive

2012-05-10 Thread Ian Collins
I have an application I have been using to manage data replication for a number of years. Recently we started using a new machine as a staging server (not that new, an x4540) running Solaris 11 with a single pool built from 7x6 drive raidz. No dedup and no reported errors. On that box and

Re: [zfs-discuss] Hung zfs destroy

2012-05-07 Thread Ian Collins
On 05/ 8/12 08:36 AM, Edward Ned Harvey wrote: From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss- boun...@opensolaris.org] On Behalf Of Ian Collins On a Solaris 11 (SR3) system I have a zfs destroy process that appears to be doing nothing and can't be killed. It has used 5 seconds

[zfs-discuss] Has anyone used a Dell with a PERC H310?

2012-05-06 Thread Ian Collins
I'm trying to configure a DELL R720 (not a pleasant experience) which has an H710p card fitted. The H710p definitely doesn't support JBOD, but the H310 looks like it might (the data sheet mentions non-RAID). Has anyone used one with ZFS? Thanks, -- Ian.

Re: [zfs-discuss] Has anyone used a Dell with a PERC H310?

2012-05-06 Thread Ian Collins
On 05/ 7/12 03:42 PM, Greg Mason wrote: I am currently trying to get two of these things running Illumian. I don't have any particular performance requirements, so I'm thinking of using some sort of supported hypervisor, (either RHEL and KVM or VMware ESXi) to get around the driver support

[zfs-discuss] Hung zfs destroy

2012-05-04 Thread Ian Collins
On a Solaris 11 (SR3) system I have a zfs destroy process that appears to be doing nothing and can't be killed. It has used 5 seconds of CPU in a day and a half, but truss -p won't attach. No data appears to have been removed. The dataset (but not the pool) is busy. I thought this was an

Re: [zfs-discuss] cluster vs nfs

2012-04-26 Thread Ian Collins
On 04/26/12 10:12 PM, Jim Klimov wrote: On 2012-04-26 2:20, Ian Collins wrote: On 04/26/12 09:54 AM, Bob Friesenhahn wrote: On Wed, 25 Apr 2012, Rich Teer wrote: Perhaps I'm being overly simplistic, but in this scenario, what would prevent one from having, on a single file server, /exports

Re: [zfs-discuss] cluster vs nfs

2012-04-25 Thread Ian Collins
On 04/26/12 09:54 AM, Bob Friesenhahn wrote: On Wed, 25 Apr 2012, Rich Teer wrote: Perhaps I'm being overly simplistic, but in this scenario, what would prevent one from having, on a single file server, /exports/nodes/node[0-15], and then having each node NFS-mount /exports/nodes from the

Re: [zfs-discuss] cluster vs nfs

2012-04-25 Thread Ian Collins
On 04/26/12 10:34 AM, Paul Archer wrote: 2:34pm, Rich Teer wrote: On Wed, 25 Apr 2012, Paul Archer wrote: Simple. With a distributed FS, all nodes mount from a single DFS. With NFS, each node would have to mount from each other node. With 16 nodes, that's what, 240 mounts? Not to mention

Re: [zfs-discuss] Two disks giving errors in a raidz pool, advice needed

2012-04-22 Thread Ian Collins
On 04/23/12 01:47 PM, Manuel Ryan wrote: Hello, I have looked around this mailing list and other virtual spaces and I wasn't able to find a situation similar to this weird one. I have a 6-disk raidz zfs15 pool. After a scrub, the status of the pool and all disks still show up as ONLINE but

[zfs-discuss] Improving snapshot write performance

2012-04-11 Thread Ian Collins
I use an application with a fairly large receive data buffer (256MB) to replicate data between sites. I have noticed the buffer becoming completely full when receiving snapshots for some filesystems, even over a slow (~2MB/sec) WAN connection. I assume this is due to the changes being widely

Re: [zfs-discuss] Improving snapshot write performance

2012-04-11 Thread Ian Collins
On 04/12/12 04:17 AM, Richard Elling wrote: On Apr 11, 2012, at 1:34 AM, Ian Collins wrote: I use an application with a fairly large receive data buffer (256MB) to replicate data between sites. I have noticed the buffer becoming completely full when receiving snapshots for some filesystems

Re: [zfs-discuss] Improving snapshot write performance

2012-04-11 Thread Ian Collins
On 04/12/12 09:00 AM, Jim Klimov wrote: 2012-04-11 23:55, Ian Collins wrote: Odd. The pool is a single iSCSI volume exported from a 7320 and there is 18TB free. Lame question: is that 18Tb free on the pool inside the iSCSI volume, or on the backing pool on 7320? I mean that as far

Re: [zfs-discuss] Improving snapshot write performance

2012-04-11 Thread Ian Collins
On 04/12/12 09:51 AM, Peter Jeremy wrote: On 2012-Apr-11 18:34:42 +1000, Ian Collinsi...@ianshome.com wrote: I use an application with a fairly large receive data buffer (256MB) to replicate data between sites. I have noticed the buffer becoming completely full when receiving snapshots for

Re: [zfs-discuss] Puzzling problem with zfs receive exit status

2012-03-29 Thread Ian Collins
On 03/29/12 10:46 PM, Borja Marcos wrote: Hello, I hope someone has an idea. I have a replication program that copies a dataset from one server to another one. The replication mechanism is the obvious one, of course: zfs send -Ri snapshot(n-1) snapshot(n) to a file, then scp the file to the remote machine

Re: [zfs-discuss] Receive failing with invalid backup stream error

2012-03-09 Thread Ian Collins
On 03/10/12 01:48 AM, Jim Klimov wrote: 2012-03-09 9:24, Ian Collins wrote: I sent the snapshot to a file, copied the file to the remote host and piped the file into zfs receive. That worked and I was able to send further snapshots with ssh. Odd. Is it possible that in case of zfs send

Re: [zfs-discuss] zfs send/receive script

2012-03-09 Thread Ian Collins
On 03/10/12 02:48 AM, Cameron Hanover wrote: On Mar 6, 2012, at 8:26 AM, Carsten John wrote: Hello everybody, I set up a script to replicate all zfs filesystems (some 300 user home directories in this case) within a given pool to a mirror machine. The basic idea is to send the snapshots

Re: [zfs-discuss] Receive failing with invalid backup stream error

2012-03-08 Thread Ian Collins
On 03/ 3/12 11:57 AM, Ian Collins wrote: Hello, I am having problems sending some snapshots between two fully up to date Solaris 11 systems: zfs send -i tank/live/fs@20120226_0705 tank/live/fs@20120226_1105 | ssh remote zfs receive -vd fileserver/live receiving incremental stream of tank/live/fs

[zfs-discuss] Receive failing with invalid backup stream error

2012-03-02 Thread Ian Collins
Hello, I am having problems sending some snapshots between two fully up to date Solaris 11 systems: zfs send -i tank/live/fs@20120226_0705 tank/live/fs@20120226_1105 | ssh remote zfs receive -vd fileserver/live receiving incremental stream of tank/live/fs@20120226_1105 into

Re: [zfs-discuss] zfs diff performance

2012-02-28 Thread Ian Collins
On 02/28/12 12:53 PM, Ulrich Graef wrote: Hi Ian, On 26.02.12 23:42, Ian Collins wrote: I had high hopes of significant performance gains using zfs diff in Solaris 11 compared to my home-brew stat based version in Solaris 10. However the results I have seen so far have been disappointing

[zfs-discuss] zfs diff performance

2012-02-26 Thread Ian Collins
I had high hopes of significant performance gains using zfs diff in Solaris 11 compared to my home-brew stat based version in Solaris 10. However the results I have seen so far have been disappointing. Testing on a reasonably sized filesystem (4TB), a diff that listed 41k changes took 77
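For context, the two approaches being compared are roughly the following (hypothetical dataset and snapshot names; the find variant is only an approximation, since it keys off modification times rather than true block changes):

```shell
# Solaris 11: kernel-assisted diff between two snapshots
time zfs diff tank/fs@monday tank/fs@tuesday

# Solaris 10 era home-brew: walk the live tree and report files
# modified since the reference snapshot was taken
find /tank/fs -newer /tank/fs/.zfs/snapshot/monday -print
```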

Re: [zfs-discuss] Server upgrade

2012-02-17 Thread Ian Collins
On 02/17/12 03:54 AM, Edward Ned Harvey wrote: If you consider paying for solaris - at Oracle, you just pay them for an OS and they don't care which one you use. Could be oracle linux, solaris, or solaris express. I would recommend solaris 11 express based on personal experience. It gets

[zfs-discuss] Strange send failure

2012-02-08 Thread Ian Collins
Hello, I'm attempting a dry run of sending the root dataset of a zone from one Solaris 11 host to another: sudo zfs send -r rpool/zoneRoot/zone@to_send | sudo ssh remote zfs receive -ven fileserver/zones But I'm seeing cannot receive: stream has unsupported feature, feature flags = 24

Re: [zfs-discuss] Improving L1ARC cache efficiency with dedup

2011-12-08 Thread Ian Collins
On 12/ 9/11 12:39 AM, Darren J Moffat wrote: On 12/07/11 20:48, Mertol Ozyoney wrote: Unfortunately the answer is no. Neither L1 nor L2 cache is dedup aware. The only vendor I know that can do this is Netapp. In fact, most of our functions, like replication, are not dedup aware. For example,

Re: [zfs-discuss] First zone creation - getting ZFS error

2011-12-08 Thread Ian Collins
On 12/ 9/11 11:37 AM, Betsy Schwartz wrote: On Dec 7, 2011, at 9:50 PM, Ian Collins i...@ianshome.com wrote: On 12/ 7/11 05:12 AM, Mark Creamer wrote: Since the zfs dataset datastore/zones is created, I don't understand what the error is trying to get me to do. Do I have to do: zfs create

Re: [zfs-discuss] First zone creation - getting ZFS error

2011-12-07 Thread Ian Collins
On 12/ 7/11 05:12 AM, Mark Creamer wrote: I'm running OI 151a. I'm trying to create a zone for the first time, and am getting an error about zfs. I'm logged in as me, then su - to root before running these commands. I have a pool called datastore, mounted at /datastore Per the wiki document

[zfs-discuss] Confusing zfs error message

2011-11-26 Thread Ian Collins
I was trying to destroy a filesystem and was baffled by the following error: zfs destroy -r rpool/test/opt cannot destroy 'rpool/test/opt/csw@2001_1405': dataset already exists zfs destroy -r rpool/test/opt/csw@2001_1405 cannot destroy 'rpool/test/opt/csw@2001_1405': snapshot is
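One common cause of "dataset already exists" on a recursive destroy is a clone hanging off the snapshot: the clone's origin property points back at it. A sketch for tracking the dependant down (the helper name is made up; the dataset names are taken from the message above):

```shell
# List datasets whose 'origin' property points at a given snapshot,
# i.e. its clones.  Reads 'zfs list -H -o name,origin' style input.
clones_of() {
    awk -v s="$1" '$2 == s { print $1 }'
}

# On a live system:
#   zfs list -H -o name,origin -r rpool | clones_of rpool/test/opt/csw@2001_1405
# A clone found this way can be promoted ('zfs promote <clone>') or
# destroyed before retrying the recursive destroy.  'zfs holds <snap>'
# is worth checking too, since holds also block destruction.
```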

Re: [zfs-discuss] Compression

2011-11-22 Thread Ian Collins
On 11/23/11 04:58 PM, Jim Klimov wrote: 2011-11-23 7:39, Matt Breitbach wrote: So I'm looking at files on my ZFS volume that are compressed, and I'm wondering to myself, self, are the values shown here the size on disk, or are they the pre-compressed values. Google gives me no great results on
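For the question being asked here, the rule of thumb is: ls -l reports the logical (pre-compression) file size, du(1) reports blocks actually allocated on disk, and zfs get compressratio gives the dataset-wide ratio. The ratio itself is just logical bytes over allocated bytes; a sketch (the helper name is made up, and the logicalused property only exists on releases that added it):

```shell
# compressratio = logical (uncompressed) bytes / bytes on disk.
ratio() {
    awk -v l="$1" -v u="$2" 'BEGIN { printf "%.2fx\n", l / u }'
}

# On a live system, feed it exact byte counts from 'zfs get -p',
# e.g.:  zfs get -Hp -o value used pool/fs
```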

Re: [zfs-discuss] slow zfs send/recv speed

2011-11-15 Thread Ian Collins
On 11/16/11 01:01 PM, Eric D. Mudama wrote: On Wed, Nov 16 at 3:05, Anatoly wrote: Good day, The speed of send/recv is around 30-60 MBytes/s for initial send and 17-25 MBytes/s for incremental. I have seen lots of setups with 1 disk to 100+ disks in pool. But the speed doesn't vary in any
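A technique often suggested in these threads for slow send/recv over ssh is to put a large buffer on each end of the pipe, so the sender's bursty reads never stall waiting on the network or the receiver's commits. A sketch using mbuffer (host names, dataset names and buffer sizes are illustrative, not from the thread):

```shell
# Sketch only: decouple 'zfs send' bursts from network and receive
# latency with a 1 GB memory buffer on each side of the ssh pipe.
zfs send tank/fs@snap \
    | mbuffer -q -s 128k -m 1G \
    | ssh remote 'mbuffer -q -s 128k -m 1G | zfs receive -d backup'
```

Whether this helps depends on where the bottleneck actually is; if the receiving pool itself can only commit 20 MB/s, no amount of buffering will change the steady-state rate.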

Re: [zfs-discuss] Have receiving snapshots become slower?

2011-11-15 Thread Ian Collins
On 11/14/11 04:00 AM, Jeff Savit wrote: On 11/12/2011 03:04 PM, Ian Collins wrote: It turns out this was a problem with e1000g interfaces. When we swapped over to an igb port, the problem went away. Ian, could you summarize what the e1000g problem was? It might be interesting or useful

Re: [zfs-discuss] Have receiving snapshots become slower?

2011-11-12 Thread Ian Collins
On 09/30/11 08:12 AM, Ian Collins wrote: On 09/30/11 08:03 AM, Bob Friesenhahn wrote: On Fri, 30 Sep 2011, Ian Collins wrote: Slowing down replication is not a good move! Do you prefer pool corruption? ;-) Probably they fixed a dire bug and this is the cost of the fix. Could be. I

Re: [zfs-discuss] how to set up solaris os and cache within one SSD

2011-11-11 Thread Ian Collins
On 11/11/11 08:52 PM, darkblue wrote: 2011/11/11 Ian Collins i...@ianshome.com On 11/11/11 02:42 AM, Edward Ned Harvey wrote: From: zfs-discuss-boun...@opensolaris.org [mailto:zfs

Re: [zfs-discuss] how to set up solaris os and cache within one SSD

2011-11-10 Thread Ian Collins
On 11/11/11 02:42 AM, Edward Ned Harvey wrote: From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss- boun...@opensolaris.org] On Behalf Of darkblue 1 * XEON 5606 1 * supermicro X8DT3-LN4F 6 * 4G RECC RAM 22 * WD RE3 1T harddisk 4 * intel 320 (160G) SSD 1 * supermicro 846E1-900B chassis

Re: [zfs-discuss] Stream versions in Solaris 10.

2011-11-04 Thread Ian Collins
On 11/ 5/11 02:37 PM, Matthew Ahrens wrote: On Wed, Oct 19, 2011 at 1:52 AM, Ian Collins i...@ianshome.com wrote: I just tried sending from an oi151a system to a Solaris 10 backup server and the server barfed with zfs_receive: stream is unsupported

Re: [zfs-discuss] Log disk with all ssd pool?

2011-10-28 Thread Ian Collins
On 10/28/11 07:04 PM, Mark Wolek wrote: Still kicking around this idea and didn’t see it addressed in any of the threads before the forum closed. If one made an all ssd pool, would a log/cache drive just slow you down? Would zil slow you down? I would guess not, you would still be
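For reference, separate log and cache devices are attached with zpool add, and a log device can be removed again where the pool version supports it, so trying one against an all-SSD pool is a fairly low-risk experiment. A sketch (pool and device names are placeholders):

```shell
# Sketch: attach a separate intent log and an L2ARC cache device.
zpool add tank log c4t0d0
zpool add tank cache c4t1d0

# Log device removal is supported from pool version 19 onwards,
# so the experiment can be undone:
zpool remove tank c4t0d0
```

Cache devices can likewise be removed with zpool remove at any time, since the L2ARC holds no unique data.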

[zfs-discuss] Stream versions in Solaris 10.

2011-10-19 Thread Ian Collins
I just tried sending from an oi151a system to a Solaris 10 backup server and the server barfed with zfs_receive: stream is unsupported version 17 I can't find any documentation linking stream version to release, so does anyone know the Update 10 stream version? -- Ian.
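One way to see what the sender is actually emitting, rather than mapping versions to releases by hand, is to inspect the stream header with zstreamdump where it is available (a sketch; dataset names are illustrative):

```shell
# Sketch: dump the replication stream header on the sending side;
# the stream version and feature flags appear in the BEGIN record
# near the top of the output.
zfs send tank/fs@snap | zstreamdump | head
```

Comparing that against what the receiving host reports as supported narrows the problem to a specific version or feature mismatch.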

Re: [zfs-discuss] about btrfs and zfs

2011-10-18 Thread Ian Collins
On 10/19/11 03:12 AM, Paul Kraus wrote: On Tue, Oct 18, 2011 at 9:13 AM, Darren J Moffat darr...@opensolaris.org wrote: On 10/18/11 14:04, Jim Klimov wrote: 2011-10-18 16:26, Darren J Moffat wrote: ZFS does slightly bias new vdevs for new writes so that we will get to a more even spread.
