Re: [zfs-discuss] deleting a link in ZFS

2012-08-29 Thread Edward Ned Harvey (opensolarisisdeadlongliveopensolaris)
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss- boun...@opensolaris.org] On Behalf Of Murray Cullen I've copied an old home directory from an install of OS 134 to the data pool on my OI install. Opensolaris apparently had wine installed as I now have a link to / in my data

Re: [zfs-discuss] ZFS ok for single disk dev box?

2012-08-30 Thread Edward Ned Harvey (opensolarisisdeadlongliveopensolaris)
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss- boun...@opensolaris.org] On Behalf Of Anonymous Hi. I have a spare off the shelf consumer PC and was thinking about loading Solaris on it for a development box since I use Studio @work and like it better than gcc. I was thinking

Re: [zfs-discuss] Interesting question about L2ARC

2012-09-11 Thread Edward Ned Harvey (opensolarisisdeadlongliveopensolaris)
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss- boun...@opensolaris.org] On Behalf Of Dan Swartzendruber My first thought was everything is hitting in ARC, but that is clearly not the case, since it WAS gradually filling up the cache device.  When things become colder in

[zfs-discuss] scripting incremental replication data streams

2012-09-12 Thread Edward Ned Harvey (opensolarisisdeadlongliveopensolaris)
I send a replication data stream from one host to another. (and receive). I discovered that after receiving, I need to remove the auto-snapshot property on the receiving side, and set the readonly property on the receiving side, to prevent accidental changes (including auto-snapshots.) Question
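The receive-side cleanup described above can be sketched as two property changes (the dataset name is hypothetical; adjust for your setup):

```shell
# Hypothetical destination dataset. After receiving the replication
# stream, keep time-slider from snapshotting the copy, and make it
# read-only so nothing drifts between incremental receives:
zfs set com.sun:auto-snapshot=false tank/backup/home
zfs set readonly=on tank/backup/home
```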

Re: [zfs-discuss] scripting incremental replication data streams

2012-09-12 Thread Edward Ned Harvey (opensolarisisdeadlongliveopensolaris)
From: Richard Elling [mailto:richard.ell...@gmail.com] Question #2:  What's the best way to find the latest matching snap on both the source and destination?  At present, it seems, I'll have to build a list of sender snaps, and a list of receiver snaps, and parse and search them, till I

Re: [zfs-discuss] scripting incremental replication data streams

2012-09-12 Thread Edward Ned Harvey (opensolarisisdeadlongliveopensolaris)
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss- boun...@opensolaris.org] On Behalf Of Edward Ned Harvey Question #2: What's the best way to find the latest matching snap on both the source and destination? At present, it seems, I'll have to build a list of sender snaps,
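The list-and-intersect approach described above can be sketched with `comm` (snapshot names here are stand-in examples; on a real system each list would come from something like `zfs list -H -t snapshot -o name -r pool/fs`, and the zero-padded date naming is what makes lexical sort match creation order):

```shell
# Stand-in snapshot lists for the source and destination datasets.
src_snaps="daily-2012-09-10
daily-2012-09-11
daily-2012-09-12"
dst_snaps="daily-2012-09-09
daily-2012-09-10
daily-2012-09-11"

# comm(1) requires sorted input; -12 keeps only lines common to both.
printf '%s\n' "$src_snaps" | sort > /tmp/src.$$
printf '%s\n' "$dst_snaps" | sort > /tmp/dst.$$
latest_common=$(comm -12 /tmp/src.$$ /tmp/dst.$$ | tail -1)
rm -f /tmp/src.$$ /tmp/dst.$$

# The newest snapshot present on both sides is the incremental base:
echo "$latest_common"
```

The result would feed the `-i` argument of the next `zfs send`.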

Re: [zfs-discuss] Zvol vs zfs send/zfs receive

2012-09-14 Thread Edward Ned Harvey (opensolarisisdeadlongliveopensolaris)
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss- boun...@opensolaris.org] On Behalf Of Dave Pooser Unfortunately I did not realize that zvols require disk space sufficient to duplicate the zvol, and my zpool wasn't big enough. After a false start (zpool add is dangerous when

Re: [zfs-discuss] Zvol vs zfs send/zfs receive

2012-09-16 Thread Edward Ned Harvey (opensolarisisdeadlongliveopensolaris)
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss- boun...@opensolaris.org] On Behalf Of Bill Sommerfeld But simply creating the snapshot on the sending side should be no problem. By default, zvols have reservations equal to their size (so that writes don't fail due to the

Re: [zfs-discuss] Interesting question about L2ARC

2012-09-26 Thread Edward Ned Harvey (opensolarisisdeadlongliveopensolaris)
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss- boun...@opensolaris.org] On Behalf Of Jim Klimov Got me wondering: how many reads of a block from spinning rust suffice for it to ultimately get into L2ARC? Just one so it gets into a recent-read list of the ARC and then expires

[zfs-discuss] zvol refreservation size

2012-09-26 Thread Edward Ned Harvey (opensolarisisdeadlongliveopensolaris)
When I create a 50G zvol, it gets volsize 50G, and it gets used and refreservation 51.6G I have some filesystems already in use, hosting VM's, and I'd like to mimic the refreservation setting on the filesystem, as if I were smart enough from the beginning to have used the zvol. So my question
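The knob for mimicking that guarantee on a filesystem is the same property the zvol gets automatically; a hedged sketch (dataset name is hypothetical, and 51.6G mirrors the figure observed above):

```shell
# Give a VM-hosting filesystem the same space guarantee a 50G zvol
# receives by default (volsize plus metadata overhead):
zfs set refreservation=51.6G tank/vm/guest1
zfs get refreservation tank/vm/guest1
```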

[zfs-discuss] vm server storage mirror

2012-09-26 Thread Edward Ned Harvey (opensolarisisdeadlongliveopensolaris)
Here's another one. Two identical servers are sitting side by side. They could be connected to each other via anything (presently using crossover ethernet cable.) And obviously they both connect to the regular LAN. You want to serve VM's from at least one of them, and even if the VM's

Re: [zfs-discuss] vm server storage mirror

2012-09-27 Thread Edward Ned Harvey (opensolarisisdeadlongliveopensolaris)
From: Tim Cook [mailto:t...@cook.ms] Sent: Wednesday, September 26, 2012 3:45 PM I would suggest if you're doing a crossover between systems, you use infiniband rather than ethernet.  You can eBay a 40Gb IB card for under $300.  Quite frankly the performance issues should become almost a

[zfs-discuss] Failure to zfs destroy - after interrupting zfs receive

2012-09-28 Thread Edward Ned Harvey (opensolarisisdeadlongliveopensolaris)
Formerly, if you interrupted a zfs receive, it would leave a clone with a % in its name, and you could find it via zdb -d and then you could destroy the clone, and then you could destroy the filesystem you had interrupted receiving. That was considered a bug, and it was fixed, I think by Sun.

[zfs-discuss] iscsi confusion

2012-09-28 Thread Edward Ned Harvey (opensolarisisdeadlongliveopensolaris)
I am confused, because I would have expected a 1-to-1 mapping, if you create an iscsi target on some system, you would have to specify which LUN it connects to. But that is not the case... I read the man pages for sbdadm, stmfadm, itadm, and iscsiadm. I read some online examples, where you

Re: [zfs-discuss] vm server storage mirror

2012-10-01 Thread Edward Ned Harvey (opensolarisisdeadlongliveopensolaris)
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss- boun...@opensolaris.org] On Behalf Of Jim Klimov If they are close enough for crossover cable where the cable is UTP, then they are close enough for SAS. Pardon my ignorance, can a system easily serve its local storage

Re: [zfs-discuss] Best way to measure performance of ZIL

2012-10-01 Thread Edward Ned Harvey (opensolarisisdeadlongliveopensolaris)
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss- --- How will improving ZIL latency improve performance of my pool that is used as a NFS share to ESXi hosts which forces sync writes only (i.e. will it be noticeable in an end-to-end context)? Just perform a bunch of

Re: [zfs-discuss] Failure to zfs destroy - after interrupting zfs receive

2012-10-03 Thread Edward Ned Harvey (opensolarisisdeadlongliveopensolaris)
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss- boun...@opensolaris.org] On Behalf Of Ariel T. Glenn I have the same issue as described by Ned in his email. I had a zfs recv going that deadlocked against a zfs list; after a day of leaving them hung I finally had to hard

Re: [zfs-discuss] vm server storage mirror

2012-10-03 Thread Edward Ned Harvey (opensolarisisdeadlongliveopensolaris)
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss- boun...@opensolaris.org] On Behalf Of Edward Ned Harvey it doesn't work right - It turns out, iscsi devices (And I presume SAS devices) are not removable storage. That means, if the device goes offline and comes back online

Re: [zfs-discuss] Making ZIL faster

2012-10-03 Thread Edward Ned Harvey (opensolarisisdeadlongliveopensolaris)
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss- boun...@opensolaris.org] On Behalf Of Schweiss, Chip How can I determine for sure that my ZIL is my bottleneck?  If it is the bottleneck, is it possible to keep adding mirrored pairs of SSDs to the ZIL to make it faster?  Or

Re: [zfs-discuss] Making ZIL faster

2012-10-04 Thread Edward Ned Harvey (opensolarisisdeadlongliveopensolaris)
From: Andrew Gabriel [mailto:andrew.gabr...@cucumber.demon.co.uk] Temporarily set sync=disabled Or, depending on your application, leave it that way permanently. I know, for the work I do, most systems I support at most locations have sync=disabled. It all depends on the workload.

Re: [zfs-discuss] Making ZIL faster

2012-10-04 Thread Edward Ned Harvey (opensolarisisdeadlongliveopensolaris)
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss- boun...@opensolaris.org] On Behalf Of Schweiss, Chip The ZIL can have any number of SSDs attached either mirror or individually. ZFS will stripe across these in a raid0 or raid10 fashion depending on how you configure. I'm

Re: [zfs-discuss] vm server storage mirror

2012-10-04 Thread Edward Ned Harvey (opensolarisisdeadlongliveopensolaris)
From: Jim Klimov [mailto:jimkli...@cos.ru] Well, on my system that I complained a lot about last year, I've had a physical pool, a zvol in it, shared and imported over iscsi on loopback (or sometimes initiated from another box), and another pool inside that zvol ultimately. Ick. And it

Re: [zfs-discuss] vm server storage mirror

2012-10-04 Thread Edward Ned Harvey (opensolarisisdeadlongliveopensolaris)
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss- boun...@opensolaris.org] On Behalf Of Jim Klimov There are also loops ;) # svcs -d filesystem/usr STATE STIME FMRI online Aug_27 svc:/system/scheduler:default ... # svcs -d scheduler STATE

Re: [zfs-discuss] Making ZIL faster

2012-10-04 Thread Edward Ned Harvey (opensolarisisdeadlongliveopensolaris)
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss- boun...@opensolaris.org] On Behalf Of Schweiss, Chip If I get to build this system, it will house a decent size VMware NFS storage w/ 200+ VMs, which will be dual connected via 10Gbe.   This is all medical imaging research. 

Re: [zfs-discuss] Making ZIL faster

2012-10-04 Thread Edward Ned Harvey (opensolarisisdeadlongliveopensolaris)
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss- boun...@opensolaris.org] On Behalf Of Neil Perrin The ZIL code chains blocks together and these are allocated round robin among slogs or if they don't exist then the main pool devices. So, if somebody is doing sync writes as

Re: [zfs-discuss] vm server storage mirror

2012-10-05 Thread Edward Ned Harvey (opensolarisisdeadlongliveopensolaris)
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss- boun...@opensolaris.org] On Behalf Of Jim Klimov Well, it seems just like a peculiar effect of required vs. optional dependencies. The loop is in the default installation. Details: # svcprop filesystem/usr | grep scheduler

Re: [zfs-discuss] Making ZIL faster

2012-10-05 Thread Edward Ned Harvey (opensolarisisdeadlongliveopensolaris)
From: Neil Perrin [mailto:neil.per...@oracle.com] In general - yes, but it really depends. Multiple synchronous writes of any size across multiple file systems will fan out across the log devices. That is because there is a separate independent log chain for each file system. Also large

Re: [zfs-discuss] Building an On-Site and Off-Site ZFS server, replication question

2012-10-05 Thread Edward Ned Harvey (opensolarisisdeadlongliveopensolaris)
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss- boun...@opensolaris.org] On Behalf Of Tiernan OToole I am in the process of planning a system which will have 2 ZFS servers, one on site, one off site. The on site server will be used by workstations and servers in house, and

Re: [zfs-discuss] vm server storage mirror

2012-10-05 Thread Edward Ned Harvey (opensolarisisdeadlongliveopensolaris)
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss- boun...@opensolaris.org] On Behalf Of Edward Ned Harvey I must be missing something - I don't see anything above that indicates any required vs optional dependencies. Ok, I see that now. (Thanks to the SMF FAQ). A dependency

Re: [zfs-discuss] Building an On-Site and Off-Site ZFS server, replication question

2012-10-05 Thread Edward Ned Harvey (opensolarisisdeadlongliveopensolaris)
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss- boun...@opensolaris.org] On Behalf Of Frank Cusack On Fri, Oct 5, 2012 at 3:17 AM, Ian Collins i...@ianshome.com wrote: I do have to suffer a slow, glitchy WAN to a remote server and rather than send stream files, I broke the

Re: [zfs-discuss] How many disk in one pool

2012-10-05 Thread Edward Ned Harvey (opensolarisisdeadlongliveopensolaris)
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss- boun...@opensolaris.org] On Behalf Of Albert Shih I'm actually running ZFS under FreeBSD. I've a question about how many disks I can have in one pool. At this moment I'm running with one server (FreeBSD 9.0) with 4 MD1200

Re: [zfs-discuss] Building an On-Site and Off-Site ZFS server, replication question

2012-10-10 Thread Edward Ned Harvey (opensolarisisdeadlongliveopensolaris)
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss- boun...@opensolaris.org] On Behalf Of Richard Elling If the recipient system doesn't support zfs receive, [...] On that note, is there a minimal user-mode zfs thing that would allow receiving a stream into an image file? No

Re: [zfs-discuss] Directory is not accessible

2012-10-10 Thread Edward Ned Harvey (opensolarisisdeadlongliveopensolaris)
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss- boun...@opensolaris.org] On Behalf Of Sami Tuominen Unfortunately there aren't any snapshots. The version of zpool is 15. Is it safe to upgrade that? Is zpool clear -F supported or of any use here? The only thing that will be

Re: [zfs-discuss] Building an On-Site and Off-Site ZFS server, replication question

2012-10-11 Thread Edward Ned Harvey (opensolarisisdeadlongliveopensolaris)
From: Richard Elling [mailto:richard.ell...@gmail.com] Read it again he asked, On that note, is there a minimal user-mode zfs thing that would allow receiving a stream into an image file?  Something like: zfs send ... | ssh user@host cat file He didn't say he wanted to cat to a

Re: [zfs-discuss] Building an On-Site and Off-Site ZFS server, replication question

2012-10-12 Thread Edward Ned Harvey (opensolarisisdeadlongliveopensolaris)
From: Richard Elling [mailto:richard.ell...@gmail.com] Pedantically, a pool can be made in a file, so it works the same... Pool can only be made in a file, by a system that is able to create a pool. Point is, his receiving system runs linux and doesn't have any zfs; his receiving system is

Re: [zfs-discuss] ZFS best practice for FreeBSD?

2012-10-12 Thread Edward Ned Harvey (opensolarisisdeadlongliveopensolaris)
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss- boun...@opensolaris.org] On Behalf Of andy thomas According to a Sun document called something like 'ZFS best practice' I read some time ago, best practice was to use the entire disk for ZFS and not to partition or slice it in

Re: [zfs-discuss] Building an On-Site and Off-Site ZFS server, replication question

2012-10-12 Thread Edward Ned Harvey (opensolarisisdeadlongliveopensolaris)
Jim, I'm trying to contact you off-list, but it doesn't seem to be working. Can you please contact me off-list? ___ zfs-discuss mailing list zfs-discuss@opensolaris.org http://mail.opensolaris.org/mailman/listinfo/zfs-discuss

Re: [zfs-discuss] ZFS best practice for FreeBSD?

2012-10-14 Thread Edward Ned Harvey (opensolarisisdeadlongliveopensolaris)
From: Ian Collins [mailto:i...@ianshome.com] On 10/13/12 02:12, Edward Ned Harvey (opensolarisisdeadlongliveopensolaris) wrote: There are at least a couple of solid reasons *in favor* of partitioning. #1 It seems common, at least to me, that I'll build a server with let's say, 12

Re: [zfs-discuss] ZFS best practice for FreeBSD?

2012-10-14 Thread Edward Ned Harvey (opensolarisisdeadlongliveopensolaris)
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss- boun...@opensolaris.org] On Behalf Of Edward Ned Harvey A solid point. I don't. This doesn't mean you can't - it just means I don't. This response was kind of long-winded. So here's a simpler version: Suppose 6 disks in a

Re: [zfs-discuss] Fixing device names after disk shuffle

2012-10-14 Thread Edward Ned Harvey (opensolarisisdeadlongliveopensolaris)
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss- boun...@opensolaris.org] On Behalf Of Paul van der Zwan What was c5t2 is now c7t1 and what was c4t1 is now c5t2. Everything seems to be working fine, it's just a bit confusing. That ... Doesn't make any sense. Did you

[zfs-discuss] openindiana-1 filesystem, time-slider, and snapshots

2012-10-16 Thread Edward Ned Harvey (opensolarisisdeadlongliveopensolaris)
Can anyone explain to me what the openindiana-1 filesystem is all about? I thought it was the backup copy of the openindiana filesystem, when you apply OS updates, but that doesn't seem to be the case... I have time-slider enabled for rpool/ROOT/openindiana. It has a daily snapshot (amongst

Re: [zfs-discuss] zfs send to older version

2012-10-19 Thread Edward Ned Harvey (opensolarisisdeadlongliveopensolaris)
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss- boun...@opensolaris.org] On Behalf Of Ian Collins You have to create pools/filesystems with the older versions used by the destination machine. Apparently zpool create -d -o version=28 you might want to do on the new system...
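The suggestion above can be sketched as follows (pool and device names are hypothetical); creating the pool on the newer system at the destination's version keeps its streams receivable there:

```shell
# -d disables features the older system cannot read; -o version=28
# pins the on-disk format to what the destination understands:
zpool create -d -o version=28 tank c0t1d0
```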

Re: [zfs-discuss] Changing rpool device paths/drivers

2012-10-19 Thread Edward Ned Harvey (opensolarisisdeadlongliveopensolaris)
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss- boun...@opensolaris.org] On Behalf Of James C. McPherson As far as I'm aware, having an rpool on multipathed devices is fine. Even a year ago, a new system I bought from Oracle came with multipath devices for all devices by

Re: [zfs-discuss] vm server storage mirror

2012-10-19 Thread Edward Ned Harvey (opensolarisisdeadlongliveopensolaris)
Yikes, I'm back at it again, and so frustrated. For about 2-3 weeks now, I had the iscsi mirror configuration in production, as previously described. Two disks on system 1 mirror against two disks on system 2, everything done via iscsi, so you could zpool export on machine 1, and then

Re: [zfs-discuss] zfs send to older version

2012-10-19 Thread Edward Ned Harvey (opensolarisisdeadlongliveopensolaris)
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss- boun...@opensolaris.org] On Behalf Of Richard Elling At some point, people will bitterly regret some zpool upgrade with no way back. uhm... and how is that different than anything else in the software world? No attempt at

[zfs-discuss] What happens when you rm zpool.cache?

2012-10-20 Thread Edward Ned Harvey (opensolarisisdeadlongliveopensolaris)
If you rm /etc/zfs/zpool.cache and reboot... The system is smart enough (at least in my case) to re-import rpool, and another pool, but it didn't figure out to re-import some other pool. How does the system decide, in the absence of zpool.cache, which pools it's going to import at boot?
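As a sketch of the mechanism (pool name hypothetical): boot-time import is driven by whichever pools are recorded in /etc/zfs/zpool.cache, and the `cachefile` pool property controls whether a pool gets recorded there:

```shell
# Import a pool without recording it, so it will NOT auto-import at boot:
zpool import -o cachefile=none tank
# Re-enable automatic import by pointing it back at the default cache:
zpool set cachefile=/etc/zfs/zpool.cache tank
```

(The root pool is a special case: the boot process locates it via the boot path rather than the cache.)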

Re: [zfs-discuss] vm server storage mirror

2012-10-20 Thread Edward Ned Harvey (opensolarisisdeadlongliveopensolaris)
From: Timothy Coalson [mailto:tsc...@mst.edu] Sent: Friday, October 19, 2012 9:43 PM A shot in the dark here, but perhaps one of the disks involved is taking a long time to return from reads, but is returning eventually, so ZFS doesn't notice the problem?  Watching 'iostat -x' for busy

Re: [zfs-discuss] What happens when you rm zpool.cache?

2012-10-22 Thread Edward Ned Harvey (opensolarisisdeadlongliveopensolaris)
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss- boun...@opensolaris.org] On Behalf Of Gary Mills On Sun, Oct 21, 2012 at 11:40:31AM +0200, Bogdan Ćulibrk wrote: Follow up question regarding this: is there any way to disable automatic import of any non-rpool on boot

Re: [zfs-discuss] What happens when you rm zpool.cache?

2012-10-22 Thread Edward Ned Harvey (opensolarisisdeadlongliveopensolaris)
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss- boun...@opensolaris.org] On Behalf Of Edward Ned Harvey If you rm /etc/zfs/zpool.cache and reboot... The system is smart enough (at least in my case) to re-import rpool, and another pool, but it didn't figure out to re-import

Re: [zfs-discuss] What happens when you rm zpool.cache?

2012-10-22 Thread Edward Ned Harvey (opensolarisisdeadlongliveopensolaris)
From: Jim Klimov [mailto:jimkli...@cos.ru] Sent: Monday, October 22, 2012 7:26 AM Are you sure that the system with failed mounts came up NOT in a read-only root moment, and that your removal of /etc/zfs/zpool.cache did in fact happen (and that you did not then boot into an earlier BE with

Re: [zfs-discuss] What is L2ARC write pattern?

2012-10-23 Thread Edward Ned Harvey (opensolarisisdeadlongliveopensolaris)
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss- boun...@opensolaris.org] On Behalf Of Jim Klimov One idea I have is that a laptop which only has a single HDD slot, often has SD/MMC cardreader slots. If populated with a card for L2ARC, can it be expected to boost the

Re: [zfs-discuss] zfs send to older version

2012-10-23 Thread Edward Ned Harvey (opensolarisisdeadlongliveopensolaris)
From: Richard Elling [mailto:richard.ell...@gmail.com] At some point, people will bitterly regret some zpool upgrade with no way back. uhm... and how is that different than anything else in the software world? No attempt at backward compatibility, and no downgrade path, not even by

Re: [zfs-discuss] zfs send to older version

2012-10-23 Thread Edward Ned Harvey (opensolarisisdeadlongliveopensolaris)
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss- boun...@opensolaris.org] On Behalf Of Karl Wagner The only thing I think Oracle should have done differently is to allow either a downgrade or creating a send stream in a lower version (reformatting the data where necessary, and

Re: [zfs-discuss] Scrub and checksum permutations

2012-10-25 Thread Edward Ned Harvey (opensolarisisdeadlongliveopensolaris)
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss- boun...@opensolaris.org] On Behalf Of Karl Wagner I can only speak anecdotally, but I believe it does. Watching zpool iostat it does read all data on both disks in a mirrored pair. Logically, it would not make sense not to

Re: [zfs-discuss] Scrub and checksum permutations

2012-10-25 Thread Edward Ned Harvey (opensolarisisdeadlongliveopensolaris)
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss- boun...@opensolaris.org] On Behalf Of Jim Klimov Logically, yes - I agree this is what we expect to be done. However, at least with the normal ZFS reading pipeline, reads of redundant copies and parities only kick in if the

Re: [zfs-discuss] Zpool LUN Sizes

2012-10-27 Thread Edward Ned Harvey (opensolarisisdeadlongliveopensolaris)
From: Edward Ned Harvey (opensolarisisdeadlongliveopensolaris) Performance is much better if you use mirrors instead of raid. (Sequential performance is just as good either way, but sequential IO is unusual for most use cases. Random IO is much better with mirrors, and that includes scrubs

Re: [zfs-discuss] Zpool LUN Sizes

2012-10-27 Thread Edward Ned Harvey (opensolarisisdeadlongliveopensolaris)
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss- boun...@opensolaris.org] On Behalf Of Fajar A. Nugraha So my suggestion is actually just present one huge 25TB LUN to zfs and let the SAN handle redundancy. Oh - no. Definitely let zfs handle the redundancy. Because ZFS is

Re: [zfs-discuss] Scrub and checksum permutations

2012-10-28 Thread Edward Ned Harvey (opensolarisisdeadlongliveopensolaris)
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss- boun...@opensolaris.org] On Behalf Of Jim Klimov I tend to agree that parity calculations likely are faster (even if not all parities are simple XORs - that would be silly for double- or triple-parity sets which may use

Re: [zfs-discuss] Strange mount -a problem in Solaris 11.1

2012-10-31 Thread Edward Ned Harvey (opensolarisisdeadlongliveopensolaris)
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss- boun...@opensolaris.org] On Behalf Of Ian Collins I have a recently upgraded (to Solaris 11.1) test system that fails to mount its filesystems on boot. Running zfs mount -a results in the odd error #zfs mount -a

Re: [zfs-discuss] Strange mount -a problem in Solaris 11.1

2012-11-01 Thread Edward Ned Harvey (opensolarisisdeadlongliveopensolaris)
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss- boun...@opensolaris.org] On Behalf Of Ian Collins ioctl(3, ZFS_IOC_OBJECT_STATS, 0xF706BBB0) The system boots up fine in the original BE. The root (only) pool in a single drive. Any ideas? devfsadm -Cv rm

Re: [zfs-discuss] Dedicated server running ESXi with no RAID card, ZFS for storage?

2012-11-07 Thread Edward Ned Harvey (opensolarisisdeadlongliveopensolaris)
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss- boun...@opensolaris.org] On Behalf Of Tiernan OToole I have a Dedicated server in a data center in Germany, and it has 2 3TB drives, but only software RAID. I have got them to install VMWare ESXi and so far everything is going

Re: [zfs-discuss] Dedicated server running ESXi with no RAID card, ZFS for storage?

2012-11-07 Thread Edward Ned Harvey (opensolarisisdeadlongliveopensolaris)
From: Dan Swartzendruber [mailto:dswa...@druber.com] I'm curious here. Your experience is 180 degrees opposite from mine. I run an all in one in production and I get native disk performance, and ESXi virtual disk I/O is faster than with a physical SAN/NAS for the NFS datastore, since the

Re: [zfs-discuss] Dedicated server running ESXi with no RAID card, ZFS for storage?

2012-11-07 Thread Edward Ned Harvey (opensolarisisdeadlongliveopensolaris)
From: Edward Ned Harvey (opensolarisisdeadlongliveopensolaris) Stuff like that. I could go on, but it basically comes down to: With openindiana, you can do a lot more than you can with ESXi. Because it's a complete OS. You simply have more freedom, better performance, less maintenance

Re: [zfs-discuss] Dedicated server running ESXi with no RAID card, ZFS for storage?

2012-11-08 Thread Edward Ned Harvey (opensolarisisdeadlongliveopensolaris)
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss- boun...@opensolaris.org] On Behalf Of Jim Klimov the VM running a ZFS OS enjoys PCI-pass-through, so it gets dedicated hardware access to the HBA(s) and harddisks at raw speeds, with no extra layers of lags in between. Ah.

Re: [zfs-discuss] Dedicated server running ESXi with no RAID card, ZFS for storage?

2012-11-08 Thread Edward Ned Harvey (opensolarisisdeadlongliveopensolaris)
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss- boun...@opensolaris.org] On Behalf Of Karl Wagner I am just wondering why you export the ZFS system through NFS? I have had much better results (albeit spending more time setting up) using iSCSI. I found that performance was

Re: [zfs-discuss] Dedicated server running ESXi with no RAID card, ZFS for storage?

2012-11-08 Thread Edward Ned Harvey (opensolarisisdeadlongliveopensolaris)
From: Dan Swartzendruber [mailto:dswa...@druber.com] Now you have me totally confused. How does your setup get data from the guest to the OI box? If thru a wire, if it's gig-e, it's going to be 1/3-1/2 the speed of the other way. If you're saying you use 10gig or some-such, we're talking

Re: [zfs-discuss] Dedicated server running ESXi with no RAID card, ZFS for storage?

2012-11-09 Thread Edward Ned Harvey (opensolarisisdeadlongliveopensolaris)
From: Dan Swartzendruber [mailto:dswa...@druber.com] I have to admit Ned's (what do I call you?) idea is interesting. I may give it a try... Yup, officially Edward, most people call me Ned. I contributed to the OI VirtualBox instructions. See here: http://wiki.openindiana.org/oi/VirtualBox

Re: [zfs-discuss] Dedicated server running ESXi with no RAID card, ZFS for storage?

2012-11-09 Thread Edward Ned Harvey (opensolarisisdeadlongliveopensolaris)
From: Karl Wagner [mailto:k...@mouse-hole.com] If I was doing this now, I would probably use the ZFS aware OS bare metal, but I still think I would use iSCSI to export the ZVols (mainly due to the ability to use it across a real network, hence allowing guests to be migrated simply) Yes,

Re: [zfs-discuss] Dedicated server running ESXi with no RAID card, ZFS for storage?

2012-11-09 Thread Edward Ned Harvey (opensolarisisdeadlongliveopensolaris)
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss- boun...@opensolaris.org] On Behalf Of Eugen Leitl On Thu, Nov 08, 2012 at 04:57:21AM +, Edward Ned Harvey (opensolarisisdeadlongliveopensolaris) wrote: Yes you can, with the help of Dell, install OMSA to get the web

Re: [zfs-discuss] Dedicated server running ESXi with no RAID card, ZFS for storage?

2012-11-14 Thread Edward Ned Harvey (opensolarisisdeadlongliveopensolaris)
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss- boun...@opensolaris.org] On Behalf Of Dan Swartzendruber Well, I think I give up for now. I spent quite a few hours over the last couple of days trying to get gnome desktop working on bare-metal OI, followed by virtualbox. I

[zfs-discuss] zvol access rights - chown zvol on reboot / startup / boot

2012-11-15 Thread Edward Ned Harvey (opensolarisisdeadlongliveopensolaris)
When I google around for anyone else who cares and may have already solved the problem before I came along - it seems we're all doing the same thing for the same reason. If by any chance you are running VirtualBox on a solaris / opensolaris / openindiana / whatever ZFS host, you could of course

Re: [zfs-discuss] zvol access rights - chown zvol on reboot / startup / boot

2012-11-16 Thread Edward Ned Harvey (opensolarisisdeadlongliveopensolaris)
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss- boun...@opensolaris.org] On Behalf Of Geoff Nordli Instead of using vdi, I use comstar targets and then use vbox built-in scsi initiator. Based on my recent experiences, I am hesitant to use the iscsi ... I don't know if it was

Re: [zfs-discuss] zvol access rights - chown zvol on reboot / startup / boot

2012-11-16 Thread Edward Ned Harvey (opensolarisisdeadlongliveopensolaris)
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss- boun...@opensolaris.org] On Behalf Of Jim Klimov Well, as a simple stone-age solution (to simplify your SMF approach), you can define custom attributes on dataset, zvols included. I think a custom attr must include a colon : in
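Jim's stone-age suggestion can be sketched like this (property name, value, and dataset are all hypothetical); user properties are distinguished from native ones precisely by containing a colon:

```shell
# Record the intended owner as a custom attribute on the zvol itself:
zfs set local:owner=vboxuser tank/vols/guest1
# A boot-time script can then read it back and restore permissions:
owner=$(zfs get -H -o value local:owner tank/vols/guest1)
chown "$owner" /dev/zvol/rdsk/tank/vols/guest1
```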

Re: [zfs-discuss] Dedicated server running ESXi with no RAID card, ZFS for storage?

2012-11-17 Thread Edward Ned Harvey (opensolarisisdeadlongliveopensolaris)
From: Edward Ned Harvey (opensolarisisdeadlongliveopensolaris) Found quite a few posts on various forums of people complaining that RDP with external auth doesn't work (or not reliably), Actually, it does work, and it works reliably, but the setup is very much not straightforward

Re: [zfs-discuss] zvol access rights - chown zvol on reboot / startup / boot

2012-11-17 Thread Edward Ned Harvey (opensolarisisdeadlongliveopensolaris)
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss- boun...@opensolaris.org] On Behalf Of Edward Ned Harvey An easier event to trigger is the starting of the virtualbox guest. Upon vbox guest starting, check the service properties for that instance of vboxsvc, and chmod if

Re: [zfs-discuss] zvol wrapped in a vmdk by Virtual Box and double writes?

2012-11-20 Thread Edward Ned Harvey (opensolarisisdeadlongliveopensolaris)
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss- boun...@opensolaris.org] On Behalf Of Nathan Kroenert I chopped into a few slices - p0 (partition table), p1 128GB, p2 60gb. As part of my work, I have used it both as a RAW device (cxtxdxp1) and wrapped partition 1 with a

Re: [zfs-discuss] zvol wrapped in a vmdk by Virtual Box and double writes?

2012-11-21 Thread Edward Ned Harvey (opensolarisisdeadlongliveopensolaris)
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-boun...@opensolaris.org] On Behalf Of Jim Klimov As for ZIL - even if it is used with the in-pool variant, I don't think your setup needs any extra steps to disable it (as Edward likes to suggest), and most other setups don't

Re: [zfs-discuss] Woeful performance from an iSCSI pool

2012-11-22 Thread Edward Ned Harvey (opensolarisisdeadlongliveopensolaris)
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-boun...@opensolaris.org] On Behalf Of Ian Collins I look after a remote server that has two iSCSI pools. The volumes for each pool are sparse volumes and a while back the target's storage became full, causing weird and

Re: [zfs-discuss] ZFS Appliance as a general-purpose server question

2012-11-23 Thread Edward Ned Harvey (opensolarisisdeadlongliveopensolaris)
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-boun...@opensolaris.org] On Behalf Of Jim Klimov I wonder if it would make weird sense to get the boxes, forfeit the cool-looking Fishworks, and install Solaris/OI/Nexenta/whatever to get the most flexibility and bang for a

Re: [zfs-discuss] Directory is not accessible

2012-11-26 Thread Edward Ned Harvey (opensolarisisdeadlongliveopensolaris)
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-boun...@opensolaris.org] On Behalf Of Sami Tuominen How can one remove a directory containing corrupt files or a corrupt file itself? For me rm just gives input/output error. I was hoping to see somebody come up with an answer

Re: [zfs-discuss] zfs on SunFire X2100M2 with hybrid pools

2012-11-27 Thread Edward Ned Harvey (opensolarisisdeadlongliveopensolaris)
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-boun...@opensolaris.org] On Behalf Of Eugen Leitl can I make e.g. LSI SAS3442E directly do SSD caching (it says something about CacheCade, but I'm not sure it's an OS-side driver thing), as it is supposed to boost IOPS?

Re: [zfs-discuss] zfs on SunFire X2100M2 with hybrid pools

2012-11-28 Thread Edward Ned Harvey (opensolarisisdeadlongliveopensolaris)
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-boun...@opensolaris.org] On Behalf Of Jim Klimov I really hope someone better versed in compression - like Saso - would chime in to say whether gzip-9 vs. lzjb (or lz4) sucks in terms of read-speeds from the pools. My HDD-based

Re: [zfs-discuss] Question about degraded drive

2012-11-28 Thread Edward Ned Harvey (opensolarisisdeadlongliveopensolaris)
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-boun...@opensolaris.org] On Behalf Of Freddie Cash And you can try 'zpool online' on the failed drive to see if it comes back online. Be cautious here - I have an anecdote, which might represent a trend in best practice, or it

Re: [zfs-discuss] Question about degraded drive

2012-11-28 Thread Edward Ned Harvey (opensolarisisdeadlongliveopensolaris)
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-boun...@opensolaris.org] On Behalf Of Chris Dunbar - Earthside, LLC # zpool replace tank c11t4d0 # zpool clear tank I would expect this to work, or detach/attach. You should scrub periodically, and ensure no errors after
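A hedged sketch of that recovery sequence, using the pool and device names quoted in the thread (adapt them to your own zpool status output):

```shell
# Try bringing the device back online first; if it stays faulted,
# fall back to zpool replace as quoted above.
zpool online tank c11t4d0
zpool clear tank

# Then verify: a scrub re-reads and checksums every block,
# which is the only way to confirm the pool is really clean.
zpool scrub tank
zpool status -v tank
```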

Re: [zfs-discuss] zfs on SunFire X2100M2 with hybrid pools

2012-11-29 Thread Edward Ned Harvey (opensolarisisdeadlongliveopensolaris)
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-boun...@opensolaris.org] On Behalf Of Jim Klimov this is the part I am not certain about - it is roughly as cheap to READ the gzip-9 datasets as it is to read lzjb (in terms of CPU decompression). Nope. I know LZJB is not LZO,

Re: [zfs-discuss] query re disk mirroring

2012-11-29 Thread Edward Ned Harvey (opensolarisisdeadlongliveopensolaris)
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-boun...@opensolaris.org] On Behalf Of Enda o'Connor - Oracle Ireland - Say I have an ldoms guest that is using zfs root pool that is mirrored, and the two sides of the mirror are coming from two separate vds servers, that is

Re: [zfs-discuss] Remove disk

2012-12-07 Thread Edward Ned Harvey (opensolarisisdeadlongliveopensolaris)
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-boun...@opensolaris.org] On Behalf Of Freddie Cash On Thu, Dec 6, 2012 at 12:35 AM, Albert Shih albert.s...@obspm.fr wrote: On 01/12/2012 at 08:33:31-0700, Jan Owoc wrote: 2) replace the disks with larger ones one-by-one,
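The one-by-one replacement being discussed can be sketched like this (pool and device names are hypothetical; the point is that the pool only grows after the last small disk is replaced):

```shell
# Let the pool expand automatically once every device in the vdev
# has been swapped for a larger one.
zpool set autoexpand=on tank

# Replace one disk at a time; wait for each resilver to complete
# (watch "zpool status tank") before starting the next replace.
zpool replace tank c0t0d0 c0t4d0
zpool replace tank c0t1d0 c0t5d0
```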

Re: [zfs-discuss] any more efficient way to transfer snapshot between two hosts than ssh tunnel?

2012-12-14 Thread Edward Ned Harvey (opensolarisisdeadlongliveopensolaris)
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-boun...@opensolaris.org] On Behalf Of Fred Liu BTW, anyone played NDMP in solaris? Or is it feasible to transfer snapshot via NDMP protocol? I've heard you could, but I've never done it. Sorry I'm not much help, except as a

Re: [zfs-discuss] The format command crashes on 3TB disk but zpool create ok

2012-12-14 Thread Edward Ned Harvey (opensolarisisdeadlongliveopensolaris)
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-boun...@opensolaris.org] On Behalf Of sol I added a 3TB Seagate disk (ST3000DM001) and ran the 'format' command but it crashed and dumped core. However the zpool 'create' command managed to create a pool on the whole disk

Re: [zfs-discuss] S11 vs illumos zfs compatiblity

2012-12-14 Thread Edward Ned Harvey (opensolarisisdeadlongliveopensolaris)
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-boun...@opensolaris.org] On Behalf Of Bob Netherton At this point, the only thing would be to use 11.1 to create a new pool at 151's version (-o version=) and top level dataset (-O version=). Recreate the file system

[zfs-discuss] zfs receive options (was S11 vs illumos zfs compatiblity)

2012-12-21 Thread Edward Ned Harvey (opensolarisisdeadlongliveopensolaris)
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-boun...@opensolaris.org] On Behalf Of bob netherton You can, with recv, override any property in the sending stream that can be set from the command line (ie, a writable). # zfs send repo/support@cpu-0412 | zfs recv -o

Re: [zfs-discuss] zfs receive options (was S11 vs illumos zfs compatiblity)

2012-12-21 Thread Edward Ned Harvey (opensolarisisdeadlongliveopensolaris)
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-boun...@opensolaris.org] On Behalf Of Edward Ned Harvey zfs send foo/bar@42 | zfs receive -o compression=on,sync=disabled biz/baz I have not yet tried this syntax. Because you mentioned it, I looked for it in the man page,
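For reference, the override syntax under discussion looks like this on implementations that support it (Solaris 11-era zfs receive; as the thread notes it is not in every man page, so check zfs(1M) first; the dataset names are the hypothetical ones from the post):

```shell
# Each property override typically takes its own -o flag rather than
# a comma-separated list; availability depends on your zfs version.
zfs send foo/bar@42 | zfs receive -o compression=on -o sync=disabled biz/baz
```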

Re: [zfs-discuss] zfs receive options (was S11 vs illumos zfs compatiblity)

2012-12-22 Thread Edward Ned Harvey (opensolarisisdeadlongliveopensolaris)
From: Cindy Swearingen [mailto:cindy.swearin...@oracle.com] Which man page are you referring to? I see the zfs receive -o syntax in the S11 man page. Oh ... It's the latest openindiana. So I suppose it must be a new feature post-rev-28 in the non-open branch... But it's no big deal. I

Re: [zfs-discuss] poor CIFS and NFS performance

2012-12-31 Thread Edward Ned Harvey (opensolarisisdeadlongliveopensolaris)
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-boun...@opensolaris.org] On Behalf Of Eugen Leitl I have a pool of 8x ST31000340AS on an LSI 8-port adapter as a raidz3 (no compression nor dedup) with reasonable bonnie++ 1.03 values, e.g. 145 MByte/s Seq-Write @ 48% CPU and

Re: [zfs-discuss] iSCSI access patterns and possible improvements?

2013-01-19 Thread Edward Ned Harvey (opensolarisisdeadlongliveopensolaris)
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-boun...@opensolaris.org] On Behalf Of Bob Friesenhahn If almost all of the I/Os are 4K, maybe your ZVOLs should use a volblocksize of 4K? This seems like the most obvious improvement. Oh, I forgot to mention - The above logic
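Because volblocksize is fixed at creation time, matching it to a 4K I/O pattern means creating (or recreating and migrating) the zvol; a minimal sketch with a hypothetical pool, path, and size:

```shell
# A zvol whose block size matches the initiator's 4K I/Os.
# An existing zvol cannot be changed in place -- create a new one
# with the desired volblocksize and copy the data over.
zfs create -V 100G -o volblocksize=4k tank/iscsi/vol0
```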

Re: [zfs-discuss] Resilver w/o errors vs. scrub with errors

2013-01-20 Thread Edward Ned Harvey (opensolarisisdeadlongliveopensolaris)
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-boun...@opensolaris.org] On Behalf Of Stephan Budach I am always experiencing chksum errors while scrubbing my zpool(s), but I never experienced chksum errors while resilvering. Does anybody know why that would be? When you

Re: [zfs-discuss] Resilver w/o errors vs. scrub with errors

2013-01-20 Thread Edward Ned Harvey (opensolarisisdeadlongliveopensolaris)
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-boun...@opensolaris.org] On Behalf Of Jim Klimov And regarding the considerable activity - AFAIK there is little way for ZFS to reliably read and test TXGs newer than X My understanding is like this: When you make a snapshot,

Re: [zfs-discuss] RFE: Un-dedup for unique blocks

2013-01-20 Thread Edward Ned Harvey (opensolarisisdeadlongliveopensolaris)
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-boun...@opensolaris.org] On Behalf Of Nico Williams I've wanted a system where dedup applies only to blocks being written that have a good chance of being dups of others. I think one way to do this would be to keep a scalable

Re: [zfs-discuss] iSCSI access patterns and possible improvements?

2013-01-20 Thread Edward Ned Harvey (opensolarisisdeadlongliveopensolaris)
From: Richard Elling [mailto:richard.ell...@gmail.com] Sent: Saturday, January 19, 2013 5:39 PM the space allocation more closely resembles a variant of mirroring, like some vendors call RAID-1E Awesome, thank you. :-)

Re: [zfs-discuss] RFE: Un-dedup for unique blocks

2013-01-20 Thread Edward Ned Harvey (opensolarisisdeadlongliveopensolaris)
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-boun...@opensolaris.org] On Behalf Of Nico Williams To decide if a block needs dedup one would first check the Bloom filter, then if the block is in it, use the dedup code path, else the non-dedup codepath and insert the block
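Nico's gating idea can be sketched in a few lines of Python. This is illustrative only: a fixed-size toy Bloom filter stands in for the scalable one he describes, and the string return value marks which code path a real write would take.

```python
import hashlib

class BloomFilter:
    """Toy Bloom filter over block checksums (illustration, not ZFS code)."""
    def __init__(self, size_bits=1 << 20, num_hashes=4):
        self.size = size_bits
        self.num_hashes = num_hashes
        self.bits = bytearray(size_bits // 8)

    def _positions(self, key: bytes):
        # Derive num_hashes bit positions from salted SHA-256 digests.
        for i in range(self.num_hashes):
            h = hashlib.sha256(i.to_bytes(2, "big") + key).digest()
            yield int.from_bytes(h[:8], "big") % self.size

    def add(self, key: bytes):
        for p in self._positions(key):
            self.bits[p // 8] |= 1 << (p % 8)

    def might_contain(self, key: bytes) -> bool:
        # False means "definitely never seen"; True means "probably seen".
        return all(self.bits[p // 8] & (1 << (p % 8)) for p in self._positions(key))

def write_block(bf: BloomFilter, checksum: bytes) -> str:
    """Pick a code path for one block write, per the scheme in the thread."""
    if bf.might_contain(checksum):
        return "dedup"       # likely a duplicate: worth consulting the real DDT
    bf.add(checksum)         # first sighting: remember it, skip the DDT entirely
    return "non-dedup"
```

The payoff is that unique blocks (the common case in many workloads) never touch the dedup table at all; only the rare false positive pays the DDT lookup for nothing.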
