Re: [zfs-discuss] incremental backup with zfs to file

2009-08-23 Thread Edward Ned Harvey
zfs send -Rv rp...@0908 > /net/remote/rpool/snaps/rpool.0908 The recommended thing is to zfs send | zfs receive ... or more likely, zfs send | ssh somehost 'zfs receive' You should ensure the source and destination OSes are precisely the same version, because then you're assured the zfs
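A minimal sketch of the piped form recommended above, with placeholder pool, snapshot, and host names:

    # initial full replication stream, piped straight into receive on the remote host
    zfs send -Rv rpool@0908 | ssh somehost 'zfs receive -d backuppool'
    # later, an incremental from the previous snapshot to the current one
    zfs send -Rv -i rpool@0908 rpool@0909 | ssh somehost 'zfs receive -d backuppool'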

Re: [zfs-discuss] How to prevent /usr/bin/chmod from following symbolic links?

2009-08-23 Thread Edward Ned Harvey
How can I prevent /usr/bin/chmod from following symbolic links? I can't find any -P option in the documentation (and it doesn't work either..). Maybe find can be used in some way? Not possible; in Solaris we don't have a lchmod(2) system call which makes adding a chmod option (like
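One workaround in the spirit of the find suggestion above is to let find exclude the symlinks, so chmod never dereferences a link (a sketch; the path and mode are placeholders):

    # change the mode on everything except symbolic links
    find /some/tree ! -type l -exec chmod go-w {} \;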

Re: [zfs-discuss] How to prevent /usr/bin/chmod from following symbolic links?

2009-08-24 Thread Edward Ned Harvey
It's a strange question anyway - You want a single file to have permissions (suppose 755) in one directory, and some different permissions (suppose 700) in some other directory? Then some users could access the file if they use path A, but would be denied access to the same file if they

Re: [zfs-discuss] RAIDZ versus mirrroed

2009-09-16 Thread Edward Ned Harvey
Hi. If I am using slightly more reliable SAS drives versus SATA, SSDs for both L2Arc and ZIL and lots of RAM, will a mirrored pool of say 24 disks hold any significant advantages over a RAIDZ pool? Generally speaking, striping mirrors will be faster than raidz or raidz2, but it will require a

[zfs-discuss] .zfs snapshots on subdirectories?

2009-10-02 Thread Edward Ned Harvey
Suppose I have a storagepool: /storagepool And I have snapshots on it. Then I can access the snaps under /storagepool/.zfs/snapshots But is there any way to enable this within all the subdirs? For example, cd /storagepool/users/eharvey/some/foo/dir cd

Re: [zfs-discuss] Setting up an SSD ZIL - Need A Reality Check

2009-10-20 Thread Edward Ned Harvey
System: Dell 2950 16G RAM 16 1.5T SATA disks in a SAS chassis hanging off of an LSI 3801e, no extra drive slots, a single zpool. snv_124, but with my zpool still running at the 2009.06 version (14). My plan is to put the SSD into an open disk slot on the 2950, but will have to configure

Re: [zfs-discuss] Setting up an SSD ZIL - Need A Reality Check

2009-10-21 Thread Edward Ned Harvey
Thanks Ed. It sounds like you have run in this mode? No issues with the perc? You can JBOD with the perc. It might be technically a raid0 or raid1 with a single disk in it, but that would be functionally equivalent to JBOD. The only time I did this was ... I have a Windows server, on

Re: [zfs-discuss] Setting up an SSD ZIL - Need A Reality Check

2009-10-22 Thread Edward Ned Harvey
Replacing failed disks is easy when PERC is doing the RAID. Just remove the failed drive and replace with a good one, and the PERC will rebuild automatically. Sorry, not correct. When you replace a failed drive, the perc card doesn't know for certain that the new drive you're adding is meant

Re: [zfs-discuss] Setting up an SSD ZIL - Need A Reality Check

2009-10-22 Thread Edward Ned Harvey
The Intel specified random write IOPS are with the cache enabled and without cache flushing. They also carefully only use a limited span of the device, which fits most perfectly with how the device is built. How do you know this? This sounds much more detailed than any average person could

[zfs-discuss] zfs send from solaris 10/08 to zfs receive on solaris 10/09

2009-11-12 Thread Edward Ned Harvey
I built a fileserver on solaris 10u6 (10/08) intending to back it up to another server via zfs send | ssh othermachine 'zfs receive' However, the new server is too new for 10u6 (10/08) and requires a later version of solaris ... presently available is 10u8 (10/09) Is it crazy for me to try the

Re: [zfs-discuss] zfs send from solaris 10/08 to zfs receive on solaris 10/09

2009-11-12 Thread Edward Ned Harvey
*snip* I hope that's clear. Yes, perfectly clear, and very helpful. Thank you very much.

Re: [zfs-discuss] zfs send from solaris 10/08 to zfs receive on solaris 10/09

2009-11-13 Thread Edward Ned Harvey
It says at the end of the zfs send section of the man page The format of the stream is committed. You will be able to receive your streams on future versions of ZFS. 'Twas not always so. It used to say The format of the stream is evolving. No backwards compatibility is guaranteed. You may

Re: [zfs-discuss] Separate Zil on HDD ?

2009-12-02 Thread Edward Ned Harvey
I previously had a linux NFS server that I had mounted 'ASYNC' and, as one would expect, NFS performance was pretty good getting close to 900gb/s. Now that I have moved to opensolaris, NFS performance is not very good, I'm guessing mainly due to the 'SYNC' nature of NFS. I've seen various

[zfs-discuss] ZFS send | verify | receive

2009-12-04 Thread Edward Ned Harvey
If there were a "zfs send" datastream saved someplace, is there a way to verify the integrity of that datastream without doing a "zfs receive" and occupying all that disk space? I am aware that "zfs send" is not a backup solution, due to vulnerability of even a single bit error, and lack of

Re: [zfs-discuss] ZFS send | verify | receive

2009-12-04 Thread Edward Ned Harvey
Depending on your version of OS, I think the following post from Richard Elling will be of great interest to you: - http://richardelling.blogspot.com/2009/10/check-integrity-of-zfs-send-streams.html Thanks! :-) No, wait! According to that page, if you zfs receive -n then you should

Re: [zfs-discuss] ZFS send | verify | receive

2009-12-06 Thread Edward Ned Harvey
If feasible, you may want to generate MD5 sums on the streamed output and then use these for verification. That's actually not a bad idea. It should be kinda obvious, but I hadn't thought of it because it's sort-of duplicating existing functionality. I do have a multipipe script that behaves
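A minimal sketch of that approach, assuming the Solaris digest utility and placeholder file names (md5sum works the same way where available):

    # save the stream and compute an MD5 of the same bytes in a single pass
    zfs send rpool/fs@snap | tee /backup/fs.zfs | digest -a md5 > /backup/fs.zfs.md5
    # later, recompute the sum of the saved stream and compare against the recorded one
    digest -a md5 /backup/fs.zfs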

Re: [zfs-discuss] ZFS send | verify | receive

2009-12-06 Thread Edward Ned Harvey
Where exactly do you get zstreamdump? I found a link to zstreamdump.c ... but is that it? Shouldn't it be part of a source tarball or something? Does it matter what OS? Every reference I see for zstreamdump is about opensolaris. But I'm running solaris.

Re: [zfs-discuss] ZFS send | verify | receive

2009-12-06 Thread Edward Ned Harvey
Gzip can be a bit slow. Luckily there is 'lzop' which is quite a lot more CPU efficient on i386 and AMD64, and even on SPARC. If the compressor is able to keep up with the network and disk, then it is fast enough. See http://www.lzop.org/. In my development/testing this week, I did time
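For illustration, a hedged sketch of lzop used as a filter in the send pipeline (assumes lzop is installed on both ends; pool, snapshot, and host names are placeholders):

    # compress the stream before it crosses the wire, decompress on the far side
    zfs send rpool/fs@snap | lzop -3 | ssh somehost 'lzop -d | zfs receive backuppool/fs'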

Re: [zfs-discuss] ZFS send | verify | receive

2009-12-06 Thread Edward Ned Harvey
OS means Operating System, or OpenSolaris. This is in the second meaning I wrote OS in my answer. It was not obvious you were using Solaris 10 though. Sorry about that. (FYI, zstreamdump seems to be an addition to build 125.) Oh - I never connected OS to OpenSolaris. ;-) So I gather

Re: [zfs-discuss] ZFS send | verify | receive

2009-12-06 Thread Edward Ned Harvey
I see 3.6X less CPU consumption from 'lzop -3' than from 'gzip -3'. Where do you get lzop from? I don't see any binaries on their site, nor blastwave, nor opencsw. And I am having difficulty building it from source.

Re: [zfs-discuss] ZFS send | verify | receive

2009-12-06 Thread Edward Ned Harvey
Oh well. I built LZO, and can't seem to link it in the lzop build, despite correctly setting the FLAGS variables they say in the INSTALL file. I'd love to provide an lzop comparison, but can't get it. I give up ... Also, can't build python-lzo. Also would be sweet, but hey. For whoever

Re: [zfs-discuss] ZFS send | verify | receive

2009-12-06 Thread Edward Ned Harvey
cat my_log_file | tee >(gzip > my_log_file.gz) >(wc -l) >(md5sum) | sort | uniq -c That is great. ;-) Thank you very much.

Re: [zfs-discuss] ZFS incremental receives fail

2009-12-10 Thread Edward Ned Harvey
We've been using ZFS for about two years now and make a lot of use of zfs send/receive to send our data from one X4500 to another. This has been working well for the past 18 months that we've been doing the sends. I recently upgraded the receiving thumper to Solaris 10 u8 and since then,

Re: [zfs-discuss] ZFS - how to determine which physical drive to replace

2009-12-12 Thread Edward Ned Harvey
This is especially important, because if you have 1 failed drive, and you pull the wrong drive, now you have 2 failed drives. And that could destroy the dataset (depending on whether you have raidz-1 or raidz-2) Whenever possible, always get the hotswappable hardware, that will blink a red

Re: [zfs-discuss] zfs zend is very slow

2009-12-16 Thread Edward Ned Harvey
I'll first suggest questioning the measurement of speed you're getting, 12.5Mb/sec. I'll suggest another, more accurate method: date ; zfs send somefilesystem | pv -b | ssh somehost zfs receive foo ; date At any given time, you can see how many bytes have transferred in aggregate, and what time

Re: [zfs-discuss] zfs zend is very slow

2009-12-16 Thread Edward Ned Harvey
I'm seeing similar results, though my file systems currently have de-dupe disabled, and only compression enable, both systems being I can't say this is your issue, but you can count on slow writes with compression on. How slow is slow? Don't know. Irrelevant in this case? Possibly.

Re: [zfs-discuss] zfs zend is very slow

2009-12-17 Thread Edward Ned Harvey
I'm willing to accept slower writes with compression enabled, par for the course. Local writes, even with compression enabled, can still exceed 500MB/sec, with moderate to high CPU usage. These problems seem to have manifested after snv_128, and seemingly only affect ZFS receive speeds. Local

Re: [zfs-discuss] compress an existing filesystem

2009-12-17 Thread Edward Ned Harvey
Hi all, I need to move a filesystem off of one host and onto another smaller one. The fs in question, with no compression enabled, is using 1.2 TB (refer). I'm hoping that zfs compression will dramatically reduce this requirement and allow me to keep the dataset on an 800 GB store.

Re: [zfs-discuss] compress an existing filesystem

2009-12-18 Thread Edward Ned Harvey
I've taken to creating an unmounted empty filesystem with a reservation to prevent the zpool from filling up. It gives you behavior similar to ufs's reserved blocks. So ... Something like this? zpool create -m /path/to/mountpoint myzpool c1t0d0 and then... Assuming it's a 500G disk ... zfs
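A hedged sketch of the reserved, never-mounted filesystem being described, with assumed sizes and names:

    zpool create -m /path/to/mountpoint myzpool c1t0d0
    # carve out ~5% of a 500G disk as a reservation that nothing else can write into
    zfs create -o mountpoint=none -o reservation=25G myzpool/reserved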

Re: [zfs-discuss] Backing up a ZFS pool

2010-01-16 Thread Edward Ned Harvey
What is the best way to back up a zfs pool for recovery? Recover entire pool or files from a pool... Would you use snapshots and clones? I would like to move the backup to a different disk and not use tapes. Personally, I use zfs send | zfs receive to an external disk. Initially a full

Re: [zfs-discuss] zfs send/receive as backup - reliability?

2010-01-16 Thread Edward Ned Harvey
I am considering building a modest sized storage system with zfs. Some of the data on this is quite valuable, some small subset to be backed up forever, and I am evaluating back-up options with that in mind. You don't need to store the zfs send data stream on your backup media. This would be

Re: [zfs-discuss] zfs send/receive as backup - reliability?

2010-01-17 Thread Edward Ned Harvey
NO, zfs send is not a backup. Understood, but perhaps you didn't read my whole message. Here, I will spell out the whole discussion: If you zfs send > somefile it is well understood there are two big problems with this method of backup. #1 If a single bit error is introduced into the file,

Re: [zfs-discuss] Backing up a ZFS pool

2010-01-17 Thread Edward Ned Harvey
Personally, I use zfs send | zfs receive to an external disk. Initially a full image, and later incrementals. Do these incrementals go into the same filesystem that received the original zfs stream? Yes. In fact, I think that's the only way possible. The end result is ... On my
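For illustration, a minimal sketch of a full stream followed by incrementals received into the same target filesystem (all names hypothetical):

    zfs send pool/fs@monday | zfs receive backup/fs
    # each later incremental lands in the same received filesystem
    zfs send -i @monday pool/fs@tuesday | zfs receive backup/fs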

Re: [zfs-discuss] zfs send/receive as backup - reliability?

2010-01-18 Thread Edward Ned Harvey
I still believe that a set of compressed incremental star archives give you more features. Big difference there is that in order to create an incremental star archive, star has to walk the whole filesystem or folder that's getting backed up, and do a stat on every file to see which files have

Re: [zfs-discuss] zfs send/receive as backup - reliability?

2010-01-18 Thread Edward Ned Harvey
Consider then, using a zpool-in-a-file as the file format, rather than zfs send streams. That's a pretty cool idea. Then you've still got the entire zfs volume inside of a file, but you're able to mount and extract individual files if you want, and you're able to pipe your zfs send directly to

Re: [zfs-discuss] Backing up a ZFS pool

2010-01-18 Thread Edward Ned Harvey
Personally, I like to start with a fresh full image once a month, and then do daily incrementals for the rest of the month. This doesn't buy you anything. ZFS isn't like traditional backups. If you never send another full, then eventually the delta from the original to the present will

Re: [zfs-discuss] zfs send/receive as backup - reliability?

2010-01-19 Thread Edward Ned Harvey
Star implements this in a very effective way (by using libfind) that is even faster than the find(1) implementation from Sun. Even if I just find my filesystem, it will run for 7 hours. But zfs can create my whole incremental snapshot in a minute or two. There is no way star or any other

Re: [zfs-discuss] x4500...need input and clarity on striped/mirrored configuration

2010-01-21 Thread Edward Ned Harvey
zpool create -f testpool mirror c0t0d0 c1t0d0 mirror c4t0d0 c6t0d0 mirror c0t1d0 c1t1d0 mirror c4t1d0 c5t1d0 mirror c6t1d0 c7t1d0 mirror c0t2d0 c1t2d0 mirror c4t2d0 c5t2d0 mirror c6t2d0 c7t2d0 mirror c0t3d0 c1t3d0 mirror c4t3d0 c5t3d0 mirror c6t3d0 c7t3d0 mirror

Re: [zfs-discuss] x4500...need input and clarity on striped/mirrored configuration

2010-01-21 Thread Edward Ned Harvey
Zfs does not strictly support RAID 1+0. However, your sample command will create a pool based on mirror vdevs which is written to in a load-shared fashion (not striped). This type of pool is ideal for Although it's not technically striped according to the RAID definition of striping, it does

Re: [zfs-discuss] x4500...need input and clarity on striped/mirrored configuration

2010-01-21 Thread Edward Ned Harvey
zpool create testpool disk1 disk2 disk3 In the traditional sense of RAID, this would create a concatenated data set. The size of the data set is the size of disk1 + disk2 + disk3. However, since this is ZFS, it's not constrained to linearly assigning virtual disk blocks to physical disk blocks

Re: [zfs-discuss] ZFS backup/restore

2010-01-24 Thread Edward Ned Harvey
Are there any plans to have a tool to restore individual files from zfs send streams - like ufsrestore? The best advice I've heard so far is thus: On your backup media, create a zpool in a file container. When you zfs send don't save the data stream. Instead, feed it directly into zfs
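A hedged sketch of the pool-in-a-file idea, with hypothetical paths and sizes:

    # create a file-backed pool on the backup media, then receive into it
    mkfile 100g /backup/media/poolfile
    zpool create backuppool /backup/media/poolfile
    zfs send rpool/fs@snap | zfs receive backuppool/fs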

Re: [zfs-discuss] ZFS configuration suggestion with 24 drives

2010-01-28 Thread Edward Ned Harvey
Replacing my current media server with another larger capacity media server. Also switching over to solaris/zfs. Anyhow we have 24 drive capacity. These are for large sequential access (large media files) used by no more than 3 or 5 users at a time. What type of disks are you using, and

Re: [zfs-discuss] ZFS configuration suggestion with 24 drives

2010-01-29 Thread Edward Ned Harvey
Thanks for the responses guys. It looks like I'll probably use RaidZ2 with 8 drives. The write bandwidth isn't that great as it'll be a hundred gigs every couple weeks but in a bulk load type of environment. So, not a major issue. Testing with 8 drives in a raidz2 easily saturated a GigE

Re: [zfs-discuss] Cores vs. Speed?

2010-02-04 Thread Edward Ned Harvey
I plan to start with 5 1.5 TB drives in a raidz2 configuration and 2 mirrored boot drives. You want to use compression and deduplication and raidz2. I hope you didn't want to get any performance out of this system, because all of those are compute or IO intensive. FWIW ... 5 disks in raidz2

Re: [zfs-discuss] Cores vs. Speed?

2010-02-04 Thread Edward Ned Harvey
Data in raidz2 is striped so that it is split across multiple disks. Partial truth. Yes, the data is on more than one disk, but it's a parity hash, requiring computation overhead and a write operation on each and every disk. It's not simply striped. Whenever you read or write, you need to

Re: [zfs-discuss] Cores vs. Speed?

2010-02-04 Thread Edward Ned Harvey
I want my VMs to run fast - so is it deduplication that really slows things down? Are you saying raidz2 would overwhelm current I/O controllers to where I could not saturate 1 GB network link? Is the CPU I am looking at not capable of doing dedup and compression? Or are no CPUs capable

Re: [zfs-discuss] Cores vs. Speed?

2010-02-06 Thread Edward Ned Harvey
(4) Hold backups from windows machines, mac (time machine), linux. for time machine you will probably find yourself using COMSTAR and the GlobalSAN iSCSI initiator because Time Machine does not seem willing to work over NFS. Otherwise, for Macs you should definitely use NFS,

[zfs-discuss] NFS access by OSX clients (was Cores vs. Speed?)

2010-02-08 Thread Edward Ned Harvey
There's also questions of case sensitivity, locking, being mounted at boot time rather than login time, accomodating more than one user. I've also heard SMB is far slower. The Macs I've switched to automounted NFS are causing me less trouble. If you are in a ``share almost everything''

[zfs-discuss] NFS access by OSX clients (was Cores vs. Speed?)

2010-02-09 Thread Edward Ned Harvey
There's also questions of case sensitivity, locking, being mounted at boot time rather than login time, accomodating more than one user. I've also heard SMB is far slower. The Macs I've switched to automounted NFS are causing me less trouble. If you are in a ``share almost everything''

Re: [zfs-discuss] zfs receive : is this expected ?

2010-02-10 Thread Edward Ned Harvey
amber ~ # zpool list data NAME SIZE ALLOC FREE CAP DEDUP HEALTH ALTROOT data 930G 295G 635G 31% 1.00x ONLINE - amber ~ # zfs send -RD d...@prededup | zfs recv -d ezdata cannot receive new filesystem stream: destination 'ezdata' exists must specify -F to overwrite it

[zfs-discuss] ZFS performance benchmarks in various configurations

2010-02-13 Thread Edward Ned Harvey
I have a new server, with 7 disks in it. I am performing benchmarks on it before putting it into production, to substantiate claims I make, like striping mirrors is faster than raidz and so on. Would anybody like me to test any particular configuration? Unfortunately I don't have any SSD, so I

Re: [zfs-discuss] ZFS performance benchmarks in various configurations

2010-02-13 Thread Edward Ned Harvey
IMHO, sequential tests are a waste of time. With default configs, it will be difficult to separate the raw performance from prefetched performance. You might try disabling prefetch as an option. Let me clarify: Iozone does a nonsequential series of sequential tests, specifically

Re: [zfs-discuss] ZFS performance benchmarks in various configurations

2010-02-14 Thread Edward Ned Harvey
Never mind. I have no interest in performance tests for Solaris 10. The code is so old, that it does not represent current ZFS at all. Whatever. Regardless of what you say, it does show: . Which is faster, raidz, or a stripe of mirrors? . How much does raidz2 hurt

Re: [zfs-discuss] ZFS performance benchmarks in various configurations

2010-02-14 Thread Edward Ned Harvey
iozone -m -t 8 -T -O -r 128k -o -s 12G Actually, it seems that this is more than sufficient: iozone -m -t 8 -T -r 128k -o -s 4G Good news, cuz I kicked off the first test earlier today, and it seems like it will run till Wednesday. ;-) The first run, on a single disk, took 6.5 hrs,

Re: [zfs-discuss] ZFS performance benchmarks in various configurations

2010-02-18 Thread Edward Ned Harvey
://nedharvey.com/iozone_weezer/neds%20method/raw_results.zip From: Edward Ned Harvey [mailto:sola...@nedharvey.com] Sent: Saturday, February 13, 2010 9:07 AM To: opensolaris-disc...@opensolaris.org; zfs-discuss@opensolaris.org Subject: ZFS performance benchmarks in various

Re: [zfs-discuss] ZFS performance benchmarks in various configurations

2010-02-18 Thread Edward Ned Harvey
A most excellent set of tests. We could use some units in the PDF file though. Oh, hehehe. ;-) The units are written in the raw txt files. On your tests, the units were ops/sec, and in mine, they were Kbytes/sec. If you like, you can always grab the xlsx and modify it to your tastes, and

Re: [zfs-discuss] ZFS performance benchmarks in various configurations

2010-02-18 Thread Edward Ned Harvey
A most excellent set of tests. We could use some units in the PDF file though. Oh, by the way, you originally requested the 12G file to be used in benchmark, and later changed to 4G. But by that time, two of the tests had already completed on the 12G, and I didn't throw away those results,

Re: [zfs-discuss] ZFS performance benchmarks in various configurations

2010-02-19 Thread Edward Ned Harvey
/10 8:08 AM, Edward Ned Harvey sola...@nedharvey.com wrote: Ok, I've done all the tests I plan to complete. For highest performance, it seems: - The measure I think is the most relevant for typical operation is the fastest random read / write / mix. (Thanks Bob, for suggesting I do

Re: [zfs-discuss] ZFS performance benchmarks in various configurations

2010-02-20 Thread Edward Ned Harvey
ZFS has intelligent prefetching. AFAIK, Solaris disk drivers do not prefetch. Can you point me to any reference? I didn't find anything stating yay or nay, for either of these.

Re: [zfs-discuss] ZFS performance benchmarks in various configurations

2010-02-20 Thread Edward Ned Harvey
Doesn't this mean that if you enable write back, and you have a single, non-mirrored raid-controller, and your raid controller dies on you so that you lose the contents of the nvram, you have a potentially corrupt file system? It is understood, that any single point of failure could result

Re: [zfs-discuss] zfs sequential read performance

2010-02-24 Thread Edward Ned Harvey
I wonder if it is a real problem, ie, for example cause longer backup time, will it be addressed in future? It doesn't cause longer backup time, as long as you're doing a zfs send | zfs receive But it could cause longer backup time if you're using something like tar. The only way to solve it

Re: [zfs-discuss] zfs sequential read performance

2010-02-24 Thread Edward Ned Harvey
Once the famous bp rewriter is integrated and a defrag functionality built on top of it you will be able to re-arrange your data again so it is sequential again. Then again, this would also rearrange your data to be sequential again: cp -p somefile somefile.tmp ; mv -f somefile.tmp somefile

[zfs-discuss] How to disable ZIL and benchmark disk speed irresponsibly

2010-03-02 Thread Edward Ned Harvey
I have a system with a bunch of disks, and I'd like to know how much faster it would be if I had an SSD for the ZIL; however, I don't have the SSD and I don't want to buy one right now. The reasons are complicated, but it's not a cost barrier. Naturally I can't do the benchmark right now... But
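For context, the mechanism commonly cited in that era was the zil_disable tunable (an assumption about the tooling of the time; it was later superseded by the per-dataset sync property):

    # /etc/system -- takes effect at the next boot; for throwaway benchmarking only
    set zfs:zil_disable = 1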

Re: [zfs-discuss] Any way to fix ZFS sparse file bug #6792701

2010-03-03 Thread Edward Ned Harvey
I don't know the answer to your question, but I am running the same version of OS you are, and this bug could affect us. Do you have any link to any documentation about this bug? I'd like to forward something to inform the other admins at work. From: zfs-discuss-boun...@opensolaris.org

Re: [zfs-discuss] [osol-help] ZFS two way replication

2010-03-03 Thread Edward Ned Harvey
Sorry for double-post. This thread was posted separately to opensolaris-help and zfs-discuss. So I'm replying to both lists. I'm wondering what the possibilities of two-way replication are for a ZFS storage pool. Based on all the description you gave, I wouldn't call this two-way

Re: [zfs-discuss] [osol-discuss] Moving Storage to opensolaris+zfs. What about backup?

2010-03-04 Thread Edward Ned Harvey
Is there any work on an upgrade of zfs send/receive to handle resuming on next media? Please see Darren's post, pasted below. -Original Message- From: opensolaris-discuss-boun...@opensolaris.org [mailto:opensolaris- discuss-boun...@opensolaris.org] On Behalf Of Darren Mackay

Re: [zfs-discuss] [osol-discuss] Moving Storage to opensolaris+zfs. What about backup?

2010-03-04 Thread Edward Ned Harvey
Is there any work on an upgrade of zfs send/receive to handle resuming on next media? See Darren's post, regarding mkfifo. The purpose is to enable you to use normal backup tools that support changing tapes, to backup your zfs send to multiple split tapes. I wonder though - During a restore,

[zfs-discuss] WriteBack versus SSD-ZIL

2010-03-05 Thread Edward Ned Harvey
In this email, when I say PERC, I really mean either a PERC, or any other hardware WriteBack buffered raid controller with BBU. For future server purchases, I want to know which is faster: (a) A bunch of hard disks with PERC and WriteBack enabled, or (b) A bunch of hard disks, plus one SSD

[zfs-discuss] Monitoring my disk activity

2010-03-06 Thread Edward Ned Harvey
Recently, I'm benchmarking all kinds of stuff on my systems. And one question I can't intelligently answer is what blocksize I should use in these tests. I assume there is something which monitors present disk activity, that I could run on my production servers, to give me some statistics of
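Two standard observability commands that report per-vdev and per-device activity while a workload runs (a starting point only, not the thread's eventual answer; the pool name is a placeholder):

    zpool iostat -v mypool 5   # per-vdev read/write operations and bandwidth, every 5 seconds
    iostat -xn 5               # per-device throughput and service times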

Re: [zfs-discuss] [osol-discuss] WriteBack versus SSD-ZIL

2010-03-06 Thread Edward Ned Harvey
From everything I've seen, an SSD wins simply because it's 20-100x the size. HBAs almost never have more than 512MB of cache, and even fancy SAN boxes generally have 1-2GB max. So, HBAs are subject to being overwhelmed with heavy I/O. The SSD ZIL has a much better chance of being able to

Re: [zfs-discuss] zpool on sparse files

2010-03-06 Thread Edward Ned Harvey
You are running into this bug: http://bugs.opensolaris.org/bugdatabase/view_bug.do?bug_id=6929751 Currently, building a pool from files is not fully supported. I think Cindy and I interpreted the question differently. If you want the zpool inside a file to stay mounted while the system is

Re: [zfs-discuss] Monitoring my disk activity

2010-03-08 Thread Edward Ned Harvey
It all depends on how they are connecting to the storage. iSCSI, CIFS, NFS, database, rsync, ...? The reason I say this is because ZFS will coalesce writes, so just looking at iostat data (ops versus size) will not be appropriate. You need to look at the data flowing between ZFS and

Re: [zfs-discuss] terrible ZFS performance compared to UFS on ramdisk (70% drop)

2010-03-08 Thread Edward Ned Harvey
I don't have an answer to this question, but I can say, I've seen a similar surprising result. I ran iozone on various raid configurations of spindle disks ... and on a ramdisk. I was surprised to see the ramdisk is only about 50% to 200% faster than the next best competitor in each category ... I

Re: [zfs-discuss] backup zpool to tape

2010-03-10 Thread Edward Ned Harvey
In my case where I reboot the server I cannot get the pool to come back up. It shows UNAVAIL, I have tried to export before reboot and reimport it and have not been successful and I don't like this in the case a power issue of some sort happens. My other option was to mount using lofiadm

Re: [zfs-discuss] zfs send and receive ... any ideas for FEC?

2010-03-12 Thread Edward Ned Harvey
I don't think retransmissions of b0rken packets is a problem anymore, most people use ssh which provides good error detection at a fine grain. It is rare that one would need to resend an entire ZFS dump stream when using ssh (or TLS or ...) Archival tape systems are already designed to

Re: [zfs-discuss] When to Scrub..... ZFS That Is

2010-03-13 Thread Edward Ned Harvey
In addition to backups on tape, I like to backup my ZFS to removable hard disk. (Created a ZFS filesystem on removable disk, and zfs send | zfs receive onto the removable disk). But since a single hard disk is so prone to failure, I like to scrub my external disk regularly, just to verify the

Re: [zfs-discuss] Thoughts on ZFS Pool Backup Strategies

2010-03-17 Thread Edward Ned Harvey
The one thing that I keep thinking, and which I have yet to see discredited, is that ZFS file systems use POSIX semantics. So, unless you are using specific features (notably ACLs, as Paul Henson is), you should be able to backup those file systems using well known tools. This is

Re: [zfs-discuss] Thoughts on ZFS Pool Backup Strategies

2010-03-17 Thread Edward Ned Harvey
I think what you're saying is: Why bother trying to backup with zfs send when the recommended practice, fully supportable, is to use other tools for backup, such as tar, star, Amanda, bacula, etc. Right? The answer to this is very simple. #1 ... #2 ... Oh, one more thing. zfs send

Re: [zfs-discuss] Thoughts on ZFS Pool Backup Strategies

2010-03-17 Thread Edward Ned Harvey
Why do we want to adapt zfs send to do something it was never intended to do, and probably won't be adapted to do (well, if at all) anytime soon instead of optimizing existing technologies for this use case? The only time I see or hear of anyone using zfs send in a way it wasn't intended is

Re: [zfs-discuss] Thoughts on ZFS Pool Backup Strategies

2010-03-18 Thread Edward Ned Harvey
My own stuff is intended to be backed up by a short-cut combination -- zfs send/receive to an external drive, which I then rotate off-site (I have three of a suitable size). However, the only way that actually works so far is to destroy the pool (not just the filesystem) and recreate it from

Re: [zfs-discuss] Thoughts on ZFS Pool Backup Strategies

2010-03-18 Thread Edward Ned Harvey
From what I've read so far, zfs send is a block level api and thus cannot be used for real backups. As a result of being block level oriented, the Weirdo. The above cannot be used for real backups is obviously subjective, is incorrect and widely discussed here, so I just say weirdo. I'm tired

Re: [zfs-discuss] Thoughts on ZFS Pool Backup Strategies

2010-03-18 Thread Edward Ned Harvey
From what I've read so far, zfs send is a block level api and thus cannot be used for real backups. As a result of being block level oriented, the Weirdo. The above cannot be used for real backups is obviously subjective, is incorrect and widely discussed here, so I just say weirdo.

Re: [zfs-discuss] Thoughts on ZFS Pool Backup Strategies

2010-03-19 Thread Edward Ned Harvey
ZFS+CIFS even provides Windows Volume Shadow Services so that Windows users can do this on their own. I'll need to look into that, when I get a moment. Not familiar with Windows Volume Shadow Services, but having people at home able to do this directly seems useful. I'd like to spin

[zfs-discuss] ZFS+CIFS: Volume Shadow Services, or Simple Symlink?

2010-03-19 Thread Edward Ned Harvey
ZFS+CIFS even provides Windows Volume Shadow Services so that Windows users can do this on their own. I'll need to look into that, when I get a moment. Not familiar with Windows Volume Shadow Services, but having people at home able to do this directly seems useful. Even in

Re: [zfs-discuss] Thoughts on ZFS Pool Backup Strategies

2010-03-19 Thread Edward Ned Harvey
I'll say it again: neither 'zfs send' or (s)tar is an enterprise (or even home) backup system on their own; one or both can be components of the full solution. I would be pretty comfortable with a solution thusly designed: #1 A small number of external disks, zfs send onto the disks and

Re: [zfs-discuss] Thoughts on ZFS Pool Backup Strategies

2010-03-19 Thread Edward Ned Harvey
1. NDMP for putting zfs send streams on tape over the network. So Tell me if I missed something here. I don't think I did. I think this sounds like crazy talk. I used NDMP up till November, when we replaced our NetApp with a Solaris Sun box. In NDMP, to choose the source files, we had the

Re: [zfs-discuss] ZFS/OSOL/Firewire...

2010-03-19 Thread Edward Ned Harvey
It would appear that the bus bandwidth is limited to about 10MB/sec (~80Mbps) which is well below the theoretical 400Mbps that 1394 is supposed to be able to handle. I know that these two disks can go significantly higher since I was seeing 30MB/sec when they were used on Macs previously in

Re: [zfs-discuss] Thoughts on ZFS Pool Backup Strategies

2010-03-20 Thread Edward Ned Harvey
I'll say it again: neither 'zfs send' or (s)tar is an enterprise (or even home) backup system on their own; one or both can be components of the full solution. Up to a point. zfs send | zfs receive does make a very good back up scheme for the home user with a moderate amount of

Re: [zfs-discuss] Thoughts on ZFS Pool Backup Strategies

2010-03-20 Thread Edward Ned Harvey
5+ years ago the variety of NDMP that was available with the combination of NetApp's OnTap and Veritas NetBackup did backups at the volume level. When I needed to go to tape to recover a file that was no longer in snapshots, we had to find space on a NetApp to restore the volume. It could

Re: [zfs-discuss] Proposition of a new zpool property.

2010-03-21 Thread Edward Ned Harvey
That would add unnecessary code to the ZFS layer for something that cron can handle in one line. Actually ... Why should there be a ZFS property to share NFS, when you can already do that with share and dfstab? And still the zfs property exists. I think the proposed existence of a ZFS scrub

Re: [zfs-discuss] Proposition of a new zpool property.

2010-03-21 Thread Edward Ned Harvey
Most software introduced in Linux clearly violates the UNIX philosophy. Hehehe, don't get me started on OSX. ;-) And for the love of all things sacred, never say OSX is not UNIX. I made that mistake once. Which is not to say I was proven wrong or anything - but it's apparently a subject

Re: [zfs-discuss] Thoughts on ZFS Pool Backup Strategies

2010-03-21 Thread Edward Ned Harvey
The only tool I'm aware of today that provides a copy of the data, and all of the ZPL metadata and all the ZFS dataset properties is 'zfs send'. AFAIK, this is correct. Further, the only type of tool that can backup a pool is a tool like dd. How is it different to backup a pool, versus

Re: [zfs-discuss] ZFS+CIFS: Volume Shadow Services, or Simple Symlink?

2010-03-21 Thread Edward Ned Harvey
ln -s .zfs/snapshot snapshots Voila. All Windows or Mac or Linux or whatever users are able to easily access snapshots. Clever. Just one minor problem though, you've circumvented the reason why the snapdir property defaults to hidden. This probably won't affect clients that

Re: [zfs-discuss] Proposition of a new zpool property.

2010-03-21 Thread Edward Ned Harvey
Actually ... Why should there be a ZFS property to share NFS, when you can already do that with share and dfstab? And still the zfs property exists. Probably because it is easy to create new filesystems and clone them; as NFS only works per filesystem you need to edit dfstab every time

Re: [zfs-discuss] Proposition of a new zpool property.

2010-03-22 Thread Edward Ned Harvey
Does cron happen to know how many other scrubs are running, bogging down your IO system? If the scrub scheduling was integrated into zfs itself, It doesn't need to. Crontab entry: /root/bin/scruball.sh /root/bin/scruball.sh: #!/usr/bin/bash for filesystem in filesystem1 filesystem2

Re: [zfs-discuss] Proposition of a new zpool property.

2010-03-22 Thread Edward Ned Harvey
no, it is not a subdirectory it is a filesystem mounted on top of the subdirectory. So unless you use NFSv4 with mirror mounts or an automounter other NFS version will show you contents of a directory and not a filesystem. It doesn't matter if it is a zfs or not. Ok, I learned something

Re: [zfs-discuss] Proposition of a new zpool property.

2010-03-22 Thread Edward Ned Harvey
IIRC it's zpool scrub, and last time I checked, the zpool command exited (with status 0) as soon as it had started the scrub. Your command would start _ALL_ scrubs in paralell as a result. You're right. I did that wrong. Sorry 'bout that. So either way, if there's a zfs property for scrub,
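A hedged sketch of a sequential variant that waits for each scrub to finish before starting the next (pool names and the status text matched are assumptions):

    #!/usr/bin/bash
    # zpool scrub returns immediately, so poll zpool status for completion
    for pool in pool1 pool2 pool3; do
        zpool scrub "$pool"
        while zpool status "$pool" | grep -q "scrub in progress"; do
            sleep 300
        done
    done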

Re: [zfs-discuss] ZFS+CIFS: Volume Shadow Services, or Simple Symlink?

2010-03-22 Thread Edward Ned Harvey
Not being a CIFS user, could you clarify/confirm for me.. is this just a presentation issue, ie making a directory icon appear in a gooey windows explorer (or mac or whatever equivalent) view for people to click on? The windows client could access the .zfs/snapshot dir via typed pathname if

Re: [zfs-discuss] snapshots as versioning tool

2010-03-22 Thread Edward Ned Harvey
This may be a bit dimwitted since I don't really understand how snapshots work. I mean the part concerning COW (copy on write) and how it takes so little room. COW and snapshots are very simple to explain. Suppose you're chugging along using your filesystem, and then one moment, you tell the

Re: [zfs-discuss] Proposition of a new zpool property.

2010-03-22 Thread Edward Ned Harvey
In other words, there is no case where multiple scrubs compete for the resources of a single disk because a single disk only participates in one pool. Excellent point. However, the problem scenario was described as SAN. I can easily imagine a scenario where some SAN administrator created
