Re: [zfs-discuss] Opensolaris is apparently dead

2010-08-17 Thread Andrej Podzimek

I did not say there is something wrong about published reports. I often read
them. (Who doesn't?) However, there are no trustworthy reports on this topic
yet, since Btrfs is unfinished. Let's see some examples:

(1) http://www.phoronix.com/scan.php?page=article&item=zfs_ext4_btrfs&num=1


My little few yen in this massacre: Phoronix usually compares apples
with oranges and pigs with candies. So be careful.


Nobody said one should blindly trust Phoronix. ;-) In fact I clearly said the contrary. I 
mentioned the famous example of a totally absurd benchmark that used crippled 
and crashing code from the ZEN patchset to benchmark Reiser4.


Disclaimer: I use Reiser4


A Killer FS™. :-)


I had been using Reiser4 for quite a long time before Hans Reiser was convicted of the 
murder of his wife. There was absolutely no (objective technical) reason to make a change 
afterwards. :-) As far as speed is concerned, Reiser4 really is a Killer FS 
(in a very positive sense). It is now maintained by Edward Shishkin, a former Namesys 
employee. Patches are available for each kernel version. 
(http://www.kernel.org/pub/linux/kernel/people/edward/reiser4/reiser4-for-2.6/)
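
Applying them is the usual out-of-tree patch routine. A minimal sketch (the version 
number and patch file name below are only an example, so check the directory listing 
at the URL above):

cd linux-2.6.35
patch -p1 < ../reiser4-for-2.6.35.patch   # take the matching patch from the URL above
# then enable CONFIG_REISER4_FS, rebuild the kernel and create filesystems with
# mkfs.reiser4 (from reiser4progs)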

Admittedly, with the advent of Ext4 and Btrfs, Reiser4 is not so brilliant 
any more. Reiser4 could have been a much larger project with many features known from 
today's ZFS/Btrfs (encryption, compression and perhaps even snapshots and subvolumes), 
but long disputes around kernel integration and the events around Hans Reiser blocked the 
whole effort and Reiser4 lost its advantage.

Andrej





Re: [zfs-discuss] Opensolaris is apparently dead

2010-08-16 Thread Andrej Podzimek

Well, a typical conversation about speed and stability usually boils down
to this:

A: I've heard that XYZ is unstable and slow.
B: Are you sure? Have you tested XYZ? What are your benchmark results?
Have you had any issues?
A: No. I *have* *not* *tested* XYZ. I think XYZ is so unstable and slow
that it's not worth testing.


Yes indeed!

I can't afford to test everything carefully.  Like most people, I read
published reports and listen to conversations in places like this, and form
an impression of what performs how.

Then I do some testing to verify that something I'm seriously considering
produces satisfactory performance.  The key there is satisfactory; I'm
not looking for the best, I'm looking for something that fits in and is
satisfactory.

The more unusual my requirements, and the better defined, the less I can
gain from studying outside test reports.


My only point was: There is no published report saying that stability or *performance* of 
Btrfs will be worse (or better) than that of ZFS. This is because nobody can guess how 
Btrfs will perform once it's finished. (In fact nobody even knows *when* it is going to 
be finished. My guess was that it might not be considered experimental in one 
year's time, but that's just a shot in the dark.)

For that reason, spreading myths about stability & performance & maturity 
serves no purpose. (And this is what caused my (over)reaction.)

I did not say there is something wrong about published reports. I often read 
them. (Who doesn't?) However, there are no trustworthy reports on this topic 
yet, since Btrfs is unfinished. Let's see some examples:

(1) http://www.phoronix.com/scan.php?page=article&item=zfs_ext4_btrfs&num=1
(2) http://www.dhtusa.com/media/IOPerf_CMG09DHT.pdf

Based on (1), one could say with ease and confidence that Btrfs outperforms ZFS. 
Unfinished Btrfs versus a port of ZFS to FreeBSD -- that sounds fair, doesn't it? Well, 
in fact such a comparison is neither fair nor meaningful. Furthermore, 
benchmarks from Phoronix don't seem to have a good reputation... (See the P. S. for 
details.)

In (2), ZFS performs (much) better than (what will one day be) Btrfs. However, the 
results in (2) were obtained on a 2.6.30 kernel, which dates back to June 
2009... Nobody knows how the tested file systems would perform today.

Yes, Btrfs is still somewhat immature. Yes, Btrfs is not ready for serious 
deployments (right now, in August 2010). So it's way too soon to compare the 
stability and performance of Btrfs and ZFS.

Disclaimer: I use Reiser4, Ext4, ZFS, Btrfs and Ext3 (in this order of 
frequency) and I'm not an advocate of any of them.

Andrej


P. S. As far as Phoronix is concerned... Well, I remember how they once used a malfunctioning and crippled 
Reiser4 implementation (hacked by the people around the ZEN patchset so that it caused data corruption (!) 
and kernel crashes) and compared it to other file systems. (That foolish Reiser4 
benchmark can be found here: 
http://www.phoronix.com/scan.php?page=article&item=reiser4_benchmarks&num=1)





Re: [zfs-discuss] Opensolaris is apparently dead

2010-08-14 Thread Andrej Podzimek

3. Just stick with b134. Actually, I've managed to compile my way up to b142, 
but I'm having trouble getting beyond it - my attempts to install later 
versions just result in new boot environments with the old kernel, even with 
the latest pkg-gate code in place. Still, even if I get the latest code to 
install, it's not viable for the long term unless I'm willing to live with 
stasis.


I run build 146. There have been some heads-up messages on the topic. You need 
b137 or later in order to build b143 or later. Plus the latest packaging bits 
and other stuff. 
http://mail.opensolaris.org/pipermail/on-discuss/2010-June/001932.html

When compiling b146, it's good to read this first: 
http://mail.opensolaris.org/pipermail/on-discuss/2010-August/002110.html 
Instead of using the tagged onnv_146 code, you have to apply all the changesets 
up to 13011:dc5824d1233f.
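
In Mercurial terms that means roughly the following (the repository URL is from 
memory, so double-check it before cloning):

hg clone ssh://anon@hg.opensolaris.org/hg/onnv/onnv-gate
cd onnv-gate
hg update -r dc5824d1233f   # the changeset mentioned above, instead of the onnv_146 tag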
 

6. Abandon ZFS completely and go back to LVM/MD-RAID. I ran it for years before 
switching to ZFS, and it works - but it's a bitter pill to swallow after 
drinking the ZFS Kool-Aid.


Or Btrfs. It may not be ready for production now, but it could become a serious 
alternative to ZFS in one year's time or so. (I have been using it for some 
time with absolutely no issues, but some people (Edward Shishkin) say it has 
obvious bugs related to fragmentation.)
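
For the record, the Btrfs counterpart of an MD/LVM mirror boils down to something like 
this (device names are placeholders; a sketch, not a production recommendation):

mkfs.btrfs -m raid1 -d raid1 /dev/sdb /dev/sdc   # mirrored metadata and data, no MD/LVM layer underneath
mount /dev/sdb /mnt/data                         # any member device can be passed to mount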

Andrej





Re: [zfs-discuss] Opensolaris is apparently dead

2010-08-14 Thread Andrej Podzimek

From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Andrej Podzimek

Or Btrfs. It may not be ready for production now, but it could become a
serious alternative to ZFS in one year's time or so. (I have been using


I will much sooner pay for sol11 instead of using btrfs.  Stability & speed & 
maturity greatly outweigh a few hundred dollars a year, if you run your business on it.


Well, a typical conversation about speed and stability usually boils down to 
this:

A: I've heard that XYZ is unstable and slow.
B: Are you sure? Have you tested XYZ? What are your benchmark results? Have you 
had any issues?
A: No. I *have* *not* *tested* XYZ. I think XYZ is so unstable and slow that 
it's not worth testing.

It is true that the userspace utilities for Btrfs are immature. But nobody says 
Btrfs is ready for business deployments *right* *now*. I merely said it could 
become a serious alternative to ZFS in one year's time.

As far as stability is concerned, I haven't had any issues so far. Neither with 
ZFS, nor with Btrfs.

As far as performance is concerned, some people probably own a crystal ball. This explains their 
ability to guess whether Btrfs will outperform ZFS or not, once the first stable 
release of Btrfs is out. Unfortunately, I'm not a prophet. ;-) So I'll have to make a decision 
based on benchmarks and thorough testing on some of my machines, as soon as the first 
stable release of Btrfs is out.

Andrej





Re: [zfs-discuss] carrying on [was: Legality and the future of zfs...]

2010-07-19 Thread Andrej Podzimek

Ubuntu always likes to be on the edge even if btrfs is far from being
'stable' I would not want to run a release that does this. Servers need
stability and reliability. Btrfs is far from this.


Well, it seems to me that this is a well-known and very popular case of „circular 
reasoning“:

A: XYZ is far from stability and reliability.
B: Are you sure? Have you had any serious issues with XYZ? Are there any 
failure reports and statistics? What are you comparing XYZ with?
A: How can I be sure? I cannot give XYZ a try, because it is so far from 
stability and reliability...

I run ArchLinux with Btrfs and OpenSolaris with ZFS. I haven't had a serious 
issue with either of them so far. (Well, in fact I had one issue with OpenSolaris 
in QEMU, but that's a well-known story, probably not related to ZFS: 
http://www.neuhalfen.name/2009/08/05/OpenSolaris_KVM_and_large_IDE_drives/.)

As far as performance and features are concerned, I am perfectly satisfied with 
Btrfs. On the other hand, it still has quite a lot of issues that need to be 
dealt with. For example,

1) Btrfs does not have mature and user-friendly command-line tools. 
AFAIK, you can only list your snapshots and subvolumes by grep'ing the tree 
dump. ;-) (See the short comparison sketched below this list.)
2) there are still bugs that *must* be fixed before Btrfs can be 
seriously considered: 
http://www.mail-archive.com/linux-bt...@vger.kernel.org/msg05130.html
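
For comparison, the kind of listing one takes for granted with ZFS, versus what 
the situation currently looks like on the Btrfs side (the device name and grep 
pattern below are only placeholders for what "grep'ing the tree dump" means in 
practice):

zfs list -r -t snapshot rpool                # ZFS: snapshots are first-class citizens of the CLI
zfs list -r rpool                            # ...and so are datasets
btrfs-debug-tree /dev/sdb1 | grep -i root    # Btrfs: dump the trees and grep for the subvolume roots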

Undoubtedly, ZFS is currently much more mature and usable than Btrfs. However, 
Btrfs can evolve very quickly, considering the huge community around Linux. For 
example, EXT4 was first released in late 2006 and I first deployed it (with a 
stable on-disk format) in early 2009.

Andrej





Re: [zfs-discuss] carrying on

2010-07-19 Thread Andrej Podzimek


 ap> 2) there are still bugs that *must* be fixed before Btrfs can
 ap> be seriously considered:
 ap> http://www.mail-archive.com/linux-bt...@vger.kernel.org/msg05130.html

I really don't think that's a show-stopper.  He filled the disk with
2KB files.  HE FILLED THE DISK WITH 2KB FILES.


Well, if there was a 50% overhead, then fine. That can happen. 80%? All right, 
still good... But what actually happened does not seem acceptable to me.


It's more, ``you think you're so clever, but you're not, see?''  I'm
not saying not to fix it.  I'm saying it's not a show-stopper.


I'm not saying it's a showstopper. I just don't think anyone could seriously 
consider a production deployment before this is fixed.

Edward Shishkin is the maintainer and co-author of Reiser4, which has not been 
accepted into the kernel yet, despite the fact that many people have been using 
it successfully for years. (I am also one of the Reiser4 users and run it on 
some laptops I maintain.) So Edward's reaction is not surprising. ;-) It's like 
„hey! My stable filesystem stays out, but various experiments (EXT4, NILFS2, 
Btrfs, ...) are let in! How come?“

Andrej





Re: [zfs-discuss] Kernel panic on zpool status -v (build 143)

2010-06-28 Thread Andrej Podzimek

I ran 'zpool scrub' and will report what happens once it's finished. (It will 
take pretty long.)


The scrub finished successfully (with no errors) and 'zpool status -v' doesn't 
crash the kernel any more.

Andrej





[zfs-discuss] Kernel panic on zpool status -v (build 143)

2010-06-27 Thread Andrej Podzimek

Hello,

I got a zfs panic on build 143 (installed with onu) in the following unusual 
situation:

1) 'zpool scrub' found a corrupted snapshot on which two BEs were based.
2) I removed the first dependency with 'zfs promote'.
3) I removed the second dependency with 'zfs send -pv ... | zfs receive -v ...' 
(roughly as sketched after this list).
4) 'zfs destroy' failed with a 'dataset is busy' error when called on the old 
snapshot. So I rebooted.
5) After the reboot, the corrupted snapshot could be successfully 
destroyed.
6) One dataset and two other snapshots created on the way (in (3)) were 
removed.
7) Now 'zpool status -v' *crashed* the kernel.
8) After a reboot, 'zpool status -v' caused a crash again.
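
Roughly, with hypothetical dataset and snapshot names, steps (2)-(5) amount to the 
following (a sketch of what I typed, not a literal transcript):

zfs promote rpool/ROOT/be-a                                                # (2) remove the first clone's dependency on the bad snapshot
zfs send -pv rpool/ROOT/be-b@snap | zfs receive -v rpool/ROOT/be-b-copy   # (3) recreate the second BE without the dependency
zfs destroy rpool/ROOT/old-be@corrupted                                    # (4)/(5) kept failing with 'dataset is busy' until after a reboot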

I ran 'zpool scrub' and will report what happens once it's finished. (It will 
take pretty long.)

An mdb session output is attached to this message. I can provide the full crash 
dump if you wish. (As for the ::stack at the end, I'm not sure whether it's 
meaningful. This is (unfortunately) not a debugging kernel, so the first six 
arguments are passed in registers rather than on the stack, and the argument 
values shown may be inaccurate.)

Andrej
> ::status
debugging crash dump vmcore.5 (64-bit) from helium
operating system: 5.11 osnet143 (i86pc)
panic message: assertion failed: 0 == dmu_bonus_hold(os, object, dl, &dl->dl_dbuf) (0x0 == 0x16), file: ../../common/fs/zfs/dsl_deadlist.c, line: 80
dump content: kernel pages only


> ::msgbuf ! tail -21
panic[cpu4]/thread=ff02d59540a0: 
assertion failed: 0 == dmu_bonus_hold(os, object, dl, &dl->dl_dbuf) (0x0 == 
0x16), file: ../../common/fs/zfs/dsl_deadlist.c, line: 80


ff00106a0a50 genunix:assfail3+c1 ()
ff00106a0ad0 zfs:dsl_deadlist_open+ef ()
ff00106a0b80 zfs:dsl_dataset_get_ref+14c ()
ff00106a0bc0 zfs:dsl_dataset_hold_obj+2d ()
ff00106a0c20 zfs:dsl_dsobj_to_dsname+73 ()
ff00106a0c40 zfs:zfs_ioc_dsobj_to_dsname+23 ()
ff00106a0cc0 zfs:zfsdev_ioctl+176 ()
ff00106a0d00 genunix:cdev_ioctl+45 ()
ff00106a0d40 specfs:spec_ioctl+5a ()
ff00106a0dc0 genunix:fop_ioctl+7b ()
ff00106a0ec0 genunix:ioctl+18e ()
ff00106a0f10 unix:brand_sys_sysenter+1c9 ()

syncing file systems...
 done
dumping to /dev/zvol/dsk/rpool/dump, offset 65536, content: kernel
NOTICE: ahci0: ahci_tran_reset_dport port 0 reset port


> ff02d59540a0::whatis
ff02d59540a0 is allocated as a thread structure


> ff02d59540a0::print kthread_t t_procp | ::print proc_t p_user.u_psargs
p_user.u_psargs = [ zpool status -v rpool ]


> ::stack
vpanic()
assfail3+0xc1(f7a2dff0, 0, f7a2e050, 16, f7a2e028, 50)
dsl_deadlist_open+0xef(ff02f43dd7f0, ff02cff74080, 0)
dsl_dataset_get_ref+0x14c(ff02d2ebacc0, 1b, f7a2865c, 
ff00106a0bd8)
dsl_dataset_hold_obj+0x2d(ff02d2ebacc0, 1b, f7a2865c, 
ff00106a0bd8)
dsl_dsobj_to_dsname+0x73(ff02f5f44000, 1b, ff02f5f44400)
zfs_ioc_dsobj_to_dsname+0x23(ff02f5f44000)
zfsdev_ioctl+0x176(b6, 5a25, 8042130, 13, ff02dae06460, 
ff00106a0de4)
cdev_ioctl+0x45(b6, 5a25, 8042130, 13, ff02dae06460, 
ff00106a0de4)
spec_ioctl+0x5a(ff02d5fd7900, 5a25, 8042130, 13, ff02dae06460, 
ff00106a0de4)
fop_ioctl+0x7b(ff02d5fd7900, 5a25, 8042130, 13, ff02dae06460, 
ff00106a0de4)
ioctl+0x18e(3, 5a25, 8042130)
_sys_sysenter_post_swapgs+0x149()





[zfs-discuss] Accessing a zpool from another system

2010-06-26 Thread Andrej Podzimek

Hello,

I have a problem after accessing a zpool containing a boot environment from 
another system. When the zpool is accessed (imported, mounted and exported 
again) by another system, the device addresses stored in its metadata are 
overwritten. Consequently, it is not bootable any more and causes a kernel 
panic when booted. The only solution is to boot from a live CD and do a zpool 
import/export in exactly the same hardware environment, so that the correct 
device addresses are restored again.
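
The live CD step is nothing more than this (assuming the root pool is called rpool):

zpool import -f -R /mnt rpool   # -f because the pool was last accessed elsewhere; -R keeps it off the live system's mountpoints
zpool export rpool              # the import/export cycle rewrites the device paths recorded in the pool's labels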

Can this be avoided? I need the zpool to be both directly bootable and 
accessible from another (virtual) machine. How can a zpool be imported and 
mounted without changing the device addresses stored in its metadata? I know 
those hardware characteristics could be important in RAIDZ scenarios, but this 
is just one partition...

Andrej





[zfs-discuss] Detaching a clone from a snapshot

2010-06-25 Thread Andrej Podzimek

Hello,

Is it possible to detach a clone from its snapshot (and copy all its data 
physically)? I ran into an obscure situation where 'zfs promote' does not help.

Snapshot S has clones C1 and C2, both of which are boot environments. S has a 
data error that cannot be corrected. The error affects *one* crash dump file, 
so it's obviously benign. (The system crashed when a crash dump was being 
transferred from the dump device to /var/crash and this happened more than 
once. This is a nightly + onu system, so accidents might happen.)

If I understand it well, the original dependency graph looks like this

C1 <- S -> C2,

and I can only achieve one of the following with 'zfs promote':

C1 -> S -> C2
C1 <- S <- C2

I can't get rid of S (and of the error message in zpool status) without 
removing either C1 or C2. Is there a solution other than removing C1 or C2?
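
To illustrate what I mean by copying the data physically (dataset names are just 
placeholders):

zfs snapshot rpool/ROOT/C2@detach
zfs send rpool/ROOT/C2@detach | zfs receive rpool/ROOT/C2-standalone   # a full, non-incremental copy with no origin snapshot

The copy would no longer depend on S, so the old C2 and S could then be destroyed -- 
but it costs the full amount of disk space and the boot environment entries would 
have to be fixed up by hand.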

Andrej


