[zfs-discuss] zfs snapshot: cannot snapshot, dataset is busy

2006-06-15 Thread Jürgen Keil
http://www.opensolaris.org/jive/thread.jspa?messageID=36229#36229 The problem is back, on a different system: a laptop running on-20060605 bits. Compared to snv_29, the error message has improved, though: # zfs snapshot hdd/[EMAIL PROTECTED] cannot snapshot 'hdd/[EMAIL PROTECTED]': dataset is

[zfs-discuss] Re: nevada_41 and zfs disk partition

2006-06-20 Thread Jürgen Keil
What throughput do you get for the full untar (untarred size / elapsed time) ? # tar xf thunderbird-1.5.0.4-source.tar 2.77s user 35.36s system 33% cpu 1:54.19 260M/114 =~ 2.28 MB/s on this IDE disk IDE disk? Maybe it's this sparc ide/ata driver issue: Bug ID: 6421427 Synopsis: netra x1
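The quoted figure can be checked with a one-liner; the 260 MB / 114 s numbers come from the message above, and the awk invocation is just an illustration:

```shell
# Average untar throughput: untarred size divided by elapsed wall time.
# 260 MB written in 1:54.19, i.e. roughly 114 seconds.
awk 'BEGIN { printf "%.2f MB/s\n", 260 / 114 }'
```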

[zfs-discuss] snv_46: hangs when using zvol swap and the system is low on free memory ?

2006-08-07 Thread Jürgen Keil
I've tried to use dmake lint on on-src-20060731, and was running out of swap on my Tecra S1 laptop, 32-bit x86, 768MB main memory, with a 512MB swap slice. The FULL KERNEL: global crosschecks: lint run consumes lots (~800MB) of space in /tmp, so the system was running out of swap space. To fix

[zfs-discuss] zfs panic: assertion failed: zp_error != 0 dzp_error != 0

2006-09-04 Thread Jürgen Keil
I made some powernow experiments on a dual core amd64 box, running the 64-bit debug on-20060828 kernel. At some point the kernel seemed to make no more progress (probably a bug in the multiprocessor powernow code), the gui was stuck, so I typed (blind) F1-A + $systemdump. Writing the crashdump

[zfs-discuss] Re: Re: Re: ZFS forces system to paging to the point it is

2006-09-07 Thread Jürgen Keil
We are trying to obtain a mutex that is currently held by another thread trying to get memory. Hmm, reminds me a bit of the zvol swap hang I got some time ago: http://www.opensolaris.org/jive/thread.jspa?threadID=11956tstart=150 I guess if the other thread is stuck trying to get memory, then

[zfs-discuss] Re: [Blade 150] ZFS: extreme low performance

2006-09-15 Thread Jürgen Keil
The disks in that Blade 100, are these IDE disks? The performance problem is probably bug 6421427: http://bugs.opensolaris.org/bugdatabase/view_bug.do?bug_id=6421427 A fix for the issue was integrated into the Opensolaris 20060904 source drop (actually closed binary drop):

[zfs-discuss] Re: Re: ZFS hangs systems during copy

2006-10-27 Thread Jürgen Keil
Sounds familiar. Yes it is a small system a Sun blade 100 with 128MB of memory. Oh, 128MB... Btw, does anyone know if there are any minimum hardware (physical memory) requirements for using ZFS? It seems as if ZFS wasn't tested that much on machines with 256MB (or less)

[zfs-discuss] Re: zpool snapshot fails on unmounted filesystem

2006-10-27 Thread Jürgen Keil
I just retried to reproduce it to generate a reliable test case. Unfortunately, I cannot reproduce the error message. So I really have no idea what might have caused it. I also had this problem 2-3 times in the past, but I cannot reproduce it.

[zfs-discuss] Re: Re: ZFS hangs systems during copy

2006-10-27 Thread Jürgen Keil
This is: 6483887 without direct management, arc ghost lists can run amok That seems to be a new bug? http://bugs.opensolaris.org does not yet find it. The fix I have in mind is to control the ghost lists as part of the arc_buf_hdr_t allocations. If you want to test out my fix, I can send

[zfs-discuss] zfs/fstyp slows down recognizing pcfs formatted floppies

2006-12-18 Thread Jürgen Keil
I've noticed that fstyp on a floppy media formatted with pcfs now needs somewhere between 30 - 100 seconds to find out that the floppy media is formatted with pcfs. E.g. on sparc snv_48, I currently observe this: % time fstyp /vol/dev/rdiskette0/nomedia pcfs 0.01u 0.10s 1:38.84 0.1% zfs's

[zfs-discuss] Re: ZFS related (probably) hangs due to memory exhaustion(?) with snv53

2007-01-05 Thread Jürgen Keil
Hmmm, so there is lots of evictable cache here (mostly in the MFU part of the cache)... could you make your core file available? I would like to take a look at it. Isn't this just like: 6493923 nfsfind on ZFS filesystem quickly depletes memory in a 1GB system Which was introduced in

[zfs-discuss] zfs legacy filesystem remounted rw: atime temporary off?

2007-02-05 Thread Jürgen Keil
I have my /usr filesystem configured as a zfs filesystem, using a legacy mountpoint. I noticed that the system boots with atime updates temporarily turned off (and doesn't record file accesses in the /usr filesystem): # df -h /usr Filesystem size used avail capacity Mounted on

[zfs-discuss] Re: ZFS and Firewire/USB enclosures

2007-03-20 Thread Jürgen Keil
I still haven't got any warm and fuzzy responses yet solidifying ZFS in combination with Firewire or USB enclosures. I was unable to use zfs (that is zpool create or mkfs -F ufs) on firewire devices, because scsa1394 would hang the system as soon as multiple concurrent write commands are

[zfs-discuss] Re: ZFS and UFS performance

2007-03-28 Thread Jürgen Keil
We are running Solaris 10 11/06 on a Sun V240 with 2 CPUS and 8 GB of memory. This V240 is attached to a 3510 FC that has 12 x 300 GB disks. The 3510 is configured as HW RAID 5 with 10 disks and 2 spares and it's exported to the V240 as a single LUN. We create iso images of our product in

[zfs-discuss] Re: gzip compression throttles system?

2007-05-03 Thread Jürgen Keil
I just had a quick play with gzip compression on a filesystem and the result was the machine grinding to a halt while copying some large (.wav) files to it from another filesystem in the same pool. The system became very unresponsive, taking several seconds to echo keystrokes. The box is a

[zfs-discuss] Re: Re: gzip compression throttles system?

2007-05-03 Thread Jürgen Keil
The reason you are busy computing SHA1 hashes is you are using /dev/urandom. The implementation of drv/random uses SHA1 for mixing, actually strictly speaking it is the swrand provider that does that part. Ahh, ok. So, instead of using dd reading from /dev/urandom all the time, I've now

[zfs-discuss] Re: Re: Re: gzip compression throttles system?

2007-05-04 Thread Jürgen Keil
A couple more questions here. ... What do you have zfs compression set to? The gzip level is tunable, according to zfs set, anyway: PROPERTY EDIT INHERIT VALUES compression YES YES on | off | lzjb | gzip | gzip-[1-9] I've used the default gzip compression level, that
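The gzip-[1-9] values correspond to the standard zlib levels (plain "gzip" means gzip-6). The size/CPU tradeoff between levels can be seen with the userland gzip command; the file name and contents here are arbitrary:

```shell
# Compare level 1 (fastest) with level 9 (best compression) on a sample file.
printf 'hello hello hello hello hello\n' > /tmp/sample.txt
gzip -1 -c /tmp/sample.txt | wc -c
gzip -9 -c /tmp/sample.txt | wc -c   # never larger than the -1 result
```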

[zfs-discuss] Re: Re: Re: gzip compression throttles system?

2007-05-04 Thread Jürgen Keil
Roch Bourbonnais wrote with recent bits ZFS compression is now handled concurrently with many CPUs working on different records. So this load will burn more CPUs and achieve its results (compression) faster. Is this done using the taskq's, created in spa_activate()?

[zfs-discuss] Re: Re: Re: gzip compression throttles system?

2007-05-04 Thread Jürgen Keil
A couple more questions here. ... You still have idle time in this lockstat (and mpstat). What do you get for a lockstat -A -D 20 sleep 30? Do you see anyone with long lock hold times, long sleeps, or excessive spinning? Hmm, I ran a series of lockstat -A -l ph_mutex -s 16 -D 20 sleep 5

[zfs-discuss] Re: Re: Re: gzip compression throttles system?

2007-05-07 Thread Jürgen Keil
A couple more questions here. [mpstat] CPU minf mjf xcal intr ithr csw icsw migr smtx srw syscl usr sys wt idl 0 0 0 3109 3616 316 196 5 17 48 45 245 0 85 0 15 1 0 0 3127 3797 592 217 4 17 63 46 176 0 84 0 15 CPU minf mjf xcal intr ithr csw icsw migr smtx srw syscl usr sys wt idl 0

[zfs-discuss] Re: Re: Re: gzip compression throttles system?

2007-05-07 Thread Jürgen Keil
with recent bits ZFS compression is now handled concurrently with many CPUs working on different records. So this load will burn more CPUs and achieve its results (compression) faster. So the observed pauses should be consistent with that of a load generating high system time. The

[zfs-discuss] Re: Re: Re: gzip compression throttles system?

2007-05-10 Thread Jürgen Keil
by Eric in build 59. It was pointed out by Jürgen Keil that using ZFS compression submits a lot of prio 60 tasks to the system task queues; this would clobber interactive performance. Actually the taskq spa_zio_issue / spa_zio_intr run at prio 99 (== maxclsyspri or MAXCLSYSPRI): http

[zfs-discuss] Re: Re: Lots of overhead with ZFS - what am I doing wrong?

2007-05-15 Thread Jürgen Keil
Would you mind also doing: ptime dd if=/dev/dsk/c2t1d0 of=/dev/null bs=128k count=1 to see the raw performance of underlying hardware. This dd command is reading from the block device, which might cache dataand probably splits requests into maxphys pieces (which happens to be 56K on an
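Assuming the 56 KB maxphys mentioned above, each 128 KB dd request through the block device would be broken into ceil(128/56) = 3 physical transfers:

```shell
# Pieces needed to satisfy a 128 KB request with a 56 KB maxphys limit.
awk 'BEGIN { bs = 128; maxphys = 56; print int((bs + maxphys - 1) / maxphys) }'
```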

[zfs-discuss] Deterioration with zfs performace and recent zfs bits?

2007-05-29 Thread Jürgen Keil
Has anyone else noticed a significant zfs performance deterioration when running recent opensolaris bits? My 32-bit / 768 MB Toshiba Tecra S1 notebook was able to do a full opensolaris release build in ~ 4 hours 45 minutes (gcc shadow compilation disabled; using an lzjb compressed zpool / zfs

[zfs-discuss] Re: Deterioration with zfs performance and recent zfs bits?

2007-06-01 Thread Jürgen Keil
I wrote Has anyone else noticed a significant zfs performance deterioration when running recent opensolaris bits? My 32-bit / 768 MB Toshiba Tecra S1 notebook was able to do a full opensolaris release build in ~ 4 hours 45 minutes (gcc shadow compilation disabled; using an lzjb compressed

[zfs-discuss] Re: Re: Deterioration with zfs performance and recent zfs bits?

2007-06-04 Thread Jürgen Keil
Patching zfs_prefetch_disable = 1 has helped It's my belief this mainly aids scanning metadata. my testing with rsync and yours with find (and seen with du ; zpool iostat -v 1 ) pans this out.. mainly tracked in bug 6437054 vdev_cache: wise up or die

[zfs-discuss] Re: Deterioration with zfs performance and recent zfs bits?

2007-06-04 Thread Jürgen Keil
I wrote Instead of compiling opensolaris for 4-6 hours, I've now used the following find / grep test using on-2007-05-30 sources: 1st test using Nevada build 60: % cd /files/onnv-2007-05-30 % repeat 10 /bin/time find usr/src/ -name *.[hc] -exec grep FooBar {} + This find + grep command

[zfs-discuss] Re: Re: Re: Deterioration with zfs performance and recent zfs bits?

2007-06-05 Thread Jürgen Keil
Hello Jürgen, Monday, June 4, 2007, 7:09:59 PM, you wrote: Patching zfs_prefetch_disable = 1 has helped It's my belief this mainly aids scanning metadata. my testing with rsync and yours with find (and seen with du ; zpool iostat -v 1 ) pans this out.. mainly tracked in bug

[zfs-discuss] Re: zfs compression - scale to multiple cpu ?

2007-06-18 Thread Jürgen Keil
i think i have read somewhere that zfs gzip compression doesn't scale well since the in-kernel compression isn't done multi-threaded. is this true - and if so - will this be fixed ? If you're writing lots of data, zfs gzip compression might not be a good idea for a desktop machine, because

[zfs-discuss] Re: ZFS usb keys

2007-06-26 Thread Jürgen Keil
I used a zpool on a usb key today to get some core files off a non-networked Thumper running S10U4 beta. Plugging the stick into my SXCE b61 x86 machine worked fine; I just had to 'zpool import sticky' and it worked ok. But when we attach the drive to a blade 100 (running s10u3), it sees

[zfs-discuss] Re: ZFS usb keys

2007-06-27 Thread Jürgen Keil
Shouldn't S10u3 just see the newer on-disk format and report that fact, rather than complain it is corrupt? Yep, I just tried it, and it refuses to zpool import the newer pool, telling me about the incompatible version. So I guess the pool format isn't the correct explanation for the Dick

[zfs-discuss] snv_70 - snv_66: ZPL_VERSION 2, File system version mismatch ....?

2007-07-20 Thread Jürgen Keil
Yesterday I was surprised because an old snv_66 kernel (installed as a new zfs rootfs) refused to mount. Error message was Mismatched versions: File system is version 2 on-disk format, which is incompatible with this software version 1! I tried to prepare that snv_66 rootfs when

Re: [zfs-discuss] Firewire zpool transport rejected fatal error, 6560174

2007-08-02 Thread Jürgen Keil
I think I have run into this bug, 6560174, with a firewire drive. And 6560174 might be a duplicate of 6445725 This message posted from opensolaris.org

Re: [zfs-discuss] Firewire zpool transport rejected fatal error, 6560174

2007-08-02 Thread Jürgen Keil
And 6560174 might be a duplicate of 6445725 I see what you mean. Unfortunately there does not look to be a work-around. Nope, no work-around. This is a scsa1394 bug; it has some issues when it is used from interrupt context. I have some source code diffs that are supposed to fix the

Re: [zfs-discuss] Firewire zpool transport rejected fatal error, 6560174

2007-08-03 Thread Jürgen Keil
Nope, no work-around. OK. Then I have 3 questions: 1) How do I destroy the pool that was on the firewire drive? (So that zfs stops complaining about it) Even if the drive is disconnected, it should be possible to zpool export it, so that the OS forgets about it and doesn't try to mount

Re: [zfs-discuss] Firewire zpool transport rejected fatal error, 6560174

2007-08-03 Thread Jürgen Keil
3) Can your code diffs be integrated into the OS on my end to use this drive, and if so, how? I believe the bug is still being worked on, right Jürgen ? The opensolaris sponsor process for fixing bug 6445725 seems to have gotten stuck. I pinged Alan P. on the state of that bug... This

Re: [zfs-discuss] Firewire zpool transport rejected fatal error, 6560174

2007-08-06 Thread Jürgen Keil
By coincidence, I spent some time dtracing 6560174 yesterday afternoon on b62, and these bugs are indeed duplicates. I never noticed 6445725 because my system wasn't hanging but as the notes say, the fix for 6434435 changes the problem, and instead the error that gets propagated back from

Re: [zfs-discuss] SiI 3114 Chipset on Syba Card - Solaris Hangs

2007-08-07 Thread Jürgen Keil
I'm running snv 65 and having an issue much like this: http://osdir.com/ml/solaris.opensolaris.help/2006-11/msg00047.html Bug 6414472? Has anyone found a workaround? You can try to patch my suggested fix for 6414472 into the ata binary and see if it helps:

Re: [zfs-discuss] ZFS boot: 3 smaller glitches with console,

2007-08-09 Thread Jürgen Keil
in my setup i do not install the ufsroot. i have 2 disks -c0d0 for the ufs install -c1d0s0 which is my zfs root i want to exploit my idea is to remove the c0d0 disk when the system will be ok Btw. if you're trying to pull the ufs disk c0d0 from the system, and physically move the zfs

Re: [zfs-discuss] Unremovable file in ZFS filesystem.

2007-08-09 Thread Jürgen Keil
I managed to create a link in a ZFS directory that I can't remove. # find . -print . ./bayes_journal find: stat() error ./bayes.lock.router.3981: No such file or directory ./user_prefs # ZFS scrub shows no problems in the pool. Now, this was probably caused when I was doing some

Re: [zfs-discuss] nv-69 install panics dell precision 670

2007-08-14 Thread Jürgen Keil
using hyperterm, I captured the panic message as: SunOS Release 5.11 Version snv_69 32-bit Copyright 1983-2007 Sun Microsystems, Inc. All rights reserved. Use is subject to license terms. panic[cpu0]/thread=fec1ede0: Can't handle mwait size 0 fec37e70 unix:mach_alloc_mwait+72

[zfs-discuss] EOF broken on zvol raw devices?

2007-08-23 Thread Jürgen Keil
I tried to copy an 8GB Xen domU disk image from a zvol device to an image file on an ufs filesystem, and was surprised that reading from the zvol character device doesn't detect EOF. On snv_66 (sparc) and snv_73 (x86) I can reproduce it, like this: # zfs create -V 1440k tank/floppy-img # dd
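For comparison, a sketch of the behavior one would expect, run against a plain 1440 KB file instead of the zvol raw device (the paths here are scratch placeholders):

```shell
# Create a 1440 KB image, then try to read 2000 blocks from it:
# dd should stop at EOF after exactly 1440 blocks.
dd if=/dev/zero of=/tmp/floppy.img bs=1024 count=1440 2>/dev/null
dd if=/tmp/floppy.img of=/dev/null bs=1024 count=2000 2>&1 | grep 'records in'
```

On the buggy zvol character device, the second read never reported EOF and kept returning data.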

Re: [zfs-discuss] EOF broken on zvol raw devices?

2007-08-23 Thread Jürgen Keil
I tried to copy an 8GB Xen domU disk image from a zvol device to an image file on an ufs filesystem, and was surprised that reading from the zvol character device doesn't detect EOF. On snv_66 (sparc) and snv_73 (x86) I can reproduce it, like this: # zfs create -V 1440k tank/floppy-img

Re: [zfs-discuss] EOF broken on zvol raw devices?

2007-08-23 Thread Jürgen Keil
I tried to copy an 8GB Xen domU disk image from a zvol device to an image file on an ufs filesystem, and was surprised that reading from the zvol character device doesn't detect EOF. I've filed bug 6596419...

Re: [zfs-discuss] EOF broken on zvol raw devices?

2007-08-27 Thread Jürgen Keil
I tried to copy an 8GB Xen domU disk image from a zvol device to an image file on an ufs filesystem, and was surprised that reading from the zvol character device doesn't detect EOF. I've filed bug 6596419... Requesting a sponsor for bug 6596419...

[zfs-discuss] Bug 6580715, panic: freeing free segment

2007-09-03 Thread Jürgen Keil
Yesterday I tried to clone a xen dom0 zfs root filesystem and hit this panic (probably Bug ID 6580715): System is running last week's opensolaris bits (but I'm also accessing the zpool using the xen snv_66 bits). files/s11-root-xen: is an existing version 1 zfs files/[EMAIL PROTECTED]: new

Re: [zfs-discuss] zfs boot doesn't support /usr on a separate partition.

2007-10-01 Thread Jürgen Keil
I would like confirm that Solaris Express Developer Edition 09/07 b70, you can't have /usr on a separate zfs filesystem because of broken dependencies. 1/ Part of the problem is that /sbin/zpool is linked to /usr/lib/libdiskmgt.so.1 Yep, in the past this happened on several occasions

Re: [zfs-discuss] ZFS Mountroot and Bootroot Comparison

2007-10-05 Thread Jürgen Keil
Regarding compression, if I am not mistaken, grub cannot access files that are compressed. There was a bug where grub was unable to access files on zfs that contained holes: Bug ID 6541114 Synopsis GRUB/ZFS fails to load files from a default compressed (lzjb) root

Re: [zfs-discuss] zfs: allocating allocated segment(offset=77984887808

2007-10-12 Thread Jürgen Keil
size=66560) how does one free

Re: [zfs-discuss] ZFS very slow under xVM

2007-11-02 Thread Jürgen Keil
I've got Solaris Express Community Edition build 75 (75a) installed on an Asus P5K-E/WiFI-AP (ip35/ICH9R based) board. CPU=Q6700, RAM=8Gb, disk=Samsung HD501LJ and (older) Maxtor 6H500F0. When the O/S is running on bare metal, ie no xVM/Xen hypervisor, then everything is fine. When

Re: [zfs-discuss] ZFS boot issues on older P3 system.

2008-06-30 Thread Jürgen Keil
I wanted to resurrect an old dual P3 system with a couple of IDE drives to use as a low power quiet NIS/DHCP/FlexLM server so I tried installing ZFS boot from build 90. Jun 28 16:09:19 zack scsi: [ID 107833 kern.warning] WARNING: /[EMAIL PROTECTED],0/[EMAIL PROTECTED],1/[EMAIL PROTECTED]

Re: [zfs-discuss] [caiman-discuss] swap dump on ZFS volume

2008-07-01 Thread Jürgen Keil
Mike Gerdts wrote By default, only kernel memory is dumped to the dump device. Further, this is compressed. I have heard that 3x compression is common and the samples that I have range from 3.51x - 6.97x. My samples are in the range 1.95x - 3.66x. And yes, I lost a few crash dumps on a box
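Given the quoted ratios, sizing a dump device for the worst case is simple arithmetic; the 8 GB kernel-memory figure below is a made-up example:

```shell
# Space needed for 8 GB of kernel memory at the worst observed ratio (1.95x).
awk 'BEGIN { printf "%.1f GB\n", 8 / 1.95 }'
```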

Re: [zfs-discuss] checksum errors on root pool after upgrade to snv_94

2008-07-18 Thread Jürgen Keil
I ran a scrub on a root pool after upgrading to snv_94, and got checksum errors: Hmm, after reading this, I started a zpool scrub on my mirrored pool, on a system that is running post snv_94 bits: It also found checksum errors # zpool status files pool: files state: DEGRADED status: One

Re: [zfs-discuss] checksum errors on root pool after upgrade to snv_94

2008-07-18 Thread Jürgen Keil
I ran a scrub on a root pool after upgrading to snv_94, and got checksum errors: Hmm, after reading this, I started a zpool scrub on my mirrored pool, on a system that is running post snv_94 bits: It also found checksum errors ... OTOH, trying to verify checksums with zdb -c didn't

Re: [zfs-discuss] checksum errors on root pool after upgrade to snv_94

2008-07-21 Thread Jürgen Keil
Bill Sommerfeld wrote: On Fri, 2008-07-18 at 10:28 -0700, Jürgen Keil wrote: I ran a scrub on a root pool after upgrading to snv_94, and got checksum errors: Hmm, after reading this, I started a zpool scrub on my mirrored pool, on a system that is running post snv_94 bits

Re: [zfs-discuss] checksum errors on root pool after upgrade to snv_94

2008-07-21 Thread Jürgen Keil
Rustam wrote: I'm living with this error for almost 4 months and probably have record number of checksum errors: # zpool status -xv pool: box5 ... errors: Permanent errors have been detected in the following files: box5:0x0 I've Sol 10 U5 though. I suspect that this

Re: [zfs-discuss] Moving ZFS root pool to different system breaks boot

2008-07-23 Thread Jürgen Keil
Recently, I needed to move the boot disks containing a ZFS root pool in an Ultra 1/170E running snv_93 to a different system (same hardware) because the original system was broken/unreliable. To my dismay, unlike with UFS, the new machine wouldn't boot: WARNING: pool 'root' could not be

Re: [zfs-discuss] checksum errors on root pool after upgrade to snv_94

2008-07-23 Thread Jürgen Keil
I wrote: Bill Sommerfeld wrote: On Fri, 2008-07-18 at 10:28 -0700, Jürgen Keil wrote: I ran a scrub on a root pool after upgrading to snv_94, and got checksum errors: Hmm, after reading this, I started a zpool scrub on my mirrored pool, on a system that is running post

Re: [zfs-discuss] error found while scubbing, how to fix it?

2008-08-21 Thread Jürgen Keil
I have OpenSolaris (snv_95) installed on my laptop (single sata disk) and today I upgraded my pool with: # zpool upgrade -V 11 -a and after I start a scrub into the pool with: # zpool scrub rpool # zpool status -vx NAME STATE READ WRITE CKSUM rpool

Re: [zfs-discuss] error found while scubbing, how to fix it?

2008-08-21 Thread Jürgen Keil
On 08/21/08 17:26, Jürgen Keil wrote: Looks like bug 6727872, which is fixed in build 96. http://bugs.opensolaris.org/view_bug.do?bug_id=6727872 that pool contains normal OpenSolaris mountpoints, Did you upgrade the opensolaris installation in the past? AFAIK the opensolaris upgrade

Re: [zfs-discuss] CF to SATA adapters for boot device

2008-08-25 Thread Jürgen Keil
W. Wayne Liauh wrote: If you are running B95, that may be the problem. I have no problem booting B93 ( previous builds) from a USB stick, but B95, which has a newer version of ZFS, does not allow me to boot from it ( the USB stick was of course recognized during installation of B95, just

Re: [zfs-discuss] CF to SATA adapters for boot device

2008-08-27 Thread Jürgen Keil
What Windows utility are you talking about? I have used the Sandisk utility program to remove the U3 Launchpad (which creates a permanent hsfs partition in the flash disk), but it does not help the problem. That's the problem, most usb sticks don't require any special software and just work

Re: [zfs-discuss] SIL3124 stability?

2008-09-25 Thread Jürgen Keil
The lock I observed happened inside the BIOS of the card after the main board BIOS jumped into the board BIOS. This was before any bootloader was involved. Is there a disk using a zpool with an EFI disk label? Here's a link to an old thread about systems hanging in BIOS POST when they

Re: [zfs-discuss] zpool import of bootable root pool renders it unbootable

2008-10-06 Thread Jürgen Keil
Cannot mount root on /[EMAIL PROTECTED],0/pci103c,[EMAIL PROTECTED],2/[EMAIL PROTECTED],0:a fstype zfs Is that physical device path correct for your new system? Or is this the physical device path (stored on-disk in the zpool label) from some other system? In this case you may be able to

Re: [zfs-discuss] zpool import of bootable root pool renders it unbootable

2008-10-13 Thread Jürgen Keil
Again, what I'm trying to do is to boot the same OS from physical drive - once natively on my notebook, the other time from within VirtualBox. There are two problems, at least. First is the bootpath as in VB it emulates the disk as IDE while booting natively it is sata. When I started

Re: [zfs-discuss] ZFS: unreliable for professional usage?

2009-02-09 Thread Jürgen Keil
bash-3.00# zfs mount usbhdd1 cannot mount 'usbhdd1': E/A-Fehler bash-3.00# Why is there an I/O error? Is there any information logged to /var/adm/messages when this I/O error is reported? E.g. timeout errors for the USB storage device?

Re: [zfs-discuss] ZFS snapshot splitting joining

2009-02-12 Thread Jürgen Keil
The problem was with the shell. For whatever reason, /usr/bin/ksh can't rejoin the files correctly. When I switched to /sbin/sh, the rejoin worked fine, the cksum's matched, ... The ksh I was using is: # what /usr/bin/ksh /usr/bin/ksh: Version M-11/16/88i SunOS 5.10 Generic

Re: [zfs-discuss] zfs on 32 bit?

2009-06-14 Thread Jürgen Keil
besides performance aspects, what are the cons of running zfs on 32 bit? The default 32 bit kernel can cache a limited amount of data ( 512MB) - unless you lower the kernelbase parameter. In the end the small cache size on 32 bit explains the inferior performance compared to the 64 bit kernel.

Re: [zfs-discuss] zfs on 32 bit?

2009-06-17 Thread Jürgen Keil
Not a ZFS bug. IIRC, the story goes something like this: a SMI label only works to 1 TByte, so to use more than 1 TByte, you need an EFI label. For older x86 systems -- those which are 32-bit -- you probably have a BIOS which does not handle EFI labels. This will become increasingly irritating

Re: [zfs-discuss] moving a disk between controllers

2009-06-17 Thread Jürgen Keil
I had a system with its boot drive attached to a backplane which worked fine. I tried moving that drive to the onboard controller and a few seconds into booting it would just reboot. In certain cases zfs is able to find the drive on the new physical device path (IIRC: the disk's devid

Re: [zfs-discuss] zfs on 32 bit?

2009-06-17 Thread Jürgen Keil
32 bit Solaris can use at most 2^31 as disk address; a disk block is 512bytes, so in total it can address 2^40 bytes. A SMI label found in Solaris 10 (update 8?) and OpenSolaris has been enhanced and can address 2TB but only on a 64 bit system. is what the problem is. so 32-bit
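The 1 TByte figure follows directly from the address width: 2^31 blocks of 512 bytes each is 2^40 bytes.

```shell
# 2^31 addressable blocks x 512 bytes/block = 2^40 bytes = 1 TByte.
echo $(( 2147483648 * 512 ))                    # total addressable bytes
echo $(( 2147483648 * 512 / 1099511627776 ))    # same value in TByte (2^40 bytes)
```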

Re: [zfs-discuss] Install and boot from USB stick?

2009-07-31 Thread Jürgen Keil
The GRUB menu is presented, no problem there, and then the opensolaris progress bar. But im unable to find a way to view any details on whats happening there. The progress bar just keep scrolling and scrolling. Press the ESC key; this should switch back from graphics to text mode and most

Re: [zfs-discuss] Install and boot from USB stick?

2009-07-31 Thread Jürgen Keil
I've found it only works for USB sticks up to 4GB :( If I tried a USB stick bigger than that, it didn't boot. Works for me on 8GB USB sticks. It is possible that the stick you've tried has some issues with the Solaris USB drivers, and needs to have one of the workarounds from the scsa2usb.conf

Re: [zfs-discuss] Install and boot from USB stick?

2009-07-31 Thread Jürgen Keil
Well, here is the error: ... usb stick reports(?) scsi error: medium may have changed ... That's strange. The media in a flash memory stick can't be changed - although most sticks report that they do have removable media. Maybe this stick needs one of the workarounds that can be enabled in

Re: [zfs-discuss] Install and boot from USB stick?

2009-07-31 Thread Jürgen Keil
How can i implement that change, after installing the OS? Or do I need to build my own livecd? Boot from the livecd, attach the usb stick, open a terminal window, pfexec bash starts a root shell, zpool import -f rpool should find and import the zpool from the usb stick. Mount the root

Re: [zfs-discuss] Install and boot from USB stick?

2009-08-01 Thread Jürgen Keil
Nah, that didnt seem to do the trick. After unmounting and rebooting, i get the same error msg from my previous post. Did you get these scsi error messages during installation to the usb stick, too? Another thing that confuses me: the unit attention / medium may have changed message is

Re: [zfs-discuss] Install and boot from USB stick?

2009-08-01 Thread Jürgen Keil
Are there any message with Error level: fatal ? Not that I know of, however, i can check. But im unable to find out what to change in grub to get verbose output rather than just the splashimage. Edit the grub commands, delete all splashimage, foreground and background lines, and delete

Re: [zfs-discuss] Install and boot from USB stick?

2009-08-02 Thread Jürgen Keil
No there was no error level fatal. Well, here is what I have tried since: a) I've tried to install a custom grub as described here: http://defect.opensolaris.org/bz/show_bug.cgi?id=4755#c28 With that in place, I just get the grub prompt. I've tried to zpool import -f rpool when this

Re: [zfs-discuss] Install and boot from USB stick?

2009-08-02 Thread Jürgen Keil
Does this give you anything? [url=http://bildr.no/view/460193][img]http://bildr.no/thumb/460193.jpeg[/img][/url] That looks like the zfs mountroot panic you get when the root disk was moved to a different physical location (e.g. different usb port). In this case the physical device path

Re: [zfs-discuss] Change physical path to a zpool.

2009-10-24 Thread Jürgen Keil
I have a functional OpenSolaris x64 system on which I need to physically move the boot disk, meaning its physical device path will change and probably its cXdX name. When I do this the system fails to boot ... How do I inform ZFS of the new path? ... Do I need to boot from the LiveCD

Re: [zfs-discuss] ZFS dedup issue

2009-11-03 Thread Jürgen Keil
So.. it seems that data is deduplicated, zpool has 54.1G of free space, but I can use only 40M. It's x86, ONNV revision 10924, debug build, bfu'ed from b125. I think I'm observing the same (with changeset 10936) ... I created a 2GB file, and a tank zpool on top of that file, with

Re: [zfs-discuss] ZFS dedup accounting reservations

2009-11-03 Thread Jürgen Keil
But: Isn't there an implicit expectation for a space guarantee associated with a dataset? In other words, if a dataset has 1GB of data, isn't it natural to expect to be able to overwrite that space with other data? Is there such a space guarantee for compressed or cloned zfs?

Re: [zfs-discuss] ZFS dedup accounting

2009-11-03 Thread Jürgen Keil
Well, then you could have more logical space than physical space, and that would be extremely cool, I think we already have that, with zfs clones. I often clone a zfs onnv workspace, and everything is deduped between zfs parent snapshot and clone filesystem. The clone (initially) needs no

Re: [zfs-discuss] I/O Read starvation

2010-01-09 Thread Jürgen Keil
I wasn't clear in my description; I'm referring to ext4 on Linux. In fact on a system with low RAM even the dd command makes the system horribly unresponsive. IMHO not having fairshare or timeslicing between different processes issuing reads is frankly unacceptable given a lame user

Re: [zfs-discuss] opensolaris fresh install lock up

2010-01-17 Thread Jürgen Keil
I just installed opensolaris build 130 which i downloaded from genunix. The install went fine and the first reboot after install seemed to work but when i powered down and rebooted fully, it locks up as soon as i log in. Hmm, seems you're asking in the wrong forum. Sounds more like a

Re: [zfs-discuss] opensolaris fresh install lock up

2010-01-17 Thread Jürgen Keil
in the build 130 announcement you can find this: 13540 Xserver crashes and freezes a system installed with LiveCD on bld 130 It is for sure this bug. This is ok, i can do most of what i need via ssh. I just wasn't sure if it was a bug or if i had done something wrong... i had tried

Re: [zfs-discuss] ZFS and 4kb sector Drives (All new western digital GREEN Drives?)

2010-03-27 Thread Jürgen Keil
It would be nice if the 32bit osol kernel support 48bit LBA Is already supported, for many years (otherwise disks with a capacity of 128GB or more could not be used with Solaris) ... (similar to linux, not sure if 32bit BSD supports 48bit LBA ), then the drive would probably work - perhaps later in

[zfs-discuss] zfs periodic writes on idle system [Re: Getting desktop to auto sleep]

2010-06-20 Thread Jürgen Keil
Why does zfs produce a batch of writes every 30 seconds on opensolaris b134 (5 seconds on a post b142 kernel), when the system is idle? On an idle OpenSolaris 2009.06 (b111) system, /usr/demo/dtrace/iosnoop.d shows no i/o activity for at least 15 minutes. The same dtrace test on an idle b134

Re: [zfs-discuss] zfs periodic writes on idle system [Re: Getting desktop to auto sleep]

2010-06-21 Thread Jürgen Keil
Why does zfs produce a batch of writes every 30 seconds on opensolaris b134 (5 seconds on a post b142 kernel), when the system is idle? It was caused by b134 gnome-terminal. I had an iostat running in a gnome-terminal window, and the periodic iostat output is written to a temporary file by

Re: [zfs-discuss] [OpenIndiana-discuss] format dumps the core

2010-10-31 Thread Jürgen Keil
- Original Message - ... r...@tos-backup:~# format Searching for disks...Arithmetic Exception (core dumped) This error also seems to occur on osol 134. Any idea what this might be? What stack backtrace is reported for that core dump (pstack core) ?

Re: [zfs-discuss] [OpenIndiana-discuss] format dumps the core

2010-11-06 Thread Jürgen Keil
r...@tos-backup:~# pstack /dev/rdsk/core core '/dev/rdsk/core' of 1217: format fee62e4a UDiv (4, 0, 8046c80, 80469a0, 8046a30, 8046a50) + 2a 08079799 auto_sense (4, 0, 8046c80, 0) + 281 ... Seems that one function call is missing in the back trace between auto_sense and UDiv, because UDiv