[zfs-discuss] ZFS doesn't notice errors in mirrored log device?

2010-11-12 Thread Alexander Skwar
Hello! I've got a Solaris 10 10/08 Sparc system and use ZFS pool version 15. I'm playing around a bit to make it break. I've created a mirrored Test pool using mirrored log devices:

# zpool create Test \
    mirror /dev/zvol/dsk/data/DiskNr1 /dev/zvol/dsk/data/DiskNr2 \
    log mirror
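The preview is cut off after "log mirror"; presumably two more zvols follow as the mirrored log. A complete invocation of that shape (DiskNr3 and DiskNr4 are hypothetical names, not taken from the message) would be:

# zpool create Test \
    mirror /dev/zvol/dsk/data/DiskNr1 /dev/zvol/dsk/data/DiskNr2 \
    log mirror /dev/zvol/dsk/data/DiskNr3 /dev/zvol/dsk/data/DiskNr4

zpool status Test should then show both the data mirror and the mirrored log vdev.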

Re: [zfs-discuss] Sliced iSCSI device for doing RAIDZ?

2010-09-24 Thread Alexander Skwar
Hello. 2010/9/24 Marty Scholes martyscho...@yahoo.com: ZFS will ensure integrity, even when the underlying device fumbles. Yes. When you mirror the iSCSI devices, be sure that they are configured in such a way that a failure on one iSCSI device does not imply a failure on the other iSCSI

Re: [zfs-discuss] Sliced iSCSI device for doing RAIDZ?

2010-09-24 Thread Alexander Skwar
Hello again! 2010/9/24 Gary Mills mi...@cc.umanitoba.ca: On Fri, Sep 24, 2010 at 12:01:35AM +0200, Alexander Skwar wrote: Yes. I was rather thinking about RAIDZ instead of mirroring. I was just using a simpler example. Understood. Like I just wrote, we're actually now going to use mirroring

[zfs-discuss] non-ECC Systems and ZFS for home users (was: Please warn a home user against OpenSolaris under VirtualBox under WinXP ; ))

2010-09-23 Thread Alexander Skwar
Hi. 2010/9/19 R.G. Keen k...@geofex.com: and last-generation hardware is very, very cheap. Yes, of course, it is. But, actually, is that a true statement? I've read that it's *NOT* advisable to run ZFS on systems which do NOT have ECC RAM. And those cheapo last-gen hardware boxes quite often

Re: [zfs-discuss] Sliced iSCSI device for doing RAIDZ?

2010-09-23 Thread Alexander Skwar
Hi! 2010/9/23 Gary Mills mi...@cc.umanitoba.ca: On Tue, Sep 21, 2010 at 05:48:09PM +0200, Alexander Skwar wrote: We're using ZFS via iSCSI on a S10U8 system. As the ZFS Best Practices Guide http://j.mp/zfs-bp states, it's advisable to use redundancy (i.e. RAIDZ, mirroring or whatnot

Re: [zfs-discuss] Doing ZFS rollback with preserving later created clones/snapshot?

2009-12-11 Thread Alexander Skwar
of the snapshot you want to roll back to - promote the clone. See 'zfs promote' for details. Jeff. On Fri, Dec 11, 2009 at 08:37:04AM +0100, Alexander Skwar wrote: Hi. Is it possible on Solaris 10 5/09 to roll back to a ZFS snapshot WITHOUT destroying later created clones or snapshots
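For readers browsing the archive: the promote-based recipe Jeff describes would look roughly like this (a sketch with hypothetical dataset names, since the originals are obscured in the preview):

# clone the snapshot you want to return to
$ sudo zfs clone rpool/ROOT@01 rpool/ROOT-rollback
# make the clone the origin of the lineage
$ sudo zfs promote rpool/ROOT-rollback

After the promote, the snapshots up to @01 belong to rpool/ROOT-rollback, while rpool/ROOT and any later snapshots or clones remain intact.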

Re: [zfs-discuss] Doing ZFS rollback with preserving later created clones/snapshot?

2009-12-11 Thread Alexander Skwar
Hi. On Fri, Dec 11, 2009 at 15:35, Ross Walker rswwal...@gmail.com wrote: On Dec 11, 2009, at 4:17 AM, Alexander Skwar alexanders.mailinglists+nos...@gmail.com wrote: Hello Jeff! Could you (or anyone else, of course *G*) please show me how? [...] Could you please be so kind as to show what

Re: [zfs-discuss] Doing ZFS rollback with preserving later created clones/snapshot?

2009-12-11 Thread Alexander Skwar
Hi! On Fri, Dec 11, 2009 at 15:55, Fajar A. Nugraha fa...@fajar.net wrote: On Fri, Dec 11, 2009 at 4:17 PM, Alexander Skwar alexanders.mailinglists+nos...@gmail.com wrote:

$ sudo zfs create rpool/rb-test
$ zfs list rpool/rb-test
NAME           USED  AVAIL  REFER  MOUNTPOINT
rpool/rb-test

[zfs-discuss] Doing ZFS rollback with preserving later created clones/snapshot?

2009-12-10 Thread Alexander Skwar
Hi. Is it possible on Solaris 10 5/09 to roll back to a ZFS snapshot WITHOUT destroying later created clones or snapshots? Example:

--($ ~)-- sudo zfs snapshot rpool/r...@01
--($ ~)-- sudo zfs snapshot rpool/r...@02
--($ ~)-- sudo zfs clone rpool/r...@02 rpool/ROOT-02
--($ ~)-- LC_ALL=C
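The example is cut off above. For context, a plain rollback refuses to cross newer snapshots, and the options that force it do so by destroying them, which is exactly what the question is trying to avoid. A minimal sketch (the rpool/ROOT names are hypothetical, since the originals are obscured in the preview):

# refused while the newer @02 snapshot exists:
--($ ~)-- sudo zfs rollback rpool/ROOT@01
# -r would destroy @02, but still fails here because @02 has a clone:
--($ ~)-- sudo zfs rollback -r rpool/ROOT@01
# -R goes through, destroying @02 and the ROOT-02 clone along with it:
--($ ~)-- sudo zfs rollback -R rpool/ROOT@01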

Re: [zfs-discuss] Another user loses his pool (10TB) in this case and 40 days work

2009-07-22 Thread Alexander Skwar
Hi. Good to know! But how do we deal with that on older systems, which don't have the patch applied, once it is out? Thanks, Alexander On Tuesday, July 21, 2009, George Wilson george.wil...@sun.com wrote: Russel wrote: OK. So do we have a zpool import --xtg 56574 mypoolname or help to do

Re: [zfs-discuss] What are the rollback tools?

2009-07-20 Thread Alexander Skwar
Hi. Hm, what are you actually referring to? On Mon, Jul 20, 2009 at 13:45, Ross no-re...@opensolaris.org wrote: That's the stuff. I think that is probably your best bet at the moment. I've not seen even a mention of an actual tool to do that, and I'd be surprised if we saw one this side of

Re: [zfs-discuss] permission problem using ZFS send and zfs receive across SSH

2009-07-16 Thread Alexander Skwar
Hi! On Thu, Jul 16, 2009 at 14:00, Cyril Ducrocq no-re...@opensolaris.org wrote: moreover i added on-the-fly compression using gzip. You can dump the gzip|gunzip if you use SSH's on-the-fly compression (ssh -C). ssh also uses gzip, so there won't be much difference. Regards,
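As a sketch, the two variants being compared would look like this (host, dataset and snapshot names are hypothetical):

# explicit gzip/gunzip in the pipeline
$ zfs send tank/fs@snap | gzip | ssh remotehost 'gunzip | zfs receive tank/fs-copy'
# letting ssh compress the transport instead
$ zfs send tank/fs@snap | ssh -C remotehost zfs receive tank/fs-copy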

Re: [zfs-discuss] Why is Solaris 10 ZFS performance so terrible?

2009-07-13 Thread Alexander Skwar
Bob, On Sun, Jul 12, 2009 at 23:38, Bob Friesenhahn bfrie...@simple.dallas.tx.us wrote: There has been no forward progress on the ZFS read performance issue for a week now. A 4X reduction in file read performance due to having read the file before is terrible, and of course the situation is

Re: [zfs-discuss] Why is Solaris 10 ZFS performance so terrible?

2009-07-13 Thread Alexander Skwar
Here's more useful output, with the number of files set to 6000, so that the data set is larger than the amount of RAM.

--($ ~)-- time sudo ksh zfs-cache-test.ksh
zfs create rpool/zfscachetest
Creating data file set (6000 files of 8192000 bytes) under /rpool/zfscachetest ...
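For reference, 6000 files of 8,192,000 bytes each come to about 49.2 GB (roughly 45.8 GiB), so the working set comfortably exceeds the RAM of the machines being discussed and cannot be served entirely from cache on a second read.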

[zfs-discuss] cannot receive new filesystem stream: invalid backup stream

2009-07-10 Thread Alexander Skwar
Hello. I'm trying to do zfs send -R from an S10 U6 Sparc system to a Solaris 10 U7 Sparc system. The filesystem in question is running version 1. Here's what I did:

$ fs=data/oracle ; snap=transfer.hot-b ; sudo zfs send -R $...@$snap | sudo rsh winds07-bge0 zfs create rpool/trans/winds00r/${fs%%/*}
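The preview cuts off mid-command; presumably a zfs receive follows on the remote side. A sketch of a recursive transfer of this shape, using the values assigned above (the receive -d form is an assumption, not necessarily what was actually run):

$ sudo zfs send -R data/oracle@transfer.hot-b | \
      sudo rsh winds07-bge0 zfs receive -d rpool/trans/winds00r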

[zfs-discuss] UCD-SNMP-MIB::dskAvail et al. not supported on ZFS?

2009-07-07 Thread Alexander Skwar
Hi. I've got a fully patched Solaris 10 U7 Sparc system, on which I enabled SNMP disk monitoring by adding these lines to the /etc/sma/snmp/snmpd.conf configuration file:

disk / 5%
disk /tmp 10%
disk /data 5%

That's supposed to mean that 5% available on / is critical, 10% on /tmp and

Re: [zfs-discuss] UCD-SNMP-MIB::dskAvail et al. not supported on ZFS?

2009-07-07 Thread Alexander Skwar
Hello Jörg! On Tue, Jul 7, 2009 at 13:53, Joerg Schilling joerg.schill...@fokus.fraunhofer.de wrote: Alexander Skwar alexanders.mailinglists+nos...@gmail.com wrote: Hi. I've got a fully patched Solaris 10 U7 Sparc system, on which I enabled SNMP disk monitoring by adding those lines

[zfs-discuss] UCD-SNMP-MIB::dskPercent not returned for ZFS filesystems?

2009-06-11 Thread Alexander Skwar
Hello. On a Solaris 10 10/08 (137137-09) Sparc system, I set up SMA to also return values for disk usage by adding the following to snmpd.conf:

disk / 5%
disk /tmp 10%
disk /apps 4%
disk /data 3%

/data and /apps are on ZFS. But when I do snmpwalk -v2c -c public 10.0.1.26
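The snmpwalk in the preview is cut off; the query presumably targets the UCD-SNMP-MIB disk table, along the lines of:

$ snmpwalk -v2c -c public 10.0.1.26 UCD-SNMP-MIB::dskTable
$ snmpwalk -v2c -c public 10.0.1.26 UCD-SNMP-MIB::dskPercent

The question is why ZFS-backed mount points such as /data and /apps do not show up (or show no values) in that table, while UFS filesystems do.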