Re: [zfs-discuss] ZFS percent busy vs zpool iostat

2011-01-12 Thread a . smith
Quoting Bob Friesenhahn bfrie...@simple.dallas.tx.us: What function is the system performing when it is so busy? The server's workload is SMTP mail, with the associated spam and virus scanning, plus serving maildir email via POP3 and IMAP. Wrong conclusion. I am not sure what

Re: [zfs-discuss] ZFS percent busy vs zpool iostat

2011-01-12 Thread a . smith
OK, I think I have found the biggest issue. The drives are 4k sector drives, and I wasn't aware of that. My fault, I should have checked this. I've had the disks for ages and they are sub-1TB, so I assumed they wouldn't be 4k drives... I will obviously have to address this, either by creating a pool
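(A minimal sketch of one way to rebuild the pool with 4k alignment on FreeBSD, assuming the gnop workaround and hypothetical device names ada0/ada1; ashift is fixed when the vdev is created, so the shim is only needed once:)

  # create a gnop shim that reports 4096-byte sectors (device names hypothetical)
  gnop create -S 4096 /dev/ada0
  # one 4k member is enough to force ashift=12 for the whole vdev
  zpool create tank mirror /dev/ada0.nop /dev/ada1
  zpool export tank
  gnop destroy /dev/ada0.nop
  zpool import tank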

Re: [zfs-discuss] zpool scalability and performance

2011-01-13 Thread a . smith
Basically, yes: in the circumstances you describe I think you need to add all the vdevs you require. You just have to consider what ZFS is able to do with the disks that you give it. If you have 4x mirrors to start with then all writes will be spread across all the disks (see the sketch below) and you will get nice
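(For illustration, growing a pool of mirrors by one vdev looks like this; pool and disk names are hypothetical:)

  # add another mirror vdev; writes then stripe across all top-level vdevs
  zpool add tank mirror c5t0d0 c5t1d0
  zpool status tank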

Re: [zfs-discuss] Drive i/o anomaly

2011-02-08 Thread a . smith
It is a 4k sector drive, but I thought ZFS recognised those drives and didn't need any special configuration...? 4k drives are a big problem for ZFS; much has been posted/written about it. Basically, if the 4k drives report 512 byte blocks, as almost all of them do, then ZFS does not detect
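(One way to see what ZFS actually decided is to check the pool's ashift; a sketch with a hypothetical pool name - ashift=9 means ZFS assumed 512-byte sectors, ashift=12 means proper 4k alignment:)

  # print the cached pool config and pick out the ashift value
  zdb -C tank | grep ashift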

Re: [zfs-discuss] ZFS send/recv initial data load

2011-02-16 Thread a . smith
On Feb 16, 2011, at 7:38 AM, whitetr6 at gmail.com wrote: My question is about the initial seed of the data. Is it possible to use a portable drive to copy the initial zfs filesystem(s) to the remote location and then make the subsequent incrementals over the network? If so, what would I
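(The usual pattern, sketched with hypothetical pool, dataset and path names: dump the full stream to a file on the portable drive, receive it at the remote site, then send only increments over the network:)

  # at the source: write the initial full stream to the portable drive
  zfs snapshot tank/data@seed
  zfs send tank/data@seed > /portable/data-seed.zfs

  # at the remote site: restore the seed from the drive
  zfs receive backup/data < /portable/data-seed.zfs

  # afterwards, over the network: send only the changes since the seed
  zfs snapshot tank/data@daily1
  zfs send -i tank/data@seed tank/data@daily1 | ssh remote zfs receive backup/data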

Re: [zfs-discuss] Solaris vs FreeBSD question

2011-05-18 Thread a . smith
Hi, I am using FreeBSD 8.2 in production with ZFS. Although I have had one issue with it in the past, I would recommend it and I consider it production ready. That said, if you can wait for FreeBSD 8.3 or 9.0 to come out (a few months away) you will get a better system, as these will

Re: [zfs-discuss] Monitoring disk seeks

2011-05-24 Thread a . smith
Hi, see the seeksize script at this URL: http://prefetch.net/articles/solaris.dtracetopten.html I've not used it, but it looks neat! cheers Andy.

Re: [zfs-discuss] ZFS, Oracle and Nexenta

2011-05-25 Thread a . smith
Still i wonder what Gartner means with Oracle monetizing on ZFS.. It simply means that Oracle want to make money from ZFS (as is normal for technology companies with their own technology). The reason this might cause uncertainty for ZFS is that maintaining or helping make the open source

Re: [zfs-discuss] Question on ZFS iSCSI

2011-06-01 Thread a . smith
Disk /dev/zvol/rdsk/pool/dcpool: 4295GB
Sector size (logical/physical): 512B/512B
Just to check, did you already try: zpool import -d /dev/zvol/rdsk/pool/ poolname ? thanks Andy.

Re: [zfs-discuss] zfs root eta?

2006-05-15 Thread Terry Smith
Hi there. Can a ZFS root participate in Live Upgrade; that is, does luupgrade understand a ZFS root? T Tabriz Leman wrote: All, For those who haven't already gone through the painful manual process of setting up a ZFS Root, Tim Foster has put together a script. It is available on his

[zfs-discuss] How to NOT mount a ZFS storage pool/ZFS file system?

2006-09-12 Thread David Smith
I currently have a system which has two ZFS storage pools. One of the pools is coming from a faulty piece of hardware. I would like to bring up our server mounting the storage pool which is okay and NOT mounting the one from the faulty hardware. Is there a simple way to NOT
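(Two common approaches, sketched with a hypothetical pool name; whether either fits depends on the state of the faulty pool:)

  # option 1: export the bad pool so it is not imported and mounted at boot
  zpool export badpool

  # option 2: keep it imported but prevent its filesystems from mounting
  zfs set mountpoint=none badpool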

[zfs-discuss] Re: Re: Corrupted LUN in RAIDZ group -- How to repair?

2006-09-14 Thread David Smith
I have run zpool scrub again, and I now see checksum errors again. Wouldn't the checksum errors have been fixed by the first zpool scrub? Can anyone recommend actions I should take at this point? Thanks, David
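(For reference, the usual cycle when chasing recurring checksum errors, with a hypothetical pool name: clear the counters, scrub again, and compare:)

  zpool clear testpool        # reset the per-device error counters
  zpool scrub testpool        # re-read and verify every block
  zpool status -v testpool    # check whether new checksum errors appear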

[zfs-discuss] Re: How to get new ZFS Solaris 10 U3 features going from Solaris 10 U2

2006-12-18 Thread David Smith
Thank you to everyone who has replied. It sounds like I have a few options with regards to upgrading, or just waiting and patching the current environment. David

[zfs-discuss] Help understanding some benchmark results

2007-01-11 Thread Chris Smith
G'day, all, So, I've decided to migrate my home server from Linux+swRAID+LVM to Solaris+ZFS, because it seems to hold much better promise for data integrity, which is my primary concern. However, naturally, I decided to do some benchmarks in the process, and I don't understand why the results

[zfs-discuss] Re: Help understanding some benchmark results

2007-01-11 Thread Chris Smith
What build/version of Solaris/ZFS are you using? Solaris 11/06. bash-3.00# uname -a SunOS nitrogen 5.10 Generic_118855-33 i86pc i386 i86pc bash-3.00# What block size are you using for writes in bonnie++? I find performance on streaming writes is better w/ larger writes. I'm afraid I

[zfs-discuss] Need guidance on RAID 5, ZFS, and RAIDZ on home file server

2007-05-08 Thread John Smith
Hello all, Spent the last several hours perusing the ZFS forums and some of the blog entries regarding ZFS. I have a couple of questions and am open to any hints, tips, or things to watch out for on implementation of my home file server. I'm building a file server consisting of an Asus P5WD2

[zfs-discuss] Re: Need guidance on RAID 5, ZFS, and RAIDZ on home file server

2007-05-08 Thread John Smith
The original thought was 3 of the drives as storage and one of the drives as parity, so that would yield around 1.4TB of usable storage. I hadn't given any thought to running 64 bit. This system is being built from the ground up. I guess in the back of my head I had assumed it would be 32

[zfs-discuss] Re: Need guidance on RAID 5, ZFS, and RAIDZ on home file server

2007-05-08 Thread John Smith
Sorry about that, the specific processor in question is the Pentium D 930 which supports 64 bit computing through the Extended Memory 64 Technology. It was my initial reaction to say I'd go with 32 bit computing because my general experience with 64-bit is Windows, Linux, and some FreeBSD.

[zfs-discuss] Re: Need guidance on RAID 5, ZFS, and RAIDZ on home file server

2007-05-09 Thread John Smith
Thanks for the continuing flow of information. I already have all of the equipment. I'm actually upgrading my main computer to a new Core 2 Duo setup which is why this hardware is going to the file server. I think I'm going to try a 64bit install using the four 500GB drives in a RAID-Z

[zfs-discuss] Re: snv63: kernel panic on import

2007-05-15 Thread Nigel Smith
I seem to have got the same core dump, in a different way. I had a zpool set up on an iscsi 'disk'. For details see: http://mail.opensolaris.org/pipermail/storage-discuss/2007-May/001162.html But after a reboot the iscsi target was no longer available, so the iscsi initiator could not provide the

[zfs-discuss] Re: Re: New zfs pr0n server :)))

2007-05-21 Thread Nigel Smith
the path) My sata drive is using the 'ahci' driver, connecting to the ICH7 chipset on the motherboard. And I have a scsi drive on an Adaptec card, plugged into a PCI slot. Thanks Nigel Smith
# format
Searching for disks...done
AVAILABLE DISK SELECTIONS:
0. c0t0d0 DEFAULT cyl 2229 alt 2 hd 255

[zfs-discuss] zpool status -v: machine readable format?

2007-07-03 Thread David Smith
I was wondering if anyone had a script to parse the zpool status -v output into a more machine-readable format? Thanks, David

[zfs-discuss] zfs list hangs if zfs send is killed (leaving zfs receive process)

2007-07-13 Thread David Smith
I was in the process of doing a large zfs send | zfs receive when I decided that I wanted to terminate the zfs send process. I killed it, but the zfs receive doesn't want to die... In the meantime my zfs list command just hangs. Here is the tail end of the truss output from a truss zfs

Re: [zfs-discuss] Again ZFS with expanding LUNs!

2007-07-13 Thread David Smith
I don't believe LUN expansion is quite yet possible under Solaris 10 (11/06). I believe this might make it into the next update but I'm not sure on that. Someone from Sun would need to comment on when this will make it into the production release of Solaris. I know this because I was working

Re: [zfs-discuss] zfs list hangs if zfs send is killed (leaving zfs receive process)

2007-07-13 Thread David Smith
Well, the zfs receive process finally died, and now my zfs list works just fine. If there is a better way to capture what is going on, please let me know and I can duplicate the hang. David

Re: [zfs-discuss] si3124 controller problem and fix (fwd)

2007-07-17 Thread Nigel Smith
You can see the status of the bug here: http://bugs.opensolaris.org/view_bug.do?bug_id=6566207 Unfortunately, it's showing no progress since 20th June. This fix really needs to be in place for S10u4 and snv_70. Thanks Nigel Smith

[zfs-discuss] General recommendations on raidz groups of different sizes

2007-07-18 Thread David Smith
What are your thoughts or recommendations on having a zpool made up of raidz groups of different sizes? Are there going to be performance issues? For example:
  pool: testpool1
 state: ONLINE
 scrub: none requested
config:
        NAME        STATE     READ

[zfs-discuss] ZFS and Oracle asynchronous I/O

2007-08-01 Thread Bob Smith
We have an Oracle 10.2.0.3 installation on a Sun T2000 logical domain. The virtual disk has a ZFS file system. When we try to create a tablespace we get these errors: WARNING: aiowait timed out 1 times WARNING: aiowait timed out 2 times WARNING: aiowait timed out 3 times ... Does

Re: [zfs-discuss] remove snapshots

2007-08-17 Thread David Smith
To list your snapshots: /usr/sbin/zfs list -H -t snapshot -o name Then you could use that in a for loop:
for i in `/usr/sbin/zfs list -H -t snapshot -o name` ; do
  echo Destroying snapshot: $i
  /usr/sbin/zfs destroy $i
done
The above would destroy all your snapshots. You could put a grep on
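(As a sketch of the grep variant hinted at above, with a hypothetical 'daily' pattern, so only matching snapshots are destroyed:)

  for i in `/usr/sbin/zfs list -H -t snapshot -o name | grep daily` ; do
    echo Destroying snapshot: $i
    /usr/sbin/zfs destroy $i
  done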

Re: [zfs-discuss] Please help! ZFS crash burn in SXCE b70!

2007-08-31 Thread Nigel Smith
Yes, I'm not surprised. I thought it would be a RAM problem. I always recommend a 'memtest' on any new hardware. Murphy's law predicts that you only have RAM problems on PCs that you don't test! Regards Nigel Smith

Re: [zfs-discuss] Please help! ZFS crash burn in SXCE b70!

2007-08-31 Thread Nigel Smith
Richard, thanks for the pointer to the tests in '/usr/sunvts', as this is the first I have heard of them. They look quite comprehensive. I will give them a trial when I have some free time. Thanks Nigel Smith
pmemtest - Physical Memory Test
ramtest  - Memory DIMMs (RAM) Test

Re: [zfs-discuss] MS Exchange storage on ZFS?

2007-09-12 Thread Nigel Smith
back to this forum, and on the 'Storage-discuss' forum where this sort of question is more usually discussed. Thanks Nigel Smith

Re: [zfs-discuss] Possible ZFS Bug - Causes OpenSolaris Crash

2007-10-13 Thread Nigel Smith
Please can you provide the source code for your test app? I would like to see if I can reproduce this 'crash'. Thanks Nigel

Re: [zfs-discuss] Possible ZFS Bug - Causes OpenSolaris Crash

2007-10-15 Thread Nigel Smith
upgrade to snv_70 or later. Regards, Nigel Smith

Re: [zfs-discuss] I/O write failures on non-replicated pool

2007-10-25 Thread Nigel Smith
/pipermail/onnv-notify/2007-October/012782.html Regards Nigel Smith

Re: [zfs-discuss] zfs corruption w/ sil3114 sata controllers

2007-10-30 Thread Nigel Smith
for a different hard disk controller card: http://mail.opensolaris.org/pipermail/storage-discuss/2007-September/003399.html Regards Nigel Smith

Re: [zfs-discuss] zfs corruption w/ sil3114 sata controllers

2007-10-30 Thread Nigel Smith
And are you seeing any error messages in '/var/adm/messages' indicating any failure on the disk controller card? If so, please post a sample back here to the forum.

Re: [zfs-discuss] zfs corruption w/ sil3114 sata controllers

2007-10-31 Thread Nigel Smith
(PCI-E) and SiI-3124 (PCI-X) devices. 2. The AHCI driver, which supports the Intel ICH6 and later devices, often found on motherboards. 4. The NV_SATA driver, which supports Nvidia ck804/mcp55 devices. Regards Nigel Smith

Re: [zfs-discuss] Help! ZFS pool is UNAVAILABLE

2008-01-01 Thread Nigel Smith
Presumably the labels are somehow confused, especially for your USB drives :-( Regards Nigel Smith

[zfs-discuss] Replacing a Lun (Raid 0) (Santricity)

2008-03-14 Thread David Smith
I would like advice about how to replace a raid 0 lun. The lun is basically a raid 0 lun built from a single-disk volume group/volume on our Flexline 380 unit. So every disk in the unit is a volume group/volume/lun mapped to the host. We then let ZFS do the raid. We have a lun now

Re: [zfs-discuss] Replacing a Lun (Raid 0) (Santricity)

2008-03-14 Thread David Smith
Additional information: It looks like perhaps the original drive is in use, and the hot spare is assigned but not in use; see below from zpool iostat:
raidz2    2.76T  4.49T      0      0  29.0K  18.4K
  c10t600A0B80001139967CF945E80E95d0      -      -      0

Re: [zfs-discuss] Replacing a Lun (Raid 0) (Santricity)

2008-03-14 Thread David Smith
Yes! That worked to get the spare back to an available state. Thanks! So that leaves me with trying to put together a recommended procedure to replace a failed lun/disk from our Flexline 380. Does anyone have a configuration in which they are using a RAID 0 lun, which they need to
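(A hedged outline of the replacement sequence being asked about, with placeholder device names; it assumes the new LUN shows up under a new name:)

  # swap the failed LUN for its replacement within the raidz2 vdev
  zpool replace testpool c10tOLDd0 c10tNEWd0
  # once the resilver completes, return the hot spare to the available state
  zpool detach testpool c10tSPAREd0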

Re: [zfs-discuss] lucreate error: Cannot determine the physical boot device ...

2008-04-08 Thread Terry Smith
Roman, I didn't think that we had live upgrade support for a zfs root filesystem yet. T Roman Morokutti wrote: # lucreate -n B85 Analyzing system configuration. Hi, after typing # lucreate -n B85 I get the following error: No name for current boot environment. INFORMATION: The

[zfs-discuss] Clone a disk, need to change pool_guid

2008-05-25 Thread John Smith
Hi folks, I use an iSCSI disk mounted onto a Solaris 10 server. I installed a ZFS file system into s2 of the disk. I exported the disk and cloned it on the iSCSI target. The clone is a perfect copy of the iSCSI LUN and therefore has the same zpool name and guid. My question is: is there

[zfs-discuss] ZFS sharing options for Windows

2008-05-28 Thread Craig Smith
Hello, I am fairly new to Solaris and ZFS. I am testing both out in a sandbox at work. I am playing with virtual machines running on a windows front-end that connects to a zfs back-end for its data needs. As far as I know my two options are sharesmb and shareiscsi for data sharing. I have a

Re: [zfs-discuss] ZFS sharing options for Windows

2008-05-30 Thread Craig Smith
Justin, thanks for the reply. In the environment I currently work in, the powers that be are almost completely anti-unix. Installing the nfs client on all machines would take a real good sales pitch. Nonetheless I am still playing with the client in our sandbox. As I install this on a test

[zfs-discuss] Filesystem for each home dir - 10,000 users?

2008-06-05 Thread Adam Smith
Hi All, I'm new to ZFS but I'm intrigued by the possibilities it presents. I'm told one of the greatest benefits is that, instead of setting quotas, each user can have their own 'filesystem' under a single pool. This is obviously great if you've got 10 users but what if you have 10,000? Are

[zfs-discuss] FW: please help with raid / failure / rebuild calculations

2008-07-15 Thread Ross Smith
bits vs bytes D'oh! again. It's a good job I don't do these calculations professionally. :-) Date: Tue, 15 Jul 2008 02:30:33 -0400 From: [EMAIL PROTECTED] To: [EMAIL PROTECTED] Subject: Re: [zfs-discuss] please help with raid / failure / rebuild calculations CC:

Re: [zfs-discuss] J4500 device renumbering

2008-07-15 Thread Ross Smith
It sounds like you might be interested to read up on Eric Schrock's work. I read today about some of the stuff he's been doing to bring integrated fault management to Solaris: http://blogs.sun.com/eschrock/entry/external_storage_enclosures_in_solaris His last paragraph is great to see, Sun

Re: [zfs-discuss] Supermicro AOC-SAT2-MV8 hang when drive removed

2008-07-28 Thread Ross Smith
File Browser is the name of the program that Solaris opens when you open Computer on the desktop. It's the default graphical file manager. It does eventually stop copying with an error, but it takes a good long while for ZFS to throw up that error, and even when it does, the pool doesn't

Re: [zfs-discuss] Supermicro AOC-SAT2-MV8 hang when drive removed

2008-07-28 Thread Ross Smith
snv_91. I downloaded snv_94 today so I'll be testing with that tomorrow. Date: Mon, 28 Jul 2008 09:58:43 -0700 From: [EMAIL PROTECTED] Subject: Re: [zfs-discuss] Supermicro AOC-SAT2-MV8 hang when drive removed To: [EMAIL PROTECTED] Which OS and revision? -- richard Ross wrote: Ok,

Re: [zfs-discuss] Supermicro AOC-SAT2-MV8 hang when drive removed

2008-07-29 Thread Ross Smith
A little more information today. I had a feeling that ZFS would continue quite some time before giving an error, and today I've shown that you can carry on working with the filesystem for at least half an hour with the disk removed. I suspect on a system with little load you could carry on

Re: [zfs-discuss] Supermicro AOC-SAT2-MV8 hang when drive removed

2008-07-30 Thread Ross Smith
I agree that device drivers should perform the bulk of the fault monitoring, however I disagree that this absolves ZFS of any responsibility for checking for errors. The primary goal of ZFS is to be a filesystem and maintain data integrity, and that entails both reading and writing data to

Re: [zfs-discuss] Supermicro AOC-SAT2-MV8 hang when drive removed

2008-07-31 Thread Ross Smith
PROTECTED] Subject: Re: [zfs-discuss] Supermicro AOC-SAT2-MV8 hang when drive removed To: [EMAIL PROTECTED] CC: zfs-discuss@opensolaris.org I was able to reproduce this in b93, but might have a different interpretation of the conditions. More below... Ross Smith wrote: A little more

Re: [zfs-discuss] Can I trust ZFS?

2008-08-01 Thread Ross Smith
Hey Brent, On the Sun hardware like the Thumper you do get a nice bright blue ready to remove led as soon as you issue the cfgadm -c unconfigure xxx command. On other hardware it takes a little more care, I'm labelling our drive bays up *very* carefully to ensure we always remove the right

Re: [zfs-discuss] Replacing the boot HDDs in x4500

2008-08-01 Thread Ross Smith
Sorry Ian, I was posting on the forum and missed the word 'disks' from my previous post. I'm still not used to Sun's mutant cross of a message board / mailing list. Ross Date: Fri, 1 Aug 2008 21:08:08 +1200 From: [EMAIL PROTECTED] To: [EMAIL PROTECTED] CC: zfs-discuss@opensolaris.org

Re: [zfs-discuss] are these errors dangerous

2008-08-03 Thread Ross Smith
Hi Matt, If it's all 3 disks, I wouldn't have thought it likely to be disk errors, and I don't think it's a ZFS fault as such. You might be better posting the question in the storage or help forums to see if anybody there can shed more light on this. Ross Date: Sun, 3 Aug 2008 16:48:03

Re: [zfs-discuss] Zpool import not working - I broke my pool...

2008-08-05 Thread Ross Smith
Just a thought, before I go and wipe this zpool, is there any way to manually recreate the /etc/zfs/zpool.cache file? Ross Date: Mon, 4 Aug 2008 10:42:43 -0600 From: [EMAIL PROTECTED] Subject: Re: [zfs-discuss] Zpool import not working - I broke my pool... To: [EMAIL PROTECTED]; [EMAIL

Re: [zfs-discuss] Zpool import not working - I broke my pool...

2008-08-05 Thread Ross Smith
PROTECTED]; zfs-discuss@opensolaris.org Ross Smith wrote: Just a thought, before I go and wipe this zpool, is there any way to manually recreate the /etc/zfs/zpool.cache file? Do you have a copy in a snapshot? ZFS for root is awesome! -- richard Ross Date: Mon, 4 Aug 2008 10:42:43 -0600

Re: [zfs-discuss] Zpool import not working - I broke my pool...

2008-08-06 Thread Ross Smith
Hmm... got a bit more information for you to add to that bug, I think. Zpool import also doesn't work if you have mirrored log devices and either one of them is offline. I created two ramdisks with:
# ramdiskadm -a rc-pool-zil-1 256m
# ramdiskadm -a rc-pool-zil-2 256m
And added them to the
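(The truncated step presumably adds the two ramdisks as a mirrored log; a sketch of that command, reusing the names from the post:)

  # attach both ramdisks as a mirrored separate log (ZIL) device
  zpool add rc-pool log mirror /dev/ramdisk/rc-pool-zil-1 /dev/ramdisk/rc-pool-zil-2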

Re: [zfs-discuss] FW: Supermicro AOC-SAT2-MV8 hang when drive removed

2008-08-15 Thread Ross Smith
Oh god no, I'm already learning three new operating systems; now is not a good time to add a fourth. Ross -- Windows admin now working with Ubuntu, OpenSolaris and ESX Date: Fri, 15 Aug 2008 10:07:31 -0500 From: [EMAIL PROTECTED] To: [EMAIL PROTECTED] Subject: Re: [zfs-discuss] FW: Supermicro

Re: [zfs-discuss] FW: Supermicro AOC-SAT2-MV8 hang when drive removed

2008-08-20 Thread Ross Smith
Without fail, cfgadm changes the status from disk to sata-port when I unplug a device attached to port 6 or 7, but most of the time unplugging disks 0-5 results in no change in cfgadm, until I also attach disk 6 or 7. That does seem inconsistent, or at least, it's not what I'd expect.

Re: [zfs-discuss] Best layout for 15 disks?

2008-08-22 Thread Ross Smith
Yup, you got it, and an 8 disk raid-z2 array should still fly for a home system :D I'm guessing you're on gigabit there? I don't see you having any problems hitting the bandwidth limit on it. Ross Date: Fri, 22 Aug 2008 11:11:21 -0700 From: [EMAIL PROTECTED] To: [EMAIL PROTECTED]

Re: [zfs-discuss] ZFS automatic snapshots 0.11 Early Access

2008-08-27 Thread Ross Smith
That sounds absolutely perfect Tim, thanks. Yes, we'll be sending these to other zfs filesystems, although I haven't looked at the send/receive part of your service yet. What I'd like to do is stage the send/receive as files on an external disk, and then receive them remotely from that.
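(Staging a send as a file works roughly like this, with hypothetical names and paths; the stored stream is only useful if it can later be received intact:)

  # stage the stream on the external disk...
  zfs send tank/home@snap1 > /external/home-snap1.zfs
  # ...and receive it later, possibly on another machine
  zfs receive tank/home-copy < /external/home-snap1.zfs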

Re: [zfs-discuss] Availability: ZFS needs to handle disk removal / driver failure better

2008-08-28 Thread Ross Smith
Hi guys, Bob, my thought was to have this timeout as something that can be optionally set by the administrator on a per pool basis. I'll admit I was mainly thinking about reads and hadn't considered the write scenario, but even having thought about that it's still a feature I'd like. After

Re: [zfs-discuss] Availability: ZFS needs to handle disk removal / driver failure better

2008-08-30 Thread Ross Smith
Triple mirroring you say? That'd be me then :D The reason I really want to get ZFS timeouts sorted is that our long term goal is to mirror that over two servers too, giving us a pool mirrored across two servers, each of which is actually a zfs iscsi volume hosted on triply mirrored disks.

Re: [zfs-discuss] EMC - top of the table for efficiency, how well would ZFS do?

2008-08-31 Thread Ross Smith
Hey Tim, I'll admit I just quoted the blog without checking, I seem to remember the sales rep I spoke to recommending putting aside 20-50% of my disk for snapshots. Compared to ZFS where I don't need to reserve any space it feels very old fashioned. With ZFS, snapshots just take up as much

Re: [zfs-discuss] EMC - top of the table for efficiency, how well would ZFS do?

2008-08-31 Thread Ross Smith
] To: [EMAIL PROTECTED] Subject: Re: [zfs-discuss] EMC - top of the table for efficiency, how well would ZFS do? CC: zfs-discuss@opensolaris.org On Sun, Aug 31, 2008 at 10:39 AM, Ross Smith [EMAIL PROTECTED] wrote: Hey Tim, I'll admit I just quoted the blog without checking, I seem

Re: [zfs-discuss] Availability: ZFS needs to handle disk removal / driver failure better

2008-09-02 Thread Ross Smith
Thinking about it, we could make use of this too. The ability to add a remote iSCSI mirror to any pool without sacrificing local performance could be a huge benefit. From: [EMAIL PROTECTED] To: [EMAIL PROTECTED] CC: [EMAIL PROTECTED]; zfs-discuss@opensolaris.org Subject: Re: Availability:

Re: [zfs-discuss] ZFS Mirrors braindead?

2008-10-07 Thread Ross Smith
Oh cool, that's great news. Thanks Eric. Date: Tue, 7 Oct 2008 11:50:08 -0700 From: [EMAIL PROTECTED] To: [EMAIL PROTECTED] CC: zfs-discuss@opensolaris.org Subject: Re: [zfs-discuss] ZFS Mirrors braindead? On Tue, Oct 07, 2008 at 11:42:57AM

[zfs-discuss] Change the volblocksize of a ZFS volume

2008-10-14 Thread Nick Smith
help. Nick Smith

Re: [zfs-discuss] Improving zfs send performance

2008-10-15 Thread Ross Smith
I'm using 2008-05-07 (latest stable), am I right in assuming that one is ok? Date: Wed, 15 Oct 2008 13:52:42 +0200 From: [EMAIL PROTECTED] To: [EMAIL PROTECTED]; zfs-discuss@opensolaris.org Subject: Re: [zfs-discuss] Improving zfs send performance

Re: [zfs-discuss] Improving zfs send performance

2008-10-15 Thread Ross Smith
Thanks, that got it working. I'm still only getting 10MB/s, so it hasn't solved my problem - I've still got a bottleneck somewhere - but mbuffer is a huge improvement over standard zfs send / receive. It makes such a difference when you can actually see what's going on.
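(The mbuffer pipeline being discussed looks roughly like this; host name, port and buffer sizes are hypothetical:)

  # receiver: listen on a TCP port with a large buffer
  mbuffer -I 9090 -m 1G | zfs receive tank/backup
  # sender: stream the snapshot through mbuffer to the receiver
  zfs send tank/data@snap | mbuffer -O receiver:9090 -m 1G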

Re: [zfs-discuss] Improving zfs send performance

2008-10-16 Thread Ross Smith
Try to separate the two things: (1) Try /dev/zero -> mbuffer --- network --- mbuffer -> /dev/null. That should give you wirespeed. I tried that already. It still gets just 10-11MB/s from this server. I can get zfs send / receive and mbuffer working at 30MB/s though from a couple of test servers

Re: [zfs-discuss] Improving zfs send performance

2008-10-16 Thread Ross Smith
Oh dear god. Sorry folks, it looks like the new hotmail really doesn't play well with the list. Trying again in plain text: Try to separate the two things: (1) Try /dev/zero -> mbuffer --- network --- mbuffer -> /dev/null. That should give you wirespeed. I tried that already. It still

Re: [zfs-discuss] HELP! SNV_97, 98, 99 zfs with iscsitadm and VMWare!

2008-10-16 Thread Nigel Smith
to show the iscsi session has dropped out, and the initiator is auto-retrying to connect to the target, but failing. It may help to get a packet capture at this stage to try to see why the logon is failing. Regards Nigel Smith

Re: [zfs-discuss] HELP! SNV_97, 98, 99 zfs with iscsitadm and VMWare!

2008-10-18 Thread Nigel Smith
Tano, based on the above, I would say you need unique GUIDs for the two separate Targets/LUNs. Best Regards Nigel Smith http://nwsmith.blogspot.com/

Re: [zfs-discuss] My 500-gig ZFS is gone: insufficient replicas, corrupted data

2008-10-19 Thread Nigel Smith
hard drives would not help, as the BIOS update may cause an identical problem with each drive.) Good Luck Nigel Smith

Re: [zfs-discuss] HELP! SNV_97, 98, 99 zfs with iscsitadm and VMWare!

2008-10-21 Thread Nigel Smith
could check that the Solaris iSCSI target works ok under stress from something other than ESX, like say the Windows iscsi initiator. Regards Nigel Smith

Re: [zfs-discuss] HELP! SNV_97, 98, 99 zfs with iscsitadm and VMWare!

2008-10-21 Thread Nigel Smith
. Following Eugene's report, I'm beginning to fear that some sort of regression has been introduced into the iscsi target code... Regards Nigel Smith

Re: [zfs-discuss] HELP! SNV_97, 98, 99 zfs with iscsitadm and VMWare!

2008-10-22 Thread Nigel Smith
the snv_93 and snv_97 iscsi target to work well with the VMware ESX and Microsoft initiators. So it is a surprise to see these problems occurring. Maybe some of the more recent builds, snv_98, 99, have 'fixes' that have caused the problem... Regards Nigel Smith

Re: [zfs-discuss] HELP! zfs with iscsitadm (on poweredge1900) and VMWare!

2008-10-22 Thread Nigel Smith
Hi Tano, I will have a look at your snoop file. (Tomorrow now, as it's late in the UK!) I will send you my email address. Thanks Nigel Smith

Re: [zfs-discuss] Disabling COMMIT at NFS level, or disabling ZIL on a per-filesystem basis

2008-10-23 Thread Ross Smith
No problem. I didn't use mirrored slogs myself, but that's certainly a step up for reliability. It's pretty easy to create a boot script to re-create the ramdisk and re-attach it to the pool too. So long as you use the same device name for the ramdisk you can add it each time with a simple
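(A hedged sketch of such a boot script, reusing the names from earlier in the thread; the single-argument form of zpool replace swaps a device for itself under the same name:)

  # recreate the ramdisk under the same name at boot...
  ramdiskadm -a rc-pool-zil-1 256m
  # ...then replace the now-empty log device with itself
  zpool replace rc-pool /dev/ramdisk/rc-pool-zil-1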

Re: [zfs-discuss] diagnosing read performance problem

2008-10-25 Thread Nigel Smith
the capture. You can then use Ethereal or Wireshark to analyze the capture file. On the 'Analyze' menu, select 'Expert Info'. This will look through all the packets and will report any warnings or errors it sees. Regards Nigel Smith

Re: [zfs-discuss] diagnosing read performance problem

2008-10-26 Thread Nigel Smith
Nigel Smith

Re: [zfs-discuss] HELP! zfs with iscsitadm (on poweredge1900) and VMWare!

2008-10-26 Thread Nigel Smith
'prstat' to see if it gives any clues. Presumably you are using ZFS as the backing store for iSCSI, in which case maybe try with a UFS-formatted disk to see if that is a factor. Regards Nigel Smith

Re: [zfs-discuss] My 500-gig ZFS is gone: insufficient replicas, corrupted data

2008-10-27 Thread Nigel Smith
. If you're using Solaris, maybe try 'prtvtoc'. http://docs.sun.com/app/docs/doc/819-2240/prtvtoc-1m?a=view (Unless someone knows a better way?) Thanks Nigel Smith
# prtvtoc /dev/rdsk/c1t1d0
* /dev/rdsk/c1t1d0 partition map
*
* Dimensions:
*     512 bytes/sector
*     1465149168 sectors
*     1465149101 accessible

Re: [zfs-discuss] My 500-gig ZFS is gone: insufficient replicas, corrupted data

2008-10-27 Thread Nigel Smith
/2007/660/onepager/ http://bugs.opensolaris.org/view_bug.do?bug_id=5044205 Regards Nigel Smith

Re: [zfs-discuss] zpool import problem

2008-10-27 Thread Nigel Smith
a 'zpool status') Thanks Nigel Smith

Re: [zfs-discuss] zpool import: all devices online but: insufficient replicas

2008-10-27 Thread Nigel Smith
'status' of your zpool on Server2? (You have not provided a 'zpool status') Thanks Nigel Smith

Re: [zfs-discuss] My 500-gig ZFS is gone: insufficient replicas, corrupted data

2008-10-27 Thread Nigel Smith
'smartctl' (fully) working with PATA and SATA drives on x86 Solaris. I've done a quick search on PSARC 2007/660: it was closed approved fast-track on 11/28/2007, but I could not find any code committed to 'onnv-gate' that references this case. Regards Nigel Smith

Re: [zfs-discuss] HELP! SNV_97, 98, 99 zfs with iscsitadm and VMWare!

2008-10-27 Thread Nigel Smith
Hi Tano, please check out my post on the storage-discuss forum for another idea to try which may give further clues: http://mail.opensolaris.org/pipermail/storage-discuss/2008-October/006458.html Best Regards Nigel Smith

Re: [zfs-discuss] diagnosing read performance problem

2008-10-27 Thread Nigel Smith
method used for this file is 98. Please can you check it out, and if necessary use a more standard compression algorithm? Download file size was 8,782,584 bytes. Thanks Nigel Smith

Re: [zfs-discuss] diagnosing read performance problem

2008-10-28 Thread Nigel Smith
, those are my thoughts and conclusion for now. Maybe you could get some more snoop captures with other clients, and with a different switch, and do a similar analysis. Regards Nigel Smith

Re: [zfs-discuss] HELP! SNV_97, 98, 99 zfs with iscsitadm and VMWare!

2008-10-29 Thread Nigel Smith
is closed source :-( Regards Nigel Smith

Re: [zfs-discuss] diagnosing read performance problem

2008-10-29 Thread Nigel Smith
be interesting to do two separate captures - one on the client and one on the server, at the same time - as this would show if the switch was causing disruption. Try to have the clocks on the client and server synchronised as closely as possible. Thanks Nigel Smith

Re: [zfs-discuss] diagnosing read performance problem

2008-10-29 Thread Nigel Smith
' for the network card, just in case it turns out to be a driver bug. Regards Nigel Smith

Re: [zfs-discuss] diagnosing read performance problem

2008-10-30 Thread Nigel Smith
any good while that is happening. I think you need to try a different network card in the server. Regards Nigel Smith

Re: [zfs-discuss] questions on zfs send,receive,backups

2008-11-03 Thread Ross Smith
Snapshots are not replacements for traditional backup/restore features. If you need the latter, use what is currently available on the market. -- richard I'd actually say snapshots do a better job in some circumstances. Certainly they're being used that way by the desktop team:

Re: [zfs-discuss] questions on zfs send,receive,backups

2008-11-03 Thread Ross Smith
If the file still existed, would this be a case of redirecting the file's top level block (dnode?) to the one from the snapshot? If the file had been deleted, could you just copy that one block? Is it that simple, or is there a level of interaction between files and snapshots that I've
