[zfs-discuss] bad seagate drive?

2011-09-11 Thread Matt Harrison

Hi list,

I've got a system with 3 WD and 3 seagate drives. Today I got an email 
that zpool status indicated one of the seagate drives as REMOVED.


I've tried clearing the error but the pool becomes faulted again. I've 
taken out the offending drive and plugged it into a Windows box with 
SeaTools installed. Unfortunately SeaTools finds nothing wrong with the drive.


Windows seems to see the drive details OK, though of course I can't try 
anything ZFS related.


Is it worth RMAing it to Seagate anyway (considering they will apparently 
charge me if they don't think the drive is faulty), or are there some 
other tests I can try?


I've got the system powered down as there wasn't room to install hot 
spares, and I don't want to risk the rest of the pool with another failure.


Any tips appreciated.

Thanks


Re: [zfs-discuss] bad seagate drive?

2011-09-11 Thread Matt Harrison

On 11/09/2011 18:32, Krunal Desai wrote:

On Sep 11, 2011, at 13:01 , Richard Elling wrote:

The removed state can be the result of a transport issue. If this is a 
Solaris-based
OS, then look at fmadm faulty for a diagnosis leading to a removal. If none,
then look at fmdump -eV for errors relating to the disk. Last, check the 
zpool
history to make sure one of those little imps didn't issue a zpool remove
command.
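
For reference, the checks Richard describes map onto commands roughly like 
these (the pool name 'tank' and the disk name are only examples, and the 
output formats vary between builds):

  # fmadm faulty                        # any diagnosed faults?
  # fmdump -eV | grep -i c0t2d0         # raw error telemetry mentioning the disk
  # zpool history tank | grep remove    # was a 'zpool remove' ever issued?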


Definitely check your cabling; a few of my drives disappeared like this as 
'REMOVED', turned out to be some loose SATA cables on my backplane.

--khd


Thanks guys,

I reinstalled the drive after testing on the Windows machine and it 
looks fine now. By the time I'd got on to the console it had already 
started resilvering. All done now and hopefully it will stay like that 
for a while.


Thanks again, saved me some work


[zfs-discuss] monitoring ops

2011-06-28 Thread Matt Harrison

Hi list,

I want to monitor the read and write ops/bandwidth for a couple of pools 
and I'm not quite sure how to proceed. I'm using rrdtool so I either 
want an accumulated counter or a gauge.


According to the ZFS admin guide, running zpool iostat without any 
parameters should show the activity since boot. On my system (OSOL 
snv_133) it's only showing ops in the single digits for a system with a 
month's uptime and many GB of transfers.


So, is there a way to get this output correctly, or is there a better 
way to do this?


Thanks


Re: [zfs-discuss] monitoring ops

2011-06-28 Thread Matt Harrison

On 28/06/2011 16:44, Tomas Ögren wrote:



Matt Harrison <iwasinnamuk...@genestate.com> wrote:


Hi list,

I want to monitor the read and write ops/bandwidth for a couple of
pools
and I'm not quite sure how to proceed. I'm using rrdtool so I either
want an accumulated counter or a gauge.

According to the ZFS admin guide, running zpool iostat without any
parameters should show the activity since boot. On my system (OSOL


Average activity since boot...


Ahh ok, perhaps the guide should be updated to reflect this.
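
Since the no-argument output is an average since boot, one workaround for 
rrdtool is to sample over a fixed interval instead. A minimal sketch, 
assuming a pool named 'tank' and that read/write ops land in columns 4 and 
5 of the zpool iostat output on this build (values with K/M suffixes would 
still need converting):

  #!/bin/sh
  # Take a 10-second sample for the pool; the last line printed by
  # 'zpool iostat tank 10 2' reflects activity during that window only.
  POOL=tank
  zpool iostat $POOL 10 2 | tail -1 | awk '{ print $4, $5 }' |
  while read rops wops
  do
      # push the read/write ops figures into an RRD as gauge-style values
      rrdtool update $POOL.rrd N:$rops:$wops
  done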




snv_133) it's only showing ops in the single digits for a system with a

months uptime and many GB of transfers.

So, is there a way to get this output correctly, or is there a better
way to do this?

Thanks


Thank you


[zfs-discuss] changing vdev types

2011-06-01 Thread Matt Harrison

Hi list,

I've got a pool that's got a single raidz1 vdev. I've just got some more 
disks in and I want to replace that raidz1 with a three-way mirror. I 
was thinking I'd just make a new pool and copy everything across, but 
then of course I've got to deal with the name change.


Basically, what is the most efficient way to migrate the pool to a 
completely different vdev?


Thanks



Re: [zfs-discuss] changing vdev types

2011-06-01 Thread Matt Harrison

On 01/06/2011 20:45, Eric Sproul wrote:

On Wed, Jun 1, 2011 at 2:54 PM, Matt Harrison
iwasinnamuk...@genestate.com  wrote:

Hi list,

I've got a pool that's got a single raidz1 vdev. I've just got some more disks in
and I want to replace that raidz1 with a three-way mirror. I was thinking
I'd just make a new pool and copy everything across, but then of course I've
got to deal with the name change.

Basically, what is the most efficient way to migrate the pool to a
completely different vdev?


Since you can't mix vdev types in a single pool, you'll have to create
a new pool.  But you can use zfs send/recv to move the datasets, so
your mountpoints and other properties will be preserved.

Eric


Thanks Eric, however seeing as I can't have two pools named 'tank', I'll 
have to name the new one something else. I believe I will be able to 
rename it afterwards, but I just wanted to check first. I'd hate to have 
to spend hours changing the pool name in a thousand files.


Thanks


Re: [zfs-discuss] changing vdev types

2011-06-01 Thread Matt Harrison

On 01/06/2011 20:52, Eric Sproul wrote:

On Wed, Jun 1, 2011 at 3:47 PM, Matt Harrison
iwasinnamuk...@genestate.com  wrote:

Thanks Eric, however seeing as I can't have two pools named 'tank', I'll
have to name the new one something else. I believe I will be able to rename
it afterwards, but I just wanted to check first. I'd hate to have to spend
hours changing the pool name in a thousand files.


What files would those be?  Usually the pool name (and therefore the
dataset names) doesn't matter-- only mountpoints matter to most things
on the system, but maybe you have a more interesting use case.  :)

Eric


Nothing that impressive :D I have quite a few scripts and config files 
that of course access the data via the mountpoint, which in my case 
refers to the pool name. It's just slightly possible I exaggerated the 
number of files, but still I'd prefer not to do it at all. :)


But Cindy has just chimed in and confirmed that I can indeed just import 
the pool under the old name.


So many thanks to both of you, I'll go and read up on the snapshot 
options so I can get an accurate replica :)


Thanks

Matt


Re: [zfs-discuss] changing vdev types

2011-06-01 Thread Matt Harrison

On 01/06/2011 20:53, Cindy Swearingen wrote:

Hi Matt,

You have several options in terms of migrating the data but I think the
best approach is to do something like I have described below.

Thanks,

Cindy

1. Create snapshots of the file systems to be migrated. If you
want to capture the file system properties, then see the zfs.1m
man page for a description of what you need.

2. Create your mirrored pool with the new disks and call it
pool2, if your raidz pool is pool1, for example.

3. Use zfs send/receive to send your snapshots to pool2.

4. Review the pool properties on pool1 if you want pool2 set up
similarly.

# zpool get all pool1

5. After your pool2 is setup and your data is migrated, then
you can destroy pool1.

6. You can export pool2 and import it as pool1 (see the command sketch below).
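
As a rough command-level sketch of those steps (the disk names and snapshot 
name are only examples; check zfs(1M) on your build for the exact 
send/receive options, e.g. whether -R is available):

  # zfs snapshot -r pool1@migrate
  # zpool create pool2 mirror c2t0d0 c2t1d0 c2t2d0
  # zfs send -R pool1@migrate | zfs receive -Fd pool2
  # zpool get all pool1          # note any pool properties to carry over
  # zpool destroy pool1
  # zpool export pool2
  # zpool import pool2 pool1     # re-import under the old name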


Thanks Cindy,

Started with it now, just a bit of reading to do.

Matt



Re: [zfs-discuss] just can't import

2011-04-12 Thread Matt Harrison

On 13/04/2011 00:36, David Magda wrote:

On Apr 11, 2011, at 17:54, Brandon High wrote:


I suspect that the minimum memory for most moderately sized pools is
over 16GB. There has been a lot of discussion regarding how much
memory each dedup'd block requires, and I think it was about 250-270
bytes per block. 1TB of data (at max block size and no duplicate data)
will require about 2GB of memory to run effectively. (This seems high
to me, hopefully someone else can confirm.)


There was a thread on the topic with the subject 'Newbie ZFS Question: RAM for 
Dedup'. I think it was summarized pretty well by Erik Trimble:


bottom line: 270 bytes per record

so, for 4k record size, that  works out to be 67GB per 1 TB of unique data. 
128k record size means about 2GB per 1 TB.

dedup means buy a (big) SSD for L2ARC.
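
A quick back-of-the-envelope check of those figures (taking 1 TB as 2^40 bytes):

  1 TB / 4K recordsize   = 268,435,456 blocks x 270 bytes ~= 67.5 GB of DDT
  1 TB / 128K recordsize =   8,388,608 blocks x 270 bytes ~=  2.1 GB of DDT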


http://mail.opensolaris.org/pipermail/zfs-discuss/2010-October/045720.html

Remember that 270 bytes per block means you're allocating one 512-byte sector 
for most current disks (a 4K sector for each block, RSN).

See also:

http://mail.opensolaris.org/pipermail/zfs-discuss/2010-March/037978.html
http://mail.opensolaris.org/pipermail/zfs-discuss/2010-February/037300.html



Thanks for the info guys.

I decided that the overhead involved in managing (especially deleting) 
deduped datasets far outweighed the benefits it was bringing me. I'm 
currently remaking the datasets without dedup, and now that I know about 
the hang I am a lot more patient :D


Thanks

Matt


Re: [zfs-discuss] just can't import

2011-04-11 Thread Matt Harrison

On 11/04/2011 10:04, Brandon High wrote:

On Sun, Apr 10, 2011 at 10:01 PM, Matt Harrison
iwasinnamuk...@genestate.com  wrote:

The machine only has 4G RAM I believe.


There's your problem. 4G is not enough memory for dedup, especially
without a fast L2ARC device.


It's time I should be heading to bed so I'll let it sit overnight, and if
I'm still stuck with it I'll give Ian's recent suggestions a go and report
back.


I'd suggest waiting for it to finish the destroy. It will, if you give it time.

Trying to force the import is only going to put you back in the same
situation - The system will attempt to complete the destroy and seem
to hang until it's completed.

-B



Thanks Brandon,

It did finish eventually, not sure how long it took in the end. Things 
are looking good again :)


Thanks for the help everyone

Matt


[zfs-discuss] just can't import

2011-04-10 Thread Matt Harrison
I'm running a slightly old version of OSOL, I'm sorry I can't remember 
the version.


I had a de-dup dataset and tried to destroy it. The command hung and so 
did anything else zfs related. I waited half an hour or so (the dataset 
was only 15G) and rebooted.


The machine refused to boot, stuck at Reading ZFS Config. Asking around 
on the OSOL list, someone kindly suggested I try a live CD and import, 
scrub and export the pool from there.


Well the live CD is also hanging on import, and anything else zfs-related 
hangs. iostat shows some reads but they drop off to almost nothing after 
2 mins or so. Truss'ing the import process just loops this over and over:


3134/6: lwp_park(0xFDE02F38, 0) (sleeping...)
3134/6: lwp_park(0xFDE02F38, 0) Err#62 ETIME

I wouldn't mind waiting for the pool to right itself, but to my 
inexperienced eyes it doesn't actually seem to be doing anything.


Any tips greatly appreciated,

thanks

Matt Harrison


Re: [zfs-discuss] just can't import

2011-04-10 Thread Matt Harrison

On 11/04/2011 05:25, Brandon High wrote:

On Sun, Apr 10, 2011 at 9:01 PM, Matt Harrison
iwasinnamuk...@genestate.com  wrote:

I had a de-dup dataset and tried to destroy it. The command hung and so did
anything else zfs related. I waited half an hour or so (the dataset was
only 15G) and rebooted.


How much RAM does the system have? Dedup uses a LOT of memory, and it
can take a long time to destroy dedup'd datasets.

If you keep waiting, it'll eventually return. It could be a few hours or longer.


The machine refused to boot, stuck at Reading ZFS Config. Asking around on


The system resumed the destroy that was in progress. If you let it
sit, it'll eventually complete.


Well the livecd is also hanging on import, anything else zfs hangs. iostat
shows some reads but they drop off to almost nothing after 2 mins or so.


Likewise, it's trying to complete the destroy. Be patient and it'll
complete. Newer versions of OpenSolaris or Solaris 11 Express may 
complete it faster.


Any tips greatly appreciated,


Just wait...

-B



Thanks for the replies,

The machine only has 4G RAM I believe.

It's time I should be heading to bed so I'll let it sit overnight, and 
if I'm still stuck with it I'll give Ian's recent suggestions a go and 
report back.


Many thanks

Matt


Re: [zfs-discuss] SSDs get faster and less expensive

2009-07-21 Thread Matt Harrison

Richard Elling wrote:

On Jul 21, 2009, at 12:49 PM, Bob Friesenhahn wrote:


On Tue, 21 Jul 2009, Andrew Gabriel wrote:
The X25-M drives referred to are Intel's Mainstream drives, using MLC 
flash.


The Enterprise grade drives are X25-E, which currently use SLC flash 
(less dense, more reliable, much longer lasting/more writes). The 
expected lifetime is similar to an Enterprise grade hard drive.


Yes, but they store hardly any data.  The X25-M sizes they mention are 
getting to the point that you could use them for a data drive.


With wear leveling and zfs you would probably discover that the drive 
suddenly starts to wear out all at once, once it reaches the end of its 
lifetime.  Unless drive ages are carefully staggered, or different 
types of drives are intentionally used, it might be that data 
redundancy does not help.  Poof!


Eh?  Would you care to share how you calculate this?


Well I'm assuming something like this:

If all your drives have *exactly* the same lifetime, you really don't 
want them all to fail at the same time...so you should ideally arrange 
that they fail a month or so apart. That should leave you plenty of time 
to replace the failed device without all your data going bye bye at the 
same time.


Or maybe that wasn't the part you wanted clarified. My bad :)

Matt


[zfs-discuss] recovering fs's

2009-06-22 Thread Matt Harrison
I know this may have been discussed before but my google-fu hasn't 
turned up anything directly related.


My girlfriend had some files stored in a zfs dataset on my home server. 
She assured me that she didn't need them any more so I destroyed the 
dataset (I know I should have kept it anyway for just this occasion).


She's now desperate to get it back as she's realised there's some 
important work stuff hidden away in there.


Now there has been data written to the other datasets, but as far as I 
know, no other dataset has been created or destroyed.


Is there any way to recover the dataset and any/all of the data?

Very grateful someone can give me some good news :)

Thanks

~Matt


Re: [zfs-discuss] recovering fs's

2009-06-22 Thread Matt Harrison

dick hoogendijk wrote:

On Mon, 22 Jun 2009 21:42:23 +0100
Matt Harrison iwasinnamuk...@genestate.com wrote:

She's now desperate to get it back as she's realised there's some 
important work stuff hidden away in there.


Without snapshots you're lost.



Ok, thanks. It was worth a shot. Guess she'll be working overtime tonight :P

~Matt


Re: [zfs-discuss] recovering fs's

2009-06-22 Thread Matt Harrison

Simon Breden wrote:

Hi Matt!

As kim0 says, that s/w PhotoRec looks like it might work, if it can work with 
ZFS... would be interested to hear if it works.

Good luck,
Simon


I'll give it a go as soon as I get a chance. I've had a very quick look 
and ZFS isn't in the list of supported FSs...but we'll see.


~M


Re: [zfs-discuss] ZFS extended ACL

2009-01-29 Thread Matt Harrison
Christine Tran wrote:
 There was a very long discussion about this a couple of weeks ago on
 one of the lists. Apparently the decision was made to put the GNU
 utilities in default system wide path before the native Sun utilities
 in order to make it easier to attract Linux users by making the
 environment more familiar to them. It was apparently assumed that
 longtime Solaris users would quickly and easily figure out what the
 problem was and adjust the PATH to their liking.

 
 Well, it's OpenSOLARIS, comes with nice OpenSOLARIS goodies.  Oooh ZFS
 ACL! *rub hands together* Goody!  chmod A+user... *gets slapped*  ls
 -V *gets slapped*  OpenSOLARIS sucks!
 
 It's a quibble, but the way things are, it pleases no one, I don't
 think the casual Linux user moseying over to OpenSolaris would like
 the scenario above.

As a former long-time Linux user who came over for ZFS, I totally 
agree. I much preferred to learn the Solaris way and do things right 
rather than pretend it was still Linux.

Now I'm comfortable working on both despite their differences, and I'm 
sure I can perform tasks a lot better for it.

Matt


Re: [zfs-discuss] cifs perfomance

2009-01-22 Thread Matt Harrison
Brandon High wrote:
 On Wed, Jan 21, 2009 at 5:40 PM, Bob Friesenhahn
 bfrie...@simple.dallas.tx.us wrote:
 Several people reported this same problem.  They changed their
 ethernet adaptor to an Intel ethernet interface and the performance
 problem went away.  It was not ZFS's fault.
 
 It may not be a ZFS problem, but it is an OpenSolaris problem. The
 drivers for hardware like Realtek and other NICs are ... not so great.
 
 -B
 

+1, I was having terrible problems with the onboard RTL NICs... but on 
changing to a decent e1000 all is peachy in my world.

Matt


Re: [zfs-discuss] 4 disk raidz1 with 3 disks...

2009-01-16 Thread Matt Harrison
JZ wrote:
 Beloved Jonny,
 
 I am just like you.
 
 
 There was a day, I was hungry, and went for a job interview for sysadmin.
 They asked me - what is a protocol?
 I could not give a definition, and they said, no, not qualified.
 
 But they did not ask me about CICS and mainframe. Too bad.
 
 
 
 baby, even there is a day you can break daddy's pride, you won't want to, I 
 am sure.   ;-)
 
 [if you want a solution, ask Orvar, I would guess he thinks on his own now, 
 not baby no more, teen now...]
 
 best,
 z
 
 - Original Message - 
 From: Jonny Gerold j...@thermeon.com
 To: JZ j...@excelsioritsolutions.com
 Sent: Thursday, January 15, 2009 10:19 PM
 Subject: Re: [zfs-discuss] 4 disk raidz1 with 3 disks...
 
 
 Sorry that I broke your pride (all knowing) bubble by challenging you.
 But your just as stupid as I am since you did not give me a solution.
 Find a solution, and I will rock with your Zhou style, otherwise you're
 just like me :) I am in the U.S. Great weather...
 
 Thanks, Jonny
 
 
 

Is this guy seriously for real? It's getting hard to stay on the list 
with all this going on. No list etiquette, completely irrelevant 
ramblings, need I go on?

~Matt


Re: [zfs-discuss] 4 disk raidz1 with 3 disks...

2009-01-16 Thread Matt Harrison
Jonny Gerold wrote:
 Meh this is retarted. It looks like zpool list shows an incorrect 
 calculation? Can anyone agree that this looks like a bug?
 
 r...@fsk-backup:~# df -h | grep ambry
 ambry 2.7T   27K  2.7T   1% /ambry
 
 r...@fsk-backup:~# zpool list
 NAMESIZE   USED  AVAILCAP  HEALTH  ALTROOT
 ambry  3.62T   132K  3.62T 0%  ONLINE  -
 
 r...@fsk-backup:~# zfs list
 NAME   USED  AVAIL  REFER  MOUNTPOINT
 ambry 92.0K  2.67T  26.9K  /ambry

Bug or not I am not the person to say, but it's done that ever since 
I've used ZFS. zpool list shows the total space regardless of 
redundancy, whereas zfs list shows the actual available space. It was 
confusing at first but now I just ignore it.
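
For the numbers above that works out roughly as follows, assuming the pool 
is four ~1TB (0.905T) disks, which matches the 3.62T figure:

  zpool list:  4 x 0.905T                  = 3.62T raw
  zfs list:    3.62T - 0.905T (one parity) ~= 2.7T usable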

Matt



Re: [zfs-discuss] Degraded zpool without any kind of alert

2008-12-28 Thread Matt Harrison
Bob Friesenhahn wrote:
 On Sun, 28 Dec 2008, Robert Bauer wrote:
 
 It would be nice if gnome could notify me automatically when one of 
 my zpools are degraded or if any kind of ZFS error occurs.
 
 Yes.  It is a weird failing of Solaris to have an advanced fault 
 detection system without a useful reporting mechanism.
 
 I would also accept if it could be possible at least to send 
 automatically a mail when a zpool is degraded.
 
 This is the script (run as root via crontab) I use to have an email sent 
 to 'root' if a fault is detected.  It has already reported a fault:
 
 #!/bin/sh
 REPORT=/tmp/faultreport.txt
 SYSTEM=$1
 rm -f $REPORT
 /usr/sbin/fmadm faulty 2>&1 > $REPORT
 if test -s $REPORT
 then
   /usr/ucb/Mail -s "$SYSTEM Fault Alert" root < $REPORT
 fi
 rm -f $REPORT

I do much the same thing, although I had to fiddle it a bit to exclude a 
certain report type. A while ago, a server here started to send out 
errors from:

Fault class : defect.sunos.eft.undiagnosable_problem
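
A crude way to do that kind of exclusion in a script like Bob's is to 
filter the fmadm output, for example (a sketch only; grep -v drops just 
the matching line rather than the whole fault record, so it is approximate):

  /usr/sbin/fmadm faulty 2>&1 | \
      grep -v 'defect.sunos.eft.undiagnosable_problem' > $REPORT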

The guys on the fault-discuss list have been unable to enlighten me as 
to the problem, and I'm ashamed to say I have not taken any further 
steps. Short of replacing the entire machine, I have no idea what to try.

I just wanted to note that although the fault detection is very good, it 
isn't always possible to work out what the fault really is.

Matt


Re: [zfs-discuss] ZFS filesystem and CIFS permissions issue?

2008-12-25 Thread Matt Harrison
Jeff Waddell wrote:
 I'm having a permission issue with a ZFS fs over CIFS. I'm new to
  OpenSolaris, and fairly new to *nix (only real experience being OS
  X), so any help would be appreciated.

Sounds like you're falling foul of the same thing I did when I started.

Child filesystems are not accessible via CIFS through their parent's share.

If I have

tank/backup
tank/backup/exodus

And share tank/backup via CIFS, it will show the exodus folder, but it 
won't let you do anything with it. It has been explained better elsewhere, 
but I'm not in a position to find the reference. For now, you just can't.

To access those child filesystems, you will have to share them 
individually. It's a pain, but that's how I have to do it at the moment 
(see the sketch below).
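
With the in-kernel CIFS server that means something along these lines for 
the example above (the share names are only examples; the sharesmb 
property syntax may differ on older builds):

  # zfs set sharesmb=name=backup tank/backup
  # zfs set sharesmb=name=exodus tank/backup/exodus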

HTH

Matt


Re: [zfs-discuss] diagnosing read performance problem

2008-10-31 Thread Matt Harrison
Ok, I have received a new set of NICs and a new switch and the problem still
remains.

Just for something to do I ran some tests:

Copying a 200Mb file over scp from the main problem workstation to a totally
unrelated gentoo linux box. Absolutely no problems.

So I thought it was down to the zfs fileserver. Then I ran the same test to
the zfs filer just to check. Absolutely no problems again!$%?%^

I may just be getting light-headed from the hair pulling, but it seems that
the problem only occurs when the traffic is going through the CIFS server.

I'm going to write a new thread to cifs-discuss and provide them some
captures, maybe they have a clue why this might happen.

I'm also going to switch back to the snv_95 BE I still have on the server,
it's possible it might have some effect.

Thanks

Matt




Re: [zfs-discuss] diagnosing read performance problem

2008-10-31 Thread Matt Harrison
Well, somehow it's fixed:

Since putting in the new Intel card, the transfer from the box dropped so
badly, I couldn't even copy a snoop from it.

So I removed the dohwchksum line from /etc/system and rebooted. Then just to
clean up a bit I disabled the onboard NICs in the bios.

Now I'm still seeing the duplicate ACKs and the checksum errors from that
client, but the transfers have sped right up.

Video playback and all other copying from the server are now working again
without any problems so far.

Thanks to all that have contributed to this thread, it really did help me
organise my thoughts.

Matt




Re: [zfs-discuss] diagnosing read performance problem

2008-10-30 Thread Matt Harrison
Nigel Smith wrote:
 Hi Matt
 Well this time you have filtered out any SSH traffic on port 22 successfully.
 
 But I'm still only seeing half of the conversation!

Grr this is my day, I think I know what the problem was...user error as 
I'm not used to snoop.

 I see packets sent from client to server.
 That is from source: 10.194.217.12 to destination: 10.194.217.3
 So a different client IP this time
 
 And the Duplicate ACK packets (often long bursts) are back in this capture.
 I've looked at these a little bit more carefully this time,
 and I now notice it's using the 'TCP selective acknowledgement' feature 
 (SACK) 
 on those packets.
 
 Now this is not something I've come across before, so I need to do some
 googling!  SACK is defined in RFC 2018.
 
  http://www.ietf.org/rfc/rfc2018.txt
 
 I found this explanation of when SACK is used:
 
  http://thenetworkguy.typepad.com/nau/2007/10/one-of-the-most.html
  http://thenetworkguy.typepad.com/nau/2007/10/tcp-selective-a.html
 
 This seems to indicate these 'SACK' packets are triggered as a result 
 of 'lost packets', in this case, it must be the packets sent back from
 your server to the client, that is during your video playback.

Well that's a bit above me. I can understand the lost packets though, it 
sounds about right for the situation.

 Of course I'm not seeing ANY of those packets in this capture
 because there are none captured from server to client!  
 I'm still not sure why you cannot seem to capture these packets!

I think I know the problem: I thought I should enable promiscuous mode, 
so I quickly scanned the help output and added the -P switch. However 
that does the opposite of what I thought and takes the snoop out of 
promiscuous mode.
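
For anyone else tripping over this, per snoop(1M) the -P flag means 
non-promiscuous capture, so the difference amounts to:

  # snoop -P -d rtls0 -o test.cap   # -P = non-promiscuous (what I ran by mistake)
  # snoop -d rtls0 -o test.cap      # default is promiscuous, which is what I wanted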

 Oh, by the way, I probably should advise you to run...
 
  # netstat -i

Yes, one of the previous replies to this thread advised me to try that. 
The count does increase, though quite slowly it seems to me.

After being up for 6 hours with a few video playback tests the Oerr 
count sits at 92 currently.

 ..on the OpenSolaris box, to see if any errors are being counted
 on the network interface.
 
 Are you still seeing the link going up/down in '/var/adm/messages'?
 You are never going to do any good while that is happening.
 I think you need to try a different network card in the server.

Strangely the link up/down problem was only present on the second switch 
I tried (which works perfectly for other connections). On the first 
switch the link appears stable at first glance, however we're getting 
these duplicate ACKs and checksum errors (although the checksums might be 
caused by the hardware offloading on that client as you pointed out).

I've got a couple of brand new Intel Pro 1000s and a new switch arriving 
by courier tomorrow morning, so with any luck I should see some 
difference.

I'm getting a bit busy but I will attempt to make another snoop 
*without* disabling promiscuous mode.

Thanks for all your input

Matt



Re: [zfs-discuss] diagnosing read performance problem

2008-10-29 Thread Matt Harrison
On Tue, Oct 28, 2008 at 05:30:55PM -0700, Nigel Smith wrote:
 Hi Matt.
 Ok, got the capture and successfully 'unzipped' it.
 (Sorry, I guess I'm using old software to do this!)
 
 I see 12840 packets. The capture is a TCP conversation 
 between two hosts using the SMB aka CIFS protocol.
 
 10.194.217.10 is the client - Presumably Windows?
 10.194.217.3 is the server - Presumably OpenSolaris - CIFS server?

All correct so far

 Using WireShark,
 Menu: 'Statistics  Endpoints' show:
 
 The Client has transmitted 4849 packets, and
 the Server has transmitted 7991 packets.
 
 Menu: 'Analyze  Expert info Composite':
 The 'Errors' tab shows:
 4849 packets with a 'Bad TCP checksum' error - These are all transmitted by 
 the Client.
 
 (Apply a filter of 'ip.src_host == 10.194.217.10' to confirm this.)
 
 The 'Notes' tab shows:
 ..numerous 'Duplicate Ack's'
 For example, for 60 different ACK packets, the exact same packet was 
 re-transmitted 7 times!
 Packet #3718 was duplicated 17 times.
 Packet #8215 was duplicated 16 times.
 packet #6421 was duplicated 15 times, etc.
 These bursts of duplicate ACK packets are all coming from the client side.
 
 This certainly looks strange to me - I've not seen anything like this before.
 It's not going to help the speed to unnecessarily duplicate packets like
 that, and these burst are often closely followed by a short delay, ~0.2 
 seconds.
 And as far as I can see, it looks to point towards the client as the source
 of the problem.
 If you are seeing the same problem with other client PC, then I guess we need 
 to 
 suspect the 'switch' that connects them.

I have another switch on the way to move to. I will see if this helps.

Thanks for your input

Matt




Re: [zfs-discuss] diagnosing read performance problem

2008-10-29 Thread Matt Harrison
On Tue, Oct 28, 2008 at 05:45:48PM -0700, Richard Elling wrote:
 I replied to Matt directly, but didn't hear back.  It may be a driver issue
 with checksum offloading.  Certainly the symptoms are consistent.
 To test with a workaround see
 http://bugs.opensolaris.org/view_bug.do?bug_id=6686415

Hi, Sorry for not replying, we had some problems with our email provider
yesterday and I was up all night restoring backups.

I did try the workaround, but it didn't have any effect, presumably because
it's not using the rge driver as you stated before.

I'll try swapping the switch out and post back my results.

Many Thanks

Matt




Re: [zfs-discuss] diagnosing read performance problem

2008-10-29 Thread Matt Harrison
On Wed, Oct 29, 2008 at 10:01:09AM -0700, Nigel Smith wrote:
 Hi Matt
 Can you just confirm if that Ethernet capture file, that you made available,
 was done on the client, or on the server. I'm beginning to suspect you
 did it on the client.

That capture was done from the client

 You can get a capture file on the server (OpenSolaris) using the 'snoop'
 command, as per one of my previous emails.  You can still view the
 capture file with WireShark as it supports the 'snoop' file format.

I am uploading a snoop from the server to 

http://distfiles.genestate.com/snoop.zip

Please note this snoop will include traffic to ssh as I can't work out how
to filter that out :P

 Normally it would not be too important where the capture was obtained,
 but here, where something strange is happening, it could be critical to 
 understanding what is going wrong and where.
 
 It would be interesting to do two separate captures - one on the client
 and the one on the server, at the same time, as this would show if the
 switch was causing disruption.  Try to have the clocks on the client and
 server synchronised as close as possible.

Clocks are synced via ntp as we're using Active Directory with CIFS.

On another note, I've just moved the offending network to another switch and
it's even worse I think. I've noticed that under high load, the link light
for the server's connection blinks on and off, not quite steadily but about
every 2 seconds.

This appears in /var/adm/messages:

Oct 29 18:24:22 exodus mac: [ID 435574 kern.info] NOTICE: rtls0 link up, 100
Mbps, full duplex
Oct 29 18:24:24 exodus mac: [ID 486395 kern.info] NOTICE: rtls0 link down
Oct 29 18:24:25 exodus mac: [ID 435574 kern.info] NOTICE: rtls0 link up, 100
Mbps, full duplex
Oct 29 18:24:27 exodus mac: [ID 486395 kern.info] NOTICE: rtls0 link down
Oct 29 18:24:28 exodus mac: [ID 435574 kern.info] NOTICE: rtls0 link up, 100
Mbps, full duplex
Oct 29 18:24:30 exodus mac: [ID 486395 kern.info] NOTICE: rtls0 link down
Oct 29 18:24:31 exodus mac: [ID 435574 kern.info] NOTICE: rtls0 link up, 100
Mbps, full duplex

I think it's got to be the NIC, the network runs full duplex quite happily
so I don't think it's an auto-neg problem.

Thanks for sticking with this :)

Matt




Re: [zfs-discuss] diagnosing read performance problem

2008-10-29 Thread Matt Harrison
On Wed, Oct 29, 2008 at 05:32:39PM -0700, Nigel Smith wrote:
 Hi Matt
 In your previous capture, (which you have now confirmed was done
 on the Windows client), all those 'Bad TCP checksum' packets sent by the 
 client, 
 are explained, because you must be doing hardware TCP checksum offloading
 on the client network adaptor.  WireShark will capture the packets before
 that hardware calculation is done, so the checksums all appear to be wrong,
 as they have not yet been calculated!

I know that the client I was using has an nForce board with nVidia network
controllers. There is an option to offload to hardware but I believe that
was disabled.

 The strange thing is that I'm only seeing half of the conversation!
 I see packets sent from client to server.
 That is from source: 10.194.217.10 to destination: 10.194.217.3
 
 I can also see some packets from
 source: 10.194.217.5 (Your AD domain controller) to destination  10.194.217.3
 
 But you've not capture anything transmitted from your
 OpenSolaris server - source: 10.194.217.3
 
 (I checked, and I did not have any filters applied in WireShark
 that would cause the missing half!)
 Strange! I'm not sure how you did that.

I believe I was using the wrong filter expression... my bad :(

 The half of the conversation that I can see looks fine - there
 does not seem to be any problem.  I'm not seeing any duplication
 of ACK's from the client in this capture.  
 (So again somewhat strange, unless you've fixed the problem!)

 I'm assuming you're using a single network card in the Solaris server, 
 but maybe you had better just confirm that.

Confirmed, there is a single PCI NIC that I'm using (there are the dual
onboard NICs but they don't work for me anymore).
 
 Regarding not capturing SSH traffic and only capturing traffic from
 ( hopefully to) the client, try this:
 
  # snoop -o test.cap -d rtls0 host 10.194.217.10 and not port 22

Much better thanks. I am attaching a second snoop from the server with the
full conversation.

http://distfiles.genestate.com/snoop2.zip

Incidentally, this is talking to a different client, which although doesn't
show checksum errors, does still have a load of duplicate ACKs. If this
confuses the issue, I can do it from the old client as soon as it becomes
free.

 Regarding those 'link down', 'link up' messages, '/var/adm/messages'.
 I can tie up some of those events with your snoop capture file,
 but it just shows that no packets are being received while the link is down,
 which is exactly what you would expect.
 But dropping the link for a second will surely disrupt your video playback!
 
 If the switch is ok, and the cable from the switch is ok, then it does
 now point towards the network card in the OpenSolaris box.  
 Maybe as simple as a bad mechanical connection on the cable socket

Very possible. I have an Intel Pro 1000 and a new GB switch on the way.

 BTW, just run '/usr/X11/bin/scanpci'  and identify the 'vendor id' and
 'device id' for the network card, just in case it turns out to be a driver 
 bug.

pci bus 0x0001 cardnum 0x06 function 0x00: vendor 0x10ec device 0x8139
 Realtek Semiconductor Co., Ltd. RTL-8139/8139C/8139C+

and the two onboards that no longer function:

pci bus 0x cardnum 0x08 function 0x00: vendor 0x10de device 0x0373
 nVidia Corporation MCP55 Ethernet

pci bus 0x cardnum 0x09 function 0x00: vendor 0x10de device 0x0373
 nVidia Corporation MCP55 Ethernet

Thanks

Matt




Re: [zfs-discuss] diagnosing read performance problem

2008-10-28 Thread Matt Harrison
On Mon, Oct 27, 2008 at 06:18:59PM -0700, Nigel Smith wrote:
 Hi Matt
 Unfortunately, I'm having problems un-compressing that zip file.
 I tried with 7-zip and WinZip reports this:
 
 skipping _1_20081027010354.cap: this file was compressed using an unknown 
 compression method.
Please visit www.winzip.com/wz54.htm for more information.
The compression method used for this file is 98.
 
 Please can you check it out, and if necessary use a more standard
 compression algorithm.
 Download File Size was 8,782,584 bytes.

Apologies, I had let winzip compress it with whatever it thought was best,
apparently this was the best method for size, not compatibility.

There's a new upload under the same URL compressed with 2.0 compatible
compression. Fingers crossed that works better for you.

Thanks

Matt




Re: [zfs-discuss] diagnosing read performance problem

2008-10-26 Thread Matt Harrison
On Sat, Oct 25, 2008 at 06:50:46PM -0700, Nigel Smith wrote:
 Hi Matt
 What chipset is your PCI network card?
 (obviously, it not Intel, but what is it?)
 Do you know which driver the card is using?

I believe it's some sort of Realtek (8139 probably). It's coming up as rtls0

 You say '..The system was fine for a couple of weeks..'.
 At that point did you change any software - do any updates or upgrades?
 For instance, did you upgrade to a new build of OpenSolaris?

No, since the original problem with the onboard NICs it hasn't been upgraded
or anything.

 If not, then I would guess it's some sort of hardware problem.
 Can you try different cables and a different switch - anything
 in the path between client  server is suspect.

Have tried different cables and switch ports, I will try a different switch
as soon as I can get some space on one of the others.

 A mismatch of Ethernet duplex settings can cause problems - are
 you sure this is Ok.

Not 100% sure, but I will check as best I can.

 To get an idea of how the network is running try this:
 
 On the Solaris box, do an Ethernet capture with 'snoop' to a file.
 http://docs.sun.com/app/docs/doc/819-2240/snoop-1m?a=view
 
  # snoop -d {device} -o {filename}
 
 .. then while capturing, try to play your video file through the network.
 Control-C to stop the capture.
 
 You can then use Ethereal or WireShark to analyze the capture file.
 On the 'Analyze' menu, select 'Expert Info'.
 This will look through all the packets and will report
 any warning or errors it sees.

It's coming up with a huge number of TCP Bad Checksum errors, a few
Previous Segment Lost warnings and a few Fast Retransmissions.

Thanks

Matt




Re: [zfs-discuss] diagnosing read performance problem

2008-10-26 Thread Matt Harrison
Nigel Smith wrote:
 Ok on the answers to all my questions.
 There's nothing that really stands out as being obviously wrong.
 Just out of interest, what build of OpenSolaris are you using?
 
 One thing you could try on the Ethernet capture file, is to set
 the WireShark 'Time' column like this:
 View > Time Display Format > Seconds Since Previous Displayed Packet
 
 Then look down the time column for any unusual high time delays
 between packets. Any unusually high delays during
 a data transfer phase, may indicate a problem.

Along with the errors that I noted previously, some of the packets do
seem to be taking a rather long time (0.5s).

I've taken a cap file from Wireshark in the hope it clears up some
information. The capture is less than a minute of playing a video over
the CIFS share.

It's a little too large to send in a mail so I've posted it at

http://distfiles.genestate.com/_1_20081027010354.zip

 Another thing you could try is measuring network performance
 with a utility called 'iperf'.

Thanks for pointing this program out, I've just run it against the Gentoo
firewall we've got, and it's reporting good speeds for the network.
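
In case it's useful to anyone else, the test amounts to something like 
this (the firewall address is a placeholder):

  server$ iperf -s
  client$ iperf -c <firewall-ip> -t 30   # 30-second throughput test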

Thanks

Matt


Re: [zfs-discuss] diagnosing read performance problem

2008-10-26 Thread Matt Harrison
Nigel Smith wrote:
 Ok on the answers to all my questions.
 There's nothing that really stands out as being obviously wrong.
 Just out of interest, what build of OpenSolaris are you using?

Damn forgot to add that, I'm running SXCE snv_97.

Thanks

Matt


Re: [zfs-discuss] diagnosing read performance problem

2008-10-25 Thread Matt Harrison
Bob Friesenhahn wrote:
 Other people on this list who experienced the exact same problem
 ultimately determined that the problem was with the network card.  I
 recall that Intel NICs were the recommended solution.
 
 Note that 100MBit is now considered to be a slow link and PCI is also
 considered to be slow.

Thanks for the reply,

Yes I understand that 100mbit and pci are a bit outdated, unfortunately
I'm still campaigning to have our switches upgraded to gbit or 10gbit.

I will see if I can acquire an Intel NIC to test it with, however before
the problem with the NICs started it was operating fine. It seems though
that there is an ongoing problem with NICs on this machine.

The onboard ones haven't so much died (they still allow me to use them
from the OS) but they just won't start up or accept there is a cable
plugged in. The PCI NIC does seem to be working and transfers to/from
the server seem OK except when there's video being moved.

I will do some testing and see if I can come up with a more definite
reason to the performance problems.

Thanks

Matt


Re: [zfs-discuss] diagnosing read performance problem

2008-10-25 Thread Matt Harrison
On Sat, Oct 25, 2008 at 11:10:42AM -0500, Bob Friesenhahn wrote:
 Hmmm, this may indicate that there is an ethernet cable problem.  Use 
 'netstat -I interface' (where interface is the interface name shown by 
 'ifconfig -a') to see if the interface error count is increasing.  If you 
 are using a smart switch, use the switch administrative interface and see 
 if the error count is increasing for the attached switch port. 
 Unfortunately your host can only see errors for packets it receives and it 
 may be that errors are occurring for packets it sends.

 If the ethernet cable is easy to replace, then it may be easiest to simply 
 replace it and use a different switch port to see if the problem just goes 
 away.
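
For the record, the per-interval form of that check looks something like 
this (interface name taken from earlier in the thread; watch the output 
error column):

  # netstat -I rtls0 5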

Ok, I've just tried 2 other cables, one doesn't even get a link light so
it's probably dead. The other one I had suspected was bad and indeed the
connection is terrible and the Oerr field in netstat does increase.

On the other hand, the Oerr field doesn't increase with the original cable,
however the video performance is still bad (although not as bad as with the
2nd replacement cable).

I will make up some new cables, and also place an order for an Intel
Pro 100, as they are supposed to be really reliable.

Thanks

Matt




[zfs-discuss] diagnosing read performance problem

2008-10-24 Thread Matt Harrison
Hi all,

I've got a lot of video files on a ZFS/CIFS fileserver running SXCE. A
little while ago the dual onboard NICs died and I had to replace them with a
PCI 10/100 NIC. The system was fine for a couple of weeks but now the
performance when viewing a video file from the CIFS share is appalling. Videos
stop and jerk with audio distortion.

I have tried this from several client machines so I'm pretty certain it lies
with the server but I'm unsure of the next step to find out the source of
the problem.

Is there any tool I should be using to find out if this is a zfs, network or
other problem?

Grateful for any ideas

Thanks

Matt


Re: [zfs-discuss] ZFS hangs/freezes after disk failure,

2008-08-24 Thread Matt Harrison
Todd H. Poole wrote:
 But you're not attempting hotswap, you're doing hot plug
 
 Do you mean hot UNplug? Because I'm not trying to get this thing to recognize 
 any new disks without a restart... Honest. I'm just trying to prevent the 
 machine from freezing up when a drive fails. I have no problem restarting the 
 machine with a new drive in it later so that it recognizes the new disk.
 
 and unless you're using the onboard bios' concept of an actual
 RAID array, you don't have an array, you've got a JBOD and
 it's not a real JBOD - it's a PC motherboard which does _not_
 have the same electronic and electrical protections that a
 JBOD has *by design*.
 
 I'm confused by what your definition of a RAID array is, and for that matter, 
 what a JBOD is... I've got plenty of experience with both, but just to make 
 sure I wasn't off my rocker, I consulted the demigod:
 
 http://en.wikipedia.org/wiki/RAID
 http://en.wikipedia.org/wiki/JBOD
 
 and I think what I'm doing is indeed RAID... I'm not using some sort of 
 controller card, or any specialized hardware, so it's certainly not Hardware 
 RAID (and thus doesn't contain any of the fancy electronic or electrical 
 protections you mentioned), but lacking said protections doesn't preclude the 
 machine from being considered a RAID. All the disks are the same capacity, 
 the OS still sees the zpool I've created as one large volume, and since I'm 
 using RAID-Z (RAID5), it should be redundant... What other qualifiers out 
 there are necessary before a system can be called RAID compliant?
 
 If it's hot-swappable technology, or a controller hiding the details from the 
 OS and instead  presenting a single volume, then I would argue those things 
 are extra - not a fundamental prerequisite for a system to be called a RAID.
 
 Furthermore, while I'm not sure what the difference between a real JBOD and 
 a plain old JBOD is, this set-up certainly wouldn't qualify for either. I 
 mean, there is no concatenation going on, redundancy should be present (but 
 due to this issue, I haven't been able to verify that yet), and all the 
 drives are the same size... Am I missing something in the definition of a 
 JBOD?
 
 I don't think so...
  
 And you're right, it can. But what you've been doing is outside
 the bounds of what IDE hardware on a PC motherboard is designed
 to cope with.
 
 Well, yes, you're right, but it's not like I'm making some sort of radical 
 departure outside of the bounds of the hardware... It really shouldn't be a 
 problem so long as it's not an unreasonable departure because that's where 
 software comes in. When the hardware can't cut it, that's where software 
 picks up the slack.
 
 Now, obviously, I'm not saying software can do anything with any piece of 
 hardware you give it - no matter how many lines of code you write, your 
 keyboard isn't going to turn into a speaker - but when it comes to reasonable 
 stuff like ensuring a machine doesn't crash because a user did something with 
 the hardware that he or she wasn't supposed to do? Prime target for software.
 
 And that's the way it's always been... The whole push behind that whole ZFS 
 Promise thing (or if you want to make it less specific, the attractiveness of 
 RAID in general), was that RAID-Z [wouldn't] require any special hardware. 
 It doesn't need NVRAM for correctness, and it doesn't need write buffering 
 for good performance. With RAID-Z, ZFS makes good on the original RAID 
 promise: it provides fast, reliable storage using cheap, commodity disks. 
 (http://blogs.sun.com/bonwick/entry/raid_z)
 
 Well sorry, it does. Welcome to an OS which does care.
 
 The half-hearted apology wasn't necessary... I understand that OpenSolaris 
 cares about the method those disks use to plug into the motherboard, but what 
 I don't understand is why that limitation exists in the first place. It would 
 seem much better to me to have an OS that doesn't care (but developers that 
 do) and just finds a way to work, versus one that does care (but developers 
 that don't) and instead isn't as flexible and gets picky... I'm not saying 
 OpenSolaris is the latter, but I'm not getting the impression it's the former 
 either...
 
 If the controlling electronics for your disk can't
 handle it, then you're hosed. That's why FC, SATA (in SATA
 mode) and SAS are much more likely to handle this out of
 the box. Parallel SCSI requires funky hardware, which is why
 those old 6- or 12-disk multipacks are so useful to have.

 Of the failure modes that you suggest above, only one
 is going to give you anything other than catastrophic
 failure (drive motor degradation) - and that is because the
 drive's electronics will realise this, and send warnings to
 the host which should have its drivers written so
 that these messages are logged for the sysadmin to act upon.

 The other failure modes are what we call catastrophic. And
 where your hardware isn't designed with certain protections
 around drive 

Re: [zfs-discuss] are these errors dangerous

2008-08-03 Thread Matt Harrison
Ross wrote:
 Hi,
 
 First of all, I really should warn you that I'm very new to Solaris, I'll 
 happily share my thoughts but be aware that there's not a lot of experience 
 backing them up.
 
From what you've said, and the logs you've posted I suspect you're hitting 
recoverable read errors.  ZFS wouldn't flag these as no corrupt data has been 
encountered, but I suspect the device driver is logging them anyway.
 
 The log you posted all appears to refer to one disk (sd0), my guess would be 
 that you have some hardware faults on that device and if it were me I'd 
 probably be replacing it before it actually fails.
 
 I'd check your logs before replacing that disk though, you need to see if 
 it's just that one disk, or if others are affected.  Provided you have a 
 redundant ZFS pool, it may be worth offlining that disk, unconfiguring it 
 with cfgadm, and then pulling the drive to see if that does cure the warnings 
 you're getting in the logs.
 
 Whatever you do, please keep me posted.  Your post has already made me 
 realise it would be a good idea to have a script watching log file sizes to 
 catch problems like this early.
 
 Ross
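
The offline/unconfigure sequence Ross describes looks roughly like this 
(the pool, disk and attachment-point names are examples only; 'cfgadm -al' 
shows the real Ap_Ids on a given box):

  # zpool offline tank c1t0d0
  # cfgadm -al                      # find the attachment point for that disk
  # cfgadm -c unconfigure sata0/0   # Ap_Id is an example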

Thanks for your insights, I'm also relatively new to Solaris but I've 
been on Linux for years. I've just read more into the logs and it's 
giving these errors for all 3 of my disks (sd0,1,2). I'm running a 
raidz1, unfortunately without any spares, and I'm not too keen on 
removing the parity from my pool as I've got a lot of important files 
stored there.

I would agree that this seems to be a recoverable error and nothing is 
getting corrupted thanks to ZFS. The thing I'm worried about is if the 
entire batch is failing slowly and will all die at the same time.

Hopefully some ZFS/hardware guru can comment on this before the world 
ends for me :P

Thanks

Matt



Re: [zfs-discuss] are these errors dangerous

2008-08-03 Thread Matt Harrison
Miles Nordin wrote:
 mh == Matt Harrison [EMAIL PROTECTED] writes:
 
 mh  I'm worried about is if the entire batch is failing slowly
 mh and will all die at the same time.
 
 If you can download smartctl, you can use the approach described here:
 
  http://web.Ivy.NET/~carton/rant/ml/raid-findingBadDisks-0.html
  http://web.Ivy.NET/~carton/rant/ml/raid-findingBadDisks-1.html

I already had smartmontools for temp monitoring. Using smartctl -a I get:

Device supports SMART and is Enabled
Temperature Warning Disabled or Not Supported
SMART Health Status: OK

Current Drive Temperature: 33 C

Error Counter logging not supported   <-- unhelpful
No self-tests have been logged

So it looks like I can't use the error count on these (SATA) drives. 
Otherwise everything else looks OK for all 3.

And regarding Ross' reply, I will try posting something to storage-discuss 
and see if anyone has more ideas.

thanks

Matt



Re: [zfs-discuss] are these errors dangerous

2008-08-03 Thread Matt Harrison
Johan Hartzenberg wrote:
 On Sun, Aug 3, 2008 at 8:48 PM, Matt Harrison
 [EMAIL PROTECTED]wrote:
 
 Miles Nordin wrote:
 mh == Matt Harrison [EMAIL PROTECTED] writes:
 mh  I'm worried about is if the entire batch is failing slowly
 mh and will all die at the same time.

 
 
 Matt, can you please post the output from this command:
 
 iostat -E

[EMAIL PROTECTED]:~ # iostat -E
cmdk0 Soft Errors: 0 Hard Errors: 0 Transport Errors: 0
Model: WDC WD2000JB-00 Revision:  Serial No: WD-WCAL81632817 Size: 
200.05GB 200047067136 bytes
Media Error: 0 Device Not Ready: 0 No Device: 0 Recoverable: 0
Illegal Request: 0
sd0   Soft Errors: 0 Hard Errors: 0 Transport Errors: 0
Vendor: ATA  Product: WDC WD7500AAKS-0 Revision: 4G30 Serial No:
Size: 750.16GB 750156374016 bytes
Media Error: 0 Device Not Ready: 0 No Device: 0 Recoverable: 0
Illegal Request: 478675 Predictive Failure Analysis: 0
sd1   Soft Errors: 0 Hard Errors: 0 Transport Errors: 0
Vendor: ATA  Product: WDC WD7500AAKS-0 Revision: 4G30 Serial No:
Size: 750.16GB 750156374016 bytes
Media Error: 0 Device Not Ready: 0 No Device: 0 Recoverable: 0
Illegal Request: 478626 Predictive Failure Analysis: 0
sd2   Soft Errors: 0 Hard Errors: 0 Transport Errors: 0
Vendor: ATA  Product: WDC WD7500AAKS-0 Revision: 4G30 Serial No:
Size: 750.16GB 750156374016 bytes
Media Error: 0 Device Not Ready: 0 No Device: 0 Recoverable: 0
Illegal Request: 478604 Predictive Failure Analysis: 0
sd3   Soft Errors: 0 Hard Errors: 16 Transport Errors: 0
Vendor: HL-DT-ST Product: DVDRAM_GSA-H10N  Revision: JX06 Serial No:
Size: 0.00GB 0 bytes
Media Error: 0 Device Not Ready: 16 No Device: 0 Recoverable: 0
Illegal Request: 0 Predictive Failure Analysis: 0

Lots of illegal requests, and a few hard errors. Doesn't look good.

 This will show counts of the types of errors for all disks since the last
 reboot.  I am guessing sd0 is your CD / DVD drive.

I don't think so, my DVD drive is on IDE along with the boot drive, 
while my pool is on 3 SATA disks.

Thanks

Matt




Re: [zfs-discuss] are these errors dangerous

2008-08-03 Thread Matt Harrison
Richard Elling wrote:
 Matt Harrison wrote:
 Aug  2 14:46:06 exodus  Error for Command: read_defect_data
 Error Level: Informational
   
 
 key here: Informational
 
 Aug  2 14:46:06 exodus scsi: [ID 107833 kern.notice]Requested 
 Block: 0 Error Block: 0
 Aug  2 14:46:06 exodus scsi: [ID 107833 kern.notice]Vendor: ATA 
 Serial Number:
 Aug  2 14:46:06 exodus scsi: [ID 107833 kern.notice]Sense Key: 
 Illegal_Request
 Aug  2 14:46:06 exodus scsi: [ID 107833 kern.notice]ASC: 0x20 
 (invalid command operation code), ASCQ: 0x0, FRU: 0x0
   
 
 Key here: ASC 0x20 (invalid command operation code)
 
 Aug  2 14:46:06 exodus scsi: [ID 107833 kern.warning] WARNING: 
 /[EMAIL PROTECTED],0/pci1043,[EMAIL PROTECTED]/[EMAIL PROTECTED],0 (sd0):
 Aug  2 14:46:06 exodus  Error for Command: log_sense   
 Error Level: Informational
 Aug  2 14:46:06 exodus scsi: [ID 107833 kern.notice]Requested 
 Block: 0 Error Block: 0
 Aug  2 14:46:06 exodus scsi: [ID 107833 kern.notice]Vendor: ATA 
 Serial Number:
 Aug  2 14:46:06 exodus scsi: [ID 107833 kern.notice]Sense Key: 
 Illegal_Request
 Aug  2 14:46:06 exodus scsi: [ID 107833 kern.notice]ASC: 0x24 
 (invalid field in cdb), ASCQ: 0x0, FRU: 0x0
   
 
 Key here: invalid field in cbd where CDB is command data block
 http://en.wikipedia.org/wiki/SCSI_CDB
 
 Obviously a command is being sent to the device that it doesn't
 understand.  This could be a host side driver or disk firmware problem.
 I'd classify this as annoying, but doesn't appear dangerous on the face.
 With some digging you could determine which command is failing,
 but that won't fix anything.  You might check with the disk vendor
 for firmware upgrades and you might look at a later version of the
 OS drivers.

Well I'm pleased it doesn't scream DANGER to people. I can live with 
clearing out the logs now and then. I will check with WD if there are 
firmware updates for these disks, and I will update my snv at some point.

 This isn't a ZFS issue, so you might have better luck on the 
 storage-discuss

I posted to storage-discuss a little while ago. I'm not even sure 
why I posted here in the first place; storage-discuss would be a much 
better idea.

Thanks

Matt



___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] are these errors dangerous

2008-08-02 Thread Matt Harrison
Hi everyone,

I've been running a zfs fileserver for about a month now (on snv_91) and 
it's all working really well. I'm scrubbing once a week and nothing has 
come up as a problem yet.

I'm a little worried as I've just noticed these messages in 
/var/adm/messages and I don't know if they're bad or just informational:

Aug  2 14:46:06 exodus  Error for Command: read_defect_data   Error 
Level: Informational
Aug  2 14:46:06 exodus scsi: [ID 107833 kern.notice]Requested Block: 
0 Error Block: 0
Aug  2 14:46:06 exodus scsi: [ID 107833 kern.notice]Vendor: ATA 
Serial Number:
Aug  2 14:46:06 exodus scsi: [ID 107833 kern.notice]Sense Key: 
Illegal_Request
Aug  2 14:46:06 exodus scsi: [ID 107833 kern.notice]ASC: 0x20 
(invalid command operation code), ASCQ: 0x0, FRU: 0x0
Aug  2 14:46:06 exodus scsi: [ID 107833 kern.warning] WARNING: 
/[EMAIL PROTECTED],0/pci1043,[EMAIL PROTECTED]/[EMAIL PROTECTED],0 (sd0):
Aug  2 14:46:06 exodus  Error for Command: log_sense   Error 
Level: Informational
Aug  2 14:46:06 exodus scsi: [ID 107833 kern.notice]Requested Block: 
0 Error Block: 0
Aug  2 14:46:06 exodus scsi: [ID 107833 kern.notice]Vendor: ATA 
Serial Number:
Aug  2 14:46:06 exodus scsi: [ID 107833 kern.notice]Sense Key: 
Illegal_Request
Aug  2 14:46:06 exodus scsi: [ID 107833 kern.notice]ASC: 0x24 
(invalid field in cdb), ASCQ: 0x0, FRU: 0x0
Aug  2 14:46:06 exodus scsi: [ID 107833 kern.warning] WARNING: 
/[EMAIL PROTECTED],0/pci1043,[EMAIL PROTECTED]/[EMAIL PROTECTED],0 (sd0):
Aug  2 14:46:06 exodus  Error for Command: mode_sense  Error 
Level: Informational
Aug  2 14:46:06 exodus scsi: [ID 107833 kern.notice]Requested Block: 
0 Error Block: 0
Aug  2 14:46:06 exodus scsi: [ID 107833 kern.notice]Vendor: ATA 
Serial Number:
Aug  2 14:46:06 exodus scsi: [ID 107833 kern.notice]Sense Key: 
Illegal_Request
Aug  2 14:46:06 exodus scsi: [ID 107833 kern.notice]ASC: 0x24 
(invalid field in cdb), ASCQ: 0x0, FRU: 0x0
Aug  2 14:46:06 exodus scsi: [ID 107833 kern.warning] WARNING: 
/[EMAIL PROTECTED],0/pci1043,[EMAIL PROTECTED]/[EMAIL PROTECTED],0 (sd0):
Aug  2 14:46:06 exodus  Error for Command: mode_sense  Error 
Level: Informational
Aug  2 14:46:06 exodus scsi: [ID 107833 kern.notice]Requested Block: 
0 Error Block: 0
Aug  2 14:46:06 exodus scsi: [ID 107833 kern.notice]Vendor: ATA 
Serial Number:
Aug  2 14:46:06 exodus scsi: [ID 107833 kern.notice]Sense Key: 
Illegal_Request
Aug  2 14:46:06 exodus scsi: [ID 107833 kern.notice]ASC: 0x24 
(invalid field in cdb), ASCQ: 0x0, FRU: 0x0

Any insights would be greatly appreciated.

Thanks

Matt



___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] are these errors dangerous

2008-08-02 Thread Matt Harrison
Ross wrote:
 What does zpool status say?

zpool status says everything's fine. I've run another scrub and it hasn't 
found any errors, so can I just consider this harmless? It's filling up 
my log quickly though.

thanks

Matt



___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] are these errors dangerous

2008-08-02 Thread Matt Harrison
Matt Harrison wrote:
 Ross wrote:
 What does zpool status say?
 
 zpool status says everythings fine, i've run another scrub and it hasn't 
 found any errors, so can i just consider this harmless? its filling up 
 my log quickly though
 

I've just checked past logs and I'm getting up to about 250MB of these 
messages each week. If this is not a harmful error, is there any way to 
mute this particular message? I'd rather not be accumulating such large 
logs without good reason.

thanks

Matt




___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] The best motherboard for a home ZFS fileserver

2008-07-23 Thread Matt Harrison

Steve wrote:
| I'm a fan of ZFS since I've read about it last year.
|
| Now I'm on the way to build a home fileserver and I'm thinking to go
with Opensolaris and eventually ZFS!!
|
| Apart from the other components, the main problem is to choose the
motherboard. The offer is incredibly high and I'm lost.
|
| Minimum requisites should be:
| - working well with Open Solaris ;-)
| - micro ATX (I would put in a little case)
| - low power consumption but more important reliable (!)
| - with Gigabit ethernet
| - 4+ (even better 6+) sata 3gb controller
|
| Also: what type of RAM to select toghether? (I would chose if good
ECC, but the rest?)
|
| Does it make sense? What are the possibilities?
|

I have just set up a home fileserver with ZFS on OpenSolaris. I used some
posts from a blog to choose my hardware and eventually went with exactly
the same as the author. I can confirm that after 3 months of running
there hasn't been even a hint of a problem with the hardware choice.

You can see the hardware post here

http://breden.org.uk/2008/03/02/home-fileserver-zfs-hardware/

Hope this helps you decide a bit more easily.

Matt



___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] iostat and monitoring

2008-07-05 Thread Matt Harrison
Hi gurus,

I like zpool iostat and I like system monitoring, so I set up a script 
within SMA to let me get the zpool iostat figures through SNMP.

The problem is that, as zpool iostat is only run once for each SNMP 
query, it always reports a static set of figures, like so:

[EMAIL PROTECTED]:snmp # zpool iostat -v
                 capacity     operations    bandwidth
pool           used  avail   read  write   read  write
------------  -----  -----  -----  -----  -----  -----
tank           443G  1.60T      4      4   461K   467K
  raidz1       443G  1.60T      4      4   461K   467K
    c1t0d0        -      -      1      2   227K   234K
    c1t1d0        -      -      1      2   228K   234K
    c2t0d0        -      -      1      2   227K   234K
------------  -----  -----  -----  -----  -----  -----

Whereas if I run it with an interval, the figures even out after a few 
seconds. What I'm wondering is: is there any way to get iostat to report 
accurate figures from a one-time invocation?

Alternatively, is there a better way to get read/write ops etc. from my 
pool for monitoring applications?

I would really love it if monitoring ZFS pools from SNMP were better all 
round, but I'm not going to reel off my wish list here at this point ;)
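
A minimal workaround sketch in the meantime (untested here): take two 
samples over an interval and keep only the second, since the first is 
just the since-boot average:

  # First block = averages since boot, second block = activity over the
  # last 10 seconds; a monitoring script would parse only the last block
  zpool iostat -v tank 10 2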

Thanks

Matt
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] iostat and monitoring

2008-07-05 Thread Matt Harrison
Mike Gerdts wrote:
 $ kstat -p ::vopstats_zfs:{nread,read_bytes,nwrite,write_bytes}
 unix:0:vopstats_zfs:nread 418787
 unix:0:vopstats_zfs:read_bytes612076305
 unix:0:vopstats_zfs:nwrite163544
 unix:0:vopstats_zfs:write_bytes   255725992

Thanks Mike, that's exactly what I was looking for. I can work my way 
around the other SNMP problems, like not reporting total space on a ZFS 
filesystem :)
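
In case it helps anyone else wiring this up, a minimal sketch of a wrapper 
script the SNMP agent (or cron) could call; the kstat names are the ones 
above, everything else is an assumption:

#!/bin/ksh
# Emit the four ZFS vnode-op counters, one per line, in a fixed order,
# suitable for something like Net-SNMP's exec/extend hooks.
for stat in nread read_bytes nwrite write_bytes; do
    kstat -p ::vopstats_zfs:${stat} | awk '{print $2}'
done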

Thanks

Matt
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] planning for upgrades

2008-06-28 Thread Matt Harrison
Hi gurus,

Just wanted some input on this for the day when an upgrade is necessary.

Let's say I have a simple pool made up of 3 750GB SATA disks in raidz1, 
giving around 1.3TB usable space. If we wanted to upgrade the disks, 
what is the accepted procedure? There are 6 SATA ports in the machine in 
question, so we can just add the 3 upgraded disks, but what is the 
recommended procedure to re-create or migrate the pool to the new disks?

thanks

Matt
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] planning for upgrades

2008-06-28 Thread Matt Harrison

Tomas Ögren wrote:
| On 28 June, 2008 - Matt Harrison sent me these 0,6K bytes:
|
| Hi gurus,
|
| Just wanted some input on this for the day when an upgrade is necessary.
|
| Lets say I have simple pool made up of 3 750gb SATA disks in raidz1,
| giving around 1.3tb usable space. If we wanted to upgrade the disks,
| what is the accepted procedure? There are 6 SATA ports in the machine in
| question, so we can just add the 3 upraded disks, but what is the
| recommended procedure to re-create or migrate the pool to the new disks?
|
| Currently, you can either replace the individual disks to upgrade 3x750
| to 3x2TB or whatever disks you buy next.. that requires no extra ports..
| Or you can just add a 3x2TB raidz1 along with the 3x750.. Unless you
| want to rebuild it into a 6 disk raidz(1/2), there's not much need to
| re-create the pool.. But that will require sufficient storage somewhere
| else during the time..

Thanks for the reply,

I'd rather keep the pool to 3 disks if possible, so I can keep the
option of adding 3 more disks, whether it be for backup or recovery
purposes later.

If I were to add, for example, a 3x2TB raidz1 alongside the 3x750GB, what
is the best way to transfer things from one pool to the other, minimising
time expended and of course taking full advantage of the new
capacities?
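
For what it's worth, a minimal sketch of one pool-to-pool copy approach, 
assuming the new pool is called tank2 and the build is recent enough to 
have recursive send streams (zfs send -R):

  # Snapshot everything in the old pool at a single point in time
  zfs snapshot -r tank@migrate

  # Replicate the whole hierarchy (properties included) into the new pool
  zfs send -R tank@migrate | zfs receive -d -F tank2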

thanks

--
Matt Harrison
[EMAIL PROTECTED]
http://mattharrison.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] planning for upgrades

2008-06-28 Thread Matt Harrison

Matt Harrison wrote:
| Tomas Ögren wrote:
| | On 28 June, 2008 - Matt Harrison sent me these 0,6K bytes:
| |
| | Hi gurus,
| |
| | Just wanted some input on this for the day when an upgrade is
necessary.
| |
| | Lets say I have simple pool made up of 3 750gb SATA disks in raidz1,
| | giving around 1.3tb usable space. If we wanted to upgrade the disks,
| | what is the accepted procedure? There are 6 SATA ports in the
machine in
| | question, so we can just add the 3 upraded disks, but what is the
| | recommended procedure to re-create or migrate the pool to the new
disks?
| |
| | Currently, you can either replace the individual disks to upgrade 3x750
| | to 3x2TB or whatever disks you buy next.. that requires no extra ports..
| | Or you can just add a 3x2TB raidz1 along with the 3x750.. Unless you
| | want to rebuild it into a 6 disk raidz(1/2), there's not much need to
| | re-create the pool.. But that will require sufficient storage somewhere
| | else during the time..
|
| Thanks for the reply,
|
| I'd rather keep the pool to 3 disks if possible, so I can keep the
| option of adding 3 more disks, whether it be for backup or recovery
| purposes later.
|
| If i were to add, for example a 3x2TB raidz1 alongside the 3x750GB, what
| is the best way to transfer things from one pool to the other?
| Minimising time expended and of course taking full advantage of the new
| capacities.

I seem to have overlooked the first part of your reply: I can just
replace the disks one at a time, and of course the pool would rebuild
itself onto the new disk. Would this automatically extend the size of
the pool once all 3 disks are replaced?

thanks

--
Matt Harrison
[EMAIL PROTECTED]
http://mattharrison.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] planning for upgrades

2008-06-28 Thread Matt Harrison

James C. McPherson wrote:
| Matt Harrison wrote:
| 
| I seem to have overlooked the first part of your reply, I can just
| replace the disks one at a time, and of course the pool would rebuild
| itself onto the new disk. Would this automatically extend the size of
| the pool once all 3 disks are replaced?
|
| Yes - once the resilvering has finished on the final
| disk replacement. I used exactly this process to increase
| my poolsize from 2x200G to 2x320G disks last year. It
| was easy - so easy that I wondered what I had forgotten
| to do ... but nope, nothing - It Just Works(tm).

Excellent, that sounds great. I'm almost looking forward to an expansion
in a year or so ;)
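
For the archives, a minimal sketch of that replace-in-place sequence (the 
device names are made up, and the autoexpand property only exists on later 
builds):

  # Replace one disk at a time and let each resilver finish before the next
  zpool replace tank c1t0d0 c2t0d0
  zpool status tank     # repeat for the other two disks once resilvered

  # On builds that have the property, let the pool grow to the new disk size
  zpool set autoexpand=on tank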

thanks

--
Matt Harrison
[EMAIL PROTECTED]
http://mattharrison.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] sharesmb on multiple fs

2008-06-25 Thread Matt Harrison

I've got a pool, and a fs called public. Public should contain various
directories which I would like to keep on separate zfs's for backup reasons.

Public should be shared so users can map that as a drive in Windows XP.
The problem is that the zfs's under public cannot be accessed when
public is mapped. I don't want to have to map each section of public to
a separate drive letter.

Is this a limitation that I cannot access a fs inside a share or is
there something up with how I'm trying to do it?

If this is a restriction and I really can't access individual zfs's
under one share, I guess I will have to have 6 network drives instead of
one, but this will of course confuse the users no end.

Thanks

--
Matt Harrison
[EMAIL PROTECTED]
http://mattharrison.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] [cifs-discuss] sharesmb on multiple fs

2008-06-25 Thread Matt Harrison

Afshin Salek wrote:
| Your terminology is a bit confusing for me, so:

Sorry i should have worded this much better,

| you have 1 pool (zpool create)
| you have a FS called public (zfs create?)
|
| what do you mean by keep on separate zfs's? You
| mean ZFS snapshot?

Ok, I'll start again:

I have a pool: zpool create tank [...]

Then I made a filesystem: zfs create [...] tank/public

Now I want to keep the sections of public separate, i.e. on individual 
filesystems.

So I do: zfs create [...] tank/public/audio

The problem is that if public is shared via SMB, the user is unable to 
access audio. It seems that if a filesystem is shared, its child 
filesystems are not accessible as they would be if they were just 
subdirectories.

So I can do cd /tank/public; mkdir audio, which gives users access to 
public/audio via the public share, but it doesn't allow detailed 
management of audio as it would with individual filesystems.
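
(A hedged sketch of the per-filesystem workaround in that last paragraph, 
assuming the SMB service is online; the share names here are made up:

  # Give each child its own SMB share instead of relying on it appearing
  # under the public share
  zfs create -o sharesmb=name=audio tank/public/audio
  zfs create -o sharesmb=name=video tank/public/video

  # Check what actually got shared
  sharemgr show -vp
)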

I hope this is a better explanation,

Thanks

--
Matt Harrison
[EMAIL PROTECTED]
http://mattharrison.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss