[zfs-discuss] zfs+nfs: scary nfs log entries?

2009-08-19 Thread Blake Irvin
I have a zfs dataset that I use for network home directories.  The box is 
running 2008.11 with the auto-snapshot service enabled.  To help debug some 
mysterious file deletion issues, I've enabled nfs logging (all my clients are 
NFSv3 Linux boxes).

I keep seeing lines like this in the nfslog:
Wed Aug 19 10:20:48 2009 0 host.name.domain.com 1168 
zfs-auto-snap.hourly-2009-08-17-09.00/username/incoming.file b _ i r 0 nfs 0 *

Why is my path showing up with the name of a snapshot?  This scares me since 
snapshots get rolled off automatically...on the other hand, I know that the 
snapshots are read-only.  Any insights?

Blake
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] [storage-discuss] Supermicro SAS/SATA controllers?

2009-04-15 Thread Blake Irvin


On Apr 15, 2009, at 8:28 AM, Nicholas Lee emptysa...@gmail.com wrote:




On Tue, Apr 14, 2009 at 5:57 AM, Will Murnane  
will.murn...@gmail.com wrote:


 Has anyone done any specific testing with SSD devices and solaris other
 than the FISHWORKS stuff?  Which is better for what - SLC and MLC?
My impression is that the flash controllers make a much bigger
difference than the type of flash inside.  You should take a look at
AnandTech's review of the new OCZ Vertex drives [1], which has a
fairly comprehensive set of benchmarks.  I don't think any of the
products they review are really optimal choices, though; the Intel
X25-E drives look good until you see the price tag, and even they only
do 30-odd MB/s random writes.


A couple of excellent articles about SSDs from AnandTech last month:
http://www.anandtech.com/showdoc.aspx?i=3532 - SSD versus Enterprise  
SAS and SATA disks (20/3/09)
http://www.anandtech.com/storage/showdoc.aspx?i=3531 - The SSD  
Anthology: Understanding SSDs and New Drives from OCZ (18/3/09)


And it looks like the Intel fragmentation issue is fixed as well: 
http://techreport.com/discussions.x/16739

It's a shame the Sun Writezilla devices are almost 10k USD - it seems
they are the only units on the market that (apart from cost) work
well all around as slog devices - form factor, interface/drivers
and performance.
What about the new flash drives Andy was showing off in Vegas?  Those  
looked small (capacity) - perhaps cheap too?






How much of an issue does the random write bandwidth limit have on a  
slog device? What about latency?  I would have thought the write  
traffic pattern for slog io was more sequential and bursty.



Nicholas
___
storage-discuss mailing list
storage-disc...@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/storage-discuss
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] reboot when copying large amounts of data

2009-03-13 Thread Blake Irvin
This is really great information, though most of the controllers  
mentioned aren't on the OpenSolaris HCL.  Seems like that should be  
corrected :)


My thanks to the community for their support.

On Mar 12, 2009, at 10:42 PM, James C. McPherson james.mcpher...@sun.com 
 wrote:



On Thu, 12 Mar 2009 22:24:12 -0400
Miles Nordin car...@ivy.net wrote:


wm == Will Murnane will.murn...@gmail.com writes:



* SR = Software RAID, IT = Integrated Target mode. IR mode
is not supported.

   wm Integrated target mode lets you export some storage attached
   wm to the host system (through another adapter, presumably) as a
   wm storage device.  IR mode is almost certainly Internal RAID,
   wm which that card doesn't have support for.

no, the supermicro page for AOC-USAS-L8i does claim support for all
three, and supermicro has an ``IR driver'' available for download for
Linux and Windows, or at least a link to one.

I'm trying to figure out what's involved in determining and switching
modes, why you'd want to switch them, what cards support which modes,
which solaris drivers support which modes, etc.

The answer may be very simple, like ``the driver supports only IR.
Most cards support IR, and cards that don't support IR won't work.  IR
can run in single-LUN mode.  Some IR cards support RAID5, others
support only RAID 0, 1, 10.''  Or it could be ``the driver supports
only SR.  The driver is what determines the mode, and it does this by
loading firmware into the card, and the first step in initializing the
card is always for the driver to load in a firmware blob.  All
currently-produced cards support SR.''  so...actually, now that I say
it, I guess the answer cannot be very simple.  It's going to have to
be a little complicated.
Anyway, I can guess, too.  I was hoping someone would know for sure
off-hand.



Hi Miles,
the mpt(7D) driver supports that card. mpt(7D) supports both
IT and IR firmware variants. You can find out the specifics
for what RAID volume levels are supported by reading the
raidctl(1M) manpage. I don't think you can switch between IT
and IR firmware, but not having needed to know this before,
I haven't tried it.
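
As a rough sketch only (the disk names below are hypothetical, and exact
options vary by release and controller - see raidctl(1M)):

  # list any hardware RAID volumes the controller currently presents
  raidctl
  # create a RAID 1 volume from two disks on an IR-capable controller
  raidctl -c c1t1d0 c1t2d0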


James C. McPherson
--
Senior Kernel Software Engineer, Solaris
Sun Microsystems
http://blogs.sun.com/jmcp
http://www.jmcp.homeunix.com/blog
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Can VirtualBox run a 64 bit guests on 32 bit host

2009-02-28 Thread Blake Irvin

Check out http://www.sun.com/bigadmin/hcl/data/os


Sent from my iPhone

On Feb 28, 2009, at 2:20 AM, Harry Putnam rea...@newsguy.com wrote:


Brian Hechinger wo...@4amlunch.net writes:

[...]


I think it would be better to answer this question than it would be to
attempt to answer the VirtualBox question (I run it on a 64-bit OS,
so I can't really answer that anyway).


Thanks - yes, and appreciated here.


The benefit to running ZFS on a 64-bit OS is if you have a large
amount of RAM.  I don't know what the breaking point is, but I can
definitely tell you that a 32-bit kernel and 4GB ram doesn't mix
well.  If all you are doing is testing ZFS on VMs you probably
aren't all that worried about performance so it really shouldn't be
an issue for you to run 32-bit.  I'd say keep your RAM allocations
down, and I wish I knew what to tell you to keep it under.
Hopefully someone who has a better grasp of all that can chime in.

Once you put it on real hardware, however, you really want a 64-bit
CPU and as much RAM as you can toss at the machine.
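
A quick way to confirm what you're actually running (a minimal sketch
using stock Solaris commands):

  # show the instruction set the running kernel uses (64-bit vs 32-bit)
  isainfo -kv
  # show how much physical memory the system sees
  prtconf | grep -i memory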



Sounds sensible, thanks for common sense input.

Just from the little I've tinkered with zfs so far, I'm in love already. zfs
is much more responsive for some kinds of things I'm used to waiting for
on linux reiserfs.

Commands like du, mv, rm etc on hefty amounts of data are always slow
as molasses on linux/reiserfs (and reiserfs is faster than ext3).  I
haven't tried ext4 but have been told it is no faster.

Whereas zfs gets those jobs done in short order... very noticeably
faster, though I am just going by feel, but at least on very similar
hardware (cpu wise). (The linux box is an Intel 3.06 celeron with 2gb ram)

I guess there is something called btrfs (nicknamed butter fs) that is
supposed to be linux's answer to zfs, but it isn't ready for primetime
yet, and I'd say it will have a ways to go to compare to zfs.

My usage and skill level is probably the lowest on this list, easily,
but even I see some real nice features with zfs.  It seems tailor-made
for a semi-ambitious home NAS.

So Brian, if you can bear with my windiness a bit more, one of the
things flopping around in the back of my mind is something already
mentioned here too: change out the mobo instead of dinking around
with an add-on pci sata controller.

I have 64 bit hardware... but am a bit scared of having lots of
trouble getting opensol to run peacefully on it.  It's a (somewhat
old-fashioned now) athlon64 2.2 ghz +3400/Aopen AK86-L mobo (socket 754).

The little java tool that tests the hardware says my sata controller
won't work (the testing tool saw it as a VIA raid controller) and
suggests I turn off RAID in the bios.

After a careful look in the bios menus I'm not finding any way to
turn it off, so I'm guessing the sata ports will be useless unless I
install a pci add-on sata controller.

So I'm thinking of just changing out the mobo for something with stuff
that is known to work.

The machine came with an Asus mobo that I ruined myself by dicking
around installing RAM... somehow shorted out something, and then the
mobo became useless.

But I'm thinking of turning to Asus again and making sure there is
onboard SATA with at least 4 ports and preferably 6.

So, cutting to the chase here... would you happen to have a
recommendation from your own experience, or something you've heard
will work and that can take more ram?  My current setup tops out at
3gb.

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] GSoC 09 zfs ideas?

2009-02-28 Thread Blake Irvin

Shrinking pools would also solve the right-sizing dilemma.

Sent from my iPhone

On Feb 28, 2009, at 3:37 AM, Joe Esposito j...@j-espo.com wrote:


I'm using opensolaris and zfs at my house for my photography storage
as well as for an offsite backup location for my employer and several
side web projects.

I have an 80g drive as my root drive.  I recently took possession of 2
74g 10k drives which I'd love to add as a mirror to replace the 80g
drive.

From what I gather it is only possible if I zfs export my storage
array and reinstall solaris on the new disks.

So I guess I'm hoping zfs shrink and grow commands show up sooner or  
later.


Just a data point.

Joe Esposito
www.j-espo.com

On 2/28/09, C. Bergström cbergst...@netsyncro.com wrote:

Blake wrote:

Gnome GUI for desktop ZFS administration



On Fri, Feb 27, 2009 at 9:13 PM, Blake blake.ir...@gmail.com  
wrote:



zfs send is great for moving a filesystem with lots of tiny files,
since it just handles the blocks :)
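
A minimal sketch of that kind of whole-filesystem move (the pool and
filesystem names here are made up):

  # snapshot the source, then stream the filesystem to another pool
  zfs snapshot tank/data@move
  zfs send tank/data@move | zfs receive newpool/data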



I'd like to see:

pool-shrinking (and an option to shrink disk A when i want disk B to
become a mirror, but A is a few blocks bigger)

This may be interesting... I'm not sure how often you need to shrink a
pool though?  Could this be classified more as a Home or SME level feature?

install to mirror from the liveCD gui


I'm not working on OpenSolaris at all, but for when my project's
installer is more ready /we/ can certainly do this..

zfs recovery tools (sometimes bad things happen)

Agreed.. part of what I think keeps zfs so stable though is the complete
lack of dependence on any recovery tools..  It forces customers to bring
up the issue instead of a dirty hack that nobody knows about.

automated installgrub when mirroring an rpool


This goes back to an installer option?

./C

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] zpool status -x strangeness

2009-01-25 Thread Blake Irvin
You can upgrade live.  'zfs upgrade' with no arguments shows you the  
zfs version status of filesystems present without upgrading.
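
A rough sketch of the difference (the pool name is hypothetical):

  # report versions without changing anything
  zfs upgrade           # filesystem versions
  zpool upgrade         # pool versions
  # actually upgrade a pool, then its filesystems
  zpool upgrade tank
  zfs upgrade -r tank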



On Jan 24, 2009, at 10:19 AM, Ben Miller mil...@eecis.udel.edu wrote:

 We haven't done 'zfs upgrade ...' any.  I'll give that a try the  
 next time the system can be taken down.

 Ben

 A little gotcha that I found in my 10u6 update
 process was that 'zpool
 upgrade [poolname]' is not the same as 'zfs upgrade
 [poolname]/[filesystem(s)]'

 What does 'zfs upgrade' say?  I'm not saying this is
 the source of
 your problem, but it's a detail that seemed to affect
 stability for
 me.


 On Thu, Jan 22, 2009 at 7:25 AM, Ben Miller
 The pools are upgraded to version 10.  Also, this
 is on Solaris 10u6.

 -- 
 This message posted from opensolaris.org
 ___
 zfs-discuss mailing list
 zfs-discuss@opensolaris.org
 http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Raidz1 p

2009-01-20 Thread Blake Irvin
I would in this case also immediately export the pool (to prevent any  
write attempts) and see about a firmware update for the failed drive  
(probably need windows for this).

Sent from my iPhone

On Jan 20, 2009, at 3:22 AM, zfs user zf...@itsbeen.sent.com wrote:

 I would get a new 1.5 TB and make sure it has the new firmware and  
 replace
 c6t3d0 right away - even if someone here comes up with a magic  
 solution, you
 don't want to wait for another drive to fail.

 http://hardware.slashdot.org/article.pl?sid=09/01/17/0115207
 http://techreport.com/discussions.x/15863


 Brad Hill wrote:
 Sure, and thanks for the quick reply.

 Controller: Supermicro AOC-SAT2-MV8 plugged into a 64-bit PCI-X 133
 bus
 Drives: 5 x Seagate 7200.11 1.5TB disks for the raidz1.
 Single 36GB western digital 10krpm raptor as system disk. Mate for  
 this is in but not yet mirrored.
 Motherboard: Tyan Thunder K8W S2885 (Dual AMD CPU) with 1GB ECC Ram

 Anything else I can provide?

 (thanks again)
 ___
 zfs-discuss mailing list
 zfs-discuss@opensolaris.org
 http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] How to diagnose zfs - iscsi - nfs hang

2008-12-03 Thread Blake Irvin
I'm having a very similar issue.  Just updated to 10 u6 and upgraded my zpools.
They are fine (all 3-way mirrors), but I've lost the machine around 12:30am two
nights in a row.

I'm booting ZFS root pools, if that makes any difference.

I also don't see anything in dmesg, nothing on the console either.

I'm going to go back to the logs today to see what was going on around midnight 
on these occasions.  I know there are some built-in cronjobs that run around 
that time - perhaps one of them is the culprit.

What I'd really like is a way to force a core dump when the machine hangs like 
this.  scat is a very nifty tool for debugging such things - but I'm not 
getting a core or panic or anything :(
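
One approach I'm considering is a forced panic from the kernel debugger -
a sketch only, and it assumes the box still honors the console abort
sequence when it hangs:

  # confirm a dump device and savecore directory are configured
  dumpadm
  # boot with kmdb loaded (add -k to the kernel line in GRUB); on a hang,
  # press F1-A on the console to drop into kmdb and force a panic/dump:
  #   $<systemdump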
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] How to diagnose zfs - iscsi - nfs hang

2008-12-03 Thread Blake Irvin
I am directly on the console.  cde-login is disabled, so i'm dealing with 
direct entry.

 Are you directly on the console, or is the console on a serial port?  If
 you are running over X windows, the input might still get in, but X may
 not be displaying.  If keyboard input is not getting in, your machine is
 probably wedged at a high level interrupt, which sounds doubtful based
 on your problem description.
Out of curiosity, why do you say that?  I'm no expert on interrupts, so I'm 
curious.  It DOES seem that keyboard entry is ignored in this situation, since 
I see no results from ctrl-c, for example (I had left the console running 'tail 
-f /var/adm/messages').  I'm not saying you are wrong, but if I should be 
examining interrupt issues, I'd like to know (I have 3 hard disk controllers in 
the box, for example...)
   
 If the deadman timer does not trigger, the clock is almost certainly
 running, and your machine is almost certainly accepting keyboard input.
That's good to know.  I just enabled deadman after the last freeze, so it will 
be a bit before I can test this (hope I don't have to).
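
For reference, the deadman setting I added is roughly the following
/etc/system entry (a sketch only; it needs a reboot to take effect):

  * enable the kernel deadman timer so a stopped clock panics the box
  set snooping=1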

thanks!
Blake

 
 Good luck,
 max
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] rsync using 100% of a cpu

2008-12-01 Thread Blake Irvin

Upstream when using DSL is much slower than downstream?

Blake

On Dec 1, 2008, at 7:42 PM, Francois Dion [EMAIL PROTECTED]  
wrote:


Source is local to rsync, copying from a zfs file system,  
destination is remote over a dsl connection. Takes forever to just  
go through the unchanged files. Going the other way is not a  
problem, it takes a fraction of the time. Anybody seen that?  
Suggestions?



___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Is SUNWhd for Thumper only?

2008-12-01 Thread Blake Irvin
I've used that tool only with the Marvell chipset that ships with the  
thumpers.  (in a supermicro hba)

Have you looked at cfgadm?
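
A minimal sketch of what I'd check first (device names and output will
differ on your box):

  # list attachment points; sata ports show up with their occupant state
  cfgadm -al
  # e.g. sata1/0::dsk/c3t0d0  disk  connected  configured  ok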

Blake

On Dec 1, 2008, at 7:49 PM, [EMAIL PROTECTED] wrote:

 (http://cuddletech.com/blog/pivot/entry.php?id=993). Will the SUNWhd

 can't dump all SMART data, but get some temps on a generic box..

 4 % hd -a
                                                           fdisk
 Device    Serial  Vendor    Model             Rev   Temperature    Type
 --------  ------  --------  ----------------  ----  -------------  --------
 c3t0d0p0          ATA       ST3750640AS       K     255 C (491 F)  EFI
 c3t1d0p0          ATA       ST3750640AS       K     255 C (491 F)  EFI
 c3t2d0p0          ATA       ST3750640AS       K     255 C (491 F)  EFI
 c3t4d0p0          ATA       ST3750640AS       K     255 C (491 F)  EFI
 c3t5d0p0          ATA       ST3750640AS       K     255 C (491 F)  EFI
 c4t0d0p0          ATA       WDC WD1001FALS-0  0K05  43 C (109 F)   EFI
 c4t1d0p0          ATA       WDC WD1001FALS-0  0K05  43 C (109 F)   EFI
 c4t2d0p0          ATA       WDC WD1001FALS-0  0K05  43 C (109 F)   EFI
 c4t4d0p0          ATA       WDC WD1001FALS-0  0K05  42 C (107 F)   EFI
 c4t5d0p0          ATA       WDC WD1001FALS-0  0K05  43 C (109 F)   EFI
 c5t0d0p0          TSSTcorp  CD/DVDW SH-S162A  TS02  None           None
 c5t1d0p0          ATA       WDC WD3200JD-00K  5J08  0 C (32 F)     Solaris2
 c5t2d0p0          ATA       WDC WD3200JD-00K  5J08  0 C (32 F)     Solaris2
 c5t3d0p0          ATA       WDC WD3200JD-00K  5J08  0 C (32 F)     Solaris2
 c5t4d0p0          ATA       WDC WD3200JD-00K  5J08  0 C (32 F)     Solaris2
 c5t5d0p0          ATA       WDC WD3200JD-00K  5J08  0 C (32 F)     Solaris2

 Do you know of a solaris tool to get SMART data?

Rob

 ___
 zfs-discuss mailing list
 zfs-discuss@opensolaris.org
 http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] resilver being killed by 'zpool status' when root

2008-10-22 Thread Blake Irvin
As jritorto is noting, I think the issue here is whether the fix has been 
backported to Solaris 10 5/08 or 10/08.  It's a nasty problem to run into on a 
production machine.  In my case, I'm restoring from tape because my pool went 
corrupt waiting for resilvers to finish which were getting killed by a 
crontabbed reporting script.  Looks like a perfect time for a backport.
--
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Booting 0811 from USB Stick

2008-10-22 Thread Blake Irvin
did you follow the instructions for updating grub after the image-update:

http://opensolaris.org/jive/thread.jspa?messageID=277115&tstart=0
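
If not, the step looks roughly like this (the slice below is hypothetical -
use the slice your root pool actually lives on):

  # reinstall the grub boot blocks after the image-update
  installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c0t0d0s0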
--
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] resilver being killed by 'zpool status' when root

2008-10-21 Thread Blake Irvin
I've confirmed the problem with automatic resilvers as well.  I will see about 
submitting a bug.
--
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] resilver being killed by 'zpool status' when root

2008-10-21 Thread Blake Irvin
Looks like there is a closed bug for this:

http://bugs.opensolaris.org/view_bug.do?bug_id=6655927

It's been closed as 'not reproducible', but I can reproduce consistently on Sol 
10 5/08.  How can I re-open this bug?

I'm using a pair of Supermicro AOC-SAT2-MV8 on a fully patched install of 
Solaris 10 5/08, with a 9-disk raidz2 pool.

The motherboard is a Supermicro H8DM8-2.
--
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] scrub restart patch status..

2008-10-13 Thread Blake Irvin
I'm also very interested in this.  I'm having a lot of pain with status 
requests killing my resilvers.  In the example below I was trying to test to 
see if timf's auto-snapshot service was killing my resilver, only to find that 
calling zpool status seems to be the issue:

[EMAIL PROTECTED] ~]# env LC_ALL=C zpool status $POOL | grep "in progress"
 scrub: resilver in progress, 0.26% done, 35h4m to go

[EMAIL PROTECTED] ~]# env LC_ALL=C zpool status $POOL  
  pool: pit
 state: DEGRADED
status: One or more devices could not be opened.  Sufficient replicas exist for
the pool to continue functioning in a degraded state.
action: Attach the missing device and online it using 'zpool online'.
   see: http://www.sun.com/msg/ZFS-8000-D3
 scrub: resilver in progress, 0.00% done, 484h39m to go
config:

NAME  STATE READ WRITE CKSUM
pit   DEGRADED 0 0 0
  raidz2  DEGRADED 0 0 0
c2t0d0ONLINE   0 0 0
c3t0d0ONLINE   0 0 0
c3t1d0ONLINE   0 0 0
c2t1d0ONLINE   0 0 0
spare DEGRADED 0 0 0
  c3t3d0  UNAVAIL  0 0 0  cannot open
  c3t7d0  ONLINE   0 0 0
c2t2d0ONLINE   0 0 0
c3t5d0ONLINE   0 0 0
c3t6d0ONLINE   0 0 0
spares
  c3t7d0  INUSE currently in use

errors: No known data errors
--
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] scrub restart patch status..

2008-10-13 Thread blake . irvin
Correct, that is a workaround.  The fact that I use the beta (alpha?)
zfs auto-snapshot service means that when the service checks for active
scrubs, it kills the resilver.

I think I will talk to Tim about modifying his method script to run
the scrub check with least privileges (ie, not as root).


On 10/13/08, Richard Elling [EMAIL PROTECTED] wrote:
 Blake Irvin wrote:
 I'm also very interested in this.  I'm having a lot of pain with status
 requests killing my resilvers.  In the example below I was trying to test
 to see if timf's auto-snapshot service was killing my resilver, only to
 find that calling zpool status seems to be the issue:


 workaround: don't run zpool status as root.
  -- richard

 [EMAIL PROTECTED] ~]# env LC_ALL=C zpool status $POOL | grep "in progress"
  scrub: resilver in progress, 0.26% done, 35h4m to go

 [EMAIL PROTECTED] ~]# env LC_ALL=C zpool status $POOL
   pool: pit
  state: DEGRADED
 status: One or more devices could not be opened.  Sufficient replicas
 exist for
  the pool to continue functioning in a degraded state.
 action: Attach the missing device and online it using 'zpool online'.
see: http://www.sun.com/msg/ZFS-8000-D3
  scrub: resilver in progress, 0.00% done, 484h39m to go
 config:

  NAME  STATE READ WRITE CKSUM
  pit   DEGRADED 0 0 0
raidz2  DEGRADED 0 0 0
  c2t0d0ONLINE   0 0 0
  c3t0d0ONLINE   0 0 0
  c3t1d0ONLINE   0 0 0
  c2t1d0ONLINE   0 0 0
  spare DEGRADED 0 0 0
c3t3d0  UNAVAIL  0 0 0  cannot open
c3t7d0  ONLINE   0 0 0
  c2t2d0ONLINE   0 0 0
  c3t5d0ONLINE   0 0 0
  c3t6d0ONLINE   0 0 0
  spares
c3t7d0  INUSE currently in use

 errors: No known data errors
 --
 This message posted from opensolaris.org
 ___
 zfs-discuss mailing list
 zfs-discuss@opensolaris.org
 http://mail.opensolaris.org/mailman/listinfo/zfs-discuss



___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] making sense of arcstat.pl output

2008-10-01 Thread Blake Irvin
I'm using Neelakanth's arcstat tool to troubleshoot performance problems with a 
ZFS filer we have, sharing home directories to a CentOS frontend Samba box.

Output shows an arc target size of 1G, which I find odd, since I haven't tuned 
the arc, and the system has 4G of RAM.  prstat -a tells me that userland 
processes are only using about 200-300mb of RAM, and even if Solaris is eating 
1GB, that still leaves quite a lot of RAM not being used by the arc.

I would believe that this was due to low workload, but I see that 'arcsz' 
matches 'c', which makes me think the system is hitting a bottleneck/wall of 
some kind.

Any thoughts on further troubleshooting appreciated.
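
For reference, the numbers I'm looking at come straight from the arcstats
kstats (a sketch; values are in bytes):

  # current ARC size, target size, and the ceiling it can grow to
  kstat -p zfs:0:arcstats:size zfs:0:arcstats:c zfs:0:arcstats:c_max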

Blake
--
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] making sense of arcstat.pl output

2008-10-01 Thread Blake Irvin
I think I need to clarify a bit.

I'm wondering why arc size is staying so low, when I have 10 nfs clients and 
about 75 smb clients accessing the store via resharing (on one of the 10 linux 
nfs clients) of the zfs/nfs export.  Or is it normal for the arc target and arc 
size to match? Of note, I didn't see these performance issues until the box had 
been up for about a week, probably enough time for weekly (roughly) windows 
reboots and profile syncs across multiple clients to force the arc to fill.

I have read through and followed the advice in the tuning guide, but still see 
Windows users with roaming profiles getting very slow profile syncs.  This 
makes me think that zfs isn't handling the random i/o generated by a profile 
sync very well.  Well, at least that's what I'm thinking when I see an arc size 
of 1G, there is at least another free gig of memory, and the clients syncing 
more than a gig of data fairly often.

I will return to studying the tuning guide, though, to make sure I've not 
missed some key bit.  It's not unlikely that I'm missing something fundamental 
about how zfs should behave in this scenario.
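
One thing I may still try from the tuning guide is setting the ARC ceiling
explicitly via /etc/system (a sketch - the ~3G value is just an example for
a 4G box, and a reboot is required):

  * cap the ARC at roughly 3 GB
  set zfs:zfs_arc_max=3221225472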

cheers,
Blake
--
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] resilver being killed by 'zpool status' when root

2008-09-24 Thread Blake Irvin
I was doing a manual resilver, not with spares.  I still suspect the issue 
comes from your script running as root, which is common for reporting scripts.
--
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] resilver being killed by 'zpool status' when root

2008-09-23 Thread Blake Irvin
is there a bug for the behavior noted in the subject line of this post?

running 'zpool status' or 'zpool status -xv' during a resilver as a 
non-privileged user has no adverse effect, but if i do the same as root, the 
resilver restarts.

while i'm not running opensolaris here, i feel this is a good forum to post 
this question to.

(my system: SunOS filer1 5.10 Generic_137112-07 i86pc i386 i86pc)

thanks,
blake
--
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] 24-port SATA controller options?

2008-06-30 Thread Blake Irvin
Hmm.  That's kind of sad.  I grabbed the latest Areca drivers and haven't had a 
speck of trouble.  Was the driver revision specified in the docs you read the 
latest one?

Flash boot does seem nice in a way, since Solaris writes to the boot volume so 
seldom on a machine that has enough RAM to avoid swapping.

I think Sun needs to offer something in the 5k range as well :)

Blake
 
 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] 24-port SATA controller options?

2008-06-30 Thread Blake Irvin
This Areca card is Solaris Certified (so says the HCL) and not that expensive:

http://www.sun.com/bigadmin/hcl/data/components/details/1179.html
 
 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] 24-port SATA controller options?

2008-06-27 Thread Blake Irvin
We are currently using the 2-port Areca card SilMech offers for boot, and 2 of 
the Supermicro/Marvell cards for our array.  Silicon Mechanics gave us great 
support and burn-in testing for Solaris 10.  Talk to a sales rep there and I 
don't think you will be disappointed.

cheers,
Blake
 
 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] 24-port SATA controller options?

2008-04-15 Thread Blake Irvin
Truly :)

I was planning something like 3 pools concatenated.  But we are only populating 
12 bays at the moment.

Blake
 
 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] 24-port SATA controller options?

2008-04-14 Thread Blake Irvin
The only supported controller I've found is the Areca ARC-1280ML.  I want to 
put it in one of the 24-disk Supermicro chassis that Silicon Mechanics builds.

Has anyone had success with this card and this kind of chassis/number of drives?

cheers,
Blake
 
 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss