Re: [zfs-discuss] x4500 dead HDD, hung server, unable to boot.

2008-08-10 Thread Frank Leers


On Aug 10, 2008, at 7:26 PM, Jorgen Lundman wrote:




the 'hd' utility on the Tools and Drivers CD produces the attached
output on a Thumper.



Clearly I need to find and install this utility, but even then, that
seems to just add yet another way to number the drives.

The message I get from the kernel is:

/[EMAIL PROTECTED],0/pci1022,[EMAIL PROTECTED]/pci11ab,[EMAIL PROTECTED]/[EMAIL PROTECTED],0 (sd30):

And I need to get the answer 40. The hd output additionally gives me
sdar ?



...yeah, when run on a Thumper that is booted into Linux.  I attached
it to show you the drive positions.  Go get it and run it on your
installation of S10.
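In the meantime, one way to get from the sd instance in that console
message to a cXtYdZ name without hd is to look the instance up in
/etc/path_to_inst and then match the physical path in /dev/dsk. Just a
sketch; the grep patterns below are examples, substitute pieces of your
actual device path:

  # find the physical device path bound to sd instance 30
  grep ' 30 "sd"' /etc/path_to_inst

  # then see which cXtYdZ links point at that path (narrow the pattern
  # down to the disk@ portion to pick out the exact drive)
  ls -l /dev/dsk/*s0 | grep 'pci11ab'

Once you have the cXtY name, the x4500 documentation should give you the
mapping from controller/target to physical slot number.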


-frank

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] x4500 dead HDD, hung server, unable to boot.

2008-08-10 Thread Frank Leers


On Aug 10, 2008, at 8:14 PM, Jorgen Lundman wrote:



Does the SATA controller show any information in its log (if you go
into the controller BIOS, if there is one)?

Seeing more reports of full system hangs from an unresponsive drive
makes me very concerned about bringing an x4500 into our environment  :(



Not that I can see. Rebooting the new x4500 for the 6th time now as it
keeps hanging on IO. (The box is 100% idle, but any IO commands like
zpool/zfs/fmdump etc. will just hang.) I have absolutely no idea why it
hangs now; we have pulled out the replacement drive to see if it stays
up (in case it is a drive channel problem).
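When it hangs like that, it might be worth seeing where the IO is stuck
before rebooting. A rough sketch, assuming the console still responds
and the standard Solaris tools are available:

  # per-device error counters; a dying drive often shows transport errors
  iostat -En

  # watch service times; a hung channel tends to show commands queued
  # against one device (actv/wait climbing) with no throughput
  iostat -xnz 5

  # dump kernel thread stacks and look for your zpool/zfs/fmdump
  # processes parked in zio_wait or similar
  echo "::threadlist -v" | mdb -k > /var/tmp/threads.txt

That at least tells you whether everything is queued behind one device
or behind the whole controller.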

The most disappointing aspect of all this is the incredibly poor
support we have had from our vendor (compared to the NetApp support we
have had in the past). I would have thought that being the biggest ISP
in Japan would make us interesting to Sun, even if just a little bit.

I suspect we are one of the first to try the x4500 here as well.


Nope, Tokyo Tech in your neighborhood has a boatload...50 or so IIRC.
http://www.sun.com/blueprints/0507/820-2187.pdf

Have you opened up a case with Sun?




Anyway, it has almost rebooted, so I need to go remount everything.

Lund

--
Jorgen Lundman   | [EMAIL PROTECTED]
Unix Administrator   | +81 (0)3 -5456-2687 ext 1017 (work)
Shibuya-ku, Tokyo| +81 (0)90-5578-8500  (cell)
Japan| +81 (0)3 -3375-1767  (home)




___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Norco's new storage appliance

2007-10-09 Thread Frank Leers
On Tue, 2007-10-09 at 23:36 +0100, Adam Lindsay wrote:
 Gary Gendel wrote:
  Norco usually uses Silicon Image based SATA controllers. 
 
 Ah, yes, I remember hearing SI SATA multiplexer horror stories when I 
 was researching storage possibilities.
 
 However, I just heard back from Norco:
 
 Thank you for your interest in Norco products.
 Most of the parts used by the DS-520 are chipsets found on common boards.
 For example, we use the Marvell 88SX6081 as the SATA controller.
 The system should function fine with OpenSolaris.
 Please feel free to contact us with any further questions.
 
 That's the Thumper's controller chipset, right? Sounds like very good 
 news to me.
 

Yes, it is.

0b:01.0 SCSI storage controller: Marvell Technology Group Ltd.
MV88SX6081 8-port SATA II PCI-X Controller (rev 09)
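On the Solaris side, a quick way to confirm the same part and that the
marvell88sx driver binds to it would be something like this (a sketch;
run as root on the box in question):

  # list device nodes with their bound drivers and look for the Marvell HBA
  prtconf -D | grep -i marvell

  # confirm the driver module is loaded
  modinfo | grep marvell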



___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Please help! ZFS crash burn in SXCE b70!

2007-08-31 Thread Frank Leers
MC wrote:
 Richard, thanks for the pointer to the tests in
 '/usr/sunvts', as this
 is the first I have heard of them. They look quite
 comprehensive.
 I will give them a trial when I have some free time.
 Thanks
 Nigel Smith

 pmemtest     - Physical Memory Test
 ramtest      - Memory DIMMs (RAM) Test
 vmemtest     - Virtual Memory Test
 cddvdtest    - Optical Disk Drive Test
 cputest      - CPU Test
 disktest     - Disk and Floppy Drives Test
 dtlbtest     - Data Translation Look-aside Buffer Test
 fputest      - Floating Point Unit Test
 l1dcachetest - Level 1 Data Cache Test
 l2sramtest   - Level 2 Cache Test
 netlbtest    - Net Loopback Test
 nettest      - Network Hardware Test
 serialtest   - Serial Port Test
 tapetest     - Tape Drive Test
 usbtest      - USB Device Test
 systest      - System Test
 iobustest    - Test for the IO interconnects and the components on the IO bus on high-end machines
 
 
 That is apparently one of those crazy hidden features in OpenSolaris that I 
 think Indiana should expose :)
  

VTS (Validation Test Suite) has been around for many years, although it
may have been more widely deployed on SPARC hardware.  VTS is Sun
Services' tool of choice when validating hardware.  Manufacturing also
uses the tool suite extensively to burn in hardware on the floor before
shipping.
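For anyone who wants to try it, kicking it off is simple enough (a
sketch; assumes the SUNWvts packages are installed, and the log path is
from memory):

  # as root; you'll be prompted for a graphical or TTY interface
  /usr/sunvts/bin/startsunvts

  # test logs normally end up under here
  ls /var/opt/SUNWvts/logs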
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] OT: QFS question WAS:Single SAN Lun presented to 4 Hosts

2007-08-30 Thread Frank Leers
On Thu, 2007-08-30 at 14:03 -0400, Paul Kraus wrote:
 On 8/30/07, Peter L. Thomas [EMAIL PROTECTED] wrote:
 
  That said, is there a HOWTO anywhere on installing QFS on Solaris 9 
  (Sparc64)
  machines?  Is that even possible?
 
 I don't know of a How To, but I assume the manual has instructions.
 When I took the Sun SAM-FS / QFS technical training many years ago,
 they were supported on Solaris 2.6, 7, and 8 (which tells you how long
 ago that was), so I assume Solaris 9 is (or was) supported.
 

As Paul mentions, SAM-QFS will definitely run on S9.  One source for the
docs is here:

http://docs.sun.com/app/docs/prod/samfs?l=en#hic

http://docs.sun.com/app/docs/prod/qfs?l=en#hic

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] large numbers of zfs filesystems

2007-03-06 Thread Frank Leers
I am curious as to what people are using in both test and production
environments WRT large numbers of ZFS filesystems: hundreds? Tens of
thousands?  Does anyone have numbers on boot times, shutdown times, or
system performance with LARGE numbers of fs's?  How about sharing many
filesystems via NFS and/or SMB/TotalNet, etc.?
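
To frame the question, this is roughly the sort of test I have in mind
(just a sketch; the pool name is made up and the counts are arbitrary):

  # create N filesystems under a scratch pool and time the operations
  # that matter at boot: listing, mounting and sharing them all
  N=10000
  i=0
  while [ $i -lt $N ]; do
      zfs create testpool/fs$i
      i=`expr $i + 1`
  done
  time zfs list -t filesystem > /dev/null
  zfs set sharenfs=on testpool    # children inherit the setting
  time zfs share -a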

thanks,

-frank

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Thumper and ZFS

2006-10-16 Thread Frank Leers
On Fri, 2006-10-13 at 01:05 +0200, Robert Milkowski wrote:
 Hello zfs-discuss,
 
   While waiting for the Thumpers to arrive I'm thinking about how to
   configure them. I would like to use raid-z. As the Thumper has six
   8-port SATA controllers, maybe it would make sense to create raid-z
   groups of 6 disks, each disk from a separate controller, and then
   combine 7 such groups into one pool. That leaves 6 disks, with two of
   them designated for the system (mirror), which leaves 4 disks,
   probably as hot spares.
 
   That way, if one controller fails the entire pool will still be OK.
 
   What do you think?
 
   ps. there will still be a SPOF for the boot disks and hot spares, but
   it looks like there's no choice anyway.
 
 

Based on what I have seen so far, when your Thumper shows up it should
have the following factory configuration, which is pretty close to your
scenario above.

c5t4 and c5t0 as boot disk mirror metadevices

1 pool with the following raidz groups:

raidz: c0t0 c1t0 c4t0 c6t0 c7t0
raidz: c0t1 c1t1 c4t1 c5t1 c6t1 c7t1
raidz: c0t2 c1t2 c4t2 c5t2 c6t2 c7t2
raidz: c0t3 c1t3 c4t3 c5t3 c6t3 c7t3
raidz: c0t4 c1t4 c4t4 c6t4 c7t4
raidz: c0t5 c1t5 c4t5 c5t5 c6t5 c7t5
raidz: c0t6 c1t6 c4t6 c5t6 c6t6 c7t6
raidz: c0t7 c1t7 c4t7 c5t7 c6t7 c7t7
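
If you ever have to re-create that layout by hand, the command would
look roughly like this (a sketch only; the pool name is made up, so
double-check the device names against your own box before running it):

  zpool create tank \
    raidz c0t0d0 c1t0d0 c4t0d0 c6t0d0 c7t0d0 \
    raidz c0t1d0 c1t1d0 c4t1d0 c5t1d0 c6t1d0 c7t1d0 \
    raidz c0t2d0 c1t2d0 c4t2d0 c5t2d0 c6t2d0 c7t2d0 \
    raidz c0t3d0 c1t3d0 c4t3d0 c5t3d0 c6t3d0 c7t3d0 \
    raidz c0t4d0 c1t4d0 c4t4d0 c6t4d0 c7t4d0 \
    raidz c0t5d0 c1t5d0 c4t5d0 c5t5d0 c6t5d0 c7t5d0 \
    raidz c0t6d0 c1t6d0 c4t6d0 c5t6d0 c6t6d0 c7t6d0 \
    raidz c0t7d0 c1t7d0 c4t7d0 c5t7d0 c6t7d0 c7t7d0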


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] panic string assistance

2006-10-03 Thread Frank Leers
Could someone offer insight into this panic, please?  


panic string:   ZFS: I/O failure (write on unknown off 0: zio 6000c5fbc00
[L0 ZIL intent log] 1000L/1000P DVA[0]=1:249b68000:1000 zilog uncompressed
BE contiguous birth=318892 fill=0 cksum=3b8f19730caa4327:9e102
 panic kernel thread: 0x2a1015d7cc0  PID: 0  on CPU: 530
cmd: sched
t_procp: 0x187c780(proc_sched)
  p_as: 0x187e4d0(kas)
  zone: global
t_stk: 0x2a1015d7ad0  sp: 0x18aa901  t_stkbase: 0x2a1015d2000
t_pri: 99(SYS)  pctcpu: 0.00
t_lwp: 0x0  psrset: 0  last CPU: 530
idle: 0 ticks (0 seconds)
start: Wed Sep 20 18:17:22 2006
age: 1788 seconds (29 minutes 48 seconds)
tstate: TS_ONPROC - thread is being run on a processor
tflg:   T_TALLOCSTK - thread structure allocated from stk
T_PANIC - thread initiated a system panic
tpflg:  none set
tsched: TS_LOAD - thread is in memory
TS_DONT_SWAP - thread/LWP should not be swapped
TS_SIGNALLED - thread was awakened by cv_signal()
pflag:  SSYS - system resident process

pc:  0x105f7f8  unix:panicsys+0x48:   call  unix:setjmp
startpc: 0x119fa64  genunix:taskq_thread+0x0:   save%sp, -0xd0, %sp

unix:panicsys+0x48(0x7b6e53a0, 0x2a1015d77c8, 0x18ab2d0, 0x1, , , 0x4480001601, , , , , , , , 0x7b6e53a0, 0x2a1015d77c8)
unix:vpanic_common+0x78(0x7b6e53a0, 0x2a1015d77c8, 0x7b6e3bf8, 0x7080bc30, 0x7080bc70, 0x7080b840)
unix:panic+0x1c(0x7b6e53a0, 0x7080bbf0, 0x7080bbc0, 0x7b6e4428, 0x0, 0x6000c5fbc00, , 0x5)
zfs:zio_done+0x284(0x6000c5fbc00)
zfs:zio_next_stage(0x6000c5fbc00) - frame recycled
zfs:zio_vdev_io_assess+0x178(0x6000c5fbc00, 0x6000c586da0, 0x7b6c79f0)
genunix:taskq_thread+0x1a4(0x6000bc5ea38, 0x0)
unix:thread_start+0x4()
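
The panic string itself says a write to the ZIL (intent log) got an I/O
error that ZFS could not recover from, and at that point ZFS panics. If
you still have the crash dump, something like this should tell you which
vdev the failing write was headed for (a sketch; the zio/vdev member
names are from memory and may differ slightly between builds):

  # open the dump saved by savecore (adjust the directory/instance number)
  cd /var/crash/`hostname`
  mdb -k unix.0 vmcore.0

  # inside mdb, inspect the failing zio from the panic string
  > ::status
  > 6000c5fbc00::print zio_t io_error io_vd
  > <value of io_vd>::print vdev_t vdev_path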

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss