On Tue, Jul 15, 2008 at 01:58, Ross [EMAIL PROTECTED] wrote:
However, I'm not sure where the 8 is coming from in your calculations.
Bits per byte ;)
In this case it's approximately 13/100, or around 1 in 8 odds.
Taking the factor of 8 into account, it's around 8 in 8.
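To spell the arithmetic out (the four 750GB drives and the 1-in-10^14
unrecoverable-read-error spec below are illustrative assumptions, not the exact
figures from the earlier post):

  Rebuilding one drive of a 4-disk raidz means reading the other three:
    3 x 750 GB ~= 2.25 x 10^12 bytes = 1.8 x 10^13 bits
    expected read errors ~= 1.8 x 10^13 x 10^-14 ~= 0.18, i.e. roughly 1 in 6 odds
  Counting bytes instead of bits gives 2.25 x 10^12 x 10^-14 ~= 0.02, an answer
  that is exactly 8 times too small.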
Another possible factor to
Hi everyone,
I have just installed Solaris and have added a 3x500GB raidz drive array. I am
able to use this pool ('tank') successfully locally, but when I try to share it
remotely, I can only read, I cannot execute or write. I didn't do anything
other than the default 'zfs set sharenfs=on
Bits vs bytes... D'oh! again. It's a good job I don't do these calculations
professionally. :-)
Date: Tue, 15 Jul 2008 02:30:33 -0400
From: [EMAIL PROTECTED]
To: [EMAIL PROTECTED]
Subject: Re: [zfs-discuss] please help with raid / failure / rebuild calculations
CC:
Well I haven't used a J4500, but when we had an x4500 (Thumper) on loan they
had Solaris pretty well integrated with the hardware. When a disk failed, I
used cfgadm to offline it and as soon as I did that a bright blue Ready to
Remove LED lit up on the drive tray of the faulty disk, right next
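For anyone who hasn't done a swap on one of these, the sequence is roughly the
sketch below; the attachment point (sata1/3) and device name (c1t3d0) are
made-up examples, so check cfgadm -al on your own box first:

  cfgadm -al                      # list attachment points, note the failed disk's Ap_Id
  cfgadm -c unconfigure sata1/3   # offline it; the Ready to Remove LED lights
  # ...physically swap the drive, then...
  cfgadm -c configure sata1/3     # bring the new disk online
  zpool replace tank c1t3d0       # let ZFS resilver onto the replacement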
You can also try SunRay ultra-thin clients. Read about them on www.sun.com; the
forum is called filibeto. Google for 'filibeto sunray'.
On Tue, Jul 15, 2008 at 4:17 AM, Ross [EMAIL PROTECTED] wrote:
Well I haven't used a J4500, but when we had an x4500 (Thumper) on loan they
had Solaris pretty well integrated with the hardware. When a disk failed, I
used cfgadm to offline it and as soon as I did that a bright blue Ready to
Dino wrote:
Hi everyone,
I have just installed Solaris and have added a 3x500GB raidz drive array. I
am able to use this pool ('tank') successfully locally, but when I try to
share it remotely, I can only read, I cannot execute or write. I didn't do
anything other than the default 'zfs
My first hunch would be to unmount the tank pool from /tank, and check the
permissions of the /tank directory. You'll see behavior like this if the
directory on which an NFS-exported file system will be mounted is not
world-readable before the mount.
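Something along these lines should confirm or rule that out (assuming the
default mountpoint of /tank and that 'tank' is the dataset being shared):

  zfs umount tank         # unmount so the underlying directory is visible
  ls -ld /tank            # check the permissions on the bare mount point
  chmod 755 /tank         # make it world-readable/searchable if it isn't
  zfs mount tank
  zfs get sharenfs tank   # while you're at it, confirm the share allows rw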
One nit ... the parity computation is 'in the noise' as far as the CPU goes,
but it tends to flush the CPU caches (or rather, replace useful cached data
with parity), which affects application performance.
Modern CPU architectures (including x86/SPARC) provide instructions which allow
data to
On Tue, 15 Jul 2008, Ross wrote:
Well I haven't used a J4500, but when we had an x4500 (Thumper) on
loan they had Solaris pretty well integrated with the hardware.
When a disk failed, I used cfgadm to offline it and as soon as I did
that a bright blue Ready to Remove LED lit up on the
It sounds like you might be interested to read up on Eric Schrock's work. I
read today about some of the stuff he's been doing to bring integrated fault
management to Solaris:
http://blogs.sun.com/eschrock/entry/external_storage_enclosures_in_solaris
His last paragraph is great to see, Sun
Will Murnane wrote:
On Tue, Jul 15, 2008 at 01:58, Ross [EMAIL PROTECTED] wrote:
However, I'm not sure where the 8 is coming from in your calculations.
Bits per byte ;)
In this case it's approximately 13/100, or around 1 in 8 odds.
Taking the factor of 8 into account, it's
Around 9:45 this morning, our mailserver (SunOS 5.11 snv_91 i86pc i386
i86pc) rebooted.
Looking at /var/crash/HOSTNAME, I saw the unix.0 and vmcore.0 files.
Loading them up in MDB, I get the following:
::panicinfo
cpu 0
thread ff02d75b1ca0
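For anyone who wants to dig into the same dump, the usual mdb incantation is
the following (dump number 0 matching the unix.0/vmcore.0 files above):

  cd /var/crash/HOSTNAME
  mdb unix.0 vmcore.0
  > ::status      # one-line summary including the panic string
  > ::stack       # stack trace of the panicking thread
  > ::panicinfo   # register and trap state at the time of the panic
  > ::msgbuf      # console messages leading up to the panic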
On Tue, 15 Jul 2008, Ross Smith wrote:
It sounds like you might be interested to read up on Eric Schrock's work. I
read today about some of the stuff he's been doing to bring integrated fault
management to Solaris:
http://blogs.sun.com/eschrock/entry/external_storage_enclosures_in_solaris
On July 15, 2008 7:44:53 AM -0500 Jason King [EMAIL PROTECTED] wrote:
http://blogs.sun.com/eschrock/entry/external_storage_enclosures_in_solaris
has a bit more info on some of this -- while I would expect Sun
products to integrate that well, it's nice to know the framework is
there for other
Michael Hale wrote:
Around 9:45 this morning, our mailserver (SunOS 5.11 snv_91 i86pc i386
i86pc) rebooted.
Looking at /var/crash/HOSTNAME, I saw the unix.0 and vmcore.0 files.
Loading them up in MDB, I get the following:
::panicinfo
In general, if you get a panic that cannot be
I've a raidz1 made up of 2 partitions. One is on a disk which is failing and I
want to replace it... cfgadm does not allow me to unconfigure it since it
reports 'device busy', so I am trying to offline the slice from the raidz1 vdev
but got this error:
# zpool offline swap c3t2d0s0
cannot offline
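Without seeing the rest of the error it's hard to say why the offline is
refused, but the usual way forward with a failing device is to replace it in
place and let ZFS resilver; the new device name (c4t0d0s0) is just a
placeholder:

  zpool status swap                      # confirm which device is failing
  zpool replace swap c3t2d0s0 c4t0d0s0   # attach a new slice and resilver onto it
  zpool status swap                      # watch the resilver run to completion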
Frank Cusack wrote:
On July 15, 2008 7:44:53 AM -0500 Jason King [EMAIL PROTECTED] wrote:
http://blogs.sun.com/eschrock/entry/external_storage_enclosures_in_solaris
has a bit more info on some of this -- while I would expect Sun
products to integrate that well, it's nice to know the
On Tue, 2008-07-15 at 15:32 -0500, Bob Friesenhahn wrote:
On Tue, 15 Jul 2008, Ross Smith wrote:
It sounds like you might be interested to read up on Eric Schrock's work.
I read today about some of the stuff he's been doing to bring integrated
fault management to Solaris:
The current SES enumerator doesn't support parsing AES FC descriptors
(which are required to correlate disks with the Solaris abstraction).
So we get the PSU/fan/bay information, but don't know which disk is
which.
It should be pretty straightforward to do, though we may need to make
sure that
The stack trace makes it clear that it was ZFS that crashed. (The _cmntrap
stack frame indicates that a trap happened; in this case it's an access to bad
memory by the kernel. The previous stack frame indicates that ZFS was active.)
Now, it may not have been ZFS which caused the panic --
On July 14, 2008 9:54:43 PM -0700 Frank Cusack [EMAIL PROTECTED] wrote:
On July 14, 2008 7:49:58 PM -0500 Bob Friesenhahn
[EMAIL PROTECTED] wrote:
It sounds like they're talking more about traditional hardware RAID
but is this also true for ZFS? Right now I've got four 750GB drives
that I'm
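On the ZFS side, the usual mitigation for the worry about hitting an
unrecoverable read error during a rebuild is regular scrubbing, so latent
errors are found and repaired while full redundancy still exists. A minimal
example (the pool name is a placeholder):

  zpool scrub tank       # read and verify every block against its checksum
  zpool status -v tank   # shows scrub progress and any errors found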