Yes, but pricing that's so obviously disconnected from cost leads customers to
feel they're being ripped off.
Today we had another panic, at least it was during work time :) Just a
shame the 999GB ufs takes 80+ mins to fsck. (Yes, it is mounted 'logging').
panic[cpu3]/thread=ff001e70dc80:
free: freeing free block, dev:0xb60024, block:13144, ino:1737885,
fs:/export/saba1
Well, I'm not holding out much hope of Sun working with these suppliers any
time soon. I asked Vmetro why they don't work with Sun considering how well
ZFS seems to fit with their products, and this was the reply I got:
Micro Memory has a long history of working with Sun, and I worked at Sun
Essentially yes, the entire pool dies. If you think of each mirror as an
individual disk, you've just striped them together so the pool goes offline if
any mirror fails, and each mirror can only guard against one half of the mirror
failing.
If you want to guard against any two trays failing,
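(For illustration, the layout being described, two mirrors striped into one pool, could be built like this; pool and device names are hypothetical:)
zpool create tank mirror c1t0d0 c1t1d0 mirror c2t0d0 c2t1d0
zpool status tank   # shows the two mirror vdevs; losing an entire mirror takes the whole pool offline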
Without checking your math, I believe you may be confusing the risk of *any*
data corruption with the risk of a total drive failure, but I do agree that the
calculation should just be for the data on the drive, not the whole array.
My feeling on this from the various analyses I've read on the
Get a cheap 5th SATA drive to act as your boot drive, install Solaris on that,
and then let ZFS use the whole of the remaining 4 drives.
That gives you performance benefits, and it means it's very easy to recover if
your boot drive fails - just re-install Solaris and zpool import the raid
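(A rough sketch of that setup; device names are hypothetical and raidz is just one example choice for the 4-drive pool:)
zpool create tank raidz c1t1d0 c1t2d0 c1t3d0 c1t4d0   # data pool on the 4 remaining drives
# later, after re-installing Solaris on the boot drive:
zpool import tank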
It was posted in the CIFS forum a couple of days ago:
http://www.opensolaris.org/jive/forum.jspa?forumID=214
Thread: HEADS-UP: Please skip snv_93 if you use CIFS server:
http://www.opensolaris.org/jive/thread.jspa?threadID=65996&tstart=0
On Thu, Jul 10, 2008 at 1:15 AM, Fajar A. Nugraha [EMAIL PROTECTED] wrote:
Another alternative is to use an IDE to Compact Flash adapter, and
boot off of flash.
Just curious, what will that flash contain?
E.g., will it be similar to Linux's /boot, or will it contain the full
Solaris root?
Will Murnane wrote:
On Thu, Jul 10, 2008 at 12:43, Glaser, David [EMAIL PROTECTED] wrote:
I guess what I was wondering was whether there was a direct method rather than
the overhead of ssh.
On receiving machine:
nc -l 12345 | zfs recv mypool/[EMAIL PROTECTED]
and on sending machine:
zfs send
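(For completeness, the sending side of that pattern would look something like the following; the filesystem, snapshot, host, and port are placeholders:)
zfs send mypool/fs@snap | nc receiving-host 12345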
On Jul 10, 2008, at 12:42, Tim wrote:
It's the same reason you don't see HDS or EMC rushing to adjust the
price of
the SYM or USP-V based on Sun releasing the thumpers.
No one ever got fired for buying EMC/HDS/NTAP
I know my company has corporate standards for various aspects of
IT,
So, does anybody have an approach to recovering this filesystem?
Is there a way to relabel the drives so that ZFS will recognize them,
without losing the data?
Thanks,
Lee
On Jul 5, 2008, at 1:24 PM, Lee Fyock wrote:
Hi--
Here's the scoop, in probably too much detail:
I'm a sucker for
Hello relling,
Thanks for your comments. FWIW, I am building an actual hardware array, so even
though I _may_ put ZFS on top of the hardware array's 22TB drive that the OS
sees (I may not), I am focusing purely on the controller rebuild.
So, setting aside ZFS for the moment, am I still correct
I'm still confused.
What is a -SAFE- way with two drives if you prepare for hardware
failure? That is: one drive fails and the system does not go down
because the other drive takes over. Do I need raid or mirror?
--
Dick Hoogendijk -- PGP/GnuPG key: 01D2433D
++ http://nagual.nl/ + SunOS sxce
Hi Dick
You want Mirroring. A Sun system with mirrored disks can be configured to
not go down due to one disk failing. For this to be valid, you need to also
make sure that the device used for SWAP is mirrored - you won't believe how
many times I've seen this mistake being made.
To be even
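(A minimal sketch of what that looks like with a ZFS root pool, where swap is a zvol inside the pool; device names are hypothetical:)
zpool attach rpool c1t0d0s0 c1t1d0s0   # attach a second disk, mirroring everything in rpool, including the swap zvol
zpool status rpool                     # wait for the resilver to finish
installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c1t1d0s0   # x86: make the second disk bootable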
Sorry, but I'm stuck at 6540.
There are so many options in how you would practically configure these that
there is no way to give a sensible answer to your question. But the most
basic questions are: Do the racks have power from separate PDUs? Are they
in physically remote locations? Does
Bob Friesenhahn
I expect that Sun is realizing that it is already
undercutting much of
the rest of its product line. These minor updates would allow the
X4540 to compete against much more expensive StorageTek SAN hardware.
Assuming, of course, that the requirements for the more expensive
On Fri, Jul 11, 2008 at 9:25 AM, Moore, Joe [EMAIL PROTECTED] wrote:
Features. RAS. Simplicity. Corporate Inertia (having storage admins
who don't know OpenSolaris). Executive outings with StorageTek-logo'd
golfballs. The last 2 aren't something I'd build a business case
around, but
On Fri, Jul 11, 2008 at 05:23, Darren J Moffat [EMAIL PROTECTED] wrote:
Why ?
Referenced by the following packages:
SUNWnetcat
Is this in 10u5? Weird, it's not on my media.
Will
Hi, thanks for your help; in the forum I got an answer too, and I'm going to try
that. But your suggestion is also an angle I will investigate. Is there
maybe some diagnostic tool in OpenSolaris I can use, or shall I use the Solaris
bootable CD that checks whether my hardware is fully compatible?
Will Murnane wrote:
On Fri, Jul 11, 2008 at 05:23, Darren J Moffat [EMAIL PROTECTED] wrote:
Why ?
Referenced by the following packages:
SUNWnetcat
Is this in 10u5? Weird, it's not on my media.
No, but this is an opensolaris.org alias, not a Solaris 10 support forum.
So the
On Fri, 11 Jul 2008, Ross wrote:
If you want to guard against any two trays failing, you need to use
some kind of dual parity protection. Either dual mirrors, or
raid-z2. Given that you only have 8 LUNs, raid-z2 would seem to be
the best option.
System reliability will be dominated by
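(As a sketch of the raid-z2 option over the 8 LUNs; pool and device names are hypothetical:)
zpool create tank raidz2 c2t0d0 c2t1d0 c2t2d0 c2t3d0 c2t4d0 c2t5d0 c2t6d0 c2t7d0   # survives any two LUNs failing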
On Fri, Jul 11, 2008 at 11:44, Darren J Moffat [EMAIL PROTECTED] wrote:
No, but this is an opensolaris.org alias, not a Solaris 10 support forum. So
the assumption, unless people say otherwise, is that you are running a recent
build of SX:CE or OpenSolaris 2008.05 (including updates).
Luckily, the
Yesterday evening, I tried Live Upgrade on a Sun Fire V60x running SX:CE 90
to SX:CE 93 with ZFS root (mirrored root pool called root). The LU itself
ran without problems, but before rebooting the machine, I wanted to add
some space to the root pool that had previously been in use for a UFS BE.
User Name wrote:
Hello relling,
Thanks for your comments. FWIW, I am building an actual hardware array, so
even though I _may_ put ZFS on top of the hardware array's 22TB drive that
the OS sees (I may not), I am focusing purely on the controller rebuild.
So, setting aside ZFS for the
There's nothing I know of I'm afraid, I'm too new to Solaris to have looked
into things that deeply.
If you have access to any spare parts, the easiest way to test is to swap
things over and see if the problem is reproducible. It could even be something
as simple as a struggling power supply.
I need to find out what is the largest ZFS file system - in numbers of
files, NOT CAPACITY, that has been tested.
Looking to scale to billions of files and would like to know if anyone
has tested anything close and what the performance ramifications are.
Has anyone tested a ZFS file system
On Fri, 11 Jul 2008, Sean Cochrane - Storage Architect wrote:
I need to find out what is the largest ZFS file system - in numbers of files,
NOT CAPACITY, that has been tested.
Looking to scale to billions of files and would like to know if anyone has
tested anything close and what the
bf == Bob Friesenhahn [EMAIL PROTECTED] writes:
bf since the dawn of time
since the dawn of time Sun has been playing these games with hard
drive ``sleds''. I still have sparc32 stuff on the shelf with
missing/extra sleds.
bf POTS line
bf cell phone
bf You are free to select
jh == Johan Hartzenberg [EMAIL PROTECTED] writes:
jh To be even MORE safe, you want the two disks to be on separate
jh controllers, so that you can survive a controller failure too.
or a controller-driver-failure. At least on Linux, when a disk goes
bad, Linux starts resetting
Thanks for your comments. FWIW, I am building an
actual hardware array, so even though I _may_ put ZFS
on top of the hardware array's 22TB drive that the
OS sees (I may not), I am focusing purely on the
controller rebuild.
Not letting ZFS handle (at least one level of) redundancy is a bad
Hi
I too strongly suspect that some HW component is failing. It is rare to see all
drives (in your case both drives in the mirror and the boot drive) reporting errors
at the same time.
zpool clear just resets the error counters. You still have the errors in there.
Start with the following components (in
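(A sketch of the kind of checking being suggested; the pool name is hypothetical:)
zpool status -v tank   # per-device read/write/checksum error counters, plus any files with known errors
fmdump -eV | more      # FMA error reports often point at the device or driver that is complaining
zpool clear tank       # only resets the counters; it does not repair anything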
Thank you for all the feedback! It's appreciated!
@hartz
Do the racks have power from separate PDUs?
Yes
Are they in physically remote locations?
No, the racks are side by side
Do your fabric switches have redundant power from separate PDUs?
Yes
Do you want mirroring here purely for
This shouldn't have happened. Do you have zdb on the Mac? If yes, you can try it.
It is (intentionally?) undocumented, so you'll need to search for various
scripts on blogs.sun.com and here. Something might just work. But do check what
Apple is actually shipping. You may want to use dtrace to find
Hi
I'm running all kinds of tools now, even a tool for my HD from WD, so we will see
what the results are.
I ordered another mobo this morning, and if that doesn't work then I will ask a
fellow sysop to put my disk in his Solaris array.
No, I didn't notice any kernel panics; the only thing
Trying the disks in another machine is a great step, it will eliminate those
quickly. Use your own cables too so you can eliminate them from suspicion.
If this is hardware related, from my own experience I would say it's most
likely to be (in order):
- Power Supply
- Memory (especially if
On Fri, 11 Jul 2008, Sean Cochrane - Storage Architect wrote:
I need to find out what is the largest ZFS file system - in numbers of files,
NOT CAPACITY, that has been tested.
In response to an earlier such question (from you?) I created a
directory with a million files. I forgot about it
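(A minimal sketch of how such a test directory can be populated; the path is hypothetical, and it takes a while:)
mkdir -p /tank/manyfiles
cd /tank/manyfiles
i=0
while [ $i -lt 1000000 ]; do
    touch f.$i
    i=$((i + 1))
done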
On Fri, 11 Jul 2008, Akhilesh Mritunjai wrote:
Thanks for your comments. FWIW, I am building an
actual hardware array, so even though I _may_ put ZFS
on top of the hardware array's 22TB drive that the
OS sees (I may not), I am focusing purely on the
controller rebuild.
Not letting ZFS handle
Will Murnane wrote:
If the prices on disks were lower on these, they would be interesting
for low-end businesses or even high-end home users. The chassis is
within reach of reasonable, but the disk prices look ludicrously high
from where I sit. An empty one only costs $3k, sure, but fill it
The admin user doesn't have any access to customer data; just could
kill off sessions, etc.
On Jul 11, 2008, at 2:05 PM, Ian Collins wrote:
Will Murnane wrote:
If the prices on disks were lower on these,
Richard Elling wrote:
The best news, for many folks, is that you can boot from an
(externally pluggable) CF card, so that you don't have to burn
two disks for the OS.
Can these be mirrored? I've been bitten by these cards failing (in a
camera).
Ian
On Fri, 11 Jul 2008, Tim wrote:
20k list gets you into a decked-out StoreVault with FCP/iSCSI/NFS... For
being just a JBOD this thing is ridiculously overpriced, sorry.
I'm normally the first one to defend Sun when it comes to decisions made due
to an enterprise customer base, but this will
Tim wrote:
On Fri, Jul 11, 2008 at 4:05 PM, Ian Collins [EMAIL PROTECTED] wrote:
Will Murnane wrote:
If the prices on disks were lower on these, they would be
interesting
for low-end businesses or even high-end home users. The chassis is
Ian Collins wrote:
Richard Elling wrote:
The best news, for many folks, is that you can boot from an
(externally pluggable) CF card, so that you don't have to burn
two disks for the OS.
Can these be mirrored? I've been bitten by these cards failing (in a
camera).
Yes, of
On Jul 11, 2008, at 4:59 PM, Bob Friesenhahn wrote:
Has anyone tested a ZFS file system with at least 100 million +
files?
What were the performance characteristics?
I think that there are more issues with file fragmentation over a long
period of time than the sheer number of files.
On Fri, Jul 11, 2008 at 5:33 PM, Sean Cochrane - Storage Architect
[EMAIL PROTECTED] wrote:
I need to find out what is the largest ZFS file system - in numbers of
files, NOT CAPACITY, that has been tested.
Looking to scale to billions of files and would like to know if anyone has
tested
Peter Tribble wrote:
On Fri, Jul 11, 2008 at 5:33 PM, Sean Cochrane - Storage Architect
[EMAIL PROTECTED] wrote:
What were the performance characteristics?
Not brilliant...
Although I suspect raid-z isn't exactly the ideal choice. Still, performance
generally is adequate for our
On Fri, Jul 11, 2008 at 3:59 PM, Bob Friesenhahn
[EMAIL PROTECTED] wrote:
There are definitely issues with programs like 'ls' when listing a
directory with a million files since 'ls' sorts its output by default.
My Windows system didn't like it at all when accessing it with CIFS
and the file
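(One workaround, if you only need the names: skip the sort, or walk the tree with find; the path is hypothetical:)
ls -f /tank/manyfiles > /tmp/names.txt   # -f lists entries in directory order, without sorting
find /tank/manyfiles -type f | wc -l     # count the files without sorting anything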
If the cabling outage was transient, the disk driver would simply retry
until they came back. If it's a hotplug-capable bus and the disks were
flagged as missing, ZFS would by default wait until the disks came back
(see zpool get failmode pool), and complete the I/O then. There would
be no
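(For reference, the property mentioned above can be inspected and changed like this; the pool name is hypothetical, and the supported values are wait, the default, continue, and panic:)
zpool get failmode tank
zpool set failmode=continue tank   # return EIO on new writes instead of blocking when the pool loses all its devices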