Kory,
I'm sorry that you had to go through this. We're all working very hard to
make ZFS better for everyone. We've noted this problem on the ZFS Best
Practices wiki to try and help avoid future problems until we can get the
quotas issue resolved.
-- richard
Kory Wheatley wrote:
Richard,
I appreciate your information and insight. Since ZFS is not capable of
handling thousands of file systems at this time and has several
limitations, we are forced to shift our migration to UFS, after wasting
time on ZFS. Before we considered migrating our user accounts to ZFS,
Sun told us that everything would be fine, but they failed to mention
the terrible slowness of the boot process. We told them we would be
adding thousands of file systems under ZFS, and they said there would
be no problems. That is very unprofessional from my standpoint, since
we invested so much time in ZFS. It has forced us to hold back on our
migration and caused us to spend another $12k of maintenance on our
current system, because we can't complete our migration before our
maintenance contract runs out. We have to restructure our migration
plans around UFS.
ZFS needs to be described accurately in Sun's documentation and in the
presentations I've reviewed. Sure, it supports thousands or even
millions of file systems, but there are ramifications, namely a very
slow boot process (if that had been stated, it would have been enough).
This has cost us a considerable amount of the time we've invested in
ZFS, and now we have to turn our attention to UFS for our migration.
From what I understand, this problem was identified last year. I'm
wondering how much time has been invested in it, since ZFS is such a
key reason for everyone to migrate to or install Solaris 10. You
definitely would not want to use ZFS with thousands of file systems;
it will not work for us at all at this time.
Richard Elling wrote:
Jim Mauro wrote:
(I'm probably not the best person to answer this, but that has never
stopped me
before, and I need to give Richard Elling a little more time to get
the Goats, Cows
and Horses fed, sip his morning coffee, and offer a proper response...)
chores are done, wading through the morning e-mail...
Would it benefit us to have the disks set up as a raidz along with
the hardware raid 5 that is already set up too?
Way back when, we called such configurations "plaiding", which
described a host-based RAID configuration
that criss-crossed hardware RAID LUNs. In doing such things, we had
potentially better data availability
with a configuration that could survive more failure modes.
Alternatively, we used the hardware RAID
for the availability configuration (hardware RAID 5), and used
host-based RAID to stripe across hardware
RAID5 LUNs for performance. Seemed to work pretty well.
Yep, there are various ways to do this and, in general, the more copies
of the data you have, the better reliability you have. Space is also
fairly easy to calculate. Performance can be tricky, and you may need to
benchmark with your workload to see which is better, due to the
difficulty in modeling such systems.
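For concreteness, a minimal sketch of the two layouts described above,
assuming the array exports three hardware raid5 LUNs (the cXtYdZ device
names below are placeholders for whatever the array actually presents):

   # raidz across the hardware raid5 LUNs ("plaiding"): ZFS adds its own
   # parity on top of the array's, trading capacity and IOPS for the
   # ability to survive the loss of an entire LUN
   zpool create tank raidz c2t0d0 c3t0d0 c4t0d0

   # dynamic stripe across the hardware raid5 LUNs: the array handles all
   # redundancy, the host-side stripe is purely for performance
   zpool create tank c2t0d0 c3t0d0 c4t0d0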
In theory, a raidz pool spread across some number of underlying
hardware raid 5 LUNs would offer protection against more failure
modes, such as the loss of an entire raid5 LUN. So from a failure
protection/data availability point of view, it offers some benefit.
Whether or not you'll see a real, measurable benefit over time is
hard to say.
Each additional level of protection/redundancy
has a diminishing return, often times at a dramatic incremental cost
(e.g. getting from "four nines" to "five nines").
If money were no issue, I'm sure we could come up with an awesome
solution :-)
Or would this double raid slow our performance, with both a software
and a hardware raid setup?
You will certainly pay a performance penalty - using raidz across the
raid5 luns will reduce the deliverable IOPS from the raid5 luns.
Whether or not the performance trade-off is worth the RAS gain depends
on your RAS and data availability requirements.
Fast, inexpensive, reliable: pick two.
Or would a raidz setup be better than the hardware raid5 setup?
Assuming a robust raid5 implementation with battery-backed NVRAM
(protecting against the "write hole" and partial stripe writes), I
think a raidz zpool covers more of the datapath than a hardware raid5
LUN, but I'll wait for Richard to elaborate here (or tell me I'm wrong).
In general, you want the data protection in the application, or as
close to the application as you can get. Since programmers tend to be
lazy (Gosling said it, not me! :-), most rely on the file system and
underlying constructs to ensure data protection. So, having ZFS manage
the data protection will always be better than having some box at the
other end of a wire managing the protection.
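To put that concretely: if the pool sits on a single hardware raid5 LUN
with no ZFS-level redundancy, ZFS checksums can detect corruption that
comes back over the wire, but ZFS has nothing to repair it from; give
ZFS a mirror (or raidz) of its own and it can self-heal from the good
copy. A minimal sketch, with placeholder LUN names:

   # no ZFS redundancy: corruption is detected but cannot be repaired
   zpool create tank c2t0d0

   # ZFS-level mirror across two array LUNs: detected corruption is
   # repaired from the surviving good copy
   zpool create tank mirror c2t0d0 c3t0d0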
Also, if we do set up the disks as a raidz, would it benefit us more
to specify each disk in the raidz, or to create them as LUNs and then
specify the setup in raidz?
Isn't this the same as the first question? I'm not sure what you're
asking here...
The questions you're asking are good ones, and date back to the
decades-old struggle over configuration tradeoffs for performance /
availability / cost.
My knee-jerk reaction is that one level of RAID, either hardware raid5
or ZFS raidz, is sufficient for availability, and keeps things
relatively simple (and simple also improves RAS). The advantage
host-based RAID has always had over hardware RAID is the ability to
create software LUNs (like a raidz1 or raidz2 zpool) across physical
disk controllers, which may also cross SAN switches, etc. So, if
'twas me, I'd go with non-RAID5 devices from the storage frame, and
create raidz1 or raidz2 zpools across controllers.
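As a rough sketch of that layout, assuming the frame exports plain
(non-RAID) LUNs and using placeholder device names, with one device
taken from each controller:

   # raidz1 built across controllers c1, c2 and c3, so the pool can
   # survive the loss of any single device or controller path
   zpool create tank raidz1 c1t0d0 c2t0d0 c3t0d0

   # or double parity across more controllers for extra protection
   zpool create tank raidz2 c1t0d0 c2t0d0 c3t0d0 c4t0d0 c5t0d0 c6t0d0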
This is reasonable.
But, that's me...
:^)
/jim
The important thing is to protect your data. You have lots of options
here,
so we'd need to know more precisely what the other requirements are
before
we could give better advice.
-- richard
_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss