The vdev can handle dynamic LUN growth, but the underlying VTOC or EFI label
may need to be zeroed and reapplied if you set up the initial vdev on a slice.
If you gave the pool the entire disk you should be fine, but I believe you'll
still need to offline/online the pool.
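For what it's worth, a minimal sketch of the whole-disk case (pool and device
names are invented, and the exact steps can vary by release):

    # export the pool so the relabeled LUN can be re-read
    zpool export tank
    # rescan/relabel the grown LUN as needed (format/cfgadm)
    # on re-import, ZFS should pick up the new device size
    zpool import tank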
I'm a little confused by the first poster's message as well, but you
lose some benefits of ZFS if you don't create your pools with either
mirrors (RAID1) or RAIDZ, such as automatic repair of detected
corruption. The array isn't going to detect that because all it knows
about are blocks.
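For reference, pool-level redundancy is just a matter of how the pool is
built; hypothetical device names:

    # two-way mirror (RAID1): checksum errors repairable from the other side
    zpool create tank mirror c0t0d0 c0t1d0
    # or single-parity RAID-Z across three disks
    zpool create tank raidz c0t0d0 c0t1d0 c0t2d0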
Hi,
I've just started using ZFS + NFS, and I was wondering if there is
anything I can do to optimise it for being used as a mailstore?
(Small files, lots of them, with lots of directories and high
concurrent access.)
So any ideas, guys?
P
but there may not be filesystem space for double the data.
Sounds like there is a need for a zfs-defragment-file utility,
perhaps?
Or if you want to be politically cagey about the naming choice, perhaps
zfs-seq-read-optimize-file? :-)
For data warehouse and streaming applications a
That's the dilemma: the array provides nice features like RAID1 and
RAID5, but those are of no real use when using ZFS.
RAID5 is not a nice feature when it breaks.
A RAID controller cannot guarantee that all bits of a RAID5 stripe
are written when power fails; then you have data corruption
On Tue, Jun 27, 2006 at 10:14:06AM +0200, Patrick wrote:
Hi,
I've just started using ZFS + NFS, and I was wondering if there is
anything I can do to optimise it for being used as a mailstore?
(Small files, lots of them, with lots of directories and high
concurrent access.)
So any
Chris Csanady writes:
On 6/26/06, Neil Perrin [EMAIL PROTECTED] wrote:
Robert Milkowski wrote On 06/25/06 04:12,:
Hello Neil,
Saturday, June 24, 2006, 3:46:34 PM, you wrote:
NP Chris,
NP The data will be written twice on ZFS using NFS. This is because NFS
grant beattie wrote:
On Tue, Jun 27, 2006 at 10:14:06AM +0200, Patrick wrote:
Hi,
I've just started using ZFS + NFS, and I was wondering if there is
anything I can do to optimise it for being used as a mailstore?
(Small files, lots of them, with lots of directories and high
concurrent
Hi,
Sounds like your workload is very similar to mine. Is all public
access via NFS?
Well, it's not public directly (courier-imap/pop3/postfix/etc.), but
the maildirs are accessed directly by some programs for certain
things.
For small-file workloads, setting recordsize to a value lower
On Tue, Jun 27, 2006 at 11:16:40AM +0200, Patrick wrote:
Sounds like your workload is very similar to mine. Is all public
access via NFS?
Well, it's not public directly (courier-imap/pop3/postfix/etc.), but
the maildirs are accessed directly by some programs for certain
things.
Yes,
Philip Brown writes:
Roch wrote:
And, if the load can accommodate a
reorder, to get top per-spindle read-streaming performance,
a cp(1) of the file should do wonders on the layout.
but there may not be filesystem space for double the data.
Sounds like there is a need
Mika Borner writes:
RAID5 is not a nice feature when it breaks.
Let me correct myself... RAID5 is a nice feature for systems without
ZFS...
Are huge write caches really an advantage? Or are you talking about
huge write caches with non-volatile storage?
Yes, you are right.
On Tue, Jun 27, 2006 at 12:07:47PM +0200, Roch wrote:
For small-file workloads, setting recordsize to a value lower than the
default (128k) may prove useful.
When changing things like recordsize, can I do it on the fly on a
volume? (And if I can, what happens to the data
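As far as I know, recordsize can be changed on a live filesystem, but it
only applies to blocks written after the change; existing files keep their
old block size until they are rewritten. A sketch, with a made-up dataset
name:

    # smaller records often suit many-small-file mail stores
    zfs set recordsize=8k tank/mail
    # confirm the setting
    zfs get recordsize tank/mail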
Hello Nathanael,
NB I'm a little confused by the first poster's message as well, but
NB you lose some benefits of ZFS if you don't create your pools with
NB either mirrors (RAID1) or RAIDZ, such as automatic repair of
NB detected corruption. The array isn't going to detect that because
NB all it knows about are blocks.
Hi,
Looks like the same stack as 6413847, although that bug points more towards
hardware failure. The stack below is from 5.11 snv_38, but this also seems
to affect Update 2, as per the above bug.
Enda
Thomas Maier-Komor wrote:
Hi,
my colleague is just testing ZFS and created a zpool which had a backing
Does it make sense to solve these problems piecemeal:
* Performance: ZFS algorithms and NVRAM
* Error detection: ZFS checksums
* Error correction: ZFS RAID1 or RAIDZ
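The detection piece is visible even on a non-redundant pool; for example
(pool name hypothetical):

    # the CKSUM column counts checksum errors ZFS has caught;
    # -v lists files affected by unrecoverable errors
    zpool status -v tank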
Nathanael Burton wrote:
If you've got hardware RAID-5, why not just run regular (non-RAID) pools on
top of the RAID-5?
I
Yes, but the idea of using software RAID on a large server doesn't
make sense in modern systems. If you've got a large database server
that runs a large Oracle instance, using CPU cycles for RAID is
counterproductive. Add to that the need to manage the hardware
directly (drive
Most controllers support a background scrub that will read a volume
and repair any bad stripes. This addresses the bad-block issue in
most cases.
It still doesn't help when a double-failure occurs. Luckily, that's
very rare. Usually, in that case, you need to evacuate the volume
and
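ZFS has a direct analogue of that controller background scrub, driven at
the pool level; a sketch with a made-up pool name:

    # traverse all allocated data, verifying it against the checksums
    # (and repairing it, given redundancy)
    zpool scrub tank
    # watch progress and any errors found
    zpool status tank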
Bart Smaalders wrote:
Gregory Shaw wrote:
On Tue, 2006-06-27 at 09:09 +1000, Nathan Kroenert wrote:
How would ZFS self-heal in this case?
You're using hardware RAID. The hardware RAID controller will rebuild
the volume in the event of a single drive failure. You'd need to keep
on top of
Peter Rival wrote:
storage arrays with the same arguments over and over without providing
an answer to the customer problem doesn't do anyone any good. So. I'll
restate the question. I have a 10TB database that's spread over 20
storage arrays that I'd like to migrate to ZFS. How should I
Unfortunately, a storage-based RAID controller cannot detect errors that
occurred between the filesystem layer and the RAID controller, in either
direction (in or out). ZFS will detect them through its use of checksums.
But ZFS can only fix them if it can access redundant bits. It can't
Peter Rival wrote:
See, telling folks "you should just use JBOD" when they don't have JBOD
and have invested millions to get to the state they're in, where they're
efficiently utilizing their storage via a SAN infrastructure, is just
plain one big waste of everyone's time. Shouting down the
Not at all. ZFS is a quantum leap in Solaris filesystem/VM
functionality.
However, I don't see a lot of use for RAID-Z (or Z2) in large
enterprise customers' situations. For instance, does ZFS enable Sun
to walk into an account and say "You can now replace all of your high-
end (EMC)
This is getting pretty picky. You're saying that ZFS will detect any
errors introduced after ZFS has gotten the data. However, as stated
in a previous post, that doesn't guarantee that the data given to ZFS
wasn't already corrupted.
If you don't trust your storage subsystem, you're going
This is getting pretty picky. You're saying that ZFS will detect any
errors introduced after ZFS has gotten the data. However, as stated
in a previous post, that doesn't guarantee that the data given to ZFS
wasn't already corrupted.
But there's a big difference between the time ZFS gets
On Tue, Jun 27, 2006 at 09:41:10AM -0600, Gregory Shaw wrote:
This is getting pretty picky. You're saying that ZFS will detect any
errors introduced after ZFS has gotten the data. However, as stated
in a previous post, that doesn't guarantee that the data given to ZFS
wasn't already
On 6/27/06, Erik Trimble [EMAIL PROTECTED] wrote:
Darren J Moffat wrote:
Peter Rival wrote:
storage arrays with the same arguments over and over without
providing an answer to the customer problem doesn't do anyone any
good. So. I'll restate the question. I have a 10TB database that's
[EMAIL PROTECTED] wrote:
That's the dilemma: the array provides nice features like RAID1 and
RAID5, but those are of no real use when using ZFS.
RAID5 is not a nice feature when it breaks.
A RAID controller cannot guarantee that all bits of a RAID5 stripe
are written when power
Your example would prove more effective if you added, "I've got ten
databases: five on AIX, five on Solaris 8
Peter Rival wrote:
I don't like to top-post, but there's no better way right now. This
issue has recurred several times and there have been no answers to it
that cover the bases.
Torrey McMahon wrote:
ZFS is great for the systems that can run it. However, any enterprise
datacenter is going to be made up of many, many hosts running many
different OSes. In that world you're going to consolidate on large
arrays and use the features of those arrays where they cover the most
Currently, when the root password is forgotten or munged, I boot from the
CD-ROM into a shell, mount the root filesystem on /mnt, and edit
/mnt/etc/shadow, blowing away the root password.
What is going to happen when the root filesystem is ZFS? Hopefully the same
mechanism will be available.
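My guess is the same basic trick survives, with an import in place of the
mount; a hypothetical sketch from the CD-ROM shell (the pool name and
alternate root are assumptions, since ZFS root isn't settled yet):

    # import the root pool under an alternate root
    zpool import -R /mnt rpool
    # blow away the root password as before
    vi /mnt/etc/shadow
    # export so the pool is clean for the real boot
    zpool export rpool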
Ron Halstead wrote:
Currently, when the root password is forgotten or munged, I boot from the
CD-ROM into a shell, mount the root filesystem on /mnt, and edit
/mnt/etc/shadow, blowing away the root password.
What is going to happen when the root filesystem is ZFS? Hopefully the same mechanism
Jason Schroeder wrote:
Torrey McMahon wrote:
[EMAIL PROTECTED] wrote:
I'll bet that ZFS will generate more calls about broken hardware
and fingers will be pointed at ZFS at first because it's the new
kid; it will be some time before people realize that the data was
rotting all along.
Solaris 10u2 was released today. You can now download it from here:
http://www.sun.com/software/solaris/get.jsp
Does anyone know if ZFS is included in this release? One of my local
Sun reps said it did not make it into the u2 release, though I have
heard for ages that 6/06 would
Indeed. ZFS is included in Solaris 10 U2.
-- Prabahar.
Shannon Roddy wrote:
Solaris 10u2 was released today. You can now download it from here:
http://www.sun.com/software/solaris/get.jsp
Does anyone know if ZFS is included in this release? One of my local
Sun reps said it did not
Yup, it's there!
Shannon Roddy said the following on 06/27/06 12:57:
Solaris 10u2 was released today. You can now download it from here:
http://www.sun.com/software/solaris/get.jsp
Does anyone know if ZFS is included in this release?
Shannon Roddy wrote:
Solaris 10u2 was released today. You can now download it from here:
http://www.sun.com/software/solaris/get.jsp
Does anyone know if ZFS is included in this release? One of my local
Sun reps said it did not make it into the u2 release, though I have
heard for ages
Nicolas Williams wrote:
On Tue, Jun 27, 2006 at 09:41:10AM -0600, Gregory Shaw wrote:
This is getting pretty picky. You're saying that ZFS will detect any
errors introduced after ZFS has gotten the data. However, as stated
in a previous post, that doesn't guarantee that the data given to
Torrey McMahon wrote:
Darren J Moffat wrote:
So everything you are saying seems to suggest you think ZFS was a
waste of engineering time, since hardware RAID solves all the problems?
I don't believe it does, but I'm no storage expert and maybe I've drunk
too much Kool-Aid. I'm software
Just wondered if there'd been any progress in this area?
Correct me if I'm wrong, but as it stands, there's no way
to remove a device you accidentally 'zpool add'ed without
destroying the pool.
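Right, and the distinction that bites people is add versus attach;
hypothetical devices:

    # adds a new top-level vdev; there is currently no way to remove it
    zpool add tank c1t2d0
    # attaches a mirror side to an existing device; reversible
    zpool attach tank c1t0d0 c1t2d0
    zpool detach tank c1t2d0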
On 12/06/06, Gregory Shaw [EMAIL PROTECTED] wrote:
Yes, if zpool remove works like you describe, it
On Tue, 27 Jun 2006, Gregory Shaw wrote:
Yes, but the idea of using software RAID on a large server doesn't
make sense in modern systems. If you've got a large database server
that runs a large Oracle instance, using CPU cycles for RAID is
counterproductive. Add to that the need to manage
... but I have to ask.
How do I back this up?
Here is my definition of a backup:
(1) I can copy all data and metadata onto some media in
a manner that verifies the integrity of the data and
metadata written.
(1.1) By verify I mean that the data written onto
Al Hopper wrote:
On Tue, 27 Jun 2006, Gregory Shaw wrote:
Yes, but the idea of using software RAID on a large server doesn't
make sense in modern systems. If you've got a large database server
that runs a large Oracle instance, using CPU cycles for RAID is
counterproductive. Add to that
Robert Milkowski wrote On 06/27/06 03:00,:
Hello Chris,
Tuesday, June 27, 2006, 1:07:31 AM, you wrote:
CC On 6/26/06, Neil Perrin [EMAIL PROTECTED] wrote:
Robert Milkowski wrote On 06/25/06 04:12,:
Hello Neil,
Saturday, June 24, 2006, 3:46:34 PM, you wrote:
NP Chris,
NP The data will
Steve Bennett wrote:
OK, I know that there's been some discussion on this before, but I'm not sure
that any specific advice came out of it. What would the advice be for
supporting a largish number of users (10,000, say) on a system that supports
ZFS? We currently use vxfs and assign a user
On Tue, 2006-06-27 at 23:07, Steve Bennett wrote:
From what little I currently understand, the general advice would
seem to be to assign a filesystem to each user, and to set a quota
on that. I can see this being OK for small numbers of users (up to
1000 maybe), but I can also see it being a
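For what it's worth, the per-user scheme itself is only a couple of
commands per user (pool and user names invented):

    # one filesystem per user, each with its own quota
    zfs create tank/home/steve
    zfs set quota=500m tank/home/steve

Scripting 10,000 of those is easy; the open question is mount time and
day-to-day management at that scale.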
[EMAIL PROTECTED] wrote On 06/27/06 17:17,:
We have over 1 filesystems under /home at strongspace.com and it works fine.
I forget exactly, but there was a bug fix or an improvement made around
Nevada build 32 (we're currently at 41) that made the initial mount on reboot
significantly
On Jun 27, 2006, at 3:30 PM, Al Hopper wrote: On Tue, 27 Jun 2006, Gregory Shaw wrote: Yes, but the idea of using software RAID on a large server doesn't make sense in modern systems. If you've got a large database server that runs a large Oracle instance, using CPU cycles for RAID is counter