On Thu, Jun 29, 2006 at 10:01:15AM +0200, Robert Milkowski wrote:
Hello przemolicc,
Thursday, June 29, 2006, 8:01:26 AM, you wrote:
ppf On Wed, Jun 28, 2006 at 03:30:28PM +0200, Robert Milkowski wrote:
ppf What I wanted to point out is Al's example: he wrote about damaged data.
I have hundreds of Xen-based virtual machines running off a ZFS/iSCSI
service; yes, it's viable. I can't speak for CentOS specifically; our
infrastructure is using Debian Etch with our own build of Xen.
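For the archives, the basic recipe on the Solaris side is just a zvol
exported as an iSCSI target. A rough sketch, untested, with made-up pool
and volume names, and assuming a build that still has the old shareiscsi
property:

  zfs create tank/vm                   # parent filesystem for the images
  zfs create -s -V 20g tank/vm/web01   # sparse 20 GB volume for one guest
  zfs set shareiscsi=on tank/vm/web01  # export the zvol as an iSCSI target
  iscsitadm list target                # confirm the target exists

The CentOS box then just logs into that target with its usual iSCSI
initiator and sees a plain block device.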
How does ZFS handle snapshots of large files like VM images? Is
replication done at the bit/block level or by file? In other words, does
a snapshot of a changed VM image take up the same amount of space as the
image, or only the amount of space of the bits that have changed within
the image?
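Short version: ZFS snapshots are copy-on-write at the block level, so a
snapshot of a VM image only starts consuming space as blocks inside the
image change afterwards. An easy way to watch it, assuming the images
live in a filesystem called tank/vm (name made up):

  zfs snapshot tank/vm@before-patch   # instantaneous, initially ~0 bytes
  # ... run the guests for a while so blocks in the images change ...
  zfs list -t snapshot                # USED grows only by the changed blocks
  zfs get used,referenced tank/vm@before-patch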
[EMAIL PROTECTED] wrote on 17/07/2007 05:12:49 AM:
I'm going to be setting up about 6 virtual machines (Windows, Linux) in
either VMware Server or Xen on a CentOS 5 box. I'd like to connect to a ZFS
iSCSI target to store the VM images and be able to use ZFS snapshots for
backup. I have no experience with ZFS, so I have a couple of questions
I had originally considered something similar, but... for ZFS snapshot
abilities, I am leaning more towards zfs-hosted NFS... Most of the other VMs
(FreeBSD, for example) can install onto NFS, it wouldn't actually be going
over the network, and it would allow file-level restore instead of
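For anyone searching later: the NFS route is just a couple of dataset
properties; the dataset name below is made up and the options are only an
example:

  zfs create tank/vmimages
  zfs set sharenfs=on tank/vmimages   # export over NFS with default options
  zfs snapshot tank/vmimages@nightly  # individual files are then reachable
                                      # under .zfs/snapshot/nightly/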
ppf ... bad cables, etc. But cannot detect and repair
ppf errors in its (ZFS) code.
Not in its own code, but definitely in the firmware code of a controller.
As Jeff pointed out: if you mirror two different storage arrays.
przemol
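For reference, the mirror-two-arrays idea is a one-liner: give ZFS one LUN
from each array and it keeps a redundant copy to repair from whenever a
checksum fails. A sketch with made-up device names:

  zpool create tank mirror c2t0d0 c3t0d0   # c2* from array A, c3* from array B
  zpool status -v tank                     # per-device read/write/checksum counters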
Hello przemolicc,
Wednesday, June 28, 2006, 10:57:17 AM, you wrote:
ppf On Tue, Jun 27, 2006 at 04:16:13PM -0500, Al Hopper wrote:
Case in point, there was a gentleman who posted on the Yahoo Groups solx86
list and described how faulty firmware on a Hitachi HDS system damaged a
bunch of data.
Hello,
What I wanted to point out is Al's example: he wrote about damaged data.
The data were damaged by firmware, _not_ by the disk surface! In such a case
ZFS doesn't help. ZFS can detect (and repair) errors on the disk surface,
bad cables, etc. But it cannot detect and repair errors in its own (ZFS) code.
I
Depends on your definition of firmware. In higher-end arrays the data is
checksummed when it comes in, and a hash is written when it gets to disk.
Of course this is nowhere near end-to-end, but it is better than nothing.
The checksum is often stored with the data (so if the data is not
... and code is code. "Easier to debug" is a context-sensitive term.
The vdev can handle dynamic LUN growth, but the underlying VTOC or EFI label
may need to be zeroed and reapplied if you set up the initial vdev on a slice.
If you introduced the entire disk to the pool you should be fine, but I
believe you'll still need to offline/online the pool.
Fine, at
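In practice the offline/online step would presumably be an export/import
cycle; a minimal sketch, untested, assuming a pool called tank:

  zpool export tank     # quiesce and detach the pool
  zpool import tank     # re-import it so the label/geometry is re-read
  zpool list tank       # check whether the extra capacity showed up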
but there may not be filesystem space for double the data.
Sounds like there is a need for a zfs-defragment-file utility perhaps?
Or, if you want to be politically cagey about the naming choice, perhaps
zfs-seq-read-optimize-file? :-)
For data warehouse and streaming applications a
on the existence of regions of free contiguous disk space. This
will get more difficult as we get close to full on the
storage.
-r
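Until such a utility exists, the poor man's defragmenter is exactly the
cp(1) trick mentioned elsewhere in the thread: rewrite the file so ZFS
allocates the new copy sequentially. The file name below is only an
example, and it assumes the application is stopped and there is room for a
second copy:

  cp bigtable.dbf bigtable.dbf.new    # new copy gets laid out contiguously
  mv bigtable.dbf.new bigtable.dbf    # replace the fragmented original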
Most controllers support a background-scrub that will read a volume
and repair any bad stripes. This addresses the bad block issue in
most cases.
It still doesn't help when a double-failure occurs. Luckily, that's
very rare. Usually, in that case, you need to evacuate the volume
and
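The ZFS-side analogue, for what it's worth, is a pool scrub: it walks every
block the pool references and verifies its checksum, repairing from a
redundant copy where one exists. Pool name below is made up:

  zpool scrub tank      # kick off a background scrub
  zpool status tank     # shows scrub progress and any errors it found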
Bart Smaalders wrote:
Gregory Shaw wrote:
On Tue, 2006-06-27 at 09:09 +1000, Nathan Kroenert wrote:
How would ZFS self-heal in this case?
You're using hardware RAID. The hardware RAID controller will rebuild
the volume in the event of a single drive failure. You'd need to keep
on top of it, but that's a given in the case of either hardware or
Unfortunately, a storage-based RAID controller cannot detect errors which occurred
between the filesystem layer and the RAID controller, in either direction - in or
out. ZFS will detect them through its use of checksums.
But ZFS can only fix them if it can access redundant bits. It can't
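You can watch that from the host: the per-vdev CKSUM counters in zpool
status record blocks that failed verification. Pool name is an example:

  zpool status -v tank  # READ/WRITE/CKSUM error counters per device
  zpool clear tank      # reset the counters once the cause is dealt with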
Not at all. ZFS is a quantum leap in Solaris filesystem/VM functionality.
However, I don't see a lot of use for RAID-Z (or Z2) in large enterprise
customers' situations. For instance, does ZFS enable Sun to walk into an
account and say "You can now replace all of your high-end (EMC)
This is getting pretty picky. You're saying that ZFS will detect any
errors introduced after ZFS has gotten the data. However, as stated
in a previous post, that doesn't guarantee that the data given to ZFS
wasn't already corrupted.
If you don't trust your storage subsystem, you're going
But there's a big difference between the time ZFS gets
Torrey McMahon wrote:
ZFS is great for the systems that can run it. However, any enterprise
datacenter is going to be made up of many, many hosts running many, many
OSes. In that world you're going to consolidate on large arrays and use
the features of those arrays where they cover the most
Jason Schroeder wrote:
Torrey McMahon wrote:
[EMAIL PROTECTED] wrote:
I'll bet that ZFS will generate more calls about broken hardware
and fingers will be pointed at ZFS at first because it's the new
kid; it will be some time before people realize that the data was
rotting all along.
Torrey McMahon wrote:
Darren J Moffat wrote:
So everything you are saying seems to suggest you think ZFS was a
waste of engineering time since hardware RAID solves all the problems?
I don't believe it does, but I'm no storage expert and maybe I've drunk
too much Kool-Aid. I'm software
Hi
Now that Solaris 10 06/06 is finally downloadable I have some questions
about ZFS.
-We have a big storage system supporting RAID5 and RAID1. At the moment,
we only use RAID5 (for non-Solaris systems as well). We are thinking
about using ZFS on those LUNs instead of UFS. As ZFS on Hardware
About:
-I've read the threads about ZFS and databases. Still, I'm not 100%
convinced about read performance. Doesn't the fragmentation of the
large database files (because of the concept of COW) impact
read performance?
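One tuning knob that comes up in most of the database threads is matching
the dataset recordsize to the database block size before loading data, so
COW rewrites happen in database-block-sized chunks rather than the 128K
default. The dataset name and the 8K block size below are only examples:

  zfs create tank/oradata
  zfs set recordsize=8k tank/oradata  # only affects files written afterwards
  zfs get recordsize tank/oradata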
I do need to get back to this thread. The way I am currently
Roch wrote:
And, if the load can accommodate a
reorder, to get top per-spindle read-streaming performance,
a cp(1) of the file should do wonders on the layout.
but there may not be filesystem space for double the data.
Sounds like there is a need for a zfs-defragment-file utility