On May 27, 2011, at 6:20 AM, Jim Klimov wrote:
> > > From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> > > boun...@opensolaris.org] On Behalf Of Frank Van Damme
> > >
> > > On 26-05-11 13:38, Edward Ned Harvey wrote:
> > > But what if you lose it (the vdev), would there be a way
2011/5/27 Edward Ned Harvey:
> I don't think this is true. The reason you need arc+l2arc to store your DDT
> is because when you perform a write, the system will need to check and see
> if that block is a duplicate of an already existing block. If you dedup
> once, and later disable dedup, the s
> 2011/5/26 Eugen Leitl:
> > How bad would raidz2 do on mostly sequential writes and reads
> > (Athlon64 single-core, 4 GByte RAM, FreeBSD 8.2)?
> >
> > The best way to go is striping mirrored pools, right?
> > I'm worried about losing the two "wrong" drives out of 8.
> > These are all 72
> > From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> > boun...@opensolaris.org] On Behalf Of Frank Van Damme
> >
> > On 26-05-11 13:38, Edward Ned Harvey wrote:
> > But what if you lose it (the vdev), would there be a way to
> > reconstruct the DDT (which you need to be able to
Dan> ... It would still need a complex bp_rewrite.
Are you certain about that?
For example, scrubbing/resilvering and fixing corrupt blocks with
non-matching checksums is a post-processing operation which
works on an existing pool and rewrites some blocks if needed.
And it works without a bp_rew
On Fri, May 27, 2011 at 04:38:15PM +0400, Jim Klimov wrote:
> And if the ZFS is supposedly smart enough to use request coalescing
> as to minimize mechanical seek times, then it might actually be
> possible that your disks would get "stuck", on average, serving requests
> from different parts of the
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> boun...@opensolaris.org] On Behalf Of Frank Van Damme
>
> On 26-05-11 13:38, Edward Ned Harvey wrote:
> > Perhaps a property could be
> > set, which would store the DDT exclusively on that device.
>
> Oh yes please, let me put m
Did you try it as a single command, somewhat like:
zpool create -R /a -o cachefile=/a/etc/zfs/zpool.cache mypool c3d0
Using altroots and cachefile(=none) explicitly is a nearly-documented
way to avoid caching pools that you would not want imported automatically
after reboot, e.g. removable media.
I think that
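The altroot/cachefile technique described above can be sketched as follows; the pool name (mypool) and device (c3d0) are placeholders carried over from the quoted command, not from a real system:

```shell
# cachefile=none keeps the pool out of /etc/zfs/zpool.cache, so it is
# not auto-imported on the next boot (useful for removable media).
zpool create -R /a -o cachefile=none mypool c3d0

# To have the pool cached (and thus auto-imported) by the target boot
# environment instead, point cachefile at the BE's own cache file:
zpool create -R /a -o cachefile=/a/etc/zfs/zpool.cache mypool c3d0
```

These commands require a live system with the target device, so treat them as a CLI sketch rather than a tested recipe.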
2011-05-27 13:50, Frank Van Damme wrote:
Sequential? Let's suppose no spares.
4 mirrors of 2 = sustained bandwidth of 4 disks
raidz2 with 8 disks = sustained bandwidth of 6 disks
Well, technically, for reads the mirrors might get parallelized to read
different portions of data for separate user
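The arithmetic above can be written out explicitly. This is a back-of-the-envelope sketch assuming sustained writes scale with the number of data-bearing disks (mirrors count one disk per 2-way pair; raidz2 counts total disks minus two parity):

```shell
# Sustained-write bandwidth estimate for 8 disks, per the figures above.
disks=8
mirror_width=2                               # 2-way mirrors
raidz_parity=2                               # raidz2 has 2 parity disks
mirror_sustained=$((disks / mirror_width))   # one data stream per mirror
raidz2_sustained=$((disks - raidz_parity))   # data disks only
echo "4x2 mirrors: ${mirror_sustained} disks of sustained write bandwidth"
echo "8-disk raidz2: ${raidz2_sustained} disks of sustained write bandwidth"
```

For reads the mirror figure can roughly double, as noted above, since each side of a mirror can serve different requests.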
> From: Daniel Carosone [mailto:d...@geek.com.au]
> Sent: Thursday, May 26, 2011 8:19 PM
>
> Once your data is dedup'ed, by whatever means, access to it is the
> same. You need enough memory+l2arc to indirect references via
> DDT.
I don't think this is true. The reason you need arc+l2arc to s
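To give a rough sense of why DDT residency in ARC/L2ARC matters: every unique block costs one DDT entry. The numbers below are illustrative assumptions, not from this thread; the ~320 bytes per entry is a commonly cited ballpark, not an exact figure:

```shell
# Rough DDT memory estimate for a hypothetical 4 TiB of unique data
# stored as 128 KiB blocks.
pool_bytes=$((4 * 1024 * 1024 * 1024 * 1024))   # 4 TiB of unique data
recordsize=$((128 * 1024))                      # 128 KiB blocks
entry_bytes=320                                 # approx. DDT entry size
blocks=$((pool_bytes / recordsize))
ddt_bytes=$((blocks * entry_bytes))
echo "${blocks} blocks -> ~$((ddt_bytes / 1024 / 1024 / 1024)) GiB of DDT"
```

Smaller recordsizes multiply the block count, and hence the DDT footprint, proportionally.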
2011/5/26 Eugen Leitl :
> How bad would raidz2 do on mostly sequential writes and reads
> (Athlon64 single-core, 4 GByte RAM, FreeBSD 8.2)?
>
> The best way to go is striping mirrored pools, right?
> I'm worried about losing the two "wrong" drives out of 8.
> These are all 7200.11 Seagates, refu
Hi,
Trying to ensure a newly created data pool gets imported on boot into a
new BE.
Scenario:
Just completed an AI install, and on the client, before I reboot, I want
to create a data pool and have this pool automatically imported on boot
into the newly installed AI Boot Env.
Trying to us
On 26-05-11 13:38, Edward Ned Harvey wrote:
> Perhaps a property could be
> set, which would store the DDT exclusively on that device.
Oh yes please, let me put my DDT on an SSD.
But what if you lose it (the vdev), would there be a way to reconstruct
the DDT (which you need to be able to delet