On Sun, Jun 17, 2012 at 03:19:18PM -0500, Timothy Coalson wrote:
> Replacing devices will not change the ashift, it is set permanently
> when a vdev is created, and zpool will refuse to replace a device in
> an ashift=9 vdev with a device that it would use ashift=12 on.
Yep.
> [..] while hitachi
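A quick way to confirm, before attempting a replace, what ashift a vdev
was created with (a sketch; 'tank' is a placeholder pool name):

    # ashift is recorded per top-level vdev in the cached pool config
    zdb -C tank | grep ashift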
On Tue, Jun 12, 2012 at 03:46:00PM +1000, Scott Aitken wrote:
> Hi all,
Hi Scott. :-)
> I have a 5 drive RAIDZ volume with data that I'd like to recover.
Yeah, still..
> I tried using Jeff Bonwick's labelfix binary to create new labels but it
> carps because the txg is not zero.
Can you provide
On Wed, Jun 13, 2012 at 05:56:56PM -0500, Timothy Coalson wrote:
> client: ubuntu 11.10
> /etc/fstab entry: :/mainpool/storage /mnt/myelin nfs bg,retry=5,soft,proto=tcp,intr,nfsvers=3,noatime,nodiratime,async 0 0
nfsvers=3
> NAME PROPERTY VALUE SOURCE
> m
On Mon, May 28, 2012 at 01:34:18PM -0700, Richard Elling wrote:
> I'd be interested in the results of such tests.
Me too, especially for databases like postgresql where there's a
complementary cache size tunable within the db that often needs to be
turned up, since they implicitly rely on some fi
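For example, in postgresql.conf (values purely illustrative; the point
is that the planner's estimate should include what the ARC caches on
the database's behalf):

    # postgresql.conf
    shared_buffers = 2GB          # postgres' own buffer cache
    effective_cache_size = 16GB   # planner hint: count the ZFS ARC too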
On Mon, May 28, 2012 at 09:23:25AM -0600, Nigel W wrote:
> After a snafu
> last week at $work where a 512 byte pool would not resilver with a 4K
> drive plugged in, it appears that (keep in mind that these are
> consumer drives) Seagate no longer manufactures the 7200.12 series
> drives which has a
On Tue, May 22, 2012 at 12:42:02PM +0400, Jim Klimov wrote:
> 2012-05-22 7:30, Daniel Carosone wrote:
>> I've done basically this kind of thing before: dd a disk and then
>> scrub rather than replace, treating errors as expected.
>
> I got into a similar situation last ni
On Mon, May 21, 2012 at 09:18:03PM -0500, Bob Friesenhahn wrote:
> On Mon, 21 May 2012, Jim Klimov wrote:
>> This is so far a relatively raw idea and I've probably missed
>> something. Do you think it is worth pursuing and asking some
>> zfs developers to make a POC? ;)
>
> I did read all of your t
On Fri, May 18, 2012 at 04:18:12PM +1000, Daniel Carosone wrote:
>
> When doing a scrub, you start at the root bp and walk the tree, doing
> reads for everything, verifying checksums, and letting repair happen
> for any errors. That traversal is either a breadth-first or
> depth-
On Fri, May 18, 2012 at 03:05:09AM +0400, Jim Klimov wrote:
> While waiting for that resilver to complete last week,
> I caught myself wondering how the resilvers (are supposed
> to) work in ZFS?
The devil finds work for idle hands... :-)
> Based on what I see in practice and read in this lis
On Mon, Apr 23, 2012 at 05:48:16AM +0200, Manuel Ryan wrote:
> After a reboot of the machine, I have no more write errors on disk 2 (only
> 4 checksum, not growing), I was able to access data which I previously
> couldn't and now only the checksum errors on disk 5 are growing.
Well, that's good, b
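The sort of checking I'd do next, as a sketch (pool and device names
are placeholders):

    zpool status -v tank       # per-device error counts, affected files
    zpool clear tank c0t5d0    # zero the counters...
    zpool scrub tank           # ...then see whether a scrub grows them again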
On Mon, Apr 23, 2012 at 02:16:40PM +1200, Ian Collins wrote:
> If it were my data, I'd set the pool read only, backup, rebuild and
> restore. You do risk further data loss (maybe even pool loss) while the
> new drive is resilvering.
You're definitely in a pickle. The first priority is to try
On Sat, Apr 14, 2012 at 09:04:45AM -0400, Edward Ned Harvey wrote:
> Then, about 2 weeks later, the support rep emailed me to say they
> implemented a new feature, which could autoresize +/- some small
> percentage difference, like 1Mb difference or something like that.
There are two elements to
On Thu, Mar 29, 2012 at 05:54:47PM +0200, casper@oracle.com wrote:
> >Is it possible to access the data from a detached device from an
> >mirrored pool.
>
> If it is detached, I don't think there is a way to get access
> to the mirror. Had you used split, you should be able to reimport it.
>
On Tue, Feb 21, 2012 at 11:12:14AM +0000, Darren J Moffat wrote:
> Did you ever do a send|recv of these filesystems ? There was a bug with
> send|recv in 151a that has since been fixed that could cause the salt to
> be zero'd out in some cases.
Ah, so that's what that was.
I hit this problem
On Fri, Jan 13, 2012 at 05:16:36AM +0400, Jim Klimov wrote:
> 2012-01-13 4:26, Richard Elling wrote:
>> On Jan 12, 2012, at 4:12 PM, Jim Klimov wrote:
>>> Alternatively (opportunistically), a flag might be set
>>> in the DDT entry requesting that a new write matching
>>> this stored checksum should
On Thu, Jan 12, 2012 at 05:01:48PM -0800, Richard Elling wrote:
> > This thread is about checksums - namely, now, what are
> > our options when they mismatch the data? As has been
> > reported by many blog-posts researching ZDB, there do
> > happen cases when checksums are broken (i.e. bitrot in
>
On Fri, Jan 13, 2012 at 04:48:44AM +0400, Jim Klimov wrote:
> As Richard reminded me in another thread, both metadata
> and DDT can contain checksums, hopefully of the same data
> block. So for deduped data we may already have a means
> to test whether the data or the checksum is incorrect...
It's
On Thu, Jan 12, 2012 at 03:05:32PM +1100, Daniel Carosone wrote:
> On Sun, Jan 08, 2012 at 06:25:05PM -0800, Richard Elling wrote:
> > ZIL makes zero impact on resilver. I'll have to check to see if L2ARC is
> > still used, but
> > due to the nature of the ARC design
On Sun, Jan 08, 2012 at 06:25:05PM -0800, Richard Elling wrote:
> ZIL makes zero impact on resilver. I'll have to check to see if L2ARC is
> still used, but
> due to the nature of the ARC design, read-once workloads like backup or
> resilver do
> not tend to negatively impact frequently used da
On Fri, Dec 02, 2011 at 01:59:37AM +0100, Ragnar Sundblad wrote:
>
> I am sorry if these are dumb questions. If there are explanations
> available somewhere for those questions that I just haven't found, please
> let me know! :-)
I'll give you a brief summary.
> 1. It has been said that when the
On Tue, Oct 11, 2011 at 08:17:55PM -0400, John D Groenveld wrote:
> Under both Solaris 10 and Solaris 11x, I receive the evil message:
> | I/O request is not aligned with 4096 disk sector size.
> | It is handled through Read Modify Write but the performance is very low.
I got similar with 4k secto
On Wed, Nov 09, 2011 at 11:09:45AM +1100, Daniel Carosone wrote:
> On Wed, Nov 09, 2011 at 03:52:49AM +0400, Jim Klimov wrote:
> > Recently I stumbled upon a Nexenta+Supermicro report [1] about
> > cluster-in-a-box with shared storage boasting an "active-active
> >
On Wed, Nov 09, 2011 at 03:52:49AM +0400, Jim Klimov wrote:
> Recently I stumbled upon a Nexenta+Supermicro report [1] about
> cluster-in-a-box with shared storage boasting an "active-active
> cluster" with "transparent failover". Now, I am not certain how
> these two phrases fit in the same sent
On Tue, Nov 01, 2011 at 06:17:57PM -0400, Edward Ned Harvey wrote:
> You can do both poorly for free, or you can do both very well for big bucks.
> That's what opensolaris was doing.
That mess was costing someone money and considered very well done?
Good riddance.
--
Dan.
On Thu, Oct 27, 2011 at 10:49:22AM +1100, afree...@mac.com wrote:
> Hi all,
>
> I'm seeing some puzzling behaviour with my RAID-Z.
>
Indeed. Start with zdb -l on each of the disks to look at the labels in more
detail.
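Something along these lines (device names are placeholders):

    # each disk carries four copies of the label; compare txg, guid and
    # pool_guid across all members
    for d in c0t0d0s0 c0t1d0s0 c0t2d0s0; do
        echo "== $d =="
        zdb -l /dev/dsk/$d
    done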
--
Dan.
On Mon, Oct 10, 2011 at 04:43:30PM -0400, James Lee wrote:
> I found an old post by Jeff Bonwick with some code that does EXACTLY
> what I was looking for [1]. I had to update the 'label_write' function
> to support the newer ZFS interfaces:
That's great!
Would someone in the community please ki
On Wed, Oct 05, 2011 at 08:19:20AM +0400, Jim Klimov wrote:
>
> Hello, Daniel,
>
> Apparently your data is represented by rather small files (thus
> many small data blocks)
It's a zvol, default 8k block size, so yes.
> , so proportion of metadata is relatively
> high, and your <4k blocks are now u
On Mon, Oct 03, 2011 at 07:34:07PM -0400, Edward Ned Harvey wrote:
> It is also very similar to running iscsi targets on ZFS,
> while letting some other servers use iscsi to connect to the ZFS server.
The SAS, IB and FCoE targets, too..
SAS might be the most directly comparable to replace a tradi
On Tue, Oct 04, 2011 at 09:28:36PM -0700, Richard Elling wrote:
> On Oct 4, 2011, at 4:14 PM, Daniel Carosone wrote:
>
> > I sent it twice, because something strange happened on the first send,
> > to the ashift=12 pool. "zfs list -o space" showed figures at least
I sent a zvol from host a, to host b, twice. Host b has two pools,
one ashift=9, one ashift=12. I sent the zvol to each of the pools on
b. The original source pool is ashift=9, and an old revision (2009_06
because it's still running xen).
I sent it twice, because something strange happened on
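The sequence was essentially this (names illustrative):

    zfs snapshot tank/vol@migrate
    zfs send tank/vol@migrate | ssh b zfs recv pool9/vol    # ashift=9 pool
    zfs send tank/vol@migrate | ssh b zfs recv pool12/vol   # ashift=12 pool
    zfs list -o space pool9/vol pool12/vol                  # compare accounting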
On Tue, Sep 27, 2011 at 08:29:03PM -0400, Edward Ned Harvey wrote:
> There is only one way for this to make sense: You did not have mirror-1 in
> the first place. You accidentally added 4 & 5 without mirroring.
Not true. 4 & 5 may have been added initially as a mirror, then 5
detached from the
On Wed, Sep 14, 2011 at 05:36:53PM -0400, Paul Kraus wrote:
> T2000 with 32 GB RAM
>
> zpool that hangs the machine by running it out of kernel memory when
> trying to import the zpool
>
> zpool has an "incomplete" snapshot from a zfs recv that it is trying
> to destroy on import
>
> I *can* imp
On Wed, Sep 14, 2011 at 04:08:19PM +0200, Hans Rosenfeld wrote:
> On Mon, Sep 05, 2011 at 02:18:48AM -0400, Daniel Carosone wrote:
> > I see via the issue tracker that there have been several updates
> > since, and an integration back into the main Illumos tree. How do I
>
On Wed, Sep 07, 2011 at 11:20:06AM +0200, Roy Sigurd Karlsbakk wrote:
> Hi all
>
> Reading the docs for the Hitachi drives, it seems CCTL (aka TLER) is settable
> for Deskstar drives. See page 97 in http://goo.gl/ER0WD
Looks like another positive for these drives over the "competition".
The same
On Wed, Sep 07, 2011 at 08:47:36AM -0600, Lori Alt wrote:
> On 09/ 6/11 11:45 PM, Daniel Carosone wrote:
>> My understanding was that 'zfs send -D' would use the pool's DDT in
>> building its own, if present.
> It does not use the pool's DDT, but it does u
On Tue, Sep 06, 2011 at 10:05:54PM -0700, Richard Elling wrote:
> On Sep 6, 2011, at 9:01 PM, Freddie Cash wrote:
>
> > For example, does 'zfs send -D' use the same DDT as the pool?
>
> No.
My understanding was that 'zfs send -D' would use the pool's DDT in
building its own, if present. If bloc
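For reference, the invocation under discussion (dataset names
illustrative):

    # -D emits a deduplicated stream; the stream's own dedup table is
    # built while sending
    zfs send -D tank/fs@snap | ssh backuphost zfs recv backup/fs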
On Tue, Aug 09, 2011 at 10:51:37AM +1000, Daniel Carosone wrote:
> On Mon, Aug 01, 2011 at 01:25:35PM +1000, Daniel Carosone wrote:
> > To be clear, the system I was working on the other day is now running
> > with a normal ashift=9 pool, on a mirror of WD 2TB EARX. Not quite
On Tue, Aug 30, 2011 at 03:53:48PM +0100, Darren J Moffat wrote:
> On 08/30/11 15:31, Edward Ned Harvey wrote:
>>> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
>>> boun...@opensolaris.org] On Behalf Of Jesus Cea
>>>
>>> 1. Is the L2ARC data stored in the SSD checksummed?. If so, c
On Mon, Aug 29, 2011 at 11:40:34PM -0400, Edward Ned Harvey wrote:
> > On Sat, Aug 27, 2011 at 08:44:13AM -0700, Richard Elling wrote:
> > > I'm getting a bit tired of people designing for fast resilvering.
> >
> > It is a design consideration, regardless, though your point is valid
> > that it sh
On Sat, Aug 27, 2011 at 08:44:13AM -0700, Richard Elling wrote:
> I'm getting a bit tired of people designing for fast resilvering.
It is a design consideration, regardless, though your point is valid
that it shouldn't be the overriding consideration.
To the original question and poster:
This
On Mon, Aug 01, 2011 at 01:25:35PM +1000, Daniel Carosone wrote:
> To be clear, the system I was working on the other day is now running
> with a normal ashift=9 pool, on a mirror of WD 2TB EARX. Not quite
> what I was hoping for, but hopefully it will be OK; I won't have much
On Wed, Aug 03, 2011 at 12:32:56PM -0700, Brandon High wrote:
> On Mon, Aug 1, 2011 at 4:27 PM, Daniel Carosone wrote:
> > The other thing that can cause a storm of tiny IOs is dedup, and this
> > effect can last long after space has been freed and/or dedup turned
> > off,
On Mon, Aug 01, 2011 at 03:10:28PM -0700, Richard Elling wrote:
> On Aug 1, 2011, at 2:16 PM, Neil Perrin wrote:
>
> > In general the blog's conclusion is correct. When file systems get full
> > there is
> > fragmentation (happens to all file systems) and for ZFS the pool uses gang
> > blocks of
On Mon, Aug 01, 2011 at 11:22:36AM +1000, Daniel Carosone wrote:
> On Fri, Jul 29, 2011 at 05:58:49PM +0200, Hans Rosenfeld wrote:
>
> > I'm working on a patch for grub that fixes the ashift=12 issue.
>
> Oh, great - and from the looks of the patch, for other
On Fri, Jul 29, 2011 at 05:58:49PM +0200, Hans Rosenfeld wrote:
> I'm working on a patch for grub that fixes the ashift=12 issue.
Oh, great - and from the looks of the patch, for other values of 12 as
well :)
> I'm probably not going to fix the div-by-zero reboot.
Fair enough, if it's an exist
.. evidently doesn't work. GRUB reboots the machine moments after
loading stage2, and doesn't recognise the fstype when examining the
disk loaded from an alternate source.
This is with SX-151. Here's hoping a future version (with grub2?)
resolves this, as well as lets us boot from raidz.
Just a
On Wed, Jul 27, 2011 at 08:27:42AM +0200, Carsten John wrote:
> Hello everybody,
>
> is there any known way to configure the point-in-time *when* the time-slider
> will snapshot/rotate?
>
> With hundreds of zfs filesystems, the daily snapshot rotation slows down a
> big file server significantl
On Wed, Jul 27, 2011 at 08:00:43PM -0500, Bob Friesenhahn wrote:
> On Tue, 26 Jul 2011, Charles Stephens wrote:
>
>> I'm on S11E 150.0.1.9 and I replaced one of the drives and the pool
>> seems to be stuck in a resilvering loop. I performed a 'zpool clear'
>> and 'zpool scrub' and just complain
> > "Processing" the request just means flagging the blocks, though, right?
> > And the actual benefits only accrue if the garbage collection / block
> > reshuffling background tasks get a chance to run?
>
> I think that's right. TRIM just gives hints to the garbage collector that
> sectors are no
On Mon, Jul 18, 2011 at 06:44:25PM -0700, Paul B. Henson wrote:
> It would be really
> nice if the aclmode could be specified on a per object level rather than
> a per file system level, but that would be considerably more difficult
> to achieve 8-/.
If there were an acl permission for "set
On Fri, Jul 15, 2011 at 07:56:25AM +0400, Jim Klimov wrote:
> 2011-07-15 6:21, Daniel Carosone wrote:
>> um, this is what xargs -P is for ...
>
> Thanks for the hint. True, I don't often use xargs.
>
> However from the man pages, I don't see a "-P" o
um, this is what xargs -P is for ...
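A sketch, assuming GNU xargs (as noted above, the native Solaris xargs
may lack -P); pool name and snapshot pattern are placeholders:

    # destroy matching snapshots 8 at a time instead of serially
    zfs list -H -t snapshot -o name -r tank \
      | grep '@zfs-auto-snap' \
      | xargs -n 1 -P 8 zfs destroy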
--
Dan.
On Thu, Jul 14, 2011 at 07:24:52PM +0400, Jim Klimov wrote:
> 2011-07-14 15:48, Frank Van Damme wrote:
>> It seems counter-intuitive - you'd say: concurrent disk access makes
>> things only slower - , but it turns out to be true. I'm deleting a
>>
On Tue, Jul 05, 2011 at 09:03:50AM -0400, Edward Ned Harvey wrote:
> > I suspect the problem is because I changed to AHCI.
>
> This is normal, no matter what OS you have. It's the hardware.
That is simply false.
> If you start using a disk in non-AHCI mode, you must always continue to use
> it
On Mon, Jul 04, 2011 at 01:11:09PM -0700, Richard Elling wrote:
> Thomas,
>
> On Jul 4, 2011, at 9:53 AM, Thomas Nau wrote:
> This is a roundabout way to do this, but it can be done without changing any
> source :-)
> With the Nexenta or Solaris iSCSI target, you can set the blocksize for a LUN.
On Sun, Jul 03, 2011 at 05:44:34PM -0500, Harry Putnam wrote:
> My zfs machine has croaked to the point that it just quits after some
> 10-15 minutes of uptime. No interesting logs or messages whatsoever.
> At least not that I've found. It just quietly quits.
>
> I'm not interested in dinking
On Thu, Jun 30, 2011 at 11:40:53PM +0100, Andrew Gabriel wrote:
> On 06/30/11 08:50 PM, Orvar Korvar wrote:
>> I have a 1.5TB disk that has several partitions. One of them is 900GB. Now I
>> can only see 300GB. Where is the rest? Is there a command I can do to reach
>> the rest of the data? Will
This article raises the concern that SSD controllers (in particular
SandForce) do internal dedup, and in particular that this could defeat
ditto-block style replication of critical metadata as done by
filesystems including ZFS.
http://storagemojo.com/2011/06/27/de-dup-too-much-of-good-thing/
Alo
On Wed, Jun 22, 2011 at 12:49:27PM -0700, David W. Smith wrote:
> # /home/dws# zpool import
> pool: tank
> id: 13155614069147461689
> state: FAULTED
> status: The pool metadata is corrupted.
> action: The pool cannot be imported due to damaged devices or data.
>see: http://www.sun.com/ms
On Wed, Jun 22, 2011 at 02:01:12PM -0700, Larry Liu wrote:
> You can try
>
> #fdisk /dev/rdsk/c5d0t0p0
Or just dd /dev/zero over the raw device, eject and start from clean.
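For instance (device name is a placeholder; on x86, p0 addresses the
whole disk):

    dd if=/dev/zero of=/dev/rdsk/c5t0d0p0 bs=1M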
--
Dan.
On Sun, Jun 19, 2011 at 08:03:25AM -0700, Richard Elling wrote:
> Yes. I've been looking at what the value of zfs_vdev_max_pending should be.
> The old value was 35 (a guess, but a really bad guess) and the new value is
> 10 (another guess, but a better guess). I observe that data from a fast,
>
On Fri, Jun 17, 2011 at 07:41:41AM -0400, Edward Ned Harvey wrote:
> > From: Daniel Carosone [mailto:d...@geek.com.au]
> > Sent: Thursday, June 16, 2011 11:05 PM
> >
> > the [sata] channel is idle, blocked on command completion, while
> > the heads seek.
>
>
On Thu, Jun 16, 2011 at 10:40:25PM -0400, Edward Ned Harvey wrote:
> > From: Daniel Carosone [mailto:d...@geek.com.au]
> > Sent: Thursday, June 16, 2011 10:27 PM
> >
> > Is it still the case, as it once was, that allocating anything other
> > than whole disks as vdev
On Thu, Jun 16, 2011 at 09:15:44PM -0400, Edward Ned Harvey wrote:
> My personal preference, assuming 4 disks, since the OS is mostly reads and
> only a little bit of writes, is to create a 4-way mirrored 100G partition
> for the OS, and the remaining 900G of each disk (or whatever) becomes either
On Thu, Jun 16, 2011 at 07:06:48PM +0200, Roy Sigurd Karlsbakk wrote:
> > I have decided to bite the bullet and change to 2TB disks now rather
> > than go through all the effort using 1TB disks and then maybe changing
> > in 6-12 months time or whatever. The price difference between 1TB and
> > 2TB
On Wed, Jun 15, 2011 at 07:19:05PM +0200, Roy Sigurd Karlsbakk wrote:
>
> Dedup is known to require a LOT of memory and/or L2ARC, and 24GB isn't really
> much with 34TBs of data.
The fact that your second system lacks the l2arc cache device is absolutely
your prime suspect.
--
Dan.
On Wed, Jun 08, 2011 at 11:44:16AM -0700, Marty Scholes wrote:
> And I looked in the source. My C is a little rusty, yet it appears
> that prefetch items are not stored in L2ARC by default. Prefetches
> will satisfy a good portion of sequential reads but won't go to
> L2ARC.
Won't go to L2ARC
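There is a tunable governing this; a hedged sketch for /etc/system
(verify the tunable name against your build's source before relying
on it):

    * allow prefetched (streaming) reads to be eligible for L2ARC
    set zfs:l2arc_noprefetch = 0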
On Sun, Jun 05, 2011 at 01:26:20PM -0500, Tim Cook wrote:
> I'd go with the option of allowing both a weighted and a forced option. I
> agree though, if you do primarycache=metadata, the system should still
> attempt to cache userdata if there is additional space remaining.
I think I disagree. R
Edward Ned Harvey writes:
> > If you consider the extreme bias... If the system would never give up
> > metadata in cache until all the cached data were gone... Then it would be
> > similar to the current primarycache=metadata, except that the system would
> > be willing to cache data too, wh
On Thu, Jun 02, 2011 at 09:59:39PM -0400, Edward Ned Harvey wrote:
> > From: Daniel Carosone [mailto:d...@geek.com.au]
> > Sent: Thursday, June 02, 2011 9:03 PM
> >
> > Separately, with only 4G of RAM, i think an L2ARC is likely about a
> > wash, since L2ARC entri
Thanks, I like this summary format and the effort it took
to produce seems well-spent.
On Thu, Jun 02, 2011 at 08:50:58PM -0400, Edward Ned Harvey wrote:
> > but I figured spending 500G on ZIL
> > would be unwise.
>
> You couldn't possibly ever use 500G of ZIL, because the ZIL is required to
>
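Back-of-envelope, under stated assumptions (10 GbE ingest, ~5 s txg
interval, at most two txgs of sync data outstanding): 2 x 5 s x
~1.2 GB/s is roughly 12 GB, far short of 500 GB even for a
pathological all-sync workload.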
On Wed, Jun 01, 2011 at 07:42:24AM -0600, Mark Shellenbaum wrote:
>
> Looks like the linux client did a chmod(2) after creating the file.
I bet this is it, and this seems to have been ignored in the later thread.
> what happens when you create a file locally in that directory on the
> solaris s
On Wed, Jun 01, 2011 at 05:45:14AM +0400, Jim Klimov wrote:
> Also, in a mirroring scenario is there any good reason to keep a warm spare
> instead of making a three-way mirror right away (beside energy saving)?
> Rebuild times and non-redundant windows can be decreased considerably ;)
Perhaps wh
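For the record, growing a two-way mirror to three-way is a one-liner
(device names are placeholders):

    # attach a third side to the mirror containing c0t0d0; resilver
    # starts immediately
    zpool attach tank c0t0d0 c0t2d0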
On Tue, May 31, 2011 at 05:32:47PM +0100, Matt Keenan wrote:
> Jim,
>
> Thanks for the response, I've nearly got it working, coming up against a
> hostid issue.
>
> Here's the steps I'm going through :
>
> - At end of auto-install, on the client just installed before I manually
> reboot I do th
On Wed, Jun 01, 2011 at 10:16:28AM +1000, Daniel Carosone wrote:
> On Tue, May 31, 2011 at 06:57:53PM -0400, Edward Ned Harvey wrote:
> > If you make it a 3-way mirror, your write performance will be unaffected,
> > but your read performance will increase 50% over a 2-way mirror. A
On Tue, May 31, 2011 at 06:57:53PM -0400, Edward Ned Harvey wrote:
> If you make it a 3-way mirror, your write performance will be unaffected,
> but your read performance will increase 50% over a 2-way mirror. All 3
> drives can read different data simultaneously for the net effect of 3x a
> singl
On Fri, May 27, 2011 at 07:28:06AM -0400, Edward Ned Harvey wrote:
> > From: Daniel Carosone [mailto:d...@geek.com.au]
> > Sent: Thursday, May 26, 2011 8:19 PM
> >
> > Once your data is dedup'ed, by whatever means, access to it is the
> > same. You need enough
On Wed, May 25, 2011 at 11:54:16PM -0700, Matthew Ahrens wrote:
> >
> > On Thu, May 12, 2011 at 08:52:04PM +1000, Daniel Carosone wrote:
> > > Other than the initial create, and the most
> > > recent scrub, the history only contains a sequence of auto-snapshot
On Thu, May 26, 2011 at 10:25:04AM -0400, Edward Ned Harvey wrote:
> (2) Now, in a pool with 2.4M unique blocks and dedup enabled (no verify), a
> test file requires 10m38s to write and 2m54s to delete, but with dedup
> disabled it only requires 0m40s to write and 0m13s to delete exactly the
> same
On Fri, May 27, 2011 at 04:32:03AM +0400, Jim Klimov wrote:
> One more rationale in this idea is that with deferred dedup
> in place, the DDT may be forced to hold only non-unique
> blocks (2+ references), and would require less storage in
> RAM, disk, L2ARC, etc. - in case we agree to remake the
>
On Thu, May 26, 2011 at 07:38:05AM -0400, Edward Ned Harvey wrote:
> > From: Daniel Carosone [mailto:d...@geek.com.au]
> > Sent: Wednesday, May 25, 2011 10:10 PM
> >
> > These are additional
> > iops that dedup creates, not ones that it substitutes for others in
>
On Thu, May 26, 2011 at 09:04:04AM -0700, Brandon High wrote:
> On Thu, May 26, 2011 at 8:37 AM, Edward Ned Harvey
> wrote:
> > Question: Is it possible, or can it easily become possible, to periodically
> > dedup a pool instead of keeping dedup running all the time? It is easy to
>
> I think i
On Thu, May 26, 2011 at 08:20:03AM -0400, Edward Ned Harvey wrote:
> > From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> > boun...@opensolaris.org] On Behalf Of Daniel Carosone
> >
> > On Wed, May 25, 2011 at 10:59:19PM +0200, Roy Sigurd Karlsbakk wrote:
On Thu, May 12, 2011 at 08:52:04PM +1000, Daniel Carosone wrote:
> On Thu, May 12, 2011 at 10:04:19AM +0100, Darren J Moffat wrote:
> > There is a possible bug in in that area too, and it is only for the
> > keysource=passphrase case.
>
> Ok, sounds like it's
On Wed, May 25, 2011 at 10:59:19PM +0200, Roy Sigurd Karlsbakk wrote:
> The systems where we have had issues, are two 100TB boxes, with some
> 160TB "raw" storage each, so licensing this with nexentastor will be
> rather expensive. What would you suggest? Will a solaris express
> install give us go
On Wed, May 25, 2011 at 03:50:09PM -0700, Matthew Ahrens wrote:
> That said, for each block written (unique or not), the DDT must be updated,
> which means reading and then writing the block that contains that dedup
> table entry, and the indirect blocks to get to it. With a reasonably large
> DD
On Thu, May 12, 2011 at 12:23:55PM +1000, Daniel Carosone wrote:
> They were also sent from an ashift=9 to an ashift=12 pool
This reminded me to post a note describing how I made pools with
different ashift. I do this both for pools on usb flash sticks, and
on disks with an underlying
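One way this can be done, as a sketch (not necessarily the exact
procedure from that note; device names are placeholders): create the
pool on a device that reports 4 KiB sectors, then move it onto the
real disk, since the vdev keeps its ashift:

    zpool create -f tank c9t0d0        # device reports 4 KiB sectors -> ashift=12
    zpool replace tank c9t0d0 c0t1d0   # replacement keeps the vdev's ashift
    zdb -C tank | grep ashift          # verify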
On Tue, Jul 06, 2010 at 05:29:54PM +0200, Arne Jansen wrote:
> Daniel Carosone wrote:
> > Something similar would be useful, and much more readily achievable,
> > from ZFS from such an application, and many others. Rather than a way
> > to compare reliably between two fil
On Wed, Jun 30, 2010 at 12:54:19PM -0400, Edward Ned Harvey wrote:
> If you're talking about streaming to a bunch of separate tape drives (or
> whatever) on a bunch of separate systems because the recipient storage is
> the bottleneck instead of the network ... then "split" probably isn't the
> mos
On Tue, May 11, 2010 at 04:15:24AM -0700, Bertrand Augereau wrote:
> Is there a O(nb_blocks_for_the_file) solution, then?
>
> I know O(nb_blocks_for_the_file) == O(nb_bytes_in_the_file), from Mr.
> Landau's POV, but I'm quite interested in a good constant factor.
If you were considering the hash
On Sun, May 09, 2010 at 09:24:38PM -0500, Mike Gerdts wrote:
> The best thing to do with processes that can be swapped out forever is
> to not run them.
Agreed, however:
#1 Shorter values of "forever" (like, say, "daily") may still be useful.
#2 This relies on knowing in advance what these proc
On Wed, May 05, 2010 at 04:34:13PM -0400, Edward Ned Harvey wrote:
> The suggestion I would have instead, would be to make the external drive its
> own separate zpool, and then you can incrementally "zfs send | zfs receive"
> onto the external.
I'd suggest doing both, to different destinations :)
On Tue, Apr 27, 2010 at 10:36:37AM +0200, Roy Sigurd Karlsbakk wrote:
> - "Daniel Carosone" skrev:
> > SAS: Full SCSI TCQ
> > SATA: Lame ATA NCQ
>
> What's so lame about NCQ?
Primarily, the meager number of outstanding requests; write cache is
needed to
On Tue, Apr 27, 2010 at 11:29:04AM -0600, Cindy Swearingen wrote:
> The revised ZFS Administration Guide describes the ZFS version
> descriptions and the Solaris OS releases that provide the version
> and feature, starting on page 293, here:
>
> http://hub.opensolaris.org/bin/view/Community+Group+z
On Mon, Apr 26, 2010 at 10:02:42AM -0700, Chris Du wrote:
> SAS: full duplex
> SATA: half duplex
>
> SAS: dual port
> SATA: single port (some enterprise SATA has dual port)
>
> SAS: 2 active channel - 2 concurrent write, or 2 read, or 1 write and 1 read
> SATA: 1 active channel - 1 read or 1 write
On Thu, Apr 22, 2010 at 09:58:12PM -0700, thomas wrote:
> Assuming newer version zpools, this sounds like it could be even
> safer since there is (supposedly) less of a chance of catastrophic
> failure if your ramdisk setup fails. Use just one remote ramdisk or
> two with battery backup.. whatever
On Tue, Apr 20, 2010 at 12:55:10PM -0600, Cindy Swearingen wrote:
> You can use the OpenSolaris beadm command to migrate a ZFS BE over
> to another root pool, but you will also need to perform some manual
> migration steps, such as
> - copy over your other rpool datasets
> - recreate swap and dump
I have certainly moved a root pool from one disk to another, with the
same basic process, ie:
- fuss with fdisk and SMI labels (sigh)
- zpool create
- snapshot, send and recv
- installgrub
- swap disks
I looked over the "root pool recovery" section in the Best Practices guide
at the time,
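In command form, roughly (disk, slice and BE names are illustrative):

    zpool create -f rpool2 c0t1d0s0                # after fdisk/SMI labelling
    zfs snapshot -r rpool@move
    zfs send -R rpool@move | zfs recv -Fdu rpool2
    zpool set bootfs=rpool2/ROOT/mybe rpool2       # don't forget bootfs
    installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c0t1d0s0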
On Mon, Apr 19, 2010 at 03:37:43PM +1000, Daniel Carosone wrote:
> the filesystem holding /etc/zpool.cache
or, indeed, /etc/zfs/zpool.cache :-)
--
Dan.
On Sun, Apr 18, 2010 at 07:37:10PM -0700, Don wrote:
> I'm not sure to what you are referring when you say my "running BE"
Running boot environment - the filesystem holding /etc/zpool.cache
--
Dan.
On Sun, Apr 18, 2010 at 10:33:36PM -0500, Bob Friesenhahn wrote:
> Probably the DDRDrive is able to go faster since it should have lower
> latency than a FLASH SSD drive. However, it may have some bandwidth
> limits on its interface.
It clearly has some. They're just as clearly well in excess