On 11/26/2012 12:54 PM, Grégory Giannoni wrote:
[snip]
I switched a few months ago from Sun X45x0 to HP gear: my fast NAS boxes are now DL180
G6s. I got better performance using the LSI 9240-8i than the HP Smart Array (tried the P410
and P812). I'm using only 600GB SSD drives.
The LSI controller supports SATA
On 11/24/2012 5:17 AM, Edmund White wrote:
Heh, I wouldn't be using G5's for ZFS purposes now. G6 and better
ProLiants are a better deal for RAM capacity and CPU core count…
Either way, I also use HP systems as the basis for my ZFS/Nexenta storage
systems. Typically DL380's, since I have
On 11/23/2012 5:50 AM, Edward Ned Harvey
(opensolarisisdeadlongliveopensolaris) wrote:
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Jim Klimov
I wonder if it would make weird sense to get the boxes, forfeit the
cool-looking Fishworks,
Do make sure you're getting one that has the proper firmware.
Those with BIOS don't work in SPARC boxes, and those with OpenBoot don't
work in x64 stuff.
A quick Sun FC HBA search on ebay turns up a whole list of stuff
that's official Sun HBAs, which will give you an idea of the (max)
On 8/6/2012 2:53 PM, Bob Friesenhahn wrote:
On Mon, 6 Aug 2012, Stefan Ring wrote:
Intel's brief also clears up a prior controversy over what types of
data are actually cached: per the brief, it's both user and system
data!
So you're saying that SSDs don't generally flush data to stable medium
On 5/5/2012 8:04 AM, Bob Friesenhahn wrote:
On Fri, 4 May 2012, Erik Trimble wrote:
predictable, and the backing store is still only giving 1 disk's
IOPS. The RAIDZ* may, however, give you significantly more
throughput (in MB/s) than a single disk if you do a lot of sequential
read or write
On 5/4/2012 1:24 PM, Peter Tribble wrote:
On Thu, May 3, 2012 at 3:35 PM, Edward Ned Harvey
opensolarisisdeadlongliveopensola...@nedharvey.com wrote:
I think you'll get better performance and reliability both, if you break each
of those 15-disk raidz3's into three 5-disk raidz1's. Here's why:
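As a sketch of that layout (the pool and disk names are hypothetical, not from the thread), the three 5-disk raidz1's would go into a single pool like this:

    # one pool, three 5-disk raidz1 vdevs; ZFS stripes writes across the vdevs,
    # so random IOPS scale with the vdev count rather than the disk count
    zpool create tank \
        raidz1 c1t0d0 c1t1d0 c1t2d0 c1t3d0 c1t4d0 \
        raidz1 c1t5d0 c1t6d0 c1t7d0 c2t0d0 c2t1d0 \
        raidz1 c2t2d0 c2t3d0 c2t4d0 c2t5d0 c2t6d0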
On 3/24/2012 4:54 PM, The Honorable Senator and Mrs. John Blutarsky wrote:
laotsu said:
well check this link
https://shop.oracle.com/pls/ostore/product?p1=SunFireX4270M2server&p2=&p3=&p4=&sc=ocom_x86_SunFireX4270M2server&tz=-4:00
you may not like the price
Hahahah! Thanks for the
On 1/14/2012 8:15 AM, Anil Jangity wrote:
I have a couple of Sun/Oracle X2270 boxes and am planning to get some 2.5"
Intel 320 SSDs for the rpool.
Do you happen to know what kind of bracket is required to get the 2.5" SSD to fit
into the 3.5" slots?
Thanks
Anything that looks like this:
On 1/4/2012 2:59 PM, grant lowe wrote:
Hi all,
I've got Solaris 10 9/10 running on a T3. It's an Oracle box with
128GB memory. I've been trying to load test the box
with bonnie++. I can get 80 to 90K writes, but can't seem to
get more than a couple K for reads.
On 12/12/2011 12:23 PM, Richard Elling wrote:
On Dec 11, 2011, at 2:59 PM, Mertol Ozyoney wrote:
Not exactly. What is dedup'ed is the stream only, which is in fact not very
efficient. Real dedup-aware replication takes the necessary steps to
avoid sending a block that exists on the other
On 12/1/2011 4:59 PM, Ragnar Sundblad wrote:
I am sorry if these are dumb questions. If there are explanations
available somewhere for those questions that I just haven't found, please
let me know! :-)
1. It has been said that when the DDT entries, some 376 bytes or so, are
rolled out on L2ARC,
On 12/1/2011 6:44 PM, Ragnar Sundblad wrote:
Thanks for your answers!
On 2 dec 2011, at 02:54, Erik Trimble wrote:
On 12/1/2011 4:59 PM, Ragnar Sundblad wrote:
I am sorry if these are dumb questions. If there are explanations
available somewhere for those questions that I just haven't found
It occurs to me that your filesystems may not be in the same state.
That is, destroy both pools. Recreate them, and run the tests. This
will eliminate any possibility of allocation issues.
-Erik
On 10/27/2011 10:37 AM, weiliam.hong wrote:
Hi,
Thanks for the replies. In the beginning, I
On 10/14/2011 5:49 AM, Darren J Moffat wrote:
On 10/14/11 13:39, Jim Klimov wrote:
Hello, I was asked if the CF port in Thumpers can be accessed by the OS?
In particular, would it be a good idea to use a modern 600x CF card
(some reliable one intended for professional photography) as an L2ARC
On 9/27/2011 10:39 AM, Bob Friesenhahn wrote:
On Tue, 27 Sep 2011, Matt Banks wrote:
Also, maybe I read it wrong, but why is it that (in the previous
thread about hw raid and zpools) zpools with large numbers of
physical drives (eg 20+) were frowned upon? I know that ZFS!=WAFL
There is no
writes.
Yes. You can attach a ZIL or L2ARC device anytime after the pool is created.
Also, I think you want an Intel 320, NOT the 311, for use as a ZIL. The
320 includes capacitors, so if you lose power, your ZIL doesn't lose
data. The 311 DOESN'T include capacitors.
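A minimal sketch of attaching both after the fact (the pool and device names are hypothetical):

    # 'log' adds a separate ZIL device (slog); 'cache' adds an L2ARC device
    zpool add tank log c3t0d0
    zpool add tank cache c3t1d0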
Honestly, I think TRIM isn't really useful for anyone. It took too long
to get pushed out to the OSes, and the SSD vendors seem to have just
compensated by making a smarter controller able to do better
reallocation. Which, to me, is the better ideal, in any case.
On 7/25/2011 6:43 AM, Edward Ned Harvey wrote:
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Erik Trimble
Honestly, I think TRIM isn't really useful for anyone.
I'm going to have to disagree.
There are only two times when TRIM isn't
On 7/25/2011 4:28 AM, Tomas Ögren wrote:
On 25 July, 2011 - Erik Trimble sent me these 2,0K bytes:
On 7/25/2011 3:32 AM, Orvar Korvar wrote:
How long have you been using an SSD? Do you see any performance decrease? I
mean, ZFS does not support TRIM, so I wonder about long-term effects
noticeable impact - the SSD is constantly being used, and
has no time for GC. It's stuck in the read-erase-modify-write cycle
even with TRIM.
framework - not sure about the SAS framework)
drive is c1t0d0, you'll have to use c1t0d0s0.
. That way, most of this could be done in
hardware seamlessly.
On 6/27/2011 1:13 PM, David Magda wrote:
On Mon, June 27, 2011 15:24, Erik Trimble wrote:
[...]
I'm always kind of surprised that there hasn't been a movement to create
standardized crypto commands, like the various FP-specific commands that
are part of MMX/SSE/etc. That way, most
in ascending
order by ATA ID, SCSI ID, SAS WWN, or FC WWN.
The naming rules can get a bit complex.
large shops with thousands of spindles handle this.
We pay for the brand-name disk enclosures or servers where the
fault-management stuff is supported by Solaris.
Including the blinky lights.
<grin>
On 6/16/2011 12:09 AM, Simon Walter wrote:
On 06/16/2011 09:09 AM, Erik Trimble wrote:
We had a similar discussion a couple of years ago here, under the
title "A Versioning FS". Look through the archives for the full
discussion.
The gist is that application-level versioning (and consistency
*think* there is a better way to get to the file
history/version information now.
different creatures.
Nehalem and later CPUs have this feature, and I'm pretty sure all AMD
Magny-Cours and later CPUs do also.
Without V-IO, doing anything that pounds on a disk under *any*
Virtualization product is sure to make you cry.
generally say that
2x6 raidz2 vdevs would be better than either 1x12 raidz3, 4x3 raidz1, or
3x4 raidz1, for a home server not looking for super-critical protection
(in which case, you should be using mirrors with spares, not raidz*).
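A sketch of the recommended layout (disk names hypothetical):

    # twelve disks as two 6-disk raidz2 vdevs: two vdevs' worth of random
    # IOPS, and each vdev survives any two disk failures
    zpool create tank \
        raidz2 c1t0d0 c1t1d0 c1t2d0 c1t3d0 c1t4d0 c1t5d0 \
        raidz2 c1t6d0 c1t7d0 c2t0d0 c2t1d0 c2t2d0 c2t3d0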
On 6/2/2011 5:12 PM, Jens Elkner wrote:
On Wed, Jun 01, 2011 at 06:17:08PM -0700, Erik Trimble wrote:
On Wed, 2011-06-01 at 12:54 -0400, Paul Kraus wrote:
Here's how you calculate (on average) how long a random IOP takes:
seek time + ((60 / RPM) / 2)
A truly sequential IOP is:
(60 / RPM
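Plugging numbers into the first formula for a 7200 RPM SATA drive (the ~8.5 ms average seek is an assumed typical figure, not from the thread):

    rotational delay   = (60 / 7200) / 2 = 4.17 ms
    average random IOP = 8.5 ms + 4.17 ms = ~12.7 ms, i.e. roughly 79 IOPS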
version number advances.
Solaris 11 is due out RSN, which means probably sometime before the end
of the calendar year. But who knows, and Oracle hasn't officially
announced a launch date for S11.
from something like 'iostat', which is
measuring not the *actual* writes to physical disk, but the *requested*
write operations.
import the pool
and use the features your current OS supports, but that's pretty darned
dicey, and I'd be very happy if importing worked when both systems
supported the same featureset.
. It *could* have patent issues from
NetApp.
The possible impact of that is beyond my knowledge. IANAL. Nor do I
speak for Oracle in any manner, official or unofficial.
];   # (while-loop head truncated in the snippet)
do
    j=$((10 + (1 * 1)))
    ./run_your_script "$j"
    sync
    sleep 10
    i=$((i + 1))
done
I/O.
Such deletion should take milliseconds to a minute or so.
of 6.001 problem sets, written by Prof
Sussman sometime in the 1980s.
(yes, I went to MIT.)
). Frankly,
at this point, I'd almost change the design to REQUIRE an L2ARC device in
order to turn on Dedup.
On 5/4/2011 2:54 PM, Ray Van Dolson wrote:
On Wed, May 04, 2011 at 12:29:06PM -0700, Erik Trimble wrote:
(2) Block size: a 4k block size will yield better dedup than a 128k
block size, presuming reasonable data turnover. This is inherent, as
any single bit change in a block will make it non
configs out to a total of maybe
100 clients, and probably never exceed 100GB max on the deduped end.
Which means that I'll be able to get away with 16GB of RAM for the whole
server, comfortably.
On 5/4/2011 4:17 PM, Ray Van Dolson wrote:
On Wed, May 04, 2011 at 03:49:12PM -0700, Erik Trimble wrote:
On 5/4/2011 2:54 PM, Ray Van Dolson wrote:
On Wed, May 04, 2011 at 12:29:06PM -0700, Erik Trimble wrote:
(2) Block size: a 4k block size will yield better dedup than a 128k
block size
On 5/4/2011 4:44 PM, Tim Cook wrote:
On Wed, May 4, 2011 at 6:36 PM, Erik Trimble <erik.trim...@oracle.com> wrote:
On 5/4/2011 4:14 PM, Ray Van Dolson wrote:
On Wed, May 04, 2011 at 02:55:55PM -0700, Brandon High wrote:
On Wed, May 4
to avoid is having the OS image written to, and waiting
for any other configuration and customization to happen AFTER it was
placed on the ZFS server is sub-optimal.
have 19GB of RAM in your system, with 16GB being a
likely reasonable amount under most conditions (e.g. typical dedup ARC
size is going to be ~3.5G, not the 7G maximum used above).
star).
That said, rsync is really the only solution if you have a partial or
interrupted copy. It's also really the best method to do verification.
just a lowly Java Platform Group dude. Solaris ain't my silo.
as a test and does all the above calculations, to
see how dedup would work on a given dataset. 'zdb -S' sorta, kinda does
that, but...
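For anyone who wants to try that simulation, it runs against a live pool without enabling dedup (pool name hypothetical):

    # prints a simulated DDT histogram and an estimated dedup ratio,
    # without changing anything on disk
    zdb -S tank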
/kmem.c#920
Ugg. I hadn't even thought of memory alignment/allocation issues.
Pizza: Mushroom and anchovy - er, just kidding.
Neil.
And, let me say: Yuck! What is that, an ISO-standard pizza? Disgusting.
ANSI-standard pizza, all the way! (pepperoni & mushrooms)
a checksum algorithm specific to dedup (i.e. there's no way to
override the default for dedup).
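For context, dedup itself is toggled per dataset; a minimal sketch (dataset name hypothetical):

    zfs set dedup=on tank/data       # dedup with the default checksum
    zfs set dedup=verify tank/data   # add a byte-for-byte compare on checksum matches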
to generate a list of files from that list of
blocks.
On 4/26/2011 3:59 AM, Fred Liu wrote:
-----Original Message-----
From: Erik Trimble [mailto:erik.trim...@oracle.com]
Sent: Tuesday, April 26, 2011 12:47
To: Ian Collins
Cc: Fred Liu; ZFS discuss
Subject: Re: [zfs-discuss] How does ZFS dedup space accounting work
with quota?
On 4/25/2011 6:23 PM
On 4/26/2011 9:29 AM, Fred Liu wrote:
From: Erik Trimble [mailto:erik.trim...@oracle.com]
It is true, quota is applied to logical data, not physical data.
Let's assume an interesting scenario -- say the pool is 100% full in logical
data
(such as 'df' tells you 100% used) but not full
) at least a 14G L2ARC device for dedup + 10G more of RAM
for the DDT L2ARC requirements + 1GB of RAM for every 20GB of additional
space in the L2ARC cache beyond that used by the DDT.
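As a rough worked example of where numbers like these come from, using the ~376 bytes per DDT entry cited elsewhere in this archive (the data and block sizes are assumptions for illustration):

    1 TB of unique data in 128 KB blocks ≈ 8.4M entries x 376 B ≈ 3 GB of DDT
    the same 1 TB in 4 KB blocks ≈ 268M entries x 376 B ≈ 100 GB of DDT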
consumption of 70
(50+20 deduped) + 30+30+80+100 (unique data) = 310MB of actual storage,
for 400MB of apparent storage (i.e. a dedup ratio of 1.29:1).
A, B, C, and D would each still have a quota usage of 100MB.
/unified-storage/index.html
On 4/8/2011 12:37 AM, Ian Collins wrote:
On 04/ 8/11 06:30 PM, Erik Trimble wrote:
On 4/7/2011 10:25 AM, Chris Banal wrote:
While I understand everything at Oracle is top secret these days,
does anyone have any insight into a next-gen X4500 / X4540? Does
some other Oracle / Sun partner make
SATA
model with (2) 6-core Westmeres + 16GB RAM.
justification
<wry smile>
I want my J4000's back, too. And, I still want something like HP's MSA
70 (25 x 2.5" drive JBOD in a 2U form factor)
idea is for certain test machines, where you expect frequent memory
dumps (in /var/crash) - if you have a large amount of RAM, you'll need a
lot of disk space, so it might be good to limit /var in this case by
making it a separate dataset.
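A minimal sketch of capping such a dataset, assuming /var is already split out as its own dataset (the dataset name and size are hypothetical):

    # cap /var so a pile of crash dumps can't fill the pool
    zfs set quota=30g rpool/ROOT/myBE/var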
...
It's here that I think Solaris' strengths can beat its competitors, and
where its weaknesses aren't significant.
Sadly, I think Solaris' future as a general-purpose OS is likely finished.
Of course, that's just my reading of the tea leaves...
it, of course, is up to the end-user.
-discuss/2010-September/thread.html#44633
I think this message by Erik Trimble is a good summary:
hmmm... I must've missed that one, otherwise I would have said...
Scenario 1: I have 5 1TB disks in a raidz1, and I assume I have 128k slab
sizes. Thus, I have 32k of data for each slab written
Nah, probably just a Beehive (our mail system) burp. Happens a lot.
Besides, it's 8:45 PST here, and I'm still at work. :-)
you carry the drive
over there later.
As Richard mentioned, that snapshot is unique, and it doesn't matter
that you recovered it onto an external drive first, then copied that
snapshot over to the backup machine. It's a frozen snapshot, so you're
all good for future incrementals.
On 2/15/2011 1:37 PM, Torrey McMahon wrote:
On 2/14/2011 10:37 PM, Erik Trimble wrote:
That said, given that SAN NVRAM caches are true write caches (and not
a ZIL-like thing), it should be relatively simple to swamp one with
write requests (most SANs have little more than 1GB of cache
several seconds) isn't large, but
where latency is critical. For larger I/O requests (or for consistent,
sustained I/O of more than small amounts), all bets are off as far as
any possible advantage of multiple LUNs/arrays.
(and there are reports it does occasionally),
you'll need to boot without the /etc/system: append the '-a' flag to the
end of the GRUB menu entry that you boot from. This will push you into
an interactive boot where, when it asks you for a /etc/system to use,
you specify /dev/null.
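Roughly, the edited GRUB entry would look like this (a sketch; the exact kernel path and findroot line vary by release and pool name):

    title Solaris (interactive boot)
    findroot (pool_rpool,0,a)
    kernel$ /platform/i86pc/kernel/$ISADIR/unix -B $ZFS-BOOTFS -a
    module$ /platform/i86pc/$ISADIR/boot_archive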
still very unlikely to hit a CPU bottleneck
before RAM starvation or disk wait occurs.
in the above: develop your app, using it
on UFS w/directio to work out the application issues and tune. When you
deploy it, use ZFS and its caching techniques to get maximum (though not
absolutely consistently measurable) performance for the already-tuned app.
can't
imagine it would require any reformatting or reinstalling.
VLSI Engineer to do
the design. <wink>
properly
usable.
got the same 3k of data in both files.
).
And, I doubt 8GB for ARC is sufficient, either, for a DDT consuming over
100GB of space.
of what type of
vdev it is composed of.
a bare minimum of 4GB of RAM (8GB for anything other than light use),
even with Dedup turned off.
on Dedup, you need at least 8GB of RAM to go
with the SSD.
-Erik
On Tue, 2011-01-18 at 18:35 +, Michael Armstrong wrote:
Thanks everyone, I think over time I'm gonna update the system to include an
SSD for sure. Memory may come later though. Thanks for everyone's responses.
side of the mirror has against what the
bad side has, and only copies the differences over to sync them up.
This is one of ZFS's great strengths, in that most other RAID systems
can't do this.
. It will always look at the replaced drive to see if it was a
prior member of the mirror, and do smart resilvering if possible.
If the device path stays the same (which, hopefully, it should), you can
even do:
zpool replace (old device) (old device)
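i.e., with a hypothetical pool and disk name, after swapping the failed disk into the same slot:

    zpool replace tank c1t2d0 c1t2d0
    # or, since the old and new names match, simply:
    zpool replace tank c1t2d0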
else's
implementation than have to do it myself from scratch.
I'd prefer a private contact, as I realize that such work may not be
ready for public discussion yet.
Thanks, folks!
Oh, and this is completely just me, not Oracle talking in any way.
plug directly into a standard 3.5" SAS/SATA hotswap bay...
And, of course, the ANS9010 is limited to the SATA2 interface speed, so
it is cheaper and lower-performing (but still better than an SSD) than
the DDRdrive.
basis for
a lawsuit doesn't prevent one from being dragged through the (U.S.)
courts for the better part of a decade.
<sigh>
Why can't we have a loser-pays civil system like every other civilized
country?
at 7th-grade level, so I might have
missed some subtleties...)
[As obvious as it is, it should be pointed out, I'm making these statements as
a very personal opinion, and I'm certain Oracle wouldn't have the same one. I
in no way represent Oracle.]
On 12/25/2010 11:19 AM, Tim Cook wrote:
On Sat, Dec 25, 2010 at 1:10 PM, Erik Trimble <erik.trim...@oracle.com> wrote:
On 12/25/2010 6:25 AM, Edward Ned Harvey wrote:
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-boun
be the right people to talk to.
,
moving away from the current one-page-of-multiple-blocks as the atomic
entity of writing, and straight to a one-block-per-page setup. Don't
hold your breath.
simple thing to overwhelm the very limited cache on such
a controller, in which case your performance tanks again.
cheaper than buying a DDRdrive. <wink>
, as the problem is media-specific.
that seems to work well
(under simulation). I'm sure it could use some performance improvement,
but it works reasonably well on a simulated fragmented pool.
Please, Santa, can a good little boy get a BP-rewrite code commit in his
stocking for Christmas?
) - it is triggered by a system condition.