On 7/1/2010 10:17 PM, Neil Perrin wrote:
On 07/01/10 22:33, Erik Trimble wrote:
On 7/1/2010 9:23 PM, Geoff Nordli wrote:
Hi Erik.
Are you saying the DDT will automatically look to be stored in an
L2ARC device if one exists in the pool, instead of using ARC?
Or is there some sort of memory
On 7/2/2010 6:30 AM, Neil Perrin wrote:
On 07/02/10 00:57, Erik Trimble wrote:
That's what I assumed. One further thought, though: is the DDT
treated as a single entity - so it's *all* either in the ARC or in
the L2ARC? Or does it move one entry at a time into the L2ARC as it
fills the
np == Neil Perrin neil.per...@oracle.com writes:
np The L2ARC just holds blocks that have been evicted from the
np ARC due to memory pressure. The DDT is no different than any
np other object (e.g. file).
The other cacheable objects require pointers to stay in the ARC pointing to
On 07/02/10 11:14, Erik Trimble wrote:
On 7/2/2010 6:30 AM, Neil Perrin wrote:
On 07/02/10 00:57, Erik Trimble wrote:
That's what I assumed. One further thought, though: is the DDT
treated as a single entity - so it's *all* either in the ARC or in
the L2ARC? Or does it move one entry at
Actually, I think the rule-of-thumb is 270 bytes/DDT entry. It's 200
bytes of ARC for every L2ARC entry.
DDT doesn't count for this ARC space usage.
E.g.: I have 1TB of 4k files that are to be deduped, and it turns
out that I have about a 5:1 dedup ratio. I'd also like to see
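For what it's worth, the arithmetic behind that example can be sketched out. This is a rough back-of-the-envelope estimate only, using the 270-bytes-per-DDT-entry and 200-bytes-per-L2ARC-record figures quoted above; `ddt_size` is a hypothetical helper, not anything ZFS provides:

```python
# Back-of-the-envelope DDT sizing using the rule-of-thumb figures
# quoted above: ~270 bytes of DDT per unique block, and ~200 bytes
# of ARC header for every block resident in the L2ARC.

DDT_BYTES_PER_ENTRY = 270   # in-core DDT entry (rule of thumb)
L2ARC_HDR_BYTES = 200       # ARC header per L2ARC-resident block

def ddt_size(data_bytes, block_size, dedup_ratio):
    total_blocks = data_bytes // block_size
    # One DDT entry per *unique* block, hence the division by ratio.
    unique_blocks = int(total_blocks / dedup_ratio)
    ddt_bytes = unique_blocks * DDT_BYTES_PER_ENTRY
    # If the DDT spills to L2ARC, its entries still cost ARC headers.
    l2arc_hdr_bytes = unique_blocks * L2ARC_HDR_BYTES
    return unique_blocks, ddt_bytes, l2arc_hdr_bytes

# The example above: 1 TiB of 4 KiB files at a 5:1 dedup ratio.
unique, ddt, hdrs = ddt_size(2**40, 4096, 5.0)
print(f"unique blocks: {unique:,}")                        # ~53.7 million
print(f"DDT size: {ddt / 2**30:.1f} GiB")                  # ~13.5 GiB
print(f"ARC headers if DDT is in L2ARC: {hdrs / 2**30:.1f} GiB")
```

So even pushed out to L2ARC, a DDT that size would still pin roughly 10 GiB of ARC in headers - which is why the block size and dedup ratio matter so much.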
On 07/01/10 22:33, Erik Trimble wrote:
On 7/1/2010 9:23 PM, Geoff Nordli wrote:
Hi Erik.
Are you saying the DDT will automatically look to be stored in an
L2ARC device if one exists in the pool, instead of using ARC?
Or is there some sort of memory pressure point where the DDT gets moved
Thanks to everyone for such helpful and detailed answers. Contrary to some of
the trolls in other threads, I've had a fantastic experience here, and am
grateful to the community.
Based on the feedback, I'll upgrade my machine to 8 GB of RAM. I only have two
slots on the motherboard, and either
Another question on SSDs in terms of performance vs. capacity.
Between $150 and $200, there are at least three SSDs that would fit the rough
specifications for the L2ARC on my system:
1. Crucial C300, 64 GB: $150: medium performance, medium capacity.
2. OCZ Vertex 2, 50 GB: $180: higher
On Wed, Jun 30, 2010 at 01:35:31PM -0700, valrh...@gmail.com wrote:
Finally, for my purposes, it doesn't seem like a ZIL is necessary? I'm
the only user of the fileserver, so there probably won't be more than
two or three computers, maximum, accessing stuff (and writing stuff)
remotely.
It
I'm putting together a new server, based on a Dell PowerEdge T410.
I have a simple SAS controller, with six 2TB Hitachi DeskStar 7200 RPM SATA
drives. The processor is a quad-core 2 GHz Core i7-based Xeon.
I will run the drives as one set of three mirror pairs striped together, for 6
TB of
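As a sanity check on that capacity figure, the layout described (three 2-way mirror pairs striped together) can be compared against a raidz2 alternative with simple arithmetic; `usable_tb` is a hypothetical helper for illustration, not a ZFS tool:

```python
# Usable-capacity arithmetic for six 2 TB drives under the two
# common layouts.  Raw drive sizes only; real usable space is a
# bit less after metadata and reservations.

def usable_tb(drives, drive_tb, layout):
    if layout == "mirror2":     # striped 2-way mirrors
        return (drives // 2) * drive_tb
    if layout == "raidz2":      # one raidz2 vdev, two drives of parity
        return (drives - 2) * drive_tb
    raise ValueError(f"unknown layout: {layout}")

print(usable_tb(6, 2, "mirror2"))   # 6 TB, matching the figure above
print(usable_tb(6, 2, "raidz2"))    # 8 TB, but fewer random IOPS
```

The mirror layout gives up 2 TB versus raidz2, but striped mirrors generally deliver better random-read performance since each top-level vdev can serve reads independently.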
2. Are the RAM requirements for ZFS with dedup based on the total
available zpool size (I'm not using thin provisioning), or just on how
much data is in the filesystem being deduped? That is, if I have 500
GB of deduped data but 6 TB of possible storage, which number is
relevant for
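On that question: the DDT holds one entry per unique block actually allocated, so it is the 500 GB of stored data that matters, not the 6 TB of raw capacity. A rough sketch, assuming the ~270 bytes/entry rule of thumb and noting that the answer depends heavily on average block size (the block sizes below are illustrative assumptions):

```python
# DDT memory scales with blocks actually allocated, not with pool
# capacity: 500 GB of deduped data matters here, the 6 TB of raw
# space does not.  Uses the ~270 bytes/entry rule of thumb.

DDT_BYTES_PER_ENTRY = 270

def ddt_ram_gib(stored_bytes, avg_block_bytes):
    # One DDT entry per allocated block.
    return stored_bytes // avg_block_bytes * DDT_BYTES_PER_ENTRY / 2**30

# 500 GB of data at two assumed average block sizes:
print(f"{ddt_ram_gib(500 * 10**9, 128 * 1024):.2f} GiB")  # large files, 128 KiB records
print(f"{ddt_ram_gib(500 * 10**9, 4096):.1f} GiB")        # small 4 KiB blocks
```

The spread is dramatic: under a gigabyte for large-file data versus tens of gigabytes for 4 KiB blocks, which is why "how much RAM does dedup need" has no single answer.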