the Web.
You can then over-provision down to any size you want by setting
the HPA (I've only done this under Linux using hdparm; I'm not sure
how it's done in Solaris).
Wes Felter
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
On 8/25/10 12:42 PM, J.P. King wrote:
What I would like to achieve:
Large (by my standards) scale storage. Let's say petabyte scale...
Redundancy across machines of data
An Amazon S3 style interface
Sounds like OpenStack Swift http://openstack.org/projects/storage/
Wes Felter
said, you're assuming dynamic wear leveling but
modern SSDs also use static wear leveling, so this problem doesn't
exist. (Note that in this context the terms "dynamic" and "static" may
not mean what you think they mean.)
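The distinction can be illustrated with a toy model (a sketch, not how any real controller is implemented): dynamic wear leveling only spreads wear among blocks that receive new writes, so blocks holding cold data never get erased, while static wear leveling also migrates cold data so that erase counts stay even across the whole device.

```python
# Toy contrast of dynamic vs. static wear leveling over 8 flash blocks.

def dynamic_wear(erase_counts, hot_blocks, writes):
    # Dynamic: writes cycle only among the "hot" logical blocks,
    # so blocks holding cold data never accumulate erases.
    for i in range(writes):
        erase_counts[hot_blocks[i % len(hot_blocks)]] += 1
    return erase_counts

def static_wear(erase_counts, writes):
    # Static: the controller steers each write (including migrated
    # cold data) to the least-worn block, evening out erase counts.
    for _ in range(writes):
        coldest = min(range(len(erase_counts)), key=erase_counts.__getitem__)
        erase_counts[coldest] += 1
    return erase_counts

dyn = dynamic_wear([0] * 8, hot_blocks=[0, 1], writes=100)
sta = static_wear([0] * 8, writes=100)
print(max(dyn) - min(dyn))  # large spread: cold blocks never wear
print(max(sta) - min(sta))  # spread stays within one erase
```

With only dynamic leveling the two hot blocks absorb all 100 erases while the cold blocks sit at zero; with static leveling the same 100 writes leave every block within one erase of the others.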
Wes Felter
the corresponding txg is committed? Think of it as a poor man's
group commit.
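The idea behind group commit can be sketched in a few lines (a toy model, not ZFS code): instead of paying one expensive sync per write, buffer writes and make them durable in batches, the way a txg commit covers every write in that transaction group.

```python
# Toy model of group commit: one flush per batch instead of one per write.
class Log:
    def __init__(self):
        self.buffer = []    # writes waiting for the next commit
        self.durable = []   # writes that have been flushed
        self.syncs = 0      # count of expensive flush operations

    def write(self, rec):
        self.buffer.append(rec)

    def commit_txg(self):
        # One flush makes every buffered write durable at once.
        self.durable.extend(self.buffer)
        self.buffer.clear()
        self.syncs += 1

log = Log()
for i in range(100):
    log.write(i)
    if (i + 1) % 10 == 0:   # commit a "txg" every 10 writes
        log.commit_txg()

print(log.syncs)          # 10 flushes instead of 100
print(len(log.durable))   # all 100 writes are durable
```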
Wes Felter
A MegaRAID card with write-back cache? It should also be cheaper than
the F20.
Wes Felter
-bcast.inrialpes.fr/rubrique.php3?id_rubrique=5
I'm skeptical about the benefit, but there you are.
Wes Felter
Have you considered Promise JBODs? They officially support
bring-your-own-drives.
Wes Felter
significantly.
Has anyone compared RAID-Z2 against something like LSI MegaRAID RAID-6?
If a sub-$1,000 RAID controller can save thousands of dollars worth of
disks it would somewhat put the lie to the idea that ZFS kills hardware
RAID.
Wes Felter
Eric D. Mudama wrote:
On Mon, Jan 4 at 16:43, Wes Felter wrote:
Eric D. Mudama wrote:
I am not convinced that a general purpose CPU, running other software
in parallel, will be able to be timely and responsive enough to
maximize bandwidth in an SSD controller without specialized hardware
uses a fairly
simple controller (I guess the controller still performs ECC and maybe
XOR) and the driver eats a whole x86 core. The result is very high
performance.
Wes Felter
L2ARC.
Any thoughts?
The 7310/7410 uses this type of configuration, so obviously it works.
When in doubt, just think What Would Fishworks Do?
Wes Felter
Dave McDorman wrote:
I don't think is at liberty to discuss ZFS Deduplication at this point in time:
Did Jeff Bonwick and Bill Moore give a presentation at kernel.conf.au or
not? If so, did anyone see the presentation? Did the conference
attendees all sign NDAs or something?
Wes Felter
Paul van der Zwan wrote:
A strange thing I noticed in the keynote is that they claim the disk
usage of Snow Leopard is 6 GB less than Leopard, mostly because of
compression. Either they have implemented compressed binaries or they
use filesystem compression. Neither feature is present in Leopard.
Richard Elling wrote:
Wes Felter wrote:
proportional scheduling for storage performance
slog and L2ARC on the same SSD
The current scheduler is rather simple, there might be room for
improvements -- but that may be a rather extended research topic.
Yes. For GSoC it would probably be wise
C. Bergström wrote:
10) Did I miss something?
T10 DIF support in zvols
T10 UNMAP/thin provisioning support in zvols
proportional scheduling for storage performance
slog and L2ARC on the same SSD
These are probably difficult but hopefully not "world hunger" level.
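The slog-plus-L2ARC idea can be sketched with illustrative zpool commands (the pool name `tank` and the slice names are hypothetical; a real setup would partition the SSD into two slices first):

```shell
# Hypothetical: one SSD split into two slices, s0 as the separate
# intent log (slog) and s1 as the L2ARC cache device, both added
# to an existing pool named "tank".
zpool add tank log c0t2d0s0
zpool add tank cache c0t2d0s1

# Verify the resulting layout:
zpool status tank
```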
Wes Felter
squeeze Solaris
down to 128MB and somehow get it in the flash, I bet it will work.
Wes Felter - [EMAIL PROTECTED]
I wonder how hard it would be to get Solaris running on the new ReadyNAS.
http://www.netgear.com/Products/Storage/ReadyNASPro.aspx
Wes Felter - [EMAIL PROTECTED]
cedric briner wrote:
But _how_ can you achieve well-sized storage (40 TB)
with such technologies? I mean, how can you physically bind 70 HDs
into a ZFS pool?
Using SAS JBODs sounds simpler, but I get the impression that they don't
actually work correctly right now.
Wes Felter - [EMAIL PROTECTED]
Kevin Abbey wrote:
Does this seem like a good idea? I am not a storage expert and am
attempting to create a scalable distributed storage cluster for an HPC
cluster.
An AOE/ZFS/NFS setup doesn't sound scalable or distributed; your ZFS/NFS
server may turn out to be a bottleneck.