On 11/26/2010 1:11 PM, Krunal Desai wrote:
What about powering the X25-E by an external power source, one that is also
solid-state and backed by a UPS? In my experience, smaller power supplies tend
to be much more reliable than typical ATX supplies.
I don't think the different PSU would be
In fact, I recently got one of these Samsung drives...
http://tinyurl.com/38s3ac3
The spec sheet says sequential read 220MB/s, sequential write 120MB/s...
Which is 2-4 times faster than the best SATA disk out there... And
A noob question:
These drives that people talk about, can you use them as a system disk too?
Install Solaris 11 Express on them? Or can you only use them as an L2ARC or ZIL?
--
This message posted from opensolaris.org
___
zfs-discuss mailing list
I haven't had a chance to test a Vertex 2 PRO against my 2 EX, and I'd
be interested if anyone else has.
I recently presented at the OpenStorage Summit 2010 and compared
exactly the three devices you mention in your post (Vertex 2 EX,
Vertex 2 Pro, and the DDRdrive X1) as ZIL Accelerators.
On Sat, Nov 27, 2010 at 9:34 AM, Christopher George cgeo...@ddrdrive.com wrote:
I haven't had a chance to test a Vertex 2 PRO against my 2 EX, and I'd
be interested if anyone else has.
I recently presented at the OpenStorage Summit 2010 and compared
exactly the three devices you mention in
That's a great deck, Chris.
-marc
Sent from my iPhone
On 2010-11-27, at 10:34 AM, Christopher George cgeo...@ddrdrive.com wrote:
I haven't had a chance to test a Vertex 2 PRO against my 2 EX, and I'd
be interested if anyone else has.
I recently presented at the OpenStorage Summit 2010
On Sat, Nov 27, 2010 at 8:10 AM, Orvar Korvar
knatte_fnatte_tja...@yahoo.com wrote:
A noob question:
These drives that people talk about, can you use them as a system disk too?
Install Solaris 11 Express on them? Or can you only use them as an L2ARC or
ZIL?
--
On Sat, Nov 27, 2010 at 01:19:50PM -0600, Tim Cook wrote:
They're a standard SATA hard drive. You can use them for whatever you'd
like. For the price though, they aren't really worth the money to buy just
to put your OS on. Your system drive on a Solaris system generally doesn't
see
Your system drive on a Solaris system generally doesn't see enough I/O
activity to require the kind of IOPS you can get out of most modern SSD's.
My system drive sees a lot of activity, to the degree that everything is going slow.
I have a SunRay that my girlfriend uses, and I have 5-10 torrents
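For context, these SSDs can indeed be used either way; attaching one to an existing pool as a cache or log device is a one-line operation. A sketch only — the pool name `tank` and the device names are hypothetical:

```shell
# Add an SSD as an L2ARC (read cache) device to pool "tank":
zpool add tank cache c1t5d0

# Add SSDs as a ZIL (separate intent log) device; mirroring the log
# is common because it holds not-yet-committed synchronous writes:
zpool add tank log mirror c1t6d0 c1t7d0
```

Using one as a system disk is just a normal install to that disk, as Tim says.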
Why would you disable TRIM on an SSD benchmark?
Because ZFS does *not* support TRIM, so the benchmarks
are configured to replicate actual ZIL Accelerator workloads.
If you're doing sustained high-IOPS workloads like that, the
back-end is going to fall over and die long before the hour
On Sat, Nov 27, 2010 at 2:16 PM, Orvar Korvar
knatte_fnatte_tja...@yahoo.com wrote:
Your system drive on a Solaris system generally doesn't see enough I/O
activity to require the kind of IOPS you can get out of most modern SSD's.
My system drive sees a lot of activity, to the degree
On Sat, Nov 27, 2010 at 2:24 PM, Christopher George cgeo...@ddrdrive.com wrote:
Why would you disable TRIM on an SSD benchmark?
Because ZFS does *not* support TRIM, so the benchmarks
are configured to replicate actual ZIL Accelerator workloads.
If you're doing sustained high-IOPS workloads
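The ZIL workload being replicated here is small synchronous writes, each committed before the next is issued. You can roughly approximate that from userland with GNU dd's `oflag=dsync` — a sketch only; the file path and counts are arbitrary, and you would point it at the pool under test rather than /tmp:

```shell
# Approximate a ZIL-style workload: 4 KiB writes, each forced to
# stable storage (O_DSYNC) before the next one is issued.
dd if=/dev/zero of=/tmp/zil_sync_test bs=4k count=256 oflag=dsync
# Divide count (256) by dd's reported elapsed time for a crude
# sync-write IOPS estimate.
```

This is nowhere near a rigorous benchmark, but it shows the difference between a device with and without a capacitor- or DRAM-backed write path very quickly.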
Hello,
A ZFS VDI-related question. I'm exporting an iSCSI share from a Linux
box, which I'm mounting on a Solaris 10 VDI broker; it is subsequently used
by the desktop providers. This is for a proof of concept. This works
fine under VDI 3.2.1 until I reboot the VDI broker.
After the broker
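One thing worth checking for a reboot problem like this is whether the broker's iSCSI initiator is using persistent discovery, so the target is reconnected at boot before the services that depend on it start. On the Solaris 10 initiator side the commands are along these lines (the discovery address is a hypothetical example):

```shell
# Point the initiator at the Linux target and enable SendTargets
# discovery; both settings persist across reboots:
iscsiadm add discovery-address 192.168.1.10:3260
iscsiadm modify discovery --sendtargets enable

# After a reboot, verify the target has come back:
iscsiadm list target
```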
TRIM was putback in July... You're telling me it didn't make it into S11
Express?
Without top level ZFS TRIM support, SATA Framework (sata.c) support
has no bearing on this discussion.
Best regards,
Christopher George
Founder/CTO
www.ddrdrive.com
A word of caution on the Silicon Image 3124. I have tested two
extremely cheap cards using the si3124 driver on b134 and OI b147. One card
was PCI, the other PCI-X. I found that both are unusable until the driver
is updated. Large-ish file transfers, say over 1 GB, would lock up the
machine
I am waiting for the next-gen Intel SSD drives, G3. They are arriving very
soon. And from what I can infer by reading here, I can use them without issues.
Solaris will recognize the Intel SSD without any drivers needed, or
whatever?
Intel's new SSDs should work with Solaris 11 Express, yes?
Agreed, SSDs with SandForce controllers are the only way to go. The
controller makes a world of difference.
-Moazam
On Sat, Nov 27, 2010 at 12:27 PM, Tim Cook t...@cook.ms wrote:
On Sat, Nov 27, 2010 at 2:16 PM, Orvar Korvar
knatte_fnatte_tja...@yahoo.com wrote:
Your system drive on a
On Sat, Nov 27, 2010 at 3:12 PM, Orvar Korvar
knatte_fnatte_tja...@yahoo.com wrote:
I am waiting for the next-gen Intel SSD drives, G3. They are arriving very
soon. And from what I can infer by reading here, I can use them without
issues. Solaris will recognize the Intel SSD without any
Christopher George wrote:
Jump to slide 37 for the write IOPS benchmarks:
http://www.ddrdrive.com/zil_accelerator.pdf
Anybody who designs or works with NAND (flash) at a low level knows it can't
Furthermore, I don't think 1 hour sustained is a very accurate benchmark.
Most workloads are bursty in nature.
The IOPS degradation is additive; the length of the first and second one-hour
sustained periods is completely arbitrary. The takeaway from slides 1 and 2 is that
drive inactivity has
On 11/27/2010 6:50 PM, Christopher George wrote:
Furthermore, I don't think 1 hour sustained is a very accurate benchmark.
Most workloads are bursty in nature.
The IOPS degradation is additive; the length of the first and second one-hour
sustained periods is completely arbitrary. The takeaway
On Sat, Nov 27, 2010 at 9:29 PM, Erik Trimble erik.trim...@oracle.com wrote:
On 11/27/2010 6:50 PM, Christopher George wrote:
Furthermore, I don't think 1 hour sustained is a very accurate
benchmark.
Most workloads are bursty in nature.
The IOPS degradation is additive, the length of the
TRIM was putback in July... You're telling me it didn't make it into S11
Express?
http://mail.opensolaris.org/pipermail/onnv-notify/2010-July/012674.html
It looks like this adds the ability for the SATA framework to issue the TRIM
command, but ZFS itself doesn't use it:
I'm doing compiles of the JDK, with a single ZFS-backed system handling
the files for 20-30 clients, each trying to compile a 15-million-line
JDK at the same time.
Very cool application!
Can you share any metrics, such as the aggregate size of source files
compiled and the size of the