From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-boun...@opensolaris.org] On Behalf Of thomas
Sent: Friday, 24 July 2009 5:23 AM
To: zfs-discuss@opensolaris.org
Subject: Re: [zfs-discuss] SSD's and ZFS...

I think it is a great idea, assuming the SSD has good write performance.
This one claims up to 230MB/s read and 180MB/s write and it's only $196.
http://www.newegg.com/Product/Product.aspx?Item=N82E16820609393
Compared to this one (250MB/s read and 170MB/s write) which is $699.
On Fri, 24 Jul 2009, Tristan Ball wrote:
I've used 8K IO sizes for all the stage one tests - I know I might get
it to go faster with a larger size, but I like to know how well systems
will do when I treat them badly!
The Stage_1_Ops_thru_run is interesting. 2000+ ops/sec on random writes,
5000
On Fri, 24 Jul 2009, Bob Friesenhahn wrote:
This seems like rather low random write performance. My 12-drive array of
rotating rust obtains 3708.89 ops/sec. In order to be effective, it seems
that a synchronous write log should perform considerably better than the
backing store.
Ok, I re-tested my rotating rust with these iozone options (note that
-o requests synchronous writes):
iozone -t 6 -k 8 -i 0 -i 2 -O -r 8K -o -s 1G
and obtained these results:
Children see throughput for 6 random writers=5700.49 ops/sec
Parent sees throughput for 6
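Some back-of-the-envelope arithmetic on those figures (a sketch in Python; only the 5700.49 ops/sec aggregate, the 6 writers, and the 8K record size come from the output above — the derived numbers are illustrative):

```python
# Derive per-writer rate, aggregate bandwidth, and implied latency
# from the iozone synchronous random-write result quoted above.
aggregate_ops = 5700.49   # "Children see throughput for 6 random writers"
writers = 6
io_size_kb = 8            # iozone -r 8K record size

per_writer_ops = aggregate_ops / writers
throughput_mb_s = aggregate_ops * io_size_kb / 1024   # MB/s at 8K per op
avg_latency_ms = 1000.0 / per_writer_ops              # assumes one op in flight per writer

print(f"{per_writer_ops:.1f} ops/sec per writer")
print(f"{throughput_mb_s:.1f} MB/s aggregate")
print(f"{avg_latency_ms:.2f} ms average latency per op")
```

Roughly 1 ms per synchronous 8K write per writer — which is the number a dedicated slog device would need to beat to be worthwhile.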
Thanks for posting this solution.
But I would like to point out that bug 6574286 ("removing a slog doesn't work")
still isn't resolved. A fix is under way, according to George Wilson.
But in the meantime, IF something happens you might be in a lot of trouble.
Even without some unfortunate
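For reference, this is the operation the bug makes one-way: attaching a slog is a single `zpool add`, but at the time of this thread there was no supported way to take it back out again. A minimal sketch (pool and device names are hypothetical):

```shell
# Attach a mirrored slog to an existing data pool. Mirroring the slog
# guards against losing in-flight synchronous writes if one SSD dies.
zpool add tank log mirror c2t0d0 c2t1d0

# Verify the layout; the slog appears under a "logs" section.
zpool status tank

# NOTE: with bug 6574286 unresolved there is no working
# "zpool remove" for the log device -- plan the layout as if
# the slog were permanent.
```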
On Thu, Jul 23, 2009 at 10:28:38AM -0400, Kyle McDonald wrote:
In my case the slog slice wouldn't be the slog for the root pool, it
would be the slog for a second data pool.
I didn't think you could add a slog to the root pool anyway. Or has that
changed in recent builds? I'm a little
In the context of a low-volume file server, for a few users, is the
low-end Intel SSD sufficient?
A.
--
Adam Sherman
+1.613.797.6819
Adam Sherman wrote:
In the context of a low-volume file server, for a few users, is the
low-end Intel SSD sufficient?
You're right, it supposedly has less than half the write speed, and
that probably won't matter for me, but I can't find a 64GB version of it
for sale, and the 80GB
I don't think this is limited to root pools. None of my pools (root or
non-root) seem to have the write cache enabled. Now that I think about
it, all my disks are hidden behind an LSI1078 controller so I'm not
sure what sort of impact that would have on the situation.
I have a few of those
I didn't mean using a slog for the root pool. I meant using a slog for a data
pool, where the data pool consists of (rotating) hard disks complemented
with an SSD-based slog. But instead of a dedicated SSD for the slog, I want the
root pool to share the SSD with the slog. Both can be mirrored to
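The shared-SSD layout described there can be sketched as follows (slice, pool, and device names are hypothetical — the real slicing has to be done up front, e.g. with format(1M)):

```shell
# SSD partitioned into two slices:
#   c1t0d0s0 -- root pool slice
#   c1t0d0s1 -- slog slice for the data pool

# Attach the SSD's first slice as a mirror of the existing root pool:
zpool attach rpool c0t0d0s0 c1t0d0s0

# Hand the remaining slice to the data pool as its slog:
zpool add tank log c1t0d0s1
```

The catch, per the bug discussed above, is that once the slice is added as a slog it could not (at the time) be removed again.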
I can't speak to whether it's a good idea or not, but I also wanted to do this,
and it was rather difficult. The problem is that the OpenSolaris installer doesn't
let you set up slices on a device to install to.
The two ways I came up with were:
1) using the automated installer to do everything