Miles Nordin car...@ivy.net writes:
kth == Kjetil Torgrim Homme kjeti...@linpro.no writes:
kth the SCSI layer handles the replaying of operations after a
kth reboot or connection failure.
how?
I do not think it is handled by the SCSI layer, neither for SAS nor iSCSI.
sorry, I was
Miles Nordin car...@ivy.net writes:
There will probably be clients that might seem to implicitly make this
assumption by mishandling the case where an iSCSI target goes away and
then comes back (but comes back less whatever writes were in its write
cache). Handling that case for NFS was
kth == Kjetil Torgrim Homme kjeti...@linpro.no writes:
kth basically iSCSI just defines a reliable channel for SCSI.
pft.
AIUI a lot of the complexity in real stacks is ancient protocol
arcana for supporting multiple initiators and TCQ regardless of
whether the physical target supports
On Feb 18, 2010, at 4:55 AM, Phil Harman wrote:
This discussion is very timely, but I don't think we're done yet. I've been
working on using NexentaStor with Sun's DVI stack. The demo I've been playing
with glues SunRays to VirtualBox instances using ZFS zvols over iSCSI for the
boot
On 18 feb 2010, at 13.55, Phil Harman wrote:
...
Whilst the latest bug fixes put the world to rights again with respect to
correctness, it may be that some of our performance workarounds are still
unsafe (i.e. if my iSCSI client assumes all writes are synchronised to
nonvolatile storage,
On Feb 19, 2010, at 4:57 PM, Ragnar Sundblad ra...@csc.kth.se wrote:
On 18 feb 2010, at 13.55, Phil Harman wrote:
...
Whilst the latest bug fixes put the world to rights again with
respect to correctness, it may be that some of our performance
workarounds are still unsafe (i.e. if my iSCSI
On 19/02/2010 21:57, Ragnar Sundblad wrote:
On 18 feb 2010, at 13.55, Phil Harman wrote:
Whilst the latest bug fixes put the world to rights again with respect to
correctness, it may be that some of our performance workarounds are still unsafe
(i.e. if my iSCSI client assumes all writes
On 19 feb 2010, at 23.20, Ross Walker wrote:
On Feb 19, 2010, at 4:57 PM, Ragnar Sundblad ra...@csc.kth.se wrote:
On 18 feb 2010, at 13.55, Phil Harman wrote:
...
Whilst the latest bug fixes put the world to rights again with respect to
correctness, it may be that some of our
On 19 feb 2010, at 23.22, Phil Harman wrote:
On 19/02/2010 21:57, Ragnar Sundblad wrote:
On 18 feb 2010, at 13.55, Phil Harman wrote:
Whilst the latest bug fixes put the world to rights again with respect to
correctness, it may be that some of our performance workarounds are still
On Wed, Feb 17, 2010 at 11:03 PM, Matt registrat...@flash.shanje.com wrote:
No SSD Log device yet. I also tried disabling the ZIL, with no effect on
performance.
Also - what's the best way to test local performance? I'm _somewhat_ dumb as
far as opensolaris goes, so if you could provide
No one has said if they're using dsk, rdsk, or file-backed COMSTAR LUNs yet.
I'm using file-backed COMSTAR LUNs, with ZIL currently disabled.
I can get between 100-200MB/sec, depending on random/sequential and block
sizes.
Using dsk/rdsk, I was not able to see that level of performance at
Hi Matt
Are you seeing low speeds on writes only or on both read AND write?
Are you seeing low speed just with iSCSI or also with NFS or CIFS?
I've tried updating to COMSTAR
(although I'm not certain that I'm actually using it)
To check, do this:
# svcs -a | grep iscsi
If
hello
there is a new beta v. 0.220 of napp-it, the free webgui for nexenta(core) 3
new:
- bonnie benchmarks included (see screenshot: http://www.napp-it.org/bench.png)
- bug fixes
if you look at the benchmark screenshot:
- pool daten: zfs3 of 7 x wd 2TB raid
On 18 February, 2010 - Günther sent me these 1,1K bytes:
hello
there is a new beta v. 0.220 of napp-it, the free webgui for nexenta(core) 3
new:
- bonnie benchmarks included (see screenshot: http://www.napp-it.org/bench.png)
- bug fixes
if you look at
hello
my intention was to show how you can tune up a pool of drives
(how much you can reach when using sas compared to 2 TB high capacity drives)
and now the other results with same config and sas drives:
wd 2TB x 7, z3, dedup and compress on, no ssd
daten 12.6T start
On Wed, Feb 17, 2010 at 11:21:07PM -0800, Matt wrote:
Just out of curiosity - what Supermicro chassis did you get? I've got the
following items shipping to me right now, with SSD drives and 2TB main drives
coming as soon as the system boots and performs normally (using 8 extra 500GB
This discussion is very timely, but I don't think we're done yet. I've
been working on using NexentaStor with Sun's DVI stack. The demo I've
been playing with glues SunRays to VirtualBox instances using ZFS zvols
over iSCSI for the boot image, with all the associated ZFS
snapshot/clone
Responses inline :
Hi Matt
Are you seeing low speeds on writes only or on both
read AND write?
Low speeds both reading and writing.
Are you seeing low speed just with iSCSI or also with
NFS or CIFS?
Haven't gotten NFS or CIFS to work properly. Maybe I'm just too dumb to figure
it
One question though:
Just this one SAS adaptor? Are you connecting to the drive
backplane with one cable for the 4 internal SAS connectors?
Are you using SAS or SATA drives? Will you be filling up 24
slots with 2 TByte drives, and are you sure you won't be
oversubscribed with just 4x
On Thu, Feb 18, 2010 at 10:49 AM, Matt registrat...@flash.shanje.comwrote:
Here's IOStat while doing writes :
r/s    w/s   kr/s   kw/s wait actv wsvc_t asvc_t  %w  %b device
1.0  256.9    3.0 2242.9  0.3  0.1    1.3    0.5  11  12 c0t0d0
0.0  253.9    0.0 2242.9  0.3  0.1    1.0
Also - still looking for the best way to test local performance - I'd love to
make sure that the volume is actually able to perform at a level locally to
saturate gigabit. If it can't do it internally, why should I expect it to work
over GbE?
--
This message posted from opensolaris.org
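For a quick local baseline before blaming the network, a dd run against the pool is one common sketch (the pool name `tank` and the file path are assumptions, not from the thread; cached reads and highly compressible zeros can both inflate the numbers, so treat the results as rough):

```shell
# Write test: if compression is enabled on the dataset, /dev/zero data
# compresses to almost nothing and the result will be unrealistically high.
dd if=/dev/zero of=/tank/testfile bs=1M count=4096

# Read test: use a file considerably larger than RAM (or export/import
# the pool first), otherwise you are measuring the ARC, not the disks.
dd if=/tank/testfile of=/dev/null bs=1M
```

If the pool can't sustain well over ~120 MB/s locally, there is no reason to expect it to saturate gigabit over iSCSI.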
On Thu, 18 Feb 2010, Günther wrote:
i was surprised about the sequential write/rewrite result.
the wd 2 TB drives perform very well only in sequential write of characters
but are horribly bad in blockwise write/rewrite
the 15k sas drives with ssd read cache perform 20x better (10MB/s - 200
Run Bonnie++. You can install it with the Sun package manager and it'll
appear under /usr/benchmarks/bonnie++
Look for the command line I posted a couple of days back for a decent set of
flags to truly rate performance (using sync writes).
-marc
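The exact command line from the earlier post isn't quoted in this thread, so the following is only a sketch of the kind of invocation meant (directory and size are assumptions; `-b` is bonnie++'s flag that disables write buffering, forcing an fsync after every write, which is what makes the numbers reflect sync-write performance):

```shell
# Assumed target directory /tank/bench; -s should be at least twice the
# machine's RAM so the ARC cannot absorb the whole working set.
# -b = no write buffering (fsync after every write), -u = run as user.
/usr/benchmarks/bonnie++/bonnie++ -d /tank/bench -s 16g -u root -b
```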
On Thu, Feb 18, 2010 at 11:05 AM, Matt
Hi Matt
Haven't gotten NFS or CIFS to work properly.
Maybe I'm just too dumb to figure it out,
but I'm ending up with permissions errors that don't let me do much.
All testing so far has been with iSCSI.
So until you can test NFS or CIFS, we don't know if it's a
general performance problem,
Another things you could check, which has been reported to
cause a problem, is if network or disk drivers share an interrupt
with a slow device, like say a usb device. So try:
# echo ::interrupts -d | mdb -k
... and look for multiple driver names on an INT#.
Regards
Nigel Smith
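As a sketch of the whole check (the `intrstat` follow-up is an addition here, not part of Nigel's post; `intrstat` is the stock Solaris tool for watching per-device interrupt load, useful for confirming whether a shared INT# is actually contended):

```shell
# List interrupt vectors with the drivers attached to each; a NIC or
# HBA sharing an INT# with e.g. a USB controller is the warning sign.
echo ::interrupts -d | mdb -k

# Then watch per-device interrupt activity, sampling every 5 seconds.
intrstat 5
```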
Just wanted to add that I'm in the exact same boat - I'm connecting from a
Windows system and getting just horrid iSCSI transfer speeds.
I've tried updating to COMSTAR (although I'm not certain that I'm actually
using it) to no avail, and I tried updating to the latest DEV version of
On Wed, Feb 17, 2010 at 10:42 PM, Matt registrat...@flash.shanje.com wrote:
I've got a very similar rig to the OP showing up next week (plus an
infiniband card) I'd love to get this performing up to GB Ethernet speeds,
otherwise I may have to abandon the iSCSI project if I can't get it to
No SSD Log device yet. I also tried disabling the ZIL, with no effect on
performance.
Also - what's the best way to test local performance? I'm _somewhat_ dumb as
far as opensolaris goes, so if you could provide me with an exact command line
for testing my current setup (exactly as it
Just out of curiosity - what Supermicro chassis did you get? I've got the
following items shipping to me right now, with SSD drives and 2TB main drives
coming as soon as the system boots and performs normally (using 8 extra 500GB
Barracuda ES.2 drives as test drives).
On Feb 15, 2010, at 11:34 PM, Ragnar Sundblad wrote:
On 15 feb 2010, at 23.33, Bob Beverage wrote:
On Wed, Feb 10, 2010 at 10:06 PM, Brian E. Imhoff
beimh...@hotmail.com wrote:
I've seen exactly the same thing. Basically, terrible transfer rates
with Windows and the server sitting there
Some more back story. I initially started with Solaris 10 u8, and was getting
40ish MB/s reads, and 65-70MB/s writes, which was still a far cry from the
performance I was getting with OpenFiler. I decided to try Opensolaris
2009.06, thinking that since it was more state of the art up to date
On Feb 16, 2010, at 9:44 AM, Brian E. Imhoff wrote:
Some more back story. I initially started with Solaris 10 u8, and was
getting 40ish MB/s reads, and 65-70MB/s writes, which was still a far cry
from the performance I was getting with OpenFiler. I decided to try
Opensolaris 2009.06,
On Tue, Feb 16 at 9:44, Brian E. Imhoff wrote:
But, at the end of the day, this is quite a bomb: A single raidz2
vdev has about as many IOs per second as a single disk, which could
really hurt iSCSI performance.
If I have to break 24 disks up in to multiple vdevs to get the
expected
On Wed, Feb 10, 2010 at 10:06 PM, Brian E. Imhoff beimh...@hotmail.com wrote:
I am in the proof-of-concept phase of building a large ZFS/Solaris based SAN
box, and am experiencing absolutely poor / unusable performance.
...
From here, I discover the iscsi target on our Windows server 2008 R2
On Wed, Feb 10, 2010 at 10:06 PM, Brian E. Imhoff
beimh...@hotmail.com wrote:
I've seen exactly the same thing. Basically, terrible transfer rates
with Windows and the server sitting there completely idle.
I am also seeing this behaviour. It started somewhere around snv111 but I am
not
On 15 feb 2010, at 23.33, Bob Beverage wrote:
On Wed, Feb 10, 2010 at 10:06 PM, Brian E. Imhoff
beimh...@hotmail.com wrote:
I've seen exactly the same thing. Basically, terrible transfer rates
with Windows and the server sitting there completely idle.
I am also seeing this behaviour.
I am in the proof-of-concept phase of building a large ZFS/Solaris based SAN
box, and am experiencing absolutely poor / unusable performance.
Where to begin...
The Hardware setup:
Supermicro 4U 24 Drive Bay Chassis
Supermicro X8DT3 Server Motherboard
2x Xeon E5520 Nehalem 2.26 Quad Core CPUs
On Wed, Feb 10, 2010 at 17:06, Brian E. Imhoff beimh...@hotmail.com wrote:
I am in the proof-of-concept phase of building a large ZFS/Solaris based SAN
box, and am experiencing absolutely poor / unusable performance.
I then create a zpool, using raidz2, using all 24 drives, 1 as a hotspare:
On 2/10/10 2:06 PM -0800 Brian E. Imhoff wrote:
I then create a zpool, using raidz2, using all 24 drives, 1 as a
hotspare: zpool create tank raidz2 c1t0d0 c1t1d0 [...] c1t22d0 spare
c1t23d0
Well there's one problem anyway. That's going to be horribly slow no
matter what.
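To illustrate the alternative (a sketch only; the device names follow the original post, and this three-way split is just one reasonable layout): since each raidz vdev delivers roughly the random IOPS of a single disk, spreading the same 24 drives across three raidz2 vdevs roughly triples random IOPS while still tolerating two failures per vdev:

```shell
# Three raidz2 vdevs (8 + 8 + 7 disks) plus the hot spare from the
# original command; ZFS stripes across vdevs, so random IOPS scale
# with the vdev count rather than the disk count.
zpool create tank \
  raidz2 c1t0d0  c1t1d0  c1t2d0  c1t3d0  c1t4d0  c1t5d0  c1t6d0  c1t7d0 \
  raidz2 c1t8d0  c1t9d0  c1t10d0 c1t11d0 c1t12d0 c1t13d0 c1t14d0 c1t15d0 \
  raidz2 c1t16d0 c1t17d0 c1t18d0 c1t19d0 c1t20d0 c1t21d0 c1t22d0 \
  spare  c1t23d0
```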
On Wed, February 10, 2010 16:28, Will Murnane wrote:
On Wed, Feb 10, 2010 at 17:06, Brian E. Imhoff beimh...@hotmail.com
wrote:
I am in the proof-of-concept phase of building a large ZFS/Solaris based
SAN box, and am experiencing absolutely poor / unusable performance.
I then create a
On Wed, Feb 10, 2010 at 4:06 PM, Brian E. Imhoff beimh...@hotmail.comwrote:
I am in the proof-of-concept phase of building a large ZFS/Solaris based
SAN box, and am experiencing absolutely poor / unusable performance.
Where to begin...
The Hardware setup:
Supermicro 4U 24 Drive Bay Chassis
On Wed, 10 Feb 2010, Frank Cusack wrote:
On 2/10/10 2:06 PM -0800 Brian E. Imhoff wrote:
I then create a zpool, using raidz2, using all 24 drives, 1 as a
hotspare: zpool create tank raidz2 c1t0d0 c1t1d0 [...] c1t22d0 spare
c1t23d0
Well there's one problem anyway. That's going to be
Definitely use Comstar as Tim says.
At home I'm using 4*WD Caviar Blacks on an AMD Phenom x4 @ 1.Ghz and
only 2GB of RAM. I'm running svn132. No HBA - onboard SB700 SATA
ports.
I can, with IOmeter, saturate GigE from my WinXP laptop via iSCSI.
Can you toss the RAID controller aside and use
Bob Friesenhahn bfrie...@simple.dallas.tx.us writes:
On Wed, 10 Feb 2010, Frank Cusack wrote:
The other three commonly mentioned issues are:
- Disable the Nagle algorithm on the windows clients.
for iSCSI? shouldn't be necessary.
- Set the volume block size so that it matches the
How does lowering the flush interval help? If he can't ingress data
fast enough, faster flushing is a Bad Thing(tm).
-marc
On 2/10/10, Kjetil Torgrim Homme kjeti...@linpro.no wrote:
Bob Friesenhahn bfrie...@simple.dallas.tx.us writes:
On Wed, 10 Feb 2010, Frank Cusack wrote:
The other three
On Wed, Feb 10, 2010 at 3:12 PM, Marc Nicholas geekyth...@gmail.com wrote:
How does lowering the flush interval help? If he can't ingress data
fast enough, faster flushing is a Bad Thing(tm).
-marc
On 2/10/10, Kjetil Torgrim Homme kjeti...@linpro.no wrote:
Bob Friesenhahn
This is a Windows box, not a DB that flushes every write.
The drives are capable of over 2000 IOPS (albeit with high latency as
its NCQ that gets you there) which would mean, even with sync flushes,
8-9MB/sec.
-marc
On 2/10/10, Brent Jones br...@servuhome.net wrote:
On Wed, Feb 10, 2010 at
On Wed, Feb 10, 2010 at 4:05 PM, Brent Jones br...@servuhome.net wrote:
On Wed, Feb 10, 2010 at 3:12 PM, Marc Nicholas geekyth...@gmail.com wrote:
How does lowering the flush interval help? If he can't ingress data
fast enough, faster flushing is a Bad Thing(tm).
-marc
On 2/10/10, Kjetil
[please don't top-post, please remove CC's, please trim quotes. it's
really tedious to clean up your post to make it readable.]
Marc Nicholas geekyth...@gmail.com writes:
Brent Jones br...@servuhome.net wrote:
Marc Nicholas geekyth...@gmail.com wrote:
Kjetil Torgrim Homme kjeti...@linpro.no