Now, if anyone is still reading, I have another question. The new Solaris 11
device naming convention hides the physical tree from me. I got just a list of
long disk names all starting with c0 (see below) but I need to know which
disk is connected to which controller so that I can create two
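One quick way to see the physical tree behind such names: each /dev/rdsk entry
is a symlink into the /devices tree, so the link target shows which HBA a disk
hangs off. A minimal sketch, assuming the disks appear as c0t<GUID>d0 with a
slice 0:

# the /devices path in each symlink target identifies the controller
ls -l /dev/rdsk/c0t*d0s0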
Thanks for the tips, everybody!
Progress report:
OpenIndiana failed to recognise the LSI 9240-8i's. I installed the 4.7 drivers
from the LSI website (for Solaris 11 and up) but it started throwing
component-failed messages. So I gave up on the 9240s and re-flashed
them into 9211-8i's (IT mode). Solaris 11
I followed this guide, but instead of 2108it.bin I downloaded the latest
firmware file for the 9211-8i from the LSI web site. I now have three
9211s! :)
http://lime-technology.com/forum/index.php?topic=12767.msg124393#msg124393
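For the archive, the flashing procedure in that guide boils down to roughly the
following, run from a DOS boot disk. The file names are assumptions based on
the guide (sbrempty.bin is a blank SBR image, and the .bin firmware is whatever
9211-8i IT image you downloaded), so treat this as a sketch, not gospel:

megarec -writesbr 0 sbrempty.bin      (blank the SBR on adapter 0)
megarec -cleanflash 0                 (wipe the existing MegaRAID flash, then reboot)
sas2flsh -o -f 2118it.bin             (flash the 9211-8i IT firmware image)
sas2flsh -o -sasadd 500605bxxxxxxxxx  (restore the card's original SAS address)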
On 4 May 2012 18:33, Bob Friesenhahn bfrie...@simple.dallas.tx.us wrote:
Downloaded, unzipped and flying! It shows the GUID, which is part of the
/dev/rdsk/c0t* name! Thanks!!! And thanks again! This msg goes
to the group.
root@carbon:~/bin/LSI-SAS2IRCU/SAS2IRCU_P13/sas2ircu_solaris_x86_rel#
./sas2ircu 0 DISPLAY | grep GUID
GUID
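With three HBAs, a small loop makes the controller-to-disk mapping obvious
(a sketch; controller indices 0-2 are an assumption matching the three 9211s):

for c in 0 1 2; do
  echo "controller $c"
  ./sas2ircu $c DISPLAY | grep GUID
done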
Hi all,
I have a bad bad problem with our brand new server!
The lengthy details are below but to cut the story short, on the same
hardware (3 x LSI 9240-8i, 20 x 3TB 6gb HDDs) I am getting ZFS
sequential writes of 1.4GB/s on Solaris 10 (20 disks, 10 mirrors) and
only 200-240MB/s on latest
/driver_aliases
imraid_sas pciex1000,73
#
hi
s11 comes with its own driver for some LSI SAS HBAs,
but on the HCL I only see the LSI SAS 9200-8e
http://www.oracle.com/webfolder/technetwork/hcl/data/components/details/lsi_logic/sol_11_11_11/9409.html
and the LSI MegaRAID SAS 9260-8i
To: zfs-discuss@opensolaris.org
Subject: Re: [zfs-discuss] ZFS performance on LSI 9240-8i?
On May 4, 2012, at 5:25 AM, Roman Matiyenko wrote:
Hi all,
I have a bad bad problem with our brand new server!
The lengthy details are below but to cut the story short, on the same
hardware (3 x LSI
On Fri, 4 May 2012, Rocky Shek wrote:
If I were you, I would not use the 9240-8i.
I would use the 9211-8i as a pure HBA with IT FW for ZFS.
Is there IT FW for the 9240-8i?
They seem to use the same SAS chipset.
My next system will have 9211-8i with IT FW. Playing it safe. Good
enough for Nexenta
on Nexenta and OpenIndiana.
Best regards,
Hugues
-----Original Message-----
From: Roman Matiyenko rmatiye...@gmail.com
Sent: Fri 04-05-2012 14:25
Subject: [zfs-discuss] ZFS performance on LSI 9240-8i?
To: zfs-discuss@opensolaris.org
Hi all,
I have a bad bad problem with our brand new server
Hi Bob
I don't know what the request pattern from filebench looks like, but it seems
like your ZEUS RAM devices are not keeping up, or else many requests are
bypassing the ZEUS RAM devices.
Note that very large synchronous writes will bypass your ZEUS RAM device and
go directly to a log in
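A related per-dataset knob, for what it's worth: the logbias property controls
whether synchronous writes favor the slog (latency) or go straight to the pool
(throughput). A sketch, with tank/fs as a placeholder dataset name:

zfs get logbias tank/fs
zfs set logbias=throughput tank/fs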
Dear all.
We finally got all the parts for our new fileserver following several
recommendations we got over this list. We use
Dell R715, 96GB RAM, dual 8-core Opterons
1 10GE Intel dual-port NIC
2 LSI 9205-8e SAS controllers
2 DataON DNS-1600 JBOD chassis
46 Seagate constellation SAS drives
2
What are the specs on the client?
On Aug 18, 2011 10:28 AM, Thomas Nau thomas@uni-ulm.de wrote:
Dear all.
We finally got all the parts for our new fileserver following several
recommendations we got over this list. We use
Dell R715, 96GB RAM, dual 8-core Opterons
1 10GE Intel dual-port
Tim
the client is identical to the server, but with no SAS drives attached.
Also, right now only one 1Gbit Intel NIC is available
Thomas
On 18.08.2011 at 17:49, Tim Cook t...@cook.ms wrote:
What are the specs on the client?
On Aug 18, 2011 10:28 AM, Thomas Nau thomas@uni-ulm.de wrote:
Dear
On Thu, 18 Aug 2011, Thomas Nau wrote:
Tim
the client is identical to the server, but with no SAS drives attached.
Also, right now only one 1Gbit Intel NIC is available
I don't know what the request pattern from filebench looks like but it
seems like your ZEUS RAM devices are not keeping up or else
sirket, could you please share your OS, zfs, and zpool versions?
~# uname -a
SunOS nas01a 5.11 oi_147 i86pc i386 i86pc Solaris
~# zfs get version pool0
NAME   PROPERTY  VALUE  SOURCE
pool0  version   5      -
~# zpool get version pool0
NAME   PROPERTY  VALUE  SOURCE
pool0  version   28     default
On 2/25/2011 4:15 PM, Torrey McMahon wrote:
On 2/25/2011 3:49 PM, Tomas Ögren wrote:
On 25 February, 2011 - David Blasingame Oracle sent me these 2,6K bytes:
Hi All,
In reading the ZFS Best practices, I'm curious if this statement is
still true about 80% utilization.
It happens at
On Sun, Feb 27, 2011 at 7:35 PM, Brandon High bh...@freaks.com wrote:
It moves from best fit to any fit at a certain point, which is at
~95% (I think). Best fit looks for a large contiguous space to avoid
fragmentation, while any fit looks for any free space.
I got the terminology wrong, it's
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of David Blasingame Oracle
Keep pool space under 80% utilization to maintain pool performance.
For what it's worth, the same is true for any other filesystem too. What
really matters is the
In reading the ZFS Best practices, I'm curious if this statement is
still true about 80% utilization.
It is, and in my experience it doesn't help much if you have a full pool and
add another VDEV: the existing VDEVs will still be full, and performance will
still be slow. For this reason, new
On Sun, Feb 27, 2011 at 6:59 AM, Edward Ned Harvey
opensolarisisdeadlongliveopensola...@nedharvey.com wrote:
But there is one specific thing, isn't there? Where ZFS will choose to use
a different algorithm for something, when pool usage exceeds some threshold.
Right? What is that?
It moves
On 27/02/11 9:59 AM, Edward Ned Harvey wrote:
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of David Blasingame Oracle
Keep pool space under 80% utilization to maintain pool performance.
For what it's worth, the same is true for any other
On Mon, Feb 28 at 0:30, Toby Thain wrote:
I would expect COW puts more pressure on near-full behaviour compared to
write-in-place filesystems. If that's not true, somebody correct me.
Off the top of my head, I think it'd depend on the workload.
Write-in-place will always be faster with large
Hi All,
In reading the ZFS Best practices, I'm curious if this statement is
still true about 80% utilization.
from :
http://www.solarisinternals.com/wiki/index.php/ZFS_Best_Practices_Guide
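For anyone wanting a quick check of where a pool sits against that threshold,
the CAP column of zpool list shows utilization at a glance (tank is a
placeholder pool name):

zpool list tank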
Hi Dave,
Still true.
Thanks,
Cindy
On 02/25/11 13:34, David Blasingame Oracle wrote:
Hi All,
In reading the ZFS Best practices, I'm curious if this statement is
still true about 80% utilization.
from :
http://www.solarisinternals.com/wiki/index.php/ZFS_Best_Practices_Guide
On 25 February, 2011 - David Blasingame Oracle sent me these 2,6K bytes:
Hi All,
In reading the ZFS Best practices, I'm curious if this statement is
still true about 80% utilization.
It happens at about 90% for me.. all of a sudden, the mail server got
butt slow.. killed an old snapshot to
On 2/25/2011 3:49 PM, Tomas Ögren wrote:
On 25 February, 2011 - David Blasingame Oracle sent me these 2,6K bytes:
Hi All,
In reading the ZFS Best practices, I'm curious if this statement is
still true about 80% utilization.
It happens at about 90% for me.. all of a sudden, the mail
I have an OpenSolaris (technically OI 147) box running ZFS with Comstar (zpool
version 28, zfs version 5)
The box is a 2950 with 32 GB of RAM, Dell SAS5/e card connected to 6 Promise
vTrak J610sD (dual controller SAS) disk shelves spread across both channels of
the card (2 chains of 3
Hi,
I am working with ZFS nowadays, and I am facing some performance issues
from the application team; they say writes are very slow in ZFS compared to UFS.
Kindly send me some good references or book links. I will be very thankful
to you.
BR,
Tayyab
On Aug 4, 2010, at 3:22 AM, TAYYAB REHMAN wrote:
Hi,
I am working with ZFS nowadays, and I am facing some performance issues
from the application team; they say writes are very slow in ZFS compared to UFS.
Kindly send me some good references or book links. I will be very thankful to
you.
Hi
Hi,
I just installed OpenSolaris on my Dell Optiplex 755 and created a raidz2
with a few slices on a single disk. I was expecting good read/write
performance but I got speeds of 12-15 MB/s.
How can I enhance the read/write performance of my raid?
Thanks,
Abhi.
Abhishek Gupta wrote:
Hi,
I just installed OpenSolaris on my Dell Optiplex 755 and created
a raidz2 with a few slices on a single disk. I was expecting good
read/write performance but I got speeds of 12-15 MB/s.
How can I enhance the read/write performance of my raid?
Thanks,
Abhi.
You
On 18/03/10 08:36 PM, Kashif Mumtaz wrote:
Hi,
I did another test on both machines, and write
performance on ZFS is extraordinarily slow.
Which build are you running?
On snv_134, 2x dual-core CPUs @ 3GHz and 8GB RAM (my desktop),
I see these results:
$ time dd if=/dev/zero of=test.dbf
Hi, thanks for all the replies.
I have found the real culprit: the hard disk was faulty. I changed the hard
disk, and now ZFS performance is much better.
Hi,
I did another test on both machines, and write performance on ZFS is
extraordinarily slow.
I did the following test on both machines.
For write:
time dd if=/dev/zero of=test.dbf bs=8k count=1048576
For read:
time dd if=/testpool/test.dbf of=/dev/null bs=8k
The ZFS machine has 32GB memory
UFS machine
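One caveat on that read test: re-reading a file you just wrote on a 32GB
machine largely measures the ARC, not the disks. A sketch of one way to empty
the cache first, assuming the pool is named testpool as above:

zpool export testpool
zpool import testpool
time dd if=/testpool/test.dbf of=/dev/null bs=8k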
On 18/03/10 08:36 PM, Kashif Mumtaz wrote:
Hi,
I did another test on both machines, and write performance on ZFS is
extraordinarily slow.
Which build are you running?
On snv_134, 2x dual-core CPUs @ 3GHz and 8GB RAM (my desktop), I
see these results:
$ time dd if=/dev/zero of=test.dbf bs=8k
Hi, thanks for your reply.
Both are Sun SPARC T1000 machines.
Hard disk: 1 TB SATA on both
ZFS system: Memory 32 GB, Processor 1 GHz 6-core
OS: Solaris 10 10/09 s10s_u8wos_08a SPARC
Patch cluster level: 142900-02 (Dec 09)
UFS machine:
Hard disk: 1 TB SATA
Memory: 16 GB
Processor: 1 GHz 6
On Thu, Mar 18, 2010 at 03:36:22AM -0700, Kashif Mumtaz wrote:
I did another test on both machines, and write performance on ZFS is
extraordinarily slow.
-
In ZFS, data was being written at around 1037 kw/s while the disk remained busy
On 18.03.2010 21:31, Daniel Carosone wrote:
You have a gremlin to hunt...
Wouldn't Sun help here? ;)
(sorry couldn't help myself, I've spent a week hunting gremlins until I
hit the brick wall of the MPT problem)
//Svein
On 18/03/10 10:05 PM, Kashif Mumtaz wrote:
Hi, thanks for your reply.
Both are Sun SPARC T1000 machines.
Hard disk: 1 TB SATA on both
ZFS system: Memory 32 GB, Processor 1 GHz 6-core
OS: Solaris 10 10/09 s10s_u8wos_08a SPARC
Patch cluster level: 142900-02 (Dec 09)
UFS machine:
Hard disk: 1 TB
James C. McPherson wrote:
On 18/03/10 10:05 PM, Kashif Mumtaz wrote:
Hi, thanks for your reply.
Both are Sun SPARC T1000 machines.
Hard disk: 1 TB SATA on both
ZFS system: Memory 32 GB, Processor 1 GHz 6-core
OS: Solaris 10 10/09 s10s_u8wos_08a SPARC
Patch cluster level: 142900-02 (Dec 09)
Erik Trimble wrote:
James C. McPherson wrote:
On 18/03/10 10:05 PM, Kashif Mumtaz wrote:
Hi, thanks for your reply.
Both are Sun SPARC T1000 machines.
Hard disk: 1 TB SATA on both
ZFS system: Memory 32 GB, Processor 1 GHz 6-core
OS: Solaris 10 10/09 s10s_u8wos_08a SPARC
Patch cluster level
Hi,
I'm using Sun T1000 machines: one machine has Solaris 10 installed with UFS and
the other system has the ZFS file system. The ZFS machine is performing slowly.
Running the following commands on both systems shows the disk getting busy
immediately, up to 100%.
ZFS MACHINE
find /
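While the find / runs, something like this on each box shows per-disk busy
(%b) and service times side by side for comparison:

iostat -xn 5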
On Wed, 17 Mar 2010, Kashif Mumtaz wrote:
but on the UFS file system the average busy is 50%;
any idea why ZFS makes the disk busier?
Clearly there are many more reads per second occurring on the zfs
filesystem than the ufs filesystem. Assuming that the
application-level requests are really the
On Wed, Mar 17, 2010 at 10:15:53AM -0500, Bob Friesenhahn wrote:
Clearly there are many more reads per second occurring on the zfs
filesystem than the ufs filesystem.
Yes.
Assuming that the application-level requests are really the same
From the OP, the workload is a find /.
So, ZFS makes
ZFS has intelligent prefetching. AFAIK, Solaris disk drivers do not
prefetch.
Can you point me to any reference? I didn't find anything stating yea or
nay, for either of these.
Doesn't this mean that if you enable write-back, and you have
a single, non-mirrored RAID controller, and your RAID controller
dies on you so that you lose the contents of the NVRAM, you have
a potentially corrupt file system?
It is understood that any single point of failure could result
One more thing I'd like to add here:
The PERC cache measurably and significantly accelerates small disk writes.
However, for read operations, it is insignificant compared to system ram,
both in terms of size and speed. There is no significant performance
improvement by enabling adaptive
hello
i have made some benchmarks with my napp-it zfs-server
screenshot: www.napp-it.org/bench.pdf
- 2GB vs 4GB vs 8GB RAM
- mirror vs raidz vs raidz2 vs raidz3
- dedup
On Feb 19, 2010, at 8:35 AM, Edward Ned Harvey wrote:
One more thing I’d like to add here:
The PERC cache measurably and significantly accelerates small disk writes.
However, for read operations, it is insignificant compared to system ram,
both in terms of size and speed. There is no
On 19 feb 2010, at 17.35, Edward Ned Harvey wrote:
The PERC cache measurably and significantly accelerates small disk writes.
However, for read operations, it is insignificant compared to system ram,
both in terms of size and speed. There is no significant performance
improvement by
If I understand correctly, ZFS nowadays will only flush data to
non-volatile storage (such as a RAID controller's NVRAM), and not
all the way out to disks. (This is to solve performance problems with some
storage systems, and I believe that it also is the right thing
to do under normal circumstances.)
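If the array's cache really is non-volatile and it stalls on flushes anyway,
the usual workaround is the zfs_nocacheflush tunable in /etc/system. Whether
that is safe is an assumption about your particular array, so treat this as a
sketch, not a recommendation:

* /etc/system: tell ZFS not to send cache-flush commands
set zfs:zfs_nocacheflush = 1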
Ok, I've done all the tests I plan to complete. For highest performance, it
seems:
- The measure I think is the most relevant for typical operation is
the fastest random read / write / mix. (Thanks Bob, for suggesting I do this
test.)
The winner is clearly striped mirrors in ZFS
-
On Thu, 18 Feb 2010, Edward Ned Harvey wrote:
Ok, I’ve done all the tests I plan to complete. For highest performance, it
seems:
· The measure I think is the most relevant for typical operation is the
fastest random read
/write / mix. (Thanks Bob, for suggesting I do this test.)
A most excellent set of tests. We could use some units in the PDF
file though.
Oh, hehehe. ;-) The units are written in the raw txt files. On your
tests, the units were ops/sec, and in mine, they were Kbytes/sec. If you
like, you can always grab the xlsx and modify it to your tastes, and
A most excellent set of tests. We could use some units in the PDF
file though.
Oh, by the way, you originally requested the 12G file to be used in
benchmark, and later changed to 4G. But by that time, two of the tests had
already completed on the 12G, and I didn't throw away those results,
On Thu, 18 Feb 2010, Edward Ned Harvey wrote:
Actually, that's easy. Although the zpool create happens instantly, all
the hardware raid configurations required an initial resilver. And they
were exactly what you expect. Write 1 Gbit/s until you reach the size of
the drive. I watched the
On Thu, Feb 18, 2010 at 10:39:48PM -0600, Bob Friesenhahn wrote:
This sounds like an initial 'silver' rather than a 'resilver'.
Yes, in particular it will be entirely sequential.
ZFS resilver is in txg order and involves seeking.
What I am interested in is the answer to these sorts of
Richard Elling wrote:
...
As you can see, so much has changed, hopefully for the better, that running
performance benchmarks on old software just isn't very interesting.
NB. Oracle's Sun OpenStorage systems do not use Solaris 10 and if they did, they
would not be competitive in the market. The
Never mind. I have no interest in performance tests for Solaris 10.
The code is so old, that it does not represent current ZFS at all.
Whatever. Regardless of what you say, it does show:
- Which is faster, raidz, or a stripe of mirrors?
- How much does raidz2 hurt
iozone -m -t 8 -T -O -r 128k -o -s 12G
Actually, it seems that this is more than sufficient:
iozone -m -t 8 -T -r 128k -o -s 4G
Good news, cuz I kicked off the first test earlier today, and it seems like
it will run till Wednesday. ;-) The first run, on a single disk, took 6.5
hrs,
Whatever. Regardless of what you say, it does show:
· Which is faster, raidz, or a stripe of mirrors?
· How much does raidz2 hurt performance compared to raidz?
· Which is faster, raidz, or hardware raid 5?
· Is a mirror twice as fast as a single disk for
On Sun, 14 Feb 2010, Edward Ned Harvey wrote:
Never mind. I have no interest in performance tests for Solaris 10.
The code is so old, that it does not represent current ZFS at all.
Whatever. Regardless of what you say, it does show:
Since Richard abandoned Sun (in favor of gmail), he has
On Sun, 14 Feb 2010, Edward Ned Harvey wrote:
iozone -m -t 8 -T -O -r 128k -o -s 12G
Actually, it seems that this is more than sufficient:
iozone -m -t 8 -T -r 128k -o -s 4G
Good news, cuz I kicked off the first test earlier today, and it seems like
it will run till Wednesday. ;-)
On Sun, 14 Feb 2010, Thomas Burgess wrote:
Solaris 10 has a really old version of ZFS. I know there are some
pretty big differences in zfs versions from my own non-scientific
benchmarks. It would make sense that people wouldn't be as
interested in benchmarks of Solaris 10 ZFS seeing as
On Feb 14, 2010, at 6:45 PM, Thomas Burgess wrote:
Whatever. Regardless of what you say, it does show:
· Which is faster, raidz, or a stripe of mirrors?
· How much does raidz2 hurt performance compared to raidz?
· Which is faster, raidz, or hardware raid 5?
I have a new server, with 7 disks in it. I am performing benchmarks on it
before putting it into production, to substantiate claims I make, like
striping mirrors is faster than raidz and so on. Would anybody like me to
test any particular configuration? Unfortunately I don't have any SSD, so I
Some thoughts below...
On Feb 13, 2010, at 6:06 AM, Edward Ned Harvey wrote:
I have a new server, with 7 disks in it. I am performing benchmarks on it
before putting it into production, to substantiate claims I make, like
“striping mirrors is faster than raidz” and so on. Would anybody
On Sat, 13 Feb 2010, Edward Ned Harvey wrote:
Will test, including the time to flush(), various record sizes inside file
sizes up to 16G,
sequential write and sequential read. Not doing any mixed read/write
requests. Not doing any
random read/write.
iozone -Reab somefile.wks -g 17G -i 1 -i
On Sat, 13 Feb 2010, Bob Friesenhahn wrote:
Make sure to also test with a command like
iozone -m -t 8 -T -O -r 128k -o -s 12G
Actually, it seems that this is more than sufficient:
iozone -m -t 8 -T -r 128k -o -s 4G
since it creates a 4GB test file for each thread, with 8 threads.
Bob
IMHO, sequential tests are a waste of time. With default configs, it
will be difficult to separate the raw performance from prefetched
performance.
You might try disabling prefetch as an option.
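For reference, file-level prefetch can be turned off with the
zfs_prefetch_disable tunable in /etc/system (followed by a reboot); a sketch:

* /etc/system: disable ZFS file-level prefetch while benchmarking
set zfs:zfs_prefetch_disable = 1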
Let me clarify:
Iozone does a nonsequential series of sequential tests, specifically
On Sat, 13 Feb 2010, Edward Ned Harvey wrote:
kind as to collect samples of iosnoop -Da I would be eternally
grateful :-)
I'm guessing iosnoop is an opensolaris thing? Is there an equivalent for
solaris?
Iosnoop is part of the DTrace Toolkit by Brendan Gregg, which does
work on
On Feb 13, 2010, at 10:54 AM, Edward Ned Harvey wrote:
Please add some raidz3 tests :-) We have little data on how raidz3
performs.
Does this require a specific version of OS? I'm on Solaris 10 10/09, and
man zpool doesn't seem to say anything about raidz3 ... I haven't tried
using
Hi,
We have approximately 3 million active users and have a storage capacity of
300 TB in ZFS zpools.
The ZFS is mounted on Sun Cluster using 3x T2000 servers connected with FC to
SAN storage.
Each zpool is a LUN in the SAN, which already provides RAID, so we're not doing
raidz on top of it.
We started
Final rant on this.
Managed to get the box re-installed, and the performance issue has vanished.
So there is a performance bug in zfs somewhere.
Not sure whether to file a bug report, as I can't now provide any more information.
So I have poked and prodded the disks and they both seem fine.
And yet my rpool is still slow.
Any ideas on what to do now?
No joy.
c1t0d0 89 MB/sec
c1t1d0 89 MB/sec
c2t0d0 123 MB/sec
c2t1d0 123 MB/sec
First two are the rpool
I did not migrate my disks.
I now have 2 pools - rpool is at 60% and is still dog slow.
Also, scrubbing the rpool causes the box to lock up.
On Tue, 1 Sep 2009, John-Paul Drawneek wrote:
I did not migrate my disks.
I now have 2 pools - rpool is at 60% and is still dog slow.
Also, scrubbing the rpool causes the box to lock up.
This sounds like a hardware problem and not something related to
fragmentation. Probably you have a
On Tue, 1 Sep 2009, Jpd wrote:
Thanks.
Any idea on how to work out which one?
I can't find SMART tools in IPS, so what other ways are there?
You could try using a script like this one to find pokey disks:
#!/bin/ksh
# Date: Mon, 14 Apr 2008 15:49:41 -0700
# From: Jeff Bonwick
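The script itself is cut off above, so here is an illustrative stand-in (not
Bonwick's original): time a fixed-size raw read from every disk so an
unusually slow one stands out from its peers.

#!/bin/ksh
# Read 256 MB from each disk's raw device and report how long it took;
# a pokey disk will take noticeably longer than the others.
for disk in /dev/rdsk/c*t*d*s2; do
  echo "$disk:"
  time dd if=$disk of=/dev/null bs=1024k count=256 2>/dev/null
done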
As I understand it, when you expand a pool, the data do not automatically
migrate to the other disks. You will have to rewrite the data somehow, usually
a backup/restore.
-Scott
Ok, I had a pool which got full, so performance tanked.
I ran off and got some more disks to create a new pool for all the extra data.
I got the original pool down to 60% utilization, but the performance is still bad.
Any ideas on how to get the performance back?
The bad news is that the pool in question is
You might want to also try toggling the Nagle TCP setting to see if that helps
with your workload:
ndd -get /dev/tcp tcp_naglim_def
(save that value; the default is 4095)
ndd -set /dev/tcp tcp_naglim_def 1
If no (or a negative) difference, set it back to the original value:
ndd -set /dev/tcp
2008/9/30 Jean Dion [EMAIL PROTECTED]:
iSCSI requires a dedicated network, not a shared network or even a VLAN.
Backups cause large I/O that fills your network quickly, like any SAN today.
Could you clarify why it is not suitable to use VLANs for iSCSI?
Simple. You cannot go faster than the slowest link.
VLANs share the bandwidth and do not provide dedicated
bandwidth for each of them. That means if you have multiple VLANs
coming out of the same wire on your server, you do not have "n" times the
bandwidth but only a fraction of
On Mon, Sep 29, 2008 at 06:01:18PM -0700, Jean Dion wrote:
Do you have dedicated iSCSI ports from your server to your NetApp?
Yes, it's a dedicated redundant gigabit network.
iSCSI requires a dedicated network, not a shared network or even a VLAN.
Backups cause large I/O that fills your
For Solaris internal debugging tools, look here:
http://opensolaris.org/os/community/advocacy/events/techdays/seattle/OS_SEA_POD_JMAURO.pdf
ZFS specifics are available here:
http://www.solarisinternals.com/wiki/index.php/ZFS_Troubleshooting_Guide
Jean
Gary -
Besides the network questions...
What does your zpool status look like?
Are you using compression on the file systems?
(Was single-threaded and fixed in s10u4 or equiv patches)
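Quick ways to check both of those (tank is a placeholder pool name):

zpool status tank
zfs get compression tank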
2008/9/30 Jean Dion [EMAIL PROTECTED]:
Simple. You cannot go faster than the slowest link.
That is indeed correct, but what is the slowest link when using a
Layer 2 VLAN? You made a broad statement that iSCSI 'requires' a
dedicated, standalone network. I do not believe this is the case.
Any
A normal iSCSI setup splits network traffic at the physical layer, not the
logical layer. That means separate physical ports, and often a separate
physical PCI bridge chip if you can. That will be fine for small traffic, but
we are talking about backup performance issues. IP networks and large numbers
of small files are very often the
On Tue, Sep 30, 2008 at 10:32:50AM -0700, William D. Hathaway wrote:
Gary -
Besides the network questions...
Yes, I suppose I should see if traffic on the iSCSI network is
hitting a limit of some sort.
What does your zpool status look like?
Pretty simple:
$ zpool status
pool:
On Mon, Sep 29, 2008 at 06:01:18PM -0700, Jean Dion wrote:
The Legato client and server contain tuning parameters to avoid such
small-file problems. Check your Legato buffer parameters. These buffers will
use your server memory as disk cache.
Our backup person tells me that there are no
2008/9/30 Jean Dion [EMAIL PROTECTED]:
If you want performance, you do not put all your I/O across the same physical
wire. Once again, you cannot go faster than the physical wire can support
(CAT5E, CAT6, fibre), no matter if it is Layer 2 or not. Using a VLAN on a
single port, you share the
gm_sjo wrote:
2008/9/30 Jean Dion [EMAIL PROTECTED]:
If you want performance, you do not put all your I/O across the same physical
wire. Once again, you cannot go faster than the physical wire can support
(CAT5E, CAT6, fibre), no matter if it is Layer 2 or not. Using a VLAN on a
single port
Do you have dedicated iSCSI ports from your server to your NetApp?
iSCSI requires a dedicated network, not a shared network or even a VLAN.
Backups cause large I/O that fills your network quickly, like any SAN today.
Backups are extremely demanding on hardware (CPU, memory, I/O ports, disks, etc.).
We have a moderately sized Cyrus installation with 2 TB of storage
and a few thousand simultaneous IMAP sessions. When one of the
backup processes is running during the day, there's a noticeable
slowdown in IMAP client performance. When I start my `mutt' mail
reader, it pauses for several seconds
Ralf Bertling wrote:
Hi list,
as this matter pops up every now and then in posts on this list I just
want to clarify that the real performance of RaidZ (in its current
implementation) is NOT anything that follows from raidz-style data
efficient redundancy or the copy-on-write design used
Hi list,
as this matter pops up every now and then in posts on this list I just
want to clarify that the real performance of RaidZ (in its current
implementation) is NOT anything that follows from raidz-style data
efficient redundancy or the copy-on-write design used in ZFS.
In an M-way