SuSE 10 and SuSE 10 w/SP1 install a kernel below 2.6.17. If your kernel
is 2.6.16 or less, then the following kernel tuning variables may help
with TCP performance. Add these to /etc/sysctl.conf and reboot:
net.core.rmem_max = 873200
net.core.wmem_max = 873200
net.ipv4.tcp_rmem = 32768 436600
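As a minimal sketch, the same values can also be loaded without a reboot via sysctl. The tcp_rmem line is left out here because its third field is truncated in the original post (tcp_rmem normally takes three values: min, default, max):

```shell
# Append the complete tuning variables to /etc/sysctl.conf (requires root):
cat >> /etc/sysctl.conf <<'EOF'
net.core.rmem_max = 873200
net.core.wmem_max = 873200
EOF

# Apply the new values immediately, without rebooting:
sysctl -p

# Verify a single variable:
sysctl net.core.rmem_max
```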
On Wed, Dec 10, 2008 at 05:38:15PM -0800, fuubar2...@yahoo.com spake thusly:
SuSE 10 and SuSE 10 w/SP1 install a kernel below 2.6.17. If your kernel
is 2.6.16 or less, then the following kernel tuning variables may help
Just out of curiosity, what changed after 2.6.16 which makes these variables
On Mon, Dec 08, 2008 at 12:45:11PM -0800, Kmec wrote:
IIRC IOmeter for Linux had some issues, possibly related to queue depth, so
you should use a tool other than IOmeter on Linux. I don't know whether that
problem has been fixed or whether a patch is available for IOmeter for Linux.
Oh,
On 9 Dec 2008 at 2:24, [EMAIL PROTECTED] wrote:
2) Our next problem is multipath. When we configure multipath, over
one NIC with dd we get 90 MBps read, but over 2 NICs only 80 MBps, which
is strange. On the switch and the SAN we see that data flows over both
NICs, but dd still shows 80
I think you can use multiple dd processes for testing, and run dd against
the raw device. Can you give more information: the output of
multipath -ll -v3, the iSCSI connection status during the test, the NIC IP
addresses and topology, and something like iostat -x -d?
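A sketch of the kind of test being suggested here: several concurrent dd readers against the raw multipath device, with iostat watching the underlying paths. The device path, offsets, and sizes are placeholders, not values from the thread:

```shell
#!/bin/sh
# Placeholder multipath device; substitute the real one from `multipath -ll`.
DEV=/dev/mapper/mpath0

# Run 4 sequential readers in parallel, each starting at a different
# 1 GiB offset, bypassing the page cache so the storage path is measured
# rather than RAM:
for i in 0 1 2 3; do
  dd if="$DEV" of=/dev/null bs=1M count=1024 skip=$((i * 1024)) iflag=direct &
done

# In another terminal, watch per-path utilization while this runs:
#   iostat -x -d 2
wait
```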
2008/12/10 Ulrich Windl [EMAIL PROTECTED]
On 9 Dec 2008 at
On Sun, Dec 07, 2008 at 11:01:46AM -0800, Kmec wrote:
Hi,
I would like to ask for help with some strange behavior of Linux
iSCSI. The situation is as follows: iSCSI SAN Dell EqualLogic, SAS 10k RPM
drives, 4x Broadcom NIC or 4x Intel NIC in a Dell R900 server (24 cores,
64 GB RAM). It's a testing
On Mon, Dec 08, 2008 at 02:17:45PM +0200, Pasi Kärkkäinen wrote:
On Sun, Dec 07, 2008 at 11:01:46AM -0800, Kmec wrote:
Hi,
I would like to ask for help with some strange behavior of Linux
iSCSI. The situation is as follows: iSCSI SAN Dell EqualLogic, SAS 10k RPM
drives, 4x Broadcom NIC
IIRC IOmeter for Linux had some issues, possibly related to queue depth, so
you should use a tool other than IOmeter on Linux. I don't know whether that
problem has been fixed or whether a patch is available for IOmeter for Linux.
Oh, and please try using the 'noop' elevator/scheduler on your iSCSI
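Switching the elevator is done per block device through sysfs; a minimal sketch, with sdX standing in for the actual iSCSI disk:

```shell
# Show the available schedulers; the active one is shown in brackets:
cat /sys/block/sdX/queue/scheduler

# Switch this disk to noop at runtime (requires root):
echo noop > /sys/block/sdX/queue/scheduler

# To make noop the default for all disks, boot with the kernel
# parameter: elevator=noop
```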
2) Our next problem is multipath. When we configure multipath, over
one NIC with dd we get 90 MBps read, but over 2 NICs only 80 MBps, which
is strange. On the switch and the SAN we see that data flows over both
NICs, but dd still shows 80 MBps.
Are the dd results just one-time results or an
Kmec wrote:
Hi,
I would like to ask for help with some strange behavior of Linux
iSCSI. The situation is as follows: iSCSI SAN Dell EqualLogic, SAS 10k RPM
drives, 4x Broadcom NIC or 4x Intel NIC in a Dell R900 server (24 cores,
64 GB RAM). It's a testing environment where we are trying to measure