[ewg] ib-bonding release 22

2008-01-24 Thread Moni Shoua
Please take it from my home dir. latest.txt was also updated.

Change Log:
--
1. Apply backport patches also for kernel 2.6.9-67 (RHEL4 Update 6)
2. Fix bugs for working with OS tools (sysconfig/initscripts)
3. Change in documentation for working with OS tools
4. Fix: in ib-bond, bond0 appears as a constant instead of $BOND_NAME in one place
5. Fix: Destroy bonding master only if it exists



Vlad,
Please apply the patch below to ofed_1_3_scripts to complete the support for RHEL4 Update 6.


Add 2.6.9-67 to the list of kernels that are supported by ib-bonding

Signed-off-by: Moni Shoua [EMAIL PROTECTED]
---

diff --git a/install.pl b/install.pl
index 256263d..fcae1fb 100755
--- a/install.pl
+++ b/install.pl
@@ -1579,7 +1579,7 @@ sub set_availability
 }
 
 # ib-bonding
-if ($kernel =~ m/2.6.9-34|2.6.9-42|2.6.9-55|2.6.16.[0-9.]*-[0-9.]*-[A-Za-z0-9.]*|el5|fc6/) {
+if ($kernel =~ m/2.6.9-34|2.6.9-42|2.6.9-55|2.6.9-67|2.6.16.[0-9.]*-[0-9.]*-[A-Za-z0-9.]*|el5|fc6/) {
 $packages_info{'ib-bonding'}{'available'} = 1;
 $packages_info{'ib-bonding-debuginfo'}{'available'} = 1;
 }
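
For anyone checking a box by hand, the same test can be approximated in the shell (a sketch only; the grep pattern mirrors the Perl regex above and is not part of the patch):

  # Does the running kernel match the list that install.pl uses to mark
  # ib-bonding as available? (pattern approximated from the regex above)
  if uname -r | grep -Eq '2\.6\.9-(34|42|55|67)|2\.6\.16|el5|fc6'; then
      echo "ib-bonding should be available for $(uname -r)"
  else
      echo "kernel $(uname -r) is not in the supported list"
  fi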



Re: [ofa-general] RE: [ewg] Not seeing any SDP performance changes in OFED 1.3 beta, and I get Oops when enabling sdp_zcopy_thresh

2008-01-24 Thread Weikuan Yu

Hi, Scott,

I have been running SDP tests across two woodcrest nodes with 4x DDR 
cards using OFED-1.2.5.4. The card/firmware info is below.


CA 'mthca0'
CA type: MT25208
Number of ports: 2
Firmware version: 5.1.400
Hardware version: a0
Node GUID: 0x0002c90200228e0c
System image GUID: 0x0002c90200228e0f

I could not get a bandwidth of more than 5 Gbps like you have shown here. 
I wonder if I need to upgrade to the latest software or firmware? Any 
suggestions?


Thanks,
--Weikuan


TCP STREAM TEST from 0.0.0.0 (0.0.0.0) port 0 AF_INET to 192.168.225.77 (192.168.225.77) port 0 AF_INET

Recv   Send    Send                          Utilization       Service Demand
Socket Socket  Message  Elapsed              Send     Recv     Send    Recv
Size   Size    Size     Time     Throughput  local    remote   local   remote
bytes  bytes   bytes    secs.    10^6bits/s  % S      % S      us/KB   us/KB

131072 131072 131072    10.00      4918.95   21.29    24.99    1.418   1.665



Scott Weitzenkamp (sweitzen) wrote:

Jim,

I am trying OFED-1.3-20071231-0600 and RHEL4 x86_64 on a dual CPU
(single core each CPU) Xeon system.  I do not see any performance
improvement (either throughput or CPU utilization) using netperf when I
set /sys/module/ib_sdp/sdp_zcopy_thresh to 16384.  Can you elaborate on
your HCA type and the performance improvement you see?

Here's an example netperf command line when using a Cheetah DDR HCA and
1.2.917 firmware (I have also tried ConnectX and 2.3.000 firmware too):

[EMAIL PROTECTED] ~]$ LD_PRELOAD=libsdp.so netperf241 -v2 -4 -H 192.168.1.201 -l 30 -t TCP_STREAM -c -C -- -m 65536
TCP STREAM TEST from 0.0.0.0 (0.0.0.0) port 0 AF_INET to 192.168.1.201 (192.168.1.201) port 0 AF_INET : histogram : demo

Recv   Send    Send                          Utilization       Service Demand
Socket Socket  Message  Elapsed              Send     Recv     Send    Recv
Size   Size    Size     Time     Throughput  local    remote   local   remote
bytes  bytes   bytes    secs.    10^6bits/s  % S      % S      us/KB   us/KB

 87380  16384  65536    30.01      7267.70   55.06    61.27    1.241   1.381

Alignment      Offset         Bytes    Bytes       Sends   Bytes    Recvs
Local  Remote  Local  Remote  Xfered   Per                 Per
Send   Recv    Send   Recv             Send (avg)          Recv (avg)
    8       8      0       0 2.726e+10  65536.00    415942   48106.01  566648




RE: [ofa-general] RE: [ewg] Not seeing any SDP performance changes in OFED 1.3 beta, and I get Oops when enabling sdp_zcopy_thresh

2008-01-24 Thread Jim Mott
Hi,
  64K is borderline for seeing the bzcopy effect.  Using an AMD 6000+ (3 GHz
dual core) in an Asus M2A-VM motherboard with ConnectX running 2.3 firmware
and the OFED 1.3-rc3 stack on a 2.6.23.8 kernel.org kernel, I ran the
test for 128K:
  5546  sdp_zcopy_thresh=0 (off)
  8709  sdp_zcopy_thresh=65536

For these tests, I just have LD_PRELOAD set in my environment.
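
A minimal way to reproduce the comparison (a sketch: the host address and netperf flags match the transcript below, everything else is an assumption):

  # Toggle the SDP zero-copy threshold and rerun the identical netperf
  # TCP_STREAM test over libsdp, once with bzcopy off and once at 64K.
  export LD_PRELOAD=libsdp.so
  modprobe ib_sdp
  for thresh in 0 65536; do
      echo $thresh > /sys/module/ib_sdp/parameters/sdp_zcopy_thresh
      echo "sdp_zcopy_thresh=$thresh"
      netperf -v2 -4 -H 193.168.10.198 -l 30 -t TCP_STREAM -c -C -- -m 128K
  done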

===

I see that TCP_MAXSEG is not being handled by libsdp and will look into
it.


[EMAIL PROTECTED] ~]# modprobe ib_sdp
[EMAIL PROTECTED] ~]# netperf -v2 -4 -H 193.168.10.198 -l 30 -t TCP_STREAM -c -C -- -m 128K
TCP STREAM TEST from 0.0.0.0 (0.0.0.0) port 0 AF_INET to 193.168.10.198 (193.168.10.198) port 0 AF_INET
netperf: get_tcp_info: getsockopt TCP_MAXSEG: errno 92
Recv   Send    Send                          Utilization       Service Demand
Socket Socket  Message  Elapsed              Send     Recv     Send    Recv
Size   Size    Size     Time     Throughput  local    remote   local   remote
bytes  bytes   bytes    secs.    10^6bits/s  % S      % S      us/KB   us/KB

 87380  16384 131072    30.01      5545.69   51.47    14.43    1.521   1.706

Alignment      Offset         Bytes    Bytes       Sends   Bytes    Recvs
Local  Remote  Local  Remote  Xfered   Per                 Per
Send   Recv    Send   Recv             Send (avg)          Recv (avg)
    8       8      0       0  2.08e+10 131072.00    158690   33135.60  627718

Maximum
Segment
Size (bytes)
-1
[EMAIL PROTECTED] ~]# echo 65536 > /sys/module/ib_sdp/parameters/sdp_zcopy_thresh
[EMAIL PROTECTED] ~]# netperf -v2 -4 -H 193.168.10.198 -l 30 -t TCP_STREAM -c -C -- -m 128K
TCP STREAM TEST from 0.0.0.0 (0.0.0.0) port 0 AF_INET to 193.168.10.198 (193.168.10.198) port 0 AF_INET
netperf: get_tcp_info: getsockopt TCP_MAXSEG: errno 92
Recv   Send    Send                          Utilization       Service Demand
Socket Socket  Message  Elapsed              Send     Recv     Send    Recv
Size   Size    Size     Time     Throughput  local    remote   local   remote
bytes  bytes   bytes    secs.    10^6bits/s  % S      % S      us/KB   us/KB

 87380  16384 131072    30.01      8708.58   50.63    14.55    0.953   1.095

Alignment      Offset         Bytes    Bytes       Sends   Bytes    Recvs
Local  Remote  Local  Remote  Xfered   Per                 Per
Send   Recv    Send   Recv             Send (avg)          Recv (avg)
    8       8      0       0 3.267e+10 131072.00    249228   26348.30 1239807

Maximum
Segment
Size (bytes)
-1

Thanks,
Jim

Jim Mott
Mellanox Technologies Ltd.
mail: [EMAIL PROTECTED]
Phone: 512-294-5481



[ewg] [PATCH] IB/ehca: Prevent sending UD packets to QP0

2008-01-24 Thread Joachim Fenkes
The IB spec doesn't allow packets destined for QP0 to be sent on any VL
other than VL15. The hardware doesn't filter those packets on the send
side, so we need to do this in the driver and firmware.

As eHCA doesn't support QP0, we can just filter out all traffic going to
QP0, regardless of SL or VL.

Signed-off-by: Joachim Fenkes [EMAIL PROTECTED]
---
 drivers/infiniband/hw/ehca/ehca_reqs.c |    4 ++++
 1 files changed, 4 insertions(+), 0 deletions(-)

diff --git a/drivers/infiniband/hw/ehca/ehca_reqs.c 
b/drivers/infiniband/hw/ehca/ehca_reqs.c
index 3aacc8c..2ce8cff 100644
--- a/drivers/infiniband/hw/ehca/ehca_reqs.c
+++ b/drivers/infiniband/hw/ehca/ehca_reqs.c
@@ -209,6 +209,10 @@ static inline int ehca_write_swqe(struct ehca_qp *qp,
 		ehca_gen_err("wr.ud.ah is NULL. qp=%p", qp);
 		return -EINVAL;
 	}
+	if (unlikely(send_wr->wr.ud.remote_qpn == 0)) {
+		ehca_gen_err("dest QP# is 0. qp=%x", qp->real_qp_num);
+		return -EINVAL;
+	}
 	my_av = container_of(send_wr->wr.ud.ah, struct ehca_av, ib_ah);
 	wqe_p->u.ud_av.ud_av = my_av->av;
 
-- 
1.5.2




RE: [ofa-general] RE: [ewg] Not seeing any SDP performance changes in OFED 1.3 beta, and I get Oops when enabling sdp_zcopy_thresh

2008-01-24 Thread Scott Weitzenkamp (sweitzen)
I've tested on RHEL4 and RHEL5, and see no sdp_zcopy_thresh improvement
for any message size, as measured with netperf, for any Arbel or
ConnectX HCA.

Scott

 
[ewg] [GIT PULL ofed-1.3] - Tag the ofed cxgb3 driver version.

2008-01-24 Thread Steve Wise

Vlad,

Please pull the following patch from:

git://git.openfabrics.org/~swise/ofed-1.3 ofed_kernel

This patch must have gotten lost in the move from 1.2.5 to 1.3.


Thanks,

Steve.



-

Tag -ofed for cxgb3 driver version.

This keeps kernel.org vs ofed driver versions unique.
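
As a quick sanity check after installing (a sketch; "eth2" is a placeholder for whatever interface the T3 adapter owns on your system):

  # The tagged version shows up in ethtool's driver info.
  ethtool -i eth2
  # driver: cxgb3
  # version: 1.0-ofed      (a kernel.org build would report 1.0-ko)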

Signed-off-by: Steve Wise [EMAIL PROTECTED]
---

 .../fixes/cxgb3_00300_add_ofed_version_tag.patch   |   13 +++++++++++++
 1 files changed, 13 insertions(+), 0 deletions(-)

diff --git a/kernel_patches/fixes/cxgb3_00300_add_ofed_version_tag.patch b/kernel_patches/fixes/cxgb3_00300_add_ofed_version_tag.patch
new file mode 100644
index 000..ffee40a
--- /dev/null
+++ b/kernel_patches/fixes/cxgb3_00300_add_ofed_version_tag.patch
@@ -0,0 +1,13 @@
+diff --git a/drivers/net/cxgb3/version.h b/drivers/net/cxgb3/version.h
+index ef1c633..ef2405a 100644
+--- a/drivers/net/cxgb3/version.h
++++ b/drivers/net/cxgb3/version.h
+@@ -35,7 +35,7 @@
+ #define DRV_DESC "Chelsio T3 Network Driver"
+ #define DRV_NAME "cxgb3"
+ /* Driver version */
+-#define DRV_VERSION "1.0-ko"
++#define DRV_VERSION "1.0-ofed"
+
+ /* Firmware version */
+ #define FW_VERSION_MAJOR 4


[ewg] Re: [ofa-general] [PATCH] IB/ehca: Prevent sending UD packets to QP0

2008-01-24 Thread Hal Rosenstock
On Thu, 2008-01-24 at 17:59 +0100, Joachim Fenkes wrote:
 The IB spec doesn't allow packets destined for QP0 to be sent on any VL
 other than VL15. The hardware doesn't filter those packets on the send
 side, so we need to do this in the driver and firmware.
 
 As eHCA doesn't support QP0, we can just filter out all traffic going to
 QP0, regardless of SL or VL.

Is this a hardware or software limitation? If it is software, is there
any plan to enable QP0 support?

-- Hal




[ewg] non-SRQ patch for OFED 1.3

2008-01-24 Thread Pradeep Satyanarayana
Some HCAs, such as ehca, do not natively support SRQ. This patch enables
IPoIB CM for such HCAs. It has been in Roland's for-2.6.25 git tree for
about three months now.

Please consider including this patch into OFED 1.3.
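
With the patch applied, the cap on connected-mode QPs becomes a module parameter set at load time; a minimal sketch (the value 64 is only an example):

  # max_nonsrq_conn_qp is registered with mode 0444, so it is read-only
  # at runtime and must be given when loading the module.
  modprobe ib_ipoib max_nonsrq_conn_qp=64
  cat /sys/module/ib_ipoib/parameters/max_nonsrq_conn_qp   # expect 64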

Signed-off-by: Pradeep Satyanarayana [EMAIL PROTECTED]
---

--- a/drivers/infiniband/ulp/ipoib/ipoib.h  2008-01-23 13:29:06.0 -0800
+++ b/drivers/infiniband/ulp/ipoib/ipoib.h  2008-01-23 16:03:33.0 -0800
@@ -69,6 +69,7 @@ enum {
 	IPOIB_TX_RING_SIZE    = 64,
 	IPOIB_MAX_QUEUE_SIZE  = 8192,
 	IPOIB_MIN_QUEUE_SIZE  = 2,
+	IPOIB_CM_MAX_CONN_QP  = 4096,
 
IPOIB_NUM_WC  = 4,
 
@@ -188,10 +189,13 @@ enum ipoib_cm_state {
 struct ipoib_cm_rx {
 	struct ib_cm_id	       *id;
 	struct ib_qp	       *qp;
+	struct ipoib_cm_rx_buf *rx_ring;
 	struct list_head	list;
 	struct net_device      *dev;
 	unsigned long		jiffies;
 	enum ipoib_cm_state	state;
+	int			index;
+	int			recv_count;
 };
 
 struct ipoib_cm_tx {
@@ -234,6 +238,7 @@ struct ipoib_cm_dev_priv {
 	struct ib_wc		ibwc[IPOIB_NUM_WC];
struct ib_sge   rx_sge[IPOIB_CM_RX_SG];
struct ib_recv_wr   rx_wr;
+   int nonsrq_conn_qp;
int max_cm_mtu;
int num_frags;
 };
@@ -463,6 +468,8 @@ void ipoib_drain_cq(struct net_device *d
 /* We don't support UC connections at the moment */
 #define IPOIB_CM_SUPPORTED(ha)   (ha[0] & (IPOIB_FLAGS_RC))
 
+extern int ipoib_max_conn_qp;
+
 static inline int ipoib_cm_admin_enabled(struct net_device *dev)
 {
struct ipoib_dev_priv *priv = netdev_priv(dev);
@@ -493,6 +500,12 @@ static inline void ipoib_cm_set(struct i
 	neigh->cm = tx;
 }
 
+static inline int ipoib_cm_has_srq(struct net_device *dev)
+{
+	struct ipoib_dev_priv *priv = netdev_priv(dev);
+	return !!priv->cm.srq;
+}
+
 void ipoib_cm_send(struct net_device *dev, struct sk_buff *skb, struct ipoib_cm_tx *tx);
 int ipoib_cm_dev_open(struct net_device *dev);
 void ipoib_cm_dev_stop(struct net_device *dev);
@@ -510,6 +523,8 @@ void ipoib_cm_handle_tx_wc(struct net_de
 
 struct ipoib_cm_tx;
 
+#define ipoib_max_conn_qp 0
+
 static inline int ipoib_cm_admin_enabled(struct net_device *dev)
 {
return 0;
@@ -535,6 +550,11 @@ static inline void ipoib_cm_set(struct i
 {
 }
 
+static inline int ipoib_cm_has_srq(struct net_device *dev)
+{
+   return 0;
+}
+
 static inline
 void ipoib_cm_send(struct net_device *dev, struct sk_buff *skb, struct ipoib_cm_tx *tx)
 {
--- a/drivers/infiniband/ulp/ipoib/ipoib_cm.c  2008-01-23 13:29:06.0 -0800
+++ b/drivers/infiniband/ulp/ipoib/ipoib_cm.c  2008-01-23 16:46:47.0 -0800
@@ -39,6 +39,13 @@
 #include <linux/icmpv6.h>
 #include <linux/delay.h>
 
+int ipoib_max_conn_qp = 128;
+
+module_param_named(max_nonsrq_conn_qp, ipoib_max_conn_qp, int, 0444);
+MODULE_PARM_DESC(max_nonsrq_conn_qp,
+		 "Max number of connected-mode QPs per interface "
+		 "(applied only if shared receive queue is not available)");
+
 #ifdef CONFIG_INFINIBAND_IPOIB_DEBUG_DATA
 static int data_debug_level;
 
@@ -81,7 +88,7 @@ static void ipoib_cm_dma_unmap_rx(struct
 		ib_dma_unmap_single(priv->ca, mapping[i + 1], PAGE_SIZE, DMA_FROM_DEVICE);
 }
 
-static int ipoib_cm_post_receive(struct net_device *dev, int id)
+static int ipoib_cm_post_receive_srq(struct net_device *dev, int id)
 {
struct ipoib_dev_priv *priv = netdev_priv(dev);
struct ib_recv_wr *bad_wr;
@@ -104,7 +111,33 @@ static int ipoib_cm_post_receive(struct 
return ret;
 }
 
-static struct sk_buff *ipoib_cm_alloc_rx_skb(struct net_device *dev, int id, int frags,
+static int ipoib_cm_post_receive_nonsrq(struct net_device *dev,
+					struct ipoib_cm_rx *rx, int id)
+{
+	struct ipoib_dev_priv *priv = netdev_priv(dev);
+	struct ib_recv_wr *bad_wr;
+	int i, ret;
+
+	priv->cm.rx_wr.wr_id = id | IPOIB_OP_CM | IPOIB_OP_RECV;
+
+	for (i = 0; i < IPOIB_CM_RX_SG; ++i)
+		priv->cm.rx_sge[i].addr = rx->rx_ring[id].mapping[i];
+
+	ret = ib_post_recv(rx->qp, &priv->cm.rx_wr, &bad_wr);
+	if (unlikely(ret)) {
+		ipoib_warn(priv, "post recv failed for buf %d (%d)\n", id, ret);
+		ipoib_cm_dma_unmap_rx(priv, IPOIB_CM_RX_SG - 1,
+				      rx->rx_ring[id].mapping);
+		dev_kfree_skb_any(rx->rx_ring[id].skb);
+		rx->rx_ring[id].skb = NULL;
+	}
+
+	return ret;
+}
+
+static struct sk_buff *ipoib_cm_alloc_rx_skb(struct net_device *dev,
+					     struct ipoib_cm_rx_buf *rx_ring,
+					     int id, int frags,
 					     u64 mapping[IPOIB_CM_RX_SG])