From: Andrew Morton a...@linux-foundation.org
Date: Mon, 9 Aug 2010 16:43:46 -0700
drivers/net/wan/farsync.c: In function 'fst_intr_rx':
drivers/net/wan/farsync.c:1312: warning: cast to pointer from integer of different size
drivers/net/wan/farsync.c: In function 'do_bottom_half_tx':
Vlad, please pull both into OFED 1.5.2 RC4:
Thanks,
-arlin
Done,
Regards,
Vladimir
I agree with you that changing kernel ABI is not necessary.
I will follow your directions regarding a single allocation at start.
Regards,
Mirek
-Original Message-
From: Roland Dreier [mailto:rdre...@cisco.com]
Sent: Friday, August 06, 2010 5:58 PM
To: Walukiewicz, Miroslaw
Cc:
Hello Jason,
Do you have any benchmarks that show the alloca is a measurable
overhead?
We changed the overall path (both kernel and user space) to an allocation-less
approach and achieved twice better latency on the call into the kernel driver.
I have no data on which path is dominant - kernel or user.
On Tue, Aug 3, 2010 at 5:44 PM, David Dillow d...@thedillows.org wrote:
On Tue, 2010-08-03 at 17:26 +0200, Bart Van Assche wrote:
[ ... ]
I'm not sure it is a good idea to let all transmit buffers be allocated
for sending CMD_RSP information units so that none remain
for replying
There are two kinds supported. QLogic's driver does them in
the host driver so they are atomic with respect to all the CPUs
in the host.
I'm just curious about this: how does this work? Is the CPU getting
interrupted and doing the operation while the Mellanox HCA does
everything in hardware?
You can work around this by creating a loopback connection (ie an RC
connection from the local HCA to itself) and post atomic operations to
that QP instead of accessing the memory directly with the CPU.
Right, but that's really slow, especially if you're implementing some
sort of
On 08/09/2010 06:36 PM, Hefty, Sean wrote:
Several new opcodes have been added since the last time ib_pack.h was
updated.
These changes add them.
Will anything make use of these?
diff --git a/include/rdma/ib_pack.h b/include/rdma/ib_pack.h
index cbb50f4..df10acc 100644
---
Hello Sean,
On 08/09/2010 03:53 PM, Hefty, Sean wrote:
This allows the rdma ucm to establish an XRC connection between two nodes. Most
of the changes are related to modify_qp, since the API differs depending on
whether the QP is on the send or receive side.
To create an XRC receive QP, the cap.max_send_wr
On Tue, Aug 10, 2010 at 02:15, Stephen Rothwell s...@canb.auug.org.au wrote:
On Mon, 9 Aug 2010 16:43:46 -0700 Andrew Morton a...@linux-foundation.org
wrote:
Guys. What's goin' on out there?
I guess we are all so up to date that no one does 32-bit builds any
more ... Also no one is
- XRC support upstream (kernel and user space) is still pending.
(I can start a librdmacm branch for XRC support.)
- Changes are needed to the kernel rdma_cm.
We could start submitting patches against Roland's xrc branch for
these.
- Please update to the latest librdmacm tree.
On Tue, Aug 10, 2010 at 09:59:50AM -0700, Hefty, Sean wrote:
The general parameters would be the same as for RC. Should we create a new
ai_flag ? or a new port space ?
There's an ai_qp_type field available. I think the RDMA TCP port
space would work.
Not sure the port space matters at all?
Is there anything additional CM information for XRC other than
requesting an XRC QP type? (XRCSRQ or something?)
It's nothing huge:
Modifications to Table 99:
*
On 08/10/2010 12:14 PM, Jason Gunthorpe wrote:
On Tue, Aug 10, 2010 at 09:59:50AM -0700, Hefty, Sean wrote:
The general parameters would be the same as for RC. Should we create a new
ai_flag ? or a new port space ?
There's an ai_qp_type field available. I think the RDMA TCP port
space
Andrew Morton wrote:
fs/squashfs/xattr.c:37: warning: 'squashfs_xattr_handler' declared inline after being called
fs/squashfs/xattr.c:37: warning: previous declaration of 'squashfs_xattr_handler' was here
The fix for this is in linux-next, and it will be in my imminent 2.6.36 pull
request.
This series of three patches adds SRP_CRED_REQ support in ib_srp, which is a
feature defined in the SRP (draft) standard.
Changes in v4 compared to v3:
- Dropped the fourth patch since it has been merged.
- Introduced the symbolic constant SRP_TSK_MGMT_RSV, which represents the
number of slots
The information unit transmit ring (srp_target.tx_ring) in ib_srp is currently
only used for allocating requests sent by the initiator to the target. This
patch prepares that ring buffer for the allocation of both requests and
responses. Also, this patch differentiates the uses of SRP_SQ_SIZE,
Implements SRP_CRED_REQ, an information unit defined in the SRP
(draft) standard that allows an SRP target to inform an SRP initiator that
more requests may be sent by the initiator. Adds declarations for the
SRP_CRED_REQ and SRP_CRED_RSP information units to include/scsi/srp.h.
The SRP (draft) standard specifies that an SRP initiator must never queue more
than (SRP request limit) - 1 unanswered SRP_CMD information units. This patch
makes sure that the SCSI mid-layer never tries to queue more than (SRP request
limit) - 1 SCSI commands to ib_srp. This improves performance
Sorry for the massive lag in this conversation - between trying to balance
working with Linux-RDMA with the rest of my job, support problems and
vacations, this got pushed to the bottom of my queue while I thought about the
best approach to the issue.
When we left this, we were discussing the
enum ibv_event_type {
...
+ IBV_EVENT_GID
...
struct ibv_async_event {
union {
+ struct ibv_gid_event *gid_event;
...
+ int ibv_reg_gid_event(struct ibv_context *context, uint8_t port_num);
We need to get Roland's thoughts on
That thought occurred to me, but I thought it might be easier for the app
developer if the API explicitly broke up the generic concepts of traps and
notices into specific types.
-Original Message-
From: linux-rdma-ow...@vger.kernel.org
[mailto:linux-rdma-ow...@vger.kernel.org] On
On Tue, Aug 10, 2010 at 04:05:42PM -0500, frank zago wrote:
It seems the new API has too many constraints for XRC. There are a
couple things that don't fit:
I'll try to take a more careful look at this later, but just want to
say that the new APIs are so new that we could still change them -
Well.. the XRC domain needs to be an input to create_ep just like
the PD :(
In looking at how this API turned out maybe the PD should have been
carried in the rdma_addrinfo? Certainly I would put the XRC domain
in there.. Recall my original comments about the PD being used to
restrict
On Tue, Aug 10, 2010 at 04:18:57PM -0700, Hefty, Sean wrote:
Well.. the XRC domain needs to be an input to create_ep just like
the PD :(
In looking at how this API turned out maybe the PD should have been
carried in the rdma_addrinfo? Certainly I would put the XRC domain
in there..
The ibstat command doesn't show all HCAs when the number of HCAs in one system
exceeds 20. We need to raise this limit. Increase it to 32 to be consistent
with the define IB_UVERBS_MAX_DEVICES = 32.
Signed-off-by: Arputham Benjamin abenja...@sgi.com
---
diff -rup