Mike Christie wrote:
> Erez Zilber wrote:
>> This new thread summarizes and continues a discussion that we (Mike, Or
>> and myself) had outside the list. This is what we have so far:
>>
>> * Having a parent device: commit
>>   62786b526687db54c6dc22a1786d6df8b03da3f3 in the bnx2i branch looks
>>   ok, and will solve the DMA mask problem. I think that it's cleaner
>>   than calling slave_alloc etc. However, this code cannot be used
>>   outside the bnx2i branch. I think that we need to create another
>>   patch (based on this one) to submit upstream. Mike - what do you
>>   think?
>
> I am going to push that code soon. Since it is not critical for 2.6.26,
> I am waiting to push it for .27.
>
>> * iSER alignment issue: I'm not sure if we can force our
>>   restrictions through scsi_host_template. Again, the restrictions are:
>>     o The 1st element must end at the page boundary.
>>     o The last element must start at the page boundary.
>>     o All other elements must be page aligned (i.e. start at the
>>       beginning of a page and end at the page boundary).
>>
>>   Can it be done using blk_queue_dma_alignment? pad_mask?
>>
>> * Host per session or host per IB device: I agree with Or about the
>>   need to have a host per session. I understand that the main
>>   problem that commit 7e8e8af6511afafff33ef7eb0f519bf8702b78ed tries
>>   to solve is what happens if we failover from one IB device to
>>   another, right? We prefer to continue using a host per session,
>
> It is not related to this.
>
> It deals with being able to set some limits at a level higher than per
> session. If we had X sessions on one HCA port, then we do not want to
> always push X * session->can_queue IOs onto the port, because the port
> may not be able to take that much IO. The commands will time out, the
> sessions will be stopped, and the scsi eh will run. The problem we face
> is that the ib_device is per HCA and not per port, like we would see
> with normal scsi pci drivers or iscsi network drivers.
>
> It also deals with aligning data structures to objects that can be
> removed, handling refcounting in a common way, and aligning this with
> how we normally do it in the scsi layer. With a normal scsi hba and
> driver, we have a pci device and allocate a scsi host for it (we
> actually get a pci resource for each port on the hba and allocate a
> scsi_host per port). We implement the pci driver's remove and probe
> callouts, which get called at those times.
>
> Unlike software iscsi, for infiniband we have something close to this.
> We have the ib_client, which makes allocating a host per ib_device
> really nice when having to handle resource limits, handle removal of
> the host object and the objects accessing it, and do it all in a
> standard way. For example, if I remove a broadcom card, I will remove
> the iscsi/scsi host for each port and that will cause each session
> running on that card to be removed. For iser, if I do a host per
> ib_device and you did something like rmmod the HCA's module, then each
> device's remove callout gets called. We can then remove the iscsi/scsi
> host for the ib_device, and the iscsi layer will remove the sessions
> referencing it. And it is all a common code path, there is no special
> casing for different drivers of this class, and the driver does not
> need to maintain its own struct to represent what other drivers
> represent with the scsi_host.
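As an aside on the alignment item above, here is a minimal sketch of how
a low-level driver normally pushes an alignment constraint down through
slave_configure with blk_queue_dma_alignment. The iser_slave_configure
name and the trimmed host template are hypothetical, and whether this
mechanism can actually express the first/last-element page-boundary
rules is exactly the open question:

/*
 * Illustrative sketch only (iser_slave_configure and the trimmed host
 * template are hypothetical): setting a page-sized DMA alignment mask
 * from slave_configure.
 */
#include <linux/blkdev.h>
#include <scsi/scsi_device.h>
#include <scsi/scsi_host.h>

static int iser_slave_configure(struct scsi_device *sdev)
{
	/*
	 * Ask the block layer to only pass down buffers that start on a
	 * page boundary (unaligned user buffers get copied instead of
	 * mapped directly).  This helps with the "middle elements are
	 * whole pages" case, but it says nothing about the first
	 * element ending, or the last element starting, on a page
	 * boundary.
	 */
	blk_queue_dma_alignment(sdev->request_queue, PAGE_SIZE - 1);
	return 0;
}

static struct scsi_host_template iscsi_iser_sht = {
	.name			= "iSCSI Initiator over iSER",
	.slave_configure	= iser_slave_configure,
	/* ... rest of the template ... */
};

For reference, the pad_mask question presumably refers to the queue's
dma_pad_mask (blk_queue_dma_pad()), which only rounds transfer lengths
up to the mask, so it would not cover the scatterlist boundary rules
either.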
Oh yeah, I am not sure if you saw the end of that thread about this
item, but I had said I was ok with leaving it as is for now. Because we
allocate an ib_device per HCA, we run into problems with doing a
scsi_host per ib_device (when I did the patch I thought we were getting
a device per port). I am not going to push that patch because of this,
so we do not have to waste any time on this issue. At this time, if you
are not complaining, then I am not either :)
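For reference, a rough sketch of what the host-per-ib_device approach
would look like with the ib_client add/remove callouts described above.
The iser_add_one/iser_remove_one names, the trimmed-down host template,
and the error handling are illustrative only, not the actual iSER code:

#include <rdma/ib_verbs.h>
#include <scsi/scsi_host.h>

static struct scsi_host_template iser_sht = {
	.name		= "iSCSI Initiator over iSER",
	.can_queue	= 128,	/* per-HCA limit, not per port */
	.this_id	= -1,
	.sg_tablesize	= 128,
	.cmd_per_lun	= 32,
};

static void iser_add_one(struct ib_device *device);
static void iser_remove_one(struct ib_device *device);

static struct ib_client iser_client = {
	.name	= "iser",
	.add	= iser_add_one,
	.remove	= iser_remove_one,
};

/* Called once per HCA (ib_device), not per port. */
static void iser_add_one(struct ib_device *device)
{
	struct Scsi_Host *shost;

	shost = scsi_host_alloc(&iser_sht, 0);
	if (!shost)
		return;

	if (scsi_add_host(shost, device->dma_device)) {
		scsi_host_put(shost);
		return;
	}

	/* Stash the host so the remove callout can find it. */
	ib_set_client_data(device, &iser_client, shost);
}

/* Called on rmmod of the HCA driver or hot removal of the HCA. */
static void iser_remove_one(struct ib_device *device)
{
	struct Scsi_Host *shost = ib_get_client_data(device, &iser_client);

	if (!shost)
		return;
	/* Removing the host makes the iscsi layer tear down the
	 * sessions referencing it. */
	scsi_remove_host(shost);
	scsi_host_put(shost);
}

/* module_init would then just do ib_register_client(&iser_client);
 * and module_exit would do ib_unregister_client(&iser_client). */

Because the add callout fires per HCA rather than per port, the host
(and its can_queue) would end up spanning all of the HCA's ports, which
is the mismatch that led to dropping the patch for now.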