On 03/14/2011 09:15 AM, Vikas Chaudhary wrote:
-----Original Message-----
From: Mike Christie [mailto:micha...@cs.wisc.edu]
Sent: Sunday, March 13, 2011 6:05 PM
To: open-iscsi@googlegroups.com
Cc: Vikas Chaudhary; Lalit Chandivade; Harish Zunjarrao; Manish Rangankar;
Ravi Anand
Subject: Re: [PATCH] [RFC] Add netconfig support in iscsiadm and iscsid

I think overall this is ok. Thank you for working on this.


Some comments below.

On 02/25/2011 04:56 AM, Vikas Chaudhary wrote:
struct iscsi_net_param {
          uint32_t param_type; /* enum iscsi_net_param */
          uint32_t iface_type; /* IPv4 or IPv6 */
          uint32_t iface_num; /* iface number, 0 - n */
          uint32_t length; /* Actual length of the param */
          uint32_t offset; /* for param with length > 256 */

I did not understand the offset value.

If the length of a param is greater than 256, the param is split into two
params with the same type but different offsets: the value goes into two param
structs, where the first param has offset 0 and the second has offset 256.
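For illustration, a minimal sketch of what such a split could look like. The struct layout is the one quoted above from the patch; the `split_param` helper and its exact semantics are assumptions, not code from the patch:

```c
#include <stdint.h>
#include <string.h>

/* Struct layout quoted from the patch. */
struct iscsi_net_param {
        uint32_t param_type; /* enum iscsi_net_param */
        uint32_t iface_type; /* IPv4 or IPv6 */
        uint32_t iface_num;  /* iface number, 0 - n */
        uint32_t length;     /* actual length of this chunk */
        uint32_t offset;     /* 0 for first chunk, 256 for second */
        uint8_t  value[256]; /* max is 223 for iscsi name */
} __attribute__((packed));

/*
 * Hypothetical helper: split a value longer than 256 bytes into two
 * params with the same type but different offsets.  Returns the number
 * of param structs used (1 or 2).
 */
static int split_param(struct iscsi_net_param out[2], uint32_t type,
                       const uint8_t *val, uint32_t len)
{
        uint32_t off = 0;
        int n = 0;

        while (len && n < 2) {
                uint32_t chunk = len > 256 ? 256 : len;

                memset(&out[n], 0, sizeof(out[n]));
                out[n].param_type = type;
                out[n].length = chunk;
                out[n].offset = off;
                memcpy(out[n].value, val + off, chunk);
                off += chunk;
                len -= chunk;
                n++;
        }
        return n;
}
```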


It does not seem very friendly. Do you have any params using this in the patches you sent?



          uint8_t value[256]; /* max is 223 for iscsi name */
} __attribute__((packed));

I think you can just dynamically allocate the value.

We did not do this for other events/params, because they are needed for
reconnection. For the case where we could not allocate memory and
are trying to reconnect a session that the VM is trying to write pages
out to, having a fixed value size made it simple to preallocate
param/event structs.

For setup there is no need to preallocate.


We use pre-allocated memory because we need to send all network parameters to the
driver as a single data blob.
We can modify the code to use dynamically allocated memory as suggested, but it
will add complexity to the code.

If we use dynamic memory allocation, we need to keep track of the location
where each parameter and its corresponding data start.
The number of network parameters to be sent can change each time based on how
the user configures the iface, so it becomes complex to determine
how much memory needs to be allocated each time.

For example:
Suppose we have following two iface files:
IFACE A:
# BEGIN RECORD 2.0-872
iface.iscsi_ifacename = qla4xxx.00:0e:1e:04:93:92
iface.ipaddress = 192.168.1.71
iface.subnet_mask = 255.255.255.0
iface.gateway = 192.168.1.1
iface.hwaddress = 00:0e:1e:04:93:92
iface.transport_name = qla4xxx
iface.state = enable
# END RECORD

IFACE B:
# BEGIN RECORD 2.0-872
iface.iscsi_ifacename = qla4xxx.00:0e:1e:04:93:92
iface.ipaddress = 1234::5678
iface.hwaddress = 00:0e:1e:04:93:92
iface.transport_name = qla4xxx
iface.state = enable
# END RECORD

In IFACE A, we require 4 bytes for the IPv4 address, whereas in IFACE B, we
require 16 bytes for the IPv6 address value.
As the number of parameters increases, we need to work out the memory
requirement for each of them as well.

Since we need to send the complete netconfig data as a single blob, we first
need to determine the number of valid parameters
in the iface that must be sent to the driver, and then, depending on the
parameter, the memory requirement of each one.

This requires the iface files to be parsed twice: once to find the memory
requirement and again to fill the data into the allocated memory.
This additional processing is what dynamic memory allocation would require.
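A sketch of the two-pass approach described above. The `parsed_param` type, the helper name, and the fixed header length are all hypothetical stand-ins for the real iface/param structures, used only to show the size-then-fill pattern:

```c
#include <stdint.h>
#include <stdlib.h>
#include <string.h>

/* Hypothetical parsed-iface representation for illustration. */
struct parsed_param {
        uint32_t type;
        uint32_t len;      /* 4 for an IPv4 address, 16 for IPv6, etc. */
        const void *val;
};

/* Stand-in for the iscsi_net_param fields preceding the value. */
#define PARAM_HDR_LEN 20u

/*
 * Two-pass build: the first pass sums the memory requirement, the
 * second fills the blob with headers and values back to back.
 */
static void *build_net_config_blob(const struct parsed_param *params,
                                   int nr, size_t *total_out)
{
        size_t total = 0;
        uint8_t *blob, *p;
        int i;

        /* Pass 1: compute the total size. */
        for (i = 0; i < nr; i++)
                total += PARAM_HDR_LEN + params[i].len;

        blob = malloc(total);
        if (!blob)
                return NULL;

        /* Pass 2: fill type, length, then the value for each param. */
        p = blob;
        for (i = 0; i < nr; i++) {
                memcpy(p, &params[i].type, sizeof(uint32_t));
                memcpy(p + sizeof(uint32_t), &params[i].len, sizeof(uint32_t));
                /* remaining header fields left zeroed for brevity */
                memset(p + 2 * sizeof(uint32_t), 0,
                       PARAM_HDR_LEN - 2 * sizeof(uint32_t));
                memcpy(p + PARAM_HDR_LEN, params[i].val, params[i].len);
                p += PARAM_HDR_LEN + params[i].len;
        }
        *total_out = total;
        return blob;
}
```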

In your patch, though, you read the iface into memory. If you used variable-length params, you would still only need to read it once, but you would need to loop over the list of params twice.


Currently we assume a fixed amount of memory that can accommodate the max
number of params, defined as 20 per interface * 5 interfaces per host.
This allows traversing the iface record once and filling in up to the max
supported params per interface.

Please let us know your thoughts on whether we should continue using
pre-allocated memory or switch to dynamically allocated memory.

I think you can do variable-sized params and still loop over the params only once by doing a multi-iovec sendmsg. Maybe long ago the kernel did not support multiple iovecs in netlink, but I think now you could do something like:



__fill_some_param(struct iovec *iov)
{
        int len;

        len = sizeof(struct iscsi_net_param) + net_param->length;
        iov->iov_base = malloc(len);
        iov->iov_len = len;
.......
}



__iface_build_net_config(nl_buf)
{
        iovs = iscsi_nl_get_iovs(nl_buf);

        __fill_some_param(&iovs[0]);
        __fill_some_other_param(&iovs[1]);

}


netlink.c

struct iscsi_nl_buf {
        int nr_iovs;
        struct iovec *iovs;
};

iscsi_nl_allocate_buf(int num_params)
{
        struct iscsi_nl_buf *nl_buf;

        /* +2 for the event and the nlmsghdr */
        nl_buf = malloc(sizeof(struct iscsi_nl_buf) +
                        ((num_params + 2) * sizeof(struct iovec)));

        nl_buf->iovs = (struct iovec *)(nl_buf + 1);
......

        return nl_buf;
}

struct iovec *iscsi_nl_get_iovs(nl_buf)
{
        /* start at 2, because 0 is for nlmsghdr and 1 is for event */
        return nl_buf->iovs + 2;
}

set_net_config(nl_buf)
{
        struct iscsi_event ev;


        ev.X = Y;
        ........

        nl_buf->iovs[1].iov_base = &ev;
        nl_buf->iovs[1].iov_len = sizeof(ev);

        __kipc_call(nl_buf);
}

kwritev(nl_buf)
{
        struct nlmsghdr nlh;

        nlh.X = Y;
        ........

        nl_buf->iovs[0].iov_base = &nlh;
        nl_buf->iovs[0].iov_len = sizeof(nlh);

        /* remove the memcpy to NLMSG_DATA and send the iovec array */

        .....
        msg.msg_iov = nl_buf->iovs;
        msg.msg_iovlen = nl_buf->nr_iovs;
        .....

}

/* convert other code */
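To make the scatter-gather idea above concrete, here is a self-contained sketch that sends a header plus two variable-sized pieces in a single sendmsg() without ever copying them into one buffer. A socketpair stands in for the netlink socket, and all names and payloads are illustrative:

```c
#include <string.h>
#include <sys/socket.h>
#include <sys/types.h>
#include <sys/uio.h>
#include <unistd.h>

/*
 * Multi-iovec sendmsg() demonstration: the receiver sees one contiguous
 * message even though the sender keeps the pieces in separate buffers.
 * Returns 0 on success, nonzero on any failure.
 */
static int demo(void)
{
        int sv[2];
        char hdr[] = "HDR";            /* stands in for the nlmsghdr */
        char ev[] = "EVT";             /* stands in for the event */
        char param[] = "192.168.1.71"; /* a variable-sized param */
        struct iovec iovs[3] = {
                { .iov_base = hdr,   .iov_len = sizeof(hdr) - 1 },
                { .iov_base = ev,    .iov_len = sizeof(ev) - 1 },
                { .iov_base = param, .iov_len = sizeof(param) - 1 },
        };
        struct msghdr msg = { .msg_iov = iovs, .msg_iovlen = 3 };
        char rx[64];
        ssize_t n;

        if (socketpair(AF_UNIX, SOCK_DGRAM, 0, sv) < 0)
                return -1;

        /* One syscall sends all three pieces: 3 + 3 + 12 = 18 bytes. */
        if (sendmsg(sv[0], &msg, 0) != 18)
                return -1;

        n = recv(sv[1], rx, sizeof(rx) - 1, 0);
        close(sv[0]);
        close(sv[1]);
        if (n != 18)
                return -1;
        rx[n] = '\0';
        return strcmp(rx, "HDREVT192.168.1.71");
}
```

The same pattern applies to the netlink socket: fill one iovec per param as you walk the iface record once, then hand the whole array to sendmsg().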
        
