jamal wrote:
> On Sun, 2008-01-13 at 14:26 +0200, Timo Teräs wrote:
>> I tried to address all these problems, and added the xfrm_policy
>> and xfrm_state structure to a new linked list that is used solely
>> for dumping. This way the dumps can be done in O(n) and have an
>> iterator point to them. I also modified af_key to stop dumping
>> when the receive queue is filling up and to continue it from a
>> callback at recvfrom() when there is more room.
>>
>> But I'd like to hear about:
>> - if the approach is ok (adding the entries in one more list)?
>> - if the new/modified variables, structures and function names are ok?
>> - if I should port these against net-2.6 or net-2.6.25 git tree?
>> - if I should split this into more than one commit? (maybe xfrm core,
>> xfrm user and af_key parts?)
> 
> To answer your last 2 questions:
> There are two issues that are intermingled in there. The most
> important is pf_key not being robust on dump. The other is the
> accuracy of netlink dumping - this has much broader implications. I
> think you need to separate the patches into at least those two and
> prioritize the pf_key one as the patch that you push in. I would also
> try to make the pf_key dumping approach mirror what netlink does
> today - in the worst case it would be as inaccurate as netlink, which
> is not bad. I think you'll find no resistance getting such a patch in.

Creating a separate af_key patch would not be a big problem. I was
just hoping to avoid it, as the xfrm_state / xfrm_policy changes
modify the API and therefore require changing af_key as well.

> To answer your first 2 questions:
> 
> My main dilemma with your approach is the policy of maintaining such
> a list in the kernel and the memory consumption needed vs. the
> inaccuracy of netlink dumps. With your approach of maintaining an
> extra SADB/SPD, I would need double the RAM amount.

No. I'm not creating second copies of the SADB/SPD entries. The entries
are just added to one more list. Memory requirements go up per entry
(figures based on the 2.6.22.9 kernel):
sizeof(struct xfrm_policy) 636 -> 644 bytes
sizeof(struct xfrm_state)  452 -> 460 bytes

For 400K entries that is about 3 MB of additional memory (400,000
entries x 8 bytes each), whereas the total database size is around
240-250 MB.
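
To make the cost concrete, here is a minimal sketch of the idea (the
struct, field and function names are illustrative, not taken from the
actual patch): each entry gains one intrusive list_head, i.e. two
pointers, which on a 32-bit build accounts for the 8 bytes per entry
seen above.

#include <linux/list.h>

/*
 * Sketch only: each SPD/SAD entry carries one extra list_head so a
 * dump can walk a single list in O(n).  No second copy of the entry
 * is ever made; only the linkage is added.
 */
struct xfrm_policy_sketch {
	/* ... existing xfrm_policy members ... */
	struct list_head all;	/* linkage on the global dump list */
};

static LIST_HEAD(xfrm_policy_all);	/* list walked by dumps */

static void xfrm_policy_link_sketch(struct xfrm_policy_sketch *pol)
{
	/* linked once at insert time, under the usual xfrm locking */
	list_add_tail(&pol->all, &xfrm_policy_all);
}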

> Netlink dumping gives up accuracy[1] and uses 50% of the memory you
> are proposing, but burns more CPU cycles. User space could maintain
> the list you are proposing instead - and compensate for the
> inaccuracies by watching events[2]. Of course that shifts the
> accuracy to events, which could be lost on their way to user space.
> This issue is alleviated a little more with your approach of keeping
> the list in the kernel and adding new updates to the tail of the dump
> list (which, btw, your patch didn't seem to do); however, even if you
> solve that, you will still be faced with the challenge of not being
> atomic on updates; for example, an entry already on your dump list
> could be deleted before being read by user space. I can't see you
> solving that without burning a _lot_ more CPU (trying to find the
> entry on the dump list that may have gotten updated or deleted).
> There's also the issue of how long to keep the dump list before aging
> it out and how to deal with multiple users.

If more entries are added while a dump is in progress, you can get
notifications of them.
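
For reference, here is roughly how the flow-controlled af_key dump
described at the top works; this is a simplified sketch, and the
context structure and function names are illustrative rather than the
real ones from the patch. The dump walks the single all-entries list,
stops as soon as the socket receive queue is full, and the saved
cursor lets the recvfrom() callback resume the walk once the
application has drained the queue:

#include <linux/list.h>
#include <net/sock.h>

/*
 * Sketch: per-socket dump state; the real patch keeps equivalent
 * iterator state so a dump can be suspended and resumed.
 */
struct pfkey_dump_sketch {
	struct list_head *cursor;	/* next entry to emit */
};

static int pfkey_do_dump_sketch(struct sock *sk,
				struct pfkey_dump_sketch *ctx,
				struct list_head *all)
{
	while (ctx->cursor != all) {
		/* stop when the receive queue is full; recvfrom()
		 * calls back in here once there is room again */
		if (atomic_read(&sk->sk_rmem_alloc) >= sk->sk_rcvbuf)
			return -EBUSY;

		/* ... build one dump message for the entry at
		 * ctx->cursor and queue it to sk ... */

		ctx->cursor = ctx->cursor->next;
	}
	return 0;	/* dump complete */
}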
 
Cheers,
  Timo
