On Tue, Dec 07, 2010 at 10:53:23PM +0200, Nir Muchtar wrote:

> I'm just not convinced this can be easily completed/accepted and not
> divert us from the primary goal, so I prefer picking this up later
> after those

This part of the kernel is mature; the 'primary goal' is not to add
new half-baked things that need to churn userspace APIs before they
are complete :)

> patches are accepted. If I discover there's no good way to obtain this 
> through userspace then I won't.

There isn't. Read my last message.

> > > The thing is, there's no easy and clean way to retrieve the export when
> > > using dump_start.
> > 
> > I don't follow this comment, can you elaborate?
> > 
> > This really needs to use the dump api, and I can't see any reason why
> > it can't.
 
> As I said, there's just no way (I know of) to use dump_start, divide data
> into several packets, and receive a consistent snapshot of the data, and
> this is an issue. We can achieve all that by doing something a little 
> different so why shouldn't we? 

You have to give up on 100% consistency to use dump_start, which is OK
for diags, and what other dumpers in the kernel do.

What you've done in your v2 patch won't work if the table you are
dumping is too large: once you exceed sk_rmem_alloc for the netlink
socket it will deadlock. The purpose of dump_start is to avoid that
deadlock. (review my past messages on the subject)

Your v1 patch wouldn't deadlock, but it would fail to dump with
ENOMEM, and it provides an avenue for an unprivileged kernel OOM
DoS.

The places in the kernel that don't use dump_start have to stay under
sk_rmem_alloc.

Jason
--
To unsubscribe from this list: send the line "unsubscribe linux-rdma" in
the body of a message to [email protected]
More majordomo info at  http://vger.kernel.org/majordomo-info.html