Re: Reading large amounts from /dev/urandom broken

2014-08-12 Thread Pavel Machek
On Sun 2014-08-10 14:51:08, Andrey Utkin wrote:
> 2014-08-09 10:45 GMT+03:00 Pavel Machek <pa...@ucw.cz>:
> > Warn about my quick benchmark?
> >
> > http://atrey.karlin.mff.cuni.cz/~pavel/quickbench.html
> >
> > I don't see what is insane about it. Yes, there might be more
> > efficient generators of random bits, but that is not the point; this
> > is a quick attempt at a benchmark.
> >
> > Also, people will use cat /dev/urandom > /dev/sdX (and similar) to
> > wipe hard drives. There should be no need to warn about
> > that. (Performance is going to be disk-limited anyway.)
> 
> I believe "cat /dev/urandom" doesn't issue reads with a _block size_
> of more than 32 MB. As Theodore already explained, currently a single
> read() operation returns a "short read" when asked for more than
> 32 MB at once. You can still read more with consecutive reads.

Well, I still don't think this behaviour change is a good idea.

Pavel
-- 
(english) http://www.livejournal.com/~pavelmachek
(cesky, pictures) 
http://atrey.karlin.mff.cuni.cz/~pavel/picture/horses/blog.html


Re: Reading large amounts from /dev/urandom broken

2014-08-10 Thread Andrey Utkin
2014-08-09 10:45 GMT+03:00 Pavel Machek <pa...@ucw.cz>:
> Warn about my quick benchmark?
>
> http://atrey.karlin.mff.cuni.cz/~pavel/quickbench.html
>
> I don't see what is insane about it. Yes, there might be more
> efficient generators of random bits, but that is not the point; this
> is a quick attempt at a benchmark.
>
> Also, people will use cat /dev/urandom > /dev/sdX (and similar) to
> wipe hard drives. There should be no need to warn about
> that. (Performance is going to be disk-limited anyway.)

I believe "cat /dev/urandom" doesn't issue reads with a _block size_
of more than 32 MB. As Theodore already explained, currently a single
read() operation returns a "short read" when asked for more than
32 MB at once. You can still read more with consecutive reads.
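
For illustration, a minimal userspace sketch of such a retry loop (my own
illustrative example, not code from any of the affected programs):

/* Sketch: read a large amount from /dev/urandom, tolerating short reads. */
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <fcntl.h>

int main(void)
{
	size_t want = 64UL << 20;	/* 64 MB, more than one read() will return */
	size_t got = 0;
	char *buf = malloc(want);
	int fd = open("/dev/urandom", O_RDONLY);

	if (fd < 0 || !buf)
		return 1;

	while (got < want) {
		ssize_t n = read(fd, buf + got, want - got);
		if (n <= 0)		/* treat errors as fatal for this sketch */
			break;
		got += n;		/* a short read just means: call read() again */
	}
	printf("requested %zu bytes, got %zu\n", want, got);
	close(fd);
	free(buf);
	return 0;
}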

-- 
Andrey Utkin


Re: Reading large amounts from /dev/urandom broken

2014-08-09 Thread Pavel Machek
Hi!

> I've inquired on the bugzilla why the reporter is abusing urandom in
> this way.  The other commenter on the bug replicated the problem, but
> that's not a "second bug report" in my book.
> 
> At the very least, this will probably cause me to insert a warning
> printk: "insane user of /dev/urandom: [current->comm] requested %d
> bytes" whenever someone tries to request more than 4k.

Warn about my quick benchmark?

http://atrey.karlin.mff.cuni.cz/~pavel/quickbench.html

I don't see what is insane about it. Yes, there might be more
efficient generators of random bits, but that is not the point; this
is a quick attempt at a benchmark.

Also, people will use cat /dev/urandom > /dev/sdX (and similar) to
wipe hard drives. There should be no need to warn about
that. (Performance is going to be disk-limited anyway.)

Pavel
-- 
(english) http://www.livejournal.com/~pavelmachek
(cesky, pictures) 
http://atrey.karlin.mff.cuni.cz/~pavel/picture/horses/blog.html


Re: Reading large amounts from /dev/urandom broken

2014-07-24 Thread Alex Elsayed
Hannes Frederic Sowa wrote:

> On Mi, 2014-07-23 at 11:14 -0400, Theodore Ts'o wrote:
>> On Wed, Jul 23, 2014 at 04:52:21PM +0300, Andrey Utkin wrote:
>> > Dear developers, please check bugzilla ticket
>> > https://bugzilla.kernel.org/show_bug.cgi?id=80981 (not the initial
>> > issue, but starting with comment #3).
>> > 
>> > Reading from /dev/urandom gives EOF after 33554431 bytes.  I believe
>> > it is introduced by commit 79a8468747c5f95ed3d5ce8376a3e82e0c5857fc,
>> > with the chunk
>> > 
>> > nbytes = min_t(size_t, nbytes, INT_MAX >> (ENTROPY_SHIFT + 3));
>> > 
>> > which is described in commit message as "additional paranoia check to
>> > prevent overly large count values to be passed into urandom_read()".
>> > 
>> > I don't know why people pull such large amounts of data from urandom,
>> > but given that today there are two bug reports regarding problems doing
>> > that, I consider this something people actually do.
>> 
>> I've inquired on the bugzilla why the reporter is abusing urandom in
>> this way.  The other commenter on the bug replicated the problem, but
>> that's not a "second bug report" in my book.
>> 
>> At the very least, this will probably cause me to insert a warning
>> printk: "insane user of /dev/urandom: [current->comm] requested %d
>> bytes" whenever someone tries to request more than 4k.
> 
> Ok, I would be fine with that.
> 
> The dd if=/dev/urandom of=random_file.dat case seems reasonable to me,
> so we should try not to break it. But, of course, there are other possibilities.

Personally, I'd say that _is_ insane - reading from urandom still consumes
entropy (causing readers of /dev/random to block more often); when
alternatives (such as dd'ing to dm-crypt) both avoid the issue _and_ are
faster, it should very well be considered pathological.



Re: Reading large amounts from /dev/urandom broken

2014-07-23 Thread Hannes Frederic Sowa
On Mi, 2014-07-23 at 11:14 -0400, Theodore Ts'o wrote:
> On Wed, Jul 23, 2014 at 04:52:21PM +0300, Andrey Utkin wrote:
> > Dear developers, please check bugzilla ticket
> > https://bugzilla.kernel.org/show_bug.cgi?id=80981 (not the initial
> > issue, but starting with comment #3).
> > 
> > Reading from /dev/urandom gives EOF after 33554431 bytes.  I believe
> > it is introduced by commit 79a8468747c5f95ed3d5ce8376a3e82e0c5857fc,
> > with the chunk
> > 
> > nbytes = min_t(size_t, nbytes, INT_MAX >> (ENTROPY_SHIFT + 3));
> > 
> > which is described in commit message as "additional paranoia check to
> > prevent overly large count values to be passed into urandom_read()".
> > 
> > I don't know why people pull such large amounts of data from urandom,
> > but given that today there are two bug reports regarding problems doing
> > that, I consider this something people actually do.
> 
> I've inquired on the bugzilla why the reporter is abusing urandom in
> this way.  The other commenter on the bug replicated the problem, but
> that's not a "second bug report" in my book.
> 
> At the very least, this will probably cause me to insert a warning
> printk: "insane user of /dev/urandom: [current->comm] requested %d
> bytes" whenever someone tries to request more than 4k.

Ok, I would be fine with that.

The dd if=/dev/urandom of=random_file.dat case seems reasonable to me,
so we should try not to break it. But, of course, there are other possibilities.

Bye,
Hannes



Re: Reading large amounts from /dev/urandom broken

2014-07-23 Thread Theodore Ts'o
On Wed, Jul 23, 2014 at 04:52:21PM +0300, Andrey Utkin wrote:
> Dear developers, please check bugzilla ticket
> https://bugzilla.kernel.org/show_bug.cgi?id=80981 (not the initial
> issue, but starting with comment #3).
> 
> Reading from /dev/urandom gives EOF after 33554431 bytes.  I believe
> it is introduced by commit 79a8468747c5f95ed3d5ce8376a3e82e0c5857fc,
> with the chunk
> 
> nbytes = min_t(size_t, nbytes, INT_MAX >> (ENTROPY_SHIFT + 3));
> 
> which is described in commit message as "additional paranoia check to
> prevent overly large count values to be passed into urandom_read()".
> 
> I don't know why people pull such large amounts of data from urandom,
> but given that today there are two bug reports regarding problems doing
> that, I consider this something people actually do.

I've inquired on the bugzilla why the reporter is abusing urandom in
this way.  The other commenter on the bug replicated the problem, but
that's not a "second bug report" in my book.

At the very least, this will probably cause me to insert a warning
printk: "insane user of /dev/urandom: [current->comm] requested %d
bytes" whenever someone tries to request more than 4k.

- Ted


Re: Reading large amounts from /dev/urandom broken

2014-07-23 Thread Hannes Frederic Sowa
Hi Andrey,

thanks for the heads up!

On Mi, 2014-07-23 at 16:52 +0300, Andrey Utkin wrote:
> Dear developers, please check bugzilla ticket
> https://bugzilla.kernel.org/show_bug.cgi?id=80981 (not the initial
> issue, but starting with comment #3).
> 
> Reading from /dev/urandom gives EOF after 33554431 bytes.  I believe
> it is introduced by commit 79a8468747c5f95ed3d5ce8376a3e82e0c5857fc,
> with the chunk
> 
> nbytes = min_t(size_t, nbytes, INT_MAX >> (ENTROPY_SHIFT + 3));
> 
> which is described in commit message as "additional paranoia check to
> prevent overly large count values to be passed into urandom_read()".
> 
> I don't know why people pull such large amounts of data from urandom,
> but given that today there are two bug reports regarding problems doing
> that, I consider this something people actually do.

Ted, we should roll back the min_t change and just account SIZE_MAX bytes
in case the nbytes << (ENTROPY_SHIFT + 3) calculation overflows.

Or something like this?

diff --git a/drivers/char/random.c b/drivers/char/random.c
index 71529e1..f11a6cc 100644
--- a/drivers/char/random.c
+++ b/drivers/char/random.c
@@ -1006,7 +1006,10 @@ retry:
WARN_ON(1);
entropy_count = 0;
}
-   nfrac = ibytes << (ENTROPY_SHIFT + 3);
+   if (ibytes > SIZE_MAX >> (ENTROPY_SHIFT + 3))
+   nfrac = SIZE_MAX;
+   else
+   nfrac = ibytes << (ENTROPY_SHIFT + 3);
if ((size_t) entropy_count > nfrac)
entropy_count -= nfrac;
else
@@ -1386,7 +1389,6 @@ urandom_read(struct file *file, char __user *buf,
size_t nbytes, loff_t *ppos)
"with %d bits of entropy available\n",
current->comm, nonblocking_pool.entropy_total);
 
-   nbytes = min_t(size_t, nbytes, INT_MAX >> (ENTROPY_SHIFT + 3));
ret = extract_entropy_user(&nonblocking_pool, buf, nbytes);

trace_urandom_read(8 * nbytes, ENTROPY_BITS(&nonblocking_pool),




Reading large amounts from /dev/urandom broken

2014-07-23 Thread Andrey Utkin
Dear developers, please check bugzilla ticket
https://bugzilla.kernel.org/show_bug.cgi?id=80981 (not the initial
issue, but starting with comment #3).

Reading from /dev/urandom gives EOF after 33554431 bytes.  I believe
it is introduced by commit 79a8468747c5f95ed3d5ce8376a3e82e0c5857fc,
with the chunk

nbytes = min_t(size_t, nbytes, INT_MAX >> (ENTROPY_SHIFT + 3));

which is described in commit message as "additional paranoia check to
prevent overly large count values to be passed into urandom_read()".

I don't know why people pull such large amounts of data from urandom,
but given that today there are two bug reports regarding problems doing
that, I consider this something people actually do.
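
For reference, 33554431 is exactly INT_MAX >> (ENTROPY_SHIFT + 3) with
ENTROPY_SHIFT being 3, i.e. 2147483647 >> 6. A minimal reproducer sketch
(illustrative only; on an affected kernel the single read() below is
expected to come back short at 33554431 bytes):

/* Sketch of a reproducer: one large read() from /dev/urandom. */
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <fcntl.h>

int main(void)
{
	size_t want = 64UL << 20;	/* ask for 64 MB in a single read() */
	char *buf = malloc(want);
	int fd = open("/dev/urandom", O_RDONLY);
	ssize_t n;

	if (fd < 0 || !buf)
		return 1;

	n = read(fd, buf, want);	/* deliberately no retry loop */
	printf("read() returned %zd of %zu requested bytes\n", n, want);
	close(fd);
	free(buf);
	return 0;
}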

-- 
Andrey Utkin

