On Tue, Jun 21, 2005 at 05:02:36PM -0400, Jeff Dike wrote:
> That would never make it anywhere near mainline, so you would have a
> choice of dumping that into every UML pool you build or fixing your
> filesystems to use the rngtools.
Yeah -- it comes down to how often you build new
distributions
The current ubd implementation doesn't support write
barriers (blk_queue_ordered etc.), and the recommendation
is to use synchronous IO to ensure data integrity. In
principle, adding barrier request support ought to get
around this problem, allowing writes without O_SYNC with
occasional calls to fdatasync().
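
As a rough illustration of the trade-off, here is a minimal
userspace sketch (not the ubd driver itself; "disk.img" is a
hypothetical, pre-existing backing file) contrasting O_SYNC
writes with buffered writes plus fdatasync():

/* Sketch: O_SYNC on every write versus buffered writes
 * flushed by an occasional fdatasync(). */
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
    const char buf[512] = { 0 };

    /* Safe but slow: each write reaches the disk before returning. */
    int fd = open("disk.img", O_WRONLY | O_SYNC);
    if (fd < 0) { perror("open"); return 1; }
    if (write(fd, buf, sizeof buf) < 0) perror("write");
    close(fd);

    /* Faster: buffered writes, flushed at the points where the
     * guest would have issued a barrier. */
    fd = open("disk.img", O_WRONLY);
    if (fd < 0) { perror("open"); return 1; }
    if (write(fd, buf, sizeof buf) < 0) perror("write");
    fdatasync(fd);   /* flush the file data to disk */
    close(fd);
    return 0;
}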
On Tue, Oct 04, 2005 at 12:31:56PM +0200, Blaisorblade wrote:
> On Tuesday 04 October 2005 11:31, Chris Lightfoot wrote:
> > The current ubd implementation doesn't support write
> > barriers (blk_queue_ordered etc.), and the recommendation
> > is to use synchronous IO to
Presently the code in ubd_kern.c carries the following
comments:
- for ubd_handler,
/* Called without ubd_io_lock held */
- for do_ubd_request,
/* Called with ubd_io_lock held */
ubd_handler locks ubd_io_lock before calling end_request,
but it then calls do_ubd_request after releasing the lock,
contradicting the comment above.
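
Here is a runnable userspace analogue of the contract those
comments describe; the names mirror ubd_kern.c, but this is a
standalone sketch using a pthread mutex, not the driver's
spinlock code:

/* Userspace analogue of the locking contract above: the handler
 * must keep the lock held across a call whose documented
 * contract is "called with the lock held". */
#include <pthread.h>
#include <stdio.h>

static pthread_mutex_t ubd_io_lock = PTHREAD_MUTEX_INITIALIZER;

/* Contract (as in ubd_kern.c): called with ubd_io_lock held. */
static void do_ubd_request(void)
{
    printf("issuing next request under the lock\n");
}

static void ubd_handler(void)
{
    pthread_mutex_lock(&ubd_io_lock);
    printf("end_request under the lock\n");
    do_ubd_request();   /* correct: the lock is still held */
    pthread_mutex_unlock(&ubd_io_lock);
    /* The bug described above: dropping the lock first and then
     * calling do_ubd_request() violates its documented contract. */
}

int main(void)
{
    ubd_handler();
    return 0;
}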
At the moment, the performance of UBD is disappointing.
For instance, here is a trivial benchmark on a quiet and
reasonably modern P4 server with 3Ware 8xxx RAID (this is
linux 2.6.13.3, but I haven't spent any effort on tuning
the IO subsystem; in particular, the below results are for
the default,
On Sat, Oct 08, 2005 at 05:32:10PM +0100, Chris Lightfoot wrote:
[...]
> ... while the same experiment on a 2.6.13.3 UML kernel
> (booting with parameters ubd0s=filesystem mem=128M)
> running on the same hardware gives the following less
> impressive results:
>
> # time
On Sun, Oct 09, 2005 at 02:45:42PM -0400, Jeff Dike wrote:
> On Sat, Oct 08, 2005 at 05:32:10PM +0100, Chris Lightfoot wrote:
> > Jeff Dike also has an AIO reimplementation of UBD in the
> > works, but I haven't had a chance to look at it yet.
>
> Why don't you
On Sun, Oct 09, 2005 at 09:35:15PM -0400, Jeff Dike wrote:
> On Sun, Oct 09, 2005 at 11:51:28PM +0100, Chris Lightfoot wrote:
> > (TBH I'm surprised that the AIO code shows so little
> > improvement. The kernel does report that it's using 2.6
> > host AIO, so it is using it.)
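
For anyone unfamiliar with what "2.6 host AIO" refers to: it is
the kernel's native AIO interface, usually reached through
libaio. A minimal sketch, assuming an existing image file (the
name is hypothetical) and linking with -laio:

/* Sketch of Linux 2.6 native AIO via libaio; an illustration of
 * the host interface, not UML's actual code. */
#define _GNU_SOURCE
#include <libaio.h>
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

int main(void)
{
    io_context_t ctx = 0;
    struct iocb cb, *cbs[1] = { &cb };
    struct io_event ev;
    void *buf;

    if (io_setup(1, &ctx) < 0) { fprintf(stderr, "io_setup failed\n"); return 1; }

    /* O_DIRECT, because 2.6 kernel AIO is only really asynchronous
     * on unbuffered files. */
    int fd = open("disk.img", O_RDONLY | O_DIRECT);
    if (fd < 0) { perror("open"); return 1; }

    if (posix_memalign(&buf, 512, 4096)) { fprintf(stderr, "memalign failed\n"); return 1; }
    io_prep_pread(&cb, fd, buf, 4096, 0);   /* 4K read at offset 0 */
    if (io_submit(ctx, 1, cbs) != 1) { fprintf(stderr, "io_submit failed\n"); return 1; }
    if (io_getevents(ctx, 1, 1, &ev, NULL) != 1) { fprintf(stderr, "io_getevents failed\n"); return 1; }
    printf("read returned %ld\n", (long)ev.res);

    close(fd);
    io_destroy(ctx);
    return 0;
}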
On Mon, Oct 10, 2005 at 10:38:03AM -0400, Jeff Dike wrote:
> On Mon, Oct 10, 2005 at 10:10:24AM +0100, Chris Lightfoot wrote:
> > OK. The host is indeed using a 4K block filesystem; I
> > couldn't find your O_DIRECT patch, but turning on O_DIRECT
> > with fcntl just after opening the file
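
For reference, the fcntl approach looks like this; a minimal
sketch, assuming a 4K-aligned I/O size and a hypothetical
image file:

/* Sketch: enable O_DIRECT on an already-open descriptor. */
#define _GNU_SOURCE
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

int main(void)
{
    int fd = open("disk.img", O_RDONLY);   /* hypothetical image */
    if (fd < 0) { perror("open"); return 1; }

    /* Turn on O_DIRECT after the open, as described above. */
    int flags = fcntl(fd, F_GETFL);
    if (fcntl(fd, F_SETFL, flags | O_DIRECT) < 0) { perror("fcntl"); return 1; }

    /* O_DIRECT needs aligned buffers, offsets and lengths --
     * 512 bytes at minimum, the filesystem block size to be safe. */
    void *buf;
    if (posix_memalign(&buf, 4096, 4096)) { fprintf(stderr, "memalign failed\n"); return 1; }
    if (read(fd, buf, 4096) < 0) perror("read");

    free(buf);
    close(fd);
    return 0;
}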
On Fri, Nov 25, 2005 at 02:56:49PM +0000, Nix wrote:
> You could certainly do just that with POSIX shm :)
Another option is to mlock the memory, which should
prevent paging, but requires root. I have a patch which
does this using a helper binary, if people would like it.
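
Presumably the core of such an approach is just mlock() on the
mapping that backs guest memory. A minimal sketch (the mapping
and its size here are hypothetical, not taken from the patch):

/* Sketch: lock a mapping so it cannot be paged out.  Requires
 * root (or CAP_IPC_LOCK / a sufficient RLIMIT_MEMLOCK), which
 * is why a privileged helper binary is useful. */
#include <stdio.h>
#include <sys/mman.h>

#define GUEST_MEM_SIZE (128UL << 20)   /* hypothetical: 128M of guest RAM */

int main(void)
{
    void *mem = mmap(NULL, GUEST_MEM_SIZE, PROT_READ | PROT_WRITE,
                     MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (mem == MAP_FAILED) { perror("mmap"); return 1; }

    if (mlock(mem, GUEST_MEM_SIZE) < 0) {   /* fails without privilege */
        perror("mlock");
        return 1;
    }
    /* ... memory is now pinned; hand it to the guest ... */
    munlock(mem, GUEST_MEM_SIZE);
    return 0;
}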
On Fri, Nov 25, 2005 at 02:18:43PM -0600, Rob Landley wrote:
> Using /tmp for anything has been kind of discouraged for a while, because
> throwing any insufficiently randomized filename in there is a security hole
> waiting to happen.
Which case are you worried about here? SFAIK all the
filesys
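
For what it's worth, the standard defence against the
predictable-name problem is mkstemp(), which generates a random
name and opens it with O_CREAT|O_EXCL. A minimal sketch, with a
hypothetical name template:

/* Sketch: mkstemp() defeats the symlink/pre-creation attack
 * because the name is unpredictable and the open is exclusive. */
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

int main(void)
{
    char path[] = "/tmp/uml-XXXXXX";   /* XXXXXX is replaced by mkstemp */
    int fd = mkstemp(path);
    if (fd < 0) { perror("mkstemp"); return 1; }

    unlink(path);   /* common idiom: unlink immediately, keep the fd */
    /* ... use fd as anonymous temporary storage ... */
    close(fd);
    return 0;
}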
On Sat, Nov 26, 2005 at 04:03:54AM -0600, Rob Landley wrote:
> On Friday 25 November 2005 17:46, Chris Lightfoot wrote:
> > On Fri, Nov 25, 2005 at 02:18:43PM -0600, Rob Landley wrote:
> > > Using /tmp for anything has been kind of discouraged for a while, because
> > >
On Tue, Jan 17, 2006 at 01:04:21AM +0100, Blaisorblade wrote:
[...]
> However, going through ptrace for interprocess communication is far from
> optimal - using something like, say, POSIX message queues (it's a wild guess)
> would probably be faster. Something purely in userspace seems difficult.
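
For reference, the POSIX message queue interface being guessed
at looks like this; a minimal sketch (the queue name and sizes
are hypothetical; link with -lrt):

/* Sketch: POSIX message queues as an IPC alternative to ptrace. */
#include <fcntl.h>
#include <mqueue.h>
#include <stdio.h>

int main(void)
{
    struct mq_attr attr = { .mq_maxmsg = 10, .mq_msgsize = 128 };
    mqd_t q = mq_open("/uml-ipc", O_CREAT | O_RDWR, 0600, &attr);
    if (q == (mqd_t)-1) { perror("mq_open"); return 1; }

    const char msg[] = "request";
    if (mq_send(q, msg, sizeof msg, 0) < 0) perror("mq_send");

    char buf[128];   /* must be at least mq_msgsize bytes */
    if (mq_receive(q, buf, sizeof buf, NULL) < 0) perror("mq_receive");
    else printf("got: %s\n", buf);

    mq_close(q);
    mq_unlink("/uml-ipc");
    return 0;
}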
13 matches