Re: disk devices speed is ugly

2012-02-20 Thread Alex Samorukov

On 02/15/2012 05:50 AM, Scott Long wrote:


>> What would be nice is a generic caching subsystem that any FS can use
>> - similar to the old block devices but with hooks to allow the FS to
>> request read-ahead, advise of unwanted blocks and ability to flush
>> dirty blocks in a requested order with the equivalent of barriers
>> (request Y will not occur until preceding request X has been
>> committed to stable media).  This would allow filesystems to regain
>> the benefits of block devices with minimal effort and then improve
>> performance & cache efficiency with additional work.

> Any filesystem that uses bread/bwrite/cluster_read is already using the generic
> caching subsystem that you propose.  This includes UDF, CD9660, MSDOS, NTFS, XFS,
> ReiserFS, EXT2FS, and HPFS, i.e. every local storage filesystem in the tree except for
> ZFS.  Not all of them implement VOP_GETPAGES/VOP_PUTPAGES, but those are just
> optimizations for the vnode pager, not requirements for using buffer-cache services on
> block devices.  As Kostik pointed out in a parallel email, the only thing that was
> removed from FreeBSD was the userland interface to cached devices via /dev nodes.  This
> has nothing to do with filesystems, though I suppose that could maybe sorta kinda be an
> issue for FUSE?
Maybe it's possible to provide some generic interface for FUSE-based
filesystems to use this generic cache? I can test it and report
performance.




Re: disk devices speed is ugly

2012-02-20 Thread Scott Long

On Feb 20, 2012, at 12:24 PM, Alex Samorukov wrote:

> On 02/15/2012 05:50 AM, Scott Long wrote:
> 
>>> What would be nice is a generic caching subsystem that any FS can use
>>> - similar to the old block devices but with hooks to allow the FS to
>>> request read-ahead, advise of unwanted blocks and ability to flush
>>> dirty blocks in a requested order with the equivalent of barriers
>>> (request Y will not occur until preceding request X has been
>>> committed to stable media).  This would allow filesystems to regain
>>> the benefits of block devices with minimal effort and then improve
>>> performance & cache efficiency with additional work.
>> 
>> Any filesystem that uses bread/bwrite/cluster_read is already using the
>> generic caching subsystem that you propose.  This includes UDF, CD9660,
>> MSDOS, NTFS, XFS, ReiserFS, EXT2FS, and HPFS, i.e. every local storage
>> filesystem in the tree except for ZFS.  Not all of them implement
>> VOP_GETPAGES/VOP_PUTPAGES, but those are just optimizations for the vnode
>> pager, not requirements for using buffer-cache services on block devices.
>> As Kostik pointed out in a parallel email, the only thing that was removed
>> from FreeBSD was the userland interface to cached devices via /dev nodes.
>> This has nothing to do with filesystems, though I suppose that could maybe
>> sorta kinda be an issue for FUSE?
> 
> Maybe it's possible to provide some generic interface for FUSE-based
> filesystems to use this generic cache? I can test it and report performance.
 

What you're asking for is to bring back the cached raw devices.  I don't have a
strong opinion on this one way or another, except that it's a pretty specific
use case.  Does the inherent performance gap with userland filesystems warrant
this?  Maybe a simple cache layer could be put into FUSE that would allow client
filesystems the same control over block caching and clustering that is afforded
in the kernel?

Scott
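
[The "simple cache layer" Scott suggests could start as little more than a block-granular wrapper around the raw device's pread(2).  A minimal read-side sketch of the idea follows; all names are hypothetical, there is no locking, cache_init() must be called once before use, and a real layer would also need a write/invalidate path.]

#include <sys/types.h>
#include <string.h>
#include <unistd.h>

#define BLKSIZE	65536	/* cache in 64k chunks, not sectors */
#define NSLOTS	256	/* 16 MB of cache */

struct slot {
	off_t	base;		/* aligned device offset; -1 = empty */
	char	data[BLKSIZE];
};

static struct slot cache[NSLOTS];

static void
cache_init(void)
{
	for (int i = 0; i < NSLOTS; i++)
		cache[i].base = -1;
}

/*
 * Read 'len' bytes at 'off', satisfying what we can from the cache and
 * filling a slot with one aligned device read per miss (direct-mapped,
 * so colliding blocks simply evict each other).
 */
static ssize_t
cached_pread(int fd, void *buf, size_t len, off_t off)
{
	size_t done = 0;

	while (done < len) {
		off_t base = (off + done) / BLKSIZE * BLKSIZE;
		struct slot *s = &cache[(base / BLKSIZE) % NSLOTS];

		if (s->base != base) {	/* miss: aligned read fills the slot */
			if (pread(fd, s->data, BLKSIZE, base) != BLKSIZE)
				return (done > 0 ? (ssize_t)done : -1);
			s->base = base;
		}
		size_t skip = (size_t)(off + done - base);
		size_t n = BLKSIZE - skip;
		if (n > len - done)
			n = len - done;
		memcpy((char *)buf + done, s->data + skip, n);
		done += n;
	}
	return ((ssize_t)done);
}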



Re: disk devices speed is ugly

2012-02-15 Thread Konstantin Belousov
On Wed, Feb 15, 2012 at 12:27:19AM -0600, Adam Vande More wrote:
> On Tue, Feb 14, 2012 at 10:50 PM, Scott Long <sco...@samsco.org> wrote:
 
 
>> Any filesystem that uses bread/bwrite/cluster_read is already using the
>> generic caching subsystem that you propose.  This includes UDF, CD9660,
>> MSDOS, NTFS, XFS, ReiserFS, EXT2FS, and HPFS, i.e. every local storage
>> filesystem in the tree except for ZFS.  Not all of them implement
>> VOP_GETPAGES/VOP_PUTPAGES, but those are just optimizations for the vnode
>> pager, not requirements for using buffer-cache services on block devices.
>> As Kostik pointed out in a parallel email, the only thing that was removed
>> from FreeBSD was the userland interface to cached devices via /dev nodes.
> 
> Does this mean the Architecture Handbook page is wrong?:
> 
> http://www.freebsd.org/doc/en/books/arch-handbook/driverbasics-block.html

No, why did you decide that it is wrong?




Re: disk devices speed is ugly

2012-02-15 Thread Andriy Gapon
on 15/02/2012 06:50 Scott Long said the following:
> The ARC is limited by available wired memory; attempts to allocate such
> memory will evict pages from the buffer cache as necessary, until all
> available RAM is consumed.  If anything, ZFS starves the rest of the system,
> not the other way around, and that's simply because the ARC isn't integrated
> with the normal VM.

I would just like to add, for completeness, that this is an oversimplified view of
the ARC's behavior.  The ARC tries to monitor overall memory usage and to throttle
itself when necessary.  It also reacts to the lowmem signal from the pagedaemon.

-- 
Andriy Gapon


Re: disk devices speed is ugly

2012-02-14 Thread Peter Jeremy
On 2012-Feb-13 08:28:21 -0500, Gary Palmer <gpal...@freebsd.org> wrote:
> The filesystem is the *BEST* place to do caching.  It knows what metadata
> is most effective to cache and what other data (e.g. file contents) doesn't
> need to be cached.

Agreed.

> Any attempt to do this in layers between the FS and
> the disk won't achieve the same gains as a properly written filesystem.

Agreed - but traditionally, Unix uses this approach via block devices.
For various reasons, FreeBSD moved caching into UFS and removed block
devices.  Unfortunately, this means that any FS that wants caching has
to implement its own - and currently only UFS & ZFS do.

What would be nice is a generic caching subsystem that any FS can use
- similar to the old block devices but with hooks to allow the FS to
request read-ahead, advise of unwanted blocks and ability to flush
dirty blocks in a requested order with the equivalent of barriers
(request Y will not occur until preceding request X has been
committed to stable media).  This would allow filesystems to regain
the benefits of block devices with minimal effort and then improve
performance & cache efficiency with additional work.
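
[For concreteness, the proposed interface might look something like the sketch below.  This is purely hypothetical - none of these names exist in FreeBSD.]

/*
 * Hypothetical only -- nothing below exists in the tree.  One possible
 * shape for the generic cache described above.
 */
struct fscache;				/* opaque per-device cache handle */

struct fscache *fscache_attach(struct vnode *devvp, int blksize);
void	fscache_detach(struct fscache *fc);

/*
 * Cached read; 'rahead' asks the cache to start fetching that many
 * following blocks asynchronously.
 */
int	fscache_read(struct fscache *fc, daddr_t blkno, int rahead,
	    struct buf **bpp);

/* Advise that a block is unlikely to be reused, so evict it early. */
void	fscache_dontneed(struct fscache *fc, daddr_t blkno);

/*
 * Queue a dirty block for write-back.  Writes queued after
 * fscache_barrier() reach stable media only after all writes queued
 * before it -- the "request Y after request X" ordering above.
 */
int	fscache_write(struct fscache *fc, struct buf *bp);
void	fscache_barrier(struct fscache *fc);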

One downside of the "each FS does its own caching" approach is that the caches
are all separate and need careful integration into the VM subsystem to
prevent starvation (e.g. past problems with UFS starving ZFS's L2ARC).

-- 
Peter Jeremy




Re: disk devices speed is ugly

2012-02-14 Thread Konstantin Belousov
On Wed, Feb 15, 2012 at 07:02:58AM +1100, Peter Jeremy wrote:
> On 2012-Feb-13 08:28:21 -0500, Gary Palmer <gpal...@freebsd.org> wrote:
>> The filesystem is the *BEST* place to do caching.  It knows what metadata
>> is most effective to cache and what other data (e.g. file contents) doesn't
>> need to be cached.
> 
> Agreed.
> 
>> Any attempt to do this in layers between the FS and
>> the disk won't achieve the same gains as a properly written filesystem.
> 
> Agreed - but traditionally, Unix uses this approach via block devices.
> For various reasons, FreeBSD moved caching into UFS and removed block
> devices.  Unfortunately, this means that any FS that wants caching has
> to implement its own - and currently only UFS & ZFS do.
Block caching is still there; only the user-accessible interface was removed.
UFS utilizes the buffer cache for the device which carries the volume,
for metadata caching.  There are some memory areas in UFS which can be
classified as caches in their own right, but their existence is mostly to support
operation, not caching (e.g. the inode-block copy accompanying each
inode).
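
[As an illustration of that path: bread() and brelse() are the real buffer-cache KPIs, while the wrapper function and its parameters below are made up for the example, in the style of how UFS pulls in superblock/cylinder-group data through the device vnode.]

#include <sys/param.h>
#include <sys/systm.h>
#include <sys/buf.h>
#include <sys/vnode.h>

/*
 * Sketch: read one metadata block via the buffer cache of the device
 * vnode (devvp).  bread() hands back a cached buffer when one exists
 * and goes to disk otherwise.
 */
static int
read_meta(struct vnode *devvp, daddr_t blkno, int size, void *out)
{
	struct buf *bp;
	int error;

	error = bread(devvp, blkno, size, NOCRED, &bp);
	if (error != 0) {
		brelse(bp);
		return (error);
	}
	bcopy(bp->b_data, out, size);
	brelse(bp);		/* release, but the block stays cached */
	return (0);
}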

 
> What would be nice is a generic caching subsystem that any FS can use
> - similar to the old block devices but with hooks to allow the FS to
> request read-ahead, advise of unwanted blocks and ability to flush
> dirty blocks in a requested order with the equivalent of barriers
> (request Y will not occur until preceding request X has been
> committed to stable media).  This would allow filesystems to regain
> the benefits of block devices with minimal effort and then improve
> performance & cache efficiency with additional work.
> 
> One downside of the "each FS does its own caching" approach is that the caches
> are all separate and need careful integration into the VM subsystem to
> prevent starvation (eg past problems with UFS starving ZFS L2ARC).
Other filesystems which use vfs_bio, like cd9660, use the same
disk cache layer as UFS.




Re: disk devices speed is ugly

2012-02-14 Thread Scott Long

On Feb 14, 2012, at 1:02 PM, Peter Jeremy wrote:

> On 2012-Feb-13 08:28:21 -0500, Gary Palmer <gpal...@freebsd.org> wrote:
>> The filesystem is the *BEST* place to do caching.  It knows what metadata
>> is most effective to cache and what other data (e.g. file contents) doesn't
>> need to be cached.
> 
> Agreed.
> 
>> Any attempt to do this in layers between the FS and
>> the disk won't achieve the same gains as a properly written filesystem.
> 
> Agreed - but traditionally, Unix uses this approach via block devices.
> For various reasons, FreeBSD moved caching into UFS and removed block
> devices.  Unfortunately, this means that any FS that wants caching has
> to implement its own - and currently only UFS & ZFS do.
> 
> What would be nice is a generic caching subsystem that any FS can use
> - similar to the old block devices but with hooks to allow the FS to
> request read-ahead, advise of unwanted blocks and ability to flush
> dirty blocks in a requested order with the equivalent of barriers
> (request Y will not occur until preceding request X has been
> committed to stable media).  This would allow filesystems to regain
> the benefits of block devices with minimal effort and then improve
> performance & cache efficiency with additional work.
 

Any filesystem that uses bread/bwrite/cluster_read is already using the
generic caching subsystem that you propose.  This includes UDF, CD9660,
MSDOS, NTFS, XFS, ReiserFS, EXT2FS, and HPFS, i.e. every local storage
filesystem in the tree except for ZFS.  Not all of them implement
VOP_GETPAGES/VOP_PUTPAGES, but those are just optimizations for the vnode
pager, not requirements for using buffer-cache services on block devices.  As
Kostik pointed out in a parallel email, the only thing that was removed from
FreeBSD was the userland interface to cached devices via /dev nodes.  This has
nothing to do with filesystems, though I suppose that could maybe sorta kinda
be an issue for FUSE?
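
[For reference, the read path of such a filesystem typically chooses between bread() and cluster_read(), roughly as below.  This is a sketch modeled on ffs_read()/ext2_read(); the cluster_read() signature shown is my reading of the 9.x-era KPI, and the helper itself is illustrative.]

#include <sys/param.h>
#include <sys/systm.h>
#include <sys/buf.h>
#include <sys/vnode.h>

/*
 * Illustrative helper: cluster_read() extends the request with
 * read-ahead when the vnode's access pattern looks sequential;
 * 'seqcount' comes from the sequential-access heuristic in the caller.
 */
static int
fs_read_block(struct vnode *vp, u_quad_t filesize, daddr_t lbn, int size,
    long resid, int seqcount, struct buf **bpp)
{
	if (seqcount > 1)	/* looks sequential: cluster + read-ahead */
		return (cluster_read(vp, filesize, lbn, size, NOCRED,
		    resid, seqcount, bpp));
	return (bread(vp, lbn, size, NOCRED, bpp));	/* single block */
}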

ZFS isn't in this list because it implements its own private buffer/cache (the 
ARC) that understands the special requirements of ZFS.  There are good and bad 
aspects to this, noted below.

> One downside of the "each FS does its own caching" approach is that the caches
> are all separate and need careful integration into the VM subsystem to
> prevent starvation (eg past problems with UFS starving ZFS L2ARC).
 

I'm not sure what you mean here.  The ARC is limited by available wired memory; 
attempts to allocate such memory will evict pages from the buffer cache as 
necessary, until all available RAM is consumed.  If anything, ZFS starves the 
rest of the system, not the other way around, and that's simply because the ARC 
isn't integrated with the normal VM.  Such integration is extremely hard and 
has nothing to do with having a generic caching subsystem.

Scott



Re: disk devices speed is ugly

2012-02-14 Thread Adam Vande More
On Tue, Feb 14, 2012 at 10:50 PM, Scott Long <sco...@samsco.org> wrote:


> Any filesystem that uses bread/bwrite/cluster_read is already using the
> generic caching subsystem that you propose.  This includes UDF, CD9660,
> MSDOS, NTFS, XFS, ReiserFS, EXT2FS, and HPFS, i.e. every local storage
> filesystem in the tree except for ZFS.  Not all of them implement
> VOP_GETPAGES/VOP_PUTPAGES, but those are just optimizations for the vnode
> pager, not requirements for using buffer-cache services on block devices.
> As Kostik pointed out in a parallel email, the only thing that was removed
> from FreeBSD was the userland interface to cached devices via /dev nodes.


Does this mean the Architecture Handbook page is wrong?:

http://www.freebsd.org/doc/en/books/arch-handbook/driverbasics-block.html

-- 
Adam Vande More


Re: disk devices speed is ugly

2012-02-13 Thread Gary Palmer
On Mon, Feb 13, 2012 at 07:36:25AM +0100, Alex Samorukov wrote:
> On 02/13/2012 06:27 AM, Adrian Chadd wrote:
>> On 12 February 2012 09:34, Alex Samorukov <m...@os2.kiev.ua> wrote:
>>> Yes. But it will not fix non-cached access to the disk (raw) devices. And
>>> this is the main reason why ntfs-3g and exfat are much slower than when working
>>> on Linux.
>> But _that_ can be fixed with the appropriate application of a sensible
>> caching layer.
> With every application?  :) Do you know anyone who wants to do this? At
> least for the 3 FUSE filesystems.

The filesystem is the *BEST* place to do caching.  It knows what metadata
is most effective to cache and what other data (e.g. file contents) doesn't
need to be cached.  Any attempt to do this in layers between the FS and
the disk won't achieve the same gains as a properly written filesystem.
E.g. in a UFS implementation the disk layer may see a lot of I/Os for
blocks, not necessarily sequential, as a program lists a directory and stats
all the files, which pulls in the inode tables.  The filesystem knows that it
needs the inode tables and is likely to need not only the current inode-table
disk block but subsequent ones also, so instead of requesting only the disk sector
that it needs to service the immediate stat(2) request it can request the next few
as well.  Without that insight into what's going on it is difficult to see how a
highly effective cache could be done at the geom layer.

Gary
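
[An illustration of the FS-driven read-ahead Gary describes, using the real breadn() KPI: the filesystem names the blocks it expects to need next, which no layer below it could infer.  The helper function and its block arithmetic are made up for the example.]

#include <sys/param.h>
#include <sys/systm.h>
#include <sys/buf.h>
#include <sys/vnode.h>

/*
 * Sketch: fetch the inode-table block needed now and ask for the next
 * two asynchronously.  Only the filesystem knows these block numbers
 * are related; a cache below it just sees scattered I/O.
 */
static int
read_itable(struct vnode *devvp, daddr_t itblk, int blksize, struct buf **bpp)
{
	daddr_t rablk[2];
	int rasize[2];

	rablk[0] = itblk + 1;		/* blocks we expect to want next */
	rablk[1] = itblk + 2;
	rasize[0] = rasize[1] = blksize;

	/* Returns the current block; read-ahead completes in background. */
	return (breadn(devvp, itblk, blksize, rablk, rasize, 2, NOCRED, bpp));
}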


Re: disk devices speed is ugly

2012-02-13 Thread Alex Samorukov

On 02/13/2012 02:28 PM, Gary Palmer wrote:



>>>> Yes. But it will not fix non-cached access to the disk (raw) devices. And
>>>> this is the main reason why ntfs-3g and exfat are much slower than when working
>>>> on Linux.
>>> 
>>> But _that_ can be fixed with the appropriate application of a sensible
>>> caching layer.
>> 
>> With every application?  :) Do you know anyone who wants to do this? At
>> least for the 3 FUSE filesystems.

> The filesystem is the *BEST* place to do caching.  It knows what metadata
> is most effective to cache and what other data (e.g. file contents) doesn't
> need to be cached.  Any attempt to do this in layers between the FS and
> the disk won't achieve the same gains as a properly written filesystem.
> E.g. in a UFS implementation the disk layer may see a lot of I/Os for
> blocks, not necessarily sequential, as a program lists a directory and stats
> all the files, which pulls in the inode tables.  The filesystem knows that it
> needs the inode tables and is likely to need not only the current inode-table
> disk block but subsequent ones also, so instead of requesting only the disk sector
> that it needs to service the immediate stat(2) request it can request the next few
> as well.  Without that insight into what's going on it is difficult to see how a
> highly effective cache could be done at the geom layer.

I think we are playing Captain Obvious here.

I have nothing against the statement that the FS is the best place for caching.
Also, I am absolutely sure that it's better to have a kernel-space FS
driver than a FUSE one.


But unfortunately there is no kernel-space driver for exFAT, the kernel
driver for NTFS is ugly and buggy (and read-only), and I don't think that
anyone is going to change this.


And I really don't understand why you are trying to argue that it cannot
be effective when it's so easy to prove that it can.  Just try this with
FUSE-based filesystems on Linux, and you will get speed comparable to the
underlying device (especially on relatively slow USB devices).  Then try
the same code on FreeBSD to see how ugly things are.


And yes, in an ideal world every FS would have a well-written cache
implementation and the kernel would not need to care about caching raw devices at
all.  But as I mentioned before, there are no kernel-space drivers with a
good cache implementation for these 2 widely used filesystems (and probably
not only these).  Linux is a good example that device-level caching works, and
works fine.




Re: disk devices speed is ugly

2012-02-13 Thread Gary Palmer
On Mon, Feb 13, 2012 at 03:50:36PM +0100, Alex Samorukov wrote:
> On 02/13/2012 02:28 PM, Gary Palmer wrote:
> 
>>>>> Yes. But it will not fix non-cached access to the disk (raw) devices. And
>>>>> this is the main reason why ntfs-3g and exfat are much slower than when
>>>>> working on Linux.
>>>> But _that_ can be fixed with the appropriate application of a sensible
>>>> caching layer.
>>> With every application?  :) Do you know anyone who wants to do this? At
>>> least for the 3 FUSE filesystems.
>> The filesystem is the *BEST* place to do caching.  It knows what metadata
>> is most effective to cache and what other data (e.g. file contents) doesn't
>> need to be cached.  Any attempt to do this in layers between the FS and
>> the disk won't achieve the same gains as a properly written filesystem.
>> E.g. in a UFS implementation the disk layer may see a lot of I/Os for
>> blocks, not necessarily sequential, as a program lists a directory and
>> stats all the files, which pulls in the inode tables.  The filesystem knows
>> that it needs the inode tables and is likely to need not only the current
>> inode-table disk block but subsequent ones also, so instead of requesting
>> only the disk sector that it needs to service the immediate stat(2) request
>> it can request the next few as well.  Without that insight into what's
>> going on it is difficult to see how a highly effective cache could be done
>> at the geom layer.
> I think we are playing Captain Obvious here.
> 
> I have nothing against the statement that the FS is the best place for
> caching.  Also, I am absolutely sure that it's better to have a kernel-space
> FS driver than a FUSE one.
> 
> But unfortunately there is no kernel-space driver for exFAT, the kernel
> driver for NTFS is ugly and buggy (and read-only), and I don't think that
> anyone is going to change this.
> 
> And I really don't understand why you are trying to argue that it cannot
> be effective when it's so easy to prove that it can.  Just try this with
> FUSE-based filesystems on Linux, and you will get speed comparable to the
> underlying device (especially on relatively slow USB devices).  Then try
> the same code on FreeBSD to see how ugly things are.
> 
> And yes, in an ideal world every FS would have a well-written cache
> implementation and the kernel would not need to care about caching raw
> devices at all.  But as I mentioned before, there are no kernel-space
> drivers with a good cache implementation for these 2 widely used filesystems
> (and probably not only these).  Linux is a good example that device-level
> caching works, and works fine.

Please re-read my message.  At no time did I say that caching below the
FS could not provide speed improvements.  I said it could not be as
effective as a properly implemented filesystem.  I'm sure that if you throw
memory at it, a geom-layer cache can provide substantial speed-ups.  However,
I strongly suspect that a proper FS cache would provide a better memory/hit
ratio than a geom-layer cache.

Gary


Re: disk devices speed is ugly

2012-02-13 Thread Adrian Chadd
I tend to say the right solution to a problem is to not do it wrong.

But... given that Linux is fine with all the unaligned accesses, is the
major sticking point here the fact that Linux's block dev layer is
doing all the caching that FreeBSD's direct device layer isn't, and
all of those (cached) accesses are what is improving performance?

So perhaps it'd be worthwhile investing some time in a geom caching
layer to see if that's at all feasible.

I had the same problem with userland cyclic filesystems on FreeBSD
versus Linux - the Linux FS performed better in synthetic tests
because it did caching of the blockdev data. FreeBSD was doing direct
IO. Tuning the direct IO sizes and fixing the filesystem code to do
everything correctly aligned eliminated a lot of the ridiculous
issues. Making Squid cache reads from disk would've improved it too.
:-)

Finally - I've seen this same issue under linux, especially when you
stick a filesystem on a RAID device with the stripe/alignment all
wrong. It's not just a BSD problem. :)



Adrian


Re: disk devices speed is ugly

2012-02-12 Thread Alex Samorukov

On 02/12/2012 01:54 AM, Adrian Chadd wrote:

> Hi,
> 
> What about the disk access is unaligned? Do you mean not sector aligned? or?

Hi. Sector aligned.

> This is a common problem people face doing disk IO analysis.
> 
> The whole point about not allowing unaligned access is to make the
> disk IO path cleaner. It does mean that the filesystem code (and GEOM
> modules involved) need to be smarter.
> 
> If the filesystem is doing ridiculously unaligned access then it
> likely should be fixed.
Yes. But it will not fix non-cached access to the disk (raw) devices.
And this is the main reason why ntfs-3g and exfat are much slower than when
working on Linux.



Re: disk devices speed is ugly

2012-02-12 Thread Adrian Chadd
On 12 February 2012 09:34, Alex Samorukov <m...@os2.kiev.ua> wrote:

> Yes. But it will not fix non-cached access to the disk (raw) devices. And
> this is the main reason why ntfs-3g and exfat are much slower than when working
> on Linux.

But _that_ can be fixed with the appropriate application of a sensible
caching layer.

So if there are alignment issues, let's fix those up first so
filesystems act sensibly with the block device layer. Then, yes, add
a caching layer that works. I didn't get very good performance with
g_cache when I last tried it.



Adrian


Re: disk devices speed is ugly

2012-02-12 Thread Alex Samorukov

On 02/13/2012 06:27 AM, Adrian Chadd wrote:

> On 12 February 2012 09:34, Alex Samorukov <m...@os2.kiev.ua> wrote:
> 
>> Yes. But it will not fix non-cached access to the disk (raw) devices. And
>> this is the main reason why ntfs-3g and exfat are much slower than when working
>> on Linux.
> 
> But _that_ can be fixed with the appropriate application of a sensible
> caching layer.

With every application?  :) Do you know anyone who wants to do this? At
least for the 3 FUSE filesystems.


Also, caching in user-land is much slower and more dangerous.

There is a libublio library, which was written to provide userland caching
(it implements pread/pwrite replacements), and it is in use by these 2 ports.
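
[For readers unfamiliar with libublio, usage is roughly as below.  The entry points and parameter fields follow my reading of ublio.h from the devel/libublio port and should be double-checked against it; error handling is omitted.]

#include <fcntl.h>
#include <ublio.h>	/* from the devel/libublio port */

int
main(void)
{
	struct ublio_param up;
	ublio_filehandle_t ufh;
	char buf[512];
	int fd = open("/dev/da0s1", O_RDWR);

	up.up_blocksize = 65536;	/* cache block size */
	up.up_items = 64;		/* number of cached blocks (4 MB) */
	up.up_grace = 32;		/* eviction grace, per ublio.h */
	up.up_sync_io = 0;		/* 0 = allow write-back caching */
	up.up_priv = &fd;		/* default backend: pread/pwrite on *fd */

	ufh = ublio_open(&up);
	ublio_pread(ufh, buf, sizeof(buf), 3);	/* unaligned offset is fine */
	ublio_fsync(ufh);	/* flush dirty cached blocks to the device */
	ublio_close(ufh);
	return (0);
}

[With write-back enabled, small unaligned writes get gathered into aligned blocks - exactly what the raw-device path lacks.]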




> So if there are alignment issues, let's fix those up first so
> filesystems act sensibly with the block device layer. Then, yes, add
> a caching layer that works. I didn't get very good performance with
> g_cache when I last tried it.
Because it's very primitive. Once again, try to compare the performance of
exfat or ntfs-3g on Linux and FreeBSD. Raw device speed (I used USB)
is pretty much the same, but the resulting speed is very different, as are the
I/O characteristics.




Re: disk devices speed is ugly

2012-02-11 Thread Adrian Chadd
Hi,

What about the disk access is unaligned? Do you mean not sector aligned? or?

This is a common problem people face doing disk IO analysis.

The whole point about not allowing unaligned access is to make the
disk IO path cleaner. It does mean that the filesystem code (and GEOM
modules involved) need to be smarter.

If the filesystem is doing ridiculously unaligned access then it
likely should be fixed.


Adrian


Re: disk devices speed is ugly

2012-01-31 Thread Harald Schmalzbauer
Alex Samorukov wrote on 26.01.2012 14:52 (localtime):
> Hi,
> 
> I ported the exfat fuse module to FreeBSD (PR 164473) and found that it
> works much slower than on Linux. I found 2 reasons for this:


Thanks a lot! I saw the new port :-)
I hope that someone can help you improve fusefs-kmod. I remember other
porters blaming FreeBSD's fusefs support for making their work
hard/impossible (TrueCrypt). Hopefully some kernel hacker will read this and
help...

Best regards,

-Harry

> 1) The FreeBSD kernel does not allow unaligned access to a device
> with standard read/write commands. mmap on the entire disk
> (/dev/da0s1) doesn't work either (EINVAL).
> 
> While it's not a big deal for read requests, for writes it becomes a real
> issue: to write non-aligned data I need to read the beginning and end of
> the block. So in fact for one write request I can get 2 reads.
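
[The read-modify-write dance described in the quoted paragraph looks roughly like this in userland code.  A sketch only: error handling is omitted and the function name is hypothetical.]

#include <string.h>
#include <unistd.h>

#define SECSIZE	512

/*
 * To store 'len' bytes at an unaligned 'off' on a device that only
 * accepts sector-aligned I/O, the partially-covered head and tail
 * sectors must be read first -- hence up to 2 reads per write.
 */
static ssize_t
unaligned_pwrite(int fd, const void *buf, size_t len, off_t off)
{
	off_t first = off / SECSIZE * SECSIZE;			/* round down */
	off_t last = (off + len + SECSIZE - 1) / SECSIZE * SECSIZE; /* round up */
	size_t span = last - first;
	char tmp[span];			/* VLA: fine for a sketch */

	if (off != first)		/* read #1: partial head sector */
		pread(fd, tmp, SECSIZE, first);
	if ((off + (off_t)len) % SECSIZE != 0)	/* read #2: partial tail */
		pread(fd, tmp + span - SECSIZE, SECSIZE, last - SECSIZE);
	memcpy(tmp + (off - first), buf, len);	/* splice in the new data */
	return (pwrite(fd, tmp, span, first));	/* one aligned write */
}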

> 2) It seems that there is only very simple read caching on such devices,
> without write caching at all. It makes write performance enormously
> slow. I found the geom_cache module, but it provides only read optimization.


> I decided to compare speed on Linux and FreeBSD and below are my
> results. I used an old USB flash drive to do the tests.
> 
> Read speed of a 100 Mb file:
> 
> Linux 3.0.0:  22.7 Mb/sec
> FreeBSD: 10.22 Mb/sec
> FreeBSD + gcache: 18.75 Mb/sec (!)
> 
> Write speed of a 100 Mb file:
> Linux: 90 Mb/sec (cached, much higher than the device speed)
> FreeBSD: 0.52 Mb/sec (!)
> FreeBSD + gcache: 0.52 Mb/sec

> As you can see, write performance is enormously slow. Maybe we need
> to create some geom provider for such caching, or am I missing
> something? I think that other fuse modules like ntfs-3g and fuse-ext4
> have the same issue. Also I found that fuse4bsd itself is not stable and
> may crash the system without any visible reason.





Re: disk devices speed is ugly

2012-01-31 Thread Alex Samorukov

On 01/31/2012 11:19 AM, Harald Schmalzbauer wrote:

> Alex Samorukov wrote on 26.01.2012 14:52 (localtime):
>> Hi,
>> 
>> I ported the exfat fuse module to FreeBSD (PR 164473) and found that it
>> works much slower than on Linux. I found 2 reasons for this:
> 
> Thanks a lot! I saw the new port :-)
> I hope that someone can help you improve fusefs-kmod. I remember other
> porters blaming FreeBSD's fusefs support for making their work
> hard/impossible (TrueCrypt). Hopefully some kernel hacker will read this and
> help...
Thank you for the comment. It is now mostly not about FUSE itself, but about
non-buffered raw device access. I really think that something like an
improved geom_cache should solve this.


I'll soon add an updated version of the patch with [optional] libublio
support. This improves performance a lot. Read speed is comparable with
Linux (about 20 Mb/sec on my old USB stick) and write is much faster as well
(but not as good as on Linux, and with a lot of read requests for alignment).


Also, I contacted upstream about unaligned writes and he told me that it is
on his todo list, but probably after the 1.0.0 version, because it will
require a lot of changes in the code. Also, I found a libexfat bug in
FAT time handling and am creating a patch to use FreeBSD code for this
instead. So if you are using exfat, any testing and comments are welcome.
