On 2015-07-23 15:12, james harvey wrote:
Up to date Arch.  Linux kernel 4.1.2-2.  Fresh O/S install 12 days
ago.  Nowhere near full - 34G used on a 4.6T drive.  32GB memory.

Installed bonnie++ 1.97-1.

$ bonnie++ -d bonnie -m btrfs-disk -f -b

I started out trying to run with a "-s 4G" option, to use 4GB files
for the performance measuring.  It refused to run, saying "file size
should be double RAM for good results".  I sighed, removed the option,
and let it run with the default of *64GB files* (double my 32GB RAM).
So, yeah, big files.  But I do work with Photoshop .PSB files that
get that large.
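
(If I'm reading the bonnie++ man page right, the RAM check can be
worked around by under-reporting memory with -r, e.g.:

$ bonnie++ -d bonnie -m btrfs-disk -f -b -r 2048 -s 4G

but I left the defaults alone so the numbers would be meaningful.)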

During the first two phases ("Writing intelligently..." and
"Rewriting...") the filesystem seems to be completely locked out for
anything other than bonnie++.  KDE stops being able to switch focus
or change tasks.  I can switch to ttys and log in, and do things like
"ls", but attempting to write to the filesystem hangs.  I can switch
back to KDE, but the screen is black with a cursor until bonnie++
completes.  top didn't show excessive CPU usage.

My dmesg is at http://www.pastebin.ca/3072384 (attaching it seemed
to make the message not go out to the list).

Yes, my kernel is tainted... See "[5.310093] nvidia: module license
'NVIDIA' taints kernel."  Sigh, it's just that the nvidia module
license isn't GPL...

The later bonnie++ writing phases ("start 'em", "Create files in
sequential order...", "Create files in random order") show no
detrimental effect on the system.

I see some 1.5+ year old references to messages like "INFO: task
btrfs... blocked for more than 120 seconds."  With the amount of
development since then, I figured I could pretty much ignore those
and bring up the issue again.
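
(As I understand it, the 120 seconds there is just the hung task
watchdog default, visible via:

$ sysctl kernel.hung_task_timeout_secs
kernel.hung_task_timeout_secs = 120

Raising it would only hide the warnings, not the stalls.)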

I think the "Writing intelligently" phase is sequential, and the old
references I saw were regarding many sporadic re-writes in the
middle.

What I did see from years ago seemed to be that you'd have to disable
COW where you knew there would be large files.  I'm really hoping
there's a way to avoid this type of locking, because I don't think I'd
be comfortable knowing a non-root user could bomb the system with a
large file in the wrong area.

IF I do HAVE to disable COW, I know I can do it selectively.  But if
I did it everywhere... which in that situation I would, because I
can't afford to run into many-minute-long lockups on a mistake... I
lose compression, right?  Do I lose snapshots?  (I assume so, but
hope I'm wrong.)  What else do I lose?  Is there any advantage to
running btrfs without COW anywhere over other filesystems?
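
For the record, the selective method I found is setting the C
attribute, which from what I've read only takes effect on files
created after it's set, so you put it on a directory first (paths
here are made up):

$ mkdir /data/psb
$ chattr +C /data/psb        # new files in here inherit NOCOW
$ lsattr -d /data/psb

But my question above is about the filesystem-wide case.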

How would one even know where the dividing line is between a file
small enough to allow on btrfs and one too large?

First off, you're running on a traditional hard disk, aren't you? That's almost certainly why the first few parts of bonnie++ effectively hung the system.  WRT that issue, there's not much advice I can give other than to get more and faster RAM, or to get an SSD to use for your system disk (and use the huge hard drive for data files only).
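
One thing that might take the edge off the stalls is shrinking the
writeback cache, so the kernel starts flushing dirty pages long
before tens of gigabytes pile up.  These are generic VM knobs, not
BTRFS specific, and the numbers below are just a starting point to
tune:

# sysctl vm.dirty_background_bytes=67108864   # start background flush at 64MB dirty
# sysctl vm.dirty_bytes=268435456             # throttle writers hard at 256MB dirty

Put them in /etc/sysctl.d/ if they turn out to help.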

As far as NOCOW goes, you can still do snapshots, although you lose compression, data integrity (without COW, BTRFS's built-in RAID is actually _worse_ than other software RAID, because it can't use checksums on the filesystem blocks), and data de-duplication.  Overall, there are still advantages to using BTRFS even with NOCOW (much easier data migration when upgrading storage, for example; btrfs-replace is a wonderful thing :), but most of the biggest advantages are lost.
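
For reference, doing it filesystem-wide is just a mount option, with
the caveat that nodatacow also implies no checksums and no
compression for files written under it (device and mountpoint are
placeholders):

# mount -o nodatacow /dev/sdX1 /mnt/data

Per-directory chattr +C, as you mentioned, is usually the saner
middle ground, since you keep COW (and everything that depends on it)
everywhere else.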

Also, if you can deal with not having CUDA support, you should probably try using the nouveau driver instead of NVIDIA's proprietary one; OpenGL (and almost every other rendering API as well) is horribly slow on the official NVIDIA driver.

