Re: Anyone tried out btrbk yet?

2015-07-15 Thread Sander
Marc MERLIN wrote (ao):
 On Wed, Jul 15, 2015 at 10:03:16AM +1000, Paul Harvey wrote:
  The way it works in snazzer (and btrbk and I think also btrfs-sxbackup
  as well), local snapshots continue to happen as normal (Eg. daily or
  hourly) and so when your backup media or backup server is finally
  available again, the size of each individual incremental is still the
  same as usual, it just has to perform more of them.
  
 Good point. My system is not as smart. Every night, it'll make a new
 backup and only send one incremental and hope it gets there. It doesn't
 make a bunch of incrementals and send multiple.
 
 The other options do a better job here.

FWIW, I've written a bunch of scripts for making backups. The lot has
grown over the past years into what it is now. Not very pretty to look at,
but reliable.

The subvolumes backupadmin, home, root, rootvolume and var are snapshotted
every hour.

Each subvolume has its own entry in crontab for the actual backup. For
example, rootvolume is backed up once a day, home and backupadmin every hour.

The scripts use tar to make a full backup the first time a subvolume is
backed up in a given month, an incremental daily backup, and an incremental
hourly backup where applicable.

For a full backup the oldest available snapshot of that month is used,
regardless of when the backup is started. This way the backups of the
individual subvolumes can be spread out so they don't overload the system.

Backups run in the idle I/O queue so they don't hinder other processes, are
compressed with lbzip2 to utilize all cores, and are encrypted with gpg for
obvious reasons. In my tests lbzip2 gives the best size/speed ratio compared
to lzop, xz, bzip2, gzip, pxz and lz4(hc).
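
Roughly, each backup run boils down to a pipeline of this shape (a
simplified sketch, not the actual script; the snapshot path, output name
and gpg recipient are made up):

# idle-priority full backup of one subvolume, taken from a snapshot
nice -n 19 ionice -c 3 \
  tar --create --listed-incremental=/backupadmin/home.snar --file=- \
      -C /.root/.snapshot_20150701_00.00_panda_home . \
  | lbzip2 \
  | gpg --encrypt --recipient backup@example.org \
  > /backupadmin/archives/home_20150701_full.tar.bz2.gpg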

The script writes a listing of the files and directories contained in each
backup to the backupadmin subvolume. This listing is compressed with lz4hc,
as lz4 is the fastest to decompress (useful for determining which archive
contains what you want restored).

Archives get transferred to a remote server by ftp, as ftp is the leanest
way of transferring files and supports resume. The initial connection is
encrypted to hide the username/password, but as the archive is already
encrypted, the data channel is not. The ftp transfer is throttled to use
only part of the available bandwidth.
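
With curl, for example, that combination looks roughly like this (host,
credentials and rate are placeholders; --ftp-ssl-control encrypts only the
login, --limit-rate does the throttling):

curl --ftp-ssl-control --limit-rate 500k \
  -T /backupadmin/archives/home_20150701_full.tar.bz2.gpg \
  ftp://backupuser:secret@backup.example.org/archives/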

A daily script checks for archives which have not been transferred yet
(because the remote server was unavailable, the connection failed, or the
like) and retransmits those archives.

Snapshots and archives are pruned based on disk usage (yet another
script).

Restores can be done by hand from snapshots (obviously), or by a script
from the local archive if still available, or else from the remote archive.

The restore script can search a specific date-time range, and checks both
local and remote storage for the availability of an archive that contains
what is wanted.

A bare metal restore can be done by fetching the archives from the remote
host and piping them directly into gpg/tar. No need for additional local
storage, and no delay. First the monthly full backup is restored, then every
daily incremental since, and then every hourly incremental since the
youngest daily, if applicable. tar's incremental restore is smart and
removes the files and directories that were removed between backups.
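
In pipeline form such a restore is roughly the following (again a sketch
with made-up names; extracting with --listed-incremental=/dev/null is what
makes GNU tar honour the deletions recorded in the incrementals):

for a in home_20150701_full home_20150702_daily home_20150703_daily; do
  curl --ftp-ssl-control \
    "ftp://backupuser:secret@backup.example.org/archives/${a}.tar.bz2.gpg" \
    | gpg --decrypt \
    | lbzip2 -d \
    | tar --extract --listed-incremental=/dev/null --file=- -C /mnt/restore
done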

Sander


Re: btrfs subvolume clone or fork (btrfs-progs feature request)

2015-07-09 Thread Sander
Austin S Hemmelgarn wrote (ao):
 On 2015-07-09 08:41, Sander wrote:
 Austin S Hemmelgarn wrote (ao):
 What's wrong with btrfs subvolume snapshot?
 
 Well, personally I would say the fact that once something is tagged as
 a snapshot, you can't change it to a regular subvolume without doing a
 non-incremental send/receive.
 
 A snapshot is a subvolume. There is no such thing as tagged as a
 snapshot.
 
  Sander
 
 No, there is a bit in the subvolume metadata that says whether it's
 considered a snapshot or not.  Internally, they are handled identically, but
 it does come into play when you consider things like btrfs subvolume list -s
 (which only lists snapshots), which in turn means that certain tasks are
 more difficult to script robustly.

I stand corrected. Thanks for the info.
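
For reference, the snapshot flag is easy to query when scripting (the mount
point is a placeholder):

btrfs subvolume list /mnt      # every subvolume, snapshots included
btrfs subvolume list -s /mnt   # only the subvolumes flagged as snapshots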

Sander


Re: btrfs subvolume clone or fork (btrfs-progs feature request)

2015-07-09 Thread Sander
Austin S Hemmelgarn wrote (ao):
 What's wrong with btrfs subvolume snapshot?

 Well, personally I would say the fact that once something is tagged as
 a snapshot, you can't change it to a regular subvolume without doing a
 non-incremental send/receive.

A snapshot is a subvolume. There is no such thing as tagged as a
snapshot.

Sander


Re: BTRFS: read error corrected: ino 1 off 226840576 (dev /dev/mapper/dshelf1 sector 459432)

2015-06-17 Thread Sander
Hugo Mills wrote (ao):
 On Wed, Jun 17, 2015 at 12:16:54AM -0700, Marc MERLIN wrote:
  I had a few power offs due to a faulty power supply, and my mdadm raid5
  got into fail mode after 2 drives got kicked out since their sequence
  numbers didn't match due to the abrupt power offs.

  gargamel:~# btrfs fi df /mnt/btrfs_pool1
  Data, single: total=8.29TiB, used=8.28TiB
  System, DUP: total=8.00MiB, used=920.00KiB
  System, single: total=4.00MiB, used=0.00B
  Metadata, DUP: total=14.00GiB, used=10.58GiB
  Metadata, single: total=8.00MiB, used=0.00B
  GlobalReserve, single: total=512.00MiB, used=0.00B

  I'll do a scrub later, for now I have to wait 20 hours for the raid
  rebuild first.
 
 You'll probably find that the rebuild is equivalent to a scrub anyway.

He has an mdadm RAID, which is rebuilding. That is obviously not equivalent
to a btrfs scrub.
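
Once the md rebuild has finished, a scrub will verify the btrfs checksums
on top of it, e.g.:

btrfs scrub start /mnt/btrfs_pool1    # runs in the background
btrfs scrub status /mnt/btrfs_pool1   # progress and error counters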

Sander


Re: possible raid6 corruption

2015-06-02 Thread Sander
Christoph Anton Mitterer wrote (ao):
 May 19 03:25:50 lcg-lrz-dc10 kernel: [903106.581205] sd 0:0:14:0: Device 
 offlined - not ready after error recovery

 May 28 16:38:43 lcg-lrz-dc10 kernel: [1727488.984810] sd 0:0:14:0: rejecting 
 I/O to offline device

 May 28 16:39:19 lcg-lrz-dc10 kernel: [1727524.067182] BTRFS: lost page write 
 due to I/O error on /dev/sdm
 May 28 16:39:19 lcg-lrz-dc10 kernel: [1727524.067426] BTRFS: bdev /dev/sdm 
 errs: wr 1, rd 0, flush 0, corrupt 0, gen 0

 May 28 21:03:06 lcg-lrz-dc10 kernel: [1743336.347191] sd 0:0:14:0: rejecting 
 I/O to offline device
 May 28 21:03:06 lcg-lrz-dc10 kernel: [1743336.369569] BTRFS: lost page write 
 due to I/O error on /dev/sdm

 Well as I've said,.. maybe it's not an issue at all, but at least it's
 strange that this happens on brand new hardware only with the
 btrfs-raid56 node, especially the gazillions of megasas messages.

Brand-new hardware is the most likely to show (hardware) issues: it has no
proven track record yet, and it may have been subjected to all kinds of
abuse during transport. I'm sure you would see the same if you put software
RAID + ext4 on this server.

Nice hardware btw, please share your findings.

Sander


home server (was: Re: Kernel oops: 17 on PREEMPT ARM when scrubbing)

2015-05-19 Thread Sander
Wolfgang Mader wrote (ao):
 By the way. For the last two years I have been serving my files in my home
 network using low-power arm devices, but by now I am a little frustrated to
 depend on the manufacturer when it comes to u-boot updates. What are you
 guys using as server hardware for your home network?

Currently an Arndale (UEFI), which replaced a Pandaboard (barebox). Now I'm
looking at a Supermicro 5028D-TN4T for (much) better filesystem performance
and more reliability (ECC memory), while still being fairly low power
(45W Xeon).

I had hoped for 64-bit ARM with ECC and all, as announced quite some time
ago (AMD), but I'm pretty much done waiting.

Sander


Re: corruption in USB harddrive - backup via send/receive - question

2015-04-20 Thread Sander
Miguel Negrão wrote (ao):
 - Given that I'm running a laptop and communicating with the hard drives
 via USB, is it expected that I will get some corruption from time to time,
 or is this abnormal?

Abnormal. I have three Intel SSDs connected via USB to an Arndale. Two of
them have LUKS and btrfs raid0 on top and are used as a home server. The
third SSD is plain btrfs, and is used for backup archives.

Sander


Re: directory defrag

2015-04-14 Thread Sander
Russell Coker wrote (ao):
 The current defragmentation options seem to only support defragmenting
 named files/directories or a recursive defragmentation of files and
 directories.
 
 I'd like to recursively defragment directories.

find / -xdev -type d -execdir btrfs filesystem defrag -c {} +

Would that work for you?

Sander


Re: Understanding btrfs and backups

2014-03-07 Thread Sander
Eric Mesa wrote (ao):
 Duncan - thanks for this comprehensive explanation. For a huge portion of
 your reply...I was all wondering why you and others were saying snapshots
 aren't backups. They certainly SEEMED like backups. But now I see that the
 problem is one of precise terminology vs colloquialisms. In other words,
 snapshots are not backups in and of themselves. They are like Mac's Time
 Machine. BUT if you take these snapshots and then put them on another media
 - whether that's local or not - THEN you have backups. Am I right, or am I
 still missing something subtle? 

Snapshots are backups, but they only protect you against a limited set of
disasters. Snapshots are very convenient for quickly going back in time for
some or all files and directories. But if the filesystem or the underlying
disk goes up in flames, the snapshots are toast as well. So you need
additional backups, preferably not on the same hardware, for real
protection against data loss.

The convenience of snapshots is that you can make them (almost) as often
as you want, fully automated, with (almost) no impact on performance and
without the need for extra hardware, and a restore is no more than a
simple copy.
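
A small illustration of that workflow, assuming /home is itself a subvolume
(names are made up):

# read-only, timestamped snapshot
btrfs subvolume snapshot -r /home /home/.snapshots/home_20140307
# restoring a single file really is just a copy; --reflink keeps it cheap
cp --reflink=auto /home/.snapshots/home_20140307/alice/report.txt /home/alice/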

Sander


Re: correct way to rollback a root filesystem?

2014-01-07 Thread Sander
Jim Salter wrote (ao):
 I tried a kernel upgrade with moderately disastrous
 (non-btrfs-related) results this morning; after the kernel upgrade
 Xorg was completely borked beyond my ability to get it working
 properly again through any normal means. I do have hourly snapshots
 being taken by cron, though, so I'm successfully X'ing again on the
 machine in question right now.
 
 It was quite a fight getting back to where I started even so, though
 - I'm embarrassed to admit I finally ended up just doing a cp
 --reflink=all /mnt/@/.snapshots/snapshotname /mnt/@/ from the
 initramfs BusyBox prompt.  Which WORKED well enough, but obviously
 isn't ideal.
 
 I tried the btrfs sub set-default command - again from BusyBox - and
 it didn't seem to want to work for me; I got an inappropriate ioctl
 error (which may be because I tried to use / instead of /mnt, where
 the root volume was CURRENTLY mounted, as an argument?). Before
 that, I'd tried setting subvol=@root (which is the writeable
 snapshot I created from the original read-only hourly snapshot I
 had) in GRUB and in fstab... but that's what landed me in BusyBox to
 begin with.
 
 When I DID mount the filesystem in BusyBox on /mnt, I saw that @ and
 @home were listed under /mnt, but no other directories were -
 which explains why mounting -o subvol=@root didn't work. I guess the
 question is, WHY couldn't I see @root in there, since I had a
 working, readable, writeable snapshot which showed its own name as
 root when doing a btrfs sub show /.snapshots/root ?

I don't quite understand what your setup looks like.

In my setup, all subvolumes and snapshots are under /.root/

# cat /etc/fstab
LABEL=panda  /         btrfs  subvol=rootvolume,space_cache,inode_cache,compress=lzo,ssd  0  0
LABEL=panda  /home     btrfs  subvol=home       0  0
LABEL=panda  /root     btrfs  subvol=root       0  0
LABEL=panda  /var      btrfs  subvol=var        0  0
LABEL=panda  /holding  btrfs  subvol=.holding   0  0
LABEL=panda  /.root    btrfs  subvolid=0        0  0
/Varlib      /var/lib  none   bind              0  0


In case of an OS upgrade gone wrong, I would mount subvolid=0, move
subvolume 'rootvolume' out of the way, and move (rename) the last known
good snapshot to 'rootvolume'.

Not sure if that works though. Never tried.
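
Spelled out, the idea would be something like this (untested, with a
made-up snapshot name, and assuming the layout from the fstab above):

mount -o subvolid=0 LABEL=panda /.root
mv /.root/rootvolume /.root/rootvolume.broken
mv /.root/.snapshot_20140106_03.00_panda_rootvolume /.root/rootvolume
# if that snapshot is read-only, take a writable snapshot of it instead
# of renaming it:
#   btrfs subvolume snapshot \
#     /.root/.snapshot_20140106_03.00_panda_rootvolume /.root/rootvolume
reboot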

Sander


Re: question regarding caching

2014-01-03 Thread Sander
Austin S Hemmelgarn wrote (ao):
 The data is probably still cached in the block layer, so after
 unmounting, you could try 'echo 1 > /proc/sys/vm/drop_caches' before
 mounting again, but make sure to run sync right before doing that,
 otherwise you might lose data.

Lose data? Where did you get that from?

Sander


Re: [PATCH] Btrfs: improve the performance fluctuating of the fsync

2014-01-02 Thread Sander
Leonidas Spyropoulos wrote (ao):
 Will this help with apt-get performance over btrfs file system? As far
 as I understand it it's happening because of multiple fsync calls.

apt-get install eatmydata

This package contains a small LD_PRELOAD library (libeatmydata) and a
couple of helper utilities designed to transparently disable fsync and
friends (like open(O_SYNC)).

Then use it like:
eatmydata apt-get install package


Re: rootfs crash

2013-09-17 Thread Sander
Jogi Hofmüller wrote (ao):
 I am limited to working with the tools the Debian initramfs
 provides.  This means kernel 3.10.2 (Debian 3.10.7-1) and
 btrfs-tools  0.19+20130705-1.  The latter seems to be up to date
 with git although `btrfs version` says v0.20 rc1. All this is
 happening on an Asus Zen book  UX32V with two 128GB SSDs.
 
 If anyone is interested in images produced by btrfs-image, they are
 available at
 
   http://plagi.at/images/
 
 I am fresh out of ideas at the moment, so if anyone has a suggestion
 I am willing to try.

Did you try btrfs chunk-recover ?

Sander


Re: Creating recursive snapshots for all filesystems

2013-05-03 Thread Sander
Alexander Skwar wrote (ao):
 Where I'm hanging right now, is that I can't seem to figure out a
 bullet proof way to find all the subvolumes of the filesystems I
 might have.

 Is there an easier way to achieve what I want? I want to achieve:
 
 Creating recursive snapshots for all filesystems

Not sure if this helps, but I have subvolid=0, which contains all my
subvolumes, mounted under /.root/

/etc/fstab:
LABEL=panda  /              btrfs  subvol=rootvolume,space_cache,inode_cache,compress=lzo,ssd  0  0
LABEL=panda  /home          btrfs  subvol=home         0  0
LABEL=panda  /root          btrfs  subvol=root         0  0
LABEL=panda  /var           btrfs  subvol=var          0  0
LABEL=panda  /holding       btrfs  subvol=.holding     0  0
LABEL=panda  /.root         btrfs  subvolid=0          0  0
LABEL=panda  /.backupadmin  btrfs  subvol=backupadmin  0  0
/Varlib      /var/lib       none   bind                0  0

panda:~# ls -l /.root/
total 0
drwxr-xr-x. 1 root root 580800 Jan 30 17:46 backupadmin
drwxr-xr-x. 1 root root     24 Mar 27  2012 home
drwx------. 1 root root    742 Mar 19 15:50 root
drwxr-xr-x. 1 root root    226 May 16  2012 rootvolume
drwxr-xr-x. 1 root root     96 Apr  3  2012 var

In my snapshots script:

  ...
  mmddhhmm=`date +%Y%m%d_%H.%M`
  ...
  for subvolume in `ls /.root/`
  do
...
/sbin/btrfs subvolume snapshot ${filesystem}/${subvolume}/ \
  /.root/.snapshot_${mmddhhmm}_${hostname}_${subvolume}/ || result=2
...
  done
  ...

This creates timestamped snapshots for all subvolumes.
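
To make the loop independent of what else ends up directly under /.root/,
the subvolume names could also be taken from btrfs itself instead of from
ls; a sketch, filtering the existing snapshots out by name:

for subvolume in $(btrfs subvolume list /.root | awk '{print $NF}' \
                   | grep -v '^\.snapshot_')
do
  /sbin/btrfs subvolume snapshot /.root/${subvolume}/ \
    /.root/.snapshot_${mmddhhmm}_${hostname}_${subvolume}/ || result=2
done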

Sander


Re: Logical/i-nodes lookups give path resolving failed with ret=-2

2013-04-26 Thread Sander
Adrien Dessemond wrote (ao):
 I scrubbed a BTRFS volume (mounted as a VFS root) and got several
 errors. However I am not able to make btrfs print the path of the
 corrupted files...
 
 E.g. kernel log gives :
 
 
 [51078.682876] btrfs: unable to fixup (regular) error at logical
 51241746432 on dev /dev/root
 [51078.683013] btrfs: checksum error at logical 51242385408 on dev
 /dev/root, sector 102196320, root 684, inode 60676040, offset
 352583680: path resolving failed with ret=-2
 [51078.683016] btrfs: bdev /dev/root errs: wr 0, rd 0, flush 0,
 corrupt 35782, gen 0
 
 Manual lookup  :
 
 # btrfs  inspect-internal logical-resolve -v  51241746432 /
 ioctl ret=0, bytes_left=4080, bytes_missing=0, cnt=0, missed=0
 # ./btrfs  inspect-internal inode-resolve 60676040 /
 ioctl ret=-1, error: No such file or directory
 
 Kernel is Linux 3.9-rc8, latest btrfs-progs from Chris Mason's Git repository.
 
 I am sure I am missing something but I am unable to figure out
 what. Any idea? Thanks!!

Maybe: find / -xdev -inum 60676040


Re: One random read streaming is fast (~1200MB/s), but two or more are slower (~750MB/s)?

2013-04-17 Thread Sander
Matt Pursley wrote (ao):
 I have an LSI HBA card (LSI SAS 9207-8i) with 12 7200rpm SAS drives
 attached.  When it's formatted with mdraid6+ext4 I get about 1200MB/s
 for multiple streaming random reads with iozone.  With btrfs in
 3.9.0-rc4 I can also get about 1200MB/s, but only with one stream at a
 time.

Just curious, is that btrfs on top of mdraid6, or is this experimental
btrfs raid6 without md?


Re: Activating space_cache after read-only snapshots without space_cache have been taken

2013-04-16 Thread Sander
Liu Bo wrote (ao):
 On Tue, Apr 16, 2013 at 02:28:51AM +0200, Ochi wrote:
  The situation is the following: I have created a backup-volume to
  which I regularly rsync a backup of my system into a subvolume.
  After rsync'ing, I take a _read-only_ snapshot of that subvolume
  with a timestamp added to its name.
  
  Now at the time I started using this backup volume, I was _not_
  using the space_cache mount option and two read-only snapshots were
  taken during this time. Then I started using the space_cache option
  and continued doing snapshots.
  
  A bit later, I started having very long lags when unmounting the
  backup volume (both during shutdown and when unmounting manually). I
  scrubbed and fsck'd the volume but this didn't show any errors.
  Defragmenting the root and subvolumes took a long time but didn't
  improve the situation much.
 
 So are you using '-o nospace_cache' when creating two RO snapshots?

No, he first created two RO snapshots (without space_cache), then (some
time later) mounted with space_cache, and then continued to take RO
snapshots.

  Now I started having the suspicion that maybe the space cache
  possibly couldn't be written to disk for the readonly
  subvolumes/snapshots that were created during the time when I wasn't
  using the space_cache option, forcing the cache to be rebuilt every
  time.
  
  Clearing the cache didn't help. But when I deleted the two snapshots
  that I think were taken during the time without the mount option,
  the unmounting time seems to have improved considerably.
 
 I don't know why this happens, but maybe you can observe the umount
 process's very slow behaviour by using 'cat /proc/{umount-pid}/stack'
 or 'perf top'.

AFAIUI the problem is not there anymore, but this is a good tip for the
future.

Sander


Re: RAID 0 across SSD and HDD

2013-01-30 Thread Sander
Roger Binns wrote (ao):
 I'm happy to wait till it is available. btrfs has been beneficial to
 me in so many other respects (eg checksums, compression, online
 everything, not having to deal with LVM and friends). I was just
 hoping that joining an SSD and HDD would be somewhat worthwhile now
 even if it isn't close to what hot data will deliver in the future.

Do you know about bcache and EnhanceIO ?

http://bcache.evilpiepirate.org/
and
https://github.com/stec-inc/EnhanceIO

Sander


Re: btrfs: could not do orphan cleanup -22

2013-01-18 Thread Sander
Reartes Guillermo wrote (ao):
 [   71.617841] device label testfs1 devid 1 transid 4143 /dev/sdb1
 [   71.619164] btrfs: disk space caching is enabled
 [   71.629969] device label fedora devid 1 transid 2038 /dev/sda2
 [   71.805339] btrfs: Error removing orphan entry, stopping orphan cleanup
 [   71.806597] btrfs: could not do orphan cleanup -22
 [   71.986601] device label testfs1 devid 1 transid 4143 /dev/sdb1
 [   72.934724] btrfs: open_ctree failed

 Since sda2 is mounted i could not check it, but from the logs it is
 not clear to me if the issue is in sda2 or sdb1.

Seems sda2 from the above.


Re: btrfs subvolume snapshot performance problem

2012-12-17 Thread Sander
Sylvain Alain wrote (ao):
 gentootux ~ # mount /dev/sda4 -o
 noatime,ssd,discard,compress=lzo,noacl,space_cache,subvolid=0
  ^^^

 Instead of 3 secondes to run the snapshot, it took almost 4 minutes.

Let me repeat the answer cwillu gave to Russell on this, and Russell's
response:

Russell Coker wrote (ao):
 On Sun, 16 Dec 2012, cwillu cwi...@cwillu.com wrote:
  Don't use discard; it's a non-queuing command, which means your
  performance will suck unless your device is really terrible at
  garbage collection (in which case, it's just the lesser of two evils).
 
 Thanks for the advice. On one of my systems a reinstall of the linux-
 image-3.6-trunk-amd64 package went from almost 4 minutes to only 29
 seconds when I removed the discard option.
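
The usual alternative is to drop discard from the mount options and trim in
batches from cron instead, for example:

# e.g. weekly, once per mounted btrfs filesystem
fstrim -v /
fstrim -v /home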


Re: Encryption

2012-12-13 Thread Sander
merc1...@f-m.fm wrote (ao):
 Oh pardon me, it's BTRFS RAID that's a no-go, which is just as critical
 to me as I have a 4 disk 8TB array.
 The FAQ goeth on to Say:
 ---
 This pretty much forbids you to use btrfs' cool RAID features if you
 need encryption.

Forbids? That is just plain wrong.

I have one btrfs filesystem on top of two encrypted devices. Works just
fine.
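
A minimal sketch of such a stack, with hypothetical device names (the RAID
profile is up to you; btrfs simply sees the dm-crypt mappings as ordinary
devices):

cryptsetup luksFormat /dev/sdb
cryptsetup luksFormat /dev/sdc
cryptsetup luksOpen /dev/sdb crypt0
cryptsetup luksOpen /dev/sdc crypt1
mkfs.btrfs -m raid1 -d raid1 /dev/mapper/crypt0 /dev/mapper/crypt1
mount /dev/mapper/crypt0 /mnt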

Sander


Re: Question about btrfs snapshot delay and rm -rf delay

2012-12-06 Thread Sander
Sylvain Alain wrote (ao):
 Hi, right now I own this SSD :
 
 Intel SSD 520 Series MLC 120 Gigs

 Also, this is my /etc/fstab
 /dev/sda3  /boot      ext2   noauto,noatime,defaults
 /dev/sda1  /boot/efi  vfat   noauto,defaults
 /dev/sda4  /          btrfs

SSDs are sensitive to partitioning. Easiest is not to partition at all.

Sander


Re: Question about btrfs snapshot delay and rm -rf delay

2012-12-06 Thread Sander
Martin Steigerwald wrote (ao):
 Am Donnerstag, 6. Dezember 2012 schrieb Martin Steigerwald:
  Am Donnerstag, 6. Dezember 2012 schrieb Sander:
   Sylvain Alain wrote (ao):
Hi, right now I own this SSD :

Intel SSD 520 Series MLC 120 Gigs
   
Also, this is my /etc/fstab
/dev/sda3  /boot      ext2   noauto,noatime,defaults
/dev/sda1  /boot/efi  vfat   noauto,defaults
/dev/sda4  /          btrfs
   
   SSDs are sensitive to partitioning. Easiest is not to partition at all.
  
  Huh? How so?
  
  My Intel SSD 320 is partitioned and uses LVM in the biggest partition
  and is still just fine after 19 months. Media wearout indicator still at 100
  (of 100).
 
 Or did you indirectly refer to partition alignment?

Yes I did, I meant sensitive performance-wise. I think your Intel 320 will
survive just fine, but partition misalignment gives the controller quite a
bit more work to do.
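
Whether an existing partition is aligned is quick to check (device names
are examples); a start sector that is a multiple of 2048 (1 MiB) is fine:

cat /sys/block/sda/sda1/start
parted /dev/sda align-check optimal 1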

Sander


Re: Example of BTRFS uglyssima performance : Bitcoin

2012-12-04 Thread Sander
Swâmi Petaramesh wrote (ao):
 But I've been using pretty much anything over LUKS/LVM for years, and I've
 never noticed it cause any (noticeable to the point of becoming annoying)
 system slowdown, whatever tasks I may have processed in such setups
 (including servers, big databases, compilations, NAS, etc...)

If your system is stable, you could also consider running bitcoin with
eatmydata.

Sander


Re: btrfs defrag problem

2012-11-06 Thread Sander
David Sterba wrote (ao):
 On Thu, Nov 01, 2012 at 05:17:04AM +0800, ching wrote:
  when a device is mounted under a directory, files in the directory
  is hidden, and files in the device is available, right?  when a
  directory is polyinstantied, files in the original directory is
  hidden, and files in the polyinstantied directory is available,
  
  How to get past them and pass those hidden files to defrag
  command?
 
 I hope I get it right, so unless you have a reference to the directory
 with hidden files (using your term), there's no way to access them.
 And this is a more generic question, not related to btrfs itself. The
 hidden files may also belong to a different filesystem.

What Ching means (I think), is that if you have directories in /home,
and you mount a device onto /home, you cannot see the original
directories in /home anymore.

You can still access them though, with a 'mount -o bind':

# mount -o bind / /mnt
# ls /mnt/home

Sander


Re: Need help mounting laptop corrupted root btrfs. Kernel BUG at fs/btrfs/volumes.c:3707 - FIXED

2012-11-01 Thread Sander
Marc MERLIN wrote (ao):
 That said, it's working fine again for now after I went back to kernel 3.5.3 
 (down from 3.6.3). It hasn't been long enough to say for sure, but there is
 a remote possibility that changes in 3.6 actually caused my drive to freeze
 after several hours of use.
 When that happened (3 times), 2 of those times, btrfs did not manage to
 write all its data before access was cutoff, and I got the bug I reported
 here, which in turn crashes any kernel you try to mount the FS with.
 Cleaning the log manually fixed it both times so far.
 
 For now, I'll stick with 3.5.3 for a while to make sure my drive is actually
 ok (it seems to be afterall), and once I'm happy that it's the case, I'll go
 back to 3.6.3 with serial console remote logging and try to capture the full
 sata failure I got with 3.6.3.

Thanks for the info. You could put some load on the ssd to see if you can
trigger an issue under 3.6.3(+), for example with a btrfs scrub or with
badblocks (in its default, non-destructive read-only mode).

Can you collect SMART data (with smartctl) from the ssd?
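
For example (device name and mount point are placeholders):

smartctl -a /dev/sda        # SMART attributes and the device error log
btrfs scrub start -B /      # verify all checksums, wait for completion
badblocks -sv /dev/sda      # read-only surface scan, non-destructive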

Sander


Re: Need help mounting laptop corrupted root btrfs. Kernel BUG at fs/btrfs/volumes.c:3707 - FIXED

2012-10-31 Thread Sander
Marc MERLIN wrote (ao):
 What happened is that my SSD is crapping out and failing to write after
 a certain number of uptime hours.

What model SSD is that, if I may ask?

Sander


Re: how to cleanup old superblock

2012-06-26 Thread Sander
David Sterba wrote (ao):
 Thanks. I redid the calculations and the statement that it 'will not
 touch anything else' may not be correct in rare cases.

What about wipefs?

wipefs allows to erase filesystem or raid signatures (magic strings)
from the device to make the filesystem invisible for libblkid. wipefs
does not erase the whole filesystem or any other data from the device.
When used without options -a or -o, it lists all visible filesystems and
offsets of their signatures.
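
In practice (the device name is an example):

wipefs /dev/sdb      # list the signatures found and their offsets
wipefs -a /dev/sdb   # erase all of them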

Sander


Re: SSD format/mount parameters questions

2012-05-18 Thread Sander
Martin wrote (ao):
 Are there any format/mount parameters that should be set for using
 btrfs on SSDs (other than the ssd mount option)?

If possible, format the whole device and do not partition the ssd. This
will guarantee proper alignment.

The kernel will detect the ssd, and apply the ssd mount option
automatically.
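
So with a hypothetical /dev/sdb that is simply:

mkfs.btrfs /dev/sdb    # whole device, no partition table, nothing to align
mount /dev/sdb /mnt
cat /sys/block/sdb/queue/rotational   # 0 = non-rotational, ssd mode is used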

 I've got a mix of various 120/128GB SSDs to newly set up. I will be
 using ext4 on the critical ones, but also wish to compare with
 btrfs...

I would use btrfs on the critical ones, as btrfs has checksums to detect
data corruption.

 The mix includes some SSDs with the Sandforce controller that implements
 its own data compression and data deduplication. How well does btrfs fit
 with those compared to other non-data-compression controllers?

Since you have them both, you might want to find out yourself, and let
us know ;-)

FWIW (not much, as you already have them), I would not buy anything other
than Intel. I have had about 26 of them for years now (both in servers and
workstations, several series), and never had an issue. Two of my colleagues
have OCZ, and both had to RMA them.

Sander


Re: kernel BUG at fs/btrfs/volumes.c:2733

2012-03-30 Thread Sander
Hello Ilya,

Ilya Dryomov wrote (ao):
   I'm definitely intrested in reproducing it. Could you please umount this
   filesystem, capture the output of 'btrfs-debug-tree -d <dev>' and post it
   somewhere ?
  
  Will do. It is the / filesystem, so I'll need to reboot.
 
 I need this to confirm that balance item is on disk.

I'm sorry it took so long. I'll mail the output to you directly.

   After that mount it back and see if there is btrfs: continuing
   balance line in dmesg (and if btrfs-balance kthread shows up)?

There is no such line in dmesg, and currently no btrfs-balance kthread
is running. I've pulled Chris Mason's for-linus and booted with the
resulting kernel.

   If so, just let it run, it should finish the balance and remove
   on-disk item. (You can query the status of running balance with 'btrfs
   balance status <mnt>')
  
  Do I need newer tools for that? This is Debian Sid (unstable):
 
 Yeah, you do. That command is in master now, but it's not really
 needed. If btrfs-balance shows up, just wait for it to finish, it
 should get rid of the balance item. If it doesn't show up but the item
 is there we will have to dig deeper.

Ok :-)

Sander

-- 
Humilis IT Services and Solutions
http://www.humilis.net


Re: kernel BUG at fs/btrfs/volumes.c:2733

2012-03-30 Thread Sander
Ilya Dryomov wrote (ao):
 On Fri, Mar 30, 2012 at 07:49:56PM +0200, Sander wrote:
 Thanks. btrfs-debug-tree confirms that you've got a balance item on
 media.

 After that mount it back and see if there is btrfs: continuing
 balance line in dmesg (and if btrfs-balance kthread shows up)?
  
  There is no such line in dmesg, and currently no btrfs-balance kthread
  is running. I've pulled Chris Masons for-linus and booted with the
  resulting kernel.
 
 And given the above it's weird. We are failing to locate the item
 during mount for some reason and I would like to find out why. So if
 you are up for running debugging patches (really just compiling btrfs
 module and sending me dmesg output) I would appreciate that.

Sure, please send me patches.

In the meantime, I got these (not related I guess, but it's the first
time it mentions btrfs, and I wonder where gzip is from):

[10013.866973] kworker/0:2: page allocation failure: order:3, mode:0x20
[10013.866973] [c000ff5b] (unwind_backtrace+0x1/0x8a) from [c00601f3] 
(warn_alloc_failed+0x9f/0xc4)
[10013.881286] [c00601f3] (warn_alloc_failed+0x9f/0xc4) from [c0061ed7] 
(__alloc_pages_nodemask+0x3e3/0x410)
[10013.883270] [c0061ed7] (__alloc_pages_nodemask+0x3e3/0x410) from 
[c007b57b] (cache_alloc_refill+0x1ab/0x364)
[10013.893646] [c007b57b] (cache_alloc_refill+0x1ab/0x364) from [c007b78d] 
(__kmalloc+0x59/0x84)
[10013.893646] [c007b78d] (__kmalloc+0x59/0x84) from [c02e0bcd] 
(__alloc_skb+0x37/0xb2)
[10013.922058] [c02e0bcd] (__alloc_skb+0x37/0xb2) from [c02e1033] 
(__netdev_alloc_skb+0x15/0x2e)
[10013.922058] [c02e1033] (__netdev_alloc_skb+0x15/0x2e) from [c0243839] 
(rx_submit+0x15/0x130)
[10013.931365] [c0243839] (rx_submit+0x15/0x130) from [c0248187] 
(usb_hcd_giveback_urb+0x3f/0x74)
[10013.931365] [c0248187] (usb_hcd_giveback_urb+0x3f/0x74) from [c0250739] 
(ehci_urb_done+0x5f/0x68)
[10013.931365] [c0250739] (ehci_urb_done+0x5f/0x68) from [c0252497] 
(qh_completions+0x6f/0x2b8)
[10013.968780] [c0252497] (qh_completions+0x6f/0x2b8) from [c0252ca5] 
(ehci_work+0x65/0x5d8)
[10013.968780] [c0252ca5] (ehci_work+0x65/0x5d8) from [c0253635] 
(ehci_irq+0x171/0x198)
[10013.986175] [c0253635] (ehci_irq+0x171/0x198) from [c0247c47] 
(usb_hcd_irq+0x1f/0x3a)
[10013.986175] [c0247c47] (usb_hcd_irq+0x1f/0x3a) from [c0057165] 
(handle_irq_event_percpu+0x19/0xd4)
[10013.986175] [c0057165] (handle_irq_event_percpu+0x19/0xd4) from 
[c0057249] (handle_irq_event+0x29/0x3c)
[10013.986175] [c0057249] (handle_irq_event+0x29/0x3c) from [c0058c5d] 
(handle_fasteoi_irq+0x81/0xb4)
[10013.986175] [c0058c5d] (handle_fasteoi_irq+0x81/0xb4) from [c0056dcf] 
(generic_handle_irq+0x13/0x1c)
[10014.02] [c0056dcf] (generic_handle_irq+0x13/0x1c) from [c000cc97] 
(handle_IRQ+0x4b/0x7c)
[10014.02] [c000cc97] (handle_IRQ+0x4b/0x7c) from [c00084b1] 
(gic_handle_irq+0x4d/0x68)
[10014.052398] [c00084b1] (gic_handle_irq+0x4d/0x68) from [c000bfdb] 
(__irq_svc+0x3b/0x60)
[10014.052398] Exception stack(0xedf73f00 to 0xedf73f48)
[10014.052398] 3f00: ef002a64 ef00a440  ee097b40 ef000140 ef002a40 
 c1a40d08
[10014.052398] 3f20:  c1a40d08 c1a404bc  0020 edf73f48 
c0421079 c042107a
[10014.052398] 3f40: 6033 
[10014.083526] [c000bfdb] (__irq_svc+0x3b/0x60) from [c042107a] 
(_raw_spin_unlock_irq+0x8/0xa)
[10014.083526] [c042107a] (_raw_spin_unlock_irq+0x8/0xa) from [c007b297] 
(cache_reap+0x5b/0xb8)
[10014.083526] [c007b297] (cache_reap+0x5b/0xb8) from [c002f827] 
(process_one_work+0x155/0x22e)
[10014.083526] [c002f827] (process_one_work+0x155/0x22e) from [c002fc3b] 
(worker_thread+0x127/0x1e8)
[10014.083526] [c002fc3b] (worker_thread+0x127/0x1e8) from [c0032059] 
(kthread+0x4d/0x60)
[10014.133026] [c0032059] (kthread+0x4d/0x60) from [c000cd39] 
(kernel_thread_exit+0x1/0x6)
[10014.133026] Mem-info:
[10014.133026] Normal per-cpu:
[10014.133026] CPU0: hi:  186, btch:  31 usd: 156
[10014.133026] CPU1: hi:  186, btch:  31 usd: 168
[10014.152069] active_anon:19949 inactive_anon:506 isolated_anon:0
[10014.152069]  active_file:52991 inactive_file:52991 isolated_file:0
[10014.157104]  unevictable:469 dirty:2108 writeback:0 unstable:0
[10014.157104]  free:3283 slab_reclaimable:51064 slab_unreclaimable:5690
[10014.157104]  mapped:2477 shmem:522 pagetables:569 bounce:0
[10014.188293] Normal free:13132kB min:3512kB low:4388kB high:5268kB 
active_anon:79796kB inactive_anon:2024kB active_file:211964kB 
inactive_file:211964kB unevictable:1876kB isolated(anon):0kB isolated(file):0kB 
present:771136kB mlocked:0kB dirty:8432kB writeback:0kB mapped:9908kB 
shmem:2088kB slab_reclaimable:204256kB slab_unreclaimable:22760kB 
kernel_stack:1608kB pagetables:2276kB unstable:0kB bounce:0kB writeback_tmp:0kB 
pages_scanned:0 all_unreclaimable? no
[10014.188293] lowmem_reserve[]: 0 0
[10014.188293] Normal: 2383*4kB 278*8kB 34*16kB 26*32kB 0*64kB 0*128kB 0*256kB 
0*512kB 0*1024kB 0*2048kB 0*4096kB = 13132kB
[10014.188293] 107008 total pagecache pages
[10014.188293

kernel BUG at fs/btrfs/volumes.c:2733

2012-03-29 Thread Sander
      
ed73fcd4 ed73fcd8
[   81.162597] 5e00:      271aee1c 
000200da c0160fd5
[   81.162597] 5e20: ed784c00  ee117cc0  ee311c00  
ed73f4e8 ed73fcb0
[   81.188262] 5e40: ed73f000  beb62d64 c013d489 eec7ace8 beb61ba8 
ed784c00 eecd5370
[   81.188262] 5e60: ee117cc0  eecd5528  beb62d64 c013fc6b 
001d 00ec
[   81.199951] 5e80: 0007 0001 ed74f200 c015cecd edda5ea4  
edda5ef0 
[   81.199951] 5ea0: e7fb4050 01ff   0001  
eea3c740 0001
[   81.199951] 5ec0: eebe59c0 c1414788 ee117cc8 00ec edda5ef0 c016011b 
edda5ef0 
[   81.222503] 5ee0: 0001 c016018f  0817 0001 271aee1c 
ede92250 ee117cc0
[   81.222503] 5f00: beb61ba8 beb61ba8 eecd5528  edda4000  
beb62d64 c0088075
[   81.248168] 5f20: 4000 c00887ff     
 
[   81.248168] 5f40:      271aee1c 
0003 ee117cc0
[   81.248168] 5f60: beb61ba8 5000940c ee117cc0 beb61ba8 5000940c 0003 
 edda4000
[   81.273834] 5f80:  c008885d 0003  beb62e8c 0003 
01438478 0036
[   81.273834] 5fa0: c000c5a4 c000c401 beb62e8c 0003 0003 5000940c 
beb61ba8 beb62ba8
[   81.290954] 5fc0: beb62e8c 0003 01438478 0036 0002 b7ad 
0001 beb62d64
[   81.293914] 5fe0: 00024b3d beb61ba0 b7f7 b6ee7f9c 8110 0003 
372a242a 72d76d15
[   81.293914] [c0138c3a] (btrfs_balance+0x312/0xb04) from [c013d489] 
(btrfs_ioctl_balance+0x109/0x174)
[   81.309387] [c013d489] (btrfs_ioctl_balance+0x109/0x174) from [c013fc6b] 
(btrfs_ioctl+0xbf5/0xd42)
[   81.309387] [c013fc6b] (btrfs_ioctl+0xbf5/0xd42) from [c0088075] 
(vfs_ioctl+0xd/0x28)
[   81.309387] [c0088075] (vfs_ioctl+0xd/0x28) from [c00887ff] 
(do_vfs_ioctl+0x35d/0x38e)
[   81.345001] [c00887ff] (do_vfs_ioctl+0x35d/0x38e) from [c008885d] 
(sys_ioctl+0x2d/0x44)
[   81.345001] [c008885d] (sys_ioctl+0x2d/0x44) from [c000c401] 
(ret_fast_syscall+0x1/0x44)
[   81.345001] Code: d107 f116 0f11 d100 (de02) 4620 
[   81.367645] ---[ end trace 6b16e1c6e6a2dd9c ]---


The system is a pandaboard running a plain Linus kernel 3.3.0 with a
btrfs filesystem, over two Intel 320 600GB ssd's, connected via usb (on
a usb hub), on top of dm-crypt. Mount options:
subvol=rootvolume,space_cache,inode_cache,compress=lzo,ssd

Before the balance, I deleted about 2500 snapshots and waited for the
btrfs kernel threads to calm down. Then I initiated a btrfs filesystem
scrub. Unfortunately during the scrub, the filesystem balance started.
Might be related.

Sander

-- 
Humilis IT Services and Solutions
http://www.humilis.net





Re: kernel BUG at fs/btrfs/volumes.c:2733

2012-03-29 Thread Sander
Hello Josef,

Josef Bacik wrote (ao):
 On Thu, Mar 29, 2012 at 12:52:35PM +0200, Sander wrote:
  I can't seem to balance my btrfs filesystem. It segfaults, and gives a
  kernel bug:
  
  [ 1355.139099] [ cut here ]
  [ 1355.139099] kernel BUG at fs/btrfs/volumes.c:2733!
  [ 1355.149322] Internal error: Oops - BUG: 0 [#1] SMP
  [ 1355.149322] Modules linked in:
  [ 1355.154479] CPU: 0Not tainted  (3.3.0 #8)
  [ 1355.162109] PC is at btrfs_balance+0x312/0xb04
  [ 1355.166778] LR is at btrfs_run_delayed_iputs+0x2d/0xac

  The system is a pandaboard running a plain Linus kernel 3.3.0 with a
  btrfs filesystem, over two Intel 320 600GB ssd's, connected via usb (on
  an usb hub), on top of md_crypt. Mount options:
  subvol=rootvolume,space_cache,inode_cache,compress=lzo,ssd
  
  Before the balance, I deleted about 2500 snapshots and waited for the
  btrfs kernel threads to calm down. Then I initiated a btrfs filesystem
  scrub. Unfortunately during the scrub, the filesystem balance started.
  Might be related.
 
 Well that's kind of cool.  So 2 options
 
 1) If you are in a hurry and need this stuff back right away run btrfs fi
 balance resume / and it should work, buuutt
 
 2) If you aren't in a hurry I'd really like to try and reproduce this locally
 and if I can't I'd like to be able to send you patches to help me figure out 
 how
 to fix this problem.

I am in no hurry at all. The filesystem seems just fine the way it is
(after a reboot), so there is no stuff to get back right away. Does
the kernel bug suggest the filesystem is fubar?

I'll keep the filesystem as is (no resume) and am happy to test any
patches you have.

Sander

-- 
Humilis IT Services and Solutions
http://www.humilis.net


kernel BUG at fs/btrfs/extent-tree.c:4263 (due to lost usb connection)

2012-03-26 Thread Sander
Hello all,

I've just encountered a kernel BUG at fs/btrfs/extent-tree.c:4263 due
to a usb connection hiccup to the storage (so you might as well stop
reading now ;-) )

With google, a lot turns up on "kernel BUG at fs/btrfs/extent-tree.c",
but not on the specific line number, nor on
"PC is at drop_outstanding_extent"
or
"LR is at btrfs_delalloc_release_metadata".

The system is a pandaboard running a plain Linus kernel 3.3.0 with a
btrfs filesystem just created this morning, over two Intel 320 600GB
ssd's, connected via usb (on a usb hub), on top of dm-crypt. Mount
options: subvol=rootvolume,space_cache,inode_cache,compress=lzo,ssd

It seems the usb hub lost its connection, so it is perfectly normal for
the system to Oops. I'm reporting this just as a datapoint.

FWIW, the filesystem seems to be just fine after the reboot. A 'scrub'
found 0 errors. I'd say that is pretty neat!

Sander


[ 8409.792541] [ cut here ]
[ 8409.797576] kernel BUG at fs/btrfs/extent-tree.c:4263!
[ 8409.797576] Internal error: Oops - BUG: 0 [#1] SMP
[ 8409.797576] Modules linked in:
[ 8409.809112] CPU: 1Not tainted  (3.3.0 #8)
[ 8409.815734] PC is at drop_outstanding_extent+0xa/0x44
[ 8409.821044] LR is at btrfs_delalloc_release_metadata+0x39/0x94
[ 8409.827148] pc : [c010e0e2]lr : [c01138bd]psr: 6133
[ 8409.827148] sp : ed72fe30  ip :   fp : 
[ 8409.827148] r10: 1000  r9 : eee13da8  r8 : d3d2c738
[ 8409.827148] r7 : eee13c08  r6 : ed54c800  r5 :   r4 : 9fff
[ 8409.851440] r3 : eee13bf0  r2 :   r1 : 1000  r0 : eee13da8
[ 8409.858276] Flags: nZCv  IRQs on  FIQs on  Mode SVC_32  ISA Thumb  Segment 
kernel
[ 8409.863708] Control: 50c5387d  Table: ad50804a  DAC: 0015
[ 8409.863708] Process btrfs-transacti (pid: 241, stack limit = 0xed72e2f8)
[ 8409.879119] Stack: (0xed72fe30 to 0xed73)
[ 8409.879119] fe20: eee13da8 0001 
9000 eee13da8
[ 8409.892242] fe40:  ed54c800 d3d4c428 d3d2c738 ed658c00 9000 
 c014761d
[ 8409.892242] fe60: d3d4c428 d3d2c738     
ed54c800 d3d4c428
[ 8409.902862] fe80: eee13da8 ed658c00 d3d2c738 c011a283   
9000 
[ 8409.902862] fea0: 9000  ed72fec8 d3d4c428 ed72fec8  
 
[ 8409.902862] fec0:   5ecb   d3d4c428 
ed5c303c 
[ 8409.926483] fee0: ed5c3000 ed54c800 0002 ed5c304c  c011ed07 
 d3c8af54
[ 8409.926483] ff00: ed54c800 ee0f6000 ed653410  ed5c3000 c0152c2d 
ed653400 d3c8ae90
[ 8409.952148] ff20: ed653410 ed5c0c00 d3d4c428 d3c8ae90 d3c8af14 d3c8ae98 
 00d5
[ 8409.952148] ff40:  c011f869  c011f037   
 
[ 8409.952148] ff60: ed72da40 c00322e5 ed72ff68 ed72ff68 ed5c0c00  
 ed5c0c00
[ 8409.977813] ff80: 0033     c011b473 
 ed70bc10
[ 8409.977813] ffa0: ed5c0c00 c011b379 0033  ed70bc10 ed5c0c00 
c011b379 c0032059
[ 8409.994934] ffc0: ed70bc10  ed5c0c00   dead4ead 
 
[ 8409.996551] ffe0: ed72ffe0 ed72ffe0 ed70bc10 c003200d c000cd39 c000cd39 
 
[ 8410.012054] [c010e0e2] (drop_outstanding_extent+0xa/0x44) from 
[c01138bd] (btrfs_delalloc_release_metadata+0x39/0x94)
[ 8410.023559] [c01138bd] (btrfs_delalloc_release_metadata+0x39/0x94) from 
[c014761d] (btrfs_write_out_ino_cache+0x55/0x68)
[ 8410.023559] [c014761d] (btrfs_write_out_ino_cache+0x55/0x68) from 
[c011a283] (btrfs_save_ino_cache+0x1eb/0x22c)
[ 8410.023559] [c011a283] (btrfs_save_ino_cache+0x1eb/0x22c) from 
[c011ed07] (commit_fs_roots+0x6f/0xe0)
[ 8410.051208] [c011ed07] (commit_fs_roots+0x6f/0xe0) from [c011f869] 
(btrfs_commit_transaction+0x2cb/0x506)
[ 8410.051208] [c011f869] (btrfs_commit_transaction+0x2cb/0x506) from 
[c011b473] (transaction_kthread+0xfb/0x188)
[ 8410.066650] [c011b473] (transaction_kthread+0xfb/0x188) from [c0032059] 
(kthread+0x4d/0x60)
[ 8410.066650] [c0032059] (kthread+0x4d/0x60) from [c000cd39] 
(kernel_thread_exit+0x1/0x6)
[ 8410.090301] Code: 73dc f8d3 21a4 b902 (de02) 3a01 
[ 8410.100433] ---[ end trace 6b16e1c6e6a2dd9c ]---

-- 
Humilis IT Services and Solutions
http://www.humilis.net


Re: Invalid argument when mounting a btrfs raid1 filesystem

2012-03-24 Thread Sander
Christoph Groth wrote (ao):
 I'm trying to install current Debian testing (=kernel version 3.2) with
 btrfs as the root file system.  There is also a small ext3 /boot
 partition.
 
 I create a btrfs raid1 file system with the command
 
 mkfs.btrfs -d raid1 /dev/sda2 /dev/sdb2
 
 Then I can mount it and finish the installation successfully.  Booting
 doesn't work, however: initrd complains that it cannot mount /dev/sda2:
 Invalid argument.
 
 The funny thing is, that in the initrd console I can mount /dev/sdb2!
 So I changed the kernel parameter in grub.cfg to mount /dev/sdb2
 instead, but the problem persists: Now I can mount /dev/sda2 in the
 initrd console!
 
 In fact, when I boot a rescue system from a thumbdrive, the same thing
 happens:
 
 # mount -t btrfs /dev/sda2 /mnt
 mount: mounting /dev/sda2 on /mnt failed: Invalid argument
 # mount -t btrfs /dev/sdb2 /mnt
 #
 
 When I keep trying to mount the same device, it keeps failing.  When I
 start mounting /dev/sdb2, it works for /dev/sda2...
 
 Isn't this very weird?  Any ideas?

You might need 'btrfs device scan' before you can mount a multi-device
filesystem.
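
From the initramfs shell that would be, roughly (device names as in your
report):

btrfs device scan      # register all btrfs member devices with the kernel
mount -t btrfs /dev/sda2 /mnt
# or name every member explicitly on the mount command line:
mount -t btrfs -o device=/dev/sda2,device=/dev/sdb2 /dev/sda2 /mnt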

Sander

-- 
Humilis IT Services and Solutions
http://www.humilis.net


Re: [PATCH] [RFC] Add btrfs autosnap feature

2012-03-02 Thread Sander
cwillu wrote (ao):
  While developing snapper I faced similar problems and looked at
  find-new but unfortunately it is not sufficient. E.g. when a file
  is deleted find-new does not report anything, see the reply to my
  mail here one year ago [1]. Also for newly created empty files
  find-new reports nothing, the same with metadata changes.

 For a system-wide undo'ish sort of thing that I think autosnapper is
 going for, it should work quite nicely, but you're right that it
 doesn't help a whole lot with a backup system.  It can't tell you
 which files were touched or deleted, but it will still tell you that
 _something_ in the subvolume was touched, modified or deleted (at
 least, as of the last commit), which is all you need if you're only
 ever comparing it to its source.

Tar can remove deleted files for you during a restore. This is (imho) a
really cool feature of tar, and I use it in combination with btrfs
snapshots.

https://www.gnu.org/software/tar/manual/tar.html#SEC94

The option `--listed-incremental' instructs tar to operate on an
incremental archive with additional metadata stored in a standalone
file, called a snapshot file. The purpose of this file is to help
determine which files have been changed, added or deleted since the last
backup

When extracting from the incremental backup GNU tar attempts to restore
the exact state the file system had when the archive was created. In
particular, it will delete those files in the file system that did not
exist in their directories when the archive was created
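
Combined with snapshots the workflow looks roughly like this (a sketch with
made-up paths; --no-check-device is useful because every snapshot gets its
own device number, and extracting with --listed-incremental=/dev/null is
what enables the delete-on-restore behaviour):

# level 0 (full) from one read-only snapshot, level 1 from a later one
tar --create --no-check-device --listed-incremental=/backup/home.snar \
    -f /backup/home.0.tar -C /.root/snap_home_monday .
tar --create --no-check-device --listed-incremental=/backup/home.snar \
    -f /backup/home.1.tar -C /.root/snap_home_tuesday .

# restore: full first, then the incremental; files deleted between the
# two snapshots are removed again during the second extract
tar --extract --listed-incremental=/dev/null -f /backup/home.0.tar -C /restore
tar --extract --listed-incremental=/dev/null -f /backup/home.1.tar -C /restore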

Sander

-- 
Humilis IT Services and Solutions
http://www.humilis.net


Re: Tuning of btrfs for throughput?

2012-01-30 Thread Sander
Richard Sharpe wrote (ao):
 I am running on 3.2.1 and have set up btrfs across 11 7200RPM 1TB 3.5
 drives. I told btrfs to mirror metadata and stripe data.

 However, what I would like to know is are there any tuning parameters
 I can tweak to push the numbers up a bit?
 
 I see lots of idle time (80+%) on my 16 cores (probably two by four by two).

Do you have a partition on the disks? On a partitionless disk you don't
have to deal with alignment.

You could try mkfs.btrfs -l 32k -n 32k as per
http://www.mail-archive.com/linux-btrfs@vger.kernel.org/msg14585.html

Depending on the nature of your data, you could try with zlib, lzo or
snappy compression.

I'd say dd is a lousy benchmark tool, btw.

Sander

-- 
Humilis IT Services and Solutions
http://www.humilis.net


Re: Setting options permanently?

2012-01-29 Thread Sander
Hadmut Danisch wrote (ao):
 I currently don't see how to repair this afterwards without removing the
 uncompressed files and writing new ones, which on the other hand spoils
 the memory saving effect of using snapshots instead of copies.

As also mentioned by Li Zefan, you can use defrag. But this will indeed
not work nicely with snapshots. And you need more free space than the
largest file on the filesystem.

find / -xdev -execdir btrfs filesystem defrag -czlib {} +

Sander

-- 
Humilis IT Services and Solutions
http://www.humilis.net


Re: ENOSPC on file deletion with 3.1.6

2012-01-03 Thread Sander
Arie Peterson wrote (ao):
 After upgrading my kernel from 2.6.38 (which has worked fine for months) to 
 3.1.6, I got ENOSPC on recompiling gcc (even though df says there is 16G free 
 of 50G; this is a raid1 setup, so in fact it's 8 of 25).
 
 After this error, I tried to remove the compilation directory (with rm -r): 
 this also gives ENOSPC. I am trying to work around this by first truncating 
 files using echo  $file, but this fails for some files, again with ENOSPC. 
 Also, removal of files is very slow even if it succeeds.
 
 Moreover, any write operation on the file system now fails with ENOSPC.
 
 Reverting to my old kernel does not help: it now shows the same problem.
 
 Is this a known issue? Is there a way to make this file system unstuck? (I 
 have 
 backups, but I'd like to preserve snapshot information if possible.) Should I 
 try upgrading to an even newer kernel?

Maybe your snapshots take up space. Can you show 'btrfs filesystem df /' ?

FWIW, I also had a disk full just a few days ago. Removed all snapshots
and some big files, but to no avail. Likely the background cleanup took
too much time. A reboot fixed this.

Sander

-- 
Humilis IT Services and Solutions
http://www.humilis.net


Re: ENOSPC on file deletion with 3.1.6

2012-01-03 Thread Sander
Arie Peterson wrote (ao):
 On Tuesday 03 January 2012 15:06:43 Sander wrote:
  Maybe your snapshots take up space. Can you show 'btrfs filesystem df /' ?
 
 Data, RAID1: total=22.72GB, used=14.73GB
 Data: total=8.00MB, used=0.00
 System, RAID1: total=8.00MB, used=12.00KB
 System: total=4.00MB, used=0.00
 Metadata, RAID1: total=2.25GB, used=1.88GB
 Metadata: total=8.00MB, used=0.00

Hm, not full.

  FWIW, I also had a disk full just a few days ago. Removed all snapshots
  and some big files, but to no avail. Likely the background cleanup took
  too much time. A reboot fixed this.
 
 OK, I'll keep this in mind. I'm a bit anxious to reboot, because I'm afraid 
 booting will fail if the root file system cannot be written to.

But you did already reboot as you said the old kernel exposed the same
behavior?

Sander

-- 
Humilis IT Services and Solutions
http://www.humilis.net


Re: COW a file from snapshot

2011-12-22 Thread Sander
Chris Samuel wrote (ao):
 On Thu, 22 Dec 2011 07:12:13 PM Roman Kapusta wrote:
  I'm using btrfs for about two years and this is the key feature I'm
  missing all the time. Why is it not part of mainline btrfs already?
 
 Because nobody has written the code to do it yet?
 
 I'm sure the developers would welcome patches for this with open arms!

As posted in this thread by Jerome two days ago:

You would need to apply this patch to your kernel:
http://www.mail-archive.com/linux-btrfs@vger.kernel.org/msg09096.html

Is there any chance this patch gets in linux-next ?
I use this feature all the time and it never broke on me.


Sander

-- 
Humilis IT Services and Solutions
http://www.humilis.net


Re: Extreme slowdown

2011-12-16 Thread Sander
Tobias wrote (ao):
 On Fri, Dec 16, 2011 at 1:49 AM, Tobiastra...@robotech.de  wrote:
 My BTRFS-FS is getting really slow. Reading is ok, writing is
 slow and deleting is horribly slow.
 
 There are many files and many links on the FS.

Do you happen to have (many) snapshots? Are btrfs kernel threads using a
lot of cpu?

Sander

-- 
Humilis IT Services and Solutions
http://www.humilis.net
--
To unsubscribe from this list: send the line unsubscribe linux-btrfs in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html


Re: What is best practice when partitioning a device that holds one or more btr-filesystems

2011-12-15 Thread Sander
dima wrote (ao):
 Maybe just skip partitioning altogether ;)

+1

 format the device to
 btrfs and use subvolumes instead of your usual partitions (some
 /boot restrictions apply). You won't be able to use grub2 though,
 but syslinux will work.

Grub2 has had btrfs support for quite some time now, which I assume you are
aware of. Can't grub2 cope with / in a subvolume, or something like that?

Sander

-- 
Humilis IT Services and Solutions
http://www.humilis.net
--
To unsubscribe from this list: send the line unsubscribe linux-btrfs in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html


Re: What is best practice when partitioning a device that holds one or more btr-filesystems

2011-12-15 Thread Sander
dima wrote (ao):
 format the device to
 btrfs and use subvolumes instead of your usual partitions (some
 /boot restrictions apply). You won't be able to use grub2 though,
 but syslinux will work.
 
 Grub2 has had btrfs support for quite some time now, which I assume you are
 aware of. Can't grub2 cope with / in a subvolume, or something like that?
 
 No, btrfs has nothing to do with this. It is just that grub2 cannot
 be installed to a partition-less drive (at least 1 partition is
 needed), while syslinux can.

Ah, wasn't aware of that. Thanks for the info!

Sander
 
-- 
Humilis IT Services and Solutions
http://www.humilis.net
--
To unsubscribe from this list: send the line unsubscribe linux-btrfs in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html


Re: btrfs encryption problems

2011-11-23 Thread Sander
810d4rk wrote (ao):
 Hi to all, I have a hard drive encrypted using the gnome disk utility
 and it is formatted with btrfs and GUID, the problem started when
 moving a 4gb file to another disk: it stopped with an input/output error I
 think, then when I tried to access it I entered the password to
 decrypt and it now says that I must specify the filesystem type, so it
 doesn't recognize the filesystem, I used 3.0 kernel, meanwhile I
 upgraded to 3.1, I have a backup of important files in other disk the
 problem is that it is also encrypted and it has btrfs so I don't touch
 it for now, can anyone help here?

dd that backup disk to another disk, so you have a backup of your
backup, and work with that.

You can also post the dmesg output you get when you mount the broken
filesystem, and ask the experts whether it might be worth trying the
experimental btrfs.fsck on it.

Sander

FWIW, I back up to gpg-encrypted files stored on ext4 to cope with
regressions in both btrfs and disk encryption.

-- 
Humilis IT Services and Solutions
http://www.humilis.net
--
To unsubscribe from this list: send the line unsubscribe linux-btrfs in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html


Re: Abysmal Performance

2011-06-21 Thread Sander
Henning Rohlfs wrote (ao):
 - space_cache was enabled, but it seemed to make the problem worse.
 It's no longer in the mount options.

space_cache is a one-time mount option: the first mount with it enables the
free space cache on disk. Not supplying it anymore on later mounts has no
effect; the cache stays in use (check dmesg | grep btrfs).
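
Something along these lines, for the record (untested sketch; device and
mount point are just examples):

mount -o space_cache /dev/sda2 /mnt   # first mount with the option writes the cache to disk
umount /mnt
mount /dev/sda2 /mnt                  # later plain mounts keep using it
dmesg | grep -i btrfs                 # the space cache messages show up here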

Sander

-- 
Humilis IT Services and Solutions
http://www.humilis.net
--
To unsubscribe from this list: send the line unsubscribe linux-btrfs in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html


Re: [PATCH] Btrfs: make lzo the default compression scheme

2011-05-27 Thread Sander
Li Zefan wrote (ao):
 As the lzo compression feature has been established for quite
 a while, we are now ready to replace zlib with lzo as the default
 compression scheme.

Please be aware that grub2 currently can't load files from a btrfs with
lzo compression (on debian sid/experimental at least).

Just found out the hard way after a kernel upgrade on a system with no
separate /boot partition :-)

Found this: https://bugs.archlinux.org/task/23901

Sander

-- 
Humilis IT Services and Solutions
http://www.humilis.net
--
To unsubscribe from this list: send the line unsubscribe linux-btrfs in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html


Re: Cannot Deinstall a Debian Package

2011-05-06 Thread Sander
Chris Mason wrote (ao):
 I'm happy to patch up bugs in the FS (or point you to newer
 kernels that have them fixed) but at this point we don't have enough
 info to say if it is an FS problem or a debian package problem.
 
 Perhaps if you ran it under strace?
 
 Other distros don't have problems with btrfs on /, so somehow this is
 specific to debian's setup.

I believe this is fixed in Debian testing/unstable:
http://packages.debian.org/changelogs/pool/main/g/grub2/grub2_1.99~rc1-13/changelog

grub2 (1.99~20110106-1) experimental; urgency=low

   * New Bazaar snapshot.
 - Check that named RAID array devices exist before using them
   (closes: #606035).
 - Clear terminfo output on initialisation (closes: #569678).
 - Fix grub-probe when btrfs is on / without a separate /boot.


Sander

-- 
Humilis IT Services and Solutions
http://www.humilis.net
--
To unsubscribe from this list: send the line unsubscribe linux-btrfs in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html


Re: Cannot Deinstall a Debian Package

2011-05-06 Thread Sander
cac...@quantum-sci.com wrote (ao):
 On Thursday 5 May, 2011 23:33:33 Sander wrote:
  Can you do:
  
  echo true > /var/lib/dpkg/info/grub-installer.postinst
  
  and try again?
 
 At some point somehow grub-pc apparently got installed, even with the
 script failure.  So I tried my dist-upgrade again, and seems to have
 completed almost 400 packages, but three still fail: Errors were
 encountered while processing:
  linux-image-2.6.38-2-amd64
  grub-pc
  linux-image-2.6-amd64
 E: Sub-process /usr/bin/dpkg returned an error code (1)

Can you post the error?

Sander

-- 
Humilis IT Services and Solutions
http://www.humilis.net
--
To unsubscribe from this list: send the line unsubscribe linux-btrfs in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html


Re: Cannot Deinstall a Debian Package

2011-05-06 Thread Sander
cac...@quantum-sci.com wrote (ao):
 On Friday 6 May, 2011 05:20:28 Sander wrote:
  Can you post the error?

 Do you want to continue [Y/n]? 
 Setting up linux-image-2.6.38-2-amd64 (2.6.38-3) ...

 /usr/sbin/grub-probe: error: cannot stat `/dev/root'.
 run-parts: /etc/kernel/postinst.d/zz-update-grub exited with return code 1
 Failed to process /etc/kernel/postinst.d at 
 /var/lib/dpkg/info/linux-image-2.6.38-2-amd64.postinst line 801, STDIN line 
 7.
 dpkg: error processing linux-image-2.6.38-2-amd64 (--configure):
  subprocess installed post-installation script returned error exit status 9
 configured to not write apport reports
   Setting up grub-pc (1.99~rc1-13) ...
 grub-probe: error: cannot stat `/dev/root'.
 Installation finished. No error reported.
 /usr/sbin/grub-probe: error: cannot stat `/dev/root'.

Can you try:

dpkg -i /var/cache/apt/archives/grub-pc_1.99~rc1-13_amd64.deb
apt-get dist-upgrade

If that (second step) doesn't work:

echo true > /var/lib/dpkg/info/grub-installer.postinst
apt-get dist-upgrade

grub-install /dev/sda
update-grub

Sander

-- 
Humilis IT Services and Solutions
http://www.humilis.net
--
To unsubscribe from this list: send the line unsubscribe linux-btrfs in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html


Re: Cannot Deinstall a Debian Package

2011-05-06 Thread Sander
cac...@quantum-sci.com wrote (ao):
 # dpkg -i /var/cache/apt/archives/grub-pc_1.99~rc1-13_amd64.deb
 (Reading database ... 135273 files and directories currently installed.)
 Preparing to replace grub-pc 1.99~rc1-13 (using 
 .../grub-pc_1.99~rc1-13_amd64.deb) ...
 Unpacking replacement grub-pc ...
 Setting up grub-pc (1.99~rc1-13) ...
 grub-probe: error: cannot stat `/dev/root'.

Hm. Just do cp /bin/true /usr/sbin/grub-probe

Sander

-- 
Humilis IT Services and Solutions
http://www.humilis.net
--
To unsubscribe from this list: send the line unsubscribe linux-btrfs in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html


Re: Cannot Deinstall a Debian Package

2011-05-06 Thread Sander
cac...@quantum-sci.com wrote (ao):
  Wow.  I was very nearly completely screwed.  I went ahead and
  rebooted, but grub.cfg was not set up at all.  I had no way to run
  update-grub on that root, and so tried manually filling in the
  missing parameters.  That didn't work, probably through the
  obfuscation of UUIDs I couldn't determine what was really going on.
  What a terrible idea the way they were implemented, UUIDs.  Why not
  put the current device assignment somewhere in the number?
  Terrible.

That would kinda defeat the purpose :p

  I ended up copying an old grub.cfg from a backup, and that got me at
  least booted, though with lots of grub errors.  Now I am at a loss.

What errors ..

  I don't understand this.  grub-update at least -pretended- to work
  before I rebooted.  I am still in shock.  MUST get some actual work
  done today, rather than bit-twiddling.  MUST try and make a living.

Dude, really ..

Sander

-- 
Humilis IT Services and Solutions
http://www.humilis.net
--
To unsubscribe from this list: send the line unsubscribe linux-btrfs in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html


Re: Cannot Deinstall a Debian Package

2011-05-04 Thread Sander
cac...@quantum-sci.com wrote (ao):
 I would be happy to upgrade grub, but the package management system is
 jammed because of this.

Put an exit on top of /etc/kernel/postrm.d/zz-update-grub and try again.

Install grub-pc 1.99~rc1-13 from Sid.

http://packages.debian.org/changelogs/pool/main/g/grub2/grub2_1.99~rc1-13/changelog
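
Roughly like this, if it helps (untested sketch; remove the early exit
again once the upgrade went through):

sed -i '1a exit 0' /etc/kernel/postrm.d/zz-update-grub   # early exit right after the shebang
# ... retry the failing dpkg/apt command here ...
sed -i '2d' /etc/kernel/postrm.d/zz-update-grub          # take the early exit out again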

Sander

-- 
Humilis IT Services and Solutions
http://www.humilis.net
--
To unsubscribe from this list: send the line unsubscribe linux-btrfs in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html


Re: SSD optimizations

2010-12-12 Thread Sander
Gordan Bobic wrote (ao):
 On 12/12/2010 17:24, Paddy Steed wrote:
 In a few weeks parts for my new computer will be arriving. The storage
 will be a 128GB SSD. A few weeks after that I will order three large
 disks for a RAID array. I understand that BTRFS RAID 5 support will be
 available shortly. What is the best possible way for me to get the
 highest performance out of this setup. I know of the option to optimize
 for SSD's
 
 BTRFS is hardly the best option for SSDs. I typically use ext4
 without a journal on SSDs, or ext2 if that is not available.
 Journalling causes more writes to hit the disk, which wears out
 flash faster. Plus, SSDs typically have much slower writes than
 reads, so avoiding writes is a good thing.

Gordan, what you wrote here is so wrong I don't even know where to begin.

You'd better google a bit on the subject (ssd, and btrfs on ssd), as much
has been written about it already.

Sander

-- 
Humilis IT Services and Solutions
http://www.humilis.net
--
To unsubscribe from this list: send the line unsubscribe linux-btrfs in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html


Re: [PATCH v2 2/2] Cancel filesystem balance.

2010-11-12 Thread Sander
Chris Samuel wrote (ao):
 On 12/11/10 12:33, Li Zefan wrote:
 
  Is there any blocker that prevents us from canceling balance
  by just Ctrl+C ?
 
 Given that there's been at least 1 report of it taking 12 hours
 to balance a non-trivial amount of data I suspect putting this
 operation into the background by default and having the cancel
 option might be a better plan.
 
 Thoughts ?

My humble opinion: I very much like the way mdadm works, with the
progress bar in /proc/mdstat when an array is rebuilding, for example.

Sander

-- 
Humilis IT Services and Solutions
http://www.humilis.net
--
To unsubscribe from this list: send the line unsubscribe linux-btrfs in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html


Re: BTRFS SSD

2010-09-30 Thread Sander
Yuehai Xu wrote (ao):
 So, is it a bottleneck in the case of SSD since the cost for over
 write is very high? For every write, I think the superblocks should be
 overwritten, it might be much more frequent than other common blocks
 in SSD, even though SSD will do wear leveling inside by its FTL.

The FTL will make sure the write cycles are evenly divided among the
physical blocks, regardless of how often you overwrite a single spot on
the fs.

 What I current know is that for Intel x25-V SSD, the write throughput
 of BTRFS is almost 80% less than the one of EXT3 in the case of
 PostMark. This really confuses me.

Can you show the script you use to test this, provide some info
regarding your setup, and show the numbers you see?
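
For comparison, something like this run on both filesystems would already
say a lot (rough sketch only, fio is just one way to approximate a
PostMark-style small-file load; path and sizes are made up):

fio --name=smallfiles --directory=/mnt/test --ioengine=psync --rw=randwrite \
    --bs=4k --size=1g --numjobs=4 --fsync=1 --group_reporting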

Sander

-- 
Humilis IT Services and Solutions
http://www.humilis.net
--
To unsubscribe from this list: send the line unsubscribe linux-btrfs in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html


Re: [PATCH 0/3] Btrfs: save free space cache to the disk

2010-09-20 Thread Sander
Josef Bacik wrote (ao):
 This patch series introduces the ability for btrfs to store the free space 
 cache
 ondisk to make the caching of a block group much quicker.  Previously we had 
 to
 search the entire extent-tree to look for gaps everytime we wanted to allocate
 in a block group.  This approach instead dumps all of the free space cache to
 disk for every dirtied block group each time we commit the transaction.  This 
 is
 a disk format change, but in order to use the feature you will have to mount
 with -o space_cache, and then from then on you won't be able to use old 
 kernels
 with your filesystem.

Will this go into a future version of btrfs?

If so, would it make sense to include other changes that would require a
format change?

Sander

-- 
Humilis IT Services and Solutions
http://www.humilis.net
--
To unsubscribe from this list: send the line unsubscribe linux-btrfs in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html


Re: Status of BTRFS

2010-07-19 Thread Sander
Ken D'Ambrosio wrote (ao):
  Edward Ned Harvey wrote (ao):
   Is it included in any distributions yet?
  
  Yes, Fedora is one of the releases that has officially supported it
  for a while now.
  been implemented for Arch Linux, so you might see btrfs being an
  option for that in the next version of the installer :-)
 
 I also believe that Ubuntu 10.10 is slated to have it; I think it's in the
 current alpha, though based on my reading, there are still some rough
 edges.

Btrfs is in the Ubuntu 10.10 alpha, and in my experience it installs and
works fine.

Sander

-- 
Humilis IT Services and Solutions
http://www.humilis.net
--
To unsubscribe from this list: send the line unsubscribe linux-btrfs in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html


Re: slow deletion of files

2010-07-13 Thread Sander
Clemens Eisserer wrote (ao):
 Another reason I moved away was btrfs corrupted, and btrfsck is still
 not able to repair it.
 I really like btrfs but in my opinion it has still a long road to go
 and declaring it stable in 2.6.35 is quite optimistic at best.

Please allow me to report in favor of btrfs.

I have been using btrfs since February 2009 on my workstation, since
September 2009 on my home server, and since December 2009 on several ARM
computers.
Recently I've started to use btrfs on production servers.

Btrfs has not let me down yet. I do make hourly incremental backups and
keep a close eye on the btrfs mailing list though.

Sander

-- 
Humilis IT Services and Solutions
http://www.humilis.net
--
To unsubscribe from this list: send the line unsubscribe linux-btrfs in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html


Re: Still ENOSPC problems with 2.6.35-rc3

2010-06-17 Thread Sander
Yan, Zheng  wrote (ao):
 what will happen if you keep deleting files using 2.6.35?

From the list: Things you don't want your fs developer to say ;-)

PS, I am a very happy btrfs user on several systems (including
ARM based OpenRD-Client and SheevaPlug, and large 64bit servers), so no
flame intended ;-)

Sander

-- 
Humilis IT Services and Solutions
http://www.humilis.net
--
To unsubscribe from this list: send the line unsubscribe linux-btrfs in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html


Re: SSD Optimizations

2010-03-11 Thread Sander
Stephan von Krawczynski wrote (ao):
 Honestly I would just drop the idea of an SSD option simply because the
 vendors implement all kinds of neat strategies in their devices. So in the end
 you cannot really tell if the option does something constructive and not
 destructive in combination with a SSD controller.

My understanding of the ssd mount option is also that with it the fs doesn't
try to do all kinds of smart (and potentially expensive) things which make
sense for rotating media, to reduce seeks and the like.

Sander

-- 
Humilis IT Services and Solutions
http://www.humilis.net
--
To unsubscribe from this list: send the line unsubscribe linux-btrfs in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html


Re: SSD Optimizations

2010-03-10 Thread Sander
Hello Gordan,

Gordan Bobic wrote (ao):
 Mike Fedyk wrote:
 On Wed, Mar 10, 2010 at 11:49 AM, Gordan Bobic gor...@bobich.net wrote:
 Are there options available comparable to ext2/ext3 to help reduce
 wear and improve performance?

With SSDs you don't have to worry about wear.

 And while I appreciate hopeful remarks along the lines of I think
 you'll get more out of btrfs, I am really after specifics of what
 the ssd mount option does, and what features comparable to the
 optimizations that can be done with ext2/3/4 (e.g. the mentioned
 stripe-width option) are available to get the best possible
 alignment of data and metadata to increase both performance and life
 expectancy of a SSD.

Alignment is about the partition, not the fs, and is thus taken care of
with fdisk and the like.

If you don't create a partition, the fs is aligned with the SSD.
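
For completeness, something like this (untested sketch, parted syntax from
memory, device name made up):

parted -s -a optimal /dev/sdb mklabel gpt mkpart primary 1MiB 100%   # start at 1MiB so the partition is aligned
parted -s /dev/sdb align-check optimal 1                             # should report that partition 1 is aligned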

 Also, for drives that don't support TRIM, is there a way to make the
 FS apply aggressive re-use of erased space (in order to help the
 drive's internal wear-leveling)?

TRIM has nothing to do with wear-leveling (although it helps reduce
wear).
TRIM lets the OS tell the disk which blocks are not in use anymore, and
thus don't have to be copied during a rewrite of the blocks.
Wear-leveling is the SSD making sure all blocks are more or less equally
written to avoid continuous load on the same blocks.
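
FWIW, the two usual ways to pass those hints down to the device (untested
sketch; fstrim needs a recent kernel and util-linux):

mount -o ssd,discard /dev/sda2 /mnt   # online: discard blocks as files are deleted
fstrim -v /mnt                        # batch: trim all free space in one go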

Sander

-- 
Humilis IT Services and Solutions
http://www.humilis.net
--
To unsubscribe from this list: send the line unsubscribe linux-btrfs in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html


Re: btrfs kernel oops and hot storage removing

2010-01-29 Thread Sander
Hello Maksim,

Maksim 'max_posedon' Melnikau wrote (ao):
 I'm running btrfs on my sheevaplug on storage attached via usb. I use
 multi-device configuration for testing (use different partitions for
 emulate this). I catched kernel oops on hot removing storage (without
 umount/etc). First one was one device decide reboot themselves, second
 when I manually turned it off and on.
 
 Basically I don't expect correct btrfs working in such situation, but
 as I know it designed to work correctly in raid configuration, so, may
 be somebody expect better behavior even in such situation.

You pull the entire raid array when removing the USB storage, as the
Sheevaplug has only one USB port. That will not work and it is not the
fault of btrfs :-)

With kind regards, Sander

-- 
Humilis IT Services and Solutions
http://www.humilis.net
--
To unsubscribe from this list: send the line unsubscribe linux-btrfs in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html


Re: Unable to handle kernel NULL pointer dereference at virtual address 00000008

2010-01-21 Thread Sander
Hello Tomasz,

Tomasz Torcz wrote (ao):
 On Thu, Jan 21, 2010 at 07:07:10AM +0100, Sander wrote:
  [26678.568532] [c026c294] (btrfs_get_acl+0x60/0x250) from [c026c494] 
  (btrfs_xattr_get_acl+0x10/0x70)
  [26678.577802] [c026c494] (btrfs_xattr_get_acl+0x10/0x70) from 
  [c019bb20] (generic_getxattr+0x78/0x7c)
  [26678.587243] [c019bb20] (generic_getxattr+0x78/0x7c) from [c019c01c] 
  (vfs_getxattr+0x58/0x5c)
  [26678.596074] [c019c01c] (vfs_getxattr+0x58/0x5c) from [c019c0c4] 
  (getxattr+0xa4/0x11c)
  [26678.604298] [c019c0c4] (getxattr+0xa4/0x11c) from [c019c220] 
  (sys_getxattr+0x44/0x58)
  [26678.612525] [c019c220] (sys_getxattr+0x44/0x58) from [c0122e20] 
  (ret_fast_syscall+0x0/0x28)
 
   Although your oops is in btrfs_get_acl(), you may need similar fix
 as done for btrfs_set_acl() in this commit:
 
   
 http://git.kernel.org/?p=linux/kernel/git/mason/btrfs-unstable.git;a=commitdiff;h=a9cc71a60c29a09174bee2fcef8f924c529fd4b7

Thanks, that makes sense.

Unfortunately I'm no kernel hacker so I can't provide a patch. I'm more
than happy to test patches of course.

With kind regards, Sander

-- 
Humilis IT Services and Solutions
http://www.humilis.net
--
To unsubscribe from this list: send the line unsubscribe linux-btrfs in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html


Unable to handle kernel NULL pointer dereference at virtual address 00000008

2010-01-20 Thread Sander
Hello,

I get the following error if I edit fstab with vi on a fresh btrfs
filesystem. vi Segfaults at saving the file.

# mkfs.btrfs /dev/sda2
# mount /mnt/
# cd /
# find . -xdev | cpio -vdump /mnt
# vi /mnt/etc/fstab
Segmentation fault

This also happens with a 'cp -a':

# cd /mnt/
# cp etc/fstab tmp/
# cp etc/fstab tmp/
# cp -a etc/fstab tmp/
Segmentation fault

And 'ls -l'

# cd /mnt/
# ls tmp/
bla  fstab  network.configured
# ls -l tmp/
Segmentation fault

Kernel config:
CONFIG_BTRFS_FS=y
CONFIG_BTRFS_FS_POSIX_ACL=y

(I'll try without ACL now, but it takes about an hour to compile the
kernel).

This kernel is a patched 2.6.33-rc1 from
git://repo.or.cz/linux-2.6/linux-2.6-openrd.git

This error also happens if I remove linux-2.6-openrd/fs/btrfs/ and copy
btrfs-unstable/fs/btrfs/ (latest as of yesterday and this morning). I'm
not sure if that is allowed though.

# mkfs.btrfs -V
mkfs.btrfs, part of Btrfs Btrfs v0.19

The system is Debian Sid on an Openrd-client (ARM). The ssd is an Intel
X25-E.

I didn't find a similar bugreport.

With kind regards, Sander


[26055.036656] device fsid 904e5c0206a9b9d1-f00b47d7270b119a devid 1 transid 7 
/dev/sda2
[26055.045253] btrfs: use ssd allocation scheme
[26678.340511] Unable to handle kernel NULL pointer dereference at virtual 
address 0008
[26678.348648] pgd = cebb8000
[26678.351367] [0008] *pgd=03127031, *pte=, *ppte=
[26678.357691] Internal error: Oops: 17 [#1]
[26678.361716] last sysfs file: /sys/kernel/uevent_helper
[26678.366878] Modules linked in:
[26678.369950] CPU: 0Not tainted  (2.6.33-rc1 #1)
[26678.374768] PC is at btrfs_get_acl+0x60/0x250
[26678.379142] LR is at btrfs_xattr_get_acl+0x10/0x70
[26678.383956] pc : [c026c294]lr : [c026c494]psr: 2093
[26678.383962] sp : c5199e08  ip : c04fc87c  fp : bec451d4
[26678.395498] r10: 00186058  r9 : c5198000  r8 : c0496b5a
[26678.400749] r7 :   r6 : 8000  r5 : ce0ef600  r4 : 0008
[26678.407307] r3 : 2013  r2 : 2093  r1 : 8000  r0 : 0008
[26678.413864] Flags: nzCv  IRQs off  FIQs on  Mode SVC_32  ISA ARM  Segment 
user
[26678.421119] Control: 0005397f  Table: 0ebb8000  DAC: 0015
[26678.426891] Process vi (pid: 1033, stack limit = 0xc5198270)
[26678.432576] Stack: (0xc5199e08 to 0xc519a000)
[26678.436958] 9e00:   c5199e70 db119840 0084  
c0496b5a c5198000
[26678.445182] 9e20: 00186058 c026c494 c5199e70 0017 c04fc838 c019bb20 
 c5199e70
[26678.453405] 9e40: ce0ef600 c98734c8 db119840 0084 c5199e70 c019c01c 
0084 db119840
[26678.461628] 9e60: bec450c0 ce0ef600 0017 c019c0c4 74737973 702e6d65 
7869736f 6c63615f
[26678.469851] 9e80: 6363615f 00737365 c5199f18  0001 c98734c8 
 c018e1e4
[26678.478073] 9ea0:  0001 0371 117ee576 0005 c3ba3009 
dfc4a280 ce0ef600
[26678.486297] 9ec0:  c5199f18 c5199ee8 c5198000 c3ba3000  
c5199f18 c5198000
[26678.494520] 9ee0: c3ba3000 c018e4cc c5199f2c c3ba3000  0001 
c5199f18 c3ba3000
[26678.502743] 9f00:  c018e57c c04f26f0 c3ba3000 c3ba3000 c018eeb0 
dfc4a280 ce0ef600
[26678.510966] 9f20: c5199f30 c5199f48 bec45170  df412400 0001 
0001 
[26678.519189] 9f40: 00179424 c01883a8 0001f9da  0010 c51981a4 
c04f240c d5b20900
[26678.527411] 9f60: 0012 c98734c8 c5199f88 0002 c5199f88 0084 
bec450c0 4012f994
[26678.535635] 9f80:  c019c220 dfc4a280 ce0ef600 0084 bec450c0 
4012c7e8 00e5
[26678.543858] 9fa0: c0122fa4 c0122e20 0084 bec450c0 00186058 4012f994 
bec450c0 0084
[26678.552081] 9fc0: 0084 bec450c0 4012c7e8 00e5 0001 00179444 
00186058 bec451d4
[26678.560304] 9fe0: bec450b0 bec450a0 40274ecc 401fb5d0 6010 00186058 
d5abcade ace1046d
[26678.568532] [c026c294] (btrfs_get_acl+0x60/0x250) from [c026c494] 
(btrfs_xattr_get_acl+0x10/0x70)
[26678.577802] [c026c494] (btrfs_xattr_get_acl+0x10/0x70) from [c019bb20] 
(generic_getxattr+0x78/0x7c)
[26678.587243] [c019bb20] (generic_getxattr+0x78/0x7c) from [c019c01c] 
(vfs_getxattr+0x58/0x5c)
[26678.596074] [c019c01c] (vfs_getxattr+0x58/0x5c) from [c019c0c4] 
(getxattr+0xa4/0x11c)
[26678.604298] [c019c0c4] (getxattr+0xa4/0x11c) from [c019c220] 
(sys_getxattr+0x44/0x58)
[26678.612525] [c019c220] (sys_getxattr+0x44/0x58) from [c0122e20] 
(ret_fast_syscall+0x0/0x28)
[26678.621265] Code: 0a77 e10f3000 e3832080 e121f002 (e5942000)
[26678.627621] ---[ end trace a16c1078eb68be38 ]---


-- 
Humilis IT Services and Solutions
http://www.humilis.net
--
To unsubscribe from this list: send the line unsubscribe linux-btrfs in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html


Re: Unable to handle kernel NULL pointer dereference at virtual address 00000008

2010-01-20 Thread Sander
Sander wrote (ao):
 I get the following error if I edit fstab with vi on a fresh btrfs
 filesystem. vi Segfaults at saving the file.
 
 # mkfs.btrfs /dev/sda2
 # mount /mnt/
 # cd /
 # find . -xdev | cpio -vdump /mnt
 # vi /mnt/etc/fstab
 Segmentation fault
 
 This also happens with a 'cp -a':
 
 # cd /mnt/
 # cp etc/fstab tmp/
 # cp etc/fstab tmp/
 # cp -a etc/fstab tmp/
 Segmentation fault
 
 And 'ls -l'
 
 # cd /mnt/
 # ls tmp/
 bla  fstab  network.configured
 # ls -l tmp/
 Segmentation fault
 
 Kernel config:
 CONFIG_BTRFS_FS=y
 CONFIG_BTRFS_FS_POSIX_ACL=y
 
 (I'll try without ACL now, but takes about an hour to compile the
 kernel).

Without CONFIG_BTRFS_FS_POSIX_ACL I can't reproduce the segfaults.

With kind regards, Sander


 This kernel is a patched 2.6.33-rc1 from
 git://repo.or.cz/linux-2.6/linux-2.6-openrd.git
 
 This error also happens if I remove linux-2.6-openrd/fs/btrfs/ and copy
 btrfs-unstable/fs/btrfs/ (latetst as of yesterday and this morning). I'm
 not sure if that is allowed though.
 
 # mkfs.btrfs -V
 mkfs.btrfs, part of Btrfs Btrfs v0.19
 
 The system is Debian Sid on an Openrd-client (ARM). The ssd is an Intel
 X25-E.
 
 I didn't find a similar bugreport.
 
   With kind regards, Sander
 
 
 [26055.036656] device fsid 904e5c0206a9b9d1-f00b47d7270b119a devid 1 transid 
 7 /dev/sda2
 [26055.045253] btrfs: use ssd allocation scheme
 [26678.340511] Unable to handle kernel NULL pointer dereference at virtual 
 address 0008
 [26678.348648] pgd = cebb8000
 [26678.351367] [0008] *pgd=03127031, *pte=, *ppte=
 [26678.357691] Internal error: Oops: 17 [#1]
 [26678.361716] last sysfs file: /sys/kernel/uevent_helper
 [26678.366878] Modules linked in:
 [26678.369950] CPU: 0Not tainted  (2.6.33-rc1 #1)
 [26678.374768] PC is at btrfs_get_acl+0x60/0x250
 [26678.379142] LR is at btrfs_xattr_get_acl+0x10/0x70
 [26678.383956] pc : [c026c294]lr : [c026c494]psr: 2093
 [26678.383962] sp : c5199e08  ip : c04fc87c  fp : bec451d4
 [26678.395498] r10: 00186058  r9 : c5198000  r8 : c0496b5a
 [26678.400749] r7 :   r6 : 8000  r5 : ce0ef600  r4 : 0008
 [26678.407307] r3 : 2013  r2 : 2093  r1 : 8000  r0 : 0008
 [26678.413864] Flags: nzCv  IRQs off  FIQs on  Mode SVC_32  ISA ARM  Segment 
 user
 [26678.421119] Control: 0005397f  Table: 0ebb8000  DAC: 0015
 [26678.426891] Process vi (pid: 1033, stack limit = 0xc5198270)
 [26678.432576] Stack: (0xc5199e08 to 0xc519a000)
 [26678.436958] 9e00:   c5199e70 db119840 0084  
 c0496b5a c5198000
 [26678.445182] 9e20: 00186058 c026c494 c5199e70 0017 c04fc838 c019bb20 
  c5199e70
 [26678.453405] 9e40: ce0ef600 c98734c8 db119840 0084 c5199e70 c019c01c 
 0084 db119840
 [26678.461628] 9e60: bec450c0 ce0ef600 0017 c019c0c4 74737973 702e6d65 
 7869736f 6c63615f
 [26678.469851] 9e80: 6363615f 00737365 c5199f18  0001 c98734c8 
  c018e1e4
 [26678.478073] 9ea0:  0001 0371 117ee576 0005 c3ba3009 
 dfc4a280 ce0ef600
 [26678.486297] 9ec0:  c5199f18 c5199ee8 c5198000 c3ba3000  
 c5199f18 c5198000
 [26678.494520] 9ee0: c3ba3000 c018e4cc c5199f2c c3ba3000  0001 
 c5199f18 c3ba3000
 [26678.502743] 9f00:  c018e57c c04f26f0 c3ba3000 c3ba3000 c018eeb0 
 dfc4a280 ce0ef600
 [26678.510966] 9f20: c5199f30 c5199f48 bec45170  df412400 0001 
 0001 
 [26678.519189] 9f40: 00179424 c01883a8 0001f9da  0010 c51981a4 
 c04f240c d5b20900
 [26678.527411] 9f60: 0012 c98734c8 c5199f88 0002 c5199f88 0084 
 bec450c0 4012f994
 [26678.535635] 9f80:  c019c220 dfc4a280 ce0ef600 0084 bec450c0 
 4012c7e8 00e5
 [26678.543858] 9fa0: c0122fa4 c0122e20 0084 bec450c0 00186058 4012f994 
 bec450c0 0084
 [26678.552081] 9fc0: 0084 bec450c0 4012c7e8 00e5 0001 00179444 
 00186058 bec451d4
 [26678.560304] 9fe0: bec450b0 bec450a0 40274ecc 401fb5d0 6010 00186058 
 d5abcade ace1046d
 [26678.568532] [c026c294] (btrfs_get_acl+0x60/0x250) from [c026c494] 
 (btrfs_xattr_get_acl+0x10/0x70)
 [26678.577802] [c026c494] (btrfs_xattr_get_acl+0x10/0x70) from [c019bb20] 
 (generic_getxattr+0x78/0x7c)
 [26678.587243] [c019bb20] (generic_getxattr+0x78/0x7c) from [c019c01c] 
 (vfs_getxattr+0x58/0x5c)
 [26678.596074] [c019c01c] (vfs_getxattr+0x58/0x5c) from [c019c0c4] 
 (getxattr+0xa4/0x11c)
 [26678.604298] [c019c0c4] (getxattr+0xa4/0x11c) from [c019c220] 
 (sys_getxattr+0x44/0x58)
 [26678.612525] [c019c220] (sys_getxattr+0x44/0x58) from [c0122e20] 
 (ret_fast_syscall+0x0/0x28)
 [26678.621265] Code: 0a77 e10f3000 e3832080 e121f002 (e5942000)
 [26678.627621] ---[ end trace a16c1078eb68be38 ]---

-- 
Humilis IT Services and Solutions
http://www.humilis.net
--
To unsubscribe from this list: send the line unsubscribe linux-btrfs in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html


Re: worse than expected compression ratios with -o compress

2010-01-17 Thread Sander
Hello Jim,

Jim Faulkner wrote (ao):
 To contrast, rzip can compress a database dump of this data to
 around 7% of its original size.  This is an older database dump,
 which is why it is smaller.  Before:
 -rw--- 1 root root  69G 2010-01-15 14:55 mysqlurdbackup.2010-01-15
 and after:
 -rw--- 1 root root 5.2G 2010-01-16 05:34 mysqlurdbackup.2010-01-15.rz
 
 Of course it took 15 hours to compress the data, and btrfs wouldn't
 be able to use rzip for compression anyway.

The difference between a live MySQL database and a dump of that database
is that the dump is text, while the database files are binary.

A fair comparison would be to compress the actual database files.
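
Something like this would give a fairer number (rough sketch, assuming the
default MySQL datadir):

tar -C /var/lib -c mysql | gzip -c | wc -c    # compressed size of the binary database files
gzip -c mysqlurdbackup.2010-01-15 | wc -c     # compressed size of the text dump, for comparison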

With kind regards, Sander

-- 
Humilis IT Services and Solutions
http://www.humilis.net
--
To unsubscribe from this list: send the line unsubscribe linux-btrfs in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html


Re: btrfs volume mounts and dies (was Re: Segfault in btrfsck)

2010-01-07 Thread Sander
Hello Steve,

Steve Freitas wrote (ao):
 Alright, I'll trash it and start over with a different drive.

At the risk of stating the obvious: you could do a few destructive
badblocks runs on that disk to see if SMART keeps adding entries to the
bad blocks list.
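
Something like this, for example (destructive, wipes the whole disk; sketch
only, device name made up):

smartctl -A /dev/sdb | grep -i -e reallocated -e pending   # note the counters before
badblocks -wsv /dev/sdb                                    # destructive write/read pass over the whole disk
smartctl -A /dev/sdb | grep -i -e reallocated -e pending   # see whether the counters keep growing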

With kind regards, Sander

-- 
Humilis IT Services and Solutions
http://www.humilis.net
--
To unsubscribe from this list: send the line unsubscribe linux-btrfs in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html


Re: btrfs volume mounts and dies (was Re: Segfault in btrfsck)

2010-01-06 Thread Sander
Hello Steve,

Steve Freitas wrote (ao):
 Should I take it by the lack of list response that I should just flush
 this partition down the toilet and start over? Or is everybody either
 flummoxed or on vacation?

I don't have your original mail, but I think I remember you mentioned a
lot of bad sectors on that disk reported by SMART.

If that is indeed the case it might be difficult for the people who might
be able to help you, to help you.

Please ignore me if I confused your mail with another.

With kind regards, Sander

-- 
Humilis IT Services and Solutions
http://www.humilis.net
--
To unsubscribe from this list: send the line unsubscribe linux-btrfs in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html


Re: New idea about RAID and SSD

2009-09-01 Thread Sander
Hello Massimo,

Massimo Maggi wrote (ao):
 SSDs have low latency but a high price per GB,
 Traditional hard disks have high latency, but high sequential read/write
 speed and low price per GB.
 Is it possible to use an SSD for metadata, which requires many seeks and is
 relatively small, in a special RAID mode with a traditional hard disk
 for the extents of the real data?
 A cheap but performant SSD (maybe 32 GB) + a big and fast HD (maybe 1.5
 TB, or two in RAID0 - 3TB ), wouldn't create an array much cheaper than
 a ssd-only array of the same size, and much faster (in
 not-only-sequential workload)  than one or two traditional HDs in RAID0?
 Would it work?

If you are talking RAID0 (i.e. no redundancy), you could RAID0 one or
several traditional disks and use the SSD as an external journal device.
That would be ext3/4 only, btw.
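
Roughly like this (untested sketch, device names made up):

mke2fs -O journal_dev /dev/sdc1          # turn a partition on the SSD into an external journal
mkfs.ext4 -J device=/dev/sdc1 /dev/md0   # create the data fs on the RAID0 and point it at that journal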

With mdadm you could create a RAID1 and use --write-mostly:

   -W, --write-mostly
  subsequent devices listed in a --build, --create, or --add  com-
  mand will be flagged as 'write-mostly'.  This is valid for RAID1
  only and means that the 'md'  driver  will  avoid  reading  from
  these devices if at all possible.  This can be useful if mirror-
  ing over a slow link.

Where the 'slow link' would be the traditional disk. But this is RAID1 and
doesn't help in your case (I couldn't resist mentioning it though :-)
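
For reference, it would look like this (sketch, device names made up; the
SSD comes first, the traditional disk is flagged write-mostly):

mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb1 --write-mostly /dev/sdc1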

Sander

-- 
Humilis IT Services and Solutions
http://www.humilis.net
--
To unsubscribe from this list: send the line unsubscribe linux-btrfs in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html


Offtopic: Which SAS / SATA HBA do you recommend?

2009-07-20 Thread Sander
Hi all,

Sorry for the offtopic question. I hope though that others on this list
or reading the archive find the answers useful too.

It seems the Adaptec 1405 4-port SAS HBA I bought only works with RHEL
and SuSE through a closed source driver, and thus is quite useless :-(
I was stupid enough to think "Works with RHEL and SuSE" meant "Certified
for RHEL and SuSE, but the driver is in the mainstream kernel" ..

What SAS (or SATA) controller do you use or recommend in combination
with a SSD and BTRFS?

I'm looking for a non-RAID controller with four or eight ports and of
course full Linux support.

Currently I have an 8x 2.5" SAS/SATA chassis and two Intel X25-E 64GB
drives which will get a BTRFS RAID0 filesystem.

I already have a Marvell MV88SX6081 eight port SATA controller, but it
has no free ports and is pretty old.

With kind regards, Sander

-- 
Humilis IT Services and Solutions
http://www.humilis.net
--
To unsubscribe from this list: send the line unsubscribe linux-btrfs in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html


Re: Offtopic: Which SAS / SATA HBA do you recommend?

2009-07-20 Thread Sander
Sébastien Wacquiez wrote (ao):
 Sander wrote:
  What SAS (or SATA) controller do you use or recommend in combination
  with a SSD and BTRFS?
  I'm looking for a non-RAID controller with four or eight ports and
  of course full Linux support.
 
 I don't have any ssd yet, but if you want cheap card without hardware
 raid, you could look at those :
 
 http://www.supermicro.com/products/accessories/addon/AOC-USASLP-L8i.cfm
 http://www.supermicro.com/products/accessories/addon/AOC-USAS-L8i.cfm
 
 You'll need to revert the bracket as they're designed to be in a
 supermicro UIO slot (in fact, they are just standard PCIe cards upside
 down). They use the mptsas driver in Linux which has worked ok for a long
 time, and can be found for ~ $130. It works pretty fast (~ 800 MB/s with
 an 8-drive raid10f2 in my brand new Nehalem server).
 
 You can also find this one  for ~ 120 bucks :
 http://www.supermicro.com/products/accessories/addon/AOC-SASLP-MV8.cfm
 
 No proprietary slot this time, the chipset is supported in 2.6.30 (but I
 haven't tested it yet) via the mvsas drivers.
 
 Hope you find this usefull.

Very useful, much appreciated, thanks!

I think I'll go for a Marvell based controller (again), as Marvell chips
seem to have quite a history when it comes to open source drivers.

No more Adaptec ..

Thanks again Sebastien.

With kind regards, Sander

-- 
Humilis IT Services and Solutions
http://www.humilis.net
--
To unsubscribe from this list: send the line unsubscribe linux-btrfs in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html


Re: Phoronix article slaming BTRFS

2009-06-24 Thread Sander
Mike Ramsey wrote (ao):
 Depends on who you talk to.
 
 http://www.tomshardware.com/news/ocz-ssd-vertex-intel-solid-state,7127.html
 
 OCZ Says Its New Vertex SSD Beats Intel's X25-E
 
 I am not taking sides.  I am just saying that the SSD market is fluid.

Read and write speed specs mean (almost) nothing when it comes to SSDs.

The true performance is shown in heavy, long-running benchmarks. OCZ has
a long history of very badly performing SSD products.

The Intel SSD has set the standard since it came on the market (hence
the reason OCZ mentions the X25-E).

Btw, it's not only benchmarks that show paper specs mean (almost) nothing:
check the OCZ forums and google for real-life performance problems
(stutters mostly) under normal to low load.

Especially small writes kill OCZ SSD performance, although their
products have improved with the latest releases.

With kind regards, Sander

-- 
Humilis IT Services and Solutions
http://www.humilis.net
--
To unsubscribe from this list: send the line unsubscribe linux-btrfs in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html


Re: Phoronix article slaming BTRFS

2009-06-23 Thread Sander
Chris Mason wrote (ao):
 Jens Axboe tried to reproduce the phoronix results on his ocz drive,
 and generally found that each run was slower than the last regardless
 of which mount options were used. This isn't entirely surprising, but
 it did make it very difficult to nail down good or bad performance.

The performance should stabilize within a handful of fills at most, I
believe?

There should be a point where things don't get more complicated for the
controller, I thought.

-- 
Humilis IT Services and Solutions
http://www.humilis.net
--
To unsubscribe from this list: send the line unsubscribe linux-btrfs in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html


Re: Data Deduplication with the help of an online filesystem check

2009-05-06 Thread Sander
Heinz-Josef Claes wrote (ao):
 Am Dienstag, 28. April 2009 19:38:24 schrieb Chris Mason:
  On Tue, 2009-04-28 at 19:34 +0200, Thomas Glanzmann wrote:
   Hello,
  
I wouldn't rely on crc32: it is not a strong hash,
Such deduplication can lead to various problems,
including security ones.
  
   sure thing, did you think of replacing crc32 with sha1 or md5, is this
   even possible (is there enough space reserved so that the change can be
   done without changing the filesystem layout) at the moment with btrfs?
 
  It is possible, there's room in the metadata for about 4k of
  checksum for each 4k of data.  The initial btrfs code used sha256, but
  the real limiting factor is the CPU time used.
 
  -chris
 
 It's not only cpu time, it's also memory. You need 32 byte for each 4k block. 
 It needs to be in RAM for performance reason.

Less so with SSD I would assume.

-- 
Humilis IT Services and Solutions
http://www.humilis.net
--
To unsubscribe from this list: send the line unsubscribe linux-btrfs in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html


Re: btrfs for enterprise raid arrays

2009-04-03 Thread Sander
Dear Erwin,

Erwin van Londen wrote (ao):
 Another thing is that some arrays have the capability to
 thin-provision volumes. In the back-end on the physical layer the
 array configures, let say, a 1 TB volume and virtually provisions 5TB
 to the host. On writes it dynamically allocates more pages in the pool
 up to the 5TB point. Now if for some reason large holes occur on the
 volume, maybe a couple of ISO images that have been deleted, what
 normally happens is just some pointers in the inodes get deleted so
 from an array perspective there is still data on those locations and
 will never release those allocated blocks. New firmware/microcode
 versions are able to reclaim that space if it sees a certain number of
 consecutive zeros and will reclaim that space to the volume pool. Are
 there any thoughts on writing a low-priority thread that zeros out
 those non-used blocks?

An SSD would also benefit from such a feature, as it doesn't need to copy
deleted data when erasing blocks.

The storage could use the ATA/SCSI commands TRIM, UNMAP and DISCARD for
that?

I have one question on thin provisioning: if Windows XP performs a defrag
on a 20GB 'virtual' size LUN with 2GB in actual use, will the volume
grow to 20GB on the storage and never shrink afterwards, while
the client still has only 2GB in use?

This would make thin provisioning on virtual desktops less useful.

Do you have any numbers on the performance impact of thin provisioning?
I can imagine that thin provisioning causes on-storage fragmentation of
disk images, which would kill any OS optimisations like grouping
often-read files.
read files.

With kind regards, Sander

-- 
Humilis IT Services and Solutions
http://www.humilis.net
--
To unsubscribe from this list: send the line unsubscribe linux-btrfs in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html


Re: Bonnie++ run with RAID-1 on a single SSD (2.6.29-rc4-224-g4b6136c)

2009-02-13 Thread Sander
Hi Chris,

Thank you for sharing your numbers.

Chris Samuel wrote (ao):
 For people who might be interested, here is how btrfs performs
 with two partitions on a single SSD drive in a RAID-1 mirror.
 
 This is on a Dell E4200 with Core 2 Duo U9300 (1.2GHz), 2GB RAM
 and a Samsung SSD (128GB Thin uSATA SSD).

MLC SSDs are famous for their write stalls when the disk gets full and
old blocks need to be reused.

Do you experience that too? Or can you test that situation?

On your site you write:

As SSD's are not necessarily as reliable as spinning disk yet for data
integrity ..

I've skimmed the article you link to. I still think SSDs are much more
reliable than spinning disks, especially the high end SLC SSDs.

What is the general opinion on this?

Could you also test without RAID1?
And with the compression mount flag?
And without the ssd mount flag?
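
I.e. something along these lines, as separate runs (sketch, device name
made up):

mkfs.btrfs /dev/sda3               # single device instead of the RAID1 pair
mount -o compress /dev/sda3 /mnt   # a run with compression
mount /dev/sda3 /mnt               # and a run without the ssd mount flag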

 Version 1.03c   --Sequential Output-- --Sequential Input- 
 --Random-
 -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- 
 --Seeks--
 MachineSize K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec 
 %CP
 sys262G   28299  17 18633  12   85702  29  3094  
 18
 --Sequential Create-- Random 
 Create
 -Create-- --Read--- -Delete-- -Create-- --Read--- 
 -Delete--
   files  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec 
 %CP
  16  7513  99 + +++  5140  98  3964  67 + +++  5652  
 99
 sys26,2G,,,28299,17,18633,12,,,85702,29,3093.9,18,16,7513,99,+,+++,5140,98,3964,67,+,+++,5652,99
 
 real3m51.883s
 user0m0.360s
 sys 0m46.099s

I have no experience with Bonnie++, but based on the output it seems you
used a 2GB file while you have 2GB of RAM. Is that a valid test?

Also the test run of only 3 minutes 52 seconds seems way too short.

With kind regards, Sander

-- 
Humilis IT Services and Solutions
http://www.humilis.net
--
To unsubscribe from this list: send the line unsubscribe linux-btrfs in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html