Re: Two identical copies of an image mounted result in changes to both images if only one is modified
On Thu, Jun 20, 2013 at 3:47 PM, Clemens Eisserer linuxhi...@gmail.com wrote: Hi, I've observed a rather strange behaviour while trying to mount two identical copies of the same image to different mount points. Each modification to one image is also performed in the second one. Example: dd if=/dev/sda? of=image1 bs=1M cp image1 image2 mount -o loop image1 m1 mount -o loop image2 m2 touch m2/hello ls -la m1 //will now also include a file called hello What do you get if you unmount BOTH m1 and m2, and THEN mount m1 again? Is the file still there? Is this behaviour intentional and known or should I create a bug-report? I've deleted quite a bunch of files on my production system because of this... I'm pretty sure this is a known behavior in btrfs. http://markmail.org/message/i522sdkrhlxhw757#query:+page:1+mid:ksdi5d4v26eqgxpi+state:results -- Fajar -- To unsubscribe from this list: send the line unsubscribe linux-btrfs in the body of a message to majord...@vger.kernel.org More majordomo info at http://vger.kernel.org/majordomo-info.html
Re: Scary OOPS when playing with --bind, --move, and friends
On Tue, Dec 21, 2010 at 10:51 AM, C Anthony Risinger anth...@extof.me wrote: in short, everything works fine until you --bind across a subvol via the special folders created when one takes a snapshot, # mount --bind root/subvol of my current root/home/anthony bind # touch bind/TEST you can now see TEST at ~/TEST and bind/TEST bind/ is a mounted snapshot, right? if yes, then when you touch bind/TEST, it should also appear in root/subvol of my current root/home/anthony/TEST, and NOT in root/home/anthony/TEST or /home/anthony/TEST i'm on 2.6.36.2 Try 2.6.35 or later. I tested something similar under ubuntu maverick (2.6.35-24-generic) and it works just fine. -- Fajar
Re: Scary OOPS when playing with --bind, --move, and friends
On Tue, Dec 21, 2010 at 11:16 AM, Fajar A. Nugraha l...@fajar.net wrote: On Tue, Dec 21, 2010 at 10:51 AM, C Anthony Risinger anth...@extof.me wrote: i'm on 2.6.36.2 Try 2.6.35 or later. I tested something similar under ubuntu maverick (2.6.35-24-generic) and it works just fine. Sorry, hit send too soon. I thought you wrote 2.6.32 :P Still curious about your test scenario though. Can you double check it? A write on the snapshot should not appear on the parent filesystem. -- Fajar
Re: Synching a Backup Server
On Fri, Jan 7, 2011 at 12:35 AM, Carl Cook cac...@quantum-sci.com wrote: I want to keep a duplicate copy of the HTPC data, on the backup server Is there a BTRFS tool that would do this? AFAIK zfs is the only opensource filesystem today that can transfer a block-level delta between two snapshots, making it ideal for backup purposes. With other filesystems, something like rsync + LVM snapshot is probably your best bet, and it doesn't really care what filesystem you use.
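To illustrate what "transfer block-level delta between two snapshots" looks like in practice, here is a sketch of the zfs incremental send/receive sequence. The pool/dataset names ("tank/htpc") and the host "backup" are made-up examples, and the function only prints the commands rather than running them (the real ones need a zfs pool and root):

```shell
#!/bin/sh
# Dry-run sketch: print the zfs commands that would ship only the
# blocks changed between two snapshots to a backup host.
# "tank/htpc", "backup" and the snapshot names are hypothetical.
OLD=tank/htpc@yesterday
NEW=tank/htpc@today

plan_zfs_backup() {
    # take today's snapshot, then send the delta relative to yesterday's
    echo "zfs snapshot $NEW"
    echo "zfs send -i $OLD $NEW | ssh backup zfs receive -F tank/htpc-copy"
}

plan_zfs_backup
```

The `-i` flag is what makes the send incremental: only blocks that changed between OLD and NEW cross the wire, which is why this beats a file-level rsync walk for large datasets.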
Re: Synching a Backup Server
On Fri, Jan 7, 2011 at 5:26 AM, Carl Cook cac...@quantum-sci.com wrote: On Thu 06 January 2011 13:58:41 Freddie Cash wrote: Simplest solution is to write a script to create a mysqldump of all databases into a directory, add that to cron so that it runs at the same time everyday, 10-15 minutes before the rsync run is done. That way, rsync to the backup server picks up both the text dump of the database(s), along with the binary files under /var/lib/mysql/* (the actual running database). I am sure glad you guys mentioned database backup in relation to rsync. I would never have guessed. When I do my regular backups I back up the export dump and binary of the database.

When dealing with databases, a binary backup is only usable if all the files backed up are from the same point in time. That means you need to either:
- tell the database server you're going to do a backup, so it doesn't change the datafiles and stores changes temporarily elsewhere (Oracle DB can do this), or
- snapshot the storage, whether at block level (e.g. using LVM) or filesystem level (e.g. btrfs and zfs have snapshot capability), or
- shut down the database before the backup.

The first two options will require some kind of log replay during the restore operation, but they don't need downtime on the source, and are much faster than restoring from an export dump.

So overall I do the export dump of the database 15 minutes before rsync. If you're talking about MySQL, add a snapshot of the source before the rsync. Otherwise your binary backup will be useless. Then snapshot the destination array. Then do the rsync. Right? Don't forget --inplace. Very important if you're using snapshots on the destination. Otherwise disk usage will skyrocket. But how does merely backing up the database prevent it from being hosed in the rsync? Or does snapshot do that? Or does snapshot prevent other data on the disk from getting hosed? What do you mean by being hosed in the rsync? Rsync shouldn't destroy anything.
Snapshot in the source is necessary to have a consistent point-in-time view of database files. I'm about to install the two new 2TB drives in the HTPC to make a BTRFS Raid0 array. Hope it goes According To Doyle... Generally I'd not recommend using RAID0; it's asking for trouble. Use btrfs raid10, or use Linux md raid. -- Fajar
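Putting the steps from this thread in order, a sketch of the whole sequence follows. The volume group "vg0", LV "mysql", and the backup paths are made-up example names, and the function only prints each command (so the ordering is clear) rather than executing anything:

```shell
#!/bin/sh
# Dry-run sketch of a consistent MySQL backup via LVM snapshot + rsync.
# Names (vg0, mysql, /backup, backupserver) are illustrative only.
plan_mysql_backup() {
    # 1. text dump, as a safety net alongside the binary copy
    echo "mysqldump --all-databases --single-transaction > /backup/dump.sql"
    # 2. freeze a point-in-time view of the datafiles
    echo "lvcreate --snapshot --size 1G --name mysql-snap /dev/vg0/mysql"
    echo "mount /dev/vg0/mysql-snap /mnt/snap"
    # 3. copy FROM the snapshot; --inplace avoids rewriting whole files,
    #    which keeps snapshot disk usage on the destination low
    echo "rsync -a --inplace /mnt/snap/ backupserver:/backup/mysql/"
    # 4. clean up the source snapshot
    echo "umount /mnt/snap"
    echo "lvremove -f /dev/vg0/mysql-snap"
}

plan_mysql_backup
```

The key point the thread makes is step 2: rsync reads files one at a time, so without the snapshot the copied datafiles would come from different points in time and be unusable as a binary backup.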
Re: btrfsck segmentation fault
On Sat, Jan 8, 2011 at 5:29 AM, cwillu cwi...@cwillu.com wrote: On Fri, Jan 7, 2011 at 3:15 PM, Andrew Schretter schr...@math.duke.edu wrote: I have a 10TB btrfs filesystem over iSCSI that is currently unmountable. I'm currently running Fedora 13 with a recent Fedora 14 kernel (2.6.35.9-64.fc14.i686.PAE) and the system hung with messages like : parent transid verify failed on 5937615339520 wanted 48547 found 48542 I've rebooted and am attempting to recover with btrfsck from the btrfs-progs-unstable git tree, but it is segfaulting after finding a superblock and listing out 3 of the parent transid messages. Anyone have any ideas? I tried btrfsck /dev/sdb, btrfsck -s 1 /dev/sdb, and btrfsck -s 2 /dev/sdb with the same result for each. The btrfsck binary I compiled does work on a small (800MB) test btrfs file system. I suspect it may be due to the size of the filesystem I am trying to repair. Segfaulting is what the current btrfsck does when it finds a problem; it doesn't try to fix anything yet. Is there something we can do to fix this particular problem (e.g. editing the metadata manually to use an older transaction group), or is this one of those forget-it-you're-screwed kind of things? -- Fajar
Re: Synching a Backup Server
On Sun, Jan 9, 2011 at 6:46 PM, Alan Chandler a...@chandlerfamily.org.uk wrote: then create snapshots of these directories:
/mnt/btrfs/
|- server-a
|- server-b
|- server-c
|- snapshots-server-a
|  |- @GMT-2010.12.21-16.48.09
|  \- @GMT-2010.12.22-16.45.14
|- snapshots-server-b
\- snapshots-server-c
For instance, if I create the initial file system using mkfs.btrfs and then mount it on /mnt/btrfs is there already a default subvolume? or do I have to make one? From the btrfs FAQ: A subvolume is like a directory - it has a name, there's nothing on it when it is created, and it can hold files and other directories. There's at least one subvolume in every Btrfs filesystem, the default subvolume. The equivalent in Ext4 would be a filesystem. Each subvolume behaves as an individual filesystem. What happens when you unmount the whole filesystem and then come back? Whatever subvolumes and snapshots you already have will still be there. The wiki also makes the following statement *Note:* to be mounted the subvolume or snapshot have to be in the root of the btrfs filesystem. but you seem to have snapshots at one layer down from the root. By default, when you do something like mount /dev/sdb1 /mnt/btrfs the default subvolume will be mounted under /mnt/btrfs. Snapshots and subvolumes will be visible as subdirectories under it, regardless of whether they're in the root or several directories under it. Most likely this is enough for what you need, no need to mess with mounting subvolumes. Mounting subvolumes allows you to see a particular subvolume directly WITHOUT having to see the default subvolume or other subvolumes. This is particularly useful when you use btrfs as / or /home and want to roll back to a previous snapshot.
So assuming snapshots-server-b above is a snapshot, you can run mount /dev/sdb1 /mnt/btrfs -o subvol=snapshots-server-b and what previously was in /mnt/btrfs/snapshots-server-b will now be accessible under /mnt/btrfs directly, and you can NOT see what was previously under /mnt/btrfs/snapshots-server-c. Also on a side note, you CAN mount subvolumes not located in the root of the btrfs filesystem using subvolid instead of subvol. It might require a newer kernel/btrfs-progs version though (works fine in Ubuntu maverick.) -- Fajar
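The snapshot-then-mount sequence described above can be sketched as follows. The device, paths, and the @GMT timestamp are made-up examples following the thread's layout, and the function only prints the commands (the real ones need root and an actual btrfs filesystem):

```shell
#!/bin/sh
# Dry-run sketch: snapshot a subvolume, then remount the filesystem so
# the snapshot directory becomes the visible root.  All names are
# illustrative, matching the example tree in this thread.
plan_snapshot_mount() {
    # take a snapshot of server-b into the snapshots directory
    echo "btrfs subvolume snapshot /mnt/btrfs/server-b /mnt/btrfs/snapshots-server-b/@GMT-2010.12.23-16.45.00"
    # expose snapshots-server-b directly at the mount point
    echo "mount /dev/sdb1 /mnt/btrfs -o subvol=snapshots-server-b"
}

plan_snapshot_mount
```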
Re: Synching a Backup Server
On Mon, Jan 10, 2011 at 5:01 AM, Hugo Mills hugo-l...@carfax.org.uk wrote: There is a root subvolume namespace (subvolid=0), which may contain files, directories, and other subvolumes. This root subvolume is what you see when you mount a newly-created btrfs filesystem. Is there a detailed explanation in the wiki about subvolid=0? What does top level 5 in the output of btrfs subvolume list mean (I thought 5 was the subvolid for the root subvolume)? # btrfs subvolume list / ID 256 top level 5 path maverick-base ID 257 top level 5 path kernel-2.6.37 The default subvolume is simply what you get when you mount the filesystem without a subvol or subvolid parameter to mount. Initially, the default subvolume is set to be the root subvolume. If another subvolume is set to be the default, then the root subvolume can only be mounted with the subvolid=0 mount option. ... and mounting with either subvolid=5 or subvolid=0 gives the same result in my case. -- Fajar
Re: Adding a disk fails
On Fri, Jan 21, 2011 at 2:00 PM, Helmut Hullen hul...@t-online.de wrote: Hello Carl, you wrote on 20.01.11: If you shutdown the system, at the reboot you should scan all the devices in order to find the btrfs ones # find the btrfs device btrfs device scan This must be done at every boot? Yes - this advice is added in the Wiki (?). If so, where is it recommended - in rc.local? That depends - it has to be done before mounting. And if the device is part of the boot partition then you may put the scan command into an init-ramdisk. Using something like device=/dev/sdb,device=/dev/sdc in the fstab mount options should also work. -- Fajar
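For reference, the fstab approach looks something like this. The device names and mount point are examples; the key part is naming every member device in the options so the kernel can assemble the multi-device filesystem without a prior btrfs device scan:

```
# /etc/fstab entry (illustrative): a two-device btrfs filesystem
# mounted at /data; both members are listed so no separate scan is needed
/dev/sdb  /data  btrfs  device=/dev/sdb,device=/dev/sdc  0  0
```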
Re: Cannot Create Partition
On Mon, Jan 24, 2011 at 1:07 AM, cac...@quantum-sci.com wrote: On /dev/sda I have sda1 which is my / bootable filesystem for Debian formatted ext4. This is 256MB on a 2TB drive. Really? How do you know it's 256 MB?
# fdisk /dev/sda
WARNING: GPT (GUID Partition Table) detected on '/dev/sda'! The util fdisk doesn't support GPT. Use GNU Parted.
WARNING: DOS-compatible mode is deprecated. It's strongly recommended to switch off the mode (command 'c') and change display units to sectors (command 'u').
Command (m for help): p
Disk /dev/sda: 2000.4 GB, 2000398934016 bytes
255 heads, 63 sectors/track, 243201 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x
Device Boot Start End Blocks Id System
/dev/sda1 1 243202 1953514583+ ee GPT
... because the fdisk output pretty much shows the first partition uses up all the space. You can check again if you want, using parted /dev/sda print (just in case it's really a fdisk problem). Maybe it's possible that I just mkfs.btrfs /dev/sda and it will set up -only- the remaining space, but I'm afraid that this may destroy my OS. You might be able to boot using a live CD and use gparted to resize the current ext4 partition. Also, what if I want to set up the whole drive as BTRFS? Could this be bootable, and can the canned Debian kernel load the BTRFS driver for boot at install? Or would I boot to the CD, mkfs.btrfs the drive, then install Debian? Anyone tried this? Ubuntu Natty's grub2 has btrfs support, but it's still in alpha stage. Don't know about Debian. At this point it's easiest if you use ext3/4 for /boot, and use btrfs only for /. -- Fajar
Re: cloning single-device btrfs file system onto multi-device one
On Mon, Mar 21, 2011 at 11:24 PM, Stephane Chazelas stephane.chaze...@gmail.com wrote: AFAICT, compression is enabled at mount time and would only apply to newly created files. Is there a way to compress files already in a btrfs filesystem? You need to select the files manually (it's not possible to select a directory), but yes, it's possible using btrfs filesystem defragment -c
# mount -o loop /tmp/test.img /mnt/tmp
# cd /mnt/tmp
# dd if=/dev/zero of=100M.bin bs=1M count=100;sync;df -h .
100+0 records in
100+0 records out
104857600 bytes (105 MB) copied, 1.20833 s, 86.8 MB/s
Filesystem      Size  Used Avail Use% Mounted on
/dev/loop0      1.0G  101M  794M  12% /mnt/tmp
# /sbin/btrfs fi de -c /mnt/tmp/100M.bin;sync;df -h .
Filesystem      Size  Used Avail Use% Mounted on
/dev/loop0      1.0G  3.5M  891M   1% /mnt/tmp
For a whole filesystem, you might be able to automate it using a shell script with find . -type f ... -- Fajar
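A sketch of that find-based automation, using the mount point from the transcript above as an example. The function only prints the per-file defragment commands as a dry run; to actually recompress you would execute them (as root, on a real btrfs mount):

```shell
#!/bin/sh
# Dry run: print a "btrfs filesystem defragment -c" command for every
# regular file under the given directory, instead of executing it.
plan_recompress() {
    find "$1" -type f | while IFS= read -r f; do
        echo "btrfs filesystem defragment -c \"$f\""
    done
}

# mount point is an example; pass your own as the first argument
plan_recompress "${1:-/mnt/tmp}"
```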
Re: read-only subvolumes?
On Wed, Mar 23, 2011 at 3:21 PM, Andreas Philipp philipp.andr...@gmail.com wrote: I think it is since I upgraded to kernel version 2.6.38 (I do not create subvolumes on a regular basis.). thor btrfs # btrfs subvolume create 123456789 Create subvolume './123456789' thor btrfs # touch 123456789/lsdkfj touch: cannot touch `123456789/lsdkfj': Read-only file system It works on my system
# touch test1
# btrfs su cr 123456789
Create subvolume './123456789'
# touch 123456789/lsdkfj
# uname -a
Linux HP 2.6.38-020638-generic #201103151303 SMP Tue Mar 15 14:33:40 UTC 2011 i686 GNU/Linux
-- Fajar
Re: [PATCH 0/2] btrfs: allow cross-subvolume BTRFS_IOC_CLONE
On Fri, Apr 1, 2011 at 8:40 PM, Chris Mason chris.ma...@oracle.com wrote: Excerpts from Christoph Hellwig's message of 2011-04-01 09:34:05 -0400: I don't think it's a good idea to introduce any user visible operations over subvolume boundaries. Currently we don't have any operations over mount boundaries, which is pretty fundamental to the unix filesystem semantics. If you want to change this please come up with a clear description of the semantics and post it to linux-fsdevel for discussion. That of course requires a clear description of the btrfs subvolumes, which is still completely missing. The subvolume is just a directory tree that can be snapshotted, and has its own private inode number space. reflink across subvolumes is no different from copying a file from one subvolume to another at the VFS level. The src and destination are different files and different inodes, they just happen to share data extents. ... and currently copying a file from one subvolume to another requires copying the data as well. It'd be great if you could only copy the metadata. A good possible use that comes to mind is to quickly merge several subvolumes into a new one. -- Fajar
Re: How Snapshots Inter-relate?
On Fri, Apr 22, 2011 at 10:03 PM, cac...@quantum-sci.com wrote: Would it be good practice to say, once a year, do a completely new fresh snapshot? There's no such thing as new fresh snapshot. You can create a new, empty subvolume. Or you can create snapshot of existing root/subvolume, which once created, behaves just like another subvolume (with the exception that it contains identical content to the origin root/subvolume). Or you can create snapshot of a snapshot (which, again, is just another subvolume with identical content) so in practice there will be no difference between the first snapshot and the second snapshot. So if by new fresh snapshot you mean creating a snapshot of root/subvolume at that time, then you can do it as often as you need. But if by new fresh snapshot you mean something like full backup in traditional tape backup, then like Calvin said every snapshot is already independent, so in a way it's a full backup. Of course if you're using it for backup purposes then it's usually best to have a copy elsewhere (not on the same disk/server/datacenter), but that's beyond the scope of this list.
Re: Rename a btrfs filesystem?
On Sat, Apr 30, 2011 at 9:24 AM, Evert Vorster evors...@gmail.com wrote: Hi there. Just a quick question: How do I rename an existing btrfs filesystem without destroying all the subvolumes on it? From mkfs.btrfs it says -L sets the initial filesystem label. With ext2, 3 and 4 the filesystem label can be changed with tune2fs. Does similar functionality exist in btrfs? There's a patch for btrfsprogs which introduces a new command: btrfslabel https://patchwork.kernel.org/patch/70701/ You might need to make some changes to merge it with the current version, but the userland-only patch works great for changing an unmounted btrfs filesystem. It works even without the kernel-side patch. -- Fajar
Re: Cannot Deinstall a Debian Package
On Wed, May 4, 2011 at 2:27 AM, cac...@quantum-sci.com wrote: Having a failure that may be because grub2 doesn't support BTRFS. /boot is ext3 and / is BTRFS. Does Debian (or whatever distro you use) support BTRFS /? If yes, you should ask them. If no, then you should've already known that there's a risk when using an unsupported filesystem.
# dpkg -r linux-image-2.6.32-5-amd64
(Reading database ... 136673 files and directories currently installed.)
Removing linux-image-2.6.32-5-amd64 ...
Examining /etc/kernel/postrm.d .
run-parts: executing /etc/kernel/postrm.d/initramfs-tools 2.6.32-5-amd64 /boot/vmlinuz-2.6.32-5-amd64
run-parts: executing /etc/kernel/postrm.d/zz-update-grub 2.6.32-5-amd64 /boot/vmlinuz-2.6.32-5-amd64
/usr/sbin/grub-probe: error: cannot find a device for / (is /dev mounted?).
run-parts: /etc/kernel/postrm.d/zz-update-grub exited with return code 1
Failed to process /etc/kernel/postrm.d at /var/lib/dpkg/info/linux-image-2.6.32-5-amd64.postrm line 234.
dpkg: error processing linux-image-2.6.32-5-amd64 (--remove): subprocess installed post-removal script returned error exit status 1
Errors were encountered while processing: linux-image-2.6.32-5-amd64
Looks like a grub problem. I know that Ubuntu Natty's grub-pc (grub2) works just fine, so you might be able to fix it by upgrading to a newer grub/grub-pc (perhaps from Debian unstable). -- Fajar
Re: btrfs csum failed
On Wed, May 4, 2011 at 7:44 AM, Martin Schitter m...@mur.at wrote: On 2011-05-04 02:28, Josef Bacik wrote: Wait why are you running with btrfs in production? do you know a better alternative for continuous snapshots? :) zfs :D it has worked surprisingly well for more than a year. well the performance could be better for vm-image-hosting but it works. we used cache='writeback' for a long time but now all virtual instances have set cache='none' What OS is in this vm image? 2.6.30-bpo.1-amd64 with virtio-driver could you give me some advice how to debug/report this specific problem more precisely? If it's not reproducible then I'd suspect it'd be hard to do. Checksum errors are usually an early sign of hardware failure (most commonly the disk or power supply). -- Fajar
Re: Cannot Deinstall a Debian Package
On Wed, May 4, 2011 at 5:20 AM, cac...@quantum-sci.com wrote: On Tuesday 3 May, 2011 14:26:52 Fajar A. Nugraha wrote: Does Debian (or whatever distro you use) support BTRFS /? If yes, you should ask them. What do you mean 'does Debian support BTRFS'? The kernel supports it. Just because you can use something doesn't mean it's supported by the distro. For example, RHEL6 marks btrfs as technology preview, in other words you can use it to try out new features but don't complain if anything's broken. There should be a similar warning in Debian. In your case, the broken part is the integration with other distro components (e.g. grub). And why would they know more about BTRFS than you? A good distro would normally test all components included, and mark them as supported or not. If it's not supported, then you should expect a lower level of functionality or integration compared to supported components. Some signs of unsupported components:
- it's marked as technology preview (like in the RHEL6 case)
- the kernel supports it, but the distro installer does not let you use it by default (needs some manual setup of installer flags)
- it's on a different repository (like Ubuntu's universe/multiverse)
- it's not listed as a supported component
Sometimes when a distro includes a technology preview, they'd also include known issues, caveats, or workarounds needed to make it work. In Ubuntu maverick (I suspect it's also the same in Debian) you need to manually update to a newer version of grub-pc. My whole system is installed over BTRFS. If this is non-functional in any OS there should be a warning indicating it is non-functional. There is, though the location and form may be distributed all over the place. There's a warning in the kernel (see Chris's post). The Debian install manual (http://www.debian.org/releases/stable/amd64/ch06s03.html.en#di-partition) also doesn't list btrfs as a supported partition type, in other words it's unsupported. Looks like a grub problem.
I know that Ubuntu Natty's grub-pc (grub2) works just fine, so you might be able to fix it by upgrading to a newer grub/grub-pc (perhaps from Debian unstable). I would be happy to upgrade grub, but the package management system is jammed because of this. You can download the grub-related packages (should be grub and grub-pc, possibly from Debian unstable) and install them manually using dpkg. You might also need to temporarily rename /usr/sbin/update-grub and replace it with a symlink to /bin/true, or move /etc/kernel/postinst.d/zz-update-grub out of the way (just to let the update process run correctly). -- Fajar
Compression: per filesystem, or per subvolume?
Currently using Ubuntu Natty, kernel 2.6.38-9-generic, I have these mount points using btrfs subvolumes
$ mount -t btrfs
/dev/sda2 on / type btrfs (rw,noatime,subvolid=256,compress-force=zlib)
/dev/sda2 on /home type btrfs (rw,noatime,subvolid=258,compress=lzo)
Yet dmesg seems to show only zlib compression enabled
$ dmesg | grep btrfs
[ 11.097908] btrfs: force zlib compression
Is this by design, or a bug? -- Fajar
Re: Compression: per filesystem, or per subvolume?
On Sun, May 8, 2011 at 8:38 PM, cwillu cwi...@gmail.com wrote: It's by not-implemented-yet. Mount options are still currently global to the filesystem. Thanks for the info. I was testing a combination of grub2, btrfs / without a separate /boot, and lzo. Using lzo feels much faster compared to zlib, so right now I just use a separate /boot/grub in ext4 to make it work correctly. -- Fajar
Re: BTRFS, encrypted LVM and disk write cache ?
On Fri, May 13, 2011 at 4:36 AM, Swâmi Petaramesh sw...@petaramesh.org wrote: However shifting from ext3 to BTRFS has been enough to turn my perfectly stable system into a perfectly unstable and crash-prone system :-/ Well, first of all, btrfs is still under heavy development. Add to that the fact that you use Ubuntu Natty, which also has some known bugs. There should be some hints there on what the outcome would be :P Anyway, I'm using linux-image-2.6.38-9-generic from natty-proposed, with btrfs as /, a separate /boot/grub on ext4, and a Corsair Force SSD. It works great so far. If you're used to ext4 speed, then you'll notice that btrfs is considerably slower. That's why I use an SSD to add some I/O speed (currently booting to the gnome classic desktop only takes about 30 seconds). So in summary, if you have problems but still want to try btrfs, try upgrading your kernel. If you think it's too slow to be usable, then the best you can try right now is an SSD. -- Fajar
Re: BTRFS, encrypted LVM and disk write cache ?
On Fri, May 13, 2011 at 3:59 PM, Swâmi Petaramesh sw...@petaramesh.org wrote: Adding to the fact that it comes included with the stock and distro kernels... That gives a bit contradictory signals... Should I stay or should I go ? Looks a bit like legal babble boiling down to « Yes, it is supposed to work and be usable, so please use it, just be so kind not to sue us if you run into trouble. Not our fault, we won't accept any liability. » Something like that. Note also since it's in heavy development, even a small difference in kernel version can make the difference between whether a bug is present or not, so it's advisable to always try the latest kernel version. Add to that the fact that you use Ubuntu Natty, which also has some known bugs. There should be some hints there on what the outcome would be :P Every distro has bugs. However Natty's kernel never ever hanged my system until I shifted to using BTRFS... I won't relate BTRFS issues to using Natty unless there is some evidence pointing in this direction... Not wanting to troll, of course, uh... ;-) No troll taken. Just wanted to point out that since Natty is new and has known bugs (which might include the kernel package as well), if you encounter problems try the latest available kernel for Natty first. If it doesn't work, try the latest vanilla kernel from kernel.org. For example, IIRC 2.6.38 has a bug where you can't use the mount option subvol=... if the fs/subvolume is newly created, while 2.6.39 works fine. At least that's what I experienced, so now I use subvolid to select subvolumes. If you're used to ext4 speed, then you'll notice that btrfs is considerably slower. That's why I use SSD to add some I/O speed (currently booting to gnome classic desktop only takes about 30 seconds). Well, I noticed it's slower, still usable for me anyway, I'm a patient guy ;-) Anyway an SSD is not an option to me, financially speaking ;-) A 60GB Corsair Force should be about $140. Highly recommended.
Higher capacities provide even better cost/GB. -- Fajar
Re: BTRFS, encrypted LVM and disk write cache ?
On Fri, May 13, 2011 at 4:33 PM, Swâmi Petaramesh sw...@petaramesh.org wrote:
# uname -a
Linux tethys 2.6.38-8-generic #42-Ubuntu SMP Mon Apr 11 03:31:50 UTC 2011 i686 i686 i386 GNU/Linux
# mount | grep btrfs
/dev/mapper/VG1-TETHYS on / type btrfs (rw,relatime,subvol=UBUNTU,compress=zlib)
/dev/mapper/VG1-TETHYS on /tmp type btrfs (rw,relatime,subvol=TMP,compress=zlib)
/dev/mapper/VG1-TETHYS on /home type btrfs (rw,relatime,subvol=HOME,compress=zlib)
/dev/mapper/VG1-TETHYS on /var type btrfs (rw,relatime,subvol=VAR,compress=zlib)
/dev/sda2 on /boot type btrfs (rw,relatime,compress=zlib)
Weird.
$ sudo btrfs su li /
ID 256 top level 5 path natty
ID 258 top level 5 path home
$ sudo mount /dev/sda2 -o subvol=home /mnt/tmp
$ ls /mnt/tmp
$ sudo umount /mnt/tmp
$ sudo mount /dev/sda2 -o subvolid=258 /mnt/tmp
$ ls /mnt/tmp
user
$ sudo umount /mnt/tmp
$ uname -a
Linux HP 2.6.38-9-generic #43-Ubuntu SMP Thu Apr 28 15:25:15 UTC 2011 i686 i686 i386 GNU/Linux
I don't even know what got mounted when I use -o subvol=home since all of my subvolumes have some files/directories on them, while the mount shows nothing :P -- Fajar
btrfs error after using kernel 3.0-rc1
While using btrfs as root on kernel 3.0-rc1, there were some errors (I wasn't able to capture them) that forced me to do a hard reset. Now during startup the system drops to a busybox shell because it's unable to mount the root partition. Is there a way to recover the data, as at least grub2 was still happy enough to load the kernel and initrd (both of which are located on the same btrfs partition)? This is what dmesg says
[4.536798] device label SSD-ROOT devid 1 transid 38245 /dev/sda2
[9.552086] device label SSD-ROOT devid 1 transid 38245 /dev/disk/by-label/SSD-ROOT
[9.554563] btrfs: disk space caching is enabled
[9.564301] parent transid verify failed on 44040192 wanted 38240 found 32526
[9.564535] parent transid verify failed on 44040192 wanted 38240 found 32526
[9.564778] parent transid verify failed on 44040192 wanted 38240 found 32526
[9.575679] parent transid verify failed on 44052480 wanted 38240 found 31547
[9.575904] parent transid verify failed on 44052480 wanted 38240 found 31547
[9.576176] parent transid verify failed on 44052480 wanted 38240 found 31547
[9.586121] parent transid verify failed on 44064768 wanted 38240 found 34145
[9.586319] parent transid verify failed on 44064768 wanted 38240 found 34145
[9.586515] parent transid verify failed on 44064768 wanted 38240 found 34145
[9.587027] parent transid verify failed on 44068864 wanted 38240 found 34476
[9.589732] Btrfs detected SSD devices, enabling SSD mode
[9.592923] block group 29360128 has an wrong amount of free space
[9.592959] btrfs: failed to load free space cache for block group 29360128
[9.601802] [ cut here ]
[9.601835] kernel BUG at fs/btrfs/inode.c:4582!
[9.601867] invalid opcode: [#1] SMP [9.601896] Modules linked in: nbd btrfs zlib_deflate libcrc32c i915 drm_kms_helper drm tg3 i2c_algo_bit video ahci libahci [9.601983] [9.601996] Pid: 319, comm: exe Not tainted 3.0.0-rc1 #2 Hewlett-Packard HP Compaq 2210b/0ABC [9.602054] EIP: 0060:[f89dae88] EFLAGS: 00010282 CPU: 0 [9.602104] EIP is at btrfs_add_link+0x1b8/0x240 [btrfs] [9.602140] EAX: ffef EBX: f4baeb44 ECX: 007d EDX: 007c [9.602176] ESI: 00b5 EDI: f4052b44 EBP: f46bbba0 ESP: f46bbb40 [9.602212] DS: 007b ES: 007b FS: 00d8 GS: 00e0 SS: 0068 [9.602245] Process exe (pid: 319, ti=f46ba000 task=f7052610 task.ti=f46ba000) [9.602288] Stack: [9.602303] 000a f4baeb44 f46bbb83 0001 00b5 f89f58af f46bbba0 [9.602359] f89e4910 f46bbb90 0982 f4018000 f4563800 f4baea20 f4052b44 [9.602415] 7e01 02ee 0100 fdf0 f482f2a0 [9.602471] Call Trace: [9.602499] [f89f58af] ? unmap_extent_buffer+0xf/0x20 [btrfs] [9.602551] [f89e4910] ? btrfs_inode_ref_index+0xe0/0xf0 [btrfs] [9.602598] [f8a058e9] add_inode_ref+0x2d9/0x380 [btrfs] [9.602642] [f8a07216] replay_one_buffer+0x226/0x2f0 [btrfs] [9.602687] [f8a04859] walk_down_log_tree+0x1d9/0x370 [btrfs] [9.602737] [f8a04a91] walk_log_tree+0xa1/0x1c0 [btrfs] [9.602778] [c127712a] ? radix_tree_lookup+0xa/0x10 [9.602823] [f8a08ec4] btrfs_recover_log_trees+0x1e4/0x2b0 [btrfs] [9.602872] [f8a06ff0] ? replay_one_extent+0x6b0/0x6b0 [btrfs] [9.602918] [f89cc311] open_ctree+0x1261/0x15e0 [btrfs] [9.602957] [c1279389] ? strlcpy+0x39/0x50 [9.604434] [f89ab692] btrfs_mount+0x4a2/0x5d0 [btrfs] [9.605678] [c12741ce] ? ida_get_new_above+0x11e/0x1a0 [9.605678] [c11296aa] mount_fs+0x3a/0x180 [9.605678] [c10f877f] ? __alloc_percpu+0xf/0x20 [9.605678] [c113fd8b] vfs_kern_mount+0x4b/0xa0 [9.605678] [c11402fe] do_kern_mount+0x3e/0xe0 [9.605678] [c1141ae6] do_mount+0x596/0x6c0 [9.605678] [c11414a8] ? 
copy_mount_options+0xa8/0x110 [9.605678] [c1141f5b] sys_mount+0x6b/0xa0 [9.605678] [c1524e1f] sysenter_do_call+0x12/0x28 [9.605678] Code: 24 14 8b 45 d0 89 7c 24 08 89 54 24 0c 8b 55 d4 89 0c 24 8b 4d 08 e8 d8 ab fe ff 85 c0 0f 84 e4 fe ff ff 83 c4 54 5b 5e 5f 5d c3 0f 0b 8b 55 dc 8d 7d e3 b9 11 00 00 00 8b b2 dc fe ff ff 8b 55 [9.605678] EIP: [f89dae88] btrfs_add_link+0x1b8/0x240 [btrfs] SS:ESP 0068:f46bbb40 [9.622016] ---[ end trace d5d085f53c746e86 ]--- -- Fajar -- To unsubscribe from this list: send the line unsubscribe linux-btrfs in the body of a message to majord...@vger.kernel.org More majordomo info at http://vger.kernel.org/majordomo-info.html
Re: btrfs error after using kernel 3.0-rc1
On Wed, Jun 1, 2011 at 6:06 AM, Fajar A. Nugraha l...@fajar.net wrote: While using btrfs as root on kernel 3.0-rc1, there was some errors (I wasn't able to capture the error) that forced me to do hard reset. Now during startup system drops to busybox shell because it's unable to mount root partition. Is there a way to recover the data, as at least grub2 was still happy enough to load kernel and initrd (both of which located on the same btrfs partition)? This is what dmesg says [ 4.536798] device label SSD-ROOT devid 1 transid 38245 /dev/sda2 [ 9.552086] device label SSD-ROOT devid 1 transid 38245 /dev/disk/by-label/SSD-ROOT [ 9.554563] btrfs: disk space caching is enabled [ 9.564301] parent transid verify failed on 44040192 wanted 38240 found 32526 [ 9.564535] parent transid verify failed on 44040192 wanted 38240 found 32526 [ 9.564778] parent transid verify failed on 44040192 wanted 38240 found 32526 [ 9.575679] parent transid verify failed on 44052480 wanted 38240 found 31547 [ 9.575904] parent transid verify failed on 44052480 wanted 38240 found 31547 [ 9.576176] parent transid verify failed on 44052480 wanted 38240 found 31547 [ 9.586121] parent transid verify failed on 44064768 wanted 38240 found 34145 [ 9.586319] parent transid verify failed on 44064768 wanted 38240 found 34145 [ 9.586515] parent transid verify failed on 44064768 wanted 38240 found 34145 [ 9.587027] parent transid verify failed on 44068864 wanted 38240 found 34476 [ 9.589732] Btrfs detected SSD devices, enabling SSD mode [ 9.592923] block group 29360128 has an wrong amount of free space [ 9.592959] btrfs: failed to load free space cache for block group 29360128 For anyone who got the same problem, I was finally able to mount the fs using Ubuntu Natty's 2.6.38-8-generic (the one on live CD). Previously I tried using 2.6.38-9-generic and and 3.0-rc1, none works. Now I'm copying the files somewhere else before reinstalling this system. 
On another note, does anybody know how btrfs allocates IDs for subvols? It doesn't seem to reuse a deleted subvol's ID. What happens when the last subvol ID is 999? -- Fajar
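On the ID question: as far as I can tell, subvolume IDs are 64-bit object IDs handed out monotonically, so counting past 999 is harmless and exhaustion is not a practical concern. A quick demonstration sketch (scratch mount point and the example IDs are illustrative):

```shell
# IDs keep counting up; a deleted subvolume's ID is not reused:
sudo btrfs subvolume create /mnt/scratch/a   # gets, say, ID 260
sudo btrfs subvolume create /mnt/scratch/b   # ID 261
sudo btrfs subvolume delete /mnt/scratch/a
sudo btrfs subvolume create /mnt/scratch/c   # ID 262, not 260
sudo btrfs subvolume list /mnt/scratch
```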
btrfs-progs-unstable tmp branch build error
When building from tmp branch I got this error: mkfs.c: In function ‘main’: mkfs.c:730:6: error: ‘ret’ may be used uninitialized in this function mkfs.c:841:43: error: ‘parent_dir_entry’ may be used uninitialized in this function make: *** [mkfs.o] Error 1 git blame shows the last commit for both lines was commit e3736c698e8b490bea1375576b718a2de6e89603 Author: Donggeun Kim dg77@samsung.com Date: Thu Jul 8 09:17:59 2010 + btrfs-progs: Add new feature to mkfs.btrfs to make file system image file from source directory Removing the -Werror flag from the Makefile made it compile successfully though. -- Fajar
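Two less invasive workarounds than stripping -Werror out of the Makefile, assuming the btrfs-progs Makefile takes CFLAGS from the usual make variables (untested sketch):

```shell
# Override CFLAGS for one build so -Werror is dropped:
make CFLAGS="-g -O1 -Wall" mkfs.btrfs

# Or silence the warnings at the source by initializing the
# flagged variables at their declarations in mkfs.c, e.g.
# initialize 'ret' to 0 and 'parent_dir_entry' to NULL.
```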
[PATCH] make btrfs filesystem label command actually work
This simple patch makes btrfs filesystem label command actually work. On tmp branch, commit d1dc6a9, btrfs filesystem label functionality was introduced. However the commit lacks one component that lets btrfs accept filesystem label command. Test case: #=== # truncate -s 1G /tmp/dev.img # losetup -f /dev/loop0 # losetup /dev/loop0 /tmp/dev.img # mkfs.btrfs -L old /dev/loop0 WARNING! - Btrfs Btrfs v0.19 IS EXPERIMENTAL WARNING! - see http://btrfs.wiki.kernel.org before using fs created label old on /dev/loop0 nodesize 4096 leafsize 4096 sectorsize 4096 size 1.00GB Btrfs Btrfs v0.19 # btrfs fi la /dev/loop0 old # btrfs fi la /dev/loop0 new # btrfs fi la /dev/loop0 new # mount /dev/disk/by-label/new /mnt/tmp # btrfs fi la /dev/loop0 FATAL: the filesystem has to be unmounted # umount /dev/loop0 # btrfs fi la /dev/loop0 new #=== Not sure if you need if you need a signoff for something as trivial as this, but here it is just in case. Signed-off-by: Fajar A. Nugraha l...@fajar.net --- btrfs.c |6 ++ 1 files changed, 6 insertions(+), 0 deletions(-) diff --git a/btrfs.c b/btrfs.c index 4cd4210..84c2337 100644 --- a/btrfs.c +++ b/btrfs.c @@ -95,6 +95,12 @@ static struct Command commands[] = { filesystem balance, path\n Balance the chunks across the device. }, + { do_change_label, -1, + filesystem label, device [newlabel]\n + With one argument, get the label of filesystem on device.\n + If newlabel is passed, set the filesystem label to newlabel.\n + The filesystem must be unmounted.\n + }, { do_scan, 999, device scan, [device...]\n Scan all device for or the passed device for a btrfs\n --- -- Fajar -- To unsubscribe from this list: send the line unsubscribe linux-btrfs in the body of a message to majord...@vger.kernel.org More majordomo info at http://vger.kernel.org/majordomo-info.html
Re: Announcing btrfs-gui
On Thu, Jun 2, 2011 at 6:20 AM, Hugo Mills h...@carfax.org.uk wrote: Over the last few weeks, I've been playing with a foolish idea, mostly triggered by a cluster of people being confused by btrfs's free space reporting (df vs btrfs fi df vs btrfs fi show). I also wanted an excuse, and some code, to mess around in the depths of the FS data structures. Like all silly ideas, this one got a bit out of hand, and seems to have turned into something vaguely useful. I'm therefore pleased to announce the first major public release of btrfs-gui[1]: a point-and- click tool for managing btrfs filesystems. The tool currently can scan for and list btrfs filesystems and the volumes they live on. It can show the allocation and usage of data in a selected filesystem, categorised by use, replication, and device. It can show and manipulate subvolumes and snapshots: creation, deletion, and setting the default. Some comments: (1) Currently it needs to be run from the directory where it's downloaded, even after a python3 setup.py install. When run from other directory, it bails with Traceback (most recent call last): File /usr/local/bin/btrfs-gui, line 5, in module btrfsgui.main.main() File /usr/local/lib/python3.2/dist-packages/btrfsgui/main.py, line 24, in main subproc = init_root_process(options) File /usr/local/lib/python3.2/dist-packages/btrfsgui/sudo.py, line 31, in init_root_process stdin=subprocess.PIPE, stdout=subprocess.PIPE) File /usr/lib/python3.2/subprocess.py, line 736, in __init__ restore_signals, start_new_session) File /usr/lib/python3.2/subprocess.py, line 1330, in _execute_child raise child_exception_type(errno_num, err_msg) OSError: [Errno 2] No such file or directory: './btrfs-gui-helper' Is this intentional? (2) When showing space usage for a single-device FS, selecting Show unallocated space as raw space, why is the top and bottom graph different? Shouldn't it be the same, since there's only one device? 
(3) Not directly related to btrfs-gui, but I've been wondering what's the correct way to SHOW the current default subvolume? -- Fajar
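On (3): later btrfs-progs releases grew a dedicated query command; if your version has it (check btrfs subvolume --help), it answers this directly:

```shell
# Show the default subvolume of the filesystem mounted at /:
sudo btrfs subvolume get-default /
# Older progs only ship the setter:
#   btrfs subvolume set-default <id> <path>
```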
Re: Can anyone boot a system using btrfs root with linux 3.14 or newer?
On Thu, Apr 24, 2014 at 10:23 AM, Chris Murphy li...@colorremedies.com wrote: It sounds like either a grub.cfg misconfiguration, or a failure to correctly build the initrd/initramfs. So I'd post the grub.cfg kernel command line for the boot entry that works and the entry that fails, for comparison. And then also check and see if whatever utility builds your initrd has been upgraded along with your kernel, maybe there's a bug/regression. I believe the OP mentioned that he's using a distro without initrd, and that all required modules are built in. -- Fajar
Re: Which companies contribute to Btrfs?
On Thu, Apr 24, 2014 at 6:39 PM, David Sterba dste...@suse.cz wrote: On Wed, Apr 23, 2014 at 06:18:34PM -0700, Marc MERLIN wrote: I'm writing slides about btrfs for an upcoming talk (at linuxcon) and I was trying to gather a list of companies that contribute code to btrfs. https://btrfs.wiki.kernel.org/index.php/Main_Page [...] Jointly developed at Oracle, Red Hat, Fujitsu, Intel, SUSE, STRATO [...] Are there other companies I missed? The page now says ... Jointly developed at Facebook, Oracle, Red Hat :D -- Fajar
Re: Convert btrfs software code to ASIC
On Mon, May 19, 2014 at 3:40 PM, Le Nguyen Tran lntran...@gmail.com wrote: Hi, I am Nguyen. I am not a software development engineer but an IC (chip) development engineer. I have a plan to develop an IC controller for Network Attached Storage (NAS). The main idea is converting software code into hardware implementation. Because the chip is customized for NAS, its performance is high, and its cost is lower than using a microprocessor like Atom or Xeon (for servers). I plan to use btrfs as the file system specification for my NAS. The main point is that I need to understand the btrfs software code in order to convert it into hardware implementation. I am wondering if any of you can help me. If we can make the chip in a good shape, we can start up a company and have our own business. I'm not sure if that's a good idea. AFAIK btrfs depends a lot on other linux subsystems (e.g. vfs, block, etc). Rather than converting/reimplementing everything, if your aim is lower cost, you might have an easier time using something like a mediatek SOC (the ones used on smartphones) and running a custom-built linux with btrfs support on it. For documentation, https://btrfs.wiki.kernel.org/index.php/Main_Page#Developer_documentation is probably the best place to start -- Fajar
Re: Convert btrfs software code to ASIC
On Mon, May 19, 2014 at 8:09 PM, Le Nguyen Tran lntran...@gmail.com wrote: I now need to understand the operation of btrfs source code to determine. I hope that one of you can help me. Have you read the wiki link? -- Fajar
Re: Very slow filesystem
On Thu, Jun 5, 2014 at 5:15 AM, Igor M igor...@gmail.com wrote: Hello, Why btrfs becames EXTREMELY slow after some time (months) of usage ? # btrfs fi show Label: none uuid: b367812a-b91a-4fb2-a839-a3a153312eba Total devices 1 FS bytes used 2.36TiB devid1 size 2.73TiB used 2.38TiB path /dev/sde # btrfs fi df /mnt/old Data, single: total=2.36TiB, used=2.35TiB Is that the fs that is slow? It's almost full. Most filesystems would exhibit really bad performance when close to full due to fragmentation issue (threshold vary, but 80-90% full usually means you need to start adding space). You should free up some space (e.g. add a new disk so it becomes multi-device, or delete some files) and rebalance/defrag. -- Fajar -- To unsubscribe from this list: send the line unsubscribe linux-btrfs in the body of a message to majord...@vger.kernel.org More majordomo info at http://vger.kernel.org/majordomo-info.html
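A rough sketch of the freeing-up step, assuming a kernel/progs recent enough to support balance filters (mount point taken from the report above):

```shell
# Check how full the allocated chunks are:
sudo btrfs fi df /mnt/old

# After deleting/moving some data (or adding a second disk with
# 'btrfs device add'), compact chunks that are at most half used
# so their space returns to the unallocated pool:
sudo btrfs balance start -dusage=50 /mnt/old
```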
Re: Very slow filesystem
(resending to the list as plain text, the original reply was rejected due to HTML format) On Thu, Jun 5, 2014 at 10:05 AM, Duncan 1i5t5.dun...@cox.net wrote: Igor M posted on Thu, 05 Jun 2014 00:15:31 +0200 as excerpted: Why btrfs becames EXTREMELY slow after some time (months) of usage ? This is now happened second time, first time I though it was hard drive fault, but now drive seems ok. Filesystem is mounted with compress-force=lzo and is used for MySQL databases, files are mostly big 2G-8G. That's the problem right there, database access pattern on files over 1 GiB in size, but the problem along with the fix has been repeated over and over and over and over... again on this list, and it's covered on the btrfs wiki as well Which part on the wiki? It's not on https://btrfs.wiki.kernel.org/index.php/FAQ or https://btrfs.wiki.kernel.org/index.php/UseCases so I guess you haven't checked existing answers before you asked the same question yet again. Never-the-less, here's the basic answer yet again... Btrfs, like all copy-on-write (COW) filesystems, has a tough time with a particular file rewrite pattern, that being frequently changed and rewritten data internal to an existing file (as opposed to appended to it, like a log file). In the normal case, such an internal-rewrite pattern triggers copies of the rewritten blocks every time they change, *HIGHLY* fragmenting this type of files after only a relatively short period. While compression changes things up a bit (filefrag doesn't know how to deal with it yet and its report isn't reliable), it's not unusual to see people with several-gig files with this sort of write pattern on btrfs without compression find filefrag reporting literally hundreds of thousands of extents! 
For smaller files with this access pattern (think firefox/thunderbird sqlite database files and the like), typically up to a few hundred MiB or so, btrfs' autodefrag mount option works reasonably well, as when it sees a file fragmenting due to rewrite, it'll queue up that file for background defrag via sequential copy, deleting the old fragmented copy after the defrag is done. For larger files (say a gig plus) with this access pattern, typically larger database files as well as VM images, autodefrag doesn't scale so well, as the whole file must be rewritten each time, and at that size the changes can come faster than the file can be rewritten. So a different solution must be used for them. If COW and rewrite is the main issue, why doesn't zfs experience the extreme slowdown (that is, as long as you have sufficient free space available, like 20% or so)? -- Fajar
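The usual workaround for the large rewrite-heavy files discussed above is disabling COW for the database directory. A hedged sketch, assuming the MySQL datadir is /var/lib/mysql; note that +C only affects files created after it is set, and that nodatacow also turns off compression and checksumming for those files:

```shell
# Stop the DB first, then set no-COW on the directory and copy
# the files back in so they are recreated under the attribute:
sudo chattr +C /var/lib/mysql
sudo lsattr -d /var/lib/mysql   # should now show the 'C' flag
# (existing files keep COW until recreated from scratch)
```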
Re: latest btrfs-progs and asciidoc dependency
On Thu, Jun 5, 2014 at 9:41 PM, Marc MERLIN m...@merlins.org wrote: On Thu, Jun 05, 2014 at 12:52:04PM +0100, Tomasz Chmielewski wrote: And it looks like the dependency is ~1 GB of new packages? O_o That seems painful, but at the same time, the alternative, nroff/troff sucks. Part of your problem however seems to be runaway dependencies. You are getting x11 and stuff like libdrm which clearly you shouldn't need. If your disk space is more valuable than your time, I recommend you build asciidoc yourself and you should hopefully end up with less. Or you can also remove asciidoc from the makefile and read the raw files which are readable. ... or try this # apt-get install --no-install-recommends asciidoc If that still doesn't work, AND you have lots of free time, AND are familiar with debian packaging, then you can take the latest available debian source, adapt it for the latest version, and use the opensuse build service to compile it. -- Fajar
Re: Filesystem corrupted, is there any hope?
On Fri, Jun 24, 2011 at 5:16 PM, Michael Stephenson mickstephen...@googlemail.com wrote: Hello, I formatted my home partition with btrfs, not realising that the fsck tool can't actually fix errors, as I have just discovered on your wiki. Had I known this I would not have used it so early; you would think this detail would make distributions wary of offering it as an option on the livecd with no warning that if you ever have a power cut you are most certainly going to lose your data even if your hardware is fine. Anyway, is there any hope? Is there beta code for an fsck that can do repairs? How long do I have to wait before there is? I can always just wait for a few months to get the data back. That depends on what kind of corruption you have (dmesg / syslog output would be nice) For some cases, you can try: - btrfsck -s1, and if it can finish without errors, try btrfs-select-super - if it complains about some transid not available, try btrfs-zero-log both btrfs-select-super and btrfs-zero-log need to be compiled manually from source (i.e. make btrfs-select-super and make btrfs-zero-log). if you use lzo, you might need the tmp branch (http://git.kernel.org/?p=linux/kernel/git/mason/btrfs-progs-unstable.git;a=shortlog;h=refs/heads/tmp) Some more suggestions: - when possible, create a copy of the device first (e.g. with dd_rescue) and work on the copy - use mount -o ro, and if the readonly mount is successful, copy the files somewhere safe and recreate your fs. -- Fajar
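Putting those suggestions in order as a command sketch (device and paths illustrative; the helper tools are run from the btrfs-progs source tree since they are not built by default):

```shell
# 1. Work on a copy, never the original device:
dd_rescue /dev/sdX4 /backup/home.img

# 2. Try a read-only mount of the copy first:
sudo losetup /dev/loop0 /backup/home.img
sudo mount -o ro /dev/loop0 /mnt/rescue

# 3. If that fails, build and try the recovery helpers:
make btrfs-select-super btrfs-zero-log
./btrfsck -s1 /dev/loop0 && ./btrfs-select-super -s 1 /dev/loop0
./btrfs-zero-log /dev/loop0
```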
Re: will mkfs.btrfs do an initial pre-discard for SSDs like mke2fs does for Ext4?
On Sun, Jul 3, 2011 at 6:00 PM, Werner Fischer devli...@wefi.net wrote: Hi all, are there any plans that future versions of mkfs.btrfs will do an initial pre-discard for SSDs? (AFAIK mkfs.btrfs does not do this currently) It should already have it. That is, if you look in the right place commit e6bd18d8938986c997c45f0ea95b221d4edec095 Author: Christoph Hellwig h...@lst.de Date: Thu Apr 21 16:24:07 2011 -0400 btrfs-progs: add discard support to mkfs Discard the whole device before starting to create the filesystem structures. Modelled after similar support in mkfs.xfs. Signed-off-by: Christoph Hellwig h...@lst.de Signed-off-by: Chris Mason chris.ma...@oracle.com IIRC it's already in tmp branch of Chris' btrfs-progs-unstable, but not in master (yet). -- Fajar -- To unsubscribe from this list: send the line unsubscribe linux-btrfs in the body of a message to majord...@vger.kernel.org More majordomo info at http://vger.kernel.org/majordomo-info.html
Re: TRIM support
On Mon, Jul 11, 2011 at 3:58 AM, Leonidas Spyropoulos artafi...@gmail.com wrote: On Sun, Jul 10, 2011 at 9:33 AM, Chris Samuel ch...@csamuel.org wrote: On Sun, 3 Jul 2011 05:45:17 AM Calvin Walton wrote: This LWN article from 2009 explains why it can be problematic (especially on SATA drives where TRIM is a non-queued command): https://lwn.net/Articles/347511/ So the current problem with TRIM in ATA (and SATA) is that it introduce delays? As long as it keeps your SSD in a good shape it's still better than not having TRIM at all, right? Not quite. Sandforce-based SSDs have their own way of reducing writes (e.g. by using internal compression), so you don't have to do anything special. Also, AFAIK currently TRIM is useless if the drives are behind a hardware raid controller anyway. My Corsair F60 (on a notebook) is actually MUCH SLOWER with -o discard (i.e. writes capped at 100 iops) -- Fajar -- To unsubscribe from this list: send the line unsubscribe linux-btrfs in the body of a message to majord...@vger.kernel.org More majordomo info at http://vger.kernel.org/majordomo-info.html
Re: TRIM support
On Mon, Jul 11, 2011 at 5:34 AM, Leonidas Spyropoulos artafi...@gmail.com wrote: So any clues for the intel 320 series? I think it doesn't use compression. At this point your best bet is to try it yourself and see. If it doesn't result in poor performance, then keep on using -o discard. -- Fajar
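If -o discard hurts performance the way described above, batched TRIM is a common middle ground, assuming your kernel supports the FITRIM ioctl on btrfs:

```shell
# Mount without 'discard', then trim in one pass when idle:
sudo fstrim -v /
# e.g. run it from a weekly cron entry instead of paying the
# per-delete discard cost:
#   @weekly /sbin/fstrim /
```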
Re: corruption. notreelog has no effect? anything else to try?
On Sat, Jul 16, 2011 at 3:51 AM, mck m...@wever.org wrote: My laptop btrfs partition has become corrupt after a power+battery outage. # btrfs-show Label: none uuid: e7b37e5d-c704-4ca8-ae7e-f22dd063e165 Total devices 1 FS bytes used 116.33GB devid 1 size 226.66GB used 226.66GB path /dev/sda4 I typically mount w/ -o subvol=xyz,compress,noatime,nodiratime and take snapshots of (some) subvolumes at every boot. (not that these snapshots seem to be of much use now). Now: btrfsck gives loads of root x inode y errors 400 and finishes w/ found 124906840074 bytes used err is 1 total csum bytes: 117696892 total tree bytes: 4385128448 total fs tree bytes: 3926626304 btree space waste bytes: 1078295916 file data blocks allocated: 3619739602944 referenced 162976624640 Btrfs Btrfs v0.19 does btrfsck -s1 complete without error? If yes, you can try btrfs-select-super. mounting always results in the crash [ 6977.513528] Call Trace: [ 6977.513542] [a057ac18] replay_one_dir_item+0x88/0xb0 [btrfs] [ 6977.513557] [a057d5b3] replay_one_buffer+0x223/0x330 [btrfs] [ 6977.513573] [a056a8ba] ? alloc_extent_buffer+0x7a/0x420 [btrfs] [ 6977.513584] [a057c139] walk_down_log_tree+0x339/0x480 [btrfs] [ 6977.513595] [a057c375] walk_log_tree+0xf5/0x230 [btrfs] [ 6977.513606] [a057f1c1] btrfs_recover_log_trees+0x221/0x310 [btrfs] if there's something mentioning log, I'd try btrfs-zero-log first. It's not built by default, so you should run make btrfs-zero-log on btrfs-tools source directory. -- Fajar -- To unsubscribe from this list: send the line unsubscribe linux-btrfs in the body of a message to majord...@vger.kernel.org More majordomo info at http://vger.kernel.org/majordomo-info.html
Re: corruption. notreelog has no effect? anything else to try?
On Sun, Jul 17, 2011 at 7:28 AM, Mck m...@wever.org wrote: Knowing very little about zero-log and select-super should i continue using my laptop like normal now? Or is this filesystem still considered corrupt and i should backup and format it all from scratch? This is my guess: - since you clear the log, there might be corruption on files that you worked on when the crash occurred. - since btrfs uses cow, snapshots created (and not being accessed) before the crash should be fine - check syslog for weird logs If no new error log comes up, and the files you were working on before are intact, I'd just keep on using it. -- Fajar
Re: Emergency - Can't Boot
On Sun, Jul 31, 2011 at 4:12 AM, cac...@quantum-sci.com wrote: On Saturday 30 July, 2011 13:46:21 Hugo Mills wrote: On Sat, Jul 30, 2011 at 12:51:51PM -0700, . wrote: I just did my monthly dist-upgrade and rebooted, only to have it stall at Control D. It tried to automatically run fsck.btrfs and of course it failed, and insists that I run it manually. I can't. I've rebooted several times and can't get past Control D. Don't know where it keeps track of the number of reboots since last fsck. What do you do in a case like this? [Just a note -- this seems to have been fixed in a conversation on IRC, by linking /bin/true to /bin/fsck.btrfs] Yes that fixed it. IMHO a better fix is to just disable fsck in fstab for that fs. Something like # file system  mount point  type  options  dump  pass LABEL=ROOT / btrfs subvolid=258,compress-force=lzo,noatime 0 0 -- Fajar
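The relevant bit is the sixth fstab field (fs_passno): 0 tells the boot scripts to skip fsck for that filesystem. A quick check for btrfs entries that would still be fsck'd at boot (a sketch; adjust the path as needed):

```shell
# Print mount points of non-comment btrfs entries whose
# fs_passno is not 0:
awk '$1 !~ /^#/ && $3 == "btrfs" && $6 != 0 { print $2 }' /etc/fstab
```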
Re: corrupted btrfs volume: parent transid verify failed
On Mon, Aug 15, 2011 at 4:13 AM, Yalonda Gishtaka yalonda.gisht...@gmail.com wrote: Halp! I was recently forced to power cycle my desktop PC, and upon restart, the btrfs /home volume would no longer mount, citing the error BUG: scheduling while atomic: mount /5584/0x2. I retrieved the latest btrfs-progs git repositories from git://git.kernel.org/pub/scm/linux/kernel/git/mason/btrfs-progs-unstable.git and http://git.darksatanic.net/repo/btrfs-progs-unstable.git -b integration-20110805, but when running sudo ./btrfsck -s 1 /dev/mapper/home from either repo builds, I receive the error parent transid verify failed on 647363842048 wanted 210333 found 210302 (repeated 3x). I've also tried the flags -s 0, -s 1, and -s 2, all with the same results. Is there something in the log about replaying log? If yes, try btrfs-zero-log https://btrfs.wiki.kernel.org/index.php/Problem_FAQ -- Fajar -- To unsubscribe from this list: send the line unsubscribe linux-btrfs in the body of a message to majord...@vger.kernel.org More majordomo info at http://vger.kernel.org/majordomo-info.html
Re: Rename BTRfs to MuchSlowerFS ?
On Tue, Sep 6, 2011 at 10:30 PM, Swâmi Petaramesh sw...@petaramesh.org wrote: On Monday 5 September 2011 22:25:23 Sergei Trofimovich wrote: I've seen a similar problem on Ubuntu-11 + Aspire One (8GB of slow SSD). More specifically half of the ubuntu install went very fast and when the disk was ~50% free things suddenly went slow. I'm just about to give up and definitely quit using BTRFS. My system has become so slow after the upgrade Ubuntu Natty = Oneiric Beta that it's purely and simply unusable (although I'm the kind of old, white-haired, experienced, used to prehistoric systems, thus very patient, IT guy...) Compared to ext3/4, btrfs or zfs (on linux) in its current state would seem like a snail. The only way I can get decent speed with btrfs on my laptop is after using a sandforce-based SSD (which helps offset some of the slowness). But since I like the lzo compression and snapshot feature, I'll keep on using btrfs on this one :) It's most probable that all of my usage patterns, i.e. read my mail, browse the web, etc, definitely do not correspond to what BTRFS was designed for. (Sorry for the rant, but this really pisses me off...) So I'm only wondering whether I reformat my system to ext4 or ZFS, and whether I do it right now or on thursday... Reading your post, at this point I'd actually recommend you stick with ext4. Both btrfs and zfs are great, but IMHO btrfs is not ready for daily use by ordinary users yet, while zfs is a memory hog (especially for laptops, which is part of the reason why I'm using btrfs instead of zfs on this one). FWIW I have another laptop with 4GB ram and Ubuntu + zfs root on it, and it would seem to stall on some operations with no apparent cpu or disk activity. zfs compression and the snapshot feature work great though (and much more stable compared to btrfs, which still doesn't have a functioning fsck), so if you have the resources to spare you might want to give zfs another try later.
-- Fajar
Re: Rename BTRfs to MuchSlowerFS ?
On Fri, Sep 16, 2011 at 1:21 PM, Maciej Marcin Piechotka uzytkown...@gmail.com wrote: On Fri, 2011-09-16 at 05:16 +0700, Fajar A. Nugraha wrote: On Fri, Sep 16, 2011 at 2:37 AM, Felix Blanke felixbla...@gmail.com wrote: I've been using btrfs for one year now and it's quite fast. I don't feel any differences to other filesystems. Never tried a benchmark but for my daily work it's nice. Your workload must be light :) I recently repeatedly rsync whole partitions (30GB) without ill effects. (ok - first sync took a whole 1s). Wait, you mean you sync 30GB of data to another partition in one second? That should not be possible for a single HDD no matter what the filesystem is. Unless you're using an SSD or many HDDs in raid. That's how I'm using btrfs btw, on SSD, so I pretty much don't see the slowness since the SSD is blazingly fast. Add to that the fact that btrfs has the features I need (compression, snapshot), and is more memory-efficient (compared to zfs), and it's suitable for my needs. The advantage over ext4 for me is the built-in raid1 and the snapshots. I'm using the snapshot feature for my local backups. I like it because it's really easy and uses very little storage. A simple Snapshot - Rsync to a different disk - Snapshot script is the perfect local backup method. you've never used zfs have you :) For that purpose, think same feature as btrfs snapshot + rsync but without needing rsync. This can be very useful as the process of rsync determining what data to transfer can be quite CPU/disk intensive. Now I'm curious - how does zfs get data off the partition without rsync? First hit on Google: http://www.markround.com/archives/38-ZFS-Replication.html As additional info, the zfs send/receive stream is at the DMU layer, which (in over-simplified terms) is similar to raw disk blocks in a normal partition+ext4 setup. zfs keeps track of which blocks are used, so when given two different snapshots, it can easily find out which blocks are different. When using incremental send zfs only has to send those blocks.
It doesn't have to explicitly re-examine which parts of the file are unmodified (thus not wasting disk, CPU, and network the way rsync does). IIRC there was a proposal on this list some time ago about implementing similar functionality (send/receive) in btrfs. No actual working code (yet) that I know of, though. -- Fajar -- To unsubscribe from this list: send the line unsubscribe linux-btrfs in the body of a message to majord...@vger.kernel.org More majordomo info at http://vger.kernel.org/majordomo-info.html
Re: Honest timeline for btrfsck
On Sun, Oct 9, 2011 at 4:13 AM, Asdo a...@shiftmail.org wrote: On 10/07/11 22:19, Diego Calleja wrote: On Friday, 7 October 2011 21:10:33, Asdo wrote: failures, but you can always mount by rolling back to a previous uberblock, showing an earlier view of the filesystem, which would be consistent. This is already available in Btrfs, command btrfsck -s. Whoops!? Then I am wondering what causes these corrupted unmountable filesystems. -s does not select a previous uberblock; it selects an alternate uberblock. I think the Btrfs wiki (which is now down) said that btrfs was substantially stable, with the only exception that a power loss combined with drives not honoring barriers could result in an unmountable filesystem. For this condition, btrfs-zero-log would be more useful. -- Fajar
btrfs root + mount subvolid=0 problem
Hi, I have a system with Ubuntu natty i386 which uses a btrfs root. It has worked mostly well, but I have a problem when I want to create a new snapshot. The current layout looks something like this:

$ mount | grep btrfs
/dev/sda6 on / type btrfs (rw,noatime,subvolid=258,compress-force=lzo)
/dev/sda6 on /home type btrfs (rw,noatime,subvolid=259,compress-force=lzo)
/dev/sda6 on /data type btrfs (rw,noatime,subvolid=260,compress-force=lzo)
/dev/sda6 on /data/vm type btrfs (rw,noatime,subvolid=261,compress-force=lzo)
/dev/sda6 on /boot type btrfs (rw,noatime,subvolid=289,compress-force=lzo)

$ sudo btrfs su li /
[sudo] password for user:
ID 258 top level 5 path subvol/root
ID 259 top level 5 path subvol/home
ID 260 top level 5 path subvol/data
ID 261 top level 5 path subvol/vm
ID 289 top level 5 path boot
ID 318 top level 5 path subvol/oneiric

A few days ago I was able to mount subvolid=0 to a temporary path (e.g. /mnt/tmp), and on that create a snapshot of subvol/root as subvol/oneiric (for test upgrade purposes). That worked well. The problem is that now I can't repeat the process:

$ sudo mount /dev/sda6 -o subvolid=0 /mnt/tmp
$ ls /mnt/tmp
boot subvol
$ ls /mnt/tmp/subvol
(hangs at this point; I have to perform a hard reset or reboot -f)

This happens both with natty's 2.6.38-11-generic and kernel 3.0.4 (backported from oneiric). Does anyone know if this is a known problem, or how to get further information to fix this? Thanks, Fajar
Re: btrfs root + mount subvolid=0 problem
On Mon, Oct 10, 2011 at 5:20 PM, David Sterba d...@jikos.cz wrote: On Mon, Oct 10, 2011 at 02:30:30PM +0700, Fajar A. Nugraha wrote: This happens both with natty's 2.6.38-11-generic and kernel 3.0.4 (backported from oneiric). Does anyone know if this is a known problem, or how to get further information to fix this? I'm afraid with kernels that old, nobody will try to fix it unless you reproduce it on the most recent kernels. 3.0.4 is considered old already? Wow, 3.1 is not even out yet. I'll retry with 3.1-rc9 then. Do you see any relevant messages in the log before/while the ls is stuck? Nope. It'd be good to know where exactly the ls is stuck by 'cat /proc/lspid/stack', and if it's in D state, possibly gathering stacks of all such processes. CPU usage of the ls process is 96-99%. The good news is that it seems to be doing something (the stack changes somewhat). The bad news is that it seems stuck in a loop. This is on kernel 3.0.4:

$ for n in `seq 1 10`;do echo =;sudo cat /proc/9354/stack;done
= [c1041ebb] __cond_resched+0x1b/0x30 [c113b3f9] iget5_locked+0x79/0x1a0 [f8572ccc] btrfs_iget+0x3c/0x4a0 [btrfs] [f8573278] btrfs_orphan_cleanup+0x148/0x320 [btrfs] [f8573757] btrfs_lookup_dentry+0x307/0x4a0 [btrfs] [f8573900] btrfs_lookup+0x10/0x30 [btrfs] [c112cf27] d_alloc_and_lookup+0x37/0x70 [c112eb89] do_lookup+0x279/0x2f0 [c112f977] path_lookupat+0x107/0x5e0 [c112fe7c] do_path_lookup+0x2c/0xb0 [c11302b3] user_path_at+0x43/0x80 [c1128165] vfs_fstatat+0x55/0xa0 [c11281d0] vfs_lstat+0x20/0x30 [c11285a6] sys_lstat64+0x16/0x30 [c150b3e4] syscall_call+0x7/0xb [] 0x
= [c1041ebb] __cond_resched+0x1b/0x30 [c113b3f9] iget5_locked+0x79/0x1a0 [f8572ccc] btrfs_iget+0x3c/0x4a0 [btrfs] [f8573278] btrfs_orphan_cleanup+0x148/0x320 [btrfs] [f8573757] btrfs_lookup_dentry+0x307/0x4a0 [btrfs] [f8573900] btrfs_lookup+0x10/0x30 [btrfs] [c112cf27] d_alloc_and_lookup+0x37/0x70 [c112eb89] do_lookup+0x279/0x2f0 [c112f977] path_lookupat+0x107/0x5e0 [c112fe7c] do_path_lookup+0x2c/0xb0 [c11302b3] user_path_at+0x43/0x80 [c1128165] vfs_fstatat+0x55/0xa0 [c11281d0] vfs_lstat+0x20/0x30 [c11285a6] sys_lstat64+0x16/0x30 [c150b3e4] syscall_call+0x7/0xb [] 0x
= [f8545273] generic_bin_search.clone.39+0x1b3/0x210 [btrfs] [c102cf7e] kmap_atomic_prot+0xde/0x100 [] 0x
= [f8545273] generic_bin_search.clone.39+0x1b3/0x210 [btrfs] [f854617a] bin_search+0x4a/0x90 [btrfs] [f854ac44] btrfs_search_slot+0x104/0x5c0 [btrfs] [f85731c7] btrfs_orphan_cleanup+0x97/0x320 [btrfs] [f8573757] btrfs_lookup_dentry+0x307/0x4a0 [btrfs] [f8573900] btrfs_lookup+0x10/0x30 [btrfs] [c112cf27] d_alloc_and_lookup+0x37/0x70 [c112eb89] do_lookup+0x279/0x2f0 [c112f977] path_lookupat+0x107/0x5e0 [c112fe7c] do_path_lookup+0x2c/0xb0 [c11302b3] user_path_at+0x43/0x80 [c1128165] vfs_fstatat+0x55/0xa0 [c11281d0] vfs_lstat+0x20/0x30 [c11285a6] sys_lstat64+0x16/0x30 [c150b3e4] syscall_call+0x7/0xb [] 0x
= [] 0x
= [f8545222] generic_bin_search.clone.39+0x162/0x210 [btrfs] [f854617a] bin_search+0x4a/0x90 [btrfs] [f854accc] btrfs_search_slot+0x18c/0x5c0 [btrfs] [f85731c7] btrfs_orphan_cleanup+0x97/0x320 [btrfs] [f8573757] btrfs_lookup_dentry+0x307/0x4a0 [btrfs] [f8573900] btrfs_lookup+0x10/0x30 [btrfs] [c112cf27] d_alloc_and_lookup+0x37/0x70 [c112eb89] do_lookup+0x279/0x2f0 [c112f977] path_lookupat+0x107/0x5e0 [c112fe7c] do_path_lookup+0x2c/0xb0 [c11302b3] user_path_at+0x43/0x80 [c1128165] vfs_fstatat+0x55/0xa0 [c11281d0] vfs_lstat+0x20/0x30 [c11285a6] sys_lstat64+0x16/0x30 [c150b3e4] syscall_call+0x7/0xb [] 0x
= [c1041ebb] __cond_resched+0x1b/0x30 [c113b3f9] iget5_locked+0x79/0x1a0 [f8572ccc] btrfs_iget+0x3c/0x4a0 [btrfs] [f8573278] btrfs_orphan_cleanup+0x148/0x320 [btrfs] [f8573757] btrfs_lookup_dentry+0x307/0x4a0 [btrfs] [f8573900] btrfs_lookup+0x10/0x30 [btrfs] [c112cf27] d_alloc_and_lookup+0x37/0x70 [c112eb89] do_lookup+0x279/0x2f0 [c112f977] path_lookupat+0x107/0x5e0 [c112fe7c] do_path_lookup+0x2c/0xb0 [c11302b3] user_path_at+0x43/0x80 [c1128165] vfs_fstatat+0x55/0xa0 [c11281d0] vfs_lstat+0x20/0x30 [c11285a6] sys_lstat64+0x16/0x30 [c150b3e4] syscall_call+0x7/0xb [] 0x
= [c1041ebb] __cond_resched+0x1b/0x30 [c113b3f9] iget5_locked+0x79/0x1a0 [f8572ccc] btrfs_iget+0x3c/0x4a0 [btrfs] [f8573278] btrfs_orphan_cleanup+0x148/0x320 [btrfs] [f8573757] btrfs_lookup_dentry+0x307/0x4a0 [btrfs] [f8573900] btrfs_lookup+0x10/0x30 [btrfs] [c112cf27] d_alloc_and_lookup+0x37/0x70 [c112eb89] do_lookup+0x279/0x2f0 [c112f977] path_lookupat+0x107/0x5e0 [c112fe7c] do_path_lookup+0x2c/0xb0 [c11302b3] user_path_at+0x43/0x80 [c1128165] vfs_fstatat+0x55/0xa0 [c11281d0] vfs_lstat+0x20/0x30 [c11285a6] sys_lstat64+0x16/0x30 [c150b3e4
Re: btrfs root + mount subvolid=0 problem
On Mon, Oct 10, 2011 at 7:09 PM, Fajar A. Nugraha l...@fajar.net wrote: On Mon, Oct 10, 2011 at 5:20 PM, David Sterba d...@jikos.cz wrote: On Mon, Oct 10, 2011 at 02:30:30PM +0700, Fajar A. Nugraha wrote: This happens both with natty's 2.6.38-11-generic and kernel 3.0.4 (backported from oneiric). Does anyone know if this is a known problem, or how to get further information to fix this? I'm afraid with kernels that old, nobody will try to fix it unless you reproduce it on the most recent kernels. 3.0.4 is considered old already? Wow, 3.1 is not even out yet. I'll retry with 3.1-rc9 then. Well, apparently 3.0.4 IS old in the btrfs world :) With 3.1-rc9 it looks stuck for a while (same high CPU load), but after a while it completes successfully. It'd be good to know where exactly the ls is stuck by 'cat /proc/lspid/stack', and if it's in D state, possibly gathering stacks of all such processes. CPU usage of the ls process is 96-99%. The good news is that it seems to be doing something (the stack changes somewhat). The bad news is that it seems stuck in a loop. This is on kernel 3.0.4:

$ for n in `seq 1 10`;do echo =;sudo cat /proc/9354/stack;done
= [c1041ebb] __cond_resched+0x1b/0x30 [c113b3f9] iget5_locked+0x79/0x1a0 [f8572ccc] btrfs_iget+0x3c/0x4a0 [btrfs] [f8573278] btrfs_orphan_cleanup+0x148/0x320 [btrfs] [f8573757] btrfs_lookup_dentry+0x307/0x4a0 [btrfs] [f8573900] btrfs_lookup+0x10/0x30 [btrfs] [c112cf27] d_alloc_and_lookup+0x37/0x70 [c112eb89] do_lookup+0x279/0x2f0 [c112f977] path_lookupat+0x107/0x5e0 [c112fe7c] do_path_lookup+0x2c/0xb0 [c11302b3] user_path_at+0x43/0x80 [c1128165] vfs_fstatat+0x55/0xa0 [c11281d0] vfs_lstat+0x20/0x30 [c11285a6] sys_lstat64+0x16/0x30 [c150b3e4] syscall_call+0x7/0xb [] 0x

The stack looks a little different.
The 3.0.4 kernel sometimes has this, while 3.1-rc9 doesn't (at least I wasn't able to capture it):

[f8545222] generic_bin_search.clone.39+0x162/0x210 [btrfs]

...and 3.0.4 has these:

[f8573900] btrfs_lookup+0x10/0x30 [btrfs]
[c112cf27] d_alloc_and_lookup+0x37/0x70
[c112eb89] do_lookup+0x279/0x2f0

while 3.1-rc has:

[f8955ec8] btrfs_lookup+0x18/0x60 [btrfs]
[c112ddaf] d_inode_lookup.clone.9+0x1f/0x50
[c113008b] do_lookup+0x31b/0x360

Anyway, since 3.1-rc9 works, I'll stick with this for now. Thanks for your help, David. -- Fajar
Re: btrfs-progs: new integration branch out
On Wed, Oct 12, 2011 at 7:34 PM, Hugo Mills h...@carfax.org.uk wrote: All - After a long wait (sorry about that, things have been busy for me lately), I've managed to pull together a new integration branch for btrfs-progs. This can be pulled from: http://git.darksatanic.net/repo/btrfs-progs-unstable.git/ integration-20111012 This is only compile-tested so far, so there may still be serious problems with it. Please take care. I'm not likely to be able to do much additional work on it before the weekend, so I thought it would be better to get *something* out, even if it's unstable/buggy. This _is_ the integration branch, after all... Fixes or updated patches for any problems you may find are welcomed, of course. I've got a few additional things still to bring in -- David Sterba did a trawl round quite a few distributions and turned up a couple of useful-looking patches that he's sent my way; plus there's Jan Schmidt's inspect-internal patches, which sadly clashed horribly with Xin Zhong's subvol-get-default patch. I hope I'll be able to run up another version within the week with these other parts in it, and actually do some proper review/testing on it before it goes out. Personally I'd like to see Josef's work on https://github.com/josefbacik/btrfs-progs merged, in particular the restore command (which should be useful even when a working fsck is available). I'm not sure, though, if it's intended to be merged or is just a personal project at this point. Also, a small note on the grouping of the btrfs commands' help text. For example, it now jumps from btrfs filesystem label to four btrfs scrub commands, then back to btrfs filesystem restripe. Not really critical, just cosmetic. -- Fajar
Re: btrfs-progs: new integration branch out
On Wed, Oct 12, 2011 at 7:34 PM, Hugo Mills h...@carfax.org.uk wrote: Fixes or updated patches for any problems you may find are welcomed, of course. I noticed that btrfs subvolume snapshot is now broken. It keeps on saying Invalid arguments for subvolume snapshot. Further checking shows it's caused by this commit:

commit f71210f87e0c684d8c76dfa2e19ea86256fc3d1f
Author: Andreas Philipp philipp.andr...@gmail.com
Date: Thu Aug 11 08:45:40 2011 +0200

check number of args for btrfs sub snap correctly Check whether there are the right number of arguments (exatly 2 without the flag -r) in the subcommand handler for the btrfs subvolume snapshot command.

Changing

if (argc - optind != 3) {

back to

if (argc - optind != 2) {

makes snapshot creation work again, tested both with and without -r on Ubuntu Natty + kernel 3.1.0-rc9. I reverted that commit on my system for now. -- Fajar
Re: btrfs-progs: new integration branch out
On Wed, Oct 12, 2011 at 11:50 PM, Mitch Harder mitch.har...@sabayonlinux.org wrote: On Wed, Oct 12, 2011 at 10:22 AM, Fajar A. Nugraha l...@fajar.net wrote: I noticed that btrfs subvolume snapshot is now broken. It keeps on saying Invalid arguments for subvolume snapshot. Further checking shows it's caused by commit f71210f87e0c684d8c76dfa2e19ea86256fc3d1f Author: Andreas Philipp philipp.andr...@gmail.com Date: Thu Aug 11 08:45:40 2011 +0200 check number of args for btrfs sub snap correctly It looks like there have been two patches that touched on this issue, and they conflicted with one another. Arne Jansen's "btrfs-progs: add qgroup commands" patch added an "optind = 1;" line, where optind was defaulting to zero before. This conflicted with Hugo Mills' "fix incorrect argument checking for btrfs sub snap -r" patch. So it looks like (argc - optind != 2) is now correct. Ah, that explains it then. Thanks. -- Fajar
Re: Could I create volumes on one device ?
On Wed, Oct 12, 2011 at 11:51 PM, bbsposters bbspost...@yahoo.com.tw wrote: Hi list, I want to create volumes (not subvolumes) on one device. Could it work? If it works, how can I do it with the btrfs tools? If it can't, is there any way to create subvolumes which have their own independent space? For example, I want to create 100MB for /home and 50MB for /root. thanks, Short version: No. For that particular purpose, currently you'd better stick with either LVM+ext4 or zfs. Long version: IIRC Arne Jansen posted patches for subvolume/qgroup quota support, so it can be used for what you need, up to a certain degree. It's not as complete as zfs (which also supports reservations in addition to quotas), and it hasn't reached the upstream kernel yet (at least it's not in 3.1-rc9). -- Fajar
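For reference, the per-subvolume limits from the qgroup patches mentioned above would look roughly like this (a sketch only; the /mnt mount point and sizes are made up, and the exact syntax may differ from what finally lands in btrfs-progs):

```shell
# Enable quota/qgroup tracking on a btrfs filesystem (assumed mounted at /mnt)
btrfs quota enable /mnt

# Create subvolumes and cap each one's usage independently
btrfs subvolume create /mnt/home
btrfs subvolume create /mnt/root
btrfs qgroup limit 100M /mnt/home
btrfs qgroup limit 50M /mnt/root
```

By comparison, zfs supports both a ceiling and a guarantee per dataset: zfs set quota=100M pool/home limits usage, while zfs set reservation=100M pool/home guarantees the space.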
Re: Snapshot rollback
On Mon, Oct 24, 2011 at 12:45 PM, dima dole...@parallels.com wrote: Phillip Susi psusi at cfl.rr.com writes: I created a snapshot of my root subvol, then used btrfs-subvolume set-default to make the snapshot the default subvol and rebooted. This seems to have correctly gotten the system to boot from the snapshot instead of the original subvol, but now /home ( @home subvol ) refuses to mount, claiming that /dev/sda1 is not a valid block device. What gives? Try mounting using subvolid. Use btrfs su li / (or wherever it's mounted) to list the ids. Personally I do not store anything in subvolid=0 directly and never bothered with the 'set-default' option - just used a new subvolume/snapshot name. +1 A problem with that, though, arises if you decide to put /boot on btrfs as well. Grub uses the default subvolume to determine paths (for kernel, initrd, etc). A workaround is to manually create and manage your grub.cfg (or create and use a manually-managed include file, like custom-top.cfg, that gets parsed before the automatically created entries). I really like grub2's zfs support, where it will correctly use the dataset name for file locations. Unfortunately grub's btrfs support doesn't have it (yet).
- create a named snapshot
- edit bootloader config to include the new rootflags=subvol=your_new_snapshot_name
I had some problems with the subvol option in old versions of kernel/btrfs in Lucid/Natty. I use subvolid now, which seems to be more reliable. -- Fajar
Re: Snapshot rollback
On Mon, Oct 24, 2011 at 3:24 PM, dima dole...@parallels.com wrote: Fajar A. Nugraha list at fajar.net writes: A problem with that, though, arises if you decide to put /boot on btrfs as well. Grub uses the default subvolume to determine paths (for kernel, initrd, etc). A workaround is to manually create and manage your grub.cfg (or create and use a manually-managed include file, like custom-top.cfg, that gets parsed before the automatically created entries). Oops, sorry, I was incorrect of course. Thanks Fajar. I do have /boot in my subvolid=0 because bootloaders can't read inside subvolumes (other than the default) as far as I understand. And I just bind /boot through fstab. AFAIK you have three possible ways to use /boot on btrfs: (1) put /boot on subvolid=0 and don't change the default subvolume. That works, but all your snapshots/subvols will be visible under /boot. Some people might not want that for aesthetic reasons. (2) put /boot (or /, when /boot is part of /) on a subvolume, then change the default subvolume. This works cleanly, but there were some problems in the past when changing the default subvolume (at least I had problems). Current kernel versions might not have this problem anymore (I haven't tried again). (3) put /boot on a subvolume, do not change the default subvolume, and manage grub.cfg manually. This is what I currently do. I wish grub's btrfs support were more like its zfs support, where both the bootloader and the tools (e.g. update-grub, grub-probe, etc) can intelligently recognize what dataset /boot is on and create the correct entry. For example, if you have this:

# df -h /boot
Filesystem                Size  Used Avail Use% Mounted on
rpool/ROOT/ubuntu-1       384G  1.2G  383G   1% /
rpool/ROOT/ubuntu-1/boot  383G   71M  383G   1% /boot
...
then grub.cfg will have an entry like this:

zfs-bootfs ($root) bootfs
linux /ROOT/ubuntu-1/boot/@/vmlinuz-2.6.38-10-generic root=/dev/sda5 ro boot=zfs $bootfs rpool=rpool bootfs=rpool/ROOT/ubuntu-1
initrd /ROOT/ubuntu-1/boot/@/initrd.img-2.6.38-10-generic

This flexibility allows me (for example) to have multiple versions of kernel, initrd, and root fs in different datasets, all selectable during the boot process (possibly by manually editing the grub command line to supply the correct path/argument). Good for rescue purposes, just in case a recent update broke the system :) The closest thing we can get to that with btrfs is currently option (1), as option (2) requires you to boot to an alternate environment (e.g. a live cd) to change the default subvol first. -- Fajar
Re: Snapshot rollback
On Tue, Oct 25, 2011 at 9:00 AM, dima dole...@parallels.com wrote: Fajar A. Nugraha list at fajar.net writes: AFAIK you have three possible ways to use /boot on btrfs: (1) put /boot on subvolid=0 and don't change the default subvolume. That works, but all your snapshots/subvols will be visible under /boot. Some people might not want that for aesthetic reasons. Hi Fajar, I think I am doing just this, but my subvolumes are not visible under /boot. I have all my subvolumes set up like this:

/path/to/subvolid_0/boot (a simple directory bind-mounted to /)
/path/to/subvolid_0/__active (my / subvolume)
/path/to/subvolid_0/__home (my /home subvolume)

Actually with that setup you're using option (3) that I described. That means all your subvolumes are still visible under /path/to/subvolid_0/, right? I'm not sure how well grub can manage this. Probably it can't, so you'd have to manage boot entries manually. (3) put /boot on a subvolume, do not change the default subvolume, and manage grub.cfg manually. This is what I currently do. Could you elaborate on this option please, and if possible post your grub.cfg? I don't quite understand how it works. Though I am doing syslinux at the moment, I think the process is the same. For example, take the following subvolume structure (using the default subvolid 0, unchanged):

/
/boot
/root
/home
...

and fstab set to mount subvol boot on /boot. Then when grub looks for kernels and initrds, it'd see that /boot is its own block device, so it'd use /vmlinuz and /initrd in grub.cfg. However, when grub actually boots, it'd see the files located at /boot/vmlinuz and /boot/initrd, and when reading the configuration it'd be unable to find the files (since from grub's point of view there's nothing at /vmlinuz and /initrd). Which is why I said you'd need to manage grub.cfg manually in this setup. For comparison purposes, I'd just set up two versions of Ubuntu with zfs / and /boot.
In a way it's kinda like having different linux distros/versions installed on different partitions with one master bootloader choosing which partition is active. Each distro/version can manage its own boot configuration file without disturbing the others (e.g. oneiric won't be able to see natty's kernel and initrd). Although the initial setup is done manually, each version's grub will be able to manage its own kernels and initrds independently. The setup is something like this:

# zfs list | grep boot
rpool/ROOT/oneiric/boot  23.3M  369G  23.3M  /rpool/ROOT/oneiric/boot
rpool/ROOT/natty/boot     121M  369G  96.3M  /rpool/ROOT/natty/boot
rpool/boot               1.32M  369G  1.32M  /rpool/boot

rpool/boot is where the master configuration file for grub is installed. Its job is to select which configfile (oneiric or natty) to use next, so it's something like this:

#===
# cat /rpool/boot/grub/grub.cfg
insmod part_msdos
insmod zfs
search --no-floppy --fs-uuid --set=root c4f4006ef59df197
set timeout=1
menuentry 'Ubuntu Natty boot menu' {
configfile /ROOT/natty/boot/@/grub/grub.cfg
}
menuentry 'Ubuntu Oneiric boot menu' {
configfile /ROOT/oneiric/boot/@/grub/grub.cfg
}
#===

Each Ubuntu version has its own /boot directory, which will be mounted as /boot when it's active, but remains unmounted (or mounted on an alternate path) when it's inactive. grub's zfs support will use the dataset name as part of the path (regardless of where it's currently mounted), so the config for oneiric looks something like this:

#===
menuentry 'Ubuntu, with Linux 3.0.0-12-generic' --class ubuntu --class gnu-linux --class gnu --class os {
search --no-floppy --fs-uuid --set=root c4f4006ef59df197
linux /ROOT/oneiric/boot/@/vmlinuz-3.0.0-12-generic root=/dev/sda5 ro boot=zfs boot=zfs rpool=rpool bootfs=rpool/ROOT/oneiric
initrd /ROOT/oneiric/boot/@/initrd.img-3.0.0-12-generic
}
#===

Unfortunately this kind of setup is currently not possible with btrfs.
-- Fajar
Re: Snapshot rollback
On Tue, Oct 25, 2011 at 3:54 PM, dima dole...@parallels.com wrote: Hi Fajar, I think I am doing just this, but my subvolumes are not visible under /boot. I have all my subvolumes set up like this:

/path/to/subvolid_0/boot (a simple directory bind-mounted to /)
/path/to/subvolid_0/__active (my / subvolume)
/path/to/subvolid_0/__home (my /home subvolume)

Actually with that setup you're using option (3) that I described. That means all your subvolumes are still visible under /path/to/subvolid_0/, right? I'm not sure how well grub can manage this. Probably it can't, so you'd have to manage boot entries manually. Yes, you are right. I can see all subvolumes under /path/to/subvolid_0. By the way, grub2 can manage this setup correctly and generate the right menu entries without any problems. But - /boot is not in its own subvolume in this setup. I think this is the reason it works out of the box. Thank you for your explanations. I'll see if I can make it work with /boot in its own subvolume. If you currently have it working, then as long as you have the full subvolid=0 mounted somewhere and use a bind-mount from that, it shouldn't matter whether it's a subdir or a subvol. But if you mount the subvol directly on /boot, you'll have the same problem I do. So it's a trade-off, I guess. You can have grub manage grub.cfg correctly, but you need to have the full tree mounted somewhere. Not ideal, but a possible option which I hadn't thought of before. Thanks for sharing your setup. -- Fajar
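The working arrangement discussed above (full tree mounted at a utility path, with /boot bind-mounted out of it) could be expressed in fstab roughly like this (a sketch; the device name /dev/sda6 and the /mnt/btrfs-root path are made-up examples):

```
# mount the whole filesystem tree (subvolid=0) at a utility path
/dev/sda6             /mnt/btrfs-root  btrfs  subvolid=0,noatime  0  0
# bind-mount the boot directory out of it, so grub sees plain paths
/mnt/btrfs-root/boot  /boot            none   bind                0  0
```

With this layout /boot is just a directory inside the default tree, so grub's generated paths resolve the same way at boot time as at update-grub time.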
Re: Unable to mount (or, why not to work late at night).
On Thu, Oct 27, 2011 at 10:22 PM, Ken D'Ambrosio k...@jots.org wrote: So, I was trying to downgrade my Ubuntu last night, and, before doing anything risky like that, I backed up my disk via dd to an image on an external disk. Some of us make use of snapshot/clone, whether it's using btrfs or zfs :) So, I dd'd everything back, and now it crashes on boot. Booting to a 2.6.x kernel (which is what I had on-hand on a USB drive) mounts it, but doesn't let me *do* anything (though it spews btrfs errors in dmesg). What do you mean it doesn't let you do anything? Can you mount it read-only and copy the data off the disk? Getting Ubuntu 11.10 (kernel rev. 3.0.0) gives me this: I'd try 3.1. If you use 11.10, try http://kernel.ubuntu.com/~kernel-ppa/mainline/v3.1-oneiric/

[ 121.226246] device fsid d657ce6a-d353-4c2c-858a-6a1f4d9e766e devid 1 transid 217713 /dev/sda1
[ 121.232430] parent transid verify failed on 100695031808 wanted 217713 found 217732
[ 121.232898] parent transid verify failed on 100695031808 wanted 217713 found 217732
[ 121.233357] parent transid verify failed on 100695031808 wanted 217713 found 217732
[ 121.233365] parent transid verify failed on 100695031808 wanted 217713 found 217732
[ 121.248231] btrfs: open_ctree failed

As I have this complete image on-disk, I'm more than willing to try Extreme Measures(tm), whatever that might entail. Try getting the source of btrfs-progs, run make btrfs-zero-log, and use it. -- Fajar
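Building and running the tool as suggested above is roughly this (a sketch; the repository URL is the btrfs-progs tree of that era and may have moved, and /dev/sda1 stands in for the affected device):

```shell
# fetch btrfs-progs sources and build just the zero-log tool
git clone git://git.kernel.org/pub/scm/linux/kernel/git/mason/btrfs-progs-unstable.git
cd btrfs-progs-unstable
make btrfs-zero-log

# WARNING: this discards the tree log (losing the last few seconds of
# writes); run it only against an unmounted filesystem
sudo ./btrfs-zero-log /dev/sda1
```

This only helps when the failure is in log replay at mount time; it does not repair other kinds of metadata corruption.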
Re: Unable to mount (or, why not to work late at night).
On Fri, Oct 28, 2011 at 7:32 AM, Ken D'Ambrosio k...@jots.org wrote: Some of us make use of snapshot/clone, whether it's using btrfs or zfs :) No, this is just flat-out my fault: it doesn't matter what backup method you use if you do it wrong. (I actually have three snapshots of each of my two partitions.) What I meant was, if you use snapshots you can easily roll back, without having to dd things back. But you're right, though; it doesn't matter what you use if you do it wrong. Try getting the source of btrfs-progs, run make btrfs-zero-log, and use it. Already got 'em. Everything that tries to even think about modifying stuff (btrfs-zero-log, btrfsck, and btrfs-debug-tree) dumps core: Your last resort (for now, anyway) might be using restore from Josef's btrfs-progs: https://github.com/josefbacik/btrfs-progs It might be able to copy some data. -- Fajar
Re: How to remount btrfs without compression?
On Tue, Nov 8, 2011 at 8:06 AM, Eric Griffith egriffit...@gmail.com wrote: Edit your fstab, remove the compress flag, reboot. Tell btrfs to rebalance the system, reboot again. And I -THINK- that'll decompress all the files I think the original question was how to force uncompressed mode, whether specific to a file or to a whole filesystem, without having to reboot :) AFAIK there's no way to do that. -- Fajar
Re: How to remount btrfs without compression?
On Wed, Nov 9, 2011 at 2:48 PM, Lubos Kolouch lubos.kolo...@gmail.com wrote: Sorry for the possibly OT question - when I have a historical btrfs system mounted with zlib compression, can I remount it with lzo? Yes. What will happen? Will the COW be broken and the files take duplicate space? Or will the Universe explode and be replaced with something even more bizarre? Newly written blocks/extents will use lzo compression (if the data is compressible, or if it's mounted with compress-force). Old, unmodified blocks/extents will remain unchanged, using zlib or uncompressed. -- Fajar
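The switch described above needs only a remount (a sketch; /mnt is a placeholder mount point, and the defragment step assumes a btrfs-progs version whose defragment command accepts a compress option, which may not be true for older releases):

```shell
# switch compression for future writes to lzo; existing extents
# stay as they are (zlib or uncompressed) until rewritten
mount -o remount,compress=lzo /mnt

# optionally force old data to be rewritten (and thus recompressed)
btrfs filesystem defragment -c /mnt/some/old/file
```

Because btrfs decides the compression format per extent, mixing zlib, lzo, and uncompressed extents within one filesystem (even one file) is perfectly valid.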
Re: fsck with err is 1
On Wed, Nov 23, 2011 at 12:33 PM, Blair Zajac bl...@orcaware.com wrote: Hello, I'm trying btrfs in a VirtualBox VM running Ubuntu 11.10 with kernel 3.0.0. Running fsck I get a message with err is 1. Does this mean there's an error? Is err either always 0 or 1, or does err increment beyond 1? I can't answer that, but I can tell you that fsck for btrfs right now is almost useless. It can't fix anything. Short summary: if you can mount the fs, can access the data, and don't have any weird messages in syslog, then it's most likely OK. -- Fajar
Re: btrfs and load (sys)
On Thu, Nov 24, 2011 at 8:00 AM, Chris Samuel ch...@csamuel.org wrote: Another possibility I *think* is that you could try 3.1 with Chris Mason's for-linus git branch pulled into it. Hopefully someone who knows the procedure better than I can correct me on this! :-) My method is:
- use 3.1.1 (latest stable at the time I was compiling it), compile btrfs as a module
- use only the fs/btrfs directory from Chris' for-linus tree, compile the module externally (make -C /lib/modules/`uname -r`/build M=$(pwd) modules)
- put the resulting btrfs.ko in /lib/modules/`uname -r`/updates/manual/
- depmod -a
- verify the new module is selected by default (modinfo btrfs)
- rebuild the initrd
- reboot
-- Fajar
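The steps above as a copy-pasteable sketch (assumes the for-linus fs/btrfs sources are already unpacked in the current directory, kernel headers for the running kernel are installed, and an Ubuntu/Debian-style update-initramfs is available):

```shell
# build the btrfs module out-of-tree against the running kernel's headers
make -C /lib/modules/`uname -r`/build M=$(pwd) modules

# install it where depmod will prefer it over the in-tree module
sudo mkdir -p /lib/modules/`uname -r`/updates/manual
sudo cp btrfs.ko /lib/modules/`uname -r`/updates/manual/
sudo depmod -a

# confirm the replacement module is the one that will be loaded
modinfo btrfs

# rebuild the initramfs so early boot picks up the new module too
sudo update-initramfs -u
```

The updates/ directory takes precedence over kernel/ in depmod's default search order, which is why the replacement module wins without touching the distribution's files.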
Re: btrfs/git question.
On Tue, Nov 29, 2011 at 8:58 AM, Phillip Susi ps...@cfl.rr.com wrote: On 11/28/2011 12:53 PM, Ken D'Ambrosio wrote: Seems I've picked up a wireless regression, and randomly drop my WiFi connection with more recent kernels. While I'd love to try to track down the issue, the sporadic nature makes it difficult. But I don't want to revert to a flat-out old kernel because of all the btrfs modifications. Is it possible using git to add *just* btrfs patches to an older kernel? Sure: use git rebase to apply the patches to the older kernel. ... or use 3.1.2, get ONLY fs/btrfs from Chris' for-linus tree, compile it out-of-tree, and use it to replace the original btrfs.ko. There used to be this: https://btrfs.wiki.kernel.org/articles/b/t/r/Btrfs_source_repositories.html#Building_latest_btrfs_against_a_recent_kernel_with_DKMS But personally it's much easier to just compile it manually without dkms: make -C /lib/modules/`uname -r`/build M=$(pwd) modules -- Fajar
Re: btrfs/git question.
On Tue, Nov 29, 2011 at 10:22 PM, Chris Mason chris.ma...@oracle.com wrote: On Tue, Nov 29, 2011 at 09:33:37AM +0700, Fajar A. Nugraha wrote: On Tue, Nov 29, 2011 at 8:58 AM, Phillip Susi ps...@cfl.rr.com wrote: On 11/28/2011 12:53 PM, Ken D'Ambrosio wrote: Seems I've picked up a wireless regression, and randomly drop my WiFi connection with more recent kernels. While I'd love to try to track down the issue, the sporadic nature makes it difficult. But I don't want to revert to a flat-out old kernel because of all the btrfs modifications. Is it possible using git to add *just* btrfs patches to an older kernel? Sure: use git rebase to apply the patches to the older kernel. ... or use 3.1.2, and get ONLY fs/btrfs from Chris' for-linus tree, compile it out-of-tree, and use it to replace the original btrfs.ko. If you're on a 3.1 kernel, you can pull my for-linus directly on top of it with git pull. I always keep a btrfs tree against the previous kernel so that people can use the latest btrfs goodness without having to use an rc kernel. Yes, thanks for that. My suggestion is simply an alternative (instead of git pull) for people who: - aren't quite familiar with git, but know enough to grab a directory snapshot from gitweb (e.g. http://git.kernel.org/?p=linux/kernel/git/mason/linux-btrfs.git;a=tree;f=fs/btrfs;h=5f51bd7e3b8b6c4825681408450e6580bdbccce1;hb=refs/heads/for-linus) - know how to build a module out-of-tree - are on the latest stable, but don't want to re-compile the whole kernel just to get btrfs fixes -- Fajar
Re: btrfs errors
On Fri, Dec 2, 2011 at 7:34 PM, Mike Thomas bt...@thomii.com wrote: Hi, I've been using btrfs for a while now, I've been utilizing snapshotting nightly/weekly/monthly. During the weekly I also do a backup of the filesystem to an ext4 filesystem. My storage is a linux md raid 5 volume. I've recently noticed these errors in the logs during the backup of the files to the ext4 filesystem. I am running RHEL 6.1 (2.6.32-131.0.15.el6.x86_64) that's like dinosaur age, in btrfs terms :P I'd like to help in any way I can so I thought I'd post here to see if there is anything I can do to help. I'd try compiling the latest 3.2-rc kernel, or Chris' for-linus tree (http://git.kernel.org/?p=linux/kernel/git/mason/linux-btrfs.git;a=shortlog;h=refs/heads/for-linus), which is based on 3.1, and see if the errors go away. -- Fajar
Re: Filesystem acting up during balance
2011/12/9 Ricardo Bánffy rban...@gmail.com: Dec 9 01:06:21 adams kernel: [ 207.912535] usb 1-2.1: reset high speed USB device number 7 using ehci_hcd That's usually a REALLY bad sign. If you can remove the drive from the USB enclosure, I suggest you plug it into an onboard SATA port. That way at least you won't have to deal with USB resets/retries. After that, I'd try copying the data off the disk first (with dd_rescue). You'd need a big enough disk for this. Only after that would I try mounting the copy. -- Fajar
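The copy-the-disk-first step can be sketched as below. dd_rescue is the better tool on a failing disk (it retries, resumes, and shows progress), but it isn't installed everywhere, so this hedged stand-in uses plain dd with `conv=noerror,sync` on scratch files to show the shape of the operation:

```shell
# Stand-in for imaging an ailing disk before experimenting on it.
# The scratch files below are hypothetical substitutes for /dev/sdX and the image.
src=$(mktemp)   # stands in for the failing device
img=$(mktemp)   # the image you mount and poke at afterwards
dd if=/dev/urandom of="$src" bs=4096 count=64 2>/dev/null
# noerror: keep going past read errors; sync: pad short reads so offsets stay aligned
dd if="$src" of="$img" bs=4096 conv=noerror,sync 2>/dev/null
cmp -s "$src" "$img" && echo "image matches source"
```

On a real device you would then run recovery attempts against the image (or a copy of it), never against the original disk.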
Re: btrfs encryption problems
On Thu, Dec 1, 2011 at 5:15 AM, 810d4rk 810d...@gmail.com wrote: I plugged it directly by sata and this is what I get from the 3.1 kernel: [ 581.921417] sdb: sdb1 [ 581.921642] sd 2:0:0:0: [sdb] Attached SCSI disk [ 660.040263] EXT4-fs (dm-4): VFS: Can't find ext4 filesystem ... and then what? Did you try decrypting and mounting it? -- Fajar -- To unsubscribe from this list: send the line unsubscribe linux-btrfs in the body of a message to majord...@vger.kernel.org More majordomo info at http://vger.kernel.org/majordomo-info.html
Re: What is best practice when partitioning a device that holds one or more btr-filesystems
On Thu, Dec 15, 2011 at 4:42 AM, Wilfred van Velzen wvvel...@gmail.com wrote: On Wed, Dec 14, 2011 at 9:56 PM, Gareth Pye gar...@cerberos.id.au wrote: On Thu, Dec 15, 2011 at 5:51 AM, Wilfred van Velzen wvvel...@gmail.com wrote: (I'm not interested in what early adopter users do when they are using rc kernels...) Yet you're going to use a FS without a working fsck? That puts you in early adopter territory to me. Yeah maybe. But I'm still not interested in it regarding partitioning! ;) I'd just use one big partition. That way all subvolumes can share free space, making space usage more efficient. If you decide to go that route, the missing feature is quota and space accounting. At the moment you can't tell which subvol uses how much, or limit it. There are (unmerged) patches for that though. But actually I decided not to use it for the production environment. The missing working fsck is one of the reasons. Although openSUSE supports it, and SUSE Linux Enterprise Server 11 is going to support it with their next SP release in February, and Fedora might use it as default in their next release... Did I miss any? Oracle Linux :D But I'm going to use it at home and probably in some test environments rsn... ;) If you're keeping your options open, try zfsonlinux as well. It might be better suited for certain needs. -- Fajar
Re: Extreme slowdown
On Fri, Dec 16, 2011 at 1:49 AM, Tobias tra...@robotech.de wrote: Hi all! My BTRFS-FS is getting really slow. Reading is ok, writing is slow and deleting is horribly slow. There are many files and many links on the FS. # btrfs filesystem df /srv/storage Data: total=3.09TB, used=3.07TB this is ... what, over 99% full? The slowdown is normal, somewhat. The same thing happens on zfs, which becomes slower when usage is above 80-90%. Maybe it's because there is so much metadata and it needs so many seeks on the discs when deleting? I doubt it. I'd like to delete some of the old files, but it's so horribly slow that I think it may be faster to copy all needed data to a different disc, kill the FS, and move the files back... The machine is a QuadCore with 8GB RAM. Kernel is 3.1+for-linus. Any hints on how I could speed it up? Try: - mounting it with nodatacow: https://btrfs.wiki.kernel.org/articles/f/a/q/FAQ_1fe9.html#Can_copy-on-write_be_turned_off_for_data_blocks.3F - clobbering a big file: https://btrfs.wiki.kernel.org/articles/f/a/q/FAQ_1fe9.html#if_your_device_is_large_.28.3E16gb.29 ... until you have at least 20% free space available. -- Fajar
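The "clobbering a big file" trick from the FAQ amounts to truncating a large file in place, which hands its extents straight back to the allocator without the metadata churn of deleting and recreating it. A minimal sketch on a scratch file:

```shell
# Sketch of freeing space by clobbering (truncating) a big file in place.
f=$(mktemp)
dd if=/dev/zero of="$f" bs=1M count=8 2>/dev/null   # stand-in for the big file
before=$(stat -c %s "$f")
: > "$f"                                            # truncate to zero length, freeing its extents
after=$(stat -c %s "$f")
echo "before=$before after=$after"
```

Repeat against the largest expendable files until the filesystem is back under roughly 80% usage, as suggested above.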
Re: BTRFS fsck apparent errors
On Wed, Jul 4, 2012 at 8:42 PM, David Sterba d...@jikos.cz wrote: On Wed, Jul 04, 2012 at 07:40:05AM +0700, Fajar A. Nugraha wrote: Are there any known btrfs regressions in 3.4? I'm using 3.4.0-3-generic from a ppa, but a normal mount - umount cycle seems MUCH longer compared to how it was on 3.2, and iostat shows the disk is read-IOPS-bound Is it just mount/umount without any other activity? Yes Is the fs fragmented Not sure how to check that quickly (or aged), Over 1 year, so yes almost full, df says 83% used, so probably yes (depending on how you define almost)
~ $ df -h /media/WD-root
Filesystem      Size  Used Avail Use% Mounted on
/dev/sdc2       922G  733G  155G  83% /media/WD-root
~ $ sudo btrfs fi df /media/WD-root/
Data: total=883.95GB, used=729.68GB
System, DUP: total=8.00MB, used=104.00KB
System: total=4.00MB, used=0.00
Metadata, DUP: total=18.75GB, used=1.49GB
Metadata: total=8.00MB, used=0.00
has lots of files? it's a normal 1 TB usb disk, with docs, movies, vm images, etc. No particular lots-of-small-files layout like maildir or anything like that.
# time umount /media/WD-root/
real 0m22.419s
user 0m0.000s
sys 0m0.064s
# /proc/10142/stack --- the PID of the umount process
The process(es) actually doing the work are the btrfs workers; usual suspects are btrfs-cache (free space cache) or btrfs-ino (inode cache), which are writing the cache states back to disk. Not sure about that, since iostat shows it's mostly read, not write. Will try iotop later. I tested also with Chris' for-linus on top of 3.4, same result (really long time to umount). Reverting back to ubuntu's 3.2.0-26-generic, umount only took less than 1 s :P So I guess I'm switching back to 3.2 for now. -- Fajar
Re: file system corruption removal / documentation quandary
On Thu, Jul 12, 2012 at 12:13 PM, eric gisse jowr...@gmail.com wrote: Basically, phoronix showed there is a --repair option. After enabling snapshotting and playing around with the various discussed options, I discovered that --repair and no special mount options was sufficient to get the files removable. I'm curious, whether running it directly on newer kernel (e.g. latest ubuntu kernel-ppa/mainline) will be able to solve the problem, even without btrfsck. Also note that if by snapshotting you mean create LVM snapshots, then you might be in for another surprise, as btrfs doesn't play nice with block devices with the same fs UUID. Don't rely on that as backup option. Now what I'm hoping for is better documentation on btrfsck even if it just boils down to a brief enumeration of the options as that would be better than nothing which is what we have now. Do I need to file a bug or is this sufficient? Edit https://btrfs.wiki.kernel.org/index.php/Btrfsck ? -- Fajar -- To unsubscribe from this list: send the line unsubscribe linux-btrfs in the body of a message to majord...@vger.kernel.org More majordomo info at http://vger.kernel.org/majordomo-info.html
Re: brtfs on top of dmcrypt with SSD - Trim or no Trim
On Thu, Jul 19, 2012 at 1:13 AM, Marc MERLIN m...@merlins.org wrote: TL;DR: I'm going to change the FAQ to say people should use TRIM with dmcrypt because not doing so definitely causes some lesser SSDs to suck, or possibly even fail and lose our data. Longer version: Ok, so several months later I can report back with useful info. Not using TRIM on my Crucial RealSSD C300 256GB is most likely what caused its garbage collection algorithm to fail (killing the drive and all its data), and it was also causing BTRFS to hang badly when I was getting within 10GB of the drive getting full. I reported some problems I had with btrfs being very slow and hanging when I only had 10GB free, and I'm now convinced that it was the SSD that was at fault. On the Crucial RealSSD C300 256GB, from talking to their tech support and other folks who happened to have gotten that 'drive' at work and also got weird unexplained failures, I'm convinced that even with its latest 007 firmware (the firmware it shipped with would just hang the system for a few seconds every so often, so I did upgrade to 007 early on), the drive does very poorly without TRIM when it's getting close to full. If you're going to edit the wiki, I'd suggest you say SOME SSDs might need to use TRIM with dmcrypt. That's because some SSD controllers (e.g. SandForce) perform just fine without TRIM, and in my case TRIM made performance worse. -- Fajar
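For readers who do decide their drive needs TRIM through dm-crypt, the wiring looks roughly like the fragment below. This is a hypothetical sketch: device names and UUIDs are placeholders, and the `discard` crypttab option is the Debian/Ubuntu spelling for allowing TRIM requests to pass through the encryption layer.

```
# /etc/crypttab -- hypothetical entry; 'discard' lets TRIM pass through dm-crypt
cryptdata  UUID=xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx  none  luks,discard

# /etc/fstab -- the filesystem itself must also be mounted with discard
/dev/mapper/cryptdata  /data  btrfs  defaults,discard  0  2
```

As the follow-up notes, benchmark before committing to this: on some controllers synchronous discard makes things slower, not faster.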
Re: Very slow samba file transfer speed... any ideas ?
On Thu, Jul 19, 2012 at 7:39 PM, Shavi N shav...@gmail.com wrote: So btrfs gives a massive difference locally, but that still doesn't explain the slow transfer speeds. Is there a way to test this? I'd try with real data, not /dev/zero. e.g: dd_rescue -b 1M -m 1.4G /dev/sda testfile.img ... or use whatever non-zero data source you have. dd_rescue will give a nice progress bar and speed indicator. Also, run iostat -mx 3 while you're running dd, and while accessing it from samba. In my experience, btrfs is simply slower than ext4. Period. There's no way around it for now. -- Fajar
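The point of avoiding /dev/zero is that zeros compress (and on some stacks short-circuit) so well that the numbers stop meaning anything. A small self-contained sketch of the staging step, using random bytes so compression can't flatter the result:

```shell
# Stage a file of incompressible data, then read it back.
# Sizes are tiny here for illustration; scale bs/count up for a real test.
src=$(mktemp)
dd if=/dev/urandom of="$src" bs=1M count=4 2>/dev/null
bytes=$(wc -c < "$src")
echo "staged $bytes bytes of incompressible data"
# the read pass; on a real test, watch `iostat -mx 3` in another terminal
dd if="$src" of=/dev/null bs=1M 2>/dev/null && echo "read back ok"
```

Comparing the local dd rate against the samba transfer rate, with iostat running in both cases, shows whether the disk or the network path is the bottleneck.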
Re: Upgrading from 2.6.38, how?
On Wed, Jul 25, 2012 at 11:39 AM, Gareth Pye gar...@cerberos.id.au wrote: My proposed upgrade method is: Boot from a live CD with the latest kernel I can find so I can do a few tests: A - run the fsck in read only mode to confirm things look good B - mount read only, confirm that I can read files well C - mount read write, confirm working Install latest OS, upgrade to latest kernel, then repeat above steps. Any likely hiccups with the above procedure and suggested alternatives? I'd simply install the new OS on a new partition/subvol. This is what I did when upgrading from natty - oneiric - precise. IIRC there are some incompatibilities (e.g. space/inode cache disk format?) but newer kernels will just do the right thing: drop the old cache and create a new one. -- Fajar
Re: How can btrfs take 23sec to stat 23K files from an SSD?
On Wed, Aug 1, 2012 at 1:01 PM, Marc MERLIN m...@merlins.org wrote: So, clearly, there is something wrong with the samsung 830 SSD with linux If it were a random crappy SSD from a random vendor, I'd blame the SSD, but I have a hard time believing that samsung is selling SSDs that are slower than hard drives at random IO and 'seeks'. You'd be surprised how badly some vendors can screw up :) First: btrfs is the slowest: gandalfthegreat:/mnt/ssd/var/local# grep /mnt/ssd/var /proc/mounts /dev/mapper/ssd /mnt/ssd/var btrfs rw,noatime,compress=lzo,ssd,discard,space_cache 0 0 Just checking, did you explicitly activate discard? Because on my setup (with a Corsair SSD) it made things MUCH slower. Also, try adding noatime (just in case the slowdown was because du caused many access-time updates) -- Fajar
Re: raw partition or LV for btrfs?
On Sun, Aug 12, 2012 at 11:46 PM, Daniel Pocock dan...@pocock.com.au wrote: I notice this question on the wiki/faq: https://btrfs.wiki.kernel.org/index.php/UseCases#What_is_best_practice_when_partitioning_a_device_that_holds_one_or_more_btr-filesystems and as it hasn't been answered, can anyone make any comments on the subject Various things come to mind:
a) partition the disk, create an LVM partition, and create lots of small LVs, format each as btrfs
b) partition the disk, create an LVM partition, and create one big LV, format as btrfs, make subvolumes
c) what about using btrfs RAID1? Does either approach (a) or (b) seem better for someone who wants the RAID1 feature?
IMHO when the qgroup feature is stable (i.e. adopted by distros, or at least in a stable kernel), then simply creating one big partition (and letting btrfs handle RAID1, if you use it) is better. When 3.6 is out, perhaps? Until then I'd use LVM.
d) what about booting from a btrfs system? Is it recommended to follow the ages-old practice of keeping a real partition of 128-500MB, formatting it as btrfs, even if all other data is in subvolumes as per (b)?
You can have one single partition only and boot directly from that. However btrfs has the same problems as zfs in this regard: - grub can read both, but can't write to either. In other words, no support for grubenv - the best compression method (gzip for zfs, lzo for btrfs) is not supported by grub For the first problem, an easy workaround is just to disable the grub configuration that uses grubenv. Easy enough, and no major functionality loss. The second one is harder for btrfs. zfs allows you to have a separate dataset (i.e. subvolume, in btrfs terms) with different compression, so you can have a dedicated dataset for /boot with a compression setting different from the rest of the datasets. With btrfs you're currently stuck with using the same compression setting for everything, so if you love lzo this might be a major setback. There's also a btrfs-specific problem: it's hard to have a system which has /boot on a separate subvol while managing it with current automatic tools (e.g. update-grub). Because of the second and third problems, I'd recommend you just use a separate partition with ext2/4 for now. -- Fajar
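The separate-/boot recommendation above translates into a layout roughly like the following fstab fragment. It is a hypothetical sketch: device names and the `@root` subvolume name are placeholders, not part of the original advice.

```
# /etc/fstab -- illustrative layout for the "separate ext2/4 /boot" recommendation
/dev/sda1  /boot  ext2   defaults                             0  2
/dev/sda2  /      btrfs  defaults,compress=lzo,subvol=@root   0  1
```

With /boot outside btrfs, grub can use grubenv and any compression choice on the root filesystem stops being a boot-time concern.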
Re: I want to try something on the BTR file system,...
On Mon, Aug 13, 2012 at 8:22 AM, Ben Leverett ben...@live.com wrote: could you please send me a copy of the btr driver/kernel? I wonder if using live.com email has something to do with how you ask that question :P Anyway, depending on what you want to use it for, you might find it easier to just download the latest version of Ubuntu or whatever-your-favorite-linux-distro. Or, if you want to modify the source code, the link that Michael sent provides a good starting point. What is it that you want to try? If your question is more specific, you can get a more specific answer. -- Fajar
Re: raw partition or LV for btrfs?
On Mon, Aug 13, 2012 at 11:19 AM, Kyle Gates kylega...@hotmail.com wrote: Also, I think the current grub2 has lzo support. You're right:
grub2 (1.99-18) unstable; urgency=low
  [ Colin Watson ]
  ...
  * Backport from upstream:
    - Add support for LZO compression in btrfs (LP: #727535).
so Ubuntu has had it since precise, which is roughly the time I switched to zfs for rootfs :P Thanks for letting us know about that. -- Fajar
Re: raw partition or LV for btrfs?
On Tue, Aug 14, 2012 at 8:28 PM, Daniel Pocock dan...@pocock.com.au wrote: Can you just elaborate on the qgroups feature? - Does this just mean I can make the subvolume sizes rigid, like LV sizes? Pretty much. - Or is it per-user restrictions or some other more elaborate solution? No If I create 10 LVs today, with btrfs on each, can I merge them all into subvolumes on a single btrfs later? No If I just create a 1TB btrfs with subvolumes now, can I upgrade to qgroups later? Yes Or would I have to recreate the filesystem? No If I understand correctly, if I don't use LVM, then such move and resize operations can't be done for an online filesystem and it has more risk. You can resize, add, and remove devices from btrfs online without the need for LVM. IIRC LVM has finer granularity though, you can do something like move only the first 10GB now, I'll move the rest later. -- Fajar -- To unsubscribe from this list: send the line unsubscribe linux-btrfs in the body of a message to majord...@vger.kernel.org More majordomo info at http://vger.kernel.org/majordomo-info.html
Re: raw partition or LV for btrfs?
On Tue, Aug 14, 2012 at 9:09 PM, cwillu cwi...@cwillu.com wrote: If I understand correctly, if I don't use LVM, then such move and resize operations can't be done for an online filesystem and it has more risk. You can resize, add, and remove devices from btrfs online without the need for LVM. IIRC LVM has finer granularity though, you can do something like move only the first 10GB now, I'll move the rest later. You can certainly resize the filesystem itself, but without lvm I don't believe you can resize the underlying partition online. I'm pretty sure you can do that with parted. At least, when your version of parted is NOT 2.2. -- Fajar -- To unsubscribe from this list: send the line unsubscribe linux-btrfs in the body of a message to majord...@vger.kernel.org More majordomo info at http://vger.kernel.org/majordomo-info.html
oops with btrfs on zvol
Hi, I'm experimenting with btrfs on top of a zvol block device (using zfsonlinux), and got an oops on a simple mount test. While I'm sure that zfsonlinux is somehow also at fault here (since the same test with zram works fine), the oops only shows btrfs-related things without any usable mention of zfs/zvol. Could anyone help me interpret the kernel logs, i.e. which btrfs-zvol interaction is at fault, so I can pass it on to the zfs guys to work on their side as well? Thanks. The test is creating a sparse 100G block device (zfs create -V 100G -s -o volblocksize=4k rpool/vbd/test1), formatting it (mkfs.btrfs /dev/zvol/rpool/vbd/test1), and mounting it. An oops occurred, and the mount process got stuck. The same thing happens on ubuntu precise's kernel 3.2 and quantal's 3.5. What's interesting is: - if I use ext4 (instead of btrfs) on the zvol, it works just fine - if I add a layer on top of the zvol (losetup, or iscsi export-import) then btrfs works just fine. Syslog shows this (from Ubuntu's 3.2 kernel): #=
Aug 31 20:30:13 DELL kernel: [34307.828311] zd0: unknown partition table
Aug 31 20:30:34 DELL kernel: [34328.129249] device fsid cfd88ff9-def8-4d1f-9435-65becd5fa2b7 devid 1 transid 4 /dev/zd0
Aug 31 20:30:34 DELL kernel: [34328.134001] btrfs: disk space caching is enabled
Aug 31 20:30:34 DELL kernel: [34328.135701] BUG: unable to handle kernel NULL pointer dereference at (null)
Aug 31 20:30:34 DELL kernel: [34328.137200] IP: [a0068439] extent_range_uptodate+0x59/0xe0 [btrfs]
Aug 31 20:30:34 DELL kernel: [34328.138759] PGD 0
Aug 31 20:30:34 DELL kernel: [34328.140248] Oops: [#1] SMP
Aug 31 20:30:34 DELL kernel: [34328.141777] CPU 3
Aug 31 20:30:34 DELL kernel: [34328.141811] Modules linked in: ses enclosure ppp_mppe ppp_async crc_ccitt pci_stub vboxpci(O) vboxnetadp(O) vboxnetflt(O) vboxdrv(O) arc4 ath9k mac80211 radeon uvcvideo snd_hda_codec_hdmi ath9k_common snd_hda_codec_realtek ath9k_hw videodev ipt_MASQUERADE xt_state iptable_nat nf_nat v4l2_compat_ioctl32 i915 nf_conntrack_ipv4 nf_conntrack
iptable_filter nf_defrag_ipv4 ip_tables dm_multipath dummy x_tables bnep ath3k btusb bridge rfcomm bluetooth snd_hda_intel snd_hda_codec stp joydev ath snd_hwdep ttm snd_pcm mei(C) drm_kms_helper drm snd_seq_midi snd_rawmidi snd_seq_midi_event dell_wmi sparse_keymap snd_seq dell_laptop wmi snd_timer i2c_algo_bit video psmouse snd_seq_device cfg80211 snd mac_hid serio_raw soundcore dcdbas snd_page_alloc parport_pc ppdev lp parport binfmt_misc zfs(P) zcommon(P) znvpair(P) zavl(P) zunicode(P) spl(O) ums_realtek uas r8169 btrfs zlib_deflate libcrc32c usb_storage
Aug 31 20:30:34 DELL kernel: [34328.155820]
Aug 31 20:30:34 DELL kernel: [34328.157974] Pid: 15887, comm: btrfs-endio-met Tainted: P C O 3.2.0-29-generic #46-Ubuntu Dell Inc. Dell System Inspiron N4110/03NKW8
Aug 31 20:30:34 DELL kernel: [34328.160283] RIP: 0010:[a0068439] [a0068439] extent_range_uptodate+0x59/0xe0 [btrfs]
Aug 31 20:30:34 DELL kernel: [34328.162700] RSP: 0018:8800351dfde0 EFLAGS: 00010246
Aug 31 20:30:34 DELL kernel: [34328.165099] RAX: RBX: 01401000 RCX:
Aug 31 20:30:34 DELL kernel: [34328.167548] RDX: 0001 RSI: 1401 RDI:
Aug 31 20:30:34 DELL kernel: [34328.169989] RBP: 8800351dfe00 R08: R09: 880067021418
Aug 31 20:30:34 DELL kernel: [34328.172474] R10: 8800b680d010 R11: 1000 R12: 88011d997bf0
Aug 31 20:30:34 DELL kernel: [34328.174922] R13: 01401fff R14: 880031c45c00 R15: 88011aedc9b0
Aug 31 20:30:34 DELL kernel: [34328.177401] FS: () GS:88013e6c() knlGS:
Aug 31 20:30:34 DELL kernel: [34328.179904] CS: 0010 DS: ES: CR0: 8005003b
Aug 31 20:30:34 DELL kernel: [34328.182426] CR2: CR3: 0001291e CR4: 000406e0
Aug 31 20:30:34 DELL kernel: [34328.185005] DR0: DR1: DR2:
Aug 31 20:30:34 DELL kernel: [34328.187602] DR3: DR6: 0ff0 DR7: 0400
Aug 31 20:30:34 DELL kernel: [34328.190246] Process btrfs-endio-met (pid: 15887, threadinfo 8800351de000, task 880031c45c00)
Aug 31 20:30:34 DELL kernel: [34328.193171] Stack:
Aug 31 20:30:34 DELL kernel: [34328.196542] 8800351dfdf0 880088ff6638 8800b61953c0 88011cbbb000
Aug 31 20:30:34 DELL kernel: [34328.199469] 8800351dfe10 a004224d 8800351dfe40 a00422d6
Aug 31 20:30:34 DELL kernel: [34328.202295] 8800351dfe88 88011aedc960 8800351dfe88 8800351dfe98
Aug 31 20:30:34 DELL kernel: [34328.204685] Call Trace:
Aug 31 20:30:34 DELL kernel: [34328.206645] [a004224d] bio_ready_for_csum.isra.107+0xbd/0xc0 [btrfs]
Aug 31 20:30:34 DELL kernel: [34328.208591] [a00422d6] end_workqueue_fn+0x86/0xa0 [btrfs]
Aug 31 20:30:34 DELL kernel: [34328.210565] [a00714e0]
Re: enquiry about defrag
On Sun, Sep 9, 2012 at 2:49 PM, ching lschin...@gmail.com wrote: On 09/09/2012 08:30 AM, Jan Steffens wrote: On Sun, Sep 9, 2012 at 2:03 AM, ching lschin...@gmail.com wrote: 2. Is there any command for the fragmentation status of a file/dir ? e.g. fragment size, number of fragments. Use the filefrag command, part of e2fsprogs. my image is a 16G sparse file, after defragment, it still has 101387 extents, is it normal? Is compression enabled? If so, yes, it's normal. -- Fajar -- To unsubscribe from this list: send the line unsubscribe linux-btrfs in the body of a message to majord...@vger.kernel.org More majordomo info at http://vger.kernel.org/majordomo-info.html
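As a companion to the filefrag suggestion: a sparse image like the one discussed has an apparent size far larger than its allocated size, and compression further multiplies the extent count filefrag reports. A small sketch of inspecting a sparse file (run `filefrag -v file` afterwards to list the actual extents):

```shell
# Create a sparse "16G" image and compare apparent vs allocated size.
f=$(mktemp)
truncate -s 16G "$f"                   # sparse: no blocks actually allocated
apparent=$(stat -c %s "$f")            # apparent size in bytes
allocated_kb=$(du -k "$f" | cut -f1)   # blocks actually on disk, in KiB
echo "apparent=$apparent allocated_kb=$allocated_kb"
```

On btrfs with compression enabled, each compressed chunk shows up as its own extent, which is why a defragmented-but-compressed image can still report ~100k extents.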
Re: Workaround for hardlink count problem?
On Mon, Sep 10, 2012 at 4:12 PM, Martin Steigerwald mar...@lichtvoll.de wrote: On Saturday, 8 September 2012, Marc MERLIN wrote: I was migrating a backup disk to a new btrfs disk, and the backup had a lot of hardlinks to collapse identical files to cut down on inode count and disk space. Then, I started seeing: […] Has someone come up with a cool way to work around the too many link error and only when that happens, turn the hardlink into a file copy instead? (that is when copying an entire tree with millions of files). What about: - copy first backup version - btrfs subvol create first next - copy next backup version - btrfs subvol create previous next Wouldn't btrfs subvolume snapshot, plus rsync --inplace, be more useful here? That is, if the original hardlinks were caused by multiple backup versions of the same file. Personally, if I need a feature not yet implemented in btrfs, I'd just switch to something else for now, like zfs, and revisit btrfs later once the needed features have been merged. -- Fajar
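For context on what the backup tool is doing: collapsing identical files into hardlinks raises the inode's link count, and btrfs at the time of this thread capped the number of hardlinks it could store in a single directory far below ext4's limit, hence the "too many links" error. A tiny illustration of the mechanism on scratch files:

```shell
# Hardlinks: several directory entries pointing at one inode.
d=$(mktemp -d)
echo "backup payload" > "$d/day1"
ln "$d/day1" "$d/day2"     # second name, same inode, same data blocks
ln "$d/day1" "$d/day3"
stat -c %h "$d/day1"       # link count on the inode: 3
```

The snapshot-plus-`rsync --inplace` approach suggested above sidesteps the limit entirely, because deduplication across backup generations comes from copy-on-write snapshots instead of link counts.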
Re: specify UUID for btrfs
On Thu, Sep 13, 2012 at 1:07 PM, ching lu lschin...@gmail.com wrote: Is it possible to specify a UUID for btrfs when creating the filesystem? Not that I know of or changing it when it is offline? This one is a definite no. I have several scripts/settings files which have hardcoded UUIDs, and I do not want to update them every time I restore a backup. Using a label would probably make more sense for that purpose. It can be set and changed later. -- Fajar
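A hedged sketch of the label-based approach in a script: resolve the device by filesystem label instead of hardcoding a UUID. The label `backupfs` is hypothetical; `blkid -L` prints the device node carrying that label, and udev also exposes labels under /dev/disk/by-label, which the fallback below uses so the script stays well-defined when no such label exists on the test machine.

```shell
# Resolve a device by label rather than a hardcoded UUID.
# 'backupfs' is a hypothetical label set at mkfs time or changed later.
DEV=$(blkid -L backupfs 2>/dev/null || true)
DEV=${DEV:-/dev/disk/by-label/backupfs}   # udev's by-label path as a fallback
echo "$DEV"
```

Since the label survives a restore (or can be re-applied to the restored filesystem), scripts keyed on it keep working without edits.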
Re: Experiences: Why BTRFS had to yield for ZFS
On Wed, Sep 19, 2012 at 2:28 PM, Casper Bang casper.b...@gmail.com wrote: Anand Jain Anand.Jain at oracle.com writes: archive-log-apply script - if you could, can you share the script itself ? or provide more details about the script. (It will help to understand the work-load in question). Our setup entails a whole bunch of scripts, but the apply script looks like this (orion is the production environment, pandium is the shadow): http://pastebin.com/k4T7deap The script invokes rman passing rman_recover_database.rcs: IIRC there were some patches post-3.0 which relate to sync. If the oracle db uses sync writes (or calls sync somewhere, which it should), it might help to re-run the test with a more recent kernel. The kernel-ml repository might help. Ext4 starts out with a realtime to SCN ratio of about 3.4 and ends down around a factor 2.2. ZFS starts out with a realtime to SCN ratio of about 7.5 and ends down around a factor 4.4. So zfsonlinux is actually faster than ext4 for that purpose? Cool! Btrfs starts out with a realtime to SCN ratio of about 2.2 and ends down around a factor 0.8. This of course means we will never be able to catch up with production, as btrfs can't apply these as fast as they're created. It was even worse with btrfs on our 10xSSD server, where 20 min. of realtime work would end up taking some 5h to get applied (factor 0.06), obviously useless to us. Just wondering, did you use the discard option by any chance? In my experience it makes btrfs MUCH slower. -- Fajar
Re: Tunning - cache write (database)
On Mon, Oct 1, 2012 at 8:27 PM, Cesar Inacio Martins cesar_inacio_mart...@yahoo.com.br wrote: My problem: * Using btrfs + compression, a flush of 60 MB/s takes 4 minutes (during these 4 minutes there is constant I/O of +- 4MB/s on the disks) (flush from an Informix database) * OpenSuse 12.1 64bits, running over VMware ESXi 5 * Btrfs version: btrfsprogs-0.19-43.1.2.x86_64 * Kernel: Linux jdivm06 3.1.10-1.16-desktop #1 SMP PREEMPT Wed Jun 27 My question, what I believe will help to avoid this long flush: * Is there some way to force this flush to stay entirely in the memory cache, and then have a btrfs background process flush it to disk? ... Security and recovery aren't a priority for now, because this is part of a database bulk load... after it finishes, integrity will be desirable (not an obligation, since this is a test environment). For now, performance is the main requirement... I suggest you start by reading http://www.mail-archive.com/linux-btrfs@vger.kernel.org/msg18827.html After that, PROBABLY start your database by preloading libeatmydata to disable fsync completely. On a side note, zfs has a sync property which, when set to disabled, has pretty much the same effect as libeatmydata. -- Fajar
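What libeatmydata buys in this scenario is the removal of fsync from the write path, since fsync is usually the expensive part of a flush-heavy database load. The sketch below contrasts the same small write with and without a trailing fsync; preloading libeatmydata (e.g. via the `eatmydata` wrapper, if the package is installed) effectively turns the first case into the second by making fsync a no-op. Timings here are illustrative, not a benchmark.

```shell
# Compare a write that forces data to stable storage with one that doesn't.
f=$(mktemp)
t0=$(date +%s%N)
dd if=/dev/zero of="$f" bs=4k count=256 conv=fsync 2>/dev/null   # fsync at the end
t1=$(date +%s%N)
dd if=/dev/zero of="$f" bs=4k count=256 2>/dev/null              # page cache only
t2=$(date +%s%N)
echo "with fsync: $(( (t1 - t0) / 1000000 )) ms, without: $(( (t2 - t1) / 1000000 )) ms"
```

The obvious caveat applies: with fsync disabled, a crash during the bulk load loses data, which is exactly the trade-off the OP said is acceptable for this test environment.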
Re: Tunning - cache write (database)
On Tue, Oct 2, 2012 at 3:16 AM, Clemens Eisserer linuxhi...@gmail.com wrote: I suggest you start by reading http://www.mail-archive.com/linux-btrfs@vger.kernel.org/msg18827.html After that, PROBABLY start your database by preloading libeatmydata to disable fsync completely. Which will cure the symptoms, not the issue itself - I remember the same advice was given for Reiser4 back then ;) Usually for non-toy use-cases data is too valuable to just disable fsync. The OP DID say he doesn't really care about security, recovery, or integrity (or at least, it's not an obligation) :D Other than trying the latest -rc and using libeatmydata, I can't see what else can be done to improve current db performance on btrfs. As the list archive shows, zfs is currently MUCH more suitable for that. -- Fajar
Re: btrfs causing reboots and kernel oops on SL 6 (RHEL 6)
On Sat, Jun 4, 2011 at 11:33 AM, Joel Pearson japear...@agiledigital.com.au wrote: Hi, I'm using SL 6 (RHEL 6) and I've been playing around with running PostgreSQL on btrfs. Snapshotting works ok, but the computer keeps rebooting without warning (can be 5 mins or 1.5 hours); finally I actually managed to get a kernel crash instead of just a reboot. I took a picture of the screen: http://imageshack.us/photo/my-images/716/img0143y.jpg/ The important bits are:
IP: [a032c471] btrfs_print_leaf+0x31/0x820 [btrfs]
PGD 0
Oops: [#1] SMP
last sysfs file: /sys/devices/virtual/block/dm-3/dm/name
The crashes aren't predictable either. It doesn't always happen when I do a snapshot or anything like that. Is this a known problem that is fixed in a later kernel or something like that? Which kernel is this? If it's the default SL/RHEL 2.6.32 kernel, then you should try upgrading first. http://elrepo.org/tiki/kernel-ml is a good choice. It's highly unlikely that anyone would be willing to look at bugs on that archaic (in btrfs terms) kernel. -- Fajar
Re: Naming of subvolumes
On Sat, Oct 27, 2012 at 8:58 AM, cwillu cwi...@cwillu.com wrote: I haven't tried btrfs send/receive for this purpose, so I can't compare. But btrfs subvolume set-default is faster than the release of my finger from the return key. And it's easy enough that the user could do it themselves if they had reasons for regression to a snapshot that differ from the automagic determination of the upgrade pass/fail. The one needed change, however, is to get /etc/fstab to use an absolute reference for home. Chris Murphy I'd argue that everything should be absolute references to subvolumes (/@home, /@, etc), and neither set-default nor subvolume IDs should be touched. There's no need, as you can simply mv those around (even while mounted). More importantly, it doesn't result in a case where the fstab in one snapshot points its mountpoint to a different snapshot, with all the hilarity that would cause over time, and it also allows multiple distros to be installed on the same filesystem without having them stomp on each other's set-defaults: /@fedora, /@rawhide, /@ubuntu, /@home, etc. What I do with zfs, which might also be applicable on btrfs: - Have a separate dataset to install grub: poolname/boot. This can also be a dedicated partition, if you want. The sole purpose of this partition/dataset is to select which dataset's grub.cfg to load next (using the configfile directive). The grub.cfg here is edited manually. - Have different datasets for each versioned OS (e.g. before and after upgrades): poolname/ROOT/ubuntu-1, poolname/ROOT/ubuntu-2, etc. Each dataset is independent of the others and contains its own /boot (complete with grub/grub.cfg, kernel, and initrd). grub.cfg on each dataset selects its own dataset to boot using the bootfs kernel command line parameter. - Have a common home for all environments: poolname/home - Have zfs set the mountpoint (or mount it in the initramfs, in the root's case), so I can get away with an empty fstab. 
- Do upgrades/modifications in the currently-booted root environment, but create a clone of the current environment (and give it a different name) so I can roll back to it if needed. It works great for me so far, since: - each boot environment is portable enough to move around when needed, with only about four config files needing to be changed (e.g. grub.cfg) when moving between different computers, or when renaming a root dataset. - I can rename each root environment easily, or even move it to a different pool/disk when needed. - I can move back and forth between multiple versions of the boot environment (all are ubuntu so far, because IMHO it currently has the best zfs root support). So back to the original question, I'd suggest NOT using either send/receive or set-default. Instead, set up multiple boot environments (e.g. old version, current version) and let the user choose which one to boot using a menu. However, for this to work, grub (the bootloader, and userland programs like update-grub) needs to be able to refer to each grub.cfg/kernel/initrd in a global manner regardless of what the current default subvolume is (zfs' grub code uses something like /poolname/dataset_name/@/path/to/file/in/dataset). -- Fajar
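The clone-before-upgrade step described above could look something like this (a sketch only; the dataset names are illustrative, not the exact ones from this setup, and the grub.cfg edits still have to be done by hand as described):

# Snapshot the current boot environment, then clone it under a new name
zfs snapshot poolname/ROOT/ubuntu-1@pre-upgrade
zfs clone poolname/ROOT/ubuntu-1@pre-upgrade poolname/ROOT/ubuntu-2

# Point the clone's own grub.cfg at its own dataset, add a matching
# configfile menu entry to the top-level grub.cfg, then upgrade in one
# environment and keep the other as the rollback target.

Because the clone shares unmodified blocks with its origin snapshot, creating it is nearly instant and costs almost no space until the two environments diverge.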
Re: Naming of (bootable) subvolumes
On Sun, Oct 28, 2012 at 12:22 AM, Chris Murphy li...@colorremedies.com wrote: On Oct 26, 2012, at 9:03 PM, Fajar A. Nugraha l...@fajar.net wrote: So back to the original question, I'd suggest NOT using either send/receive or set-default. Instead, set up multiple boot environments (e.g. old version, current version) and let the user choose which one to boot using a menu. Is it possible to make a functioning symbolic or hard link of a subvolume? Nope, I don't think so. I'm fine with current and previous options. More than that seems unnecessary. But then, how does the user choose? With the up and down arrows :) What's the UI? The grub boot menu. Is this properly the domain of GRUB2 or something else? In my setup I use grub2's configfile ability, which basically says "go evaluate this other menu config file". Each boot environment (BE, the term that Solaris uses) has a different entry in the main grub.cfg, which loads the BE's corresponding grub.cfg. On BIOS machines, perhaps GRUB. On UEFI, I'd say distinctly not GRUB (I think it's a distinctly bad idea to have a combined boot manager and bootloader in a UEFI context, but that's a separate debate). I don't use UEFI. But the general idea is to have one bootloader which can load additional config files. And the location of that additional config file depends on which BE the user wants to boot. On this system, grub-mkconfig produces a grub.cfg only for the system I'm currently booted from. It does not include any entries for fedora18/boot, fedora18/root, even though they are well within the normal search path. And the reference used is relative, i.e. the kernel parameter in the grub.cfg is rootflags=subvol=root If it were to create entries potentially for every snapshotted system, it would be a very messy grub.cfg indeed. It stands to reason that each distro will continue to have their own grub.cfg. No arguments there. Even in my setup, when I run update-grub, it will only update its own grub.cfg, and leave the main grub.cfg untouched. 
This is what my main grub.cfg looks like:
#===
set timeout=2
menuentry 'Ubuntu - 20120905 boot menu' {
  configfile /ROOT/precise-5/@/boot/grub/grub.cfg
}
menuentry 'Ubuntu - 20120814 boot menu' {
  configfile /ROOT/precise-4/@/boot/grub/grub.cfg
}
#===
Each BE's grub.cfg (e.g. the one under the ROOT/precise-5 dataset) is just your typical Ubuntu grub.cfg, with only references to the kernel/initrd under that dataset. For BIOS machines, it could be useful if a single core.img containing a single standardized prefix specifying a grub location could be agreed upon. And then merely changing the set-default subvolume would allow different distro grub.cfg's to be found, read and workable with the relative references now in place (except for home, which likely needs to be mounted using subvolid). IMHO the biggest difference is that grub's zfsonlinux support, even though zfs has the bootfs pool property, has a way to reference ALL versions of a file (including grub.cfg/kernel/initrd) at boot time. This way you don't even need to change bootfs whenever you want to change to a boot environment; you simply choose (or write) a different grub stanza to boot. If we continue to rely on current btrfs grub support, unfortunately we can't have the same thing, and the closest thing would be set-default. Which IMHO is VERY messy. -- Fajar
Re: [Request for review] [RFC] Add label support for snapshots and subvols
On Fri, Nov 2, 2012 at 5:16 AM, cwillu cwi...@cwillu.com wrote: btrfs fi label -t /btrfs/snap1-sv1 Prod-DB-sand-box-testing Why is this better than: # btrfs su snap /btrfs/Prod-DB /btrfs/Prod-DB-sand-box-testing # mv /btrfs/Prod-DB-sand-box-testing /btrfs/Prod-DB-production-test # ls /btrfs/ Prod-DB Prod-DB-production-test ... because it would mean the possibility to decouple the subvol name from whatever-data-you-need (in this case, a label). My request, though, is to just implement properties, and USER properties, like what we have in zfs. This seems to be a cleaner, saner approach. For example, this is on Ubuntu + zfsonlinux:
# zfs create rpool/u
# zfs set user:label="Some test filesystem" rpool/u
# zfs get creation,user:label rpool/u
NAME     PROPERTY    VALUE                 SOURCE
rpool/u  creation    Fri Nov 2 5:24 2012   -
rpool/u  user:label  Some test filesystem  local
More info about zfs user properties here: http://docs.oracle.com/cd/E19082-01/817-2271/gdrcw/index.html -- Fajar
Re: [Request for review] [RFC] Add label support for snapshots and subvols
On Fri, Nov 2, 2012 at 5:32 AM, Hugo Mills h...@carfax.org.uk wrote: On Fri, Nov 02, 2012 at 05:28:01AM +0700, Fajar A. Nugraha wrote: On Fri, Nov 2, 2012 at 5:16 AM, cwillu cwi...@cwillu.com wrote: btrfs fi label -t /btrfs/snap1-sv1 Prod-DB-sand-box-testing Why is this better than: # btrfs su snap /btrfs/Prod-DB /btrfs/Prod-DB-sand-box-testing # mv /btrfs/Prod-DB-sand-box-testing /btrfs/Prod-DB-production-test # ls /btrfs/ Prod-DB Prod-DB-production-test ... because it would mean the possibility to decouple the subvol name from whatever-data-you-need (in this case, a label). My request, though, is to just implement properties, and USER properties, like what we have in zfs. This seems to be a cleaner, saner approach. For example, this is on Ubuntu + zfsonlinux:
# zfs create rpool/u
# zfs set user:label="Some test filesystem" rpool/u
# zfs get creation,user:label rpool/u
NAME     PROPERTY    VALUE                 SOURCE
rpool/u  creation    Fri Nov 2 5:24 2012   -
rpool/u  user:label  Some test filesystem  local
Don't we already have an equivalent to that with user xattrs? Hugo. Anand did say one way to implement the label is by using attr, so +1 from me for that approach. -- Fajar
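As a rough illustration of the xattr-based approach Hugo alludes to, labels could live in the user.* namespace on the subvolume's root directory and be managed with the standard attr tools. The attribute name user.label is made up for this example, not an agreed-upon convention:

# Attach a free-form label to a subvolume's root directory...
setfattr -n user.label -v "Prod-DB-sand-box-testing" /btrfs/snap1-sv1

# ...and read it back later
getfattr -n user.label /btrfs/snap1-sv1

This gets the decoupling of label from subvolume name without any new on-disk format or tooling, which is presumably why it drew the +1.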
Re: Production use with vanilla 3.6.6
On Mon, Nov 5, 2012 at 7:07 PM, Stefan Priebe - Profihost AG s.pri...@profihost.ag wrote: Hello list, is btrfs ready for production use in 3.6.6? Or should I backport fixes from 3.7-rc? Is it planned to have a stable kernel which will get all btrfs fixes backported? I would say no to both, but you should check with the distros that support btrfs (Oracle Linux and SLES). In particular, whether they backport fixes, and what exactly supported status gives you when you buy support for that distro. -- Fajar
Re: fstrim on BTRFS
On Thu, Dec 29, 2011 at 11:37 AM, Roman Mamedov r...@romanrm.ru wrote: On Thu, 29 Dec 2011 11:21:14 +0700 Fajar A. Nugraha l...@fajar.net wrote: I'm trying fstrim and my disk is now pegged at write IOPS. Just wondering if maybe a btrfs fi balance would be more useful, since: Modern controllers (like the SandForce you mentioned) do their own wear leveling 'under the hood', i.e. the same user-visible sectors DO NOT necessarily map to the same locations on the flash at all times; and introducing 'manual' wear leveling by additional rewriting is not a good idea, it's just going to wear it out more. I know that modern controllers have their own wear leveling, but AFAIK they basically: (1) have reserved a certain amount of space for wear leveling purposes (2) when a write request comes, use new sectors from the pool and return the old sectors to the pool (doing garbage collection like trim/rewrite in the process) (3) can't re-use sectors that are currently being used and not rewritten (e.g. sectors used by OS files) If (3) is still valid, then the only way to reuse those sectors is by forcing a rewrite (e.g. using btrfs fi defrag). So the question is, is (3) still valid? -- Fajar
Re: fstrim on BTRFS
On Thu, Dec 29, 2011 at 11:02 AM, Li Zefan l...@cn.fujitsu.com wrote: Martin Steigerwald wrote: With 3.2-rc4 (probably earlier), Ext4 seems to remember what areas it trimmed: But BTRFS does not: There's no such plan, but it's do-able, and I can take care of it. There's an issue though. For btrfs this issue can't be solved without a disk format change that will break older kernels, but only 3.2-rcX kernels will be affected if we push the following change into mainline before the 3.2 release. Slightly off-topic, how useful would trim be for btrfs when using newer SSDs which have their own garbage collection and wear leveling (e.g. sandforce-based)? I'm trying fstrim and my disk is now pegged at write IOPS. Just wondering if maybe a btrfs fi balance would be more useful, since: - with trim, used space will remain used. Thus future writes will only utilize space marked as free, making it wear faster - with btrfs fi balance, btrfs will move the data around so (to some degree) the currently-unused space will be used, and currently-used space will be unused, which will improve wear leveling. -- Fajar
Re: fstrim on BTRFS
On Wed, Dec 28, 2011 at 11:57 PM, Martin Steigerwald mar...@lichtvoll.de wrote: But BTRFS does not: merkaba:~ fstrim -v / /: 4431613952 bytes were trimmed merkaba:~ fstrim -v / /: 4341846016 bytes were trimmed and apparently it can't trim everything. Or maybe my kernel is just too old.
$ sudo fstrim -v /
2258165760 Bytes was trimmed
$ df -h /
Filesystem  Size  Used  Avail  Use%  Mounted on
/dev/sda6    50G   34G    12G   75%  /
$ mount | grep /
/dev/sda6 on / type btrfs (rw,noatime,subvolid=258,compress-force=lzo)
so only about 2G out of 12G can be trimmed. This is on kernel 3.1.4. -- Fajar
Re: fstrim on BTRFS
On Thu, Dec 29, 2011 at 11:21 AM, Fajar A. Nugraha l...@fajar.net wrote: I'm trying fstrim and my disk is now pegged at write IOPS. Just wondering if maybe a btrfs fi balance would be more useful, Sorry, I meant btrfs fi defrag -- Fajar
Re: fstrim on BTRFS
On Thu, Dec 29, 2011 at 4:39 PM, Li Zefan l...@cn.fujitsu.com wrote: Fajar A. Nugraha wrote: On Wed, Dec 28, 2011 at 11:57 PM, Martin Steigerwald mar...@lichtvoll.de wrote: But BTRFS does not: merkaba:~ fstrim -v / /: 4431613952 bytes were trimmed merkaba:~ fstrim -v / /: 4341846016 bytes were trimmed and apparently it can't trim everything. Or maybe my kernel is just too old. $ sudo fstrim -v / 2258165760 Bytes was trimmed $ df -h / Filesystem Size Used Avail Use% Mounted on /dev/sda6 50G 34G 12G 75% / $ mount | grep / /dev/sda6 on / type btrfs (rw,noatime,subvolid=258,compress-force=lzo) so only about 2G out of 12G can be trimmed. This is on kernel 3.1.4. That's because only free space in block groups will be trimmed. Btrfs allocates space from block groups, and when there's no space available, it will allocate a new block group from the pool. In your case there's ~10G in the pool. Thanks for your response. You can do a btrfs fi df /, and you'll see the total size of existing block groups.
$ sudo btrfs fi df /
Data: total=43.47GB, used=31.88GB
System, DUP: total=8.00MB, used=12.00KB
System: total=4.00MB, used=0.00
Metadata, DUP: total=3.25GB, used=619.88MB
Metadata: total=8.00MB, used=0.00
That should mean existing block groups total at least 46GB, right? In which case my pool (a 50G partition) should only have about 4GB of space not allocated to block groups. The numbers don't seem to match. You can empty the pool by: # dd if=/dev/zero of=/mytmpfile bs=1M Then release the space (but it won't return back to the pool): # rm /mytmpfile # sync Is there a bad side effect of doing so? For example, since all free space in the pool would be allocated to the data block group, would that mean my metadata block group is capped at 3.25GB? Or could some data block groups be converted to metadata, and vice versa? 
-- Fajar
Re: Compression, on filesystem or volume?
On Thu, Dec 29, 2011 at 5:51 PM, Remco Hosman re...@hosman.xs4all.nl wrote: Hi, Something I could not find in the documentation I managed to find: if you mount with compress=lzo and rebalance, is compression on for that whole filesystem or only a single volume? E.g., can I have a @boot volume uncompressed and @ and @home compressed? Last time I asked a similar question, the answer was no; it's per filesystem. However, you can change the compression of individual files between zlib/lzo using btrfs fi defragment -c, regardless of what the filesystem is currently mounted with. -- Fajar
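For instance, the per-file recompression described above might look like this (a sketch; the file paths are made up, and note that in btrfs-progs the algorithm name is attached directly to -c rather than given as a separate argument):

# Recompress one file with lzo and another with zlib, independently of
# whatever compress= mount option is currently in effect
btrfs filesystem defragment -clzo /mnt/@home/user/bigfile
btrfs filesystem defragment -czlib /mnt/@/var/cache/somefile

This only rewrites the named files, so it is a workaround for per-subvolume compression rather than a real per-volume setting.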