Re: filesystem full when it's not? out of inodes? huh?

2012-02-26 Thread Duncan
Fahrzin Hemmati posted on Sat, 25 Feb 2012 18:37:24 -0800 as excerpted:

 On 2/25/2012 6:16 PM, Brian J. Murrell wrote:
 Others might know of a way of changing the allocation size to less
 than 1GB, but otherwise I recommend switching to something more stable
 like ext4/reiserfs/etc.

 So btrfs is still not yet suitable to be a root/usr/var filesystem,
 even in kernel 3.0.0?

 Nope, still in heavy development, though you should upgrade to 3.2.
 Also, the devs mentioned in several places it's not friendly to small
 drives, and I'm pretty sure 5GB is considered tiny.
 
 I don't think you need to separate /usr out to its own disk. You could
 instead create a single filesystem with multiple subvolumes for /, /var,
 /usr, etc. When you have Ubuntu use btrfs for /, it creates @ and @home
 for / and /home, respectively, so it's a common setup and easy to find
 help for.

It's astonishing to me how many people come in here complaining 
about problems with a filesystem whose kernel option says 

Title:

Btrfs filesystem (EXPERIMENTAL) Unstable disk format

Description (excerpt):

Btrfs is highly experimental, and THE DISK FORMAT IS NOT YET
FINALIZED.  You should say N here unless you are interested in
testing Btrfs with non-critical data.


So, does testing with non-critical data sound like it's appropriate for 
running on your (OP's) root or home (or anything else you might be using 
it for) filesystem, the way you're doing it now?  If not, what are you 
doing on btrfs?  You'd better either change your choice of filesystem, or 
your update and backup strategy, to bring it more in line with the testing 
that's an appropriate use of btrfs at this point.

About twice a week we also get people on the list asking about recovery 
tools because the filesystem won't mount, and they apparently had no 
backups.  WTF?  Not only should they have had backups (that's normal with 
every filesystem, you always have backups if the data's valuable to you, 
and always test them), but btrfs isn't appropriate for anything besides 
testing data that they *EXPECT* the file system to chew up and spit out 
at them!  IOW, not only should there be backups, but the btrfs copy 
should be considered the non-primary copy, essentially garbage, because 
there's a very real chance that's what it'll be if something goes wrong!

Of course, anyone who *IS* using it as intended at this point will be 
following the lists, know what bugs are being seen frequently, know the 
status of the various recovery tools, etc.  They won't HAVE to ask, as 
they'll already KNOW, if they're using the filesystem as intended: for 
testing, reporting bugs, and possibly submitting patches if they have the 
skills, in coordination with the devs and other testers on the list.

Meanwhile, if you DO decide to continue with btrfs for testing...

There's a wiki covering the ENOSPC problem, discussing kernel and tools 
status (keep up to date; the kernel especially is in HEAVY development, 
and even running the latest stable kernel means you're already missing 
the bug-fixes and better stability in the current development kernel!), 
etc.

Actually, there are two wikis: the kernel.org wiki, which is static 
content (not updated and thus not up to date) ATM due to the kernel.org 
break-in a few months ago, and a temporary one that's rather more current.

Here's the link to both:

Kernel.org wiki (outdated static ATM):
https://btrfs.wiki.kernel.org/

Temporary up-to-date wiki:
http://btrfs.ipv5.de/index.php?title=Main_Page

There, the second paragraph of the main page says:

Btrfs is under heavy development, but every effort is being made to keep 
the filesystem stable and fast. Because of the speed of development, you 
should run the latest kernel you can (either the latest release kernel 
from kernel.org, or the latest -rc kernel).

So YES, a 3.0 kernel is *OLD*!  Even the latest stable 3.2.x kernel won't 
have the latest btrfs fixes.  For that, you need the latest 3.3-rc if you 
aren't running git kernels and updating between the rcs!

The FAQ page there has a whole section titled Help! I ran out of disk 
space!  In particular, see the If your device is small subsection.

http://btrfs.ipv5.de/index.php?title=FAQ

Also, down further in the FAQ, see

Why does df show incorrect free space for my RAID volume?
Aaargh! My filesystem is full, and I've put almost nothing into it!
Why are there so many ways to check the amount of free space?

(That's one section covering the three.)  Plus the answers to these two 
as well:

Why is free space so complicated?

Why is there so much space overhead?

Meanwhile, what you SHOULD have read before even STARTING with btrfs is 
the getting started page:

http://btrfs.ipv5.de/index.php?title=Getting_started

Right at the top:
[begin quote]

btrfs is a fast-moving target. There are typically a great many bug fixes 
and enhancements between one kernel release and the next. Therefore:

* If you have btrfs filesystems, run the latest 

Re: filesystem full when it's not? out of inodes? huh?

2012-02-26 Thread Helmut Hullen
Hello, Duncan,

You wrote on 26.02.12:

 It's astonishing to me the number of people that come in here
 complaining about problems with a filesystem the kernel option of
 which says

 Title:

 Btrfs filesystem (EXPERIMENTAL) Unstable disk format

 Description (excerpt):

 Btrfs is highly experimental, and THE DISK FORMAT IS NOT YET
 FINALIZED.  You should say N here unless you are interested in
 testing Btrfs with non-critical data.

Just take a look at Fedora.
The maintainers had planned to use btrfs as the standard filesystem for  
Fedora 16 (but didn't), and they had planned to use btrfs for  
Fedora 17, but now seem to be hesitating; see

  https://fedorahosted.org/fesco/ticket/704

There are some other distributions which also seem to (perhaps) follow  
the bleeding edge.

And therefore end users believe that using btrfs is safe.
(I've learned my lesson ...)

Just for the record: using btrfs (once it runs stably) may solve many  
other problems on my system. I'm still hoping.

Best regards!
Helmut
--
To unsubscribe from this list: send the line unsubscribe linux-btrfs in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html


Re: filesystem full when it's not? out of inodes? huh?

2012-02-26 Thread Hugo Mills
On Sat, Feb 25, 2012 at 06:10:32PM -0800, Fahrzin Hemmati wrote:
 btrfs is horrible for small filesystems (like a 5GB drive). df -h
 says you have 967MB available, but btrfs (at least by default)
 allocates 1GB at a time to data/metadata. This means that your 10MB
 file is too big for the current allocation and requires a new data
 chunk, or another 1GB, which you don't have.
 
 Others might know of a way of changing the allocation size to less
 than 1GB, but otherwise I recommend switching to something more
 stable like ext4/reiserfs/etc.

   The option that nobody's mentioned yet is to use mixed mode. This
is the -M or --mixed option when you create the filesystem. It's
designed specifically for small filesystems, and removes the
data/metadata split for more efficient packing.
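For the record, mixed mode is selected when the filesystem is created and cannot be turned off afterwards; a minimal sketch, with /dev/sdX1 as a hypothetical device (mkfs destroys whatever is on it):

```shell
# Create a btrfs filesystem with mixed block groups (-M / --mixed):
# data and metadata share the same chunks, avoiding the separate
# 1GB-at-a-time allocations that starve small filesystems.
mkfs.btrfs -M /dev/sdX1
```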

 On 2/25/2012 5:55 PM, Brian J. Murrell wrote:
 I have a 5G /usr btrfs filesystem on a 3.0.0-12-generic kernel that is
 returning ENOSPC when it's only 75% full:

   As mentioned before, you probably need to upgrade to 3.2 or 3.3-rc5
anyway. There were quite a few fixes in the ENOSPC/allocation area
since then.

 FilesystemSize  Used Avail Use% Mounted on
 /dev/mapper/rootvol-mint_usr
5.0G  2.8G  967M  75% /usr
 
 And yet I can't even unpack a linux-headers package on to it, which
 should be nowhere near 967MB.  dpkg says it will need 10MB:
 
 So this starts to feel like some kind of inode count limitation.  But I
 didn't think btrfs had inode count limitations.  Here's the df stats on
 the filesystem:

   It doesn't have inode limitations. It does, however, have some
peculiar limitations on the use of space. Specifically, the
copy-on-write nature has some implications.

   When you write *anything* to the FS, it does a CoW copy of
everything involved in the write. This includes all of the related
metadata, including the path from the B-tree leaves being touched up
to the root of the tree [in each B-tree being touched]. So, if you're
near the bounds of space in metadata, you can end up in a situation
where you modify a lot of metadata, and need a lot of space to do the
CoW in, so you try to allocate more metadata block groups -- which
requires metadata to be modified -- and run out of metadata space in
the allocation operation, which is treated as ENOSPC.

   That's not necessarily what's happened here, but it's highly
plausible. The FS does actually keep a small set of metadata reserved
to deal with this situation, but sometimes it's not very good at
planning how much metadata it needs before an operation is started.
It's that code that's had a lot of work since 3.0.

 $ btrfs filesystem df /usr
 Data: total=3.22GB, used=3.22GB
 System, DUP: total=8.00MB, used=4.00KB
 System: total=4.00MB, used=0.00
 Metadata, DUP: total=896.00MB, used=251.62MB
 Metadata: total=8.00MB, used=0.00
 
 I don't know if that's useful or not.

   Not to me directly -- there appears to be enough metadata to do
pretty much anything, so the above scenario _probably_ isn't the
problem, but it's clearly trying to allocate a new data block group
(which it should be able to do -- it should just take all the
remaining space, unlike Fahrzin's hypothesis).

   There have been some issues over having very large metadata
allocations that can't apparently be reused, though. It's possible
you've hit this one -- particularly if you're trying to untar
something, which performs lots and lots of writes all in one
transaction. Again, there's been some work on this since 3.0.

   Hugo.

-- 
=== Hugo Mills: hugo@... carfax.org.uk | darksatanic.net | lug.org.uk ===
  PGP key: 515C238D from wwwkeys.eu.pgp.net or http://www.carfax.org.uk
  --- What part of gestalt don't you understand? ---   




LABEL only 1 device

2012-02-26 Thread Helmut Hullen
Hello, linux-btrfs,

maybe it's a mistake to use the command

  mkfs.btrfs -L xyz /dev/sdx1 /dev/sdy1 /dev/sdz1

(and so label many partitions) because each device/partition gets  
the same label.

Mounting seems to be no problem, but (e.g.) delete doesn't remove the  
btrfs information shown by (e.g.) blkid /dev/sdy1; in particular it  
doesn't delete the label.

Best regards!
Helmut


Re: LABEL only 1 device

2012-02-26 Thread Hugo Mills
On Sun, Feb 26, 2012 at 04:23:00PM +0100, Helmut Hullen wrote:
 Hello, linux-btrfs,
 
 maybe it's a mistake to use the command
 
   mkfs.btrfs -L xyz /dev/sdx1 /dev/sdy1 /dev/sdz1
 
 (and so labelling many partitions) because each device/partition gets  
 the same label.
 
 Mounting seems to be no problem, but (e.g.) delete doesn't remove the  
 btrfs information shown by (e.g.) blkid /dev/sdy1; in particular it  
 doesn't delete the label.

   What do you mean by delete here?

   The label is a *filesystem* label, not a label for the block
device(s) it lives on, so it doesn't make much sense to talk about
putting an FS label on only one of the devices that the FS is on.

   Hugo.

-- 
=== Hugo Mills: hugo@... carfax.org.uk | darksatanic.net | lug.org.uk ===
  PGP key: 515C238D from wwwkeys.eu.pgp.net or http://www.carfax.org.uk
 --- I am the author. You are the audience. I outrank you! --- 




Re: LABEL only 1 device

2012-02-26 Thread Helmut Hullen
Hello, Hugo,

You wrote on 26.02.12:

 Mounting seems to be no problem, but (e.g.) delete doesn't remove
 the btrfs information shown by (e.g.) blkid /dev/sdy1,
 in particular it doesn't delete the label.

What do you mean by delete here?

   btrfs device delete device path

The label is a *filesystem* label, not a label for the block
 device(s) it lives on, so it doesn't make much sense to talk about
 putting an FS label on only one of the devices that the FS is on.

My (planned) usual work (once a year or so):

btrfs device add biggerdevice path
btrfs filesystem balance path
btrfs device delete smallerdevice path

And the devices are (e.g.) /dev/sdj1, /dev/sdk1 etc. (partitions on a  
device).
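Spelled out with the concrete (hypothetical) device names from above, that yearly replacement might look like:

```shell
# Replace a small partition with a bigger one without unmounting the
# filesystem (assumed mounted at /mnt/btr; device names are examples).
btrfs device add /dev/sdj1 /mnt/btr       # add the bigger new partition
btrfs filesystem balance /mnt/btr         # spread data across all devices
btrfs device delete /dev/sdk1 /mnt/btr    # remove the small old partition
```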

Therefore I can see some information via (e.g.)

blkid /dev/sdj1

I prefer LABELling the devices/partitions, and then I saw that the  
-L option causes problems when I use it for more than one device/ 
partition.

With other filesystems, using the same label for several partitions is  
no real problem - it simply doesn't work. But btrfs bundles these  
partitions (perhaps sometimes/most of the time regardless of the labels  
of the other partitions).

Best regards!
Helmut


device delete kills contents

2012-02-26 Thread Helmut Hullen
Hello, linux-btrfs,

I've (once again) tried add and delete.

First, with 3 devices (partitions):

  mkfs.btrfs -d raid0 -m raid1 /dev/sdk1 /dev/sdl1 /dev/sdm1


Mounted (to /mnt/btr), filled with about 100 GByte data.

Then

  btrfs device add /dev/sdj1 /mnt/btr

results in

# show
Label: none  uuid: 6bd7d4df-e133-47d1-9b19-3c7565428770
Total devices 4 FS bytes used 100.44GB
devid3 size 68.37GB used 44.95GB path /dev/sdm1
devid2 size 136.73GB used 43.95GB path /dev/sdl1
devid1 size 16.96GB used 16.96GB path /dev/sdk1
devid4 size 136.73GB used 0.00 path /dev/sdj1

Btrfs Btrfs v0.19

# df
Data, RAID0: total=103.81GB, used=100.30GB
Data: total=8.00MB, used=0.00
System, RAID1: total=8.00MB, used=12.00KB
System: total=4.00MB, used=0.00
Metadata, RAID1: total=1.00GB, used=136.29MB
Metadata: total=8.00MB, used=0.00

-

#

Then

   btrfs filesystem balance /mnt/btr

# show

Label: none  uuid: 6bd7d4df-e133-47d1-9b19-3c7565428770
Total devices 4 FS bytes used 100.44GB
devid3 size 68.37GB used 42.94GB path /dev/sdm1
devid2 size 136.73GB used 43.20GB path /dev/sdl1
devid1 size 16.96GB used 16.94GB path /dev/sdk1
devid4 size 136.73GB used 2.20GB path /dev/sdj1

Btrfs Btrfs v0.19
# df
Data, RAID0: total=104.75GB, used=100.30GB
System, RAID1: total=8.00MB, used=12.00KB
System: total=4.00MB, used=0.00
Metadata, RAID1: total=256.00MB, used=136.23MB

-

#

Next step:

   btrfs device delete /dev/sdk1 /mnt/btr

# show
Label: none  uuid: 6bd7d4df-e133-47d1-9b19-3c7565428770
Total devices 4 FS bytes used 100.43GB
devid3 size 68.37GB used 43.00GB path /dev/sdm1
devid2 size 136.73GB used 43.26GB path /dev/sdl1
devid4 size 136.73GB used 17.26GB path /dev/sdj1
*** Some devices missing

Btrfs Btrfs v0.19
# df
Data, RAID0: total=103.00GB, used=100.30GB
System, RAID1: total=8.00MB, used=12.00KB
Metadata, RAID1: total=256.00MB, used=131.62MB

-

All commands seemed to work well, without any error message.

blkid showed the expected data; in particular

blkid /dev/sdk1

shows nothing - the partition seems to be really empty.

Unmounted, mounted again:

# show
Btrfs Btrfs v0.19

# df
Data: total=8.00MB, used=64.00KB
System, DUP: total=8.00MB, used=4.00KB
System: total=4.00MB, used=0.00
Metadata, DUP: total=1.00GB, used=24.00KB
Metadata: total=8.00MB, used=0.00



show doesn't show any part of the bundle of 3 partitions.

That's more than I expected from the delete option ...

-

Famous last words from dmesg:

device fsid 6bd7d4df-e133-47d1-9b19-3c7565428770 devid 3 transid 437 /dev/sdm1
device fsid 6bd7d4df-e133-47d1-9b19-3c7565428770 devid 2 transid 437 /dev/sdl1
device fsid 6bd7d4df-e133-47d1-9b19-3c7565428770 devid 4 transid 437 /dev/sdj1
device label SCSI devid 1 transid 7 /dev/sdj1
btrfs: disk space caching is enabled
device label SCSI devid 1 transid 10 /dev/sdj1
btrfs: disk space caching is enabled
device label SCSI devid 1 transid 13 /dev/sdj1
btrfs: disk space caching is enabled
device fsid 6bd7d4df-e133-47d1-9b19-3c7565428770 devid 3 transid 437 /dev/sdm1
btrfs: disk space caching is enabled
btrfs: failed to read chunk tree on sdm1
btrfs: open_ctree failed
device fsid 6bd7d4df-e133-47d1-9b19-3c7565428770 devid 2 transid 437 /dev/sdl1
btrfs: disk space caching is enabled
btrfs: failed to read chunk tree on sdm1
btrfs: open_ctree failed
device label SCSI devid 1 transid 16 /dev/sdj1
btrfs: disk space caching is enabled
end_request: I/O error, dev fd0, sector 0
end_request: I/O error, dev fd0, sector 0


--

My dmesg doesn't write timestamps, so there may be some lines from  
previous tests.

---

Kernel 3.2.5 (self-built), btrfs from darksatanic.net, btrfs-progs- 
unstable.

Best regards!
Helmut


Re: LABEL only 1 device

2012-02-26 Thread Hugo Mills
On Sun, Feb 26, 2012 at 05:12:00PM +0100, Helmut Hullen wrote:
 Hello, Hugo,
 
 You wrote on 26.02.12:
 
  Mounting seems to be no problem, but (e.g.) delete doesn't remove
  the btrfs information shown by (e.g.) blkid /dev/sdy1,
  in particular it doesn't delete the label.
 
 What do you mean by delete here?
 
btrfs device delete device path

   OK.

 The label is a *filesystem* label, not a label for the block
  device(s) it lives on, so it doesn't make much sense to talk about
  putting an FS label on only one of the devices that the FS is on.
 
 My (planned) usual work (once a year or so):
 
 btrfs device add biggerdevice path
 btrfs filesystem balance path
 btrfs device delete smallerdevice path
 
 And the devices are (p.e.) /dev/sdj1, /dev/sdk1 etc. (partitions on a  
 device).
 
 Therefore I can see some information via (e.g.)
 
 blkid /dev/sdj1

   OK, the real problem you're seeing is that when btrfs removes a
device from the filesystem, that device is not modified in any way.
This means that the old superblock is left behind on it, containing
the FS label information. What you need to do is, immediately after
removing a device from the FS, zero the first part of the partition
with dd and /dev/zero.
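That cleanup step might look like the following sketch; /dev/sdy1 stands in for the device just removed, and dd is destructive, so double-check the device name before running it:

```shell
# Wipe the start of the removed partition so blkid no longer reports
# the stale btrfs superblock (and with it, the old label). The first
# btrfs superblock lives at 64KiB, so zeroing the first 2MiB is ample.
dd if=/dev/zero of=/dev/sdy1 bs=1M count=2
```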

 I prefer LABELling the devices/partitions, and then I saw that the  
 -L option causes problems when I use it for more than one device/ 
 partition.

   As far as I know, you can't label partitions or devices. Labels are
a filesystem thing, and are stored in a FS-dependent manner. There's
confusion because historically it's been a one-to-one mapping, so people
get *very* sloppy about the distinction (particularly since there's no
real way of referring to a filesystem independently of the block
device(s) it's resident on).

 With other file systems there's no real problem with the same label for  
 several partitions - it doesn't work. But btrfs bundles these partitions  
 (perhaps sometimes/most times regardless of the labels of the other  
 partitions).

   I say again, partitions are not labelled. *Filesystems* are
labelled. I think that with a GPT you can refer to the disk itself and
its partitions by a UUID each, but I'm not 100% certain.

   Hugo.

-- 
=== Hugo Mills: hugo@... carfax.org.uk | darksatanic.net | lug.org.uk ===
  PGP key: 515C238D from wwwkeys.eu.pgp.net or http://www.carfax.org.uk
   --- emacs: Emacs Makes A Computer Slow. ---   




Re: LABEL only 1 device

2012-02-26 Thread Helmut Hullen
Hello, Hugo,

You wrote on 26.02.12:

 My (planned) usual work (once a year or so):

 btrfs device add biggerdevice path
 btrfs filesystem balance path
 btrfs device delete smallerdevice path

OK, the real problem you're seeing is that when btrfs removes a
 device from the filesystem, that device is not modified in any way.
 This means that the old superblock is left behind on it, containing
 the FS label information. What you need to do is, immediately after
 removing a device from the FS, zero the first part of the partition
 with dd and /dev/zero.

Ok - I'll try again (not today ...).
If I remember correctly, in earlier times deleting only the first block  
of the partition wasn't enough ...

My last try with delete led me to believe that btrfs had deleted the  
critical information; I had tested it with blkid. But looking into  
the first sector of the partition may be more reliable.

 I prefer LABELling the devices/partitions, and then I saw that
 the -L option causes problems when I use it for more than one device/
 partition.

[...]

I say again, partitions are not labelled. *Filesystems* are
 labelled. I think that with a GPT you can refer to the disk itself
 and its partitions by a UUID each, but I'm not 100% certain.

My last try:

mkfs.btrfs -d raid0 -m raid1 /dev/sdk1 /dev/sdl1 /dev/sdm1

mkfs.btrfs -L SCSI /dev/sdk1

seemed to work.

mount LABEL=SCSI /mnt/btr

worked as expected, the bundle of 3 partitions was mounted. And only 
/dev/sdk1 got this label, no other partition.

Best regards!
Helmut


Re: LABEL only 1 device

2012-02-26 Thread Duncan
Hugo Mills posted on Sun, 26 Feb 2012 16:44:00 + as excerpted:

 I prefer LABELling the devices/partitions, and then I'd seen that the
 option -L makes problems when I use it for more than 1 device/
 partition.
 
As far as I know, you can't label partitions or devices. Labels are
 a filesystem thing, and are stored in a FS-dependent manner. There's
 confusion because historically it's been a one-to-one mapping, so people
 get *very* sloppy about the distinction (particularly since there's no
 real way of referring to a filesystem independently of the block
 device(s) it's resident on).

With legacy MBR-based partitioning, that is correct, devices don't have a 
label, filesystems do.  Take an md/raid1 device for instance, and put a 
filesystem on it.  It's the filesystem that gets the label when mkfs 
(make filesystem) is done, putting the same label on the filesystem on 
all the md/raid1 component devices since it's mirrored (raid-1-ed) to all 
of them.

However, GPT-based partitioning *DOES* have partition level labels 
available.  I'm not sure if for instance parted exposes that 
functionality, but gptfdisk, which I use, certainly does.  That's useful 
with partitioned md/raid, since the filesystem on the partition gets a 
different label than the gpt-partition itself, which has a different 
label than all the underlying physical device partitions that compose the 
md/raid1.

Unfortunately, since gpt is reasonably new in terms of filesystem and 
partitioning tools, there isn't really anything (mount, etc) that makes 
/use/ of that label yet, though gptfdisk does display it and lets you 
modify it, so it's easier to keep track at that level of whether you're 
operating on what you intended to operate on, as long as you keep the 
physical device partition labels, the partitioned md/raid device labels, 
and the filesystem labels as created by mkfs all distinct.  (I have a 
consistent scheme I use, so they are distinct here.)
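As an illustration, gptfdisk's scriptable sibling sgdisk can set and read a GPT partition name (label) from the command line; a sketch assuming sgdisk is installed, with /dev/sdX and the label text purely hypothetical:

```shell
# Name (label) partition 1 at the GPT level, independent of whatever
# filesystem label mkfs later writes inside the partition.
sgdisk --change-name=1:"raid-component-a" /dev/sdX

# Read it back; --info prints the partition name among other details.
sgdisk --info=1 /dev/sdX
```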

FWIW, gpt was designed by Intel and others to be used by EFI, but BIOS-
based devices support it as well, as do grub2, grub-legacy (with patches 
applied), and the kernel (with the related kernel config options 
enabled).  Since it does away with the primary/secondary/logical 
partition distinction, has dual-copy checksummmed partition tables, and 
has partition labels, plus the fact that it supports 2+TiB drives, it's 
gradually replacing MBR even on BIOS systems, but it's a slow process as 
MBR has been around for decades!

-- 
Duncan - List replies preferred.   No HTML msgs.
Every nonfree program has a lord, a master --
and if you use the program, he is your master.  Richard Stallman



Re: LABEL only 1 device

2012-02-26 Thread Helmut Hullen
Hello, Hugo,

You wrote on 26.02.12:

 What you need to do is, immediately after
 removing a device from the FS, zero the first part of the partition
 with dd and /dev/zero.


 Ok - I'll try again (not today ...).
 If I remember correctly, in earlier times deleting only the first block
 of the partition wasn't enough ...

No, it won't -- the first superblock on btrfs is at 64k into the
 device. Most filesystems do something similar, because there are other
 things that occasionally put metadata in the first part of the
 device, so it avoids having the FS's superblock overwritten
 accidentally.

Ok - but would deleting the first 100 kByte or the first 1 MByte be enough?
Last time I ran a job which wiped everything (but that's nasty for disks  
with much more than 100 GByte ...)

 mkfs.btrfs -L SCSI /dev/sdk1

 seemed to work.

 mount LABEL=SCSI /mnt/btr

 worked as expected, the bundle of 3 partitions was mounted. And only
 / dev/sdk1 got this label, no other partition.

That's because you've just destroyed part of the original
 filesystem that was on /dev/sd[klm]1 and created a new single-device
 filesystem on /dev/sdk1.

mkfs.btrfs creates a new filesystem. The -L option sets the label
 for the newly-created FS. It *cannot* be used to change the label of
 an existing FS. If you want to do that, use btrfs filesystem label.
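A sketch of that relabel command, assuming the v0.19-era btrfs-progs syntax where it takes the device node; /dev/sdk1 is a placeholder:

```shell
# Change the label of an existing btrfs filesystem without reformatting.
btrfs filesystem label /dev/sdk1 SCSI

# With no new label argument, it prints the current label instead.
btrfs filesystem label /dev/sdk1
```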

Hmm - I'll try ...

Thank you!

-

Label: 'SCSI'  uuid: 8e287956-d73f-46cb-8938-b00315c596c6
Total devices 1 FS bytes used 92.00KB
devid1 size 136.73GB used 2.04GB path /dev/sdj1

Label: 'Scsi'  uuid: b59caf71-1a38-47cc-bad3-c2d87357c971
Total devices 3 FS bytes used 9.09GB
devid2 size 136.73GB used 4.01GB path /dev/sdl1
devid3 size 68.37GB used 5.01GB path /dev/sdm1
devid1 size 16.96GB used 5.02GB path /dev/sdk1

Btrfs Btrfs v0.19

looks good ...


Best regards!
Helmut


Re: filesystem full when it's not? out of inodes? huh?

2012-02-26 Thread Daniel Lee

On 02/25/2012 05:55 PM, Brian J. Murrell wrote:

$ btrfs filesystem df /usr
Data: total=3.22GB, used=3.22GB
System, DUP: total=8.00MB, used=4.00KB
System: total=4.00MB, used=0.00
Metadata, DUP: total=896.00MB, used=251.62MB
Metadata: total=8.00MB, used=0.00

I don't know if that's useful or not.

Any ideas?

Cheers
b.

3.22GB + (896MB * 2) = 5GB

There's no mystery here: you're simply out of space. The system df 
command basically doesn't understand btrfs, so it will erroneously 
report free space when there isn't any.




Re: filesystem full when it's not? out of inodes? huh?

2012-02-26 Thread Brian J. Murrell
On 12-02-26 02:37 PM, Daniel Lee wrote:
 3.22GB + (896MB * 2) = 5GB
 
 There's no mystery here, you're simply out of space.

Except the mystery that I had to expand the filesystem to something
between 20GB and 50GB in order to complete the operation, after which I
could reduce it back down to 5GB.

Cheers,
b.






Re: filesystem full when it's not? out of inodes? huh?

2012-02-26 Thread Brian J. Murrell
On 12-02-26 02:19 AM, Jérôme Poulin wrote:
 
 What would be interesting is getting an eye on btrfs fi df of your
 filesystem to see what part is getting full, or maybe just do a
 balance.

I did try a balance.  As I mentioned subsequently, I ended up having
to grow the filesystem to 10x its data requirement (somewhere between 20
and 50GB) in order to get that kernel headers .deb to unpack, and
after it unpacked I was able to shrink back down to 5G, so it
seems the problem was something worse than just non-ideal metadata
allocation.

 I have been running 3.0.0 for quite a while without any problem,
 metadata grew a bit too much (1.5 TB for 2 TB of data) and balance
 fixed it back to 50 GB of metadata then 20 GB after deleting some
 snapshots.

Interesting data point, thanks.

b.







Re: filesystem full when it's not? out of inodes? huh?

2012-02-26 Thread Daniel Lee

On 02/26/2012 11:48 AM, Brian J. Murrell wrote:

On 12-02-26 02:37 PM, Daniel Lee wrote:

3.22GB + (896MB * 2) = 5GB

There's no mystery here, you're simply out of space.

Except the mystery that I had to expand the filesystem to something
between 20GB and 50GB in order to complete the operation, after which I
could reduce it back down to 5GB.

Cheers,
b.

What's mysterious about that? When you shrink it, btrfs throws away 
unused allocations to cram everything into the requested space, and you 
had empty space that was taken up by the metadata allocation. Did you 
compare btrfs fi df after you shrank it with before?



Re: filesystem full when it's not? out of inodes? huh?

2012-02-26 Thread Brian J. Murrell
On 12-02-26 02:52 PM, Daniel Lee wrote:
 What's mysterious about that?

What's mysterious about needing to grow the filesystem to over 20GB to
unpack 10MB of (small, so yes, many) files?

 When you shrink it btrfs is going to throw
 away unused data to cram it all in the requested space and you had empty
 space that was taken up by the metadata allocation.

The shrinking is a secondary mystery.  It's the need for more than 20GB of
space for less than 3GB of files that's the major mystery.

 Did you compare
 btrfs fi df after you shrank it with before?

I didn't unfortunately.

b.





Re: filesystem full when it's not? out of inodes? huh?

2012-02-26 Thread Daniel Lee
On 02/26/2012 12:05 PM, Brian J. Murrell wrote:
 On 12-02-26 02:52 PM, Daniel Lee wrote:
 What's mysterious about that?
 What's mysterious about needing to grow the filesystem to over 20GB to
 unpack 10MB of (small, so yes, many) files?
 When you shrink it btrfs is going to throw
 away unused data to cram it all in the requested space and you had empty
 space that was taken up by the metadata allocation.
 The shrinking is secondary mystery.  It's the need for more than 20GB of
 space for less than 3GB of files that's the major mystery.
Several people on this list have already answered this question, but here
goes.

Btrfs isn't like other more common filesystems where metadata is fixed
at filesystem creation. Rather, metadata allocations happen just like
data allocations do. Btrfs also tries to allocate metadata in big chunks
so it doesn't get fragmented and lead to slowdowns when doing something
like running du on the root folder. The downside to all of this is that
it's not very friendly to small filesystems; in your case it allocated
some 1.8 GB of metadata of which only 500 MB was actually in use.

In the future you can create your filesystem with metadata=single to
free up more space for regular data, or look into forcing mixed block
groups mode, which is normally only enabled for filesystems of 1GB or
smaller. Mixed block group mode can't be switched off, so you could
make a really tiny FS, several hundred MB or so, and then just grow it to
whatever size you want. The btrfs wiki seems to define small filesystems
as anything under 16GB, so that might be a good lower bound for actually
using btrfs in a day-to-day environment.
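Those two mkfs-time alternatives might look like the following sketch (both are fixed at creation time, /dev/sdX1 is a placeholder, and mkfs wipes the device):

```shell
# Alternative 1: single (non-duplicated) metadata roughly halves the
# metadata overhead on a one-device filesystem.
mkfs.btrfs -m single /dev/sdX1

# Alternative 2: force mixed block groups, normally the default only
# for filesystems of about 1GB or less; cannot be switched off later.
mkfs.btrfs --mixed /dev/sdX1
```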




[PATCH][trivial] btrfs: assignment in write_dev_flush() doesn't need two semi-colons

2012-02-26 Thread Jesper Juhl
One is enough.

Signed-off-by: Jesper Juhl j...@chaosbits.net
---
 fs/btrfs/disk-io.c |2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/fs/btrfs/disk-io.c b/fs/btrfs/disk-io.c
index 534266f..f87590b 100644
--- a/fs/btrfs/disk-io.c
+++ b/fs/btrfs/disk-io.c
@@ -2744,7 +2744,7 @@ static int write_dev_flush(struct btrfs_device *device, int wait)
 * one reference for us, and we leave it for the
 * caller
 */
-   device->flush_bio = NULL;;
+   device->flush_bio = NULL;
bio = bio_alloc(GFP_NOFS, 0);
if (!bio)
return -ENOMEM;
-- 
1.7.9.2


-- 
Jesper Juhl j...@chaosbits.net   http://www.chaosbits.net/
Don't top-post http://www.catb.org/jargon/html/T/top-post.html
Plain text mails only, please.



Tracing tools to understand performance of Btrfs

2012-02-26 Thread Kai Ren
Hi, 

I am running some benchmarks to understand the performance of Btrfs.
Is there any way to classify the disk traffic, so that one can tell which 
Btrfs activity generated it?
Are there any tracing tools that can be enabled in Btrfs?

Best regards,
-- 
Ren Kai


[PATCH] btrfs: fixup module.h usage as required

2012-02-26 Thread Paul Gortmaker
Delete the instances of module.h that aren't actually used
or needed.  Replace with export.h as required.

Signed-off-by: Paul Gortmaker paul.gortma...@windriver.com
---
[This is 100% independent of any cleanups I'm working on, so it
 can go in via the btrfs tree seamlessly.]

diff --git a/fs/btrfs/extent_io.c b/fs/btrfs/extent_io.c
index a55fbe6..9452204 100644
--- a/fs/btrfs/extent_io.c
+++ b/fs/btrfs/extent_io.c
@@ -4,7 +4,6 @@
 #include <linux/mm.h>
 #include <linux/pagemap.h>
 #include <linux/page-flags.h>
-#include <linux/module.h>
 #include <linux/spinlock.h>
 #include <linux/blkdev.h>
 #include <linux/swap.h>
diff --git a/fs/btrfs/extent_map.c b/fs/btrfs/extent_map.c
index 7c97b33..711c877 100644
--- a/fs/btrfs/extent_map.c
+++ b/fs/btrfs/extent_map.c
@@ -1,6 +1,5 @@
 #include <linux/err.h>
 #include <linux/slab.h>
-#include <linux/module.h>
 #include <linux/spinlock.h>
 #include <linux/hardirq.h>
 #include "ctree.h"
diff --git a/fs/btrfs/sysfs.c b/fs/btrfs/sysfs.c
index daac9ae..5b326cd 100644
--- a/fs/btrfs/sysfs.c
+++ b/fs/btrfs/sysfs.c
@@ -21,7 +21,6 @@
 #include <linux/spinlock.h>
 #include <linux/completion.h>
 #include <linux/buffer_head.h>
-#include <linux/module.h>
 #include <linux/kobject.h>
 
 #include "ctree.h"
diff --git a/fs/btrfs/ulist.c b/fs/btrfs/ulist.c
index 12f5147..273dea8 100644
--- a/fs/btrfs/ulist.c
+++ b/fs/btrfs/ulist.c
@@ -5,7 +5,7 @@
  */
 
 #include <linux/slab.h>
-#include <linux/module.h>
+#include <linux/export.h>
 #include "ulist.h"
 
 /*
-- 
1.7.9.1



Re: [BUG] Kernel Bug at fs/btrfs/volumes.c:3638

2012-02-26 Thread Jérôme Carretero
On Fri, 24 Feb 2012 16:11:29 +0530
Nageswara R Sastry rnsas...@linux.vnet.ibm.com wrote:

 Hello,
 
 While working with 'fsfuzz - file system fuzzing tool' on 'btrfs' 
 encountered the following kernel bug.

I inquired about robustness a while ago, and it seems it's somewhere on the 
horizon, but not a priority right now.
My concern was about hot-unplugged disk drives, but btrfs also doesn't 
appreciate metadata corruption.
btrfs-raid users could be concerned because, contrary to a real RAID, the 
btrfs metadata is a potential weak link.

At some point, I would appreciate some kind of thorough evaluation using a 
fuzzer on small disk images.
The btrfs developers could for instance:
- provide a script to create a filesystem image with a known layout (a known 
corpus)
- provide a .config and a reference to kernel sources to build the kernel
- provide a minimal root filesystem to be run under qemu; it would run a 
procedure on the other disk image at boot (crashes wouldn't affect the 
host, which is good)
- provide a way to retrieve the test parameters and results for every test 
case; in case of a bug, the test can be reproduced by the developers since 
the configuration is known
- expect volunteers to run the scenarios (I know I would)
The tricky part is of course the potentially super-costly procedure...
Simplest case: flipping every bit / writing blocks with pseudo-random data, 
even on metadata only, as the outcome on data is supposed to be known.
Smarter: flipping bits in every btrfs metadata structure type at every 
possible logical location.
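The "flip every bit" case is trivial to sketch. The following is a minimal
illustration of corrupting one bit of a disk image at a given byte offset;
a smarter fuzzer would derive the offsets from the known filesystem layout
(the 65536 offset in the demo is just the location of the primary btrfs
superblock, picked as an example target):

```python
def flip_bit(path, byte_offset, bit):
    """Flip bit `bit` (0-7) of the byte at `byte_offset` in place."""
    with open(path, "r+b") as f:
        f.seek(byte_offset)
        b = f.read(1)
        if len(b) != 1:
            raise ValueError("offset past end of image")
        f.seek(byte_offset)
        f.write(bytes([b[0] ^ (1 << bit)]))

if __name__ == "__main__":
    # Demo on a throwaway sparse 16 MiB image, never on a real device.
    with open("test.img", "wb") as f:
        f.truncate(16 * 1024 * 1024)
    flip_bit("test.img", 65536, 0)  # corrupt the superblock area
```

Flipping the same bit twice restores the original image, so a test run can
undo its own corruption between cases.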

The kind of stuff that would help all this could be something like Python 
bindings for a *btrfs library*.
Helpful even for prototyping fsck stuff, making illustrations, etc.

As of today, how are btrfs developers testing the filesystem implementation 
(other than with xfstests)?

Best regards,

-- 
cJ

PS: don't be mistaken, I'm not asking for all that, just suggesting.
My time goes to something else, but I do have sleepy computers at home, and 
they could help.


Re: [BUG] Kernel Bug at fs/btrfs/volumes.c:3638

2012-02-26 Thread Nageswara R Sastry

On Saturday 25 February 2012 at 11:42 AM, Liu Bo wrote:
Hi, I guess you're mounting a quite small partition. Given that this 
oops is in such an early stage, could you please show 1) your 
mkfs.btrfs options and 2) the log of btrfs-debug-tree /dev/loop0? 
thanks, liubo

Here are the steps with options,

1. dd if=/dev/zero of=filename.img bs=1M count=16

2. mkfs.btrfs filename.img

3. Corrupt the filename.img using 'mangle'

4. mount -t btrfs -o loop filename.img <mount point>

5. # btrfs-debug-tree filename.img
Couldn't map the block 3221392384
Couldn't read chunk root
unable to open filename.img
# btrfs-debug-tree /dev/loop0
Couldn't map the block 3221392384
Couldn't read chunk root
unable to open /dev/loop0

Regards,
R.Nageswara Sastry



Re: LABEL only 1 device

2012-02-26 Thread Helmut Hullen
Hallo, Hugo,

Du meintest am 26.02.12:

mkfs.btrfs creates a new filesystem. The -L option sets the label
 for the newly-created FS. It *cannot* be used to change the label of
 an existing FS.

The safest way may be to drop this option ... it seems to work as  
expected only when I create a new FS on one disk/partition.

 If you want to do that, use btrfs filesystem label.

And that seems to work as I expected - fine.
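For the archives, a quick sketch of the two distinct commands (device path
and label are placeholders; older btrfs-progs may require the filesystem to
be unmounted for relabeling):

```shell
# Set the label at creation time (new filesystem only)
mkfs.btrfs -L mylabel /dev/sdX1

# Change the label of an existing filesystem
btrfs filesystem label /dev/sdX1 newlabel
```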

Adding a device works, deleting a device works. Fine!
Now I'll try the job with my Terabyte disks.

(Yes - I have backups ...)

Viele Gruesse!
Helmut


btrfs-convert options

2012-02-26 Thread Helmut Hullen
Hallo, linux-btrfs,

I want to change some TByte disks (at least one) from ext4 to btrfs, and  
I want -d raid0 -m raid1. Is it possible to give btrfs-convert  
these options for data and metadata?

Or do I have to use mkfs.btrfs (and then copy the backup back) when I want  
these options?
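If the mkfs route turns out to be necessary, it might look roughly like
this (device names and mount point are placeholders; raid0/raid1 profiles
need at least two devices):

```shell
# Create a fresh two-device filesystem: striped data, mirrored metadata
mkfs.btrfs -d raid0 -m raid1 /dev/sdX /dev/sdY

# Mount it and restore the backup onto it
mount /dev/sdX /mnt
```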

Viele Gruesse!
Helmut