On Sat, Dec 11, 2010 at 10:43:39PM -0500, Ian! D. Allen wrote:
I put in a larger disk (250GB), set up a partition for btrfs, and ran
the same continuous snapshotting test. It got up to creating snapshot
150 and then btrfs hung again. So the bug is repeatable and makes
btrfs 0.19 on Ubuntu
Well, this appears to be much more critical than it seemed. It
happened again, same symptoms and same call trace.
After that, my root filesystem was destroyed. Now the laptop does not
boot anymore. It looks like mount is segfaulting at boot time, and a
call trace is printed on the screen.
BTW,
Creating new snapshots from previous snapshots eventually causes btrfs
subvolume list to omit some of the created snapshots. The set of
omitted snapshots changes as one creates new snapshots.
(This different bug thread was found while exploring this previous thread:
Subject: btrfs subvolume
On Sat, Dec 11, 2010 at 9:16 PM, Jon Nelson jnel...@jamponi.net wrote:
On Sat, Dec 11, 2010 at 7:40 PM, Ted Ts'o ty...@mit.edu wrote:
Yes, indeed. Is this in the virtualized environment or on real
hardware at this point? And how many CPUs do you have configured in
your virtualized
On Fri, 10.12.10 15:11, Chris Mason chris.ma...@oracle.com wrote:
What would be the steps to get it mounted?
If btrfsck -s is able to find a good super, I've set up a tool that
will copy the good super over into the default super. It is currently
sitting in the next branch of the
On Sun, Dec 12, 2010 at 04:18:29AM -0600, Jon Nelson wrote:
I have one CPU configured in the environment, 512MB of memory.
I have not done any memory-constriction tests whatsoever.
I've finally been able to reproduce it myself, on real hardware. SMP
is not necessary to reproduce it, although
On Sun, Dec 12, 2010 at 6:43 AM, Ted Ts'o ty...@mit.edu wrote:
On Sun, Dec 12, 2010 at 04:18:29AM -0600, Jon Nelson wrote:
I have one CPU configured in the environment, 512MB of memory.
I have not done any memory-constriction tests whatsoever.
I've finally been able to reproduce it myself,
On Wednesday 08 of December 2010 22:53:25 William Sheffler wrote:
Hello btrfs community.
First off, thanks for all your hard work... I have been following
btrfs with interest for several years now and very much look forward
to the day it replaces ext4. The real killer feature (of btrfs
Josef's fs_mark test
fs_mark -d /mnt/btrfs-test -D 512 -t 16 -n 4096 -F -S0
on a 2GB single metadata fs leaves about 400MB of metadata almost unused.
This patch reduces metadata chunk allocations by considering the proper
metadata chunk size of 200MB in should_alloc_chunk(), not the default
On Sun, Dec 12, 2010 at 8:24 AM, Hubert Kario h...@qbs.com.pl wrote:
On Wednesday 08 of December 2010 22:53:25 William Sheffler wrote:
Hello btrfs community.
First off, thanks for all your hard work... I have been following
btrfs with interest for several years now and very much look forward
cwillu wrote:
On Sun, Dec 12, 2010 at 8:24 AM, Hubert Kario h...@qbs.com.pl wrote:
On Wednesday 08 of December 2010 22:53:25 William Sheffler wrote:
Hello btrfs community.
First off, thanks for all your hard work... I have been following
btrfs with interest for several years now and very
In a few weeks parts for my new computer will be arriving. The storage
will be a 128GB SSD. A few weeks after that I will order three large
disks for a RAID array. I understand that BTRFS RAID 5 support will be
available shortly. What is the best possible way for me to get the
highest performance
On 12/12/2010 17:24, Paddy Steed wrote:
In a few weeks parts for my new computer will be arriving. The storage
will be a 128GB SSD. A few weeks after that I will order three large
disks for a RAID array. I understand that BTRFS RAID 5 support will be
available shortly. What is the best possible
The attached patch is against Ubuntu Maverick latest git, and I
believe it is final. It is forward-compatible, as there is space in it
to define 29 more deferred things to wait for, if needed, as well as a
flag bit reserved for strict versioning. Calling it with a flags field
of 0xFFFA will
Gordan Bobic wrote (ao):
On 12/12/2010 17:24, Paddy Steed wrote:
In a few weeks parts for my new computer will be arriving. The storage
will be a 128GB SSD. A few weeks after that I will order three large
disks for a RAID array. I understand that BTRFS RAID 5 support will be
available
Hi,
We have file readahead to do async file reads, but no metadata
readahead. For a list of files, the metadata is stored in fragmented
disk space, and metadata reads are synchronous, which badly hurts the
efficiency of readahead. These patches add metadata readahead
for btrfs.
In
Add an ioctl to VFS to dump a filesystem's in-memory metadata. Userspace
collects this info and uses it to do metadata readahead.
A filesystem can hook into super_operations.metadata_incore to report its
metadata in a filesystem-specific way. The next patch gives an example of
how to implement .metadata_incore in btrfs.
Implement the btrfs-specific .metadata_incore.
In btrfs, all metadata pages live in a special btree_inode, so we take pages
from it.
We only account for updated and referenced pages here. Say we collect metadata
info in one boot, do metadata readahead in the next boot, and we might collect
metadata again. The
Add a metadata readahead ioctl in VFS. A filesystem can hook into
super_operations.metadata_readahead to handle filesystem-specific work.
The next patch gives an example of how btrfs implements it.
Signed-off-by: Shaohua Li shaohua...@intel.com
---
 fs/compat_ioctl.c |  1 +
 fs/ioctl.c        | 21
Implement the btrfs-specific .metadata_readahead. In btrfs, all metadata
pages are in a special btree_inode. We do readahead on it.
Signed-off-by: Shaohua Li shaohua...@intel.com
---
 fs/btrfs/disk-io.c | 10 ++
 fs/btrfs/super.c   | 13 +
 mm/readahead.c     |  1 +
 3 files
Do validation for the extent_buffer if it was skipped before.
With metadata readahead, we slightly change the behavior. Before this change,
we allocate an extent_buffer (so page->private is set), do the metadata read,
and btree_readpage_end_io_hook() does the validation. After it, we directly do
metadata readahead, and