the time that my message got to the SE Linux list.
The kernel from Debian/Stable still has the issue. So using a testing kernel
might be a good option to deal with this problem at the moment.
On Monday, 4 June 2018 11:14:52 PM AEST Russell Coker wrote:
The command "reboot -nffd" (kernel reboot without flushing kernel buffers or
writing status) when run on a BTRFS system with SE Linux will often result in
/var/log/audit/audit.log being unlabeled. It also results in some systemd-
journald files like
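For anyone who hits this, the bad label is easy to confirm and fix by hand
(a minimal sketch, assuming a policy where restorecon knows the correct
context for the file):

# ls -Z /var/log/audit/audit.log
# restorecon -v /var/log/audit/audit.log

ls -Z shows the current security context and restorecon -v reports any
relabelling it does.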
On Monday, 4 September 2017 2:57:18 PM AEST Stefan Priebe - Profihost AG
wrote:
> > Then roughly make sure the complete set of metadata blocks fits in the
> > cache. For an fs of this size let's say/estimate 150G. Then maybe the same
> > or double for data, so an SSD of 500G would be a first try.
>
I have a system with less than 50% disk space used. It just started rejecting
writes due to lack of disk space. I ran "btrfs balance" and then it started
working correctly again. It seems that a btrfs filesystem, if left alone, will
eventually get fragmented enough that it rejects writes (I've
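For reference, the usual way to reclaim mostly-empty chunks is a
usage-filtered balance (a sketch; /mnt is a hypothetical mount point and the
50% threshold is just a common starting point):

# btrfs balance start -dusage=50 /mnt

This rewrites only the data chunks that are at most 50% used, so it's much
quicker than a full balance.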
http://selinux.coker.com.au/play.html
There are a variety of ways of producing the same result that rm doesn't
reject. "/*" wasn't caught last time I checked. See the above URL if you want
to test out various rm operations as root. ;)
On 10 August 2016 9:24:23 AM AEST, Christian Kujau
I've just written a script for "mon" to monitor BTRFS filesystems. I had to
use sudo because "btrfs device stats" needs to be run as root.
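A sudoers entry scoped to the one command keeps the privilege escalation
narrow (a sketch; the user name, binary path, and mount point are all
hypothetical):

mon ALL=(root) NOPASSWD: /sbin/btrfs device stats /

The monitoring script then runs "sudo btrfs device stats /" and parses the
output.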
Would it be possible to do some of these things as non-root? I think it would
be ideal if there was a "btrfs tunefs" operation somewhat comparable to
On Mon, 14 Dec 2015 03:59:18 PM Christoph Anton Mitterer wrote:
> I've had some discussions on the list these days about not having
> checksumming with nodatacow (mostly with Hugo and Duncan).
>
> They both basically told me it wouldn't be straight possible with CoW,
> and Duncan thinks it may
One of my test laptops started hanging on mounting the root filesystem. I
think that it had experienced an unexpected power outage prior to that, which
may have caused corruption.
When I tried to mount the root filesystem the mount process would stick in D
state, there would be no disk IO, and
On Sat, 5 Dec 2015 12:53:07 AM Austin S Hemmelgarn wrote:
> > The only reason I'm not running Unstable kernels on my Debian systems is
> > because I run some Xen servers and upgrading Xen is problematic. Linode
> > is moving from Xen to KVM so I guess I should consider doing the
> > same. If I
On Sat, 5 Dec 2015 12:08:58 AM Austin S Hemmelgarn wrote:
> > I know that there are no plans to backport things to 3.16 and I don't
> > think the Debian people are going to be very interested in this. So
> > this message is a FYI for users, maybe consider not using the
> > Debian/Jessie kernel
On Mon, 9 Nov 2015 08:10:13 AM Duncan wrote:
> Russell Coker posted on Sun, 08 Nov 2015 17:38:32 +1100 as excerpted:
> > https://lwn.net/Articles/663474/
> > http://thread.gmane.org/gmane.comp.file-systems.btrfs/49500
> >
> > Above is a BTRFS issue that is menti
On Sun, 15 Nov 2015 03:01:57 PM Duncan wrote:
> That looks to me like native drive limitations.
>
> Due to the fact that a modern hard drive spins at the same speed no
> matter where the read/write head is located, when it's reading/writing to
> the first part of the drive -- the outside --
On Wed, 28 Oct 2015 11:07:20 PM Austin S Hemmelgarn wrote:
> Using this methodology, I can have a new Gentoo PV domain running in
> about half an hour, whereas it takes me at least two and a half hours
> (and often much longer than that) when using the regular install process
> for Gentoo.
On Wed, 21 Oct 2015 12:00:59 AM Austin S Hemmelgarn wrote:
> > https://www.gnu.org/software/ddrescue/
> >
> > At this stage I would use ddrescue or something similar to copy data from
> > the failing disk to a fresh disk, then do a BTRFS scrub to regenerate
> > the missing data.
> >
> > I
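A minimal invocation of that kind (a sketch; the device names and map file
are hypothetical, and -f is required because the output is a block device):

# ddrescue -f /dev/sdb /dev/sdc rescue.map
# btrfs scrub start -B /mnt

The map file lets an interrupted copy resume, and the scrub then repairs,
from the remaining good copies, whatever ddrescue couldn't read.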
On Tue, 20 Oct 2015 03:16:15 PM james harvey wrote:
> sda appears to be going bad, with my low threshold of "going bad", and
> will be replaced ASAP. It just developed 16 reallocated sectors, and
> has 40 current pending sectors.
>
> I'm currently running a "btrfs scrub start -B -d -r /terra",
On Fri, 2 Oct 2015 10:07:24 PM Austin S Hemmelgarn wrote:
> > ARC presumably worked better than the other Solaris caching options. It
> > was ported to Linux with zfsonlinux because that was the easy way of
> > doing it.
>
> Actually, I think part of that was also the fact that ZFS is a COW
>
(sysadm_t:SystemLow-SystemHigh)root@unstable:~/pol# btrfs qgroup show -r -e /tmp
qgroupid        rfer        excl   max_rfer  max_excl
0/5       1647689728  1647689728          0         0
0/258          16384       16384  524288000         0
0/259
On Sat, 26 Sep 2015 06:47:26 AM Chris Murphy wrote:
> And then
>
> Aug 28 17:06:49 host mdadm[2751]: RebuildFinished event detected on md
> device /dev/md/0, component device mismatches found: 2048 (on raid
> level 10)
> Aug 28 17:06:49 host mdadm[2751]: SpareActive event detected on md
> device
On Sat, 26 Sep 2015 12:20:41 AM Austin S Hemmelgarn wrote:
> > FYI:
> > The Linux pagecache uses an LRU algorithm, and in the general case it
> > works well enough
>
> I'd argue that 'general usage' should be better defined in this
> statement. Obviously, ZFS's ARC implementation provides better
>
On Sat, 19 Sep 2015 12:13:29 AM Austin S Hemmelgarn wrote:
> The other option (which for some reason I almost never see anyone
> suggest), is to expose 2 disks to the guest (ideally stored on different
> filesystems), and do BTRFS raid1 on top of that. In general, this is
> what I do (except I
On Fri, 18 Sep 2015 12:00:15 PM Duncan wrote:
> The caveat here is that if the VM/DB is active during the backups (btrfs
> send/receive or other), it'll still COW1 any writes during the existence
> of the btrfs snapshot. If the backup can be scheduled during VM/DB
> downtime or at least when
On Fri, 28 Aug 2015 07:35:02 PM Hugo Mills wrote:
> On Fri, Aug 28, 2015 at 10:50:12AM +0200, George Duffield wrote:
> > Running a traditional raid5 array of that size is statistically
> > guaranteed to fail in the event of a rebuild.
>
> Except that if it were, you wouldn't see anyone
On Thu, 20 Aug 2015 10:09:26 PM Austin S Hemmelgarn wrote:
2: Out of curiosity, why is data checksumming tied to COW?
There's no safe way to sanely handle checksumming without COW, because
there is no way (at least on current hardware) to ensure that the data
block and the checksums both
On Thu, 20 Aug 2015 11:55:43 AM Chris Murphy wrote:
Question 1: If I apply the NOCOW attribute to a file or directory, how
does that affect my ability to run btrfs scrub?
nodatacow includes nodatasum and no compression. So it means these
files are presently immune from scrub check and
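The attribute is set with chattr, and as it only takes effect on new (or
empty) files it's normally applied to a directory before files are created
in it (a sketch; the path is hypothetical):

# chattr +C /var/lib/db
# lsattr -d /var/lib/db

Files created under that directory afterwards are NOCOW and therefore have
no checksums for scrub to verify.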
I have a Xen server with 14 DomUs that are being used for BTRFS and ZFS
training. About 5 people are corrupting virtual disks and scrubbing them,
which generates lots of IO.
All the virtual machine disk images are snapshots of a master image with copy
on write. I just had the following error which ended
[ 2918.502237] BTRFS info (device loop1): disk space caching is enabled
[ 2918.503213] BTRFS: failed to read chunk tree on loop1
[ 2918.540082] BTRFS: open_ctree failed
I just had a test RAID-1 filesystem with a missing device. I mounted it with
the degraded option and added a new device. I
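That recovery sequence looks roughly like this (a sketch with hypothetical
device names; "missing" is the literal keyword btrfs accepts for the absent
device):

# mount -o degraded /dev/sdb /mnt
# btrfs device add /dev/sdc /mnt
# btrfs device delete missing /mnt

Deleting the missing device is what triggers rebuilding the lost copies onto
the new disk.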
Below is the result of testing a corrupted filesystem. What's going on here?
The kernel message log and the btrfs output don't tell me how many errors
there were. Also the data is RAID-0 (the default for a filesystem created with
2 devices) so if this was in a data area it should have lost
On Fri, 7 Aug 2015 06:49:58 PM Robert Krig wrote:
What exactly is contained in btrfs metadata?
Much the same as in metadata for every other filesystem.
I've read about some users setting up their btrfs volumes as
data=single, but metadata=raid1
Is there any actual benefit to that? I mean,
On Sat, 1 Aug 2015 02:35:39 PM John Ettedgui wrote:
It seems that you're using Chromium while doing the dump. :)
If you have no CD drive, I'd recommend using an Archlinux installation ISO
to make a bootable USB stick and do the dump.
(just download and dd would do the trick)
As its kernel and
On Fri, 24 Jul 2015 11:11:22 AM Duncan wrote:
The option is mem=nn[KMG]. You may also need memmap=, presumably
memmap=nn[KMG]$ss[KMG], to reserve the unused memory area, preventing its
use for PCI address space, since that would collide with the physical
memory that's there but unused due to
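To put numbers on that, limiting the kernel to the first 4GiB and reserving
everything above it would look something like the following (the sizes and
start address are hypothetical; the $ usually needs escaping in boot loader
configs):

mem=4G memmap=4G$0x100000000

The memmap=nn$ss form reserves a region of size nn starting at address ss.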
On Fri, 24 Jul 2015 05:12:38 AM james harvey wrote:
I started trying to run with a -s 4G option, to use 4GB files for
performance measuring. It refused to run, and said file size should
be double RAM for good results. I sighed, removed the option, and
let it run, defaulting to **64GB
On Tue, 23 Jun 2015 02:52:43 AM Chris Murphy wrote:
OK I actually don't know what the intended block layer behavior is
when unplugging a device, if it is supposed to vanish, or change state
somehow so that things that depend on it can know it's missing or
what. So the question here is, is this
When I have a mounted filesystem why doesn't the kernel store the amount of
free space? Why does it need to spin up a disk that had been spun down?
On Sun, 24 May 2015 01:02:21 AM Jan Voet wrote:
Doing a 'btrfs balance cancel' immediately after the array was mounted
seems to have done the trick. A subsequent 'btrfs check' didn't show any
errors at all and all the data seems to be there. :-)
I add rootflags=skip_balance to the kernel
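On a Debian system that amounts to something like the following in
/etc/default/grub, followed by running update-grub (a sketch; merge it with
any existing rootflags):

GRUB_CMDLINE_LINUX="rootflags=skip_balance"
# update-grub

With that in place an interrupted balance won't resume automatically at
boot; it can be resumed by hand with "btrfs balance resume".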
On Tue, 21 Apr 2015, Qu Wenruo quwen...@cn.fujitsu.com wrote:
Although we may add an extra check for such problems to improve robustness,
IMHO it's not a real-world problem.
Some of the ReiserFS developers gave a similar reaction to some of my bug
reports. ReiserFS wasn't the most robust
On Mon, 20 Apr 2015, Craig Ringer cr...@2ndquadrant.com wrote:
PostgreSQL is itself copy-on-write (because of multi-version
concurrency control), so it doesn't make much sense to have the FS
doing another layer of COW.
That's a matter of opinion.
I think it's great if PostgreSQL can do
On Sat, 18 Apr 2015, Christoph Anton Mitterer cales...@scientia.net wrote:
On Sat, 2015-04-18 at 04:24 +, Russell Coker wrote:
dd works. ;)
There are patches to rsync that make it work on block devices. Of course
that will copy space occupied by deleted files too.
I think both
On Fri, 17 Apr 2015 11:08:44 PM Christoph Anton Mitterer wrote:
How can I best copy one btrfs filesystem (with snapshots and subvolumes)
into another, especially with keeping the CoW/reflink status of all
files?
dd works. ;)
And ideally incrementally upgrade it later (again with all
The current defragmentation options seem to only support defragmenting named
files/directories or a recursive defragmentation of files and directories.
I'd like to recursively defragment directories. One of my systems has a large
number of large files; the files are write-once and read
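For comparison, the existing recursive operation covers file data (a sketch;
the path is hypothetical):

# btrfs filesystem defragment -r /data

A directory-only mode would defragment just the directory metadata without
rewriting the (already write-once) file contents.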
On Fri, 10 Apr 2015, Liu Bo bo.li@oracle.com wrote:
Above are some consecutive du runs. Why does the space used go from 1.2G
to 1.1G before going up again? The file was created by "cat /dev/sde > 2gsd"
so it definitely wasn't getting smaller.
What's going on here?
What's your
# zfs list -t snapshot
NAME                        USED  AVAIL  REFER  MOUNTPOINT
hetz0/be0-mail@2015-03-10  2.88G      -   387G  -
hetz0/be0-mail@2015-03-11  1.12G      -   388G  -
hetz0/be0-mail@2015-03-12  1.11G      -   388G  -
hetz0/be0-mail@2015-03-13  1.19G      -   388G  -
On Tue, 7 Apr 2015 02:03:04 PM arnaud gaboury wrote:
Would you mind giving the output of # btrfs subvolume list and $ cat
/etc/fstab? It would help me. TY
ID 262 gen 1579103 top level 5 path mysql
On Tue, 7 Apr 2015 10:58:28 AM arnaud gaboury wrote:
After more reading, it seems to me creating a top root subvolume is
the right thing to do:
# btrfs subvolume create root
# btrfs subvolume create root/var
# btrfs subvolume create root/home
Am I right?
A filesystem is designed to
On Mon, 6 Apr 2015 07:40:03 AM Pavel Volkov wrote:
On Sunday, April 5, 2015 1:04:17 PM MSK, Hugo Mills wrote:
That's these, I think:
#define BTRFS_FEATURE_INCOMPAT_BIG_METADATA  (1ULL << 5)
#define BTRFS_FEATURE_INCOMPAT_EXTENDED_IREF (1ULL << 6)
so it's definitely -O^extref. I
On Mon, 6 Apr 2015 03:21:18 AM Duncan wrote:
So... for 3.2 compatibility, extref must not be enabled (tho it's now the
default and AFAIK there's no way to actually disable it, only enable, so
an old btrfs-tools would have to be used that doesn't enable it by
default), AND the nodesize must
On Fri, 3 Apr 2015 05:14:12 AM Duncan wrote:
Well, btrfs itself isn't really stable yet... Stable series should be
stable at least to the extent that whatever you're using in them is, but
with btrfs itself not yet entirely stable...
Also for stable operation you want both forward and
On Sun, 5 Apr 2015 03:16:21 AM Duncan wrote:
Hugo Mills posted on Sat, 04 Apr 2015 13:00:47 + as excerpted:
On Sat, Apr 04, 2015 at 12:55:08PM +, Russell Coker wrote:
As an aside are there options to mkfs.btrfs that would make a
filesystem mountable by kernel 3.2.65? If so I'll
I've been having ongoing issues with balance failing with no space errors in
spite of having plenty. Strangely it seems to most often happen from cron
jobs; when a cron job fails I can count on a manual balance succeeding.
I'm running the latest Debian/Jessie kernel.
Debian/Wheezy userspace can't be expected to work as well as desired with a
3.19 kernel.
Wheezy with BTRFS single or RAID-1 works reasonably well as long as you have
lots of free space, balance it regularly, and configure it not to resume a
balance on reboot.
Debian/Jessie works well with
On Fri, 20 Mar 2015 04:18:38 AM Duncan wrote:
If cp --reflink=auto was the default, it'd just work, making a reflink
where possible, falling back to a normal copy where not possible to
reflink.
However, I'd be wary of such a change, because admins are used to cp
creating a separate copy
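The difference is easy to see with any large file on a btrfs filesystem
(hypothetical file names):

# cp --reflink=always big.img clone.img
# cp big.img copy.img

The first completes almost instantly and shares extents with the original
(and fails on filesystems that can't reflink); the second is the default and
copies all the data.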
On Thu, 19 Mar 2015 07:18:29 AM Erkki Seppala wrote:
But as a user level facility, I want to be able to snapshot before
making a change to a tree full of source code and (re)building it all
over again. I may want to keep my new build, but I may want to flush
it and return to known good
On Sun, 15 Mar 2015, peer@gmx.net wrote:
Following common recommendations [1], I use these mount options on my
main developing machine: noatime,autodefrag. This is a desktop machine and
it works well so far. Now, I'm also going to install several KVM virtual
machines on this system. I want
# btrfs fi df /big
Data, RAID1: total=2.56TiB, used=2.56TiB
System, RAID1: total=32.00MiB, used=388.00KiB
Metadata, RAID1: total=19.25GiB, used=14.06GiB
GlobalReserve, single: total=512.00MiB, used=0.00B
Why is GlobalReserve single? That filesystem has been RAID-1 for ages (since
long before
On Sun, 4 Jan 2015 13:46:30 Chris Murphy wrote:
So to use dd you need to also use bs=, setting it to a multiple of
4096. Of course most people using dd for zeroing set bs= to a decently
high value because it makes the process go much faster than the
default block size of 512 bytes.
You could
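Something like the following (the device name is hypothetical; 1M is a
multiple of 4096 and keeps the number of write requests down):

# dd if=/dev/zero of=/dev/sdX bs=1M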
I've attached the kernel message log that I get after booting kernel 3.16.7
from Debian/Unstable. This is the kernel branch that will go into
Debian/Jessie so it's important to get it fixed.
Below has the start of the errors, the attached file has everything from boot.
I've got similar
On Wed, 10 Dec 2014 17:17:28 Robert White wrote:
A _monthly_ scrub is maybe worth scheduling if you have a lot of churn
in your disk contents.
I do weekly scrubs. I recently had 2 disks in a RAID-1 array develop read
errors within a month of each other. The first scrub after replacing sdb
On Mon, 1 Dec 2014, Chris Murphy li...@colorremedies.com wrote:
On Sun, Nov 30, 2014 at 3:06 PM, Russell Coker russ...@coker.com.au wrote:
When the 2 disks have different data mdadm has no way of knowing which
one is correct and has a 50% chance of overwriting good data. But BTRFS
does
When the 2 disks have different data mdadm has no way of knowing which one is
correct and has a 50% chance of overwriting good data. But BTRFS does checksums
on all reads and solves the problem of corrupt data - as long as you don't have
2 corrupt sectors in matching blocks.
I had a RAID-1 filesystem with 2*3TB disks and 330G of disk space free
according to df -h. I replaced a 3TB disk with a 4TB disk and df reported no
change in the free space (as expected).
I added a 1TB disk to the filesystem and there was still no change! I
expected that adding a 1TB disk
On Fri, 28 Nov 2014, Zygo Blaxell ce3g8...@umail.furryterror.org wrote:
On Fri, Nov 28, 2014 at 01:37:50AM +1100, Russell Coker wrote:
I had a RAID-1 filesystem with 2*3TB disks and 330G of disk space free
according to df -h. I replaced a 3TB disk with a 4TB disk and df
reported
When running Debian kernel version 3.16.0-4-amd64 and btrfs-tools version
3.17-1.1 I ran a btrfs replace operation to replace a 3TB disk that was giving
read errors with a new 4TB disk.
After the replace the btrfs device stats command reported that the 4TB disk
had 16 read errors. It appears
I am in the middle of replacing /dev/sdb (which is 3TB SATA disk that gives a
few read errors on every scrub) with /dev/sdc2 (a partition on a new 4TB SATA
disk). I am running btrfs-tools version 3.17-1.1 from Debian/Unstable and
Debian kernel 3.16.0-4-amd64. I get the following, the last
I have a workstation running Linux 3.14.something on a 120G SSD. It recently
had a problem and now the root filesystem can't be mounted, here is the
message I get when trying to mount it read-only on Debian kernel 3.16.2-3:
[4703937.784447] BTRFS info (device loop0): disk space caching is
Strangely I repeated the same process on the same system (btrfs-zero-log and
mount read-only) and it worked. While it's a concern that repeating the same
process gives different results it's nice that I'm getting all my data back.
On Sun, 23 Nov 2014, Russell Coker russ...@coker.com.au wrote
Also it would be nice to have checksums on the swap data. It's a bit of a waste
to pay for ECC RAM and then lose the ECC benefits as soon as data is paged out.
Also a device replace operation requires that the replacement be the same size
(or maybe larger), while a remove and replace allows the replacement to be
merely large enough to contain all the data. Given the size variation in what
might be called the same size disk by manufacturers this isn't
On Tue, 21 Oct 2014, Zygo Blaxell zblax...@furryterror.org wrote:
On Mon, Oct 20, 2014 at 04:38:28AM +, Duncan wrote:
Russell Coker posted on Sat, 18 Oct 2014 14:54:19 +1100 as excerpted:
# find . -name *546
./1412233213.M638209P10546
# ls -l ./1412233213.M638209P10546
ls: cannot
I've just upgraded the Dom0 (NFS server) from 3.16.3 to 3.16.5 and it all
works.
Prior to upgrading the Dom0 I had the same problem occur with different file
names. All the names in question were truncated names of files that exist.
It seems that 3.16.3 has a bug with NFS serving files with
On Sat, 18 Oct 2014, Michael Johnson - MJ m...@revmj.com wrote:
The NFS client is part of the kernel iirc, so it should be 64 bit. This
would allow the creation of files larger than 4gb and create possible
issues with a 32 bit user space utility.
A correctly written 32bit application will
On Sun, 19 Oct 2014, Robert White rwh...@pobox.com wrote:
On 10/17/2014 08:54 PM, Russell Coker wrote:
# find . -name *546
./1412233213.M638209P10546
# ls -l ./1412233213.M638209P10546
ls: cannot access ./1412233213.M638209P10546: No such file or directory
Any suggestions?
Does ls
I have a system running the Debian 3.16.3-2 AMD64 kernel for the Xen Dom0 and
the DomUs.
The Dom0 has a pair of 500G SATA disks in a BTRFS RAID-1 array. The RAID-1
array has some subvols exported by NFS as well as a subvol for the disk images
for the DomUs - I am not using NoCOW as
On Sun, 21 Sep 2014 11:05:46 Chris Murphy wrote:
On Sep 20, 2014, at 7:39 PM, Russell Coker russ...@coker.com.au wrote:
Anyway the new drive turned out to have some errors, writes failed and I've
got a heap of errors such as the above.
I'm curious if smartctl -t conveyance reveals any
On Sun, 21 Sep 2014, Duncan 1i5t5.dun...@cox.net wrote:
Russell Coker posted on Sun, 21 Sep 2014 11:39:17 +1000 as excerpted:
On a system running the Debian 3.14.15-2 kernel I added a new drive to a
RAID-1 array. My aim was to add a device and remove one of the old
devices.
That's
On a system running the Debian 3.14.15-2 kernel I added a new drive to a
RAID-1 array. My aim was to add a device and remove one of the old devices.
Sep 21 11:26:51 server kernel: [2070145.375221] BTRFS: lost page write due to
I/O error on /dev/sdc3
Sep 21 11:26:51 server kernel:
We need to have a way to determine the progress of a device delete operation.
Also for a balance of a RAID-1 that has more than 2 devices it would be good
to know how much space is used on each device.
Could btrfs fi df be extended to show information separately for each device?
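In the meantime "btrfs fi show" gives at least the per-device totals (a
sketch; the mount point is hypothetical):

# btrfs fi show /mnt

That lists size and used for each devid, though not broken down by data,
metadata, and system chunks the way "btrfs fi df" does.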
On Mon, 8 Sep 2014, Austin S Hemmelgarn ahferro...@gmail.com wrote:
Also, I've found out the hard way that system chunks really should be
RAID1, NOT RAID10, otherwise it's very likely that the filesystem
won't mount at all if you lose 2 disks.
Why would that be different?
In a RAID-1 you
It would be nice if a file system mounted ro counted as ro snapshots for btrfs
send.
When a file system is so messed up it can't be mounted rw it should be regarded
as ro for all operations.
I've attached the dmesg output from a system running Debian kernel 3.14.13
which locked up. Everything which needed to write to disk was blocked. The
dmesg output didn't catch the first messages which had scrolled out of the
buffer. As the disk wasn't writable there was nothing useful in
On Sun, 17 Aug 2014 12:31:42 Duncan wrote:
OTOH, I tend to be rather more of an independent partition booster than
many. The biggest reason for that is the too many eggs in one basket
problem. Fully separate filesystems on separate partitions separate
those data eggs into separate
On Fri, 8 Aug 2014 16:35:29 Jose Ildefonso Camargo Tolosa wrote:
uname -a
Linux server1 3.15.8-031508-generic #201407311933 SMP Thu Jul 31
23:34:33 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux
The complete story:
The filesystem was created on Ubuntu 12.04, running kernel 3.11.
mount options
On Mon, 4 Aug 2014 14:17:02 Peter Waller wrote:
For anyone else having this problem, this article is fairly useful for
understanding disk full problems and rebalance:
http://marc.merlins.org/perso/btrfs/post_2014-05-04_Fixing-Btrfs-Filesystem-
Full-Problems.html
It actually covers the
Please get yourself a NUMA system and test this out.
On Mon, 4 Aug 2014 04:02:53 Peter Roberts wrote:
I've just recently started testing btrfs on my server but after just 24
hours problems have started. I get booted to a busybox prompt (Ubuntu
14.04). I have a multi device FS setup and I can't say for sure if
it managed to boot initially
On Sun, 3 Aug 2014 22:44:26 Nick Krause wrote:
On Sun, Aug 3, 2014 at 7:48 PM, Russell Coker russ...@coker.com.au wrote:
Please get yourself a NUMA system and test this out.
Unfortunately I don't have money for an extra machine as of now as I
am a student
If you can't get an extra machine
On Sun, 3 Aug 2014 21:00:19 George Mitchell wrote:
But just changing your boot configuration to use /dev/sdx is probably the
best option.
Assuming you are booting with grub2, you will need to use /dev/sdx in
the grub2 configuration file. This is a known issue with grub2. Example
from my
On Sun, 3 Aug 2014 21:34:29 George Mitchell wrote:
I see what you are saying. It's a hack. But I suspect that most of the
distros are not yet accommodating btrfs with their standard mkinitrd
process. At this point modifying grub2 config does solve the problem.
If you know a reasonably
On Sun, 3 Aug 2014 00:35:28 Peter Waller wrote:
I'm running Ubuntu 14.04. I wonder if this problem is related to the
thread titled Machine lockup due to btrfs-transaction on AWS EC2
Ubuntu 14.04 which I started on the 29th of July:
http://thread.gmane.org/gmane.comp.file-systems.btrfs/37224
On Fri, 18 Jul 2014 13:56:58 Sam Bull wrote:
On ven, 2014-07-18 at 14:35 +1000, Russell Coker wrote:
Ignoring directories in send/recv is done by subvol. Even if you use
rsync it's a good idea to have different subvols for directory trees
with different backup requirements.
So, an inner
Daily snapshots work well with kernel 3.14 and above (I had problems with 3.13
and previous). I have snapshots every 15 mins on some subvols.
Very large numbers of snapshots can cause performance problems. I suggest
keeping below 1000 snapshots at this time.
You can use send/recv functionality
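A minimal version of that scheme (a sketch; the paths and names are
hypothetical, and the -r matters because send requires a read-only
snapshot):

# btrfs subvolume snapshot -r /home /home/.snap/2015-01-01
# btrfs send /home/.snap/2015-01-01 | btrfs receive /backup

For incremental backups, send -p takes the previous snapshot as a parent so
only the differences are transferred.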
On Tue, 15 Jul 2014 11:42:05 constantine wrote:
Thank you very much for your advice. It worked!
Great!
I verified that the superblocks 1 and 2 had similar information with
btrfs-show-super -i 1 /dev/sdc1 (and -i 2) and then with crossed
fingers:
btrfs-select-super -s 2 /dev/sdc1
which
On Fri, 11 Jul 2014 10:38:22 Duncan wrote:
I've moved all the drives to my main rig, which has a nice 16GB of ECC RAM,
so errors from RAM, CPU, or controller should theoretically be eliminated.
It's worth noting that ECC RAM doesn't necessarily help when it's an in-
transit bus
On Fri, 11 Jul 2014 13:42:40 constantine wrote:
Btrfs filesystem could not be mounted because /dev/sdc1 had unreadable
sectors. It is/was a single filesystem (not raid1 or raid0) over /dev/sda1
and /dev/sdc1.
What does file -s /dev/sda1 /dev/sdc1 report?
On Fri, 11 Jul 2014 21:29:07 constantine wrote:
Thank you very much for your response:
# file -s /dev/sda1 /dev/sdc1
/dev/sda1: BTRFS Filesystem label partition, sectorsize 4096,
nodesize 4096, leafsize 4096,
UUID=c1eb1aaf-665a-4337-9d04-3c3921aa67e0, 1683870334976/3010310701056
bytes
On Wed, 9 Jul 2014 16:48:05 Martin Steigerwald wrote:
- for someone using SAS or enterprise SATA drives with Linux, I
understand btrfs gives the extra benefit of checksums, are there any
other specific benefits over using mdadm or dmraid?
I think I can answer this one.
Most important
root@yoyo:/# btrfs fi df /
Data, RAID1: total=9.00GiB, used=6.95GiB
System, RAID1: total=32.00MiB, used=16.00KiB
Metadata, RAID1: total=1.00GiB, used=82.95MiB
root@yoyo:/# df -h /
Filesystem Size Used Avail Use% Mounted on
/dev/sda2 273G 15G 257G 6% /
I have a Xen server that has
On Thu, 3 Jul 2014 18:19:38 Marc MERLIN wrote:
I upgraded my server from 3.14 to 3.15.1 last week, and since then it's been
running out of memory and deadlocking (panic= doesn't even work).
I downgraded back to 3.14, but I already had the problem once since then.
Is there any correlation
On Sat, 28 Jun 2014 04:26:43 Duncan wrote:
Russell Coker posted on Sat, 28 Jun 2014 10:51:00 +1000 as excerpted:
On Fri, 27 Jun 2014 20:30:32 Zack Coffey wrote:
Can I get more protection by using more than 2 drives?
I had an onboard RAID a few years back that would let me use RAID1
On Sat, 28 Jun 2014 11:38:47 Duncan wrote:
And with the size of disks we have today, the statistics on multiple
whole device reliability are NOT good to us! There's a VERY REAL chance,
even likelihood, that at least one block on the device is going to be
bad, and not be caught by its own
On Fri, 27 Jun 2014 18:34:34 Goffredo Baroncelli wrote:
I don't think that it is possible to mount the _same device_ at the _same
time_ on two different machines. And this doesn't depend on the filesystem.
If you use a clustered filesystem then you can safely mount it on multiple
machines.
If
On Fri, 27 Jun 2014 20:30:32 Zack Coffey wrote:
Can I get more protection by using more than 2 drives?
I had an onboard RAID a few years back that would let me use RAID1
across up to 4 drives.
Currently the only RAID level that fully works in BTRFS is RAID-1 with data on
2 disks. If you