About a year ago now, I decided to set up a small storage cluster to
store backups (and partially replace Dropbox for my usage, but that's a
separate story). I ended up using GlusterFS as the clustering software
itself, and BTRFS as the back-end storage.
GlusterFS itself is actually a pretty
On 2017-04-10 18:59, Hans van Kranenburg wrote:
On 04/10/2017 02:23 PM, Austin S. Hemmelgarn wrote:
On 2017-04-08 16:19, Hans van Kranenburg wrote:
So... today a real life story / btrfs use case example from the trenches
at work...
tl;dr 1) btrfs is awesome, but you have to carefully choose
On 2017-04-11 05:55, Adam Borowski wrote:
On Tue, Apr 11, 2017 at 06:01:19AM +0200, Kai Krakow wrote:
Yes, I know all this. But I don't see why you still want noatime or
relatime if you use lazytime, except for super-optimizing. Lazytime
gives you POSIX conformity for a problem that the other op
On 2017-04-10 14:18, Kai Krakow wrote:
On Mon, 10 Apr 2017 13:13:39 -0400,
"Austin S. Hemmelgarn" wrote:
On 2017-04-10 12:54, Kai Krakow wrote:
On Mon, 10 Apr 2017 18:44:44 +0200,
Kai Krakow wrote:
On Mon, 10 Apr 2017 08:51:38 -0400,
"Austi
On 2017-04-10 12:54, Kai Krakow wrote:
On Mon, 10 Apr 2017 18:44:44 +0200,
Kai Krakow wrote:
On Mon, 10 Apr 2017 08:51:38 -0400,
"Austin S. Hemmelgarn" wrote:
On 2017-04-10 08:45, Kai Krakow wrote:
On Mon, 10 Apr 2017 08:39:23 -0400,
"Austin S. Hemmelgarn" wrote:
On 2017-04-10 08:45, Kai Krakow wrote:
On Mon, 10 Apr 2017 08:39:23 -0400,
"Austin S. Hemmelgarn" wrote:
They've been running BTRFS
with LZO compression, the SSD allocator, atime disabled, and mtime
updates deferred (lazytime mount option) the whole time, so it may be
a sli
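For reference, that combination of options maps onto a single fstab line. This is only a sketch, with a hypothetical device UUID and mount point; note that lazytime is a VFS-level option and needs kernel 4.0 or newer:

```
# /etc/fstab (hypothetical UUID and mount point):
UUID=0123-4567  /data  btrfs  compress=lzo,ssd,noatime,lazytime  0 0
```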
On 2017-04-08 17:07, Adam Borowski wrote:
Unbreaks ARM and possibly other 32-bit architectures.
Fixes: 7d0ef8b4d: Btrfs: update scrub_parity to use u64 stripe_len
Reported-by: Icenowy Zheng
Signed-off-by: Adam Borowski
---
You'd probably want to squash this with Liu's commit, to be nice to fut
On 2017-04-10 04:53, Adam Borowski wrote:
Hi!
While messing with the division failure on current -next, I've noticed that
parity scrub splats immediately on all 32-bit archs I tried. But, it's not
a regression: it bisects to 5a6ac9eacb49143cbad3bbfda72263101cb1f3df (merged
in 3.19) which happens
On 2017-04-09 19:23, Hans van Kranenburg wrote:
On 04/08/2017 01:16 PM, Hans van Kranenburg wrote:
On 04/07/2017 11:25 PM, Hans van Kranenburg wrote:
Ok, I'm going to revive a year old mail thread here with interesting new
info:
[...]
Now, another surprise:
From the exact moment I did mount
On 2017-04-08 16:19, Hans van Kranenburg wrote:
So... today a real life story / btrfs use case example from the trenches
at work...
tl;dr 1) btrfs is awesome, but you have to carefully choose which parts
of it you want to use or avoid 2) improvements can be made, but at least
the problems releva
On 2017-04-08 01:12, Duncan wrote:
Austin S. Hemmelgarn posted on Fri, 07 Apr 2017 07:41:22 -0400 as
excerpted:
2. Results from 'btrfs scrub'. This is somewhat tricky because scrub is
either asynchronous or blocks for a _long_ time. The simplest option
I've found is
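To illustrate the asynchronous behaviour described above: `btrfs scrub start` without `-B` returns immediately, and progress has to be polled with `btrfs scrub status`. A minimal sketch, assuming a hypothetical mount point `/mnt/data` and a btrfs-progs version whose status output includes a `Status:` line (older versions format this differently):

```shell
#!/bin/sh
# Pull the state word ("running", "finished", "aborted") out of
# `btrfs scrub status` output that contains a "Status:" line.
scrub_state() {
    awk '/^Status:/ { print $2 }'
}

# Typical usage (commented out: needs root and a real btrfs mount):
# btrfs scrub start /mnt/data              # asynchronous, returns at once
# while [ "$(btrfs scrub status /mnt/data | scrub_state)" = running ]; do
#     sleep 60                             # poll until the scrub finishes
# done
# btrfs scrub status /mnt/data             # final error counters
```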
On 2017-04-07 13:05, John Petrini wrote:
The use case actually is not Ceph, I was just drawing a comparison
between Ceph's object replication strategy and BTRFS's chunk mirroring.
That's actually a really good comparison that I hadn't thought of
before. From what I can tell from my limited unders
On 2017-04-07 12:58, John Petrini wrote:
When you say "running BTRFS raid1 on top of LVM RAID0 volumes" do you
mean creating two LVM RAID-0 volumes and then putting BTRFS RAID1 on
the two resulting logical volumes?
Yes, although it doesn't have to be LVM, it could just as easily be MD
or even ha
On 2017-04-07 12:28, Chris Murphy wrote:
On Fri, Apr 7, 2017 at 7:50 AM, Austin S. Hemmelgarn
wrote:
If you care about both performance and data safety, I would suggest using
BTRFS raid1 mode on top of LVM or MD RAID0 together with having good backups
and good monitoring. Statistically
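A minimal sketch of that layered layout, assuming four disks with hypothetical names /dev/sd[a-d] (both the mdadm calls and mkfs.btrfs destroy existing data and need root):

```shell
# Two 2-disk MD RAID0 stripes (hypothetical devices, destructive):
mdadm --create /dev/md0 --level=0 --raid-devices=2 /dev/sda /dev/sdb
mdadm --create /dev/md1 --level=0 --raid-devices=2 /dev/sdc /dev/sdd

# BTRFS raid1 for both data and metadata across the two stripes:
mkfs.btrfs -d raid1 -m raid1 /dev/md0 /dev/md1
mount /dev/md0 /mnt/data
```

BTRFS then sees two devices, each an MD stripe: every block group gets one copy per stripe, while I/O within each copy is striped for throughput.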
On 2017-04-07 12:04, Chris Murphy wrote:
On Fri, Apr 7, 2017 at 5:41 AM, Austin S. Hemmelgarn
wrote:
I'm rather fond of running BTRFS raid1 on top of LVM RAID0 volumes,
which while it provides no better data safety than BTRFS raid10 mode, gets
noticeably better performance.
This do
On 2017-04-07 09:28, John Petrini wrote:
Hi Austin,
Thanks for taking to time to provide all of this great information!
Glad I could help.
You've got me curious about RAID1. If I were to convert the array to
RAID1 could it then sustain a multi drive failure? Or in other words
do I actually en
On 2017-04-06 23:25, John Petrini wrote:
Interesting. That's the first time I'm hearing this. If that's the
case I feel like it's a stretch to call it RAID10 at all. It sounds a
lot more like basic replication similar to Ceph, only Ceph understands
failure domains and therefore can be configured t
On 2017-04-04 09:29, Brian B wrote:
On 04/04/2017 12:02 AM, Robert Krig wrote:
My storage array is BTRFS Raid1 with 4x8TB Drives.
Wouldn't it be possible to simply disconnect two of those drives, mount
with -o degraded and still have access (even if read-only) to all my data?
Just jumping on th
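The experiment suggested above would look roughly like this sketch (hypothetical device name). Note that with four-device raid1 some chunks may have both copies on the two missing drives, so even a degraded read-only mount is not guaranteed to reach all data:

```shell
mount -o degraded,ro /dev/sda /mnt/recovery
```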
On 2017-04-01 05:48, Kai Herlemann wrote:
Hi,
I have on my ext4 filesystem some sparse files, mostly images from
ext4 filesystems.
Is btrfs-convert (4.9.1) able to deal with sparse files or can that
cause any problems?
I would tend to agree with some of the other people who have commented
here,
On 2017-04-01 02:06, UGlee wrote:
We are working on a small NAS server for home user. The product is
equipped with a small fast SSD (around 60-120GB) and a large HDD (2T
to 4T).
We have two choices:
1. using bcache to accelerate io operation
2. combining SSD and HDD into a single btrfs volume.
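For the first option, a minimal bcache setup could be sketched as follows (placeholder device names; make-bcache comes from bcache-tools, and both steps are destructive):

```shell
# Register the SSD as cache and the HDD as backing device in one step:
make-bcache -C /dev/sdb -B /dev/sdc

# The combined device shows up as /dev/bcache0:
mkfs.btrfs /dev/bcache0
mount /dev/bcache0 /mnt/nas
```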
On 2017-03-30 11:55, Peter Grandi wrote:
My guess is that very complex risky slow operations like that are
provided by "clever" filesystem developers for "marketing" purposes,
to win box-ticking competitions. That applies to those system
developers who do know better; I suspect that even some fil
On 2017-03-30 09:07, Tim Cuthbertson wrote:
On Wed, Mar 29, 2017 at 10:46 PM, Duncan <1i5t5.dun...@cox.net> wrote:
Tim Cuthbertson posted on Wed, 29 Mar 2017 18:20:52 -0500 as excerpted:
So, another question...
Do I then leave the top level mounted all the time for snapshots, or
should I crea
On 2017-03-29 01:38, Duncan wrote:
Austin S. Hemmelgarn posted on Tue, 28 Mar 2017 07:44:56 -0400 as
excerpted:
On 2017-03-27 21:49, Qu Wenruo wrote:
The problem is, how should we treat subvolume.
Btrfs subvolume sits in the middle of directory and (logical) volume
used in traditional
On 2017-03-28 10:43, Peter Grandi wrote:
This is going to be long because I am writing something detailed
hoping pointlessly that someone in the future will find it by
searching the list archives while doing research before setting
up a new storage system, and they will be the kind of person
that
On 2017-03-28 09:53, Marat Khalili wrote:
There are a couple of reasons I'm advocating the specific behavior I
outlined:
Some of your points are valid, but some break current behaviour and
expectations or create technical difficulties.
1. It doesn't require any specific qgroup setup. By defi
shots are handled.
--
With Best Regards,
Marat Khalili
On 28/03/17 14:24, Austin S. Hemmelgarn wrote:
On 2017-03-27 15:32, Chris Murphy wrote:
How about if qgroups are enabled, then non-root user is prevented from
creating new subvolumes?
Or is there a way for a new nested subvolume to be inc
On 2017-03-27 21:49, Qu Wenruo wrote:
At 03/27/2017 08:01 PM, Austin S. Hemmelgarn wrote:
On 2017-03-27 07:02, Moritz Sichert wrote:
On 27.03.2017 at 05:46, Qu Wenruo wrote:
At 03/27/2017 11:26 AM, Andrei Borzenkov wrote:
27.03.2017 03:39, Qu Wenruo wrote:
At 03/26/2017 06:03 AM
On 2017-03-27 15:32, Chris Murphy wrote:
How about if qgroups are enabled, then non-root user is prevented from
creating new subvolumes?
Or is there a way for a new nested subvolume to be included in its
parent's quota, rather than the new subvolume having a whole new quota
limit?
Tricky proble
On 2017-03-27 09:54, Christian Theune wrote:
Hi,
On Mar 27, 2017, at 3:50 PM, Christian Theune wrote:
Hi,
On Mar 27, 2017, at 3:46 PM, Austin S. Hemmelgarn wrote:
Something I’d like to verify: does having traffic on the volume have
the potential to delay this infinitely? I.e. does the
On 2017-03-27 09:50, Christian Theune wrote:
Hi,
On Mar 27, 2017, at 3:46 PM, Austin S. Hemmelgarn wrote:
Something I’d like to verify: does having traffic on the volume have
the potential to delay this infinitely? I.e. does the system write
to any segments that we’re trying to free so it
On 2017-03-27 09:24, Hugo Mills wrote:
On Mon, Mar 27, 2017 at 03:20:37PM +0200, Christian Theune wrote:
Hi,
On Mar 27, 2017, at 3:07 PM, Hugo Mills wrote:
On my hardware (consumer HDDs and SATA, RAID-1 over 6 devices), it
takes about a minute to move 1 GiB of data. At that rate, it would
On 2017-03-27 07:02, Moritz Sichert wrote:
On 27.03.2017 at 05:46, Qu Wenruo wrote:
At 03/27/2017 11:26 AM, Andrei Borzenkov wrote:
27.03.2017 03:39, Qu Wenruo wrote:
At 03/26/2017 06:03 AM, Moritz Sichert wrote:
Hi,
I tried to configure qgroups on a btrfs filesystem but was really
surp
On 2017-03-25 23:00, J. Hart wrote:
I have a Btrfs filesystem on a backup server. This filesystem has a
directory to hold backups for filesystems from remote machines. In this
directory is a subdirectory for each machine. Under each machine
subdirectory is one directory for each filesystem (ex
On 2017-03-23 06:09, Hugo Mills wrote:
On Wed, Mar 22, 2017 at 10:37:23PM -0700, Sean Greenslade wrote:
Hello, all. I'm currently tracking down the source of some strange
behavior in my setup. I recognize that this isn't strictly a btrfs
issue, but I figured I'd start at the bottom of the stack
On 2017-03-17 15:01, Eric Sandeen wrote:
On 3/17/17 11:25 AM, Austin S. Hemmelgarn wrote:
I'm currently working on a plugin for collectd [1] to track per-device
per-filesystem error rates for BTRFS volumes. Overall, this is actually going
quite well (I've got most of the secon
On 2017-03-17 15:25, John Marrett wrote:
Peter,
Bad news. That means that probably the disk is damaged and
further issues may happen.
This system has a long history, I have had a dual drive failure in the
past, I managed to recover from that with ddrescue. I've subsequently
copied the content
I'm currently working on a plugin for collectd [1] to track per-device
per-filesystem error rates for BTRFS volumes. Overall, this is actually
going quite well (I've got most of the secondary logic like matching
filesystems to watch and parsing the data done already), but I've come
across a r
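The counters such a plugin consumes come from `btrfs device stats`, which prints one `[<device>].<counter> <value>` line per counter. A sketch of totalling them, using canned sample output in place of a live filesystem:

```shell
#!/bin/sh
# Sum the error counters from `btrfs device stats`-style output, where
# each line looks like "[/dev/sda].write_io_errs   <count>".
total_errors() {
    awk '{ sum += $2 } END { print sum + 0 }'
}

# Canned example; on a live system, pipe in `btrfs device stats /mnt`:
printf '[/dev/sda].write_io_errs 0\n[/dev/sda].read_io_errs 2\n' | total_errors
# prints 2
```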
On 2017-03-13 07:52, Juan Orti Alcaine wrote:
2017-03-13 12:29 GMT+01:00 Hérikz Nawarro :
Hello everyone,
Today is safe to use btrfs for home storage? No raid, just secure
storage for some files and create snapshots from it.
In my humble opinion, yes. I'm running a RAID1 btrfs at home for 5
On 2017-03-09 04:49, Peter Grandi wrote:
Consider the common case of a 3-member volume with a 'raid1'
target profile: if the sysadm thinks that a drive should be
replaced, the goal is to take it out *without* converting every
chunk to 'single', because with 2-out-of-3 devices half of the
chunks w
r.c | 5 +-
fs/btrfs/volumes.c | 156 -
fs/btrfs/volumes.h | 37 +
6 files changed, 188 insertions(+), 101 deletions(-)
Everything appears to work as advertised here, so for the patcheset as a
whole, you can add:
Tested-by: Austin S
On 2017-03-05 14:13, Peter Grandi wrote:
What makes me think that "unmirrored" 'raid1' profile chunks
are "not a thing" is that it is impossible to remove
explicitly a member device from a 'raid1' profile volume:
first one has to 'convert' to 'single', and then the 'remove'
copies back to the rem
On 2017-03-03 15:10, Kai Krakow wrote:
On Fri, 3 Mar 2017 07:19:06 -0500,
"Austin S. Hemmelgarn" wrote:
On 2017-03-03 00:56, Kai Krakow wrote:
On Thu, 2 Mar 2017 11:37:53 +0100,
Adam Borowski wrote:
On Wed, Mar 01, 2017 at 05:30:37PM -0700, Chris Murphy wrote:
[...]
Wel
On 2017-03-03 00:56, Kai Krakow wrote:
On Thu, 2 Mar 2017 11:37:53 +0100,
Adam Borowski wrote:
On Wed, Mar 01, 2017 at 05:30:37PM -0700, Chris Murphy wrote:
[1717713.408675] BTRFS warning (device dm-8): missing devices (1)
exceeds the limit (0), writeable mount is not allowed
[1717713.446453
On 2017-03-02 19:47, Peter Grandi wrote:
[ ... ] Meanwhile, the problem as I understand it is that at
the first raid1 degraded writable mount, no single-mode chunks
exist, but without the second device, they are created. [
... ]
That does not make any sense, unless there is a fundamental
mista
On 2017-03-02 12:26, Andrei Borzenkov wrote:
02.03.2017 16:41, Duncan wrote:
Chris Murphy posted on Wed, 01 Mar 2017 17:30:37 -0700 as excerpted:
[1717713.408675] BTRFS warning (device dm-8): missing devices (1)
exceeds the limit (0), writeable mount is not allowed
[1717713.446453] BTRFS error
On 2017-02-27 14:15, John Marrett wrote:
Liubo correctly identified direct IO as a solution for my test
performance issues, with it in use I achieved 908 read and 305 write,
not quite as fast as ZFS but more than adequate for my needs. I then
applied Peter's recommendation of switching to raid10
On 2017-02-23 19:54, Qu Wenruo wrote:
At 02/23/2017 06:51 PM, Christian Theune wrote:
Hi,
not sure whether it’s possible, but we tried space_cache=v2 and
obviously after working fine in staging it broke in production. Or
rather: we upgraded from 4.4 to 4.9 and enabled the space_cache. Our
pro
On 2017-02-23 08:19, Christian Theune wrote:
Hi,
just for future reference if someone finds this thread: there is a bit of
output I’m seeing with this crashing kernel (unclear whether related to btrfs
or not):
31 | 02/23/2017 | 09:51:22 | OS Stop/Shutdown #0x4f | Run-time critical stop
| A
On 2017-02-23 05:51, Christian Theune wrote:
Hi,
not sure whether it’s possible, but we tried space_cache=v2 and obviously after
working fine in staging it broke in production. Or rather: we upgraded from 4.4
to 4.9 and enabled the space_cache. Our production volume is around 50TiB
usable (un
On 2017-02-17 03:26, Duncan wrote:
Imran Geriskovan posted on Thu, 16 Feb 2017 13:42:09 +0200 as excerpted:
Opps.. I mean 4.9/4.10 Experiences
On 2/16/17, Imran Geriskovan wrote:
What are your experiences for btrfs regarding 4.10 and 4.11 kernels?
I'm still on 4.8.x. I'd be happy to hear fro
On 2017-02-16 15:36, Chris Murphy wrote:
Hi,
This man page contains a list for pretty much every other file system,
with a oneliner description: ext4, XFS is in there, and even NTFS, but
not Btrfs.
Also, /etc/filesystems doesn't contain Btrfs. Anyone know if either,
or both, ought to contain an
On 2017-02-16 15:13, E V wrote:
It would be nice if there was an easy way to tell btrfs to allocate
another metadata chunk. For example, the below fs is full due to
exhausted metadata:
Device size:1013.28GiB
Device allocated: 1013.28GiB
Device unallocated:
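When the whole device is allocated to chunks like this, the usual workaround is a filtered balance that compacts nearly-empty data chunks so the reclaimed space becomes available for a new metadata chunk. A hedged sketch (placeholder mount point; the usage threshold is a tuning knob):

```shell
# Rewrite data chunks that are at most 10% used, freeing their
# allocation back to the unallocated pool:
btrfs balance start -dusage=10 /mnt/data
```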
On 2017-02-14 11:46, Austin S. Hemmelgarn wrote:
On 2017-02-14 11:07, Chris Murphy wrote:
On Tue, Feb 14, 2017 at 8:30 AM, Austin S. Hemmelgarn
wrote:
I was just experimenting with snapshots on 4.9.0, and came across some
unexpected behavior.
The simple explanation is that if you snapshot a
On 2017-02-14 11:07, Chris Murphy wrote:
On Tue, Feb 14, 2017 at 8:30 AM, Austin S. Hemmelgarn
wrote:
I was just experimenting with snapshots on 4.9.0, and came across some
unexpected behavior.
The simple explanation is that if you snapshot a subvolume, any files in the
subvolume that have
I was just experimenting with snapshots on 4.9.0, and came across some
unexpected behavior.
The simple explanation is that if you snapshot a subvolume, any files in
the subvolume that have the NOCOW attribute will not have that attribute
in the snapshot. Some further testing indicates that th
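For anyone wanting to reproduce this, the attribute in question is the one managed with `chattr`/`lsattr`; a sketch with a placeholder path (on btrfs, `+C` only takes reliable effect while the file is still empty):

```shell
touch /mnt/data/testfile       # placeholder path on a btrfs mount
chattr +C /mnt/data/testfile   # set NOCOW while the file is empty
lsattr /mnt/data/testfile      # a 'C' in the flags column confirms it
```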
On 2017-02-10 09:21, Peter Zaitsev wrote:
Hi,
As I have been reading btrfs whitepaper it speaks about autodefrag in very
generic terms - once random write in the file is detected it is put in the
queue to be defragmented. Yet I could not find any specifics about this
process described anywher
On 2017-02-09 22:58, Andrei Borzenkov wrote:
07.02.2017 23:47, Austin S. Hemmelgarn wrote:
...
Sadly, freezefs (the generic interface based off of xfs_freeze) only
works for block device snapshots. Filesystem level snapshots need the
application software to sync all its data and then
On 2017-02-09 08:25, Adam Borowski wrote:
On Wed, Feb 08, 2017 at 11:48:04AM +0800, Qu Wenruo wrote:
Just don't believe the vanilla df output for btrfs.
For btrfs, unlike other fs like ext4/xfs, which allocates chunk dynamically
and has different metadata/data profile, we can only get a clear v
On 2017-02-08 20:42, Ian Kelling wrote:
I had a file read fail repeatably, in syslog, lines like this
kernel: BTRFS warning (device dm-5): csum failed ino 2241616 off
51580928 csum 4redacted expected csum 2redacted
I rmed the file.
Another error more recently, 5 instances which look like this:
On 2017-02-09 06:49, Adam Borowski wrote:
On Wed, Feb 08, 2017 at 02:21:13PM -0500, Austin S. Hemmelgarn wrote:
- maybe deduplication (cyrus does it by hardlinking of same content messages
now) later
Deduplication beyond what Cyrus does is probably not worth it. In most
cases about 10% of an
On 2017-02-08 16:45, Peter Grandi wrote:
[ ... ]
The issue isn't total size, it's the difference between total
size and the amount of data you want to store on it. and how
well you manage chunk usage. If you're balancing regularly to
compact chunks that are less than 50% full, [ ... ] BTRFS on
1
On 2017-02-08 09:46, Peter Grandi wrote:
My system is or seems to be running out of disk space but I
can't find out how or why. [ ... ]
Filesystem      Size  Used Avail Use% Mounted on
/dev/sda3        28G   26G  2.1G  93% /
[ ... ]
So from chunk level, your fs is already full. And
On 2017-02-08 13:38, Libor Klepáč wrote:
Hello,
inspired by the recent discussion on BTRFS vs. databases, I wanted to ask about the
suitability of BTRFS for hosting a Cyrus IMAP server spool. I haven't found
any recent article on this topic.
I'm preparing migration of our mailserver to Debian Stretch, ie. k
On 2017-02-08 08:46, Tomasz Torcz wrote:
On Wed, Feb 08, 2017 at 07:50:22AM -0500, Austin S. Hemmelgarn wrote:
It is exponentially safer in BTRFS
to run single data single metadata than half raid1 data half raid1 metadata.
Why?
To convert to profiles _designed_ for a single device and
On 2017-02-08 08:26, Martin Raiber wrote:
On 08.02.2017 14:08 Austin S. Hemmelgarn wrote:
On 2017-02-08 07:14, Martin Raiber wrote:
Hi,
On 08.02.2017 03:11 Peter Zaitsev wrote:
Out of curiosity, I see one problem here:
If you're doing snapshots of the live database, each snapshot leave
On 2017-02-07 20:49, Nicholas D Steeves wrote:
Dear btrfs community,
Please accept my apologies in advance if I missed something in recent
btrfs development; my MUA tells me I'm ~1500 unread messages
out-of-date. :/
I recently read about "mount -t btrfs -o user_subvol_rm_allowed" while
doing re
On 2017-02-08 07:14, Martin Raiber wrote:
Hi,
On 08.02.2017 03:11 Peter Zaitsev wrote:
Out of curiosity, I see one problem here:
If you're doing snapshots of the live database, each snapshot leaves
the database files like killing the database in-flight. Like shutting
the system down in the midd
On 2017-02-07 17:28, Kai Krakow wrote:
On Thu, 19 Jan 2017 15:02:14 -0500,
"Austin S. Hemmelgarn" wrote:
On 2017-01-19 13:23, Roman Mamedov wrote:
On Thu, 19 Jan 2017 17:39:37 +0100
"Alejandro R. Mosteo" wrote:
I was wondering, from a point of view of data saf
On 2017-02-07 22:21, Hans Deragon wrote:
Greetings,
On 2017-02-02 10:06, Austin S. Hemmelgarn wrote:
On 2017-02-02 09:25, Adam Borowski wrote:
On Thu, Feb 02, 2017 at 07:49:50AM -0500, Austin S. Hemmelgarn wrote:
This is a severe bug that makes a not all that uncommon (albeit bad) use
case
On 2017-02-07 13:27, David Sterba wrote:
On Fri, Feb 03, 2017 at 08:48:58AM -0500, Austin S. Hemmelgarn wrote:
This adds some extra documentation to the btrfs-receive manpage that
explains some of the security related aspects of btrfs-receive. The
first part covers the fact that the subvolume
On 2017-02-07 15:54, Kai Krakow wrote:
On Tue, 7 Feb 2017 15:27:34 -0500,
"Austin S. Hemmelgarn" wrote:
I'm not sure about this one. I would assume based on the fact that
many other things don't work with nodatacow and that regular defrag
doesn't work on files whic
On 2017-02-07 15:36, Kai Krakow wrote:
On Tue, 7 Feb 2017 09:13:25 -0500,
Peter Zaitsev wrote:
Hi Hugo,
For the use case I'm looking for I'm interested in having snapshot(s)
open at all time. Imagine for example snapshot being created every
hour and several of these snapshots kept at all
On 2017-02-07 15:19, Kai Krakow wrote:
On Tue, 7 Feb 2017 14:50:04 -0500,
"Austin S. Hemmelgarn" wrote:
Also does autodefrag works with nodatacow (ie with snapshot) or
are these exclusive ?
I'm not sure about this one. I would assume based on the fact that
many other th
On 2017-02-07 14:47, Kai Krakow wrote:
On Mon, 6 Feb 2017 08:19:37 -0500,
"Austin S. Hemmelgarn" wrote:
MDRAID uses stripe selection based on latency and other measurements
(like head position). It would be nice if btrfs implemented similar
functionality. This would also be h
On 2017-02-07 14:39, Kai Krakow wrote:
On Tue, 7 Feb 2017 10:06:34 -0500,
"Austin S. Hemmelgarn" wrote:
4. Try using in-line compression. This can actually significantly
improve performance, especially if you have slow storage devices and
a really nice CPU.
Just a side
On 2017-02-07 13:59, Peter Zaitsev wrote:
Jeff,
Thank you very much for explanations. Indeed it was not clear in the
documentation - I read it simply as "if you have snapshots enabled
nodatacow makes no difference"
I will rebuild the database in this mode from scratch and see how
performance ch
On 2017-02-07 14:31, Peter Zaitsev wrote:
Hi Hugo,
As I re-read it closely (and also other comments in the thread) I now
understand there is a difference in how nodatacow works even when snapshots are
in place.
On autodefrag I wonder is there some more detailed documentation about how
autodefrag wor
On 2017-02-07 10:20, Timofey Titovets wrote:
I think that you have a problem with extent bookkeeping (if i
understand how btrfs manage extents).
So for deal with it, try enable compression, as compression will force
all extents to be fragmented with size ~128kb.
No, it will compress everything
On 2017-02-07 10:00, Timofey Titovets wrote:
2017-02-07 17:13 GMT+03:00 Peter Zaitsev :
Hi Hugo,
For the use case I'm looking for I'm interested in having snapshot(s)
open at all time. Imagine for example snapshot being created every
hour and several of these snapshots kept at all time provi
On 2017-02-07 08:53, Peter Zaitsev wrote:
Hi,
I have tried BTRFS from Ubuntu 16.04 LTS for write intensive OLTP MySQL
Workload.
It did not go very well, ranging from multi-second stalls where no
transactions are completed, to finally a kernel OOPS with "no space left
on device" error message
On 2017-02-04 16:10, Kai Krakow wrote:
On Sat, 04 Feb 2017 20:50:03 +
"Jorg Bornschein" wrote:
February 4, 2017 1:07 AM, "Goldwyn Rodrigues"
wrote:
Yes, please check if disabling quotas makes a difference in
execution time of btrfs balance.
Just FYI: With quotas disabled it took ~20
On 2017-02-05 23:26, Duncan wrote:
Hans van Kranenburg posted on Sun, 05 Feb 2017 22:55:42 +0100 as
excerpted:
On 02/05/2017 10:42 PM, Alexander Tomokhov wrote:
Is it possible, having two drives to do raid1 for metadata but keep
data on a single drive only?
Nope.
Would be a really nice feat
On 2017-02-05 06:54, Kai Krakow wrote:
On Wed, 1 Feb 2017 17:43:32 +
Graham Cobb wrote:
On 01/02/17 12:28, Austin S. Hemmelgarn wrote:
On 2017-02-01 00:09, Duncan wrote:
Christian Lupien posted on Tue, 31 Jan 2017 18:32:58 -0500 as
excerpted:
[...]
I'm just a btrfs-using
of the send stream.
Signed-off-by: Austin S. Hemmelgarn
Suggested-by: Graham Cobb
---
Chages since v1:
* Updated the description based on suggestions from Graham Cobb.
Inspired by a recent thread on the ML.
This could probably be more thorough, but I felt it was more important
to get it
On 2017-02-03 14:17, Graham Cobb wrote:
On 03/02/17 16:01, Austin S. Hemmelgarn wrote:
Ironically, I ended up having time sooner than I thought. The message
doesn't appear to be in any of the archives yet, but the message ID is:
<20170203134858.75210-1-ahferro...@gmail.com>
A
On 2017-02-03 10:44, Graham Cobb wrote:
On 03/02/17 12:44, Austin S. Hemmelgarn wrote:
I can look at making a patch for this, but it may be next week before I
have time (I'm not great at multi-tasking when it comes to software
development, and I'm in the middle of helping to fi
of the send stream.
Signed-off-by: Austin S. Hemmelgarn
---
Inspired by a recent thread on the ML.
This could probably be more thorough, but I felt it was more important
to get it documented as quickly as possible, and this should cover the
basic info that most people will care about
On 2017-02-03 04:14, Duncan wrote:
Graham Cobb posted on Thu, 02 Feb 2017 10:52:26 + as excerpted:
On 02/02/17 00:02, Duncan wrote:
If it's a workaround, then many of the Linux procedures we as admins
and users use every day are equally workarounds. Setting 007 perms on
a dir that doesn't
On 2017-02-02 09:25, Adam Borowski wrote:
On Thu, Feb 02, 2017 at 07:49:50AM -0500, Austin S. Hemmelgarn wrote:
This is a severe bug that makes a not all that uncommon (albeit bad) use
case fail completely. The fix had no dependencies itself and
I don't see what's bad in mount
On 2017-02-02 05:52, Graham Cobb wrote:
On 02/02/17 00:02, Duncan wrote:
If it's a workaround, then many of the Linux procedures we as admins and
users use every day are equally workarounds. Setting 007 perms on a dir
that doesn't have anything immediately security vulnerable in it, simply
to k
On 2017-02-01 17:48, Duncan wrote:
Adam Borowski posted on Wed, 01 Feb 2017 12:55:30 +0100 as excerpted:
On Wed, Feb 01, 2017 at 05:23:16AM +, Duncan wrote:
Hans Deragon posted on Tue, 31 Jan 2017 21:51:22 -0500 as excerpted:
But the current scenario makes it difficult for me to put redun
On 2017-02-01 00:09, Duncan wrote:
Christian Lupien posted on Tue, 31 Jan 2017 18:32:58 -0500 as excerpted:
I have been testing btrfs send/receive. I like it.
During those tests I discovered that it is possible to access and modify
(add files, delete files ...) of the new receive snapshot duri
On 2017-01-30 23:58, Duncan wrote:
Oliver Freyermuth posted on Sat, 28 Jan 2017 17:46:24 +0100 as excerpted:
Just don't count on restore to save your *** and always treat what it
can often bring to current as a pleasant surprise, and having it fail
won't be a down side, while having it work, if
On 2017-01-28 00:00, Duncan wrote:
Austin S. Hemmelgarn posted on Fri, 27 Jan 2017 07:58:20 -0500 as
excerpted:
On 2017-01-27 06:01, Oliver Freyermuth wrote:
I'm also running 'memtester 12G' right now, which at least tests 2/3
of the memory. I'll leave that running for
On 2017-01-28 04:17, Andrei Borzenkov wrote:
27.01.2017 23:03, Austin S. Hemmelgarn wrote:
On 2017-01-27 11:47, Hans Deragon wrote:
On 2017-01-24 14:48, Adam Borowski wrote:
On Tue, Jan 24, 2017 at 01:57:24PM -0500, Hans Deragon wrote:
If I remove 'ro' from the option, I cann
On 2017-01-27 11:47, Hans Deragon wrote:
On 2017-01-24 14:48, Adam Borowski wrote:
On Tue, Jan 24, 2017 at 01:57:24PM -0500, Hans Deragon wrote:
If I remove 'ro' from the option, I cannot get the filesystem mounted
because of the following error: BTRFS: missing devices(1) exceeds the
limit(0)
On 2017-01-27 06:01, Oliver Freyermuth wrote:
I'm also running 'memtester 12G' right now, which at least tests 2/3 of the
memory. I'll leave that running for a day or so, but of course it will not
provide a clear answer...
A small update: while the online memtester is without any errors still
On 2017-01-19 13:23, Roman Mamedov wrote:
On Thu, 19 Jan 2017 17:39:37 +0100
"Alejandro R. Mosteo" wrote:
I was wondering, from a point of view of data safety, if there is any
difference between using dup or making a raid1 from two partitions in
the same disk. This is thinking on having some p
On 2017-01-19 11:39, Alejandro R. Mosteo wrote:
Hello list,
I was wondering, from a point of view of data safety, if there is any
difference between using dup or making a raid1 from two partitions in
the same disk. This is thinking on having some protection against the
typical aging HDD that sta
On 2017-01-18 09:21, Steven Hum wrote:
Added 2 drives to my RAID10, then ran btrfs balance. The system appears
to have crashed after several hours (I was ssh'd in at the time on my
local network). When I reboot the Arch system, I ran btrfs check and no
errors were reported.
However, attempting