On 8/1/2017 10:24 PM, Cerem Cem ASLAN wrote:
What does that mean? Can't we replicate the same snapshot with `btrfs
send | btrfs receive` multiple times, because it will have a "Received
UUID" after the first `btrfs receive`?
You will need to make a new read-write snapshot of the received volume to
Have you checked that there is no Received UUID on the source subvolume?
# btrfs sub show volume/mysql/
volume/mysql
Name: mysql
UUID: 8a94524e-a956-c14b-bb8d-d453627f27d5
Parent UUID: -
Received UUID: -
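Checking that field can be scripted. A minimal sketch that parses `btrfs subvolume show` output with awk; the sample output below is hard-coded for illustration, and a real script would capture it from the live command instead:

```shell
# Hypothetical sample; on a real system capture it with:
#   show_output=$(btrfs subvolume show /path/to/subvol)
show_output='Name: mysql
UUID: 8a94524e-a956-c14b-bb8d-d453627f27d5
Parent UUID: -
Received UUID: -'

# Extract the "Received UUID" field; a value of "-" means it is unset.
recv_uuid=$(printf '%s\n' "$show_output" | awk -F': *' '/Received UUID:/ {print $2}')
if [ "$recv_uuid" = "-" ]; then
  verdict="ok: no Received UUID on source"
else
  verdict="warning: Received UUID is set; do not use as a send source"
fi
echo "$verdict"
```

A subvolume with a set Received UUID should not be used as the source of further sends until it is replaced by a fresh snapshot.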
From: siranee...@tpc.co.th -- Sent: 2017-08-13 - 16:00
> Hi "A L",
>
> As per your suggestion, does it mean I have to clear the
> "Received UUID" on all my current subvolumes and send them again as full
> snapshots?
You need to clear all subvolumes with a set Received UUID
On 8/13/2017 12:52 PM, siranee...@tpc.co.th wrote:
Hi "A L",
[root@backuplogC7 ~]# btrfs sub show /var/lib/mariadb/mysql
/var/lib/mariadb/mysql
Name: mysql
UUID: 92f319c5-e132-3249-9b13-d39ee77a2b44
P
from a Windows client. The volume 'userData' is shared by Samba.
# ls -l
-rw-rwx---+ 1 user user 112643654 Jul 27 17:46
'/mnt/storagePool/snapshots/userData.20170727T2102/Documents/Art &
Craft/Scrapbook-Journal/FAR FAR HILL/Paper/000_29082014_w.zip'
-rw-rw 1 user user 88269510 Jul 27 22:2
ld already be in kernel-4.13_rc2
Greetings
Hermann
On 07/28/2017 07:26 PM, A L wrote:
I often hit the following error when doing incremental btrfs
send-receive:
Btrfs incremental send | receive fails with Error: File not found
Sometimes I can do two or three incremental snapshots, but then the
tstrapping states "We will need to create a read-only snapshot ..."
I am using send/receive (with read-only snapshots) on a regular basis
and never had a problem like yours.
What are the commands you use to create your snapshots?
Greetings
Hermann
Perhaps using a bind mount? It would look and work the same as an ordinary filesystem.
Just need to make sure du uses one filesystem.
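The bind-mount idea might look like this; the mountpoints are invented, and since mounting needs root, a DRY_RUN guard only prints the commands instead of running them:

```shell
# Sketch only (paths are made up): bind-mount the shared subvolume so du
# sees it as an ordinary directory tree on a single filesystem.
DRY_RUN=1
run() { if [ "${DRY_RUN:-0}" = 1 ]; then echo "would run: $*"; else "$@"; fi; }

run mount --bind /mnt/storagePool/userData /srv/userData  # same tree, plain path
run du -shx /srv/userData                                 # -x keeps du on one filesystem
run umount /srv/userData
```

The `-x` flag makes du skip directories on other filesystems, which is the "one filesystem" guarantee mentioned above.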
From: Ulli Horlacher -- Sent: 2017-08-22 - 18:57
> On Tue 2017-08-22 (21:45), Roman Mamedov wrote:
>
>> It is beneficial to
From: Ulli Horlacher -- Sent: 2017-08-23 - 09:18
> On Tue 2017-08-22 (22:48), Ulli Horlacher wrote:
>
>> > Assumptions that all Btrfs features such as snapshots are
>> > infinitely scalable at no cost may be optimistic:
Seems to be some inconsistency with btrfs check --clear-space-cache v1.
It only worked after clearing the v2 space cache. See the result below.
Background: for some time I was running with the space_cache mount option.
I recently changed to space_cache=v2 and used the "clear_cache" mount option
to clear
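A sketch of the sequence the report describes, clearing both cache versions before remounting with v2. /dev/sdX is a placeholder, and the plan is only printed here, since `btrfs check` must run against a real, unmounted device:

```shell
# Hedged sketch: clear BOTH space cache versions while the filesystem is
# unmounted, then mount with the v2 cache.  Printed, not executed.
plan='btrfs check --clear-space-cache v1 /dev/sdX
btrfs check --clear-space-cache v2 /dev/sdX
mount -o space_cache=v2 /dev/sdX /mnt'
printf '%s\n' "$plan"
```

Clearing only one cache version can leave the other behind, which matches the inconsistency observed above.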
Perhaps NetApp is using a VFS overlay. There is really only one snapshot, but
it is shown in the overlay on every folder. It is similar to Samba Shadow
Copies.
From: Ulli Horlacher -- Sent: 2017-09-09 - 21:52
> On Sat 2017-09-09 (22:43), Andrei
The problem can be that you have a Received UUID on the source volume. This
breaks send-receive.
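The usual fix, sketched with invented paths: re-create the subvolume via fresh snapshots (which start with an empty Received UUID) and re-seed the destination with one full send. The plan is printed rather than executed, since the real commands need root and a mounted btrfs:

```shell
# Hedged sketch: fresh snapshots drop the stray Received UUID, then one
# full (non-incremental) send re-seeds the backup side.
src=/mnt/pool/mysql   # placeholder path
plan=$(cat <<EOF
btrfs subvolume snapshot $src $src.rw
btrfs subvolume snapshot -r $src.rw $src.ro
btrfs send $src.ro | btrfs receive /backup
EOF
)
printf '%s\n' "$plan"
```

After the full send has re-seeded the destination, later incremental sends can use `-p` against the new read-only snapshot.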
From: Dave -- Sent: 2017-09-07 - 06:43
> Here is more info and a possible (shocking) explanation. This
> aggregates my prior messages and it provides an almost
On 9/4/2017 5:11 PM, Adam Borowski wrote:
Hi!
Here's a utility to measure used compression type + ratio on a set of files
or directories: https://github.com/kilobyte/compsize
Great tool. Just tried it on some of my backup snapshots.
# compsize portage.20170904T2200
142432 files.
all
There is a drawback in that defragmentation re-duplicates data that was
previously deduplicated or shared among snapshots/subvolumes.
From: Marat Khalili -- Sent: 2017-09-04 - 11:31
> Hello list,
> good time of the day,
>
> More than once I have seen it mentioned on this list that autodefrag
] ? flush_kthread_worker+0xb0/0xb0
[17838.564277] [81735a2c] ret_from_fork+0x7c/0xb0
[17838.564279] [81088f20] ? flush_kthread_worker+0xb0/0xb0
[17838.564281] ---[ end trace b632fc27406d3441 ]---
Is there some problem?
Regards,
Marco L. Crociani
--
To unsubscribe from this list: send the line
Hello list-
Under heavy load (i/o), one of our fileservers lost two drives
in a raid6 configuration. After the drives were synchronized,
we can no longer mount the multiple-device btrfs filesystem
due to (at least) parent transid verification failures.
btrfsck built from git commit
Chris Mason chris.ma...@oracle.com writes:
Excerpts from Gregory L Shomo's message of 2011-04-19 15:08:13 -0400:
Hello list-
Under heavy load (i/o), one of our fileservers lost two drives
in a raid6 configuration. After the drives were synchronized,
we can no longer mount the multiple
Chris Mason chris.ma...@oracle.com writes:
Excerpts from Gregory L Shomo's message of 2011-04-20 08:56:02 -0400:
Chris Mason chris.ma...@oracle.com writes:
Excerpts from Gregory L Shomo's message of 2011-04-19 15:08:13 -0400:
Hello list-
Under heavy load (i/o), one of our fileservers
Chris Mason chris.ma...@oracle.com writes:
Excerpts from Gregory L Shomo's message of 2011-04-20 09:20:20 -0400:
Chris Mason chris.ma...@oracle.com writes:
Excerpts from Gregory L Shomo's message of 2011-04-20 08:56:02 -0400:
Chris Mason chris.ma...@oracle.com writes:
Excerpts from
Chris Mason chris.ma...@oracle.com writes:
Mounting the filesystem read-only from /dev/sdd1 fails, but
succeeds from /dev/sdc1... after about 4855 parent transid
verification failures.
kernel: [ 293.827069] Btrfs loaded
kernel: [ 293.828014] device fsid
Chris Mason chris.ma...@oracle.com writes:
OK, we've re-compiled linux-2.6.38 patched up to btrfs-unstable
commit f65647c29b14f5a32ff6f3237b0ef3b375ed5a79 and can now mount
the filesystem.
Mounting the filesystem read-only from /dev/sdd1 fails, but
succeeds from /dev/sdc1... after about
[ 8073.244629] end_request: I/O error, dev sda, sector 494581666
[ 8073.244667] ata1: EH complete
[ 8073.305813] btrfs_readpage_end_io_hook: 155 callbacks suppressed
[ 8073.305823] btrfs csum failed ino 13966 off 235696128 csum
2566472073 private 501345114
# smartctl -d ata -l selftest /dev/sda
smartctl 5.40
We have many other lists as well and this week they are on sale. Just email me
here: willie.lamb...@inspiresuccess.in
To invoke no further correspondence status please send an email to
disapp...@inspiresuccess.in
but now you have the chance to grab a massive list.
Use it for ezine, email, autoresponders, they will bring your website a huge
amount of power, traffic and sales.
You have to check it out at http://www.traffictractor.com
All the best,
Sam L. Carl
://tiny.cc/ndonh
2) Auto Traffic Avalanche - http://tiny.cc/3wsuq
3) Auto Blog System - http://tiny.cc/ytf7r
4) Zero Cost Commissions - http://tiny.cc/ermdw
5) CPA Instruments - http://tiny.cc/ruh9b
All the best,
Sam L. Carl
Hi,
the situation:
Label: 'RootFS' uuid: c87975a0-a575-405e-9890-d3f7f25bbd96
Total devices 2 FS bytes used 284.98GB
devid 2 size 311.82GB used 286.51GB path /dev/sdb3
devid 1 size 897.76GB used 286.51GB path /dev/sda3
RootFS created when sda3 was 897.76GB and sdb3 311.82GB.
I
On Sat, Jan 14, 2012 at 12:17 AM, Hugo Mills h...@carfax.org.uk wrote:
On Sat, Jan 14, 2012 at 12:12:06AM +0100, Marco L. Crociani wrote:
Hi,
the situation:
Label: 'RootFS' uuid: c87975a0-a575-405e-9890-d3f7f25bbd96
Total devices 2 FS bytes used 284.98GB
devid 2 size 311.82GB
On Sat, Jan 14, 2012 at 3:42 PM, Sander san...@humilis.net wrote:
Marco L. Crociani wrote (ao):
On Sat, Jan 14, 2012 at 12:17 AM, Hugo Mills h...@carfax.org.uk wrote:
On Sat, Jan 14, 2012 at 12:12:06AM +0100, Marco L. Crociani wrote:
# ./btrfs fi sh
Label: 'RootFS' uuid: c87975a0-a575
On Mon, Apr 16, 2012 at 3:46 PM, David Sterba d...@jikos.cz wrote:
On Sat, Apr 14, 2012 at 06:39:12PM +0200, Marco L. Crociani wrote:
Apr 14 18:07:52 evo kernel: [ 431.054709] btrfs: relocating block
group 1401002393600 flags 17
Apr 14 18:08:14 evo kernel: [ 453.506541] btrfs csum failed ino
On Tue, Apr 17, 2012 at 12:56 AM, Marco L. Crociani
marco.croci...@gmail.com wrote:
On Mon, Apr 16, 2012 at 3:46 PM, David Sterba d...@jikos.cz wrote:
On Sat, Apr 14, 2012 at 06:39:12PM +0200, Marco L. Crociani wrote:
Apr 14 18:07:52 evo kernel: [ 431.054709] btrfs: relocating block
group
Today I tried scrub...
Apr 19 17:36:01 evo kernel: [ 187.932297] device label RootFS devid 1
transid 47046 /dev/sda3
Apr 19 17:36:02 evo kernel: [ 188.145858] device label RootFS devid 3
transid 47046 /dev/sdb3
Apr 19 17:36:19 evo kernel: [ 205.483044] device label RootFS devid 1
transid 47046
Hi all,
today another episode... I have compiled and tried kernel 3.4-rc5
./btrfs fi sh
Label: 'RootFS' uuid: c87975a0-a575-405e-9890-d3f7f25bbd96
Total devices 3 FS bytes used 1006.67GB
devid 3 size 1.75TB used 357.00GB path /dev/sdb3
devid 1 size 1.75TB used
On Wed, May 2, 2012 at 4:54 PM, David Sterba d...@jikos.cz wrote:
On Thu, Apr 19, 2012 at 05:42:05PM +0200, Marco L. Crociani wrote:
Apr 19 17:38:41 evo kernel: [ 347.661915] Call Trace:
Apr 19 17:38:41 evo kernel: [ 347.661964] [a00b76ac]
btrfs_ioctl_dev_info+0x15c/0x1a0
On Wed, May 2, 2012 at 5:27 PM, David Sterba d...@jikos.cz wrote:
On Wed, May 02, 2012 at 04:59:03PM +0200, Marco L. Crociani wrote:
On Thu, Apr 19, 2012 at 05:42:05PM +0200, Marco L. Crociani wrote:
Apr 19 17:38:41 evo kernel: [ 347.661964] [a00b76ac]
btrfs_ioctl_dev_info+0x15c/0x1a0
-by: Marco L. Crociani marco.croci...@gmail.com
Signed-off-by: Stefan Behrens sbehr...@giantdisaster.de
---
fs/btrfs/scrub.c | 7 +++
1 file changed, 7 insertions(+)
diff --git a/fs/btrfs/scrub.c b/fs/btrfs/scrub.c
index b679bf6..7e487be 100644
--- a/fs/btrfs/scrub.c
+++ b/fs/btrfs
I'd like to know if there are drawbacks to using btrfs with non-ECC RAM instead
of using ext4 with non-ECC RAM. I know that some features of btrfs may rely on
ECC RAM, but is the chance of data corruption, or even a damaged filesystem,
higher than when I use ext4 instead of btrfs?
I want to know
Hello.
Good evening, and how are you doing? Just a quick note: I have an official
opportunity I would like to discuss with you in private.
I would appreciate a quick reply here to my personal private e-mail address
for further communication.
Regards,
Mrs. Ko majus Leung
e-mail: komayln...@gmail.com
Hi,
I ran a btrfs scrub & found 3 uncorrectable errors:
[ 5461.412007] BTRFS warning (device md126p2): checksum error at logical
193296359424 on dev /dev/md126p2, sector 379645488, root 257, inode
1795724, offset 217088, length 4096, links 1 (path:
On 11/22/2017 11:15 PM, John L. Center wrote:
On 09/09/2017 01:48 AM, Uli Heller wrote:
I downloaded the btrfs-progs-v4.13.tar.xz tarball and tried to build it.
Unfortunately, I get an error about cmds-inspect-tree-stats.h. The file
seems to be missing from the tarball; it used to exist
$ LANG=C ls -l btrfs-progs-v4.12/cmds-inspect-tree-stats.h
btrfs-progs-v4.13/cmds-inspect-tree-stats.h
ls: cannot access btrfs-progs-v4.13/cmds-inspect-tree-stats.h: No such
file or directory
-rw-rw-r-- 1 uli uli 905 Jul 28 15:29
btrfs-progs-v4.12/cmds-in
Hi,
I have a copy of this data online and a backup offline but I would
like to find out if we can recover this.
A drive broke and I made a mess while replacing it.
Each of sda, sdb, sdc, sdd is a two-disk hardware RAID0.
One disk of sda died and I replaced it.
Made a new raid0 array: sde
This was my