Hi,
I cloned several machines from the same disk and they all have the same
BTRFS filesystem UUID. I need to recover one disk from failure, but if
I attach two disks to the same machine, both disks have the same ID.
This seems to confuse udev, because /dev/disk/by-uuid seems to show just
one
On Mon, Dec 28, 2015 at 01:31:26AM +1100, Jiri Kanicky wrote:
> Hi,
>
> I cloned several machines from the same disk and they all have the same
> BTRFS filesystem UUID. I need to recover one disk from failure, but
> if I attach two disks to the same machine, both disks have the same
> ID.
>
> This
On 12/27/15 15:31, Jiri Kanicky wrote:
> Is there a way to change the BTRFS ID (generate new one) that I can
> differentiate between the two disks on one host?
btrfstune:
-u
Change fsid to a randomly generated UUID or continue previous
fsid change operation in case it was interrupted
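For illustration, the operation could look like this (a sketch; /dev/sdb1 is a hypothetical device path for the cloned disk, and the filesystem must be unmounted):

```shell
# Hypothetical device path for the cloned disk; adjust for your system.
# The filesystem must be unmounted; btrfstune -u rewrites the fsid in
# all metadata blocks, so let it run to completion once started.
DEV=/dev/sdb1
btrfstune -u "$DEV"            # write a new randomly generated UUID
blkid -s UUID -o value "$DEV"  # verify the UUID has changed
```

Newer btrfs-progs may ask for confirmation (or a force flag) before rewriting the fsid, and the two same-UUID disks should not both be attached while one is mounted, since colliding UUIDs can confuse btrfs device scanning.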
On Sun, Dec 27, 2015 at 6:59 AM, Waxhead wrote:
> Hi,
>
> I have a "toy-array" of 6x USB drives hooked up to a hub where I made a
> btrfs raid 6 data+metadata filesystem.
>
> I copied some files to the filesystem, ripped out one USB drive and ruined
> it with dd if=/dev/random to
Hi,
Thanks for the reply. Looks like I will have to use some newer distro.
Debian Jessie rescue CD does not seem to have this. Anyway, I will play
with this.
Thank you.
Jiri
On 28/12/2015 1:45 AM, Hugo Mills wrote:
On Mon, Dec 28, 2015 at 01:31:26AM +1100, Jiri Kanicky wrote:
Hi,
I cloned
Hi,
I have a "toy-array" of 6x USB drives hooked up to a hub where I made a
btrfs raid 6 data+metadata filesystem.
I copied some files to the filesystem, ripped out one USB drive and
ruined it with dd if=/dev/random to various locations on the drive. Put the
USB drive back and the filesystem
On Sun, Dec 27, 2015 at 6:09 PM, Christoph Anton Mitterer
wrote:
> On Sun, 2015-12-27 at 17:58 -0700, Chris Murphy wrote:
>> I don't see a good use case for scrubbing a degraded array. First
>> make
>> the array healthy, then scrub.
> As I've said, I agree basically... but
Hey.
Just noted, mine says this:
>Start a scrub on all devices of the filesystem identified by <path>
>or on a single <device>. If a scrub is already running, the new one
>fails.
still not the text you quoted,... but there it is.
Anyway... it still contradicts the main description which implies that
a scrub
On Sun, Dec 27, 2015 at 6:25 PM, Christoph Anton Mitterer
wrote:
> Hey.
>
> Just noted, mine says this:
>>Start a scrub on all devices of the filesystem identified by <path>
>>or on a single <device>. If a scrub is already running, the new one
>>fails.
> still not the text you quoted,...
On Mon, Dec 28, 2015 at 01:50:09AM +0100, Christoph Anton Mitterer wrote:
> On Sun, 2015-12-27 at 07:09 +0000, Duncan wrote:
> > raid1 mode
> I wonder when that reaches my pain threshold... and I submit a patch
> that renames it "notreallyraid1" in all places ;-)
Isn't this an FAQ already?
Duncan wrote:
Waxhead posted on Mon, 28 Dec 2015 00:06:46 +0100 as excerpted:
btrfs scrub status /mnt
scrub status for 2832346e-0720-499f-8239-355534e5721b
        scrub started at Sun Mar 29 23:21:04 2015 and finished after 00:01:04
        total bytes scrubbed: 1.97GiB with 14549 errors
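For context, output like the above comes from a sequence along these lines (a sketch; /mnt stands for the filesystem's actual mountpoint):

```shell
btrfs scrub start /mnt    # scrub all devices of the mounted filesystem
btrfs scrub status /mnt   # poll until it reports "finished"
# detected checksum/read errors are also reported in the kernel log:
dmesg | grep -i scrub
```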
On Mon, 2015-12-28 at 02:51 +0000, Duncan wrote:
> 1) Btrfs very specifically and deliberately uses *lowercase* raidN
> in part to make that distinction, as the btrfs variants are chunk-
> level (and designed so that at some point in the future they can be
> subvolume and/or file level), not
On Sun, 2015-12-27 at 18:23 -0700, Chris Murphy wrote:
> I'd want scrub to immediately fail in a degraded case, because the
> higher workload added by the scrub itself could cause additional
> device failures sooner. And that would negatively impact the ability
> to get the array healthy again
On Sun, Dec 27, 2015 at 6:21 PM, Christoph Anton Mitterer
wrote:
> On Sun, 2015-12-27 at 07:22 +0000, Duncan wrote:
>> I'd call that NOTABUG. As the btrfs-scrub manpage suggests:
>>
>> * When you point scrub at a mountpoint, it scrubs all devices
>> composing
>> that
On Sun, Dec 27, 2015 at 7:04 PM, Waxhead wrote:
> Since all drives register and since I can even mount the filesystem.
OK so you've umounted the file system, reconnected all devices,
mounted the file system normally, and there are no problems reported
in dmesg?
If so, yes I
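That verification workflow could be sketched roughly as follows (assuming /mnt as the mountpoint; paths are placeholders for the poster's setup):

```shell
umount /mnt
# ...physically reconnect all member USB devices here...
btrfs device scan     # re-register all btrfs member devices with the kernel
mount /mnt            # normal mount, i.e. without -o degraded
dmesg | tail -n 100   # look for btrfs read/csum/IO errors
```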
Jiri Kanicky posted on Mon, 28 Dec 2015 13:13:24 +1100 as excerpted:
> VM with BTRFS filesystem running on XenServer. The VM disk is a VHD
> stored on NFS storage. NFS storage ran out of space, and I found the
> BTRFS in RO mode. I could not remount it as RW after increasing the
> storage space.
On Mon, 2015-12-28 at 03:30 +0000, Duncan wrote:
> So how is it not the text I quoted?
Uhm... I just thought you meant that:
> * When you point scrub at a mountpoint, it scrubs all devices
> composing
> that filesystem.
to be the quote which I couldn't find after a quick cross-reading...
Sorry
On Mon, 2015-12-28 at 01:58 +0000, Hugo Mills wrote:
> Isn't this an FAQ already? There is already a patch to rename the
> RAID modes. It's been sitting in the progs patch queue for about 2
> years, because none of the senior devs has acked it yet (since it's a
> big user-visible change).
Christoph Anton Mitterer posted on Mon, 28 Dec 2015 02:21:28 +0100 as
excerpted:
> On Sun, 2015-12-27 at 07:22 +0000, Duncan wrote:
>> I'd call that NOTABUG. As the btrfs-scrub manpage suggests:
>>
>> * When you point scrub at a mountpoint, it scrubs all devices composing
>> that filesystem.
>
On Mon, Dec 28, 2015 at 12:01:39AM +0100, Christoph Anton Mitterer wrote:
> On Mon, 2015-12-28 at 02:27 +1100, Jiri Kanicky wrote:
> > Thanks for the reply. Looks like I will have to use some newer
> > distro.
> As it was already said... btrfs may even corrupt your filesystem if
> colliding UUIDs
On Sun, 2015-12-27 at 07:09 +0000, Duncan wrote:
> raid1 mode
I wonder when that reaches my pain threshold... and I submit a patch
that renames it "notreallyraid1" in all places ;-)
Cheers,
Chris.
On Sun, 2015-12-27 at 17:58 -0700, Chris Murphy wrote:
> I don't see a good use case for scrubbing a degraded array. First
> make
> the array healthy, then scrub.
As I've said, I agree basically... but *if* scrubbing a degraded fs
leads to even more errors (apart from the fact that you may lose
Contents of btrfs-debug-tree are below. item 293 (which I'm assuming
is what "slot=292" refers to with a start-of-index difference) shows:
item 292 key (EXTENT_CSUM EXTENT_CSUM 250718826496) itemoff 10231 itemsize 4
        extent csum item
item 293 key (18446744073709551350 EXTENT_CSUM 250718830592)
Christoph Anton Mitterer posted on Mon, 28 Dec 2015 01:50:09 +0100 as
excerpted:
> On Sun, 2015-12-27 at 07:09 +0000, Duncan wrote:
>> raid1 mode
> I wonder when that reaches my pain threshold... and I submit a patch
> that renames it "notreallyraid1" in all places ;-)
I've seen two responses
On Sun, 2015-12-27 at 11:29 -0700, Chris Murphy wrote:
> then the scrub request is effectively a
> scrub for a volume with a missing drive which you probably wouldn't
> ever do, you'd first replace the missing device.
While that's probably the normal work flow,... it should still work the
other
Have gotten about 300 (mostly duplicate) BTRFS errors in the last
hours. No signs of disk problem. Non-SSD SATA. Was able to dd the
drive into an image file on another drive without errors. Smartctl
reports no issues. It is a single disk though.
Been running btrfs fine since 7/9/15,
On Sun, 2015-12-27 at 07:22 +0000, Duncan wrote:
> I'd call that NOTABUG. As the btrfs-scrub manpage suggests:
>
> * When you point scrub at a mountpoint, it scrubs all devices
> composing
> that filesystem.
Uhm... mine doesn't contain this... neither do those of the master or
devel branches
Waxhead posted on Sun, 27 Dec 2015 14:59:18 +0100 as excerpted:
> I have a "toy-array" of 6x USB drives hooked up to a hub where I made a
> btrfs raid 6 data+metadata filesystem.
Just noting as an aside comment to the main thread...
While doing this with a "toy-array" for experimental purposes
On Sun, Dec 27, 2015 at 08:03:16PM -0500, james harvey wrote:
> Have gotten about 300 (mostly duplicate) BTRFS errors in the last
> hours. No signs of disk problem. Non-SSD SATA. Was able to dd the
> drive into an image file on another drive without errors. Smartctl
> reports no issues. It is
Chris Murphy wrote:
On Sun, Dec 27, 2015 at 6:59 AM, Waxhead wrote:
Hi,
I have a "toy-array" of 6x USB drives hooked up to a hub where I made a
btrfs raid 6 data+metadata filesystem.
I copied some files to the filesystem, ripped out one USB drive and ruined
it dd
Hugo Mills posted on Sun, 27 Dec 2015 23:13:03 +0000 as excerpted:
> On Mon, Dec 28, 2015 at 12:01:39AM +0100, Christoph Anton Mitterer
> wrote:
>> On Mon, 2015-12-28 at 02:27 +1100, Jiri Kanicky wrote:
>> > Thanks for the reply. Looks like I will have to use some newer distro.
>> As it was
On Sun, Dec 27, 2015 at 5:39 PM, Christoph Anton Mitterer
wrote:
> On Sun, 2015-12-27 at 11:29 -0700, Chris Murphy wrote:
>> then the scrub request is effectively a
>> scrub for a volume with a missing drive which you probably wouldn't
>> ever do, you'd first replace the
Christoph Anton Mitterer posted on Mon, 28 Dec 2015 02:31:16 +0100 as
excerpted:
> On Sun, 2015-12-27 at 18:23 -0700, Chris Murphy wrote:
>> I'd want scrub to immediately fail in a degraded case, because the
>> higher workload added by the scrub itself could cause additional device
>> failures
Hi,
Thanks for the detailed information. The data were not corrupted, so I
could copy them to a new BTRFS partition.
Here is what happened exactly.
VM with BTRFS filesystem running on XenServer. The VM disk is a VHD
stored on NFS storage. NFS storage ran out of space, and I found the
BTRFS
On Mon, 2015-12-28 at 02:27 +1100, Jiri Kanicky wrote:
> Thanks for the reply. Looks like I will have to use some newer
> distro.
As it was already said... btrfs may even corrupt your filesystem if
colliding UUIDs are "seen".
At least to me it's currently unclear what "seen" exactly means...
Waxhead posted on Mon, 28 Dec 2015 03:04:33 +0100 as excerpted:
> Duncan wrote:
>> Waxhead posted on Mon, 28 Dec 2015 00:06:46 +0100 as excerpted:
>>
>>> btrfs scrub status /mnt
>>> scrub status for 2832346e-0720-499f-8239-355534e5721b
>>> scrub started at Sun Mar 29 23:21:04 2015
>>>
Christoph Anton Mitterer posted on Mon, 28 Dec 2015 04:03:05 +0100 as
excerpted:
> On Mon, 2015-12-28 at 02:51 +0000, Duncan wrote:
>> 1) Btrfs very specifically and deliberately uses *lowercase* raidN in
>> part to make that distinction, as the btrfs variants are chunk-level
>> (and designed