> Traditional hard drives usually do this too these days (they've been
> under-provisioned since before SSDs existed), which is part of why older
> disks tend to be noisier and slower (the reserved space is usually at the far
> inside or outside of the platter, so using sectors from there to
(not hardcore enterprise drives, just
good consumer ones) and leave the rest to the drive, plus use an OS that gives
you TRIM, and you should be golden
> On 15 May 2017, at 00:01, Imran Geriskovan <imran.gerisko...@gmail.com> wrote:
>
> On 5/14/17, Tomasz Kusmierz <tom.kusmi...@gmai
All the stuff that Chris wrote holds true; I just wanted to add some flash-specific
information (from my experience of writing low-level code for operating flash).
So with flash, to erase you have to erase a whole large allocation block; usually it
used to be 128kB (plus some CRC data and other bits makes it more than
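A back-of-the-envelope for the erase-block behaviour above; the 128 kB block and 4 kB page sizes are assumptions from memory and vary per chip:

```shell
# Assumed flash geometry (varies by chip; these are illustrative values).
ERASE_BLOCK=$((128 * 1024))   # smallest unit the chip can erase
PAGE=$((4 * 1024))            # smallest unit the chip can program
# Updating one page in place forces the controller to relocate or rewrite
# the whole erase block, so the worst-case write amplification is:
AMP=$((ERASE_BLOCK / PAGE))
echo "worst-case write amplification: ${AMP}x"   # prints 32x
```

Which is exactly why TRIM helps: it marks pages as dead so the controller can skip copying them when it recycles an erase block.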
I’ve glazed over on “Not only that …” … can you make youtube video of that :
> On 28 Mar 2017, at 16:06, Peter Grandi wrote:
>
>> I glazed over at “This is going to be long” … :)
>>> [ ... ]
>
> Not only that, you also top-posted while quoting it pointlessly
> in
I glazed over at “This is going to be long” … :)
> On 28 Mar 2017, at 15:43, Peter Grandi wrote:
>
> This is going to be long because I am writing something detailed
> hoping pointlessly that someone in the future will find it by
> searching the list archives while
) in
cleanup_transaction:1854: errno=-2 No such entry
On 21 February 2017 at 22:18, Tomasz Kusmierz <tom.kusmi...@gmail.com> wrote:
> Anyone ?
>
> On 18 Feb 2017, at 16:44, Tomasz Kusmierz <tom.kusmi...@gmail.com> wrote:
>
> So Qu,
>
> currently my situation is th
Anyone ?
On 18 Feb 2017, at 16:44, Tomasz Kusmierz <tom.kusmi...@gmail.com> wrote:
So Qu,
currently my situation is that:
I've tried btrfs check --repair, and it did repair some stuff in
qgroups ... then I tried to mount it and, surprise surprise, the system
locked up in 20 seconds.
"OK" ... another attempt to
mount /dev/sdc /mnt2/main_pool and again after 20 seconds the system locks
up hard.
There is nothing in messages, nothing in dmesg ... I think the system
locks up so hard that the root filesystem never gets the chance to push those
logs to disk.
On 16 February
attempted a full FS balance that caused this FS to be
unmountable. Is there any other debug you would require before I
proceed (I’ve got a lot i
On 16 Feb 2017, at 01:26, Qu Wenruo <quwen...@cn.fujitsu.com> wrote:
At 02/15/2017 10:11 PM, Tomasz Kusmierz wrote:
So guys, any help here ? I’m
So guys, any help here ? I’m kinda stuck now with system just idling and doing
nothing while I wait for some feedback ...
> On 14 Feb 2017, at 19:38, Tomasz Kusmierz <tom.kusmi...@gmail.com> wrote:
>
> [root@server ~]# btrfs-show-super -af /dev/sdc
> superblock: bytenr=6553
At 02/14/2017 08:23 AM, Tomasz Kusmierz wrote:
>>
>> Forgot to mention:
>>
>> btrfs inspect-internal dump-super -af /dev/sdc
>
>
> Your btrfs-progs is somewhat old, which doesn't integrate dump super into
> inspect-internal.
>
> In that case, you can use btr
min-dev-size [options]
Get the minimum size the device can be shrunk to. The
query various internal information
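A usage sketch for the man-page fragment above, assuming a btrfs filesystem mounted at /mnt and root privileges (the mount point and devid are examples, not from the thread):

```
# Ask how small the device could be shrunk:
btrfs inspect-internal min-dev-size /mnt
# Then shrink to a value at or above the reported minimum, e.g. for devid 1:
btrfs filesystem resize 1:100g /mnt
```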
On 13 February 2017 at 14:58, Tomasz Kusmierz <tom.kusmi...@gmail.com> wrote:
> Problem is to send a larger log into this mailing list :/
>
> Anyway: uname -
169 0)
itemoff 15887 itemsize 33
Feb 10 00:29:11 server kernel: 		extent refs 1 gen 142940 flags 258
Feb 10 00:29:11 server kernel: 		shared block backref parent 5224641380352
Feb 10 00:29:11 server kernel: 		item 12 key (12288258162688 169 0)
itemoff 15854 itemsize 33
Feb 10 00:29:11
Hi all,
So my main storage filesystem got some sort of weird corruption (as far
as I can gather). Everything seems to work OK, but when I try to create a
snapshot or run balance (no filters) it will get remounted read-only.
The fun part is that balance seems to be running even on the read-only FS, and
I
That was a long-winded way of saying "there is no mechanism in btrfs to tell you
exactly which device is missing", but thanks anyway.
> On 12 Jan 2017, at 12:47, Austin S. Hemmelgarn <ahferro...@gmail.com> wrote:
>
> On 2017-01-11 15:37, Tomasz Kusmierz wrote:
>> I would li
I would like to use this thread to ask a few questions:
If we have 2 devices dying on us and we run RAID6 - theoretically this will
still run (despite our current problems). Now let's say that we booted up a RAID6
of 10 disks and 2 of them die, but the operator does NOT know the dev IDs of the
disk
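Since btrfs won't name the dead devices for you, a crude workaround is to record the devids at creation time and diff them against what `btrfs filesystem show` still lists. A sketch with made-up devid lists:

```shell
# Example devid sets; on a real box, parse 'btrfs filesystem show' output.
expected="1 2 3 4 5 6 7 8 9 10"   # devids noted when the array was built
present="1 2 4 5 6 7 8 10"        # devids the kernel still sees
missing=""
for id in $expected; do
  case " $present " in
    *" $id "*) ;;                              # still attached
    *) missing="$missing $id"
       echo "devid $id missing" ;;
  esac
done
```

With these example values it reports devids 3 and 9 as missing.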
> On 10 Jan 2017, at 21:07, Vinko Magecic
> wrote:
>
> Hello,
>
> I set up a raid 1 with two btrfs devices and came across some situations in
> my testing that I can't get a straight answer on.
> 1) When replacing a volume, do I still need to `umount /path`
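On the replace question: as far as I know you do not need to unmount; `btrfs replace` runs on a mounted filesystem. A sketch with example device names and mount point (not from the thread):

```
# Replace a still-present device online (add -f if the target has stale data):
btrfs replace start /dev/sdb /dev/sdc /mnt
# For a dead device, give its devid instead of the path:
#   btrfs replace start 2 /dev/sdc /mnt
btrfs replace status /mnt
```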
Chris,
the "btrfs-show-super -fa" gives me nothing useful to work with.
the "btrfs-find-root -a " is actually something that I was
already using (see original post), but the list of roots given had a
rather LARGE hole of 200 generations that is located between right
after I've had everything
nerations of FS (over 200 generations missing) and the most recent
generation that can actually be touched by btrfs restore is over a
month old.
How do I overcome that?
On 11 December 2016 at 19:00, Chris Murphy <li...@colorremedies.com> wrote:
> On Sun, Dec 11, 2016 at 10:40 AM, Tomasz Kusm
Hi,
So, I've found myself in a pickle after following these steps:
1. trying to migrate an array to a different system, it became apparent
that importing the array there was not possible because I had
a very large amount of snapshots (every 15 minutes during office
hours, amounting to
Also, I'll quote you on the throwing-under-the-bus thing :) (I actually
like that justification)
On 1 December 2016 at 17:28, Chris Murphy <li...@colorremedies.com> wrote:
> On Wed, Nov 30, 2016 at 1:29 PM, Tomasz Kusmierz <tom.kusmi...@gmail.com>
> wrote:
>
>> Please, I beg you add
On 30 November 2016 at 19:09, Chris Murphy wrote:
> On Wed, Nov 30, 2016 at 7:37 AM, Austin S. Hemmelgarn
> wrote:
>
>> The stability info could be improved, but _absolutely none_ of the things
>> mentioned as issues with raid1 are specific to
I think you just described all the benefits of btrfs in that type of
configuration. Unfortunately, after btrfs RAID 5 & 6 was marked as
OK it got marked as "it will eat your data" (and there is a ton of
people in random places popping up with RAID 5 & 6 that just killed
their data)
On 11
On 10 October 2016 at 02:01, ronnie sahlberg wrote:
> (without html this time.)
>
> NAS drives are more expensive but also more durable than normal consumer
> drives, but not as durable as enterprise drives.
> They are meant for near-continuous use, compared to
And what exactly are NAS drives ?
Are you talking marketing by any chance ? Please, tell me you got the pun.
On 10 October 2016 at 00:12, Charles Zeitler wrote:
> Is there any advantage to using NAS drives
> under RAID levels, as opposed to regular
> 'desktop' drives for
This is predominantly for maintainers:
I've noticed that there is a lot of code in btrfs ... and after a few
glimpses I've noticed that there are occurrences which beg for some
refactoring to make it less of a pain to maintain.
I'm speaking of occurrences where:
- within a function there are
Sorry for the late reply; there was a lot of traffic in this thread, so:
1. I do apologize, but I got the wrong end of the stick. I was
convinced that btrfs does cause corruption on your disk because some
of the links that you had in the original post were pointing to topics
with corruptions going on,
Just please don't take this as nit-picking or anything:
> It's a Seagate Expansion Desktop 5TB (USB3). It is probably a ST5000DM000.
this is a TGMR, not an SMR, disk:
http://www.seagate.com/www-content/product-content/desktop-hdd-fam/en-us/docs/100743772a.pdf
So it still conforms to a standard recording strategy
No answer here, but mate, if you are involved in anything that will provide a
more automated backup tool for btrfs, you've got a lot of silent people rooting
for you.
> On 16 Jul 2016, at 00:21, Eric Wheeler wrote:
>
> Hello all,
>
> We do btrfs subvolume snapshots
Though I'm not a hardcore storage-system professional:
What disk are you using? There are two types:
1. SMR managed by the device firmware. BTRFS sees that as a normal block device …
the problems you get are not related to BTRFS itself …
2. SMR managed by the host system; BTRFS still does see this as a
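If in doubt which kind you have, recent kernels expose the zoning model via sysfs; this sketch assumes the block-layer `zoned` attribute is present (values `none`, `host-aware`, `host-managed`; drive-managed SMR reports `none`, i.e. it looks like a normal disk, as described above). `sdX` is a placeholder:

```
# Per-disk zoning model as the kernel sees it:
cat /sys/block/sdX/queue/zoned
```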
>
> Well, I was able to run memtest on the system last night, that passed with
> flying colors, so I'm now leaning toward the problem being in the sas card.
> But I'll have to run some more tests.
>
Seriously, use the "stres.sh" for a couple of days. When I was running
memtest it was running
> On 7 Jul 2016, at 02:46, Chris Murphy wrote:
>
Chaps, I didn't want this to spring up as an argument about btrfs performance,
BUT
you are throwing around a lot of useful data; maybe divert some of it into the wiki?
You know, us normal people might find it useful for
> On 7 Jul 2016, at 00:22, Kai Krakow <hurikha...@gmail.com> wrote:
>
> Am Wed, 6 Jul 2016 13:20:15 +0100
> schrieb Tomasz Kusmierz <tom.kusmi...@gmail.com>:
>
>> When I think of it, I did move this folder first when filesystem was
>> RAID 1 (or not eve
> On 6 Jul 2016, at 23:14, Corey Coughlin wrote:
>
> Hi all,
> Hoping you all can help, have a strange problem, think I know what's going
> on, but could use some verification. I set up a raid1 type btrfs filesystem
> on an Ubuntu 16.04 system, here's what it
> On 6 Jul 2016, at 22:41, Henk Slager <eye...@gmail.com> wrote:
>
> On Wed, Jul 6, 2016 at 2:20 PM, Tomasz Kusmierz <tom.kusmi...@gmail.com>
> wrote:
>>
>>> On 6 Jul 2016, at 02:25, Henk Slager <eye...@gmail.com> wrote:
>>>
>>
> On 6 Jul 2016, at 02:25, Henk Slager <eye...@gmail.com> wrote:
>
> On Wed, Jul 6, 2016 at 2:32 AM, Tomasz Kusmierz <tom.kusmi...@gmail.com>
> wrote:
>>
>> On 6 Jul 2016, at 00:30, Henk Slager <eye...@gmail.com> wrote:
>>
>> On Mo
On 6 Jul 2016, at 00:30, Henk Slager <eye...@gmail.com
<mailto:eye...@gmail.com>> wrote:
>
> On Mon, Jul 4, 2016 at 11:28 PM, Tomasz Kusmierz <tom.kusmi...@gmail.com
> <mailto:tom.kusmi...@gmail.com>> wrote:
>> I did consider that, but:
>> - some
, at 22:13, Henk Slager <eye...@gmail.com> wrote:
>
> On Sun, Jul 3, 2016 at 1:36 AM, Tomasz Kusmierz <tom.kusmi...@gmail.com>
> wrote:
>> Hi,
>>
>> My setup is that I use one file system for / and /home (on SSD) and a
>> larger raid 10 for /mn
Hi,
My setup is that I use one file system for / and /home (on SSD) and a
larger raid 10 for /mnt/share (6 x 2TB).
Today I've discovered that 14 files that are supposed to be over
2GB are in fact just 4096 bytes. I've checked the content of those 4KB
and it seems that it does contain
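To enumerate the damaged files, something like this should work; the demo below builds throwaway files in /tmp so the `find` predicate can be seen matching (on the real array you would point it at /mnt/share instead):

```shell
# Build a demo dir with one 4096-byte file and one healthy one.
mkdir -p /tmp/fsdemo && cd /tmp/fsdemo
head -c 4096 /dev/zero > truncated.bin   # stand-in for a damaged file
head -c 8192 /dev/zero > healthy.bin
# -size 4096c matches files of exactly 4096 bytes:
find . -type f -size 4096c
```

Only truncated.bin matches; the healthy file is skipped.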
Hi all !
So it's been some time with btrfs, and so far I was very pleased, but
since I upgraded ubuntu from 13.10 to 14.04 problems started to
occur (YES, I know this might be unrelated).
So in the past I've had problems with btrfs which turned out to be a
problem caused by static from
Hi,
Long story short:
I've got a btrfs raid10 six-disk array plus 2 other disks just having
normally set up btrfs filesystems.
Everything was running happily under linux 3.5 and 3.7.
3.5 was a stock ubuntu kernel, 3.7 was slightly less stock ubuntu kernel.
Now I've upgraded my box to 3.8 and none
Hi,
Question is pretty simple:
How do I change the node size and leaf size on a previously created partition?
Now, I know what most people will say: you should've been smarter while
typing mkfs.btrfs. Well, I'm intending to convert an ext4 partition in
place, but there seems to be no option for leaf and
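For what it's worth: as far as I know the node size (leaf size was folded into it in later btrfs-progs) is fixed at mkfs time, so the options are recreating the filesystem or checking whether your btrfs-convert version accepts a nodesize option. A sketch with a hypothetical device name:

```
# Node/leaf size is baked in at creation; there is no in-place change:
mkfs.btrfs -n 32k /dev/sdX
```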
On 16/01/13 09:21, Bernd Schubert wrote:
On 01/16/2013 12:32 AM, Tom Kusmierz wrote:
p.s. bizarre that when I fill an ext4 partition with test data everything
checks out OK (CRC over all files), but with Chris's tool it gets
corrupted - for both the crappy Adaptec PCIe controller and for the motherboard
On 05/02/13 12:49, Chris Mason wrote:
On Tue, Feb 05, 2013 at 03:16:34AM -0700, Tomasz Kusmierz wrote:
On 16/01/13 09:21, Bernd Schubert wrote:
On 01/16/2013 12:32 AM, Tom Kusmierz wrote:
p.s. bizarre that when I fill an ext4 partition with test data everything
checks out OK (CRC over all files
On 05/02/13 13:46, Roman Mamedov wrote:
On Tue, 05 Feb 2013 10:16:34 +
Tomasz Kusmierz tom.kusmi...@gmail.com wrote:
that I was using one of those fantastic PCI 4-port ethernet cards and the
printer was attached directly to it - after moving it and everything else to a
switch, all problems and issues have
Hi,
Since I had some free time over Christmas, I decided to conduct a few
tests on btrfs to see how it will cope with real-life storage for
normal everyday users, and I've found that the filesystem will always mess up
your files that are larger than 10GB.
Long story:
I've used my set of data that
On 14/01/13 11:25, Roman Mamedov wrote:
Hello,
On Mon, 14 Jan 2013 11:17:17 +
Tomasz Kusmierz tom.kusmi...@gmail.com wrote:
this point I was a bit spooked up that my controllers are failing or
Which controller manufacturer/model?
Well, this is a home server (which I prefer to tinker
On 14/01/13 14:59, Chris Mason wrote:
On Mon, Jan 14, 2013 at 04:09:47AM -0700, Tomasz Kusmierz wrote:
Hi,
Since I had some free time over Christmas, I decided to conduct a few
tests on btrfs to see how it will cope with real-life storage for
normal everyday users and I've found that the filesystem
On 14/01/13 15:57, Chris Mason wrote:
On Mon, Jan 14, 2013 at 08:22:36AM -0700, Tomasz Kusmierz wrote:
On 14/01/13 14:59, Chris Mason wrote:
On Mon, Jan 14, 2013 at 04:09:47AM -0700, Tomasz Kusmierz wrote:
Hi,
Since I had some free time over Christmas, I decided to conduct a few
tests over
On 14/01/13 16:20, Roman Mamedov wrote:
On Mon, 14 Jan 2013 15:22:36 +
Tomasz Kusmierz tom.kusmi...@gmail.com wrote:
1) create a single-drive default btrfs volume on a single partition -
fill with test data - scrub - admire the errors.
Did you try ruling out btrfs as the cause of the problem