Hi, I use btrfs as storage for the root and data volumes of Elasticsearch
servers, and I am hitting a strange bug where the servers hang.
I only get this stack trace after starting Elasticsearch.
Debian 8 x64
Linux msq-k1-srv-ids-02 4.8.0-1-amd64 #1 SMP Debian 4.8.5-1
(2016-10-28) x86_64 GNU/Linux
I have also hit it on Debian
On Fri, Nov 25, 2016 at 03:40:36PM +1100, Gareth Pye wrote:
> On Fri, Nov 25, 2016 at 3:31 PM, Zygo Blaxell
> wrote:
> >
> > This risk mitigation measure does rely on admins taking a machine in this
> > state down immediately, and also somehow knowing not to start
On Fri, Nov 25, 2016 at 3:31 PM, Zygo Blaxell
wrote:
>
> This risk mitigation measure does rely on admins taking a machine in this
> state down immediately, and also somehow knowing not to start a scrub
> while their RAM is failing...which is kind of an annoying
On Tue, Nov 22, 2016 at 07:02:13PM +0100, Goffredo Baroncelli wrote:
> On 2016-11-22 01:28, Qu Wenruo wrote:
> >
> >
> > At 11/22/2016 02:48 AM, Goffredo Baroncelli wrote:
> >> Hi Qu,
> >>
> >> I tested this successfully for RAID5 when doing a scrub (i.e.: I mount a
> >> corrupted disk, then I
On Wed, Nov 23, 2016 at 05:26:18PM -0800, Darrick J. Wong wrote:
[...]
> Keep in mind that the number of bytes deduped is returned to userspace
> via file_dedupe_range.info[x].bytes_deduped, so a properly functioning
> userspace program actually /can/ detect that its 128MB request got cut
> down
In the following situation, scrub will calculate the wrong parity and
overwrite the correct one:

RAID5 full stripe:

Before:
| Dev 1         | Dev 2         | Dev 3         |
| Data stripe 1 | Data stripe 2 | Parity Stripe |
--- 0
| 0x (Bad)      |
On Fri, Nov 04, 2016 at 03:41:49PM +0100, Saint Germain wrote:
> On Thu, 3 Nov 2016 01:17:07 -0400, Zygo Blaxell
> wrote:
> > [...]
> > The quality of the result therefore depends on the amount of effort
> > put into measuring it. If you look for the first
On Thu, Nov 24, 2016 at 03:00:26PM +0100, Niccolò Belli wrote:
> Hi,
> I use snapper, so I have plenty of snapshots in my btrfs partition and most
> of my data is already deduplicated because of that.
> Since I ran offline defragmentation once a long time ago (because I didn't
> know extents get
From: Filipe Manana
The tests mount the second device in the device pool but never unmount
it, causing the next test to fail.
Example:
$ cat local.config
export TEST_DEV=/dev/sdb
export TEST_DIR=/home/fdmanana/btrfs-tests/dev
export
From: Filipe Manana
We were setting the qgroup_rescan_running flag to true only after the
rescan worker had started (which is a task run by a workqueue). So if a
user-space task starts a rescan and immediately afterwards asks to wait
for the rescan worker to finish, this second call might
Hi,
I use snapper, so I have plenty of snapshots in my btrfs partition and most
of my data is already deduplicated because of that.
Since I ran offline defragmentation once a long time ago (because I didn't
know extents get unshared), I wanted to run offline deduplication to free up
a couple of GBs.
On Wed, Nov 23, 2016 at 9:22 PM, Liu Bo wrote:
> Hi,
>
> On Wed, Nov 23, 2016 at 06:21:35PM +0100, Stefan Priebe - Profihost AG wrote:
>> Hi,
>>
>> sorry, the last mail was sent from the wrong box.
>>
>> Am 04.11.2016 um 20:20 schrieb Liu Bo:
>> > If we have
>> >
>> >
On Wed, Nov 23, 2016 at 01:49:53PM -0500, Mike Gilbert wrote:
> On Wed, Nov 23, 2016 at 1:43 PM, Mike Gilbert wrote:
> > On Wed, Nov 23, 2016 at 4:45 AM, David Sterba wrote:
> >> On Tue, Nov 22, 2016 at 01:49:15PM -0500, Mike Gilbert wrote:
> >>> Hi,
>
unsubscribe
Kind regards,
Jürgen Sauer
--
Jürgen Sauer - automatiX GmbH,
+49-4209-4699, juergen.sa...@automatix.de
Managing director: Jürgen Sauer,
Court of jurisdiction: Amtsgericht Walsrode • HRB 120986
VAT ID: DE191468481 • Tax no.: 36/211/08000
GPG public key for signature verification:
On Wed, Nov 23, 2016 at 9:58 PM, Liu Bo wrote:
> This can only happen with CONFIG_BTRFS_FS_CHECK_INTEGRITY=y.
>
> Commit 1ba98d0 ("Btrfs: detect corruption when non-root leaf has zero item")
> assumes that a leaf is its root when leaf->bytenr == btrfs_root_bytenr(root),
>