Hi,
BTRFS crashed because the system ran out of memory...
I see these entries in your logs:
Jan 4 17:11:06 ceph1 kernel: [756636.535661] kworker/0:2: page allocation failure: order:1, mode:0x204020
Jan 4 17:11:06 ceph1 kernel: [756636.536112] BTRFS: error (device sdb1) in
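For readers hitting the same trace: "order:1" in that allocation failure means the kernel needed 2^1 physically contiguous pages, so with the usual 4 KiB page size it could not find a contiguous 8 KiB block. That can happen under memory fragmentation even when the headline free figure looks healthy. A quick sketch of the arithmetic (the 4 KiB page size is an assumption; check `getconf PAGESIZE` on your box):

```python
# "order:N" in a kernel page allocation failure = a request for
# 2**N physically contiguous pages.
PAGE_SIZE = 4096  # assumed; the common x86 page size

def alloc_bytes(order: int) -> int:
    """Bytes the kernel tried to allocate for an order-N request."""
    return (2 ** order) * PAGE_SIZE

print(alloc_bytes(1))  # order:1 -> 8192 (two contiguous pages)
```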
On Sun, Jan 4, 2015 at 8:10 AM, Lionel Bouton lionel+c...@bouton.name wrote:
On 01/04/15 16:25, Jiri Kanicky wrote:
Hi.
I have been experiencing the same issues on both nodes over the past 2
days (never both nodes at the same time). It seems the issue occurs
after some time when copying a
On 01/06/15 02:36, Gregory Farnum wrote:
[...]
filestore btrfs snap controls whether to use btrfs snapshots to keep
the journal and backing store in check. With that option disabled it
handles things in basically the same way we do with xfs.
filestore btrfs clone range I believe controls how
I'm afraid I don't know what would happen if you change those options.
Hopefully we've set it up so things continue to work, but we definitely
don't test it.
-Greg
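For reference, the options Greg mentions live in the [osd] section of ceph.conf. A sketch of what disabling them would look like (the spelling is assumed from the option names above; the values are illustrative, not a recommendation):

```
[osd]
filestore btrfs snap = false
filestore btrfs clone range = false
```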
On Tue, Jan 6, 2015 at 8:22 AM Lionel Bouton lionel+c...@bouton.name wrote:
On 01/06/15 02:36, Gregory Farnum wrote:
[...]
On 01/06/15 18:26, Gregory Farnum wrote:
I'm afraid I don't know what would happen if you change those options.
Hopefully we've set it up so things continue to work, but we
definitely don't test it.
Thanks. That's not a problem: when the opportunity arises I'll just adapt
my tests accordingly.
Hi,
My OSDs with btrfs are down on one node. I found the cluster in this state:
cephadmin@ceph1:~$ ceph osd tree
# id    weight  type name       up/down reweight
-1      10.88   root default
-2      5.44            host ceph1
0       2.72                    osd.0   down    0
1       2.72
Hi,
Here is my memory output. I use HP Microservers with 2GB RAM. Swap is
500MB on SSD disk.
cephadmin@ceph1:~$ free
             total       used       free     shared    buffers     cached
Mem:       1885720    1817860      67860          0         32     694552
-/+ buffers/cache:    1123276
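To sanity-check that output: free(1) counts buffers and page cache as "used", and the "-/+ buffers/cache" row subtracts them back out. The numbers above are internally consistent (all values in KiB):

```python
# Figures copied from the `free` output above (KiB).
total, used, free_kib = 1885720, 1817860, 67860
buffers, cached = 32, 694552

assert total == used + free_kib  # headline row adds up

# "-/+ buffers/cache" used = used minus reclaimable buffers and cache
print(used - buffers - cached)  # 1123276, matching the last row
```

So only about 1.1 GB is genuinely in use on the 2 GB box; combined with the order:1 failures above, that suggests the problem is finding contiguous memory under pressure rather than simple exhaustion.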
Hi.
Correction. My SWAP is 3GB on SSD disk. I don't use the nodes for client
stuff.
Thx Jiri
On 5/01/2015 01:21, Jiri Kanicky wrote:
Hi,
Here is my memory output. I use HP Microservers with 2GB RAM. Swap is
500MB on SSD disk.
cephadmin@ceph1:~$ free
total used
On 2015-01-04 08:21, Jiri Kanicky wrote:
More googling took me to the following post:
http://lists.ceph.com/pipermail/ceph-users-ceph.com/2014-June/040279.html
Linux 3.14.1 is affected by serious Btrfs regression(s) that were fixed
in later releases.
Unfortunately even the latest Linux can
Hi.
I have been experiencing the same issues on both nodes over the past 2 days
(never both nodes at the same time). It seems the issue occurs after
some time when copying a large number of files to CephFS on my client
node (I don't use RBD yet).
These are new HP servers and the memory does
Hi.
Do you know how to tell whether the option "filestore btrfs snap = false"
is set?
Thx Jiri
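One way to check is to ask the running daemon itself. A sketch, assuming you run this on the OSD node with access to the daemon's admin socket (the osd id will differ per host):

```
# Dump the OSD's runtime configuration and filter for the btrfs options:
ceph daemon osd.0 config show | grep btrfs
```

This shows the values the daemon is actually using, including any defaults, rather than just what is written in ceph.conf.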
On 5/01/2015 02:25, Jiri Kanicky wrote:
Hi.
I have been experiencing the same issues on both nodes over the past 2
days (never both nodes at the same time). It seems the issue occurs
after some time
On 01/04/15 16:25, Jiri Kanicky wrote:
Hi.
I have been experiencing the same issues on both nodes over the past 2
days (never both nodes at the same time). It seems the issue occurs
after some time when copying a large number of files to CephFS on my
client node (I don't use RBD yet).
These