Hello Sage Weil,
The patch 85792d0dd6e7: "ceph: cope with out of order (unsafe after
safe) mds reply" from May 13, 2010, leads to the following static
checker warning:
fs/ceph/mds_client.c:2414 handle_reply()
warn: we tested 'head->safe' before and it was 'false'
-- All Branches --
Adam Crume adamcr...@gmail.com
2014-12-01 20:45:58 -0800 wip-doc-rbd-replay
Alfredo Deza ad...@redhat.com
2015-03-23 16:39:48 -0400 wip-11212
2015-03-25 10:10:43 -0400 wip-11065
2015-07-01 08:34:15 -0400 wip-12037
Alfredo Deza
To all of those who may be afflicted by various build failures after the
c++11 patches were merged, don't panic just yet.
If you are using ccache to help your builds, try clearing the whole
cache first.
As of right now, this has fixed the following build errors on trusty for
(at least) two
Sage, Piotr, sorry for taking your time, and thanks for your help.
Memtest showed me red results. Will be digging for bad memory chips.
Thanks again.
Сахинов Константин
tel.: +7 (909) 945-89-42
2015-08-10 16:58 GMT+03:00 Dałek, Piotr piotr.da...@ts.fujitsu.com:
-Original Message-
From:
Currently putting something in a bufferlist involves 3 allocations:
 1. raw buffer (posix_memalign, or new char[])
 2. buffer::raw (this holds the refcount; lifecycle matches the
raw buffer exactly)
 3. bufferlist's STL list node, which embeds buffer::ptr
--- combine buffer and
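A minimal sketch of what folding the raw buffer and buffer::raw into a single
allocation could look like (hypothetical raw_combined type, not Ceph's actual
implementation; the posix_memalign/alignment case is ignored for brevity):

#include <atomic>
#include <cstdlib>
#include <new>

struct raw_combined {
  std::atomic<unsigned> nref{1};   // refcount lives in the same block as the data
  unsigned len = 0;
  char *data = nullptr;            // points into this allocation, right after the header

  static raw_combined *create(unsigned len) {
    // one malloc holds both the refcounted header and the payload
    void *p = ::malloc(sizeof(raw_combined) + len);
    if (!p)
      throw std::bad_alloc();
    raw_combined *r = new (p) raw_combined;
    r->len = len;
    r->data = reinterpret_cast<char *>(p) + sizeof(raw_combined);
    return r;
  }

  void get() { nref.fetch_add(1, std::memory_order_relaxed); }
  void put() {
    if (nref.fetch_sub(1, std::memory_order_acq_rel) == 1) {
      this->~raw_combined();
      ::free(this);
    }
  }
};

The STL list node (item 3) is a separate allocation and is not addressed by
this sketch.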
On Mon, 10 Aug 2015, Константин Сахинов wrote:
Sage, Piotr, sorry for taking your time, and thanks for your help.
Memtest showed me red results. Will be digging for bad memory chips.
Thank you for following up, and relieved to hear we have a likely culprit.
Good luck!
sage
Thanks again.
This is a pretty low-level approach; what I was actually wondering is
whether we can reduce the amount of memory (de)allocations at a higher level, like
improving the message lifecycle logic (from receiving a message to performing the actual
operation and finishing it), so it wouldn't involve so many
We explored a number of these ideas. We have a few branches that might be
picked over.
Having said that, our feeling was that the generality to span the shared and
non-shared cases transparently has a cost in the unmarked case. Other aspects of
the buffer indirection are essential (e.g., Accelio
-Original Message-
From: Sage Weil [mailto:s...@newdream.net]
Sent: Monday, August 10, 2015 9:20 PM
To: Константин Сахинов
On Mon, 10 Aug 2015, Константин Сахинов wrote:
Sage, Piotr, sorry for taking your time, and thanks for your help.
Memtest showed me red results. Will be digging for
On Mon, 10 Aug 2015, Dałek, Piotr wrote:
This is a pretty low-level approach; what I was actually wondering is
whether we can reduce the amount of memory (de)allocations at a higher level,
like improving the message lifecycle logic (from receiving a message to performing
the actual operation and finishing
Hi Yehuda,
On top of the changes for [1], I would propose another change, which exposes
the number of *stuck threads* via the admin socket, so that we can build something
outside of Ceph to check whether all worker threads are stuck and, if so, restart
the service.
We can also assert out if all
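A minimal sketch of the idea, with hypothetical names (this is not Ceph's actual
HeartbeatMap or admin socket code): each worker refreshes a per-thread timestamp
while it makes progress, a reporting hook counts how many threads have missed the
grace period, and an optional assert fires when every worker is stuck so the init
system can restart the daemon.

#include <cassert>
#include <chrono>
#include <cstddef>
#include <mutex>
#include <vector>

using Clock = std::chrono::steady_clock;

class StuckThreadTracker {
  std::mutex lock;
  std::vector<Clock::time_point> last_seen;   // one slot per worker thread
  std::chrono::seconds grace;

public:
  StuckThreadTracker(std::size_t nthreads, std::chrono::seconds g)
    : last_seen(nthreads, Clock::now()), grace(g) {}

  // called by worker i whenever it makes progress
  void heartbeat(std::size_t i) {
    std::lock_guard<std::mutex> l(lock);
    last_seen[i] = Clock::now();
  }

  // called from the reporting path (e.g. an admin socket command handler)
  std::size_t stuck_count() {
    std::lock_guard<std::mutex> l(lock);
    const auto now = Clock::now();
    std::size_t stuck = 0;
    for (const auto &t : last_seen)
      if (now - t > grace)
        ++stuck;
    return stuck;
  }

  // abort when every worker is stuck, so the service gets restarted
  void assert_not_all_stuck() {
    assert(stuck_count() < last_seen.size());
  }
};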
Hi Loic,
Thanks very much, we will give it a try ASAP.
Cheers,
Li Wang
On 2015/8/10 21:09, Loic Dachary wrote:
Hi,
You should be able to follow the instructions at
https://github.com/dachary/teuthology/tree/wip-6502-v3-dragonfly#openstack-backend
on your OpenStack cluster. I expect it
On Mon, 10 Aug 2015, Константин Сахинов wrote:
Posted 2 files
ceph-1-rbd_data.3ef8442ae8944a.0aff
ceph-post-file: 10cddc98-7177-47d8-9a97-4868856f974b
ceph-7-rbd_data.3ef8442ae8944a.0aff
ceph-post-file: f2861c0a-fc9a-4078-b95c-d5ba3cf6057e
By the way, file names
Hi,
You should be able to follow the instructions at
https://github.com/dachary/teuthology/tree/wip-6502-v3-dragonfly#openstack-backend
on your OpenStack cluster. I expect it to be portable across OpenStack
implementations back to Havana. Please let me know if your OpenStack cluster has
a
Hi,
The make check bot is back to work. It will not send any report for another 24h
to 48h to avoid a burst of false negatives. In the meantime you can run it locally
with:
run-make-check.sh
It is the same script that the bot uses and should work from a fresh git clone
of Ceph.
Cheers
On
Uploaded another corrupted piece.
2015-08-10 16:18:40.027726 7f7979697700 -1 log_channel(cluster) log
[ERR] : be_compare_scrubmaps: 3.fd shard 6: soid
f2e832fd/rbd_data.ab7174b0dc51.0249/head//3 data_digest
0x64e94460 != known data_digest 0xaec3bea8 from auth shard 10
#
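For context, a rough sketch of the comparison that scrub is reporting above (not
the actual be_compare_scrubmaps code): each shard records a data_digest for the
object, and any shard whose digest differs from the authoritative shard's digest
is flagged as inconsistent.

#include <cstdint>
#include <map>
#include <vector>

// shard id -> recorded data_digest (a checksum of the object data)
std::vector<int> find_digest_mismatches(const std::map<int, std::uint32_t> &shard_digests,
                                        int auth_shard)
{
  std::vector<int> bad;
  const std::uint32_t auth_digest = shard_digests.at(auth_shard);
  for (const auto &kv : shard_digests) {
    if (kv.first != auth_shard && kv.second != auth_digest)
      bad.push_back(kv.first);   // e.g. shard 6: 0x64e94460 != 0xaec3bea8 from shard 10
  }
  return bad;
}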
On Mon, 10 Aug 2015, Константин Сахинов wrote:
Uploaded another corrupted piece.
2015-08-10 16:18:40.027726 7f7979697700 -1 log_channel(cluster) log
[ERR] : be_compare_scrubmaps: 3.fd shard 6: soid
f2e832fd/rbd_data.ab7174b0dc51.0249/head//3 data_digest
0x64e94460 != known
-Original Message-
From: ceph-devel-ow...@vger.kernel.org [mailto:ceph-devel-
ow...@vger.kernel.org] On Behalf Of Sage Weil
Sent: Monday, August 10, 2015 3:52 PM
To: Константин Сахинов
On Mon, 10 Aug 2015, Константин Сахинов wrote:
Uploaded another corrupted piece.
2015-08-10
I can't see any pattern in the OSD distribution by PGs. It looks like
all 8 OSDs on all 4 nodes have inconsistent PGs (4 other OSDs are
in another root/hosts, ready to become a cache tier).
Uploaded ceph pg dump:
# ceph-post-file ceph-pg-dump
ceph-post-file: 7fcce58a-8cfa-4e5f-aafb-f6b031d1795f