Hi Sage,
Did you get a chance to look at the crash?
Regards
Srikanth
On Wed, Jun 3, 2015 at 1:38 PM, Srikanth Madugundi
srikanth.madugu...@gmail.com wrote:
On Fri, 5 Jun 2015, Srikanth Madugundi wrote:
Hi Sage,
Did you get a chance to look at the crash?
Not yet--I am still focusing on getting wip-temp (and other newstore
prerequisite code) working before turning back to newstore. I'll look at
this once I get back to newstore... hopefully in
Hi Sage,
I saw the crash again here is the output after adding the debug
message from wip-newstore-debuglist
-31 2015-06-03 20:28:18.864496 7fd95976b700 -1
newstore(/var/lib/ceph/osd/ceph-19) start is -1/0//0/0 ... k is
--.7fff..!!!.
Here
I pushed a commit to wip-newstore-debuglist -- can you reproduce the crash
with that branch with 'debug newstore = 20' and send us the log?
(You can just do 'ceph-post-file filename'.)
Thanks!
sage
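For reference, Sage's suggestion above would be applied with a config fragment along these lines (a sketch using the standard Ceph config section and log path; the OSD id 19 is taken from the log earlier in the thread, substitute your own):

```ini
# /etc/ceph/ceph.conf -- raise NewStore logging on the affected OSD
[osd]
    debug newstore = 20

# After reproducing the crash, upload the OSD log with ceph-post-file:
#   ceph-post-file /var/log/ceph/ceph-osd.19.log
```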
On Mon, 1 Jun 2015, Srikanth Madugundi wrote:
Hi Sage,
The assertion failed at line 1639,
Hi Sage and all,
I built the ceph code from wip-newstore on RHEL7 and am running
performance tests to compare with filestore. After a few hours of running
the tests, the osd daemons started to crash. Here is the stack trace; the
osd crashes immediately after the restart, so I could not get the osd up
and
Hi Sage,
The assertion failed at line 1639, here is the log message
2015-05-30 23:17:55.141388 7f0891be0700 -1 os/newstore/NewStore.cc: In
function 'virtual int NewStore::collection_list_partial(coll_t,
ghobject_t, int, int, snapid_t, std::vector<ghobject_t>*,
ghobject_t*)' thread 7f0891be0700
Hi Sage,
Unfortunately I purged the cluster yesterday and restarted the
backfill tool. I did not see the osd crash yet on the cluster. I am
monitoring the OSDs and will update you once I see the crash.
With the new backfill run I have reduced the rps by half; not sure if
this is the reason for