bout the buckets or objects has been lost, is
there a way to recover the bucket?
I searched through the archives but didn't find anything exactly like
this. Please point me to the documentation if this has been seen before.
Thanks,
Jeff
--
Jeffrey McDonald, PhD
Assistant Director for HPC Ope
> On Wed, 1 Jun 2016 12:31:41 -0500 Jeffrey McDonald wrote:
>
> > Hi,
> >
> > I just performed a minor ceph upgrade on my ubuntu 14.04 cluster from
> > ceph version 0.94.6-1trusty to 0.94.7-1trusty. Upon restarting the
> > OSDs, I receive the error message:
>
crc
2016-06-01 11:23:05.287753 osd.177 10.31.0.71:6842/10245 445 : cluster
[WRN] failed to encode map e282673 with expected crc
How do I clear these up after the upgrade? All of the filesystems on the
OSDs are mounted and the keyrings are there.
Thanks,
Jeff
> Files that end in something like
> fa202ec9b4b3b217275a_0_long are *not* necessarily orphans -- you need
> to check the user.cephos.lfn3 attr (as you did before) for the full
> length file name and determine whether the file is in the right place.
> -Sam
>
> On Wed, Mar 16, 2016 at 7:49 AM, Jef
next pg marked inconsistent I didn't find any
orphans.
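Sam's check above (reading the user.cephos.lfn3 attr on each *_long file) can be scripted. A minimal sketch, assuming Python 3 on Linux; the attr reader is injectable so the scan logic can be exercised without a real OSD filesystem, and the helper names are mine, not from any ceph tool:

```python
import os

LFN_ATTR = "user.cephos.lfn3"

def read_lfn_attr(path):
    # On a real FileStore OSD this xattr holds the full,
    # un-truncated object name for a *_long file.
    return os.getxattr(path, LFN_ATTR).decode("utf-8", "replace")

def scan_long_files(root, get_attr=read_lfn_attr):
    """Yield (path, full_name) for every *_long file under root.

    Deciding whether the file is "in the right place" (orphan or
    not) is deliberately left to the operator, since it depends on
    FileStore's internal hash-directory layout.
    """
    for dirpath, _dirs, files in os.walk(root):
        for name in files:
            if name.endswith("_long"):
                path = os.path.join(dirpath, name)
                yield path, get_attr(path)
```

This only surfaces the candidates and their full names; the placement check itself still has to follow Sam's procedure by hand.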
Thanks,
Jeff
--
Jeffrey McDonald, PhD
Assistant Director for HPC Operations
Minnesota Supercomputing Institute
University of Minnesota Twin Cities
599 Walter Library email: jeffrey.mcdon...@msi.umn.edu
117 Pleasant St SE phon
will all be close to one
> > of those limits. It's possible you are not close to one of those
> > limits. It's also possible you are nearing one now. In any case, the
> > remapping gave the orphaned files an opportunity to cause trouble, but
> > they don't appear due to remapp
> I think what happened is that on
> one of the earlier repairs it reset the stats to the wrong value (the
> orphan was causing the primary to scan two objects twice, which
> matches the stat mismatch I see here). A pg repair will clear
> that up.
> -Sam
>
> On Thu, Mar 17, 201
get around to writing a
> branch for ceph-objectstore-tool. Should happen in the next week or
> two.
> -Sam
>
>
then overwritten with a writefull -- if
> that's true it might be the case that you would only see 0 size ones.
> -Sam
>
> On Tue, Mar 15, 2016 at 4:02 PM, Jeffrey McDonald <jmcdo...@umn.edu>
> wrote:
> > Thanks, I can try to write a tool to do this. Does
> ceph-objectsto
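If Sam's writefull hypothesis above holds, the orphans would show up as zero-length *_long files, which makes a first-pass tool simple. A rough sketch, assuming Python 3; the zero-size heuristic comes from the hypothesis, not from any ceph tooling, so the results are candidates only:

```python
import os

def find_zero_size_long_files(pg_dir):
    """Return sorted paths of *_long files with size 0 under a PG
    directory.

    Per the thread, each hit must still be cross-checked against
    its user.cephos.lfn3 attr before being treated as an orphan.
    """
    hits = []
    for dirpath, _dirs, files in os.walk(pg_dir):
        for name in files:
            if not name.endswith("_long"):
                continue
            path = os.path.join(dirpath, name)
            if os.path.getsize(path) == 0:
                hits.append(path)
    return sorted(hits)
```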
-02-01 13:41:10.623892 osd.307 10.31.0.67:6848/13421 27 :
cluster [INF] 70.459s0 restarting backfill on osd.25(1) from (0'0,0'0] MAX
to 135195'206996
...
Regards,
Jeff
ects?
> -sam
>
>
--
Jeffrey McDonald, PhD
Assistant Director for HPC Operations
Minnesota Supercomputing Institute
University of Minnesota Twin Cities
599 Walter Library email: jeffrey.mcdon...@msi.umn.edu
117 Pleasant St SE phone: +1 612 625-6905
Minneapolis, MN 55455
42 55 6C                                         .tar.gz.2~TWfBUl
00D0  45 6B 6B 34 45 50 48 34 75 5F 4E 6B 4D 6A 6D 7A  Ekk4EPH4u_NkMjmz
00E0  36 35 43 52 53 6B 4A 41 33 2E 33 5F 32 34 39 FE  65CRSkJA3.3_249.
00F0  FF FF FF FF FF FF FF 3B 90 3C 4B 00 00 00 00 00  .......;.<K.....
https://drive.google.com/folderview?id=0Bzz8TrxFvfema2NQUmotd1
e out what's going on,
> you'll have to clean them up manually. Do not repair any of these. I
> suggest that you simply disable scrub and ignore the inconsistent flag
> until we have an idea of what is going on.
> -Sam
>
> On Tue, Mar 8, 2016 at 12:06 PM, Jeffrey McDonald <jmc
ACXX\u2\uACTTGA.tar.gz.2~TWfBUlEkk4EPH4u\uNkMjmz65CRSkJA3._215ce1442b16dc173b77_0_long
>
> On Tue, Mar 8, 2016 at 12:00 PM, Samuel Just <sj...@redhat.com> wrote:
> > Yeah, that procedure should have isolated any filesystem issues. Are
> > there still unfound objects?
> > -s
Resent to ceph-users to be under the message size limit
On Tue, Mar 8, 2016 at 6:16 AM, Jeffrey McDonald <jmcdo...@umn.edu> wrote:
> OK, this is done and I've observed the state change of 70.459 from
> active+clean to active+clean+inconsistent after the first scrub.
>
>
bit more tomorrow. Can you get the tree
> structure of the 70.459 pg directory on osd.307 (find . will do fine).
> -Sam
>
> On Mon, Mar 7, 2016 at 4:50 PM, Jeffrey McDonald <jmcdo...@umn.edu> wrote:
> > 307 is on ceph03.
> > Jeff
> >
> > On Mon, Mar 7, 20
nodes, but they
> > were only receiving new data, not migrating it.' -- What do you mean
> > by that?
> > -Sam
> >
> > On Mon, Mar 7, 2016 at 4:42 PM, Jeffrey McDonald <jmcdo...@umn.edu>
> wrote:
> >> The filesystem is xfs everywhere, there are nine ho
pg, it might help.
> > -Sam
> >
> > On Mon, Mar 7, 2016 at 4:34 PM, Jeffrey McDonald <jmcdo...@umn.edu>
> wrote:
> >> they're all the same. See attached.
> >>
> >> On Mon, Mar 7, 2016 at 6:31 PM, Samuel Just <sj...@redhat.com> wrote:
>
r 7, 2016 at 6:36 PM, Samuel Just <sj...@redhat.com> wrote:
> Hmm, so much for that theory, still looking. If you can produce
> another set of logs (as before) from scrubbing that pg, it might help.
> -Sam
>
> On Mon, Mar 7, 2016 at 4:34 PM, Jeffrey McDonald <jmcdo...@umn.
they're all the same. See attached.
On Mon, Mar 7, 2016 at 6:31 PM, Samuel Just <sj...@redhat.com> wrote:
> Have you confirmed the versions?
> -Sam
>
> On Mon, Mar 7, 2016 at 4:29 PM, Jeffrey McDonald <jmcdo...@umn.edu> wrote:
> > I have one other very strang
he problem.
> >> -Sam
> >>
> >> On Mon, Mar 7, 2016 at 2:44 PM, Samuel Just <sj...@redhat.com> wrote:
> >>> So after the scrub, it came up clean? The inconsistent/missing
> >>> objects reappeared?
> >>> -Sam
> >>>
>
; wrote:
> > The one just scrubbed and now inconsistent.
> > -Sam
> >
> > On Mon, Mar 7, 2016 at 1:57 PM, Jeffrey McDonald <jmcdo...@umn.edu>
> wrote:
> >> Do you want me to enable this for the pg already with unfound objects
> or the
> >> placement grou
s = 1
>
> on all osds in that PG, rescrub, and convey to us the resulting logs?
> -Sam
>
> On Mon, Mar 7, 2016 at 1:36 PM, Jeffrey McDonald <jmcdo...@umn.edu> wrote:
> > Here is a PG which just went inconsistent:
> >
> > pg 70.459 is active+clean+inconsistent, act
objects unfound and apparently lost
Regards,
Jeff
On Mon, Mar 7, 2016 at 3:36 PM, Jeffrey McDonald <jmcdo...@umn.edu> wrote:
> Here is a PG which just went inconsistent:
>
> pg 70.459 is active+clean+inconsistent, acting [307,210,273,191,132,450]
>
> Attached is the
ich has been reported as dirty. I can't help much beyond
> that, but hopefully Kefu or David will chime in once there's a little
> more for them to look at.
> -Greg
>
> On Mon, Mar 7, 2016 at 1:00 PM, Jeffrey McDonald <jmcdo...@umn.edu> wrote:
> > Hi Greg,
> >
>
state active+recovering,
last acting [277,101,218,49,304,412]
pg 70.320 is active+recovering, acting [277,101,218,49,304,412], 18 unfound
There is no indication of any problems with down OSDs or network issues
with OSDs.
Thanks,
Jeff
.bf46c30c-14fa-4e2a-a013-4e84f24eb63b.130722\uUNC9-SN296\u0385\uAD2F28ACXX\u8\uGTTTCG.tar.gz.2~RGMpBL1jBOB6Pa4ZQrdgVMxKHw0CIGu.6_392587ace40e89b50fac_0_long
How do I find the bucket and owner of this file?
Thanks in advance,
Jeff
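One way to approach the question above: in FileStore's name escaping, a literal `\u` stands for an underscore, so unescaping the on-disk name recovers the RADOS object name. A sketch in Python, assuming rgw data objects are named `<bucket marker>_<object key>` (inferred from the names in this thread, not confirmed by it):

```python
def bucket_marker(on_disk_name):
    """Recover the rgw bucket-instance marker from an escaped
    FileStore file name.

    A literal '\\u' unescapes to '_'; assuming the
    '<marker>_<object key>' naming convention, the marker is
    everything before the first underscore after unescaping.
    """
    rados_name = on_disk_name.replace("\\u", "_")
    marker, _sep, _key = rados_name.partition("_")
    return marker
```

With the marker in hand, something like `radosgw-admin metadata list bucket.instance` or `radosgw-admin bucket stats` can be grepped for it to identify the bucket and its owner; exact subcommands vary by release, so treat this as a pointer rather than a recipe.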
har const*, char const*, int, char
const*)+0x8b) [0xbc60eb]
2: (FileStore::_do_transaction(ObjectStore::Transaction&, unsigned long,
int, ThreadPool::TPHandle*)+0xa52) [0x923d12]
3: (FileStore::_do_transactions(std::list<ObjectStore::Transaction*,
std::allocator<ObjectStore::Trans
"begin": "0\/\/0\/\/-1",
"end": "0\/\/0\/\/-1",
"objects": []
},
"peer_backfill_info": [],
"backfills_in_flight": [],
1 finisher
1/ 5 heartbeatmap
1/ 5 perfcounter
1/ 5 rgw
1/10 civetweb
1/ 5 javaclient
1/ 5 asok
1/ 1 throttle
0/ 0 refs
1/ 5 xio
-2/-2 (syslog threshold)
-1/-1 (stderr threshold)
max_recent 1
max_new 1000
log_file /var/log/ceph/ceph-osd.299.log
--
Hi,
We have a ceph Giant installation with a radosgw interface. There are 198
OSDs on seven OSD servers and we're seeing OSD failures on the system when
users try to write files via the s3 interface. We're more likely to see
the failures if the files are larger than 1 GB and if the files go