...version, I took that opportunity to upgrade my
monitors to 10.2.3. Three of the 5 monitors continue to crash, and it
looks like they are crashing when trying to apply a pending mdsmap
update.

The log is available here:
http://people.cis.ksu.edu/~mozes/hobbit01.mon-20160930.log.gz

I have attempted (making backups, of course) to extract the monmap from
a working monitor and insert it into a broken one. No luck, and the
backup was restored.
Hi, I have tried to understand how Ceph stores and retrieves data, and
I have a few beginner's questions about this explanation:
http://ceph.com/wp-content/uploads/2012/12/pg-placement1.png

1. hash("foo") -- what exactly is "foo"? Is it the filename that the
client tries to write, or is it the
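Regarding the diagram: the input to hash() is the object name (here "foo"), not the pool or the client. The flow can be sketched as a toy model in Python -- this is not Ceph's actual code: crc32 stands in for Ceph's rjenkins hash, a plain modulo for Ceph's stable mod, and the seeded sample below for real CRUSH, which also consults the cluster map, weights, and failure-domain rules.

```python
import random
import zlib

def object_to_pg(object_name: str, pg_num: int) -> int:
    # Stand-in for Ceph's rjenkins hash: crc32 of the object name.
    # Ceph folds the hash onto a placement group; a plain modulo
    # approximates Ceph's "stable mod".
    return zlib.crc32(object_name.encode("utf-8")) % pg_num

def pg_to_osds(pg: int, osds: list, size: int = 3) -> list:
    # Toy stand-in for CRUSH: a deterministic, seed-based pick of
    # `size` distinct OSDs from the candidate list.
    return random.Random(pg).sample(osds, size)

# "foo" in the diagram is the object name the client writes; the same
# name always maps to the same PG, and the PG to the same set of OSDs.
pg = object_to_pg("foo", pg_num=128)
acting = pg_to_osds(pg, osds=list(range(12)))
print(pg, acting)
```

The key property the diagram is showing is determinism: any client can compute the placement from the object name and the map alone, with no central lookup table.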
I need to get these monitors back up post-haste.
If you've got any ideas, I would appreciate them.
Hi,
I just created a new cluster with 0.94.8 and I'm getting this message:
2016-09-29 21:36:47.065642 mon.0 [INF] disallowing boot of OSD osd.35
10.22.21.49:6844/9544 because the osdmap requires CEPH_FEATURE_SERVER_JEWEL but
the osd lacks CEPH_FEATURE_SERVER_JEWEL
This is really bizarre. All
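For context on what the monitor is checking here: CEPH_FEATURE_* flags are bits in a 64-bit mask that daemons advertise, and the monitor disallows boot when the osdmap requires a bit the OSD does not have. A minimal sketch of that comparison -- the bit position below is a placeholder for illustration only; the real constant lives in Ceph's include/ceph_features.h:

```python
# Placeholder bit position, NOT the real constant from ceph_features.h.
CEPH_FEATURE_SERVER_JEWEL = 1 << 57

def lacks_feature(advertised: int, required: int) -> bool:
    # The monitor masks the daemon's advertised features against what
    # the osdmap requires; any missing bit means "disallowing boot".
    return (advertised & required) != required

hammer_osd = 0  # a pre-Jewel (0.94.x) OSD never advertises the Jewel bit
print(lacks_feature(hammer_osd, CEPH_FEATURE_SERVER_JEWEL))  # True
```

Which is what makes the message above surprising: on a cluster created with 0.94.8, the osdmap should not be requiring a Jewel-only feature in the first place.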
On 09/30/16 15:48, Oliver Dzombic wrote:

Hi Nick,

thank you for your reply!

Indeed, jumbo frames were not activated.

So ping and everything was working, so I thought the network was up. But
not with enough MTU...

The f... Supermicro switch just deleted the switch config, so I had to
recreate everything and forgot about the MTU on the uplink ports.
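This failure mode (plain ping works, jumbo frames silently don't) can be caught with a ping that disables fragmentation at full payload size. A sketch, assuming a 9000-byte MTU and Linux iputils ping (the -M do flag); the peer address is of course yours to fill in:

```shell
# Jumbo-frame sanity check. The ICMP payload must leave room for the
# IP header (20 bytes) and the ICMP header (8 bytes).
MTU=9000
PAYLOAD=$((MTU - 28))
echo "$PAYLOAD"
# Then verify the path end to end with fragmentation disallowed, e.g.:
#   ping -M do -s "$PAYLOAD" <peer-ip>
# If this fails while a plain ping succeeds, some hop (like an uplink
# port) has a smaller MTU than expected.
```

Running this from each node across the cluster network would have exposed the mis-set uplink ports before the cluster went down.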
> -----Original Message-----
> From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of
> Oliver Dzombic
> Sent: 30 September 2016 14:16
> To: ceph-users@lists.ceph.com
> Subject: [ceph-users] production cluster down :(

Hi,

we have:

ceph version 10.2.2

health HEALTH_ERR
2240 pgs are stuck inactive for more than 300 seconds
273 pgs down
2240 pgs peering
2240 pgs stuck inactive
354 requests are blocked > 32 sec
mds cluster is degraded
Hi,
On 30.09.2016 at 09:45, Christian Balzer wrote:
> [...]
> Gotta love having (only a few years late) a test and staging cluster that
> is actually usable and comparable to my real ones.
>
> So I did create a 500GB image and filled it up.
> The cache pool is set to 500GB as well and will
Hi,
I have been very impressed with the BlueStore test environment I made,
which is built on Ubuntu 16.04 using the Ceph development master
repository.
But now I have run into some self-inflicted problems.
Yesterday I accidentally updated the OSDs while they were being heavily
used.
I just love the sound of my own typing...
See inline, below.
On Fri, 30 Sep 2016 12:18:48 +0900 Christian Balzer wrote:
>
> Hello,
>
On 30.09.2016 at 05:18, Christian Balzer wrote:
> On Thu, 29 Sep 2016 20:15:12 +0200 Sascha Vogt wrote:
>> On 29/09/16 15:08, Burkhard Linke wrote:
>>> AFAIK evicting an object also flushes it to the backing storage, so
>>> evicting a live object should be ok. It will be promoted again at the
>>>
Hello,
On Thu, 29 Sep 2016 07:37:45 -0700 Gerald Spencer wrote:
> Greetings new world of Ceph,
>
> Long story short, at work we perform high throughput volumetric imaging and
> create a decent chunk of data per machine. We are about to bring the next
> generation of our system online and the
Hi,
we are about to move from internal testing to a first production setup
with our object storage based on Ceph RGW. One of the last open problems
is a backup / staging solution for S3 buckets.
As far as I know, many of the lifecycle operations available in Amazon
S3 are not implemented
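For reference, this is the shape of the lifecycle payload that boto3's put_bucket_lifecycle_configuration expects against Amazon S3 -- useful as a target even where an RGW of this era rejects or ignores it. The prefix, rule ID, and retention period below are made-up illustrative values:

```python
def expiry_lifecycle(prefix: str, days: int) -> dict:
    # Builds the Rules payload for put_bucket_lifecycle_configuration;
    # values here are illustrative, not a recommendation.
    return {
        "Rules": [
            {
                "ID": f"expire-{prefix or 'all'}",
                "Filter": {"Prefix": prefix},
                "Status": "Enabled",
                "Expiration": {"Days": days},
            }
        ]
    }

cfg = expiry_lifecycle("backups/", 30)
print(cfg["Rules"][0])
```

Until RGW implements this server-side, the same effect has to come from an external job that lists and deletes (or copies out) objects on its own schedule.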