that I've seen myself too. After migrating to a standard
filesystem layout (i.e. no LVM) the issue disappeared.
Regards,
Tom
-
are Engineer, Concurrent Computer Corporation
On Sat, Apr 1, 2017 at 4:47 AM Georgios Dimitrakakis wrote:
Hi,
just to provide some more feedback on this one and what I've done
to solve it, although I'm not sure if this is the most "elegant"
solution.
I have added manually to /et
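(The message is cut off here. Purely as an illustration of that kind of manual
workaround on CentOS 7, with the device, mount point and OSD id as hypothetical
placeholders, it might amount to adding explicit mount commands, e.g. in
/etc/rc.d/rc.local:)

# hypothetical sketch only; the actual change described above is truncated
# mount the OSD data partition explicitly, then start the OSD service
mount /dev/sdb1 /var/lib/ceph/osd/ceph-0
systemctl start ceph-osd@0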
FROM: ceph-users on behalf of Georgios Dimitrakakis
SENT: Thursday, March 23, 2017 10:21:34 PM
TO: ceph-us...@ceph.com
SUBJECT: [ceph-users] CentOS7 Mounting Problem
Hello Ceph community!
I would like some help with a n
ect?
G.
On 16 November 2017 at 14:46, Caspar Smit
<caspars...@supernas.eu> wrote:
2017-11-16 14:43 GMT+01:00 Wido den Hollander <w...@42on.com>:
>
> > On 16 November 2017 at 14:40, Georgios Dimitrakakis <
> > gior...@acmac.uoc.gr> wrote:
> >
> >
if they lose data because you
have warned them to have 3 replicas. If they don't sign it, then tell
them you will no longer manage Ceph for them. Hopefully they wake up
and make everyone's job easier by purchasing a third server.
On Thu, Nov 16, 2017 at 9:26 AM Georgios Dimitrakakis wrote:
Thank you all
Dear cephers,
I have an emergency on a rather small ceph cluster.
My cluster consists of 2 OSD nodes, each with 10 x 4TB disks, and 3
monitor nodes.
The version of ceph running is Firefly v.0.80.9
(b5a67f0e1d15385bc0d60a6da6e7fc810bde6047)
The cluster was originally built with "Replicated
does that copy all its data
from the only available copy to the remaining unaffected disks, which will
consequently end up having two copies on two different hosts again?
Best,
G.
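(A hedged sketch of commands, all available as far back as Firefly as far as I
know, for checking what the cluster can actually do in this situation; the pool
name "rbd" is only an example:)

ceph osd pool get rbd size      # configured number of replicas for the pool
ceph osd tree                   # hosts and OSDs as CRUSH sees them
ceph osd crush rule dump        # failure domain (e.g. "host") used by each rule
ceph pg dump_stuck unclean      # PGs that cannot reach the desired number of copies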
2017-11-16 14:05 GMT+01:00 Georgios Dimitrakakis :
Dear cephers,
I have an emergency on a rather small ceph cluster.
where downtime is not an option.
On Thu, Feb 22, 2018, 5:31 PM Georgios Dimitrakakis wrote:
All right! Thank you very much Jack!
The way I understand this is that it's not necessarily a bad
thing. I
mean, as long as it doesn't harm any data or
cause any other issue.
Unfortunately my
on the host ... so the pool really blocks all
requests
On 02/24/2018 01:45 PM, Georgios Dimitrakakis wrote:
The pool will not actually go read-only. All read and write
requests
will block until both OSDs are back up. If I were you, I would use
min_size=2 and change it to 1 temporarily if needed to do
like to know if it's better to go with min_size=2 or not.
Regards,
G.
If min_size == size, a single OSD failure will place your pool
read-only
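(In command form, the advice in this thread boils down to something like the
following; the pool name "rbd" is only an example:)

ceph osd pool get rbd size        # number of replicas kept
ceph osd pool get rbd min_size    # copies required before I/O is allowed
ceph osd pool set rbd min_size 2  # normal operation
ceph osd pool set rbd min_size 1  # temporarily, e.g. during planned work on one OSD host
ceph osd pool set rbd min_size 2  # restore once both hosts are healthy again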
On 02/22/2018 11:06 PM, Georgios Dimitrakakis wrote:
Dear all,
I would like to know if there are additional risks when running CEPH
with "Min Size" equal to "Replicated Size" for a given pool.
What are the drawbacks and what could go wrong in such a scenario?
Best regards,
G.
] Failed to connect to host:controller
[ceph_deploy.gatherkeys][INFO ] Destroy temp directory
/tmp/tmpPQ895t
[ceph_deploy][ERROR ] RuntimeError: Failed to connect any mon
On Wed, Feb 28, 2018 at 5:21 PM, Georgios Dimitrakakis
<gior...@acmac.uoc.gr> wrote:
All,
I have updated my test ceph c
on this ML details about MGR caps being incorrect for OSDs and
MONs after a Jewel to Luminous upgrade. The output of a ceph auth list
command should help you find out if it's the case.
Are your ceph daemons still running? What does a ceph daemon
mon.$(hostname -s) quorum_status give you from a MON server?
JC
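(For reference, a hedged sketch of the checks described above, plus what
Luminous-style caps for an OSD normally look like; the daemon id is an example
and any caps change should be verified against the release notes first:)

ceph auth list                                # look for entries whose caps were not updated
ceph daemon mon.$(hostname -s) quorum_status  # run this on a monitor host
# example only: typical Luminous caps for an OSD, if its caps turn out to be wrong
ceph auth caps osd.0 mon 'allow profile osd' osd 'allow *'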
On Feb 28, 2018, at
Excellent! Good to know that the behavior is intentional!
Thanks a lot John for the feedback!
Best regards,
G.
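(For context: in Luminous "ceph -w" prints the status once and then follows the
cluster log instead of repeatedly printing status updates; if a continuously
refreshing view is wanted, simply polling the status is a workable substitute,
with the interval being arbitrary:)

ceph -w             # prints the status once, then streams cluster log entries
watch -n 5 ceph -s  # re-runs "ceph -s" every 5 seconds for a live status view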
On Thu, Mar 1, 2018 at 12:03 PM, Georgios Dimitrakakis
<gior...@acmac.uoc.gr> wrote:
I have recently updated to Luminous (12.2.4) and I have noticed that
using "ceph -w" only produces an initial output like the one below but
never gets updated afterwards. Is this a feature? I was used to
the old way, which was constantly
producing info.
Here is what I get as initial
All,
I have updated my test ceph cluster from Jewel (10.2.10) to Luminous
(12.2.4) using CentOS packages.
I have updated all packages and restarted all services in the proper
order, but I get a warning that the Manager Daemon doesn't exist.
Here is the output:
# ceph -s
cluster:
id:
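(The missing Manager Daemon warning after a Jewel to Luminous upgrade usually
just means no ceph-mgr has been deployed yet. A hedged sketch of creating one
manually, following the Luminous documentation; the daemon name "controller"
and the paths are examples, and ceph-deploy users can instead run something
like "ceph-deploy mgr create <host>":)

mkdir -p /var/lib/ceph/mgr/ceph-controller
ceph auth get-or-create mgr.controller mon 'allow profile mgr' osd 'allow *' mds 'allow *' \
    -o /var/lib/ceph/mgr/ceph-controller/keyring
chown -R ceph:ceph /var/lib/ceph/mgr/ceph-controller
systemctl enable --now ceph-mgr@controller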
Hello,
I would like to see people's opinion about memory configurations.
Would you prefer 2x8GB over 1x16GB or the opposite?
In addition, what are the latest memory recommendations? Should we
keep the rule of thumb of 1GB of RAM per TB of storage,
or have things changed now with Bluestore?
I am planning
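(Not an authoritative answer, but for what it's worth: on recent Bluestore
releases per-OSD memory use is governed by osd_memory_target, with older
Bluestore releases using bluestore_cache_size instead, so planning tends to be
per OSD rather than strictly per TB. The value below is an example only:)

[osd]
# example only: aim each Bluestore OSD at roughly 4 GB of RAM
# (requires a release new enough to support osd_memory_target)
osd_memory_target = 4294967296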
Hello,
I am wondering if there are people out there who still use
"old-fashioned" CRON scripts to check Ceph's health, monitor it, and
receive email alerts.
If there are, do you mind sharing your implementation?
Probably something similar to this:
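(The example itself is not preserved in the archive; as a stand-in, a minimal
sketch along those lines, assuming a local "ceph" CLI and a working "mail"
command, with the install path, schedule and recipient as placeholders:)

#!/bin/bash
# Minimal sketch: mail an alert whenever "ceph health" reports anything
# other than HEALTH_OK. Install e.g. as /usr/local/bin/ceph-health-mail.sh
# and add a cron entry such as:
#   */15 * * * * /usr/local/bin/ceph-health-mail.sh
STATUS=$(ceph health 2>&1)
if [ "$STATUS" != "HEALTH_OK" ]; then
    ceph health detail 2>&1 | mail -s "Ceph health on $(hostname -s): $STATUS" admin@example.com
fi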