Ah, I should have mentioned: size=3, min_size=1.
I'm pretty sure that 'down_osds_we_would_probe' is the problem, but it's
not clear if there's a way to fix that.
On Tue, Feb 9, 2016 at 11:30 PM Arvydas Opulskis <
arvydas.opuls...@adform.com> wrote:
> Hi,
>
>
>
> What is min_size for this
Hi,
I have a question about lost objects. I have a placement group in the
active+recovering state with a missing object. Should I wait until the
placement group is active+clean before doing anything about the missing
objects?
# ceph -v
ceph version 0.94.5
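For reference, a hedged sketch of the hammer-era commands that I believe
apply here (the pg id 2.4 is only a placeholder): you can inspect the
unfound objects first, and only mark them lost once recovery has genuinely
stalled:
ceph health detail
ceph pg 2.4 list_missing
ceph pg 2.4 mark_unfound_lost revert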
On Wed, Feb 10, 2016 at 5:52 AM, Scott Laird wrote:
> Ah, I should have mentioned: size=3, min_size=1.
>
> I'm pretty sure that 'down_osds_we_would_probe' is the problem, but it's not
> clear if there's a way to fix that.
Marking OSDs lost is what's supposed to resolve that.
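For reference, a hedged sketch of that step (the osd id 12 is a
placeholder): once you are certain a down OSD will never come back, telling
the cluster so lets peering stop waiting on it:
ceph osd lost 12 --yes-i-really-mean-it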
I am trying to deploy Ceph 0.94.5 (hammer) across a few nodes using
ceph-deploy, passing the --dmcrypt flag. The first osd:journal pair
seems to succeed, but all remaining OSDs that have a journal on the same
SSD seem to silently fail:
http://pastebin.com/2TGG4tq4
In the end I end up with 5
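For context, a hedged sketch of the kind of invocation being described
(host and device names are placeholders, not the poster's actual layout):
several prepare calls pointing journals at the same SSD:
ceph-deploy osd prepare --dmcrypt node1:/dev/sdb:/dev/sdf
ceph-deploy osd prepare --dmcrypt node1:/dev/sdc:/dev/sdf
ceph-deploy osd prepare --dmcrypt node1:/dev/sdd:/dev/sdf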
Hi Ceph users
This is my first post on this mailing list. Hope it's the correct one. Please
redirect me to the right place in case it is not.
I am running a small (3 nodes with 3 OSDs and 1 monitor on each of them) Ceph
cluster.
Guess what, it is used as Cinder/Glance/Nova RBD storage for
Hi,
On 02/09/2016 03:46 PM, Jason Dillaman wrote:
> What release of Infernalis are you running? When you encounter this error,
> is the partition table zeroed out or does it appear to be random corruption?
>
It's
ceph version 9.2.0 (bb2ecea240f3a1d525bcb35670cb07bd1f0ca299)
and dpkg -l ceph:
Grüezi Daniel,
my first question would be: What's your pool size / min_size?
ceph osd pool get pool-name
It is probably 3 (the default size). If you want to reach a healthy state
again with only 2 nodes (all the OSDs on node 3 are down), you have to
set your pool size to 2:
ceph osd pool set pool-name
Hi Daniel,
oops, wrong copy paste, here are the correct commands:
ceph osd pool get pool-name size
ceph osd pool set pool-name size 2
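min_size can be checked and changed the same way, in case that turns out to
be the limiting setting (a sketch; pool-name is a placeholder):
ceph osd pool get pool-name min_size
ceph osd pool set pool-name min_size 1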
On Wed, Feb 10, 2016 at 6:27 PM, Ivan Grcic wrote:
> Grüezi Daniel,
>
> my first question would be: What's your pool size / min_size?
>
> ceph
Can you provide the 'rbd info' dump from one of these corrupt images?
--
Jason Dillaman
- Original Message -
> From: "Udo Waechter"
> To: "Jason Dillaman"
> Cc: "ceph-users"
> Sent: Wednesday, February 10, 2016
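For reference, a hedged example of the requested command (pool and image
names are placeholders):
rbd info rbd/myimage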
As far as I know you can do it in two ways (assuming you have a pool size
of 3 on all 3 nodes, with min_size 2, to still have access to data):
1. Set noout (see the sketch below) to prevent the cluster from starting a
rebalance. Reinstall the OS on the faulty node and redeploy the node with
all keys and conf files (either
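For reference, a hedged sketch of the noout bracket around that procedure:
ceph osd set noout
... reinstall the OS and redeploy the OSDs with the same ids, keys and conf ...
ceph osd unset noout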
This is the fourth development release for Jewel. Several big pieces have
been added in this release, including BlueStore (a new backend for the OSD
to replace FileStore), many ceph-disk fixes, a new CRUSH tunable that
improves mapping stability, a new librados object enumeration API, and a
whole
The ability to list watchers wasn't added until the cuttlefish release, which
explains why you are seeing "operation not supported". Is your rbd CLI also
from bobtail?
--
Jason Dillaman
- Original Message -
> From: "Tahir Raza"
> To:
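For reference, a hedged sketch of listing watchers with rados on cuttlefish
or later (assuming a format 1 image, whose header object is named
<image>.rbd; the pool and image names here come from the thread below):
rados -p rbd listwatchers nova8.rbd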
We are using the following tools in our cloud:
OpenStack - Essex
Ceph - cluster with Bobtail and a few clients with Firefly and Dumpling
We are trying to delete the rbd image 'nova8' but it says the image/device
is busy. The compute node on which this rbd image was mounted crashed a few
weeks ago. We have
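For reference, a hedged sketch of how a stale watch from a dead client is
often cleared (the client address is a placeholder you would take from
listwatchers output):
ceph osd blacklist add 10.1.2.3:0/123456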
Dear list,
we have a cluster with 2 nodes, one with SSDs and the other without (host 1
has SSDs and host 2 does not). Is there any possibility that host 2
can still use the SSDs from host 1 for journaling?
I see that we can change the journal path in ceph.conf, but this is a path
where the journal and
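For reference, the setting being referred to looks like this in ceph.conf
(a sketch; the path is a placeholder for a local file or device):
[osd]
osd journal = /var/lib/ceph/osd/ceph-$id/journal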
"Remote journal"? No, don't do it even if it'd be possible via NFS or
any kind of network-FS.
You could always keep the journal on HDD (yes, I know it's not what You
wanted to achieve, but I don't think that setting journal on remote
machine would be a good idea in any way)
Regards
Michał
Going into a tiny bit more detail on what Michał said: one of the key
reasons for having the journal (in particular, on SSDs) is to
reduce latency on writes (the other being replay in the event of a
crash). Even if the functionality existed, adding a network trip to
this would be detrimental
On 10/02/16 03:55, Matt Taylor wrote:
We are using Dell R730XDs with 2 x internal SAS in RAID 1 for the OS, and
24 x 400GB SSDs.
The PERC H730P Mini is being used with non-RAID passthrough for the SSDs.
CPU and RAM specs don't really need to be known, as you can do
whatever you want; however, I
Hello Cephers,
I'm planning a hardware environment. I want to use Ceph for VMs which will
be managed by OpenStack. So far, in my virtualized DEV cloud, all looks
great: OpenStack works well with KVM + Ceph, but I want to increase
performance with SSDs. A full-SSD Ceph cluster will be too
>>Dell finally sells a controller with true JBOD mode?
YES! And better, you can mix hardware RAID and JBOD :)
(I'm talking about the H330, LSI 3008 chipset)
- Original Message -
From: "Jan Schermer"
To: "Yann Dupont"
Cc: "ceph-users"
Dell finally sells a controller with true JBOD mode? The last I checked they
only had "JBOD-via-RAID0" as a recommended solution (doh, numerous problems)
and true JBOD was only offered for special use cases like hadoop storage.
One can obviously reflash the controller to another mode, but that's
What kind of authentication do you use against the Rados Gateway?
We had a similar problem authenticating against our Keystone server. If
the Keystone server is overloaded, the time to read/write RGW objects
increases. You will not see anything wrong on the Ceph side.
Saverio
2016-02-08 17:49
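For reference, a hedged sketch of the hammer-era Keystone options on the
RGW side (placeholder values; option names from memory of that release):
[client.radosgw.gateway]
rgw keystone url = http://keystone-host:35357
rgw keystone admin token = SECRET
rgw keystone accepted roles = Member, admin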
On 10/02/16 12:17, Alexandre DERUMIER wrote:
> Dell finally sells a controller with true JBOD mode?
> YES! And better, you can mix hardware RAID and JBOD :)
> (I'm talking about the H330, LSI 3008 chipset)
Same for the H730P, also an LSI chipset (don't know which one)... probably
not much difference.