I’ve run production Ceph/OpenStack since 2015. The reality is that running
OpenStack Newton (the last release with PKI tokens) against a post-Nautilus
Ceph just isn’t going to work. You are going to have bigger problems than
trying to make object storage work with Keystone-issued tokens. Worst case is you
10:39 AM, Lars Marowsky-Bree <l...@suse.com> wrote:
>
> On 2017-07-14T10:34:35, Mike Lowe <j.michael.l...@gmail.com> wrote:
>
Having run Ceph clusters in production for the past six years and upgrading
from every stable release starting with Argonaut to the next, I can honestly
say being careful about order of operations has not been a problem.
> On Jul 14, 2017, at 10:27 AM, Lars Marowsky-Bree wrote:
> On Dec 19, 2016, at 6:40 PM, Jason Dillaman <jdill...@redhat.com> wrote:
>
> Do you happen to know if there is an existing bugzilla ticket against
> this issue?
>
> On Mon, Dec 19, 2016 at 3:46 PM, Mike Lowe <j.michael.l...@gmail.com>
It looks like the libvirt (2.0.0-10.el7_3.2) that ships with CentOS 7.3 is
broken out of the box when it comes to hot-plugging new virtio-scsi devices
backed by RBD and cephx auth. If you use OpenStack, cephx auth, and CentOS,
I’d caution against upgrading to CentOS 7.3 right now.
Can anybody help shed some light on this error I’m getting from radosgw?
2016-05-11 10:09:03.471649 7f1b957fa700 1 -- 172.16.129.49:0/3896104243 -->
172.16.128.128:6814/121075 -- osd_op(client.111957498.0:726 27.4742be4b
97c56252-6103-4ef4-b37a-42739393f0f1.113770300.1_interfaces [create 0~0
I can’t file a documentation bug.
> On Oct 15, 2015, at 2:06 PM, Mike Lowe <j.michael.l...@gmail.com> wrote:
>
> I think so, unless I misunderstand how it works.
>
> (openstack) role list --user jomlowe --project jomlowe
> +--+--
I’m having some trouble with radosgw and Keystone integration; I always get the
following error:
user does not hold a matching role; required roles: Member,user,_member_,admin
Despite my token clearly having one of the roles:
"user": {
"id":
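For what it's worth, the set of accepted roles is driven by radosgw's Keystone
settings; a minimal ceph.conf sketch (only the option names come from the
radosgw Keystone docs — the section name and values here are placeholders, not
from this thread):

```ini
# Hypothetical values -- only the option names are real.
[client.radosgw.gateway]
rgw keystone url = http://keystone.example.com:5000
rgw keystone admin token = SECRET_PLACEHOLDER
# A user must hold at least one of these roles on the tenant:
rgw keystone accepted roles = Member, admin
```

The error text in the post lists exactly the roles configured here, so the
comparison is between this list and the roles carried in the token.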
>
> On Thu, Oct 15, 2015 at 8:34 AM, Mike Lowe <j.michael.l...@gmail.com> wrote:
>> I’m having some trouble with radosgw and keystone integration, I always get
>> the following error:
>>
>> user does not hold a matching role; required roles:
>>
You really should; I believe the OSD number is used in computing CRUSH
placement. Bad things will happen if you don't use sequential numbers.
On Oct 30, 2013, at 11:37 AM, Glen Aidukas gaidu...@behaviormatrix.com wrote:
I wanted to know: does the OSD numbering have to be sequential, and what is
the
And a +1 from me as well. It would appear that Ubuntu has picked up the 0.67.4
source and included a build of it in their official repo, so you may be able to
get by until the next point release with those.
http://packages.ubuntu.com/search?keywords=ceph
On Oct 22, 2013, at 11:46 AM, Mike
I wouldn't go so far as to say putting a VM in a file on a networked filesystem
is wrong. It is just not the best choice if you have a Ceph cluster at hand,
in my opinion. Networked filesystems have a bunch of extra machinery to
implement POSIX semantics and live in kernel space. You just need
You can add PGs; the process is called splitting. I don't think PG merging,
the reduction in the number of PGs, is ready yet.
On Oct 8, 2013, at 11:58 PM, Guang yguan...@yahoo.com wrote:
Hi ceph-users,
Ceph recommends the PGs number of a pool is (100 * OSDs) / Replicas, per my
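As a quick sketch of that rule of thumb (my own illustration; the
round-up-to-a-power-of-two step is common practice, not something stated in
the thread):

```python
def recommended_pg_num(num_osds, replicas, pgs_per_osd=100):
    """Rule of thumb from the list: (pgs_per_osd * OSDs) / replicas,
    rounded up to the next power of two (common practice)."""
    raw = pgs_per_osd * num_osds / replicas
    pg = 1
    while pg < raw:
        pg *= 2
    return pg

print(recommended_pg_num(12, 3))  # 12 OSDs, 3x replication -> 512
```

So a 12-OSD, 3-replica pool lands at 400 raw, rounded up to 512 PGs.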
Well, in a word, yes. You really expect a network-replicated storage system in
user space to be comparable to direct-attached SSD storage? For what it's
worth, I've got a pile of regular spinning rust; this is what my cluster will
do inside a VM with RBD writeback caching on. As you can see,
There is TRIM/discard support and I use it with some success. There are some
details here: http://ceph.com/docs/master/rbd/qemu-rbd/ The one caveat I have
is that I've sometimes been able to crash an OSD by running fstrim inside a guest.
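To illustrate, discard has to be enabled on the virtual disk before a guest's
fstrim can reach the rbd layer at all; a hypothetical qemu invocation (pool
and image names made up, flags per the qemu documentation):

```shell
# discard=unmap forwards guest TRIM to the rbd layer; virtio-scsi is
# used because plain virtio-blk of that era lacked discard support.
qemu-system-x86_64 \
  -drive file=rbd:rbd/myimage,format=raw,if=none,id=disk0,discard=unmap \
  -device virtio-scsi-pci,id=scsi0 \
  -device scsi-hd,drive=disk0
```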
On Aug 22, 2013, at 10:24 AM, Guido Winkelmann
I think you are missing the distinction between metadata journaling and data
journaling. In most cases a journaling filesystem is one that journals its
own metadata, but your data is on its own. Consider the case where you have a
replication level of two and the OSD filesystems have journaling
Let me make a simpler case: to get ACID (https://en.wikipedia.org/wiki/ACID),
which are all properties you want in a filesystem or a database, you need a
journal. You need a journaled filesystem to make the object store's file
operations safe. You need a journal in Ceph to make sure the object
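A toy write-ahead sketch of the atomicity argument (illustration only, and in
no way how Ceph's journal is actually implemented):

```python
import json
import os

def journaled_write(journal_path, data_path, payload):
    # 1. Record the intent durably before touching the data.
    with open(journal_path, "w") as j:
        json.dump({"path": data_path, "payload": payload}, j)
        j.flush()
        os.fsync(j.fileno())
    # 2. Apply the change; a crash here is recoverable by
    #    replaying the intent record from step 1.
    with open(data_path, "w") as f:
        f.write(payload)
        f.flush()
        os.fsync(f.fileno())
    # 3. Commit: the intent record is no longer needed.
    os.remove(journal_path)
```

Either the journal entry survives (replay finishes the write) or the write
completed and the entry is gone; there is no half-applied state.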
On Fri, Jul 19, 2013 at 3:54 PM, Mike Lowe j.michael.l...@gmail.com wrote:
I'm not sure how to get you out of the situation you are in, but what you have
in your CRUSH map is osd 2 and osd 3; Ceph starts counting from 0, so I'm
guessing it has probably gotten confused. Some history on your
The Ceph kernel module is only for mounting rbd block devices on bare metal
(technically you could do it in a VM, but there is no good reason to do so).
QEMU/KVM has its own rbd implementation that tends to lead the kernel
implementation and should be used with VMs.
The rbd module is always
Quorum means you need at least 51% participating, be it people following
parliamentary procedure or mons in Ceph. With one dead and two up you have
66% participating, which is enough for a quorum. An even number doesn't get you
any additional safety, but it does give you one more thing that can fail
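The arithmetic is just a strict-majority check; a tiny sketch (mine, not from
the thread):

```python
def has_quorum(up, total):
    # Strict majority: strictly more than half of the mons must be up.
    return 2 * up > total

print(has_quorum(2, 3))  # True: 2 of 3 mons is a quorum
print(has_quorum(2, 4))  # False: 2 of 4 is only half, not a majority
```

This is why 3 mons tolerate one failure but 4 mons still only tolerate one.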
I think the bug Sage is talking about was fixed in 3.8.0
On Jun 18, 2013, at 11:38 AM, Guido Winkelmann guido-c...@thisisnotatest.de
wrote:
On Tuesday, 18 June 2013, 07:58:50, Sage Weil wrote:
On Tue, 18 Jun 2013, Guido Winkelmann wrote:
On Thursday, 13 June 2013, 01:58:08, Josh
I wonder if it has something to do with them renaming /usr/bin/kvm, in qemu 1.4
packaged with ubuntu 13.04 it has been replaced with the following:
#! /bin/sh
echo "W: kvm binary is deprecated, please use qemu-system-x86_64 instead" >&2
exec qemu-system-x86_64 -machine accel=kvm:tcg "$@"
On Jun 3,
Does anybody know exactly what ceph repair does? Could you list out briefly
the steps it takes? I unfortunately need to use it for an inconsistent pg.
___
ceph-users mailing list
ceph-users@lists.ceph.com
FWIW, here is what I have for my ceph cluster:
4 x HP DL 180 G6
12 GB RAM
P411 with 512MB Battery Backed Cache
10GigE
4 HP MSA 60's with 12 x 1TB 7.2k SAS and SATA drives (bought at different times
so there is a mix)
2 HP D2600 with 12 x 3TB 7.2k SAS Drives
I'm currently running 79 qemu/kvm VMs
You've learned one of the three computer science facts you need to know about
distributed systems, and I'm glad I could pass something on:
1. Consistent, Available, Partition-tolerant - pick any two
2. To completely guard against k failures where you don't know which one failed
just by looking you need
That is the expected behavior. RBD is emulating a real device; you wouldn't
expect good things to happen if you were to plug the same drive into two
different machines at once (perhaps with some soldering). There is no built-in
mechanism for two machines to access the same block device
2. On kernels before 3.8, btrfs will lose data with sparse files, so DO NOT
USE IT. I've had trouble with btrfs file deletion hanging my OSDs for up to
15 minutes on kernel 3.7 with the btrfs sparse-file patch applied.
On Apr 23, 2013, at 8:20 PM, Steve Hindle mech...@gmail.com wrote:
Hi
If it says 'active+clean' then it is OK, no matter what other status it may
additionally have. Deep scrubbing is just a normal background process that
makes sure your data is consistent, and it shouldn't keep you from accessing it.
Repair should only be done as a last resort; it will discard any
When I had similar trouble, it was btrfs file deletion, and I just had to wait
until it recovered. I promptly switched to XFS. Also, if you are using a
kernel before 3.8.0 with btrfs, you will lose data.
On Apr 19, 2013, at 7:20 PM, Steven Presser spres...@acm.jhu.edu wrote:
Huh. My whole