Luminous 12.1.0 (RC)
I replaced two OSD drives (the old ones were still good, just too small), using:
ceph osd out osd.12
ceph osd crush remove osd.12
ceph auth del osd.12
systemctl stop ceph-osd@12
ceph osd rm osd.12
I later found that I should also have unmounted it from /var/lib/ceph/osd/ceph-12
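For reference, a minimal sketch of that unmount step, assuming the default
ceph-disk mount path:
umount /var/lib/ceph/osd/ceph-12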
On Fri, Jul 21, 2017 at 10:02:06PM -0300, Webert de Souza Lima wrote:
> Thank you for all your efforts, Patrick.
Patrick did fantastic work all these years, and I'd like to use this
opportunity to say that it's an honor to serve the Ceph community. :)
> Congratulations and good luck, Leo :)
Thanks Webert!
Thank you for all your efforts, Patrick.
Congratulations and good luck, Leo :)
Regards,
Webert Lima
DevOps Engineer at MAV Tecnologia
*Belo Horizonte - Brasil*
On Thu, Jul 20, 2017 at 4:21 PM, Patrick McGarry
wrote:
> Hey cephers,
>
> As most of you know, my last day as the Ceph
On Sat, Jul 22, 2017 at 9:38 AM, Blair Bethwaite
wrote:
> Hi Brad,
>
> On 22 July 2017 at 09:04, Brad Hubbard wrote:
>> Could you share what kernel/distro you are running and also please test
>> whether
>> the error message can be triggered by
Hi Brad,
On 22 July 2017 at 09:04, Brad Hubbard wrote:
> Could you share what kernel/distro you are running and also please test
> whether
> the error message can be triggered by running the "blkid" command?
I'm seeing it on RHEL7.3 (3.10.0-514.2.2.el7.x86_64). See Red Hat
On Sat, Jul 22, 2017 at 5:57 AM, Marcus Furlong wrote:
> On 20 July 2017 at 18:48, Brad Hubbard wrote:
>> On Fri, Jul 21, 2017 at 4:23 AM, Marcus Furlong wrote:
>>> On 20 July 2017 at 12:49, Matthew Vernon wrote:
Once again my google-fu has failed me and I can't find the 'correct' way to
map an rbd using rbd-nbd on boot. Everything takes me to rbdmap, which
isn't using rbd-nbd.
If someone could just point me in the right direction, I'd appreciate it.
Thanks!
Dan
On 20 July 2017 at 18:48, Brad Hubbard wrote:
> On Fri, Jul 21, 2017 at 4:23 AM, Marcus Furlong wrote:
>> On 20 July 2017 at 12:49, Matthew Vernon wrote:
>>> Hi,
>>>
>>> On 18/07/17 05:08, Marcus Furlong wrote:
On 22 March
I am running 12.1.1, and updated to it on the 18th. So I guess this is
either something else or it was not in the rpms.
-----Original Message-----
From: Gregory Farnum [mailto:gfar...@redhat.com]
Sent: Friday, 21 July 2017 20:21
To: Marc Roos; ceph-users
Subject: Re: [ceph-users] Ceph
On 21-7-2017 12:45, Fulvio Galeazzi wrote:
> Hallo David, all,
> sorry for hijacking the thread but I am seeing the same issue,
> although on 10.2.7/10.2.9...
Then this is a problem that had nothing to do with my changes to
ceph-disk, since they only went into HEAD and thus end up in
This was broken in some of the luminous RCs but fixed in master (and I
believe the very latest RC release). See
https://github.com/ceph/ceph/pull/16249
On Fri, Jul 21, 2017 at 7:11 AM Marc Roos wrote:
>
>
> I would like to work on some grafana dashboards, but since the
Ceph does not allow more than one public network. If you have multiple
subnets that need to access Ceph data, then you should be using a router
and firewall rules to route the traffic between them.
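For illustration, a minimal ceph.conf sketch with a single public network
(the 10GE subnet from the setup in this thread is assumed); clients on the
other subnet would then reach the cluster via routing:
[global]
public_network = 10.25.0.0/16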
On Fri, Jul 21, 2017, 12:48 PM Yang X wrote:
> Our existing setup is as
Hallo again, replying to my own message to provide some more info, and
ask one more question.
Not sure I mentioned, but I am on CentOS 7.3.
I tried to insert a sleep in ExecStartPre in
/usr/lib/systemd/system/ceph-osd@.service but apparently all ceph-osd
are started (and retried) at the
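For context, the change described above would look roughly like an extra line
in the [Service] section of ceph-osd@.service (the sleep duration is just an
example):
ExecStartPre=/usr/bin/sleep 30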
Our existing setup is as follows and we won't be able to change the network
configuration due to security limitations:
client 1: rbd devices on 153.64.X.X network (1GE network)
client 2: rbd devices on 10.25.X.X network (10GE fast switch)
single monitor and MDS server multihomed on both
On Fri, Jul 21, 2017 at 4:06 PM, Дмитрий Глушенок wrote:
> All three mons has value "simple".
OK, so http://tracker.ceph.com/issues/17664 is unrelated. Open a new
kernel client ticket with all the ceph-fuse vs kernel client info and
as many log excerpts as possible. If you've
On 07/20/2017 04:48 PM, Ben Hines wrote:
Still having this RGWLC crash once a day or so. I do plan to update to
Luminous as soon as that is final, but it's possible this issue will
still occur, so I was hoping one of the devs could take a look at it.
My original suspicion was that it happens
I would like to work on some Grafana dashboards, but since the upgrade
to the Luminous RC something seems to have changed in the JSON output, and
(a lot of) metrics are no longer stored in InfluxDB.
Does anyone have an idea when collectd-ceph in the EPEL repo will be
updated? Or is there some
Should we report these?
[840094.519612] ceph[12010]: segfault at 8 ip 7f194fc8b4c3 sp
7f19491b6030 error 4 in libceph-common.so.0[7f194f9fb000+7e9000]
CentOS Linux release 7.3.1611 (Core)
Linux 3.10.0-514.26.2.el7.x86_64 #1 SMP Tue Jul 4 15:04:05 UTC 2017
x86_64 x86_64 x86_64
On Thu, Jul 20, 2017 at 6:35 PM, Дмитрий Глушенок wrote:
> Hi Ilya,
>
> While trying to reproduce the issue I've found that:
> - it is relatively easy to reproduce 5-6 minute hangs just by killing the
> active mds process (triggering failover) while writing a lot of data.
>
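For reference, a sketch of that reproduction, with placeholder paths and names:
# on a CephFS client, generate sustained writes
dd if=/dev/zero of=/mnt/cephfs/bigfile bs=1M count=50000 &
# on the MDS node, stop the active MDS to trigger a failover
systemctl stop ceph-mds@$(hostname -s)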
>>Is there anything changed from Hammer to Jewel that might be affecting the
>>qemu-img convert performance?
Maybe the object map for exclusive lock? (I think it could be a little bit
slower when objects are first created.)
You could test it: create the target rbd volume, disable exclusive
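A sketch of that test, with placeholder pool/image names (fast-diff and
object-map have to be disabled before exclusive-lock can be):
rbd create --size 100G rbd/target
rbd feature disable rbd/target fast-diff
rbd feature disable rbd/target object-map
rbd feature disable rbd/target exclusive-lock
qemu-img convert -p -n -f qcow2 -O raw source.qcow2 rbd:rbd/target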
Thanks Alexandre!
We were using Ceph Hammer before and we never had these performance
issues with qemu-img convert.
Is there anything changed from Hammer to Jewel that might be affecting the
qemu-img convert performance?
On Fri, Jul 21, 2017 at 2:24 PM, Alexandre DERUMIER
Nothing is built-in for this yet, but it's on the roadmap for a future
release [1].
[1] http://pad.ceph.com/p/ceph-top
On Thu, Jul 20, 2017 at 9:52 AM, Stéphane Klein
wrote:
> Hi,
>
> is it possible to get IO stats (read / write bandwidth) by client or image?
>
> I
Hallo David, all,
sorry for hijacking the thread but I am seeing the same issue,
although on 10.2.7/10.2.9...
Note that I am using disks taken from a SAN, so the GUIDs in my case are
those relevant to MPATH.
As per other messages in this thread, I modified:
-
On Thu, Jul 20, 2017 at 9:19 PM, Andras Pataki
wrote:
> We are having some difficulties with cephfs access to the same file from
> multiple nodes concurrently. After debugging some large-ish applications
> with noticeable performance problems using CephFS (with the
It's already in qemu 2.9
http://git.qemu.org/?p=qemu.git;a=commit;h=2d9187bc65727d9dd63e2c410b5500add3db0b0d
"
This patch introduces 2 new cmdline parameters. The -m parameter to specify
the number of coroutines running in parallel (defaults to 8). And the -W
parameter to
allow qemu-img to
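For example (source and target names are placeholders):
qemu-img convert -p -m 16 -W -f qcow2 -O raw source.qcow2 rbd:rbd/target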
Hi,
there is an RFC here:
"[RFC] qemu-img: make convert async"
https://patchwork.kernel.org/patch/9552415/
maybe it could help
- Original Message -
From: "Jason Dillaman"
To: "Mahesh Jambhulkar"
Cc: "ceph-users"
- Original Message -
> From: "Pritha Srivastava"
> To: "Graham Allan"
> Cc: "Adam C. Emerson" , "Ceph Users"
>
> Sent: Friday, July 21, 2017 10:27:33 AM
> Subject: Re: [ceph-users] Bucket policies in
Thanks for the information Jason!
We have a few concerns:
1. The following is our Ceph configuration. Is there something that needs to be
changed here?
#cat /etc/ceph/ceph.conf
[global]
fsid = 0e1bd4fe-4e2d-4e30-8bc5-cb94ecea43f0
mon_initial_members = cephlarge
mon_host = 10.0.0.188
Dear all,
I wonder how to install Ceph on ARM processors.
When I executed "$ ceph-deploy install [hosts]" on x86_64, ceph-deploy
installed Ceph v10.2.9.
However, when it is executed on ARM64, the installation fails.
$ ceph-deploy install ubuntu
(...)
[ubuntu][DEBUG ] add deb repo to