Dear mailing list,
I've been struggling to find a working configuration of cluster network /
cluster addr, or even public addr.
* Does Ceph interpret multiple values for these in ceph.conf (I wouldn't say
so based on my tests)?
* Shouldn't the public network be your internet-facing range and the cluster
network the private range?
Hi list,
does documentation exist that explains the structure of Ceph log files? Other
than the source code?
Thanks,
Uwe
On Tue, 21 Aug 2018 at 09:31, Nino Bosteels wrote:
>
> * Does ceph interpret multiple values for this in the ceph.conf (I
> wouldn’t say so out of my tests)?
>
> * Shouldn’t public network be your internet facing range and cluster
> network the private range?
>
"Public" doesn't necessarily mean
This issue first started while using Luminous 12.2.5; I upgraded to 12.2.7 and
it's still present. This issue is _not_ present in 12.2.4.
With Ceph 12.2.4, using QEMU/KVM + libvirt, I'm able to attach an rbd image
using the following syntax and populated XML:
virsh attach-device $vm foo.xml
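(For context, not from the original mail: a typical rbd disk definition for
foo.xml looks roughly like the following; the pool/image name, monitor host
and secret UUID below are placeholders.)
cat > foo.xml <<'EOF'
<disk type='network' device='disk'>
  <driver name='qemu' type='raw'/>
  <auth username='libvirt'>
    <secret type='ceph' uuid='00000000-0000-0000-0000-000000000000'/>
  </auth>
  <source protocol='rbd' name='rbd/myimage'>
    <host name='mon1.example.com' port='6789'/>
  </source>
  <target dev='vdb' bus='virtio'/>
</disk>
EOF
virsh attach-device $vm foo.xml --live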
On Mon, Aug 20, 2018 at 5:40 PM Wido den Hollander wrote:
>
>
>
> On 08/20/2018 05:20 PM, David Turner wrote:
> > The general talk about the rados cleanup command is to clean things up
> > after benchmarking. Could this command also be used for deleting an old
> > RGW bucket or an RBD. For
Thanks for all the help. For some bizarre reason I had an empty
host inside the default root. Once I dumped a "fake" OSD into it
everything started working.
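(For anyone who hits the same thing, not from the original mail: an empty host
bucket shows up in the CRUSH tree and can also simply be removed instead of
adding a placeholder OSD; the bucket name below is made up.)
ceph osd tree
ceph osd crush remove empty-host-01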
On Mon, Aug 20, 2018 at 7:36 PM Daznis wrote:
>
> Hello,
>
>
> Medic shows everything fine. Whole cluster is on the latest mimic
> version.
On 08/20/2018 05:36 PM, Ilya Dryomov wrote:
> On Mon, Aug 20, 2018 at 4:52 PM Dietmar Rieder
> wrote:
>>
>> Hi Cephers,
>>
>>
>> I wonder if the cephfs client in RedHat/CentOS 7.5 will be updated to
>> luminous?
>> As far as I see there is some luminous related stuff that was
>> backported,
On Tue, Aug 21, 2018 at 9:12 AM Dietmar Rieder
wrote:
>
> On 08/20/2018 05:36 PM, Ilya Dryomov wrote:
> > On Mon, Aug 20, 2018 at 4:52 PM Dietmar Rieder
> > wrote:
> >>
> >> Hi Cephers,
> >>
> >>
> >> I wonder if the cephfs client in RedHat/CentOS 7.5 will be updated to
> >> luminous?
> >> As
Can this be added to luminous?
https://github.com/ceph/ceph/pull/19358
Fwiw I added a few things to https://pad.ceph.com/p/alert-conditions and will
circulate this mail a bit wider.
Or maybe there is not all that much interest in alerting...
On Mon, Jul 23, 2018 at 06:10:04PM +0200, Jan Fajerski wrote:
Hi community,
the topic of alerting conditions for a ceph
The private (cluster) network is only for OSDs; nothing else communicates on
it. MONs, MGRs, MDSs, RGWs, and clients all communicate on the public network,
and even OSDs need to communicate with the MONs on the public network.
All of that said, it is generally considered useless to split your private
and public subnets.
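(A sketch of the relevant ceph.conf options; the subnets are made up. Both
options accept a comma-separated list if more than one range is needed.)
[global]
public network  = 192.168.10.0/24, 192.168.11.0/24
cluster network = 10.10.0.0/24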
On 08/21/2018 02:22 PM, Ilya Dryomov wrote:
> On Tue, Aug 21, 2018 at 9:12 AM Dietmar Rieder
> wrote:
>>
>> On 08/20/2018 05:36 PM, Ilya Dryomov wrote:
>>> On Mon, Aug 20, 2018 at 4:52 PM Dietmar Rieder
>>> wrote:
Hi Cephers,
I wonder if the cephfs client in
Hi Konstantin,
I could only dream of reading this answer! Thank you so much!!!
Regards,
Cody
On Tue, Aug 21, 2018 at 8:50 AM Konstantin Shalygin wrote:
>
> On 08/20/2018 08:15 PM, Cody wrote:
>
> Hi Konstantin,
>
> Thank you for looking into my question.
>
> I was trying to understand how to
On Mon, Aug 20, 2018 at 9:49 PM Dan van der Ster wrote:
>
> On Mon, Aug 20, 2018 at 5:37 PM Ilya Dryomov wrote:
> >
> > On Mon, Aug 20, 2018 at 4:52 PM Dietmar Rieder
> > wrote:
> > >
> > > Hi Cephers,
> > >
> > >
> > > I wonder if the cephfs client in RedHat/CentOS 7.5 will be updated to
> > >
Norman,
I'm cc-ing this back to ceph-users for others to reply to, or to find in
future.
On 21/08/2018 12:01, Norman Gray wrote:
Willem Jan, hello.
Thanks for your detailed notes on my list question.
On 20 Aug 2018, at 21:32, Willem Jan Withagen wrote:
# zpool create
On 08/20/2018 08:15 PM, Cody wrote:
Hi Konstantin,
Thank you for looking into my question.
I was trying to understand how to set up CRUSH hierarchies and set
rules for different failure domains. I am particularly confused by the
'step take' and 'step choose|chooseleaf' settings for which I
On 08/21/2018 11:44 PM, Andre Goree wrote:
Thank you for your reply.
Interestingly, the same (or similar enough) settings still fail here.
Which version of libvirt are you using? I think maybe this is a
libvirt or QEMU bug, and not specifically Ceph...?
~# qemu-system-x86_64 --version
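(Not from the original mails, but for completeness the libvirt side can be
checked the same way; output omitted here.)
~# virsh version
~# libvirtd --version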
On 20 August 2018 17:22:35 CEST, Mehmet wrote:
>Hello,
Hello me,
>
>AFAIK removing of big RBD-Images would lead ceph to produce blocked
>requests - I dont mean caused by poor disks.
>
>Is this still the case with "Luminous (12.2.4)"?
>
To answer my own question :)
There is no
Hey all,
We've had the testing weekly call going on for several months now
(URL: https://ceph.com/testing/) and some people have found it useful,
but we haven't gotten many new attendees so here's a reminder:
Wednesday 8AM Pacific time, there's a BlueJeans session to discuss
anything about testing
I'm running kernel 4.15.0-32-generic from Ubuntu.
Some kernel upgrade might have triggered the errors...
On 21/08/18 18:24, Paul Emmerich wrote:
You might be hitting http://tracker.ceph.com/issues/22464 in this
configuration (it doesn't always show up as the CRC error described
there)
Which
My cluster suddenly shows many inconsistent PGs,
with log entries like this:
2018-08-21 15:29:39.065613 osd.2 osd.2 10.64.1.1:6801/1310438 146 :
cluster [ERR] 2.61 shard 5: soid 2:864a5b37:::170510e.0004:head
candidate had a read error
2018-08-21 15:31:38.542447 osd.2 osd.2
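(Not part of the original report, but the usual first steps for digging into an
inconsistent PG such as 2.61 from the log above; only run the repair once the
root cause is understood.)
ceph health detail | grep inconsistent
rados list-inconsistent-obj 2.61 --format=json-pretty
ceph pg repair 2.61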
I'm seeing an error from the rbd map command running in ceph-container;
I had initially deployed this cluster as Luminous, but a pull of the
ceph/daemon container unexpectedly upgraded me to Mimic 13.2.1.
[root@nodeA2 ~]# ceph version
ceph version 13.2.1
My ceph-$ID mount point looks like this
-rw-r--r-- 1 root root 438 Aug 15 11:05 activate.monmap
-rw-r--r-- 1 ceph ceph 3 Aug 15 11:05 active
lrwxrwxrwx 1 ceph ceph 58 Aug 15 11:05 block ->
/dev/disk/by-partuuid/bd9f8501-2958-4294-8982-1e5cae80deef
lrwxrwxrwx 1 ceph ceph 58 Aug 15 11:05
The problem with the current OSDs was an ill-advised chmod of the OSD
data store. From what I've pieced together, the chmod was run against a
running OSD.
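(For reference, not from the thread: with the OSD stopped, ownership of its
data directory can usually be restored like this; the OSD id and path are
placeholders.)
systemctl stop ceph-osd@3
chown -R ceph:ceph /var/lib/ceph/osd/ceph-3
systemctl start ceph-osd@3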
On Tue, Aug 21, 2018 at 1:13 PM Paul Emmerich
wrote:
> I would continue with the upgrade of all OSDs this scenario as the old
> ones are
Yes, this is a bummer.
http://lists.ceph.com/pipermail/ceph-users-ceph.com/2017-November/022687.html
Unfortunately we chose to add the Ubuntu distro codename suffixes
like "xenial" to the ceph.com packages long ago, because who knew that
the release names would ever wrap around :)
If we
Hi,
I'm trying to find out why ceph-fuse client(s) are slow. Luminous 12.2.7
Ceph cluster, Mimic 13.2.1 ceph-fuse client. Ubuntu xenial, 4.13.0-38-generic
kernel.
Test case:
25 curl requests directed at a single threaded apache process (apache2
-X).
When the requests are handled by ceph-kernel
On Tue, Aug 21, 2018 at 9:19 PM Jacob DeGlopper wrote:
>
> I'm seeing an error from the rbd map command running in ceph-container;
> I had initially deployed this cluster as Luminous, but a pull of the
> ceph/daemon container unexpectedly upgraded me to Mimic 13.2.1.
>
> [root@nodeA2 ~]# ceph
Are you running tight on memory?
Paul
2018-08-21 20:37 GMT+02:00 Alfredo Daniel Rezinovsky
:
> My cluster suddenly shows many inconsistent PGs.
>
> with this kind of log
>
> 2018-08-21 15:29:39.065613 osd.2 osd.2 10.64.1.1:6801/1310438 146 : cluster
> [ERR] 2.61 shard 5: soid
They have talked about working on allowing people to be able to do this,
but for now there is nothing you can do to remove the block.db or block.wal
from a bluestore OSD. However, there is an option to completely replace the
SSD, not remove it. There are a few ML threads discussing how to utilize
You should be able to create issues now; we had a misconfiguration in
the tracker following the recent spam attack.
-Greg
On Tue, Aug 21, 2018 at 3:07 AM, Stefan Priebe - Profihost AG
wrote:
>
> On 21.08.2018 at 12:03, Stefan Priebe - Profihost AG wrote:
>>
>> On 21.08.2018 at 11:56, Dan
Hi everyone,
I read an earlier thread [1] that gave a good explanation of the 'step
choose|chooseleaf' option. Could someone further help me to understand
the 'firstn|indep' part? Also, what is the relationship between 'step
take' and 'step choose|chooseleaf' when it comes to defining a failure
domain?
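(Not part of the original question, but as a reference point: 'step take'
selects the subtree of the CRUSH hierarchy to start from, and the following
'choose'/'chooseleaf' steps descend it; 'firstn' is the mode normally used for
replicated pools, 'indep' for erasure-coded ones. A sketch of a rule that
places each replica in a distinct rack under the 'default' root:)
rule replicated_racks {
    id 1
    type replicated
    min_size 1
    max_size 10
    step take default
    step chooseleaf firstn 0 type rack
    step emit
}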
I don't think so. Sorry. :(
On Tue, Aug 21, 2018 at 12:06 AM Uwe Sauter wrote:
> Hi list,
>
> does documentation exist that explains the structure of Ceph log files?
> Other than the source code?
>
> Thanks,
>
> Uwe
Ceph does not support downgrading OSDs. When you removed the single OSD,
it was probably trying to move data onto the other OSDs in the node with
Infernalis OSDs. I would recommend stopping every OSD in that node and
marking them out so the cluster will rebalance without them. Assuming your
After looking into this further, is it possible that adjusting CRUSH weight of
the OSDs while running mis-matched versions of the ceph-osd daemon across the
cluster can cause this issue? Under certain circumstances in our cluster, this
may happen automatically on the backend. I can’t
Hello David,
Thank you and I'm terribly sorry; I was unaware I was starting new threads.
Off the top of my head I'd say "yes, it'll fit", but obviously I'll make sure
first.
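(Not from the original mail: a quick way to sanity-check the free capacity
before letting the cluster rebalance.)
ceph df
ceph osd df tree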
Regards,
Kees
On 21-08-18 16:34, David Turner wrote:
Ceph does not support downgrading OSDs. When you removed the
I would continue with the upgrade of all OSDs in this scenario, as the old
ones are crashing, not the new one.
Maybe with all the flags set (pause, norecover, ...), for example:
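(A sketch of those flags, not from the original mail; set them before the
remaining upgrades and unset them afterwards.)
ceph osd set pause
ceph osd set norecover
ceph osd set nobackfill
ceph osd set norebalance
# upgrade / restart the remaining OSDs here
ceph osd unset norebalance
ceph osd unset nobackfill
ceph osd unset norecover
ceph osd unset pause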
Paul
2018-08-21 19:08 GMT+02:00 Kees Meijs :
> Hello David,
>
> Thank you and I'm terribly sorry; I was unaware I was starting new
Can you collect any librados / librbd debug logs and provide them via
pastebin? Just add / tweak the following in your "/etc/ceph/ceph.conf"
file's "[client]" section and re-run to gather the logs.
[client]
log file = /path/to/a/log/file
debug ms = 1
debug monc = 20
debug objecter = 20
debug