Quoting Stefan Kooman (ste...@bit.nl):
> Hi,
>
> The command "ceph daemon mds.$mds perf dump" no longer returns the
> collections with MDS-specific data. In Mimic I get the following
> MDS-specific collections:
>
> - mds
> - mds_cache
> - mds_log
> - mds_mem
> - mds_server
> - mds_sessions
>
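For reference, a couple of admin-socket queries that show what the daemon actually exposes; a minimal sketch, assuming the admin socket on the MDS host is reachable:

ceph daemon mds.$mds perf schema          # list every collection/counter the daemon registers
ceph daemon mds.$mds perf dump mds_cache  # dump a single collection by name

Comparing the schema output between Mimic and Nautilus should show whether the collections are gone or merely renamed.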
On 1/28/20 6:58 PM, Anthony D'Atri wrote:
>
>
>> I did this once. This cluster was running IPv6-only (still is) and thus
>> I had the flexibility of new IPs.
>
> Dumb question — how was IPv6 a factor in that flexibility? Was it just that
> you had unused addresses within an existing block?
I have a server with 12 OSDs on it. Five of them are unable to start, and give
the following error message in their logs:
2020-01-28 13:00:41.760 7f61fb490c80 0 monclient: wait_auth_rotating timed out
after 30
2020-01-28 13:00:41.760 7f61fb490c80 -1 osd.178 411005 unable to obtain
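One hedged first check, since wait_auth_rotating timeouts are commonly caused by clock skew between the OSD host and the monitors (cephx rotating keys are time-limited):

ceph time-sync-status               # monitor-side view of time synchronization
ceph health detail | grep -i skew   # any active clock-skew warnings?
timedatectl status                  # on the affected OSD host: is NTP in sync?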
Hi,
I've run into the same issue while testing:
ceph version 14.2.6 (f0aa067ac7a02ee46ea48aa26c6e298b5ea272e9) nautilus
(stable)
debian bullseye
Ceph was installed using ceph-ansible on a VM from the repo
http://download.ceph.com/debian-nautilus
The output of `sudo sh -c
On Tue, Jan 28, 2020 at 6:55 PM CASS Philip wrote:
> Hi Greg,
>
>
> Thanks – if I understand
> https://ceph.io/geen-categorie/get-omap-keyvalue-size/ correctly, “rados
> -p cephfs.fs1-replicated.data ls” should show any such objects? It’s also
> returning blank (and correctly returns a lot for
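For reference, a minimal way to poke at a pool's objects and omap data directly (the object name below is a placeholder):

rados -p cephfs.fs1-replicated.data ls                      # list objects in the pool
rados -p cephfs.fs1-replicated.data listomapkeys <object>   # omap keys of one object
rados -p cephfs.fs1-replicated.data listomapvals <object>   # keys plus values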
Hi,
I had a problem with one application (Seafile) which uses a Ceph backend with
librados.
The corresponding pools are defined with size=3 and each object copy is on a
different host.
The cluster health is OK: all the monitors see all the hosts.
Now, a network problem just happened between
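As an aside, the pool layout described above can be verified on a live cluster with something like this (pool and object names are placeholders):

ceph osd pool get <pool> size       # expect: size: 3
ceph osd pool get <pool> min_size   # how many copies must be up for I/O
ceph osd map <pool> <object>        # the acting set of OSDs holding one object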
I did this, but with the benefit of taking the network with me, just a forklift
from one datacenter to the next.
Shut down the clients, then OSDs, then MDS/MON/MGRs, then switches.
Reverse order back up,
> On Jan 28, 2020, at 4:19 AM, Marc Roos wrote:
>
>
> Say one is forced to move a
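A sketch of the flag set typically applied before such a full stop, assuming a planned, whole-cluster shutdown; unset everything in reverse once the cluster is back up:

ceph osd set noout         # don't mark stopped OSDs out
ceph osd set norebalance
ceph osd set norecover
ceph osd set nobackfill
ceph osd set pause         # stop client I/O cluster-wide
# shut down: clients, then OSDs, then MDS/MON/MGRs, then switches
# power up in reverse order at the new site, then:
ceph osd unset pause
ceph osd unset nobackfill
ceph osd unset norecover
ceph osd unset norebalance
ceph osd unset noout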
On Tue, Jan 28, 2020 at 4:26 PM CASS Philip wrote:
> I have a query about https://docs.ceph.com/docs/master/cephfs/createfs/:
>
>
>
> “The data pool used to create the file system is the “default” data pool
> and the location for storing all inode backtrace information, used for hard
> link
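In practice that usually translates into a small replicated default data pool, with additional data pools attached afterwards. A sketch with hypothetical pool names:

ceph osd pool create fs1-meta 64
ceph osd pool create fs1-default-data 64
ceph fs new fs1 fs1-meta fs1-default-data
# an EC pool (created beforehand with an erasure profile) can be added as a secondary data pool:
ceph osd pool set fs1-ec-data allow_ec_overwrites true
ceph fs add_data_pool fs1 fs1-ec-data
# and a directory pointed at it via the file layout:
setfattr -n ceph.dir.layout.pool -v fs1-ec-data /mnt/fs1/bulk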
Hi,
we are planning to use EC.
I have 3 questions about it:
1 / what is the advantage of having more machines than (k + m)? We are
planning to have 11 nodes and use k=8 and m=3. Does having more nodes than
k+m improve performance? If so, by how much? What ratio? (See the sketch
after this message.)
2 / what behavior should we
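On question 1: nodes beyond k+m mainly buy recovery headroom rather than speed. With exactly 11 hosts and k=8, m=3 (failure domain = host), every host carries one shard of every PG, so a failed host leaves nowhere to rebuild its shards until it comes back; a 12th host gives Ceph somewhere to restore full redundancy. A sketch of the profile in question (names hypothetical):

ceph osd erasure-code-profile set ec-8-3 k=8 m=3 crush-failure-domain=host
ceph osd pool create ecpool 128 128 erasure ec-8-3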
Hi.
Before I descend into what happened and why it happened: I'm talking about a
test cluster, so I don't really care about the data in this case.
We've recently started upgrading from Luminous to Nautilus, and for us that
means we're retiring ceph-disk in favour of ceph-volume with lvm and
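For context, the two common paths when retiring ceph-disk; a sketch, with hypothetical device names:

# take over an existing ceph-disk OSD without recreating it:
ceph-volume simple scan /dev/sdX1
ceph-volume simple activate --all
# or rebuild the OSD under lvm from scratch:
ceph-volume lvm zap /dev/sdX --destroy
ceph-volume lvm create --data /dev/sdX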
Jan,
Unfortunately I'm under immense pressure right now to get some form of
Ceph into production, so it's going to be Luminous for now, or maybe a
live upgrade to Nautilus without recreating the OSDs (if that's possible).
The good news is that in the next couple months I expect to add more
On Tue, 28 Jan 2020, dhils...@performair.com wrote:
> All;
>
> I haven't had a single email come in from the ceph-users list at ceph.io
> since 01/22/2020.
>
> Is there just that little traffic right now?
I'm seeing 10-20 messages per day. Confirm your registration and/or check
your filters?
All;
I haven't had a single email come in from the ceph-users list at ceph.io since
01/22/2020.
Is there just that little traffic right now?
Thank you,
Dominic L. Hilsbos, MBA
Director - Information Technology
Perform Air International Inc.
dhils...@performair.com
www.PerformAir.com
And us too, exactly as below. One at a time, then wait for things to
recover before moving the next host. We didn't have any issues with this
approach either.
Regards,
Simon.
On 28/01/2020 13:03, Tobias Urdin wrote:
We did this as well, pretty much the same as Wido.
We had a fiber connection
We did this as well, pretty much the same as Wido.
We had a fiber connection with good latency between the locations.
We installed a virtual monitor in the destination datacenter to always
keep quorum, then we
simply moved one node at a time after setting noout.
When we took a node up on the
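The per-host loop sketched out; the key point is waiting for clean PGs between hosts:

ceph osd set noout      # survivors won't be marked out while a host travels
# for each host: power down, move, power up, then wait until
# `ceph -s` shows all PGs active+clean before touching the next host
ceph osd unset noout    # once every host has been moved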
Hello Igor,
I updated all servers to the latest 4.19.97 kernel but this doesn't fix the
situation.
I can provide you with all those logs - any idea where to upload / how
to send them to you?
Greets,
Stefan
On 20.01.20 at 13:12, Igor Fedotov wrote:
> Hi Stefan,
>
> these lines are result of
Say one is forced to move a production cluster (4 nodes) to a different
datacenter. What options do I have, other than just turning it off at
the old location and on at the new location?
Maybe buying some extra nodes and moving one node at a time?
Hi,
In my experience it is also wise to make sure the lvtags reflect the new
vg/lv names!
Kaspar
On 28 January 2020 at 10:38, Wido den Hollander wrote:
Hi,
Keep in mind that /var/lib/ceph/osd/ is a tmpfs which is created by
'ceph-bluestore-tool' on OSD startup.
All the data in there
Hi,
Keep in mind that /var/lib/ceph/osd/ is a tmpfs which is created by
'ceph-bluestore-tool' on OSD startup.
All the data in there comes from the lvtags set on the LVs.
So I *think* you can just rename the Volume Group and rescan with
ceph-volume.
Wido
On 1/28/20 10:25 AM, Stolte, Felix
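A sketch of what that rename could look like (VG/LV names are hypothetical; as Kaspar notes above, the ceph.* lvtags embed full device paths, so they must be updated to match):

vgrename ceph-old ceph-new
lvs -o lv_name,vg_name,lv_tags   # inspect the current ceph.* tags
lvchange --deltag "ceph.block_device=/dev/ceph-old/osd-block-0" \
         --addtag "ceph.block_device=/dev/ceph-new/osd-block-0" \
         ceph-new/osd-block-0
ceph-volume lvm activate --all   # rebuilds the tmpfs under /var/lib/ceph/osd/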
Hi Eric,
With regards to "From the output of “ceph osd pool ls detail” you can see
min_size=4, the crush rule says min_size=3, however the pool does NOT
survive 2 hosts failing. Am I missing something?"
For your EC profile you need to set the pool min_size=3 to still read/write
to the pool with
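Assuming the profile behind this is k=3, m=2 (so size=5 and a default min_size of k+1=4), dropping min_size to k keeps the pool serving I/O with two hosts down, at the cost of zero redundancy margin while degraded:

ceph osd pool set <ec_pool> min_size 3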
Yes, data that is not synced is not guaranteed to be written to disk; this
is consistent with POSIX semantics.
Paul
--
Paul Emmerich
Looking for help with your Ceph cluster? Contact us at https://croit.io
croit GmbH
Freseniusstr. 31h
81247 München
www.croit.io
Tel: +49 89 1896585 90
On Mon,
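A minimal illustration of Paul's point, not from the thread: the second command does not return until the data has been flushed to disk, while the first may complete with the data still sitting in the page cache:

dd if=/dev/zero of=testfile bs=4M count=1
dd if=/dev/zero of=testfile bs=4M count=1 conv=fsync   # fsync before dd exits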
On Mon, Jan 27, 2020 at 03:23:55PM -0500, Dave Hall wrote:
>All,
>
>I've just spent a significant amount of time unsuccessfully chasing
>the _read_fsid unparsable uuid error on Debian 10 / Nautilus 14.2.6.
>Since this is a brand new cluster, last night I gave up and moved back
>to Debian 9 /