Recently upgraded a Ceph cluster from Luminous to Nautilus 14.2.1, no
issues. One of the reasons for doing so was to take advantage of some of
the new iSCSI updates that were added in Nautilus. I installed CentOS 7.6
and did all the basic stuff to get the server online. I then tried to use
the
On 18/06/2019 08.12, Eitan Mosenkis wrote:
Hi.
I'm running a small single-host Ceph cluster on Proxmox (as my home
NAS). I want to encrypt my OSDs but I don't want the host's SSD to be a
single point of failure. What Ceph config/keyring/secret keys do I need
to make safe [encrypted] copies
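For example, I assume (a sketch - the key path is my guess at what
ceph-volume stores in the mon config-key store for dmcrypt OSDs; the fsid
is a placeholder) I'd need to save something like:
# ceph config-key ls | grep dm-crypt
# ceph config-key get dm-crypt/osd/<osd-fsid>/luks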
Yes, if all your packages are up to date then you just need to restart
each VM's qemu process, which can be done by live-migrating the VM (to
another host, or even to the same host, though I don't know if OpenStack
supports that since it's a bit of a corner case) to avoid actually
restarting
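With the nova CLI that's something like (a sketch; the server id is a
placeholder):
# nova live-migration <server-id>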
It seems you are using customized CRUSH rules.
There is an 'osd_crush_update_on_start' option that automatically updates
an OSD's CRUSH location on startup.
You might need to turn it off, but keep in mind that this will also prevent
any newly created OSDs from automatically joining the host...
(or
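For reference, turning it off would look like this (a sketch; the runtime
form needs Mimic or later):
[osd]
osd crush update on start = false
or:
# ceph config set osd osd_crush_update_on_start false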
On Tue, Jun 18, 2019 at 2:26 AM ?? ?? wrote:
>
> Thank you very much! Can you point out where the code for revoke is?
The caps code is all over the code base, as it's fundamental to the
filesystem's workings. You can get some more general background from my
recent Cephalocon talk "What are “caps”?
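If you want a concrete starting point, I believe much of the MDS-side
issue/revoke logic lives in src/mds/Locker.cc, with the client-side
handling in src/client/Client.cc - take that as a pointer, not a map.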
Yes, for now I'd personally prefer 60-64 GB per DB volume unless one is
unable to allocate 300+ GB.
That is about twice what your DBs keep right now (which is pretty much in
line with the RocksDB level-3 max size).
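(Rough arithmetic, assuming BlueStore's default RocksDB settings of
max_bytes_for_level_base = 256 MB and a level multiplier of 10:
L1 ≈ 0.25 GB, L2 ≈ 2.5 GB, L3 ≈ 25 GB, L4 ≈ 250 GB. So a DB volume is only
fully used at ~30 GB (through L3) or ~300 GB (through L4); capacity in
between mostly sits idle.)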
Thanks,
Igor
On 6/18/2019 9:30 PM, Brett Chancellor wrote:
Thanks Igor. I'm fine turning the warnings off, but it's curious that only
this cluster is showing the alerts. Is there any value in rebuilding the
OSDs with smaller SSD metadata volumes? Say 60 GB or 30 GB?
-Brett
On Tue, Jun 18, 2019 at 1:55 PM Igor Fedotov wrote:
Hi Brett,
this issue has been with you since long before the upgrade to 14.2.1. The
upgrade just made the corresponding alert visible.
You can turn the alert off by setting
bluestore_warn_on_bluefs_spillover=false.
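On Nautilus this can also be done at runtime, e.g. (a sketch):
# ceph config set osd bluestore_warn_on_bluefs_spillover false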
But generally this warning indicates a DB data layout inefficiency - some data
is kept at
Does anybody have a fix for "BlueFS spillover detected"? This started
happening 2 days after an upgrade to 14.2.1 and has increased from 3 OSDs
to 118 in the last 4 days. I read you could fix it by rebuilding the OSDs,
but rebuilding the 264 OSDs on this cluster would take months of rebalancing.
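(For the record, this is how I'm counting the affected OSDs - health
detail lists each one:)
# ceph health detail | grep -i spillover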
My tree is supposed to look like the first map below, but it keeps changing
to the second map further below. Notice the drives moving from chassis
osd3-shelf1 to host ceph-osd3. Does anyone know why this may happen?
I wrote a script to monitor for this and to place the OSDs back where they
belong if they
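(For reference, the script essentially runs something like this per
misplaced OSD - a sketch; the id and weight are placeholders:)
# ceph osd crush create-or-move osd.<id> <weight> chassis=osd3-shelf1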
I think I found where the wrong fsid is located (in the OSD's osdmap), but
there is no way to change the fsid...
I tried ceph-objectstore-tool --op set-osdmap with an osdmap taken from the
monitor (ceph osd getmap), but no luck: still the old fsid (I can't find a
way to set the current epoch on the osdmap).
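Concretely, what I tried was roughly this (the OSD id and paths are
examples):
# ceph osd getmap -o /tmp/osdmap
# systemctl stop ceph-osd@0
# ceph-objectstore-tool --data-path /var/lib/ceph/osd/ceph-0 --op set-osdmap --file /tmp/osdmap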
Can someone give a hint?
Hi Paul,
All the underlying compute nodes' Ceph packages were upgraded already, but
the instances were not. So are you saying that live-migrate will get them
upgraded?
Thanks,
Pardhiv Karri
On Tue, Jun 18, 2019 at 7:34 AM Paul Emmerich
wrote:
> You can live-migrate VMs to a server with
On 6/18/19 3:39 PM, Paul Emmerich wrote:
> we maintain (unofficial) Nautilus builds for Buster here:
> https://mirror.croit.io/debian-nautilus/
The repository doesn't contain the source packages. Just out of
curiosity, I'd like to see what you might have changed, apart from just
(re)building the packages...
On 6/18/19 3:11 PM, Tobias Gall wrote:
> I would like to switch to debian buster and test the upgrade from
> luminous but there are currently no ceph releases/builds for buster.
shameless plug:
we're re-building Ceph packages in the repository we run for our
university (and a few other
Things are not evolving. If I find an alternative way to add new OSD nodes
in the future, I'll note it here.
I'm abandoning ceph-deploy since it seems to be buggy.
Regards,
From: ceph-users On behalf of CUZA Frédéric
Sent: 18 June 2019 10:40
To: Brian Topping
Cc: ceph-users@lists.ceph.com
Hi,
we maintain (unofficial) Nautilus builds for Buster here:
https://mirror.croit.io/debian-nautilus/ (signed with
https://mirror.croit.io/keys/release.asc )
Paul
--
Paul Emmerich
Looking for help with your Ceph cluster? Contact us at https://croit.io
croit GmbH
Freseniusstr. 31h
81247 München
All, sorry for the slow response, but thank you all for the quick and very
helpful replies!
As you can see below, your various advice has fixed the problem - much
appreciated and noted for future reference.
Have a great week!
root@swarmctl:~# ceph mon enable-msgr2
root@swarmctl:~# ceph -s
Hello,
I would like to switch to Debian Buster and test the upgrade from
Luminous, but there are currently no Ceph releases/builds for Buster.
Debian Buster will be released next month [1].
Please resume the Debian builds, as stated in the release notes for Mimic [2].
Will there be a luminous
You can live-migrate VMs to a server with upgraded Ceph libraries; anything
that restarts the underlying qemu process will be enough
Paul
--
Paul Emmerich
Looking for help with your Ceph cluster? Contact us at https://croit.io
croit GmbH
Freseniusstr. 31h
81247 München
www.croit.io
Tel: +49
Thanks Eugen for answering.
Yes, it came from another cluster; I'm trying to move all OSDs from one
cluster to another (1 to 1), so I would like to avoid wiping the disk.
It's indeed a ceph-volume OSD; I checked the LVM label and it's correct:
# lvs --noheadings --readonly --separator=";" -o lv_tags
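If I read the ceph-volume tag format correctly, the relevant tags it prints
look roughly like this (values elided):
ceph.cluster_fsid=<fsid>,ceph.osd_id=<id>,ceph.osd_fsid=<fsid>,ceph.type=block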
Hi,
this OSD must have been part of a previous cluster, I assume.
I would remove it from crush if it's still there (check just to make
sure), wipe the disk, remove any traces like logical volumes (if it
was a ceph-volume lvm OSD) and if possible, reboot the node.
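A sketch of that cleanup (the OSD id and device are placeholders):
# ceph osd crush remove osd.<id>
# ceph-volume lvm zap --destroy /dev/sdX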
Regards,
Eugen
Quoting
Hi all,
Just want to (double) check something – we're in the process of Luminous ->
Mimic upgrades for all of our clusters – particularly this section regarding
the MDS steps:
• Confirm that only one MDS is online and is rank 0 for your FS:
# ceph status
• Upgrade the last
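For those steps we're planning roughly the following (a sketch; the fs name
is ours):
# ceph fs set <fs_name> max_mds 1
# ceph status | grep mds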
Hello
I have an OSD which is stuck in the booting state.
I found that the daemon's cluster_fsid is not the same as the actual
cluster fsid, which should explain why it does not join the cluster:
# ceph daemon osd.0 status
{
"cluster_fsid": "bb55e196-eedd-478d-99b6-1aad00b95f2a",
"osd_fsid":
Thank you very much! Can you point out where the code for revoke is?
On Tue, Jun 18, 2019 at 4:25 PM ?? ?? wrote:
>
> There are 2 clients, A and B. There is a directory /a/b/c/d/.
> Client A creates a file /a/b/c/d/a.txt.
> Client B moves the folder d to /a/.
> Now the directory tree looks like this: /a/b/c/ and /a/d/.
> /a/b/c/d does not exist any more.
1. All nodes are on 12.2.12 (Luminous stable).
2. Forward and reverse DNS/SSH are working fine.
3. Things go wrong at the install step: Ceph is correctly installed, but the
node is not present in the cluster. It seems that this is the step that fails.
Regards,
From: Brian
There are 2 clients, A and B. There is a directory /a/b/c/d/.
Client A creates a file /a/b/c/d/a.txt.
Client B moves the folder d to /a/.
Now the directory tree looks like this: /a/b/c/ and /a/d/.
/a/b/c/d does not exist any more.
Client A tries to create a file /a/b/c/d/b.txt.
The result is, when use