Add "unmanaged: true" to the specification. After that run
ceph orch apply -i osd.yml
Or you could just remove the specification with "ceph orch rm NAME".
The OSD service will be removed but the OSD will remain.
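For reference, a minimal sketch of such a specification with the unmanaged flag set (service id and placement are placeholders, not from the original mail):

```yaml
service_type: osd
service_id: default          # placeholder service id
placement:
  host_pattern: '*'
data_devices:
  all: true
unmanaged: true              # orchestrator stops creating new OSDs for this spec
```

With `unmanaged: true` applied, existing OSDs keep running; cephadm just no longer acts on the specification.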
Regards
--
Robert Sander
Heinlein Consulting GmbH
Schwedter Str. 8/9b, 10119
orchestrator?
Which version?
Have you tried
ceph orch daemon add osd
host1:data_devices=/dev/sda,/dev/sdb,db_devices=/dev/nvme0
as shown on https://docs.ceph.com/en/quincy/cephadm/services/osd/ ?
the
device class and assign it to the pool charlotte.rgw.buckets.data.
After that the autoscaler will be able to work again.
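For context, a replicated CRUSH rule pinned to a device class looks roughly like this in decompiled form (rule name and id are placeholders). It can be created with `ceph osd crush rule create-replicated replicated-ssd default host ssd` and assigned with `ceph osd pool set charlotte.rgw.buckets.data crush_rule replicated-ssd`:

```
rule replicated-ssd {
    id 1
    type replicated
    step take default class ssd
    step chooseleaf firstn 0 type host
    step emit
}
```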
On 27.03.23 16:34, Pat Vaughan wrote:
Yes, all the OSDs are using the SSD device class.
Do you have multiple CRUSH rules by chance?
Are all pools using the same CRUSH rule?
and therefore
multiple device classes.
On 14.03.23 15:22, b...@nocloud.ch wrote:
ah.. ok, it was not clear to me that skipping minor version when doing a major
upgrade was supported.
You can even skip one major version when doing an upgrade.
On 14.03.23 14:21, bbk wrote:
# ceph orch upgrade start --ceph-version 17.2.0
I would never recommend updating to a .0 release.
Why not go directly to the latest 17.2.5?
se CLI tools are available.
cephadm
as this is the recommended installation method.
You could just label one of the cluster hosts with _admin:
ceph orch host label add hostname _admin
https://docs.ceph.com/en/quincy/cephadm/host-management/#special-host-labels
https://docs.ceph.com/en/quincy/cephadm/operations/#client-keyrings-and-configs
is a really bad idea outside of a disaster
scenario where the other two copies are completely lost to a fire.
On 28.02.23 16:31, Marc wrote:
Anyone know of a s3 compatible interface that I can just run, and reads/writes
files from a local file system and not from object storage?
Have a look at Minio:
https://min.io/product/overview#architecture
.
How would the process look to get development started in this direction?
": {}
}
}
"s3cmd ls s3://testbucket/" shows nothing.
"s3cmd rb s3://testbucket/" removes the bucket but the RADOS
objects of the S3 objects remain in the data pool.
s a result the bucket
is empty when listing via S3. A bucket remove is successful but leaves
all the RADOS objects in the index and data pools.
Why is there no operation to rebuild the index for a bucket based on the
existing RADOS objects in the data pool?
Hi,
There is an operation "radosgw-admin bi purge" that removes all bucket
index objects for one bucket in the rados gateway.
What is the undo operation for this?
After this operation the bucket cannot be listed or removed any more.
to
ask what to do if a file has been changed on both sides.
fails.
To increase fault tolerance you need to streamline your processes and
replace a failed node immediately before the next one fails. In such
small clusters each consecutive failure can lead to data loss.
Best would be to add more nodes.
ph daemons.
into multiple 4 MB
sized RADOS objects by the rados gateway.
This is why you see many more RADOS objects than S3 objects.
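As a back-of-the-envelope sketch of that ratio (4 MiB is the default `rgw_obj_stripe_size`; RGW additionally keeps a small head object per S3 object, which this approximation ignores):

```python
import math

RGW_OBJ_STRIPE_SIZE = 4 * 1024 * 1024  # 4 MiB default stripe size

def rados_object_count(s3_object_size: int) -> int:
    """Approximate number of RADOS objects one S3 object is split into."""
    return max(1, math.ceil(s3_object_size / RGW_OBJ_STRIPE_SIZE))

# A 100 MiB S3 object becomes roughly 25 RADOS objects in the data pool.
print(rados_object_count(100 * 1024 * 1024))  # 25
```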
cluster.
Please show the output of "ceph versions".
ized setup.
Remove all but the running MON from it. Then this MON will only see
itself as active in the cluster and form the quorum.
https://docs.ceph.com/en/quincy/rados/troubleshooting/troubleshooting-mon
https://docs.ceph.com/en/quincy/rados/operations/add-or-rm-mons/#removing-monitors
Hi,
you can also use SRV records in DNS to publish the IPs of the MONs.
Read https://docs.ceph.com/en/quincy/rados/configuration/mon-lookup-dns/
for more info.
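A sketch of such zone file entries (domain, TTLs and addresses are placeholders; `ceph-mon` is the default service name clients look up, and port 6789 is the v1 MON port):

```
mon1.example.com.           60 IN A   192.168.1.11
mon2.example.com.           60 IN A   192.168.1.12
mon3.example.com.           60 IN A   192.168.1.13
_ceph-mon._tcp.example.com. 60 IN SRV 10 60 6789 mon1.example.com.
_ceph-mon._tcp.example.com. 60 IN SRV 10 60 6789 mon2.example.com.
_ceph-mon._tcp.example.com. 60 IN SRV 10 60 6789 mon3.example.com.
```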
]:3300,v1:[abcd:abcd:abcd::23]:6789]
Does this ceph.conf also exist on the hosts that want to mount the
filesystem? Then you do not need to specify a MON host or IP when
mounting CephFS. Just do
mount -t ceph -o name=admin,secret=XXX :/ /backup
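The equivalent /etc/fstab entry could look like this (using a `secretfile` instead of an inline secret is my assumption here, it keeps the key out of the mount table; the path is a placeholder):

```
:/    /backup    ceph    name=admin,secretfile=/etc/ceph/admin.secret,_netdev    0 0
```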
represented through "shadow" trees of the cluster
topology.
Am 18.01.23 um 10:12 schrieb Robert Sander:
root@cephtest20:~# ceph fs status
Error EINVAL: Traceback (most recent call last):
File "/usr/share/ceph/mgr/mgr_module.py", line 1757, in _handle_command
return CLICommand.COMMANDS[cmd['prefix']].call(self, cmd, inbuf)
ll information about cephfs.
Where does this AssertionError come from?
/dir/.snap: Permission denied [Errno 13]
It can be reproduced in Ceph 17.2.5 by creating the directory
and using "chmod o= /path/to/dir" to not allow "other".
How does the dashboard access the contents of the CephFS?
It looks like the MGR uses something like the nobody acco
Hi,
Am 13.01.23 um 14:35 schrieb Konstantin Shalygin:
ceph-kvstore-tool bluestore-kv /var/lib/ceph/osd/ceph-0/ get S min_alloc_size
This only works when the OSD is not running.
namespace ls .nfs
root@cephtest20:~#
Where "nfs01" is a namespace in the pool .nfs
Hi,
On 12.01.23 11:11, Gerdriaan Mulder wrote:
On 12/01/2023 10.26, Robert Sander wrote:
Is it this line?
bluestore(/var/lib/ceph/osd/ceph-0) _open_super_meta min_alloc_size
0x1000
That seems to be it:
https://github.com/ceph/ceph/blob/v15.2.17/src/os/bluestore/BlueStore.cc#L11754
ld be great to have.
`config show` on the admin socket I suspect does not show the existing value.
This shows the value currently set in the configuration.
_() got an unexpected keyword argument
'retention_time'
It looks like release 17.2.5 does not contain this code yet.
Why is the content of the documentation already online when
https://github.com/ceph/ceph/pull/47943 has not been released yet?
"root".
You will need to chown or chgrp and chmod directories and/or files if
you want to change them. This is basic POSIX permissions management.
What account are you using when doing the same via SCP?
What POSIX access rights do these accounts have in the filesystem?
t of the CephFS, usually
just /.
You should also switch to "ceph fs authorize" for creating cephx keys
for CephFS usage.
configuration your available capacity is that of the smallest
OSD. You cannot use additional space in larger OSDs.
Such a heterogeneous setup is only possible with a large number of OSDs
where the placement groups can be assigned more flexibly to the OSDs.
Am 02.12.22 um 21:09 schrieb Wyll Ingersoll:
* What is causing the OMAP data consumption to grow so fast and can it be
trimmed/throttled?
S3 is a heavy user of OMAP data. RBD and CephFS not so much.
have
"Dell Ent NVMe CM6 RI 15.36TB" which are Kioxia disks.
Does the "RI" stand for read-intensive?
I think you need mixed-use flash storage for a Ceph cluster as it has
many random write accesses.
would set all OSDs of this host to "out" first.
This way the cluster still knows about them and is able to utilize them
when doing the data movement to the other OSDs.
After they are really empty you can purge them and remove the host from
the cluster.
clients also retrieve the cluster map from the MONs and use that
information to talk to the OSDs.
All components only register one IP.
How would the client decide which IP to talk to if there were multiple
per MON or OSD and the client were not within one of the networks?
A Ceph client always needs access to all of the public network as it
will speak to each OSD.
Make sure that your routing is correct or apply NAT so that VPN clients
and all Ceph nodes are able to talk to each other.
releases.
aster than
the data devices. You don't have that. Keep DB and WAL on the data
devices; that makes operations easier.
and therefore
less suitable for a constant-load system like ceph)?
This would be an excellent idea.
ceph.conf.
The new cluster then still has the old cluster fsid which may or may not
be an issue if you ever need to couple both clusters for replication.
Am 25.09.22 um 19:20 schrieb Murilo Morais:
I set up two hosts with cephadm,
You cannot have HA with only two hosts.
You need at least three separate hosts for three MONs to keep your
cluster running.
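The arithmetic behind that requirement, as a small sketch:

```python
def mons_needed_for_quorum(total_mons: int) -> int:
    """MONs form a quorum only when strictly more than half of them are up."""
    return total_mons // 2 + 1

# Two MONs: quorum needs both, so a single host failure blocks the cluster.
print(mons_needed_for_quorum(2))  # 2
# Three MONs: quorum needs two, so one host may fail without losing quorum.
print(mons_needed_for_quorum(3))  # 2
```

This is why an even number of MONs adds no fault tolerance over the next lower odd number.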
h the specification, i.e.
include vendor or model information or sizes and HDD or SSD type.
This way RBDs would not be included when the orchestrator searches for
new devices.
BTW: It is quite unusual to map RBDs on OSD nodes. Do you run a
hyperconverged setup?
with an /etc/logrotate.d/ceph-common file:
# dpkg -S /etc/logrotate.d/ceph-common
ceph-common: /etc/logrotate.d/ceph-common
more and logfiles do not get rotated
at all. This is bad.
I tried to remove /etc/logrotate.d/cephadm but it gets automatically re-created.
IMHO cephadm should only create this file if the ceph-common file is
not present.
ions on it.
But when you look at the other chapters for more information it seems
like cephadm was never invented.
My feeling is that there should be multiple "tracks" of documentation.
One cephadm track and one for all manual operations clearly separated.
Am 23.08.22 um 08:56 schrieb Konstantin Shalygin:
On 19 Aug 2022, at 17:11, Robert Sander wrote:
You could easily add nodes to the CTDB cluster to distribute load there.
How to do that? Add more than one public_ip? How to tell Windows then about
multiple IPs?
You need to extend
. Current use is about 600 TB of data and 300 million objects in the
data pool.
Identify the bottleneck(s).
What is the load on the MDS?
Do you have multiple active MDS on this CephFS?
You could easily add nodes to the CTDB cluster to distribute load there.
may be a dozen clients for this filesystem.
Am 09.08.22 um 10:07 schrieb Robert Sander:
When copying the same file to a subdirectory of the CephFS the
performance stays at 500MB/s for the whole time. MDS activity does not
seems to influence the performance here.
There is a new datapoint:
When mounting the subdirectory (and not
deployment with
container images is the way to go. Wouldn't it be sufficient and easier
to just build the images?
Am 09.08.22 um 22:31 schrieb Patrick Donnelly:
It sounds like a bug. Please create a tracker ticket with details
about your environment and an example.
Just created https://tracker.ceph.com/issues/57084
VM images in qcow2 format there.
Is this a known issue?
Is there something special with the root directory of a CephFS wrt write
performance?
tting these problems trying to bootstrap quincy
on a clean install of proxmox.
Proxmox puts the hostname with the IP address of the first interface
into /etc/hosts. Could it be a DNS/IP issue?
BTW: Proxmox has its own Ceph management builtin. Why don't you use that?
Am 05.08.22 um 12:20 schrieb Dhairya Parmar:
Did you try making use of staggered upgrades
<https://docs.ceph.com/en/quincy/cephadm/upgrade/#staggered-upgrade>
functionality?
Dang. This is so new that I have not seen the feature yet. Thanks.
.
A downtime of CephFS has to be announced to the consumers of the filesystem.
The rest of the cluster upgrade has no impact to any storage consumer.
seconds.
It would be more predictable if this would not happen after a random
timespan where first all MON and OSD instances get updated.
group. They do have access to the directory via a group that is listed
in the POSIX ACLs.
Is this a known bug in 16.2.10?
e a partition for a block DB device.
Create a physical volume on your SSD, a volume group and then a
logical volume for each DB device. Use the LV in the command line like this:
cephadm ceph-volume lvm prepare --bluestore --data /dev/sda --block.db
vgname/lvname
looks like it is the bug. Strange.
data      /dev/sdap   16.37 TB   100.00%
block_db  /dev/sdaf  128.00 GB    14.31%
sd df tree"?
r creates one cephx key per NFS export.
Ganesha gets a cephx key that is limited to the directory it should export.
It cannot create the directory itself because then it would need to have
permissions in the directory above.
to be asked as there are three projects involved?
Am 24.06.22 um 16:44 schrieb Matthew Darwin:
Not sure. Long enough to try the command and write this email, so at
least 10 minutes.
I had that too today after upgrading my test cluster.
I just ran "ceph telemetry off" and "ceph telemetry on" and the message
was gone.
far as I
know.
Am 20.06.22 um 09:45 schrieb Arnaud M:
A ZFS file system can store up to *256 quadrillion zettabytes* (ZB).
What would a storage system that could hold such an
amount of data look like in reality?
Am 30.05.22 um 13:16 schrieb Janek Bevendorff:
The image tags on Docker Hub are even more outdated and stop at v16.2.5.
quay.io seems to be up to date.
Docker Hub does not get new images any more. The project has moved to
quay.io.
Am 26.05.22 um 20:21 schrieb Sarunas Burdulis:
size 2 min_size 1
With such a setting you are guaranteed to lose data.
to use
another deployment tool (ceph-ansible?) anyway if you want to add OSDs
in the future and not do everything manually.
ceph dashboard set-grafana-frontend-api-url https://f.q.d.n:3000/
the
used space and the available space of the CephFS data pool. I.e. 6.1 TiB
+ 3.0 TiB makes a df size of 9.2 TiB.
the daemon name.
There seems to be no issue as the snapshot mirroring works.
We were just confused by the errors in the log.
log?
BTW: The documentation only shows how to start cephfs-mirror as systemd
service with "systemctl enable" which is obviously not the way in a
cephadm managed cluster:
https://docs.ceph.com/en/pacific/cephfs/cephfs-mirroring/#starting-mirror-daemon
ceph
fs command.
Are subvolumes only for k8s and not for "human" consumption?
iner image changes should also be trackable somehow in the
version number (of the container image, not Ceph).
?
create a
logical volume for each OSD.
OSDs cannot share the same partition.
cephadm ceph-volume lvm prepare --data /dev/sdX --block.db vgname/lvname
should then work. Each logical volume should be around 70GB in size.
PGs. But I do not know if that is still the case with current Ceph versions.
the CRUSH map.
Keep in mind that the last command is
ceph cephadm osd activate $HOSTNAME
SSH public key to
/root/.ssh/authorized_keys. After that you should be able to run
ceph cephadm osd activate $HOSTNAME
to have the orchestrator start the OSD containers on $HOSTNAME.
into account and is therefore useless, IMHO.
On 21.01.22 14:23, Sebastian Mazza wrote:
Or can it even create some problems for the OSD deamon if the HDD spins down?
The OSD daemon would crash, I assume.
nearly all services, a reboot may be even easier.
TL;DR: Do not set the time backwards on a running system. Have ntpd or
chrony adjust it slowly (by having the system clock running slower than
real time).
BTW: Changing the timezone is not the same as jumping backwards in time.
devices etc.
This is currently only possible by manually preparing the OSDs and AFAIK
not with an OSD service specification.
nux.go:235: starting container process
caused "exec: \"/dev/init\": stat /dev/init: no such file or directory".
With Docker 20.10.12 this error does not appear any more.
The Ceph documentation only has a chapter about Podman compatibility.
On 16.12.21 21:57, Andrei Mikhailovsky wrote:
public_network = 192.168.168.0/24,192.168.169.0/24
AFAIK there is only one public_network possible.
In your case you could try with 192.168.168.0/23, as both networks are
direct neighbors bitwise.
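That the two /24 networks merge cleanly into one /23 can be checked with Python's `ipaddress` module:

```python
import ipaddress

a = ipaddress.ip_network("192.168.168.0/24")
b = ipaddress.ip_network("192.168.169.0/24")

# The /23 containing the first /24 also contains the second,
# so a single public_network of 192.168.168.0/23 covers both.
supernet = a.supernet(new_prefix=23)
print(supernet)               # 192.168.168.0/23
print(b.subnet_of(supernet))  # True
```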
a. But it's also *very* counterintuitive, don't you
agree?
I totally agree. But containers and automation are the future. And who
could have thought about that the default container image registry would
change or that Docker hub turns into an unusable repo. Not to become
sarcastic he
This can only be changed by "ceph orch upgrade start".
--image quay.io/ceph/ceph:v15.2.15"
before deploying new RGWs and MDSs you set the new default image for
cephadm. No "real" upgrade will be performed as the adopted containers
already run on this image.