I just run
ceph orch upgrade start
Why does the orchestrator not run the necessary steps?
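For reference, the orchestrator also accepts an explicit target and can be queried for progress; a hedged example (version below is only an example):
ceph orch upgrade start --ceph-version 16.2.3
ceph orch upgrade status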
Regards
--
Robert Sander
Hi,
I had to run
ceph fs set cephfs max_mds 1
ceph fs set cephfs allow_standby_replay false
and stop all MDS and NFS containers and start one after the other again
to clear this issue.
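Roughly, as a sketch (the systemd unit pattern is only illustrative, the fsid and daemon names differ per cluster):
ceph fs status                                  # wait until only one MDS is active
systemctl stop ceph-<fsid>@mds.<name>.service   # one container at a time
systemctl start ceph-<fsid>@mds.<name>.service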
Regards
--
Robert Sander
But then it will not work as the same IP subnet cannot span multiple
broadcast domains.
Regards
--
Robert Sander
The Linux kernel will happily answer ARP requests on any
interface for the IPs it has configured anywhere. That means you get
constant ARP flapping in your network.
Make the three interfaces bonded and configure all three IPs on the
bonded interface.
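As a sketch with netplan (interface names and addresses are placeholders, and the switch needs a matching LACP configuration):
network:
  version: 2
  ethernets:
    eno1: {}
    eno2: {}
    eno3: {}
  bonds:
    bond0:
      interfaces: [eno1, eno2, eno3]
      parameters:
        mode: 802.3ad
      addresses:
        - 192.0.2.10/24
        - 198.51.100.10/24
        - 203.0.113.10/24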
Regards
--
Robert Sander
hould have a uniform class of storage.
Regards
--
Robert Sander
be faster, to write it to just one ssd, instead of
writing it to the disk directly.
Usually one SSD carries the WAL and RocksDB of four to five HDD-OSDs.
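With cephadm such a layout can be described in an OSD service specification; a minimal sketch (service id and placement are placeholders):
service_type: osd
service_id: hdd-with-ssd-db
placement:
  host_pattern: '*'
spec:
  data_devices:
    rotational: 1    # HDDs carry the data
  db_devices:
    rotational: 0    # SSDs carry WAL/RocksDB for several OSDs
applied with "ceph orch apply -i osd-spec.yml".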
Regards
--
Robert Sander
w the data distribution among the OSDs.
Are all of these HDDs? Are these HDDs equipped with RocksDB on SSD?
HDD only will have abysmal performance.
Regards
--
Robert Sander
of block devices with the same size
distribution in each node you will get an even data distribution.
If you have a node with 4 3TB drives and one with 4 6TB drives Ceph
cannot use the 6TB drives efficiently.
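The resulting weights and utilization per host can be checked with:
ceph osd df tree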
Regards
--
Robert Sander
h cluster?
ceph osd set noout
and after the cluster has been booted again and every OSD joined:
ceph osd unset noout
Regards
--
Robert Sander
daemons (outside of osds I believe) from offline hosts.
Sorry for maybe being rude, but how on earth does one come up with the
idea to automatically remove components from a cluster where just one
node is currently rebooting, without any operator intervention?
Regards
--
Robert Sander
have 3 nodes with 5x 12TB each (60TB) and 2 nodes with 4x 18TB each
(72TB), the maximum usable capacity will not be the sum of all
disks. Remember that Ceph tries to evenly distribute the data.
Regards
--
Robert Sander
8 17:13:46 cephtest24 bash[4161252]: debug 2021-07-08T15:13:46.825+
7efc32db4080 -1 ** ERROR: osd init failed: (5) Input/output error
How do I correct the issue?
Regards
--
Robert Sander
building and hosting for open source projects
is solved with the openSUSE build service:
https://build.opensuse.org/
But I think what Sage meant was e.g. different versions of GCC on the
distributions and not being able to use all the latest features needed
for compiling Ceph.
Regards
--
Robert Sander
ssing between these two steps.
The first creates /etc/apt/sources.list.d/ceph.list and the second
installs packages, but the repo list was never updated.
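A manual refresh in between works around it (on Debian/Ubuntu):
apt-get update
and then the package installation step can be repeated.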
Regards
--
Robert Sander
could theoretically RAID0 multiple disks and then put an OSD on top
of that, but this would create very large OSDs, which are not good for
recovery. Recovering such a "beast" would just take too long.
Regards
--
Robert Sander
On 15.06.21 15:16, nORKy wrote:
> Why is there no failover ??
Because one MON out of two is not a majority and cannot form a quorum.
Regards
--
Robert Sander
de?
I had success with stopping the "looping" mgr container via "systemctl
stop" on the node. Cephadm then switches to another MGR to continue the
upgrade. After that I just started the stopped mgr container and the
upgrade continued.
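As a sketch (the unit name pattern is only illustrative, the fsid and mgr name differ per cluster):
systemctl list-units | grep mgr                 # find the mgr unit on the node
systemctl stop ceph-<fsid>@mgr.<name>.service
ceph orch upgrade status                        # cephadm continues on another MGR
systemctl start ceph-<fsid>@mgr.<name>.service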
Regards
--
Robert Sander
On 06.05.21 at 17:18, Sage Weil wrote:
> I hit the same issue. This was a bug in 16.2.0 that wasn't completely
> fixed, but I think we have it this time. Kicking of a 16.2.3 build
> now to resolve the problem.
Great. I also hit that today. Thanks for fixing it quickly.
Regards
--
Robert Sander
ill lead to data loss or at least temporary unavailability.
The situation is now that all copies (resp. EC chunks) for a PG are
stored on OSDs of the same host. These PGs will be unavailable if the
host is down.
Regards
--
Robert Sander
the mds suffer when only 4% of the osd goes
> down (in the same node). I need to modify the crush map?
With an unmodified crush map and the default placement rule this should
not happen.
Can you please show the output of "ceph osd crush rule dump"?
Regards
--
Robert Sander
crush map. It looks like the
OSD is the failure domain, not the host. If it were the host, the
failure of any number of OSDs in a single host would not bring PGs down.
For the default redundancy rule and pool size 3 you need three separate
hosts.
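A replicated rule with host as the failure domain can be created and assigned like this (rule and pool names are only examples):
ceph osd crush rule create-replicated replicated_host default host
ceph osd pool set <pool> crush_rule replicated_host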
Regards
--
Robert Sander
Hi,
to whomever it may concern:
The mirror server eu.ceph.com does not carry the Release files for
15.2.11 in https://eu.ceph.com/debian-15.2.11/dists/*/ and 16.2.1 in
https://eu.ceph.com/debian-16.2.1/dists/*/
Regards
--
Robert Sander
On 22.04.21 at 09:07, Robert Sander wrote:
> What should I do?
I should also upgrade the CLI client, which was still at 15.2.8 (Ubuntu
20.04), because a "ceph orch upgrade" run only updates the software
inside the containers.
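On the Ubuntu host that is a normal package upgrade, roughly (assuming the repo for the target release is configured):
apt-get update
apt-get install --only-upgrade ceph-common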
Regards
--
Robert Sander
ied (error connecting to the cluster)
What should I do?
Regards
--
Robert Sander
Hi,
On 21.04.21 at 10:14, Robert Sander wrote:
> How do I update a Ceph cluster in this situation?
I learned that I need to create an account on the website hub.docker.com
to be able to download Ceph container images in the future.
With the credentials I need to run "docker login".
Hi,
# docker pull ceph/ceph:v16.2.1
Error response from daemon: toomanyrequests: You have reached your pull
rate limit. You may increase the limit by authenticating and upgrading:
https://www.docker.com/increase-rate-limit
How do I update a Ceph cluster in this situation?
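Once the account exists the node can authenticate and pull again, e.g.:
docker login -u <hub-username>
docker pull ceph/ceph:v16.2.1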
Regards
--
Robert Sander
Hi,
this is one of the use cases mentioned in Tim Serong's talk:
https://youtu.be/pPZsN_urpqw
Containers are great for deploying a fixed state of a software project (a
release), but not so much for the development of plugins etc.
Regards
--
Robert Sander
ould not
upgrade to Pacific currently.
Regards
--
Robert Sander
Hi,
The DB device needs to be empty for an automatic OSD service. The service will
then create N db slots using logical volumes and not partitions.
Regards
--
Robert Sander
Hi,
I forgot to mention that CephFS is enabled and working.
Regards
--
Robert Sander
> bash[9823]: debug 2021-04-04T13:01:04.995+ 7ff80f172440 0 ERROR: failed
> to start datalog_rados service ((5) Input/output error
> bash[9823]: debug 2021-04-04T13:01:04.995+ 7ff80f172440 0 ERROR: failed
> to init services (ret=(5) Input/output error)
I see the same issues on a
d condition which
prevented it from fulfilling the request.", "request_id":
"e89b8519-352f-4e44-a364-6e6faf9dc533"}
']
I have no radosgatewa
B
volumes and one OSD on each SSD.
HDD-only OSDs are quite slow. If you do not have enough SSDs for them, go
with an SSD-only CephFS metadata pool.
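A sketch of pinning the metadata pool to SSDs via a device-class rule (rule and pool names are only examples):
ceph osd crush rule create-replicated ssd-only default host ssd
ceph osd pool set cephfs_metadata crush_rule ssd-only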
Regards
--
Robert Sander
check docker.io/ceph/ceph:v15" but it
tells me that the containers do not need to be upgraded.
How will this security fix of OpenSSL be deployed in a timely manner to
users of the Ceph container images?
Regards
--
Robert Sander
ady rebooted the box so I won't be able to
> test immediately.)
My experience with LVM is that only a reboot helps in this situation.
Regards
--
Robert Sander
On 10.03.21 at 20:44, Ignazio Cassano wrote:
> 1 small ssd is for operations system and 1 is for mon.
Make that a RAID1 set of SSDs and be happier. ;)
Regards
--
Robert Sander
0G
bonded interfaces in the cluster network? I would assume that you would
want to go at least 2x 25G here.
Regards
--
Robert Sander
On 10.02.21 at 15:54, Frank Schilder wrote:
> Which ports are the clients using - if any?
All clients only have outgoing connections and do not listen to any
ports themselves.
The Ceph cluster will not initiate a connection to the client.
Kindest Regards
--
Robert Sander
e cluster.
You need ports 3300 and 6789 for the MONs on their IPs and any dynamic
port starting at 6800 used by the OSDs. The MDS also uses a port above 6800.
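With firewalld that translates roughly to (6800-7300 is the default ms_bind port range):
firewall-cmd --permanent --add-port=3300/tcp --add-port=6789/tcp
firewall-cmd --permanent --add-port=6800-7300/tcp
firewall-cmd --reload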
Regards
--
Robert Sander
Hi,
On 04.02.21 at 12:10, Frank Schilder wrote:
> Going to 2+2 EC will not really help
On such a small cluster you cannot even use EC because there are not
enough independent hosts. As a rule of thumb there should be k+m+1 hosts
in a cluster AFAIK.
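E.g. a 2+2 profile with host as the failure domain already needs 4 hosts just to place all chunks, 5 with the rule of thumb above (profile name is only an example):
ceph osd erasure-code-profile set ec-2-2 k=2 m=2 crush-failure-domain=host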
Regards
--
Robert Sander
(error connecting to the cluster)
This issue is mostly caused by not having a readable ceph.conf and
ceph.client.admin.keyring file in /etc/ceph for the user that starts the
ceph command.
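A quick check (paths are the defaults):
ls -l /etc/ceph/ceph.conf /etc/ceph/ceph.client.admin.keyring
ceph -s    # should connect once both files are readable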
Regards
--
Robert Sander
ogether using lvm or somesuch? What are the tradeoffs?
IMHO there are no tradeoffs; there could even be benefits to creating a
volume group with multiple physical volumes on RBD, as the requests can
be better parallelized (e.g. the virtio-single SCSI controller for qemu).
Regards
--
Robert Sander
stored":27410520278,"objects":6781,"kb_used":80382849,"bytes_used":82312036566,"percent_used":0.1416085809469223,"max_avail":166317473792}},{"name":"cephfs_data","id":3,"stats":{"stored":1282414464
Hi Marc and Dan,
thanks for your quick responses assuring me that we did nothing totally
wrong.
Regards
--
Robert Sander
ls also
removes the objects and you can start new.
Regards
--
Robert Sander
m=2
You need k+m=4 independent hosts for the EC parts, but your CRUSH map
only shows two hosts. This is why all your PGs are undersized and degraded.
Regards
--
Robert Sander
On 11.11.20 at 13:05, Hans van den Bogert wrote:
> And also the erasure coded profile, so an example on my cluster would be:
>
> k=2
> m=1
With this profile you can only lose one OSD at a time, which is really
not that redundant.
Regards
--
Robert Sander
umber of nodes (more than 10) and a proportional number of OSDs.
Mixing HDDs and SSDs in one pool is not good practice, as a pool should
have OSDs of the same speed.
Kindest Regards
--
Robert Sander
t 7 to 10 nodes and a
corresponding number of OSDs.
This cluster is too small to do any amount of "real" work.
Regards
--
Robert Sander
he
> ops and repair tasks for the first time here.
My condolences. Get the data from that cluster and put the cluster down.
In the current setup it will never work.
Regards
--
Robert Sander
> partition?
If you do not have faster devices for DB/WAL there is no need to create
them. It does not make the OSD faster.
Regards
--
Robert Sander
ed on one node, i.e. the distribution must support
Docker or podman.
cephadm sets up a containerized Ceph cluster with containers based on
CentOS.
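E.g. the first node of such a cluster is bootstrapped with (IP is a placeholder):
cephadm bootstrap --mon-ip <ip>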
Regards
--
Robert Sander
to map that onto user name and
group name.
What you use for consistent mappings between your CephFS clients is up
to you. It could be NIS, libnss-ldap, winbind (Active Directory) or any
other method that keeps the passwd and group files in sync.
Regards
--
Robert Sander
y User ID locally.
The recommended way is to run a Samba cluster using CephFS as backend.
Your users would then authenticate against Samba which would need to
speak to your LDAP/Kerberos.
Regards
--
Robert Sander
ing the OSD and applies it again after deploying an OSD with the
same ID.
I do not know why the orchestrator does it but there seems to be a fix
scheduled for 15.2.5.
Regards
--
Robert Sander
?
Do you know that Proxmox is able to store VM images as RBD directly in a
Ceph cluster?
I would not recommend storing VM images as files on CephFS, or even
exporting NFS out of a VM to store other VM images on it.
Regards
--
Robert Sander
exception with very fast devices like NVMe where one OSD
is not able to fully use the available IO bandwidth. NVMes can have two
OSDs per device.
But you would not create one OSD over multiple devices.
Regards
--
Robert Sander
h one OSD
on it. Double the space and the same risk.
If you have at least "host" as failure domain then no two copies of the
same object are stored on a single host. That means it does not matter
if you take two OSDs offline at the same time.
Regards
--
Robert Sander
Hi,
is it correct that when using the orchestrator to deploy and manage a
cluster you should not use "ceph osd purge" any more as the orchestrator
then is not able to find the OSD for the "ceph orch osd rm" operation?
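The orchestrator-managed removal would then be something like (OSD id only as an example):
ceph orch osd rm 7
ceph orch osd rm status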
Regards
--
Robert Sander
0.09799  osd.7  up  1.0  1.0
Why does osd.1 have a weight of 0 now?
When the OSDs had been initially deployed with the first ceph orch apply
command, the weights were set correctly according to their size.
Why is there a difference between this process and an OSD
Hi,
I am using 15.2.4 as .5 has not been released.
Regards
--
Robert Sander
"Could not
load module: %s" % module)
rtslib_fb.utils.RTSLibError:
Could not load module: iscsi_target_mod
Solution:
"modprobe iscsi_target_mod" on the host itself,
the container is not allowed to do that.
Regards
--
Robert Sander
day. ceph versions reports 15.2.4
https://tracker.ceph.com/issues/44877 is exactly the issue I experience.
It seems that the fix is in 15.2.5, any chance that this will be
released by the weekend? ;)
Regards
--
Robert Sander
Hi,
On 02.09.20 at 23:17, Dimitri Savineau wrote:
> Did you try to restart the dashboard mgr module after your change ?
>
> # ceph mgr module disable dashboard
> # ceph mgr module enable dashboard
Yes, I should have mentioned that. No effect, though.
Regards
--
Robert Sander
://ceph01:3000
root@ceph01:~# ceph dashboard get-grafana-api-url
https://ceph01:3000
Regards
--
Robert Sander
rbd: error opening vm-501-disk-2: (5) Input/output error
What are the options to repair an RBD pool so that at least the
RBDs (most data objects are still there) are available again?
Regards
--
Robert Sander
estart the OSD (which
resulted in a crash) but the script still is not able to find the info.
Regards
--
Robert Sander
Heinlein Support GmbH
Schwedter Str. 8/9b, 10119 Berlin
https://www.heinlein-support.de
Tel: 030 / 405051-43
Fax: 030 / 405051-19
Amtsgericht Berlin-Charlottenburg - HRB 9
ate as several PGs are down and/or stale.
>
Thanks for your input so far.
It looks like this issue: https://tracker.ceph.com/issues/36337
We will try to use the linked Python script to repair the OSD.
ceph-bluestore-tool repair did not find anything.
Regards
--
Robert Sander
g ordered to expand this
proof of concept setup for backup storage.
Regards
--
Robert Sander
but no write. s3cmd rb … says operation
halted and radosgw-admin bucket rm also waits for a healthy cluster.
The other option would be to tune the nearfull_ratio and full_ratio
temporarily to allow the cluster to use more space, correct?
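Raising them temporarily would look like this (values only as examples, to be reverted afterwards):
ceph osd set-nearfull-ratio 0.90
ceph osd set-full-ratio 0.97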
Regards
--
Robert Sander
ill not work. Ceph does not like swapping.
> My question is: Can I shrink a bluestore lvm?
No, you need to recreate the OSD.
Regards
--
Robert Sander
ot create a bonding device
from VLAN interfaces; you have to do it the other way around and create
VLAN interfaces on top of your bonding.
Why not bond in LACP mode and use both interfaces in parallel?
Regards
--
Robert Sander
his pool on OSD's. So this is
> threshold for bytes_used.
Could the documentation be changed to express this more clearly, please?
Regards
--
Robert Sander
w CephFS.
There is no in-place migration possible. You also cannot convert a pool
from EC to replicated or vice versa.
Regards
--
Robert Sander
it, please also do not hesitate to contact me.
Kindest Regards
--
Robert Sander
something here?
Regards
--
Robert Sander
s cluster, it's only now showing up
> as a warning.
Thanks Paul, I already thought that this would be the case.
We will recommend using CephFS to our customer for storing ISOs. I
really do not know why they are storing them as single RADOS objects.
Regards
--
Robert Sander
cts
All OSDs are BlueStore.
What is happening here?
Regards
--
Robert Sander
ment process. But
has better performance.
If you have a recent kernel, you can use the kernel mount.
If you have an enterprise distribution with an older kernel, use FUSE for
current features.
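For illustration, the two mount variants (monitor host, user and paths are placeholders):
mount -t ceph <mon-host>:/ /mnt/cephfs -o name=<user>,secretfile=/etc/ceph/<user>.secret
ceph-fuse --id <user> /mnt/cephfs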
Regards
--
Robert Sander
ot
start without a certificate.
Regards
--
Robert Sander
mand
line argument. Otherwise ceph-volume thinks it is a "real" device.
ceph-volume lvm create --bluestore --data /dev/sda --block.db
ceph-block-dbs-ea684aa8-544e-4c4a-8664-6cb50b3116b8/osd-block-db-a8f1489a-d97b-479e-b9a7-30fc9fa99cb5
should work.
Regards
--
Robert Sander
cmu-runner nor ceph-iscsi.
Where do I get these RPMs from?
Regards
--
Robert Sander