On 15:55 Thu 26 Sep, Roman Penyaev wrote:
> > I'll write patch to remove "ms_async_rdma_device_name" and get the
> > device name through public_addr/cluster_addr.
>
> Removal is not a good option since you always have to think about
> compatibility.
I agree, compatibility is quite important here.
Hi,
rclone can be your friend: https://rclone.org/
Regards,
--
Jarek
On Thu, 26 Sep 2019 at 14:55, CUZA Frédéric wrote:
> Hi everyone,
>
> Has anyone ever made a backup of a Ceph bucket into Amazon Glacier?
>
> If so, did you use a script that uses the API to “migrate” the objects?
>
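To sketch what the rclone route could look like (the remote names `ceph` and `glacier` are hypothetical and would first need to be defined with `rclone config`; objects can be uploaded to AWS S3 with the GLACIER storage class):

```shell
# Define two S3 remotes first via: rclone config
#   "ceph"    -> the RGW endpoint of the Ceph cluster
#   "glacier" -> AWS S3

# Copy a bucket to AWS, storing the objects in the Glacier storage class
# ("mybucket" and "backup-bucket" are placeholder names)
rclone sync ceph:mybucket glacier:backup-bucket \
    --s3-storage-class GLACIER --progress
```

This requires configured remotes and credentials, so it is only a starting point, not a tested backup procedure.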
Hi everyone,
Has anyone ever made a backup of a Ceph bucket into Amazon Glacier?
If so, did you use a script that uses the API to "migrate" the objects?
If no one uses Amazon S3, how did you make those backups?
Thanks in advance.
Regards,
Hi,
The Telemetry [0] module has been in Ceph since the Mimic release. When
enabled, it sends an anonymized JSON report to
https://telemetry.ceph.com/ every 72 hours with information about the
cluster.
For example:
- Version(s)
- Number of MONs, OSDs, FS, RGW
- Operating System used
- CPUs
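The module can be enabled and inspected from the CLI before opting in; a minimal sketch, assuming a healthy running cluster with the mgr available:

```shell
# Enable the telemetry mgr module
ceph mgr module enable telemetry

# Preview the exact JSON report that would be sent, without sending it
ceph telemetry show

# Opt in to periodic reporting
ceph telemetry on
```

`ceph telemetry show` is the easy way to verify for yourself what data is (and is not) included before turning reporting on.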
On 09:58 Thu 26 Sep, Roman Penyaev wrote:
> On 2019-09-26 02:06, Liu, Changcheng wrote:
> > Hi all,
> > Does anyone know how to set "ms_async_rdma_device_name" for OSD
> > in ceph.conf in production environment?
> >
> > When deploying Ceph, it’s better to isolate the public & cluster
> > networks.
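A minimal ceph.conf sketch of such an isolated setup; the subnets and the RDMA device name are hypothetical placeholders. If `ms_async_rdma_device_name` were removed as proposed, the device would instead have to be derived from the address the daemon binds to via `public_addr`/`cluster_addr`:

```ini
[global]
# Hypothetical subnets: client traffic vs. OSD replication traffic
public_network  = 10.0.1.0/24
cluster_network = 10.0.2.0/24

# Current approach discussed in this thread: pin the RDMA device
# explicitly (mlx5_0 is an example device name)
ms_async_rdma_device_name = mlx5_0
```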
Hi Miha,
interesting observation, I don't think we've noticed this before. Would
you mind submitting a bug report about this on our tracker, including
these logs?
https://tracker.ceph.com/projects/mgr/issues/new
Thanks in advance!
Lenz
On 9/26/19 10:01 AM, Miha Verlic wrote:
On 24. 09. 19 14:53, Lenz Grimmer wrote:
> On 9/24/19 1:37 PM, Miha Verlic wrote:
>
>> I've got a slightly different problem. After a few days of running fine,
>> the dashboard stops working because it is apparently looking for the
>> wrong certificate file in /tmp. If I restart ceph-mgr it starts to work
>> again.
hi, cephers
recently, I have been testing Ceph 12.2.12 with BlueStore using COSBench.
Both SATA OSDs and SSD OSDs show slow requests.
Many slow requests occur, and most of the slow-request log entries appear
right after RocksDB delete-WAL or table_file_deletion log entries.
Does this mean RocksDB is the bottleneck? If so, how can I improve it? If
not, how should I track down the real cause?
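One way to narrow this down (a sketch; `osd.0` is a placeholder ID, and the commands must run on the node hosting that OSD) is to pull the recorded slow ops and perf counters from the OSD's admin socket:

```shell
# Dump the slowest recent ops recorded by the OSD, with per-stage
# timestamps showing where each op spent its time
ceph daemon osd.0 dump_historic_ops

# Dump perf counters; the bluestore and rocksdb sections include
# commit/submit latency counters that hint at a KV-store bottleneck
ceph daemon osd.0 perf dump
```

If the per-stage timings in `dump_historic_ops` show ops stalling in the KV commit path, that would support the RocksDB theory; if they stall elsewhere (e.g. waiting for subops from peers), the bottleneck is likely not RocksDB.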