Hi there,
I have a Ceph cluster with radosgw and have been using it in my production
environment for a while. Now I have decided to set up another cluster in
another geographic location as a disaster recovery plan. I read some docs
like http://docs.ceph.com/docs/jewel/radosgw/federated-config/, but all of
them is
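For reference, a hedged sketch of the multisite-style setup that superseded
the federated configuration from Jewel onward; the realm/zonegroup/zone
names and the endpoint URL are placeholders, not taken from this thread:

    # on the master cluster
    radosgw-admin realm create --rgw-realm=myrealm --default
    radosgw-admin zonegroup create --rgw-zonegroup=us \
        --endpoints=http://rgw1:80 --master --default
    radosgw-admin zone create --rgw-zonegroup=us --rgw-zone=us-east \
        --endpoints=http://rgw1:80 --master --default
    # commit the period so the new realm layout takes effect
    radosgw-admin period update --commit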
Hi,
I want to copy objects from one of my pools to another pool with "rados
cppool", but this operation is very slow. On the other hand, PUT/GET
through radosgw is much faster.
Is there any trick to speed it up?
ceph version 12.2.3
Regards,
Behna
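One hedged workaround, since "rados cppool" reportedly copies objects one
at a time: list the objects and copy them with several parallel workers.
Pool names and the worker count below are placeholders:

    # copy object data with 8 parallel workers; note this copies object
    # data only (omap and xattrs are not preserved) and assumes plain
    # object names
    rados -p srcpool ls | \
      xargs -P 8 -I OBJ sh -c \
        'rados -p srcpool get "$1" - | rados -p dstpool put "$1" -' _ OBJ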
nd cable
4- cross-exchanging the SSD disks
To those who helped me with this problem, I sincerely thank you so much.
Best regards,
Behnam Loghmani
On Thu, Feb 22, 2018 at 3:18 PM, David Turner <drakonst...@gmail.com> wrote:
> Did you remove and recreate the OSDs that used the SSD for their W
>
>
> On Wed, Feb 21, 2018 at 5:46 PM, Behnam Loghmani <
> behnam.loghm...@gmail.com> wrote:
>
>> Hi there,
>>
>> I changed the SATA port and cable of the SSD disk, updated Ceph to version
>> 12.2.3, and rebuilt the OSDs,
>> but when recovery
5e992254400 /var/lib/ceph/osd/ceph-7/block) close
2018-02-21 21:12:18.650473 7f3479fe2d00  1 bdev(0x55e992254000 /var/lib/ceph/osd/ceph-7/block) close
2018-02-21 21:12:18.93 7f3479fe2d00 -1 ** ERROR: osd init failed: (22) Invalid argument
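When an OSD fails to start like this, the usual recovery path is to purge
it and rebuild it. A hedged sketch follows; osd.7 matches the log path
above, but the device path is a placeholder:

    # remove the broken OSD from the cluster map (destructive)
    ceph osd purge 7 --yes-i-really-mean-it
    # wipe data and LVM labels from its device so it can be redeployed
    ceph-volume lvm zap /dev/sdb --destroy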
On Wed, Feb 21, 2018 at 5:06 PM, Behnam Loghma
me faulty hardware (RAID-controller,
> port, cable) but not the disk? Does the "faulty" disk work OK on another server?
>
> Behnam Loghmani wrote on 21/02/18 16:09:
>
>> Hi there,
>>
>> I replaced the SSD on the problematic node with a new one and
>> reconfigured the OSDs a
at 5:16 PM, Behnam Loghmani <behnam.loghm...@gmail.com>
wrote:
> Hi Caspar,
>
> I checked the filesystem and there aren't any errors on it.
> The disk is an SSD, it doesn't have any attribute related to wear level in
> smartctl, and the filesystem is mounted with default opti
erpret this.
Could you please help me recover this node, or find a way to prove that the
SSD disk is the problem.
Best regards,
Behnam Loghmani
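As an aside, a hedged sketch of the smartctl checks usually used to look
for SSD problems (the device path is a placeholder):

    # overall health verdict and vendor attributes (wear level,
    # reallocated sectors, CRC errors)
    smartctl -H /dev/sdb
    smartctl -A /dev/sdb
    # run a short self-test, then read back its result
    smartctl -t short /dev/sdb
    smartctl -l selftest /dev/sdb

A rising UDMA_CRC_Error_Count in the attribute table usually implicates the
cable or port rather than the disk itself.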
On Mon, Feb 19, 2018 at 1:35 PM, Caspar Smit <caspars...@supernas.eu> wrote:
> Hi Behnam,
>
> I would firstly recommend running a filesystem ch
at 1:09 AM, Gregory Farnum <gfar...@redhat.com> wrote:
> The disk that the monitor is on...there isn't anything for you to
> configure about a monitor WAL though so I'm not sure how that enters into
> it?
>
> On Fri, Feb 16, 2018 at 12:46 PM Behnam Loghmani <
> behna
Thanks for your reply.
Do you mean that the problem is with the disk I use for WAL and DB?
On Fri, Feb 16, 2018 at 11:33 PM, Gregory Farnum <gfar...@redhat.com> wrote:
>
> On Fri, Feb 16, 2018 at 7:37 AM Behnam Loghmani <behnam.loghm...@gmail.com>
> wrote:
>
>>
nd wal/db separation on
logical volumes.
Best regards,
Behnam Loghmani
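For reference, a hedged sketch of WAL/DB separation onto logical volumes
with ceph-volume (VG/LV names, the DB size, and device paths are
placeholders):

    # carve a DB logical volume out of the SSD
    vgcreate ceph-db /dev/sdc
    lvcreate -L 30G -n db-osd7 ceph-db
    # create the OSD with data on the HDD and its rocksdb DB on that LV
    ceph-volume lvm create --bluestore --data /dev/sdb \
        --block.db ceph-db/db-osd7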
> is pretty static in size, but rocksdb grows with the number of objects you
> have. You also have copies of the osdmap on each OSD. There's just overhead
> that adds up. The biggest contributor is going to be rocksdb, given how many
> objects you have.
>
> On Mon, Feb 12, 2018, 8:06 AM Behnam Loghmani
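As a hedged aside, the rocksdb footprint of a given OSD can be inspected
through its admin socket (the OSD id is a placeholder):

    # the bluefs section reports db_used_bytes / db_total_bytes
    ceph daemon osd.7 perf dump | grep -A 20 '"bluefs"'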
of this high disk usage?
Should I change "bluestore_min_alloc_size_hdd"? And if I set it to a
smaller size, does it impact performance?
What is the best practice for storing small files on bluestore?
Best regards,
Behnam Loghmani
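For illustration, a hedged sketch of tuning that option in ceph.conf (the
4096 value is only an example, not a recommendation):

    [osd]
    # Luminous defaults to 65536 (64 KiB) on HDD OSDs, so every small
    # object consumes at least 64 KiB; the value is baked in at OSD
    # creation time, so it only affects newly built OSDs
    bluestore_min_alloc_size_hdd = 4096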
You have a typo in your apt source.
It must be
https://download.ceph.com/debian-luminous/
not
https://download.ceph.com/debian-luminos/
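For clarity, a hedged example of the corrected APT entry (the "xenial"
codename is an assumption, not from the thread):

    # /etc/apt/sources.list.d/ceph.list
    deb https://download.ceph.com/debian-luminous/ xenial main

followed by "apt update" to refresh the package index.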
On Mon, Dec 18, 2017 at 7:58 PM, Andre Goree wrote:
> I'm working on setting up a cluster for testing purposes and I can't seem
> to install