a loss happens to you, you restore from your backups.
>
> On Mon, Aug 13, 2018 at 14:26, Surya Bala wrote:
>
>> Any suggestions on this, please?
>>
>> Regards
>> Surya Balan
>>
>> On Fri, Aug 10, 2018 at 11:28 AM, Surya Bala wrote:
Hi folks,

I was trying to test the case below.

Given a pool with a replication count of 1, if one OSD goes down, the PGs
mapped to that OSD become stale. If a hardware failure happens, the data
on that OSD is lost, so parts of some files are lost. How can I find
which files are affected?
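One way to approach the question above is to map RADOS object names from the affected pool back to files. A minimal Python sketch, assuming the usual CephFS data-object naming scheme `<inode-hex>.<index-hex>`; the pool name and mount path in the comments are placeholders:

```python
# Hedged sketch (not an official tool): CephFS names its data objects
# "<inode in hex>.<8-hex-digit object index>", so an object name taken
# from a damaged PG can be traced back to a file via its inode number.
def object_to_inode(obj_name: str) -> int:
    """Return the decimal inode number encoded in a CephFS object name."""
    inode_hex, _, _index = obj_name.partition(".")
    return int(inode_hex, 16)

# Example workflow (placeholders, run against your own cluster):
#   rados -p <data-pool> ls            # list surviving/affected objects
#   find /mnt/cephfs -inum <decimal>   # locate the file for that inode
print(object_to_inode("10000000001.00000000"))
```

The `find -inum` step works because the CephFS client exposes the same inode numbers that appear in the object names.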
>> ...if the file is large enough, it cannot be stored by only two OSDs.
>> If the file is very small (the object size is 4 MB, as you know), it
>> can be stored as a single object on one primary OSD and one replica OSD.
>>
>>
>> On Aug 2, 2018, at 6:56 PM, Surya Bala wrote:
>>
>> I underst
2.
>
> Hope this will help.
>
> > On Aug 2, 2018, at 3:43 PM, Surya Bala wrote:
Hi folks,

From the Ceph documents I understood what PGs are and why the PG number
should be optimal, but I don't find any info about the point below.

I am using the CephFS client in my Ceph cluster. When we store a file
(consider a replication count of 2), it is split into objects, and each
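The splitting described above can be sketched as follows, a hedged example assuming the default layout (4 MiB objects) and the usual CephFS object-naming scheme; the inode value is illustrative:

```python
# Hedged sketch of the forward mapping: with the default layout, byte
# `offset` of the file with inode `ino` lives in the RADOS object named
# "<ino in hex>.<offset // 4 MiB, as 8 hex digits>". Each such object is
# then replicated according to the pool's replication count.
OBJECT_SIZE = 4 * 1024 * 1024  # default 4 MiB

def object_name(ino: int, offset: int) -> str:
    """RADOS object name holding the given byte offset of a file."""
    return f"{ino:x}.{offset // OBJECT_SIZE:08x}"

print(object_name(0x10000000001, 0))                  # first object
print(object_name(0x10000000001, 5 * 1024 * 1024))    # second object
```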
Previously we had multiple active MDSs, but at that time we got slow/stuck
requests when multiple clients accessed the cluster, so we decided to have
a single active MDS with all the others on standby.

When we got this issue, MDS trimming was going on. When we checked the
last ops:
{
"ops": [
{
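The ops dump above is truncated; it presumably came from the MDS admin socket (e.g. `ceph daemon mds.<id> dump_ops_in_flight`). A hedged Python sketch for sorting such a dump to find the oldest stuck requests, with the JSON shape assumed from that command's typical output:

```python
# Hedged sketch: parse a dump_ops_in_flight-style JSON document and
# return the ops sorted by age (oldest first in the result). The field
# names below ("ops", "age", "description") are assumptions based on
# that command's usual output shape.
import json

sample = """
{
  "ops": [
    {"description": "client_request(...)", "age": 31.5},
    {"description": "client_request(...)", "age": 2.1}
  ],
  "num_ops": 2
}
"""

def oldest_ops(dump: str, n: int = 5):
    """Return up to n ops from the dump, oldest (largest age) first."""
    ops = json.loads(dump)["ops"]
    return sorted(ops, key=lambda op: op["age"], reverse=True)[:n]

for op in oldest_ops(sample):
    print(op["age"], op["description"])
```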
...is designed to use 2 servers for each pool.
Regards
Surya Balan
On Tue, Jul 17, 2018 at 1:48 PM, Anton Aleksandrov wrote:
> You need to give us more details about your OSD setup and the hardware
> specification of the nodes (CPU core count, RAM amount).
>
> On 2018.07.17. 10:25, Surya Bala wrote:
Hi folks,

We have a production cluster with 8 nodes, and each node has 60 disks of
6 TB each. We are using CephFS and the FUSE client with a global mount
point. We are doing rsync from our old server to this cluster, and rsync
is slow compared to a normal server.

When we do 'ls' inside some folder, which has