Hello,

You're right. I had misunderstood the meaning of the two configuration
parameters: size and min_size.

Now it works correctly.
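
For anyone who finds this thread later: "size" is the number of replicas
a pool keeps, while "min_size" is the minimum number of replicas that
must be available for a PG to keep accepting I/O. Roughly what I ran to
fix it (the pool name is just an example, substitute your own):

    # check the current values
    ceph osd pool get cephfs_data size
    ceph osd pool get cephfs_data min_size
    # with size=2, allow I/O to continue on a single surviving replica
    ceph osd pool set cephfs_data min_size 1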

Thanks,
Matteo  
________________________________________
From: Christian Balzer <ch...@gol.com>
Sent: Tuesday, 16 June 2015 09:42
To: ceph-users
Cc: Matteo Dacrema
Subject: Re: [ceph-users] CephFS client issue

Hello,

On Tue, 16 Jun 2015 07:21:54 +0000 Matteo Dacrema wrote:

> Hi,
>
> I shut the node down without taking any precautions, to simulate a real
> failure case.
>
Normal shutdown (as opposed to simulating a crash by pulling cables)
should not result in any delays due to Ceph timeouts.

> The osd_pool_default_min_size is 2.
>
This, on the other hand, is most likely your problem.
It would have to be "1" for things to work in your case.
Verify it with "ceph osd pool get <poolname> min_size" for your actual
pool(s).
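
For example, assuming a pool named "data" (substitute your actual pool
names):

    ceph osd pool get data min_size    # shows e.g. "min_size: 2"
    ceph osd pool set data min_size 1  # allow I/O with one replica left

With size=2 and min_size=2, losing one of your two nodes leaves every PG
with fewer live replicas than min_size, so all client I/O blocks until
the node returns; that matches the 256 undersized+degraded+peered PGs in
your "ceph -s" output quoted below.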

Christian

> Regards,
> Matteo
>
> ________________________________________
> From: Christian Balzer <ch...@gol.com>
> Sent: Tuesday, 16 June 2015 01:44
> To: ceph-users
> Cc: Matteo Dacrema
> Subject: Re: [ceph-users] CephFS client issue
>
> Hello,
>
> On Mon, 15 Jun 2015 23:11:07 +0000 Matteo Dacrema wrote:
>
> > With the 3.16.3 kernel it seems to be stable, but I've discovered one
> > new issue.
> >
> > If I take down one of the two OSD nodes, all the clients stop responding.
> >
> How did you take the node down?
>
> What is your "osd_pool_default_min_size"?
>
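> A quick way to check the default is via the admin socket on an OSD node
> ("osd.0" is just an example id, run it on the node hosting that OSD):
>
>     ceph daemon osd.0 config get osd_pool_default_min_size
>
> Note that this default is only applied when a pool is created; existing
> pools keep their own per-pool min_size.
>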
> Penultimately, you wouldn't deploy a cluster with just 2 storage nodes in
> production anyway.
>
> Christian
> >
> > Here is the output of ceph -s:
> >
> > ceph -s
> >     cluster 2de7b17f-0a3e-4109-b878-c035dd2f7735
> >      health HEALTH_WARN
> >             256 pgs degraded
> >             127 pgs stuck inactive
> >             127 pgs stuck unclean
> >             256 pgs undersized
> >             recovery 1457662/2915324 objects degraded (50.000%)
> >             4/8 in osds are down
> >             clock skew detected on mon.cephmds01, mon.ceph-mon1
> >      monmap e5: 3 mons at
> > {ceph-mon1=10.29.81.184:6789/0,cephmds01=10.29.81.161:6789/0,cephmds02=10.29.81.160:6789/0}
> >             election epoch 64, quorum 0,1,2 cephmds02,cephmds01,ceph-mon1
> >      mdsmap e176: 1/1/1 up {0=cephmds01=up:active}, 1 up:standby
> >      osdmap e712: 8 osds: 4 up, 8 in
> >       pgmap v420651: 256 pgs, 2 pools, 133 GB data, 1423 kobjects
> >             289 GB used, 341 GB / 631 GB avail
> >             1457662/2915324 objects degraded (50.000%)
> >                  256 undersized+degraded+peered
> >   client io 86991 B/s wr, 0 op/s
> >
> >
> > When I bring the node back up, all the clients resume working.
> >
> > Thanks,
> > Matteo
> >
> > ________________________________
> > From: ceph-users <ceph-users-boun...@lists.ceph.com> on behalf of
> > Matteo Dacrema <mdacr...@enter.it>
> > Sent: Monday, 15 June 2015 12:37
> > To: John Spray; Lincoln Bryant; ceph-users
> > Subject: Re: [ceph-users] CephFS client issue
> >
> >
> > OK, I'll update the kernel to version 3.16.3 and let you know.
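> >
> > (For reference, on Ubuntu 14.04 a 3.16-series kernel is available via
> > the lts-utopic HWE packages, roughly:
> >
> >     sudo apt-get install linux-generic-lts-utopic
> >     sudo reboot
> >
> > though an exact 3.16.3 would need a mainline build instead.)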
> >
> >
> > Thanks,
> >
> > Matteo
> >
> > ________________________________
> > From: John Spray <john.sp...@redhat.com>
> > Sent: Monday, 15 June 2015 10:51
> > To: Matteo Dacrema; Lincoln Bryant; ceph-users
> > Subject: Re: [ceph-users] CephFS client issue
> >
> >
> >
> > On 14/06/15 20:00, Matteo Dacrema wrote:
> >
> > Hi Lincoln,
> >
> >
> > I'm using the kernel client.
> >
> > Kernel version is: 3.13.0-53-generic
> >
> > That's old by CephFS standards.  It's likely that the issue you're
> > seeing is one of the known bugs (which were actually the motivation for
> > adding the warning message you're seeing).
> >
> > John
> >
>
>
> --
> Christian Balzer        Network/Systems Engineer
> ch...@gol.com           Global OnLine Japan/Fusion Communications
> http://www.gol.com/
>


--
Christian Balzer        Network/Systems Engineer
ch...@gol.com           Global OnLine Japan/Fusion Communications
http://www.gol.com/


