Re: [PVE-User] Again, Ceph: default timeout for osd?

2016-12-16 Thread Jeff Palmer
Marco, I think you've answered the nodown/noout question. As for the "total unreasonable value" for the default.. In my experience the "defaults" become the defaults in 1 of 2 primary ways. The upstream vendor (ceph in this case) has a default that they select based on their expected typical

[PVE-User] Cross CPD HA

2016-12-16 Thread Eneko Lacunza
Hi all, We are doing a preliminary study for migrating a VMware installation to Proxmox. Currently the customer has 2 CPDs in HA, so that if the main CPD goes down, all VMs are restarted in the backup CPD. Storage is SAN, and storage data is replicated using SAN capabilities. What can be done

Re: [PVE-User] Again, Ceph: default timeout for osd?

2016-12-16 Thread Marco Gaiarin
Hello! Alexandre DERUMIER, on that day you were saying... > >>mon osd down out interval > This is the time between when a monitor marks an OSD "down" (not > currently serving data) and "out" (not considered *responsible* for > data by the cluster). IO will resume once the OSD is down (assuming > the

Re: [PVE-User] Windows & Gluster 3.8

2016-12-16 Thread Angel Docampo
On 16/12/16 11:03, Maxence Sartiaux wrote: [2016-12-14 11:08:29.212590] I [MSGID: 114057] [client- handshake.c:1446:select_server_supported_programs] 0-vm-client-2: Using Program GlusterFS 3.3, Num (1298437), Version (330) AFAIK, this is irrelevant; some developer put the version in the log

Re: [PVE-User] Windows & Gluster 3.8

2016-12-16 Thread Maxence Sartiaux
Hello, I've found my problem: I don't know why, but Gluster started to store data on my arbiter brick and the partition was full. I've recreated the brick and now all VMs run fine. BTW, I've upgraded to 3.8.7; if I run into any trouble, I'll keep you informed. Another little question,
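A rough sketch of how one might keep an eye on the arbiter brick's free space before it fills up again (the volume name 'vm' is only inferred from the log prefix "0-vm-client-2" quoted above, and the brick path is a placeholder, not taken from the thread):

    # per-brick disk usage and free inodes for the volume
    gluster volume status vm detail

    # or check the arbiter brick's filesystem directly on that node
    df -h /gluster/arbiter/brick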

Re: [PVE-User] Again, Ceph: default timeout for osd?

2016-12-16 Thread Alexandre DERUMIER
>>mon osd down out interval This is the time between when a monitor marks an OSD "down" (not currently serving data) and "out" (not considered *responsible* for data by the cluster). IO will resume once the OSD is down (assuming the PG has its minimum number of live replicas); it's just that data
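For reference, the interval being described is an ordinary ceph.conf setting; a minimal sketch (the value shown is only illustrative, not something recommended in this thread):

    [mon]
        # seconds an OSD may stay "down" before the monitors also mark it
        # "out" and the cluster starts re-replicating its placement groups
        mon osd down out interval = 600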

[PVE-User] Again, Ceph: default timeout for osd?

2016-12-16 Thread Marco Gaiarin
I've done some tests in the past, but probably without noticing it, because the test system was... a test system, so mostly idle. Yesterday I had to reboot a Ceph node that was a MON and had some OSDs. I've set the flags: 2016-12-15 17:01:29.139923 mon.0 10.27.251.7:6789/0 1213541 :
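For context, the flags mentioned here are the standard ones set around planned maintenance; a sketch of the usual sequence (not necessarily the exact commands run in this case):

    # before rebooting the node: keep the cluster from marking OSDs out
    ceph osd set noout
    ceph osd set nodown

    # ... reboot the node and wait for its OSDs to rejoin ...

    ceph osd unset nodown
    ceph osd unset noout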

Re: [PVE-User] Ceph: Some trouble creating OSD with journal on a software raid device...

2016-12-16 Thread Brian ::
This is probably by design.. On Thu, Dec 15, 2016 at 11:48 AM, Marco Gaiarin wrote: > > Sorry, I came back to this topic because I've done some more tests. > > Seems that the 'pveceph' tool has some trouble creating an OSD with the journal > on a ''nonstandard'' partition, for example on a
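For readers following the thread, the operation in question is OSD creation with a separate journal device via pveceph; a hedged sketch of the command shape (device names are placeholders, and whether a software RAID (md) partition is accepted as the journal device is exactly what is being questioned here):

    # create an OSD on /dev/sdb, putting its journal on a separate device;
    # pointing the journal at an md/software-RAID partition is what reportedly fails
    pveceph createosd /dev/sdb -journal_dev /dev/sdc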