Re: [ceph-users] Brand new cluster -- pg is stuck inactive

2017-10-14 Thread dE
On 10/15/2017 03:13 AM, Denes Dolhay wrote: Hello, Could you include the monitors and the osds in your clock skew test as well? How did you create the osds? ceph-deploy osd create osd1:/dev/sdX osd2:/dev/sdY osd3:/dev/sdZ ? Some log from one of the osds would be great! Kind regards,

Re: [ceph-users] osd max scrubs not honored?

2017-10-14 Thread J David
On Sat, Oct 14, 2017 at 9:33 AM, David Turner wrote: > First, there is no need to deep scrub your PGs every 2 days. They aren’t being deep scrubbed every two days, nor is there any attempt (or desire) to do so. That would require 8+ scrubs running at once. Currently,

Re: [ceph-users] Brand new cluster -- pg is stuck inactive

2017-10-14 Thread Denes Dolhay
Hello, Could you include the monitors and the osds in your clock skew test as well? How did you create the osds? ceph-deploy osd create osd1:/dev/sdX osd2:/dev/sdY osd3:/dev/sdZ ? Some log from one of the osds would be great! Kind regards, Denes. On 10/14/2017 07:39 PM, dE wrote: On

Re: [ceph-users] Ceph iSCSI login failed due to authorization failure

2017-10-14 Thread Maged Mokhtar
On 2017-10-14 17:50, Kashif Mumtaz wrote: > Hello Dear, > > I am trying to configure the Ceph iscsi gateway on Ceph Luminous, as per > > Ceph iSCSI Gateway — Ceph Documentation [1] > > Ceph iscsi gateways are configured

Re: [ceph-users] Brand new cluster -- pg is stuck inactive

2017-10-14 Thread dE
On 10/14/2017 08:18 PM, David Turner wrote: What are the ownership permissions on your osd folders? Clock skew cares about partial seconds. It isn't the networking issue because your cluster isn't stuck peering. I'm not sure if the creating state happens on disk or in the cluster. On

Re: [ceph-users] Ceph iSCSI login failed due to authorization failure

2017-10-14 Thread Jason Dillaman
Have you set the CHAP username and password on both sides (and ensured that the initiator IQN matches)? On the initiator side, you would run the following before attempting to log into the portal: iscsiadm --mode node --targetname --op=update --name node.session.auth.authmethod --value=CHAP
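For reference, the same initiator-side CHAP settings can also be placed in the open-iscsi daemon config instead of per-node iscsiadm updates. A sketch of /etc/iscsi/iscsid.conf; the username/password values are placeholders, not from this thread:

```ini
# /etc/iscsi/iscsid.conf (initiator side)
# Must match the CHAP credentials configured on the gateway.
node.session.auth.authmethod = CHAP
node.session.auth.username = myiscsiuser
node.session.auth.password = myiscsipassword
```

Settings here apply to targets discovered after the change; already-discovered nodes still need the iscsiadm `--op=update` calls shown above.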

[ceph-users] Backup VM (Base image + snapshot)

2017-10-14 Thread Oscar Segarra
Hi, In my VDI environment I have configured the suggested ceph design/architecture: http://docs.ceph.com/docs/giant/rbd/rbd-snapshot/ Where I have a Base Image + Protected Snapshot + 100 clones (one for each persistent VDI). Now, I'd like to configure a backup script/mechanism to perform
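One common approach for this layout is snapshot-based incremental backup with `rbd export-diff`. A minimal sketch, assuming the rbd CLI and hypothetical pool/image/destination names (`vdi`, `vm01`, `/backup`); `DRY_RUN=1` only prints the commands instead of running them:

```shell
# run: execute a command, or just print it when DRY_RUN=1
run() { if [ "${DRY_RUN:-0}" = "1" ]; then echo "$@"; else "$@"; fi; }

# backup_image POOL IMAGE SNAP [FROM_SNAP]
# Takes a new snapshot, then exports either a full image dump or,
# when FROM_SNAP is given, only the delta since that snapshot.
backup_image() {
  pool=$1; image=$2; snap=$3; from=$4
  run rbd snap create "$pool/$image@$snap"
  if [ -n "$from" ]; then
    run rbd export-diff --from-snap "$from" "$pool/$image@$snap" "/backup/$image-$snap.diff"
  else
    run rbd export-diff "$pool/$image@$snap" "/backup/$image-$snap.diff"
  fi
}

# Dry-run example for one clone, incremental against yesterday's snapshot:
DRY_RUN=1
backup_image vdi vm01 20171014 20171013
```

On restore, the first full export is imported and each subsequent diff is applied in order with `rbd import-diff`.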

[ceph-users] Ceph iSCSI login failed due to authorization failure

2017-10-14 Thread Kashif Mumtaz
Hello Dear, I am trying to configure the Ceph iscsi gateway on Ceph Luminous, as per below: Ceph iSCSI Gateway — Ceph Documentation. Ceph iscsi gateways are configured and chap auth is set. /> ls o- /

[ceph-users] Re: assert(objiter->second->version > last_divergent_update) when testing pull out disk and insert

2017-10-14 Thread zhaomingyue
1. This assert happened accidentally and is not easy to reproduce. In fact, I also suspect this assert is caused by lost device data; but if data has been lost, how can it occur that (last_update + 1 = log.rbegin.version)? In the case of lost data, the result is more likely to be inconsistent. At present, this situation can't

Re: [ceph-users] osd max scrubs not honored?

2017-10-14 Thread David Turner
A few things. First, there is no need to deep scrub your PGs every 2 days. Schedule it out so it's closer to a month or so. If you have a really bad power hiccup, up the schedule to check for consistency. Second, you said "Intel SSD DC S3700 1GB divided into three partitions used for Bluestore
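The cadence David suggests maps onto the standard OSD scrub options in ceph.conf. A sketch (intervals are in seconds; the monthly value is just the "closer to a month" suggestion from this thread):

```ini
[osd]
# at most one scrub per OSD at a time (the default)
osd max scrubs = 1
# deep scrub each PG roughly monthly instead of the weekly default
osd deep scrub interval = 2592000
# allow regular scrubs to spread over up to a month as well
osd scrub max interval = 2592000
```

These can also be injected into running OSDs to take effect without a restart.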

Re: [ceph-users] Brand new cluster -- pg is stuck inactive

2017-10-14 Thread David Turner
What is the output of your `ceph status`? On Fri, Oct 13, 2017, 10:09 PM dE wrote: > On 10/14/2017 12:53 AM, David Turner wrote: > > What does your environment look like? Someone recently on the mailing > list had PGs stuck creating because of a networking issue. > > On

Re: [ceph-users] using Bcache on blueStore

2017-10-14 Thread Jorge Pinilla López
Okay, I get your point; it's much safer without a cache at all. I am speaking from total ignorance, so please correct me if I say something wrong. What I don't really understand is how badly DB space is used. 1- When it's a new OSD, it might be totally empty, but it's not used for storing any actual

Re: [ceph-users] Questions about bluestore

2017-10-14 Thread Jorge Pinilla López
There are 2 configs to set the size of your DB and WAL: bluestore_block_db_size and bluestore_block_wal_size. If you have an SSD you should give as much space as you can to the DB and not worry about the WAL (the WAL will always be placed on the fastest device). I am not sure about hot-moving the DB but

Re: [ceph-users] Questions about bluestore

2017-10-14 Thread Mario Giammarco
Nobody can help me? On Fri, 6 Oct 2017 at 07:31, Mario Giammarco wrote: > Hello, > I am trying Ceph luminous with Bluestore. > > I create an osd: > > ceph-disk prepare --bluestore /dev/sdg --block.db /dev/sdf > > and I see that on ssd it creates a partition of only 1g
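The 1 GB partition comes from the default bluestore block.db size used at OSD creation time; it can be overridden in ceph.conf before running ceph-disk prepare. A sketch (sizes are in bytes, and 20 GB is only an illustrative value, not a recommendation from this thread):

```ini
[global]
# block.db partition size carved out by ceph-disk at OSD creation
bluestore block db size = 21474836480   # ~20 GB
# optional: explicit WAL size; otherwise the default applies
bluestore block wal size = 1073741824   # ~1 GB
```

The setting only affects newly prepared OSDs; existing OSDs keep the partition size they were created with.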