Re: [ceph-users] Rbd map command doesn't work

2016-08-18 Thread EP Komarla
To: Somnath Roy <somnath@sandisk.com> Cc: EP Komarla <ep.koma...@flextronics.com>; ceph-users@lists.ceph.com Subject: Re: [ceph-users] Rbd map command doesn't work EP, Try setting the crush map to use legacy tunables. I've had the same issue with the "feature mismatch" errors when
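
For reference, a minimal sketch of the tunables workaround being suggested (profile names follow the Ceph docs of that era; lowering tunables triggers CRUSH remapping and backfill, so confirm on your own cluster before running it):

    ceph osd crush show-tunables        # inspect the profile the cluster currently requires
    ceph osd crush tunables hammer      # relax tunables for older kernel clients ("legacy" is the oldest profile)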

Re: [ceph-users] Rbd map command doesn't work

2016-08-16 Thread EP Komarla
9 missing required protocol features [1204486.821279] libceph: mon0 172.20.60.51:6789 feature set mismatch, my 102b84a842a42 < server's 40102b84a842a42, missing 400 From: Somnath Roy [mailto:somnath@sandisk.com] Sent: Tuesday, August 16, 2016 3:59 PM To: EP Komarla <ep.koma...@flextron
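
The "missing 400..." value in that log line is just the bitwise difference of the two feature masks; a quick way to recover the full bit, with the caveat that mapping bit numbers to feature names must be checked against the feature table for your Ceph release:

    python -c 'print(hex(0x40102b84a842a42 ^ 0x102b84a842a42))'
    # -> 0x400000000000000, i.e. feature bit 58, which the kernel client does not advertise

On Jewel-era clusters that bit commonly corresponds to the CRUSH_TUNABLES5 group, which stock older CentOS kernels do not support, hence the tunables workaround discussed in this thread.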

[ceph-users] Rbd map command doesn't work

2016-08-16 Thread EP Komarla
atures [1198606.813825] libceph: mon1 172.20.60.52:6789 feature set mismatch, my 102b84a842a42 < server's 40102b84a842a42, missing 400 [1198606.820929] libceph: mon1 172.20.60.52:6789 missing required protocol features [test@ep-c2-client-01 ~]$ sudo rbd map rbd/test1 EP KOMARLA, [Flex_
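
A minimal client-side check for the same symptom, assuming a stock CentOS krbd client (nothing here changes cluster state):

    dmesg | grep libceph | tail     # the feature-mismatch lines quoted above
    uname -r                        # the running kernel decides which CRUSH tunables krbd understands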

[ceph-users] rbd readahead settings

2016-08-15 Thread EP Komarla
, - epk EP KOMARLA, [Flex_RGB_Sml_tm] Email: ep.koma...@flextronics.com Address: 677 Gibraltor Ct, Building #2, Milpitas, CA 94035, USA Phone: 408-674-6090 (mobile) Legal Disclaimer: The information contained in this message may be privileged and confidential. It is intended to be read only
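
For reference, the librbd readahead options from that era live in the [client] section of ceph.conf; a sketch with illustrative values (option names and defaults should be confirmed against the documentation for your release):

    [client]
        rbd readahead trigger requests = 10            # sequential reads needed before readahead starts
        rbd readahead max bytes = 524288               # max readahead per request; 0 disables readahead
        rbd readahead disable after bytes = 52428800   # stop readahead after this many bytes; 0 = never stop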

[ceph-users] Ceph-deploy on Jewel error

2016-08-03 Thread EP Komarla
[ep-c2-mon-01][DEBUG ] You could try running: rpm -Va --nofiles --nodigest [ep-c2-mon-01][ERROR ] RuntimeError: command returned non-zero exit status: 1 [ceph_deploy][ERROR ] RuntimeError: Failed to execute command: yum -y install ceph ceph-radosgw EP KOMARLA, [Flex_RGB_Sml_tm] Email: ep.koma
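
ceph-deploy is only wrapping yum here, so the usual next step is to run the same install by hand on the failing node and look at the real package or repository error; a sketch (hostnames and repo contents are site-specific):

    sudo yum clean all
    sudo yum -y install ceph ceph-radosgw      # the exact command ceph-deploy reported as failing
    cat /etc/yum.repos.d/ceph.repo             # verify the Ceph repo that ceph-deploy configured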

Re: [ceph-users] Ceph performance pattern

2016-07-27 Thread EP Komarla
I am using O_DIRECT=1 -Original Message- From: Mark Nelson [mailto:mnel...@redhat.com] Sent: Wednesday, July 27, 2016 8:33 AM To: EP Komarla <ep.koma...@flextronics.com>; ceph-users@lists.ceph.com Subject: Re: [ceph-users] Ceph performance pattern Ok. Are you using O_

Re: [ceph-users] Ceph performance pattern

2016-07-27 Thread EP Komarla
cause havoc with RBD sequential reads in general. Mark On 07/26/2016 06:38 PM, EP Komarla wrote: > Hi, > > > > I am showing below fio results for Sequential Read on my Ceph cluster. > I am trying to understand this pattern: > > > > - why there is a dip in the perfor
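
If readahead interaction is the suspect, one quick thing to check on a krbd client is the block-layer readahead of the mapped device; a sketch (device name illustrative):

    cat /sys/block/rbd0/queue/read_ahead_kb                     # current readahead for the mapped image, in KB
    echo 4096 | sudo tee /sys/block/rbd0/queue/read_ahead_kb    # change it and re-run the 32k-256k points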

Re: [ceph-users] Ceph performance pattern

2016-07-26 Thread EP Komarla
Thanks Somnath. I am running with CentOS 7.2. Have you seen this pattern before? - epk From: Somnath Roy [mailto:somnath@sandisk.com] Sent: Tuesday, July 26, 2016 4:44 PM To: EP Komarla <ep.koma...@flextronics.com>; ceph-users@lists.ceph.com Subject: RE: Ceph performance pattern Wh

[ceph-users] Ceph performance pattern

2016-07-26 Thread EP Komarla
Hi, I am showing below fio results for sequential read on my Ceph cluster. I am trying to understand this pattern: - why is there a dip in the performance for block sizes 32k-256k? - is this an expected performance graph? - have you seen this kind of pattern before
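
A minimal fio sweep of the kind being discussed, expressed as command-line options rather than a job file; the filename, iodepth and runtime are placeholders, and direct=1 matches the O_DIRECT follow-up above:

    for bs in 4k 16k 32k 64k 128k 256k 1m 4m; do
        fio --name=seqread-$bs --rw=read --bs=$bs --direct=1 \
            --ioengine=libaio --iodepth=16 --runtime=60 --time_based \
            --filename=/dev/rbd0 --output=seqread-$bs.log
    done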

[ceph-users] Ceph performance calculator

2016-07-22 Thread EP Komarla
Team, Have a performance related question on Ceph. I know the performance of a Ceph cluster depends on many factors, such as the type of storage servers, processors (number of processors, raw per-processor performance), memory, network links, type of disks, journal disks, etc. On top of the hardware
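
There is no calculator that captures all of those variables, but a common back-of-envelope ceiling for filestore-era clusters is disks x per-disk throughput / replication, then derated for journal, network and CPU overhead; a sketch with purely illustrative numbers:

    echo $(( 120 * 100 / 3 )) MB/s     # 120 HDDs at ~100 MB/s each, 3x replication -> ~4000 MB/s aggregate write ceiling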

Re: [ceph-users] OSD dropped out, now trying to get them back on to the cluster

2016-07-18 Thread EP Komarla
The first question I have is why some disks/OSDs showed a status of 'DOWN' - there was no activity on the cluster. Last night all the OSDs were up. What can cause OSDs to go down? - epk From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of EP Komarla Sent
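
A few standard checks when OSDs drop out on an otherwise idle cluster; all of these are stock Ceph/systemd commands (the OSD id is illustrative):

    ceph health detail                       # which OSDs are down/out and what the monitors report
    ceph osd tree | grep down                # which hosts the down OSDs belong to
    sudo systemctl status ceph-osd@12        # is the daemon dead, or only marked down?
    sudo journalctl -u ceph-osd@12 -n 100    # look for suicide timeouts, disk I/O errors, OOM kills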

[ceph-users] OSD dropped out, now trying to get them back on to the cluster

2016-07-18 Thread EP Komarla
on how to bring these OSDs back? I know I am making some mistake, but can't figure out what. Thanks in advance, - epk EP KOMARLA, [Flex_RGB_Sml_tm] Email: ep.koma...@flextronics.com Address: 677 Gibraltor Ct, Building #2, Milpitas, CA 94035, USA Phone: 408-674-6090 (mobile) Legal Disclaimer
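
For completeness, the usual way to bring a down/out OSD back on a Jewel-era CentOS 7 node, assuming the disk itself is healthy (OSD id illustrative):

    sudo systemctl start ceph-osd@12   # restart the daemon on the node that owns the OSD
    ceph osd in 12                     # mark it back in if it had been marked out
    ceph -w                            # watch recovery/backfill complete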

Re: [ceph-users] rbd command anomaly

2016-07-13 Thread EP Komarla
Thanks. It works. From: c.y. lee [mailto:c...@inwinstack.com] Sent: Wednesday, July 13, 2016 6:17 PM To: EP Komarla <ep.koma...@flextronics.com> Cc: ceph-users@lists.ceph.com Subject: Re: [ceph-users] rbd command anomaly Hi, You need to specify pool name. rbd -p testpool info tes
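
Both spellings of the pool work; a small sketch using the image names from this thread:

    rbd -p testpool info testvol11     # pool given with -p/--pool
    rbd info testpool/testvol11        # equivalent pool/image spec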

[ceph-users] rbd command anomaly

2016-07-13 Thread EP Komarla
Hi, I am seeing an issue. I created 5 images testvol11-15 and mapped them to /dev/rbd0-4. When I execute the command 'rbd showmapped', it correctly shows the images and the mappings as shown below: [root@ep-compute-2-16 run1]# rbd showmapped id pool image snap device 0 testpool

[ceph-users] Question on Sequential Write performance at 4K blocksize

2016-07-13 Thread EP Komarla
Hi All, Have a question on the performance of sequential write @ 4K block sizes. Here is my configuration: Ceph cluster: 6 nodes; each node with 20x HDDs (OSDs, 10K RPM 1.2 TB SAS disks) and 4x SSDs (Intel S3710, 400GB) for OSD journals shared across the 20 HDDs (i.e., SSD:HDD journal ratio of 1:5)

[ceph-users] Ceph OSD journal utilization

2016-06-17 Thread EP Komarla
Hi, I am looking for a way to monitor the utilization of OSD journals - by observing the utilization pattern over time, I can determine whether I have over-provisioned them or not. Is there a way to do this? When I googled this topic, I saw one similar request about 4 years back. I am
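
Two low-effort ways to watch filestore journal usage that were available at the time; a sketch only, with the OSD id and journal device illustrative:

    sudo ceph daemon osd.0 perf dump | grep journal    # per-OSD journal_* counters from the admin socket
    iostat -x 5 /dev/sdb                               # device-level utilisation of the SSD holding the journals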

Re: [ceph-users] Do you see a data loss if a SSD hosting several OSD journals crashes

2016-05-20 Thread EP Komarla
So, which is correct: must all replicas be written before the ack, or only min_size? But for me the takeaway is that writes are protected - even if the journal drive crashes, I am covered. - epk -Original Message- From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of
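
For reference, the two pool settings being discussed are easy to inspect (pool name illustrative); size is the number of replicas written, while min_size is the floor at which a PG will still accept I/O:

    ceph osd pool get rbd size         # replicas kept per object
    ceph osd pool get rbd min_size     # fewest replicas at which the PG still serves I/O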

[ceph-users] NVRAM cards as OSD journals

2016-05-20 Thread EP Komarla
Hi, I am contemplating using an NVRAM card for OSD journals in place of SSD drives in our Ceph cluster. Configuration: * 4 Ceph servers * Each server has 24 OSDs (each OSD is a 1TB SAS drive) * 1 PCIe NVRAM card of 16GB capacity per Ceph server * Both Client &
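
Some back-of-envelope arithmetic for that layout, using the filestore journal-sizing guideline (the expected per-OSD throughput and the sync interval are placeholders to adjust):

    # guideline: osd journal size >= 2 * expected throughput * filestore max sync interval
    echo $(( 16 * 1024 / 24 )) MB available per OSD journal   # 16 GB NVRAM shared by 24 OSDs -> ~682 MB each
    echo $(( 2 * 100 * 5 )) MB suggested per OSD journal      # 100 MB/s per HDD, 5 s sync interval -> 1000 MB

On those illustrative numbers the shared 16 GB card is on the small side unless the sync interval is lowered.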

[ceph-users] Do you see a data loss if a SSD hosting several OSD journals crashes

2016-05-19 Thread EP Komarla
* We are trying to assess whether we are going to see data loss if an SSD that is hosting journals for a few OSDs crashes. In our configuration, each SSD is partitioned into 5 chunks and each chunk is mapped as a journal drive for one OSD. What I understand from the Ceph documentation:
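
A quick way, on ceph-disk based deployments of that era, to see which OSDs share a given journal SSD and would therefore fail together (device paths illustrative):

    sudo ceph-disk list                          # shows each data partition and the journal partition it uses
    ls -l /var/lib/ceph/osd/ceph-*/journal       # per-OSD journal symlinks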