Re: [ceph-users] Where is source/rpm package of jewel(10.2.10) ?

2018-01-04 Thread Chengguang Xu
Sorry for the noise, please ignore this, I just misread the number. Thanks, Chengguang. > On 5 Jan 2018, at 2:49 PM, Chengguang Xu wrote: > > Hello, > > Let me ask a simple stupid question: where can I get the source/RPM package of > jewel (10.2.10)? > > I looked at

[ceph-users] Where is source/rpm package of jewel(10.2.10) ?

2018-01-04 Thread Chengguang Xu
Hello, Let me ask a simple stupid question: where can I get the source/RPM package of jewel (10.2.10)? I looked at download.ceph.com carefully for the latest version of jewel but found nothing. Thanks, Chengguang.
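For anyone hitting the same question later: the Jewel RPMs are normally published under download.ceph.com/rpm-jewel/, so a minimal yum repo entry (paths assumed from the usual download.ceph.com layout - verify them against the site) would look roughly like:

[ceph-jewel]
name=Ceph jewel packages
baseurl=https://download.ceph.com/rpm-jewel/el7/x86_64/
enabled=1
gpgcheck=1
gpgkey=https://download.ceph.com/keys/release.asc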

Re: [ceph-users] "ceph -s" shows no osds

2018-01-04 Thread Hüseyin Atatür YILDIRIM
Thanks a lot, Sergey. I looked into upgrading ceph-deploy and found that "pip install" is the most reasonable option; a normal software repo install (i.e. sudo apt install ceph-deploy) always installs version 1.5.32. Do you agree with this? Regards, Atatür From: Sergey Malinin
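A sketch of the pip-based upgrade being discussed (the package name is the real one; the exact version pip pulls depends on when you run it):

$ sudo pip install --upgrade ceph-deploy
$ ceph-deploy --version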

[ceph-users] Reduced data availability: 4 pgs inactive, 4 pgs incomplete

2018-01-04 Thread Brent Kennedy
We have upgraded from Hammer to Jewel and then Luminous 12.2.2 as of today. During the Hammer-to-Jewel upgrade we lost two host servers and let the cluster rebalance/recover; it ran out of space and stalled. We then added three new host servers and let the cluster rebalance/recover. During
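For readers in a similar state, the usual first diagnostic steps for inactive/incomplete PGs look something like this (the pg id is a placeholder):

# ceph health detail
# ceph pg dump_stuck inactive
# ceph pg <pgid> query        # e.g. 1.2f3 - check "recovery_state" and "down_osds_we_would_probe"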

Re: [ceph-users] iSCSI over RBD

2018-01-04 Thread Joshua Chen
Hello Steven, I am using CentOS 7.4.1708 with kernel 3.10.0-693.el7.x86_64 and the following packages: ceph-iscsi-cli-2.5-9.el7.centos.noarch.rpm ceph-iscsi-config-2.3-12.el7.centos.noarch.rpm libtcmu-1.3.0-0.4.el7.centos.x86_64.rpm libtcmu-devel-1.3.0-0.4.el7.centos.x86_64.rpm

Re: [ceph-users] iSCSI over RBD

2018-01-04 Thread Joshua Chen
Hello Michael, Thanks for the reply. I did check the ceph doc at http://docs.ceph.com/docs/master/rbd/iscsi-target-cli/ And yes, I need ACL instead of CHAP user/password, but I will negotiate with my colleagues about changing the management style. Really appreciate you pointing out the bug in the doc

Re: [ceph-users] ceph.conf not found

2018-01-04 Thread David Turner
If you named your cluster anything other than ceph, hopefully you can go back and rename it ceph. If not, you need to run 'ceph --cluster home -s'. Every single command you ever run against the cluster will need to have the cluster name specified. Not every tool out there is compatible with
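A minimal sketch of what that looks like in practice (cluster name "home" taken from this thread; whether the plain env-var form works depends on your ceph CLI honoring CEPH_ARGS):

# ceph --cluster home -s
# export CEPH_ARGS="--cluster home"
# ceph -s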

Re: [ceph-users] ceph.conf not found

2018-01-04 Thread Donny Davis
Change the name of the cluster to ceph and create /etc/ceph/ceph.conf On Thu, Jan 4, 2018 at 6:31 PM, Nathan Dehnel wrote: > Hey, I get this error: > > gentooserver ~ # ceph -s > 2018-01-04 14:38:35.390154 7f0a6bae8700 -1 Errors while parsing config > file! > 2018-01-04
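If it helps, a minimal /etc/ceph/ceph.conf for a default-named cluster only needs the fsid and monitor address(es) - values below are placeholders. You will also need a readable client keyring (e.g. /etc/ceph/ceph.client.admin.keyring) for commands like "ceph -s" to authenticate.

[global]
fsid = <your cluster fsid>
mon_host = <mon ip address>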

[ceph-users] ceph.conf not found

2018-01-04 Thread Nathan Dehnel
Hey, I get this error: gentooserver ~ # ceph -s 2018-01-04 14:38:35.390154 7f0a6bae8700 -1 Errors while parsing config file! 2018-01-04 14:38:35.390157 7f0a6bae8700 -1 parse_file: cannot open /etc/ceph/ceph.conf: (2) No such file or directory 2018-01-04 14:38:35.390158 7f0a6bae8700 -1 parse_file:

Re: [ceph-users] Cephalocon 2018?

2018-01-04 Thread David Turner
I'm getting a vibe that this still isn't going to happen. I'd like to get my tickets purchased, including hotel, but there isn't a venue yet. With the conference only 2.5 months away, the details aren't nailed down well enough for people to book their hotels... I'm also worried that there's

Re: [ceph-users] MDS cache size limits

2018-01-04 Thread Patrick Donnelly
Hello Stefan, On Thu, Jan 4, 2018 at 1:45 AM, Stefan Kooman wrote: > I have a question about the "mds_cache_memory_limit" parameter and MDS > memory usage. We currently have set mds_cache_memory_limit=150G. > The MDS server itself (and its active-standby) have 256 GB of RAM. >

Re: [ceph-users] mon_max_pg_per_osd setting not active? too many PGs per OSD (240 > max 200)

2018-01-04 Thread Stefan Priebe - Profihost AG
On 04.01.2018 at 18:37, Gregory Farnum wrote: > On Thu, Jan 4, 2018 at 4:57 AM Stefan Priebe - Profihost AG > > wrote: > > Hello, > > I set mon_max_pg_per_osd to 300 but the cluster stays in a warn state. > > # ceph -s

Re: [ceph-users] object lifecycle and updating from jewel

2018-01-04 Thread Ben Hines
Yes, it works fine with pre-existing buckets. On Thu, Jan 4, 2018 at 8:52 AM, Graham Allan wrote: > I've only done light testing with lifecycle so far, but I'm pretty sure > you can apply it to pre-existing buckets. > > Graham > > > On 01/02/2018 10:42 PM, Robert Stanford wrote: >
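A sketch of how a lifecycle rule can be attached to an already-existing bucket through the S3 API (endpoint, bucket name and rule body are made-up examples, and the rule shape may need adapting to what your radosgw release supports):

$ cat lifecycle.json
{"Rules":[{"ID":"expire-old","Status":"Enabled","Prefix":"","Expiration":{"Days":30}}]}
$ aws s3api put-bucket-lifecycle-configuration --bucket mybucket \
      --endpoint-url http://rgw.example.com \
      --lifecycle-configuration file://lifecycle.json
# radosgw-admin lc list        # on the RGW side, shows buckets with lifecycle configured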

[ceph-users] help needed after an outage - Is it possible to rebuild a bucket index ?

2018-01-04 Thread Vincent Godin
Yesterday we had an outage on our ceph cluster. One OSD was looping on << [call rgw.bucket_complete_op] snapc 0=[] ack+ondisk+write+known_if_redirected e359833) currently waiting for degraded object >> for hours, blocking all requests to this OSD, and then ... We had to delete the degraded
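Not an answer to the outage itself, but for the bucket index question: radosgw has a built-in consistency check that can rebuild index entries from the objects it finds (bucket name is a placeholder; run it without --fix first to see what it would change):

# radosgw-admin bucket check --bucket=<bucket>
# radosgw-admin bucket check --bucket=<bucket> --check-objects --fix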

Re: [ceph-users] iSCSI over RBD

2018-01-04 Thread Steven Vacaroaia
Hi Joshua, How did you manage to use the iSCSI gateway? I would like to do that but am still waiting for a patched kernel. What kernel/OS did you use and/or how did you patch it? Thanks, Steven On 4 January 2018 at 04:50, Joshua Chen wrote: > Dear all, > Although I

Re: [ceph-users] mon_max_pg_per_osd setting not active? too many PGs per OSD (240 > max 200)

2018-01-04 Thread Gregory Farnum
On Thu, Jan 4, 2018 at 4:57 AM Stefan Priebe - Profihost AG < s.pri...@profihost.ag> wrote: > Hello, > > I set mon_max_pg_per_osd to 300 but the cluster stays in a warn state. > > # ceph -s > cluster: > id: 5482b798-0bf1-4adb-8d7a-1cd57bdc1905 > health: HEALTH_WARN > too

Re: [ceph-users] Ceph scrub logs: _scan_snaps no head for $object?

2018-01-04 Thread Konstantin Shalygin
On 01/04/2018 11:53 PM, Stefan Kooman wrote: OpenNebula 5.4.3 (issuing rbd commands to the ceph cluster). Yes! And which librbd is installed on the "command issuer"? k

Re: [ceph-users] Ceph scrub logs: _scan_snaps no head for $object?

2018-01-04 Thread Stefan Kooman
Quoting Konstantin Shalygin (k0...@k0ste.ru): > On 01/04/2018 11:38 PM, Stefan Kooman wrote: > >Only luminous clients. Mostly rbd (qemu-kvm) images. > > Who manages your images? Maybe OpenStack Cinder? OpenNebula 5.4.3 (issuing rbd commands to the ceph cluster). Gr. Stefan -- | BIT BV

Re: [ceph-users] object lifecycle and updating from jewel

2018-01-04 Thread Graham Allan
I've only done light testing with lifecycle so far, but I'm pretty sure you can apply it to pre-existing buckets. Graham On 01/02/2018 10:42 PM, Robert Stanford wrote: I would like to use the new object lifecycle feature of kraken / luminous. I have jewel, with buckets that have lots and

Re: [ceph-users] iSCSI over RBD

2018-01-04 Thread Michael Christie
On 01/04/2018 03:50 AM, Joshua Chen wrote: > Dear all, > Although I managed to run gwcli and created some IQNs and LUNs, > I do need a working config example so that my initiator can > connect and get the LUN. > > I am familiar with targetcli and I used to do the following ACL style

Re: [ceph-users] Ceph scrub logs: _scan_snaps no head for $object?

2018-01-04 Thread Konstantin Shalygin
On 01/04/2018 11:38 PM, Stefan Kooman wrote: Only luminous clients. Mostly rbd (qemu-kvm) images. Who manages your images? Maybe OpenStack Cinder? k

Re: [ceph-users] Performance issues on Luminous

2018-01-04 Thread Rafał Wądołowski
They are configured with bluestore. The network, CPU and disks are doing nothing; I was observing with atop, iostat and top. I have a similar hardware configuration on jewel (with filestore), and it performs well. Cheers, Rafał Wądołowski On 04.01.2018 17:05, Luis Periquito wrote:

Re: [ceph-users] Ceph scrub logs: _scan_snaps no head for $object?

2018-01-04 Thread Stefan Kooman
Quoting Konstantin Shalygin (k0...@k0ste.ru): > >This is still a pre-production cluster. Most tests have been done > >using rbd. We did make some rbd clones / snapshots here and there. > > What clients did you use? Only luminous clients. Mostly rbd (qemu-kvm) images. Gr. Stefan -- | BIT BV

[ceph-users] Linux Meltdown (KPTI) fix and how it affects performance?

2018-01-04 Thread Nick Fisk
Hi All, As the KPTI fix largely only affects performance where a large number of syscalls are made, which Ceph does a lot of, I was wondering if anybody has had a chance to perform any initial tests. I suspect small-write latencies will be the worst affected. Although I'm thinking the
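A sketch of the kind of latency test being asked about, using fio's rbd engine against a scratch image (pool/image/client names are placeholders), plus a rough check of whether the patched kernel has page table isolation active:

$ fio --name=4k-qd1-write --ioengine=rbd --clientname=admin --pool=rbd --rbdname=test \
      --rw=randwrite --bs=4k --iodepth=1 --numjobs=1 --runtime=60 --time_based
# dmesg | grep -i isolation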

Re: [ceph-users] Performance issues on Luminous

2018-01-04 Thread Luis Periquito
You never said whether it was bluestore or filestore. Can you look at the server to see which component is being stressed (network, CPU, disk)? Utilities like atop are very handy for this. Regarding those specific SSDs: they are particularly bad when running for some time without trimming - performance
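On the trim point, a hedged sketch: for filestore OSDs the data partition is a mounted filesystem, so it can be trimmed online; for a worn consumer SSD that is being (re)deployed, a full blkdiscard beforehand often restores write performance - but it wipes the device, so only on an empty/decommissioned disk. Paths are assumptions:

# fstrim -v /var/lib/ceph/osd/ceph-0      # filestore mount point
# blkdiscard /dev/sdX                     # DANGEROUS: destroys all data; only before redeploying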

Re: [ceph-users] Ceph scrub logs: _scan_snaps no head for $object?

2018-01-04 Thread ceph
On 3 January 2018 08:59:41 CET, Stefan Kooman wrote: >Quoting Sage Weil (s...@newdream.net): >> Hi Stefan, Mehmet, >> >> Are these clusters that were upgraded from prior versions, or fresh >> luminous installs? > >Fresh luminous install... The cluster was installed with

Re: [ceph-users] Performance issues on Luminous

2018-01-04 Thread Rafał Wądołowski
I have a size of 2. We know about this risk and we accept it, but we still don't know why performance is so bad. Cheers, Rafał Wądołowski On 04.01.2018 16:51, c...@elchaka.de wrote: I assume you have a size of 3; divide your expected 400 by 3 and you are not far away from what you get...

Re: [ceph-users] Performance issues on Luminous

2018-01-04 Thread ceph
I assume you have a size of 3; divide your expected 400 by 3 and you are not far away from what you get... In addition, you should never use consumer-grade SSDs for Ceph, as they will reach their DWPD limit very soon... On 4 January 2018 09:54:55 CET, "Rafał Wądołowski" wrote
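Spelling out the back-of-the-envelope math (the ~400 figure is taken from this thread, presumably MB/s, so treat it as illustrative):

expected raw throughput ~400
size=3: ~400 / 3 ≈ 133 of client writes (each client write is stored on 3 OSDs)
size=2: ~400 / 2 = 200, before any journal/WAL write amplification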

Re: [ceph-users] Ceph scrub logs: _scan_snaps no head for $object?

2018-01-04 Thread Konstantin Shalygin
This is still a pre-production cluster. Most tests have been done using rbd. We did make some rbd clones / snapshots here and there. What clients did you use? k

Re: [ceph-users] Bluestore: inaccurate disk usage statistics problem?

2018-01-04 Thread Igor Fedotov
On 1/4/2018 5:52 PM, Sage Weil wrote: On Thu, 4 Jan 2018, Igor Fedotov wrote: On 1/4/2018 5:27 PM, Sage Weil wrote: On Thu, 4 Jan 2018, Igor Fedotov wrote: An additional issue with the disk usage statistics I've just realized is that BlueStore's statfs call reports total disk space as block

Re: [ceph-users] Bluestore: inaccurate disk usage statistics problem?

2018-01-04 Thread Sage Weil
On Thu, 4 Jan 2018, Igor Fedotov wrote: > On 1/4/2018 5:27 PM, Sage Weil wrote: > > On Thu, 4 Jan 2018, Igor Fedotov wrote: > > > An additional issue with the disk usage statistics I've just realized is that > > > BlueStore's statfs call reports total disk space as > > > > > > block device total

Re: [ceph-users] Bluestore: inaccurate disk usage statistics problem?

2018-01-04 Thread Igor Fedotov
On 1/4/2018 5:27 PM, Sage Weil wrote: On Thu, 4 Jan 2018, Igor Fedotov wrote: An additional issue with the disk usage statistics I've just realized is that BlueStore's statfs call reports total disk space as block device total space + DB device total space, while available space is measured

Re: [ceph-users] Bluestore: inaccurate disk usage statistics problem?

2018-01-04 Thread Sage Weil
On Thu, 4 Jan 2018, Igor Fedotov wrote: > An additional issue with the disk usage statistics I've just realized is that > BlueStore's statfs call reports total disk space as > > block device total space + DB device total space > > while available space is measured as > > block device's free

Re: [ceph-users] data cleaup/disposal process

2018-01-04 Thread Sergey Malinin
http://cephnotes.ksperis.com/blog/2014/07/04/remove-big-rbd-image From: ceph-users on behalf of M Ranga Swami Reddy Sent: Thursday, January 4, 2018 3:55:27 PM To: ceph-users; ceph-devel Subject:

[ceph-users] mon_max_pg_per_osd setting not active? too many PGs per OSD (240 > max 200)

2018-01-04 Thread Stefan Priebe - Profihost AG
Hello, I set mon_max_pg_per_osd to 300 but the cluster stays in a warn state. # ceph -s cluster: id: 5482b798-0bf1-4adb-8d7a-1cd57bdc1905 health: HEALTH_WARN too many PGs per OSD (240 > max 200) # ceph --admin-daemon /var/run/ceph/ceph-mon.1.asok config show|grep -i
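A sketch of how the option is usually applied and verified (whether injectargs takes effect without a restart depends on the option and release, so the admin-socket check from the original post is the part to trust):

# ceph tell mon.* injectargs '--mon_max_pg_per_osd=300'
# ceph --admin-daemon /var/run/ceph/ceph-mon.1.asok config show | grep mon_max_pg_per_osd

and persist it in ceph.conf on the mon/mgr hosts:

[global]
mon_max_pg_per_osd = 300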

[ceph-users] data cleaup/disposal process

2018-01-04 Thread M Ranga Swami Reddy
Hello, In Ceph, is there a way to clean up data before deleting an image? Meaning: wipe the data with zeros before deleting the image. Please let me know if you have any suggestions here. Thanks, Swami
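One possible sketch (pool/image names are placeholders): map the image, overwrite it with zeros, then remove it. Note this only zeroes the RBD objects themselves; whether anything remains on the underlying OSD media afterwards depends on the backend, so judge it against your own threat model.

# rbd map rbd/myimage
# dd if=/dev/zero of=/dev/rbd0 bs=4M oflag=direct status=progress
# rbd unmap /dev/rbd0
# rbd rm rbd/myimage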

Re: [ceph-users] "ceph -s" shows no osds

2018-01-04 Thread Sergey Malinin
Mgr installation was introduced in 1.5.38; you need to upgrade ceph-deploy. From: Hüseyin Atatür YILDIRIM Sent: Thursday, January 4, 2018 2:01:57 PM To: Sergey Malinin; ceph-users@lists.ceph.com Subject: RE: [ceph-users] "ceph -s" shows
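Once ceph-deploy is new enough, the mgr step itself is a single command (node name is a placeholder):

$ ceph-deploy mgr create mon1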

Re: [ceph-users] Bluestore: inaccurate disk usage statistics problem?

2018-01-04 Thread Igor Fedotov
An additional issue with the disk usage statistics I've just realized is that BlueStore's statfs call reports total disk space as block device total space + DB device total space, while available space is measured as block device's free space + bluefs free space at block device -
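Illustrative numbers for the effect being described (made up, just to show the shape of the problem): with a 1000 GB block device plus a 50 GB DB device on a freshly created OSD, statfs would report total = 1000 + 50 = 1050 GB but available = ~1000 GB (block free only), so the OSD already appears to have ~50 GB "used" before any data has been written.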

[ceph-users] One object degraded cause all ceph requests hang - Jewel 10.2.6 (rbd + radosgw)

2018-01-04 Thread Vincent Godin
Yesterday we just encountered this bug. One OSD was looping on "2018-01-03 16:20:59.148121 7f011a6a1700 0 log_channel(cluster) log [WRN] : slow request 30.254269 seconds old, received at 2018-01-03 16:20:28.883837: osd_op(client.48285929.0:14601958 35.8abfc02e
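For anyone debugging a similar hang, the blocked requests can be inspected on the affected OSD via its admin socket (the OSD id is a placeholder):

# ceph daemon osd.<id> dump_ops_in_flight
# ceph daemon osd.<id> dump_historic_ops
# ceph health detail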

Re: [ceph-users] Ceph Developer Monthly - January 2018

2018-01-04 Thread Leonardo Vaz
On Wed, Jan 03, 2018 at 01:38:24AM -0200, Leonardo Vaz wrote: > Hey Cephers, > > This is just a friendly reminder that the next Ceph Developer Monthly > meeting is coming up: > > http://wiki.ceph.com/Planning > > If you have work that you're doing that is feature work, significant >

Re: [ceph-users] "ceph -s" shows no osds

2018-01-04 Thread Hüseyin Atatür YILDIRIM
Hi, ceph-deploy --version 1.5.32 Thank you, Atatür From: Sergey Malinin [mailto:h...@newmail.com] Sent: Thursday, January 4, 2018 12:51 PM To: Hüseyin Atatür YILDIRIM ; ceph-users@lists.ceph.com Subject: Re: [ceph-users] "ceph -s" shows no osds What is your

Re: [ceph-users] "ceph -s" shows no osds

2018-01-04 Thread Sergey Malinin
What is your “ceph-deploy --version”? From: Hüseyin Atatür YILDIRIM Sent: Thursday, January 4, 2018 9:14:39 AM To: Sergey Malinin; ceph-users@lists.ceph.com Subject: RE: [ceph-users] "ceph -s" shows no osds Hello Sergey, I issued the

Re: [ceph-users] iSCSI over RBD

2018-01-04 Thread Joshua Chen
Dear all, Although I managed to run gwcli and created some IQNs and LUNs, I do need a working config example so that my initiator can connect and get the LUN. I am familiar with targetcli and I used to do the following ACL style connection rather than password; the targetcli setting
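Not a verified config, but a rough sketch of the gwcli flow from the iscsi-target-cli doc mentioned elsewhere in this thread - all names/IPs are placeholders and the exact command syntax differs between ceph-iscsi-cli versions, so treat this as a starting point only:

/> cd /iscsi-target
/iscsi-target> create iqn.2003-01.org.linux-iscsi.ceph-gw:iscsi-igw
/iscsi-target> cd iqn.2003-01.org.linux-iscsi.ceph-gw:iscsi-igw/gateways
.../gateways> create ceph-gw-1 10.0.0.1
/> cd /disks
/disks> create pool=rbd image=disk_1 size=90G
/> cd /iscsi-target/iqn.2003-01.org.linux-iscsi.ceph-gw:iscsi-igw/hosts
.../hosts> create iqn.1994-05.com.redhat:rh7-client
.../rh7-client> disk add rbd.disk_1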

[ceph-users] MDS cache size limits

2018-01-04 Thread Stefan Kooman
Hi Ceph fs'ers I have a question about the "mds_cache_memory_limit" parameter and MDS memory usage. We currently have set mds_cache_memory_limit=150G. The MDS server itself (and its active-standby) have 256 GB of RAM. Eventually the MDS process will consume ~ 87.5% of available memory. At that
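A sketch of how the limit is typically set and inspected (the daemon name is a placeholder; the byte value is 150 GiB, taken from this thread's numbers):

[mds]
mds_cache_memory_limit = 161061273600    # 150 * 1024^3 bytes

# ceph daemon mds.<name> config get mds_cache_memory_limit
# ceph daemon mds.<name> cache status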

Re: [ceph-users] rbd-nbd timeout and crash

2018-01-04 Thread Jan Pekař - Imatic
Sorry for the late answer. No - I'm not mounting with trimming, only noatime. The problem is that the cluster was highly loaded, so there were timeouts. I "solved" it by compiling https://github.com/jerome-pouiller/ioctl and setting the NBD_SET_TIMEOUT ioctl timeout after creating the device. With regards, Jan

[ceph-users] Performance issues on Luminous

2018-01-04 Thread Rafał Wądołowski
Hi folks, I am currently benchmarking my cluster because of a performance issue and I have no idea what is going on. I am using these devices in qemu. Ceph version 12.2.2. Infrastructure: 3 x ceph-mon, 11 x ceph-osd. Each ceph-osd node has 22 x 1TB Samsung SSD 850 EVO, 96GB RAM, 2x E5-2650 v4, 4x10G
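For comparing clusters at the RADOS level (before qemu/librbd gets involved), a plain rados bench run is a useful baseline (pool name is a placeholder; use a throwaway pool):

# rados bench -p testpool 60 write -b 4M -t 16 --no-cleanup
# rados bench -p testpool 60 seq -t 16
# rados -p testpool cleanup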