Re: [ceph-users] ceph-fuse segfaults ( jewel 10.2.2)

2016-07-18 Thread Goncalo Borges
Hi All... We do have some good news. As promised, I've recompiled ceph 10.2.2 (on an Intel processor without AVX2) with and without the patch provided by Zheng. It turns out that Zheng's patch _*is*_ the solution for the segfaults we saw in ObjectCacher when ceph-fuse runs on AMD 62xx
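For anyone trying to tell which of the two builds applies to their hardware, a quick generic check (not from the thread itself):

  # grep -c avx2 /proc/cpuinfo    # 0 means the CPU does not advertise AVX2
  # ceph-fuse --version           # confirm which ceph-fuse build is installed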

Re: [ceph-users] Slow performance into windows VM

2016-07-18 Thread K K
robocopy in Windows has the flag /MT:N, where N is the thread count. With /MT:24 I get 20-30 MB/sec copying from one VM instance to another. This is all after disabling scrub during working hours. >Tuesday, 12 July 2016, 5:44 +05:00 from Christian Balzer: > > >Hello, > >scrub settings will only
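For reference, a multithreaded robocopy call along these lines (source and destination paths are placeholders):

  robocopy C:\data \\othervm\share /E /MT:24

/E copies subdirectories (including empty ones) and /MT:24 runs the copy with 24 threads.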

Re: [ceph-users] OSD dropped out, now trying to get them back on to the cluster

2016-07-18 Thread EP Komarla
The first question I have is to understand why some disks/OSDs showed status of 'DOWN' - there was no activity on the cluster. Last night all the OSDs were up. What can cause OSDs to go down? - epk From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of EP Komarla Sent:

[ceph-users] OSD dropped out, now trying to get them back on to the cluster

2016-07-18 Thread EP Komarla
Hi, I have created a cluster with the below configuration: - 6 storage nodes, each with 20 disks - I have a total of 120 OSDs. The cluster was working fine. All of a sudden this morning I noticed some OSDs (7 to be exact) were down on one server. I rebooted the server, 4 OSDs came back. Three
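A hedged set of commands for narrowing this down, assuming a systemd-based Jewel install (the OSD id 12 is a placeholder):

  # ceph osd tree | grep down                       # which OSDs are still down
  # systemctl status ceph-osd@12                    # did the daemon start at all?
  # systemctl start ceph-osd@12                     # try starting it again
  # journalctl -u ceph-osd@12 --since "1 hour ago"  # look for mount or assert errors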

[ceph-users] OSD / Journal disk failure

2016-07-18 Thread Pei Feng Lin
Dear Cephers: I have two questions that need advice. 1) If there is an OSD disk failure (for example, pulling a disk out), how long does it take the OSD daemon to detect the disk failure? And how long does it take the Ceph cluster to mark this OSD daemon down? Is there any config option to allow the ceph cluster to
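Roughly, "down" detection is driven by OSD heartbeats and the subsequent "out" marking by a monitor timer. A sketch of the relevant ceph.conf options, assuming the usual defaults (verify the values against your release's documentation):

  [osd]
  osd heartbeat grace = 20          # seconds of missed heartbeats before peers report the OSD down

  [mon]
  mon osd down out interval = 600   # seconds a down OSD waits before being marked out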

Re: [ceph-users] S3 API - Canonical user ID

2016-07-18 Thread Victor Efimov
2016-07-19 1:21 GMT+03:00 Robin H. Johnson : > On Mon, Jul 18, 2016 at 10:48:16AM +0300, Victor Efimov wrote: >> > xmlns="http://s3.amazonaws.com/doc/2006-03-01/;>someownerSOMEOWNER >> >> note the "someowner" is used as id. >> Problem that S3-compatible library that I use

Re: [ceph-users] S3 API - Canonical user ID

2016-07-18 Thread Robin H. Johnson
On Mon, Jul 18, 2016 at 10:48:16AM +0300, Victor Efimov wrote: > xmlns="http://s3.amazonaws.com/doc/2006-03-01/;>someownerSOMEOWNER > > note the "someowner" is used as id. > Problem that S3-compatible library that I use crashes on this, it > expects 64 character hex string. > > According to S3

Re: [ceph-users] ceph admin socket from non root

2016-07-18 Thread Gregory Farnum
I'm not familiar with how it's set up, but skimming and searching through the code I'm not seeing anything, no. We've got a chown but no chmod. That's a reasonable feature idea though, and presumably you could add a chmod to your init scripts? -Greg On Mon, Jul 18, 2016 at 3:02 AM, Stefan
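A minimal sketch of that kind of workaround, assuming the sockets live under /var/run/ceph and that a dedicated "monitoring" group exists (both assumptions):

  # chgrp monitoring /var/run/ceph/*.asok
  # chmod g+rw /var/run/ceph/*.asok

Since the sockets are recreated on daemon restart, the two commands would need to be re-run from the init script or a restart hook.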

Re: [ceph-users] mds standby + standby-reply upgrade

2016-07-18 Thread Dzianis Kahanovich
Patrick Donnelly writes: >> Infernalis: e5165: 1/1/1 up {0=c=up:active}, 1 up:standby-replay, 1 >> up:standby >> >> Now after upgrade start and next mon restart, the active monitor fails with >> "assert(info.state == MDSMap::STATE_STANDBY)" (even without a running mds) . > > This is the first time

Re: [ceph-users] CephFS | Recursive stats not displaying with GNU ls

2016-07-18 Thread Gregory Farnum
Specifically, this has caused trouble with certain (common?) rsync configurations. -Greg On Monday, July 18, 2016, David wrote: > Thanks Zheng, I should have checked that. > > Sean, from the commit: > > When rbytes mount option is enabled, directory size is recursive

Re: [ceph-users] CephFS | Recursive stats not displaying with GNU ls

2016-07-18 Thread David
Thanks Zheng, I should have checked that. Sean, from the commit: When rbytes mount option is enabled, directory size is recursive size. Recursive size is not updated instantly. This can cause directory size to change between successive stat(1) On Mon, Jul 18, 2016 at 2:49 PM, Sean Redmond

Re: [ceph-users] Ceph RBD object-map and discard in VM

2016-07-18 Thread Vaibhav Bhembre
Updated the issue with zipped copies of raw LTTng files. Thanks for taking a look! I will also look at fixing the linking issue on librados/ceph-osd side and send a PR up. On 07/18, Jason Dillaman wrote: Any chance you can zip up the raw LTTng-UST files and attach them to the ticket? It

[ceph-users] Unknown error (95->500) when creating buckets or putting files to RGW after upgrade from Infernalis to Jewel

2016-07-18 Thread Naruszewicz, Maciej
Hi, We recently upgraded our Ceph Cluster to Jewel including RGW. Everything seems to be in order except for RGW which doesn't let us create buckets or add new files. # s3cmd --version s3cmd version 1.6.1 # s3cmd mb s3://test WARNING: Retrying failed request: / WARNING: 500 (UnknownError)
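Some commands that are often useful when chasing Jewel RGW upgrade problems (hedged: exact realm/zonegroup handling depends on the setup):

  # radosgw-admin zone get
  # radosgw-admin zonegroup get
  # radosgw-admin period update --commit

Raising the gateway's debug level (debug rgw = 20) usually reveals the actual error behind the generic 500/UnknownError.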

Re: [ceph-users] PG stuck remapped+incomplete

2016-07-18 Thread Hein-Pieter van Braam
Hi all, I seem to have forgotten to mention my setup. I have  Ceph Hammer (ceph version 0.94.7 (d56bdf93ced6b80b07397d57e3fa68fe68304432) CentOS 7.2 w/ Linux 4.4.13 The pool in question is an EC pool on SSD with an SSD cache pool in front for RBD.  I've done some more digging and I really
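Commands that help when staring at an incomplete PG (the PG id 1.2f3 is a placeholder):

  # ceph health detail | grep incomplete
  # ceph pg 1.2f3 query | less       # shows peering state and which OSDs it is waiting on
  # ceph pg dump_stuck inactive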

Re: [ceph-users] CephFS | Recursive stats not displaying with GNU ls

2016-07-18 Thread Sean Redmond
Hi, Is this disabled because it's not a stable feature, or just user preference? Thanks On Mon, Jul 18, 2016 at 2:37 PM, Yan, Zheng wrote: > On Mon, Jul 18, 2016 at 9:00 PM, David wrote: > > Hi all > > > > Recursive statistics on directories are no

Re: [ceph-users] CephFS | Recursive stats not displaying with GNU ls

2016-07-18 Thread Yan, Zheng
On Mon, Jul 18, 2016 at 9:00 PM, David wrote: > Hi all > > Recursive statistics on directories are no longer showing on an ls -l output > but getfattr is accurate: > > # ls -l > total 0 > drwxr-xr-x 1 root root 3 Jul 18 12:42 dir1 > drwxr-xr-x 1 root root 0 Jul 18 12:42

[ceph-users] CephFS | Recursive stats not displaying with GNU ls

2016-07-18 Thread David
Hi all Recursive statistics on directories are no longer showing on an ls -l output but getfattr is accurate: # ls -l total 0 drwxr-xr-x 1 root root 3 Jul 18 12:42 dir1 drwxr-xr-x 1 root root 0 Jul 18 12:42 dir2 # getfattr -d -m ceph.dir.* dir1 # file: dir1 ceph.dir.entries="3"
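If the recursive sizes are wanted back in the ls output, the knob for the kernel client is roughly the following (treat the exact option names as something to verify for your client version):

  # mount -t ceph mon1:6789:/ /mnt/cephfs -o name=admin,secretfile=/etc/ceph/admin.secret,rbytes

and the recursive size can always be read explicitly regardless of the mount option:

  # getfattr -n ceph.dir.rbytes /mnt/cephfs/dir1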

Re: [ceph-users] Ceph RBD object-map and discard in VM

2016-07-18 Thread Jason Dillaman
Any chance you can zip up the raw LTTng-UST files and attach them to the ticket? It appears that the rbd-replay-prep tool doesn't translate discard events. The change sounds good to me -- but it would also need to be made in librados and ceph-osd since I'm sure they would have the same
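A hedged sketch of capturing the librbd tracepoints with LTTng-UST (assumes rbd tracing = true is set for the client and that the lttng tools are installed):

  # lttng create rbd-trace
  # lttng enable-event --userspace 'librbd:*'
  # lttng start
    ... run the workload in the VM/client ...
  # lttng stop
  # lttng view > rbd-trace.txt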

Re: [ceph-users] ceph health

2016-07-18 Thread Martin Palma
I assume you installed Ceph using 'ceph-deploy'. I noticed the same thing on CentOS when deploying a cluster for testing... As Wido already noted the OSDs are marked as down & out. From each OSD node you can do a "ceph-disk activate-all" to start the OSDs. On Mon, Jul 18, 2016 at 12:59 PM, Wido

Re: [ceph-users] ceph health

2016-07-18 Thread Wido den Hollander
> On 18 July 2016 at 11:49, Ivan Koortzen wrote: > > > Hi All, > > I'm quite new to ceph but did an initial setup on these Virtual Machines: > > 1x Ceph admin > 3x Ceph mons > 3x Ceph OSDs > > each OSD has 3x 100GB drives, and 3x 20GB journals > > After

Re: [ceph-users] Problem with auto mounts osd on v10.2.2

2016-07-18 Thread Henrik Korkuc
On 16-07-18 13:37, Eduard Ahmatgareev wrote: Hi guys. Could you help me with a small problem? We have a new installation of ceph version 10.2.2 and we have some interesting trouble with auto-mounting OSDs after rebooting a storage node. We are forced to mount the OSDs manually after reboot, and then the OSDs work fine.

[ceph-users] Problem with auto mounts osd on v10.2.2

2016-07-18 Thread Eduard Ahmatgareev
Hi guys. Could you help me with a small problem? We have a new installation of ceph version 10.2.2 and we have some interesting trouble with auto-mounting OSDs after rebooting a storage node. We are forced to mount the OSDs manually after reboot, and then the OSDs work fine. But in the previous version, 0.94.5, it was
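Until the root cause is found, a common stop-gap after each reboot (assuming ceph-disk is present and the data/journal partitions themselves are healthy) is:

  # ceph-disk list            # check which partitions are recognised as ceph data/journal
  # ceph-disk activate-all    # mount and start every prepared OSD found on the node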

Re: [ceph-users] New to Ceph - osd autostart problem

2016-07-18 Thread Ruben Kerkhof
On Mon, Jul 18, 2016 at 12:20 PM, Henrik Korkuc wrote: > This file was removed by Sage: > > commit 9f76b9ff31525eac01f04450d72559ec99927496 > Author: Sage Weil > Date: Mon Apr 18 09:16:02 2016 -0400 > > udev: remove 60-ceph-partuuid-workaround-rules > >

Re: [ceph-users] New to Ceph - osd autostart problem

2016-07-18 Thread Henrik Korkuc
On 16-07-18 11:11, Henrik Korkuc wrote: On 16-07-18 10:53, Henrik Korkuc wrote: On 16-07-15 10:40, Oliver Dzombic wrote: Hi, Partition GUID code: 4FBD7E29-9D25-41B8-AFD0-062C0CEFF05D (Unknown) Partition unique GUID: 79FD1B30-F5AA-4033-BA03-8C7D0A7D49F5 First sector: 256 (at 1024.0 KiB) Last

Re: [ceph-users] ceph admin socket from non root

2016-07-18 Thread Stefan Priebe - Profihost AG
Nobody? Is it at least possible with jewel to give the sockets group write permissions? On 10.07.2016 at 23:51, Stefan Priebe - Profihost AG wrote: > Hi, > > is there a proposed way to connect from non-root, e.g. a monitoring > system, to the ceph admin socket? > > In the past they were

Re: [ceph-users] ceph health

2016-07-18 Thread Oliver Dzombic
Hi, please show the output of: ceph osd pool ls detail and also ceph health detail. -- Mit freundlichen Gruessen / Best regards Oliver Dzombic IP-Interactive mailto:i...@ip-interactive.de Anschrift: IP Interactive UG ( haftungsbeschraenkt ) Zum Sonnenberg 1-3 63571 Gelnhausen HRB

[ceph-users] ceph health

2016-07-18 Thread Ivan Koortzen
Hi All, I'm quite new to ceph but did an initial setup on these Virtual Machines: 1x Ceph admin 3x Ceph mons 3x Ceph OSDs each OSD has 3x 100GB drives, and 3x 20GB journals After the initial setup of Ceph and running # ceph health I get the following error. Any help would be appreciated!

Re: [ceph-users] how to use cache tiering with proxy in ceph-10.2.2

2016-07-18 Thread m13913886148
thank you very much!  On Monday, July 18, 2016 5:31 PM, Oliver Dzombic wrote: Hi, everything is here: http://docs.ceph.com/docs/jewel/ except osd_tier_promote_max_bytes_sec and other stuff, but its enough there that you can make it work. -- Mit

Re: [ceph-users] how to use cache tiering with proxy in ceph-10.2.2

2016-07-18 Thread Oliver Dzombic
Hi, everything is here: http://docs.ceph.com/docs/jewel/ except osd_tier_promote_max_bytes_sec and other stuff, but it's enough there that you can make it work. -- Mit freundlichen Gruessen / Best regards Oliver Dzombic IP-Interactive mailto:i...@ip-interactive.de Anschrift: IP

Re: [ceph-users] how to use cache tiering with proxy in ceph-10.2.2

2016-07-18 Thread m13913886148
Where can I find the base docs? The official website does not update the document. On Monday, July 18, 2016 5:16 PM, Oliver Dzombic wrote: Hi, I suggest you read some basic docs about that. osd_tier_promote_max_bytes_sec = how many bytes per second are going to the tier

Re: [ceph-users] how to use cache tiering with proxy in ceph-10.2.2

2016-07-18 Thread Oliver Dzombic
Hi, I suggest you read some basic docs about that. osd_tier_promote_max_bytes_sec = how many bytes per second are going to the tier. ceph osd pool set ssd-pool target_max_bytes = maximum size in bytes on this specific pool (it's like a quota). -- Mit freundlichen Gruessen / Best regards Oliver
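To make the distinction concrete (ssd-pool as above; the numbers are only examples):

  # ceph tell osd.* injectargs '--osd_tier_promote_max_bytes_sec 5242880'   # per-OSD promotion throttle, ~5 MB/s
  # ceph osd pool set ssd-pool target_max_bytes 107374182400                # ~100 GB cap before flush/evict kicks in

The first limits how fast objects get promoted into the cache tier; the second tells the tiering agent when to start flushing and evicting.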

Re: [ceph-users] how to use cache tiering with proxy in ceph-10.2.2

2016-07-18 Thread m13913886148
what is "osd_tier_promote_max_bytes_sec" in ceph.conf file  and command "ceph osd pool set ssd-pool target_max_bytes" are not the same ? On Monday, July 18, 2016 4:40 PM, Oliver Dzombic wrote: Hi, osd_tier_promote_max_bytes_sec is your friend. -- Mit

Re: [ceph-users] Is anyone seeing issues with task_numa_find_cpu?

2016-07-18 Thread Василий Ангапов
Guys, this bug is hitting me constantly, maybe once every several days. Does anyone know whether there is a solution already? 2016-07-05 11:47 GMT+03:00 Nick Fisk : >> -Original Message- >> From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of >> Alex Gorbachev

Re: [ceph-users] how to use cache tiering with proxy in ceph-10.2.2

2016-07-18 Thread Oliver Dzombic
Hi, osd_tier_promote_max_bytes_sec is your friend. -- Mit freundlichen Gruessen / Best regards Oliver Dzombic IP-Interactive mailto:i...@ip-interactive.de Anschrift: IP Interactive UG ( haftungsbeschraenkt ) Zum Sonnenberg 1-3 63571 Gelnhausen HRB 93402 beim Amtsgericht Hanau

[ceph-users] how to use cache tiering with proxy in ceph-10.2.2

2016-07-18 Thread m13913886148
Hello Cephers! I have a problem like this: I want to configure cache tiering on my Ceph cluster in writeback mode. In ceph-0.94 it runs OK: IO goes first through the hot-pool, then it is flushed to the cold-pool. But in ceph-10.2.2 it doesn't work like this: IO is written to the hot-pool and the cold-pool at the same time.
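For context, the usual writeback tier setup looks roughly like this (pool names as above, values purely illustrative); in Jewel, promotions on top of this are additionally throttled by the osd_tier_promote_* options discussed elsewhere in the thread:

  # ceph osd tier add cold-pool hot-pool
  # ceph osd tier cache-mode hot-pool writeback
  # ceph osd tier set-overlay cold-pool hot-pool
  # ceph osd pool set hot-pool hit_set_type bloom
  # ceph osd pool set hot-pool hit_set_count 1
  # ceph osd pool set hot-pool hit_set_period 3600
  # ceph osd pool set hot-pool target_max_bytes 107374182400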

Re: [ceph-users] New to Ceph - osd autostart problem

2016-07-18 Thread Henrik Korkuc
On 16-07-18 10:53, Henrik Korkuc wrote: On 16-07-15 10:40, Oliver Dzombic wrote: Hi, Partition GUID code: 4FBD7E29-9D25-41B8-AFD0-062C0CEFF05D (Unknown) Partition unique GUID: 79FD1B30-F5AA-4033-BA03-8C7D0A7D49F5 First sector: 256 (at 1024.0 KiB) Last sector: 976754640 (at 3.6 TiB) Partition

Re: [ceph-users] New to Ceph - osd autostart problem

2016-07-18 Thread Henrik Korkuc
On 16-07-15 10:40, Oliver Dzombic wrote: Hi, Partition GUID code: 4FBD7E29-9D25-41B8-AFD0-062C0CEFF05D (Unknown) Partition unique GUID: 79FD1B30-F5AA-4033-BA03-8C7D0A7D49F5 First sector: 256 (at 1024.0 KiB) Last sector: 976754640 (at 3.6 TiB) Partition size: 976754385 sectors (3.6 TiB)

[ceph-users] S3 API - Canonical user ID

2016-07-18 Thread Victor Efimov
Hello. I made a "list all buckets" request and got the following response (part of it): http://s3.amazonaws.com/doc/2006-03-01/;>someownerSOMEOWNER Note that "someowner" is used as the ID. The problem is that the S3-compatible library I use crashes on this; it expects a 64-character hex string.
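For readers: the mangled XML above is presumably the Owner element of the ListAllMyBucketsResult, along the lines of (reconstructed, not verbatim):

  <Owner>
    <ID>someowner</ID>
    <DisplayName>SOMEOWNER</DisplayName>
  </Owner>

On AWS S3 the ID field carries the 64-character hex canonical user ID, whereas RGW returns its own user id string there, which is what trips up the library.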

Re: [ceph-users] Physical maintainance

2016-07-18 Thread Kees Meijs
Hi, Thanks guys, this worked like a charm. Activating the OSDs wasn't necessary: it seemed udev(7) helped me with that. Cheers, Kees On 13-07-16 14:47, Kees Meijs wrote: > So to sum up, I'd best: > > * set the noout flag > * stop the OSDs one by one > * shut down the physical node > *
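For the archives, the sequence boils down to roughly this (OSD ids and the init system are assumptions; older releases use upstart/sysvinit instead of systemd):

  # ceph osd set noout
  # systemctl stop ceph-osd@3      # repeat for every OSD on the node
  # shutdown -h now                # do the physical maintenance, power back on
  # ceph osd unset noout           # once the OSDs have rejoined and PGs are active+clean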

[ceph-users] [RGW] how to choise the best placement groups ?

2016-07-18 Thread Khang Nguyễn Nhật
Hi all, I have a cluster consisting of: 3 monitors, 1 RGW, 1 host with 24 OSDs (2TB/OSD) and some pools such as: ap-southeast.rgw.data.root ap-southeast.rgw.control ap-southeast.rgw.gc ap-southeast.rgw.log ap-southeast.rgw.intent-log ap-southeast.rgw.usage
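A hedged rule of thumb (the usual pgcalc guidance, not a tuned answer for this exact layout): aim for roughly 100 PGs per OSD across all pools combined, so with 24 OSDs and a replica size of 3 that is about (24 x 100) / 3 = 800 PGs in total, rounded to a power of two (512 or 1024). Most of that budget goes to the bucket data (and index) pool, while the small control/log/gc/usage pools only need small values such as 8 or 16 each.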