Re: [ceph-users] OSDs are down, don't know why

2016-01-18 Thread Jeff Epstein
Hi Steve, Thanks for your answer. I don't have a private network defined. Furthermore, in my current testing configuration there is only one OSD, so communication between OSDs should be a non-issue. Do you know how OSD up/down state is determined when there is only one OSD? Best, Jeff On

[ceph-users] Ceph and NFS

2016-01-18 Thread david
Hello All. Does anyone provide Ceph rbd/rgw/cephfs through NFS? I have a requirement for a Ceph cluster that needs to provide NFS service.

Re: [ceph-users] OSD Capacity via Python / C API

2016-01-18 Thread Wido den Hollander
On 18-01-16 10:22, Alex Leake wrote: > Hello All. > > > Does anyone know if it's possible to retrieve the remaining OSD capacity > via the Python or C API? > Using a mon_command in librados you can send an 'osd df' if you want to. See this snippet:
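The snippet Wido refers to is not included in this digest; what follows is only a minimal sketch of the approach he describes, using the python-rados bindings, not his original code. It assumes a readable /etc/ceph/ceph.conf, a keyring with monitor access, and the usual JSON layout of the 'osd df' output.

import json
import rados

# Connect using the settings in ceph.conf and send 'osd df' to the monitors.
cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
cluster.connect()
cmd = json.dumps({'prefix': 'osd df', 'format': 'json'})
ret, outbuf, outs = cluster.mon_command(cmd, b'', timeout=5)
if ret != 0:
    raise RuntimeError(outs)

# 'nodes' holds one entry per OSD with its usage counters.
for node in json.loads(outbuf)['nodes']:
    print(node['name'], node['kb_used'], node['kb_avail'], node['utilization'])

cluster.shutdown()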

[ceph-users] OSD Capacity via Python / C API

2016-01-18 Thread Alex Leake
Hello All. Does anyone know if it's possible to retrieve the remaining OSD capacity via the Python or C API? I can get all sorts of other information, but I thought it would be nice to see near-full OSDs via the API. Kind Regards, Alex.

Re: [ceph-users] CentOS 7 iscsi gateway using lrbd

2016-01-18 Thread Василий Ангапов
https://github.com/swiftgist/lrbd/wiki According to the lrbd wiki it still uses KRBD (see those /dev/rbd/... devices in the targetcli config). I thought Mike Christie had developed a librbd module for LIO. So which is it, KRBD or librbd? 2016-01-18 20:23 GMT+08:00 Tyler Bishop

Re: [ceph-users] CRUSH Rule Review - Not replicating correctly

2016-01-18 Thread deeepdish
Thanks Robert. Will definitely try this. Is there a way to implement “gradual CRUSH” changes? I noticed that whenever cluster-wide changes are pushed (a new crush map, for instance) the cluster immediately attempts to realign itself, disrupting client access/performance… > On Jan 18, 2016, at

Re: [ceph-users] CRUSH Rule Review - Not replicating correctly

2016-01-18 Thread Robert LeBlanc
Not that I know of. - Robert LeBlanc PGP Fingerprint 79A2 9CA4 6CC4 45DD A904 C70E E654 3BB2 FA62 B9F1 On Mon, Jan 18, 2016 at 10:33 AM, deeepdish wrote: > Thanks Robert. Will definitely try this. Is there a way to implement

Re: [ceph-users] CRUSH Rule Review - Not replicating correctly

2016-01-18 Thread Robert LeBlanc
I'm not sure why you have six monitors. Six monitors buy you nothing over five other than more power use, more latency, and more headache. See http://docs.ceph.com/docs/hammer/rados/configuration/mon-config-ref/#monitor-quorum
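For illustration (not part of the original message), the majority-quorum arithmetic behind that advice, as a small Python sketch:

# Monitor quorum is a strict majority of the configured monitors, so an even
# count tolerates no more failures than the odd count just below it.
for mons in range(3, 8):
    quorum = mons // 2 + 1      # smallest strict majority
    tolerated = mons - quorum   # monitors that can fail while keeping quorum
    print(f"{mons} mons -> quorum {quorum}, tolerates {tolerated} failures")
# 5 and 6 monitors both tolerate 2 failures, hence six buys you nothing over five.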

Re: [ceph-users] OSDs are down, don't know why

2016-01-18 Thread Jeff Epstein
Unfortunately, I haven't seen any obviously suspicious log messages from either the OSD or the MON. Is there a way to query detailed information on OSD monitoring, e.g. heartbeats? On 01/18/2016 05:54 PM, Steve Taylor wrote: With a single OSD there shouldn't be much to worry about. It will have
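One way to approach this from the monitor side (a sketch, not an answer from the thread): 'osd dump' reports each OSD's up/in flags together with the addresses it registered for heartbeats, and it can be fetched with the same python-rados mon_command approach as the 'osd df' snippet earlier in this digest. The JSON field names below are assumptions based on that output format.

import json
import rados

# Ask the monitors for their view of every OSD: up/in flags plus heartbeat addresses.
cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
cluster.connect()
ret, outbuf, outs = cluster.mon_command(
    json.dumps({'prefix': 'osd dump', 'format': 'json'}), b'')
if ret != 0:
    raise RuntimeError(outs)

for osd in json.loads(outbuf)['osds']:
    print('osd.%d up=%d in=%d hb_back=%s' % (
        osd['osd'], osd['up'], osd['in'], osd.get('heartbeat_back_addr')))

cluster.shutdown()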

[ceph-users] CephFS

2016-01-18 Thread Gregory Farnum
On Sunday, January 17, 2016, James Gallagher wrote: > Hi, > > I'm looking to implement CephFS on my Firefly release (v0.80) with > an XFS native file system, but so far I'm having some difficulties.

Re: [ceph-users] CephFS

2016-01-18 Thread Ilya Dryomov
On Sun, Jan 17, 2016 at 6:34 PM, James Gallagher wrote: > Hi, > > I'm looking to implement CephFS on my Firefly release (v0.80) with an > XFS native file system, but so far I'm having some difficulties. After > following the ceph/qsg and creating a storage

[ceph-users] CentOS 7 iscsi gateway using lrbd

2016-01-18 Thread Dominik Zalewski
Hi, I'm looking into implementing an iSCSI gateway with MPIO using lrbd - https://github.com/swiftgist/lrbd https://www.suse.com/docrep/documents/kgu61iyowz/suse_enterprise_storage_2_and_iscsi.pdf https://www.susecon.com/doc/2015/sessions/TUT16512.pdf From the above examples: *For iSCSI failover

Re: [ceph-users] Again - state of Ceph NVMe and SSDs

2016-01-18 Thread Tyler Bishop
Check these out too: http://www.seagate.com/internal-hard-drives/solid-state-hybrid/1200-ssd/ - Original Message - From: "Christian Balzer" To: "ceph-users" Sent: Sunday, January 17, 2016 10:45:56 PM Subject: Re: [ceph-users] Again - state of

Re: [ceph-users] Ceph and NFS

2016-01-18 Thread Burkhard Linke
Hi, On 18.01.2016 10:36, david wrote: Hello All. Does anyone provide Ceph rbd/rgw/cephfs through NFS? I have a requirement for a Ceph cluster that needs to provide NFS service. We export a CephFS mount point on one of our NFS servers. It works out of the box with Ubuntu Trusty, a

Re: [ceph-users] Ceph and NFS

2016-01-18 Thread Tyler Bishop
You should test out CephFS exported as an NFS target. - Original Message - From: "david" To: ceph-users@lists.ceph.com Sent: Monday, January 18, 2016 4:36:17 AM Subject: [ceph-users] Ceph and NFS Hello All. Does anyone provide Ceph rbd/rgw/cephfs through NFS?

Re: [ceph-users] CentOS 7 iscsi gateway using lrbd

2016-01-18 Thread Tyler Bishop
Well, that's interesting. I've mapped block devices with the kernel client and exported them over iSCSI, but the performance was horrible. I wonder if this is any different? From: "Dominik Zalewski" To: ceph-users@lists.ceph.com Sent: Monday, January 18, 2016 6:35:20 AM

Re: [ceph-users] Ceph and NFS

2016-01-18 Thread Arthur Liu
On Mon, Jan 18, 2016 at 11:34 PM, Burkhard Linke < burkhard.li...@computational.bio.uni-giessen.de> wrote: > Hi, > > On 18.01.2016 10:36, david wrote: > >> Hello All. >> Does anyone provides Ceph rbd/rgw/cephfs through NFS? I have a >> requirement about Ceph Cluster which needs to

Re: [ceph-users] Again - state of Ceph NVMe and SSDs

2016-01-18 Thread Mark Nelson
On 01/16/2016 12:06 PM, David wrote: Hi! We’re planning our third Ceph cluster and have been trying to find out how to maximize IOPS on this one. Our needs: * Pool for MySQL, rbd (mounted as /var/lib/mysql or equivalent on KVM servers) * Pool for storage of many small files, rbd (probably dovecot

Re: [ceph-users] Ceph Cache pool redundancy requirements.

2016-01-18 Thread Robert LeBlanc
From what I understand, scrub only compares PG copies within the same pool, so there would not be much benefit to scrubbing a single-replica pool until Ceph starts storing hashes of the metadata and data. Then you would only know that your data

Re: [ceph-users] OSDs are down, don't know why

2016-01-18 Thread Steve Taylor
With a single OSD there shouldn't be much to worry about. It will have to catch up on map epochs before it will report itself as up, but on a new cluster that should be pretty immediate. You'll probably have to look for clues in the OSD and mon logs. I would expect some sort of error

Re: [ceph-users] Again - state of Ceph NVMe and SSDs

2016-01-18 Thread Tyler Bishop
One of the other guys on the list here benchmarked them. They spanked every other SSD on the *recommended* tree. - Original Message - From: "Gregory Farnum" To: "Tyler Bishop" Cc: "David" , "Ceph Users"

Re: [ceph-users] Again - state of Ceph NVMe and SSDs

2016-01-18 Thread Gregory Farnum
On Sun, Jan 17, 2016 at 12:34 PM, Tyler Bishop wrote: > The changes you are looking for are coming from Sandisk in the ceph "Jewel" > release coming up. > > Based on benchmarks and testing, sandisk has really contributed heavily on > the tuning aspects and are

Re: [ceph-users] Again - state of Ceph NVMe and SSDs

2016-01-18 Thread Mark Nelson
Take Greg's comments to heart, because he's absolutely correct here. Distributed storage systems, almost as a rule, love parallelism, and if you have enough of it you can often hide other issues. Latency is probably the more interesting question, and frankly that's where you'll often start seeing the

Re: [ceph-users] Ceph and NFS

2016-01-18 Thread david
Hi, Is CephFS stable enough to deploy in production environments? And have you compared the performance of nfs-ganesha and a standard kernel-based NFSd, both backed by CephFS? > On Jan 18, 2016, at 20:34, Burkhard Linke wrote: >

Re: [ceph-users] Ceph and NFS

2016-01-18 Thread david
Hi, Thanks for your answer. Is CephFS stable enough to deploy in production environments? And have you compared the performance of nfs-ganesha and a standard kernel-based NFSd, both backed by CephFS?

Re: [ceph-users] Ceph and NFS

2016-01-18 Thread Gregory Farnum
On Mon, Jan 18, 2016 at 4:48 AM, Arthur Liu wrote: > > > On Mon, Jan 18, 2016 at 11:34 PM, Burkhard Linke > wrote: >> >> Hi, >> >> On 18.01.2016 10:36, david wrote: >>> >>> Hello All. >>> Does anyone provides Ceph

Re: [ceph-users] Infernalis upgrade breaks when journal on separate partition

2016-01-18 Thread Francois Lafont
Hi, I haven't followed this thread closely, so sorry in advance if I'm a little off topic. Personally I'm using this udev rule and it works well (the servers are Ubuntu Trusty): ~# cat /etc/udev/rules.d/90-ceph.rules ENV{ID_PART_ENTRY_SCHEME}=="gpt",

Re: [ceph-users] Infernalis, cephfs: difference between df and du

2016-01-18 Thread Francois Lafont
On 19/01/2016 05:19, Francois Lafont wrote: > However, I still have a question. Since my previous message, additional > data have been put in the cephfs and the values have changed, as you can see: > > ~# du -sh /mnt/cephfs/ > 1.2G /mnt/cephfs/ > > ~# du --apparent-size -sh

[ceph-users] Keystone PKIZ token support for RadosGW

2016-01-18 Thread Blair Bethwaite
Hi all, Does anyone know if RGW supports Keystone's PKIZ tokens, or, better yet, know of a list of the supported token types? Cheers, ~Blairo

Re: [ceph-users] Infernalis, cephfs: difference between df and du

2016-01-18 Thread Francois Lafont
Hi, On 18/01/2016 05:00, Adam Tygart wrote: > As I understand it: I think you understand well. ;) > 4.2G is used by ceph (all replication, metadata, et al) it is a sum of > all the space "used" on the osds. I confirm that. > 958M is the actual space the data in cephfs is using (without

Re: [ceph-users] Infernalis, cephfs: difference between df and du

2016-01-18 Thread Adam Tygart
It appears that with --apparent-size, du adds the "size" of the directories to the total as well. On most filesystems this is the block size, or the amount of metadata space the directory is using. On CephFS, this size is fabricated to be the sum of the sizes of all sub-files, i.e. a cheap/free 'du -sh
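For reference, CephFS exposes those recursive statistics as virtual extended attributes, so the "cheap du" can be read directly. A minimal sketch (Python 3 on Linux; the mount point is only an example):

import os

path = '/mnt/cephfs'  # example CephFS mount point
# ceph.dir.rbytes/rfiles/rsubdirs are CephFS's recursive size and entry counts.
for attr in ('ceph.dir.rbytes', 'ceph.dir.rfiles', 'ceph.dir.rsubdirs'):
    try:
        print(attr, int(os.getxattr(path, attr)))
    except OSError as exc:
        print(attr, 'unavailable:', exc)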

[ceph-users] bucket type and crush map

2016-01-18 Thread Pedro Benites
Hello, I have configured osd_crush_chooseleaf_type = 3 (rack), and I have 6 OSDs in three hosts and three racks; my tree is this:
datacenter datacenter1
-7 5.45999     rack rack1
-2 5.45999         host storage1
 0 2.73000             osd.0    up 1.0