[ceph-users] Luminous RadosGW with Apache

2017-11-15 Thread Monis Monther
Good day, I am trying to install radosgw with Apache instead of civetweb. I am on Luminous 12.2.0 and followed the documentation at docs.ceph.com/docs/jewel/man/8/radosgw. I keep getting a permission denied error (Apache can't access the socket file); changing the ownership of /var/run/ceph does
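A rough sketch of the pieces involved, assuming the fastcgi frontend from the jewel-era docs; the rgw instance name, socket path and web-server user below are illustrative placeholders, not taken from the thread:

    # ceph.conf (sketch) - rgw writes its FastCGI socket under /var/run/ceph
    [client.radosgw.gateway]
    rgw frontends = fastcgi
    rgw socket path = /var/run/ceph/ceph.radosgw.gateway.fastcgi.sock

    # one possible workaround: give the web-server user group access to the
    # socket directory instead of chowning /var/run/ceph away from ceph
    usermod -a -G ceph www-data
    chmod g+rx /var/run/ceph

Adding the web-server user to the ceph group avoids changing the ownership of /var/run/ceph itself, which the ceph daemons expect to keep.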

[ceph-users] Where are the ceph-iscsi-* RPMS officially located?

2017-11-15 Thread Richard Chan
For the iSCSI RPMs referenced in http://docs.ceph.com/docs/master/rbd/iscsi-target-cli/, where is the "official" RPM distribution repo for ceph-iscsi-config and ceph-iscsi-cli? -- Richard Chan

Re: [ceph-users] Ceph Luminous Directory

2017-11-15 Thread Linh Vu
Luminous supports this now (http://docs.ceph.com/docs/master/cephfs/dirfrags/), and in my testing it has handled 2M files per directory with no problem.
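As a sketch of the knobs involved (Luminous enables dirfrags by default for new filesystems; the command and values below are illustrative and mainly relevant for filesystems created on earlier releases):

    # enable directory fragmentation on an existing filesystem
    ceph fs set <fs_name> allow_dirfrags true

    # ceph.conf thresholds controlling when fragments split/merge
    # (defaults are roughly these; tune with care)
    [mds]
    mds bal split size = 10000
    mds bal merge size = 50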

[ceph-users] Ceph Luminous Directory

2017-11-15 Thread Hauke Homburg
Hello list, in our factory the question of CephFS has come up again. We noticed the new release, Luminous. We had problems with Jewel, CephFS and big directories: http://lists.ceph.com/pipermail/ceph-users-ceph.com/2016-October/013628.html Does anyone know of a limitation in Luminous CephFS with big

[ceph-users] [Luminous, bluestore]How to reduce memory usage of OSDs?

2017-11-15 Thread lin yunfan
Hi all, is there a way to reduce the memory usage of an OSD to below 800 MB per OSD? My server has only about 1 GB of memory per OSD and the OSDs sometimes get killed by the OOM killer. I have used a newer version of Luminous from GitHub (12.2.1-249-g42172a4 (42172a443183ffe6b36e85770e53fe678db293bf)
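A hedged sketch of the main knob on bluestore in 12.2.x, its cache size (the values below are only an example, and the OSD process still needs memory on top of the cache):

    # ceph.conf - shrink the bluestore cache; 12.2.x defaults are about
    # 1 GB for HDD-backed and 3 GB for SSD-backed OSDs
    [osd]
    bluestore cache size hdd = 268435456   # 256 MB
    bluestore cache size ssd = 268435456   # 256 MB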

Re: [ceph-users] luminous vs jewel rbd performance

2017-11-15 Thread Rafael Lopez
Hey Linh... have not, but if it makes any difference, we are still using filestore. On 16 Nov. 2017 12:31, "Linh Vu" wrote: > Noticed that you're on 12.2.0 Raf. 12.2.1 fixed a lot of performance > issues from 12.2.0 for us on Luminous/Bluestore. Have you tried upgrading > to

Re: [ceph-users] 10.2.10: "default" zonegroup in custom root pool not found

2017-11-15 Thread Richard Chan
Yes, that was it. Thank you. On Thu, Nov 16, 2017 at 1:48 AM, Casey Bodley wrote: > > > On 11/15/2017 12:11 AM, Richard Chan wrote: > > After creating a non-default root pool > rgw_realm_root_pool = gold.rgw.root > rgw_zonegroup_root_pool = gold.rgw.root >

Re: [ceph-users] luminous vs jewel rbd performance

2017-11-15 Thread Linh Vu
Noticed that you're on 12.2.0 Raf. 12.2.1 fixed a lot of performance issues from 12.2.0 for us on Luminous/Bluestore. Have you tried upgrading to it? From: ceph-users on behalf of Rafael Lopez Sent:

Re: [ceph-users] luminous vs jewel rbd performance

2017-11-15 Thread Rafael Lopez
Hi Mark, sorry for the late reply... I have been away on vacation/OpenStack Summit etc. for over a month and am looking at this again. Yeah, the snippet was a bit misleading. The fio file contains small-block jobs as well as big-block jobs: [write-rbd1-4m-depth1] rbdname=rbd-tester-fio bs=4m
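For context, a minimal fio job file of the kind being described, using fio's rbd engine; the pool, image and client names are placeholders, not the poster's actual settings:

    [global]
    ioengine=rbd
    clientname=admin
    pool=rbd
    rbdname=rbd-tester-fio
    direct=1
    runtime=60

    [write-rbd1-4m-depth1]
    bs=4m
    iodepth=1
    rw=write

    [write-rbd2-4k-depth16]
    bs=4k
    iodepth=16
    rw=randwrite

Mixing big-block/low-depth and small-block/high-depth jobs like this is what makes aggregate numbers hard to compare across releases.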

Re: [ceph-users] OSD Random Failures - Latest Luminous

2017-11-15 Thread Eric Nelson
I've been seeing these as well on our SSD cache tier, which has been ravaged by disk failures as of late. Same tp_peering assert as above, even running the luminous branch from git. Let me know if you have a bug filed I can +1, or have found a workaround. E On Wed, Nov 15, 2017 at 10:25 AM, Ashley

Re: [ceph-users] Moving bluestore WAL and DB after bluestore creation

2017-11-15 Thread Shawn Edwards
On Wed, Nov 15, 2017, 11:07 David Turner wrote: > I'm not going to lie. This makes me dislike Bluestore quite a bit. Using > multiple OSDs to an SSD journal allowed for you to monitor the write > durability of the SSD and replace it without having to out and re-add all >

[ceph-users] OSD Random Failures - Latest Luminous

2017-11-15 Thread Ashley Merrick
Hello, after replacing a single OSD disk due to a failure, I am now seeing 2-3 OSDs randomly stop and fail to start: they boot-loop, get to load_pgs and then fail with the following (I tried setting OSD logs to 5/5 but didn't get any extra lines around the error, just more information pre
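For anyone reproducing this, one way to crank up OSD logging before the crash (a sketch; osd.12 is a placeholder, 5/5 is what the poster tried, 20/20 is far noisier but captures the assert context):

    # raise debug levels on a running OSD, if it stays up long enough
    ceph tell osd.12 injectargs '--debug_osd 20 --debug_ms 1'

    # or persistently in ceph.conf for the boot-looping daemon
    [osd.12]
    debug osd = 20/20
    debug ms = 1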

Re: [ceph-users] Cluster network slower than public network

2017-11-15 Thread Ronny Aasen
On 15.11.2017 13:50, Gandalf Corvotempesta wrote: As 10gb switches are expensive, what would happen by using a gigabit cluster network and a 10gb public network? Replication and rebalance should be slow, but what about public I/O? When a client wants to write to a file, it does so over the
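For reference, the split is just two ceph.conf options (the subnets below are placeholders): clients talk to OSDs and MONs over public_network, while replication, recovery and rebalance traffic between OSDs goes over cluster_network, so a slow cluster network drags down write acknowledgements too.

    [global]
    public network  = 10.0.0.0/24    # client <-> OSD/MON traffic (the 10gb side here)
    cluster network = 10.0.1.0/24    # OSD <-> OSD replication/recovery (the 1gb side here)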

Re: [ceph-users] 10.2.10: "default" zonegroup in custom root pool not found

2017-11-15 Thread Casey Bodley
On 11/15/2017 12:11 AM, Richard Chan wrote: After creating a non-default root pool rgw_realm_root_pool = gold.rgw.root rgw_zonegroup_root_pool = gold.rgw.root rgw_period_root_pool = gold.rgw.root rgw_zone_root_pool = gold.rgw.root rgw_region = gold.rgw.root You probably meant to set

Re: [ceph-users] radosgw multi site different period

2017-11-15 Thread Casey Bodley
Your period configuration is indeed consistent between zones. This "master is on a different period" error is specific to the metadata sync status. It's saying that zone b is unable to finish syncing the metadata changes from zone a that occurred during the previous period. Even though zone b
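The commands usually used to inspect that state, as a sketch (run on the zone reporting the error; output formats vary by release):

    # overall replication state of the local zone
    radosgw-admin sync status

    # metadata sync detail, where the "master is on a different period"
    # message comes from
    radosgw-admin metadata sync status

    # the current period as each zone sees it
    radosgw-admin period get-current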

Re: [ceph-users] Reuse pool id

2017-11-15 Thread David Turner
It's probably against the inner workings of Ceph to change the ID of the pool. There are a couple other things in Ceph that keep old data around most likely to prevent potential collisions. One in particular is keeping deleted_snaps in the OSD map indefinitely. One thing I can think of in

Re: [ceph-users] Moving bluestore WAL and DB after bluestore creation

2017-11-15 Thread David Turner
I'm not going to lie. This makes me dislike Bluestore quite a bit. Using multiple OSDs to an SSD journal allowed you to monitor the write durability of the SSD and replace it without having to out and re-add all of the OSDs on the device. Having to now out and backfill back onto the HDDs is

Re: [ceph-users] Reuse pool id

2017-11-15 Thread Karun Josy
Any suggestions? Karun Josy On Mon, Nov 13, 2017 at 10:06 PM, Karun Josy wrote: > Hi, > > Is there any way we can change or reuse a pool id? > I had created and deleted a lot of test pools. So the IDs kind of look like > this now: > > --- > $ ceph osd lspools > 34
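A quick illustration of the behaviour being asked about (standard commands, throwaway pool names): every create/delete cycle burns an ID, and as the reply above suggests, the IDs are not reclaimed.

    ceph osd pool create test1 8 8
    ceph osd pool delete test1 test1 --yes-i-really-really-mean-it
    ceph osd pool create test2 8 8
    ceph osd lspools      # test2 gets a new, higher ID; the old one stays unused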

Re: [ceph-users] Fwd: Luminous RadosGW issue

2017-11-15 Thread Sam Huracan
Thanks Hans, I've fixed it. Ceph Luminous auto-creates a user client.rgw; I didn't know that and had made a new user client.radowgw. On Nov 9, 2017 17:03, "Hans van den Bogert" wrote: > On Nov 9, 2017, at 5:25 AM, Sam Huracan wrote: > > root@radosgw
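A sketch of the cephx side of that mismatch (names below are illustrative): the keyring the gateway loads has to belong to the same client id named in its ceph.conf section.

    # create (or fetch) the key for the client the rgw section actually uses
    ceph auth get-or-create client.rgw.gateway1 \
        mon 'allow rw' osd 'allow rwx' \
        -o /etc/ceph/ceph.client.rgw.gateway1.keyring

    # the ceph.conf section name must match that client id
    [client.rgw.gateway1]
    keyring = /etc/ceph/ceph.client.rgw.gateway1.keyring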

Re: [ceph-users] Moving bluestore WAL and DB after bluestore creation

2017-11-15 Thread Mario Giammarco
It seems it is not possible. I recreated the OSD. 2017-11-12 17:44 GMT+01:00 Shawn Edwards: > I've created some Bluestore OSDs with all data (wal, db, and data) on > the same rotating disk. I would like to now move the wal and db onto an > nvme disk. Is that possible
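A sketch of recreating the OSD with the db/wal split out at creation time, which per this thread is the only supported path on Luminous; device names are placeholders, and the old OSD has to be marked out, backfilled and purged before its disk can be re-prepared.

    # destroy and re-create the OSD with data on the HDD and db/wal on NVMe
    ceph-disk zap /dev/sdb
    ceph-disk prepare --bluestore /dev/sdb \
        --block.db /dev/nvme0n1 --block.wal /dev/nvme0n1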

Re: [ceph-users] Separation of public/cluster networks

2017-11-15 Thread Wido den Hollander
> On 15 November 2017 at 15:03 Richard Hesketh wrote: > > > On 15/11/17 12:58, Micha Krause wrote: > > Hi, > > > > I've built a few clusters with separated public/cluster networks, but I'm > > wondering if this is really > > the way to go. > > > >

Re: [ceph-users] Separation of public/cluster networks

2017-11-15 Thread Richard Hesketh
On 15/11/17 12:58, Micha Krause wrote: > Hi, > > I've built a few clusters with separated public/cluster networks, but I'm > wondering if this is really > the way to go. > > http://docs.ceph.com/docs/jewel/rados/configuration/network-config-ref > > states 2 reasons: > > 1. There is more

Re: [ceph-users] ceps-deploy won't install luminous

2017-11-15 Thread jorpilo
I think you can always use the --release luminous option, which will ensure you install Luminous: ceph-deploy install --release luminous node1 node2 node3 Original message From: "Ragan, Tj (Dr.)" Date: 15/11/17 11:11 a.m. (GMT+01:00) To: Hans van den

Re: [ceph-users] ceph-deploy failed to deploy osd randomly

2017-11-15 Thread Wei Jin
I tried purge/purgedata and then redid the deploy command a few times, and it still fails to start the OSD. And there is no error log; does anyone know what the problem is? BTW, my OS is Debian with a 4.4 kernel. Thanks. On Wed, Nov 15, 2017 at 8:24 PM, Wei Jin wrote: > Hi,
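For completeness, the retry sequence being described looks roughly like this (host and device names follow the examples later in the thread; a sketch, not the poster's exact commands):

    ceph-deploy purge n10-075-094        # remove ceph packages from the node
    ceph-deploy purgedata n10-075-094    # remove /var/lib/ceph and /etc/ceph data
    ceph-deploy disk zap n10-075-094:sdb
    ceph-deploy osd create --zap-disk n10-075-094:sdb:sdb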

Re: [ceph-users] Cluster network slower than public network

2017-11-15 Thread Gandalf Corvotempesta
Any idea? I have one 16-port 10gb switch, 2 or more 24-port gigabit switches, 5 OSD nodes (MONs running on them) and 5 hypervisor servers to connect to the storage. At least 10 ports are needed for each network, thus 20 ports for both cluster and public, right? I don't have 20 10gb ports. Il

[ceph-users] Separation of public/cluster networks

2017-11-15 Thread Micha Krause
Hi, I've built a few clusters with separated public/cluster networks, but I'm wondering if this is really the way to go. http://docs.ceph.com/docs/jewel/rados/configuration/network-config-ref states 2 reasons: 1. There is more traffic in the backend, which could cause latencies in the public

Re: [ceph-users] Cluster network slower than public network

2017-11-15 Thread Gandalf Corvotempesta
Small info: all of our nodes (3 to 5, plus 6 hypervisors) have 4 10gb ports, but we only have 2 10gb switches (small port count, only 16, so we can't place both networks on the same switch). We use 2 switches for HA in active-backup mode. I was thinking of using both 10gb switches as the public network

[ceph-users] Cluster network slower than public network

2017-11-15 Thread Gandalf Corvotempesta
As 10gb switches are expensive, what would happen by using a gigabit cluster network and a 10gb public network? Replication and rebalance should be slow, but what about public I/O? When a client wants to write to a file, it does so over the public network, and then Ceph automatically replicates it

[ceph-users] ceph-deploy failed to deploy osd randomly

2017-11-15 Thread Wei Jin
Hi list, my machine has 12 SSDs. There are some errors with ceph-deploy; it fails randomly. root@n10-075-012:~# ceph-deploy osd create --zap-disk n10-075-094:sdb:sdb [ceph_deploy.conf][DEBUG ] found configuration file at: /root/.cephdeploy.conf [ceph_deploy.cli][INFO ] Invoked (1.5.39):

[ceph-users] ceph-deploy failed to deploy osd randomly

2017-11-15 Thread Wei Jin
Hi list, my machine has 12 SSD disks, and I use ceph-deploy to deploy them. But for some machines/disks it fails to start the OSD. I tried many times; some succeed but others fail, and there is no error info. Following is the ceph-deploy log for one disk: root@n10-075-012:~# ceph-deploy osd create

Re: [ceph-users] who is using nfs-ganesha and cephfs?

2017-11-15 Thread Jens-U. Mozdzen
Hi all, Sage Weil wrote: Who is running nfs-ganesha's FSAL to export CephFS? What has your experience been? (We are working on building proper testing and support for this into Mimic, but the ganesha FSAL has been around for years.) After we had moved most of our file-based
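For readers unfamiliar with the setup, a minimal ganesha.conf export block for the Ceph FSAL looks roughly like this (a sketch; the export id, paths and cephx user are placeholders):

    EXPORT {
        Export_Id = 1;
        Path = "/";                 # path inside CephFS
        Pseudo = "/cephfs";         # NFSv4 pseudo path clients mount
        Access_Type = RW;
        Squash = No_Root_Squash;
        FSAL {
            Name = CEPH;
            User_Id = "ganesha";    # cephx client.ganesha, with MDS/OSD caps
        }
    }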

Re: [ceph-users] HW Raid vs. Multiple OSD

2017-11-15 Thread Phil Schwarz
Hi, thanks for the explanation, but... twisting the Ceph storage model as you plan is not a good idea: - You will decrease the support level (I'm not sure many people will build such an architecture) - You are certainly going to face strange issues with HW RAID on top of Ceph OSDs - You

Re: [ceph-users] ceps-deploy won't install luminous

2017-11-15 Thread Ragan, Tj (Dr.)
Yes, I’ve done that. I’ve also tried changing the priority field from 1 to 2, with no effect. On 15 Nov 2017, at 09:58, Hans van den Bogert wrote: Never mind, you already said you are on the latest ceph-deploy, so that can’t be it. I’m not

Re: [ceph-users] ceps-deploy won't install luminous

2017-11-15 Thread Ragan, Tj (Dr.)
$ cat /etc/yum.repos.d/ceph.repo [Ceph] name=Ceph packages for $basearch baseurl=http://download.ceph.com/rpm-jewel/el7/$basearch enabled=1 gpgcheck=1 type=rpm-md gpgkey=https://download.ceph.com/keys/release.asc priority=1 [Ceph-noarch] name=Ceph noarch packages
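That baseurl points at rpm-jewel, which may well be why jewel keeps getting installed despite the Luminous quick-start. A sketch of the same stanza pointed at the Luminous path instead (only the baseurl changes; the noarch and source stanzas would need the same edit):

    [Ceph]
    name=Ceph packages for $basearch
    baseurl=http://download.ceph.com/rpm-luminous/el7/$basearch
    enabled=1
    gpgcheck=1
    type=rpm-md
    gpgkey=https://download.ceph.com/keys/release.asc
    priority=1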

Re: [ceph-users] ceps-deploy won't install luminous

2017-11-15 Thread Hans van den Bogert
Never mind, you already said you are on the latest ceph-deploy, so that can’t be it. I’m not familiar with deploying on CentOS, but I can imagine that the last part of the checklist is important: http://docs.ceph.com/docs/luminous/start/quick-start-preflight/#priorities-preferences Can you

Re: [ceph-users] ceps-deploy won't install luminous

2017-11-15 Thread Hans van den Bogert
Hi, Can you show the contents of the file, /etc/yum.repos.d/ceph.repo ? Regards, Hans > On Nov 15, 2017, at 10:27 AM, Ragan, Tj (Dr.) > wrote: > > Hi All, > > I feel like I’m doing something silly. I’m spinning up a new cluster, and > followed the instructions on

[ceph-users] ceps-deploy won't install luminous

2017-11-15 Thread Ragan, Tj (Dr.)
Hi All, I feel like I’m doing something silly. I’m spinning up a new cluster, and followed the instructions on the pre-flight and quick start here: http://docs.ceph.com/docs/luminous/start/quick-start-preflight/ http://docs.ceph.com/docs/luminous/start/quick-ceph-deploy/ but ceph-deploy

Re: [ceph-users] Bluestore performance 50% of filestore

2017-11-15 Thread Maged Mokhtar
On 2017-11-14 21:54, Milanov, Radoslav Nikiforov wrote: > Hi > > We have a 3-node, 27-OSD cluster running Luminous 12.2.1 > > In the filestore configuration there are 3 SSDs used for the journals of 9 OSDs on > each host (1 SSD has 3 journal partitions for 3 OSDs). > > I've converted filestore to