Re: [ceph-users] pgs stuck in creating+peering state

2019-01-17 Thread Vasu Kulkarni
On Thu, Jan 17, 2019 at 4:42 AM Johan Thomsen wrote: > Thank you for responding! > > First thing: I disabled the firewall on all the nodes. > More specifically not firewalld, but the NixOS firewall, since I run NixOS. > I can netcat both udp and tcp traffic on all ports between all nodes >

Re: [ceph-users] 'ceph-deploy osd create' and filestore OSDs

2018-12-04 Thread Vasu Kulkarni
On Tue, Dec 4, 2018 at 3:19 PM Matthew Pounsett wrote: > > > On Tue, 4 Dec 2018 at 18:12, Vasu Kulkarni wrote: > >> >>> As explained above, we can't just create smaller raw devices. Yes, >>> these are VMs but they're meant to replicate physical servers

Re: [ceph-users] 'ceph-deploy osd create' and filestore OSDs

2018-12-04 Thread Vasu Kulkarni
On Tue, Dec 4, 2018 at 3:07 PM Matthew Pounsett wrote: > > > On Tue, 4 Dec 2018 at 18:04, Vasu Kulkarni wrote: > >> >> >> On Tue, Dec 4, 2018 at 2:42 PM Matthew Pounsett >> wrote: >> >>> you are using the HOST:DIR option which is a bit old and

Re: [ceph-users] 'ceph-deploy osd create' and filestore OSDs

2018-12-04 Thread Vasu Kulkarni
On Tue, Dec 4, 2018 at 2:42 PM Matthew Pounsett wrote: > Going to take another stab at this... > > We have a development environment–made up of VMs–for developing and > testing the deployment tools for a particular service that depends on > cephfs for sharing state data between hosts. In

Re: [ceph-users] 'ceph-deploy osd create' and filestore OSDs

2018-12-04 Thread Vasu Kulkarni
On Mon, Dec 3, 2018 at 4:47 PM Matthew Pounsett wrote: > > I'm in the process of updating some development VMs that use ceph-fs. It > looks like recent updates to ceph have deprecated the 'ceph-deploy osd > prepare' and 'activate' commands in favour of the previously-optional > 'create'

Re: [ceph-users] Help! OSDs across the cluster just crashed

2018-10-02 Thread Vasu Kulkarni
Can you file a tracker for your issues (http://tracker.ceph.com/projects/ceph/issues/new)? Once an email thread gets lengthy it is not a great way to track an issue. Ideally, full details of the environment (OS/ceph versions, before/after state, workload info, tool used for the upgrade) are important if one has to recreate it. There

Re: [ceph-users] EC pool spread evenly across failure domains?

2018-10-02 Thread Vasu Kulkarni
On Tue, Oct 2, 2018 at 11:35 AM Mark Johnston wrote: > > I have the following setup in a test cluster: > > -1 8.49591 root default > -15 2.83197 chassis vm1 > -3 1.41599 host ceph01 > 0 ssd 1.41599 osd.0 > -5 1.41599 host ceph02 > 1

Re: [ceph-users] cephfs issue with moving files between data pools gives Input/output error

2018-10-01 Thread Vasu Kulkarni
On Mon, Oct 1, 2018 at 12:28 PM Gregory Farnum wrote: > > Moving a file into a directory with a different layout does not, and is not > intended to, copy the underlying file data into a different pool with the new > layout. If you want to do that you have to make it happen yourself by doing a

Re: [ceph-users] Purge Ceph Node and reuse it for another cluster

2018-09-26 Thread Vasu Kulkarni
You can do that safely; we do it all the time on test clusters. Make sure you zap the disks on all OSD nodes so that any partition data is erased. Also try to use the latest docs from the 'master' branch (I see the link you have is based on 'giant'). On Wed, Sep 26, 2018 at 2:06 PM Marcus Müller
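A rough sketch of that cleanup, assuming ceph-deploy is still on the admin node; 'osd-node1' and '/dev/sdb' are placeholders, and the disk zap syntax is host:device on ceph-deploy 1.5.x but host device on 2.x:
    ceph-deploy purge osd-node1               # remove ceph packages from the node
    ceph-deploy purgedata osd-node1           # remove /var/lib/ceph and /etc/ceph data
    ceph-deploy disk zap osd-node1 /dev/sdb   # wipe the partition table on each OSD disk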

Re: [ceph-users] bluestore osd journal move

2018-09-24 Thread Vasu Kulkarni
On Mon, Sep 24, 2018 at 8:59 AM Andrei Mikhailovsky wrote: > > Hi Eugen, > > Many thanks for the links and the blog article. Indeed, the process of > changing the journal device seem far more complex than the FileStore osds. > Far more complex than it should be from an administrator point of

Re: [ceph-users] HELP! --> CLUSER DOWN (was "v13.2.1 Mimic released")

2018-07-28 Thread Vasu Kulkarni
On Sat, Jul 28, 2018 at 6:02 PM, wrote: > Have you guys changed something with the systemctl startup of the OSDs? I think there is some kind of systemd issue hidden in mimic, https://tracker.ceph.com/issues/25004 > > I've stopped and disabled all the OSDs on all my hosts via "systemctl >

Re: [ceph-users] [Ceph-deploy] Cluster Name

2018-07-20 Thread Vasu Kulkarni
On Fri, Jul 20, 2018 at 7:29 AM, Thode Jocelyn wrote: > Hi, > > > > I noticed that in commit > https://github.com/ceph/ceph-deploy/commit/b1c27b85d524f2553af2487a98023b60efe421f3, > the ability to specify a cluster name was removed. Is there a reason for > this removal ? > > > > Because right

Re: [ceph-users] Performance tuning for SAN SSD config

2018-07-06 Thread Vasu Kulkarni
traffic/bandwidth too as it has to replicate across nodes. > > Thanks, > Matthew Stroud > > On 7/6/18, 11:12 AM, "Vasu Kulkarni" wrote: > > On Fri, Jul 6, 2018 at 8:38 AM, Matthew Stroud > wrote: > > > > Thanks for the reply.

Re: [ceph-users] Performance tuning for SAN SSD config

2018-07-06 Thread Vasu Kulkarni
On Fri, Jul 6, 2018 at 8:38 AM, Matthew Stroud wrote: > > Thanks for the reply. > > > > Actually we are using fiber channel (it’s so much more performant than iscsi > in our tests) as the primary storage and this is serving up traffic for RBD > for openstack, so this isn’t for backups. > > > >

Re: [ceph-users] cannot add new OSDs in mimic

2018-06-11 Thread Vasu Kulkarni
On Fri, Jun 8, 2018 at 4:09 PM, Paul Emmerich wrote: > Hi, > > we are also seeing this (I've also posted to the issue tracker). It only > affects clusters upgraded from Luminous, not new ones. > Also, it's not about re-using OSDs. Deleting any OSD seems to trigger this > bug for all new OSDs on

Re: [ceph-users] cannot add new OSDs in mimic

2018-06-07 Thread Vasu Kulkarni
> Mike Kuriger > > > > -----Original Message----- > From: Vasu Kulkarni [mailto:vakul...@redhat.com] > Sent: Thursday, June 07, 2018 12:28 PM > To: Michael Kuriger > Cc: ceph-users > Subject: Re: [ceph-users] cannot add new OSDs in mimic > > There is a osd de

Re: [ceph-users] cannot add new OSDs in mimic

2018-06-07 Thread Vasu Kulkarni
There is an 'osd destroy' command, but it is not documented; did you run that as well? On Thu, Jun 7, 2018 at 12:21 PM, Michael Kuriger wrote: > CEPH team, > Is there a solution yet for adding OSDs in mimic - specifically re-using old > IDs? I was looking over this BUG report - >
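For reference, a minimal sketch of the two related commands (the OSD id is a placeholder, and whether re-using the id actually works in mimic is exactly what the bug report is about):
    ceph osd destroy 12 --yes-i-really-mean-it   # wipes auth/state but keeps the id allocated for re-use
    ceph osd purge 12 --yes-i-really-mean-it     # removes the id from the cluster entirely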

Re: [ceph-users] Ceph EC profile, how are you using?

2018-06-01 Thread Vasu Kulkarni
Thanks to those who have added their config. I request anyone on the list using an EC profile in production to add their high-level config, which will be helpful for tests. Thanks On Wed, May 30, 2018 at 12:16 PM, Vasu Kulkarni wrote: > Hello Ceph Users, > > I would like to know how folks are using E

[ceph-users] Ceph EC profile, how are you using?

2018-05-30 Thread Vasu Kulkarni
Hello Ceph Users, I would like to know how folks are using EC profiles in production environments: what kind of EC configurations are you using (10+4, 5+3?), along with which other configuration options? If you can reply to this thread or update the shared Excel sheet below, that will help design better
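As an illustration of the level of detail that is useful, a hedged example profile (the k/m values and failure domain are placeholders; the option is crush-failure-domain on Luminous+ and ruleset-failure-domain on older releases):
    ceph osd erasure-code-profile set example-profile k=5 m=3 crush-failure-domain=host
    ceph osd pool create ecpool 128 128 erasure example-profile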

Re: [ceph-users] ceph-disk is getting removed from master

2018-05-23 Thread Vasu Kulkarni
On Wed, May 23, 2018 at 10:03 AM, Alfredo Deza <ad...@redhat.com> wrote: > On Wed, May 23, 2018 at 12:12 PM, Vasu Kulkarni <vakul...@redhat.com> wrote: >> Alfredo, >> >> Do we have the migration docs link from ceph-disk deployment to >> ceph-volume? t

Re: [ceph-users] ceph-disk is getting removed from master

2018-05-23 Thread Vasu Kulkarni
Alfredo, Do we have a link to the migration docs from ceph-disk deployment to ceph-volume? The current docs, as far as I can see, lack the migration scenario; maybe there is another link? http://docs.ceph.com/docs/master/ceph-volume/simple/#ceph-volume-simple If it doesn't exist, can we document how a) ceph-disk
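For context, the 'simple' mode linked above is intended to take over existing ceph-disk OSDs roughly like this (the partition name is a placeholder):
    ceph-volume simple scan /dev/sdb1      # records the OSD metadata as JSON under /etc/ceph/osd/
    ceph-volume simple activate --all      # sets up systemd units so ceph-disk/udev is no longer needed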

Re: [ceph-users] What is the meaning of size and min_size for erasure-coded pools?

2018-05-08 Thread Vasu Kulkarni
On Tue, May 8, 2018 at 12:07 PM, Dan van der Ster <d...@vanderster.com> wrote: > On Tue, May 8, 2018 at 7:35 PM, Vasu Kulkarni <vakul...@redhat.com> wrote: >> On Mon, May 7, 2018 at 2:26 PM, Maciej Puzio <mkp37...@gmail.com> wrote: >>> I am an admin in a resear

Re: [ceph-users] What is the meaning of size and min_size for erasure-coded pools?

2018-05-08 Thread Vasu Kulkarni
On Mon, May 7, 2018 at 2:26 PM, Maciej Puzio wrote: > I am an admin in a research lab looking for a cluster storage > solution, and a newbie to ceph. I have setup a mini toy cluster on > some VMs, to familiarize myself with ceph and to test failure > scenarios. I am using ceph

Re: [ceph-users] ceph-deploy on 14.04

2018-04-30 Thread Vasu Kulkarni
If you are on 14.04 or need to use ceph-disk, then you can install version 1.5.39 from pip. To downgrade, just uninstall the current one and reinstall 1.5.39; you don't have to delete your conf file folder. On Mon, Apr 30, 2018 at 5:31 PM, Scottix wrote: > It looks like
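A minimal sketch of the downgrade, assuming ceph-deploy was installed via pip:
    pip uninstall ceph-deploy
    pip install ceph-deploy==1.5.39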

Re: [ceph-users] Upgrade Order with ceph-mgr

2018-04-26 Thread Vasu Kulkarni
> On Thu, Apr 26, 2018 at 9:13 AM Vasu Kulkarni <vakul...@redhat.com> wrote: > >> Step 6 says mon should be upgraded *first*. Step 7 there indicates >> the order would be after the mon upgrade and before the osd. There are a couple of >> threads related to colocated mon/osd upg

Re: [ceph-users] Upgrade Order with ceph-mgr

2018-04-26 Thread Vasu Kulkarni
ceph-mgr but when I do an update I want to make sure it is the > recommended order to update things or maybe it just doesn't matter. > Either way usually there is a recommended order with ceph so just asking to > see what the official response is. > > On Thu, Apr 26, 2018 at 8:59

Re: [ceph-users] Upgrade Order with ceph-mgr

2018-04-26 Thread Vasu Kulkarni
On Thu, Apr 26, 2018 at 8:52 AM, Scottix wrote: > Now that we have ceph-mgr in luminous what is the best upgrade order for the > ceph-mgr? > > http://docs.ceph.com/docs/master/install/upgrading-ceph/ I think that is outdated and needs some fixes, but the release notes are what get

Re: [ceph-users] CephFS very unstable with many small files

2018-02-25 Thread Vasu Kulkarni
> On Feb 25, 2018, at 8:45 AM, Oliver Freyermuth > wrote: > > Dear Cephalopodians, > > in preparation for production, we have run very successful tests with large > sequential data, > and just now a stress-test creating many small files on CephFS. > > We use

Re: [ceph-users] active+remapped+backfill_toofull

2017-12-19 Thread Vasu Kulkarni
> On Dec 19, 2017, at 8:26 AM, Nghia Than wrote: > > Hi, > > My CEPH is stuck at this for few days, we added new OSD and nothing changed: Does the new OSD show up in the osd tree? I see all your OSDs at ~80%; the new ones should be at a much lower percentage, or did they

Re: [ceph-users] ceph-disk is now deprecated

2017-11-28 Thread Vasu Kulkarni
On Tue, Nov 28, 2017 at 9:22 AM, David Turner wrote: > Isn't marking something as deprecated meaning that there is a better option > that we want you to use and you should switch to it sooner than later? I > don't understand how this is ready to be marked as such if

Re: [ceph-users] Is the 12.2.1 really stable? Anybody have production cluster with Luminous Bluestore?

2017-11-22 Thread Vasu Kulkarni
On Wed, Nov 22, 2017 at 8:29 AM, magicb...@gmail.com wrote: > Hi > > We have a Ceph Jewel cluster running, but in our Lab environment, when we > try to upgrade to 12.2.0, we are facing a problem with cephx/auth and MGR. > > See this bugs: > > -

Re: [ceph-users] Is the 12.2.1 really stable? Anybody have production cluster with Luminous Bluestore?

2017-11-16 Thread Vasu Kulkarni
On Thu, Nov 16, 2017 at 6:33 AM, Ashley Merrick wrote: > > Currently experiencing a nasty bug http://tracker.ceph.com/issues/21142 Can you add more info to the tracker about the ceph osd tree (node/memory info), what the version of ceph was before, and whether it was in a healthy state

Re: [ceph-users] removing cluster name support

2017-11-07 Thread Vasu Kulkarni
On Tue, Nov 7, 2017 at 11:38 AM, Sage Weil wrote: > On Tue, 7 Nov 2017, Alfredo Deza wrote: >> On Tue, Nov 7, 2017 at 7:09 AM, kefu chai wrote: >> > On Fri, Jun 9, 2017 at 3:37 AM, Sage Weil wrote: >> >> At CDM yesterday we talked about

Re: [ceph-users] Cephfs snapshot work

2017-11-06 Thread Vasu Kulkarni
On Sun, Nov 5, 2017 at 8:19 AM, Brady Deetz wrote: > My organization has a production cluster primarily used for cephfs upgraded > from jewel to luminous. We would very much like to have snapshots on that > filesystem, but understand that there are risks. > > What kind of work

Re: [ceph-users] Jewel -> Luminous upgrade, package install stopped all daemons

2017-09-15 Thread Vasu Kulkarni
On Fri, Sep 15, 2017 at 3:49 PM, Gregory Farnum wrote: > On Fri, Sep 15, 2017 at 3:34 PM David Turner wrote: >> >> I don't understand a single use case where I want updating my packages >> using yum, apt, etc to restart a ceph daemon. ESPECIALLY when

Re: [ceph-users] Jewel -> Luminous upgrade, package install stopped all daemons

2017-09-15 Thread Vasu Kulkarni
On Fri, Sep 15, 2017 at 2:10 PM, David Turner wrote: > I'm glad that worked for you to finish the upgrade. > > He has multiple MONs, but all of them are on nodes with OSDs as well. When > he updated the packages on the first node, it restarted the MON and all of > the

Re: [ceph-users] Jewel -> Luminous upgrade, package install stopped all daemons

2017-09-15 Thread Vasu Kulkarni
On Fri, Sep 15, 2017 at 1:48 PM, David wrote: > Happy to report I got everything up to Luminous, used your tip to keep the > OSDs running, David, thanks again for that. > > I'd say this is a potential gotcha for people collocating MONs. It appears > that if you're running

Re: [ceph-users] Luminous radosgw hangs after a few hours

2017-07-24 Thread Vasu Kulkarni
Please raise a tracker for rgw and also provide some additional journalctl logs and info (ceph version, OS version, etc.): http://tracker.ceph.com/projects/rgw On Mon, Jul 24, 2017 at 9:03 AM, Vaibhav Bhembre wrote: > I am seeing the same issue on upgrade to Luminous

Re: [ceph-users] ceph-deploy mgr create error No such file or directory:

2017-07-14 Thread Vasu Kulkarni
yum.repos.d]# ceph --version > ceph version 10.2.8 (f5b1f1fd7c0be0506ba73502a675de9d048b744e) > > thanks a lot! > > 2017-07-14 19:21 GMT+02:00 Vasu Kulkarni <vakul...@redhat.com>: > >> It is tested for master and is working fine, I will run those same tests >> on l

Re: [ceph-users] ceph-deploy mgr create error No such file or directory:

2017-07-14 Thread Vasu Kulkarni
It is tested for master and is working fine; I will run those same tests on luminous, check if there is an issue, and update here. 'mgr create' is needed for luminous+ builds only. On Fri, Jul 14, 2017 at 10:18 AM, Roger Brown wrote: > I've been trying to work through
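For luminous+ the extra step is just the following (the node name is a placeholder, and it assumes a ceph-deploy version that already has the mgr subcommand):
    ceph-deploy mgr create node1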

Re: [ceph-users] Using ceph-deploy with multipath storage

2017-07-11 Thread Vasu Kulkarni
On Tue, Jul 11, 2017 at 7:48 AM, wrote: > Hi All, > > > > And further to my last email, does anyone have any experience of using > ceph-deploy with storage configured via multipath, please? > > > > Currently, we deploy new OSDs with: > > ceph-deploy disk zap

Re: [ceph-users] How to set up bluestore manually?

2017-07-06 Thread Vasu Kulkarni
appear to exist in /sys/block/dm-2 [ERROR > ] RuntimeError: command returned non-zero exit status: 1 > [ceph_deploy.osd][ERROR ] Failed to execute command: /usr/sbin/ceph-disk -v > prepare --block.wal /dev/cl/ceph-waldb-sdc --block.db > /dev/cl/ceph-waldb-sdc --bluestore --cluster ceph --fs

Re: [ceph-users] How to set up bluestore manually?

2017-06-30 Thread Vasu Kulkarni
On Fri, Jun 30, 2017 at 8:31 AM, Martin Emrich wrote: > Hi! > > > > I’d like to set up new OSDs with bluestore: the real data (“block”) on a > spinning disk, and DB+WAL on a SSD partition. > > > > But I do not use ceph-deploy, and never used ceph-disk (I set up the >

Re: [ceph-users] removing cluster name support

2017-06-09 Thread Vasu Kulkarni
On Fri, Jun 9, 2017 at 6:11 AM, Wes Dillingham wrote: > Similar to Dan's situation we utilize the --cluster name concept for our > operations. Primarily for "datamover" nodes which do incremental rbd > import/export between distinct clusters. This is entirely

Re: [ceph-users] Restart ceph cluster

2017-05-12 Thread Vasu Kulkarni
On Fri, May 12, 2017 at 7:17 AM, Алексей Усов wrote: > Thanks for reply. > > But tell command itself doesn't make changes persistent, so I must add them > to ceph.conf across the entire cluster (that's where configuration > management comes in), am I correct? Mind filing

Re: [ceph-users] Reg: Ceph-deploy install - failing

2017-05-08 Thread Vasu Kulkarni
The latest stable versions are jewel (LTS) and kraken: http://docs.ceph.com/docs/master/releases/ If you want to install a stable version, use the --stable=jewel flag with the ceph-deploy install command and it will get the packages from download.ceph.com. It is well tested on the latest CentOS and Ubuntu. On

Re: [ceph-users] Ceph-deploy and git.ceph.com

2017-03-15 Thread Vasu Kulkarni
Just curious: why do you still want to deploy new hammer instead of stable jewel? Is this a test environment? The last .10 release was basically bug fixes for 0.94.9. On Wed, Mar 15, 2017 at 9:16 AM, Shinobu Kinjo wrote: > FYI: >

Re: [ceph-users] krbd and kernel feature mismatches

2017-02-27 Thread Vasu Kulkarni
ists/ceph-devel/msg34559.html > > What do you think of comment posted in that ML? > Would that make sense to you as well? > > > On Tue, Feb 28, 2017 at 2:41 AM, Vasu Kulkarni <vakul...@redhat.com> > wrote: > > Ilya, > > > > Many folks hit this and its quite

Re: [ceph-users] krbd and kernel feature mismatches

2017-02-27 Thread Vasu Kulkarni
Ilya, Many folks hit this, and it is quite difficult to debug since the error is not properly printed out (unless one scans syslogs). Is it possible to default the features to the ones the kernel supports, or is it not possible to handle that case? Thanks On Mon, Feb 27, 2017 at 5:59 AM, Ilya Dryomov

Re: [ceph-users] ceph df : negative numbers

2017-02-06 Thread Vasu Kulkarni
Worth filing an issue at http://tracker.ceph.com/ ; it looks like there are 2 different issues, and they should be easy to recreate. On Mon, Feb 6, 2017 at 9:01 AM, Florent B wrote: > On 02/06/2017 05:49 PM, Shinobu Kinjo wrote: > > How about *pve01-rbd01*? > > > > * rados -p

Re: [ceph-users] stalls caused by scrub on jewel

2016-12-01 Thread Vasu Kulkarni
On Thu, Dec 1, 2016 at 7:24 AM, Frédéric Nass < frederic.n...@univ-lorraine.fr> wrote: > > Hi Sage, Sam, > > We're impacted by this bug (case 01725311). Our cluster is running RHCS > 2.0 and is no more capable to scrub neither deep-scrub. > > [1] http://tracker.ceph.com/issues/17859 > [2]

Re: [ceph-users] Ceph Maintenance

2016-11-29 Thread Vasu Kulkarni
You can ignore that; it's a known issue: http://tracker.ceph.com/issues/15990 Regardless, what version of ceph are you running, and what are the details of the OS version you updated to? On Tue, Nov 29, 2016 at 7:12 PM, Mike Jacobacci wrote: > Found some more info, but getting

Re: [ceph-users] New to ceph - error running create-initial

2016-11-29 Thread Vasu Kulkarni
If you are using the 'master' build there is an issue. Workaround: 1) before 'mon create-initial', just run 'ceph-deploy admin mon-node' to push the admin key to the mon nodes and then rerun 'mon create-initial', or 2) use the jewel build, which is stable, if you don't need the latest master: ceph-deploy install
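A sketch of workaround 1), where 'mon-node' stands in for your actual monitor hostname:
    ceph-deploy admin mon-node        # push the admin keyring to the mon node
    ceph-deploy mon create-initial    # then rerun create-initial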

Re: [ceph-users] Adjust PG PGP placement groups on the fly

2016-11-04 Thread Vasu Kulkarni
from the docs (also important to read what pgp_num does): http://docs.ceph.com/docs/jewel/rados/operations/placement-groups/ To set the number of placement groups in a pool, you must specify the number of placement groups at the time you create the pool. See Create a Pool for details. Once you’ve
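A hedged example of the commands the docs describe (the pool name and PG counts are placeholders; pg_num can only be increased, never decreased):
    ceph osd pool create mypool 128          # pg_num is set at creation time
    ceph osd pool set mypool pg_num 256      # increase later if needed
    ceph osd pool set mypool pgp_num 256     # must follow pg_num for data to actually rebalance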

Re: [ceph-users] Ceph Very Small Cluster

2016-09-28 Thread Vasu Kulkarni
On Wed, Sep 28, 2016 at 8:03 AM, Ranjan Ghosh wrote: > Hi everyone, > > Up until recently, we were using GlusterFS to have two web servers in sync > so we could take one down and switch back and forth between them - e.g. for > maintenance or failover. Usually, both were running,

Re: [ceph-users] Ubuntu latest ceph-deploy fails to install hammer

2016-09-09 Thread Vasu Kulkarni
There is a known issue with the latest ceph-deploy and *hammer*; the package split in releases after *hammer* is the root cause. If you use ceph-deploy 1.5.25 (an older version) it will work. You can get 1.5.25 from PyPI. http://tracker.ceph.com/issues/17128 On Fri, Sep 9, 2016 at 8:28 AM, Shain
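A minimal sketch of pinning that older version, assuming pip is used:
    pip install ceph-deploy==1.5.25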

Re: [ceph-users] Single-node Ceph & Systemd shutdown

2016-08-20 Thread Vasu Kulkarni
On Sat, Aug 20, 2016 at 10:35 AM, Marcus wrote: > For a home server project I've set up a single-node ceph system. > > Everything works just fine; I can mount block devices and store stuff on > them, however the system will not shut down without hanging. > > I've traced it

Re: [ceph-users] Designing ceph cluster

2016-08-18 Thread Vasu Kulkarni
Also, most of the terminology looks like it comes from OpenStack and SAN. Here is the right terminology that should be used for Ceph: http://docs.ceph.com/docs/master/glossary/ On Thu, Aug 18, 2016 at 8:57 AM, Gaurav Goyal wrote: > Hello Mart, > > My Apologies for that! > > We

Re: [ceph-users] Running ceph in docker

2016-07-05 Thread Vasu Kulkarni
On Wed, Jun 29, 2016 at 11:05 PM, F21 wrote: > Hey all, > > I am interested in running ceph in docker containers. This is extremely > attractive given the recent integration of swarm into the docker engine, > making it really easy to set up a docker cluster. > > When running

Re: [ceph-users] Issue installing ceph with ceph-deploy

2016-06-21 Thread Vasu Kulkarni
On Tue, Jun 21, 2016 at 8:16 AM, shane wrote: > Fran Barrera writes: > >> >> Hi all, >> I have a problem installing ceph jewel with ceph-deploy (1.5.33) on ubuntu > 14.04.4 (openstack instance). >> >> This is my setup: >> >> >> ceph-admin >> >> ceph-mon

Re: [ceph-users] ceph cookbook failed: Where to report that https://git.ceph.com/release.asc is down?

2016-06-18 Thread Vasu Kulkarni
It is down due to a gateway issue; David sent an email to the sepia list yesterday (below is his mail). -- Forwarded message -- From: David Galloway Date: Sat, Jun 18, 2016 at 12:15 AM Subject: Re: [sepia] Unexpected Downtime To: se...@lists.ceph.com We'd

Re: [ceph-users] Migrating files from ceph fs from cluster a to cluster b without low downtime

2016-06-06 Thread Vasu Kulkarni
On Mon, Jun 6, 2016 at 11:43 AM, Oliver Dzombic wrote: > Hi, > > we will have to copy all data > > from: hammer cephfs > > to: jewel cephfs > > and i would like to keep the resulting downtime low for the underlying > services. > > So does anyone know a good way/tool to

Re: [ceph-users] How do I start ceph jewel in CentOS?

2016-05-04 Thread Vasu Kulkarni
systemctl status does not list any “ceph” services at all. > > > > > > > > > > > On 5/4/16, 9:37 AM, "Vasu Kulkarni" <vakul...@redhat.com> wrote: > >>sadly there are still some issues with jewel/master branch for centos >>systemctl se

Re: [ceph-users] How do I start ceph jewel in CentOS?

2016-05-04 Thread Vasu Kulkarni
Sadly, there are still some issues with the jewel/master branch for the CentOS systemctl service. As a workaround, if you run "systemctl status", look at the topmost service name in the ceph-osd service tree, and use that to stop/start, it should work. On Wed, May 4, 2016 at 9:00 AM, Michael Kuriger
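A sketch of that workaround (the unit name below is only an example; use whatever topmost ceph-osd unit "systemctl status" actually shows):
    systemctl status | grep ceph-osd       # note the topmost ceph-osd unit name
    systemctl restart ceph-osd@0.service   # substitute the unit name found above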

Re: [ceph-users] Question upgrading to Jewel

2016-04-22 Thread Vasu Kulkarni
Hope you followed the release notes and are on 0.94.4 or above: http://docs.ceph.com/docs/master/release-notes/#upgrading-from-hammer 1) upgrade (ensure you don't have a user 'ceph' beforehand), 2) stop the service: /etc/init.d/ceph stop (since you are on centos/hammer), 3) change ownership
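A rough sketch of steps 2) and 3) on each node (the paths are the defaults; adjust if yours differ):
    /etc/init.d/ceph stop                              # step 2, sysvinit on centos/hammer
    chown -R ceph:ceph /var/lib/ceph /var/log/ceph     # step 3, so the daemons can run as the 'ceph' user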

Re: [ceph-users] ceph-disk from jewel has issues on redhat 7

2016-03-19 Thread Vasu Kulkarni
/sbin/sgdisk --new=2:0:20480M --change-name=2:'ceph > journal' --partition-guid=2:aa23e07d-e6b3-4261-a236-c0565971d88d > --typecode=2:45b0969e-9b03-4f30-b4c6-b4b80ceff106 --mbrtogpt -- > /dev/sdd > The operation has completed successfully. > [18:46]# partx -u /dev/sdd > [18:46]# par

Re: [ceph-users] ceph-disk from jewel has issues on redhat 7

2016-03-15 Thread Vasu Kulkarni
retry in that same loop. > > There is code in ceph_deploy which uses partprobe or partx depending on > which distro it detects, that is how I worked out what to change here. > > If I have to tear things down again I will reproduce and post here. > > Steve > > > On Mar 15, 2

Re: [ceph-users] ceph-disk from jewel has issues on redhat 7

2016-03-15 Thread Vasu Kulkarni
times has to retry. In the case where I am using it to carve > an SSD into several partitions for journals it fails on the second one. > > Steve > > > > On Mar 15, 2016, at 1:45 PM, Vasu Kulkarni <vakul...@redhat.com> wrote: > > > > Ceph-deploy suite and also selinux s

Re: [ceph-users] ceph-disk from jewel has issues on redhat 7

2016-03-15 Thread Vasu Kulkarni
The ceph-deploy suite and also the selinux suite (which isn't merged yet) indirectly test ceph-disk and have been run on Jewel as well. I guess the issue Stephen is seeing is on a multipath device, which I believe is a known issue. On Tue, Mar 15, 2016 at 11:42 AM, Gregory Farnum wrote:

Re: [ceph-users] systemd & sysvinit scripts mix ?

2016-02-29 Thread Vasu Kulkarni
+1 On Mon, Feb 29, 2016 at 8:36 AM, Ken Dreyer wrote: > I recommend we simply drop the init scripts from the master branch. > All our supported platforms (CentOS 7 or newer, and Ubuntu Trusty or > newer) use upstart or systemd. > > - Ken > > On Mon, Feb 29, 2016 at 3:44 AM,

Re: [ceph-users] cant get cluster to become healthy. "stale+undersized+degraded+peered"

2015-09-17 Thread Vasu Kulkarni
This happens if you didn't have the right ceph configuration when you deployed your cluster using ceph-deploy; those 64 PGs are from the default config. Since this is a fresh installation, you can delete all default pools, check the cluster state for no objects and a clean state, and set up ceph.conf based on
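A hedged example of clearing the default pool on a fresh, empty cluster (double-check the pool really is unused first):
    ceph osd pool delete rbd rbd --yes-i-really-really-mean-it
    ceph -s    # confirm zero objects and a clean state before recreating pools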

Re: [ceph-users] Help with CEPH deployment

2015-05-04 Thread Vasu Kulkarni
What are your initial monitor nodes? I.e., what nodes did you specify in the first step: ceph-deploy new {initial-monitor-node(s)}? Did you specify rgulistan-wsl11 as your monitor node in that step? - Original Message - From: Venkateswara Rao Jujjuri jujj...@gmail.com To: ceph-devel