You don't need to do anything explicit like tell everybody
globally that there are multiple MDSes.
-Greg
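(For context, a minimal sketch of what that looks like in practice with ceph-deploy; the hostnames are placeholders, not taken from this thread. Extra MDS daemons register with the monitors on their own and sit as standbys for the single active MDS:)
# Create MDS daemons on two more hosts; no global ceph.conf change is needed.
ceph-deploy mds create cephmon002 cephmon003
# Verify: the mds map should show one MDS up:active and the others up:standby.
ceph mds stat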
On Mon, Dec 8, 2014 at 10:48 AM, JIten Shah jshah2...@me.com wrote:
Do I need to update the ceph.conf to support multiple MDS servers?
—Jiten
On Nov 24, 2014, at 6:56 AM, Gregory Farnum g...@gregs42.com wrote:
Chris
On Tue, Dec 9, 2014 at 3:10 PM, JIten Shah jshah2...@me.com wrote:
Hi Greg,
Sorry for the confusion. I am not looking for an active/active configuration,
which I know is not supported, but what documentation can I refer to for
installing active/standby MDSes?
I tried looking
Do I need to update the ceph.conf to support multiple MDS servers?
—Jiten
On Nov 24, 2014, at 6:56 AM, Gregory Farnum g...@gregs42.com wrote:
On Sun, Nov 23, 2014 at 10:36 PM, JIten Shah jshah2...@me.com wrote:
Hi Greg,
I haven’t set up anything in ceph.conf as mds.cephmon002 nor in any
again.
—Jiten
On Nov 20, 2014, at 5:47 PM, Michael Kuriger mk7...@yp.com wrote:
Maybe delete the pool and start over?
On Thursday, November 20, 2014, at 5:46 PM, JIten Shah wrote (to Craig Lewis, cc: ceph-users):
I am trying to set up 3 MDS servers (one on each MON), but after I am done
setting up the first one, it gives me the error below when I try to start it on the
other ones. I understand that only 1 MDS is functional at a time, but I thought
you could have multiple of them up, in case the first one dies? Or
the OSDs?
-Greg
On Mon, Nov 17, 2014 at 12:52 PM, JIten Shah jshah2...@me.com wrote:
After I rebuilt the OSDs, the MDS went into degraded mode and will not
recover.
[jshah@Lab-cephmon001 ~]$ sudo tail -100f
/var/log/ceph/ceph-mds.Lab-cephmon001.log
2014-11-17 17:55:27.855861
It almost
looks like you replaced multiple OSDs at the same time, and lost data because
of it.
Can you give us the output of `ceph osd tree`, and `ceph pg 2.33 query`?
On Wed, Nov 19, 2014 at 2:14 PM, JIten Shah jshah2...@me.com wrote:
After rebuilding a few OSDs, I see that the PGs
between each OSD rebuild? If you
rebuilt all 3 OSDs at the same time (or without waiting for a complete
recovery between them), that would cause this problem.
On Thu, Nov 20, 2014 at 11:40 AM, JIten Shah jshah2...@me.com wrote:
Yes, it was a healthy cluster and I had to rebuild because
Ceph that you lost the OSDs. For each OSD you moved, run ceph
osd lost OSDID, then try the force_create_pg command again.
If that doesn't work, you can keep fighting with it, but it'll be faster to
rebuild the cluster.
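(A hedged sketch of that sequence; the OSD id here is a placeholder and the PG id is the one from the thread:)
# Tell the cluster the replaced OSD's data is gone for good...
ceph osd lost 3 --yes-i-really-mean-it
# ...then retry re-creating the stuck PG.
ceph pg force_create_pg 2.33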
On Thu, Nov 20, 2014 at 1:45 PM, JIten Shah jshah2...@me.com wrote:
Ok. Thanks.
—Jiten
On Nov 20, 2014, at 2:14 PM, Craig Lewis cle...@centraldesktop.com wrote:
If there's no data to lose, tell Ceph to re-create all the missing PGs.
ceph pg force_create_pg 2.33
Repeat for each of the missing PGs.
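(If many PGs are affected, one way to loop over the stuck ones; this is only a sketch, so check the list from `ceph pg dump_stuck` yourself before running anything like it:)
for pg in $(ceph pg dump_stuck stale | awk '/^[0-9]/ {print $1}'); do
    ceph pg force_create_pg "$pg"
done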
After rebuilding a few OSDs, I see that the PGs are stuck in degraded mode.
Some are unclean and others are in the stale state. Somehow the MDS is
also degraded. How do I recover the OSDs and the MDS back to healthy? I have read
through the documentation and on the web but no luck so far.
After I rebuilt the OSDs, the MDS went into degraded mode and will not
recover.
[jshah@Lab-cephmon001 ~]$ sudo tail -100f
/var/log/ceph/ceph-mds.Lab-cephmon001.log
2014-11-17 17:55:27.855861 7fffef5d3700 0 -- X.X.16.111:6800/3046050
X.X.16.114:0/838757053 pipe(0x1e18000 sd=22 :6800 s=0
Hi Guys,
We ran into this issue after we nearly maxed out the OSDs. Since then, we
have cleaned up a lot of data on the OSDs, but the PGs seem to be stuck for the last 4 to
5 days. I have run `ceph osd reweight-by-utilization` and that did not seem to
work.
Any suggestions?
ceph -s
cluster
Thanks Chad. It seems to be working.
—Jiten
On Nov 11, 2014, at 12:47 PM, Chad Seys cws...@physics.wisc.edu wrote:
Find out which OSD it is:
ceph health detail
Squeeze blocks off the affected OSD:
ceph osd reweight OSDNUM 0.8
Repeat with any OSD which becomes toofull.
Your
Actually there were hundreds that were too full. We manually set the OSD weights
to 0.5 and it seems to be recovering.
Thanks for the tips on crush reweight. I will look into it.
—Jiten
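(For anyone reading along, a brief sketch of the two commands being compared; the OSD id and weights are placeholders. `ceph osd reweight` sets a temporary 0-1 override, which is what the too-full workaround above uses, while `ceph osd crush reweight` changes the permanent CRUSH weight:)
ceph osd reweight 12 0.8
ceph osd crush reweight osd.12 1.0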
On Nov 11, 2014, at 1:37 PM, Craig Lewis cle...@centraldesktop.com wrote:
How many OSDs are nearfull?
I've
I agree. This was just our brute-force method on our test cluster. We won't do
this on the production cluster.
--Jiten
On Nov 11, 2014, at 2:11 PM, cwseys cws...@physics.wisc.edu wrote:
0.5 might be too much. All the PGs squeezed off of one OSD will need to be
stored on another. The fewer you
cephfsmeta xxx
c) ceph mds newfs {cephfsmeta_poolid} {cephfsdata_poolid}
5) ceph-deploy mds create {mdshostname}
Make sure you have password-less ssh access into the latter host.
I think this should do the trick.
JC
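(Since the earlier steps are cut off above, here is a hedged sketch of the full sequence JC appears to describe; the pool names, PG counts, and hostname are placeholders, and the pool ids come from `ceph osd dump`:)
ceph osd pool create cephfsdata 128
ceph osd pool create cephfsmeta 128
ceph osd dump | grep '^pool'   # note the ids of the two new pools
ceph mds newfs {cephfsmeta_poolid} {cephfsdata_poolid} --yes-i-really-mean-it
ceph-deploy mds create {mdshostname}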
On Nov 6, 2014, at 20:07, JIten Shah jshah2...@me.com wrote:
Hi Guys,
I am sure many of you guys have installed cephfs using puppet. I am trying to
install “firefly” using the puppet module from
https://github.com/ceph/puppet-ceph.git
and running into the “ceph_config” file issue where it’s unable to find the
config file and I am not sure why.
...@dachary.org wrote:
Hi,
At the moment puppet-ceph does not support CephFS. The error you're seeing
does not ring a bell, would you have more context to help diagnose it ?
Cheers
On 06/11/2014 23:44, JIten Shah wrote:
Hi Guys,
I am sure many of you guys have installed cephfs
Hi Guys,
We are trying to install CephFS using puppet on all the OSD nodes, as well as
MON and MDS. Are there recommended puppet modules that anyone has used in the
past or created their own?
Thanks.
—Jiten
Please send your /etc/hosts contents here.
--Jiten
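(A sketch of the kind of entries the monitor node would need in /etc/hosts for each OSD host; the addresses and hostnames are placeholders:)
192.168.0.101   osdnode1
192.168.0.102   osdnode2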
On Oct 15, 2014, at 7:27 AM, Support - Avantek supp...@avantek.co.uk wrote:
I may be completely overlooking something here but I keep getting “ssh:
cannot resolve hostname” when I try to contact my OSD nodes from my monitor
node. I have
Thanks Craig. That’s exactly what I was looking for.
—Jiten
On Sep 16, 2014, at 2:42 PM, Craig Lewis cle...@centraldesktop.com wrote:
On Fri, Sep 12, 2014 at 4:35 PM, JIten Shah jshah2...@me.com wrote:
1. If we need to modify those numbers, do we need to update the values in
ceph.conf
Hi Guys,
We have a cluster with 1000 OSD nodes, 5 MON nodes and 1 MDS node. In order
to be able to lose quite a few OSDs and still survive the load, we were
thinking of setting the replication factor to 50.
Is that too big of a number? What are the performance implications and any other
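(For reference, the replica count is a per-pool setting rather than a cluster-wide one; a minimal sketch with a placeholder pool name, using the common size 3 / min_size 2 rather than 50:)
ceph osd pool set data size 3
ceph osd pool set data min_size 2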
Looking at the docs (as below), it seems like .95 and .85 are the default
numbers for full and near full ratio and if you reach the full ratio, it will
stop reading and writing to avoid data corruption.
http://ceph.com/docs/master/rados/configuration/mon-config-ref/#storage-capacity
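(A sketch of how those defaults appear as configuration options, e.g. under [global] in ceph.conf; the values shown are simply the defaults mentioned above:)
mon osd full ratio = .95
mon osd nearfull ratio = .85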
So, few
What does your mount command look like?
Sent from my iPhone 5S
On Sep 12, 2014, at 4:56 PM, Erick Ocrospoma zipper1...@gmail.com wrote:
Hi,
I'm a n00b in the ceph world, so here I go. I was following these tutorials
[1][2] (in case you need to know if I missed something), while trying
Sent from my iPhone 5S
On Sep 12, 2014, at 8:01 PM, Erick Ocrospoma zipper1...@gmail.com wrote:
On 12 September 2014 21:16, JIten Shah jshah2...@me.com wrote:
Here's an example:
sudo mount -t ceph 192.168.0.1:6789:/ /mnt/mycephfs -o
name=admin,secret
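(The option list above is cut off; a common complete form of that mount, with the monitor address and secret file path as placeholders, looks like:)
sudo mount -t ceph 192.168.0.1:6789:/ /mnt/mycephfs -o name=admin,secretfile=/etc/ceph/admin.secret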
On Sep 6, 2014, at 8:22 PM, Christian Balzer ch...@gol.com wrote:
Hello,
On Sat, 06 Sep 2014 10:28:19 -0700 JIten Shah wrote:
Thanks Christian. Replies inline.
On Sep 6, 2014, at 8:04 AM, Christian Balzer ch...@gol.com wrote:
Hello,
On Fri, 05 Sep 2014 15:31:01 -0700 JIten
While checking the health of the cluster, I ran into the following error:
warning: health HEALTH_WARN too few pgs per osd (1 < min 20)
When I checked the pg_num and pgp_num values, I saw they were set to the default value of
64
ceph osd pool get data pg_num
pg_num: 64
ceph osd pool get data pgp_num
pgp_num:
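(A sketch of the usual fix, raising both values on the pool; the target of 512 is only a placeholder and the right number depends on the OSD count:)
ceph osd pool set data pg_num 512
ceph osd pool set data pgp_num 512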
Thanks Greg.
—Jiten
On Sep 8, 2014, at 10:31 AM, Gregory Farnum g...@inktank.com wrote:
On Mon, Sep 8, 2014 at 10:08 AM, JIten Shah jshah2...@me.com wrote:
While checking the health of the cluster, I ran into the following error:
warning: health HEALTH_WARN too few pgs per osd (1 < min 20
So, if it doesn’t refer to the entry in ceph.conf, where does it actually store
the new value?
—Jiten
On Sep 8, 2014, at 10:31 AM, Gregory Farnum g...@inktank.com wrote:
On Mon, Sep 8, 2014 at 10:08 AM, JIten Shah jshah2...@me.com wrote:
While checking the health of the cluster, I ran
On Mon, Sep 8, 2014 at 10:50 AM, JIten Shah jshah2...@me.com wrote:
So, if it doesn’t refer to the entry in ceph.conf, where does it actually
store the new value?
—Jiten
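(For reference: pool settings such as pg_num live in the cluster's OSD map kept by the monitors, not in ceph.conf, so they are read back from the cluster itself; a sketch:)
ceph osd pool get data pg_num
ceph osd dump | grep '^pool'   # pool definitions, including pg_num, as stored in the OSD map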
On Sep 8, 2014, at 10:31 AM, Gregory Farnum g...@inktank.com wrote:
On Mon, Sep 8, 2014
Hello Cephers,
We created a ceph cluster with 100 OSDs, 5 MONs and 1 MDS and most of the stuff
seems to be working fine, but we are seeing some degradation on the OSDs due to
lack of space on the OSDs. Is there a way to resize the OSDs without bringing
the cluster down?
--jiten
We ran into the same issue where we could not mount the filesystem on the
clients because they had kernel 3.9. Once we upgraded the kernel on the client node, we
were able to mount it fine. FWIW, you need kernel 3.14 and above.
--jiten
On Sep 5, 2014, at 6:55 AM, James Devine fxmul...@gmail.com wrote:
and
knowledge from the guys who have actually installed it and have it running in
their environment.
Any help is appreciated.
Thanks.
—Jiten Shah
/USECASES.md#i-want-to-try-this-module,-heard-of-ceph,-want-to-see-it-in-action
Cheers,
-Steve
On Fri, Aug 22, 2014 at 5:25 PM, JIten Shah jshah2...@me.com wrote:
Hi Guys,
I have been looking to try out a test Ceph cluster in my lab to see if we can
replace our traditional storage with it