On 14.10.2014 16:23, Sage Weil wrote:
On Tue, 14 Oct 2014, Amon Ott wrote:
On 13.10.2014 20:16, Sage Weil wrote:
We've been doing a lot of work on CephFS over the past few months. This
is an update on the current state of things as of Giant.
...
* Either the kernel client (kernel 3.17 or
Hello Greg,
The dump and reset of the journal was successful:
[root@th1-mon001 ~]# /usr/bin/ceph-mds -i th1-mon001 --pid-file
/var/run/ceph/mds.th1-mon001.pid -c /etc/ceph/ceph.conf --cluster ceph
--dump-journal 0 journaldumptgho-mon001
journal is 9483323613~134215459
read 134213311 bytes at
We've been doing a lot of work on CephFS over the past few months. This
is an update on the current state of things as of Giant.
...
* Either the kernel client (kernel 3.17 or later) or userspace (ceph-fuse
or libcephfs) clients are in good working order.
Thanks for all the work and
Hi,
I am deploying a 3 node ceph storage cluster for my company,
following the webinar: http://www.youtube.com/watch?v=R3gnLrsZSno
I am stuck at formatting the OSDs and making them ready to mount the
directories. Below is the error thrown back:
root@ceph-node1:~# mkcephfs -a
Because this is an interesting problem, I added an additional host to my
4 node ceph setup that is a purely radosgw host. So I have
- ceph1 (mon + osd)
- ceph2-4 (osd)
- ceph5 (radosgw)
My ceph.conf on ceph5 included below. Obviously I changed my keystone
endpoints to use this host (ceph5).
Hi ALL,
I've created 2 mons and 2 OSDs on CentOS 6.5 (x86_64).
I've tried 4 times (clean CentOS installation) but always end up with health:
HEALTH_WARN
Never HEALTH_OK, always HEALTH_WARN! :(
# ceph -s
cluster d073ed20-4c0e-445e-bfb0-7b7658954874
health HEALTH_WARN 192 pgs degraded; 192 pgs
Hello,
osdmap e10: 4 osds: 2 up, 2 in
What about the following commands:
# ceph osd tree
# ceph osd dump
You have 2 OSDs on 2 hosts, but 4 OSDs seem to be defined in your crush map.
Regards,
Pascal
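If the two extra entries are stale OSDs left over from an earlier install (an assumption here; the `ceph osd tree` output would confirm it), the usual cleanup for each leftover id looks roughly like this:

```shell
# Sketch: remove a stale OSD (here osd.2) from the cluster and CRUSH map.
# Assumes the daemon for that id no longer exists; repeat per leftover id.
ceph osd out osd.2           # mark it out so data stops mapping to it
ceph osd crush remove osd.2  # drop it from the CRUSH map
ceph auth del osd.2          # remove its authentication key
ceph osd rm 2                # finally free the OSD id itself
```

After that, `ceph osd tree` should list only the OSDs that actually exist, and the degraded PG count should start dropping.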
On 15 Oct 2014 at 11:11, Roman intra...@gmail.com wrote:
Hi ALL,
I've created 2
Pascal,
Here is my latest installation:
cluster 204986f6-f43c-4199-b093-8f5c7bc641bb
health HEALTH_WARN 192 pgs degraded; 192 pgs stuck unclean;
recovery 20/40 objects degraded (50.000%)
monmap e1: 2 mons at
{ceph02=192.168.33.142:6789/0,ceph03=192.168.33.143:6789/0}, election
Firewall? Disable iptables, set SELinux to Permissive.
On 15 Oct, 2014 5:49 pm, Roman intra...@gmail.com wrote:
Pascal,
Here is my latest installation:
cluster 204986f6-f43c-4199-b093-8f5c7bc641bb
health HEALTH_WARN 192 pgs degraded; 192 pgs stuck unclean; recovery
20/40 objects
On 15 Oct 2014 at 11:48, Roman intra...@gmail.com wrote:
Pascal,
Here is my latest installation:
cluster 204986f6-f43c-4199-b093-8f5c7bc641bb
health HEALTH_WARN 192 pgs degraded; 192 pgs stuck unclean; recovery
20/40 objects degraded (50.000%)
monmap e1: 2 mons at
Yes of course...
iptables -F (no rules) = the same as disabled
SELINUX=disabled
I use VirtualBox as a testing ground, but I don't think that should be a problem.
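For reference, if iptables is kept enabled instead of flushed, the monitors and OSDs need their ports open; a minimal rule set (assuming the default Ceph ports) would be something like:

```shell
# Sketch: allow Ceph traffic through iptables (default ports assumed).
iptables -A INPUT -p tcp --dport 6789 -j ACCEPT       # monitor
iptables -A INPUT -p tcp --dport 6800:7300 -j ACCEPT  # OSD/MDS daemons
service iptables save                                 # persist on CentOS 6
```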
Firewall? Disable iptables, set SELinux to Permissive.
On 15 Oct, 2014 5:49 pm, Roman intra...@gmail.com
mailto:intra...@gmail.com wrote:
On 15.10.2014 14:11, Ric Wheeler wrote:
On 10/15/2014 08:43 AM, Amon Ott wrote:
On 14.10.2014 16:23, Sage Weil wrote:
On Tue, 14 Oct 2014, Amon Ott wrote:
On 13.10.2014 20:16, Sage Weil wrote:
We've been doing a lot of work on CephFS over the past few months.
This
is an update on the
Thanks Mark for looking into this further. As I mentioned earlier, I have
following nodes in my ceph cluster -
1 admin node
3 OSD (One of them is a monitor too)
1 gateway node
This should have worked technically. But I am not sure where I am going wrong.
I will continue to look into this and
I may be completely overlooking something here, but I keep getting "ssh:
cannot resolve hostname" when I try to contact my OSD nodes from my monitor
node. I have set the IP addresses of the 3 nodes in /etc/hosts as suggested
on the website.
Thanks in advance
James
On 10/15/2014 04:27 PM, Support - Avantek wrote:
I may be completely overlooking something here, but I keep getting "ssh:
cannot resolve hostname" when I try to contact my OSD nodes from my monitor
node. I have set the IP addresses of the 3 nodes in /etc/hosts as suggested on
the website.
Hi folks,
I recently had an OSD disk die, and I'm wondering what are the
current best practices for replacing it. I think I've thoroughly removed
the old disk, both physically and logically, but I'm having trouble figuring
out how to add the new disk into ceph.
For one thing,
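For context, a commonly used replacement sequence on clusters of this era (a sketch assuming the failed OSD was osd.12 and the new disk is /dev/sdX; both are placeholders, adjust to taste) is:

```shell
# Sketch: retire a dead OSD and bring a new disk in with ceph-disk.
ceph osd out osd.12            # if not already marked out
ceph osd crush remove osd.12   # remove it from the CRUSH map
ceph auth del osd.12           # drop its authentication key
ceph osd rm 12                 # free the OSD id
ceph-disk prepare /dev/sdX     # partition and format the new disk
ceph-disk activate /dev/sdX1   # start a new OSD on it (reuses a free id)
```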
On Wed, 15 Oct 2014, Amon Ott wrote:
On 15.10.2014 14:11, Ric Wheeler wrote:
On 10/15/2014 08:43 AM, Amon Ott wrote:
On 14.10.2014 16:23, Sage Weil wrote:
On Tue, 14 Oct 2014, Amon Ott wrote:
On 13.10.2014 20:16, Sage Weil wrote:
We've been doing a lot of work on CephFS over the
Hi,
I recently had an OSD disk die, and I'm wondering what are the
current best practices for replacing it. I think I've thoroughly removed
the old disk, both physically and logically, but I'm having trouble figuring
out how to add the new disk into ceph.
I did this today (one disk
Hi Daniel,
On 15/10/2014 08:02, Daniel Schwager wrote:
Hi,
I recently had an OSD disk die, and I'm wondering what are the
current best practices for replacing it. I think I've thoroughly removed
the old disk, both physically and logically, but I'm having trouble figuring
out how
Hi all,
When I remove all OSDs on a given host, then wait for all objects (PGs?) to
be active+clean, then remove the host (ceph osd crush remove hostname),
that causes the objects to shuffle around the cluster again.
Why does the CRUSH map depend on hosts that no longer have OSDs on
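To debug cases like this, it helps to decompile the CRUSH map before and after the host removal and diff the two; this is a standard inspection step, not specific to this thread:

```shell
# Sketch: dump and decompile the CRUSH map to compare before/after.
ceph osd getcrushmap -o crush.bin    # binary CRUSH map from the cluster
crushtool -d crush.bin -o crush.txt  # decompile it to readable text
less crush.txt                       # inspect the host buckets and weights
```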
On Wed, 15 Oct 2014 11:06:55 -0500, Chad Seys cws...@physics.wisc.edu
wrote:
Hi all,
When I remove all OSDs on a given host, then wait for all objects (PGs?) to
be active+clean, then remove the host (ceph osd crush remove hostname),
that causes the objects to shuffle around the
On Tue, Sep 30, 2014 at 6:49 PM, Dmitry Borodaenko
dborodae...@mirantis.com wrote:
Last stable Firefly release (v0.80.5) was tagged on July 29 (over 2
months ago). Since then, there were twice as many commits merged into
the firefly branch than there existed on the branch before v0.80.5:
$
For the humble ceph user that I am, it is really hard to follow which version
of which product will get the changes I require.
Let me explain. I use ceph in my company, which is specialised in disk
recovery; my company needs a flexible, easy to maintain, trustworthy way
to store the data from the
Hi Chad,
That sounds bizarre to me, and I can't reproduce it. I added an osd (which was
previously not in the crush map) to a fake host=test:
ceph osd crush create-or-move osd.52 1.0 rack=RJ45 host=test
that resulted in some data movement of course. Then I removed that osd from the
crush
On Wed, Oct 15, 2014 at 9:39 AM, Dmitry Borodaenko
dborodae...@mirantis.com wrote:
On Tue, Sep 30, 2014 at 6:49 PM, Dmitry Borodaenko
dborodae...@mirantis.com wrote:
Last stable Firefly release (v0.80.5) was tagged on July 29 (over 2
months ago). Since then, there were twice as many commits
Hi Mariusz,
Usually removing an OSD without removing the host happens when you
remove/replace dead drives.
Hosts are in the map so:
* CRUSH won't put 2 copies on the same node
* you can balance around network interface speed
That does not answer the original question IMO: Why does the CRUSH map depend
Hi Dan,
I'm using Emperor (0.72). Though I would think CRUSH maps have not changed
that much between versions?
That sounds bizarre to me, and I can't reproduce it. I added an osd (which
was previously not in the crush map) to a fake host=test:
ceph osd crush create-or-move osd.52 1.0
Hi,
October 15 2014 7:05 PM, Chad Seys cws...@physics.wisc.edu wrote:
Hi Dan,
I'm using Emperor (0.72). Though I would think CRUSH maps have not changed
that much between versions?
I'm using dumpling, with the hashpspool flag enabled, which I believe could
have been the only difference.
That
Loic,
root@ceph-node3:~# smartctl -a /dev/sdd | less
=== START OF INFORMATION SECTION ===
Device Model: ST4000NM0033-9ZM170
Serial Number:Z1Z5LGBX
..
admin@ceph-admin:~/cluster1$ emacs -nw ceph.conf
Sadly undump has been broken for quite some time (it was fixed in
giant as part of creating cephfs-journal-tool). If there's a one line
fix for this then it's probably worth putting in firefly since it's a
long term supported branch -- I'll do that now.
John
On Wed, Oct 15, 2014 at 8:23 AM,
Hello Mark -
I set up a new Ceph cluster like before, but this time it is talking to
Icehouse. Same set of problems as before: keystone flags are not being
honored if they are under [client.radosgw.gateway]. It seems like the
issue is with my radosgw setup. Let me create a new thread
I am trying to integrate Openstack keystone with radosgw. I have followed the
instructions as per the link - http://ceph.com/docs/master/radosgw/keystone/.
But for some reason, keystone flags under the [client.radosgw.gateway] section
are not being honored. That means the presence of these flags never
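For comparison, the section layout that the radosgw keystone documentation describes looks roughly like the sketch below; the host name, keyring path, and token values are placeholders, not taken from this thread:

```ini
; Sketch of a radosgw section with keystone integration (placeholder values).
[client.radosgw.gateway]
host = ceph5
keyring = /etc/ceph/keyring.radosgw.gateway
rgw socket path = /var/run/ceph/radosgw.sock
log file = /var/log/ceph/radosgw.log
rgw keystone url = http://keystone-host:35357
rgw keystone admin token = ADMIN_TOKEN
rgw keystone accepted roles = Member, admin
rgw s3 auth use keystone = true
```

Note the section name must match the id the daemon starts with (`-n client.radosgw.gateway`), otherwise the flags are read from the wrong section and silently ignored.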
HI Cephers,
I have another question related to this issue: what would be the
procedure to recover from a whole-server failure (for example due to
motherboard trouble, with no damage to the disks)?
Regards, I
2014-10-15 20:22 GMT+02:00 Daniel Schwager daniel.schwa...@dtnet.de:
Loic,
Hi Ceph users,
(sorry for the novel, but perhaps this might be useful for someone)
During our current project to upgrade our cluster from disks-only to
SSD journals, we've found it useful to convert our legacy puppet-ceph
deployed cluster (using something like the enovance module) to one that
I'm leveraging Ceph in a vm prototyping environment currently and am having
issues abstracting my VM definitions from the storage pool (to use a libvirt
convention).
I'm able to use the rbd support within the disk configuration of individual VMs
but am struggling to find a good reference for
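For what it's worth, libvirt can define an rbd-backed storage pool so that VM definitions reference the pool rather than per-disk rbd details; a minimal sketch (pool name, monitor host, and secret UUID are placeholders) looks like:

```xml
<!-- Sketch: libvirt storage pool backed by a Ceph rbd pool (placeholders). -->
<pool type="rbd">
  <name>cephpool</name>
  <source>
    <name>libvirt-pool</name>
    <host name="ceph1" port="6789"/>
    <auth type="ceph" username="libvirt">
      <secret uuid="PLACEHOLDER-UUID"/>
    </auth>
  </source>
</pool>
```

Defined with `virsh pool-define`, volumes in the pool can then be attached to guests by pool/volume name instead of repeating the rbd connection details in every domain XML.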
On 10/15/2014 4:20 PM, Dan van der Ster wrote:
Hi Ceph users,
(sorry for the novel, but perhaps this might be useful for someone)
During our current project to upgrade our cluster from disks-only to
SSD journals, we've found it useful to convert our legacy puppet-ceph
deployed cluster (using
On 16/10/14 09:08, lakshmi k s wrote:
I am trying to integrate Openstack keystone with radosgw. I have
followed the instructions as per the link -
http://ceph.com/docs/master/radosgw/keystone/. But for some reason,
keystone flags under [client.radosgw.gateway] section are not being
honored. That
Gregory,
Thanks for the prompt response, we'll go with v0.80.7.
It would still be nice if v0.80.8 doesn't take as long as v0.80.6; I
suspect one of the reasons it went wrong was too many commits
without intermediate releases.
On Wed, Oct 15, 2014 at 9:59 AM, Gregory Farnum g...@inktank.com
Hello Mark -
Changing the rgw keystone url to http://192.168.122.165:35357 did not help. I
continue to get 401 error. Also, I am trying to integrate with Icehouse this
time. I did not see any keystone.conf in /etc/apache2/sites-available for
adding WSGI chunked encoding. That said, I am
On 16/10/14 10:37, Mark Kirkwood wrote:
On 16/10/14 09:08, lakshmi k s wrote:
I am trying to integrate Openstack keystone with radosgw. I have
followed the instructions as per the link -
http://ceph.com/docs/master/radosgw/keystone/. But for some reason,
keystone flags under
Please send your /etc/hosts contents here.
--Jiten
On Oct 15, 2014, at 7:27 AM, Support - Avantek supp...@avantek.co.uk wrote:
I may be completely overlooking something here, but I keep getting “ssh:
cannot resolve hostname” when I try to contact my OSD nodes from my monitor
node. I have
I still think that there is a problem with the way radosgw is set up. Two things I
want to point out -
1. rgw keystone url - If this flag is under radosgw section of ceph.conf file,
I do not see the packets being exchanged between keystone and gateway node when
radosgw is restarted. I tried to
Hi Dan,
Great story and congratulations on the successful conversion :-) There are two
minor pitfalls left but they are only an inconvenience when testing the
ceph-disk prepare / udev logic ( https://github.com/ceph/ceph/pull/2717 and
https://github.com/ceph/ceph/pull/2648 ).
Cheers
On
Has anyone seen this issue? Appreciate your time.
On Wednesday, October 15, 2014 4:50 PM, lakshmi k s lux...@yahoo.com wrote:
I still think that there is a problem with the way radosgw is set up. Two things I
want to point out -
1. rgw keystone url - If this flag is under radosgw section of