Hi,
While using the rbd kernel client with cephx, a user without the admin
keyring was not able to map an rbd image to a block device, and this
is the expected workflow.
But the issue is that unmapping an rbd image without the admin keyring is
allowed, and as per my understanding it
Unmapping is an operation local to the host and doesn't communicate
with the cluster at all (at least, in the kernel you're running...in
very new code it might involve doing an unwatch, which will require
communication). That means there's no need for a keyring, since its
purpose is to validate
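The asymmetry described here can be sketched from the CLI (the pool/image name below is a made-up example, and these commands of course need a live cluster):

```shell
# Mapping talks to the cluster, so cephx credentials are required:
sudo rbd map rbd/myimage --id admin --keyring /etc/ceph/ceph.client.admin.keyring

# Unmapping only tears down the local block device, so no keyring is
# consulted (at least on kernels without the newer unwatch step):
sudo rbd unmap /dev/rbd0
```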
There are a lot of next steps on
http://ceph.com/docs/master/rados/troubleshooting/troubleshooting-osd/
You probably want to look at the bits about using the admin socket,
and diagnosing slow requests. :)
-Greg
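For example, the OSD admin socket can be queried on the OSD host to inspect in-flight and recent slow requests (the socket path shown is the default; adjust for your cluster):

```shell
# Requests currently being processed by osd.0:
sudo ceph --admin-daemon /var/run/ceph/ceph-osd.0.asok dump_ops_in_flight

# Recently completed requests, including slow ones:
sudo ceph --admin-daemon /var/run/ceph/ceph-osd.0.asok dump_historic_ops
```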
On Sun, Feb 8, 2015 at 8:48 PM, Matthew Monaco m...@monaco.cx wrote:
Hello!
***
On Fri, Feb 6, 2015 at 3:37 PM, David J. Arias david.ar...@getecsa.co wrote:
Hello!
I am sysadmin for a small IT consulting enterprise in México.
We are trying to integrate three servers running RHEL 5.9 into a new
Ceph cluster.
I downloaded the source code and tried compiling it, though I
On 02/09/2015 09:14 PM, Ken Dreyer wrote:
On 02/09/2015 08:17 AM, Gregory Farnum wrote:
I think there's ongoing work to backport (portions of?) Ceph to RHEL5,
but it definitely doesn't build out of the box. Even beyond the
library dependencies you've noticed, you'll find more issues with e.g.
On 02/09/2015 09:25 PM, Dan Mick wrote:
That tree will be being rebased soon
will be being? Wow.
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
On Tue, Feb 10, 2015 at 3:58 AM, Christopher Armstrong
ch...@opdemand.com wrote:
Hi folks,
One of our users is seeing machine crashes almost daily. He's using Ceph
v0.87 giant, and is seeing this crash:
On 02/09/2015 08:20 AM, Gregory Farnum wrote:
There are a lot of next steps on
http://ceph.com/docs/master/rados/troubleshooting/troubleshooting-osd/
You probably want to look at the bits about using the admin socket, and
diagnosing slow requests. :) -Greg
Yeah, I've been through most of
On Mon, Feb 9, 2015 at 7:12 PM, Matthew Monaco m...@monaco.cx wrote:
On 02/09/2015 08:20 AM, Gregory Farnum wrote:
There are a lot of next steps on
http://ceph.com/docs/master/rados/troubleshooting/troubleshooting-osd/
You probably want to look at the bits about using the admin socket, and
Hi,
does the lack of a battery-backed cache in Ceph introduce any disadvantages?
We use PostgreSQL and our servers have a UPS.
But I want to survive a power outage, even though it is unlikely. And hope is not
a strategy ...
Regards,
Thomas Güttler
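Whether a power outage loses data usually comes down to whether writes that the OSD acknowledged actually reached stable media. Ceph's journal writes are synchronous, but a volatile on-disk write cache can still lose them if the drive doesn't honor flushes; one hedged mitigation (the device name below is just an example) is to disable the volatile cache:

```shell
# Show the current write-cache setting:
sudo hdparm -W /dev/sdb

# Disable the volatile write cache (safer on power loss, but slower
# on drives without power-loss protection):
sudo hdparm -W 0 /dev/sdb
```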
On Mon, Feb 9, 2015 at 11:58 AM, Christopher Armstrong
ch...@opdemand.com wrote:
Hi folks,
One of our users is seeing machine crashes almost daily. He's using Ceph
v0.87 giant, and is seeing this crash:
2x10gig LAG across two switches with 2x40gig. Should be enough bandwidth
On Mon, Feb 9, 2015 at 6:10 AM, Eneko Lacunza elacu...@binovo.es wrote:
Hi,
The common recommendation is to use a good (Intel S3700) SSD disk for
journals for each 3-4 OSDs, or otherwise to use internal journal on each
Greg,
Thanks for confirming they're unrelated - that'll save me from going down
the wrong path in debugging.
I'll see what I can do about getting more info from the MDS - I'll relay
this to our user.
Chris
On Mon, Feb 9, 2015 at 1:51 PM, Gregory Farnum g...@gregs42.com wrote:
On Mon, Feb 9,
Hi,
The common recommendation is to use a good (Intel S3700) SSD disk for
journals for each 3-4 OSDs, or otherwise to use internal journal on each
OSD. Don't put more than one journal on the same spinning disk.
Also, it is recommended to use 500G-1TB disks, especially if you have a
1gbit
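As a sketch of that layout (filestore-era options; the partition labels here are made up), each OSD's journal can be pointed at its own SSD partition in ceph.conf:

```ini
[osd]
; journal size in MB; 5-10 GB is a common choice
osd journal size = 10240

[osd.0]
osd journal = /dev/disk/by-partlabel/journal-0

[osd.1]
osd journal = /dev/disk/by-partlabel/journal-1
```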
Hello!
I'm a novice with Ceph, and I am trying to install my first cluster.
The following command does not create the keys, and the monitors did not start.
The files /etc/hosts and /etc/ceph/ceph.conf are included at the end of this letter.
What am I doing wrong?
[ceph@ceph-vm05 ~]$ ceph-deploy --overwrite-conf mon
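For reference, the usual ceph-deploy monitor bootstrap (hostname taken from the prompt above; run from the admin node's working directory) looks something like:

```shell
# Generate the initial ceph.conf and monitor map for the host:
ceph-deploy new ceph-vm05

# Create the monitor(s) and gather the bootstrap keyrings:
ceph-deploy --overwrite-conf mon create-initial
ceph-deploy gatherkeys ceph-vm05
```

If the keyrings still do not appear, the monitor log under /var/log/ceph/ usually shows why the monitor failed to form a quorum.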
Hi folks,
One of our users is seeing machine crashes almost daily. He's using Ceph
v0.87 giant, and is seeing this crash:
https://gist.githubusercontent.com/ianblenke/b74e5aa5547130ebc0fb/raw/c3eeab076310d149443fd6118113b9d94f176303/gistfile1.txt
It seems easy to trigger this by rsyncing to the