On 1/24/2013 2:49 AM, Gandalf Corvotempesta wrote:
2013/1/24 Dimitri Maziuk dmaz...@bmrb.wisc.edu:
So I'm stuck at a point way before those guides become relevant: once I
had one OSD/MDS/MON box up, I got HEALTH_WARN 384 pgs degraded; 384 pgs
stuck unclean; recovery 21/42 degraded (50.000%)
On 1/24/2013 8:20 AM, Sam Lang wrote:
Yep, it means that you only have one OSD with a replication level of 2.
If you had a rep level of 3, you would see degraded (66.667%). If you
just want to make the message go away (for testing purposes), you can
set the rep level to 1.
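For example, something along these lines for the default pools (a sketch;
data/metadata/rbd are the stock pool names and may differ in your setup):
ceph osd pool set data size 1
ceph osd pool set metadata size 1
ceph osd pool set rbd size 1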
One other question I have left (so far) is: I read and tried to follow
http://ceph.com/docs/master/install/rpm/ and
http://ceph.com/docs/master/start/quick-start/ on centos 6.3.
The mkcephfs step fails without the rbd kernel module.
I just tried to find libvirt, kernel, module, and qemu on those
Hi Dimitri,
Where in ceph.conf do I tell it to use qemu and librbd instead of
kernel module?
You do not need to specify that in ceph.conf.
When you run qemu, you specify the disk on the command line, for example like this:
-drive format=rbd,file=rbd:pool/imagename,if=virtio,index=0,boot=on
Where you replace
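A fuller invocation, purely for illustration (the pool and image names are
placeholders and the memory/network options are arbitrary), might look like:
qemu-system-x86_64 -m 1024 \
  -drive format=rbd,file=rbd:rbd/myimage,if=virtio,index=0 \
  -net nic -net user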
On 01/24/2013 04:53 PM, Jens Kristian Søgaard wrote:
Hi Dimitri,
Where in ceph.conf do I tell it to use qemu and librbd instead of
kernel module?
You do not need to specify that in ceph.conf.
When you run qemu then specify the disk for example like this:
-drive
On Thu, Jan 24, 2013 at 9:28 AM, Dimitri Maziuk dmaz...@bmrb.wisc.edu wrote:
On 1/24/2013 8:20 AM, Sam Lang wrote:
Yep it means that you only have one OSD with replication level of 2.
If you had a rep level of 3, you would see degraded (66.667%). If you
just want to make the message go away
On 1/24/2013 9:58 AM, Wido den Hollander wrote:
On 01/24/2013 04:53 PM, Jens Kristian Søgaard wrote:
Hi Dimitri,
Where in ceph.conf do I tell it to use qemu and librbd instead of
kernel module?
You do not need to specify that in ceph.conf.
When you run qemu then specify the disk for
Hi Dimitri,
The step that's failing without the kernel module is "Deploy the
configuration", step #2:
mkcephfs -a -c /etc/ceph/ceph.conf -k ceph.keyring
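For context, the ceph.conf behind that command is the quick-start single-host
layout, roughly like this (a sketch; the hostname, address and daemon IDs are
placeholders, not the exact file from the guide):
[global]
        auth supported = cephx
[osd]
        osd journal size = 1000
        filestore xattr use omap = true
[mon.a]
        host = myhost
        mon addr = 192.168.0.10:6789
[osd.0]
        host = myhost
[mds.a]
        host = myhost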
Could you elaborate on how it fails?
Do you get an error message?
Are you saying I'm to run qemu -drive ... instead of mkcephfs?
No, not at
On Thu, Jan 24, 2013 at 9:45 AM, Dimitri Maziuk dmaz...@bmrb.wisc.edu wrote:
One other question I have left (so far) is: I read and tried to follow
http://ceph.com/docs/master/install/rpm/ and
http://ceph.com/docs/master/start/quick-start/ on centos 6.3.
mkcephfs step fails without rbd
On 1/24/2013 10:22 AM, Sam Lang wrote:
... Does that make sense?
Yes, but when I'm trying to set up a ceph server using the quick start
guide, mkcephfs is failing with an error message I didn't write down,
but the complaint was along the lines of missing rbd.ko. Booting a 3.7
kernel made it
On Jan 24, 2013, at 10:09 AM, Dimitri Maziuk dmaz...@bmrb.wisc.edu wrote:
Yes, but when I'm trying to set up a ceph server using the quick start guide,
mkcephfs is failing with an error message I didn't write down, but the
complaint was along the lines of missing rbd.ko. Booting a 3.7 kernel
On 01/24/2013 07:28 AM, Dimitri Maziuk wrote:
On 1/24/2013 8:20 AM, Sam Lang wrote:
Yep it means that you only have one OSD with replication level of 2.
If you had a rep level of 3, you would see degraded (66.667%). If you
just want to make the message go away (for testing purposes), you can
On 01/20/2013 08:32 AM, Dimitri Maziuk wrote:
On 1/19/2013 11:13 AM, Sage Weil wrote:
If you want to use the kernel client(s), that is true: there are no plans
to backport the client code to the ancient RHEL kernels. Nothing prevents
you from running the server side, though, or the userland
On 01/24/2013 12:15 PM, Dan Mick wrote:
On 01/24/2013 07:28 AM, Dimitri Maziuk wrote:
On 1/24/2013 8:20 AM, Sam Lang wrote:
Yep it means that you only have one OSD with replication level of 2.
If you had a rep level of 3, you would see degraded (66.667%). If you
just want to make the message
On 01/24/2013 12:38 PM, John Wilkins wrote:
Dima,
I'm working on a new monitoring and troubleshooting guide now that will
answer most of the questions related to OSD and placement group states. I
hope to have it done this week. I have not actually tested the quick starts
on centos or rhel
On 01/24/2013 12:16 PM, Dan Mick wrote:
This is an apparently-unique problem, and we'd love to see details.
I hate it when it makes a liar out of me; this time around it worked on
2.6.32 -- FSVO "worked": I did get it to the 384 pgs stuck unclean stage.
Dima,
I went ahead and updated the quick-start conf with an example. I
appreciate the feedback.
John
On Thu, Jan 24, 2013 at 11:52 AM, Dimitri Maziuk dmaz...@bmrb.wisc.edu wrote:
On 01/24/2013 12:38 PM, John Wilkins wrote:
Dima,
I'm working on a new monitoring and troubleshooting guide
You'd think that only one [osd] section in ceph.conf implies
nrep = 1, though. (And then you can go on adding OSDs and changing nrep
accordingly -- that was my plan.)
Yeah; it's probably mostly just that one-OSD configurations are so
uncommon that we never special-cased that small user set.
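If you would rather set it explicitly than have it inferred, the usual knob
is the default replica count in ceph.conf (a sketch; check the option name
against your version's config reference):
[global]
        osd pool default size = 1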
On 01/24/2013 03:07 PM, Dan Mick wrote:
...
Yeah; it's probably mostly just that one-OSD configurations are so
uncommon that we never special-cased that small user set. Also, you can
run with a cluster in that state forever (well, until that one OSD dies
at least); I do that regularly with
On Thu, 24 Jan 2013, Dimitri Maziuk wrote:
On 01/24/2013 03:07 PM, Dan Mick wrote:
...
Yeah; it's probably mostly just that one-OSD configurations are so
uncommon that we never special-cased that small user set. Also, you can
run with a cluster in that state forever (well, until that one
On 01/24/2013 03:48 PM, Sage Weil wrote:
On Thu, 24 Jan 2013, Dimitri Maziuk wrote:
So I redid it with 2 OSDs and got the expected HEALTH_OK right from
the start.
There may be a related issue at work here: the default crush rules now
replicate across hosts instead of across osds, so
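For a single-host test box, one way to switch back to per-OSD replication is
to edit the crush map (a sketch; the file names here are arbitrary):
ceph osd getcrushmap -o crushmap.bin
crushtool -d crushmap.bin -o crushmap.txt
# change "step chooseleaf firstn 0 type host" to "... type osd" in the rules
crushtool -c crushmap.txt -o crushmap.new
ceph osd setcrushmap -i crushmap.new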
John,
in the block device quick start (http://ceph.com/docs/master/start/quick-rbd/)
sudo rbd map foo --pool rbd --name client.admin
maps the image to /dev/rbd0 here (centos 6.3/bobtail) so the subsequent
4. Use the block device. In the following example, create a file system.
sudo mkfs.ext4 -m0
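For illustration, the rest of that step would look roughly like this, assuming
the image really did map to /dev/rbd0 (rbd showmapped confirms the device):
rbd showmapped
sudo mkfs.ext4 -m0 /dev/rbd0
sudo mkdir -p /mnt/rbd
sudo mount /dev/rbd0 /mnt/rbd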
On 01/24/2013 02:15 PM, Dimitri Maziuk wrote:
John,
in block device quick start (http://ceph.com/docs/master/start/quick-rbd/)
sudo rbd map foo --pool rbd --name client.admin
maps the image to /dev/rbd0 here (centos 6.3/bobtail) so the subsequent
4. Use the block device. In the following
Do I need to update the doc for Dima's comment then, or will the bug
fix take care of it?
On Thu, Jan 24, 2013 at 2:52 PM, Josh Durgin josh.dur...@inktank.com wrote:
On 01/24/2013 02:15 PM, Dimitri Maziuk wrote:
John,
in block device quick start (http://ceph.com/docs/master/start/quick-rbd/)
On 01/24/2013 03:36 PM, John Wilkins wrote:
Do I need to update the doc for Dima's comment then, or will the bug
fix take care of it?
Fixing the packages will take care of it.
On Thu, Jan 24, 2013 at 2:52 PM, Josh Durgin josh.dur...@inktank.com wrote:
On 01/24/2013 02:15 PM, Dimitri Maziuk
On Sun, Jan 20, 2013 at 10:39 AM, Dimitri Maziuk dmaz...@bmrb.wisc.edu wrote:
On 1/19/2013 12:16 PM, Sage Weil wrote:
We generally recommend the KVM+librbd route, as it is easier to manage the
dependencies, and is well integrated with libvirt. FWIW this is what
OpenStack and CloudStack
Dimitri,
For what it's worth I also stepped through the process of spinning up
Ceph and OpenStack on a single EC2 node in a recent blog entry:
http://ceph.com/howto/building-a-public-ami-with-ceph-and-openstack/
It has some shortcuts (read: not meant to be production) but it may
help give you a
On 01/23/2013 10:19 AM, Patrick McGarry wrote:
http://ceph.com/howto/building-a-public-ami-with-ceph-and-openstack/
On Wed, Jan 23, 2013 at 10:13 AM, Sam Lang sam.l...@inktank.com wrote:
http://ceph.com/docs/master/rbd/rbd-openstack/
These are both great, I'm sure, but Patrick's page says I
On Jan 23, 2013, at 5:10 PM, Dimitri Maziuk dmaz...@bmrb.wisc.edu wrote:
On 01/23/2013 10:19 AM, Patrick McGarry wrote:
http://ceph.com/howto/building-a-public-ami-with-ceph-and-openstack/
On Wed, Jan 23, 2013 at 10:13 AM, Sam Lang sam.l...@inktank.com wrote:
On 01/23/2013 06:17 PM, John Nielsen wrote:
...
http://ceph.com/docs/master/install/rpm/
http://ceph.com/docs/master/start/quick-start/
Between those two links, my own quick start on CentOS 6.3 took maybe 6 minutes.
YMMV.
It does, obviously, since
Deploy the configuration
...
2. Execute the
On 1/19/2013 12:16 PM, Sage Weil wrote:
We generally recommend the KVM+librbd route, as it is easier to manage the
dependencies, and is well integrated with libvirt. FWIW this is what
OpenStack and CloudStack normally use.
OK, so is there a quick start document for that configuration?
(Oh,
Hello,
I was unable to get ceph to run on centos 6.3 following the 5 minute
Same here... I was only able to build the ceph-fuse client.
Denis
On Sat, 19 Jan 2013, Dimitri Maziuk wrote:
On 1/19/2013 9:50 AM, Peter Smith wrote:
3. OS recommendation: The OS recommendation page:
http://ceph.com/docs/master/install/os-recommendations/#bobtail-0-56
says CentOS 6.3 has a default kernel with old kernel client. CentOS
6.3 is our
Oops, that doesn't sound like good news for CentOS users. For KVM's rbd
image/volume use case, is it well supported? Or will Inktank consider
supporting CentOS in the near future?
CephFS doesn't seem like an urgent requirement, but rbd and the object store
are the core requirements for cloud service providers who
Thanks for the reply, Sage and everyone.
Sage, so I can expect Ceph rbd to work well on CentOS 6.3 if I only use
it as the Cinder volume backend, because the librbd in QEMU doesn't
make use of the kernel client, right?
Could you explain a bit more about what the functions of the kernel
client are? Will it
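For reference, the rbd Cinder backend of that era boiled down to roughly
these cinder.conf lines (a sketch; the pool name is a placeholder and the
exact option names may differ by OpenStack release):
volume_driver=cinder.volume.driver.RBDDriver
rbd_pool=volumes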
Or upgrade to the 3.7.3 kernel on Precise? Does Inktank test on Ubuntu
12.04 with the old kernel or with the 3.7.3 kernel?
On Sun, Jan 20, 2013 at 8:41 AM, Jeff Mitchell jeffrey.mitch...@gmail.com wrote:
I'd recommend qemu 1.2+. You'll probably need a newer libvirt than Centos 6
has as well. libvirt 0.10+ is
Hi Bill,
2011/12/18 Bill Hastings bllhasti...@gmail.com:
I am trying to get my feet wet with Ceph and RADOS. My aim is to use
it as a block device for KVM instances. My understanding is that
virtual disks get striped at 1 MB boundaries by default. Does that
mean that there are going to be
Thanks for the response. What if a write of 16 bytes was successful at
nodes A and B and failed at C, perhaps because C was momentarily unreachable
via the network? How is the Ceph client prevented from performing the
next read at C? Also, what if the writes to OSDs were successful but
the metadata
On Sun, Dec 18, 2011 at 8:43 AM, Bill Hastings bllhasti...@gmail.com wrote:
Thanks for the response. What if a write of 16 bytes was successful at
nodes A and B and failed at C, perhaps because C was momentarily unreachable
via the network? How is the Ceph client prevented from performing the
next
These are perhaps very inane questions but I am trying to wrap my head
around this whole thing. So basically the primary OSD handling a
particular PG will make sure that the writes happen at all replicas. I
am assuming the client would time out in case it doesn't get an
ack/commit within some time
Hi All
I am trying to get my feet wet with Ceph and RADOS. My aim is to use
it as a block device for KVM instances. My understanding is that
virtual disks get striped at 1 MB boundaries by default. Does that
mean that there are going to be 1MB files on disks? Let's say I want
to update a
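For illustration, the striping unit is per-image and visible with the rbd
tool (a sketch; "foo" is a placeholder image name, and --order is log2 of
the object size, so 20 means 1 MB objects):
rbd create foo --size 10240 --order 20
rbd info foo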