On 09/09/2014 07:06 AM, yuelongguang wrote:
hi, josh.durgin:
I want to know how librbd launches IO requests.
use case:
Inside the VM, I use fio to test the rbd disk's IO performance.
fio's parameters are bs=4k, direct IO, qemu cache=none.
In this case, if librbd just sends what it gets from the VM, I mean no
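(For reference, a typical fio job for this kind of test might look like the
following; the guest device /dev/vdb and the job values are only illustrative.)
fio --name=randwrite-test --filename=/dev/vdb --rw=randwrite \
    --bs=4k --direct=1 --ioengine=libaio --iodepth=16 \
    --runtime=60 --time_based --group_reporting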
On 09/24/2014 04:57 PM, Brian Rak wrote:
I've been doing some testing of importing virtual machine images, and
I've found that 'rbd import' is at least 2x as slow as 'qemu-img
convert'. Is there anything I can do to speed this process up? I'd
like to use rbd import because it gives me a little
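(For comparison, the two import paths look roughly like this; the pool and
image names are made up.)
# convert straight into an rbd image through qemu's rbd driver
qemu-img convert -p -O raw vm-image.qcow2 rbd:rbd/vm-image
# import a raw file with the rbd tool
rbd import vm-image.raw rbd/vm-image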
On 10/24/2014 08:21 AM, Xu (Simon) Chen wrote:
Hey folks,
I am trying to enable OpenStack to use RBD as image backend:
https://bugs.launchpad.net/nova/+bug/1226351
For some reason, nova-compute segfaults due to librados crash:
./log/SubsystemMap.h: In function 'bool
On 12/17/2014 03:49 PM, Gregory Farnum wrote:
On Wed, Dec 17, 2014 at 2:31 PM, McNamara, Bradley
bradley.mcnam...@seattle.gov wrote:
I have a somewhat interesting scenario. I have an RBD of 17TB formatted
using XFS. I would like it accessible from two different hosts, one
mapped/mounted
On 12/18/2014 10:49 AM, Travis Rhoden wrote:
One question re: discard support for kRBD -- does it matter which format
the RBD is? Are Format 1 and Format 2 both okay, or just Format 2?
It shouldn't matter which format you use.
Josh
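(For example, with a mapped krbd device of either format, discard can be
exercised roughly like this; the device and mount point are illustrative.)
mount -o discard /dev/rbd0 /mnt/test   # issue discards as files are deleted
fstrim -v /mnt/test                    # or trim free space on demand
blkdiscard /dev/rbd0                   # or discard the whole (unmounted) device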
On 04/03/2014 03:36 PM, Jonathan Gowar wrote:
On Tue, 2014-03-04 at 14:05 +0800, YIP Wai Peng wrote:
Dear all,
I have an rbd image that I can't delete. It contains a snapshot that is
busy
# rbd --pool openstack-images rm
2383ba62-b7ab-4964-a776-fb3f3723aabe-deleted
2014-03-04
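(The usual sequence for an image whose snapshots block deletion is roughly the
following, assuming any clones of the snapshots have been flattened or removed
first; the image name is abbreviated to <image>.)
rbd --pool openstack-images snap ls <image>
rbd --pool openstack-images snap unprotect <image>@<snap>   # only if protected
rbd --pool openstack-images snap purge <image>
rbd --pool openstack-images rm <image>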
On 04/18/2014 10:47 AM, Alexandre DERUMIER wrote:
Thanks Kevin for the full explanation!
cache.writeback=on,cache.direct=off,cache.no-flush=off
I didn't know about the cache options split, thanks.
rbd does, to my knowledge, not use the kernel page cache, so we're safe
from that part. It
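(As a sketch, the split options appear as -drive suboptions on the qemu command
line; the image spec is illustrative, and cache=writeback is the shorthand form.)
-drive file=rbd:rbd/vm-disk,format=raw,if=virtio,cache.writeback=on,cache.direct=off,cache.no-flush=off
# equivalent shorthand:
-drive file=rbd:rbd/vm-disk,format=raw,if=virtio,cache=writeback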
On 05/08/2014 09:42 AM, Federico Iezzi wrote:
Hi guys,
First of all, congratulations on the Firefly release!
IMHO I think that this release is a huge step for the Ceph project!
Just for fun, this morning I upgraded one of my staging Ceph clusters used
by an OpenStack Havana installation (Canonical cloud
On 05/21/2014 03:03 PM, Olivier Bonvalet wrote:
Le mercredi 21 mai 2014 à 08:20 -0700, Sage Weil a écrit :
You're certain that that is the correct prefix for the rbd image you
removed? Do you see the objects listed when you do 'rados -p rbd ls - |
grep prefix'?
I'm pretty sure yes : since I
On 05/21/2014 03:29 PM, Vilobh Meshram wrote:
Hi All,
I want to understand how Ceph users go about quota management when
Ceph is used with OpenStack.
1. Is it recommended to use a common pool, say “volumes”, for creating
volumes which is shared by all tenants? In this case a common
On Fri, 6 Jun 2014 17:34:56 -0700
Tyler Wilson k...@linuxdigital.net wrote:
Hey All,
Simple question, does 'rbd export-diff' work with child snapshots,
aka:
root:~# rbd children images/03cb46f7-64ab-4f47-bd41-e01ced45f0b4@snap
compute/2b65c0b9-51c3-4ab1-bc3c-6b734cc796b8_disk
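(For what it's worth, export-diff is invoked the same way on a cloned child
image; the snapshot and file names below are hypothetical.)
rbd snap create compute/2b65c0b9-51c3-4ab1-bc3c-6b734cc796b8_disk@backup1
rbd export-diff compute/2b65c0b9-51c3-4ab1-bc3c-6b734cc796b8_disk@backup1 disk-backup1.diff
# later, only the changes since backup1:
rbd export-diff --from-snap backup1 compute/2b65c0b9-51c3-4ab1-bc3c-6b734cc796b8_disk@backup2 disk-backup1-to-2.diff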
On 06/10/2014 01:56 AM, Vilobh Meshram wrote:
How does Ceph guarantee data isolation for volumes which are not meant
to be shared in an OpenStack tenant?
When used with OpenStack, the data isolation is provided at the
OpenStack level, so that all users who are part of the same tenant will be
able to
On 07/04/2014 08:36 AM, Peter wrote:
I am having issues running radosgw-agent to sync data between two
radosgw zones. As far as I can tell both zones are running correctly.
My issue is when I run the radosgw-agent command:
radosgw-agent -v --src-access-key access_key --src-secret-key
On 07/07/2014 05:41 AM, Patrycja Szabłowska wrote:
OK, the mystery is solved.
From https://www.mail-archive.com/ceph-users@lists.ceph.com/msg10368.html
During a multi part upload you can't upload parts smaller than 5M
I've tried to upload smaller chunks, like 10KB. I've changed chunk size
to
-home on /home type ext4 (rw)
Any idea what went wrong here ?
Thanks Regards
Somnath
-Original Message-
From: Josh Durgin [mailto:josh.dur...@inktank.com]
Sent: Wednesday, September 18, 2013 6:10 PM
To: Somnath Roy
Cc: Sage Weil; ceph-de...@vger.kernel.org; Anirban Ray;
ceph-users
On 09/29/2013 07:34 PM, Aniket Nanhe wrote:
Hi,
We have a Ceph cluster set up and are trying to evaluate Ceph for its S3
compatible object storage. I came across this best practices document for
Amazon S3, which goes over how naming keys in a particular way can improve
performance of object GET
On 09/26/2013 10:11 AM, Jogi Hofmüller wrote:
Dear all,
I am fairly new to ceph and just in the process of testing it using
several virtual machines.
Now I tried to create a block device on a client and fumbled with
settings for about an hour or two until the command line
rbd --id dovecot
On 09/27/2013 09:25 AM, Travis Rhoden wrote:
Hello everyone,
I'm running a Cuttlefish cluster that hosts a lot of RBDs. I recently
removed a snapshot of a large one (rbd snap rm -- 12TB), and I noticed
that all of the clients had markedly decreased performance. Looking
at iostat on the OSD
On 10/02/2013 10:45 AM, Oliver Daudey wrote:
Hey Robert,
On 02-10-13 14:44, Robert van Leeuwen wrote:
Hi,
I'm running a test setup with Ceph (dumpling) and Openstack (Grizzly) using libvirt to
patch the ceph disk directly to the qemu instance.
I'm using SL6 with the patched qemu packages
On 10/02/2013 06:26 PM, Blair Bethwaite wrote:
Josh,
On 3 October 2013 10:36, Josh Durgin josh.dur...@inktank.com wrote:
The version base of qemu in precise has the same problem. It only
affects writeback caching.
You can get qemu 1.5 (which fixes the issue) for precise from ubuntu's
cloud
On 10/13/2013 07:43 PM, alan.zhang wrote:
CPU: Intel(R) Xeon(R) CPU E5620 @ 2.40GHz *2
MEM: 32GB
KVM: qemu-kvm-0.12.1.2-2.355.el6.2.cuttlefish.async.x86_64
Host: CentOS 6.4, kernel 2.6.32-358.14.1.el6.x86_64
Guest: CentOS 6.4, kernel 2.6.32-279.14.1.el6.x86_64
Ceph: ceph version
On 10/15/2013 08:56 PM, Blair Bethwaite wrote:
Date: Wed, 16 Oct 2013 16:06:49 +1300
From: Mark Kirkwood mark.kirkw...@catalyst.net.nz
To: Wido den Hollander w...@42on.com,
ceph-users@lists.ceph.com
On 10/18/2013 10:04 AM, Kevin Weiler wrote:
The kernel is 3.11.4-201.fc19.x86_64, and the image format is 1. I did,
however, try a map with an RBD that was format 2. I got the same error.
To rule out any capability drops as the culprit, can you map an rbd
image on the same host outside of a
On 10/20/2013 08:18 AM, Ugis wrote:
output follows:
#pvs -o pe_start /dev/rbd1p1
1st PE
4.00m
# cat /sys/block/rbd1/queue/minimum_io_size
4194304
# cat /sys/block/rbd1/queue/optimal_io_size
4194304
Well, the parameters are being set at least. Mike, is it possible that
having
On 10/21/2013 09:03 AM, Andrew Richards wrote:
Hi Everybody,
I'm attempting to get Ceph working for CentOS 6.4 running RDO Havana for
Cinder volume storage and boot-from-volume, and I keep bumping into
very unhelpful errors on my nova-compute test node and my cinder
controller node.
Here is
in Grizzly?
No, that's no longer necessary.
Josh
Thanks,
Andy
On Oct 21, 2013, at 12:26 PM, Josh Durgin josh.dur...@inktank.com wrote:
On 10/21/2013 09:03 AM, Andrew Richards wrote:
Hi Everybody,
I'm attempting to get Ceph working for CentOS 6.4 running RDO
On 10/16/2013 04:25 PM, Kelcey Jamison Damage wrote:
Hi,
I have gotten so close to having Ceph work in my cloud, but I have reached
a roadblock. Any help would be greatly appreciated.
I receive the following error when trying to get KVM to run a VM with an
RBD volume:
Libvirtd.log:
2013-10-16
On 10/30/2013 01:54 AM, Mark Kirkwood wrote:
On 29/10/13 20:53, lixuehui wrote:
Hi,list
From the documentation, a radosgw-agent's correct output should look like this:
INFO:radosgw_agent.sync:Starting incremental sync
INFO:radosgw_agent.worker:17910 is processing shard number 0
On 11/01/2013 03:07 AM, nicolasc wrote:
Hi every one,
I finally and happily managed to get my Ceph cluster (3 monitors among 8
nodes, each with 9 OSDs) running on version 0.71, but the rbd map
command shows a weird behaviour.
I can list pools, create images and snapshots, alleluia!
However,
On 11/08/2013 12:15 AM, Jens-Christian Fischer wrote:
Hi all
we have installed a Havana OpenStack cluster with RBD as the backing
storage for volumes, images and the ephemeral images. The code as
delivered in
https://github.com/openstack/nova/blob/master/nova/virt/libvirt/imagebackend.py#L498
On 11/08/2013 03:50 AM, Wido den Hollander wrote:
On 11/07/2013 08:42 PM, Gruher, Joseph R wrote:
Is there any plan to implement some kind of QoS in Ceph? Say I want to
provide service level assurance to my OpenStack VMs and I might have to
throttle bandwidth to some to provide adequate
On 11/07/2013 09:48 AM, lixuehui wrote:
Hi all :
After we built a region with two zones distributed across two Ceph
clusters and started the agent, it started working!
But what we find in the radosgw-agent stdout is that it fails to sync
objects all the time. Here is the info:
On 11/08/2013 03:13 PM, ja...@peacon.co.uk wrote:
On 2013-11-08 03:20, Haomai Wang wrote:
On Fri, Nov 8, 2013 at 9:31 AM, Josh Durgin josh.dur...@inktank.com
wrote:
I just list the commands below to help users understand:
cinder qos-create high_read_low_write consumer=front-end
read_iops_sec
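(A complete pair of commands along those lines, with made-up limits, might look
like this; the QoS spec is then attached to a volume type.)
cinder qos-create high_read_low_write consumer=front-end read_iops_sec=2000 write_iops_sec=500
cinder qos-associate <qos-spec-id> <volume-type-id>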
Sorry for the delay, I'm still catching up since the openstack
conference.
Does the system user for the destination zone exist with the same
access secret and key in the source zone?
If you enable debug rgw = 30 on the destination you can see why the
copy_obj from the source zone is failing.
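(That is, something like the following in the destination gateway's ceph.conf,
with the section name matching your rgw instance.)
[client.radosgw.<instance>]
    debug rgw = 30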
On 11/13/2013 09:06 PM, lixuehui wrote:
And on the slave zone gateway instance, the info is like this:
2013-11-14 12:54:24.516840 7f51e7fef700 1 == starting new
request req=0xb1e3b0 =
2013-11-14 12:54:24.526640 7f51e7fef700 1 == req done
req=0xb1e3b0
On 11/14/2013 09:54 AM, Dmitry Borodaenko wrote:
On Thu, Nov 14, 2013 at 6:00 AM, Haomai Wang haomaiw...@gmail.com wrote:
We are using the nova fork by Josh Durgin
https://github.com/jdurgin/nova/commits/havana-ephemeral-rbd - are there
more patches that need to be integrated?
I hope I can
On 11/19/2013 05:28 AM, Behar Veliqi wrote:
Hi,
when using the librados C library, the documentation of the different functions
just says that they return a negative error code on failure,
e.g. the rados_read function
(http://ceph.com/docs/master/rados/api/librados/#rados_read).
Is there
On 11/20/2013 06:53 AM, nicolasc wrote:
Thank you Bernhard and Wogri. My old kernel version also explains the
format issue. Once again, sorry to have mixed that in the problem.
Back to my original inquiries, I hope someone can help me understand why:
* it is possible to create an RBD image
On 11/27/2013 07:21 AM, James Pearce wrote:
I was going to add something to the bug tracker, but it looks to me that
contributor email addresses all have public (unauthenticated)
visibility? Can this be set in user preferences?
Yes, it can be hidden here: http://tracker.ceph.com/my/account
On 11/26/2013 02:22 PM, Stephen Taylor wrote:
From ceph-users archive 08/27/2013:
On 08/27/2013 01:39 PM, Timofey Koolin wrote:
Is there a way to know the real size of an rbd image and rbd snapshots?
rbd ls -l writes the declared size of the image, but I want to know the real size.
You can sum the sizes of the
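(One way to sum the extent sizes reported by rbd diff, as a rough sketch; the
pool and image names are placeholders.)
rbd diff rbd/myimage | awk '{ sum += $2 } END { print sum/1024/1024 " MB" }'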
On 11/26/2013 01:14 AM, Ta Ba Tuan wrote:
Hi James,
The problem is: why does Ceph not recommend using the device's UUID in ceph.conf,
when the above error can occur?
I think with the newer-style configuration, where your disks have
partition ids set up by ceph-disk instead of entries in ceph.conf, it
correctly, or whether this rbd admin
socket depends on the specified qemu package.
-Original Message-
From: Josh Durgin [mailto:josh.dur...@inktank.com]
Sent: Thursday, November 28, 2013 11:01 AM
To: Shu, Xinxin; ceph-us...@ceph.com
Subject: Re: [ceph-users] can not get rbd cache perf counter
On 11/27
extents marked 'zero' is always fine
for this size calculation.
Josh
[1] http://tracker.ceph.com/issues/6926
Again, I appreciate your help.
Steve
-Original Message-
From: Josh Durgin [mailto:josh.dur...@inktank.com]
Sent: Wednesday, November 27, 2013 7:51 PM
To: Stephen Taylor; ceph
On 12/02/2013 03:26 PM, Bill Eldridge wrote:
Hi all,
We're looking at using Ceph's copy-on-write for a ton of users'
replicated cloud image environments,
and are wondering how efficient Ceph is for adding user data to base
images -
is data added in normal 4kB or 64kB sizes, or can you specify
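(Roughly speaking, RBD stripes data over objects of 2^order bytes, 4 MB by
default, and the object size can be chosen at image creation time; names and
sizes below are illustrative.)
rbd create --size 10240 --order 23 rbd/base-image   # 8 MiB objects instead of the default 4 MiB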
On 12/05/2013 02:37 PM, Dmitry Borodaenko wrote:
Josh,
On Tue, Nov 19, 2013 at 4:24 PM, Josh Durgin josh.dur...@inktank.com wrote:
I hope I can release or push commits to this branch containing live-migration,
the incorrect filesystem size fix and ceph-snapshot support in a few days.
Can't wait
On 01/02/2014 01:40 PM, James Harper wrote:
I just had to restore an ms exchange database after a ceph hiccup (no actual
data lost - Exchange is very good like that with its no loss restore!). The
order
of events went something like:
. Loss of connection on osd to the cluster network (public
On 01/02/2014 10:51 PM, James Harper wrote:
I've not used ceph snapshots before. The documentation says that the rbd device
should not be in use before creating a snapshot. Does this mean that creating a
snapshot is not an atomic operation? I'm happy with a crash consistent
filesystem if
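(Without quiescing, the snapshot is crash consistent; to get a clean filesystem
you can freeze it first, e.g. with hypothetical names:)
fsfreeze -f /mnt/data              # quiesce the filesystem using the image
rbd snap create rbd/vm-disk@clean  # take the snapshot
fsfreeze -u /mnt/data              # resume I/O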
On 01/08/2014 11:07 AM, Gautam Saxena wrote:
When booting an image from Openstack in which CEPH is the back-end for
both volumes and images, I'm noticing that it takes about 10 minutes
during the spawning phase -- I believe OpenStack is making a full
copy of the 30 GB Windows image. Shouldn't
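(A common cause is a non-raw glance image, or glance not exposing direct URLs,
which forces a full copy instead of a copy-on-write clone; a minimal sketch of
the relevant glance-api.conf settings, assuming an RBD-backed glance:)
show_image_direct_url = True
default_store = rbd
# the image itself should also be in raw format for cloning to work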
On 02/04/2014 07:44 PM, Craig Lewis wrote:
On 2/4/14 17:06 , Craig Lewis wrote:
On 2/4/14 14:43 , Yehuda Sadeh wrote:
Does it ever catch up? You mentioned before that most of the writes
went to the same two buckets, so that's probably one of them. Note
that writes to the same bucket are
On 02/05/2014 01:23 PM, Craig Lewis wrote:
On 2/4/14 20:02 , Josh Durgin wrote:
From the log it looks like you're hitting the default maximum number of
entries to be processed at once per shard. This was intended to prevent
one really busy shard from blocking progress on syncing other shards
On 03/20/2014 02:07 PM, Dmitry Borodaenko wrote:
The patch series that implemented clone operation for RBD backed
ephemeral volumes in Nova did not make it into Icehouse. We have tried
our best to help it land, but it was ultimately rejected. Furthermore,
an additional requirement was imposed to
On 03/20/2014 07:03 PM, Dmitry Borodaenko wrote:
On Thu, Mar 20, 2014 at 3:43 PM, Josh Durgin josh.dur...@inktank.com wrote:
On 03/20/2014 02:07 PM, Dmitry Borodaenko wrote:
The patch series that implemented clone operation for RBD backed
ephemeral volumes in Nova did not make it into Icehouse
On 03/31/2014 03:03 PM, Brendan Moloney wrote:
Hi,
I was wondering if RBD snapshots use the CRUSH map to distribute
snapshot data and live data on different failure domains? If not, would
it be feasible in the future?
Currently rbd snapshots and live objects are stored in the same place,
On 03/12/2013 01:28 PM, Travis Rhoden wrote:
Thanks for the response, Trevor.
The root disk (/var/lib/nova/instances) must be on shared storage to run
the live migrate.
I would argue that it is on shared storage. It is an RBD stored in Ceph,
and that's available at each host via librbd.
PM, Josh Durgin josh.dur...@inktank.com wrote:
On 03/12/2013 01:28 PM, Travis Rhoden wrote:
Thanks for the response, Trevor.
The root disk (/var/lib/nova/instances) must be on shared storage to run
the live migrate.
I would argue that it is on shared storage. It is an RBD stored in
Ceph
On 03/18/2013 07:53 AM, Wolfgang Hennerbichler wrote:
On 03/13/2013 06:38 PM, Josh Durgin wrote:
Anyone seeing this problem, could you try the wip-rbd-cache-aio branch?
Hi,
just compiled and tested it out, unfortunately there's no big change:
ceph --version
ceph version 0.58-375
On 03/19/2013 11:03 PM, Chen, Xiaoxi wrote:
I think Josh may be the right man for this question ☺
To be more precise, I would like to add more words about the status:
1. We have configured “show_image_direct_url = True” in Glance, and from the
Cinder-volume’s log, we can make sure we have got
On 04/16/2013 02:08 AM, Wolfgang Hennerbichler wrote:
On 03/29/2013 09:46 PM, Josh Durgin wrote:
The issue was that the qemu rbd driver was blocking the main qemu
thread when flush was called, since it was using a synchronous flush.
Fixing this involves patches to librbd to add an asynchronous
On 05/03/2013 01:48 PM, Jens Kristian Søgaard wrote:
Hi,
* librbd: new async flush method to resolve qemu hangs (requires Qemu
update as well)
I'm very interested in this update, as it has held our system back.
Which version of qemu is needed?
It's not in a release yet.
The release notes
On 05/06/2013 03:41 PM, w sun wrote:
Hi Josh,
I assume by put up-to-date rbd on top of the RHEL package, you mean that the latest
asynchronous flush fix (QEMU portion) can be back-ported and included in the RPMs? Or
not?
Yeah, the async flush will be included. We'll need one version of the
On 05/13/2013 09:17 AM, w sun wrote:
While planning the usage of fast clone from openstack glance image store
to cinder volume, I am a little concerned about possible IO performance
impact to the cinder volume service node if I have to perform flattening
of multiple images down the road.
Am
with
cinder_endpoint_template=http://192.168.192.2:8776/v1/$(tenant_id)s
in your nova.conf.
Josh
-martin
On 30.05.2013 22:22, Martin Mailand wrote:
Hi Josh,
On 30.05.2013 21:17, Josh Durgin wrote:
It's trying to talk to the cinder api, and failing to connect at all.
Perhaps there's a firewall preventing
-e1565054a2d3 10240M 2
root@controller:~/vm_images#
-martin
On 30.05.2013 22:56, Josh Durgin wrote:
On 05/30/2013 01:50 PM, Martin Mailand wrote:
Hi Josh,
I found the problem, nova-compute tries to connect to the publicurl
(xxx.xxx.240.10) of the keystone endpoints, this ip
On 05/30/2013 02:50 PM, Martin Mailand wrote:
Hi Josh,
now everything is working, many thanks for your help, great work.
Great! I added those settings to
http://ceph.com/docs/master/rbd/rbd-openstack/ so it's easier to figure
out in the future.
-martin
On 30.05.2013 23:24, Josh Durgin
On 06/11/2013 11:59 AM, Guido Winkelmann wrote:
Hi,
I'm having issues with data corruption on RBD volumes again.
I'm using RBD volumes for virtual harddisks for qemu-kvm virtual machines.
Inside these virtual machines I have been running a C++ program (attached)
that fills a mounted filesystem
On 06/21/2013 09:48 AM, w sun wrote:
Josh Sebastien,
Does either of you have any comments on this cephx issue with multi-rbd
backend pools?
Thx. --weiguo
From: ws...@hotmail.com
To: ceph-users@lists.ceph.com
Date: Thu,
On 06/27/2013 05:54 PM, w sun wrote:
Thanks Josh. That explains. So I guess right now with Grizzly, you can
only use one rbd backend pool (assuming a different cephx key for
each pool) on a single Cinder node unless you are willing to modify
cinder-volume.conf and restart cinder service
On 07/06/2013 04:51 AM, Xue, Chendi wrote:
Hi, all
I want to fetch debug librbd and debug rbd logs when I am using the VM to
read/write.
Details:
I created a volume from ceph and attached it to a vm.
So I suppose when I do read/write in the VM, I can get some rbd debug
logs in
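(A sketch of the client-side ceph.conf settings that produce librbd logs for
the qemu process; the paths are illustrative and the qemu user needs write
access to them.)
[client]
    log file = /var/log/ceph/qemu-guest.$pid.log
    admin socket = /var/run/ceph/$cluster-$type.$id.$pid.asok
    debug rbd = 20
    debug rados = 20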
On 07/16/2013 06:06 PM, Gaylord Holder wrote:
Now whenever I try to map an RBD to a machine, mon0 complains:
feature set mismatch, my 2 < server's 2040002, missing 204
missing required protocol features.
Your cluster is using newer crush tunables to get better data
distribution, but your
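(The usual remedies are a newer client kernel, or telling the cluster to fall
back to the older tunables, e.g.:)
ceph osd crush tunables legacy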
[please keep replies on the list]
On 07/17/2013 04:04 AM, Gaylord Holder wrote:
On 07/16/2013 09:22 PM, Josh Durgin wrote:
On 07/16/2013 06:06 PM, Gaylord Holder wrote:
Now whenever I try to map an RBD to a machine, mon0 complains:
feature set mismatch, my 2 < server's 2040002, missing
On 07/17/2013 05:59 AM, Maciej Gałkiewicz wrote:
Hello
Is there any way to verify that cache is enabled? My machine is running
with following parameters:
qemu-system-x86_64 -machine accel=kvm:tcg -name instance-0302 -S
-machine pc-i440fx-1.5,accel=kvm,usb=off -cpu
On 07/17/2013 11:39 PM, Maciej Gałkiewicz wrote:
I have created a VM with KVM 1.1.2 and all I had was rbd_cache configured
in ceph.conf. The cache option in libvirt is set to none:
<disk type='network' device='disk'>
<driver name='qemu' type='raw' cache='none'/>
<source
On 07/18/2013 11:32 AM, Maciej Gałkiewicz wrote:
On 18 Jul 2013 20:25, Josh Durgin josh.dur...@inktank.com wrote:
Setting rbd_cache=true in ceph.conf will make librbd turn on the cache
regardless of qemu. Setting qemu to cache=none tells qemu that it
doesn't
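(If an admin socket is configured for the qemu process, the effective setting
can be checked roughly like this; the socket path depends on your [client]
admin socket option.)
ceph --admin-daemon /var/run/ceph/ceph-client.<id>.<pid>.asok config show | grep rbd_cache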
On 07/23/2013 06:09 AM, Oliver Schulz wrote:
Dear Ceph Experts,
I remember reading that, at least in the past, it wasn't recommended
to mount Ceph storage on a Ceph cluster node. Given a recent kernel
(3.8/3.9) and sufficient CPU and memory resources on the nodes,
would it now be safe to
* Mount
On 08/08/2013 05:40 AM, Oliver Francke wrote:
Hi Josh,
I have a session logged with:
debug_ms=1:debug_rbd=20:debug_objectcacher=30
as you requested from Mike, even if I think we do have another story
here, anyway.
Host-kernel is: 3.10.0-rc7, qemu-client 1.6.0-rc2, client-kernel is
On 08/09/2013 08:03 AM, Stefan Hajnoczi wrote:
On Fri, Aug 09, 2013 at 03:05:22PM +0100, Andrei Mikhailovsky wrote:
I can confirm that I am having similar issues with ubuntu vm guests using fio
with bs=4k direct=1 numjobs=4 iodepth=16. Occasionally i see hang tasks,
occasionally guest vm
On 08/12/2013 10:19 AM, PJ wrote:
Hi All,
Before going on to the issue description, here are our hardware configurations:
- Physical machine * 3: each has quad-core CPU * 2, 64+ GB RAM, HDD * 12
(500GB ~ 1TB per drive; 1 for system, 11 for OSD). Ceph OSDs are on
physical machines.
- Each physical
On 08/14/2013 02:22 PM, Michael Morgan wrote:
Hello Everyone,
I have a Ceph test cluster doing storage for an OpenStack Grizzly platform
(also testing). Upgrading to 0.67 went fine on the Ceph side with the cluster
showing healthy but suddenly I can't upload images into Glance anymore. The
On 08/20/2013 11:20 AM, Vincent Hurtevent wrote:
I'm not the end user. It's possible that the volume has been detached
without unmounting.
As the volume is unattached and the initial kvm instance is down, I was
expecting the rbd volume to be properly unlocked even if the guest unmount
hasn't
On 08/26/2013 12:03 AM, Wolfgang Hennerbichler wrote:
hi list,
I realize there's a command called rbd lock to lock an image. Can libvirt use
this to prevent virtual machines from being started simultaneously on different
virtualisation containers?
wogri
Yes - that's the reason for lock
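(The advisory lock commands themselves look like this; the lock id is an
arbitrary string and the locker value comes from the list output.)
rbd lock add rbd/vm-disk my-lock-id
rbd lock list rbd/vm-disk
rbd lock remove rbd/vm-disk my-lock-id <locker>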
On 08/26/2013 01:49 PM, Josh Durgin wrote:
On 08/26/2013 12:03 AM, Wolfgang Hennerbichler wrote:
hi list,
I realize there's a command called rbd lock to lock an image. Can
libvirt use this to prevent virtual machines from being started
simultaneously on different virtualisation containers
On 08/30/2013 02:22 PM, Oliver Daudey wrote:
Hey Mark,
On vr, 2013-08-30 at 13:04 -0500, Mark Chaney wrote:
Full disclosure, I have zero experience with openstack and ceph so far.
If I am going to use a Ceph RBD cluster to store my kvm instances, how
should I be doing backups?
1) I would
On 09/09/2013 04:57 AM, Andrey Korolyov wrote:
May I also suggest the same for the export/import mechanism? Say, if an image
was created by fallocate we may also want to leave holes upon upload,
and vice versa for export.
Import and export already omit runs of zeroes. They could detect
smaller runs
On 09/08/2013 01:14 AM, Da Chun Ng wrote:
I mapped an image to a system, and used blockdev to make it readonly.
But it failed.
[root@ceph0 mnt]# blockdev --setro /dev/rbd2
[root@ceph0 mnt]# blockdev --getro /dev/rbd2
0
It's on CentOS 6.4 with kernel 3.10.6.
Ceph 0.61.8.
Any idea?
For
On 09/10/2013 01:50 PM, Darren Birkett wrote:
One last question: I presume the fact that the 'volume_image_metadata'
field is not populated when cloning a glance image into a cinder volume
is a bug? It means that the cinder client doesn't show the volume as
bootable, though I'm not sure what
On 09/10/2013 01:51 AM, Andrey Korolyov wrote:
On Tue, Sep 10, 2013 at 3:03 AM, Josh Durgin josh.dur...@inktank.com wrote:
On 09/09/2013 04:57 AM, Andrey Korolyov wrote:
May I also suggest the same for export/import mechanism? Say, if image
was created by fallocate we may also want to leave
Also enabling rbd writeback caching will allow requests to be merged,
which will help a lot for small sequential I/O.
On 09/17/2013 02:03 PM, Gregory Farnum wrote:
Try it with oflag=dsync instead? I'm curious what kind of variation
these disks will provide.
Anyway, you're not going to get the
On 09/17/2013 03:30 PM, Somnath Roy wrote:
Hi,
I am running Ceph on a 3 node cluster and each of my server nodes is running 10
OSDs, one for each disk. I have one admin node and all the nodes are connected
with 2 x 10G networks. One network is for the cluster and the other one is configured as
public
On 02/05/2015 07:44 AM, Udo Lembke wrote:
Hi all,
is there any command to flush the rbd cache like the
echo 3 > /proc/sys/vm/drop_caches for the OS cache?
librbd exposes it as rbd_invalidate_cache(), and qemu uses it
internally, but I don't think you can trigger that via any user-facing
qemu
From: Logan Barfield lbarfi...@tqhosting.com
We've been running some tests to try to determine why our FreeBSD VMs
are performing much worse than our Linux VMs backed by RBD, especially
on writes.
Our current deployment is:
- 4x KVM Hypervisors (QEMU 2.0.0+dfsg-2ubuntu1.6)
- 2x OSD nodes
On 02/10/2015 07:54 PM, Blair Bethwaite wrote:
Just came across this in the docs:
Currently (i.e., firefly), namespaces are only useful for
applications written on top of librados. Ceph clients such as block
device, object storage and file system do not currently support this
feature.
Then
=
object_prefix rbd_id, allow rwx pool=foo namespace=bar'
Cinder or other management layers would still want broader access, but
these more restricted keys could be the only ones exposed to QEMU.
Josh
On 13 February 2015 at 05:57, Josh Durgin josh.dur...@inktank.com wrote:
On 02/10/2015 07:54 PM, Blair
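(A restricted key along those lines could be created roughly like this, reusing
the pool and namespace names from the example above:)
ceph auth get-or-create client.guest mon 'allow r' osd 'allow rwx pool=foo namespace=bar'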
On 01/06/2015 10:24 AM, Robert LeBlanc wrote:
Can't this be done in parallel? If the OSD doesn't have an object then
it is a noop and should be pretty quick. The number of outstanding
operations can be limited to 100 or a 1000 which would provide a
balance between speed and performance impact if
access (all handled automatically by librbd).
Using watch/notify to coordinate multi-client access would get complex
and inefficient pretty fast, and in general is best left to cephfs
rather than rbd.
Josh
On Jan 6, 2015 5:35 PM, Josh Durgin josh.dur...@inktank.com
, 2015 at 4:19 PM, Josh Durgin josh.dur...@inktank.com wrote:
On 01/06/2015 10:24 AM, Robert LeBlanc wrote:
Can't this be done in parallel? If the OSD doesn't have an object then
it is a noop and should be pretty quick. The number of outstanding
operations can be limited to 100 or a 1000 which
On 03/03/2015 03:28 PM, Ken Dreyer wrote:
On 03/03/2015 04:19 PM, Sage Weil wrote:
Hi,
This is just a heads up that we've identified a performance regression in
v0.80.8 from previous firefly releases. A v0.80.9 is working its way
through QA and should be out in a few days. If you haven't
On 03/02/2015 04:16 AM, koukou73gr wrote:
Hello,
Today I thought I'd experiment with snapshots and cloning. So I did:
rbd import --image-format=2 vm-proto.raw rbd/vm-proto
rbd snap create rbd/vm-proto@s1
rbd snap protect rbd/vm-proto@s1
rbd clone rbd/vm-proto@s1 rbd/server
And then proceeded
On 03/04/2015 01:36 PM, koukou73gr wrote:
On 03/03/2015 05:53 PM, Jason Dillaman wrote:
Your procedure appears correct to me. Would you mind re-running your
cloned image VM with the following ceph.conf properties:
[client]
rbd cache off
debug rbd = 20
log file =
On 03/26/2015 10:46 AM, Gregory Farnum wrote:
I don't know why you're mucking about manually with the rbd directory;
the rbd tool and rados handle cache pools correctly as far as I know.
That's true, but the rados tool should be able to manipulate binary data
more easily. It should probably
On 04/01/2015 02:42 AM, Jimmy Goffaux wrote:
English Version :
Hello,
I found a strange behavior in Ceph. This behavior is visible on Buckets
(RGW) and pools (RBD).
pools:
``
root@:~# qemu-img info rbd:pool/kibana2
image: rbd:pool/kibana2
file format: raw
virtual size: 30G (32212254720 bytes)