Hi David,
I re-uploaded the entire log at http://123.30.41.138/ceph-osd.21.log
There are many, many entries like this :|
2014-11-04 18:24:38.641529 7f0fda7ac780 15 read_log missing
106395'4837671 (106395'4837670) modify
5479e128/rbd_data.74ae9c3be03aff.0b01/head//6 by
client.7912580.0:19413835
Is RBD snapshotting what I'm looking for? Is this even possible?
Yes, you can use RBD snapshotting with export / import:
http://ceph.com/dev-notes/incremental-snapshots-with-rbd/
But you need to do it for each RBD volume.
Here is a script to do it:
http://www.rapide.nl/blog/item/ceph_-_rbd_replication
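In case that link goes away, here is a minimal sketch of the same idea, not the linked script itself; the pool name, destination directory and snapshot naming are all illustrative:

#!/bin/bash
# Sketch: snapshot every image in a pool and export an incremental diff
# since the previous snapshot (full export if no earlier snapshot exists).
POOL=rbd                 # illustrative pool name
DEST=/backup/rbd         # illustrative destination directory
TODAY=$(date +%Y%m%d)

for IMG in $(rbd ls "$POOL"); do
    # most recent existing snapshot, if any (skip the header line)
    PREV=$(rbd snap ls "$POOL/$IMG" | awk 'NR>1 {print $2}' | tail -n 1)
    rbd snap create "$POOL/$IMG@$TODAY"
    if [ -n "$PREV" ]; then
        rbd export-diff --from-snap "$PREV" "$POOL/$IMG@$TODAY" "$DEST/${IMG}_${PREV}_${TODAY}.diff"
    else
        rbd export-diff "$POOL/$IMG@$TODAY" "$DEST/${IMG}_${TODAY}.diff"
    fi
done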
Hi Alexandre,
Thanks for the link! Unless I'm misunderstanding, this is to replicate an
RBD volume from one cluster to another. What if I just wanted to back up a
running cluster without having another cluster to replicate to? i.e. I'd
ideally like a tarball of raw files that I could extract on a
What if I just wanted to back up a running cluster without having another
cluster to replicate to
Yes, the import is optional,
you can simply export and pipe to tar:
rbd export-diff --from-snap snap1 pool/image@snap2 - | tar
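A slightly fuller sketch of that pipeline: export-diff emits a single stream, so writing it to a compressed file (and, if you like, tarring several such files up afterwards) is one way to bundle it. The names are illustrative:

rbd export-diff --from-snap snap1 pool/image@snap2 - | gzip > image_snap1-snap2.diff.gz

# to restore into an image that already contains snap1:
gunzip -c image_snap1-snap2.diff.gz | rbd import-diff - pool/image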
- Original Message -
From: Christopher Armstrong
Hi.
I changed the script and added a multithreaded archiver to it.
See: http://www.theirek.com/blog/2014/10/26/primier-biekapa-rbd-ustroistva
2014-11-05 14:03 GMT+03:00 Alexandre DERUMIER aderum...@odiso.com:
What if I just wanted to back up a running cluster without having
another cluster to
Hello,
after doing a single-step transition, the test cluster is hanging in an
unclean state, both before and after the crush tunables adjustment:
status:
http://xdel.ru/downloads/transition-stuck/cephstatus.txt
osd dump:
http://xdel.ru/downloads/transition-stuck/cephosd.txt
query for a single pg in
Hi Sam,
Incomplete usually means the pgs do not have any complete copies. Did
you previously have more osds?
No. But could OSDs quitting after hitting assert(0 == "we got a bad
state machine event"), or interaction with kernel 3.14 clients, have caused
the incomplete copies?
How can I
Hi,
There is a small bug in the Fedora package for ceph-0.87. Two days ago,
Boris Ranto built the first 0.87 package, for Fedora 22 (rawhide) [1].
[1] http://koji.fedoraproject.org/koji/buildinfo?buildID=589731
This build was a success, so I took that package and built it for Fedora
20 (which is
Hi Haomai,
Thanks for your presentation this afternoon! Would be great if you
could please share your slides and perhaps go into some more detail
about your modelling of copysets in crush.
--
Cheers,
~Blairo
Hi there,
I have this situation where I'm using the same Ceph cluster (with
radosgw) for two different environments, QUAL and PRE-PRODUCTION.
I need different users for each environment, but I need to create the
same buckets, with the same name; I understand there is no way to have
2 buckets
Hi Sam,
'ceph pg pgid query'.
Thanks.
Looks like ceph is looking for an osd.20 which no longer exists:
"probing_osds": [1, 7, 15, 16],
"down_osds_we_would_probe": [20],
So perhaps during my
On Thu, Oct 30, 2014 at 8:13 AM, Cristian Falcas
cristi.fal...@gmail.com wrote:
Hello,
I have a one-node ceph installation, and when trying to import an
image using qemu it works fine for some time; after that the osd
process starts using ~100% of CPU and the number of op/s increases and
The incomplete pgs are not processing requests. That's where the
blocked requests are coming from. You can query the pg state using
'ceph pg pgid query'. Full osds can also block requests.
-Sam
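A sketch of the commands Sam is pointing at (the pgid here is purely illustrative):

ceph pg 2.3f query      # peering/recovery detail for one placement group
ceph health detail      # lists stuck/incomplete pgs and full or near-full OSDs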
On Wed, Nov 5, 2014 at 7:24 AM, Chad Seys cws...@physics.wisc.edu wrote:
Hi Sam,
Incomplete
You could setup dedicated zones for each environment, and not
replicate between them.
Each zone would have its own URL, but you would be able to re-use
usernames and bucket names. If different URLs are a problem, you
might be able to get around that in the load balancer or the web
servers. I
On Wed, Nov 5, 2014 at 7:24 AM, Chad Seys cws...@physics.wisc.edu wrote:
Hi Sam,
Incomplete usually means the pgs do not have any complete copies. Did
you previously have more osds?
No. But could OSDs quitting after hitting assert(0 == "we got a bad
state machine event"), or
Are you referring to manual roll out of ceph osd crush tunables optimal (
http://lists.ceph.com/pipermail/ceph-users-ceph.com/2014-July/041499.html) ?
Can you include a copy of ceph osd tree and your crushmap?
Were the OSDs flapping when you did those queries? pg 6.19 says [0,1] is
acting, but
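For the osd tree and crushmap requested above, the usual way to capture them is roughly as follows (output file names are illustrative):

ceph osd tree > osd-tree.txt
ceph osd getcrushmap -o crushmap.bin
crushtool -d crushmap.bin -o crushmap.txt   # decompile to readable text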
You linked to both RBD and CephFS topics. Looks like RBD is covered well
in this thread, but you'll need something else if you want to back up CephFS
(or RadosGW).
CephFS is a normal POSIX filesystem, so normal backup tools work on
it, although that can be complicated if it grows large.
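For example, an ordinary tool pointed at a CephFS mount works the way it would on any filesystem (a sketch; the mount and backup paths are illustrative):

# plain rsync from a CephFS mount to some other storage
rsync -a --delete /mnt/cephfs/ /backup/cephfs/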
Hi,
It was an image import in openstack juno using glance.
It looks like it was the fault of glance after all, as a rados import
was fast (aka normal).
I should open a bug against the juno release, as icehouse imports the image normally.
It was an empty pool, no snapshots, no files.
Best regards,
Hi Craig,
We are indeed using both Ceph FS and radosgw. I know I can use tools to
crawl over all the files and copy them someplace else, but I wanted to see if
there was a better Ceph-recommended way to back up the pools themselves,
which I could then re-import.
Chris Armstrong, Head of Services
Yes, that's the right guess - the crushmap is placing both OSDs on a single
host, although the daemon on the second host was able to act as up/in when
the crushmap placed it on a different host. It looks like a weird placement
miscalculation during the update.
No. For most people, that's not really practical. Where would you export
a 1 PB pool? :-)
CephFS has some nice features that could make a traditional filesystem
backup much more efficient. It rolls the size and last-modified attributes
up the directory tree. If the backup client understood
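The rolled-up attributes mentioned above are exposed as virtual xattrs on CephFS directories, which is what a smarter backup client could use to skip unchanged subtrees. A sketch, assuming a CephFS mount that exposes the ceph.dir.* attributes (path illustrative):

getfattr -n ceph.dir.rbytes /mnt/cephfs/projects   # total bytes in the whole subtree
getfattr -n ceph.dir.rctime /mnt/cephfs/projects   # most recent change anywhere below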
Thanks Craig! I'll write up some docs/scripts for backing up radosgw and
Ceph FS and share back with the community.
Chris Armstrong, Head of Services
OpDemand / Deis.io
GitHub: https://github.com/deis/deis -- Docs: http://docs.deis.io/
On Wed, Nov 5, 2014 at 11:14 AM, Craig Lewis
Hello everyone,
I am attempting to set up two clusters for object storage disaster
recovery. I have two physically separate sites, so using 1 big cluster isn’t an
option. I’m attempting to follow the guide at:
http://ceph.com/docs/v0.80.5/radosgw/federated-config/
One region two zones is the standard setup, so that should be fine.
Is metadata (users and buckets) being replicated, but not data (objects)?
Let's go through a quick checklist:
- Verify that you enabled log_meta and log_data in the region.json for
the master zone
- Verify that
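For the first checklist item, one way to check is to dump the region from the master zone's gateway and look at the per-zone flags (a sketch; the keys shown are the ones the federated setup docs use):

radosgw-admin region get > region.json
grep -E '"log_(meta|data)"' region.json
# each zone entry in region.json should contain:
#   "log_meta": "true",
#   "log_data": "true",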
Morning all ..
I have a simple 3-node, 2-OSD cluster serving VM images (Proxmox). The
two OSDs are on the two VM hosts. Size is set to 2 for replication on both
OSDs. SSD journals.
- if the Ceph Client (VM guest over RBD) is accessing data that is stored on
the local OSD, will it
Hello -
My ceph cluster eventually needs two rados gateway nodes interfacing
with OpenStack haproxy. I have been successful in bringing up one of them. What
are the steps for an additional rados gateway node to be included in the cluster? Any
help is greatly appreciated.
Thanks much.
Sounds like you needed osd 20. You can mark osd 20 lost.
-Sam
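A sketch of that step; be aware it tells the cluster to give up on any data that only existed on osd.20:

ceph osd lost 20 --yes-i-really-mean-it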
On Wed, Nov 5, 2014 at 9:41 AM, Gregory Farnum g...@gregs42.com wrote:
On Wed, Nov 5, 2014 at 7:24 AM, Chad Seys cws...@physics.wisc.edu wrote:
Hi Sam,
Incomplete usually means the pgs do not have any complete copies. Did
you
Hello,
I am running ceph firefly with the cluster name cephprod and trying to
create a lun with the following options:
tgtadm --lld iscsi --mode logicalunit --op new --tid 1 --lun 1
--backing-store iscsi-spin/test-dr --bstype rbd
--bsopts=conf=/etc/ceph/cephprod.conf
but it fails with the error.
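Independent of tgt, one way to sanity-check that the non-default cluster name is usable at all is the rbd CLI; --cluster and -c are standard options, and the pool/image names below are taken from the post:

rbd --cluster cephprod ls iscsi-spin
rbd --cluster cephprod info iscsi-spin/test-dr
# or point at the conf file directly:
rbd -c /etc/ceph/cephprod.conf info iscsi-spin/test-dr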
Hi List,
I recently concluded a study on the $subject:
http://pushpeshsharma.blogspot.in/2014/11/openstack-swift-vs-ceph-rgw-read.html
I am looking forward to reviews and feedback to improve
the Ceph RGW numbers.
--
-Pushpesh
On 11/06/2014 06:37 AM, Gagandeep Arora wrote:
Hello,
I am running ceph firefly with the cluster name cephprod and trying to
create a lun with the following options:
tgtadm --lld iscsi --mode logicalunit --op new --tid 1 --lun 1
--backing-store iscsi-spin/test-dr --bstype rbd
On 11/05/2014 11:03 PM, Lindsay Mathieson wrote:
Morning all ..
I have a simple 3-node, 2-OSD cluster serving VM images (Proxmox). The
two OSDs are on the two VM hosts. Size is set to 2 for replication on both
OSDs. SSD journals.
- if the Ceph Client (VM guest over RBD) is