I assume you're talking about Option Two: MULTI-SITE OBJECT STORAGE
WITH FEDERATED GATEWAYS, from Inktank's
http://info.inktank.com/multisite_options_with_inktank_ceph_enterprise
There are still some options. Each region has a master zone and one (or
more) replica zones. You can only write to the
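For context, replication between the zones in that setup is driven by the
radosgw-agent tool. Going from the docs of that era, its sync configuration
looks roughly like this (a sketch; all keys, endpoints, and paths below are
placeholders):

  # /etc/ceph/radosgw-agent/default.conf (placeholder path)
  src_access_key: <source-zone-system-user-access-key>
  src_secret_key: <source-zone-system-user-secret-key>
  destination: http://replica-zone.example.com:80
  dest_access_key: <replica-zone-system-user-access-key>
  dest_secret_key: <replica-zone-system-user-secret-key>
  log_file: /var/log/radosgw/radosgw-sync.log

and it is run with something like:

  radosgw-agent -c /etc/ceph/radosgw-agent/default.conf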
Can someone recommend some testing I can do to further investigate why this
issue with slow disk writes in the VM OS is occurring?
It seems the issue, detailed below, is perhaps related to the VM OS running on
RADOS images in Ceph.
Issue:
I have a handful (like 10) of VMs running that,
Correction:
When I wrote: "Here I provide the test results of two VMs that are running on
the same Ceph host, using disk images from the same Ceph pool, and were cloned
from the same RADOS snapshot."
I really meant: "Here I provide the test results of two VMs that are running on
the same Ceph host,
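One way to narrow this down, a sketch assuming you can reach the images from
a Ceph node (the pool, image, and file names below are placeholders), is to
run the same sequential write test inside a guest and directly against its
RBD image:

  # Inside the guest, with O_DIRECT to bypass the guest page cache:
  fio --name=seqwrite --rw=write --bs=4M --size=1G --direct=1 --filename=/var/tmp/fio.test

  # On a Ceph node, against the image itself:
  rbd bench-write mypool/myimage --io-size 4194304 --io-total 1073741824

If the rbd bench is fast but the in-guest run is slow, that points at the
virtualization layer (cache mode, virtio vs. IDE) rather than at RADOS.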
The short answer is no. The longer answer is "it depends." The most
concise discussion I've seen is Inktank's multi-site options whitepaper:
http://info.inktank.com/multisite_options_with_inktank_ceph_enterprise
That white paper only addresses RBD backups (using snapshots) and
RadosGW backups
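For the RBD snapshot approach it describes, the workflow is roughly this (a
sketch; the pool, image, and snapshot names are placeholders):

  rbd snap create mypool/myimage@backup1
  rbd export-diff mypool/myimage@backup1 /backups/myimage.backup1

  # Subsequent backups can be incremental against the previous snapshot:
  rbd snap create mypool/myimage@backup2
  rbd export-diff --from-snap backup1 mypool/myimage@backup2 /backups/myimage.backup2

The diffs can later be replayed onto an image elsewhere with rbd import-diff.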
Is there any way to cancel a scrub on a PG?
I have an OSD that's recovering, and there's a single PG left waiting:
2014-04-02 13:15:39.868994 mon.0 [INF] pgmap v5322756: 2592 pgs: 2589
active+clean, 1 active+recovery_wait, 2 active+clean+scrubbing+deep;
15066 GB data, 30527 GB used, 29061 GB
Thanks!
I knew about noscrub, but I didn't realize that the flapping would
cancel a scrub in progress.
So the scrub doesn't appear to be the reason it wasn't recovering.
After a flap, it goes into:
2014-04-02 14:11:09.776810 mon.0 [INF] pgmap v5323181: 2592 pgs: 2591
active+clean, 1
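For reference, scrubbing can also be disabled cluster-wide while recovery
catches up; as observed above, this doesn't abort a scrub already in
progress, it only prevents new ones from being scheduled:

  ceph osd set noscrub
  ceph osd set nodeep-scrub

  # and once recovery is done:
  ceph osd unset noscrub
  ceph osd unset nodeep-scrub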
Hi,
I have a small 8TB testing cluster. During testing I used 94G.
But I have since removed the pools and images from Ceph, so I shouldn't be
using any space, yet the 94G of usage remains. How can I reclaim the old
used space?
Also, this:-
ceph@ceph-admin:~$ rbd rm
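Two things worth checking here (a suggestion, not a diagnosis): whether the
space is still attributed to a pool at all, and whether the OSDs simply
haven't finished deleting the objects yet, since deletion happens
asynchronously in the background:

  ceph df      # raw usage plus per-pool breakdown
  rados df     # per-pool object counts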
I'm seeing one OSD spamming its log with
2014-04-02 16:49:21.547339 7f5cc6c5d700 1 heartbeat_map is_healthy
'OSD::op_tp thread 0x7f5cc3456700' had timed out after 15
It starts about 30 seconds after the OSD daemon is started. It
continues until
2014-04-02 16:48:57.526925 7f0e5a683700 1
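If it helps, one way to see what that op thread is blocked on (assuming the
admin socket is at its default path) is to dump the in-flight ops on the
affected OSD:

  ceph --admin-daemon /var/run/ceph/ceph-osd.<id>.asok dump_ops_in_flight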
Hi,
what are the options to consistently back up and restore
data out of a Ceph cluster?
- RBDs can be snapshotted.
- Data on RBDs used inside VMs can be backed up using tools from the guest.
- CephFS data can be backed up using rsync or similar tools.
What about object data in other pools?
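For plain RADOS pools I'm not aware of a built-in backup tool; a crude
sketch (the pool name and target directory are placeholders, there is no
cross-object consistency, and object names containing '/' or whitespace
would need escaping):

  for obj in $(rados -p mypool ls); do
      rados -p mypool get "$obj" "/backups/mypool/$obj"
  done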
Hi again Ilya,
No, no snapshots in this case. It's a brand new RBD that I've created.
Cheers. Tom.
On 01/04/14 16:08, Ilya Dryomov wrote:
On Tue, Apr 1, 2014 at 6:55 PM, Tom t...@t0mb.net wrote:
Thanks for the reply.
Ceph is version 0.73-1precise, and the kernel release is
Hi Robert,
Thanks for raising this question; backup and restore options have always been
interesting to discuss. I too have a related question for Inktank.
- Is there any ongoing work to support backing up a Ceph cluster with the
enterprise *proprietary* backup solutions available today?
I integrated Ceph + OpenStack following this document:
https://ceph.com/docs/master/rbd/rbd-openstack/
I can upload images to Glance on the Ceph cluster, but I cannot create any
volumes with Cinder.
The error messages are the same as the ones at this URL.
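If it helps, the Cinder-side settings from that document look like this in
cinder.conf (the values below are the guide's placeholders, not yours); a
wrong rbd_user or rbd_secret_uuid is a common reason Glance works while
Cinder volume creation fails, since only Cinder/libvirt needs the secret:

  volume_driver = cinder.volume.drivers.rbd.RBDDriver
  rbd_pool = volumes
  rbd_ceph_conf = /etc/ceph/ceph.conf
  rbd_user = cinder
  rbd_secret_uuid = <uuid-of-the-libvirt-secret>
  glance_api_version = 2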
Hi Yehuda,
I tried your patch and it seems fine,
except that you might need some special handling for the already-corrupt uploads,
as trying to delete them sends radosgw into an endless loop with high CPU usage:
2014-04-02 11:03:15.045627 7fbf157d2700 0
RGWObjManifest::operator++(): result:
- Message from Gregory Farnum g...@inktank.com -
Date: Tue, 1 Apr 2014 09:03:17 -0700
From: Gregory Farnum g...@inktank.com
Subject: Re: [ceph-users] ceph 0.78 mon and mds crashing (bus error)
To: Yan, Zheng uker...@gmail.com
Cc: Kenneth Waegeman
It's been a while, but I think you need to use the long form
client_mountpoint config option here instead. If you search the list
archives it'll probably turn up; this is basically the only reason we
ever discuss -r. ;)
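For example (a sketch; the subdirectory and mountpoint are placeholders), in
ceph.conf:

  [client]
      client mountpoint = /some/subdir

or, since any config option can also be passed on the command line:

  ceph-fuse --client_mountpoint=/some/subdir /mnt/ceph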
Software Engineer #42 @ http://inktank.com | http://ceph.com
On Wed, Apr 2,
Hi Gregory,
(I'm a colleague of Kenneth.)
> 1) How big and what shape the filesystem is. Do you have some
> extremely large directory that the MDS keeps trying to load and then
> dump?
Is there any way to extract this from the MDS without having to start it? As it
was an rsync operation, I can try to locate possible candidates on the
source.
Thanks for the response Greg.
Unfortunately, I appear to be missing something. If I use my cephfs key
with these perms:
client.cephfs
key: redacted
caps: [mds] allow rwx
caps: [mon] allow r
caps: [osd] allow rwx pool=data
This is what happens when I mount:
# ceph-fuse -k
Hrm, I don't remember. Let me know which permutation works and we can
dig into it.
-Greg
Software Engineer #42 @ http://inktank.com | http://ceph.com
On Wed, Apr 2, 2014 at 9:00 AM, Travis Rhoden trho...@gmail.com wrote:
Thanks for the response Greg.
Unfortunately, I appear to be missing
Ah, I figured it out. My original key worked, but I needed to use the --id
option with ceph-fuse to tell it to use the cephfs user rather than the
admin user. Tailing the log on my monitor pointed out that it was logging
in with client.admin, but providing the key for client.cephfs.
So, final
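For anyone searching later, the working invocation was presumably along
these lines (the keyring path and mountpoint are assumptions):

  ceph-fuse --id cephfs -k /etc/ceph/ceph.client.cephfs.keyring /mnt/cephfs

With --id cephfs, ceph-fuse authenticates as client.cephfs instead of the
default client.admin, matching the supplied keyring.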