Is there a way to see how much data is allocated, as opposed to just
what was used? For example, this 20 GB image is only taking up 8 GB.
I'd like to see a df with the full allocation of images.
root@ceph1:~# rbd --image vm-101-disk-1 info
rbd image 'vm-101-disk-1':
size 20480 MB in 5120 objects
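There's no built-in df for this on clusters of that vintage, but a common workaround is to sum the allocated extents that rbd diff reports (pool name assumed to be rbd; adjust as needed):

rbd diff rbd/vm-101-disk-1 | awk '{ SUM += $2 } END { print SUM/1024/1024 " MB allocated" }'

Looping that over rbd ls gives a rough full-allocation report per image.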
Hi guys,
I've been using Ceph for a long time now, since bobtail. I always upgraded every few
weeks/months to the latest stable
release. Of course I also removed some OSDs and added new ones. Now during the
last few upgrades (I just upgraded from
0.80.6 to 0.80.8) I noticed that old OSDs take much
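A quick way to compare old and new OSDs directly (hypothetical OSD ids here, one old and one fresh) is the built-in bench command, which writes 1 GB to the OSD's store and reports throughput:

ceph tell osd.0 bench
ceph tell osd.12 bench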
I am wondering how the value of journal_align_min_size affects
journal padding. Is there any document describing the on-disk layout of the
journal?
Thanks for the help!
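As far as I understand it, journal entries whose payload is at least journal_align_min_size get padded so the data lands page-aligned in the journal, allowing it to be written without an extra copy; smaller entries are simply appended. A sketch of the setting in ceph.conf (64 KB is, I believe, the default, shown explicitly only for illustration):

[osd]
# entries at or above this size are aligned (padded) in the journal
journal align min size = 65536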
Hi all,
I've only been using Ceph for a few months now and currently have a small
cluster (3 nodes, 18 OSDs). I get decent performance given the
configuration.
My question is: should I have a larger pipe on the client/public network or on
the Ceph cluster (private) network? I can only have
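One data point for that decision: with replication, every client write arriving on the public network is forwarded (size - 1) more times over the cluster network, plus recovery and backfill traffic, so the cluster network usually deserves the bigger pipe. The split itself is just two ceph.conf settings (example subnets, adjust to your own):

[global]
public network = 192.168.1.0/24
cluster network = 10.0.0.0/24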
On 27/02/2015, at 17.20, Yehuda Sadeh-Weinraub yeh...@redhat.com wrote:
I'd look at two things first. One is the '{fqdn}' string, which I'm not sure
whether that's the actual string that you have, or whether you just replaced
it for the sake of anonymity. The second is the port number, which
Does deleting/reformatting the old osds improve the performance?
On Fri, Feb 27, 2015 at 6:02 AM, Corin Langosch
corin.lango...@netskin.com wrote:
Hi guys,
I've been using Ceph for a long time now, since bobtail. I always upgraded every
few weeks/months to the latest stable
release. Of course I
That's the old way of defining pools. The new way involves defining a zone
and placement targets for that zone. Then you can have different default
placement targets for different users.
Any URLs/pointers to better understand such matters?
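For reference, with the pre-Jewel tooling the region and zone JSON can be dumped, edited and re-injected; the pool names below are the defaults and may differ on your setup:

radosgw-admin region get > region.json   # lists placement_targets and default_placement
radosgw-admin zone get > zone.json
# zone.json maps each target to concrete pools, e.g.:
#   "placement_pools": [
#     { "key": "default-placement",
#       "val": { "index_pool": ".rgw.buckets.index",
#                "data_pool": ".rgw.buckets" } } ]
radosgw-admin zone set < zone.json

A user's default_placement field (visible via radosgw-admin metadata get user:<uid>) then decides which target that user's new buckets land in.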
Do you have any special config in your ceph.conf?
Sorry forgot to send to the list...
Begin forwarded message:
From: Steffen W Sørensen ste...@me.com
Subject: Re: [ceph-users] RadosGW S3ResponseError: 405 Method Not Allowed
Date: 27 Feb 2015 18:29:51 CET
To: Yehuda Sadeh-Weinraub yeh...@redhat.com
It seems that your request did find
- Original Message -
From: Steffen W Sørensen ste...@me.com
To: Yehuda Sadeh-Weinraub yeh...@redhat.com
Cc: ceph-users@lists.ceph.com
Sent: Friday, February 27, 2015 9:39:46 AM
Subject: Re: [ceph-users] RadosGW S3ResponseError: 405 Method Not Allowed
On 27/02/2015, at 17.20, Yehuda
rgw enable apis = s3
Commenting this out makes it work :)
[root@rgw tests3]# ./lsbuckets.py
[root@rgw tests3]# ./lsbuckets.py
my-new-bucket 2015-02-27T17:49:04.000Z
[root@rgw tests3]#
...
2015-02-27 18:49:22.601578 7f48f2bdd700 20 rgw_create_bucket returned ret=-17
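(ret=-17 is EEXIST: the bucket already existed on the second run, which is harmless.) The lsbuckets.py itself isn't shown; presumably it is along the lines of the boto example in the Ceph docs, roughly (hypothetical credentials and gateway host):

import boto
import boto.s3.connection

conn = boto.connect_s3(
    aws_access_key_id='ACCESS_KEY',
    aws_secret_access_key='SECRET_KEY',
    host='rgw.example.com',
    is_secure=False,
    calling_format=boto.s3.connection.OrdinaryCallingFormat(),
)
# prints one line per bucket: name and creation date
for bucket in conn.get_all_buckets():
    print '{0}\t{1}'.format(bucket.name, bucket.creation_date)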
Hi,
Newbie to RadosGW+Ceph, but learning...
Got a running Ceph cluster working with rbd+CephFS clients. Now I'm trying to
verify the RadosGW S3 API, but I seem to have an issue with RadosGW access.
I get this error (haven't found anything while searching so far...):
S3ResponseError: 405 Method Not Allowed
Hi everyone,
I always have a bit of trouble wrapping my head around how libvirt seems
to ignore ceph.conf options while qemu/kvm does not, so I thought I'd
ask. Maybe Josh, Wido or someone else can clarify the following.
http://ceph.com/docs/master/rbd/qemu-rbd/ says:
Important: If you set
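The gist of the code quoted further down the thread: qemu's own cache= flag decides, overriding ceph.conf. A typical invocation (pool/image name illustrative):

qemu-system-x86_64 -drive format=raw,file=rbd:rbd/vm-101-disk-1,cache=writeback ...

cache=none passes BDRV_O_NOCACHE and forces rbd_cache off; any other cache mode forces it on, regardless of what ceph.conf says.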
On 02/27/2015 11:37 AM, Blair Bethwaite wrote:
Sorry if this is actually documented somewhere,
It is. :)
but is it possible to
create and use multiple filesystems on the same data and metadata
pools? I'm guessing yes, but it requires multiple MDSs?
Nope. Every fs needs one data and one
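That is, each filesystem needs its own pools, roughly like this (pool names and pg counts illustrative; and at this point only a single filesystem per cluster is supported anyway):

ceph osd pool create cephfs_metadata 128
ceph osd pool create cephfs_data 128
ceph fs new cephfs cephfs_metadata cephfs_data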
2015-02-27 20:56 GMT+08:00 Alexandre DERUMIER aderum...@odiso.com:
Hi,
from qemu rbd.c
if (flags & BDRV_O_NOCACHE) {
    rados_conf_set(s->cluster, "rbd_cache", "false");
} else {
    rados_conf_set(s->cluster, "rbd_cache", "true");
}
and
block.c
int
Hi,
from qemu rbd.c
if (flags & BDRV_O_NOCACHE) {
    rados_conf_set(s->cluster, "rbd_cache", "false");
} else {
    rados_conf_set(s->cluster, "rbd_cache", "true");
}
and
block.c
int bdrv_parse_cache_flags(const char *mode, int *flags)
{
    *flags &= ~BDRV_O_CACHE_MASK;
    if
On 02/27/2015 01:56 PM, Alexandre DERUMIER wrote:
Hi,
from qemu rbd.c
if (flags & BDRV_O_NOCACHE) {
    rados_conf_set(s->cluster, "rbd_cache", "false");
} else {
    rados_conf_set(s->cluster, "rbd_cache", "true");
}
and
block.c
int bdrv_parse_cache_flags(const char
On 27/02/2015, at 18.51, Steffen W Sørensen ste...@me.com wrote:
rgw enable apis = s3
Commenting this out makes it work :)
Thanks for helping on this initial issue!
[root@rgw tests3]# ./lsbuckets.py
[root@rgw tests3]# ./lsbuckets.py
my-new-bucket 2015-02-27T17:49:04.000Z
Can anyone help me, please?
Attached is the MDS log with debug = 20.
Thanks,
Att.
---
Daniel Takatori Ohara.
System Administrator - Lab. of Bioinformatics
Molecular Oncology Center
Instituto Sírio-Libanês de Ensino e Pesquisa
Hospital Sírio-Libanês
Phone: +55 11 3155-0200 (extension 1927)
R:
Also sending to the devel list to see if they have some insight.
On Wed, Feb 25, 2015 at 3:01 PM, Robert LeBlanc rob...@leblancnet.us wrote:
I tried finding an answer to this on Google, but couldn't find one.
Since btrfs can run the journal in parallel with the data write, does it
make sense to have the
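For context, the filestore chooses the journal mode per backing filesystem: btrfs gets parallel journaling by default, while xfs/ext4 get write-ahead. Forcing it explicitly would look like this (normally unnecessary; shown only for illustration):

[osd]
# btrfs default: journal and data write proceed in parallel
filestore journal parallel = true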
Hi all,
we use an EC pool with a small cache tier in front of it for our
archive data (4 × 16 TB VM disks).
The EC pool has k=3,m=2 because we started with 5 nodes and want to
migrate to a new EC pool with k=5,m=2. Therefore we migrate one VM disk
(16 TB) from the Ceph cluster to an FC RAID with the
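Creating the target pool would look roughly like this (profile and pool names illustrative; k=5,m=2 needs at least 7 hosts with a host failure domain, and pg_num should be sized for your cluster):

ceph osd erasure-code-profile set ec-5-2 k=5 m=2 ruleset-failure-domain=host
ceph osd pool create ec52-archiv 1024 1024 erasure ec-5-2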
Hi
Seems there's a minor flaw in the CentOS/RHEL init script:
line 91 reads:
daemon --user="$user" "$RADOSGW -n $name"
should IMHO be:
daemon --user="$user" "$RADOSGW" -n $name
to avoid the complaint from dirname at /etc/rc.d/init.d/functions:__pids_var_run line 151
:)
/Steffen
- Original Message -
From: Steffen W Sørensen ste...@me.com
To: ceph-users@lists.ceph.com
Sent: Friday, February 27, 2015 6:40:01 AM
Subject: [ceph-users] RadosGW S3ResponseError: 405 Method Not Allowed
Hi,
Newbie to RadosGW+Ceph, but learning...
Got a running Ceph Cluster
I'd guess so, but that's not what I want to do ;)
On 27.02.2015 at 18:43, Robert LeBlanc wrote:
Does deleting/reformatting the old osds improve the performance?
On Fri, Feb 27, 2015 at 6:02 AM, Corin Langosch
corin.lango...@netskin.com wrote:
Hi guys,
I've been using Ceph for a long time now,
On 27/02/2015, at 17.04, Udo Lembke ulem...@polarzone.de wrote:
ceph health detail
HEALTH_WARN pool ssd-archiv has too few pgs
Slightly different: I had an issue with my Ceph cluster underneath a PVE cluster
yesterday.
I had two Ceph pools for RBD virtual disks, vm_images (boot HDD images) +
On 27/02/2015, at 19.02, Steffen W Sørensen ste...@me.com wrote:
Into which pool does such user data (buckets and objects) get stored, and is it
possible to direct user data into a dedicated pool?
[root@rgw ~]# rados df
pool name category KB objects clones
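In a default setup, bucket contents go to .rgw.buckets (with index entries in .rgw.buckets.index). One way to confirm where a given bucket lives (bucket name taken from the earlier test):

radosgw-admin bucket stats --bucket=my-new-bucket

The output includes the bucket's pool; steering users to a dedicated pool is then the placement-target configuration sketched earlier in the thread.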
The online Ceph Developer Summit is next week, and there is a session
proposed for discussing ongoing Ceph and Docker integration efforts:
https://wiki.ceph.com/Planning/Blueprints/Infernalis/Continue_Ceph%2F%2FDocker_integration_work
Right now there is mostly a catalog of existing
This is the first release candidate for Hammer, and includes all of
the features that will be present in the final release. We welcome
and encourage any and all testing in non-production clusters to identify
any problems with functionality, stability, or performance before the
final Hammer
On 02/27/2015 02:46 PM, Mark Wu wrote:
2015-02-27 20:56 GMT+08:00 Alexandre DERUMIER aderum...@odiso.com:
Hi,
from qemu rbd.c
if (flags & BDRV_O_NOCACHE) {
    rados_conf_set(s->cluster, "rbd_cache", "false");
} else {
That's interesting, it seems to be alternating between two lines, but only one
thread this time? I'm guessing the 62738 is the osdmap epoch, which is far behind
where it should be? osd.0 and osd.3 are on 63675, if I'm understanding that
correctly.
2015-02-27 08:18:48.724645 7f2fbd1e8700 20 osd.11
Sorry if this is actually documented somewhere, but is it possible to
create and use multiple filesystems on the same data and metadata
pools? I'm guessing yes, but it requires multiple MDSs?
--
Cheers,
~Blairo
A little further logging:
2015-02-27 10:27:15.745585 7fe8e3f2f700 20 osd.11 62839 update_osd_stat
osd_stat(1305 GB used, 1431 GB avail, 2789 GB total, peers []/[] op hist
[])
2015-02-27 10:27:15.745619 7fe8e3f2f700 5 osd.11 62839 heartbeat:
osd_stat(1305 GB used, 1431 GB avail, 2789 GB total,
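One way to see how far an OSD's map lags is the admin socket on the OSD's host (osd id taken from the log above):

ceph daemon osd.11 status

The output includes oldest_map and newest_map epochs, which can be compared against the current epoch from ceph osd stat on a monitor.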
Hi Stéphane,
I think I got it.
I purged my complete cluster, set up the new one like the old one, and got
exactly the same problem again.
Then I ran ceph osd crush tunables optimal, which added the option
chooseleaf_vary_r 1 to the crushmap.
After that everything works fine.
Try it at your
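If switching the whole tunables profile moves too much data at once, the single tunable can also be set by hand, roughly:

ceph osd getcrushmap -o crush.bin
crushtool -d crush.bin -o crush.txt
# add near the top of crush.txt: tunable chooseleaf_vary_r 1
crushtool -c crush.txt -o crush.new
ceph osd setcrushmap -i crush.new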