Hello,
So this is probably going to be a noob question. I read the documentation,
but it didn't really cover upgrading to a specific version.
We have a cluster with mixed versions. While I don't want to upgrade to the
latest version of Ceph, I would like to upgrade the OSDs so they are all on
On Fri, Mar 03, 2017 at 10:55:06AM +1100, Blair Bethwaite wrote:
> Does anyone have any recommendations for good tools to perform
> file-system/tree backups and restores to/from a RGW object store (Swift or
> S3 APIs)? Happy to hear about both FOSS and commercial options please.
This isn't Ceph
On Fri, 3 Mar 2017, Mike Lovell wrote:
> I started an upgrade process to go from 0.94.7 to 10.2.5 on a production
> cluster that is using cache tiering. This cluster has 3 monitors, 28 storage
> nodes, and around 370 OSDs. The upgrade of the monitors completed without
> issue. I then upgraded 2 of the
ceph daemonperf mds.ceph-0
-mds-- --mds_server-- ---objecter--- -mds_cache- ---mds_log---
rlat inos caps|hsr  hcs  hcr |writ read actv|recd recy stry purg|segs evts subm|
  0  336k  97k|  0    0    0 |   0    0   20|   0    0 246k    0|  31  27k    0
  0  336k  97k|  0    0    0
Hi all,
Unable to start radosgw after upgrading from Hammer (0.94.10) to Jewel (10.2.5).
Please see the following log. Can someone help, please?
# cat cephprod-client.radosgw.gps-prod-1.log
2017-03-04 10:35:10.459830 7f24316189c0  0 set uid:gid to 167:167 (ceph:ceph)
2017-03-04 10:35:10.459883
Hi Team,
I am installing a new Ceph setup (Jewel), and while activating an OSD
it throws the error below.
I am using a partition-based OSD like /home/osd1, not an entire disk.
An earlier installation a month ago worked fine, but this time I am
getting the error below.
My first thought is that Ceph doesn't have permission to read the radosgw
keyring file, e.g.:
[root@nuc1 ~]# ls -l /etc/ceph/ceph.client.radosgw.keyring
-rw-rw+ 1 root root 73 Feb  8 20:40 /etc/ceph/ceph.client.radosgw.keyring
You could give it read permission or be clever with setfacl, e.g.:
setfacl -m
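Something along these lines (a scratch path is used here for illustration; on
a real node the target would be the actual keyring under /etc/ceph and the
user/group would be ceph):

```shell
# Scratch keyring file standing in for the real one (hypothetical path):
KEYRING=/tmp/ceph.client.radosgw.keyring
touch "$KEYRING"

# Route 1: plain permissions -- owner read/write, group read.
# On a real node you would also chgrp the file to the ceph group.
chmod 640 "$KEYRING"

# Route 2: an ACL granting only the ceph user read access, leaving the
# group bits alone (commented out here since this demo box has no ceph user):
#setfacl -m u:ceph:r "$KEYRING"

ls -l "$KEYRING"
```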
Hi
I ran into that error too. After some research I commented out my custom settings:
#rgw zonegroup root pool = se.root
#rgw zone root pool = se.root
and after that, rgw started successfully. The settings are now placed in the
default pool: .rgw.root
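For reference, the commented-out lines in ceph.conf looked like this (the
section name is illustrative; se.root was my custom pool):

```
[client.radosgw.gateway]
# custom root pools, commented out so rgw falls back to the default .rgw.root:
#rgw zonegroup root pool = se.root
#rgw zone root pool = se.root
```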
>Saturday, 4 March 2017, 6:40 +05:00 from Gagandeep
Permissions are correct, see below:
[root@radosgw1 radosgw]# ls -l /etc/ceph
total 16
-rw-r--r--. 1 ceph ceph 63 Nov 19 2015 cephprod.client.admin.keyring
-rw-r--r--. 1 ceph ceph 122 Nov 24 2015 cephprod.client.radosgw.keyring
-rw-r--r--. 1 ceph ceph 1049 Nov 20 2015 cephprod.conf
Hi Blair,
We are also thinking of using Ceph for 'backup'. At the moment we are
using rsync and hardlinks on a DRBD setup. But I think things could speed
up when using CephFS, because file information comes from the MDS
daemon, so this should save one rsync file lookup, and we expect
Hi All,
I have a production cluster made of 8 nodes, 166 OSDs, and 4 journal SSDs
every 5 OSDs, with replica 2, for a total raw space of 150 TB.
I have a few questions about it:
Is it critical to have replica 2? Why?
Does replica 3 make recovery faster?
Does replica 3 make rebalancing and recovery less
I started an upgrade process to go from 0.94.7 to 10.2.5 on a production
cluster that is using cache tiering. This cluster has 3 monitors, 28
storage nodes, and around 370 OSDs. The upgrade of the monitors completed
without issue. I then upgraded 2 of the storage nodes, and after the
restarts, the
On 17-03-03 12:30, Matteo Dacrema wrote:
Hi All,
I have a production cluster made of 8 nodes, 166 OSDs, and 4 journal SSDs
every 5 OSDs, with replica 2, for a total raw space of 150 TB.
I have a few questions about it:
* Is it critical to have replica 2? Why?
Replica size 3 is highly recommended. I
Hi Henrik and Matteo,
I agree with Henrik: increasing your replication factor won't improve
recovery or read performance on its own. If you are changing from replica 2 to
replica 3, you might need to scale out your cluster to have enough space for
the additional replica, and that would
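The space arithmetic behind that is quick to check (using the 150 TB raw
figure from Matteo's post; the 75 TB target is simply today's replica-2
usable space):

```python
raw_tb = 150  # total raw space quoted in the thread

# usable capacity at each replication factor:
for size in (2, 3):
    usable = raw_tb / size
    print(f"size={size}: ~{usable:.0f} TB usable out of {raw_tb} TB raw")

# raw capacity required to keep the current 75 TB usable after moving to size=3:
needed_raw = (raw_tb / 2) * 3
print(f"size=3 needs ~{needed_raw:.0f} TB raw to keep 75 TB usable")
```

So going from size 2 to size 3 on the same hardware cuts usable space from
roughly 75 TB to 50 TB, which is why a scale-out is usually part of the change.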
Hi Marc,
Whilst I agree CephFS would probably help compared to your present
solution, what I'm looking for is something that can talk to the RadosGW
RESTful object storage APIs, so that the backing storage can be durable and
low-cost, i.e., on an erasure-coded pool. In this case we're looking to
Hi all,
Does anyone run a production cluster with a modified CRUSH map to create two
pools, one belonging to HDDs and one to SSDs?
What's the best method? Modifying the CRUSH map via the ceph CLI or via a
text editor?
Will the modifications to the CRUSH map be persistent across reboots and
maintenance
Hello, all!
I have successfully created a 2-zone cluster (se and se2). But my radosgw
machines are sending many GET /admin/log requests to each other after putting
10k items into the cluster via radosgw. It looks like:
2017-03-03 17:31:17.897872 7f21b9083700 1 civetweb: 0x7f222001f660: 10.30.18.24 - -
Hi, Matteo!
Yes, I'm using a mixed cluster in production, but it's pretty small at the
moment. I made a small step-by-step manual for myself when I did this for the
first time, and have now put it up as a gist:
https://gist.github.com/vheathen/cf2203aeb53e33e3f80c8c64a02263bc#file-manual-txt.
Probably it
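On the CLI-versus-editor question: both routes modify the same CRUSH map
stored by the monitors, so either way the change persists across reboots and
maintenance. With the CLI you use `ceph osd crush ...` commands; with a text
editor you pull the map with `ceph osd getcrushmap -o map.bin`, decompile it
with `crushtool -d map.bin -o map.txt`, edit, recompile with `crushtool -c`,
and inject it with `ceph osd setcrushmap -i`. A hypothetical fragment of an
edited map for an SSD-only pool (bucket/rule names, ids, and weights are
illustrative only):

```
root ssd {
        id -20                  # illustrative bucket id
        alg straw
        hash 0  # rjenkins1
        item node1-ssd weight 1.000
}

rule ssd-rule {
        ruleset 2
        type replicated
        min_size 1
        max_size 10
        step take ssd
        step chooseleaf firstn 0 type host
        step emit
}
```

A pool is then pointed at the rule with `ceph osd pool set <pool>
crush_ruleset 2`. One caveat: with `osd crush update on start = true` (the
default), an OSD can move itself back under its host bucket on restart, so
set the relevant crush location options if you split OSDs out by hand.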
Hi,
You should read this email from Wido den Hollander:
"Hi,
As a Ceph consultant I get numerous calls throughout the year to help people
with getting their broken Ceph clusters back online.
The causes of downtime vary vastly, but one of the biggest causes is that
people use replication 2x. size =