Hi Ansgar,
Thanks for your information.
I have tried 's3fs-fuse' to mount RADOSGW buckets on an Ubuntu client node, and it works.
But I am not sure whether this is the intended technique for accessing RADOSGW buckets via NFS on Jewel.
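For reference, the mount I used looked roughly like this (bucket name, endpoint, and credentials are placeholders; the options are as I understand them from the s3fs-fuse documentation):
$ echo 'ACCESS_KEY:SECRET_KEY' > ~/.passwd-s3fs
$ chmod 600 ~/.passwd-s3fs
$ s3fs mybucket /mnt/rgw -o passwd_file=~/.passwd-s3fs -o url=http://rgw.example.com:7480 -o use_path_request_style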
Best Regards,
WD
-Original Message-
From: ceph-users
Hello:
Are there any documents or examples to explain the configuration of NFS to
access RADOSGW buckets on Jewel?
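The closest I could piece together so far is an nfs-ganesha export using the RGW FSAL, roughly like the sketch below (completely unverified; the user, keys, and gateway instance name are placeholders):
EXPORT {
    Export_ID = 1;
    Path = "/";
    Pseudo = "/rgw";
    Access_Type = RW;
    NFS_Protocols = 4;
    FSAL {
        Name = RGW;
        User_Id = "testuser";
        Access_Key_Id = "ACCESS_KEY";
        Secret_Access_Key = "SECRET_KEY";
    }
}
RGW {
    ceph_conf = "/etc/ceph/ceph.conf";
    name = "client.rgw.gateway";
}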
Thanks a lot.
Best Regards,
WD
Hello:
Are there any documents or examples to explain the configuration of NFS to
access RADOSGW buckets?
Thanks a lot.
Best Regards,
WD
Hello:
Sorry, I forgot to paste the results for image format 1. I still cannot map a format 1 or
format 2 image on the Ubuntu 14.04 client, whose kernel is 3.13.0-85-generic #129-Ubuntu.
##
# rbd create block_data/data03 -s 10G --image-format 1
rbd: image format 1 is deprecated
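If the mapping failure comes from the 3.13 kernel not supporting the newer default image features, one thing I plan to try is a format 2 image restricted to the layering feature (flag syntax assumed from a Jewel-era rbd CLI; the image name is only an example):
# rbd create block_data/data04 -s 10G --image-format 2 --image-feature layering
# rbd map block_data/data04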
Dear Cephers:
I got the same issue under Ubuntu 14.04, even when I try to use image format '1'.
# modinfo rbd
filename:       /lib/modules/3.13.0-85-generic/kernel/drivers/block/rbd.ko
license:        GPL
author:         Jeff Garzik
description:    rados block device
author:
Hi:
Does anyone know why 's3cmd' lists the bucket OK, but getting an object fails for a Ceph
object? For example:
(1) List the bucket
$ s3cmd ls
2015-12-24 02:26 s3://DIR1
(2) List the objects under the bucket 'DIR1'
$ s3cmd ls s3://DIR1
2015-12-25 08:17 3091
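One thing I am going to double-check is the bucket-addressing configuration, since object requests can end up using virtual-host-style bucket URLs that RGW only resolves when wildcard DNS and 'rgw dns name' are set up. A hedged sketch of the settings involved (hostnames are placeholders):
In ~/.s3cfg:
host_base = rgw.example.com:7480
host_bucket = rgw.example.com:7480
In ceph.conf, only if wildcard DNS is available:
rgw dns name = rgw.example.com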
Thanks, Josh, for sharing the idea.
I have tried calculating the used size with 'sudo rbd diff PoolName/ImageName | awk '{ SUM += $2 } END { print SUM/1024/1024 " MB" }''. It takes quite a long time, even 1-2 minutes, so I think it is not suitable for a production environment.
WD
Thanks Wido.
So it seems there is no way to do this under Hammer.
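(For what it is worth, releases newer than Hammer appear to add an 'rbd du' command, e.g. 'rbd du PoolName/ImageName', which should be much faster when the fast-diff image feature is enabled, but I have not verified this myself.)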
WD
-Original Message-
From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of Wido
den Hollander
Sent: Wednesday, January 13, 2016 2:19 PM
To: ceph-users@lists.ceph.com
Subject: Re: [ceph-users] How to
Hi,
Is there any way to check the block device space usage under a specified pool? I need to
know the capacity usage: if a block device is over 80% used, I will send an alert to the user.
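For concreteness, the kind of per-image check I am after would look roughly like this (a sketch only; it sums allocated extents via 'rbd diff', and the pool name, threshold, and the 'size NNNN MB' format of 'rbd info' are assumptions):
#!/bin/sh
# Flag images whose allocated extents exceed 80% of the provisioned size.
POOL=rbd
for IMG in $(sudo rbd ls "$POOL"); do
    SIZE_MB=$(sudo rbd info "$POOL/$IMG" | awk '$1 == "size" {print $2; exit}')
    USED_MB=$(sudo rbd diff "$POOL/$IMG" | awk '{s += $2} END {printf "%d", s/1024/1024}')
    PCT=$((USED_MB * 100 / SIZE_MB))
    [ "$PCT" -ge 80 ] && echo "ALERT: $POOL/$IMG is ${PCT}% used"
done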
Thanks a lot!
Best Regards,
WD
Hi,
You may try the following command
'sudo stop ceph-mon id=ceph2'
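On Ubuntu's upstart you can then confirm it has stopped with 'sudo status ceph-mon id=ceph2' (my understanding of the upstart instance syntax; adjust if your packaging differs).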
WD
-Original Message-
From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of
Yonghua Peng
Sent: Tuesday, November 17, 2015 9:34 AM
To: ceph-users@lists.ceph.com
Subject: [ceph-users] can't stop ceph
Hi,
I am testing federated gateway synchronization with one region and two zones. After several
files had synchronized, I found some error messages in the log file.
They say 'Too many open files' when connecting to the secondary zone. I have
modified the parameters of
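For reference, the limits that appear to be involved are the system nofile limit and the daemon's 'max open files' option; the values below are examples only, not recommendations:
# /etc/security/limits.conf (or a ulimit in the shell that starts the agent)
*    soft    nofile    65536
*    hard    nofile    65536
# ceph.conf, [global] or the gateway's client section
max open files = 65536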
Hi, Craig:
I used 10 VMs for federated gateway testing. There are 5 nodes for us-east,
and the others are for us-west. The two zones are independent.
Before configuring the region and zones, the two zones had the same
'client.radosgw.[zone]' settings in ceph.conf.
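For context, each zone's gateway section in ceph.conf follows the old federated-configuration layout, roughly like this for the east side (host, pool, path, and DNS names are placeholders):
[client.radosgw.us-east-1]
host = ceph-east-1
keyring = /etc/ceph/ceph.client.radosgw.keyring
rgw socket path = /var/run/ceph/client.radosgw.us-east-1.sock
rgw region = us
rgw region root pool = .us.rgw.root
rgw zone = us-east
rgw zone root pool = .us-east.rgw.root
rgw dns name = rgw-east.example.com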
Hi Craig,
I am testing a federated gateway with 1 region and 2 zones, and I found that only the
metadata is replicated; the data is NOT.
According to your checklist, I am sure everything has been checked. Could you
review my configuration scripts? The configuration files are similar to
Hi:
Is there any way or command to find out the creation date/time of a block
snapshot?
Any help would be much appreciated.
Best Regards,
WD
Hi, all:
If the Ceph cluster health status is HEALTH_OK, the execution time of 'sudo
rbd ls rbd' is very short, like the following results.
$ time sudo rbd ls rbd
real    0m0.096s
user    0m0.014s
sys     0m0.028s
But if there are several warnings (e.g. 1 pgs degraded; 6 pgs incomplete; 1650
Hi Wido:
According to your reply, if I add/remove OSDs from the Ceph cluster, I have to
wait until all PG movement has completed.
Then the 'rbd ls' operation may work well.
Is there any way to speed up the PG movement when adding/removing OSDs?
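Would raising the recovery/backfill throttles be the right approach, for example (option names taken from the OSD configuration reference; my understanding is that this makes recovery compete harder with client I/O):
$ ceph tell osd.* injectargs '--osd-max-backfills 4 --osd-recovery-max-active 8'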
Thanks a lot.
Best Regards,
WD
-Original
Hello,
I have been testing NFS over RBD recently. I am trying to build an NFS HA
environment under Ubuntu 14.04 for testing; the package version
information is as follows:
- Ubuntu 14.04 : 3.13.0-32-generic(Ubuntu 14.04.2 LTS)
- ceph : 0.80.9-0ubuntu0.14.04.2
- ceph-common :
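The non-HA layer underneath is just a mapped RBD image exported over kernel NFS, roughly like this (image name, filesystem, and export path are placeholders; the device path may be /dev/rbd0 depending on udev rules):
# rbd map rbd/nfs01
# mkfs.ext4 /dev/rbd/rbd/nfs01
# mkdir -p /export/nfs01 && mount /dev/rbd/rbd/nfs01 /export/nfs01
# echo '/export/nfs01 *(rw,sync,no_subtree_check)' >> /etc/exports
# exportfs -ra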
Dear Eric:
Thanks for your information. The command 'reboot -fn' works well.
I don't know whether anybody else has run into the 'umount stuck' condition like me. If
possible, I would like to find out why the failover process does not work properly
after 30 minutes.
WD
-Original Message-
Hello,
I have been testing NFS over RBD recently. I am trying to build an NFS HA
environment under Ubuntu 14.04 for testing; the package version
information is as follows:
- Ubuntu 14.04 : 3.13.0-32-generic(Ubuntu 14.04.2 LTS)
- ceph : 0.80.9-0ubuntu0.14.04.2
- ceph-common :