Hi,
Is there any way to set permissions so that users cannot create
buckets but can still read them? And how can a prefix be added to a bucket
name when a user creates one, e.g. when user 'aa' creates a bucket called
'testing', the bucket name becomes 'aatesting'?
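A possibly relevant knob for the first part (a sketch only; the semantics of
max-buckets have varied between radosgw releases, so check what a value of 0
means on your version - in some releases it disables bucket creation, in
others it means unlimited):

    radosgw-admin user modify --uid=aa --max-buckets=0

Read access to existing buckets would then be granted via S3 ACLs from the
owning account. For the prefix, I am not aware of a built-in option; it would
have to be enforced by a proxy or application layer in front of radosgw.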
Best regards,
Thanh Tran
Thank you so much! That seems to work immediately.
At the moment I still see 3 PGs in the active+clean+scrubbing state, but that
will hopefully resolve itself over time.
So the way to go with Firefly is to either use at least 3 hosts for
OSDs, or reduce the number of replicas?
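For the second option, the per-pool commands would presumably be something
like this (assuming the default 'rbd' pool name; adjust for your pools):

    ceph osd pool set rbd size 2
    ceph osd pool set rbd min_size 1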
Kind Regards,
Georg
Leen,
Thanks for explaining things. It does make sense now.
Unfortunately, it does look like this technology would not fulfill my
requirements, as I need the ability to perform maintenance without
shutting down VMs.
I will open another topic to discuss possible solutions.
Thanks
Hello guys,
I am currently running a Ceph cluster for running VMs with qemu + rbd. It works
pretty well and provides a good degree of failover. I am able to run
maintenance tasks on the Ceph nodes without interrupting the VMs' IO.
I would like to do the same with VMWare / XenServer hypervisors,
Hi,
at the moment we are using tgt with the RBD backend, compiled from source, on
Ubuntu 12.04 and 14.04 LTS. We have two machines within two IP ranges (e.g.
192.168.1.0/24 and 192.168.2.0/24): one machine in 192.168.1.0/24 and one
machine in 192.168.2.0/24. The config for tgt is the same on both
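For reference, a minimal sketch of what such a targets.conf stanza can look
like with tgt's rbd backing store (the IQN and image name here are made up):

    <target iqn.2014-05.com.example:iscsi-rbd>
        bs-type rbd
        backing-store rbd/image1
    </target>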
Hi,
I want to create a block image using the rbd CLI, following the rule:
stripe-unit * stripe-count = 2^order.
Take for example:
rbd create --pool rbd --size 20480 --order 21 --image-format 2 --stripe-unit
1048576 --stripe-count 2 image1
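(Here 1048576 * 2 = 2097152 = 2^21, so the rule is satisfied and the create
itself succeeds.)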
But when I try to map it to a block device, it raises
Hi,
I've been doing some reading on the Ceph documentation and just wanted to check
whether anyone knows the (approximately) correct number of PGs for Ceph.
What I mean is, let's say I have created one pool with 4096 placement groups.
Now, instead of one pool I want two, so if I were to create 2 pools instead, would
Uwe, thanks for your quick reply.
Do you run the XenServer setup in a production environment, and have you tried
testing some failover scenarios to see if the XenServer guest VMs keep working
during a failover of the storage servers?
Also, how did you set up the XenServer iSCSI? Have you used the
Hello Andrei,
I'm trying to accomplish the same thing with VMWare. So far I'm still doing
lab testing, but we've gotten as far as simulating a production workload.
Forgive the lengthy reply; I happen to be sitting on an airplane.
My existing solution is using NFS servers running in ESXi VMs.
On 05/12/2014 02:20 PM, Pieter Koorts wrote:
Hi,
I've been doing some reading on the Ceph documentation and just wanted to
check whether anyone knows the (approximately) correct number of PGs for Ceph.
What I mean is, let's say I have created one pool with 4096 placement groups.
Now instead of one pool I want two
Hello, last week I upgraded from 0.72.2 to the latest stable Firefly, 0.80,
following the suggested procedure (upgrade monitors, OSDs, MDSs, then
clients, in that order) on my 2 different clusters.
Everything is OK and I have HEALTH_OK on both; the only weird thing is that a
few PGs remain in active+clean+scrubbing.
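For anyone wanting to see which PGs are affected, the standard commands should
show them:

    ceph health detail
    ceph pg dump | grep scrubbing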
Hi,
yes, we use it in production. I can stop/kill tgt on one server and
XenServer fails over to the second one. We enabled multipathing in XenServer. In
our setup we don't have multiple IP ranges, so we scan/log in to the second
target on XenServer startup with iscsiadm in rc.local.
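Essentially something like this (the portal address here is only an example):

    iscsiadm -m discovery -t sendtargets -p 192.168.2.1
    iscsiadm -m node --login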
That's based on
Hi, I observe the same behaviour on a test Ceph cluster (upgrade from Emperor
to Firefly)
cluster 819ea8af-c5e2-4e92-81f5-4348e23ae9e8
health HEALTH_OK
monmap e3: 3 mons at ..., election epoch 12, quorum 0,1,2 0,1,2
osdmap e94: 12 osds: 12 up, 12 in
pgmap v19001: 592
Hi Ceph,
Ceph will be at http://www.solutionslinux.fr/ in Paris next week (20/21 May
2014), booth C36 in the non-profit village. Drop by anytime to
discuss Ceph if you are around :-) We're also having a meetup / lunch on the
20th ( details are at
On 5/12/2014 4:52 AM, Andrei Mikhailovsky wrote:
Leen,
Thanks for explaining things. It does make sense now.
Unfortunately, it does look like this technology would not fulfill my
requirements, as I need the ability to perform maintenance
without shutting down VMs.
I've no idea how
I'll be there!
(Do you know if it'll be possible to buy some Ceph t-shirts?)
- Original Message -
From: Loic Dachary l...@dachary.org
To: ceph-users ceph-users@lists.ceph.com
Sent: Monday, 12 May 2014 16:38:31
Subject: [ceph-users] Ceph booth at http://www.solutionslinux.fr/
Hi Ceph,
On Mon, May 12, 2014 at 03:45:43PM +0200, Uwe Grohnwaldt wrote:
Hi,
yes, we use it in production. I can stop/kill tgt on one server and
XenServer fails over to the second one. We enabled multipathing in XenServer. In
our setup we don't have multiple IP ranges, so we scan/log in to the second target
On Mon, May 12, 2014 at 07:01:46PM +0200, Leen Besselink wrote:
On Mon, May 12, 2014 at 03:45:43PM +0200, Uwe Grohnwaldt wrote:
Hi,
yes, we use it in production. I can stop/kill tgt on one server and
XenServer fails over to the second one. We enabled multipathing in XenServer. In
our
On 5/12/14 06:54, Guang wrote:
Hello cephers,
It has been a while since we started evaluating Ceph, and it turns out Ceph is
a reliable solution for our use case. Thanks for making Ceph what it is today.
We are trying to use Ceph to store user-generated content (UGC), so we put
extra
On Mon, May 12, 2014 at 10:52:33AM +0100, Andrei Mikhailovsky wrote:
Leen,
Thanks for explaining things. It does make sense now.
Unfortunately, it does look like this technology would not fulfill my
requirements, as I need the ability to perform maintenance without
shutting
On Mon, May 12, 2014 at 12:08:24PM -0500, Dimitri Maziuk wrote:
PS. (now that I looked) see e.g.
http://blogs.mindspew-age.com/2012/04/05/adventures-in-high-availability-ha-iscsi-with-drbd-iscsi-and-pacemaker/
Dima
Didn't you say you wanted multiple servers to write to the same LUN?
I
The underlying file system on the RBD needs to be a clustered file system, like
OCFS2, GFS2, etc., and a cluster needs to be created between the two (or more)
iSCSI target servers to manage the clustered file system.
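As a rough sketch of that layout (the device name, image name, and two-node
count are illustrative, and this assumes the o2cb cluster stack is configured):

    rbd map rbd/image1                    # on each target server
    mkfs.ocfs2 -N 2 /dev/rbd0             # once, from one node
    mount -t ocfs2 /dev/rbd0 /mnt/shared  # on each node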
From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of
On 05/12/2014 01:17 PM, McNamara, Bradley wrote:
The underlying file system on the RBD needs to be a clustered file
system, like OCFS2, GFS2, etc., and a cluster needs to be created between
the two (or more) iSCSI target servers to manage the clustered file
system.
Looks like we aren't sure
The formula was designed to be used on a per-pool basis. Having said that,
though, when looking at the number of PGs from a system-wide perspective, one
does not want too many total PGs. So, it's a balancing act, and it has been
suggested that it's better to have slightly more PGs than you
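For a concrete feel for the commonly cited guideline (total PGs ~= OSDs * 100
/ replicas, rounded up to a power of two; a rule of thumb, not gospel): with
100 OSDs and 3 replicas that gives 100 * 100 / 3 ~= 3333, rounded up to 4096,
which would then be divided across all the pools.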
We are using a switch stack of Juniper EX4200 and EX3200. Cisco should work,
too. Another option is failover bonding (but multipathing with different IPs is
better).
Best Regards,
--
Consultant
Dipl.-Inf. Uwe Grohnwaldt
Gutleutstr. 351
60327 Frankfurt a. M.
eMail:
On 5/6/14 18:05, Sage Weil wrote:
* osd: fix bug in journal replay/restart (Sage Weil)
I've been trying to find information about this. Is this #7922
http://tracker.ceph.com/issues/7922?
--
*Craig Lewis*
Senior Systems Engineer
Office +1.714.602.1309
Email cle...@centraldesktop.com
Thanks for the info. I was erring toward fewer pools, but using software that
does not share pools very well seems to put a spanner in the works at this
time. I think we will work on making it more RBD-friendly.
Thanks
Pieter
On 12 May 2014, at 19:53, McNamara, Bradley bradley.mcnam...@seattle.gov
This first Firefly point release fixes a few bugs, the most visible
being a problem that prevents scrub from completing in some cases.
Notable Changes
---
* osd: revert incomplete scrub fix (Samuel Just)
* rgw: fix stripe calculation for manifest objects (Yehuda Sadeh)
* rgw: improve
The kernel rbd client does not support it; you need to use qemu.
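If the image needs to be mappable with the kernel client, creating it with the
default striping (stripe-count 1, stripe-unit equal to the object size) should
work, assuming a kernel recent enough for format 2 at all, e.g.:

    rbd create --pool rbd --size 20480 --order 21 --image-format 2 image1
    rbd map rbd/image1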
On Mon, May 12, 2014 at 8:00 PM, lijian blacker1...@163.com wrote:
Hi,
I want to create a block image using the rbd CLI, following the rule:
stripe-unit * stripe-count = 2^order.
Take for example:
rbd create --pool rbd --size
Hi,
I am still stuck with the above issues. If anyone knows the solution or
reason, please help me sort it out.
On Fri, May 9, 2014 at 12:13 PM, Shanil S xielessha...@gmail.com wrote:
Hi Yehuda,
Also, if we use /admin/metadata/user then it will list only the usernames
and we won't get
The 0.80.1 update has fixed the problem.
Thanks to the Ceph team!
- Original Message -
From: Simon Ironside sirons...@caffetine.org
To: ceph-users@lists.ceph.com
Sent: Monday, 12 May 2014 18:13:32
Subject: Re: [ceph-users] ceph firefly PGs in active+clean+scrubbing state
Hi,
I'm sure I saw on
Hi All,
I'm using Ceph as a storage backend for my OpenStack Havana
environment. I have an issue attaching a volume to an instance. This is
the error http://pastebin.com/raw.php?i=DpmrJtYH that I get when I try
to attach a volume. I already rebuilt the libvirt RPM to have rbd support.
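One quick sanity check that rbd support actually made it into the binaries (a
sketch; the qemu path below is the usual RHEL/CentOS one, adjust as needed):

    qemu-img --help | grep rbd        # rbd should appear under supported formats
    ldd /usr/libexec/qemu-kvm | grep librbd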
Ceph
I see the EL6 build on http://ceph.com/rpm-firefly/el6/x86_64/ but not
on gitbuilder (last build 07MAY). Is 0.80.1 considered a different
branch ref for purposes of gitbuilder?
Jeff
On 05/12/2014 05:31 PM, Sage Weil wrote:
This first Firefly point release fixes a few bugs, the most visible
Hi,
On CentOS, how can we configure SELinux (not iptables) for Ceph?
For example, using the setsebool -P command to change an SELinux rule for
Ceph.
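I am not aware of a dedicated SELinux policy module for Ceph at this point, so
one common approach (a sketch; the module name is arbitrary) is to generate a
local policy from the audit log:

    grep ceph /var/log/audit/audit.log | audit2allow -M cephlocal
    semodule -i cephlocal.pp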
Thanks