Re: [ceph-users] Web based S3 client

2016-05-02 Thread 张灿
Hi Ben, Actually, if not for CORS issues, we would have implemented Sree as a javascript app fully running in the browser. By design the backend should be as simple as possible (currently less than 150 lines of Python) to ease deployment for new users. So as for now we wouldn’t support LDAP auth but

Re: [ceph-users] Lab Newbie Here: Where do I start?

2016-05-02 Thread Tu Holmes
I would start here. https://www.redhat.com/en/resources/red-hat-ceph-storage-hardware-configuration-guide //Tu

Re: [ceph-users] Web based S3 client

2016-05-02 Thread 张灿
Hi George, The error is because your bucket `consonance` doesn’t have CORS configured. For security reasons, browsers limit cross-domain access by default. If you create a bucket through Sree, it will configure CORS for you, but existing buckets are likely not to have those
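
[Editor's note: for readers hitting the same error on a pre-existing bucket, one way to attach a CORS policy from the command line is sketched below. It assumes a reasonably recent s3cmd (one that has the setcors subcommand) already configured against the RGW endpoint; the wide-open AllowedOrigin is an illustrative placeholder, not a value from the thread, and should be tightened in practice.

    # cors.xml -- a permissive example policy
    <CORSConfiguration>
      <CORSRule>
        <AllowedOrigin>*</AllowedOrigin>
        <AllowedMethod>GET</AllowedMethod>
        <AllowedMethod>PUT</AllowedMethod>
        <AllowedHeader>*</AllowedHeader>
      </CORSRule>
    </CORSConfiguration>

    s3cmd setcors cors.xml s3://consonance
]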

Re: [ceph-users] Lab Newbie Here: Where do I start?

2016-05-02 Thread Christian Balzer
Hello, firstly, as a self-proclaimed newbie, you start by reading. A LOT. Then, when you think you have a good grip on how Ceph works, come here and we shall strive to dissuade you from that notion. ^o^ On Mon, 2 May 2016 15:29:37 -0400 Michael Ferguson wrote: > G'Day All, > > > > I have

Re: [ceph-users] Deploying ceph by hand: a few omissions

2016-05-02 Thread Stuart Longland
On 02/05/16 10:20, Robin H. Johnson wrote: > On Sun, May 01, 2016 at 08:46:36PM +1000, Stuart Longland wrote: >> Hi all, >> >> This evening I was in the process of deploying a ceph cluster by hand. >> I did it by hand because to my knowledge, ceph-deploy doesn't support >> Gentoo, and my cluster

[ceph-users] yum installed jewel doesn't provide systemd scripts

2016-05-02 Thread Zhang Qiang
I installed jewel el7 via yum on CentOS 7.1, but it seems no systemd scripts are available. But I do find there's a folder named 'systemd' in the source, so maybe we forgot to build it into the package?

[ceph-users] performance drop a lot when running fio mix read/write

2016-05-02 Thread min fang
Hi, I ran random fio with rwmixread=70, and found read IOPS is 707 and write is 303 (see the following). These values are lower than the standalone random write and read values: the 4K random write IOPS is 529 and the 4K randread IOPS is 11343. Apart from the rw type being different, the other parameters are all the same. I do
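
[Editor's note: the fio parameters themselves are truncated in this digest; a minimal job file for this kind of mixed test might look like the sketch below. The ioengine, client name, pool and image names are assumptions for illustration, not values from the original post.

    ; 70/30 random read/write job, 4k blocks, via fio's rbd engine
    [global]
    ioengine=rbd
    clientname=admin
    pool=rbd
    rbdname=fio_test
    invalidate=0
    bs=4k
    iodepth=32
    runtime=120
    time_based

    [mixed7030]
    rw=randrw
    rwmixread=70
]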

Re: [ceph-users] performance drop a lot when running fio mix read/write

2016-05-02 Thread Somnath Roy
Yes, reads will be affected a lot in mixed read/write scenarios as Ceph is serializing ops on a PG. The write path is inefficient and that is affecting reads in turn. Hope you are following all the config settings (shards/threads, pg numbers, etc.) already discussed in the community. You may

[ceph-users] can I attach a volume to 2 servers

2016-05-02 Thread yang sheng
Hi, I am using Ceph Infernalis. It works fine with my OpenStack Liberty. I am trying to test nova evacuate. All the VMs' volumes are shared among all compute nodes. However, the instance files (/var/lib/nova/instances) are in each compute node's local storage. Based on Red Hat docs(

Re: [ceph-users] can I attach a volume to 2 servers

2016-05-02 Thread Sean Redmond
Hi, You could set the below to create ephemeral disks as RBD's [libvirt] libvirt_images_type = rbd On Mon, May 2, 2016 at 2:28 PM, yang sheng wrote: > Hi > > I am using ceph infernalis. > > it works fine with my openstack liberty. > > I am trying to test nova evacuate.
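
[Editor's note: on Liberty the relevant nova.conf options live in the [libvirt] section; the libvirt_-prefixed spelling quoted above is the older flag name. A minimal sketch, with the pool name, Ceph user and libvirt secret UUID as placeholders:

    [libvirt]
    images_type = rbd
    images_rbd_pool = vms
    images_rbd_ceph_conf = /etc/ceph/ceph.conf
    rbd_user = cinder
    rbd_secret_uuid = <libvirt-secret-uuid>
]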

[ceph-users] jewel upgrade : MON unable to start

2016-05-02 Thread SCHAER Frederic
Hi, I'm < sort of > following the upgrade instructions on CentOS 7.2. I upgraded 3 OSD nodes without too many issues, even if I would rewrite those upgrade instructions to: #chrony has ID 167 on my systems... this was set at install time! but I use NTP anyway. yum remove chrony sed -i -e

Re: [ceph-users] jewel upgrade : MON unable to start

2016-05-02 Thread Oleksandr Natalenko
Why do you upgrade osds first if it is necessary to upgrade mons before everything else? On May 2, 2016 5:31:43 PM GMT+03:00, SCHAER Frederic wrote: >Hi, > >I'm < sort of > following the upgrade instructions on CentOS 7.2. >I upgraded 3 OSD nodes without too many

Re: [ceph-users] can I attach a volume to 2 servers

2016-05-02 Thread yang sheng
Hi Edward, thanks for your explanation! Yes, you are right. I just came across Sebastien Han's post about using NFS on top of RBD ( http://www.sebastien-han.fr/blog/2012/07/06/nfs-over-rbd/ ). I will try this method. On Mon, May 2, 2016 at 11:14 AM, Edward Huyer wrote: > Mapping a
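
[Editor's note: a minimal sketch of the NFS-over-RBD approach from that post, assuming a kernel RBD client on the NFS server; the pool, image, size and export path are placeholders, and newer image features may need to be disabled before the kernel map succeeds.

    rbd create nfs_share --size 102400 --pool rbd
    rbd map rbd/nfs_share            # maps to e.g. /dev/rbd0
    mkfs.xfs /dev/rbd0
    mkdir -p /srv/nfs_share
    mount /dev/rbd0 /srv/nfs_share
    echo "/srv/nfs_share *(rw,sync,no_root_squash)" >> /etc/exports
    exportfs -ra
]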

Re: [ceph-users] can I attach a volume to 2 servers

2016-05-02 Thread yang sheng
Hi Sean, thanks for your reply. I think Ceph and OpenStack work fine for me; I can attach a bootable volume to a VM. I am right now trying to attach the volumes to physical servers (hypervisor nodes) and share some data among hypervisors (based on the docs, the nova evacuate function requires all

Re: [ceph-users] Can Jewel read Hammer radosgw buckets?

2016-05-02 Thread Ansgar Jazdzewski
hi, my pools are named a bit differently from the default ones: .dev-qa.rgw.gc .rgw.control .dev-qa.users.uid .dev-qa.users.swift .dev.rgw.root .dev-qa.usage .dev-qa.log .dev-qa.rgw.buckets .dev-qa.rgw.buckets.index .dev-qa.rgw.root .dev-qa.users.email .dev-qa.intent-log .dev-qa.rgw.buckets.extra

Re: [ceph-users] can I attach a volume to 2 servers

2016-05-02 Thread Edward Huyer
Mapping a single RBD on multiple servers isn’t going to do what you want unless you’re putting some kind of clustered filesystem on it. Exporting the filesystem via an NFS server will generally be simpler. You’ve already encountered one problem with sharing a block device without a clustered

Re: [ceph-users] Web based S3 client

2016-05-02 Thread George Mihaiescu
Hi Can, I gave it a try and I can see my buckets, but I get an error (see attached) when trying to see the contents of any bucket. The application is pretty simplistic now, and it would be great if support for object and size counts were added. The bucket access type should be displayed

[ceph-users] Maximum MON Network Throughput Requirements

2016-05-02 Thread Brady Deetz
I'm working on finalizing designs for my Ceph deployment. I'm currently leaning toward 40Gbps Ethernet for the interconnect between OSD nodes and to my MDS servers. But I don't really want to run 40 gig to my mon servers unless there is a reason. Would there be an issue with using 1 gig on my monitor

Re: [ceph-users] Ceph Jewel 10.2.0 Build Error - ldap dependency related to -j1 and radosgw enabled

2016-05-02 Thread Dyweni - Ceph-Users
After installing the required dependency 'virtualenv', this error also occurs with -j2. The workaround I found is to include '--without-openldap' when using '--with-radosgw'. On 2016-04-29 15:55, Dyweni - Ceph-Users wrote: Hi, When I compile Ceph Jewel 10.2.0 using 'make -j1' I get the
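
[Editor's note: in configure terms the workaround would look like the line below (a sketch; any other flags from your usual build line would stay as they are).

    ./configure --with-radosgw --without-openldap
]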

[ceph-users] Lab Newbie Here: Where do I start?

2016-05-02 Thread Michael Ferguson
G'Day All, I have two old Promise VTrak E310s JBODs (still with support), each with 4 600GB Seagate SAS HDDs and 8 2TB SATA HDDs, and two old HP DL360s. While I am seeing so many ceph-deploy this and ceph-deploy that, I have not found any help that starts with the hardware. There seem to be lots

Re: [ceph-users] OSD Crashes

2016-05-02 Thread Varada Kari
You can run chown on /var/lib/ceph with -H --dereference, something like: chown ceph:ceph -RH --dereference /var/lib/ceph/ You need to change the ownership of the partition to ceph:ceph; ceph-disk does that in a fresh install. The journal is a symlink in the OSD directory, so we need to traverse the symlink and
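
[Editor's note: a minimal sketch of that sequence on one OSD, assuming a default ceph-disk layout; <id> is a placeholder, and the chown line is the one quoted above. Ownership of the raw journal device node may still need ceph-disk's udev rules to persist across reboots.

    systemctl stop ceph-osd@<id>
    chown ceph:ceph -RH --dereference /var/lib/ceph/
    # the journal symlink points at a raw partition; chown its target too
    chown ceph:ceph $(readlink -f /var/lib/ceph/osd/ceph-<id>/journal)
    systemctl start ceph-osd@<id>
]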

Re: [ceph-users] Deploying ceph by hand: a few omissions

2016-05-02 Thread Henrik Korkuc
On 16-05-02 02:14, Stuart Longland wrote: On 02/05/16 00:32, Henrik Korkuc wrote: Mons generate these bootstrap keys. You can find them in /var/lib/ceph/bootstrap-*/ceph.keyring. On pre-Infernalis they were created automagically (I guess by init). Infernalis and Jewel have
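
[Editor's note: if the bootstrap keyring files are missing on a hand-deployed node, they can be pulled from the cluster by hand; a sketch for the OSD bootstrap key, assuming an admin keyring is available on the node (the other bootstrap keys are analogous).

    mkdir -p /var/lib/ceph/bootstrap-osd
    ceph auth get-or-create client.bootstrap-osd mon 'allow profile bootstrap-osd' \
        -o /var/lib/ceph/bootstrap-osd/ceph.keyring
]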

[ceph-users] Erasure pool performance expectations

2016-05-02 Thread Peter Kerdisle
Hi guys, I am currently testing the performance of RBD using a cache pool and a 4/2 erasure profile pool. I have two SSD cache servers (2 SSDs for journals, 7 SSDs for data) with 2x10Gbit bonded each, and six OSD nodes with a 10Gbit public and 10Gbit cluster network for the erasure pool (10x3TB

Re: [ceph-users] Maximum MON Network Throughput Requirements

2016-05-02 Thread Chris Jones
Mons and RGWs only use the public network, but Mons can have a good deal of traffic. I would not recommend 1Gb, but if looking for lower bandwidth then 10Gb would be good for most. It all depends on the overall size of the cluster. You mentioned 40Gb. If the nodes are high density then 40Gb, but if

Re: [ceph-users] 10.2.0 - mds won't recover, waiting on journal 300

2016-05-02 Thread Russ
Thanks - I got to that conclusion too eventually, after waiting for the recoveries to settle down. Not sure how it happened, but one of the nodes running 6 of the OSDs, after moving to the 4.4.6 kernel, started showing objects that were unfound, even though all copies were valid and other OSDs had

Re: [ceph-users] Maximum MON Network Throughput Requirements

2016-05-02 Thread Brady Deetz
Thanks. Our initial deployment will be 8 OSD nodes containing 24 OSDs each (spinning rust, not SSD). Each node will contain 2 PCIe P3700 NVMe drives for journals. I expect us to grow to a maximum of 15 OSD nodes. I'll just keep 40 gig on everything for the sake of consistency and not risk under-sizing
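
[Editor's note: as a rough back-of-envelope check on the OSD side (illustrative numbers, not from the thread): 24 spinning OSDs at roughly 100-150 MB/s each is on the order of 2.4-3.6 GB/s, i.e. about 19-29 Gbit/s of streaming bandwidth per node, so 40Gb is not unreasonable for the OSD nodes, while monitor traffic is a small fraction of that.]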

Re: [ceph-users] 10.2.0 - mds won't recover, waiting on journal 300

2016-05-02 Thread John Spray
On Sun, May 1, 2016 at 2:34 AM, Russ wrote: > After getting all the OSDs and MONs updated and running ok, I updated the > MDS as usual; rebooted the machine after updating the kernel (we're on > 14.04, but it was running an older 4.x kernel, so took it to 16.04's > version), the

Re: [ceph-users] snaps & consistency group

2016-05-02 Thread Jason Dillaman
There is no current capability to support snapshot consistency groups within RBD; however, support for snapshot consistency groups is currently being developed for the Ceph kraken release. On Sun, May 1, 2016 at 11:04 AM, Yair Magnezi wrote: > Hello Guys . > > I'm a

Re: [ceph-users] can I attach a volume to 2 servers

2016-05-02 Thread Max A. Krasilnikov
Hello! On Mon, May 02, 2016 at 11:25:11AM -0400, forsaks.30 wrote: > Hi Edward > thanks for your explanation! > Yes you are right. > I just came across sebastien han's post, using nfs on top of rbd( > http://www.sebastien-han.fr/blog/2012/07/06/nfs-over-rbd/) > I will try this method. Why

Re: [ceph-users] Change MDS's mode from active to standby

2016-05-02 Thread John Spray
After you've decreased max_mds, use "ceph mds deactivate <rank>" on the ranks you want to get rid of. If you had two active MDSs, you would use that with rank=1. Once you've done that, your second MDS will go into state 'stopping', and after about 30 seconds (use ceph -w to follow progress) it will go
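
[Editor's note: a concrete sketch of that sequence; the filesystem name "cephfs" and the exact max_mds syntax are assumptions and vary by release (older releases used "ceph mds set max_mds 1").

    ceph fs set cephfs max_mds 1    # drop back to a single active MDS
    ceph mds deactivate 1           # rank 1 enters 'stopping'
    ceph -w                         # watch until it stops and the daemon becomes a standby
]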

Re: [ceph-users] jewel upgrade : MON unable to start

2016-05-02 Thread SCHAER Frederic
I believe this is because I did not read the instructions thoroughly enough... this is my first "live upgrade" -Original Message- From: Oleksandr Natalenko [mailto:oleksa...@natalenko.name] Sent: Monday, May 2, 2016 16:39 To: SCHAER Frederic ;