[ceph-users] RGW Could not create user

2016-05-30 Thread Khang Nguyễn Nhật
Hi, I'm having problems with Ceph v10.2.1 Jewel when creating a user. My cluster runs Ceph Jewel and includes: 3 OSDs, 2 monitors and 1 RGW. - Here is the list of *cluster pools*: .rgw.root ap-southeast.rgw.control ap-southeast.rgw.data.root ap-southeast.rgw.gc ap-southeast.rgw.users.uid
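[Editor's note: a minimal sketch of the user-creation step discussed in this thread; the uid, display name and email are placeholders, not values from the original post.]

```shell
# Create an RGW user on the admin/RGW node (placeholder identifiers):
radosgw-admin user create \
    --uid=testuser \
    --display-name="Test User" \
    --email=testuser@example.com

# If creation fails, verify the zone configuration and that the RGW
# metadata pools (like those listed above) actually exist:
radosgw-admin zone get
ceph osd pool ls | grep rgw
```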

Re: [ceph-users] centos 7 ceph 9.2.1 rbd image lost

2016-05-30 Thread Ilya Dryomov
On Mon, May 30, 2016 at 10:54 AM, dbgong wrote: > Hi all, > > I created an image on my CentOS 7 client, > then mapped the device, formatted it to ext4, and mounted it at /mnt/ceph-hd, > and I have added many files to /mnt/ceph-hd. > > I also did not set rbd to start on boot. > Then after the

Re: [ceph-users] mount error 5 = Input/output error (kernel driver)

2016-05-30 Thread Ilya Dryomov
On Mon, May 30, 2016 at 4:12 PM, Jens Offenbach wrote: > Hello, > in my OpenStack Mitaka, I have installed the additional service "Manila" with > a CephFS backend. Everything is working. All shares are created successfully: > > manila show 9dd24065-97fb-4bcd-9ad1-ca63d40bf3a8 >

Re: [ceph-users] Fwd: [Ceph-community] Wasting the Storage capacity when using Ceph based On high-end storage systems

2016-05-30 Thread Oliver Dzombic
Hi, E3 CPUs have 4 cores with an HT unit, so 8 logical cores, and they are not multi-CPU. That means you will naturally (and quickly) be limited in the number of OSDs you can run on one. Because no matter how many GHz it has, each OSD process occupies a CPU core for ever. Not at 100%, but still

Re: [ceph-users] Fwd: [Ceph-community] Wasting the Storage capacity when using Ceph based On high-end storage systems

2016-05-30 Thread Christian Balzer
Hello, On Mon, 30 May 2016 09:40:11 +0100 Nick Fisk wrote: > The other option is to scale out rather than scale up. I'm currently > building nodes based on a fast Xeon E3 with 12 Drives in 1U. The MB/CPU > is very attractively priced and the higher clock gives you much lower > write latency if

[ceph-users] CephFS: slow writes over NFS when fs is mounted with kernel driver but fast with Fuse

2016-05-30 Thread David
Hi All, I'm having an issue with slow writes over NFS (v3) when CephFS is mounted with the kernel driver. Writing a single 4K file from the NFS client takes 3 - 4 seconds, however a 4K write (with sync) into the same folder on the server is as fast as you would expect. When mounted with
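[Editor's note: a sketch of the setup described in this thread, for reproducing the comparison; paths, network range and server name are placeholders. NFSv3 sync/COMMIT semantics magnify the per-operation latency of the backing filesystem, which is one common reason a kernel-mounted CephFS export behaves differently from a local write.]

```shell
# /etc/exports on the NFS server re-exporting the kernel-mounted CephFS:
#   /mnt/cephfs  192.168.0.0/24(rw,sync,no_subtree_check,fsid=1)

# On the client, mount with NFSv3 as in the report:
mount -t nfs -o vers=3,proto=tcp server:/mnt/cephfs /mnt/nfs

# Reproduce the slow single 4K file write over NFS:
time dd if=/dev/zero of=/mnt/nfs/test4k bs=4k count=1 conv=fsync

# Compare against a sync write directly on the server:
time dd if=/dev/zero of=/mnt/cephfs/test4k bs=4k count=1 conv=fsync
```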

[ceph-users] ceph-fuse performance about hammer and jewel

2016-05-30 Thread qisy
Hi, after Jewel released a production-ready version of CephFS, I upgraded the old Hammer cluster, but IOPS dropped a lot. I made a test with 3 nodes, each with 8 cores, 16 GB RAM and 1 OSD; the OSD device got 15000 IOPS. I found the ceph-fuse client has better performance on Hammer than on Jewel. fio
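[Editor's note: the fio command line is truncated in the archive; below is a generic 4K random-write fio invocation of the kind used for such comparisons, not the poster's actual job file. The directory, size and runtime are assumptions.]

```shell
# Generic 4K random-write test against a ceph-fuse (or kernel) mount:
fio --name=cephfs-4k-randwrite \
    --directory=/mnt/cephfs \
    --rw=randwrite --bs=4k --size=1G \
    --ioengine=libaio --direct=1 \
    --iodepth=32 --runtime=60 --time_based \
    --group_reporting
```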

[ceph-users] mount error 5 = Input/output error (kernel driver)

2016-05-30 Thread Jens Offenbach
Hello, in my OpenStack Mitaka, I have installed the additional service "Manila" with a CephFS backend. Everything is working. All shares are created successfully: manila show 9dd24065-97fb-4bcd-9ad1-ca63d40bf3a8
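[Editor's note: a sketch of a kernel-client mount of a Manila/CephFS share of the kind discussed here; the monitor address, share path, user name and key are placeholders. Mount error 5 (EIO) from the kernel client often means the MDS is unreachable or the kernel is too old for the cluster's feature set, so checking dmesg is the usual first step.]

```shell
# Mount the share exported by Manila with the kernel CephFS client:
mount -t ceph 192.0.2.1:6789:/volumes/_nogroup/<share-path> /mnt/share \
      -o name=manila-user,secret=<base64-key>

# On "mount error 5 = Input/output error", inspect kernel messages:
dmesg | tail
```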

Re: [ceph-users] Fwd: [Ceph-community] Wasting the Storage capacity when using Ceph based On high-end storage systems

2016-05-30 Thread Oliver Dzombic
Hi Jack, any RAID controller supports JBOD mode. So you won't build a RAID, even though you could, but will leave it to Ceph to build the redundancy in software. Or, if you have high availability needs, you can let the RAID controller build RAIDs at RAID levels where the raw loss of capacity

[ceph-users] centos 7 ceph 9.2.1 rbd image lost

2016-05-30 Thread dbgong
Hi all, I created an image on my CentOS 7 client, then mapped the device, formatted it to ext4, and mounted it at /mnt/ceph-hd, and I have added many files to /mnt/ceph-hd. I also did not set rbd to start on boot. Then after the server rebooted, I can't find the image: no rbd devices in /dev/. modprobe rbd
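[Editor's note: rbd mappings do not survive a reboot by themselves; the image is still in the pool, only the /dev/rbd* device is gone. A sketch of re-mapping by hand and of boot-time mapping via rbdmap; the pool/image name and keyring path are placeholders.]

```shell
# Re-map and re-mount after a reboot:
modprobe rbd
rbd map rbd/myimage
mount /dev/rbd0 /mnt/ceph-hd

# To map automatically at boot, list the image in /etc/ceph/rbdmap
# (shipped with ceph-common) and enable the rbdmap service:
echo "rbd/myimage id=admin,keyring=/etc/ceph/ceph.client.admin.keyring" \
    >> /etc/ceph/rbdmap
systemctl enable rbdmap
```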

Re: [ceph-users] Fwd: [Ceph-community] Wasting the Storage capacity when using Ceph based On high-end storage systems

2016-05-30 Thread Nick Fisk
The other option is to scale out rather than scale up. I'm currently building nodes based on a fast Xeon E3 with 12 Drives in 1U. The MB/CPU is very attractively priced and the higher clock gives you much lower write latency if that is important. The density is slightly lower, but I guess you

Re: [ceph-users] rgw s3website issue

2016-05-30 Thread Gaurav Bafna
Hi Yehuda, what is the difference between the two? Aren't static websites the same as S3, since S3 hosts static websites only? Hi Robin, I am using master only. The document would be great. I think it is only a config issue; with your document, it should get cleared up. How does the
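[Editor's note: unlike the plain S3 API, the s3website API serves a bucket's index and error documents to anonymous HTTP requests under a website hostname. A sketch of the Jewel-era ceph.conf settings for a dedicated static-website RGW instance; the instance name and hostname are placeholders.]

```ini
[client.rgw.web1]
rgw_enable_static_website = true
rgw_enable_apis = s3website
rgw_dns_s3website_name = website.example.com
```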

Re: [ceph-users] Fwd: [Ceph-community] Wasting the Storage capacity when using Ceph based On high-end storage systems

2016-05-30 Thread Jack Makenz
Thanks Christian, and all of the ceph users. Your guidance was very helpful, much appreciated! Regards Jack Makenz On Mon, May 30, 2016 at 11:08 AM, Christian Balzer wrote: > > Hello, > > you may want to read up on the various high-density node threads and > conversations here. > > You

Re: [ceph-users] Fwd: [Ceph-community] Wasting the Storage capacity when using Ceph based On high-end storage systems

2016-05-30 Thread Christian Balzer
Hello, you may want to read up on the various high-density node threads and conversations here. You most certainly do NOT need high-end storage systems to create multi-petabyte storage systems with Ceph. If you were to use these chassis as a basis: