Re: [ceph-users] ceph Cluster attempt to access beyond end of device

2017-08-17 Thread Hauke Homburg
On 15.08.2017 at 16:34, ZHOU Yuan wrote: > Hi Hauke, it's possibly the XFS issue discussed in the previous thread; I also saw this issue in some JBOD setups running RHEL 7.3. Sincerely, Yuan > On Tue, Aug 15, 2017 at 7:38 PM, Hauke Homburg

Re: [ceph-users] Ceph Delete PG because ceph pg force_create_pg doesn't help

2017-08-17 Thread Hauke Homburg
On 17.08.2017 at 22:35, Hauke Homburg wrote: > On 16.08.2017 at 13:40, Hauke Homburg wrote: >> Hello, how can I delete a PG completely from a Ceph server? I think I have deleted all of its data manually on the server, but a ceph pg query still shows the PG. A ceph pg force_create_pg doesn't create the PG.
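A minimal sketch of the commands under discussion, assuming a Jewel-era cluster and a placeholder PG id of 1.2f; whether force_create_pg helps depends on whether the OSDs the PG maps to still report it:

    # Check what the cluster still knows about the PG (placeholder id 1.2f)
    ceph pg 1.2f query
    # See which OSDs the PG currently maps to
    ceph pg map 1.2f
    # Ask the monitors to recreate the PG; it stays in "creating" if the
    # mapped OSDs never report it back
    ceph pg force_create_pg 1.2f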

Re: [ceph-users] How to distribute data

2017-08-17 Thread David Turner
Do you mean a lot of snapshots, or creating a lot of clones from a snapshot? I can attest to the pain of creating a lot of snapshots of RBDs in Ceph. I'm assuming you mean that you will have a template RBD with a version snapshot that you clone each time you need to let someone log in. Is

Re: [ceph-users] How to distribute data

2017-08-17 Thread Christian Balzer
Hello, On Fri, 18 Aug 2017 03:31:56 +0200 Oscar Segarra wrote: > Hi Christian, thanks a lot for helping... > Have you read http://docs.ceph.com/docs/master/rbd/rbd-openstack/ ? So just from the perspective of qcow2, you seem to be doomed. > --> Sorry, I was talking about RAW +

Re: [ceph-users] Optimise Setup with Bluestore

2017-08-17 Thread Christian Balzer
On Fri, 18 Aug 2017 00:09:48 +0200 Mehmet wrote: > *Resent... this time to the list.* Hey David, thank you for the response! My use case is actually only RBD for KVM images, mostly running LAMP systems on Ubuntu or CentOS. All images (RBDs) are created with "proxmox", where the

Re: [ceph-users] How to distribute data

2017-08-17 Thread Christian Balzer
Hello, On Thu, 17 Aug 2017 23:56:49 +0200 Oscar Segarra wrote: > Hi David, thanks a lot again for your quick answer... > *The rules in the CRUSH map will always be followed. It is not possible for Ceph to go against that and put data into a root that shouldn't have it.* --> I

Re: [ceph-users] Ceph cluster with SSDs

2017-08-17 Thread Christian Balzer
Hello, On Fri, 18 Aug 2017 00:00:09 +0200 Mehmet wrote: > Which SSDs are used? Are they in production? If so, what is your PG count? What he wrote. Without knowing which apples you're comparing to what oranges, this is pointless. Also, testing with osd bench is the LEAST relevant test you can do, as it

Re: [ceph-users] How to distribute data

2017-08-17 Thread Oscar Segarra
Thanks a lot David. For me it is a bit difficult to run tests because I have to buy the hardware first, and the price differs with or without an SSD cache tier. If anybody has experience with VDI/login storms, it will be really welcome! Note: I have removed the ceph-users list because I

Re: [ceph-users] Fwd: Can't get full partition space

2017-08-17 Thread David Clarke
On 18/08/17 06:10, Maiko de Andrade wrote: > Hi, I want to install Ceph on 3 machines: CEPH, CEPH-OSD-1 and CEPH-OSD-2; each machine has 2 disks in RAID 0 with a total of 930 GiB. CEPH is mon and OSD too, CEPH-OSD-1 is an OSD, CEPH-OSD-2 is an OSD. I have installed and reinstalled Ceph many times. All

Re: [ceph-users] Optimise Setup with Bluestore

2017-08-17 Thread Mehmet
*Resent... this time to the list.* Hey David, thank you for the response! My use case is actually only RBD for KVM images, mostly running LAMP systems on Ubuntu or CentOS. All images (RBDs) are created with "proxmox", where the Ceph defaults are used (currently Jewel, in the near future

Re: [ceph-users] Ceph cluster with SSDs

2017-08-17 Thread Mehmet
Which SSDs are used? Are they in production? If so, what is your PG count? On 17 August 2017 20:04:25 MESZ, M Ranga Swami Reddy wrote: > Hello, I am using a Ceph cluster with HDDs and SSDs and created a separate pool for each. Now, when I ran "ceph osd bench", the HDD's
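For reference, a hedged sketch of how such numbers are usually gathered: osd bench only exercises a single OSD's local data path, while rados bench runs through the whole stack (the pool names and OSD id below are placeholders):

    # Per-OSD backend write test (no network, no replication)
    ceph tell osd.0 bench
    # Pool-level test through the full I/O path, e.g. 30 seconds of writes
    rados bench -p ssd-pool 30 write
    rados bench -p hdd-pool 30 write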

Re: [ceph-users] Jewel (10.2.7) osd suicide timeout while deep-scrub

2017-08-17 Thread Gregory Farnum
On Thu, Aug 17, 2017 at 1:02 PM, Andreas Calminder wrote: > Hi! Thanks for getting back to me! Clients access the cluster through RGW (S3); we had some big buckets containing a lot of small files. Prior to this happening I removed a semi-stale bucket with a

Re: [ceph-users] docs.ceph.com broken since... days?!?

2017-08-17 Thread Gregory Farnum
Yeah, Alfredo said he would look into it. Presumably something happened when he was fixing other broken pieces of the doc links. On Thu, Aug 17, 2017 at 12:44 PM, Jason Dillaman wrote: > It's up for me as well -- but for me the master branch docs are missing the

Re: [ceph-users] Optimise Setup with Bluestore

2017-08-17 Thread Mehmet
Hey Mark :) On 16 August 2017 21:43:34 MESZ, Mark Nelson wrote: > Hi Mehmet! > On 08/16/2017 11:12 AM, Mehmet wrote: >> :( no suggestions or recommendations on this? >> On 14 August 2017 16:50:15 MESZ, Mehmet wrote: >> Hi friends,

Re: [ceph-users] How to distribute data

2017-08-17 Thread David Turner
The rules in the CRUSH map will always be followed. It is not possible for Ceph to go against that and put data into a root that shouldn't have it. The problem with a cache tier is that Ceph is going to need to promote and evict objects all the time (which is not free). A lot of people who want to use
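For context, the kind of cache tier being discussed is wired up roughly like this; a sketch with placeholder pool names, and the sizing and flush/evict tuning that makes or breaks it is only hinted at:

    # Attach an SSD pool as a writeback cache in front of an HDD-backed pool
    ceph osd tier add hdd-pool ssd-cache
    ceph osd tier cache-mode ssd-cache writeback
    ceph osd tier set-overlay hdd-pool ssd-cache
    # Without sizing hints the cache never flushes or evicts sensibly
    ceph osd pool set ssd-cache target_max_bytes 1099511627776   # ~1 TiB, example value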

[ceph-users] Modify user metadata in RGW multi-tenant setup

2017-08-17 Thread Sander van Schie
Hello, I'm trying to modify the metadata of an RGW user in a multi-tenant setup. For a regular user with the default implicit tenant it works fine, using the following to get the metadata: # radosgw-admin metadata get user: I can't, however, figure out how to do the same for a user with an explicit
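If I recall the multi-tenant syntax correctly, the metadata key carries the tenant as a prefix; a hedged example with the placeholder names tenant1 and user1:

    # Default (implicit) tenant
    radosgw-admin metadata get user:user1
    # Explicit tenant: the key becomes "<tenant>$<uid>"
    radosgw-admin metadata get 'user:tenant1$user1'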

Re: [ceph-users] docs.ceph.com broken since... days?!?

2017-08-17 Thread Jason Dillaman
It's up for me as well -- but for me the master branch docs are missing the table of contents on the nav pane on the left. On Thu, Aug 17, 2017 at 3:32 PM, David Turner wrote: > I've been using docs.ceph.com all day and just double checked that it's up. > Make sure that

Re: [ceph-users] Jewel (10.2.7) osd suicide timeout while deep-scrub

2017-08-17 Thread Andreas Calminder
Hi! Thanks for getting back to me! Clients access the cluster through RGW (S3); we had some big buckets containing a lot of small files. Prior to this happening I removed a semi-stale bucket with a rather large index of 2.5 million objects; all but 30 of those objects didn't actually exist, which left the

Re: [ceph-users] docs.ceph.com broken since... days?!?

2017-08-17 Thread David Turner
I've been using docs.ceph.com all day and just double checked that it's up. Make sure that your DNS, router, firewall, etc. isn't blocking it. On Thu, Aug 17, 2017 at 3:28 PM wrote: > ... or at least since yesterday!

Re: [ceph-users] docs.ceph.com broken since... days?!?

2017-08-17 Thread David Turner
If you are on a different version of Ceph, replace that part of the URL accordingly: jewel, hammer, etc. On Thu, Aug 17, 2017 at 3:37 PM Jason Dillaman wrote: > I'm not sure what's going on w/ the master branch docs today, but in the meantime you can use the luminous docs [1]
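For example, assuming the per-release doc paths follow the same pattern as the luminous link quoted below:

    http://docs.ceph.com/docs/jewel/
    http://docs.ceph.com/docs/hammer/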

Re: [ceph-users] docs.ceph.com broken since... days?!?

2017-08-17 Thread Jason Dillaman
I'm not sure what's going on w/ the master branch docs today, but in the meantime you can use the luminous docs [1] until this is sorted out since they should be nearly identical. [1] http://docs.ceph.com/docs/luminous/ On Thu, Aug 17, 2017 at 2:52 PM, wrote: >

Re: [ceph-users] How to distribute data

2017-08-17 Thread David Turner
If I'm understanding you correctly, you want two different roots that pools can be created from: the first entirely SSD storage, the second HDD storage with an SSD cache tier on top of it. https://www.sebastien-han.fr/blog/2014/08/25/ceph-mix-sata-and-ssd-within-the-same-box/
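The linked post covers the details; as a rough sketch of the Jewel-era commands (bucket, rule and pool names are placeholders, and the rule ids must be taken from 'ceph osd crush rule dump'):

    # Separate roots, with the SSD and HDD hosts/OSDs moved under each
    ceph osd crush add-bucket ssd-root root
    ceph osd crush add-bucket hdd-root root
    # One rule per root, then point each pool at its rule
    ceph osd crush rule create-simple ssd-rule ssd-root host
    ceph osd crush rule create-simple hdd-rule hdd-root host
    ceph osd pool set ssd-pool crush_ruleset 1
    ceph osd pool set hdd-pool crush_ruleset 2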

[ceph-users] docs.ceph.com broken since... days?!?

2017-08-17 Thread ceph . novice
... or at least since yesterday!

[ceph-users] How to distribute data

2017-08-17 Thread Oscar Segarra
Hi, sorry guys, these days I'm asking a lot about how to distribute my data. I have two kinds of VMs: 1. Management VMs (Linux) --> fully dedicated SSD disks; 2. Windows VMs --> SSD + HDD (with tiering). I'm working on installing two clusters on the same host but I'm encountering lots of

Re: [ceph-users] RBD only keyring for client

2017-08-17 Thread Jason Dillaman
You should be able to set a CEPH_ARGS='--id rbd' environment variable. On Thu, Aug 17, 2017 at 2:25 PM, David Turner wrote: > I already tested putting name, user, and id in the global section with > client.rbd and rbd as the value (one at a time, testing in between). None
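A minimal sketch of what that looks like on the client; the id "rbd" comes from this thread, the keyring path is an assumption:

    # Make every ceph/rbd invocation in this shell use the client.rbd identity
    export CEPH_ARGS='--id rbd'
    rbd ls
    # Equivalent per-command form
    rbd --id rbd ls

    # /etc/ceph/ceph.conf on the client (assumed keyring location)
    [client.rbd]
        keyring = /etc/ceph/ceph.client.rbd.keyring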

Re: [ceph-users] Ceph Delete PG because ceph pg force_create_pg doesn't help

2017-08-17 Thread Hauke Homburg
On 16.08.2017 at 13:40, Hauke Homburg wrote: > Hello, how can I delete a PG completely from a Ceph server? I think I have deleted all of its data manually on the server, but a ceph pg query still shows the PG. A ceph pg force_create_pg doesn't create the PG. Ceph says it has

Re: [ceph-users] RBD only keyring for client

2017-08-17 Thread David Turner
I already tested putting name, user, and id in the global section with client.rbd and rbd as the value (one at a time, testing in between). None of them had any effect. This is on a 10.2.7 cluster. On Thu, Aug 17, 2017, 2:06 PM Gregory Farnum wrote: > I think you just

Re: [ceph-users] Jewel (10.2.7) osd suicide timeout while deep-scrub

2017-08-17 Thread Gregory Farnum
On Thu, Aug 17, 2017 at 12:14 AM Andreas Calminder <andreas.calmin...@klarna.com> wrote: > Thanks, I've modified the timeout successfully; unfortunately it wasn't enough for the deep-scrub to finish, so I increased osd_op_thread_suicide_timeout even higher (1200s), and the deep-scrub
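For reference, a sketch of how that timeout is usually raised on a running Jewel OSD; osd.12 is a placeholder, the 1200 s value is the one from the thread:

    # Raise the suicide timeout on one OSD without restarting it
    ceph tell osd.12 injectargs '--osd-op-thread-suicide-timeout 1200'
    # Or persistently, in ceph.conf on the OSD host:
    # [osd]
    #     osd op thread suicide timeout = 1200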

[ceph-users] Fwd: Can't get full partition space

2017-08-17 Thread Maiko de Andrade
Hi, I want to install Ceph on 3 machines: CEPH, CEPH-OSD-1 and CEPH-OSD-2; each machine has 2 disks in RAID 0 with a total of 930 GiB. CEPH is mon and OSD too, CEPH-OSD-1 is an OSD, CEPH-OSD-2 is an OSD. I have installed and reinstalled Ceph many times. In every installation Ceph doesn't get the full partition space: it takes only 1 GB. How do I
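Without knowing the exact setup, the usual first checks for an OSD that only shows 1 GB look something like this (the OSD mount path below is the typical default and an assumption here):

    # How much space does Ceph think each OSD has?
    ceph osd df
    ceph df
    # Is the OSD data partition actually mounted with the expected size?
    df -h /var/lib/ceph/osd/ceph-0
    lsblk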

Re: [ceph-users] RBD only keyring for client

2017-08-17 Thread Gregory Farnum
I think you just specify "name = client.rbd" as a config in the global section of the machine's ceph.conf and it will use that automatically. -Greg On Thu, Aug 17, 2017 at 10:34 AM, David Turner wrote: > I created a user/keyring to be able to access RBDs, but I'm trying to

[ceph-users] Ceph cluster with SSDs

2017-08-17 Thread M Ranga Swami Reddy
Hello, I am using a Ceph cluster with HDDs and SSDs and created a separate pool for each. Now, when I ran "ceph osd bench", the HDD OSDs show around 500 MB/s and the SSD OSDs show around 280 MB/s. Ideally, what I expected was that the SSD OSDs would be at least 40% higher than the HDD OSD bench.

[ceph-users] RBD only keyring for client

2017-08-17 Thread David Turner
I created a user/keyring to be able to access RBDs, but I'm trying to find a way to set the config file on the client machine such that I don't need to use -n client.rbd in my commands when I'm on that host. Currently I'm testing rbd-fuse vs rbd-nbd for our use case, but I'm having a hard time