Re: [ceph-users] "ceph -s" shows no osds

2018-01-03 Thread Hüseyin Atatür YILDIRIM
Hello Sergey, I issued the mgr create command and it fails with: ceph-deploy mgr create mon01 usage: ceph-deploy [-h] [-v | -q] [--version] [--username USERNAME] [--overwrite-conf] [--cluster NAME] [--ceph-conf CEPH_CONF] COMMAND ... ceph-deploy: error:

Re: [ceph-users] ceph luminous - SSD partitions disappeared

2018-01-03 Thread Linh Vu
Just want to point out as well that the first thing I did when noticing this bug was to add the `ceph` user to the group `disk`, thus giving it write permission to the devices. However, this did not actually work (haven't checked in 12.2.2 yet), and I suspect that something in the ceph code was

Re: [ceph-users] Increasing PG number

2018-01-03 Thread David Turner
That script was mine; we were creating the PGs in chunks of 256 at a time with nobackfill and norecover set until we had added 4k PGs. We used the script because the amount of peering caused by adding thousands of PGs at a time was causing problems for client I/O. We did that 4 times (backfilling
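A minimal sketch of that approach (the pool name, step size and target below are assumptions, not the original script):

  #!/bin/bash
  # Grow pg_num/pgp_num in steps while backfill and recovery are paused.
  POOL=rbd        # assumption: pool to grow
  TARGET=4096     # assumption: final pg_num
  STEP=256

  ceph osd set nobackfill
  ceph osd set norecover

  current=$(ceph osd pool get "$POOL" pg_num | awk '{print $2}')
  while [ "$current" -lt "$TARGET" ]; do
      next=$(( current + STEP > TARGET ? TARGET : current + STEP ))
      ceph osd pool set "$POOL" pg_num "$next"
      ceph osd pool set "$POOL" pgp_num "$next"
      # wait until the new PGs have finished creating/peering before the next step
      while ceph health 2>/dev/null | grep -Eq 'creating|peering'; do sleep 10; done
      current=$next
  done

  ceph osd unset nobackfill
  ceph osd unset norecover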

Re: [ceph-users] iSCSI over RBD

2018-01-03 Thread Joshua Chen
I had the same problem before; mine is CentOS, and when I created /iscsi/create iqn_bla-bla it said "Local LIO instance already has LIO configured with a target - unable to continue". The solution finally turned out to be to turn off the target service: systemctl stop target; systemctl disable target
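As shell commands (the ceph-iscsi service names in the last step may vary by package; treat them as an assumption):

  systemctl stop target
  systemctl disable target
  # then restart the ceph-iscsi daemons so gwcli can claim the local LIO instance
  systemctl restart rbd-target-gw rbd-target-api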

Re: [ceph-users] ceph luminous - SSD partitions disappeared

2018-01-03 Thread Linh Vu
Seen this issue when I first created our Luminous cluster. I use a custom systemd service to chown the DB and WAL partitions before ceph osd services get started. The script in /usr/local/sbin just does the chowning. ceph-nvme.service: # This is a workaround to chown the rocksdb and wal
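A minimal sketch of such a workaround (the device names and unit ordering are assumptions, not the poster's actual files):

  # /etc/systemd/system/ceph-nvme.service (sketch)
  [Unit]
  Description=chown bluestore rocksdb/wal partitions before the OSDs start
  After=local-fs.target
  Before=ceph-osd.target

  [Service]
  Type=oneshot
  RemainAfterExit=yes
  ExecStart=/usr/local/sbin/ceph-chown-nvme.sh

  [Install]
  WantedBy=multi-user.target

  # /usr/local/sbin/ceph-chown-nvme.sh (sketch)
  #!/bin/sh
  # assumption: the DB/WAL partitions are /dev/nvme0n1p1..p4
  chown ceph:ceph /dev/nvme0n1p[1-4]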

Re: [ceph-users] ceph luminous - SSD partitions disappeared

2018-01-03 Thread Sergey Malinin
To make device ownership persist over reboots, you can set up udev rules. The article you referenced seems to have nothing to do with bluestore. When you zapped /dev/sda, you zapped the bluestore metadata stored on the db partition, so the newly created partitions, if they were created apart from
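A sketch of such a rule (the device and partition numbers are assumptions; adjust to the partitions actually holding block.db/block.wal):

  # /etc/udev/rules.d/99-ceph-db-wal.rules (sketch)
  KERNEL=="sda[12]", SUBSYSTEM=="block", OWNER="ceph", GROUP="ceph", MODE="0660"

  # apply without rebooting
  udevadm control --reload
  udevadm trigger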

Re: [ceph-users] Increasing PG number

2018-01-03 Thread Christian Wuerdig
A while back there was a thread on the ML where someone posted a bash script to slowly increase the number of PGs, in steps of 256 AFAIR; the script would monitor the cluster activity and, once all data shuffling had finished, do another round until the target was hit. That was on filestore

Re: [ceph-users] Questions about pg num setting

2018-01-03 Thread Christian Wuerdig
Well, there is a setting for the minimum number of pgs per OSD (mon pg warn min per osd, see http://docs.ceph.com/docs/master/rados/configuration/pool-pg-config-ref/) and there will be a HEALTH_WARN state if you have too few. As far as I know having not enough PGs can cause trouble for CRUSH
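For reference, the setting as a ceph.conf sketch (the value shown is illustrative; check the documented default for your release):

  [mon]
  mon pg warn min per osd = 30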

Re: [ceph-users] ceph luminous - performance issue

2018-01-03 Thread ceph . novice
Hi Steven, interesting... I'm quite curious after your post now. I've migrated our prod. CEPH cluster to 12.2.2 and Bluestore just today and haven't heard back anything "bad" from the applications/users so far. Performance tests on our test cluster were good before, but we use S3/RGW only

Re: [ceph-users] iSCSI over RBD

2018-01-03 Thread Mike Christie
On 12/25/2017 03:13 PM, Joshua Chen wrote: > Hello folks, > I am trying to share my ceph rbd images through iscsi protocol. > > I am trying iscsi-gateway > http://docs.ceph.com/docs/master/rbd/iscsi-overview/ > > > now > > systemctl start rbd-target-api > is working and I could run gwcli >

[ceph-users] finding and manually recovering objects in bluestore

2018-01-03 Thread Brady Deetz
In filestore (XFS), you'd find files representing objects using traditional bash commands like find. What tools do I have at my disposal for recovering data in bluestore? ___ ceph-users mailing list ceph-users@lists.ceph.com
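One option, hedged, is ceph-objectstore-tool, which can list and export objects straight from a stopped bluestore OSD. A sketch, using OSD 12 and a PG id purely as examples:

  # the tool needs exclusive access, so stop the OSD first
  systemctl stop ceph-osd@12

  # list objects, optionally limited to one PG
  ceph-objectstore-tool --data-path /var/lib/ceph/osd/ceph-12 --op list
  ceph-objectstore-tool --data-path /var/lib/ceph/osd/ceph-12 --pgid 1.cb7 --op list

  # export a whole PG for safe keeping or for import into another OSD
  ceph-objectstore-tool --data-path /var/lib/ceph/osd/ceph-12 --pgid 1.cb7 \
      --op export --file /root/1.cb7.export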

[ceph-users] Determine cephfs paths and rados objects affected by incomplete pg

2018-01-03 Thread Brady Deetz
I would like to track down what objects are affected by an incomplete pg and in the case of cephfs map those objects to file paths. At the moment, the best I've come up with for mapping objects to a pg is very very slow: pool="pool" incomplete="1.cb7" for object in `rados -p ${pool} ls`; do
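For the CephFS part: object names in a CephFS data pool are "<inode in hex>.<block number>", so once the affected object names are known, the hex prefix can be mapped back to a path with find -inum. A hedged sketch (the mount point and object name are assumptions):

  obj="10000001234.00000000"        # example object name from the data pool
  ino_hex=${obj%%.*}                # strip the block suffix -> 10000001234
  ino_dec=$((16#$ino_hex))          # convert the hex inode number to decimal
  find /mnt/cephfs -inum "$ino_dec" # print the file path(s) using that inode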

Re: [ceph-users] ceph luminous - SSD partitions disappeared

2018-01-03 Thread Steven Vacaroaia
They were not. After I changed it manually I was still unable to start the service. Furthermore, a reboot screwed up permissions again: ls -al /dev/sda* brw-rw---- 1 root disk 8, 0 Jan 3 11:10 /dev/sda brw-rw---- 1 root disk 8, 1 Jan 3 11:10 /dev/sda1 brw-rw---- 1 root disk 8, 2 Jan 3 11:10

Re: [ceph-users] How to evict a client in rbd

2018-01-03 Thread Karun Josy
It happens randomly. Karun Josy On Wed, Jan 3, 2018 at 7:07 AM, Jason Dillaman wrote: > I tried to reproduce this for over an hour today using the specified > versions w/o any success. Is this something that you can repeat > on-demand or was this a one-time occurrence? > >

[ceph-users] PGs stuck in "active+undersized+degraded+remapped+backfill_wait", recovery speed is extremely slow

2018-01-03 Thread ignaqui de la fila
Hello all, I have a ceph Luminous setup with filestore and bluestore OSDs. This cluster was deployed initially as Hammer, then I upgraded it to Jewel and eventually to Luminous. It's heterogeneous; we have SSDs, SAS 15K and 7.2K HDDs in it (see crush map attached). Earlier I converted 7.2K HDD from

Re: [ceph-users] ceph luminous - SSD partitions disappeared

2018-01-03 Thread Sergey Malinin
Are actual devices (not only udev links) owned by user “ceph”? From: ceph-users on behalf of Steven Vacaroaia Sent: Wednesday, January 3, 2018 6:19:45 PM To: ceph-users Subject: [ceph-users] ceph luminous -

Re: [ceph-users] ceph luminous - performance issue

2018-01-03 Thread Steven Vacaroaia
Thanks for your willingness to help. DELL R620, 1 CPU, 8 cores, 64 GB RAM; the cluster network is using 2 bonded 10 GbE NICs (mode=4), MTU=9000. SSD drives are enterprise grade - 400 GB Toshiba PX04SHB040. HDD drives are 10k RPM, 600 GB Toshiba AL13SEB600. Steven On 3 January 2018 at 09:41,

[ceph-users] ceph luminous - SSD partitions disappeared

2018-01-03 Thread Steven Vacaroaia
Hi, After a reboot, all the partitions created on the SSD drive disappeared. They were used by bluestore DB and WAL, so the OSDs are down. The following error messages are in /var/log/messages: Jan 3 09:54:12 osd01 ceph-osd: 2018-01-03 09:54:12.992218 7f4b52b9ed00 -1

Re: [ceph-users] Query regarding min_size.

2018-01-03 Thread David Turner
min_size will also block reads. Just to add a +1 to what has been said, a write operation will always wait to ack until all OSDs for a PG have acked the write; min_size has absolutely no effect on this. min_size is evaluated BEFORE the write or read is handled by any OSDs. If there is not the
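For reference, min_size is a per-pool setting and can be inspected or changed at any time (the pool name here is an assumption):

  ceph osd pool get rbd min_size
  ceph osd pool set rbd min_size 2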

Re: [ceph-users] "ceph -s" shows no osds

2018-01-03 Thread Hüseyin Atatür YILDIRIM
Latest version. Oh yes, I omitted the manager daemon setup. Let me check.. Thank you.. From: Sergey Malinin [mailto:h...@newmail.com] Sent: Wednesday, January 3, 2018 5:56 PM To: Hüseyin Atatür YILDIRIM ; ceph-users@lists.ceph.com Subject: Re: [ceph-users] "ceph -s"

Re: [ceph-users] Query regarding min_size.

2018-01-03 Thread Ronny Aasen
On 03. jan. 2018 14:51, James Poole wrote: Hi all, Whilst on a training course recently I was told that 'min_size' had an effect on client write performance, in that it's the required number of copies before ceph reports back to the client that an object has been written, therefore setting a

Re: [ceph-users] "ceph -s" shows no osds

2018-01-03 Thread Sergey Malinin
What version are you using? Luminous needs mgr daemons running. From: ceph-users on behalf of Hüseyin Atatür YILDIRIM Sent: Wednesday, January 3, 2018 5:15:30 PM To: ceph-users@lists.ceph.com

Re: [ceph-users] ceph luminous - performance issue

2018-01-03 Thread Brady Deetz
Can you provide more detail regarding the infrastructure backing this environment? What hard drive, ssd, and processor are you using? Also, what is providing networking? I'm seeing 4k blocksize tests here. Latency is going to destroy you. On Jan 3, 2018 8:11 AM, "Steven Vacaroaia"

[ceph-users] "ceph -s" shows no osds

2018-01-03 Thread Hüseyin Atatür YILDIRIM
Hello, I am trying to set up a Ceph cluster on Ubuntu 16.04. I've set up 1 monitor/OSD host (hostname mon01) and 2 OSD hosts (osd01 and osd02). At one stage, I issued "ceph-deploy osd create mon01:sdb1 osd01:sdb1 osd02:sdb1" and it ran successfully. But when I issued the below from the admin host: ssh

[ceph-users] ceph luminous - performance issue

2018-01-03 Thread Steven Vacaroaia
Hi, I am doing a PoC with 3 DELL R620s and 12 OSDs, 3 SSD drives (one on each server), bluestore. I configured the OSDs using the following (/dev/sda is my SSD drive): ceph-disk prepare --zap-disk --cluster ceph --bluestore /dev/sde --block.wal /dev/sda --block.db /dev/sda Unfortunately both fio
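For anyone reproducing this, a hedged example of a 4k random-write fio job using fio's rbd engine (pool/image names and runtime are assumptions, not the poster's exact job):

  fio --name=rbd-4k-randwrite \
      --ioengine=rbd --clientname=admin --pool=rbd --rbdname=fio-test \
      --rw=randwrite --bs=4k --iodepth=32 --numjobs=1 \
      --direct=1 --runtime=60 --time_based --group_reporting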

[ceph-users] Query regarding min_size.

2018-01-03 Thread James Poole
Hi all, Whilst on a training course recently I was told that 'min_size' had an effect on client write performance, in that it's the required number of copies before ceph reports back to the client that an object has been written, therefore setting a 'min_size' of 0 would only require a write to be

Re: [ceph-users] question on rbd resize

2018-01-03 Thread Richard Hesketh
No, most filesystems can be expanded pretty trivially (shrinking is a more complex operation but usually also doable). Assuming the likely case of an ext2/3/4 filesystem, the command "resize2fs /dev/rbd0" should resize the FS to cover the available space in the block device. Rich On 03/01/18
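The full flow, using the sizes from this thread (the device path is an assumption; for XFS the equivalent last step would be xfs_growfs on the mount point):

  rbd resize --size 10240 rbd/test   # grow the image to 10240 MB
  rbd map rbd/test                   # if not already mapped, e.g. /dev/rbd0
  resize2fs /dev/rbd0                # grow an ext2/3/4 filesystem to fill the device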

Re: [ceph-users] question on rbd resize

2018-01-03 Thread 13605702...@163.com
Hi Jason, the data won't be lost if I resize the filesystem in the image? Thanks 13605702...@163.com From: Jason Dillaman Date: 2018-01-03 20:57 To: 13605702...@163.com CC: ceph-users Subject: Re: [ceph-users] question on rbd resize You need to resize the filesystem within the RBD block

Re: [ceph-users] question on rbd resize

2018-01-03 Thread Jason Dillaman
You need to resize the filesystem within the RBD block device. On Wed, Jan 3, 2018 at 7:37 AM, 13605702...@163.com <13605702...@163.com> wrote: > hi > > a rbd image is out of space (old size is 1GB), so i resize it to 10GB > > # rbd info rbd/test > rbd image 'test': > size 10240 MB in 2560

Re: [ceph-users] formatting bytes and object counts in ceph status ouput

2018-01-03 Thread Willem Jan Withagen
On 3-1-2018 00:44, Dan Mick wrote: > On 01/02/2018 08:54 AM, John Spray wrote: >> On Tue, Jan 2, 2018 at 10:43 AM, Jan Fajerski wrote: >>> Hi lists, >>> Currently the ceph status output formats all numbers with binary unit >>> prefixes, i.e. 1MB equals 1048576 bytes and an

[ceph-users] question on rbd resize

2018-01-03 Thread 13605702...@163.com
Hi, an rbd image was out of space (old size is 1GB), so I resized it to 10GB # rbd info rbd/test rbd image 'test': size 10240 MB in 2560 objects order 22 (4096 kB objects) block_name_prefix: rbd_data.1169238e1f29 format: 2 features: layering flags: and then I remapped and remounted the image on the

Re: [ceph-users] using s3cmd to put object into cluster with version?

2018-01-03 Thread Martin Emrich
Hi! I use the aws CLI tool, like this: aws --endpoint-url=http://your-rgw:7480 s3api put-bucket-versioning --bucket yourbucket --versioning-configuration Status=Enabled I also set a lifecycle configuration to expire older versions, e.g.: aws --endpoint-url=http://your-rgw:7480 s3api
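A sketch of the lifecycle part (bucket name and retention period are assumptions; whether RGW honors lifecycle rules depends on the Ceph release):

  # lifecycle.json: expire noncurrent object versions after 30 days
  {
    "Rules": [
      {
        "ID": "expire-old-versions",
        "Status": "Enabled",
        "Filter": { "Prefix": "" },
        "NoncurrentVersionExpiration": { "NoncurrentDays": 30 }
      }
    ]
  }

  aws --endpoint-url=http://your-rgw:7480 s3api put-bucket-lifecycle-configuration \
      --bucket yourbucket --lifecycle-configuration file://lifecycle.json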

Re: [ceph-users] Increasing PG number

2018-01-03 Thread tom.byrne
Last summer we increased an EC 8+3 pool from 1024 to 2048 PGs on our ~1500 OSD (Kraken) cluster. This pool contained ~2 petabytes of data at the time. We did a fair amount of testing on a throwaway pool on the same cluster beforehand, starting with small increases (16/32/64). The main

Re: [ceph-users] Cache tiering on Erasure coded pools

2018-01-03 Thread Mario Giammarco
Nobody explains why, so I will tell you from direct experience: the cache tier has a block size of several megabytes. So if you ask for one byte that is not in cache, some megabytes are read from disk and, if the cache is full, some other megabytes are written from the cache to the EC pool. Il giorno gio 28

[ceph-users] using s3cmd to put object into cluster with version?

2018-01-03 Thread 13605702...@163.com
Hi, I'm using s3cmd (version 1.6.1) to put objects into a ceph cluster (jewel 10.2.10), and when I put a file with the same name, the older one is overwritten. I know that rgw supports bucket object versioning. But how can I enable it when using s3cmd? Thanks 13605702...@163.com

Re: [ceph-users] Questions about pg num setting

2018-01-03 Thread Janne Johansson
In some common cases (when you have lots of objects per pg) ceph will warn about it. 2018-01-03 11:10 GMT+01:00 Marc Roos : > > > Is there a disadvantage to just always start pg_num and pgp_num with > something low like 8, and then later increase it when necessary? > >

Re: [ceph-users] Questions about pg num setting

2018-01-03 Thread Marc Roos
Is there a disadvantage to just always starting pg_num and pgp_num with something low like 8, and then later increasing them when necessary? The question is then how to identify when that is necessary. -Original Message- From: Christian Wuerdig [mailto:christian.wuer...@gmail.com] Sent: dinsdag 2
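Increasing later is straightforward for a replicated pool, e.g. (pool name illustrative):

  ceph osd pool create testpool 8 8        # start with 8 pg_num / pgp_num
  ceph osd pool set testpool pg_num 16     # existing PGs are split
  ceph osd pool set testpool pgp_num 16    # data is then rebalanced onto the new PGs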

Re: [ceph-users] Ceph scrub logs: _scan_snaps no head for $object?

2018-01-03 Thread Stefan Kooman
Quoting Sage Weil (s...@newdream.net): > Hi Stefan, Mehmet, > > Are these clusters that were upgraded from prior versions, or fresh > luminous installs? Fresh luminous install... The cluster was installed with 12.2.0, and later upgraded to 12.2.1 and 12.2.2. > This message indicates that there