Re: [ceph-users] Slow request on node reboot

2017-08-10 Thread David Turner
I would set min_size back to 2 for general running, but put it down to 1 during planned maintenance. There are a lot of threads on the ML talking about why you shouldn't run with min_size of 1. On Thu, Aug 10, 2017, 11:36 PM Hyun Ha wrote: > Thanks for reply. > > In my
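For reference, a minimal sketch of the workflow described above, assuming the 'volumes' pool from this thread with size 2 (the pool name and the use of noout are assumptions, not quoted from the mail):

    ceph osd set noout                     # keep CRUSH from rebalancing while the node is down
    ceph osd pool set volumes min_size 1   # allow IO to continue on a single replica during maintenance
    # ... reboot / service the node ...
    ceph osd pool set volumes min_size 2   # restore the safer setting once the node is back
    ceph osd unset noout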

Re: [ceph-users] Slow request on node reboot

2017-08-10 Thread Hyun Ha
Thanks for the reply. In my case, it was an issue with the min_size of the pool. # ceph osd pool ls detail pool 5 'volumes' replicated size 2 min_size 2 crush_ruleset 0 object_hash rjenkins pg_num 512 pgp_num 512 last_change 844 flags hashpspool stripe_width 0 removed_snaps [1~23] when replicated

Re: [ceph-users] Questions about cache-tier in 12.1

2017-08-10 Thread David Turner
What error are you seeing? It looks like it worked. The "or already was" is just their way of saying they didn't check, but it is definitely set this way now. On Thu, Aug 10, 2017, 8:45 PM Don Waterloo wrote: > I have a system w/ 7 hosts. > Each host has 1x1TB NVME, and

[ceph-users] Questions about cache-tier in 12.1

2017-08-10 Thread Don Waterloo
I have a system w/ 7 hosts. Each host has 1x1TB NVME, and 2x2TB SATA SSD. The intent was to use this for openstack, having glance stored on the SSD, and cinder + nova running on a cache-tier replicated pool on NVMe in front of an erasure-coded pool on SSD. The rationale is that, given the copy-on-write, only
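A hedged sketch of the kind of tiering setup being described, assuming pools named 'nvme-cache' and 'ssd-ec' already exist and are mapped to the right device types via CRUSH rules (pool names and thresholds are illustrative, not from the original mail):

    ceph osd tier add ssd-ec nvme-cache               # attach the NVMe pool as a cache tier over the EC pool
    ceph osd tier cache-mode nvme-cache writeback     # absorb writes in the cache tier
    ceph osd tier set-overlay ssd-ec nvme-cache       # route client IO through the cache
    ceph osd pool set nvme-cache hit_set_type bloom   # required before the tier will accept IO
    ceph osd pool set nvme-cache target_max_bytes 500000000000   # flush/evict threshold, example value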

Re: [ceph-users] New OSD missing from part of osd crush tree

2017-08-10 Thread John Spray
On Thu, Aug 10, 2017 at 4:31 PM, Sean Purdy wrote: > Luminous 12.1.1 rc > > > Our OSD osd.8 failed. So we removed that. > > > We added a new disk and did: > > $ ceph-deploy osd create --dmcrypt --bluestore store02:/dev/sdd > > That worked, created osd.18, OSD has data.

Re: [ceph-users] New OSD missing from part of osd crush tree

2017-08-10 Thread Gregory Farnum
Sage says a whole bunch of fixes for this have gone in since then and since 12.1.2. We should be pushing out a final 12.1.3 today for people to test on; can you try that and report back once it's out? -Greg On Thu, Aug 10, 2017 at 8:32 AM Sean Purdy wrote: > Luminous

[ceph-users] New OSD missing from part of osd crush tree

2017-08-10 Thread Sean Purdy
Luminous 12.1.1 rc Our OSD osd.8 failed. So we removed that. We added a new disk and did: $ ceph-deploy osd create --dmcrypt --bluestore store02:/dev/sdd That worked, created osd.18, OSD has data. However, mgr output at http://localhost:7000/servers showed osd.18 under a blank hostname
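When an OSD ends up outside the expected host bucket like this, one way to inspect and, if needed, reposition it is via the CRUSH commands below; osd.18 and host store02 come from the thread, while the placeholder weight and the exact fix are assumptions:

    ceph osd tree                                                             # confirm where osd.18 actually sits in the CRUSH hierarchy
    ceph osd crush create-or-move osd.18 <weight> root=default host=store02   # place it under the intended host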

Re: [ceph-users] luminous/bluestore osd memory requirements

2017-08-10 Thread Gregory Farnum
This has been discussed a lot in the performance meetings so I've added Mark to discuss. My naive recollection is that the per-terabyte recommendation will be more realistic than it was in the past (an effective increase in memory needs), but also that it will be under much better control than
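At this point in the Luminous cycle the main per-OSD memory knob for BlueStore was the cache size; a hedged ceph.conf sketch (the value is purely illustrative, not a recommendation from this thread):

    [osd]
    bluestore_cache_size = 1073741824    # 1 GiB of BlueStore cache per OSD; total process footprint will be higher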

Re: [ceph-users] Slow request on node reboot

2017-08-10 Thread David Turner
When the node reboots, are the osds being marked down immediately? If the node were to reboot but not mark the osds down, then all requests to those osds would block until they got marked down. On Thu, Aug 10, 2017, 5:46 AM Hyun Ha wrote: > Hi, Ramirez > > I have exactly
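A minimal sketch of a planned reboot that avoids leaving requests blocked on OSDs that have not been marked down (the use of noout and a clean service stop is an assumption, not quoted from the thread):

    ceph osd set noout              # stop CRUSH from rebalancing during the short outage
    systemctl stop ceph-osd.target  # stop the OSDs cleanly so the monitors mark them down immediately
    reboot
    # once the node is back up and the OSDs have rejoined:
    ceph osd unset noout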

Re: [ceph-users] osd backfills and recovery limit issue

2017-08-10 Thread David Turner
1 backfill per osd has never severely impacted performance afaik. That is a very small amount of io. I run with 2-5 in each of my clusters. When an osd comes up, the map changes enough that more PGs will move than just those backfilling into the new osd. To modify how many backfills are
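A hedged example of adjusting these limits at runtime (the values are illustrative; the preview is truncated before the concrete advice):

    ceph tell osd.* injectargs '--osd-max-backfills 2 --osd-recovery-max-active 3'
    ceph -s        # watch how many PGs are backfilling/recovering after the change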

Re: [ceph-users] ceph-fuse mouting and returning 255

2017-08-10 Thread Dan van der Ster
Hi, I also noticed this and finally tracked it down: http://tracker.ceph.com/issues/20972 Cheers, Dan On Mon, Jul 10, 2017 at 3:58 PM, Florent B wrote: > Hi, > > Since 10.2.8 Jewel update, when ceph-fuse is mounting a file system, it > returns 255 instead of 0 ! > > $
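A quick way to reproduce the symptom from the tracker issue (the monitor address and mount point are placeholders):

    ceph-fuse -m mon1:6789 /mnt/cephfs
    echo $?                  # 10.2.8 prints 255 here even when the mount succeeded; earlier releases print 0
    mountpoint /mnt/cephfs   # confirms the filesystem is actually mounted despite the exit code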

Re: [ceph-users] Cephfs IO monitoring

2017-08-10 Thread John Spray
On Thu, Aug 10, 2017 at 5:19 AM, Brady Deetz wrote: > Curious if there is a way I could see, in near real-time, the io > patterns for an fs. For instance, what files are currently being read/written > and the block sizes. I suspect this is a big ask. The only thing I know of
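The reply is truncated above, so the following is only a hedged sketch of the coarse-grained options available at the time, not necessarily what John goes on to recommend; mds.<id> is a placeholder:

    ceph daemonperf mds.<id>          # rolling per-second view of MDS op counters
    ceph daemon mds.<id> perf dump    # raw counter dump
    ceph daemon mds.<id> session ls   # per-client session and request information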

[ceph-users] Slow request on node reboot

2017-08-10 Thread Hyun Ha
Hi, Ramirez I have exactly the same problem as yours. Did you solve that issue? Do you have experience or solutions? Thank you.

[ceph-users] v11.2.1 Kraken Released

2017-08-10 Thread Abhishek Lekshmanan
This is the first bugfix release for Kraken, and probably the last release of the Kraken series (Kraken will be declared "End Of Life" (EOL) when Luminous is declared stable, ie. when 12.2.0 is released). It contains a large number of bugfixes across all Ceph components. We recommend that all

Re: [ceph-users] luminous/bluestore osd memory requirements

2017-08-10 Thread Wido den Hollander
> On 10 August 2017 at 11:14, Marcus Haarmann wrote: > > > Hi, > > we have done some testing with bluestore and found that the memory > consumption of the osd > processes depends not on the actual amount of data stored but on the number > of stored >

Re: [ceph-users] luminous/bluestore osd memory requirements

2017-08-10 Thread Marcus Haarmann
Hi, we have done some testing with bluestore and found that the memory consumption of the osd processes depends not on the actual amount of data stored but on the number of stored objects. This means that e.g. a block device of 100 GB which spreads over 100 objects has a different memory
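A hedged sketch of the object-count arithmetic behind this point, assuming RBD's default 4 MiB object size (the 100-object example in the mail implies a much larger, non-default object size):

    echo $(( 100 * 1024 / 4 ))     # a 100 GB image at 4 MiB objects -> 25600 objects
    echo $(( 100 * 1024 / 32 ))    # the same image created with 'rbd create --object-size 32M' -> 3200 objects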

Re: [ceph-users] osd backfills and recovery limit issue

2017-08-10 Thread cgxu
The explanation of osd_max_backfills is below. osd max backfills Description: The maximum number of backfills allowed to or from a single OSD. Type: 64-bit Unsigned Integer Default: 1 So I think the option does not limit the number of OSDs involved in backfill activity. > On

Re: [ceph-users] Ceph cluster in error state (full) with raw usage 32% of total capacity

2017-08-10 Thread Mandar Naik
Hi Peter, Thanks a lot for the reply. Please find 'ceph osd df' output here -
# ceph osd df
ID WEIGHT  REWEIGHT SIZE   USE    AVAIL  %USE VAR  PGS
 2 0.04399 1.0      46056M 35576k 46021M 0.08 0.00   0
 1 0.04399 1.0      46056M 40148k 46017M 0.09 0.00 384
 0 0.04399 1.0      46056M 43851M

[ceph-users] luminous/bluestore osd memory requirements

2017-08-10 Thread Stijn De Weirdt
hi all, we are planning to purchase new OSD hardware, and we are wondering if for upcoming luminous with bluestore OSDs, anything wrt the hardware recommendations from http://docs.ceph.com/docs/master/start/hardware-recommendations/ will be different, esp. the memory/cpu part. i understand from

Re: [ceph-users] Ceph cluster in error state (full) with raw usage 32% of total capacity

2017-08-10 Thread Peter Maloney
I think a `ceph osd df` would be useful. And how did you set up such a cluster? I don't see a root, and you have each osd in there more than once...is that even possible? On 08/10/17 08:46, Mandar Naik wrote: > * > > Hi, > > I am evaluating ceph cluster for a solution where ceph could be used >

[ceph-users] Ceph cluster in error state (full) with raw usage 32% of total capacity

2017-08-10 Thread Mandar Naik
Hi, I am evaluating a ceph cluster for a solution where ceph could be used for provisioning pools which could be either stored local to a node or replicated across the cluster. This way ceph could be used as a single point of solution for writing both local as well as replicated data. Local storage helps
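The original mail is truncated here, but the node-local half of such a setup is usually expressed as a CRUSH rule rooted at a single host; a minimal sketch, assuming a host bucket named node1 and illustrative pool names:

    ceph osd crush rule create-simple local-node1 node1 osd    # rule that only selects OSDs under host node1
    ceph osd pool create local-node1-pool 64 64 replicated local-node1
    ceph osd pool set local-node1-pool size 1                  # single copy, kept on that node (no redundancy)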