Hi,
It seems that the recovery process stops and goes back to the same situation as
before.
I hope that the log can provide more info. Anyway, thanks already for your
assistance.
Kr
Philippe.
From: Philippe Van Hecke
Sent: 04 February 2019 07:53
To: Sage Weil
On Mon, 4 Feb 2019, Philippe Van Hecke wrote:
> So I restarted the OSD, but it stops after some time. This has an effect on
> the cluster, and the cluster is in a partial recovery process.
>
> please find here log file of osd 49 after this restart
> https://filesender.belnet.be/?s=download&token=8c9
The official documentation [*] says that the only requirement to use the
balancer in upmap mode is that all clients must run at least luminous.
But I read somewhere (also in this mailing list) that there are also
requirements wrt the kernel.
If so:
1) Could you please specify what is the minimum r
ceph pg ls | grep 11.182
11.182 10 25 35 0 2534648064 1306
1306 active+recovery_wait+undersized+degraded 2019-02-04 09:23:26.461468
70238'1306 70673:24924 [64] 64 [64] 64
46843'56759413 2019-01-26 16:31:32
Hi,
some news:
I have tried different transparent hugepage values (madvise, never): no
change.
I have tried to increase bluestore_cache_size_ssd to 8G: no change.
I have tried to increase TCMALLOC_MAX_TOTAL_THREAD_CACHE_BYTES to 256MB: it
seems to help; after 24h I'm still around 1.5ms. (
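For the record, this is roughly how I applied those three settings (a sketch only; file paths and the way the environment variable is picked up can differ per distro):

  echo never > /sys/kernel/mm/transparent_hugepage/enabled    (runtime THP setting)

  in ceph.conf, [osd] section:
  bluestore_cache_size_ssd = 8589934592                       (8G)

  in /etc/default/ceph, then restart the OSDs:
  TCMALLOC_MAX_TOTAL_THREAD_CACHE_BYTES=268435456              (256MB)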
On Mon, Feb 4, 2019 at 9:25 AM Massimo Sgaravatto
wrote:
>
> The official documentation [*] says that the only requirement to use the
> balancer in upmap mode is that all clients must run at least luminous.
> But I read somewhere (also in this mailing list) that there are also
> requirements wrt
On 02/02/2019 05:07, Stuart Longland wrote:
On 1/2/19 10:43 pm, Alfredo Deza wrote:
The tmpfs setup is expected. All persistent data for bluestore OSDs
set up with LVM is stored in LVM metadata. The LVM/udev handler for
bluestore volumes creates these tmpfs filesystems on the fly and populates
the
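If you want to see that metadata yourself, something along these lines should show it (a sketch; ceph-volume stores the OSD information as LVM tags):

  ceph-volume lvm list
  lvs -o lv_name,lv_tags --noheadings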
Thanks a lot
So, if I am using ceph just to provide block storage to an OpenStack
cluster (so using libvirt), the kernel version on the client nodes
shouldn't matter, right?
Thanks again, Massimo
On Mon, Feb 4, 2019 at 10:02 AM Ilya Dryomov wrote:
> On Mon, Feb 4, 2019 at 9:25 AM Massimo Sgar
So, if I am using ceph just to provide block storage to an OpenStack
cluster (so using libvirt), the kernel version on the client nodes
shouldn't matter, right?
Yep, just make sure your librbd on compute hosts is Luminous.
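For completeness, the usual sequence for turning on the upmap balancer is roughly the following (double-check against the docs for your release):

  ceph features                                    (per-client feature/release breakdown)
  ceph osd set-require-min-compat-client luminous
  ceph balancer mode upmap
  ceph balancer on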
k
On Fri, Feb 1, 2019 at 6:35 PM Vladimir Prokofev wrote:
>
> Your output looks a bit weird, but still, this is normal for bluestore. It
> creates a small separate data partition that is presented as XFS mounted in
> /var/lib/ceph/osd, while the real data partition is hidden as a raw (bluestore)
> block d
Hello - We are using Ceph OSD nodes with a controller cache of 1G.
Are there any recommendations for using the cache for reads and writes?
Here we are using HDDs with colocated journals.
For SSD journals - 0% cache and 100% write.
Thanks
Swami
On Fri, Feb 1, 2019 at 6:07 PM Shain Miley wrote:
>
> Hi,
>
> I went to replace a disk today (which I had not had to do in a while)
> and after I added it the results looked rather odd compared to times past:
>
> I was attempting to replace /dev/sdk on one of our osd nodes:
>
> #ceph-deploy disk z
On Mon, Feb 4, 2019 at 4:43 AM Hector Martin wrote:
>
> On 02/02/2019 05:07, Stuart Longland wrote:
> > On 1/2/19 10:43 pm, Alfredo Deza wrote:
> >>> The tmpfs setup is expected. All persistent data for bluestore OSDs
> >>> set up with LVM is stored in LVM metadata. The LVM/udev handler for
> >>>
Thanks a lot!
On Mon, Feb 4, 2019 at 12:35 PM Konstantin Shalygin wrote:
> So, if I am using ceph just to provide block storage to an OpenStack
> cluster (so using libvirt), the kernel version on the client nodes
> shouldn't matter, right ?
>
> Yep, just make sure your librbd on compute hosts i
Hi again,
I spoke too fast, the problem has occurred again, so it's not tcmalloc cache
size related.
I have noticed something using a simple "perf top":
each time I have this problem (I have seen exactly the same behaviour 4 times),
when latency is bad, perf top gives me:
StupidAllocator::_al
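For anyone wanting to reproduce the capture, it is just perf attached to the affected OSD process, for example:

  pgrep -a ceph-osd        (to find the pid of the slow OSD)
  perf top -p <pid>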
Hi,
we have built a 6-node NVMe-only Ceph cluster with 4x Intel DC P4510 8TB
and one Intel DC P4800X 375GB Optane per node. Up to 10x P4510 can be
installed in each node.
WAL and RocksDBs for all P4510 should be stored on the Optane (approx.
30GB per RocksDB incl. WAL).
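For context, each OSD would be created roughly like this (the device, VG and LV names below are made up for illustration):

  lvcreate -L 30G -n db-osd0 vg-optane               (one ~30GB DB+WAL LV per OSD on the Optane)
  ceph-volume lvm create --bluestore --data /dev/nvme1n1 --block.db vg-optane/db-osd0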
Internally, discussion
Hi Alexandre,
looks like a bug in StupidAllocator.
Could you please collect BlueStore performance counters right after OSD
startup and once you get high latency.
Specifically 'l_bluestore_fragmentation' parameter is of interest.
Also if you're able to rebuild the code I can probably make a s
Thanks Igor,
>>Could you please collect BlueStore performance counters right after OSD
>>startup and once you get high latency.
>>
>>Specifically 'l_bluestore_fragmentation' parameter is of interest.
I'm already monitoring with
"ceph daemon osd.x perf dump", (I have 2 months of history with all
On Fri, Feb 1, 2019 at 2:29 AM Mahmoud Ismail
wrote:
> Hello,
>
> I'm a bit confused about how the journaling actually works in the MDS.
>
> I was reading about these two configuration parameters (journal write head
> interval) and (mds early reply). Does the MDS flush the journal
> synchronousl
On Mon, Feb 4, 2019 at 4:16 PM Gregory Farnum wrote:
> On Fri, Feb 1, 2019 at 2:29 AM Mahmoud Ismail <
> mahmoudahmedism...@gmail.com> wrote:
>
>> Hello,
>>
>> I'm a bit confused about how the journaling actually works in the MDS.
>>
>> I was reading about these two configuration parameters (jour
On Mon, Feb 4, 2019 at 7:32 AM Mahmoud Ismail
wrote:
> On Mon, Feb 4, 2019 at 4:16 PM Gregory Farnum wrote:
>
>> On Fri, Feb 1, 2019 at 2:29 AM Mahmoud Ismail <
>> mahmoudahmedism...@gmail.com> wrote:
>>
>>> Hello,
>>>
>>> I'm a bit confused about how the journaling actually works in the MDS.
>>
>>but I don't see l_bluestore_fragmentation counter.
>>(but I have bluestore_fragmentation_micros)
ok, this is the same
b.add_u64(l_bluestore_fragmentation, "bluestore_fragmentation_micros",
"How fragmented bluestore free space is (free extents / max
possible number of free extents
On Mon, Feb 4, 2019 at 4:35 PM Gregory Farnum wrote:
>
>
> On Mon, Feb 4, 2019 at 7:32 AM Mahmoud Ismail <
> mahmoudahmedism...@gmail.com> wrote:
>
>> On Mon, Feb 4, 2019 at 4:16 PM Gregory Farnum wrote:
>>
>>> On Fri, Feb 1, 2019 at 2:29 AM Mahmoud Ismail <
>>> mahmoudahmedism...@gmail.com> wro
Hello,
I just upgraded our cluster to 12.2.11 and I have a few questions around
straw_calc_version and tunables.
Currently ceph status shows the following:
crush map has straw_calc_version=0
crush map has legacy tunables (require argonaut, min is firefly)
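For reference, the commands I expect to be relevant here are (as I understand the docs; not yet run on this cluster):

  ceph osd crush show-tunables
  ceph osd crush tunables optimal
  ceph osd crush set-tunable straw_calc_version 1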
1. Will setting tunables to optimal
I think one limitation would be the 375GB since bluestore needs a larger
amount of space than filestore did.
On Mon, Feb 4, 2019 at 10:20 AM Florian Engelmann <
florian.engelm...@everyware.ch> wrote:
> Hi,
>
> we have built a 6 Node NVMe only Ceph Cluster with 4x Intel DC P4510 8TB
> each and one
On Mon, Feb 4, 2019 at 8:03 AM Mahmoud Ismail
wrote:
> On Mon, Feb 4, 2019 at 4:35 PM Gregory Farnum wrote:
>
>>
>>
>> On Mon, Feb 4, 2019 at 7:32 AM Mahmoud Ismail <
>> mahmoudahmedism...@gmail.com> wrote:
>>
>>> On Mon, Feb 4, 2019 at 4:16 PM Gregory Farnum
>>> wrote:
>>>
On Fri, Feb 1,
For future reference, I found these two links, which answer most of the questions:
http://docs.ceph.com/docs/master/rados/operations/crush-map/
https://www.openstack.org/assets/presentation-media/Advanced-Tuning-and-Operation-guide-for-Block-Storage-using-Ceph-Boston-2017-final.pdf
We have about 250
- Vivian
SSG OTC NST Storage
Tel: (8621)61167437