Dear All,
Ceph Version: 12.2.5-2.ge988fb6.el7
We are facing an issue with Glance, which has its backend set to Ceph: when
we try to create an instance or volume from an image, it throws a
checksum error.
When we use rbd export and run md5sum on the result, the value matches the
Glance checksum.
When we use
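As a sketch of the comparison we are doing (pool and image names are placeholders):

  # export the image from the Glance pool to stdout and checksum it
  rbd export images/<image-id> - | md5sum
  # compare against the checksum recorded by Glance
  openstack image show <image-id> -f value -c checksum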
Hi again,
Thanks to a hint from another user I seem to have gotten past this.
The trick was to restart the OSDs with a positive merge threshold (10)
and then cycle through rados bench several hundred times, e.g.
while true ; do rados bench -p default.rgw.buckets.index 10 write -b 4096 -t 128 ; done
Looks like you are trying to write to the pseudo-root; mount /cephfs
instead of /.
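Assuming this is an NFS-Ganesha export of CephFS, the client mount would be
something like (server name and mount point are placeholders):

  mount -t nfs ganesha-host:/cephfs /mnt/cephfs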
Paul
--
Paul Emmerich
Looking for help with your Ceph cluster? Contact us at https://croit.io
croit GmbH
Freseniusstr. 31h
81247 München
www.croit.io
Tel: +49 89 1896585 90
On Sat, Apr 6, 2019 at 1:07 PM
> On Apr 8, 2019, at 5:42 PM, Bryan Stillwell wrote:
>
>
>> On Apr 8, 2019, at 4:38 PM, Gregory Farnum wrote:
>>
>> On Mon, Apr 8, 2019 at 3:19 PM Bryan Stillwell
>> wrote:
>>>
>>> There doesn't appear to be any correlation between the OSDs that would
>>> point to a hardware issue, and
I noticed that when changing some settings, they appear to stay the same; for
example, when trying to set this one higher:
ceph tell 'osd.*' injectargs '--osd-recovery-max-active 4'
It gives the usual warning that a restart may be needed, but it still shows
the old value:
# ceph --show-config | grep
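Note that, as far as I know, ceph --show-config prints defaults plus
ceph.conf values rather than the daemon's live settings; the running value
can be checked via the admin socket, e.g. (placeholder OSD id, run on the
node hosting it):

  ceph daemon osd.0 config get osd_recovery_max_active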
Hi Fabio,
Did you resolve the issue?
A bit late, I know, but did you try to restart OSD 14? If 102 and 121 are
fine, I would also try to crush reweight OSD 14 to 0.
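Something along these lines (double-check the id against your CRUSH tree):

  ceph osd crush reweight osd.14 0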
Greetings
Mehmet
On March 10, 2019 at 19:26:57 CET, Fabio Abreu wrote:
>Hi Darius,
>
>Thanks for your reply !
>
>This happening
On 4/9/19 12:43 PM, Francois Lafont wrote:
2. In my Docker container context, is it possible to put the logs above in the file
"/var/log/syslog" of my host? In other words, is it possible to make sure this is
logged to the stdout of the "radosgw" daemon?
In brief, is it possible to log "operations" in
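A sketch of ceph.conf options that might achieve this (names are from the
radosgw config reference; untested in this exact container setup, and the
section name is a placeholder):

  [client.rgw.myhost]
      rgw enable ops log = true   # record per-request operations
      rgw ops log rados = false   # keep the ops log out of RADOS
      log to stderr = true        # let the container runtime capture output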
Good point, thanks!
By creating memory pressure (playing with vm.min_free_kbytes), the memory
is freed by the kernel.
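For reference, poking at that knob is a plain sysctl, e.g. (the value is only
illustrative):

  sysctl -w vm.min_free_kbytes=4194304   # raise the watermark, then watch 'free'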
So I think I essentially need to update my monitoring rules to avoid
false positives.
Thanks, I continue to read your resources.
On Tuesday, April 9, 2019 at 09:30 -0500, Mark Nelson wrote:
Hi all,
We have a slight issue while trying to migrate a pool from filestore
to bluestore.
This pool used to have 20 million objects in filestore -- it now has
50,000. During its life, the filestore pgs were internally split
several times, but never merged. Now the pg _head dirs have mostly
Can you pastebin the results from running the following on your backup
site rbd-mirror daemon node?
ceph --admin-daemon /path/to/asok config set debug_rbd_mirror 15
ceph --admin-daemon /path/to/asok rbd mirror restart nova
wait a minute to let some logs accumulate ...
ceph --admin-daemon
On Tue, Apr 9, 2019 at 17:48, Jason Dillaman wrote:
> Any chance your rbd-mirror daemon has the admin sockets available
> (defaults to /var/run/ceph/cephdr-clientasok)? If
> so, you can run "ceph --admin-daemon /path/to/asok rbd mirror status".
>
{
"pool_replayers": [
{
Any chance your rbd-mirror daemon has the admin sockets available
(defaults to /var/run/ceph/cephdr-clientasok)? If
so, you can run "ceph --admin-daemon /path/to/asok rbd mirror status".
On Tue, Apr 9, 2019 at 11:26 AM Magnus Grönlund wrote:
>
>
>
> On Tue, Apr 9, 2019 at 17:14, Jason
On Tue, Apr 9, 2019 at 17:14, Jason Dillaman wrote:
> On Tue, Apr 9, 2019 at 11:08 AM Magnus Grönlund
> wrote:
> >
> > >On Tue, Apr 9, 2019 at 10:40 AM Magnus Grönlund
> wrote:
> > >>
> > >> Hi,
> > >> We have configured one-way replication of pools between a production
> > >> cluster and a backup
On Tue, Apr 9, 2019 at 11:08 AM Magnus Grönlund wrote:
>
> >On Tue, Apr 9, 2019 at 10:40 AM Magnus Grönlund wrote:
> >>
> >> Hi,
> >> We have configured one-way replication of pools between a production
> >> cluster and a backup cluster. But unfortunately the rbd-mirror or the
> >> backup
>On Tue, Apr 9, 2019 at 10:40 AM Magnus Grönlund wrote:
>>
>> Hi,
>> We have configured one-way replication of pools between a production
>> cluster and a backup cluster. But unfortunately the rbd-mirror or the
>> backup cluster is unable to keep up with the production cluster, so the
>> replication fails
On Thu, Apr 4, 2019 at 6:27 AM huxia...@horebdata.cn
wrote:
>
> Thanks a lot, Jason.
>
> How much performance loss should I expect by enabling rbd mirroring? I really
> need to minimize any performance impact while using this disaster recovery
> feature. Will a dedicated journal on Intel Optane
On Tue, Apr 9, 2019 at 10:40 AM Magnus Grönlund wrote:
>
> Hi,
> We have configured one-way replication of pools between a production cluster
> and a backup cluster. But unfortunately the rbd-mirror or the backup cluster
> is unable to keep up with the production cluster so the replication
Hi,
We have configured one-way replication of pools between a production
cluster and a backup cluster. But unfortunately the rbd-mirror or the
backup cluster is unable to keep up with the production cluster, so the
replication fails to reach the replaying state.
And the journals on the rbd volumes keep
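In case it helps, the replay state and journal backlog can be inspected with
something like the following; "nova" is the pool name mentioned elsewhere in
the thread, and the image name is a placeholder:

  rbd mirror pool status --verbose nova
  rbd journal info --pool nova --image <image>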
Update:
I think we have a work-around, but no root cause yet.
What is working is removing the 'v2' bits from the ceph.conf file across
the cluster, and turning off all cephx authentication. Now everything
seems to be talking correctly other than some odd metrics around the edges.
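Concretely, that amounts to something like the following in ceph.conf (a
sketch only, with placeholder addresses; disabling cephx obviously weakens
security):

  [global]
      # v1-only monitor addresses, no msgr2 entries
      mon host = 192.0.2.1:6789, 192.0.2.2:6789, 192.0.2.3:6789
      # cephx switched off cluster-wide
      auth cluster required = none
      auth service required = none
      auth client required = none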
Here's my
My understanding is that basically the kernel is either unable or
uninterested (maybe due to a lack of memory pressure?) in reclaiming the
memory. It's possible you might get better behavior if you set
/sys/kernel/mm/khugepaged/max_ptes_none to a low value (maybe 0) or
maybe disable
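Setting it would be something like (untested suggestion, as above):

  echo 0 > /sys/kernel/mm/khugepaged/max_ptes_none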
Well, Dan seems to be right:
_tune_cache_size
target: 4294967296
heap: 6514409472
unmapped: 2267537408
mapped: 4246872064
old cache_size: 2845396873
new cache size: 2845397085
So we have 6GB in heap, but "only" 4GB mapped.
But "ceph tell osd.* heap release"
Thanks for the advice; we are using Debian 9 (stretch) with a custom
Linux kernel 4.14.
But "heap release" didn't help.
On Monday, April 8, 2019 at 12:18 +0200, Dan van der Ster wrote:
> Which OS are you using?
> With CentOS we find that the heap is not always automatically
> released. (You
Igor, thank you, Round 2 is explained now.
The main (aka block, aka slow) device cannot be expanded in Luminous; this
functionality will be available after the upgrade to Nautilus.
WAL and DB devices can be expanded in Luminous.
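For the record, the expansion itself is done with ceph-bluestore-tool against
a stopped OSD, e.g. (default path layout, placeholder id):

  ceph-bluestore-tool bluefs-bdev-expand --path /var/lib/ceph/osd/ceph-2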
Now I have recreated osd2 once again to get rid of the paradoxical
ceph osd df
Hi,
On 4/9/19 5:02 AM, Pavan Rallabhandi wrote:
Refer "rgw log http headers" under
http://docs.ceph.com/docs/nautilus/radosgw/config-ref/
Or even better in the code https://github.com/ceph/ceph/pull/7639
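For reference, that option takes a comma-separated list of HTTP header names,
e.g. (the header choice here is only illustrative):

  rgw log http headers = http_x_forwarded_for, http_user_agent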
OK, thanks for your help Pavan. I have made progress but I still have
some problems.