Dear Jason,
Thanks for the reply.
We are using Python 2.7.5.
Yes, the script is based on OpenStack code.
As suggested, we have tried chunk_size 32 and 64, and both give the same
incorrect checksum value.
We also tried copying the same image to a different pool, which resulted
in the same incorrect checksum.
Thanks &
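For comparison, here is a minimal sketch of the chunked MD5 computation that a Glance-style script performs (the function name and chunk sizes are illustrative, and the real OpenStack code reads from librbd rather than a local file):

```python
import hashlib

def chunked_md5(path, chunk_size=65536):
    """Compute an MD5 digest by reading a file in fixed-size chunks,
    accumulating into one hash object, Glance-style."""
    md5 = hashlib.md5()
    with open(path, "rb") as f:
        # iter() with a sentinel stops when read() returns b"" at EOF.
        for chunk in iter(lambda: f.read(chunk_size), b""):
            md5.update(chunk)
    return md5.hexdigest()
```

The digest must be identical regardless of chunk size, so if chunk_size 32 and 64 both give the same wrong value, the hashing loop is probably fine and the bytes being read differ from what `rbd export` produces.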
Hello,
I have been managing a ceph cluster running 12.2.11. It was running
12.2.5 until the recent upgrade three months ago. We built another cluster
running 13.2.5 and synced the data between clusters, and now would like to
run primarily off the 13.2.5 cluster. The data is all S3 buckets.
Hello,
On Wed, 10 Apr 2019 20:09:58 +0200 Paul Emmerich wrote:
> On Wed, Apr 10, 2019 at 11:12 AM Christian Balzer wrote:
> >
> >
> > Hello,
> >
> > Another thing that crossed my mind aside from failure probabilities caused
> > by actual HDDs dying is of course the little detail that most
On 4/9/2019 1:59 PM, Yury Shevchuk wrote:
Igor, thank you, Round 2 is explained now.
The main (aka block, aka slow) device cannot be expanded in Luminous; this
functionality will be available after the upgrade to Nautilus.
WAL and DB devices can be expanded in Luminous.
Now I have recreated osd2 once
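A minimal sketch of the db/wal expansion step discussed above, assuming osd.2, a systemd deployment, and that the underlying db/wal partition or LV has already been grown (paths and IDs are illustrative):

```shell
# Stop the OSD first; the tool needs exclusive access to the store.
systemctl stop ceph-osd@2

# Tell BlueFS to claim the newly grown space. In Luminous this works for
# the db/wal devices; expanding the main (slow) device needs Nautilus.
ceph-bluestore-tool bluefs-bdev-expand --path /var/lib/ceph/osd/ceph-2

systemctl start ceph-osd@2
```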
On Wed, Apr 10, 2019 at 11:12 AM Christian Balzer wrote:
>
>
> Hello,
>
> Another thing that crossed my mind aside from failure probabilities caused
> by actual HDDs dying is of course the little detail that most Ceph
> installations will have WAL/DB (journal) on SSDs, the most typical
>
To summarize this discussion:
There are two ways to change the configuration:
1. ceph config * is for permanently changing settings
2. ceph injectargs is for temporarily changing a setting until the
next restart of that daemon
* ceph config get or --show-config shows the defaults/permanent values
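As a concrete sketch of the two approaches (the option name and value are just examples; `ceph config set/get` needs Mimic or later, while injectargs works on Luminous too):

```shell
# 1. Permanent: store the setting in the cluster configuration database.
ceph config set osd osd_recovery_max_active 4
ceph config get osd.1 osd_recovery_max_active

# 2. Temporary: inject into the running daemon; lost on its next restart.
ceph tell osd.1 injectargs '--osd-recovery-max-active 4'

# Verify what the daemon is actually using right now.
ceph daemon osd.1 config show | grep osd_recovery_max_active
```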
On 10/04/2019 07:46, Brayan Perera wrote:
Dear All,
Ceph Version : 12.2.5-2.ge988fb6.el7
We are facing an issue with Glance, which has its backend set to Ceph:
when we try to create an instance or volume from an image, it throws a
checksum error.
When we use rbd export and run md5sum, the value is
On Wed, Apr 10, 2019 at 1:46 AM Brayan Perera wrote:
>
> Dear All,
>
> Ceph Version : 12.2.5-2.ge988fb6.el7
>
> We are facing an issue with Glance, which has its backend set to Ceph:
> when we try to create an instance or volume from an image, it throws a
> checksum error.
> When we use rbd export and
I always end up using "ceph --admin-daemon
/var/run/ceph/name-of-socket-here.asok config show | grep ..." to get what
is in effect now for a certain daemon.
You need to be on the host of the daemon, of course.
Me too; I just wanted to try what the OP reported. And after trying that,
I'll keep it
On Wed, 10 Apr 2019 at 13:37, Eugen Block wrote:
> > If you don't specify which daemon to talk to, it tells you what the
> > defaults would be for a random daemon started just now using the same
> > config as you have in /etc/ceph/ceph.conf.
>
> I tried that, too, but the result is not correct:
If you don't specify which daemon to talk to, it tells you what the
defaults would be for a random daemon started just now using the same
config as you have in /etc/ceph/ceph.conf.
I tried that, too, but the result is not correct:
host1:~ # ceph -n osd.1 --show-config | grep
On Wed, 10 Apr 2019 at 13:31, Eugen Block wrote:
>
> While --show-config still shows
>
> host1:~ # ceph --show-config | grep osd_recovery_max_active
> osd_recovery_max_active = 3
>
>
> It seems as if --show-config is not really up-to-date anymore?
> Although I can execute it, the option doesn't
In fact the autotuner does it itself every time it tunes the cache size:
https://github.com/ceph/ceph/blob/master/src/os/bluestore/BlueStore.cc#L3630
Mark
On 4/10/19 2:53 AM, Frédéric Nass wrote:
Hi everyone,
So if the kernel is able to reclaim those pages, is there still a
point in
On 10/04/2019 18.11, Christian Balzer wrote:
> Another thing that crossed my mind aside from failure probabilities caused
> by actual HDDs dying is of course the little detail that most Ceph
> installations will have WAL/DB (journal) on SSDs, the most typical
> ratio being 1:4.
> And given
Hello,
Another thing that crossed my mind aside from failure probabilities caused
by actual HDDs dying is of course the little detail that most Ceph
installations will have WAL/DB (journal) on SSDs, the most typical
ratio being 1:4.
And given the current thread about compaction killing
It's ceph-bluestore-tool.
On 4/10/2019 10:27 AM, Wido den Hollander wrote:
On 4/10/19 9:25 AM, jes...@krogh.cc wrote:
On 4/10/19 9:07 AM, Charles Alva wrote:
Hi Ceph Users,
Is there a way to minimize rocksdb compaction events so that they
won't consume all of the spinning disk's IO
Hi everyone,
So if the kernel is able to reclaim those pages, is there still a point
in running the heap release on a regular basis?
Regards,
Frédéric.
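For reference, the manual release being discussed can be run per daemon or, with a wildcard, across daemons (a sketch; worthwhile only if the kernel or autotuner is not already reclaiming those pages):

```shell
# Show tcmalloc heap statistics for one OSD.
ceph tell osd.1 heap stats

# Ask the allocator to return free pages to the OS, for all OSDs.
ceph tell osd.* heap release
```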
On 09/04/2019 at 19:33, Olivier Bonvalet wrote:
Good point, thanks !
By making memory pressure (by playing with vm.min_free_kbytes),
On 4/10/19 9:25 AM, jes...@krogh.cc wrote:
>> On 4/10/19 9:07 AM, Charles Alva wrote:
>>> Hi Ceph Users,
>>>
>>> Is there a way to minimize rocksdb compaction events so that they
>>> won't consume all of the spinning disk's IO and avoid the OSD being
>>> marked down due to failing to send
> On 4/10/19 9:07 AM, Charles Alva wrote:
>> Hi Ceph Users,
>>
>> Is there a way to minimize rocksdb compaction events so that they
>> won't consume all of the spinning disk's IO and avoid the OSD being
>> marked down due to failing to send heartbeats to others?
>>
>> Right now we have frequent
On 4/10/19 9:07 AM, Charles Alva wrote:
> Hi Ceph Users,
>
> Is there a way to minimize rocksdb compaction events so that they
> won't consume all of the spinning disk's IO and avoid the OSD being
> marked down due to failing to send heartbeats to others?
>
> Right now we have frequent high
Hi Ceph Users,
Is there a way to minimize rocksdb compaction events so that they won't
consume all of the spinning disk's IO and avoid the OSD being marked down
due to failing to send heartbeats to others?
Right now we see high disk IO utilization every 20-25 minutes,
where the
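One commonly suggested mitigation is to trigger compaction manually during a quiet window rather than letting RocksDB pick the moment (a sketch, assuming osd.1; the OSD stays up but will be busy while compacting):

```shell
# Trigger a manual RocksDB compaction on one OSD.
ceph tell osd.1 compact

# Or via the local admin socket on the OSD's host.
ceph daemon osd.1 compact
```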
Hi,
I haven't used the --show-config option until now, but if you ask your
OSD daemon directly, your change should have been applied:
host1:~ # ceph tell 'osd.*' injectargs '--osd-recovery-max-active 4'
host1:~ # ceph daemon osd.1 config show | grep osd_recovery_max_active
We hit MAX AVAIL when one of our OSDs reached 92% full; we immediately
reweighted that OSD from 1 to 0.95 and added new OSDs to the cluster.
But our backfilling is too slow and the ingestion into Ceph is too high.
Currently we have stopped the OSD and are trying to export-remove a few PGs
to free up some
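The export-remove step can be sketched with ceph-objectstore-tool, assuming the OSD is stopped and using a placeholder PG id and file path (on older Luminous builds without `--op export-remove`, use `--op export` followed by `--op remove`):

```shell
# The OSD must be stopped; the tool needs exclusive access to the store.
systemctl stop ceph-osd@1

# Export the PG to a file and remove it from the OSD in one step,
# freeing its space. Keep the file until the cluster is healthy again.
ceph-objectstore-tool --data-path /var/lib/ceph/osd/ceph-1 \
    --op export-remove --pgid 1.2a --file /root/pg.1.2a.export

systemctl start ceph-osd@1
```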