On 11.05.20 at 13:25, Igor Fedotov wrote:
> Hi Stefan,
>
> I don't have specific preferences, so any public storage you prefer is fine.
>
> Just one note - I presume you collected the logs for the full set of 10
> runs, which is redundant. Could you please collect detailed logs (one
> per OSD) for single-shot runs instead?
Hi Stefan,
I don't have specific preferences, so any public storage you prefer is fine.
Just one note - I presume you collected the logs for the full set of 10
runs, which is redundant. Could you please collect detailed logs (one
per OSD) for single-shot runs instead?
Sorry for the unclear previous inquiry.
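For example, a detailed log for a single shot run could be captured roughly
like this (a sketch only; osd.0 and the debug levels are illustrative, 1/5 is
assumed to be the previous default, and the output lands in that OSD's
regular log file on its host):

# ceph daemon osd.0 config set debug_bluestore 20/20
# ceph daemon osd.0 config set debug_bluefs 20/20
# ceph tell osd.0 bench -f plain 12288000 4096
# ceph daemon osd.0 config set debug_bluestore 1/5
# ceph daemon osd.0 config set debug_bluefs 1/5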
Hi Igor,
Where should I post the logs?
On 06.05.20 at 09:23, Stefan Priebe - Profihost AG wrote:
> Hi Igor,
>
> On 05.05.20 at 16:10, Igor Fedotov wrote:
>> Hi Stefan,
>>
>> so (surprise!) some DB access counters show a significant difference, e.g.
>>
>> "kv_flush_lat": {
>> "avg
Hi Igor,
On 05.05.20 at 16:10, Igor Fedotov wrote:
> Hi Stefan,
>
> so (surprise!) some DB access counters show a significant difference, e.g.
>
> "kv_flush_lat": {
> "avgcount": 1423,
> "sum": 0.000906419,
> "avgtime": 0.00636
> },
>
Hi Stefan,
so (surprise!) some DB access counters show a significant difference, e.g.
"kv_flush_lat": {
"avgcount": 1423,
"sum": 0.000906419,
"avgtime": 0.00636
},
"kv_sync_lat": {
"avgcount": 1423,
"sum": 0.
Hello Igor,
On 30.04.20 at 15:52, Igor Fedotov wrote:
> 1) reset perf counters for the specific OSD
>
> 2) run bench
>
> 3) dump perf counters.
This is OSD 0:
# ceph tell osd.0 bench -f plain 12288000 4096
bench: wrote 12 MiB in blocks of 4 KiB in 6.70482 sec at 1.7 MiB/sec 447
IOPS
https://
Hi Stefan,
Hmm... could you please collect performance counters for these two
cases, using the following sequence:
1) reset perf counters for the specific OSD
2) run bench
3) dump perf counters.
Collecting disks' (both main and db) activity with iostat would be nice
too. But please either i
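For reference, the sequence could look roughly like this on the OSD's host
(a sketch; osd.0 is only an example, the admin socket must be reachable, and
iostat would be run in a second shell against the main and db devices, whose
names below are placeholders):

# ceph daemon osd.0 perf reset all
# ceph tell osd.0 bench -f plain 12288000 4096
# ceph daemon osd.0 perf dump > osd.0-perf.json
# iostat -xmt 1 /dev/sdX /dev/nvmeXn1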
Hi Igor,
but the performance issue is still present even on the recreated OSD.
# ceph tell osd.38 bench -f plain 12288000 4096
bench: wrote 12 MiB in blocks of 4 KiB in 1.63389 sec at 7.2 MiB/sec
1.84k IOPS
vs.
# ceph tell osd.10 bench -f plain 12288000 4096
bench: wrote 12 MiB in blocks of 4 K
Hi Igor,
On 27.04.20 at 15:03, Igor Fedotov wrote:
> Just left a comment at https://tracker.ceph.com/issues/44509
>
> Generally, bdev-new-db performs no migration; RocksDB might eventually do
> that, but there is no guarantee it moves everything.
>
> One should use bluefs-bdev-migrate to do actual migration.
No, I don't think so. But you can try again after applying
bluefs-bdev-migrate
On 4/24/2020 9:13 PM, Stefan Priebe - Profihost AG wrote:
Hi Igor,
could it be due to those 64 KB of spilled-over metadata that I can't get
rid of?
Stefan
On 24.04.20 at 13:08, Igor Fedotov wrote:
Hi Stefan,
Just left a comment at https://tracker.ceph.com/issues/44509
Generally, bdev-new-db performs no migration; RocksDB might eventually do
that, but there is no guarantee it moves everything.
One should use bluefs-bdev-migrate to do actual migration.
And I think that's the root cause for the above ticket.
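For reference, an actual migration could look roughly like this (a sketch
with placeholder OSD id and paths; the OSD has to be stopped while
ceph-bluestore-tool runs):

# systemctl stop ceph-osd@0
# ceph-bluestore-tool bluefs-bdev-migrate --path /var/lib/ceph/osd/ceph-0 \
    --devs-source /var/lib/ceph/osd/ceph-0/block \
    --dev-target /var/lib/ceph/osd/ceph-0/block.db
# systemctl start ceph-osd@0

This moves BlueFS data still residing on the source device(s) over to the
target device.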
Hi Igor,
could it be due to those 64 KB of spilled-over metadata that I can't get
rid of?
Stefan
On 24.04.20 at 13:08, Igor Fedotov wrote:
> Hi Stefan,
>
> that's not a 100% pure experiment. A fresh OSD might be faster by itself,
> e.g. due to lack of space fragmentation and/or empty lookup tables.
Hi Igor,
> On 24.04.2020 at 13:09, Igor Fedotov wrote:
>
> that's not a 100% pure experiment. A fresh OSD might be faster by itself,
> e.g. due to lack of space fragmentation and/or empty lookup tables.
Also, the migrated ones were just 3 weeks old, with a usage of 5%.
> You might want to recreate OSD.0 without DB and attach DB manually.
No, not a standalone WAL. I wanted to ask whether bdev-new-db migrated the DB
and WAL from HDD to SSD.
Stefan
> On 24.04.2020 at 13:01, Igor Fedotov wrote:
>
>
> Unless you have 3 different types of disks beyond the OSD (e.g. HDD, SSD,
> NVMe), a standalone WAL makes no sense.
>
>
>
> On 4/24/2020 1:58 PM, Stefan Priebe - Profihost AG wrote:
Hi Stefan,
that's not a 100% pure experiment. A fresh OSD might be faster by itself,
e.g. due to lack of space fragmentation and/or empty lookup tables.
You might want to recreate OSD.0 without DB and attach DB manually. Then
benchmark the resulting OSD.
Different experiment if you have another slo
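Attaching a DB volume to an existing OSD could then look roughly like this
(a sketch; OSD id and target device are placeholders, the OSD must be stopped
first, and on LVM-deployed OSDs the new volume may also need to be
tagged/registered so it is found again at start-up):

# systemctl stop ceph-osd@0
# ceph-bluestore-tool bluefs-bdev-new-db --path /var/lib/ceph/osd/ceph-0 \
    --dev-target /dev/nvme0n1p1
# systemctl start ceph-osd@0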
Unless you have 3 different types of disks beyond the OSD (e.g. HDD, SSD,
NVMe), a standalone WAL makes no sense.
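(If there were a third, faster device class available, the WAL could
presumably get its own volume with something like "ceph-bluestore-tool
bluefs-bdev-new-wal --path /var/lib/ceph/osd/ceph-0 --dev-target
<fast-device>"; with only HDD + SSD, BlueFS keeps the WAL on the DB device
anyway.)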
On 4/24/2020 1:58 PM, Stefan Priebe - Profihost AG wrote:
Is the WAL device missing? Do I need to run bluefs-bdev-new-db and WAL?
Greets,
Stefan
On 24.04.2020 at 11:32, Stefan Priebe - Profihost AG wrote:
Is the WAL device missing? Do I need to run bluefs-bdev-new-db and WAL?
Greets,
Stefan
> On 24.04.2020 at 11:32, Stefan Priebe - Profihost AG wrote:
>
> Hi Igor,
>
> there must be a difference. I purged osd.0 and recreated it.
>
> Now it gives:
> ceph tell osd.0 bench
> {
>"bytes_written
Hi Igor,
there must be a difference. I purged osd.0 and recreated it.
Now it gives:
ceph tell osd.0 bench
{
"bytes_written": 1073741824,
"blocksize": 4194304,
"elapsed_sec": 8.155473563993,
"bytes_per_sec": 131659040.46819863,
"iops": 31.389961354303033
}
What's wrong wi
Hi,
if the OSDs are idle, the difference is even worse:
# ceph tell osd.0 bench
{
"bytes_written": 1073741824,
"blocksize": 4194304,
"elapsed_sec": 15.39670787501,
"bytes_per_sec": 69738403.346825853,
"iops": 16.626931034761871
}
# ceph tell osd.38 bench
{
"bytes
Hi,
On 23.04.20 at 14:06, Igor Fedotov wrote:
I don't recall any additional tuning to be applied to the new DB volume. And
I assume the hardware is pretty much the same...
Do you still have any significant amount of data spilled over for these
updated OSDs? If not, I don't have any valid explanation for the phenomenon.
I don't recall any additional tuning to be applied to the new DB volume. And
I assume the hardware is pretty much the same...
Do you still have any significant amount of data spilled over for these
updated OSDs? If not, I don't have any valid explanation for the phenomenon.
You might want to try "ceph os
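A quick way to check whether an OSD still spills DB data over to the slow
device (a sketch, and not necessarily the command the truncated line above
was going to suggest):

# ceph health detail | grep -i spillover
# ceph daemon osd.0 perf dump bluefs | grep -E 'db_used_bytes|slow_used_bytes'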