Hello Wrok,
About 4 months ago we also struggled with Ceph iSCSI gateway
performance and some bugs. If you hit even a small amount of load,
the gateway will start causing issues. One option is to deploy a dedicated
iscsi gateway (tgt-server) that has direct
Hi,
I've deployed a Ceph Quincy cluster for HPC. Recently I keep running into
ceph-fuse crashes.
The kernel version is 4.18.0-348.el8.0.2.x86_64.
Here is part of the ceph-fuse log:
-59> 2023-06-28T09:51:00.452+0800 155546ff7700 3 client.159239 ll_lookup
0x200017f674a.head anaconda3
-58>
Deleting the service was a simple step:
/# ceph orch rm osd.iops_optimized
The WARN goes away.
Just FYI: ceph orch help does not list the rm option.
Thank you,
Anantha
From: Adiga, Anantha
Sent: Thursday, June 29, 2023 4:38 PM
To: ceph-users@ceph.io
Subject: [ceph-users] warning:
Thanks for your reply,
Yes, my setup is like the following:
RGWs (port 8084) -> Nginx (80, 443)
So that's why it confused me when :8084 appeared in the domain.
And this behavior only occurs with the PHP-generated URL, not with Boto3.
Hi,
I cannot find any reference on how to clear this warning AND stop the service.
See below.
After creating an OSD with the iops_optimized option, this WARN message appears.
Ceph 17.2.6
6/29/23 4:10:45 PM
[WRN]
Health check failed: Failed to apply 1 service(s):
Hi folks,
In a multisite environment, we can have one realm containing multiple
zonegroups, each of which can in turn have multiple zones. However, the purpose
of zonegroups isn't clear to me. It seems that when a user is created, its
metadata is synced to all zones within the same realm, regardless
On Thu, Jun 29, 2023 at 10:46:16AM -, Huy Nguyen wrote:
> Hi,
> I tried to generate a presigned URL using the PHP SDK, but it doesn't
> work. (I also tried boto3 with the same configuration, and the URL
> works normally.)
Do you have some sort of load-balancer in the setup? Either HAProxy,
There was a similar issue reported at
https://tracker.ceph.com/issues/48103 and yet another ML post at
https://lists.ceph.io/hyperkitty/list/ceph-users@ceph.io/thread/5LGXQINAJBIGFUZP5WEINVHNPBJEV5X7
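For what it's worth, the port ends up baked into the signature: with SigV4 presigned URLs, "host" (including any non-default port) is one of the signed headers, so a proxy that rewrites the host or port will produce a SignatureDoesNotMatch error on the RGW side. A minimal stdlib sketch illustrating this, with an entirely hypothetical URL and hostname:

```python
from urllib.parse import urlparse, parse_qs

# Hypothetical presigned URL, modeled on what an RGW listening on
# port 8084 would sign (host, key, and signature are made up).
url = ("http://rgw.example.com:8084/bucket/key"
       "?X-Amz-Algorithm=AWS4-HMAC-SHA256"
       "&X-Amz-Credential=EXAMPLEKEYID%2F20230629%2Fus-east-1%2Fs3%2Faws4_request"
       "&X-Amz-SignedHeaders=host"
       "&X-Amz-Signature=0000")
parsed = urlparse(url)
query = parse_qs(parsed.query)

# "host" being a signed header means the client must send exactly
# "rgw.example.com:8084" as the Host header; if a proxy in front of RGW
# rewrites the host or port, the signature no longer matches.
print(parsed.netloc)                    # rgw.example.com:8084
print(query["X-Amz-SignedHeaders"][0])  # host
```

So if Nginx terminates on 80/443 but the SDK signs against the RGW endpoint on 8084 (or vice versa), the two hosts differ and the URL is rejected.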
May I second the question of whether it's safe to run radosgw-admin autotrim
on those logs?
If so, why
Hi Matthew!
On 6/29/23 06:23, Matthew Booth wrote:
On Wed, 28 Jun 2023 at 22:44, Ilya Dryomov wrote:
** TL;DR
In testing, the write latency performance of a PWL-cache backed RBD
disk was 2 orders of magnitude worse than the disk holding the PWL
cache.
** Summary
I was hoping that PWL cache
On 28/06/2023 21:26, Niklas Hambüchen wrote:
I have increased the number of scrubs per OSD from 1 to 3 using `ceph config
set osd osd_max_scrubs 3`.
Now the problematic PG is scrubbing in `ceph pg ls`:
active+clean+scrubbing+deep+inconsistent
This succeeded!
The deep-scrub fixed the PG
On Wed, 28 Jun 2023 at 22:44, Ilya Dryomov wrote:
>> ** TL;DR
>>
>> In testing, the write latency performance of a PWL-cache backed RBD
>> disk was 2 orders of magnitude worse than the disk holding the PWL
>> cache.
>>
>> ** Summary
>>
>> I was hoping that PWL cache might be a good solution to
2023-06-29T17:10:46.880+0700 7f26014b0700 10 v4 credential format =
DNMZAFE6G2PP8H9P05UU/20230629/us-east-1/s3/aws4_request
2023-06-29T17:10:46.880+0700 7f26014b0700 10 access key id =
DNMZAFE6G2PP8H9P05UU
2023-06-29T17:10:46.880+0700 7f26014b0700 10 credential scope =
20230629/us-east-1/s3/aws4_request
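Those log lines show RGW parsing the SigV4 credential scope (date/region/service/aws4_request). That same scope feeds the standard AWS SigV4 signing-key derivation, so a mismatch in any scope field (e.g. region) also breaks the signature. A minimal stdlib sketch of the documented derivation; the secret below is illustrative, not a real key:

```python
import hashlib
import hmac

def sigv4_signing_key(secret_key: str, date: str, region: str, service: str) -> bytes:
    """Derive the AWS SigV4 signing key for a credential scope
    like 20230629/us-east-1/s3/aws4_request."""
    def _hmac(key: bytes, msg: str) -> bytes:
        return hmac.new(key, msg.encode("utf-8"), hashlib.sha256).digest()

    # Chained HMAC-SHA256 over the scope components, per the SigV4 spec.
    k_date = _hmac(("AWS4" + secret_key).encode("utf-8"), date)
    k_region = _hmac(k_date, region)
    k_service = _hmac(k_region, service)
    return _hmac(k_service, "aws4_request")

# Illustrative secret; the scope fields match the log above.
key = sigv4_signing_key("not-a-real-secret", "20230629", "us-east-1", "s3")
print(len(key))  # 32 (an HMAC-SHA256 digest)
```

Since both sides derive the key independently from the secret and the scope, the signature only matches when every scope component agrees.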
So basically it does not matter unless I want to have that split up.
Thanks for all the answers.
I am still lobbying to phase out SATA SSDs and replace them with NVMe
disks. :)
On Wed, 28 Jun 2023 at 18:14, Anthony D'Atri <
a...@dreamsnake.net> wrote:
> Even when you factor in density,