> ...and stop the OSD daemons on the
> affected node until the issue was resolved.
>
> Regards,
> Eugen
>
> Zitat von mahnoosh shahidi:
https://docs.ceph.com/en/latest/rados/troubleshooting/troubleshooting-osd/#flapping-osds
>
> On Sat, Jan 6, 2024 at 9:28 PM mahnoosh shahidi wrote:
Hi all,
I hope this message finds you well. We recently encountered an issue on one
of our OSD servers, leading to network flapping and subsequently causing
significant performance degradation across our entire cluster. Although the
OSDs were correctly marked as down in the monitor, slow ops
Hi everyone,
In a Pacific cluster, deleting S3 objects finishes successfully, but the
deleted objects still appear in the bucket listing and can be deleted over
and over without any error. Apparently object deletion does not update the
bucket index. The behaviour happens randomly in different buckets.
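A quick way to confirm that the bucket index is stale (a diagnostic sketch; `mybucket` and `myobject` are placeholders for a bucket and an already-deleted object) is to compare the bucket index entry with the S3 view:

```shell
# If this still shows an entry for the object after a successful DELETE,
# the bucket index was not updated:
radosgw-admin bi list --bucket mybucket --object myobject

# The S3 view, for comparison (should return 404 after deletion):
aws s3api head-object --bucket mybucket --key myobject
```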
...the user metadata unnecessarily,
> which would race with your admin API requests.
>
> On Tue, Oct 24, 2023 at 9:56 AM mahnoosh shahidi wrote:
> >
> > Thanks Casey for your explanation,
> >
> > Yes it succeeded eventually. Sometimes after about 100 retries.
...when we detect a
> racing write. So it sounds like something else is modifying that user
> at the same time. Does it eventually succeed if you retry?
>
>
> On Tue, Oct 24, 2023 at 9:21 AM mahnoosh shahidi wrote:
Hi all,
I couldn't understand what status -125 means from the docs. I'm getting a
500 response status code when I call the RGW admin APIs, and the only log
in the RGW log files is as follows:
s3:get_obj recalculating target
initializing for trans_id =
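For what it's worth, -125 corresponds to -ECANCELED on Linux, which matches the racing-write explanation above; since those failures are transient, wrapping the call in a retry loop is a reasonable workaround. A sketch (the admin API call being retried is a placeholder, shown only in the usage comment):

```shell
# Retry a command with a short backoff; ECANCELED failures from racing
# writes are transient, so repeated attempts should eventually succeed.
with_retries() {
    local max_attempts=100 attempt=1
    until "$@"; do
        attempt=$((attempt + 1))
        if [ "$attempt" -gt "$max_attempts" ]; then
            echo "giving up after $max_attempts attempts" >&2
            return 1
        fi
        sleep 0.1
    done
}

# Usage (placeholder call):
# with_retries radosgw-admin user info --uid=someuser
```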
...the_file_system
>
> Simon
>
> On 07/08/2023 15:15, mahnoosh shahidi wrote:
Hi all,
I have an rbd image for which `rbd disk-usage` reports 31GB of usage, but
inside the filesystem `du` reports only 40KB of usage.
Does anyone know the reason for this difference?
Best Regards,
Mahnoosh
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io
Hi Boris,
You can list your rgw daemons with the following command:

ceph service dump -f json-pretty | jq '.services.rgw.daemons'

The following command extracts all their ids:

ceph service dump -f json-pretty | jq '.services.rgw.daemons' | egrep -e 'gid' -e '\"id\"'

Best Regards,
Mahnoosh
...placement targets from the user and zonegroup.
I just want to get the value that I had set in the create bucket request.
Best Regards,
Mahnoosh
On Mon, Jul 3, 2023 at 1:19 PM Konstantin Shalygin wrote:
> Hi,
>
> On 3 Jul 2023, at 12:23, mahnoosh shahidi wrote:
>
> So clients can no
...Konstantin Shalygin wrote:
> Hi,
>
> On 2 Jul 2023, at 17:17, mahnoosh shahidi wrote:
Hi all,
Is there any way for clients (without rgw-admin access) to get the
placement target of their S3 buckets? The "GetBucketLocation" API returns
"default" for all placement targets, and I couldn't find any other S3 API
for this purpose.
Can anyone help me with this?
Thanks in advance.
On Mon, Jun 12, 2023 at 9:35 AM Jonas Nemeiksis
wrote:
> Hi,
>
> The ceph daemon image build is deprecated. You can read here [1]
>
> [1] https://github.com/ceph/ceph-container/issues/2112
>
> On Sun, Jun 11, 2023 at 4:03 PM mahnoosh shahidi
Thanks for your response. I need the ceph daemon image. I forgot to mention
it in the first message.
Best Regards,
Mahnoosh
On Sun, Jun 11, 2023 at 4:22 PM 胡 玮文 wrote:
> It is available at quay.io/ceph/ceph:v16.2.13
>
> > On 11 Jun 2023, at 16:31, mahnoosh shahidi wrote:
>
Hi all,
It seems the latest Pacific image in the registry is 16.2.11. Is there any
plan to push the latest Pacific release (16.2.13) in the near future?
Best Regards,
Mahnoosh
...if the bucket names are really duplicated, please try running
> the following command:
>
> 'radosgw-admin bucket list --allow-unordered | jq -r ".[]" | sort | uniq
> -c | sort -h | tail'
>
> On Sat, Apr 15, 2023 at 7:20 PM Janne Johansson
> wrote:
>
Hi,
I observed duplicate object names in the output of an admin bucket list on
a 15.2.12 cluster. I used the following command, and some of the object
names appeared more than once in the result list. There is no versioning
config on the bucket.

radosgw-admin bucket list --allow-unordered
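The jq/sort/uniq pipeline suggested elsewhere in the thread boils down to the following; a sketch that assumes the listing has already been reduced to a plain list of names, one per line (the input below is illustrative, not real cluster output):

```shell
# Print only names that appear more than once, with their counts
# (input must be one name per line; sort feeds uniq -c properly):
count_duplicates() {
    sort | uniq -c | awk '$1 > 1'
}

# Illustrative input:
printf 'alpha\nbeta\nalpha\ngamma\nbeta\nalpha\n' | count_duplicates
```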
...in August, so it won't receive the fix. But it seems
> the next pacific and quincy point releases will have the fix, as will reef.
>
> Eric
> (he/him)
>
> On Feb 13, 2023, at 11:41 AM, mahnoosh shahidi wrote:
Hi all,
We have a cluster on 15.2.12. We are experiencing an unusual scenario in
S3. A user sends a PUT request to upload an object, and RGW returns 200 as
the response status code. The object has been uploaded and can be
downloaded, but it does not appear in the bucket list. We also tried to get the
Hi all,
Is there any way in rgw to move a bucket from one realm to another one in
the same cluster?
Best regards,
Mahnoosh
> try with object locator
>
> On Tue, Dec 27, 2022 at 8:13 PM mahnoosh shahidi wrote:
Hi Ceph users,
I have a running cluster on octopus 15.2.12. I found an object in one of my
S3 buckets that does not appear in the bucket list, but I can download it
with any client. I also tried to get the bucket index entry with ```radosgw-admin
bi list --bucket MYBUCKET --object MYOBJECT``` and it
national Inc.
> dhils...@performair.com
> www.PerformAir.com
>
>
> -Original Message-
> From: mahnoosh shahidi [mailto:mahnooosh@gmail.com]
> Sent: Tuesday, November 23, 2021 8:20 AM
> To: Josh Baergen
> Cc: Ceph Users
> Subject: [ceph-users] Re: have buckets w
Hi Josh,
Thanks for your response. Do you have any advice on how to reshard these big
buckets so it doesn't cause any downtime in our cluster? Resharding these
buckets makes a lot of slow ops in the delete-old-shards phase, and the
cluster can't respond to any requests until resharding is completely
Can anyone help me with these questions?
On Sun, Nov 21, 2021 at 11:23 AM mahnoosh shahidi wrote:
Hi,
Running a cluster on octopus 15.2.12. We have a big bucket with about 800M
objects, and resharding this bucket causes many slow ops on our bucket index
OSDs. I want to know what happens if I don't reshard this bucket any more.
How does it affect performance? The performance problem would be
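For reference, a manual reshard looks like the following (a sketch; the bucket name and shard count are placeholders, and note that resharding blocks writes to the bucket while it runs):

```shell
# Inspect the bucket's current usage and shard layout first:
radosgw-admin bucket stats --bucket=mybucket

# Reshard to a higher shard count (rule of thumb: roughly 100k objects
# per shard, so 800M objects would need a count in the thousands):
radosgw-admin bucket reshard --bucket=mybucket --num-shards=1024
```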
Hi,
We have a Ceph cluster with 3 mon nodes on octopus 15.2.12. Recently, one
of our monitor nodes randomly drops out of quorum and then rejoins. The
monitor's rocksdb compaction queue has 5 entries most of the time, and the
rocksdb submit sync latency is about 1 second. There isn't any problem
...do you have ScrubResult log messages in the ceph log? You can also check
> previous days and see how long the mon scrub is taking to complete. (Time
> from first to last entry)
>
> Cheers, Dan
>
>
>
> On Sun, Nov 7, 2021, 9:50 AM mahnoosh shahidi wrote:
>
>> Hi,
We still have this problem. Does anybody have any ideas about this?
On Mon, Aug 23, 2021 at 9:53 AM mahnoosh shahidi wrote:
Hi everyone,
We have a problem with octopus 15.2.12. OSDs randomly crash and restart
with the following traceback log.
-8> 2021-08-20T15:01:03.165+0430 7f2d10fd7700 10 monclient:
handle_auth_request added challenge on 0x55a3fc654400
-7> 2021-08-20T15:01:03.201+0430 7f2d02960700 2
...additional fast (SSD/NVMe)
> > drives for DB volumes? Or do their DBs reside on spinning drives only? If
> > the latter is true I would strongly encourage you to fix that by
> > adding respective fast disks - RocksDB tends to work badly when not
> > deployed on SSDs...
Hi Sven,
We had the same problem in our cluster. In our case it was the lifecycle
process, which runs at 00:00 every day; two hours after that, the garbage
collector runs to delete the expired objects. We figured this out by
monitoring the number of objects in RADOS and RGW. Hope it helps.
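The monitoring mentioned above can be done with standard tooling; a sketch (pool and bucket names are deployment-specific placeholders):

```shell
# Object counts per pool as seen by RADOS:
rados df

# Entries queued for the RGW garbage collector:
radosgw-admin gc list | head

# Per-bucket object counts as seen by RGW:
radosgw-admin bucket stats --bucket=mybucket
```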
>
> Thanks,
>
> Igor
>
> On 7/26/2021 1:28
...you might want to set bluefs_buffered_io to true for every OSD.
>
> It looks like it's false by default in v15.2.12.
>
>
> Thanks,
>
> Igor
>
> On 7/18/2021 11:19 PM, mahnoosh shahidi wrote:
We have a ceph cluster with 408 OSDs, 3 mons and 3 RGWs. We updated our
cluster from nautilus 14.2.14 to octopus 15.2.12 a few days ago. After
upgrading, the garbage collector process, which runs after the lifecycle
process, causes slow ops and makes some OSDs restart. In each
process the
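The bluefs_buffered_io suggestion above translates to the following (a sketch; OSDs may need a restart to pick the value up):

```shell
# Set the option for all OSDs via the central config store:
ceph config set osd bluefs_buffered_io true

# Verify what a given OSD is actually running with:
ceph config show osd.0 | grep bluefs_buffered_io
```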