On Wed, Aug 14, 2024 at 8:23 Raphaël Ducom wrote:
>
> Hi
>
> I'm reaching out to check on the status of the XFS deadlock issue with RBD
> in hyperconverged environments, as detailed in Ceph tracker issue #43910 (
> https://tracker.ceph.com/issues/43910?tab=history). It looks like there
> hasn’t been much activity
On Tue, Jun 18, 2024 at 5:49 Laura Flores wrote:
>
> Need to update the OS Recommendations doc to reflect the latest supported
> distros
> - https://docs.ceph.com/en/latest/start/os-recommendations/#platforms
> - PR from Zac to be reviewed CLT: https://github.com/ceph/ceph/pull/58092
>
> arm64 CI check ready to be
Hi Ilya,
Hi Satoru,
>
> For rbd in particular, we try to be as compatible as possible -- we
> have actually rejected some improvements to structured output (--format
> json or --format xml) in the past to stay compatible. As long as you
> stick to structured output instead of parsing human-readab
from reviewing changelogs so far.
Best,
Satoru
>
> > On Jun 13, 2024, at 21:00, Satoru Takeuchi
> wrote:
> >
> > Hi,
> >
> > I'm developing some tools that execute ceph commands like rbd. During
> > development,
> > I have come to wonder about
Hi,
I'm developing some tools that execute ceph commands like rbd. During
development,
I have come to wonder about the compatibility of ceph commands.
I'd like to use ceph commands whose version is >= the version used by
the ceph daemons.
This results in executing new ceph commands against ceph clusters us
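For illustration, a minimal sketch of such a version check, assuming `jq`
is installed; the pool and image names are hypothetical:

  $ rbd --version                  # version of the client binary being executed
  $ ceph versions | jq '.overall'  # versions of the daemons in the cluster
  # Parse structured output instead of human-readable text:
  $ rbd info mypool/myimage --format json | jq -r '.size'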
On Thu, May 2, 2024 at 7:42 Satoru Takeuchi wrote:
>
> Hi Maged,
>
> On Thu, May 2, 2024 at 5:34 Maged Mokhtar wrote:
>>
>>
>> On 01/05/2024 16:12, Satoru Takeuchi wrote:
>> > I confirmed that incomplete data is left on `rbd import-diff` failure.
>> > I guess that this data is th
Hi Maged,
On Thu, May 2, 2024 at 5:34 Maged Mokhtar wrote:
>
> On 01/05/2024 16:12, Satoru Takeuchi wrote:
> > I confirmed that incomplete data is left on `rbd import-diff` failure.
> > I guess that this data is part of a snapshot. Could someone answer
> > me the following questions
I confirmed that incomplete data is left on `rbd import-diff` failure.
I guess that this data is part of a snapshot. Could someone answer
me the following questions?
Q1. Is it safe to use the RBD image (e.g. client I/O and snapshot
management) even though incomplete data exists?
Q2. Is there any
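For context, a minimal sketch of the export-diff/import-diff round trip
under discussion; pool, image, and snapshot names are hypothetical:

  $ rbd snap create src/img@snap2
  $ rbd export-diff --from-snap snap1 src/img@snap2 - | rbd import-diff - dst/img
  # If this pipe is interrupted partway, the partially applied diff is
  # the incomplete data asked about above.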
.3 release.
https://tracker.ceph.com/issues/65393
Best,
Satoru
> cc @Yuri Weinstein
>
> On Mon, Apr 8, 2024 at 5:39 PM Satoru Takeuchi
> wrote:
>
>> On Tue, Apr 9, 2024 at 0:43 Laura Flores wrote:
>>
>>> Hi all,
>>>
>>> Today we discussed:
On Tue, Apr 9, 2024 at 8:06 Laura Flores wrote:
> I've added them!
>
Thank you very much for your quick response!
>
> cc @Yuri Weinstein
>
> On Mon, Apr 8, 2024 at 5:39 PM Satoru Takeuchi
> wrote:
>
>> On Tue, Apr 9, 2024 at 0:43 Laura Flores wrote:
>>
>>> Hi a
On Tue, Apr 9, 2024 at 0:43 Laura Flores wrote:
> Hi all,
>
> Today we discussed:
>
> 2024/04/08
>
> - [Zac] CQ#4 is going out this week -
>   https://pad.ceph.com/p/ceph_quarterly_2024_04
> - Last chance to review!
> - [Zac] IcePic Initiative - context-sensitive help - do we regard the
>   do
Hi Ilya,
On Mon, Dec 18, 2023 at 9:14 Satoru Takeuchi wrote:
>
> Hi Ilya,
>
> > > Yes, it's possible. It's one of a workaround I thought. Then the
> > > backup data are as follows:
> > >
> > > a. The full backup taken at least 14 days ago.
Hi Ilya,
> > Yes, it's possible. It's one of a workaround I thought. Then the
> > backup data are as follows:
> >
> > a. The full backup taken at least 14 days ago.
> > b. The latest 14 days backup data
>
> I think it would be:
>
> a. A full backup (taken potentially months ago, exact age doesn't
Hi Ilya,
On Tue, Dec 12, 2023 at 21:23 Ilya Dryomov wrote:
> Not at the moment. Mykola has an old work-in-progress PR which extends
> "rbd import-diff" command to make this possible [1].
I didn't know about this PR. Thank you very much! I'll evaluate it later.
> Since you as
> a user expected "rbd merge-diff"
Hi,
I'm developing a backup system for RBD images. In my case, backup data
must be stored for at least two weeks. To meet this requirement, I'd like
to take backups as follows (see the sketch after this list):
1. Take a full backup with rbd export first.
2. Take differential backups every day.
3. Merge the full backup and the oldest (ta
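A minimal sketch of steps 1 and 2, with hypothetical pool, image, and
snapshot names; note that `rbd merge-diff` combines two incremental diffs,
and merging a full export with a diff (step 3) is exactly what this thread
discusses:

  $ rbd snap create mypool/img@base
  $ rbd export mypool/img@base /backup/full.img                          # step 1
  $ rbd snap create mypool/img@day1
  $ rbd export-diff --from-snap base mypool/img@day1 /backup/day1.diff   # step 2
  $ rbd snap create mypool/img@day2
  $ rbd export-diff --from-snap day1 mypool/img@day2 /backup/day2.diff
  # merge-diff only merges diff + diff, not full + diff:
  $ rbd merge-diff /backup/day1.diff /backup/day2.diff /backup/merged.diff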
Hi Yuri, (resending because I forgot to CC the mailing lists)
On Thu, Oct 5, 2023 at 5:57 Yuri Weinstein wrote:
Hello
We are getting very close to the next Quincy point release 17.2.7
Here is the list of must-have PRs https://pad.ceph.com/p/quincy_17.2.7_prs
We will start the release testing/review/approval proces
Hi Anthony,
> The docs aren't necessarily structured that way, i.e. there isn't a 17.2.6
> docs site as such. We try to document changes in behavior in sync with code,
> but don't currently have a process to ensure that a given docs build
> corresponds exactly to a given dot release. In fact
Hi,
I have a request about docs.ceph.com. Could you provide per-minor-version
views on docs.ceph.com? Currently, we can select the Ceph version
by using `https://docs.ceph.com/en/<version>/`. In this case, we can
use the major versions' code names (e.g., "quincy") or "latest".
However, we can't use minor ve
uring
> resharding?
>
I'm an admin, and the user of this versioned bucket is not me. So I'll ask
them whether stopping requests is acceptable.
Best,
Satoru
> Cory
> ------
> *From:* Satoru Takeuchi
> *Sent:* Wednesday, June 14, 2023 11:42 PM
Hi Cory,
> 1. PUT requests during reshard of versioned bucket fail with 404 and leave
>behind dark data
>
>Tracker: https://tracker.ceph.com/issues/61359
Could you tell me whether this problem can be bypassed by
suspending (disabling) bucket versioning?
I have some versioning buckets in my Ce
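For reference, a minimal sketch of suspending versioning through the S3
API, assuming the AWS CLI is configured against the RGW endpoint; the
endpoint and bucket name are hypothetical:

  $ aws --endpoint-url http://rgw.example.com s3api put-bucket-versioning \
        --bucket mybucket --versioning-configuration Status=Suspended
  $ aws --endpoint-url http://rgw.example.com s3api get-bucket-versioning \
        --bucket mybucket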
Hi,
tracker.ceph.com has seemed quite slow recently. Since my colleagues
feel the same, this problem wouldn't be specific to me.
Could you tell me if there is a plan to fix this problem in the near
future?
Thanks,
Satoru
Hi Mike,
I have two questions about Cephalocon 2023.
1. Will this event only be held as on-site (no virtual platform)?
2. Will the session recordings be available on YouTube, as with other Ceph events?
Thanks,
Satoru
Hi Zach,
On Thu, Jan 5, 2023 at 0:36 John Zachary Dover wrote:
>
> Do you use the header navigation bar on docs.ceph.com? See the attached
> file (sticky_header.png) if you are unsure of what "header navigation bar"
> means. In the attached file, the header navigation bar is indicated by
> means of two large, ug
On Wed, Nov 16, 2022 at 8:27 Daniel Brunner wrote:
>
> Hi,
>
> are my mails not getting through?
>
> is anyone receiving my emails?
Yes.
>
> best regards,
> daniel
Hi,
On Mon, Oct 24, 2022 at 11:22 Satoru Takeuchi wrote:
...
> Could you tell me how to fix this problem and what the `...rgw.opt` pool is.
I understood that the "...rgw.otp" pool is for MFA. In addition, I
consider this behavior a bug and opened a new issue.
pg autoscaler of rgw pools doesn
Hi Lo,
On Tue, Oct 25, 2022 at 18:01 Lo Re Giuseppe wrote:
>
> I have found the logs showing the progress module failure:
>
> debug 2022-10-25T05:06:08.877+ 7f40868e7700 0 [rbd_support INFO root]
> execute_trash_remove: task={"sequence": 150, "id":
> "fcc864a0-9bde-4512-9f84-be10976613db", "message": "Re
Hi,
The autoscale settings of some pools seem to be disabled in some of my clusters.
This problem seems to be caused by overlapping roots. My cluster is for RGW,
and there are two shadow trees: one for the index (SSD), and the other
for the data (HDD).
This overlapping is caused by the existence of
`c
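A minimal sketch of how this situation can be inspected; no
cluster-specific names are assumed:

  $ ceph osd pool autoscale-status     # pools under overlapping roots are skipped by the autoscaler
  $ ceph osd crush tree --show-shadow  # lists the shadow trees (e.g. ~ssd, ~hdd)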
Hi Junior,
>> >- a.3 How to run Teuthology in my local environment?
>>
>> At this point, we have the ability to run some tests locally using
>> teuthology, Junior (cc'ed here) did a presentation on this topic,
>> which was recorded here: https://www.youtube.com/watch?v=wZHcg0oVzhY.
>
> Thank y
ght this
> issue. We have minimal upgrade tests within the rados suite, but
> unfortunately, they weren't enough to catch this particular issue.
>
Thank you for the detailed explanation!
> Rest of the answers are inline.
>
> On Fri, Aug 26, 2022 at 2:52 AM Satoru Takeuch
il is in the following quoted description.
On Fri, Aug 19, 2022 at 15:39 Satoru Takeuchi wrote:
> Hi,
>
> As I described in another mail(*1), my development Ceph cluster was
> corrupted when using problematic binary.
> When I upgraded to v16.2.7 + some patches (*2) + PR#45963 patch,
> unfound p
Hi,
As I described in another mail (*1), my development Ceph cluster was
corrupted when using a problematic binary.
When I upgraded to v16.2.7 + some patches (*2) + the PR#45963 patch,
unfound PGs and inconsistent PGs appeared. In the end, I deleted this cluster.
pacific: bluestore: set upper and lowe
On Sat, Aug 13, 2022 at 1:35 Robert W. Eckert wrote:
> Interesting, a few weeks ago I added a new disk to each node of my 3-node
> cluster and saw the same 2 Mb/s recovery. What I had noticed was that
> one OSD was using very high CPU and seems to have been the primary node on
> the affected PGs. I couldn’t fin
Hi,
On Wed, Aug 10, 2022 at 7:00 David Galloway wrote:
>
> We're happy to announce the 17th and final backport release in the
> Octopus series. For a detailed release notes with links & changelog
> please refer to the official blog entry at
> https://ceph.io/en/news/blog/2022/v15-2-17-RELEASE-released
The link
Oops, I sent my question about why v17.2.2 was released with the mgr crash bug
only to David by mistake. I'm quoting this conversation because it would be
useful for other Ceph users.
On Sat, Jul 30, 2022 at 7:33 Satoru Takeuchi wrote:
> Hi David,
>
> On Sat, Jul 30, 2022 at 6:59 David Galloway wrote:
>
>> Hi Sat
On Mon, Jul 25, 2022 at 18:45 Sridhar Seshasayee wrote:
>
>
> On Mon, Jul 25, 2022 at 2:05 PM Satoru Takeuchi
> wrote:
>
>>
>> - Does this problem not exist in Pacific and older versions?
>>
> This problem does not exist in Pacific and prior versions. On Pacific, the
>
I'm trying to upgrade my Pacific cluster to Quincy and found this
thread. Let me confirm a few things (see the sketch after this list).
- Does this problem not exist in Pacific and older versions?
- Does this problem happen only if `osd_op_queue=mclock_scheduler`?
- Do all parameters written in the OPERATIONS section not work if
ru
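A minimal sketch of checking which scheduler the OSDs are using;
switching back to wpq is shown for illustration only:

  $ ceph config get osd osd_op_queue
  $ ceph config set osd osd_op_queue wpq  # takes effect after an OSD restart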
>
> Dominic L. Hilsbos, MBA
> Vice President – Information Technology
> Perform Air International Inc.
> dhils...@performair.com
> www.PerformAir.com
>
>
> From: Satoru Takeuchi [mailto:satoru.takeu...@gmail.com]
> Sent: Friday, August 20, 2021 2:48 PM
> To: Dom
Hi Dominic,
On Sat, Aug 21, 2021 at 1:05 :
> Satoru;
>
> You said " after restarting all nodes one by one." After each reboot, did
> you allow the cluster the time necessary to come back to a "HEALTH_OK"
> status?
>
No, we rebooted with the following policy.
1. Reboot one machine.
2. Wait until com
On Sat, Aug 21, 2021 at 0:25 Satoru Takeuchi wrote:
...
> # additional information
>
I forgot to write an important piece of information: all my data has three copies.
Regards,
Satoru
Hi,
I found an `active+recovery_unfound+undersized+degraded+remapped` PG
after restarting all nodes one by one. Could anyone give me some hints
as to why this problem happened and how to restore my data?
I read some documents and searched the Ceph issue tracker, but I
couldn't find enough information.
https://docs.
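A minimal sketch of how such a PG can be inspected; the PG id is
hypothetical, and mark_unfound_lost is destructive, shown only as a last
resort:

  $ ceph health detail
  $ ceph pg 2.4 query | jq '.recovery_state'
  $ ceph pg 2.4 list_unfound
  # Last resort; gives up on the unfound objects' latest writes:
  $ ceph pg 2.4 mark_unfound_lost revert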
Hi,
On Sun, Jul 4, 2021 at 0:17 changcheng.liu wrote:
>
> Hi all,
> I'm reading the ceph survey results:
> https://ceph.io/community/2021-ceph-user-survey-results.
> Do we have the data about which type of AsyncMessenger is used?
> TCP/RDMA/DPDK.
> What's the reason that RDMA & DPDK isn't often use
Today I visited Ceph's official site and found that links to the
`resources` page seem to be missing.
https://ceph.io/en/
In addition, this page no longer exists.
https://ceph.io/resources/
Could you tell me where they were moved?
Thanks,
Satoru
On Fri, May 21, 2021 at 12:26 Satoru Takeuchi wrote:
>
> On Tue, May 18, 2021 at 14:09 Satoru Takeuchi wrote:
>>
>> On Tue, May 18, 2021 at 9:23 Satoru Takeuchi wrote:
>> >
>> > Hi,
>> >
>> > I have a Ceph cluster used for RGW and RBD. I found that all I/Os to
>> > RGW seemed
On Tue, May 18, 2021 at 14:09 Satoru Takeuchi wrote:
> On Tue, May 18, 2021 at 9:23 Satoru Takeuchi wrote:
> >
> > Hi,
> >
> > I have a Ceph cluster used for RGW and RBD. I found that all I/Os to
> > RGW seemed to be
> > blocked while dynamic resharding. Could you tell me whethe
On Tue, May 18, 2021 at 9:23 Satoru Takeuchi wrote:
>
> Hi,
>
> I have a Ceph cluster used for RGW and RBD. I found that all I/Os to
> RGW seemed to be
> blocked while dynamic resharding. Could you tell me whether this
> behavior is by design or not?
>
> I attached a graph which mea
Hi,
I have a Ceph cluster used for RGW and RBD. I found that all I/O to
RGW seemed to be blocked during dynamic resharding. Could you tell me
whether this behavior is by design or not?
I attached a graph showing that I/O seemed to be blocked. Here the
x-axis is time and the y-axis is the number of RADOS o
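For reference, a minimal sketch of inspecting resharding activity and
driving it by hand (e.g. off-peak); the bucket name is hypothetical:

  $ radosgw-admin reshard list
  $ radosgw-admin reshard status --bucket mybucket
  $ radosgw-admin bucket reshard --bucket mybucket --num-shards 64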
On Tue, Dec 15, 2020 at 18:48 Eugen Block wrote:
>
> Hi,
>
> it's correct that both read and write I/O is paused when a pool's
> min_size is not met.
>
> Regards,
> Eugen
Thank you! I'll send a PR to fix the pool configuration document.
Regards,
Satoru
On Fri, Dec 4, 2020 at 9:53 Michael Thomas wrote:
>
> On 12/3/20 6:47 PM, Satoru Takeuchi wrote:
> > Hi,
> >
> > Could you tell me whether it's ok to remove device_health_metrics pool
> > after disabling device monitoring feature?
> >
> > I don't use
Hi,
Could you tell me whether it's OK to remove the device_health_metrics pool
after disabling the device monitoring feature?
I don't use the device monitoring feature because I capture hardware
information in another way.
However, after disabling this feature, the device_health_metrics pool
still exists.
I don't
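A minimal sketch of the steps in question; deleting a pool is irreversible
and requires mon_allow_pool_delete to be enabled:

  $ ceph device monitoring off
  $ ceph config set mon mon_allow_pool_delete true
  $ ceph osd pool rm device_health_metrics device_health_metrics \
        --yes-i-really-really-mean-it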
Hi,
Could you tell me whether read I/O is accepted when the number of replicas
is under the pool's min_size?
I read the official documentation and found that the effect of a pool's
min_size is described differently in the pools document and in the pool
configuration document.
Pools document:
https://docs
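A minimal sketch of checking and adjusting min_size; the pool name is
hypothetical. (Per Eugen's answer earlier in this digest, both read and
write I/O pause when min_size is not met.)

  $ ceph osd pool get mypool size
  $ ceph osd pool get mypool min_size
  $ ceph osd pool set mypool min_size 2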