[ceph-users] Re: [Ceph-announce] v18.2.4 Reef released

2024-07-26 Thread Travis Nielsen
Rook users are seeing OSDs fail on arm64 with v18.2.4. I would think it
also affects non-rook users.
Tracker opened: https://tracker.ceph.com/issues/67213

Thanks,
Travis

On Wed, Jul 24, 2024 at 3:13 PM Yuri Weinstein  wrote:

> We're happy to announce the 4th release in the Reef series.
>
> An early build of this release was accidentally exposed and packaged
> as 18.2.3 by the Debian project in April. That 18.2.3 release should
> not be used. The official release was re-tagged as v18.2.4 to avoid
> further confusion.
>
> v18.2.4 container images, now based on CentOS 9, may be incompatible
> on older kernels (e.g., Ubuntu 18.04) due to differences in thread
> creation methods. Users upgrading to v18.2.4 container images with older
> OS versions may encounter crashes during `pthread_create`. For
> workarounds, refer to the related tracker. However, we recommend
> upgrading your OS to avoid this unsupported combination.
> Related tracker: https://tracker.ceph.com/issues/66989
>
> We recommend all users update to this release.
> For detailed release notes with links & changelog please refer to the
> official blog entry at
> https://ceph.io/en/news/blog/2024/v18-2-4-reef-released/
>
> Notable Changes
> ---
> * RADOS: This release fixes a bug (https://tracker.ceph.com/issues/61948)
>   where pre-reef clients were allowed to connect to the `pg-upmap-primary`
>   (https://docs.ceph.com/en/reef/rados/operations/read-balancer/) interface
>   despite users having set `require-min-compat-client=reef`, leading to an
>   assert in the OSDs and mons. You are susceptible to this bug in reef
>   versions prior to 18.2.3 if 1) you are using an osdmap generated via the
>   offline osdmaptool with the `--read` option or 2) you have explicitly
>   generated pg-upmap-primary mappings with the CLI command. Please note
>   that the fix is minimal and does not address corner cases such as adding
>   a mapping in the middle of an upgrade or in a partially upgraded cluster
>   (related trackers linked in https://tracker.ceph.com/issues/61948).
>   As such, we recommend removing any existing pg-upmap-primary mappings
>   until remaining issues are addressed in future point releases.
>   See https://tracker.ceph.com/issues/61948#note-32 for instructions on
>   how to remove existing pg-upmap-primary mappings; a short command sketch
>   also follows this list.
> * RBD: When diffing against the beginning of time (`fromsnapname == NULL`)
>   in fast-diff mode (`whole_object == true` with `fast-diff` image feature
>   enabled and valid), diff-iterate is now guaranteed to execute locally if
>   exclusive lock is available. This brings a dramatic performance
>   improvement for QEMU live disk synchronization and backup use cases.
> * RADOS: The `get_pool_is_selfmanaged_snaps_mode` C++ API has been
>   deprecated due to being prone to false negative results. Its safer
>   replacement is `pool_is_in_selfmanaged_snaps_mode`.
> * RBD: The option ``--image-id`` has been added to the `rbd children` CLI
>   command, so it can be run for images in the trash.
>
> Related tracker: https://tracker.ceph.com/issues/65393
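For convenience, here is a rough sketch of the two CLI-facing items above. The
PG IDs, pool name, and image ID are placeholders, and the note-32 instructions
linked in the announcement remain the authoritative removal procedure.

```sh
# List any pg-upmap-primary entries currently recorded in the osdmap.
ceph osd dump | grep pg_upmap

# Remove each pg-upmap-primary mapping by its PG ID (placeholder IDs shown).
ceph osd rm-pg-upmap-primary 1.0
ceph osd rm-pg-upmap-primary 1.1

# New --image-id option: list clones of a parent image that sits in the
# trash, addressed by its image ID (placeholder pool and ID shown).
rbd trash ls rbd
rbd children --pool rbd --image-id 10743b27da8e
```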
>
> Getting Ceph
> 
> * Git at git://github.com/ceph/ceph.git
> * Tarball at https://download.ceph.com/tarballs/ceph_18.2.4.orig.tar.gz
> * Containers at https://quay.io/repository/ceph/ceph
> * For packages, see https://docs.ceph.com/en/latest/install/get-packages/
> * Release git sha1: e7ad5345525c7aa95470c26863873b581076945d
> ___
> Ceph-announce mailing list -- ceph-annou...@ceph.io
> To unsubscribe send an email to ceph-announce-le...@ceph.io
>
>
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] Re: reef 18.2.2 (hot-fix) QE validation status

2024-03-05 Thread Travis Nielsen
Looks great to me; Redo has tested this thoroughly.

Thanks!
Travis

On Tue, Mar 5, 2024 at 8:48 AM Yuri Weinstein  wrote:

> Details of this release are summarized here:
>
> https://tracker.ceph.com/issues/64721#note-1
> Release Notes - TBD
> LRC upgrade - TBD
>
> Seeking approvals/reviews for:
>
> smoke - in progress
> rados - Radek, Laura?
> quincy-x - in progress
>
> Also need approval from Travis, Redouane for Prometheus fix testing.
> ___
> ceph-users mailing list -- ceph-users@ceph.io
> To unsubscribe send an email to ceph-users-le...@ceph.io
>
>
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] Re: CLT Meeting minutes 2023-12-06

2023-12-06 Thread Travis Nielsen
On Wed, Dec 6, 2023 at 8:26 AM Laura Flores  wrote:

> Closing the loop (blocked waiting for Neha's input): how are we using
> Gibba on a day-to-day basis? Is it only used for checking reef point
> releases?
>
>- To be discussed again next week, as Neha had a conflict
>
> [Nizam] http://old.ceph.com/pgcalc is not working anymore; is there any
> replacement for the pgcalc page in the new ceph site?
>
>- Last attempt at a fix https://github.com/ceph/ceph.io/issues/265
>- Nizam will take a look; the PR needs some CSS work
>- We may not even need this link due to the autoscaler, etc.
>- Perhaps add some kind of "banner" to the pgcalc page to make it
>clear that users should look to the autoscaler
>
> Update on 18.2.1 bluestore issue?
>
>- Fix has been raised; also not as serious as initially suspected
>- Fix is in testing
>
Is this the final blocking issue? Is the release imminent, or perhaps
another week or two out? Depending on the timeline, we would prefer to
depend on it in Rook's next release. Thanks.



> --
>
> Laura Flores
>
> She/Her/Hers
>
> Software Engineer, Ceph Storage 
>
> Chicago, IL
>
> lflo...@ibm.com | lflo...@redhat.com 
> M: +17087388804
>
>
> ___
> Dev mailing list -- d...@ceph.io
> To unsubscribe send an email to dev-le...@ceph.io
>
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] Re: Rook-Ceph OSD Deployment Error

2023-11-27 Thread Travis Nielsen
Sounds like you're hitting a known issue with v17.2.7.
https://github.com/rook/rook/issues/13136

The fix will be in v18.2.1 if it's an option to upgrade to Reef. If not,
you'll need to use v17.2.6 until the fix comes out for quincy in v17.2.8.
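If pinning Quincy is the route you take, a minimal sketch of how that is
usually expressed in the Rook CephCluster CR follows; the cluster name and
namespace below are the Rook defaults and are assumptions about your
deployment.

```sh
# Pin the Ceph container image to v17.2.6 until the quincy fix ships in
# v17.2.8 (cluster name and namespace are assumptions; adjust as needed).
kubectl -n rook-ceph patch cephcluster rook-ceph --type merge \
  -p '{"spec":{"cephVersion":{"image":"quay.io/ceph/ceph:v17.2.6"}}}'
```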

Travis

On Thu, Nov 23, 2023 at 4:06 PM P Wagner-Beccard <
wagner-kerschbau...@schaffroth.eu> wrote:

> Hi Mailing-Lister's,
>
> I am reaching out for assistance regarding a deployment issue I am facing
> with Ceph on a 4-node RKE2 cluster. We are attempting to deploy Ceph via
> the rook helm chart, but we are encountering an issue that appears to be
> related to a known bug (https://tracker.ceph.com/issues/61597).
>
> During the OSD preparation phase, the deployment consistently fails with an
> IndexError: list index out of range. The logs indicate the problem occurs
> when configuring new disks, specifically when using /dev/dm-3 as a metadata
> device. It's important to note that /dev/dm-3 is an LVM volume on top of an
> mdadm RAID, which might or might not be contributing to this issue. (I
> swear, this setup worked before.)
>
> Here is a snippet of the error from the deployment logs:
> > 2023-11-23 23:11:30.196913 D | exec: IndexError: list index out of range
> > 2023-11-23 23:11:30.236962 C | rookcmd: failed to configure devices:
> failed to initialize osd: failed ceph-volume report: exit status 1
> https://paste.openstack.org/show/bileqRFKbolrBlTqszmC/
>
> We have attempted different configurations, including specifying devices
> explicitly and using the useAllDevices: true option with a specified
> metadata device (/dev/dm-3 or the /dev/pv_md0/lv_md0 path). However, the
> issue persists across multiple configurations.
>
> The tested configurations are as follows:
>
> Explicit device specification:
>
> ```yaml
> nodes:
>   - name: "ceph01.maas"
> devices:
>   - name: /dev/dm-1
>   - name: /dev/dm-2
>   - name: "sdb"
> config:
>   metadataDevice: "/dev/dm-3"
>   - name: "sdc"
> config:
>   metadataDevice: "/dev/dm-3"
> ```
>
> General device specification with metadata device:
> ```yaml
> storage:
>   useAllNodes: true
>   useAllDevices: true
>   config:
> metadataDevice: /dev/dm-3
> ```
>
> I would greatly appreciate any insights or recommendations on how to
> proceed or work around this issue.
> Is there a halfway decent way to apply the fix or maybe a workaround that
> we can apply to successfully deploy Ceph in our environment?
>
> Kind regards,
> ___
> ceph-users mailing list -- ceph-users@ceph.io
> To unsubscribe send an email to ceph-users-le...@ceph.io
>
>
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] Re: reef 18.2.1 QE Validation status

2023-11-16 Thread Travis Nielsen
Rook already ran the tests against Guillaume's change directly, and it
looks good to us. I don't see a new latest-reef-devel image tag yet, but
will plan on rerunning the tests when that tag is updated.

Thanks,
Travis

On Thu, Nov 16, 2023 at 8:27 AM Adam King  wrote:

> Guillaume ran that patch through the orch suite earlier today before
> merging. I think we should be okay on that front. The issue it's fixing was
> also particular to rook iirc, which teuthology doesn't cover.
>
> On Thu, Nov 16, 2023 at 10:18 AM Yuri Weinstein 
> wrote:
>
>> OK I will start building.
>>
>> Travis, Adam King - any need to rerun any suites?
>>
>> On Thu, Nov 16, 2023 at 7:14 AM Guillaume Abrioux 
>> wrote:
>> >
>> > Hi Yuri,
>> >
>> >
>> >
>> > Backport PR [2] for reef has been merged.
>> >
>> >
>> >
>> > Thanks,
>> >
>> >
>> >
>> > [2] https://github.com/ceph/ceph/pull/54514/files
>> >
>> >
>> >
>> > --
>> >
>> > Guillaume Abrioux
>> >
>> > Software Engineer
>> >
>> >
>> >
>> > From: Guillaume Abrioux 
>> > Date: Wednesday, 15 November 2023 at 21:02
>> > To: Yuri Weinstein , Nizamudeen A ,
>> Guillaume Abrioux , Travis Nielsen <
>> tniel...@redhat.com>
>> > Cc: Adam King , Redouane Kachach <
>> rkach...@redhat.com>, dev , ceph-users 
>> > Subject: Re: [EXTERNAL] [ceph-users] Re: reef 18.2.1 QE Validation
>> status
>> >
>> > Hi Yuri, (thanks)
>> >
>> >
>> >
>> > Indeed, we had a regression in ceph-volume impacting rook scenarios
>> which was supposed to be fixed by [1].
>> >
>> > It turns out rook's CI didn't catch that fix wasn't enough for some
>> reason (I believe the CI run wasn't using the right image, Travis might
>> confirm or give more details).
>> >
>> > Another patch [2] is needed in order to fix this regression.
>> >
>> >
>> >
>> > Let me know if more details are needed.
>> >
>> >
>> >
>> > Thanks,
>> >
>> >
>> >
>> > [1]
>> https://github.com/ceph/ceph/pull/54429/commits/ee26074a5e7e90b4026659bf3adb1bc973595e91
>> >
>> > [2] https://github.com/ceph/ceph/pull/54514/files
>> >
>> >
>> >
>> >
>> >
>> > --
>> >
>> > Guillaume Abrioux
>> >
>> > Software Engineer
>> >
>> >
>> >
>> > 
>> >
>> > From: Yuri Weinstein 
>> > Sent: 15 November 2023 20:23
>> > To: Nizamudeen A ; Guillaume Abrioux <
>> gabri...@redhat.com>; Travis Nielsen 
>> > Cc: Adam King ; Redouane Kachach <
>> rkach...@redhat.com>; dev ; ceph-users 
>> > Subject: [EXTERNAL] [ceph-users] Re: reef 18.2.1 QE Validation status
>> >
>> >
>> >
>> > This is on behalf of Guillaume.
>> >
>> > We have one more last-minute issue that may have to be included
>> > https://tracker.ceph.com/issues/63545
>> https://github.com/ceph/ceph/pull/54514
>> >
>> > Travis, Redo, Guillaume will provide more context and details.
>> >
>> > We are assessing the situation as 18.2.1 has been built and signed.
>> >
>> > On Tue, Nov 14, 2023 at 11:07 AM Yuri Weinstein 
>> wrote:
>> > >
>> > > OK thx!
>> > >
>> > > We have completed the approvals.
>> > >
>> > > On Tue, Nov 14, 2023 at 9:13 AM Nizamudeen A  wrote:
>> > > >
>> > > > dashboard approved. Failure known and unrelated!
>> > > >
>> > > > On Tue, Nov 14, 2023, 22:34 Adam King  wrote:
>> > > >>
>> > > >> orch approved.  After reruns, orch/cephadm was just hitting two
>> known (nonblocker) issues and orch/rook teuthology suite is known to not be
>> functional currently.
>> > > >>
>> > > >> On Tue, Nov 14, 2023 at 10:33 AM Yuri Weinstein <
>> ywein...@redhat.com> wrote:
>> > > >>>
>> > > >>> Build 4 with https://github.com/ceph/ceph/pull/54224  was built
>> and I
>> > > >>> ran the tests below and asking for approvals:
>> > > >>>
>> > > >>> smoke - Laura
>> > > >>> rados/mgr - PASSED
>>

[ceph-users] Re: reef 18.2.1 QE Validation status

2023-11-15 Thread Travis Nielsen
The tests were re-run <https://github.com/rook/rook/pull/13215> with
Guillaume's changes and are passing now!

Thanks,
Travis

On Wed, Nov 15, 2023 at 1:19 PM Yuri Weinstein  wrote:

> Sounds like it's a must to be added.
>
> When the reef backport PR can be merged?
>
> On Wed, Nov 15, 2023 at 12:13 PM Travis Nielsen 
> wrote:
> >
> > Thanks Guillaume and Redo for tracking down this issue. After talking
> more with Guillaume I now realized that not all the tests were using the
> expected latest-reef-devel label so Rook tests were incorrectly showing
> green for Reef. :(
> > Now that I ran the tests again in the test PR with all tests using the
> latest-reef-devel label, all the tests with OSDs on PVCs are failing that
> use ceph-volume raw mode. So this is a blocker for Rook scenarios; we
> really need this fix to avoid breaking OSDs.
> >
> > Thanks,
> > Travis
> >
> > On Wed, Nov 15, 2023 at 1:03 PM Guillaume Abrioux 
> wrote:
> >>
> >> Hi Yuri, (thanks)
> >>
> >> Indeed, we had a regression in ceph-volume impacting rook scenarios
> which was supposed to be fixed by [1].
> >> It turns out rook's CI didn't catch that fix wasn't enough for some
> reason (I believe the CI run wasn't using the right image, Travis might
> confirm or give more details).
> >> Another patch [2] is needed in order to fix this regression.
> >>
> >> Let me know if more details are needed.
> >>
> >> Thanks,
> >>
> >> [1]
> https://github.com/ceph/ceph/pull/54429/commits/ee26074a5e7e90b4026659bf3adb1bc973595e91
> >> [2] https://github.com/ceph/ceph/pull/54514/files
> >>
> >>
> >> --
> >> Guillaume Abrioux
> >> Software Engineer
> >>
> >> 
> >> From: Yuri Weinstein 
> >> Sent: 15 November 2023 20:23
> >> To: Nizamudeen A ; Guillaume Abrioux <
> gabri...@redhat.com>; Travis Nielsen 
> >> Cc: Adam King ; Redouane Kachach <
> rkach...@redhat.com>; dev ; ceph-users 
> >> Subject: [EXTERNAL] [ceph-users] Re: reef 18.2.1 QE Validation status
> >>
> >> This is on behalf of Guillaume.
> >>
> >> We have one more last-minute issue that may have to be included
> >> https://tracker.ceph.com/issues/63545
> https://github.com/ceph/ceph/pull/54514
> >>
> >> Travis, Redo, Guillaume will provide more context and details.
> >>
> >> We are assessing the situation as 18.2.1 has been built and signed.
> >>
> >> On Tue, Nov 14, 2023 at 11:07 AM Yuri Weinstein 
> wrote:
> >> >
> >> > OK thx!
> >> >
> >> > We have completed the approvals.
> >> >
> >> > On Tue, Nov 14, 2023 at 9:13 AM Nizamudeen A  wrote:
> >> > >
> >> > > dashboard approved. Failure known and unrelated!
> >> > >
> >> > > On Tue, Nov 14, 2023, 22:34 Adam King  wrote:
> >> > >>
> >> > >> orch approved.  After reruns, orch/cephadm was just hitting two
> known (nonblocker) issues and orch/rook teuthology suite is known to not be
> functional currently.
> >> > >>
> >> > >> On Tue, Nov 14, 2023 at 10:33 AM Yuri Weinstein <
> ywein...@redhat.com> wrote:
> >> > >>>
> >> > >>> Build 4 with https://github.com/ceph/ceph/pull/54224  was built
> and I
> >> > >>> ran the tests below and asking for approvals:
> >> > >>>
> >> > >>> smoke - Laura
> >> > >>> rados/mgr - PASSED
> >> > >>> rados/dashboard - Nizamudeen
> >> > >>> orch - Adam King
> >> > >>>
> >> > >>> See Build 4 runs - https://tracker.ceph.com/issues/63443#note-1
> >> > >>>
> >> > >>> On Tue, Nov 14, 2023 at 12:21 AM Redouane Kachach <
> rkach...@redhat.com> wrote:
> >> > >>> >
> >> > >>> > Yes, cephadm has some tests for monitoring that should be
> enough to ensure basic functionality is working properly. The rest of the
> changes in the PR are for rook orchestrator.
> >> > >>> >
> >> > >>> > On Tue, Nov 14, 2023 at 5:04 AM Nizamudeen A 
> wrote:
> >> > >>> >>
> >> > >>> >> dashboard changes are minimal and approved. and since the
> dashboard change is related to the
> >> > >>> >> m

[ceph-users] Re: reef 18.2.1 QE Validation status

2023-11-15 Thread Travis Nielsen
Thanks Guillaume and Redo for tracking down this issue. After talking more
with Guillaume I now realized that not all the tests were using the
expected latest-reef-devel label so Rook tests were incorrectly showing
green for Reef. :(
Now that I ran the tests again in the test PR
<https://github.com/rook/rook/pull/13203> with all tests using the
latest-reef-devel label, all the tests with OSDs on PVCs are failing that
use ceph-volume raw mode. So this is a blocker for Rook scenarios; we
really need this fix to avoid breaking OSDs.

Thanks,
Travis

On Wed, Nov 15, 2023 at 1:03 PM Guillaume Abrioux  wrote:

> Hi Yuri, (thanks)
>
> Indeed, we had a regression in ceph-volume impacting rook scenarios which
> was supposed to be fixed by [1].
> It turns out rook's CI didn't catch that fix wasn't enough for some reason
> (I believe the CI run wasn't using the right image, Travis might confirm or
> give more details).
> Another patch [2] is needed in order to fix this regression.
>
> Let me know if more details are needed.
>
> Thanks,
>
> [1]
> https://github.com/ceph/ceph/pull/54429/commits/ee26074a5e7e90b4026659bf3adb1bc973595e91
> [2] https://github.com/ceph/ceph/pull/54514/files
>
>
> --
> Guillaume Abrioux
> Software Engineer
>
> --
> *From:* Yuri Weinstein 
> *Sent:* 15 November 2023 20:23
> *To:* Nizamudeen A ; Guillaume Abrioux <
> gabri...@redhat.com>; Travis Nielsen 
> *Cc:* Adam King ; Redouane Kachach ;
> dev ; ceph-users 
> *Subject:* [EXTERNAL] [ceph-users] Re: reef 18.2.1 QE Validation status
>
> This is on behalf of Guillaume.
>
> We have one more last-minute issue that may have to be included
> https://tracker.ceph.com/issues/63545
> https://github.com/ceph/ceph/pull/54514
>
> Travis, Redo, Guillaume will provide more context and details.
>
> We are assessing the situation as 18.2.1 has been built and signed.
>
> On Tue, Nov 14, 2023 at 11:07 AM Yuri Weinstein 
> wrote:
> >
> > OK thx!
> >
> > We have completed the approvals.
> >
> > On Tue, Nov 14, 2023 at 9:13 AM Nizamudeen A  wrote:
> > >
> > > dashboard approved. Failure known and unrelated!
> > >
> > > On Tue, Nov 14, 2023, 22:34 Adam King  wrote:
> > >>
> > >> orch approved.  After reruns, orch/cephadm was just hitting two known
> (nonblocker) issues and orch/rook teuthology suite is known to not be
> functional currently.
> > >>
> > >> On Tue, Nov 14, 2023 at 10:33 AM Yuri Weinstein 
> wrote:
> > >>>
> > >>> Build 4 with https://github.com/ceph/ceph/pull/54224  was built and
> I
> > >>> ran the tests below and asking for approvals:
> > >>>
> > >>> smoke - Laura
> > >>> rados/mgr - PASSED
> > >>> rados/dashboard - Nizamudeen
> > >>> orch - Adam King
> > >>>
> > >>> See Build 4 runs - https://tracker.ceph.com/issues/63443#note-1
> > >>>
> > >>> On Tue, Nov 14, 2023 at 12:21 AM Redouane Kachach <
> rkach...@redhat.com> wrote:
> > >>> >
> > >>> > Yes, cephadm has some tests for monitoring that should be enough
> to ensure basic functionality is working properly. The rest of the changes
> in the PR are for rook orchestrator.
> > >>> >
> > >>> > On Tue, Nov 14, 2023 at 5:04 AM Nizamudeen A 
> wrote:
> > >>> >>
> > >>> >> dashboard changes are minimal and approved. and since the
> dashboard change is related to the
> > >>> >> monitoring stack (prometheus..) which is something not covered in
> the dashboard test suites, I don't think running it is necessary.
> > >>> >> But maybe the cephadm suite has some monitoring stack related
> testings written?
> > >>> >>
> > >>> >> On Tue, Nov 14, 2023 at 1:10 AM Yuri Weinstein <
> ywein...@redhat.com> wrote:
> > >>> >>>
> > >>> >>> Ack Travis.
> > >>> >>>
> > >>> >>> Since it touches a dashboard, Nizam - please reply/approve.
> > >>> >>>
> > >>> >>> I assume that rados/dashboard tests will be sufficient, but
> expecting
> > >>> >>> your recommendations.
> > >>> >>>
> > >>> >>> This addition will make the final release likely to be pushed.
> > >>> >>>
> > >>> >>> On Mon, Nov 13, 2023 at 11:30 AM Travis Nielsen <
&g

[ceph-users] Re: reef 18.2.1 QE Validation status

2023-11-13 Thread Travis Nielsen
I'd like to see these changes for much improved dashboard integration with
Rook. The changes are to the rook mgr orchestrator module, and supporting
test changes. Thus, this should be very low risk to the ceph release. I
don't know the details of the teuthology suites, but I would think only
suites involving the mgr modules would be necessary.

Travis

On Mon, Nov 13, 2023 at 12:14 PM Yuri Weinstein  wrote:

> Redouane
>
> What would be a sufficient level of testing (teuthology suite(s))
> assuming this PR is approved to be added?
>
> On Mon, Nov 13, 2023 at 9:13 AM Redouane Kachach 
> wrote:
> >
> > Hi Yuri,
> >
> > I've just backported to reef several fixes that I introduced in the last
> months for the rook orchestrator. Most of them are fixes for dashboard
> issues/crashes that only happen on Rook environments. The PR [1] has all
> the changes and it was merged into reef this morning. We really need these
> changes to be part of the next reef release as the upcoming Rook stable
> version will be based on it.
> >
> > Please, can you include those changes in the upcoming reef 18.2.1
> release?
> >
> > [1] https://github.com/ceph/ceph/pull/54224
> >
> > Thanks a lot,
> > Redouane.
> >
> >
> > On Mon, Nov 13, 2023 at 6:03 PM Yuri Weinstein 
> wrote:
> >>
> >> -- Forwarded message -
> >> From: Venky Shankar 
> >> Date: Thu, Nov 9, 2023 at 11:52 PM
> >> Subject: Re: [ceph-users] Re: reef 18.2.1 QE Validation status
> >> To: Yuri Weinstein 
> >> Cc: dev , ceph-users 
> >>
> >>
> >> Hi Yuri,
> >>
> >> On Fri, Nov 10, 2023 at 4:55 AM Yuri Weinstein 
> wrote:
> >> >
> >> > I've updated all approvals and merged PRs in the tracker and it looks
> >> > like we are ready for gibba, LRC upgrades pending approval/update from
> >> > Venky.
> >>
> >> The smoke test failure is caused by missing (kclient) patches in
> >> Ubuntu 20.04 that certain parts of the fs suite (via smoke tests) rely
> >> on. More details here
> >>
> >> https://tracker.ceph.com/issues/63488#note-8
> >>
> >> The kclient tests in smoke pass with other distro's and the fs suite
> >> tests have been reviewed and look good. Run details are here
> >>
> >> https://tracker.ceph.com/projects/cephfs/wiki/Reef#07-Nov-2023
> >>
> >> The smoke failure is noted as a known issue for now. Consider this run
> >> as "fs approved".
> >>
> >> >
> >> > On Thu, Nov 9, 2023 at 1:31 PM Radoslaw Zarzynski <
> rzarz...@redhat.com> wrote:
> >> > >
> >> > > rados approved!
> >> > >
> >> > > Details are here:
> https://tracker.ceph.com/projects/rados/wiki/REEF#1821-Review.
> >> > >
> >> > > On Mon, Nov 6, 2023 at 10:33 PM Yuri Weinstein 
> wrote:
> >> > > >
> >> > > > Details of this release are summarized here:
> >> > > >
> >> > > > https://tracker.ceph.com/issues/63443#note-1
> >> > > >
> >> > > > Seeking approvals/reviews for:
> >> > > >
> >> > > > smoke - Laura, Radek, Prashant, Venky (POOL_APP_NOT_ENABLE
> failures)
> >> > > > rados - Neha, Radek, Travis, Ernesto, Adam King
> >> > > > rgw - Casey
> >> > > > fs - Venky
> >> > > > orch - Adam King
> >> > > > rbd - Ilya
> >> > > > krbd - Ilya
> >> > > > upgrade/quincy-x (reef) - Laura PTL
> >> > > > powercycle - Brad
> >> > > > perf-basic - Laura, Prashant (POOL_APP_NOT_ENABLE failures)
> >> > > >
> >> > > > Please reply to this email with approval and/or trackers of known
> >> > > > issues/PRs to address them.
> >> > > >
> >> > > > TIA
> >> > > > YuriW
> >> > > > ___
> >> > > > Dev mailing list -- d...@ceph.io
> >> > > > To unsubscribe send an email to dev-le...@ceph.io
> >> > > >
> >> > >
> >> > ___
> >> > ceph-users mailing list -- ceph-users@ceph.io
> >> > To unsubscribe send an email to ceph-users-le...@ceph.io
> >>
> >>
> >>
> >> --
> >> Cheers,
> >> Venky
> >>
> ___
> ceph-users mailing list -- ceph-users@ceph.io
> To unsubscribe send an email to ceph-users-le...@ceph.io
>
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] Re: reef 18.2.1 QE Validation status

2023-11-08 Thread Travis Nielsen
Yuri, we need to add this issue as a blocker for 18.2.1. We discovered this
issue after the release of 17.2.7, and don't want to hit the same blocker
in 18.2.1 where some types of OSDs are failing to be created in new
clusters, or failing to start in upgraded clusters.
https://tracker.ceph.com/issues/63391

Thanks!
Travis

On Wed, Nov 8, 2023 at 4:41 AM Venky Shankar  wrote:

> Hi Yuri,
>
> On Wed, Nov 8, 2023 at 2:32 AM Yuri Weinstein  wrote:
> >
> > 3 PRs above mentioned were merged and I am returning some tests:
> > https://pulpito.ceph.com/?sha1=55e3239498650453ff76a9b06a37f1a6f488c8fd
> >
> > Still seeing approvals.
> > smoke - Laura, Radek, Prashant, Venky in progress
> > rados - Neha, Radek, Travis, Ernesto, Adam King
> > rgw - Casey in progress
> > fs - Venky
>
> There's a failure in the fs suite
>
>
> https://pulpito.ceph.com/vshankar-2023-11-07_05:14:36-fs-reef-release-distro-default-smithi/7450325/
>
> Seems to be related to nfs-ganesha. I've reached out to Frank Filz
> (#cephfs on ceph slack) to have a look. WIll update as soon as
> possible.
>
> > orch - Adam King
> > rbd - Ilya approved
> > krbd - Ilya approved
> > upgrade/quincy-x (reef) - Laura PTL
> > powercycle - Brad
> > perf-basic - in progress
> >
> >
> > On Tue, Nov 7, 2023 at 8:38 AM Casey Bodley  wrote:
> > >
> > > On Mon, Nov 6, 2023 at 4:31 PM Yuri Weinstein 
> wrote:
> > > >
> > > > Details of this release are summarized here:
> > > >
> > > > https://tracker.ceph.com/issues/63443#note-1
> > > >
> > > > Seeking approvals/reviews for:
> > > >
> > > > smoke - Laura, Radek, Prashant, Venky (POOL_APP_NOT_ENABLE failures)
> > > > rados - Neha, Radek, Travis, Ernesto, Adam King
> > > > rgw - Casey
> > >
> > > rgw results are approved. https://github.com/ceph/ceph/pull/54371
> > > merged to reef but is needed on reef-release
> > >
> > > > fs - Venky
> > > > orch - Adam King
> > > > rbd - Ilya
> > > > krbd - Ilya
> > > > upgrade/quincy-x (reef) - Laura PTL
> > > > powercycle - Brad
> > > > perf-basic - Laura, Prashant (POOL_APP_NOT_ENABLE failures)
> > > >
> > > > Please reply to this email with approval and/or trackers of known
> > > > issues/PRs to address them.
> > > >
> > > > TIA
> > > > YuriW
> > > > ___
> > > > ceph-users mailing list -- ceph-users@ceph.io
> > > > To unsubscribe send an email to ceph-users-le...@ceph.io
> > > >
> > >
> > ___
> > ceph-users mailing list -- ceph-users@ceph.io
> > To unsubscribe send an email to ceph-users-le...@ceph.io
>
>
>
> --
> Cheers,
> Venky
> ___
> Dev mailing list -- d...@ceph.io
> To unsubscribe send an email to dev-le...@ceph.io
>
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] Re: After upgrading from 17.2.6 to 18.2.0, OSDs are very frequently restarting due to livenessprobe failures

2023-09-21 Thread Travis Nielsen
If there is nothing obvious in the OSD logs such as failing to start, and
if the OSDs appear to be running until the liveness probe restarts them,
you could disable or change the timeouts on the liveness probe. See
https://rook.io/docs/rook/latest/CRDs/Cluster/ceph-cluster-crd/#health-settings
.
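As a rough illustration only (the CephCluster name and namespace below are the
Rook defaults and are assumptions; the doc above has the full schema), the OSD
probe can be relaxed or disabled through the healthCheck settings:

```sh
# Relax the OSD liveness probe timings (values are placeholders).
kubectl -n rook-ceph patch cephcluster rook-ceph --type merge \
  -p '{"spec":{"healthCheck":{"livenessProbe":{"osd":{"probe":{"timeoutSeconds":10,"failureThreshold":6}}}}}}'

# Or disable the OSD liveness probe entirely while investigating.
kubectl -n rook-ceph patch cephcluster rook-ceph --type merge \
  -p '{"spec":{"healthCheck":{"livenessProbe":{"osd":{"disabled":true}}}}}'
```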

But of course, we need to understand if there is some issue with the OSDs.
Please open a Rook issue if it appears related to the liveness probe.

Travis

On Thu, Sep 21, 2023 at 3:12 AM Igor Fedotov  wrote:

> Hi!
>
> Can you share OSD logs demostrating such a restart?
>
>
> Thanks,
>
> Igor
>
> On 20/09/2023 20:16, sbeng...@gmail.com wrote:
> > Since upgrading to 18.2.0, OSDs are very frequently restarting due to
> liveness probe failures, making the cluster unusable. Has anyone else seen
> this behavior?
> >
> > Upgrade path: ceph 17.2.6 to 18.2.0 (and rook from 1.11.9 to 1.12.1)
> > on ubuntu 20.04 kernel 5.15.0-79-generic
> >
> > Thanks.
> > ___
> > ceph-users mailing list -- ceph-users@ceph.io
> > To unsubscribe send an email to ceph-users-le...@ceph.io
> ___
> ceph-users mailing list -- ceph-users@ceph.io
> To unsubscribe send an email to ceph-users-le...@ceph.io
>
>
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] Re: 16.2.14 pacific QE validation status

2023-08-25 Thread Travis Nielsen
Approved for rook.

For future approvals, Blaine or I could approve, as Seb is on another
project now.

Thanks,
Travis

On Fri, Aug 25, 2023 at 7:06 AM Venky Shankar  wrote:

> On Fri, Aug 25, 2023 at 7:17 AM Patrick Donnelly 
> wrote:
> >
> > On Wed, Aug 23, 2023 at 10:41 AM Yuri Weinstein 
> wrote:
> > >
> > > Details of this release are summarized here:
> > >
> > > https://tracker.ceph.com/issues/62527#note-1
> > > Release Notes - TBD
> > >
> > > Seeking approvals for:
> > >
> > > smoke - Venky
> > > rados - Radek, Laura
> > >   rook - Sébastien Han
> > >   cephadm - Adam K
> > >   dashboard - Ernesto
> > >
> > > rgw - Casey
> > > rbd - Ilya
> > > krbd - Ilya
> > > fs - Venky, Patrick
> >
> > approved
> >
> > https://tracker.ceph.com/projects/cephfs/wiki/Pacific#2023-August-22
>
> You beat me to this. Thanks, Patrick.
>
> >
> >
> > --
> > Patrick Donnelly, Ph.D.
> > He / Him / His
> > Red Hat Partner Engineer
> > IBM, Inc.
> > GPG: 19F28A586F808C2402351B93C3301A3E258DD79D
> > ___
> > ceph-users mailing list -- ceph-users@ceph.io
> > To unsubscribe send an email to ceph-users-le...@ceph.io
>
>
>
> --
> Cheers,
> Venky
> ___
> Dev mailing list -- d...@ceph.io
> To unsubscribe send an email to dev-le...@ceph.io
>
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] Re: ref v18.2.0 QE Validation status

2023-07-31 Thread Travis Nielsen
Approved for Rook. Rook tests are passing against the latest Reef.

Thanks,
Travis

On Sun, Jul 30, 2023 at 9:46 AM Yuri Weinstein  wrote:

> Details of this release are summarized here:
>
> https://tracker.ceph.com/issues/62231#note-1
>
> Seeking approvals/reviews for:
>
> smoke - Laura, Radek
> rados - Neha, Radek, Travis, Ernesto, Adam King
> rgw - Casey
> fs - Venky
> orch - Adam King
> rbd - Ilya
> krbd - Ilya
> upgrade-clients:client-upgrade* - in progress
> powercycle - Brad
>
> Please reply to this email with approval and/or trackers of known
> issues/PRs to address them.
>
> bookworm distro support is an outstanding issue.
>
> TIA
> YuriW
> ___
> ceph-users mailing list -- ceph-users@ceph.io
> To unsubscribe send an email to ceph-users-le...@ceph.io
>
>
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] Re: Rook on bare-metal?

2023-07-06 Thread Travis Nielsen
Here are the answers to some of the questions. Happy to follow up with more
discussion in the Rook Slack , Discussions
, or Issues
.

Thanks!
Travis

On Thu, Jul 6, 2023 at 4:43 AM Anthony D'Atri  wrote:

> I’m also using Rook on BM.  I had never used K8s before, so that was the
> learning curve, e.g. translating the example YAML files into the Helm
> charts we needed, and the label / taint / toleration dance to fit the
> square peg of pinning services to round hole nodes.  We’re using Kubespray
> ; I gather there are other ways of deploying K8s?
>
> Some things that could improve:
>
> * mgrs are limited to 2, apparently Sage previously said that was all
> anyone should need.  I would like to be able to deploy one for each mon.


Is there a specific need for 3? Or is it more of a habit/expectation?


> * The efficiency of `destroy`ing OSDs is not exploited, so replacing one
> involves more data shuffling than it otherwise might
>

There is a related design discussion in progress that will address the
replacement of OSDs to avoid the data reshuffling:
https://github.com/rook/rook/pull/12381


> * I’m specifying 3 RGWs but only getting 1 deployed, no idea why

* Ingress / load balancer service for multiple RGWs seems to be manual
> * Bundled alerts are kind of noisy
>

Curious for more details on these three issues if you want to open issues.
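On the RGW count specifically, for reference the number of RGW pods is
normally driven by the gateway instances field on the CephObjectStore. A
hedged sketch, with an assumed store name and namespace, in case it helps
narrow down the report:

```sh
# Request three RGW pods (store name and namespace are assumptions). If this
# is already set and only one pod is created, operator logs attached to an
# issue would help.
kubectl -n rook-ceph patch cephobjectstore my-store --type merge \
  -p '{"spec":{"gateway":{"instances":3}}}'
```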


> * I’m still unsure what Rook does dynamically, and what it only does at
> deployment time (we use ArgoCD).  I.e., if I make changes, what sticks and
> what’s trampled?
>

Changes are intended to be updated if you change the settings in the CRDs.
If you see settings that are not applied when changed, agreed we should
track that and fix it, or at least document it.


> * How / if one can bake configuration (as in `ceph.conf` entries) into the
> YAML files vs manually running “ceph config”
>

ceph.conf settings can be applied through a configmap. See
https://rook.io/docs/rook/latest/Storage-Configuration/Advanced/ceph-configuration/#custom-csi-cephconf-settings
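For daemon-side ceph.conf entries, the same page also covers an override
ConfigMap that the daemons pick up. A minimal sketch, with a purely
illustrative setting:

```sh
# Rook reads ceph.conf-style overrides from a ConfigMap named
# rook-config-override (data key "config") in the cluster namespace.
kubectl -n rook-ceph apply -f - <<'EOF'
apiVersion: v1
kind: ConfigMap
metadata:
  name: rook-config-override
  namespace: rook-ceph
data:
  config: |
    [global]
    osd_pool_default_size = 3
EOF
```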


> * What the sidecars within the pods are doing, if any of them can be
> disabled
>

Sidecars are needed for some of the pods (csi drivers and mgr) to provide
some functionality. They can't be disabled unless some feature is disabled.
For example, if two mgrs are running, the mgr sidecar is needed to watch
when the mgr failover occurs so the services can update to point to the
active mgr. Search this doc for "sidecar" for some more details on the mgr
sidecar.
https://rook.io/docs/rook/latest/CRDs/Cluster/ceph-cluster-crd/#cluster-wide-resources-configuration-settings


> * Requests / limits for various pods, especially when on dedicated nodes.
> Plan to experiment with disabling limits and setting
> `autotune_memory_target_ratio` and `osd_memory_target_autotune`
>

Where you have dedicated nodes, it can certainly be simpler to remove the
resource requests/limits, as long as you set those memory limits. Default
requests/limits are set by the helm chart, and they can admittedly be
challenging to tune since there are so many moving parts.


> * Documentation for how to do pod-specific configuration, i.e. setting the
> number of OSDs per node when it isn’t uniform.  A colleague helped me sort
> this out, but I’m enumerating each node - would like to be able to do so
> more concisely, perhaps with a default and overrides.
>

There are multiple ways to deal with OSD creation, depending on the
environment. Curious to follow up on what worked for you, or how this could
be improved in the docs.
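One pattern that may map to the "default and overrides" idea: the CephCluster
storage spec accepts a cluster-wide config block plus per-node overrides. A
rough sketch with placeholder node names, filters, and values:

```sh
# Write a storage fragment with a cluster-wide default and one per-node
# override, then merge-patch it into the CephCluster (a merge patch replaces
# the whole nodes list; names and values are placeholders).
cat > storage-override.yaml <<'EOF'
spec:
  storage:
    useAllNodes: false
    useAllDevices: false
    config:
      osdsPerDevice: "1"        # default for every listed node
    nodes:
      - name: "node-a"
        deviceFilter: "^sd[b-e]"
      - name: "node-b"
        deviceFilter: "^sd[b-c]"
        config:
          osdsPerDevice: "2"    # per-node override
EOF
kubectl -n rook-ceph patch cephcluster rook-ceph --type merge \
  --patch-file storage-override.yaml
```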


>
> > On Jul 6, 2023, at 4:13 AM, Joachim Kraftmayer - ceph ambassador <
> joachim.kraftma...@clyso.com> wrote:
> >
> > Hello
> >
> > we have been following rook since 2018 and have had our experiences both
> on bare-metal and in the hyperscalers.
> > In the same way, we have been following cephadm from the beginning.
> >
> > Meanwhile, we have been using both in production for years and the
> decision which orchestrator to use depends from project to project. e.g.,
> the features of both projects are not identical.
> >
> > Joachim
> >
> > ___
> > ceph ambassador DACH
> > ceph consultant since 2012
> >
> > Clyso GmbH - Premier Ceph Foundation Member
> >
> > https://www.clyso.com/
> >
> > Am 06.07.23 um 07:16 schrieb Nico Schottelius:
> >> Morning,
> >>
> >> we are running some ceph clusters with rook on bare metal and can very
> >> much recomend it. You should have proper k8s knowledge, knowing how to
> >> change objects such as configmaps or deployments, in case things go
> >> wrong.
> >>
> >> In regards to stability, the rook operator is written rather defensive,
> >> not changing monitors or the cluster if the quorom is not met and
> >> checking how the osd status is on removal/adding of osds.
> >>
> >> So TL;DR: very much usable and rather k8s native.
> >>
> >> BR,
> >>
> >> Nico
> >>
> >> 

[ceph-users] Re: reef v18.1.0 QE Validation status

2023-05-30 Thread Travis Nielsen
Rook daily CI is passing against the image
quay.io/ceph/daemon-base:latest-main-devel, which means the Reef release is
looking good from Rook's perspective.


With the Reef release we need to have the tags soon:

quay.io/ceph/daemon-base:latest-reef-devel

quay.io/ceph/ceph:v18


Guillaume, will these happen automatically, or do we need some work done in
ceph-container?


Thanks,

Travis



On Tue, May 30, 2023 at 10:54 AM Yuri Weinstein  wrote:

> Details of this release are summarized here:
>
> https://tracker.ceph.com/issues/61515#note-1
> Release Notes - TBD
>
> Seeking approvals/reviews for:
>
> rados - Neha, Radek, Travis, Ernesto, Adam King (we still have to
> merge https://github.com/ceph/ceph/pull/51788 for
> the core)
> rgw - Casey
> fs - Venky
> orch - Adam King
> rbd - Ilya
> krbd - Ilya
> upgrade/octopus-x - deprecated
> upgrade/pacific-x - known issues, Ilya, Laura?
> upgrade/reef-p2p - N/A
> clients upgrades - not run yet
> powercycle - Brad
> ceph-volume - in progress
>
> Please reply to this email with approval and/or trackers of known
> issues/PRs to address them.
>
> gibba upgrade was done and will need to be done again this week.
> LRC upgrade TBD
>
> TIA
> ___
> ceph-users mailing list -- ceph-users@ceph.io
> To unsubscribe send an email to ceph-users-le...@ceph.io
>
>
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] Re: quincy v17.2.1 QE Validation status

2022-06-22 Thread Travis Nielsen
Ok, let's declare Rook signed off. First time asking for Rook, I'll try to
pay more attention going forward... :)

Rook has some daily tests in the Rook repo running against the following tags.

quay.io/ceph/daemon-base:latest-octopus-devel

quay.io/ceph/daemon-base:latest-pacific-devel

quay.io/ceph/daemon-base:latest-quincy-devel


All of these are passing, except for a couple intermittent or other known
issues.


Travis

On Wed, Jun 22, 2022 at 4:53 PM Laura Flores  wrote:

> I did not see any Dashboard failures, and the Rook one is known and
> getting looked into. I cannot approve for Dashboard or Rook, but I can at
> least give that piece of information.
>
> - Laura
>
> On Wed, Jun 22, 2022 at 5:43 PM Yuri Weinstein 
> wrote:
>
>>
>> We did not get approvals for dashboard and rook, but we also did not get
>> disapproval :)
>>
>> Josh, David it's ready for publishing assuming you agree.
>>
>> On Wed, Jun 22, 2022 at 3:26 PM Neha Ojha  wrote:
>>
>>> On Wed, Jun 22, 2022 at 11:44 AM Laura Flores 
>>> wrote:
>>> >
>>> > Here is the summary of RADOS failures. Everything looks good and
>>> normal to
>>> > me! I will leave it to Neha to give final approval though.
>>>
>>> Thanks Laura. These runs look good. We encountered
>>> https://tracker.ceph.com/issues/56101 while upgrading the gibba scale
>>> cluster and the LRC to 17.2.1. The crash happens during shutdown and
>>> isn't repeatable. The issue is being actively investigated and does
>>> not look like a regression in 17.2.1. We found a couple of reports in
>>> telemetry about this crash on 17.2.0.
>>>
>>> RADOS approved!
>>>
>>> - Neha
>>>
>>>
>>> >
>>> >
>>> > https://tracker.ceph.com/issues/55974#note-1
>>> >
>>> > Failures:
>>> > 1. https://tracker.ceph.com/issues/52321
>>> > 2. https://tracker.ceph.com/issues/56000
>>> > 3. https://tracker.ceph.com/issues/53685
>>> > 4. https://tracker.ceph.com/issues/52124
>>> > 5. https://tracker.ceph.com/issues/55854
>>> > 6. https://tracker.ceph.com/issues/53789
>>> >
>>> > Details:
>>> > 1. qa/tasks/rook times out: 'check osd count' reached maximum tries
>>> > (90) after waiting for 900 seconds - Ceph - Orchestrator
>>> > 2. task/test_nfs: ERROR: Daemon not found: mds.a.smithi060.ujwxef.
>>> See
>>> > `cephadm ls` - Ceph - Orchestrator
>>> > 3. Assertion `HAVE_FEATURE(features, SERVER_OCTOPUS)' failed. -
>>> Ceph -
>>> > RADOS
>>> > 4. Invalid read of size 8 in handle_recovery_delete() - Ceph -
>>> RADOS
>>> > 5. Datetime AssertionError in test_health_history
>>> > (tasks.mgr.test_insights.TestInsights) - Ceph - Mgr
>>> > 6. CommandFailedError (rados/test_python.sh): "RADOS object not
>>> found"
>>> > causes test_rados.TestWatchNotify.test_aio_notify to fail - Ceph -
>>> RADOS
>>> >
>>> > On Wed, Jun 22, 2022 at 12:43 PM Yuri Weinstein 
>>> wrote:
>>> >
>>> > > Looking for final approvals: core, dashboard, rook pls
>>> > >
>>> > > On Tue, Jun 14, 2022 at 10:17 AM Yuri Weinstein >> >
>>> > > wrote:
>>> > >
>>> > >> Details of this release are summarized here:
>>> > >>
>>> > >> https://tracker.ceph.com/issues/55974
>>> > >> 
>>> > >> Release Notes - https://github.com/ceph/ceph/pull/46576
>>> > >>
>>> > >> Seeking approvals for:
>>> > >>
>>> > >> rados - Neha, Travis, Ernesto, Adam
>>> > >> rgw - Casey
>>> > >> fs - Venky, Gerg
>>> > >> orch - Adam
>>> > >> rbd - Ilya, Deepika
>>> > >> krbd  Ilya, Deepika
>>> > >> upgrade/octopus-x - Casey
>>> > >>
>>> > >> Please reply to this email with approval and/or trackers of known
>>> > >> issues/PRs to address them.
>>> > >>
>>> > >> Josh, David - it's ready for LRC upgrade if you'd like.
>>> > >>
>>> > >> Thx
>>> > >> YuriW
>>> > >>
>>> > > ___
>>> > > Dev mailing list -- d...@ceph.io
>>> > > To unsubscribe send an email to dev-le...@ceph.io
>>> > >
>>> >
>>> >
>>> > --
>>> >
>>> > Laura Flores
>>> >
>>> > She/Her/Hers
>>> >
>>> > Associate Software Engineer, Ceph Storage
>>> >
>>> > Red Hat Inc. 
>>> >
>>> > La Grange Park, IL
>>> >
>>> > lflo...@redhat.com
>>> > M: +17087388804
>>> > @RedHat    Red Hat
>>> >   Red Hat
>>> > 
>>> > 
>>> > ___
>>> > ceph-users mailing list -- ceph-users@ceph.io
>>> > To unsubscribe send an email to ceph-users-le...@ceph.io
>>> >
>>>
>>>
>
> --
>
> Laura Flores
>
> She/Her/Hers
>
> Associate Software Engineer, Ceph Storage
>
> Red Hat Inc. 
>
> La Grange Park, IL
>
> lflo...@redhat.com
> M: +17087388804
> @RedHat    Red Hat
>   Red Hat
> 
> 
>
> ___
> 

[ceph-users] Re: quincy v17.2.0 QE Validation status

2022-04-14 Thread Travis Nielsen
Rook is ready for Quincy, thanks!

Travis

On Thu, Apr 14, 2022 at 1:48 PM Yuri Weinstein  wrote:

> I am assuming approvals from Neha and Venky.
> Still not sure if Sébastien Han approved rook.
>
> We are waiting for the last PR to be merged:
> https://github.com/ceph/ceph/pull/45885
>
> Then all (after the PRs merged) is ready for final review/approval, Josh,
> and ready for publishing, David.
>
> Thx
> YuriW
>
> On Wed, Apr 13, 2022 at 10:53 PM Venky Shankar 
> wrote:
>
>> On Mon, Apr 11, 2022 at 7:33 PM Venky Shankar 
>> wrote:
>> >
>> > On Fri, Apr 8, 2022 at 3:32 PM Venky Shankar 
>> wrote:
>> > >
>> > > On Tue, Apr 5, 2022 at 7:44 AM Venky Shankar 
>> wrote:
>> > > >
>> > > > Hey Josh,
>> > > >
>> > > > On Tue, Apr 5, 2022 at 4:34 AM Josh Durgin 
>> wrote:
>> > > > >
>> > > > > Hi Venky and Ernesto, how are the mount fix and grafana container
>> build looking?
>> > > >
>> > > > Currently running into various teuthology related issues when
>> testing
>> > > > out the mount fix.
>> > > >
>> > > > We'll want a test run without these failures to be really sure that
>> we
>> > > > aren't missing anything.
>> > >
>> > > Update: The unrelated failures have been taken care of (updates to
>> > > testing kernel).  Seeing one failed test with the following PR:
>> > >
>> > > https://github.com/ceph/ceph/pull/45689
>> > >
>> > > We are working on priority to get that resolved.
>> >
>> > PR merged into master.
>> >
>> > Yuri, FYI - quincy backport PR is updated:
>> > https://github.com/ceph/ceph/pull/45780
>>
>> Merged into quincy.
>>
>> >
>> > >
>> > > >
>> > > > >
>> > > > > Josh
>> > > > >
>> > > > >
>> > > > > On Fri, Apr 1, 2022 at 8:22 AM Venky Shankar 
>> wrote:
>> > > > >>
>> > > > >> On Thu, Mar 31, 2022 at 8:51 PM Venky Shankar <
>> vshan...@redhat.com> wrote:
>> > > > >> >
>> > > > >> > Hi Yuri,
>> > > > >> >
>> > > > >> > On Wed, Mar 30, 2022 at 11:24 PM Yuri Weinstein <
>> ywein...@redhat.com> wrote:
>> > > > >> > >
>> > > > >> > > We merged rgw, cephadm and core PRs, but some work is still
>> pending on fs and dashboard components.
>> > > > >> > >
>> > > > >> > > Seeking approvals for:
>> > > > >> > >
>> > > > >> > > smoke - Venky
>> > > > >> > > fs - Venky
>> > > > >> >
>> > > > >> > I approved the latest batch for cephfs PRs:
>> > > > >> >
>> https://trello.com/c/Iq3WtUK5/1494-wip-yuri-testing-2022-03-29-0741-quincy
>> > > > >> >
>> > > > >> > There is one pending (blocker) PR:
>> > > > >> > https://github.com/ceph/ceph/pull/45689 - I'll let you know
>> when the
>> > > > >> > backport is available.
>> > > > >>
>> > > > >> Smoke test passes with the above PR:
>> > > > >>
>> https://pulpito.ceph.com/vshankar-2022-04-01_12:29:01-smoke-wip-vshankar-testing1-20220401-123425-testing-default-smithi/
>> > > > >>
>> > > > >> Requested Yuri to run FS suite w/ master (jobs were not getting
>> > > > >> scheduled in my run). Thanks, Yuri!
>> > > > >>
>> > > > >> ___
>> > > > >> ceph-users mailing list -- ceph-users@ceph.io
>> > > > >> To unsubscribe send an email to ceph-users-le...@ceph.io
>> > > > >>
>> > > >
>> > > >
>> > > > --
>> > > > Cheers,
>> > > > Venky
>> > >
>> > >
>> > >
>> > > --
>> > > Cheers,
>> > > Venky
>> >
>> >
>> >
>> > --
>> > Cheers,
>> > Venky
>>
>>
>>
>> --
>> Cheers,
>> Venky
>>
>> ___
> Dev mailing list -- d...@ceph.io
> To unsubscribe send an email to dev-le...@ceph.io
>
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io