[ceph-users] Re: ceph dashboard reef 18.2.2 radosgw

2024-05-09 Thread Nizamudeen A
Hello Christopher,

Could you please paste the logs and exceptions to this thread as well?
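
If it helps, something along these lines should capture the relevant traceback
from the active mgr (the daemon name below is a placeholder; `cephadm ls | grep "mgr."`
on the host will show the real one):

cephadm logs -n mgr.<host>.<id> | grep -B 5 -A 30 Traceback   # daemon name is a placeholder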

Regards,
Nizam

On Wed, May 8, 2024 at 11:21 PM Christopher Durham 
wrote:

> Hello,
> I am using 18.2.2 on Rocky 8 Linux.
>
> I am getting an HTTP 500 error when trying to hit the ceph dashboard on reef
> 18.2.2 and looking at any of the radosgw pages.
> I tracked this down to /usr/share/ceph/mgr/dashboard/controllers/rgw.py
> It appears to parse the metadata for a given radosgw server improperly. In
> my various rgw ceph.conf entries, I have:
> rgw frontends = beast ssl_endpoint=0.0.0.0
> ssl_certificate=/path/to/pem_with_cert_and_key
> but rgw.py pulls the metadata for each server and looks for
> 'port=' in the metadata for each one. When it doesn't find it (based on
> line 147 in rgw.py), the ceph-mgr log shows an exception, which the manager
> proper catches and returns as a 500.
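> For reference, a beast frontends line that puts an explicit port into the
> frontend string (so the 'port=' lookup has something to match) might look like
> the line below; this is only a sketch of the option syntax, not a confirmed
> workaround for the 500:
>
> # example only - the port value here is arbitrary
> rgw frontends = beast ssl_port=443 ssl_certificate=/path/to/pem_with_cert_and_key
>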
> Would changing my frontends definition work? Is this known? I have had the
> frontends definition for a while prior to my reef upgrade. Thanks
> -Chris
> ___
> ceph-users mailing list -- ceph-users@ceph.io
> To unsubscribe send an email to ceph-users-le...@ceph.io
>
>
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] Re: 18.2.2 dashboard really messed up.

2024-03-14 Thread Nizamudeen A
Yup, that does look like a huge difference.

@Pedro Gonzalez Gomez @Aashish Sharma @Ankush Behl Could you guys help
here? Did we miss any fixes for 18.2.2?

Regards,

On Thu, Mar 14, 2024 at 2:17 AM Harry G Coin  wrote:

> Thanks!  Oddly, all the dashboard checks you suggest appear normal, yet
> the result remains broken.
>
> Before I used your instruction about the dashboard, I had this result:
>
> root@noc3:~# ceph dashboard get-prometheus-api-host
> http://noc3.1.quietfountain.com:9095
> root@noc3:~# netstat -6nlp | grep 9095
> tcp6   0  0 :::9095 :::*
>LISTEN  80963/prometheus
> root@noc3:~#
>
> To check it, I tried setting it to something random; the browser aimed at
> the dashboard site reported no connection.  The error message ended when I
> restored the above.  But the graphs remain empty, with just the numbers 1 and
> 0.5 on each.
>
> Regarding the used storage, notice the overall usage is 43.6 of 111
> TiB. That seems quite a distance from the trigger warning points of 85 and
> 95?  The default values are in use.  All the OSDs are between 37% and 42%
> usage.   What am I missing?
>
> Thanks!
>
>
>
> On 3/12/24 02:07, Nizamudeen A wrote:
>
> Hi,
>
> The warning and danger indicators in the capacity chart point to the
> nearfull and full ratios set on the cluster, and
> the default values for them are 85% and 95% respectively. You can do a
> `ceph osd dump | grep ratio` and see those.
>
> When this got introduced, there was a blog post
> <https://ceph.io/en/news/blog/2023/landing-page/#capacity-card> explaining
> how this is mapped in the chart. When your used storage
> crosses that 85% mark, the chart is colored yellow to alert the
> user, and when it crosses 95% (the full ratio) the
> chart is colored red. That doesn't mean the cluster
> is in bad shape; it's a visual indicator to tell you
> you are running out of storage.
>
> Regarding the Cluster Utilization chart, it gets metrics directly from
> Prometheus so that it can show time-series
> data in the UI rather than the metrics at the current point in time (which was
> used before). So if you have Prometheus configured for the
> dashboard and its URL is provided in the dashboard settings (`ceph
> dashboard set-prometheus-api-host <URL>`),
> then you should be able to see the metrics.
>
> In case you need to read more about the new page, you can check here:
> <https://docs.ceph.com/en/latest/mgr/dashboard/#overview-of-the-dashboard-landing-page>
>
> Regards,
> Nizam
>
>
>
> On Mon, Mar 11, 2024 at 11:47 PM Harry G Coin  wrote:
>
>> Looking at ceph -s, all is well.  Looking at the dashboard, 85% of my
>> capacity is 'warned', and 95% is 'in danger'.   There is no hint given
>> as to the nature of the danger or reason for the warning.  Though
>> apparently with merely 5% of my ceph world 'normal', the cluster reports
>> 'ok'.  Which, you know, seems contradictory.  I've used just under 40%
>> of capacity.
>>
>> Further down the dashboard, all the subsections of 'Cluster Utilization'
>> are '1' and '0.5' with nothing whatever in the graphics area.
>>
>> Previous versions of ceph presented a normal dashboard.
>>
>> It's just a little half rack, 5 hosts, a few physical drives each, been
>> running ceph for a couple years now.  Orchestrator is cephadm.  It's
>> just about as 'plain vanilla' as it gets.  I've had to mute one alert,
>> because cephadm refresh aborts when it finds drives on any host that
>> have nothing to do with ceph that don't have a blkid_ip 'TYPE' key.
>> Seems unrelated to a totally messed up dashboard.  (The tracker for that
>> is here: https://tracker.ceph.com/issues/63502 ).
>>
>> Any idea what the steps are to get useful stuff back on the dashboard?
>> Any idea where I can learn what my 85% danger and 95% warning is
>> 'about'?  (You'd think 'danger' (The volcano is blowing up now!)  would
>> be worse than 'warning' (the volcano might blow up soon) , so how can
>> warning+danger > 100%, or if not additive how can warning < danger?)
>>
>>   Here's a bit of detail:
>>
>> root@noc1:~# ceph -s
>>   cluster:
>> id: 4067126d-01cb-40af-824a-881c130140f8
>> health: HEALTH_OK
>> (muted: CEPHADM_REFRESH_FAILED)
>>
>>   services:
>> mon: 5 daemons, quorum noc4,noc2,noc1,noc3,sysmon1 (age 70m)
>> mgr: noc2.yhyuxd(active, since 82m), standbys: noc4.tvhgac,
>> noc3.sybsfb, noc1.jtteqg
>> mds: 1/1 daemons up, 3 standby
>> osd: 27 osds: 27 up (since 20m), 27 in (since 2d)
>>
>>   da

[ceph-users] Re: 18.2.2 dashboard really messed up.

2024-03-12 Thread Nizamudeen A
Hi,

The warning and danger indicators in the capacity chart point to the
nearfull and full ratios set on the cluster, and
the default values for them are 85% and 95% respectively. You can do a
`ceph osd dump | grep ratio` and see those.
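
For reference, on a cluster left at the defaults those lines look roughly like
this (the values below are the shipped defaults, not output taken from your
cluster):

full_ratio 0.95
backfillfull_ratio 0.9
nearfull_ratio 0.85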

When this got introduced, there was a blog post
<https://ceph.io/en/news/blog/2023/landing-page/#capacity-card> explaining
how this is mapped in the chart. When your used storage
crosses that 85% mark, the chart is colored yellow to alert the
user, and when it crosses 95% (the full ratio) the
chart is colored red. That doesn't mean the cluster
is in bad shape; it's a visual indicator to tell you
you are running out of storage.

Regarding the Cluster Utilization chart, it gets metrics directly from
Prometheus so that it can show time-series
data in the UI rather than the metrics at the current point in time (which was
used before). So if you have Prometheus configured for the dashboard
and its URL is provided in the dashboard settings (`ceph dashboard
set-prometheus-api-host <URL>`),
then you should be able to see the metrics.
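
For example (the host and port below are placeholders; point it at wherever
your Prometheus instance is actually reachable):

ceph dashboard set-prometheus-api-host http://<prometheus-host>:9095   # placeholder host
ceph dashboard get-prometheus-api-host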

In case you need to read more about the new page, you can check here:
<https://docs.ceph.com/en/latest/mgr/dashboard/#overview-of-the-dashboard-landing-page>

Regards,
Nizam



On Mon, Mar 11, 2024 at 11:47 PM Harry G Coin  wrote:

> Looking at ceph -s, all is well.  Looking at the dashboard, 85% of my
> capacity is 'warned', and 95% is 'in danger'.   There is no hint given
> as to the nature of the danger or reason for the warning.  Though
> apparently with merely 5% of my ceph world 'normal', the cluster reports
> 'ok'.  Which, you know, seems contradictory.  I've used just under 40%
> of capacity.
>
> Further down the dashboard, all the subsections of 'Cluster Utilization'
> are '1' and '0.5' with nothing whatever in the graphics area.
>
> Previous versions of ceph presented a normal dashboard.
>
> It's just a little half rack, 5 hosts, a few physical drives each, been
> running ceph for a couple years now.  Orchestrator is cephadm.  It's
> just about as 'plain vanilla' as it gets.  I've had to mute one alert,
> because cephadm refresh aborts when it finds drives on any host that
> have nothing to do with ceph that don't have a blkid_ip 'TYPE' key.
> Seems unrelated to a totally messed up dashboard.  (The tracker for that
> is here: https://tracker.ceph.com/issues/63502 ).
>
> Any idea what the steps are to get useful stuff back on the dashboard?
> Any idea where I can learn what my 85% danger and 95% warning is
> 'about'?  (You'd think 'danger' (The volcano is blowing up now!)  would
> be worse than 'warning' (the volcano might blow up soon) , so how can
> warning+danger > 100%, or if not additive how can warning < danger?)
>
>   Here's a bit of detail:
>
> root@noc1:~# ceph -s
>   cluster:
> id: 4067126d-01cb-40af-824a-881c130140f8
> health: HEALTH_OK
> (muted: CEPHADM_REFRESH_FAILED)
>
>   services:
> mon: 5 daemons, quorum noc4,noc2,noc1,noc3,sysmon1 (age 70m)
> mgr: noc2.yhyuxd(active, since 82m), standbys: noc4.tvhgac,
> noc3.sybsfb, noc1.jtteqg
> mds: 1/1 daemons up, 3 standby
> osd: 27 osds: 27 up (since 20m), 27 in (since 2d)
>
>   data:
> volumes: 1/1 healthy
> pools:   16 pools, 1809 pgs
> objects: 12.29M objects, 17 TiB
> usage:   44 TiB used, 67 TiB / 111 TiB avail
> pgs: 1793 active+clean
>  9    active+clean+scrubbing
>  7    active+clean+scrubbing+deep
>
>   io:
> client:   5.6 MiB/s rd, 273 KiB/s wr, 41 op/s rd, 58 op/s wr
>
> ___
> ceph-users mailing list -- ceph-users@ceph.io
> To unsubscribe send an email to ceph-users-le...@ceph.io
>
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] Re: reef 18.2.2 (hot-fix) QE validation status

2024-03-05 Thread Nizamudeen A
For dashboard, I see 1 failure, 2 dead, and 2 passed jobs. The failed e2e is
something we fixed a while ago; not sure why it's broken again,
but if it's recurring, we'll have a look. In any case, it's not a blocker.

On Wed, Mar 6, 2024 at 4:53 AM Laura Flores  wrote:

> Here are the rados and smoke suite summaries.
>
> @Radoslaw Zarzynski , @Adam King 
> , @Nizamudeen A , mind having a look to ensure the
> results from the rados suite look good to you?
>
>  @Venky Shankar  mind having a look at the smoke
> suite? There was a resurgence of https://tracker.ceph.com/issues/57206. I
> don't see this as a blocker to the hotfix release, but LMK your thoughts.
>
> *rados*
> -
> https://pulpito.ceph.com/yuriw-2024-03-04_20:52:58-rados-reef-release-distro-default-smithi
> -
> https://pulpito.ceph.com/yuriw-2024-03-05_01:29:36-rados-reef-release-distro-default-smithi
>
> Failures, unrelated:
> 1. https://tracker.ceph.com/issues/64725 -- new tracker, but known
> issue
> 2. https://tracker.ceph.com/issues/61774
> 3. https://tracker.ceph.com/issues/49287
> 4. https://tracker.ceph.com/issues/55141
> 5. https://tracker.ceph.com/issues/59142
> 6. https://tracker.ceph.com/issues/64726 -- new tracker
> 7. https://tracker.ceph.com/issues/62992
> 8. https://tracker.ceph.com/issues/64208
>
> Details:
> 1. rados/singleton: application not enabled on pool 'rbd' - Ceph -
> RADOS
> 2. centos 9 testing reveals rocksdb "Leak_StillReachable" memory leak
> in mons - Ceph - RADOS
> 3. podman: setting cgroup config for procHooks process caused: Unit
> libpod-$hash.scope not found - Ceph - Orchestrator
> 4. thrashers/fastread: assertion failure: rollback_info_trimmed_to ==
> head - Ceph - RADOS
> 5. mgr/dashboard: fix e2e for dashboard v3 - Ceph - Mgr - Dashboard
> 6. LibRadosAioEC.MultiWritePP hang and pkill - Ceph - RADOS
> 7. Heartbeat crash in reset_timeout and clear_timeout - Ceph - RADOS
> 8. test_cephadm.sh: Container version mismatch causes job to fail. -
> Ceph - Orchestrator
>
> *smoke*
> -
> https://pulpito.ceph.com/yuriw-2024-03-05_15:31:54-smoke-reef-release-distro-default-smithi/
>
> Failures, unrelated:
> 1. https://tracker.ceph.com/issues/52624
> 2. https://tracker.ceph.com/issues/57206
> 3. https://tracker.ceph.com/issues/64727 -- new tracker
>
> Details:
> 1. qa: "Health check failed: Reduced data availability: 1 pg peering
> (PG_AVAILABILITY)" - Ceph
> 2. ceph_test_libcephfs_reclaim crashes during test - Ceph - CephFS
> 3. suites/dbench.sh: Socket exception: No route to host (113) -
> Infrastructure
>
> On Tue, Mar 5, 2024 at 3:05 PM Yuri Weinstein  wrote:
>
>> Only suits below need approval:
>>
>> smoke - Radek, Laura?
>> rados - Radek, Laura?
>>
>> We are also in the process of upgrading gibba and then LRC into 18.2.2 RC
>>
>> On Tue, Mar 5, 2024 at 7:47 AM Yuri Weinstein 
>> wrote:
>> >
>> > Details of this release are summarized here:
>> >
>> > https://tracker.ceph.com/issues/64721#note-1
>> > Release Notes - TBD
>> > LRC upgrade - TBD
>> >
>> > Seeking approvals/reviews for:
>> >
>> > smoke - in progress
>> > rados - Radek, Laura?
>> > quincy-x - in progress
>> >
>> > Also need approval from Travis, Redouane for Prometheus fix testing.
>> ___
>> ceph-users mailing list -- ceph-users@ceph.io
>> To unsubscribe send an email to ceph-users-le...@ceph.io
>>
>
>
> --
>
> Laura Flores
>
> She/Her/Hers
>
> Software Engineer, Ceph Storage <https://ceph.io>
>
> Chicago, IL
>
> lflo...@ibm.com | lflo...@redhat.com 
> M: +17087388804
>
>
>
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] Re: pacific 16.2.15 QE validation status

2024-02-20 Thread Nizamudeen A
Dashboard approved. Our e2e specs are passing, but the suite failed because
of a different error:
cluster [WRN] Health check failed: 1 stray daemon(s) not managed by cephadm
(CEPHADM_STRAY_DAEMON)" in cluster log


On Tue, Feb 20, 2024 at 9:29 PM Yuri Weinstein  wrote:

> We have restarted QE validation after fixing issues and merging several
> PRs.
> The new Build 3 (rebase of pacific) tests are summarized in the same
> note (see Build 3 runs) https://tracker.ceph.com/issues/64151#note-1
>
> Seeking approvals:
>
> rados - Radek, Junior, Travis, Ernesto, Adam King
> rgw - Casey
> fs - Venky
> rbd - Ilya
> krbd - Ilya
>
> upgrade/octopus-x (pacific) - Adam King, Casey PTL
>
> upgrade/pacific-p2p - Casey PTL
>
> ceph-volume - Guillaume, fixed by
> https://github.com/ceph/ceph/pull/55658 retesting
>
> On Thu, Feb 8, 2024 at 8:43 AM Casey Bodley  wrote:
> >
> > thanks, i've created https://tracker.ceph.com/issues/64360 to track
> > these backports to pacific/quincy/reef
> >
> > On Thu, Feb 8, 2024 at 7:50 AM Stefan Kooman  wrote:
> > >
> > > Hi,
> > >
> > > Is this PR: https://github.com/ceph/ceph/pull/54918 included as well?
> > >
> > > You definitely want to build the Ubuntu / debian packages with the
> > > proper CMAKE_CXX_FLAGS. The performance impact on RocksDB is _HUGE_.
> > >
> > > Thanks,
> > >
> > > Gr. Stefan
> > >
> > > P.s. Kudos to Mark Nelson for figuring it out / testing.
> > > ___
> > > ceph-users mailing list -- ceph-users@ceph.io
> > > To unsubscribe send an email to ceph-users-le...@ceph.io
> > >
> >
> ___
> ceph-users mailing list -- ceph-users@ceph.io
> To unsubscribe send an email to ceph-users-le...@ceph.io
>
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] Re: pacific 16.2.15 QE validation status

2024-02-01 Thread Nizamudeen A
Thanks Laura,

Raised a PR for https://tracker.ceph.com/issues/57386:
https://github.com/ceph/ceph/pull/55415


On Thu, Feb 1, 2024 at 5:15 AM Laura Flores  wrote:

> I reviewed the rados suite. @Adam King , @Nizamudeen A
>  would appreciate a look from you, as there are some
> orchestrator and dashboard trackers that came up.
>
> pacific-release, 16.2.15
>
> Failures:
> 1. https://tracker.ceph.com/issues/62225
> 2. https://tracker.ceph.com/issues/64278
> 3. https://tracker.ceph.com/issues/58659
> 4. https://tracker.ceph.com/issues/58658
> 5. https://tracker.ceph.com/issues/64280 -- new tracker, worth a look
> from Orch
> 6. https://tracker.ceph.com/issues/63577
> 7. https://tracker.ceph.com/issues/63894
> 8. https://tracker.ceph.com/issues/64126
> 9. https://tracker.ceph.com/issues/63887
> 10. https://tracker.ceph.com/issues/61602
> 11. https://tracker.ceph.com/issues/54071
> 12. https://tracker.ceph.com/issues/57386
> 13. https://tracker.ceph.com/issues/64281
> 14. https://tracker.ceph.com/issues/49287
>
> Details:
> 1. pacific upgrade test fails on 'ceph versions | jq -e' command -
> Ceph - RADOS
> 2. Unable to update caps for client.iscsi.iscsi.a - Ceph - Orchestrator
> 3. mds_upgrade_sequence: failure when deploying node-exporter - Ceph -
> Orchestrator
> 4. mds_upgrade_sequence: Error: initializing source
> docker://prom/alertmanager:v0.20.0 - Ceph - Orchestrator
> 5. mgr-nfs-upgrade test times out from failed cephadm daemons - Ceph -
> Orchestrator
> 6. cephadm: docker.io/library/haproxy: toomanyrequests: You have
> reached your pull rate limit. You may increase the limit by authenticating
> and upgrading: https://www.docker.com/increase-rate-limit - Ceph -
> Orchestrator
> 7. qa: cephadm failed with an error code 1, alertmanager container not
> found. - Ceph - Orchestrator
> 8. ceph-iscsi build was retriggered and now missing
> package_manager_version attribute - Ceph
> 9. Starting alertmanager fails from missing container - Ceph -
> Orchestrator
> 10. pacific: cls/test_cls_sdk.sh: Health check failed: 1 pool(s) do
> not have an application enabled (POOL_APP_NOT_ENABLED) - Ceph - RADOS
> 11. rados/cephadm/osds: Invalid command: missing required parameter
> hostname() - Ceph - Orchestrator
> 12. cephadm/test_dashboard_e2e.sh: Expected to find content: '/^foo$/'
> within the selector: 'cd-modal .badge' but never did - Ceph - Mgr -
> Dashboard
> 13. Failed to download key at
> http://download.ceph.com/keys/autobuild.asc: Request failed: <urlopen
> error [Errno 101] Network is unreachable> - Infrastructure
> 14. podman: setting cgroup config for procHooks process caused: Unit
> libpod-$hash.scope not found - Ceph - Orchestrator
>
> On Wed, Jan 31, 2024 at 1:41 PM Casey Bodley  wrote:
>
>> On Mon, Jan 29, 2024 at 4:39 PM Yuri Weinstein 
>> wrote:
>> >
>> > Details of this release are summarized here:
>> >
>> > https://tracker.ceph.com/issues/64151#note-1
>> >
>> > Seeking approvals/reviews for:
>> >
>> > rados - Radek, Laura, Travis, Ernesto, Adam King
>> > rgw - Casey
>>
>> rgw approved, thanks
>>
>> > fs - Venky
>> > rbd - Ilya
>> > krbd - in progress
>> >
>> > upgrade/nautilus-x (pacific) - Casey PTL (ragweed tests failed)
>> > upgrade/octopus-x (pacific) - Casey PTL (ragweed tests failed)
>> >
>> > upgrade/pacific-x (quincy) - in progress
>> > upgrade/pacific-p2p - Ilya PTL (maybe rbd related?)
>> >
>> > ceph-volume - Guillaume
>> >
>> > TIA
>> > YuriW
>> > ___
>> > ceph-users mailing list -- ceph-users@ceph.io
>> > To unsubscribe send an email to ceph-users-le...@ceph.io
>> >
>> ___
>> Dev mailing list -- d...@ceph.io
>> To unsubscribe send an email to dev-le...@ceph.io
>>
>
>
> --
>
> Laura Flores
>
> She/Her/Hers
>
> Software Engineer, Ceph Storage <https://ceph.io>
>
> Chicago, IL
>
> lflo...@ibm.com | lflo...@redhat.com 
> M: +17087388804
>
>
>
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] Re: pacific 16.2.15 QE validation status

2024-01-30 Thread Nizamudeen A
dashboard looks good! approved.

Regards,
Nizam

On Tue, Jan 30, 2024 at 3:09 AM Yuri Weinstein  wrote:

> Details of this release are summarized here:
>
> https://tracker.ceph.com/issues/64151#note-1
>
> Seeking approvals/reviews for:
>
> rados - Radek, Laura, Travis, Ernesto, Adam King
> rgw - Casey
> fs - Venky
> rbd - Ilya
> krbd - in progress
>
> upgrade/nautilus-x (pacific) - Casey PTL (ragweed tests failed)
> upgrade/octopus-x (pacific) - Casey PTL (ragweed tests failed)
>
> upgrade/pacific-x (quincy) - in progress
> upgrade/pacific-p2p - Ilya PTL (maybe rbd related?)
>
> ceph-volume - Guillaume
>
> TIA
> YuriW
> ___
> ceph-users mailing list -- ceph-users@ceph.io
> To unsubscribe send an email to ceph-users-le...@ceph.io
>
>
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] Re: TLS 1.2 for dashboard

2024-01-25 Thread Nizamudeen A
Understood, thank you.

On Thu, Jan 25, 2024, 20:24 Sake Ceph  wrote:

> I would say drop it for the squid release, or if you keep it in squid but
> are going to disable it in a minor release later, please make a note in the
> release notes when the option is being removed.
> Just my 2 cents :)
>
> Best regards,
> Sake
>
>
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] Re: TLS 1.2 for dashboard

2024-01-25 Thread Nizamudeen A
Ah okay, thanks for the clarification.

In that case, we'll probably need to keep this 1.2 fix for squid, I guess.
I'll check and will update as necessary.

On Thu, Jan 25, 2024, 20:12 Sake Ceph  wrote:

> Hi Nizamudeen,
>
> Thank you for your quick response!
>
> The load balancers support TLS 1.3, but the administrators need to
> reconfigure the healthchecks. The only problem is that it's a global change for
> all load balancers... So it's not something they change overnight; they need to
> plan/test for it.
>
> Best regards,
> Sake
>
> > Op 25-01-2024 15:22 CET schreef Nizamudeen A :
> >
> >
> > Hi,
> >
> > I'll re-open the PR and will merge it to Quincy. Btw i want to know if
> the load balancers will be supporting tls 1.3 in future. Because we were
> planning to completely drop the tls1.2 support from dashboard because of
> security reasons. (But so far we are planning to keep it as it is atleast
> for the older releases)
> >
> > Regards,
> > Nizam
> >
> >
> > On Thu, Jan 25, 2024, 19:41 Sake Ceph  wrote:
> > > After upgrading to 17.2.7 our load balancers can't check the status of
> the manager nodes for the dashboard. After some troubleshooting I noticed
> only TLS 1.3 is available for the dashboard.
> > >
> > >  Looking at the source (quincy), the TLS config got changed from 1.2 to
> 1.3. Searching in the tracker, I found out that we are not the only ones with
> troubles and that an option will be added to the dashboard config. Tracker
> ID 62940 got backports, and the ones for reef and pacific are already merged.
> But the pull request (63068) for Quincy is closed :(
> > >
> > >  What to do? I hope this one can get merged for 17.2.8.
> > >  ___
> > >  ceph-users mailing list -- ceph-users@ceph.io
> > >  To unsubscribe send an email to ceph-users-le...@ceph.io
> > >
> > >
>
>
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] Re: TLS 1.2 for dashboard

2024-01-25 Thread Nizamudeen A
Hi,

I'll re-open the PR and will merge it to Quincy. Btw, I want to know if the
load balancers will be supporting TLS 1.3 in the future, because we were
planning to completely drop the TLS 1.2 support from the dashboard because of
security reasons. (But so far we are planning to keep it as it is, at least
for the older releases.)

Regards,
Nizam

On Thu, Jan 25, 2024, 19:41 Sake Ceph  wrote:

> After upgrading to 17.2.7 our load balancers can't check the status of the
> manager nodes for the dashboard. After some troubleshooting I noticed only
> TLS 1.3 is available for the dashboard.
>
> Looking at the source (quincy), the TLS config got changed from 1.2 to 1.3.
> Searching in the tracker, I found out that we are not the only ones with
> troubles and that an option will be added to the dashboard config. Tracker
> ID 62940 got backports, and the ones for reef and pacific are already merged.
> But the pull request (63068) for Quincy is closed :(
>
> What to do? I hope this one can get merged for 17.2.8.
> ___
> ceph-users mailing list -- ceph-users@ceph.io
> To unsubscribe send an email to ceph-users-le...@ceph.io
>
>
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] Re: REST API Endpoint Failure - Request For Where To Look To Resolve

2024-01-08 Thread Nizamudeen A
Hi,

Niz is totally fine ;) and good to hear the issue is resolved.

Regards,

On Sun, Jan 7, 2024 at 9:05 AM duluxoz  wrote:

> Hi Niz (may I call you "Niz"?)
>
> So, with the info you provided I was able to find what the issue was in
> the logs (now I know where the darn things are!) and so we have resolved
> our problem - a mis-configured port number - obvious when you think about
> it - and so I'd like to thank you once again for all of your patience and
> help
>
> Cheers
>
> Dulux-oz
> On 05/01/2024 20:39, Nizamudeen A wrote:
>
> ah sorry for that. Outside the cephadm shell, if you do cephadm ls | grep
> "mgr.", that should give you the mgr container name. It should look
> something like this
> [root@ceph-node-00 ~]# cephadm ls | grep "mgr."
> "name": "mgr.ceph-node-00.aoxbdg",
> "systemd_unit":
> "ceph-e877a630-abaa-11ee-b7ce-52540097c...@mgr.ceph-node-00.aoxbdg"
> ,
> "service_name": "mgr",
>
> and you can use that name to see the logs.
>
> On Fri, Jan 5, 2024 at 3:04 PM duluxoz  wrote:
>
>> Yeah, that's what I meant when I said I'm new to podman and containers -
> so, stupid Q: What is the "typical" name for a given container, e.g. if the
> server is "node1", is the management container "mgr.node1" or something
> similar?
>>
>> And thanks for the help - I really *do* appreciate it.  :-)
>> On 05/01/2024 20:30, Nizamudeen A wrote:
>>
>> ah yeah, its usually inside the container so you'll need to check the mgr
>> container for the logs.
>> cephadm logs -n 
>>
>> also cephadm has
>> its own log channel which can be used to get the logs.
>>
>> https://docs.ceph.com/en/quincy/cephadm/operations/#watching-cephadm-log-messages
>>
>> On Fri, Jan 5, 2024 at 2:54 PM duluxoz  wrote:
>>
>>> Yeap, can do - are the relevant logs in the "usual" place or buried
>>> somewhere inside some sort of container (typically)?  :-)
>>> On 05/01/2024 20:14, Nizamudeen A wrote:
>>>
>>> no, the error message is not clear enough to deduce an error. could you
>>> perhaps share the mgr logs at the time of the error? It could have some
>>> tracebacks
>>> which can give more info to debug it further.
>>>
>>> Regards,
>>>
>>> On Fri, Jan 5, 2024 at 2:00 PM duluxoz  wrote:
>>>
>>>> Hi Nizam,
>>>>
>>>> Yeap, done all that - we're now at the point of creating the iSCSI
>>>> Target(s) for the gateway (via the Dashboard and/or the CLI: see the error
>>>> message in the OP) - any ideas?  :-)
>>>>
>>>> Cheers
>>>>
>>>> Dulux-Oz
>>>> On 05/01/2024 19:10, Nizamudeen A wrote:
>>>>
>>>> Hi,
>>>>
>>>> You can find the APIs associated with the iscsi here:
>>>> https://docs.ceph.com/en/reef/mgr/ceph_api/#iscsi
>>>>
>>>> and if you create iscsi service through dashboard or cephadm, it should
>>>> add the iscsi gateways to the dashboard.
>>>> you can view them by issuing *ceph dashboard iscsi-gateway-list* and
>>>> you can add or remove gateways manually by
>>>>
>>>> ceph dashboard iscsi-gateway-add -i <file-containing-gateway-url>
>>>> [<gateway-name>]
>>>> ceph dashboard iscsi-gateway-rm <gateway-name>
>>>>
>>>> which you can find the documentation here:
>>>> https://docs.ceph.com/en/quincy/mgr/dashboard/#enabling-iscsi-management
>>>>
>>>> Regards,
>>>> Nizam
>>>>
>>>>
>>>>
>>>>
>>>> On Fri, Jan 5, 2024 at 12:53 PM duluxoz  wrote:
>>>>
>>>>> Hi All,
>>>>>
>>>>> A little help please.
>>>>>
>>>>> TL/DR: Please help with error message:
>>>>> ~~~
>>>>> REST API failure, code : 500
>>>>> Unable to access the configuration object
>>>>> Unable to contact the local API endpoint (https://localhost:5000/api)
>>>>> ~~~
>>>>>
>>>>> The Issue
>>>>> 
>>>>>
>>>>> I've been through the documentation and can't find what I'm looking
>>>>> for
>>>>> - possibly because I'm not really sure what it is I *am* looking for,
>>>>> so
>>>>> if someone can point me in the right direction I would really
>>>>> appreciate it.
>>>

[ceph-users] Re: Reef Dashboard Recovery Throughput empty

2024-01-05 Thread Nizamudeen A
Hi,

Is it possible that this is related to https://tracker.ceph.com/issues/63927
?

Regards,
Nizam

On Fri, Jan 5, 2024 at 4:22 PM Zoltán Beck  wrote:

> Hi All,
>
>   we just upgraded to Reef, everything looks great, except the new
> Dashboard. The Recovery Throughput graph is empty, the recovery is ongoing
> for 18 hours and still no data. I tried to move the prometheus service to
> other node and redeployed couple of times, but still no data.
>
>   Kind Regards
> Zoltan
>
> ___
> ceph-users mailing list -- ceph-users@ceph.io
> To unsubscribe send an email to ceph-users-le...@ceph.io
>
>
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] Re: REST API Endpoint Failure - Request For Where To Look To Resolve

2024-01-05 Thread Nizamudeen A
Ah, sorry for that. Outside the cephadm shell, if you do cephadm ls | grep
"mgr.", that should give you the mgr container name. It should look
something like this
[root@ceph-node-00 ~]# cephadm ls | grep "mgr."
"name": "mgr.ceph-node-00.aoxbdg",
"systemd_unit":
"ceph-e877a630-abaa-11ee-b7ce-52540097c...@mgr.ceph-node-00.aoxbdg",
"service_name": "mgr",

and you can use that name to see the logs.
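
For example (reusing the sample daemon name from the output above):

cephadm logs -n mgr.ceph-node-00.aoxbdg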

On Fri, Jan 5, 2024 at 3:04 PM duluxoz  wrote:

> Yeah, that's what I meant when I said I'm new to podman and containers -
> so, stupid Q: What is the "typical" name for a given container, e.g. if the
> server is "node1", is the management container "mgr.node1" or something
> similar?
>
> And thanks for the help - I really *do* appreciate it.  :-)
> On 05/01/2024 20:30, Nizamudeen A wrote:
>
> ah yeah, its usually inside the container so you'll need to check the mgr
> container for the logs.
> cephadm logs -n 
>
> also cephadm has
> its own log channel which can be used to get the logs.
>
> https://docs.ceph.com/en/quincy/cephadm/operations/#watching-cephadm-log-messages
>
> On Fri, Jan 5, 2024 at 2:54 PM duluxoz  wrote:
>
>> Yeap, can do - are the relevant logs in the "usual" place or buried
>> somewhere inside some sort of container (typically)?  :-)
>> On 05/01/2024 20:14, Nizamudeen A wrote:
>>
>> no, the error message is not clear enough to deduce an error. could you
>> perhaps share the mgr logs at the time of the error? It could have some
>> tracebacks
>> which can give more info to debug it further.
>>
>> Regards,
>>
>> On Fri, Jan 5, 2024 at 2:00 PM duluxoz  wrote:
>>
>>> Hi Nizam,
>>>
>>> Yeap, done all that - we're now at the point of creating the iSCSI
>>> Target(s) for the gateway (via the Dashboard and/or the CLI: see the error
>>> message in the OP) - any ideas?  :-)
>>>
>>> Cheers
>>>
>>> Dulux-Oz
>>> On 05/01/2024 19:10, Nizamudeen A wrote:
>>>
>>> Hi,
>>>
>>> You can find the APIs associated with the iscsi here:
>>> https://docs.ceph.com/en/reef/mgr/ceph_api/#iscsi
>>>
>>> and if you create iscsi service through dashboard or cephadm, it should
>>> add the iscsi gateways to the dashboard.
>>> you can view them by issuing *ceph dashboard iscsi-gateway-list* and
>>> you can add or remove gateways manually by
>>>
>>> ceph dashboard iscsi-gateway-add -i <file-containing-gateway-url>
>>> [<gateway-name>]
>>> ceph dashboard iscsi-gateway-rm <gateway-name>
>>>
>>> which you can find the documentation here:
>>> https://docs.ceph.com/en/quincy/mgr/dashboard/#enabling-iscsi-management
>>>
>>> Regards,
>>> Nizam
>>>
>>>
>>>
>>>
>>> On Fri, Jan 5, 2024 at 12:53 PM duluxoz  wrote:
>>>
>>>> Hi All,
>>>>
>>>> A little help please.
>>>>
>>>> TL/DR: Please help with error message:
>>>> ~~~
>>>> REST API failure, code : 500
>>>> Unable to access the configuration object
>>>> Unable to contact the local API endpoint (https://localhost:5000/api)
>>>> ~~~
>>>>
>>>> The Issue
>>>> 
>>>>
>>>> I've been through the documentation and can't find what I'm looking for
>>>> - possibly because I'm not really sure what it is I *am* looking for,
>>>> so
>>>> if someone can point me in the right direction I would really
>>>> appreciate it.
>>>>
>>>> I get the above error message when I run the `gwcli` command from
>>>> inside
>>>> a cephadm shell.
>>>>
>>>> What I'm trying to do is set up a set of iSCSI Gateways in our
>>>> Ceph-Reef
>>>> 18.2.1 Cluster (yes, I know it's being deprecated as of Nov 22 - or
>>>> whatever). We recently migrated & upgraded from a manual install of
>>>> Quincy to a CephAdm install of Reef - everything went AOK *except* for
>>>> the iSCSI Gateways. So we tore them down and then rebuilt them as per
>>>> the latest documentation. So now we've got 3 gateways as per the
>>>> Service
>>>> page of the Dashboard and I'm trying to create the targets.
>>>>
>>>> I tried via the Dashboard but had errors, so instead I went in to do it
>>>> via gwcli and hit the above error (which I now believe to be the cause of
>>>> the GUI creation I

[ceph-users] Re: REST API Endpoint Failure - Request For Where To Look To Resolve

2024-01-05 Thread Nizamudeen A
Ah yeah, it's usually inside the container, so you'll need to check the mgr
container for the logs.
cephadm logs -n 

also cephadm has
its own log channel which can be used to get the logs.
https://docs.ceph.com/en/quincy/cephadm/operations/#watching-cephadm-log-messages

On Fri, Jan 5, 2024 at 2:54 PM duluxoz  wrote:

> Yeap, can do - are the relevant logs in the "usual" place or buried
> somewhere inside some sort of container (typically)?  :-)
> On 05/01/2024 20:14, Nizamudeen A wrote:
>
> no, the error message is not clear enough to deduce an error. could you
> perhaps share the mgr logs at the time of the error? It could have some
> tracebacks
> which can give more info to debug it further.
>
> Regards,
>
> On Fri, Jan 5, 2024 at 2:00 PM duluxoz  wrote:
>
>> Hi Nizam,
>>
>> Yeap, done all that - we're now at the point of creating the iSCSI
>> Target(s) for the gateway (via the Dashboard and/or the CLI: see the error
>> message in the OP) - any ideas?  :-)
>>
>> Cheers
>>
>> Dulux-Oz
>> On 05/01/2024 19:10, Nizamudeen A wrote:
>>
>> Hi,
>>
>> You can find the APIs associated with the iscsi here:
>> https://docs.ceph.com/en/reef/mgr/ceph_api/#iscsi
>>
>> and if you create iscsi service through dashboard or cephadm, it should
>> add the iscsi gateways to the dashboard.
>> you can view them by issuing *ceph dashboard iscsi-gateway-list* and you
>> can add or remove gateways manually by
>>
>> ceph dashboard iscsi-gateway-add -i <file-containing-gateway-url>
>> [<gateway-name>]
>> ceph dashboard iscsi-gateway-rm <gateway-name>
>>
>> which you can find the documentation here:
>> https://docs.ceph.com/en/quincy/mgr/dashboard/#enabling-iscsi-management
>>
>> Regards,
>> Nizam
>>
>>
>>
>>
>> On Fri, Jan 5, 2024 at 12:53 PM duluxoz  wrote:
>>
>>> Hi All,
>>>
>>> A little help please.
>>>
>>> TL/DR: Please help with error message:
>>> ~~~
>>> REST API failure, code : 500
>>> Unable to access the configuration object
>>> Unable to contact the local API endpoint (https://localhost:5000/api)
>>> ~~~
>>>
>>> The Issue
>>> 
>>>
>>> I've been through the documentation and can't find what I'm looking for
>>> - possibly because I'm not really sure what it is I *am* looking for, so
>>> if someone can point me in the right direction I would really appreciate
>>> it.
>>>
>>> I get the above error message when I run the `gwcli` command from inside
>>> a cephadm shell.
>>>
>>> What I'm trying to do is set up a set of iSCSI Gateways in our Ceph-Reef
>>> 18.2.1 Cluster (yes, I know it's being deprecated as of Nov 22 - or
>>> whatever). We recently migrated & upgraded from a manual install of
>>> Quincy to a CephAdm install of Reef - everything went AOK *except* for
>>> the iSCSI Gateways. So we tore them down and then rebuilt them as per
>>> the latest documentation. So now we've got 3 gateways as per the Service
>>> page of the Dashboard and I'm trying to create the targets.
>>>
>>> I tried via the Dashboard but had errors, so instead I went in to do it
>>> via gwcli and hit the above error (which I now believe to be the cause of
>>> the GUI creation errors I encountered).
>>>
>>> I have absolutely no experience with podman or containers in general,
>>> and can't work out how to fix the issue. So I'm requesting some help -
>>> not to solve the problem for me, but to point me in the right direction
>>> to solve it myself.  :-)
>>>
>>> So, anyone?
>>>
>>> Cheers
>>> Dulux-Oz
>>> ___
>>> ceph-users mailing list -- ceph-users@ceph.io
>>> To unsubscribe send an email to ceph-users-le...@ceph.io
>>
>>
>> --
>
> *Matthew J BLACK*
>   M.Inf.Tech.(Data Comms)
>   MBA
>   B.Sc.
>   MACS (Snr), CP, IP3P
>
> When you want it done *right* ‒ the first time!
> Phone: +61 4 0411 0089
> Email: matt...@peregrineit.net
> Web: www.peregrineit.net
>
> [image: View Matthew J BLACK's profile on LinkedIn]
> <http://au.linkedin.com/in/mjblack>
>
> This Email is intended only for the addressee.  Its use is limited to that
> intended by the author at the time and it is not to be distributed without
> the author’s consent.  You must not use or disclose the contents of this
> Email, or add the sender’s Email address to any database, list or mailing
> list unless you are expressly authorised to do so.  Unless otherwise
> stated, Peregrine I.T. Pty Ltd accepts no liability for the contents of
> this Email except where subsequently confirmed in writing.  The opinions
> expressed in this Email are those of the author and do not necessarily
> represent the views of Peregrine I.T. Pty Ltd.  This Email is confidential
> and may be subject to a claim of legal privilege.
>
> If you have received this Email in error, please notify the author and
> delete this message immediately.
>
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] Re: REST API Endpoint Failure - Request For Where To Look To Resolve

2024-01-05 Thread Nizamudeen A
no, the error message is not clear enough to deduce an error. could you
perhaps share the mgr logs at the time of the error? It could have some
tracebacks
which can give more info to debug it further.

Regards,

On Fri, Jan 5, 2024 at 2:00 PM duluxoz  wrote:

> Hi Nizam,
>
> Yeap, done all that - we're now at the point of creating the iSCSI
> Target(s) for the gateway (via the Dashboard and/or the CLI: see the error
> message in the OP) - any ideas?  :-)
>
> Cheers
>
> Dulux-Oz
> On 05/01/2024 19:10, Nizamudeen A wrote:
>
> Hi,
>
> You can find the APIs associated with the iscsi here:
> https://docs.ceph.com/en/reef/mgr/ceph_api/#iscsi
>
> and if you create iscsi service through dashboard or cephadm, it should
> add the iscsi gateways to the dashboard.
> you can view them by issuing *ceph dashboard iscsi-gateway-list* and you
> can add or remove gateways manually by
>
> ceph dashboard iscsi-gateway-add -i <file-containing-gateway-url>
> [<gateway-name>]
> ceph dashboard iscsi-gateway-rm <gateway-name>
>
> which you can find the documentation here:
> https://docs.ceph.com/en/quincy/mgr/dashboard/#enabling-iscsi-management
>
> Regards,
> Nizam
>
>
>
>
> On Fri, Jan 5, 2024 at 12:53 PM duluxoz  wrote:
>
>> Hi All,
>>
>> A little help please.
>>
>> TL/DR: Please help with error message:
>> ~~~
>> REST API failure, code : 500
>> Unable to access the configuration object
>> Unable to contact the local API endpoint (https://localhost:5000/api)
>> ~~~
>>
>> The Issue
>> 
>>
>> I've been through the documentation and can't find what I'm looking for
>> - possibly because I'm not really sure what it is I *am* looking for, so
>> if someone can point me in the right direction I would really appreciate
>> it.
>>
>> I get the above error message when I run the `gwcli` command from inside
>> a cephadm shell.
>>
>> What I'm trying to do is set up a set of iSCSI Gateways in our Ceph-Reef
>> 18.2.1 Cluster (yes, I know it's being deprecated as of Nov 22 - or
>> whatever). We recently migrated & upgraded from a manual install of
>> Quincy to a CephAdm install of Reef - everything went AOK *except* for
>> the iSCSI Gateways. So we tore them down and then rebuilt them as per
>> the latest documentation. So now we've got 3 gateways as per the Service
>> page of the Dashboard and I'm trying to create the targets.
>>
>> I tried via the Dashboard but had errors, so instead I went in to do it
>> via gwcli and hit the above error (which I now believe to be the cause of
>> the GUI creation errors I encountered).
>>
>> I have absolutely no experience with podman or containers in general,
>> and can't work out how to fix the issue. So I'm requesting some help -
>> not to solve the problem for me, but to point me in the right direction
>> to solve it myself.  :-)
>>
>> So, anyone?
>>
>> Cheers
>> Dulux-Oz
>> ___
>> ceph-users mailing list -- ceph-users@ceph.io
>> To unsubscribe send an email to ceph-users-le...@ceph.io
>
>
>
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] Re: REST API Endpoint Failure - Request For Where To Look To Resolve

2024-01-05 Thread Nizamudeen A
Hi,

You can find the APIs associated with the iscsi here:
https://docs.ceph.com/en/reef/mgr/ceph_api/#iscsi

and if you create iscsi service through dashboard or cephadm, it should add
the iscsi gateways to the dashboard.
you can view them by issuing *ceph dashboard iscsi-gateway-list* and you
can add or remove gateways manually by

ceph dashboard iscsi-gateway-add -i <file-containing-gateway-url>
[<gateway-name>]
ceph dashboard iscsi-gateway-rm <gateway-name>

which you can find the documentation here:
https://docs.ceph.com/en/quincy/mgr/dashboard/#enabling-iscsi-management
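
As a sketch (the credentials, host, and gateway name below are placeholders,
not values from your cluster; the file just needs to contain the gateway API
URL, which the iscsi REST API serves on port 5000 by default):

echo "http://<api-user>:<api-password>@<gateway-host>:5000" > /tmp/gw-url.txt   # placeholder credentials/host
ceph dashboard iscsi-gateway-add -i /tmp/gw-url.txt <gateway-name>              # name is optional
ceph dashboard iscsi-gateway-list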

Regards,
Nizam




On Fri, Jan 5, 2024 at 12:53 PM duluxoz  wrote:

> Hi All,
>
> A little help please.
>
> TL/DR: Please help with error message:
> ~~~
> REST API failure, code : 500
> Unable to access the configuration object
> Unable to contact the local API endpoint (https://localhost:5000/api)
> ~~~
>
> The Issue
> 
>
> I've been through the documentation and can't find what I'm looking for
> - possibly because I'm not really sure what it is I *am* looking for, so
> if someone can point me in the right direction I would really appreciate
> it.
>
> I get the above error message when I run the `gwcli` command from inside
> a cephadm shell.
>
> What I'm trying to do is set up a set of iSCSI Gateways in our Ceph-Reef
> 18.2.1 Cluster (yes, I know it's being deprecated as of Nov 22 - or
> whatever). We recently migrated & upgraded from a manual install of
> Quincy to a CephAdm install of Reef - everything went AOK *except* for
> the iSCSI Gateways. So we tore them down and then rebuilt them as per
> the latest documentation. So now we've got 3 gateways as per the Service
> page of the Dashboard and I'm trying to create the targets.
>
> I tried via the Dashboard but had errors, so instead I went in to do it
> via gwcli and hit the above error (which I now believe to be the cause of
> the GUI creation errors I encountered).
>
> I have absolutely no experience with podman or containers in general,
> and can't work out how to fix the issue. So I'm requesting some help -
> not to solve the problem for me, but to point me in the right direction
> to solve it myself.  :-)
>
> So, anyone?
>
> Cheers
> Dulux-Oz
> ___
> ceph-users mailing list -- ceph-users@ceph.io
> To unsubscribe send an email to ceph-users-le...@ceph.io
>
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] Re: Unable to find Refresh Interval Option in Ceph Dashboard (Ceph v18.2.1 "reef")- Seeking Assistance

2024-01-03 Thread Nizamudeen A
Hi,

The new dashboard refreshes every 5 seconds (not 25 seconds). But the
Cluster Utilization chart refreshes in sync with the
scrape interval of Prometheus (which defaults to 15s unless explicitly
changed in the Prometheus configuration).

Are you seeing the whole dashboard get refreshed after 25s? That would not
be the expected behaviour. If it is just the
Cluster Utilization charts, then you can play around with the scrape_interval,
but decreasing the scrape interval pulls metrics
more often, which would not be good in the case of big clusters.
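
For reference, in a plain prometheus.yml the global scrape interval is set as
in the snippet below (15s is the Prometheus default). If Prometheus is deployed
by cephadm, that file is generated for you, so the interval would have to be
changed through whatever manages that configuration in your deployment rather
than edited by hand:

global:
  scrape_interval: 15s   # Prometheus default; lower values pull metrics more often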

Regards,
Nizam

On Sat, Dec 30, 2023 at 11:49 PM Alam Mohammad  wrote:

> Hello,
>
> We've been using Ceph for managing our storage infrastructure, and we
> recently upgraded to the latest version (Ceph v18.2.1 "reef"). However,
> We've noticed that the "refresh interval" option seems to be missing in the
> dashboard, and we are facing challenges with monitoring our cluster in
> real-time.
>
> In the earlier version of the Ceph dashboard, there was a useful "refresh
> interval" option that allowed us to customize the update frequency of the
> dashboard. This was particularly handy for monitoring changes and
> responding promptly. However, after the upgrade to Ceph v18.2.1 "reef", We
> can't seem to find this option anywhere in the dashboard.
>
> Additionally, we observed an automatic refresh occurring every 25
> seconds. We are seeking guidance on locating and tuning the refresh interval
> settings in the latest version of Ceph to potentially reduce this interval.
> We've explored the dashboard settings thoroughly and reviewed the release
> notes for Ceph v18.2.1 "reef", but we couldn't find any mention of the
> removal of the "refresh interval" option.
>
>
> Any guidance or insights would be greatly appreciated!
>
> Thanks,
> Mohammad Saif
> Ceph Enthusiast
> ___
> ceph-users mailing list -- ceph-users@ceph.io
> To unsubscribe send an email to ceph-users-le...@ceph.io
>
>
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] CLT Meeting minutes 2023-11-23

2023-11-23 Thread Nizamudeen A
Hello,

etherpad history lost
   need a way to recover from DB or find another way to back things up

discuss the quincy/dashboard-v3 backports? was tabled from 11/1
   https://github.com/ceph/ceph/pull/54252
   agreement is to not backport breaking features to stable branches.

18.2.1
   LRC upgrade affected by https://tracker.ceph.com/issues/62570#note-4
https://ceph-storage.slack.com/archives/C1HFJ4VTN/p1700575544548809
  Figure out the reproducer and add tests

https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?id=ce0d5bd3a6c176f9a3bf867624a07119dd4d0878
is the trigger on the kernel client side?
Clients with that patch should work with the server-side code that
broke older ones
Suggestion to introduce a matrix of pre-built "older" kernels into the fs
suite

gibba vs LRC upgrades
LRC shouldn't be updated often, it should act more like a production
cluster

RCs for reef, quincy and pacific
   for next week when there is more time to discuss

Regards,
-- 

Nizamudeen A

Software Engineer

Red Hat <https://www.redhat.com/>
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] Re: Problem while upgrade 17.2.6 to 17.2.7

2023-11-16 Thread Nizamudeen A
m ['--timeout', '895', 'gather-facts']
> 2023-11-16 15:00:46,276 7fbc86e16740 DEBUG
> 
> cephadm ['--timeout', '895', 'gather-facts']
> 2023-11-16 15:01:48,291 7fec587af740 DEBUG
> 
> cephadm ['--timeout', '895', 'gather-facts']
> 2023-11-16 15:02:50,500 7f6338963740 DEBUG
> 
> cephadm ['--timeout', '895', 'gather-facts']
> 2023-11-16 15:02:51,882 7fbc52e2f740 DEBUG
> 
> cephadm ['--image', '
> quay.io/ceph/ceph@sha256:56984a149e89ce282e9400ca53371ff7df74b1c7f5e979b6ec651b751931483a',
> '--timeout', '895', 'list-networks']
> 2023-11-16 15:03:53,692 7f652d1e6740 DEBUG
> ----
> cephadm ['--timeout', '895', 'gather-facts']
> 2023-11-16 15:04:56,193 7f2c66ce3740 DEBUG
> 
> cephadm ['--timeout', '895', 'gather-facts']
On 16/11/2023 at 12:41, Nizamudeen A wrote:
>
> Hello,
>
> can you also add the mgr logs at the time of this error?
>
> Regards,
>
> On Thu, Nov 16, 2023 at 4:12 PM Jean-Marc FONTANA <
> jean-marc.font...@inrs.fr> wrote:
>
>> Hello David,
>>
>> We tried what you pointed out in your message. First, it was set to
>>
>> "s3, s3website, swift, swift_auth, admin, sts, iam, subpub"
>>
>> We tried to set it to "s3, s3website, swift, swift_auth, admin, sts,
>> iam, subpub, notifications"
>>
>> and then to "s3, s3website, swift, swift_auth, admin, sts, iam,
>> notifications",
>>
>> with no success at each time.
>>
>> We tried then
>>
>>ceph dashboard reset-rgw-api-admin-resource
>>
>> or
>>
>>ceph dashboard set-rgw-api-admin-resource XXX
>>
>> getting a 500 internal error message in a red box on the upper corner
>> with the first one
>>
>> or the 404 error message with the second one.
>>
>> Thanks for your help,
>>
>> Regards,
>>
>> JM Fontana
>>
>>
>> On 14/11/2023 at 20:53, David C. wrote:
>> > Hi Jean Marc,
>> >
>> > maybe look at this parameter "rgw_enable_apis", if the values you have
>> > correspond to the default (need rgw restart) :
>> >
>> >
>> https://docs.ceph.com/en/quincy/radosgw/config-ref/#confval-rgw_enable_apis
>> >
>> > ceph config get client.rgw rgw_enable_apis
>> >
>> > 
>> >
>> > Regards,
>> >
>> > *David CASIER*
>> >
>> > 
>> >
>> >
>> >
>> > On Tue, Nov 14, 2023 at 11:45 AM, Jean-Marc FONTANA
>> >  wrote:
>> >
>> > Hello everyone,
>> >
>> > We operate two clusters that we installed with ceph-deploy in
>> > Nautilus
>> > version on Debian 10. We use them for external S3 storage
>> > (owncloud) and
>> > rbd disk images. We had them upgraded to Octopus and Pacific
>> > versions on
>> > Debian 11 and recently converted them to cephadm and upgraded to
>> > Quincy
>> > (17.2.6).
>> >
>> > As we now have the orchestrator, we tried updating to 17.2.7 using the
>> > command: # ceph orch upgrade start --image quay.io/ceph/ceph:v17.2.7
>> >
>> > Everything went well, both clusters work perfectly for our use,
>> > except
>> > that the Rados gateway configuration is no longer accessible from the
>> > dashboard with the following error message: Error connecting to Object
>> > Gateway: RGW REST API failed request with status code 404.
>> >
>> > We tried a few solutions found on the internet (reset rgw
>> > credentials,
>> > restart rgw adnd mgr, reenable dashboard, ...), unsuccessfully.
>> >
>> > Does somebody have an idea ?
>> >
>> > Best regards,
>> >
>> > Jean-Marc Fontana
>> > ___
>> > ceph-users mailing list -- ceph-users@ceph.io
>> > To unsubscribe send an email to ceph-users-le...@ceph.io
>> >
>> ___
>> ceph-users mailing list -- ceph-users@ceph.io
>> To unsubscribe send an email to ceph-users-le...@ceph.io
>>
>
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] Re: Problem while upgrade 17.2.6 to 17.2.7

2023-11-16 Thread Nizamudeen A
Hello,

can you also add the mgr logs at the time of this error?

Regards,

On Thu, Nov 16, 2023 at 4:12 PM Jean-Marc FONTANA 
wrote:

> Hello David,
>
> We tried what you pointed out in your message. First, it was set to
>
> "s3, s3website, swift, swift_auth, admin, sts, iam, subpub"
>
> We tried to set it to "s3, s3website, swift, swift_auth, admin, sts,
> iam, subpub, notifications"
>
> and then to "s3, s3website, swift, swift_auth, admin, sts, iam,
> notifications",
>
> with no success at each time.
>
> We tried then
>
>ceph dashboard reset-rgw-api-admin-resource
>
> or
>
>ceph dashboard set-rgw-api-admin-resource XXX
>
> getting a 500 internal error message in a red box on the upper corner
> with the first one
>
> or the 404 error message with the second one.
>
> Thanks for your helping,
>
> Cordialement,
>
> JM Fontana
>
>
> On 14/11/2023 at 20:53, David C. wrote:
> > Hi Jean Marc,
> >
> > maybe look at this parameter "rgw_enable_apis", if the values you have
> > correspond to the default (need rgw restart) :
> >
> >
> https://docs.ceph.com/en/quincy/radosgw/config-ref/#confval-rgw_enable_apis
> >
> > ceph config get client.rgw rgw_enable_apis
> >
> > 
> >
> > Regards,
> >
> > *David CASIER*
> >
> > 
> >
> >
> >
> > On Tue, Nov 14, 2023 at 11:45 AM, Jean-Marc FONTANA
> >  wrote:
> >
> > Hello everyone,
> >
> > We operate two clusters that we installed with ceph-deploy in
> > Nautilus
> > version on Debian 10. We use them for external S3 storage
> > (owncloud) and
> > rbd disk images. We had them upgraded to Octopus and Pacific
> > versions on
> > Debian 11 and recently converted them to cephadm and upgraded to
> > Quincy
> > (17.2.6).
> >
> > As we now have the orchestrator, we tried updating to 17.2.7 using the
> > command: # ceph orch upgrade start --image quay.io/ceph/ceph:v17.2.7
> >
> > Everything went well, both clusters work perfectly for our use,
> > except
> > that the Rados gateway configuration is no longer accessible from the
> > dashboard with the following error messageError connecting to Object
> > Gateway: RGW REST API failed request with status code 404.
> >
> > We tried a few solutions found on the internet (reset rgw
> > credentials,
> > restart rgw adnd mgr, reenable dashboard, ...), unsuccessfully.
> >
> > Does somebody have an idea ?
> >
> > Best regards,
> >
> > Jean-Marc Fontana
> > ___
> > ceph-users mailing list -- ceph-users@ceph.io
> > To unsubscribe send an email to ceph-users-le...@ceph.io
> >
> ___
> ceph-users mailing list -- ceph-users@ceph.io
> To unsubscribe send an email to ceph-users-le...@ceph.io
>
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] Re: reef 18.2.1 QE Validation status

2023-11-14 Thread Nizamudeen A
dashboard approved. Failure known and unrelated!

On Tue, Nov 14, 2023, 22:34 Adam King  wrote:

> orch approved.  After reruns, orch/cephadm was just hitting two known
> (nonblocker) issues and orch/rook teuthology suite is known to not be
> functional currently.
>
> On Tue, Nov 14, 2023 at 10:33 AM Yuri Weinstein 
> wrote:
>
>> Build 4 with https://github.com/ceph/ceph/pull/54224 was built and I
>> ran the tests below and asking for approvals:
>>
>> smoke - Laura
>> rados/mgr - PASSED
>> rados/dashboard - Nizamudeen
>> orch - Adam King
>>
>> See Build 4 runs - https://tracker.ceph.com/issues/63443#note-1
>>
>> On Tue, Nov 14, 2023 at 12:21 AM Redouane Kachach 
>> wrote:
>> >
>> > Yes, cephadm has some tests for monitoring that should be enough to
>> ensure basic functionality is working properly. The rest of the changes in
>> the PR are for rook orchestrator.
>> >
>> > On Tue, Nov 14, 2023 at 5:04 AM Nizamudeen A  wrote:
>> >>
>> >> dashboard changes are minimal and approved. and since the dashboard
>> change is related to the
>> >> monitoring stack (prometheus..) which is something not covered in the
>> dashboard test suites, I don't think running it is necessary.
>> >> But maybe the cephadm suite has some monitoring stack related testings
>> written?
>> >>
>> >> On Tue, Nov 14, 2023 at 1:10 AM Yuri Weinstein 
>> wrote:
>> >>>
>> >>> Ack Travis.
>> >>>
>> >>> Since it touches a dashboard, Nizam - please reply/approve.
>> >>>
>> >>> I assume that rados/dashboard tests will be sufficient, but expecting
>> >>> your recommendations.
>> >>>
>> >>> This addition will make the final release likely to be pushed.
>> >>>
>> >>> On Mon, Nov 13, 2023 at 11:30 AM Travis Nielsen 
>> wrote:
>> >>> >
>> >>> > I'd like to see these changes for much improved dashboard
>> integration with Rook. The changes are to the rook mgr orchestrator module,
>> and supporting test changes. Thus, this should be very low risk to the ceph
>> release. I don't know the details of the teuthology suites, but I would
>> think only suites involving the mgr modules would be necessary.
>> >>> >
>> >>> > Travis
>> >>> >
>> >>> > On Mon, Nov 13, 2023 at 12:14 PM Yuri Weinstein <
>> ywein...@redhat.com> wrote:
>> >>> >>
>> >>> >> Redouane
>> >>> >>
>> >>> >> What would be a sufficient level of testing (teuthology suite(s))
>> >>> >> assuming this PR is approved to be added?
>> >>> >>
>> >>> >> On Mon, Nov 13, 2023 at 9:13 AM Redouane Kachach <
>> rkach...@redhat.com> wrote:
>> >>> >> >
>> >>> >> > Hi Yuri,
>> >>> >> >
>> >>> >> > I've just backported to reef several fixes that I introduced in
>> the last months for the rook orchestrator. Most of them are fixes for
>> dashboard issues/crashes that only happen on Rook environments. The PR [1]
>> has all the changes and it was merged into reef this morning. We really
>> need these changes to be part of the next reef release as the upcoming Rook
>> stable version will be based on it.
>> >>> >> >
>> >>> >> > Please, can you include those changes in the upcoming reef
>> 18.2.1 release?
>> >>> >> >
>> >>> >> > [1] https://github.com/ceph/ceph/pull/54224
>> >>> >> >
>> >>> >> > Thanks a lot,
>> >>> >> > Redouane.
>> >>> >> >
>> >>> >> >
>> >>> >> > On Mon, Nov 13, 2023 at 6:03 PM Yuri Weinstein <
>> ywein...@redhat.com> wrote:
>> >>> >> >>
>> >>> >> >> -- Forwarded message -
>> >>> >> >> From: Venky Shankar 
>> >>> >> >> Date: Thu, Nov 9, 2023 at 11:52 PM
>> >>> >> >> Subject: Re: [ceph-users] Re: reef 18.2.1 QE Validation status
>> >>> >> >> To: Yuri Weinstein 
>> >>> >> >> Cc: dev , ceph-users 
>> >>> >> >>
>> >>> >> >>
>> >>> >> >> Hi Yuri,
>

[ceph-users] Re: reef 18.2.1 QE Validation status

2023-11-13 Thread Nizamudeen A
The dashboard changes are minimal and approved, and since the dashboard change
is related to the
monitoring stack (Prometheus), which is something not covered in the
dashboard test suites, I don't think running it is necessary.
But maybe the cephadm suite has some monitoring-stack-related tests
written?

On Tue, Nov 14, 2023 at 1:10 AM Yuri Weinstein  wrote:

> Ack Travis.
>
> Since it touches a dashboard, Nizam - please reply/approve.
>
> I assume that rados/dashboard tests will be sufficient, but expecting
> your recommendations.
>
> This addition will make the final release likely to be pushed.
>
> On Mon, Nov 13, 2023 at 11:30 AM Travis Nielsen 
> wrote:
> >
> > I'd like to see these changes for much improved dashboard integration
> with Rook. The changes are to the rook mgr orchestrator module, and
> supporting test changes. Thus, this should be very low risk to the ceph
> release. I don't know the details of the teuthology suites, but I would
> think only suites involving the mgr modules would be necessary.
> >
> > Travis
> >
> > On Mon, Nov 13, 2023 at 12:14 PM Yuri Weinstein 
> wrote:
> >>
> >> Redouane
> >>
> >> What would be a sufficient level of testing (teuthology suite(s))
> >> assuming this PR is approved to be added?
> >>
> >> On Mon, Nov 13, 2023 at 9:13 AM Redouane Kachach 
> wrote:
> >> >
> >> > Hi Yuri,
> >> >
> >> > I've just backported to reef several fixes that I introduced in the
> last months for the rook orchestrator. Most of them are fixes for dashboard
> issues/crashes that only happen on Rook environments. The PR [1] has all
> the changes and it was merged into reef this morning. We really need these
> changes to be part of the next reef release as the upcoming Rook stable
> version will be based on it.
> >> >
> >> > Please, can you include those changes in the upcoming reef 18.2.1
> release?
> >> >
> >> > [1] https://github.com/ceph/ceph/pull/54224
> >> >
> >> > Thanks a lot,
> >> > Redouane.
> >> >
> >> >
> >> > On Mon, Nov 13, 2023 at 6:03 PM Yuri Weinstein 
> wrote:
> >> >>
> >> >> -- Forwarded message -
> >> >> From: Venky Shankar 
> >> >> Date: Thu, Nov 9, 2023 at 11:52 PM
> >> >> Subject: Re: [ceph-users] Re: reef 18.2.1 QE Validation status
> >> >> To: Yuri Weinstein 
> >> >> Cc: dev , ceph-users 
> >> >>
> >> >>
> >> >> Hi Yuri,
> >> >>
> >> >> On Fri, Nov 10, 2023 at 4:55 AM Yuri Weinstein 
> wrote:
> >> >> >
> >> >> > I've updated all approvals and merged PRs in the tracker and it
> looks
> >> >> > like we are ready for gibba, LRC upgrades pending approval/update
> from
> >> >> > Venky.
> >> >>
> >> >> The smoke test failure is caused by missing (kclient) patches in
> >> >> Ubuntu 20.04 that certain parts of the fs suite (via smoke tests)
> rely
> >> >> on. More details here
> >> >>
> >> >> https://tracker.ceph.com/issues/63488#note-8
> >> >>
> >> >> The kclient tests in smoke pass with other distro's and the fs suite
> >> >> tests have been reviewed and look good. Run details are here
> >> >>
> >> >>
> https://tracker.ceph.com/projects/cephfs/wiki/Reef#07-Nov-2023
> >> >>
> >> >> The smoke failure is noted as a known issue for now. Consider this
> run
> >> >> as "fs approved".
> >> >>
> >> >> >
> >> >> > On Thu, Nov 9, 2023 at 1:31 PM Radoslaw Zarzynski <
> rzarz...@redhat.com> wrote:
> >> >> > >
> >> >> > > rados approved!
> >> >> > >
> >> >> > > Details are here:
> https://tracker.ceph.com/projects/rados/wiki/REEF#1821-Review.
> >> >> > >
> >> >> > > On Mon, Nov 6, 2023 at 10:33 PM Yuri Weinstein <
> ywein...@redhat.com> wrote:
> >> >> > > >
> >> >> > > > Details of this release are summarized here:
> >> >> > > >
> >> >> > > > https://tracker.ceph.com/issues/63443#note-1
> >> >> > > >
> >> >> > > > Seeking approvals/reviews for:
> >> >> > > >
> >> >> > > > smoke - Laura, Radek, Prashant, Venky (POOL_APP_NOT_ENABLE
> failures)
> >> >> > > > rados - Neha, Radek, Travis, Ernesto, Adam King
> >> >> > > > rgw - Casey
> >> >> > > > fs - Venky
> >> >> > > > orch - Adam King
> >> >> > > > rbd - Ilya
> >> >> > > > krbd - Ilya
> >> >> > > > upgrade/quincy-x (reef) - Laura PTL
> >> >> > > > powercycle - Brad
> >> >> > > > perf-basic - Laura, Prashant (POOL_APP_NOT_ENABLE failures)
> >> >> > > >
> >> >> > > > Please reply to this email with approval and/or trackers of
> known
> >> >> > > > issues/PRs to address them.
> >> >> > > >
> >> >> > > > TIA
> >> >> > > > YuriW
> >> >> > > > ___
> >> >> > > > Dev mailing list -- d...@ceph.io
> >> >> > > > To unsubscribe send an email to dev-le...@ceph.io
> >> >> > > >
> >> >> > >
> >> >> > ___
> >> >> > ceph-users mailing list -- ceph-users@ceph.io
> >> >> > To unsubscribe send an email to ceph-users-le...@ceph.io
> >> >>
> >> >>
> >> >>
> >> >> --
> >> >> Cheers,
> >> >> Venky
> >> >>
> >> ___
> >> ceph-users mailing list -- ceph-users@ceph.io
> >> To unsubscribe send an email to ceph-users-le...@ceph.io

[ceph-users] Re: Ceph Dashboard - Community News Sticker [Feedback]

2023-11-09 Thread Nizamudeen A
Thank you everyone for the feedback!

It's always good to hear whether something adds value to the UI and to its
users before we go ahead and start building it.

And btw, if people are wondering whether we are short on features, the
short answer is no. Along with Multi-Cluster Management & monitoring
through the ceph-dashboard, some additional management features will be
coming in the upcoming major release. The News Sticker was one
of the items on that list.

If you have more feedback on something that you guys would want to see in
the GUI, please let us know and we'll add it to our list and work on it.

Regards,
Nizam

On Thu, Nov 9, 2023 at 7:24 PM Anthony D'Atri 
wrote:

> IMHO we don't need yet another place to look for information, especially
> one that some operators never see.  ymmv.
>
> >
> >> Hello,
> >>
> >> We wanted to get some feedback on one of the features that we are
> planning
> >> to bring in for upcoming releases.
> >>
> >> On the Ceph GUI, we thought it could be interesting to show information
> >> regarding the community events, ceph release information (Release notes
> and
> >> changelogs) and maybe even notify about new blog post releases and also
> >> inform regarding the community group meetings. There would be options to
> >> subscribe to the events that you want to get notified.
> >>
> >> Before proceeding with its implementation, we thought it'd be good to
> get
> >> some community feedback around it. So please let us know what you think
> >> (the goods and the bads).
> >>
> >> Regards,
> >> --
> >>
> >> Nizamudeen A
> ___
> ceph-users mailing list -- ceph-users@ceph.io
> To unsubscribe send an email to ceph-users-le...@ceph.io
>
>
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] Ceph Dashboard - Community News Sticker [Feedback]

2023-11-08 Thread Nizamudeen A
Hello,

We wanted to get some feedback on one of the features that we are planning
to bring in for upcoming releases.

On the Ceph GUI, we thought it could be interesting to show information
regarding the community events, ceph release information (Release notes and
changelogs) and maybe even notify about new blog post releases and also
inform regarding the community group meetings. There would be options to
subscribe to the events that you want to get notified.

Before proceeding with its implementation, we thought it'd be good to get
some community feedback around it. So please let us know what you think
(the goods and the bads).

Regards,
-- 

Nizamudeen A

Software Engineer

Red Hat <https://www.redhat.com/>
<https://www.redhat.com/>
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] Re: reef 18.2.1 QE Validation status

2023-11-08 Thread Nizamudeen A
dashboard approved, the test failure is known cypress issue which is not a
blocker.

Regards,
Nizam

On Wed, Nov 8, 2023, 21:41 Yuri Weinstein  wrote:

> We merged 3 PRs and rebuilt "reef-release" (Build 2)
>
> Seeking approvals/reviews for:
>
> smoke - Laura, Radek 2 jobs failed in "objectstore/bluestore" tests
> (see Build 2)
> rados - Neha, Radek, Travis, Ernesto, Adam King
> rgw - Casey reapprove on Build 2
> fs - Venky, approve on Build 2
> orch - Adam King
> upgrade/quincy-x (reef) - Laura PTL
> powercycle - Brad (known issues)
>
> We need to close
> https://tracker.ceph.com/issues/63391
> (https://github.com/ceph/ceph/pull/54392) - Travis, Guillaume
> https://tracker.ceph.com/issues/63151 - Adam King do we need anything for
> this?
>
> On Wed, Nov 8, 2023 at 6:33 AM Travis Nielsen  wrote:
> >
> > Yuri, we need to add this issue as a blocker for 18.2.1. We discovered
> this issue after the release of 17.2.7, and don't want to hit the same
> blocker in 18.2.1 where some types of OSDs are failing to be created in new
> clusters, or failing to start in upgraded clusters.
> > https://tracker.ceph.com/issues/63391
> >
> > Thanks!
> > Travis
> >
> > On Wed, Nov 8, 2023 at 4:41 AM Venky Shankar 
> wrote:
> >>
> >> Hi Yuri,
> >>
> >> On Wed, Nov 8, 2023 at 2:32 AM Yuri Weinstein 
> wrote:
> >> >
> >> > 3 PRs above mentioned were merged and I am returning some tests:
> >> >
> https://pulpito.ceph.com/?sha1=55e3239498650453ff76a9b06a37f1a6f488c8fd
> >> >
> >> > Still seeing approvals.
> >> > smoke - Laura, Radek, Prashant, Venky in progress
> >> > rados - Neha, Radek, Travis, Ernesto, Adam King
> >> > rgw - Casey in progress
> >> > fs - Venky
> >>
> >> There's a failure in the fs suite
> >>
> >>
> https://pulpito.ceph.com/vshankar-2023-11-07_05:14:36-fs-reef-release-distro-default-smithi/7450325/
> >>
> >> Seems to be related to nfs-ganesha. I've reached out to Frank Filz
> >> (#cephfs on ceph slack) to have a look. WIll update as soon as
> >> possible.
> >>
> >> > orch - Adam King
> >> > rbd - Ilya approved
> >> > krbd - Ilya approved
> >> > upgrade/quincy-x (reef) - Laura PTL
> >> > powercycle - Brad
> >> > perf-basic - in progress
> >> >
> >> >
> >> > On Tue, Nov 7, 2023 at 8:38 AM Casey Bodley 
> wrote:
> >> > >
> >> > > On Mon, Nov 6, 2023 at 4:31 PM Yuri Weinstein 
> wrote:
> >> > > >
> >> > > > Details of this release are summarized here:
> >> > > >
> >> > > > https://tracker.ceph.com/issues/63443#note-1
> >> > > >
> >> > > > Seeking approvals/reviews for:
> >> > > >
> >> > > > smoke - Laura, Radek, Prashant, Venky (POOL_APP_NOT_ENABLE
> failures)
> >> > > > rados - Neha, Radek, Travis, Ernesto, Adam King
> >> > > > rgw - Casey
> >> > >
> >> > > rgw results are approved. https://github.com/ceph/ceph/pull/54371
> >> > > merged to reef but is needed on reef-release
> >> > >
> >> > > > fs - Venky
> >> > > > orch - Adam King
> >> > > > rbd - Ilya
> >> > > > krbd - Ilya
> >> > > > upgrade/quincy-x (reef) - Laura PTL
> >> > > > powercycle - Brad
> >> > > > perf-basic - Laura, Prashant (POOL_APP_NOT_ENABLE failures)
> >> > > >
> >> > > > Please reply to this email with approval and/or trackers of known
> >> > > > issues/PRs to address them.
> >> > > >
> >> > > > TIA
> >> > > > YuriW
> >> > > > ___
> >> > > > ceph-users mailing list -- ceph-users@ceph.io
> >> > > > To unsubscribe send an email to ceph-users-le...@ceph.io
> >> > > >
> >> > >
> >> > ___
> >> > ceph-users mailing list -- ceph-users@ceph.io
> >> > To unsubscribe send an email to ceph-users-le...@ceph.io
> >>
> >>
> >>
> >> --
> >> Cheers,
> >> Venky
> >> ___
> >> Dev mailing list -- d...@ceph.io
> >> To unsubscribe send an email to dev-le...@ceph.io
> ___
> Dev mailing list -- d...@ceph.io
> To unsubscribe send an email to dev-le...@ceph.io
>
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] Re: 17.2.7 quincy dashboard issues

2023-11-03 Thread Nizamudeen A
>
> Alternately publish
> all metrics to prometheus with fsid label then you can auto-filter
> based on the fsid of the ceph cluster since fsid is unique.
>

This is exactly something we are looking into, as we are working on support
for multi-cluster monitoring and management from the Ceph Dashboard, which is
currently an ongoing PoC. Thanks for providing more context here.

As of now, I don't see a way to make it configurable in the dashboard, but
you can expect that to be added in the near future.

Regards,
Nizam

On Fri, Nov 3, 2023 at 7:54 AM Matthew Darwin  wrote:

> In my case I'm adding a label that is unique to each ceph cluster and
> then can filter on that.  In my ceph dashboard in grafana I've added a
> pull-down list to check each different ceph cluster.
>
> You need a way for me to configure what labels to filter on so I can
> match it up with how I configured the prometheus. Alternately publish
> all metrics to prometheus with fsid label then you can auto-filter
> based on the fsid of the ceph cluster since fsid is unique.
>
> On 2023-11-02 01:03, Nizamudeen A wrote:
> >
> > We have 4 ceph clusters going into the same prometheus instance.
> >
> > Just curious, In the prometheus, if you want to see the details for
> > a single cluster, how's it done through query?
> >
> > For reference, these are the queries that we are currently using now.
> >
> > USEDCAPACITY = 'ceph_cluster_total_used_bytes',
> >   WRITEIOPS = 'sum(rate(ceph_pool_wr[1m]))',
> >   READIOPS = 'sum(rate(ceph_pool_rd[1m]))',
> >   READLATENCY = 'avg_over_time(ceph_osd_apply_latency_ms[1m])',
> >   WRITELATENCY = 'avg_over_time(ceph_osd_commit_latency_ms[1m])',
> >   READCLIENTTHROUGHPUT = 'sum(rate(ceph_pool_rd_bytes[1m]))',
> >   WRITECLIENTTHROUGHPUT = 'sum(rate(ceph_pool_wr_bytes[1m]))',
> >   RECOVERYBYTES = 'sum(rate(ceph_osd_recovery_bytes[1m]))'
> >
> > We might not have considered the possibility of multiple
> > ceph-clusters pointing to a single prometheus instance.
> > In that case there should be some filtering done with cluster id or
> > something to properly identify it.
> >
> > FYI @Pedro Gonzalez Gomez <mailto:pegon...@redhat.com> @Ankush Behl
> > <mailto:anb...@redhat.com> @Aashish Sharma <mailto:aasha...@redhat.com>
> >
> > Regards,
> > Nizam
> >
> > On Mon, Oct 30, 2023 at 11:05 PM Matthew Darwin  wrote:
> >
> > Ok, so I tried the new ceph dashboard by "set-prometheus-api-host"
> > (note "host" and not "url") and it returns the wrong data.  We
> > have 4
> > ceph clusters going into the same prometheus instance.  How does it
> > know which data to pull? Do I need to pass a promql query?
> >
> > The capacity widget at the top right (not using prometheus)
> > shows 35%
> > of 51 TiB used (test cluster data)... This is correct. The chart
> > shows
> > use capacity is 1.7 PiB, which is coming from the production
> > cluster
> > (incorrect).
> >
> > Ideas?
> >
> >
> > On 2023-10-30 11:30, Nizamudeen A wrote:
> > > Ah yeah, probably that's why the utilization charts are empty
> > > because it relies on
> > > the prometheus info.
> > >
> > > And I raised a PR to disable the new dashboard in quincy.
> > > https://github.com/ceph/ceph/pull/54250
> > >
> > > Regards,
> > > Nizam
> > >
> > > On Mon, Oct 30, 2023 at 6:09 PM Matthew Darwin
> >  wrote:
> > >
> > > Hello,
> > >
> > > We're not using prometheus within ceph (ceph dashboards
> > show in our
> > > grafana which is hosted elsewhere). The old dashboard
> > showed the
> > > metrics fine, so not sure why in a patch release we would need
> > > to make
> > > configuration changes to get the same metrics Agree it
> > > should be
> > > off by default.
> > >
> > > "ceph dashboard feature disable dashboard" works to put
> > the old
> > > dashboard back.  Thanks.
> > >
> > > On 2023-10-30 00:09, Nizamudeen A wrote:
> > > > Hi Matthew,
> > > >
> > > > Is the prometheus configured in the cluster? And also the
> > > > PROMETHEUS_API_URL is set? You can set it manually by ceph
>

[ceph-users] Re: 17.2.7 quincy dashboard issues

2023-11-01 Thread Nizamudeen A
>
> We have 4 ceph clusters going into the same prometheus instance.
>
Just curious, In the prometheus, if you want to see the details for a
single cluster, how's it done through query?

For reference, these are the queries that we are currently using now.

  USEDCAPACITY = 'ceph_cluster_total_used_bytes',
>   WRITEIOPS = 'sum(rate(ceph_pool_wr[1m]))',
>   READIOPS = 'sum(rate(ceph_pool_rd[1m]))',
>   READLATENCY = 'avg_over_time(ceph_osd_apply_latency_ms[1m])',
>   WRITELATENCY = 'avg_over_time(ceph_osd_commit_latency_ms[1m])',
>   READCLIENTTHROUGHPUT = 'sum(rate(ceph_pool_rd_bytes[1m]))',
>   WRITECLIENTTHROUGHPUT = 'sum(rate(ceph_pool_wr_bytes[1m]))',
>   RECOVERYBYTES = 'sum(rate(ceph_osd_recovery_bytes[1m]))'


We might not have considered the possibility of multiple ceph clusters
pointing to a single prometheus instance.
In that case there should be some filtering by a cluster id or similar
label to properly identify each cluster.
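
As a rough sketch (assuming the metrics carry a cluster-identifying label
such as `fsid`, which is exactly the suggestion above and depends on how
your Prometheus scrapes each cluster), the same queries could be scoped per
cluster like this:

  USEDCAPACITY = 'ceph_cluster_total_used_bytes{fsid="<your-fsid>"}',
  WRITEIOPS = 'sum(rate(ceph_pool_wr{fsid="<your-fsid>"}[1m]))',
  READIOPS = 'sum(rate(ceph_pool_rd{fsid="<your-fsid>"}[1m]))'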

FYI @Pedro Gonzalez Gomez  @Ankush Behl
 @Aashish Sharma 

Regards,
Nizam

On Mon, Oct 30, 2023 at 11:05 PM Matthew Darwin  wrote:

> Ok, so I tried the new ceph dashboard by "set-prometheus-api-host"
> (note "host" and not "url") and it returns the wrong data.  We have 4
> ceph clusters going into the same prometheus instance.  How does it
> know which data to pull? Do I need to pass a promql query?
>
> The capacity widget at the top right (not using prometheus) shows 35%
> of 51 TiB used (test cluster data)... This is correct. The chart shows
> use capacity is 1.7 PiB, which is coming from the production cluster
> (incorrect).
>
> Ideas?
>
>
> On 2023-10-30 11:30, Nizamudeen A wrote:
> > Ah yeah, probably that's why the utilization charts are empty
> > because it relies on
> > the prometheus info.
> >
> > And I raised a PR to disable the new dashboard in quincy.
> > https://github.com/ceph/ceph/pull/54250
> >
> > Regards,
> > Nizam
> >
> > On Mon, Oct 30, 2023 at 6:09 PM Matthew Darwin  wrote:
> >
> > Hello,
> >
> > We're not using prometheus within ceph (ceph dashboards show in our
> > grafana which is hosted elsewhere). The old dashboard showed the
> > metrics fine, so not sure why in a patch release we would need
> > to make
> > configuration changes to get the same metrics Agree it
> > should be
> > off by default.
> >
> > "ceph dashboard feature disable dashboard" works to put the old
> > dashboard back.  Thanks.
> >
> > On 2023-10-30 00:09, Nizamudeen A wrote:
> > > Hi Matthew,
> > >
> > > Is the prometheus configured in the cluster? And also the
> > > PROMETHEUS_API_URL is set? You can set it manually by ceph
> > dashboard
> > > set-prometheus-api-url .
> > >
> > > You can switch to the old Dashboard by switching the feature
> > toggle in the
> > > dashboard. `ceph dashboard feature disable dashboard` and
> > reloading the
> > > page. Probably this should have been disabled by default.
> > >
> > > Regards,
> > > Nizam
> > >
> > > On Sun, Oct 29, 2023, 23:04 Matthew Darwin wrote:
> > >
> > >> Hi all,
> > >>
> > >> I see 17.2.7 quincy is published as debian-bullseye packages.  So I
> > So I
> > >> tried it on a test cluster.
> > >>
> > >> I must say I was not expecting the big dashboard change in a
> > patch
> > >> release.  Also all the "cluster utilization" numbers are all
> > blank now
> > >> (any way to fix it?), so the dashboard is much less usable now.
> > >>
> > >> Thoughts?
> > >> ___
> > >> ceph-users mailing list -- ceph-users@ceph.io
> > >> To unsubscribe send an email to ceph-users-le...@ceph.io
> > >>
> > > ___
> > > ceph-users mailing list -- ceph-users@ceph.io
> > > To unsubscribe send an email to ceph-users-le...@ceph.io
> > ___
> > ceph-users mailing list -- ceph-users@ceph.io
> > To unsubscribe send an email to ceph-users-le...@ceph.io
> >
> ___
> ceph-users mailing list -- ceph-users@ceph.io
> To unsubscribe send an email to ceph-users-le...@ceph.io
>
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] Re: 17.2.7 quincy

2023-10-30 Thread Nizamudeen A
Ah yeah, that's probably why the utilization charts are empty: they rely on
the prometheus info.

And I raised a PR to disable the new dashboard in quincy.
https://github.com/ceph/ceph/pull/54250

Regards,
Nizam

On Mon, Oct 30, 2023 at 6:09 PM Matthew Darwin  wrote:

> Hello,
>
> We're not using prometheus within ceph (ceph dashboards show in our
> grafana which is hosted elsewhere). The old dashboard showed the
> metrics fine, so not sure why in a patch release we would need to make
> configuration changes to get the same metrics. Agree it should be
> off by default.
>
> "ceph dashboard feature disable dashboard" works to put the old
> dashboard back.  Thanks.
>
> On 2023-10-30 00:09, Nizamudeen A wrote:
> > Hi Matthew,
> >
> > Is the prometheus configured in the cluster? And also the
> > PROMETHEUS_API_URL is set? You can set it manually by ceph dashboard
> > set-prometheus-api-url .
> >
> > You can switch to the old Dashboard by switching the feature toggle in
> the
> > dashboard. `ceph dashboard feature disable dashboard` and reloading the
> > page. Probably this should have been disabled by default.
> >
> > Regards,
> > Nizam
> >
> > On Sun, Oct 29, 2023, 23:04 Matthew Darwin  wrote:
> >
> >> Hi all,
> >>
> >> I see 17.2.7 quincy is published as debian-bullseye packages.  So I
> >> tried it on a test cluster.
> >>
> >> I must say I was not expecting the big dashboard change in a patch
> >> release.  Also all the "cluster utilization" numbers are all blank now
> >> (any way to fix it?), so the dashboard is much less usable now.
> >>
> >> Thoughts?
> >> ___
> >> ceph-users mailing list -- ceph-users@ceph.io
> >> To unsubscribe send an email to ceph-users-le...@ceph.io
> >>
> > ___
> > ceph-users mailing list -- ceph-users@ceph.io
> > To unsubscribe send an email to ceph-users-le...@ceph.io
> ___
> ceph-users mailing list -- ceph-users@ceph.io
> To unsubscribe send an email to ceph-users-le...@ceph.io
>
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] Re: 17.2.7 quincy

2023-10-29 Thread Nizamudeen A
Hi Matthew,

Is the prometheus configured in the cluster? And also the
PROMETHEUS_API_URL is set? You can set it manually by ceph dashboard
set-prometheus-api-url .

You can switch to the old Dashboard by switching the feature toggle in the
dashboard. `ceph dashboard feature disable dashboard` and reloading the
page. Probably this should have been disabled by default.
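
For reference, a rough sketch of the commands involved (the host and port are
only placeholders for wherever your Prometheus instance is reachable, and the
enable command is just for bringing the new landing page back later):

  ceph dashboard set-prometheus-api-url http://<prometheus-host>:9090
  ceph dashboard feature disable dashboard   # switch back to the old landing page
  ceph dashboard feature enable dashboard    # re-enable the new landing page later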

Regards,
Nizam

On Sun, Oct 29, 2023, 23:04 Matthew Darwin  wrote:

> Hi all,
>
> I see 17.2.7 quincy is published as debian-bullseye packages.  So I
> tried it on a test cluster.
>
> I must say I was not expecting the big dashboard change in a patch
> release.  Also all the "cluster utilization" numbers are all blank now
> (any way to fix it?), so the dashboard is much less usable now.
>
> Thoughts?
> ___
> ceph-users mailing list -- ceph-users@ceph.io
> To unsubscribe send an email to ceph-users-le...@ceph.io
>
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] Re: quincy v17.2.7 QE Validation status

2023-10-17 Thread Nizamudeen A
dashboard approved!

On Tue, Oct 17, 2023 at 12:22 AM Yuri Weinstein  wrote:

> Details of this release are summarized here:
>
> https://tracker.ceph.com/issues/63219#note-2
> Release Notes - TBD
>
> Issue https://tracker.ceph.com/issues/63192 appears to be failing several
> runs.
> Should it be fixed for this release?
>
> Seeking approvals/reviews for:
>
> smoke - Laura
> rados - Laura, Radek, Travis, Ernesto, Adam King
>
> rgw - Casey
> fs - Venky
> orch - Adam King
>
> rbd - Ilya
> krbd - Ilya
>
> upgrade/quincy-p2p - Known issue IIRC, Casey pls confirm/approve
>
> client-upgrade-quincy-reef - Laura
>
> powercycle - Brad pls confirm
>
> ceph-volume - Guillaume pls take a look
>
> Please reply to this email with approval and/or trackers of known
> issues/PRs to address them.
>
> Josh, Neha - gibba and LRC upgrades -- N/A for quincy now after reef
> release.
>
> Thx
> YuriW
> ___
> Dev mailing list -- d...@ceph.io
> To unsubscribe send an email to dev-le...@ceph.io
>
>
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] Re: Awful new dashboard in Reef

2023-09-15 Thread Nizamudeen A
The source is prometheus. The below are the set of queries that we use to
populate the charts.

USEDCAPACITY = 'ceph_cluster_total_used_bytes',
WRITEIOPS = 'sum(rate(ceph_pool_wr[1m]))',
READIOPS = 'sum(rate(ceph_pool_rd[1m]))',
READLATENCY = 'avg_over_time(ceph_osd_apply_latency_ms[1m])',
WRITELATENCY = 'avg_over_time(ceph_osd_commit_latency_ms[1m])',
READCLIENTTHROUGHPUT = 'sum(rate(ceph_pool_rd_bytes[1m]))',
WRITECLIENTTHROUGHPUT = 'sum(rate(ceph_pool_wr_bytes[1m]))',
RECOVERYBYTES = 'sum(rate(ceph_osd_recovery_bytes[1m]))'
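
If you want to sanity-check what a given panel should be showing, you can run
the same expression against the Prometheus HTTP API directly, e.g. (a sketch,
assuming the default Prometheus port):

  curl -sG 'http://<prometheus-host>:9090/api/v1/query' \
       --data-urlencode 'query=sum(rate(ceph_pool_wr_bytes[1m]))'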


And I think all of them are available in grafana. Correct me if I'm
wrong @Pedro
Gonzalez Gomez 

Regards,
Nizam

On Fri, Sep 15, 2023 at 5:22 PM Marc  wrote:

>
> Where can I find the source of this dashboard? I assume this is also in
> grafana not?
>
> >
> > Hmmm, I think I like this capacity card, much better than the one I am
> > currently using ;)
> >
> > >
> > > We have some screenshots in a blog post we did a while back:
> > > https://ceph.io/en/news/blog/2023/landing-page/
> > > and also in the documentation:
> > >
> https://docs.ceph.com/en/latest/mgr/dashboard/#overview-of-the-dashboard-
> > > landing-page
> > >
> > > Regards,
> > >
> > > On Wed, Sep 13, 2023 at 5:59 PM Marc wrote:
> > >
> > >
> > > Screen captures please. Not everyone is installing the default
> ones.
> > >
> > > >
> > > > We are collecting these feedbacks. For a while we weren't
> focusing
> > > on the
> > > > mobile view
> > > > of the dashboard. If there are users using those, we'll look into
> > > it as
> > > > well. Will let everyone know
> > > > soon with the improvements in the UI.
> > > >
> > > > Regards,
> > > > Nizam
> > > >
> > > > On Mon, Sep 11, 2023 at 2:23 PM Nicola Mori wrote:
> > > >
> > > > > Hi Nizam,
> > > > >
> > > > > many thanks for the tip. And sorry for the quite rude subject
> of
> > > my
> > > > > post, I really appreciate the dashboard revamp effort but I was
> > > > > frustrated about the malfunctioning and missing features. By
> the
> > > way,
> > > > > one of the things that really need to be improved is the
> support
> > > for
> > > > > mobile devices, the dashboard on my phone is quite unusable,
> both
> > > the
> > > > > old and the new one (although the old gets better when browsing
> > > in
> > > > > desktop mode).
> > > > > Thanks again,
> > > > >
> > > ___
> > > ceph-users mailing list -- ceph-users@ceph.io
> > > To unsubscribe send an email to ceph-users-le...@ceph.io
> > > 
> > >
> >
> > ___
> > ceph-users mailing list -- ceph-users@ceph.io
> > To unsubscribe send an email to ceph-users-le...@ceph.io
>
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] Re: Awful new dashboard in Reef

2023-09-13 Thread Nizamudeen A
Hey Marc,

We have some screenshots in a blog post we did a while back:
https://ceph.io/en/news/blog/2023/landing-page/
and also in the documentation:
https://docs.ceph.com/en/latest/mgr/dashboard/#overview-of-the-dashboard-landing-page

Regards,

On Wed, Sep 13, 2023 at 5:59 PM Marc  wrote:

> Screen captures please. Not everyone is installing the default ones.
>
> >
> > We are collecting these feedbacks. For a while we weren't focusing on the
> > mobile view
> > of the dashboard. If there are users using those, we'll look into it as
> > well. Will let everyone know
> > soon with the improvements in the UI.
> >
> > Regards,
> > Nizam
> >
> > On Mon, Sep 11, 2023 at 2:23 PM Nicola Mori  wrote:
> >
> > > Hi Nizam,
> > >
> > > many thanks for the tip. And sorry for the quite rude subject of my
> > > post, I really appreciate the dashboard revamp effort but I was
> > > frustrated about the malfunctioning and missing features. By the way,
> > > one of the things that really need to be improved is the support for
> > > mobile devices, the dashboard on my phone is quite unusable, both the
> > > old and the new one (although the old gets better when browsing in
> > > desktop mode).
> > > Thanks again,
> > >
> ___
> ceph-users mailing list -- ceph-users@ceph.io
> To unsubscribe send an email to ceph-users-le...@ceph.io
>
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] Re: Awful new dashboard in Reef

2023-09-12 Thread Nizamudeen A
Thank you Nicola,

We are collecting this feedback. For a while we weren't focusing on the
mobile view of the dashboard. If there are users relying on it, we'll look
into that as well, and will let everyone know soon about the improvements
in the UI.

Regards,
Nizam

On Mon, Sep 11, 2023 at 2:23 PM Nicola Mori  wrote:

> Hi Nizam,
>
> many thanks for the tip. And sorry for the quite rude subject of my
> post, I really appreciate the dashboard revamp effort but I was
> frustrated about the malfunctioning and missing features. By the way,
> one of the things that really need to be improved is the support for
> mobile devices, the dashboard on my phone is quite unusable, both the
> old and the new one (although the old gets better when browsing in
> desktop mode).
> Thanks again,
>
> Nicola
> ___
> ceph-users mailing list -- ceph-users@ceph.io
> To unsubscribe send an email to ceph-users-le...@ceph.io
>
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] Re: Awful new dashboard in Reef

2023-09-10 Thread Nizamudeen A
Hey guys,

Thanks for the feedback. The new landing page is still improving, and while
we work on it we haven't removed the old page completely.

If you want, you can switch to the old dashboard by flipping the feature
toggle in the dashboard: `ceph dashboard feature disable dashboard`
will bring back the old dashboard. I know the naming is a bit weird, but
this is a temporary state that will only stay for a short period. Once we
have the new dashboard page improved we will completely remove the older
one. We'll also include a toggle on the main page to switch
between the old and new pages for the time being.

Regards,
Nizam

On Thu, Sep 7, 2023 at 2:20 PM Nicola Mori  wrote:

> My cluster has 104 OSDs, so I don't think this can be a factor for the
> malfunctioning.
> ___
> ceph-users mailing list -- ceph-users@ceph.io
> To unsubscribe send an email to ceph-users-le...@ceph.io
>
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] CLT Meeting minutes 2023-08-30

2023-08-30 Thread Nizamudeen A
Hello,

Finish v18.2.0 upgrade on LRC? It seems to be running v18.1.3
 not much of a difference in code commits

news on teuthology jobs hanging?
 cephfs issues because of network troubles
 It's resolved by Patrick

User council discussion follow-up
 Detailed info on this pad: https://pad.ceph.com/p/user_dev_relaunch
 First topic will come from David's team

16.2.14 release
 Pushing to release by this week.

Regards,
Nizam

-- 

Nizamudeen A

Software Engineer

Red Hat <https://www.redhat.com/>
<https://www.redhat.com/>
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] Re: 16.2.14 pacific QE validation status

2023-08-24 Thread Nizamudeen A
Dashboard approved!

@Laura Flores  https://tracker.ceph.com/issues/62559,
this could be a dashboard issue. We'll be removing those tests from the
orch suite, because we are already checking them
in the jenkins pipeline. The current one in the teuthology suite is a bit
flaky and not reliable.

Regards,
Nizam
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] Re: ref v18.2.0 QE Validation status

2023-08-01 Thread Nizamudeen A
dashboard approved! failure is unrelated and tracked via
https://tracker.ceph.com/issues/58946

Regards,
Nizam

On Sun, Jul 30, 2023 at 9:16 PM Yuri Weinstein  wrote:

> Details of this release are summarized here:
>
> https://tracker.ceph.com/issues/62231#note-1
>
> Seeking approvals/reviews for:
>
> smoke - Laura, Radek
> rados - Neha, Radek, Travis, Ernesto, Adam King
> rgw - Casey
> fs - Venky
> orch - Adam King
> rbd - Ilya
> krbd - Ilya
> upgrade-clients:client-upgrade* - in progress
> powercycle - Brad
>
> Please reply to this email with approval and/or trackers of known
> issues/PRs to address them.
>
> bookworm distro support is an outstanding issue.
>
> TIA
> YuriW
> ___
> ceph-users mailing list -- ceph-users@ceph.io
> To unsubscribe send an email to ceph-users-le...@ceph.io
>
>
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] Re: Grafana service fails to start due to bad directory name after Quincy upgrade

2023-06-23 Thread Nizamudeen A
Hi,

You can upgrade the grafana version individually by setting the config_opt
for grafana container image like:
ceph config set mgr mgr/cephadm/container_image_grafana
quay.io/ceph/ceph-grafana:8.3.5

and then redeploy the grafana container again either via dashboard or
cephadm.
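
Putting it together, something along these lines should do it (a sketch;
adjust the tag to whichever Grafana version you're targeting):

  ceph config set mgr mgr/cephadm/container_image_grafana quay.io/ceph/ceph-grafana:8.3.5
  ceph orch redeploy grafana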

Regards,
Nizam



On Fri, Jun 23, 2023 at 12:05 AM Adiga, Anantha 
wrote:

> Hi Eugen,
>
> Thank you so much for the details.  Here is the update (comments in-line
> >>):
>
> Regards,
> Anantha
> -Original Message-
> From: Eugen Block 
> Sent: Monday, June 19, 2023 5:27 AM
> To: ceph-users@ceph.io
> Subject: [ceph-users] Re: Grafana service fails to start due to bad
> directory name after Quincy upgrade
>
> Hi,
>
> so grafana is starting successfully now? What did you change?
> >>  I stopped and removed the Grafana image and  started it from "Ceph
> Dashboard" service. The version is still 6.7.4. I also had to change the
> following.
> I do not have a way to make  this permanent, if the service is redeployed
> I  will lose  the changes.
> I did not save the file that cephadm generated. This was one reason why
> Grafana service would not start. I had replace it with the one below to
> resolve this issue.
> [users]
>   default_theme = light
> [auth.anonymous]
>   enabled = true
>   org_name = 'Main Org.'
>   org_role = 'Viewer'
> [server]
>   domain = 'bootstrap.storage.lab'
>   protocol = https
>   cert_file = /etc/grafana/certs/cert_file
>   cert_key = /etc/grafana/certs/cert_key
>   http_port = 3000
>   http_addr =
> [snapshots]
>   external_enabled = false
> [security]
>   disable_initial_admin_creation = false
>   cookie_secure = true
>   cookie_samesite = none
>   allow_embedding = true
>   admin_password = paswd-value
>   admin_user = user-name
>
> Also this was the other change:
> # This file is generated by cephadm.
> apiVersion: 1   <--  This was the line added to
> var/lib/ceph/d0a3b6e0-d2c3-11ed-be05-a7a3a1d7a87e/grafana.fl31ca104ja0201/etc/grafana/provisioning/datasources/ceph-dashboard.yml
> >>
> Regarding the container images, yes there are defaults in cephadm which
> can be overridden with ceph config. Can you share this output?
>
> ceph config dump | grep container_image
> >>
> Here it is
> root@fl31ca104ja0201:/# ceph config dump | grep container_image
> global   basic
>  container_image
> quay.io/ceph/ceph@sha256:af79fedafc42237b7612fe2d18a9c64ca62a0b38ab362e614ad671efa4a0547e
> *
> mgr  advanced
> mgr/cephadm/container_image_alertmanager
> docker.io/prom/alertmanager:v0.16.2
>   *
> mgr  advanced
> mgr/cephadm/container_image_base   quay.io/ceph/daemon
> mgr  advanced
> mgr/cephadm/container_image_grafanadocker.io/grafana/grafana:6.7.4
>   *
> mgr  advanced
> mgr/cephadm/container_image_node_exporter
> docker.io/prom/node-exporter:v0.17.0
>  *
> mgr  advanced
> mgr/cephadm/container_image_prometheus
> docker.io/prom/prometheus:v2.7.2
>  *
> client.rgw.default.default.fl31ca104ja0201.ninovsbasic
>  container_image
> quay.io/ceph/ceph@sha256:af79fedafc42237b7612fe2d18a9c64ca62a0b38ab362e614ad671efa4a0547e
> *
> client.rgw.default.default.fl31ca104ja0202.yhjkmbbasic
>  container_image
> quay.io/ceph/ceph@sha256:af79fedafc42237b7612fe2d18a9c64ca62a0b38ab362e614ad671efa4a0547e
> *
> client.rgw.default.default.fl31ca104ja0203.fqnriqbasic
>  container_image
> quay.io/ceph/ceph@sha256:af79fedafc42237b7612fe2d18a9c64ca62a0b38ab362e614ad671efa4a0547e
> *
> >>
> I tend to always use a specific image as described here [2]. I also
> haven't deployed grafana via dashboard yet so I can't really comment on
> that as well as on the warnings you report.
>
>
> >>OK. The need for that is, in Quincy when you enable Loki and Promtail,
> to view the daemon logs Ceph board pulls in Grafana  dashboard. I will let
> you know once that issue is resolved.
>
> Regards,
> Eugen
>
> [2]
>
> https://docs.ceph.com/en/latest/cephadm/services/monitoring/#using-custom-images
> >> Thank you I am following the document now
>
> Zitat von "Adiga, Anantha" :
>
> > Hi Eugene,
> >
> > Thank you for your response, here is the update.
> >
> > The upgrade to Quincy was done  following the cephadm orch upgrade
> > procedure ceph orch upgrade start --image quay.io/ceph/ceph:v17.2.6
> >
> > Upgrade completed with out errors. After the upgrade, upon creating
> > the Grafana service from Ceph dashboard, it deployed Grafana 6.7.4.
> > The version is hardcoded in the code, should it not be 8.3.5 as listed
> > below in Quincy documentation? See below
> >
> > [Grafana service started from Cephdashboard]
> >
> > 

[ceph-users] Re: alerts in dashboard

2023-06-21 Thread Nizamudeen A
Hi Ben,

It looks like you forgot to attach the screenshots.

Regards,
Nizam

On Wed, Jun 21, 2023, 12:23 Ben  wrote:

> Hi,
>
> I got many critical alerts in ceph dashboard. Meanwhile the cluster shows
> health ok status.
>
> See attached screenshot for detail. My questions are, are they real alerts?
> How to get rid of them?
>
> Thanks
> Ben
> ___
> ceph-users mailing list -- ceph-users@ceph.io
> To unsubscribe send an email to ceph-users-le...@ceph.io
>
>
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] CLT Meeting minutes 17/05/23

2023-05-18 Thread Nizamudeen A
Hey all,

Ceph Quarterly announcement [Josh and Zac]
   One page digest that may be published quarterly
   Planning for 1st of June, September and December

Reef RC
   https://pad.ceph.com/p/reef_scale_testing
   https://pad.ceph.com/p/ceph-user-dev-monthly-minutes#L17
   ETA last week of May

Missing CentOS 9 Python deps
   Ken Dreyer has volunteered to help get the packages in epel

Lab update
  There have been reimaging issues and kernel timeouts seen in the lab;
infra team is working through fixing it
   Please raise infrastructure trackers for any bugs you see, as we have
started to do monthly infrastructure bug scrubs

Regards,
Nizam
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] Re: 16.2.13 pacific QE validation status

2023-05-02 Thread Nizamudeen A
dashboard approved!

Regards,
Nizam

On Tue, May 2, 2023, 20:48 Yuri Weinstein  wrote:

> Please review the Release Notes - https://github.com/ceph/ceph/pull/51301
>
> Still seeking approvals for:
>
> rados - Neha, Radek, Laura
>   rook - Sébastien Han
>   dashboard - Ernesto
>
> fs - Venky, Patrick
> (upgrade/octopus-x (pacific) - Laura (look the same as in 16.2.8))
>
> ceph-volume - Guillaume
>
> On Tue, May 2, 2023 at 8:00 AM Casey Bodley  wrote:
> >
> > On Thu, Apr 27, 2023 at 5:21 PM Yuri Weinstein 
> wrote:
> > >
> > > Details of this release are summarized here:
> > >
> > > https://tracker.ceph.com/issues/59542#note-1
> > > Release Notes - TBD
> > >
> > > Seeking approvals for:
> > >
> > > smoke - Radek, Laura
> > > rados - Radek, Laura
> > >   rook - Sébastien Han
> > >   cephadm - Adam K
> > >   dashboard - Ernesto
> > >
> > > rgw - Casey
> >
> > rgw approved
> >
> > > rbd - Ilya
> > > krbd - Ilya
> > > fs - Venky, Patrick
> > > upgrade/octopus-x (pacific) - Laura (look the same as in 16.2.8)
> > > upgrade/pacific-p2p - Laura
> > > powercycle - Brad (SELinux denials)
> > > ceph-volume - Guillaume, Adam K
> > >
> > > Thx
> > > YuriW
> > > ___
> > > Dev mailing list -- d...@ceph.io
> > > To unsubscribe send an email to dev-le...@ceph.io
> >
> ___
> ceph-users mailing list -- ceph-users@ceph.io
> To unsubscribe send an email to ceph-users-le...@ceph.io
>
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] Re: quincy v17.2.6 QE Validation status

2023-03-27 Thread Nizamudeen A
Dashboard LGTM!

On Sat, Mar 25, 2023 at 1:16 AM Yuri Weinstein  wrote:

> Details of this release are updated here:
>
> https://tracker.ceph.com/issues/59070#note-1
> Release Notes - TBD
>
> The slowness we experienced seemed to be self-cured.
> Neha, Radek, and Laura please provide any findings if you have them.
>
> Seeking approvals/reviews for:
>
> rados - Neha, Radek, Travis, Ernesto, Adam King (rerun on Build 2 with
> PRs merged on top of quincy-release)
> rgw - Casey (rerun on Build 2 with PRs merged on top of quincy-release)
> fs - Venky
>
> upgrade/octopus-x - Neha, Laura (package issue Adam Kraitman any updates?)
> upgrade/pacific-x - Neha, Laura, Ilya see
> https://tracker.ceph.com/issues/58914
> upgrade/quincy-p2p
>  - Neha, Laura
> client-upgrade-octopus-quincy-quincy - Neha, Laura (package issue Adam
> Kraitman any updates?)
> powercycle - Brad
>
> Please reply to this email with approval and/or trackers of known
> issues/PRs to address them.
>
> Josh, Neha - gibba and LRC upgrades pending major suites approvals.
> RC release - pending major suites approvals.
>
> On Tue, Mar 21, 2023 at 1:04 PM Yuri Weinstein 
> wrote:
> >
> > Details of this release are summarized here:
> >
> > https://tracker.ceph.com/issues/59070#note-1
> > Release Notes - TBD
> >
> > The reruns were in the queue for 4 days because of some slowness issues.
> > The core team (Neha, Radek, Laura, and others) are trying to narrow
> > down the root cause.
> >
> > Seeking approvals/reviews for:
> >
> > rados - Neha, Radek, Travis, Ernesto, Adam King (we still have to test
> > and merge at least one PR https://github.com/ceph/ceph/pull/50575 for
> > the core)
> > rgw - Casey
> > fs - Venky (the fs suite has an unusually high amount of failed jobs,
> > any reason to suspect it in the observed slowness?)
> > orch - Adam King
> > rbd - Ilya
> > krbd - Ilya
> > upgrade/octopus-x - Laura is looking into failures
> > upgrade/pacific-x - Laura is looking into failures
> > upgrade/quincy-p2p - Laura is looking into failures
> > client-upgrade-octopus-quincy-quincy - missing packages, Adam Kraitman
> > is looking into it
> > powercycle - Brad
> > ceph-volume - needs a rerun on merged
> > https://github.com/ceph/ceph-ansible/pull/7409
> >
> > Please reply to this email with approval and/or trackers of known
> > issues/PRs to address them.
> >
> > Also, share any findings or hypotheses about the slowness in the
> > execution of the suite.
> >
> > Josh, Neha - gibba and LRC upgrades pending major suites approvals.
> > RC release - pending major suites approvals.
> >
> > Thx
> > YuriW
> ___
> Dev mailing list -- d...@ceph.io
> To unsubscribe send an email to dev-le...@ceph.io
>
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] Re: clt meeting summary [15/02/2023]

2023-02-16 Thread Nizamudeen A
Maybe an etherpad and pinning that to #sepia channel.


On Wed, Feb 15, 2023, 23:32 Laura Flores  wrote:

> I would be interested in helping catalogue errors and fixes we experience
> in the lab. Do we have a preferred platform for this cheatsheet?
>
> On Wed, Feb 15, 2023 at 11:54 AM Nizamudeen A  wrote:
>
>> Hi all,
>>
>> today's topics were:
>>
>>- Labs:
>>   - Keeping a catalog
>>   - Have a dedicated group to debug/work through the issues.
>>   - Looking for interested parties that would like to contribute in
>>   the lab maintenance tasks
>>   - Poll for meeting time, looking for a central person to follow up
>>   / organize
>>   - No one's been actively coordinating on the lab issues apart from
>>   Laura. David Orman volunteered if we need help coordinating the lab 
>> issues
>>- Reef release
>>   - [casey] things aren't looking good for end-of-february freeze
>>   - Since the whole thing depends on test-infra, can't really
>>   estimate the time frame.
>>   - The freeze maybe delayed
>>- Dev Summit in Amsterdam: estimate how many would attend in person,
>>remote
>>- 50/50 of those present would attend (as per the voting)
>>   - Ad hoc virtual could work
>>- Need to update the component leads page:
>>https://ceph.io/en/community/team/
>>- Vikhyath volunteered before, so Josh will check with him.
>>
>>
>> Regards,
>> --
>>
>> Nizamudeen A
>>
>> Software Engineer
>>
>> Red Hat <https://www.redhat.com/>
>> <https://www.redhat.com/>
>> ___
>> Dev mailing list -- d...@ceph.io
>> To unsubscribe send an email to dev-le...@ceph.io
>>
>
>
> --
>
> Laura Flores
>
> She/Her/Hers
>
> Software Engineer, Ceph Storage
>
> Red Hat Inc. <https://www.redhat.com>
>
> Chicago, IL
>
> lflo...@redhat.com
> M: +17087388804
> @RedHat <https://twitter.com/redhat>   Red Hat
> <https://www.linkedin.com/company/red-hat>  Red Hat
> <https://www.facebook.com/RedHatInc>
> <https://www.redhat.com>
>
>
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] clt meeting summary [15/02/2023]

2023-02-15 Thread Nizamudeen A
Hi all,

today's topics were:

   - Labs:
  - Keeping a catalog
  - Have a dedicated group to debug/work through the issues.
  - Looking for interested parties that would like to contribute in the
  lab maintenance tasks
  - Poll for meeting time, looking for a central person to follow up /
  organize
  - No one's been actively coordinating on the lab issues apart from
  Laura. David Orman volunteered if we need help coordinating the lab issues
   - Reef release
  - [casey] things aren't looking good for end-of-february freeze
  - Since the whole thing depends on test-infra, can't really estimate
  the time frame.
  - The freeze maybe delayed
   - Dev Summit in Amsterdam: estimate how many would attend in person,
   remote
   - 50/50 of those present would attend (as per the voting)
  - Ad hoc virtual could work
   - Need to update the component leads page:
   https://ceph.io/en/community/team/
   - Vikhyath volunteered before, so Josh will check with him.


Regards,
-- 

Nizamudeen A

Software Engineer

Red Hat <https://www.redhat.com/>
<https://www.redhat.com/>
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] Re: excluding from host_pattern

2023-01-27 Thread Nizamudeen A
Hi,

I am not sure about cephadm, but if you were to use the ceph-dashboard, in
its host creation form you can enter a pattern like ceph[01-19], which should
add ceph01...ceph19.
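
For the cephadm side, a plain regex (not a bash-style pattern) along these
lines might do what you want (untested sketch; note the grouping so that
ceph00 is excluded):

  ceph orch host ls --host_pattern 'ceph(0[1-9]|1[0-9])'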

Regards,
Nizam

On Fri, Jan 27, 2023, 23:52 E Taka <0eta...@gmail.com> wrote:

> Thanks, Ulrich, but:
>
> # ceph orch host ls --host_pattern="^ceph(0[1-9])|(1[0-9])$"
> 0 hosts in cluster whose hostname matched "^ceph(0[1-9])|(1[0-9])$"
>
> Bash pattern are not accepted. (I tried it in numerous other combinations).
> But, as I said, not really a problem - just wondering what the regex might
> be.
>
> Am Fr., 27. Jan. 2023 um 19:09 Uhr schrieb Ulrich Klein <
> ulrich.kl...@ulrichklein.de>:
>
> > I use something like "^ceph(0[1-9])|(1[0-9])$", but in a script that
> > checks a parameter for a "correct" ceph node name like in:
> >
> >wantNum=$1
> >if [[ $wantNum =~ ^ceph(0[2-9]|1[0-9])$ ]] ; then
> >   wantNum=${BASH_REMATCH[1]}
> >
> > Which gives me the number, if it is in the range 02-19
> >
> > Dunno, if that helps :)
> >
> > Ciao, Uli
> >
> > > On 27. Jan 2023, at 18:17, E Taka <0eta...@gmail.com> wrote:
> > >
> > > Hi,
> > >
> > > I wonder if it is possible to define a host pattern, which includes the
> > > host names
> > > ceph01…ceph19, but no other hosts, especially not ceph00. That means,
> > this
> > > pattern is wrong: ceph[01][0-9] , since it includes ceph00.
> > >
> > > Not really a problem, but it seems that the "“host-pattern” is a regex
> > that
> > > matches against hostnames and returns only matching hosts"¹ is not
> > defined
> > > more precisely in the docs.
> > >
> > > 1) https://docs.ceph.com/en/latest/cephadm/host-management/
> > > ___
> > > ceph-users mailing list -- ceph-users@ceph.io
> > > To unsubscribe send an email to ceph-users-le...@ceph.io
> >
> >
> ___
> ceph-users mailing list -- ceph-users@ceph.io
> To unsubscribe send an email to ceph-users-le...@ceph.io
>
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] Re: 16.2.11 pacific QE validation status

2023-01-20 Thread Nizamudeen A
Dashboard lgtm!

Regards,
Nizam

On Fri, Jan 20, 2023, 22:09 Yuri Weinstein  wrote:

> The overall progress on this release is looking much better and if we
> can approve it we can plan to publish it early next week.
>
> Still seeking approvals
>
> rados - Neha, Laura
> rook - Sébastien Han
> cephadm - Adam
> dashboard - Ernesto
> rgw - Casey
> rbd - Ilya (full rbd run in progress now)
> krbd - Ilya
> fs - Venky, Patrick
> upgrade/nautilus-x (pacific) - passed thx Adam Kraitman!
> upgrade/octopus-x (pacific) - almost passed, still running 1 job
> upgrade/pacific-p2p - Neha (same as in 16.2.8)
> powercycle - Brad (see new SELinux denials)
>
> On Tue, Jan 17, 2023 at 10:45 AM Yuri Weinstein 
> wrote:
> >
> > OK I will rerun failed jobs filtering rhel in
> >
> > Thx!
> >
> > On Tue, Jan 17, 2023 at 10:43 AM Adam Kraitman 
> wrote:
> > >
> > > Hey the satellite issue was fixed
> > >
> > > Thanks
> > >
> > > On Tue, Jan 17, 2023 at 7:43 PM Laura Flores 
> wrote:
> > >>
> > >> This was my summary of rados failures. There was nothing new or amiss,
> > >> although it is important to note that runs were done with filtering
> out
> > >> rhel 8.
> > >>
> > >> I will leave it to Neha for final approval.
> > >>
> > >> Failures:
> > >> 1. https://tracker.ceph.com/issues/58258
> > >> 2. https://tracker.ceph.com/issues/58146
> > >> 3. https://tracker.ceph.com/issues/58458
> > >> 4. https://tracker.ceph.com/issues/57303
> > >> 5. https://tracker.ceph.com/issues/54071
> > >>
> > >> Details:
> > >> 1. rook: kubelet fails from connection refused - Ceph -
> Orchestrator
> > >> 2. test_cephadm.sh: Error: Error initializing source docker://
> > >> quay.ceph.io/ceph-ci/ceph:master - Ceph - Orchestrator
> > >> 3. qa/workunits/post-file.sh: postf...@drop.ceph.com: Permission
> denied
> > >> - Ceph
> > >> 4. rados/cephadm: Failed to fetch package version from
> > >>
> https://shaman.ceph.com/api/search/?status=ready=ceph=default=ubuntu%2F22.04%2Fx86_64=b34ca7d1c2becd6090874ccda56ef4cd8dc64bf7
> > >> - Ceph - Orchestrator
> > >> 5. rados/cephadm/osds: Invalid command: missing required parameter
> > >> hostname() - Ceph - Orchestrator
> > >>
> > >> On Tue, Jan 17, 2023 at 9:48 AM Yuri Weinstein 
> wrote:
> > >>
> > >> > Please see the test results on the rebased RC 6.6 in this comment:
> > >> >
> > >> > https://tracker.ceph.com/issues/58257#note-2
> > >> >
> > >> > We're still having infrastructure issues making testing difficult.
> > >> > Therefore all reruns were done excluding the rhel 8 distro
> > >> > ('--filter-out rhel_8')
> > >> >
> > >> > Also, the upgrades failed and Adam is looking into this.
> > >> >
> > >> > Seeking new approvals
> > >> >
> > >> > rados - Neha, Laura
> > >> > rook - Sébastien Han
> > >> > cephadm - Adam
> > >> > dashboard - Ernesto
> > >> > rgw - Casey
> > >> > rbd - Ilya
> > >> > krbd - Ilya
> > >> > fs - Venky, Patrick
> > >> > upgrade/nautilus-x (pacific) - Adam Kraitman
> > >> > upgrade/octopus-x (pacific) - Adam Kraitman
> > >> > upgrade/pacific-p2p - Neha - Adam Kraitman
> > >> > powercycle - Brad
> > >> >
> > >> > Thx
> > >> >
> > >> > On Fri, Jan 6, 2023 at 8:37 AM Yuri Weinstein 
> wrote:
> > >> > >
> > >> > > Happy New Year all!
> > >> > >
> > >> > > This release remains to be in "progress"/"on hold" status as we
> are
> > >> > > sorting all infrastructure-related issues.
> > >> > >
> > >> > > Unless I hear objections, I suggest doing a full rebase/retest QE
> > >> > > cycle (adding PRs merged lately) since it's taking much longer
> than
> > >> > > anticipated when sepia is back online.
> > >> > >
> > >> > > Objections?
> > >> > >
> > >> > > Thx
> > >> > > YuriW
> > >> > >
> > >> > > On Thu, Dec 15, 2022 at 9:14 AM Yuri Weinstein <
> ywein...@redhat.com>
> > >> > wrote:
> > >> > > >
> > >> > > > Details of this release are summarized here:
> > >> > > >
> > >> > > > https://tracker.ceph.com/issues/58257#note-1
> > >> > > > Release Notes - TBD
> > >> > > >
> > >> > > > Seeking approvals for:
> > >> > > >
> > >> > > > rados - Neha (https://github.com/ceph/ceph/pull/49431 is still
> being
> > >> > > > tested and will be merged soon)
> > >> > > > rook - Sébastien Han
> > >> > > > cephadm - Adam
> > >> > > > dashboard - Ernesto
> > >> > > > rgw - Casey (rwg will be rerun on the latest SHA1)
> > >> > > > rbd - Ilya, Deepika
> > >> > > > krbd - Ilya, Deepika
> > >> > > > fs - Venky, Patrick
> > >> > > > upgrade/nautilus-x (pacific) - Neha, Laura
> > >> > > > upgrade/octopus-x (pacific) - Neha, Laura
> > >> > > > upgrade/pacific-p2p - Neha - Neha, Laura
> > >> > > > powercycle - Brad
> > >> > > > ceph-volume - Guillaume, Adam K
> > >> > > >
> > >> > > > Thx
> > >> > > > YuriW
> > >> > ___
> > >> > Dev mailing list -- d...@ceph.io
> > >> > To unsubscribe send an email to dev-le...@ceph.io
> > >> >
> > >>
> > >>
> > >> --
> > >>
> > >> Laura Flores
> > >>
> > >> She/Her/Hers
> > >>
> > >> Software Engineer, Ceph Storage
> 

[ceph-users] Re: 16.2.11 pacific QE validation status

2022-12-19 Thread Nizamudeen A
dashboard approved.

Regards,
Nizam

On Thu, Dec 15, 2022 at 10:45 PM Yuri Weinstein  wrote:

> Details of this release are summarized here:
>
> https://tracker.ceph.com/issues/58257#note-1
> Release Notes - TBD
>
> Seeking approvals for:
>
> rados - Neha (https://github.com/ceph/ceph/pull/49431 is still being
> tested and will be merged soon)
> rook - Sébastien Han
> cephadm - Adam
> dashboard - Ernesto
> rgw - Casey (rwg will be rerun on the latest SHA1)
> rbd - Ilya, Deepika
> krbd - Ilya, Deepika
> fs - Venky, Patrick
> upgrade/nautilus-x (pacific) - Neha, Laura
> upgrade/octopus-x (pacific) - Neha, Laura
> upgrade/pacific-p2p - Neha - Neha, Laura
> powercycle - Brad
> ceph-volume - Guillaume, Adam K
>
> Thx
> YuriW
>
> ___
> Dev mailing list -- d...@ceph.io
> To unsubscribe send an email to dev-le...@ceph.io
>
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] Re: Enable Centralized Logging in Dashboard.

2022-11-16 Thread Nizamudeen A
Hi,

Did you log in to the grafana dashboard? For centralized logging you'll need
to log in to grafana using your grafana username and password. If you do that
and refresh the dashboard, I think the Loki page should be visible from the
Daemon Logs page.
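
If the page still looks empty after logging in, it may be worth confirming
that the loki and promtail daemons are actually up and that the dashboard
points at the right Grafana instance, e.g. (a sketch):

  ceph orch ps | grep -E 'loki|promtail'
  ceph dashboard get-grafana-api-url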

Regards,
Nizam

On Wed, Nov 16, 2022 at 7:31 PM E Taka <0eta...@gmail.com> wrote:

> Ceph: 17.2.5, dockerized with Ubuntu 20.04
>
> Hi all,
>
> I try to enable the Centralized Logging in Dashboard as described in
>
>
> https://docs.ceph.com/en/quincy/cephadm/services/monitoring/#cephadm-monitoring-centralized-logs
>
> Logging inti files is enabled:
>   ceph config set global log_to_file true
>   ceph config set global mon_cluster_log_to_file true
>
> Loki is deployed at one host, promtail on every host:
>
>
> service_type: loki
> service_name: loki
> placement:
>  hosts:
>  - ceph00
> ---
> service_type: promtail
> service_name: promtail
> placement:
>  host_pattern: '*'
>
>
> After applying the YAML above the log messages in »ceph -W cephadm« look
> good (deploying loki+promtail and reconfiguring grafana). But the Dashboard
> "Cluster → Logs → Daemon Logs" just shows a standard grafana page without
> any buttons for the Ceph Cluster. Its URL is https://ceph00.
>
> [mydomain]:3000/explore?orgId=1=["now-1h","now","Loki",{"refId":"A"}]
>
> Did I miss something for the centralized Logging in the Dashboard?
>
> Thank!
> ___
> ceph-users mailing list -- ceph-users@ceph.io
> To unsubscribe send an email to ceph-users-le...@ceph.io
>
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] Re: Ceph Leadership Team Meeting Minutes - 2022 Oct 26

2022-10-27 Thread Nizamudeen A
Great, thanks Ilya.

Regards,

On Thu, Oct 27, 2022 at 2:00 PM Ilya Dryomov  wrote:

> On Thu, Oct 27, 2022 at 9:05 AM Nizamudeen A  wrote:
> >
> > >
> > > lab issues blocking centos container builds and teuthology testing:
> > > * https://tracker.ceph.com/issues/57914
> > > * delays testing for 16.2.11
> >
> >
> > The quay.ceph.io has been down for some days now.  Not sure who is
> actively
> > maintaining the quay repos now.
> > At least in the ceph-dashboard, we have a failing jenkins job (ceph
> > dashboard cephadm e2e
> > <https://jenkins.ceph.com/job/ceph-dashboard-cephadm-e2e-nightly-main/>)
> > where it tries to bootstrap a
> > cluster using the quay.ceph.io/ceph-ci/ceph:main image and it fails to
> do
> > so. and that tests are failing for some days as well.
> >
> > While doing some searching I came across this document
> > <https://wiki.sepia.ceph.com/doku.php?id=services:quay.ceph.io>, but not
> > sure
> > what to make of it. It'd be good if someone with some knowledge on this
> > area would take a look at it.
>
> Hi Nizamudeen,
>
> Dan is looking into it in the context of another, possibly related, lab
> issues ticket [1].
>
> [1] https://tracker.ceph.com/issues/57935
>
> Thanks,
>
> Ilya
>
>
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] Re: Ceph Leadership Team Meeting Minutes - 2022 Oct 26

2022-10-27 Thread Nizamudeen A
>
> lab issues blocking centos container builds and teuthology testing:
> * https://tracker.ceph.com/issues/57914
> * delays testing for 16.2.11


quay.ceph.io has been down for some days now. I'm not sure who is actively
maintaining the quay repos now.
At least in the ceph-dashboard we have a failing jenkins job (ceph
dashboard cephadm e2e
<https://jenkins.ceph.com/job/ceph-dashboard-cephadm-e2e-nightly-main/>)
where it tries to bootstrap a
cluster using the quay.ceph.io/ceph-ci/ceph:main image and fails to do
so, and those tests have been failing for some days as well.

While doing some searching I came across this document
<https://wiki.sepia.ceph.com/doku.php?id=services:quay.ceph.io>, but I'm not
sure what to make of it. It'd be good if someone with some knowledge of this
area could take a look at it.
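As a rough check (a sketch only; it assumes podman and network access to
quay.ceph.io from the node), registry availability and the image pull used by
the e2e job can be verified with something like:

  # does the registry respond to the standard v2 API endpoint?
  curl -sI https://quay.ceph.io/v2/ | head -n 1

  # can the image the cephadm e2e job bootstraps with still be pulled?
  podman pull quay.ceph.io/ceph-ci/ceph:main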

Thanks,
Nizam

On Wed, Oct 26, 2022 at 7:49 PM Casey Bodley  wrote:

> lab issues blocking centos container builds and teuthology testing:
> * https://tracker.ceph.com/issues/57914
> * delays testing for 16.2.11
>
> upcoming events:
> * Ceph Developer Monthly (APAC) next week, please add topics:
> https://tracker.ceph.com/projects/ceph/wiki/CDM_02-NOV-2022
> * Ceph Virtual 2022 starts next Thursday:
> https://ceph.io/en/community/events/2022/ceph-virtual/
>
> ___
> Dev mailing list -- d...@ceph.io
> To unsubscribe send an email to dev-le...@ceph.io
>
>
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] Re: quincy v17.2.4 QE Validation status

2022-09-16 Thread Nizamudeen A
Dashboard LGTM!

On Wed, 14 Sept 2022, 01:33 Yuri Weinstein,  wrote:

> Details of this release are summarized here:
>
> https://tracker.ceph.com/issues/57472#note-1
> Release Notes - https://github.com/ceph/ceph/pull/48072
>
> Seeking approvals for:
>
> rados - Neha, Travis, Ernesto, Adam
> rgw - Casey
> fs - Venky
> orch - Adam
> rbd - Ilya, Deepika
> krbd - missing packages, Adam Kr is looking into it
> upgrade/octopus-x - missing packages, Adam Kr is looking into it
> ceph-volume - Guillaume is looking into it
>
> Please reply to this email with approval and/or trackers of known
> issues/PRs to address them.
>
> Josh, Neha - LRC upgrade pending major suites approvals.
> RC release - pending major suites approvals.
>
> Thx
> YuriW
>
> ___
> Dev mailing list -- d...@ceph.io
> To unsubscribe send an email to dev-le...@ceph.io
>
>
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] Re: Unable to login to Ceph Pacific Dashboard

2022-01-19 Thread Nizamudeen A
Hey Pardhiv,

What happens when you try a browser other than the one you are using now?
Also, can you please try logging in again after clearing the browser cache?
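If the cache isn't the problem, here is a sketch of resetting the password
from the CLI (the "admin" user and the password file path are placeholders;
Pacific expects the password via an input file and enforces its complexity
rules):

  # write the new password to a file and apply it to the dashboard user
  echo -n 'Str0ngPassw0rd!' > /tmp/dashboard_pass.txt
  ceph dashboard ac-user-set-password admin -i /tmp/dashboard_pass.txt

  # confirm which URL the active mgr serves the dashboard on
  ceph mgr services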

Regards,
Nizamudeen

On Thu, Jan 20, 2022 at 2:38 AM Pardhiv Karri  wrote:

> Hi,
>
> I installed Ceph Pacific one Monitor node using cephadm tool. The output of
> installation gave me the credentials. When I go to a browser (different
> from ceph server) I see the login screen and when I enter the credentials
> the browser loads to the same page, in that fraction of a second I see it
> asking me to enter a new password, so I went into cli and changed the
> password and now trying to login with the new password but still gets stuck
> at the login screen. I opened ports 8443 and 8080. Tried creating another
> user with credentials and still no luck. What am I missing?
>
> Thanks,
> Pardhiv
> ___
> ceph-users mailing list -- ceph-users@ceph.io
> To unsubscribe send an email to ceph-users-le...@ceph.io
>
>
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io