Re: Freeze Break Request: more ipsilon/sssd debugging

2021-09-13 Thread Mohan Boddu
On Mon, Sep 13, 2021 at 1:45 PM Stephen John Smoogen  wrote:
>
> On Mon, 13 Sept 2021 at 13:34, Kevin Fenzi  wrote:
> >
> > So, it turns out the additional debugging we set up in that sssd package
> > on the ipsilon servers isn't sufficient to show what the problem is. ;(
> >
> > sssd developers would like us to crank up debugging to try and get logs
> > of what's happening. Unfortunately, this will use a ton of disk space,
> > which these VMs do not have much of. ;(
> >
> > I was going to ask to grow the disk on ipsilon02, but I think it would
> > perhaps be cleaner to create an NFS volume and mount it on /var/log/sssd/
> > on ipsilon02. This way we don't need to actually take the machine down,
> > and it's really easy to back out.
> >
>
> Yeah, either do an NFS volume or a separate VM disk to mount there, and
> then blow it away when done.

+1 to either

>
> > So, can I get +1s to:
> >
> > * make an NFS volume and mount it on /var/log/sssd on ipsilon02
> > * adjust ipsilon02's sssd to enable crazy levels of debug.
> > * restart sssd and wait for the next failure.
> > * profit!
> >
>
> I would try this, and if NFS fails then go with a separate disk for it.
>
>
> > +1s? other ideas?
> >
> > kevin
>
>
>
> --
> Stephen J Smoogen.
> I've seen things you people wouldn't believe. Flame wars in
> sci.astro.orion. I have seen SPAM filters overload because of Godwin's
> Law. All those moments will be lost in time... like posts on a BBS...
> time to shutdown -h now.


Re: Freeze Break Request: more ipsilon/sssd debugging

2021-09-13 Thread Mark O'Brien
+1 for either NFS or VM disk

On Mon 13 Sep 2021, 19:40 Pierre-Yves Chibon,  wrote:

> On Mon, Sep 13, 2021 at 01:44:45PM -0400, Stephen John Smoogen wrote:
> > On Mon, 13 Sept 2021 at 13:34, Kevin Fenzi  wrote:
> > >
> > > So, it turns out the additional debugging we set up in that sssd package
> > > on the ipsilon servers isn't sufficient to show what the problem is. ;(
> > >
> > > sssd developers would like us to crank up debugging to try and get logs
> > > of what's happening. Unfortunately, this will use a ton of disk space,
> > > which these VMs do not have much of. ;(
> > >
> > > I was going to ask to grow the disk on ipsilon02, but I think it would
> > > perhaps be cleaner to create an NFS volume and mount it on /var/log/sssd/
> > > on ipsilon02. This way we don't need to actually take the machine down,
> > > and it's really easy to back out.
> > >
> >
> > Yeah, either do an NFS volume or a separate VM disk to mount there, and
> > then blow it away when done.
> >
> > > So, can I get +1s to:
> > >
> > > * make an NFS volume and mount it on /var/log/sssd on ipsilon02
> > > * adjust ipsilon02's sssd to enable crazy levels of debug.
> > > * restart sssd and wait for the next failure.
> > > * profit!
> > >
> >
> > I would try this, and if NFS fails then go with a separate disk for it.
>
> +1 on either.
>
>
> Pierre


Re: another ipsilon freeze break

2021-09-13 Thread Mark O'Brien
+1 from me

On Mon 13 Sep 2021, 20:15 Stephen John Smoogen,  wrote:

> On Mon, 13 Sept 2021 at 15:06, Stephen John Smoogen 
> wrote:
> >
> > On Mon, 13 Sept 2021 at 14:58, Kevin Fenzi  wrote:
> > >
> > > This time I just want +1s to run the ipsilon playbook.
> > >
> > > This will deploy the OIDC client data for the production ocp cluster we
> > > are working on bringing up. It shouldn't cause any problems and would be
> > > easy to back out.
> > >
> > > +1s?
> > >
> >
>
> I am ok with your plan. +1
>
>
>
> --
> Stephen J Smoogen.
> I've seen things you people wouldn't believe. Flame wars in
> sci.astro.orion. I have seen SPAM filters overload because of Godwin's
> Law. All those moments will be lost in time... like posts on a BBS...
> time to shutdown -h now.


Re: another ipsilon freeze break

2021-09-13 Thread Stephen John Smoogen
On Mon, 13 Sept 2021 at 15:06, Stephen John Smoogen  wrote:
>
> On Mon, 13 Sept 2021 at 14:58, Kevin Fenzi  wrote:
> >
> > This time I just want +1s to run the ipsilon playbook.
> >
> > This will deploy the OIDC client data for the production ocp cluster we
> > are working on bringing up. It shouldn't cause any problems and would be
> > easy to back out.
> >
> > +1s?
> >
>

I am ok with your plan. +1



-- 
Stephen J Smoogen.
I've seen things you people wouldn't believe. Flame wars in
sci.astro.orion. I have seen SPAM filters overload because of Godwin's
Law. All those moments will be lost in time... like posts on a BBS...
time to shutdown -h now.


Re: another ipsilon freeze break

2021-09-13 Thread Neal Gompa
On Mon, Sep 13, 2021 at 2:58 PM Kevin Fenzi  wrote:
>
> This time I just want +1s to run the ipsilon playbook.
>
> This will deploy the OIDC client data for the production ocp cluster we
> are working on bringing up. It shouldn't cause any problems and would be
> easy to back out.
>
> +1s?

+1


-- 
真実はいつも一つ!/ Always, there's only one truth!


Re: Freeze Break Request: Blockerbugs Hotfix to Deal With RHBZ Change

2021-09-13 Thread Pierre-Yves Chibon
On Mon, Sep 13, 2021 at 12:49:21PM -0600, Tim Flink wrote:
> It turns out that the configuration for rhbz was changed without an 
> announcement and now the max number of bugs returned for any query is 20. 
> Some of the queries that we use in blockerbugs return more than 20 bugs so I 
> have a hotfix to deal with the new pagination requirements. I've attached the 
> patch to this email.
> 
> I'd like to apply this patch to production. Unfortunately, we're in the 
> middle of a non-trivial rewrite that's currently on stg so testing the patch 
> isn't really an option and that code isn't ready for production use right now.
> 
> Thanks,
> 
> Tim

> diff --git a/blockerbugs/util/bz_interface.py b/blockerbugs/util/bz_interface.py
> index 471140f..a9f90d7 100644
> --- a/blockerbugs/util/bz_interface.py
> +++ b/blockerbugs/util/bz_interface.py
> @@ -29,11 +29,14 @@ from xmlrpc.client import Fault
>  
>  from blockerbugs import app
>  
> +# rhbz has been updated to have a max of 20 results returned
> +BUGZILLA_QUERY_LIMIT = 20

If I understood correctly, you can request more than 20 results, but if you
don't specify how many you want, you'll only get 20.
I believe if you set it to 0 you may get everything, but this needs to be
confirmed empirically.

Anyway, I would first check whether you can retrieve 100 or 200 results in one
go instead of iterating in batches of 20 ;-)
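
Something along these lines is what I have in mind (completely untested, just a
sketch with python-bugzilla; the page size and the query_all() helper are only
for illustration, and whether rhbz actually honors a bigger 'limit' or an
'offset' in Bug.search still needs to be confirmed):

    import bugzilla

    bz = bugzilla.Bugzilla("bugzilla.redhat.com")

    PAGE_SIZE = 100  # try a bigger page than the new 20-result default

    def query_all(base_query):
        # keep paging through Bug.search results until a short page comes back
        bugs = []
        while True:
            query = dict(base_query, limit=PAGE_SIZE, offset=len(bugs))
            page = bz.query(query)
            bugs.extend(page)
            if len(page) < PAGE_SIZE:
                return bugs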

>  base_query = {'o1': 'anywords',
>'f1': 'blocked',
>'query_format': 'advanced',
> -  'extra_fields': ['flags']}
> -
> +  'extra_fields': ['flags'],
> +  'limit': BUGZILLA_QUERY_LIMIT}
>  
>  class BZInterfaceError(Exception):
>  """A custom wrapper for XMLRPC errors from Bugzilla"""
> @@ -77,7 +80,8 @@ class BlockerBugs():
>  # https://bugzilla.stage.redhat.com/buglist.cgi?bug_status=NEW_status=ASSIGNED
>  # _status=POST_status=MODIFIED=Fedora=anaconda=component
>  # =changedafter=Fedora_format=advanced=2013-03-21%2012%3A25=19
> -def get_bz_query(self, tracker: int, last_update: datetime.datetime = None) -> dict[str, Any]:
> +def get_bz_query(self, tracker: int, last_update: datetime.datetime = None, offset: int = 0
> + ) -> dict[str, Any]:

If you can set the limit to 0 to retrieve everything, you may want to default to
None rather than 0.

>  """Build a Bugzilla query to retrieve all necessary info about all 
> bugs which block the
>  `tracker` bug.
>  
> @@ -129,6 +133,9 @@ class BlockerBugs():
>  'f10': 'CP'
>  })
>  
> +if offset > 0:

And if you default to None, this will need to be tweaked.

> +query.update({'offset': offset})
> +
>  return query
>  
>  def query_tracker(self, tracker: int, last_update: Optional[datetime.datetime] = None
> @@ -139,8 +146,21 @@ class BlockerBugs():
>  :param last_update: If provided, the query is modified to ask only about bugs which have
>  recent modifications; otherwise asks about all bugs.
>  """
> -query = self.get_bz_query(tracker, last_update)
> -buglist = self.bz.query(query)
> +
> +buglist = []
> +last_query_len = BUGZILLA_QUERY_LIMIT
> +
> +
> +# this is a hotfix hack to work around the sudden config change in rhbz where the max
> +# number of bugs returned for a query is 20
> +# it seems to be working for now but may need more work going forward
> +while last_query_len == BUGZILLA_QUERY_LIMIT:
> +
> +new_query = self.get_bz_query(tracker, last_update, offset=len(buglist))
> +new_buglist = self.bz.query(new_query)
> +buglist.extend(new_buglist)
> +last_query_len = len(new_buglist)
> +

Looks alright to me otherwise.


Pierre


Freeze Break Request: Blockerbugs Hotfix to Deal With RHBZ Change

2021-09-13 Thread Tim Flink

It turns out that the configuration for rhbz was changed without an 
announcement and now the max number of bugs returned for any query is 20. Some 
of the queries that we use in blockerbugs return more than 20 bugs so I have a 
hotfix to deal with the new pagination requirements. I've attached the patch to 
this email.

I'd like to apply this patch to production. Unfortunately, we're in the middle 
of a non-trivial rewrite that's currently on stg so testing the patch isn't 
really an option and that code isn't ready for production use right now.

Thanks,

Tim
diff --git a/blockerbugs/util/bz_interface.py b/blockerbugs/util/bz_interface.py
index 471140f..a9f90d7 100644
--- a/blockerbugs/util/bz_interface.py
+++ b/blockerbugs/util/bz_interface.py
@@ -29,11 +29,14 @@ from xmlrpc.client import Fault
 
 from blockerbugs import app
 
+# rhbz has been updated to have a max of 20 results returned
+BUGZILLA_QUERY_LIMIT = 20
+
 base_query = {'o1': 'anywords',
               'f1': 'blocked',
               'query_format': 'advanced',
-              'extra_fields': ['flags']}
-
+              'extra_fields': ['flags'],
+              'limit': BUGZILLA_QUERY_LIMIT}
 
 class BZInterfaceError(Exception):
     """A custom wrapper for XMLRPC errors from Bugzilla"""
@@ -77,7 +80,8 @@ class BlockerBugs():
     # https://bugzilla.stage.redhat.com/buglist.cgi?bug_status=NEW_status=ASSIGNED
     # _status=POST_status=MODIFIED=Fedora=anaconda=component
     # =changedafter=Fedora_format=advanced=2013-03-21%2012%3A25=19
-    def get_bz_query(self, tracker: int, last_update: datetime.datetime = None) -> dict[str, Any]:
+    def get_bz_query(self, tracker: int, last_update: datetime.datetime = None, offset: int = 0
+                     ) -> dict[str, Any]:
         """Build a Bugzilla query to retrieve all necessary info about all bugs which block the
         `tracker` bug.
 
@@ -129,6 +133,9 @@ class BlockerBugs():
             'f10': 'CP'
         })
 
+        if offset > 0:
+            query.update({'offset': offset})
+
         return query
 
     def query_tracker(self, tracker: int, last_update: Optional[datetime.datetime] = None
@@ -139,8 +146,21 @@ class BlockerBugs():
         :param last_update: If provided, the query is modified to ask only about bugs which have
                             recent modifications; otherwise asks about all bugs.
         """
-        query = self.get_bz_query(tracker, last_update)
-        buglist = self.bz.query(query)
+
+        buglist = []
+        last_query_len = BUGZILLA_QUERY_LIMIT
+
+
+        # this is a hotfix hack to work around the sudden config change in rhbz where the max
+        # number of bugs returned for a query is 20
+        # it seems to be working for now but may need more work going forward
+        while last_query_len == BUGZILLA_QUERY_LIMIT:
+
+            new_query = self.get_bz_query(tracker, last_update, offset=len(buglist))
+            new_buglist = self.bz.query(new_query)
+            buglist.extend(new_buglist)
+            last_query_len = len(new_buglist)
+
         return buglist
 
     def query_prioritized(self) -> list[bzBug]:


Re: Freeze Break Request: more ipsilon/sssd debugging

2021-09-13 Thread Pierre-Yves Chibon
On Mon, Sep 13, 2021 at 01:44:45PM -0400, Stephen John Smoogen wrote:
> On Mon, 13 Sept 2021 at 13:34, Kevin Fenzi  wrote:
> >
> > So, it turns out the additional debugging we set up in that sssd package
> > on the ipsilon servers isn't sufficient to show what the problem is. ;(
> >
> > sssd developers would like us to crank up debugging to try and get logs
> > of what's happening. Unfortunately, this will use a ton of disk space,
> > which these VMs do not have much of. ;(
> >
> > I was going to ask to grow the disk on ipsilon02, but I think it would
> > perhaps be cleaner to create an NFS volume and mount it on /var/log/sssd/
> > on ipsilon02. This way we don't need to actually take the machine down,
> > and it's really easy to back out.
> >
> 
> Yeah, either do an NFS volume or a separate VM disk to mount there, and
> then blow it away when done.
> 
> > So, can I get +1s to:
> >
> > * make an NFS volume and mount it on /var/log/sssd on ipsilon02
> > * adjust ipsilon02's sssd to enable crazy levels of debug.
> > * restart sssd and wait for the next failure.
> > * profit!
> >
> 
> I would try this, and if NFS fails then go with a separate disk for it.

+1 on either.


Pierre


Re: Freeze Break Request: more ipsilon/sssd debugging

2021-09-13 Thread Stephen John Smoogen
On Mon, 13 Sept 2021 at 13:34, Kevin Fenzi  wrote:
>
> So, it turns out the additional debugging we set up in that sssd package
> on the ipsilon servers isn't sufficient to show what the problem is. ;(
>
> sssd developers would like us to crank up debugging to try and get logs
> of what's happening. Unfortunately, this will use a ton of disk space,
> which these VMs do not have much of. ;(
>
> I was going to ask to grow the disk on ipsilon02, but I think it would
> perhaps be cleaner to create an NFS volume and mount it on /var/log/sssd/
> on ipsilon02. This way we don't need to actually take the machine down,
> and it's really easy to back out.
>

Yeah, either do an NFS volume or a separate VM disk to mount there, and
then blow it away when done.

> So, can I get +1s to:
>
> * make an NFS volume and mount it on /var/log/sssd on ipsilon02
> * adjust ipsilon02's sssd to enable crazy levels of debug.
> * restart sssd and wait for the next failure.
> * profit!
>

I would try this, and if NFS fails then go with a separate disk for it.


> +1s? other ideas?
>
> kevin



-- 
Stephen J Smoogen.
I've seen things you people wouldn't believe. Flame wars in
sci.astro.orion. I have seen SPAM filters overload because of Godwin's
Law. All those moments will be lost in time... like posts on a BBS...
time to shutdown -h now.


Re: Freeze Break Request: more ipsilon/sssd debugging

2021-09-13 Thread Neal Gompa
On Mon, Sep 13, 2021 at 1:34 PM Kevin Fenzi  wrote:
>
> So, it turns out the additional debugging we set up in that sssd package
> on the ipsilon servers isn't sufficient to show what the problem is. ;(
>
> sssd developers would like us to crank up debugging to try and get logs
> of what's happening. Unfortunately, this will use a ton of disk space,
> which these VMs do not have much of. ;(
>
> I was going to ask to grow the disk on ipsilon02, but I think it would
> perhaps be cleaner to create an NFS volume and mount it on /var/log/sssd/
> on ipsilon02. This way we don't need to actually take the machine down,
> and it's really easy to back out.
>
> So, can I get +1s to:
>
> * make an NFS volume and mount it on /var/log/sssd on ipsilon02
> * adjust ipsilon02's sssd to enable crazy levels of debug.
> * restart sssd and wait for the next failure.
> * profit!
>
> +1s? other ideas?
>

+1 from me.



-- 
真実はいつも一つ!/ Always, there's only one truth!


Freeze Break Request: more ipsilon/sssd debugging

2021-09-13 Thread Kevin Fenzi
So, it turns out the additional debugging we set up in that sssd package
on the ipsilon servers isn't sufficient to show what the problem is. ;(

sssd developers would like us to crank up debugging to try and get logs
of what's happening. Unfortunately, this will use a ton of disk space,
which these VMs do not have much of. ;(

I was going to ask to grow the disk on ipsilon02, but I think it would
perhaps be cleaner to create an NFS volume and mount it on /var/log/sssd/
on ipsilon02. This way we don't need to actually take the machine down,
and it's really easy to back out.

So, can I get +1s to: 

* make an NFS volume and mount it on /var/log/sssd on ipsilon02
* adjust ipsilon02's sssd to enable crazy levels of debug. 
* restart sssd and wait for the next failure. 
* profit!
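
Roughly, those steps would look something like this on ipsilon02 (just a
sketch; the NFS server/export below is a placeholder, and the exact debug
settings are whatever the sssd developers ask for):

    # mount the new volume over the sssd log dir (placeholder server/export)
    mount -t nfs nfs-server.example.com:/exports/ipsilon02-sssd-logs /var/log/sssd

    # in /etc/sssd/sssd.conf, raise debug_level in the sections the sssd
    # developers care about, e.g.:
    #   debug_level = 9

    # pick up the new config and start logging to the NFS-backed directory
    systemctl restart sssd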

+1s? other ideas?

kevin




Re: Two Matrix Questions

2021-09-13 Thread Kevin Fenzi
On Fri, Sep 10, 2021 at 11:08:28AM +0200, Michal Konecny wrote:
> 
> 
> On 10. 09. 21 1:36, Kevin Fenzi wrote:
> > On Thu, Sep 09, 2021 at 06:56:23PM -0400, Matthew Miller wrote:
> > > On Thu, Sep 09, 2021 at 02:20:47PM -0700, Kevin Fenzi wrote:
> > > > > 1. I find the bot reports in #fedora-admin and other places to be
> > > > > overwhelming. Do people find these useful at all? If there is a 
> > > > > reason to
> > > > > keep them, could they be moved to a new room like 
> > > > > #fedora-admin-bot or
> > > > > something? I think this will help keep it a friendly space for
> > > > > non-bot users of the room.
> > > > I'm fine dropping them entirely.
> > > That seems like the easiest way to me for sure. I do have the whole "Bot
> > > Chatter" space set up _just in case_ though. :)
> > I do personally find #fedora-fedmsg and #fedora-fedmsg-stg useful though.
> > If only to see that things are flowing as they should be.
> On Matrix I'm getting disconnected from these channels because of some spam
> rules. :-(

Oh? What does it say? This is by joining them over the
#fedora-fedmsg:libera.org bridge?

kevin




[Fedocal] Reminder meeting : Websites & Apps Team Meeting

2021-09-13 Thread jflory7
Dear all,

You are kindly invited to the meeting:
   Websites & Apps Team Meeting on 2021-09-14 from 15:00:00 to 16:00:00 UTC
   At https://meet.jit.si/fedora-websites-apps-meeting

The meeting will be about:
Weekly team meeting for the Websites & Apps Team. This is part of the
[Websites & Apps Community Revamp Objective](https://fedoraproject.org/wiki/Objectives/Websites_%26_Apps_Community_Revamp).
See [past discussion](https://discussion.fedoraproject.org/t/planning-meeting-for-websites-apps-team-reboot/27911)
to learn more about the Websites & Apps Team and how it came together.

More information available at:
[hackmd.io/Mxm2We3yTqKybLsdohadOA](https://hackmd.io/Mxm2We3yTqKybLsdohadOA?view)


Source: https://calendar.fedoraproject.org//meeting/9990/
