[jenkinsci/nexus-platform-plugin] 3b2041: Removing git commands

2022-07-07 Thread 'Hector Danilo Hurtado Olaya' via Jenkins Commits
  Branch: refs/heads/test-post-release-policy-evaluation
  Home:   https://github.com/jenkinsci/nexus-platform-plugin
  Commit: 3b2041dd3d3060d6503ac8a77c3b93a5332aae53
  
https://github.com/jenkinsci/nexus-platform-plugin/commit/3b2041dd3d3060d6503ac8a77c3b93a5332aae53
  Author: Hector Hurtado 
  Date:   2022-07-07 (Thu, 07 Jul 2022)

  Changed paths:
M Jenkinsfile.sonatype.post-release

  Log Message:
  ---
  Removing git commands




[jenkinsci/nexus-platform-plugin] 270b8e: Tests post release policy evaluation

2022-07-07 Thread 'Hector Danilo Hurtado Olaya' via Jenkins Commits
  Branch: refs/heads/test-post-release-policy-evaluation
  Home:   https://github.com/jenkinsci/nexus-platform-plugin
  Commit: 270b8e9d24022ee9135a733e6a43c61411effebb
  
https://github.com/jenkinsci/nexus-platform-plugin/commit/270b8e9d24022ee9135a733e6a43c61411effebb
  Author: Hector Hurtado 
  Date:   2022-07-07 (Thu, 07 Jul 2022)

  Changed paths:
M Jenkinsfile.sonatype.post-release

  Log Message:
  ---
  Tests post release policy evaluation




[jenkinsci/nexus-platform-plugin]

2022-07-07 Thread 'Hector Danilo Hurtado Olaya' via Jenkins Commits
  Branch: refs/heads/bump-innersource-dependencies-c40883
  Home:   https://github.com/jenkinsci/nexus-platform-plugin



Re: Synaptic missing in "Bookworm"

2022-06-30 Thread Richard Hector

On 1/07/22 12:08, Peter Hillier-Brook wrote:

anyone with thoughts, or info about Synaptic missing in "Bookworm"?



https://tracker.debian.org/pkg/synaptic

Richard



Re: [Openvpn-users] How to enable timestamps in server logfile?

2022-06-29 Thread Richard Hector

On 23/06/22 02:05, David Sommerseth wrote:

/usr/lib/systemd/system/openvpn-server@.service


^^  This is the proper service file being packaged.  Though, since this 
comes from a Debian package, I would have expected it under 
/lib/systemd/system.


Thanks to the big /usr merge, they're going to be in the same place anyway.

On my debian bullseye system:

richard@zircon:~$ file /lib
/lib: symbolic link to usr/lib

https://wiki.debian.org/UsrMerge
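A quick way to confirm that both spellings of the unit path resolve to the 
same file on a merged system (a sketch, assuming the openvpn package is 
installed so the unit file exists):

# both should print /usr/lib/systemd/system/openvpn-server@.service
readlink -f /lib/systemd/system/openvpn-server@.service
readlink -f /usr/lib/systemd/system/openvpn-server@.service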

I assume Ubuntu has done or will do the same thing - IIRC it's led by 
systemd.


https://discourse.ubuntu.com/t/usr-merge-to-become-required-in-hh/18962

Richard




[jenkinsci/nexus-platform-plugin] fdf5e5: INT-6741 Create support script to get plugins deta...

2022-06-29 Thread 'Hector Danilo Hurtado Olaya' via Jenkins Commits
  Branch: refs/heads/main
  Home:   https://github.com/jenkinsci/nexus-platform-plugin
  Commit: fdf5e54dbc57233bf2e817dd4c60a5ac72e2440d
  
https://github.com/jenkinsci/nexus-platform-plugin/commit/fdf5e54dbc57233bf2e817dd4c60a5ac72e2440d
  Author: Hector Danilo Hurtado Olaya 
  Date:   2022-06-29 (Wed, 29 Jun 2022)

  Changed paths:
A support/README.md
A support/print-plugins-information-script.groovy

  Log Message:
  ---
  INT-6741 Create support script to get plugins details (#208)

* INT-6741 Create support script to get plugins details

* Adjusting docs

* Update support/print-plugins-information-script.groovy

Co-authored-by: Eduard Tita <49167996+eduard-t...@users.noreply.github.com>

* Feedback changes

Co-authored-by: Eduard Tita <49167996+eduard-t...@users.noreply.github.com>




[jenkinsci/nexus-platform-plugin]

2022-06-29 Thread 'Hector Danilo Hurtado Olaya' via Jenkins Commits
  Branch: refs/heads/INT-6741-support-script-to-get-plugin-details
  Home:   https://github.com/jenkinsci/nexus-platform-plugin



[jenkinsci/nexus-platform-plugin] 189153: Feedback changes

2022-06-29 Thread 'Hector Danilo Hurtado Olaya' via Jenkins Commits
  Branch: refs/heads/INT-6741-support-script-to-get-plugin-details
  Home:   https://github.com/jenkinsci/nexus-platform-plugin
  Commit: 189153a9cb52366a96ddbbf205e703a106dc5685
  
https://github.com/jenkinsci/nexus-platform-plugin/commit/189153a9cb52366a96ddbbf205e703a106dc5685
  Author: Hector Hurtado 
  Date:   2022-06-29 (Wed, 29 Jun 2022)

  Changed paths:
A support/README.md
M support/print-plugins-information-script.groovy

  Log Message:
  ---
  Feedback changes




[jenkinsci/nexus-platform-plugin] a27069: Update support/print-plugins-information-script.gr...

2022-06-29 Thread 'Hector Danilo Hurtado Olaya' via Jenkins Commits
  Branch: refs/heads/INT-6741-support-script-to-get-plugin-details
  Home:   https://github.com/jenkinsci/nexus-platform-plugin
  Commit: a27069e522fe7b70691a00bb8a8944e464de4acf
  
https://github.com/jenkinsci/nexus-platform-plugin/commit/a27069e522fe7b70691a00bb8a8944e464de4acf
  Author: Hector Danilo Hurtado Olaya 
  Date:   2022-06-29 (Wed, 29 Jun 2022)

  Changed paths:
M support/print-plugins-information-script.groovy

  Log Message:
  ---
  Update support/print-plugins-information-script.groovy

Co-authored-by: Eduard Tita <49167996+eduard-t...@users.noreply.github.com>




[jenkinsci/nexus-platform-plugin] d69994: Adjusting docs

2022-06-29 Thread 'Hector Danilo Hurtado Olaya' via Jenkins Commits
  Branch: refs/heads/INT-6741-support-script-to-get-plugin-details
  Home:   https://github.com/jenkinsci/nexus-platform-plugin
  Commit: d699941118c5febc241bc63dbcbdaa25cc123be1
  
https://github.com/jenkinsci/nexus-platform-plugin/commit/d699941118c5febc241bc63dbcbdaa25cc123be1
  Author: Hector Hurtado 
  Date:   2022-06-29 (Wed, 29 Jun 2022)

  Changed paths:
M support/print-plugins-information-script.groovy

  Log Message:
  ---
  Adjusting docs




[jenkinsci/nexus-platform-plugin] 1e0f6d: INT-6741 Create support script to get plugins details

2022-06-29 Thread 'Hector Danilo Hurtado Olaya' via Jenkins Commits
  Branch: refs/heads/INT-6741-support-script-to-get-plugin-details
  Home:   https://github.com/jenkinsci/nexus-platform-plugin
  Commit: 1e0f6d6ec13ca1e78105e9323e1af3c2da918d89
  
https://github.com/jenkinsci/nexus-platform-plugin/commit/1e0f6d6ec13ca1e78105e9323e1af3c2da918d89
  Author: Hector Hurtado 
  Date:   2022-06-29 (Wed, 29 Jun 2022)

  Changed paths:
A support/print-plugins-information-script.groovy

  Log Message:
  ---
  INT-6741 Create support script to get plugins details




Re: [DNG] moving to a new system

2022-06-24 Thread Hector Gonzalez Jaime via Dng


On 6/24/22 10:56, o1bigtenor via Dng wrote:

On Fri, Jun 24, 2022 at 10:19 AM Dr. Nikolaus Klepp via Dng
 wrote:

Anno domini 2022 Fri, 24 Jun 09:05:39 -0500
  o1bigtenor via Dng scripsit:

Greetings

Hoping that I'm not asking too many questions.

(moving from debian testing to devuan testing (daedalus)
the old system is under 5.17.xx and the new one is on 5.18
if that makes for differences)

(I've learnt the hard way that just winging things means a LOT more
work and even a greater chance for issues.)

My existing system has been a work in progress for over 10 years, so
I've gotten things set up quite the way I like them. Things change
slowly, but that also means fewer 'terror' moments when everything has
gone 'goofy'.

Is there any way to move over things like settings (and all the other
paraphernalia) for browsers and libreoffice and the like?

I was thinking of doing things by using scp from the old system to the new one.

Dunno if that would create issues or not.

Any better ideas - - - - well I'm all ears!!!

Move your home directory to the new system ... and use rsync, not scp.


That seems simple - - - - except I've never used rsync yet.

Suggestions for a good recipe to follow- - - please?



from the new system (this will overwrite /home files if you have them):

rsync -avxKSH root@oldsystem:/home/ /home/

The options mean: archive mode (recursive, preserving permissions, ownership, 
times and symlinks), show what you do (verbose), don't cross filesystem 
boundaries, keep directory symlinks on the receiver, handle sparse files 
efficiently, and preserve hard links, copying from 
root@oldsystem:/home/ to your local /home/


Just don't do this for the root filesystem, unless it is to put it 
somewhere else


This will use ssh for authentication; either use a key for 
authenticating (see man ssh-keygen) or change the user to whatever it needs to be.
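For example, key-based login can be set up from the new system first (a 
sketch, assuming the old box answers as root@oldsystem):

# create a key pair on the new system, then install the public key on the old one
ssh-keygen -t ed25519
ssh-copy-id root@oldsystem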


man rsync explains the options in detail.  You can interrupt this 
command and run it again; it will continue where it left off.
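A dry run is also cheap before the real copy (a sketch; -n only lists what 
would be transferred, using the same flags as above):

rsync -avxKSHn root@oldsystem:/home/ /home/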




TIA


--
Hector Gonzalez
ca...@genac.org



Bug#1000250: Dependency on patatt

2022-06-22 Thread Hector Oron
Hello,

On Sat, 20 Nov 2021 at 07:33, Anuradha Weeraman  wrote:
> Patatt (https://git.kernel.org/pub/scm/utils/patatt/patatt.git) is now
> in Debian unstable and patch attestation functionality has been moved
> to it, so requesting that a dependency be included in b4 for patatt.

This is now done; however, the recommended patatt version is patatt>=0.5,<2.0.
I'll update the package to a versioned dependency once this is updated
in unstable.

Regards,
-- 
 Héctor Orón  -.. . -... .. .- -.   -.. . ...- . .-.. --- .--. . .-.



Bug#1010064: salsa: please push the pristine-tar branch

2022-06-22 Thread Hector Oron
Hello Dmitry,

On Sat, 23 Apr 2022 at 15:57, Dmitry Baryshkov  wrote:
>
> Source: device-tree-compiler
> Version: 1.6.1-1
> Severity: normal
>
> Recent push to dtc sources to the repo at salsa includes the
> debian/gbp.conf file. This file contains the "pristine-tar = True"
> config option, however the repo does not contain the pristine-tar
> branch. Thus rebuilding the package from the repo without passing
> additional options to gbp is impossible.
>
> Please push the pristine-tar branch to the salsa repo.

I do not have a pristine-tar branch on my system, but you can grab the
upstream tarball from the Debian mirror. I'll drop pristine-tar from
gbp.conf to allow gbp to create a tarball from the upstream tag.
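In the meantime, something like this should rebuild from the upstream tag 
without a pristine-tar branch (a sketch; the tag format shown is gbp's 
default and may differ in the salsa repo):

gbp buildpackage --git-no-pristine-tar --git-upstream-tag='upstream/%(version)s'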

I hope that works for you.

Regards
-- 
 Héctor Orón  -.. . -... .. .- -.   -.. . ...- . .-.. --- .--. . .-.



[google-appengine] Re: App Engine Phishing Website

2022-06-17 Thread 'Hector Martinez Rodriguez' via Google App Engine
The form you submitted is correct, but you can additionally use this one 
 to report the 
fraudulent clone. In any case, we have created an internal report to escalate 
this concern as well.

On Thursday, June 16, 2022 at 4:58:18 PM UTC-5 dbab...@alluresecurity.com 
wrote:

> Hello,
> I used the form 
>  to report 
> a phishing website 10 days ago and the site, while difficult to reach, is 
> still publicly accessible.
>
> What can we do to escalate?
>
> IP address: 34.145.113.173
> https://34.145.113.173/
> *Hostname:* 173.113.145.34.bc.googleusercontent.com.
>
> That's a fraudulent clone of First Republic Bank's online banking website.
>
>
>



Re: PL/M & CP/M

2022-06-07 Thread Hector Peraza via cctalk

On 5/31/2022 12:29 AM, Bill Gunshannon via cctalk wrote:

On 5/30/22 18:20, Chuck Guzis via cctalk wrote:

Are you talking about this?

https://web.archive.org/web/20131110002247/http://www.nj7p.info/Common/Toys/Software/OS/work/IntelTools.zip 



(Courtesy of Mark Ogden)



Not what I thought I was looking for, but it may turn out very
useful anyway.  I might be able to build a system and then
disassemble it to Z80 mnemonics.  In any event, it will
make fun reading.

Thank you.

bill



In the late 80's I disassembled a PL/M compiler I got on paper tape and 
ported it to CP/M. I then stored it on a cassette tape, then lost it, and 
about 8 years ago found it again and recovered it. The compiler had no 
indication whatsoever of who wrote it, but with the help of Mr. Emmanuel 
Roche from comp.os.cpm its origin was traced back to Norsk Data's 
PL/Mycro compiler for their Mycro-1 8080 machine. It is a one-pass 
compiler (the key to its identification), appeared to be written 
directly in 8080 assembly, and produces hex or binary output. I never 
made it available anywhere, except for the copy I gave to Mr. Roche and 
IIRC to Mark Ogden too. Is that the one you mean? The only other PL/M 
compiler I know of that ran on 8-bit hardware, besides Intel's, was 
PLMX, but I don't know the history behind it.


Hector.



Re: regarding firewall discussion

2022-06-03 Thread Richard Hector

On 2/06/22 05:26, Joe wrote:

On Tue, 31 May 2022 03:17:52 +0100
mick crane  wrote:


Regarding the firewall discussion, I'm uncertain how firewalls are
supposed to work.
I think the idea is that nothing is accepted unless it is in response
to a request.
What's to stop some spurious instructions being sent in response to a
genuine request?


Nothing really, but the reply can only come from the site you made the
request to.


A source IP address can be faked.
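For reference, the "only accept replies" behaviour being discussed is 
usually a stateful rule pair along these lines (a sketch in iptables terms, 
not taken from the thread):

# accept packets that belong to connections this host initiated
iptables -A INPUT -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT
# refuse everything else that arrives unsolicited
iptables -P INPUT DROP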

Richard



Re: grep: show matching line from pattern file

2022-06-03 Thread Richard Hector

On 3/06/22 07:17, Greg Wooledge wrote:

On Thu, Jun 02, 2022 at 03:12:23PM -0400, duh wrote:


> > Jim Popovitch wrote on 28/05/2022 21:40:
> > > I have a file of regex patterns and I use grep like so:
> > > 
> > >  ~$ grep -f patterns.txt /var/log/syslog
> > > 
> > > What I'd like to get is a listing of all lines, specifically the line

> > > numbers of the regexps in patterns.txt, that match entries in
> > > /var/log/syslog.   Is there a way to do this?



$ cat -n /var/log/syslog | grep warn

and it found "warn" in the syslog file and provided line numbers. I have
not used the -f option.


You're getting the line numbers from the log file.  The OP wanted the line
numbers of the patterns in the -f pattern file.

Why?  I have no idea.  There is no standard option to do this, because
it's not a common requirement.  That's why I wrote one from scratch
in perl.



I don't know what the OP's use case is, but here's an example I might use:

I have a bunch of custom ignore files for logcheck. After a software 
upgrade, I might want to check which patterns no longer match anything, 
and can be deleted or modified.


I'd really still want to check with real egrep, though, rather than 
using perl's re engine instead.
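Something along these lines does that with grep itself (a sketch, assuming 
patterns.txt holds one extended regex per line, as in the original question):

# report the line number of every pattern that no longer matches the log
nl -ba patterns.txt | while read -r num pat; do
    [ -n "$pat" ] || continue   # skip blank lines
    grep -E -q -- "$pat" /var/log/syslog || echo "pattern on line $num no longer matches: $pat"
done

Swap || for && to list the patterns that do still match, which is closer to 
what the OP asked for.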


Cheers,
Richard



Re: Jira -> GitHub Issues Migration (This Friday)

2022-06-03 Thread Hector Miuler Malpica Gallegos
The Jira issue *Type* is not imported as a *label* in GitHub :(. I think
it's important.


*Hector Miuler Malpica Gallegos <http://www.miuler.com>*



El vie, 3 jun 2022 a la(s) 10:14, Danny McCormick (dannymccorm...@google.com)
escribió:

> Hey Hector, they were just enabled (thanks @Valentyn Tymofieiev
>  for merging the PR).
>
> The existing Jiras should be migrated throughout the rest of the day -
> this will take a while due to GitHub rate limits, but it should definitely
> be done by the end of the weekend (and I expect significantly earlier). In
> the meantime, feel free to start opening issues at
> https://github.com/apache/beam/issues, and let me know if you see any
> issues with the migrated issues.
>
> Thanks,
> Danny
>
> On Fri, Jun 3, 2022 at 10:42 AM Hector Miuler Malpica Gallegos <
> miu...@gmail.com> wrote:
>
>> When will github issues be enabled?  I understood that it would be today
>>
>>
>>
>> *Hector Miuler Malpica Gallegos <http://www.miuler.com>*
>>
>>
>>
>> El jue, 2 jun 2022 a la(s) 21:35, Hector Miuler Malpica Gallegos (
>> miu...@gmail.com) escribió:
>>
>>>
>>> hahaha!!! Excuse me, I was thinking of the Flink project when I read the
>>> last email, I do not know why.
>>>
>>>
>>> *Hector Miuler Malpica Gallegos <http://www.miuler.com>*
>>>
>>>
>>>
>>> El jue, 2 jun 2022 a la(s) 18:04, Danny McCormick (
>>> dannymccorm...@google.com) escribió:
>>>
>>>> Thanks for calling that out Hector - the migration will only migrate
>>>> issues from the Beam project (that one is from the Flink project), so I
>>>> don't think it should be an issue.
>>>>
>>>> Thanks,
>>>> Danny
>>>>
>>>> On Thu, Jun 2, 2022 at 6:38 PM Hector Miuler Malpica Gallegos <
>>>> miu...@gmail.com> wrote:
>>>>
>>>>> Does this migration include the issues of the `Kubernetes Operator`
>>>>> component, like this issue FLINK-27820
>>>>> <https://issues.apache.org/jira/browse/FLINK-27820>? Those issues
>>>>> correspond to the repository
>>>>> https://github.com/apache/flink-kubernetes-operator; keep it in mind.
>>>>>
>>>>>
>>>>> *Hector Miuler Malpica Gallegos <http://www.miuler.com>*
>>>>>
>>>>>
>>>>>
>>>>> El jue, 2 jun 2022 a la(s) 12:33, Danny McCormick (
>>>>> dannymccorm...@google.com) escribió:
>>>>>
>>>>>> Given the consensus here, I updated the tool to do this. This means
>>>>>> that we won't update the JIRAs to be read-only until after the migration 
>>>>>> is
>>>>>> complete. I'll rerun the tool if any extra jiras come in during the
>>>>>> intervening period. The tool will also still write the mapping to the 
>>>>>> file
>>>>>> in case there are unforeseen issues so that we can backfill if needed.
>>>>>>
>>>>>> Thanks for the suggestion and followup Brian, Ahmet, and Alexey.
>>>>>>
>>>>>> On Thu, Jun 2, 2022 at 12:16 PM Alexey Romanenko <
>>>>>> aromanenko@gmail.com> wrote:
>>>>>>
>>>>>>> +1 That would be very helpful for mapping!
>>>>>>>
>>>>>>> On 2 Jun 2022, at 17:48, Ahmet Altay  wrote:
>>>>>>>
>>>>>>> Is it possible to add comments on the JIRAs with a link to the new
>>>>>>> corresponding github issue?
>>>>>>>
>>>>>>> On Thu, Jun 2, 2022 at 8:47 AM Danny McCormick <
>>>>>>> dannymccorm...@google.com> wrote:
>>>>>>>
>>>>>>>> Thanks for the feedback, I agree it would be good to keep that
>>>>>>>> option open - I updated the tool to write those to a file when we 
>>>>>>>> create an
>>>>>>>> issue. I'll share that after the migration.
>>>>>>>>
>>>>>>>> Thanks,
>>>>>>>> Danny
>>>>>>>>
>>>>>>>> On Wed, Jun 1, 2022 at 7:03 PM Brian Hulette 
>>>>>>>> wrote:
>>>>>>>>
>>>>>>>>> Thanks Danny. Regarding links to GitHub issues, if we could at
>>>>>>>>> least save off a record of ji

Re: Jira -> GitHub Issues Migration (This Friday)

2022-06-03 Thread Hector Miuler Malpica Gallegos
When will GitHub issues be enabled?  I understood that it would be today.



*Hector Miuler Malpica Gallegos <http://www.miuler.com>*



El jue, 2 jun 2022 a la(s) 21:35, Hector Miuler Malpica Gallegos (
miu...@gmail.com) escribió:

>
> hahaha!!! Excuse me, I was thinking of the Flink project when I read the last
> email, I do not know why.
>
>
> *Hector Miuler Malpica Gallegos <http://www.miuler.com>*
>
>
>
> El jue, 2 jun 2022 a la(s) 18:04, Danny McCormick (
> dannymccorm...@google.com) escribió:
>
>> Thanks for calling that out Hector - the migration will only migrate
>> issues from the Beam project (that one is from the Flink project), so I
>> don't think it should be an issue.
>>
>> Thanks,
>> Danny
>>
>> On Thu, Jun 2, 2022 at 6:38 PM Hector Miuler Malpica Gallegos <
>> miu...@gmail.com> wrote:
>>
>>> Does this migration include the issues of the `Kubernetes Operator`
>>> component, like this issue FLINK-27820
>>> <https://issues.apache.org/jira/browse/FLINK-27820>? Those issues
>>> correspond to the repository
>>> https://github.com/apache/flink-kubernetes-operator; keep it in mind.
>>>
>>>
>>> *Hector Miuler Malpica Gallegos <http://www.miuler.com>*
>>>
>>>
>>>
>>> El jue, 2 jun 2022 a la(s) 12:33, Danny McCormick (
>>> dannymccorm...@google.com) escribió:
>>>
>>>> Given the consensus here, I updated the tool to do this. This means
>>>> that we won't update the JIRAs to be read-only until after the migration is
>>>> complete. I'll rerun the tool if any extra jiras come in during the
>>>> intervening period. The tool will also still write the mapping to the file
>>>> in case there are unforeseen issues so that we can backfill if needed.
>>>>
>>>> Thanks for the suggestion and followup Brian, Ahmet, and Alexey.
>>>>
>>>> On Thu, Jun 2, 2022 at 12:16 PM Alexey Romanenko <
>>>> aromanenko@gmail.com> wrote:
>>>>
>>>>> +1 That would be very helpful for mapping!
>>>>>
>>>>> On 2 Jun 2022, at 17:48, Ahmet Altay  wrote:
>>>>>
>>>>> Is it possible to add comments on the JIRAs with a link to the new
>>>>> corresponding github issue?
>>>>>
>>>>> On Thu, Jun 2, 2022 at 8:47 AM Danny McCormick <
>>>>> dannymccorm...@google.com> wrote:
>>>>>
>>>>>> Thanks for the feedback, I agree it would be good to keep that option
>>>>>> open - I updated the tool to write those to a file when we create an 
>>>>>> issue.
>>>>>> I'll share that after the migration.
>>>>>>
>>>>>> Thanks,
>>>>>> Danny
>>>>>>
>>>>>> On Wed, Jun 1, 2022 at 7:03 PM Brian Hulette 
>>>>>> wrote:
>>>>>>
>>>>>>> Thanks Danny. Regarding links to GitHub issues, if we could at least
>>>>>>> save off a record of jira <-> issue mappings we could look at adding the
>>>>>>> links later. I think it would be nice to have those links so that anyone
>>>>>>> landing in a jira through a search or an old link can quickly find the
>>>>>>> current ticket, but I don't think that needs to block the migration.
>>>>>>>
>>>>>>> On Wed, Jun 1, 2022 at 7:05 AM Danny McCormick <
>>>>>>> dannymccorm...@google.com> wrote:
>>>>>>>
>>>>>>>> Hey Brian,
>>>>>>>>
>>>>>>>> 1. Right now, the plan is to (1) turn on the issues tab, (2) make
>>>>>>>> the JIRA read only, (3) run the migration tool. Since the migration 
>>>>>>>> tool
>>>>>>>> won't be run until after Jiras are read only, there shouldn't be issues
>>>>>>>> with making sure everything gets captured.
>>>>>>>> 2. That current ordering does mean it's difficult to add a link to
>>>>>>>> the newly created Issue, and I hadn't built in that feature. With that
>>>>>>>> said, I will ask Infra if they're able to put up a banner redirecting
>>>>>>>> people to GitHub for the Beam project - that should hopefully minimize 
>>>>>>>> some
>>>>>>>> of the issues - and I'll also look into updating the tool to 

[jira] [Updated] (FLINK-27889) Error when the LastReconciledSpec is null

2022-06-03 Thread Hector Miuler Malpica Gallegos (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-27889?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hector Miuler Malpica Gallegos updated FLINK-27889:
---
Description: 
My FlinkDeployment was left with errors when it could not start correctly, with the 
following message:

 

 
{code:java}
2022-06-01 04:36:10,070 o.a.f.k.o.r.ReconciliationUtils [WARN 
][flink-02/cosmosdb] Attempt count: 5, last attempt: true
2022-06-01 04:36:10,072 i.j.o.p.e.ReconciliationDispatcher 
[ERROR][flink-02/cosmosdb] Error during event processing ExecutionScope{ 
resource id: CustomResourceID
{name='cosmosdb', namespace='flink-02'}, version: null} failed.
org.apache.flink.kubernetes.operator.exception.ReconciliationException: 
java.lang.IllegalArgumentException: Only "local" is supported as schema for 
application mode. This assumes that the jar is located in the image, not the 
Flink client. An example of such path is: 
local:///opt/flink/examples/streaming/WindowJoin.jar
        at 
org.apache.flink.kubernetes.operator.controller.FlinkDeploymentController.reconcile(FlinkDeploymentController.java:130)
        at 
org.apache.flink.kubernetes.operator.controller.FlinkDeploymentController.reconcile(FlinkDeploymentController.java:59)
        at 
io.javaoperatorsdk.operator.processing.Controller$2.execute(Controller.java:101)
        at 
io.javaoperatorsdk.operator.processing.Controller$2.execute(Controller.java:76)
        at 
io.javaoperatorsdk.operator.api.monitoring.Metrics.timeControllerExecution(Metrics.java:34)
        at 
io.javaoperatorsdk.operator.processing.Controller.reconcile(Controller.java:75)
        at 
io.javaoperatorsdk.operator.processing.event.ReconciliationDispatcher.reconcileExecution(ReconciliationDispatcher.java:143)
        at 
io.javaoperatorsdk.operator.processing.event.ReconciliationDispatcher.handleReconcile(ReconciliationDispatcher.java:109)
        at 
io.javaoperatorsdk.operator.processing.event.ReconciliationDispatcher.handleDispatch(ReconciliationDispatcher.java:74)
        at 
io.javaoperatorsdk.operator.processing.event.ReconciliationDispatcher.handleExecution(ReconciliationDispatcher.java:50)
        at 
io.javaoperatorsdk.operator.processing.event.EventProcessor$ControllerExecution.run(EventProcessor.java:349)
        at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown 
Source)
        at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown 
Source)
        at java.base/java.lang.Thread.run(Unknown Source)
Caused by: java.lang.IllegalArgumentException: Only "local" is supported as 
schema for application mode. This assumes that the jar is located in the image, 
not the Flink client. An example of such path is: 
local:///opt/flink/examples/streaming/WindowJoin.jar
        at 
org.apache.flink.kubernetes.utils.KubernetesUtils.lambda$checkJarFileForApplicationMode$2(KubernetesUtils.java:407)
        at 
org.apache.flink.util.function.FunctionUtils.lambda$uncheckedFunction$2(FunctionUtils.java:73)
        at java.base/java.util.stream.ReferencePipeline$3$1.accept(Unknown 
Source)
        at java.base/java.util.Collections$2.tryAdvance(Unknown Source)
        at java.base/java.util.Collections$2.forEachRemaining(Unknown Source)
        at java.base/java.util.stream.AbstractPipeline.copyInto(Unknown Source)
        at java.base/java.util.stream.AbstractPipeline.wrapAndCopyInto(Unknown 
Source)
        at 
java.base/java.util.stream.ReduceOps$ReduceOp.evaluateSequential(Unknown Source)
        at java.base/java.util.stream.AbstractPipeline.evaluate(Unknown Source)
        at java.base/java.util.stream.ReferencePipeline.collect(Unknown Source)
        at 
org.apache.flink.kubernetes.utils.KubernetesUtils.checkJarFileForApplicationMode(KubernetesUtils.java:412)
        at 
org.apache.flink.kubernetes.KubernetesClusterDescriptor.deployApplicationCluster(KubernetesClusterDescriptor.java:206)
        at 
org.apache.flink.client.deployment.application.cli.ApplicationClusterDeployer.run(ApplicationClusterDeployer.java:67)
        at 
org.apache.flink.kubernetes.operator.service.FlinkService.submitApplicationCluster(FlinkService.java:163)
        at 
org.apache.flink.kubernetes.operator.reconciler.deployment.ApplicationReconciler.deployFlinkJob(ApplicationReconciler.java:283)
        at 
org.apache.flink.kubernetes.operator.reconciler.deployment.ApplicationReconciler.reconcile(ApplicationReconciler.java:83)
        at 
org.apache.flink.kubernetes.operator.reconciler.deployment.ApplicationReconciler.reconcile(ApplicationReconciler.java:58)
        at 
org.apache.flink.kubernetes.operator.controller.FlinkDeploymentController.reconcile(FlinkDeploymentController.java:126)
        ... 13 more
2022-06-01 04:36:10,073 i.j.o.p.e.EventProcessor       
[ERROR][flink-02/cosmosdb] Exhausted retries for ExecutionScope{ resource id: 
CustomResourceID{name='cosmosdb', namespace='flink-02'}
, ve

[jira] [Updated] (FLINK-27889) Error when the LastReconciledSpec is null

2022-06-02 Thread Hector Miuler Malpica Gallegos (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-27889?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hector Miuler Malpica Gallegos updated FLINK-27889:
---
Description: 
My FlinkDeployment was left with errors when it could not start correctly, with the 
following message:

 

{{2022-06-01 04:36:10,070 o.a.f.k.o.r.ReconciliationUtils [WARN 
][flink-02/cosmosdb] Attempt count: 5, last attempt: true}}
{{2022-06-01 04:36:10,072 i.j.o.p.e.ReconciliationDispatcher 
[ERROR][flink-02/cosmosdb] Error during event processing ExecutionScope{ 
resource id: CustomResourceID

{name='cosmosdb', namespace='flink-02'}, version: null} failed.}}
{{org.apache.flink.kubernetes.operator.exception.ReconciliationException: 
java.lang.IllegalArgumentException: Only "local" is supported as schema for 
application mode. This assumes that the jar is located in the image, not the 
Flink client. An example of such path is: 
local:///opt/flink/examples/streaming/WindowJoin.jar}}
{{        at 
org.apache.flink.kubernetes.operator.controller.FlinkDeploymentController.reconcile(FlinkDeploymentController.java:130)}}
{{        at 
org.apache.flink.kubernetes.operator.controller.FlinkDeploymentController.reconcile(FlinkDeploymentController.java:59)}}
{{        at 
io.javaoperatorsdk.operator.processing.Controller$2.execute(Controller.java:101)}}
{{        at 
io.javaoperatorsdk.operator.processing.Controller$2.execute(Controller.java:76)}}
{{        at 
io.javaoperatorsdk.operator.api.monitoring.Metrics.timeControllerExecution(Metrics.java:34)}}
{{        at 
io.javaoperatorsdk.operator.processing.Controller.reconcile(Controller.java:75)}}
{{        at 
io.javaoperatorsdk.operator.processing.event.ReconciliationDispatcher.reconcileExecution(ReconciliationDispatcher.java:143)}}
{{        at 
io.javaoperatorsdk.operator.processing.event.ReconciliationDispatcher.handleReconcile(ReconciliationDispatcher.java:109)}}
{{        at 
io.javaoperatorsdk.operator.processing.event.ReconciliationDispatcher.handleDispatch(ReconciliationDispatcher.java:74)}}
{{        at 
io.javaoperatorsdk.operator.processing.event.ReconciliationDispatcher.handleExecution(ReconciliationDispatcher.java:50)}}
{{        at 
io.javaoperatorsdk.operator.processing.event.EventProcessor$ControllerExecution.run(EventProcessor.java:349)}}
{{        at 
java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source)}}
{{        at 
java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source)}}
{{        at java.base/java.lang.Thread.run(Unknown Source)}}
{{Caused by: java.lang.IllegalArgumentException: Only "local" is supported as 
schema for application mode. This assumes that the jar is located in the image, 
not the Flink client. An example of such path is: 
local:///opt/flink/examples/streaming/WindowJoin.jar}}
{{        at 
org.apache.flink.kubernetes.utils.KubernetesUtils.lambda$checkJarFileForApplicationMode$2(KubernetesUtils.java:407)}}
{{        at 
org.apache.flink.util.function.FunctionUtils.lambda$uncheckedFunction$2(FunctionUtils.java:73)}}
{{        at java.base/java.util.stream.ReferencePipeline$3$1.accept(Unknown 
Source)}}
{{        at java.base/java.util.Collections$2.tryAdvance(Unknown Source)}}
{{        at java.base/java.util.Collections$2.forEachRemaining(Unknown 
Source)}}
{{        at java.base/java.util.stream.AbstractPipeline.copyInto(Unknown 
Source)}}
{{        at 
java.base/java.util.stream.AbstractPipeline.wrapAndCopyInto(Unknown Source)}}
{{        at 
java.base/java.util.stream.ReduceOps$ReduceOp.evaluateSequential(Unknown 
Source)}}
{{        at java.base/java.util.stream.AbstractPipeline.evaluate(Unknown 
Source)}}
{{        at java.base/java.util.stream.ReferencePipeline.collect(Unknown 
Source)}}
{{        at 
org.apache.flink.kubernetes.utils.KubernetesUtils.checkJarFileForApplicationMode(KubernetesUtils.java:412)}}
{{        at 
org.apache.flink.kubernetes.KubernetesClusterDescriptor.deployApplicationCluster(KubernetesClusterDescriptor.java:206)}}
{{        at 
org.apache.flink.client.deployment.application.cli.ApplicationClusterDeployer.run(ApplicationClusterDeployer.java:67)}}
{{        at 
org.apache.flink.kubernetes.operator.service.FlinkService.submitApplicationCluster(FlinkService.java:163)}}
{{        at 
org.apache.flink.kubernetes.operator.reconciler.deployment.ApplicationReconciler.deployFlinkJob(ApplicationReconciler.java:283)}}
{{        at 
org.apache.flink.kubernetes.operator.reconciler.deployment.ApplicationReconciler.reconcile(ApplicationReconciler.java:83)}}
{{        at 
org.apache.flink.kubernetes.operator.reconciler.deployment.ApplicationReconciler.reconcile(ApplicationReconciler.java:58)}}
{{        at 
org.apache.flink.kubernetes.operator.controller.FlinkDeploymentController.reconcile(FlinkDeploymentController.java:126)}}
{{        ... 13 more}}
{{2022-06-01 04:36:10,073 i.j.o.p.e.EventProcessor    

[jira] [Updated] (FLINK-27889) Error when the LastReconciledSpec is null

2022-06-02 Thread Hector Miuler Malpica Gallegos (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-27889?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hector Miuler Malpica Gallegos updated FLINK-27889:
---
Attachment: scratch_7.json

> Error when the LastReconciledSpec is null
> -
>
> Key: FLINK-27889
> URL: https://issues.apache.org/jira/browse/FLINK-27889
> Project: Flink
>  Issue Type: Bug
>  Components: Kubernetes Operator
>    Reporter: Hector Miuler Malpica Gallegos
>Priority: Major
> Attachments: scratch_7.json
>
>
> My FlinkDeployment was left with errors when it could not start correctly, with the 
> following message:
>  
> {{2022-06-01 04:36:10,070 o.a.f.k.o.r.ReconciliationUtils [WARN 
> ][flink-02/cosmosdb] Attempt count: 5, last attempt: true}}
> {{2022-06-01 04:36:10,072 i.j.o.p.e.ReconciliationDispatcher 
> [ERROR][flink-02/cosmosdb] Error during event processing ExecutionScope{ 
> resource id: CustomResourceID
> {name='cosmosdb', namespace='flink-02'}, version: null} failed.}}
> {{org.apache.flink.kubernetes.operator.exception.ReconciliationException: 
> java.lang.IllegalArgumentException: Only "local" is supported as schema for 
> application mode. This assumes that the jar is located in the image, not the 
> Flink client. An example of such path is: 
> local:///opt/flink/examples/streaming/WindowJoin.jar}}
> {{        at 
> org.apache.flink.kubernetes.operator.controller.FlinkDeploymentController.reconcile(FlinkDeploymentController.java:130)}}
> {{        at 
> org.apache.flink.kubernetes.operator.controller.FlinkDeploymentController.reconcile(FlinkDeploymentController.java:59)}}
> {{        at 
> io.javaoperatorsdk.operator.processing.Controller$2.execute(Controller.java:101)}}
> {{        at 
> io.javaoperatorsdk.operator.processing.Controller$2.execute(Controller.java:76)}}
> {{        at 
> io.javaoperatorsdk.operator.api.monitoring.Metrics.timeControllerExecution(Metrics.java:34)}}
> {{        at 
> io.javaoperatorsdk.operator.processing.Controller.reconcile(Controller.java:75)}}
> {{        at 
> io.javaoperatorsdk.operator.processing.event.ReconciliationDispatcher.reconcileExecution(ReconciliationDispatcher.java:143)}}
> {{        at 
> io.javaoperatorsdk.operator.processing.event.ReconciliationDispatcher.handleReconcile(ReconciliationDispatcher.java:109)}}
> {{        at 
> io.javaoperatorsdk.operator.processing.event.ReconciliationDispatcher.handleDispatch(ReconciliationDispatcher.java:74)}}
> {{        at 
> io.javaoperatorsdk.operator.processing.event.ReconciliationDispatcher.handleExecution(ReconciliationDispatcher.java:50)}}
> {{        at 
> io.javaoperatorsdk.operator.processing.event.EventProcessor$ControllerExecution.run(EventProcessor.java:349)}}
> {{        at 
> java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source)}}
> {{        at 
> java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source)}}
> {{        at java.base/java.lang.Thread.run(Unknown Source)}}
> {{Caused by: java.lang.IllegalArgumentException: Only "local" is supported as 
> schema for application mode. This assumes that the jar is located in the 
> image, not the Flink client. An example of such path is: 
> local:///opt/flink/examples/streaming/WindowJoin.jar}}
> {{        at 
> org.apache.flink.kubernetes.utils.KubernetesUtils.lambda$checkJarFileForApplicationMode$2(KubernetesUtils.java:407)}}
> {{        at 
> org.apache.flink.util.function.FunctionUtils.lambda$uncheckedFunction$2(FunctionUtils.java:73)}}
> {{        at java.base/java.util.stream.ReferencePipeline$3$1.accept(Unknown 
> Source)}}
> {{        at java.base/java.util.Collections$2.tryAdvance(Unknown Source)}}
> {{        at java.base/java.util.Collections$2.forEachRemaining(Unknown 
> Source)}}
> {{        at java.base/java.util.stream.AbstractPipeline.copyInto(Unknown 
> Source)}}
> {{        at 
> java.base/java.util.stream.AbstractPipeline.wrapAndCopyInto(Unknown Source)}}
> {{        at 
> java.base/java.util.stream.ReduceOps$ReduceOp.evaluateSequential(Unknown 
> Source)}}
> {{        at java.base/java.util.stream.AbstractPipeline.evaluate(Unknown 
> Source)}}
> {{        at java.base/java.util.stream.ReferencePipeline.collect(Unknown 
> Source)}}
> {{        at 
> org.apache.flink.kubernetes.utils.KubernetesUtils.checkJarFileForApplicationMode(KubernetesUtils.java:412)}}
> {{        at 
> org.apache.flink.kubernetes.KubernetesClusterDescriptor.deployApplicationCluster(KubernetesClusterDescriptor.java:2

[jira] [Updated] (FLINK-27889) Error when the LastReconciledSpec is null

2022-06-02 Thread Hector Miuler Malpica Gallegos (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-27889?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hector Miuler Malpica Gallegos updated FLINK-27889:
---
Description: 
My FlinkDeployment was left with errors when it could not start correctly, with the 
following message:

 

{{2022-06-01 04:36:10,070 o.a.f.k.o.r.ReconciliationUtils [WARN 
][flink-02/cosmosdb] Attempt count: 5, last attempt: true}}
{{2022-06-01 04:36:10,072 i.j.o.p.e.ReconciliationDispatcher 
[ERROR][flink-02/cosmosdb] Error during event processing ExecutionScope{ 
resource id: CustomResourceID

{name='cosmosdb', namespace='flink-02'}, version: null} failed.}}
{{org.apache.flink.kubernetes.operator.exception.ReconciliationException: 
java.lang.IllegalArgumentException: Only "local" is supported as schema for 
application mode. This assumes that the jar is located in the image, not the 
Flink client. An example of such path is: 
local:///opt/flink/examples/streaming/WindowJoin.jar}}
{{        at 
org.apache.flink.kubernetes.operator.controller.FlinkDeploymentController.reconcile(FlinkDeploymentController.java:130)}}
{{        at 
org.apache.flink.kubernetes.operator.controller.FlinkDeploymentController.reconcile(FlinkDeploymentController.java:59)}}
{{        at 
io.javaoperatorsdk.operator.processing.Controller$2.execute(Controller.java:101)}}
{{        at 
io.javaoperatorsdk.operator.processing.Controller$2.execute(Controller.java:76)}}
{{        at 
io.javaoperatorsdk.operator.api.monitoring.Metrics.timeControllerExecution(Metrics.java:34)}}
{{        at 
io.javaoperatorsdk.operator.processing.Controller.reconcile(Controller.java:75)}}
{{        at 
io.javaoperatorsdk.operator.processing.event.ReconciliationDispatcher.reconcileExecution(ReconciliationDispatcher.java:143)}}
{{        at 
io.javaoperatorsdk.operator.processing.event.ReconciliationDispatcher.handleReconcile(ReconciliationDispatcher.java:109)}}
{{        at 
io.javaoperatorsdk.operator.processing.event.ReconciliationDispatcher.handleDispatch(ReconciliationDispatcher.java:74)}}
{{        at 
io.javaoperatorsdk.operator.processing.event.ReconciliationDispatcher.handleExecution(ReconciliationDispatcher.java:50)}}
{{        at 
io.javaoperatorsdk.operator.processing.event.EventProcessor$ControllerExecution.run(EventProcessor.java:349)}}
{{        at 
java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source)}}
{{        at 
java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source)}}
{{        at java.base/java.lang.Thread.run(Unknown Source)}}
{{Caused by: java.lang.IllegalArgumentException: Only "local" is supported as 
schema for application mode. This assumes that the jar is located in the image, 
not the Flink client. An example of such path is: 
local:///opt/flink/examples/streaming/WindowJoin.jar}}
{{        at 
org.apache.flink.kubernetes.utils.KubernetesUtils.lambda$checkJarFileForApplicationMode$2(KubernetesUtils.java:407)}}
{{        at 
org.apache.flink.util.function.FunctionUtils.lambda$uncheckedFunction$2(FunctionUtils.java:73)}}
{{        at java.base/java.util.stream.ReferencePipeline$3$1.accept(Unknown 
Source)}}
{{        at java.base/java.util.Collections$2.tryAdvance(Unknown Source)}}
{{        at java.base/java.util.Collections$2.forEachRemaining(Unknown 
Source)}}
{{        at java.base/java.util.stream.AbstractPipeline.copyInto(Unknown 
Source)}}
{{        at 
java.base/java.util.stream.AbstractPipeline.wrapAndCopyInto(Unknown Source)}}
{{        at 
java.base/java.util.stream.ReduceOps$ReduceOp.evaluateSequential(Unknown 
Source)}}
{{        at java.base/java.util.stream.AbstractPipeline.evaluate(Unknown 
Source)}}
{{        at java.base/java.util.stream.ReferencePipeline.collect(Unknown 
Source)}}
{{        at 
org.apache.flink.kubernetes.utils.KubernetesUtils.checkJarFileForApplicationMode(KubernetesUtils.java:412)}}
{{        at 
org.apache.flink.kubernetes.KubernetesClusterDescriptor.deployApplicationCluster(KubernetesClusterDescriptor.java:206)}}
{{        at 
org.apache.flink.client.deployment.application.cli.ApplicationClusterDeployer.run(ApplicationClusterDeployer.java:67)}}
{{        at 
org.apache.flink.kubernetes.operator.service.FlinkService.submitApplicationCluster(FlinkService.java:163)}}
{{        at 
org.apache.flink.kubernetes.operator.reconciler.deployment.ApplicationReconciler.deployFlinkJob(ApplicationReconciler.java:283)}}
{{        at 
org.apache.flink.kubernetes.operator.reconciler.deployment.ApplicationReconciler.reconcile(ApplicationReconciler.java:83)}}
{{        at 
org.apache.flink.kubernetes.operator.reconciler.deployment.ApplicationReconciler.reconcile(ApplicationReconciler.java:58)}}
{{        at 
org.apache.flink.kubernetes.operator.controller.FlinkDeploymentController.reconcile(FlinkDeploymentController.java:126)}}
{{        ... 13 more}}
{{2022-06-01 04:36:10,073 i.j.o.p.e.EventProcessor    

[jira] [Updated] (FLINK-27889) Error when the LastReconciledSpec is null

2022-06-02 Thread Hector Miuler Malpica Gallegos (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-27889?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hector Miuler Malpica Gallegos updated FLINK-27889:
---
Description: 
My FlinkDeployment was left with errors when it could not start correctly, with the 
following message:

 

{{2022-06-01 04:36:10,070 o.a.f.k.o.r.ReconciliationUtils [WARN 
][flink-02/cosmosdb] Attempt count: 5, last attempt: true}}
{{2022-06-01 04:36:10,072 i.j.o.p.e.ReconciliationDispatcher 
[ERROR][flink-02/cosmosdb] Error during event processing ExecutionScope{ 
resource id: CustomResourceID

{name='cosmosdb', namespace='flink-02'}, version: null} failed.}}
{{org.apache.flink.kubernetes.operator.exception.ReconciliationException: 
java.lang.IllegalArgumentException: Only "local" is supported as schema for 
application mode. This assumes that the jar is located in the image, not the 
Flink client. An example of such path is: 
local:///opt/flink/examples/streaming/WindowJoin.jar}}
{{        at 
org.apache.flink.kubernetes.operator.controller.FlinkDeploymentController.reconcile(FlinkDeploymentController.java:130)}}
{{        at 
org.apache.flink.kubernetes.operator.controller.FlinkDeploymentController.reconcile(FlinkDeploymentController.java:59)}}
{{        at 
io.javaoperatorsdk.operator.processing.Controller$2.execute(Controller.java:101)}}
{{        at 
io.javaoperatorsdk.operator.processing.Controller$2.execute(Controller.java:76)}}
{{        at 
io.javaoperatorsdk.operator.api.monitoring.Metrics.timeControllerExecution(Metrics.java:34)}}
{{        at 
io.javaoperatorsdk.operator.processing.Controller.reconcile(Controller.java:75)}}
{{        at 
io.javaoperatorsdk.operator.processing.event.ReconciliationDispatcher.reconcileExecution(ReconciliationDispatcher.java:143)}}
{{        at 
io.javaoperatorsdk.operator.processing.event.ReconciliationDispatcher.handleReconcile(ReconciliationDispatcher.java:109)}}
{{        at 
io.javaoperatorsdk.operator.processing.event.ReconciliationDispatcher.handleDispatch(ReconciliationDispatcher.java:74)}}
{{        at 
io.javaoperatorsdk.operator.processing.event.ReconciliationDispatcher.handleExecution(ReconciliationDispatcher.java:50)}}
{{        at 
io.javaoperatorsdk.operator.processing.event.EventProcessor$ControllerExecution.run(EventProcessor.java:349)}}
{{        at 
java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source)}}
{{        at 
java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source)}}
{{        at java.base/java.lang.Thread.run(Unknown Source)}}
{{Caused by: java.lang.IllegalArgumentException: Only "local" is supported as 
schema for application mode. This assumes that the jar is located in the image, 
not the Flink client. An example of such path is: 
local:///opt/flink/examples/streaming/WindowJoin.jar}}
{{        at 
org.apache.flink.kubernetes.utils.KubernetesUtils.lambda$checkJarFileForApplicationMode$2(KubernetesUtils.java:407)}}
{{        at 
org.apache.flink.util.function.FunctionUtils.lambda$uncheckedFunction$2(FunctionUtils.java:73)}}
{{        at java.base/java.util.stream.ReferencePipeline$3$1.accept(Unknown 
Source)}}
{{        at java.base/java.util.Collections$2.tryAdvance(Unknown Source)}}
{{        at java.base/java.util.Collections$2.forEachRemaining(Unknown 
Source)}}
{{        at java.base/java.util.stream.AbstractPipeline.copyInto(Unknown 
Source)}}
{{        at 
java.base/java.util.stream.AbstractPipeline.wrapAndCopyInto(Unknown Source)}}
{{        at 
java.base/java.util.stream.ReduceOps$ReduceOp.evaluateSequential(Unknown 
Source)}}
{{        at java.base/java.util.stream.AbstractPipeline.evaluate(Unknown 
Source)}}
{{        at java.base/java.util.stream.ReferencePipeline.collect(Unknown 
Source)}}
{{        at 
org.apache.flink.kubernetes.utils.KubernetesUtils.checkJarFileForApplicationMode(KubernetesUtils.java:412)}}
{{        at 
org.apache.flink.kubernetes.KubernetesClusterDescriptor.deployApplicationCluster(KubernetesClusterDescriptor.java:206)}}
{{        at 
org.apache.flink.client.deployment.application.cli.ApplicationClusterDeployer.run(ApplicationClusterDeployer.java:67)}}
{{        at 
org.apache.flink.kubernetes.operator.service.FlinkService.submitApplicationCluster(FlinkService.java:163)}}
{{        at 
org.apache.flink.kubernetes.operator.reconciler.deployment.ApplicationReconciler.deployFlinkJob(ApplicationReconciler.java:283)}}
{{        at 
org.apache.flink.kubernetes.operator.reconciler.deployment.ApplicationReconciler.reconcile(ApplicationReconciler.java:83)}}
{{        at 
org.apache.flink.kubernetes.operator.reconciler.deployment.ApplicationReconciler.reconcile(ApplicationReconciler.java:58)}}
{{        at 
org.apache.flink.kubernetes.operator.controller.FlinkDeploymentController.reconcile(FlinkDeploymentController.java:126)}}
{{        ... 13 more}}
{{2022-06-01 04:36:10,073 i.j.o.p.e.EventProcessor    

[jira] [Created] (FLINK-27889) Error when the LastReconciledSpec is null

2022-06-02 Thread Hector Miuler Malpica Gallegos (Jira)
Hector Miuler Malpica Gallegos created FLINK-27889:
--

 Summary: Error when the LastReconciledSpec is null
 Key: FLINK-27889
 URL: https://issues.apache.org/jira/browse/FLINK-27889
 Project: Flink
  Issue Type: Bug
  Components: Kubernetes Operator
Reporter: Hector Miuler Malpica Gallegos


My FlinkDeployment was left with errors when it could not start correctly, with the 
following message:

 

{{2022-06-01 04:36:10,070 o.a.f.k.o.r.ReconciliationUtils [WARN 
][flink-wape-02/migration-cosmosdb-wape] Attempt count: 5, last attempt: true}}
{{2022-06-01 04:36:10,072 i.j.o.p.e.ReconciliationDispatcher 
[ERROR][flink-wape-02/migration-cosmosdb-wape] Error during event processing 
ExecutionScope\{ resource id: CustomResourceID{name='migration-cosmosdb-wape', 
namespace='flink-wape-02'}, version: null} failed.}}
{{org.apache.flink.kubernetes.operator.exception.ReconciliationException: 
java.lang.IllegalArgumentException: Only "local" is supported as schema for 
application mode. This assumes that the jar is located in the image, not the 
Flink client. An example of such path is: 
local:///opt/flink/examples/streaming/WindowJoin.jar}}
{{        at 
org.apache.flink.kubernetes.operator.controller.FlinkDeploymentController.reconcile(FlinkDeploymentController.java:130)}}
{{        at 
org.apache.flink.kubernetes.operator.controller.FlinkDeploymentController.reconcile(FlinkDeploymentController.java:59)}}
{{        at 
io.javaoperatorsdk.operator.processing.Controller$2.execute(Controller.java:101)}}
{{        at 
io.javaoperatorsdk.operator.processing.Controller$2.execute(Controller.java:76)}}
{{        at 
io.javaoperatorsdk.operator.api.monitoring.Metrics.timeControllerExecution(Metrics.java:34)}}
{{        at 
io.javaoperatorsdk.operator.processing.Controller.reconcile(Controller.java:75)}}
{{        at 
io.javaoperatorsdk.operator.processing.event.ReconciliationDispatcher.reconcileExecution(ReconciliationDispatcher.java:143)}}
{{        at 
io.javaoperatorsdk.operator.processing.event.ReconciliationDispatcher.handleReconcile(ReconciliationDispatcher.java:109)}}
{{        at 
io.javaoperatorsdk.operator.processing.event.ReconciliationDispatcher.handleDispatch(ReconciliationDispatcher.java:74)}}
{{        at 
io.javaoperatorsdk.operator.processing.event.ReconciliationDispatcher.handleExecution(ReconciliationDispatcher.java:50)}}
{{        at 
io.javaoperatorsdk.operator.processing.event.EventProcessor$ControllerExecution.run(EventProcessor.java:349)}}
{{        at 
java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source)}}
{{        at 
java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source)}}
{{        at java.base/java.lang.Thread.run(Unknown Source)}}
{{Caused by: java.lang.IllegalArgumentException: Only "local" is supported as 
schema for application mode. This assumes that the jar is located in the image, 
not the Flink client. An example of such path is: 
local:///opt/flink/examples/streaming/WindowJoin.jar}}
{{        at 
org.apache.flink.kubernetes.utils.KubernetesUtils.lambda$checkJarFileForApplicationMode$2(KubernetesUtils.java:407)}}
{{        at 
org.apache.flink.util.function.FunctionUtils.lambda$uncheckedFunction$2(FunctionUtils.java:73)}}
{{        at java.base/java.util.stream.ReferencePipeline$3$1.accept(Unknown 
Source)}}
{{        at java.base/java.util.Collections$2.tryAdvance(Unknown Source)}}
{{        at java.base/java.util.Collections$2.forEachRemaining(Unknown 
Source)}}
{{        at java.base/java.util.stream.AbstractPipeline.copyInto(Unknown 
Source)}}
{{        at 
java.base/java.util.stream.AbstractPipeline.wrapAndCopyInto(Unknown Source)}}
{{        at 
java.base/java.util.stream.ReduceOps$ReduceOp.evaluateSequential(Unknown 
Source)}}
{{        at java.base/java.util.stream.AbstractPipeline.evaluate(Unknown 
Source)}}
{{        at java.base/java.util.stream.ReferencePipeline.collect(Unknown 
Source)}}
{{        at 
org.apache.flink.kubernetes.utils.KubernetesUtils.checkJarFileForApplicationMode(KubernetesUtils.java:412)}}
{{        at 
org.apache.flink.kubernetes.KubernetesClusterDescriptor.deployApplicationCluster(KubernetesClusterDescriptor.java:206)}}
{{        at 
org.apache.flink.client.deployment.application.cli.ApplicationClusterDeployer.run(ApplicationClusterDeployer.java:67)}}
{{        at 
org.apache.flink.kubernetes.operator.service.FlinkService.submitApplicationCluster(FlinkService.java:163)}}
{{        at 
org.apache.flink.kubernetes.operator.reconciler.deployment.ApplicationReconciler.deployFlinkJob(ApplicationReconciler.java:283)}}
{{        at 
org.apache.flink.kubernetes.operator.reconciler.deployment.ApplicationReconciler.reconcile(ApplicationReconciler.java:83)}}
{{        at 
org.apache.flink.kubernetes.operator.reconciler.deployment.ApplicationReconciler.reconcile(Applicati

[jira] [Created] (FLINK-27889) Error when the LastReconciledSpec is null

2022-06-02 Thread Hector Miuler Malpica Gallegos (Jira)
Hector Miuler Malpica Gallegos created FLINK-27889:
--

 Summary: Error when the LastReconciledSpec is null
 Key: FLINK-27889
 URL: https://issues.apache.org/jira/browse/FLINK-27889
 Project: Flink
  Issue Type: Bug
  Components: Kubernetes Operator
Reporter: Hector Miuler Malpica Gallegos


My FlinkDeployment was left with errors when it could not start correctly, with the 
following message:

 

{{2022-06-01 04:36:10,070 o.a.f.k.o.r.ReconciliationUtils [WARN 
][flink-wape-02/migration-cosmosdb-wape] Attempt count: 5, last attempt: true}}
{{2022-06-01 04:36:10,072 i.j.o.p.e.ReconciliationDispatcher 
[ERROR][flink-wape-02/migration-cosmosdb-wape] Error during event processing 
ExecutionScope\{ resource id: CustomResourceID{name='migration-cosmosdb-wape', 
namespace='flink-wape-02'}, version: null} failed.}}
{{org.apache.flink.kubernetes.operator.exception.ReconciliationException: 
java.lang.IllegalArgumentException: Only "local" is supported as schema for 
application mode. This assumes that the jar is located in the image, not the 
Flink client. An example of such path is: 
local:///opt/flink/examples/streaming/WindowJoin.jar}}
{{        at 
org.apache.flink.kubernetes.operator.controller.FlinkDeploymentController.reconcile(FlinkDeploymentController.java:130)}}
{{        at 
org.apache.flink.kubernetes.operator.controller.FlinkDeploymentController.reconcile(FlinkDeploymentController.java:59)}}
{{        at 
io.javaoperatorsdk.operator.processing.Controller$2.execute(Controller.java:101)}}
{{        at 
io.javaoperatorsdk.operator.processing.Controller$2.execute(Controller.java:76)}}
{{        at 
io.javaoperatorsdk.operator.api.monitoring.Metrics.timeControllerExecution(Metrics.java:34)}}
{{        at 
io.javaoperatorsdk.operator.processing.Controller.reconcile(Controller.java:75)}}
{{        at 
io.javaoperatorsdk.operator.processing.event.ReconciliationDispatcher.reconcileExecution(ReconciliationDispatcher.java:143)}}
{{        at 
io.javaoperatorsdk.operator.processing.event.ReconciliationDispatcher.handleReconcile(ReconciliationDispatcher.java:109)}}
{{        at 
io.javaoperatorsdk.operator.processing.event.ReconciliationDispatcher.handleDispatch(ReconciliationDispatcher.java:74)}}
{{        at 
io.javaoperatorsdk.operator.processing.event.ReconciliationDispatcher.handleExecution(ReconciliationDispatcher.java:50)}}
{{        at 
io.javaoperatorsdk.operator.processing.event.EventProcessor$ControllerExecution.run(EventProcessor.java:349)}}
{{        at 
java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source)}}
{{        at 
java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source)}}
{{        at java.base/java.lang.Thread.run(Unknown Source)}}
{{Caused by: java.lang.IllegalArgumentException: Only "local" is supported as 
schema for application mode. This assumes that the jar is located in the image, 
not the Flink client. An example of such path is: 
local:///opt/flink/examples/streaming/WindowJoin.jar}}
{{        at 
org.apache.flink.kubernetes.utils.KubernetesUtils.lambda$checkJarFileForApplicationMode$2(KubernetesUtils.java:407)}}
{{        at 
org.apache.flink.util.function.FunctionUtils.lambda$uncheckedFunction$2(FunctionUtils.java:73)}}
{{        at java.base/java.util.stream.ReferencePipeline$3$1.accept(Unknown 
Source)}}
{{        at java.base/java.util.Collections$2.tryAdvance(Unknown Source)}}
{{        at java.base/java.util.Collections$2.forEachRemaining(Unknown 
Source)}}
{{        at java.base/java.util.stream.AbstractPipeline.copyInto(Unknown 
Source)}}
{{        at 
java.base/java.util.stream.AbstractPipeline.wrapAndCopyInto(Unknown Source)}}
{{        at 
java.base/java.util.stream.ReduceOps$ReduceOp.evaluateSequential(Unknown 
Source)}}
{{        at java.base/java.util.stream.AbstractPipeline.evaluate(Unknown 
Source)}}
{{        at java.base/java.util.stream.ReferencePipeline.collect(Unknown 
Source)}}
{{        at 
org.apache.flink.kubernetes.utils.KubernetesUtils.checkJarFileForApplicationMode(KubernetesUtils.java:412)}}
{{        at 
org.apache.flink.kubernetes.KubernetesClusterDescriptor.deployApplicationCluster(KubernetesClusterDescriptor.java:206)}}
{{        at 
org.apache.flink.client.deployment.application.cli.ApplicationClusterDeployer.run(ApplicationClusterDeployer.java:67)}}
{{        at 
org.apache.flink.kubernetes.operator.service.FlinkService.submitApplicationCluster(FlinkService.java:163)}}
{{        at 
org.apache.flink.kubernetes.operator.reconciler.deployment.ApplicationReconciler.deployFlinkJob(ApplicationReconciler.java:283)}}
{{        at 
org.apache.flink.kubernetes.operator.reconciler.deployment.ApplicationReconciler.reconcile(ApplicationReconciler.java:83)}}
{{        at 
org.apache.flink.kubernetes.operator.reconciler.deployment.ApplicationReconciler.reconcile(Applicati
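
The failure above comes from the application-mode jar check: the job jar has to be 
already present inside the Flink image and be referenced with a local:// path such as 
local:///opt/flink/examples/streaming/WindowJoin.jar. The snippet below is only a 
minimal sketch of that kind of scheme validation, with an invented class and method 
name; it is not Flink's actual KubernetesUtils implementation.

```
import java.net.URI;
import java.util.List;

public class LocalSchemeCheckSketch {

    // Rejects any jar URI whose scheme is not "local", mirroring the
    // IllegalArgumentException message quoted in the stack trace above.
    static void checkJarsAreLocal(List<String> jarUris) {
        for (String jar : jarUris) {
            String scheme = URI.create(jar).getScheme();
            if (!"local".equals(scheme)) {
                throw new IllegalArgumentException(
                        "Only \"local\" is supported as schema for application mode. "
                                + "Offending URI: " + jar);
            }
        }
    }

    public static void main(String[] args) {
        // Passes: the jar lives inside the container image.
        checkJarsAreLocal(List.of("local:///opt/flink/examples/streaming/WindowJoin.jar"));
        // Throws: a remote or client-side jar is rejected in application mode.
        checkJarsAreLocal(List.of("https://example.com/job.jar"));
    }
}
```

In practice this means the FlinkDeployment's job jar reference has to point at a path 
that exists inside the image rather than at an HTTP or client-local file URL.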

Re: Jira -> GitHub Issues Migration (This Friday)

2022-06-02 Thread Hector Miuler Malpica Gallegos
Hahaha, excuse me! I was thinking of the Flink project when I read the last
email; I do not know why.


*Hector Miuler Malpica Gallegos <http://www.miuler.com>*



On Thu, Jun 2, 2022 at 18:04, Danny McCormick (dannymccorm...@google.com)
wrote:

> Thanks for calling that out Hector - the migration will only migrate
> issues from the Beam project (that one is from the Flink project), so I
> don't think it should be an issue.
>
> Thanks,
> Danny
>
> On Thu, Jun 2, 2022 at 6:38 PM Hector Miuler Malpica Gallegos <
> miu...@gmail.com> wrote:
>
>> Does this migration include the issues of the `Kubernetes Operator`
>> component, like this issue FLINK-27820
>> <https://issues.apache.org/jira/browse/FLINK-27820>? These issues
>> correspond to the repository
>> https://github.com/apache/flink-kubernetes-operator, so keep it in mind.
>>
>>
>> *Hector Miuler Malpica Gallegos <http://www.miuler.com>*
>>
>>
>>
>> On Thu, Jun 2, 2022 at 12:33, Danny McCormick (
>> dannymccorm...@google.com) wrote:
>>
>>> Given the consensus here, I updated the tool to do this. This means that
>>> we won't update the JIRAs to be read-only until after the migration is
>>> complete. I'll rerun the tool if any extra jiras come in during the
>>> intervening period. The tool will also still write the mapping to the file
>>> in case there are unforeseen issues so that we can backfill if needed.
>>>
>>> Thanks for the suggestion and followup Brian, Ahmet, and Alexey.
>>>
>>> On Thu, Jun 2, 2022 at 12:16 PM Alexey Romanenko <
>>> aromanenko@gmail.com> wrote:
>>>
>>>> +1 That would be very helpful for mapping!
>>>>
>>>> On 2 Jun 2022, at 17:48, Ahmet Altay  wrote:
>>>>
>>>> Is it possible to add comments on the JIRAs with a link to the new
>>>> corresponding github issue?
>>>>
>>>> On Thu, Jun 2, 2022 at 8:47 AM Danny McCormick <
>>>> dannymccorm...@google.com> wrote:
>>>>
>>>>> Thanks for the feedback, I agree it would be good to keep that option
>>>>> open - I updated the tool to write those to a file when we create an 
>>>>> issue.
>>>>> I'll share that after the migration.
>>>>>
>>>>> Thanks,
>>>>> Danny
>>>>>
>>>>> On Wed, Jun 1, 2022 at 7:03 PM Brian Hulette 
>>>>> wrote:
>>>>>
>>>>>> Thanks Danny. Regarding links to GitHub issues, if we could at least
>>>>>> save off a record of jira <-> issue mappings we could look at adding the
>>>>>> links later. I think it would be nice to have those links so that anyone
>>>>>> landing in a jira through a search or an old link can quickly find the
>>>>>> current ticket, but I don't think that needs to block the migration.
>>>>>>
>>>>>> On Wed, Jun 1, 2022 at 7:05 AM Danny McCormick <
>>>>>> dannymccorm...@google.com> wrote:
>>>>>>
>>>>>>> Hey Brian,
>>>>>>>
>>>>>>> 1. Right now, the plan is to (1) turn on the issues tab, (2) make
>>>>>>> the JIRA read only, (3) run the migration tool. Since the migration tool
>>>>>>> won't be run until after Jiras are read only, there shouldn't be issues
>>>>>>> with making sure everything gets captured.
>>>>>>> 2. That current ordering does mean it's difficult to add a link to
>>>>>>> the newly created Issue, and I hadn't built in that feature. With that
>>>>>>> said, I will ask Infra if they're able to put up a banner redirecting
>>>>>>> people to GitHub for the Beam project - that should hopefully minimize 
>>>>>>> some
>>>>>>> of the issues - and I'll also look into updating the tool to do that in
>>>>>>> case the banner isn't doable. I'm also planning on doing a few passes to
>>>>>>> update our docs and code comments from Jiras to issues once the 
>>>>>>> migration
>>>>>>> is done.
>>>>>>>
>>>>>>> Thanks,
>>>>>>> Danny
>>>>>>>
>>>>>>> On Tue, May 31, 2022 at 8:09 PM Brian Hulette 
>>>>>>> wrote:
>>>>>>>
>>>>>>>> Thanks Danny, it's great to

Re: Jira -> GitHub Issues Migration (This Friday)

2022-06-02 Thread Hector Miuler Malpica Gallegos
Does this migration include the issues of the `Kubernetes Operator` component,
like this issue FLINK-27820
<https://issues.apache.org/jira/browse/FLINK-27820>? These issues
correspond to the repository
https://github.com/apache/flink-kubernetes-operator, so keep it in mind.


*Hector Miuler Malpica Gallegos <http://www.miuler.com>*



On Thu, Jun 2, 2022 at 12:33, Danny McCormick (dannymccorm...@google.com)
wrote:

> Given the consensus here, I updated the tool to do this. This means that
> we won't update the JIRAs to be read-only until after the migration is
> complete. I'll rerun the tool if any extra jiras come in during the
> intervening period. The tool will also still write the mapping to the file
> in case there are unforeseen issues so that we can backfill if needed.
>
> Thanks for the suggestion and followup Brian, Ahmet, and Alexey.
>
> On Thu, Jun 2, 2022 at 12:16 PM Alexey Romanenko 
> wrote:
>
>> +1 That would be very helpful for mapping!
>>
>> On 2 Jun 2022, at 17:48, Ahmet Altay  wrote:
>>
>> Is it possible to add comments on the JIRAs with a link to the new
>> corresponding github issue?
>>
>> On Thu, Jun 2, 2022 at 8:47 AM Danny McCormick 
>> wrote:
>>
>>> Thanks for the feedback, I agree it would be good to keep that option
>>> open - I updated the tool to write those to a file when we create an issue.
>>> I'll share that after the migration.
>>>
>>> Thanks,
>>> Danny
>>>
>>> On Wed, Jun 1, 2022 at 7:03 PM Brian Hulette 
>>> wrote:
>>>
>>>> Thanks Danny. Regarding links to GitHub issues, if we could at least
>>>> save off a record of jira <-> issue mappings we could look at adding the
>>>> links later. I think it would be nice to have those links so that anyone
>>>> landing in a jira through a search or an old link can quickly find the
>>>> current ticket, but I don't think that needs to block the migration.
>>>>
>>>> On Wed, Jun 1, 2022 at 7:05 AM Danny McCormick <
>>>> dannymccorm...@google.com> wrote:
>>>>
>>>>> Hey Brian,
>>>>>
>>>>> 1. Right now, the plan is to (1) turn on the issues tab, (2) make the
>>>>> JIRA read only, (3) run the migration tool. Since the migration tool won't
>>>>> be run until after Jiras are read only, there shouldn't be issues with
>>>>> making sure everything gets captured.
>>>>> 2. That current ordering does mean it's difficult to add a link to the
>>>>> newly created Issue, and I hadn't built in that feature. With that said, I
>>>>> will ask Infra if they're able to put up a banner redirecting people to
>>>>> GitHub for the Beam project - that should hopefully minimize some of the
>>>>> issues - and I'll also look into updating the tool to do that in case the
>>>>> banner isn't doable. I'm also planning on doing a few passes to update our
>>>>> docs and code comments from Jiras to issues once the migration is done.
>>>>>
>>>>> Thanks,
>>>>> Danny
>>>>>
>>>>> On Tue, May 31, 2022 at 8:09 PM Brian Hulette 
>>>>> wrote:
>>>>>
>>>>>> Thanks Danny, it's great to see this happening!
>>>>>>
>>>>>> A couple of questions:
>>>>>> - Is there something we can do to remind people creating a jira that
>>>>>> they should create a bug instead (e.g. a template)? If not I suppose we 
>>>>>> can
>>>>>> just re-run the migration tool a few times up until jira creation is
>>>>>> disabled to make sure everything is captured.
>>>>>> - Will your migration tooling comment on the original jira with a
>>>>>> link to the new issue in GitHub?
>>>>>>
>>>>>> Brian
>>>>>>
>>>>>> On Tue, May 31, 2022 at 9:57 AM Robert Bradshaw 
>>>>>> wrote:
>>>>>>
>>>>>>> Thanks for finally making this happen.
>>>>>>>
>>>>>>> On Tue, May 31, 2022 at 7:18 AM Sachin Agarwal 
>>>>>>> wrote:
>>>>>>> >
>>>>>>> > Thank you Danny! This will help us a lot, especially with new
>>>>>>> contributors. Thanks so much!
>>>>>>> >
>>>>>>> > On Tue, May 31, 2022 at 4:10 AM Danny McCormick <
>>>>>>> dannymccorm...@google.com> wrote:
>>>>>>> >>
>>>>>>> >> Hey folks, this is a reminder that we will be migrating from Jira
>>>>>>> to GitHub Issues this Friday (6/4). A few key details to keep in mind:
>>>>>>> >>
>>>>>>> >> 1. All active Jiras will get automatically migrated and assigned
>>>>>>> over the course of the weekend.
>>>>>>> >> 2. Starting Friday (once the the Issues tab is open), please stop
>>>>>>> creating Jiras and start creating Issues instead. You should also 
>>>>>>> reference
>>>>>>> issues in your PRs and commits instead of Jiras. The Jira creation flow
>>>>>>> will eventually be disabled.
>>>>>>> >> 3. If you encounter any issues that can't be resolved by looking
>>>>>>> at the doc updates, please let me know and/or follow up in this thread.
>>>>>>> >>
>>>>>>> >> I'm looking forward to seeing how Issues can minimize friction
>>>>>>> for new contributors and I'm hopeful that this will be a smooth 
>>>>>>> transition.
>>>>>>> If you have any last minute concerns let me know. For more context, see 
>>>>>>> the
>>>>>>> original thread on this topic.
>>>>>>> >>
>>>>>>> >> Thanks,
>>>>>>> >> Danny
>>>>>>>
>>>>>>
>>


[krita] [Bug 454599] New: Feature request: perspective concentric ellipse

2022-05-30 Thread Hector
https://bugs.kde.org/show_bug.cgi?id=454599

Bug ID: 454599
   Summary: Feature request: perspective concentric ellipse
   Product: krita
   Version: unspecified
  Platform: unspecified
OS: Unspecified
Status: REPORTED
  Severity: wishlist
  Priority: NOR
 Component: Tool/Assistants
  Assignee: krita-bugs-n...@kde.org
  Reporter: misha.bossm...@yandex.ru
  Target Milestone: ---

Greetings. I am the author of the previous topic requesting a perspective
ellipse assistant. I did not describe correctly how it should work in theory,
in the context of algorithms and formulas, but in the end you did everything
right, and I'm very happy about it.
There is one detail left, which I apparently forgot to mention in the previous
request: we also need a second tool where the ellipse is concentric, like the
already existing Concentric Ellipse assistant but drawn in perspective. I don't
know whether you are aware of this or had already planned it, so I am writing
this request. It's my own fault for not posting it in the previous thread. In
drawing, it is really the Perspective Concentric Ellipse that matters.

Previous request: https://bugs.kde.org/show_bug.cgi?id=405643

-- 
You are receiving this mail because:
You are watching all bug changes.

[jira] [Updated] (HADOOP-18264) ZKDelegationTokenSecretManager should handle duplicate Token sequenceNums

2022-05-27 Thread Hector Sandoval Chaverri (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-18264?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hector Sandoval Chaverri updated HADOOP-18264:
--
Priority: Minor  (was: Major)

> ZKDelegationTokenSecretManager should handle duplicate Token sequenceNums
> -
>
> Key: HADOOP-18264
> URL: https://issues.apache.org/jira/browse/HADOOP-18264
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>    Reporter: Hector Sandoval Chaverri
>Priority: Minor
>
> The ZKDelegationTokenSecretManager relies on the TokenIdentifier 
> sequenceNumber to identify each Token in the ZK Store. It's possible for 
> multiple TokenIdentifiers to share the same sequenceNumber, as this is an int 
> that can overflow. 
> The AbstractDelegationTokenSecretManager uses a Map<TokenIdentifier, 
> DelegationTokenInformation> so all properties in the TokenIdentifier must 
> match. ZKDelegationTokenSecretManager should follow the same logic.



--
This message was sent by Atlassian Jira
(v8.20.7#820007)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Created] (HADOOP-18264) ZKDelegationTokenSecretManager should handle duplicate Token sequenceNums

2022-05-27 Thread Hector Sandoval Chaverri (Jira)
Hector Sandoval Chaverri created HADOOP-18264:
-

 Summary: ZKDelegationTokenSecretManager should handle duplicate 
Token sequenceNums
 Key: HADOOP-18264
 URL: https://issues.apache.org/jira/browse/HADOOP-18264
 Project: Hadoop Common
  Issue Type: Bug
  Components: security
Reporter: Hector Sandoval Chaverri


The ZKDelegationTokenSecretManager relies on the TokenIdentifier sequenceNumber 
to identify each Token in the ZK Store. It's possible for multiple 
TokenIdentifiers to share the same sequenceNumber, as this is an int that can 
overflow. 

The AbstractDelegationTokenSecretManager uses a Map<TokenIdentifier, 
DelegationTokenInformation> so all properties in the TokenIdentifier must 
match. ZKDelegationTokenSecretManager should follow the same logic.
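
To make the keying point concrete, here is a short, self-contained Java sketch; it is 
not Hadoop code, and the TokenId class below is an invented stand-in rather than the 
real TokenIdentifier. It shows how a store keyed only by the int sequence number 
silently collapses two distinct tokens once the counter wraps, while a map keyed by 
the full identifier keeps both entries.

```
import java.util.HashMap;
import java.util.Map;
import java.util.Objects;

public class TokenKeyingSketch {
    // Simplified stand-in for a delegation token identifier.
    static final class TokenId {
        final String owner;
        final long issueDate;
        final int sequenceNumber;

        TokenId(String owner, long issueDate, int sequenceNumber) {
            this.owner = owner;
            this.issueDate = issueDate;
            this.sequenceNumber = sequenceNumber;
        }

        @Override public boolean equals(Object o) {
            if (!(o instanceof TokenId)) return false;
            TokenId t = (TokenId) o;
            return sequenceNumber == t.sequenceNumber
                    && issueDate == t.issueDate
                    && owner.equals(t.owner);
        }

        @Override public int hashCode() {
            return Objects.hash(owner, issueDate, sequenceNumber);
        }
    }

    public static void main(String[] args) {
        // Two distinct tokens that ended up with the same sequence number
        // after the int counter overflowed and wrapped around.
        TokenId a = new TokenId("alice", 1_000L, 42);
        TokenId b = new TokenId("bob", 2_000L, 42);

        // Keyed by sequence number alone: the second put overwrites the first.
        Map<Integer, String> bySeqNum = new HashMap<>();
        bySeqNum.put(a.sequenceNumber, "info-a");
        bySeqNum.put(b.sequenceNumber, "info-b");

        // Keyed by the full identifier: both tokens remain addressable.
        Map<TokenId, String> byIdentifier = new HashMap<>();
        byIdentifier.put(a, "info-a");
        byIdentifier.put(b, "info-b");

        System.out.println(bySeqNum.size());     // prints 1
        System.out.println(byIdentifier.size()); // prints 2
    }
}
```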



--
This message was sent by Atlassian Jira
(v8.20.7#820007)

-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org



[jira] [Created] (HADOOP-18264) ZKDelegationTokenSecretManager should handle duplicate Token sequenceNums

2022-05-27 Thread Hector Sandoval Chaverri (Jira)
Hector Sandoval Chaverri created HADOOP-18264:
-

 Summary: ZKDelegationTokenSecretManager should handle duplicate 
Token sequenceNums
 Key: HADOOP-18264
 URL: https://issues.apache.org/jira/browse/HADOOP-18264
 Project: Hadoop Common
  Issue Type: Bug
  Components: security
Reporter: Hector Sandoval Chaverri


The ZKDelegationTokenSecretManager relies on the TokenIdentifier sequenceNumber 
to identify each Token in the ZK Store. It's possible for multiple 
TokenIdentifiers to share the same sequenceNumber, as this is an int that can 
overflow. 

The AbstractDelegationTokenSecretManager uses a Map<TokenIdentifier, 
DelegationTokenInformation> so all properties in the TokenIdentifier must 
match. ZKDelegationTokenSecretManager should follow the same logic.



--
This message was sent by Atlassian Jira
(v8.20.7#820007)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (FLINK-27821) Cannot delete flinkdeployment when the pod and deployment deleted manually

2022-05-27 Thread Hector Miuler Malpica Gallegos (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-27821?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17542930#comment-17542930
 ] 

Hector Miuler Malpica Gallegos commented on FLINK-27821:


I deleted the flinksessionjob and then forced the deletion of the 
flinkdeployment, and at some point it could be deleted. How strange

> Cannot delete flinkdeployment when the pod and deployment deleted manually
> --
>
> Key: FLINK-27821
> URL: https://issues.apache.org/jira/browse/FLINK-27821
> Project: Flink
>  Issue Type: Bug
>  Components: Kubernetes Operator
>Affects Versions: kubernetes-operator-1.1.0
>Reporter: Hector Miuler Malpica Gallegos
>Priority: Major
>
> My operator was installed with the following command:
>  
> {{git clone g...@github.com:apache/flink-kubernetes-operator.git}}
> {{git checkout 207b17b}}
> {{cd flink-kubernetes-operator}}
> {{helm --debug upgrade -i  flink-kubernetes-operator 
> helm/flink-kubernetes-operator --set 
> image.repository=ghcr.io/apache/flink-kubernetes-operator --set 
> image.tag=207b17b}}
>  
> Then I created a flinkDeployment and a flinkSessionJob, then I deleted the 
> deployment that was generated by the flinkDeployment, then reinstalled the 
> operator, and finally I wanted to delete the flinkdeployment
>  
> kubectl logs -f pod/flink-kubernetes-operator-5cf66cbbcb-bpl9p
>  
> {{2022-05-27 13:40:22,027 o.a.f.k.o.c.FlinkDeploymentController [INFO 
> ][flink-system/migration] Deleting FlinkDeployment}}
> {{2022-05-27 13:40:34,047 o.a.f.s.n.i.n.c.AbstractChannel [WARN ] 
> Force-closing a channel whose registration task was not accepted by an event 
> loop: [id: 0xb2062900]}}
> {{java.util.concurrent.RejectedExecutionException: event executor terminated}}
> {{        at 
> org.apache.flink.shaded.netty4.io.netty.util.concurrent.SingleThreadEventExecutor.reject(SingleThreadEventExecutor.java:923)}}
> {{        at 
> org.apache.flink.shaded.netty4.io.netty.util.concurrent.SingleThreadEventExecutor.offerTask(SingleThreadEventExecutor.java:350)}}
> {{        at 
> org.apache.flink.shaded.netty4.io.netty.util.concurrent.SingleThreadEventExecutor.addTask(SingleThreadEventExecutor.java:343)}}
> {{        at 
> org.apache.flink.shaded.netty4.io.netty.util.concurrent.SingleThreadEventExecutor.execute(SingleThreadEventExecutor.java:825)}}
> {{        at 
> org.apache.flink.shaded.netty4.io.netty.util.concurrent.SingleThreadEventExecutor.execute(SingleThreadEventExecutor.java:815)}}
> {{        at 
> org.apache.flink.shaded.netty4.io.netty.channel.AbstractChannel$AbstractUnsafe.register(AbstractChannel.java:483)}}
> {{        at 
> org.apache.flink.shaded.netty4.io.netty.channel.SingleThreadEventLoop.register(SingleThreadEventLoop.java:87)}}
> {{        at 
> org.apache.flink.shaded.netty4.io.netty.channel.SingleThreadEventLoop.register(SingleThreadEventLoop.java:81)}}
> {{        at 
> org.apache.flink.shaded.netty4.io.netty.channel.MultithreadEventLoopGroup.register(MultithreadEventLoopGroup.java:86)}}
> {{        at 
> org.apache.flink.shaded.netty4.io.netty.bootstrap.AbstractBootstrap.initAndRegister(AbstractBootstrap.java:323)}}
> {{        at 
> org.apache.flink.shaded.netty4.io.netty.bootstrap.Bootstrap.doResolveAndConnect(Bootstrap.java:155)}}
> {{        at 
> org.apache.flink.shaded.netty4.io.netty.bootstrap.Bootstrap.connect(Bootstrap.java:139)}}
> {{        at 
> org.apache.flink.shaded.netty4.io.netty.bootstrap.Bootstrap.connect(Bootstrap.java:123)}}
> {{        at 
> org.apache.flink.runtime.rest.RestClient.submitRequest(RestClient.java:467)}}
> {{        at 
> org.apache.flink.runtime.rest.RestClient.sendRequest(RestClient.java:390)}}
> {{        at 
> org.apache.flink.runtime.rest.RestClient.sendRequest(RestClient.java:304)}}
> {{        at 
> org.apache.flink.client.program.rest.RestClusterClient.lambda$null$32(RestClusterClient.java:863)}}
> {{        at 
> java.base/java.util.concurrent.CompletableFuture$UniCompose.tryFire(Unknown 
> Source)}}
> {{        at 
> java.base/java.util.concurrent.CompletableFuture.postComplete(Unknown 
> Source)}}
> {{        at 
> java.base/java.util.concurrent.CompletableFuture.postFire(Unknown Source)}}
> {{        at 
> java.base/java.util.concurrent.CompletableFuture$UniApply.tryFire(Unknown 
> Source)}}
> {{        at 
> java.base/java.util.concurrent.CompletableFuture$Completion.run(Unknown 
> Source)}}
> {{        at 
> java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source)}}
> {{        at 
> java.base/java.util.concurren

[jira] [Updated] (FLINK-27821) Cannot delete flinkdeployment when the pod and deployment deleted manually

2022-05-27 Thread Hector Miuler Malpica Gallegos (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-27821?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hector Miuler Malpica Gallegos updated FLINK-27821:
---
Description: 
My operator was installed with the following command:

 

{{git clone g...@github.com:apache/flink-kubernetes-operator.git}}
{{git checkout 207b17b}}
{{cd flink-kubernetes-operator}}
{{helm --debug upgrade -i  flink-kubernetes-operator 
helm/flink-kubernetes-operator --set 
image.repository=ghcr.io/apache/flink-kubernetes-operator --set 
image.tag=207b17b}}

 

Then I created a flinkDeployment and a flinkSessionJob, then I deleted the 
deployment that was generated by the flinkDeployment, then reinstalled the 
operator, and finally I wanted to delete the flinkdeployment

 

kubectl logs -f pod/flink-kubernetes-operator-5cf66cbbcb-bpl9p

 

{{2022-05-27 13:40:22,027 o.a.f.k.o.c.FlinkDeploymentController [INFO 
][flink-system/migration] Deleting FlinkDeployment}}
{{2022-05-27 13:40:34,047 o.a.f.s.n.i.n.c.AbstractChannel [WARN ] Force-closing 
a channel whose registration task was not accepted by an event loop: [id: 
0xb2062900]}}
{{java.util.concurrent.RejectedExecutionException: event executor terminated}}
{{        at 
org.apache.flink.shaded.netty4.io.netty.util.concurrent.SingleThreadEventExecutor.reject(SingleThreadEventExecutor.java:923)}}
{{        at 
org.apache.flink.shaded.netty4.io.netty.util.concurrent.SingleThreadEventExecutor.offerTask(SingleThreadEventExecutor.java:350)}}
{{        at 
org.apache.flink.shaded.netty4.io.netty.util.concurrent.SingleThreadEventExecutor.addTask(SingleThreadEventExecutor.java:343)}}
{{        at 
org.apache.flink.shaded.netty4.io.netty.util.concurrent.SingleThreadEventExecutor.execute(SingleThreadEventExecutor.java:825)}}
{{        at 
org.apache.flink.shaded.netty4.io.netty.util.concurrent.SingleThreadEventExecutor.execute(SingleThreadEventExecutor.java:815)}}
{{        at 
org.apache.flink.shaded.netty4.io.netty.channel.AbstractChannel$AbstractUnsafe.register(AbstractChannel.java:483)}}
{{        at 
org.apache.flink.shaded.netty4.io.netty.channel.SingleThreadEventLoop.register(SingleThreadEventLoop.java:87)}}
{{        at 
org.apache.flink.shaded.netty4.io.netty.channel.SingleThreadEventLoop.register(SingleThreadEventLoop.java:81)}}
{{        at 
org.apache.flink.shaded.netty4.io.netty.channel.MultithreadEventLoopGroup.register(MultithreadEventLoopGroup.java:86)}}
{{        at 
org.apache.flink.shaded.netty4.io.netty.bootstrap.AbstractBootstrap.initAndRegister(AbstractBootstrap.java:323)}}
{{        at 
org.apache.flink.shaded.netty4.io.netty.bootstrap.Bootstrap.doResolveAndConnect(Bootstrap.java:155)}}
{{        at 
org.apache.flink.shaded.netty4.io.netty.bootstrap.Bootstrap.connect(Bootstrap.java:139)}}
{{        at 
org.apache.flink.shaded.netty4.io.netty.bootstrap.Bootstrap.connect(Bootstrap.java:123)}}
{{        at 
org.apache.flink.runtime.rest.RestClient.submitRequest(RestClient.java:467)}}
{{        at 
org.apache.flink.runtime.rest.RestClient.sendRequest(RestClient.java:390)}}
{{        at 
org.apache.flink.runtime.rest.RestClient.sendRequest(RestClient.java:304)}}
{{        at 
org.apache.flink.client.program.rest.RestClusterClient.lambda$null$32(RestClusterClient.java:863)}}
{{        at 
java.base/java.util.concurrent.CompletableFuture$UniCompose.tryFire(Unknown 
Source)}}
{{        at 
java.base/java.util.concurrent.CompletableFuture.postComplete(Unknown Source)}}
{{        at java.base/java.util.concurrent.CompletableFuture.postFire(Unknown 
Source)}}
{{        at 
java.base/java.util.concurrent.CompletableFuture$UniApply.tryFire(Unknown 
Source)}}
{{        at 
java.base/java.util.concurrent.CompletableFuture$Completion.run(Unknown 
Source)}}
{{        at 
java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source)}}
{{        at 
java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source)}}
{{        at java.base/java.lang.Thread.run(Unknown Source)}}
{{2022-05-27 13:40:34,047 o.a.f.s.n.i.n.u.c.D.rejectedExecution [ERROR] Failed 
to submit a listener notification task. Event loop shut down?}}
{{java.util.concurrent.RejectedExecutionException: event executor terminated}}
{{        at 
org.apache.flink.shaded.netty4.io.netty.util.concurrent.SingleThreadEventExecutor.reject(SingleThreadEventExecutor.java:923)}}
{{        at 
org.apache.flink.shaded.netty4.io.netty.util.concurrent.SingleThreadEventExecutor.offerTask(SingleThreadEventExecutor.java:350)}}
{{        at 
org.apache.flink.shaded.netty4.io.netty.util.concurrent.SingleThreadEventExecutor.addTask(SingleThreadEventExecutor.java:343)}}
{{        at 
org.apache.flink.shaded.netty4.io.netty.util.concurrent.SingleThreadEventExecutor.execute(SingleThreadEventExecutor.java:825)}}
{{        at 
org.apache.flink.shaded.netty4.io.netty.util.concurrent.SingleThreadEventExecutor.execute(SingleThreadEventExecutor.java:815

[jira] [Updated] (FLINK-27821) Cannot delete flinkdeployment when the pod and deployment deleted manually

2022-05-27 Thread Hector Miuler Malpica Gallegos (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-27821?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hector Miuler Malpica Gallegos updated FLINK-27821:
---
Description: 
My operator was installed with the following command:

 

{{git clone g...@github.com:apache/flink-kubernetes-operator.git}}

{{git checkout 207b17b}}

{{cd flink-kubernetes-operator}}

{{helm --debug upgrade -i  flink-kubernetes-operator 
helm/flink-kubernetes-operator --set 
image.repository=ghcr.io/apache/flink-kubernetes-operator --set 
image.tag=207b17b}}

 

Then I created a flinkDeployment and a flinkSessionJob, then I deleted the 
deployment that was generated by the flinkDeployment, then reinstalled the 
operator, and finally I wanted to delete the flinkdeployment

 

kubectl logs -f pod/flink-kubernetes-operator-5cf66cbbcb-bpl9p

 

{{2022-05-27 13:40:22,027 o.a.f.k.o.c.FlinkDeploymentController [INFO 
][flink-system/migration] Deleting FlinkDeployment}}
{{2022-05-27 13:40:34,047 o.a.f.s.n.i.n.c.AbstractChannel [WARN ] Force-closing 
a channel whose registration task was not accepted by an event loop: [id: 
0xb2062900]}}
{{java.util.concurrent.RejectedExecutionException: event executor terminated}}
{{        at 
org.apache.flink.shaded.netty4.io.netty.util.concurrent.SingleThreadEventExecutor.reject(SingleThreadEventExecutor.java:923)}}
{{        at 
org.apache.flink.shaded.netty4.io.netty.util.concurrent.SingleThreadEventExecutor.offerTask(SingleThreadEventExecutor.java:350)}}
{{        at 
org.apache.flink.shaded.netty4.io.netty.util.concurrent.SingleThreadEventExecutor.addTask(SingleThreadEventExecutor.java:343)}}
{{        at 
org.apache.flink.shaded.netty4.io.netty.util.concurrent.SingleThreadEventExecutor.execute(SingleThreadEventExecutor.java:825)}}
{{        at 
org.apache.flink.shaded.netty4.io.netty.util.concurrent.SingleThreadEventExecutor.execute(SingleThreadEventExecutor.java:815)}}
{{        at 
org.apache.flink.shaded.netty4.io.netty.channel.AbstractChannel$AbstractUnsafe.register(AbstractChannel.java:483)}}
{{        at 
org.apache.flink.shaded.netty4.io.netty.channel.SingleThreadEventLoop.register(SingleThreadEventLoop.java:87)}}
{{        at 
org.apache.flink.shaded.netty4.io.netty.channel.SingleThreadEventLoop.register(SingleThreadEventLoop.java:81)}}
{{        at 
org.apache.flink.shaded.netty4.io.netty.channel.MultithreadEventLoopGroup.register(MultithreadEventLoopGroup.java:86)}}
{{        at 
org.apache.flink.shaded.netty4.io.netty.bootstrap.AbstractBootstrap.initAndRegister(AbstractBootstrap.java:323)}}
{{        at 
org.apache.flink.shaded.netty4.io.netty.bootstrap.Bootstrap.doResolveAndConnect(Bootstrap.java:155)}}
{{        at 
org.apache.flink.shaded.netty4.io.netty.bootstrap.Bootstrap.connect(Bootstrap.java:139)}}
{{        at 
org.apache.flink.shaded.netty4.io.netty.bootstrap.Bootstrap.connect(Bootstrap.java:123)}}
{{        at 
org.apache.flink.runtime.rest.RestClient.submitRequest(RestClient.java:467)}}
{{        at 
org.apache.flink.runtime.rest.RestClient.sendRequest(RestClient.java:390)}}
{{        at 
org.apache.flink.runtime.rest.RestClient.sendRequest(RestClient.java:304)}}
{{        at 
org.apache.flink.client.program.rest.RestClusterClient.lambda$null$32(RestClusterClient.java:863)}}
{{        at 
java.base/java.util.concurrent.CompletableFuture$UniCompose.tryFire(Unknown 
Source)}}
{{        at 
java.base/java.util.concurrent.CompletableFuture.postComplete(Unknown Source)}}
{{        at java.base/java.util.concurrent.CompletableFuture.postFire(Unknown 
Source)}}
{{        at 
java.base/java.util.concurrent.CompletableFuture$UniApply.tryFire(Unknown 
Source)}}
{{        at 
java.base/java.util.concurrent.CompletableFuture$Completion.run(Unknown 
Source)}}
{{        at 
java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source)}}
{{        at 
java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source)}}
{{        at java.base/java.lang.Thread.run(Unknown Source)}}
{{2022-05-27 13:40:34,047 o.a.f.s.n.i.n.u.c.D.rejectedExecution [ERROR] Failed 
to submit a listener notification task. Event loop shut down?}}
{{java.util.concurrent.RejectedExecutionException: event executor terminated}}
{{        at 
org.apache.flink.shaded.netty4.io.netty.util.concurrent.SingleThreadEventExecutor.reject(SingleThreadEventExecutor.java:923)}}
{{        at 
org.apache.flink.shaded.netty4.io.netty.util.concurrent.SingleThreadEventExecutor.offerTask(SingleThreadEventExecutor.java:350)}}
{{        at 
org.apache.flink.shaded.netty4.io.netty.util.concurrent.SingleThreadEventExecutor.addTask(SingleThreadEventExecutor.java:343)}}
{{        at 
org.apache.flink.shaded.netty4.io.netty.util.concurrent.SingleThreadEventExecutor.execute(SingleThreadEventExecutor.java:825)}}
{{        at 
org.apache.flink.shaded.netty4.io.netty.util.concurrent.SingleThreadEventExecutor.execute(SingleThreadEventExecutor.java:815

[jira] [Updated] (FLINK-27821) Cannot delete flinkdeployment when the pod and deployment deleted manually

2022-05-27 Thread Hector Miuler Malpica Gallegos (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-27821?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hector Miuler Malpica Gallegos updated FLINK-27821:
---
Description: 
My operator was installed with the following command:

 

{{git clone g...@github.com:apache/flink-kubernetes-operator.git }}

{{git checkout 207b17b}}

{{cd flink-kubernetes-operator}}

helm --debug upgrade -i  flink-kubernetes-operator 
helm/flink-kubernetes-operator --set 
image.repository=ghcr.io/apache/flink-kubernetes-operator {{--set 
image.tag=207b17b}}

 

Then I created a flinkDeployment and a flinkSessionJob, then I deleted the 
deployment that was generated by the flinkDeployment, then reinstalled the 
operator, and finally I wanted to delete the flinkdeployment

 

kubectl logs -f pod/flink-kubernetes-operator-5cf66cbbcb-bpl9p

 

{{2022-05-27 13:40:22,027 o.a.f.k.o.c.FlinkDeploymentController [INFO 
][flink-system/migration] Deleting FlinkDeployment}}
{{2022-05-27 13:40:34,047 o.a.f.s.n.i.n.c.AbstractChannel [WARN ] Force-closing 
a channel whose registration task was not accepted by an event loop: [id: 
0xb2062900]}}
{{java.util.concurrent.RejectedExecutionException: event executor terminated}}
{{        at 
org.apache.flink.shaded.netty4.io.netty.util.concurrent.SingleThreadEventExecutor.reject(SingleThreadEventExecutor.java:923)}}
{{        at 
org.apache.flink.shaded.netty4.io.netty.util.concurrent.SingleThreadEventExecutor.offerTask(SingleThreadEventExecutor.java:350)}}
{{        at 
org.apache.flink.shaded.netty4.io.netty.util.concurrent.SingleThreadEventExecutor.addTask(SingleThreadEventExecutor.java:343)}}
{{        at 
org.apache.flink.shaded.netty4.io.netty.util.concurrent.SingleThreadEventExecutor.execute(SingleThreadEventExecutor.java:825)}}
{{        at 
org.apache.flink.shaded.netty4.io.netty.util.concurrent.SingleThreadEventExecutor.execute(SingleThreadEventExecutor.java:815)}}
{{        at 
org.apache.flink.shaded.netty4.io.netty.channel.AbstractChannel$AbstractUnsafe.register(AbstractChannel.java:483)}}
{{        at 
org.apache.flink.shaded.netty4.io.netty.channel.SingleThreadEventLoop.register(SingleThreadEventLoop.java:87)}}
{{        at 
org.apache.flink.shaded.netty4.io.netty.channel.SingleThreadEventLoop.register(SingleThreadEventLoop.java:81)}}
{{        at 
org.apache.flink.shaded.netty4.io.netty.channel.MultithreadEventLoopGroup.register(MultithreadEventLoopGroup.java:86)}}
{{        at 
org.apache.flink.shaded.netty4.io.netty.bootstrap.AbstractBootstrap.initAndRegister(AbstractBootstrap.java:323)}}
{{        at 
org.apache.flink.shaded.netty4.io.netty.bootstrap.Bootstrap.doResolveAndConnect(Bootstrap.java:155)}}
{{        at 
org.apache.flink.shaded.netty4.io.netty.bootstrap.Bootstrap.connect(Bootstrap.java:139)}}
{{        at 
org.apache.flink.shaded.netty4.io.netty.bootstrap.Bootstrap.connect(Bootstrap.java:123)}}
{{        at 
org.apache.flink.runtime.rest.RestClient.submitRequest(RestClient.java:467)}}
{{        at 
org.apache.flink.runtime.rest.RestClient.sendRequest(RestClient.java:390)}}
{{        at 
org.apache.flink.runtime.rest.RestClient.sendRequest(RestClient.java:304)}}
{{        at 
org.apache.flink.client.program.rest.RestClusterClient.lambda$null$32(RestClusterClient.java:863)}}
{{        at 
java.base/java.util.concurrent.CompletableFuture$UniCompose.tryFire(Unknown 
Source)}}
{{        at 
java.base/java.util.concurrent.CompletableFuture.postComplete(Unknown Source)}}
{{        at java.base/java.util.concurrent.CompletableFuture.postFire(Unknown 
Source)}}
{{        at 
java.base/java.util.concurrent.CompletableFuture$UniApply.tryFire(Unknown 
Source)}}
{{        at 
java.base/java.util.concurrent.CompletableFuture$Completion.run(Unknown 
Source)}}
{{        at 
java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source)}}
{{        at 
java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source)}}
{{        at java.base/java.lang.Thread.run(Unknown Source)}}
{{2022-05-27 13:40:34,047 o.a.f.s.n.i.n.u.c.D.rejectedExecution [ERROR] Failed 
to submit a listener notification task. Event loop shut down?}}
{{java.util.concurrent.RejectedExecutionException: event executor terminated}}
{{        at 
org.apache.flink.shaded.netty4.io.netty.util.concurrent.SingleThreadEventExecutor.reject(SingleThreadEventExecutor.java:923)}}
{{        at 
org.apache.flink.shaded.netty4.io.netty.util.concurrent.SingleThreadEventExecutor.offerTask(SingleThreadEventExecutor.java:350)}}
{{        at 
org.apache.flink.shaded.netty4.io.netty.util.concurrent.SingleThreadEventExecutor.addTask(SingleThreadEventExecutor.java:343)}}
{{        at 
org.apache.flink.shaded.netty4.io.netty.util.concurrent.SingleThreadEventExecutor.execute(SingleThreadEventExecutor.java:825)}}
{{        at 
org.apache.flink.shaded.netty4.io.netty.util.concurrent.SingleThreadEventExecutor.execute(SingleThreadEventExecutor.java:815

[jira] [Updated] (FLINK-27821) Cannot delete flinkdeployment when the pod and deployment deleted manually

2022-05-27 Thread Hector Miuler Malpica Gallegos (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-27821?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hector Miuler Malpica Gallegos updated FLINK-27821:
---
Description: 
My operator was installed with the following command:

 

{{git clone g...@github.com:apache/flink-kubernetes-operator.git}}
{{git checkout 207b17b}}
{{cd flink-kubernetes-operator}}
{{helm --debug upgrade -i \}}
{{           flink-kubernetes-operator helm/flink-kubernetes-operator \}}
{{           --set image.repository=ghcr.io/apache/flink-kubernetes-operator \}}
{{           --set image.tag=207b17b}}

 

Then I created a flinkDeployment and a flinkSessionJob, then I deleted the 
deployment that was generated by the flinkDeployment, then reinstalled the 
operator, and finally I wanted to delete the flinkdeployment

 

kubectl logs -f pod/flink-kubernetes-operator-5cf66cbbcb-bpl9p

 

{{2022-05-27 13:40:22,027 o.a.f.k.o.c.FlinkDeploymentController [INFO 
][flink-system/migration] Deleting FlinkDeployment}}
{{2022-05-27 13:40:34,047 o.a.f.s.n.i.n.c.AbstractChannel [WARN ] Force-closing 
a channel whose registration task was not accepted by an event loop: [id: 
0xb2062900]}}
{{java.util.concurrent.RejectedExecutionException: event executor terminated}}
{{        at 
org.apache.flink.shaded.netty4.io.netty.util.concurrent.SingleThreadEventExecutor.reject(SingleThreadEventExecutor.java:923)}}
{{        at 
org.apache.flink.shaded.netty4.io.netty.util.concurrent.SingleThreadEventExecutor.offerTask(SingleThreadEventExecutor.java:350)}}
{{        at 
org.apache.flink.shaded.netty4.io.netty.util.concurrent.SingleThreadEventExecutor.addTask(SingleThreadEventExecutor.java:343)}}
{{        at 
org.apache.flink.shaded.netty4.io.netty.util.concurrent.SingleThreadEventExecutor.execute(SingleThreadEventExecutor.java:825)}}
{{        at 
org.apache.flink.shaded.netty4.io.netty.util.concurrent.SingleThreadEventExecutor.execute(SingleThreadEventExecutor.java:815)}}
{{        at 
org.apache.flink.shaded.netty4.io.netty.channel.AbstractChannel$AbstractUnsafe.register(AbstractChannel.java:483)}}
{{        at 
org.apache.flink.shaded.netty4.io.netty.channel.SingleThreadEventLoop.register(SingleThreadEventLoop.java:87)}}
{{        at 
org.apache.flink.shaded.netty4.io.netty.channel.SingleThreadEventLoop.register(SingleThreadEventLoop.java:81)}}
{{        at 
org.apache.flink.shaded.netty4.io.netty.channel.MultithreadEventLoopGroup.register(MultithreadEventLoopGroup.java:86)}}
{{        at 
org.apache.flink.shaded.netty4.io.netty.bootstrap.AbstractBootstrap.initAndRegister(AbstractBootstrap.java:323)}}
{{        at 
org.apache.flink.shaded.netty4.io.netty.bootstrap.Bootstrap.doResolveAndConnect(Bootstrap.java:155)}}
{{        at 
org.apache.flink.shaded.netty4.io.netty.bootstrap.Bootstrap.connect(Bootstrap.java:139)}}
{{        at 
org.apache.flink.shaded.netty4.io.netty.bootstrap.Bootstrap.connect(Bootstrap.java:123)}}
{{        at 
org.apache.flink.runtime.rest.RestClient.submitRequest(RestClient.java:467)}}
{{        at 
org.apache.flink.runtime.rest.RestClient.sendRequest(RestClient.java:390)}}
{{        at 
org.apache.flink.runtime.rest.RestClient.sendRequest(RestClient.java:304)}}
{{        at 
org.apache.flink.client.program.rest.RestClusterClient.lambda$null$32(RestClusterClient.java:863)}}
{{        at 
java.base/java.util.concurrent.CompletableFuture$UniCompose.tryFire(Unknown 
Source)}}
{{        at 
java.base/java.util.concurrent.CompletableFuture.postComplete(Unknown Source)}}
{{        at java.base/java.util.concurrent.CompletableFuture.postFire(Unknown 
Source)}}
{{        at 
java.base/java.util.concurrent.CompletableFuture$UniApply.tryFire(Unknown 
Source)}}
{{        at 
java.base/java.util.concurrent.CompletableFuture$Completion.run(Unknown 
Source)}}
{{        at 
java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source)}}
{{        at 
java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source)}}
{{        at java.base/java.lang.Thread.run(Unknown Source)}}
{{2022-05-27 13:40:34,047 o.a.f.s.n.i.n.u.c.D.rejectedExecution [ERROR] Failed 
to submit a listener notification task. Event loop shut down?}}
{{java.util.concurrent.RejectedExecutionException: event executor terminated}}
{{        at 
org.apache.flink.shaded.netty4.io.netty.util.concurrent.SingleThreadEventExecutor.reject(SingleThreadEventExecutor.java:923)}}
{{        at 
org.apache.flink.shaded.netty4.io.netty.util.concurrent.SingleThreadEventExecutor.offerTask(SingleThreadEventExecutor.java:350)}}
{{        at 
org.apache.flink.shaded.netty4.io.netty.util.concurrent.SingleThreadEventExecutor.addTask(SingleThreadEventExecutor.java:343)}}
{{        at 
org.apache.flink.shaded.netty4.io.netty.util.concurrent.SingleThreadEventExecutor.execute(SingleThreadEventExecutor.java:825)}}
{{        at 
org.apache.flink.shaded.netty4.io.netty.util.concurrent.SingleThreadEventExecutor.execute

[jira] [Updated] (FLINK-27821) Cannot delete flinkdeployment when the pod and deployment deleted manually

2022-05-27 Thread Hector Miuler Malpica Gallegos (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-27821?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hector Miuler Malpica Gallegos updated FLINK-27821:
---
Description: 
My operator was installed with the following command:

```

git clone g...@github.com:apache/flink-kubernetes-operator.git
git checkout 207b17b

cd flink-kubernetes-operator  

helm --debug upgrade -i \
           flink-kubernetes-operator helm/flink-kubernetes-operator \
           --set image.repository=ghcr.io/apache/flink-kubernetes-operator \
           --set image.tag=207b17b

```

Then I created a flinkDeployment and a flinkSessionJob, then I deleted the 
deployment that was generated by the flinkDeployment, then reinstalled the 
operator, and finally I wanted to delete the flinkdeployment

 

kubectl logs -f pod/flink-kubernetes-operator-5cf66cbbcb-bpl9p

```

2022-05-27 13:40:22,027 o.a.f.k.o.c.FlinkDeploymentController [INFO 
][flink-system/migration] Deleting FlinkDeployment
2022-05-27 13:40:34,047 o.a.f.s.n.i.n.c.AbstractChannel [WARN ] Force-closing a 
channel whose registration task was not accepted by an event loop: [id: 
0xb2062900]
java.util.concurrent.RejectedExecutionException: event executor terminated
        at 
org.apache.flink.shaded.netty4.io.netty.util.concurrent.SingleThreadEventExecutor.reject(SingleThreadEventExecutor.java:923)
        at 
org.apache.flink.shaded.netty4.io.netty.util.concurrent.SingleThreadEventExecutor.offerTask(SingleThreadEventExecutor.java:350)
        at 
org.apache.flink.shaded.netty4.io.netty.util.concurrent.SingleThreadEventExecutor.addTask(SingleThreadEventExecutor.java:343)
        at 
org.apache.flink.shaded.netty4.io.netty.util.concurrent.SingleThreadEventExecutor.execute(SingleThreadEventExecutor.java:825)
        at 
org.apache.flink.shaded.netty4.io.netty.util.concurrent.SingleThreadEventExecutor.execute(SingleThreadEventExecutor.java:815)
        at 
org.apache.flink.shaded.netty4.io.netty.channel.AbstractChannel$AbstractUnsafe.register(AbstractChannel.java:483)
        at 
org.apache.flink.shaded.netty4.io.netty.channel.SingleThreadEventLoop.register(SingleThreadEventLoop.java:87)
        at 
org.apache.flink.shaded.netty4.io.netty.channel.SingleThreadEventLoop.register(SingleThreadEventLoop.java:81)
        at 
org.apache.flink.shaded.netty4.io.netty.channel.MultithreadEventLoopGroup.register(MultithreadEventLoopGroup.java:86)
        at 
org.apache.flink.shaded.netty4.io.netty.bootstrap.AbstractBootstrap.initAndRegister(AbstractBootstrap.java:323)
        at 
org.apache.flink.shaded.netty4.io.netty.bootstrap.Bootstrap.doResolveAndConnect(Bootstrap.java:155)
        at 
org.apache.flink.shaded.netty4.io.netty.bootstrap.Bootstrap.connect(Bootstrap.java:139)
        at 
org.apache.flink.shaded.netty4.io.netty.bootstrap.Bootstrap.connect(Bootstrap.java:123)
        at 
org.apache.flink.runtime.rest.RestClient.submitRequest(RestClient.java:467)
        at 
org.apache.flink.runtime.rest.RestClient.sendRequest(RestClient.java:390)
        at 
org.apache.flink.runtime.rest.RestClient.sendRequest(RestClient.java:304)
        at 
org.apache.flink.client.program.rest.RestClusterClient.lambda$null$32(RestClusterClient.java:863)
        at 
java.base/java.util.concurrent.CompletableFuture$UniCompose.tryFire(Unknown 
Source)
        at 
java.base/java.util.concurrent.CompletableFuture.postComplete(Unknown Source)
        at java.base/java.util.concurrent.CompletableFuture.postFire(Unknown 
Source)
        at 
java.base/java.util.concurrent.CompletableFuture$UniApply.tryFire(Unknown 
Source)
        at 
java.base/java.util.concurrent.CompletableFuture$Completion.run(Unknown Source)
        at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown 
Source)
        at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown 
Source)
        at java.base/java.lang.Thread.run(Unknown Source)
2022-05-27 13:40:34,047 o.a.f.s.n.i.n.u.c.D.rejectedExecution [ERROR] Failed to 
submit a listener notification task. Event loop shut down?
java.util.concurrent.RejectedExecutionException: event executor terminated
        at 
org.apache.flink.shaded.netty4.io.netty.util.concurrent.SingleThreadEventExecutor.reject(SingleThreadEventExecutor.java:923)
        at 
org.apache.flink.shaded.netty4.io.netty.util.concurrent.SingleThreadEventExecutor.offerTask(SingleThreadEventExecutor.java:350)
        at 
org.apache.flink.shaded.netty4.io.netty.util.concurrent.SingleThreadEventExecutor.addTask(SingleThreadEventExecutor.java:343)
        at 
org.apache.flink.shaded.netty4.io.netty.util.concurrent.SingleThreadEventExecutor.execute(SingleThreadEventExecutor.java:825)
        at 
org.apache.flink.shaded.netty4.io.netty.util.concurrent.SingleThreadEventExecutor.execute(SingleThreadEventExecutor.java:815)
        at 
org.apache.flink.shaded.netty4.io.netty.util.concurrent.DefaultPromise.safeExecute(DefaultPromise.java:841

[jira] [Created] (FLINK-27821) Cannot delete flinkdeployment when the pod and deployment deleted manually

2022-05-27 Thread Hector Miuler Malpica Gallegos (Jira)
Hector Miuler Malpica Gallegos created FLINK-27821:
--

 Summary: Cannot delete flinkdeployment when the pod and deployment 
deleted manually
 Key: FLINK-27821
 URL: https://issues.apache.org/jira/browse/FLINK-27821
 Project: Flink
  Issue Type: Bug
  Components: Kubernetes Operator
Affects Versions: kubernetes-operator-1.1.0
Reporter: Hector Miuler Malpica Gallegos


My operator was installed with the following command:

```

git clone g...@github.com:apache/flink-kubernetes-operator.git
git checkout 207b17b

cd flink-kubernetes-operator  

helm --debug upgrade -i \
           flink-kubernetes-operator helm/flink-kubernetes-operator \
           --set image.repository=ghcr.io/apache/flink-kubernetes-operator \
           --set image.tag=207b17b

```

Then I created a flinkDeployment and a flinkSessionJob, then I deleted the 
deployment of the flinkDeployment, and finally I wanted to delete the 
flinkdeployment

 

kubectl logs -f pod/flink-kubernetes-operator-5cf66cbbcb-bpl9p

 

```

2022-05-27 13:40:22,027 o.a.f.k.o.c.FlinkDeploymentController [INFO 
][flink-system/migration] Deleting FlinkDeployment
2022-05-27 13:40:34,047 o.a.f.s.n.i.n.c.AbstractChannel [WARN ] Force-closing a 
channel whose registration task was not accepted by an event loop: [id: 
0xb2062900]
java.util.concurrent.RejectedExecutionException: event executor terminated
        at 
org.apache.flink.shaded.netty4.io.netty.util.concurrent.SingleThreadEventExecutor.reject(SingleThreadEventExecutor.java:923)
        at 
org.apache.flink.shaded.netty4.io.netty.util.concurrent.SingleThreadEventExecutor.offerTask(SingleThreadEventExecutor.java:350)
        at 
org.apache.flink.shaded.netty4.io.netty.util.concurrent.SingleThreadEventExecutor.addTask(SingleThreadEventExecutor.java:343)
        at 
org.apache.flink.shaded.netty4.io.netty.util.concurrent.SingleThreadEventExecutor.execute(SingleThreadEventExecutor.java:825)
        at 
org.apache.flink.shaded.netty4.io.netty.util.concurrent.SingleThreadEventExecutor.execute(SingleThreadEventExecutor.java:815)
        at 
org.apache.flink.shaded.netty4.io.netty.channel.AbstractChannel$AbstractUnsafe.register(AbstractChannel.java:483)
        at 
org.apache.flink.shaded.netty4.io.netty.channel.SingleThreadEventLoop.register(SingleThreadEventLoop.java:87)
        at 
org.apache.flink.shaded.netty4.io.netty.channel.SingleThreadEventLoop.register(SingleThreadEventLoop.java:81)
        at 
org.apache.flink.shaded.netty4.io.netty.channel.MultithreadEventLoopGroup.register(MultithreadEventLoopGroup.java:86)
        at 
org.apache.flink.shaded.netty4.io.netty.bootstrap.AbstractBootstrap.initAndRegister(AbstractBootstrap.java:323)
        at 
org.apache.flink.shaded.netty4.io.netty.bootstrap.Bootstrap.doResolveAndConnect(Bootstrap.java:155)
        at 
org.apache.flink.shaded.netty4.io.netty.bootstrap.Bootstrap.connect(Bootstrap.java:139)
        at 
org.apache.flink.shaded.netty4.io.netty.bootstrap.Bootstrap.connect(Bootstrap.java:123)
        at 
org.apache.flink.runtime.rest.RestClient.submitRequest(RestClient.java:467)
        at 
org.apache.flink.runtime.rest.RestClient.sendRequest(RestClient.java:390)
        at 
org.apache.flink.runtime.rest.RestClient.sendRequest(RestClient.java:304)
        at 
org.apache.flink.client.program.rest.RestClusterClient.lambda$null$32(RestClusterClient.java:863)
        at 
java.base/java.util.concurrent.CompletableFuture$UniCompose.tryFire(Unknown 
Source)
        at 
java.base/java.util.concurrent.CompletableFuture.postComplete(Unknown Source)
        at java.base/java.util.concurrent.CompletableFuture.postFire(Unknown 
Source)
        at 
java.base/java.util.concurrent.CompletableFuture$UniApply.tryFire(Unknown 
Source)
        at 
java.base/java.util.concurrent.CompletableFuture$Completion.run(Unknown Source)
        at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown 
Source)
        at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown 
Source)
        at java.base/java.lang.Thread.run(Unknown Source)
2022-05-27 13:40:34,047 o.a.f.s.n.i.n.u.c.D.rejectedExecution [ERROR] Failed to 
submit a listener notification task. Event loop shut down?
java.util.concurrent.RejectedExecutionException: event executor terminated
        at 
org.apache.flink.shaded.netty4.io.netty.util.concurrent.SingleThreadEventExecutor.reject(SingleThreadEventExecutor.java:923)
        at 
org.apache.flink.shaded.netty4.io.netty.util.concurrent.SingleThreadEventExecutor.offerTask(SingleThreadEventExecutor.java:350)
        at 
org.apache.flink.shaded.netty4.io.netty.util.concurrent.SingleThreadEventExecutor.addTask(SingleThreadEventExecutor.java:343)
        at 
org.apache.flink.shaded.netty4.io.netty.util.concurrent.SingleThreadEventExecutor.execute(SingleThreadEventExecutor.java:825)
        at 
org.apache.flink.shaded.netty4

[jira] [Created] (FLINK-27821) Cannot delete flinkdeployment when the pod and deployment deleted manually

2022-05-27 Thread Hector Miuler Malpica Gallegos (Jira)
Hector Miuler Malpica Gallegos created FLINK-27821:
--

 Summary: Cannot delete flinkdeployment when the pod and deployment 
deleted manually
 Key: FLINK-27821
 URL: https://issues.apache.org/jira/browse/FLINK-27821
 Project: Flink
  Issue Type: Bug
  Components: Kubernetes Operator
Affects Versions: kubernetes-operator-1.1.0
Reporter: Hector Miuler Malpica Gallegos


My operator was installed with the following command:

```

git clone g...@github.com:apache/flink-kubernetes-operator.git
git checkout 207b17b

cd flink-kubernetes-operator  

helm --debug upgrade -i \
           flink-kubernetes-operator helm/flink-kubernetes-operator \
           --set image.repository=ghcr.io/apache/flink-kubernetes-operator \
           --set image.tag=207b17b

```

Then I created a flinkDeployment and a flinkSessionJob, then I deleted the 
deployment of the flinkDeployment, and finally I wanted to delete the 
flinkdeployment

 

kubectl logs -f pod/flink-kubernetes-operator-5cf66cbbcb-bpl9p

 

```

2022-05-27 13:40:22,027 o.a.f.k.o.c.FlinkDeploymentController [INFO 
][flink-system/migration] Deleting FlinkDeployment
2022-05-27 13:40:34,047 o.a.f.s.n.i.n.c.AbstractChannel [WARN ] Force-closing a 
channel whose registration task was not accepted by an event loop: [id: 
0xb2062900]
java.util.concurrent.RejectedExecutionException: event executor terminated
        at 
org.apache.flink.shaded.netty4.io.netty.util.concurrent.SingleThreadEventExecutor.reject(SingleThreadEventExecutor.java:923)
        at 
org.apache.flink.shaded.netty4.io.netty.util.concurrent.SingleThreadEventExecutor.offerTask(SingleThreadEventExecutor.java:350)
        at 
org.apache.flink.shaded.netty4.io.netty.util.concurrent.SingleThreadEventExecutor.addTask(SingleThreadEventExecutor.java:343)
        at 
org.apache.flink.shaded.netty4.io.netty.util.concurrent.SingleThreadEventExecutor.execute(SingleThreadEventExecutor.java:825)
        at 
org.apache.flink.shaded.netty4.io.netty.util.concurrent.SingleThreadEventExecutor.execute(SingleThreadEventExecutor.java:815)
        at 
org.apache.flink.shaded.netty4.io.netty.channel.AbstractChannel$AbstractUnsafe.register(AbstractChannel.java:483)
        at 
org.apache.flink.shaded.netty4.io.netty.channel.SingleThreadEventLoop.register(SingleThreadEventLoop.java:87)
        at 
org.apache.flink.shaded.netty4.io.netty.channel.SingleThreadEventLoop.register(SingleThreadEventLoop.java:81)
        at 
org.apache.flink.shaded.netty4.io.netty.channel.MultithreadEventLoopGroup.register(MultithreadEventLoopGroup.java:86)
        at 
org.apache.flink.shaded.netty4.io.netty.bootstrap.AbstractBootstrap.initAndRegister(AbstractBootstrap.java:323)
        at 
org.apache.flink.shaded.netty4.io.netty.bootstrap.Bootstrap.doResolveAndConnect(Bootstrap.java:155)
        at 
org.apache.flink.shaded.netty4.io.netty.bootstrap.Bootstrap.connect(Bootstrap.java:139)
        at 
org.apache.flink.shaded.netty4.io.netty.bootstrap.Bootstrap.connect(Bootstrap.java:123)
        at 
org.apache.flink.runtime.rest.RestClient.submitRequest(RestClient.java:467)
        at 
org.apache.flink.runtime.rest.RestClient.sendRequest(RestClient.java:390)
        at 
org.apache.flink.runtime.rest.RestClient.sendRequest(RestClient.java:304)
        at 
org.apache.flink.client.program.rest.RestClusterClient.lambda$null$32(RestClusterClient.java:863)
        at 
java.base/java.util.concurrent.CompletableFuture$UniCompose.tryFire(Unknown 
Source)
        at 
java.base/java.util.concurrent.CompletableFuture.postComplete(Unknown Source)
        at java.base/java.util.concurrent.CompletableFuture.postFire(Unknown 
Source)
        at 
java.base/java.util.concurrent.CompletableFuture$UniApply.tryFire(Unknown 
Source)
        at 
java.base/java.util.concurrent.CompletableFuture$Completion.run(Unknown Source)
        at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown 
Source)
        at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown 
Source)
        at java.base/java.lang.Thread.run(Unknown Source)
2022-05-27 13:40:34,047 o.a.f.s.n.i.n.u.c.D.rejectedExecution [ERROR] Failed to 
submit a listener notification task. Event loop shut down?
java.util.concurrent.RejectedExecutionException: event executor terminated
        at 
org.apache.flink.shaded.netty4.io.netty.util.concurrent.SingleThreadEventExecutor.reject(SingleThreadEventExecutor.java:923)
        at 
org.apache.flink.shaded.netty4.io.netty.util.concurrent.SingleThreadEventExecutor.offerTask(SingleThreadEventExecutor.java:350)
        at 
org.apache.flink.shaded.netty4.io.netty.util.concurrent.SingleThreadEventExecutor.addTask(SingleThreadEventExecutor.java:343)
        at 
org.apache.flink.shaded.netty4.io.netty.util.concurrent.SingleThreadEventExecutor.execute(SingleThreadEventExecutor.java:825)
        at 
org.apache.flink.shaded.netty4

Re: maildir-utils is marked for autoremoval from testing

2022-05-26 Thread Hector Oron
Hello,

On Thu, 26 May 2022 at 16:54, Hector Oron  wrote:

> Hello,
>
> On Thu, 26 May 2022 at 15:19, Norbert Preining  wrote:
>
>> Hi
>>
>> (please cc)
>>
>> On Thu, 26 May 2022, Debian testing autoremoval watch wrote:
>> > maildir-utils 1.6.10-1 is marked for autoremoval from testing on
>> 2022-06-30
>> >
>> > It (build-)depends on packages with these RC bugs:
>> > 1011146: nvidia-graphics-drivers-tesla-470: CVE-2022-28181,
>> CVE-2022-28183, CVE-2022-28184, CVE-2022-28185, CVE-2022-28191,
>> CVE-2022-28192
>> >  https://bugs.debian.org/1011146
>>
>> This is now the nth email I get about a completely irrelevant
>> dependency.
>>
>
> I have gotten the same email for several packages, I assume this was
> unexpected. I tried to check for an email giving some explanation, but I
> have not found it yet.
>

Found this https://bugs.debian.org/1011268



>
>
>> Norbert
>>
>> --
>> PREINING Norbert  https://www.preining.info
>> Mercari Inc. + IFMGA Guide + TU Wien + TeX Live
>> GPG: 0x860CDC13   fp: F7D8 A928 26E3 16A1 9FA0 ACF0 6CAC A448 860C DC13
>>
>>


Re: maildir-utils is marked for autoremoval from testing

2022-05-26 Thread Hector Oron
Hello,

On Thu, 26 May 2022 at 15:19, Norbert Preining  wrote:

> Hi
>
> (please cc)
>
> On Thu, 26 May 2022, Debian testing autoremoval watch wrote:
> > maildir-utils 1.6.10-1 is marked for autoremoval from testing on
> 2022-06-30
> >
> > It (build-)depends on packages with these RC bugs:
> > 1011146: nvidia-graphics-drivers-tesla-470: CVE-2022-28181,
> CVE-2022-28183, CVE-2022-28184, CVE-2022-28185, CVE-2022-28191,
> CVE-2022-28192
> >  https://bugs.debian.org/1011146
>
> This is now the nth email I get about a completely irrelevant
> dependency.
>

I have gotten the same email for several packages, I assume this was
unexpected. I tried to check for an email giving some explanation, but I
have not found it yet.



> Norbert
>
> --
> PREINING Norbert  https://www.preining.info
> Mercari Inc. + IFMGA Guide + TU Wien + TeX Live
> GPG: 0x860CDC13   fp: F7D8 A928 26E3 16A1 9FA0 ACF0 6CAC A448 860C DC13
>
>


[konsole] [Bug 454034] "Allow Color Filters" feature is poorly named, surprising and unintuitive

2022-05-23 Thread Hector Martin
https://bugs.kde.org/show_bug.cgi?id=454034

--- Comment #3 from Hector Martin  ---
Keep in mind that the confusing setting name was only the last 15 minutes spent
on the issue. I spent months wondering what the colored squares were about in
the first place. It would be helpful to also add a caption to the previews, as
I mentioned.

-- 
You are receiving this mail because:
You are watching all bug changes.

[jira] [Created] (HDFS-16591) StateStoreZooKeeper fails to initialize

2022-05-23 Thread Hector Sandoval Chaverri (Jira)
Hector Sandoval Chaverri created HDFS-16591:
---

 Summary: StateStoreZooKeeper fails to initialize
 Key: HDFS-16591
 URL: https://issues.apache.org/jira/browse/HDFS-16591
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: rbf
Reporter: Hector Sandoval Chaverri


MembershipStore and MountTableStore are failing to initialize, logging the 
following errors on the Router logs:
{noformat}
2022-05-23 16:43:01,156 ERROR 
org.apache.hadoop.hdfs.server.federation.router.RouterHeartbeatService: Cannot 
get version for class 
org.apache.hadoop.hdfs.server.federation.store.MembershipStore
org.apache.hadoop.hdfs.server.federation.store.StateStoreUnavailableException: 
Cached State Store not initialized, MembershipState records not valid
at 
org.apache.hadoop.hdfs.server.federation.store.CachedRecordStore.checkCacheAvailable(CachedRecordStore.java:106)
at 
org.apache.hadoop.hdfs.server.federation.store.CachedRecordStore.getCachedRecords(CachedRecordStore.java:227)
at 
org.apache.hadoop.hdfs.server.federation.router.RouterHeartbeatService.getStateStoreVersion(RouterHeartbeatService.java:131)
at 
org.apache.hadoop.hdfs.server.federation.router.RouterHeartbeatService.updateStateStore(RouterHeartbeatService.java:92)
at 
org.apache.hadoop.hdfs.server.federation.router.RouterHeartbeatService.periodicInvoke(RouterHeartbeatService.java:159)
at 
org.apache.hadoop.hdfs.server.federation.router.PeriodicService$1.run(PeriodicService.java:178)
at 
java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)
at 
java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
at 
java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748){noformat}
After investigating, we noticed that ZKDelegationTokenSecretManager normally 
initializes properties for ZooKeeper clients to connect using SASL/Kerberos. If 
ZKDelegationTokenSecretManager is replaced with a new SecretManager, the SASL 
properties don't get configured and any StateStores that connect to ZooKeeper 
fail with the above error. 

 A potential way to fix this is by setting the JaasConfiguration (currently 
done in ZKDelegationTokenSecretManager) as part of the StateStoreZooKeeperImpl 
initialization method.
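
For illustration only (the class and method names below are made up, not the actual 
Hadoop code), the idea is to install a JAAS "Client" section programmatically before 
the ZooKeeper client connects, similar to what ZKDelegationTokenSecretManager does:
{noformat}
import java.util.HashMap;
import java.util.Map;
import javax.security.auth.login.AppConfigurationEntry;
import javax.security.auth.login.Configuration;

// Sketch only: not the actual Hadoop code.
final class ZkSaslJaasSetup {
  private ZkSaslJaasSetup() {}

  // Install a JAAS "Client" section so the ZooKeeper client can negotiate SASL/Kerberos.
  static void installClientJaasConf(final String principal, final String keytab) {
    Configuration.setConfiguration(new Configuration() {
      @Override
      public AppConfigurationEntry[] getAppConfigurationEntry(String name) {
        if (!"Client".equals(name)) {
          return null;
        }
        final Map<String, String> options = new HashMap<>();
        options.put("principal", principal);
        options.put("keyTab", keytab);
        options.put("useKeyTab", "true");
        options.put("storeKey", "true");
        return new AppConfigurationEntry[] {
            new AppConfigurationEntry(
                "com.sun.security.auth.module.Krb5LoginModule",
                AppConfigurationEntry.LoginModuleControlFlag.REQUIRED,
                options)
        };
      }
    });
  }
}
{noformat}
StateStoreZooKeeperImpl could call something like this during its initialization, 
using the principal and keytab it is configured with.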



--
This message was sent by Atlassian Jira
(v8.20.7#820007)

-
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org



[konsole] [Bug 454034] New: "Allow Color Filters" feature is poorly named, surprising and unintuitive

2022-05-19 Thread Hector Martin
https://bugs.kde.org/show_bug.cgi?id=454034

Bug ID: 454034
   Summary: "Allow Color Filters" feature is poorly named,
surprising and unintuitive
   Product: konsole
   Version: 21.12.3
  Platform: Gentoo Packages
OS: Linux
Status: REPORTED
  Severity: minor
  Priority: NOR
 Component: general
  Assignee: konsole-de...@kde.org
  Reporter: hec...@marcansoft.com
  Target Milestone: ---

SUMMARY
I've been wondering for months why I sometimes got random colored squares,
usually near my Plasma taskbar, that would disappear as soon as I moved the
mouse.

I just found out it's a Konsole feature today.

STEPS TO REPRODUCE
1. Use Konsole with defaults
2. Notice random colored squares appear with no rhyme nor reason
3. Be confused for months
4. Eventually put two and two together and realize it's previewing color names
on hover.
5. Spend 15 minutes looking for the option to turn it off

OBSERVED RESULT
Frustration

EXPECTED RESULT
It should take 2 seconds to understand the feature (and that it is one) and
less than a minute to figure out how to turn it off.

ADDITIONAL INFORMATION
May I suggest:
- Adding a caption/title to the tooltip, something like "Color preview", to
make it clear it's intentional and not, say, a broken window thumbnail, which
is what it looked like to me at first
- Renaming the setting to something like "Show color preview tooltips" "Preview
color names on hover". I had to literally hover over every setting and read the
explanations to find it. I have no idea what a "color filter" is or why I would
or wouldn't "allow" it.

-- 
You are receiving this mail because:
You are watching all bug changes.

Re: Permanent email address?

2022-05-16 Thread Richard Hector

On 16/05/22 05:11, Dan Ritter wrote:

I note that nobody owns rhkramer.org:

$ host rhkramer.org
Host rhkramer.org not found: 3(NXDOMAIN)

NXDOMAIN means no such domain.


Not quite. It doesn't mean no-one owns it; it just means (IIRC) there's 
no A or AAAA record for that domain. www.rhkramer.org could exist, for 
example.


Instead, check with:

$ whois rhkramer.org
NOT FOUND

[snip legalese]

... which shows that it is indeed available.

Richard



[MARMAM] New paper on Northernmost Habitat Range of Guadalupe Fur Seals in the Gulf of California.

2022-05-15 Thread Hector Perez
Dear MARMAM colleagues,

On behalf of my co-authors, we are pleased to announce our recent short-note in 
Aquatic Mammals Journal about "Northernmost Habitat Range of Guadalupe Fur 
Seals (Arctocephalus townsendi) in the Gulf of California, México".

Citation:
Gálvez, C., Pérez-Puig, H. and F. R. Elorriaga-Verplancken. (2022). 
Northernmost Habitat Range of Guadalupe Fur Seals (Arctocephalus townsendi) in 
the Gulf of California, México. Aquatic Mammals, 48(3):223-233, DOI 
10.1578/AM.48.3.2022.223 Short223

You can have access to the short-note here:  
https://www.aquaticmammalsjournal.org/index.php?option=com_content=article=2200:northernmost-habitat-range-of-guadalupe-fur-seals-arctocephalus-townsendi-in-the-gulf-of-california-mexico=207=326

Here is the abstract:

In this short-note, we provide information on an apparent expansion of the 
Guadalupe fur seal’s (Arctocephalus townsendi) (GFS) habitat range, involving 
the potential establishment of the first bulls haul-out site located in the 
Midriff Islands Region of the Gulf of California. We highlight the importance 
of increasing research efforts in this area to determine their potential 
colonization, which is relevant because the haul-out site is located in a 
region where important anthropogenic activities (e.g., fishing) take place, 
representing a potential risk to individuals' welfare and survival during their 
post-breeding migration period along the Gulf of California.

Please feel free to contact us with any questions/comments or request a pdf 
copy.

Best regards,

Héctor.
___


M.C. Héctor Pérez Puig
Coordinador - Programa de Mamíferos Marinos

Centro de Estudios Culturales y Ecológicos Prescott College A.C.


Coordinator - Marine Mammal Program

Prescott College Kino Bay Center for Cultural and Ecological Studies

   w. 
www.prescott.edu/kino-bay-center/index.html
   e. hector.pe...@prescott.edu
   t. (01662)-242-0024
___
MARMAM mailing list
MARMAM@lists.uvic.ca
https://lists.uvic.ca/mailman/listinfo/marmam


[systemsettings] [Bug 364321] Ability to switch from JEDEC to SI units

2022-05-14 Thread Hector Martin
https://bugs.kde.org/show_bug.cgi?id=364321

--- Comment #22 from Hector Martin  ---
The UI was there in KDE4. It was lost in KDE5. Do you have a reference as to
why it was removed?

When 57240 was closed, there was no UI and no intent to add an UI. A UI was
later added anyway. Clearly someone thought it was important. If it was
deliberately removed, that action is not documented in 57240, since it predates
the addition of the UI entirely.

-- 
You are receiving this mail because:
You are watching all bug changes.

[systemsettings] [Bug 364321] Ability to switch from JEDEC to SI units

2022-05-14 Thread Hector Martin
https://bugs.kde.org/show_bug.cgi?id=364321

Hector Martin  changed:

   What|Removed |Added

 Resolution|DUPLICATE   |---
 Status|RESOLVED|REOPENED

--- Comment #20 from Hector Martin  ---
The good news is that manually fiddling with the config, as in bug 57240, does
indeed still work and lets you get real SI units. Great!

The bad news is that bug predates the existence of the UI in KDE 4.x, and this
bug is a *regression* since the UI did exist at a later point and subsequently
disappeared. So this isn't a duplicate, since it's a regression. If you can
find documentation that the subsequent removal of the UI was intentional and it
will not be restored, that would make this bug a WONTFIX, not a DUPLICATE.

-- 
You are receiving this mail because:
You are watching all bug changes.

Re: wtf just happened to my local staging web server

2022-05-11 Thread Richard Hector

On 5/05/22 19:57, Stephan Seitz wrote:

Am Do, Mai 05, 2022 at 09:30:42 +0200 schrieb Klaus Singvogel:

I think there are more.


Yes, I only know wtf as ...


Yes, but such language is not permitted on this list.

Richard



[jira] [Updated] (HADOOP-18167) Add metrics to track delegation token secret manager operations

2022-05-11 Thread Hector Sandoval Chaverri (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-18167?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hector Sandoval Chaverri updated HADOOP-18167:
--
Resolution: Fixed
Status: Resolved  (was: Patch Available)

> Add metrics to track delegation token secret manager operations
> ---
>
> Key: HADOOP-18167
> URL: https://issues.apache.org/jira/browse/HADOOP-18167
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Hector Sandoval Chaverri
>Priority: Major
>  Labels: pull-request-available
> Attachments: HADOOP-18167-branch-2.10-2.patch, 
> HADOOP-18167-branch-2.10-3.patch, HADOOP-18167-branch-2.10-4.patch, 
> HADOOP-18167-branch-2.10.patch, HADOOP-18167-branch-3.3.patch
>
>  Time Spent: 6h 10m
>  Remaining Estimate: 0h
>
> New metrics to track operations that store, update and remove delegation 
> tokens in implementations of AbstractDelegationTokenSecretManager. This will 
> help evaluate the impact of using different secret managers and add 
> optimizations.



--
This message was sent by Atlassian Jira
(v8.20.7#820007)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[nznog] Re: routing problem?

2022-05-11 Thread Richard Hector

On 11/05/22 20:46, Nathan Ward wrote:



On 11/05/2022, at 8:06 PM, Richard Hector  wrote:

Hi all,

Hopefully this is acceptable here ...

I have a VPS (with a well-known NZ provider) which I can ping, but can't ssh 
to. tcptraceroute stops a couple of hops in (I think the first to not respond 
is our immediate ISP's immediate upstream).

From a different house/ISP, I can connect fine, and from here, I can connect to 
a different VPS (same provider, different network block)

I'm reasonably confident it's not firewall, partly from extensive testing, and 
partly because the same behaviour is shown when running tcptraceroute to the 
gateways of the respective VPS.

Any thoughts?

The fact that the traces make it part way suggests to me that it's a routing 
problem, but then how does ping work?

I could include tcptraceroute results, but is it considered ok to reveal ISPs 
etc? My email probably reveals it all anyway, of course ... :-)


Yeah post away. I would suggest run mtr once with tcp, once with udp, once with 
icmp.
mtr has tcp, udp and icmp modes these days and I find it better than 
traditional tcptraceroute - if for no other reason than it’s got a nice 
consistent interface regardless what protocol you’re testing with.

It may be that the hop that tcptraceroute “stop” at is actually just a router 
that’s dropping your tcp from hitting the control plane, and higher TTL packets 
continue through, so let them run their course and see if you get hops after 
that.

Probably also be worth hitting up your ISP, if that’s relevant, as it sounds 
like it’s an issue with them.


Thanks Nathan.

From home (2degrees):
richard@citrine:~$ mtr -tr 103.6.212.12
Start: 2022-05-11T20:59:07+1200
HOST: citrine Loss%   Snt   Last   Avg  Best  Wrst StDev
  1.|-- void.home  0.0%100.5   0.5   0.4   0.7 
  0.1
  2.|-- 65.7.69.111.static.snap.n  0.0%103.3   3.5   1.5   8.6 
  2.3
  3.|-- voyager.wix.nzix.net   0.0%102.2   2.5   2.0   3.5 
  0.6
  4.|-- ae-0-447.cr2.wgn.vygr.net  0.0%102.7   2.8   2.1   4.2 
  0.7
  5.|-- xe-2-0-3-0.cr1.wgn.vygr.n  0.0%104.0   2.8   2.0   4.6 
  0.8
  6.|-- xe-1-1-0-0.cr2.mdr.vygr.n  0.0%10   13.4  12.5  11.9  13.4 
  0.5
  7.|-- ae-1-0.cr1.mdr.vygr.net0.0%10   12.4  12.9  11.8  15.3 
  1.2
  8.|-- et-0-0-0-0.cr1.qst.vygr.n  0.0%10   12.7  15.0  11.8  34.0 
  6.7
  9.|-- et-0-0-3-0.cr2.qst.vygr.n  0.0%10   14.1  14.2  11.7  21.1 
  3.5
 10.|-- et-0-0.3.cr2.qst.vygr.net  0.0%10   14.6  12.6  11.7  14.6 
  0.9
 11.|-- xe-0-1-0.cr2.pmd.vygr.net  0.0%10   13.1  15.2  12.0  37.9 
  8.0
 12.|-- 113.21.224.23  0.0%10   12.3  12.5  12.2  13.3 
  0.4
 13.|-- akl-host1.backend.net.nz   0.0%10   13.3  14.0  12.1  22.5 
  3.1

richard@citrine:~$ mtr -trT 103.6.212.12
Start: 2022-05-11T21:00:58+1200
HOST: citrine Loss%   Snt   Last   Avg  Best  Wrst StDev
  1.|-- void.home  0.0%100.5   0.6   0.5   0.7 
  0.0
  2.|-- 65.7.69.111.static.snap.n  0.0%10   32.1   5.8   2.1  32.1 
  9.3
  3.|-- ???   100.0100.0   0.0   0.0   0.0 
  0.0

richard@citrine:~$ mtr -tru 103.6.212.12
Start: 2022-05-11T21:01:25+1200
HOST: citrine Loss%   Snt   Last   Avg  Best  Wrst StDev
  1.|-- void.home  0.0%100.5   0.5   0.5   0.7 
  0.1
  2.|-- 65.7.69.111.static.snap.n  0.0%102.0   2.1   1.6   3.4 
  0.6
  3.|-- ???   100.0100.0   0.0   0.0   0.0 
  0.0


From a different home (Spark):
richard@kereru:~$ mtr -tr 103.6.212.12
Start: 2022-05-11T21:04:56+1200
HOST: kereru  Loss%   Snt   Last   Avg  Best  Wrst StDev
  1.|-- fibre-router   0.0%100.6   0.5   0.4   0.6 
  0.1
  2.|-- 219-88-156-1-vdsl.sparkbb  0.0%107.9   4.0   1.7   7.9 
  1.7
  3.|-- ???   100.0100.0   0.0   0.0   0.0 
  0.0
  4.|-- 122.56.113.6   0.0%10   12.7  12.1  10.7  13.7 
  1.0
  5.|-- voyager-dom.akcr11.global  0.0%10   13.7  14.7  11.5  28.7 
  5.1
  6.|-- et-0-0-0-0.cr1.qst.vygr.n  0.0%10   14.8  16.0  11.7  22.6 
  3.7
  7.|-- et-0-0-3-0.cr2.qst.vygr.n  0.0%10   12.0  13.5  11.0  18.8 
  2.3
  8.|-- et-0-0.3.cr2.qst.vygr.net  0.0%10   27.6  15.6  11.4  27.6 
  4.5
  9.|-- xe-0-1-0.cr2.pmd.vygr.net  0.0%10   14.3  14.5  11.4  21.3 
  2.7
 10.|-- 113.21.224.23  0.0%10   14.3  13.7  11.7  15.2 
  1.3
 11.|-- akl-host1.backend.net.nz   0.0%10   11.2  12.9  11.2  15.2 
  1.4

richard@kereru:~$ mtr -trT 103.6.212.12
Start: 2022-05-11T21:06:04+1200
HOST: kereru  Loss%   Snt   Last   Avg  Best  Wrst StDev
  1.|-- fibre-router   0.0%100.6   0.7   0.6   1.3 
  0.2
  2.|-- 219-88-156-1-vdsl.sparkbb  0.0%104.0   4.3   1.9   8.8 
  2.0
  3.|-- ???   100.0100.0   0.0   0.0   0.0 
  0.0

[nznog] routing problem?

2022-05-11 Thread Richard Hector

Hi all,

Hopefully this is acceptable here ...

I have a VPS (with a well-known NZ provider) which I can ping, but can't 
ssh to. tcptraceroute stops a couple of hops in (I think the first to 
not respond is our immediate ISP's immediate upstream).


From a different house/ISP, I can connect fine, and from here, I can 
connect to a different VPS (same provider, different network block)


I'm reasonably confident it's not firewall, partly from extensive 
testing, and partly because the same behaviour is shown when running 
tcptraceroute to the gateways of the respective VPS.


Any thoughts?

The fact that the traces make it part way suggests to me that it's a 
routing problem, but then how does ping work?


I could include tcptraceroute results, but is it considered ok to reveal 
ISPs etc? My email probably reveals it all anyway, of course ... :-)


Cheers,
Richard
___
NZNOG mailing list -- nznog@list.waikato.ac.nz
To unsubscribe send an email to nznog-le...@list.waikato.ac.nz


[jira] [Commented] (HADOOP-18167) Add metrics to track delegation token secret manager operations

2022-05-05 Thread Hector Sandoval Chaverri (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-18167?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17532413#comment-17532413
 ] 

Hector Sandoval Chaverri commented on HADOOP-18167:
---

[~ayushtkn] I haven't been able to repro by starting ResourceManager, but I see 
this can happen if there's a call to MetricsSystemImpl#init. I've created 
HADOOP-18222 to track this issue.

> Add metrics to track delegation token secret manager operations
> ---
>
> Key: HADOOP-18167
> URL: https://issues.apache.org/jira/browse/HADOOP-18167
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Hector Sandoval Chaverri
>Priority: Major
>  Labels: pull-request-available
> Attachments: HADOOP-18167-branch-2.10-2.patch, 
> HADOOP-18167-branch-2.10-3.patch, HADOOP-18167-branch-2.10-4.patch, 
> HADOOP-18167-branch-2.10.patch, HADOOP-18167-branch-3.3.patch
>
>  Time Spent: 5h 50m
>  Remaining Estimate: 0h
>
> New metrics to track operations that store, update and remove delegation 
> tokens in implementations of AbstractDelegationTokenSecretManager. This will 
> help evaluate the impact of using different secret managers and add 
> optimizations.



--
This message was sent by Atlassian Jira
(v8.20.7#820007)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Created] (HADOOP-18222) Prevent DelegationTokenSecretManagerMetrics from registering multiple times

2022-05-05 Thread Hector Sandoval Chaverri (Jira)
Hector Sandoval Chaverri created HADOOP-18222:
-

 Summary: Prevent DelegationTokenSecretManagerMetrics from 
registering multiple times 
 Key: HADOOP-18222
 URL: https://issues.apache.org/jira/browse/HADOOP-18222
 Project: Hadoop Common
  Issue Type: Improvement
Reporter: Hector Sandoval Chaverri


After committing HADOOP-18167, we received reports of the following error when 
ResourceManager is initialized:
{noformat}
Caused by: java.io.IOException: Problem starting http server
at org.apache.hadoop.http.HttpServer2.start(HttpServer2.java:1389)
at org.apache.hadoop.yarn.webapp.WebApps$Builder.start(WebApps.java:475)
... 4 more
Caused by: org.apache.hadoop.metrics2.MetricsException: Metrics source 
DelegationTokenSecretManagerMetrics already exists!
at 
org.apache.hadoop.metrics2.lib.DefaultMetricsSystem.newSourceName(DefaultMetricsSystem.java:152)
at 
org.apache.hadoop.metrics2.lib.DefaultMetricsSystem.sourceName(DefaultMetricsSystem.java:125)
at 
org.apache.hadoop.metrics2.impl.MetricsSystemImpl.register(MetricsSystemImpl.java:229)
at 
org.apache.hadoop.metrics2.MetricsSystem.register(MetricsSystem.java:71)
at 
org.apache.hadoop.security.token.delegation.AbstractDelegationTokenSecretManager$DelegationTokenSecretManagerMetrics.create(AbstractDelegationTokenSecretManager.java:878)
at 
org.apache.hadoop.security.token.delegation.AbstractDelegationTokenSecretManager.(AbstractDelegationTokenSecretManager.java:152)
at 
org.apache.hadoop.security.token.delegation.web.DelegationTokenManager$DelegationTokenSecretManager.(DelegationTokenManager.java:72)
at 
org.apache.hadoop.security.token.delegation.web.DelegationTokenManager.(DelegationTokenManager.java:122)
at 
org.apache.hadoop.security.token.delegation.web.DelegationTokenAuthenticationHandler.initTokenManager(DelegationTokenAuthenticationHandler.java:161)
at 
org.apache.hadoop.security.token.delegation.web.DelegationTokenAuthenticationHandler.init(DelegationTokenAuthenticationHandler.java:130)
at 
org.apache.hadoop.security.authentication.server.AuthenticationFilter.initializeAuthHandler(AuthenticationFilter.java:194)
at 
org.apache.hadoop.security.token.delegation.web.DelegationTokenAuthenticationFilter.initializeAuthHandler(DelegationTokenAuthenticationFilter.java:214)
at 
org.apache.hadoop.security.authentication.server.AuthenticationFilter.init(AuthenticationFilter.java:180)
at 
org.apache.hadoop.security.token.delegation.web.DelegationTokenAuthenticationFilter.init(DelegationTokenAuthenticationFilter.java:180)
at 
org.apache.hadoop.yarn.server.security.http.RMAuthenticationFilter.init(RMAuthenticationFilter.java:53){noformat}
This can happen if MetricsSystemImpl#init is called and multiple metrics are 
registered with the same name. A proposed solution is to declare the metrics in 
AbstractDelegationTokenSecretManager as a singleton, which would prevent multiple 
instances of DelegationTokenSecretManagerMetrics from being registered.
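
A minimal sketch of that kind of guard (illustrative only; the real field and factory 
names may differ, and create() here stands for whatever currently registers the 
source, as seen in the stack trace above):
{noformat}
// Sketch: create and register the metrics source only once per JVM, so a second
// secret manager instance reuses the already-registered source.
private static volatile DelegationTokenSecretManagerMetrics METRICS;

static DelegationTokenSecretManagerMetrics getOrCreateMetrics() {
  if (METRICS == null) {
    synchronized (AbstractDelegationTokenSecretManager.class) {
      if (METRICS == null) {
        METRICS = DelegationTokenSecretManagerMetrics.create();
      }
    }
  }
  return METRICS;
}
{noformat}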



--
This message was sent by Atlassian Jira
(v8.20.7#820007)

-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org



[jenkinsci/nexus-platform-plugin]

2022-05-05 Thread 'Hector Danilo Hurtado Olaya' via Jenkins Commits
  Branch: refs/heads/INT-6770-adjust-no-proxy-hosts-validation
  Home:   https://github.com/jenkinsci/nexus-platform-plugin

-- 
You received this message because you are subscribed to the Google Groups 
"Jenkins Commits" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to jenkinsci-commits+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/jenkinsci-commits/jenkinsci/nexus-platform-plugin/push/refs/heads/INT-6770-adjust-no-proxy-hosts-validation/188dc3-00%40github.com.


[jenkinsci/nexus-platform-plugin] ff9788: INT-6770 Adjusting no proxy hosts validation (#200)

2022-05-05 Thread 'Hector Danilo Hurtado Olaya' via Jenkins Commits
  Branch: refs/heads/main
  Home:   https://github.com/jenkinsci/nexus-platform-plugin
  Commit: ff9788e3acc289c43d085bbab334fb337a878c89
  
https://github.com/jenkinsci/nexus-platform-plugin/commit/ff9788e3acc289c43d085bbab334fb337a878c89
  Author: Hector Danilo Hurtado Olaya 
  Date:   2022-05-05 (Thu, 05 May 2022)

  Changed paths:
M pom.xml
M src/main/java/org/sonatype/nexus/ci/iq/IqClientFactory.groovy
M src/main/java/org/sonatype/nexus/ci/iq/RemoteScanner.groovy
M src/main/java/org/sonatype/nexus/ci/util/ProxyUtil.groovy
M 
src/main/java/org/sonatype/nexus/ci/util/RepositoryManagerClientUtil.groovy
M src/test/java/org/sonatype/nexus/ci/util/ProxyUtilTest.groovy

  Log Message:
  ---
  INT-6770 Adjusting no proxy hosts validation (#200)

* INT-6770 Adjusting no proxy hosts validation


-- 
You received this message because you are subscribed to the Google Groups 
"Jenkins Commits" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to jenkinsci-commits+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/jenkinsci-commits/jenkinsci/nexus-platform-plugin/push/refs/heads/main/a4c2da-ff9788%40github.com.


[jenkinsci/nexus-platform-plugin] 188dc3: Cleaning code

2022-05-04 Thread 'Hector Danilo Hurtado Olaya' via Jenkins Commits
  Branch: refs/heads/INT-6770-adjust-no-proxy-hosts-validation
  Home:   https://github.com/jenkinsci/nexus-platform-plugin
  Commit: 188dc36db5d535e5a8ee07fd52a5ada4462a3ebe
  
https://github.com/jenkinsci/nexus-platform-plugin/commit/188dc36db5d535e5a8ee07fd52a5ada4462a3ebe
  Author: Hector Hurtado 
  Date:   2022-05-04 (Wed, 04 May 2022)

  Changed paths:
M src/main/java/org/sonatype/nexus/ci/util/ProxyUtil.groovy

  Log Message:
  ---
  Cleaning code


-- 
You received this message because you are subscribed to the Google Groups 
"Jenkins Commits" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to jenkinsci-commits+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/jenkinsci-commits/jenkinsci/nexus-platform-plugin/push/refs/heads/INT-6770-adjust-no-proxy-hosts-validation/15b67e-188dc3%40github.com.


Re: stretch with bullseye kernel?

2022-05-04 Thread Richard Hector

On 4/05/22 18:57, Tixy wrote:

On Wed, 2022-05-04 at 00:44 +0300, IL Ka wrote:

Linux kernel is backward compatible. Linus calls it "we do not break
userspace".
That means _old_  applications should work on new kernel


There's also the issue of what config options the kernel is built with.
I'm sure there's been at least one time in the past where for a new
Debian release they've had to enable a kernel feature that the new
systemd (or udev?) wanted. But again, a case like that would stop a new
Debian working on and old kernel, not the other way around as the OP is
intending. I don't expect the Debian kernel maintainers would _remove_
kernel config options needed in a prior release.



Thanks all - I thought it would probably be safe, and indeed everything 
seems to be working :-)


Cheers,
Richard



[jenkinsci/nexus-platform-plugin] 15b67e: Cleaning code

2022-05-03 Thread 'Hector Danilo Hurtado Olaya' via Jenkins Commits
  Branch: refs/heads/INT-6770-adjust-no-proxy-hosts-validation
  Home:   https://github.com/jenkinsci/nexus-platform-plugin
  Commit: 15b67ec71cede547ffbd5e928fd75f9ee86a4b48
  
https://github.com/jenkinsci/nexus-platform-plugin/commit/15b67ec71cede547ffbd5e928fd75f9ee86a4b48
  Author: Hector Hurtado 
  Date:   2022-05-03 (Tue, 03 May 2022)

  Changed paths:
M src/main/java/org/sonatype/nexus/ci/util/ProxyUtil.groovy

  Log Message:
  ---
  Cleaning code


-- 
You received this message because you are subscribed to the Google Groups 
"Jenkins Commits" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to jenkinsci-commits+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/jenkinsci-commits/jenkinsci/nexus-platform-plugin/push/refs/heads/INT-6770-adjust-no-proxy-hosts-validation/ea4b43-15b67e%40github.com.


stretch with bullseye kernel?

2022-05-03 Thread Richard Hector

Hi all,

For various reasons, I have some stretch LXC containers on a buster 
host that I now need to upgrade. That will mean they end up running on 
bullseye's 5.10 kernel.


Is that likely to be a problem?

If so, I guess I can leave the host on buster's kernel for the time 
being, but that's obviously not ideal.


Hopefully the stretch containers can/will be either upgraded or 
dispensed with soon ...


Cheers,
Richard



[jenkinsci/nexus-platform-plugin] ea4b43: INT-6770 Adjusting no proxy hosts validation

2022-05-03 Thread 'Hector Danilo Hurtado Olaya' via Jenkins Commits
  Branch: refs/heads/INT-6770-adjust-no-proxy-hosts-validation
  Home:   https://github.com/jenkinsci/nexus-platform-plugin
  Commit: ea4b435de94fd150c31076e6129acd26bd1de184
  
https://github.com/jenkinsci/nexus-platform-plugin/commit/ea4b435de94fd150c31076e6129acd26bd1de184
  Author: Hector Hurtado 
  Date:   2022-05-03 (Tue, 03 May 2022)

  Changed paths:
M pom.xml
M src/main/java/org/sonatype/nexus/ci/iq/IqClientFactory.groovy
M src/main/java/org/sonatype/nexus/ci/iq/RemoteScanner.groovy
M src/main/java/org/sonatype/nexus/ci/util/ProxyUtil.groovy
M 
src/main/java/org/sonatype/nexus/ci/util/RepositoryManagerClientUtil.groovy
M src/test/java/org/sonatype/nexus/ci/util/ProxyUtilTest.groovy

  Log Message:
  ---
  INT-6770 Adjusting no proxy hosts validation


-- 
You received this message because you are subscribed to the Google Groups 
"Jenkins Commits" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to jenkinsci-commits+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/jenkinsci-commits/jenkinsci/nexus-platform-plugin/push/refs/heads/INT-6770-adjust-no-proxy-hosts-validation/00-ea4b43%40github.com.


Re: [Bacula-users] About bacula-dir.conmsg

2022-05-02 Thread Hector Barrera
Thanks Bill.

Hector B.

On Mon, May 2, 2022, 10:35 AM Bill Arlofski via Bacula-users <
bacula-users@lists.sourceforge.net> wrote:

> On 5/2/22 11:08, Hector Barrera wrote:
> > Quick question, where is this file located at?
> >
> > Please advise.
> >
> > Hector B.
>
> It is in the directory configured in the Director's bacula-dir.conf file
> as the "Working Directory"  and this is different
> for different distributions depending on what the package maintainer
> decides.
>
> Hope this helps.
>
>
> Best regards,
> Bill
>
> --
> Bill Arlofski
> w...@protonmail.com
> ___
> Bacula-users mailing list
> Bacula-users@lists.sourceforge.net
> https://lists.sourceforge.net/lists/listinfo/bacula-users
>
___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] About bacula-dir.conmsg

2022-05-02 Thread Hector Barrera
Quick question, where is this file located at?

Please advise.

Hector B.


On Mon, May 2, 2022 at 7:28 AM Guilherme Santos 
wrote:

> Hey guys, what's up?
>
> Could someone tell me more about bacula-dir.conmsg? Mine is about
> 21G... Why does this archive have this size?
> Thanks!!
> ___
> Bacula-users mailing list
> Bacula-users@lists.sourceforge.net
> https://lists.sourceforge.net/lists/listinfo/bacula-users
>
___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


[PATCH] iommu: dart: Add missing module owner to ops structure

2022-05-02 Thread Hector Martin
This is required to make loading this as a module work.

Signed-off-by: Hector Martin 
---
 drivers/iommu/apple-dart.c | 1 +
 1 file changed, 1 insertion(+)

diff --git a/drivers/iommu/apple-dart.c b/drivers/iommu/apple-dart.c
index decafb07ad08..a10a73282014 100644
--- a/drivers/iommu/apple-dart.c
+++ b/drivers/iommu/apple-dart.c
@@ -773,6 +773,7 @@ static const struct iommu_ops apple_dart_iommu_ops = {
.get_resv_regions = apple_dart_get_resv_regions,
.put_resv_regions = generic_iommu_put_resv_regions,
.pgsize_bitmap = -1UL, /* Restricted during dart probe */
+   .owner = THIS_MODULE,
.default_domain_ops = &(const struct iommu_domain_ops) {
.attach_dev = apple_dart_attach_dev,
.detach_dev = apple_dart_detach_dev,
-- 
2.35.1

___
iommu mailing list
iommu@lists.linux-foundation.org
https://lists.linuxfoundation.org/mailman/listinfo/iommu


[jenkinsci/nexus-platform-plugin]

2022-04-29 Thread 'Hector Danilo Hurtado Olaya' via Jenkins Commits
  Branch: refs/heads/bump-innersource-dependencies-c1309a
  Home:   https://github.com/jenkinsci/nexus-platform-plugin

-- 
You received this message because you are subscribed to the Google Groups 
"Jenkins Commits" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to jenkinsci-commits+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/jenkinsci-commits/jenkinsci/nexus-platform-plugin/push/refs/heads/bump-innersource-dependencies-c1309a/687d3f-00%40github.com.


[jira] [Updated] (HADOOP-18167) Add metrics to track delegation token secret manager operations

2022-04-26 Thread Hector Sandoval Chaverri (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-18167?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hector Sandoval Chaverri updated HADOOP-18167:
--
Attachment: HADOOP-18167-branch-2.10-4.patch

> Add metrics to track delegation token secret manager operations
> ---
>
> Key: HADOOP-18167
> URL: https://issues.apache.org/jira/browse/HADOOP-18167
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Hector Sandoval Chaverri
>Priority: Major
>  Labels: pull-request-available
> Attachments: HADOOP-18167-branch-2.10-2.patch, 
> HADOOP-18167-branch-2.10-3.patch, HADOOP-18167-branch-2.10-4.patch, 
> HADOOP-18167-branch-2.10.patch, HADOOP-18167-branch-3.3.patch
>
>  Time Spent: 5h 50m
>  Remaining Estimate: 0h
>
> New metrics to track operations that store, update and remove delegation 
> tokens in implementations of AbstractDelegationTokenSecretManager. This will 
> help evaluate the impact of using different secret managers and add 
> optimizations.



--
This message was sent by Atlassian Jira
(v8.20.7#820007)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-18167) Add metrics to track delegation token secret manager operations

2022-04-26 Thread Hector Sandoval Chaverri (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-18167?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hector Sandoval Chaverri updated HADOOP-18167:
--
Attachment: HADOOP-18167-branch-3.3.patch

> Add metrics to track delegation token secret manager operations
> ---
>
> Key: HADOOP-18167
> URL: https://issues.apache.org/jira/browse/HADOOP-18167
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Hector Sandoval Chaverri
>Priority: Major
>  Labels: pull-request-available
> Attachments: HADOOP-18167-branch-2.10-2.patch, 
> HADOOP-18167-branch-2.10-3.patch, HADOOP-18167-branch-2.10.patch, 
> HADOOP-18167-branch-3.3.patch
>
>  Time Spent: 5h 50m
>  Remaining Estimate: 0h
>
> New metrics to track operations that store, update and remove delegation 
> tokens in implementations of AbstractDelegationTokenSecretManager. This will 
> help evaluate the impact of using different secret managers and add 
> optimizations.



--
This message was sent by Atlassian Jira
(v8.20.7#820007)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



Re: [DNG] trouble with rdiff-backup

2022-04-25 Thread Hector Gonzalez Jaime via Dng


On 4/24/22 12:17, Hendrik Boom wrote:

Suddenly it cannot write on a file system.  Presumably the backup drive?  The 
one it has already filled with 215831712 1K blocks and has another 1608278664 
available?

Then more complaints about banned unicode characters, and then another similar 
backtrace ending with

OSError: [Errno 30] Read-only file system

Is anyone else having problems like these?  Is rdiff-backup busted?  Or is my 
new backup drive or my USB interface busted?


This might mean your system detected a problem with your backup disk's 
filesystem. Have you tried to remount it?  Check your kernel log; if there is 
such a problem, the kernel will have remounted the disk read-only.  You can 
verify with the mount command and no arguments.
If that is the case, you should unmount the disk and run fsck on it before 
trying to mount the disk again.
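
Something along these lines (replace the device and mount point with yours):

  dmesg | tail              # look for I/O errors or a remount-read-only message
  mount | grep /mnt/backup  # "ro" instead of "rw" confirms it
  umount /mnt/backup
  fsck /dev/sdX1            # use the real backup device here
  mount /mnt/backup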

--
Hector Gonzalez
ca...@genac.org

___
Dng mailing list
Dng@lists.dyne.org
https://mailinglists.dyne.org/cgi-bin/mailman/listinfo/dng


Re: doveadm pw usage

2022-04-25 Thread Richard Hector

On 24/04/22 22:45, ミユナ (alice) wrote:

ok the help says:

pw   [-l] [-p plaintext]

i just thought it specifies the text file.

thanks for clarifying it.



Bernardo Reino wrote:
The argument to "-p" is not a file containing the password, but the 
password itself!



The downside of putting the password on the command line is that it will 
(briefly) be visible in the output of 'ps':


richard   9449  0.0  0.0   5040  3616 pts/4R+   19:27   0:00 
/usr/bin/doveconf -f service=doveadm -c /etc/dovecot/dovecot.conf -m 
doveadm -e /usr/bin/doveadm pw -p asdf
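
If you leave off -p, doveadm pw prompts for the password instead, so nothing 
shows up in ps. Something like this (the scheme is just an example):

  doveadm pw -s SHA512-CRYPT
  (it asks for the password twice and prints only the resulting hash)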


Cheers,
Richard


Re: how to setup IMAPs with letsencrypt

2022-04-25 Thread Richard Hector

On 24/04/22 13:14, ミユナ (alice) wrote:



Richard Hector wrote:

otherwise you'll have to use DNS challenge method
to support multiple hostnames on the same certificate.


Um, no I didn't. I replied to that. Please check your attributions :-)

Cheers,
Richard



Re: how to setup IMAPs with letsencrypt

2022-04-23 Thread Richard Hector

On 22/04/22 11:57, Joseph Tam wrote:

Keep in mind the subject name (CN or SAN AltNames) of your certificate
must match your IMAP server name e.g. if your certificate is
made for "www.mydomain.com", you'll have to configure your IMAP
clients to also use "www.mydomain.com" as the IMAP server name.

This typically means the web and IMAP server must reside on the
same server, otherwise you'll have to use DNS challenge method
to support multiple hostnames on the same certificate.


_A_ web server has to be there. It doesn't have to serve anything else 
useful. My mail server has a web server that only serves the LE 
challenge. Well, actually it's a proxy server that serves several other 
domains too, but there's nothing else served on that domain (at the moment).
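
As a sketch (nginx here, with made-up names - all the web server has to do is 
answer the ACME challenge for that hostname):

  server {
      listen 80;
      server_name mail.example.org;

      location /.well-known/acme-challenge/ {
          root /var/www/acme;
      }

      location / {
          return 404;
      }
  }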


Cheers,
Richard


Re: [ansible-project] Reading in extra files, as or into dicts?

2022-04-16 Thread Richard Hector

On 16/04/22 22:13, Richard Hector wrote:

Hi all,

I have created a directory 'users' alongside my inventory. It has a 
directory 'user_vars', intended to be used like host_vars, but for 
users, obviously.


In there, I have files like this:

=
---
name: richard
gecos: 'Richard Hector,,,'
shell: '/bin/bash'
ssh_keys:
   - richard@foo
   - richard@bar
=

Then in host_vars/all, I have this kind of thing:

=
---
users:
   - richard
admins:
   - richard
ansible_users:
   - richard
=

I also have users/public_keys, which has a file for each of 
'richard@foo' etc, containing one key.


Where I'm stuck is reading in the user_vars file(s).

I want to get rid of what I used to have:

=
- name: users
   user:
     name: '{{ item.name }}'
     comment: '{{ item.gecos }}'
     shell: '{{ item.shell }}'
     createhome: yes
     state: present
     groups: '{{ item.groups }}'
     append: yes
   with_items:
   - { name: 'richard', gecos: 'Richard Hector,,,', shell: 
'/bin/bash', groups: [ 'sudo', 'adm' ] }

   tags:
     - users
==

since I want to separate data from the rest of my config.

So I'd like to either read all the user_vars files into a single 
dictionary before I run that loop, or read each file in its own 
iteration of the loop - or something better if that's the answer.


I thought about using set_fact in a loop, but that would give me 
separate facts/variables for each user, making it harder(?) to index 
them (but maybe by text templating the variable name?)


I also thought about doing a lookup in every line of the user loop 
above, but that seems wasteful, and I'm not sure how I'd do it anyway.


I've got this, but it looks horrible:

==
- name: set up user dicts
  set_fact:
"user_{{ item }}": "{{ lookup('file', inventory_dir + 
'/users/user_vars/' + item) |from_yaml }}"

  with_items: "{{ users }}"
  tags:
- users

- name: users
  user:
name: "{{ lookup('vars', 'user_' + item).name }}"
comment: "{{ lookup('vars', 'user_' + item).gecos }}"
shell: "{{ lookup('vars', 'user_' + item).shell }}"
createhome: yes
state: present
append: yes
  with_items: "{{ users }}"
  tags:
- users

- name: admin groups
  user:
name: "{{ item }}"
append: yes
groups:
  - sudo
  - adm
  when: item in admins
  with_items: "{{ users }}"
  tags:
- users

- name: ansible group
  user:
name: "{{ item }}"
append: yes
groups:
  - sudo
  - adm
  when: item in ansible_users
  with_items: "{{ users }}"
  tags:
- users
=

I'm still to do the ssh key stuff - that's going to be pretty ugly too, 
I think.


Are there ways to make this cleaner?
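
For example, would something along these lines be considered cleaner? (The 
user_defs name is just a placeholder, and I haven't tested this yet.)

==========
- name: load user definitions into one dict
  set_fact:
    user_defs: "{{ user_defs | default({}) | combine({item: lookup('file', inventory_dir + '/users/user_vars/' + item) | from_yaml}) }}"
  with_items: "{{ users }}"
  tags:
    - users

- name: users
  user:
    name: "{{ user_defs[item].name }}"
    comment: "{{ user_defs[item].gecos }}"
    shell: "{{ user_defs[item].shell }}"
    createhome: yes
    state: present
    append: yes
  with_items: "{{ users }}"
  tags:
    - users
==========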

Cheers,
Richard

--
You received this message because you are subscribed to the Google Groups "Ansible 
Project" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to ansible-project+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/ansible-project/af4b8c5e-3e6c-b937-48fd-b74ea32d66d0%40walnut.gen.nz.


[ansible-project] Reading in extra files, as or into dicts?

2022-04-16 Thread Richard Hector

Hi all,

I have created a directory 'users' alongside my inventory. It has a 
directory 'user_vars', intended to be used like host_vars, but for 
users, obviously.


In there, I have files like this:

=
---
name: richard
gecos: 'Richard Hector,,,'
shell: '/bin/bash'
ssh_keys:
  - richard@foo
  - richard@bar
=

Then in host_vars/all, I have this kind of thing:

=
---
users:
  - richard
admins:
  - richard
ansible_users:
  - richard
=

I also have users/public_keys, which has a file for each of 
'richard@foo' etc, containing one key.


Where I'm stuck is reading in the user_vars file(s).

I want to get rid of what I used to have:

=
- name: users
  user:
name: '{{ item.name }}'
comment: '{{ item.gecos }}'
shell: '{{ item.shell }}'
createhome: yes
state: present
groups: '{{ item.groups }}'
append: yes
  with_items:
  - { name: 'richard', gecos: 'Richard Hector,,,', shell: 
'/bin/bash', groups: [ 'sudo', 'adm' ] }

  tags:
- users
==

since I want to separate data from the rest of my config.

So I'd like to either read all the user_vars files into a single 
dictionary before I run that loop, or read each file in its own 
iteration of the loop - or something better if that's the answer.


I thought about using set_fact in a loop, but that would give me 
separate facts/variables for each user, making it harder(?) to index 
them (but maybe by text templating the variable name?)


I also thought about doing a lookup in every line of the user loop 
above, but that seems wasteful, and I'm not sure how I'd do it anyway.


Any suggestions?

Thanks,
Richard

--
You received this message because you are subscribed to the Google Groups "Ansible 
Project" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to ansible-project+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/ansible-project/bb093b37-e90b-115b-593c-d535b5945f7b%40walnut.gen.nz.


Re: [ansible-project] Keeping inventory and other data separate from code

2022-04-16 Thread Richard Hector

On 2/04/22 16:07, Nico Kadel-Garcia wrote:

On Fri, Apr 1, 2022 at 10:27 PM Richard Hector  wrote:


Hi all,

Currently my inventory is stored in the same git repo as my play(book)s,
roles etc, which I don't like.


Consider using git submodules if you want a unified workspace.


Hmm. I got thoroughly confused by submodules last time I tried them. And 
didn't Linus say he regretted having devised them in the first place?


Is there an easy way to have a non-unified workspace, so that I can use 
separate repos more easily?


Cheers,
Richard


What are common ways to avoid this? Perhaps keep inventory in a subdir
which is .gitignored, and make that a separate repo?

I also want to keep data which is not strictly inventory - eg lists of
users, ssh keys etc which are specific to my site/business, but shared
across many hosts and groups. I think I need to use lookups to access
this. Perhaps that data and inventory can go together in that other repo?

eg:

/home/richard/ansible/
   .git/
 .gitignore (ansible/data)
   .ansible.cfg
   /data
 .git/
 /inventory
   hosts
   /host_vars
   /group_vars
 users.yml
   /plays
   /roles

Does that seem reasonable?

That way, if (when) I have questions in the future, I can refer you all
to my ansible repo without sharing the private data :-)

Are there better ways to do this?

Cheers,
Richard

--
You received this message because you are subscribed to the Google Groups "Ansible 
Project" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to ansible-project+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/ansible-project/c0b602a7-b40e-7254-f153-c98a49dd3046%40walnut.gen.nz.




--
You received this message because you are subscribed to the Google Groups "Ansible 
Project" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to ansible-project+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/ansible-project/1119bcec-83e7-f0f6-5fa3-c40ae074af59%40walnut.gen.nz.


Re: [ansible-project] [OT?] What do you call a container container?

2022-04-16 Thread Richard Hector

Thanks - I think I'll go with 'ship', for the time being :-)

Cheers,
Richard

On 2/04/22 12:23, Nico Kadel-Garcia wrote:

"Container ship" is pretty clear. Others include:

"Tupperware".

"Mixing bowls"

"Measuringcups"

"Russian Dolls"

"Turtles All The Way Down".

"The inevitable product of people taught only recursion as a valid way
to do anything"

"Pay no attention to that server behind the layer of abstraction"

The list could go on and on, much like what you're describing.

On Fri, Apr 1, 2022 at 5:46 PM Richard Hector  wrote:


Hi all,

I have several leased VPS in which I run LXC containers.

At the moment, the group I use for those is "lxc_hosts", but that has a
few problems:

- Everything in inventory is a host, so lxc_host could just as well be a
   container as the machine it lives on.

- Separators in general are a pain - ansible doesn't like '-' in many
   places; '_' is invalid in hostnames; '.' and '/' are special in loads
   of places, and I suspect everything else is even riskier

'host' seems the most natural thing for hosting containers but is
problematic as above. 'container' would be worse, in this context.

This problem gets worse as I do more complex stuff - eg I'm looking into
using the lxc_ssh module, to manage containers on remote hosts before
direct ssh is available.

I've been wondering about 'ship' - as in a container ship contains
shipping containers - but it's not particularly intuitive, if I was eg
to share my config here to ask for assistance, or (dreaming) if I get to
the point of having employees or contractors.

Any other suggestions?

Cheers,
Richard

--
You received this message because you are subscribed to the Google Groups "Ansible 
Project" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to ansible-project+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/ansible-project/68a20941-f60a-bc5b-6843-8b9c3436b54b%40walnut.gen.nz.




--
You received this message because you are subscribed to the Google Groups "Ansible 
Project" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to ansible-project+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/ansible-project/3ae53de8-a952-1872-171e-db9c1084226b%40walnut.gen.nz.


[jira] [Updated] (HADOOP-18167) Add metrics to track delegation token secret manager operations

2022-04-15 Thread Hector Sandoval Chaverri (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-18167?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hector Sandoval Chaverri updated HADOOP-18167:
--
Attachment: HADOOP-18167-branch-2.10-3.patch

> Add metrics to track delegation token secret manager operations
> ---
>
> Key: HADOOP-18167
> URL: https://issues.apache.org/jira/browse/HADOOP-18167
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Hector Sandoval Chaverri
>Priority: Major
>  Labels: pull-request-available
> Attachments: HADOOP-18167-branch-2.10-2.patch, 
> HADOOP-18167-branch-2.10-3.patch, HADOOP-18167-branch-2.10.patch
>
>  Time Spent: 3h 10m
>  Remaining Estimate: 0h
>
> New metrics to track operations that store, update and remove delegation 
> tokens in implementations of AbstractDelegationTokenSecretManager. This will 
> help evaluate the impact of using different secret managers and add 
> optimizations.



--
This message was sent by Atlassian Jira
(v8.20.1#820001)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-18167) Add metrics to track delegation token secret manager operations

2022-04-14 Thread Hector Sandoval Chaverri (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-18167?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17522530#comment-17522530
 ] 

Hector Sandoval Chaverri commented on HADOOP-18167:
---

[~fengnanli] / [~jing9] / [~inigoiri] Would you be able to help review this, 
since Owen is out for next week? Thank you!

> Add metrics to track delegation token secret manager operations
> ---
>
> Key: HADOOP-18167
> URL: https://issues.apache.org/jira/browse/HADOOP-18167
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Hector Sandoval Chaverri
>Priority: Major
>  Labels: pull-request-available
> Attachments: HADOOP-18167-branch-2.10-2.patch, 
> HADOOP-18167-branch-2.10.patch
>
>  Time Spent: 2h 20m
>  Remaining Estimate: 0h
>
> New metrics to track operations that store, update and remove delegation 
> tokens in implementations of AbstractDelegationTokenSecretManager. This will 
> help evaluate the impact of using different secret managers and add 
> optimizations.



--
This message was sent by Atlassian Jira
(v8.20.1#820001)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



Re: [Bacula-users] Getting a LTO8 drive work with Bacula.

2022-04-13 Thread Hector Barrera
Thank you, Martin!

Your suggestion to add the debug level revealed the problem!

btape: block_util.c:316-0 === adata=0 binbuf=524200
btape: block_util.c:577-0 Zero end blk: adata=0 cleared=64 buf_len=524288
wlen=524288 binbuf=524224
btape: block_util.c:361-0 block_header: block_len=524224
btape: block_util.c:377-0 ser_block_header: adata=0 checksum=543943d7
btape: block_util.c:598-0 Enter: bool is_user_volume_size_reached(DCR*,
bool)
btape: block_util.c:625-0 *Maximum volume size 2,000,000,000 *exceeded Vol=
device="IBMLTO8" (/dev/tape/by-id/scsi-35000e111cc2d8001-nst).
Marking Volume "" as Full.
btape: block_util.c:631-0 Return from is_user_volume_size_reached=1
btape: block_util.c:632-0 Leave: bool is_user_volume_size_reached(DCR*,
bool)

For some reason, I had this directive in my bacula-sd.conf file:

*MaximumVolumeSize = 20*

Which is mostly used for disk-based backups, not for tapes!
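
For reference, a tape-only Device resource would normally omit that cap
altogether, so a volume is only marked Full at the physical end of the
tape. A rough sketch, reusing the Name and ArchiveDevice from this thread
(MediaType and the yes/no options below are assumptions):

Device {
  Name = "IBMLTO8"
  MediaType = "LTO-8"
  ArchiveDevice = "/dev/tape/by-id/scsi-35000e111cc2d8001-nst"
  AutomaticMount = yes
  AlwaysOpen = yes
  RemovableMedia = yes
  RandomAccess = no
  # no MaximumVolumeSize here - that directive is meant for disk volumes
}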

Thanks again Martin.

Problem solved.

Cheers!

Hector Barrera.




On Wed, Apr 13, 2022 at 3:41 AM Martin Simmons  wrote:

> I suggest adding the -d200 argument to btape so it prints some debugging
> info.
>
> Also, which driver are you using?  I think you need to avoid the lin_tape
> driver because it doesn't work with Bacula.
>
> __Martin
>
>
> >>>>> On Tue, 12 Apr 2022 16:46:55 -0700, Hector Barrera said:
> >
> > Hello Gentlemen,
> >
> > I wonder if any of you had a chance to make an IBM LTO8 Tape Drive work
> > with Bacula 11.0.1
> >
> > Previously we had an IBM LTO7 that worked great, but this new drive is
> not
> > working as expected, the btape test function throws out an error.
> >
> > I followed the Bacula chapter: "Testing Your Tape Drive With Bacula"
> >
> > Here's some information about our setup:
> >
> > *Tape Drive:*
> >
> > [root@bacula02 tmp]# tapeinfo -f /dev/nst0
> > Product Type: Tape Drive
> > Vendor ID: 'IBM '
> > Product ID: 'ULTRIUM-HH8 '
> > Revision: 'N9M1'
> > Attached Changer API: No
> > SerialNumber: '1097012184'
> > MinBlock: 1
> > MaxBlock: 8388608
> > SCSI ID: 1
> > SCSI LUN: 0
> > Ready: yes
> > BufferedMode: yes
> > Medium Type: 0x78
> > Density Code: 0x5c
> > *BlockSize: 0*
> > DataCompEnabled: yes
> > DataCompCapable: yes
> > DataDeCompEnabled: yes
> > CompType: 0xff
> > DeCompType: 0xff
> > BOP: yes
> > Block Position: 0
> > Partition 0 Remaining Kbytes: -1
> > Partition 0 Size in Kbytes: -1
> > ActivePartition: 0
> > EarlyWarningSize: 0
> > NumPartitions: 1
> > MaxPartitions: 3
> > Partition0: 1071
> > Partition1: 57857
> > Partition2: 0
> > Partition3: 0
> >
> > First I thought the problem might be with the block size, but as you can
> > see above, the drive is already to 0
> >
> > *Running the btape test function:*
> >
> > [root@bacula02 tmp]# btape -c /opt/bacula/etc/bacula-sd.conf
> > /dev/tape/by-id/scsi-35000e111cc2d8001-nst
> > Tape block granularity is 1024 bytes.
> > btape: butil.c:295-0 Using device:
> > "/dev/tape/by-id/scsi-35000e111cc2d8001-nst" for writing.
> > btape: btape.c:477-0 open device "IBMLTO8"
> > (/dev/tape/by-id/scsi-35000e111cc2d8001-nst): OK
> > *test
> >
> > === Write, rewind, and re-read test ===
> >
> > I'm going to write 1 records and an EOF
> > then write 1 records and an EOF, then rewind,
> > and re-read the data to verify that it is correct.
> >
> > This is an *essential* feature ...
> >
> > 12-Apr 15:59 btape JobId 0: Re-read of last block succeeded.
> > *btape: btape.c:1156-0 Error writing block to device.*
> > *
> > --
> > Attempting to WEOF a second time claims the tape is in read-only mode:
> >
> > *weof
> > btape: Fatal Error at tape_dev.c:953 because:
> > tape_dev.c:952 Attempt to WEOF on non-appendable Volume
> > 12-Apr 16:25 btape: Fatal Error at tape_dev.c:953 because:
> > tape_dev.c:952 Attempt to WEOF on non-appendable Volume
> > btape: btape.c:607-0 Bad status from weof. *ERR=tape_dev.c:952 Attempt to
> > WEOF on non-appendable Volume*
> >
> >
> > *Our bacula-sd.conf (LTO8 in bold):*
> >
> > Storage {
> >   Name = "bacula02-sd"
> >   SdAddress = **.**.**.**
> >   WorkingDirectory = "/opt/bacula/working"
> >   PidDirectory = "/opt/bacula/working"
> >   PluginDirectory = "/opt/bacula/plugins"
> >   MaximumConcurrentJobs = 20
> > }
> > Device {
> >   Name = "F

Re: [Bacula-users] Getting a LTO8 drive work with Bacula.

2022-04-13 Thread Hector Barrera
Thanks, Gary for replying back.

I used /dev/nst0 as a shortcut when I typed the command; in the
bacula-sd.conf file I use /dev/tape/by-id/scsi-35000e111cc2d8001-nst just
in case the tape drive name gets changed by CentOS when the server gets
rebooted (it tends to happen on rare occasions). Using /dev/tape/by-id/
ensures Bacula always finds the tape device, no matter what.
Nonetheless,.running the tapeinfo command but this time using the
/dev/tape/by-id outputs the same information:

[root@bacula02 ~]# tapeinfo -f /dev/tape/by-id/scsi-35000e111cc2d8001-nst
Product Type: Tape Drive
Vendor ID: 'IBM '
Product ID: 'ULTRIUM-HH8 '
Revision: 'N9M1'
Attached Changer API: No
SerialNumber: '1097012184'
MinBlock: 1
MaxBlock: 8388608
SCSI ID: 1
SCSI LUN: 0
Ready: yes
BufferedMode: yes
Medium Type: 0x78
Density Code: 0x5c
BlockSize: 0
DataCompEnabled: yes
DataCompCapable: yes
DataDeCompEnabled: yes
CompType: 0xff
DeCompType: 0xff
BOP: yes
Block Position: 0
Partition 0 Remaining Kbytes: -1
Partition 0 Size in Kbytes: -1
ActivePartition: 0
EarlyWarningSize: 0
NumPartitions: 1
MaxPartitions: 3
Partition0: 1071
Partition1: 57857
Partition2: 0
Partition3: 0
--

This problem is a weird one, because like I said in my previous email, I'm
able to tar files directly to the tape drive, i.e. tar -cvf /dev/nst0  and
I'm able to extract them back.

If you look at the btape test output, the command is able to write the
first file to the drive and read the last block back without a problem,
but when it tries to write a second time it fails:

12-Apr 15:59 btape JobId 0: Re-read of last block succeeded. <---
*btape: btape.c:1156-0 Error writing block to device.*

It's as if the drive is not writing the correct EOF mark at the end of the
file transfer.

Also, there's another command that outputs the drive status:

[root@bacula02 ~]# mt -f /dev/tape/by-id/scsi-35000e111cc2d8001-nst status
SCSI 2 tape drive:
File number=0, block number=0, partition=0.
Tape block size 0 bytes. Density code 0x5c (no translation).
Soft error count since last status=0
General status bits on (4101):
 BOT ONLINE IM_REP_EN

Thanks again.

Hector B.


On Tue, Apr 12, 2022 at 7:20 PM Gary R. Schmidt 
wrote:

> On 13/04/2022 09:46, Hector Barrera wrote:
> > Hello Gentlemen,
> >
> > I wonder if any of you had a chance to make an IBM LTO8 Tape Drive work
> > with Bacula 11.0.1
> >
> > Previously we had an IBM LTO7 that worked great, but this new drive is
> > not working as expected, the btape test function throws out an error.
> >
> > I followed the Bacula chapter: "Testing Your Tape Drive With Bacula"
> >
> > Here's some information about our setup:
> >
> > *_Tape Drive:_*
> >
> > [root@bacula02 tmp]# tapeinfo -f /dev/nst0
>
> >ArchiveDevice = "/dev/tape/by-id/scsi-35000e111cc2d8001-nst"
>
> You are referring to things using different names.
>
> What happens if you change them to be consistent?
>
> Cheers,
> GaryB-)
>
>
> ___
> Bacula-users mailing list
> Bacula-users@lists.sourceforge.net
> https://lists.sourceforge.net/lists/listinfo/bacula-users
>
___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


[Bacula-users] Getting a LTO8 drive work with Bacula.

2022-04-12 Thread Hector Barrera
"/opt/bacula/scripts/mtx-changer %c %o %S %a %d"
}
---

I should mention that running this actually works:

mt -f /dev/nst0 rewind
tar -cvf /dev/nst0 .
mt -f /dev/nst0 rewind
tar -xvf /dev/nst0

Running the tar directly to the tape drive works.
I was able to tar several files and extract them without a problem.
So the problem seems to be with my bacula configuration.

Please help, I'm banging my head against the wall trying to make our new
LTO8 drive work with Bacula.

Cheers!
-- 
*Hector Barrera* *| **IMN CREATIVE*
DIRECTOR OF TECHNOLOGY
622 West Colorado Street
Glendale, California 91204
O: 818 858 0408
M: 562.413.5151
W: *imncreative.com <http://imncreative.com/>*
___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [PATCH] MAINTAINERS: merge DART into ARM/APPLE MACHINE

2022-04-12 Thread Hector Martin
On 13/04/2022 01.12, Sven Peter wrote:
> It's the same people anyway.
> 
> Signed-off-by: Sven Peter 
> ---
>  MAINTAINERS | 10 ++
>  1 file changed, 2 insertions(+), 8 deletions(-)
> 
> diff --git a/MAINTAINERS b/MAINTAINERS
> index fd768d43e048..5af879de869c 100644
> --- a/MAINTAINERS
> +++ b/MAINTAINERS
> @@ -1375,14 +1375,6 @@ L: linux-in...@vger.kernel.org
>  S:   Odd fixes
>  F:   drivers/input/mouse/bcm5974.c
>  
> -APPLE DART IOMMU DRIVER
> -M:   Sven Peter 
> -R:   Alyssa Rosenzweig 
> -L:   iommu@lists.linux-foundation.org
> -S:   Maintained
> -F:   Documentation/devicetree/bindings/iommu/apple,dart.yaml
> -F:   drivers/iommu/apple-dart.c
> -
>  APPLE PCIE CONTROLLER DRIVER
>  M:   Alyssa Rosenzweig 
>  M:   Marc Zyngier 
> @@ -1836,6 +1828,7 @@ F:  Documentation/devicetree/bindings/arm/apple/*
>  F:   Documentation/devicetree/bindings/clock/apple,nco.yaml
>  F:   Documentation/devicetree/bindings/i2c/apple,i2c.yaml
>  F:   Documentation/devicetree/bindings/interrupt-controller/apple,*
> +F:   Documentation/devicetree/bindings/iommu/apple,dart.yaml
>  F:   Documentation/devicetree/bindings/mailbox/apple,mailbox.yaml
>  F:   Documentation/devicetree/bindings/pci/apple,pcie.yaml
>  F:   Documentation/devicetree/bindings/pinctrl/apple,pinctrl.yaml
> @@ -1845,6 +1838,7 @@ F:  arch/arm64/boot/dts/apple/
>  F:   drivers/clk/clk-apple-nco.c
>  F:   drivers/i2c/busses/i2c-pasemi-core.c
>  F:   drivers/i2c/busses/i2c-pasemi-platform.c
> +F:   drivers/iommu/apple-dart.c
>  F:   drivers/irqchip/irq-apple-aic.c
>  F:   drivers/mailbox/apple-mailbox.c
>  F:   drivers/pinctrl/pinctrl-apple-gpio.c

Acked-by: Hector Martin 

-- 
Hector Martin (mar...@marcan.st)
Public Key: https://mrcn.st/pub
___
iommu mailing list
iommu@lists.linux-foundation.org
https://lists.linuxfoundation.org/mailman/listinfo/iommu


Re: Libreoffice: printing "dirties" the file being printed

2022-04-11 Thread Richard Hector

On 9/04/22 00:17, gene heskett wrote:

IMO it's up to the pdf
interpreter to make the pdf it's handed fit the printer. Period, IMO it is
not open for discussion.


"Make it fit" might include scaling. You don't necessarily want that 
happening automatically - what if you're printing something like a 
circuit board design (not likely from LO, I admit), to be transferred 
directly onto the pcb?


Cheers,
Richard



Bug#1008323: bpftrace: fix FTBFS

2022-04-11 Thread Hector Oron
Hello,

  According to https://github.com/iovisor/bpftrace/issues/2068

  I applied the following patch to make the package build:

--- bpftrace-0.14.1.orig/src/btf.cpp
+++ bpftrace-0.14.1/src/btf.cpp
@@ -225,7 +225,7 @@ std::string BTF::c_def(const std::unorde
   char err_buf[256];
   int err;

-  dump = btf_dump__new(btf, nullptr, , dump_printf);
+  dump = btf_dump__new_deprecated(btf, nullptr, , dump_printf);
   err = libbpf_get_error(dump);
   if (err)
   {
@@ -496,7 +496,7 @@ std::unique_ptr BTF::get_a
   char err_buf[256];
   int err;

-  dump = btf_dump__new(btf, nullptr, , dump_printf);
+  dump = btf_dump__new_deprecated(btf, nullptr, , dump_printf);
   err = libbpf_get_error(dump);
   if (err)
   {
@@ -554,7 +554,7 @@ std::map BTF::get_all_struc
   char err_buf[256];
   int err;

-  dump = btf_dump__new(btf, nullptr, , dump_printf);
+  dump = btf_dump__new_deprecated(btf, nullptr, , dump_printf);
   err = libbpf_get_error(dump);
   if (err)
   {
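
For context, libbpf >= 0.6 changed the btf_dump__new() signature: the
printf callback now comes before an optional opts pointer, and the old
argument order used above (btf, btf_ext, opts, printf_fn) is only kept
as btf_dump__new_deprecated(). A rough sketch of the new-style call
(not part of the patch; double-check it against the libbpf headers in
the archive):

/* sketch only - assumes libbpf >= 0.6 */
#include <stdio.h>
#include <stdarg.h>
#include <bpf/btf.h>

static void my_printf(void *ctx, const char *fmt, va_list args)
{
  (void)ctx;
  vprintf(fmt, args);
}

static struct btf_dump *make_dump(const struct btf *btf)
{
  /* argument order is now (btf, printf_fn, ctx, opts) */
  return btf_dump__new(btf, my_printf, NULL, NULL);
}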


-- 
 Héctor Orón  -.. . -... .. .- -.   -.. . ...- . .-.. --- .--. . .-.



Bug#1009109: fzf: FTBFS missing build dependency on golang-golang-x-term-dev

2022-04-07 Thread Hector Oron
Source: fzf
Version: 0.29.0-1
Severity: important

Hello,

fzf package fails to build from source as it is missing a dependency
on golang-golang-x-term-dev

diff -Nru fzf-0.29.0/debian/control fzf-0.29.0/debian/control
--- fzf-0.29.0/debian/control    2022-01-08 11:13:51.0 -0500
+++ fzf-0.29.0/debian/control    2022-04-07 06:19:10.0 -0400
@@ -11,13 +11,14 @@
 Build-Depends: debhelper-compat (= 13),
dh-exec,
dh-golang,
-   golang-github-rivo-uniseg-dev,
+   golang-github-rivo-uniseg-dev,
golang-github-mattn-go-isatty-dev,
golang-github-mattn-go-runewidth-dev,
golang-github-mattn-go-shellwords-dev,
golang-github-saracen-walker-dev,
golang-go (>= 1.5),
-   golang-golang-x-crypto-dev
+   golang-golang-x-crypto-dev,
+   golang-golang-x-term-dev,
 Rules-Requires-Root: no
 XS-Go-Import-Path: github.com/junegunn/fzf

-- 
 Héctor Orón  -.. . -... .. .- -.   -.. . ...- . .-.. --- .--. . .-.



Re: libvirt tools and keyfiles

2022-04-02 Thread Richard Hector




On 2022-04-01, Celejar  wrote:



What is going on here? Since I'm specifying a keyfile on the command
line, and it's being used - otherwise I wouldn't even get the list of
VMs - why am I being prompted for the password?

Celejar


Apologies for replying to the wrong message - I've deleted the original.

Are you really getting prompted for the password for the host system? 
You're not talking about the login prompt on the console of the VM?


Also, by adding my normal user on the host system to the libvirt group, 
it's not necessary to ssh as root - I can just use my normal user. In 
fact I don't allow root logins, so I can't directly test your commands.
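
For what it's worth, the keyfile can go in the connection URI as well; a
minimal example with a made-up hostname (the keyfile parameter is the one
documented for libvirt's ssh transport, so double-check it on your
version):

  virsh -c 'qemu+ssh://richard@vmhost.example.com/system?keyfile=/home/richard/.ssh/id_ed25519' \
    list --all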


Oh, and I assume the doubled '-c' is a typo :-)

Cheers,
Richard



[ansible-project] Keeping inventory and other data separate from code

2022-04-01 Thread Richard Hector

Hi all,

Currently my inventory is stored in the same git repo as my play(book)s, 
roles etc, which I don't like.


What are common ways to avoid this? Perhaps keep inventory in a subdir 
which is .gitignored, and make that a separate repo?


I also want to keep data which is not strictly inventory - eg lists of 
users, ssh keys etc which are specific to my site/business, but shared 
across many hosts and groups. I think I need to use lookups to access 
this. Perhaps that data and inventory can go together in that other repo?


eg:

/home/richard/ansible/
  .git/
  .gitignore (ansible/data)
  .ansible.cfg
  /data
    .git/
    /inventory
      hosts
      /host_vars
      /group_vars
        users.yml
  /plays
  /roles

Does that seem reasonable?
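
A rough sketch of the lookup idea, assuming ansible.cfg points inventory
at data/inventory (so group_vars/users.yml gets picked up automatically)
and that other shared files also live under data/; the ssh_keys.yml path
and variable names below are made up:

# plays/site.yml - sketch only
- hosts: all
  vars:
    # read a shared, non-inventory file out of the gitignored data repo
    site_ssh_keys: "{{ lookup('file', playbook_dir + '/../data/ssh_keys.yml') | from_yaml }}"
  tasks:
    - name: show the shared data
      ansible.builtin.debug:
        var: site_ssh_keys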

That way, if (when) I have questions in the future, I can refer you all 
to my ansible repo without sharing the private data :-)


Are there better ways to do this?

Cheers,
Richard

--
You received this message because you are subscribed to the Google Groups "Ansible 
Project" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to ansible-project+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/ansible-project/c0b602a7-b40e-7254-f153-c98a49dd3046%40walnut.gen.nz.


[ansible-project] [OT?] What do you call a container container?

2022-04-01 Thread Richard Hector

Hi all,

I have several leased VPS in which I run LXC containers.

At the moment, the group I use for those is "lxc_hosts", but that has a 
few problems:


- Everything in inventory is a host, so lxc_host could just as well be a
  container as the machine it lives on.

- Separators in general are a pain - ansible doesn't like '-' in many
  places; '_' is invalid in hostnames; '.' and '/' are special in loads
  of places, and I suspect everything else is even riskier

'host' seems the most natural thing for hosting containers but is 
problematic as above. 'container' would be worse, in this context.


This problem gets worse as I do more complex stuff - eg I'm looking into 
using the lxc_ssh module, to manage containers on remote hosts before 
direct ssh is available.


I've been wondering about 'ship' - as in a container ship contains 
shipping containers - but it's not particularly intuitive, if I was eg 
to share my config here to ask for assistance, or (dreaming) if I get to 
the point of having employees or contractors.


Any other suggestions?

Cheers,
Richard

--
You received this message because you are subscribed to the Google Groups "Ansible 
Project" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to ansible-project+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/ansible-project/68a20941-f60a-bc5b-6843-8b9c3436b54b%40walnut.gen.nz.


[jira] [Updated] (HADOOP-18167) Add metrics to track delegation token secret manager operations

2022-03-25 Thread Hector Sandoval Chaverri (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-18167?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hector Sandoval Chaverri updated HADOOP-18167:
--
Attachment: HADOOP-18167-branch-2.10-2.patch

> Add metrics to track delegation token secret manager operations
> ---
>
> Key: HADOOP-18167
> URL: https://issues.apache.org/jira/browse/HADOOP-18167
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Hector Sandoval Chaverri
>Priority: Major
>  Labels: pull-request-available
> Attachments: HADOOP-18167-branch-2.10-2.patch, 
> HADOOP-18167-branch-2.10.patch
>
>  Time Spent: 1h 20m
>  Remaining Estimate: 0h
>
> New metrics to track operations that store, update and remove delegation 
> tokens in implementations of AbstractDelegationTokenSecretManager. This will 
> help evaluate the impact of using different secret managers and add 
> optimizations.



--
This message was sent by Atlassian Jira
(v8.20.1#820001)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-18167) Add metrics to track delegation token secret manager operations

2022-03-22 Thread Hector Sandoval Chaverri (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-18167?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hector Sandoval Chaverri updated HADOOP-18167:
--
Attachment: HADOOP-18167-branch-2.10.patch
Status: Patch Available  (was: Open)

> Add metrics to track delegation token secret manager operations
> ---
>
> Key: HADOOP-18167
> URL: https://issues.apache.org/jira/browse/HADOOP-18167
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Hector Sandoval Chaverri
>Priority: Major
>  Labels: pull-request-available
> Attachments: HADOOP-18167-branch-2.10.patch
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> New metrics to track operations that store, update and remove delegation 
> tokens in implementations of AbstractDelegationTokenSecretManager. This will 
> help evaluate the impact of using different secret managers and add 
> optimizations.



--
This message was sent by Atlassian Jira
(v8.20.1#820001)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-18167) Add metrics to track delegation token secret manager operations

2022-03-22 Thread Hector Sandoval Chaverri (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-18167?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17510910#comment-17510910
 ] 

Hector Sandoval Chaverri commented on HADOOP-18167:
---

Hi [~ste...@apache.org], I saw that a few classes, such as S3AInstrumentation, 
implement IOStatisticsSource and use IOStatisticsStore to track different 
counters. Is this the approach that you think we should follow?

Could you also help explain what the consumer of the IOStatistics is?

> Add metrics to track delegation token secret manager operations
> ---
>
> Key: HADOOP-18167
> URL: https://issues.apache.org/jira/browse/HADOOP-18167
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Hector Sandoval Chaverri
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> New metrics to track operations that store, update and remove delegation 
> tokens in implementations of AbstractDelegationTokenSecretManager. This will 
> help evaluate the impact of using different secret managers and add 
> optimizations.



--
This message was sent by Atlassian Jira
(v8.20.1#820001)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



Bug#988647: crash: please update to new upstream release

2022-03-22 Thread Hector Oron
Hello,

  Since the release has now happened, could you move 7.3 to bookworm/sid
and upload the newer version 8.0.0, which is needed for Linux kernel 5.15.

Regards
-- 
 Héctor Orón  -.. . -... .. .- -.   -.. . ...- . .-.. --- .--. . .-.



[jira] [Created] (HADOOP-18167) Add metrics to track delegation token secret manager operations

2022-03-21 Thread Hector Sandoval Chaverri (Jira)
Hector Sandoval Chaverri created HADOOP-18167:
-

 Summary: Add metrics to track delegation token secret manager 
operations
 Key: HADOOP-18167
 URL: https://issues.apache.org/jira/browse/HADOOP-18167
 Project: Hadoop Common
  Issue Type: Improvement
Reporter: Hector Sandoval Chaverri


New metrics to track operations that store, update and remove delegation tokens 
in implementations of AbstractDelegationTokenSecretManager. This will help 
evaluate the impact of using different secret managers and add optimizations.



--
This message was sent by Atlassian Jira
(v8.20.1#820001)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Created] (HADOOP-18167) Add metrics to track delegation token secret manager operations

2022-03-21 Thread Hector Sandoval Chaverri (Jira)
Hector Sandoval Chaverri created HADOOP-18167:
-

 Summary: Add metrics to track delegation token secret manager 
operations
 Key: HADOOP-18167
 URL: https://issues.apache.org/jira/browse/HADOOP-18167
 Project: Hadoop Common
  Issue Type: Improvement
Reporter: Hector Sandoval Chaverri


New metrics to track operations that store, update and remove delegation tokens 
in implementations of AbstractDelegationTokenSecretManager. This will help 
evaluate the impact of using different secret managers and add optimizations.



--
This message was sent by Atlassian Jira
(v8.20.1#820001)

-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org



Re: OT EU-based Cloud Service

2022-03-18 Thread Richard Hector

On 18/03/22 21:14, Byung-Hee HWANG wrote:

https://hetzner.cloud

German company; a single VPS costs about 5€ per month.

Oh Nuremberg! Racing Circuit, fantastic!!


Um - you might be thinking of Nürburg? Home of the Nürburgring? :-)

Nuremberg has other associations in my mind, but I'm sure it's a fine 
place for a data centre.


Cheers,
Richard



Re: cups/avahi-daemon - worrying logs

2022-03-17 Thread Richard Hector

On 17/03/22 19:37, mick crane wrote:

On 2022-03-17 05:09, Richard Hector wrote:

On 8/03/22 13:25, Richard Hector wrote:

Hi all,

I've recently set up a small box to run cups, to provide network 
access to a USB-only printer. It's a 32-bit machine running bullseye.


I'm seeing log messages like these:

Mar  7 15:47:47 whio avahi-daemon[310]: Record 
[Brother\032HL-2140\032\064\032whio._ipps._tcp.local#011IN#011SRV 0 
0 631 whio.local ; ttl=120] not fitting in legacy unicast packet, 
dropping.
Mar  7 15:47:47 whio avahi-daemon[310]: Record 
[whio.local#011IN#011 fe80::3e4a:92ff:fed3:9e16 ; ttl=120] not 
fitting in legacy unicast packet, dropping.
Mar  7 15:47:48 whio avahi-daemon[310]: Record 
[Brother\032HL-2140\032\064\032whio._ipp._tcp.local#011IN#011SRV 0 0 
631 whio.local ; ttl=120] not fitting in legacy unicast packet, 
dropping.
Mar  7 15:47:48 whio avahi-daemon[310]: Record 
[whio.local#011IN#011 fe80::3e4a:92ff:fed3:9e16 ; ttl=120] not 
fitting in legacy unicast packet, dropping.


Those link-local IPv6 addresses belong to the machine itself. It 
currently has no other IPv6 address(es) (other than loopback), but I 
should probably set that up.


Any hints as to what's going on?

Most of the hits I get from a web search are full of 'me too' with no 
answers.


Nobody? Not even another 'me too'? :-)

Any suggestions for further/better questions to ask, or info to provide?


I have no idea. Could it be something to do with this old report ?
https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=517683


I don't think so - that seems to relate to too many packets, rather than 
packets being too big.



I would probably fiddle about to get rid of the IPV6 thing.


I don't want to get rid of IPv6 - I use it. It could be connected with 
IPv6, though, since IPv6 (IIRC) doesn't allow packets to be fragmented 
in transit.


Cheers,
Richard


