Re: RFR: 8330694: Rename 'HeapRegion' to 'G1HeapRegion' [v13]

2024-05-24 Thread Thomas Schatzl
On Fri, 24 May 2024 13:04:14 GMT, Lei Zaakjyu  wrote:

>> follow up 8267941
>
> Lei Zaakjyu has updated the pull request with a new target base due to a 
> merge or a rebase. The pull request now contains 10 commits:
> 
>  - review
>  - Merge branch 'master' of https://git.openjdk.org/jdk into JDK-8330694
>  - restore
>  - Merge branch 'master' of https://git.openjdk.org/jdk into JDK-8330694
>  - review
>  - Merge branch 'master' into JDK-8330694
>  - fix indentation
>  - also tidy up
>  - tidy up
>  - rename

Still good imo

-

Marked as reviewed by tschatzl (Reviewer).

PR Review: https://git.openjdk.org/jdk/pull/18871#pullrequestreview-2076897185


Re: [PATCH v2] ntp: remove accidental integer wrap-around

2024-05-24 Thread Thomas Gleixner
On Fri, May 24 2024 at 14:09, Thomas Gleixner wrote:
> On Fri, May 17 2024 at 20:22, Justin Stitt wrote:
> I dug into history to find a Fixes tag. That unearthed something
> interesting.  Exactly this check used to be there until commit
> eea83d896e31 ("ntp: NTP4 user space bits update") which landed in
> 2.6.30. The change log says:
>
> "If some values for adjtimex() are outside the acceptable range, they
>  are now simply normalized instead of letting the syscall fail."
>
> The problem with that commit is that it did not do any normalization at
> all and just relied on the actual time_maxerror handling in
> second_overflow(), which is both insufficient and also prone to that
> overflow issue.
>
> So instead of turning the clock back, we might be better off to actually
> put the normalization in place at the assignment:
>
> time_maxerror = min(max(0, txc->maxerror), NTP_PHASE_LIMIT);
>
> or something like that.

So that commit also removed the sanity check for time_esterror, but
that's not doing anything in the kernel other than being reset in
clear_ntp() and being handed back to user space. No idea what this is
actually used for.

Thanks,

tglx



Re: [PATCH v2] ntp: remove accidental integer wrap-around

2024-05-24 Thread Thomas Gleixner
On Fri, May 17 2024 at 20:22, Justin Stitt wrote:
> time_maxerror is unconditionally incremented and the result is checked
> against NTP_PHASE_LIMIT, but the increment itself can overflow,
> resulting in wrap-around to negative space.
>
> The user can supply some crazy values which is causing the overflow. Add
> an extra validation step checking that maxerror is reasonable.

The user can supply any value which can cause an overflow as the input
is unchecked. Add ...

Hmm?

> diff --git a/kernel/time/timekeeping.c b/kernel/time/timekeeping.c
> index b58dffc58a8f..321f251c02aa 100644
> --- a/kernel/time/timekeeping.c
> +++ b/kernel/time/timekeeping.c
> @@ -2388,6 +2388,11 @@ static int timekeeping_validate_timex(const struct 
> __kernel_timex *txc)
>   }
>   }
>  
> + if (txc->modes & ADJ_MAXERROR) {
> + if (txc->maxerror < 0 || txc->maxerror > NTP_PHASE_LIMIT)
> + return -EINVAL;
> + }

I dug into history to find a Fixes tag. That unearthed something
interesting.  Exactly this check used to be there until commit
eea83d896e31 ("ntp: NTP4 user space bits update") which landed in
2.6.30. The change log says:

"If some values for adjtimex() are outside the acceptable range, they
 are now simply normalized instead of letting the syscall fail."

The problem with that commit is that it did not do any normalization at
all and just relied on the actual time_maxerror handling in
second_overflow(), which is both insufficient and also prone to that
overflow issue.

So instead of turning the clock back, we might be better off to actually
put the normalization in place at the assignment:

time_maxerror = min(max(0, txc->maxerror), NTP_PHASE_LIMIT);

or something like that.
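
A minimal sketch of that normalization, assuming it sits in the ADJ_MAXERROR
branch of process_adjtimex_modes() in kernel/time/ntp.c (illustrative only,
not an actual patch; clamp_t() is just a type-safe spelling of the
min(max()) expression above):

	/*
	 * Clamp the user-supplied value at the point of assignment instead
	 * of rejecting it in timekeeping_validate_timex().
	 */
	if (txc->modes & ADJ_MAXERROR)
		time_maxerror = clamp_t(long long, txc->maxerror, 0, NTP_PHASE_LIMIT);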

Miroslav: Any opinion on that?

Thanks,

tglx



Re: [rsyslog] Stop actions

2024-05-24 Thread Thomas Raef via rsyslog
I changed it to:

ruleset(name="drop") {
if ($rawmsg contains "temp-write-test-") or ($rawmsg contains "-mc.log") or
($rawmsg contains "/bb-plugin/cache") then {
stop
}
}

But the messages still show up.

If the message is malformed, what can I do?

This is one such message I'm still getting:

"message": type=PATH msg=audit(1715691166.683:1235018): item=1
name=\"/var/www/[redacted]/htdocs/wp-content/mc_data/e0dd02283d6008e11343bf4b5d38ced4-mc.log\"
inode=2427162 dev=08:01 mode=0100644 ouid=1010 ogid=2011 rdev=00:00
nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
OUID=\"[redacted]\" OGID=\"[redacted]\"

Thomas J. Raef
Founder, WeWatchYourWebsite.com
http://wewatchyourwebsite.com
tr...@wewatchyourwebsite.com
LinkedIn <https://www.linkedin.com/in/thomas-raef-74b93a14/>
Facebook <https://www.facebook.com/WeWatchYourWebsite>



On Fri, May 24, 2024 at 6:49 AM Rainer Gerhards 
wrote:

> I guess the message is malformed and the string you look for is inside
> another field.
>
> I would suggest that you use "$rawmsg" instead of "$msg". If that
> works, a) we are on the right track and b) you actually solved the
> issue, albeit probably not in the best possible way.
>
> HTH
> Rainer
>
> El vie, 24 may 2024 a las 12:28, Thomas Raef via rsyslog
> () escribió:
> >
> > I have rules setup but I want to ignore all entries like this:
> >
> >  "message": type=PATH msg=audit(1715687344.694:1226486): item=3
> >
> name=\"/var/www/[redacted].com/htdocs/wp-content/temp-write-test-12345467\"
> > inode=1661307 dev=08:01 mode=0100644 ouid=1005 ogid=2006 rdev=00:00
> > nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
> > OUID=\"[redacted]\" OGID=\"[redacted]\"
> >
> > I want to ignore all entries that have temp-write-test- in the message.
> >
> > I've tried:
> >
> > :msg, contains, "temp-write-test-" stop
> >
> >
> >
> > But I continually get messages with that string in them. I've tried it
> with
> > that as the first rule.
> >
> >
> > And I've tried this as well:
> >
> >
> > ruleset(name="drop") {
> > if ($msg contains "temp-write-test-") or ($msg contains "-mc.log") or
> ($msg
> > contains "/bb-plugin/cache") then {
> > stop
> > }
> > }
> >
> > input(type="imfile"
> > File="/var/log/audit/audit.log"
> > Tag="audit_logs"
> > ruleset="drop"
> > reopenOnTruncate="on"
> > )
> >
> >
> > Nothing works.
> >
> >
> > Can anyone shed some light? Please?
> >
> >
> > Thomas J. Raef
> > Founder, WeWatchYourWebsite.com
> > http://wewatchyourwebsite.com
> > tr...@wewatchyourwebsite.com
> > LinkedIn <https://www.linkedin.com/in/thomas-raef-74b93a14/>
> > Facebook <https://www.facebook.com/WeWatchYourWebsite>
> > ___
> > rsyslog mailing list
> > https://lists.adiscon.net/mailman/listinfo/rsyslog
> > http://www.rsyslog.com/professional-services/
> > What's up with rsyslog? Follow https://twitter.com/rgerhards
> > NOTE WELL: This is a PUBLIC mailing list, posts are ARCHIVED by a myriad
> of sites beyond our control. PLEASE UNSUBSCRIBE and DO NOT POST if you
> DON'T LIKE THAT.
>
___
rsyslog mailing list
https://lists.adiscon.net/mailman/listinfo/rsyslog
http://www.rsyslog.com/professional-services/
What's up with rsyslog? Follow https://twitter.com/rgerhards
NOTE WELL: This is a PUBLIC mailing list, posts are ARCHIVED by a myriad of 
sites beyond our control. PLEASE UNSUBSCRIBE and DO NOT POST if you DON'T LIKE 
THAT.

[rsyslog] Stop actions

2024-05-24 Thread Thomas Raef via rsyslog
I have rules setup but I want to ignore all entries like this:

 "message": type=PATH msg=audit(1715687344.694:1226486): item=3
name=\"/var/www/[redacted].com/htdocs/wp-content/temp-write-test-12345467\"
inode=1661307 dev=08:01 mode=0100644 ouid=1005 ogid=2006 rdev=00:00
nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
OUID=\"[redacted]\" OGID=\"[redacted]\"

I want to ignore all entries that have temp-write-test- in the message.

I've tried:

:msg, contains, "temp-write-test-" stop



But I continually get messages with that string in them. I've tried it with
that as the first rule.


And I've tried this as well:


ruleset(name="drop") {
if ($msg contains "temp-write-test-") or ($msg contains "-mc.log") or ($msg
contains "/bb-plugin/cache") then {
stop
}
}

input(type="imfile"
File="/var/log/audit/audit.log"
Tag="audit_logs"
ruleset="drop"
reopenOnTruncate="on"
)


Nothing works.


Can anyone shed some light? Please?


Thomas J. Raef
Founder, WeWatchYourWebsite.com
http://wewatchyourwebsite.com
tr...@wewatchyourwebsite.com
LinkedIn <https://www.linkedin.com/in/thomas-raef-74b93a14/>
Facebook <https://www.facebook.com/WeWatchYourWebsite>
___
rsyslog mailing list
https://lists.adiscon.net/mailman/listinfo/rsyslog
http://www.rsyslog.com/professional-services/
What's up with rsyslog? Follow https://twitter.com/rgerhards
NOTE WELL: This is a PUBLIC mailing list, posts are ARCHIVED by a myriad of 
sites beyond our control. PLEASE UNSUBSCRIBE and DO NOT POST if you DON'T LIKE 
THAT.


[ANNOUNCEMENT] Commons Daemon 1.4.0 Released

2024-05-24 Thread Mark Thomas

The Apache Commons Team is pleased to announce the availability of
Apache Commons Daemon 1.4.0

The Apache Commons Daemon software library provides a generic Daemon
(unix) or Service (Windows) wrapper for Java code.

Version 1.4.0 raises the minimum supported version of Java to Java 8 and 
Windows to Windows 10 / Windows Server 2012. Version 1.4.0 also 
addresses a number of bugs.


A full list of changes can be found at
https://commons.apache.org/proper/commons-daemon/changes-report.html

Source and binary distributions are available for download from the
Apache Commons download site:

https://commons.apache.org/proper/commons-daemon/download_daemon.cgi

Please verify signatures using the KEYS file available at the above
location when downloading the release.

For complete information on Commons Daemon, including
instructions on how to submit bug reports, patches, or suggestions for
improvement, see the Apache Commons Daemon website:

https://commons.apache.org/proper/commons-daemon/

Mark
on behalf of the Apache Commons community



Re: PersistentManager and ClassNotFoundException

2024-05-24 Thread Mark Thomas
Can you provide the simplest web application (with source) that 
replicates the problem?


Mark


On 23/05/2024 23:45, Jakub Królikowski wrote:

Hi,

I'm working with Tomcat 10.1.

When a user starts using the store in my web application, I save the
ShopCart object as the "cart" session attribute.
I want the "cart" attribute to be restored to the session after restarting
the app.


To enable session persistence I added



to the Context. It loads the StandardManager.

And this works fine - after reload / restart the object "ShopCart" is back
in the session.



I want to experiment with PersistentManager. The Tomcat docs say:
"The persistence across restarts provided by the *StandardManager* is a
simpler implementation than that provided by the *PersistentManager*. If
robust, production quality persistence across restarts is required then the
*PersistentManager* should be used with an appropriate configuration."

I am hoping for a Listener for deserialization of the session attributes.

The new Manager configuration looks like this:







But it doesn't work. After restart I get this exception:


java.lang.ClassNotFoundException: ShopCart

at
org.apache.catalina.loader.WebappClassLoaderBase.loadClass(WebappClassLoaderBase.java:1332)

at
org.apache.catalina.loader.WebappClassLoaderBase.loadClass(WebappClassLoaderBase.java:1144)

at java.base/java.lang.Class.forName0(Native Method)

at java.base/java.lang.Class.forName(Class.java:534)

at java.base/java.lang.Class.forName(Class.java:513)

at
org.apache.catalina.util.CustomObjectInputStream.resolveClass(CustomObjectInputStream.java:158)

at
java.base/java.io.ObjectInputStream.readNonProxyDesc(ObjectInputStream.java:2061)

at
java.base/java.io.ObjectInputStream.readClassDesc(ObjectInputStream.java:1927)

at
java.base/java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:2252)

at
java.base/java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1762)

at
java.base/java.io.ObjectInputStream.readObject(ObjectInputStream.java:540)

at
java.base/java.io.ObjectInputStream.readObject(ObjectInputStream.java:498)

at
org.apache.catalina.session.StandardSession.doReadObject(StandardSession.java:1198)

at
org.apache.catalina.session.StandardSession.readObjectData(StandardSession.java:831)

at org.apache.catalina.session.FileStore.load(FileStore.java:203)

at org.apache.catalina.session.StoreBase.processExpires(StoreBase.java:138)

at
org.apache.catalina.session.PersistentManagerBase.processExpires(PersistentManagerBase.java:409)

at
org.apache.catalina.session.ManagerBase.backgroundProcess(ManagerBase.java:587)

at
org.apache.catalina.core.StandardContext.backgroundProcess(StandardContext.java:4787)

at
org.apache.catalina.core.ContainerBase$ContainerBackgroundProcessor.processChildren(ContainerBase.java:1172)

at
org.apache.catalina.core.ContainerBase$ContainerBackgroundProcessor.processChildren(ContainerBase.java:1176)

at
org.apache.catalina.core.ContainerBase$ContainerBackgroundProcessor.processChildren(ContainerBase.java:1176)

at
org.apache.catalina.core.ContainerBase$ContainerBackgroundProcessor.run(ContainerBase.java:1154)

at
java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:572)

at
java.base/java.util.concurrent.FutureTask.runAndReset(FutureTask.java:358)

at
java.base/java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:305)

at
java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1144)

at
java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:642)

at
org.apache.tomcat.util.threads.TaskThread$WrappingRunnable.run(TaskThread.java:63)

at java.base/java.lang.Thread.run(Thread.java:1583)


I guess this means that the two managers use ClassLoader differently.
How to get the PersistentManager to work in this case?

Best regards,
--
Jakub Królikowski



-
To unsubscribe, e-mail: users-unsubscr...@tomcat.apache.org
For additional commands, e-mail: users-h...@tomcat.apache.org



Re: gwt-maven-springboot-archetype updated ...

2024-05-24 Thread Thomas Broyer


On Thursday, May 23, 2024 at 6:31:18 PM UTC+2 frank.h...@googlemail.com 
wrote:

Running `mvn clean verify` generates the project and compares it with
predefined sources, similar to your gwt-maven-archetype.
What does not get tested is building a WAR and running it.


Ah, apparently it also gets built 
(https://github.com/NaluKit/gwt-maven-springboot-archetype/blob/main/modular-springboot-webapp/src/test/resources/projects/basic-webapp/goal.txt),
 
but indeed the generated WAR is not run.


(fwiw, your GitHub Actions workflow no longer runs as it targets the master 
branch and you renamed it to main)

-- 
You received this message because you are subscribed to the Google Groups "GWT 
Users" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to google-web-toolkit+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/google-web-toolkit/b913dab3-6a70-4b84-850f-722523c75acdn%40googlegroups.com.


Re: Security Constraints and Session Timeout

2024-05-24 Thread Mark Thomas

On 23/05/2024 17:01, Jerry Malcolm wrote:
I have some servlets that I can't put security constraints on at the 
web.xml level.  However, deep down in the code there are some places 
that I need a user to be logged in.  My overall UI ensures this all 
works by having certain JSPs with constraints that force the user to log 
in before getting to the servlet.  But if the user spends too much time 
interacting with the servlet and not reloading one of the pages that 
require a login, the session will timeout, and the user is now buried in 
one of the servlets, and I've lost the session/userprincipal.  It 
appears that interacting with a servlet that has no constraints does not 
reset the session timer.  Is that correct, or am I seeing it wrong?  I 
know the easy answer would be to add a constraint requiring login to 
access the servlet.  But with the current design, that's not going to 
work. Is there something I can do in the servlet and/or servlet config 
in web.xml to force servlet access to keep resetting the session timer 
so it won't expire without having to put role constraints directly on 
the servlet?


Just calling HttpServletRequest.getSession(false) from the Servlet 
should be sufficient.
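
A minimal sketch of that, assuming a Tomcat 10+ webapp using the
jakarta.servlet namespace; the servlet name and the null-handling are
illustrative, not taken from this thread:

import java.io.IOException;

import jakarta.servlet.ServletException;
import jakarta.servlet.http.HttpServlet;
import jakarta.servlet.http.HttpServletRequest;
import jakarta.servlet.http.HttpServletResponse;
import jakarta.servlet.http.HttpSession;

public class LongRunningServlet extends HttpServlet {
    @Override
    protected void doGet(HttpServletRequest req, HttpServletResponse resp)
            throws ServletException, IOException {
        // Look up the existing session without creating a new one; per the
        // note above, this access alone resets the session's inactivity
        // timer so it will not expire while the user keeps working here.
        HttpSession session = req.getSession(false);
        if (session == null) {
            // Session already expired; handle however the app requires.
            return;
        }
        // ... the servlet's normal processing continues here ...
    }
}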


Note you can monitor the expiration time for sessions using the Manager 
application. That might be helpful in testing.


Mark

-
To unsubscribe, e-mail: users-unsubscr...@tomcat.apache.org
For additional commands, e-mail: users-h...@tomcat.apache.org



[VOTE][RESULT] Release Apache Commons Daemon 1.4.0 based on RC1

2024-05-24 Thread Mark Thomas

All,

My apologies for the delay in closing this vote. I missed some of the 
votes so I didn't realize there were sufficient votes for this to pass.


The following votes were cast:

Binding:
+1: ggregory, markt, chtompki

No other votes were cast.

The vote therefore passes.

Thanks to everyone who contributed to this release.

Mark


On 17/05/2024 19:05, Mark Thomas wrote:
We have fixed a few bugs, added enhancements and updated the minimum 
Java and Windows version since Apache Commons Daemon 1.3.4 was released, 
so I would like to release Apache Commons Daemon 1.4.0.


Apache Commons Daemon 1.4.0 RC1 is available for review here:
     https://dist.apache.org/repos/dist/dev/commons/daemon/1.4.0-RC1 
(svn revision 69267)


The Git tag commons-daemon-1.4.0-RC1 commit for this RC is 
6b911598b815a4a7b8ab2b8a8a2157593effc6bc which you can browse here:


https://gitbox.apache.org/repos/asf?p=commons-daemon.git;a=commit;h=6b911598b815a4a7b8ab2b8a8a2157593effc6bc
You may checkout this tag using:
     git clone https://gitbox.apache.org/repos/asf/commons-daemon.git 
--branch commons-daemon-1.4.0-RC1 commons-daemon-1.4.0-RC1


Maven artifacts are here:

https://repository.apache.org/content/repositories/orgapachecommons-1729/commons-daemon/commons-daemon/1.4.0/

These are the artifacts and their hashes:

#Release SHA-512s
#Fri May 17 16:28:36 BST 2024
commons-daemon-1.4.0-bin-windows.zip=5974d638994cbf821c17d0fc6b69bace08b0314ea5614c1a57175a02cda7c57a6b8ee49f8892206061f9d3385da5841db31d9ce9b3ce74cf4afc10ad8e68
commons-daemon-1.4.0-bin.tar.gz=15fccd35a711f91e5b4466d56f50585c7ae3a787a39c16e006617c86b9e9feee9fbf902582b08c2e896ca6a655500d805fdbb9c97f04f70321631168b8d42c81
commons-daemon-1.4.0-bin.zip=3652ed9ed9cf6fcb0d4b5067570c322b0b3c9ae0a81dee1d7b0992bb7ff5654a7c4dc89c0c2d966c9962778288c6ad60bd8ac10f62898c9e10261bec6e61d3ea
commons-daemon-1.4.0-bom.json=0de219d72a63d8154f42ef5bd6c348936e14d65efec3e54a55ebfb9bc757e4ceac7aabd8c8b85d94657ed76f44069ac56b2bb231aba5419733f00a3dc85f6601
commons-daemon-1.4.0-bom.xml=bc0dba27a50ca6c5d30015f97bd258325452e6fabefd1cf38b94d0ce5699233a18b456fd701761a5f8cedf847cbd152879e0dec9add548611d5593b910c90244
commons-daemon-1.4.0-javadoc.jar=8fd299a3d228c4ab4ea8455b81319d80b3e27cac1c31bed1e03cc7a3391d59f18e037adcb72e68202511a45ef5bc49274d6e9cf38c860b55bb9b874a92044d2e
commons-daemon-1.4.0-native-src.tar.gz=8a54200d547ef7ee647e8d4910fd3cb55bf7d8fc75de8f0e01bc701ef0b386ddc3843e6c9189e34d2afd62060fb6299ea83c421cf60c7d105d04cb45904500d3
commons-daemon-1.4.0-native-src.zip=cb6b12bbd775eba7d012744cf908f42fc6d39e421c1f41546f230b431c1d239cc3e2d9c09520165b5db7a95701b651a6738a5d1915d39a4520b1ff07ce4f65a5
commons-daemon-1.4.0-sources.jar=701b3646ea29de5ea69d72c8741a2dc56a44a57168c0e7d1afab87f89d9cab75c413f1fe3d09f5765e4dbe2b2af0951125ee0f6a0a4d5b4fafcf49bfd0b03cbf
commons-daemon-1.4.0-src.tar.gz=285f33ce36e2591f49b6067da16612ec1b49b23a8637d077618aefaae4452993dc2a31660665551ea761857390d940100e162e205fe7c0fad9c72374f2d15bb8
commons-daemon-1.4.0-src.zip=190d6b8b65d71594ff02bade3fbcd6b09d5b2e68413a2a23ef2cbf945d2e19655c1d480484ec198f7e140eaa3744c970770cea17498c12f9bfe284f5bd28a51d
commons-daemon-1.4.0-test-sources.jar=e889d8b5bda1e0a89d33741e9308739b732e938ef13b552acf7dc0ba52845766e6a49f3fbb6c821655d295e18b9accbfeac1c26b8afacc088084511cea301bcd
commons-daemon-1.4.0-tests.jar=b392bdaa59e3d75e7aa023f65514385edfc44bc1bc088826b643186bfeaf47215375a814af3637e585bde201dd6ee5ef3669f2b4a3cf2e275da4fc6ccd91dfda
commons-daemon_commons-daemon-1.4.0.spdx.json=47c669c16aca4588d4940a4dcec162a619587f8fc8d6a74a5abbe8562296f0eb08f271db531e678a939355a9b7f669cb9ade864d953c77402b60e8c183f1faed



Details of changes since 1.3.4 are in the change log:

https://dist.apache.org/repos/dist/dev/commons/daemon/1.4.0-RC1/RELEASE-NOTES.txt

https://github.com/apache/commons-daemon/blob/master/src/changes/changes.xml

KEYS:
   https://downloads.apache.org/commons/KEYS

Please review the release candidate and vote.
This vote will close no sooner than 72 hours from now.

   [ ] +1 Release these artifacts
   [ ] +0 OK, but...
   [ ] -0 OK, but really should fix...
   [ ] -1 I oppose this release because...

Thank you,

Mark Thomas,
Release Manager (using key 10C01C5A2F6059E7)

The following is intended as a helper and refresher for reviewers.

Validating a release candidate
==============================

These guidelines are NOT complete.

Requirements: Git, Java, Maven.

You can validate a release from a release candidate (RC) tag as follows.

1a) Clone and checkout the RC tag

git clone https://gitbox.apache.org/repos/asf/commons-daemon.git 
--branch commons-daemon-1.4.0-RC1 commons-daemon-1.4.0-RC1

cd commons-daemon-1.4.0-RC1

1b) Download and unpack the source archive from:

https://dist.apache.org/repos/dist/dev/commons/daemon/1.4.0-RC1/source

2) Check Apache licenses

This step is not required if the site includes a RAT report page which 
you then must check.


mvn apache-rat:check

3) Check binary compatibility

Re: Will the UUID or blkid of a device change?

2024-05-24 Thread Thomas Schmitt
Hi,

Hans wrote:
> > I want to make sure, that the correct USB-stick is used.
> > Thus I can do by using the UUID of the target stick like
> > dd if=/path/to/myfile.iso of=UUID="123456-abcd-"

David Wright wrote:
>   # dd of=/dev/disk/by-id/JetFlash_Transcend_4GB_JKNB2FYG-0:0  …  …

This is indeed a good solution if the device ID is known and systemd does
not change its way of composing the "by-id" name.
But if you first have to find out the right "by-id" name, then there is
again the risk of user error, especially when in a hurry.


There is a different, interactive approach which depends on the fact that
the Linux kernel creates a new device file when a USB stick is plugged in:

  https://packages.debian.org/stable/xorriso-dd-target
  https://wiki.debian.org/XorrisoDdTarget

The man page of xorriso-dd-target demonstrates more use cases, like:
  - List all devices with reasoning
  - Evaluate particular given devices


Have a nice day :)

Thomas



Re: [PATCH] tests/qtest/migration-test: Run some basic tests on s390x and ppc64 with TCG, too

2024-05-23 Thread Thomas Huth

On 24/05/2024 02.05, Nicholas Piggin wrote:

On Wed May 22, 2024 at 7:12 PM AEST, Thomas Huth wrote:

On s390x, we recently had a regression that broke migration / savevm
(see commit bebe9603fc ("hw/intc/s390_flic: Fix crash that occurs when
saving the machine state"). The problem was merged without being noticed
since we currently do not run any migration / savevm related tests on
x86 hosts.
While we currently cannot run all migration tests for the s390x target
on x86 hosts yet (due to some unresolved issues with TCG), we can at
least run some of the non-live tests to avoid such problems in the future.
Thus enable the "analyze-script" and the "bad_dest" tests before checking
for KVM on s390x or ppc64 (this also fixes the problem that the
"analyze-script" test was not run on s390x at all anymore since it got
disabled again by accident in a previous refactoring of the code).


ppc64 is working for me, can it be enabled fully, or is it still
breaking somewhere? FWIW I have a patch to change it from using
open-firmware commands to a boot file which speeds it up.


IIRC last time that I tried it was working fine for me, too, but getting a 
speedup here first would be very welcome since using the Forth code slows 
down the whole testing quite a bit.


 Thomas




Re: [PATCH v2 1/1] x86/vector: Fix vector leak during CPU offline

2024-05-23 Thread Thomas Gleixner
On Wed, May 22 2024 at 15:02, Dongli Zhang wrote:
> The absence of IRQD_MOVE_PCNTXT prevents immediate effectiveness of
> interrupt affinity reconfiguration via procfs. Instead, the change is
> deferred until the next instance of the interrupt being triggered on the
> original CPU.
>
> When the interrupt next triggers on the original CPU, the new affinity is
> enforced within __irq_move_irq(). A vector is allocated from the new CPU,
> but if the old vector on the original CPU remains online, it is not
> immediately reclaimed. Instead, apicd->move_in_progress is flagged, and the
> reclaiming process is delayed until the next trigger of the interrupt on
> the new CPU.
>
> Upon the subsequent triggering of the interrupt on the new CPU,
> irq_complete_move() adds a task to the old CPU's vector_cleanup list if it
> remains online. Subsequently, the timer on the old CPU iterates over its
> vector_cleanup list, reclaiming old vectors.
>
> However, a rare scenario arises if the old CPU is outgoing before the
> interrupt triggers again on the new CPU. The irq_force_complete_move() may
> not have the chance to be invoked on the outgoing CPU to reclaim the old
> apicd->prev_vector. This is because the interrupt isn't currently affine to
> the outgoing CPU, and irq_needs_fixup() returns false. Even though
> __vector_schedule_cleanup() is later called on the new CPU, it doesn't
> reclaim apicd->prev_vector; instead, it simply resets both
> apicd->move_in_progress and apicd->prev_vector to 0.
>
> As a result, the vector remains unreclaimed in vector_matrix, leading to a
> CPU vector leak.
>
> To address this issue, move the invocation of irq_force_complete_move()
> before the irq_needs_fixup() call to reclaim apicd->prev_vector, if the
> interrupt is currently or used to affine to the outgoing CPU. Additionally,
> reclaim the vector in __vector_schedule_cleanup() as well, following a
> warning message, although theoretically it should never see
> apicd->move_in_progress with apicd->prev_cpu pointing to an offline CPU.

Nice change log!



[PULL] drm-misc-fixes

2024-05-23 Thread Thomas Zimmermann
Hi Dave, Sima,

here's the weekly PR for drm-misc-fixes. There's one important
patch included, which fixes a kernel panic that can be triggered
from userspace.

Best regards
Thomas

drm-misc-fixes-2024-05-23:
Short summary of fixes pull:

buddy:
- stop using PAGE_SIZE

shmem-helper:
- avoid kernel panic in mmap()

tests:
- buddy: fix PAGE_SIZE dependency
The following changes since commit 6897204ea3df808d342c8e4613135728bc538bcd:

  drm/connector: Add \n to message about demoting connector force-probes 
(2024-05-07 09:17:07 -0700)

are available in the Git repository at:

  https://gitlab.freedesktop.org/drm/misc/kernel.git 
tags/drm-misc-fixes-2024-05-23

for you to fetch changes up to 39bc27bd688066a63e56f7f64ad34fae03fbe3b8:

  drm/shmem-helper: Fix BUG_ON() on mmap(PROT_WRITE, MAP_PRIVATE) (2024-05-21 
14:38:51 +0200)


Short summary of fixes pull:

buddy:
- stop using PAGE_SIZE

shmem-helper:
- avoid kernel panic in mmap()

tests:
- buddy: fix PAGE_SIZE dependency


Matthew Auld (2):
  drm/buddy: stop using PAGE_SIZE
  drm/tests/buddy: stop using PAGE_SIZE

Mohamed Ahmed (1):
  drm/nouveau: use tile_mode and pte_kind for VM_BIND bo allocations

Wachowski, Karol (1):
  drm/shmem-helper: Fix BUG_ON() on mmap(PROT_WRITE, MAP_PRIVATE)

 drivers/gpu/drm/drm_buddy.c |  2 +-
 drivers/gpu/drm/drm_gem_shmem_helper.c  |  3 +++
 drivers/gpu/drm/nouveau/nouveau_abi16.c |  3 +++
 drivers/gpu/drm/nouveau/nouveau_bo.c| 44 ++---
 drivers/gpu/drm/tests/drm_buddy_test.c  | 42 +++
 include/drm/drm_buddy.h |  6 ++---
 include/uapi/drm/nouveau_drm.h  |  7 ++
 7 files changed, 57 insertions(+), 50 deletions(-)

-- 
Thomas Zimmermann
Graphics Driver Developer
SUSE Software Solutions Germany GmbH
Frankenstrasse 146, 90461 Nuernberg, Germany
GF: Ivo Totev, Andrew Myers, Andrew McDonald, Boudien Moerman
HRB 36809 (AG Nuernberg)


Re: [blink-dev] Support for SharedArrayBuffer API and WASM Sqlite in WebView

2024-05-23 Thread 'Thomas Steiner' via blink-dev
That's correct, WebView doesn't support this as per
https://issues.chromium.org/issues/40914606#comment5. @Ayu Ishii
, what's the status of WebSQL in WebView, is it still
available (I think it is)?

On Thu, May 23, 2024, 18:29 'Antonio MORENO' via blink-dev <
blink-dev@chromium.org> wrote:

> Hi,
>
> We maintain a web application that previously used WebSQL, and that has
> been migrated to the WASM implementation of Sqlite. This works well in the
> newer versions of Chrome, but it seems it doesn't work in an Android
> application some of our customers use, that is implemented using WebView.
>
> The reason seems to be that the SharedArrayBuffers API, which the WASM
> Sqlite library requires, is not supported in WebView:
>
> https://caniuse.com/sharedarraybuffer
>
> Can you confirm this is the case? Will this API be supported in WebView at
> some point? Or otherwise, what would be the recommended approach in case a
> customer needs to use a WebView-based Android application to access our web
> application?
>
> Thanks in advance, and kind regards,
>
> Antonio Moreno.
>
> --
> You received this message because you are subscribed to the Google Groups
> "blink-dev" group.
> To unsubscribe from this group and stop receiving emails from it, send an
> email to blink-dev+unsubscr...@chromium.org.
> To view this discussion on the web visit
> https://groups.google.com/a/chromium.org/d/msgid/blink-dev/597d01f5-2dac-4b02-bb63-f3415493496an%40chromium.org
> 
> .
>

-- 
You received this message because you are subscribed to the Google Groups 
"blink-dev" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to blink-dev+unsubscr...@chromium.org.
To view this discussion on the web visit 
https://groups.google.com/a/chromium.org/d/msgid/blink-dev/CALgRrLmcR43B4KkqLz4XwC6vKXpyH4UBHQH8zrjCPjT1YstvTw%40mail.gmail.com.


Re: Last call before deprecating the nested_splitter plugin

2024-05-23 Thread Thomas Passin
Speaking of which, the free layout controller still has this method:

def get_top_splitter(self) -> Optional[Wrapper]:
    """Return the top splitter of c.frame.top."""
    f = self.c.frame
    if hasattr(f, 'top') and f.top:
        child = f.top.findChild(NestedSplitter)  # <---
        return child and child.top()
    return None

This method is used by the *richtext* plugin as well as the flc itself.

*qt_gui* also has a method by this name but it has been updated to return 
the 'main_splitter'.
On Thursday, May 23, 2024 at 10:52:37 AM UTC-4 Edward K. Ream wrote:

> On Thu, May 23, 2024 at 7:00 AM Thomas Passin  wrote:
>
> Is it correct that the "main" splitter is the one whose splitter bar runs 
>> all the way either from top to bottom or from side to side, depending on 
>> orientation?
>>
>
> Yes, kinda. But you shouldn't take my word for it. Consult the code!
>
> Search for 'main_splitter'. Find *dw.createMainLayout*.
>
> The answer to your question are these lines:
>
> main_splitter = QtWidgets.QSplitter(parent)
> main_splitter.setOrientation(Orientation.Vertical)
> secondary_splitter = QtWidgets.QSplitter(main_splitter)
>
> What's the parent?  cff createMainLayout. The caller is 
> *dw.createMainWindow*. The parent is *dw.centralwidget*.
>
> main_splitter, secondary_splitter = 
> self.createMainLayout(self.centralwidget)
>
> Consult the rest of dw.createMainWindow for further details.
>
> Thomas, you will learn a lot by answering your own questions. You can do 
> it!
>
> Edward
>

-- 
You received this message because you are subscribed to the Google Groups 
"leo-editor" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to leo-editor+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/leo-editor/c5ecfcf9-37ad-48c2-8aa4-ca25dbf5aa14n%40googlegroups.com.


[NTG-context] Suppress pagenumber

2024-05-23 Thread Thomas Meyer

Need Help!

I'm standing in front of the barn door ...

How can I suppress the pagenumber only on the first page?

Thanks in advance
Thomas
___
If your question is of interest to others as well, please add an entry to the 
Wiki!

maillist : ntg-context@ntg.nl / 
https://mailman.ntg.nl/mailman3/lists/ntg-context.ntg.nl
webpage  : https://www.pragma-ade.nl / https://context.aanhet.net (mirror)
archive  : https://github.com/contextgarden/context
wiki : https://wiki.contextgarden.net
___


Re: [PATCH 4/5] x86/kernel: Move page table macros to new header

2024-05-23 Thread Thomas Gleixner
On Wed, Apr 10 2024 at 15:48, Jason Andryuk wrote:
> ---
>  arch/x86/kernel/head_64.S| 22 ++
>  arch/x86/kernel/pgtable_64_helpers.h | 28 

That's the wrong place as you want to include it from arch/x86/platform.

arch/x86/include/asm/

Thanks,

tglx



Re: (tomcat) branch main updated: Add support for shallow copies when using WebDAV

2024-05-23 Thread Mark Thomas

On 23/05/2024 12:53, Konstantin Kolinko wrote:

вт, 21 мая 2024 г. в 14:55, :


The following commit(s) were added to refs/heads/main by this push:
  new 4176706761 Add support for shallow copies when using WebDAV
4176706761 is described below

commit 4176706761242851b14be303daf2a00ef385ee49
Author: Mark Thomas 
AuthorDate: Tue May 21 12:54:40 2024 +0100

 Add support for shallow copies when using WebDAV





@@ -1583,7 +1598,9 @@ public class WebdavServlet extends DefaultServlet 
implements PeriodicEventListen
  childSrc += "/";
  }
  childSrc += entry;
-copyResource(errorList, childSrc, childDest);
+if (infiniteCopy) {
+copyResource(errorList, childSrc, childDest, true);
+}
  }


I think that the "if (infiniteCopy)" block here is too narrow.

The whole loop over children (starting with "String[] entries =
resources.list(source)") here is useless when the infiniteCopy option
is false.


Thanks for the review. I've widened the block.

Thinking about it the infinite/not infinite copy option is fairly 
pointless. For a file it has no impact. For a directory you either copy 
the whole tree or just create a new directory - and MKCOL can be used 
for that. Oh well.


Mark

-
To unsubscribe, e-mail: dev-unsubscr...@tomcat.apache.org
For additional commands, e-mail: dev-h...@tomcat.apache.org



Re: Last call before deprecating the nested_splitter plugin

2024-05-23 Thread Thomas Passin
Is it correct that the "main" splitter is the one whose splitter bar runs 
all the way either from top to bottom or from side to side, depending on 
orientation?

On Thursday, May 23, 2024 at 5:10:32 AM UTC-4 Edward K. Ream wrote:

> On Wed, May 22, 2024 at 5:50 PM Thomas Passin  wrote:
>
> I'd like to have a command to place the log frame into its own panel next 
>> to the body editor's. 
>>
> ...
>
>> So there should be three built-in layouts instead of two, and there 
>> should be a command or commands similar to *toggle-split-direction* to 
>> activate any of them.
>>
>
> Such enhancements would be a separate issue. They might affect VR and VR3, 
> but we can deal with that later. 
>
> I also don't like the name "secondary_splitter".
>>
>
> The main splitter contains the secondary splitter. See the code for 
> details. I don't think any other name will be much better.
>
> Qt object names are for debugging and as targets for 
> qt_gui.find_widget_by_name. Changing the names would be yet another 
> breaking change. I don't want to go there.
>
> Edward
>

-- 
You received this message because you are subscribed to the Google Groups 
"leo-editor" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to leo-editor+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/leo-editor/a1298cd5-4eba-4ad9-a568-505092b04cden%40googlegroups.com.


[frameworks-kio] [Bug 293888] Poor performance with mounted network locations

2024-05-23 Thread Thomas Mitterfellner
https://bugs.kde.org/show_bug.cgi?id=293888

Thomas Mitterfellner  changed:

   What|Removed |Added

 CC||tho...@mitterfellner.at

--- Comment #42 from Thomas Mitterfellner  ---
The only thing that really helped getting rid of the 25 second delay for the
file dialog to open was to set:

export GVFS_REMOTE_VOLUME_MONITOR_IGNORE=true

as per
https://forum.manjaro.org/t/is-it-possible-to-bypass-a-directory-using-org-gtk-vfs-udisks2volumemonitor/121698

I don't know why this affected a KDE dialog, though; probably because I opened
it from Firefox, which is a gtk app (with
widget.use-xdg-desktop-portal.file-picker set to 1 in about:config).

-- 
You are receiving this mail because:
You are watching all bug changes.

Re: [pve-devel] [PATCH qemu-server 1/2] migration: avoid crash with heavy IO on local VM disk

2024-05-23 Thread Thomas Lamprecht
Am 23/05/2024 um 11:08 schrieb Fabian Grünbichler:
>> +my $kvm_version = PVE::QemuServer::kvm_user_version();
> wouldn't this need to check the *running* qemu binary for the migrated
> VM, not the *installed* qemu binary on the system?

Yes, this would need to check the running QEMU version via QMP,
as otherwise the new command will also get issued to VMs
started before the update, which will then fail.


___
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


Re: Typo in download link for Debian 32-bit DVD ISO

2024-05-23 Thread Thomas Lange
Thanks for reporting. It has been fixed, but the rebuild of the web
pages is still needed. It will happen in a few hours.


>>>>> On Thu, 23 May 2024 09:08:41 +0200, Stefano Pigozzi  said:

> This page seems to be listing an invalid download link for the 32-bit PC 
DVD-1 iso.
> This link:

> 
https://cdimage.debian.org/debian-cd/current/i386/iso-cd/debian-12.5.0-i386-DVD-1.iso

> Should instead be:

> 
https://cdimage.debian.org/debian-cd/current/i386/iso-dvd/debian-12.5.0-i386-DVD-1.iso

-- 
best regards Thomas



[FRnOG] [TECH] IPvX ?

2024-05-23 Thread Thomas Brenac via frnog
Hello again, list,

As at almost every RIPE NCC meeting I attend, "people" approach me about a
sweet blend of IP evolution combining blockchain and, now, AI... with a
paper to download here: https://we.tl/t-nDyLxVqimx

If it rains this weekend, if you missed Eurovision even on replay, and if
this interests you, I would be glad to have your opinion.

Cheers
___
Thomas BRENAC
+33686263575 



---
Liste de diffusion du FRnOG
http://www.frnog.org/


Bug#1071581: dialog: stop using libtool-bin

2024-05-23 Thread Thomas Dickey
On Wed, May 22, 2024 at 10:34:12AM +0200, Helmut Grohne wrote:
> Hi Thomas,
> 
> On Wed, May 22, 2024 at 03:53:43AM -0400, Thomas Dickey wrote:
> > I don't use autoheader (though it's present in the fork I've maintained for
> > about the past quarter-century).  The configure script generates the 
> > complete
> > dlg_config.h without that crutch.  Attempting to bypass that will certainly
> > lead to unnecessary bug reports.
> 
> I fear it occurred to me late that I should be using autoconf-dickey
> instead of the standard autoconf for dialog. Hence my patch makes it
> work the "wrong" autoconf and thus runs autoheader. I see how that would
> not be necessary with autoconf-dickey.
> 
> > Actually it would be AC_FOREACH, which invokes AH_TEMPLATE
> > 
> > fwiw, CF_CURSES_FUNCS predates that stuff (1997 versus 1999),
> > and there are other macros which might use those features.
> 
> Yeah. And if you make dialog work with autoconf-dickey and without
> autoheader, then all of this becomes moot anyway.
> 
> Feel free to come up with a different solution as long as we stop
> relying on /usr/bin/libtool as that's the component that will go away.
> We now have one working solution and I'm happy if that is sufficient to
> get the ball rolling for a better solution than mine.

thanks (on my to-do list)

-- 
Thomas E. Dickey 
https://invisible-island.net


signature.asc
Description: PGP signature


Re: Problem running Apache Gump [vmgump]

2024-05-23 Thread Mark Thomas
This is my fault. I (still) haven't added the Derby JARs correctly. I'm 
working on this now.


Mark


On 23/05/2024 03:14, g...@gump-vm2.apache.org wrote:

There is a problem with run 'vmgump' (23052024_01), location : 
http://gump-vm2.apache.org/

The log ought be at:
http://gump-vm2.apache.org/gump_log.txt

The last (up to) 50 lines of the log are :
Document run using [XDocDocumenter]
Traceback (most recent call last):
   File "bin/integrate.py", line 114, in 
 irun()
   File "bin/integrate.py", line 91, in irun
 result = getRunner(run).perform()
   File "/srv/gump/public/gump/python/gump/core/runner/runner.py", line 260, in 
perform
 return self.performRun()
   File "/srv/gump/public/gump/python/gump/core/runner/demand.py", line 214, in 
performRun
 self.finalize()
   File "/srv/gump/public/gump/python/gump/core/runner/runner.py", line 240, in 
finalize
 self.run._dispatchEvent(FinalizeRunEvent(self.run))
   File "/srv/gump/public/gump/python/gump/core/run/gumprun.py", line 186, in 
_dispatchEvent
 actor._processEvent(event)
   File "/srv/gump/public/gump/python/gump/core/run/actor.py", line 83, in 
_processEvent
 self.processEvent(event)
   File "/srv/gump/public/gump/python/gump/core/run/actor.py", line 127, in 
processEvent
 self._processOtherEvent(event)
   File "/srv/gump/public/gump/python/gump/core/run/actor.py", line 184, in 
_processOtherEvent
 self.processOtherEvent(event)
   File "/srv/gump/public/gump/python/gump/actor/document/documenter.py", line 
48, in processOtherEvent
 self.document()
   File "/srv/gump/public/gump/python/gump/actor/document/documenter.py", line 
72, in document
 self.documentRun()
   File "/srv/gump/public/gump/python/gump/actor/document/xdocs/documenter.py", 
line 97, in documentRun
 self.documentEverythingElse()
   File "/srv/gump/public/gump/python/gump/actor/document/xdocs/documenter.py", 
line 481, in documentEverythingElse
 self.documentModule(module)
   File "/srv/gump/public/gump/python/gump/actor/document/xdocs/documenter.py", 
line 1843, in documentModule
 self.documentProject(project)
   File "/srv/gump/public/gump/python/gump/actor/document/xdocs/documenter.py", 
line 1991, in documentProject
 self.documentProjectDetails(project, realTime)
   File "/srv/gump/public/gump/python/gump/actor/document/xdocs/documenter.py", 
line 2083, in documentProjectDetails
 depens += self.documentDependenciesList(dependencySection,
   File "/srv/gump/public/gump/python/gump/actor/document/xdocs/documenter.py", 
line 2293, in documentDependenciesList
 self.insertLink(project, referencingObject,
   File "/srv/gump/public/gump/python/gump/actor/document/xdocs/documenter.py", 
line 3045, in insertLink
 link = self.getLink(toObject, fromObject, state)
   File "/srv/gump/public/gump/python/gump/actor/document/xdocs/documenter.py", 
line 3085, in getLink
 url = getRelativeLocation(toObject, fromObject,
   File "/srv/gump/public/gump/python/gump/actor/document/xdocs/resolver.py", 
line 109, in getRelativeLocation
 toLocation=getLocationForObject(toObject,extn)
   File "/srv/gump/public/gump/python/gump/actor/document/xdocs/resolver.py", 
line 127, in getLocationForObject
 getPathForObject(object),
   File "/srv/gump/public/gump/python/gump/actor/document/xdocs/resolver.py", 
line 83, in getPathForObject
 path=getPathForObject(object.getModule()).getPostfixed(object.getName())
   File "/srv/gump/public/gump/python/gump/core/model/project.py", line 856, in 
getModule
 raise RuntimeError('Project [' + self.name + '] not in a module.]')
RuntimeError: Project [derby] not in a module.]
Process Exit Code : 1
--
Gump Version: 2.0.2-alpha-0003

-
To unsubscribe, e-mail: general-unsubscr...@gump.apache.org
For additional commands, e-mail: general-h...@gump.apache.org



-
To unsubscribe, e-mail: general-unsubscr...@gump.apache.org
For additional commands, e-mail: general-h...@gump.apache.org



Re: WebDAV and Microsoft clients

2024-05-23 Thread Mark Thomas

On 22/05/2024 21:47, Michael Osipov wrote:

On 2024/05/22 17:21:07 Mark Thomas wrote:

All,

I've been looking at the WebDav Servlet for the last few days and in
particular how it interacts with Microsoft clients.


Which clients are we talking about? Windows Explorer?


Yes. The client that gets used when you map a network drive using a 
WebDAV endpoint.



I know that DAV Redirector/Explorer are quite picky about TLS and 
authentication.


Thanks. That is useful to know. I'd read that BASIC auth was disabled by 
default and required a registry tweak to use. I hadn't see anything 
about TLS. I'll keep that in mind.



You might want also try CarotDAV. It served me quite well testing mod_dav and 
Tomcat WebDAV servlet


Noted. My primary objective was to see if the WebdavFixFilter was still 
required (I don't think it is). Seeing what might be required to get 
better out of the box compatibility with the Windows Explorer client was 
secondary. That seems to be dependent on PROPPATCH support which looks 
to require a non-trivial amount of work to get working. I'm happy to 
look at that but without users calling for it, it is going to be low 
priority.


Mark

-
To unsubscribe, e-mail: dev-unsubscr...@tomcat.apache.org
For additional commands, e-mail: dev-h...@tomcat.apache.org



[FRnOG] [TECH] IPvX

2024-05-23 Thread Thomas BRENAC via frnog

Hello list,

As at almost every RIPE NCC meeting I attend, "people" approach me about a
sweet blend of IP evolution combining blockchain and, now, AI... with a
paper like the one attached.

If it rains this weekend, if you missed Eurovision even on replay, and if
this interests you, I would be glad to have your opinion.



__

Thomas BRENAC

+33686263575 



---
Liste de diffusion du FRnOG
http://www.frnog.org/


[PATCH] hw: debugexit: use runstate API instead of plain exit()

2024-05-23 Thread Thomas Weißschuh
Directly calling exit() prevents any kind of management or handling.
Instead use the corresponding runstate API.
The default behavior of the runstate API is the same as exit().

Signed-off-by: Thomas Weißschuh 
---
 hw/misc/debugexit.c | 4 +++-
 1 file changed, 3 insertions(+), 1 deletion(-)

diff --git a/hw/misc/debugexit.c b/hw/misc/debugexit.c
index ab6de69ce72f..c5c562fd9357 100644
--- a/hw/misc/debugexit.c
+++ b/hw/misc/debugexit.c
@@ -12,6 +12,7 @@
 #include "hw/qdev-properties.h"
 #include "qemu/module.h"
 #include "qom/object.h"
+#include "sysemu/runstate.h"
 
 #define TYPE_ISA_DEBUG_EXIT_DEVICE "isa-debug-exit"
 OBJECT_DECLARE_SIMPLE_TYPE(ISADebugExitState, ISA_DEBUG_EXIT_DEVICE)
@@ -32,7 +33,8 @@ static uint64_t debug_exit_read(void *opaque, hwaddr addr, 
unsigned size)
 static void debug_exit_write(void *opaque, hwaddr addr, uint64_t val,
  unsigned width)
 {
-exit((val << 1) | 1);
+qemu_system_shutdown_request_with_code(SHUTDOWN_CAUSE_GUEST_SHUTDOWN,
+   (val << 1) | 1);
 }
 
 static const MemoryRegionOps debug_exit_ops = {

---
base-commit: 7e1c0047015ffbd408e1aa4a5ec1abe4751dbf7e
change-id: 20240523-debugexit-22e7587adbeb

Best regards,
-- 
Thomas Weißschuh 




Re: [Patch, fortran] PR103312 - [11/12/13/14/15 Regression] ICE in gfc_find_component since r9-1098-g3cf89a7b992d483e

2024-05-23 Thread Paul Richard Thomas
Hi Harald,

You were absolutely right about returning 'false' :-) The patch is duly
corrected.

Committed to mainline and will be followed by backports in a few weeks.

Regards

Paul


On Tue, 21 May 2024 at 19:58, Harald Anlauf  wrote:

> Hi Paul,
>
> Am 20.05.24 um 11:06 schrieb Paul Richard Thomas:
> > Hi All,
> >
> > I don't think that this PR is really a regression although the fact that
> it
> > is marked as such brought it to my attention :-)
> >
> > The fix turned out to be remarkably simple. It was found after going
> down a
> > silly number of rabbit holes, though!
> >
> > The chunk in dependency.cc is probably more elaborate than it needs to
> be.
> > Returning -2 is sufficient for the testcase to work. Otherwise, the
> > comments in the patch say it all.
>
> this part looks OK, but can you elaborate on this change to expr.cc:
>
> diff --git a/gcc/fortran/expr.cc b/gcc/fortran/expr.cc
> index c883966646c..4ee2ad55915 100644
> --- a/gcc/fortran/expr.cc
> +++ b/gcc/fortran/expr.cc
> @@ -3210,6 +3210,11 @@ gfc_reduce_init_expr (gfc_expr *expr)
>   {
> bool t;
>
> +  /* It is far too early to resolve a class compcall. Punt to
> resolution.  */
> +  if (expr && expr->expr_type == EXPR_COMPCALL
> +  && expr->symtree->n.sym->ts.type == BT_CLASS)
> +return true;
> +
>
> I would have expected to return 'false' here, as we do not
> have an expression that reduces to a constant.  What am I
> missing?
>
> (The testcase compiles and works here also when using 'false'.)
>
> > OK for mainline? I will delay for a month before backporting.
>
> OK if can you show me wrong...
>
> Thanks,
> Harald
>
> > Regards
> >
> > Paul
> >
>
>


[gcc r15-788] Fortran: Fix ICEs due to comp calls in initialization exprs [PR103312]

2024-05-23 Thread Paul Thomas via Gcc-cvs
https://gcc.gnu.org/g:2ce90517ed75c4af9fc0616f2670cf6dfcfa8a91

commit r15-788-g2ce90517ed75c4af9fc0616f2670cf6dfcfa8a91
Author: Paul Thomas 
Date:   Thu May 23 07:59:46 2024 +0100

Fortran: Fix ICEs due to comp calls in initialization exprs [PR103312]

2024-05-23  Paul Thomas  

gcc/fortran
PR fortran/103312
* dependency.cc (gfc_dep_compare_expr): Handle component call
expressions. Return -2 as default and return 0 if compared with
a function expression that is from an interface body and has
the same name.
* expr.cc (gfc_reduce_init_expr): If the expression is a comp
call do not attempt to reduce, defer to resolution and return
false.
* trans-types.cc (gfc_get_dtype_rank_type,
gfc_get_nodesc_array_type): Fix whitespace.

gcc/testsuite/
PR fortran/103312
* gfortran.dg/pr103312.f90: New test.

Diff:
---
 gcc/fortran/dependency.cc  | 32 +
 gcc/fortran/expr.cc|  5 ++
 gcc/fortran/trans-types.cc |  4 +-
 gcc/testsuite/gfortran.dg/pr103312.f90 | 87 ++
 4 files changed, 126 insertions(+), 2 deletions(-)

diff --git a/gcc/fortran/dependency.cc b/gcc/fortran/dependency.cc
index fb4d94de641..bafe8cbc5bc 100644
--- a/gcc/fortran/dependency.cc
+++ b/gcc/fortran/dependency.cc
@@ -440,6 +440,38 @@ gfc_dep_compare_expr (gfc_expr *e1, gfc_expr *e2)
return mpz_sgn (e2->value.op.op2->value.integer);
 }
 
+
+  if (e1->expr_type == EXPR_COMPCALL)
+{
+  /* This will have emerged from interface.cc(gfc_check_typebound_override)
+via gfc_check_result_characteristics. It is possible that other
+variants exist that are 'equal' but play it safe for now by setting
+the relationship as 'indeterminate'.  */
+  if (e2->expr_type == EXPR_FUNCTION && e2->ref)
+   {
+ gfc_ref *ref = e2->ref;
+ gfc_symbol *s = NULL;
+
+ if (e1->value.compcall.tbp->u.specific)
+   s = e1->value.compcall.tbp->u.specific->n.sym;
+
+ /* Check if the proc ptr points to an interface declaration and the
+names are the same; ie. the overriden proc. of an abstract type.
+The checking of the arguments will already have been done.  */
+ for (; ref && s; ref = ref->next)
+   if (!ref->next && ref->type == REF_COMPONENT
+   && ref->u.c.component->attr.proc_pointer
+   && ref->u.c.component->ts.interface
+   && ref->u.c.component->ts.interface->attr.if_source
+   == IFSRC_IFBODY
+   && !strcmp (s->name, ref->u.c.component->name))
+ return 0;
+   }
+
+  /* Assume as default that TKR checking is sufficient.  */
+ return -2;
+  }
+
   if (e1->expr_type != e2->expr_type)
 return -3;
 
diff --git a/gcc/fortran/expr.cc b/gcc/fortran/expr.cc
index c883966646c..a162744c719 100644
--- a/gcc/fortran/expr.cc
+++ b/gcc/fortran/expr.cc
@@ -3210,6 +3210,11 @@ gfc_reduce_init_expr (gfc_expr *expr)
 {
   bool t;
 
+  /* It is far too early to resolve a class compcall. Punt to resolution.  */
+  if (expr && expr->expr_type == EXPR_COMPCALL
+  && expr->symtree->n.sym->ts.type == BT_CLASS)
+return false;
+
   gfc_init_expr_flag = true;
   t = gfc_resolve_expr (expr);
   if (t)
diff --git a/gcc/fortran/trans-types.cc b/gcc/fortran/trans-types.cc
index 676014e9b98..8466c595e06 100644
--- a/gcc/fortran/trans-types.cc
+++ b/gcc/fortran/trans-types.cc
@@ -1591,7 +1591,7 @@ gfc_get_dtype_rank_type (int rank, tree etype)
   size = size_in_bytes (etype);
   break;
 }
-  
+
   gcc_assert (size);
 
   STRIP_NOPS (size);
@@ -1740,7 +1740,7 @@ gfc_get_nodesc_array_type (tree etype, gfc_array_spec * 
as, gfc_packed packed,
tmp = gfc_conv_mpz_to_tree (expr->value.integer,
gfc_index_integer_kind);
   else
-   tmp = NULL_TREE;
+   tmp = NULL_TREE;
   GFC_TYPE_ARRAY_LBOUND (type, n) = tmp;
 
   expr = as->upper[n];
diff --git a/gcc/testsuite/gfortran.dg/pr103312.f90 
b/gcc/testsuite/gfortran.dg/pr103312.f90
new file mode 100644
index 000..deacc70bf5d
--- /dev/null
+++ b/gcc/testsuite/gfortran.dg/pr103312.f90
@@ -0,0 +1,87 @@
+! { dg-do run }
+!
+! Test the fix for pr103312, in which the use of a component call in
+! initialization expressions, eg. character(this%size()), caused ICEs.
+!
+! Contributed by Arseny Solokha  
+!
+module example
+
+  type, abstract :: foo
+integer :: i
+  contains
+procedure(foo_size), deferred :: size
+procedure(foo_func), deferred :: func
+  end type
+
+  interface
+function foo_

Re: The nested-splitter project has collapsed in complexity

2024-05-22 Thread Thomas Passin
On Tuesday, May 21, 2024 at 3:36:03 PM UTC-4 Thomas Passin wrote:

VR3 is working with  *ekr-3910-no-fl-ns-plugins*, possible quirks aside.  
Will check soon on Linux.  Freewin is working. RPCalc is still working.


OK on linux so far. 



ANN: xterm-392

2024-05-22 Thread Thomas Dickey
Files:
  https://invisible-island.net/archives/xterm/current/xterm-392.tgz
  https://invisible-island.net/archives/xterm/current/xterm-392.tgz.asc
  https://invisible-island.net/archives/xterm/patches/xterm-392.patch.gz
  https://invisible-island.net/archives/xterm/patches/xterm-392.patch.gz.asc
  https://invisible-island.net/archives/xterm/xterm-392.tgz
  https://invisible-island.net/archives/xterm/xterm-392.tgz.asc

Patch #392 - 2024/05/22

 * improve  input decoding for non-Latin1 character sets by preserving
   the sense of GL/GR.
 * add  resource  preferLatin1  to simplify UPSS configuration (Gentoo
   #932154).
 * build-fix  for --disable-boxchars; patch #390 reuses that feature's
   code  to draw the part of the DEC Technical character set which has
   no Unicode equivalent.
 * modify #include of pty.h to work with musl (report by Khem Raj).
 * improve definitions used in clock_gettime logic in
   graphics_sixel.c, as well as updating comments (patch by Ben Wong).
 * amend allowC1Printable changes from patch #391, restoring a special
   case  which  caused C1 characters to be ignored (report/testcase by
   Dmytro Bagrii).


-- 
Thomas E. Dickey 
https://invisible-island.net




Re: Last call before deprecating the nested_splitter plugin

2024-05-22 Thread Thomas Passin
I think one thing is missing.  I'd like to have a command to place the log 
frame into its own panel next to the body editor's.  This layout has the 
tree pane vertically on the left, the body in the middle, and the log frame 
on the right.  I'm sure I could come up with a script to do this, but I 
think there should be a Leo command for it.  The reason is that there are 
times when you want to see many lines in the log frame.  It could be a 
stack trace, the output of key bindings, or an app like my bookmark app or 
RPCalc, that opens in the log frame.  And when VR/VR3 gets opened sharing 
the log frame panel the height crunch gets even worse.

So there should be three built-in layouts instead of two, and there should 
be a command or commands similar to *toggle-split-direction* to activate 
any of them.

I also don't like the name "secondary_splitter".  I have no way to know 
which is "primary" and which is "secondary", and I don't see why I should 
have to or what the distinction is.  Better splitter names would be a good 
thing (sorry I don't have any suggestions yet, but I don't understand the 
distinction so I can't offer any!).

On Wednesday, May 22, 2024 at 5:04:20 PM UTC-4 Edward K. Ream wrote:

> Imo, PR #3911  is 
> ready to be merged into devel.
>
> Please let me know if you disagree.
>
> Edward
>



Re: processes stuck in shutdown following OOM/recovery

2024-05-22 Thread Thomas Munro
On Thu, May 23, 2024 at 9:58 AM Martijn Wallet  wrote:
> The following review has been posted through the commitfest application:
> make installcheck-world:  not tested
> Implements feature:   not tested
> Spec compliant:   not tested
> Documentation:not tested
>
> Hi, I somehow fail to be able to mark all checkboxes on this review page...
> However, build and tested with all passed successfully on Rocky Linux release 
> 8.9 (Green Obsidian).
> Not sure of more reviewing is needed on other Operating Systems since this is 
> only my second review.

Thanks!

I'm also hoping to get review of the rather finickity state machine
logic involved from people familiar with that; I think it's right, but
I'd hate to break some other edge case...

> nb: second mail to see spf is fixed and Thomas receives this message.

FTR 171641337152.1103.7326466732639994038.p...@coridan.postgresql.org
and 171641505305.1105.9868637944637520353.p...@coridan.postgresql.org
both showed up in my inbox, and they both have headers "Received-SPF:
pass ...".




Bug#1071300: marked as pending in ceph

2024-05-22 Thread Thomas Goirand
Control: tag -1 pending

Hello,

Bug #1071300 in ceph reported by you has been fixed in the
Git repository and is awaiting an upload. You can see the commit
message below and you can check the diff of the fix at:

https://salsa.debian.org/ceph-team/ceph/-/commit/e668eec9d3babfc9922ef39954b74011082b41a2


Add fix-ftbfs-with-newer-snappy.patch (Closes: #1071300).


(this message was generated automatically)
-- 
Greetings

https://bugs.debian.org/1071300



[OSList] Re: June 27- July 25 First Global Summit for Co-creating a World that Works for All

2024-05-22 Thread Thomas Herrmann via OSList
What a lovely invitation – thanks!
I signed up and look forward to meeting mny wonderful humans
Good night
Thomas

From: Peggy Holman via OSList 
Sent: 21 May 2024 21:35
To: Open Space Listserv 
Subject: [OSList] June 27- July 25 First Global Summit for Co-creating a World 
that Works for All

Hi my friends,

I wrote about this convening in March with a question about online global 
gatherings.


I am thrilled to finally send you an invitation to the First Global Summit for 
Co-creating a World that Works for 
All<https://www.eventbrite.com/e/first-global-summit-for-co-creating-a-world-that-works-for-all-registration-903889637237?aff=oddtdtcreator>.
 It is an online convening in two-plus phases:


Phase 1: An Open Space Summit that runs June 27/28-28/29 in three times periods 
friendly to different regions. (See schedule of six 5-hour sessions over 48 
hours<https://teamup.com/kszcgrc6rhvbbsut7u>)


Phase 2: A month-long open-ended opportunity from Jun 28/29-July 24/25 for 
self-organized deep dives supported by a social system map of participants and 
a calendar<https://teamup.com/kszcgrc6rhvbbsut7u> with a closing reflection on 
July 24/25.  Check out the map to see the growing list of participants.


Plus: Assuming all goes well, we’ll do it again, expanding our hosting circle 
and co-evolving our approach with each iteration.

Since you know some of the people hosting this gathering, I’ve copied the 
hosting team's names below. We are an international group taking on this crazy, 
ambitious initiative to create connections that support those who are 
co-creating a world that works for all. Connections that invite novel 
possibilities, open hearts, cultivate empathy and belonging, and inspire action.

Our aspiration is to grow a network for sharing knowledge and practices, 
finding partners, and amplifying the support infrastructure and the impact of 
all of our work. In a way, building on what we have learned through 38 years of 
the OS list but with a different focus.


Join us...
Register 
<https://www.eventbrite.com/e/first-global-summit-for-co-creating-a-world-that-works-for-all-registration-903889637237?aff=oddtdtcreator>
 now
And invite your friends!

Questions? Please ask.


Warmly,
Peggy


Hosts

  *   Peggy Holman<https://peggyholman.com/> (USA, lead)
  *   Tova Averbuch<https://tovaaverbuch.com/> (Israel)
  *   Kathryn Kylee<https://www.linkedin.com/in/kathryn-kylee-98040b24a/> 
(USA/India/Germany)
  *   Lucy Wairimu 
Mukuria<https://www.linkedin.com/in/wairimu-mukuria-30b248120/> (Kenya)
  *   Grace Ndanu, (Kajiado county, Kenya)
  *   Funda Oral<https://fundaoraltoussaint.com/>, (Turkey)
  *   Praveen Madan<https://www.linkedin.com/in/pmadan/>, Berrett-Koehler CEO & 
Publisher, (USA)
  *   Pablo 
Restrepo<https://www.linkedin.com/in/pablorestrepo/?originalSubdomain=ca> 
(Colombia)
  *   Ben Roberts<https://www.linkedin.com/in/benjaminjroberts/> (USA)
  *   Bhavna Sharma<https://www.linkedin.com/in/bhavna-sharma-9353855/> (India)
  *   Tonnie vander Zouwen<https://www.linkedin.com/in/tonnievanderzouwen/> 
(The Netherlands)
  *   Cecily Victor<https://www.linkedin.com/in/cecilyvictor/> (USA/India)
  *   Audrey Zheng<https://www.linkedin.com/in/audreyzheng-openspacechina/> 
(China)

Sponsors
The Berrett-Koehler Foundation,<https://www.bkfoundation.org/> in partnership 
with Berrett-Koehler Publishers<https://www.bkconnection.com/>, invites you to 
join others for an online, self-organizing, participant-led summit.



_
Peggy Holman
pe...@peggyholman.com<mailto:pe...@peggyholman.com>

Bellevue, WA  98006
206-948-0432
www.peggyholman.com<http://www.peggyholman.com>

Enjoy the award winning Engaging Emergence: Turning Upheaval into 
Opportunity<https://peggyholman.com/papers/engaging-emergence/>


"An angel told me that the only way to step into the fire and not get burnt, is 
to become
the fire".
  -- Drew Dellinger














Bug#1068250: dracut: Consider switching to the fork dracut-ng

2024-05-22 Thread Thomas Lange
Yes, I already got this information.

I think I will also prepare dracut-ng for Debian. It then has to go
through the NEW queue. Currently I don't know when I will start
this.


>>>>> On Wed, 22 May 2024 20:54:38 +0200, Evgeni Golov  said:

> FWIW, Fedora switched to this fork starting with Fedora 40 [1].
> [1] https://src.fedoraproject.org/rpms/dracut/

-- 
Best regards, Thomas



Re: WebDAV and Microsoft clients

2024-05-22 Thread Mark Thomas

On 22/05/2024 18:46, Rainer Jung wrote:

Am 22.05.24 um 19:21 schrieb Mark Thomas:

All,

I've been looking at the WebDav Servlet for the last few days and in 
particular how it interacts with Microsoft clients.


Basic operations including:
- directory listings
- create new file
- create new directory
- rename
- update contents (ie open a file for editing and then saving it)

all work for port 80 and port 8080 when WebDAV is mounted at "/" or a 
specific context.


Drag/drop and copy/paste do not work. This appears to be related to 
Tomcat not implementing PROPPATCH. There is some guess work involved 
since I don't have access to the Microsoft code but I think the client 
is setting timestamps with PROPPATCH and then checking them. Because 
the PROPPATCH fails the overall operation is failed.


I don't think that the WebdavFixFilter is required any more.

I'd like to propose the following:
- deprecate WebdavFixFilter in all current versions and then remove it
   in Tomcat 11
- add the above information on what works and what doesn't to the
   WebdavServlet Javadoc - maybe along with a note to ping the dev list
   if drag/drop and copy/paste are required (or maybe a BZ issue)
- come back to this if there is user interest in getting drag/drop and
   copy/paste working

Thoughts?


You might already know about this, but in the httpd world the litmus 
test suite was sometimes mentioned:


http://www.webdav.org/neon/litmus/

I have never built or even used it, but it might be useful to check for 
improvements or regressions in the course of bigger changes.


That is actually where the current work started. I re-ran the test suite 
after Rémy's locking changes, found a bug as the test suite had been 
updated since the last time I ran it, fixed that bug, and then started 
looking at the Microsoft client behaviour.


There are a large number of failures around property support as expected 
since we don't implement PROPPATCH


Mark




Re: The improved restart-leo command

2024-05-22 Thread Thomas Passin
What directory are you running the tests from? Unless you set the right one 
using PYTHONPATH, you have to be in the *leo-editor*   directory to be able 
to find *leo.core.xxx*, etc.

On Wednesday, May 22, 2024 at 1:52:35 PM UTC-4 Edward K. Ream wrote:

> PR #3922  significantly 
>> improves Leo's restart-leo command. The command now uses the command-line 
>> arguments in effect when Leo started.
>>
>> I have tested the latest version of Leo's devel branch in a dedicated 
>> Debian & Fedora VM.
>>
>> The command is working fine in both !
>>
>
> Thanks for your testing.
>
> However & FYI: When I run Leo's unit tests in both VMs I receive 43 
>> identical / similar errors:
>>
> [snip] 
>
>>   File 
>> "/home/user/PyVE/GitHub/Leo/leo-editor/leo/external/npyscreen/wgwidget.py", 
>> line 21, in 
>> from leo.core import leoGlobals as g
>> ModuleNotFoundError: No module named 'leo'
>>
>
> This looks like an installation problem.
>
> Edward
>



Re: [RFC PATCH v3 13/21] drm/exec: Rework contended locking

2024-05-22 Thread Thomas Hellström
On Wed, 2024-05-22 at 18:52 +0200, Christian König wrote:
> Am 22.05.24 um 16:32 schrieb Thomas Hellström:
> > On Wed, 2024-05-22 at 07:52 +0200, Christian König wrote:
> > > Am 21.05.24 um 09:16 schrieb Thomas Hellström:
> > > > If contention and backoff occurs during a drm_exec ww
> > > > transaction,
> > > > the contended lock is not locked again until the next orinary
> > > > attempt to lock a dma_resv lock. However, with the introduction
> > > > of
> > > > drm_exec_trylock(), that doesn't work, since the locking of the
> > > > contended lock needs to be a sleeping lock. Neither can we
> > > > ignore
> > > > locking the contended lock during a trylock since that would
> > > > violate
> > > > at least the ww_mutex annotations.
> > > > 
> > > > So resolve this by actually locking the contended lock during
> > > > drm_exec_retry_on_contention(). However, this introduces a new
> > > > point
> > > > of failure since locking the contended lock may return -EINTR.
> > > > 
> > > > Hence drm_exec_retry_on_contention() must take an error
> > > > parameter
> > > > and
> > > > also return a value indicating success.
> > > After thinking more about that I have to pretty clearly NAK this.
> > >    
> > I thought we were beyond upfront NAKing in the first reply :/
> 
> Well my memory could fail me, but I mentioned concerns on this
> approach 
> before.
> 
> I was a bit annoyed seeing that again. But could as well be that my 
> response never got out or that I'm mixing things up.

I haven't seen it at least. Last discussion on this I saw was
here. I didn't see a follow-up on that.

https://lore.kernel.org/dri-devel/953c157bf69df12d831a781f0f638d93717bb044.ca...@linux.intel.com/


> 
> > > It's an intentional design decision to guarantee that at the
> > > start of
> > > the loop no object is locked.
> > > 
> > > This is because Sima and I wanted to integrate userptr handling
> > > into
> > > drm_exec as well in the long term.
> > First I agree the interface looks worse with this patch.
> > But I thought generic userptr handling were going to end up as a
> > gpuvm
> > helper (without using GEM objects) as we've discussed previously.
> 
> We might be talking past each other. That sounds like SVM, e.g. on 
> demand paging.
> 
> What I mean is pre-faulting during command submission like radeon, 
> amdgpu and i915 do for the userptr handling.

Yes, then we're talking about the same thing.

We discussed in this thread here, started by Dave.

https://lore.kernel.org/dri-devel/CAPM=9twPgn+fpbkig0Vhjt=cJdHQFbNH_Z=srhszwuvlkha...@mail.gmail.com/

I still think the right place is in drm_gpuvm for this sort of stuff.
And I think that's the concluding argument by Sima as well.

In any case, If the planned drm_exec development is to be a full
execbuf helper, I think we need a capable sub-helper for ONLY the ww
transaction locking as well, with support for the various locking
primitives. In particular if we're going to be able to port i915 ww
transaction locking over. There are more uses of the ww locking
transacions than execbuf.

> 
> For that you need to re-start the whole handling similar to how you
> need 
> to re-start for the mutex locking when you detect that the page array
> is 
> stale, the difference is that you are not allowed to hold any resv
> locks 
> while pre-faulting.
> 
> That's why it is a requirement that the drm_exec loop starts without
> any 
> locks held.

But wouldn't you need an outer (userptr) loop and an inner
(ww_transaction) loop for this? Why would we want to re-validate
userptrs on -EDEADLKS?
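
Purely to illustrate what I mean by the two loops (all the my_*() helpers
and the job struct are made up, this is not an existing API), something
roughly like:

static int submit_with_userptr(struct my_job *job)
{
	struct drm_exec exec;
	bool done = false;
	int ret;

	do {
		/* outer loop: refresh userptr pages, no dma_resv locks held */
		ret = my_revalidate_userptrs(job);
		if (ret)
			return ret;

		/* inner ww transaction over the dma_resv locks */
		drm_exec_init(&exec, DRM_EXEC_INTERRUPTIBLE_WAIT, 0);
		drm_exec_until_all_locked(&exec) {
			ret = my_lock_and_validate(&exec, job);
			drm_exec_retry_on_contention(&exec);
			if (ret)
				break;
		}

		/* with the locks held, check whether the pages went stale */
		done = !ret && !my_userptr_pages_stale(job);
		if (done)
			ret = my_submit(&exec, job);	/* locks still held */

		drm_exec_fini(&exec);
	} while (!ret && !done);

	return ret;
}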

> 
> > Anyway if still there would be helpers in drm_exec for some other
> > generic userptr solution, those need to be done before the
> > ww_acquire_ctx_init(). The contended locking here is done after, so
> > I
> > can't really see how these would clash.
> 
> Yes, that indeed was a problem. The ww_acquire_ctx_init() was 
> intentionally moved into drm_exec_cleanup() to partially prevent that
> issue.
> 
> I haven't fully figured out how to do handle everything exactly, but
> at 
> least in principle it can be made work. With this change here it
> becomes 
> impossible.
> 
> > Still, If we need to come up with another solution, I think it's
> > fair
> > we clearly sort out why.
> > 
> > > I think we should just document that drm_exec_trylock() can't be
> > > used
> > > to
> > > l

WebDAV and Microsoft clients

2024-05-22 Thread Mark Thomas

All,

I've been looking at the WebDav Servlet for the last few days and in 
particular how it interacts with Microsoft clients.


Basic operations including:
- directory listings
- create new file
- create new directory
- rename
- update contents (ie open a file for editing and then saving it)

all work for port 80 and port 8080 when WebDAV is mounted at "/" or a 
specific context.


Drag/drop and copy/paste do not work. This appears to be related to 
Tomcat not implementing PROPPATCH. There is some guess work involved 
since I don't have access to the Microsoft code but I think the client 
is setting timestamps with PROPPATCH and then checking them. Because the 
PROPPATCH fails the overall operation is failed.


I don't think that the WebdavFixFilter is required any more.

I'd like to propose the following:
- deprecate WebdavFixFilter in all current versions and then remove it
  in Tomcat 11
- add the above information on what works and what doesn't to the
  WebdavServlet Javadoc - maybe along with a note to ping the dev list
  if drag/drop and copy/paste are required (or maybe a BZ issue)
- come back to this if there is user interest in getting drag/drop and
  copy/paste working

Thoughts?

Mark




Re: JSInterop and "JSON.stringify" method return "Converting circular structure to JSON"

2024-05-22 Thread Thomas Broyer


On Wednesday, May 22, 2024 at 12:43:56 PM UTC+2 tenti...@gmail.com wrote:

I misunderstood the documentation... ty for the clarification, Thomas. Can 
you give me some confirmations?

For the Date issue, when you say to use the JsDate do you mean the one in the 
elemental2 package (elemental2.core.JsDate) or the one in the gwt core (
com.google.gwt.core.client.JsDate)?

Any one of them, anything that directly maps to a native JS Date object.
 

So for the Date issue, it's enough to just replace this code:

@JsProperty public native Date getDataRepertorioDocumento();

@JsProperty public native void setDataRepertorioDocumento(Date dataRepertorioDocumento);

With:

@JsProperty public native JsDate getDataRepertorioDocumento();

@JsProperty public native void setDataRepertorioDocumento(JsDate dataRepertorioDocumento);

 

Right ?

Yes.
(note that it works for serializing because a JS Date object has a toJSON() 
method that returns its toISOString(), but it won't work for parsing JSON, 
for that you will have to pass a *reviver* function to JSON.parse() that 
will have to be aware of your object structure to know that the 
dataRepertorioDocumento property value needs to be parsed to a Date object, 
or use a @JsOverlay getter/setter pair that will serialize/parse the 
java.util.Date or JsDate value to/from the JSON representation you want, 
same as List and Map)

I also missed an instance of Integer in your objects, this will have to be 
changed to Double.
 

For the "List" and "Map" problem, i will probably try to use some 
@JsOverlay instead to use a second argument  on the JSON.stringify by the 
way can you point me out some example (i'm not very skilled with this 
library) ?


Could be as simple as (note that a copy is made each time the getter or 
setter is called):
ReferenzaDTOGWT[] nodeIdAllegatti;
// This could also use Elemental's JsArrayLike.asList()
@JsOverlay public List<ReferenzaDTOGWT> getNodeIdAllegatti() {
  return List.of(this.nodeIdAllegatti);
}
@JsOverlay public void setNodeIdAllegatti(List<ReferenzaDTOGWT> nodeIdAllegatti) {
  this.nodeIdAllegatti = nodeIdAllegatti.toArray(new ReferenzaDTOGWT[nodeIdAllegatti.size()]);
}

JsPropertyMap<String> errors;
@JsOverlay public Map<String, String> getErrors() {
  var ret = new HashMap<String, String>();
  errors.forEach(key -> ret.put(key, errors.get(key)));
  return ret;
}
@JsOverlay public void setErrors(Map<String, String> errors) {
  JsPropertyMap<String> obj = JsPropertyMap.of();
  errors.forEach((key, value) -> obj.set(key, value));
  this.errors = obj;
}

Of course for the Map<String, List<String>> mappaAltriMetadati you'd have to 
also transform each List.

The JSON.stringify replacer could look like:
JSONType.StringifyReplacerFn replacer = (key, value) -> {
  if (value instanceof List) {
    return ((List<?>) value).toArray();
  }
  if (value instanceof Map) {
    var obj = JsPropertyMap.of();
    ((Map<String, ?>) value).forEach((k, v) -> obj.set(k, v));
    return obj;
  }
  if (value instanceof Date) {
    return ((Date) value).getTime(); // pass that to JsDate.create() if you
                                     // prefer an ISO-formatted String rather
                                     // than the timestamp
  }
  if (value instanceof Integer) {
    return ((Integer) value).doubleValue();
  }
  return value;
};
 
This is all totally untested (also note that I haven't actually written GWT 
code for years, this is all based on memory and the javadocs)

Also I found this project updated for GWT 2.9.0 and Java 11: 
https://github.com/jp-solutions/gwt-interop-utils . It seems good enough 
for my use case; I'll try it out and let you know.

Not sure what it actually brings on top of plain old JsInterop Base or 
Elemental Core…



Re: More on writing academic papers

2024-05-22 Thread Thomas Dupond via
"G. Branden Robinson"  a écrit :
> At 2024-05-22T09:28:46+0200, Thomas Dupond via wrote:
>> Damian McGuckin  wrote:
>> > Yes. We process a database to automatically generate the invoice
>> > details which is then run through 'groff -mm' to provide the invoice
>> > on a company letterhead.
>> 
>> Also did an ersatz of this at my previous job to generate numbered
>> invoices.  The hardest part was finding the company logo in postscript
>> format :D
>
> If you're outputting PDF, you can use groff's `PDFPIC` macro from the
> stock "pdfpic.tmac" file to achieve the same end.
>
> https://git.savannah.gnu.org/cgit/groff.git/tree/tmac/pdfpic.tmac?h=1.23.0

Yes!  Tremendous macro file, this one.  I don't remember exactly why I
used PostScript, but I think it is because I compiled in the default
Debian WSL on MS Windows without an admin account.  By
default Debian does not include the full groff package and thus you
cannot output to PDF, only to PS.

Regards,
-- 
Thomas




Re: [RFC PATCH v3 15/21] drm/exec: Add a snapshot capability

2024-05-22 Thread Thomas Hellström
On Wed, 2024-05-22 at 15:54 +0200, Thomas Hellström wrote:
> On Wed, 2024-05-22 at 13:27 +0200, Christian König wrote:
> > Am 21.05.24 um 09:16 schrieb Thomas Hellström:
> > > When validating a buffer object for submission, we might need to
> > > lock
> > > a number of object for eviction to make room for the validation.
> > > 
> > > This makes it pretty likely that validation will eventually
> > > succeed,
> > > since eventually the validating process will hold most dma_resv
> > > locks
> > > of the buffer objects residing in the memory type being validated
> > > for.
> > > 
> > > However, once validation of a single object has succeeded it
> > > might
> > > not
> > > be beneficial to hold on to those locks anymore, and the
> > > validator
> > > would want to drop the locks of all objects taken during
> > > validation.
> > 
> > Exactly avoiding that was one of the goals of developing the
> > drm_exec
> > object.
> > 
> > When objects are unlocked after evicting them it just gives
> > concurrent 
> > operations an opportunity to lock them and re-validate them into
> > the 
> > contended domain.
> > 
> > So why should that approach here be beneficial at all?
> 
> It's a matter of being nice to the rest of the system while *still
> guaranteeing progress*. For each object we're trying to validate, we
> keep on evicting other objects until we make progress even if we lock
> all the objects in the domain.
> 
> If we were unlocking after each eviction, we can't really guarantee
> progress.
> 
> OTOH, a concurrent locker of the object may well be one with higher
> priority (lower ticket number) just wanting to perform a pagefault
> 
> So it's a tradeoff between locking just locking other processes out
> to
> allow us to make one step of progress and to in addition hit them
> with
> the big sledgehammer.

I thought I'd also mention that the ideal solution here I think would
be to have an rw_mutex per manager. Ordinary allocations take it in
read mode, evictions take it in write mode. Now the bad thing is it
sits in between ww_mutexes so it would have to be a ww_rw_mutex which
would probably be too nasty to implement.
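
Something like this, purely schematically (a plain rwsem shown, which is
exactly what doesn't work in between the ww_mutexes; struct and helper
names are made up):

struct my_mem_mgr {
	struct rw_semaphore evict_rwsem;
	/* ... */
};

static int my_ordinary_alloc(struct my_mem_mgr *mgr)
{
	int ret;

	down_read(&mgr->evict_rwsem);	/* allocators can run in parallel */
	ret = my_try_alloc_space(mgr);
	up_read(&mgr->evict_rwsem);

	return ret;
}

static int my_evict_for_alloc(struct my_mem_mgr *mgr)
{
	int ret;

	down_write(&mgr->evict_rwsem);	/* exclusive while evicting */
	ret = my_evict_until_space(mgr);
	up_write(&mgr->evict_rwsem);

	return ret;
}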

/Thomas

> 
> /Thomas
> 
> > 
> > Regards,
> > Christian.
> > 
> > > 
> > > Introduce a drm_exec snapshot functionality that can be used to
> > > record the locks held at a certain time, and a restore
> > > functionality
> > > that restores the drm_exec state to the snapshot by dropping all
> > > locks.
> > > 
> > > Snapshots can be nested if needed.
> > > 
> > > Cc: Christian König 
> > > Cc: Somalapuram Amaranath 
> > > Cc: Matthew Brost 
> > > Cc: 
> > > Signed-off-by: Thomas Hellström
> > > 
> > > ---
> > >   drivers/gpu/drm/drm_exec.c | 55
> > > +-
> > >   include/drm/drm_exec.h | 23 +++-
> > >   2 files changed, 76 insertions(+), 2 deletions(-)
> > > 
> > > diff --git a/drivers/gpu/drm/drm_exec.c
> > > b/drivers/gpu/drm/drm_exec.c
> > > index 1383680ffa4a..9eea5d0d3a98 100644
> > > --- a/drivers/gpu/drm/drm_exec.c
> > > +++ b/drivers/gpu/drm/drm_exec.c
> > > @@ -57,6 +57,7 @@ static void drm_exec_unlock_all(struct drm_exec
> > > *exec)
> > >   struct drm_gem_object *obj;
> > >   unsigned long index;
> > >   
> > > + WARN_ON(exec->snap);
> > >   drm_exec_for_each_locked_object_reverse(exec, index,
> > > obj)
> > > {
> > >   dma_resv_unlock(obj->resv);
> > >   drm_gem_object_put(obj);
> > > @@ -90,6 +91,7 @@ void drm_exec_init(struct drm_exec *exec, u32
> > > flags, unsigned nr)
> > >   exec->num_objects = 0;
> > >   exec->contended = DRM_EXEC_DUMMY;
> > >   exec->prelocked = NULL;
> > > + exec->snap = NULL;
> > >   }
> > >   EXPORT_SYMBOL(drm_exec_init);
> > >   
> > > @@ -301,7 +303,6 @@ int drm_exec_lock_obj(struct drm_exec *exec,
> > > struct drm_gem_object *obj)
> > >   goto error_unlock;
> > >   
> > >   return 0;
> > > -
> > >   error_unlock:
> > >   dma_resv_unlock(obj->resv);
> > >   return ret;
> > > @@ -395,5 +396,57 @@ int drm_exec_prepare_array(struct drm_exec
> > > 

Re: [RFC PATCH v3 13/21] drm/exec: Rework contended locking

2024-05-22 Thread Thomas Hellström
On Wed, 2024-05-22 at 07:52 +0200, Christian König wrote:
> Am 21.05.24 um 09:16 schrieb Thomas Hellström:
> > If contention and backoff occurs during a drm_exec ww transaction,
> > the contended lock is not locked again until the next orinary
> > attempt to lock a dma_resv lock. However, with the introduction of
> > drm_exec_trylock(), that doesn't work, since the locking of the
> > contended lock needs to be a sleeping lock. Neither can we ignore
> > locking the contended lock during a trylock since that would
> > violate
> > at least the ww_mutex annotations.
> > 
> > So resolve this by actually locking the contended lock during
> > drm_exec_retry_on_contention(). However, this introduces a new
> > point
> > of failure since locking the contended lock may return -EINTR.
> > 
> > Hence drm_exec_retry_on_contention() must take an error parameter
> > and
> > also return a value indicating success.
> 
> After thinking more about that I have to pretty clearly NAK this.
>   
I thought we were beyond upfront NAKing in the first reply :/

> It's an intentional design decision to guarantee that at the start of
> the loop no object is locked.
> 
> This is because Sima and I wanted to integrate userptr handling into 
> drm_exec as well in the long term.

First I agree the interface looks worse with this patch.
But I thought generic userptr handling was going to end up as a gpuvm
helper (without using GEM objects) as we've discussed previously.
Anyway if still there would be helpers in drm_exec for some other
generic userptr solution, those need to be done before the
ww_acquire_ctx_init(). The contended locking here is done after, so I
can't really see how these would clash.

Still, If we need to come up with another solution, I think it's fair
we clearly sort out why.

> I think we should just document that drm_exec_trylock() can't be used
> to 
> lock the first BO in the loop and explicitly WARN if that's the case.

Unfortunately that's not sufficient for the general use-case. If we
want to keep the ttm_bo_vm approach of dropping the mmap lock when
there is contention on the bo resv, we need to be able to trylock on
first lock. Also bo creation is using trylock but might be able to use
a sleeping lock there. But if that sleeping lock triggers an -EDEADLK
(DEBUG_WW_MUTEX_SLOWPATH) we have the weird situation of referencing an
object that never was fully created as a contending object.

So the only really working alternative solution I can see is that
drm_exec_trylock simply fails if there is a contended lock and we'd
need to live with the weird bo creation situation described above.

/Thomas

> 
> Regards,
> Christian.
> 
> > 
> > Cc: Christian König 
> > Cc: Somalapuram Amaranath 
> > Cc: Matthew Brost 
> > Cc: 
> > Signed-off-by: Thomas Hellström 
> > ---
> >   .../gpu/drm/amd/amdgpu/amdgpu_amdkfd_gpuvm.c  | 16 -
> >   drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c    |  6 ++--
> >   drivers/gpu/drm/amd/amdgpu/amdgpu_csa.c   |  4 +--
> >   drivers/gpu/drm/amd/amdgpu/amdgpu_gem.c   |  8 ++---
> >   drivers/gpu/drm/amd/amdgpu/amdgpu_mes.c   |  8 ++---
> >   drivers/gpu/drm/amd/amdgpu/amdgpu_seq64.c |  4 +--
> >   drivers/gpu/drm/amd/amdgpu/amdgpu_umsch_mm.c  |  8 ++---
> >   drivers/gpu/drm/amd/amdkfd/kfd_svm.c  |  2 +-
> >   drivers/gpu/drm/drm_exec.c    | 35
> > ++-
> >   drivers/gpu/drm/drm_gpuvm.c   |  8 ++---
> >   drivers/gpu/drm/imagination/pvr_job.c |  2 +-
> >   drivers/gpu/drm/msm/msm_gem_submit.c  |  2 +-
> >   drivers/gpu/drm/nouveau/nouveau_uvmm.c    |  2 +-
> >   drivers/gpu/drm/tests/drm_exec_test.c | 12 +++
> >   drivers/gpu/drm/xe/xe_gt_pagefault.c  |  4 +--
> >   drivers/gpu/drm/xe/xe_vm.c    | 10 +++---
> >   include/drm/drm_exec.h    | 23 +---
> >   17 files changed, 92 insertions(+), 62 deletions(-)
> > 
> > diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd_gpuvm.c
> > b/drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd_gpuvm.c
> > index e4d4e55c08ad..4a08a692aa1f 100644
> > --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd_gpuvm.c
> > +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd_gpuvm.c
> > @@ -1152,12 +1152,12 @@ static int reserve_bo_and_vm(struct kgd_mem
> > *mem,
> >     drm_exec_init(>exec, DRM_EXEC_INTERRUPTIBLE_WAIT, 0);
> >     drm_exec_until_all_locked(>exec) {
> >     ret = amdgpu_vm_lock_pd(vm, >exec, 2);
> > -   drm_exec_retry_on_contention(>exec);
> > +   ret = drm_exec_retry_

Re: [RFC PATCH v3 15/21] drm/exec: Add a snapshot capability

2024-05-22 Thread Thomas Hellström
On Wed, 2024-05-22 at 13:27 +0200, Christian König wrote:
> Am 21.05.24 um 09:16 schrieb Thomas Hellström:
> > When validating a buffer object for submission, we might need to
> > lock
> > a number of object for eviction to make room for the validation.
> > 
> > This makes it pretty likely that validation will eventually
> > succeed,
> > since eventually the validating process will hold most dma_resv
> > locks
> > of the buffer objects residing in the memory type being validated
> > for.
> > 
> > However, once validation of a single object has succeeded it might
> > not
> > be beneficial to hold on to those locks anymore, and the validator
> > would want to drop the locks of all objects taken during
> > validation.
> 
> Exactly avoiding that was one of the goals of developing the drm_exec
> object.
> 
> When objects are unlocked after evicting them it just gives
> concurrent 
> operations an opportunity to lock them and re-validate them into the 
> contended domain.
> 
> So why should that approach here be beneficial at all?

It's a matter of being nice to the rest of the system while *still
guaranteeing progress*. For each object we're trying to validate, we
keep on evicting other objects until we make progress even if we lock
all the objects in the domain.

If we were unlocking after each eviction, we can't really guarantee
progress.

OTOH, a concurrent locker of the object may well be one with higher
priority (lower ticket number) just wanting to perform a pagefault

So it's a tradeoff between just locking other processes out to allow us
to make one step of progress, and in addition hitting them with the big
sledgehammer.
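
To make that concrete, a validator using the snapshot/restore pair could
look roughly like below. Sketch only: my_validate_one() is made up, and
drm_exec_snapshot() is assumed as the name of the lock-recording helper
(only drm_exec_restore() shows up in the hunks quoted below).

static int validate_with_snapshot(struct drm_exec *exec,
				  struct drm_gem_object *obj)
{
	struct drm_exec_snapshot snap;
	int ret;

	/* record the locks held right now */
	drm_exec_snapshot(exec, &snap);

	/* may lock more objects for eviction until progress is made */
	ret = my_validate_one(exec, obj);

	/* drop only the locks taken since the snapshot, keep the rest */
	drm_exec_restore(exec, &snap);

	return ret;
}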

/Thomas

> 
> Regards,
> Christian.
> 
> > 
> > Introduce a drm_exec snapshot functionality that can be used to
> > record the locks held at a certain time, and a restore
> > functionality
> > that restores the drm_exec state to the snapshot by dropping all
> > locks.
> > 
> > Snapshots can be nested if needed.
> > 
> > Cc: Christian König 
> > Cc: Somalapuram Amaranath 
> > Cc: Matthew Brost 
> > Cc: 
> > Signed-off-by: Thomas Hellström 
> > ---
> >   drivers/gpu/drm/drm_exec.c | 55
> > +-
> >   include/drm/drm_exec.h | 23 +++-
> >   2 files changed, 76 insertions(+), 2 deletions(-)
> > 
> > diff --git a/drivers/gpu/drm/drm_exec.c
> > b/drivers/gpu/drm/drm_exec.c
> > index 1383680ffa4a..9eea5d0d3a98 100644
> > --- a/drivers/gpu/drm/drm_exec.c
> > +++ b/drivers/gpu/drm/drm_exec.c
> > @@ -57,6 +57,7 @@ static void drm_exec_unlock_all(struct drm_exec
> > *exec)
> >     struct drm_gem_object *obj;
> >     unsigned long index;
> >   
> > +   WARN_ON(exec->snap);
> >     drm_exec_for_each_locked_object_reverse(exec, index, obj)
> > {
> >     dma_resv_unlock(obj->resv);
> >     drm_gem_object_put(obj);
> > @@ -90,6 +91,7 @@ void drm_exec_init(struct drm_exec *exec, u32
> > flags, unsigned nr)
> >     exec->num_objects = 0;
> >     exec->contended = DRM_EXEC_DUMMY;
> >     exec->prelocked = NULL;
> > +   exec->snap = NULL;
> >   }
> >   EXPORT_SYMBOL(drm_exec_init);
> >   
> > @@ -301,7 +303,6 @@ int drm_exec_lock_obj(struct drm_exec *exec,
> > struct drm_gem_object *obj)
> >     goto error_unlock;
> >   
> >     return 0;
> > -
> >   error_unlock:
> >     dma_resv_unlock(obj->resv);
> >     return ret;
> > @@ -395,5 +396,57 @@ int drm_exec_prepare_array(struct drm_exec
> > *exec,
> >   }
> >   EXPORT_SYMBOL(drm_exec_prepare_array);
> >   
> > +/**
> > + * drm_exec_restore() - Restore the drm_exec state to the point of
> > a snapshot.
> > + * @exec: The drm_exec object with the state.
> > + * @snap: The snapshot state.
> > + *
> > + * Restores the drm_exec object by means of unlocking and dropping
> > references
> > + * to objects locked after the snapshot.
> > + */
> > +void drm_exec_restore(struct drm_exec *exec, struct
> > drm_exec_snapshot *snap)
> > +{
> > +   struct drm_gem_object *obj;
> > +   unsigned int index;
> > +
> > +   exec->snap = snap->saved_snap;
> > +
> > +   drm_exec_for_each_locked_object_reverse(exec, index, obj)
> > {
> > +   if (index + 1 == snap->num_locked)
> > +   break;
> > +
> > +   dma_resv_unlock(obj->resv);
> > +   drm_gem_object_put(obj);
> > +   exec

Re: [RFC PATCH v3 16/21] drm/exec: Introduce an evict mode

2024-05-22 Thread Thomas Hellström
On Wed, 2024-05-22 at 15:28 +0200, Christian König wrote:
> Am 21.05.24 um 09:16 schrieb Thomas Hellström:
> > Locking for eviction is in some way different from locking for
> > submission:
> > 
> > 1) We can't lock objects that are already locked for submission,
> > hence DRM_EXEC_IGNORE_DUPLICATES must be unset.
> > 2) We must be able to re-lock objects locked for eviction,
> > either for submission or for yet another eviction, in
> > particular objects sharing a single resv must be considered.
> 
> Yeah, I was already thinking about that as well.
> 
> My idea so far was to have a separate function for locking eviction
> BOs. 
> This function would then use trylock or blocking depending on some
> setting.

Downstream i915 also has a separate locking function for this. I'm fine
with that as well. Probably the most sane choice.


> 
> > 3) There is no point to keep a contending object after the
> > transaction restart. We don't know whether we actually want to use
> > it again.
> 
> Well that isn't true as far as I know.
> 
> If we don't use trylock we still need to lock the object after
> rollback 
> to make sure that we waited for it to become available.

Yes, the transaction restart mentioned above is *after* the relaxation,
so the rollback becomes:

unlock_all
lock_contending_lock
unlock_contending_lock
drop_contending_lock
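
In code that's roughly the following (illustrative only, not the actual
implementation; it just spells out the four steps above in terms of the
drm_exec fields visible in this series):

static int handle_contended(struct drm_exec *exec)
{
	struct drm_gem_object *obj = exec->contended;
	int ret;

	drm_exec_unlock_all(exec);				/* unlock all */
	ret = dma_resv_lock_slow_interruptible(obj->resv,	/* lock contended */
					       &exec->ticket);
	if (!ret)
		dma_resv_unlock(obj->resv);			/* unlock contended */
	drm_gem_object_put(obj);				/* drop contended */
	exec->contended = NULL;

	return ret;
}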


/Thomas


> 
> Regards,
> Christian.
> 
> > So introduce a drm_exec evict mode, and for now instead of
> > explicitly setting it using a function call or implement separate
> > locking functions that use evict mode, assume evict mode if
> > there is a snapshot registered. This can easily be changed later.
> > 
> > To keep track of resvs locked for eviction, use a pointer set
> > implemented by an xarray. This is probably not the most efficient
> > data structure but used as an easy-to-implement first approach.
> > If the set is empty (evict mode never used), the performance-
> > and memory usage impact will be very small.
> > 
> > TODO: Probably want to implement the set using an open addressing
> > hash table.
> > 
> > Cc: Christian König 
> > Cc: Somalapuram Amaranath 
> > Cc: Matthew Brost 
> > Cc: 
> > Signed-off-by: Thomas Hellström 
> > ---
> >   drivers/gpu/drm/drm_exec.c | 77
> > ++
> >   include/drm/drm_exec.h | 15 
> >   2 files changed, 85 insertions(+), 7 deletions(-)
> > 
> > diff --git a/drivers/gpu/drm/drm_exec.c
> > b/drivers/gpu/drm/drm_exec.c
> > index 9eea5d0d3a98..ea79d96f5439 100644
> > --- a/drivers/gpu/drm/drm_exec.c
> > +++ b/drivers/gpu/drm/drm_exec.c
> > @@ -65,6 +65,10 @@ static void drm_exec_unlock_all(struct drm_exec
> > *exec)
> >   
> >     drm_gem_object_put(exec->prelocked);
> >     exec->prelocked = NULL;
> > +
> > +   /* garbage collect */
> > +   xa_destroy(>resv_set);
> > +   xa_init(>resv_set);
> >   }
> >   
> >   /**
> > @@ -92,6 +96,8 @@ void drm_exec_init(struct drm_exec *exec, u32
> > flags, unsigned nr)
> >     exec->contended = DRM_EXEC_DUMMY;
> >     exec->prelocked = NULL;
> >     exec->snap = NULL;
> > +   exec->drop_contended = false;
> > +   xa_init(>resv_set);
> >   }
> >   EXPORT_SYMBOL(drm_exec_init);
> >   
> > @@ -110,6 +116,7 @@ void drm_exec_fini(struct drm_exec *exec)
> >     drm_gem_object_put(exec->contended);
> >     ww_acquire_fini(>ticket);
> >     }
> > +   xa_destroy(>resv_set);
> >   }
> >   EXPORT_SYMBOL(drm_exec_fini);
> >   
> > @@ -139,6 +146,30 @@ bool drm_exec_cleanup(struct drm_exec *exec)
> >   }
> >   EXPORT_SYMBOL(drm_exec_cleanup);
> >   
> > +static unsigned long drm_exec_resv_to_key(const struct dma_resv
> > *resv)
> > +{
> > +   return (unsigned long)resv / __alignof__(typeof(*resv));
> > +}
> > +
> > +static void
> > +drm_exec_resv_set_erase(struct drm_exec *exec, unsigned long key)
> > +{
> > +   if (xa_load(>resv_set, key))
> > +   xa_erase(>resv_set, key);
> > +}
> > +
> > +static bool drm_exec_in_evict_mode(struct drm_exec *exec)
> > +{
> > +   return !!exec->snap;
> > +}
> > +
> > +static void drm_exec_set_evict_mode(struct drm_exec *exec,
> > +       struct drm_exec_snapshot
> > *snap)
> > +{
> > +   exec->snap = snap;
> > +   exec->flags &= ~DRM_EXEC_IGNORE_DUPLICATES;
> > +}
> > +
&

Re: DFSort query

2024-05-22 Thread Ron Thomas
My apologies, Kolusu, for the wrong details I provided.

Thanks much for the sample job; it is working well for my requirement.

Regards
 Ron T



Re: [PULL 10/38] tests/qtest/migration: Add a test for the analyze-migration script

2024-05-22 Thread Thomas Huth

On 22/05/2024 14.48, Fabiano Rosas wrote:

Thomas Huth  writes:


On 21/05/2024 14.46, Fabiano Rosas wrote:

Alex Bennée  writes:


Juan Quintela  writes:


From: Fabiano Rosas 

Add a smoke test that migrates to a file and gives it to the
script. It should catch the most annoying errors such as changes in
the ram flags.

After code has been merged it becomes way harder to figure out what is
causing the script to fail, the person making the change is the most
likely to know right away what the problem is.

Signed-off-by: Fabiano Rosas 
Acked-by: Thomas Huth 
Reviewed-by: Juan Quintela 
Signed-off-by: Juan Quintela 
Message-ID: <20231009184326.15777-7-faro...@suse.de>


I bisected the failures I'm seeing on s390x to the introduction of this
script. I don't know if its simply a timeout on a relatively slow VM:


What's the range of your bisect? That test has been disabled and then
reenabled on s390x. It could be tripping the bisect.

04131e0009 ("tests/qtest/migration-test: Disable the analyze-migration.py test on 
s390x")
81c2c9dd5d ("tests/qtest/migration-test: Fix analyze-migration.py for s390x")

I don't think that test itself could be timing out. It's a very simple
test. It runs a migration and then uses the output to validate the
script.


Agreed, the analyze-migration.py is unlikely to be the issue - especially
since it seems to have been disabled again in commit 6f0771de903b ...
Fabiano, why did you disable it here again? The reason is not mentioned in
the commit description.


Your patch 81c2c9dd5d was merged between my v1 and v2 on the list and I
didn't notice so I messed up the rebase. I'll send a patch soon to fix
that.


Thanks, but I already sent a patch earlier today that should fix the issue:


https://lore.kernel.org/qemu-devel/20240522091255.417263-1-th...@redhat.com/T/#u

 Thomas




Re: [PATCH 1/4] MAINTAINERS: drop audio maintainership

2024-05-22 Thread Thomas Huth

On 16/05/2024 14.03, Gerd Hoffmann wrote:

Remove myself from audio (both devices and backend) entries.
Flip status to "Orphan" for entries which have nobody else listed.

Signed-off-by: Gerd Hoffmann 
---
  MAINTAINERS | 19 ---
  1 file changed, 4 insertions(+), 15 deletions(-)

diff --git a/MAINTAINERS b/MAINTAINERS
index 1b79767d6196..7f52e2912fc3 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS

...

@@ -2388,7 +2387,6 @@ F: hw/virtio/virtio-mem-pci.c
  F: include/hw/virtio/virtio-mem.h
  
  virtio-snd

-M: Gerd Hoffmann 
  R: Manos Pitsidianakis 
  S: Supported


I think the status should be downgraded to Orphan or at least Odd-fixes, 
unless Manos wants to upgrade from "R:" to "M:" ?



  ALSA Audio backend
-M: Gerd Hoffmann 
  R: Christian Schoenebeck 
  S: Odd Fixes
  F: audio/alsaaudio.c


I'd also suggest that Christian either upgrade from R: to M: or that we 
change the status to Orphan



  JACK Audio Connection Kit backend
-M: Gerd Hoffmann 
  R: Christian Schoenebeck 
  S: Odd Fixes
  F: audio/jackaudio.c


ditto


  SDL Audio backend
-M: Gerd Hoffmann 
  R: Thomas Huth 


I'm fine if you update my entry from R: to M: here.


  S: Odd Fixes
  F: audio/sdlaudio.c
  
  Sndio Audio backend

-M: Gerd Hoffmann 
  R: Alexandre Ratchov 
  S: Odd Fixes
  F: audio/sndioaudio.c


Same again, I'd suggest to either set to Orphan or upgrade the R: entry?

 Thomas




Re: [PATCH rfcv2 17/17] tests/qtest: Add intel-iommu test

2024-05-22 Thread Thomas Huth

On 22/05/2024 08.23, Zhenzhong Duan wrote:

Add the framework to test the intel-iommu device.

Currently only tested cap/ecap bits correctness in scalable
modern mode. Also tested cap/ecap bits consistency before
and after system reset.

Signed-off-by: Zhenzhong Duan 
---
  MAINTAINERS|  1 +
  tests/qtest/intel-iommu-test.c | 63 ++
  tests/qtest/meson.build|  1 +
  3 files changed, 65 insertions(+)
  create mode 100644 tests/qtest/intel-iommu-test.c

diff --git a/MAINTAINERS b/MAINTAINERS
index 5dab60bd04..f1ef6128c8 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -3656,6 +3656,7 @@ S: Supported
  F: hw/i386/intel_iommu.c
  F: hw/i386/intel_iommu_internal.h
  F: include/hw/i386/intel_iommu.h
+F: tests/qtest/intel-iommu-test.c
  
  AMD-Vi Emulation

  S: Orphan
diff --git a/tests/qtest/intel-iommu-test.c b/tests/qtest/intel-iommu-test.c
new file mode 100644
index 00..e1273bce14
--- /dev/null
+++ b/tests/qtest/intel-iommu-test.c
@@ -0,0 +1,63 @@
+/*
+ * QTest testcase for intel-iommu
+ *
+ * Copyright (c) 2024 Intel, Inc.
+ *
+ * Author: Zhenzhong Duan 
+ *
+ * This work is licensed under the terms of the GNU GPL, version 2 or later.
+ * See the COPYING file in the top-level directory.
+ */
+
+#include "qemu/osdep.h"
+#include "libqtest-single.h"


It's a little bit nicer to write new tests without libqtest-single.h (e.g. 
in case you ever add migration tests later, you must not use anything that 
uses a global state), so I'd recommend to use "qts = qtest_init(...)" 
instead of qtest_start(...) and then to use the functions with the "qtest_" 
prefix instead of the other functions from libqtest-single.h ... but it's 
only a recommendation, up to you whether you want to respin your patch with 
it or not.
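
Just for illustration, the same kind of test without libqtest-single.h
could look roughly like this (sketch only; the device options and the
register offset are placeholders, not taken from your patch):

#include "qemu/osdep.h"
#include "libqtest.h"

static void test_cap_ecap(void)
{
    QTestState *qts = qtest_init("-machine q35 -device intel-iommu");
    uint64_t cap;

    /* read a DMAR register via the accessor that takes the state */
    cap = qtest_readq(qts, 0xfed90000 + 0x8);
    g_assert_cmpuint(cap, !=, 0);

    qtest_quit(qts);
}

int main(int argc, char **argv)
{
    g_test_init(&argc, &argv, NULL);
    qtest_add_func("/q35/intel-iommu/cap-ecap", test_cap_ecap);
    return g_test_run();
}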


Anyway:
Acked-by: Thomas Huth 

Do you want me to pick this up through the qtest tree, or shall this go 
through some x86-related tree instead?


 Thomas




Re: [PATCH v3 2/3] meson: Add -fno-sanitize=function

2024-05-22 Thread Thomas Huth

On 22/05/2024 12.48, Akihiko Odaki wrote:

-fsanitize=function enforces the consistency of function types, but
include/qemu/lockable.h contains function pointer casts, which violate
the rule. We already disable exact type checks for CFI with
-fsanitize-cfi-icall-generalize-pointers so disable -fsanitize=function
as well.


Ah, I was already wondering why we didn't see this in the CFI builds yet, 
but now I understand :-)


Anyway, just FYI, I've also opened some bug tickets for this some days ago:

https://gitlab.com/qemu-project/qemu/-/issues/2346
https://gitlab.com/qemu-project/qemu/-/issues/2345

(I assume we still should fix the underlying issues at one point in time and 
remove the compiler flag here again later? Otherwise you could close these 
with the "Resolves:" keyword in your patch description)



  qemu_common_flags = [
'-D_GNU_SOURCE', '-D_FILE_OFFSET_BITS=64', '-D_LARGEFILE_SOURCE',
-  '-fno-strict-aliasing', '-fno-common', '-fwrapv' ]
+  '-fno-sanitize=function', '-fno-strict-aliasing', '-fno-common', '-fwrapv' ]
  qemu_cflags = []
  qemu_ldflags = []


With GCC, I get:

cc: error: unrecognized argument to ‘-fno-sanitize=’ option: ‘function’

I think you need to add this via cc.get_supported_arguments() to make sure 
that we only add it for compilers that support this option.


 Thomas




[PATCH] tests/qtest/migration-test: Fix the check for a successful run of analyze-migration.py

2024-05-22 Thread Thomas Huth
If analyze-migration.py cannot be run or crashes, the error is currently
ignored since the code only checks for nonzero values in case the child
exited properly. For example, if you run the test with a non-existing
Python interpreter, it still succeeds:

 $ PYTHON=wrongpython QTEST_QEMU_BINARY=./qemu-system-x86_64 
tests/qtest/migration-test
 ...
 # Running /x86_64/migration/analyze-script
 # Using machine type: pc-q35-9.1
 # starting QEMU: exec ./qemu-system-x86_64 -qtest unix:/tmp/qtest-417639.sock 
-qtest-log /dev/null -chardev socket,path=/tmp/qtest-417639.qmp,id=char0 -mon 
chardev=char0,mode=control -display none -audio none -accel kvm -accel tcg 
-machine pc-q35-9.1, -name source,debug-threads=on -m 150M -serial 
file:/tmp/migration-test-XPLUN2/src_serial -drive 
if=none,id=d0,file=/tmp/migration-test-XPLUN2/bootsect,format=raw -device 
ide-hd,drive=d0,secs=1,cyls=1,heads=1   -uuid 
----  -accel qtest
 # starting QEMU: exec ./qemu-system-x86_64 -qtest unix:/tmp/qtest-417639.sock 
-qtest-log /dev/null -chardev socket,path=/tmp/qtest-417639.qmp,id=char0 -mon 
chardev=char0,mode=control -display none -audio none -accel kvm -accel tcg 
-machine pc-q35-9.1, -name target,debug-threads=on -m 150M -serial 
file:/tmp/migration-test-XPLUN2/dest_serial -incoming tcp:127.0.0.1:0 -drive 
if=none,id=d0,file=/tmp/migration-test-XPLUN2/bootsect,format=raw -device 
ide-hd,drive=d0,secs=1,cyls=1,heads=1 -accel qtest
 **
 ERROR:../../devel/qemu/tests/qtest/migration-test.c:1603:test_analyze_script: 
code should not be reached
 migration-test: ../../devel/qemu/tests/qtest/libqtest.c:240: qtest_wait_qemu: 
Assertion `pid == s->qemu_pid' failed.
 migration-test: ../../devel/qemu/tests/qtest/libqtest.c:240: qtest_wait_qemu: 
Assertion `pid == s->qemu_pid' failed.
 ok 2 /x86_64/migration/analyze-script
 ...

Let's better fail the test in case the child did not exit properly, too.

Signed-off-by: Thomas Huth 
---
 tests/qtest/migration-test.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/tests/qtest/migration-test.c b/tests/qtest/migration-test.c
index 5b4eca2b20..b7e3406471 100644
--- a/tests/qtest/migration-test.c
+++ b/tests/qtest/migration-test.c
@@ -1604,7 +1604,7 @@ static void test_analyze_script(void)
 }
 
 g_assert(waitpid(pid, &wstatus, 0) == pid);
-if (WIFEXITED(wstatus) && WEXITSTATUS(wstatus) != 0) {
+if (!WIFEXITED(wstatus) || WEXITSTATUS(wstatus) != 0) {
 g_test_message("Failed to analyze the migration stream");
 g_test_fail();
 }
-- 
2.45.1
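
For the record, a stand-alone sketch of the wstatus logic this one-liner 
changes (illustration only, not part of the patch): a child that was killed 
by a signal -- e.g. by a failed assertion calling abort() -- has 
WIFEXITED(wstatus) == false, so with the old check it could never fail the 
test.

#include <stdbool.h>
#include <sys/wait.h>

/* success only if the child exited normally *and* returned 0;
 * a crash (WIFSIGNALED) now counts as a failure, too */
static bool child_succeeded(int wstatus)
{
    return WIFEXITED(wstatus) && WEXITSTATUS(wstatus) == 0;
}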




[PATCH] tests/qtest/migration-test: Run some basic tests on s390x and ppc64 with TCG, too

2024-05-22 Thread Thomas Huth
On s390x, we recently had a regression that broke migration / savevm
(see commit bebe9603fc ("hw/intc/s390_flic: Fix crash that occurs when
saving the machine state"). The problem was merged without being noticed
since we currently do not run any migration / savevm related tests on
x86 hosts.
While we currently cannot run all migration tests for the s390x target
on x86 hosts yet (due to some unresolved issues with TCG), we can at
least run some of the non-live tests to avoid such problems in the future.
Thus enable the "analyze-script" and the "bad_dest" tests before checking
for KVM on s390x or ppc64 (this also fixes the problem that the
"analyze-script" test was not run on s390x at all anymore since it got
disabled again by accident in a previous refactoring of the code).

Signed-off-by: Thomas Huth 
---
 PS: Before anyone asks, yes, the quoted problem has been detected by the
 s390x runner in the gitlab-CI, but since that occasionally shows failure
 due to its slowness, it's considered as non-gating and nobody really looked
 at the failing jobs :-(

 tests/qtest/migration-test.c | 39 ++--
 1 file changed, 20 insertions(+), 19 deletions(-)

diff --git a/tests/qtest/migration-test.c b/tests/qtest/migration-test.c
index e8d3555f56..5b4eca2b20 100644
--- a/tests/qtest/migration-test.c
+++ b/tests/qtest/migration-test.c
@@ -3437,6 +3437,20 @@ int main(int argc, char **argv)
 arch = qtest_get_arch();
 is_x86 = !strcmp(arch, "i386") || !strcmp(arch, "x86_64");
 
+tmpfs = g_dir_make_tmp("migration-test-XXXXXX", &err);
+if (!tmpfs) {
+g_test_message("Can't create temporary directory in %s: %s",
+   g_get_tmp_dir(), err->message);
+}
+g_assert(tmpfs);
+
+module_call_init(MODULE_INIT_QOM);
+
+migration_test_add("/migration/bad_dest", test_baddest);
+#ifndef _WIN32
+migration_test_add("/migration/analyze-script", test_analyze_script);
+#endif
+
 /*
  * On ppc64, the test only works with kvm-hv, but not with kvm-pr and TCG
  * is touchy due to race conditions on dirty bits (especially on PPC for
@@ -3444,8 +3458,8 @@ int main(int argc, char **argv)
  */
 if (g_str_equal(arch, "ppc64") &&
 (!has_kvm || access("/sys/module/kvm_hv", F_OK))) {
-g_test_message("Skipping test: kvm_hv not available");
-return g_test_run();
+g_test_message("Skipping tests: kvm_hv not available");
+goto test_add_done;
 }
 
 /*
@@ -3453,19 +3467,10 @@ int main(int argc, char **argv)
  * there until the problems are resolved
  */
 if (g_str_equal(arch, "s390x") && !has_kvm) {
-g_test_message("Skipping test: s390x host with KVM is required");
-return g_test_run();
+g_test_message("Skipping tests: s390x host with KVM is required");
+goto test_add_done;
 }
 
-tmpfs = g_dir_make_tmp("migration-test-XXXXXX", &err);
-if (!tmpfs) {
-g_test_message("Can't create temporary directory in %s: %s",
-   g_get_tmp_dir(), err->message);
-}
-g_assert(tmpfs);
-
-module_call_init(MODULE_INIT_QOM);
-
 if (is_x86) {
 migration_test_add("/migration/precopy/unix/suspend/live",
test_precopy_unix_suspend_live);
@@ -3491,12 +3496,6 @@ int main(int argc, char **argv)
 }
 }
 
-migration_test_add("/migration/bad_dest", test_baddest);
-#ifndef _WIN32
-if (!g_str_equal(arch, "s390x")) {
-migration_test_add("/migration/analyze-script", test_analyze_script);
-}
-#endif
 migration_test_add("/migration/precopy/unix/plain",
test_precopy_unix_plain);
 migration_test_add("/migration/precopy/unix/xbzrle",
@@ -3653,6 +3652,8 @@ int main(int argc, char **argv)
test_vcpu_dirty_limit);
 }
 
+test_add_done:
+
 ret = g_test_run();
 
 g_assert_cmpint(ret, ==, 0);
-- 
2.45.1




Bug#1071581: dialog: stop using libtool-bin

2024-05-22 Thread Thomas Dickey
On Wed, May 22, 2024 at 08:12:04AM +0200, Helmut Grohne wrote:
> Hi Thomas,
> 
> On Tue, May 21, 2024 at 03:06:00PM -0400, Thomas Dickey wrote:
> > hmm - there are two sets of changes - I don't see a reason for the
> > change to the curses function checks.
> 
> Thank you for reviewing my patch. The curses function check change does
> have a reason. It can be solved differently in principle.
> 
> When I ran autoheader, config.hin would lack all the defines that should

I don't use autoheader (though it's present in the fork I've maintained for
about the past quarter-century).  The configure script generates the complete
dlg_config.h without that crutch.  Attempting to bypass that will certainly
lead to unnecessary bug reports.

> have come from CF_CURSES_FUNCS while the relevant HAVE_* defines would
> still show up in config.log after running configure and therefore the
> resulting dlg_config.h would also lack them. That meant that dialog
> would perceive a very dysfunctional curses and its shim would fail to
> compile. Quite clearly, we shouldn't assume a crippled curses and
> config.hin should contain the relevant templates. As it turns out,
> autoheader interprets the m4 files and collects the AC_DEFINE and
> AC_DEFINE_UNQUOTED invocations, well some of them actually. The
> AC_CHECK_FUNCS would be collected whereas CF_CURSES_FUNCS not, even
> though both seemed quite similar. The subtle difference is that
> AC_CHECK_FUNCS uses AS_FOR (a loop that is evaluated using m4) whereas

Actually it would be AC_FOREACH, which invokes AH_TEMPLATE

fwiw, CF_CURSES_FUNCS predates that stuff (1997 versus 1999),
and there are other macros which might use those features.

(I added a to-do to follow up on this)

> CF_CURSES_FUNCS uses a shell for loop. Thus, autoheader would only see a
> single, bogus AC_DEFINE_UNQUOTED for all of CF_CURSES_FUNCS and ignore
> that. Avoiding this shell loop is key here and I went for manually
> unrolling it, because AS_FOR didn't work out initially and unrolling
> seemed workable to me. The crucial bit here is that you cannot use shell
> for control flow here. If you prefer AS_FOR or some other working
> mechanism, that's fine. Just do something about it to avoid dialog
> failing to build when we remove libtool-bin from Debian.
> 
> Helmut
> 

-- 
Thomas E. Dickey 
https://invisible-island.net


signature.asc
Description: PGP signature


Re: More on writing academic papers

2024-05-22 Thread Thomas Dupond via
Hello Bento,

Damian McGuckin  a écrit :
> Bento,
>
> On Tue, 21 May 2024, Bento Borges Schirmer wrote:
>
>> I think I will stop reproducing templates for conferences for now.
>
> Wise move. I just tweak my standard template every time.
>
>> different template, such as that of ACM!
>
> Mine is close to this.
>
>> That's the entire idea after all, right!? I'll be using defaults for
>> now and worry less.
>
> Very wise move.
>
>> And then, after getting the actual research done and written, I can
>> spend some time tailoring the final print, I hope to find help here :)
>> :) Like how to handle all the ABNT rules for citations and references
>> :(
>
> That is the 'refer' tool. Which is a whole new ball game. I wish I
> understood it better than I do.

Refer is very powerful but may require some tweaking like Hans Bezemer
did to get APA-style citations:

https://www.froude.eu/groff/examples/refer-apa.html

>> Now I'm thinking about writing filters and preprocessors. I dunno if
>> groff or mm provide mechanisms to reference figures and headers,
>> which would be handy. Unless that is the case, I'm thinking about
>> writing small C programs that recognize some commands and count
>> figures and headers and replace text for me when referencing them,
>> something like that. I dunno I dunno. Let's see.
>
> The Table of Contents macro already handles Figures and Tables and
> Headings.  You caption each table and figure with .TB and .FG macros
> respectively.

To add to what Damian said, you could look at page 121 in the Unix
Text Processing book:

http://chuzzlewit.co.uk/utp_book-1.1.pdf

>> Whenever I search for users of groff, people frequently mention
>> computer generated content, such as for business. So I guess troff
>> can be used "directly" by writers, but it is also suited to be easy to 
>> be... generated?
>
> Yes. We process a database to automatically generate the invoice details
> which is then run through 'groff -mm' to provide the invoice on a
> company letterhead.

Also did an ersatz of this at my previous job to generate numbered
invoices.  The hardest part was finding the company logo in postscript
format :D

-- 
Thomas




[libreoffice-users] Calc scrolling in slow motion

2024-05-22 Thread Thomas Blasejewicz

Good afternoon
I do have a problem.
Recently I upgraded to 24.2.3.2, because I wanted to enjoy the 
highlighting of rows and columns in Calc.

Which to me, however, looks horrible and I am not using it now.

Problem:
Since I upgraded, I noticed that when I try scrolling through big Calc 
sheets, but also somewhat bigger Writer files, the scrolling becomes 
sickeningly slow the moment the cursor (selected cell) reaches the screen 
edge.
Meaning, when I scroll down for example, the cursor scrolls/slides down 
the screen in an almost normal fashion, but
when the lower border is reached, it stops for a short rest, moves to 
the next cell - stops for a short rest - moves to the next cell ... forever.

In ALL directions.
Not as marked, but a similar behavior in Writer too.

I tried the above mentioned LO version on FOUR machines: three Windows 
10 (basically all with the same settings and software installed) and one Linux 
(LMDE6) machine.

Everywhere exactly the same.
Two notebooks are a little older and one might argue the hardware is 
not up to the job, but
I am typing this mail using the workstation in my office: 2 XEON 
processors with a total of 16 cores = 32 logical CPUs, 64 GB RAM, 
several TB to spare for storage

and the OS runs from a 500 GB SSD. And I use a brand new monitor.
I refuse to believe this kind of hardware is "not enough".

So, is there a trick, like changing a setting somewhere, to make it faster?
In the 10-15 years I have been using LibreOffice, I have never 
felt so frustrated with the poor performance.
If there is no trick and the slow motion is "the way LO is supposed to work 
now" ... I will gladly go back to an earlier version that worked.

This is too much stress.

I DO hope, there is a trick to get a proper behavior.
Thank you
Thomas


--
To unsubscribe e-mail to: users+unsubscr...@global.libreoffice.org
Problems? https://www.libreoffice.org/get-help/mailing-lists/how-to-unsubscribe/
Posting guidelines + more: https://wiki.documentfoundation.org/Netiquette
List archive: https://listarchives.libreoffice.org/global/users/
Privacy Policy: https://www.documentfoundation.org/privacy


Re: ServiceBindingPropertySource

2024-05-22 Thread Mark Thomas

On 21/05/2024 18:50, Christopher Schultz wrote:



1. Allow ServiceBindingPropertySource to use the SERVICE_BINDING_ROOT 
environment variable *or* a system property with an appropriate name 
such as service.binding.root, with the system property overriding the 
environment variable.


Seems reasonable to me but keep in mind I've never used this code.

2. Have ServiceBindingPropertySource fall-back to system property 
resolution if no matching file is found. Maybe we should do this with 
all PropertySource classes provided by Tomcat?


My reading of the docs and the code is that SystemPropertySource is 
always added already.


3. If the SERVICE_BINDING_ROOT environment variable is being used, copy 
its value into a system property. This will allow application software 
or Tomcat itself to use the file reference as necessary. For example:


Again seems reasonable to me but same caveat as above.

Mark

-
To unsubscribe, e-mail: dev-unsubscr...@tomcat.apache.org
For additional commands, e-mail: dev-h...@tomcat.apache.org



Re: [Bug 2066203] [NEW] Libraries compiled without Shadow Stack support

2024-05-22 Thread Thomas Orgis
Am Mon, 20 May 2024 23:51:15 -
schrieb Marcos Alano <2066...@bugs.launchpad.net>: 

> [6399376a4e90] main audio output warning: cannot load module
`/usr/lib/x86_64-linux-gnu/vlc/plugins/audio_output/libpulse_plugin.so'
(/lib/x86_64-linux-gnu/libmpg123.so.0: rebuild shared object with SHSTK
support enabled)

Regarding libmpg123, you either need to disable assembly optimizations
(build with generic decoders only), I presume, or someone needs to provide a
patch that adds SHSTK to them. I don't know which implementation of
shadow stacks glibc/gcc is using in that setup. I've read up on the
concept and so far only figured that this is part of a spiral that
complicates ABI and makes providing assembly-optimized functions ever
harder. This would be fine if compilers finally would be smart enough
to evade the need to do so. Last time I checked, hand-tuned AVX
decoding was still a lot more efficient.

We already handle IBT, I think, with indirect jumps landing only in C
wrapper functions. I wonder if we could also limit the shadow stack
impact to those with some compiler/linker flags. The assembly routines
are rather strict math, many years old now without much of attack
surface. All parsing of input is before them in C. They just do lots of
multiplication/addition.

One might try to write a set of optimizations using intrinsics for
modern CPUs that then also get the treatment of shadow stacks or the
next shiny security measure. Porting the AVX code to GCC (and/or other)
inline ASM might also work for some platforms.

(Still, I am wondering why pulseaudio output should need MPEG decoding.)


Alrighty then,

Thomas

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/2066203

Title:
  Libraries compiled without Shadow Stack support

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/mpg123/+bug/2066203/+subscriptions


-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs


Re: [PULL 10/38] tests/qtest/migration: Add a test for the analyze-migration script

2024-05-21 Thread Thomas Huth

On 21/05/2024 14.46, Fabiano Rosas wrote:

Alex Bennée  writes:


Juan Quintela  writes:


From: Fabiano Rosas 

Add a smoke test that migrates to a file and gives it to the
script. It should catch the most annoying errors such as changes in
the ram flags.

After code has been merged it becomes way harder to figure out what is
causing the script to fail, the person making the change is the most
likely to know right away what the problem is.

Signed-off-by: Fabiano Rosas 
Acked-by: Thomas Huth 
Reviewed-by: Juan Quintela 
Signed-off-by: Juan Quintela 
Message-ID: <20231009184326.15777-7-faro...@suse.de>


I bisected the failures I'm seeing on s390x to the introduction of this
script. I don't know if it's simply a timeout on a relatively slow VM:


What's the range of your bisect? That test has been disabled and then
reenabled on s390x. It could be tripping the bisect.

04131e0009 ("tests/qtest/migration-test: Disable the analyze-migration.py test on 
s390x")
81c2c9dd5d ("tests/qtest/migration-test: Fix analyze-migration.py for s390x")

I don't think that test itself could be timing out. It's a very simple
test. It runs a migration and then uses the output to validate the
script.


Agreed, the analyze-migration.py is unlikely to be the issue - especially 
since it seems to have been disabled again in commit 6f0771de903b ...
Fabiano, why did you disable it here again? The reason is not mentioned in 
the commit description.


But with regards to the failures, please note that we also had a bug 
recently, starting with commit 9d1b0f5bf515a0 and just fixed in commit 
bebe9603fcb072dc ... maybe that affected your bisecting, too.
(it's really bad that this bug sneaked in without being noticed ... we 
should maybe look into running at least some of the migration tests with TCG 
on x86 hosts, too...)


 Thomas




Re: The nested-splitter project has collapsed in complexity

2024-05-21 Thread Thomas Passin

On Tuesday, May 21, 2024 at 3:36:03 PM UTC-4 Thomas Passin wrote:

VR3 is working with  *ekr-3910-no-fl-ns-plugins*, possible quirks aside.  
Will check soon on Linux.  Freewin is working. RPCalc is still working.


The new layout *in-body* does put VR3 in a new panel next to the body 
editor.  But that whole frame is squashed all the way over to the right so 
it's not visible.  You have to notice the splitter separator and drag it to 
pull the body.VR3 frame into visibility.

-- 
You received this message because you are subscribed to the Google Groups 
"leo-editor" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to leo-editor+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/leo-editor/3767a2fb-cfc9-415a-bef0-38ef75e3ecd5n%40googlegroups.com.


[kcolorchooser] [Bug 479406] The "Pick Screen Color" button is missing on Wayland session

2024-05-21 Thread Thomas Weißschuh
https://bugs.kde.org/show_bug.cgi?id=479406

Thomas Weißschuh  changed:

   What|Removed |Added

  Latest Commit||https://invent.kde.org/grap
   ||hics/kcolorchooser/-/commit
   ||/dace6c0d2b04b444b4e4a92045
   ||0a7ed24b79cc30
 Resolution|--- |FIXED
 Status|CONFIRMED   |RESOLVED

--- Comment #18 from Thomas Weißschuh  ---
Git commit dace6c0d2b04b444b4e4a920450a7ed24b79cc30 by Thomas Weißschuh.
Committed on 21/05/2024 at 18:02.
Pushed by nicolasfella into branch 'master'.

Allow dbus processing in qt-base to enable color-picking via portal

qt-base uses dbus to query the desktop portal if color-picking is
supported, without explicitly waiting for the result.
kcolorchooser creates its QColorDialog before the response was processed
and therefore color picking via the portal is presumed to be
unavailable.

Give the eventloop the opportunity to process the event and only
afterwards create the QColorDialog.

See https://bugreports.qt.io/browse/QTBUG-120957

M  +3-0kcolorchooser.cpp

https://invent.kde.org/graphics/kcolorchooser/-/commit/dace6c0d2b04b444b4e4a920450a7ed24b79cc30

-- 
You are receiving this mail because:
You are watching all bug changes.

Re: The next version of Leo will be 6.8.0

2024-05-21 Thread Thomas Passin

On Tuesday, May 21, 2024 at 3:20:34 PM UTC-4 Edward K. Ream wrote:

The new version number indicates (per the semantic versioning 
 convention) that the next version of Leo will contain 
*breaking 
changes* that might significantly impact existing scripts and plugins.

Three issues could break existing code:

- #3910 : Deprecate 
free_layout and nested_splitter plugins. This issue is potentially a 
wide-ranging change.

- #3915 : Use slots 
for most of Leo's classes. This issue affects only scripts that inject 
ivars into Leo's classes. The workarounds are straightforward.

As long as we continue to have commander-specific and global user 
dictionaries, it shouldn't be much of a problem. I have sometimes added 
functions directly to c or g so certain variables or functions would 
persist past invocation.  With stable user dictionaries I could use them 
just as well.  

- #3925 : Make 
reload-settings/stylesheets be synonyms for restart-leo. This issue should 
have minimal practical impact.

Actually, I don't agree with this one about reload-settings, at least for 
outline-local settings.  I have often changed a setting in one outline and 
reloaded settings to see the effect.  Restarting Leo each time would be a 
nuisance and would slow the development of the changes.  Globally, changing 
menus and then resetting doesn't work anyway, at least not for 
currently-open outlines, and reloading the stylesheets has been a little 
weird so I wouldn't miss that command, I think.

*Summary*

None of these issues is complete, but I expect all three to be part of Leo 
6.8.0.

All of your questions and comments are welcome. Expect 6.8.0 sometime this 
summer.

Edward

-- 
You received this message because you are subscribed to the Google Groups 
"leo-editor" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to leo-editor+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/leo-editor/2e6a936b-ad8c-43b4-b60e-c83d2d9ef97fn%40googlegroups.com.


Re: The nested-splitter project has collapsed in complexity

2024-05-21 Thread Thomas Passin
VR3 is working with  *ekr-3910-no-fl-ns-plugins*, possible quirks aside.  
Will check soon on Linux.  Freewin is working. RPCalc is still working.

Did you by any chance bind the *vr3* command to a key shortcut? W11 might 
be using that shortcut itself by default.

On Tuesday, May 21, 2024 at 3:02:21 PM UTC-4 Edward K. Ream wrote:

On Tue, May 21, 2024 at 1:04 PM Thomas Passin wrote:

>> Thomas, please test the ekr-3910-no-fl-ns-plugins branch and report if 
you notice anything unusual with the `vr3` command.  Thanks.

> I'm about to do that very thing.  I don't have Windows 11, only Windows 
10, and I will be resisting W11 as long as possible.  So I may not be able 
to work on that particular quirk. 

Excellent. Every bit of testing helps.

Again, feel free to change my new code in any way you like.

Edward

-- 
You received this message because you are subscribed to the Google Groups 
"leo-editor" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to leo-editor+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/leo-editor/5292ce0d-21e4-4208-8ab5-e70ce0f79e94n%40googlegroups.com.


Bug#1071581: dialog: stop using libtool-bin

2024-05-21 Thread Thomas Dickey
On Tue, May 21, 2024 at 04:00:56PM +0200, Helmut Grohne wrote:
> Source: dialog
> Version: 1.3-20240307-2
> Severity: normal
> Tags: patch
> User: debian-cr...@lists.debian.org
> Usertags: cross-satisfiability
> 
> Hi Santiago,
> 
> we want to remove the package libtool-bin from the archive, because any
> attempt of using it breaks cross compilation. The dialog package is a
> bit strange in this regard. Its autoconf stuff attempts to detect
> whether there is a libtool.m4 and when there isn't attempts to use a
> pre-configured libtool (the one from libtool-bin). Unfortunately, last
> time it was autoreconf'ed, that happened without libtool.m4. So
> basically, making libtool-bin go away here amounts to autoreconfing
> dialog after libtoolizing it. And that's pretty much what I did in the
> attached patch. The dialog binary and libdialog.la are bit-identical
> with this change.

hmm - there are two sets of changes - I don't see a reason for the
change to the curses function checks.

(as for libtool - I recall commenting on that, recently)

-- 
Thomas E. Dickey 
https://invisible-island.net


signature.asc
Description: PGP signature


Re: Cron cd /srv/gump/public/gump/cron; /bin/bash gump.sh all

2024-05-21 Thread Mark Thomas
Nothing to worry about. I started the run early so this is just the cron 
job complaining about the job I started.


Mark


On 21/05/2024 19:00, Cron Daemon wrote:

The lock file [/srv/gump/public/gump/gump.lock] exists, and is locked..
Either Gump is still running, or it terminated very abnormally.
Please resolve this (waiting or removing the lock file) before retrying.
 


-
To unsubscribe, e-mail: general-unsubscr...@gump.apache.org
For additional commands, e-mail: general-h...@gump.apache.org



-
To unsubscribe, e-mail: general-unsubscr...@gump.apache.org
For additional commands, e-mail: general-h...@gump.apache.org



Re: The nested-splitter project has collapsed in complexity

2024-05-21 Thread Thomas Passin

On Tuesday, May 21, 2024 at 1:10:16 PM UTC-4 Edward K. Ream wrote:


If you mean merging my private branch (PR #3924) with  your 
*ekr-3910-no-fl-ns-plugins* branch, please go ahead right away.  

Done.

Before and after the merge,  the `vr3` command works as I expect, but it 
also causes a weird interaction with Windows 11. I get a message saying 
"Press Alt-Z to use GeForce experience"!! I have no idea what is going on.

Thomas, please test the ekr-3910-no-fl-ns-plugins branch and report if you 
notice anything unusual with the `vr3` command.  Thanks.


I'm about to do that very thing.  I don't have Windows 11, only Windows 10, 
and I will be resisting W11 as long as possible.  So I may not be able to 
work on that particular quirk. 

-- 
You received this message because you are subscribed to the Google Groups 
"leo-editor" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to leo-editor+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/leo-editor/18600d07-c4d4-46ff-8a34-6ef6ab319103n%40googlegroups.com.


[clang] [clang] Fix Typo on cindex.py (PR #92945)

2024-05-21 Thread Thomas Applencourt via cfe-commits

https://github.com/TApplencourt edited 
https://github.com/llvm/llvm-project/pull/92945
___
cfe-commits mailing list
cfe-commits@lists.llvm.org
https://lists.llvm.org/cgi-bin/mailman/listinfo/cfe-commits


[clang] Fix Typo on cindex.py (PR #92945)

2024-05-21 Thread Thomas Applencourt via cfe-commits

https://github.com/TApplencourt created 
https://github.com/llvm/llvm-project/pull/92945

None

>From b42fda1b00f2f5ca8ae312cf0d372e04582c5e4b Mon Sep 17 00:00:00 2001
From: Thomas Applencourt 
Date: Tue, 21 May 2024 13:00:37 -0500
Subject: [PATCH] Fix Typo on cindex.py

---
 clang/bindings/python/clang/cindex.py | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/clang/bindings/python/clang/cindex.py 
b/clang/bindings/python/clang/cindex.py
index 302d99dccd77b..2e20821b5f26d 100644
--- a/clang/bindings/python/clang/cindex.py
+++ b/clang/bindings/python/clang/cindex.py
@@ -865,7 +865,7 @@ def __repr__(self):
 # template parameter, or class template partial specialization.
 CursorKind.TEMPLATE_REF = CursorKind(45)
 
-# A reference to a namespace or namepsace alias.
+# A reference to a namespace or namespace alias.
 CursorKind.NAMESPACE_REF = CursorKind(46)
 
 # A reference to a member of a struct, union, or class that occurs in
@@ -2769,7 +2769,7 @@ class _CXUnsavedFile(Structure):
 
 
 # Functions calls through the python interface are rather slow. Fortunately,
-# for most symboles, we do not need to perform a function call. Their spelling
+# for most symbols, we do not need to perform a function call. Their spelling
 # never changes and is consequently provided by this spelling cache.
 SpellingCache = {
 # 0: CompletionChunk.Kind("Optional"),

___
cfe-commits mailing list
cfe-commits@lists.llvm.org
https://lists.llvm.org/cgi-bin/mailman/listinfo/cfe-commits


DFSort query

2024-05-21 Thread Ron Thomas
Hi All-

In the below data we need to extract, within each cross ref nbr: if seq nbr = 1, 
get its Pacct_NBR and the related acct nbrs from the set.

In the below dataset, for cross ref nbr = 24538 we have 2 sets of data, and 
for 24531 we have 1 set.



Acct_NBR    Pacct_NBR       LAST_CHANGE_TS              CROSS_REF_NBR  SEQ_NBR
600392811   1762220138659   2024-04-18-10.38.09.570030  24538          1
505756281   1500013748790   2024-04-18-10.38.09.570030  24538          2
59383061    1500013748790   2024-04-18-10.38.09.570030  24538          3
59267071    1500013748790   2024-04-18-10.38.09.570030  24538          4
505756281   1500013748790   2024-01-15-08.05.14.038792  24538          1
59383061    1500013748790   2024-01-15-08.05.14.038792  24538          2
59267071    1500013748790   2024-01-15-08.05.14.038792  24538          3
600392811   1762220138659   2024-01-15-08.05.14.038792  24538          4
600392561   1762220138631   2024-01-15-08.05.14.038792  24531          1
600392561   1762220138631   2024-01-15-08.05.14.038792  24531   1

Output 

Acct_NBR    Pacct_NBR
600392811   1762220138659
505756281   1762220138659
59383061    1762220138659
59267071    1762220138659
505756281   1500013748790
59383061    1500013748790
59267071    1500013748790
600392811   1500013748790
600392561   1762220138631

Data size
Acct_NBR 10 bytes
Pacct_NBR 15 bytes
LAST_CHANGE_TS 20 bytes
CROSS_REF_NBR  5 bytes
SEQ_NBR 2 bytes

Could someone please let me know how we can build this data using dfsort ?

Regards
Ron T

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


[dolphin] [Bug 483797] Selected folders are black

2024-05-21 Thread Thomas Rossi
https://bugs.kde.org/show_bug.cgi?id=483797

Thomas Rossi  changed:

   What|Removed |Added

 CC||trossi@gmail.com

--- Comment #5 from Thomas Rossi  ---
I confirm this bug on my configuration

SOFTWARE/OS VERSIONS
Linux: Fedora 40
KDE Plasma Version: 6.0.4
KDE Frameworks Version: 6.2.0
Qt Version: 6.7.0
Graphics Platform: Wayland

-- 
You are receiving this mail because:
You are watching all bug changes.

Re: gwt-maven-springboot-archetype updated ...

2024-05-21 Thread Thomas Broyer


On Tuesday, May 21, 2024 at 6:02:31 PM UTC+2 frank.h...@googlemail.com 
wrote:

Ok, got it, I was thinking we were talking about the generated project ... 
Yep correct, usually running the verify goal will compare the generated 
sources with the ones stored under test resources. There is no test where 
the generated project gets started/tested to see if it works.


Out of curiosity, any specific reason you don't automate a "mvn clean 
verify" to validate the generated project?
(of course testing that devmode works, or that the built artifact actually 
works is trickier and probably not worth it, but a simple build would still 
be better than nothing IMO)

-- 
You received this message because you are subscribed to the Google Groups "GWT 
Users" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to google-web-toolkit+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/google-web-toolkit/05b26042-335b-4443-b036-70eecf9a0ef0n%40googlegroups.com.


[Bug 2066285] [NEW] ModuleNotFoundError: No module named 'imp' with python 3.12

2024-05-21 Thread Thomas Bechtold
Public bug reported:

From Noble, using osc (0.169.1-2) from universe and calling osc results
in:

[~]$ osc

Traceback (most recent call last):
  File "/usr/bin/osc", line 10, in 
from osc import commandline, babysitter
  File "/usr/lib/python3/dist-packages/osc/commandline.py", line 14, in 
import imp
ModuleNotFoundError: No module named 'imp'

ProblemType: Bug
DistroRelease: Ubuntu 24.04
Package: osc 0.169.1-2
ProcVersionSignature: Ubuntu 6.8.0-31.31-generic 6.8.1
Uname: Linux 6.8.0-31-generic x86_64
NonfreeKernelModules: zfs
ApportVersion: 2.28.1-0ubuntu3
Architecture: amd64
CasperMD5CheckResult: pass
CurrentDesktop: ubuntu:GNOME
Date: Tue May 21 17:26:32 2024
InstallationDate: Installed on 2023-10-08 (226 days ago)
InstallationMedia: Ubuntu 23.10 "Mantic Minotaur" - Beta amd64 (20230919.1)
PackageArchitecture: all
ProcEnviron:
 LANG=en_US.UTF-8
 PATH=(custom, no user)
 SHELL=/usr/bin/zsh
 TERM=xterm-256color
 XDG_RUNTIME_DIR=
SourcePackage: osc
UpgradeStatus: No upgrade log present (probably fresh install)

** Affects: osc (Ubuntu)
 Importance: Undecided
 Status: New


** Tags: amd64 apport-bug noble wayland-session

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/2066285

Title:
  ModuleNotFoundError: No module named 'imp' with python 3.12

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/osc/+bug/2066285/+subscriptions


-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs

Re: Using JsInterop to create JS object literals

2024-05-21 Thread Thomas Broyer
With the following:

@JsType(namespace = JsPackage.GLOBAL, name = "Object", isNative = true)
public class XXXGWT {
public String codiceAmministrazione;
}

you should be able to write:

var xxx = new XXXGWT();
xxx.codiceAmministrazione = "foo";

and have it translate more or less to:

let xxx = new $wnd.Object()
xxx.codiceAmministrazione = "foo";

With the getter and setter, you have to annotate them with @JsProperty, but 
the generated JS will be identical, so you've actually only made your code 
more verbose for no compelling reasons (besides maybe pleasing static 
analyzers like Sonar that will yell if you don't encapsulate fields).

On Tuesday, May 21, 2024 at 4:30:35 PM UTC+2 tenti...@gmail.com wrote:

Just to be clear on my use case this is my java class:

@JsType(namespace = JsPackage.GLOBAL, name = "Object", isNative = true)
public class XXXGWT {

private String codiceAmministrazione;

@JsConstructor
public DocumentoPAC4DTOGWT() {}

public native String getCodiceAmministrazione();

public native void setCodiceAmministrazione(String codiceAmministrazione);

}

and this is the erorr i'm getting on the setter : 


DesktopApp-0.js:13132 Tue May 21 16:24:02 GMT+200 2024 
com.google.gwt.logging.client.LogConfiguration SEVERE: Exception caught: 
(TypeError) : b.setCodiceAmministrazione is not a function 
com.google.gwt.event.shared.UmbrellaException: Exception caught: 
(TypeError) : b.setCodiceAmministrazione is not a function
Il giorno martedì 21 maggio 2024 alle 14:56:22 UTC+2 Marco Tenti 
(IoProgrammo88) ha scritto:

Sorry Thomas, about your last comment "I'd rather use fields than 
getters/setters for such objects" - can you point me to some example 
for this?

Il giorno mercoledì 29 giugno 2022 alle 09:48:49 UTC+2 Thomas Broyer ha 
scritto:

Using isNative=true, you're telling GWT that you're only "mapping" in Java 
a *type* that exists in JS. The default naming rule is that the full name 
of that type is the fully qualified name of the Java class, you can change 
the simple name with 'name', and the *prefix* with namespace (which 
defaults to the package name for top-level classes, or the enclosing type's 
JS name for nested classes). So with namespace=GLOBAL but without the 
name="Object", you're saying that you want to basically do a 'new 
$wnd.MyPluginConfig()' in JS, and the browser rightfully complains that 
there's no such MyPluginConfig. Adding name="Object" means you'll do a 'new 
$wnd.Object()' in JS.

Fwiw, I'd rather use fields than getters/setters for such objects. YMMV.

On Wednesday, June 29, 2022 at 8:38:19 AM UTC+2 Nicolas Chamouard wrote:

Thank you !
It is a bit mysterious to me, but with *name = "Object"* the constructor 
works :)


Le mercredi 29 juin 2022 à 00:47:32 UTC+2, m.conr...@gmail.com a écrit :

try adding name = "Object" so that it uses an empty javascript Object as 
the wrapped item.

I found this via Googling:

@JsType(namespace = JsPackage.GLOBAL, isNative = true, name = "Object")
public class MyPluginConfig {
    @JsProperty public void set(String str);
    @JsProperty public String get();
    ...
}

Ref: https://stackoverflow.com/a/36329387/12407701


On Tue, Jun 28, 2022 at 6:24 PM Nicolas Chamouard  
wrote:

Yes, it does not change anything : 

@JsType(isNative=true, namespace = JsPackage.GLOBAL)
public class OptionOverrides {

    @JsConstructor
    public OptionOverrides() {}

    @JsProperty
    public native String getInitialView();

    @JsProperty
    public native void setInitialView(String initialView);
}


Still the same error: $wnd.OptionOverrides is not a constructor

Le mardi 28 juin 2022 à 23:27:08 UTC+2, m.conr...@gmail.com a écrit :

Have you tried giving the class a constructor?


On Tue, Jun 28, 2022 at 4:04 PM Nicolas Chamouard  
wrote:

Hello,

I am using JsInterop to integrate FullCalendar to my GWT application.
As described here https://fullcalendar.io/docs/initialize-globals, I am 
supposed to create an object literal and pass it to the Calendar() 
constructor.

I have managed to create this class : 

@JsType(namespace = JsPackage.GLOBAL)
public class OptionOverrides {

    @JsProperty
    public native String getInitialView();

    @JsProperty
    public native void setInitialView(String initialView);
}

It works but the FullCalendar complains about all the Java Object stuff 
that is translated to javascript: equals(), hashCode(), etc.

I have tried to add isNative=true to my class, but in this case I cannot 
instantiate it in Java (error: $wnd.OptionOverrides is not a constructor)

Is there an other way to do this, am I missing something here ?

Thanks



-- 
You received this message because you are subscribed to the Google Groups 
"GWT Users" group.
To unsubscribe from this group and stop receiving emails from it, send an 
email to

[nexa] Data Protection Regulation - AI labour : interviews request

2024-05-21 Thread Thomas Le Bonniec
Dear members of the Nexa mailing list, 

This e-mail is intended to ask those of you who might be available and 
interested whether you'd be willing to do an interview with me. 
I am now visiting the Nexa center up until the end of June, and will be 
presenting my research topic tomorrow at the [ https://nexa.polito.it/lunch-114 
| 114th lunch seminar ]. 

So if you : 

- engage in the topics of AI labour and/or personal data protection 
- are a member of an AI producing company // work (or worked) with or as a data 
annotator to train AI systems // provide support or advice to Data Protection 
Authorities in the EU // are or were a member of a union or an NGO engaging 
with personal data protection and AI at work 


and have some free time to have a chat, even an informal one, please let me 
know. 

Many thanks, 

Kind regards, 

Thomas Le Bonniec 



[ https://www.telecom-paris.fr/ ] 

Thomas LE BONNIEC 
Doctorant SES- I3 
[ https://diplab.eu/ | DiPLab ] 

Bureau 3A343 
19 place Marguerite Perey 
CS 20031 
91123 Palaiseau Cedex 
Une école de [ https://www.imt.fr/ | l'IMT ] 



[pve-devel] applied-series: [PATCH zfsonlinux v2 0/2] Update to ZFS 2.2.4

2024-05-21 Thread Thomas Lamprecht
Am 07/05/2024 um 17:02 schrieb Stoiko Ivanov:
> v1->v2:
> Patch 2/2 (adaptation of arc_summary/arcstat patch) modified:
> * right after sending the v1 I saw a report where pinning kernel 6.2 (thus
>   ZFS 2.1) leads to a similar traceback - which I seem to have overlooked
>   when packaging 2.2.0 ...
>   adapted the patch by booting a VM with kernel 6.2 and the current
>   userspace and running arc_summary /arcstat -a until no traceback was
>   displayed with a single-disk pool.
> 
> original cover-letter for v1:
> This patchset updates ZFS to the recently released 2.2.4
> 
> We had about half of the patches already in 2.2.3-2, due to the needed
> support for kernel 6.8.
> 
> Compared to the last 2.2 point releases this one contains quite a few
> potential performance improvements:
> * for ZVOL workloads (relevant for qemu guests) multiple taskq were
>   introduced [1] - this change is active by default (can be put back to
>   the old behavior with explicitly setting `zvol_num_taskqs=1`
> * the interface for ZFS submitting operations to the kernel's block layer
>   was augmented to better deal with split-pages [2] - which should also
>   improve performance, and prevent unaligned writes which are rejected by
>   e.g. the SCSI subsystem. - The default remains with the current code
>   (`zfs_vdev_disk_classic=0` turns on the 'new' behavior...)
> * Speculative prefetching was improved [3], which introduced new kstats,
>   which are reported by `arc_summary` and `arcstat`; as before with the
>   MRU/MFU additions, there was no guard for running the new user-space
>   with an old kernel, resulting in Python exceptions in both tools.
>   I adapted the patch where Thomas fixed that back in the 2.1 release
>   times. - sending as separate patch for easier review - and I hope it's
>   ok that I dropped the S-o-b tag (as it's changed code) - glad to resend
>   it, if this should be adapted.
> 
> Minimally tested on 2 VMs (the arcstat/arc_summary changes by running with
> an old kernel and new user-space)
> 
> 
> [0] https://github.com/openzfs/zfs/releases/tag/zfs-2.2.4
> [1] https://github.com/openzfs/zfs/pull/15992
> [2] https://github.com/openzfs/zfs/pull/15588
> [3] https://github.com/openzfs/zfs/pull/16022
> 
> Stoiko Ivanov (2):
>   update zfs submodule to 2.2.4 and refresh patches
>   update arc_summary arcstat patch with new introduced values
> 
>  ...md-unit-for-importing-specific-pools.patch |   4 +-
>  ...-move-manpage-arcstat-1-to-arcstat-8.patch |   2 +-
>  ...-guard-access-to-freshly-introduced-.patch | 438 
>  ...-guard-access-to-l2arc-MFU-MRU-stats.patch | 113 ---
>  ...hten-bounds-for-noalloc-stat-availab.patch |   4 +-
>  ...rectly-handle-partition-16-and-later.patch |  52 --
>  ...-use-splice_copy_file_range-for-fall.patch | 135 
>  .../0014-linux-5.4-compat-page_size.patch | 121 
>  .../patches/0015-abd-add-page-iterator.patch  | 334 -
>  ...-existing-functions-to-vdev_classic_.patch | 349 -
>  ...v_disk-reorganise-vdev_disk_io_start.patch | 111 ---
>  ...-read-write-IO-function-configurable.patch |  69 --
>  ...e-BIO-filling-machinery-to-avoid-spl.patch | 671 --
>  ...dule-parameter-to-select-BIO-submiss.patch | 104 ---
>  ...se-bio_chain-to-submit-multiple-BIOs.patch | 363 --
>  ...on-t-use-compound-heads-on-Linux-4.5.patch |  96 ---
>  ...ault-to-classic-submission-for-2.2.x.patch |  90 ---
>  ...ion-caused-by-mmap-flushing-problems.patch | 104 ---
>  ...touch-vbio-after-its-handed-off-to-t.patch |  57 --
>  debian/patches/series |  16 +-
>  upstream  |   2 +-
>  21 files changed, 445 insertions(+), 2790 deletions(-)
>  create mode 100644 
> debian/patches/0009-arc-stat-summary-guard-access-to-freshly-introduced-.patch
>  delete mode 100644 
> debian/patches/0009-arc-stat-summary-guard-access-to-l2arc-MFU-MRU-stats.patch
>  delete mode 100644 
> debian/patches/0012-udev-correctly-handle-partition-16-and-later.patch
>  delete mode 100644 
> debian/patches/0013-Linux-6.8-compat-use-splice_copy_file_range-for-fall.patch
>  delete mode 100644 debian/patches/0014-linux-5.4-compat-page_size.patch
>  delete mode 100644 debian/patches/0015-abd-add-page-iterator.patch
>  delete mode 100644 
> debian/patches/0016-vdev_disk-rename-existing-functions-to-vdev_classic_.patch
>  delete mode 100644 
> debian/patches/0017-vdev_disk-reorganise-vdev_disk_io_start.patch
>  delete mode 100644 
> debian/patches/0018-vdev_disk-make-read-write-IO-function-configurable.patch
>  delete mode 100644 
> debian/patches/0019-vdev_disk-rewrite-BIO-filling-machinery-to-avoid-spl.patch
>  delete mode 100644 
> debian/patches/0020-vdev_disk-ad

[pve-devel] applied: [PATCH docs] network: override device names: suggest running update-initramfs

2024-05-21 Thread Thomas Lamprecht
Am 21/05/2024 um 14:55 schrieb Friedrich Weber:
> The initramfs-tools hook /usr/share/initramfs-tools/hooks/udev copies
> link files from /etc/systemd/network to the initramfs, where they take
> effect in early userspace. If the link files in the initramfs diverge
> from the link files in the rootfs, this can lead to confusing
> behavior, as reported in enterprise support. For instance:
> 
> - If an interface matches link files both in the initramfs and the
>   rootfs, it will be renamed twice during boot.
> - A leftover link file in the initramfs renaming an interface A to a
>   new name X may prevent a link file in the rootfs from renaming a
>   different interface B to the same name X (it will fail with "File
>   exists").
> 
> To avoid this confusion, mention the link files are copied to the
> initramfs, and suggest updating the initramfs after making changes to
> the link files.
> 
> Suggested-by: Hannes Laimer 
> Signed-off-by: Friedrich Weber 
> ---
>  pve-network.adoc | 12 ++--
>  1 file changed, 10 insertions(+), 2 deletions(-)
> 
>

applied, thanks!


___
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel



[pve-devel] applied: [PATCH v2 manager] api: add proxmox-firewall to versions pkg list

2024-05-21 Thread Thomas Lamprecht
Am 24/04/2024 um 13:35 schrieb Mira Limbeck:
> Signed-off-by: Mira Limbeck 
> ---
> v2:
>  - add `api: ` prefix to commit msg
> 
>  PVE/API2/APT.pm | 1 +
>  1 file changed, 1 insertion(+)
> 
>

applied, thanks!


___
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel



[pve-devel] applied-series: [PATCH proxmox-firewall 1/2] firewall: improve handling of ARP traffic for guests

2024-05-21 Thread Thomas Lamprecht
Am 15/05/2024 um 15:37 schrieb Stefan Hanreich:
> In order to be able to send outgoing ARP packets when the default
> policy is set to drop or reject, we need to explicitly allow ARP
> traffic in the outgoing chain of guests. We need to do this in the
> guest chain itself in order to be able to filter spoofed packets via
> the MAC filter.
> 
> Contrary to the out direction we can simply accept all incoming ARP
> traffic, since we do not do any MAC filtering for incoming traffic.
> Since we create fdb entries for every NIC, guests should only see ARP
> traffic for their MAC addresses anyway.
> 
> Signed-off-by: Stefan Hanreich 
> Originally-by: Laurent Guerby 
> ---
>  proxmox-firewall/resources/proxmox-firewall.nft   | 1 +
>  proxmox-firewall/src/firewall.rs  | 8 
>  .../tests/snapshots/integration_tests__firewall.snap  | 4 ++--
>  3 files changed, 7 insertions(+), 6 deletions(-)
> 
>

applied both patches, thanks!

I reworded the subject here too and re-ordered the git trailers, as they
should have a causal order where possible. I.e., if someone else made a
patch, or helped you to do so, their co-authored-by or originally-by is
normally before your signed-off-by, as your "signature" shows that all
above it is (to your best knowledge) correct w.r.t patch ownership and
description, and like on "real" documents that signature goes at the
bottom.


___
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel



[pve-devel] applied: [PATCH proxmox-firewall v2 1/1] firewall: properly reject ipv6 traffic

2024-05-21 Thread Thomas Lamprecht
Am 13/05/2024 um 14:14 schrieb Stefan Hanreich:
> ICMPv6 has different message types for rejecting traffic. With ICMP we
> used host-prohibited as rejection type, which doesn't exist in ICMPv6.
> Add an additional rule for IPv6, so it uses admin-prohibited.
> 
> Additionally, add a terminal drop statement in order to prevent any
> traffic that does not get matched from bypassing the reject chain.
> 
> Signed-off-by: Stefan Hanreich 
> ---
> Changes from v1 -> v2:
> * add a terminal drop statement to prevent any unmatched traffic from
>   bypassing the reject chain
> * properly match ICMPv6 traffic via l4proto
> 
>  proxmox-firewall/resources/proxmox-firewall.nft | 8 ++--
>  1 file changed, 6 insertions(+), 2 deletions(-)
> 
>

applied, with an updated commit subject (as per our guideline[0], using the
"firewall" tag inside a repo that has "firewall" already in the name is not
really adding much), thanks!

[0]: 
https://pve.proxmox.com/wiki/Developer_Documentation#Commits_and_Commit_Messages


___
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel



Re: The nested-splitter project has collapsed in complexity

2024-05-21 Thread Thomas Passin

On Tuesday, May 21, 2024 at 7:46:15 AM UTC-4 Edward K. Ream wrote:

On Mon, May 20, 2024 at 10:13 PM Thomas Passin  wrote:

Is your experimental version of VR3 using the free_layout command?


Not any more.
 

Are you good with me merging the PR today?


If you mean merging my private branch (PR #3924) with  your 
*ekr-3910-no-fl-ns-plugins* branch, please go ahead right away.  The reason 
is that it contains code to display images with relative URLs correctly 
when the rendering is exported. I developed that code in a branch that is 
based off of *devel* - before the new layout code. My new PR includes both 
that work and my recent work using the new free-layout free code. If you 
merge your branch into *devel* before merging my PR, I will have to do all 
that work over again, and it was not easy.

If you mean merging with *devel*,  I'm less sure. Without my changes, I 
don't think VR3 will be very useful and I believe that there are a few 
day-to-day users out there.

How about if we take a little survey in this group and ask who is using 
which GUI plugins?  That might give us some guidance before the change to 
the new way.

-- 
You received this message because you are subscribed to the Google Groups 
"leo-editor" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to leo-editor+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/leo-editor/fdff4a42-51a5-4a75-bd04-bad6b62b09b8n%40googlegroups.com.


[jira] [Commented] (MYFACES-4667) UIData#invokeOnComponent does not find components

2024-05-21 Thread Thomas Andraschko (Jira)


[ 
https://issues.apache.org/jira/browse/MYFACES-4667?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17848207#comment-17848207
 ] 

Thomas Andraschko commented on MYFACES-4667:


isnt it? https://github.com/jakartaee/faces/issues/1905

> UIData#invokeOnComponent does not find components
> -
>
> Key: MYFACES-4667
> URL: https://issues.apache.org/jira/browse/MYFACES-4667
> Project: MyFaces Core
>  Issue Type: Bug
>Affects Versions: 2.3.10
>Reporter: SteGr
>Priority: Major
>
> While working on a bug in [mojarra implementation|#issuecomment-2115154280]] 
> I found an [old primefaces 
> issue|https://github.com/primefaces/primefaces/issues/9245#issuecomment-2122507698]
>  which was somehow related to my issue. Using the reproducer of that PF 
> issue, I was able to really find the root cause of the issue and that even 
> myfaces is affected - contrary to what was assumed at the time.
>  
> *What happens?*
> A {{p:datatable}} with {{p:column*s*}} was updated by the backend using 
> {{{}PrimeFaces#Ajax#update{}}}. The implementation used 
> {{UIComponent#invokeOnComponent}} to find the component of the given 
> clientId. As {{p:dataTable}} relies on {{UIData}} of myfaces, {{p:columns}} 
> won't be handled. Nothing could be found and {{PrimeFaces#Ajax#update}} falls 
> back to just forwarding the given clientId.
> *Why does it happen?*
> Like mojarra, myfaces filters the children on processing using {{{}instanceof 
> UIColumn{}}}. But {{p:columns}} does not extend that class. Therefore 
> {{p:columns}} is not a candidate and is simply ignored.
> *Possible fix* (not yet tested)
> Remove the check.
> *How to reproduce*
> Use the [reproducer 
> |https://github.com/primefaces/primefaces/files/9664695/primefaces-test.zip] 
> and change the PROJECT_STAGE to Development. Run the project using the 
> myfaces23 profile. You should get a lot of logging entries like
> {quote}Mai 21, 2024 2:53:41 PM org.primefaces.PrimeFaces$Ajax update
> WARNUNG: PrimeFaces.current().ajax().update() called but component cant be 
> resolved! Expression will just be added to the renderIds: \{0}
> {quote}
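
A standalone Java sketch may help to illustrate the filter described above. It 
is a minimal model only - the class names below are invented stand-ins, not the 
actual Jakarta Faces, MyFaces or PrimeFaces types - but it shows why an 
instanceof-based filter never visits a column component that does not extend 
the expected base class, and what removing the check would change.

// Minimal, self-contained model of the behaviour described above.
// All names here are illustrative assumptions, not MyFaces source.
import java.util.ArrayList;
import java.util.List;
import java.util.Optional;

class Component {
    final String clientId;
    final List<Component> children = new ArrayList<>();
    Component(String clientId) { this.clientId = clientId; }
}

// Stands in for jakarta.faces.component.UIColumn.
class Column extends Component {
    Column(String clientId) { super(clientId); }
}

// Stands in for a p:columns-like component: semantically a column,
// but it does not extend Column.
class DynamicColumns extends Component {
    DynamicColumns(String clientId) { super(clientId); }
}

public class InvokeOnComponentSketch {

    // Mimics the current traversal: children that are not instanceof Column
    // are skipped, so DynamicColumns can never be found.
    static Optional<Component> findWithColumnFilter(Component table, String clientId) {
        for (Component child : table.children) {
            if (!(child instanceof Column)) {
                continue; // the check the issue proposes to remove
            }
            if (clientId.equals(child.clientId)) {
                return Optional.of(child);
            }
        }
        return Optional.empty();
    }

    // The proposed fix: visit every child regardless of its concrete type.
    static Optional<Component> findWithoutFilter(Component table, String clientId) {
        for (Component child : table.children) {
            if (clientId.equals(child.clientId)) {
                return Optional.of(child);
            }
        }
        return Optional.empty();
    }

    public static void main(String[] args) {
        Component table = new Component("form:table");
        table.children.add(new Column("form:table:col1"));
        table.children.add(new DynamicColumns("form:table:dynCols"));

        // Prints Optional.empty - the dynamic columns component is skipped.
        System.out.println(findWithColumnFilter(table, "form:table:dynCols"));
        // Prints the component once the instanceof filter is dropped.
        System.out.println(findWithoutFilter(table, "form:table:dynCols"));
    }
}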



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: [PATCH 4/4] tests/vm: remove plain centos image

2024-05-21 Thread Thomas Huth

On 21/05/2024 14.53, Alex Bennée wrote:

This isn't really used and we have lighter weight docker containers
for testing this stuff directly.

Signed-off-by: Alex Bennée 
---
  tests/vm/Makefile.include |  1 -
  tests/vm/centos   | 51 ---
  2 files changed, 52 deletions(-)
  delete mode 100755 tests/vm/centos


Reviewed-by: Thomas Huth 




Re: [sparc64] unbreak spirv-tools build

2024-05-21 Thread Thomas Frohwein
ok with +=

On Tue, May 21, 2024, at 4:25 AM, Theo Buehler wrote:
> On Tue, May 21, 2024 at 10:00:23AM +0200, Theo Buehler wrote:
>> As can be seen on
>> 
>> http://build-failures.rhaalovely.net/sparc64/2024-05-18/summary.log
>> 
>> spirv-tools is the immediate blocker for many missing ports on sparc64.
>> It needs to link against stdc++fs with ports-gcc:
>> 
>> http://build-failures.rhaalovely.net/sparc64/2024-05-18/graphics/spirv-tools.log
>> 
>> The diff below uses the same approach as the one used by jca for glslang
>> 
>> https://github.com/openbsd/ports/commit/eb51153047ff2fdba5334b386c814557b77857ba
>> 
>> Packages on sparc64 and arm64.
>> 
>> Index: Makefile
>> ===
>> RCS file: /cvs/ports/graphics/spirv-tools/Makefile,v
>> diff -u -p -r1.20 Makefile
>> --- Makefile 20 May 2024 15:46:33 -  1.20
>> +++ Makefile 21 May 2024 07:57:39 -
>> @@ -31,7 +31,16 @@ BUILD_DEPENDS =   graphics/spirv-headers
>>  
>>  CONFIGURE_ARGS =-DSPIRV-Headers_SOURCE_DIR="${LOCALBASE}"
>>  
>> +SUBST_VARS =ADDITIONAL_LIBRARIES
>
> changed to +=
>
>> +
>> +pre-configure:
>> +${SUBST_CMD} ${WRKSRC}/tools/CMakeLists.txt
>> +
>>  # effcee is missing to build tests
>>  NO_TEST =   Yes
>>  
>>  .include 
>> +
>> +.if ${CHOSEN_COMPILER} == ports-gcc
>> +ADDITIONAL_LIBRARIES = stdc++fs
>> +.endif
>> Index: patches/patch-tools_CMakeLists_txt
>> ===
>> RCS file: patches/patch-tools_CMakeLists_txt
>> diff -N patches/patch-tools_CMakeLists_txt
>> --- /dev/null1 Jan 1970 00:00:00 -
>> +++ patches/patch-tools_CMakeLists_txt   21 May 2024 07:58:40 -
>> @@ -0,0 +1,14 @@
>> +Add -lstdc++fs for ports-gcc
>> +
>> +Index: tools/CMakeLists.txt
>> +--- tools/CMakeLists.txt.orig
>>  tools/CMakeLists.txt
>> +@@ -74,7 +74,7 @@ if (NOT ${SPIRV_SKIP_EXECUTABLES})
>> +objdump/extract_source.cpp
>> +util/cli_consumer.cpp
>> +${COMMON_TOOLS_SRCS}
>> +-  LIBS ${SPIRV_TOOLS_FULL_VISIBILITY})
>> ++  LIBS ${SPIRV_TOOLS_FULL_VISIBILITY} 
>> ${ADDITIONAL_LIBRARIES})
>> + target_include_directories(spirv-objdump PRIVATE 
>> ${spirv-tools_SOURCE_DIR}
>> +  
>> ${SPIRV_HEADER_INCLUDE_DIR})
>> + set(SPIRV_INSTALL_TARGETS ${SPIRV_INSTALL_TARGETS} spirv-objdump)
>>



[jira] [Commented] (OAK-8046) Result items are not always correctly counted against the configured read limit if a query uses a lucene index

2024-05-21 Thread Thomas Mueller (Jira)


[ 
https://issues.apache.org/jira/browse/OAK-8046?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17848193#comment-17848193
 ] 

Thomas Mueller commented on OAK-8046:
-

> Should a reindex be triggered

No. That won't help.

> Result items are not always correctly counted against the configured read 
> limit if a query uses a lucene index 
> ---
>
> Key: OAK-8046
> URL: https://issues.apache.org/jira/browse/OAK-8046
> Project: Jackrabbit Oak
>  Issue Type: Improvement
>  Components: lucene
>Affects Versions: 1.8.7
>Reporter: Georg Henzler
>Assignee: Vikas Saurabh
>Priority: Major
> Fix For: 1.12.0, 1.10.1, 1.8.12
>
> Attachments: OAK-8046-take2.patch, OAK-8046.patch
>
>
> There are cases where an index is re-opened during query execution. In that 
> case, already returned entries are read again and skipped, so basically 
> counted twice. This should be fixed to only count entries once (see also [1])
> The issue most likely exists since the read limit was introduced with OAK-6875
> [1] 
> https://lists.apache.org/thread.html/dddf9834fee0bccb6e48f61ba2a01430e34fc0b464b12809f7dfe2eb@%3Coak-dev.jackrabbit.apache.org%3E



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (OAK-8046) Result items are not always correctly counted against the configured read limit if a query uses a lucene index

2024-05-21 Thread Thomas Mueller (Jira)


[ 
https://issues.apache.org/jira/browse/OAK-8046?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17848192#comment-17848192
 ] 

Thomas Mueller commented on OAK-8046:
-

[~royteeuwen] it means that while the query was still running (and reading more 
nodes), the index was updated concurrently. Indexes are updated every ~5 
seconds.

Ideally, queries read fewer than 200 nodes, and relatively quickly (within a 
second or so). Queries that read 100'000 or more nodes can easily run into this 
situation; with fewer than 200 nodes it is typically never a problem. (There is 
also the case where fewer than 200 nodes are read, but very slowly - that is 
unlikely, though.)
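
For illustration only, here is a toy Java sketch of the accounting problem - 
this is not Oak's cursor code, and the read limit and row counts below are 
invented numbers - showing how re-reading already returned rows after an index 
reopen can push an otherwise acceptable query over the read limit unless those 
rows are counted only once.

// Toy model of the double-counting described in this issue; not Oak code.
public class ReadLimitSketch {

    static final long READ_LIMIT = 100_000; // assumed limit, for illustration

    // Naive accounting: every row pulled from the index is charged against
    // the limit, including rows re-read and skipped after the reopen.
    static long chargedWithDoubleCounting(int returnedBeforeReopen, int totalRows) {
        long charged = returnedBeforeReopen;          // read before the reopen
        charged += returnedBeforeReopen;              // re-read and skipped afterwards
        charged += totalRows - returnedBeforeReopen;  // remaining rows
        return charged;
    }

    // Fixed accounting: skipping past already returned rows is free, so each
    // row is charged exactly once (the parameter is kept only for symmetry).
    static long chargedCountingOnce(int returnedBeforeReopen, int totalRows) {
        return totalRows;
    }

    public static void main(String[] args) {
        int totalRows = 80_000;            // query reads 80k rows in total
        int returnedBeforeReopen = 60_000; // index was reopened after 60k rows

        long twice = chargedWithDoubleCounting(returnedBeforeReopen, totalRows);
        long once = chargedCountingOnce(returnedBeforeReopen, totalRows);

        System.out.println("double counted: " + twice + ", over limit: " + (twice > READ_LIMIT));
        System.out.println("counted once:   " + once + ", over limit: " + (once > READ_LIMIT));
    }
}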

> Result items are not always correctly counted against the configured read 
> limit if a query uses a lucene index 
> ---
>
> Key: OAK-8046
> URL: https://issues.apache.org/jira/browse/OAK-8046
> Project: Jackrabbit Oak
>  Issue Type: Improvement
>  Components: lucene
>Affects Versions: 1.8.7
>Reporter: Georg Henzler
>Assignee: Vikas Saurabh
>Priority: Major
> Fix For: 1.12.0, 1.10.1, 1.8.12
>
> Attachments: OAK-8046-take2.patch, OAK-8046.patch
>
>
> There are cases where an index is re-opened during query execution. In that 
> case, already returned entries are read again and skipped, so basically 
> counted twice. This should be fixed to only count entries once (see also [1])
> The issue most likely exists since the read limit was introduced with OAK-6875
> [1] 
> https://lists.apache.org/thread.html/dddf9834fee0bccb6e48f61ba2a01430e34fc0b464b12809f7dfe2eb@%3Coak-dev.jackrabbit.apache.org%3E



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: [PATCH 1/1] x86/vector: Fix vector leak during CPU offline

2024-05-21 Thread Thomas Gleixner
On Wed, May 15 2024 at 12:51, Dongli Zhang wrote:
> On 5/13/24 3:46 PM, Thomas Gleixner wrote:
>> So yes, moving the invocation of irq_force_complete_move() before the
>> irq_needs_fixup() call makes sense, but it wants this to actually work
>> correctly:
>> @@ -1097,10 +1098,11 @@ void irq_force_complete_move(struct irq_
>>  goto unlock;
>>  
>>  /*
>> - * If prev_vector is empty, no action required.
>> + * If prev_vector is empty or the descriptor was previously
>> + * not on the outgoing CPU no action required.
>>   */
>>  vector = apicd->prev_vector;
>> -if (!vector)
>> +if (!vector || apicd->prev_cpu != smp_processor_id())
>>  goto unlock;
>>  
>
> The above may not work. migrate_one_irq() relies on irq_force_complete_move()
> to always reclaim the apicd->prev_vector. Otherwise, the call of
> irq_do_set_affinity() later may return -EBUSY.

You're right. But that still can be handled in irq_force_complete_move()
with a single unconditional invocation in migrate_one_irq():

cpu = smp_processor_id();
if (!vector || (apicd->cur_cpu != cpu && apicd->prev_cpu != cpu))
goto unlock;

because there are only two cases when a cleanup is required:

   1) The outgoing CPU is the current target

   2) The outgoing CPU was the previous target

No?

Thanks,

tglx



[jira] [Assigned] (CAMEL-20679) camel-smb: create a producer for the component

2024-05-21 Thread Thomas Cunningham (Jira)


 [ 
https://issues.apache.org/jira/browse/CAMEL-20679?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thomas Cunningham reassigned CAMEL-20679:
-

Assignee: Thomas Cunningham

> camel-smb: create a producer for the component
> --
>
> Key: CAMEL-20679
> URL: https://issues.apache.org/jira/browse/CAMEL-20679
> Project: Camel
>  Issue Type: New Feature
>  Components: camel-smb
>Affects Versions: 4.6.0
>Reporter: Otavio Rodolfo Piske
>Assignee: Thomas Cunningham
>Priority: Major
>  Labels: help-wanted
>
> As part of CAMEL-19997 we created a component for the SMB protocol. However, 
> it only supports a consumer. It would be good to also support a producer.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Bug#1071565: autopkgtest: ERROR: cloud-efi.raw is too small: 131 MB. Should be greater 890 MB

2024-05-21 Thread Thomas Lange


The problem was that a package could not be downloaded:

833s fai.log:W: Couldn't download package libip4tc2 (ver 1.8.9-2 arch
amd64) at
http://deb.debian.org/debian/pool/main/i/iptables/libip4tc2_1.8.9-2_amd64.deb

Another test run passed without any errors. See 46897030
https://ci.debian.net/packages/f/fai/testing/amd64/

Does this solve your problem? Can we close this bug now?


-- 
 Thomas



Re: [pve-devel] [PATCH qemu-server v10 1/4] add C program to get hardware capabilities from CPUID

2024-05-21 Thread Thomas Lamprecht
Am 17/05/2024 um 13:21 schrieb Dominik Csapak:
> one small nit inline:
> 
> On 5/10/24 13:47, Markus Frank wrote:
>> diff --git a/query-machine-capabilities/Makefile 
>> b/query-machine-capabilities/Makefile
>> new file mode 100644
>> index 000..c5f6348
>> --- /dev/null
>> +++ b/query-machine-capabilities/Makefile
>> @@ -0,0 +1,21 @@
>> +DESTDIR=
>> +PREFIX=/usr
>> +SBINDIR=${PREFIX}/libexec/qemu-server
>> +SERVICEDIR=/lib/systemd/system
>> +
> 
> PREFIX is only used once here, so it's probably better to inline the value

No, keeping PREFIX as a separate variable is a common pattern
that allows customizing the installation.
Even if we do not need that ourselves, it costs us practically nothing to
keep following that convention here, and it can make it easier to compare
changes between two packages whose binaries are installed in different paths.
So while I wouldn't go through all our build systems and add this variable
where it's missing, I'd also not recommend that developers drop it, as doing
so is no improvement.


___
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


