[jira] [Commented] (CASSANDRA-19593) Transactional Guardrails

2024-05-27 Thread Stefan Miklosovic (Jira)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-19593?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17849717#comment-17849717
 ] 

Stefan Miklosovic commented on CASSANDRA-19593:
---

I am formally stopping progress on this, as I will not look into it any
further until we tackle the "general case".

On the other hand, I would definitely appreciate it if this work were
integrated into whatever is proposed / implemented for general transactional
configuration. Basically, all that needs to be done is to read
default_keyspace_rf from it, add more tests, and probably remove some
functionality which I do not consider necessary (in terms of CEP-24).

> Transactional Guardrails
> 
>
> Key: CASSANDRA-19593
> URL: https://issues.apache.org/jira/browse/CASSANDRA-19593
> Project: Cassandra
>  Issue Type: New Feature
>  Components: Feature/Guardrails, Transactional Cluster Metadata
>Reporter: Stefan Miklosovic
>Assignee: Stefan Miklosovic
>Priority: Normal
> Fix For: 5.x
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> I think it is time to start to think about this more seriously. TCM is 
> getting into pretty nice shape and we might start to investigate how to do 
> this.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



Re: [de-users] Pause symbol on LibreOffice Impress slides

2024-05-27 Thread Stefan

Hi.
Well, I experimented a bit and the result is the same.
I uninstalled LibreOffice, completely deleted the LibreOffice program
folder together with the user folder (in the AppData/Roaming directory on
Windows) and, to be safe, also the old LibreOffice download in the
Downloads folder. I restarted the machine and reinstalled LibreOffice.
As a test I created a new Impress file consisting only of white pages,
so definitely without any animation. And when playing it I again see the
little pause symbol at the bottom left of every slide when clicking through.
@Robert and @Franklin: I'll simply send you this test presentation. Do
you see a pause symbol when playing it? ^^
Best, Stefan

<<<
Franklin Schiftan:
So you did not create a new presentation with the new user folder, but
played back an old existing one?

Maybe the pause symbol is not stored in the user folder after all, but
saved within the presentation itself?




On 25.05.2024 at 15:00, Robert Großkopf wrote:

Hello Stefan,

First, as Franklin describes: what matters is your user directory. That
is where the individual settings live, which can sometimes cause
problems.

About the symbol: have you embedded any media that is set to "pause"?
Just create a new presentation with 2 pages without much content and
start it. What happens then?

If need be, you are also welcome to send me the presentation. Then I
will have a look at what is going wrong.

Regards

Robert


--
Unsubscribe from this list by e-mail to: users+unsubscr...@de.libreoffice.org
Problems? 
https://de.libreoffice.org/hilfe-kontakt/mailing-listen/abmeldung-liste/
Tips on list mails: https://wiki.documentfoundation.org/Netiquette/de
List archive: https://listarchives.libreoffice.org/de/users/
Privacy policy: https://www.documentfoundation.org/privacy


Re: [VOTE] FLIP-443: Interruptible watermark processing

2024-05-27 Thread Stefan Richter

+1 (binding)



> On 24. May 2024, at 09:59, Martijn Visser  wrote:
> 
> +1 (binding)
> 
> On Fri, May 24, 2024 at 7:31 AM weijie guo  >
> wrote:
> 
>> +1(binding)
>> 
>> Thanks for driving this!
>> 
>> Best regards,
>> 
>> Weijie
>> 
>> 
>> Rui Fan <1996fan...@gmail.com> 于2024年5月24日周五 13:03写道:
>> 
>>> +1(binding)
>>> 
>>> Best,
>>> Rui
>>> 
>>> On Fri, May 24, 2024 at 12:01 PM Yanfei Lei  wrote:
>>> 
 Thanks for driving this!
 
 +1 (binding)
 
 Best,
 Yanfei
 
 Zakelly Lan  于2024年5月24日周五 10:13写道:
 
> 
> +1 (binding)
> 
> Best,
> Zakelly
> 
> On Thu, May 23, 2024 at 8:21 PM Piotr Nowojski >> 
 wrote:
> 
>> Hi all,
>> 
>> After reaching what looks like a consensus in the discussion thread
 [1], I
>> would like to put FLIP-443 [2] to the vote.
>> 
>> The vote will be open for at least 72 hours unless there is an
 objection or
>> insufficient votes.
>> 
>> [1] https://lists.apache.org/thread/flxm7rphvfgqdn2gq2z0bb7kl007olpz
>> [2] https://cwiki.apache.org/confluence/x/qgn9EQ
>> 
>> Best,
>> Piotrek



[jira] [Commented] (CASSANDRA-19632) wrap tracing logs in isTraceEnabled across the codebase

2024-05-27 Thread Stefan Miklosovic (Jira)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-19632?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17849653#comment-17849653
 ] 

Stefan Miklosovic commented on CASSANDRA-19632:
---

I have also changed the logging level to INFO and re-run the bench to see the
times as if trace were enabled (because it just logs on INFO by default).

Without wrapping the call in isInfoEnabled, it takes around 41 microseconds to
log. With the check it takes around 46 microseconds. So the check adds around 5
microseconds and makes the call about 13% slower.
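For context on what the guard actually changes: slf4j's {} placeholders already defer argument formatting until the message is really logged, so when the level is disabled neither variant pays the toString() cost and the guard only skips the call overhead. A rough Python analogue of the two benchmark variants (stdlib logging, with DEBUG standing in for TRACE; the class and logger names are illustrative, not from the benchmark):

```python
import logging

# Logger with DEBUG disabled, mirroring a production TRACE-off config.
logger = logging.getLogger("demo")
logger.setLevel(logging.INFO)

class Counting:
    """Counts how often it is stringified, to show lazy formatting."""
    calls = 0
    def __str__(self):
        Counting.calls += 1
        return "Counting{}"

obj = Counting()

# Unguarded call: the record is discarded before formatting, so
# str(obj) is never evaluated -- same lazy behaviour as slf4j's "{}".
# Only the per-call overhead remains, which is what the bench measures.
logger.debug("three objects %s %s %s", obj, obj, obj)

# Guarded call: the explicit level check skips the logging call entirely.
if logger.isEnabledFor(logging.DEBUG):
    logger.debug("three objects %s %s %s", obj, obj, obj)

print(Counting.calls)  # prints 0: neither variant paid the formatting cost
```

The interesting trade-off, as the numbers above show, is that when the level *is* enabled the extra check is pure overhead.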

{noformat}
Result "org.apache.cassandra.test.microbench.LoggingBench.threeObjects":
  N = 12306865
  mean =  41340.392 ±(99.9%) 864.269 ns/op

  Histogram, ns/op:
    [        0.000,  1250.000) = 12305828 
    [ 1250.000,  2500.000) = 0 
    [ 2500.000,  3750.000) = 0 
    [ 3750.000,  5000.000) = 0 
    [ 5000.000,  6250.000) = 0 
    [ 6250.000,  7500.000) = 0 
    [ 7500.000,  8750.000) = 0 
    [ 8750.000, 10000.000) = 620 
    [10000.000, 11250.000) = 375 
    [11250.000, 12500.000) = 34 
    [12500.000, 13750.000) = 8 
    [13750.000, 15000.000) = 0 
    [15000.000, 16250.000) = 0 
    [16250.000, 17500.000) = 0 
    [17500.000, 18750.000) = 0 

  Percentiles, ns/op:
      p(0.0000) =   6640.000 ns/op
     p(50.0000) =  23648.000 ns/op
     p(90.0000) =  69760.000 ns/op
     p(95.0000) =  90496.000 ns/op
     p(99.0000) = 151296.000 ns/op
     p(99.9000) = 381440.000 ns/op
     p(99.9900) = 1861632.000 ns/op
     p(99.9990) = 102760448.000 ns/op
     p(99.9999) = 117178368.000 ns/op
    p(100.0000) = 127401984.000 ns/op


# Run complete. Total time: 00:02:04

REMEMBER: The numbers below are just data. To gain reusable insights, you need 
to follow up on
why the numbers are the way they are. Use profilers (see -prof, -lprof), design 
factorial
experiments, perform baseline and negative tests that provide experimental 
control, make sure
the benchmarking environment is safe on JVM/OS/HW level, ask for reviews from 
the domain experts.
Do not assume the numbers tell you what you want them to tell.

Benchmark                            Mode       Cnt          Score     Error  Units
LoggingBench.threeObjects          sample  12306865      41340.392 ± 864.269  ns/op
LoggingBench.threeObjects:p0.00    sample                 6640.000            ns/op
LoggingBench.threeObjects:p0.50    sample                23648.000            ns/op
LoggingBench.threeObjects:p0.90    sample                69760.000            ns/op
LoggingBench.threeObjects:p0.95    sample                90496.000            ns/op
LoggingBench.threeObjects:p0.99    sample               151296.000            ns/op
LoggingBench.threeObjects:p0.999   sample               381440.000            ns/op
LoggingBench.threeObjects:p0.9999  sample              1861632.000            ns/op
LoggingBench.threeObjects:p1.00    sample            127401984.000            ns/op

Process finished with exit code 0


Result 
"org.apache.cassandra.test.microbench.LoggingBench.threeObjectsWithCheck":
  N = 11186868
  mean =  46985.745 ±(99.9%) 1093.366 ns/op

  Histogram, ns/op:
    [        0.000,  1250.000) = 11185771 
    [ 1250.000,  2500.000) = 3 
    [ 2500.000,  3750.000) = 0 
    [ 3750.000,  5000.000) = 0 
    [ 5000.000,  6250.000) = 0 
    [ 6250.000,  7500.000) = 0 
    [ 7500.000,  8750.000) = 0 
    [ 8750.000, 10000.000) = 0 
    [10000.000, 11250.000) = 712 
    [11250.000, 12500.000) = 326 
    [12500.000, 13750.000) = 48 
    [13750.000, 15000.000) = 8 
    [15000.000, 16250.000) = 0 
    [16250.000, 17500.000) = 0 
    [17500.000, 18750.000) = 0 

  Percentiles, ns/op:
      p(0.0000) =   6680.000 ns/op
     p(50.0000) =  25952.000 ns/op
     p(90.0000) =  77312.000 ns/op
     p(95.0000) = 100608.000 ns/op
     p(99.0000) = 171520.000 ns/op
     p(99.9000) = 422400.000 ns/op
     p(99.9900) = 5999162.162 ns/op
     p(99.9990) = 117833728.000 ns/op
     p(99.9999) = 131203072.000 ns/op
    p(100.0000) = 138936320.000 ns/op


# Run complete. Total time: 00:02:04

REMEMBER: The numbers below are just data. To gain reusable insights, you need 
to follow up on
why the numbers are the way they are. Use profilers (see -prof, -lprof), design 
factorial
experiments, perform baseline and negative tests that provide experimental 
control, make sure
the benchmarking environment is safe on JVM/OS/HW level, ask for reviews from 
the domain experts.
Do not assume the numbers tell you what you want them to tell.

Benchmark                                     Mode       Cnt          Score      Error  Units
LoggingBench.threeObjectsWithCheck          sample  11186868      46985.745 ± 1093.366  ns/op

[jira] [Commented] (CASSANDRA-19632) wrap tracing logs in isTraceEnabled across the codebase

2024-05-27 Thread Stefan Miklosovic (Jira)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-19632?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17849647#comment-17849647
 ] 

Stefan Miklosovic commented on CASSANDRA-19632:
---

I tried to benchmark this: I traced a three-parameter message with and without
the isTraceEnabled check.

{noformat}
package org.apache.cassandra.test.microbench;

import java.util.concurrent.TimeUnit;

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

import org.openjdk.jmh.annotations.Benchmark;
import org.openjdk.jmh.annotations.BenchmarkMode;
import org.openjdk.jmh.annotations.Fork;
import org.openjdk.jmh.annotations.Measurement;
import org.openjdk.jmh.annotations.Mode;
import org.openjdk.jmh.annotations.OutputTimeUnit;
import org.openjdk.jmh.annotations.Scope;
import org.openjdk.jmh.annotations.State;
import org.openjdk.jmh.annotations.Threads;
import org.openjdk.jmh.annotations.Warmup;

@BenchmarkMode(Mode.SampleTime)
@OutputTimeUnit(TimeUnit.NANOSECONDS)
@Warmup(iterations = 3, time = 1)
@Measurement(iterations = 6, time = 20)
@Fork(value = 1, jvmArgsAppend = { "-Xmx256M", "-Djmh.executor=CUSTOM", "-Djmh.executor.class=org.apache.cassandra.test.microbench.FastThreadExecutor"})
@Threads(8)
@State(Scope.Benchmark)
public class LoggingBench
{
    private static final Logger logger = LoggerFactory.getLogger(LoggingBench.class);

    private static final MyObject myObject = new MyObject();

    @Benchmark
    public void threeObjects()
    {
        logger.trace("this is a message with one parameter {}, {}, {}", myObject, myObject, myObject);
    }

    @Benchmark
    public void threeObjectsWithCheck()
    {
        if (logger.isTraceEnabled())
            logger.trace("this is a message with one parameter {}, {}, {}", myObject, myObject, myObject);
    }

    public static class MyObject
    {
        @Override
        public String toString()
        {
            return "MyObject{}";
        }
    }
}
{noformat}

Results:

{noformat}
# JMH version: 1.37
# VM version: JDK 11.0.12, OpenJDK 64-Bit Server VM, 11.0.12+7
# VM invoker: /home/fermat/.sdkman/candidates/java/11.0.12-open/bin/java
# VM options: 
-javaagent:/home/fermat/.local/share/JetBrains/Toolbox/apps/IDEA-U/ch-0/233.15026.9/lib/idea_rt.jar=40847:/home/fermat/.local/share/JetBrains/Toolbox/apps/IDEA-U/ch-0/233.15026.9/bin
 -Dfile.encoding=UTF-8 -Xmx256M -Djmh.executor=CUSTOM 
-Djmh.executor.class=org.apache.cassandra.test.microbench.FastThreadExecutor
# Blackhole mode: full + dont-inline hint (auto-detected, use 
-Djmh.blackhole.autoDetect=false to disable)
# Warmup: 3 iterations, 1 s each
# Measurement: 6 iterations, 20 s each
# Timeout: 10 min per iteration
# Threads: 8 threads, will synchronize iterations
# Benchmark mode: Sampling time
# Benchmark: org.apache.cassandra.test.microbench.LoggingBench.threeObjects

# Run progress: 0.00% complete, ETA 00:04:06
# Fork: 1 of 1
# Warmup Iteration   1: DEBUG [main] 2024-05-27 09:10:02,600 
InternalLoggerFactory.java:63 - Using SLF4J as the default logging framework
44.788 ±(99.9%) 10.007 ns/op
# Warmup Iteration   2: 34.661 ±(99.9%) 1.355 ns/op
# Warmup Iteration   3: 26.908 ±(99.9%) 0.543 ns/op
Iteration   1: 26.945 ±(99.9%) 0.113 ns/op
 p0.00:   20.000 ns/op
 p0.50:   30.000 ns/op
 p0.90:   30.000 ns/op
 p0.95:   30.000 ns/op
 p0.99:   31.000 ns/op
 p0.999:  50.000 ns/op
 p0.9999: 1857.329 ns/op
 p1.00:   53312.000 ns/op

Iteration   2: 27.736 ±(99.9%) 0.136 ns/op
 p0.00:   20.000 ns/op
 p0.50:   30.000 ns/op
 p0.90:   30.000 ns/op
 p0.95:   30.000 ns/op
 p0.99:   31.000 ns/op
 p0.999:  50.000 ns/op
 p0.9999: 2592.000 ns/op
 p1.00:   29376.000 ns/op

Iteration   3: 28.188 ±(99.9%) 0.088 ns/op
 p0.00:   20.000 ns/op
 p0.50:   30.000 ns/op
 p0.90:   30.000 ns/op
 p0.95:   31.000 ns/op
 p0.99:   40.000 ns/op
 p0.999:  60.000 ns/op
 p0.9999: 170.000 ns/op
 p1.00:   94720.000 ns/op

Iteration   4: 28.113 ±(99.9%) 0.099 ns/op
 p0.00:   20.000 ns/op
 p0.50:   30.000 ns/op
 p0.90:   30.000 ns/op
 p0.95:   31.000 ns/op
 p0.99:   31.000 ns/op
 p0.999:  50.000 ns/op
 p0.9999: 330.000 ns/op
 p1.00:   41216.000 ns/op

Iteration   5: 28.349 ±(99.9%) 0.071 ns/op
 p0.00:   20.000 ns/op
 p0.50:   30.000 ns/op
 p0.90:   30.000 ns/op
 p0.95:   31.000 ns/op
 p0.99:   40.000 ns/op
 p

Re: Continuous integration with Debian virtual machines

2024-05-26 Thread Stefan Monnier
> Anyone know a hosting service, like GitHub or GitLab, offering recent Debian
> virtual machines to run tests?

I'd expect most of them do, but at least SourceHut does according to
https://man.sr.ht/builds.sr.ht/compatibility.md#debian


    Stefan



Re: disk encryption for remote server

2024-05-26 Thread Stefan Kreutz
Can you access the machine's serial console, maybe redirected over IP?

On Sun, May 26, 2024 at 08:33:59PM GMT, 04-psyche.tot...@icloud.com wrote:
> Hi everyone,
> 
> Is there any way to use disk encryption without having physical access to the 
> device?
> 
> A few potential ideas:
> - is there a way to enter the encryption passphrase via ssh?
> - is there a way to create a non encrypted partition on the same hard drive, 
> where the keydisk would be stored, and automatically used? (For various 
> reasons, an external usb key is not feasible). And yes, I realize this would 
> weaken the security significantly, but I'd still like to know if it's 
> feasible?
> 
> My guess is that it's not possible, but I wanted to ask to make sure.
> 
> Cheers,
> Jake



Re: Looking for a printer for up to €80

2024-05-26 Thread Stefan U. Hegner

Hi Answart,

On 25.05.24 at 22:17, Answart Lauermann wrote:

for my home-made mini flyers. Low cost per page is important, quality
not so much; in particular, black and white would suffice.
Dot-matrix printers or (narrow) exotic models are also an option.
Constraints: Linux only, no car.


Over the decades I have had good experiences with the Kyocera
black-and-white lasers, especially as far as printing and running costs
are concerned.


"Workhorses" like the FS-1020D or 1030D can be found refurbished or on
Kleinanzeigen.de.


... Something like that could be an option for you.

Regards,

Stefan.

--
Stefan U. Hegner
 
  * * *
D-32584 Löhne --- good ole Germany
internet: http://www.hegner-web.de
  * * *
GPG-Key | 048D 7F64 0BEB 73B1 2725
F-Print | C05E 4F77 9674 EF11 55FE



OpenPGP_signature.asc
Description: OpenPGP digital signature
-- 
Linux mailing list Linux@lug-owl.de
subscribe/unsubscribe: https://lug-owl.de/mailman/listinfo/linux
Usage notes: http://www.lug-owl.de/Mailingliste/hints.epo


[elpa] externals/gnu-elpa-keyring-update 1e8726c459: (gnu-elpa-keyring-update): Install the new keys even into empty keyring

2024-05-25 Thread Stefan Monnier via
branch: externals/gnu-elpa-keyring-update
commit 1e8726c459258fba62ee38807abdae4e350e5238
Author: Stefan Monnier 
Commit: Stefan Monnier 

(gnu-elpa-keyring-update): Install the new keys even into empty keyring
---
 gnu-elpa-keyring-update.el | 20 +---
 1 file changed, 13 insertions(+), 7 deletions(-)

diff --git a/gnu-elpa-keyring-update.el b/gnu-elpa-keyring-update.el
index 758ae1ab1a..7ed604d9e5 100644
--- a/gnu-elpa-keyring-update.el
+++ b/gnu-elpa-keyring-update.el
@@ -1,11 +1,11 @@
 ;;; gnu-elpa-keyring-update.el --- Update Emacs's GPG keyring for GNU ELPA  
-*- lexical-binding: t; -*-
 
-;; Copyright (C) 2019-2022  Free Software Foundation, Inc.
+;; Copyright (C) 2019-2024  Free Software Foundation, Inc.
 
 ;; Author: Stefan Monnier 
 ;; Keywords: maint, tools
 ;; Package-Type: multi
-;; Version: 2022.12
+;; Version: 2022.12.1
 
 ;; This program is free software; you can redistribute it and/or modify
 ;; it under the terms of the GNU General Public License as published by
@@ -36,6 +36,14 @@
 ;; temporarily disable signature verification (see variable
 ;;   `package-check-signature') :-(
 
+;;; News:
+
+;; Since 2022.12:
+;; - Fix a bug where the new keys could end up remaining non-installed.
+;;
+;; Since 2019.3:
+;; - New GPG keys
+
 ;;; Code:
 
 ;;;###autoload
@@ -71,11 +79,9 @@
   (let ((gnupghome-dir (or (bound-and-true-p package-gnupghome-dir)
(expand-file-name "gnupg"
  package-user-dir
-(if (not (file-directory-p gnupghome-dir))
-(error "No keyring to update!")
-  (package-import-keyring (gnu-elpa-keyring-update--keyring))
-  (write-region "" nil (expand-file-name "gnu-elpa.timestamp" 
gnupghome-dir)
-nil 'silent
+(package-import-keyring (gnu-elpa-keyring-update--keyring))
+(write-region "" nil (expand-file-name "gnu-elpa.timestamp" gnupghome-dir)
+  nil 'silent)))
 
 ;; FIXME: Maybe we should use an advice on `package--check-signature'
 ;; so as to avoid this startup cost?



Re: nsh: fix build with upcoming stdio changes

2024-05-25 Thread Stefan Sperling
On Sat, May 25, 2024 at 06:57:51PM +0100, Tom Smyth wrote:
> Hi Stefan,
> 
> make obj is still needed
> 
> Stop in bgpnsh
> *** Error 2 in /home/tom/nsh1.4.1/nsh (:48 'all': @for entry
> in bgpnsh nshdoas; do  set -e; if test -d /home/tom/nsh1.4.1/nsh...)
> 
> make obj  first
> then
> make
> gets around it ...

Indeed, I overlooked obj dirs in subdirectories.
Thanks, this has now been fixed in the nsh github repo, as of
commit 957f3c72fb1d7839c038dd8126a59e390661ae15



Re: how to remove marks automatically

2024-05-25 Thread Stefan Thomas
Dear Silvain,
thank you, this worked perfectly for me!
Provided you have just one space as in the example,
in the part \mark \markup {
you can do a search and replace of

\\mark.\\markup.{[^{]+{[^}]+}[^}]+}

with nothing

(I tested it in Geany on Ubuntu)

The dot means any single character,
[^{]+ means a non-empty sequence of characters without any { character,
\ needs escaping.
You could add \s at the end to remove empty spaces or lines.
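For anyone who prefers doing this outside the editor, the same substitution can be sketched with Python's re module. The pattern below is a tightened variant of the one above: it assumes exactly the two nested brace pairs of the example (no deeper nesting inside \box), and the variable names are illustrative:

```python
import re

# LilyPond fragment from the thread, containing the marks to strip.
snippet = r"""\clef "treble" | % 1
R1*8 | % 9
\mark \markup { \box { A   } } R1*8 -\markup{ \tiny {Lija+Tambor} }
\mark \markup { \box { B   } }
 R1*8
\mark \markup { \box { C   } }
r4 2. \fermata :32"""

# Matches "\mark \markup { \box { ... } }" with exactly two nested
# brace pairs; "-\markup{ \tiny ... }" is untouched because it is not
# preceded by "\mark ".
pattern = r"\\mark\s+\\markup\s*\{\s*\\box\s*\{[^}]*\}\s*\}"
cleaned = re.sub(pattern, "", snippet)
print(cleaned)
```

After the substitution, only the `-\markup{ \tiny {...} }` annotation and the music itself remain.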


On 25.05.24 at 18:15, Stefan Thomas wrote:

Dear community,
I would like to remove automatically all the "\mark \markup { \box { LETTER
} }" in the below quoted text. Can I do this with regex? Does someone know
how?
Thanks,
Stefan

\version "2.22.2"

violine =  {
\clef "treble" | % 1
R1*8 | % 9
\mark \markup { \box { A   } } R1*8 -\markup{ \tiny {Lija+Tambor} }
\mark \markup { \box { B   } }
 R1*8
\mark \markup { \box { C   } }
r4 2. \fermata :32
}



-- 
Silvain Dupertuis
Route de Lausanne 335
1293 Bellevue (Switzerland)
tél. +41-(0)22-774.20.67
portable +41-(0)79-604.87.52
web: silvain-dupertuis.org <https://perso.silvain-dupertuis.org>


Re: Improve testing

2024-05-25 Thread Stefan Vodita
Some useful documentation on the gradlew commands:
https://github.com/apache/lucene/blob/main/help/workflow.txt



On Sat, 25 May 2024 at 19:38, Stefan Vodita  wrote:

> I'll add a step in between 1 and 2 that I often forget: ./gradlew tidy
> This refactors your code to the style the project uses, which we have
> checks for.
>
>
> On Sat, 25 May 2024 at 00:53, Michael Froh  wrote:
>
>> Is your new test uncommitted?
>>
>> The Gradle check will fail if you have uncommitted files, to avoid the
>> situation where it "works on my machine (because of a file that I forgot to
>> commit)".
>>
>> The rough workflow is:
>>
>> 1. Develop stuff (code and/or tests).
>> 2. Commit it.
>> 3. Gradle check.
>> 4. If Gradle check fails, then make changes and amend your commit. Go to
>> 3.
>>
>> Hope that helps,
>> Froh
>>
>>
>> On Fri, May 24, 2024 at 3:31 PM Chang Hank 
>> wrote:
>>
>>> After I added the new test case, I failed the ./gradlew check and it
>>> seems like the check failed because I added the new test case.
>>> Is there anything I need to do before executing ./gradlew check?
>>>
>>> Best,
>>> Hank
>>>
>>> > On May 24, 2024, at 12:53 PM, Chang Hank 
>>> wrote:
>>> >
>>> > Hi Robert,
>>> >
>>> > Thanks for your advice, will look into it!!
>>> >
>>> > Best,
>>> > Hank
>>> >> On May 24, 2024, at 12:46 PM, Robert Muir  wrote:
>>> >>
>>> >> On Fri, May 24, 2024 at 2:33 PM Chang Hank 
>>> wrote:
>>> >>>
>>> >>> Hi all,
>>> >>>
>>> >>> I want to improve the code coverage for Lucene, which package would
>>> you recommend testing to do so? Do we need more coverage in the Core
>>> package?
>>> >>>
>>> >>
>>> >> Hello,
>>> >>
>>> >> I'd recommend looking at the help/tests.txt file, you can generate the
>>> >> coverage report easily and find untested code:
>>> >>
>>> >> https://github.com/apache/lucene/blob/main/help/tests.txt#L193
>>> >>
>>> >> -
>>> >> To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
>>> >> For additional commands, e-mail: dev-h...@lucene.apache.org
>>> >>
>>> >
>>>
>>>
>>> -
>>> To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
>>> For additional commands, e-mail: dev-h...@lucene.apache.org
>>>
>>>




Re: XCF/XES programming

2024-05-25 Thread Stefan Lezzi
Hi again,

I've got the 2005 SHARE Anaheim version (thank you, John!), and I'd love to
have the SHARE Atlanta 2012 version as well. Only in that version is the code
modified to circumvent the RUCSA problem (solved with "...the authorized TSO
command processor RXCUAUTH via TSOEXEC to update these structures").

Sadly, I'm not a sufficiently skilled assembler programmer to code this part
myself.

If you can check your old project/files for this RXC zip, that would be great!

Thanks

Stefan

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN




Re: [de-users] Pause symbol on LibreOffice Impress slides

2024-05-25 Thread Stefan

Hi Robert.
Thanks for the kind reply. I'll send a screenshot of the symbol to your
e-mail address. This is what it looks like ^^

I did not knowingly change anything in the navigation settings. Of
course it is possible that I unknowingly changed a setting with a wrong
key combination. I uninstalled LibreOffice, restarted the machine and
reinstalled the current version. (Version: 7.6.7.2 (X86_64) / LibreOffice
Community/ Build ID: dd47e4b30cb7dab30588d6c79c651f218165e3c5/ CPU
threads: 16; OS: Windows 10.0 Build 22631; UI render: Skia/Raster; VCL:
win/ Locale: de-DE (de_DE); UI: de-DE/ Calc: threaded).

But the pause symbol is still shown. And no, I do not let the
presentation run automatically; I advance it with the mouse or a
presenter. And nothing suspicious stood out to me in the settings
(Slide Show -> Slide Show Settings) or under Tools -> Options ->
LibreOffice Impress. So I am genuinely at a loss.
Best regards, Stefan

On 25.05.2024 at 11:36, Robert Großkopf wrote:

Hello Stefan,

I just opened a presentation. For me, under

Version: 24.2.3.2 (X86_64) / LibreOffice Community
Build ID: 433d9c2ded56988e8a90e6b2e771ee4e6a5ab2ba
CPU threads: 6; OS: Linux 6.4; UI render: default; VCL: kf5 (cairo+xcb)
Locale: de-DE (de_DE.UTF-8); UI: de-DE
Calc: threaded

no pause symbol appears.

You haven't changed anything in the navigation settings, have you? I
use Impress very rarely and don't know whether a pause symbol shows up
anywhere. That should really only happen when an automatically running
presentation is interrupted.

Regards

Robert




[de-users] Pause symbol on LibreOffice Impress slides

2024-05-25 Thread Stefan

Hello,
A question: I frequently give talks with LibreOffice Impress. Only now
have I noticed, to my annoyance (either this is new, or it was always
like this and I simply never noticed ^^), that every time I click to
advance a slide, a small pause symbol is shown at the bottom left. I
would like to turn this off, but have not found a way to do so. Can
anyone help me?
Many thanks, Stefan




Re: TortoiseProc.exe command line merge regularly asks for cleanup

2024-05-24 Thread Stefan via TortoiseSVN
One sure way to get errors that require a cleanup is to have the working
copies stored on a network share instead of a local drive.

-- 
You received this message because you are subscribed to the Google Groups 
"TortoiseSVN" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to tortoisesvn+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/tortoisesvn/bad935c6-5cfd-4b4d-aad0-b2cba78c5177n%40googlegroups.com.


[jira] [Commented] (CASSANDRA-19654) Update bundled Cassandra cassandra-driver-core dependency

2024-05-24 Thread Stefan Miklosovic (Jira)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-19654?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17849228#comment-17849228
 ] 

Stefan Miklosovic commented on CASSANDRA-19654:
---

I am not sure we should bump the versions in a patch release. We should
carefully evaluate what impact it has, because people bumping a patch release
in their integrations do not expect that their dependency tree will change
suddenly and substantially. cc [~brandon.williams]

> Update bundled Cassandra cassandra-driver-core dependency
> -
>
> Key: CASSANDRA-19654
> URL: https://issues.apache.org/jira/browse/CASSANDRA-19654
> Project: Cassandra
>  Issue Type: Task
>  Components: Dependencies
>Reporter: Jackson Fleming
>Priority: Normal
>
> There's a dependency in Cassandra project on an old version of the Java 
> driver cassandra-driver-core - 3.11.0 in the 4.0 and later releases of 
> Cassandra 
>  
> (For example on the 4.1 branch 
> [https://github.com/apache/cassandra/blob/cassandra-4.1/build.xml#L691)] 
>  
> It appears that this dependency may have some security vulnerabilities in 
> transitive dependencies.
> But also this is a very old version of the driver, ideally it would be 
> aligned to a newer version, I would suggest either 3.11.5 which is the latest 
> in that line of driver versions 
> [https://mvnrepository.com/artifact/com.datastax.cassandra/cassandra-driver-core|https://mvnrepository.com/artifact/com.datastax.cassandra/cassandra-driver-core)]
> or this gets updated to the latest 4.x driver (as of writing that's 4.18.1 in 
> [https://mvnrepository.com/artifact/org.apache.cassandra/java-driver-core] ) 
> but this seems like a larger undertaking.






[jira] [Comment Edited] (CASSANDRA-19654) Update bundled Cassandra cassandra-driver-core dependency

2024-05-24 Thread Stefan Miklosovic (Jira)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-19654?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17849228#comment-17849228
 ] 

Stefan Miklosovic edited comment on CASSANDRA-19654 at 5/24/24 9:25 AM:


I am not sure if we are going to bump the versions in a patch release. We 
should carefully evaluate what impact it has because people bumping a patch 
release in their integrations do not expect that their dependency tree would be 
changed suddenly and substantially. cc [~brandon.williams]


was (Author: smiklosovic):
I am not sure if we are going to bump the versions in a patch releases. We 
should carefully evaluate what impact it has because people bumping a patch 
release in their integrations do not expect that their dependency tree would be 
changed suddenly and substantially. cc [~brandon.williams]

> Update bundled Cassandra cassandra-driver-core dependency
> -
>
> Key: CASSANDRA-19654
> URL: https://issues.apache.org/jira/browse/CASSANDRA-19654
> Project: Cassandra
>  Issue Type: Task
>  Components: Dependencies
>Reporter: Jackson Fleming
>Priority: Normal
>
> There's a dependency in Cassandra project on an old version of the Java 
> driver cassandra-driver-core - 3.11.0 in the 4.0 and later releases of 
> Cassandra 
>  
> (For example on the 4.1 branch 
> [https://github.com/apache/cassandra/blob/cassandra-4.1/build.xml#L691)] 
>  
> It appears that this dependency may have some security vulnerabilities in 
> transitive dependencies.
> But this is also a very old version of the driver; ideally it would be 
> aligned to a newer version. I would suggest either 3.11.5, which is the 
> latest in that line of driver versions 
> ([https://mvnrepository.com/artifact/com.datastax.cassandra/cassandra-driver-core]), 
> or updating to the latest 4.x driver (as of writing that's 4.18.1, see 
> [https://mvnrepository.com/artifact/org.apache.cassandra/java-driver-core]), 
> but this seems like a larger undertaking.






[jira] [Updated] (CASSANDRA-19450) Hygiene updates for warnings and pytests

2024-05-24 Thread Stefan Miklosovic (Jira)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-19450?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Stefan Miklosovic updated CASSANDRA-19450:
--
  Fix Version/s: 5.1
 (was: 5.x)
Source Control Link: 
https://github.com/apache/cassandra/commit/6701259bce91672a7c3ca9fb77ea7b040e9c
 Resolution: Fixed
 Status: Resolved  (was: Ready to Commit)

> Hygiene updates for warnings and pytests
> 
>
> Key: CASSANDRA-19450
> URL: https://issues.apache.org/jira/browse/CASSANDRA-19450
> Project: Cassandra
>  Issue Type: Improvement
>  Components: CQL/Interpreter
>Reporter: Brad Schoening
>Assignee: Brad Schoening
>Priority: Low
> Fix For: 5.1
>
>
>  
>  * -Update 'Warning' message to write to stderr-
>  * -Replace TimeoutError Exception with builtin (since Python 3.3)-
>  * -Remove re.pattern_type (removed since Python 3.7)-
>  * Fix mutable arg [] in test/run_cqlsh.py read_until()
>  * Remove redirect of stderr to stdout in pytest fixture with tty=false; 
> Deprecation warnings can otherwise fail unit tests when stdout & stderr 
> output is combined.
>  * Fix several pycodestyle issues






[PATCH] Hard register asm constraint

2024-05-24 Thread Stefan Schulze Frielinghaus
This implements hard register constraints for inline asm.  A hard register
constraint is of the form {regname} where regname is any valid register.  This
basically renders register asm superfluous.  For example, the snippet

int test (int x, int y)
{
  register int r4 asm ("r4") = x;
  register int r5 asm ("r5") = y;
  unsigned int copy = y;
  asm ("foo %0,%1,%2" : "+d" (r4) : "d" (r5), "d" (copy));
  return r4;
}

could be rewritten into

int test (int x, int y)
{
  asm ("foo %0,%1,%2" : "+{r4}" (x) : "{r5}" (y), "d" (y));
  return x;
}

As a side-effect this also solves the problem of call-clobbered registers.
That being said, I was wondering whether we could utilize this feature in order
to get rid of local register asm automatically?  For example, converting

// Result will be in r2 on s390
extern int bar (void);

void test (void)
{
  register int x asm ("r2") = 42;
  bar ();
  asm ("foo %0\n" :: "r" (x));
}

into

void test (void)
{
  int x = 42;
  bar ();
  asm ("foo %0\n" :: "{r2}" (x));
}

in order to get rid of the limitation of call-clobbered registers which may
lead to subtle bugs---especially if you think of non-obvious calls e.g.
introduced by sanitizer/tracer/whatever.  Since such a transformation has the
potential to break existing code do you see any edge cases where this might be
problematic or even show stoppers?  Currently, even

int test (void)
{
  register int x asm ("r2") = 42;
  register int y asm ("r2") = 24;
  asm ("foo %0,%1\n" :: "r" (x), "r" (y));
}

is allowed which seems error prone to me.  Thus, if 100% backwards
compatibility would be required, then automatically converting every register
asm to the new mechanism isn't viable.  Still quite a lot could be transformed.
Any thoughts?

Currently I allow multiple alternatives as demonstrated by
gcc/testsuite/gcc.target/s390/asm-hard-reg-2.c.  However, since a hard register
constraint is pretty specific I could also think of erroring out in case of
alternatives.  Are there any real use cases out there for multiple
alternatives where one would like to use hard register constraints?

With the current implementation we have a "user visible change" in the sense
that for

void test (void)
{
  register int x asm ("r2") = 42;
  register int y asm ("r2") = 24;
  asm ("foo %0,%1\n" : "=r" (x), "=r" (y));
}

we do not get the error

  "invalid hard register usage between output operands"

anymore but rather

  "multiple outputs to hard register: %r2"

This is due to the error handling in gimplify_asm_expr ().  Speaking of errors,
I also error out earlier as before which means that e.g. in pr87600-2.c only
the first error is reported and processing is stopped afterwards which means
the subsequent tests fail.

I've been skimming through all targets and it looks to me as if none is using
curly brackets for their constraints.  Of course, I may have missed something.

Cheers,
Stefan

PS: Current state for Clang: https://reviews.llvm.org/D105142

---
 gcc/cfgexpand.cc  |  42 ---
 gcc/genpreds.cc   |   4 +-
 gcc/gimplify.cc   | 115 +-
 gcc/lra-constraints.cc|  17 +++
 gcc/recog.cc  |  14 ++-
 gcc/stmt.cc   | 102 +++-
 gcc/stmt.h|  10 +-
 .../gcc.target/s390/asm-hard-reg-1.c  | 103 
 .../gcc.target/s390/asm-hard-reg-2.c  |  29 +
 .../gcc.target/s390/asm-hard-reg-3.c  |  24 
 gcc/testsuite/lib/scanasm.exp |   4 +
 11 files changed, 407 insertions(+), 57 deletions(-)
 create mode 100644 gcc/testsuite/gcc.target/s390/asm-hard-reg-1.c
 create mode 100644 gcc/testsuite/gcc.target/s390/asm-hard-reg-2.c
 create mode 100644 gcc/testsuite/gcc.target/s390/asm-hard-reg-3.c

diff --git a/gcc/cfgexpand.cc b/gcc/cfgexpand.cc
index 557cb28733b..47f71a2e803 100644
--- a/gcc/cfgexpand.cc
+++ b/gcc/cfgexpand.cc
@@ -2955,44 +2955,6 @@ expand_asm_loc (tree string, int vol, location_t locus)
   emit_insn (body);
 }
 
-/* Return the number of times character C occurs in string S.  */
-static int
-n_occurrences (int c, const char *s)
-{
-  int n = 0;
-  while (*s)
-n += (*s++ == c);
-  return n;
-}
-
-/* A subroutine of expand_asm_operands.  Check that all operands have
-   the same number of alternatives.  Return true if so.  */
-
-static bool
-check_operand_nalternatives (const vec<const char *> &constraints)
-{
-  unsigned len = constraints.length();
-  if (len > 0)
-{
-  int nalternatives = n_occurrences (',', constraints[0]);
-
-  if (nalternatives + 1 > MAX_RECOG_ALTERNATIVES)
-   {
- 

[jira] [Comment Edited] (CASSANDRA-19450) Hygiene updates for warnings and pytests

2024-05-24 Thread Stefan Miklosovic (Jira)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-19450?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17849221#comment-17849221
 ] 

Stefan Miklosovic edited comment on CASSANDRA-19450 at 5/24/24 8:51 AM:


I think this just looks fine. Going to ship it.

[CASSANDRA-19450|https://github.com/instaclustr/cassandra/tree/CASSANDRA-19450]
{noformat}
java17_pre-commit_tests 
  ✓ j17_build4m 14s
  ✓ j17_cqlsh_dtests_py311   6m 57s
  ✓ j17_cqlsh_dtests_py311_vnode 7m 22s
  ✓ j17_cqlsh_dtests_py386m 55s
  ✓ j17_cqlsh_dtests_py38_vnode  7m 17s
  ✓ j17_cqlshlib_cython_tests7m 16s
  ✓ j17_cqlshlib_tests   6m 23s
  ✓ j17_dtests  36m 35s
  ✓ j17_dtests_vnode 35m 1s
  ✓ j17_unit_tests  14m 20s
  ✓ j17_utests_oa13m 2s
  ✕ j17_dtests_latest   35m 19s
  streaming_test.TestStreaming test_zerocopy_streaming_no_replication
  ✕ j17_jvm_dtests  24m 34s
  
org.apache.cassandra.fuzz.harry.integration.model.InJVMTokenAwareExecutorTest 
testRepair TIMEOUTED
  ✕ j17_jvm_dtests_latest_vnode 24m 15s
  org.apache.cassandra.distributed.test.jmx.JMXFeatureTest 
testOneNetworkInterfaceProvisioning
  ✕ j17_utests_latest   15m 50s
  org.apache.cassandra.tcm.DiscoverySimulationTest discoveryTest
java17_separate_tests
java11_pre-commit_tests 
java11_separate_tests
{noformat}

[java17_pre-commit_tests|https://app.circleci.com/pipelines/github/instaclustr/cassandra/4351/workflows/5f522c11-aae5-4915-8102-f79807d661d6]
[java17_separate_tests|https://app.circleci.com/pipelines/github/instaclustr/cassandra/4351/workflows/87429218-e1e2-4589-b879-09f9e80ef3b6]
[java11_pre-commit_tests|https://app.circleci.com/pipelines/github/instaclustr/cassandra/4351/workflows/222ad871-683e-4a14-919d-9e3e92def96c]
[java11_separate_tests|https://app.circleci.com/pipelines/github/instaclustr/cassandra/4351/workflows/3cd71329-9b0f-4f48-ae2b-33fd057520c9]

streaming_test.py::TestStreaming::test_zerocopy_streaming_no_replication was 
verified to work locally


was (Author: smiklosovic):
I think this just looks fine. Going to ship it.

[CASSANDRA-19450|https://github.com/instaclustr/cassandra/tree/CASSANDRA-19450]
{noformat}
java17_pre-commit_tests 
  ✓ j17_build4m 14s
  ✓ j17_cqlsh_dtests_py311   6m 57s
  ✓ j17_cqlsh_dtests_py311_vnode 7m 22s
  ✓ j17_cqlsh_dtests_py386m 55s
  ✓ j17_cqlsh_dtests_py38_vnode  7m 17s
  ✓ j17_cqlshlib_cython_tests7m 16s
  ✓ j17_cqlshlib_tests   6m 23s
  ✓ j17_dtests  36m 35s
  ✓ j17_dtests_vnode 35m 1s
  ✓ j17_unit_tests  14m 20s
  ✓ j17_utests_oa13m 2s
  ✕ j17_dtests_latest   35m 19s
  streaming_test.TestStreaming test_zerocopy_streaming_no_replication
  ✕ j17_jvm_dtests  24m 34s
  
org.apache.cassandra.fuzz.harry.integration.model.InJVMTokenAwareExecutorTest 
testRepair TIMEOUTED
  ✕ j17_jvm_dtests_latest_vnode 24m 15s
  org.apache.cassandra.distributed.test.jmx.JMXFeatureTest 
testOneNetworkInterfaceProvisioning
  ✕ j17_utests_latest   15m 50s
  org.apache.cassandra.tcm.DiscoverySimulationTest discoveryTest
java17_separate_tests
java11_pre-commit_tests 
java11_separate_tests
{noformat}

[java17_pre-commit_tests|https://app.circleci.com/pipelines/github/instaclustr/cassandra/4351/workflows/5f522c11-aae5-4915-8102-f79807d661d6]
[java17_separate_tests|https://app.circleci.com/pipelines/github/instaclustr/cassandra/4351/workflows/87429218-e1e2-4589-b879-09f9e80ef3b6]
[java11_pre-commit_tests|https://app.circleci.com/pipelines/github/instaclustr/cassandra/4351/workflows/222ad871-683e-4a14-919d-9e3e92def96c]
[java11_separate_tests|https://app.circleci.com/pipelines/github/instaclustr/cassandra/4351/workflows/3cd71329-9b0f-4f48-ae2b-33fd057520c9]


> Hygiene updates for warnings and pytests
> 
>
> Key: CASSANDRA-19450
> URL: https://issues.apache.org/jira/browse/CASSANDRA-19450
>  

[jira] [Commented] (CASSANDRA-19632) wrap tracing logs in isTraceEnabled across the codebase

2024-05-24 Thread Stefan Miklosovic (Jira)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-19632?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17849222#comment-17849222
 ] 

Stefan Miklosovic commented on CASSANDRA-19632:
---

[~brandon.williams] any feedback on the second patch I just provided CI for 
above?

> wrap tracing logs in isTraceEnabled across the codebase
> ---
>
> Key: CASSANDRA-19632
> URL: https://issues.apache.org/jira/browse/CASSANDRA-19632
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Legacy/Core
>    Reporter: Stefan Miklosovic
>Assignee: Stefan Miklosovic
>Priority: Normal
> Fix For: 5.x
>
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> Our usage of logger.isTraceEnabled across the codebase is inconsistent. This 
> would also fix issues similar in e.g. CASSANDRA-19429 as [~rustyrazorblade] 
> suggested.
> We should fix this at least in trunk and 5.0 (not critical though) and 
> probably come up with a checkstyle rule to prevent not calling isTraceEnabled 
> while logging with TRACE level. 
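The guard the ticket asks for is the standard SLF4J idiom of wrapping trace calls in isTraceEnabled. A minimal Python analogue using the stdlib logging module (illustrative only; the Cassandra codebase itself is Java/SLF4J, and all names here are hypothetical):

```python
# Illustrative Python analogue of the Java/SLF4J guard pattern the ticket
# describes; module and names here are hypothetical, not from Cassandra.
import logging

logger = logging.getLogger("tracing_demo")
logging.basicConfig(level=logging.INFO)  # TRACE/DEBUG-level output is disabled

calls = {"expensive": 0}

def expensive_repr():
    # Stands in for a costly toString()/serialization of a log argument.
    calls["expensive"] += 1
    return "huge object dump"

# Unguarded: the argument is evaluated even though the level is disabled.
logger.debug("state: %s" % expensive_repr())

# Guarded (the pattern the ticket wants enforced): the costly work is skipped.
if logger.isEnabledFor(logging.DEBUG):
    logger.debug("state: %s", expensive_repr())

print(calls["expensive"])  # 1 -- the guard avoided the second evaluation
```

A checkstyle rule would flag the unguarded form; the guarded form pays only a cheap level check when tracing is off.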






[jira] [Commented] (CASSANDRA-19450) Hygiene updates for warnings and pytests

2024-05-24 Thread Stefan Miklosovic (Jira)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-19450?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17849221#comment-17849221
 ] 

Stefan Miklosovic commented on CASSANDRA-19450:
---

I think this just looks fine. Going to ship it.

[CASSANDRA-19450|https://github.com/instaclustr/cassandra/tree/CASSANDRA-19450]
{noformat}
java17_pre-commit_tests 
  ✓ j17_build4m 14s
  ✓ j17_cqlsh_dtests_py311   6m 57s
  ✓ j17_cqlsh_dtests_py311_vnode 7m 22s
  ✓ j17_cqlsh_dtests_py386m 55s
  ✓ j17_cqlsh_dtests_py38_vnode  7m 17s
  ✓ j17_cqlshlib_cython_tests7m 16s
  ✓ j17_cqlshlib_tests   6m 23s
  ✓ j17_dtests  36m 35s
  ✓ j17_dtests_vnode 35m 1s
  ✓ j17_unit_tests  14m 20s
  ✓ j17_utests_oa13m 2s
  ✕ j17_dtests_latest   35m 19s
  streaming_test.TestStreaming test_zerocopy_streaming_no_replication
  ✕ j17_jvm_dtests  24m 34s
  
org.apache.cassandra.fuzz.harry.integration.model.InJVMTokenAwareExecutorTest 
testRepair TIMEOUTED
  ✕ j17_jvm_dtests_latest_vnode 24m 15s
  org.apache.cassandra.distributed.test.jmx.JMXFeatureTest 
testOneNetworkInterfaceProvisioning
  ✕ j17_utests_latest   15m 50s
  org.apache.cassandra.tcm.DiscoverySimulationTest discoveryTest
java17_separate_tests
java11_pre-commit_tests 
java11_separate_tests
{noformat}

[java17_pre-commit_tests|https://app.circleci.com/pipelines/github/instaclustr/cassandra/4351/workflows/5f522c11-aae5-4915-8102-f79807d661d6]
[java17_separate_tests|https://app.circleci.com/pipelines/github/instaclustr/cassandra/4351/workflows/87429218-e1e2-4589-b879-09f9e80ef3b6]
[java11_pre-commit_tests|https://app.circleci.com/pipelines/github/instaclustr/cassandra/4351/workflows/222ad871-683e-4a14-919d-9e3e92def96c]
[java11_separate_tests|https://app.circleci.com/pipelines/github/instaclustr/cassandra/4351/workflows/3cd71329-9b0f-4f48-ae2b-33fd057520c9]


> Hygiene updates for warnings and pytests
> 
>
> Key: CASSANDRA-19450
> URL: https://issues.apache.org/jira/browse/CASSANDRA-19450
> Project: Cassandra
>  Issue Type: Improvement
>  Components: CQL/Interpreter
>Reporter: Brad Schoening
>Assignee: Brad Schoening
>Priority: Low
> Fix For: 5.x
>
>
>  
>  * -Update 'Warning' message to write to stderr-
>  * -Replace TimeoutError Exception with builtin (since Python 3.3)-
>  * -Remove re.pattern_type (removed since Python 3.7)-
>  * Fix mutable arg [] in test/run_cqlsh.py read_until()
>  * Remove redirect of stderr to stdout in pytest fixture with tty=false; 
> Deprecation warnings can otherwise fail unit tests when stdout & stderr 
> output is combined.
>  * Fix several pycodestyle issues






[jira] [Commented] (CASSANDRA-19632) wrap tracing logs in isTraceEnabled across the codebase

2024-05-24 Thread Stefan Miklosovic (Jira)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-19632?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17849220#comment-17849220
 ] 

Stefan Miklosovic commented on CASSANDRA-19632:
---

Looks fine. I think that cqlsh_tests.test_cqlsh.TestCqlsh test_describe is the 
result of CASSANDRA-19592 and this build was not rebased against the current 
trunk.

[CASSANDRA-19632-2|https://github.com/instaclustr/cassandra/tree/CASSANDRA-19632-2]
{noformat}
java17_pre-commit_tests 
  ✓ j17_build5m 12s
  ✓ j17_cqlsh_dtests_py311_vnode 7m 45s
  ✓ j17_cqlsh_dtests_py386m 56s
  ✓ j17_cqlsh_dtests_py38_vnode  7m 20s
  ✓ j17_cqlshlib_cython_tests9m 35s
  ✓ j17_cqlshlib_tests   9m 30s
  ✓ j17_dtests  35m 34s
  ✓ j17_dtests_vnode36m 38s
  ✓ j17_jvm_dtests_latest_vnode 20m 44s
  ✓ j17_unit_tests  14m 15s
  ✓ j17_utests_latest   14m 27s
  ✓ j17_utests_oa   14m 21s
  ✕ j17_cqlsh_dtests_py311   6m 41s
  cqlsh_tests.test_cqlsh.TestCqlsh test_describe
  ✕ j17_dtests_latest   36m 44s
  auth_test.TestAuthUnavailable test_authorization_handle_unavailable
  configuration_test.TestConfiguration test_change_durable_writes
  ✕ j17_jvm_dtests  22m 46s
java17_separate_tests
java11_pre-commit_tests 
java11_separate_tests
{noformat}

[java17_pre-commit_tests|https://app.circleci.com/pipelines/github/instaclustr/cassandra/4344/workflows/f9410f3a-9af4-4924-a2c8-44fc3d7384c0]
[java17_separate_tests|https://app.circleci.com/pipelines/github/instaclustr/cassandra/4344/workflows/5a2470b3-7dee-4b50-98c7-1a5f9b50e650]
[java11_pre-commit_tests|https://app.circleci.com/pipelines/github/instaclustr/cassandra/4344/workflows/7263d3ca-ef7f-4499-9fff-fbb77617b613]
[java11_separate_tests|https://app.circleci.com/pipelines/github/instaclustr/cassandra/4344/workflows/91f98de5-c3e5-45bb-9737-a6d98922fedf]


> wrap tracing logs in isTraceEnabled across the codebase
> ---
>
> Key: CASSANDRA-19632
> URL: https://issues.apache.org/jira/browse/CASSANDRA-19632
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Legacy/Core
>    Reporter: Stefan Miklosovic
>Assignee: Stefan Miklosovic
>Priority: Normal
> Fix For: 5.x
>
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> Our usage of logger.isTraceEnabled across the codebase is inconsistent. This 
> would also fix issues similar in e.g. CASSANDRA-19429 as [~rustyrazorblade] 
> suggested.
> We should fix this at least in trunk and 5.0 (not critical though) and 
> probably come up with a checkstyle rule to prevent not calling isTraceEnabled 
> while logging with TRACE level. 






[jira] [Comment Edited] (CASSANDRA-12937) Default setting (yaml) for SSTable compression

2024-05-24 Thread Stefan Miklosovic (Jira)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-12937?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17849218#comment-17849218
 ] 

Stefan Miklosovic edited comment on CASSANDRA-12937 at 5/24/24 8:36 AM:


Seems reasonably clean ... 

What I have not done is the idea [~jlewandowski] had with "if some config 
parameter is not in the cql statement, just merge the values from cassandra.yaml", 
because it is quite tricky to get that right. We would need to know what values 
were specified, then diff what is not there, and then validate that such a 
combination makes sense (and if it does not, should we fail an otherwise valid 
CQL statement just because we happened to merge values from cassandra.yaml and 
that combination was not right? I do not think so).

Let's just go with the simple case of "if compression is not specified, just take 
the defaults from cassandra.yaml" rather than trying to merge the configs ... 
Too much of a hassle; it might come as an improvement if somebody is really 
after that.

I will try to come up with more tests, and I think that sometime next week this 
should be all completed and ready for review again.

[CASSANDRA-12937-squashed|https://github.com/instaclustr/cassandra/tree/CASSANDRA-12937-squashed]
{noformat}
java17_pre-commit_tests 
  ✓ j17_build 4m 7s
  ✓ j17_cqlsh_dtests_py311   7m 11s
  ✓ j17_cqlsh_dtests_py311_vnode 7m 18s
  ✓ j17_cqlsh_dtests_py386m 58s
  ✓ j17_cqlsh_dtests_py38_vnode   7m 1s
  ✓ j17_cqlshlib_cython_tests7m 26s
  ✓ j17_cqlshlib_tests   6m 50s
  ✓ j17_unit_tests  17m 36s
  ✓ j17_utests_oa   15m 39s
  ✕ j17_dtests  37m 48s
  scrub_test.TestScrub test_standalone_scrub_essential_files_only
  topology_test.TestTopology test_movement
  ✕ j17_dtests_latest   35m 36s
  offline_tools_test.TestOfflineTools test_sstableverify
  scrub_test.TestScrub test_standalone_scrub_essential_files_only
  configuration_test.TestConfiguration test_change_durable_writes
  ✕ j17_dtests_vnode35m 11s
  scrub_test.TestScrub test_standalone_scrub_essential_files_only
  ✕ j17_jvm_dtests  28m 15s
  ✕ j17_jvm_dtests_latest_vnode 27m 59s
  
org.apache.cassandra.fuzz.harry.integration.model.ConcurrentQuiescentCheckerIntegrationTest
 testConcurrentReadWriteWorkload
  ✕ j17_utests_latest   14m 34s
  org.apache.cassandra.tcm.DiscoverySimulationTest discoveryTest
java17_separate_tests
java11_pre-commit_tests 
java11_separate_tests
{noformat}

[java17_pre-commit_tests|https://app.circleci.com/pipelines/github/instaclustr/cassandra/4350/workflows/a68e4fb0-bd7a-4758-841c-6b4b0fe22865]
[java17_separate_tests|https://app.circleci.com/pipelines/github/instaclustr/cassandra/4350/workflows/fa57a86d-d120-4304-bbdf-a6cf8fefc4d2]
[java11_pre-commit_tests|https://app.circleci.com/pipelines/github/instaclustr/cassandra/4350/workflows/91afc77c-54fe-4369-9cb9-ababa3568e16]
[java11_separate_tests|https://app.circleci.com/pipelines/github/instaclustr/cassandra/4350/workflows/71add260-9c68-4d87-9d5a-99863a01bb3f]



was (Author: smiklosovic):
Seems reasonably clean ... 

What I have not done is the idea Jacek had with "if some config 
parameter is not in the cql statement, just merge the values from cassandra.yaml", 
because it is quite tricky to get that right. We would need to know what values 
were specified, then diff what is not there, and then validate that such a 
combination makes sense (and if it does not, should we fail an otherwise valid 
CQL statement just because we happened to merge values from cassandra.yaml and 
that combination was not right? I do not think so).

Let's just go with the simple case of "if compression is not specified, just take 
the defaults from cassandra.yaml" rather than trying to merge the configs ... 
Too much of a hassle; it might come as an improvement if somebody is really 
after that.

I will try to come up with more tests, and I think that sometime next week this 
should be all completed and ready for review again.

[CASSANDRA-12937-squashed|https://github.com/instaclustr/cassandra/tree/CASSANDRA-12937-squashed]
{noformat}
java17_pre-commit_tests 
  ✓ j17_build 4m 7s
  ✓ j17_cqlsh_dtests_py311   7m 11s
  ✓ j17_cqlsh_dtests_py311_vnode 7m 18s
  ✓ j17_cqlsh_dtest

[jira] [Commented] (CASSANDRA-12937) Default setting (yaml) for SSTable compression

2024-05-24 Thread Stefan Miklosovic (Jira)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-12937?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17849218#comment-17849218
 ] 

Stefan Miklosovic commented on CASSANDRA-12937:
---

Seems reasonably clean ... 

What I have not done is the idea Jacek had with "if some config 
parameter is not in the cql statement, just merge the values from cassandra.yaml", 
because it is quite tricky to get that right. We would need to know what values 
were specified, then diff what is not there, and then validate that such a 
combination makes sense (and if it does not, should we fail an otherwise valid 
CQL statement just because we happened to merge values from cassandra.yaml and 
that combination was not right? I do not think so).

Let's just go with the simple case of "if compression is not specified, just take 
the defaults from cassandra.yaml" rather than trying to merge the configs ... 
Too much of a hassle; it might come as an improvement if somebody is really 
after that.

I will try to come up with more tests, and I think that sometime next week this 
should be all completed and ready for review again.

[CASSANDRA-12937-squashed|https://github.com/instaclustr/cassandra/tree/CASSANDRA-12937-squashed]
{noformat}
java17_pre-commit_tests 
  ✓ j17_build 4m 7s
  ✓ j17_cqlsh_dtests_py311   7m 11s
  ✓ j17_cqlsh_dtests_py311_vnode 7m 18s
  ✓ j17_cqlsh_dtests_py386m 58s
  ✓ j17_cqlsh_dtests_py38_vnode   7m 1s
  ✓ j17_cqlshlib_cython_tests7m 26s
  ✓ j17_cqlshlib_tests   6m 50s
  ✓ j17_unit_tests  17m 36s
  ✓ j17_utests_oa   15m 39s
  ✕ j17_dtests  37m 48s
  scrub_test.TestScrub test_standalone_scrub_essential_files_only
  topology_test.TestTopology test_movement
  ✕ j17_dtests_latest   35m 36s
  offline_tools_test.TestOfflineTools test_sstableverify
  scrub_test.TestScrub test_standalone_scrub_essential_files_only
  configuration_test.TestConfiguration test_change_durable_writes
  ✕ j17_dtests_vnode35m 11s
  scrub_test.TestScrub test_standalone_scrub_essential_files_only
  ✕ j17_jvm_dtests  28m 15s
  ✕ j17_jvm_dtests_latest_vnode 27m 59s
  
org.apache.cassandra.fuzz.harry.integration.model.ConcurrentQuiescentCheckerIntegrationTest
 testConcurrentReadWriteWorkload
  ✕ j17_utests_latest   14m 34s
  org.apache.cassandra.tcm.DiscoverySimulationTest discoveryTest
java17_separate_tests
java11_pre-commit_tests 
java11_separate_tests
{noformat}

[java17_pre-commit_tests|https://app.circleci.com/pipelines/github/instaclustr/cassandra/4350/workflows/a68e4fb0-bd7a-4758-841c-6b4b0fe22865]
[java17_separate_tests|https://app.circleci.com/pipelines/github/instaclustr/cassandra/4350/workflows/fa57a86d-d120-4304-bbdf-a6cf8fefc4d2]
[java11_pre-commit_tests|https://app.circleci.com/pipelines/github/instaclustr/cassandra/4350/workflows/91afc77c-54fe-4369-9cb9-ababa3568e16]
[java11_separate_tests|https://app.circleci.com/pipelines/github/instaclustr/cassandra/4350/workflows/71add260-9c68-4d87-9d5a-99863a01bb3f]


> Default setting (yaml) for SSTable compression
> --
>
> Key: CASSANDRA-12937
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12937
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Local/Config
>Reporter: Michael Semb Wever
>Assignee: Stefan Miklosovic
>Priority: Low
>  Labels: AdventCalendar2021
> Fix For: 5.x
>
>  Time Spent: 8h 20m
>  Remaining Estimate: 0h
>
> In many situations the choice of compression for sstables is more relevant to 
> the disks attached than to the schema and data.
> This issue is to add to cassandra.yaml a default value for sstable 
> compression that new tables will inherit (instead of the defaults found in 
> {{CompressionParams.DEFAULT}}.
> Examples where this can be relevant are filesystems that do on-the-fly 
> compression (btrfs, zfs) or specific disk configurations or even specific C* 
> versions (see CASSANDRA-10995 ).
> +Additional information for newcomers+
> Some new fields need to be added to {{cassandra.yaml}} to allow specifying 
> the field required for defining the default compression parameters. In 
> {{DatabaseDescriptor}} a new {

The Way of Minimum Gas Calculation

2024-05-23 Thread Disselborg, Stefan via subsurface
Hello Subsurface Team,

I have the following dive in the planner:

back gas:  21% D12 / 232 bar
Deco gas: 50% AL80
SAC: 20ℓ/min
SAC Factor:   3,0
Problem solving time:3min

➙  45m  15min  (runtime 20min)
➚  21m   5min  (runtime 25min)
— Minimum Gas (based on 3.0×AMV/+3min@45m): 1.906ℓ/79bar

If I calculate:
+3min @ 45m × 60 ℓ/min = (3 × 5.5 × 60) = 990 ℓ
5min Δ 24m × 60 ℓ/min = (5 × 4.3 × 60) = 1290 ℓ  (4.3 = average pressure from 
45m to 21m)

I have:
SUM:   2280 ℓ / 95 bar (D12)
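The hand calculation above can be written out as a short script (a sketch under stated assumptions: ambient pressure of 1 bar per 10 m plus 1 bar at the surface, and a D12 total volume of 24 ℓ; this mirrors the manual arithmetic, not Subsurface's implementation):

```python
# Sketch of the hand calculation above -- not Subsurface's implementation.
# Assumptions: ambient pressure = 1 bar + depth/10, D12 = 2 x 12 L = 24 L.

SAC = 20.0        # surface air consumption, L/min
FACTOR = 3.0      # SAC factor
D12_L = 24.0      # total cylinder volume, L

def ata(depth_m):
    # Ambient pressure in bar at a given depth.
    return 1.0 + depth_m / 10.0

# Problem-solving phase: 3 min at 45 m.
solve = 3 * ata(45) * SAC * FACTOR            # 3 * 5.5 * 60 = 990 L

# Ascent 45 m -> 21 m in 5 min, at the average of the two ambient pressures.
avg_p = (ata(45) + ata(21)) / 2               # (5.5 + 3.1) / 2 = 4.3 bar
ascent = 5 * avg_p * SAC * FACTOR             # 5 * 4.3 * 60 = 1290 L

total = solve + ascent
print(round(total), "L /", round(total / D12_L), "bar")  # 2280 L / 95 bar
```

This reproduces the 2280 ℓ / 95 bar figure, not the 1906 ℓ / 79 bar the planner reports, so the difference must lie in how Subsurface models the ascent phase.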

Can somebody explain the calculation to get only “1.906ℓ/79bar”

Thanks

Best regards
Stefan

Stefan Disselborg
Karwendelstr. 1
82061 Neuried

Mobil +49 1520 9878525

___
subsurface mailing list -- subsurface@subsurface-divelog.org
To unsubscribe send an email to subsurface-le...@subsurface-divelog.org


[jira] [Updated] (CASSANDRA-12937) Default setting (yaml) for SSTable compression

2024-05-23 Thread Stefan Miklosovic (Jira)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-12937?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Stefan Miklosovic updated CASSANDRA-12937:
--
Status: In Progress  (was: Patch Available)

> Default setting (yaml) for SSTable compression
> --
>
> Key: CASSANDRA-12937
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12937
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Local/Config
>Reporter: Michael Semb Wever
>Assignee: Stefan Miklosovic
>Priority: Low
>  Labels: AdventCalendar2021
> Fix For: 5.x
>
>  Time Spent: 8h 20m
>  Remaining Estimate: 0h
>
> In many situations the choice of compression for sstables is more relevant to 
> the disks attached than to the schema and data.
> This issue is to add to cassandra.yaml a default value for sstable 
> compression that new tables will inherit (instead of the defaults found in 
> {{CompressionParams.DEFAULT}}.
> Examples where this can be relevant are filesystems that do on-the-fly 
> compression (btrfs, zfs) or specific disk configurations or even specific C* 
> versions (see CASSANDRA-10995 ).
> +Additional information for newcomers+
> Some new fields need to be added to {{cassandra.yaml}} to allow specifying 
> the field required for defining the default compression parameters. In 
> {{DatabaseDescriptor}} a new {{CompressionParams}} field should be added for 
> the default compression. This field should be initialized in 
> {{DatabaseDescriptor.applySimpleConfig()}}. At the different places where 
> {{CompressionParams.DEFAULT}} was used the code should call 
> {{DatabaseDescriptor#getDefaultCompressionParams}} that should return some 
> copy of configured {{CompressionParams}}.
> Some unit test using {{OverrideConfigurationLoader}} should be used to test 
> that the table schema use the new default when a new table is created (see 
> CreateTest for some example).






[jira] [Commented] (CASSANDRA-12937) Default setting (yaml) for SSTable compression

2024-05-23 Thread Stefan Miklosovic (Jira)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-12937?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17849018#comment-17849018
 ] 

Stefan Miklosovic commented on CASSANDRA-12937:
---

I consolidated the links for the PRs as it was getting confusing.

Claude's PR: [https://github.com/apache/cassandra/pull/3168]
mine which is same as his but squashed: 
[https://github.com/apache/cassandra/pull/3330]

I rebased mine against current trunk where CASSANDRA-19592 was merged and I see 
that the problematic test (SSTableCompressionTest#configChangeIsolation) passes 
now which is indeed good news.

I will run a CI to see if something else is broken.

btw, [~jlewandowski] mentioned to me privately that it would be nice if we had 
the configuration like this:

{code}
sstable:
  selected_format: big
  default_compression: lz4  # check this
  format:
big:
  option1: abc
  option2: 123
bti:
  option3: xyz
  option4: 999
  compression:  # check this
lz4:
  enabled: true
  chunk_length: 16KiB
  max_compressed_length: 16KiB
snappy:
  enabled: true
  chunk_length: 16KiB
  max_compressed_length: 16KiB
deflate:
  enabled: false
  chunk_length: 16KiB
  max_compressed_length: 16KiB
{code}

instead of what we have now:

{code}
sstable_compression:
 - class_name: lz4
   parameters:
 - enabled: "true"
   chunk_length: 16KiB
   max_compressed_length: 16KiB
{code}

The reasoning behind that is that we are just enriching an existing 
configuration section; we are not inventing anything new. Plus it would be cool 
to have predefined compression options, so if we just use lz4 in CQL then all 
parameters will be automatically taken into consideration as well. If we provide 
some parameters in CQL, these will be merged into what is in cassandra.yaml.

[~claude] I can take a look into this.
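A minimal sketch of the merge semantics described above (illustrative only; the key names mirror the proposed yaml layout and are assumptions, not actual Cassandra configuration):

```python
# Illustrative sketch of the proposed semantics: per-class compression defaults
# from cassandra.yaml, overlaid by whatever the CQL statement supplies.
# Key names mirror the proposed yaml layout above and are assumptions.

yaml_defaults = {
    "lz4":    {"enabled": True, "chunk_length": "16KiB", "max_compressed_length": "16KiB"},
    "snappy": {"enabled": True, "chunk_length": "16KiB", "max_compressed_length": "16KiB"},
}

def effective_compression(class_name, cql_params):
    # Start from the yaml defaults for the chosen class, then let CQL win.
    merged = dict(yaml_defaults.get(class_name, {}))
    merged.update(cql_params)
    return merged

# CQL picks lz4 and overrides only the chunk length; the rest comes from yaml.
print(effective_compression("lz4", {"chunk_length": "64KiB"}))
# {'enabled': True, 'chunk_length': '64KiB', 'max_compressed_length': '16KiB'}
```

With this shape, naming just the class in CQL picks up all of its predefined parameters, and any explicitly given parameter overrides the yaml value.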

> Default setting (yaml) for SSTable compression
> --
>
> Key: CASSANDRA-12937
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12937
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Local/Config
>Reporter: Michael Semb Wever
>Assignee: Stefan Miklosovic
>Priority: Low
>  Labels: AdventCalendar2021
> Fix For: 5.x
>
>  Time Spent: 8h 20m
>  Remaining Estimate: 0h
>
> In many situations the choice of compression for sstables is more relevant to 
> the disks attached than to the schema and data.
> This issue is to add to cassandra.yaml a default value for sstable 
> compression that new tables will inherit (instead of the defaults found in 
> {{CompressionParams.DEFAULT}}).
> Examples where this can be relevant are filesystems that do on-the-fly 
> compression (btrfs, zfs) or specific disk configurations or even specific C* 
> versions (see CASSANDRA-10995 ).
> +Additional information for newcomers+
> Some new fields need to be added to {{cassandra.yaml}} to allow specifying 
> the field required for defining the default compression parameters. In 
> {{DatabaseDescriptor}} a new {{CompressionParams}} field should be added for 
> the default compression. This field should be initialized in 
> {{DatabaseDescriptor.applySimpleConfig()}}. At the different places where 
> {{CompressionParams.DEFAULT}} was used the code should call 
> {{DatabaseDescriptor#getDefaultCompressionParams}} that should return some 
> copy of configured {{CompressionParams}}.
> Some unit test using {{OverrideConfigurationLoader}} should be used to test 
> that the table schema use the new default when a new table is created (see 
> CreateTest for some example).
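The "return some copy of configured CompressionParams" advice matters because the params object is mutable; a minimal Python sketch of the idea (the class and method names only mirror the ticket text, they are not the real Cassandra API):

```python
import copy

class DatabaseDescriptor:
    # Stand-in for the value parsed from cassandra.yaml at startup.
    _default_compression = {"class_name": "lz4", "chunk_length": "16KiB"}

    @classmethod
    def get_default_compression_params(cls):
        # Hand out a copy so a caller tweaking its table's params
        # cannot corrupt the process-wide default.
        return copy.deepcopy(cls._default_compression)

params = DatabaseDescriptor.get_default_compression_params()
params["chunk_length"] = "4KiB"  # per-table tweak stays local
```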



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[PATCH 0/3] net: phy: dp83867: sync dp83867_phy_reset

2024-05-23 Thread Stefan Kerkmann
I ran into the same problem as Roland with this Phy driver and missed
the fix that landed in 095cd32961aab64cfe72941ce43d99876852e12b ("net:
phy: dp83867: reset PHY on probe"). I'm still submitting this series as
a refactoring of the fix - with the goal of being closer to the upstream
Linux implementation.

Signed-off-by: Stefan Kerkmann 
---
Stefan Kerkmann (3):
  net: phy: allow PHY drivers to implement their own software reset
  net: phy: document core PHY structures
  net: phy: dp83867: sync dp83867_phy_reset

 drivers/net/phy/dp83867.c | 28 ++--
 drivers/net/phy/phy.c | 16 ++--
 include/linux/phy.h   | 25 -
 3 files changed, 52 insertions(+), 17 deletions(-)
---
base-commit: 8ad1b68cf755873866945f77e2e635a2c47e4c5e
change-id: 20240523-feature-dp83867-soft-reset-d9dc632017c4

Best regards,
-- 
Stefan Kerkmann 




[PATCH 2/3] net: phy: document core PHY structures

2024-05-23 Thread Stefan Kerkmann
This is a port of linux commit 4069a572d423b73919ae40a500dfc4b10f8a6f32
("net: phy: Document core PHY structures"), that copies the Doxygen
comments for the PHY structure where applicable.

Signed-off-by: Stefan Kerkmann 
---
 include/linux/phy.h | 20 +++-
 1 file changed, 11 insertions(+), 9 deletions(-)

diff --git a/include/linux/phy.h b/include/linux/phy.h
index a6b96a5984..ef25dec033 100644
--- a/include/linux/phy.h
+++ b/include/linux/phy.h
@@ -280,36 +280,38 @@ struct phy_driver {
 */
int (*soft_reset)(struct phy_device *phydev);
 
-   /*
-* Called to initialize the PHY,
+   /**
+* @config_init: Called to initialize the PHY,
 * including after a reset
 */
int (*config_init)(struct phy_device *phydev);
 
-   /*
-* Called during discovery.  Used to set
+   /**
+* @probe: Called during discovery.  Used to set
 * up device-specific structures, if any
 */
int (*probe)(struct phy_device *phydev);
 
-   /*
-* Configures the advertisement and resets
+   /**
+* @config_aneg: Configures the advertisement and resets
 * autonegotiation if phydev->autoneg is on,
 * forces the speed to the current settings in phydev
 * if phydev->autoneg is off
 */
int (*config_aneg)(struct phy_device *phydev);
 
-   /* Determines the auto negotiation result */
+   /** @aneg_done: Determines the auto negotiation result */
int (*aneg_done)(struct phy_device *phydev);
 
-   /* Determines the negotiated speed and duplex */
+   /** @read_status: Determines the negotiated speed and duplex */
int (*read_status)(struct phy_device *phydev);
 
-   /* Clears up any memory if needed */
+   /** @remove: Clears up any memory if needed */
void (*remove)(struct phy_device *phydev);
 
+   /** @read_page: Return the current PHY register page number */
int (*read_page)(struct phy_device *phydev);
+   /** @write_page: Set the current PHY register page number */
int (*write_page)(struct phy_device *phydev, int page);
 
	struct driver	drv;

-- 
2.39.2




[PATCH 3/3] net: phy: dp83867: sync dp83867_phy_reset

2024-05-23 Thread Stefan Kerkmann
This is a port of the `dp83867_phy_reset` function at the state of linux
commit a129b41fe0a8b4da828c46b10f5244ca07a3fec3 ("Revert "net: phy:
dp83867: perform soft reset and retain established link""), which
performs a reset before starting the initial configuration.

It is a refactoring of commit 095cd32961aab64cfe72941ce43d99876852e12b
("net: phy: dp83867: reset PHY on probe"), which already ported parts of
the function but diverged from the upstream implementation.

Signed-off-by: Stefan Kerkmann 
---
 drivers/net/phy/dp83867.c | 28 ++--
 1 file changed, 22 insertions(+), 6 deletions(-)

diff --git a/drivers/net/phy/dp83867.c b/drivers/net/phy/dp83867.c
index aefc651489..3c9a8e6355 100644
--- a/drivers/net/phy/dp83867.c
+++ b/drivers/net/phy/dp83867.c
@@ -362,8 +362,6 @@ static int dp83867_of_init(struct phy_device *phydev)
return 0;
 }
 
-static int dp83867_phy_reset(struct phy_device *phydev); /* see below */
-
 static int dp83867_probe(struct phy_device *phydev)
 {
struct dp83867_private *dp83867;
@@ -372,8 +370,6 @@ static int dp83867_probe(struct phy_device *phydev)
 
phydev->priv = dp83867;
 
-   dp83867_phy_reset(phydev);
-
return dp83867_of_init(phydev);
 }
 
@@ -571,14 +567,33 @@ static int dp83867_phy_reset(struct phy_device *phydev)
 {
int err;
 
+   err = phy_write(phydev, DP83867_CTRL, DP83867_SW_RESET);
+   if (err < 0)
+   return err;
+
+   udelay(20);
+
+   err = phy_modify(phydev, MII_DP83867_PHYCTRL,
+DP83867_PHYCR_FORCE_LINK_GOOD, 0);
+   if (err < 0)
+   return err;
+
+   /* Configure the DSP Feedforward Equalizer Configuration register to
+* improve short cable (< 1 meter) performance. This will not affect
+* long cable performance.
+*/
+   err = phy_write_mmd(phydev, DP83867_DEVADDR, DP83867_DSP_FFE_CFG,
+   0x0e81);
+   if (err < 0)
+   return err;
+
err = phy_write(phydev, DP83867_CTRL, DP83867_SW_RESTART);
if (err < 0)
return err;
 
udelay(20);
 
-   return phy_modify(phydev, MII_DP83867_PHYCTRL,
-DP83867_PHYCR_FORCE_LINK_GOOD, 0);
+   return 0;
 }
 
 static struct phy_driver dp83867_driver[] = {
@@ -590,6 +605,7 @@ static struct phy_driver dp83867_driver[] = {
 
.probe  = dp83867_probe,
.config_init= dp83867_config_init,
+   .soft_reset = dp83867_phy_reset,
 
.read_status= dp83867_read_status,
},

-- 
2.39.2




[PATCH 1/3] net: phy: allow PHY drivers to implement their own software reset

2024-05-23 Thread Stefan Kerkmann
This is a port of linux commit 9df81dd7583d14862d0cfb673a941b261f3b2112
("net: phy: allow PHY drivers to implement their own software reset")
which implements the ability for PHY drivers to implement their own
non-standard software reset sequence.

A side effect is that fixups will now always be applied even if
.config_init is undefined. This shouldn't happen in practice though,
because phy_driver_register will populate the member with
genphy_config_init in that case, so the member should never be NULL.

Signed-off-by: Stefan Kerkmann 
---
 drivers/net/phy/phy.c | 16 ++--
 include/linux/phy.h   |  5 +
 2 files changed, 19 insertions(+), 2 deletions(-)

diff --git a/drivers/net/phy/phy.c b/drivers/net/phy/phy.c
index abd78b2c80..a83f183302 100644
--- a/drivers/net/phy/phy.c
+++ b/drivers/net/phy/phy.c
@@ -1108,14 +1108,26 @@ int phy_init_hw(struct phy_device *phydev)
struct phy_driver *phydrv = to_phy_driver(phydev->dev.driver);
int ret;
 
-   if (!phydrv || !phydrv->config_init)
+   if (!phydrv)
return 0;
 
+   if (phydrv->soft_reset) {
+   ret = phydrv->soft_reset(phydev);
+   if (ret < 0)
+   return ret;
+   }
+
ret = phy_scan_fixups(phydev);
if (ret < 0)
return ret;
 
-   return phydrv->config_init(phydev);
+   if (phydrv->config_init) {
+   ret = phydrv->config_init(phydev);
+   if (ret < 0)
+   return ret;
+   }
+
+   return 0;
 }
 
 static struct phy_driver genphy_driver = {
diff --git a/include/linux/phy.h b/include/linux/phy.h
index 7da4f94e0e..a6b96a5984 100644
--- a/include/linux/phy.h
+++ b/include/linux/phy.h
@@ -275,6 +275,11 @@ struct phy_driver {
const void *driver_data;
bool is_phy;
 
+   /**
+* @soft_reset: Called to issue a PHY software reset
+*/
+   int (*soft_reset)(struct phy_device *phydev);
+
/*
 * Called to initialize the PHY,
 * including after a reset

-- 
2.39.2




[jira] [Commented] (CASSANDRA-19592) Expand CREATE TABLE CQL on a coordinating node before submitting to CMS

2024-05-23 Thread Stefan Miklosovic (Jira)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-19592?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17848969#comment-17848969
 ] 

Stefan Miklosovic commented on CASSANDRA-19592:
---

Does anything else need to be done except merging? This would unblock 
CASSANDRA-12937, as you are surely aware.

> Expand CREATE TABLE CQL on a coordinating node before submitting to CMS
> ---
>
> Key: CASSANDRA-19592
> URL: https://issues.apache.org/jira/browse/CASSANDRA-19592
> Project: Cassandra
>  Issue Type: Bug
>  Components: Cluster/Schema
>Reporter: Alex Petrov
>Assignee: Alex Petrov
>Priority: Normal
> Attachments: ci_summary-1.html, ci_summary.html
>
>
> This is done to unblock CASSANDRA-12937 and allow preserving defaults with 
> which the table was created between node bounces and between nodes with 
> different configurations. For now, we are preserving 5.0 behaviour.






[SCM] Samba Shared Repository - branch master updated

2024-05-23 Thread Stefan Metzmacher
The branch, master has been updated
   via  5a54c9b28ab s3:utils: let smbstatus report anonymous 
signing/encryption explicitly
   via  f3ddfb828e6 s3:smbd: allow anonymous encryption after one 
authenticated session setup
   via  551756abd2c s3:utils: let smbstatus also report partial tcon 
signing/encryption
   via  8119fd6d6a4 s3:utils: let smbstatus also report AES-256 encryption 
types for tcons
   via  5089d855064 s3:utils: let connections_forall_read() report if the 
session was authenticated
   via  596a10d1079 s3:lib: let sessionid_traverse_read() report if the 
session was authenticated
   via  a9f84593f44 s3:utils: remove unused signing_flags in 
connections_forall()
   via  6c5781b5f15 s4:torture/smb2: add 
smb2.session.anon-{encryption{1,2,},signing{1,2}}
   via  6a89615d781 s4:libcli/smb2: add hack to test anonymous signing and 
encryption
   via  14d6e267212 smbXcli_base: add hacks to test anonymous signing and 
encryption
  from  d6581d213d5 ldb: move struct ldb_debug_ops to ldb_private.h

https://git.samba.org/?p=samba.git;a=shortlog;h=master


- Log -
commit 5a54c9b28abb1464c84cb4be15a49718d8ae6795
Author: Stefan Metzmacher 
Date:   Mon Jul 3 15:14:38 2023 +0200

s3:utils: let smbstatus report anonymous signing/encryption explicitly

We should mark sessions/tcons with anonymous encryption or signing
in a special way, as the value of it is void, all based on a
session key with 16 zero bytes.

BUG: https://bugzilla.samba.org/show_bug.cgi?id=15412

Signed-off-by: Stefan Metzmacher 
Reviewed-by: Andrew Bartlett 
Reviewed-by: Günther Deschner 

Autobuild-User(master): Stefan Metzmacher 
Autobuild-Date(master): Thu May 23 13:37:09 UTC 2024 on atb-devel-224

commit f3ddfb828e66738ca461c3284c423defb774547c
Author: Stefan Metzmacher 
Date:   Fri Jun 30 18:05:51 2023 +0200

s3:smbd: allow anonymous encryption after one authenticated session setup

I have captures where a client tries smb3 encryption on an anonymous 
session; we used to allow that before commit da7dcc443f45d07d9963df9daae458fbdd991a47
was released with samba-4.15.0rc1.

Testing against Windows Server 2022 revealed that anonymous signing is 
always
allowed (with the session key derived from 16 zero bytes) and
anonymous encryption is allowed after one authenticated session setup on
the tcp connection.

https://bugzilla.samba.org/show_bug.cgi?id=15412

Signed-off-by: Stefan Metzmacher 
Reviewed-by: Andrew Bartlett 
Reviewed-by: Günther Deschner 

commit 551756abd2c9e4922075bc3037db645355542363
Author: Stefan Metzmacher 
Date:   Mon Jul 3 15:12:38 2023 +0200

s3:utils: let smbstatus also report partial tcon signing/encryption

We already do that for sessions and also for the json output,
but it was missing in the non-json output for tcons.

BUG: https://bugzilla.samba.org/show_bug.cgi?id=15412

Signed-off-by: Stefan Metzmacher 
Reviewed-by: Andrew Bartlett 
Reviewed-by: Günther Deschner 

commit 8119fd6d6a49b869bd9e8ff653b500e194b070de
Author: Stefan Metzmacher 
Date:   Mon Jul 3 15:12:38 2023 +0200

s3:utils: let smbstatus also report AES-256 encryption types for tcons

We already do that for sessions.

BUG: https://bugzilla.samba.org/show_bug.cgi?id=15412

Signed-off-by: Stefan Metzmacher 
Reviewed-by: Andrew Bartlett 
Reviewed-by: Günther Deschner 

commit 5089d8550640f72b1e0373f8ac321378ccaa8bd5
Author: Stefan Metzmacher 
Date:   Mon Jul 3 15:10:08 2023 +0200

s3:utils: let connections_forall_read() report if the session was 
authenticated

BUG: https://bugzilla.samba.org/show_bug.cgi?id=15412

Signed-off-by: Stefan Metzmacher 
Reviewed-by: Andrew Bartlett 
Reviewed-by: Günther Deschner 

commit 596a10d1079f5c4a954108c81efc862c22a11f28
Author: Stefan Metzmacher 
Date:   Mon Jul 3 15:08:31 2023 +0200

s3:lib: let sessionid_traverse_read() report if the session was 
authenticated

BUG: https://bugzilla.samba.org/show_bug.cgi?id=15412

Signed-off-by: Stefan Metzmacher 
Reviewed-by: Andrew Bartlett 
Reviewed-by: Günther Deschner 

commit a9f84593f44f15a19c4cdde1e7ad53cd5e03b4d9
Author: Stefan Metzmacher 
Date:   Mon Jul 3 15:05:59 2023 +0200

s3:utils: remove unused signing_flags in connections_forall()

We never use the signing flags from the session, as the tcon
has its own signing flags.

https://bugzilla.samba.org/show_bug.cgi?id=15412

Signed-off-by: Stefan Metzmacher 
Reviewed-by: Andrew Bartlett 
Reviewed-by: Günther Deschner 

commit 6c5781b5f154857f1454f41133687fba8c4c9df9
Author: Stefan Metzmacher 
Date:   Wed May 15 10:02:00 2024 +0200

s4:torture/smb2: add smb2.session.anon-{encryption{1,2,},signing{1,2

[jira] [Updated] (CASSANDRA-19632) wrap tracing logs in isTraceEnabled across the codebase

2024-05-23 Thread Stefan Miklosovic (Jira)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-19632?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Stefan Miklosovic updated CASSANDRA-19632:
--
Status: Needs Committer  (was: Patch Available)

> wrap tracing logs in isTraceEnabled across the codebase
> ---
>
> Key: CASSANDRA-19632
> URL: https://issues.apache.org/jira/browse/CASSANDRA-19632
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Legacy/Core
>    Reporter: Stefan Miklosovic
>Assignee: Stefan Miklosovic
>Priority: Normal
> Fix For: 5.x
>
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> Our usage of logger.isTraceEnabled across the codebase is inconsistent. This 
> would also fix similar issues, e.g. CASSANDRA-19429, as [~rustyrazorblade] 
> suggested.
> We should fix this at least in trunk and 5.0 (not critical though) and 
> probably come up with a checkstyle rule to prevent not calling isTraceEnabled 
> while logging with TRACE level. 






[jira] [Comment Edited] (CASSANDRA-19632) wrap tracing logs in isTraceEnabled across the codebase

2024-05-23 Thread Stefan Miklosovic (Jira)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-19632?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17848940#comment-17848940
 ] 

Stefan Miklosovic edited comment on CASSANDRA-19632 at 5/23/24 12:59 PM:
-

I went through all logger.trace in the production code and I modified only 59 
files instead of 127 in the first PR.

[https://github.com/apache/cassandra/pull/3329]

The perception I got by going through all of that is that people were already 
following the rule of "if it has more than 2 arguments then wrap it in 
logger.isTraceEnabled" so I went by that logic as well everywhere where it was 
not done like that.

There were also inconsistent usages of logger.trace() with 0 / 1 / 2 arguments. 
Sometimes it was wrapped in isTraceEnabled, sometimes it was not, without any 
apparent reason. I think that for simple cases it is not necessary to wrap it; 
the majority of cases in the code base are like that (not wrapped).

I have also fixed the cases where string concatenation was used and similar.

Not all people seem to understand that when it is logged like this:
{code:java}
logger.trace("abc {}", object);
{code}
then the actual object.toString() is evaluated only _after_ we are absolutely sure 
we are indeed going to log. I do not think that this wrapping is necessary, even if 
"object" is somewhat "heavyweight" when it comes to toString, because it is not 
called prematurely anyway.
{code:java}
if (logger.isTraceEnabled())
logger.trace("abc {}", object);
{code}
as per [https://www.slf4j.org/faq.html#string_contents]
{quote}The logging system will invoke complexObject.toString() method only 
after it has ascertained that the log statement was enabled. Otherwise, the 
cost of complexObject.toString() conversion will be advantageously avoided.
{quote}
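Python's standard logging module defers argument formatting the same way, which makes the point easy to demonstrate end to end (an analogy to slf4j, not slf4j itself):

```python
import logging

class Expensive:
    """Counts how often its string conversion actually runs."""
    calls = 0
    def __str__(self):
        Expensive.calls += 1
        return "expensive"

logging.basicConfig()
log = logging.getLogger("demo")
log.setLevel(logging.WARNING)

obj = Expensive()
log.debug("abc %s", obj)    # below the threshold: __str__ is never invoked
log.warning("abc %s", obj)  # emitted: __str__ runs, inside the handler
```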


was (Author: smiklosovic):
I went through all logger.trace in the production code and I modified only 59 
files instead of 127 in the first one.

https://github.com/apache/cassandra/pull/3329

The perception I got by going through all of that is that people were already 
following the rule of "if it has more than 2 arguments then wrap it in 
logger.isTraceEnabled" so I went by that logic as well everywhere where it was 
not done like that.

There were also inconsistent usages of logger.trace() with 0 / 1 / 2 arguments. 
Sometimes it was wrapped in isTraceEnabled, sometimes it was not, without any 
apparent reason. I think that for simple cases it is not necessary to wrap it, 
we have majority of cases like that in the code base (not wrapped).

I have also fixed the cases where string concatenation was used and similar.

Not all people seem to understand that when it is logged like this:

{code}
logger.trace("abc {}", object);
{code}

then the actual object.toString() is evaluated only _after_ we are absolutely sure 
we are indeed going to log. I do not think that this wrapping is necessary, even if 
"object" is somewhat "heavyweight" when it comes to toString, because it is not 
called prematurely anyway. 

{code}
if (logger.isTraceEnabled())
logger.trace("abc {}", object);
{code}

as per https://www.slf4j.org/faq.html#string_contents

{quote}
The logging system will invoke complexObject.toString() method only after it 
has ascertained that the log statement was enabled. Otherwise, the cost of 
complexObject.toString() conversion will be advantageously avoided.
{quote}

> wrap tracing logs in isTraceEnabled across the codebase
> ---
>
> Key: CASSANDRA-19632
> URL: https://issues.apache.org/jira/browse/CASSANDRA-19632
>     Project: Cassandra
>  Issue Type: Improvement
>  Components: Legacy/Core
>Reporter: Stefan Miklosovic
>Assignee: Stefan Miklosovic
>Priority: Normal
> Fix For: 5.x
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> Our usage of logger.isTraceEnabled across the codebase is inconsistent. This 
> would also fix issues similar in e.g. CASSANDRA-19429 as [~rustyrazorblade] 
> suggested.
> We should fix this at least in trunk and 5.0 (not critical though) and 
> probably come up with a checkstyle rule to prevent not calling isTraceEnabled 
> while logging with TRACE level. 






[jira] [Comment Edited] (CASSANDRA-19632) wrap tracing logs in isTraceEnabled across the codebase

2024-05-23 Thread Stefan Miklosovic (Jira)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-19632?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17848940#comment-17848940
 ] 

Stefan Miklosovic edited comment on CASSANDRA-19632 at 5/23/24 12:55 PM:
-

I went through all logger.trace calls in the production code and I modified 
only 59 files instead of the 127 in the first attempt.

https://github.com/apache/cassandra/pull/3329

The impression I got from going through all of that is that people were already 
following the rule of "if it has more than 2 arguments, then wrap it in 
logger.isTraceEnabled", so I applied that logic everywhere it was not yet done.

There were also inconsistent usages of logger.trace() with 0 / 1 / 2 arguments. 
Sometimes it was wrapped in isTraceEnabled, sometimes it was not, without any 
apparent reason. I think that for simple cases it is not necessary to wrap it; 
the majority of cases in the code base are already like that (not wrapped).

I have also fixed the cases where string concatenation was used and similar.

Not everyone seems to understand that with a logger call like this:

{code}
logger.trace("abc {}", object);
{code}

then the actual object.toString() is evaluated only _after_ we are absolutely 
sure we will indeed log. So I do not think the wrapping is necessary even when 
"object" is "heavyweight" in its toString, because toString is not called 
prematurely anyway.

{code}
if (logger.isTraceEnabled())
logger.trace("abc {}", object);
{code}


was (Author: smiklosovic):
I went through all logger.trace calls in the production code and I modified 
only 59 files instead of the 127 in the first attempt.

https://github.com/apache/cassandra/pull/3329

The impression I got from going through all of that is that people were already 
following the rule of "if it has more than 2 arguments, then wrap it in 
logger.isTraceEnabled", so I applied that logic everywhere it was not yet done.

There were also inconsistent usages of logger.trace() with 0 / 1 / 2 arguments. 
Sometimes it was wrapped in isTraceEnabled, sometimes it was not, without any 
apparent reason. I think that for simple cases it is not necessary to wrap it; 
the majority of cases in the code base are already like that (not wrapped).

I have also fixed the cases where string concatenation was used and similar.

Not everyone seems to understand that with a logger call like this:

{code}
logger.trace("abc {}", object);
{code}

then the actual object.toString() is evaluated only _after_ we are absolutely 
sure we will indeed log. So I do not think the wrapping is necessary even when 
"object" is "heavyweight" in its toString, because toString is not called 
prematurely anyway.

{code}
if (logger.isTraceEnabled())
logger.trace("abc {}", object);
{code}

> wrap tracing logs in isTraceEnabled across the codebase
> ---
>
> Key: CASSANDRA-19632
> URL: https://issues.apache.org/jira/browse/CASSANDRA-19632
>     Project: Cassandra
>  Issue Type: Improvement
>  Components: Legacy/Core
>Reporter: Stefan Miklosovic
>Assignee: Stefan Miklosovic
>Priority: Normal
> Fix For: 5.x
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> Our usage of logger.isTraceEnabled across the codebase is inconsistent. This 
> would also fix issues similar to e.g. CASSANDRA-19429, as [~rustyrazorblade] 
> suggested.
> We should fix this at least in trunk and 5.0 (not critical though) and 
> probably come up with a checkstyle rule to prevent not calling isTraceEnabled 
> while logging with TRACE level. 



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Commented] (CASSANDRA-19632) wrap tracing logs in isTraceEnabled across the codebase

2024-05-23 Thread Stefan Miklosovic (Jira)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-19632?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17848940#comment-17848940
 ] 

Stefan Miklosovic commented on CASSANDRA-19632:
---

I went through all logger.trace calls in the production code and I modified 
only 59 files instead of the 127 in the first attempt.

https://github.com/apache/cassandra/pull/3329

The impression I got from going through all of that is that people were already 
following the rule of "if it has more than 2 arguments, then wrap it in 
logger.isTraceEnabled", so I applied that logic everywhere it was not yet done.

There were also inconsistent usages of logger.trace() with 0 / 1 / 2 arguments. 
Sometimes it was wrapped in isTraceEnabled, sometimes it was not, without any 
apparent reason. I think that for simple cases it is not necessary to wrap it; 
the majority of cases in the code base are already like that (not wrapped).

I have also fixed the cases where string concatenation was used and similar.

> wrap tracing logs in isTraceEnabled across the codebase
> ---
>
> Key: CASSANDRA-19632
> URL: https://issues.apache.org/jira/browse/CASSANDRA-19632
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Legacy/Core
>Reporter: Stefan Miklosovic
>Assignee: Stefan Miklosovic
>Priority: Normal
> Fix For: 5.x
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> Our usage of logger.isTraceEnabled across the codebase is inconsistent. This 
> would also fix issues similar to e.g. CASSANDRA-19429, as [~rustyrazorblade] 
> suggested.
> We should fix this at least in trunk and 5.0 (not critical though) and 
> probably come up with a checkstyle rule to prevent not calling isTraceEnabled 
> while logging with TRACE level. 



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



Re: [Bacula-users] HP 1/8 G2 Autoloader

2024-05-23 Thread Stefan G. Weichinger

On 20.05.24 at 11:44, Stefan G. Weichinger wrote:


Is there an easy way to delete all volumes belonging to a specific Job-ID?

For now I looked it up in Bacularis: check which jobs are on which 
(disk-based) volume, rm the volume in Bacularis, then rm the 
corresponding file from disk (in the shell).


OK so far, I am just curious if there is a clever way for that in Bacula.


A quicker way: look up the job logs in Bacularis; the raw log lists the 
volumes that were used. At least that helps.




___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


CVS: cvs.openbsd.org: src

2024-05-23 Thread Stefan Sperling
CVSROOT:/cvs
Module name:src
Changes by: s...@cvs.openbsd.org2024/05/23 05:19:13

Modified files:
sys/net80211   : ieee80211_input.c 

Log message:
increment CCMP decryption error counter if hw decrypt fails to get PN

This case will only occur if the IV has been stripped by hardware and
the driver has not cleared the protected bit in the frame header as it
should. Incrementing this counter will make the problem more obvious
when looking at netstat -W output.

No functional change for people who do not work on wifi drivers.



[jira] [Comment Edited] (CASSANDRA-19632) wrap tracing logs in isTraceEnabled across the codebase

2024-05-23 Thread Stefan Miklosovic (Jira)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-19632?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17848892#comment-17848892
 ] 

Stefan Miklosovic edited comment on CASSANDRA-19632 at 5/23/24 10:48 AM:
-

I did some research on this and it is quite an interesting read. 

https://www.slf4j.org/faq.html#logging_performance

If we do this

{code}
logger.trace("abc" + obj + "def");
{code}

there is a performance penalty: the concatenated message is constructed 
(including evaluating obj.toString()) whether or not TRACE is enabled.

When we do this:
{code}
logger.trace("abc {}", def);
{code}

this evaluates the {} placeholder only when we actually go to log on TRACE, so 
the level is checked just once and the argument is formatted at most once (if 
we indeed go to log).

Doing this

{code}
if (logger.isTraceEnabled()) logger.trace("abc" + obj + "def");
{code}

is the least performance-friendly: it checks whether we go to log on TRACE 
once in the guard and a second time inside logger.trace itself, and it still 
has to construct the concatenated message before the call is made.

Doing this:

{code}
if (logger.isTraceEnabled()) logger.trace("abc {}", def);
{code}

will check if we go to trace at most twice and it will evaluate the placeholder 
at most once.

I think that wrapping logger.trace in logger.isTraceEnabled() is necessary 
only if:

1) we do not use placeholders in the logging message, only string 
concatenation, which constructs the logging message regardless of whether we 
log, because the level has not been checked yet, or
2) the number of placeholder arguments in logger.trace is bigger than 2.

For example, for 2)

{code}
logger.trace("abc {} def {}", obj1, obj2);
{code}

This will be OK. However this:

{code}
logger.trace("abc {} def {} ghi {}", obj1, obj2, obj3);
{code}

this carries the hidden cost of constructing an Object[] holding these three 
parameters: the logging API only has dedicated overloads for up to two 
arguments, so with more placeholders the varargs overload allocates an array.
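
The overload shapes behind that hidden cost can be sketched as follows. The 
three `trace` methods below are hypothetical stand-ins mirroring the SLF4J 
Logger overload pattern, not the real API:

```java
// Sketch of why >2 placeholders cost more: the logger interface has
// dedicated one- and two-argument overloads, but beyond that only a varargs
// overload, so the call site allocates an Object[] even when TRACE is off.
public class VarargsCostDemo {
    static int arrayAllocations = 0;

    static void trace(String fmt, Object a1) { }             // no array needed
    static void trace(String fmt, Object a1, Object a2) { }  // no array needed
    static void trace(String fmt, Object... args) {
        // By the time the level could be checked, args[] already exists.
        arrayAllocations++;
    }

    public static void main(String[] args) {
        trace("abc {} def {}", "x", "y");              // picks the 2-arg overload
        trace("abc {} def {} ghi {}", "x", "y", "z");  // forced onto varargs
        System.out.println("arrayAllocations=" + arrayAllocations);
    }
}
```

Java overload resolution prefers the fixed-arity methods, so only the 
three-argument call goes through the varargs path and pays for the array.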

The cost of calling logger.isTraceEnabled is negligible (around 1% of the cost 
of actually logging the message), but I think the guard is unnecessary as long 
as we make sure we are not using string concatenation to build the message and 
not using more than 2 placeholders.

Also, it is important to check that it does not cost a lot to evaluate the 
argument itself, as we hit that case in CASSANDRA-19429, for example

{code}
logger.trace("abc {}", thisIs.Hard.ToResolve())
{code}

Even if we use a placeholder, the argument expression itself is still 
evaluated eagerly, so if thisIs.Hard.ToResolve() takes a lot of resources, 
that is not good; in that case it is preferable to wrap the call in 
isTraceEnabled(). There is no silver bullet; I think we need to just go case 
by case by the rules I described and change it where it does not comply.

According to the docs, the alternative is also to use lambdas:

{code}
logger.trace("abc {}", () -> thisIs.Hard.ToResolve())
{code}

this will check if we go to log on trace level just once and it will evaluate 
the placeholder if we indeed go to do that. I think this is the best solution.
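
The lambda idea above can be sketched generically. This is a hypothetical 
supplier-based API, not tied to any particular SLF4J version; 
expensiveToResolve() stands in for thisIs.Hard.ToResolve():

```java
import java.util.function.Supplier;

// Generic sketch of supplier-based lazy logging: wrapping the expensive
// argument in a Supplier defers its evaluation until the level is known
// to be enabled. The trace() method here is a hypothetical stand-in.
public class SupplierLoggingDemo {
    static int expensiveCalls = 0;

    static String expensiveToResolve() {   // stand-in for an expensive argument
        expensiveCalls++;
        return "resolved";
    }

    static void trace(boolean traceEnabled, String fmt, Supplier<?> arg) {
        if (traceEnabled)                  // only now is the supplier invoked
            System.out.println(fmt.replace("{}", String.valueOf(arg.get())));
    }

    public static void main(String[] args) {
        trace(false, "abc {}", () -> expensiveToResolve());  // never evaluated
        int whenDisabled = expensiveCalls;
        trace(true, "abc {}", () -> expensiveToResolve());   // evaluated once
        System.out.println("disabled=" + whenDisabled
                           + " enabled=" + expensiveCalls);
    }
}
```

The disabled call never touches the expensive computation, which is exactly 
the property the guard otherwise has to provide.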

I will try to go over the codebase to see where we are at currently. 

EDIT: lambdas were added in SLF4J 2.0.0-alpha1


was (Author: smiklosovic):
I did some research on this and it is quite an interesting read. 

https://www.slf4j.org/faq.html#logging_performance

If we do this

{code}
logger.trace("abc" + obj + "def");
{code}

there is a performance penalty: the concatenated message is constructed 
(including evaluating obj.toString()) whether or not TRACE is enabled.

When we do this:
{code}
logger.trace("abc {}", def);
{code}

this evaluates the {} placeholder only when we actually go to log on TRACE, so 
the level is checked just once and the argument is formatted at most once (if 
we indeed go to log).

Doing this

{code}
if (logger.isTraceEnabled()) logger.trace("abc" + obj + "def");
{code}

is the least performance-friendly: it checks whether we go to log on TRACE 
once in the guard and a second time inside logger.trace itself, and it still 
has to construct the concatenated message before the call is made.

Doing this:

{code}
if (logger.isTraceEnabled()) logger.trace("abc {}", def);
{code}

will check if we go to trace at most twice and it will evaluate the placeholder 
at most once.

I think that wrapping logger.trace in logger.isTraceEnabled() is necessary 
only if:

1) we do not use placeholders in the logging message, only string 
concatenation, which constructs the logging message regardless of whether we 
log, because the level has not been checked yet, or
2) the number of placeholder arguments in logger.trace is bigger than 2.

For example, for 2)

{code}
logger.trace("abc {} def {}", obj1, obj2);
{code}

This will be OK. However this:

{code}
logger.trace("abc {} def {} ghi {}", obj1, obj2, obj3);
{code}

this will contain t

[jira] [Commented] (CASSANDRA-19632) wrap tracing logs in isTraceEnabled across the codebase

2024-05-23 Thread Stefan Miklosovic (Jira)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-19632?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17848892#comment-17848892
 ] 

Stefan Miklosovic commented on CASSANDRA-19632:
---

I did some research on this and it is quite an interesting read. 

https://www.slf4j.org/faq.html#logging_performance

If we do this

{code}
logger.trace("abc" + obj + "def");
{code}

there is a performance penalty: the concatenated message is constructed 
(including evaluating obj.toString()) whether or not TRACE is enabled.

When we do this:
{code}
logger.trace("abc {}", def);
{code}

this evaluates the {} placeholder only when we actually go to log on TRACE, so 
the level is checked just once and the argument is formatted at most once (if 
we indeed go to log).

Doing this

{code}
if (logger.isTraceEnabled()) logger.trace("abc" + obj + "def");
{code}

is the least performance-friendly: it checks whether we go to log on TRACE 
once in the guard and a second time inside logger.trace itself, and it still 
has to construct the concatenated message before the call is made.

Doing this:

{code}
if (logger.isTraceEnabled()) logger.trace("abc {}", def);
{code}

will check if we go to trace at most twice and it will evaluate the placeholder 
at most once.

I think that wrapping logger.trace in logger.isTraceEnabled() is necessary 
only if:

1) we do not use placeholders in the logging message, only string 
concatenation, which constructs the logging message regardless of whether we 
log, because the level has not been checked yet, or
2) the number of placeholder arguments in logger.trace is bigger than 2.

For example, for 2)

{code}
logger.trace("abc {} def {}", obj1, obj2);
{code}

This will be OK. However this:

{code}
logger.trace("abc {} def {} ghi {}", obj1, obj2, obj3);
{code}

this carries the hidden cost of constructing an Object[] holding these three 
parameters: the logging API only has dedicated overloads for up to two 
arguments, so with more placeholders the varargs overload allocates an array.

The cost of calling logger.isTraceEnabled is negligible (around 1% of the cost 
of actually logging the message), but I think the guard is unnecessary as long 
as we make sure we are not using string concatenation to build the message and 
not using more than 2 placeholders.

Also, it is important to check that it does not cost a lot to evaluate the 
argument itself, as we hit that case in CASSANDRA-19429, for example

{code}
logger.trace("abc {}", thisIs.Hard.ToResolve())
{code}

Even if we use a placeholder, the argument expression itself is still 
evaluated eagerly, so if thisIs.Hard.ToResolve() takes a lot of resources, 
that is not good; in that case it is preferable to wrap the call in 
isTraceEnabled(). There is no silver bullet; I think we need to just go case 
by case by the rules I described and change it where it does not comply.

According to the docs, the alternative is also to use lambdas:

{code}
logger.trace("abc {}", () -> thisIs.Hard.ToResolve())
{code}

this will check if we go to log on trace level just once and it will evaluate 
the placeholder if we indeed go to do that. I think this is the best solution.

I will try to go over the codebase to see where we are at currently. 

> wrap tracing logs in isTraceEnabled across the codebase
> ---
>
> Key: CASSANDRA-19632
> URL: https://issues.apache.org/jira/browse/CASSANDRA-19632
>     Project: Cassandra
>  Issue Type: Improvement
>  Components: Legacy/Core
>Reporter: Stefan Miklosovic
>Assignee: Stefan Miklosovic
>Priority: Normal
> Fix For: 5.x
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Our usage of logger.isTraceEnabled across the codebase is inconsistent. This 
> would also fix issues similar to e.g. CASSANDRA-19429, as [~rustyrazorblade] 
> suggested.
> We should fix this at least in trunk and 5.0 (not critical though) and 
> probably come up with a checkstyle rule to prevent not calling isTraceEnabled 
> while logging with TRACE level. 



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



RE: parent pom 60 & requirement to use java17 for builds

2024-05-23 Thread Stefan Seifert
konrad provided this wiki page with hints for update to parent 60+: 
https://cwiki.apache.org/confluence/x/SI75E

you have to remove java 11 from .sling-module.json during the update, or add 
one like this if it does not exist:
https://github.com/apache/sling-org-apache-sling-testing-sling-mock/blob/master/.sling-module.json

stefan

> -Original Message-
> From: Jörg Hoh 
> Sent: Thursday, May 23, 2024 11:04 AM
> To: Sling Developers List 
> Subject: parent pom 60 & requirement to use java17 for builds
> 
> Hi,
> 
> While updating the sling engine to use the latest parent, I found that it
> enforces Java 17 or newer to build:
> 
> https://ci-
> builds.apache.org/blue/organizations/jenkins/Sling%2Fmodules%2Fsling-org-
> apache-sling-engine/detail/PR-46/1/pipeline
> 
> Rule 0: org.apache.maven.enforcer.rules.version.RequireJavaVersion
> failed with message:
> 
>  <https://ci-
> builds.apache.org/blue/organizations/jenkins/Sling%2Fmodules%2Fsling-org-
> apache-sling-engine/detail/PR-46/1/pipeline#step-49-log-153>Detected
> JDK version 11.0.16-1
> (JAVA_HOME=/usr/local/asfpackages/java/adoptium-jdk-11.0.16.1+1) is
> not in the allowed range [17,).
> 
> 
> I checked and found that the resulting bytecode is version 55.0 (Java
> SE 11), so it should be okay.
> 
> But: How do we adjust the CI builds in a way, that it des not try to
> build them with Java 11 anymore?
> 
> Jörg
> 
> --
> https://cqdump.joerghoh.de


[gcc r15-787] s390: Implement TARGET_NOCE_CONVERSION_PROFITABLE_P [PR109549]

2024-05-23 Thread Stefan Schulze Frielinghaus via Gcc-cvs
https://gcc.gnu.org/g:57e04879389f9c0d5d53f316b468ce1bddbab350

commit r15-787-g57e04879389f9c0d5d53f316b468ce1bddbab350
Author: Stefan Schulze Frielinghaus 
Date:   Thu May 23 08:43:35 2024 +0200

s390: Implement TARGET_NOCE_CONVERSION_PROFITABLE_P [PR109549]

Consider a NOCE conversion as profitable if there is at least one
conditional move.

gcc/ChangeLog:

PR target/109549
* config/s390/s390.cc (TARGET_NOCE_CONVERSION_PROFITABLE_P):
Define.
(s390_noce_conversion_profitable_p): Implement.

gcc/testsuite/ChangeLog:

* gcc.target/s390/ccor.c: The order of loads is reversed now; as a
consequence the condition has to be reversed.

Diff:
---
 gcc/config/s390/s390.cc  | 32 
 gcc/testsuite/gcc.target/s390/ccor.c |  4 ++--
 2 files changed, 34 insertions(+), 2 deletions(-)

diff --git a/gcc/config/s390/s390.cc b/gcc/config/s390/s390.cc
index 5968808fcb6..fa517bd3e77 100644
--- a/gcc/config/s390/s390.cc
+++ b/gcc/config/s390/s390.cc
@@ -78,6 +78,7 @@ along with GCC; see the file COPYING3.  If not see
 #include "tree-pass.h"
 #include "context.h"
 #include "builtins.h"
+#include "ifcvt.h"
 #include "rtl-iter.h"
 #include "intl.h"
 #include "tm-constrs.h"
@@ -18037,6 +18038,34 @@ s390_vectorize_vec_perm_const (machine_mode vmode, 
machine_mode op_mode,
   return vectorize_vec_perm_const_1 (d);
 }
 
+/* Consider a NOCE conversion as profitable if there is at least one
+   conditional move.  */
+
+static bool
+s390_noce_conversion_profitable_p (rtx_insn *seq, struct noce_if_info *if_info)
+{
+  if (if_info->speed_p)
+{
+  for (rtx_insn *insn = seq; insn; insn = NEXT_INSN (insn))
+   {
+ rtx set = single_set (insn);
+ if (set == NULL)
+   continue;
+ if (GET_CODE (SET_SRC (set)) != IF_THEN_ELSE)
+   continue;
+ rtx src = SET_SRC (set);
+ machine_mode mode = GET_MODE (src);
+ if (GET_MODE_CLASS (mode) != MODE_INT
+ && GET_MODE_CLASS (mode) != MODE_FLOAT)
+   continue;
+ if (GET_MODE_SIZE (mode) > UNITS_PER_WORD)
+   continue;
+ return true;
+   }
+}
+  return default_noce_conversion_profitable_p (seq, if_info);
+}
+
 /* Initialize GCC target structure.  */
 
 #undef  TARGET_ASM_ALIGNED_HI_OP
@@ -18350,6 +18379,9 @@ s390_vectorize_vec_perm_const (machine_mode vmode, 
machine_mode op_mode,
 #undef TARGET_VECTORIZE_VEC_PERM_CONST
 #define TARGET_VECTORIZE_VEC_PERM_CONST s390_vectorize_vec_perm_const
 
+#undef TARGET_NOCE_CONVERSION_PROFITABLE_P
+#define TARGET_NOCE_CONVERSION_PROFITABLE_P s390_noce_conversion_profitable_p
+
 struct gcc_target targetm = TARGET_INITIALIZER;
 
 #include "gt-s390.h"
diff --git a/gcc/testsuite/gcc.target/s390/ccor.c 
b/gcc/testsuite/gcc.target/s390/ccor.c
index 31f30f60314..36a3c3a999a 100644
--- a/gcc/testsuite/gcc.target/s390/ccor.c
+++ b/gcc/testsuite/gcc.target/s390/ccor.c
@@ -42,7 +42,7 @@ GENFUN1(2)
 
 GENFUN1(3)
 
-/* { dg-final { scan-assembler {locrno} } } */
+/* { dg-final { scan-assembler {locro} } } */
 
 GENFUN2(0,1)
 
@@ -58,7 +58,7 @@ GENFUN2(0,3)
 
 GENFUN2(1,2)
 
-/* { dg-final { scan-assembler {locrnlh} } } */
+/* { dg-final { scan-assembler {locrlh} } } */
 
 GENFUN2(1,3)


Re: Debian bookworm / grub2 / LVM / RAID / dm-integrity fails to boot

2024-05-22 Thread Stefan Monnier
> I found this [1], quoting: "I'd also like to share an issue I've
> discovered: if /boot's partition is a LV, then there must not be a
> raidintegrity LV anywhere before that LV inside the same VG. Otherwise,
> update-grub will show an error (disk `lvmid/.../...' not found) and GRUB
> cannot boot. So it's best if you put /boot into its own VG. (PS: Errors
> like unknown node '..._rimage_0 can be ignored.)"

Hmm... I've been using a "plain old partition" for /boot (with
everything else in LVM) for "ever", originally because the boot loader
was not able to read LVM, and later out of habit.  I was thinking of
finally moving /boot into an LV to make things simpler, but I see that
it'd still be playing with fire (AFAICT booting off of LVM was still not
supported by U-Boot either last time I checked).  


Stefan



Bug#1071596: apache2: envvars evaluates string in conditional instead of testing for empty string

2024-05-22 Thread Stefan Fritsch

Hi Mark,

On 21.05.24 at 22:30, Mark Hedges wrote:

Package: apache2
Version: 2.4.59-1~deb12u1
Severity: normal

Dear Maintainer,

`envvars` evaluates string in conditional instead of testing for empty string.

`apachectl` calls `envvars` which shows a syntax error despite working:

  root@nodeo:/etc/letsencrypt# apachectl configtest
  /usr/sbin/apachectl: 6: [: /etc/apache2: unexpected operator
  Syntax OK

If I change this line in `envvars`:

  if [ "${APACHE_CONFDIR}" == "" ]; then
 export APACHE_CONFDIR=/etc/apache2
  fi


This snippet is not in the original file from the apache2 package. 
Compare to 
https://salsa.debian.org/apache-team/apache2/-/blob/master/debian/config-dir/envvars?ref_type=heads


Either you or some package or script has changed the file. If you have 
etckeeper you could dig in the logs.


Cheers,
Stefan



to this:

  if [ -z ${APACHE_CONFDIR} ]; then
 export APACHE_CONFDIR=/etc/apache2
  fi

... then it works.

It's trying to evaluate `/etc/apache2` as a command?  Weird.

PATH seems totally normal.

Mark

-- Package-specific info:

-- System Information:
Debian Release: 12.5
   APT prefers stable-updates
   APT policy: (500, 'stable-updates'), (500, 'stable-security'), (500, 
'stable')
Architecture: amd64 (x86_64)

Kernel: Linux 6.1.0-21-amd64 (SMP w/1 CPU thread; PREEMPT)
Locale: LANG=en_US.UTF-8, LC_CTYPE=en_US.UTF-8 (charmap=UTF-8), LANGUAGE not set
Shell: /bin/sh linked to /usr/bin/dash
Init: systemd (via /run/systemd/system)
LSM: AppArmor: enabled

Versions of packages apache2 depends on:
ii  apache2-bin2.4.59-1~deb12u1
ii  apache2-data   2.4.59-1~deb12u1
ii  apache2-utils  2.4.59-1~deb12u1
ii  init-system-helpers1.65.2
ii  lsb-base   11.6
ii  media-types10.0.0
ii  perl   5.36.0-7+deb12u1
ii  procps 2:4.0.2-3
ii  sysvinit-utils [lsb-base]  3.06-4

Versions of packages apache2 recommends:
ii  ssl-cert  1.1.2

Versions of packages apache2 suggests:
pn  apache2-doc  
pn  apache2-suexec-pristine | apache2-suexec-custom  
ii  chromium [www-browser]   125.0.6422.60-1~deb12u1

Versions of packages apache2-bin depends on:
ii  libapr1  1.7.2-3
ii  libaprutil1  1.6.3-1
ii  libaprutil1-dbd-sqlite3  1.6.3-1
ii  libaprutil1-ldap 1.6.3-1
ii  libbrotli1   1.0.9-2+b6
ii  libc62.36-9+deb12u7
ii  libcrypt11:4.4.33-2
ii  libcurl4 7.88.1-10+deb12u5
ii  libjansson4  2.14-2
ii  libldap-2.5-02.5.13+dfsg-5
ii  liblua5.3-0  5.3.6-2
ii  libnghttp2-141.52.0-1+deb12u1
ii  libpcre2-8-0 10.42-1
ii  libssl3  3.0.11-1~deb12u2
ii  libxml2  2.9.14+dfsg-1.3~deb12u1
ii  perl 5.36.0-7+deb12u1
ii  zlib1g   1:1.2.13.dfsg-1

Versions of packages apache2-bin suggests:
pn  apache2-doc  
pn  apache2-suexec-pristine | apache2-suexec-custom  
ii  chromium [www-browser]   125.0.6422.60-1~deb12u1

Versions of packages apache2 is related to:
ii  apache2  2.4.59-1~deb12u1
ii  apache2-bin  2.4.59-1~deb12u1

-- Configuration Files:
/etc/apache2/apache2.conf changed:
DefaultRuntimeDir ${APACHE_RUN_DIR}
PidFile ${APACHE_PID_FILE}
Timeout 300
KeepAlive On
MaxKeepAliveRequests 100
KeepAliveTimeout 5
User ${APACHE_RUN_USER}
Group ${APACHE_RUN_GROUP}
HostnameLookups Off
ErrorLog ${APACHE_LOG_DIR}/error.log
LogLevel warn
IncludeOptional mods-enabled/*.load
IncludeOptional mods-enabled/*.conf
Include ports.conf

Options FollowSymLinks
AllowOverride None
Require all denied


AllowOverride None
Require all granted


Options Indexes FollowSymLinks
AllowOverride None
Require all granted

AccessFileName .htaccess

Require all denied

LogFormat "%v:%p %h %l %u %t \"%r\" %>s %O \"%{Referer}i\" \"%{User-Agent}i\"" 
vhost_combined
LogFormat "%h %l %u %t \"%r\" %>s %O \"%{Referer}i\" \"%{User-Agent}i\"" 
combined
LogFormat "%h %l %u %t \"%r\" %>s %O" common
LogFormat "%{Referer}i -> %U" referer
LogFormat "%{User-agent}i" agent
IncludeOptional conf-enabled/*.conf
IncludeOptional sites-enabled/*.conf

/etc/apache2/envvars changed:
unset HOME
if [ -z "${APACHE_CONFDIR}" ]; then
export APACHE_CONFDIR=/etc/apache2
fi
if [ "${APACHE_CONFDIR##/etc/apache2-}" != "${APACHE_CONFDIR}" ] ; then
SUFFIX="-${APACHE_CONFDIR##/etc/apache2-}"
else
SUFFIX=
fi
export APACHE_RUN_USER=www-data
export APACHE_RUN_GROUP=www-data
export APACHE_PID_FILE=/var/run/apache2$SUFFIX/apache2.pid
export APACHE_RUN_DIR=/var/run/apache2$SUFFIX
export APACHE_LOCK_DIR=/var/lock/apache2$SUFFIX
export APACHE_LOG_DIR=/var/log/apache2$SUFFIX
export LANG=C
export LANG


-- no debconf information





[jira] [Commented] (CASSANDRA-19556) Add guardrail to block DDL/DCL queries and replace alter_table_enabled guardrail

2024-05-22 Thread Stefan Miklosovic (Jira)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-19556?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17848712#comment-17848712
 ] 

Stefan Miklosovic commented on CASSANDRA-19556:
---

OK, that sounds good.

> Add guardrail to block DDL/DCL queries and replace alter_table_enabled 
> guardrail
> 
>
> Key: CASSANDRA-19556
> URL: https://issues.apache.org/jira/browse/CASSANDRA-19556
> Project: Cassandra
>  Issue Type: New Feature
>  Components: Feature/Guardrails
>Reporter: Yuqi Yan
>Assignee: Yuqi Yan
>Priority: Normal
> Fix For: 5.x
>
>  Time Spent: 1h 40m
>  Remaining Estimate: 0h
>
> Sometimes we want to block DDL/DCL queries to stop new schemas being created 
> or roles created. (e.g. when doing live-upgrade)
> For DDL guardrail current implementation won't block the query if it's no-op 
> (e.g. CREATE TABLE...IF NOT EXISTS, but table already exists, etc. The 
> guardrail check is added in apply(), right after all the existence checks.)
> I don't have a preference between blocking every DDL query or checking 
> whether it's a no-op here. It's just that we have some users who always run 
> CREATE..IF NOT EXISTS.. at startup, which is a no-op but would be blocked by 
> this guardrail, causing startup to fail.
>  
> 4.1 PR: [https://github.com/apache/cassandra/pull/3248]
> trunk PR: [https://github.com/apache/cassandra/pull/3275]
>  



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Commented] (CASSANDRA-19632) wrap tracing logs in isTraceEnabled across the codebase

2024-05-22 Thread Stefan Miklosovic (Jira)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-19632?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17848637#comment-17848637
 ] 

Stefan Miklosovic commented on CASSANDRA-19632:
---

That is probably true. I might check that. We can revert the cases 
where the logging looks like this:
{code:java}
if (logger.isTraceEnabled()) logger.trace("a message");
{code}
In these cases, I think the guard is pretty much overkill. The point is more 
about not evaluating the message arguments when TRACE is disabled. 

> wrap tracing logs in isTraceEnabled across the codebase
> ---
>
> Key: CASSANDRA-19632
> URL: https://issues.apache.org/jira/browse/CASSANDRA-19632
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Legacy/Core
>Reporter: Stefan Miklosovic
>Assignee: Stefan Miklosovic
>Priority: Normal
> Fix For: 5.x
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Our usage of logger.isTraceEnabled across the codebase is inconsistent. This 
> would also fix similar issues, e.g. in CASSANDRA-19429, as [~rustyrazorblade] 
> suggested.
> We should fix this at least in trunk and 5.0 (not critical though) and 
> probably come up with a checkstyle rule to enforce calling isTraceEnabled 
> when logging at TRACE level. 






[jira] [Updated] (CASSANDRA-19632) wrap tracing logs in isTraceEnabled across the codebase

2024-05-22 Thread Stefan Miklosovic (Jira)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-19632?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Stefan Miklosovic updated CASSANDRA-19632:
--
Test and Documentation Plan: ci
 Status: Patch Available  (was: In Progress)

I've created a PR. I have not added a checkstyle rule; it is actually quite 
tricky to get right and I do not think we can generalize it enough. It might be 
very specific to the code.

> wrap tracing logs in isTraceEnabled across the codebase
> ---
>
> Key: CASSANDRA-19632
> URL: https://issues.apache.org/jira/browse/CASSANDRA-19632
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Legacy/Core
>    Reporter: Stefan Miklosovic
>Assignee: Stefan Miklosovic
>Priority: Normal
> Fix For: 5.x
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Our usage of logger.isTraceEnabled across the codebase is inconsistent. This 
> would also fix similar issues, e.g. in CASSANDRA-19429, as [~rustyrazorblade] 
> suggested.
> We should fix this at least in trunk and 5.0 (not critical though) and 
> probably come up with a checkstyle rule to enforce calling isTraceEnabled 
> when logging at TRACE level. 






[jira] [Commented] (CASSANDRA-19593) Transactional Guardrails

2024-05-22 Thread Stefan Miklosovic (Jira)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-19593?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17848590#comment-17848590
 ] 

Stefan Miklosovic commented on CASSANDRA-19593:
---

There is minimumReplicationFactor guardrail which looks into 
DatabaseDescriptor.getDefaultKeyspaceRF() when validating the value:
{code:java}
public static void validateMinRFThreshold(int warn, int fail)
{
    validateMinIntThreshold(warn, fail, "minimum_replication_factor");

    if (fail > DatabaseDescriptor.getDefaultKeyspaceRF())
        throw new IllegalArgumentException(format("minimum_replication_factor_fail_threshold to be set (%d) " +
                                                  "cannot be greater than default_keyspace_rf (%d)",
                                                  fail, DatabaseDescriptor.getDefaultKeyspaceRF()));
} {code}
this is similarly done for maximum_replication_factor.

Obviously, conf.default_keyspace_rf can differ per node if misconfigured, so the 
transformation application would not be the same on all nodes and it might fail 
on some and not on others.

This brings us to the more general problem of transactional configuration, which 
should be done as well. It is questionable whether it is desirable to do that as 
part of this ticket; however, I would like to look into how we could do it 
as well.
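To make the per-node divergence concrete, here is an illustrative shell sketch; `validate_min_rf` and the sample values are hypothetical, not Cassandra code, and only model the comparison against each node's local default_keyspace_rf:

```shell
# Illustrative model of the threshold check: each node compares the proposed
# fail threshold against its *local* default_keyspace_rf, so a misconfigured
# node can reject a transformation that other nodes accept.
validate_min_rf() {
    fail_threshold="$1"
    default_keyspace_rf="$2"   # node-local value from cassandra.yaml
    if [ "$fail_threshold" -gt "$default_keyspace_rf" ]; then
        echo "rejected: fail threshold $fail_threshold > default_keyspace_rf $default_keyspace_rf"
    else
        echo "accepted"
    fi
}
validate_min_rf 3 3   # node A, default_keyspace_rf=3 -> accepted
validate_min_rf 3 2   # node B, misconfigured with default_keyspace_rf=2 -> rejected
```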

> Transactional Guardrails
> 
>
> Key: CASSANDRA-19593
> URL: https://issues.apache.org/jira/browse/CASSANDRA-19593
> Project: Cassandra
>  Issue Type: New Feature
>  Components: Feature/Guardrails, Transactional Cluster Metadata
>        Reporter: Stefan Miklosovic
>Assignee: Stefan Miklosovic
>Priority: Normal
> Fix For: 5.x
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> I think it is time to start to think about this more seriously. TCM is 
> getting into pretty nice shape and we might start to investigate how to do 
> this.






[jira] [Comment Edited] (CASSANDRA-19556) Add guardrail to block DDL/DCL queries and replace alter_table_enabled guardrail

2024-05-22 Thread Stefan Miklosovic (Jira)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-19556?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17848560#comment-17848560
 ] 

Stefan Miklosovic edited comment on CASSANDRA-19556 at 5/22/24 11:00 AM:
-

That being said, I am curious if the patch for trunk might go in as is - it is 
removing alter_table_enabled guardrail.

If that is not desirable, then shadowing it is the next option. That means I 
would need to get alter_table_enabled back.

Is everybody OK with this? Otherwise I am all ears how to do this - that was my 
whole point why I wanted to do it before 5.0.0 is out by removing the old one.

Or we just do what Sam suggests: remove this feature altogether and replace it 
with a system property. That will work, but not while the cluster is up and 
people need to prevent schema modifications operationally. 


was (Author: smiklosovic):
That being said, I am curious if the patch for trunk might go in as is - it is 
removing alter_table_enabled guardrail.

If that is not desirable, then shadowing it is the next option. That means I 
would need to get alter_table_enabled back.

Is everybody OK with this? Otherwise I am all ears how to do this - that was my 
whole point why I wanted to do it before 5.0.0 is out by removing the old one.

> Add guardrail to block DDL/DCL queries and replace alter_table_enabled 
> guardrail
> 
>
> Key: CASSANDRA-19556
> URL: https://issues.apache.org/jira/browse/CASSANDRA-19556
> Project: Cassandra
>  Issue Type: New Feature
>  Components: Feature/Guardrails
>Reporter: Yuqi Yan
>Assignee: Yuqi Yan
>Priority: Normal
> Fix For: 5.x
>
>  Time Spent: 1h 40m
>  Remaining Estimate: 0h
>
> Sometimes we want to block DDL/DCL queries to stop new schemas or roles from 
> being created (e.g. when doing a live upgrade).
> The current DDL guardrail implementation won't block the query if it's a 
> no-op (e.g. CREATE TABLE ... IF NOT EXISTS when the table already exists; the 
> guardrail check is added in apply() right after all the existence checks).
> I don't have a preference between blocking every DDL query and checking 
> whether it's a no-op here. It's just that some users always run 
> CREATE ... IF NOT EXISTS at startup, which is a no-op but would be blocked by 
> this guardrail, causing startup to fail.
>  
> 4.1 PR: [https://github.com/apache/cassandra/pull/3248]
> trunk PR: [https://github.com/apache/cassandra/pull/3275]
>  






[jira] [Comment Edited] (CASSANDRA-19556) Add guardrail to block DDL/DCL queries and replace alter_table_enabled guardrail

2024-05-22 Thread Stefan Miklosovic (Jira)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-19556?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17848553#comment-17848553
 ] 

Stefan Miklosovic edited comment on CASSANDRA-19556 at 5/22/24 10:49 AM:
-

OK, I think we can just defer this effort to trunk / 5.1 then. Removing it from 
5.0-rc.


was (Author: smiklosovic):
OK, I think we can just defer this effor to trunk / 5.1 then. Removing it from 
5.0-rc.

> Add guardrail to block DDL/DCL queries and replace alter_table_enabled 
> guardrail
> 
>
> Key: CASSANDRA-19556
> URL: https://issues.apache.org/jira/browse/CASSANDRA-19556
> Project: Cassandra
>  Issue Type: New Feature
>  Components: Feature/Guardrails
>Reporter: Yuqi Yan
>Assignee: Yuqi Yan
>Priority: Normal
> Fix For: 5.x
>
>  Time Spent: 1h 40m
>  Remaining Estimate: 0h
>
> Sometimes we want to block DDL/DCL queries to stop new schemas or roles from 
> being created (e.g. when doing a live upgrade).
> The current DDL guardrail implementation won't block the query if it's a 
> no-op (e.g. CREATE TABLE ... IF NOT EXISTS when the table already exists; the 
> guardrail check is added in apply() right after all the existence checks).
> I don't have a preference between blocking every DDL query and checking 
> whether it's a no-op here. It's just that some users always run 
> CREATE ... IF NOT EXISTS at startup, which is a no-op but would be blocked by 
> this guardrail, causing startup to fail.
>  
> 4.1 PR: [https://github.com/apache/cassandra/pull/3248]
> trunk PR: [https://github.com/apache/cassandra/pull/3275]
>  






[jira] [Updated] (CASSANDRA-19556) Add guardrail to block DDL/DCL queries and replace alter_table_enabled guardrail

2024-05-22 Thread Stefan Miklosovic (Jira)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-19556?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Stefan Miklosovic updated CASSANDRA-19556:
--
Fix Version/s: (was: 5.0-rc)

> Add guardrail to block DDL/DCL queries and replace alter_table_enabled 
> guardrail
> 
>
> Key: CASSANDRA-19556
> URL: https://issues.apache.org/jira/browse/CASSANDRA-19556
> Project: Cassandra
>  Issue Type: New Feature
>  Components: Feature/Guardrails
>Reporter: Yuqi Yan
>Assignee: Yuqi Yan
>Priority: Normal
> Fix For: 5.x
>
>  Time Spent: 1h 40m
>  Remaining Estimate: 0h
>
> Sometimes we want to block DDL/DCL queries to stop new schemas or roles from 
> being created (e.g. when doing a live upgrade).
> The current DDL guardrail implementation won't block the query if it's a 
> no-op (e.g. CREATE TABLE ... IF NOT EXISTS when the table already exists; the 
> guardrail check is added in apply() right after all the existence checks).
> I don't have a preference between blocking every DDL query and checking 
> whether it's a no-op here. It's just that some users always run 
> CREATE ... IF NOT EXISTS at startup, which is a no-op but would be blocked by 
> this guardrail, causing startup to fail.
>  
> 4.1 PR: [https://github.com/apache/cassandra/pull/3248]
> trunk PR: [https://github.com/apache/cassandra/pull/3275]
>  






[jira] [Commented] (CASSANDRA-19556) Add guardrail to block DDL/DCL queries and replace alter_table_enabled guardrail

2024-05-22 Thread Stefan Miklosovic (Jira)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-19556?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17848553#comment-17848553
 ] 

Stefan Miklosovic commented on CASSANDRA-19556:
---

OK, I think we can just defer this effort to trunk / 5.1 then. Removing it from 
5.0-rc.

> Add guardrail to block DDL/DCL queries and replace alter_table_enabled 
> guardrail
> 
>
> Key: CASSANDRA-19556
> URL: https://issues.apache.org/jira/browse/CASSANDRA-19556
> Project: Cassandra
>  Issue Type: New Feature
>  Components: Feature/Guardrails
>Reporter: Yuqi Yan
>Assignee: Yuqi Yan
>Priority: Normal
> Fix For: 5.0-rc, 5.x
>
>  Time Spent: 1h 40m
>  Remaining Estimate: 0h
>
> Sometimes we want to block DDL/DCL queries to stop new schemas or roles from 
> being created (e.g. when doing a live upgrade).
> The current DDL guardrail implementation won't block the query if it's a 
> no-op (e.g. CREATE TABLE ... IF NOT EXISTS when the table already exists; the 
> guardrail check is added in apply() right after all the existence checks).
> I don't have a preference between blocking every DDL query and checking 
> whether it's a no-op here. It's just that some users always run 
> CREATE ... IF NOT EXISTS at startup, which is a no-op but would be blocked by 
> this guardrail, causing startup to fail.
>  
> 4.1 PR: [https://github.com/apache/cassandra/pull/3248]
> trunk PR: [https://github.com/apache/cassandra/pull/3275]
>  






[ceph-users] Re: How network latency affects ceph performance really with NVME only storage?

2024-05-22 Thread Stefan Bauer

Hi Frank,

it's pretty straightforward. Just follow the steps:

apt install tuned

tuned-adm profile network-latency

According to [1]:

network-latency
   A server profile focused on lowering network latency.
   This profile favors performance over power savings by setting
   intel_pstate and min_perf_pct=100. It disables transparent huge
   pages and automatic NUMA balancing. It also uses cpupower to set
   the performance cpufreq governor, and requests a cpu_dma_latency
   value of 1. It also sets busy_read and busy_poll times to 50 μs,
   and tcp_fastopen to 3.

[1] 
https://access.redhat.com/documentation/de-de/red_hat_enterprise_linux/7/html/performance_tuning_guide/sect-red_hat_enterprise_linux-performance_tuning_guide-tool_reference-tuned_adm
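A quick read-only way to see what the profile actually changed is to inspect the corresponding kernel knobs. The paths below are standard Linux sysctl/sysfs locations; the values printed depend on your kernel and the active profile:

```shell
# Print a few of the kernel settings the network-latency profile adjusts.
# Read-only inspection; missing paths are reported rather than failing.
show_latency_knobs() {
    for f in /proc/sys/net/core/busy_read \
             /proc/sys/net/core/busy_poll \
             /proc/sys/net/ipv4/tcp_fastopen \
             /sys/kernel/mm/transparent_hugepage/enabled; do
        if [ -r "$f" ]; then
            printf '%s = %s\n' "$f" "$(cat "$f")"
        else
            printf '%s not available on this kernel\n' "$f"
        fi
    done
}
show_latency_knobs
```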


Cheers.

Stefan

On 22.05.24 at 12:18, Frank Schilder wrote:

Hi Stefan,

can you provide a link to or copy of the contents of the tuned-profile so 
others can also profit from it?

Thanks!
=
Frank Schilder
AIT Risø Campus
Bygning 109, rum S14


From: Stefan Bauer
Sent: Wednesday, May 22, 2024 10:51 AM
To: Anthony D'Atri;ceph-users@ceph.io
Subject: [ceph-users] Re: How network latency affects ceph performance really 
with NVME only storage?

Hi Anthony and others,

thank you for your reply. To be honest, I'm not even looking for a
solution, I just wanted to ask if latency affects the performance at all
in my case and how others handle this ;)

One of our partners delivered a solution with a latency-optimized
profile for tuned-daemon. Now the latency is much better:

apt install tuned

tuned-adm profile network-latency

# ping 10.1.4.13
PING 10.1.4.13 (10.1.4.13) 56(84) bytes of data.
64 bytes from 10.1.4.13: icmp_seq=1 ttl=64 time=0.047 ms
64 bytes from 10.1.4.13: icmp_seq=2 ttl=64 time=0.028 ms
64 bytes from 10.1.4.13: icmp_seq=3 ttl=64 time=0.025 ms
64 bytes from 10.1.4.13: icmp_seq=4 ttl=64 time=0.020 ms
64 bytes from 10.1.4.13: icmp_seq=5 ttl=64 time=0.023 ms
64 bytes from 10.1.4.13: icmp_seq=6 ttl=64 time=0.026 ms
64 bytes from 10.1.4.13: icmp_seq=7 ttl=64 time=0.024 ms
64 bytes from 10.1.4.13: icmp_seq=8 ttl=64 time=0.023 ms
64 bytes from 10.1.4.13: icmp_seq=9 ttl=64 time=0.033 ms
64 bytes from 10.1.4.13: icmp_seq=10 ttl=64 time=0.021 ms
^C
--- 10.1.4.13 ping statistics ---
10 packets transmitted, 10 received, 0% packet loss, time 9001ms
rtt min/avg/max/mdev = 0.020/0.027/0.047/0.007 ms
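For comparing runs in a script, the avg value can be pulled out of the summary line, e.g. as below. The summary string is hard-coded sample data copied from the output above; in practice you would pipe `ping -c N host | tail -1` in:

```shell
# Extract the average RTT from the standard iputils ping summary line.
# Splitting on '=' and '/' puts the avg value in field 6.
summary='rtt min/avg/max/mdev = 0.020/0.027/0.047/0.007 ms'
avg=$(printf '%s\n' "$summary" | awk -F'[=/]' '{print $6}')
echo "average rtt: ${avg} ms"
```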

On 21.05.24 at 15:08, Anthony D'Atri wrote:

Check the netmask on your interfaces, is it possible that you're sending 
inter-node traffic up and back down needlessly?


On May 21, 2024, at 06:02, Stefan Bauer  wrote:

Dear Users,

I recently set up a new Ceph 3-node cluster. Network is meshed between all nodes 
(2 x 25G with DAC).
Storage is flash only (Kioxia 3.2 TB BiCS FLASH 3D TLC, KCMYXVUG3T20)

The latency with ping tests between the nodes shows:

# ping 10.1.3.13
PING 10.1.3.13 (10.1.3.13) 56(84) bytes of data.
64 bytes from 10.1.3.13: icmp_seq=1 ttl=64 time=0.145 ms
64 bytes from 10.1.3.13: icmp_seq=2 ttl=64 time=0.180 ms
64 bytes from 10.1.3.13: icmp_seq=3 ttl=64 time=0.180 ms
64 bytes from 10.1.3.13: icmp_seq=4 ttl=64 time=0.115 ms
64 bytes from 10.1.3.13: icmp_seq=5 ttl=64 time=0.110 ms
64 bytes from 10.1.3.13: icmp_seq=6 ttl=64 time=0.120 ms
64 bytes from 10.1.3.13: icmp_seq=7 ttl=64 time=0.124 ms
64 bytes from 10.1.3.13: icmp_seq=8 ttl=64 time=0.140 ms
64 bytes from 10.1.3.13: icmp_seq=9 ttl=64 time=0.127 ms
64 bytes from 10.1.3.13: icmp_seq=10 ttl=64 time=0.143 ms
64 bytes from 10.1.3.13: icmp_seq=11 ttl=64 time=0.129 ms
--- 10.1.3.13 ping statistics ---
11 packets transmitted, 11 received, 0% packet loss, time 10242ms
rtt min/avg/max/mdev = 0.110/0.137/0.180/0.022 ms


On another cluster i have much better values, with 10G SFP+ and fibre-cables:

64 bytes from large-ipv6-ip: icmp_seq=42 ttl=64 time=0.081 ms
64 bytes from large-ipv6-ip: icmp_seq=43 ttl=64 time=0.078 ms
64 bytes from large-ipv6-ip: icmp_seq=44 ttl=64 time=0.084 ms
64 bytes from large-ipv6-ip: icmp_seq=45 ttl=64 time=0.075 ms
64 bytes from large-ipv6-ip: icmp_seq=46 ttl=64 time=0.071 ms
64 bytes from large-ipv6-ip: icmp_seq=47 ttl=64 time=0.081 ms
64 bytes from large-ipv6-ip: icmp_seq=48 ttl=64 time=0.074 ms
64 bytes from large-ipv6-ip: icmp_seq=49 ttl=64 time=0.085 ms
64 bytes from large-ipv6-ip: icmp_seq=50 ttl=64 time=0.077 ms
64 bytes from large-ipv6-ip: icmp_seq=51 ttl=64 time=0.080 ms
64 bytes from large-ipv6-ip: icmp_seq=52 ttl=64 time=0.084 ms
64 bytes from large-ipv6-ip: icmp_seq=53 ttl=64 time=0.084 ms
^C
--- long-ipv6-ip ping statistics ---
53 packets transmitted, 53 received, 0% packet loss, time 53260ms
rtt min/avg/max/mdev = 0.071/0.082/0.111/0.006 ms

If I want best performance, does the latency difference matter at all? Should I 
change DAC to SFP transceivers with fibre cables to improve overall Ceph 
performance, or is this nitpicking?

Thanks a lot.

Stefan

[ceph-users] Re: How network latency affects ceph performance really with NVME only storage?

2024-05-22 Thread Stefan Bauer

Hi Anthony and others,

thank you for your reply. To be honest, I'm not even looking for a 
solution, I just wanted to ask if latency affects the performance at all 
in my case and how others handle this ;)


One of our partners delivered a solution with a latency-optimized 
profile for tuned-daemon. Now the latency is much better:


apt install tuned

tuned-adm profile network-latency

# ping 10.1.4.13
PING 10.1.4.13 (10.1.4.13) 56(84) bytes of data.
64 bytes from 10.1.4.13: icmp_seq=1 ttl=64 time=0.047 ms
64 bytes from 10.1.4.13: icmp_seq=2 ttl=64 time=0.028 ms
64 bytes from 10.1.4.13: icmp_seq=3 ttl=64 time=0.025 ms
64 bytes from 10.1.4.13: icmp_seq=4 ttl=64 time=0.020 ms
64 bytes from 10.1.4.13: icmp_seq=5 ttl=64 time=0.023 ms
64 bytes from 10.1.4.13: icmp_seq=6 ttl=64 time=0.026 ms
64 bytes from 10.1.4.13: icmp_seq=7 ttl=64 time=0.024 ms
64 bytes from 10.1.4.13: icmp_seq=8 ttl=64 time=0.023 ms
64 bytes from 10.1.4.13: icmp_seq=9 ttl=64 time=0.033 ms
64 bytes from 10.1.4.13: icmp_seq=10 ttl=64 time=0.021 ms
^C
--- 10.1.4.13 ping statistics ---
10 packets transmitted, 10 received, 0% packet loss, time 9001ms
rtt min/avg/max/mdev = 0.020/0.027/0.047/0.007 ms

On 21.05.24 at 15:08, Anthony D'Atri wrote:

Check the netmask on your interfaces, is it possible that you're sending 
inter-node traffic up and back down needlessly?


On May 21, 2024, at 06:02, Stefan Bauer  wrote:

Dear Users,

I recently set up a new Ceph 3-node cluster. Network is meshed between all nodes 
(2 x 25G with DAC).
Storage is flash only (Kioxia 3.2 TB BiCS FLASH 3D TLC, KCMYXVUG3T20)

The latency with ping tests between the nodes shows:

# ping 10.1.3.13
PING 10.1.3.13 (10.1.3.13) 56(84) bytes of data.
64 bytes from 10.1.3.13: icmp_seq=1 ttl=64 time=0.145 ms
64 bytes from 10.1.3.13: icmp_seq=2 ttl=64 time=0.180 ms
64 bytes from 10.1.3.13: icmp_seq=3 ttl=64 time=0.180 ms
64 bytes from 10.1.3.13: icmp_seq=4 ttl=64 time=0.115 ms
64 bytes from 10.1.3.13: icmp_seq=5 ttl=64 time=0.110 ms
64 bytes from 10.1.3.13: icmp_seq=6 ttl=64 time=0.120 ms
64 bytes from 10.1.3.13: icmp_seq=7 ttl=64 time=0.124 ms
64 bytes from 10.1.3.13: icmp_seq=8 ttl=64 time=0.140 ms
64 bytes from 10.1.3.13: icmp_seq=9 ttl=64 time=0.127 ms
64 bytes from 10.1.3.13: icmp_seq=10 ttl=64 time=0.143 ms
64 bytes from 10.1.3.13: icmp_seq=11 ttl=64 time=0.129 ms
--- 10.1.3.13 ping statistics ---
11 packets transmitted, 11 received, 0% packet loss, time 10242ms
rtt min/avg/max/mdev = 0.110/0.137/0.180/0.022 ms


On another cluster i have much better values, with 10G SFP+ and fibre-cables:

64 bytes from large-ipv6-ip: icmp_seq=42 ttl=64 time=0.081 ms
64 bytes from large-ipv6-ip: icmp_seq=43 ttl=64 time=0.078 ms
64 bytes from large-ipv6-ip: icmp_seq=44 ttl=64 time=0.084 ms
64 bytes from large-ipv6-ip: icmp_seq=45 ttl=64 time=0.075 ms
64 bytes from large-ipv6-ip: icmp_seq=46 ttl=64 time=0.071 ms
64 bytes from large-ipv6-ip: icmp_seq=47 ttl=64 time=0.081 ms
64 bytes from large-ipv6-ip: icmp_seq=48 ttl=64 time=0.074 ms
64 bytes from large-ipv6-ip: icmp_seq=49 ttl=64 time=0.085 ms
64 bytes from large-ipv6-ip: icmp_seq=50 ttl=64 time=0.077 ms
64 bytes from large-ipv6-ip: icmp_seq=51 ttl=64 time=0.080 ms
64 bytes from large-ipv6-ip: icmp_seq=52 ttl=64 time=0.084 ms
64 bytes from large-ipv6-ip: icmp_seq=53 ttl=64 time=0.084 ms
^C
--- long-ipv6-ip ping statistics ---
53 packets transmitted, 53 received, 0% packet loss, time 53260ms
rtt min/avg/max/mdev = 0.071/0.082/0.111/0.006 ms

If I want best performance, does the latency difference matter at all? Should I 
change DAC to SFP transceivers with fibre cables to improve overall Ceph 
performance, or is this nitpicking?

Thanks a lot.

Stefan
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


--
Kind regards,

Stefan Bauer
Schulstraße 5
83308 Trostberg
0179-1194767


RE: [VOTE] Release Apache Sling JCR Resource 3.3.2

2024-05-22 Thread Stefan Seifert
+1

stefan


[jira] [Updated] (CASSANDRA-19632) wrap tracing logs in isTraceEnabled across the codebase

2024-05-22 Thread Stefan Miklosovic (Jira)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-19632?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Stefan Miklosovic updated CASSANDRA-19632:
--
Change Category: Code Clarity
 Complexity: Normal
Component/s: Legacy/Core
   Assignee: Stefan Miklosovic
 Status: Open  (was: Triage Needed)

> wrap tracing logs in isTraceEnabled across the codebase
> ---
>
> Key: CASSANDRA-19632
> URL: https://issues.apache.org/jira/browse/CASSANDRA-19632
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Legacy/Core
>    Reporter: Stefan Miklosovic
>Assignee: Stefan Miklosovic
>Priority: Normal
> Fix For: 5.x
>
>
> Our usage of logger.isTraceEnabled across the codebase is inconsistent. This 
> would also fix similar issues, e.g. in CASSANDRA-19429, as [~rustyrazorblade] 
> suggested.
> We should fix this at least in trunk and 5.0 (not critical though) and 
> probably come up with a checkstyle rule to enforce calling isTraceEnabled 
> when logging at TRACE level. 






[jira] [Updated] (CASSANDRA-19632) wrap tracing logs in isTraceEnabled across the codebase

2024-05-22 Thread Stefan Miklosovic (Jira)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-19632?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Stefan Miklosovic updated CASSANDRA-19632:
--
Fix Version/s: (was: 5.0.x)

> wrap tracing logs in isTraceEnabled across the codebase
> ---
>
> Key: CASSANDRA-19632
> URL: https://issues.apache.org/jira/browse/CASSANDRA-19632
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Stefan Miklosovic
>Priority: Normal
> Fix For: 5.x
>
>
> Our usage of logger.isTraceEnabled across the codebase is inconsistent. This 
> would also fix similar issues, e.g. in CASSANDRA-19429, as [~rustyrazorblade] 
> suggested.
> We should fix this at least in trunk and 5.0 (not critical though) and 
> probably come up with a checkstyle rule to enforce calling isTraceEnabled 
> when logging at TRACE level. 






[ceph-users] Re: rbd-mirror failed to query services: (13) Permission denied

2024-05-21 Thread Stefan Kooman

Hi,

On 29-04-2024 17:15, Ilya Dryomov wrote:

On Tue, Apr 23, 2024 at 8:28 PM Stefan Kooman  wrote:


On 23-04-2024 17:44, Ilya Dryomov wrote:

On Mon, Apr 22, 2024 at 7:45 PM Stefan Kooman  wrote:


Hi,

We are testing rbd-mirroring. There seems to be a permission error with
the rbd-mirror user. Using this user to query the mirror pool status gives:

failed to query services: (13) Permission denied

And results in the following output:

health: UNKNOWN
daemon health: UNKNOWN
image health: OK
images: 3 total
   2 replaying
   1 stopped

So, this command: rbd --id rbd-mirror mirror pool status rbd


Hi Stefan,

What is the output of "ceph auth get client.rbd-mirror"?


[client.rbd-mirror]
 key = REDACTED
 caps mon = "profile rbd-mirror"
 caps osd = "profile rbd"


Hi Stefan,

I went through the git history and this appears to be expected, at
least for some definition of expected.  Commit [1] clearly recognized
the problem and made the

 rbd: failed to query services: (13) Permission denied

error that you ran into with "rbd mirror pool status" non-fatal.

Also, there is a comment in the respective PR [2] acknowledging that
even

 caps mgr = "profile rbd"

cap (which your client.rbd-mirror user doesn't have and rbd-mirror
daemon doesn't actually need) would NOT be sufficient to resolve the
error because "our profiles don't give the average user access to see
Ceph cluster services".

[1] 
https://github.com/ceph/ceph/pull/33219/commits/1cb9e3b56932a1b00850b9cce4c65f8681dcc3cc
[2] https://github.com/ceph/ceph/pull/33219#discussion_r378436795


Sorry for the late reply, and thanks for looking into it. Properly 
fixing it would probably be a lot of work not worth the effort. Fair enough.


Gr. Stefan


[jira] [Created] (SPARK-48379) Cancel build during a PR when a new commit is pushed

2024-05-21 Thread Stefan Kandic (Jira)
Stefan Kandic created SPARK-48379:
-

 Summary: Cancel build during a PR when a new commit is pushed
 Key: SPARK-48379
 URL: https://issues.apache.org/jira/browse/SPARK-48379
 Project: Spark
  Issue Type: Improvement
  Components: Project Infra
Affects Versions: 4.0.0
Reporter: Stefan Kandic


Creating a new commit on a branch should cancel the build of previous commits 
for the same branch.

Exceptions are master and branch-* branches where we still want to have 
concurrent builds.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org



Re: [PATCH v2 0/3] docs: define policy forbidding use of "AI" / LLM code generators

2024-05-21 Thread Stefan Hajnoczi
On Thu, 16 May 2024 at 12:23, Daniel P. Berrangé  wrote:
>
> This patch kicks the hornet's nest of AI / LLM code generators.
>
> With the increasing interest in code generators in recent times,
> it is inevitable that QEMU contributions will include AI generated
> code. Thus far we have remained silent on the matter. Given that
> everyone knows these tools exist, our current position has to be
> considered tacit acceptance of the use of AI generated code in QEMU.
>
> The question for the project is whether that is a good position for
> QEMU to take or not ?
>
> IANAL, but I like to think I'm reasonably proficient at understanding
> open source licensing. I am not inherently against the use of AI tools,
> rather I am anti-risk. I also want to see OSS licenses respected and
> complied with.
>
> AFAICT at its current state of (im)maturity the question of licensing
> of AI code generator output does not have a broadly accepted / settled
> legal position. This is an inherent bias/self-interest from the vendors
> promoting their usage, who tend to minimize/dismiss the legal questions.
> From my POV, this puts such tools in a position of elevated legal risk.
>
> Given the fuzziness over the legal position of generated code from
> such tools, I don't consider it credible (today) for a contributor
> to assert compliance with the DCO terms (b) or (c) (which is a stated
> pre-requisite for QEMU accepting patches) when a patch includes (or is
> derived from) AI generated code.
>
> By implication, I think that QEMU must (for now) explicitly decline
> to (knowingly) accept AI generated code.
>
> Perhaps a few years down the line the legal uncertainty will have
> reduced and we can re-evaluate this policy.
>
> Discuss...

Although this policy is unenforceable, I think it's a valid position
to take until the legal situation becomes clear.

Acked-by: Stefan Hajnoczi 



Re: [PATCH v4 0/3] cyclic/watchdog patches

2024-05-21 Thread Stefan Roese

On 5/21/24 14:45, Rasmus Villemoes wrote:

On 21/05/2024 13.54, Stefan Roese wrote:

On 5/21/24 11:47, Rasmus Villemoes wrote:

On 21/05/2024 10.46, Rasmus Villemoes wrote:

A bit of a mixed bag. I've been wanting to submit something like 3/3
for a while. So when I stumbled on Marek's patch
https://lore.kernel.org/u-boot/20240316201416.211480-1-marek.vasut+rene...@mailbox.org/
, I got reminded of that plan, and I think that patch could be more
readable if we adopt this model.

While actually doing those mostly mechanical changes, I stumbled on
two separate issues that probably want fixing regardless of the fate
of 3/3.

Mostly just compile-tested, and now also checked that at least the
sandbox test runs successfully, and that it builds both with and
without CONFIG_CYCLIC.


So I managed to trigger an azure test by pushing to github and creating
a dummy PR: https://github.com/u-boot/u-boot/pull/542

That fails, and while it involves the cyclic framework, I'm pretty sure
these patches are not to blame, since the same error also exists in
other pipelines. It's an "expect" failure, because some watchdog
callback apparently sometimes takes more than 1ms, so the default 1000us
threshold is exceeded, and that prints a warning which breaks the
"expect".

An example is

https://dev.azure.com/u-boot/u-boot/_build/results?buildId=8459=logs=a1270dec-081b-5c65-5cd5-5e915a842596=69f6cf72-86f3-551a-807d-f28f62a1426f=1055
.

I don't know why it only/mostly seems to happen in clang builds, but I
think the fact that these happen quite frequently warrants either
bumping the threshold used in the CI builds quite a lot, or adding a
config option to suppress that warning/limit altogether for CI builds.


I've also seen CI build issues from time to time, and restarting the
build magically solved them. I'm all for making this CI build more
stable; perhaps Tom has some ideas?


Well, the problem seems to be inherent in the warning from the cyclic
framework; maybe more so when the build server is overloaded (as
sometimes those callbacks are reported to have taken 5+ ms). So when
running sandbox, or under qemu, I think that warning should be disabled.


Agreed.


Regarding this cyclic patch:

Still some problems, MIPS64 related at least, octeon_nic23 target:

https://dev.azure.com/sr0718/0cded7c3-6e6a-4b57-8d0f-65c99496c42f/_apis/build/builds/357/logs/415


Oh my, this is starting to be really embarrassing. The fix is trivial
(as the callback doesn't even use any context):

-static void octeon_board_restore_pf(void *ctx)
+static void octeon_board_restore_pf(struct cyclic_info *c)

Should I resend yet again?


No need. I'll apply this and squash it here. If CI build works, I'll
send the pull request. Otherwise this needs to wait for the next
merge window I'm afraid.

Thanks,
Stefan


Re: [PATCH v4 0/3] cyclic/watchdog patches

2024-05-21 Thread Stefan Roese

On 5/21/24 11:47, Rasmus Villemoes wrote:

On 21/05/2024 10.46, Rasmus Villemoes wrote:

A bit of a mixed bag. I've been wanting to submit something like 3/3
for a while. So when I stumbled on Marek's patch
https://lore.kernel.org/u-boot/20240316201416.211480-1-marek.vasut+rene...@mailbox.org/
, I got reminded of that plan, and I think that patch could be more
readable if we adopt this model.

While actually doing those mostly mechanical changes, I stumbled on
two separate issues that probably want fixing regardless of the fate
of 3/3.

Mostly just compile-tested, and now also checked that at least the
sandbox test runs successfully, and that it builds both with and
without CONFIG_CYCLIC.


So I managed to trigger an Azure test by pushing to GitHub and creating
a dummy PR: https://github.com/u-boot/u-boot/pull/542

That fails, and while it involves the cyclic framework, I'm pretty sure
these patches are not to blame, since the same error also exists in
other pipelines. It's an "expect" failure, because some watchdog
callback apparently sometimes takes more than 1ms, so the default 1000us
threshold is exceeded, and that prints a warning which breaks the "expect".

An example is

https://dev.azure.com/u-boot/u-boot/_build/results?buildId=8459=logs=a1270dec-081b-5c65-5cd5-5e915a842596=69f6cf72-86f3-551a-807d-f28f62a1426f=1055
.

I don't know why it only/mostly seems to happen in clang builds, but I
think the fact that these happen quite frequently warrants either
bumping the threshold used in the CI builds quite a lot, or adding a
config option to suppress that warning/limit altogether for CI builds.


I've also seen CI build issues from time to time, and restarting the
build magically solved them. I'm all for making this CI build more
stable; perhaps Tom has some ideas?

Regarding this cyclic patch:

Still some problems, MIPS64 related at least, octeon_nic23 target:

https://dev.azure.com/sr0718/0cded7c3-6e6a-4b57-8d0f-65c99496c42f/_apis/build/builds/357/logs/415

Thanks,
Stefan


[jira] [Comment Edited] (CASSANDRA-19593) Transactional Guardrails

2024-05-21 Thread Stefan Miklosovic (Jira)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-19593?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17848171#comment-17848171
 ] 

Stefan Miklosovic edited comment on CASSANDRA-19593 at 5/21/24 11:45 AM:
-

Progress:

I finished all transformations (Flags, Values, Thresholds, Customs) (Custom is 
a custom guardrail as per CEP-24). I have also tested ser/de of these 
transformations, there are tests for vtables and diffing of transformations as 
well.

I have also started to validate the input in vtables before it is committed to 
TCM. That means that no invalid configuration (e.g. a warn threshold bigger 
than the fail threshold) will be committed when a CQL statement against such 
vtables is executed. Validations are done per logical guardrail category where 
applicable.

It is worth saying that, from an implementation perspective, Values (related 
to the values guardrail) are rather special when it comes to CQL.

If we want to have this table
{code:java}
VIRTUAL TABLE system_guardrails.values (
    name text PRIMARY KEY,
    disallowed frozen<set<text>>,
    ignored frozen<set<text>>,
    warned frozen<set<text>>
) {code}
when we do this query:
{code:java}
update system_guardrails.values set warned = {'QUORUM'}, disallowed = 
{'EACH_QUORUM'} WHERE name = 'read_consistency_levels'; {code}
the way it has worked until now is that each value for the respective column 
arrives at AbstractMutableVirtualTable#applyColumnUpdate. But consider: if we 
have two columns modified, as in the above example, that would translate into 
two separate commits into TCM, just because the mutable vtable iterates over 
such columns.
 
I do not think this is desirable; there should basically be one commit per 
query. So the transformation might contain more than one column.
 
In order to do that, I had to override the "apply" method in 
AbstractMutableVirtualTable and remove its "final" modifier. It was already 
discussed with [~blerer] that removing it might be possible in order to 
accommodate this kind of situation.
 
Also, I am making sure that I am not committing something which has not 
changed. E.g. when I execute the above query twice, it will actually be 
committed just once, because the second time around, diffing yields no 
difference, hence no commit is necessary. This was quite tricky to get right, 
especially for values, because I wanted to model the situation where we remove 
a value by setting it to null, like this:
 
{code:java}
update system_guardrails.values set warned = null, disallowed = {} WHERE name = 
'read_consistency_levels';  {code}
"null" and "empty" are two different operations in terms of mutable vtable. If 
it is set to null, it is looked at as if we are going to delete but if it is an 
empty set, it is a regular update. This was tricky to get right too but I think 
I am there.


was (Author: smiklosovic):
Progress:

I finished all transformations (Flags, Values, Thresholds, Customs) (Custom is 
a custom guardrail as per CEP-24). I have also tested ser/de of these 
transformations, there are tests for vtables and diffing of transformations as 
well.

I have also started to validate the input in vtables before it is committed to 
TCM. That means that no invalid configuration (e.g. a warn threshold bigger 
than the fail threshold) will be committed when a CQL statement against such 
vtables is executed. Validations are done per logical guardrail category where 
applicable.

 

> Transactional Guardrails
> 
>
> Key: CASSANDRA-19593
> URL: https://issues.apache.org/jira/browse/CASSANDRA-19593
> Project: Cassandra
>  Issue Type: New Feature
>  Components: Feature/Guardrails, Transactional Cluster Metadata
>Reporter: Stefan Miklosovic
>Assignee: Stefan Miklosovic
>Priority: Normal
> Fix For: 5.x
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> I think it is time to start to think about this more seriously. TCM is 
> getting into pretty nice shape and we might start to investigate how to do 
> this.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Commented] (CASSANDRA-19593) Transactional Guardrails

2024-05-21 Thread Stefan Miklosovic (Jira)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-19593?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17848171#comment-17848171
 ] 

Stefan Miklosovic commented on CASSANDRA-19593:
---

Progress:

I finished all transformations (Flags, Values, Thresholds, Customs) (Custom is 
a custom guardrail as per CEP-24). I have also tested ser/de of these 
transformations, there are tests for vtables and diffing of transformations as 
well.

I have also started to validate the input in vtables before it is committed to 
TCM. That means that no invalid configuration (e.g. a warn threshold bigger 
than the fail threshold) will be committed when a CQL statement against such 
vtables is executed. Validations are done per logical guardrail category where 
applicable.

 

> Transactional Guardrails
> 
>
> Key: CASSANDRA-19593
> URL: https://issues.apache.org/jira/browse/CASSANDRA-19593
> Project: Cassandra
>  Issue Type: New Feature
>  Components: Feature/Guardrails, Transactional Cluster Metadata
>    Reporter: Stefan Miklosovic
>Assignee: Stefan Miklosovic
>Priority: Normal
> Fix For: 5.x
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> I think it is time to start to think about this more seriously. TCM is 
> getting into pretty nice shape and we might start to investigate how to do 
> this.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[ceph-users] How network latency affects ceph performance really with NVME only storage?

2024-05-21 Thread Stefan Bauer

Dear Users,

I recently set up a new Ceph 3-node cluster. The network is meshed between 
all nodes (2 x 25G with DAC).

Storage is flash only (Kioxia 3.2 TB BiCS FLASH 3D TLC, KCMYXVUG3T20)

The latency with ping tests between the nodes shows:

# ping 10.1.3.13
PING 10.1.3.13 (10.1.3.13) 56(84) bytes of data.
64 bytes from 10.1.3.13: icmp_seq=1 ttl=64 time=0.145 ms
64 bytes from 10.1.3.13: icmp_seq=2 ttl=64 time=0.180 ms
64 bytes from 10.1.3.13: icmp_seq=3 ttl=64 time=0.180 ms
64 bytes from 10.1.3.13: icmp_seq=4 ttl=64 time=0.115 ms
64 bytes from 10.1.3.13: icmp_seq=5 ttl=64 time=0.110 ms
64 bytes from 10.1.3.13: icmp_seq=6 ttl=64 time=0.120 ms
64 bytes from 10.1.3.13: icmp_seq=7 ttl=64 time=0.124 ms
64 bytes from 10.1.3.13: icmp_seq=8 ttl=64 time=0.140 ms
64 bytes from 10.1.3.13: icmp_seq=9 ttl=64 time=0.127 ms
64 bytes from 10.1.3.13: icmp_seq=10 ttl=64 time=0.143 ms
64 bytes from 10.1.3.13: icmp_seq=11 ttl=64 time=0.129 ms
--- 10.1.3.13 ping statistics ---
11 packets transmitted, 11 received, 0% packet loss, time 10242ms
rtt min/avg/max/mdev = 0.110/0.137/0.180/0.022 ms


On another cluster I get much better values, with 10G SFP+ and 
fibre cables:


64 bytes from large-ipv6-ip: icmp_seq=42 ttl=64 time=0.081 ms
64 bytes from large-ipv6-ip: icmp_seq=43 ttl=64 time=0.078 ms
64 bytes from large-ipv6-ip: icmp_seq=44 ttl=64 time=0.084 ms
64 bytes from large-ipv6-ip: icmp_seq=45 ttl=64 time=0.075 ms
64 bytes from large-ipv6-ip: icmp_seq=46 ttl=64 time=0.071 ms
64 bytes from large-ipv6-ip: icmp_seq=47 ttl=64 time=0.081 ms
64 bytes from large-ipv6-ip: icmp_seq=48 ttl=64 time=0.074 ms
64 bytes from large-ipv6-ip: icmp_seq=49 ttl=64 time=0.085 ms
64 bytes from large-ipv6-ip: icmp_seq=50 ttl=64 time=0.077 ms
64 bytes from large-ipv6-ip: icmp_seq=51 ttl=64 time=0.080 ms
64 bytes from large-ipv6-ip: icmp_seq=52 ttl=64 time=0.084 ms
64 bytes from large-ipv6-ip: icmp_seq=53 ttl=64 time=0.084 ms
^C
--- large-ipv6-ip ping statistics ---
53 packets transmitted, 53 received, 0% packet loss, time 53260ms
rtt min/avg/max/mdev = 0.071/0.082/0.111/0.006 ms

If I want best performance, does the latency difference matter at all? 
Should I change DAC to SFP transceivers with fibre cables to improve 
overall Ceph performance, or is this nitpicking?


Thanks a lot.

Stefan
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


CVS: cvs.openbsd.org: ports

2024-05-21 Thread Stefan Sperling
CVSROOT:/cvs
Module name:ports
Changes by: s...@cvs.openbsd.org2024/05/21 03:35:25

Modified files:
sysutils/firmware/iwx: Makefile distinfo 

Log message:
Update iwx(4) firmware to linux-firmware 20240513 release.

The following files have changed:
iwx-Qu-b0-hr-b0-77
iwx-Qu-b0-jf-b0-77
iwx-Qu-c0-hr-b0-77
iwx-Qu-c0-jf-b0-77
iwx-QuZ-a0-hr-b0-77
iwx-cc-a0-77
iwx-so-a0-gf-a0.pnvm
iwx-so-a0-gf4-a0.pnvm
iwx-ty-a0-gf-a0.pnvm

Tested by jmc@ and myself on AX200. More testing now in -current.
If a new problem appears with this update please report to bugs@.



CVS: cvs.openbsd.org: ports

2024-05-21 Thread Stefan Sperling
CVSROOT:/cvs
Module name:ports
Changes by: s...@cvs.openbsd.org2024/05/21 03:34:52

Modified files:
sysutils/firmware/iwm: Makefile distinfo 

Log message:
Update iwm(4) firmware to linux-firmware release 20240410.

Only 9260 and 9560 devices receive new firmware.

Tested by florian@ on 9260 and steven@ on 9260 and 9560.



Re: [PATCH v2 3/3] cyclic: make clients embed a struct cyclic_info in their own data structure

2024-05-21 Thread Stefan Roese

On 5/21/24 10:38, Rasmus Villemoes wrote:

On 21/05/2024 08.57, Stefan Roese wrote:

On 5/19/24 21:44, Rasmus Villemoes wrote:

On 18/05/2024 09.34, Stefan Roese wrote:


This introduces some problems when compiling e.g. sandbox:

In file included from test/common/cyclic.c:10:
test/common/cyclic.c: In function ‘dm_test_cyclic_running’:
test/common/cyclic.c:25:42: warning: passing argument 1 of
‘cyclic_register’ from incompatible pointer type
[-Wincompatible-pointer-types]
     25 | ut_assertnonnull(cyclic_register(cyclic_test, 10 *
1000,
"cyclic_demo",
    |  ^~~
    |  |
    |  void (*)(void *)
include/test/ut.h:298:29: note: in definition of macro
‘ut_assertnonnull’

[...]


Could you please also change the test file accordingly? I'll then
try to get this upstream shortly.


Whoops, I don't know how I managed to miss that when grepping for users.
Sorry about that. Updated version coming shortly.


Thanks. Still, the new version also fails in the CI build. I'm using
MS Azure for this; here is the link to the failing build:

https://dev.azure.com/sr0718/u-boot/_build/results?buildId=355=results


Apparently I'm blind, the full definition of 'struct cyclic_info' was
not guarded by CONFIG_CYCLIC.


Yes, I already "played" a bit here but still got other problems.


Could you please make sure that CI fully builds?


Is there a way I can trigger that from my side without sending patches?


You need to have an Azure account and push a branch with your patches
into your u-boot repo to trigger the CI build. GitLab is also possible,
AFAIK.


BTW: Not sure if I still can pull this (updated version) in, since I'm
leaving for a 2 week vacation tomorrow morning.


NP, I should have been more careful. I'll send an updated version in a
moment anyway.


Thanks,
Stefan


Re: [PATCH v2 3/3] cyclic: make clients embed a struct cyclic_info in their own data structure

2024-05-21 Thread Stefan Roese

On 5/19/24 21:44, Rasmus Villemoes wrote:

On 18/05/2024 09.34, Stefan Roese wrote:


This introduces some problems when compiling e.g. sandbox:

In file included from test/common/cyclic.c:10:
test/common/cyclic.c: In function ‘dm_test_cyclic_running’:
test/common/cyclic.c:25:42: warning: passing argument 1 of
‘cyclic_register’ from incompatible pointer type
[-Wincompatible-pointer-types]
    25 | ut_assertnonnull(cyclic_register(cyclic_test, 10 * 1000,
"cyclic_demo",
   |  ^~~
   |  |
   |  void (*)(void *)
include/test/ut.h:298:29: note: in definition of macro ‘ut_assertnonnull’

[...]


Could you please also change the test file accordingly? I'll then
try to get this upstream shortly.


Whoops, I don't know how I managed to miss that when grepping for users.
Sorry about that. Updated version coming shortly.


Thanks. Still, the new version also fails in the CI build. I'm using
MS Azure for this; here is the link to the failing build:

https://dev.azure.com/sr0718/u-boot/_build/results?buildId=355=results

Could you please make sure that CI fully builds?

BTW: Not sure if I still can pull this (updated version) in, since I'm
leaving for a 2 week vacation tomorrow morning.

Thanks,
Stefan


[jira] [Commented] (CASSANDRA-19450) Hygiene updates for warnings and pytests

2024-05-20 Thread Stefan Miklosovic (Jira)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-19450?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17848042#comment-17848042
 ] 

Stefan Miklosovic commented on CASSANDRA-19450:
---

[CASSANDRA-19450|https://github.com/instaclustr/cassandra/tree/CASSANDRA-19450]
{noformat}
java17_pre-commit_tests 
  ✓ j17_build4m 27s
  ✓ j17_cqlshlib_cython_tests7m 57s
  ✓ j17_cqlshlib_tests7m 7s
  ✓ j17_dtests  37m 34s
  ✓ j17_dtests_vnode35m 29s
  ✓ j17_unit_tests   14m 4s
  ✓ j17_utests_latest   14m 24s
  ✓ j17_utests_oa   15m 52s
  ✕ j17_cqlsh_dtests_py311   7m 11s
  cqlsh_tests.test_cqlsh.TestCqlsh test_pycodestyle_compliance
  ✕ j17_cqlsh_dtests_py311_vnode 7m 15s
  cqlsh_tests.test_cqlsh.TestCqlsh test_pycodestyle_compliance
  ✕ j17_cqlsh_dtests_py387m 10s
  cqlsh_tests.test_cqlsh.TestCqlsh test_pycodestyle_compliance
  ✕ j17_cqlsh_dtests_py38_vnode  7m 26s
  cqlsh_tests.test_cqlsh.TestCqlsh test_pycodestyle_compliance
  ✕ j17_dtests_latest   36m 34s
  configuration_test.TestConfiguration test_change_durable_writes
  ✕ j17_jvm_dtests  21m 45s
  ✕ j17_jvm_dtests_latest_vnode 21m 35s
  junit.framework.TestSuite 
org.apache.cassandra.fuzz.harry.integration.model.InJVMTokenAwareExecutorTest 
TIMEOUTED
java17_separate_tests
java11_pre-commit_tests 
java11_separate_tests
{noformat}

[java17_pre-commit_tests|https://app.circleci.com/pipelines/github/instaclustr/cassandra/4315/workflows/8e5e7328-875d-4554-8535-b7c1437bf2ec]
[java17_separate_tests|https://app.circleci.com/pipelines/github/instaclustr/cassandra/4315/workflows/c0eb2ffe-439c-462d-b633-7fd8034c3291]
[java11_pre-commit_tests|https://app.circleci.com/pipelines/github/instaclustr/cassandra/4315/workflows/16247a5a-16b0-43a7-8aea-a72853cdcd78]
[java11_separate_tests|https://app.circleci.com/pipelines/github/instaclustr/cassandra/4315/workflows/b4b06e17-bc75-4341-9416-91424ceda80c]


> Hygiene updates for warnings and pytests
> 
>
> Key: CASSANDRA-19450
> URL: https://issues.apache.org/jira/browse/CASSANDRA-19450
> Project: Cassandra
>  Issue Type: Improvement
>  Components: CQL/Interpreter
>Reporter: Brad Schoening
>Assignee: Brad Schoening
>Priority: Low
> Fix For: 5.x
>
>
>  
>  * Update 'Warning' message to write to stderr
>  * -Replace TimeoutError Exception with builtin (since Python 3.3)-
>  * -Remove re.pattern_type (removed since Python 3.7)-
>  * Fix mutable arg [] in read_until()
>  * Remove redirect of stderr to stdout in pytest fixture with tty=false; 
> Deprecation warnings can otherwise fail unit tests when stdout & stderr 
> output is combined.
>  * Fix several pycodestyle issues



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



Re: nsh: fix build with upcoming stdio changes

2024-05-20 Thread Stefan Sperling
On Mon, May 20, 2024 at 08:33:17PM +0200, Theo Buehler wrote:
> extern.h depends on stdio.h because of FILENAME_MAX. wchar.h will stop
> pulling in stdio.h, hence break the build in utf8.c which will no longer
> pull in stdio.h. The diff below fixes the build.
> 
> Longer term it would be preferable to make extern.h self-standing so it
> doesn't depend on other headers being pulled in.

ok stsp@, thank you!

I have already committed your diff to the nsh upstream repository.

> Also, these obj hacks in the port are really annoying (this breaks
> generating patches as it is owned by _pbuild:_pbuild and has perms 770).

I've also committed a tweak such that nsh will compile even if 'make obj'
is not used. See commit b0b69440cc3f1f8127d3b6f341eb0e61116f7918 ; feel
free to pull this into the port if it is urgent. Otherwise the tweak
will be pulled in with the next upstream release.



Re: bug#66790: [BUG] org, ispell [9.6.6 (release_9.6.6 @ /Applications/Emacs.app/Contents/Resources/lisp/org/)]

2024-05-20 Thread Stefan Kangas
Ihor Radchenko  writes:

> Ihor Radchenko  writes:
>
>> Confirmed.
>>
>> At this point, I feel that supporting isearch + text properties is an
>> uphill battle. I was hoping to contribute isearch.el patch upstream, but
>> apparently numerous other libraries (ispell, regexp-search, evil,
>> swiper) are relying upon invisibility being handled in very specific
>> way, using overlays. Considering that using overlays is no longer slower
>> compared to text properties in Emacs 29+, I am thinking of slowly
>> switching back to using overlays for folding for newer Emacs versions.
>
> Org mode development branch will have `org-fold-core-style' defaulting
> to 'overlays now as long as Emacs version is recent enough to handle
> overlays efficiently.
>
> Handled.

Thanks, I'm therefore closing this bug report.



[Wikimania-l] Re: Excursion to Auschwitz?

2024-05-20 Thread Stefan Fussan
I would be interested as well. I’ll arrive in Katowice on 4 Aug and leave on 
11 Aug in the morning.

 

Stefan Fussan (he/him)

Wikimedia projects: DerFussi

Home wiki: Wikivoyage/de

Hailing from: Cottbus, Germany

 

From: Vanj Padilla 
Sent: Monday, 20 May 2024 16:19
To: Wikimania general list (open subscription) 
Cc: Tomasz Ganicz 
Subject: [Wikimania-l] Re: Excursion to Auschwitz?

 

Is there already a tentative date for this? I would like to go too if it is 
within my stay :-) 

 

On Mon, 20 May 2024, 5:34 pm Laliv Gal <laliv@gmail.com> wrote:

Me 2

Laiv g

 

On Mon, May 20, 2024, 06:33 Rosie Stephenson-Goodknight 
<rosiestep.w...@gmail.com> wrote:

I'm interested. Thanks. -Rosie


 

 

On Sun, May 19, 2024 at 1:17 PM Samuel Klein <meta...@gmail.com> wrote:

Please count me interested. It is a short drive, perhaps we could arrange a van 
or large car. Perhaps someone who knows the director could start a thread w/ 
them and inquire about doing something a day post-mania?

 



 

On Thu, Dec 14, 2023, 2:22 PM Harry Mitchell <hjmw...@gmail.com> wrote:

That's absolutely fair enough. Thank you for your answer. I wasn't looking for 
one answer or another; it's something I wanted to do and didn't want to 
pre-empt anything if plans were being made.

 

If anyone is interested in going with a group of Wikimedians, please feel free 
to get in touch with me. If there are a few of us we could coordinate plans. 

 

Harry

 

On Thu, 14 Dec 2023, 13:13 Wikipedysta Nadzik 
<pl.wikipedia.nad...@gmail.com> wrote:
Hi,

For both logistical and cost reasons, the Core Organizing Team has decided to 
focus on side events inside the city of Katowice. We do not plan to organize 
any activities outside of the city. However, attendees are always free to 
self-organize tours and excursions based on their interests.

We expect to coordinate and announce our side events in the months leading up 
to Wikimania, so keep your eye out for that on the Wikimania-wiki!

Best,

Maciej on behalf of the Wikimania 2024 COT

--


  
 

Maciej Artur Nadzikiewicz (He/him)

Wikimania 2024 Poland – Team Lead

Wikimedia Europe Board Member

Wikipedia Administrator

User:Nadzik (https://meta.wikimedia.org/wiki/User:Nadzik)

 

On Fri, 8 Dec 2023 at 22:56, Harry Mitchell <hjmw...@gmail.com> wrote:

Hi all, 

 

I'm sure the organisers have a lot on their plate right now so I don't want to 
add any unnecessary pressure, I was just wondering if any thought had been 
given to an organised excursion of Wikimedians to Auschwitz during Wikimania as 
Krakow seems reasonably close?

 

I know this is a sensitive topic and likely to be deeply distressing but I'm 
sure many of us would find it worthwhile.

 

Best,

Harry

___
Wikimania-l mailing list -- wikimania-l@lists.wikimedia.org
To unsubscribe send an email to wikimania-l-le...@lists.wikimedia.org




 

-- 

User:Nadzik (https://meta.wikimedia.org/wiki/User:Nadzik)



[jira] [Updated] (CASSANDRA-19648) Flaky test: StartupChecksTest#testKernelBug1057843Check() on Non-Linux OS

2024-05-20 Thread Stefan Miklosovic (Jira)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-19648?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Stefan Miklosovic updated CASSANDRA-19648:
--
  Fix Version/s: 5.1-alpha1
 (was: 5.x)
Source Control Link: 
https://github.com/apache/cassandra/commit/df78296dcbc67c1d6dd1e0412fcd71f0a8f8fa7c
 Resolution: Fixed
 Status: Resolved  (was: Ready to Commit)

> Flaky test: StartupChecksTest#testKernelBug1057843Check() on Non-Linux OS
> -
>
> Key: CASSANDRA-19648
> URL: https://issues.apache.org/jira/browse/CASSANDRA-19648
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Test/unit
>Reporter: Ling Mao
>Assignee: Ling Mao
>Priority: Low
> Fix For: 5.1-alpha1
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Flaky test: StartupChecksTest#testKernelBug1057843Check() cannot pass on my 
> macOS (and possibly Windows). Just skip this test when running on a non-Linux OS



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Commented] (CASSANDRA-19556) Add guardrail to block DDL/DCL queries and replace alter_table_enabled guardrail

2024-05-20 Thread Stefan Miklosovic (Jira)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-19556?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17847948#comment-17847948
 ] 

Stefan Miklosovic commented on CASSANDRA-19556:
---

Links were consolidated in the Links section of this ticket.

> Add guardrail to block DDL/DCL queries and replace alter_table_enabled 
> guardrail
> 
>
> Key: CASSANDRA-19556
> URL: https://issues.apache.org/jira/browse/CASSANDRA-19556
> Project: Cassandra
>  Issue Type: New Feature
>  Components: Feature/Guardrails
>Reporter: Yuqi Yan
>Assignee: Yuqi Yan
>Priority: Normal
> Fix For: 5.0-rc, 5.x
>
>  Time Spent: 1h 40m
>  Remaining Estimate: 0h
>
> Sometimes we want to block DDL/DCL queries to stop new schemas or roles 
> from being created (e.g. when doing a live upgrade).
> For the DDL guardrail, the current implementation won't block the query if 
> it's a no-op (e.g. CREATE TABLE...IF NOT EXISTS when the table already 
> exists, etc. The guardrail check is added in apply() right after all the 
> existence checks.)
> I don't have a preference between blocking every DDL query and checking 
> whether it's a no-op here. It's just that some users always run 
> CREATE..IF NOT EXISTS.. at startup, which is a no-op but will be blocked 
> by this guardrail, causing startup to fail.
>  
> 4.1 PR: [https://github.com/apache/cassandra/pull/3248]
> trunk PR: [https://github.com/apache/cassandra/pull/3275]
>  



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Commented] (CASSANDRA-19556) Add guardrail to block DDL/DCL queries and replace alter_table_enabled guardrail

2024-05-20 Thread Stefan Miklosovic (Jira)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-19556?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17847930#comment-17847930
 ] 

Stefan Miklosovic commented on CASSANDRA-19556:
---

Let's ask then and get these binding -1s explicitly.

> Add guardrail to block DDL/DCL queries and replace alter_table_enabled 
> guardrail
> 
>
> Key: CASSANDRA-19556
> URL: https://issues.apache.org/jira/browse/CASSANDRA-19556
> Project: Cassandra
>  Issue Type: New Feature
>  Components: Feature/Guardrails
>Reporter: Yuqi Yan
>Assignee: Yuqi Yan
>Priority: Normal
> Fix For: 5.0-rc, 5.x
>
>  Time Spent: 1.5h
>  Remaining Estimate: 0h
>
> Sometimes we want to block DDL/DCL queries to stop new schemas or roles 
> from being created (e.g. when doing a live upgrade).
> For the DDL guardrail, the current implementation won't block the query if 
> it's a no-op (e.g. CREATE TABLE...IF NOT EXISTS when the table already 
> exists, etc. The guardrail check is added in apply() right after all the 
> existence checks.)
> I don't have a preference between blocking every DDL query and checking 
> whether it's a no-op here. It's just that some users always run 
> CREATE..IF NOT EXISTS.. at startup, which is a no-op but will be blocked 
> by this guardrail, causing startup to fail.
>  
> 4.1 PR: [https://github.com/apache/cassandra/pull/3248]
> trunk PR: [https://github.com/apache/cassandra/pull/3275]
>  



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Updated] (CASSANDRA-19648) Flaky test: StartupChecksTest#testKernelBug1057843Check() on Non-Linux OS

2024-05-20 Thread Stefan Miklosovic (Jira)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-19648?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Stefan Miklosovic updated CASSANDRA-19648:
--
Status: Ready to Commit  (was: Review In Progress)

> Flaky test: StartupChecksTest#testKernelBug1057843Check() on Non-Linux OS
> -
>
> Key: CASSANDRA-19648
> URL: https://issues.apache.org/jira/browse/CASSANDRA-19648
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Test/unit
>Reporter: Ling Mao
>Assignee: Ling Mao
>Priority: Low
> Fix For: 5.x
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Flaky test: StartupChecksTest#testKernelBug1057843Check() cannot pass on my 
> macOS (and possibly Windows). Just skip this test when running on a non-Linux OS



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Updated] (CASSANDRA-19648) Flaky test: StartupChecksTest#testKernelBug1057843Check() on Non-Linux OS

2024-05-20 Thread Stefan Miklosovic (Jira)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-19648?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Stefan Miklosovic updated CASSANDRA-19648:
--
Reviewers: Brandon Williams, Stefan Miklosovic
   Status: Review In Progress  (was: Needs Committer)

> Flaky test: StartupChecksTest#testKernelBug1057843Check() on Non-Linux OS
> -
>
> Key: CASSANDRA-19648
> URL: https://issues.apache.org/jira/browse/CASSANDRA-19648
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Test/unit
>Reporter: Ling Mao
>Assignee: Ling Mao
>Priority: Low
> Fix For: 5.x
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Flaky test: StartupChecksTest#testKernelBug1057843Check() cannot pass on my 
> macOS (and maybe Windows). Just skip this test when run on a non-Linux OS






[jira] [Commented] (CASSANDRA-19648) Flaky test: StartupChecksTest#testKernelBug1057843Check() on Non-Linux OS

2024-05-20 Thread Stefan Miklosovic (Jira)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-19648?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17847913#comment-17847913
 ] 

Stefan Miklosovic commented on CASSANDRA-19648:
---

+1

> Flaky test: StartupChecksTest#testKernelBug1057843Check() on Non-Linux OS
> -
>
> Key: CASSANDRA-19648
> URL: https://issues.apache.org/jira/browse/CASSANDRA-19648
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Test/unit
>Reporter: Ling Mao
>Assignee: Ling Mao
>Priority: Low
> Fix For: 5.x
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Flaky test: StartupChecksTest#testKernelBug1057843Check() cannot pass on my 
> macOS (and maybe Windows). Just skip this test when run on a non-Linux OS






[jira] [Created] (SPARK-48354) Added function support and testing for filter push down in JDBC connectors

2024-05-20 Thread Stefan Bukorovic (Jira)
Stefan Bukorovic created SPARK-48354:


 Summary: Added function support and testing for filter push down 
in JDBC connectors
 Key: SPARK-48354
 URL: https://issues.apache.org/jira/browse/SPARK-48354
 Project: Spark
  Issue Type: Improvement
  Components: SQL
Affects Versions: 3.4.3
Reporter: Stefan Bukorovic


It is possible to add push-down support to JDBC data sources for multiple 
widely used Spark functions, such as lower or upper. 
More integration tests are also needed for filter push-down. 



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org



[jira] [Updated] (CASSANDRA-19645) Mismatch of number of args of String.format() in three classes

2024-05-20 Thread Stefan Miklosovic (Jira)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-19645?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Stefan Miklosovic updated CASSANDRA-19645:
--
  Fix Version/s: 5.1-alpha1
 (was: 5.x)
  Since Version: NA
Source Control Link: 
https://github.com/apache/cassandra/commit/dc89df7da1d9577ed0130873c491f7f4ccf99bae
 Resolution: Fixed
 Status: Resolved  (was: Ready to Commit)

> Mismatch of number of args of String.format() in three classes
> --
>
> Key: CASSANDRA-19645
> URL: https://issues.apache.org/jira/browse/CASSANDRA-19645
> Project: Cassandra
>  Issue Type: Bug
>  Components: Local/Other
>Reporter: Dmitrii Kriukov
>Assignee: Dmitrii Kriukov
>Priority: Normal
> Fix For: 5.1-alpha1
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Affected classes:
> GossipHelper lines 196-197
> SchemaGenerators line 488
> StorageService line 1087
> I'm going to provide a PR






[jira] [Created] (SPARK-48339) Revert converting collated strings to regular strings when writing to hive metastore

2024-05-20 Thread Stefan Kandic (Jira)
Stefan Kandic created SPARK-48339:
-

 Summary: Revert converting collated strings to regular strings 
when writing to hive metastore
 Key: SPARK-48339
 URL: https://issues.apache.org/jira/browse/SPARK-48339
 Project: Spark
  Issue Type: Improvement
  Components: SQL
Affects Versions: 4.0.0
Reporter: Stefan Kandic


No longer needed due to https://github.com/apache/spark/pull/46083






Re: [Bacula-users] HP 1/8 G2 Autoloader

2024-05-20 Thread Stefan G. Weichinger

Am 15.05.24 um 16:02 schrieb Rob Gerber:

To manually remove the disk volumes, and associated job and file 
entries, do a delete operation. Easily done in Bacularis or baculum, (I 
haven't done it in bconsole though I'm sure it's doable there too). I 
would have a catalog backup restored BEFORE I performed the delete 
operation, and I would be very careful to select the correct volumes to 
delete.


I have to balance things now until we have a new tape drive.

The disk is running full ...

Is there an easy way to delete all volumes belonging to a specific Job-ID?

For now I looked it up in Bacularis: check which jobs are on which 
(disk-based) volume, rm the volume in Bacularis, then rm the 
corresponding file from disk (in the shell).
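For the record, a rough sketch of how that manual procedure maps onto bconsole commands — the job id, volume name, and storage path below are hypothetical placeholders, and the commands are only printed here as a dry run rather than sent to a Director:

```shell
#!/bin/sh
# Dry-run sketch of the equivalent bconsole session. JOBID, the volume
# name and the storage path are hypothetical placeholders; pipe the first
# two commands into `bconsole` to actually run them against the Director.
JOBID=1234

# 1. Find the volumes the job wrote to:
echo "list jobmedia jobid=$JOBID"

# 2. Delete the catalog record of a volume that holds only that job:
echo "delete volume=Vol-0001 yes"

# 3. bconsole only touches the catalog; the backing file on disk still
#    has to be removed by hand afterwards:
echo "rm /srv/bacula/storage/Vol-0001"
```

Whether this is less work than clicking through Bacularis is debatable, but it scripts nicely once the job-to-volume mapping is known.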


OK so far, I am just curious if there is a clever way for that in Bacula.



___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: RFR: 8332494: java/util/zip/EntryCount64k.java failing with java.lang.RuntimeException: '\\A\\Z' missing from stderr [v2]

2024-05-20 Thread Stefan Karlsson
On Mon, 20 May 2024 07:24:16 GMT, Axel Boldt-Christmas  
wrote:

>> Improve `java/util/zip/EntryCount64k.java` stderr parsing by ignoring 
>> deprecated warnings. Testing non-generational ZGC requires the use of a 
>> deprecated option.
>
> Axel Boldt-Christmas has updated the pull request incrementally with one 
> additional commit since the last revision:
> 
>   Update copyright year

Marked as reviewed by stefank (Reviewer).

-

PR Review: https://git.openjdk.org/jdk/pull/19297#pullrequestreview-2065538663


Re: RFR: 8332495: java/util/logging/LoggingDeadlock2.java fails with AssertionError: Some tests failed [v2]

2024-05-20 Thread Stefan Karlsson
On Mon, 20 May 2024 07:22:49 GMT, Axel Boldt-Christmas  
wrote:

>> Improve ` java/util/logging/LoggingDeadlock2.java` stderr parsing by 
>> ignoring deprecated warnings. Testing non-generational ZGC requires the use 
>> of a deprecated option.
>
> Axel Boldt-Christmas has updated the pull request incrementally with one 
> additional commit since the last revision:
> 
>   Update copyright year

Marked as reviewed by stefank (Reviewer).

-

PR Review: https://git.openjdk.org/jdk/pull/19298#pullrequestreview-2065533348


Re: A different way to build GCC to overcome issues, especially with C++ for embedded systems

2024-05-19 Thread Stefan

Hi Sergio!


I'm very interested in the code for ZMK provided in this thread[1].

I've tried it locally and it compiles successfully. 


That's nice to hear!


Does anyone know if this is available in a public repository? Or if it has been 
moved forward?


There is no public repository for it. I moved it a bit further. There is now a 
GCC with picolibc. GCC in general is more self-contained and is completely 
usable by itself (including paths to specs; Attila, this may solve the issue 
you were facing). There is no real need for GCC-toolchain packages anymore. 
However, as some projects invoke programs of Binutils directly and because of 
locales, there are still GCC-toolchain packages. The Binutils for arm-none-eabi 
get built separately as well now, as the one from Guix has failing tests (if 
activated) due to the selected configure flags. GCC is bumped to 13.2.0. Zephyr 
is updated to 3.5 and ZMK to end of April. There is also more ZMK support for 
combos, conditional-layers and a bit more.

A problem I noticed with arm-none-eabi-g++ is that it always links the generic 
libstdc++.a instead of the existing CPU- and FPU-specific multilib variant from 
a sub-directory. I'd appreciate any GCC configuration hint to solve this.
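In case it helps others reproduce this, the driver itself can report which multilib variant it selects; a minimal diagnostic sketch (it falls back to the host g++ when no arm-none-eabi cross compiler is on PATH, and the cortex-m4 flags in the comment are just an example):

```shell
#!/bin/sh
# Ask the GCC driver which multilib directory it selects and which
# libstdc++.a it would hand to the linker. Falls back to the host g++
# when no arm-none-eabi cross compiler is installed.
GXX=g++
command -v arm-none-eabi-g++ >/dev/null 2>&1 && GXX=arm-none-eabi-g++

# All multilib variants the driver was configured with (CPU/FPU subdirs).
"$GXX" -print-multi-lib

# The subdirectory selected for the current flags ("." means the generic
# one; with the cross compiler, try adding e.g.
#   -mcpu=cortex-m4 -mfloat-abi=hard
# and see whether the reported directory changes.
echo "multilib dir: $("$GXX" -print-multi-directory)"

# The concrete archive the link step would resolve. If this stays the
# top-level libstdc++.a even with CPU/FPU flags, the multilib mapping
# (MULTILIB_OPTIONS/MULTILIB_DIRNAMES in the GCC build) does not match them.
echo "libstdc++.a: $("$GXX" -print-file-name=libstdc++.a)"
```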


Has anyone here tried to do something with ZMK? I'm interested in what would 
be the Guix approach for ZMK development.

The blog post of Zephyr[2] was a very interesting read, does anyone know of 
other resources regarding this topic?


I'm not aware of anything else regarding ZMK or Zephyr in Guix.


Bye

Stefan


embedded.tar.gz
Description: application/gzip


Re: Saving some kitten, plus some questions along the way

2024-05-19 Thread Stefan Monnier
>> I understand it's important in general, but the question is for this
>> specific use of `org-funcall-in-calendar` where all we do (apparently)
>> is to set `cursor-type` which shouldn't require any change to the
>> overlay (nor does it require to `select-window`), or should it?
> No, it should not, and it does not require `select-window'.

OK, changed it to `with-current-buffer`.

I pushed the resulting patch (along with three other patches resulting
from running the tests) to `scratch/org` on `elpa.git`.

You can also find them attached,


Stefan
>From 30f64453a570004524de07aa352f7c631414df7c Mon Sep 17 00:00:00 2001
From: Stefan Monnier 
Date: Sun, 19 May 2024 17:12:36 -0400
Subject: [PATCH 1/4] (org-*-in-calendar): Prefer `apply` to `eval

* lisp/org.el (org-funcall-in-calendar): Rename from `org-eval-in-calendar`.
Use `apply` rather than `eval`.  Use `with-selected-window` rather than
cooking our own version of it.  Update all callers.
(org-read-date): Use `with-current-buffer` to set `cursor-type` since
it's not affected by the selected window.
Move the `calendar-forward-day` call to `org-funcall-in-calendar` to
clarify the need to update `org-date-ovl`.
(org-eval-in-calendar): New backward compatibility function.

* lisp/org-keys.el (org-eval-in-calendar): Remove unused declaration.
---
 lisp/org-keys.el |  7 +++--
 lisp/org.el  | 66 +---
 2 files changed, 37 insertions(+), 36 deletions(-)

diff --git a/lisp/org-keys.el b/lisp/org-keys.el
index 50e05efa1b..fc5fd53aa8 100644
--- a/lisp/org-keys.el
+++ b/lisp/org-keys.el
@@ -89,7 +89,6 @@
 (declare-function org-emphasize "org" ( char))
 (declare-function org-end-of-line "org" ( n))
 (declare-function org-entry-put "org" (pom property value))
-(declare-function org-eval-in-calendar "org" (form  keepdate))
 (declare-function org-calendar-goto-today-or-insert-dot "org" ())
 (declare-function org-calendar-goto-today "org" ())
 (declare-function org-calendar-backward-month "org" ())
@@ -390,9 +389,9 @@ COMMANDS is a list of alternating OLDDEF NEWDEF command names."
 ;;; Global bindings
 
  Outline functions
-(define-key org-mode-map [menu-bar headings] 'undefined)
-(define-key org-mode-map [menu-bar hide] 'undefined)
-(define-key org-mode-map [menu-bar show] 'undefined)
+(define-key org-mode-map [menu-bar headings] #'undefined)
+(define-key org-mode-map [menu-bar hide] #'undefined)
+(define-key org-mode-map [menu-bar show] #'undefined)
 
 (define-key org-mode-map [remap outline-mark-subtree] #'org-mark-subtree)
 (define-key org-mode-map [remap outline-show-subtree] #'org-fold-show-subtree)
diff --git a/lisp/org.el b/lisp/org.el
index 4342ddd735..9604887623 100644
--- a/lisp/org.el
+++ b/lisp/org.el
@@ -14009,13 +14009,15 @@ user."
 	  (setq cal-frame
 		(window-frame (get-buffer-window calendar-buffer 'visible)))
 	  (select-frame cal-frame))
-	(org-eval-in-calendar '(setq cursor-type nil) t)
+	;; FIXME: Not sure we need `with-current-buffer' but I couldn't
+;; convince myself that we're always in `calendar-buffer' after
+;; the call to `calendar'.
+	(with-current-buffer calendar-buffer (setq cursor-type nil))
 	(unwind-protect
-		(progn
-		  (calendar-forward-day (- (time-to-days org-def)
-	   (calendar-absolute-from-gregorian
-	(calendar-current-date
-		  (org-eval-in-calendar nil t)
+		(let ((days (- (time-to-days org-def)
+			   (calendar-absolute-from-gregorian
+(calendar-current-date)
+		  (org-funcall-in-calendar #'calendar-forward-day t days)
 		  (let* ((old-map (current-local-map))
 			 (map (copy-keymap calendar-mode-map))
 			 (minibuffer-local-map
@@ -14398,20 +14400,20 @@ user function argument order change dependent on argument order."
 (`european (list arg2 arg1 arg3))
 (`iso (list arg2 arg3 arg1
 
-(defun org-eval-in-calendar (form  keepdate)
-  "Eval FORM in the calendar window and return to current window.
+(defun org-funcall-in-calendar (func  keepdate  args)
+  "Call FUNC in the calendar window and return to current window.
 Unless KEEPDATE is non-nil, update `org-ans2' to the cursor date."
-  (let ((sf (selected-frame))
-	(sw (selected-window)))
-(select-window (get-buffer-window calendar-buffer t))
-(eval form t)
+  (with-selected-window (get-buffer-window calendar-buffer t)
+(apply func args)
 (when (and (not keepdate) (calendar-cursor-to-date))
   (let* ((date (calendar-cursor-to-date))
 	 (time (org-encode-time 0 0 0 (nth 1 date) (nth 0 date) (nth 2 date
 	(setq org-ans2 (format-time-string "%Y-%m-%d" time
-(move-overlay org-date-ovl (1- (point)) (1+ (point)) (current-buffer))
-(select-window sw)
-(select-frame-set-input-focus sf)))
+(move-overlay org-date-ovl (1- (point)) (1+ (point)

[elpa] scratch/org 30f64453a5 1/4: (org-*-in-calendar): Prefer `apply` to `eval

2024-05-19 Thread Stefan Monnier via
branch: scratch/org
commit 30f64453a570004524de07aa352f7c631414df7c
Author: Stefan Monnier 
Commit: Stefan Monnier 

(org-*-in-calendar): Prefer `apply` to `eval

* lisp/org.el (org-funcall-in-calendar): Rename from `org-eval-in-calendar`.
Use `apply` rather than `eval`.  Use `with-selected-window` rather than
cooking our own version of it.  Update all callers.
(org-read-date): Use `with-current-buffer` to set `cursor-type` since
it's not affected by the selected window.
Move the `calendar-forward-day` call to `org-funcall-in-calendar` to
clarify the need to update `org-date-ovl`.
(org-eval-in-calendar): New backward compatibility function.

* lisp/org-keys.el (org-eval-in-calendar): Remove unused declaration.
---
 lisp/org-keys.el |  7 +++---
 lisp/org.el  | 66 +---
 2 files changed, 37 insertions(+), 36 deletions(-)

diff --git a/lisp/org-keys.el b/lisp/org-keys.el
index 50e05efa1b..fc5fd53aa8 100644
--- a/lisp/org-keys.el
+++ b/lisp/org-keys.el
@@ -89,7 +89,6 @@
 (declare-function org-emphasize "org" ( char))
 (declare-function org-end-of-line "org" ( n))
 (declare-function org-entry-put "org" (pom property value))
-(declare-function org-eval-in-calendar "org" (form  keepdate))
 (declare-function org-calendar-goto-today-or-insert-dot "org" ())
 (declare-function org-calendar-goto-today "org" ())
 (declare-function org-calendar-backward-month "org" ())
@@ -390,9 +389,9 @@ COMMANDS is a list of alternating OLDDEF NEWDEF command 
names."
 ;;; Global bindings
 
  Outline functions
-(define-key org-mode-map [menu-bar headings] 'undefined)
-(define-key org-mode-map [menu-bar hide] 'undefined)
-(define-key org-mode-map [menu-bar show] 'undefined)
+(define-key org-mode-map [menu-bar headings] #'undefined)
+(define-key org-mode-map [menu-bar hide] #'undefined)
+(define-key org-mode-map [menu-bar show] #'undefined)
 
 (define-key org-mode-map [remap outline-mark-subtree] #'org-mark-subtree)
 (define-key org-mode-map [remap outline-show-subtree] #'org-fold-show-subtree)
diff --git a/lisp/org.el b/lisp/org.el
index 4342ddd735..9604887623 100644
--- a/lisp/org.el
+++ b/lisp/org.el
@@ -14009,13 +14009,15 @@ user."
  (setq cal-frame
(window-frame (get-buffer-window calendar-buffer 'visible)))
  (select-frame cal-frame))
-   (org-eval-in-calendar '(setq cursor-type nil) t)
+   ;; FIXME: Not sure we need `with-current-buffer' but I couldn't
+;; convince myself that we're always in `calendar-buffer' after
+;; the call to `calendar'.
+   (with-current-buffer calendar-buffer (setq cursor-type nil))
(unwind-protect
-   (progn
- (calendar-forward-day (- (time-to-days org-def)
-  (calendar-absolute-from-gregorian
-   (calendar-current-date
- (org-eval-in-calendar nil t)
+   (let ((days (- (time-to-days org-def)
+  (calendar-absolute-from-gregorian
+   (calendar-current-date)
+ (org-funcall-in-calendar #'calendar-forward-day t days)
  (let* ((old-map (current-local-map))
 (map (copy-keymap calendar-mode-map))
 (minibuffer-local-map
@@ -14398,20 +14400,20 @@ user function argument order change dependent on 
argument order."
 (`european (list arg2 arg1 arg3))
 (`iso (list arg2 arg3 arg1
 
-(defun org-eval-in-calendar (form  keepdate)
-  "Eval FORM in the calendar window and return to current window.
+(defun org-funcall-in-calendar (func  keepdate  args)
+  "Call FUNC in the calendar window and return to current window.
 Unless KEEPDATE is non-nil, update `org-ans2' to the cursor date."
-  (let ((sf (selected-frame))
-   (sw (selected-window)))
-(select-window (get-buffer-window calendar-buffer t))
-(eval form t)
+  (with-selected-window (get-buffer-window calendar-buffer t)
+(apply func args)
 (when (and (not keepdate) (calendar-cursor-to-date))
   (let* ((date (calendar-cursor-to-date))
 (time (org-encode-time 0 0 0 (nth 1 date) (nth 0 date) (nth 2 
date
(setq org-ans2 (format-time-string "%Y-%m-%d" time
-(move-overlay org-date-ovl (1- (point)) (1+ (point)) (current-buffer))
-(select-window sw)
-(select-frame-set-input-focus sf)))
+(move-overlay org-date-ovl (1- (point)) (1+ (point)) (current-buffer
+
+(defun org-eval-in-calendar (form  keepdate)
+  (declare (obsolete org-funcall-in-calendar "2024"))
+  (org-funcall-in-calendar (lambda () (eval form t)) keepdate))
 
 (defun org-calendar-goto-today-or-insert-dot ()
   "Go to the cur

[elpa] scratch/org f471f5b16c 3/4: testing/org-test.el (): Remove dead code

2024-05-19 Thread Stefan Monnier via
branch: scratch/org
commit f471f5b16c4ebf2f079779e5691e38aae9a6b705
Author: Stefan Monnier 
Commit: Stefan Monnier 

testing/org-test.el (): Remove dead code

(featurep 'org) is always non-nil here since we have a (require 'org)
further up.  I suspect other `require`s nearby could be removed or
moved to toplevel.
---
 testing/org-test.el | 29 ++---
 1 file changed, 10 insertions(+), 19 deletions(-)

diff --git a/testing/org-test.el b/testing/org-test.el
index d9fe33284c..8060e1d210 100644
--- a/testing/org-test.el
+++ b/testing/org-test.el
@@ -48,25 +48,16 @@
(file-name-directory
 (or load-file-name buffer-file-name
 (org-lisp-dir (expand-file-name
-   (concat org-test-dir "../lisp"
-
-(unless (featurep 'org)
-  (setq load-path (cons org-lisp-dir load-path))
-  (require 'org)
-  (require 'org-id)
-  (require 'ox)
-  (org-babel-do-load-languages
-   'org-babel-load-languages '((shell . t) (org . t
-
-(let ((load-path (cons org-test-dir
-  (cons (expand-file-name "jump" org-test-dir)
-load-path
-  (require 'cl-lib)
-  (require 'ert)
-  (require 'ert-x)
-  (when (file-exists-p (expand-file-name "jump/jump.el" org-test-dir))
-   (require 'jump)
-   (require 'which-func)
+   (concat org-test-dir "../lisp")))
+(load-path (cons org-test-dir
+ (cons (expand-file-name "jump" org-test-dir)
+   load-path
+(require 'cl-lib)
+(require 'ert)
+(require 'ert-x)
+(when (file-exists-p (expand-file-name "jump/jump.el" org-test-dir))
+  (require 'jump)
+  (require 'which-func
 
 (defconst org-test-default-test-file-name "tests.el"
   "For each defun a separate file with tests may be defined.



[elpa] scratch/org dc62e4a1f9 4/4: testing/lisp/*.el: Fix second arg to `signal`

2024-05-19 Thread Stefan Monnier via
branch: scratch/org
commit dc62e4a1f943f7ca49b3075d9160dc19856fae7b
Author: Stefan Monnier 
Commit: Stefan Monnier 

testing/lisp/*.el: Fix second arg to `signal`

The second argument to `signal` should be a list, as explained in its
docstring.  Fix `missing-test-dependency` signals accordingly.
---
 testing/lisp/test-ob-C.el| 2 +-
 testing/lisp/test-ob-R.el| 4 ++--
 testing/lisp/test-ob-awk.el  | 2 +-
 testing/lisp/test-ob-calc.el | 2 +-
 testing/lisp/test-ob-clojure.el  | 2 +-
 testing/lisp/test-ob-eshell.el   | 2 +-
 testing/lisp/test-ob-fortran.el  | 2 +-
 testing/lisp/test-ob-haskell-ghci.el | 4 ++--
 testing/lisp/test-ob-java.el | 2 +-
 testing/lisp/test-ob-julia.el| 4 ++--
 testing/lisp/test-ob-lua.el  | 2 +-
 testing/lisp/test-ob-maxima.el   | 2 +-
 testing/lisp/test-ob-octave.el   | 2 +-
 testing/lisp/test-ob-perl.el | 2 +-
 testing/lisp/test-ob-python.el   | 2 +-
 testing/lisp/test-ob-ruby.el | 4 ++--
 testing/lisp/test-ob-scheme.el   | 6 +++---
 testing/lisp/test-ob-sed.el  | 2 +-
 testing/lisp/test-ob-shell.el| 2 +-
 testing/lisp/test-ob-sql.el  | 2 +-
 testing/lisp/test-ob-sqlite.el   | 2 +-
 testing/lisp/test-org-ctags.el   | 2 +-
 testing/lisp/test-org-tempo.el   | 2 +-
 testing/lisp/test-ox-ascii.el| 2 +-
 testing/lisp/test-ox-beamer.el   | 2 +-
 testing/lisp/test-ox-latex.el| 2 +-
 testing/lisp/test-ox-texinfo.el  | 2 +-
 27 files changed, 33 insertions(+), 33 deletions(-)

diff --git a/testing/lisp/test-ob-C.el b/testing/lisp/test-ob-C.el
index c70534a51f..5399aed122 100644
--- a/testing/lisp/test-ob-C.el
+++ b/testing/lisp/test-ob-C.el
@@ -20,7 +20,7 @@
 
 ;;; Code:
 (unless (featurep 'ob-C)
-  (signal 'missing-test-dependency "Support for C code blocks"))
+  (signal 'missing-test-dependency '("Support for C code blocks")))
 
 (ert-deftest ob-C/simple-program ()
   "Hello world program."
diff --git a/testing/lisp/test-ob-R.el b/testing/lisp/test-ob-R.el
index 9ffbf3afd9..0d291bf543 100644
--- a/testing/lisp/test-ob-R.el
+++ b/testing/lisp/test-ob-R.el
@@ -22,7 +22,7 @@
 (org-test-for-executable "R")
 (require 'ob-core)
 (unless (featurep 'ess)
-  (signal 'missing-test-dependency "ESS"))
+  (signal 'missing-test-dependency '("ESS")))
 (defvar ess-ask-for-ess-directory)
 (defvar ess-history-file)
 (defvar ess-r-post-run-hook)
@@ -32,7 +32,7 @@
 (declare-function ess-calculate-width "ext:ess-inf" (opt))
 
 (unless (featurep 'ob-R)
-  (signal 'missing-test-dependency "Support for R code blocks"))
+  (signal 'missing-test-dependency '("Support for R code blocks")))
 
 (ert-deftest test-ob-R/simple-session ()
   (let (ess-ask-for-ess-directory ess-history-file)
diff --git a/testing/lisp/test-ob-awk.el b/testing/lisp/test-ob-awk.el
index 874d2c0268..b3a2a50fce 100644
--- a/testing/lisp/test-ob-awk.el
+++ b/testing/lisp/test-ob-awk.el
@@ -21,7 +21,7 @@
 ;;; Code:
 (org-test-for-executable "awk")
 (unless (featurep 'ob-awk)
-  (signal 'missing-test-dependency "Support for Awk code blocks"))
+  (signal 'missing-test-dependency '("Support for Awk code blocks")))
 
 (ert-deftest ob-awk/input-none ()
   "Test with no input file"
diff --git a/testing/lisp/test-ob-calc.el b/testing/lisp/test-ob-calc.el
index 12f97279f6..f6e8a5a2fd 100644
--- a/testing/lisp/test-ob-calc.el
+++ b/testing/lisp/test-ob-calc.el
@@ -21,7 +21,7 @@
 (require 'ob-calc)
 
 (unless (featurep 'ob-calc)
-  (signal 'missing-test-dependency "Support for Calc code blocks"))
+  (signal 'missing-test-dependency '("Support for Calc code blocks")))
 
 (ert-deftest ob-calc/simple-program-mult ()
   "Test of simple multiplication."
diff --git a/testing/lisp/test-ob-clojure.el b/testing/lisp/test-ob-clojure.el
index 33052c98c9..4836917b3a 100644
--- a/testing/lisp/test-ob-clojure.el
+++ b/testing/lisp/test-ob-clojure.el
@@ -25,7 +25,7 @@
 ;;; Code:
 
 (unless (featurep 'ob-clojure)
-  (signal 'missing-test-dependency "Support for Clojure code blocks"))
+  (signal 'missing-test-dependency '("Support for Clojure code blocks")))
 
 ;; FIXME: The old tests where totally off.  We need to write new tests.
 
diff --git a/testing/lisp/test-ob-eshell.el b/testing/lisp/test-ob-eshell.el
index 0d704b16a3..5d0da8d991 100644
--- a/testing/lisp/test-ob-eshell.el
+++ b/testing/lisp/test-ob-eshell.el
@@ -24,7 +24,7 @@
 
 ;;; Code:
 (unless (featurep 'ob-eshell)
-  (signal 'missing-test-dependency "Support for Eshell code blocks"))
+  (signal 'missing-test-dependency '("Support for Eshell code blocks")))
 
 (ert-deftest ob-eshell/execute ()
   "Test ob-eshell execute."
diff --git a/testing/lisp/test-ob-fortran.el b/testing/lisp/test-ob-fortran.el
index 4947d142b7..20

[elpa] scratch/org 3bb14f3c70 2/4: mk/default.mk (BTEST): Remove `-l cl`

2024-05-19 Thread Stefan Monnier via
branch: scratch/org
commit 3bb14f3c7065c42b310a7d761be3a0d369cda334
Author: Stefan Monnier 
Commit: Stefan Monnier 

mk/default.mk (BTEST): Remove `-l cl`

The CL library is deprecated and it's not needed here any more.
I suspect other `-l` could be removed here since `org-test` loads
those libraries anyway.
---
 mk/default.mk | 28 ++--
 1 file changed, 14 insertions(+), 14 deletions(-)

diff --git a/mk/default.mk b/mk/default.mk
index 979a6c07e6..4690ef035b 100644
--- a/mk/default.mk
+++ b/mk/default.mk
@@ -78,20 +78,20 @@ BTEST_LOAD  = \
--eval '(add-to-list `load-path (concat default-directory "testing"))'
 BTEST_INIT  = $(BTEST_PRE) $(BTEST_LOAD) $(BTEST_POST)
 
-BTEST = $(BATCH) $(BTEST_INIT) \
- -l org-batch-test-init \
- --eval '(setq \
-   org-batch-test t \
-   org-babel-load-languages \
- (quote ($(foreach ob-lang,\
-   $(BTEST_OB_LANGUAGES) emacs-lisp shell org,\
-   $(lst-ob-lang \
-   org-test-select-re "$(BTEST_RE)" \
-   )' \
- -l org-loaddefs.el \
- -l cl -l testing/org-test.el \
- -l ert -l org -l ox -l ol \
- $(foreach req,$(BTEST_EXTRA),$(req-extra)) \
+BTEST = $(BATCH) $(BTEST_INIT) 
\
+ -l org-batch-test-init\
+ --eval '(setq \
+   org-batch-test t\
+   org-babel-load-languages\
+ (quote ($(foreach ob-lang,\
+   $(BTEST_OB_LANGUAGES) emacs-lisp shell org, \
+   $(lst-ob-lang   \
+   org-test-select-re "$(BTEST_RE)"\
+   )'  \
+ -l org-loaddefs.el\
+ -l testing/org-test.el\
+ -l ert -l org -l ox -l ol \
+ $(foreach req,$(BTEST_EXTRA),$(req-extra))\
  --eval '(org-test-run-batch-tests org-test-select-re)'
 
 # Running a plain emacs with no config and this Org mode loaded.  This


