reviews
Messages by Thread
Re: [PR] [SPARK-53228][CORE][SQL] Use Java `Map` constructors instead of `Maps.new*HashMap()` [spark]
via GitHub
Re: [PR] [SPARK-53228][CORE][SQL] Use Java `Map` constructors instead of `Maps.new*HashMap()` [spark]
via GitHub
Re: [PR] [SPARK-53228][CORE][SQL] Use Java `Map` constructors instead of `Maps.new*HashMap()` [spark]
via GitHub
Re: [PR] [SPARK-53228][CORE][SQL] Use Java `Map` constructors instead of `Maps.new*HashMap()` [spark]
via GitHub
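(Illustrative only, not taken from the PR above: a minimal Java sketch of the swap named in the SPARK-53228 title, Guava's `Maps.newHashMap()` factory versus the plain `java.util.HashMap` constructor.)

```java
import java.util.HashMap;
import java.util.Map;

public class MapConstructorExample {
    public static void main(String[] args) {
        // Before (Guava): Map<String, Integer> counts = Maps.newHashMap();
        // After (plain JDK): the constructor does the same thing with no extra dependency.
        Map<String, Integer> counts = new HashMap<>();
        counts.put("a", 1);
        System.out.println(counts);
    }
}
```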
[PR] [SPARK-53227][SQL][TESTS] Use `HashMap.equals` instead of `Maps.difference.areEqual` [spark]
via GitHub
Re: [PR] [SPARK-53227][SQL][TESTS] Use Java `HashMap.equals` instead of `Maps.difference.areEqual` [spark]
via GitHub
Re: [PR] [SPARK-53227][SQL][TESTS] Use Java `HashMap.equals` instead of `Maps.difference.areEqual` [spark]
via GitHub
Re: [PR] [SPARK-53227][SQL][TESTS] Use Java `HashMap.equals` instead of `Maps.difference.areEqual` [spark]
via GitHub
Re: [PR] [SPARK-53227][SQL][TESTS] Use Java `HashMap.equals` instead of `Maps.difference.areEqual` [spark]
via GitHub
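(Illustrative only, not from the SPARK-53227 PR: a sketch of comparing two maps with `Map.equals` rather than Guava's `Maps.difference(...).areEqual()`.)

```java
import java.util.Map;

public class MapEqualityExample {
    public static void main(String[] args) {
        Map<String, Integer> left = Map.of("a", 1, "b", 2);
        Map<String, Integer> right = Map.of("b", 2, "a", 1);
        // Before (Guava): Maps.difference(left, right).areEqual()
        // After (plain JDK): Map.equals compares the entry sets directly.
        System.out.println(left.equals(right)); // true
    }
}
```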
[PR] [SPARK-53224][BUILD] Upgrade `joda-time` to 2.14.0 [spark]
via GitHub
Re: [PR] [SPARK-53224][BUILD] Upgrade `joda-time` to 2.14.0 [spark]
via GitHub
Re: [PR] [SPARK-53224][BUILD] Upgrade `joda-time` to 2.14.0 [spark]
via GitHub
Re: [PR] [SPARK-53224][BUILD] Upgrade `joda-time` to 2.14.0 [spark]
via GitHub
Re: [PR] [SPARK-53224][BUILD] Upgrade `joda-time` to 2.14.0 [spark]
via GitHub
Re: [PR] [SPARK-53224][BUILD] Upgrade `joda-time` to 2.14.0 [spark]
via GitHub
Re: [PR] [SPARK-53224][BUILD] Upgrade `joda-time` to 2.14.0 [spark]
via GitHub
Re: [PR] [SPARK-53224][BUILD] Upgrade `joda-time` to 2.14.0 [spark]
via GitHub
[PR] [SPARK-53223][BUILD] Upgrade `jersey` to 3.0.18 [spark]
via GitHub
Re: [PR] [SPARK-53223][BUILD] Upgrade `jersey` to 3.0.18 [spark]
via GitHub
Re: [PR] [SPARK-53223][BUILD] Upgrade `jersey` to 3.0.18 [spark]
via GitHub
Re: [PR] [SPARK-53223][BUILD] Upgrade `jersey` to 3.0.18 [spark]
via GitHub
[PR] [SPARK-53225][BUILD] Upgrade `datasketches-java` to 7.0.1 and `datasketches-memory` to 4.1.0 [spark]
via GitHub
Re: [PR] [SPARK-53225][BUILD] Upgrade `datasketches-java` to 7.0.1 and `datasketches-memory` to 4.1.0 [spark]
via GitHub
Re: [PR] [SPARK-53225][BUILD] Upgrade `datasketches-java` to 7.0.1 and `datasketches-memory` to 4.1.0 [spark]
via GitHub
[PR] [SPARK-53209][YARN] Add ActiveProcessorCount JVM option to YARN executor and driver [spark]
via GitHub
Re: [PR] [SPARK-53209][YARN] Add ActiveProcessorCount JVM option to YARN executor and driver [spark]
via GitHub
Re: [PR] [SPARK-53209][YARN] Add ActiveProcessorCount JVM option to YARN executor and driver [spark]
via GitHub
Re: [PR] [SPARK-53209][YARN] Add ActiveProcessorCount JVM option to YARN executor and driver [spark]
via GitHub
Re: [PR] [SPARK-53209][YARN] Add ActiveProcessorCount JVM option to YARN executor and driver [spark]
via GitHub
Re: [PR] [SPARK-53209][YARN] Add ActiveProcessorCount JVM option to YARN executor and driver [spark]
via GitHub
Re: [PR] [SPARK-53209][YARN] Add ActiveProcessorCount JVM option to YARN executor and driver [spark]
via GitHub
Re: [PR] [SPARK-53209][YARN] Add ActiveProcessorCount JVM option to YARN executor and driver [spark]
via GitHub
Re: [PR] [SPARK-53209][YARN] Add ActiveProcessorCount JVM option to Spark driver and executor in YARN mode [spark]
via GitHub
Re: [PR] [SPARK-53209][YARN] Add ActiveProcessorCount JVM option to Spark driver and executor in YARN mode [spark]
via GitHub
Re: [PR] [SPARK-53209][YARN] Add ActiveProcessorCount JVM option to YARN executor and AM [spark]
via GitHub
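(Not from the SPARK-53209 patch itself: `-XX:ActiveProcessorCount` is a standard HotSpot flag, and a small sketch of its observable effect, assuming the flag is passed at JVM launch, e.g. through `spark.executor.extraJavaOptions` in YARN mode.)

```java
public class ActiveProcessorCountExample {
    public static void main(String[] args) {
        // When the JVM is started with -XX:ActiveProcessorCount=2, this reports 2
        // regardless of how many cores the host actually has.
        System.out.println(Runtime.getRuntime().availableProcessors());
    }
}
```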
[PR] [SPARK-53222][BUILD] Upgrade `commons-compress` to 1.28.0 [spark]
via GitHub
Re: [PR] [SPARK-53222][BUILD] Upgrade `commons-compress` to 1.28.0 [spark]
via GitHub
Re: [PR] [SPARK-53222][BUILD] Upgrade `commons-compress` to 1.28.0 [spark]
via GitHub
Re: [PR] [SPARK-53222][BUILD] Upgrade `commons-compress` to 1.28.0 [spark]
via GitHub
[PR] [SPARK-53221][BUILD] Upgrade `commons-codec` to 1.19.0 [spark]
via GitHub
Re: [PR] [SPARK-53221][BUILD] Upgrade `commons-codec` to 1.19.0 [spark]
via GitHub
Re: [PR] [SPARK-53221][BUILD] Upgrade `commons-codec` to 1.19.0 [spark]
via GitHub
Re: [PR] [SPARK-53221][BUILD] Upgrade `commons-codec` to 1.19.0 [spark]
via GitHub
Re: [PR] [SPARK-53221][BUILD] Upgrade `commons-codec` to 1.19.0 [spark]
via GitHub
Re: [PR] [SPARK-53221][BUILD] Upgrade `commons-codec` to 1.19.0 [spark]
via GitHub
[PR] [SPARK-53220][BUILD] Upgrade `dev.ludovic.netlib` to 3.0.4 [spark]
via GitHub
Re: [PR] [SPARK-53220][BUILD] Upgrade `dev.ludovic.netlib` to 3.0.4 [spark]
via GitHub
Re: [PR] [SPARK-53220][BUILD] Upgrade `dev.ludovic.netlib` to 3.0.4 [spark]
via GitHub
Re: [PR] [SPARK-53220][BUILD] Upgrade `dev.ludovic.netlib` to 3.0.4 [spark]
via GitHub
[PR] [SPARK-53219][BUILD] Upgrade `Dropwizard` metrics to 4.2.33 [spark]
via GitHub
Re: [PR] [SPARK-53219][BUILD] Upgrade `Dropwizard` metrics to 4.2.33 [spark]
via GitHub
Re: [PR] [SPARK-53219][BUILD] Upgrade `Dropwizard` metrics to 4.2.33 [spark]
via GitHub
Re: [PR] [SPARK-53219][BUILD] Upgrade `Dropwizard` metrics to 4.2.33 [spark]
via GitHub
Re: [PR] [SPARK-53219][BUILD] Upgrade `Dropwizard` metrics to 4.2.33 [spark]
via GitHub
[PR] [SPARK-53218][BUILD] Upgrade `bouncycastle` to 1.81 [spark]
via GitHub
Re: [PR] [SPARK-53218][BUILD] Upgrade `bouncycastle` to 1.81 [spark]
via GitHub
Re: [PR] [SPARK-53218][BUILD] Upgrade `bouncycastle` to 1.81 [spark]
via GitHub
Re: [PR] [SPARK-53218][BUILD] Upgrade `bouncycastle` to 1.81 [spark]
via GitHub
Re: [PR] [SPARK-53218][BUILD] Upgrade `bouncycastle` to 1.81 [spark]
via GitHub
[PR] [SPARK-53217][CORE][DSTREAM] Use Java `Set.of` instead of `Sets.newHashSet` [spark]
via GitHub
Re: [PR] [SPARK-53217][CORE][DSTREAM] Use Java `Set.of` instead of `Sets.newHashSet` [spark]
via GitHub
Re: [PR] [SPARK-53217][CORE][DSTREAM] Use Java `Set.of` instead of `Sets.newHashSet` [spark]
via GitHub
Re: [PR] [SPARK-53217][CORE][DSTREAM] Use Java `Set.of` instead of `Sets.newHashSet` [spark]
via GitHub
Re: [PR] [SPARK-53217][CORE][DSTREAM] Use Java `Set.of` instead of `Sets.newHashSet` [spark]
via GitHub
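(Illustrative only, not from the SPARK-53217 PR: the replacement named in the title, `Set.of` instead of Guava's `Sets.newHashSet`. Note `Set.of` returns an immutable set and rejects nulls and duplicates, so it only fits call sites that never mutate the set.)

```java
import java.util.Set;

public class SetOfExample {
    public static void main(String[] args) {
        // Before (Guava): Set<String> roles = Sets.newHashSet("driver", "executor");
        // After (plain JDK): Set.of creates an immutable set.
        Set<String> roles = Set.of("driver", "executor");
        System.out.println(roles.contains("driver"));
    }
}
```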
[PR] [SPARK-53216][CORE] Move `is*(Blank|Empty)` from `object SparkStringUtils` to `trait SparkStringUtils` [spark]
via GitHub
Re: [PR] [SPARK-53216][CORE] Move `is*(Blank|Empty)` from `object SparkStringUtils` to `trait SparkStringUtils` [spark]
via GitHub
Re: [PR] [SPARK-53216][CORE] Move `is*(Blank|Empty)` from `object SparkStringUtils` to `trait SparkStringUtils` [spark]
via GitHub
Re: [PR] [SPARK-53216][CORE] Move `is*(Blank|Empty)` from `object SparkStringUtils` to `trait SparkStringUtils` [spark]
via GitHub
Re: [PR] [SPARK-53216][CORE] Move `is*(Blank|Empty)` from `object SparkStringUtils` to `trait SparkStringUtils` [spark]
via GitHub
Re: [PR] [SPARK-53216][CORE] Move `is*(Blank|Empty)` from `object SparkStringUtils` to `trait SparkStringUtils` [spark]
via GitHub
Re: [PR] [SPARK-53216][CORE] Move `is*(Blank|Empty)` from `object SparkStringUtils` to `trait SparkStringUtils` [spark]
via GitHub
[PR] [SPARK-53215][CORE][TESTS] Use `JavaUtils.listFiles` in `CleanupNonShuffleServiceServedFilesSuite` [spark]
via GitHub
Re: [PR] [SPARK-53215][CORE][TESTS] Use `JavaUtils.listFiles` in `CleanupNonShuffleServiceServedFilesSuite` [spark]
via GitHub
Re: [PR] [SPARK-53215][CORE][TESTS] Use `JavaUtils.listFiles` in `CleanupNonShuffleServiceServedFilesSuite` [spark]
via GitHub
Re: [PR] [SPARK-53215][CORE][TESTS] Use `JavaUtils.listFiles` in `CleanupNonShuffleServiceServedFilesSuite` [spark]
via GitHub
Re: [PR] [SPARK-53215][CORE][TESTS] Use `JavaUtils.listFiles` in `CleanupNonShuffleServiceServedFilesSuite` [spark]
via GitHub
Re: [PR] [SPARK-53215][CORE][TESTS] Use `JavaUtils.listFiles` in `CleanupNonShuffleServiceServedFilesSuite` [spark]
via GitHub
[PR] [SPARK-53214][CORE][SQL][K8S] Use Java `HexFormat` instead of `Hex.encodeHexString` [spark]
via GitHub
Re: [PR] [SPARK-53214][CORE][SQL][K8S] Use Java `HexFormat` instead of `Hex.encodeHexString` [spark]
via GitHub
Re: [PR] [SPARK-53214][CORE][SQL][K8S] Use Java `HexFormat` instead of `Hex.encodeHexString` [spark]
via GitHub
Re: [PR] [SPARK-53214][CORE][SQL][K8S] Use Java `HexFormat` instead of `Hex.encodeHexString` [spark]
via GitHub
Re: [PR] [SPARK-53214][CORE][SQL][K8S] Use Java `HexFormat` instead of `Hex.encodeHexString` [spark]
via GitHub
Re: [PR] [SPARK-53214][CORE][SQL][K8S] Use Java `HexFormat` instead of `Hex.encodeHexString` [spark]
via GitHub
Re: [PR] [SPARK-53214][CORE][SQL][K8S] Use Java `HexFormat` instead of `Hex.encodeHexString` [spark]
via GitHub
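(Illustrative only, not from the SPARK-53214 PR: `java.util.HexFormat` (JDK 17+) in place of commons-codec's `Hex.encodeHexString`; both produce lowercase hex.)

```java
import java.util.HexFormat;

public class HexFormatExample {
    public static void main(String[] args) {
        byte[] bytes = {0x0a, 0x1b, (byte) 0xff};
        // Before (commons-codec): Hex.encodeHexString(bytes)
        // After (JDK 17+): HexFormat produces the same lowercase string.
        System.out.println(HexFormat.of().formatHex(bytes)); // "0a1bff"
    }
}
```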
[PR] [SPARK-53213][CORE][SQL][K8S] Use Java `Base64` instead of `Base64.(en|decodeBase64)*` [spark]
via GitHub
Re: [PR] [SPARK-53213][CORE][SQL][K8S] Use Java `Base64` instead of `Base64.(decodeBase64|encodeBase64String)` [spark]
via GitHub
Re: [PR] [SPARK-53213][CORE][SQL][K8S] Use Java `Base64` instead of `Base64.(decodeBase64|encodeBase64String)` [spark]
via GitHub
Re: [PR] [SPARK-53213][CORE][SQL][K8S] Use Java `Base64` instead of `Base64.(decodeBase64|encodeBase64String)` [spark]
via GitHub
Re: [PR] [SPARK-53213][CORE][SQL][K8S] Use Java `Base64` instead of `Base64.(decodeBase64|encodeBase64String)` [spark]
via GitHub
Re: [PR] [SPARK-53213][CORE][SQL][K8S] Use Java `Base64` instead of `Base64.(decodeBase64|encodeBase64String)` [spark]
via GitHub
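(Illustrative only, not from the SPARK-53213 PR: `java.util.Base64` in place of commons-codec's `Base64.encodeBase64String` / `Base64.decodeBase64`.)

```java
import java.nio.charset.StandardCharsets;
import java.util.Base64;

public class Base64Example {
    public static void main(String[] args) {
        byte[] payload = "spark".getBytes(StandardCharsets.UTF_8);
        // Before (commons-codec): Base64.encodeBase64String(payload) / Base64.decodeBase64(encoded)
        // After (plain JDK): java.util.Base64 encoder and decoder.
        String encoded = Base64.getEncoder().encodeToString(payload);
        byte[] decoded = Base64.getDecoder().decode(encoded);
        System.out.println(encoded + " -> " + new String(decoded, StandardCharsets.UTF_8));
    }
}
```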
[PR] [SPARK-53212] improve error handling for scalar Pandas UDFs [spark]
via GitHub
Re: [PR] [SPARK-53212] improve error handling for scalar Pandas UDFs [spark]
via GitHub
[PR] [SPARK-53211][TESTS] Ban `com.google.common.io.Files` [spark]
via GitHub
Re: [PR] [SPARK-53211][TESTS] Ban `com.google.common.io.Files` [spark]
via GitHub
Re: [PR] [SPARK-53211][TESTS] Ban `com.google.common.io.Files` [spark]
via GitHub
Re: [PR] [SPARK-53211][TESTS] Ban `com.google.common.io.Files` [spark]
via GitHub
Re: [PR] [SPARK-53211][TESTS] Ban `com.google.common.io.Files` [spark]
via GitHub
[PR] [SPARK-53210][CORE][SQL][DSTREAM][YARN] Use Java `Files.write` instead of `com.google.common.io.Files.write` [spark]
via GitHub
Re: [PR] [SPARK-53210][CORE][SQL][DSTREAM][YARN] Use Java `Files.write` instead of `com.google.common.io.Files.write` [spark]
via GitHub
Re: [PR] [SPARK-53210][CORE][SQL][DSTREAM][YARN] Use Java `Files.write` instead of `com.google.common.io.Files.write` [spark]
via GitHub
Re: [PR] [SPARK-53210][CORE][SQL][DSTREAM][YARN] Use Java `Files.write(String)?` instead of `com.google.common.io.Files.write` [spark]
via GitHub
Re: [PR] [SPARK-53210][CORE][SQL][DSTREAM][YARN] Use Java `Files.write(String)?` instead of `com.google.common.io.Files.write` [spark]
via GitHub
Re: [PR] [SPARK-53210][CORE][SQL][DSTREAM][YARN] Use Java `Files.write(String)?` instead of `com.google.common.io.Files.write` [spark]
via GitHub
Re: [PR] [SPARK-53210][CORE][SQL][DSTREAM][YARN] Use Java `Files.write(String)?` instead of `com.google.common.io.Files.write` [spark]
via GitHub
Re: [PR] [SPARK-53210][CORE][SQL][DSTREAM][YARN] Use Java `Files.write(String)?` instead of `com.google.common.io.Files.write` [spark]
via GitHub
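(Illustrative only, not from the SPARK-53210 PR: writing a string to a file with `java.nio.file.Files` instead of `com.google.common.io.Files.write`.)

```java
import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;

public class FilesWriteExample {
    public static void main(String[] args) throws IOException {
        Path target = Files.createTempFile("files-write", ".txt");
        // Before (Guava): com.google.common.io.Files.write("hello", target.toFile(), UTF_8)
        // After (JDK 11+): Files.writeString from java.nio.file.
        Files.writeString(target, "hello", StandardCharsets.UTF_8);
        System.out.println(Files.readString(target, StandardCharsets.UTF_8));
    }
}
```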
[PR] [SPARK-53208][SQL] Use `Hex.unhex` instead of `o.a.commons.codec.binary.Hex.decodeHex` [spark]
via GitHub
Re: [PR] [SPARK-53208][SQL][TESTS] Use `Hex.unhex` instead of `o.a.commons.codec.binary.Hex.decodeHex` [spark]
via GitHub
Re: [PR] [SPARK-53208][SQL][TESTS] Use `Hex.unhex` instead of `o.a.commons.codec.binary.Hex.decodeHex` [spark]
via GitHub
Re: [PR] [SPARK-53208][SQL][TESTS] Use `Hex.unhex` instead of `o.a.commons.codec.binary.Hex.decodeHex` [spark]
via GitHub
Re: [PR] [SPARK-53208][SQL][TESTS] Use `Hex.unhex` instead of `o.a.commons.codec.binary.Hex.decodeHex` [spark]
via GitHub
Re: [PR] [SPARK-53208][SQL][TESTS] Use `Hex.unhex` instead of `o.a.commons.codec.binary.Hex.decodeHex` [spark]
via GitHub
[PR] [SPARK-53206][CORE] Use `SparkFileUtils.move` instead of `com.google.common.io.Files.move` [spark]
via GitHub
Re: [PR] [SPARK-53206][CORE] Use `SparkFileUtils.move` instead of `com.google.common.io.Files.move` [spark]
via GitHub
Re: [PR] [SPARK-53206][CORE] Use `SparkFileUtils.move` instead of `com.google.common.io.Files.move` [spark]
via GitHub
Re: [PR] [SPARK-53206][CORE] Use `SparkFileUtils.move` instead of `com.google.common.io.Files.move` [spark]
via GitHub
Re: [PR] [SPARK-53206][CORE] Use `SparkFileUtils.move` instead of `com.google.common.io.Files.move` [spark]
via GitHub
Re: [PR] [SPARK-53206][CORE] Use `SparkFileUtils.move` instead of `com.google.common.io.Files.move` [spark]
via GitHub
[PR] [SPARK-53205][CORE][SQL] Support `createParentDirs` in `SparkFileUtils` [spark]
via GitHub
Re: [PR] [SPARK-53205][CORE][SQL] Support `createParentDirs` in `SparkFileUtils` [spark]
via GitHub
Re: [PR] [SPARK-53205][CORE][SQL] Support `createParentDirs` in `SparkFileUtils` [spark]
via GitHub
Re: [PR] [SPARK-53205][CORE][SQL] Support `createParentDirs` in `SparkFileUtils` [spark]
via GitHub
Re: [PR] [SPARK-53205][CORE][SQL] Support `createParentDirs` in `SparkFileUtils` [spark]
via GitHub
Re: [PR] [SPARK-53205][CORE][SQL] Support `createParentDirs` in `SparkFileUtils` [spark]
via GitHub
Re: [PR] [SPARK-53205][CORE][SQL] Support `createParentDirs` in `SparkFileUtils` [spark]
via GitHub
Re: [PR] [SPARK-53205][CORE][SQL] Support `createParentDirs` in `SparkFileUtils` [spark]
via GitHub
[PR] [SPARK-53202][SQL][TESTS] Use `SparkFileUtils.touch` instead of `Files.touch` [spark]
via GitHub
Re: [PR] [SPARK-53202][SQL][TESTS] Use `SparkFileUtils.touch` instead of `Files.touch` [spark]
via GitHub
Re: [PR] [SPARK-53202][SQL][TESTS] Use `SparkFileUtils.touch` instead of `Files.touch` [spark]
via GitHub
Re: [PR] [SPARK-53202][SQL][TESTS] Use `SparkFileUtils.touch` instead of `Files.touch` [spark]
via GitHub
Re: [PR] [SPARK-53202][SQL][TESTS] Use `SparkFileUtils.touch` instead of `Files.touch` [spark]
via GitHub
Re: [PR] [SPARK-53202][SQL][TESTS] Use `SparkFileUtils.touch` instead of `Files.touch` [spark]
via GitHub
[PR] [SPARK-53201][CORE] Use `SparkFileUtils.contentEquals` instead of `Files.equal` [spark]
via GitHub
Re: [PR] [SPARK-53201][CORE] Use `SparkFileUtils.contentEquals` instead of `Files.equal` [spark]
via GitHub
Re: [PR] [SPARK-53201][CORE] Use `SparkFileUtils.contentEquals` instead of `Files.equal` [spark]
via GitHub
Re: [PR] [SPARK-53201][CORE] Use `SparkFileUtils.contentEquals` instead of `Files.equal` [spark]
via GitHub
Re: [PR] [SPARK-53201][CORE] Use `SparkFileUtils.contentEquals` instead of `Files.equal` [spark]
via GitHub
Re: [PR] [SPARK-53201][CORE] Use `SparkFileUtils.contentEquals` instead of `Files.equal` [spark]
via GitHub
[PR] [SPARK-53198][CORE] Support terminating driver JVM after SparkContext is stopped [spark]
via GitHub
Re: [PR] [SPARK-53198][CORE] Support terminating driver JVM after SparkContext is stopped [spark]
via GitHub
Re: [PR] [SPARK-53198][CORE] Support terminating driver JVM after SparkContext is stopped [spark]
via GitHub
Re: [PR] [SPARK-53198][CORE] Support terminating driver JVM after SparkContext is stopped [spark]
via GitHub
Re: [PR] [SPARK-53198][CORE] Support terminating driver JVM after SparkContext is stopped [spark]
via GitHub
Re: [PR] [SPARK-53198][CORE] Support terminating driver JVM after SparkContext is stopped [spark]
via GitHub
Re: [PR] [SPARK-53198][CORE] Support terminating driver JVM after SparkContext is stopped [spark]
via GitHub
[PR] [SPARK-53200][CORE] Use Java `Files.newInputStream` instead of `Files.asByteSource().openStream()` [spark]
via GitHub
Re: [PR] [SPARK-53200][CORE] Use Java `Files.newInputStream` instead of `Files.asByteSource().openStream()` [spark]
via GitHub
Re: [PR] [SPARK-53200][CORE] Use Java `Files.newInputStream` instead of `Files.asByteSource().openStream()` [spark]
via GitHub
Re: [PR] [SPARK-53200][CORE] Use Java `Files.newInputStream` instead of `Files.asByteSource().openStream()` [spark]
via GitHub
Re: [PR] [SPARK-53200][CORE] Use Java `Files.newInputStream` instead of `Files.asByteSource().openStream()` [spark]
via GitHub
Re: [PR] [SPARK-53200][CORE] Use Java `Files.newInputStream` instead of `Files.asByteSource().openStream()` [spark]
via GitHub
Re: [PR] [SPARK-53200][CORE] Use Java `Files.newInputStream` instead of `Files.asByteSource().openStream()` [spark]
via GitHub
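(Illustrative only, not from the SPARK-53200 PR: opening a file stream with `Files.newInputStream` instead of Guava's `Files.asByteSource(...).openStream()`.)

```java
import java.io.IOException;
import java.io.InputStream;
import java.nio.file.Files;
import java.nio.file.Path;

public class NewInputStreamExample {
    public static void main(String[] args) throws IOException {
        Path file = Files.createTempFile("new-input-stream", ".bin");
        Files.write(file, new byte[] {1, 2, 3});
        // Before (Guava): com.google.common.io.Files.asByteSource(file.toFile()).openStream()
        // After (plain JDK): Files.newInputStream opens the stream directly.
        try (InputStream in = Files.newInputStream(file)) {
            System.out.println(in.readAllBytes().length); // 3
        }
    }
}
```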
[PR] [SPARK-53199][SQL][TESTS] Use Java `Files.copy` instead of `com.google.common.io.Files.copy` [spark]
via GitHub
Re: [PR] [SPARK-53199][SQL][TESTS] Use Java `Files.copy` instead of `com.google.common.io.Files.copy` [spark]
via GitHub
Re: [PR] [SPARK-53199][SQL][TESTS] Use Java `Files.copy` instead of `com.google.common.io.Files.copy` [spark]
via GitHub
Re: [PR] [SPARK-53199][SQL][TESTS] Use Java `Files.copy` instead of `com.google.common.io.Files.copy` [spark]
via GitHub
Re: [PR] [SPARK-53199][SQL][TESTS] Use Java `Files.copy` instead of `com.google.common.io.Files.copy` [spark]
via GitHub
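(Illustrative only, not from the SPARK-53199 PR: a file-to-file copy with `java.nio.file.Files.copy` instead of Guava's `Files.copy(File, File)`.)

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardCopyOption;

public class FilesCopyExample {
    public static void main(String[] args) throws IOException {
        Path src = Files.createTempFile("copy-src", ".txt");
        Files.writeString(src, "data");
        Path dst = Files.createTempFile("copy-dst", ".txt");
        // Before (Guava): com.google.common.io.Files.copy(src.toFile(), dst.toFile())
        // After (plain JDK): Files.copy; REPLACE_EXISTING since the target already exists.
        Files.copy(src, dst, StandardCopyOption.REPLACE_EXISTING);
        System.out.println(Files.readString(dst));
    }
}
```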
[PR] [SPARK-52976][PYTHON] Fix Python UDF not accepting collated string as input param/return type [spark]
via GitHub
Re: [PR] [SPARK-52976][PYTHON] Fix Python UDF not accepting collated string as input param/return type [spark]
via GitHub
Re: [PR] [SPARK-52976][PYTHON] Fix Python UDF not accepting collated string as input param/return type [spark]
via GitHub
[PR] [SPARK-53197][CORE][SQL] Use `java.util.Objects#requireNonNull` instead of `com.google.common.base.Preconditions#checkNotNull` [spark]
via GitHub
Re: [PR] [SPARK-53197][CORE][SQL] Use `java.util.Objects#requireNonNull` instead of `com.google.common.base.Preconditions#checkNotNull` [spark]
via GitHub
Re: [PR] [SPARK-53197][CORE][SQL] Use `java.util.Objects#requireNonNull` instead of `com.google.common.base.Preconditions#checkNotNull` [spark]
via GitHub
Re: [PR] [SPARK-53197][CORE][SQL] Use `java.util.Objects#requireNonNull` instead of `com.google.common.base.Preconditions#checkNotNull` [spark]
via GitHub
Re: [PR] [SPARK-53197][CORE][SQL] Use `java.util.Objects#requireNonNull` instead of `com.google.common.base.Preconditions#checkNotNull` [spark]
via GitHub
Re: [PR] [SPARK-53197][CORE][SQL] Use `java.util.Objects#requireNonNull` instead of `com.google.common.base.Preconditions#checkNotNull` [spark]
via GitHub
Re: [PR] [SPARK-53197][CORE][SQL] Use `java.util.Objects#requireNonNull` instead of `com.google.common.base.Preconditions#checkNotNull` [spark]
via GitHub
Re: [PR] [SPARK-53197][CORE][SQL] Use `java.util.Objects#requireNonNull` instead of `com.google.common.base.Preconditions#checkNotNull` [spark]
via GitHub
Re: [PR] [SPARK-53197][CORE][SQL] Use `java.util.Objects#requireNonNull` instead of `com.google.common.base.Preconditions#checkNotNull` [spark]
via GitHub
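(Illustrative only, not from the SPARK-53197 PR: the null check named in the title, `Objects.requireNonNull` instead of Guava's `Preconditions.checkNotNull`.)

```java
import java.util.Objects;

public class RequireNonNullExample {
    static String normalize(String name) {
        // Before (Guava): Preconditions.checkNotNull(name, "name must not be null")
        // After (plain JDK): Objects.requireNonNull throws NullPointerException with the same message.
        return Objects.requireNonNull(name, "name must not be null").trim();
    }

    public static void main(String[] args) {
        System.out.println(normalize("  spark  "));
    }
}
```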
[PR] [SPARK-53196][CORE] Use Java `OutputStream.nullOutputStream` instead of `ByteStreams.nullOutputStream` [spark]
via GitHub
Re: [PR] [SPARK-53196][CORE] Use Java `OutputStream.nullOutputStream` instead of `ByteStreams.nullOutputStream` [spark]
via GitHub
Re: [PR] [SPARK-53196][CORE] Use Java `OutputStream.nullOutputStream` instead of `ByteStreams.nullOutputStream` [spark]
via GitHub
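(Illustrative only, not from the SPARK-53196 PR: a discarding sink from `OutputStream.nullOutputStream` (JDK 11+) instead of Guava's `ByteStreams.nullOutputStream`.)

```java
import java.io.IOException;
import java.io.OutputStream;

public class NullOutputStreamExample {
    public static void main(String[] args) throws IOException {
        // Before (Guava): ByteStreams.nullOutputStream()
        // After (JDK 11+): OutputStream.nullOutputStream() discards everything written to it.
        try (OutputStream sink = OutputStream.nullOutputStream()) {
            sink.write(new byte[1024]);
        }
        System.out.println("wrote to a no-op sink");
    }
}
```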
[PR] [SPARK-53195][CORE] Use Java `InputStream.readNBytes` instead of `ByteStreams.read` [spark]
via GitHub
Re: [PR] [SPARK-53195][CORE] Use Java `InputStream.readNBytes` instead of `ByteStreams.read` [spark]
via GitHub
Re: [PR] [SPARK-53195][CORE] Use Java `InputStream.readNBytes` instead of `ByteStreams.read` [spark]
via GitHub
Re: [PR] [SPARK-53195][CORE] Use Java `InputStream.readNBytes` instead of `ByteStreams.read` [spark]
via GitHub
Re: [PR] [SPARK-53195][CORE] Use Java `InputStream.readNBytes` instead of `ByteStreams.read` [spark]
via GitHub
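(Illustrative only, not from the SPARK-53195 PR: filling a buffer with `InputStream.readNBytes` (JDK 9+) instead of Guava's `ByteStreams.read`; both loop until the requested length or EOF.)

```java
import java.io.ByteArrayInputStream;
import java.io.IOException;
import java.io.InputStream;

public class ReadNBytesExample {
    public static void main(String[] args) throws IOException {
        InputStream in = new ByteArrayInputStream(new byte[] {1, 2, 3, 4, 5});
        byte[] buf = new byte[4];
        // Before (Guava): ByteStreams.read(in, buf, 0, buf.length)
        // After (JDK 9+): readNBytes reads until the buffer is full or EOF.
        int read = in.readNBytes(buf, 0, buf.length);
        System.out.println(read); // 4
    }
}
```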
[PR] [SPARK-53192][CONNECT] Always cache a DataSource in the Spark Connect Plan Cache [spark]
via GitHub
Re: [PR] [SPARK-53192][CONNECT] Always cache a DataSource in the Spark Connect Plan Cache [spark]
via GitHub
Re: [PR] [SPARK-53192][CONNECT] Always cache a DataSource in the Spark Connect Plan Cache [spark]
via GitHub
[PR] [SPARK-53194][INFRA] Set -XX:ErrorFile to build/target directory for tests [spark]
via GitHub
Re: [PR] [SPARK-53194][INFRA] Set -XX:ErrorFile to build/target directory for tests [spark]
via GitHub
Re: [PR] [SPARK-53194][INFRA] Set -XX:ErrorFile to build/target directory for tests [spark]
via GitHub
[PR] [SPARK-53193][DOCS] Add advanced JVM optimization parameters to tuning guide [spark]
via GitHub
Re: [PR] [SPARK-53193][DOCS] Add advanced JVM optimization parameters to tuning guide [spark]
via GitHub
Re: [PR] [SPARK-53193][DOCS] Add advanced JVM optimization parameters to tuning guide [spark]
via GitHub
[PR] [SPARK-53191][CORE][SQL][MLLIB][YARN] Use Java `InputStream.readAllBytes` instead of `ByteStreams.toByteArray` [spark]
via GitHub
Re: [PR] [SPARK-53191][CORE][SQL][MLLIB][YARN] Use Java `InputStream.readAllBytes` instead of `ByteStreams.toByteArray` [spark]
via GitHub
Re: [PR] [SPARK-53191][CORE][SQL][MLLIB][YARN] Use Java `InputStream.readAllBytes` instead of `ByteStreams.toByteArray` [spark]
via GitHub
Re: [PR] [SPARK-53191][CORE][SQL][MLLIB][YARN] Use Java `InputStream.readAllBytes` instead of `ByteStreams.toByteArray` [spark]
via GitHub
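(Illustrative only, not from the SPARK-53191 PR: draining a stream with `InputStream.readAllBytes` (JDK 9+) instead of Guava's `ByteStreams.toByteArray`.)

```java
import java.io.ByteArrayInputStream;
import java.io.IOException;
import java.io.InputStream;

public class ReadAllBytesExample {
    public static void main(String[] args) throws IOException {
        InputStream in = new ByteArrayInputStream("payload".getBytes());
        // Before (Guava): ByteStreams.toByteArray(in)
        // After (JDK 9+): readAllBytes drains the stream into a byte array.
        byte[] all = in.readAllBytes();
        System.out.println(all.length); // 7
    }
}
```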
[PR] [SPARK-53190][CORE] Use Java `InputStream.transferTo` instead of `ByteStreams.copy` [spark]
via GitHub
Re: [PR] [SPARK-53190][CORE] Use Java `InputStream.transferTo` instead of `ByteStreams.copy` [spark]
via GitHub
Re: [PR] [SPARK-53190][CORE] Use Java `InputStream.transferTo` instead of `ByteStreams.copy` [spark]
via GitHub
Re: [PR] [SPARK-53190][CORE] Use Java `InputStream.transferTo` instead of `ByteStreams.copy` [spark]
via GitHub
Re: [PR] [SPARK-53190][CORE] Use Java `InputStream.transferTo` instead of `ByteStreams.copy` [spark]
via GitHub
Re: [PR] [SPARK-53190][CORE] Use Java `InputStream.transferTo` instead of `ByteStreams.copy` [spark]
via GitHub
Re: [PR] [SPARK-53190][CORE] Use Java `InputStream.transferTo` instead of `ByteStreams.copy` [spark]
via GitHub
Re: [PR] [SPARK-53190][CORE] Use Java `InputStream.transferTo` instead of `ByteStreams.copy` [spark]
via GitHub
Re: [PR] [SPARK-53190][CORE] Use Java `InputStream.transferTo` instead of `ByteStreams.copy` [spark]
via GitHub
Re: [PR] [SPARK-53190][CORE] Use Java `InputStream.transferTo` instead of `ByteStreams.copy` [spark]
via GitHub
Re: [PR] [SPARK-53190][CORE] Use Java `InputStream.transferTo` instead of `ByteStreams.copy` [spark]
via GitHub
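(Illustrative only, not from the SPARK-53190 PR: stream-to-stream copy with `InputStream.transferTo` (JDK 9+) instead of Guava's `ByteStreams.copy`.)

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;

public class TransferToExample {
    public static void main(String[] args) throws IOException {
        ByteArrayInputStream in = new ByteArrayInputStream("copy me".getBytes());
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        // Before (Guava): ByteStreams.copy(in, out)
        // After (JDK 9+): transferTo returns the number of bytes copied.
        long copied = in.transferTo(out);
        System.out.println(copied + " bytes -> " + out);
    }
}
```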