merrily01 commented on code in PR #2105:
URL: https://github.com/apache/auron/pull/2105#discussion_r2958147027


##########
auron-core/src/main/java/org/apache/auron/configuration/AuronConfiguration.java:
##########
@@ -39,6 +39,31 @@ public abstract class AuronConfiguration {
             .withDescription("Log level for native execution.")
             .withDefaultValue("info");
 
+    public static final ConfigOption<Integer> TOKIO_WORKER_THREADS_PER_CPU = new ConfigOption<>(Integer.class)
+            .withKey("auron.tokio.worker.threads.per.cpu")
+            .withCategory("Runtime Configuration")
+            .withDescription(
+                    "Number of Tokio worker threads to create per CPU core (spark.task.cpus). Set to 0 for automatic detection "
+                            + "based on available CPU cores. This setting controls the thread pool size for Tokio-based asynchronous operations.")
+            .withDefaultValue(0);
+
+    public static final ConfigOption<Integer> SUGGESTED_BATCH_MEM_SIZE = new ConfigOption<>(Integer.class)
+            .withKey("auron.suggested.batch.memSize")
+            .withCategory("Runtime Configuration")
+            .withDescription(
+                    "Suggested memory size in bytes for record batches. This setting controls the target memory allocation "
+                            + "for individual data batches to optimize memory usage and processing efficiency. Default is 8MB (8,388,608 bytes).")
+            .withDefaultValue(8388608);
+
+    public static final ConfigOption<Integer> TASK_CPUS = new ConfigOption<>(Integer.class)
+            .withKey("task.cpus")
+            .withCategory("Runtime Configuration")
+            .withDescription(
+                    "Number of CPU cores allocated per Spark task. This setting determines the parallelism level "
+                            + "for individual tasks and affects resource allocation and task scheduling. "
+                            + "Defaults to spark.task.cpus.")

Review Comment:
   The description says "Defaults to spark.task.cpus", but the implementation uses withDefaultValue(1).
   
   Consider updating the description to match the actual default?
   



##########
auron-core/src/main/java/org/apache/auron/configuration/AuronConfiguration.java:
##########
@@ -39,6 +39,31 @@ public abstract class AuronConfiguration {
             .withDescription("Log level for native execution.")
             .withDefaultValue("info");
 
+    public static final ConfigOption<Integer> TOKIO_WORKER_THREADS_PER_CPU = new ConfigOption<>(Integer.class)
+            .withKey("auron.tokio.worker.threads.per.cpu")
+            .withCategory("Runtime Configuration")
+            .withDescription(
+                    "Number of Tokio worker threads to create per CPU core (spark.task.cpus). Set to 0 for automatic detection "
+                            + "based on available CPU cores. This setting controls the thread pool size for Tokio-based asynchronous operations.")
+            .withDefaultValue(0);
+
+    public static final ConfigOption<Integer> SUGGESTED_BATCH_MEM_SIZE = new ConfigOption<>(Integer.class)
+            .withKey("auron.suggested.batch.memSize")
+            .withCategory("Runtime Configuration")
+            .withDescription(
+                    "Suggested memory size in bytes for record batches. This setting controls the target memory allocation "
+                            + "for individual data batches to optimize memory usage and processing efficiency. Default is 8MB (8,388,608 bytes).")
+            .withDefaultValue(8388608);
+
+    public static final ConfigOption<Integer> TASK_CPUS = new ConfigOption<>(Integer.class)
+            .withKey("task.cpus")
+            .withCategory("Runtime Configuration")
+            .withDescription(
+                    "Number of CPU cores allocated per Spark task. This setting determines the parallelism level "
+                            + "for individual tasks and affects resource allocation and task scheduling. "
+                            + "Defaults to spark.task.cpus.")
+            .withDefaultValue(1);

Review Comment:
   Previously this used withDynamicDefaultValue to read from `spark.task.cpus`; now it uses a static default of 1. Can you confirm that `spark.task.cpus` is still correctly passed to the Native layer when it is set?
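   
   To make the behavioral difference concrete, here is a minimal, self-contained sketch using a mock `ConfigOption`-like class (not the real Auron class; `resolve` and the `Supplier`-based `withDynamicDefaultValue` signature are hypothetical): with a dynamic default, an unset `task.cpus` would follow `spark.task.cpus`, whereas a static `withDefaultValue(1)` always falls back to 1.
   
   ```java
   import java.util.Map;
   import java.util.function.Supplier;
   
   public class TaskCpusDefaultSketch {
   
       // Mock of a ConfigOption-style holder; the real Auron API differs.
       static class Option<T> {
           private final String key;
           private T defaultValue;
           private Supplier<T> dynamicDefault;
   
           Option(String key) { this.key = key; }
   
           Option<T> withDefaultValue(T value) {
               this.defaultValue = value;
               return this;
           }
   
           // Hypothetical: defer the default until resolution time.
           Option<T> withDynamicDefaultValue(Supplier<T> supplier) {
               this.dynamicDefault = supplier;
               return this;
           }
   
           // Explicit setting wins, then the dynamic default, then the static one.
           T resolve(Map<String, T> conf) {
               if (conf.containsKey(key)) return conf.get(key);
               if (dynamicDefault != null) return dynamicDefault.get();
               return defaultValue;
           }
       }
   
       public static void main(String[] args) {
           // Stand-in for the Spark conf, where spark.task.cpus=4 is set.
           Map<String, Integer> sparkConf = Map.of("spark.task.cpus", 4);
   
           // Dynamic default: unset task.cpus tracks spark.task.cpus.
           Option<Integer> dynamic = new Option<Integer>("task.cpus")
                   .withDynamicDefaultValue(() -> sparkConf.getOrDefault("spark.task.cpus", 1));
           System.out.println(dynamic.resolve(Map.of())); // prints 4
   
           // Static default, as in this PR: unset task.cpus is always 1.
           Option<Integer> fixed = new Option<Integer>("task.cpus")
                   .withDefaultValue(1);
           System.out.println(fixed.resolve(Map.of())); // prints 1
       }
   }
   ```
   
   So with the static default, the caller would now have to forward `spark.task.cpus` into `task.cpus` explicitly for the Native layer to see it, which is the behavior the question above asks to confirm.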
   



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]
