[ 
https://issues.apache.org/jira/browse/CASSANDRA-19827?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17873029#comment-17873029
 ] 

Yifan Cai commented on CASSANDRA-19827:
---------------------------------------

CI is green

> [Analytics] Add job_ideal_timeout_seconds writer option
> -------------------------------------------------------
>
>                 Key: CASSANDRA-19827
>                 URL: https://issues.apache.org/jira/browse/CASSANDRA-19827
>             Project: Cassandra
>          Issue Type: Improvement
>          Components: Analytics Library
>            Reporter: Yifan Cai
>            Assignee: Yifan Cai
>            Priority: Normal
>          Time Spent: 10m
>  Remaining Estimate: 0h
>
> Option to specify the ideal timeout, in seconds, for bulk write jobs.
> It only takes effect when the bulk write job uses the S3_COMPACT data 
> transport mode.
> When JOB_IDEAL_TIMEOUT_SECONDS is specified but is less than the time the 
> bulk write job actually needs to achieve the requested consistency level,
> it is ignored, and the job exits only after the desired consistency level 
> has been satisfied.
> For example, if a bulk write job requires 1 hour to achieve LOCAL_QUORUM, 
> it ignores
> any JOB_IDEAL_TIMEOUT_SECONDS less than 3600 seconds (1 hour) and 
> completes only after 1 hour.
> If JOB_IDEAL_TIMEOUT_SECONDS is 5400 seconds (1.5 hours), the job waits 
> at most an additional 0.5 hours after achieving LOCAL_QUORUM. The effective
> wait time is the minimum of the remaining time until the ideal timeout and the 
> estimated time to finish all slice imports (as estimated
> in org.apache.cassandra.spark.bulkwriter.ImportCompletionCoordinator).
> The timeout is named "ideal" because it can be ignored in order to let the 
> bulk write job complete in some circumstances.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)
