ASF GitHub Bot logged work on GOBBLIN-865:

                Author: ASF GitHub Bot
            Created on: 17/Sep/19 19:50
            Start Date: 17/Sep/19 19:50
    Worklog Time Spent: 10m 
      Work Description: arekusuri commented on pull request #2722: GOBBLIN-865: Add feature that enables PK-chunking in partition
URL: https://github.com/apache/incubator-gobblin/pull/2722#discussion_r325340740

 File path: 
 @@ -144,28 +142,18 @@
   private final boolean pkChunkingSkipCountCheck;
   private final boolean bulkApiUseQueryAll;
+  private WorkUnitState workUnitState;
   public SalesforceExtractor(WorkUnitState state) {
-    this.sfConnector = (SalesforceConnector) this.connector;
-    // don't allow pk chunking if max partitions too high or have user specified partitions
-    if (state.getPropAsBoolean(Partitioner.HAS_USER_SPECIFIED_PARTITIONS, 
 Review comment:
   This code was only for **PK chunking**. We no longer need a second level of PK-chunking (if we already have a normal pre-partition, we should not use PK chunking, since it burns request quota).
   I created a separate function, getQueryResultIdsPkChunking, for PK chunking, and made it independent of the class member this.pkChunking, so this block was removed.
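
A minimal sketch of the decoupling described above (the method name getQueryResultIdsPkChunking comes from the comment; everything else here is a hypothetical stand-in, not the actual Gobblin code): the PK-chunking path becomes a dedicated method instead of a branch guarded by a shared this.pkChunking flag, so the normal pre-partitioned path can never accidentally issue quota-hungry chunked Bulk API jobs.

```java
import java.util.Arrays;
import java.util.List;

class ExtractorSketch {
    // PK-chunking path: explicit and self-contained, no instance flag
    // consulted. In the real connector this would submit a Bulk API job
    // with PK chunking enabled and collect the result-set ids.
    static List<String> getQueryResultIdsPkChunking(String entity, String condition) {
        return Arrays.asList(entity + "/" + condition + "/chunk-0");
    }

    // Normal path: never triggers PK chunking, so a job that already has
    // a pre-partition does not burn extra request quota.
    static List<String> getQueryResultIds(String entity, String condition) {
        return Arrays.asList(entity + "/" + condition);
    }
}
```

The design point is that the caller now chooses the strategy at the call site rather than mutating extractor state beforehand.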
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
For queries about this service, please contact Infrastructure at:

Issue Time Tracking

    Worklog Id:     (was: 313876)
    Time Spent: 5h 40m  (was: 5.5h)

> Add feature that enables PK-chunking in partition 
> --------------------------------------------------
>                 Key: GOBBLIN-865
>                 URL: https://issues.apache.org/jira/browse/GOBBLIN-865
>             Project: Apache Gobblin
>          Issue Type: Task
>            Reporter: Alex Li
>            Priority: Major
>              Labels: salesforce
>          Time Spent: 5h 40m
>  Remaining Estimate: 0h
> In SFDC(salesforce) connector, we have partitioning mechanisms to split a 
> giant query to multiple sub queries. There are 3 mechanisms:
>  * simple partition (equally split by time)
>  * dynamic pre-partition (generate histogram and split by row numbers)
>  * user specified partition (set up time range in job file)
> However, tables like Task and Contract fail from time to time to fetch the
> full data.
> We may want to utilize PK-chunking to partition the query.
> The pk-chunking doc from SFDC - 
> [https://developer.salesforce.com/docs/atlas.en-us.api_asynch.meta/api_asynch/async_api_headers_enable_pk_chunking.htm]
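
Per the Salesforce doc linked above, PK chunking is enabled per Bulk API job through the Sforce-Enable-PKChunking request header (the default chunk size is 100,000 rows and the documented maximum is 250,000). A small hedged helper, purely illustrative and not part of the Gobblin connector, that builds the header value:

```java
class PkChunkingHeader {
    // Header name documented by Salesforce for Bulk API job requests.
    static final String HEADER_NAME = "Sforce-Enable-PKChunking";
    // Documented maximum chunk size (rows per chunk).
    static final int MAX_CHUNK_SIZE = 250_000;

    // Builds the header value, clamping to the documented maximum so an
    // oversized setting degrades gracefully instead of failing the job.
    static String value(int chunkSize) {
        return "chunkSize=" + Math.min(chunkSize, MAX_CHUNK_SIZE);
    }
}
```

A partitioner could then attach `HEADER_NAME: value(n)` when creating the Bulk API job, letting Salesforce split the extract by primary-key ranges server-side.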

This message was sent by Atlassian Jira
