[ 
https://issues.apache.org/jira/browse/GOBBLIN-1025?focusedWorklogId=375296&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-375296
 ]

ASF GitHub Bot logged work on GOBBLIN-1025:
-------------------------------------------

                Author: ASF GitHub Bot
            Created on: 21/Jan/20 23:41
            Start Date: 21/Jan/20 23:41
    Worklog Time Spent: 10m 
      Work Description: arekusuri commented on pull request #2868: 
GOBBLIN-1025: Add retry for PK-Chunking iterator
URL: https://github.com/apache/incubator-gobblin/pull/2868#discussion_r369302945
 
 

 ##########
 File path: 
gobblin-salesforce/src/main/java/org/apache/gobblin/salesforce/SalesforceRecordIterator.java
 ##########
 @@ -99,7 +103,40 @@ private void fulfillCurrentRecord() {
     }
   }
 
-  private void resetCurrentRecordStatus() {
 +  private InputStreamCSVReader reopenStreamWithRetry(ResultStruct resultStruct, int seekLineNumber, int retryNumber, int count) {
 
 Review comment:
   Thanks! I refactored so that both PK-chunking and normal bulk batches 
use the Iterator.
 
----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
[email protected]


Issue Time Tracking
-------------------

    Worklog Id:     (was: 375296)
    Time Spent: 1h 10m  (was: 1h)

> Add retry for PK-Chunking iterator
> ----------------------------------
>
>                 Key: GOBBLIN-1025
>                 URL: https://issues.apache.org/jira/browse/GOBBLIN-1025
>             Project: Apache Gobblin
>          Issue Type: Improvement
>            Reporter: Alex Li
>            Priority: Major
>          Time Spent: 1h 10m
>  Remaining Estimate: 0h
>
> In the SFDC connector, there is a class called `ResultIterator` (I will rename 
> it to SalesforceRecordIterator).
> It is currently used only by PK-Chunking. It encapsulates fetching a list 
> of result files into a single record iterator.
> However, csvReader.nextRecord() may throw a network IO exception, and we 
> should retry in that case.
> When a network IO exception happens while a result file is only partially 
> fetched, we are in a special situation: the first part of the file has already 
> been fetched locally, but the rest of the file is still on the data source. 
> We need to
> 1. reopen the file stream
> 2. skip all the records we have already fetched, seeking the cursor to the 
> first record we haven't fetched yet.
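The reopen-and-skip retry described above can be sketched as follows. This is a minimal illustration, not the actual Gobblin implementation: the `StreamOpener` interface, the method name `reopenAndSeek`, and the use of plain line-based reading in place of the real `InputStreamCSVReader` are all assumptions made for the sketch.

```java
import java.io.BufferedReader;
import java.io.IOException;
import java.io.StringReader;

public class RetryReaderSketch {

  // Hypothetical supplier of a fresh stream over the full result file,
  // standing in for re-requesting the file from the data source.
  interface StreamOpener {
    BufferedReader open() throws IOException;
  }

  /**
   * Reopen the stream and skip the records that were already fetched
   * before the failure, retrying on IOException up to maxRetries times.
   */
  static BufferedReader reopenAndSeek(StreamOpener opener,
      int recordsAlreadyRead, int maxRetries) throws IOException {
    IOException lastFailure = null;
    for (int attempt = 0; attempt < maxRetries; attempt++) {
      try {
        BufferedReader reader = opener.open();
        // Seek the cursor past the records we already have locally.
        for (int i = 0; i < recordsAlreadyRead; i++) {
          reader.readLine();
        }
        return reader;
      } catch (IOException e) {
        lastFailure = e; // remember the failure and retry
      }
    }
    throw lastFailure != null ? lastFailure
        : new IOException("no attempts were made");
  }

  public static void main(String[] args) throws IOException {
    // Simulate a three-record result file where two records were
    // already fetched before the connection dropped.
    StreamOpener opener =
        () -> new BufferedReader(new StringReader("r1\nr2\nr3\n"));
    BufferedReader reader = reopenAndSeek(opener, 2, 3);
    System.out.println(reader.readLine()); // first not-yet-fetched record: r3
  }
}
```

After a successful reopen, iteration resumes from the first record that was never delivered, so callers see an uninterrupted record sequence.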



--
This message was sent by Atlassian Jira
(v8.3.4#803005)
