[jira] [Commented] (CARBONDATA-4239) Carbondata 2.1.1 MV : Incremental refresh : Doesnot aggregate data correctly
[ https://issues.apache.org/jira/browse/CARBONDATA-4239?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17381175#comment-17381175 ] Indhumathi commented on CARBONDATA-4239:
---
MV can be used for real-time data loading, even for data arriving every 15 minutes, but only when each load carries more data. If you use INSERT to add a single row every 5/15 minutes, it will not give much benefit. As I already suggested in the previous comments, you can still use MV for your scenario, with manual refresh.

> Carbondata 2.1.1 MV : Incremental refresh : Doesnot aggregate data correctly
> ----------------------------------------------------------------------------
>
>                 Key: CARBONDATA-4239
>                 URL: https://issues.apache.org/jira/browse/CARBONDATA-4239
>             Project: CarbonData
>          Issue Type: Bug
>          Components: core, data-load
>    Affects Versions: 2.1.1
>        Environment: RHEL spark-2.4.5-bin-hadoop2.7 for carbon 2.1.1
>           Reporter: Sushant Sammanwar
>           Priority: Major
>             Labels: Materialistic_Views, materializedviews, refreshnodes
>
> Hi Team,
> We are doing a POC with CarbonData using MV.
> Our MV does not contain the AVG function, because we wanted to use the incremental-refresh feature.
> But with incremental refresh, we noticed the MV does not aggregate values correctly.
> If a row is inserted, it creates another row in the MV instead of adding the incremental value.
> As a result, the number of rows in the MV is almost the same as in the raw table.
> This does not happen with a full-refresh MV.
> Below is the data in the MV, with 3 rows:
>
> scala> carbon.sql("select * from fact_365_1_eutrancell_21_30_minute").show()
> +--------------------------------+-------------------------------+-------------------+----------+---------+---------+----------------------------+
> |fact_365_1_eutrancell_21_tags_id|fact_365_1_eutrancell_21_metric|                 ts| sum_value|min_value|max_value|fact_365_1_eutrancell_21_ts2|
> +--------------------------------+-------------------------------+-------------------+----------+---------+---------+----------------------------+
> |            ff6cb0f7-fba0-413...|           eUtranCell.HHO.X2...|2020-09-25 06:30:00|5412.68105|   31.345| 4578.112|         2020-09-25 05:30:00|
> |            ff6cb0f7-fba0-413...|           eUtranCell.HHO.X2...|2020-09-25 05:30:00| 1176.7035| 392.2345| 392.2345|         2020-09-25 05:30:00|
> |            ff6cb0f7-fba0-413...|           eUtranCell.HHO.X2...|2020-09-25 06:00:00|    58.112|   58.112|   58.112|         2020-09-25 05:30:00|
> +--------------------------------+-------------------------------+-------------------+----------+---------+---------+----------------------------+
>
> Below, I am inserting data for the 6th hour; it should add incremental values to the 6th-hour row of the MV.
> Note the data being inserted: the columns that are part of the GROUP BY clause have the same values as the existing data.
>
> scala> carbon.sql("insert into fact_365_1_eutrancell_21 values ('2020-09-25 06:05:00','eUtranCell.HHO.X2.InterFreq.PrepAttOut','ff6cb0f7-fba0-4134-81ee-55e820574627',118.112,'2020-09-25 05:30:00')").show()
> 21/06/28 16:01:31 AUDIT audit: {"time":"June 28, 2021 4:01:31 PM IST","username":"root","opName":"INSERT INTO","opId":"7332282307468267","opStatus":"START"}
> 21/06/28 16:01:32 WARN CarbonOutputIteratorWrapper: try to poll a row batch one more time.
> 21/06/28 16:01:32 WARN CarbonOutputIteratorWrapper: try to poll a row batch one more time.
> 21/06/28 16:01:32 WARN CarbonOutputIteratorWrapper: try to poll a row batch one more time.
> 21/06/28 16:01:33 AUDIT audit: {"time":"June 28, 2021 4:01:33 PM IST","username":"root","opName":"INSERT INTO","opId":"7332284066443156","opStatus":"START"}
> [Stage 40:=>(199 + 1) / 200]21/06/28 16:01:44 WARN CarbonOutputIteratorWrapper: try to poll a row batch one more time.
> 21/06/28 16:01:44 WARN CarbonOutputIteratorWrapper: try to poll a row batch one more time.
> 21/06/28 16:01:44 WARN CarbonOutputIteratorWrapper: try to poll a row batch one more time.
> 21/06/28 16:01:44 AUDIT audit: {"time":"June 28, 2021 4:01:44 PM IST","username":"root","opName":"INSERT INTO","opId":"7332284066443156","opStatus":"SUCCESS","opTime":"11343 ms","table":"default.fact_365_1_eutrancell_21_30_minute","extraInfo":{}}
> 21/06/28 16:01:44 AUDIT audit: {"time":"June 28, 2021 4:01:44 PM IST","username":"root","opName":"INSERT INTO","opId":"7332282307468267","opStatus":"SUCCESS","opTime":"13137 ms","table":"default.fact_365_1_eutrancell_21","extraInfo":{}}
> +----------+
> |Segment ID|
> +----------+
> |         8|
> +----------+
>
> Below we can see it has added another row for 2020-09-25 06:00:00.
> Note: all values of the columns that are part of the GROUP BY clause are the same.
> This means there should have been a single row for 2020-09-25 06:00:00.
> scala> carbon.sql("select * from
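The merge the reporter expected from incremental refresh can be made concrete with a little arithmetic on the values shown above. This is a sketch of the *expected* behavior only (the report shows CarbonData instead writes a second row); the variable names are illustrative, not CarbonData code:

```python
# Expected (but not observed) in-place merge of the 2020-09-25 06:00:00 MV row
# when a new value arrives for the same group key. Values are taken from the
# session output above: the existing bucket holds 58.112, the insert adds 118.112.
existing = {"sum_value": 58.112, "min_value": 58.112, "max_value": 58.112}
incoming = 118.112  # inserted at 06:05:00, which falls in the 06:00 half-hour bucket

merged = {
    "sum_value": existing["sum_value"] + incoming,        # 176.224
    "min_value": min(existing["min_value"], incoming),    # 58.112
    "max_value": max(existing["max_value"], incoming),    # 118.112
}
print(merged)
```

Had the refresh merged in place, the MV would still show exactly one row for the 06:00 bucket, with these merged aggregates.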
[ https://issues.apache.org/jira/browse/CARBONDATA-4239?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17381160#comment-17381160 ] Sushant Sammanwar commented on CARBONDATA-4239:
---
Thanks [~indhumuthumurugesh] [~Indhumathi27] for your response.
Does this mean MV should NOT be used for real-time (continuous, incremental) data loading? Should it be used only with bulk data loads (for example, loading data every 30 minutes or 1 hour instead of every 5 or 15 minutes)? Only then will it benefit storage and query time. Is my understanding correct?
[ https://issues.apache.org/jira/browse/CARBONDATA-4239?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17381075#comment-17381075 ] Indhumathi commented on CARBONDATA-4239:
---
For loading data (LOAD using csv/.txt, where each load has more data), incremental loading will save time and benefit load performance. If your case is the INSERT scenario, then the MV table with automatic refresh (which is enabled by default) will not benefit in terms of either storage or performance.
For your scenario, I suggest you use MV with manual refresh. You can refresh the MV at some interval (say, every hour, which will load 4 segments of the main table into a single segment of the MV), which will benefit both storage cost and MV performance.
To create an MV with manual refresh, use:

  create materialized view mv_name with deferred refresh as SELECT (...)

(or)

  create materialized view mv_name properties('refresh_trigger_mode'='on_manual') as SELECT (...)

Refer https://github.com/apache/carbondata/blob/master/docs/mv-guide.md#loading-data
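The manual-refresh workflow suggested in the comment above can be sketched end-to-end. The `carbon` session object is stubbed here so the sketch is self-contained (in a real deployment it would be the CarbonSession from the scala> snippets); the MV name is hypothetical, and the statements follow the deferred-refresh syntax quoted in the comment and the linked mv-guide:

```python
# Sketch of the suggested manual-refresh workflow. `carbon` is a stub that only
# records the SQL it receives; a real session would execute it on the cluster.
class StubSession:
    def __init__(self):
        self.executed = []

    def sql(self, statement):
        # Normalize whitespace so multi-line statements are easy to inspect.
        self.executed.append(" ".join(statement.split()))

carbon = StubSession()

# Create the MV once, with deferred (manual) refresh, so each INSERT into the
# main table does not trigger a separate MV segment load of its own.
carbon.sql("""
    create materialized view eutrancell_30_minute
    with deferred refresh
    as SELECT (...)
""")

# Then, on a schedule (say, hourly), fold the accumulated main-table segments
# into the MV in one refresh:
carbon.sql("REFRESH MATERIALIZED VIEW eutrancell_30_minute")

print(carbon.executed[-1])
```

With this pattern, one refresh per hour aggregates the four 15-minute inserts into a single MV segment, which is the storage and query benefit described above.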
[ https://issues.apache.org/jira/browse/CARBONDATA-4239?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17380685#comment-17380685 ] Sushant Sammanwar commented on CARBONDATA-4239:
---
Thanks [~Indhumathi27] for your response.
If it is expected for the MV to write data to a new segment, then what benefit is the MV giving here? I have data being inserted every 15 minutes, and for an hourly MV, all 4 rows are there in the parent table as well as in the MV. I do not get any benefit in terms of storage. As far as query time is concerned, since the number of rows in the MV is the same, it will take the same time to run the query as on the table.
[ https://issues.apache.org/jira/browse/CARBONDATA-4239?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17380485#comment-17380485 ] Indhumathi Muthumurugesh commented on CARBONDATA-4239:
---
Hi Sushant,
The incremental data-loading concept in MV will aggregate the new incoming data (new LOAD/INSERT) and write it to a new segment. It will not append to an existing segment.
Full-refresh mode will do aggregation on the table data (all segments), i.e., an insert-overwrite operation, whereas incremental refresh will create a new segment for the new incoming data. So, in the INSERT case, the number of rows will be the same as in the parent table.
And, when you do "select * from mv_table", the data is partially aggregated. When the query that you have created the MV for is fired, it will do aggregation on this partially aggregated data and return the results. So, in your case, this is not an issue.
For the INSERT case, if you don't want to load to the MV for each row, you can create the MV "with deferred refresh" and refresh it when required.
Please have a look at the design document linked below for more understanding:
https://docs.google.com/document/d/1AACOYmBpwwNdHjJLOub0utSc6JCBMZn8VL5CvZ9hygA/edit
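The semantics described in the comment above (each refresh writes partial aggregates to its own segment, and a query on the MV re-aggregates them) can be sketched as follows. This is an illustration of the concept only, not CarbonData code; the row values mirror the 06:00 bucket from the session logs:

```python
# Each incremental refresh writes partial aggregates for its segment. A query
# that matches the MV then re-aggregates rows sharing the same group key:
# sum of sums, min of mins, max of maxes. Two rows below model the duplicate
# 2020-09-25 06:00:00 entries seen after the INSERT.
mv_rows = [
    # (ts, sum_value, min_value, max_value) -- one row per MV segment
    ("2020-09-25 06:00:00", 58.112, 58.112, 58.112),     # older segment
    ("2020-09-25 06:00:00", 118.112, 118.112, 118.112),  # segment from the new INSERT
]

final = {}
for ts, s, mn, mx in mv_rows:
    if ts not in final:
        final[ts] = (s, mn, mx)
    else:
        cs, cmn, cmx = final[ts]
        # Re-aggregate the partial aggregates at query time.
        final[ts] = (cs + s, min(cmn, mn), max(cmx, mx))

# The user-visible query result collapses the two partial rows into one.
print(final["2020-09-25 06:00:00"])
```

So although "select * from mv_table" shows one row per segment, the rewritten user query still returns a single, correctly aggregated row per group key.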