[ 
https://issues.apache.org/jira/browse/HUDI-2886?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17451471#comment-17451471
 ] 

Manoj Govindassamy commented on HUDI-2886:
------------------------------------------

Applied a workaround that retries on S3 IOException errors. With it, the same MOR 
test run with more than 10 commits (enough to trigger compaction) PASSED.
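For illustration, the retry workaround can be sketched as a bounded retry loop that re-attempts an operation only when it fails with an IOException. This is a hypothetical sketch, not the actual Hudi patch; the names (retryOnIOException, maxRetries) and the linear backoff are assumptions.

```java
import java.io.IOException;
import java.util.concurrent.Callable;

// Hypothetical sketch of the workaround: retry an S3 read/write a bounded
// number of times, but only on IOException (treated as transient).
// Names and backoff strategy here are illustrative, not Hudi APIs.
public class S3RetryWorkaround {

  public static <T> T retryOnIOException(Callable<T> op, int maxRetries) throws Exception {
    IOException last = null;
    for (int attempt = 1; attempt <= maxRetries; attempt++) {
      try {
        return op.call();
      } catch (IOException e) {
        last = e;                       // transient S3 error: remember and retry
        Thread.sleep(100L * attempt);   // simple linear backoff between attempts
      }
    }
    throw last;  // retries exhausted: surface the last failure
  }

  public static void main(String[] args) throws Exception {
    // Demo: the operation fails twice with IOException, then succeeds.
    final int[] calls = {0};
    String result = retryOnIOException(() -> {
      if (++calls[0] < 3) {
        throw new IOException("simulated transient S3 error");
      }
      return "ok";
    }, 5);
    System.out.println(result + " after " + calls[0] + " attempts");
  }
}
```

With maxRetries set higher than the number of transient failures observed in the run, the compaction-triggering test completes instead of aborting on the first S3 hiccup.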

 
{noformat}

$ sh run_test_suite.sanity.mor.sh /home/hadoop/staging/
\n============================== Executing 
MERGE_ON_READ-true-sanity_spark_command.sh test suite 
==============================
\n============================== Finished 
MERGE_ON_READ-true-sanity_spark_command.sh test suite 
==============================
Namespace(all=False, details=True, log_dir='/home/hadoop/staging//.logs', 
log_file='MERGE_ON_READ-true-sanity_spark_command.sh.logs',
show_panels=True)
Infra Test MERGE_ON_READ-true-sanity_spark_command.sh
├── first_insert
│   ├── name: f953ad26-bd82-4058-b6d4-e8c3d734078e
│   ├── record_size: 1000
│   ├── repeat_count: 5
│   ├── num_partitions_insert: 5
│   ├── num_records_insert: 1000
│   ├── config: first_insert
│   └── node_status: Succeeded
├── second_insert
│   ├── name: 2f800cee-b6b2-4e5e-9c27-bd8a31eec7d2
│   ├── record_size: 1000
│   ├── repeat_count: 5
│   ├── num_partitions_insert: 50
│   ├── num_records_insert: 10000
│   ├── config: second_insert
│   └── node_status: Succeeded
├── third_insert
│   ├── name: 81c63535-fd71-461c-a714-58349d86327f
│   ├── record_size: 1000
│   ├── repeat_count: 5
│   ├── num_partitions_insert: 2
│   ├── num_records_insert: 300
│   ├── config: third_insert
│   └── node_status: Succeeded
├── first_validate
│   ├── name: 6f68bcb2-1b59-4f16-80f7-2a759d8be0ef
│   ├── validate_hive: False
│   ├── config: first_validate
│   └── node_status: Succeeded
├── first_upsert
│   ├── name: 278bddc1-d8f3-494c-955b-d2af26bc7209
│   ├── record_size: 1000
│   ├── repeat_count: 1
│   ├── num_records_upsert: 100
│   ├── num_partitions_insert: 2
│   ├── num_records_insert: 300
│   ├── num_partitions_upsert: 1
│   ├── config: first_upsert
│   └── node_status: Succeeded
└── first_delete
    ├── name: e78b4b92-7560-4182-892a-b1315881fbe4
    ├── num_partitions_delete: 50
    ├── num_records_delete: 8000
    ├── config: first_delete
    └── node_status: Succeeded
MERGE_ON_READ-true-sanity_spark_command.sh
╭───────────────────────────────────────────────────────────────────╮
│ Infra Test: MERGE_ON_READ TRUE SANITY                             │
│ Test Status: None                                                 │
│ Current Dag Round: 1                                              │
│ Failed Node: None                                                 │
│ Last Running Node: first_delete                                   │
│ Table Type: MERGE_ON_READ                                         │
│ Metadata Enable: TRUE                                             │
│ YML Type Executed: SANITY                                         │
│ Run Time Minutes: None                                            │
│ Spark Application Completed: None%                                │
│ Spark Application Id: None                                        │
│ Test Origin File: MERGE_ON_READ-true-sanity_spark_command.sh      │
│ Logs Origin File: MERGE_ON_READ-true-sanity_spark_command.sh.logs │
╰───────────────────────────────────────────────────────────────────╯{noformat}

> Certify metadata table using large-scale cluster testing
> --------------------------------------------------------
>
>                 Key: HUDI-2886
>                 URL: https://issues.apache.org/jira/browse/HUDI-2886
>             Project: Apache Hudi
>          Issue Type: Task
>            Reporter: Rajesh Mahindra
>            Assignee: Manoj Govindassamy
>            Priority: Blocker
>             Fix For: 0.10.0
>
>




--
This message was sent by Atlassian Jira
(v8.20.1#820001)
