vingov opened a new pull request #2769:
URL: https://github.com/apache/hudi/pull/2769
## What is the purpose of the pull request
*This pull request adds a new partition extractor class, `HiveStylePartitionExtractor`, to support Hive-style partitioning.*
## Brief change log
- *Added `HiveStylePartitionExtractor` to support Hive-style partitions (a rough sketch of the idea is shown below).*
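For illustration only, here is a minimal sketch of what such an extractor could look like. It assumes the existing `org.apache.hudi.hive.PartitionValueExtractor` interface and its `extractPartitionValuesInPath` method (the method name appears in the stack trace below); the actual class in this PR may be named, packaged, and implemented differently.

```java
package org.apache.hudi.hive;

import java.util.Arrays;
import java.util.List;
import java.util.stream.Collectors;

/**
 * Illustrative sketch only: extracts partition values from Hive-style
 * partition paths such as "datestr=2021-03-28" or "year=2021/month=03/day=28".
 */
public class HiveStylePartitionExtractor implements PartitionValueExtractor {

  @Override
  public List<String> extractPartitionValuesInPath(String partitionPath) {
    // Each path segment is expected to look like "<column>=<value>";
    // keep only the value part for Hive sync.
    return Arrays.stream(partitionPath.split("/"))
        .map(segment -> {
          String[] parts = segment.split("=");
          if (parts.length != 2) {
            throw new IllegalArgumentException(
                "Partition path " + partitionPath + " is not in Hive-style <column>=<value> form");
          }
          return parts[1];
        })
        .collect(Collectors.toList());
  }
}
```

In a real deployment the class would be selected through Hudi's hive sync configuration for the partition value extractor, in the same way the existing extractors are chosen.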
## Verify this pull request
This change added tests and can be verified as follows:
- *Added a new test method `testHiveStylePartition()` in the `TestPartitionValueExtractor` class to verify the change (a sketch of such a test follows this list).*
- *Manually verified the change by running a DeltaStreamer job locally; the before/after logs are included below.*
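As a hedged sketch only, a test along these lines would exercise the behavior described above. It assumes JUnit 5 and the illustrative extractor sketched earlier, so the assertions in the actual `testHiveStylePartition()` may differ.

```java
package org.apache.hudi.hive;

import static org.junit.jupiter.api.Assertions.assertEquals;
import static org.junit.jupiter.api.Assertions.assertThrows;

import java.util.Collections;
import org.junit.jupiter.api.Test;

public class TestPartitionValueExtractor {

  @Test
  public void testHiveStylePartition() {
    HiveStylePartitionExtractor extractor = new HiveStylePartitionExtractor();

    // A Hive-style path like the one in the logs below should yield just the value.
    assertEquals(Collections.singletonList("2021-03-28"),
        extractor.extractPartitionValuesInPath("datestr=2021-03-28"));

    // A path without the <column>=<value> form should be rejected.
    assertThrows(IllegalArgumentException.class,
        () -> extractor.extractPartitionValuesInPath("2021/03/28"));
  }
}
```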
Before the fix:
```
21/04/01 23:10:33 ERROR deltastreamer.HoodieDeltaStreamer: Got error running delta sync once. Shutting down
org.apache.hudi.exception.HoodieException: Got runtime exception when hive syncing delta_streamer_test
    at org.apache.hudi.hive.HiveSyncTool.syncHoodieTable(HiveSyncTool.java:122)
    at org.apache.hudi.utilities.deltastreamer.DeltaSync.syncMeta(DeltaSync.java:560)
    at org.apache.hudi.utilities.deltastreamer.DeltaSync.writeToSink(DeltaSync.java:475)
    at org.apache.hudi.utilities.deltastreamer.DeltaSync.syncOnce(DeltaSync.java:282)
    at org.apache.hudi.utilities.deltastreamer.HoodieDeltaStreamer.lambda$sync$2(HoodieDeltaStreamer.java:170)
    at org.apache.hudi.common.util.Option.ifPresent(Option.java:96)
    at org.apache.hudi.utilities.deltastreamer.HoodieDeltaStreamer.sync(HoodieDeltaStreamer.java:168)
    at org.apache.hudi.utilities.deltastreamer.HoodieDeltaStreamer.main(HoodieDeltaStreamer.java:470)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at org.apache.spark.deploy.yarn.ApplicationMaster$$anon$2.run(ApplicationMaster.scala:690)
Caused by: org.apache.hudi.hive.HoodieHiveSyncException: Failed to sync partitions for table fact_scheduled_trip__1pc_trip_uuid
    at org.apache.hudi.hive.HiveSyncTool.syncPartitions(HiveSyncTool.java:229)
    at org.apache.hudi.hive.HiveSyncTool.syncHoodieTable(HiveSyncTool.java:166)
    at org.apache.hudi.hive.HiveSyncTool.syncHoodieTable(HiveSyncTool.java:108)
    ... 12 more
Caused by: java.lang.IllegalArgumentException: Partition path datestr=2021-03-28 is not in the form yyyy/mm/dd
    at org.apache.hudi.hive.SlashEncodedDayPartitionValueExtractor.extractPartitionValuesInPath(SlashEncodedDayPartitionValueExtractor.java:55)
    at org.apache.hudi.hive.HoodieHiveClient.getPartitionEvents(HoodieHiveClient.java:220)
    at org.apache.hudi.hive.HiveSyncTool.syncPartitions(HiveSyncTool.java:221)
```
After the fix:
```
21/04/02 19:04:40 INFO hive.HiveSyncTool: No Schema difference for delta_streamer_test
21/04/02 19:04:40 INFO hive.HiveSyncTool: Schema sync complete. Syncing partitions for delta_streamer_test
21/04/02 19:04:41 INFO hive.HiveSyncTool: Last commit time synced was found to be null
21/04/02 19:04:41 INFO common.AbstractSyncHoodieClient: Last commit time synced is not known, listing all partitions in /tmp/delta_streamer_test,FS :DFS[DFSClient[clientName=DFSClient_NONMAPREDUCE_-2134537191_53, ugi=vinothg (auth:SIMPLE)]]
21/04/02 19:04:41 INFO hive.HiveSyncTool: Storage partitions scan complete. Found 6
21/04/02 19:04:41 INFO hive.HiveSyncTool: New Partitions [datestr=2021-03-28, datestr=2021-03-29, datestr=2021-03-30, datestr=2021-03-31, datestr=2021-04-01, datestr=2021-04-02]
21/04/02 19:04:41 INFO hive.HoodieHiveClient: Adding partitions 6 to table delta_streamer_test
```
## Committer checklist
- [x] Has a corresponding JIRA in PR title & commit
- [x] Commit message is descriptive of the change
- [ ] CI is green
- [ ] Necessary doc changes done or have another open PR
- [ ] For large changes, please consider breaking it into sub-tasks under
an umbrella JIRA.