hudi-bot commented on PR #8598:
URL: https://github.com/apache/hudi/pull/8598#issuecomment-1534130948
## CI report:
* c16f375a644f5417d9f90883bcbe5f377095 Azure:
SteNicholas commented on code in PR #8611:
URL: https://github.com/apache/hudi/pull/8611#discussion_r1184569380
##
hudi-flink-datasource/hudi-flink/src/main/java/org/apache/hudi/configuration/OptionsResolver.java:
##
@@ -260,6 +260,17 @@ public static boolean
Timothy Brown created HUDI-6168:
---
Summary: Add source partition columns to rows in S3/GCS Sources
Key: HUDI-6168
URL: https://issues.apache.org/jira/browse/HUDI-6168
Project: Apache Hudi
Issue
[ https://issues.apache.org/jira/browse/HUDI-6168?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
Timothy Brown reassigned HUDI-6168:
---
Assignee: Timothy Brown
> Add source partition columns to rows in S3/GCS Sources
>
xushiyan commented on code in PR #8490:
URL: https://github.com/apache/hudi/pull/8490#discussion_r1184560033
##
hudi-spark-datasource/hudi-spark/src/test/scala/org/apache/hudi/functional/TestMORDataSourceStorage.scala:
##
@@ -133,4 +132,69 @@ class TestMORDataSourceStorage
This is an automated email from the ASF dual-hosted git repository.
bhavanisudha pushed a commit to branch asf-site
in repository https://gitbox.apache.org/repos/asf/hudi.git
The following commit(s) were added to refs/heads/asf-site by this push:
new f654e4bb250 updated community content
bhasudha merged PR #8621:
URL: https://github.com/apache/hudi/pull/8621
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail:
danny0405 commented on issue #7602:
URL: https://github.com/apache/hudi/issues/7602#issuecomment-1534112611
Okay, I guess it is also feasible to add that support for the consistent hashing
index.
danny0405 commented on PR #8478:
URL: https://github.com/apache/hudi/pull/8478#issuecomment-1534111844
> > I saw some sub-clause locations were changed, like `LOCATION` and
`CLUSTERED BY`; is that as expected?
>
> @danny0405 Thank you for your thorough review! The modification
Danny Chen created HUDI-6167:
Summary: Automatic schema inference for delta stream with
JSON document data source
Key: HUDI-6167
URL: https://issues.apache.org/jira/browse/HUDI-6167
Project: Apache
danny0405 commented on issue #8626:
URL: https://github.com/apache/hudi/issues/8626#issuecomment-1534109380
Sounds like a feature inquiry; automatic JSON-based schema evolution is
feasible, especially for the document data source. I have created a JIRA
issue:
JoshuaZhuCN commented on issue #7602:
URL: https://github.com/apache/hudi/issues/7602#issuecomment-1534108727
> Is this the fix you want? #7834
@danny0405 This PR addresses bulk insert support under SIMPLE BUCKET, but my
usage scenarios are all CONSISTENT_HASHING BUCKET index tables,
danny0405 commented on code in PR #8472:
URL: https://github.com/apache/hudi/pull/8472#discussion_r1184550750
##
hudi-common/src/main/java/org/apache/hudi/common/model/IndexItem.java:
##
@@ -0,0 +1,91 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or
Amar1404 commented on issue #8626:
URL: https://github.com/apache/hudi/issues/8626#issuecomment-1534104878
hi @danny0405 - I mean, if I don't provide any schema class, then while using
spark.read.json() it will automatically infer the schema, but in the code we
are using
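The inference behavior mentioned here can be sketched in plain Python. This is an illustration of the general idea only, a hypothetical helper rather than Spark's actual logic (spark.read.json() also handles nested structs, broader type widening, and corrupt records):

```python
import json

def infer_schema(records):
    """Naively infer a flat field -> type-name mapping from sample JSON records.

    Sketch of schema inference from data: scan sample records, record each
    field's Python type, and widen int to float when both appear.
    """
    schema = {}
    for rec in records:
        for field, value in json.loads(rec).items():
            kind = type(value).__name__
            if schema.get(field) == "int" and kind == "float":
                schema[field] = "float"  # widen int -> float for mixed fields
            else:
                schema.setdefault(field, kind)
    return schema

sample = ['{"id": 1, "price": 9.5}', '{"id": 2, "name": "a"}']
print(infer_schema(sample))  # {'id': 'int', 'price': 'float', 'name': 'str'}
```

With a schema class supplied, this whole sampling pass would be skipped, which is the trade-off the comment is pointing at.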
danny0405 commented on code in PR #8190:
URL: https://github.com/apache/hudi/pull/8190#discussion_r1184548905
##
hudi-common/src/main/java/org/apache/hudi/metadata/FileSystemBackedTableMetadata.java:
##
@@ -106,9 +106,9 @@ private List getPartitionPathWithPathPrefix(String
hudi-bot commented on PR #8611:
URL: https://github.com/apache/hudi/pull/8611#issuecomment-1534103080
## CI report:
* 8a10affd53d66b88abd116587e6dd5e0c43e542a Azure:
danny0405 commented on code in PR #8596:
URL: https://github.com/apache/hudi/pull/8596#discussion_r1184546697
##
hudi-cli/src/test/java/org/apache/hudi/cli/commands/TestRepairsCommand.java:
##
@@ -234,6 +234,30 @@ public void testOverwriteHoodieProperties() throws
IOException
danny0405 commented on PR #8082:
URL: https://github.com/apache/hudi/pull/8082#issuecomment-1534100699
> TestNestedSchemaPruningOptimization failed only in spark3.3.2
Yes, I even tried Spark 3.2.1 and it works fine.
danny0405 commented on issue #8628:
URL: https://github.com/apache/hudi/issues/8628#issuecomment-1534099485
Is your table partitioned as expected? You are using the BloomFilter
index by default, which performs deduplication at per-partition scope.
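A minimal sketch of what per-partition scope means in practice: uniqueness is enforced on the pair (partition path, record key), so the same record key landing in two partitions produces two rows. The records and helper below are hypothetical, not Hudi's actual data model:

```python
def upsert(table, records):
    """Upsert into a dict keyed by (partition_path, record_key).

    Mirrors a partition-scoped index: a key is only deduplicated against
    rows in the same partition, never across partitions.
    """
    for rec in records:
        table[(rec["partition_path"], rec["record_key"])] = rec
    return table

table = {}
upsert(table, [{"record_key": "k1", "partition_path": "2023-05-01", "v": 1}])
# Same record key, different partition value: treated as a brand-new record.
upsert(table, [{"record_key": "k1", "partition_path": "2023-05-02", "v": 2}])
print(len(table))  # 2: one row per (partition, key) pair
```

This is why an unexpected partitioning setup can look like a failure to deduplicate; a global index would be needed to enforce key uniqueness across partitions.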
hudi-bot commented on PR #8611:
URL: https://github.com/apache/hudi/pull/8611#issuecomment-1534098133
## CI report:
* 8a10affd53d66b88abd116587e6dd5e0c43e542a Azure:
hudi-bot commented on PR #8595:
URL: https://github.com/apache/hudi/pull/8595#issuecomment-1534093550
## CI report:
* 21e3090d2bd0eb714322e3ffad7b3554f4440829 UNKNOWN
* ae38bcb32800fe4f6a14ee1e627607296041c10a Azure:
xushiyan commented on PR #8390:
URL: https://github.com/apache/hudi/pull/8390#issuecomment-1534075473
> Let's move the sample writes call as early as possible, so we construct the
writeConfig with the avg record size overridden if need be. We don't want to
mutate the write config.
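The design point in the review, apply the sampled value at construction time instead of mutating an existing config, can be sketched as follows. All names here are hypothetical illustrations, not Hudi's API:

```python
from dataclasses import dataclass, replace

@dataclass(frozen=True)  # frozen: attribute assignment raises, forcing copy-on-change
class WriteConfig:
    avg_record_size: int = 1024

def with_sampled_size(config, sampled_size):
    """Return a new config with the override applied; the original is untouched."""
    if sampled_size is None:
        return config  # nothing sampled, keep the original instance
    return replace(config, avg_record_size=sampled_size)

base = WriteConfig()
tuned = with_sampled_size(base, 4096)
print(base.avg_record_size, tuned.avg_record_size)  # 1024 4096
```

Keeping the config immutable means any component already holding a reference to `base` never observes a silent mid-run change, which is the hazard the reviewer is warning about.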
xushiyan commented on code in PR #8390:
URL: https://github.com/apache/hudi/pull/8390#discussion_r1184523790
##
hudi-client/hudi-spark-client/src/main/java/org/apache/hudi/client/utils/SparkSampleWritesUtils.java:
##
@@ -0,0 +1,143 @@
+/*
+ * Licensed to the Apache Software
hudi-bot commented on PR #7826:
URL: https://github.com/apache/hudi/pull/7826#issuecomment-1534073143
## CI report:
* b74d73f66e53a4cbd6b6048c4d07e19c1b9ad566 Azure:
xushiyan commented on code in PR #8390:
URL: https://github.com/apache/hudi/pull/8390#discussion_r1184522344
##
hudi-client/hudi-spark-client/src/main/java/org/apache/hudi/client/utils/SparkSampleWritesUtils.java:
##
@@ -0,0 +1,143 @@
+/*
+ * Licensed to the Apache Software
hudi-bot commented on PR #8490:
URL: https://github.com/apache/hudi/pull/8490#issuecomment-1534067199
## CI report:
* 7575e66d6a48d702fe1e8d4670cb0890b370e94b Azure:
hudi-bot commented on PR #7826:
URL: https://github.com/apache/hudi/pull/7826#issuecomment-1534066550
## CI report:
* b74d73f66e53a4cbd6b6048c4d07e19c1b9ad566 Azure:
stream2000 commented on PR #7826:
URL: https://github.com/apache/hudi/pull/7826#issuecomment-1534064726
> a related PR #7469
The issue that this PR is trying to solve happens not only in multi-writer
scenarios but also in a single writer with async lazy clean
stream2000 commented on code in PR #7826:
URL: https://github.com/apache/hudi/pull/7826#discussion_r1184519230
##
hudi-client/hudi-client-common/src/main/java/org/apache/hudi/client/BaseHoodieTableServiceClient.java:
##
@@ -707,20 +709,34 @@ protected List
hudi-bot commented on PR #8490:
URL: https://github.com/apache/hudi/pull/8490#issuecomment-1534062678
## CI report:
* 7575e66d6a48d702fe1e8d4670cb0890b370e94b UNKNOWN
Bot commands
@hudi-bot supports the following commands:
- `@hudi-bot run azure` re-run the
tomyanth opened a new issue, #8628:
URL: https://github.com/apache/hudi/issues/8628
**Describe the problem you faced**
The partitionpath field acts somewhat like another record key (primary
key)
LinMingQiang commented on code in PR #7469:
URL: https://github.com/apache/hudi/pull/7469#discussion_r1184510754
##
hudi-client/hudi-client-common/src/main/java/org/apache/hudi/client/BaseHoodieWriteClient.java:
##
@@ -897,28 +897,40 @@ public HoodieCleanMetadata clean(String
CTTY commented on code in PR #8190:
URL: https://github.com/apache/hudi/pull/8190#discussion_r1184505252
##
hudi-common/src/main/java/org/apache/hudi/metadata/FileSystemBackedTableMetadata.java:
##
@@ -106,9 +106,9 @@ private List getPartitionPathWithPathPrefix(String
boneanxs commented on code in PR #8452:
URL: https://github.com/apache/hudi/pull/8452#discussion_r1184492603
##
hudi-client/hudi-spark-client/src/test/java/org/apache/hudi/io/storage/row/TestHoodieRowCreateHandle.java:
##
@@ -190,16 +189,8 @@ public void
hudi-bot commented on PR #8598:
URL: https://github.com/apache/hudi/pull/8598#issuecomment-1534028913
## CI report:
* 6f5685dd6a464ce37a213b80c5ded5151a2710e5 Azure:
hudi-bot commented on PR #8598:
URL: https://github.com/apache/hudi/pull/8598#issuecomment-1534021989
## CI report:
* 6f5685dd6a464ce37a213b80c5ded5151a2710e5 Azure:
hudi-bot commented on PR #8595:
URL: https://github.com/apache/hudi/pull/8595#issuecomment-1534021944
## CI report:
* 38e644e9bd5d2dfb345dcbab6b6d4946f5124988 Azure:
vinothchandar commented on PR #8472:
URL: https://github.com/apache/hudi/pull/8472#issuecomment-1534017088
@prashantwason @nbalajee @suryaprasanna would this break you all in any way?
Do we need the record data anywhere for successful writes?
cc @rmahindra123 as well. same question.
c-f-cooper commented on code in PR #8596:
URL: https://github.com/apache/hudi/pull/8596#discussion_r1184493448
##
hudi-cli/src/test/java/org/apache/hudi/cli/commands/TestRepairsCommand.java:
##
@@ -234,6 +234,30 @@ public void testOverwriteHoodieProperties() throws
IOException
hudi-bot commented on PR #8595:
URL: https://github.com/apache/hudi/pull/8595#issuecomment-1534013742
## CI report:
* 38e644e9bd5d2dfb345dcbab6b6d4946f5124988 Azure:
danny0405 commented on code in PR #8596:
URL: https://github.com/apache/hudi/pull/8596#discussion_r1184491794
##
hudi-cli/src/test/java/org/apache/hudi/cli/commands/TestRepairsCommand.java:
##
@@ -234,6 +234,30 @@ public void testOverwriteHoodieProperties() throws
IOException
boneanxs commented on PR #8452:
URL: https://github.com/apache/hudi/pull/8452#issuecomment-1534009597
> Caused by: org.eclipse.aether.resolution.ArtifactDescriptorException:
Failed to read artifact descriptor for
org.apache.maven:maven-plugin-api:jar:3.8.6
@bvaradar Hey, it looks like an
boneanxs commented on code in PR #7627:
URL: https://github.com/apache/hudi/pull/7627#discussion_r1184488167
##
hudi-spark-datasource/hudi-spark-common/src/main/scala/org/apache/spark/sql/hudi/streaming/HoodieStreamSource.scala:
##
@@ -163,10 +178,7 @@ class HoodieStreamSource(
xiarixiaoyao commented on PR #8082:
URL: https://github.com/apache/hudi/pull/8082#issuecomment-1533999490
> BaseFileOnlyRelation.scala
the reason why we hard-coded it is that
the parent class of `BaseFileOnlyRelation` is hard-coded to disable
vectorization by default, resulting in the
hudi-bot commented on PR #8595:
URL: https://github.com/apache/hudi/pull/8595#issuecomment-1533985956
## CI report:
* 38e644e9bd5d2dfb345dcbab6b6d4946f5124988 Azure:
hudi-bot commented on PR #8303:
URL: https://github.com/apache/hudi/pull/8303#issuecomment-1533985556
## CI report:
* 3cfef7fc92a6c5ce9bb078a7186e04614c11647f UNKNOWN
* e4144fb95b764a96f71b125bd02fd62bac9f00ba Azure:
PaddyMelody commented on code in PR #8595:
URL: https://github.com/apache/hudi/pull/8595#discussion_r1184474743
##
hudi-flink-datasource/hudi-flink/src/main/java/org/apache/hudi/source/FileIndex.java:
##
@@ -66,15 +66,19 @@ public class FileIndex {
private final RowType
c-f-cooper commented on code in PR #8596:
URL: https://github.com/apache/hudi/pull/8596#discussion_r1184474302
##
hudi-cli/src/test/java/org/apache/hudi/cli/commands/TestRepairsCommand.java:
##
@@ -234,6 +234,30 @@ public void testOverwriteHoodieProperties() throws
IOException
danny0405 commented on code in PR #8596:
URL: https://github.com/apache/hudi/pull/8596#discussion_r1184472771
##
hudi-cli/src/test/java/org/apache/hudi/cli/commands/TestRepairsCommand.java:
##
@@ -234,6 +234,30 @@ public void testOverwriteHoodieProperties() throws
IOException
danny0405 commented on code in PR #8595:
URL: https://github.com/apache/hudi/pull/8595#discussion_r1184472506
##
hudi-flink-datasource/hudi-flink/src/main/java/org/apache/hudi/source/FileIndex.java:
##
@@ -66,15 +66,19 @@ public class FileIndex {
private final RowType
PaddyMelody commented on code in PR #8595:
URL: https://github.com/apache/hudi/pull/8595#discussion_r1184472150
##
hudi-flink-datasource/hudi-flink/src/main/java/org/apache/hudi/source/FileIndex.java:
##
@@ -66,15 +66,19 @@ public class FileIndex {
private final RowType
danny0405 commented on issue #8626:
URL: https://github.com/apache/hudi/issues/8626#issuecomment-1533979912
What are you referring to by `InferSchema`? Is it a builtin functionality of
DeltaStreamer?
danny0405 commented on code in PR #8190:
URL: https://github.com/apache/hudi/pull/8190#discussion_r1184471426
##
hudi-common/src/main/java/org/apache/hudi/metadata/FileSystemBackedTableMetadata.java:
##
@@ -106,9 +106,9 @@ private List getPartitionPathWithPathPrefix(String
[ https://issues.apache.org/jira/browse/HUDI-6111?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17719114#comment-17719114 ]
Ran Tao commented on HUDI-6111:
---
hi. [~guoyihua] my cmd is "mvn clean package -DskipTests". it works in
PaddyMelody commented on code in PR #8595:
URL: https://github.com/apache/hudi/pull/8595#discussion_r1184470886
##
hudi-flink-datasource/hudi-flink/src/main/java/org/apache/hudi/source/FileIndex.java:
##
@@ -66,15 +66,19 @@ public class FileIndex {
private final RowType
danny0405 commented on issue #8617:
URL: https://github.com/apache/hudi/issues/8617#issuecomment-1533976325
Because `HoodieRecord` serializes the inputs into Avro bytes in general, even
though the output file is Parquet
danny0405 commented on code in PR #8595:
URL: https://github.com/apache/hudi/pull/8595#discussion_r1184469395
##
hudi-flink-datasource/hudi-flink/src/main/java/org/apache/hudi/source/FileIndex.java:
##
@@ -66,15 +66,19 @@ public class FileIndex {
private final RowType
c-f-cooper commented on code in PR #8596:
URL: https://github.com/apache/hudi/pull/8596#discussion_r1184469314
##
hudi-cli/src/test/java/org/apache/hudi/cli/commands/TestRepairsCommand.java:
##
@@ -234,6 +234,30 @@ public void testOverwriteHoodieProperties() throws
IOException
PaddyMelody commented on code in PR #8595:
URL: https://github.com/apache/hudi/pull/8595#discussion_r1184454362
##
hudi-flink-datasource/hudi-flink/src/main/java/org/apache/hudi/source/FileIndex.java:
##
@@ -145,7 +149,7 @@ public FileStatus[] getFilesInPartitions() {
danny0405 commented on code in PR #8596:
URL: https://github.com/apache/hudi/pull/8596#discussion_r1184466905
##
hudi-cli/src/test/java/org/apache/hudi/cli/commands/TestRepairsCommand.java:
##
@@ -234,6 +234,30 @@ public void testOverwriteHoodieProperties() throws
IOException
duc-dn commented on issue #7806:
URL: https://github.com/apache/hudi/issues/7806#issuecomment-1533938209
@ad1happy2go Thanks a lot
hudi-bot commented on PR #8303:
URL: https://github.com/apache/hudi/pull/8303#issuecomment-1533904674
## CI report:
* 3cfef7fc92a6c5ce9bb078a7186e04614c11647f UNKNOWN
* 3ad5ae580928952bb601cf90f09abb53d1d436e4 Azure:
hudi-bot commented on PR #8618:
URL: https://github.com/apache/hudi/pull/8618#issuecomment-1533900435
## CI report:
* 62a3bc4cd0e932895bcdb9eb8ae0936348066289 Azure:
hudi-bot commented on PR #8303:
URL: https://github.com/apache/hudi/pull/8303#issuecomment-1533899798
## CI report:
* 3cfef7fc92a6c5ce9bb078a7186e04614c11647f UNKNOWN
* 3ad5ae580928952bb601cf90f09abb53d1d436e4 Azure:
soumilshah1995 commented on issue #8400:
URL: https://github.com/apache/hudi/issues/8400#issuecomment-1533872885
Any updates @ad1happy2go
nsivabalan merged PR #8622:
URL: https://github.com/apache/hudi/pull/8622
sivabalan pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/hudi.git
from 7f41e22eb3b [HUDI-6113] Support multiple transformers using the same
config keys in DeltaStreamer (#8514)
add
soumilshah1995 commented on issue #7879:
URL: https://github.com/apache/hudi/issues/7879#issuecomment-1533835424
@juanAmayaRamirez
lets hop on call here is link
https://meet.google.com/gam-wsca-hxi
juanAmayaRamirez commented on issue #7879:
URL: https://github.com/apache/hudi/issues/7879#issuecomment-1533834470
Thanks for the quick response! (love your videos BTW)
but sorry to say that I am getting the same error.
`An error occurred while calling o110.getDynamicFrame. Reads
soumilshah1995 commented on issue #7879:
URL: https://github.com/apache/hudi/issues/7879#issuecomment-1533820177
Hey Buddy @juanAmayaRamirez
just use Glue 4.0 and pass these params and it will be fixed
```
"""
--additional-python-modules | faker==11.3.0
--conf |
juanAmayaRamirez commented on issue #7879:
URL: https://github.com/apache/hudi/issues/7879#issuecomment-1533818465
Hi @soumilshah1995, just here to ask what the issue was.
I am having a similar issue with Lake Formation that I can't figure
out when trying to read a Hudi table from
hudi-bot commented on PR #8618:
URL: https://github.com/apache/hudi/pull/8618#issuecomment-1533814379
## CI report:
* 17d27ed9986c621ceb8bd576931349d58d0269f8 Azure:
hudi-bot commented on PR #8490:
URL: https://github.com/apache/hudi/pull/8490#issuecomment-1533813860
## CI report:
* 7575e66d6a48d702fe1e8d4670cb0890b370e94b Azure:
hudi-bot commented on PR #8618:
URL: https://github.com/apache/hudi/pull/8618#issuecomment-1533806869
## CI report:
* 17d27ed9986c621ceb8bd576931349d58d0269f8 Azure:
psendyk commented on PR #8627:
URL: https://github.com/apache/hudi/pull/8627#issuecomment-1533771953
my bad, meant to open a PR into my fork
psendyk closed pull request #8627: keep a single random record instance
URL: https://github.com/apache/hudi/pull/8627
hudi-bot commented on PR #8627:
URL: https://github.com/apache/hudi/pull/8627#issuecomment-1533751232
## CI report:
* a3c973f33153bccbf78c4c0c7ecb60e6d852bd0f UNKNOWN
psendyk commented on code in PR #8627:
URL: https://github.com/apache/hudi/pull/8627#discussion_r1184277562
##
hudi-cli/src/main/scala/org/apache/hudi/cli/DedupeSparkJob.scala:
##
@@ -100,81 +106,31 @@ class DedupeSparkJob(basePath: String,
getDedupePlan(dupeMap)
}
-
kazdy commented on PR #7922:
URL: https://github.com/apache/hudi/pull/7922#issuecomment-1533683312
> @kazdy : Is this PR still required ?
Yes it is. I had issues running the integration tests on M1; I have not had
time to run them on my AMD box yet
hudi-bot commented on PR #8490:
URL: https://github.com/apache/hudi/pull/8490#issuecomment-1533665739
## CI report:
* 3d7d1f6d3da030e8416a24a9e1e61f191ba40271 Azure:
hudi-bot commented on PR #8490:
URL: https://github.com/apache/hudi/pull/8490#issuecomment-1533654796
## CI report:
* 8ee4e9f6036cdaf1241665ef853a5297f422a59e Azure:
hudi-bot commented on PR #8574:
URL: https://github.com/apache/hudi/pull/8574#issuecomment-1533644900
## CI report:
* cfa118d8ae39e0cf4bb128dae0893f930c05b38c Azure:
hudi-bot commented on PR #8490:
URL: https://github.com/apache/hudi/pull/8490#issuecomment-1533558714
## CI report:
* 8ee4e9f6036cdaf1241665ef853a5297f422a59e Azure:
hudi-bot commented on PR #8490:
URL: https://github.com/apache/hudi/pull/8490#issuecomment-1533547804
## CI report:
* 8ee4e9f6036cdaf1241665ef853a5297f422a59e Azure:
sydneyhoran commented on issue #8519:
URL: https://github.com/apache/hudi/issues/8519#issuecomment-1533514539
The reason I was getting an error when deleting records without a tombstone was
that we were testing by starting from a midpoint of a Kafka topic, so I
suspect DeltaStreamer didn't
sydneyhoran commented on issue #8519:
URL: https://github.com/apache/hudi/issues/8519#issuecomment-1533511853
Thanks to help from Aditya, @rmahindra123 and @nsivabalan , this was the
fix that worked for us to filter out tombstones:
willforevercn commented on issue #8617:
URL: https://github.com/apache/hudi/issues/8617#issuecomment-1533471395
The sample code snippet uses a COW table, but I still see the error.
hudi-bot commented on PR #7359:
URL: https://github.com/apache/hudi/pull/7359#issuecomment-1533462978
## CI report:
* 2999c56d853134e8476908b79ce77737293ce867 Azure:
hudi-bot commented on PR #8574:
URL: https://github.com/apache/hudi/pull/8574#issuecomment-1533395428
## CI report:
* 2002f1535315a129bfd8b3985e0e5691ca75b2e9 Azure:
hudi-bot commented on PR #8574:
URL: https://github.com/apache/hudi/pull/8574#issuecomment-1533376928
## CI report:
* 2002f1535315a129bfd8b3985e0e5691ca75b2e9 Azure:
[ https://issues.apache.org/jira/browse/HUDI-5493?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17718975#comment-17718975 ]
Lokesh Jain commented on HUDI-5493:
---
All the known gaps related to clustering and archival are fixed
[ https://issues.apache.org/jira/browse/HUDI-6113?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
Lokesh Jain resolved HUDI-6113.
---
> Support multiple transformers using the same config keys in DeltaStreamer
>
sivabalan pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/hudi.git
The following commit(s) were added to refs/heads/master by this push:
new 7f41e22eb3b [HUDI-6113] Support multiple
nsivabalan merged PR #8514:
URL: https://github.com/apache/hudi/pull/8514
hudi-bot commented on PR #8596:
URL: https://github.com/apache/hudi/pull/8596#issuecomment-1533214279
## CI report:
* dbc08754cbe6334473bb72bdc1f0f6ceb39fecfe Azure:
hudi-bot commented on PR #7359:
URL: https://github.com/apache/hudi/pull/7359#issuecomment-1533210204
## CI report:
* 6371776cabd9b1ba518eced2e3f8611e4a5bd641 Azure:
ad1happy2go commented on issue #8623:
URL: https://github.com/apache/hudi/issues/8623#issuecomment-1533201402
Can you try checking out 0.13.0? I see a similar issue with respect to
master for this ticket: https://github.com/apache/hudi/issues/8447
hudi-bot commented on PR #7359:
URL: https://github.com/apache/hudi/pull/7359#issuecomment-1533194573
## CI report:
* 6371776cabd9b1ba518eced2e3f8611e4a5bd641 Azure:
alberttwong commented on issue #8623:
URL: https://github.com/apache/hudi/issues/8623#issuecomment-1533131272
Using instructions at https://hudi.apache.org/docs/docker_demo