ashoksri0 commented on issue #3299:
URL: https://github.com/apache/iceberg/issues/3299#issuecomment-947684021
> > > From the test files `TestRewriteFiles.java` and `TestCreateActions.java`, you could figure out the following sample code:
> > > ```java
> > > TestTables.TestTable table = TestTables.create(tableDir, "test", schema, spec, formatVersion);
> > > // or, instead of creating a new HiveCatalog, use the internal one:
> > > // TableCatalog catalog = (TableCatalog) spark.sessionState().catalogManager().catalog("spark_catalog");
> > > // SparkTable table = catalog.loadTable("SUBSCRIBER");
> > > table.newRewrite()
> > >     .rewriteFiles(ImmutableSet.of(FILE_A), ImmutableSet.of(),
> > >         ImmutableSet.of(), ImmutableSet.of(FILE_A_DELETES))
> > >     .apply();
> > > ```
> > >
> > > I think you are missing the `apply()` call here. Or, more simply:
> > > ```java
> > > spark.read().parquet("/path.parquet").writeTo("SUBSCRIBER").append();
> > > ```
> >
> > I added `apply()` to the append, but I still get the same "no data directory found" error, and the table is unreadable from Trino. I need a Java-only solution; from the Spark side everything works well, but I cannot use Spark right now. The input parquet files live in a local directory or in the HDFS file system.
>
> Sorry about my previous reply: `apply()` just returns a `BaseSnapshot` instance; `commit()` is what actually persists the metadata and data files.
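
A minimal sketch of that difference, reusing the `table`, `FILE_A`, and `FILE_A_DELETES` handles from the snippet above (an illustration only, not verified against this setup):

```java
import org.apache.iceberg.RewriteFiles;
import org.apache.iceberg.Snapshot;
import com.google.common.collect.ImmutableSet; // or Iceberg's relocated Guava

// apply() only builds the pending snapshot in memory; nothing reaches storage.
RewriteFiles rewrite = table.newRewrite()
    .rewriteFiles(ImmutableSet.of(FILE_A), ImmutableSet.of(),
        ImmutableSet.of(), ImmutableSet.of(FILE_A_DELETES));

Snapshot staged = rewrite.apply(); // returns the staged snapshot for inspection
rewrite.commit();                  // writes new metadata and makes the snapshot current
```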
I tried with `commit()`: I now get many versions of metadata and Avro files, but no data files. I do not know whether I need to take the input from a file path in HDFS or from my local file system.
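
In case it helps, the plain-Java route for parquet files that already sit in HDFS or on a local disk is to describe each file with Iceberg's `DataFiles` builder and append it; a rough sketch with placeholder path, size, and record count (the file's schema has to match the table schema):

```java
import org.apache.iceberg.DataFile;
import org.apache.iceberg.DataFiles;
import org.apache.iceberg.FileFormat;

// Describe an existing parquet file; the path, size, and row count below are placeholders.
DataFile dataFile = DataFiles.builder(table.spec())
    .withPath("hdfs://namenode/path/to/input.parquet") // or a file:// path for local disk
    .withFormat(FileFormat.PARQUET)
    .withFileSizeInBytes(1024L) // the real size of the file in bytes
    .withRecordCount(100L)      // the real number of rows in the file
    .build();

// Append the file and commit; the commit is what makes it visible to readers like Trino.
table.newAppend()
    .appendFile(dataFile)
    .commit();
```

Note that this registers the file in place; Iceberg does not copy it under the table's `data/` directory, so do not expect one to appear. For a partitioned table you would also need `withPartitionPath(...)` on the builder, and the file location has to be reachable from Trino.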