sivakanthavel-tigeranalytics commented on issue #10003:
URL: https://github.com/apache/iceberg/issues/10003#issuecomment-2138905817
Hello @matepek @ajantha-bhat,
I need some suggestions!
I am using org.apache.iceberg.spark.SparkSessionCatalog instead of
SparkCatalog. I am able
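For reference, a minimal sketch of the kind of configuration being discussed here — note that the catalog name, Nessie URI, and warehouse path below are placeholder assumptions, not values taken from this thread:

```properties
# SparkSessionCatalog wraps Spark's built-in session catalog and delegates
# Iceberg tables to the Iceberg catalog implementation selected by `type`.
spark.sql.catalog.spark_catalog=org.apache.iceberg.spark.SparkSessionCatalog
spark.sql.catalog.spark_catalog.type=nessie
spark.sql.catalog.spark_catalog.uri=http://localhost:19120/api/v2
spark.sql.catalog.spark_catalog.warehouse=s3://warehouse/
```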
matepek commented on issue #10003:
URL: https://github.com/apache/iceberg/issues/10003#issuecomment-2040974202
Thank you for the answers.
> fundamentally that issue is the same
Yes, I think I understand that. What surprises me is why, for that table
creation call stack, I
ajantha-bhat commented on issue #10003:
URL: https://github.com/apache/iceberg/issues/10003#issuecomment-2034277214
The namespace has to be created explicitly in Nessie, as described in
https://projectnessie.org/blog/namespace-enforcement/
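As a concrete illustration of that enforcement, creating the namespace before the table typically looks like this in Spark SQL (the catalog, namespace, and table names below are placeholders):

```sql
-- Nessie rejects creating a table in a namespace that does not exist yet,
-- so create the namespace explicitly first.
CREATE NAMESPACE IF NOT EXISTS nessie_catalog.db;

-- Only then can tables be created inside it.
CREATE TABLE nessie_catalog.db.events (id BIGINT, ts TIMESTAMP) USING iceberg;
```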
nastra commented on issue #10003:
URL: https://github.com/apache/iceberg/issues/10003#issuecomment-2034239933
@matepek fundamentally that issue is the same as I described in
https://github.com/apache/iceberg/issues/10003#issuecomment-2007780751.
`SparkSessionCatalog` doesn't create a
matepek commented on issue #10003:
URL: https://github.com/apache/iceberg/issues/10003#issuecomment-2027977162
Not strictly related, but I'm kinda stuck with this:
Using SparkSessionCatalog with NessieCatalog, I cannot create an Iceberg table:
```
create or replace table
```
matepek commented on issue #10003:
URL: https://github.com/apache/iceberg/issues/10003#issuecomment-2027948348
I'm working on something like
[this](https://github.com/apache/iceberg/compare/main...matepek:iceberg:main).
It would fix
matepek commented on issue #10003:
URL: https://github.com/apache/iceberg/issues/10003#issuecomment-2009415954
Okay, for DBT, sadly I need the SparkSessionCatalog, as I suspected before.
I tried almost everything; it's a pain otherwise.
We had been using a REST catalog, so I'm surprised we
nastra commented on issue #10003:
URL: https://github.com/apache/iceberg/issues/10003#issuecomment-2008952584
> `spark.sql.catalog.spark_catalog.type` was configured to `jdbc` which was
actually a mistake of mine.
Yes, exactly, that's what I meant. That would use the JDBC catalog
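To illustrate the mix-up being described: the `type` property under the catalog name is what selects the Iceberg catalog implementation, so with `jdbc` configured, metadata goes through the JDBC catalog regardless of any Nessie settings elsewhere. A sketch of the two settings side by side (values are illustrative):

```properties
# Mistaken: routes Iceberg table metadata through the JDBC catalog.
spark.sql.catalog.spark_catalog.type=jdbc

# Intended: use the Nessie catalog implementation instead.
spark.sql.catalog.spark_catalog.type=nessie
```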
matepek commented on issue #10003:
URL: https://github.com/apache/iceberg/issues/10003#issuecomment-2008666705
I see what you meant now.
`spark.sql.catalog.spark_catalog.type` was configured to `jdbc` which was
actually a mistake of mine.
But not defining the `spark_catalog`
matepek commented on issue #10003:
URL: https://github.com/apache/iceberg/issues/10003#issuecomment-2008652290
What do you mean by saying that I'm using the JDBC catalog? I thought
`spark.sql.catalogImplementation = hive` sets it to the Hive catalog.
(I know I have a knowledge gap, and I'm trying
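One point worth separating out here, sketched below with illustrative values: `spark.sql.catalogImplementation` only chooses Spark's *built-in* session catalog backend (`hive` or `in-memory`); it is independent of the Iceberg catalog `type` configured under `spark.sql.catalog.<name>.*`, which is what actually decides where Iceberg table metadata lives.

```properties
# Chooses the backend of Spark's built-in session catalog only.
spark.sql.catalogImplementation=hive

# Independent of the above: the Iceberg catalog plugged in under a name,
# whose metadata backend is selected by its own `type` property.
spark.sql.catalog.spark_catalog=org.apache.iceberg.spark.SparkSessionCatalog
spark.sql.catalog.spark_catalog.type=nessie
```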