Hi Iceberg Community,

Here are the minutes and recording from our Iceberg Sync.
As always, anyone can join the discussion, so feel free to share the Iceberg-Sync <https://groups.google.com/g/iceberg-sync> Google group with anyone seeking an invite. The notes and agenda are posted in the Iceberg Sync doc <https://docs.google.com/document/d/1YuGhUdukLP5gGiqCbk0A5_Wifqe2CZWgOd3TbhY3UQg/edit?usp=drive_web>, which is also attached to the meeting invitation. It's an excellent place to add items you'd like to discuss at the next community sync.

Meeting Recording <https://drive.google.com/file/d/14ZxBN5I41yE9x3qllzO2uK8LMXi5-N7f/view?usp=sharing>
⭕ Meeting Transcript <https://docs.google.com/document/d/12pOgbrKDh0YR1K6XctvVvbh5y8m2TqIdxbesEGFzcRA/edit?usp=sharing>

- Highlights
  - Flink support for inspecting metadata tables <https://github.com/apache/iceberg/pull/6222> (Thanks, Liwei Li!)
  - Flink read and write support for Avro GenericRecord (Thanks, Steven Wu!)
  - Implemented branch commits for all operations (Thanks, Namratha and Amogh!)
  - Added CREATE/REPLACE branch syntax (Thanks, Amogh, Liwei, Xuwei, and Chidayong!)
  - Added branch/tag support to VERSION AS OF in Spark (Thanks, Jack!) - see the short SQL sketch at the end of these notes
  - Added position deletes metadata table (Thanks, Szehon!)
  - Added a Snowflake catalog (Thanks, Dennis!)
  - Improved filter pruning in Spark (Thanks, Anton!)
  - Added Delta to Iceberg table conversion (Thanks, Rushan and Eric!)
- Releases
  - Python 0.3.0
    - Please vote!
  - Java 1.2.0
    - RM: Jack
    - Default distribution mode for Spark MERGE
    - Testing 1.2.0 with Trino <https://github.com/trinodb/trino/pull/15726> / Presto <https://github.com/prestodb/presto/pull/18934> for early feedback
- Discussion
  - Change Default Write Distribution Mode <https://github.com/apache/iceberg/issues/6679>
    - Hash vs. range for the default MERGE distribution mode
    - Add a SQL conf to control merge mode (need to consider precedence)
    - Change the MERGE default to hash
    - Change the default write distribution from none to hash for partitioned tables
  - Branch write configuration in Spark
    - SQL option for setting up branch writes
    - Open a PR for the branch SQL option
  - Changelog scan status
    - Changelog table scan supports v1 tables
    - Start/end snapshots as options
    - PR open to support v2 tables
    - PR to create a Spark view that produces pre- and post-images
  - S3FileIO Can Create Non-Posix Paths <https://github.com/apache/iceberg/issues/6758>
    - Quick fix (Russ): add stripping of the trailing slash to metadata.location()
    - Jack: a POSIX check flag for optional enforcement
      - Don't want to force this on everyone in the future
  - Hudi Iceberg Conversion: Snapshot Hudi COW table to an Iceberg table <https://github.com/apache/iceberg/pull/6642>
  - Materialized view spec

Please vote for the "Design for approval <https://docs.google.com/document/d/1vaufuD47kMijz97LxM67X8OX-W2Wq7nmlz3jRo8J5Qk/edit?usp=sharing>" section of partition stats. Context about the voting request is in the mailing list <https://www.mail-archive.com/[email protected]/msg04029.html>.

Thanks, everyone!
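
P.S. For anyone who wants to try the branch syntax and the proposed hash distribution default mentioned above, here is a minimal Spark SQL sketch. The table name (db.events) and branch name (audit) are made up for illustration, and it assumes the Iceberg Spark SQL extensions are enabled; exact behavior may vary with your Iceberg and Spark versions.

  -- Create a named branch on the table (CREATE/REPLACE branch syntax)
  ALTER TABLE db.events CREATE BRANCH audit;

  -- Read a branch (or tag) by name with VERSION AS OF
  SELECT * FROM db.events VERSION AS OF 'audit';

  -- Opt a partitioned table into hash write distribution, the default
  -- proposed in the distribution-mode discussion (the current default is none)
  ALTER TABLE db.events SET TBLPROPERTIES ('write.distribution-mode' = 'hash');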
