lonless9 commented on code in PR #1854:
URL: https://github.com/apache/iceberg-rust/pull/1854#discussion_r2553042310


##########
docs/rfcs/0001_modularize_iceberg_implementations.md:
##########
@@ -0,0 +1,196 @@
+<!--
+  ~ Licensed to the Apache Software Foundation (ASF) under one
+  ~ or more contributor license agreements.  See the NOTICE file
+  ~ distributed with this work for additional information
+  ~ regarding copyright ownership.  The ASF licenses this file
+  ~ to you under the Apache License, Version 2.0 (the
+  ~ "License"); you may not use this file except in compliance
+  ~ with the License.  You may obtain a copy of the License at
+  ~
+  ~   http://www.apache.org/licenses/LICENSE-2.0
+  ~
+  ~ Unless required by applicable law or agreed to in writing,
+  ~ software distributed under the License is distributed on an
+  ~ "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+  ~ KIND, either express or implied.  See the License for the
+  ~ specific language governing permissions and limitations
+  ~ under the License.
+-->
+
+# RFC: Modularize `iceberg` Implementations
+
+## Background
+
+Issue #1819 highlighted that the current `iceberg` crate mixes the Iceberg 
protocol abstractions (catalog/table/plan/transaction) with concrete runtime, 
storage, and execution implementations (Tokio runtime wrappers, opendal-based 
`FileIO`, Arrow readers, DataFusion helpers, etc.). This makes the crate heavy, 
couples unrelated dependencies, and prevents users from bringing their own 
engines or storage stacks.
+
+After recent maintainer discussions we agreed on two principles:
+1. The `iceberg` crate itself remains the single source of truth for all 
protocol traits and data structures.
+2. All concrete integrations (Tokio runtime, opendal `FileIO`, 
Arrow/DataFusion executors, catalog adapters, etc.) move out of `iceberg` into 
dedicated companion crates. Users who need a ready-made execution path can 
depend on those crates (for example `iceberg-datafusion`) while users building 
custom stacks can depend solely on `iceberg`.
+
+This RFC describes the plan to slim down `iceberg` into a pure protocol crate 
and to reorganize the workspace around pluggable companion crates.
+
+## Goals and Scope
+
+- **Keep `iceberg` as the protocol crate**: it exposes all traits (`Catalog`, 
`Table`, `Transaction`, `FileIO`, `Runtime`, `ScanPlan`, etc.) plus 
metadata/plan logic, but no longer ships concrete runtimes or storage adapters.
+- **Detach embedded implementations**: move opendal-based IO, Tokio runtime 
helpers, Arrow converters, and similar code into separate crates under 
`crates/fileio/*`, `crates/runtime/*`, `crates/engine/*`, or existing 
integration crates.
+- **Enable composable combinations**: users assemble the stack they need by 
combining `iceberg` with specific implementation crates (e.g., 
`iceberg-fileio-opendal`, `iceberg-runtime-tokio`, `iceberg-engine-arrow`, 
`iceberg-datafusion`).
+- **Minimize breaking surfaces**: trait APIs stay in `iceberg`; downstream 
crates only adjust their dependency graph.
+
+Out of scope: changing the Iceberg table specification or rewriting catalog 
adapters’ external behavior.
+
+## Architecture Overview
+
+### Workspace Layout
+
+```
+crates/
+  iceberg/                # core traits, metadata, planning, transactions
+  fileio/
+    opendal/             # e.g. `iceberg-fileio-opendal`
+    fs/                  # other FileIO implementations
+  runtime/
+    tokio/               # e.g. `iceberg-runtime-tokio`
+    smol/
+  engine/
+    arrow/               # Arrow executor & schema helpers
+  catalog/*              # catalog adapters (REST, HMS, Glue, etc.)
+  integrations/
+    datafusion/          # combines core + implementations for DF
+    cache-moka/
+    playground/
+```
+
+- `crates/iceberg` no longer depends on opendal, Tokio, Arrow, or DataFusion.
+- Implementation crates depend on `iceberg` to get the trait surfaces they 
implement.
+- Higher-level crates (e.g., `iceberg-datafusion`) pull in the required 
runtime/FileIO/executor crates and expose an opinionated combination.
+
+### Core Trait Surfaces (within `iceberg`)
+
+#### FileIO
+
+```rust
+pub struct FileMetadata {
+    pub size: u64,
+    ...
+}
+
+pub type FileReader = Box<dyn FileRead>;
+
+#[async_trait::async_trait]
+pub trait FileRead: Send + Sync + 'static {
+    async fn read(&self, range: Range<u64>) -> Result<Bytes>;
+}
+
+pub type FileWriter = Box<dyn FileWrite>;
+
+#[async_trait::async_trait]
+pub trait FileWrite: Send + Unpin + 'static {
+    async fn write(&mut self, bs: Bytes) -> Result<()>;
+    async fn close(&mut self) -> Result<FileMetadata>;
+}
+
+pub type StorageFactory =
+    fn(attrs: HashMap<String, String>) -> Result<Arc<dyn Storage>>;
+
+#[async_trait::async_trait]
+pub trait Storage: Send + Sync {
+    async fn reader(&self, path: &str) -> Result<FileReader>;
+    async fn writer(&self, path: &str) -> Result<FileWriter>;
+    async fn delete(&self, path: &str) -> Result<()>;
+    async fn exists(&self, path: &str) -> Result<bool>;
+
+    ...
+}
+
+pub struct FileIO {
+    registry: DashMap<String, StorageFactory>,
+}
+
+impl FileIO {
+    fn register(&self, scheme: &str, factory: StorageFactory);
+
+    async fn read(&self, path: &str) -> Result<Bytes>;
+    async fn reader(&self, path: &str) -> Result<FileReader>;
+    async fn write(&self, path: &str, bs: Bytes) -> Result<FileMetadata>;
+    async fn writer(&self, path: &str) -> Result<FileWriter>;
+
+    async fn delete(&self, path: &str) -> Result<()>;
+    ...
+}
+```
+
+- `FileRead` / `FileWrite` remain Iceberg-specific traits (range reads, 
metrics hooks, abort/commit) and live inside `iceberg`.
+- Concrete implementations (opendal, local FS, custom stores) live in 
companion crates and return trait objects.
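
To make the intended split concrete, here is a simplified sketch of the scheme-based dispatch. All names are illustrative (not the actual `iceberg` API), and it is written synchronously with `Vec<u8>` in place of `async_trait`/`Bytes` so the example is self-contained:

```rust
use std::collections::HashMap;

type Result<T> = std::result::Result<T, String>;

// Stands in for the `Storage` trait surface that stays in `iceberg`.
trait Storage {
    fn read(&self, path: &str) -> Result<Vec<u8>>;
}

// Hypothetical companion-crate implementation (stands in for opendal,
// local FS, or a custom store).
struct MemoryStorage;

impl Storage for MemoryStorage {
    fn read(&self, path: &str) -> Result<Vec<u8>> {
        Ok(format!("contents of {path}").into_bytes())
    }
}

type StorageFactory = fn(attrs: HashMap<String, String>) -> Result<Box<dyn Storage>>;

fn memory_factory(_attrs: HashMap<String, String>) -> Result<Box<dyn Storage>> {
    Ok(Box::new(MemoryStorage))
}

struct FileIO {
    registry: HashMap<String, StorageFactory>,
}

impl FileIO {
    fn new() -> Self {
        Self { registry: HashMap::new() }
    }

    // Companion crates register their factory under a URI scheme.
    fn register(&mut self, scheme: &str, factory: StorageFactory) {
        self.registry.insert(scheme.to_string(), factory);
    }

    // Dispatch on the scheme, e.g. "mem://table/metadata.json" -> "mem".
    fn read(&self, path: &str) -> Result<Vec<u8>> {
        let (scheme, _) = path.split_once("://").ok_or("missing scheme")?;
        let factory = self
            .registry
            .get(scheme)
            .ok_or_else(|| format!("unregistered scheme: {scheme}"))?;
        factory(HashMap::new())?.read(path)
    }
}

fn main() {
    let mut io = FileIO::new();
    io.register("mem", memory_factory);
    let bytes = io.read("mem://table/metadata.json").unwrap();
    println!("{}", String::from_utf8(bytes).unwrap());
}
```

The core crate only ever sees `Box<dyn Storage>` through the registered factory, which is what lets implementations move out without touching the trait surface.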
+
+#### Runtime
+
+```rust
+pub trait Runtime: Send + Sync + 'static {
+    type JoinHandle<T: Send + 'static>: Future<Output = T> + Send + 'static;
+
+    fn spawn<F, T>(&self, fut: F) -> Self::JoinHandle<T>
+    where
+        F: Future<Output = T> + Send + 'static,
+        T: Send + 'static;
+
+    fn sleep(&self, dur: Duration) -> Pin<Box<dyn Future<Output = ()> + Send>>;
+}
+```
+
+- The trait lives in `iceberg`; crates like `iceberg-runtime-tokio` implement 
it and expose constructors.
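
For illustration, here is a std-only sketch of implementing the trait. The `BlockingRuntime` and the busy-polling `block_on` are toy constructs for self-containedness; a real companion crate such as the proposed `iceberg-runtime-tokio` would delegate to its executor's spawn/sleep instead:

```rust
use std::{
    future::{self, Future, Ready},
    pin::Pin,
    task::{Context, Poll, RawWaker, RawWakerVTable, Waker},
    time::Duration,
};

// The trait as proposed, with `Send + 'static` bounds on the GAT so that
// concrete handle types can satisfy the `Send` requirement.
pub trait Runtime: Send + Sync + 'static {
    type JoinHandle<T: Send + 'static>: Future<Output = T> + Send + 'static;

    fn spawn<F, T>(&self, fut: F) -> Self::JoinHandle<T>
    where
        F: Future<Output = T> + Send + 'static,
        T: Send + 'static;

    fn sleep(&self, dur: Duration) -> Pin<Box<dyn Future<Output = ()> + Send>>;
}

fn noop_clone(_: *const ()) -> RawWaker {
    RawWaker::new(std::ptr::null(), &NOOP_VTABLE)
}
fn noop(_: *const ()) {}
static NOOP_VTABLE: RawWakerVTable = RawWakerVTable::new(noop_clone, noop, noop, noop);

/// Busy-polls a future to completion with a no-op waker (sketch only).
fn block_on<F: Future>(fut: F) -> F::Output {
    let waker = unsafe { Waker::from_raw(RawWaker::new(std::ptr::null(), &NOOP_VTABLE)) };
    let mut cx = Context::from_waker(&waker);
    let mut fut = std::pin::pin!(fut);
    loop {
        if let Poll::Ready(v) = fut.as_mut().poll(&mut cx) {
            return v;
        }
        std::thread::yield_now();
    }
}

/// Runs spawned futures to completion immediately on the current thread.
pub struct BlockingRuntime;

impl Runtime for BlockingRuntime {
    type JoinHandle<T: Send + 'static> = Ready<T>;

    fn spawn<F, T>(&self, fut: F) -> Self::JoinHandle<T>
    where
        F: Future<Output = T> + Send + 'static,
        T: Send + 'static,
    {
        future::ready(block_on(fut))
    }

    fn sleep(&self, dur: Duration) -> Pin<Box<dyn Future<Output = ()> + Send>> {
        // Blocking sleep wrapped in an already-ready future: acceptable in
        // a sketch, not in a real async runtime.
        std::thread::sleep(dur);
        Box::pin(future::ready(()))
    }
}

fn main() {
    let rt = BlockingRuntime;
    let handle = rt.spawn(async { 40 + 2 });
    println!("spawned future returned {}", block_on(handle));
}
```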
+
+#### Catalog / Table / Transaction / Scan
+
+- All existing traits and data structures remain in `crates/iceberg`.
+- `TableScan` continues to emit pure plan descriptors; executors interpret 
them.
+- `Transaction` uses injected `Runtime` for retry/backoff but otherwise stays 
unchanged.
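
The plan/execution split above can be sketched as follows. The type names here are hypothetical, not the actual `iceberg` types: the point is that the core crate emits plain-data descriptors, and an engine crate (Arrow, DataFusion, ...) supplies the executor:

```rust
/// A pure plan descriptor: plain data, no IO handles, no engine types.
#[derive(Debug, Clone)]
struct FileScanTask {
    data_file_path: String,
    start: u64,
    length: u64,
}

/// Implemented by engine crates; the core crate never executes plans.
trait PlanExecutor {
    type Output;
    fn execute(&self, tasks: &[FileScanTask]) -> Self::Output;
}

/// A toy executor that just sums the planned byte ranges, standing in for
/// an Arrow/DataFusion executor that would actually read the files.
struct ByteCountExecutor;

impl PlanExecutor for ByteCountExecutor {
    type Output = u64;
    fn execute(&self, tasks: &[FileScanTask]) -> u64 {
        tasks.iter().map(|t| t.length).sum()
    }
}

fn main() {
    let plan = vec![
        FileScanTask { data_file_path: "s3://bucket/a.parquet".into(), start: 0, length: 1024 },
        FileScanTask { data_file_path: "s3://bucket/b.parquet".into(), start: 0, length: 2048 },
    ];
    println!("planned bytes: {}", ByteCountExecutor.execute(&plan));
}
```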
+
+### Usage Modes
+
+- **Custom stacks**: depend solely on `iceberg` plus self-authored 
implementations that satisfy the traits.
+- **Pre-built stacks**: depend on crates such as `iceberg-datafusion` that 
bundle `iceberg` with `iceberg-runtime-tokio`, `iceberg-fileio-opendal`, and 
`iceberg-engine-arrow` (and expose higher-level APIs).
+- `iceberg` itself does not re-export any of the companion crates; users 
compose them explicitly.
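
The two usage modes translate directly into dependency graphs. A hypothetical `Cargo.toml` for each mode might look like this (crate names follow the proposed layout; versions are placeholders):

```toml
# Custom stack: protocol crate plus self-authored implementations.
[dependencies]
iceberg = "x.y"                      # traits, metadata, planning only
my-custom-fileio = { path = "..." }  # your own `Storage` implementation

# Pre-built stack (alternative): one opinionated bundle.
# iceberg-datafusion = "x.y"  # internally depends on iceberg-runtime-tokio,
#                             # iceberg-fileio-opendal, iceberg-engine-arrow
```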
+
+## Migration Plan
+
+1. **Phase 1 – Slim down `crates/iceberg`**
+   - Remove direct dependencies on opendal, Tokio, Arrow, and DataFusion from 
`iceberg`.

Review Comment:
   The migration difficulty of these dependencies varies, so they should be prioritized separately. Some have already been migrated, while others still require significant changes. In particular, I suspect migrating the Arrow dependency is not urgent: remaining in an intermediate migration state for an extended period is acceptable, and removing Arrow could be treated as a final goal rather than an early phase.



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]

