This is an automated email from the ASF dual-hosted git repository.
chenliang613 pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/carbondata.git
The following commit(s) were added to refs/heads/master by this push:
new 80bada924e Revise README for AI-native data clarity (#4370)
80bada924e is described below
commit 80bada924e232ece18a38a290359f0408f61757d
Author: Liang Chen <[email protected]>
AuthorDate: Mon Oct 13 22:08:53 2025 +0800
Revise README for AI-native data clarity (#4370)
Updated the README to clarify the concept of AI-native data and its
relevance to CarbonData. Improved formatting for better readability.
---
AI-DATA/README.md | 20 +++++++++-----------
1 file changed, 9 insertions(+), 11 deletions(-)
diff --git a/AI-DATA/README.md b/AI-DATA/README.md
index ef40ce0a33..fe052badce 100644
--- a/AI-DATA/README.md
+++ b/AI-DATA/README.md
@@ -18,27 +18,25 @@
<img src="/docs/images/CarbonData_logo.png" width="200" height="40">
-## What is AI-native data storage
+## What is AI-native data
-* AI-native data storage is a data storage and management system designed and built specifically for the needs of artificial intelligence (AI) workloads, particularly machine learning and deep learning. Its core concept is to transform data storage from a passive, isolated component of the AI process into an active, intelligent, and deeply integrated infrastructure.
+AI-native data storage is a data storage and management system designed and built specifically for the needs of artificial intelligence (AI) workloads, particularly machine learning and deep learning. Its core concept is to transform data storage from a passive, isolated component of the AI process into an active, intelligent, and deeply integrated infrastructure.
-## Why AI-native data storage for CarbonData's new scope
+## Why AI-native data for CarbonData's new scope
In AI projects, data scientists and engineers spend 80% of their time on data
preparation. Traditional storage presents numerous bottlenecks in this process:
-Data silos: Training data may be scattered across data lakes, data warehouses, file systems, object storage, and other locations, making integration difficult.
+* Data silos: Training data may be scattered across data lakes, data warehouses, file systems, object storage, and other locations, making integration difficult.
-Performance bottlenecks:
+* Performance bottlenecks: Training phase: High-speed, low-latency data throughput is required to feed GPUs to avoid expensive GPU resources sitting idle.
-Training phase: High-speed, low-latency data throughput is required to feed GPUs to avoid expensive GPU resources sitting idle.
+* Inference phase: High-concurrency, low-latency vector similarity search capabilities are required.
-Inference phase: High-concurrency, low-latency vector similarity search capabilities are required.
+* Complex data formats: AI processes data types far beyond tables, including unstructured data (images, videos, text, audio) and semi-structured data (JSON, XML). Traditional databases have limited capabilities for processing and querying such data.
-Complex data formats: AI processes data types far beyond tables, including unstructured data (images, videos, text, audio) and semi-structured data (JSON, XML). Traditional databases have limited capabilities for processing and querying such data.
+* Lack of metadata management: The lack of effective management of rich metadata such as data versions, lineage, annotation information, and experimental parameters leads to poor experimental reproducibility.
-Lack of metadata management: The lack of effective management of rich metadata such as data versions, lineage, annotation information, and experimental parameters leads to poor experimental reproducibility.
-
-Vectorization requirements: Modern AI models (such as large language models) convert all data into vector embeddings. Traditional storage cannot efficiently store and retrieve high-dimensional vectors.
+* Vectorization requirements: Modern AI models (such as large language models) convert all data into vector embeddings. Traditional storage cannot efficiently store and retrieve high-dimensional vectors.
## About
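
For context on the vector similarity search requirement the revised README calls out, below is a minimal brute-force sketch of cosine-similarity search over embeddings. It is illustrative only: the function name, dimensions, and random data are assumptions for the example, not a CarbonData API, and it uses plain NumPy.

```python
# Minimal sketch of brute-force cosine-similarity search over embeddings.
# All names and sizes here are illustrative; this is not a CarbonData API.
import numpy as np

def top_k_similar(query: np.ndarray, corpus: np.ndarray, k: int = 5) -> np.ndarray:
    """Return indices of the k corpus vectors most cosine-similar to query."""
    # Normalize both sides so a dot product equals cosine similarity.
    q = query / np.linalg.norm(query)
    c = corpus / np.linalg.norm(corpus, axis=1, keepdims=True)
    scores = c @ q                        # one similarity score per corpus row
    return np.argsort(scores)[::-1][:k]   # indices of the k highest scores

# Hypothetical example: 10,000 stored embeddings of dimension 384.
rng = np.random.default_rng(0)
corpus = rng.normal(size=(10_000, 384)).astype(np.float32)
query = rng.normal(size=384).astype(np.float32)
print(top_k_similar(query, corpus, k=3))
```

This exhaustive scan is fine for small corpora; the high-concurrency, low-latency serving the README describes is what pushes real systems toward approximate nearest-neighbor indexes instead.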