This is an automated email from the ASF dual-hosted git repository.

agrove pushed a commit to branch main
in repository https://gitbox.apache.org/repos/asf/arrow-ballista-python.git


The following commit(s) were added to refs/heads/main by this push:
     new 30a47ae  Prepare to archive repo (#66)
30a47ae is described below

commit 30a47ae960ff9766fb300fe9908b1aaf8d479aac
Author: Andy Grove <[email protected]>
AuthorDate: Sat Feb 10 08:40:39 2024 -0700

    Prepare to archive repo (#66)
    
    * prepare to archive repo
    
    * prettier
---
 README.md => OLDREADME.md |  0
 README.md                 | 94 ++---------------------------------------------
 2 files changed, 3 insertions(+), 91 deletions(-)

diff --git a/README.md b/OLDREADME.md
similarity index 100%
copy from README.md
copy to OLDREADME.md
diff --git a/README.md b/README.md
index 4fc9389..e44d4ec 100644
--- a/README.md
+++ b/README.md
@@ -17,96 +17,8 @@
   under the License.
 -->
 
-# Ballista Python Bindings (PyBallista)
+# Ballista Python Bindings (PyBallista)
 
-This is a Python library that binds to [Apache Arrow](https://arrow.apache.org/) distributed query
-engine [Ballista](https://github.com/apache/arrow-ballista).
+PyBallista is now located within the main Ballista repo [here](https://github.com/apache/arrow-ballista/tree/main/python).
 
-## Status
-
-### What works?
-
-- Start Ballista schedulers and executors from Python
-- Execute distributed SQL queries (with DataFusion backend)
-- Use DataFrame API to read files and execute distributed queries (with DataFusion backend)
-- Support for CSV, Parquet, and Avro formats
-
-### What does not work?
-
-- Python UDFs
-- JSON
-
-## Roadmap
-
-- Support reading JSON
-- Support distributed Python UDFs and UDAFs
-- Support distributed query execution against Python DataFrame libraries such as Polars, Pandas, and cuDF, that are
-  already supported by DataFusion's Python bindings (this will require new features in Ballista)
-
-## Examples
-
-- [Query a Parquet file using SQL](./examples/sql-parquet.py)
-- [Query a Parquet file using DataFrame API](./examples/dataframe-parquet.py)
-- [Start a scheduler from within a Python process](./examples/run-scheduler.py)
-- [Start an executor from within a Python process](./examples/run-executor.py)
-
-## How to install (from pip)
-
-```bash
-pip install ballista
-# or
-python -m pip install ballista
-```
-
-## How to develop
-
-This assumes that you have rust and cargo installed. We use the workflow recommended by [pyo3](https://github.com/PyO3/pyo3) and [maturin](https://github.com/PyO3/maturin).
-
-Bootstrap:
-
-```bash
-# fetch this repo
-git clone [email protected]:apache/arrow-ballista-python.git
-# change to python directory
-cd arrow-ballista-python
-# prepare development environment (used to build wheel / install in development)
-python3 -m venv venv
-# activate the venv
-source venv/bin/activate
-# update pip itself if necessary
-python -m pip install -U pip
-# if python -V gives python 3.7
-python -m pip install -r requirements-37.txt
-# if python -V gives python 3.8/3.9/3.10
-python -m pip install -r requirements-310.txt
-```
-
-Whenever rust code changes (your changes or via `git pull`):
-
-```bash
-# make sure you activate the venv using "source venv/bin/activate" first
-maturin develop
-python -m pytest
-```
-
-## How to update dependencies
-
-To change test dependencies, change the `requirements.in` and run
-
-```bash
-# install pip-tools (this can be done only once), also consider running in venv
-python -m pip install pip-tools
-
-# change requirements.in and then run
-python -m piptools compile --generate-hashes -o requirements-37.txt
-# or run this if you are on python 3.8/3.9/3.10
-python -m piptools compile --generate-hashes -o requirements.txt
-```
-
-To update dependencies, run with `-U`
-
-```bash
-python -m piptools compile -U --generate-hashes -o requirements-310.txt
-```
-
-More details [here](https://github.com/jazzband/pip-tools)
+The original README is [here](OLDREADME.md).
