Hi,

This issue is to let users directly use Spark to read data in IoTDB for 
analysis.

This can be implemented in several ways in IoTDB:

(1) Store all TsFiles (data files) and other files (system files, WALs) on 
HDFS, then use spark-tsfile to read the TsFiles on HDFS.
(2) Store only TsFiles on HDFS and the other files on the local file system, 
then use spark-tsfile to read the TsFiles on HDFS.
(3) Store all files on the local file system and let users use the 
spark-iotdb-connector to read data from IoTDB, regardless of where the 
TsFiles are stored.
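To make the difference concrete, here is a rough Java sketch of what the Spark side might look like in each case. This assumes a live Spark plus IoTDB/HDFS deployment, so it cannot run standalone, and the format identifiers, paths, and option names are illustrative assumptions, not the connectors' confirmed APIs:

```java
import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Row;
import org.apache.spark.sql.SparkSession;

public class ReadIoTDBSketch {
  public static void main(String[] args) {
    SparkSession spark = SparkSession.builder().appName("iotdb-read").getOrCreate();

    // Options (1)/(2): read TsFiles directly from HDFS via spark-tsfile.
    // Format id and HDFS path are assumed for illustration.
    Dataset<Row> fromTsFile = spark.read()
        .format("org.apache.iotdb.tsfile")
        .load("hdfs://namenode:9000/iotdb/data/sequence/*.tsfile");

    // Option (3): query the running IoTDB server via spark-iotdb-connector,
    // so Spark never touches the TsFiles directly. Format id, url, and
    // option names are likewise assumptions.
    Dataset<Row> fromServer = spark.read()
        .format("org.apache.iotdb.spark.db")
        .option("url", "jdbc:iotdb://127.0.0.1:6667/")
        .option("sql", "select * from root")
        .load();

    fromTsFile.show();
    fromServer.show();
  }
}
```

The practical trade-off: (1)/(2) bypass the IoTDB server entirely and scale with HDFS, while (3) goes through the server and works no matter where the TsFiles live.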

Personally, I prefer the second and the third. If we go with the second way, 
do we need a FileFactory for all file types?
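For discussion, a minimal sketch of what such a factory could look like under option (2): only the TsFile path switches between local and HDFS, while system files and WALs always stay local. All names here (FileFactory, FileFactoryProducer, the is_hdfs_storage flag wiring) are hypothetical, not IoTDB's actual API, and the HDFS branch is stubbed out to keep the sketch dependency-free:

```java
import java.io.File;

// Hypothetical abstraction over file creation (names are illustrative).
interface FileFactory {
    File getFile(String path);
}

// Local implementation: plain java.io.File on the local file system.
class LocalFileFactory implements FileFactory {
    @Override
    public File getFile(String path) {
        return new File(path);
    }
}

// Chooses a factory per file type. In option (2), only TsFiles honor the
// is_hdfs_storage flag; an HDFS-backed factory would wrap
// org.apache.hadoop.fs.FileSystem and is omitted here.
class FileFactoryProducer {
    static FileFactory getTsFileFactory(boolean isHdfsStorage) {
        if (isHdfsStorage) {
            throw new UnsupportedOperationException("HDFS factory not sketched here");
        }
        return new LocalFileFactory();
    }

    static FileFactory getSystemFileFactory() {
        // System files and WALs always stay on the local file system.
        return new LocalFileFactory();
    }
}
```

With this split, the rest of the engine asks the producer for the right factory and never hard-codes a file system, which is the main reason I lean toward a per-type factory rather than one global switch.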

Best,
--
Jialin Qiao
School of Software, Tsinghua University


> -----Original Message-----
> From: "Zesong Sun (Jira)" <[email protected]>
> Sent: 2019-08-29 19:34:00 (Thursday)
> To: [email protected]
> Cc: 
> Subject: [jira] [Created] (IOTDB-187) Enable to choose storage in local file 
> system or HDFS
> 
> Zesong Sun created IOTDB-187:
> --------------------------------
> 
>              Summary: Enable to choose storage in local file system or HDFS
>                  Key: IOTDB-187
>                  URL: https://issues.apache.org/jira/browse/IOTDB-187
>              Project: Apache IoTDB
>           Issue Type: Improvement
>             Reporter: Zesong Sun
> 
> 
> Enable to choose storage in local file system or HDFS
> "is_hdfs_storage=false" by default
> 
> 
> 
> --
> This message was sent by Atlassian Jira
> (v8.3.2#803003)
