Date: Thursday, July 14, 2016, 3:18 AM
To: "users@zeppelin.apache.org" <users@zeppelin.apache.org>
Subject: Re: Order of paragraphs vs. different interpreters (spark vs. pyspark)
It is easy to change the code. I did it myself and use it as an ETL tool. It is
very powerful.
Hi,
I think you can run the workflows you defined just by running the paragraphs,
and I believe the view functionality is going to get better. :)
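(For reference, paragraphs can also be triggered outside the UI through Zeppelin's notebook REST API; the sketch below assumes a Zeppelin server on localhost:8080, and the note and paragraph IDs are hypothetical placeholders — the exact endpoints may vary by Zeppelin version:)

    # run a single paragraph asynchronously (IDs are placeholders)
    curl -X POST http://localhost:8080/api/notebook/job/2BQA35CJZ/paragraph_1468404567_123

    # or run the whole note, paragraph by paragraph, in order
    curl -X POST http://localhost:8080/api/notebook/job/2BQA35CJZ

Chaining such calls from a script is one way to approximate paragraph dependencies without changing Zeppelin's source.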
On Thursday, July 14, 2016, xiufeng liu wrote:
> It is easy to change the code. I did myself and use it as an ETL tool. It
> is very powerful
On Wednesday, July 13, 2016, Ahmed Sobhi wrote:
> I think this PR addresses what I need. Case 2 seems to describe the issue
> I'm having, if I'm reading it correctly.
>
> The
You have to change the source code to add dependencies between running
paragraphs. I think it is a really interesting feature; for example, it can
be used as an ETL tool. But, unfortunately, there is no configuration option
for it right now.
/afancy
On Wed, Jul 13, 2016 at 12:27 PM, Ahmed Sobhi wrote:
Hello,
I have been working on a large Spark Scala notebook. I recently had the
requirement to produce graphs/plots from this data. Python and PySpark
seemed like a natural fit, but since I've already invested a lot of time and
effort into the Scala version, I want to restrict my usage of python