I'm a little confused about Hive on Spark; can someone shed some light?
Using Spark, I can access the Hive metastore and run Hive queries. Since I
can do this in standalone mode, it can't be using MapReduce to run the
Hive queries, so I suppose Spark builds a query plan and executes it itself.
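For context, this is the sort of thing I mean; a minimal sketch using the
Spark 1.x HiveContext (the table name "logs" is made up, and it assumes a
hive-site.xml on the classpath pointing at the metastore):

```scala
// Sketch: running a Hive query from standalone Spark, no MapReduce involved.
// Assumes a configured Hive metastore; the table "logs" is hypothetical.
import org.apache.spark.{SparkConf, SparkContext}
import org.apache.spark.sql.hive.HiveContext

object HiveQuerySketch {
  def main(args: Array[String]): Unit = {
    val sc = new SparkContext(
      new SparkConf().setAppName("hive-sketch").setMaster("local[*]"))
    val hiveContext = new HiveContext(sc)

    // The HiveQL string is parsed and planned by Spark SQL's Catalyst
    // optimizer, then executed as ordinary Spark jobs on the executors.
    hiveContext.sql("SELECT count(*) FROM logs").show()

    sc.stop()
  }
}
```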
In IntelliJ:
- Open View - Tool Windows - Maven Projects
- Right click on Spark Project External Flume Sink
- Click Generate Sources and Update Folders
This should generate source code from sparkflume.avdl.
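If you prefer the command line to the IntelliJ UI, the equivalent build
invocation would be something like the following (the module path is an
assumption; check the pom.xml for your Spark version):

```shell
# From the Spark source root: run the generate-sources phase for the
# Flume sink module only, which invokes the Avro plugin on sparkflume.avdl.
mvn -pl external/flume-sink generate-sources
```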
Vu~
Most likely no. We are using the embedded mode of Jetty, rather than using
servlets.
Even if it is possible, you probably wouldn't want to embed Spark in your
application server ...
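To illustrate the distinction: "embedded mode" means the application
instantiates and starts the HTTP server itself in-process, rather than
being packaged as a WAR and deployed into a servlet container. A generic
sketch of embedded Jetty (not Spark's actual code) looks like:

```scala
// Generic embedded-Jetty sketch, NOT Spark's actual code. The point is
// that the server object and handlers are created in application code
// (no web.xml, no WAR), so it cannot simply be swapped for Tomcat's
// servlet-container deployment model.
import org.eclipse.jetty.server.{Request, Server}
import org.eclipse.jetty.server.handler.AbstractHandler
import javax.servlet.http.{HttpServletRequest, HttpServletResponse}

object EmbeddedJettySketch {
  def main(args: Array[String]): Unit = {
    val server = new Server(8080) // the app owns the server instance
    server.setHandler(new AbstractHandler {
      override def handle(target: String, baseRequest: Request,
                          request: HttpServletRequest,
                          response: HttpServletResponse): Unit = {
        response.setContentType("text/plain")
        response.getWriter.println("ok")
        baseRequest.setHandled(true)
      }
    })
    server.start()
    server.join()
  }
}
```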
On Sun, Feb 15, 2015 at 9:08 PM, Niranda Perera niranda.per...@gmail.com
wrote:
> Hi,
> We are thinking of ...
Hi Reynold,
Thank you for the response. Could you please clarify the need of Jetty
server inside Spark? Is it used for Spark core functionality or is it there
for Spark jobs UI purposes?
cheers
On Mon, Feb 16, 2015 at 10:47 AM, Reynold Xin r...@databricks.com wrote:
> Most likely no. We are ...
Hi,
We are thinking of integrating Spark server inside a product. Our current
product uses Tomcat as its webserver.
Is it possible to switch the Jetty webserver in Spark to Tomcat
off-the-shelf?
Cheers
--
Niranda
Spark SQL is not the same as Hive on Spark.
Spark SQL is a query engine that is designed from ground up for Spark
without the historic baggage of Hive. It also does more than SQL now -- it
is meant for structured data processing (e.g. the new DataFrame API) and
SQL. Spark SQL is mostly compatible with HiveQL.
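As an illustration of the two entry points described above, here is a
sketch against the Spark 1.3-era API (table and column names are made up,
and a configured Hive metastore is assumed):

```scala
// Sketch of Spark SQL's two entry points circa Spark 1.3: a SQL string
// and the equivalent DataFrame expression. Table/column names are made up.
import org.apache.spark.{SparkConf, SparkContext}
import org.apache.spark.sql.hive.HiveContext

object SqlVsDataFrame {
  def main(args: Array[String]): Unit = {
    val sc = new SparkContext(
      new SparkConf().setAppName("sql-vs-df").setMaster("local[*]"))
    val hc = new HiveContext(sc)

    // Entry point 1: a SQL/HiveQL string, parsed by Spark SQL.
    val viaSql = hc.sql("SELECT dept, count(*) AS n FROM employees GROUP BY dept")

    // Entry point 2: the same structured query via the DataFrame API.
    val viaDf = hc.table("employees").groupBy("dept").count()

    viaSql.show()
    viaDf.show()
    sc.stop()
  }
}
```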
Mostly UI.
However, I believe we are also using Jetty as a file server (for
distributing files from the driver to the workers).
On Sun, Feb 15, 2015 at 9:24 PM, Niranda Perera niranda.per...@gmail.com
wrote:
> Hi Reynold,
> Thank you for the response. Could you please clarify the need of Jetty ...