Jay, running sbt compile or assembly should generate the sources.
On Monday, August 11, 2014, Devl Devel devl.developm...@gmail.com wrote:
Hi
So far I've managed to build Spark from source, but since a change in
spark-streaming-flume I have no idea how to generate classes (e.g.
I spent some time on this and I'm not sure either of these is an option,
unfortunately.
We typically can't use custom JIRA plug-ins because this JIRA is
controlled by the ASF and we don't have rights to modify most things about
how it works (it's a large shared JIRA instance used by more than 50
If you don't want to build the entire thing, you can also do
mvn generate-sources in externals/flume-sink
Thanks,
Ron
Sent from my iPhone
On Aug 11, 2014, at 8:32 AM, Hari Shreedharan hshreedha...@cloudera.com
wrote:
Jay, running sbt compile or assembly should generate the sources.
On
Hi,
I’ve been able to get things compiled in my environment, but I’m noticing
that it’s quite difficult in IntelliJ. It always recompiles everything
when I try to run a single test like BroadcastTest, for example, despite
having run make-distribution previously. In Eclipse, I have no
Thanks for looking into this. I think little tools like this are super
helpful.
Would it hurt to open a request with INFRA to install/configure the
JIRA-GitHub plugin while we continue to use the Python script we have? I
wouldn't mind opening that JIRA issue with them.
Nick
On Mon, Aug 11,
I am trying to change Spark to support hive-0.13, but I always run into the
following problem when running the tests. My feeling is that the test setup
may need to change, but I don't know exactly how. Has anyone seen a similar
issue, or can anyone shed light on it?
13:50:53.331 ERROR org.apache.hadoop.hive.ql.Driver: FAILED:
Try setting it to handle incremental compilation of Scala by itself
(IntelliJ) and to run its own compile server. This is in global
settings, under the Scala settings. It seems to compile incrementally
for me when I change a file or two.
On Mon, Aug 11, 2014 at 8:57 PM, Ron's Yahoo!
Thanks Sean,
I changed both the API and the version because there are some incompatibilities
with hive-0.13, and I can actually do some basic operations against a real Hive
environment. But the test suite always complains with a "no default database"
message. No clue yet.
Hi folks,
I ran into several Spark SQL unit test failures when sort-based shuffle is
enabled. It seems Spark SQL uses GenericMutableRow, which makes all the entries
in ExternalSorter's internal buffer refer to the same object; I guess
GenericMutableRow uses only one mutable object to represent different rows,
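If that guess is right, the failure mode is a classic aliasing pitfall: buffering a reused mutable object without copying it makes every buffered entry point at the same instance. A minimal Scala sketch (MutableRow here is a made-up stand-in for GenericMutableRow, not Spark's actual class):

```scala
import scala.collection.mutable.ArrayBuffer

// Hypothetical stand-in for a row type that reuses one mutable slot per row.
final class MutableRow(var value: Int) {
  def copy(): MutableRow = new MutableRow(value)
}

object AliasingSketch {
  // Buffer rows the way a sorter's internal buffer might; `copyRows`
  // controls whether we defensively copy each row before buffering it.
  def bufferRows(values: Seq[Int], copyRows: Boolean): Seq[Int] = {
    val reused = new MutableRow(0)            // one object reused for every row
    val buffer = ArrayBuffer.empty[MutableRow]
    for (v <- values) {
      reused.value = v
      buffer += (if (copyRows) reused.copy() else reused)
    }
    buffer.map(_.value).toSeq
  }

  def main(args: Array[String]): Unit = {
    // Without copying, every buffered entry aliases the same object, so all
    // rows collapse to the last value written -- the symptom described above.
    println(bufferRows(Seq(1, 2, 3), copyRows = false)) // List(3, 3, 3)
    println(bufferRows(Seq(1, 2, 3), copyRows = true))  // List(1, 2, 3)
  }
}
```

Which would explain why the failures only show up once a buffering shuffle path (sort-based shuffle) holds on to the rows instead of consuming them immediately.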
I am new to contributing. What is the best way to start out?
Thanks!
Chris
--
View this message in context:
http://apache-spark-developers-list.1001551.n3.nabble.com/New-to-Open-Source-and-Sparc-Would-Like-to-Contribute-tp7812.html
Sent from the Apache Spark Developers List mailing list