// + *Davies* for his comments
// + Punya for SA
For development and CI, as Olivier mentioned, I think it would be hugely
beneficial to publish pyspark (only code in the python/ dir) on PyPI. If
anyone wants to develop against PySpark APIs, they need to download the
distribution and do a lot of
Hi all,
The unreleased version 1.6.0 was removed from JIRA by mistake on my part.
I've added it back, but JIRA tickets that previously targeted 1.6.0 now
have an empty target version/s. If you find tickets that should target
1.6.0, please help by marking the target version/s field
Hi Yu,
As it stands today, they are identical except for the trigger mechanism.
When you say "test this please" or push a commit, SparkPullRequestBuilder is
the one that runs the tests. SlowSparkPullRequestBuilder is not used by
default; it is only triggered when you say "slow test please".
Hi Andrew,
I understand that there is no difference currently.
Thanks,
Yu
-
-- Yu Ishikawa
Yeah I'll send a note to the mesos dev list just to make sure they are
informed.
Shivaram
On Tue, Jul 21, 2015 at 11:47 AM, Sean Owen so...@cloudera.com wrote:
I agree it's worth informing Mesos devs and checking that there are no
big objections. I presume Shivaram is plugged in enough to
Hi all:
I am developing an algorithm that needs to group elements with the same
key together as much as possible, while always using a fixed number of
partitions. To do that, the algorithm sorts the elements by key. The
problem is that the number of distinct keys influences the number of
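A minimal, Spark-free sketch of that partitioning idea: hash each key into a fixed number of buckets so every element with a given key lands in the same partition, then sort within each partition. This mirrors the effect of Spark's HashPartitioner followed by a per-partition sort, but the function names here are illustrative, not Spark's API:

```python
# Sketch: place (key, value) pairs into a fixed number of partitions by key,
# so all elements sharing a key land in the same partition, then sort each
# partition so equal keys become adjacent. Names are illustrative only.

def partition_by_key(records, num_partitions):
    partitions = [[] for _ in range(num_partitions)]
    for key, value in records:
        partitions[hash(key) % num_partitions].append((key, value))
    for part in partitions:
        part.sort(key=lambda kv: kv[0])  # equal keys end up adjacent
    return partitions

records = [("a", 1), ("b", 2), ("a", 3), ("c", 4), ("b", 5)]
parts = partition_by_key(records, 2)
```

Note that in this sketch the partition count stays fixed regardless of how many distinct keys there are; the key count only affects how evenly the partitions fill, which appears to be the skew issue the message is describing.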
The move prevents some errors, though not all of them can be eliminated.
For example, in o.a.s.sql.catalyst.analysis.*Suite,
The case '
https://github.com/maropu/spark/commit/961b5e99e2136167f175598ed36585987cc1e236
'
causes 3 errors.
AnalysisSuite:
- analyze project *** FAILED ***
Hi guys,
I’m trying to patch the Hive Thrift server code related to HIVE-7620. I saw
that Spark pulls in a private fork of Hive under the spark-project name.
Any idea where I can find the source code of it?
Thanks~
马晓宇 / Xiaoyu Ma
hzmaxia...@corp.netease.com
Yes, but not all SQL-standard insert variants.
From: Debasish Das [mailto:debasish.da...@gmail.com]
Sent: Wednesday, July 22, 2015 7:36 PM
To: Bing Xiao (Bing)
Cc: user; dev; Yan Zhou.sc
Subject: Re: Package Release Annoucement: Spark SQL on HBase Astro
Does it also support insert operations ?
On Jul 22, 2015 4:53 PM, Bing Xiao (Bing) bing.x...@huawei.com wrote:
We are happy to announce the availability of the Spark SQL on HBase
1.0.0 release.
http://spark-packages.org/package/Huawei-Spark/Spark-SQL-on-HBase
The main features in this
Hi all,
FYI, we just merged a patch that fails the build if there is a Scala compiler
warning (unless it is a deprecation warning).
In the past, many compiler warnings were actually caused by legitimate bugs
that we needed to address. However, if we don't fail the build on warnings,
people don't pay
I agree with everything Justin just said. An additional advantage of
publishing PySpark's Python code in a standards-compliant way is the fact
that we'll be able to declare transitive dependencies (Pandas, Py4J) in a
way that pip can use. Contrast this with the current situation, where
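As a concrete illustration of declaring those transitive dependencies in a way pip can resolve, a standards-compliant package could use a setup script along these lines. This is a hypothetical sketch, not the actual pyspark packaging; the package name, version, and dependency pins are made up for the example:

```python
# Hypothetical setup.py sketch for publishing the python/ dir to PyPI.
# Version string and dependency list below are illustrative assumptions.
from setuptools import setup, find_packages

setup(
    name="pyspark",
    version="1.6.0.dev0",    # illustrative version
    packages=find_packages(),
    install_requires=[
        "py4j",              # bridge between the Python driver and the JVM
        "pandas",            # shown as an example transitive dependency
    ],
)
```

With metadata like this, `pip install pyspark` could pull in Py4J (and any other declared dependencies) automatically, instead of users hand-assembling them from a downloaded distribution.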
We are happy to announce the availability of the Spark SQL on HBase 1.0.0
release. http://spark-packages.org/package/Huawei-Spark/Spark-SQL-on-HBase
The main features in this package, dubbed Astro, include:
* Systematic and powerful handling of data pruning and intelligent
scan, based
OK, thanks,
I see now why this happened.
best regards,
// maropu
On Wed, Jul 22, 2015 at 10:26 PM, Takeshi Yamamuro linguin@gmail.com
wrote:
The move prevents some errors, though not all of them can be eliminated.
For example, in o.a.s.sql.catalyst.analysis.*Suite,