I have tried "select ceil(2/3)", but got "key not found: floor"
On Tue, Jan 27, 2015 at 11:05 AM, Ted Yu yuzhih...@gmail.com wrote:
Have you tried floor() or ceil() functions ?
According to http://spark.apache.org/sql/, Spark SQL is compatible with
Hive SQL.
Cheers
On Mon, Jan 26, 2015 at
Any ideas? Anyone got the same error?
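For what it's worth, until ceil() resolves correctly in Spark SQL, the same result can be computed with the identity ceil(x) = -floor(-x), or with pure integer arithmetic. A minimal sketch in plain Python (outside Spark, just to confirm the identities) follows:

```python
import math

def ceil_via_floor(x):
    # ceil(x) == -floor(-x) for any real x
    return -math.floor(-x)

def ceil_div(a, b):
    # Integer ceiling division without floats, for b > 0:
    # ceil(a/b) == (a + b - 1) // b
    return (a + b - 1) // b

print(ceil_via_floor(2 / 3))  # 1
print(ceil_div(2, 3))         # 1
```

Note that 2/3 here is double division, matching Hive's behavior for "2/3" in SQL; if the engine did integer division first, ceil(0) would be 0 instead.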
On Mon, Dec 1, 2014 at 2:37 PM, Alexey Romanchuk alexey.romanc...@gmail.com
wrote:
Hello spark users!
I found lots of strange messages in the driver log. Here is one:
2014-12-01 11:54:23,849 [sparkDriver-akka.actor.default-dispatcher-25]
ERROR
Hello spark users and developers!
I am using HDFS + Spark SQL + Hive schema + Parquet as the storage format. I
have lots of Parquet files - one file fits one HDFS block, one file per day. The
strange thing is the very slow first query in Spark SQL.
To reproduce the situation I use only one core and I have 97sec
the upfront compilation really helps. I doubt it.
However, isn't this almost surely due to caching somewhere, in Spark SQL
or in HDFS? I really doubt HotSpot makes a difference compared to these
much larger factors.
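For intuition about the cold-vs-warm gap described above, the effect appears in any layered read path, not just Spark: only the first pass pays the miss cost. A toy sketch (plain Python, not Spark internals; the BlockCache class is made up for illustration):

```python
# Toy model of why the first query is slow and repeats are fast:
# reads go through a cache, and only misses pay the "disk" cost.
class BlockCache:
    def __init__(self):
        self.cache = {}
        self.disk_reads = 0  # counts expensive misses against "HDFS/disk"

    def read(self, block_id):
        if block_id not in self.cache:
            self.disk_reads += 1              # cold read: goes to storage
            self.cache[block_id] = f"data-{block_id}"
        return self.cache[block_id]           # warm read: served from memory

cache = BlockCache()
first = [cache.read(b) for b in range(5)]    # cold pass: 5 disk reads
second = [cache.read(b) for b in range(5)]   # warm pass: 0 additional reads
print(cache.disk_reads)  # 5
```

In the real setup the "cache" is some mix of the OS page cache, HDFS client buffers, and Spark SQL's own metadata caching, which is why it is hard to attribute the 97s to any single layer.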
On Fri, Oct 10, 2014 at 8:49 AM, Alexey Romanchuk
alexey.romanc...@gmail.com wrote:
which really kills
performance.
Hope that helps!
Andrew
On Thu, Sep 25, 2014 at 12:09 AM, Alexey Romanchuk
alexey.romanc...@gmail.com wrote:
Hello again spark users and developers!
I have a standalone Spark cluster (1.1.0) with Spark SQL running on it. My
cluster consists of 4 datanodes, and the replication factor of the files is 3.
I use the thrift server to access Spark SQL and have 1 table with 30+
partitions. When I run a query on the whole table
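A query over the whole table has to touch every partition, whereas a query with a predicate on the partition column only reads the matching ones. A toy sketch of that pruning step (plain Python, not the Spark internals; the table name and layout are made up, assuming Hive-style dt= partition directories):

```python
import datetime

# Hypothetical partition layout: one directory per day, Hive-style.
partitions = [f"/warehouse/events/dt=2014-09-{d:02d}" for d in range(1, 31)]

def prune(partitions, start, end):
    """Keep only partitions whose dt= value falls in [start, end]."""
    kept = []
    for path in partitions:
        dt = datetime.date.fromisoformat(path.split("dt=")[1])
        if start <= dt <= end:
            kept.append(path)
    return kept

# Full-table query: every partition survives pruning.
full_scan = prune(partitions, datetime.date(2014, 9, 1), datetime.date(2014, 9, 30))
# Query restricted to one week: most partitions are skipped.
one_week = prune(partitions, datetime.date(2014, 9, 1), datetime.date(2014, 9, 7))
print(len(full_scan), len(one_week))  # 30 7
```

With 30+ partitions and replication factor 3, a full-table query through the thrift server schedules reads across all blocks, so partition pruning via a WHERE clause on the partition column is usually the first thing to check.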