Is there any way to map pyspark.sql.Row columns to JDBC table columns, or do
I have to just put them in the right order before saving?
I'm using code like this:
```
rdd = rdd.map(lambda i: Row(name=i.name, value=i.value))
sqlCtx.createDataFrame(rdd).write.jdbc(dbconn_string, tablename)
```
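For what it's worth, a likely culprit: in Spark 1.x, a `Row` built from keyword arguments sorts its fields alphabetically rather than keeping the order you wrote them, so the DataFrame's column order can silently differ from the JDBC table's. A plain-Python sketch of that sorting behavior (no pyspark needed):

```python
# pyspark.sql.Row(name=..., value=...) sorts keyword fields alphabetically
# (Spark 1.x behavior); plain-Python illustration of the same sorting:
fields = dict(value=1.0, name="a")
print(sorted(fields.keys()))  # ['name', 'value']
```

One workaround is to `select` the columns explicitly in the table's order before calling `write.jdbc`.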
Has anyone successfully built this? I'm trying to determine if there is a
defect in the source package or something strange about my environment. I
get a FileNotFound exception on MQTTUtils.class during the build of the
MQTT module. The only workaround I've found is to remove the MQTT modules
from the build.
were you building?

Thanks

On Wed, Oct 28, 2015 at 6:19 AM, Bob Corsaro <rcors...@gmail.com> wrote:
I'm running a Spark cluster and I'd like to access the Spark UI from
outside the LAN. The problem is that all the links point to internal IP
addresses. Is there any way to configure hostnames for each of the hosts in
the cluster and use those for the links?
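Not a full answer, but `SPARK_PUBLIC_DNS` is the knob that usually controls the hostname the UI advertises in its links; it is set per host, typically in `conf/spark-env.sh` before starting each daemon. A sketch, assuming a standalone Spark 1.x cluster and a made-up hostname:

```python
# SPARK_PUBLIC_DNS sets the hostname the Spark UI uses when building links.
# Normally you'd export it in conf/spark-env.sh on each machine; shown via
# os.environ only for illustration. "worker1.example.com" is hypothetical.
import os

os.environ["SPARK_PUBLIC_DNS"] = "worker1.example.com"
print(os.environ["SPARK_PUBLIC_DNS"])  # worker1.example.com
```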
I'm having trouble using `select pow(col) from table`. It seems the
function is not registered for SparkSQL. Is this on purpose or an
oversight? I'm using pyspark.
They are working fine in plain Python:

```
>>> pow(2, 4)
16
>>> 2 ** 4
16
```
Kind Regards
Salih Oztop
--
*From:* Bob Corsaro rcors...@gmail.com
*To:* user user@spark.apache.org
*Sent:* Monday, June 29, 2015 7:27 PM
*Subject:* SparkSQL built in functions
I'm having trouble using `select pow(col) from table`. It seems the
function is not registered for SparkSQL. Is this on purpose or an
oversight? I'm using pyspark.
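If the builtin really isn't registered in your Spark version, a Python UDF is one possible workaround. The `sqlCtx` name and `numbers` table below are assumptions carried over from the other threads here, and the registration lines assume a Spark 1.x SQLContext; only the plain lambda runs without Spark:

```python
# The UDF body itself is just Python; pow-like behavior via **.
py_pow = lambda base, exp: float(base) ** float(exp)

print(py_pow(2, 4))  # 16.0

# With pyspark available (assumption: Spark 1.x, SQLContext named sqlCtx):
# from pyspark.sql.types import DoubleType
# sqlCtx.registerFunction("py_pow", py_pow, DoubleType())
# sqlCtx.sql("SELECT py_pow(value, 2) FROM numbers").collect()
```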
(numbers.name, numbers.value, numbers2.other) \
.collect()
On Mon, Jun 22, 2015 at 12:53 PM, Ignacio Blasco elnopin...@gmail.com
wrote:
Sorry, thought it was Scala/Spark.
On 6/22/2015 9:49 PM, Bob Corsaro rcors...@gmail.com wrote:
That's invalid syntax. I'm pretty sure pyspark
I've only tried it in python
On Tue, Jun 23, 2015 at 12:16 PM Ignacio Blasco elnopin...@gmail.com
wrote:
Does that issue happen only in the Python DSL?
On 6/23/2015 5:05 PM, Bob Corsaro rcors...@gmail.com wrote:
Thanks! The solution:
https://gist.github.com/dokipen/018a1deeab668efdf455
Can anyone explain why the dataframe API doesn't work as I expect it to
here? It seems like the column identifiers are getting confused.
https://gist.github.com/dokipen/4b324a7365ae87b7b0e5
That's invalid syntax. I'm pretty sure pyspark is using a DSL to create a
query here and not actually doing an equality operation.
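Right: in the pyspark DSL, `==` on a Column is overloaded to build an expression object rather than return a boolean (Scala needs `===` because its own `==` can't be repurposed that way). A minimal illustration of the pattern, using a toy `Column` class rather than pyspark's real one:

```python
# Toy stand-in for pyspark's Column: __eq__ returns an expression, not a bool.
class Column(object):
    def __init__(self, name):
        self.name = name

    def __eq__(self, other):
        # Builds a SQL-ish expression string instead of comparing values.
        return "(%s = %s)" % (self.name, other)

print(Column("value") == 1)  # (value = 1)
```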
On Mon, Jun 22, 2015 at 3:43 PM Ignacio Blasco elnopin...@gmail.com wrote:
Probably you should use === instead of == and !== instead of !=
this?
myDStream.foreachRDD(rdd => rdd.saveAsTextFile("/sigmoid/", codec))
Thanks
Best Regards
On Mon, Jun 8, 2015 at 8:06 PM, Bob Corsaro rcors...@gmail.com wrote:
It looks like saveAsTextFiles doesn't support the compression parameter
of RDD.saveAsTextFile. Is there a way to add
I'm setting PYTHONPATH before calling pyspark, but the worker nodes aren't
inheriting it. I've tried looking through the code and it appears that it
should work, but I can't find the bug. Here's an example; what am I doing
wrong?
https://gist.github.com/dokipen/84c4e4a89fddf702fdf1
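For anyone hitting the same thing: the executors are separate processes, often on other machines, so they don't inherit the driver's shell environment. Pushing the path through executor-env configuration is one workaround; the path below is made up, and the commented `SparkConf` call assumes pyspark is importable:

```python
# spark.executorEnv.* entries become environment variables in each executor
# process. "/path/to/deps" is a hypothetical path, not from the thread.
executor_env = {"PYTHONPATH": "/path/to/deps"}

# With pyspark available (assumption):
# from pyspark import SparkConf
# conf = SparkConf().setExecutorEnv("PYTHONPATH", executor_env["PYTHONPATH"])
```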
It looks like saveAsTextFiles doesn't support the compression parameter of
RDD.saveAsTextFile. Is there a way to add the functionality in my client
code without patching Spark? I tried making my own saveFunc function and
calling DStream.foreachRDD but ran into trouble with invoking rddToFileName
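As a sketch of the client-side route: rather than fighting `rddToFileName` (it's internal), build the batch filename yourself inside a `foreachRDD` callback. The `/sigmoid` prefix comes from this thread; the gzip codec class name and the two-argument `(time, rdd)` callback shape are assumptions about pyspark streaming in Spark 1.x, so those lines are left as comments:

```python
# rddToFileName essentially joins the prefix and the batch time; easy to inline.
def rdd_to_file_name(prefix, timestamp, suffix=""):
    name = "%s-%d" % (prefix, int(timestamp))
    return name + ("." + suffix if suffix else "")

print(rdd_to_file_name("/sigmoid", 1433774400000))  # /sigmoid-1433774400000

# Spark-dependent part (assumption: pyspark streaming, Spark 1.x):
# codec = "org.apache.hadoop.io.compress.GzipCodec"
# def save_compressed(time, rdd):
#     rdd.saveAsTextFile(rdd_to_file_name("/sigmoid", time),
#                        compressionCodecClass=codec)
# myDStream.foreachRDD(save_compressed)
```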