Switched to immutable.Set and it works. This is odd, as the code in
ScalaReflection.scala seems to support scala.collection.Set.
cc: dev list, in case this is a bug.
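Reproducing the encoder behavior needs a Spark session, but the type distinction behind the workaround can be sketched in plain Scala (the object name here is illustrative, not from the original post):

```scala
// scala.collection.Set is the general trait; the default Set (via
// scala.Predef) is scala.collection.immutable.Set. Switching a field to the
// immutable type narrows it to the variant that worked above.
object SetKinds {
  def main(args: Array[String]): Unit = {
    val general: scala.collection.Set[Int] = scala.collection.mutable.Set(1, 2)
    val immut: scala.collection.immutable.Set[Int] = Set(1, 2)
    // Every immutable.Set is a collection.Set, but not the other way around:
    val widened: scala.collection.Set[Int] = immut
    println(widened == immut) // prints true
  }
}
```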
On Thu, Aug 8, 2019 at 8:41 PM Mohit Jaggi wrote:
> Is this not supported? I found this diff
> <https://github.com/apa
unction-from-a-task
>>
>> Sent with ProtonMail Secure Email.
>>
>> ‐‐‐ Original Message ‐‐‐
>>
>> On July 15, 2018 8:01 AM, Mohit Jaggi wrote:
>>
>> > Trying again…anyone know how to make this work?
>> >
>> > > On Jul
Trying again…anyone know how to make this work?
> On Jul 9, 2018, at 3:45 PM, Mohit Jaggi wrote:
>
> Folks,
> I am writing some Scala/Java code and want it to be usable from pyspark.
>
> For example:
> class MyStuff(addend: Int) {
> def myMapFunction(x: Int) = x
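The snippet above is cut off; a hedged reconstruction, where the method body is an assumption (only `MyStuff` and `myMapFunction` come from the post itself):

```scala
// Assumed intent: a small JVM class meant to be callable from PySpark.
class MyStuff(addend: Int) extends Serializable {
  def myMapFunction(x: Int): Int = x + addend // body assumed: add the constructor arg
}

object MyStuffDemo {
  def main(args: Array[String]): Unit = {
    println(new MyStuff(5).myMapFunction(2)) // prints 7
  }
}
```

From PySpark, a JVM class like this is usually reached through the Py4J gateway (e.g. `spark.sparkContext._jvm`), though wiring that up is outside this plain-Scala sketch.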
kJoinWorkerThread.java:107)
Process finished with exit code 137 (interrupted by signal 9: SIGKILL)
Mohit Jaggi
Founder,
Data Orchard LLC
www.dataorchardllc.com
doExec(ForkJoinTask.java:260)
at
scala.concurrent.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339)
at
scala.concurrent.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979)
at
scala.concurrent.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107)
Process finis
Thanks Cody. That was a good explanation!
Mohit Jaggi
Founder,
Data Orchard LLC
www.dataorchardllc.com
> On Aug 31, 2016, at 7:32 AM, Cody Koeninger wrote:
>
> http://blog.originate.com/blog/2014/02/27/types-inside-types-in-scala/
>
> On Wed, Aug 31, 2016 at 2:19 AM, S
new AA(1)
}
Mohit Jaggi
Founder,
Data Orchard LLC
www.dataorchardllc.com
> On Aug 30, 2016, at 9:51 PM, Mohit Jaggi wrote:
>
> thanks Sean. I am cross posting on dev to see why the code was written that
> way. Perhaps, this.type doesn’t do what is needed.
>
> Mohit Jaggi
>
thanks Sean. I am cross posting on dev to see why the code was written that
way. Perhaps, this.type doesn’t do what is needed.
Mohit Jaggi
Founder,
Data Orchard LLC
www.dataorchardllc.com
On Aug 30, 2016, at 2:08 PM, Sean Owen wrote:
I think it's imitating, for example, how Enum is del
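Sean's reference is to java.lang.Enum's self-referential signature (`Enum<E extends Enum<E>>`). A minimal plain-Scala sketch (my own example, not the Spark code under discussion) of what `this.type` does buy you in a setter chain:

```scala
class Param {
  private var v = 0
  // Returning this.type keeps the receiver's static type through the chain:
  def setValue(x: Int): this.type = { v = x; this }
  def value: Int = v
}

class SubParam extends Param {
  def doubled: Int = value * 2
}

object ThisTypeDemo {
  def main(args: Array[String]): Unit = {
    // The chained call still knows it has a SubParam:
    println(new SubParam().setValue(3).doubled) // prints 6
  }
}
```

One known limitation, and a common reason to use an F-bounded type parameter instead, is that `this.type` cannot be the return type of a method that constructs a *new* instance (e.g. a copy), since `this.type` means "this very object".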
't?
> This one com.databricks.spark.csv.util.TextFile has hadoop imports.
>
> I figured out that the answer to my question is just to add
> libraryDependencies
> += "org.apache.hadoop" % "hadoop-client" % "2.6.0".
> But i still wonder where is this 2.2.0 default com
spark-csv should not depend on hadoop
On Sun, Aug 16, 2015 at 9:05 AM, Gil Vernik wrote:
> I would like to build spark-csv with Hadoop 2.6.0
> I noticed that when I build it with sbt/sbt ++2.10.4 package, it builds
> with Hadoop 2.2.0 (at least this is what I saw in the .ivy2 repository).
>
>
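The fix described above amounts to pinning hadoop-client in the sbt build. A minimal build.sbt sketch; the `"provided"` scope is my addition, a common choice so the cluster's own Hadoop is used at run time rather than a bundled copy:

```scala
// build.sbt fragment: override the transitive Hadoop 2.2.0 default
libraryDependencies += "org.apache.hadoop" % "hadoop-client" % "2.6.0" % "provided"
```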
t the sliding window API be moved to spark-core. Not sure if that happened.]
- previous posts ---
http://spark.apache.org/docs/1.4.0/api/scala/index.html#org.apache.spark.mllib.rdd.RDDFunctions
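The linked `mllib.rdd.RDDFunctions.sliding` mirrors the standard library's `sliding` on ordinary collections; for intuition, a plain-Scala sketch with no Spark involved:

```scala
object SlidingDemo {
  def main(args: Array[String]): Unit = {
    // Overlapping windows of size 3 over 1..5, analogous to what
    // RDDFunctions.sliding produces over a partitioned RDD:
    val windows = (1 to 5).sliding(3).map(_.toList).toList
    println(windows) // prints List(List(1, 2, 3), List(2, 3, 4), List(3, 4, 5))
  }
}
```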
> On Fri, Jan 30, 2015 at 12:27 AM, Mohit Jaggi
> wrote:
>
>
> http://mail-archives.ap
e more efficient than extracting key and value and then using combine,
> however.
>
> —
> FG
>
>
> On Tue, Jan 27, 2015 at 10:17 PM, Mohit Jaggi <mailto:mohitja...@gmail.com>> wrote:
>
> Hi All,
> I have a use case where I have an RDD (not a k,v pair) where
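The question above is cut off, but its shape (aggregating an RDD that is not a (k, v) pair) matches the `(zeroValue)(seqOp, combOp)` pattern of `RDD.aggregate`. A plain-Scala analogue using `foldLeft` (my example, not code from the thread):

```scala
object AggregateDemo {
  def main(args: Array[String]): Unit = {
    val data = Seq(1, 2, 3, 4)
    // One pass computing (sum, count) directly -- no key/value extraction,
    // which is the efficiency point quoted above.
    val (sum, count) = data.foldLeft((0, 0)) { case ((s, c), x) => (s + x, c + 1) }
    println(s"mean = ${sum.toDouble / count}") // prints mean = 2.5
  }
}
```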
https://issues.apache.org/jira/browse/SPARK-3489
Folks,
I am Mohit Jaggi and I work for Ayasdi Inc. After experimenting with Spark
for a while and discovering its awesomeness(!), I made an attempt to
provide a wrapper API that looks like an R and/or pandas dataframe.
https://github.com