Re: IllegalArgumentException UnsatisfiedLinkError snappy-1.1.2 spark-shell error

2016-06-15 Thread Arul Ramachandran
Hi Paolo,
Were you able to get this resolved? I am hitting the same issue; could you
please share your solution?
Thanks

On Mon, Feb 15, 2016 at 7:49 PM, Paolo Villaflores 
wrote:

>
> Yes, I have seen that. But java.io.tmpdir has a default definition on
> Linux--it is /tmp.
>
>
>
> On Tue, Feb 16, 2016 at 2:17 PM, Ted Yu  wrote:
>
>> Have you seen this thread ?
>>
>>
>> http://search-hadoop.com/m/q3RTtW43zT1e2nfb=Re+ibsnappyjava+so+failed+to+map+segment+from+shared+object
>>
>> On Mon, Feb 15, 2016 at 7:09 PM, Paolo Villaflores <
>> pbvillaflo...@gmail.com> wrote:
>>
>>>
>>> Hi,
>>>
>>>
>>>
>>> I am trying to run Spark 1.6.0.
>>>
>>> I have just installed fresh instances of Hadoop 2.6.0 and Hive 0.14.
>>>
>>> Hadoop, MapReduce, Hive and Beeline are working.
>>>
>>> However, as soon as I run `sc.textFile()` within spark-shell, it returns
>>> an error:
>>>
>>>
>>> $ spark-shell
>>> Welcome to
>>>     __
>>>  / __/__  ___ _/ /__
>>> _\ \/ _ \/ _ `/ __/  '_/
>>>/___/ .__/\_,_/_/ /_/\_\   version 1.6.0
>>>   /_/
>>>
>>> Using Scala version 2.10.5 (Java HotSpot(TM) 64-Bit Server VM, Java
>>> 1.7.0_67)
>>> Type in expressions to have them evaluated.
>>> Type :help for more information.
>>> Spark context available as sc.
>>> SQL context available as sqlContext.
>>>
>>> scala> val textFile = sc.textFile("README.md")
>>> java.lang.IllegalArgumentException: java.lang.UnsatisfiedLinkError:
>>> /tmp/snappy-1.1.2-2ccaf764-c7c4-4ff1-a68e-bbfdec0a3aa1-libsnappyjava.so:
>>> /tmp/snappy-1.1.2-2ccaf764-c7c4-4ff1-a68e-bbfdec0a3aa1-libsnappyjava.so:
>>> failed to map segment from shared object: Operation not permitted
>>> at
>>> org.apache.spark.io.SnappyCompressionCodec.<init>(CompressionCodec.scala:156)
>>> at
>>> sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
>>> at
>>> sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
>>> at
>>> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
>>> at
>>> java.lang.reflect.Constructor.newInstance(Constructor.java:526)
>>> at
>>> org.apache.spark.io.CompressionCodec$.createCodec(CompressionCodec.scala:72)
>>> at
>>> org.apache.spark.io.CompressionCodec$.createCodec(CompressionCodec.scala:65)
>>> at org.apache.spark.broadcast.TorrentBroadcast.org
>>> $apache$spark$broadcast$TorrentBroadcast$$setConf(TorrentBroadcast.scala:73)
>>> at
>>> org.apache.spark.broadcast.TorrentBroadcast.<init>(TorrentBroadcast.scala:80)
>>> at
>>> org.apache.spark.broadcast.TorrentBroadcastFactory.newBroadcast(TorrentBroadcastFactory.scala:34)
>>> at
>>> org.apache.spark.broadcast.BroadcastManager.newBroadcast(BroadcastManager.scala:63)
>>> at
>>> org.apache.spark.SparkContext.broadcast(SparkContext.scala:1326)
>>> at
>>> org.apache.spark.SparkContext$$anonfun$hadoopFile$1.apply(SparkContext.scala:1014)
>>> at
>>> org.apache.spark.SparkContext$$anonfun$hadoopFile$1.apply(SparkContext.scala:1011)
>>> at
>>> org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:150)
>>> at
>>> org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:111)
>>> at
>>> org.apache.spark.SparkContext.withScope(SparkContext.scala:714)
>>> at
>>> org.apache.spark.SparkContext.hadoopFile(SparkContext.scala:1011)
>>> at
>>> org.apache.spark.SparkContext$$anonfun$textFile$1.apply(SparkContext.scala:832)
>>> at
>>> org.apache.spark.SparkContext$$anonfun$textFile$1.apply(SparkContext.scala:830)
>>> at
>>> org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:150)
>>> at
>>> org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:111)
>>> at
>>> org.apache.spark.SparkContext.withScope(SparkContext.scala:714)
>>> at
>>> org.apache.spark.SparkContext.textFile(SparkContext.scala:830)
>>> at
>>> $iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC.<init>(<console>:27)
>>> at $iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC.<init>(<console>:32)
>>> at $iwC$$iwC$$iwC$$iwC$$iwC$$iwC.<init>(<console>:34)
>>> at $iwC$$iwC$$iwC$$iwC$$iwC.<init>(<console>:36)
>>> at $iwC$$iwC$$iwC$$iwC.<init>(<console>:38)
>>> at $iwC$$iwC$$iwC.<init>(<console>:40)
>>> at $iwC$$iwC.<init>(<console>:42)
>>> at $iwC.<init>(<console>:44)
>>> at <init>(<console>:46)
>>> at .<init>(<console>:50)
>>> at .<clinit>(<console>)
>>> at .<init>(<console>:7)
>>> at .<clinit>(<console>)
>>> at $print(<console>)
>>> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native
>>> Method)
>>> at
>>> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>>> at
>>> 

Re: IllegalArgumentException UnsatisfiedLinkError snappy-1.1.2 spark-shell error

2016-02-15 Thread Paolo Villaflores
Yes, I have seen that. But java.io.tmpdir has a default definition on
Linux--it is /tmp.
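
That default is exactly the catch if /tmp happens to be mounted noexec:
snappy-java unpacks libsnappyjava.so there and then cannot map it with
execute permission, which is what the "Operation not permitted" in the stack
trace suggests. A minimal workaround sketch, assuming a hypothetical
/opt/spark-tmp directory on an exec-mounted, writable filesystem (not
something confirmed for this particular setup):

$ mkdir -p /opt/spark-tmp
$ spark-shell --driver-java-options \
    "-Djava.io.tmpdir=/opt/spark-tmp -Dorg.xerial.snappy.tempdir=/opt/spark-tmp"

org.xerial.snappy.tempdir is snappy-java's own override for where it extracts
the native library; on a cluster the executors would need the same settings,
e.g. via spark.executor.extraJavaOptions.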



On Tue, Feb 16, 2016 at 2:17 PM, Ted Yu  wrote:

> Have you seen this thread ?
>
>
> http://search-hadoop.com/m/q3RTtW43zT1e2nfb=Re+ibsnappyjava+so+failed+to+map+segment+from+shared+object
>
> On Mon, Feb 15, 2016 at 7:09 PM, Paolo Villaflores <
> pbvillaflo...@gmail.com> wrote:
>
>>
>> Hi,
>>
>>
>>
>> I am trying to run Spark 1.6.0.
>>
>> I have just installed fresh instances of Hadoop 2.6.0 and Hive 0.14.
>>
>> Hadoop, MapReduce, Hive and Beeline are working.
>>
>> However, as soon as I run `sc.textFile()` within spark-shell, it returns
>> an error:
>>
>>
>> $ spark-shell
>> Welcome to
>>     __
>>  / __/__  ___ _/ /__
>> _\ \/ _ \/ _ `/ __/  '_/
>>/___/ .__/\_,_/_/ /_/\_\   version 1.6.0
>>   /_/
>>
>> Using Scala version 2.10.5 (Java HotSpot(TM) 64-Bit Server VM, Java
>> 1.7.0_67)
>> Type in expressions to have them evaluated.
>> Type :help for more information.
>> Spark context available as sc.
>> SQL context available as sqlContext.
>>
>> scala> val textFile = sc.textFile("README.md")
>> java.lang.IllegalArgumentException: java.lang.UnsatisfiedLinkError:
>> /tmp/snappy-1.1.2-2ccaf764-c7c4-4ff1-a68e-bbfdec0a3aa1-libsnappyjava.so:
>> /tmp/snappy-1.1.2-2ccaf764-c7c4-4ff1-a68e-bbfdec0a3aa1-libsnappyjava.so:
>> failed to map segment from shared object: Operation not permitted
>> at
>> org.apache.spark.io.SnappyCompressionCodec.<init>(CompressionCodec.scala:156)
>> at
>> sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
>> at
>> sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
>> at
>> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
>> at
>> java.lang.reflect.Constructor.newInstance(Constructor.java:526)
>> at
>> org.apache.spark.io.CompressionCodec$.createCodec(CompressionCodec.scala:72)
>> at
>> org.apache.spark.io.CompressionCodec$.createCodec(CompressionCodec.scala:65)
>> at org.apache.spark.broadcast.TorrentBroadcast.org
>> $apache$spark$broadcast$TorrentBroadcast$$setConf(TorrentBroadcast.scala:73)
>> at
>> org.apache.spark.broadcast.TorrentBroadcast.<init>(TorrentBroadcast.scala:80)
>> at
>> org.apache.spark.broadcast.TorrentBroadcastFactory.newBroadcast(TorrentBroadcastFactory.scala:34)
>> at
>> org.apache.spark.broadcast.BroadcastManager.newBroadcast(BroadcastManager.scala:63)
>> at
>> org.apache.spark.SparkContext.broadcast(SparkContext.scala:1326)
>> at
>> org.apache.spark.SparkContext$$anonfun$hadoopFile$1.apply(SparkContext.scala:1014)
>> at
>> org.apache.spark.SparkContext$$anonfun$hadoopFile$1.apply(SparkContext.scala:1011)
>> at
>> org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:150)
>> at
>> org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:111)
>> at
>> org.apache.spark.SparkContext.withScope(SparkContext.scala:714)
>> at
>> org.apache.spark.SparkContext.hadoopFile(SparkContext.scala:1011)
>> at
>> org.apache.spark.SparkContext$$anonfun$textFile$1.apply(SparkContext.scala:832)
>> at
>> org.apache.spark.SparkContext$$anonfun$textFile$1.apply(SparkContext.scala:830)
>> at
>> org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:150)
>> at
>> org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:111)
>> at
>> org.apache.spark.SparkContext.withScope(SparkContext.scala:714)
>> at
>> org.apache.spark.SparkContext.textFile(SparkContext.scala:830)
>> at
>> $iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC.<init>(<console>:27)
>> at $iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC.<init>(<console>:32)
>> at $iwC$$iwC$$iwC$$iwC$$iwC$$iwC.<init>(<console>:34)
>> at $iwC$$iwC$$iwC$$iwC$$iwC.<init>(<console>:36)
>> at $iwC$$iwC$$iwC$$iwC.<init>(<console>:38)
>> at $iwC$$iwC$$iwC.<init>(<console>:40)
>> at $iwC$$iwC.<init>(<console>:42)
>> at $iwC.<init>(<console>:44)
>> at <init>(<console>:46)
>> at .<init>(<console>:50)
>> at .<clinit>(<console>)
>> at .<init>(<console>:7)
>> at .<clinit>(<console>)
>> at $print(<console>)
>> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>> at
>> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>> at
>> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>> at java.lang.reflect.Method.invoke(Method.java:606)
>> at
>> org.apache.spark.repl.SparkIMain$ReadEvalPrint.call(SparkIMain.scala:1065)
>> at
>> org.apache.spark.repl.SparkIMain$Request.loadAndRun(SparkIMain.scala:1346)
>> 

Re: IllegalArgumentException UnsatisfiedLinkError snappy-1.1.2 spark-shell error

2016-02-15 Thread Ted Yu
Have you seen this thread ?

http://search-hadoop.com/m/q3RTtW43zT1e2nfb=Re+ibsnappyjava+so+failed+to+map+segment+from+shared+object
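
For reference, "failed to map segment from shared object: Operation not
permitted" is the classic symptom of the extraction directory being mounted
noexec. A quick check, assuming a Linux host where findmnt (or plain mount)
is available:

$ findmnt -T /tmp -no TARGET,OPTIONS

If the options include noexec, pointing java.io.tmpdir (and
org.xerial.snappy.tempdir) at an exec-mounted directory, or remounting /tmp
with exec, is the usual way out.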

On Mon, Feb 15, 2016 at 7:09 PM, Paolo Villaflores 
wrote:

>
> Hi,
>
>
>
> I am trying to run Spark 1.6.0.
>
> I have just installed fresh instances of Hadoop 2.6.0 and Hive 0.14.
>
> Hadoop, MapReduce, Hive and Beeline are working.
>
> However, as soon as I run `sc.textFile()` within spark-shell, it returns
> an error:
>
>
> $ spark-shell
> Welcome to
>     __
>  / __/__  ___ _/ /__
> _\ \/ _ \/ _ `/ __/  '_/
>/___/ .__/\_,_/_/ /_/\_\   version 1.6.0
>   /_/
>
> Using Scala version 2.10.5 (Java HotSpot(TM) 64-Bit Server VM, Java
> 1.7.0_67)
> Type in expressions to have them evaluated.
> Type :help for more information.
> Spark context available as sc.
> SQL context available as sqlContext.
>
> scala> val textFile = sc.textFile("README.md")
> java.lang.IllegalArgumentException: java.lang.UnsatisfiedLinkError:
> /tmp/snappy-1.1.2-2ccaf764-c7c4-4ff1-a68e-bbfdec0a3aa1-libsnappyjava.so:
> /tmp/snappy-1.1.2-2ccaf764-c7c4-4ff1-a68e-bbfdec0a3aa1-libsnappyjava.so:
> failed to map segment from shared object: Operation not permitted
> at
> org.apache.spark.io.SnappyCompressionCodec.<init>(CompressionCodec.scala:156)
> at
> sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
> at
> sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
> at
> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
> at
> java.lang.reflect.Constructor.newInstance(Constructor.java:526)
> at
> org.apache.spark.io.CompressionCodec$.createCodec(CompressionCodec.scala:72)
> at
> org.apache.spark.io.CompressionCodec$.createCodec(CompressionCodec.scala:65)
> at org.apache.spark.broadcast.TorrentBroadcast.org
> $apache$spark$broadcast$TorrentBroadcast$$setConf(TorrentBroadcast.scala:73)
> at
> org.apache.spark.broadcast.TorrentBroadcast.<init>(TorrentBroadcast.scala:80)
> at
> org.apache.spark.broadcast.TorrentBroadcastFactory.newBroadcast(TorrentBroadcastFactory.scala:34)
> at
> org.apache.spark.broadcast.BroadcastManager.newBroadcast(BroadcastManager.scala:63)
> at
> org.apache.spark.SparkContext.broadcast(SparkContext.scala:1326)
> at
> org.apache.spark.SparkContext$$anonfun$hadoopFile$1.apply(SparkContext.scala:1014)
> at
> org.apache.spark.SparkContext$$anonfun$hadoopFile$1.apply(SparkContext.scala:1011)
> at
> org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:150)
> at
> org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:111)
> at
> org.apache.spark.SparkContext.withScope(SparkContext.scala:714)
> at
> org.apache.spark.SparkContext.hadoopFile(SparkContext.scala:1011)
> at
> org.apache.spark.SparkContext$$anonfun$textFile$1.apply(SparkContext.scala:832)
> at
> org.apache.spark.SparkContext$$anonfun$textFile$1.apply(SparkContext.scala:830)
> at
> org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:150)
> at
> org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:111)
> at
> org.apache.spark.SparkContext.withScope(SparkContext.scala:714)
> at
> org.apache.spark.SparkContext.textFile(SparkContext.scala:830)
> at $iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC.<init>(<console>:27)
> at $iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC.<init>(<console>:32)
> at $iwC$$iwC$$iwC$$iwC$$iwC$$iwC.<init>(<console>:34)
> at $iwC$$iwC$$iwC$$iwC$$iwC.<init>(<console>:36)
> at $iwC$$iwC$$iwC$$iwC.<init>(<console>:38)
> at $iwC$$iwC$$iwC.<init>(<console>:40)
> at $iwC$$iwC.<init>(<console>:42)
> at $iwC.<init>(<console>:44)
> at <init>(<console>:46)
> at .<init>(<console>:50)
> at .<clinit>(<console>)
> at .<init>(<console>:7)
> at .<clinit>(<console>)
> at $print(<console>)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
> at
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:606)
> at
> org.apache.spark.repl.SparkIMain$ReadEvalPrint.call(SparkIMain.scala:1065)
> at
> org.apache.spark.repl.SparkIMain$Request.loadAndRun(SparkIMain.scala:1346)
> at
> org.apache.spark.repl.SparkIMain.loadAndRunReq$1(SparkIMain.scala:840)
> at
> org.apache.spark.repl.SparkIMain.interpret(SparkIMain.scala:871)
> at
> org.apache.spark.repl.SparkIMain.interpret(SparkIMain.scala:819)
> at
>