This should be an easy rebase for your PR, so I went ahead and opened one just to get this fixed up:

https://github.com/apache/spark/pull/1466

On Thu, Jul 17, 2014 at 5:32 PM, Ted Malaska <ted.mala...@cloudera.com> wrote:
> Don't make this change yet.  I have another change, 1642, that needs to get
> through around the same code.
>
> I can make this change after 1642 is through.
>
>
> On Thu, Jul 17, 2014 at 12:25 PM, Sean Owen <so...@cloudera.com> wrote:
>>
>> CC tmalaska since he touched the line in question. This is a fun one.
>> So, here's the line of code added last week:
>>
>> val channelFactory = new NioServerSocketChannelFactory
>>   (Executors.newCachedThreadPool(), Executors.newCachedThreadPool());
>>
>> Scala parses this as two statements, one invoking a no-arg constructor
>> and one making a tuple for fun. Put it on one line and it's fine.
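>>
>> To make the gotcha concrete, here is a minimal sketch of how the two-line
>> form is read (my illustration, not code from the patch):
>>
>> import java.util.concurrent.Executors
>> import org.jboss.netty.channel.socket.nio.NioServerSocketChannelFactory
>>
>> // Statement 1: a constructor call with no arguments -- only compiles
>> // against Netty versions that actually have a no-arg constructor.
>> val channelFactory = new NioServerSocketChannelFactory
>> // Statement 2: builds a Tuple2 of thread pools and discards it.
>> (Executors.newCachedThreadPool(), Executors.newCachedThreadPool());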
>>
>> It works with newer Netty since there is a no-arg constructor. It
>> fails with older Netty, which is what you get with older Hadoop.
>>
>> The fix is obvious. I'm away and if nobody beats me to a PR in the
>> meantime, I'll propose one as an addendum to the recent JIRA.
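>>
>> For reference, a sketch of the one-line form I have in mind (the actual
>> patch may differ slightly):
>>
>> import java.util.concurrent.Executors
>> import org.jboss.netty.channel.socket.nio.NioServerSocketChannelFactory
>>
>> // Opening paren on the same line, so this is a single constructor call
>> // that resolves to the two-Executor overload present in old and new Netty.
>> val channelFactory = new NioServerSocketChannelFactory(
>>   Executors.newCachedThreadPool(), Executors.newCachedThreadPool())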
>>
>> Sean
>>
>>
>>
>> On Thu, Jul 17, 2014 at 3:58 PM, Nathan Kronenfeld
>> <nkronenf...@oculusinfo.com> wrote:
>> > My full build command is:
>> > ./sbt/sbt -Dhadoop.version=2.0.0-mr1-cdh4.6.0 clean assembly
>> >
>> >
>> > I've changed one line in RDD.scala, nothing else.
>> >
>> >
>> >
>> > On Thu, Jul 17, 2014 at 10:56 AM, Sean Owen <so...@cloudera.com> wrote:
>> >
>> >> This looks like a Netty version problem actually. Are you bringing in
>> >> something that might be changing the version of Netty used by Spark?
>> >> It depends a lot on how you are building things.
>> >>
>> >> It would be good to specify exactly how you're building here.
>> >>
>> >> On Thu, Jul 17, 2014 at 3:43 PM, Nathan Kronenfeld
>> >> <nkronenf...@oculusinfo.com> wrote:
>> >> > I'm trying to compile the latest code, with hadoop.version set to
>> >> > 2.0.0-mr1-cdh4.6.0.
>> >> >
>> >> > I'm getting the following error, which I don't get when I don't set the
>> >> > hadoop version:
>> >> >
>> >> > [error] /data/hdfs/1/home/nkronenfeld/git/spark-ndk/external/flume/src/main/scala/org/apache/spark/streaming/flume/FlumeInputDStream.scala:156: overloaded method constructor NioServerSocketChannelFactory with alternatives:
>> >> > [error]   (x$1: java.util.concurrent.Executor,x$2: java.util.concurrent.Executor,x$3: Int)org.jboss.netty.channel.socket.nio.NioServerSocketChannelFactory <and>
>> >> > [error]   (x$1: java.util.concurrent.Executor,x$2: java.util.concurrent.Executor)org.jboss.netty.channel.socket.nio.NioServerSocketChannelFactory
>> >> > [error]  cannot be applied to ()
>> >> > [error]       val channelFactory = new NioServerSocketChannelFactory
>> >> > [error]                            ^
>> >> > [error] one error found
>> >> >
>> >> >
>> >> > I don't know flume from a hole in the wall - does anyone know what I
>> >> > can do to fix this?
>> >> >
>> >> >
>> >> > Thanks,
>> >> >          -Nathan
>> >> >
>> >> >
>> >> > --
>> >> > Nathan Kronenfeld
>> >> > Senior Visualization Developer
>> >> > Oculus Info Inc
>> >> > 2 Berkeley Street, Suite 600,
>> >> > Toronto, Ontario M5A 4J5
>> >> > Phone:  +1-416-203-3003 x 238
>> >> > Email:  nkronenf...@oculusinfo.com
>> >>
>> >
>> >
>> >
>> > --
>> > Nathan Kronenfeld
>> > Senior Visualization Developer
>> > Oculus Info Inc
>> > 2 Berkeley Street, Suite 600,
>> > Toronto, Ontario M5A 4J5
>> > Phone:  +1-416-203-3003 x 238
>> > Email:  nkronenf...@oculusinfo.com
>
>
