Pardon, I was wrong about this. There is actually code distributed
under com.hadoop, and that's where this class is. Oops.

https://code.google.com/a/apache-extras.org/p/hadoop-gpl-compression/source/browse/trunk/src/java/com/hadoop/mapreduce/LzoTextInputFormat.java
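For reference, the Scala equivalent of Gurvinder's Python call passes Class objects via classOf[...] rather than class-name strings or bare object references. A minimal sketch, assuming the hadoop-lzo jar (which provides com.hadoop.mapreduce.LzoTextInputFormat) is on both the driver and executor classpaths:

```scala
import com.hadoop.mapreduce.LzoTextInputFormat
import org.apache.hadoop.io.{LongWritable, Text}

// newAPIHadoopFile takes Class objects, not instances or strings,
// so each type argument goes through classOf[...]
val csv = sc.newAPIHadoopFile(
  "s3n://datasets.elasticmapreduce/ngrams/books/20090715/eng-us-all/1gram/data",
  classOf[LzoTextInputFormat],
  classOf[LongWritable],
  classOf[Text])
csv.count()
```

The "object hadoop is not a member of package com" error further down suggests the LZO jar simply isn't on the classpath; adding it when launching the shell (e.g. via --jars) should make the import resolve.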

On Sun, Jul 6, 2014 at 6:37 AM, Sean Owen <so...@cloudera.com> wrote:
> The package com.hadoop.mapreduce certainly looks wrong. If it were a Hadoop
> class, it would start with org.apache.hadoop.
>
> On Jul 6, 2014 4:20 AM, "Nicholas Chammas" <nicholas.cham...@gmail.com>
> wrote:
>>
>> On Fri, Jul 4, 2014 at 3:33 PM, Gurvinder Singh
>> <gurvinder.si...@uninett.no> wrote:
>>>
>>> csv =
>>>
>>> sc.newAPIHadoopFile(opts.input,"com.hadoop.mapreduce.LzoTextInputFormat","org.apache.hadoop.io.LongWritable","org.apache.hadoop.io.Text").count()
>>
>> Does anyone know what the rough equivalent of this would be in the Scala
>> API?
>>
>> I am trying the following, but the first import yields an error on my
>> spark-ec2 cluster:
>>
>> import com.hadoop.mapreduce.LzoTextInputFormat
>> import org.apache.hadoop.io.LongWritable
>> import org.apache.hadoop.io.Text
>>
>>
>> sc.newAPIHadoopFile("s3n://datasets.elasticmapreduce/ngrams/books/20090715/eng-us-all/1gram/data",
>> LzoTextInputFormat, LongWritable, Text)
>>
>> scala> import com.hadoop.mapreduce.LzoTextInputFormat
>> <console>:12: error: object hadoop is not a member of package com
>>        import com.hadoop.mapreduce.LzoTextInputFormat
>>
>> Nick