Ideally there would be an exclusion in the pom to deal with this.
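For reference, a Maven exclusion of that shape could look like the sketch below. It is only an illustration: the hadoop-client coordinates, the version, and the assumption that the conflicting Guava arrives transitively through the Hadoop client are guesses, not taken from the actual pom in question.

    <!-- Hypothetical dependency block: exclude the transitive Guava so only one
         version ends up on the classpath. Adjust groupId/artifactId/version to
         whichever dependency actually pulls in the conflicting Guava. -->
    <dependency>
      <groupId>org.apache.hadoop</groupId>
      <artifactId>hadoop-client</artifactId>
      <version>2.6.0</version>
      <exclusions>
        <exclusion>
          <groupId>com.google.guava</groupId>
          <artifactId>guava</artifactId>
        </exclusion>
      </exclusions>
    </dependency>

After the exclusion, the Guava used at runtime is whichever version the rest of the build declares (or shades), so it still has to be pinned explicitly if Spark and Hadoop expect different versions.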
>
> Dale.
>
> From: Zhan Zhang
> Date: Friday, March 27, 2015 at 4:28 PM
> To: "Johnson, Dale"
> Cc: Ted Yu, user
>
> Subject: Re: Can't access file in spark, but can in hadoop
>
hnson, Dale" mailto:daljohn...@ebay.com>>
Cc: Ted Yu mailto:yuzhih...@gmail.com>>, user
mailto:user@spark.apache.org>>
Subject: Re: Can't access file in spark, but can in hadoop
Probably a guava version conflict issue. What spark version did you use, and
which hadoop version?
From: Ted Yu
Date: Thursday, March 26, 2015 at 4:54 PM
To: "Johnson, Dale"
Cc: user
Subject: Re: Can't access file in spark, but can in hadoop
Looks like the following assertion failed:
Preconditions.checkState(storageIDsCount == locs.size());
locs is a List<DatanodeInfoProto>
Can you enhance the assertion to log more information?
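A minimal sketch of what such an enhancement might look like, using Guava's message-template overload of checkState; the class and variable names below are illustrative stand-ins, not the actual Hadoop source:

    import com.google.common.base.Preconditions;
    import java.util.List;

    class LocatedBlockCheck {
      // Hypothetical stand-in for the HDFS conversion step where the quoted
      // assertion fires.
      static void checkStorageIds(int storageIDsCount, List<?> locs) {
        // Report both counts so the failing block can be diagnosed from the
        // exception message instead of a bare IllegalStateException.
        Preconditions.checkState(storageIDsCount == locs.size(),
            "storageIDsCount (%s) does not match locs.size() (%s)",
            storageIDsCount, locs.size());
      }
    }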
Cheers
On Thu, Mar 26, 2015 at 3:06 PM, Dale Johnson wrote:
There seems to be a special kind of "corrupted according to Spark" state of
file in HDFS. I have isolated a set of files (maybe 1% of all files I need
to work with) which are producing the following stack dump when I try to
sc.textFile() open them. When I try to open directories, most large
direc