This may also be of help:
http://docs.aws.amazon.com/AmazonS3/latest/dev/request-rate-perf-considerations.html.
Make sure to spread your objects across multiple key-name prefixes (partitions)
so you are not rate limited by S3.
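As a rough sketch of what the linked doc recommends: prepend a short hash-derived prefix to each key so writes fan out across partitions instead of piling onto one hot prefix. `spread_key` and the prefix count are hypothetical, just to illustrate the idea:

```python
import hashlib

def spread_key(key: str, num_prefixes: int = 16) -> str:
    # Derive a stable prefix from the key's MD5 so related keys
    # distribute across S3 partitions rather than sharing one
    # lexicographic prefix (e.g. a common date path).
    digest = hashlib.md5(key.encode("utf-8")).hexdigest()
    prefix = int(digest[:2], 16) % num_prefixes
    return "{:02x}/{}".format(prefix, key)

print(spread_key("logs/2014/12/22/part-00000"))
```

The trade-off is that listing keys by their natural order gets harder, so this only pays off when the request rate, not listing, is the bottleneck.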
-Sven
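On the timeout question below: I am not aware of a single Spark-level S3 timeout, but the Hadoop S3A connector (Hadoop 2.6+) exposes connection limits and timeouts that Spark passes through via `spark.hadoop.*` properties. A sketch, assuming you read via `s3a://` URLs; the values and the jar name are illustrative only:

```
spark-submit \
  --conf spark.hadoop.fs.s3a.connection.timeout=20000 \
  --conf spark.hadoop.fs.s3a.connection.maximum=100 \
  --conf spark.hadoop.fs.s3a.attempts.maximum=5 \
  your-job.jar
```

Capping `fs.s3a.connection.maximum` per node may also help with the "hundreds of open connections" situation Shuai describes.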

On Mon, Dec 22, 2014 at 10:20 AM, durga katakam <durgak...@gmail.com> wrote:

> Yes, I am reading thousands of files every hour. Is there any way I can
> tell Spark to time out?
> Thanks for your help.
>
> -D
>
> On Mon, Dec 22, 2014 at 4:57 AM, Shuai Zheng <szheng.c...@gmail.com>
> wrote:
>
>> Is it possible that too many connections are open reading from S3 on one
>> node? I had this issue before when I opened a few hundred files on S3 from
>> one node. It just blocked without any error until it timed out later.
>>
>> On Monday, December 22, 2014, durga <durgak...@gmail.com> wrote:
>>
>>> Hi All,
>>>
>>> I am facing a strange issue sporadically: occasionally my Spark job
>>> hangs while reading S3 files. It is not throwing an exception or making
>>> any progress; it just hangs there.
>>>
>>> Is this a known issue? Please let me know how I could solve it.
>>>
>>> Thanks,
>>> -D
>>>
>>>
>>>
>>> --
>>> View this message in context:
>>> http://apache-spark-user-list.1001560.n3.nabble.com/S3-files-Spark-job-hungsup-tp20806.html
>>> Sent from the Apache Spark User List mailing list archive at Nabble.com.
>>>
>>> ---------------------------------------------------------------------
>>> To unsubscribe, e-mail: user-unsubscr...@spark.apache.org
>>> For additional commands, e-mail: user-h...@spark.apache.org
>>>
>>>
>


-- 
http://sites.google.com/site/krasser/?utm_source=sig
