[ https://issues.apache.org/jira/browse/BAHIR-130?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16206390#comment-16206390 ]

ASF GitHub Bot commented on BAHIR-130:
--------------------------------------

Github user romeokienzler commented on the issue:

    https://github.com/apache/bahir/pull/49
  
    Will be fixed through migration to java-cloudant (see comments in 
[JIRA](https://issues.apache.org/jira/browse/BAHIR-130) for details)
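
A minimal sketch of what the mentioned migration could look like, assuming the java-cloudant 2.x ClientBuilder API and its Replay429Interceptor; the account name, credentials, and database name below are placeholders, not values from this issue:

    // Build a Cloudant client with java-cloudant. Its Replay429Interceptor
    // transparently backs off and retries requests rejected with
    // "too_many_requests", which is what the Lite plan returns.
    import com.cloudant.client.api.{ClientBuilder, CloudantClient, Database}
    import com.cloudant.http.interceptors.Replay429Interceptor

    val client: CloudantClient = ClientBuilder
      .account("my-cloudant-account")                    // placeholder account
      .username("my-username")                           // placeholder credentials
      .password("my-password")
      .interceptors(Replay429Interceptor.WITH_DEFAULTS)  // retry rate-limited requests
      .build()

    // Second argument: do not create the database if it does not exist.
    val db: Database = client.database("harlemshake2", false)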


> Support Cloudant Lite Plan
> --------------------------
>
>                 Key: BAHIR-130
>                 URL: https://issues.apache.org/jira/browse/BAHIR-130
>             Project: Bahir
>          Issue Type: Improvement
>          Components: Spark SQL Data Sources
>    Affects Versions: Spark-2.0.0, Spark-2.0.1, Spark-2.0.2, Spark-2.1.0, 
> Spark-2.1.1, Spark-2.2.0
>            Environment: Apache Spark, any
>            Reporter: Romeo Kienzer
>            Assignee: Romeo Kienzer
>            Priority: Minor
>             Fix For: Spark-2.1.1, Spark-2.2.1
>
>   Original Estimate: 24h
>  Remaining Estimate: 24h
>
> Cloudant offers a plan called "Lite" that allows only five requests per 
> second, so you end up with the following exception:
> org.apache.spark.SparkException: Job aborted due to stage failure: Task 4 in 
> stage 0.0 failed 10 times, most recent failure: Lost task 4.9 in stage 0.0 
> (TID 42, yp-spark-dal09-env5-0040): java.lang.RuntimeException: Database 
> harlemshake2 request error: {"error":"too_many_requests","reason":"You've 
> exceeded your current limit of 5 requests per second for query class. Please 
> try later.","class":"query","rate":5}
>       at org.apache.bahir.cloudant.common.JsonStoreDataAccess.getQueryResult(JsonStoreDataAccess.scala:158)
>       at org.apache.bahir.cloudant.common.JsonStoreDataAccess.getIterator(JsonStoreDataAccess.scala:72)
> Suggestion: change JsonStoreDataAccess.scala so that, when a 403 HTTP status 
> code is returned, the response is parsed to obtain the rate limit and the 
> query is throttled down to that limit. In addition, issue a WARNING in the 
> log.
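
A minimal, self-contained sketch of the suggested throttling behaviour, under the assumption that the error response body is available as a String where the connector handles the failure; the object and method names (RateLimitedQuery, withThrottle, request) are hypothetical and not part of JsonStoreDataAccess.scala:

    // Hypothetical helper illustrating the suggestion above: parse the
    // advertised rate limit out of a Cloudant "too_many_requests" error body,
    // log a warning, and throttle before retrying instead of failing the task.
    import java.util.logging.Logger

    object RateLimitedQuery {

      private val log = Logger.getLogger(getClass.getName)

      // Matches the "rate" field in an error body such as
      // {"error":"too_many_requests","reason":"...","class":"query","rate":5}
      private val RatePattern = """"rate"\s*:\s*(\d+)""".r

      def parseRate(body: String): Option[Int] =
        RatePattern.findFirstMatchIn(body).map(_.group(1).toInt)

      /**
       * Run `request` (returning HTTP status and response body) and, when the
       * body signals a rate-limit error, wait long enough to stay below the
       * advertised requests-per-second limit before retrying.
       */
      def withThrottle(maxRetries: Int = 10)(request: () => (Int, String)): String = {
        var attempt = 0
        while (true) {
          val (status, body) = request()
          if (!body.contains("too_many_requests")) {
            return body
          }
          attempt += 1
          if (attempt > maxRetries) {
            throw new RuntimeException(
              s"Giving up after $maxRetries rate-limited retries: $body")
          }
          val rate = parseRate(body).getOrElse(1)
          // Sleeping for 1000 / rate ms keeps the caller at or below the
          // advertised requests-per-second limit.
          val delayMs = math.max(1000L / rate, 200L)
          log.warning(
            s"Cloudant rate limit of $rate req/s exceeded (HTTP $status); " +
              s"retrying in $delayMs ms")
          Thread.sleep(delayMs)
        }
        throw new IllegalStateException("unreachable")
      }
    }

In the connector, such a wrapper would surround the HTTP call inside getQueryResult, so a rate-limited task backs off with a warning instead of failing ten times as in the stack trace above.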



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)
