Re: Usergrid 2.x Issues

2015-12-08 Thread Jaskaran Singh
Hi Michael,

This makes sense. I can confirm that while we have been seeing missing-entity
errors under high load, these automatically resolve themselves as the load
decreases.

Another anomaly we have noticed is that Usergrid responds with a "401" code
and the message "Unable to authenticate OAuth credentials" for certain
users' credentials under high load, and the same credentials work fine after
the load reduces. Can we assume that this issue (intermittent invalid
credentials) has the same underlying root cause (i.e. Elasticsearch is not
responding)? Below are a few examples of the error_description for such 401
errors:
1. 'invalid username or password'
2. 'Unable to authenticate OAuth credentials'
3. 'Unable to authenticate due to corrupt access token'

Regarding your suggestion to increase the search thread pool queue size, we
were already using a setting of 1000 (with 320 threads). Should we consider
increasing this further, or simply provide additional resources (CPU/RAM)
to the ES process?
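
For reference, here is roughly what we have in elasticsearch.yml today (ES
1.x syntax, per the thread-pool doc you linked; these are our current
values, not a recommendation):

threadpool:
    search:
        type: fixed
        size: 320
        queue_size: 1000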

Additionally, we are seeing Cassandra connection timeouts under high load
conditions, specifically the exceptions below:
ERROR stage.write.WriteCommit.call(132)-
Failed to execute write asynchronously
com.netflix.astyanax.connectionpool.exceptions.TimeoutException:
TimeoutException: [host=10.0.0.237(10.0.0.237):9160, latency=2003(2003),
attempts=1]org.apache.thrift.transport.TTransportException:
java.net.SocketTimeoutException: Read timed out

These exceptions occur even though OpsCenter was reporting only medium load
on our cluster. Is there a way to tune the Astyanax library? Please let us
know if you have any recommendations in this area.
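
To make the question concrete, below is a minimal sketch of the kind of
Astyanax pool tuning we assume is relevant, written directly against the
Astyanax API (the seed host, keyspace, pool name, and timeout values are
placeholders, not Usergrid's actual wiring):

import com.netflix.astyanax.AstyanaxContext;
import com.netflix.astyanax.Keyspace;
import com.netflix.astyanax.connectionpool.impl.ConnectionPoolConfigurationImpl;
import com.netflix.astyanax.connectionpool.impl.CountingConnectionPoolMonitor;
import com.netflix.astyanax.impl.AstyanaxConfigurationImpl;
import com.netflix.astyanax.thrift.ThriftFamilyFactory;

public class AstyanaxPoolTuning {
    public static void main(String[] args) {
        // Placeholder values throughout; the latency=2003 in our logs
        // suggests an effective ~2 s socket timeout today.
        ConnectionPoolConfigurationImpl pool =
            new ConnectionPoolConfigurationImpl("UsergridConnectionPool")
                .setPort(9160)
                .setSeeds("10.0.0.237:9160")
                .setMaxConnsPerHost(32)    // allow more connections per host under load
                .setConnectTimeout(5000)   // ms to establish a connection
                .setSocketTimeout(10000);  // ms before a read times out

        AstyanaxContext<Keyspace> context = new AstyanaxContext.Builder()
            .forCluster("TestCluster")               // placeholder cluster name
            .forKeyspace("Usergrid_Applications")    // placeholder keyspace name
            .withAstyanaxConfiguration(new AstyanaxConfigurationImpl())
            .withConnectionPoolConfiguration(pool)
            .withConnectionPoolMonitor(new CountingConnectionPoolMonitor())
            .buildKeyspace(ThriftFamilyFactory.getInstance());
        context.start();

        Keyspace keyspace = context.getClient();     // ready for reads/writes
        System.out.println("Connected to " + keyspace.getKeyspaceName());
    }
}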

Thanks a lot for the help.

Thanks
Jaskaran

On Mon, Dec 7, 2015 at 2:29 AM, Michael Russo <michaelaru...@gmail.com>
wrote:

> Here are a couple things to check:
>
> 1) Can you query all of these entities out when the system is not under
> load?
> 2) Elasticsearch has a search queue for index query requests. (
> https://www.elastic.co/guide/en/elasticsearch/reference/1.6/modules-threadpool.html)
> When this is full, the searches are rejected. Currently Usergrid surfaces
> this as no results returned, rather than as an "unable to query" or other
> identifying error message (we're aware and plan to fix this in the future).
> Try increasing the queue size to 1000. You might get delayed results, but
> this can prevent empty results for data that's known to be in the index.
>
> Thanks.
> -Michael R.
>
> On Dec 5, 2015, at 07:07, Jaskaran Singh <
> jaskaran.si...@comprotechnologies.com> wrote:
>
> Hello All,
>
> We are testing Usergrid 2.x (master branch) for our application, which was
> previously prototyped on Usergrid 1.x. We are noticing some anomalies that
> cause errors in our application, which otherwise works fine against
> Usergrid 1.x. Specifically, we are seeing empty responses when querying
> custom collections for a particular entity record.
> Following is an example of one such query:
> http://server-name/b2perf1/default/userdata?client_id=<...>&client_secret=<...>&ql=userproductid='4d543507-9839-11e5-ba08-0a75091e6d25~~5c856de9-9828-11e5-ba08-0a75091e6d25'
>
> In the above scenario, we are querying a custom collection "userdata".
> Under high load conditions (performance tests), this query starts
> returning an empty entities array (see below), even though this entity did
> exist at one time and we have no code / logic to delete entities.
> {
> "action": "get",
> "application": "0f7a2396-9826-11e5-ba08-0a75091e6d25",
> "params": {
> "ql": [
>
> "userproductid='4d543507-9839-11e5-ba08-0a75091e6d25~~5c856de9-9828-11e5-ba08-0a75091e6d25'"
> ]
> },
> "path": "/userdata",
> "uri": "http://localhost:8080/b2perf1/default/userdata;,
> "entities": [],
> "timestamp": 1449322746733,
> "duration": 1053,
> "organization": "b2perf1",
> "applicationName": "default",
> "count": 0
> }
>
> This has been happening quite randomly / intermittently, and we have not
> been able to isolate any reproduction steps beyond running load /
> performance tests until the problem eventually shows up.
> Note that the entities are created prior to the load test, and we can
> confirm that they existed before running the load test.
>
> We have never noticed this issue for non-query calls (i.e. calls that do
> not directly provide a field to query on).
>
> Our suspicion is that while these records do exist in Cassandra (because
> we have never deleted them), the Elasticsearch index is 'not' in sync or
> is not functioning properly.
> How do we go about debugging this problem? Is there any particular logging
> or metric we can check to confirm whether the Elasticsearch index is up to
> date with the changes in Cassandra?
>
> Any other suggestions will be greatly appreciated.
>
> Thanks
> Jaskaran
>
>


Usergrid 2.x Issues

2015-12-05 Thread Jaskaran Singh
Hello All,

We are testing Usergrid 2.x (master branch) for our application, which was
previously prototyped on Usergrid 1.x. We are noticing some anomalies that
cause errors in our application, which otherwise works fine against
Usergrid 1.x. Specifically, we are seeing empty responses when querying
custom collections for a particular entity record.
Following is an example of one such query:
http://server-name/b2perf1/default/userdata?client_id=<...>&client_secret=<...>&ql=userproductid='4d543507-9839-11e5-ba08-0a75091e6d25~~5c856de9-9828-11e5-ba08-0a75091e6d25'

In the above scenario, we are querying a custom collection "userdata". Under
high load conditions (performance tests), this query starts returning an
empty entities array (see below), even though this entity did exist at one
time and we have no code / logic to delete entities.
{
"action": "get",
"application": "0f7a2396-9826-11e5-ba08-0a75091e6d25",
"params": {
"ql": [

"userproductid='4d543507-9839-11e5-ba08-0a75091e6d25~~5c856de9-9828-11e5-ba08-0a75091e6d25'"
]
},
"path": "/userdata",
"uri": "http://localhost:8080/b2perf1/default/userdata;,
"entities": [],
"timestamp": 1449322746733,
"duration": 1053,
"organization": "b2perf1",
"applicationName": "default",
"count": 0
}
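
For completeness, here is the same query issued via curl (client
credentials elided; --data-urlencode takes care of quoting the 'ql'
parameter):

curl -G "http://server-name/b2perf1/default/userdata" \
    --data-urlencode "client_id=<...>" \
    --data-urlencode "client_secret=<...>" \
    --data-urlencode "ql=userproductid='4d543507-9839-11e5-ba08-0a75091e6d25~~5c856de9-9828-11e5-ba08-0a75091e6d25'"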

This has been happening quite randomly / intermittently, and we have not
been able to isolate any reproduction steps beyond running load /
performance tests until the problem eventually shows up.
Note that the entities are created prior to the load test, and we can
confirm that they existed before running the load test.

We have never noticed this issue for non-query calls (i.e. calls that do
not directly provide a field to query on).

Our suspicion is that while these records do exist in Cassandra (because we
have never deleted them), the Elasticsearch index is 'not' in sync or is
not functioning properly.
How do we go about debugging this problem? Is there any particular logging
or metric we can check to confirm whether the Elasticsearch index is up to
date with the changes in Cassandra?
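
If it points in the right direction, one thing we can watch ourselves is
search-queue rejections on the ES side, e.g.:

curl "http://localhost:9200/_cat/thread_pool?v"

which, as we understand it, reports active/queue/rejected counts for the
search thread pool on each node.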

Any other suggestions will be greatly appreciated.

Thanks
Jaskaran


Re: Usergrid 2.x Issues

2015-12-10 Thread Jaskaran Singh
Hi Michael,

I am providing an update on our situation. We have changed our application
logic to minimize the use of queries (i.e. calls with "ql=...") in Usergrid
2.x. This seems to have provided significant benefit, and all the problems
reported below seem to have disappeared (an example of the change is below).
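
Where we previously issued "?ql=userproductid='...'" style queries, we now
fetch the entity directly by name or UUID, which, as we understand it, is
served from Cassandra without consulting the Elasticsearch index:

curl "http://server-name/b2perf1/default/userdata/<entity-uuid-or-name>?client_id=<...>&client_secret=<...>"

The host and collection here are from our earlier mails; credentials are
elided.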

To some extent this is good news. However, we were lucky to be able to work
around the logic, and we would like to understand any limitations or best
practices around the use of queries (which are serviced by Elasticsearch in
Usergrid 2.x) under high-load situations.

Also, please let me know if there is an existing JIRA issue for addressing
the empty entity response when Elasticsearch is overloaded, or should I add
one?

Thanks in advance,
Jaskaran


On Tue, Dec 8, 2015 at 6:00 PM, Jaskaran Singh <
jaskaran.si...@comprotechnologies.com> wrote:

> Hi Michael,
>
> This makes sense. I can confirm that while we have been seeing missing-entity
> errors under high load, these automatically resolve themselves as the load
> decreases.
>
> Another anomaly we have noticed is that Usergrid responds with a "401" code
> and the message "Unable to authenticate OAuth credentials" for certain
> users' credentials under high load, and the same credentials work fine after
> the load reduces. Can we assume that this issue (intermittent invalid
> credentials) has the same underlying root cause (i.e. Elasticsearch is not
> responding)? Below are a few examples of the error_description for such 401
> errors:
> 1. 'invalid username or password'
> 2. 'Unable to authenticate OAuth credentials'
> 3. 'Unable to authenticate due to corrupt access token'
>
> Regarding your suggestion to increase the search thread pool queue size,
> we were already using a setting of 1000 (with 320 threads). Should we
> consider increasing this further, or simply provide additional resources
> (CPU/RAM) to the ES process?
>
> Additionally, we are seeing Cassandra connection timeouts under high load
> conditions, specifically the exceptions below:
> ERROR stage.write.WriteCommit.call(132)-
> Failed to execute write asynchronously
> com.netflix.astyanax.connectionpool.exceptions.TimeoutException:
> TimeoutException: [host=10.0.0.237(10.0.0.237):9160, latency=2003(2003),
> attempts=1]org.apache.thrift.transport.TTransportException:
> java.net.SocketTimeoutException: Read timed out
>
> These exceptions occur even though OpsCenter was reporting only medium load
> on our cluster. Is there a way to tune the Astyanax library? Please let us
> know if you have any recommendations in this area.
>
> Thanks a lot for the help.
>
> Thanks
> Jaskaran
>
> On Mon, Dec 7, 2015 at 2:29 AM, Michael Russo <michaelaru...@gmail.com>
> wrote:
>
>> Here are a couple things to check:
>>
>> 1) Can you query all of these entities out when the system is not under
>> load?
>> 2) Elasticsearch has a search queue for index query requests. (
>> https://www.elastic.co/guide/en/elasticsearch/reference/1.6/modules-threadpool.html)
>> When this is full, the searches are rejected. Currently Usergrid surfaces
>> this as no results returned, rather than as an "unable to query" or other
>> identifying error message (we're aware and plan to fix this in the future).
>> Try increasing the queue size to 1000. You might get delayed results, but
>> this can prevent empty results for data that's known to be in the index.
>>
>> Thanks.
>> -Michael R.
>>
>> On Dec 5, 2015, at 07:07, Jaskaran Singh <
>> jaskaran.si...@comprotechnologies.com> wrote:
>>
>> Hello All,
>>
>> We are testing Usergrid 2.x (master branch) for our application, which was
>> previously prototyped on Usergrid 1.x. We are noticing some anomalies that
>> cause errors in our application, which otherwise works fine against
>> Usergrid 1.x. Specifically, we are seeing empty responses when querying
>> custom collections for a particular entity record.
>> Following is an example of one such query:
>> http://server-name/b2perf1/default/userdata?client_id=<...>&client_secret=<...>&ql=userproductid='4d543507-9839-11e5-ba08-0a75091e6d25~~5c856de9-9828-11e5-ba08-0a75091e6d25'
>>
>> In the above scenario, we are querying a custom collection "userdata".
>> Under high load conditions (performance tests), this query starts
>> returning an empty entities array (see below), even though this entity did
>> exist at one time and we have no code / logic to delete entities.
>> {
>> "action": "get",
>> "application": "0f7a2396-9826-11e5-ba08-0a75091e6d25",
>>

Re: Startup fails with BeanCreationException for shiroFilter

2016-06-06 Thread Jaskaran Singh
Hi Petteri,
Could you post your usergrid-deployment.properties file? This will help in
figuring out the issue.
Also, by any chance have you set the Elasticsearch port to 9200 in your
Usergrid property config? Normally it should connect on 9300, unless you
have specified a different port in your config.
https://groups.google.com/forum/#!topic/elasticsearch/sIxoF76OuxY
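
For what it's worth, as we read the deployment guide, the relevant lines in
usergrid-deployment.properties should look something like this (host values
are placeholders):

cassandra.url=localhost:9160
elasticsearch.hosts=127.0.0.1
elasticsearch.port=9300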


On Mon, Jun 6, 2016 at 12:29 PM, Petteri Sulonen <
petteri.sulo...@avaintec.com> wrote:

> Hi again --
>
> All right, making slow progress, but I'm stuck again. Attempting to curl
> status fails after a pretty long timeout:
>
> root@vmu-psulonen2:/var/log/tomcat7# curl http://localhost:8080/status
> {"error":"uncaught","timestamp":1465195897823,"duration":0,"error_description":"Internal
> Server
> Error","exception":"org.apache.usergrid.rest.exceptions.UncaughtException","error_id":"1daa4fe9-2bb3-11e6-9b84-08002798df4e"}
>
>
> The catalina.out log has quite a lot of stuff in it. From the relevant
> time:
>
> 08:51:37,823 ERROR AbstractExceptionMapper:106 - Server Error (500):
> {"error":"uncaught","timestamp":1465195897823,"duration":0,"error_description":"Internal
> Server
> Error","exception":"org.apache.usergrid.rest.exceptions.UncaughtException","error_id":"1daa4fe9-2bb3-11e6-9b84-08002798df4e"}
> 08:51:37,828  INFO UsergridSystemMonitor:103 - TimerThreshold triggered on
> duration: 30009
> {"path":"/status","applicationId":null}
> 
> 08:51:39,588  WARN unicast:460 - [default] failed to send ping to
> [[#zen_unicast_1#][vmu-psulonen2][inet[/127.0.0.1:9200]]]
> org.elasticsearch.transport.ReceiveTimeoutTransportException:
> [][inet[/127.0.0.1:9200]][internal:discovery/zen/unicast_gte_1_4]
> request_id [267] timed out after [3751ms]
> at
> org.elasticsearch.transport.TransportService$TimeoutHandler.run(TransportService.java:366)
> at
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
> at
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
> at java.lang.Thread.run(Thread.java:745)
> 08:51:42,076  INFO JobSchedulerService:97 - Running one check iteration ...
> 08:51:42,077  INFO CassandraMQUtils:249 -
> QueueManagerFactoryImpl.getFromQueue: /jobs/
> 08:51:42,145 ERROR AbstractSearch:272 - Error getting oldest queue message
> ID
> me.prettyprint.hector.api.exceptions.HInvalidRequestException:
> InvalidRequestException(why:Keyspace 'Usergrid_Applications' does not exist)
> at
> me.prettyprint.cassandra.connection.client.HThriftClient.getCassandra(HThriftClient.java:112)
> at
> me.prettyprint.cassandra.connection.HConnectionManager.operateWithFailover(HConnectionManager.java:251)
> at
> me.prettyprint.cassandra.service.KeyspaceServiceImpl.operateWithFailover(KeyspaceServiceImpl.java:132)
> at
> me.prettyprint.cassandra.service.KeyspaceServiceImpl.getSlice(KeyspaceServiceImpl.java:290)
> at
> me.prettyprint.cassandra.service.VirtualKeyspaceServiceImpl.getSlice(VirtualKeyspaceServiceImpl.java:133)
>
> I figure the "Keyspace 'Usergrid_Applications' does not exist" errors are
> because the database isn't initialised. However, curl'ing the DB init URL
> from the config doc results in a similar error to my status call above.
> Additionally, there are those failed pings to Elasticsearch; I did telnet
> to 127.0.0.1 9200 and the port does connect.
>
> There's more stuff in the log from the startup but I'm not sure how
> relevant it is.
>
> Your help is again much appreciated,
>
> Petteri
>
> On 03/06/16 15:35, Dave wrote:
>
>> The root cause of the problem appears to be this:
>>
>>   me.prettyprint.hector.api.exceptions.HectorException: All host pools
>> marked down.
>>
>> That exception means that Hector (one of the Cassandra clients we use)
>> cannot contact Cassandra. Usually this means that you have the wrong value
>> in one of the Cassandra-related configuration properties (hostname or port
>> is wrong), Cassandra is not running or some network issue is preventing
>> connection to Cassandra.
>>
>> By default we have this:
>>cassandra.url=localhost:9160
>>
>> Usergrid will connect to Cassandra and expects Thrift protocol on port
>> 9160.  I wonder: do you have to explicitly enable Thrift on that port in
>> Cassandra 3.x?
>>
>> Dave
>>
>>
>>
>> On Fri, Jun 3, 2016 at 8:49 AM Petteri Sulonen <
>> petteri.sulo...@avaintec.com >
>> wrote:
>>
>> Hi, folks --
>>
>> I'm evaluating Usergrid as a candidate for our cloud service backend,
>> and am attempting to set up a simple, single-node, single-server
>> installation of it, but can't get it to respond;
>> http://localhost:8080/status comes back with a blank 404.
>>
>> I'm following the instructions here:
>> https://usergrid.apache.org/docs/installation/deployment-guide.html.
>>
>> OS: Ubuntu 16.04 (server)
>>
>> Java version: 1.8.0_91 (Oracle)
>>
>> Cassandra:
>>
>> $ cqlsh
>>
>> Connected to Test Cluster at 

Re: Usergrid 2.2 (Master Branch, 1 October) - Startup fails in Tomcat

2016-10-05 Thread Jaskaran Singh
Thanks Michael.

On Thu, Oct 6, 2016 at 10:44 AM, Michael Russo <michaelaru...@gmail.com>
wrote:

> Had a deeper look ( thanks for the log snippet, it helped).  I think we
> have a small bug in org/apache/usergrid/persistence/collection/exception/
> CollectionRuntimeException.java:60.  Basically in the latest Master
> branch, we've started to move some of the database code over to the
> Datastax CQL driver from Astyanax.  Because of this, the error messages
> thrown from each of those is different and this is getting missed.  I'll be
> able to have a look further tomorrow 10/6 and fix in Master.  I'll reply
> back after I've verified the problem, and subsequently fixed it.
>
> Thanks.
> -Michael
>
> On Wed, Oct 5, 2016 at 7:24 AM, Jaskaran Singh <jaskaran.singh@
> comprotechnologies.com> wrote:
>
>> Hi Michael,
>>
>> Thank you for your response. With the latest Usergrid 2.2.0 (Master
>> branch from 1 October, SHA:77d2026907b03625ad7e1ef742c8656712497c8d),
>> the ROOT war does not get loaded / deployed properly in Tomcat, due to
>> startup errors (see below). Due to this we’re unable to call the “setup”
>> and “bootstrap” curl calls, as mentioned in your mail.
>>
>> 
>> --
>> INFO ShutdownListener:59 - ShutdownListener invoked
>>
>> SEVERE: Error listenerStart
>>
>> Caused by: org.springframework.beans.BeanInstantiationException: Could
>> not instantiate bean class
>> [org.apache.usergrid.corepersistence.CpEntityManagerFactory]: Constructor
>> threw exception; nested exception is java.lang.RuntimeException: Unable to
>> get management app after 101 retries
>>
>> ERROR ContextLoader:331 - Context initialization failed
>>
>> INFO DefaultListableBeanFactory:444 - Destroying singletons in
>> org.springframework.beans.factory.support.DefaultListableBeanFactory@58548b9f:
>> defining beans
>> [queueJob,org.springframework.context.annotation.internalConfigurationAnnotationProcessor,.
>>
>> InvalidRequestException(why:unconfigured table Data_Migration_Info)
>> ERROR CpEntityManagerFactory:302 - 1: Error (BadRequestException) Unable
>> to connect to cassandra to retrieve status
>>
>> InvalidRequestException(why:unconfigured table Data_Migration_Info)
>> ERROR CpEntityManagerFactory:360 - Error getting entity manager
>> com.google.common.util.concurrent.UncheckedExecutionException:
>> java.lang.RuntimeException: Error getting application
>> b6768a08-b5d5-11e3-a495-11ddb1de66c8
>> 
>> --
>>
>> Please suggest if we are missing something.
>>
>> Thanks
>> Jaskaran
>>
>>
>> On Wed, Oct 5, 2016 at 8:51 AM, Michael Russo <michaelaru...@gmail.com>
>> wrote:
>>
>>> In fresh, first-time deployments of Usergrid to a new database and
>>> Elasticsearch cluster, you need to invoke the database setup and bootstrap
>>> APIs:
>>>
>>> curl -i -X PUT -u : "http://localhost:8080/system/database/setup"
>>> curl -i -X PUT -u : "http://localhost:8080/system/database/bootstrap"
>>>
>>> After starting tomcat and running the above curl commands, it should set
>>> up the schema and bootstrap the system.
>>>
>>> Thanks.
>>> -Michael R.
>>>
>>>
>>> On Tue, Oct 4, 2016 at 3:22 AM, Jaskaran Singh <
>>> jaskaran.si...@comprotechnologies.com> wrote:
>>>
>>>> Hello Usergrid Team,
>>>>
>>>> Our application works with Usergrid 1.0.2, 2.1.0 and 2.2.0 (Master
>>>> branch from 2nd September, commit SHA:
>>>> 9fae8037a4b881e9c13a5a1f23f71dc34e950c40).
>>>>
>>>> Now, we tried testing our application with the latest 2.2.0 (Master
>>>> branch from 1 October SHA: 77d2026907b03625ad7e1ef742c8656712497c8d).
>>>> But during deployment/startup of Tomcat, we are getting the following
>>>> error (with fresh Cassandra + ES environments).
>>>>
>>>> BadRequestException: InvalidRequestException(why:unconfigured table
>>>> Data_Migration_Info)
>>>> Caused by: java.lang.RuntimeException: Unable to connect to cassandra
>>>> to retrieve status
>>>>
>>>> We checked and found that the connection to cassandra is fine. This
>>>> error does 'not' come in previous versions of Usergrid (1.0.2, 2.1.0 and
>>>> 2.2.0 - Master branch from 2nd September)
>>>> I wanted to check if i am missing something? Please advise.
>>>>
>>>> Thanks
>>>> Jaskaran
>>>>
>>>
>>>
>>
>


Re: Usergrid 2.2 (Master Branch, 1 October) - Startup fails in Tomcat

2016-10-05 Thread Jaskaran Singh
Hi Michael,

Thank you for your response. With the latest Usergrid 2.2.0 (Master branch
from 1 October, SHA:77d2026907b03625ad7e1ef742c8656712497c8d), the ROOT war
does not get loaded / deployed properly in Tomcat, due to startup errors
(see below). Due to this we’re unable to call the “setup” and “bootstrap”
curl calls, as mentioned in your mail.

--
INFO ShutdownListener:59 - ShutdownListener invoked

SEVERE: Error listenerStart

Caused by: org.springframework.beans.BeanInstantiationException: Could not
instantiate bean class
[org.apache.usergrid.corepersistence.CpEntityManagerFactory]: Constructor
threw exception; nested exception is java.lang.RuntimeException: Unable to
get management app after 101 retries

ERROR ContextLoader:331 - Context initialization failed

INFO DefaultListableBeanFactory:444 - Destroying singletons in
org.springframework.beans.factory.support.DefaultListableBeanFactory@58548b9f:
defining beans
[queueJob,org.springframework.context.annotation.internalConfigurationAnnotationProcessor,.

InvalidRequestException(why:unconfigured table Data_Migration_Info)
ERROR CpEntityManagerFactory:302 - 1: Error (BadRequestException) Unable to
connect to cassandra to retrieve status

InvalidRequestException(why:unconfigured table Data_Migration_Info)
ERROR CpEntityManagerFactory:360 - Error getting entity manager
com.google.common.util.concurrent.UncheckedExecutionException:
java.lang.RuntimeException: Error getting application
b6768a08-b5d5-11e3-a495-11ddb1de66c8
--

Please suggest if we are missing something.

Thanks
Jaskaran

On Wed, Oct 5, 2016 at 8:51 AM, Michael Russo <michaelaru...@gmail.com>
wrote:

> In fresh, first-time deployments of Usergrid to a new database and
> Elasticsearch cluster, you need to invoke the database setup and bootstrap
> APIs:
>
> curl -i -X PUT -u : "http://localhost:8080/system/database/setup"
> curl -i -X PUT -u : "http://localhost:8080/system/database/bootstrap"
>
> After starting tomcat and running the above curl commands, it should set
> up the schema and bootstrap the system.
>
> Thanks.
> -Michael R.
>
>
> On Tue, Oct 4, 2016 at 3:22 AM, Jaskaran Singh <jaskaran.singh@
> comprotechnologies.com> wrote:
>
>> Hello Usergrid Team,
>>
>> Our application works with Usergrid 1.0.2, 2.1.0 and 2.2.0 (Master branch
>> from 2nd September commit SHA: 9fae8037a4b881e9c13a5a1f23f71dc34e950c40).
>>
>> Now, we tried testing our application with the latest 2.2.0 (Master
>> branch from 1 October SHA: 77d2026907b03625ad7e1ef742c8656712497c8d).
>> But during deployment/startup of Tomcat, we are getting the following
>> error (with fresh Cassandra + ES environments).
>>
>> BadRequestException: InvalidRequestException(why:unconfigured table
>> Data_Migration_Info)
>> Caused by: java.lang.RuntimeException: Unable to connect to cassandra to
>> retrieve status
>>
>> We checked and found that the connection to Cassandra is fine. This error
>> does 'not' occur in previous versions of Usergrid (1.0.2, 2.1.0 and 2.2.0 -
>> Master branch from 2nd September).
>> I wanted to check whether I am missing something. Please advise.
>>
>> Thanks
>> Jaskaran
>>
>
>


Usergrid 2.2 (Master Branch, 1 October) - Startup fails in Tomcat

2016-10-04 Thread Jaskaran Singh
Hello Usergrid Team,

Our application works with Usergrid 1.0.2, 2.1.0 and 2.2.0 (Master branch
from 2nd September commit SHA: 9fae8037a4b881e9c13a5a1f23f71dc34e950c40).

Now, we tried testing our application with the latest 2.2.0 (Master branch
from 1 October SHA: 77d2026907b03625ad7e1ef742c8656712497c8d).
But during deployment/startup of Tomcat, we are getting the following error
(with fresh Cassandra + ES environments).

BadRequestException: InvalidRequestException(why:unconfigured table
Data_Migration_Info)
Caused by: java.lang.RuntimeException: Unable to connect to cassandra to
retrieve status

We checked and found that the connection to Cassandra is fine. This error
does 'not' occur in previous versions of Usergrid (1.0.2, 2.1.0 and 2.2.0 -
Master branch from 2nd September).
I wanted to check whether I am missing something. Please advise.

Thanks
Jaskaran


client_id & client_secret Errors (2.2.0)

2016-10-12 Thread Jaskaran Singh
Hi Usergrid Team,

We are migrating our application from 1.0.2 to 2.2.0 (Master branch, 2nd
September, SHA: 9fae8037a4b881e9c13a5a1f23f71dc34e950c40). We have observed
a new issue (in 2.2.0, Master branch) while using valid client_id &
client_secret values. Below is a sample request and response.

*Request:*
http://<host>/<org>/<app>/users?client_id=<...>&client_secret=<...>

*Response:*
Http 401 Unauthorized
{ "error": "unauthorized", "timestamp": 1475131455582, "duration": 0,
"error_description": "Subject does not have permission to access this
resource", "exception":
"org.apache.usergrid.rest.exceptions.SecurityException" }

*Notes on the Error and Observations:*
(1) The unauthorized error (with client_id and client_secret) is random
(but quite frequent) - 'suddenly' all Usergrid API calls fail.
(2) On its own, after some time (a few hours), the same call with the same
client_id and client_secret will start working again.
(3) The problem is NOT related to load on the system. It occurs during
NO-LOAD conditions as well.
(4) We have tested and 'not' observed this issue (with client_id and
client_secret) on the 2.1.0 and 1.0.2 releases.
(5) Interestingly, user access tokens (access_token) 'always' work with
2.2.0 - this is the current workaround we're using (example below).
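
The workaround amounts to requesting a user token and passing it in place
of the client credentials; host/org/app and the user credentials below are
placeholders:

curl -X POST "http://<host>/<org>/<app>/token" \
    -H "Content-Type: application/json" \
    -d '{"grant_type":"password","username":"<user>","password":"<pass>"}'

The returned access_token is then sent as ?access_token=<token> or in an
Authorization: Bearer header.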

Note, since the admin token expires in 7 days, we cannot keep using this
workaround (user access_token) indefinitely. We have also opened a JIRA
for this issue:
https://issues.apache.org/jira/browse/USERGRID-1319

Please help.
Thanks
Jaskaran


TOMCAT RESTART necessary to scale the CASSANDRA cluster

2016-12-15 Thread Jaskaran Singh
Hello Usergrid team,

We were trying to scale our Usergrid Cassandra cluster - Cassandra
(DataStax) version 2.2.6 & Usergrid version 2.2.

We were able to successfully add a Cassandra node to the cluster, i.e. it
was automatically detected/discovered (it showed up in OpsCenter). But we
noted that read/write requests from Usergrid were NOT being routed to or
serviced by the new Cassandra node.

The only way we could FIX this was to update the
usergrid-deployment.properties file (adding the new Cassandra IP) and
restart TOMCAT. Post-restart, the new Cassandra node became fully
operational and read/write requests were being serviced by it.
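
For reference, the change was simply adding the new node to cassandra.url
(the IPs below are placeholders; as we understand it, the property accepts
a comma-separated host list):

cassandra.url=10.0.0.10:9160,10.0.0.11:9160,10.0.0.12:9160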

Questions:
(1) Is it necessary to restart TOMCAT when we scale the Cassandra cluster?
(2) Are we missing a setting/configuration somewhere to make this happen
automatically (no restart)?

Please advise.

Thanks
Jaskaran


Re: Usergrid 2.2 (Master Branch, 1 October) - Startup fails in Tomcat

2017-04-04 Thread Jaskaran Singh
There is a JIRA for the issue as well:
https://issues.apache.org/jira/browse/USERGRID-1321


On Tue, Apr 4, 2017 at 2:44 PM, Ganaraj Tejasvi <gteja...@gmail.com> wrote:

> Yes. Same issue even now.
>
> On Tue, Apr 4, 2017 at 2:41 PM, Jaskaran Singh <jaskaran.singh@
> comprotechnologies.com> wrote:
>
>> Hi Ganaraj, we were not able to get this to work. Have you tried with the
>> latest master?
>>
>>
>> On Tue, Apr 4, 2017 at 1:25 PM, Ganaraj Tejasvi <gteja...@gmail.com>
>> wrote:
>>
>>> Hi,
>>>
>>> Was this problem with Usergrid resolved? If it was, could you let me
>>> know what the solution to the problem was?
>>>
>>> --
>>> Regards
>>> Ganaraj Tejasvi
>>>
>>
>>
>
>
> --
> Regards
> Ganaraj Tejasvi
>


Re: Usergrid 2.2 (Master Branch, 1 October) - Startup fails in Tomcat

2017-04-04 Thread Jaskaran Singh
Hi Ganaraj, we were not able to get this to work. Have you tried with the
latest master?


On Tue, Apr 4, 2017 at 1:25 PM, Ganaraj Tejasvi  wrote:

> Hi,
>
> Was this problem with Usergrid resolved? If it was, could you let me
> know what the solution to the problem was?
>
> --
> Regards
> Ganaraj Tejasvi
>


Tomcat - out of memory exceptions

2017-11-09 Thread Jaskaran Singh
Hello Usergrid Team,

We are suddenly facing "out of memory" exceptions on our Tomcat servers,
under low load conditions. Please note, our Usergrid installations have
been very stable over the last 6 months, and we have "not" seen such issues
before. Our setup configuration is as follows:
Environment: Ubuntu 14.04, Tomcat 7, JDK 1.8.0_65 (Oracle);
Cassandra version: 2.2.6 (DataStax);
Usergrid version: 2.2.0 (Master branch, 3rd May, 2016)

I am pasting a few logs that have suddenly started showing up.


Nov 09 16:15:26 catalina.out: 05:45:26,812  WARN EntityMappingParser:116 -
Encountered 2 collections consecutively.  N+1 dimensional arrays are
unsupported, only arrays of depth 1 are supported

Nov 09 17:22:12 catalina.out: 06:52:12,848  WARN AsyncEventServiceImpl:362
- No index operation messages came back from event processing for msg:

Nov 09 17:39:56 catalina.out: 07:09:56,177  INFO transport:470 -
[ip-10-0-2-128] failed to get local cluster state for
[#transport#-3][ip-10-0-2-128][inet[/10.0.4.205:9300]], disconnecting...
Nov 09 17:39:56 catalina.out:
org.elasticsearch.transport.ReceiveTimeoutTransportException:
[][inet[/10.0.4.205:9300]][cluster:monitor/state] request_id [11652] timed
out after [5247ms]
Nov 09 17:39:56 catalina.out:  at
org.elasticsearch.transport.TransportService$TimeoutHandler.run(TransportService.java:529)
Nov 09 17:39:56 catalina.out:  at
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
Nov 09 17:39:56 catalina.out:  at
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
Nov 09 17:39:56 catalina.out:  at java.lang.Thread.run(Thread.java:745)

Nov 09 17:40:17 catalina.out: 07:10:17,557  WARN transport:415 -
[ip-10-0-2-128] Received response for a request that has timed out, sent
[10887ms] ago, timed out [3ms] ago, action [cluster:monitor/state], node
[[bluedls__us-east-1a__db__10.0.4.63][T6OWiR1US9m5ABxHh0tW0w][ip-10-0-4-63][inet[/10.0.4.63:9300]]{zone=us-east-1__us-east-1a}],
id [11678]

Nov 09 17:43:05 catalina.out: 07:13:05,091 ERROR AbstractExceptionMapper:74
- com.netflix.hystrix.exception.HystrixRuntimeException 5XX Uncaught
Exception (500)
Nov 09 17:43:05 catalina.out:
com.netflix.hystrix.exception.HystrixRuntimeException:
ConsistentReplayCommand timed-out and fallback failed.
..
Nov 09 17:43:05 catalina.out: Caused by:
java.util.concurrent.TimeoutException
..
Nov 09 17:43:05 catalina.out: Caused by:
rx.exceptions.OnErrorThrowable$OnNextValue: OnError while emitting onNext
value:
org.apache.usergrid.persistence.collection.mvcc.stage.CollectionIoEvent.class
..
Nov 09 17:43:05 catalina.out: 07:13:05,123 ERROR
AbstractExceptionMapper:108 - Server Error (500):
Nov 09 17:43:05 catalina.out:
{"error":"hystrix_runtime","timestamp":1510229585122,"duration":0,"error_description":"ConsistentReplayCommand
timed-out and fallback
failed.","exception":"com.netflix.hystrix.exception.HystrixRuntimeException"}


Our monitoring indicates there is no issue in the Cassandra and
Elasticsearch clusters. We look forward to your help.
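
In the meantime, to aid diagnosis, we plan to capture a heap dump when the
OOM occurs; a minimal setenv.sh sketch of what we intend to add (heap sizes
and paths are placeholders for our environment):

# setenv.sh (path depends on the Tomcat install, e.g. /usr/share/tomcat7/bin/)
export CATALINA_OPTS="$CATALINA_OPTS -Xms4g -Xmx4g \
    -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/var/log/tomcat7"

We can share a heap dump if that would help.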

Thanks
Jaskaran