Re: [Dev] DAS going OOM frequently

2015-12-16 Thread Anjana Fernando
Hi guys,

As Sumedha told me, the error that came up is an OOM in the perm gen space. We
suspect it's simply because many features have been installed in the IoT
server, so lots of classes are being loaded. After increasing the perm gen
size, Sumedha mentioned that the issue hasn't recurred.

Cheers,
Anjana.
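
For reference, the perm gen size is raised with standard JVM flags; a minimal
sketch with illustrative values only, assuming a Java 7 JVM (where the
permanent generation still exists; Java 8+ replaced it with Metaspace):

    -XX:PermSize=256m -XX:MaxPermSize=512m

These flags would typically be added to the server's startup script; where
exactly depends on the distribution.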

Re: [Dev] DAS going OOM frequently

2015-12-16 Thread Anjana Fernando
Hi Ayoma,

Thanks for checking up on it. Actually, "getAllIndexedTables" doesn't return
the Set here; it returns an array that was previously populated in the refresh
operation, so there is no need to synchronize that method.

Cheers,
Anjana.
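
For reference, a minimal sketch of the pattern being described (method names
follow the AnalyticsIndexedTableStore class referenced in this thread; the
field types and the volatile modifier are assumptions, shown only as one safe
way to publish the refreshed array):

    import java.util.HashSet;
    import java.util.Set;

    // Sketch only: writers mutate the shared Set under synchronization and
    // rebuild a snapshot array; the getter just returns the last snapshot,
    // so it needs no synchronization.
    public class IndexedTableStoreSketch {

        private final Set<String> indexedTables = new HashSet<String>();

        // volatile (an assumption here) so readers always see a fully
        // built snapshot, never a partially constructed array
        private volatile String[] indexedTableArray = new String[0];

        public synchronized void addIndexedTable(String table) {
            this.indexedTables.add(table);
            this.refreshIndexedTableArray();
        }

        public synchronized void removeIndexedTable(String table) {
            this.indexedTables.remove(table);
            this.refreshIndexedTableArray();
        }

        private void refreshIndexedTableArray() {
            this.indexedTableArray = this.indexedTables.toArray(new String[0]);
        }

        // no synchronization needed: returns the previously populated array
        public String[] getAllIndexedTables() {
            return this.indexedTableArray;
        }
    }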

Re: [Dev] DAS going OOM frequently

2015-12-16 Thread Gihan Anuruddha
Hi Niranda,

Let's say we have to run embedded DAS in a memory-restricted environment.
Where can I define the Spark memory allocation configuration?

Regards,
Gihan

Re: [Dev] DAS going OOM frequently

2015-12-16 Thread Ayoma Wijethunga
And I missed mentioning that when this race condition / state corruption
happens, all "get" operations performed on the Set/Map can get blocked,
resulting in an OOM situation. [1] explains all of that nicely. I have checked
a heap dump in a similar situation, and if you take one, you will clearly see
many threads waiting to access this Set instance.

[1] http://mailinator.blogspot.gr/2009/06/beautiful-race-condition.html
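
For reference, a minimal sketch that provokes the same failure mode on an
unsynchronized HashSet (illustrative only; depending on timing you may see a
java.util.ConcurrentModificationException, as in the reported stack trace, or
silent state corruption):

    import java.util.HashSet;
    import java.util.Set;

    public class UnsafeSetDemo {
        // a shared, unsynchronized Set, like the one discussed in this thread
        private static final Set<String> TABLES = new HashSet<String>();

        public static void main(String[] args) {
            Thread writer = new Thread(new Runnable() {
                public void run() {
                    int i = 0;
                    while (true) {
                        TABLES.add("table-" + (i++)); // concurrent mutation
                    }
                }
            });
            writer.setDaemon(true);
            writer.start();

            // toArray() iterates the Set internally and typically fails
            // quickly with ConcurrentModificationException
            while (true) {
                TABLES.toArray(new String[0]);
            }
        }
    }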

Re: [Dev] DAS going OOM frequently

2015-12-16 Thread Niranda Perera
Hi Sumedha,

I checked the heap dump you provided, and its size is around 230MB. I presume
this was not an OOM scenario.

As for Spark memory usage, when you use Spark in local mode, the processing
happens inside that same JVM. So we have to make sure that we allocate enough
memory for it.

Rgds
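
For reference, since local-mode Spark runs inside the server JVM, the limit
that matters is the server's own heap, set with the standard JVM flags
(illustrative values only):

    -Xms512m -Xmx2g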

Re: [Dev] DAS going OOM frequently

2015-12-16 Thread Ayoma Wijethunga
Hi,

I have seen the same sort of exception occur when a HashMap is used by
multiple threads concurrently. It was necessary to use a ConcurrentHashMap or
do proper synchronization in our logic. This was explained as state corruption
[3 - *(interesting read)*], and it is no wonder, looking at the source of the
relevant methods [1].

As there is no such thing as a ConcurrentHashSet (and as that is not the best
option anyway), we should synchronize removeIndexedTable and
refreshIndexedTableArray, as well as getAllIndexedTables, to correct this
behaviour.

"getAllIndexedTables" should be synchronized because we might return a "Set"
that is in an inconsistent state if "getAllIndexedTables" is invoked while
some other thread executes removeIndexedTable or refreshIndexedTableArray.
And maybe the method that called "getAllIndexedTables" might attempt to
execute another state-changing method on the "Set" before the refresh or
remove completes, leaving the "Set" with corrupted state.

[2] is also interesting, although it does not directly relate to this. With
"newSetFromMap" you can create a Set backed by a ConcurrentHashMap.


[1] http://grepcode.com/file/repository.grepcode.com/java/root/jdk/openjdk/6-b14/java/util/AbstractCollection.java#AbstractCollection.toArray%28%29
[2] http://docs.oracle.com/javase/6/docs/api/java/util/Collections.html#newSetFromMap%28java.util.Map%29
[3] http://mailinator.blogspot.gr/2009/06/beautiful-race-condition.html

Best Regards,
Ayoma.
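
For reference, a minimal sketch of the newSetFromMap approach from [2]
(illustrative only; the fix that actually went in used synchronized methods
instead):

    import java.util.Collections;
    import java.util.Set;
    import java.util.concurrent.ConcurrentHashMap;

    public class ConcurrentSetSketch {
        public static void main(String[] args) {
            // a thread-safe Set view backed by a ConcurrentHashMap, per [2]
            Set<String> indexedTables = Collections.newSetFromMap(
                    new ConcurrentHashMap<String, Boolean>());

            indexedTables.add("SomeIndexedTable");    // hypothetical table name
            indexedTables.remove("SomeIndexedTable"); // safe across threads
        }
    }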

Re: [Dev] DAS going OOM frequently

2015-12-16 Thread Ayoma Wijethunga
Hi Anjana,

Yes, agreed. Sorry, I misread that. In that case, the OOM should be fine after
the fix.

Thank you,
Ayoma.

Re: [Dev] DAS going OOM frequently

2015-12-16 Thread Niranda Perera
Hi Gihan,

The memory can be set using the Spark conf parameters, i.e.
"spark.executor.memory".

rgds
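
For reference, a minimal sketch of setting that parameter programmatically
(illustrative values; how the SparkConf reaches the embedded executor depends
on the DAS configuration):

    import org.apache.spark.SparkConf;

    public class SparkMemoryConfSketch {
        public static void main(String[] args) {
            SparkConf conf = new SparkConf()
                    .setMaster("local")                    // embedded/local mode
                    .setAppName("das-embedded-sketch")     // hypothetical name
                    .set("spark.executor.memory", "512m"); // illustrative value
            System.out.println(conf.get("spark.executor.memory"));
        }
    }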

Re: [Dev] DAS going OOM frequently

2015-12-16 Thread Ayoma Wijethunga
Hi Anjana,

Sorry, I didn't notice that you had already replied to this thread.

However, please consider my point on "getAllIndexedTables" as well.

Thank you,
Ayoma.

Re: [Dev] DAS going OOM frequently

2015-12-16 Thread Anjana Fernando
Hi Sumedha,

Thank you for reporting the issue. I've fixed the concurrent modification
exception issue; actually, both the methods "addIndexedTable" and
"removeIndexedTable" needed to be synchronized, since they both work on the
shared Set object there.

As for the OOM issue, can you please share a heap dump from when the OOM
happened, so we can see what is causing it? Also, I see there are multiple
scripts running at the same time, so this can actually be a legitimate error,
where the server genuinely doesn't have enough memory to continue its
operations. @Niranda, please share if there is any info on tuning Spark's
memory requirements.

Cheers,
Anjana.

[Dev] DAS going OOM frequently

2015-12-16 Thread Sumedha Rubasinghe
We have DAS Lite included in the IoT Server and several summarisation scripts
deployed. The server is going OOM frequently with the following exception.

Shouldn't this[1] method be synchronised?

[1]
https://github.com/wso2/carbon-analytics/blob/master/components/analytics-core/org.wso2.carbon.analytics.dataservice.core/src/main/java/org/wso2/carbon/analytics/dataservice/core/indexing/AnalyticsIndexedTableStore.java#L45


>>>
[2015-12-16 15:11:00,004]  INFO {org.wso2.carbon.analytics.spark.core.AnalyticsTask} -  Executing the schedule task for: Light_Sensor_Script for tenant id: -1234
[2015-12-16 15:11:00,005]  INFO {org.wso2.carbon.analytics.spark.core.AnalyticsTask} -  Executing the schedule task for: Magnetic_Sensor_Script for tenant id: -1234
[2015-12-16 15:11:00,005]  INFO {org.wso2.carbon.analytics.spark.core.AnalyticsTask} -  Executing the schedule task for: Pressure_Sensor_Script for tenant id: -1234
[2015-12-16 15:11:00,006]  INFO {org.wso2.carbon.analytics.spark.core.AnalyticsTask} -  Executing the schedule task for: Proximity_Sensor_Script for tenant id: -1234
[2015-12-16 15:11:00,006]  INFO {org.wso2.carbon.analytics.spark.core.AnalyticsTask} -  Executing the schedule task for: Rotation_Sensor_Script for tenant id: -1234
[2015-12-16 15:11:00,007]  INFO {org.wso2.carbon.analytics.spark.core.AnalyticsTask} -  Executing the schedule task for: Temperature_Sensor_Script for tenant id: -1234
[2015-12-16 15:11:01,132] ERROR {org.wso2.carbon.ntask.core.impl.TaskQuartzJobAdapter} -  Error in executing task: null
java.util.ConcurrentModificationException
at java.util.HashMap$HashIterator.nextEntry(HashMap.java:922)
at java.util.HashMap$KeyIterator.next(HashMap.java:956)
at java.util.AbstractCollection.toArray(AbstractCollection.java:195)
at org.wso2.carbon.analytics.dataservice.core.indexing.AnalyticsIndexedTableStore.refreshIndexedTableArray(AnalyticsIndexedTableStore.java:46)
at org.wso2.carbon.analytics.dataservice.core.indexing.AnalyticsIndexedTableStore.addIndexedTable(AnalyticsIndexedTableStore.java:37)
at org.wso2.carbon.analytics.dataservice.core.AnalyticsDataServiceImpl.refreshIndexedTableStoreEntry(AnalyticsDataServiceImpl.java:512)
at org.wso2.carbon.analytics.dataservice.core.AnalyticsDataServiceImpl.invalidateAnalyticsTableInfo(AnalyticsDataServiceImpl.java:525)
at org.wso2.carbon.analytics.dataservice.core.AnalyticsDataServiceImpl.checkAndInvalidateTableInfo(AnalyticsDataServiceImpl.java:504)
at org.wso2.carbon.analytics.dataservice.core.AnalyticsDataServiceImpl.setTableSchema(AnalyticsDataServiceImpl.java:495)
at org.wso2.carbon.analytics.spark.core.sources.AnalyticsRelation.insert(AnalyticsRelation.java:162)
at org.apache.spark.sql.sources.InsertIntoDataSource.run(commands.scala:53)
at org.apache.spark.sql.execution.ExecutedCommand.sideEffectResult$lzycompute(commands.scala:57)
at org.apache.spark.sql.execution.ExecutedCommand.sideEffectResult(commands.scala:57)
at org.apache.spark.sql.execution.ExecutedCommand.doExecute(commands.scala:68)
at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$1.apply(SparkPlan.scala:88)
at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$1.apply(SparkPlan.scala:88)
at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:147)
at org.apache.spark.sql.execution.SparkPlan.execute(SparkPlan.scala:87)
at org.apache.spark.sql.SQLContext$QueryExecution.toRdd$lzycompute(SQLContext.scala:950)
at org.apache.spark.sql.SQLContext$QueryExecution.toRdd(SQLContext.scala:950)
at org.apache.spark.sql.DataFrame.<init>(DataFrame.scala:144)
at org.apache.spark.sql.DataFrame.<init>(DataFrame.scala:128)
at org.apache.spark.sql.DataFrame$.apply(DataFrame.scala:51)
at org.apache.spark.sql.SQLContext.sql(SQLContext.scala:755)
at org.wso2.carbon.analytics.spark.core.internal.SparkAnalyticsExecutor.executeQueryLocal(SparkAnalyticsExecutor.java:710)
at org.wso2.carbon.analytics.spark.core.internal.SparkAnalyticsExecutor.executeQuery(SparkAnalyticsExecutor.java:692)
at org.wso2.carbon.analytics.spark.core.CarbonAnalyticsProcessorService.executeQuery(CarbonAnalyticsProcessorService.java:199)
at org.wso2.carbon.analytics.spark.core.CarbonAnalyticsProcessorService.executeScript(CarbonAnalyticsProcessorService.java:149)
at org.wso2.carbon.analytics.spark.core.AnalyticsTask.execute(AnalyticsTask.java:57)
at org.wso2.carbon.ntask.core.impl.TaskQuartzJobAdapter.execute(TaskQuartzJobAdapter.java:67)
at org.quartz.core.JobRunShell.run(JobRunShell.java:213)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
at java.util.concurrent.FutureTask.run(FutureTask.java:262)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
[2015-12-16 15:12:00,001]  INFO {org.wso2.carbon.analytics.spark.core.AnalyticsTask} -  Executing the schedule task for: Accelerometer_Sensor_Script for tenant id: -1234
