Hi Gihan,

The memory can be set using the conf parameters, i.e. "spark.executor.memory".
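
For example, something like the following in Spark's spark-defaults.conf (the values and the exact file location are illustrative and depend on the DAS/embedded setup; in local mode the work runs inside the server JVM, so the JVM heap must be sized to accommodate this too):

```
# Illustrative values only -- size these to your environment.
# In local mode there is no separate executor process; this memory
# comes out of the server JVM's own heap (-Xmx).
spark.executor.memory  1g
spark.driver.memory    512m
```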

rgds

On Wed, Dec 16, 2015 at 7:01 PM, Gihan Anuruddha <[email protected]> wrote:

> Hi Niranda,
>
> So let's say we have to run embedded DAS in a memory-restricted environment.
> Where can I define the Spark memory allocation configuration?
>
> Regards,
> Gihan
>
> On Wed, Dec 16, 2015 at 6:55 PM, Niranda Perera <[email protected]> wrote:
>
>> Hi Sumedha,
>>
>> I checked the heap dump you provided; its size is around 230 MB, so I
>> presume this was not an OOM scenario.
>>
>> As for Spark memory usage: when you run Spark in local mode, the
>> processing happens inside that JVM itself, so we have to make sure we
>> allocate enough memory for it.
>>
>> Rgds
>>
>> On Wed, Dec 16, 2015 at 6:11 PM, Anjana Fernando <[email protected]> wrote:
>>
>>> Hi Ayoma,
>>>
>>> Thanks for checking up on it. Actually, "getAllIndexedTables" doesn't
>>> return the Set here; it returns an array that was previously populated in
>>> the refresh operation, so there is no need to synchronize that method.
>>>
>>> Cheers,
>>> Anjana.
>>>
>>> On Wed, Dec 16, 2015 at 5:44 PM, Ayoma Wijethunga <[email protected]>
>>> wrote:
>>>
>>>> And I missed mentioning that when this race condition / state
>>>> corruption happens, all "get" operations performed on the Set/Map get
>>>> blocked, resulting in an OOM situation. [1] explains all of this nicely.
>>>> I have checked a heap dump in a similar situation, and if you take one,
>>>> you will clearly see many threads waiting to access this Set instance.
>>>>
>>>> [1] http://mailinator.blogspot.gr/2009/06/beautiful-race-condition.html
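The standard way to rule out this kind of HashSet corruption, sketched below as a minimal hedged example (the class and method names are illustrative, not the actual DAS code), is to wrap the shared Set so every mutation is serialized, and to hold the wrapper's monitor while iterating:

```java
import java.util.Collections;
import java.util.HashSet;
import java.util.Set;

public class SafeSetExample {
    // Wrapping the set serializes add/remove, so concurrent writers
    // cannot corrupt the underlying hash table.
    private final Set<String> tables =
            Collections.synchronizedSet(new HashSet<>());

    public void add(String name) {
        tables.add(name);
    }

    public void remove(String name) {
        tables.remove(name);
    }

    public String[] snapshot() {
        // Iteration (which toArray performs) is NOT covered by the
        // wrapper's per-call locking; the javadoc requires manually
        // synchronizing on the wrapper while iterating.
        synchronized (tables) {
            return tables.toArray(new String[0]);
        }
    }
}
```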
>>>>
>>>> On Wed, Dec 16, 2015 at 5:37 PM, Ayoma Wijethunga <[email protected]>
>>>> wrote:
>>>>
>>>>> Hi Anjana,
>>>>>
>>>>> Sorry, I didn't notice that you had already replied to this thread.
>>>>>
>>>>> However, please consider my point on "getAllIndexedTables" as well.
>>>>>
>>>>> Thank you,
>>>>> Ayoma.
>>>>>
>>>>> On Wed, Dec 16, 2015 at 5:12 PM, Anjana Fernando <[email protected]>
>>>>> wrote:
>>>>>
>>>>>> Hi Sumedha,
>>>>>>
>>>>>> Thank you for reporting the issue. I've fixed the
>>>>>> ConcurrentModificationException issue: both the methods
>>>>>> "addIndexedTable" and "removeIndexedTable" needed to be synchronized,
>>>>>> since they both operate on the shared Set there.
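The fix described above amounts to the following pattern, shown here only as a hedged sketch (field and method names mirror the store's public names, but the body is illustrative, not the actual DAS source): mutators are synchronized so the set is never iterated mid-modification, while readers consume a pre-built array snapshot without locking.

```java
import java.util.HashSet;
import java.util.Set;

public class IndexedTableStore {
    private final Set<String> indexedTables = new HashSet<>();
    // volatile so readers always see the latest published snapshot
    private volatile String[] indexedTableArray = new String[0];

    public synchronized void addIndexedTable(String id) {
        indexedTables.add(id);
        refreshIndexedTableArray();
    }

    public synchronized void removeIndexedTable(String id) {
        indexedTables.remove(id);
        refreshIndexedTableArray();
    }

    private void refreshIndexedTableArray() {
        // toArray iterates the set; since it is only reached from the
        // synchronized mutators, no concurrent modification is possible.
        this.indexedTableArray = indexedTables.toArray(new String[0]);
    }

    // No lock needed: returns the immutable snapshot, which is why
    // getAllIndexedTables itself does not have to be synchronized.
    public String[] getAllIndexedTables() {
        return indexedTableArray;
    }
}
```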
>>>>>>
>>>>>> As for the OOM issue, can you please share a heap dump taken when the
>>>>>> OOM happened, so we can see what is causing it? I also see there are
>>>>>> multiple scripts running at the same time, so this can also be a
>>>>>> legitimate error, where the server genuinely doesn't have enough
>>>>>> memory to continue its operations. @Niranda, please share any info on
>>>>>> tuning Spark's memory requirements.
>>>>>>
>>>>>> Cheers,
>>>>>> Anjana.
>>>>>>
>>>>>> On Wed, Dec 16, 2015 at 3:32 PM, Sumedha Rubasinghe <[email protected]
>>>>>> > wrote:
>>>>>>
>>>>>>> We have DAS Lite included in IoT Server and several summarisation
>>>>>>> scripts deployed. The server is going OOM frequently with the
>>>>>>> following exception.
>>>>>>>
>>>>>>> Shouldn't this [1] method be synchronised?
>>>>>>>
>>>>>>> [1]
>>>>>>> https://github.com/wso2/carbon-analytics/blob/master/components/analytics-core/org.wso2.carbon.analytics.dataservice.core/src/main/java/org/wso2/carbon/analytics/dataservice/core/indexing/AnalyticsIndexedTableStore.java#L45
>>>>>>>
>>>>>>>
>>>>>>> >>>>>>>>>>>
>>>>>>> [2015-12-16 15:11:00,004]  INFO {org.wso2.carbon.analytics.spark.core.AnalyticsTask} -  Executing the schedule task for: Light_Sensor_Script for tenant id: -1234
>>>>>>> [2015-12-16 15:11:00,005]  INFO {org.wso2.carbon.analytics.spark.core.AnalyticsTask} -  Executing the schedule task for: Magnetic_Sensor_Script for tenant id: -1234
>>>>>>> [2015-12-16 15:11:00,005]  INFO {org.wso2.carbon.analytics.spark.core.AnalyticsTask} -  Executing the schedule task for: Pressure_Sensor_Script for tenant id: -1234
>>>>>>> [2015-12-16 15:11:00,006]  INFO {org.wso2.carbon.analytics.spark.core.AnalyticsTask} -  Executing the schedule task for: Proximity_Sensor_Script for tenant id: -1234
>>>>>>> [2015-12-16 15:11:00,006]  INFO {org.wso2.carbon.analytics.spark.core.AnalyticsTask} -  Executing the schedule task for: Rotation_Sensor_Script for tenant id: -1234
>>>>>>> [2015-12-16 15:11:00,007]  INFO {org.wso2.carbon.analytics.spark.core.AnalyticsTask} -  Executing the schedule task for: Temperature_Sensor_Script for tenant id: -1234
>>>>>>> [2015-12-16 15:11:01,132] ERROR {org.wso2.carbon.ntask.core.impl.TaskQuartzJobAdapter} -  Error in executing task: null
>>>>>>> java.util.ConcurrentModificationException
>>>>>>> at java.util.HashMap$HashIterator.nextEntry(HashMap.java:922)
>>>>>>> at java.util.HashMap$KeyIterator.next(HashMap.java:956)
>>>>>>> at java.util.AbstractCollection.toArray(AbstractCollection.java:195)
>>>>>>> at org.wso2.carbon.analytics.dataservice.core.indexing.AnalyticsIndexedTableStore.refreshIndexedTableArray(AnalyticsIndexedTableStore.java:46)
>>>>>>> at org.wso2.carbon.analytics.dataservice.core.indexing.AnalyticsIndexedTableStore.addIndexedTable(AnalyticsIndexedTableStore.java:37)
>>>>>>> at org.wso2.carbon.analytics.dataservice.core.AnalyticsDataServiceImpl.refreshIndexedTableStoreEntry(AnalyticsDataServiceImpl.java:512)
>>>>>>> at org.wso2.carbon.analytics.dataservice.core.AnalyticsDataServiceImpl.invalidateAnalyticsTableInfo(AnalyticsDataServiceImpl.java:525)
>>>>>>> at org.wso2.carbon.analytics.dataservice.core.AnalyticsDataServiceImpl.checkAndInvalidateTableInfo(AnalyticsDataServiceImpl.java:504)
>>>>>>> at org.wso2.carbon.analytics.dataservice.core.AnalyticsDataServiceImpl.setTableSchema(AnalyticsDataServiceImpl.java:495)
>>>>>>> at org.wso2.carbon.analytics.spark.core.sources.AnalyticsRelation.insert(AnalyticsRelation.java:162)
>>>>>>> at org.apache.spark.sql.sources.InsertIntoDataSource.run(commands.scala:53)
>>>>>>> at org.apache.spark.sql.execution.ExecutedCommand.sideEffectResult$lzycompute(commands.scala:57)
>>>>>>> at org.apache.spark.sql.execution.ExecutedCommand.sideEffectResult(commands.scala:57)
>>>>>>> at org.apache.spark.sql.execution.ExecutedCommand.doExecute(commands.scala:68)
>>>>>>> at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$1.apply(SparkPlan.scala:88)
>>>>>>> at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$1.apply(SparkPlan.scala:88)
>>>>>>> at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:147)
>>>>>>> at org.apache.spark.sql.execution.SparkPlan.execute(SparkPlan.scala:87)
>>>>>>> at org.apache.spark.sql.SQLContext$QueryExecution.toRdd$lzycompute(SQLContext.scala:950)
>>>>>>> at org.apache.spark.sql.SQLContext$QueryExecution.toRdd(SQLContext.scala:950)
>>>>>>> at org.apache.spark.sql.DataFrame.<init>(DataFrame.scala:144)
>>>>>>> at org.apache.spark.sql.DataFrame.<init>(DataFrame.scala:128)
>>>>>>> at org.apache.spark.sql.DataFrame$.apply(DataFrame.scala:51)
>>>>>>> at org.apache.spark.sql.SQLContext.sql(SQLContext.scala:755)
>>>>>>> at org.wso2.carbon.analytics.spark.core.internal.SparkAnalyticsExecutor.executeQueryLocal(SparkAnalyticsExecutor.java:710)
>>>>>>> at org.wso2.carbon.analytics.spark.core.internal.SparkAnalyticsExecutor.executeQuery(SparkAnalyticsExecutor.java:692)
>>>>>>> at org.wso2.carbon.analytics.spark.core.CarbonAnalyticsProcessorService.executeQuery(CarbonAnalyticsProcessorService.java:199)
>>>>>>> at org.wso2.carbon.analytics.spark.core.CarbonAnalyticsProcessorService.executeScript(CarbonAnalyticsProcessorService.java:149)
>>>>>>> at org.wso2.carbon.analytics.spark.core.AnalyticsTask.execute(AnalyticsTask.java:57)
>>>>>>> at org.wso2.carbon.ntask.core.impl.TaskQuartzJobAdapter.execute(TaskQuartzJobAdapter.java:67)
>>>>>>> at org.quartz.core.JobRunShell.run(JobRunShell.java:213)
>>>>>>> at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
>>>>>>> at java.util.concurrent.FutureTask.run(FutureTask.java:262)
>>>>>>> at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>>>>>>> at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>>>>>>> at java.lang.Thread.run(Thread.java:745)
>>>>>>> [2015-12-16 15:12:00,001]  INFO {org.wso2.carbon.analytics.spark.core.AnalyticsTask} -  Executing the schedule task for: Accelerometer_Sensor_Script for tenant id: -1234
>>>>>>>
>>>>>>> --
>>>>>>> /sumedha
>>>>>>> m: +94 773017743
>>>>>>> b :  bit.ly/sumedha
>>>>>>>
>>>>>>
>>>>>>
>>>>>>
>>>>>> --
>>>>>> *Anjana Fernando*
>>>>>> Senior Technical Lead
>>>>>> WSO2 Inc. | http://wso2.com
>>>>>> lean . enterprise . middleware
>>>>>>
>>>>>> _______________________________________________
>>>>>> Dev mailing list
>>>>>> [email protected]
>>>>>> http://wso2.org/cgi-bin/mailman/listinfo/dev
>>>>>>
>>>>>>
>>>>>
>>>>>
>>>>> --
>>>>> Ayoma Wijethunga
>>>>> Software Engineer
>>>>> WSO2, Inc.; http://wso2.com
>>>>> lean.enterprise.middleware
>>>>>
>>>>> Mobile : +94 (0) 719428123
>>>>> Blog : http://www.ayomaonline.com
>>>>> LinkedIn: https://www.linkedin.com/in/ayoma
>>>>>
>>>>
>>>>
>>>>
>>>> --
>>>> Ayoma Wijethunga
>>>> Software Engineer
>>>> WSO2, Inc.; http://wso2.com
>>>> lean.enterprise.middleware
>>>>
>>>> Mobile : +94 (0) 719428123
>>>> Blog : http://www.ayomaonline.com
>>>> LinkedIn: https://www.linkedin.com/in/ayoma
>>>>
>>>
>>>
>>>
>>> --
>>> *Anjana Fernando*
>>> Senior Technical Lead
>>> WSO2 Inc. | http://wso2.com
>>> lean . enterprise . middleware
>>>
>>
>>
>>
>> --
>> *Niranda Perera*
>> Software Engineer, WSO2 Inc.
>> Mobile: +94-71-554-8430
>> Twitter: @n1r44 <https://twitter.com/N1R44>
>> https://pythagoreanscript.wordpress.com/
>>
>>
>>
>
>
> --
> W.G. Gihan Anuruddha
> Senior Software Engineer | WSO2, Inc.
> M: +94772272595
>



-- 
*Niranda Perera*
Software Engineer, WSO2 Inc.
Mobile: +94-71-554-8430
Twitter: @n1r44 <https://twitter.com/N1R44>
https://pythagoreanscript.wordpress.com/
