Re: [Dev] DAS going OOM frequently

2015-12-16 Thread Anjana Fernando
Hi guys, As Sumedha told me, the error that has come up is an OOM perm gen error, and we suspect it's simply because many features have been installed in the IoT server, so a lot of classes are being loaded. After increasing the perm gen size, Sumedha mentioned that the issue hasn't recurred.
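For context, on pre-Java-8 JVMs the perm gen ceiling is raised with the -XX:MaxPermSize flag. A minimal sketch, assuming the server picks up JVM options through a JAVA_OPTS variable in its startup script (the variable and the values are assumptions, not the exact settings used here):

    # Sketch only: enlarge PermGen on a Java 7 JVM. PermGen no longer exists on
    # Java 8+, where -XX:MaxMetaspaceSize is the analogue. Values are placeholders.
    JAVA_OPTS="$JAVA_OPTS -XX:PermSize=256m -XX:MaxPermSize=512m"
    export JAVA_OPTS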

Re: [Dev] DAS going OOM frequently

2015-12-16 Thread Anjana Fernando
Hi Ayoma, Thanks for checking up on it. Actually, "getAllIndexedTables" doesn't return the Set here; it returns an array that was previously populated in the refresh operation, so there's no need to synchronize that method. Cheers, Anjana. On Wed, Dec 16, 2015 at 5:44 PM, Ayoma Wijethunga
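A minimal sketch of the pattern being described, with hypothetical names (this is not the DAS source): the getter only hands out an array snapshot that the refresh step rebuilds, so the getter itself needs no lock.

    import java.util.Set;

    public class IndexedTableStore {
        // Snapshot published by the refresh operation; readers never mutate it.
        private volatile String[] indexedTableArray = new String[0];

        // Rebuilds the snapshot from the live Set (called from the refresh path).
        synchronized void refreshIndexedTableArray(Set<String> liveSet) {
            this.indexedTableArray = liveSet.toArray(new String[0]);
        }

        // No synchronization needed: it only returns the most recently published snapshot.
        public String[] getAllIndexedTables() {
            return indexedTableArray;
        }
    }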

Re: [Dev] DAS going OOM frequently

2015-12-16 Thread Gihan Anuruddha
Hi Niranda, Let's say we have to run embedded DAS in a memory-restricted environment. Where can I define the Spark memory allocation configuration? Regards, Gihan On Wed, Dec 16, 2015 at 6:55 PM, Niranda Perera wrote: > Hi Sumedha, > > I checked the heapdump

Re: [Dev] DAS going OOM frequently

2015-12-16 Thread Ayoma Wijethunga
And I missed mentioning that when this race condition / state corruption happens, all "get" operations performed on the Set/Map get blocked, resulting in an OOM situation. [1] has all of that explained nicely. I have checked a heap dump

Re: [Dev] DAS going OOM frequently

2015-12-16 Thread Niranda Perera
Hi Sumedha, I checked the heapdump you provided, and it is around 230MB, so I presume this was not an OOM scenario. As for Spark memory usage, when you use Spark in local mode, the processing happens inside that JVM itself. So, we have to make sure that we allocate enough
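Since local-mode Spark runs inside the server's own JVM, that JVM's heap is what bounds Spark's working memory. A sketch, again assuming options are passed through JAVA_OPTS (variable and values are assumptions):

    # Sketch only: give the hosting JVM, and therefore local-mode Spark, more heap.
    JAVA_OPTS="$JAVA_OPTS -Xms1g -Xmx4g"
    export JAVA_OPTS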

Re: [Dev] DAS going OOM frequently

2015-12-16 Thread Ayoma Wijethunga
Hi, I have seen the same sort of exception occur when a HashMap is used by multiple threads concurrently; it was necessary to use a ConcurrentHashMap or do proper synchronization in our logic. This is explained as state corruption in [3 - *(interesting read)*], and it is no wonder looking at
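A generic sketch of the kind of fix being suggested (names are illustrative, not DAS code): swap the shared HashMap for a ConcurrentHashMap, or else guard every access with one common lock.

    import java.util.Map;
    import java.util.concurrent.ConcurrentHashMap;

    public class SharedIndexState {
        // A plain HashMap here would be unsafe under concurrent writers.
        private final Map<String, Long> lastIndexedTimes = new ConcurrentHashMap<>();

        public void recordIndexTime(String table, long time) {
            lastIndexedTimes.put(table, time);   // safe: ConcurrentHashMap handles concurrent puts
        }

        public Long getIndexTime(String table) {
            return lastIndexedTimes.get(table);  // safe: no risk of a corrupted bucket chain
        }
    }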

Re: [Dev] DAS going OOM frequently

2015-12-16 Thread Ayoma Wijethunga
Hi Anjana, Yes, agreed; sorry, I misread that. In that case, the OOM should be resolved after the fix. Thank you, Ayoma. On Wed, Dec 16, 2015 at 6:11 PM, Anjana Fernando wrote: > Hi Ayoma, > > Thanks for checking up on it, actually "getAllIndexedTables" doesn't > return the Set here,

Re: [Dev] DAS going OOM frequently

2015-12-16 Thread Niranda Perera
Hi Gihan, The memory can be set using the conf parameters, i.e. "spark.executor.memory". rgds On Wed, Dec 16, 2015 at 7:01 PM, Gihan Anuruddha wrote: > Hi Niranda, > > Let's say we have to run embedded DAS in a memory-restricted environment. > Where can I define the Spark
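A minimal sketch of where that parameter goes when driving Spark directly through its Java API; how DAS itself wires its Spark configuration in is not shown here, and the values are placeholders.

    import org.apache.spark.SparkConf;
    import org.apache.spark.api.java.JavaSparkContext;

    public class EmbeddedSparkMemoryExample {
        public static void main(String[] args) {
            SparkConf conf = new SparkConf()
                    .setAppName("memory-example")
                    .setMaster("local[2]")                // embedded / local mode
                    .set("spark.executor.memory", "1g");  // the parameter mentioned above
            // Note: in local mode the driver JVM does the work, so its -Xmx (set at
            // startup) is what ultimately caps memory; setting spark.driver.memory
            // here, after the JVM has already started, would have no effect.
            JavaSparkContext sc = new JavaSparkContext(conf);
            sc.stop();
        }
    }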

Re: [Dev] DAS going OOM frequently

2015-12-16 Thread Ayoma Wijethunga
Hi Anjana, Sorry, I didn't notice that you had already replied to this thread. However, please consider my point on "getAllIndexedTables" as well. Thank you, Ayoma. On Wed, Dec 16, 2015 at 5:12 PM, Anjana Fernando wrote: > Hi Sumedha, > > Thank you for reporting the issue.

Re: [Dev] DAS going OOM frequently

2015-12-16 Thread Anjana Fernando
Hi Sumedha, Thank you for reporting the issue. I've fixed the ConcurrentModificationException issue: both the methods "addIndexedTable" and "removeIndexedTable" needed to be synchronized, since they both work on the shared Set object there. As for the OOM issue, can you please
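A sketch of the shape of that fix, with hypothetical bodies (only the method names come from the mail): both mutators lock on the same object because they modify the same shared Set.

    import java.util.HashSet;
    import java.util.Set;

    public class IndexedTableRegistry {
        private final Set<String> indexedTables = new HashSet<>();

        public synchronized void addIndexedTable(String tableId) {
            indexedTables.add(tableId);     // guarded by the registry's monitor
        }

        public synchronized void removeIndexedTable(String tableId) {
            indexedTables.remove(tableId);  // same lock, so the Set is never mutated concurrently
        }
    }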

[Dev] DAS going OOM frequently

2015-12-16 Thread Sumedha Rubasinghe
We have DAS Lite included in the IoT Server and several summarisation scripts deployed. The server is going OOM frequently with the following exception. Shouldn't this [1] method be synchronised? [1]