Hi Andrey,

If you look at the log, the time taken to process a partition is high (> 15
sec). I am not sure what is causing such a high query time.

In my case both caches are collocated, the eqId column is indexed, and
setLocal(true) is set on the query.
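
To be concrete, this is roughly what I mean by "indexed" and "local" (a
minimal sketch; the class and field names come from the snippet in the quoted
message, the detailQuery helper and everything else here is just illustrative):

    import java.io.Serializable;
    import org.apache.ignite.cache.affinity.AffinityKey;
    import org.apache.ignite.cache.query.SqlQuery;
    import org.apache.ignite.cache.query.annotations.QuerySqlField;

    public class PersonDetail implements Serializable {
        // index = true makes Ignite build an H2 index on eqId
        @QuerySqlField(index = true)
        private String eqId;

        @QuerySqlField
        private Long enddate;

        // other fields and getters/setters omitted

        // Per-eqId lookup, restricted to the local node's data only
        public static SqlQuery<AffinityKey<String>, PersonDetail> detailQuery(String eqId) {
            SqlQuery<AffinityKey<String>, PersonDetail> qry =
                new SqlQuery<AffinityKey<String>, PersonDetail>(PersonDetail.class,
                    "select * from PersonDetail where eqId = ? order by enddate desc");
            qry.setLocal(true);
            qry.setArgs(eqId);
            return qry;
        }
    }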

I wonder whether my approach is correct; please point out anything that looks
suspicious.
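
For context, the broadcast wrapper around the per-partition scan looks roughly
like this. Only the ScanQuery settings and the broadcast-to-all-nodes idea come
from the snippet in the quoted message; the job class name, the cache name, the
main() bootstrap, and the way the partition ids are obtained are placeholders:

    import org.apache.ignite.Ignite;
    import org.apache.ignite.IgniteCache;
    import org.apache.ignite.Ignition;
    import org.apache.ignite.cache.affinity.Affinity;
    import org.apache.ignite.cache.query.ScanQuery;
    import org.apache.ignite.lang.IgniteRunnable;
    import org.apache.ignite.resources.IgniteInstanceResource;

    public class ScanPartitionsJob implements IgniteRunnable {
        @IgniteInstanceResource
        private Ignite ignite;

        @Override public void run() {
            // Cache name is a placeholder for this sketch; Person is the
            // value class from the snippet in the quoted message.
            IgniteCache<String, Person> cache = ignite.cache("PERSON_CACHE");
            Affinity<String> aff = ignite.affinity("PERSON_CACHE");

            // Each node scans only the partitions it is primary for.
            for (int part : aff.primaryPartitions(ignite.cluster().localNode())) {
                ScanQuery<String, Person> scanQuery = new ScanQuery<String, Person>();
                scanQuery.setLocal(true);
                scanQuery.setPartition(part);

                // ... iterate cache.query(scanQuery) and run the per-entry
                // SqlQuery, as in the snippet in the quoted message below.
            }
        }

        public static void main(String[] args) {
            // Broadcast the job to every node in the cluster.
            Ignition.ignite().compute().broadcast(new ScanPartitionsJob());
        }
    }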

Thanks.



On 24 February 2017 at 18:37, Anil <[email protected]> wrote:

> Hi Andrey,
>
> I have attached the log. thanks.
>
> Thanks.
>
>
>
>
>
> On 24 February 2017 at 18:16, Andrey Mashenkov <[email protected]> wrote:
>
>> Hi Anil,
>>
>> Would you please provide ignite logs as well?
>>
>>
>> On Fri, Feb 24, 2017 at 3:33 PM, Andrey Gura <[email protected]> wrote:
>>
>>> Hi, Anil
>>>
>>> Could you please provide crash dump? In your case it is
>>> /opt/ignite-manager/api/hs_err_pid18543.log file.
>>>
>>> On Fri, Feb 24, 2017 at 9:05 AM, Anil <[email protected]> wrote:
>>> > Hi ,
>>> >
>>> > I see the node is down with the following error while running a compute task:
>>> >
>>> >
>>> > # A fatal error has been detected by the Java Runtime Environment:
>>> > #
>>> > #  SIGSEGV (0xb) at pc=0x00007facd5cae561, pid=18543,
>>> tid=0x00007fab8a9ea700
>>> > #
>>> > # JRE version: OpenJDK Runtime Environment (8.0_111-b14) (build
>>> > 1.8.0_111-8u111-b14-3~14.04.1-b14)
>>> > # Java VM: OpenJDK 64-Bit Server VM (25.111-b14 mixed mode linux-amd64
>>> > compressed oops)
>>> > # Problematic frame:
>>> > # J 8676 C2
>>> > org.apache.ignite.internal.processors.query.h2.opt.GridH2Key
>>> ValueRowOffheap.getOffheapValue(I)Lorg/h2/value/Value;
>>> > (290 bytes) @ 0x00007facd5cae561 [0x00007facd5cae180+0x3e1]
>>> > #
>>> > # Failed to write core dump. Core dumps have been disabled. To enable
>>> core
>>> > dumping, try "ulimit -c unlimited" before starting Java again
>>> > #
>>> > # An error report file with more information is saved as:
>>> > # /opt/ignite-manager/api/hs_err_pid18543.log
>>> > #
>>> > # If you would like to submit a bug report, please visit:
>>> > #   http://bugreport.java.com/bugreport/crash.jsp
>>> > #
>>> >
>>> >
>>> > I have two caches on a 4-node cluster; each cache is configured with 10 GB of off-heap memory.
>>> >
>>> > The ComputeTask performs the following execution and is broadcast to all nodes.
>>> >
>>> > for (Integer part : parts) {
>>> >     ScanQuery<String, Person> scanQuery = new ScanQuery<String, Person>();
>>> >     scanQuery.setLocal(true);
>>> >     scanQuery.setPartition(part);
>>> >
>>> >     Iterator<Cache.Entry<String, Person>> iterator =
>>> >         cache.query(scanQuery).iterator();
>>> >
>>> >     while (iterator.hasNext()) {
>>> >         Cache.Entry<String, Person> row = iterator.next();
>>> >         String eqId = row.getValue().getEqId();
>>> >         try {
>>> >             QueryCursor<Entry<AffinityKey<String>, PersonDetail>> pdCursor =
>>> >                 detailsCache.query(new SqlQuery<AffinityKey<String>, PersonDetail>(
>>> >                     PersonDetail.class,
>>> >                     "select * from DETAIL_CACHE.PersonDetail where eqId = ? order by enddate desc")
>>> >                     .setLocal(true).setArgs(eqId));
>>> >
>>> >             Long prev = null;
>>> >             for (Entry<AffinityKey<String>, PersonDetail> d : pdCursor) {
>>> >                 // populate person info into person detail
>>> >                 dataStreamer.addData(new AffinityKey<String>(detaildId, eqId), d);
>>> >             }
>>> >             pdCursor.close();
>>> >         } catch (Exception ex) {
>>> >         }
>>> >     }
>>> > }
>>> >
>>> >
>>> > Please let me know if you see any issues with the approach or the
>>> > configuration.
>>> >
>>> > Thanks.
>>> >
>>>
>>
>>
>>
>> --
>> Best regards,
>> Andrey V. Mashenkov
>>
>
>
