[jira] [Comment Edited] (IGNITE-10959) Memory leaks in continuous query handlers

2019-10-19 Thread Zane Hu (Jira)


[ 
https://issues.apache.org/jira/browse/IGNITE-10959?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16955345#comment-16955345
 ] 

Zane Hu edited comment on IGNITE-10959 at 10/20/19 1:09 AM:


The error occurs in the TransactionalPartitionedTwoBackupFullSync case, as shown 
in the following log from a slightly modified CacheContinuousQueryMemoryUsageTest.java. 
We don't see this error in the TransactionalReplicatedTwoBackupFullSync and 
TransactionalPartitionedOneBackupFullSync cases. 
{quote}[ERROR] 
CacheContinuousQueryMemoryUsageTest>GridAbstractTest.access$000:143->GridAbstractTest.runTestInternal:2177->testTransactionalPartitionedTwoBackupFullSync:235->testContinuousQuery:355->assertEntriesReleased:423->assertEntriesReleased:435->checkEntryBuffers:466
 Backup queue is not empty. Node: 
continuous.CacheContinuousQueryMemoryUsageTest0; cache: test-cache. 
expected:<0> but was:<1>
{quote}
Looking at the Ignite code, the following snippet from onEntryUpdated() in 
CacheContinuousQueryHandler.java shows how entries reach the backup queue:

 
{code:java}
if (primary || skipPrimaryCheck)
    // TransactionalReplicatedTwoBackupFullSync goes here: notify the query
    // client without putting evt.entry() into backupQ.
    onEntryUpdate(evt, notify, loc, recordIgniteEvt);
else
    // A backup node of TransactionalPartitionedTwoBackupFullSync goes here:
    // this puts evt.entry() into backupQ.
    handleBackupEntry(cctx, evt.entry());
{code}
  

After the query client is notified, an acknowledgment message 
(CacheContinuousQueryBatchAck) appears to be sent to the continuous-query server 
side on the backup nodes to clean up the entries in backupQ. There is also a 
periodic BackupCleaner task that runs every 5 seconds to clean up backupQ. The 
actual cleanup code is shown below:

 
{code:java}
/**
 * @param updateCntr Acknowledged counter.
 */
void cleanupBackupQueue(Long updateCntr) {
    Iterator<CacheContinuousQueryEntry> it = backupQ.iterator();

    while (it.hasNext()) {
        CacheContinuousQueryEntry backupEntry = it.next();

        // Remove backupEntry if its update counter is <= the acked counter.
        if (backupEntry.updateCounter() <= updateCntr)
            it.remove();
    }
}
{code}
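To see how an entry can survive this cleanup, here is a standalone sketch (plain Java, not Ignite code; the `Entry` record and queue are simplified stand-ins for CacheContinuousQueryEntry and backupQ): any entry whose update counter is above the highest acknowledged counter survives every cleanup pass.

```java
import java.util.ArrayDeque;
import java.util.Iterator;
import java.util.Queue;

/** Simplified model of the counter-based backup-queue cleanup above. */
public class BackupQueueSketch {
    /** Stand-in for CacheContinuousQueryEntry: only the counter matters here. */
    record Entry(long updateCounter) {}

    static final Queue<Entry> backupQ = new ArrayDeque<>();

    /** Same logic as cleanupBackupQueue(Long) in the snippet above. */
    static void cleanupBackupQueue(long updateCntr) {
        Iterator<Entry> it = backupQ.iterator();

        while (it.hasNext()) {
            // Remove the entry only if its counter is covered by the ack.
            if (it.next().updateCounter() <= updateCntr)
                it.remove();
        }
    }

    public static void main(String[] args) {
        for (long c = 1; c <= 3; c++)
            backupQ.add(new Entry(c));

        cleanupBackupQueue(2); // Ack covers counters 1 and 2 only.

        // Entry with counter 3 stays until a higher ack arrives.
        System.out.println(backupQ.size()); // prints 1
    }
}
```

So a single leftover entry is exactly what one would see if the last acknowledged counter on a backup node never caught up with the last update counter.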
 

So some questions are:
 # Why is a backupEntry still left over in backupQ after all of this?
 # Is it possible that the updateCounter or the acknowledged updateCntr is miscalculated?
 # Is it possible that the ack message is sent to only one of the two backup nodes? 
The load is 1000 updates across 3 nodes in a stable network, so a message 
shouldn't be dropped in the middle. 
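Question 3 can be modeled with a small hypothetical sketch (plain Java, not Ignite code): if the batch ack were delivered to only one of the two backup nodes, the other node's queue would stay non-empty, matching the observed "expected:<0> but was:<1>".

```java
import java.util.ArrayDeque;
import java.util.List;
import java.util.Queue;

/**
 * Hypothetical model of question 3: two backup nodes each keep a backup
 * queue of update counters; the batch ack reaches only one of them.
 */
public class PartialAckSketch {
    /** Counter-based cleanup, as in cleanupBackupQueue above. */
    static void cleanup(Queue<Long> backupQ, long ackCntr) {
        backupQ.removeIf(cntr -> cntr <= ackCntr);
    }

    public static void main(String[] args) {
        Queue<Long> backup1 = new ArrayDeque<>(List.of(1L));
        Queue<Long> backup2 = new ArrayDeque<>(List.of(1L));

        cleanup(backup1, 1L); // Ack delivered to backup node 1 only.

        // backup1 is empty, backup2 still holds its entry.
        System.out.println(backup1.size() + " " + backup2.size()); // prints: 0 1
    }
}
```

Whether Ignite actually drops or skips the ack for one backup is exactly what needs confirming; this only shows that such a scenario would reproduce the symptom.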

 

Could Ignite experts or developers please look into this further?

Thanks,

 


> Memory leaks in continuous query handlers
> -
>
> Key: IGNITE-10959
> URL: https://issues.apache.org/jira/browse/IGNITE-10959
> Project: Ignite
>  Issue Type: Bug
>Affects Versions: 2.7
>Reporter: Denis Mekhanikov
>Priority: Major
> Fix For: 2.9
>
> Attachments: CacheContinuousQueryMemoryUsageTest.java, 
> continuousquery_leak_profile.png
>
>
> Continuous query handlers don't clear internal data structures after cache 
> events are processed.
> A test, that reproduces the problem, is attached.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (IGNITE-12228) Implement an Apache Beam runner on top of Ignite compute grid?

2019-10-19 Thread Saikat Maitra (Jira)


[ 
https://issues.apache.org/jira/browse/IGNITE-12228?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16955338#comment-16955338
 ] 

Saikat Maitra commented on IGNITE-12228:


Hello [~romain.manni-bucau] , [~dmagda]

I would like to contribute to Apache Beam runner.

I have started the discussion here 
[https://lists.apache.org/thread.html/8adbf97a83fb2a9ef8ac511b0c9de5664eed9ea6ea81a48d0b2d4d46@%3Cdev.beam.apache.org%3E]

 

Regards,

Saikat

> Implement an Apache Beam runner on top of Ignite compute grid?
> --
>
> Key: IGNITE-12228
> URL: https://issues.apache.org/jira/browse/IGNITE-12228
> Project: Ignite
>  Issue Type: Wish
>Reporter: Romain Manni-Bucau
>Assignee: Saikat Maitra
>Priority: Major
>
> Apache Ignite provides a compute grid.
> It is therefore feasible to use that to execute a DAG pipeline.
> Therefore, implementing an Apache Beam runner that executes a DAG on an Ignite 
> cluster would be very valuable for users and would provide an interesting 
> alternative to Spark, Dataflow, etc., especially when using data from Ignite itself. 
> Hoping it can help, here is the Hazelcast Jet implementation, which is not that 
> far off, even if Jet is a bit more advanced in terms of DAG support: 
> [https://github.com/apache/beam/tree/master/runners/jet-experimental/src/main/java/org/apache/beam/runners/jet]





[jira] [Assigned] (IGNITE-12228) Implement an Apache Beam runner on top of Ignite compute grid?

2019-10-19 Thread Saikat Maitra (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-12228?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Saikat Maitra reassigned IGNITE-12228:
--

Assignee: Saikat Maitra



[jira] [Commented] (IGNITE-7198) Integrate with Apache Beam

2019-10-19 Thread Saikat Maitra (Jira)


[ 
https://issues.apache.org/jira/browse/IGNITE-7198?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16955336#comment-16955336
 ] 

Saikat Maitra commented on IGNITE-7198:
---

Hello [~junwan01] [~dmagda]

I would like to contribute to Apache Beam integration.

I have started the discussion here 
[https://lists.apache.org/thread.html/8adbf97a83fb2a9ef8ac511b0c9de5664eed9ea6ea81a48d0b2d4d46@%3Cdev.beam.apache.org%3E]

 

Regards,

Saikat

 

> Integrate with Apache Beam 
> ---
>
> Key: IGNITE-7198
> URL: https://issues.apache.org/jira/browse/IGNITE-7198
> Project: Ignite
>  Issue Type: Task
>Reporter: Denis A. Magda
>Assignee: Saikat Maitra
>Priority: Critical
>
> Apache Beam (https://beam.apache.org/) provides a unified API for batch and 
> streaming processing. It can be seen as a generic/portable API with multiple 
> implementations (Spark, Flink, etc.).
> Apache Ignite perfectly fits Beam architecture as a Runner:
> https://beam.apache.org/contribute/runner-guide/
> Here is Beam's capability matrix. Let's support as much as we're capable of:
> https://beam.apache.org/documentation/runners/capability-matrix/
> Most likely the integration should go to Beam's repository:
> https://github.com/apache/beam/tree/master/runners
> Discussion on the dev list: 
> http://apache-ignite-developers.2346864.n4.nabble.com/Apache-Ignite-as-a-distributed-processing-back-ends-td25122.html





[jira] [Assigned] (IGNITE-7198) Integrate with Apache Beam

2019-10-19 Thread Saikat Maitra (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-7198?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Saikat Maitra reassigned IGNITE-7198:
-

Assignee: Saikat Maitra



[jira] [Commented] (IGNITE-7285) Add default query timeout

2019-10-19 Thread Saikat Maitra (Jira)


[ 
https://issues.apache.org/jira/browse/IGNITE-7285?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16955332#comment-16955332
 ] 

Saikat Maitra commented on IGNITE-7285:
---

Hi Ivan,

Thank you for sharing your feedback.

I have incorporated most of your review comments and have posted questions on a 
few of the changes.

Can you please take a look and share your thoughts?

Regards,

Saikat

 

> Add default query timeout
> -
>
> Key: IGNITE-7285
> URL: https://issues.apache.org/jira/browse/IGNITE-7285
> Project: Ignite
>  Issue Type: Improvement
>  Components: cache, sql
>Affects Versions: 2.3
>Reporter: Valentin Kulichenko
>Assignee: Saikat Maitra
>Priority: Major
>  Labels: sql-stability
> Fix For: 2.8
>
>  Time Spent: 6h 40m
>  Remaining Estimate: 0h
>
> Currently it's possible to provide a timeout only at the query level. It would be 
> very useful to have a default timeout value provided on cache startup. Let's 
> add a {{CacheConfiguration#defaultQueryTimeout}} configuration property.


