[jira] [Created] (IGNITE-10858) Memory cannot be recovered in a timely manner when executing an Ignite service with a thread pool

2019-01-03 Thread renx (JIRA)
renx created IGNITE-10858:
-

 Summary: Memory cannot be recovered in a timely manner when executing an 
Ignite service with a thread pool 
 Key: IGNITE-10858
 URL: https://issues.apache.org/jira/browse/IGNITE-10858
 Project: Ignite
  Issue Type: Bug
  Components: binary
Affects Versions: 2.7
Reporter: renx
 Fix For: 2.8


I integrated the Ignite client into a J2EE application. An HTTP-request 
handler thread calls an Ignite service whose serialized parameters are fairly 
large (about 100 MB). Watching the JVM console, I noticed abnormal memory 
usage; a JVM heap dump showed that a ThreadLocal causes an 
OptimizedObjectStreamRegistry.StreamHolder memory leak. It can be fixed with 
the following modification.

org.apache.ignite.internal.util.io.GridUnsafeDataOutput#reset:

public void reset() {
    off = 0;
    out = null;

    // Release the cached buffer so that a large serialized payload is not
    // retained by the per-thread holder after the call completes.
    bytes = new byte[0];
    maxOff = 0;
    lastCheck = U.currentTimeMillis();
}
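
To illustrate the leak this reset is meant to fix, here is a minimal, self-contained sketch (hypothetical class and method names, not Ignite code) of the pattern: a ThreadLocal caches a per-thread buffer, and in a server whose pooled threads never die, the buffer stays reachable until something explicitly shrinks it.

```java
/**
 * Sketch of the leak pattern behind OptimizedObjectStreamRegistry.StreamHolder:
 * a ThreadLocal caches a growable per-thread buffer. On a pooled thread the
 * ThreadLocal entry outlives the request, so a ~100 MB buffer stays reachable
 * until reset() replaces it with an empty array.
 */
public class StreamHolderLeakSketch {
    /** Per-thread cached buffer, analogous to the holder's internal byte[]. */
    private static final ThreadLocal<byte[]> BUF =
        ThreadLocal.withInitial(() -> new byte[0]);

    /** Returns the cached buffer, growing it to at least {@code size} bytes. */
    static byte[] acquire(int size) {
        byte[] b = BUF.get();
        if (b.length < size) {
            b = new byte[size];
            BUF.set(b); // the large buffer is now pinned by the live thread
        }
        return b;
    }

    /** The proposed fix: drop the large buffer so the GC can reclaim it. */
    static void reset() {
        BUF.set(new byte[0]);
    }
}
```

Without reset(), every worker thread in the pool keeps its largest-ever buffer alive, which matches the retention seen in the heap dump.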



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (IGNITE-10857) System threads blocked in the cluster and unable to recover

2019-01-03 Thread Mahesh Renduchintala (JIRA)
Mahesh Renduchintala created IGNITE-10857:
-

 Summary: System threads blocked in the cluster and unable to 
recover
 Key: IGNITE-10857
 URL: https://issues.apache.org/jira/browse/IGNITE-10857
 Project: Ignite
  Issue Type: Bug
Affects Versions: 2.7
 Environment: Ubuntu 16.0.4 server
Reporter: Mahesh Renduchintala
 Attachments: ignite-fe92056c.0.log

Attached are the logs. Please see the very end of the logs.





Re: Changing public IgniteCompute API to improve changes in 5037 ticket

2019-01-03 Thread ihorps
hi all

I'd like to ask whether somebody is going to continue with a proposal for
the final API design here. We have a use case where a map-reduce API with a
built-in affinity feature would be a big benefit for us (the project now
covers 26 European countries). Otherwise we will have to think about how to
implement it on our own (the "AffinityComputeJob" proposal from Valentin K.
doesn't look bad indeed...).

Thank you in advance.



--
Sent from: http://apache-ignite-developers.2346864.n4.nabble.com/


Re: Ignite ML withKeepBinary cache

2019-01-03 Thread otorreno
Denis,

That's great news! I will wait till your ML expert is back from the holidays
to work with him on a clean solution.

Regarding the blog post, sure; IMHO it could be interesting and useful to
write about how to use Ignite ML with BinaryObject caches.

Thanks,
Oscar





Re: Ignite ML withKeepBinary cache

2019-01-03 Thread Denis Magda
Oscar,

Sounds like Ignite ML is a perfect fit for your task. Our ML expert will
help you come up with a clean solution once the holiday season is over.

In general, would you be able to write a blog post on how Ignite ML is used
for your task once the issues are addressed?

--
Denis

On Wed, Jan 2, 2019 at 11:25 PM otorreno  wrote:

> Denis,
>
> We have some metadata stored in an Ignite cache where each row describes a
> certain data series and each column is a property (which could actually be
> of any type: strings, doubles, etc.). You can think of it as a table
> describing our data series. This table might potentially be quite big,
> given a high number of series and properties.
>
> Based on this table we would like to clusterize our data using different
> algorithms (e.g. k-means, decision tree).
>
> I started looking at it and quite liked the way you have done the
> pre-processing pipeline for feature selection, transformation,
> normalization and scaling. The only stumbling block I found on my way was
> the BinaryObject problem I mentioned.
>
> In fact I made it work as I described in my first post, but with a dirty
> solution, as I didn't find a way to access the keepBinary property of the
> cache used as input. In any case, I will be glad to help find a clean
> solution to the problem if needed.
>
> Best,
> Oscar
>
>
>
>


Re: Problem with reading incomplete payload - IGNITE-7153

2019-01-03 Thread Dmitriy Pavlov
Hi Igniters,

I'm trying to reach the author of the fix because the ticket is still In
Progress.

Could you please advise me how to handle it (the fix seems to be useful)?
Can we set the Patch Available status by lazy consensus and review the
possibly incomplete fix?
https://issues.apache.org/jira/browse/IGNITE-7153

Sincerely,
Dmitriy Pavlov

Fri, Nov 2, 2018, 13:20, Michael Fong :

> Hi Yakov,
>
> Thanks so much for your analysis.
>
> > Parser expects chunks to be complete and has all the data to read the
> > entire message, but this is not guaranteed and a single message can
> > arrive in several chunks.
>
> This is indeed the assumption behind my implementation. I have not come up
> with another parsing algorithm to handle this rainy-day case. Perhaps it
> would require more refactoring of the existing code. In addition, I might
> need to check how the Redis developers implement their parser state machine.
>
> I would be interested to see how current implementation (based on
> 2.6/master) behaves if we intentionally split the message into chunks as
> you suggested for the reproducer.
>
> Regards,
>
> Michael
>
> On Wed, Oct 31, 2018 at 7:08 PM Yakov Zhdanov  wrote:
>
> > Hi Mike!
> >
> > Thanks for the reproducer. Now I understand the problem. The NIO worker
> > reads chunks from the network and notifies the parser on data read. The
> > parser expects chunks to be complete and to contain all the data needed
> > to read an entire message, but this is not guaranteed: a single message
> > can arrive in several chunks, which is demonstrated by your test.
> >
> > The problem is inside GridRedisProtocolParser. We should add the ability
> > to store the parsing context when we do not have all the data to complete
> > message parsing, as is done, for example, in GridBufferedParser. So it is
> > definitely an issue and should be fixed by adding parsing state. I see
> > you attempted to do so in PR
> > https://github.com/apache/ignite/pull/5044/files. I did not do a formal
> > review, so let's ask the community to review your patch.
> >
> > A couple of comments about your reproducer.
> >
> > 1. Let's dump a proper Redis message bytes sent by Jedis.
> > 2. Let's split this dump into 5 chunks and send them with 100 ms delays.
> >
> > This should fail before the fix is applied, and should pass with the
> > message properly parsed after the issue is fixed.
> >
> > Thanks!
> >
> > --Yakov
> >
>
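
The stateful parsing Yakov describes can be sketched independently of Ignite as a small accumulator for length-prefixed messages (hypothetical names; the real fix belongs in GridRedisProtocolParser, which parses the RESP protocol rather than a 4-byte length prefix): bytes are buffered across feed() calls, and a message is emitted only once all of it has arrived.

```java
import java.io.ByteArrayOutputStream;
import java.util.ArrayList;
import java.util.List;

/**
 * Sketch of a parser that keeps state across reads: incoming chunks are
 * buffered, and a 4-byte-length-prefixed message is decoded only when all
 * of its bytes are available, so a message split across several network
 * chunks is still parsed correctly.
 */
public class ChunkedParserSketch {
    /** Parsing context retained between chunks. */
    private final ByteArrayOutputStream buf = new ByteArrayOutputStream();

    /** Feeds one network chunk; returns any messages completed by it. */
    public List<String> feed(byte[] chunk) {
        buf.write(chunk, 0, chunk.length);

        List<String> msgs = new ArrayList<>();
        byte[] data = buf.toByteArray();
        int pos = 0;

        while (data.length - pos >= 4) {
            int len = ((data[pos] & 0xFF) << 24) | ((data[pos + 1] & 0xFF) << 16)
                    | ((data[pos + 2] & 0xFF) << 8) | (data[pos + 3] & 0xFF);

            if (data.length - pos - 4 < len)
                break; // incomplete: keep the context for the next chunk

            msgs.add(new String(data, pos + 4, len));
            pos += 4 + len;
        }

        // Retain only the unconsumed tail as the new parsing context.
        buf.reset();
        buf.write(data, pos, data.length - pos);

        return msgs;
    }
}
```

Feeding the same message in one chunk or in five yields the same result, which is exactly the property the reproducer should assert.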


[jira] [Created] (IGNITE-10856) cassandra-store seems to be broken by incorrect guava version

2019-01-03 Thread Stanislav Lukyanov (JIRA)
Stanislav Lukyanov created IGNITE-10856:
---

 Summary: cassandra-store seems to be broken by incorrect guava 
version
 Key: IGNITE-10856
 URL: https://issues.apache.org/jira/browse/IGNITE-10856
 Project: Ignite
  Issue Type: Bug
Reporter: Stanislav Lukyanov


IGNITE-9131 upgraded guava from 18 to 25.
However, cassandra-driver-core:3.0, a dependency of Ignite's cassandra-store, 
requires guava 16-19.
The dependency conflict there needs to be fixed.
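
One possible shape for the fix, sketched as a hypothetical Maven override in the cassandra-store module's POM (the exact version pin is an assumption and would need to be validated against both cassandra-driver-core 3.0 and Ignite's own guava usage):

```xml
<!-- Hypothetical: pin guava to a version in the 16-19 range accepted by
     cassandra-driver-core 3.0, overriding the 25.x pulled in elsewhere. -->
<dependencyManagement>
  <dependencies>
    <dependency>
      <groupId>com.google.guava</groupId>
      <artifactId>guava</artifactId>
      <version>19.0</version>
    </dependency>
  </dependencies>
</dependencyManagement>
```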


