[GitHub] ignite pull request #2705: IGNITE-584: proper datastructures setDataMap fill...

2018-06-12 Thread zstan
Github user zstan closed the pull request at:

https://github.com/apache/ignite/pull/2705


---


Re: How to create a cache with 2 backups using REST

2018-06-12 Thread Alexey Kuznetsov
Prachi,

This is a bug in the REST "metadata" command.

I created issue: REST: metadata command failed on cluster of size 1.
https://issues.apache.org/jira/browse/IGNITE-8777

If you start 2 nodes, the "metadata" command will be executed correctly.


On Wed, Jun 13, 2018 at 6:13 AM, Prachi Garg  wrote:

> Hi Alexey,
>
> I used the following command to create a cache with 2 backups -
> http://localhost:8080/ignite?cmd=getorcreate&cacheName=
> myNewPartionedCache&backups=2
>
> This is the response -
> {"successStatus":0,"error":null,"response":null,"sessionToken":null}
>
> Here, it does not give me much info in the response. Looking at the
> status, which is 0, I can just assume that everything went ok.(I guess..)
>
> Then I try to get the cache metadata, using this command -
> http://localhost:8080/ignite?cmd=metadata&cacheName=myNewPartionedCache
>
> I get this error - {"successStatus":1,"error":"Failed to handle request:
> [req=CACHE_METADATA, err=Failed to request meta data. myNewPartionedCache
> is not found]","response":null,"sessionToken":null}
>
> What am I missing here?
>
> -P
>



-- 
Alexey Kuznetsov


[jira] [Created] (IGNITE-8777) REST: metadata command failed on cluster of size 1.

2018-06-12 Thread Alexey Kuznetsov (JIRA)
Alexey Kuznetsov created IGNITE-8777:


 Summary: REST: metadata command failed on cluster of size 1.
 Key: IGNITE-8777
 URL: https://issues.apache.org/jira/browse/IGNITE-8777
 Project: Ignite
  Issue Type: Improvement
  Components: rest
Affects Versions: 2.5
Reporter: Alexey Kuznetsov


Start *only one* node.
Execute REST command: 
http://localhost:8080/ignite?cmd=getorcreate&cacheName=myNewPartionedCache&backups=2
Cache will be created.

Execute http://localhost:8080/ignite?cmd=metadata&cacheName=myNewPartionedCache
Error will be returned: {"successStatus":1,"error":"Failed to handle request:
[req=CACHE_METADATA, err=Failed to request meta data. myNewPartionedCache is
not found]","response":null,"sessionToken":null}

After some debugging, I see this code in GridCacheCommandHandler.MetadataTask#map:
{code}
...
for (int i = 1; i < subgrid.size(); i++) {
 
}

if (map.isEmpty())
throw new IgniteException("Failed to request meta data. " + 
cacheName + " is not found");
...
{code}

So, in the case of a cluster with only one node, this code will throw an exception.

I guess the fix should be to just replace "int i = 1" with "int i = 0".
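To illustrate the off-by-one, here is a minimal, self-contained sketch. Note this is not the real Ignite API: the subgrid nodes and jobs are replaced by plain strings. Starting the loop at 1 leaves the map empty on a one-node cluster, which is exactly what trips the "is not found" exception above, while starting at 0 does not.

```java
import java.util.*;

public class MetadataLoopDemo {
    // Simulates how MetadataTask#map iterates the subgrid. 'startIdx' mirrors the
    // loop's initial index; node/job types are hypothetical string stand-ins.
    static Map<String, String> mapNodes(List<String> subgrid, int startIdx) {
        Map<String, String> map = new HashMap<>();

        for (int i = startIdx; i < subgrid.size(); i++)
            map.put(subgrid.get(i), "metadataJob");

        return map;
    }

    public static void main(String[] args) {
        List<String> singleNode = Collections.singletonList("node-1");

        // Buggy variant: starting at 1 skips the only node, so the map stays
        // empty and the handler would throw IgniteException.
        System.out.println(mapNodes(singleNode, 1).isEmpty()); // true

        // Proposed fix: starting at 0 assigns the job on a one-node cluster.
        System.out.println(mapNodes(singleNode, 0).isEmpty()); // false
    }
}
```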



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


Re: How does Ignite garbage collect unused pages?

2018-06-12 Thread Denis Magda
Whenever you add or remove an entry, it changes the amount of free space in a
page, which can lead to the page moving between free lists. According to this
page, defragmentation/compaction happens in the background and when a
threshold is met: https://apacheignite.readme.io/docs/memory-defragmentation
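As an illustration only — this is a toy model, not Ignite's actual internals — the relinking can be pictured as a bucket index derived from a page's remaining free space, recomputed whenever an entry changes the page's fill. All names, sizes, and the bucket formula below are made up:

```java
import java.util.*;

public class FreeListSketch {
    // Toy model: pages are tracked in buckets keyed by remaining free space,
    // and an entry add/remove moves the page between buckets.
    static final int PAGE_SIZE = 4096, BUCKETS = 8;
    static final List<List<Page>> freeLists = new ArrayList<>();
    static { for (int i = 0; i < BUCKETS; i++) freeLists.add(new ArrayList<>()); }

    static class Page { int freeBytes = PAGE_SIZE; int bucket = -1; }

    // Map remaining free bytes to a bucket index (hypothetical formula).
    static int bucketFor(int freeBytes) {
        return Math.min(BUCKETS - 1, freeBytes * BUCKETS / (PAGE_SIZE + 1));
    }

    // Called after every entry add/remove: relink the page if its bucket changed.
    static void onFreeSpaceChanged(Page p, int delta) {
        p.freeBytes += delta;
        int b = bucketFor(p.freeBytes);
        if (b != p.bucket) {
            if (p.bucket >= 0) freeLists.get(p.bucket).remove(p);
            freeLists.get(b).add(p);
            p.bucket = b;
        }
    }

    public static void main(String[] args) {
        Page p = new Page();
        onFreeSpaceChanged(p, -3000); // store a 3000-byte entry
        System.out.println(p.bucket); // 2: page moved to a low-free-space bucket
        onFreeSpaceChanged(p, 3000);  // remove it again
        System.out.println(p.bucket); // 7: page moved back to the emptiest bucket
    }
}
```

In this sketch the relink happens synchronously with the free-space change; whether the real implementation does it on the put path or defers it is exactly the open question in this thread.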

Hope Ignite persistence experts can shed more light on this.

--
Denis

On Tue, Jun 12, 2018 at 3:12 PM John Wilson  wrote:

> thanks. But *when* does that happen - i.e. when is the decision made to
> move pages? Is this part of the cache.put path or a separate thread?
>
> On Tue, Jun 12, 2018 at 1:03 PM, Denis Magda  wrote:
>
> > A page is moved between free lists that used to track pages of similar
> free
> > space left:
> >
> https://apacheignite.readme.io/docs/memory-architecture#section-free-lists
> >
> > --
> > Denis
> >
> >
> > On Tue, Jun 12, 2018 at 12:35 PM John Wilson 
> > wrote:
> >
> > > Hi,
> > >
> > > How does Ignite free unused pages? Is there some kind of background
> > thread
> > > process that scans unused pages?
> > >
> > > Thanks,
> > >
> >
>


How to create a cache with 2 backups using REST

2018-06-12 Thread Prachi Garg
Hi Alexey,

I used the following command to create a cache with 2 backups -
http://localhost:8080/ignite?cmd=getorcreate&cacheName=myNewPartionedCache&backups=2

This is the response -
{"successStatus":0,"error":null,"response":null,"sessionToken":null}

Here, it does not give me much info in the response. Looking at the status,
which is 0, I can just assume that everything went OK (I guess...).

Then I try to get the cache metadata, using this command -
http://localhost:8080/ignite?cmd=metadata&cacheName=myNewPartionedCache

I get this error - {"successStatus":1,"error":"Failed to handle request:
[req=CACHE_METADATA, err=Failed to request meta data. myNewPartionedCache
is not found]","response":null,"sessionToken":null}

What am I missing here?

-P
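For what it's worth, the two responses quoted above can be told apart programmatically by the successStatus field. A minimal sketch in plain Java (regex-based, no JSON library; the field name is taken from the responses in this message, everything else is illustrative):

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class RestResponseCheck {
    // Pull successStatus out of a REST response and decide whether the
    // command succeeded (0) or the "error" field needs inspection (non-zero).
    static int successStatus(String json) {
        Matcher m = Pattern.compile("\"successStatus\"\\s*:\\s*(\\d+)").matcher(json);

        if (!m.find())
            throw new IllegalArgumentException("no successStatus field: " + json);

        return Integer.parseInt(m.group(1));
    }

    public static void main(String[] args) {
        String ok  = "{\"successStatus\":0,\"error\":null,\"response\":null,\"sessionToken\":null}";
        String err = "{\"successStatus\":1,\"error\":\"Failed to handle request: ...\",\"response\":null,\"sessionToken\":null}";

        System.out.println(successStatus(ok));  // 0 -> command executed fine
        System.out.println(successStatus(err)); // 1 -> inspect the "error" field
    }
}
```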


Re: How does Ignite garbage collect unused pages?

2018-06-12 Thread John Wilson
Thanks. But *when* does that happen - i.e. when is the decision made to
move pages? Is this part of the cache.put path, or a separate thread?

On Tue, Jun 12, 2018 at 1:03 PM, Denis Magda  wrote:

> A page is moved between free lists that used to track pages of similar free
> space left:
> https://apacheignite.readme.io/docs/memory-architecture#section-free-lists
>
> --
> Denis
>
>
> On Tue, Jun 12, 2018 at 12:35 PM John Wilson 
> wrote:
>
> > Hi,
> >
> > How does Ignite free unused pages? Is there some kind of background
> thread
> > process that scans unused pages?
> >
> > Thanks,
> >
>


Re: How does Ignite garbage collect unused pages?

2018-06-12 Thread Denis Magda
A page is moved between free lists that are used to track pages with similar
amounts of free space left:
https://apacheignite.readme.io/docs/memory-architecture#section-free-lists

--
Denis


On Tue, Jun 12, 2018 at 12:35 PM John Wilson 
wrote:

> Hi,
>
> How does Ignite free unused pages? Is there some kind of background thread
> process that scans unused pages?
>
> Thanks,
>


How does Ignite garbage collect unused pages?

2018-06-12 Thread John Wilson
Hi,

How does Ignite free unused pages? Is there some kind of background thread
process that scans unused pages?

Thanks,


Re: Memory leak in ignite-cassandra module

2018-06-12 Thread Igor Rudyak
It would also be good to know which version of the Cassandra driver was used
when running into the OOM exception.

Igor

On Tue, Jun 12, 2018 at 11:39 AM, Igor Rudyak  wrote:

> Denis,
>
> I don't have ideas right now. First need to create a test to reproduce
> this case. Then I'll have some ideas :-)
>
> Igor
>
> On Tue, Jun 12, 2018 at 11:26 AM, Denis Magda  wrote:
>
>> Igor,
>>
>> Do you have any clues/ideas on how to fix it? Is the provided information
>> enough for you?
>>
>> --
>> Denis
>>
>> On Mon, Jun 11, 2018 at 11:45 PM Igor Rudyak  wrote:
>>
>> > Hi Kotamrajuyashasvi,
>> >
>> > Could you please create a ticket for this in Ignite JIRA? That's the
>> > standard process to make improvements/fixes to Ignite.
>> >
>> > Thanks,
>> > Igor Rudyak
>> >
>> > On Mon, Jun 11, 2018 at 11:36 PM, kotamrajuyashasvi <
>> > kotamrajuyasha...@gmail.com> wrote:
>> >
>> > > Hi
>> > >
>> > > We are working on an Ignite project with Cassandra as persistent
>> storage.
>> > > During our tests we faced the continuous cassandra session refresh
>> issue.
>> > > https://issues.apache.org/jira/browse/IGNITE-8354
>> > >
>> > > When we observed the above issue we also ran into OutOfMemory
>> Exception.
>> > > Though the above issue is solved we ran through the source code to
>> find
>> > out
>> > > the root cause
>> > > of OOM. We found one potential cause.
>> > >
>> > > In org.apache.ignite.cache.store.cassandra.session.
>> > > CassandraSessionImpl.java
>> > > when refresh() method is invoked to handle Exceptions, new Cluster is
>> > build
>> > > with same LoadBalancingPolicy Object. We are using RoundRobinPolicy so
>> > same
>> > > RoundRobinPolicy object would be used while building Cluster when
>> > refresh()
>> > > is invoked. In RoundRobinPolicy there is a CopyOnWriteArrayList
>> > > liveHosts. When ever init(Cluster cluster, Collection hosts) is
>> > > called
>> > > on RoundRobinPolicy  it calls liveHosts.addAll(hosts) adding all the
>> Host
>> > > Object Collection to liveHosts.
>> > > When ever Cluster is build during refresh() the Host Collection are
>> added
>> > > again to the liveHosts of the same RoundRobinPolicy that is used. Thus
>> > same
>> > > Hosts are added again to liveHosts for every refresh() and the size
>> would
>> > > grow indefinitely after many refresh() calls causing OOM. Even in the
>> > heap
>> > > dump post OOM we found huge number of Objects in liveHosts of
>> > > RoundRobinPolicy Object.
>> > >
>> > > IGNITE-8354 has fixed the OOM by preventing unnecessary refresh() but
>> > still
>> > > does not fix the actual Memory leak caused due to RoundRobinPolicy .
>> In a
>> > > long run we can have many Cassandra refresh due to some genuine
>> reasons
>> > and
>> > > then we end up with many Hosts in liveHosts of the RoundRobinPolicy
>> > Object.
>> > > Some possible solutions would be
>> > > 1. To use new LoadBalancingPolicy object while building new Cluster
>> > during
>> > > refresh().
>> > > 2. Somehow clear Objects in liveHosts during refresh().
>> > >
>> > > Also there's a work around to use DCAwareRoundRobinPolicy as it uses
>> adds
>> > > hosts dc wise and adds only if absent. But we are using single
>> datacenter
>> > > and its not recommended to use DCAwareRoundRobinPolicy when we have
>> > single
>> > > datacenter.
>> > >
>> > > I would like to request some one from ignite cassandra module
>> development
>> > > look into this issue.
>> > >
>> > >
>> > >
>> > > --
>> > > Sent from: http://apache-ignite-developers.2346864.n4.nabble.com/
>> > >
>> >
>>
>
>


Re: Memory leak in ignite-cassandra module

2018-06-12 Thread Igor Rudyak
Denis,

I don't have ideas right now. First I need to create a test to reproduce this
case. Then I'll have some ideas :-)

Igor

On Tue, Jun 12, 2018 at 11:26 AM, Denis Magda  wrote:

> Igor,
>
> Do you have any clues/ideas on how to fix it? Is the provided information
> enough for you?
>
> --
> Denis
>
> On Mon, Jun 11, 2018 at 11:45 PM Igor Rudyak  wrote:
>
> > Hi Kotamrajuyashasvi,
> >
> > Could you please create a ticket for this in Ignite JIRA? That's the
> > standard process to make improvements/fixes to Ignite.
> >
> > Thanks,
> > Igor Rudyak
> >
> > On Mon, Jun 11, 2018 at 11:36 PM, kotamrajuyashasvi <
> > kotamrajuyasha...@gmail.com> wrote:
> >
> > > Hi
> > >
> > > We are working on an Ignite project with Cassandra as persistent
> storage.
> > > During our tests we faced the continuous cassandra session refresh
> issue.
> > > https://issues.apache.org/jira/browse/IGNITE-8354
> > >
> > > When we observed the above issue we also ran into OutOfMemory
> Exception.
> > > Though the above issue is solved we ran through the source code to find
> > out
> > > the root cause
> > > of OOM. We found one potential cause.
> > >
> > > In org.apache.ignite.cache.store.cassandra.session.
> > > CassandraSessionImpl.java
> > > when refresh() method is invoked to handle Exceptions, new Cluster is
> > build
> > > with same LoadBalancingPolicy Object. We are using RoundRobinPolicy so
> > same
> > > RoundRobinPolicy object would be used while building Cluster when
> > refresh()
> > > is invoked. In RoundRobinPolicy there is a CopyOnWriteArrayList
> > > liveHosts. When ever init(Cluster cluster, Collection hosts) is
> > > called
> > > on RoundRobinPolicy  it calls liveHosts.addAll(hosts) adding all the
> Host
> > > Object Collection to liveHosts.
> > > When ever Cluster is build during refresh() the Host Collection are
> added
> > > again to the liveHosts of the same RoundRobinPolicy that is used. Thus
> > same
> > > Hosts are added again to liveHosts for every refresh() and the size
> would
> > > grow indefinitely after many refresh() calls causing OOM. Even in the
> > heap
> > > dump post OOM we found huge number of Objects in liveHosts of
> > > RoundRobinPolicy Object.
> > >
> > > IGNITE-8354 has fixed the OOM by preventing unnecessary refresh() but
> > still
> > > does not fix the actual Memory leak caused due to RoundRobinPolicy .
> In a
> > > long run we can have many Cassandra refresh due to some genuine reasons
> > and
> > > then we end up with many Hosts in liveHosts of the RoundRobinPolicy
> > Object.
> > > Some possible solutions would be
> > > 1. To use new LoadBalancingPolicy object while building new Cluster
> > during
> > > refresh().
> > > 2. Somehow clear Objects in liveHosts during refresh().
> > >
> > > Also there's a work around to use DCAwareRoundRobinPolicy as it uses
> adds
> > > hosts dc wise and adds only if absent. But we are using single
> datacenter
> > > and its not recommended to use DCAwareRoundRobinPolicy when we have
> > single
> > > datacenter.
> > >
> > > I would like to request some one from ignite cassandra module
> development
> > > look into this issue.
> > >
> > >
> > >
> > > --
> > > Sent from: http://apache-ignite-developers.2346864.n4.nabble.com/
> > >
> >
>


Re: Ignite 2.6 emergency release suggestion

2018-06-12 Thread Denis Magda
Agree with Ray. The ticket has already been reviewed and only requires us to
run tests for an isolated module - Spark.

Dmitriy Pavlov, Nickolay Izhikov, could you step in as final reviewers and
merge the changes?

--
Denis

On Tue, Jun 12, 2018 at 12:01 AM Ray  wrote:

> Igniters,
>
> Can you squeeze this ticket into 2.6 scope?
> https://issues.apache.org/jira/browse/IGNITE-8534
>
> As the ignite-spark module is relatively independent, and there are already
> two users on the user list who tried to use Spark 2.3 with Ignite in the last
> week alone.
>
>
> http://apache-ignite-users.70518.x6.nabble.com/Spark-Ignite-connection-using-Config-file-td21827.html
>
>
> http://apache-ignite-users.70518.x6.nabble.com/Spark-Ignite-standalone-mode-on-Kubernetes-cluster-td21739.html
>
>
>
>
>
> --
> Sent from: http://apache-ignite-developers.2346864.n4.nabble.com/
>


Re: Memory leak in ignite-cassandra module

2018-06-12 Thread Denis Magda
Igor,

Do you have any clues/ideas on how to fix it? Is the provided information
enough for you?

--
Denis

On Mon, Jun 11, 2018 at 11:45 PM Igor Rudyak  wrote:

> Hi Kotamrajuyashasvi,
>
> Could you please create a ticket for this in Ignite JIRA? That's the
> standard process to make improvements/fixes to Ignite.
>
> Thanks,
> Igor Rudyak
>
> On Mon, Jun 11, 2018 at 11:36 PM, kotamrajuyashasvi <
> kotamrajuyasha...@gmail.com> wrote:
>
> > Hi
> >
> > We are working on an Ignite project with Cassandra as persistent storage.
> > During our tests we faced the continuous cassandra session refresh issue.
> > https://issues.apache.org/jira/browse/IGNITE-8354
> >
> > When we observed the above issue we also ran into OutOfMemory Exception.
> > Though the above issue is solved we ran through the source code to find
> out
> > the root cause
> > of OOM. We found one potential cause.
> >
> > In org.apache.ignite.cache.store.cassandra.session.
> > CassandraSessionImpl.java
> > when refresh() method is invoked to handle Exceptions, new Cluster is
> build
> > with same LoadBalancingPolicy Object. We are using RoundRobinPolicy so
> same
> > RoundRobinPolicy object would be used while building Cluster when
> refresh()
> > is invoked. In RoundRobinPolicy there is a CopyOnWriteArrayList
> > liveHosts. When ever init(Cluster cluster, Collection hosts) is
> > called
> > on RoundRobinPolicy  it calls liveHosts.addAll(hosts) adding all the Host
> > Object Collection to liveHosts.
> > When ever Cluster is build during refresh() the Host Collection are added
> > again to the liveHosts of the same RoundRobinPolicy that is used. Thus
> same
> > Hosts are added again to liveHosts for every refresh() and the size would
> > grow indefinitely after many refresh() calls causing OOM. Even in the
> heap
> > dump post OOM we found huge number of Objects in liveHosts of
> > RoundRobinPolicy Object.
> >
> > IGNITE-8354 has fixed the OOM by preventing unnecessary refresh() but
> still
> > does not fix the actual Memory leak caused due to RoundRobinPolicy . In a
> > long run we can have many Cassandra refresh due to some genuine reasons
> and
> > then we end up with many Hosts in liveHosts of the RoundRobinPolicy
> Object.
> > Some possible solutions would be
> > 1. To use new LoadBalancingPolicy object while building new Cluster
> during
> > refresh().
> > 2. Somehow clear Objects in liveHosts during refresh().
> >
> > Also there's a work around to use DCAwareRoundRobinPolicy as it uses adds
> > hosts dc wise and adds only if absent. But we are using single datacenter
> > and its not recommended to use DCAwareRoundRobinPolicy when we have
> single
> > datacenter.
> >
> > I would like to request some one from ignite cassandra module development
> > look into this issue.
> >
> >
> >
> > --
> > Sent from: http://apache-ignite-developers.2346864.n4.nabble.com/
> >
>


[GitHub] ignite pull request #3874: IGNITE-7319: Cancelable future task for backup cl...

2018-06-12 Thread aealeksandrov
Github user aealeksandrov closed the pull request at:

https://github.com/apache/ignite/pull/3874


---


[jira] [Created] (IGNITE-8776) Eviction policy MBeans are never registered if evictionPolicyFactory is used

2018-06-12 Thread Stanislav Lukyanov (JIRA)
Stanislav Lukyanov created IGNITE-8776:
--

 Summary: Eviction policy MBeans are never registered if 
evictionPolicyFactory is used
 Key: IGNITE-8776
 URL: https://issues.apache.org/jira/browse/IGNITE-8776
 Project: Ignite
  Issue Type: Bug
Affects Versions: 2.5
Reporter: Stanislav Lukyanov


Eviction policy MBeans, such as LruEvictionPolicyMBean, are never registered if 
evictionPolicyFactory is set instead of evictionPolicy (the latter is 
deprecated).

This happens because GridCacheProcessor::registerMbean attempts to find either 
an *MBean interface or IgniteMBeanAware interface on the passed object. It 
works for LruEvictionPolicy but not for LruEvictionPolicyFactory (which doesn't 
implement these interfaces).

The code needs to be adjusted to handle factories correctly.
New tests are needed to make sure that all standard beans are registered 
(IgniteKernalMbeansTest does that for kernal mbeans - need the same for cache 
beans).
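A minimal sketch of why the interface check misses factories. The classes below are hypothetical stand-ins (using java.util.function.Supplier rather than the real Ignite/JCache Factory types): the *MBean interface lives on the policy instance the factory creates, not on the factory object that registerMbean receives.

```java
import java.util.function.Supplier;

public class MbeanRegistrationSketch {
    // Hypothetical stand-ins for the classes named in the report.
    interface LruEvictionPolicyMBean { }
    static class LruEvictionPolicy implements LruEvictionPolicyMBean { }
    static class LruEvictionPolicyFactory implements Supplier<LruEvictionPolicy> {
        @Override public LruEvictionPolicy get() { return new LruEvictionPolicy(); }
    }

    // Mirrors the reported logic: look for an *MBean interface on the passed object.
    static boolean hasMbeanInterface(Object o) {
        for (Class<?> itf : o.getClass().getInterfaces())
            if (itf.getSimpleName().endsWith("MBean"))
                return true;

        return false;
    }

    public static void main(String[] args) {
        LruEvictionPolicyFactory factory = new LruEvictionPolicyFactory();

        // The factory itself implements no MBean interface...
        System.out.println(hasMbeanInterface(factory));       // false
        // ...but the policy it creates does, so the check must be done on
        // the created instance for the MBean to get registered.
        System.out.println(hasMbeanInterface(factory.get())); // true
    }
}
```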



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (IGNITE-8775) Memory leak in ignite-cassandra module while using RoundRobinPolicy LoadBalancingPolicy

2018-06-12 Thread Yashasvi Kotamraju (JIRA)
Yashasvi Kotamraju created IGNITE-8775:
--

 Summary: Memory leak in ignite-cassandra module while using 
RoundRobinPolicy LoadBalancingPolicy
 Key: IGNITE-8775
 URL: https://issues.apache.org/jira/browse/IGNITE-8775
 Project: Ignite
  Issue Type: Bug
  Components: cassandra
Reporter: Yashasvi Kotamraju


In org.apache.ignite.cache.store.cassandra.session.CassandraSessionImpl.java,
when the refresh() method is invoked to handle exceptions, a new Cluster is
built with the same LoadBalancingPolicy object. We are using RoundRobinPolicy,
so the same RoundRobinPolicy object is used while building the Cluster when
refresh() is invoked. RoundRobinPolicy holds a CopyOnWriteArrayList
liveHosts. Whenever init(Cluster cluster, Collection hosts) is called
on RoundRobinPolicy, it calls liveHosts.addAll(hosts), adding the whole Host
collection to liveHosts.
Whenever a Cluster is built during refresh(), the Host collection is added
again to the liveHosts of the same RoundRobinPolicy. Thus the same hosts are
added to liveHosts on every refresh(), and the list grows indefinitely after
many refresh() calls, causing the OOM. Even in the heap dump taken after the
OOM we found a huge number of objects in the liveHosts of the
RoundRobinPolicy object.
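The accumulation can be reproduced with a small stand-in class. Below is a toy model of RoundRobinPolicy's liveHosts behaviour, not the actual Cassandra driver class: reusing one policy object across refreshes grows the list without bound, while building a fresh policy for each new Cluster keeps it constant.

```java
import java.util.*;
import java.util.concurrent.CopyOnWriteArrayList;

public class PolicyLeakDemo {
    // Minimal stand-in for RoundRobinPolicy's liveHosts handling:
    // init() blindly addAll()s the host collection, never deduplicating.
    static class RoundRobinLikePolicy {
        final List<String> liveHosts = new CopyOnWriteArrayList<>();
        void init(Collection<String> hosts) { liveHosts.addAll(hosts); }
    }

    public static void main(String[] args) {
        List<String> hosts = Arrays.asList("10.0.0.1", "10.0.0.2");

        // Leak: the same policy object survives every refresh(), so hosts pile up.
        RoundRobinLikePolicy reused = new RoundRobinLikePolicy();
        for (int refresh = 0; refresh < 1000; refresh++)
            reused.init(hosts);
        System.out.println(reused.liveHosts.size()); // 2000, grows without bound

        // Fix sketch: build a fresh policy for each new Cluster.
        RoundRobinLikePolicy fresh = null;
        for (int refresh = 0; refresh < 1000; refresh++) {
            fresh = new RoundRobinLikePolicy();
            fresh.init(hosts);
        }
        System.out.println(fresh.liveHosts.size()); // stays at 2
    }
}
```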

 

 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


Re: Ignite 2.6 emergency release suggestion

2018-06-12 Thread Ray
Igniters,

Can you squeeze this ticket into 2.6 scope?
https://issues.apache.org/jira/browse/IGNITE-8534

As the ignite-spark module is relatively independent, and there are already
two users on the user list who tried to use Spark 2.3 with Ignite in the last
week alone.

http://apache-ignite-users.70518.x6.nabble.com/Spark-Ignite-connection-using-Config-file-td21827.html

http://apache-ignite-users.70518.x6.nabble.com/Spark-Ignite-standalone-mode-on-Kubernetes-cluster-td21739.html





--
Sent from: http://apache-ignite-developers.2346864.n4.nabble.com/