[jira] [Created] (IGNITE-9425) NPE on index rebuild

2018-08-29 Thread Dmitry Karachentsev (JIRA)
Dmitry Karachentsev created IGNITE-9425:
---

 Summary: NPE on index rebuild
 Key: IGNITE-9425
 URL: https://issues.apache.org/jira/browse/IGNITE-9425
 Project: Ignite
  Issue Type: Bug
Affects Versions: 2.6
Reporter: Dmitry Karachentsev
Assignee: Dmitry Karachentsev
 Fix For: 2.7


Summary:
This issue occurs when two caches are mapped to two different data regions (one
with persistence and one without).
The client-side config should contain the cache definitions.
The server-side config should contain only the data regions.
A custom affinity should be defined on the cache without persistence.
Populate data into both caches, then restart the server and the clients.
The issue appears when the client reconnects.
Workaround:
Add the cache configurations (for the caches without persistence) to the 
server-side config.

Steps to reproduce:
Ignite server
- region1 (with persistence)
- region2 (without persistence)

client
- cache1a from region1 – with custom affinity
- cache2a from region2 – with custom affinity
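
The two-region setup above might look like the following on the server side (a sketch against the Ignite 2.x configuration API; the region names mirror the listing above and are illustrative):

```java
// Sketch of the server-side storage configuration for the reproducer
// (Ignite 2.x API; names mirror the listing above).
import org.apache.ignite.configuration.DataRegionConfiguration;
import org.apache.ignite.configuration.DataStorageConfiguration;
import org.apache.ignite.configuration.IgniteConfiguration;

DataRegionConfiguration region1 = new DataRegionConfiguration()
    .setName("region1")
    .setPersistenceEnabled(true);   // persistent region

DataRegionConfiguration region2 = new DataRegionConfiguration()
    .setName("region2");            // in-memory only

IgniteConfiguration cfg = new IgniteConfiguration()
    .setDataStorageConfiguration(new DataStorageConfiguration()
        .setDataRegionConfigurations(region1, region2));
```

The client config would additionally define cache1a/cache2a with their custom affinity, which the server config deliberately omits in this reproducer.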


1. Populate data in both cache1a and cache2a.
2. Restart the Ignite server. It knows about cache1a from the persistent store,
but it doesn't know about cache2a.
3. Restart the client. When it connects to Ignite, the server hits a NullPointerException:

{noformat}
java.lang.NullPointerException
	at org.apache.ignite.internal.processors.cache.persistence.GridCacheDatabaseSharedManager$11.apply(GridCacheDatabaseSharedManager.java:1243)
	at org.apache.ignite.internal.processors.cache.persistence.GridCacheDatabaseSharedManager$11.apply(GridCacheDatabaseSharedManager.java:1239)
	at org.apache.ignite.internal.util.future.GridFutureAdapter.notifyListener(GridFutureAdapter.java:383)
	at org.apache.ignite.internal.util.future.GridFutureAdapter.listen(GridFutureAdapter.java:353)
	at org.apache.ignite.internal.processors.cache.persistence.GridCacheDatabaseSharedManager.rebuildIndexesIfNeeded(GridCacheDatabaseSharedManager.java:1239)
	at org.apache.ignite.internal.processors.cache.distributed.dht.preloader.GridDhtPartitionsExchangeFuture.onDone(GridDhtPartitionsExchangeFuture.java:1711)
	at org.apache.ignite.internal.processors.cache.distributed.dht.preloader.GridDhtPartitionsExchangeFuture.onDone(GridDhtPartitionsExchangeFuture.java:126)
	at org.apache.ignite.internal.util.future.GridFutureAdapter.onDone(GridFutureAdapter.java:451)
	at org.apache.ignite.internal.processors.cache.distributed.dht.preloader.GridDhtPartitionsExchangeFuture.init(GridDhtPartitionsExchangeFuture.java:729)
	at org.apache.ignite.internal.processors.cache.GridCachePartitionExchangeManager$ExchangeWorker.body0(GridCachePartitionExchangeManager.java:2419)
	at org.apache.ignite.internal.processors.cache.GridCachePartitionExchangeManager$ExchangeWorker.body(GridCachePartitionExchangeManager.java:2299)
	at org.apache.ignite.internal.util.worker.GridWorker.run(GridWorker.java:110)
	at java.lang.Thread.run(Thread.java:748)
{noformat}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (IGNITE-9259) Wrong log message in IgniteKernal#ackMemoryConfiguration

2018-08-13 Thread Dmitry Karachentsev (JIRA)
Dmitry Karachentsev created IGNITE-9259:
---

 Summary: Wrong log message in IgniteKernal#ackMemoryConfiguration
 Key: IGNITE-9259
 URL: https://issues.apache.org/jira/browse/IGNITE-9259
 Project: Ignite
  Issue Type: Bug
  Components: cache
Affects Versions: 2.6
Reporter: Dmitry Karachentsev
 Fix For: 2.7


The message should be changed:
DataStorageConfiguration.systemCacheMemorySize -> 
DataStorageConfiguration.systemRegionInitialSize

Userlist thread: 
http://apache-ignite-users.70518.x6.nabble.com/System-cache-s-DataRegion-size-is-configured-to-40-MB-td23305.html



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (IGNITE-9257) LOCAL cache doesn't evict entries from heap

2018-08-13 Thread Dmitry Karachentsev (JIRA)
Dmitry Karachentsev created IGNITE-9257:
---

 Summary: LOCAL cache doesn't evict entries from heap
 Key: IGNITE-9257
 URL: https://issues.apache.org/jira/browse/IGNITE-9257
 Project: Ignite
  Issue Type: Bug
  Components: cache
Affects Versions: 2.6
Reporter: Dmitry Karachentsev
 Fix For: 2.7


Reproducer 
http://apache-ignite-users.70518.x6.nabble.com/When-using-CacheMode-LOCAL-OOM-td23285.html

This happens because all entries are kept in GridCacheAdapter#map.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (IGNITE-9038) Node join serialization defaults

2018-07-20 Thread Dmitry Karachentsev (JIRA)
Dmitry Karachentsev created IGNITE-9038:
---

 Summary: Node join serialization defaults
 Key: IGNITE-9038
 URL: https://issues.apache.org/jira/browse/IGNITE-9038
 Project: Ignite
  Issue Type: Bug
  Components: cache
Affects Versions: 2.6
Reporter: Dmitry Karachentsev
Assignee: Dmitry Karachentsev
 Fix For: 2.7


The staticallyConfigured flag in CacheJoinNodeDiscoveryData.CacheInfo should be
true by default to keep it consistent with the previous protocol.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


Re: Neighbors exclusion

2018-07-17 Thread Dmitry Karachentsev

Hi Yakov,

I'm not sure it's a good idea, because if we remove that flag, the only
remaining way to get the previous logic would be to implement a backup
filter. And I don't know how that would affect persistence. I think it's
a good candidate for 3.0, as it's not urgent.


Thanks!

On 16.07.2018 21:24, Yakov Zhdanov wrote:

Dmitry, I think we can make this change right away. All we need is to add
a proper error message in cache config validation to tell the user that the
default changed and manual configuration is needed for compatibility.

--Yakov

2018-07-16 15:47 GMT+03:00 Dmitry Karachentsev :


Created a ticket and mapped to 3.0 version, as it changes basic default
behavior:
https://issues.apache.org/jira/browse/IGNITE-9011

Thanks!

On 13.07.2018 22:10, Valentin Kulichenko wrote:

Dmitry,

Good point. I think it makes sense to even remove (deprecate) the
excludeNeighbors property and always distribute primary and backups to
different physical hosts in this scenario. Because why would anyone ever
set this to false if we switch default to true? This also automatically
fixes the confusing behavior of backupFilter - it should never be ignored
if it's set.

-Val

On Fri, Jul 13, 2018 at 8:05 AM Dmitry Karachentsev <dkarachent...@gridgain.com> wrote:

Hi folks,

Why is RendezvousAffinityFunction.excludeNeighbors [1] false by default?
It's not obvious that a user who wants to run more than one node per
machine must also explicitly set this flag to true. Maybe it would be
better to set it to true by default?

At the same time, if excludeNeighbors is true, it ignores backupFilter.
Why not the other way around? For example:

1) if backupFilter is set - it will be used,

2) if there are not enough backup nodes (or no backupFilter) - try to
distribute according to excludeNeighbors = true,

3) if this is not possible either (or excludeNeighbors = false) - assign
partitions as best as possible.

[1]

https://ignite.apache.org/releases/latest/javadoc/org/apache/ignite/cache/affinity/rendezvous/RendezvousAffinityFunction.html#setExcludeNeighbors-boolean-

Are there any drawbacks to such an approach?

Thanks!







Neighbors exclusion

2018-07-13 Thread Dmitry Karachentsev

Hi folks,

Why is RendezvousAffinityFunction.excludeNeighbors [1] false by default?
It's not obvious that a user who wants to run more than one node per
machine must also explicitly set this flag to true. Maybe it would be
better to set it to true by default?


At the same time, if excludeNeighbors is true, it ignores backupFilter.
Why not the other way around? For example:


1) if backupFilter is set - it will be used,

2) if there are not enough backup nodes (or no backupFilter) - try to
distribute according to excludeNeighbors = true,


3) if this is not possible either (or excludeNeighbors = false) - assign
partitions as best as possible.
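
The proposed priority order could be sketched like this (a plain-Java illustration with hypothetical names; this is not the actual RendezvousAffinityFunction code):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;
import java.util.function.BiPredicate;

// Illustration of the proposed fallback order for picking backup nodes.
// All names are hypothetical; this is not the actual affinity function.
final class BackupSelection {
    static List<String> selectBackups(String primary,
        Map<String, String> hostOf,               // node id -> physical host
        List<String> candidates,                  // eligible nodes, primary excluded
        BiPredicate<String, String> backupFilter, // (primary, node) -> allowed, or null
        int backups) {
        List<String> res = new ArrayList<>();

        // 1) If backupFilter is set, it is honored first.
        if (backupFilter != null)
            for (String n : candidates)
                if (res.size() < backups && backupFilter.test(primary, n))
                    res.add(n);

        // 2) Not enough backups (or no filter): prefer nodes on other hosts,
        //    i.e. behave as if excludeNeighbors = true.
        for (String n : candidates)
            if (res.size() < backups && !res.contains(n)
                && !hostOf.get(n).equals(hostOf.get(primary)))
                res.add(n);

        // 3) Still not enough: assign whatever nodes remain.
        for (String n : candidates)
            if (res.size() < backups && !res.contains(n))
                res.add(n);

        return res;
    }
}
```

In this sketch the filter never gets silently ignored, and excludeNeighbors-style placement is only a fallback when the filter cannot satisfy the backup count.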


[1] 
https://ignite.apache.org/releases/latest/javadoc/org/apache/ignite/cache/affinity/rendezvous/RendezvousAffinityFunction.html#setExcludeNeighbors-boolean-


Are there any drawbacks to such an approach?

Thanks!



Re: Stable/experimental releases

2018-07-06 Thread Dmitry Karachentsev

Hi Dmitry,

I think maintenance releases are fine only if they include ONLY
fixes. This would let us release more frequently, so we don't have to
wait to collect some number of fixes before making the next major version.


So each major release could be labeled experimental, and each
maintenance release stable.


Thanks!

On 06.07.2018 16:47, Dmitry Pavlov wrote:

Hi Igniters,

here I would greatly appreciate the vision of community members involved in
releases. Which is simpler to support, 2.7-EA- or 2.7.x?

Taking into account the TeamCity state, it is quite honest to have experimental
releases to underline that
- a lot of new code was introduced
- new features will probably be more stable in a later release.

WDYT?

Sincerely,
Dmitriy Pavlov

Thu, Jul 5, 2018 at 17:53, Vladimir Ozerov:


Hi Dmitriy,

AFAIK we have an idea to introduce maintenance releases for Ignite. E.g.
2.6.0 - features, 2.6.1+ - stabilization.

This seems to be more standard and flexible approach.

Thu, Jul 5, 2018 at 17:39, Dmitry Karachentsev <dkarachent...@gridgain.com>:
Hi igniters!

Following our discussions about emergency releases, I think we could apply
a new way of doing releases, like Linux once did or Ubuntu does: interleaved
releases, where the first is experimental with the newest features and the
second contains bug fixes ONLY.

For example, an odd version number is unstable and an even one is stable: 2.5
introduces a lot of new features, while 2.6 brings more stability to the
product.

Pros:

1. Users always have a choice: cutting-edge technology or a release with
fewer problems.

2. It will be much easier to put more effort into making TC green again, as
fixes are not mixed with features.

3. We can spend more time preparing a stable release and do more rigorous
testing.

4. A stable release may keep 100% compatibility with the previous release (not
always, of course) to make it easier to migrate and take important bug
fixes without introducing new ones.

5. Not all users will hit critical issues; in other words, only some
users will try the unstable release with experimental features.

Cons:

1. The necessity of keeping two branches simultaneously (master and the
stable release) and migrating fixes between them.

2. Fewer users would report issues, as a consequence of pro #5.

3. A somewhat more complex release procedure.

I think this is a common and proven way to create a less buggy product.

What do you think?

Thanks!







[jira] [Created] (IGNITE-8944) TcpDiscoverySpi: set connection check frequency to fixed value

2018-07-05 Thread Dmitry Karachentsev (JIRA)
Dmitry Karachentsev created IGNITE-8944:
---

 Summary: TcpDiscoverySpi: set connection check frequency to fixed 
value
 Key: IGNITE-8944
 URL: https://issues.apache.org/jira/browse/IGNITE-8944
 Project: Ignite
  Issue Type: Improvement
Reporter: Dmitry Karachentsev


Currently, the connCheckFreq value is calculated by some non-obvious code:
{code:java}
private void initConnectionCheckFrequency() {
    if (spi.failureDetectionTimeoutEnabled())
        connCheckThreshold = spi.failureDetectionTimeout();
    else
        connCheckThreshold = Math.min(spi.getSocketTimeout(), spi.metricsUpdateFreq);

    for (int i = 3; i > 0; i--) {
        connCheckFreq = connCheckThreshold / i;

        if (connCheckFreq > 10)
            break;
    }

    assert connCheckFreq > 0;

    if (log.isInfoEnabled())
        log.info("Connection check frequency is calculated: " + connCheckFreq);
}
{code}

It should be replaced with a constant.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


Stable/experimental releases

2018-07-05 Thread Dmitry Karachentsev

Hi igniters!

Following our discussions about emergency releases I see that here might 
be applied new way for doing releases. Like it was for Linux or like it 
is for Ubuntu. I mean do interleaving releases: first is experimental 
with newest features and second - with bug fixes ONLY.


For example, odd version number is unstable and even is stable: 2.5 
introduces a lot of new features, when 2.6 brings more stability to product.


Pros:

1. User always has a choice what to choose: cutting edge technology or 
release that has less problems.


2. It will be much easier to add more effort to make TC green again, as 
fixes are not mixed with features.


3. We may spend more time on prepare stable release and do more rigorous 
testing.


4. Stable release may keep 100% compatibility to previous release (not 
always, of course) to make it easier to migrate and take important bug 
fixes without introducing a new ones.


5. Not all users will fall in critical issues, in other words, only some 
group of users will try to use unstable release with experimental features.


Cons:

1. Necessity of keeping two branches simultaneous: master and stable 
release. Migrate fixes between branches.


2. Less users could report about found issues, as consequence of item #5 
from pros.


3. A bit more complex release procedure???

I think it's common and right way to create a less buggy product.

What do you think?

Thanks!




Re: is these correct?

2018-07-02 Thread Dmitry Karachentsev

Hi,

Please clarify: what do you think is wrong?

If you mean that you do not see hits or misses - metrics are disabled by
default [1].


[1] https://apacheignite.readme.io/v2.0/docs/memory-and-cache-metrics
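
For reference, per-cache metrics can be enabled via the cache configuration (a sketch; the cache name follows the user's example):

```java
// Sketch: enable hit/miss statistics for the cache (they are off by default).
CacheConfiguration<String, String> ccfg = new CacheConfiguration<>("HappyFlow");
ccfg.setStatisticsEnabled(true);
// Then create the cache from this configuration, e.g. ignite.getOrCreateCache(ccfg).
```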

Thanks!
-Dmitry

On 30.06.2018 05:12, kcheng.mvp wrote:

here is the environment: 1 server node + 1 client node

I executed the below code:
===
IgniteCache igniteCache = igniteSpringBean.getOrCreateCache("HappyFlow");
igniteCache.put("hello", "hello1");
Assert.assertEquals("hello1", igniteCache.get("hello"));

for (int i = 0; i < 100; i++) {
    igniteCache.get("hello");
}
===

I got the below screenshot in ignitevisorcmd.sh.
--
Sent from: http://apache-ignite-developers.2346864.n4.nabble.com/




Re: Review IGNITE-8859

2018-06-29 Thread Dmitry Karachentsev

Hi Petr,

Yes, as I said, I tested on the following JDKs: 1.7, 1.8, 9, and 10.
The warning shows the JAVA_HOME directory. If it would be useful, I can add
the detected version.


Thanks!

On 29.06.2018 17:34, Petr Ivanov wrote:

Looks good.

Did you intentionally not mention JDK 10 in the warning/error texts?
Also, have you tested running the built Apache Ignite under JDK 10?



On 29 Jun 2018, at 17:06, Dmitry Karachentsev wrote:

Forgot a link to the ticket https://issues.apache.org/jira/browse/IGNITE-8859

On 29.06.2018 17:06, Dmitry Karachentsev wrote:

Hi guys,

I've enhanced our scripts a bit to allow running Ignite on Java 10+. Please review.

I'm not sure whether tests exist for this, but I tested manually on Windows/Linux
with JDKs 1.7, 1.8, 9, and 10.

Thanks!





Re: Review IGNITE-8859

2018-06-29 Thread Dmitry Karachentsev
Forgot a link to the ticket 
https://issues.apache.org/jira/browse/IGNITE-8859


On 29.06.2018 17:06, Dmitry Karachentsev wrote:

Hi guys,

I've enhanced our scripts a bit to allow running Ignite on Java 10+.
Please review.


I'm not sure whether tests exist for this, but I tested manually on
Windows/Linux with JDKs 1.7, 1.8, 9, and 10.


Thanks!





Review IGNITE-8859

2018-06-29 Thread Dmitry Karachentsev

Hi guys,

I've enhanced our scripts a bit to allow running Ignite on Java 10+. Please
review.


I'm not sure whether tests exist for this, but I tested manually on
Windows/Linux with JDKs 1.7, 1.8, 9, and 10.


Thanks!



[jira] [Created] (IGNITE-8896) Wrong javadoc for TcpCommunicationSpi.setSlowClientQueueLimit()

2018-06-29 Thread Dmitry Karachentsev (JIRA)
Dmitry Karachentsev created IGNITE-8896:
---

 Summary: Wrong javadoc for 
TcpCommunicationSpi.setSlowClientQueueLimit()
 Key: IGNITE-8896
 URL: https://issues.apache.org/jira/browse/IGNITE-8896
 Project: Ignite
  Issue Type: Bug
Affects Versions: 2.5
Reporter: Dmitry Karachentsev
 Fix For: 2.7


The Javadoc for TcpCommunicationSpi.setSlowClientQueueLimit() says that it should
be set to a value equal to TcpCommunicationSpi.getMessageQueueLimit().

But this is wrong, because the back-pressure limit is checked first, and the
sender thread will be blocked before the slow-client check. Also, the warning
checks the value correctly, but the message should say "...greater than or equal...":
{code:java}
if (slowClientQueueLimit > 0 && msgQueueLimit > 0 && slowClientQueueLimit >= msgQueueLimit) {
    U.quietAndWarn(log, "Slow client queue limit is set to a value greater than message queue limit " +
        "(slow client queue limit will have no effect) [msgQueueLimit=" + msgQueueLimit +
        ", slowClientQueueLimit=" + slowClientQueueLimit + ']');
}
{code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (IGNITE-8859) Open upper Java version bounds

2018-06-22 Thread Dmitry Karachentsev (JIRA)
Dmitry Karachentsev created IGNITE-8859:
---

 Summary: Open upper Java version bounds
 Key: IGNITE-8859
 URL: https://issues.apache.org/jira/browse/IGNITE-8859
 Project: Ignite
  Issue Type: Improvement
Reporter: Dmitry Karachentsev
 Fix For: 2.6


The JDK is going to be released twice a year, and it will always be an issue for
users to start Ignite on a newer Java version, as we have explicit bounds.

Ignite should not disallow starting on the latest JDK, but should show a warning
that it wasn't tested against the running version.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (IGNITE-8858) Client node may not stop

2018-06-22 Thread Dmitry Karachentsev (JIRA)
Dmitry Karachentsev created IGNITE-8858:
---

 Summary: Client node may not stop
 Key: IGNITE-8858
 URL: https://issues.apache.org/jira/browse/IGNITE-8858
 Project: Ignite
  Issue Type: Bug
Affects Versions: 2.5
Reporter: Dmitry Karachentsev
Assignee: Dmitry Karachentsev
 Fix For: 2.6


There is a possible case when a client node does not stop and blocks waiting
for the SocketReader to complete. It looks like an interrupt was lost, and the
only place where that could happen is while unmarshalling a message from the
input stream.

The way to fix it is to check whether an InterruptedException is in the cause
chain of the IgniteCheckedException and to repeatedly interrupt the reader on stop.
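
The cause-chain check could be sketched as follows (illustrative helper; the actual patch may rely on Ignite's internal utilities instead):

```java
// Walks the cause chain of a (possibly wrapped) exception looking for an
// InterruptedException, so the stop routine knows the reader lost an interrupt.
final class InterruptCheck {
    static boolean hasInterruptedCause(Throwable t) {
        for (Throwable c = t; c != null; c = c.getCause())
            if (c instanceof InterruptedException)
                return true;
        return false;
    }
}
```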

 
{noformat}
   java.lang.Thread.State: WAITING (on object monitor)
	at java.lang.Object.wait(Native Method)
	at java.lang.Thread.join(Thread.java:1245)
	- locked <0x00041016a140> (a org.apache.ignite.spi.discovery.tcp.ClientImpl$SocketReader)
	at java.lang.Thread.join(Thread.java:1319)
	at org.apache.ignite.internal.util.IgniteUtils.join(IgniteUtils.java:4604)
	at org.apache.ignite.spi.discovery.tcp.ClientImpl.spiStop(ClientImpl.java:315)
	at org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi.spiStop(TcpDiscoverySpi.java:2061)
	at org.apache.ignite.internal.managers.GridManagerAdapter.stopSpi(GridManagerAdapter.java:330)
	at org.apache.ignite.internal.managers.discovery.GridDiscoveryManager.stop(GridDiscoveryManager.java:1608)
	at org.apache.ignite.internal.IgniteKernal.stop0(IgniteKernal.java:2216)
	at org.apache.ignite.internal.IgniteKernal.stop(IgniteKernal.java:2094)
	at org.apache.ignite.internal.IgnitionEx$IgniteNamedInstance.stop0(IgnitionEx.java:2545)
	- locked <0x000410065e80> (a org.apache.ignite.internal.IgnitionEx$IgniteNamedInstance)
	at org.apache.ignite.internal.IgnitionEx$IgniteNamedInstance.stop(IgnitionEx.java:2508)
	at org.apache.ignite.internal.IgnitionEx.stop(IgnitionEx.java:365)
	at org.apache.ignite.Ignition.stop(Ignition.java:229)
	at org.apache.ignite.internal.IgniteKernal.close(IgniteKernal.java:3417)


"tcp-client-disco-sock-reader-#35%Default%" #746 prio=5 os_prio=0 tid=0x7f6090561800 nid=0x3441 in Object.wait() [0x7f60f23d8000]
   java.lang.Thread.State: WAITING (on object monitor)
	at java.lang.Object.wait(Native Method)
	at java.lang.Object.wait(Object.java:502)
	at org.apache.ignite.spi.discovery.tcp.ClientImpl$SocketReader.body(ClientImpl.java:1006)
	- locked <0x00041016a2e0> (a java.lang.Object)
	at org.apache.ignite.spi.IgniteSpiThread.run(IgniteSpiThread.java:62)
{noformat}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (IGNITE-8852) CacheJoinNodeDiscoveryData.CacheInfo has boolean flags and long field for flags

2018-06-22 Thread Dmitry Karachentsev (JIRA)
Dmitry Karachentsev created IGNITE-8852:
---

 Summary: CacheJoinNodeDiscoveryData.CacheInfo has boolean flags 
and long field for flags
 Key: IGNITE-8852
 URL: https://issues.apache.org/jira/browse/IGNITE-8852
 Project: Ignite
  Issue Type: Improvement
Affects Versions: 2.5
Reporter: Dmitry Karachentsev
 Fix For: 3.0


CacheJoinNodeDiscoveryData.CacheInfo has two kinds of flags: booleans stored in
fields and a long bitmask. The latter is not used. One approach should be picked:
either boolean fields or a bitmask, not both.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (IGNITE-8851) StoredCacheData and CacheJoinNodeDiscoveryData.CacheInfo duplicate "sql" flag

2018-06-22 Thread Dmitry Karachentsev (JIRA)
Dmitry Karachentsev created IGNITE-8851:
---

 Summary: StoredCacheData and CacheJoinNodeDiscoveryData.CacheInfo 
duplicate "sql" flag
 Key: IGNITE-8851
 URL: https://issues.apache.org/jira/browse/IGNITE-8851
 Project: Ignite
  Issue Type: Improvement
Affects Versions: 2.5
Reporter: Dmitry Karachentsev
 Fix For: 3.0


When a node joins the cluster, it sends its cache configurations. Each
configuration is wrapped into StoredCacheData, and StoredCacheData into
CacheJoinNodeDiscoveryData.CacheInfo.

Both CacheInfo and StoredCacheData have a "sql" flag that seems to mean the same
thing. It should appear only once.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (IGNITE-8813) Add ComputeJobResultPolicy to JobEvent

2018-06-18 Thread Dmitry Karachentsev (JIRA)
Dmitry Karachentsev created IGNITE-8813:
---

 Summary: Add ComputeJobResultPolicy to JobEvent
 Key: IGNITE-8813
 URL: https://issues.apache.org/jira/browse/IGNITE-8813
 Project: Ignite
  Issue Type: Improvement
  Components: compute
Affects Versions: 2.5
Reporter: Dmitry Karachentsev
Assignee: Dmitry Karachentsev
 Fix For: 2.6


There is no way to determine on the mapper node what action will be performed on
an EVT_JOB_RESULTED event, as the received result could yield different policies:
WAIT, REDUCE, FAILOVER. This knowledge might be critical for a user who relies on
events and, for instance, needs to know whether EVT_JOB_RESULTED is the final
event or the job will be failed over and the event fired again.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


Re: IGNITE-4188, savepoints with atomic cache?

2018-06-14 Thread Dmitry Karachentsev
I would vote for #1 or #3 (which I like more), but not #2, as such a breaking
change causes more pain for our users. Let it be in 3.0 and included in the
migration guide; I don't see many drawbacks here.


Thanks!

On 13.06.2018 19:20, Ivan Rakov wrote:

+1 to Eduard.

It's a reasonable change, but we can't just break the working code of all
the guys who access atomic caches in their transactions. If we try,
we will end up with another emergency release.


Best Regards,
Ivan Rakov

On 13.06.2018 19:13, Eduard Shangareev wrote:

Guys,

I believe, that it's not the case when we should change default 
behaviour.

So, #1 and make it default in 3.0.

On Wed, Jun 13, 2018 at 6:46 PM, Dmitrii Ryabov wrote:

Vote for #2. I think no one will change these defaults in configuration with #1.

2018-06-13 18:29 GMT+03:00 Anton Vinogradov :


Vote for #2, since it can shed light on hidden bugs in production.

Wed, Jun 13, 2018 at 18:10, Alexey Goncharuk <alexey.goncha...@gmail.com>:
Igniters,

Bumping up this discussion. The fix has been implemented and it is fine
from the technical point of view, but since the fix did not make it into
Ignite 2.0, the implemented fix [1] will now be a breaking change for
current Ignite users.

I see the following options:
1) Have the fix merged, but do not change the defaults - atomic caches
will still be allowed in transactions by default, and only a configuration
change will make Ignite throw exceptions in this case
2) Have the fix merged as is and describe this change in the release notes
3) Postpone the fix until Ignite 3.0

I would vote for option #1 and change only the defaults in Ignite 3.0.

Thoughts?

[1] https://issues.apache.org/jira/browse/IGNITE-2313

Wed, Apr 5, 2017 at 22:53, Дмитрий Рябов:


IGNITE-2313 done, can you review it?

PR: https://github.com/apache/ignite/pull/1709/files
JIRA: https://issues.apache.org/jira/browse/IGNITE-2313
CI: http://ci.ignite.apache.org/viewType.html?buildTypeId=IgniteTests_RatJavadoc_IgniteTests=pull%2F1709%2Fhead=buildTypeStatusDiv

2017-03-29 20:58 GMT+03:00 Denis Magda :


Sorry, I got lost in the tickets.

Yes, IGNITE-2313 has to be completed in 2.0 if we want to make this change.

—
Denis


On Mar 29, 2017, at 2:12 AM, Дмитрий Рябов <somefire...@gmail.com> wrote:

Savepoints are marked for 2.1, exceptions for 2.0. Do you want me to make
the exceptions first?

2017-03-29 11:24 GMT+03:00 Дмитрий Рябов:

Finish savepoints or the flag for atomic operations? Not sure about
savepoints. Exceptions - yes.

https://issues.apache.org/jira/browse/IGNITE-2313 isn't it?

2017-03-29 2:12 GMT+03:00 Denis Magda:

If we want to make the exception-based approach the default one, then the
task has to be released in 2.0.

Dmitriy Ryabov, do you think you can finish it (dev, review, QA) by the
code freeze date (April 14)?

—
Denis


On Mar 28, 2017, at 11:57 AM, Dmitriy Setrakyan <dsetrak...@apache.org> wrote:

On Tue, Mar 28, 2017 at 11:54 AM, Sergi Vladykin <sergi.vlady...@gmail.com> wrote:

I think updating an atomic cache from within a transaction perfectly makes
sense, for example for some kinds of operation logging and so forth. Still,
I agree that this can be error prone and forbidden by default. I agree with
Yakov that by default we should throw an exception and have some kind of
flag (on cache or on TX?) to be able to explicitly enable this behavior.

Agree, this sounds like a good idea.










[jira] [Created] (IGNITE-8500) [Cassandra] Allow dynamic cache creation with Cassandra Store

2018-05-15 Thread Dmitry Karachentsev (JIRA)
Dmitry Karachentsev created IGNITE-8500:
---

 Summary: [Cassandra] Allow dynamic cache creation with Cassandra 
Store
 Key: IGNITE-8500
 URL: https://issues.apache.org/jira/browse/IGNITE-8500
 Project: Ignite
  Issue Type: Improvement
  Components: cassandra
Affects Versions: 2.5
Reporter: Dmitry Karachentsev
 Fix For: 2.6


It is currently not possible to dynamically create caches with the Cassandra cache store:

{noformat}
class org.apache.ignite.IgniteCheckedException: null
	at org.apache.ignite.internal.util.IgniteUtils.cast(IgniteUtils.java:7307)
	at org.apache.ignite.internal.util.future.GridFutureAdapter.resolve(GridFutureAdapter.java:259)
	at org.apache.ignite.internal.util.future.GridFutureAdapter.get0(GridFutureAdapter.java:207)
	at org.apache.ignite.internal.util.future.GridFutureAdapter.get(GridFutureAdapter.java:159)
	at org.apache.ignite.internal.util.future.GridFutureAdapter.get(GridFutureAdapter.java:151)
	at org.apache.ignite.internal.processors.cache.GridCachePartitionExchangeManager$ExchangeWorker.body0(GridCachePartitionExchangeManager.java:2433)
	at org.apache.ignite.internal.processors.cache.GridCachePartitionExchangeManager$ExchangeWorker.body(GridCachePartitionExchangeManager.java:2299)
	at org.apache.ignite.internal.util.worker.GridWorker.run(GridWorker.java:110)
	at java.lang.Thread.run(Thread.java:748)
Caused by: java.lang.NullPointerException
	at org.apache.ignite.cache.store.cassandra.persistence.PojoField.calculatedField(PojoField.java:155)
	at org.apache.ignite.cache.store.cassandra.persistence.PersistenceController.prepareLoadStatements(PersistenceController.java:313)
	at org.apache.ignite.cache.store.cassandra.persistence.PersistenceController.(PersistenceController.java:85)
	at org.apache.ignite.cache.store.cassandra.CassandraCacheStore.(CassandraCacheStore.java:106)
	at org.apache.ignite.cache.store.cassandra.CassandraCacheStoreFactory.create(CassandraCacheStoreFactory.java:59)
	at org.apache.ignite.cache.store.cassandra.CassandraCacheStoreFactory.create(CassandraCacheStoreFactory.java:34)
	at org.apache.ignite.internal.processors.cache.GridCacheProcessor.createCache(GridCacheProcessor.java:1437)
	at org.apache.ignite.internal.processors.cache.GridCacheProcessor.prepareCacheStart(GridCacheProcessor.java:1945)
	at org.apache.ignite.internal.processors.cache.GridCacheProcessor.startReceivedCaches(GridCacheProcessor.java:1864)
	at org.apache.ignite.internal.processors.cache.distributed.dht.preloader.GridDhtPartitionsExchangeFuture.init(GridDhtPartitionsExchangeFuture.java:674)
	at org.apache.ignite.internal.processors.cache.GridCachePartitionExchangeManager$ExchangeWorker.body0(GridCachePartitionExchangeManager.java:2419)
	... 3 more
{noformat}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (IGNITE-7673) Remove using of Collections.UnmodifiableCollection from GridTaskSessionImpl

2018-02-12 Thread Dmitry Karachentsev (JIRA)
Dmitry Karachentsev created IGNITE-7673:
---

 Summary: Remove using of Collections.UnmodifiableCollection from 
GridTaskSessionImpl
 Key: IGNITE-7673
 URL: https://issues.apache.org/jira/browse/IGNITE-7673
 Project: Ignite
  Issue Type: Improvement
  Components: compute
Affects Versions: 2.3
Reporter: Dmitry Karachentsev
Assignee: Dmitry Karachentsev
 Fix For: 2.5


GridTaskWorker marshals GridTaskSessionImpl.siblings, which is an
UnmodifiableCollection; this forces the use of OptimizedMarshaller, which may
have a performance impact. It should be replaced with the original collection.
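
A hedged sketch of the idea (illustrative helper, not the actual Ignite patch): copy the siblings out of the unmodifiable wrapper before marshalling, so the marshaller sees a plain collection class instead of the JDK-private wrapper type.

```java
import java.util.ArrayList;
import java.util.Collection;

// Illustrative helper: unwrap java.util.Collections wrappers into a plain
// ArrayList before marshalling, avoiding the wrapper-class special casing
// that forces the fallback marshaller.
final class SiblingsCopy {
    static <T> Collection<T> marshallable(Collection<T> siblings) {
        return (siblings instanceof ArrayList) ? siblings : new ArrayList<>(siblings);
    }
}
```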



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


Re: Text query with persistence

2018-02-07 Thread Dmitry Karachentsev

Thanks, Alexey, I'll take a look at it!

On 07.02.2018 11:48, Alexey Goncharuk wrote:

Hi Dmitriy,

We do have a ticket for this [1], however, I am not sure it will be
completed any time soon.

As for rebuilding indexes at node startup, we already have an
internal routine for this (see GridQueryIndexing#rebuildIndexesFromHash).
The only thing we need to implement is skipping regular index updates
during the full-text index rebuild, and maybe providing an API so that a user
can trigger this.

--Alexey

[1] https://issues.apache.org/jira/browse/IGNITE-5371

2018-02-07 11:30 GMT+03:00 Dmitry Karachentsev <dkarachent...@gridgain.com>:


Hi guys,

We have an Apache Lucene based full-text search that creates in-memory
indexes. It's a nice feature, but it seems absolutely useless with
persistence, as the indexes are dropped on node restart.

Probably we need a configuration flag and/or an API call to rebuild them, or
to persist them somehow.

Do we have plans/a ticket for that? Or maybe some tricky way to recreate
the indexes?

Thanks!

-Dmitry






Text query with persistence

2018-02-07 Thread Dmitry Karachentsev

Hi guys,

We have an Apache Lucene based full-text search that creates in-memory
indexes. It's a nice feature, but it seems absolutely useless with
persistence, as the indexes are dropped on node restart.


Probably we need a configuration flag and/or an API call to rebuild them,
or to persist them somehow.


Do we have plans/a ticket for that? Or maybe some tricky way to recreate
the indexes?


Thanks!

-Dmitry



Re: LOCAL cache on client

2018-02-02 Thread Dmitry Karachentsev
Sounds pretty reasonable to me, so the fix will just entail a proper
exception.
Maybe it's worth showing a warning saying that no data region is
configured and so it's not possible to create a LOCAL cache?


Thanks!

On 02.02.2018 00:08, Valentin Kulichenko wrote:

I meant "they should *explicitly* provide data region configuration", of
course.

-Val

On Thu, Feb 1, 2018 at 10:58 AM, Valentin Kulichenko <
valentin.kuliche...@gmail.com> wrote:


Agree with Mike. I don't think it's a good idea to implicitly create data
regions on a client node. If one wants to have a local cache there, they should
implicitly provide the data region configuration (default or otherwise). If the
data region is not configured, then we should throw a proper exception
instead of an NPE.

-Val

On Thu, Feb 1, 2018 at 10:54 AM, Michael Cherkasov <
michael.cherka...@gmail.com> wrote:


Hi Dmitry,

I think we should make a user explicitly create a data region on a
client.

Client nodes aren't supposed to be used for storing data, so if someone
wants to use a local cache on a client node, let's make him/her create the data
region explicitly, just to make sure that the user knows what he/she is doing.

Thanks,
Mike.


2018-02-01 7:36 GMT-08:00 Dmitry Karachentsev <dkarachent...@gridgain.com>:
Hello everyone!

We have an erroneous use case when a client tries to create a LOCAL cache,
but by default it does not initiate the default data region, so the client gets
an NPE. [1]

I think there should be lazy data region initialization on the client. Do you
have any concerns about this approach, or other proposals?

[1] https://issues.apache.org/jira/browse/IGNITE-7572

Thanks!








LOCAL cache on client

2018-02-01 Thread Dmitry Karachentsev

Hello everyone!

We have an erroneous use case when a client tries to create a LOCAL cache, 
but by default it does not initiate the default data region, so the client 
gets an NPE. [1]


I think there should be lazy data region initialization on the client. Do you 
have any concerns about this approach, or other proposals?


[1] https://issues.apache.org/jira/browse/IGNITE-7572

Thanks!
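A minimal sketch of the alternative discussed in the replies: resolve the data region up front and fail with a descriptive exception instead of an NPE. The type and messages here are illustrative, not actual Ignite internals.

```java
import java.util.Set;

// Sketch: look up the cache's data region before cache start and fail loudly
// if it is missing, instead of hitting an NPE deeper in the start path.
public class DataRegionResolver {
    static final String DFLT_REGION = "default";

    public static String resolve(String requestedRegion, Set<String> configuredRegions) {
        String name = requestedRegion != null ? requestedRegion : DFLT_REGION;
        if (!configuredRegions.contains(name))
            throw new IllegalStateException("Data region '" + name
                + "' is not configured on this node; configure it explicitly "
                + "before creating a LOCAL cache");
        return name;
    }
}
```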



[jira] [Created] (IGNITE-7340) Fix flaky org.apache.ignite.internal.processors.service.GridServiceProcessorMultiNodeConfigSelfTest#checkDeployOnEachNodeUpdateTopology

2017-12-29 Thread Dmitry Karachentsev (JIRA)
Dmitry Karachentsev created IGNITE-7340:
---

 Summary: Fix flaky 
org.apache.ignite.internal.processors.service.GridServiceProcessorMultiNodeConfigSelfTest#checkDeployOnEachNodeUpdateTopology
 Key: IGNITE-7340
 URL: https://issues.apache.org/jira/browse/IGNITE-7340
 Project: Ignite
  Issue Type: Bug
Affects Versions: 2.3
Reporter: Dmitry Karachentsev
Assignee: Dmitry Karachentsev






--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


Re: List of running Continuous queries or CacheEntryListener per cache or node

2017-12-26 Thread Dmitry Karachentsev

Hi Nikolay,

I think it may be useful too. Will try to describe possible API in a ticket.

Thanks!
-Dmitry

On 21.12.2017 13:18, Nikolay Izhikov wrote:

Hello, Dmitry.

I think it's a great idea.

Do we have a feature to list all running ComputeTasks?

I, personally, think we have to implement the possibility to track all
user-provided tasks - CacheListener, ContinuousQuery, ComputeTasks,
etc.

On Thu, 21/12/2017 at 10:13 +0300, Dmitry Karachentsev wrote:

Crossposting to devlist.

Hi Igniters!

It might be a nice feature to have: get a list of registered continuous
queries with the ability to deregister them.

What do you think?

Thanks!
-Dmitry

On 20.12.2017 16:59, fefe wrote:

For sanity checks or tests. I want to be sure that I haven't forgotten to
deregister any listener.

It's also a very important metric to see how many continuous
queries/listeners are currently running.



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/






Re: List of running Continuous queries or CacheEntryListener per cache or node

2017-12-20 Thread Dmitry Karachentsev

Crossposting to devlist.

Hi Igniters!

It might be a nice feature to have: get a list of registered continuous 
queries with the ability to deregister them.


What do you think?

Thanks!
-Dmitry

On 20.12.2017 16:59, fefe wrote:

For sanity checks or tests. I want to be sure that I haven't forgotten to
deregister any listener.

It's also a very important metric to see how many continuous queries/listeners
are currently running.



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/




[jira] [Created] (IGNITE-7239) In case of not serializable cache update response, future on node requester will never complete

2017-12-19 Thread Dmitry Karachentsev (JIRA)
Dmitry Karachentsev created IGNITE-7239:
---

 Summary: In case of not serializable cache update response, future 
on node requester will never complete
 Key: IGNITE-7239
 URL: https://issues.apache.org/jira/browse/IGNITE-7239
 Project: Ignite
  Issue Type: Bug
  Components: cache
Affects Versions: 2.3
Reporter: Dmitry Karachentsev
Priority: Minor


To avoid such a hang, if the response cannot be serialized, create another 
response carrying the text of the error instead.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Created] (IGNITE-7206) Nodes may ping each other on stop for a failureDetectionTimeout period

2017-12-14 Thread Dmitry Karachentsev (JIRA)
Dmitry Karachentsev created IGNITE-7206:
---

 Summary: Nodes may ping each other on stop for a 
failureDetectionTimeout period
 Key: IGNITE-7206
 URL: https://issues.apache.org/jira/browse/IGNITE-7206
 Project: Ignite
  Issue Type: Bug
Affects Versions: 2.3
Reporter: Dmitry Karachentsev
Assignee: Dmitry Karachentsev
 Fix For: 2.4


On communication connection breakage TcpCommunicationSpi may try to reestablish 
the connection and, if that fails, try to ping the remote node. But if the 
remote node is stopping, it refuses incoming ping connections.

When a few nodes are stopping concurrently, they may end up pinging each other 
while each refuses any incoming pings. This lasts for failureDetectionTimeout 
or reconnectCount, which is checked in the ServerImpl.pingNode() 
method.

pingNode() should check if local node is stopping and interrupt pings.
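The proposed guard can be as simple as a stop flag checked before entering the ping loop. This is a sketch with made-up names, not the actual ServerImpl code.

```java
// Sketch: pingNode() bails out immediately when the local node is stopping,
// instead of retrying the remote node until failureDetectionTimeout expires.
public class PingGuard {
    private volatile boolean stopping;

    /** Called from the node stop routine. */
    public void onStop() { stopping = true; }

    /** Returns false right away if the local node is shutting down. */
    public boolean pingNode(String remoteNodeId) {
        if (stopping)
            return false; // interrupt pings: no point proving liveness while stopping
        // ... real code would open a connection and await a ping response ...
        return true;
    }
}
```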



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Created] (IGNITE-7181) Fix IGNITE-7086 doesn't work with PESSIMISTIC REPEATABLE_READ transaction

2017-12-13 Thread Dmitry Karachentsev (JIRA)
Dmitry Karachentsev created IGNITE-7181:
---

 Summary: Fix IGNITE-7086 doesn't work with PESSIMISTIC 
REPEATABLE_READ transaction
 Key: IGNITE-7181
 URL: https://issues.apache.org/jira/browse/IGNITE-7181
 Project: Ignite
  Issue Type: Bug
Reporter: Dmitry Karachentsev
Assignee: Dmitry Karachentsev






--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Created] (IGNITE-7159) Fix IGNITE-7086 fails IgniteCacheAtomicExpiryPolicyWithStoreTest.testGetReadThrough

2017-12-11 Thread Dmitry Karachentsev (JIRA)
Dmitry Karachentsev created IGNITE-7159:
---

 Summary: Fix IGNITE-7086 fails 
IgniteCacheAtomicExpiryPolicyWithStoreTest.testGetReadThrough
 Key: IGNITE-7159
 URL: https://issues.apache.org/jira/browse/IGNITE-7159
 Project: Ignite
  Issue Type: Bug
Reporter: Dmitry Karachentsev






--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Created] (IGNITE-6818) In case of incoming communication connection ping the old one if it's alive

2017-11-02 Thread Dmitry Karachentsev (JIRA)
Dmitry Karachentsev created IGNITE-6818:
---

 Summary: In case of incoming communication connection ping the old 
one if it's alive
 Key: IGNITE-6818
 URL: https://issues.apache.org/jira/browse/IGNITE-6818
 Project: Ignite
  Issue Type: Bug
  Security Level: Public (Viewable by anyone)
Affects Versions: 2.3
Reporter: Dmitry Karachentsev
Assignee: Dmitry Karachentsev
Priority: Critical
 Fix For: 2.4


Assume the following scenario:
1. Client opens connection to the server.
2. Server checks that it is a first connection to that node and accepts it.
3. For some reason the firewall starts rejecting client messages with the TCP 
reset flag set.
4. The client closes the connection, but the server doesn't know about it.
5. The client tries to connect again.
6. The server rejects the new connection, because it already has a connection 
to that node.

Possible fix: at step 6 the server must check whether the old connection is 
alive by sending a communication message and checking the response. If the old 
connection is dead, close it and accept the new one.
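The proposed fix for step 6 can be sketched as follows. The names are illustrative; the real logic would live in TcpCommunicationSpi's connection handshake.

```java
import java.util.HashMap;
import java.util.Map;

// Sketch: on a duplicate incoming connection, probe the old one first and
// replace it only if the probe fails.
public class ConnectionRegistry {
    public interface Connection {
        boolean ping();   // send a liveness message and await a response
        void close();
    }

    private final Map<String, Connection> conns = new HashMap<>();

    /** Returns true if the incoming connection was accepted. */
    public boolean accept(String nodeId, Connection incoming) {
        Connection old = conns.get(nodeId);
        if (old != null) {
            if (old.ping())
                return false;   // old connection still alive: reject the newcomer
            old.close();        // old connection dead: drop it and take the new one
        }
        conns.put(nodeId, incoming);
        return true;
    }
}
```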



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Created] (IGNITE-6671) [Web Console] Wrong java type used in generated config from DB schema

2017-10-19 Thread Dmitry Karachentsev (JIRA)
Dmitry Karachentsev created IGNITE-6671:
---

 Summary: [Web Console] Wrong java type used in generated config 
from DB schema
 Key: IGNITE-6671
 URL: https://issues.apache.org/jira/browse/IGNITE-6671
 Project: Ignite
  Issue Type: Bug
  Security Level: Public (Viewable by anyone)
Affects Versions: 2.2
Reporter: Dmitry Karachentsev
 Fix For: 2.3


We should be confident that the Java types in a generated config can hold all 
values coming from the DB. For example, WC generates short for Oracle's 
NUMBER(5), but short tops out at 32767, while NUMBER(5) can hold values up to 
99999.

That may produce errors like below during DB import:
{noformat}
 Exception in thread "main" javax.cache.integration.CacheLoaderException: 
Failed to load cache: test
   at 
org.apache.ignite.cache.store.jdbc.CacheAbstractJdbcStore.loadCache(CacheAbstractJdbcStore.java:798)
   at 
org.apache.ignite.internal.processors.cache.store.GridCacheStoreManagerAdapter.loadCache(GridCacheStoreManagerAdapter.java:502)
   at 
org.apache.ignite.internal.processors.cache.distributed.dht.GridDhtCacheAdapter.localLoadCache(GridDhtCacheAdapter.java:486)
   at 
org.apache.ignite.internal.processors.cache.GridCacheProxyImpl.localLoadCache(GridCacheProxyImpl.java:217)
   at 
org.apache.ignite.internal.processors.cache.GridCacheAdapter$LoadCacheJob.localExecute(GridCacheAdapter.java:5439)
   at 
org.apache.ignite.internal.processors.cache.GridCacheAdapter$LoadCacheJobV2.localExecute(GridCacheAdapter.java:5488)
   at 
org.apache.ignite.internal.processors.cache.GridCacheAdapter$TopologyVersionAwareJob.execute(GridCacheAdapter.java:6103)
   at 
org.apache.ignite.compute.ComputeJobAdapter.call(ComputeJobAdapter.java:132)
   at 
org.apache.ignite.internal.processors.closure.GridClosureProcessor$C2.execute(GridClosureProcessor.java:1842)
   at 
org.apache.ignite.internal.processors.job.GridJobWorker$2.call(GridJobWorker.java:566)
   at 
org.apache.ignite.internal.util.IgniteUtils.wrapThreadLoader(IgniteUtils.java:6621)
   at 
org.apache.ignite.internal.processors.job.GridJobWorker.execute0(GridJobWorker.java:560)
   at 
org.apache.ignite.internal.processors.job.GridJobWorker.body(GridJobWorker.java:489)
   at 
org.apache.ignite.internal.util.worker.GridWorker.run(GridWorker.java:110)
   at 
org.apache.ignite.internal.processors.job.GridJobProcessor.processJobExecuteRequest(GridJobProcessor.java:1114)
   at 
org.apache.ignite.internal.processors.job.GridJobProcessor$JobExecutionListener.onMessage(GridJobProcessor.java:1907)
   at 
org.apache.ignite.internal.managers.communication.GridIoManager.invokeListener(GridIoManager.java:1257)
   at 
org.apache.ignite.internal.managers.communication.GridIoManager.processRegularMessage0(GridIoManager.java:885)
   at 
org.apache.ignite.internal.managers.communication.GridIoManager.access$2100(GridIoManager.java:114)
   at 
org.apache.ignite.internal.managers.communication.GridIoManager$7.run(GridIoManager.java:802)
   at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
   at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
   at java.lang.Thread.run(Thread.java:745)
Caused by: javax.cache.CacheException: Failed to read binary object: 
org.apache.TestModel
   at 
org.apache.ignite.cache.store.jdbc.CacheJdbcPojoStore.buildBinaryObject(CacheJdbcPojoStore.java:255)
   at 
org.apache.ignite.cache.store.jdbc.CacheJdbcPojoStore.buildObject(CacheJdbcPojoStore.java:136)
   at 
org.apache.ignite.cache.store.jdbc.CacheAbstractJdbcStore$1.call(CacheAbstractJdbcStore.java:463)
   at 
org.apache.ignite.cache.store.jdbc.CacheAbstractJdbcStore$1.call(CacheAbstractJdbcStore.java:430)
   at java.util.concurrent.FutureTask.run(FutureTask.java:266)
   ... 3 more
Caused by: java.sql.SQLException: Numeric Overflow
   at 
oracle.jdbc.driver.NumberCommonAccessor.throwOverflow(NumberCommonAccessor.java:4170)
   at 
oracle.jdbc.driver.NumberCommonAccessor.getShort(NumberCommonAccessor.java:311)
   at 
oracle.jdbc.driver.GeneratedStatement.getShort(GeneratedStatement.java:305)
   at 
oracle.jdbc.driver.GeneratedScrollableResultSet.getShort(GeneratedScrollableResultSet.java:879)
   at 
org.apache.ignite.cache.store.jdbc.JdbcTypesDefaultTransformer.getColumnValue(JdbcTypesDefaultTransformer.java:84)
   at 
org.apache.ignite.cache.store.jdbc.CacheJdbcPojoStore.buildBinaryObject(CacheJdbcPojoStore.java:247)
   ... 7 more
{noformat}

*This should be checked for all supported databases.*
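The underlying arithmetic is easy to check: a NUMBER(p) column holds up to p decimal digits, so the generated Java type must cover 10^p - 1. The helper below is a hedged sketch of a precision-driven choice; the class and method names are made up, not Web Console code.

```java
// Picks the smallest Java integral type that covers every value of an
// Oracle NUMBER(precision) column. NUMBER(5) reaches 99999 > Short.MAX_VALUE,
// so it must map to int, not short.
public class OracleNumberMapping {
    public static String javaTypeFor(int precision) {
        long max = (long) Math.pow(10, precision) - 1; // largest p-digit value
        if (max <= Short.MAX_VALUE) return "short";
        if (max <= Integer.MAX_VALUE) return "int";
        return "long";
    }
}
```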



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Created] (IGNITE-6487) Affinity may return more backups than set during exchange

2017-09-22 Thread Dmitry Karachentsev (JIRA)
Dmitry Karachentsev created IGNITE-6487:
---

 Summary: Affinity may return more backups than set during exchange
 Key: IGNITE-6487
 URL: https://issues.apache.org/jira/browse/IGNITE-6487
 Project: Ignite
  Issue Type: Bug
  Components: cache
Affects Versions: 2.1
Reporter: Dmitry Karachentsev


Reproducer is in attachment.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Created] (IGNITE-6431) Web Console: "Partition loss policy" field is duplicated

2017-09-19 Thread Dmitry Karachentsev (JIRA)
Dmitry Karachentsev created IGNITE-6431:
---

 Summary: Web Console: "Partition loss policy" field is duplicated
 Key: IGNITE-6431
 URL: https://issues.apache.org/jira/browse/IGNITE-6431
 Project: Ignite
  Issue Type: Bug
  Components: UI
Affects Versions: 2.1
Reporter: Dmitry Karachentsev
Assignee: Vasiliy Sisko






--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Created] (IGNITE-6166) Exceptions are printed to console, instead of log

2017-08-23 Thread Dmitry Karachentsev (JIRA)
Dmitry Karachentsev created IGNITE-6166:
---

 Summary: Exceptions are printed to console, instead of log
 Key: IGNITE-6166
 URL: https://issues.apache.org/jira/browse/IGNITE-6166
 Project: Ignite
  Issue Type: Bug
  Components: igfs
Affects Versions: 2.1
Reporter: Dmitry Karachentsev
Priority: Trivial
 Fix For: 2.2


IgniteSourceTask.TaskLocalListener.apply()
IgfsThread.run()

Exceptions should be logged.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Created] (IGNITE-6165) Use SSL connection in IpcClientTcpEndpoint when it's configured for grid

2017-08-23 Thread Dmitry Karachentsev (JIRA)
Dmitry Karachentsev created IGNITE-6165:
---

 Summary: Use SSL connection in IpcClientTcpEndpoint when it's 
configured for grid
 Key: IGNITE-6165
 URL: https://issues.apache.org/jira/browse/IGNITE-6165
 Project: Ignite
  Issue Type: Improvement
  Components: igfs
Affects Versions: 2.1
Reporter: Dmitry Karachentsev
 Fix For: 2.2


HadoopIgfsIpcIo always opens an insecure connection in its start() method, even 
if SSL is configured (it should behave like TcpDiscoverySpi.createSocket()); 
IpcServerTcpEndpoint must be protected as well.

{code}
if (isSslEnabled())
    sock = sslSockFactory.createSocket();
else
    sock = new Socket();
{code}




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Created] (IGNITE-5795) @AffinityKeyMapped ignored if QueryEntity used

2017-07-20 Thread Dmitry Karachentsev (JIRA)
Dmitry Karachentsev created IGNITE-5795:
---

 Summary: @AffinityKeyMapped ignored if QueryEntity used
 Key: IGNITE-5795
 URL: https://issues.apache.org/jira/browse/IGNITE-5795
 Project: Ignite
  Issue Type: Bug
Affects Versions: 2.0
Reporter: Dmitry Karachentsev
 Fix For: 2.2


When a cache is configured with QueryEntity and the key type has an 
@AffinityKeyMapped field, the field is ignored and the wrong partition is 
calculated. This happens because QueryEntity processing precedes key type 
registration in the binary metadata cache. At that step 
CacheObjectBinaryProcessorImpl#affinityKeyField is called and cannot resolve 
the type, so null is returned and null is put into affKeyFields.

On the next put/get operation CacheObjectBinaryProcessorImpl#affinityKeyField 
will return null from affKeyFields, when it should return the affinity key field.
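One way to see the caching pitfall: the failed resolution is stored as null, so later lookups never retry. The sketch below shows the retry-on-null behavior with illustrative names; it is not the actual CacheObjectBinaryProcessorImpl code.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Sketch: with computeIfAbsent on a ConcurrentHashMap a null mapping result
// is simply not stored, so resolution is retried once the type's binary
// metadata has been registered - instead of being stuck on a cached null.
public class AffinityFieldCache {
    private final Map<Integer, String> affKeyFields = new ConcurrentHashMap<>();
    private final Map<Integer, String> registeredMeta = new ConcurrentHashMap<>();

    public void registerType(int typeId, String affField) {
        registeredMeta.put(typeId, affField);
    }

    public String affinityKeyField(int typeId) {
        // a mapper returning null leaves no cached entry behind
        return affKeyFields.computeIfAbsent(typeId, registeredMeta::get);
    }
}
```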



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Created] (IGNITE-5768) Retry resolving class name from marshaller cache and .classname file

2017-07-17 Thread Dmitry Karachentsev (JIRA)
Dmitry Karachentsev created IGNITE-5768:
---

 Summary: Retry resolving class name from marshaller cache and 
.classname file
 Key: IGNITE-5768
 URL: https://issues.apache.org/jira/browse/IGNITE-5768
 Project: Ignite
  Issue Type: Bug
  Components: cache
Reporter: Dmitry Karachentsev
Assignee: Dmitry Karachentsev






--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Created] (IGNITE-5570) IgniteMessaging should have sendAsync() method

2017-06-21 Thread Dmitry Karachentsev (JIRA)
Dmitry Karachentsev created IGNITE-5570:
---

 Summary: IgniteMessaging should have sendAsync() method
 Key: IGNITE-5570
 URL: https://issues.apache.org/jira/browse/IGNITE-5570
 Project: Ignite
  Issue Type: Improvement
  Components: messaging
Affects Versions: 2.1
Reporter: Dmitry Karachentsev
 Fix For: 2.2


With the old API, IgniteMessaging.send() checks async mode and, if it's 
enabled, invokes the local listener asynchronously in the public pool. Support 
for async listener invocation should be added to IgniteMessaging, e.g.:

IgniteFuture sendAsync()
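A minimal model of what sendAsync() would do: hand the message to the public pool and return a future. All names here are illustrative, not the actual IgniteMessaging API.

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.function.BiConsumer;

// Sketch: the local listener is invoked asynchronously in a shared pool and
// the caller gets a future that completes when the listener has run.
public class Messaging {
    private final ExecutorService publicPool = Executors.newFixedThreadPool(2);
    private final BiConsumer<String, Object> localListener;

    public Messaging(BiConsumer<String, Object> listener) {
        localListener = listener;
    }

    public CompletableFuture<Void> sendAsync(String topic, Object msg) {
        return CompletableFuture.runAsync(() -> localListener.accept(topic, msg), publicPool);
    }

    public void shutdown() { publicPool.shutdown(); }
}
```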



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Created] (IGNITE-5424) GridServiceProxy does not unwraps exception message from InvocationTargetException

2017-06-06 Thread Dmitry Karachentsev (JIRA)
Dmitry Karachentsev created IGNITE-5424:
---

 Summary: GridServiceProxy does not unwraps exception message from 
InvocationTargetException
 Key: IGNITE-5424
 URL: https://issues.apache.org/jira/browse/IGNITE-5424
 Project: Ignite
  Issue Type: Bug
  Components: managed services
Affects Versions: 2.1
Reporter: Dmitry Karachentsev
Assignee: Dmitry Karachentsev
 Fix For: 2.2


Instead of the correct message, 'null' is passed.

{noformat}
class org.apache.ignite.IgniteException: null
at 
org.apache.ignite.internal.util.IgniteUtils.convertException(IgniteUtils.java:957)
at 
org.apache.ignite.internal.processors.service.GridServiceProxy.invokeMethod(GridServiceProxy.java:208)
at 
org.apache.ignite.internal.processors.service.GridServiceProxy$ProxyInvocationHandler.invoke(GridServiceProxy.java:356)
at org.apache.ignite.internal.processors.service.$Proxy25.go(Unknown 
Source)
at 
org.apache.ignite.internal.processors.service.GridServiceProcessorProxySelfTest.testException(GridServiceProcessorProxySelfTest.java:78)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at junit.framework.TestCase.runTest(TestCase.java:176)
at 
org.apache.ignite.testframework.junits.GridAbstractTest.runTestInternal(GridAbstractTest.java:1979)
at 
org.apache.ignite.testframework.junits.GridAbstractTest.access$000(GridAbstractTest.java:130)
at 
org.apache.ignite.testframework.junits.GridAbstractTest$5.run(GridAbstractTest.java:1894)
at java.lang.Thread.run(Thread.java:745)
Caused by: class org.apache.ignite.IgniteCheckedException: null
at 
org.apache.ignite.internal.util.IgniteUtils.cast(IgniteUtils.java:7262)
at 
org.apache.ignite.internal.util.future.GridFutureAdapter.resolve(GridFutureAdapter.java:258)
at 
org.apache.ignite.internal.util.future.GridFutureAdapter.get0(GridFutureAdapter.java:189)
at 
org.apache.ignite.internal.util.future.GridFutureAdapter.get(GridFutureAdapter.java:139)
at 
org.apache.ignite.internal.processors.service.GridServiceProxy.invokeMethod(GridServiceProxy.java:197)
... 12 more
Caused by: java.lang.reflect.InvocationTargetException
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
org.apache.ignite.internal.processors.service.GridServiceProxy$ServiceProxyCallable.call(GridServiceProxy.java:417)
at 
org.apache.ignite.internal.processors.closure.GridClosureProcessor$C2.execute(GridClosureProcessor.java:1847)
at 
org.apache.ignite.internal.processors.job.GridJobWorker$2.call(GridJobWorker.java:566)
at 
org.apache.ignite.internal.util.IgniteUtils.wrapThreadLoader(IgniteUtils.java:6641)
at 
org.apache.ignite.internal.processors.job.GridJobWorker.execute0(GridJobWorker.java:560)
at 
org.apache.ignite.internal.processors.job.GridJobWorker.body(GridJobWorker.java:489)
at 
org.apache.ignite.internal.util.worker.GridWorker.run(GridWorker.java:110)
at 
org.apache.ignite.internal.processors.job.GridJobProcessor.processJobExecuteRequest(GridJobProcessor.java:1180)
at 
org.apache.ignite.internal.processors.job.GridJobProcessor$JobExecutionListener.onMessage(GridJobProcessor.java:1907)
at 
org.apache.ignite.internal.managers.communication.GridIoManager.invokeListener(GridIoManager.java:1550)
at 
org.apache.ignite.internal.managers.communication.GridIoManager.processRegularMessage0(GridIoManager.java:1178)
at 
org.apache.ignite.internal.managers.communication.GridIoManager.access$4200(GridIoManager.java:124)
at 
org.apache.ignite.internal.managers.communication.GridIoManager$9.run(GridIoManager.java:1095)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
... 1 more
Caused by: java.lang.Exception: Test exception
at 
org.apache.ignite.internal.processors.service.GridServiceProcessorAbstractSelfTest$ErrorServiceImpl.go(GridServiceProcessorAbstractSelfTest.java:823)
... 20 more
{noformat}



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Created] (IGNITE-5342) Skip permission check for TASK_EXECUTE for service jobs

2017-05-30 Thread Dmitry Karachentsev (JIRA)
Dmitry Karachentsev created IGNITE-5342:
---

 Summary: Skip permission check for TASK_EXECUTE for service jobs
 Key: IGNITE-5342
 URL: https://issues.apache.org/jira/browse/IGNITE-5342
 Project: Ignite
  Issue Type: Bug
Affects Versions: 2.1
Reporter: Dmitry Karachentsev
Assignee: Dmitry Karachentsev
 Fix For: 2.1


Services work as compute tasks, so when SERVICE_INVOKE is granted, service 
execution should not throw a TASK_EXECUTE authorization error.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Created] (IGNITE-5332) Add toString() to GridNearAtomicAbstractSingleUpdateRequest and it's inheritors

2017-05-29 Thread Dmitry Karachentsev (JIRA)
Dmitry Karachentsev created IGNITE-5332:
---

 Summary: Add toString() to 
GridNearAtomicAbstractSingleUpdateRequest and it's inheritors
 Key: IGNITE-5332
 URL: https://issues.apache.org/jira/browse/IGNITE-5332
 Project: Ignite
  Issue Type: Bug
Affects Versions: 1.7
Reporter: Dmitry Karachentsev
 Fix For: 1.9


GridNearAtomicAbstractSingleUpdateRequest and all of its inheritors should 
implement the toString() method.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Created] (IGNITE-5259) Support backward compatibility with versions without service permissions

2017-05-18 Thread Dmitry Karachentsev (JIRA)
Dmitry Karachentsev created IGNITE-5259:
---

 Summary: Support backward compatibility with versions without 
service permissions
 Key: IGNITE-5259
 URL: https://issues.apache.org/jira/browse/IGNITE-5259
 Project: Ignite
  Issue Type: Bug
Affects Versions: 2.1
Reporter: Dmitry Karachentsev
Assignee: Dmitry Karachentsev
 Fix For: 2.1


[IGNITE-5077|https://issues.apache.org/jira/browse/IGNITE-5077] adds service 
security permissions, but breaks compatibility with older versions. This must 
be fixed somehow.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Created] (IGNITE-5223) Allow use local binary metadata cache if it's possible

2017-05-15 Thread Dmitry Karachentsev (JIRA)
Dmitry Karachentsev created IGNITE-5223:
---

 Summary: Allow use local binary metadata cache if it's possible
 Key: IGNITE-5223
 URL: https://issues.apache.org/jira/browse/IGNITE-5223
 Project: Ignite
  Issue Type: Bug
Affects Versions: 1.9
Reporter: Dmitry Karachentsev
Assignee: Dmitry Karachentsev
 Fix For: 1.9


Add a system property that makes Ignite use a local binary metadata cache 
instead of the distributed one, for deployments where all classes are available 
on all nodes and don't change.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Created] (IGNITE-5145) Support multiple service deployment in API

2017-05-03 Thread Dmitry Karachentsev (JIRA)
Dmitry Karachentsev created IGNITE-5145:
---

 Summary: Support multiple service deployment in API
 Key: IGNITE-5145
 URL: https://issues.apache.org/jira/browse/IGNITE-5145
 Project: Ignite
  Issue Type: Improvement
Affects Versions: 2.0
Reporter: Dmitry Karachentsev
 Fix For: None


IgniteServices should support deploying a batch of services at once to speed up 
deployment.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Created] (IGNITE-5144) Do not start remappings in DataStreamer on client node leave/arrive

2017-05-03 Thread Dmitry Karachentsev (JIRA)
Dmitry Karachentsev created IGNITE-5144:
---

 Summary: Do not start remappings in DataStreamer on client node 
leave/arrive
 Key: IGNITE-5144
 URL: https://issues.apache.org/jira/browse/IGNITE-5144
 Project: Ignite
  Issue Type: Improvement
  Components: cache
Reporter: Dmitry Karachentsev
Priority: Minor


DataStreamerImpl shouldn't start remappings if the topology was changed by a 
node that does not participate in the cache (or at least by a client node).

Currently the data streamer compares topology versions regardless of which node 
caused the change: !topVer.equals(reqTopVer).



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Created] (IGNITE-5077) Support service permissions

2017-04-25 Thread Dmitry Karachentsev (JIRA)
Dmitry Karachentsev created IGNITE-5077:
---

 Summary: Support service permissions
 Key: IGNITE-5077
 URL: https://issues.apache.org/jira/browse/IGNITE-5077
 Project: Ignite
  Issue Type: New Feature
  Components: managed services
Reporter: Dmitry Karachentsev
Assignee: Dmitry Karachentsev
 Fix For: 2.1


Need to add the capability to specify permissions that allow/disallow execution 
of particular services (similar to compute tasks).

The following permissions should be added to the SecurityPermission enum:

SERVICE_DEPLOY - for IgniteServices.deployXXX methods.
SERVICE_CANCEL - for IgniteServices.cancel and IgniteServices.cancelAll 
methods.
SERVICE_INVOKE - for IgniteServices.service, IgniteServices.services and 
IgniteServices.serviceProxy methods.

SERVICE_INVOKE should allow fine-grained authorization based on service name, 
similar to TASK_EXECUTE. E.g., a particular user should be able to execute 
service A, but not service B.
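The name-based check could look roughly like this. It is an illustrative model only; the enum and method names are assumptions, not the actual SecurityPermission API.

```java
import java.util.EnumSet;
import java.util.HashMap;
import java.util.Map;
import java.util.Set;

// Sketch: fine-grained service permissions keyed by service name, mirroring
// how TASK_EXECUTE is authorized per task.
public class ServiceSecurity {
    public enum Perm { SERVICE_DEPLOY, SERVICE_CANCEL, SERVICE_INVOKE }

    private final Map<String, Set<Perm>> grants = new HashMap<>();

    public void grant(String serviceName, Perm perm) {
        grants.computeIfAbsent(serviceName, n -> EnumSet.noneOf(Perm.class)).add(perm);
    }

    public void authorize(String serviceName, Perm perm) {
        if (!grants.getOrDefault(serviceName, Set.of()).contains(perm))
            throw new SecurityException(perm + " denied for service '" + serviceName + "'");
    }
}
```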



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Created] (IGNITE-5038) Add BinaryObject.deserialize(ClassLoader) method

2017-04-20 Thread Dmitry Karachentsev (JIRA)
Dmitry Karachentsev created IGNITE-5038:
---

 Summary: Add BinaryObject.deserialize(ClassLoader) method
 Key: IGNITE-5038
 URL: https://issues.apache.org/jira/browse/IGNITE-5038
 Project: Ignite
  Issue Type: New Feature
  Components: cache
Affects Versions: 2.0
Reporter: Dmitry Karachentsev
 Fix For: 2.1


If a custom class loader is used, an object can be serialized to a 
BinaryObject, but deserialization will fail with a ClassNotFoundException.

There should be a way to explicitly set the ClassLoader for object 
deserialization:
BinaryObject.deserialize(ClassLoader)
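The essence of the proposal is resolving the stored type name against a caller-supplied loader, roughly as below. This is a sketch with made-up class names, not the real BinaryObject implementation.

```java
// Sketch: deserialize resolves the type via the supplied ClassLoader,
// falling back to the current one when none is given.
public class BinaryObjectSketch {
    private final String typeName; // type name recorded at serialization time

    public BinaryObjectSketch(String typeName) {
        this.typeName = typeName;
    }

    public Object deserialize(ClassLoader ldr) throws ReflectiveOperationException {
        ClassLoader effective = ldr != null ? ldr : getClass().getClassLoader();
        Class<?> cls = Class.forName(typeName, true, effective);
        // real code would then populate fields from the binary payload
        return cls.getDeclaredConstructor().newInstance();
    }
}
```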



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Created] (IGNITE-5032) IgniteLock broken state resets on node restart

2017-04-19 Thread Dmitry Karachentsev (JIRA)
Dmitry Karachentsev created IGNITE-5032:
---

 Summary: IgniteLock broken state resets on node restart
 Key: IGNITE-5032
 URL: https://issues.apache.org/jira/browse/IGNITE-5032
 Project: Ignite
  Issue Type: Bug
  Components: data structures
Affects Versions: 1.9
Reporter: Dmitry Karachentsev


1) Start two nodes.
2) Create distributed lock with failoverSafe = false.
3) Acquire lock on node 1.
4) Restart node 1.
5) Lock acquisition on node 2 throws an exception (as expected), but node 1 
successfully acquires the lock.

Correct behavior:
5) node 1 throws an exception just like node 2, and isBroken() returns true.
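The expected semantics can be modeled with a broken flag that, once set, is honored by every node, including a restarted previous owner. This is an illustrative sketch, not the actual Ignite lock implementation.

```java
// Sketch: once a non-failover-safe lock is broken, acquire() must fail on
// every node until the lock is explicitly re-created.
public class NonFailoverSafeLock {
    private volatile boolean broken;

    /** Set when an owner leaves the cluster while holding the lock. */
    public void markBroken() { broken = true; }

    public boolean isBroken() { return broken; }

    public void acquire() {
        if (broken)
            throw new IllegalStateException("Lock is broken; re-create it to continue");
        // ... normal acquisition path ...
    }
}
```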



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Created] (IGNITE-5013) Allow pass custom types to Cassandra store

2017-04-18 Thread Dmitry Karachentsev (JIRA)
Dmitry Karachentsev created IGNITE-5013:
---

 Summary: Allow pass custom types to Cassandra store
 Key: IGNITE-5013
 URL: https://issues.apache.org/jira/browse/IGNITE-5013
 Project: Ignite
  Issue Type: Improvement
  Components: ignite-cassandra
Affects Versions: 2.0
Reporter: Dmitry Karachentsev
 Fix For: 2.1


Any type that is not in 
org.apache.ignite.cache.store.cassandra.common.PropertyMappingHelper#JAVA_TO_CASSANDRA_MAPPING
 will be serialized even if the user specified a custom codec for Cassandra. 

There should be a way to configure custom types and pass them through to Cassandra.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


Re: IgniteSemaphore and failoverSafe flag

2017-04-14 Thread Dmitry Karachentsev
It's not 100% reproducible; to get it to fail locally I ran it many times 
in a loop (an IntelliJ IDEA feature).

N.B. This test was muted before the fix, so yes, it could not be the cause.

Thanks!

On 14.04.2017 17:23, Vladisav Jelisavcic wrote:

Hmm, I cannot reproduce this behavior locally;
my guess is the interrupt flag is not always cleared properly in the 
GridCacheSemaphore.acquire method (but it doesn't have anything to do 
with the latest fix)


Can you make it reproducible?

On Fri, Apr 14, 2017 at 2:46 PM, Dmitry Karachentsev 
<dkarachent...@gridgain.com> wrote:


Vladislav,

One more thing: this test [1] started failing on semaphore close
when this fix [2] was introduced.
Could you check it please?

[1]

http://ci.ignite.apache.org/viewLog.html?buildId=547151=buildResultsDiv=IgniteTests_IgniteDataStrucutures#testNameId-979977708202725050

[2] https://issues.apache.org/jira/browse/IGNITE-1977

Thanks!

On 14.04.2017 15:27, Dmitry Karachentsev wrote:

Vladislav,

Yep, you're right. I'll fix it.

Thanks!

On 14.04.2017 15:18, Vladisav Jelisavcic wrote:

Hi Dmitry,

it looks to me that this test is not valid - after semaphore
2 fails, the permits are redistributed,
so the expected number of permits should really be 20, not 10. Do
you agree?

I guess before the latest fix this test was (incorrectly) passing
because permits weren't released properly.

What do you think?

On Fri, Apr 14, 2017 at 11:27 AM, Dmitry Karachentsev
<dkarachent...@gridgain.com>
wrote:

Hi Vladislav,

It looks like after fix was merged these tests [1] started
failing. Could you please take a look?

[1]

http://ci.ignite.apache.org/viewLog.html?buildId=544238=buildResultsDiv=IgniteTests_IgniteBinaryObjectsDataStrucutures


Thanks!

    -Dmitry.

    On 13.04.2017 16:15, Dmitry Karachentsev wrote:

Thanks a lot!

On 12.04.2017 16:35, Vladisav Jelisavcic wrote:

Hi Dmitry,

sure, I made a fix, take a look at the PR and the comments
in the ticket.

Best regards,
Vladisav

    On Tue, Apr 11, 2017 at 3:00 PM, Dmitry Karachentsev
<dkarachent...@gridgain.com> wrote:

Hi Vladislav,

Thanks for your contribution! But it seems doesn't fix
related tickets, in particular [1].
Could you please take a look?

[1] https://issues.apache.org/jira/browse/IGNITE-4173

Thanks!

On 06.04.2017 16:27, Vladisav Jelisavcic wrote:

Hey Dmitry,

sorry for the late reply, I'll try to bake a pr later
during the day.

Best regards,
    Vladisav



On Tue, Apr 4, 2017 at 11:05 AM, Dmitry Karachentsev
<dkarachent...@gridgain.com> wrote:

Hi Vladislav,

I see you're developing [1] for a while, did you
have any chance to fix it? If not, is there any
estimate?

[1]
https://issues.apache.org/jira/browse/IGNITE-1977

Thanks!

-Dmitry.



20.03.2017 10:28, Alexey Goncharuk wrote:

I think re-creation should be handled by a user who will make sure
that nobody else is currently executing the guarded logic before the
re-creation. This is exactly the same semantics as with
BrokenBarrierException for j.u.c.CyclicBarrier.

2017-03-17 2:39 GMT+03:00 Vladisav Jelisavcic <vladis...@gmail.com>:

Hi everyone,

I agree with Val, he's got a point; recreating the lock doesn't
seem possible (at least not with the transactional cache
lock/semaphore we have).
Is this re-create behavior really needed?

Best regards,
Vladisav



  

Re: IgniteSemaphore and failoverSafe flag

2017-04-14 Thread Dmitry Karachentsev

Vladislav,

One more thing: this test [1] started failing on semaphore close when
this fix [2] was introduced.

Could you check it please?

[1]
http://ci.ignite.apache.org/viewLog.html?buildId=547151&tab=buildResultsDiv&buildTypeId=IgniteTests_IgniteDataStrucutures#testNameId-979977708202725050

[2] https://issues.apache.org/jira/browse/IGNITE-1977

Thanks!


Re: IgniteSemaphore and failoverSafe flag

2017-04-14 Thread Dmitry Karachentsev

Vladislav,

Yep, you're right. I'll fix it.

Thanks!


Re: IgniteSemaphore and failoverSafe flag

2017-04-14 Thread Dmitry Karachentsev

Hi Vladislav,

It looks like these tests [1] started failing after the fix was
merged. Could you please take a look?


[1]
http://ci.ignite.apache.org/viewLog.html?buildId=544238&tab=buildResultsDiv&buildTypeId=IgniteTests_IgniteBinaryObjectsDataStrucutures


Thanks!

-Dmitry.


Re: IgniteSemaphore and failoverSafe flag

2017-04-13 Thread Dmitry Karachentsev

Thanks a lot!


Re: IgniteSemaphore and failoverSafe flag

2017-04-11 Thread Dmitry Karachentsev

Hi Vladislav,

Thanks for your contribution! But it seems it doesn't fix the related
tickets, in particular [1].

Could you please take a look?

[1] https://issues.apache.org/jira/browse/IGNITE-4173

Thanks!

06.04.2017 16:27, Vladisav Jelisavcic wrote:

Hey Dmitry,

sorry for the late reply, I'll try to bake a pr later during the day.

Best regards,
Vladisav



On Tue, Apr 4, 2017 at 11:05 AM, Dmitry Karachentsev
<dkarachent...@gridgain.com> wrote:


Hi Vladislav,

I see you're developing [1] for a while, did you have any chance
to fix it? If not, is there any estimate?

[1] https://issues.apache.org/jira/browse/IGNITE-1977

Thanks!

-Dmitry.



20.03.2017 10:28, Alexey Goncharuk wrote:

I think re-creation should be handled by a user who will make sure
that nobody else is currently executing the guarded logic before the
re-creation. This is exactly the same semantics as with
BrokenBarrierException for j.u.c.CyclicBarrier.

2017-03-17 2:39 GMT+03:00 Vladisav Jelisavcic <vladis...@gmail.com>:

Hi everyone,

I agree with Val, he's got a point; recreating the lock doesn't
seem possible (at least not with the transactional cache
lock/semaphore we have).
Is this re-create behavior really needed?

Best regards,
Vladisav



On Thu, Mar 16, 2017 at 8:34 PM, Valentin Kulichenko <
valentin.kuliche...@gmail.com> wrote:

Guys,

How does recreation of the lock help? My understanding is that the
scenario is the following:

1. Client A creates and acquires a lock, and then starts to execute
guarded logic.
2. Client B tries to acquire the same lock and parks to wait.
3. Before client A unlocks, all affinity nodes for the lock fail, and
the lock disappears from the cache.
4. Client B fails with an exception, recreates the lock, acquires it,
and starts to execute guarded logic concurrently with client A.

In my view this is wrong anyway, regardless of whether this happens
silently or with an exception handled in user's code. Because this
code doesn't have any way to know if client A still holds the lock
or not.

Am I missing something?

-Val

On Tue, Mar 14, 2017 at 10:14 AM, Dmitriy Setrakyan
<dsetrak...@apache.org> wrote:

On Tue, Mar 14, 2017 at 12:46 AM, Alexey Goncharuk
<alexey.goncha...@gmail.com> wrote:

Which user operation would result in exception? To my knowledge, user
may already be holding the lock and not invoking any Ignite APIs, no?

Yes, this is exactly my point.

Imagine that a node already holds a lock and another node is waiting
for the lock. If all partition nodes leave the grid and the lock is
re-created, this second node will immediately acquire the lock and we
will have two lock owners. I think in this case this second node
(blocked on lock()) should get an exception saying that the lock was
lost (which is, by the way, the current behavior), and the first node
should get an exception on unlock.

Makes sense.
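The race Val describes can be sketched as a sequential toy simulation (this is not Ignite code; the in-memory registry and all names are made up for illustration). The lock's backing entry vanishes when its affinity nodes fail, and client B recreates and acquires it while client A still believes it owns the lock:

```java
import java.util.HashMap;
import java.util.Map;

/**
 * Sequential toy simulation of the lock-loss scenario: a lock entry lives in
 * an in-memory "registry" standing in for the lock's affinity nodes.
 */
public class LockLossSimulation {
    /** Lock name -> current owner, or null if created but unowned. */
    static final Map<String, String> registry = new HashMap<>();

    static void create(String lock) { registry.putIfAbsent(lock, null); }

    static void acquire(String lock, String client) {
        if (!registry.containsKey(lock))
            throw new IllegalStateException("lock lost");   // all affinity nodes failed
        if (registry.get(lock) != null)
            throw new IllegalStateException("would block"); // toy model: no real waiting
        registry.put(lock, client);
    }

    public static boolean bothClientsThinkTheyOwnIt() {
        create("L");
        acquire("L", "A");                 // 1. client A acquires and runs guarded logic
        registry.remove("L");              // 3. affinity nodes fail, lock entry disappears
        boolean bSawTheLoss = false;
        try {
            acquire("L", "B");             // 4a. client B observes the loss...
        } catch (IllegalStateException e) {
            bSawTheLoss = true;
        }
        create("L");                       // 4b. ...recreates the lock...
        acquire("L", "B");                 // 4c. ...and acquires it immediately.
        // A never released, B now holds the recreated lock: two "owners".
        return bSawTheLoss && "B".equals(registry.get("L"));
    }
}
```

This is exactly why the thread concludes that re-creation must be the user's responsibility: only user code can know whether client A is still inside the guarded section.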







Async listeners for IgniteFuture

2017-04-05 Thread Dmitry Karachentsev

Hello everyone!

I'm working on this ticket [1], and Vladimir made a review where he
raised concerns about the implementation. I'd like to hear the
community's opinion on the second one: where should user callbacks be
processed by default?


For now I used the public pool for that purpose, but this could lead
to deadlocks. So what is the best decision here: use the common
fork-join pool for callbacks, a dedicated pool, or maybe force the
user to provide their own Executor?


[1] https://issues.apache.org/jira/browse/IGNITE-4477

Thanks!

-Dmitry.
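One of the options mentioned above — letting the user provide their own Executor for callbacks — can be sketched with plain java.util.concurrent, using CompletableFuture as a stand-in for IgniteFuture (this illustrates the pattern only, not the actual Ignite API; the pool name is made up):

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.atomic.AtomicReference;

/**
 * Sketch of the "user-supplied Executor" option. The callback runs on the
 * user's pool, so it cannot starve or deadlock the system pool that
 * completes the future.
 */
public class CallbackExecutorDemo {
    public static String callbackThreadName() throws Exception {
        ExecutorService userPool =
            Executors.newSingleThreadExecutor(r -> new Thread(r, "user-callback-pool"));
        try {
            AtomicReference<String> ref = new AtomicReference<>();
            CompletableFuture<Integer> fut = CompletableFuture.completedFuture(42);
            // Analogous to a future listener, but with an explicit Executor:
            fut.whenCompleteAsync((res, err) -> ref.set(Thread.currentThread().getName()),
                                  userPool)
               .get(); // wait until the callback has run
            return ref.get();
        } finally {
            userPool.shutdown();
        }
    }
}
```

The design trade-off matches the question in the message: a shared default pool is convenient but a blocking callback can exhaust it, while a user-supplied Executor makes the blocking risk the user's explicit choice.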




[jira] [Created] (IGNITE-4912) Fix testDeployCalledAfterCacheStart and testDeployCalledBeforeCacheStart tests

2017-04-05 Thread Dmitry Karachentsev (JIRA)
Dmitry Karachentsev created IGNITE-4912:
---

 Summary: Fix testDeployCalledAfterCacheStart and 
testDeployCalledBeforeCacheStart tests
 Key: IGNITE-4912
 URL: https://issues.apache.org/jira/browse/IGNITE-4912
 Project: Ignite
  Issue Type: Sub-task
Reporter: Dmitry Karachentsev
Assignee: Dmitry Karachentsev


IgniteServiceDynamicCachesSelfTest.testDeployCalledAfterCacheStart
IgniteServiceDynamicCachesSelfTest.testDeployCalledBeforeCacheStart



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Created] (IGNITE-4911) Fix tests #1

2017-04-05 Thread Dmitry Karachentsev (JIRA)
Dmitry Karachentsev created IGNITE-4911:
---

 Summary: Fix tests #1
 Key: IGNITE-4911
 URL: https://issues.apache.org/jira/browse/IGNITE-4911
 Project: Ignite
  Issue Type: Bug
  Components: cache, general
Reporter: Dmitry Karachentsev
Assignee: Dmitry Karachentsev


Tests that are constantly failing:
IgniteServiceDynamicCachesSelfTest.testDeployCalledAfterCacheStart
IgniteServiceDynamicCachesSelfTest.testDeployCalledBeforeCacheStart

IgniteCachePeekModesAbstractTest.testNonLocalPartitionSize

GridCacheDeploymentOffHeapValuesSelfTest.testDeployment

GridCacheAtomicPrimaryWriteOrderNearRemoveFailureTest.testPutAndRemove
TcpDiscoveryNodeAttributesUpdateOnReconnectTest.testReconnect





Re: IgniteSemaphore and failoverSafe flag

2017-04-04 Thread Dmitry Karachentsev

Hi Vladislav,

I see you're developing [1] for a while, did you have any chance to fix
it? If not, is there any estimate?


[1] https://issues.apache.org/jira/browse/IGNITE-1977

Thanks!

-Dmitry.








Re: IGNITE-4284 - ready for review

2017-03-28 Thread Dmitry Karachentsev

Hi Nikita,

I've looked through the issue and, because the fix is quite small and
important, committed it to PR #1681 [1]. Please take a look at it. Here
is the code review [2].
I'll assign the issue to myself and resolve it. If you have any
questions about the fix, please ask.


Thanks for your effort!

[1] https://github.com/apache/ignite/pull/1681
[2] http://reviews.ignite.apache.org/ignite/review/IGNT-CR-140

-Dmitry.

22.03.2017 14:41, Nikita Amelchev wrote:

A note about p2p class loading support. I thought that we could
unmarshal in processNodeAddFinishedMessage when the discovery data is
received. But unmarshalling requires the optimized marshaller (which
registers classes in the system cache), and that marshaller starts only
after the joinLatch countdown in this message. I tried unmarshalling
the filters after the joinLatch was released, but it broke some tests,
such as reconnects and backups. Is it possible to unmarshal before the
joinLatch countdown, once the system cache has started? Or should we
forbid the node to start and throw a checked exception?

2017-02-22 12:35 GMT+03:00 Alexey Goncharuk :


Nikita,

The fix looks wrong to me. The point of the assertion was to ensure an
invariant,
see org.apache.ignite.internal.processors.cache.query.continuous.
CacheContinuousQueryManager#executeQuery
-V2 handler is created only when remote filter factory is not null.

The test observes both fields equal to null because deserialization
failed: peer class loading is not supported in certain places of the
Ignite code (namely, the discovery subsystem), and a joining node
receives registered continuous queries through discovery data. Silently
ignoring this failure is wrong; we should either forbid such a node to
start or support p2p class loading for discovery (or suggest yet
another solution).

2017-02-22 12:03 GMT+03:00 Nikita Amelchev :


Hello. I fixed it.

Please, review.

https://issues.apache.org/jira/browse/IGNITE-4284 - Failed second client
node join with continuous query and peer class loading enabled

latest ci.tests:
http://ci.ignite.apache.org/project.html?projectId=IgniteTests&branch_IgniteTests=pull%2F1564%2Fhead


--
Best wishes,
Amelchev Nikita








Re: IgniteMessaging in async mode

2017-03-28 Thread Dmitry Karachentsev
I'm talking about a real case [1]. The user actually doesn't care about
the difficulties of establishing a connection, or whether the message
wasn't sent. But now they have to resolve such cases manually. Why not
use the existing async mode here?


-Dmitry.

[1] 
http://apache-ignite-users.70518.x6.nabble.com/ignite-messaging-disconnection-behaviour-td11218.html#a11418


24.03.2017 21:10, Valentin Kulichenko wrote:

I believe it can be blocked if connection can't be established (i.e. socket
can't be opened within a timeout). But I don't think that it makes much
sense to add async support because of this. Also it would be very confusing
as actual send is always asynchronous.

-Val

On Fri, Mar 24, 2017 at 9:10 AM, Dmitriy Setrakyan <dsetrak...@apache.org>
wrote:


I am a bit confused. The "send()" method does not wait for any replies, why
would it block?

On Fri, Mar 24, 2017 at 2:38 AM, Dmitry Karachentsev <
dkarachent...@gridgain.com> wrote:


Hi Igniters!

We have async support in the IgniteMessaging interface, but it's used
only for registering/deregistering listeners, not for message sending.
A user may fall into the case when the connection to the destination
node is lost and the user's thread is blocked in
IgniteMessaging.send()/sendOrdered() until failureDetectionTimeout
expires.

I think it may be a good idea to support async mode for sending messages
(and probably without creating a separate future for each message).

What do you think?

Thanks!

Dmitry.






[jira] [Created] (IGNITE-4867) CacheMetricsSnapshot ignores 'size' and 'keySize' fields

2017-03-27 Thread Dmitry Karachentsev (JIRA)
Dmitry Karachentsev created IGNITE-4867:
---

 Summary: CacheMetricsSnapshot ignores 'size' and 'keySize' fields
 Key: IGNITE-4867
 URL: https://issues.apache.org/jira/browse/IGNITE-4867
 Project: Ignite
  Issue Type: Bug
Reporter: Dmitry Karachentsev


'size' and 'keySize' fields in CacheMetricsSnapshot must be serialized and 
properly processed to show correct values in CacheClusterMetricsMXBeanImpl.





[jira] [Created] (IGNITE-4866) Add lazy value deserialization in GridCacheRawVersionedEntry

2017-03-27 Thread Dmitry Karachentsev (JIRA)
Dmitry Karachentsev created IGNITE-4866:
---

 Summary: Add lazy value deserialization in 
GridCacheRawVersionedEntry
 Key: IGNITE-4866
 URL: https://issues.apache.org/jira/browse/IGNITE-4866
 Project: Ignite
  Issue Type: Improvement
Reporter: Dmitry Karachentsev
Assignee: Dmitry Karachentsev








[jira] [Created] (IGNITE-4862) NPE in reading data from IGFS

2017-03-24 Thread Dmitry Karachentsev (JIRA)
Dmitry Karachentsev created IGNITE-4862:
---

 Summary: NPE in reading data from IGFS
 Key: IGNITE-4862
 URL: https://issues.apache.org/jira/browse/IGNITE-4862
 Project: Ignite
  Issue Type: Bug
  Components: hadoop
Affects Versions: 1.9
Reporter: Dmitry Karachentsev


{noformat}
D:\app\CodexRT.CodexRT_20170314-1>hadoop\bin\hadoop fs -copyToLocal 
igfs:///cacheLink/test3.orc D:\test3.orc
-copyToLocal: Fatal internal error
java.lang.NullPointerException
at 
org.apache.ignite.internal.processors.hadoop.impl.igfs.HadoopIgfsInputStream$FetchBufferPart.flatten(HadoopIgfsInputStream.java:458)
at 
org.apache.ignite.internal.processors.hadoop.impl.igfs.HadoopIgfsInputStream$DoubleFetchBuffer.flatten(HadoopIgfsInputStream.java:511)
at 
org.apache.ignite.internal.processors.hadoop.impl.igfs.HadoopIgfsInputStream.read(HadoopIgfsInputStream.java:177)
at java.io.DataInputStream.read(DataInputStream.java:100)
at org.apache.hadoop.io.IOUtils.copyBytes(IOUtils.java:91)
at org.apache.hadoop.io.IOUtils.copyBytes(IOUtils.java:59)
at org.apache.hadoop.io.IOUtils.copyBytes(IOUtils.java:119)
at 
org.apache.hadoop.fs.shell.CommandWithDestination$TargetFileSystem.writeStreamToFile(CommandWithDestination.java:466)
at 
org.apache.hadoop.fs.shell.CommandWithDestination.copyStreamToTarget(CommandWithDestination.java:391)
at 
org.apache.hadoop.fs.shell.CommandWithDestination.copyFileToTarget(CommandWithDestination.java:328)
at 
org.apache.hadoop.fs.shell.CommandWithDestination.processPath(CommandWithDestination.java:263)
at 
org.apache.hadoop.fs.shell.CommandWithDestination.processPath(CommandWithDestination.java:248)
at org.apache.hadoop.fs.shell.Command.processPaths(Command.java:317)
at 
org.apache.hadoop.fs.shell.Command.processPathArgument(Command.java:289)
at 
org.apache.hadoop.fs.shell.CommandWithDestination.processPathArgument(CommandWithDestination.java:243)
at org.apache.hadoop.fs.shell.Command.processArgument(Command.java:271)
at org.apache.hadoop.fs.shell.Command.processArguments(Command.java:255)
at 
org.apache.hadoop.fs.shell.CommandWithDestination.processArguments(CommandWithDestination.java:220)
at 
org.apache.hadoop.fs.shell.Command.processRawArguments(Command.java:201)
at org.apache.hadoop.fs.shell.Command.run(Command.java:165)
at org.apache.hadoop.fs.FsShell.run(FsShell.java:287)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:84)
at org.apache.hadoop.fs.FsShell.main(FsShell.java:340)
{noformat}

Details in discussion: 
[http://apache-ignite-users.70518.x6.nabble.com/NullPointerException-when-using-IGFS-td11328.html]





IgniteMessaging in async mode

2017-03-24 Thread Dmitry Karachentsev

Hi Igniters!

We have async support in the IgniteMessaging interface, but it's used
only for registering/deregistering listeners, not for message sending.
A user may fall into the case when the connection to the destination
node is lost and the user's thread is blocked in
IgniteMessaging.send()/sendOrdered() until failureDetectionTimeout
expires.


I think it may be a good idea to support async mode for sending
messages (and probably without creating a separate future for each message).


What do you think?

Thanks!

Dmitry.
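Until async send is supported, a user-side workaround is to offload the potentially blocking call to a dedicated pool. The Messaging interface below is a made-up stub standing in for IgniteMessaging; only the offloading pattern is the point:

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

/**
 * Workaround sketch: run a potentially blocking send() on a dedicated pool
 * so the caller's thread is never parked for up to failureDetectionTimeout.
 */
public class AsyncSendSketch {
    interface Messaging { void send(Object topic, Object msg); } // stub, not Ignite API

    static CompletableFuture<Void> sendAsync(Messaging m, Object topic, Object msg,
                                             ExecutorService pool) {
        return CompletableFuture.runAsync(() -> m.send(topic, msg), pool);
    }

    /** Returns true iff sendAsync() returned while the send was still blocked. */
    public static boolean callerNotBlocked() throws Exception {
        CountDownLatch gate = new CountDownLatch(1);
        // Simulate a send() stuck waiting on a dead connection:
        Messaging slow = (t, msg) -> {
            try { gate.await(); } catch (InterruptedException ignored) { }
        };
        ExecutorService pool = Executors.newSingleThreadExecutor();
        try {
            CompletableFuture<Void> fut = sendAsync(slow, "topic", "hello", pool);
            boolean returnedWhileSendBlocked = !fut.isDone(); // caller got control back
            gate.countDown();                                 // let the send finish
            fut.get();
            return returnedWhileSendBlocked;
        } finally {
            pool.shutdown();
        }
    }
}
```

A built-in async mode could follow the same shape, ideally without allocating a separate future per message, as suggested above.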



Re: Can I let Ignite to do rebalance at a particular time?

2017-03-24 Thread Dmitry Karachentsev

Hi,

Totally agree with that. And it should be possible to start
re-balancing from a client node too.


> But I think it is more convenient to start rebalancing manually from 
management console, is it right?


I don't see a difference in how re-balancing is started, from the
Ignite API or from a console: the result will be the same.


-Dmitry.

24.03.2017 02:55, Valentin Kulichenko wrote:

Cross-posting to dev.

Guys,

Is there any reason for such an implementation? Does it ever make
sense to trigger rebalancing on only one node?


I think rebalance() method should broadcast automatically and do the 
job on all nodes. Just in case, we can also add localRebalance() 
method. This will be more intuitive and also more consistent with 
other APIs, like loadCache/localLoadCache.


-Val

On Wed, Mar 22, 2017 at 11:52 PM, ght230 wrote:


Thank you for your guidance.

I will try calling IgniteCache.rebalance() via a compute broadcast
task.

But I think it is more convenient to start rebalancing manually from
management console, is it right?



--
View this message in context:

http://apache-ignite-users.70518.x6.nabble.com/Can-I-let-Ignite-to-do-rebalance-at-a-particular-time-tp11301p11381.html


Sent from the Apache Ignite Users mailing list archive at Nabble.com.






[jira] [Created] (IGNITE-4740) Service could be initialized twice on concurrent cancel and discovery event

2017-02-21 Thread Dmitry Karachentsev (JIRA)
Dmitry Karachentsev created IGNITE-4740:
---

 Summary: Service could be initialized twice on concurrent cancel 
and discovery event
 Key: IGNITE-4740
 URL: https://issues.apache.org/jira/browse/IGNITE-4740
 Project: Ignite
  Issue Type: Bug
  Components: managed services
Affects Versions: 1.8
Reporter: Dmitry Karachentsev
Assignee: Dmitry Karachentsev
 Fix For: 2.0


A concurrent service cancel and discovery event (for example, one applying 
a continuous query) may cause the service's init() and execute() methods to 
be invoked twice.

The fix should at least filter such DiscoveryCustomEvents in 
GridServiceProcessor.TopologyListener. The best solution is to guarantee 
that, regardless of any discovery event, a service won't be initialized 
more than once.





[jira] [Created] (IGNITE-4718) Add section in documentation on what actions may cause deadlock

2017-02-17 Thread Dmitry Karachentsev (JIRA)
Dmitry Karachentsev created IGNITE-4718:
---

 Summary: Add section in documentation on what actions may cause 
deadlock
 Key: IGNITE-4718
 URL: https://issues.apache.org/jira/browse/IGNITE-4718
 Project: Ignite
  Issue Type: Task
Affects Versions: 1.8
Reporter: Dmitry Karachentsev


Ignite has a number of common cases where wrong API usage may lead to 
deadlocks, starvation, or cluster hangs, and they should be properly 
documented.

For example, cache operations are not allowed in a CacheEntryProcessor, in 
a ContinuousQuery's remote filter, etc. In some callbacks it is possible to 
use the cache API if they are annotated with @IgniteAsyncCallback.





[jira] [Created] (IGNITE-4671) FairAffinityFunction fails on node restart with backupFilter set and no backups

2017-02-08 Thread Dmitry Karachentsev (JIRA)
Dmitry Karachentsev created IGNITE-4671:
---

 Summary: FairAffinityFunction fails on node restart with 
backupFilter set and no backups
 Key: IGNITE-4671
 URL: https://issues.apache.org/jira/browse/IGNITE-4671
 Project: Ignite
  Issue Type: Bug
  Components: cache
Affects Versions: 1.8
Reporter: Dmitry Karachentsev
Assignee: Dmitry Karachentsev
 Fix For: 2.0


Setup: FairAffinityFunction with a backup or affinity filter, and the 
number of cache backups set to 0.

On node restart, it fails on an assertion in the isAssignable() method.





[jira] [Created] (IGNITE-4667) Throw exception on starting client cache when indexed types cannot be loaded

2017-02-07 Thread Dmitry Karachentsev (JIRA)
Dmitry Karachentsev created IGNITE-4667:
---

 Summary: Throw exception on starting client cache when indexed 
types cannot be loaded
 Key: IGNITE-4667
 URL: https://issues.apache.org/jira/browse/IGNITE-4667
 Project: Ignite
  Issue Type: Bug
  Components: cache
Affects Versions: 1.8
Reporter: Dmitry Karachentsev


Assume the following situation:
1. A server node configures cache indexed types with classes that the 
client node doesn't have.
2. Start the server.
3. Start the client. The client connects successfully.
4. Try to get the cache on the client node.
5. An exception is thrown in GridQueryProcessor, and the request for the 
cache on the client node hangs forever.

If for some reason the cache couldn't be initialized, an exception must be 
thrown from the Ignite.cache() operation.





[jira] [Created] (IGNITE-4644) Value from IgniteQueue in atomic mode could be lost

2017-02-02 Thread Dmitry Karachentsev (JIRA)
Dmitry Karachentsev created IGNITE-4644:
---

 Summary: Value from IgniteQueue in atomic mode could be lost
 Key: IGNITE-4644
 URL: https://issues.apache.org/jira/browse/IGNITE-4644
 Project: Ignite
  Issue Type: Bug
  Components: cache
Affects Versions: 1.8
Reporter: Dmitry Karachentsev


Assume the following case (operations run concurrently):
1) Client1 -> offer. A new index is added to the queue.
2) Client1 hits a JVM pause or a short-lived connectivity problem with the 
cluster lasting longer than 3 seconds.
3) Client2 -> poll. It gets the index from the cache, but no value is 
available yet.
4) Client2 checks for the value for about 3 seconds and, if none appears, 
simply skips it.

This should be fixed, or the timeout should be made configurable.
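The polling side could honor a configurable timeout instead of a hard-coded ~3 s window. A self-contained sketch of that idea, where the map stands in for the queue's backing cache and all names are illustrative, not Ignite's actual queue internals:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class QueuePollSketch {
    /** Stand-in for the queue's backing cache: index -> value. */
    public static final Map<Long, String> CACHE = new ConcurrentHashMap<>();

    /**
     * Polls for the value at the given index, retrying until timeoutMs
     * elapses instead of giving up after a fixed window. The value may
     * appear late if the offering client's JVM pause ends.
     */
    public static String pollValue(long idx, long timeoutMs) {
        long deadline = System.currentTimeMillis() + timeoutMs;
        while (System.currentTimeMillis() < deadline) {
            String val = CACHE.remove(idx);
            if (val != null)
                return val;
            Thread.onSpinWait(); // keep checking until the deadline
        }
        return null; // timed out: caller decides whether to skip or fail
    }
}
```

With a configurable timeout, deployments with longer expected GC pauses could raise the window rather than silently losing values.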





Re: Error in JDBC store

2017-01-01 Thread Dmitry Karachentsev

Opened a ticket for that [1].

[1] https://issues.apache.org/jira/browse/IGNITE-4515

On 29.12.2016 23:07, Valentin Kulichenko wrote:

Hi Dmitry,

My opinion is that this is not a valid case and we should throw an
exception on cache startup if two Java fields are mapped to the same DB
field. Even if user needs such duplication on objects level (which I also
doubt, BTW), the mapping in the store must be correct.

-Val

On Thu, Dec 29, 2016 at 12:23 AM, Dmitriy Karachentsev <
dkarachent...@gridgain.com> wrote:


Hi all!

According to this thread [1] in JDBC store configuration:
1. Map key and value fields to the same columns in DB.
2. Try to update data.
3. Got invalid SQL.

Let's see a pseudocode example of the described use case.
KeyClass { field1; field2; field3 }, ValClass { field1; field2; field3 }

Map fields to DB table TABLE_NAME columns: field1 -> col1, field2 -> col2,
field3 -> col3.

The user expects the following update request built by Ignite:
UPDATE TABLE_NAME SET col1=?, col2=?, col3=? WHERE col1=? AND col2=? AND 
col3=?;

But Ignite detects that the value object's fields have the same mappings, 
throws them away, and builds an invalid query:
UPDATE TABLE_NAME SET WHERE col1=? AND col2=? AND col3=?;
That is obviously wrong.

Is there any reason to do so?
It would probably be better to build the query according to the user's 
mapping and delegate verification to the DB.

[1]
http://apache-ignite-users.70518.x6.nabble.com/Error-
while-writethrough-operation-in-Ignite-td9696.html

Thanks!
Dmitry.





[jira] [Created] (IGNITE-4515) Throw exception when key and value fields mapped to the same DB columns

2017-01-01 Thread Dmitry Karachentsev (JIRA)
Dmitry Karachentsev created IGNITE-4515:
---

 Summary: Throw exception when key and value fields mapped to the 
same DB columns
 Key: IGNITE-4515
 URL: https://issues.apache.org/jira/browse/IGNITE-4515
 Project: Ignite
  Issue Type: Improvement
  Components: cache
Affects Versions: 1.8
Reporter: Dmitry Karachentsev
Assignee: Dmitry Karachentsev
 Fix For: 2.0


In case of (pseudocode):
KeyClass { field1; field2; field3 }, ValClass { field1; field2; field3 }

Map fields to DB table TABLE_NAME columns: field1 -> col1, field2 -> col2, 
field3 -> col3.

On node startup, throw an exception with a message explaining that this 
case is not allowed: field mappings must not intersect.
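The intended startup check can be sketched as follows (class and method names are illustrative, not Ignite's actual JDBC store code): build the UPDATE statement from the field-to-column mappings, and fail fast when key and value columns intersect instead of silently dropping the duplicates:

```java
import java.util.Map;
import java.util.StringJoiner;

public class UpdateSqlSketch {
    /**
     * Builds "UPDATE tbl SET v1=?,... WHERE k1=? AND ..." from the
     * key-field and value-field column mappings.
     */
    public static String buildUpdate(String table,
                                     Map<String, String> keyCols,
                                     Map<String, String> valCols) {
        // Reject intersecting mappings up front; silently dropping them
        // would produce the broken "UPDATE t SET WHERE ..." query.
        for (String col : valCols.values())
            if (keyCols.containsValue(col))
                throw new IllegalArgumentException(
                    "Key and value fields mapped to the same column: " + col);

        StringJoiner set = new StringJoiner(", ");
        for (String col : valCols.values())
            set.add(col + "=?");

        StringJoiner where = new StringJoiner(" AND ");
        for (String col : keyCols.values())
            where.add(col + "=?");

        return "UPDATE " + table + " SET " + set + " WHERE " + where + ";";
    }
}
```

Failing at startup surfaces the misconfiguration immediately instead of at the first write-through.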





[jira] [Created] (IGNITE-4487) NPE on query execution

2016-12-23 Thread Dmitry Karachentsev (JIRA)
Dmitry Karachentsev created IGNITE-4487:
---

 Summary: NPE on query execution
 Key: IGNITE-4487
 URL: https://issues.apache.org/jira/browse/IGNITE-4487
 Project: Ignite
  Issue Type: Bug
  Components: cache
Affects Versions: 1.8
Reporter: Dmitry Karachentsev
 Fix For: 2.0


An NPE may be thrown when destroyCache() is called and a query is then 
executed.

The attached example reproduces the case where 
GridDiscoveryManager#removeCacheFilter has been called but the cache state 
hasn't yet been changed to STOPPED in 
org.apache.ignite.internal.processors.cache.GridCacheGateway#onStopped.
{code:none}
javax.cache.CacheException: class org.apache.ignite.IgniteCheckedException: null
at 
org.apache.ignite.internal.processors.cache.IgniteCacheProxy.query(IgniteCacheProxy.java:740)
at 
com.intellica.evam.engine.event.future.FutureEventWorker.processFutureEvents(FutureEventWorker.java:117)
at 
com.intellica.evam.engine.event.future.FutureEventWorker.run(FutureEventWorker.java:66)
Caused by: class org.apache.ignite.IgniteCheckedException: null
at 
org.apache.ignite.internal.processors.query.GridQueryProcessor.executeQuery(GridQueryProcessor.java:1693)
at 
org.apache.ignite.internal.processors.cache.IgniteCacheProxy.query(IgniteCacheProxy.java:494)
at 
org.apache.ignite.internal.processors.cache.IgniteCacheProxy.query(IgniteCacheProxy.java:732)
... 2 more
Caused by: java.lang.NullPointerException
at 
org.apache.ignite.internal.processors.cache.query.GridCacheQueryAdapter$ScanQueryFallbackClosableIterator.init(GridCacheQueryAdapter.java:712)
at 
org.apache.ignite.internal.processors.cache.query.GridCacheQueryAdapter$ScanQueryFallbackClosableIterator.&lt;init&gt;(GridCacheQueryAdapter.java:677)
at 
org.apache.ignite.internal.processors.cache.query.GridCacheQueryAdapter$ScanQueryFallbackClosableIterator.&lt;init&gt;(GridCacheQueryAdapter.java:628)
at 
org.apache.ignite.internal.processors.cache.query.GridCacheQueryAdapter.executeScanQuery(GridCacheQueryAdapter.java:548)
at 
org.apache.ignite.internal.processors.cache.IgniteCacheProxy$2.applyx(IgniteCacheProxy.java:497)
at 
org.apache.ignite.internal.processors.cache.IgniteCacheProxy$2.applyx(IgniteCacheProxy.java:495)
at 
org.apache.ignite.internal.util.lang.IgniteOutClosureX.apply(IgniteOutClosureX.java:36)
at 
org.apache.ignite.internal.processors.query.GridQueryProcessor.executeQuery(GridQueryProcessor.java:1670)
... 4 more
{code}








[jira] [Created] (IGNITE-4455) Node restarts may hang grid on exchange process

2016-12-19 Thread Dmitry Karachentsev (JIRA)
Dmitry Karachentsev created IGNITE-4455:
---

 Summary: Node restarts may hang grid on exchange process
 Key: IGNITE-4455
 URL: https://issues.apache.org/jira/browse/IGNITE-4455
 Project: Ignite
  Issue Type: Bug
  Components: cache
Affects Versions: 1.8
Reporter: Dmitry Karachentsev


A node connect or disconnect may cause the grid to hang during the exchange process.





Re: Grid hang on compute

2016-12-07 Thread Dmitry Karachentsev

Thanks!

Opened a ticket to support Yakov's proposal.
https://issues.apache.org/jira/browse/IGNITE-4395

On 08.12.2016 4:03, Dmitriy Setrakyan wrote:

Is there any way we can detect this and prevent from happening? Or perhaps
start rejecting jobs if they can potentially block the system?

On Wed, Dec 7, 2016 at 8:11 AM, Yakov Zhdanov <yzhda...@apache.org> wrote:


Proper solution here is to have communication backpressure per policy -
SYSTEM or PUBLIC, but not single point as it is now. I think we can achieve
this having two queues per communication session or (which looks a bit
easier to implement) to have separate connections.

As a workaround you can increase the limit. Setting it to 0 may lead to a
potential OOME on sender or receiver sides.

--Yakov

2016-12-07 20:35 GMT+07:00 Dmitry Karachentsev <dkarachent...@gridgain.com>:
Igniters!

I recently faced an arguable issue that looks like a bug. The scenario is 
as follows:

1) Start two data nodes with some cache.

2) From one node, in async mode, post a big number of jobs to the other. 
Those jobs do some cache operations.

3) The grid hangs almost immediately and all threads are sleeping except 
the public ones, which are waiting for a response.

This happens because all cache and job messages are queued on the 
communication layer and limited to a default number (1024). It looks like 
the jobs are waiting for cache responses that cannot be received due to 
this limit. It's hard to diagnose and is not convenient (as far as I know, 
the docs place no limitation on using cache ops from compute jobs).

So, my question is: should we try to solve that, or is it enough to update 
the documentation with a recommendation to disable the queue limit for 
such cases?






[jira] [Created] (IGNITE-4395) Implement communication backpressure per policy - SYSTEM or PUBLIC

2016-12-07 Thread Dmitry Karachentsev (JIRA)
Dmitry Karachentsev created IGNITE-4395:
---

 Summary: Implement communication backpressure per policy - SYSTEM 
or PUBLIC
 Key: IGNITE-4395
 URL: https://issues.apache.org/jira/browse/IGNITE-4395
 Project: Ignite
  Issue Type: Improvement
  Components: cache, compute
Affects Versions: 1.7
Reporter: Dmitry Karachentsev


1) Start two data nodes with some cache.
2) From one node, in async mode, post a big number of jobs to the other. 
Those jobs do some cache operations.
3) The grid hangs almost immediately and all threads are sleeping except 
the public ones, which are waiting for a response.

This happens because all cache and job messages are queued on the 
communication layer and limited to a default number (1024). It looks like 
the jobs are waiting for cache responses that cannot be received due to 
this limit.

The proper solution here is to have communication backpressure per policy 
(SYSTEM or PUBLIC) rather than a single point as it is now. It could be 
achieved by having two queues per communication session or (which looks a 
bit easier to implement) by having separate connections.
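The per-policy idea can be sketched with two independent bounded queues, so SYSTEM (cache) responses are never stuck behind a full PUBLIC (job) queue. All names here are illustrative, not Ignite's actual communication SPI:

```java
import java.util.EnumMap;
import java.util.Map;
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

public class PerPolicyBackpressure {
    public enum Policy { SYSTEM, PUBLIC }

    // One bounded queue per policy: filling PUBLIC with job messages
    // cannot block SYSTEM (cache) responses, and vice versa.
    private final Map<Policy, BlockingQueue<String>> queues =
        new EnumMap<>(Policy.class);

    public PerPolicyBackpressure(int capacityPerPolicy) {
        for (Policy p : Policy.values())
            queues.put(p, new ArrayBlockingQueue<>(capacityPerPolicy));
    }

    /** Non-blocking offer; only the sender's own policy hits backpressure. */
    public boolean offer(Policy policy, String msg) {
        return queues.get(policy).offer(msg);
    }

    public String poll(Policy policy) {
        return queues.get(policy).poll();
    }
}
```

In the hang described above, a single shared limit means job requests can exhaust the queue before the cache responses those jobs depend on can arrive; separate limits break that dependency cycle.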





Grid hang on compute

2016-12-07 Thread Dmitry Karachentsev

Igniters!

I recently faced an arguable issue that looks like a bug. The scenario is 
as follows:

1) Start two data nodes with some cache.

2) From one node, in async mode, post a big number of jobs to the other. 
Those jobs do some cache operations.

3) The grid hangs almost immediately and all threads are sleeping except 
the public ones, which are waiting for a response.

This happens because all cache and job messages are queued on the 
communication layer and limited to a default number (1024). It looks like 
the jobs are waiting for cache responses that cannot be received due to 
this limit. It's hard to diagnose and is not convenient (as far as I know, 
the docs place no limitation on using cache ops from compute jobs).

So, my question is: should we try to solve that, or is it enough to update 
the documentation with a recommendation to disable the queue limit for 
such cases?




[jira] [Created] (IGNITE-4284) Failed second client node join with continuous query and peer class loading enabled

2016-11-23 Thread Dmitry Karachentsev (JIRA)
Dmitry Karachentsev created IGNITE-4284:
---

 Summary: Failed second client node join with continuous query and 
peer class loading enabled
 Key: IGNITE-4284
 URL: https://issues.apache.org/jira/browse/IGNITE-4284
 Project: Ignite
  Issue Type: Bug
Affects Versions: 1.7
Reporter: Dmitry Karachentsev
 Fix For: 2.0


Steps to reproduce:
1) Start server with cache and peer class loading enabled.
2) Start client with peer class loading enabled.
3) Start continuous query with custom event filter factory.
4) Start a second client -- it fails with an NPE or AssertionError.





[jira] [Created] (IGNITE-4231) Hangs on compute result serialization error

2016-11-16 Thread Dmitry Karachentsev (JIRA)
Dmitry Karachentsev created IGNITE-4231:
---

 Summary: Hangs on compute result serialization error
 Key: IGNITE-4231
 URL: https://issues.apache.org/jira/browse/IGNITE-4231
 Project: Ignite
  Issue Type: Bug
Affects Versions: 1.7
Reporter: Dmitry Karachentsev
Assignee: Dmitry Karachentsev
 Fix For: 1.8


A compute task may hang if an exception is thrown during result 
serialization. This should be handled properly and a response sent to the 
client.





Re: Service proxy API changes

2016-11-15 Thread Dmitry Karachentsev
Valentin, as I understand it, this situation is abnormal, and if a user 
thread hangs on getting a service proxy, it's worth checking the logs for 
a failure instead of making a workaround.


On 15.11.2016 19:47, Valentin Kulichenko wrote:

I would still add a timeout there. In my view, it makes sense to have such 
an option. Currently a user thread can block indefinitely or loop in 
while(true) forever.

-Val

On Tue, Nov 15, 2016 at 7:10 AM, Dmitriy Karachentsev <
dkarachent...@gridgain.com> wrote:


Perfect, thanks!

On Tue, Nov 15, 2016 at 5:56 PM, Vladimir Ozerov wrote:


To avoid log pollution we usually use the LT class (an alias for 
GridLogThrottle). Please check if it can help you.

On Tue, Nov 15, 2016 at 5:48 PM, Dmitriy Karachentsev <
dkarachent...@gridgain.com> wrote:


Vladimir, thanks for your reply!

What you suggest definitely makes sense and looks like a more reasonable 
solution.

But one other thing remains. The second issue that this solution solves is 
preventing log pollution with GridServiceNotFoundException. This happens 
because GridServiceProxy#invokeMethod() is designed to catch it (and 
ClusterTopologyCheckedException) and retry over and over again until the 
service becomes available, but at the same time tons of stack traces are 
printed on the remote node.

If we want to avoid changing the API, we probably need to add an option to 
mute those exceptions in GridJobWorker.

What do you think?

On Tue, Nov 15, 2016 at 5:10 PM, Vladimir Ozerov wrote:


Hi Igniters!

I'd like to modify our public API and add

IgniteServices.serviceProxy()

method with timeout argument as part of the task
https://issues.apache.org/jira/browse/IGNITE-3862

In short, without a timeout, in the case of a serialization error or the 
like, service acquisition may hang and log errors infinitely.

Do you have any concerns about this change?

Thanks!
Dmitry.
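A timeout-bounded acquisition loop of the kind proposed might look like this self-contained sketch, where serviceProxy and the lookup supplier are hypothetical stand-ins for GridServiceProxy's internal retry, not Ignite's actual API:

```java
import java.util.function.Supplier;

public class ProxyTimeoutSketch {
    /**
     * Retries the lookup (a stand-in for one service-resolution attempt,
     * which returns null while the service is unavailable) until it
     * yields a value or timeoutMs elapses, instead of looping forever.
     */
    public static <T> T serviceProxy(Supplier<T> lookup, long timeoutMs) {
        long deadline = System.currentTimeMillis() + timeoutMs;
        while (System.currentTimeMillis() < deadline) {
            T proxy = lookup.get();
            if (proxy != null)
                return proxy;
            Thread.onSpinWait(); // retry until the deadline
        }
        throw new IllegalStateException(
            "Service not available within " + timeoutMs + " ms");
    }
}
```

Failing with an exception after the deadline gives the caller a clear signal to check the logs, rather than an indefinitely blocked thread.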







[jira] [Created] (IGNITE-3813) High contention on GridCacheMapEntry.obsolete() on loading data to data streamer with composite keys

2016-08-31 Thread Dmitry Karachentsev (JIRA)
Dmitry Karachentsev created IGNITE-3813:
---

 Summary: High contention on GridCacheMapEntry.obsolete() on 
loading data to data streamer with composite keys
 Key: IGNITE-3813
 URL: https://issues.apache.org/jira/browse/IGNITE-3813
 Project: Ignite
  Issue Type: Improvement
Affects Versions: 1.7
Reporter: Dmitry Karachentsev
 Fix For: 1.8


Loading entries with non-primitive keys through the data streamer makes 
too many calls to the metadata cache, which causes high contention on 
GridCacheMapEntry.

An example thread dump is attached.





[jira] [Created] (IGNITE-3590) Create test and first simple implementation of secondary file system.

2016-07-27 Thread Dmitry Karachentsev (JIRA)
Dmitry Karachentsev created IGNITE-3590:
---

 Summary: Create test and first simple implementation of secondary 
file system.
 Key: IGNITE-3590
 URL: https://issues.apache.org/jira/browse/IGNITE-3590
 Project: Ignite
  Issue Type: Sub-task
  Components: IGFS
Affects Versions: 1.6
Reporter: Dmitry Karachentsev
Assignee: Dmitry Karachentsev
 Fix For: 1.7


Create a new simple implementation (copy-pasted from 
IgniteHadoopIgfsSecondaryFileSystem) and tests for it as a starting point.





[jira] [Created] (IGNITE-3339) get() in explicit READ_COMMITTED transaction on OFFHEAP_TIERED cache copies data from offheap to onheap.

2016-06-20 Thread Dmitry Karachentsev (JIRA)
Dmitry Karachentsev created IGNITE-3339:
---

 Summary: get() in explicit READ_COMMITTED transaction on 
OFFHEAP_TIERED cache copies data from offheap to onheap.
 Key: IGNITE-3339
 URL: https://issues.apache.org/jira/browse/IGNITE-3339
 Project: Ignite
  Issue Type: Bug
  Components: cache
Affects Versions: 1.6
Reporter: Dmitry Karachentsev
 Fix For: 1.7








Code merge in 1.6 branch at 16-th

2016-05-13 Thread Dmitry Karachentsev
Anton V., will it still be possible to merge changes into the 1.6 branch 
on Monday the 16th? I have a fix that is waiting for review, and it would 
be nice to include it in that release (IGNITE-3087).


Thanks!
Dmitry.


[jira] [Created] (IGNITE-3117) Weblogic manual tests for HttpSession caching.

2016-05-12 Thread Dmitry Karachentsev (JIRA)
Dmitry Karachentsev created IGNITE-3117:
---

 Summary: Weblogic manual tests for HttpSession caching.
 Key: IGNITE-3117
 URL: https://issues.apache.org/jira/browse/IGNITE-3117
 Project: Ignite
  Issue Type: Test
  Components: websession
Affects Versions: 1.6
Reporter: Dmitry Karachentsev
Assignee: Dmitry Karachentsev
 Fix For: 1.7


Simple tests that allow deploying and verifying that HttpSession is cached 
properly in the grid.





[jira] [Created] (IGNITE-2995) putAll() and removeAll() operations on transactional cache are very slow on big number of entries

2016-04-13 Thread Dmitry Karachentsev (JIRA)
Dmitry Karachentsev created IGNITE-2995:
---

 Summary: putAll() and removeAll() operations on transactional 
cache are very slow on big number of entries
 Key: IGNITE-2995
 URL: https://issues.apache.org/jira/browse/IGNITE-2995
 Project: Ignite
  Issue Type: Improvement
  Components: cache
Affects Versions: 1.5.0.final
Reporter: Dmitry Karachentsev
Priority: Minor


Saving and removing 100,000 cache entries using the putAll() and 
removeAll() methods is much slower than saving/removing 10,000 entries 10 
times. For example, in my test environment the second case (10 x 10,000) 
took 3 minutes, while the first case (100,000 at once) took 24 minutes.

This issue is observed only for transactional caches.
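Until the transactional-cache path is optimized, a common mitigation is to split the bulk operation into fixed-size batches. A self-contained sketch of the chunking, where the consumer is a hypothetical stand-in for IgniteCache.putAll:

```java
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.function.Consumer;

public class BatchedPutAll {
    /**
     * Feeds the entries to the consumer in batches of at most batchSize
     * and returns the number of batches issued. Smaller batches keep the
     * per-transaction lock set (and commit cost) bounded.
     */
    public static <K, V> int putAllBatched(Map<K, V> entries, int batchSize,
                                           Consumer<Map<K, V>> putAll) {
        Map<K, V> batch = new LinkedHashMap<>();
        int batches = 0;
        for (Map.Entry<K, V> e : entries.entrySet()) {
            batch.put(e.getKey(), e.getValue());
            if (batch.size() == batchSize) {
                putAll.accept(batch);
                batch = new LinkedHashMap<>();
                batches++;
            }
        }
        if (!batch.isEmpty()) { // flush the partial tail batch
            putAll.accept(batch);
            batches++;
        }
        return batches;
    }
}
```

With the numbers above, batches of 10,000 reduced the total time roughly eightfold compared to one 100,000-entry transaction.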





Re: Stream API doesn't update cache metrics.

2016-04-08 Thread Dmitry Karachentsev

Yes, switching it to 'true' does the trick.

Thanks!

On 08.04.2016 13:52, Yakov Zhdanov wrote:

Is allowOverwrite set to 'false'?

Thanks!
--
Yakov Zhdanov, Director R&amp;D
*GridGain Systems*
www.gridgain.com

2016-04-08 10:30 GMT+03:00 Dmitry Karachentsev <dkarachent...@gridgain.com>:


Hi all.

Adding data to cache via streamer doesn't update cache metrics like
AveragePutTime, CachePuts. Was it made intentionally?

Thanks!
Dmitry.





Stream API doesn't update cache metrics.

2016-04-08 Thread Dmitry Karachentsev

Hi all.

Adding data to cache via streamer doesn't update cache metrics like 
AveragePutTime, CachePuts. Was it made intentionally?


Thanks!
Dmitry.


[jira] [Created] (IGNITE-2763) GridDhtPartitionDemander fails with assertion on partition move

2016-03-04 Thread Dmitry Karachentsev (JIRA)
Dmitry Karachentsev created IGNITE-2763:
---

 Summary: GridDhtPartitionDemander fails with assertion on 
partition move
 Key: IGNITE-2763
 URL: https://issues.apache.org/jira/browse/IGNITE-2763
 Project: Ignite
  Issue Type: Bug
  Components: cache
Affects Versions: 1.5.0.final
Reporter: Dmitry Karachentsev
 Fix For: 1.6


When a single node is started and filled with cache items, and then three 
new nodes are started, GridDhtPartitionDemander fails with an assertion 
error.

java.lang.AssertionError: Partition already done [cache=cache_name, 
fromNode=1073a1d0-5139-44d3-9dee-399637bfd001, part=0, left=[2, 259, 771, 5, 6, 
518, 774, 263, 775, 780, 271, 275, 540, 797, 30, 32, 802, 547, 807, 810, 556, 
45, 301, 813, 302, 305, 561, 55, 312, 57, 59, 575, 324, 327, 331, 336, 594, 
597, 598, 88, 859, 605, 606, 353, 867, 357, 871, 873, 363, 875, 371, 631, 887, 
383, 640, 896, 898, 899, 644, 900, 646, 903, 905, 652, 908, 653, 912, 660, 662, 
919, 153, 419, 422, 934, 172, 173, 429, 941, 176, 177, 180, 694, 953, 955, 445, 
957, 959, 448, 451, 707, 199, 201, 972, 205, 718, 974, 207, 208, 721, 725, 471, 
728, 985, 986, 475, 987, 477, 478, 225, 230, 233, 747, 237, 750, 497, 756, 503, 
505, 1018, 1019, 764, 510, 1022]]
at 
org.apache.ignite.internal.processors.cache.distributed.dht.preloader.GridDhtPartitionDemander$RebalanceFuture.partitionDone(GridDhtPartitionDemander.java:978)
at 
org.apache.ignite.internal.processors.cache.distributed.dht.preloader.GridDhtPartitionDemander$RebalanceFuture.access$1100(GridDhtPartitionDemander.java:751)
at 
org.apache.ignite.internal.processors.cache.distributed.dht.preloader.GridDhtPartitionDemander.handleSupplyMessage(GridDhtPartitionDemander.java:600)
at 
org.apache.ignite.internal.processors.cache.distributed.dht.preloader.GridDhtPreloader.handleSupplyMessage(GridDhtPreloader.java:408)
at 
org.apache.ignite.internal.processors.cache.GridCachePartitionExchangeManager$5.apply(GridCachePartitionExchangeManager.java:342)
at 
org.apache.ignite.internal.processors.cache.GridCachePartitionExchangeManager$5.apply(GridCachePartitionExchangeManager.java:335)
at 
org.apache.ignite.internal.processors.cache.GridCacheIoManager.processMessage(GridCacheIoManager.java:605)
at 
org.apache.ignite.internal.processors.cache.GridCacheIoManager.onMessage0(GridCacheIoManager.java:303)
at 
org.apache.ignite.internal.processors.cache.GridCacheIoManager.access$400(GridCacheIoManager.java:81)
at 
org.apache.ignite.internal.processors.cache.GridCacheIoManager$OrderedMessageListener.onMessage(GridCacheIoManager.java:1108)
at 
org.apache.ignite.internal.managers.communication.GridIoManager$GridCommunicationMessageSet.unwind(GridIoManager.java:2239)
at 
org.apache.ignite.internal.managers.communication.GridIoManager.unwindMessageSet(GridIoManager.java:1004)
at 
org.apache.ignite.internal.managers.communication.GridIoManager.access$1800(GridIoManager.java:103)
at 
org.apache.ignite.internal.managers.communication.GridIoManager$6.run(GridIoManager.java:973)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)





[jira] [Created] (IGNITE-2692) Set thread local context in IgnitionEx to affect cache functions

2016-02-19 Thread Dmitry Karachentsev (JIRA)
Dmitry Karachentsev created IGNITE-2692:
---

 Summary: Set thread local context in IgnitionEx to affect cache 
functions
 Key: IGNITE-2692
 URL: https://issues.apache.org/jira/browse/IGNITE-2692
 Project: Ignite
  Issue Type: Sub-task
  Components: general
Affects Versions: 1.5.0.final
Reporter: Dmitry Karachentsev
Assignee: Dmitry Karachentsev
Priority: Critical
 Fix For: 1.6


For any usage of a cache from a user thread, a thread-local context must 
be initialized in IgnitionEx to support Ignition.localIgnite() in 
marshalling functions.
In other words, any code that might be called during marshalling must be 
able to get the Ignite instance from Ignition.localIgnite().
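The pattern can be sketched with a plain thread-local; the String instance name stands in for the Ignite instance itself, and all names are illustrative of IgnitionEx's context, not its actual API:

```java
public class LocalIgniteSketch {
    // Stand-in for IgnitionEx's per-thread grid context.
    private static final ThreadLocal<String> GRID = new ThreadLocal<>();

    /** Set before invoking user code that may marshal objects. */
    public static void enter(String instanceName) {
        GRID.set(instanceName);
    }

    /** Stand-in for Ignition.localIgnite(): resolves the current grid. */
    public static String localIgnite() {
        String name = GRID.get();
        if (name == null)
            throw new IllegalStateException(
                "No Ignite instance bound to this thread");
        return name;
    }

    /** Clear after the user call returns, so the context doesn't leak. */
    public static void leave() {
        GRID.remove();
    }
}
```

Wrapping every marshaller entry point in enter()/leave() is what lets arbitrary user code resolve its owning instance without an explicit reference.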





[jira] [Created] (IGNITE-2691) Wrap all Marshaller interface usage to set thread local context in IgnitionEx

2016-02-19 Thread Dmitry Karachentsev (JIRA)
Dmitry Karachentsev created IGNITE-2691:
---

 Summary: Wrap all Marshaller interface usage to set thread local 
context in IgnitionEx
 Key: IGNITE-2691
 URL: https://issues.apache.org/jira/browse/IGNITE-2691
 Project: Ignite
  Issue Type: Sub-task
  Components: general
Affects Versions: 1.5.0.final
Reporter: Dmitry Karachentsev
Assignee: Dmitry Karachentsev
Priority: Critical
 Fix For: 1.6





