Re: Cannot find schema for object with compact footer

2017-10-26 Thread zshamrock
Yes, I suppose so, as it fails in the integration test, which uses the Ignite
client, i.e. the client node.



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Cannot find schema for object with compact footer

2017-10-23 Thread zshamrock
Slava, what is the consequence/price of disabling the compact footer? It
looks like the problem only happens in the integration tests, and never (at
least we have not observed it) in production.
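For reference, the compact footer is controlled via BinaryConfiguration; a sketch of disabling it in the XML config (it has to be set identically on every node, client and server):

```xml
<bean class="org.apache.ignite.configuration.IgniteConfiguration">
    <property name="binaryConfiguration">
        <bean class="org.apache.ignite.configuration.BinaryConfiguration">
            <!-- With the compact footer off, each binary object carries its
                 full field layout, so serialized objects get somewhat larger. -->
            <property name="compactFooter" value="false"/>
        </bean>
    </property>
</bean>
```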

This is the test (we use Spock):

class SessionsCacheITSpec extends IgniteCacheITSpec {
    @Inject
    @Subject
    IgniteCache sessionsCache

    @Inject
    SessionsRepository sessionsRepository

    def "verify on the start sessionsCache cache is empty"() {
        expect:
        sessionsCache.size() == 0
    }

    def "verify get on non existent session id returns null"() {
        setup:
        def sessionId = UUID.randomUUID().toString()

        when:
        def session = sessionsCache.get(sessionId)

        then:
        session == null
    }

    def "verify get on the existing session id session is loaded from db"() {
        setup:
        ...
        def session = new Session(...)
        sessionsRepository.save(session)

        when:
        def actualSession = sessionsCache.get(sessionId)

        then:
        actualSession == session
    }
}

where IgniteCacheITSpec is the following:

@SpringApplicationConfiguration
@IntegrationTest
@DirtiesContext
class IgniteCacheITSpec extends Specification {

    @Inject
    CacheEnvironment cacheEnvironment

    @Inject
    @Qualifier("client")
    Ignite igniteClient

    @Shared
    Ignite igniteServer

    @Configuration
    @ComponentScan([...])
    static class IgniteClientConfiguration {
    }

    def setupSpec() {
        Ignition.setClientMode(false)
        igniteServer = Ignition.start(new ClassPathResource("META-INF/grid.xml").inputStream)
    }

    def cleanupSpec() {
        def grids = Ignition.allGrids()
        // stop all clients first
        grids.findAll { it.configuration().isClientMode() }.each { it.close() }
        igniteServer.close()
    }
}


where the igniteClient is defined as a Spring bean in one of the
configuration classes, like the following:

@Bean(name = "ignite", destroyMethod = "close")
@Qualifier("client")
public Ignite ignite() {
    Ignition.setClientMode(true);
    return Ignition.start(igniteConfiguration());
}

Probably that could be due to the usage of @DirtiesContext, although that is
just a guess.





Cannot find schema for object with compact footer

2017-10-22 Thread zshamrock
From time to time we get the following integration test failure (using the
mvn test command). Sometimes it passes without any problems, sometimes it
fails, and not necessarily in the same test. It never fails when running the
test standalone, e.g. from IDEA.

Here is the failure:

Tests run: 3, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 8.418 sec
<<< FAILURE! - in SessionLineupsCacheITSpec
verify get on the existing session id returns the corresponding session
lineups(SessionLineupsCacheITSpec)  Time elapsed: 0.142 sec  <<< ERROR!
javax.cache.CacheException: class org.apache.ignite.IgniteCheckedException:
Cannot find schema for object with compact footer [typeId=709293316,
schemaId=96980434]
at SessionLineupsCacheITSpec.verify get on the existing session id returns
the corresponding session lineups(SessionLineupsCacheITSpec.groovy:48)
Caused by: org.apache.ignite.IgniteCheckedException: Cannot find schema for
object with compact footer [typeId=709293316, schemaId=96980434]
at SessionLineupsCacheITSpec.verify get on the existing session id returns
the corresponding session lineups(SessionLineupsCacheITSpec.groovy:48)
Caused by: org.apache.ignite.binary.BinaryObjectException: Cannot find
schema for object with compact footer [typeId=709293316, schemaId=96980434]

What does "Cannot find schema for object with compact footer" mean, and what
could be the cause of this inconsistent behavior, i.e. why does it pass
sometimes and fail at other times?





Re: Local node was dropped from cluster due to network problems cascading failures

2017-10-12 Thread zshamrock
Ok, we identified the root cause. It was not specifically related to Ignite,
but rather to the security settings (EC2 security group): we only had inbound
port 47100 open on the EC2 instance. But as you can see from the original
message, the errors are about the nodes running on ports 47103 and 47104,
actually all of them except 47100.

There is `TcpCommunicationSpi`
(https://apacheignite.readme.io/v1.9/docs/network-config#section-configuration),
which defines `setLocalPort` (defaults to 47100) and `setLocalPortRange`,
which defaults to 100. My assumption is that because we are running multiple
services on the same machine, every Ignite client gets its own port starting
from 47100 and up to 47200 (or 47199?) (see `setLocalPortRange` above). So as
we are running several of them, only one gets port 47100, while the others
get 47101 and 47102 (as we have a maximum of 3 running on the same machine
currently), and so on.
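If that theory holds, the inbound security group rule has to cover the whole communication port range. A sketch of pinning the range explicitly in the XML config (the values shown are just the documented defaults):

```xml
<bean class="org.apache.ignite.configuration.IgniteConfiguration">
    <property name="communicationSpi">
        <bean class="org.apache.ignite.spi.communication.tcp.TcpCommunicationSpi">
            <!-- First port TcpCommunicationSpi tries to bind to. -->
            <property name="localPort" value="47100"/>
            <!-- Ports localPort..localPort+localPortRange are candidates,
                 so the security group should allow all of them inbound. -->
            <property name="localPortRange" value="100"/>
        </bean>
    </property>
</bean>
```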

And they connect to the server node, which is listening on port 47500 (which
is open in the security group).

So during the cluster start up everything works fine.

But then, because ports 47101-... were not open on our app side, the server
could not reach back the other clients apart from the one running on port 47100.

This is my theory (but at least opening those ports fixed the problem).

Of course, there is still an open question: why does the client node start to
fail only when there is load? I would expect there to be a periodic
heartbeat, so the server should have failed to reach the client nodes almost
immediately after the cluster started (I mean the client nodes listening on
ports 47101-...).

But we only start seeing the error after a couple of hours when the system is
in use.

Could you, please, comment on this?

Thank you.





Re: Local node was dropped from cluster due to network problems cascading failures

2017-10-11 Thread zshamrock
Hi, Andrew. Thank you for the reply.

We will definitely try this. 

Another thing: today we switched to a static IP list for the TCP discovery
IP finder instead of S3-based discovery, to see whether that could be the
problem as well. So we will test static IP based discovery today. If it
doesn't make any difference, we will try the option you've suggested.
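A sketch of the static IP finder setup we switched to (the addresses below are placeholders, not our real ones), in case it is useful for comparison:

```xml
<property name="discoverySpi">
    <bean class="org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi">
        <property name="ipFinder">
            <bean class="org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder">
                <property name="addresses">
                    <list>
                        <!-- server nodes; the range covers the default discovery ports -->
                        <value>172.31.0.10:47500..47509</value>
                        <value>172.31.0.11:47500..47509</value>
                    </list>
                </property>
            </bean>
        </property>
    </bean>
</property>
```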

Will keep you updated.

Also, could you please point me to a source describing how to check the GC
logs for the Ignite client/server?
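For what it's worth, a sketch of enabling GC logging for a Java 8 JVM (the era Ignite 1.9 runs on), passed through JVM_OPTS, which bin/ignite.sh picks up; the log path is an assumption:

```shell
# Hedged example: enable GC logging before starting an Ignite node via ignite.sh.
export JVM_OPTS="$JVM_OPTS -verbose:gc -XX:+PrintGCDetails -XX:+PrintGCDateStamps -Xloggc:/var/log/ignite/gc.log"
```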

Thank you once again.





Re: Local node was dropped from cluster due to network problems cascading failures

2017-10-11 Thread zshamrock
Splitting the services into 2 separate instances, one with the high-intensity
network usage and one where only the Ignite clients are running, didn't help;
the error still persists and happens after a couple of hours of work.





Re: Local node was dropped from cluster due to network problems cascading failures

2017-10-10 Thread zshamrock
Could it be that the network bandwidth limit is reached on the EC2 instance?
r4.large has up to 10 Gigabit network performance, and we are running another
network-intensive service on those machines (although it was split onto a
separate instance today), and according to CloudWatch we are near the limit
of the network capacity, which could probably affect the Ignite clients
talking to the server.

Could it be?





Local node was dropped from cluster due to network problems cascading failures

2017-10-09 Thread zshamrock
Hi, we are running a cluster of 2 Ignite 1.9 servers on EC2. The EC2
instances are r4.large, i.e. 16GB of memory each. We use Amazon S3 based
discovery for both servers and clients.

We have another EC2 instance (r4.large, 16GB) where our app service is
running, and where the Ignite clients live. There are 5 Ignite clients
running there, as we run the app in Docker containers using
`network_mode: host`, with 5 Docker instances of the app running.

We also set the socket timeout for the `TcpDiscoverySpi` to 30 seconds, the
value recommended for EC2, for both servers and clients.

The problem is that after some period of time we get a `Local node failed`
error, and then the cluster appears to become unstable, as it constantly
reports a new, increased topology version in a loop, i.e. a cascading failure.

```
2017-10-08 17:04:14.520  WARN 6 --- [tcp-client-disco-msg-worker-#4%st%]
o.a.i.spi.discovery.tcp.TcpDiscoverySpi  : Local node was dropped from
cluster due to network problems, will try to reconnect with new id after
1ms (reconnect delay can be changed using
IGNITE_DISCO_FAILED_CLIENT_RECONNECT_DELAY system property)
[newId=85e37c0f-fd44-430f-9247-06f783589523,
prevId=48e71e9f-7548-460b-9320-2155be8a30a4, locNode=TcpDiscoveryNode
[id=48e71e9f-7548-460b-9320-2155be8a30a4, addrs=[0:0:0:0:0:0:0:1%lo,
127.0.0.1, 172.17.0.1, 172.31.29.171],
sockAddrs=[ip-172-17-0-1.us-west-2.compute.internal/172.17.0.1:0,
/0:0:0:0:0:0:0:1%lo:0, /127.0.0.1:0,
ip-172-31-29-171.us-west-2.compute.internal/172.31.29.171:0], discPort=0,
order=138, intOrder=0, lastExchangeTime=1507193821071, loc=true,
ver=1.9.0#20170302-sha1:a8169d0a, isClient=true],
nodeInitiatedFail=e5897e87-65e8-4bf8-947e-7b3f244c3458,
msg=TcpCommunicationSpi failed to establish connection to node
[rmtNode=TcpDiscoveryNode [id=48e71e9f-7548-460b-9320-2155be8a30a4,
addrs=[0:0:0:0:0:0:0:1%lo, 127.0.0.1, 172.17.0.1, 172.31.29.171],
sockAddrs=[ip-172-17-0-1.us-west-2.compute.internal/172.17.0.1:0,
/0:0:0:0:0:0:0:1%lo:0, /127.0.0.1:0,
ip-172-31-29-171.us-west-2.compute.internal/172.31.29.171:0], discPort=0,
order=138, intOrder=74, lastExchangeTime=1507392564555, loc=false,
ver=1.9.0#20170302-sha1:a8169d0a, isClient=true], errs=class
o.a.i.IgniteCheckedException: Failed to connect to node (is node still
alive?). Make sure that each ComputeTask and cache Transaction has a timeout
set in order to prevent parties from waiting forever in case of network
issues [nodeId=48e71e9f-7548-460b-9320-2155be8a30a4,
addrs=[ip-172-17-0-1.us-west-2.compute.internal/172.17.0.1:47103,
ip-172-31-29-171.us-west-2.compute.internal/172.31.29.171:47103,
/0:0:0:0:0:0:0:1%lo:47103, /127.0.0.1:47103]], connectErrs=[class
o.a.i.IgniteCheckedException: Failed to connect to address:
ip-172-17-0-1.us-west-2.compute.internal/172.17.0.1:47103, class
o.a.i.IgniteCheckedException: Failed to connect to address:
ip-172-31-29-171.us-west-2.compute.internal/172.31.29.171:47103, class
o.a.i.IgniteCheckedException: Failed to connect to address:
/0:0:0:0:0:0:0:1%lo:47103, class o.a.i.IgniteCheckedException: Failed to
connect to address: /127.0.0.1:47103]]]
 
2017-10-08 17:04:24.888  WARN 6 --- [tcp-client-disco-msg-worker-#4%st%]
o.a.i.spi.discovery.tcp.TcpDiscoverySpi  : Client node was reconnected after
it was already considered failed by the server topology (this could happen
after all servers restarted or due to a long network outage between the
client and servers). All continuous queries and remote event listeners
created by this client will be unsubscribed, consider listening to
EVT_CLIENT_NODE_RECONNECTED event to restore them.


2017-10-08 17:04:24.981  INFO 6 --- [disco-event-worker-#23%st%]
o.a.i.i.m.d.GridDiscoveryManager : Client node reconnected to
topology: TcpDiscoveryNode [id=85e37c0f-fd44-430f-9247-06f783589523,
addrs=[0:0:0:0:0:0:0:1%lo, 127.0.0.1, 172.17.0.1, 172.31.29.171],
sockAddrs=[ip-172-17-0-1.us-west-2.compute.internal/172.17.0.1:0,
/0:0:0:0:0:0:0:1%lo:0, /127.0.0.1:0,
ip-172-31-29-171.us-west-2.compute.internal/172.31.29.171:0], discPort=0,
order=188, intOrder=0, lastExchangeTime=1507193821071, loc=true,
ver=1.9.0#20170302-sha1:a8169d0a, isClient=true]
2017-10-08 17:04:24.988  INFO 6 --- [disco-event-worker-#23%st%]
o.a.i.i.m.d.GridDiscoveryManager : Topology snapshot [ver=188,
servers=2, clients=8, CPUs=12, heap=17.0GB]
2017-10-08 17:04:47.264  WARN 6 --- [tcp-client-disco-msg-worker-#4%st%]
o.a.i.spi.discovery.tcp.TcpDiscoverySpi  : Received EVT_NODE_FAILED event
with warning [nodeInitiatedEvt=TcpDiscoveryNode
[id=28db9f51-f3a3-42d2-b241-520de1124d77, addrs=[0:0:0:0:0:0:0:1%lo,
127.0.0.1, 172.17.0.1, 172.31.22.48],
sockAddrs=[ip-172-31-22-48.us-west-2.compute.internal/172.31.22.48:47500,
ip-172-17-0-1.us-west-2.compute.internal/172.17.0.1:47500,
/0:0:0:0:0:0:0:1%lo:47500, /127.0.0.1:47500], discPort=47500, order=1,
intOrder=1, lastExchangeTime=1507482264715, loc=false,
ver=1.9.0#20170302-sha1:a8169d0a, isClient=false], 
```

Re: Affinity key backups mismatch (fix affinity key backups in cache configuration or set -DIGNITE_SKIP_CONFIGURATION_CONSISTENCY_CHECK=true system property

2017-01-29 Thread zshamrock
Or maybe an even more general question: which settings are enough to
configure where? Is it enough to configure just the Ignite client with
AtomicConfiguration? Which settings are enough on the client side only, which
ones on the server, and which ones on both? I am a little bit lost. Is there
a rule of thumb to follow?



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/Affinity-key-backups-mismatch-fix-affinity-key-backups-in-cache-configuration-or-set-DIGNITE-SKIP-COy-tp10305p10307.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Re: Affinity key backups mismatch (fix affinity key backups in cache configuration or set -DIGNITE_SKIP_CONFIGURATION_CONSISTENCY_CHECK=true system property

2017-01-29 Thread zshamrock
Ok, I found the cause of the issue. It was a mismatch between the Ignite
client:

Ignition.setClientMode(true);
return Ignition.start(igniteConfiguration());

and the Ignite server running with the XML configuration.

Which raises another question: how much should the Ignite client and Ignite
server configurations have in common? And what would be the consequences of
not keeping them in sync?

My Ignite server has extensive configuration for each of the caches used,
with cache stores, eviction/expiry policies, etc., while my Ignite client
doesn't have anything related to the cache configuration at all, i.e. it is
as simple as this:

@Bean
public IgniteConfiguration igniteConfiguration() {
    final IgniteConfiguration igniteConfiguration = new IgniteConfiguration();
    final AtomicConfiguration atomicConfiguration = new AtomicConfiguration(); // 1
    atomicConfiguration.setBackups(cacheEnvironment.atomicConfigurationBackups()); // 2
    igniteConfiguration.setAtomicConfiguration(atomicConfiguration); // 3
    igniteConfiguration.setGridName(cacheEnvironment.gridName());
    igniteConfiguration.setMetricsLogFrequency(0); // 0 - to disable
    setupTcpDiscoverySpi(igniteConfiguration);
    return igniteConfiguration;
}

(adding lines 1-3 helped to solve the issue described in the topic of this
thread)



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/Affinity-key-backups-mismatch-fix-affinity-key-backups-in-cache-configuration-or-set-DIGNITE-SKIP-COy-tp10305p10306.html


Affinity key backups mismatch (fix affinity key backups in cache configuration or set -DIGNITE_SKIP_CONFIGURATION_CONSISTENCY_CHECK=true system property

2017-01-29 Thread zshamrock
I am trying to use Ignite AtomicLong, and I've configured the XML according
to the official documentation:
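(The XML snippet itself did not survive the mail archive; a sketch of what the official documentation suggests, with the backups value being an assumption:)

```xml
<bean class="org.apache.ignite.configuration.IgniteConfiguration">
    <property name="atomicConfiguration">
        <bean class="org.apache.ignite.configuration.AtomicConfiguration">
            <!-- number of backups for the internal atomics cache; an assumption -->
            <property name="backups" value="1"/>
        </bean>
    </property>
</bean>
```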







Although when I run the application, I get the following exception, and
Ignite fails to start:

org.apache.ignite.IgniteException: Affinity key backups mismatch (fix
affinity key backups in cache configuration or set
-DIGNITE_SKIP_CONFIGURATION_CONSISTENCY_CHECK=true system property)
[cacheName=ignite-atomics-sys-cache, localAffinityKeyBackups=0,
remoteAffinityKeyBackups=1, rmtNodeId=ffaee6a8-19b2-4168-8d9d-dafd4f9a1316]

I suppose Ignite atomics are implemented using a normal Ignite cache
underneath, but as we (as consumers) don't have access to this internal
cache, I would expect Ignite to take care of the proper cache setup. Instead
I see the exception above.

Is it a bug in Ignite? What does it mean and how can it be fixed? (Yes,
setting -DIGNITE_SKIP_CONFIGURATION_CONSISTENCY_CHECK=true makes it work,
but it also makes me feel uncomfortable about the approach.)

Any ideas on this?

Using Ignite 1.7.0.



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/Affinity-key-backups-mismatch-fix-affinity-key-backups-in-cache-configuration-or-set-DIGNITE-SKIP-COy-tp10305.html


Re: Old AWS SDK version, why?

2017-01-07 Thread zshamrock
Then, yes, could you please assign this ticket to me, or give me the
necessary permissions so I can do it myself and put it in progress.

Thank you.



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/Old-AWS-SDK-version-why-tp9824p9955.html


Re: Old AWS SDK version, why?

2017-01-06 Thread zshamrock
Also, specifically for the Instance Profile credentials provider, I guess
that on every getCredentials request, if the credentials are about to expire,
it calls the instance metadata endpoint
http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/iam-roles-for-amazon-ec2.html#instance-metadata-security-credentials,
i.e.

curl http://169.254.169.254/latest/meta-data/iam/security-credentials/

to obtain new temporary credentials, and so on.





--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/Old-AWS-SDK-version-why-tp9824p9948.html


Re: Old AWS SDK version, why?

2017-01-06 Thread zshamrock
Hi, Denis.
Thank you for creating the issue. I will keep an eye on it. If nobody picks
it up and I find free time, I will pick it up myself. When do you plan to
release version 1.9.0 (is there a roadmap)? (I've not seen you releasing
minor versions like 1.8.1 with some of the bug fixes, or do you?)

Regarding the credentials provider refreshing the credentials: I would say it
depends on the implementation, but I've checked the few main ones used by
DefaultAWSCredentialsProviderChain:

public class DefaultAWSCredentialsProviderChain extends AWSCredentialsProviderChain {
    public DefaultAWSCredentialsProviderChain() {
        super(new EnvironmentVariableCredentialsProvider(),
              new SystemPropertiesCredentialsProvider(),
              new ProfileCredentialsProvider(),
              new InstanceProfileCredentialsProvider());
    }
}

all of them always do some extra work in the `getCredentials` method rather
than simply returning the credentials.

ProfileCredentialsProvider has quite an interesting implementation:
public AWSCredentials getCredentials() {
    if (profilesConfigFile == null) {
        synchronized (this) {
            if (profilesConfigFile == null) {
                profilesConfigFile = new ProfilesConfigFile();
                lastRefreshed = System.nanoTime();
            }
        }
    }

    // Periodically check if the file on disk has been modified
    // since we last read it.
    //
    // For active applications, only have one thread block.
    // For applications that use this method in bursts, ensure the
    // credentials are never too stale.
    long now = System.nanoTime();
    long age = now - lastRefreshed;
    if (age > refreshForceIntervalNanos) {
        refresh();
    } else if (age > refreshIntervalNanos) {
        if (refreshSemaphore.tryAcquire()) {
            try {
                refresh();
            } finally {
                refreshSemaphore.release();
            }
        }
    }

    return profilesConfigFile.getCredentials(profileName);
}

where it periodically re-reads the profile file to check whether the
credentials were updated in the meantime.



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/Old-AWS-SDK-version-why-tp9824p9947.html


Re: Old AWS SDK version, why?

2017-01-04 Thread zshamrock
Thank you for pointing to the Github discussion, Denis.

If I understood it correctly, it doesn't apply to my case (at least according
to my understanding of the topic). I've checked the expiration properties of
the S3 objects, and they are not set:

Expiry Date:None
Expiration Rule:N/A

Also, I run other services using the same IAM role, and I don't see a similar
error happening for any of them.

Again, it still could be due to some IAM policy misconfiguration, but I would
like to try with the latest AWS SDK to be sure (as I expect the S3 SDK to
handle instance profile token expiration by itself, which potentially it
doesn't).



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/Old-AWS-SDK-version-why-tp9824p9879.html


Re: Old AWS SDK version, why?

2017-01-03 Thread zshamrock
Why do I ask? It is not only a matter of depending on the latest version;
for AWS, in this specific case, it probably even causes the error.

When an EC2 instance is configured with an instance profile, as described at
http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/iam-roles-for-amazon-ec2.html,
new credentials are issued and periodically renewed.

And this is the error we see in the log from time to time (when running
Ignite in clustered mode):

ERROR [tcp-client-disco-reconnector-#5%%] o.a.i.s.d.t.TcpDiscoverySpi
[null] - Failed to get registered addresses from IP finder on start (retrying every 2000 ms).
org.apache.ignite.spi.IgniteSpiException: Failed to list objects in the bucket: 
at org.apache.ignite.spi.discovery.tcp.ipfinder.s3.TcpDiscoveryS3IpFinder.getRegisteredAddresses(TcpDiscoveryS3IpFinder.java:168) ~[ignite-aws-1.7.0.jar!/:1.7.0]
at org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi.registeredAddresses(TcpDiscoverySpi.java:1600) ~[ignite-core-1.7.0.jar!/:1.7.0]
at org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi.resolvedAddresses(TcpDiscoverySpi.java:1549) ~[ignite-core-1.7.0.jar!/:1.7.0]
at org.apache.ignite.spi.discovery.tcp.ClientImpl.joinTopology(ClientImpl.java:475) [ignite-core-1.7.0.jar!/:1.7.0]
at org.apache.ignite.spi.discovery.tcp.ClientImpl.access$900(ClientImpl.java:118) [ignite-core-1.7.0.jar!/:1.7.0]
at org.apache.ignite.spi.discovery.tcp.ClientImpl$Reconnector.body(ClientImpl.java:1175) [ignite-core-1.7.0.jar!/:1.7.0]
at org.apache.ignite.spi.IgniteSpiThread.run(IgniteSpiThread.java:62) [ignite-core-1.7.0.jar!/:1.7.0]
Caused by: com.amazonaws.services.s3.model.AmazonS3Exception: The provided token has expired. (Service: Amazon S3; Status Code: 400; Error Code: ExpiredToken; Request ID: EFDFC0BD8F4421AA)
at com.amazonaws.http.AmazonHttpClient.handleErrorResponse(AmazonHttpClient.java:1307) ~[aws-java-sdk-core-1.10.50.jar!/:na]
at com.amazonaws.http.AmazonHttpClient.executeOneRequest(AmazonHttpClient.java:894) ~[aws-java-sdk-core-1.10.50.jar!/:na]
at com.amazonaws.http.AmazonHttpClient.executeHelper(AmazonHttpClient.java:597) ~[aws-java-sdk-core-1.10.50.jar!/:na]
at com.amazonaws.http.AmazonHttpClient.doExecute(AmazonHttpClient.java:363) ~[aws-java-sdk-core-1.10.50.jar!/:na]
at com.amazonaws.http.AmazonHttpClient.executeWithTimer(AmazonHttpClient.java:329) ~[aws-java-sdk-core-1.10.50.jar!/:na]
at com.amazonaws.http.AmazonHttpClient.execute(AmazonHttpClient.java:308) ~[aws-java-sdk-core-1.10.50.jar!/:na]
at com.amazonaws.services.s3.AmazonS3Client.invoke(AmazonS3Client.java:3595) ~[aws-java-sdk-s3-1.10.29.jar!/:na]
at com.amazonaws.services.s3.AmazonS3Client.invoke(AmazonS3Client.java:3548) ~[aws-java-sdk-s3-1.10.29.jar!/:na]
at com.amazonaws.services.s3.AmazonS3Client.listObjects(AmazonS3Client.java:647) ~[aws-java-sdk-s3-1.10.29.jar!/:na]
at com.amazonaws.services.s3.AmazonS3Client.listObjects(AmazonS3Client.java:626) ~[aws-java-sdk-s3-1.10.29.jar!/:na]
at org.apache.ignite.spi.discovery.tcp.ipfinder.s3.TcpDiscoveryS3IpFinder.getRegisteredAddresses(TcpDiscoveryS3IpFinder.java:128) ~[ignite-aws-1.7.0.jar!/:1.7.0]
... 6 common frames omitted

This probably could be due to the S3 library not working properly with the
EC2 instance profile. So, using the latest AWS S3 SDK would be a good thing
to try, to see whether this was indeed fixed in a newer version.



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/Old-AWS-SDK-version-why-tp9824p9855.html


Old AWS SDK version, why?

2017-01-02 Thread zshamrock
Why does Apache Ignite still depend on the old AWS SDK version 1.10.29?

Are there any technical reasons to do so? What if I explicitly exclude those
dependencies and use the latest 1.11.76, would it work?
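For the record, a sketch of what such an exclusion could look like in Maven (the ignite-aws version and the exact SDK artifacts to exclude are assumptions based on the versions mentioned in this thread):

```xml
<dependency>
    <groupId>org.apache.ignite</groupId>
    <artifactId>ignite-aws</artifactId>
    <version>1.7.0</version>
    <exclusions>
        <!-- drop the transitive 1.10.x SDK pulled in by ignite-aws -->
        <exclusion>
            <groupId>com.amazonaws</groupId>
            <artifactId>aws-java-sdk-s3</artifactId>
        </exclusion>
        <exclusion>
            <groupId>com.amazonaws</groupId>
            <artifactId>aws-java-sdk-core</artifactId>
        </exclusion>
    </exclusions>
</dependency>
<dependency>
    <groupId>com.amazonaws</groupId>
    <artifactId>aws-java-sdk-s3</artifactId>
    <version>1.11.76</version>
</dependency>
```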




--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/Old-AWS-SDK-version-why-tp9824.html


What does this error mean?

2016-08-15 Thread zshamrock
"Failed to update store (value will be lost as current buffer size is greater
than 'cacheCriticalSize' or node has been stopped store was repaired)?" - I
started to see this error in the log, and later on CPU consumption also grows
to almost 100% (I am not yet sure these two are related; are they?)



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/What-does-this-error-mean-tp7080.html


Re: Which ports must be open for AWS clustering

2016-08-15 Thread zshamrock
Yes, these are the exact docs I was using. But indeed, it would be nice to
have it all summarized on one page.



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/Which-ports-must-be-open-for-AWS-clustering-tp7060p7065.html


Re: H2 console starts with error

2016-08-14 Thread zshamrock
It is not about INFORMATION_SCHEMA; simply no tables are displayed there, and
this error appears instead.

Why can it be useful? To check how the data is stored for the application
cache, to better understand the SqlQuery feature of Ignite, and to
troubleshoot when something doesn't work from the code.

I've switched back to 1.6.0, and in 1.6.0 it works without any errors. So,
definitely some issue was introduced in the 1.7.0 release (it looks like the
H2 version was changed from 1.3.175 to 1.4.191, which could be a clue).



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/H2-console-starts-with-error-tp7041p7050.html


SqlQuery on hierarchy of classes doesn't work

2016-08-14 Thread zshamrock
I have a simple hierarchy: a Location abstract class, and PlayerLocation and
BallLocation extensions of the parent abstract class Location.

If I do a SQL query on the `cache`, nothing is returned (even though
cache.size() equals 5). The same is true in the H2 console: querying for all
entries returns an empty resultset (even new
`SqlQuery<>(Location.class, "1 == 1")` returns an empty list).

Although if I change everything to use the concrete class PlayerLocation,
everything works as expected: I can query entries from the H2 console, get
the expected resultset from Java code, etc.

So, is it still possible to declare a cache storing the parent abstract
class, store any of its children, and query on it later? Is there a way to
enable this, or is it simply not supported, so that I have to use 2 caches
instead, one for player and one for ball locations, which is not ideal?



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/SqlQuery-on-hierarchy-of-classes-doesn-t-work-tp7049.html


Re: H2 console starts with error

2016-08-14 Thread zshamrock
Me too +1



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/H2-console-starts-with-error-tp7041p7047.html


Why there are no official 1.6.0 and 1.7.0 ignite docker images on Docker Hub?

2016-08-06 Thread zshamrock
Why there is no official 1.6.0 and 1.7.0 ignite docker images on Docker Hub 
https://hub.docker.com/r/apacheignite/ignite/tags/?



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/Why-there-are-no-official-1-6-0-and-1-7-0-ignite-docker-images-on-Docker-Hub-tp6832.html


Re: Is Ignite worth using in its current state in production? Is it mature enough?

2016-08-06 Thread zshamrock
Thank you for the feedback, Val. I noticed the 1.7.0 version was released!
Great job!



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/Is-Ignite-worth-using-in-its-current-state-in-production-Is-it-mature-enough-tp6748p6831.html


Ignite logging and troubleshooting

2016-08-04 Thread zshamrock
How do you troubleshoot Ignite issues in production?

Regarding log files, I see there is the IGNITE_HOME/work/log directory, with
the following files in it:
 ls -1
ignite-1c980573.0.log
ignite-1c980573.0.log.lck
ignite-2d181411.0.log
ignite-7261bcd3.0.log
ignite-7261bcd3.0.log.lck
ignite-75b64de5.0.log
ignite-75b64de5.0.log.lck
ignite-7b7387d6.0.log
ignite-7b7387d6.0.log.lck
ignite-c7053883.0.log
ignite-c7053883.0.log.lck

How do I find out which one is the latest log file to `less`? Is it possible
to configure a different log file location, as well as to set up the logging
details?
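For what it's worth, a sketch of pointing Ignite at an explicit log4j configuration via the gridLogger property in grid.xml (this assumes the ignite-log4j module is on the classpath; the config path is an assumption):

```xml
<property name="gridLogger">
    <bean class="org.apache.ignite.logger.log4j.Log4JLogger">
        <!-- path (relative to IGNITE_HOME) to a log4j config that sets
             the file appender location and the log levels -->
        <constructor-arg type="java.lang.String" value="config/ignite-log4j.xml"/>
    </bean>
</property>
```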

Thank you.





--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/Ignite-logging-and-troubleshooting-tp6771.html


Re: What is the recommended/typical deployment/managing procedure for Ignite in production?

2016-08-04 Thread zshamrock
Thank you, vdpyatkov.

Also, do you recommend running Ignite from Docker in production? As I saw on
the Ignite home page, it says it was fully tested only against the Oracle
JDK, but in Docker you are using OpenJDK.



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/What-is-the-recommended-typical-deployment-managing-procedure-for-Ignite-in-production-tp6747p6769.html


What is the recommended EC2 instance to setup and run Apache Ignite?

2016-08-04 Thread zshamrock
What is the recommended EC2 instance to setup and run Apache Ignite?

Should it be Memory Optimized, or Compute Optimized? What are the
requirements for Network Performance? 

How many vCPUs and Memory should the instance have at very minimum? 

Does Instance Storage type matter?



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/What-is-the-recommended-EC2-instance-to-setup-and-run-Apache-Ignite-tp6752.html


Is Ignite worth using in its current state in production? Is it mature enough?

2016-08-04 Thread zshamrock
Hi, guys. The more I use Ignite in our system, the more it feels to me like a
not yet mature enough product. I see lots of performance complaints on the
list, with people saying that basic, essential things are not working
properly. No one has a clear performance benchmark, just marketing promises.

Also, I see some critical issues (or even blocker issues) not being fixed
immediately. Obviously, it is open source, and if someone wants a
production-quality product they should buy GridGain Professional or
Enterprise.

So, the question then is: what is the purpose of the Apache Ignite
product/project if it can't be used safely in production? Is it just a
marketing effort towards GridGain?

Don't get me wrong. We have already invested lots of time and resources into
making Ignite a core part of our product, and we will continue using it. But
the overall experience so far is that the quality/maturity of Apache Ignite
is not there yet.

Is anybody aware of any product using Apache Ignite in production? Not
GridGain, but Apache Ignite?



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/Is-Ignite-worth-using-in-its-current-state-in-production-Is-it-mature-enough-tp6748.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


What is the recommended/typical deployment/managing procedure for Ignite in production?

2016-08-04 Thread zshamrock
How do you start/stop and manage Ignite instance on the server?

So far as I can see, there is IGNITE_HOME/bin/ignite.sh, which can be used
only (?) to start the Ignite instance.
But how do you stop/restart it? (Right now I have to find the process with
`ps aux | grep ignite` and `kill` it manually.)

What is the recommended approach to stop/shut down Ignite gracefully? Why
don't you provide a Unix service wrapper/script around the Ignite
start/stop/restart/status lifecycle?

Do you recommend (and have you tested) running Ignite in Docker in
production, or is it only suitable for development? Also, even though Ignite
1.6.0 was released a while ago, there is no official Docker image for 1.6.0
(my PR https://github.com/apache/ignite/pull/757 is still open).

Anything on this topic will be helpful.
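For illustration, one common way to get a start/stop/restart/status lifecycle is to wrap ignite.sh in a systemd unit; Ignite registers a JVM shutdown hook by default, so stopping the node with SIGTERM (systemd's default KillSignal) shuts it down gracefully. The paths, user, and config file below are assumptions, not an official unit:

```ini
# /etc/systemd/system/ignite.service -- hypothetical unit; adjust paths and user
[Unit]
Description=Apache Ignite node
After=network.target

[Service]
User=ignite
Environment=IGNITE_HOME=/opt/ignite
ExecStart=/opt/ignite/bin/ignite.sh /opt/ignite/config/default-config.xml
# SIGTERM (the default KillSignal) lets Ignite's shutdown hook stop the node gracefully
Restart=on-failure

[Install]
WantedBy=multi-user.target
```

With this in place, `systemctl start|stop|restart|status ignite` replaces the manual `ps aux | grep ignite` and `kill` workflow.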



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/What-is-the-recommended-typical-deployment-managing-procedure-for-Ignite-in-production-tp6747.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Re: How Asynchronous support works underneath? 60 * N put calls every second

2016-08-04 Thread zshamrock
But how does the future get notified? What does async IO mean, and how can
it be implemented without using extra threads? Do you use some kind of
event/infinite loop?
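For illustration, the usual answer to "async without a thread per operation" is an event loop: one dispatcher thread multiplexes many pending operations and completes their futures as responses arrive, so there is no dedicated thread per pending call. The sketch below is a generic, self-contained illustration of that pattern, not Ignite's actual implementation:

```java
import java.util.Map;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.LinkedBlockingQueue;

/**
 * Hypothetical sketch (not Ignite's code): a single dispatcher thread
 * completes many futures as "network" responses arrive.
 */
public class MiniDispatcher {
    private static final class Response {
        final long id;
        final String value;
        Response(long id, String value) { this.id = id; this.value = value; }
    }

    private final Map<Long, CompletableFuture<String>> pending = new ConcurrentHashMap<>();
    private final BlockingQueue<Response> inbox = new LinkedBlockingQueue<>();

    public MiniDispatcher() {
        // The single event loop: waits for responses and completes the matching future.
        Thread loop = new Thread(() -> {
            try {
                while (true) {
                    Response r = inbox.take();
                    CompletableFuture<String> f = pending.remove(r.id);
                    if (f != null) f.complete(r.value);
                }
            } catch (InterruptedException ignored) { }
        });
        loop.setDaemon(true);
        loop.start();
    }

    /** Registers an operation and returns immediately with a future. */
    public CompletableFuture<String> submit(long id) {
        CompletableFuture<String> f = new CompletableFuture<>();
        pending.put(id, f);
        return f;
    }

    /** Simulates the network layer delivering a response. */
    public void deliver(long id, String value) {
        inbox.add(new Response(id, value));
    }

    public static void main(String[] args) {
        MiniDispatcher d = new MiniDispatcher();
        CompletableFuture<String> f1 = d.submit(1);
        CompletableFuture<String> f2 = d.submit(2);
        d.deliver(2, "two");
        d.deliver(1, "one");
        // The caller thread was never blocked until this join()
        System.out.println(f1.join() + " " + f2.join());
    }
}
```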



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/How-Asynchronous-support-works-underneath-60-N-put-calls-every-second-tp6482p6745.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Re: How Asynchronous support works underneath? 60 * N put calls every second

2016-07-26 Thread zshamrock
Hi, Val. But if it is not blocking, how can it be done without using an
additional thread? If you return a Future, the operation must be happening
in the background (i.e. on another thread), mustn't it?

Thank you.



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/How-Asynchronous-support-works-underneath-60-N-put-calls-every-second-tp6482p6539.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


How Asynchronous support works underneath? 5-10 put calls every second

2016-07-22 Thread zshamrock
The question is: what happens when I call `.withAsync()` and then call any
of the methods of the returned `IgniteAsyncSupport`? Will Ignite run the
requested operation in a new thread? Does it use a Java Executor underneath,
or does it manage its own thread pool? Is it possible to configure the
threads behind async operations (number of threads, pool size, etc.)?

What if I have to put a few items into the cache every second? Will
`IgniteAsyncSupport` handle that gracefully? (For that specific cache I also
configure an expiry policy of 5 minutes and enable write-through with a
write-behind flush frequency of 1 minute.)

So the idea behind this cache is to avoid blocking my current thread while
persisting the data, and to delegate that to the cache itself (by enabling
write-through). I also genuinely use it as a cache: I only need the last few
minutes of data in order to proceed with the business logic, which is why
the expiry policy is 5 minutes.

On a side note, are there other ways to solve the problem described above
using Ignite (other Ignite features)?
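For illustration, the write-behind idea described here (puts return immediately; a background task flushes accumulated entries to the store on a timer) can be sketched independently of Ignite. The class, method names, and thresholds below are hypothetical, not Ignite's API:

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;
import java.util.function.Consumer;

/**
 * Hypothetical write-behind buffer (not Ignite's implementation): put()
 * returns immediately; a background task flushes accumulated entries
 * to the store in batches.
 */
public class WriteBehindBuffer<K, V> {
    private final Map<K, V> buffer = new HashMap<>();
    private final Consumer<Map<K, V>> store; // e.g. a JDBC batch writer
    private final ScheduledExecutorService flusher =
        Executors.newSingleThreadScheduledExecutor(r -> {
            Thread t = new Thread(r);
            t.setDaemon(true);
            return t;
        });

    public WriteBehindBuffer(Consumer<Map<K, V>> store, long flushFrequencyMs) {
        this.store = store;
        // Periodic flush, analogous to a write-behind flush frequency
        flusher.scheduleAtFixedRate(this::flush, flushFrequencyMs,
            flushFrequencyMs, TimeUnit.MILLISECONDS);
    }

    /** Non-blocking from the caller's perspective: just records the write. */
    public synchronized void put(K key, V value) {
        buffer.put(key, value);
    }

    /** Drains the buffer and hands one batch to the store. */
    public synchronized void flush() {
        if (buffer.isEmpty()) return;
        store.accept(new HashMap<>(buffer));
        buffer.clear();
    }

    public static void main(String[] args) {
        List<Map<String, Integer>> batches = new ArrayList<>();
        WriteBehindBuffer<String, Integer> buf = new WriteBehindBuffer<>(batches::add, 60_000);
        buf.put("a", 1);
        buf.flush(); // in normal operation the background task does this
        System.out.println(batches);
    }
}
```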



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/How-Asynchronous-support-works-underneath-5-10-put-calls-every-second-tp6482.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Re: Removing or overwriting cache value, but still able to get it from the cache

2016-07-19 Thread zshamrock
Thank you.

Coming up with a reproducible example is a challenge and will take time, but
I need this issue solved, so I will try to put together something you can
reproduce locally.

Regarding /"In your particular case, I would expect the removeAll() removes
the provided keys from all caches and from the persistence store, so the
behavior you're describing is weird. The reproducible example would be
really useful."/ — the item is removed from the data store not because of
cache.removeAll() (I only configured readThrough, and the deleteAll() and
delete() methods of my CacheStoreAdapter simply throw
UnsupportedOperationException), but because on a new session start there is
logic that overwrites the old values with the new ones in the database. So
after the cache becomes empty via removeAll(), the next call to get() should
retrieve the latest up-to-date value, which is not happening.

Again, I will try to come up with a reproducible example.

Also, I am testing the very same application without the near cache to see
if it makes any difference.





--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/Removing-or-overwriting-cache-value-but-still-able-to-get-it-from-the-cache-tp6334p6372.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Re: Removing or overwriting cache value, but still able to get it from the cache

2016-07-18 Thread zshamrock
Can it really be because of the near cache? The near cache section of the
Ignite documentation (https://apacheignite.readme.io/docs/near-caches)
doesn't say much about the correct usage or the caveats of near caches. Is
there more I can read somewhere?

If I remove values from the near cache, are those changes propagated to the
server cache? Or will the next read from the near cache fetch the value from
the server cache, which was never notified about the removal?

How are operations on the near cache synchronized with the server cache?
Is there a configuration option to modify this behavior?

Thank you.



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/Removing-or-overwriting-cache-value-but-still-able-to-get-it-from-the-cache-tp6334p6351.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Re: Removing or overwriting cache value, but still able to get it from the cache

2016-07-18 Thread zshamrock
Yes, Pavel, I will try to come up with something reproducible. I am not sure
whether it will be minimal or just a complete app setup (which is a chunk of
work, but I will look into it).

Any ideas/theories on what I should look at in the meantime?

Thank you.



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/Removing-or-overwriting-cache-value-but-still-able-to-get-it-from-the-cache-tp6334p6348.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Re: Removing or overwriting cache value, but still able to get it from the cache

2016-07-18 Thread zshamrock
Another question: how can I configure the logger used from the cache store?
I use Logback, so
/private static final Logger logger =
LoggerFactory.getLogger(SensorsToSessionsCacheStore.class);/, and make debug
calls:

@Override
public final V load(final K key) throws CacheLoaderException {
    if (logger.isDebugEnabled()) {
        logger.debug(String.format("Loading %s for %s",
            this.getClass().getSimpleName(), key));
    }
    final V value = doLoad(key);
    if (logger.isDebugEnabled()) {
        logger.debug(String.format("Loaded value %s for %s in %s",
            value, key, this.getClass().getSimpleName()));
    }
    return value;
}

And I run Ignite with -v (so not in quiet mode), but I don't see any debug
statements related to the store in the log. Do I need to (can I?) configure
it somehow to show debug logs for my cache store classes (which are packaged
in a jar placed in the ignite/libs directory)?
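For what it's worth, with Logback the store's debug level is normally enabled in logback.xml; whether Logback is picked up at all depends on which SLF4J binding is on the Ignite node's classpath. The package name below is an assumption based on the class mentioned above:

```xml
<configuration>
  <appender name="CONSOLE" class="ch.qos.logback.core.ConsoleAppender">
    <encoder><pattern>%d %-5level %logger - %msg%n</pattern></encoder>
  </appender>
  <!-- hypothetical package of the cache store classes -->
  <logger name="com.example.cachestore" level="DEBUG"/>
  <root level="INFO"><appender-ref ref="CONSOLE"/></root>
</configuration>
```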





--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/Removing-or-overwriting-cache-value-but-still-able-to-get-it-from-the-cache-tp6334p6339.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Re: Removing or overwriting cache value, but still able to get it from the cache

2016-07-18 Thread zshamrock
So, we have a REST endpoint which we can call with a session id to stop a
session, which in turn clears the cache. Visor shows that this works (the
cache is empty afterwards).

So, regarding the cache store (and read-through), my expectations were:
- because the cache is empty now,
- the next call to get() will read the value from the database,
- and it will be the latest value I have there.

But what I see is that `get(sensorId)` still returns the previous session id
(the one created before the current one), which, as I mentioned before, is
no longer even in the database (we overwrite the sensor id's session id
association every time we start a new session).

Could it be because of the near cache? 

Also, if it helps: Ignite is running as a standalone process, and the
application is running in Docker (with network mode = host, so it shares the
network with the host system), all on AWS EC2.

Thank you.



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/Removing-or-overwriting-cache-value-but-still-able-to-get-it-from-the-cache-tp6334p6338.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Re: Removing or overwriting cache value, but still able to get it from the cache

2016-07-18 Thread zshamrock
Thank you, Pavel. 

This is fine. But the problem is that the database holds the latest values,
the ones I see in Visor, yet the cache still returns the old value (which is
no longer in the database). How can that be?



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/Removing-or-overwriting-cache-value-but-still-able-to-get-it-from-the-cache-tp6334p6337.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Removing or overwriting cache value, but still able to get it from the cache

2016-07-17 Thread zshamrock
I have an interesting problem.

I have one cache, sensorsToSessions, mapping String to String, i.e. sensor
id -> session id.

When a session is started I overwrite whatever is in the cache with the
sensor ids used in the current session, i.e.:
/sensorsToSessionsCache.putAll(sensorsToSessions);/

Also when the session is stopped I remove the items from the cache, i.e.:

/sensorsToSessionsCache.removeAll(sensorsIds);/

Connecting to Ignite using Visor shows that it works (/cache -scan -c=@c4/),
i.e.

- after putting the items it prints:

visor> cache -scan -c=@c4
Entries in cache: sandbox.sensorsToSessions
+==================+==========+==================+======================================+
|    Key Class     |   Key    |   Value Class    |                Value                 |
+==================+==========+==================+======================================+
| java.lang.String | 50397794 | java.lang.String | 4ae52f51-4c2a-11e6-a169-667afaa8cd5d |
| java.lang.String | 50397793 | java.lang.String | 4ae52f51-4c2a-11e6-a169-667afaa8cd5d |
| java.lang.String | 50397783 | java.lang.String | 4ae52f51-4c2a-11e6-a169-667afaa8cd5d |
| java.lang.String | 50397776 | java.lang.String | 4ae52f51-4c2a-11e6-a169-667afaa8cd5d |
| java.lang.String | 50397846 | java.lang.String | 4ae52f51-4c2a-11e6-a169-667afaa8cd5d |
| java.lang.String | 50397828 | java.lang.String | 4ae52f51-4c2a-11e6-a169-667afaa8cd5d |
| java.lang.String | 50397817 | java.lang.String | 4ae52f51-4c2a-11e6-a169-667afaa8cd5d |
| java.lang.String | 50397812 | java.lang.String | 4ae52f51-4c2a-11e6-a169-667afaa8cd5d |
| java.lang.String | 50397811 | java.lang.String | 4ae52f51-4c2a-11e6-a169-667afaa8cd5d |
| java.lang.String | 50397801 | java.lang.String | 4ae52f51-4c2a-11e6-a169-667afaa8cd5d |
+------------------+----------+------------------+--------------------------------------+

- after removing the items it prints:

visor> cache -scan -c=@c4
Cache: sandbox.sensorsToSessions is empty

However, and this is where the issue is, doing /final String sessionId =
sensorsToSessionsCache.get(sensorId);/ always returns the previous session
id, i.e. from the log:

/Found sessionId df6f12a0-4c28-11e6-a169-667afaa8cd5d for sensorId 50397783/

So, no matter whether I overwrite the items or remove them completely, and
even though Visor shows the operations worked, getting the item from the
cache in code always returns the previous value.

How could this be? Have I misconfigured the Ignite cluster? I am using a
near cache, btw, for the sensorsToSessionsCache.

There is 1 server node and 2 client nodes in the topology.

Also, if it helps, I have never seen the cache store triggered for this
cache (at least I don't see anything in the log).

This is the cache config I am using:

[the XML cache configuration was stripped by the mailing list archive]
And this is how I @Inject/instantiate the cache instance into the
application components (so, essentially, it creates a near cache over the
same server cache on the client nodes):

@Bean
@Qualifier("sensorsToSessionsCache")
public IgniteCache<String, String> sensorsToSessionsCache() {
    final NearCacheConfiguration<String, String> nearCacheConfiguration =
        new NearCacheConfiguration<String, String>()
            .setNearEvictionPolicy(
                new LruEvictionPolicy<>(cacheEnvironment.sensorsToSessionsNearCacheEvictionSize()))
            .setNearStartSize(cacheEnvironment.sensorsToSessionsNearCacheStartSize());
    return ignite().createNearCache(cacheEnvironment.sensorsToSessionsCacheName(),
        nearCacheConfiguration);
}

I am running Ignite 1.6.0 on Linux.





--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/Removing-or-overwriting-cache-value-but-still-able-to-get-it-from-the-cache-tp6334.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Re: Strange collocated distributed set behavior

2016-06-28 Thread zshamrock
Thank you, Andrey. I will be monitoring the progress of this issue.



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/Strange-collocated-distributed-set-behavior-tp5643p5963.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Re: Strange collocated distributed set behavior

2016-06-24 Thread zshamrock
Hi, Andrey. Any progress with this? Were you able to reproduce it on your
machine?



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/Strange-collocated-distributed-set-behavior-tp5643p5884.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Why ignite-indexing doesn't use/work the latest H2 version

2016-06-20 Thread zshamrock
I found that if I use the latest H2 version 1.4.192, Ignite fails on startup
with the following error:

java.lang.NoClassDefFoundError: org/h2/constant/SysProperties
    at org.apache.ignite.internal.processors.query.h2.IgniteH2Indexing.start(IgniteH2Indexing.java:1487)
    at org.apache.ignite.internal.processors.query.GridQueryProcessor.start(GridQueryProcessor.java:171)
    at org.apache.ignite.internal.IgniteKernal.startProcessor(IgniteKernal.java:1549)
    at org.apache.ignite.internal.IgniteKernal.start(IgniteKernal.java:869)
    at org.apache.ignite.internal.IgnitionEx$IgniteNamedInstance.start0(IgnitionEx.java:1736)
    at org.apache.ignite.internal.IgnitionEx$IgniteNamedInstance.start(IgnitionEx.java:1589)
    at org.apache.ignite.internal.IgnitionEx.start0(IgnitionEx.java:1042)
    at org.apache.ignite.internal.IgnitionEx.startConfigurations(IgnitionEx.java:964)
    at org.apache.ignite.internal.IgnitionEx.start(IgnitionEx.java:930)
    at org.apache.ignite.internal.IgnitionEx.start(IgnitionEx.java:870)
    at org.apache.ignite.Ignition.start(Ignition.java:397)

Looking into the source code, the IgniteH2Indexing class uses the following
imports:

import org.h2.constant.ErrorCode;
import org.h2.constant.SysProperties;

which, it looks like, were moved in the latest H2 version into the
org.h2.engine (SysProperties) and org.h2.api (ErrorCode) packages.

Why doesn't ignite-indexing use the latest H2 (or at least version 1.4),
instead of relying on 1.3.175? Are there reasons for this?



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/Why-ignite-indexing-doesn-t-use-work-the-latest-H2-version-tp5765.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


SqlQuery and removing/evict items from the cache

2016-06-20 Thread zshamrock
When items are removed from the cache explicitly, or due to the eviction or
expiration policies, does Ignite adjust the number of entries in the
in-memory H2 database, so as to keep its size in sync with the actual items
in the cache?



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/SqlQuery-and-removing-evict-items-from-the-cache-tp5754.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Behind cache ScanQuery

2016-06-20 Thread zshamrock
Hi, how is ScanQuery implemented? Does it, as the name says, scan all the
entries in the cache, so that if there are a lot of entries this operation
could take a while to complete? And H2 is not used for ScanQuery, since it
is not a SqlQuery, correct?



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/Behind-cache-ScanQuery-tp5753.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Re: Strange collocated distributed set behavior

2016-06-17 Thread zshamrock
Any comment from the Ignite/DataGrid team? It looks like a bug to me.



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/Strange-collocated-distributed-set-behavior-tp5643p5707.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Strange collocated distributed set behavior

2016-06-15 Thread zshamrock
Here is the sample code:

package experiments.ignite;

import java.util.Arrays;

import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteLogger;
import org.apache.ignite.IgniteSet;
import org.apache.ignite.Ignition;
import org.apache.ignite.configuration.CollectionConfiguration;
import org.apache.ignite.lang.IgniteRunnable;
import org.apache.ignite.resources.IgniteInstanceResource;
import org.apache.ignite.resources.LoggerResource;

public class DistributedSetDemo {
    public static void main(final String[] args) {
        final Ignite ignite = Ignition.start();
        final CollectionConfiguration collectionConfiguration = new CollectionConfiguration();
        collectionConfiguration.setCollocated(true); // 1
        //collectionConfiguration.setBackups(1); // 2
        System.out.printf("Started node %s\n", ignite.cluster().localNode().id());
        IgniteSet<Long> numbers = ignite.set("numbers", null);
        if (numbers == null) {
            numbers = ignite.set("numbers", collectionConfiguration);
        }
        System.out.printf("[local] Size of the numbers distributed set is %d, and items are %s\n",
            numbers.size(),
            Arrays.toString(numbers.toArray(new Long[numbers.size()])));
        System.out.printf("[local] Is collocated? %b\n", numbers.collocated());
        System.out.printf("[local] numbers contains 42? %b\n", numbers.contains(42L));
        numbers.affinityRun(new IgniteRunnable() {
            @IgniteInstanceResource
            private Ignite ignite;

            @LoggerResource
            private IgniteLogger logger;

            @Override
            public void run() {
                System.out.println(String.format("[affinity] Running on %s node",
                    ignite.cluster().localNode().id()));
                final IgniteSet<Long> affinityNumbers = ignite.set("numbers", null);
                System.out.printf("[affinity] Size of the numbers distributed set is %d, and items are %s\n",
                    affinityNumbers.size(),
                    Arrays.toString(affinityNumbers.toArray(new Long[affinityNumbers.size()])));
                System.out.printf("[affinity] Is collocated? %b\n", affinityNumbers.collocated());
                System.out.printf("[affinity] numbers contains 42? %b\n", affinityNumbers.contains(42L));
            }
        });
        numbers.add(42L);
    }
}


or here https://gist.github.com/zshamrock/ac5e907bf1b091ad05570f77ae1ba69f.

Below are different scenarios with the actual and expected behavior.

Scenario #1

 - Run first instance 
Started node 54e59a6c-dc40-4038-b225-b7f8a0a343cf
[local] Size of the numbers distributed set is 0, and items are []
[local] Is collocated? true
[local] numbers contains 42? false
[affinity] Running on 54e59a6c-dc40-4038-b225-b7f8a0a343cf node
[affinity] Size of the numbers distributed set is 0, and items are []
[affinity] Is collocated? true
[affinity] numbers contains 42? false

- Run second instance
(from first instance console)
[affinity] Running on 54e59a6c-dc40-4038-b225-b7f8a0a343cf node
[affinity] Size of the numbers distributed set is 1, and items are [42]
[affinity] Is collocated? true
[affinity] numbers contains 42? true

(from second instance console)
Started node 83429a51-0e6f-48d3-a991-b0fbbe6ac58e
[local] Size of the numbers distributed set is 0, and items are []
[local] Is collocated? true
[local] numbers contains 42? true

- Run third instance
(from first instance console)
[affinity] Running on 54e59a6c-dc40-4038-b225-b7f8a0a343cf node
[affinity] Size of the numbers distributed set is 1, and items are [42]
[affinity] Is collocated? true
[affinity] numbers contains 42? true

(from third instance console)
Started node e844d3fc-e9c2-4bb3-8dab-efe44281c96f
[local] Size of the numbers distributed set is 0, and items are []
[local] Is collocated? true
[local] numbers contains 42? true

- Stop first instance

- Run fourth instance
(from third instance console)
[affinity] Running on e844d3fc-e9c2-4bb3-8dab-efe44281c96f node
[affinity] Size of the numbers distributed set is 0, and items are []
[affinity] Is collocated? true
[affinity] numbers contains 42? true

(from fourth instance console)
Started node b3aff359-80eb-4a26-aa3a-f8ba16d248ee
[local] Size of the numbers distributed set is 0, and items are []
[local] Is collocated? true
[local] numbers contains 42? true

*Actual behavior*
See above

*Expected behavior*
1. As there are no backups configured, I did not expect the set's ownership
to be taken over by the 3rd instance.

2. Even if the 3rd instance took ownership of the set, the output from the
affinity run is strange: compared to the original affinity output from the
1st instance (which reported the correct size and toArray()), the 3rd
instance only reports "contains" correctly, but returns empty results for
size() and toArray(). Which is weird.

3. I still have expected "local" acces

Running an infinite job? (use case inside) or alternatives

2016-06-05 Thread zshamrock
Are there features in Ignite which would support running an infinite (for as
long as the cluster is up and running) job? For example, continuously
reading values from a distributed queue? The goal is to implement the
producer/consumer pattern, where there could be multiple producers, but I
want to limit the number of consumers, ideally per specific key/group, or if
that is not possible, just one consumer per queue.

If I asynchronously submit an affinity Ignite job with `queue.affinityRun()`,
what is the implication of this job never finishing? Will it then consume a
thread from the ExecutorService thread pool on the running node forever?

To give better context, this is the problem I am trying to solve (maybe
there are other approaches, and I am looking in completely the wrong
direction?):
- application events arrive periodically (based on application state
changes)
- I have to accumulate these events until the block of events is "complete"
(completion is defined by an application rule); until the group is complete,
nothing can be processed
- when the group is complete, I have to process all of the events in the
group (as one complete chunk), while still accepting new events arriving for
the next, incomplete group
- and repeat from the beginning

So far, I came up with the following solution:
- collect and keep all the events in a distributed IgniteQueue
- when the application signals the completion of a group, trigger
`queue.affinityRun()` (I have to peek before removing an event from the
queue, so I want to run the processing logic on the node where the queue is
stored; the entries are small and run in collocated mode, so the peek will
not make an unnecessary network call)
[the reason for the peek is that even when I receive the group-completion
event, due to the way events are stored (in a queue), I don't know where the
group ends, only where it starts (the head of the queue); but by looking at
the event itself, I can detect whether it still belongs to the same group or
already to a new incomplete group. This is why I have to peek: if I
poll/take first, I would have to put the element back at the head of the
queue (which is obviously not possible, as it is a queue and not a deque),
or store this element/event somewhere else and, on the next submitted job,
start with the stored event as the "head" of the queue before switching back
to the real queue. As I don't want this extra complexity, I am ready to pay
the price of an extra peek before the take]
- implement a custom CollisionSpi which understands whether there is already
a running job for the given queue, and if so, keeps the newly submitted job
in the waiting list
[again, due to how events are stored (in a queue), I cannot allow multiple
jobs to run against the same queue at the same time, as taking an element
from the middle of a group that is already being processed is obviously an
error, so I have to limit the number of parallel jobs against a given queue
to 1]
- it also requires submitting a new Ignite job (a distributed closure)
against the queue every time the application generates a group-completion
event, which schedules another round of queue processing (also see above on
the overall number of simultaneous jobs)

I thought about other alternative solutions, but all of them turned out to
be more complex and to involve more moving parts (for example, for the
distributed queue Ignite manages atomicity and consistency; with other
approaches I would have to do all of that manually, which I want to
minimize) and more logic to maintain and verify for correctness.

Is there any other suitable alternative for the problem described above?
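For illustration, the peek-before-take group draining described above can be sketched with a plain java.util.Queue; in the real setup this loop would run inside the affinityRun closure against the IgniteQueue. The group-detection predicate and event format below are hypothetical:

```java
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;
import java.util.Queue;
import java.util.function.Predicate;

/**
 * Hypothetical sketch of draining one complete group from the head of a
 * queue: peek first, and only take the element if it still belongs to the
 * current group, so no element from the next group is ever removed.
 */
public class GroupDrainer {
    public static <E> List<E> drainGroup(Queue<E> queue, Predicate<E> inGroup) {
        List<E> group = new ArrayList<>();
        // Peek to inspect the head without committing to the take
        for (E head = queue.peek(); head != null && inGroup.test(head); head = queue.peek()) {
            group.add(queue.poll()); // safe: peek confirmed it belongs to the group
        }
        return group;
    }

    public static void main(String[] args) {
        // "g1:"/"g2:" prefixes stand in for the application's group rule
        Queue<String> events = new ArrayDeque<>(Arrays.asList("g1:a", "g1:b", "g2:a"));
        List<String> g1 = drainGroup(events, e -> e.startsWith("g1:"));
        System.out.println(g1 + " remaining=" + events);
    }
}
```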





--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/Running-an-infinite-job-use-case-inside-or-alternatives-tp5430.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Re: Clear cache from Visor throws an exception

2016-05-24 Thread zshamrock
Yes, I am planning to upgrade soon to the latest 1.6 release, and will keep
you updated on the issue.

Thank you.



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/Clear-cache-from-Visor-throws-an-exception-tp4848p5137.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Flush the cache into the persistence store manually

2016-05-22 Thread zshamrock
Is it possible to flush the cache into the persistence store (if
write-through/behind is used) manually, i.e. not waiting for either the
flush size or the flush frequency threshold, but triggering it directly from
the application?



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/Flush-the-cache-into-the-persistence-store-manually-tp5077.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Re: Can't stop Ignite instance using Ignition.stopAll() or Ignition.kill() until writeBehindFlushFrequency passed

2016-05-16 Thread zshamrock
[screenshot attachment not preserved in the archive]

And then it hangs for about 5 minutes after each IT case.
Does the picture above help somehow?

How can I take a thread dump? I've tried jstack, although I could not find
anything meaningful in the report it produced.



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/Can-t-stop-Ignite-instance-using-Ignition-stopAll-or-Ignition-kill-until-writeBehindFlushFrequency-pd-tp4837p4976.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Re: How REPLICATED cache is more performant comparing to PARTITIONED

2016-05-13 Thread zshamrock
Hi, Denis. Yes, actually this line "All the operations that are performed on
server nodes and use exchange rates in their calculation won’t need to go to
some primary node to load an exchange rate because all the data will be
located locally" answers my question.

I was just thinking about my use case, where I use the Ignite client and a
near cache: even though the data is small and updates are infrequent, it
doesn't make sense to make the cache REPLICATED, as I don't get any benefit
from it in my case.

Thank you.



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/How-REPLICATED-cache-is-more-performant-comparing-to-PARTITIONED-tp4915p4939.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Re: How REPLICATED cache is more performant comparing to PARTITIONED

2016-05-13 Thread zshamrock
Alexei, I am still trying to understand the use case for the REPLICATED
cache.

As for a compute job, I would use affinity collocation, so my job would
utilize the local cache anyway.

If I use Ignite in client mode (meaning my client node doesn't hold any
data), it still has to make a network call to the primary node of the
REPLICATED cache.

So, what are the use cases for the REPLICATED cache? Am I missing anything?

Thank you, Alexei.



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/How-REPLICATED-cache-is-more-performant-comparing-to-PARTITIONED-tp4915p4935.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


How REPLICATED cache is more performant comparing to PARTITIONED

2016-05-12 Thread zshamrock
I have a couple of caches which are initialized on a system event and then
stay almost untouched for the next 1 or 2 hours, with only reads afterwards.
First of all, is this a good use case for a REPLICATED cache? The data is
small, just an int-to-int mapping.

The main question is why a REPLICATED cache behaves better for frequent
reads compared to PARTITIONED. As I understood from
https://apacheignite.readme.io/docs/cache-modes#replicated-mode, a
PARTITIONED cache with backups on all nodes is used underneath. Is affinity
collocation still in place for the REPLICATED cache? If so, it would have to
go to the primary node every time anyway, no different from PARTITIONED. So,
what are the factors that give better read performance for a REPLICATED
cache compared to PARTITIONED?



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/How-REPLICATED-cache-is-more-performant-comparing-to-PARTITIONED-tp4915.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Re: Clear cache from Visor throws an exception

2016-05-11 Thread zshamrock
Could it be because of the cache store implementation classes I use? That
some of them trigger this exception?

I will try the latest nightly build you've mentioned, and report back with
the results.



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/Clear-cache-from-Visor-throws-an-exception-tp4848p4880.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Re: Clear cache from Visor throws an exception

2016-05-10 Thread zshamrock
1.5.0.final



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/Clear-cache-from-Visor-throws-an-exception-tp4848p4862.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Re: Cache metrics - cache hits and misses are 0, also isEmpty in inconsistent state with getSize

2016-05-06 Thread zshamrock
Thank you, good to know.



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/Cache-metrics-cache-hits-and-misses-are-0-also-isEmpty-in-inconsistent-state-with-getSize-tp4769p4810.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


CacheStore implementation dependencies for Ignite

2016-05-05 Thread zshamrock
If I have read/write-through enabled, implement a CacheStore, and use a
config file to set up Ignite:

how do I provide my CacheStore implementation (and the necessary
dependencies, like the database driver and my application-specific classes)
to the Ignite instance I run in the cloud?

Should I put everything into /libs folder of the Ignite distribution?
Will then Ignite be able to detect the classes mentioned in the config file
from those jars?

For, the Docker deployment I believe EXTERNAL_LIBS would be sufficient (does
it support file:/// URL, btw?).

But for the binary distribution, where to put my cache store implementation
and the corresponding dependencies?
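For context, a minimal sketch of what such a store class might look like (the class name and the SQL hinted at in the comments are hypothetical); the point is that this class, plus every jar it depends on (e.g. the JDBC driver), must be loadable by the server nodes, typically by dropping the jars into IGNITE_HOME/libs:

```java
import javax.cache.Cache;
import org.apache.ignite.cache.store.CacheStoreAdapter;

// Hypothetical read/write-through store backed by a relational database.
public class SessionCacheStore extends CacheStoreAdapter<String, String> {
    @Override
    public String load(String key) {
        // e.g. SELECT payload FROM sessions WHERE id = ?
        return null;
    }

    @Override
    public void write(Cache.Entry<? extends String, ? extends String> entry) {
        // e.g. INSERT or UPDATE the row for entry.getKey()
    }

    @Override
    public void delete(Object key) {
        // e.g. DELETE FROM sessions WHERE id = ?
    }
}
```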



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/CacheStore-implementation-dependencies-for-Ignite-tp4787.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Re: Cache metrics - cache hits and misses are 0, also isEmpty in inconsistent state with getSize

2016-05-05 Thread zshamrock
So, it is not recommended to turn them on in production?

Then how do you measure your cache's overall behavior?

Could it be controlled over JMX?



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/Cache-metrics-cache-hits-and-misses-are-0-also-isEmpty-in-inconsistent-state-with-getSize-tp4769p4783.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Re: Is it possible from ignitevisor to say which nodes are server nodes and which are client

2016-05-05 Thread zshamrock
Great!



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/Is-it-possible-from-ignitevisor-to-say-which-nodes-are-server-nodes-and-which-are-client-tp4770p4779.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


How rich is Spring support for Ignite configuration file

2016-05-04 Thread zshamrock
Does the Spring configuration file used to configure Ignite support property
placeholders ${} and loading properties from a file, as well as the
expression language #{}?

In general, what is supported by Ignite's Spring configuration and what is not?



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/How-rich-is-Spring-support-for-Ignite-configuration-file-tp4774.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


How I can be sure that Near cache works and configured properly

2016-05-04 Thread zshamrock
How can I be sure that the near cache works and is configured properly? I could
not find anything related to the near cache either in Visor or in the JVM
MBeans.
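One way to probe this (my own suggestion, not something from the docs) is to peek at the local NEAR partition of the cache from the client node; the key name below is hypothetical:

```java
import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.cache.CachePeekMode;

public class NearCacheCheck {
    // Prints how many entries the local near cache holds, and the
    // near-cached value of one key, without touching remote nodes.
    static void printNearState(Ignite client, String cacheName) {
        IgniteCache<String, Integer> cache = client.cache(cacheName);

        // Read a key twice; the second read should be served from the
        // near cache if it is configured correctly.
        cache.get("someKey");
        cache.get("someKey");

        // Entries held locally in the NEAR partition
        System.out.println("near size: " + cache.localSize(CachePeekMode.NEAR));
        // Value as seen by the near cache, no remote call involved
        System.out.println("near peek: "
                + cache.localPeek("someKey", CachePeekMode.NEAR));
    }
}
```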



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/How-I-can-be-sure-that-Near-cache-works-and-configured-properly-tp4771.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Is it possible from ignitevisor to say which nodes are server nodes and which are client

2016-05-04 Thread zshamrock
Is it possible from ignitevisor to tell which nodes are server nodes and which
are clients?

 



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/Is-it-possible-from-ignitevisor-to-say-which-nodes-are-server-nodes-and-which-are-client-tp4770.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Cache metrics - cache hits and misses are 0, also isEmpty in inconsistent state with getSize

2016-05-04 Thread zshamrock
Another question from the Gitter chat.

Why is cache.metrics().getCacheHits() 0, even though I do call cache.get(key)
and it returns a stored value from the cache?
How can that be?

 

metrics.getSize() returns 1, but metrics.isEmpty() also returns true

Do I need somehow to enable the metrics?

For the image above, I've used SpringCacheManager to enable read-through
caching on my datastore repository.

While trying to reproduce it with a simpler setup locally (without
SpringCacheManager), like the one below:

package experiments.ignite;

import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.Ignition;
import org.apache.ignite.configuration.CacheConfiguration;
import org.apache.ignite.configuration.IgniteConfiguration;

public class CacheMetrics {
    public static void main(final String[] args) throws InterruptedException {
        final CacheConfiguration<String, Integer> cacheConfiguration =
                new CacheConfiguration<>("sample");

        final IgniteConfiguration igniteConfiguration = new IgniteConfiguration()
                .setGridName("experiments")
                .setMetricsUpdateFrequency(1) // every millisecond
                .setCacheConfiguration(cacheConfiguration);

        final Ignite ignite = Ignition.start(igniteConfiguration);

        final IgniteCache<String, Integer> cache = ignite.cache("sample");
        cache.put("a", (int) 'a');
        final int value = cache.get("a");
        assert value == 97;

        @SuppressWarnings("unused") final Integer missing = cache.get("z");

        assert cache.metrics().getCacheHits() == 0;
        assert cache.metrics().getCacheMisses() == 0;
        assert cache.metrics().getSize() == 1;
        assert !cache.metrics().isEmpty();
    }
}

It worked fine (i.e. isEmpty() is false now), but both getCacheHits() and
getCacheMisses() are still 0.
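A likely explanation (my assumption, not confirmed in this thread) is that cache statistics are disabled by default, so the hit/miss counters never move; enabling them on the cache configuration should make them count:

```java
import org.apache.ignite.configuration.CacheConfiguration;

public class EnableStats {
    public static void main(String[] args) {
        CacheConfiguration<String, Integer> cfg =
                new CacheConfiguration<>("sample");
        // Statistics are off by default; without this call,
        // getCacheHits()/getCacheMisses() stay at 0 even while
        // the cache is actively serving reads.
        cfg.setStatisticsEnabled(true);
    }
}
```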




--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/Cache-metrics-cache-hits-and-misses-are-0-also-isEmpty-in-inconsistent-state-with-getSize-tp4769.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Ignite client near cache conflicts with server distributed cache

2016-05-04 Thread zshamrock
Based on the discussion in Gitter it was proposed to ask a question here.

package experiments.ignite;

import org.apache.ignite.Ignite;
import org.apache.ignite.Ignition;
import org.apache.ignite.configuration.CacheConfiguration;
import org.apache.ignite.configuration.IgniteConfiguration;
import org.apache.ignite.configuration.NearCacheConfiguration;

public class ClientNearCache {

    public static void main(String[] args) {
        final CacheConfiguration<String, Integer> cacheConfiguration =
                new CacheConfiguration<>("sample");

        final IgniteConfiguration igniteConfiguration = new IgniteConfiguration()
                .setGridName("experiments")
                .setCacheConfiguration(cacheConfiguration);

        Ignition.setClientMode(true);
        final Ignite client = Ignition.start(igniteConfiguration);
        client.createNearCache("sample", new NearCacheConfiguration<>());
    }
}


The code above fails with the following error (there is a server node
running in another process):

Caused by: class org.apache.ignite.IgniteCheckedException: Failed to start
near cache (a cache with the same name without near cache is already started)
    at org.apache.ignite.internal.IgniteKernal.checkNearCacheStarted(IgniteKernal.java:2593)
    at org.apache.ignite.internal.IgniteKernal.createNearCache(IgniteKernal.java:2545)
    ... 6 more

My expectation was that, according to
https://apacheignite.readme.io/docs/clients-vs-servers#creating-distributed-caches,
calling Ignition.start(igniteConfiguration) for the first time would create a
distributed server cache, and client.createNearCache() would then create the
near cache for that specific client node.

Is this a potential bug, or expected behavior?
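One workaround that should avoid the conflict (a sketch under the assumption that the server node has already started the "sample" cache) is to use getOrCreateNearCache, which attaches a near cache to an already running distributed cache instead of failing the way createNearCache does:

```java
import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.Ignition;
import org.apache.ignite.configuration.IgniteConfiguration;
import org.apache.ignite.configuration.NearCacheConfiguration;

public class ClientNearCacheWorkaround {
    public static void main(String[] args) {
        Ignition.setClientMode(true);
        Ignite client = Ignition.start(
                new IgniteConfiguration().setGridName("experiments"));

        // Unlike createNearCache, this does not throw when a cache with
        // the same name is already started on the server side.
        IgniteCache<String, Integer> cache =
                client.getOrCreateNearCache("sample", new NearCacheConfiguration<>());
    }
}
```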




--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/Ignite-client-near-cache-conflicts-with-server-distributed-cache-tp4768.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.