RE: Key Value Store - control TTL refresh

2018-01-25 Thread Ariel Tubaltsev
Hi Stan

Thank you for the quick reply.

Let me clarify my use case: I want to have expiration for all regular
operations.
Along with that, I want to be able to read some or all entries without
refreshing TTLs, for example for debugging.

Following your example, I create a view with expiration and a view without it. My understanding is that accessing entries through the view with EternalExpiryPolicy shouldn't refresh TTLs - which seems to work.

However, accessing entries through the view with TouchedExpiryPolicy doesn't seem to refresh TTLs.

Do you think something like that should work?

// Auto-close cache at the end of the example.
try (IgniteCache cache = ignite.getOrCreateCache(CACHE_NAME)) {

    // Create a non-expiring view.
    IgniteCache bypassCache = cache.withExpiryPolicy(new EternalExpiryPolicy());

    // Create an expiring view with a 10-second TTL.
    System.out.println(">>> Set entries to expire in 10 seconds");
    IgniteCache workCache = cache.withExpiryPolicy(
        new TouchedExpiryPolicy(new Duration(TimeUnit.SECONDS, 10)));

    // Entries shouldn't survive.
    populate(workCache);
    sleep(5); // sleep for 5 seconds
    System.out.println("\n>>> Dump cache, don't refresh TTL");
    getAll(bypassCache);
    sleep(5);
    System.out.println("\n>>> Work cache should be empty");
    getAll(workCache);
    System.out.println("\n>>> Bypass cache should be empty");
    getAll(bypassCache);

    // Entries should survive.
    populate(workCache);
    sleep(5);
    System.out.println("\n>>> Dump cache, refresh TTL"); // entries are still there
    getAll(workCache);
    sleep(5);
    System.out.println("\n>>> Bypass cache should be not empty"); // entries are gone
    getAll(bypassCache);
    System.out.println("\n>>> Work cache should be not empty");
    getAll(workCache);

...

Ariel




--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Sharing Dataset Across Multiple Ignite Processes with Same Physical Page Mappings, SharedRDD

2018-01-25 Thread UmurD
Val,

I would like to make one correction. Data could also be shared with Linux
shared memory (like shm). It does not have to be through copy-on-writes with
read-only mapped pages. A shared dataset in shared memory across different
processes also fits my use case.

Sincerely,
Umur





Re: a2cf190a-6a44-4b94-baea-c9b88a16922e, class org.apache.ignite.IgniteCheckedException:Failed to execute SQL query

2018-01-25 Thread Mikhail
Hi  Rahul,

Could you please share a log from the node where the SQL failed? Or, even better, share logs from all nodes, including client nodes.

Does YARN limit resources like CPU and memory for the Ignite instances, or can each Ignite instance on the host see and use all CPUs?

Thanks,
Mike.





Re: Failed to activate cluster - table already exists

2018-01-25 Thread Mikhail
Hi Thomas,

Looks like you can reproduce the issue with a unit test.

Could you please share it with us?

Thanks,
Mike.





Re: Sharing Dataset Across Multiple Ignite Processes with Same Physical Page Mappings, SharedRDD

2018-01-25 Thread UmurD
Hi Val,

Thanks for the quick response.

I am referring to how Virtual and Physical Memory works.

For more background, when a process is launched, it will be allocated a
virtual address space. This virtual memory will have a translation to the
physical memory you have on your computer. The pages allocated to the
processes will have different permissions (Read vs Read-Write), and some of
them will be exclusively mapped to the process it is assigned to, while some
others will be shared.

A good example of shared physical pages is a shared library (it does not have to be a library; I'm only providing it as an example). If I launch two identical processes on the same machine, the shared libraries used by these processes will have the same physical address (after translating from virtual to physical addresses). This is because the library might be read-only, and there is no need for two copies of the same library if it is only being read. The processes will not get their own copies until they attempt to write to the shared page. When one does, this incurs a page fault and the process is allocated its own (exclusive) copy of the previously shared page for modification. This is called Copy-On-Write (CoW).

The case I am looking for, specifically, is this: when I launch 2 processes (say Ignite, for the sake of the example) and load up a dataset to be shared, I want these 2 processes to point to the same physical memory for the shared dataset (until one of them tries to modify it, of course). In other words, I want the loaded dataset to have the same physical address translation from each process's respective virtual addresses. That is what I'm referring to when I talk about identical physical page mappings.
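The mapping comparison described above can be checked on Linux through /proc/&lt;pid&gt;/pagemap, where each 64-bit entry holds the physical frame number (PFN) for one virtual page. Below is a minimal stdlib-only sketch of the idea; all names are illustrative, and reading real (non-zero) PFNs requires CAP_SYS_ADMIN on recent kernels:

```java
import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.ByteOrder;
import java.nio.channels.FileChannel;
import java.nio.file.Paths;
import java.nio.file.StandardOpenOption;

public final class PagemapReader {
    private static final long PAGE_SIZE = 4096; // x86-64 default page size

    // A pagemap entry is 64 bits: bit 63 = "page present", bits 0-54 = PFN.
    // Returns the physical frame number, or -1 if the page is not present.
    public static long pfnFromEntry(long entry) {
        boolean present = (entry & (1L << 63)) != 0;
        return present ? (entry & ((1L << 55) - 1)) : -1;
    }

    // Reads the raw pagemap entry for one virtual address of the given pid.
    // Two processes share a physical page iff pfnFromEntry(...) matches for both.
    public static long entryFor(int pid, long vaddr) throws IOException {
        try (FileChannel ch = FileChannel.open(
                Paths.get("/proc/" + pid + "/pagemap"), StandardOpenOption.READ)) {
            ByteBuffer buf = ByteBuffer.allocate(8).order(ByteOrder.LITTLE_ENDIAN);
            ch.read(buf, (vaddr / PAGE_SIZE) * 8); // one 8-byte entry per page
            buf.flip();
            return buf.getLong();
        }
    }
}
```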

This is for a research project I am conducting, so performance or
functionality is unimportant. The physical mapping is the only critical
component.

Sincerely,
Umur







Re: Sharing Dataset Across Multiple Ignite Processes with Same Physical Page Mappings, SharedRDD

2018-01-25 Thread vkulichenko
Umur,

When you talk about "physical page mappings", what exactly are you referring to? Can you please elaborate a bit more on what you're trying to achieve and why? What is the issue you're trying to solve?

-Val





Sharing Dataset Across Multiple Ignite Processes with Same Physical Page Mappings, SharedRDD

2018-01-25 Thread UmurD
Hello Apache Ignite Community,

I am currently working with Ignite and Spark; I'm specifically interested in
the Shared RDD functionality. I have a few questions and hope I can find
answers here.

Goal:
I am trying to have a single physical page with multiple sharers (multiple
processes map to the same physical page number) on a dataset. Is this
achievable with Apache Ignite?

Specifications:
This is all running on Ubuntu 14.04 on an x86-64 machine, with Ignite-2.3.0.

I will first introduce the simpler case using only Apache Ignite, and then
talk about integration and data sharing with Spark. I appreciate the
assistance.

IGNITE NODES ONLY
Approach:
I am trying to utilize the Shared RDD of Ignite. Since I also need my data
to persist after the spark processes, I am deploying the Ignite cluster
independently with the following command and config:

'$IGNITE_HOME/bin/ignite.sh
$IGNITE_HOME/examples/config/spark/example-shared-rdd.xml'. 

I populate the Ignite nodes using:

'mvn exec:java -Dexec.mainClass=org.apache.ignite.examples.spark.SharedRDDExample'. I modified this file to only populate the SharedRDD cache (partitioned) with 100,000 key-value pairs.

Finally, I observe the status of the ignite cluster using:

'$IGNITE_home/bin/ignitevisorcmd.sh'

Results:
I can confirm that I have on average 50,000 key-value pairs per node, totaling 100,000 key-value pairs. The memory usage of my Ignite nodes also increases, confirming the populated RDD. However, when I compare the page maps of both Ignite nodes, I see that they are oblivious to each other's memory space and have different physical page mappings. Is it possible to set the Ignite nodes up so that the nodes holding the Shared RDD caches share the dataset through a single set of physical page mappings, without duplicating it?

SHARING AND INTEGRATION WITH SPARK (A more specific use case)
Approach:

In addition to the Ignite node deployment I mentioned earlier (2 Ignite
nodes with example-shared-rdd, populated using the SharedRDDExample), I also
try the Shared RDD with Spark. I deploy the master with
'$SPARK_HOME/sbin/start-master.sh', and workers are started with
'$SPARK_HOME/bin/spark-class org.apache.spark.deploy.worker.Worker
spark://master_host:master_port'

Here, I am trying to achieve a setup where I have multiple spark workers
that all share a dataset. More specifically, I need the multiple spark
workers/processes to be pointing at the same Physical Page Mappings on
startup (before writing). I first get in a spark-shell with the following
command:

'$SPARK_HOME/bin/spark-shell 
--packages org.apache.ignite:ignite-spark:2.3.0
  --master spark://master_host:master_port
  --repositories http://repo.maven.apache.org/maven2/org/apache/ignite'

[When in the shell, I run the following scala code]:

import org.apache.ignite.spark._
import org.apache.ignite.configuration._

// This is the same configuration as the Ignite nodes.
val ic = new IgniteContext(sc, "examples/config/spark/example-shared-rdd.xml")
// The cache I have in the config is named sharedRDD.
val sharedRDD = ic.fromCache[Integer, Integer]("sharedRDD")

When I observe the Ignite cluster *before* doing any read/write operations
on the spark end, I see the 2 nodes I started up with about 50,000 key,value
pairs each. After running:

sharedRDD.filter(_._2 > 5).count // which should be a read-and-count command?

I observe that I now have *4* nodes with about 25,000 key-value pairs each. 2 of these nodes are the Ignite nodes I deployed standalone, and the other 2 are launched from the context inside the Spark processes. This leads to different datasets in each process and different page mappings, which fails to achieve what I need.

In both cases (Ignite nodes only, and Ignite+Spark), I observe different physical page mappings. While the dataset seems shared to the outside world, it is not truly shared at the page level. The nodes seem to get their own sets of private key-value pairs, which are served to requesters, giving clients an illusion of sharing.

Is my understanding correct? If I am incorrect, how should I approach the
shared-dataset-multiple-processes setup with the same physical page mapping
using Ignite and SharedRDD (and Spark)?

Please let me know if you have any questions.

Sincerely,
Umur Darbaz
University of Illinois at Urbana-Champaign, Graduate Researcher





Re: Ignite performing slow on cluster.

2018-01-25 Thread vkulichenko
Ganesh,

The thin driver uses one of the nodes as a gateway, so once you add a second node, half of the updates have to make two network hops instead of one, so a slowdown is expected. However, it should not get progressively worse as you add a third, fourth, and subsequent nodes.

The best option for this case would be to use the client node driver [1] with the 'streaming' option set to true. I would recommend trying it out and checking the results.

[1] https://apacheignite-sql.readme.io/docs/jdbc-client-driver
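For illustration, a hedged sketch of the client-driver streaming setup: the config path, table, and columns below are assumptions, not from the original question. Streaming is switched on through the connection URL:

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;

public final class StreamingInsert {
    // Builds a client-driver JDBC URL with streaming enabled; 'cfgUrl' points
    // at an Ignite client config, e.g. "file:///etc/ignite/client.xml" (illustrative).
    public static String streamingUrl(String cfgUrl) {
        return "jdbc:ignite:cfg://streaming=true@" + cfgUrl;
    }

    public static void load(String cfgUrl) throws Exception {
        Class.forName("org.apache.ignite.IgniteJdbcDriver"); // client-node driver
        try (Connection conn = DriverManager.getConnection(streamingUrl(cfgUrl));
             PreparedStatement ps =
                 conn.prepareStatement("INSERT INTO city (id, name) VALUES (?, ?)")) {
            for (int i = 0; i < 100_000; i++) {
                ps.setInt(1, i);
                ps.setString(2, "city-" + i);
                // In streaming mode updates are funneled through a data streamer
                // rather than paying a network round trip per row.
                ps.executeUpdate();
            }
        }
    }
}
```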

-Val





Re: Async query

2018-01-25 Thread vkulichenko
Queries are actually always async, meaning that the query method itself doesn't return any data. You get a cursor, and the data is fetched while you iterate.

-Val
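That cursor behavior can be modeled in a few lines. The class below is only an illustration of the semantics, not Ignite code: nothing is transferred when the "cursor" is created, and pages of rows are pulled only as you iterate. In real Ignite code the same shape is `try (QueryCursor<List<?>> cur = cache.query(qry)) { for (List<?> row : cur) ... }` with the page size set via `SqlFieldsQuery.setPageSize(...)`.

```java
import java.util.ArrayList;
import java.util.Iterator;
import java.util.List;

// Models cursor-style lazy fetching: rows are pulled page by page
// only as the caller iterates, the way a query cursor behaves.
public final class LazyCursor implements Iterable<Integer> {
    private final int totalRows;
    private final int pageSize;
    public int pagesFetched = 0; // how many simulated server round trips happened

    public LazyCursor(int totalRows, int pageSize) {
        this.totalRows = totalRows;
        this.pageSize = pageSize;
    }

    // Simulates fetching one page of rows from the server.
    private List<Integer> fetchPage(int pageIdx) {
        pagesFetched++;
        List<Integer> page = new ArrayList<>();
        for (int i = pageIdx * pageSize; i < Math.min(totalRows, (pageIdx + 1) * pageSize); i++)
            page.add(i);
        return page;
    }

    @Override public Iterator<Integer> iterator() {
        return new Iterator<Integer>() {
            private List<Integer> page = null;
            private int idxInPage = 0;
            private int pageIdx = 0;
            private int consumed = 0;

            @Override public boolean hasNext() { return consumed < totalRows; }

            @Override public Integer next() {
                if (page == null || idxInPage == page.size()) {
                    page = fetchPage(pageIdx++); // a page is fetched only when needed
                    idxInPage = 0;
                }
                consumed++;
                return page.get(idxInPage++);
            }
        };
    }
}
```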





Re: Ignite Index - unique or non unique

2018-01-25 Thread vkulichenko
Rajesh,

Ignite has only non-unique indexes. For information on how to create them
please refer to the documentation: https://apacheignite-sql.readme.io/docs.
You can do this either via cache configuration or using CREATE INDEX command
depending on your use case.
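For instance, both routes look roughly like this; the table and column names are illustrative, not from the original question:

```java
// DDL route: CREATE INDEX always builds a non-unique index in Ignite.
public final class IndexDdl {
    public static final String CREATE_IDX =
        "CREATE INDEX person_city_idx ON Person (city)";          // single column
    public static final String CREATE_GROUP_IDX =
        "CREATE INDEX person_city_age_idx ON Person (city, age)"; // group (composite) index

    // Configuration route (on the value class) would look like:
    //   @QuerySqlField(index = true)
    //   private String city;
    // with group indexes declared via the annotation's group settings.
}
```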

As for the logging, here is some information that can be useful:
https://apacheignite.readme.io/docs/logging

-Val





Re: How to use BinaryObject from existing data

2018-01-25 Thread vkulichenko
When you create a table via SQL, you already fully describe its schema, so
there is no need for QueryEntity. Can you clarify what you're trying to
achieve?

-Val





Re: Long activation times with Ignite persistence enabled

2018-01-25 Thread Andrey Kornev
Alexey,

I'm wondering if you had a chance to look into this? I'd like to understand what to expect in terms of node activation time and how it relates to the data volume.

Thanks!
Andrey


From: Andrey Kornev 
Sent: Monday, January 22, 2018 11:36 AM
To: Alexey Goncharuk; user@ignite.apache.org
Subject: Re: Long activation times with Ignite persistence enabled

Alexey,

Thanks a lot for looking into this!

My configuration is very basic: 3 caches all using standard 1024 partitions, 
sharing a 1GB persistent memory region.

Please find below the stack trace of the exchange worker thread captured while 
the node's activation is in progress (2.4 Ignite branch).

Hope it helps!

Thanks!
Andrey

"exchange-worker-#42%ignite-2%" #82 prio=5 os_prio=31 tid=0x7ffe8bf1c000 nid=0xc403 waiting on condition [0x7ed43000]
   java.lang.Thread.State: WAITING (parking)
    at sun.misc.Unsafe.park(Native Method)
    at java.util.concurrent.locks.LockSupport.park(LockSupport.java:304)
    at org.apache.ignite.internal.util.future.GridFutureAdapter.get0(GridFutureAdapter.java:177)
    at org.apache.ignite.internal.util.future.GridFutureAdapter.getUninterruptibly(GridFutureAdapter.java:145)
    at org.apache.ignite.internal.processors.cache.persistence.file.AsyncFileIO.read(AsyncFileIO.java:95)
    at org.apache.ignite.internal.processors.cache.persistence.file.FilePageStore.read(FilePageStore.java:324)
    at org.apache.ignite.internal.processors.cache.persistence.file.FilePageStoreManager.read(FilePageStoreManager.java:306)
    at org.apache.ignite.internal.processors.cache.persistence.file.FilePageStoreManager.read(FilePageStoreManager.java:291)
    at org.apache.ignite.internal.processors.cache.persistence.pagemem.PageMemoryImpl.acquirePage(PageMemoryImpl.java:656)
    at org.apache.ignite.internal.processors.cache.persistence.pagemem.PageMemoryImpl.acquirePage(PageMemoryImpl.java:576)
    at org.apache.ignite.internal.processors.cache.persistence.DataStructure.acquirePage(DataStructure.java:130)
    at org.apache.ignite.internal.processors.cache.persistence.freelist.PagesList.init(PagesList.java:212)
    at org.apache.ignite.internal.processors.cache.persistence.freelist.AbstractFreeList.<init>(AbstractFreeList.java:367)
    at org.apache.ignite.internal.processors.cache.persistence.freelist.CacheFreeListImpl.<init>(CacheFreeListImpl.java:47)
    at org.apache.ignite.internal.processors.cache.persistence.GridCacheOffheapManager$GridCacheDataStore$1.<init>(GridCacheOffheapManager.java:1041)
    at org.apache.ignite.internal.processors.cache.persistence.GridCacheOffheapManager$GridCacheDataStore.init0(GridCacheOffheapManager.java:1041)
    at org.apache.ignite.internal.processors.cache.persistence.GridCacheOffheapManager$GridCacheDataStore.updateCounter(GridCacheOffheapManager.java:1247)
    at org.apache.ignite.internal.processors.cache.distributed.dht.GridDhtLocalPartition.updateCounter(GridDhtLocalPartition.java:835)
    at org.apache.ignite.internal.processors.cache.distributed.dht.GridDhtLocalPartition.casState(GridDhtLocalPartition.java:523)
    - locked <0x00077a3d1120> (a org.apache.ignite.internal.processors.cache.distributed.dht.GridDhtLocalPartition)
    at org.apache.ignite.internal.processors.cache.distributed.dht.GridDhtLocalPartition.<init>(GridDhtLocalPartition.java:218)
    at org.apache.ignite.internal.processors.cache.distributed.dht.GridDhtPartitionTopologyImpl.forceCreatePartition(GridDhtPartitionTopologyImpl.java:804)
    at org.apache.ignite.internal.processors.cache.persistence.GridCacheDatabaseSharedManager.restorePartitionState(GridCacheDatabaseSharedManager.java:2196)
    at org.apache.ignite.internal.processors.cache.persistence.GridCacheDatabaseSharedManager.applyLastUpdates(GridCacheDatabaseSharedManager.java:2155)
    at org.apache.ignite.internal.processors.cache.persistence.GridCacheDatabaseSharedManager.restoreState(GridCacheDatabaseSharedManager.java:1322)
    at org.apache.ignite.internal.processors.cache.persistence.GridCacheDatabaseSharedManager.beforeExchange(GridCacheDatabaseSharedManager.java:1113)
    at org.apache.ignite.internal.processors.cache.distributed.dht.preloader.GridDhtPartitionsExchangeFuture.distributedExchange(GridDhtPartitionsExchangeFuture.java:1063)
    at org.apache.ignite.internal.processors.cache.distributed.dht.preloader.GridDhtPartitionsExchangeFuture.init(GridDhtPartitionsExchangeFuture.java:661)
    at org.apache.ignite.internal.processors.cache.GridCachePartitionExchangeManager$ExchangeWorker.body(GridCachePartitionExchangeManager.java:2329)
    at org.apache.ignite.internal.util.worker.GridWorker.run(GridWorker.java:110)
    at java.lang.Thread.run(Thread.java:748)

2018-01-22 11:30:01,049 INFO  [exchange-worker-#42%ContentStore-2%] GridCacheDatabaseSharedManager - Finished applying WAL changes [updatesApplied=0, time=68435ms]
2018-01-22 11:30:01,789 INFO  [main] GridDiscoveryManager - Topology snapshot [ver=4, servers=2, clients=0, CPUs=8, offheap=26.0GB, heap=4.0GB]
2018-01-22 11:30:01,789 INFO  

Re: Ignite Index - unique or non unique

2018-01-25 Thread Rajesh Kishore
Any pointers, please?

Thanks,
Rajesh

On Thu, Jan 25, 2018 at 10:07 AM, Rajesh Kishore 
wrote:

> Hi All,
>
> Wanted to know: does Ignite support unique or non-unique indexes?
> I have a requirement to create a non-unique index on a field / group of fields.
> What's the way?
>
> Also, with the EXPLAIN plan we can get to know the index used for a
> query. Sometimes my log is not getting generated properly - are there any
> settings I need to do? I have enabled the finest level in
> java.util.logging.properties, though.
>
> Appreciate the response.
>
> Thanks,
> Rajesh
>


Re: Async query

2018-01-25 Thread slava.koptilin
Hi,

This method is not marked with the IgniteAsyncSupport annotation and therefore cannot be used with asynchronous mode enabled on the Ignite API. I mean that the following code throws IllegalStateException:
IgniteCache asyncCache = cache.withAsync();
QueryCursor cursor = asyncCache.query(sqlFieldsQuery);
IgniteFuture fut = asyncCache.future();

Exception in thread "..." java.lang.IllegalStateException: Asynchronous
operation not started.
at
org.apache.ignite.internal.AsyncSupportAdapter.future(AsyncSupportAdapter.java:91)
at
org.apache.ignite.internal.AsyncSupportAdapter.future(AsyncSupportAdapter.java:73)

So, you need to do this in your own way.

By the way, asynchronous support was reworked as of Apache Ignite 2.x [1] (use of IgniteAsyncSupport should be avoided).
[1] https://apacheignite.readme.io/docs/async-support
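Since query() has no built-in async variant here, one common workaround is to run the query on your own executor. A minimal generic sketch of that pattern - the Ignite call shown in the comment is illustrative, and the helper itself is plain Java:

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.Executor;
import java.util.function.Supplier;

public final class AsyncQueries {
    // Wraps any synchronous call in a CompletableFuture on the given executor.
    // With Ignite this could be used as:
    //   async(() -> cache.query(new SqlFieldsQuery(sql)).getAll(), pool)
    public static <T> CompletableFuture<T> async(Supplier<T> op, Executor pool) {
        return CompletableFuture.supplyAsync(op, pool);
    }
}
```

Several SqlFieldsQuery instances submitted this way run concurrently, and the futures can be combined with CompletableFuture.allOf(...).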

Thanks!





Re: ClassCastException Issues

2018-01-25 Thread svonn
Hi!

I actually fixed the issue, even though I'm still not 100% sure why it happened:

The Kafka connector has a setting called "tasks.max", which I had set to a number higher than 1. After setting tasks.max=1 I can process all the data I want without any issues - I assume it somehow can't use the extractor for the additional tasks.
- svonn





Re: ClassCastException Issues

2018-01-25 Thread slava.koptilin
Hi Svonn,

It would be really helpful if you could prepare a small reproducer (for example, a Maven project) and upload it to GitHub.
Is that possible?
Thanks!





Re: setNodeFilter throwing a CacheException

2018-01-25 Thread dkarachentsev
Hi Sharavya,

This exception means that the client node got disconnected from the cluster and is trying to reconnect. You can obtain the reconnect future from it (IgniteClientDisconnectedException.reconnectFuture().get()) and wait until the client is reconnected.
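A generic retry shape for that pattern, sketched without Ignite on the classpath; the Ignite-specific calls appear only in comments:

```java
import java.util.function.Predicate;
import java.util.function.Supplier;

public final class ReconnectRetry {
    // Retries 'op' while 'isDisconnect' matches the thrown exception,
    // running 'awaitReconnect' between attempts. With Ignite this would be:
    //   isDisconnect   -> e instanceof IgniteClientDisconnectedException
    //   awaitReconnect -> ((IgniteClientDisconnectedException) e).reconnectFuture().get()
    public static <T> T withReconnectRetry(Supplier<T> op,
                                           Predicate<RuntimeException> isDisconnect,
                                           Runnable awaitReconnect,
                                           int maxAttempts) {
        RuntimeException last = null;
        for (int i = 0; i < maxAttempts; i++) {
            try {
                return op.get();
            } catch (RuntimeException e) {
                if (!isDisconnect.test(e))
                    throw e; // unrelated failure: don't swallow it
                last = e;
                awaitReconnect.run(); // block until the client is back
            }
        }
        throw last; // still disconnected after all attempts
    }
}
```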

So it looks like you're trying to create a cache on a stopped cluster, and it has nothing to do with the node filter. Can you share logs from all nodes?

Thanks!
-Dmitry





ClassCastException Issues

2018-01-25 Thread svonn
Hi!

I've got some issues I'm struggling to debug properly:

I'm receiving two streams; each has a binary object as key and a binary object as value. The keys are built with an extractor (the data coming from Kafka has a String as key). When I simply start my stack, everything runs fine - but when I store some data in Kafka before starting the connectors, I run into the following error (full stack trace: https://pastebin.com/9ykra6Ei ):

java.lang.ClassCastException: java.lang.String cannot be cast to org.apache.ignite.binary.BinaryObject

For the following function:

long startTimestamp = prevGpsValue.field("timestamp");
long endTimestamp = curGpsValue.field("timestamp");

IgniteCache apCache = Ignition.ignite().cache("AccelerationPoint").withKeepBinary();

ScanQuery scan = new ScanQuery<>(
    (IgniteBiPredicate) (key, value) ->
        (key.field("timestamp") >= startTimestamp
            && key.field("timestamp") < endTimestamp)
        && key.field("deviceId").equals(curGpsKey.field("deviceId"))
        && key.field("measurementId").equals(curGpsKey.field("measurementId"))
        && !value.field("interpolated")
);

scan.setLocal(true);

try (QueryCursor cursor = apCache.query(scan)) {
    for (Cache.Entry entry : cursor) {
        interpolate(prevGpsValue, curGpsValue, entry.getKey());
    }
} catch (Exception e) {
    e.printStackTrace();
}


Apparently, the cursor is receiving a String where it expected a BinaryObject - yet I can't figure out how that's even possible. Both streams have a continuous query listening to all incoming events, and neither of them is throwing errors. Since it has to be some entry in the apCache, here's my CQ for those events:

IgniteCache apCache = Ignition.ignite().cache("AccelerationPoint").withKeepBinary();

ContinuousQuery continuousQuery = new ContinuousQuery<>();

continuousQuery.setLocalListener(evts -> {
    for (CacheEntryEvent e : evts) {
        processAccelerationPoint(e.getKey(), e.getValue());
    }
});

continuousQuery.setRemoteFilter(e -> e.getEventType() == EventType.CREATED);

continuousQuery.setLocal(true);

apCache.query(continuousQuery);


The function processAccelerationPoint called here relies on the entries being BinaryObjects and modifies them as follows:

IgniteCache accCache = Ignition.ignite().cache("AccelerationPoint").withKeepBinary();

if (dcmgMatrixMap.containsKey(accPointKey.field("measurementId"))) {

    accCache.withKeepBinary().invoke(
        accPointKey, (CacheEntryProcessor) (entry, objects) -> {
            RealMatrix dcm_g = dcmgMatrixMap.get(entry.getValue().field("measurementId"));
            double[] accPointVector = dcm_g.operate(new double[] {
                entry.getValue().field("ax"),
                entry.getValue().field("ay"),
                entry.getValue().field("az")
            });

            BinaryObjectBuilder builder = entry.getValue().toBuilder();

            double zMean = dcm_g.getEntry(2, 2);

            builder.setField("ax", accPointVector[0]);
            builder.setField("ay", accPointVector[1]);
            builder.setField("az", accPointVector[2] - zMean);

            builder.setField("calibrated", true);

            if (builder.getField("interpolated")) {
                MetricStatus.dataLatency.add(System.currentTimeMillis()
                    - (Long) builder.getField("createdAt"));
            }
            entry.setValue(builder.build());

            return null;
        });
} else {
    calibrate(accPointKey, accPointValue);
}



I really have no clue what's going on - how Kafka data could enter the cache without being transformed by the extractor and without getting caught by the CQs.
Any hints would be appreciated!

Best regards,
svonn








RE: Binary type has different affinity key fields

2018-01-25 Thread Thomas Isaksen
Hi Slava

I did create the cache using DDL.

CREATE TABLE UserCache (
id long,
username varchar, 
password varchar,
PRIMARY KEY (username, password)
)
WITH "template=partitioned, affinitykey=username, cache_name=UserCache, 
key_type=no.toyota.gatekeeper.ignite.key.CredentialsKey, 
value_type=no.toyota.gatekeeper.authenticate.Credentials";

The config looks like this, very simple:







Why would I have to use uppercase? I did change it now to test, but I'm still getting the same exception.

./t

-Original Message-
From: slava.koptilin [mailto:slava.kopti...@gmail.com] 
Sent: torsdag 25. januar 2018 14.08
To: user@ignite.apache.org
Subject: Re: Binary type has different affinity key fields

Hi Thomas,

Could you please share a small code snippet of cache configuration/cache 
creation?
Do you use DDL for that?

I guess that you need to define affinity keys using upper-case

public class CredentialsKey {
@QuerySqlField(index = true)
@AffinityKeyMapped
private String USERNAME;

@QuerySqlField(index = true)
private String PASSWORD;
...
}

Thanks,
Slava.





Re: Native persistence

2018-01-25 Thread ezhuravlev
Hi Humphrey,

>What will happen if at a later point I want to scale back up to 4
replicas?
>- So what will happen with the data it finds in the existing directory (which
is probably old) - how does it handle this?

It depends on how long the node was down. If it was a short period of time and a delta rebalance can be applied, the current partitions will be updated to the latest version. Otherwise, the old partitions will be dropped and a full rebalance will happen, as for new nodes. Whether a delta rebalance can be applied depends on the configured walHistorySize.

>- What happens in the situation where I shut down my cluster and restart it
with 2 replicas? How does Ignite know which two of the four directories to
re-use?

By default, the folder name contains the IP and port of the node that this persistent store's data belongs to, so a newly started node will choose the folder matching its own IP and port. It's also possible to set a consistentId for the node; in that case, the node will choose the folder whose name is derived from its consistentId.
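As a sketch of the two knobs mentioned above - the IDs and values are illustrative, and this assumes an Ignite 2.x programmatic configuration:

```java
// Illustrative sketch: pin the persistence folder via consistentId and
// configure the WAL history that delta (historical) rebalance depends on.
IgniteConfiguration cfg = new IgniteConfiguration();
cfg.setConsistentId("node-a"); // folder name derives from this instead of IP:port

DataStorageConfiguration storage = new DataStorageConfiguration();
storage.setWalHistorySize(40); // checkpoints kept, bounding delta rebalance
storage.getDefaultDataRegionConfiguration().setPersistenceEnabled(true);
cfg.setDataStorageConfiguration(storage);

Ignite ignite = Ignition.start(cfg);
```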

Evgenii





Re: Re:Re: Re:Re: delete data error

2018-01-25 Thread Ilya Kasnacheev
Hello!

It seems that an issue was filed about the problem you are seeing:

https://issues.apache.org/jira/browse/IGNITE-7512

I can see that there's already work underway to fix it.

Regards,

-- 
Ilya Kasnacheev

2018-01-23 4:53 GMT+03:00 Lucky :

> Sorry, the fid is not a UUID in tmpCompanyCuBaseDataCache, but the others
> are UUIDs.
> The error does not happen only in this cache; the others behave the same.
> I found that when I delete a single record, it's normal. But if I delete many
> records in one SQL statement, it goes wrong.
> Thanks.
>
>
> At 2018-01-23 09:38:18, "Lucky"  wrote:
>
> I put the entry like this:
>  cache.put(entry.getFID(),entry);
> The fid is a UUID, so it is unique.
>
> I'm very sure that the data in the cache has no problems.
> All values are correct and look like the other records.
>
>  sql="delete from \"tmpCompanyCuBaseDataCache\".TmpCompanyCuBaseData
> where fid='1516093156643-53-33' ";
>  sql="delete from \"tmpCompanyCuBaseDataCache\".TmpCompanyCuBaseData
> where _key='1516093156643-53-33' ";
>  It can both execute correctly.
>  Then I execute "delete from \"tmpCompanyCuBaseDataCache\".
> TmpCompanyCuBaseData" again; it got the same error, and the key had changed
> to another one.
> And when I delete that record and execute again, it's the same.
>
> Thanks.
> Lucky.
>
>
>
>
>
>
>


Async query

2018-01-25 Thread breathem
Hi all.
I found that IgniteCache.withAsync() is deprecated in v2.3.0.
How do I execute multiple SqlFieldsQuery instances asynchronously now?






Re: Binary type has different affinity key fields

2018-01-25 Thread slava.koptilin
Hi Thomas,

Could you please share a small code snippet of cache configuration/cache
creation?
Do you use DDL for that?

I guess that you need to define affinity keys using upper-case

public class CredentialsKey {
@QuerySqlField(index = true)
@AffinityKeyMapped
private String USERNAME;

@QuerySqlField(index = true)
private String PASSWORD;
...
}

Thanks,
Slava.





Failed to activate cluster - table already exists

2018-01-25 Thread Thomas Isaksen
I simply stopped my cluster and started it again; when I try to activate it, I get the following:

[12:50:26,308][SEVERE][sys-#38][GridTaskWorker] Failed to obtain remote job result policy for result from ComputeTask.result(..) method (will fail the whole task): GridJobResultImpl [job=C4 [r=o.a.i.i.processors.cluster.GridClusterStateProcessor$ClientChangeGlobalStateComputeRequest@4263b080], sib=GridJobSiblingImpl [sesId=ab5772d2161-6e54ac8c-8405-459f-ba0b-ce7795286781, jobId=cb5772d2161-6e54ac8c-8405-459f-ba0b-ce7795286781, nodeId=0aee4fec-e006-420f-bdc6-4a9ca85cee18, isJobDone=false], jobCtx=GridJobContextImpl [jobId=cb5772d2161-6e54ac8c-8405-459f-ba0b-ce7795286781, timeoutObj=null, attrs={}], node=TcpDiscoveryNode [id=0aee4fec-e006-420f-bdc6-4a9ca85cee18, addrs=[0:0:0:0:0:0:0:1, 10.108.192.88, 127.0.0.1], sockAddrs=[/10.108.192.88:47500, /0:0:0:0:0:0:0:1:47500, /127.0.0.1:47500], discPort=47500, order=161, intOrder=82, lastExchangeTime=1516881026101, loc=false, ver=2.3.0#20171028-sha1:8add7fd5, isClient=false], ex=class o.a.i.IgniteException: Failed to activate cluster., hasRes=true, isCancelled=false, isOccupied=true]
class org.apache.ignite.IgniteException: Remote job threw user exception (override or implement ComputeTask.result(..) method if you would like to have automatic failover for this exception).
    at org.apache.ignite.compute.ComputeTaskAdapter.result(ComputeTaskAdapter.java:101)
    at org.apache.ignite.internal.processors.task.GridTaskWorker$5.apply(GridTaskWorker.java:1047)
    at org.apache.ignite.internal.processors.task.GridTaskWorker$5.apply(GridTaskWorker.java:1040)
    at org.apache.ignite.internal.util.IgniteUtils.wrapThreadLoader(IgniteUtils.java:6663)
    at org.apache.ignite.internal.processors.task.GridTaskWorker.result(GridTaskWorker.java:1040)
    at org.apache.ignite.internal.processors.task.GridTaskWorker.onResponse(GridTaskWorker.java:858)
    at org.apache.ignite.internal.processors.task.GridTaskProcessor.processJobExecuteResponse(GridTaskProcessor.java:1066)
    at org.apache.ignite.internal.processors.task.GridTaskProcessor$JobMessageListener.onMessage(GridTaskProcessor.java:1301)
    at org.apache.ignite.internal.managers.communication.GridIoManager.invokeListener(GridIoManager.java:1555)
    at org.apache.ignite.internal.managers.communication.GridIoManager.processRegularMessage0(GridIoManager.java:1183)
    at org.apache.ignite.internal.managers.communication.GridIoManager.access$4200(GridIoManager.java:126)
    at org.apache.ignite.internal.managers.communication.GridIoManager$9.run(GridIoManager.java:1090)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    at java.lang.Thread.run(Thread.java:748)
Caused by: class org.apache.ignite.IgniteException: Failed to activate cluster.
    at org.apache.ignite.internal.util.IgniteUtils.convertException(IgniteUtils.java:966)
    at org.apache.ignite.internal.IgniteKernal.active(IgniteKernal.java:3513)
    at org.apache.ignite.internal.processors.cluster.GridClusterStateProcessor$ClientChangeGlobalStateComputeRequest.run(GridClusterStateProcessor.java:908)
    at org.apache.ignite.internal.processors.closure.GridClosureProcessor$C4.execute(GridClosureProcessor.java:1944)
    at org.apache.ignite.internal.processors.job.GridJobWorker$2.call(GridJobWorker.java:566)
    at org.apache.ignite.internal.util.IgniteUtils.wrapThreadLoader(IgniteUtils.java:6631)
    at org.apache.ignite.internal.processors.job.GridJobWorker.execute0(GridJobWorker.java:560)
    at org.apache.ignite.internal.processors.job.GridJobWorker.body(GridJobWorker.java:489)
    at org.apache.ignite.internal.util.worker.GridWorker.run(GridWorker.java:110)
    at org.apache.ignite.internal.processors.job.GridJobProcessor.processJobExecuteRequest(GridJobProcessor.java:1181)
    at org.apache.ignite.internal.processors.job.GridJobProcessor$JobExecutionListener.onMessage(GridJobProcessor.java:1913)
    at org.apache.ignite.internal.managers.communication.GridIoManager.invokeListener(GridIoManager.java:1555)
    at org.apache.ignite.internal.managers.communication.GridIoManager.processRegularMessage0(GridIoManager.java:1183)
    at org.apache.ignite.internal.managers.communication.GridIoManager.access$4200(GridIoManager.java:126)
    at org.apache.ignite.internal.managers.communication.GridIoManager$9.run(GridIoManager.java:1090)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at

Re: Can writeSynchronizationMode be modified online?

2018-01-25 Thread slava.koptilin
Hello,

Write synchronization mode is one of the crucial settings of cache
configuration and cannot be changed after a cache has been created.
By default, the SQL engine uses the FULL_SYNC synchronization mode [1].
You can choose the required mode by specifying the additional parameter as
follows:
CREATE TABLE IF NOT EXISTS person ( id int,orgId LONG, name VARCHAR, salary 
LONG ,PRIMARY KEY (id) ) WITH "TEMPLATE=PARTITIONED,backups=1, 
affinityKey=id, value_type=MyPerson,WRITE_SYNCHRONIZATION_MODE=PRIMARY_SYNC"

[1] https://apacheignite-sql.readme.io/docs/create-table
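For illustration, the DDL above can be assembled and submitted programmatically. This is a plain-Java sketch that only builds the statement string (nothing Ignite-specific is executed here; table and parameter names are taken from the reply above):

```java
public class CreateTableDdl {
    // Assembles the CREATE TABLE statement from the reply. The WITH "..." clause
    // is where WRITE_SYNCHRONIZATION_MODE must be fixed, since it cannot be
    // changed after the cache backing the table is created.
    static String ddl(String syncMode) {
        return "CREATE TABLE IF NOT EXISTS person ("
            + "id INT, orgId LONG, name VARCHAR, salary LONG, PRIMARY KEY (id)) "
            + "WITH \"TEMPLATE=PARTITIONED,backups=1,affinityKey=id,"
            + "value_type=MyPerson,WRITE_SYNCHRONIZATION_MODE=" + syncMode + "\"";
    }

    public static void main(String[] args) {
        // The resulting string would typically be passed to Statement.execute(...)
        // over a jdbc:ignite:thin:// connection.
        System.out.println(ddl("PRIMARY_SYNC"));
    }
}
```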

Thanks!



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


setNodeFilter throwing a CacheException

2018-01-25 Thread Shravya Nethula
Hi,

We are trying to create tables on one particular ClusterGroup with 3 server
nodes: Node1, Node2, Node3. On all these 3 nodes we have set the following
configuration in default-config.xml.







On the client side, we are trying to call setNodeFilter on the
CacheConfiguration as follows:

CacheConfiguration cacheCfg = new
CacheConfiguration<>(this.tableName).setSqlSchema("PUBLIC");
cacheCfg.setNodeFilter(new DataNodeFilter()); 
IgniteCache cache = ignite.getOrCreateCache(cacheCfg);   
//IgniteTable.java:50

The DataNodeFilter has the following code: 

public class DataNodeFilter implements IgnitePredicate<ClusterNode> {
  @Override public boolean apply(ClusterNode node) {
// The service will be deployed on non-client nodes
// that have the attribute 'data.compute'.
return !node.isClient() &&
node.attributes().containsValue("data.compute");
  }
}
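One thing worth double-checking in the filter above: containsValue() only matches when 'data.compute' is stored as an attribute *value*; if 'data.compute' is the attribute *name* (the usual pattern with IgniteConfiguration.setUserAttributes), containsKey() would be needed instead. A stand-alone sketch of the predicate logic, with plain types standing in for Ignite's ClusterNode API (this is not the real Ignite interface):

```java
import java.util.HashMap;
import java.util.Map;

public class DataNodeFilterSketch {
    // Mirrors the filter from the thread: accept non-client nodes whose
    // attribute map contains 'data.compute' among its VALUES.
    static boolean accept(boolean isClient, Map<String, Object> attrs) {
        return !isClient && attrs.containsValue("data.compute");
    }

    public static void main(String[] args) {
        Map<String, Object> attrs = new HashMap<>();
        attrs.put("role", "data.compute"); // 'data.compute' as a value: matches
        System.out.println(accept(false, attrs)); // server node -> accepted
        System.out.println(accept(true, attrs));  // client node -> rejected

        Map<String, Object> byName = new HashMap<>();
        byName.put("data.compute", "true"); // 'data.compute' as a key: NOT matched
        System.out.println(accept(false, byName));
    }
}
```

Note also that the reported exception (IgniteClientDisconnectedException) points at the client losing its connection to the cluster rather than at the filter itself, so both aspects may need attention.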

In spite of all these configuration settings, it still throws a
CacheException as follows:
Exception in thread "main" javax.cache.CacheException: class
org.apache.ignite.IgniteClientDisconnectedException: Failed to execute
dynamic cache change request, client node disconnected.
at org.apache.ignite.internal.processors.cache.GridCacheUtils.convertToCacheException(GridCacheUtils.java:1287)
at org.apache.ignite.internal.IgniteKernal.getOrCreateCache0(IgniteKernal.java:2937)
at org.apache.ignite.internal.IgniteKernal.getOrCreateCache(IgniteKernal.java:2906)
at net.aline.cloudedh.inmemorydb.ignite.IgniteTable.<init>(IgniteTable.java:50)

Can anyone please tell us if we are missing any other configuration setting?

Regards,
Shravya Nethula.



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


a2cf190a-6a44-4b94-baea-c9b88a16922e, class org.apache.ignite.IgniteCheckedException:Failed to execute SQL query

2018-01-25 Thread Rahul Pandey
Hi,

I am running Ignite servers on a YARN cluster with the attached properties
(PFA ignite-cluster.properties), say on "host1". All server nodes are running
on this host itself.

The servers are running with persistent storage enabled (PFA
ignite-config.xml configuration file).

I start a main program from another host, say "host2", with the same
configs but in client mode.

This main program runs a simple SELECT query to fetch data from the Ignite
cluster, but I get the following error:

Caused by: javax.cache.CacheException: Failed to execute map query on the
node: a2cf190a-6a44-4b94-baea-c9b88a16922e, class
org.apache.ignite.IgniteCheckedException:Failed to execute SQL query.
at org.apache.ignite.internal.processors.query.h2.twostep.GridReduceQueryExecutor.fail(GridReduceQueryExecutor.java:274)
at org.apache.ignite.internal.processors.query.h2.twostep.GridReduceQueryExecutor.onFail(GridReduceQueryExecutor.java:264)
at org.apache.ignite.internal.processors.query.h2.twostep.GridReduceQueryExecutor.onMessage(GridReduceQueryExecutor.java:243)
at org.apache.ignite.internal.processors.query.h2.twostep.GridReduceQueryExecutor$2.onMessage(GridReduceQueryExecutor.java:187)
at org.apache.ignite.internal.managers.communication.GridIoManager$ArrayListener.onMessage(GridIoManager.java:2332)
at org.apache.ignite.internal.managers.communication.GridIoManager.invokeListener(GridIoManager.java:1555)
at org.apache.ignite.internal.managers.communication.GridIoManager.processRegularMessage0(GridIoManager.java:1183)
at org.apache.ignite.internal.managers.communication.GridIoManager.access$4200(GridIoManager.java:126)
at org.apache.ignite.internal.managers.communication.GridIoManager$9.run(GridIoManager.java:1090)
... 3 more

But once I run an INSERT query for the same table from the same main program
and then fire the SELECT query, the results are available (PFA
"DriverStackTrace.txt").

This issue repeats every time; because of this I have to insert at least
one record before I can fire a SELECT query.

Also, I can see the node "a2cf190a" (@n9) that is failing in ignitevisor (PFA
"top.txt").

Regards,

-- 


--

The content of this e-mail is confidential and intended solely for the use 
of the addressee(s). The text of this email (including any attachments) may 
contain information, belonging to Pragmatix Services Private Limited, 
and/or its associates/ group companies/ subsidiaries (the Company), which 
is proprietary and/or confidential or legally privileged in nature or 
exempt from disclosure under applicable law. If you are not the addressee, 
or the person responsible for delivering it to the addressee, any 
disclosure, copying, distribution or any action taken or omitted to be 
taken in reliance on it is prohibited and may be unlawful. If you have 
received this e-mail in error, please notify the sender and remove this 
communication entirely from your system. The recipient acknowledges that no 
guarantee or any warranty is given as to completeness and accuracy of the 
content of the email. The recipient further acknowledges that the views 
contained in the email message are those of the sender and may not 
necessarily reflect those of the Company. Before opening and accessing the 
attachment please check and scan for virus. Thank you.

WARNING: Computer viruses can be transmitted via email. The recipient 
should check this email and any attachments for the presence of viruses. 
The sender or the Company accepts no liability for any damage caused by any 
virus transmitted by this email or errors or omissions.

visor> top
Hosts: 2
+=+
| Int./Ext. IPs      | Node ID8(@)      | Node Type | OS                            | CPUs | MACs              | CPU Load |
+=+
| 0:0:0:0:0:0:0:1%lo | 1: C626F82C(@n0) | Server    | Linux amd64 4.4.0-109-generic | 64   | 00:A2:EE:E8:B5:76 | 0.01 %   |
| 10.10.13.36        | 2: 4CC62A12(@n1) | Server    |                               |      |                   |          |
| 127.0.0.1          | 3: 15E5CD4F(@n2) | Server    |                               |      |                   |          |
|                    | 4: 7C43286B(@n3) | Server    |                               |      |                   |          |
|                    | 5: 72A737BF(@n4) | Server    |                               |      |                   |          |
|                    | 6: 4CE19A41(@n5) | Server    |                               |      |                   |          |
|                    | 7: 2AE6B4D3(@n6) | Server    |                               |      |                   |          |
|                    | 8: 0117773D(@n7) | Server    |

Re: Error: Failed to handle JDBC request because node is stopping. (state=50000,code=0)

2018-01-25 Thread Rahul Pandey
Thanks Evgenii, that was quick.

On Thu, Jan 25, 2018 at 3:50 PM, Evgenii Zhuravlev  wrote:

> Default port for JdbcThinDriver is 10800, while this node started on:
> Local ports: TCP:10801 TCP:11211 TCP:47101 UDP:47400 TCP:47501
> It's possible that you have some problematic node started on port 10800.
>
> OR
>
> As I see, you have Topology snapshot [ver=6024, servers=1, clients=0,
> CPUs=64, heap=1.0GB], which means that a daemon node was started earlier
> and it stores the topology version. It's possible that you connected to
> this node too.
>
> To fix this you need to stop all other nodes, including visor, and connect
> to the default port, or just change the port in the connection string.
>
> Evgenii
>
> 2018-01-25 13:07 GMT+03:00 Rahul Pandey  pragmatixservices.com>:
>
>> Hi,
>>
>> I do not know where to find complete logs for sqlline.
>>
>> The list of steps I am following is:
>>
>> 1. Starting one ignite server by using ignite.sh script with no xml
>> configuration.
>> The logs for this step I have attached.
>>
>> 2. Starting sqlline with following commands:
>> ./sqlline.sh --color=true --verbose=true -u jdbc:ignite:thin://127.0.0.1/
>>
>> 3. Executing query to see list of tables with the help of !tables command
>>
>> On third step I get the following error:
>>
>> Error: Failed to handle JDBC request because node is stopping.
>> (state=50000,code=0)
>> java.sql.SQLException: Failed to handle JDBC request because node is
>> stopping.
>> at org.apache.ignite.internal.jdbc.thin.JdbcThinConnection.sendRequest(JdbcThinConnection.java:671)
>> at org.apache.ignite.internal.jdbc.thin.JdbcThinDatabaseMetadata.getTables(JdbcThinDatabaseMetadata.java:740)
>> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>> at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>> at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>> at java.lang.reflect.Method.invoke(Method.java:498)
>> at sqlline.Reflector.invoke(Reflector.java:75)
>> at sqlline.Commands.metadata(Commands.java:194)
>> at sqlline.Commands.tables(Commands.java:332)
>> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>> at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>> at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>> at java.lang.reflect.Method.invoke(Method.java:498)
>> at sqlline.ReflectiveCommandHandler.execute(ReflectiveCommandHandler.java:38)
>> at sqlline.SqlLine.dispatch(SqlLine.java:791)
>> at sqlline.SqlLine.begin(SqlLine.java:668)
>> at sqlline.SqlLine.start(SqlLine.java:373)
>> at sqlline.SqlLine.main(SqlLine.java:265)
>>
>>
>> Regards,
>>
>>
>>
>> On Thu, Jan 25, 2018 at 3:21 PM, Evgenii Zhuravlev <
>> e.zhuravlev...@gmail.com> wrote:
>>
>>> Hi Rahul,
>>>
>>> Could you please share full logs from Ignite - we need to check it's
>>> status.
>>>
>>> Evgenii
>>>
>>> 2018-01-25 12:34 GMT+03:00 Rahul Pandey >> s.com>:
>>>
 Hi all,

 I am facing error while runnig sqlline.sh

  I am running the following commands:
 ./sqlline.sh --color=true --verbose=true -u jdbc:ignite:thin://
 127.0.0.1/

  !tables

 Error Stack trace is as below:

Error: Failed to handle JDBC request because node is stopping.
(state=50000,code=0)
 java.sql.SQLException: Failed to handle JDBC request because node is
 stopping.
at org.apache.ignite.internal.jdbc.thin.JdbcThinConnection.sendRequest(JdbcThinConnection.java:671)
at org.apache.ignite.internal.jdbc.thin.JdbcThinDatabaseMetadata.getTables(JdbcThinDatabaseMetadata.java:740)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at sqlline.Reflector.invoke(Reflector.java:75)
at sqlline.Commands.metadata(Commands.java:194)
at sqlline.Commands.tables(Commands.java:332)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at sqlline.ReflectiveCommandHandler.execute(ReflectiveCommandHandler.java:38)
at sqlline.SqlLine.dispatch(SqlLine.java:791)

Binary type has different affinity key fields

2018-01-25 Thread Thomas Isaksen
I have no idea why I am getting this exception. It occurs when I try to do
cache.put(...)

class org.apache.ignite.binary.BinaryObjectException: Binary type has different 
affinity key fields [typeName=no.toyota.gatekeeper.ignite.key.CredentialsKey, 
affKeyFieldName1=id, affKeyFieldName2=username]

CredentialsKey fields:

private long id;
@QuerySqlField(index = true)
@AffinityKeyMapped
private String username;
@QuerySqlField(index = true)
private String password;

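This error commonly means the cluster already holds binary metadata for CredentialsKey registered with a different affinity field (`id`), e.g. stale metadata from an earlier run kept in the persistence/marshaller work directory, or an `affinityKey=id` configured elsewhere (for instance a CacheKeyConfiguration or a CREATE TABLE WITH "affinityKey=id" clause), while the class now annotates `username` with @AffinityKeyMapped. The consistency check can be pictured with this stand-in sketch (plain Java, not the real Ignite API):

```java
import java.util.HashMap;
import java.util.Map;

public class AffinityMetaSketch {
    // Stand-in for the cluster-wide binary metadata registry: the first
    // registration of a type's affinity field wins, and any later registration
    // with a different field name conflicts.
    private static final Map<String, String> registry = new HashMap<>();

    static void register(String typeName, String affField) {
        String prev = registry.putIfAbsent(typeName, affField);
        if (prev != null && !prev.equals(affField))
            throw new IllegalStateException(
                "Binary type has different affinity key fields [typeName=" + typeName
                + ", affKeyFieldName1=" + prev + ", affKeyFieldName2=" + affField + "]");
    }

    public static void main(String[] args) {
        register("CredentialsKey", "id"); // e.g. stale metadata or an affinityKey=id setting
        register("CredentialsKey", "id"); // same field again: fine
        try {
            register("CredentialsKey", "username"); // the @AffinityKeyMapped field: conflicts
        } catch (IllegalStateException e) {
            System.out.println(e.getMessage());
        }
    }
}
```

Making the two definitions agree (or clearing the stale binary metadata before restarting) is the usual way out.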


Re: Purpose of cache in cache.query(Create Table ...) statement

2018-01-25 Thread Shravya Nethula
Hi Denis,

Thank you for the information.

Regards,
Shravya.



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Error: Failed to handle JDBC request because node is stopping. (state=50000,code=0)

2018-01-25 Thread Evgenii Zhuravlev
Default port for JdbcThinDriver is 10800, while this node started on:
Local ports: TCP:10801 TCP:11211 TCP:47101 UDP:47400 TCP:47501
It's possible that you have some problematic node started on port 10800.

OR

As I see, you have Topology snapshot [ver=6024, servers=1, clients=0,
CPUs=64, heap=1.0GB], which means that a daemon node was started earlier
and it stores the topology version. It's possible that you connected to
this node too.

To fix this you need to stop all other nodes, including visor, and connect
to the default port, or just change the port in the connection string.

Evgenii
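To illustrate the port point above: the thin driver defaults to 10800 when the URL omits a port, so reaching the node that bound 10801 requires spelling the port out in the connection string. A small stand-alone sketch (not the driver's actual parsing code) of how the URL carries the port:

```java
public class ThinUrlPort {
    // Extracts the port from a jdbc:ignite:thin:// URL, falling back to the
    // documented default (10800) when none is given.
    static int port(String jdbcUrl) {
        String rest = jdbcUrl.substring("jdbc:ignite:thin://".length());
        int slash = rest.indexOf('/');
        if (slash >= 0) rest = rest.substring(0, slash);
        int colon = rest.indexOf(':');
        return colon < 0 ? 10800 : Integer.parseInt(rest.substring(colon + 1));
    }

    public static void main(String[] args) {
        System.out.println(port("jdbc:ignite:thin://127.0.0.1/"));       // default: 10800
        System.out.println(port("jdbc:ignite:thin://127.0.0.1:10801/")); // explicit: 10801
    }
}
```

So in this thread, connecting with jdbc:ignite:thin://127.0.0.1:10801/ would target the node whose log showed TCP:10801.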

2018-01-25 13:07 GMT+03:00 Rahul Pandey 
:

> Hi,
>
> I do not know where to find complete logs for sqlline.
>
> The list of steps I am following is:
>
> 1. Starting one ignite server by using ignite.sh script with no xml
> configuration.
> The logs for this step I have attached.
>
> 2. Starting sqlline with following commands:
> ./sqlline.sh --color=true --verbose=true -u jdbc:ignite:thin://127.0.0.1/
>
> 3. Executing query to see list of tables with the help of !tables command
>
> On third step I get the following error:
>
> Error: Failed to handle JDBC request because node is stopping.
> (state=50000,code=0)
> java.sql.SQLException: Failed to handle JDBC request because node is
> stopping.
> at org.apache.ignite.internal.jdbc.thin.JdbcThinConnection.sendRequest(JdbcThinConnection.java:671)
> at org.apache.ignite.internal.jdbc.thin.JdbcThinDatabaseMetadata.getTables(JdbcThinDatabaseMetadata.java:740)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:498)
> at sqlline.Reflector.invoke(Reflector.java:75)
> at sqlline.Commands.metadata(Commands.java:194)
> at sqlline.Commands.tables(Commands.java:332)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:498)
> at sqlline.ReflectiveCommandHandler.execute(ReflectiveCommandHandler.java:38)
> at sqlline.SqlLine.dispatch(SqlLine.java:791)
> at sqlline.SqlLine.begin(SqlLine.java:668)
> at sqlline.SqlLine.start(SqlLine.java:373)
> at sqlline.SqlLine.main(SqlLine.java:265)
>
>
> Regards,
>
>
>
> On Thu, Jan 25, 2018 at 3:21 PM, Evgenii Zhuravlev <
> e.zhuravlev...@gmail.com> wrote:
>
>> Hi Rahul,
>>
>> Could you please share full logs from Ignite - we need to check it's
>> status.
>>
>> Evgenii
>>
>> 2018-01-25 12:34 GMT+03:00 Rahul Pandey > s.com>:
>>
>>> Hi all,
>>>
>>> I am facing an error while running sqlline.sh
>>>
>>>  I am running the following commands:
>>> ./sqlline.sh --color=true --verbose=true -u jdbc:ignite:thin://
>>> 127.0.0.1/
>>>
>>>  !tables
>>>
>>> Error Stack trace is as below:
>>>
>>> Error: Failed to handle JDBC request because node is stopping.
>>> (state=50000,code=0)
>>> java.sql.SQLException: Failed to handle JDBC request because node is
>>> stopping.
>>> at org.apache.ignite.internal.jdbc.thin.JdbcThinConnection.sendRequest(JdbcThinConnection.java:671)
>>> at org.apache.ignite.internal.jdbc.thin.JdbcThinDatabaseMetadata.getTables(JdbcThinDatabaseMetadata.java:740)
>>> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>>> at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>>> at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>>> at java.lang.reflect.Method.invoke(Method.java:498)
>>> at sqlline.Reflector.invoke(Reflector.java:75)
>>> at sqlline.Commands.metadata(Commands.java:194)
>>> at sqlline.Commands.tables(Commands.java:332)
>>> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>>> at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>>> at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>>> at java.lang.reflect.Method.invoke(Method.java:498)
>>> at sqlline.ReflectiveCommandHandler.execute(ReflectiveCommandHandler.java:38)
>>> at sqlline.SqlLine.dispatch(SqlLine.java:791)
>>> at sqlline.SqlLine.begin(SqlLine.java:668)
>>> at sqlline.SqlLine.start(SqlLine.java:373)
>>> at sqlline.SqlLine.main(SqlLine.java:265)
>>>
>>> I am running ignite.sh with default configurations.
>>>
>>> Regards,
>>>
>>>
>>> 

Re: Error: Failed to handle JDBC request because node is stopping. (state=50000,code=0)

2018-01-25 Thread Rahul Pandey
Hi,

I do not know where to find complete logs for sqlline.

The list of steps I am following is:

1. Starting one ignite server by using ignite.sh script with no xml
configuration.
The logs for this step I have attached.

2. Starting sqlline with following commands:
./sqlline.sh --color=true --verbose=true -u jdbc:ignite:thin://127.0.0.1/

3. Executing query to see list of tables with the help of !tables command

On third step I get the following error:

Error: Failed to handle JDBC request because node is stopping.
(state=50000,code=0)
java.sql.SQLException: Failed to handle JDBC request because node is
stopping.
at org.apache.ignite.internal.jdbc.thin.JdbcThinConnection.sendRequest(JdbcThinConnection.java:671)
at org.apache.ignite.internal.jdbc.thin.JdbcThinDatabaseMetadata.getTables(JdbcThinDatabaseMetadata.java:740)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at sqlline.Reflector.invoke(Reflector.java:75)
at sqlline.Commands.metadata(Commands.java:194)
at sqlline.Commands.tables(Commands.java:332)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at sqlline.ReflectiveCommandHandler.execute(ReflectiveCommandHandler.java:38)
at sqlline.SqlLine.dispatch(SqlLine.java:791)
at sqlline.SqlLine.begin(SqlLine.java:668)
at sqlline.SqlLine.start(SqlLine.java:373)
at sqlline.SqlLine.main(SqlLine.java:265)


Regards,



On Thu, Jan 25, 2018 at 3:21 PM, Evgenii Zhuravlev  wrote:

> Hi Rahul,
>
> Could you please share full logs from Ignite - we need to check it's
> status.
>
> Evgenii
>
> 2018-01-25 12:34 GMT+03:00 Rahul Pandey  pragmatixservices.com>:
>
>> Hi all,
>>
>> I am facing an error while running sqlline.sh
>>
>>  I am running the following commands:
>> ./sqlline.sh --color=true --verbose=true -u jdbc:ignite:thin://127.0.0.1/
>>
>>  !tables
>>
>> Error Stack trace is as below:
>>
>> Error: Failed to handle JDBC request because node is stopping.
>> (state=50000,code=0)
>> java.sql.SQLException: Failed to handle JDBC request because node is
>> stopping.
>> at org.apache.ignite.internal.jdbc.thin.JdbcThinConnection.sendRequest(JdbcThinConnection.java:671)
>> at org.apache.ignite.internal.jdbc.thin.JdbcThinDatabaseMetadata.getTables(JdbcThinDatabaseMetadata.java:740)
>> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>> at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>> at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>> at java.lang.reflect.Method.invoke(Method.java:498)
>> at sqlline.Reflector.invoke(Reflector.java:75)
>> at sqlline.Commands.metadata(Commands.java:194)
>> at sqlline.Commands.tables(Commands.java:332)
>> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>> at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>> at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>> at java.lang.reflect.Method.invoke(Method.java:498)
>> at sqlline.ReflectiveCommandHandler.execute(ReflectiveCommandHandler.java:38)
>> at sqlline.SqlLine.dispatch(SqlLine.java:791)
>> at sqlline.SqlLine.begin(SqlLine.java:668)
>> at sqlline.SqlLine.start(SqlLine.java:373)
>> at sqlline.SqlLine.main(SqlLine.java:265)
>>
>> I am running ignite.sh with default configurations.
>>
>> Regards,
>>
>>
>> --
>>

Re: One problem about Cluster Configuration(cfg)

2018-01-25 Thread Andrey Mashenkov
Rick,

Looks ok.
You run 2 nodes, then you kill one, and the other node reports that the killed
node was dropped from the grid.

What is the issue?

On Thu, Jan 25, 2018 at 12:38 PM,  wrote:

> Hi Andrey,
>
>
>
> 1.  There are no other running nodes when I triggered the two nodes.
>
>
>
> 2.  If I first triggered the One node (shell script) and then
> triggered the other node (maven project, Java),
>
> I closed the other node (maven project, Java) and *the One node was still
> running*. The program result of the One node shows that:
>
>
>
> [25-Jan-2018 17:32:25][WARN ][tcp-disco-msg-worker-#2%null%][TcpDiscoverySpi]
> Local node has detected failed nodes and started cluster-wide procedure. To
> speed up failure detection please see 'Failure Detection' section under
> javadoc for 'TcpDiscoverySpi'
>
>
>
> [25-01-2018 17:32:25][INFO 
> ][disco-event-worker-#28%null%][GridDiscoveryManager]
> Added new node to topology: TcpDiscoveryNode 
> [id=664c870e-6b93-4328-a95b-9e04d5b4f59c,
> addrs=[0:0:0:0:0:0:0:1%lo,  127.0.0.1], sockAddrs=[ubuntu/ 127.0.0.1:47501,
> /0:0:0:0:0:0:0:1%lo:47501, /127.0.0.1:47501], discPort=47501, order=10,
> intOrder=6, lastExchangeTime=1516872738417, loc=false,
> ver=1.9.0#20170302-sha1:a8169d0a, isClient=false]
>
>
>
> [25-01-2018 17:32:25][INFO 
> ][disco-event-worker-#28%null%][GridDiscoveryManager]
> **Topology snapshot [ver=10, servers=2, clients=0, CPUs=4, heap=4.5GB]**
>
>
>
> [25-Jan-2018 17:32:25][WARN 
> ][disco-event-worker-#28%null%][GridDiscoveryManager]
> Node FAILED: TcpDiscoveryNode [id=664c870e-6b93-4328-a95b-9e04d5b4f59c,
> addrs=[0:0:0:0:0:0:0:1%lo, 127.0.0.1], sockAddrs=[ubuntu/127.0.0.1:47501,
> /0:0:0:0:0:0:0:1%lo:47501, /127.0.0.1:47501], discPort=47501, order=10,
> intOrder=6, lastExchangeTime=1516872738417, loc=false,
> ver=1.9.0#20170302-sha1:a8169d0a, isClient=false]
>
>
>
> [25-01-2018 17:32:25][INFO 
> ][disco-event-worker-#28%null%][GridDiscoveryManager]
> Topology snapshot *[ver=11, servers=1, clients=0, CPUs=4, heap=1.0GB]*
>
>
>
> Rick
>
>
>
> *From:* Andrey Mashenkov [mailto:andrey.mashen...@gmail.com]
> *Sent:* Thursday, January 25, 2018 5:10 PM
> *To:* user@ignite.apache.org
> *Subject:* Re: One problem about Cluster Configuration(cfg)
>
>
>
> Hi Rick,
>
>
>
> Do you have a luck to resolve this?
>
> Or you still observe the issue when configuring ipFinder via API?
>
>
>
> On Thu, Jan 25, 2018 at 11:29 AM,  wrote:
>
> Hi all,
>
>
>
> By the way, I run two nodes on localhost, and the multicastGroup ip and
> port are default settings in the example-cache.xml, as:
>
> 
> ===
>
> 
>
>   
>
>   
>
>   
>
> 
>
> 127.0.0.1:47500..47509
>
>   
>
>   
>
> 
>
> 
> ===
>
>
>
> Rick
>
>
>
> *From:* linr...@itri.org.tw [mailto:linr...@itri.org.tw]
> *Sent:* Thursday, January 25, 2018 3:51 PM
> *To:* user@ignite.apache.org
> *Subject:* One problem about Cluster Configuration(cfg)
>
>
>
> Hi all,
>
>
>
> I have tried to construct a cluster with two nodes.
>
>
>
> my run environment ==
> 
>
> OS: Ubuntu 14.04.5 LTS
>
> Java version: 1.7
>
> Ignite version: 1.9.0
>
> 
> ===
>
>
>
> One node with an “example-cache.xml” was started by the shell script with
> the following command: ./bin/ignite.sh config/example-cache.xml
>
> The execution result of the program is:
>
> shell script result ==
> ==
>
> Local node [ID=D411C309-E56A-4773-ABD1-132ADE62C325, order=1,
> clientMode=false]
>
> *Local node addresses: [ubuntu/0:0:0:0:0:0:0:1%lo, /127.0.0.1
> ]*
>
> *Local ports: TCP:8080 TCP:11211 TCP:47100 UDP:47400 TCP:47500*
>
>
>
> [25-01-2018 15:23:44][INFO ][main][GridDiscoveryManager] Topology snapshot
> [*ver=1, servers=1, clients=0, CPUs=4, heap=1.0GB*]
>
> [25-01-2018 15:23:48][INFO ][Thread-23][G] Invoking shutdown hook...
>
> [25-01-2018 15:23:48][INFO ][Thread-23][GridTcpRestProtocol] Command
> protocol successfully stopped: TCP binary
>
> [25-01-2018 15:23:48][INFO ][Thread-23][GridJettyRestProtocol] Command
> protocol successfully stopped: Jetty REST
>
> [25-01-2018 15:23:48][INFO ][Thread-23][GridCacheProcessor] Stopped
> cache: *oneCache*
>
> 
> ===
>
>
>
> The other node was triggered by the maven project (java 1.7) as the
> following command: mvn 

Re: Error: Failed to handle JDBC request because node is stopping. (state=50000,code=0)

2018-01-25 Thread Evgenii Zhuravlev
Hi Rahul,

Could you please share full logs from Ignite - we need to check its status.

Evgenii

2018-01-25 12:34 GMT+03:00 Rahul Pandey 
:

> Hi all,
>
> I am facing an error while running sqlline.sh
>
>  I am running the following commands:
> ./sqlline.sh --color=true --verbose=true -u jdbc:ignite:thin://127.0.0.1/
>
>  !tables
>
> Error Stack trace is as below:
>
> Error: Failed to handle JDBC request because node is stopping.
> (state=50000,code=0)
> java.sql.SQLException: Failed to handle JDBC request because node is
> stopping.
> at org.apache.ignite.internal.jdbc.thin.JdbcThinConnection.sendRequest(JdbcThinConnection.java:671)
> at org.apache.ignite.internal.jdbc.thin.JdbcThinDatabaseMetadata.getTables(JdbcThinDatabaseMetadata.java:740)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:498)
> at sqlline.Reflector.invoke(Reflector.java:75)
> at sqlline.Commands.metadata(Commands.java:194)
> at sqlline.Commands.tables(Commands.java:332)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:498)
> at sqlline.ReflectiveCommandHandler.execute(ReflectiveCommandHandler.java:38)
> at sqlline.SqlLine.dispatch(SqlLine.java:791)
> at sqlline.SqlLine.begin(SqlLine.java:668)
> at sqlline.SqlLine.start(SqlLine.java:373)
> at sqlline.SqlLine.main(SqlLine.java:265)
>
> I am running ignite.sh with default configurations.
>
> Regards,
>
>
> --
>
>
>


RE: One problem about Cluster Configuration(cfg)

2018-01-25 Thread linrick
Hi Andrey,


1.  There are no other running nodes when I triggered the two nodes.



2.  If I first triggered the One node (shell script) and then triggered
the other node (maven project, Java),

I closed the other node (maven project, Java) and the One node was still
running. The program result of the One node shows that:


[25-Jan-2018 17:32:25][WARN ][tcp-disco-msg-worker-#2%null%][TcpDiscoverySpi] 
Local node has detected failed nodes and started cluster-wide procedure. To 
speed up failure detection please see 'Failure Detection' section under javadoc 
for 'TcpDiscoverySpi'



[25-01-2018 17:32:25][INFO 
][disco-event-worker-#28%null%][GridDiscoveryManager] Added new node to 
topology: TcpDiscoveryNode [id=664c870e-6b93-4328-a95b-9e04d5b4f59c, 
addrs=[0:0:0:0:0:0:0:1%lo,  127.0.0.1], sockAddrs=[ubuntu/ 127.0.0.1:47501, 
/0:0:0:0:0:0:0:1%lo:47501, /127.0.0.1:47501], discPort=47501, order=10, 
intOrder=6, lastExchangeTime=1516872738417, loc=false, 
ver=1.9.0#20170302-sha1:a8169d0a, isClient=false]



[25-01-2018 17:32:25][INFO 
][disco-event-worker-#28%null%][GridDiscoveryManager] *Topology snapshot 
[ver=10, servers=2, clients=0, CPUs=4, heap=4.5GB]*



[25-Jan-2018 17:32:25][WARN 
][disco-event-worker-#28%null%][GridDiscoveryManager] Node FAILED: 
TcpDiscoveryNode [id=664c870e-6b93-4328-a95b-9e04d5b4f59c, 
addrs=[0:0:0:0:0:0:0:1%lo, 127.0.0.1], sockAddrs=[ubuntu/127.0.0.1:47501, 
/0:0:0:0:0:0:0:1%lo:47501, /127.0.0.1:47501], discPort=47501, order=10, 
intOrder=6, lastExchangeTime=1516872738417, loc=false, 
ver=1.9.0#20170302-sha1:a8169d0a, isClient=false]



[25-01-2018 17:32:25][INFO 
][disco-event-worker-#28%null%][GridDiscoveryManager] Topology snapshot 
[ver=11, servers=1, clients=0, CPUs=4, heap=1.0GB]


Rick

From: Andrey Mashenkov [mailto:andrey.mashen...@gmail.com]
Sent: Thursday, January 25, 2018 5:10 PM
To: user@ignite.apache.org
Subject: Re: One problem about Cluster Configuration(cfg)

Hi Rick,

Do you have a luck to resolve this?
Or you still observe the issue when configuring ipFinder via API?

On Thu, Jan 25, 2018 at 11:29 AM, 
> wrote:
Hi all,

By the way, I run two nodes on localhost, and the multicastGroup ip and port 
are default settings in the example-cache.xml, as:
===

  
  
  

127.0.0.1:47500..47509
  
  

===

Rick

From: linr...@itri.org.tw 
[mailto:linr...@itri.org.tw]
Sent: Thursday, January 25, 2018 3:51 PM
To: user@ignite.apache.org
Subject: One problem about Cluster Configuration(cfg)

Hi all,

I have tried to construct a cluster with two nodes.

my run environment 
==
OS: Ubuntu 14.04.5 LTS
Java version: 1.7
Ignite version: 1.9.0
===

One node was started with “example-cache.xml” via the shell script, using the 
following command: ./bin/ignite.sh config/example-cache.xml
The execution output of the program is as follows:
shell script result 

Local node [ID=D411C309-E56A-4773-ABD1-132ADE62C325, order=1, clientMode=false]
Local node addresses: [ubuntu/0:0:0:0:0:0:0:1%lo, /127.0.0.1]
Local ports: TCP:8080 TCP:11211 TCP:47100 UDP:47400 TCP:47500

[25-01-2018 15:23:44][INFO ][main][GridDiscoveryManager] Topology snapshot 
[ver=1, servers=1, clients=0, CPUs=4, heap=1.0GB]
[25-01-2018 15:23:48][INFO ][Thread-23][G] Invoking shutdown hook...
[25-01-2018 15:23:48][INFO ][Thread-23][GridTcpRestProtocol] Command protocol 
successfully stopped: TCP binary
[25-01-2018 15:23:48][INFO ][Thread-23][GridJettyRestProtocol] Command protocol 
successfully stopped: Jetty REST
[25-01-2018 15:23:48][INFO ][Thread-23][GridCacheProcessor] Stopped cache: 
oneCache
===

The other node was started from the Maven project (Java 1.7) with the following 
command: mvn compile exec:java -Dexec.mainClass=…
In addition, my java code is as:
Java code 
==
TcpDiscoveryMulticastIpFinder ipFinder = new TcpDiscoveryMulticastIpFinder();

TcpDiscoverySpi spi = new TcpDiscoverySpi();
spi.setIpFinder(ipFinder);

IgniteConfiguration cfg = new IgniteConfiguration();

cfg.setClientMode(false);


Error: Failed to handle JDBC request because node is stopping. (state=50000,code=0)

2018-01-25 Thread Rahul Pandey
Hi all,

I am facing an error while running sqlline.sh

I am running the following commands:
./sqlline.sh --color=true --verbose=true -u jdbc:ignite:thin://127.0.0.1/

 !tables

Error Stack trace is as below:

Error: Failed to handle JDBC request because node is stopping.
(state=50000,code=0)
java.sql.SQLException: Failed to handle JDBC request because node is
stopping.
at
org.apache.ignite.internal.jdbc.thin.JdbcThinConnection.sendRequest(JdbcThinConnection.java:671)
at
org.apache.ignite.internal.jdbc.thin.JdbcThinDatabaseMetadata.getTables(JdbcThinDatabaseMetadata.java:740)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at sqlline.Reflector.invoke(Reflector.java:75)
at sqlline.Commands.metadata(Commands.java:194)
at sqlline.Commands.tables(Commands.java:332)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at
sqlline.ReflectiveCommandHandler.execute(ReflectiveCommandHandler.java:38)
at sqlline.SqlLine.dispatch(SqlLine.java:791)
at sqlline.SqlLine.begin(SqlLine.java:668)
at sqlline.SqlLine.start(SqlLine.java:373)
at sqlline.SqlLine.main(SqlLine.java:265)

I am running ignite.sh with default configurations.
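
For what it's worth, a quick way to tell whether the server was already shutting
down when sqlline connected is to probe the thin-driver port first. This is only
a sketch under the assumption of a single local node listening on the default
thin-client port 10800 (the port is not stated in this thread):

```shell
# Hypothetical diagnostic sketch: check whether the assumed default Ignite
# thin-driver port (10800) is still accepting connections before launching
# sqlline. Prints "thin port open" or "thin port closed".
if (echo > /dev/tcp/127.0.0.1/10800) 2>/dev/null; then
    echo "thin port open"
    # ./sqlline.sh --color=true --verbose=true -u jdbc:ignite:thin://127.0.0.1/
else
    echo "thin port closed"
fi
```

If the port is closed, the node has stopped (or never started), which would
match the "node is stopping" error above.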

Regards,

-- 





Re: One problem about Cluster Configuration(cfg)

2018-01-25 Thread Andrey Mashenkov
Hi Rick,

Do you have a luck to resolve this?
Or you still observe the issue when configuring ipFinder via API?

On Thu, Jan 25, 2018 at 11:29 AM,  wrote:

> Hi all,
>
>
>
> By the way, I run two nodes on localhost, and the multicastGroup ip and
> port are default settings in the example-cache.xml, as:
>
> 
> ===
>
> <property name="discoverySpi">
>   <bean class="org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi">
>     <property name="ipFinder">
>       <bean class="org.apache.ignite.spi.discovery.tcp.ipfinder.multicast.TcpDiscoveryMulticastIpFinder">
>         <property name="addresses">
>           <list>
>             <value>127.0.0.1:47500..47509</value>
>           </list>
>         </property>
>       </bean>
>     </property>
>   </bean>
> </property>
>
> ===
>
>
>
> Rick
>
>
>
> *From:* linr...@itri.org.tw [mailto:linr...@itri.org.tw]
> *Sent:* Thursday, January 25, 2018 3:51 PM
> *To:* user@ignite.apache.org
> *Subject:* One problem about Cluster Configuration(cfg)
>
>
>
> Hi all,
>
>
>
> I have tried to construct a cluster with two nodes.
>
>
>
> my run environment ==
> 
>
> OS: Ubuntu 14.04.5 LTS
>
> Java version: 1.7
>
> Ignite version: 1.9.0
>
> 
> ===
>
>
>
> One node was started with “example-cache.xml” via the shell script, using
> the following command: ./bin/ignite.sh config/example-cache.xml
>
> The execution output of the program is as follows:
>
> shell script result ==
> ==
>
> Local node [ID=D411C309-E56A-4773-ABD1-132ADE62C325, order=1,
> clientMode=false]
>
> Local node addresses: [ubuntu/0:0:0:0:0:0:0:1%lo, /127.0.0.1]
>
> Local ports: TCP:8080 TCP:11211 TCP:47100 UDP:47400 TCP:47500
>
>
>
> [25-01-2018 15:23:44][INFO ][main][GridDiscoveryManager] Topology snapshot
> [ver=1, servers=1, clients=0, CPUs=4, heap=1.0GB]
>
> [25-01-2018 15:23:48][INFO ][Thread-23][G] Invoking shutdown hook...
>
> [25-01-2018 15:23:48][INFO ][Thread-23][GridTcpRestProtocol] Command
> protocol successfully stopped: TCP binary
>
> [25-01-2018 15:23:48][INFO ][Thread-23][GridJettyRestProtocol] Command
> protocol successfully stopped: Jetty REST
>
> [25-01-2018 15:23:48][INFO ][Thread-23][GridCacheProcessor] Stopped
> cache: oneCache
>
> 
> ===
>
>
>
> The other node was started from the Maven project (Java 1.7) with the
> following command: mvn compile exec:java -Dexec.mainClass=…
>
> In addition, my java code is as:
>
> Java code 
> ==
>
> TcpDiscoveryMulticastIpFinder ipFinder = new TcpDiscoveryMulticastIpFinder();
>
>
>
> TcpDiscoverySpi spi = new TcpDiscoverySpi();
>
> spi.setIpFinder(ipFinder);
>
>
>
> IgniteConfiguration cfg = new IgniteConfiguration();
>
>
>
> cfg.setClientMode(false);
>
>
>
> cfg.setDiscoverySpi(spi);
>
>
>
> Ignite igniteVar = Ignition.getOrStart(cfg);
>
>
>
> CacheConfiguration<String, String> cacheConf = new CacheConfiguration<>();
>
> cacheConf.setName("oneCache");
>
> cacheConf.setIndexedTypes(String.class, String.class);
>
> IgniteCache<String, String> cache = igniteVar.getOrCreateCache(cacheConf);
>
> 
> ===
>
>
>
> The execution output of the Java program is as follows:
>
> Maven project(java) result ==
> ===
>
> SLF4J: Class path contains multiple SLF4J bindings.
>
> SLF4J: Found binding in [jar:file:/root/.m2/repository/org/slf4j/slf4j-
> log4j12/1.7.25/slf4j-log4j12-1.7.25.jar!/org/slf4j/impl/
> StaticLoggerBinder.class]
>
> SLF4J: Found binding in [jar:file:/root/.m2/repository/org/slf4j/slf4j-
> jdk14/1.7.25/slf4j-jdk14-1.7.25.jar!/org/slf4j/impl/
> StaticLoggerBinder.class]
>
> SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an
> explanation.
>
> SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
>
> Program execution locks up….
>
> 
> ===
>
>
>
> And if I stopped the first node (shell script), the Maven project program
> started running, as:
>
> 
> ===
>
> [15:32:13] Performance suggestions for grid  (fix if possible)
>
> [15:32:13] To disable, set -DIGNITE_PERFORMANCE_SUGGESTIONS_DISABLED=true
>
> [15:32:13]   ^-- Enable G1 Garbage Collector (add '-XX:+UseG1GC' to JVM
> options)
>
> [15:32:13]   ^-- Specify JVM 

RE: One problem about Cluster Configuration(cfg)

2018-01-25 Thread linrick
Hi all,

By the way, I run two nodes on localhost, and the multicastGroup ip and port 
are default settings in the example-cache.xml, as:
===
<property name="discoverySpi">
  <bean class="org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi">
    <property name="ipFinder">
      <bean class="org.apache.ignite.spi.discovery.tcp.ipfinder.multicast.TcpDiscoveryMulticastIpFinder">
        <property name="addresses">
          <list>
            <value>127.0.0.1:47500..47509</value>
          </list>
        </property>
      </bean>
    </property>
  </bean>
</property>
===

Rick

From: linr...@itri.org.tw [mailto:linr...@itri.org.tw]
Sent: Thursday, January 25, 2018 3:51 PM
To: user@ignite.apache.org
Subject: One problem about Cluster Configuration(cfg)

Hi all,

I have tried to construct a cluster with two nodes.

my run environment 
==
OS: Ubuntu 14.04.5 LTS
Java version: 1.7
Ignite version: 1.9.0
===

One node was started with “example-cache.xml” via the shell script, using the 
following command: ./bin/ignite.sh config/example-cache.xml
The execution output of the program is as follows:
shell script result 

Local node [ID=D411C309-E56A-4773-ABD1-132ADE62C325, order=1, clientMode=false]
Local node addresses: [ubuntu/0:0:0:0:0:0:0:1%lo, /127.0.0.1]
Local ports: TCP:8080 TCP:11211 TCP:47100 UDP:47400 TCP:47500

[25-01-2018 15:23:44][INFO ][main][GridDiscoveryManager] Topology snapshot 
[ver=1, servers=1, clients=0, CPUs=4, heap=1.0GB]
[25-01-2018 15:23:48][INFO ][Thread-23][G] Invoking shutdown hook...
[25-01-2018 15:23:48][INFO ][Thread-23][GridTcpRestProtocol] Command protocol 
successfully stopped: TCP binary
[25-01-2018 15:23:48][INFO ][Thread-23][GridJettyRestProtocol] Command protocol 
successfully stopped: Jetty REST
[25-01-2018 15:23:48][INFO ][Thread-23][GridCacheProcessor] Stopped cache: 
oneCache
===

The other node was started from the Maven project (Java 1.7) with the following 
command: mvn compile exec:java -Dexec.mainClass=…
In addition, my java code is as:
Java code 
==
TcpDiscoveryMulticastIpFinder ipFinder = new TcpDiscoveryMulticastIpFinder();

TcpDiscoverySpi spi = new TcpDiscoverySpi();
spi.setIpFinder(ipFinder);

IgniteConfiguration cfg = new IgniteConfiguration();

cfg.setClientMode(false);

cfg.setDiscoverySpi(spi);

Ignite igniteVar = Ignition.getOrStart(cfg);

CacheConfiguration<String, String> cacheConf = new CacheConfiguration<>();
cacheConf.setName("oneCache");
cacheConf.setIndexedTypes(String.class, String.class);
IgniteCache<String, String> cache = igniteVar.getOrCreateCache(cacheConf);
===

The execution output of the Java program is as follows:
Maven project(java) result 
=
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in 
[jar:file:/root/.m2/repository/org/slf4j/slf4j-log4j12/1.7.25/slf4j-log4j12-1.7.25.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in 
[jar:file:/root/.m2/repository/org/slf4j/slf4j-jdk14/1.7.25/slf4j-jdk14-1.7.25.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
Program execution locks up….
===

And if I stopped the first node (shell script), the Maven project program 
started running, as:
===
[15:32:13] Performance suggestions for grid  (fix if possible)
[15:32:13] To disable, set -DIGNITE_PERFORMANCE_SUGGESTIONS_DISABLED=true
[15:32:13]   ^-- Enable G1 Garbage Collector (add '-XX:+UseG1GC' to JVM options)
[15:32:13]   ^-- Specify JVM heap max size (add '-Xmx<size>[g|G|m|M|k|K]' to 
JVM options)
[15:32:13]   ^-- Set max direct memory size if getting 'OOME: Direct buffer 
memory' (add '-XX:MaxDirectMemorySize=<size>[g|G|m|M|k|K]' to JVM options)
[15:32:13]   ^-- Disable processing of calls to System.gc() (add 
'-XX:+DisableExplicitGC' to JVM options)
[15:32:13] Refer to this page for more performance suggestions: 
https://apacheignite.readme.io/docs/jvm-and-system-tuning
[15:32:13]
[15:32:13] To start Console Management & Monitoring run ignitevisorcmd.{sh|bat}
[15:32:13]
[15:32:13] Ignite node started OK (id=753b6c7e)
[15:32:13] Topology snapshot [ver=1, 
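
One common cause of the behavior described above (the second node only coming
up once the first one stops) is a discovery mismatch between the XML-configured
and the API-configured node. The fragment below is a hedged sketch, not taken
from the original post: it swaps in TcpDiscoveryVmIpFinder with an explicit
address list, assuming the same default local port range 127.0.0.1:47500..47509
as the XML config, and assumes the same imports as the quoted code plus
java.util.Collections.

```java
// Sketch only - the original code used TcpDiscoveryMulticastIpFinder.
// TcpDiscoveryVmIpFinder with an explicit address list takes multicast
// out of the picture entirely, so both local nodes resolve each other
// through the static range 127.0.0.1:47500..47509 (assumed default).
TcpDiscoveryVmIpFinder ipFinder = new TcpDiscoveryVmIpFinder();
ipFinder.setAddresses(Collections.singletonList("127.0.0.1:47500..47509"));

TcpDiscoverySpi spi = new TcpDiscoverySpi();
spi.setIpFinder(ipFinder);

IgniteConfiguration cfg = new IgniteConfiguration();
cfg.setClientMode(false);
cfg.setDiscoverySpi(spi);

Ignite igniteVar = Ignition.getOrStart(cfg);
```

If the XML node is left on the multicast finder, it still listens on the same
47500..47509 range, so a statically configured node should be able to join it.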

Re: Ignite 2.x upgrade guidelines

2018-01-25 Thread Evgenii Zhuravlev
Great, please let us know if you face any issues and we will add them to
the documentation.

Evgenii

2018-01-24 23:43 GMT+03:00 bintisepaha :

> Thanks Evgenii. We will let you know how it goes.
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>