Failing to create index on Ignite table column

2019-01-06 Thread Shravya Nethula
Hi,

I successfully created an Ignite table named User. Now I want to create three
indexes on the id, cityId, and age columns of the User table. The indexes on the
id and cityId columns were created successfully, but I am getting the following
"duplicate index found" error for the age column, even though this is the first
time an index is being created on it:

> Generated sql : CREATE INDEX  ON User (id asc)
> Successfully created index on id column

> Generated sql : CREATE INDEX  ON User (cityId asc)
> Successfully created index on cityId column

> Generated sql : CREATE INDEX  ON User (age asc)
> Failed to create index on age column
> Invalid schema state (duplicate index found): null

[12:34:52] (err) DDL operation failure SchemaOperationException [code=0,
msg=Invalid schema state (duplicate index found): null]
at
org.apache.ignite.internal.processors.query.GridQueryProcessor.prepareChangeOnNotStartedCache(GridQueryProcessor.java:1104)
at
org.apache.ignite.internal.processors.query.GridQueryProcessor.startSchemaChange(GridQueryProcessor.java:627)
at
org.apache.ignite.internal.processors.query.GridQueryProcessor.onSchemaPropose(GridQueryProcessor.java:488)
at
org.apache.ignite.internal.processors.cache.GridCacheProcessor.processCustomExchangeTask(GridCacheProcessor.java:378)
at
org.apache.ignite.internal.processors.cache.GridCachePartitionExchangeManager$ExchangeWorker.processCustomTask(GridCachePartitionExchangeManager.java:2475)
at
org.apache.ignite.internal.processors.cache.GridCachePartitionExchangeManager$ExchangeWorker.body0(GridCachePartitionExchangeManager.java:2620)
at
org.apache.ignite.internal.processors.cache.GridCachePartitionExchangeManager$ExchangeWorker.body(GridCachePartitionExchangeManager.java:2539)
at
org.apache.ignite.internal.util.worker.GridWorker.run(GridWorker.java:120)
at java.lang.Thread.run(Thread.java:748)
javax.cache.CacheException: Invalid schema state (duplicate index found):
null
at
org.apache.ignite.internal.processors.cache.IgniteCacheProxyImpl.query(IgniteCacheProxyImpl.java:697)
at
org.apache.ignite.internal.processors.cache.IgniteCacheProxyImpl.query(IgniteCacheProxyImpl.java:636)
at
org.apache.ignite.internal.processors.cache.GatewayProtectedCacheProxy.query(GatewayProtectedCacheProxy.java:388)
at
net.aline.cloudedh.base.database.IgniteTable._createIndex(IgniteTable.java:99)
at 
net.aline.cloudedh.base.database.BigTable.createIndex(BigTable.java:234)
at 
net.aline.cloudedh.base.database.BigTable.createIndex(BigTable.java:246)
at
net.aline.cloudedh.base.framework.DACEngine.createIndexOnTable(DACEngine.java:502)
at
net.aline.cloudedh.base.framework.DACOperationsTest.main(DACOperationsTest.java:24)
Caused by: class
org.apache.ignite.internal.processors.query.IgniteSQLException: Invalid
schema state (duplicate index found): null
at
org.apache.ignite.internal.processors.query.h2.ddl.DdlStatementsProcessor.convert(DdlStatementsProcessor.java:644)
at
org.apache.ignite.internal.processors.query.h2.ddl.DdlStatementsProcessor.runDdlStatement(DdlStatementsProcessor.java:505)
at
org.apache.ignite.internal.processors.query.h2.IgniteH2Indexing.doRunPrepared(IgniteH2Indexing.java:2293)
at
org.apache.ignite.internal.processors.query.h2.IgniteH2Indexing.querySqlFields(IgniteH2Indexing.java:2209)
at
org.apache.ignite.internal.processors.query.GridQueryProcessor$4.applyx(GridQueryProcessor.java:2135)
at
org.apache.ignite.internal.processors.query.GridQueryProcessor$4.applyx(GridQueryProcessor.java:2130)
at
org.apache.ignite.internal.util.lang.IgniteOutClosureX.apply(IgniteOutClosureX.java:36)
at
org.apache.ignite.internal.processors.query.GridQueryProcessor.executeQuery(GridQueryProcessor.java:2707)
at
org.apache.ignite.internal.processors.query.GridQueryProcessor.querySqlFields(GridQueryProcessor.java:2144)
at
org.apache.ignite.internal.processors.cache.IgniteCacheProxyImpl.query(IgniteCacheProxyImpl.java:685)
... 7 more

How can I resolve this issue?
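For reference, here is a minimal sketch of how such indexes can be issued
programmatically with explicit, unique names, assuming an already started
Ignite instance named ignite; the cache name and index names below are
assumptions and are not taken from the application above:

    IgniteCache<?, ?> cache = ignite.cache("SQL_PUBLIC_USER"); // assumed cache name
    // Giving each index its own explicit name avoids any name collisions.
    cache.query(new SqlFieldsQuery("CREATE INDEX idx_user_id ON User (id ASC)")).getAll();
    cache.query(new SqlFieldsQuery("CREATE INDEX idx_user_cityid ON User (cityId ASC)")).getAll();
    cache.query(new SqlFieldsQuery("CREATE INDEX idx_user_age ON User (age ASC)")).getAll();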

Thanks.

Regards,
Shravya Nethula.





Re: SQL performance very slow compared to KV in pure-in-memory cache?

2019-01-06 Thread Naveen
As far as I know, these results are quite obvious and expected: KV definitely
gives better performance than SQL, since a SQL table is a wrapper built on top
of a cache.

If you are looking for even better performance than plain KV, you can also store
data in binary form. If data is kept in binary, serialization and deserialization
can be avoided entirely for operations such as GETs/PUTs.
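As an illustration, a minimal sketch of reading a single field from a binary
object without deserializing the whole value, assuming an Ignite instance named
ignite; the cache name, key type, and field name are assumptions:

    IgniteCache<Integer, Object> cache = ignite.cache("User");        // assumed cache
    IgniteCache<Integer, BinaryObject> binCache = cache.withKeepBinary();
    BinaryObject user = binCache.get(1);
    Integer age = user.field("age"); // only this field is read, no full deserialization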

Thanks
Naveen





Re: Native persistence: ignite server failed to join when client has been started

2019-01-06 Thread Roman Guseinov
Hi radha,

It looks like a known issue which is already fixed in 2.7
https://issues.apache.org/jira/browse/IGNITE-8774. Could you try to
reproduce this using Ignite 2.7?

Best Regards,
Roman





Re: SQL performance very slow compared to KV in pure-in-memory cache?

2019-01-06 Thread summasumma
Can anyone throw some light on this please?





Native persistence: ignite server failed to join when client has been started

2019-01-06 Thread radha jai
Hi,
I started two Ignite servers with native persistence and authentication enabled.
After activating the cluster, I checked that both servers were present in the
baseline topology.
Then I started the client, and suddenly one of the servers went down.
When I checked the topology in Visor, only one server and one client were there.
I am using Ignite 2.6.
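For reference, a minimal sketch of a setup along these lines; the exact
configuration used here is an assumption, not the poster's actual code:

    IgniteConfiguration cfg = new IgniteConfiguration();

    // Enable native persistence for the default data region.
    DataStorageConfiguration storageCfg = new DataStorageConfiguration();
    storageCfg.getDefaultDataRegionConfiguration().setPersistenceEnabled(true);
    cfg.setDataStorageConfiguration(storageCfg);

    // Authentication requires persistence to be enabled.
    cfg.setAuthenticationEnabled(true);

    Ignite ignite = Ignition.start(cfg);

    // Activate the cluster once all baseline servers have started.
    ignite.cluster().active(true);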
Below is the error message:

{"type":"log","host":"ignite-cluster-kru-ign-1","level":"ERROR","systemid":"bcfa2b5e","system":"ignite","time":"2019-01-07
04:32:39,135","logger":"TcpDiscoverySpi","timezone":"UTC","marker":"","log":"TcpDiscoverySpi's
message worker thread failed abnormally. Stopping the node in order to
prevent cluster wide instability. class org.apache.ignite.IgniteException:
Node with BaselineTopology cannot join mixed cluster running in
compatibility mode
at
org.apache.ignite.internal.processors.cluster.GridClusterStateProcessor.onGridDataReceived(GridClusterStateProcessor.java:714)
at
org.apache.ignite.internal.managers.discovery.GridDiscoveryManager$5.onExchange(GridDiscoveryManager.java:883)
at
org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi.onExchange(TcpDiscoverySpi.java:1939)
at
org.apache.ignite.spi.discovery.tcp.ServerImpl$RingMessageWorker.processNodeAddedMessage(ServerImpl.java:4354)
at
org.apache.ignite.spi.discovery.tcp.ServerImpl$RingMessageWorker.processMessage(ServerImpl.java:2744)
at
org.apache.ignite.spi.discovery.tcp.ServerImpl$RingMessageWorker.processMessage(ServerImpl.java:2536)
at
org.apache.ignite.spi.discovery.tcp.ServerImpl$MessageWorkerAdapter.body(ServerImpl.java:6775)
at
org.apache.ignite.spi.discovery.tcp.ServerImpl$RingMessageWorker.body(ServerImpl.java:2621)
at
org.apache.ignite.spi.IgniteSpiThread.run(IgniteSpiThread.java:62)
"}

Regards
radha


SqlQuery retrieves the same cache entry twice. ScanQuery results conflict with an identical SqlQuery

2019-01-06 Thread oshevchenko
I ran into very strange SqlQuery behavior when executing a simple query on a
partitioned cache. The problem is that the same cache entry is retrieved twice.
It looks like the entry is retrieved from the primary partition the first time
and taken from the backup the second time. Another oddity is that an identical
ScanQuery gives correct results. The code I run to reproduce this behavior:
import java.io.Serializable;
import java.util.Collection;
import java.util.UUID;

import javax.cache.Cache;

import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.Ignition;
import org.apache.ignite.cache.CachePeekMode;
import org.apache.ignite.cache.query.QueryCursor;
import org.apache.ignite.cache.query.ScanQuery;
import org.apache.ignite.cache.query.SqlQuery;
import org.apache.ignite.cluster.ClusterNode;
import org.apache.ignite.lang.IgniteBiPredicate;
import org.apache.ignite.lang.IgniteCallable;
import org.apache.ignite.resources.IgniteInstanceResource;

// Balance is the domain model class stored in the BALANCE cache (not shown here).
public class UnitDupKeyProblemTests {
    public static void main(String[] args) {
        try (Ignite ignite = Ignition.start("ignite-client-conf-unit.xml")) {
            Collection<Result> results = ignite.compute().broadcast(
                new TestCallable("ENT_LST_0004_20180630_14009_99_9__23_USD_9_9_99_2018_9_9_LN00025_99_99"));
            results.forEach(System.out::println);
        }
    }

    private static class BiPredicateFilter implements IgniteBiPredicate<String, Balance> {
        @Override
        public boolean apply(String string, Balance balance) {
            return balance.getTransType().equals("23");
        }
    }

    private static class Result implements Serializable {
        int timesFound;
        int nonPrimary;
        Collection<String> addresses;
        UUID nodeUid;
        boolean primary;
        boolean backUp;

        public Result(int timesFound, int nonPrimary, Collection<String> addresses,
                      UUID nodeUid, boolean primary, boolean backUp) {
            this.timesFound = timesFound;
            this.addresses = addresses;
            this.nodeUid = nodeUid;
            this.primary = primary;
            this.nonPrimary = nonPrimary;
            this.backUp = backUp;
        }

        public int getTimesFound() {
            return timesFound;
        }

        @Override
        public String toString() {
            return "Result{" +
                "timesFound=" + timesFound +
                ", nonPrimary=" + nonPrimary +
                ", addresses=" + addresses +
                ", nodeUid=" + nodeUid +
                ", primary=" + primary +
                ", backUp=" + backUp +
                '}';
        }
    }

    private static class TestCallable implements IgniteCallable<Result> {

        @IgniteInstanceResource
        transient Ignite ignite;

        private final String keyToTest;

        public TestCallable(String keyToTest) {
            this.keyToTest = keyToTest;
        }

        @Override
        public Result call() throws Exception {
            final IgniteCache<String, Balance> cache = ignite.cache("BALANCE");
            final ClusterNode clusterNode = ignite.cluster().localNode();
            // final ScanQuery<String, Balance> query = new ScanQuery<>(new BiPredicateFilter());
            final SqlQuery<String, Balance> query = new SqlQuery<>(Balance.class, "transType='23'");

            query.setLocal(true);
            int num = 0;
            int nonPrimary = 0;
            try (final QueryCursor<Cache.Entry<String, Balance>> cursor = cache.query(query)) {
                for (Cache.Entry<String, Balance> entry : cursor) {
                    // Count entries that are not in a primary partition on this node.
                    if (cache.localPeek(entry.getKey(), CachePeekMode.PRIMARY) == null) {
                        nonPrimary++;
                    }

                    if (keyToTest.equals(entry.getKey())) {
                        num++;
                    }
                }
            }

            return new Result(num, nonPrimary, clusterNode.addresses(), clusterNode.id(),
                ignite.affinity("BALANCE").isPrimary(clusterNode, keyToTest),
                ignite.affinity("BALANCE").isBackup(clusterNode, keyToTest));
        }
    }
}

Output that confirms my assumptions:
Result{timesFound=1, nonPrimary=4, addresses=[127.0.0.1, 48.124.176.58],
nodeUid=0d45348c-2e94-4ac3-b9aa-b61fbdd56749, primary=false, backUp=true}
Result{timesFound=0, nonPrimary=5, addresses=[127.0.0.1, 48.124.184.19],
nodeUid=b5161591-7d62-4c0d-a866-7610b3665760, primary=false, backUp=false}
Result{timesFound=0, nonPrimary=2, addresses=[127.0.0.1, 48.124.176.57],
nodeUid=da3da1d3-a4f0-4273-8fac-f4ba479ae211, primary=false, backUp=false}
Result{timesFound=1, nonPrimary=2, addresses=[127.0.0.1, 48.124.184.20],
nodeUid=39203ddb-2c6b-4247-84bf-22f380130711, primary=true, backUp=false}

If I switch to ScanQuery, the output looks correct:
Result{timesFound=0, nonPrimary=0, addresses=[127.0.0.1, 48.124.176.58],
nodeUid=0d45348c-2e94-4ac3-b9aa-b61fbdd56749, primary=false, backUp=true}
Result{timesFound=1, nonPrimary=0, addresses=[127.0.0.1, 48.124.184.20],
nodeUid=39203ddb-2c6b-4247-84bf-22f380130711, primary=true, backUp=false}
Result{timesFound=0, nonPrimary=0, addresses=[127.0.0.1, 48.124.176.57],
nodeUid=da3da1d3-a4f0-4273-8fac-f4ba479ae211, primary=false, backUp=false}
Result{timesFound=0, nonPrimary=0, addresses=[127.0.0.1, 48.124.184.19],
nodeUid=b5161591-7d62-4c0d-a866-7610b3665760, primary=false, backUp=false}






Re: Getting javax.cache.CacheException after upgrading to Ignite 2.7

2019-01-06 Thread Denis Magda
Hello,

Ignite versions prior to 2.7 never supported transactions for SQL queries.
You were enlisting SQL in transactions at your own risk. Ignite 2.7 introduced
true transactional support for SQL based on MVCC. Presently it's in beta, with
GA expected around Q2-Q3 this year. The community is working on optimizations.
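As a minimal sketch, this is the combination reported to work with SQL at the
moment, assuming an Ignite instance and a cache obtained elsewhere; the table
and query are assumptions:

    try (Transaction tx = ignite.transactions().txStart(
            TransactionConcurrency.PESSIMISTIC, TransactionIsolation.REPEATABLE_READ)) {
        // SQL executed inside a pessimistic REPEATABLE_READ transaction.
        cache.query(new SqlFieldsQuery("SELECT id, name FROM User WHERE id = ?").setArgs(1)).getAll();
        tx.commit();
    }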

Please refer to these docs for more details:
https://apacheignite.readme.io/docs/multiversion-concurrency-control
https://apacheignite-sql.readme.io/docs/transactions
https://apacheignite-sql.readme.io/docs/multiversion-concurrency-control

--
Denis

On Sat, Jan 5, 2019 at 7:48 PM Prasad Bhalerao 
wrote:

> Can someone please explain if anything has changed in ignite 2.7.
>
> Started getting this exception after upgrading to 2.7.
>
>
> -- Forwarded message -
> From: Prasad Bhalerao 
> Date: Fri 4 Jan, 2019, 8:41 PM
> Subject: Re: Getting javax.cache.CacheException after upgrading to Ignite
> 2.7
> To: 
>
>
> Can someone please help me with this?
>
> On Thu 3 Jan, 2019, 7:15 PM Prasad Bhalerao  wrote:
>
> > Hi
> >
> > After upgrading to 2.7 version I am getting following exception. I am
> > executing a SELECT sql inside optimistic transaction with serialization
> > isolation level.
> >
> > 1) Has anything changed from 2.6 to 2.7 version?  This work fine prior to
> > 2.7 version.
> >
> > After changing it to Pessimistic and isolation level to REPEATABLE_READ
> it
> > works fine.
> >
> >
> >
> >
> >
> >
> > *javax.cache.CacheException: Only pessimistic repeatable read
> transactions
> > are supported at the moment.at
> >
> org.apache.ignite.internal.processors.cache.IgniteCacheProxyImpl.query(IgniteCacheProxyImpl.java:697)at
> >
> org.apache.ignite.internal.processors.cache.IgniteCacheProxyImpl.query(IgniteCacheProxyImpl.java:636)at
> >
> org.apache.ignite.internal.processors.cache.GatewayProtectedCacheProxy.query(GatewayProtectedCacheProxy.java:388)at
> >
> com.qualys.agms.grid.dao.AbstractDataGridDAO.getFieldResultsByCriteria(AbstractDataGridDAO.java:85)*
> >
> > Thanks,
> > Prasad
> >
>


Re: JDK 11 support

2019-01-06 Thread Yakov Zhdanov
Of course JDK 8 is fine :). I meant that Ignite should also be fine with
JDK 9 and newer.

Yakov


Re: JDK 11 support

2019-01-06 Thread Mehdi Seydali
No. I have run it on JDK 8 as well.

On Sunday, January 6, 2019, Yakov Zhdanov  wrote:

> Ignite 2.7 should run on JDK 11. Vladimir, Denis, correct me if I'm wrong.
>
> Please try this link for information - https://apacheignite.readme.
> io/v2.7/docs/getting-started#section-running-ignite-with-java-9
>
> --Yakov
>


Re: JDK 11 support

2019-01-06 Thread Yakov Zhdanov
Ignite 2.7 should run on JDK 11. Vladimir, Denis, correct me if I'm wrong.

Please try this link for information -
https://apacheignite.readme.io/v2.7/docs/getting-started#section-running-ignite-with-java-9

--Yakov


How to debug network issues in cluster

2019-01-06 Thread Prasad Bhalerao
Hi,

I am consistently getting a "Node is out of topology" message in the logs on
node-1, and on the other node, node-2, I am getting the message "Timed out
waiting for message delivery receipt (most probably, the reason is in long GC
pauses on remote node; consider tuning GC and increasing '"

I have checked the network bandwidth using iperf and it is 470 Mbit per
second. I have also checked the GC logs, and the max pause time is 140 ms.

If it is really happening because of network issues, is there any way to
debug it?

If it were happening because of GC, I would have seen it in the GC logs.

Can someone please help me out with this?
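For reference, a minimal sketch of the kind of discovery tuning that is often
tried for such symptoms; treating the truncated warning above as pointing at the
failure detection timeout is an assumption here, and the value is only an example:

    IgniteConfiguration cfg = new IgniteConfiguration();
    // Raise the failure detection timeout so short network hiccups or pauses
    // are less likely to segment the node.
    cfg.setFailureDetectionTimeout(30_000); // default is 10_000 ms
    Ignite ignite = Ignition.start(cfg);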

Log messages on node-1:
2019-01-06 13:48:19,036 125016 [tcp-disco-srvr-#3%springDataNode%] INFO
o.a.i.s.d.tcp.TcpDiscoverySpi - TCP discovery accepted incoming connection
[rmtAddr=/10.114.113.65, rmtPort=35651]
2019-01-06 13:48:19,037 125017 [tcp-disco-srvr-#3%springDataNode%] INFO
o.a.i.s.d.tcp.TcpDiscoverySpi - TCP discovery spawning a new thread for
connection [rmtAddr=/10.114.113.65, rmtPort=35651]
2019-01-06 13:48:19,037 125017 [tcp-disco-sock-reader-#5%springDataNode%]
INFO  o.a.i.s.d.tcp.TcpDiscoverySpi - Started serving remote node
connection [rmtAddr=/10.114.113.65:35651, rmtPort=35651]
*2019-01-06 13:48:19,040 125020 [tcp-disco-msg-worker-#2%springDataNode%]
WARN  o.a.i.s.d.tcp.TcpDiscoverySpi - Node is out of topology (probably,
due to short-time network problems).*
2019-01-06 13:48:19,041 125021 [disco-event-worker-#62%springDataNode%]
WARN  o.a.i.i.m.d.GridDiscoveryManager - Local node SEGMENTED:
TcpDiscoveryNode [id=a5827f51-096a-4c98-af4f-564d2d3e769d,
addrs=[10.114.113.53, 127.0.0.1], sockAddrs=[/127.0.0.1:47500,
qagmscore02.p13.eng.in03.qualys.com/10.114.113.53:47500], discPort=47500,
order=2, intOrder=2, lastExchangeTime=1546782499034, loc=true,
ver=2.7.0#20181130-sha1:256ae401, isClient=false]
2019-01-06 13:48:19,041 125021 [tcp-disco-sock-reader-#5%springDataNode%]
INFO  o.a.i.s.d.tcp.TcpDiscoverySpi - Finished serving remote node
connection [rmtAddr=/10.114.113.65:35651, rmtPort=35651
2019-01-06 13:48:19,866 125846 [tcp-comm-worker-#1%springDataNode%] INFO
o.a.i.s.d.tcp.TcpDiscoverySpi - Pinging node:
cd9803ac-b810-447e-818e-ab51dada59d8


Distributed Training in tensorflow

2019-01-06 Thread mehdi sey
Distributed training allows the computational resources of the whole cluster to
be used and thus speeds up training of deep learning models. TensorFlow is a
machine learning framework that natively supports distributed neural network
training, inference, and other computations. Using this ability, we can compute
gradients on the nodes where the data is stored, reduce them, and then finally
update the model parameters. In the case of TensorFlow on Apache Ignite, must we
run a TensorFlow worker on each server in the cluster to do the work on its data?





Re: deep learning over apache ignite

2019-01-06 Thread Mehdi Seydali
I have a question in my mind. We know that Spark and Ignite are in-memory
computing platforms, and both of them cache data. If we consider caching data
in Ignite as a benefit, Spark also caches data. With this in mind, if we want
to develop DL4J over Ignite, what will we gain apart from Ignite's computation
capabilities?

On Tue, Dec 18, 2018 at 7:38 PM dmitrievanthony 
wrote:

> First of all, when you are talking about "acceleration of running deep
> neural
> network" what do you mean, training or inference?
> If you are talking about inference, it actually doesn't matter what cluster
> management system we use, we need only to run parallel/distributed
> inference.
> If you are talking about training, how are you going to speed it up? Use
> distributed training on multiple machines? Or GPU's? Or maybe you just want
> to speed up data preprocessing?
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: Article about Ignite Caching

2019-01-06 Thread Gaurav Bajaj
Thanks Denis !!

On Sun 6 Jan, 2019, 2:41 AM Denis Magda,  wrote:

> Gaurav, thanks for spreading the word! Keep up the good work!
>
> --
> Denis
>
> On Sat, Jan 5, 2019 at 3:16 AM Gaurav Bajaj 
> wrote:
>
>> Hello Igniters,
>>
>> I have written my first article about Apache Ignite. It is introductory
>> article about basics of creating caches.
>> Please find below the link :
>>
>> “Introduction to Distributed Caching using Apache Ignite”
>>
>> *https://medium.com/@gauravhbajaj/introduction-to-distributed-caching-using-apache-ignite-af7a84433a36
>> *
>>
>> I plan to write about more advanced stuff in the future.
>>
>> Thanks,
>> Gaurav
>>
>>


Re: JDK 11 support

2019-01-06 Thread Mehdi Seydali
I think you must run Ignite on JDK 8.

On Sat, Jan 5, 2019 at 7:03 PM ignite_user2016 
wrote:

> what version of Ignite run on JDK 11 ? Does any one tried running Ignite on
> open JDK 11 ?
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>