Ignite behaving strange with Spark SharedRDD in AWS EMR Yarn Client Mode

2017-08-04 Thread raksja
Hi,

We are evaluating Ignite, and our ultimate goal is to share an RDD between
multiple Spark jobs, so one job can cache its computation in Ignite and
another can use it for its own computation.

We set up Ignite servers on the master node and the 6 worker nodes of an AWS
EMR cluster and ran spark-submit with the provided ScalarSharedRDDExample in
YARN client mode.

Multicast is not enabled in AWS, so we created a custom config file for the
IgniteContext creation with S3 discovery, and we can see servers=7
(6 workers + 1 master).
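
For reference, an S3-based discovery configuration of the kind described looks roughly like the Spring XML sketch below; the bucket name is a placeholder, and the `awsCredentials` property must also be supplied:

```xml
<bean class="org.apache.ignite.configuration.IgniteConfiguration">
    <property name="discoverySpi">
        <bean class="org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi">
            <property name="ipFinder">
                <!-- The S3 IP finder replaces multicast, which AWS does not allow. -->
                <bean class="org.apache.ignite.spi.discovery.tcp.ipfinder.s3.TcpDiscoveryS3IpFinder">
                    <!-- Placeholder bucket name; AWS credentials must also be configured. -->
                    <property name="bucketName" value="my-ignite-discovery-bucket"/>
                </bean>
            </property>
        </bean>
    </property>
</bean>
```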

https://github.com/apache/ignite/blob/master/examples/src/main/scala/org/apache/ignite/scalar/examples/spark/ScalarSharedRDDExample.scala

  

Then we ran spark-submit with the above example in YARN client mode, with 1
executor and 1 core, and one slight modification:
new IgniteContext(sparkContext, CONFIG, true)  ->  true instead of false

It printed all the logs for saving the values, and then the strange behavior
started.

For each of the "take" tasks, it started spinning up multiple executors, and
each executor of course started another IgniteContext. The total number of
Ignite servers grew to about 25; it then printed the take values and tore
down all those Ignite instances as well as the Spark executors.

Overall, though, it took a lot of time. We even tried configuring a
RendezvousAffinityFunction with only 1 partition; still no change. Most of
the documentation covers these things separately (like how to set it up in
AWS, or how to do a shared RDD).
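
For reference, a single-partition RendezvousAffinityFunction of the kind mentioned is configured roughly like this (the cache name is a placeholder):

```xml
<bean class="org.apache.ignite.configuration.CacheConfiguration">
    <property name="name" value="sharedRDD"/>
    <property name="affinity">
        <!-- Collapse the cache to a single partition. -->
        <bean class="org.apache.ignite.cache.affinity.rendezvous.RendezvousAffinityFunction">
            <property name="partitions" value="1"/>
        </bean>
    </property>
</bean>
```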



Just wanted to ask this forum whether we are doing something wrong here.
Has anyone tested this in YARN client mode? Is there any setup documentation
for running in this mode? Also, why does it spin up 38 different executors
when we specify only 1 to Spark?

Any help would be much appreciated.



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/Ignite-behaving-strange-with-Spark-SharedRDD-in-AWS-EMR-Yarn-Client-Mode-tp16011.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Re: Trouble using latest Ignite Web Console (2.1)

2017-08-04 Thread Muthu
Hi Alexey,

Just to add a little more context and summarize... I have been trying
to get this to work for a while so I can begin deploying Ignite, but as yet
I haven't managed to get it working. Below is the current list of issues:

1. The cache fails to load with the configurations generated from the Web
Console when I bring it up on Spring Boot (problems with H2 indexing...)
2. The generated model files are missing the primary key field and its
getters/setters. For example, for the table "dcm.emp" I described earlier
(in the sample project on GitHub that I shared), the generated model DTOs
did not have the field "private String id;" or the corresponding getters and
setters for it. I had to add it manually in the DTO and edit equals,
hashCode and toString to fix it. Doing this manually for a lot of tables is
very cumbersome...

public class Dept implements Serializable {
...
...
private String id;

public String getId() {
return id;
}

public void setId(String id) {
this.id = id;
}
...
...
}
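
For completeness, a sketch of the manually patched DTO described above; the field and table names follow the example, but the `dname` field and constructor are illustrative additions, not the generator's output:

```java
import java.io.Serializable;
import java.util.Objects;

// Sketch of the manual fix described above: the generated Dept DTO with the
// missing primary-key field "id" added by hand, plus the equals/hashCode/
// toString edits that have to go with it. The "dname" field is illustrative.
public class Dept implements Serializable {
    private String id;    // added manually: the generator omitted the primary key
    private String dname;

    public Dept(String id, String dname) {
        this.id = id;
        this.dname = dname;
    }

    public String getId() { return id; }
    public void setId(String id) { this.id = id; }
    public String getDname() { return dname; }
    public void setDname(String dname) { this.dname = dname; }

    @Override public boolean equals(Object o) {
        if (this == o) return true;
        if (!(o instanceof Dept)) return false;
        Dept d = (Dept) o;
        // "id" must participate here, otherwise distinct rows compare equal.
        return Objects.equals(id, d.id) && Objects.equals(dname, d.dname);
    }

    @Override public int hashCode() {
        return Objects.hash(id, dname);
    }

    @Override public String toString() {
        return "Dept [id=" + id + ", dname=" + dname + "]";
    }

    public static void main(String[] args) {
        Dept a = new Dept("10", "Engineering");
        Dept b = new Dept("10", "Engineering");
        // Two DTOs with the same id and name now compare equal.
        System.out.println(a.equals(b) && a.hashCode() == b.hashCode()); // prints "true"
    }
}
```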


Regards,
Muthu

-- The real danger with modern technology isn't that machines will begin to
think like people, but that people will begin to think like machines.
-- Faith is to believe what you do not see; the reward of this faith is to
see what you believe.

On Fri, Aug 4, 2017 at 1:50 AM, Muthu  wrote:

> Hi Alexey,
>
> Try this, https://github.com/softwarebrahma/IgniteCacheWithAutomaticExtern
> alPersistence
>
> Can be run like below,
>
> java -jar -Ddb.host=localhost -Ddb.port=5432 -Ddb.name=postgres
> -DIGNITE_QUIET=false target\CacheAutomaticPersistence-0.0.1-SNAPSHOT.jar
>
> Regards,
> Muthu
>
> -- The real danger with modern technology isn't that machines will begin
> to think like people, but that people will begin to think like machines.
> -- Faith is to believe what you do not see; the reward of this faith is to
> see what you believe.
>
> On Thu, Aug 3, 2017 at 8:50 PM, Alexey Kuznetsov 
> wrote:
>
>> Muthu,
>>
>> Can you attach a simple example to reproduce this exception?
>>
>>
>> On Fri, Aug 4, 2017 at 4:08 AM, Muthu  wrote:
>>
>>> Sure Alexey...thanks. I am using PostgreSQL version 9.6. The driver is a
>>> bit old...it's 9.4.1212 JDBC 4 (postgresql-9.4.1212.jre6.jar), but I was
>>> using it without any issues with the 1.9 version of the Web Console (the
>>> previous one).
>>>
>>> On the workaround thingy: the problem is that when I generate the
>>> models/artifacts from it (console.gridgain.com) and deploy them on Ignite
>>> 2.1.0, it fails on startup with this error (it was failing even with 2.0
>>> and was fixed after I identified it; more details here:
>>> https://stackoverflow.com/questions/44468342/automatic-persistence-unable-to-bring-up-cache-with-web-console-generated-mode/44901117).
>>> I thought later on that the Ignite version from there is GridGain's Ignite
>>> version and that may not be compatible with open source Ignite 2.1.0. Is
>>> that not true?
>>>
>>> INFO: Started cache [name=ignite-sys-cache, memoryPolicyName=sysMemPlc,
>>> mode=REPLICATED, atomicity=TRANSACTIONAL]
>>> Aug 03, 2017 12:34:36 PM org.apache.ignite.logger.java.JavaLogger error
>>> SEVERE: Failed to reinitialize local partitions (preloading will be
>>> stopped): GridDhtPartitionExchangeId [topVer=AffinityTopologyVersion
>>> [topVer=1, minorTopVer=0], nodeId=e9b2cf46, evt=NODE_JOINED]
>>> class org.apache.ignite.IgniteCheckedException: Failed to register
>>> query type: QueryTypeDescriptorImpl [cacheName=DeptCache, name=Dept,
>>> tblName=DEPT, fields={DNAME=class java.lang.String, LOC=class java.lang
>>> .String, DEPTID=class java.lang.String}, idxs={}, fullTextIdx=null,
>>> keyCls=class java.lang.String, valCls=class java.lang.Object,
>>> keyTypeName=java.lang.String, valTypeName=com.brocade.dcm.do
>>> main.model.Dept,
>>> valTextIdx=false, typeId=1038936471, affKey=null, keyFieldName=deptid,
>>> valFieldName=null, obsolete=false]
>>> at org.apache.ignite.internal.processors.query.h2.IgniteH2Index
>>> ing.registerType(IgniteH2Indexing.java:1532)
>>> at org.apache.ignite.internal.processors.query.GridQueryProcess
>>> or.registerCache0(GridQueryProcessor.java:1424)
>>> at org.apache.ignite.internal.processors.query.GridQueryProcess
>>> or.onCacheStart0(GridQueryProcessor.java:784)
>>> at org.apache.ignite.internal.processors.query.GridQueryProcess
>>> or.onCacheStart(GridQueryProcessor.java:845)
>>> at org.apache.ignite.internal.processors.cache.GridCacheProcess
>>> or.startCache(GridCacheProcessor.java:1185)
>>> at org.apache.ignite.internal.processors.cache.GridCacheProcess
>>> or.prepareCacheStart(GridCacheProcessor.java:1884)
>>> at org.apache.ignite.internal.processors.cache.GridCacheProcess
>>> or.startCachesOnLocalJoin(GridCacheProcessor.java:1755)
>>> at 

Blog post: Apache Ignite's new Machine Learning (ML) Grid

2017-08-04 Thread Dieds
Igniters, today Akmal B. Chaudhri published the 7th post in his series on
"Getting Started with Apache Ignite." This time round the focus is on the
new Machine Learning (ML) Grid. http://bit.ly/2v6QKQ1




--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/Blog-post-Apache-Ignite-s-new-Machine-Learning-ML-Grid-tp16009.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Re: Ignite starting by itself when using javax.cache.Caching

2017-08-04 Thread slava.koptilin
Hi Matt,

javax.cache.Caching.getCachingProvider() does not start Apache Ignite.
It looks like you are trying to call (explicitly or implicitly)
CachingProvider.getCacheManager(), which does start an Ignite server.

Thanks.



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/Ignite-starting-by-itself-when-using-javax-cache-Caching-tp15997p16008.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Re: how to append data to IGFS so that data gets saved to Hive partitioned table?

2017-08-04 Thread Michael Cherkasov
Hi,

Could you please clarify: if you run all the actions using IGFS, but instead
of fs.append use Hive, like:
insert into table stocks PARTITION (years=2004,months=12,days=3)
values('AAPL',1501236980,120.34);

Does select work this time?

Thanks,
Mikhail.

2017-08-04 12:56 GMT+03:00 csumi :

> Let me try to clear here with the sequence of steps performed.
> -   Created table with partition through hive using below query. It
> creates a
> directory in hdfs.
> create table stocks3 (stock string, time timestamp, price float)
> PARTITIONED BY (years bigint, months bigint, days bigint) ROW FORMAT
> DELIMITED FIELDS TERMINATED BY ',';
> -   Then I get streaming data and using IgniteFileSystem's
> append/create
> method, it gets saved to ignited hadoop.
> -   Run below select query. No result returned
> select * from stocks3;
> -   Stop ignite and run the select again on hive. No result with below
> logs
>
> hive> select * from stocks3;
> 17/08/04 14:59:08 INFO conf.HiveConf: Using the default value passed in for
> log id: b5e3e924-e46a-481c-8aef-30d48605a2da
> 17/08/04 14:59:08 INFO session.SessionState: Updating thread name to
> b5e3e924-e46a-481c-8aef-30d48605a2da main
> 17/08/04 14:59:08 WARN operation.Operation: Unable to create operation log
> file:
> D:\tmp\hive\\operation_logs\b5e3e924-e46a-481c-8aef-
> 30d48605a2da\137adad6-ea23-462c-a414-6ce260e5bd49
> java.io.IOException: The system cannot find the path specified
> at java.io.WinNTFileSystem.createFileExclusively(Native Method)
> at java.io.File.createNewFile(File.java:1012)
> at
> org.apache.hive.service.cli.operation.Operation.
> createOperationLog(Operation.java:237)
> at
> org.apache.hive.service.cli.operation.Operation.beforeRun(
> Operation.java:279)
> at
> org.apache.hive.service.cli.operation.Operation.run(Operation.java:314)
> at
> org.apache.hive.service.cli.session.HiveSessionImpl.
> executeStatementInternal(HiveSessionImpl.java:499)
> at
> org.apache.hive.service.cli.session.HiveSessionImpl.executeStatementAsync(
> HiveSessionImpl.java:486)
> at
> org.apache.hive.service.cli.CLIService.executeStatementAsync(
> CLIService.java:295)
> at
> org.apache.hive.service.cli.thrift.ThriftCLIService.ExecuteStatement(
> ThriftCLIService.java:506)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:
> 57)
> at
> sun.reflect.DelegatingMethodAccessorImpl.invoke(
> DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:491)
> at
> org.apache.hive.jdbc.HiveConnection$SynchronizedHandler.invoke(
> HiveConnection.java:1412)
> at com.sun.proxy.$Proxy21.ExecuteStatement(Unknown Source)
> at
> org.apache.hive.jdbc.HiveStatement.runAsyncOnServer(
> HiveStatement.java:308)
> at
> org.apache.hive.jdbc.HiveStatement.execute(HiveStatement.java:250)
> at
> org.apache.hive.beeline.Commands.executeInternal(Commands.java:988)
> at org.apache.hive.beeline.Commands.execute(Commands.java:1160)
> at org.apache.hive.beeline.Commands.sql(Commands.java:1074)
> at org.apache.hive.beeline.BeeLine.dispatch(BeeLine.java:1148)
> at org.apache.hive.beeline.BeeLine.execute(BeeLine.java:976)
> at org.apache.hive.beeline.BeeLine.begin(BeeLine.java:886)
> at org.apache.hive.beeline.cli.HiveCli.runWithArgs(HiveCli.
> java:35)
> at org.apache.hive.beeline.cli.HiveCli.main(HiveCli.java:29)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:
> 57)
> at
> sun.reflect.DelegatingMethodAccessorImpl.invoke(
> DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:491)
> at org.apache.hadoop.util.RunJar.run(RunJar.java:234)
> at org.apache.hadoop.util.RunJar.main(RunJar.java:148)
> 17/08/04 14:59:08 INFO ql.Driver: Compiling
> command(queryId=_20170804145908_b270c978-ab00-
> 4160-a2a6-c19b42eab676):
> select * from stocks3
> 17/08/04 14:59:08 INFO parse.CalcitePlanner: Starting Semantic Analysis
> 17/08/04 14:59:08 INFO parse.CalcitePlanner: Completed phase 1 of Semantic
> Analysis
> 17/08/04 14:59:08 INFO parse.CalcitePlanner: Get metadata for source tables
> 17/08/04 14:59:08 INFO metastore.HiveMetaStore: 0: get_table : db=yt
> tbl=stocks3
> 17/08/04 14:59:08 INFO HiveMetaStore.audit: ugi=  ip=unknown-ip-addr
> cmd=get_table : db=yt tbl=stocks3
> 17/08/04 14:59:08 INFO parse.CalcitePlanner: Get metadata for subqueries
> 17/08/04 14:59:08 INFO parse.CalcitePlanner: Get metadata for destination
> tables
> 17/08/04 14:59:09 INFO ql.Context: New scratch dir is
> hdfs://localhost:9000/tmp/hive//b5e3e924-e46a-
> 

Re: IGFS Error when writing a file

2017-08-04 Thread mcherkasov
Hi Pradeep,

I didn't reproduce the issue yet, I need more time for this.
But could you please share the full stack-trace of the exception, because
the most interesting part it is cut.

Thanks,
Mikhail.



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/IGFS-Error-when-writing-a-file-tp15868p16006.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Re: IGFS With HDFS Configuration

2017-08-04 Thread Mikhail Cherkasov
Hi Pradeep,

I can't run it with 1.9 either; however, the same config works fine with 2.0.
You can try to build Zeppelin yourself: just apply the following patch:
https://github.com/apache/zeppelin/pull/2445/files
to some stable version of Zeppelin.
I'm not sure, but it should work; in any case, the only way to check is to
build with the patch and run.

Thanks,
Mikhail.

On Thu, Aug 3, 2017 at 9:57 PM, pradeepchanumolu 
wrote:

> I couldn't get it running with the v1.9.0.
>
> I bumped the version to the latest v2.1.0 and could get the ignite services
> up. (Zeppelin only supports v1.9.0 btw)
> I am attaching my config file to this message.
> default-config.xml
>  n15964/default-config.xml>
>
>
>
>
> --
> View this message in context: http://apache-ignite-users.
> 70518.x6.nabble.com/IGFS-With-HDFS-Configuration-tp15830p15964.html
> Sent from the Apache Ignite Users mailing list archive at Nabble.com.
>



-- 
Thanks,
Mikhail.


Re: Affinity Key field is not identified if binary configuration is used on cache key object

2017-08-04 Thread Andrey Gura
Could you please clarify what you mean by "I already did LoadCache
before running the program"?

Also it would be good if you can share some minimal reproducer.

On Fri, Aug 4, 2017 at 4:55 PM, kotamrajuyashasvi
 wrote:
> Hi
> All nodes use same config xml and pojos.
>
> On Aug 4, 2017 7:22 PM, "agura [via Apache Ignite Users]" <[hidden email]>
> wrote:
>>
>> Hi
>>
>> It seems that you have different configuration on nodes (e.g. one node
>> has cacheKeyConfiguration while other doesn't). Isn't it?
>>
>> On Fri, Aug 4, 2017 at 12:54 PM, kotamrajuyashasvi
>> <[hidden email]> wrote:
>>
>> > Hi
>> >
>> > Thanks for the response. When I put cacheKeyConfiguration in ignite
>> > configuration, the affinity was working. But when I call Cache.Get() in
>> > client program I'm getting the following error.
>> >
>> > "Java exception occurred
>> > [cls=org.apache.ignite.binary.BinaryObjectException, msg=Binary type has
>> > different affinity key fields [typeName=PersonPK,
>> > affKeyFieldName1=customer_ref, affKeyFieldName2=null]]"
>> >
>> > I already did LoadCache before running the program.
>> >
>> >
>> >
>> > --
>> > View this message in context:
>> > http://apache-ignite-users.70518.x6.nabble.com/Affinity-Key-field-is-not-identified-if-binary-configuration-is-used-on-cache-key-object-tp15959p15990.html
>> > Sent from the Apache Ignite Users mailing list archive at Nabble.com.
>>
>>
>> 
>
>
> 
> View this message in context: Re: Affinity Key field is not identified if
> binary configuration is used on cache key object
>
> Sent from the Apache Ignite Users mailing list archive at Nabble.com.


RE: Cassandra Cache Store: How are loadCache() queries distributed

2017-08-04 Thread Roger Fischer (CW)
Thanks, Igor.

That enhancement will be very useful. Both the faster (parallel) load and the
improved efficiency (not transferring all data multiple times) are highly desirable.

Roger

From: Igor Rudyak [mailto:irud...@gmail.com]
Sent: Thursday, August 03, 2017 10:58 PM
To: user@ignite.apache.org
Subject: Re: Cassandra Cache Store: How are loadCache() queries distributed

Hi Roger,

As of now, the Cassandra Cache Store loadCache() implementation is pretty 
straightforward: it sends all provided CQL queries from all Ignite nodes. 
There is no query analysis to distribute the data loading routine among 
cluster nodes.

There is an enhancement ticket created for this: 
https://issues.apache.org/jira/browse/IGNITE-3962

Igor



On Thu, Aug 3, 2017 at 2:29 PM, Roger Fischer (CW) 
> wrote:
Hello,

could someone please explain to me how loadCache() queries are distributed to 
the Cassandra instances when using the Cassandra Cache Store module.

I used Ignite logging and Cassandra server tracing (system_traces.sessions) to 
try to determine how queries are distributed, but I can’t make sense of what I 
have observed.

I am quite sure of: An ignite server stores the objects for which it is the 
primary or a backup. It ignores other objects received from Cassandra.

I first tried a load-all scenario, with one query (select * from table) passed 
in the loadCache() call.

Initially, it looked like each Ignite server sends the query to one Cassandra 
node. That seems reasonable.

However, I have also observed cases when each Ignite server sends the query to 
more than one Cassandra node. Why?

Then I tried to call loadCache() with multiple queries. Specifically I created 
a query for each Cassandra partition. Best-practice for Cassandra is to limit 
queries to a single partition.

One test seemed to imply that each Ignite server sends all queries, 
distributing them across the available Cassandra nodes. This seems reasonable.

However, in another test one query (out of 6) got sent (really executed in 
Cassandra) only once, most got sent twice, and a few three times. With 3 Ignite 
servers, I would have expected each query to be sent 3 times (once from each 
Ignite server).

I am quite suspicious of that last observation, as it would invalidate what I 
stated earlier as “quite sure of”. Maybe Cassandra did not record all queries 
in the sessions table.

So how does Ignite handle a loadCache() request when there are multiple Ignite 
servers and multiple Cassandra servers? (The loadCache() call is made in an 
Ignite client.)
a) when there is a single query provided to loadCache()?
b) when there are multiple queries provided to loadCache()?
c) does it make any difference if the query includes all Cassandra partitioning 
key columns in the where clause (ie. would the Cassandra Cache Store analyze 
the query to optimize the distribution)?
d) does it make any difference if the query includes the Ignite affinity key 
(ie. would the Cassandra Cache Store analyze the query to optimize which 
queries to send from where)?

Thanks…

Roger





Re: Possible starvation in striped pool. message

2017-08-04 Thread slava.koptilin
Hi Kestas,

There are several possible reasons for that.
In your case, I think you are trying to put or get huge objects that contain
nested collections and so on.

>at
> o.a.i.i.binary.BinaryUtils.doReadCollection(BinaryUtils.java:2016) 
>at
> o.a.i.i.binary.BinaryReaderExImpl.deserialize0(BinaryReaderExImpl.java:1904) 
...
>at
> o.a.i.i.binary.BinaryUtils.doReadCollection(BinaryUtils.java:2016) 
>at
> o.a.i.i.binary.BinaryReaderExImpl.deserialize0(BinaryReaderExImpl.java:1904) 

Another possible reason is intensive use of the getAll()/putAll() methods
from different threads.
In that case, you need to sort the collection of keys first, because batch
operations on the same entries in different orders can lead to deadlock.

Thanks.



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/Possible-starvation-in-striped-pool-message-tp15993p16002.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Re: Platform updates

2017-08-04 Thread luqmanahmad
Thanks Christos, it is for the application code. For upgrading the Ignite
version we can schedule downtime; that's why the 99.9% :)



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/Platform-updates-tp15998p16001.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Re: Platform updates

2017-08-04 Thread Christos Erotocritou
Luqman, do you want to update the application or the actual Ignite version?

If it's your application code, then as long as you can manage multiple versions 
of your app for a phased upgrade, then sure. But if it's for Ignite, then it is 
not possible to have 2 different versions running in the same cluster. You can 
do that with the GridGain enterprise version, though. 

Cheers,
C  

> On 4 Aug 2017, at 15:31, luqmanahmad  wrote:
> 
> Hi there,
> 
> Let's say we have a distributed caching system which needs to be up 99.9% of
> the time unless we upgrade the version of Ignite. Now let's say we have found
> some bugs that need to be fixed on the production cluster without any downtime.
> 
> How do we approach this scenario in Ignite? If we have a cluster of, say, 2
> nodes, can we stop one node, update the jars, and repeat the same process on
> the other node without any downtime? I might not be thinking straight here,
> but I would appreciate any help.
> 
> Thanks,
> Luqman 
> 
> 
> 
> --
> View this message in context: 
> http://apache-ignite-users.70518.x6.nabble.com/Platform-updates-tp15998.html
> Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Ignite starting by itself when using javax.cache.Caching

2017-08-04 Thread matt
I have the Ignite dependencies set in my project. I have some test code that
uses a non-Ignite cache; none of my Java test code actually touches Ignite or
attempts to call code that does. Yet as soon as I run
javax.cache.Caching.getCachingProvider(), Ignite starts up and prints the
normal Ignite welcome ASCII art. Why is it starting on its own like this?
How can I prevent this from happening?

Thanks,
- Matt



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/Ignite-starting-by-itself-when-using-javax-cache-Caching-tp15997.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Re: Affinity Key field is not identified if binary configuration is used on cache key object

2017-08-04 Thread Andrey Gura
Hi

It seems that you have different configurations on the nodes (e.g. one node
has cacheKeyConfiguration while another doesn't). Is that the case?
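
If that is the cause, the usual fix is to declare the affinity key mapping identically in every node's configuration. A sketch, using the type and field names from the error message quoted below (the type name may need to be fully qualified):

```xml
<bean class="org.apache.ignite.configuration.IgniteConfiguration">
    <!-- Must be identical on every node, servers and clients alike; otherwise
         "Binary type has different affinity key fields" errors can occur. -->
    <property name="cacheKeyConfiguration">
        <list>
            <bean class="org.apache.ignite.cache.CacheKeyConfiguration">
                <property name="typeName" value="PersonPK"/>
                <property name="affinityKeyFieldName" value="customer_ref"/>
            </bean>
        </list>
    </property>
</bean>
```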

On Fri, Aug 4, 2017 at 12:54 PM, kotamrajuyashasvi
 wrote:
> Hi
>
> Thanks for the response. When I put cacheKeyConfiguration in ignite
> configuration, the affinity was working. But when I call Cache.Get() in
> client program I'm getting the following error.
>
> "Java exception occurred
> [cls=org.apache.ignite.binary.BinaryObjectException, msg=Binary type has
> different affinity key fields [typeName=PersonPK,
> affKeyFieldName1=customer_ref, affKeyFieldName2=null]]"
>
> I already did LoadCache before running the program.
>
>
>
> --
> View this message in context: 
> http://apache-ignite-users.70518.x6.nabble.com/Affinity-Key-field-is-not-identified-if-binary-configuration-is-used-on-cache-key-object-tp15959p15990.html
> Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Re: Inserting Data From Spark to Ignite

2017-08-04 Thread agura
Hi, 

for data frames you can try to save records one by one, but I'm not sure
that Spark will not use the batch store.

The second option is to transform the data frame to an RDD and use the
savePairs method.

Will this work for you?



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/Inserting-Data-From-Spark-to-Ignite-tp14937p15994.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Possible starvation in striped pool. message

2017-08-04 Thread kestas
Hi,
Sometimes we get this message in the logs. What does it mean?

Jul 26, 2017 11:43:25 AM org.apache.ignite.logger.java.JavaLogger warning
WARNING: >>> Possible starvation in striped pool.
Thread name: sys-stripe-3-#4%null%
Queue: []
Deadlock: false
Completed: 17
Thread [name="sys-stripe-3-#4%null%", id=36, state=RUNNABLE, blockCnt=0,
waitCnt=18]
at
o.a.i.i.MarshallerContextImpl.getClassName(MarshallerContextImpl.java:348)
at
o.a.i.i.MarshallerContextImpl.getClass(MarshallerContextImpl.java:335)
at
o.a.i.i.binary.BinaryContext.descriptorForTypeId(BinaryContext.java:692)
at
o.a.i.i.binary.BinaryReaderExImpl.deserialize0(BinaryReaderExImpl.java:1745)
at
o.a.i.i.binary.BinaryReaderExImpl.deserialize(BinaryReaderExImpl.java:1704)
at o.a.i.i.binary.BinaryUtils.doReadObject(BinaryUtils.java:1728)
at
o.a.i.i.binary.BinaryUtils.deserializeOrUnmarshal(BinaryUtils.java:2085)
at
o.a.i.i.binary.BinaryUtils.doReadCollection(BinaryUtils.java:2016)
at
o.a.i.i.binary.BinaryReaderExImpl.deserialize0(BinaryReaderExImpl.java:1904)
at
o.a.i.i.binary.BinaryReaderExImpl.deserialize(BinaryReaderExImpl.java:1704)
at
o.a.i.i.binary.BinaryReaderExImpl.readField(BinaryReaderExImpl.java:1964)
at
o.a.i.i.binary.BinaryFieldAccessor$DefaultFinalClassAccessor.read0(BinaryFieldAccessor.java:674)
at
o.a.i.i.binary.BinaryFieldAccessor.read(BinaryFieldAccessor.java:164)
at
o.a.i.i.binary.BinaryClassDescriptor.read(BinaryClassDescriptor.java:843)
at
o.a.i.i.binary.BinaryReaderExImpl.deserialize0(BinaryReaderExImpl.java:1752)
at
o.a.i.i.binary.BinaryReaderExImpl.deserialize(BinaryReaderExImpl.java:1704)
at
o.a.i.i.binary.BinaryReaderExImpl.readField(BinaryReaderExImpl.java:1964)
at
o.a.i.i.binary.BinaryFieldAccessor$DefaultFinalClassAccessor.read0(BinaryFieldAccessor.java:674)
at
o.a.i.i.binary.BinaryFieldAccessor.read(BinaryFieldAccessor.java:164)
at
o.a.i.i.binary.BinaryClassDescriptor.read(BinaryClassDescriptor.java:843)
at
o.a.i.i.binary.BinaryReaderExImpl.deserialize0(BinaryReaderExImpl.java:1752)
at
o.a.i.i.binary.BinaryReaderExImpl.deserialize(BinaryReaderExImpl.java:1704)
at o.a.i.i.binary.BinaryUtils.doReadObject(BinaryUtils.java:1728)
at
o.a.i.i.binary.BinaryUtils.deserializeOrUnmarshal(BinaryUtils.java:2085)
at
o.a.i.i.binary.BinaryUtils.doReadCollection(BinaryUtils.java:2016)
at
o.a.i.i.binary.BinaryReaderExImpl.deserialize0(BinaryReaderExImpl.java:1904)
at
o.a.i.i.binary.BinaryReaderExImpl.deserialize(BinaryReaderExImpl.java:1704)
at
o.a.i.i.binary.BinaryReaderExImpl.readField(BinaryReaderExImpl.java:1964)
at
o.a.i.i.binary.BinaryFieldAccessor$DefaultFinalClassAccessor.read0(BinaryFieldAccessor.java:674)
at
o.a.i.i.binary.BinaryFieldAccessor.read(BinaryFieldAccessor.java:164)
at
o.a.i.i.binary.BinaryClassDescriptor.read(BinaryClassDescriptor.java:843)
at
o.a.i.i.binary.BinaryReaderExImpl.deserialize0(BinaryReaderExImpl.java:1752)
at
o.a.i.i.binary.BinaryReaderExImpl.deserialize(BinaryReaderExImpl.java:1704)
at
o.a.i.i.binary.BinaryReaderExImpl.readField(BinaryReaderExImpl.java:1964)
at
o.a.i.i.binary.BinaryFieldAccessor$DefaultFinalClassAccessor.read0(BinaryFieldAccessor.java:674)
at
o.a.i.i.binary.BinaryFieldAccessor.read(BinaryFieldAccessor.java:164)
at
o.a.i.i.binary.BinaryClassDescriptor.read(BinaryClassDescriptor.java:843)
at
o.a.i.i.binary.BinaryReaderExImpl.deserialize0(BinaryReaderExImpl.java:1752)
at
o.a.i.i.binary.BinaryReaderExImpl.deserialize(BinaryReaderExImpl.java:1704)
at
o.a.i.i.binary.BinaryReaderExImpl.readField(BinaryReaderExImpl.java:1964)
at
o.a.i.i.binary.BinaryFieldAccessor$DefaultFinalClassAccessor.read0(BinaryFieldAccessor.java:674)
at
o.a.i.i.binary.BinaryFieldAccessor.read(BinaryFieldAccessor.java:164)
at
o.a.i.i.binary.BinaryClassDescriptor.read(BinaryClassDescriptor.java:843)
at
o.a.i.i.binary.BinaryReaderExImpl.deserialize0(BinaryReaderExImpl.java:1752)
at
o.a.i.i.binary.BinaryReaderExImpl.deserialize(BinaryReaderExImpl.java:1704)
at
o.a.i.i.binary.BinaryReaderExImpl.readField(BinaryReaderExImpl.java:1964)
at
o.a.i.i.binary.BinaryFieldAccessor$DefaultFinalClassAccessor.read0(BinaryFieldAccessor.java:674)
at
o.a.i.i.binary.BinaryFieldAccessor.read(BinaryFieldAccessor.java:164)
at
o.a.i.i.binary.BinaryClassDescriptor.read(BinaryClassDescriptor.java:843)
at
o.a.i.i.binary.BinaryReaderExImpl.deserialize0(BinaryReaderExImpl.java:1752)
at
o.a.i.i.binary.BinaryReaderExImpl.deserialize(BinaryReaderExImpl.java:1704)
at
o.a.i.i.binary.BinaryObjectImpl.deserializeValue(BinaryObjectImpl.java:794)
at o.a.i.i.binary.BinaryObjectImpl.value(BinaryObjectImpl.java:142)
at

Re: Accessing array elements within items cached in Ignite without deserialising the entire item

2017-08-04 Thread Pavel Tupitsyn
>  git refused to clone the repo in GitExtensions
"git clone https://github.com/apache/ignite.git; in console should work

> Is the ‘data’ pointer actually a pointer to the unmanaged memory in the
off heap cache containing this element
It is just a pointer, it can point both to managed or unmanaged memory.
Yes, in some cases we have a pointer to unmanaged memory that comes from
Java side, but it is always a copy of the actual cache data.
Otherwise it would be quite difficult to maintain atomicity and the like.

Generally, I don't think we should introduce pointers and other unsafe
stuff in the public API.
Performance is important, but correctness is always a priority.

On Fri, Aug 4, 2017 at 5:25 AM, Raymond Wilson 
wrote:

> I had not seen that page yet – very useful.
>
>
>
> There’s a few moving parts to getting it working, so not sure I will get
> time to really dig into, but will have a look for sure.
>
>
>
> I did pull a static copy of the source (after git refused to clone the
> repo in GitExtensions) and started looking at the code. It does seem
> relatively simple to add appropriate methods to the appropriate interface
> and implementation classes.
>
>
>
> Question: When I see a method like this in BinaryStreamBase.cs:
>
>
>
> /// <summary>
>
> /// Read byte array.
>
> /// </summary>
>
> /// <param name="cnt">Count.</param>
>
> /// <returns>
>
> /// Byte array.
>
> /// </returns>
>
> public abstract byte[] ReadByteArray(int cnt);
>
>
>
> protected static byte[] ReadByteArray0(int len, byte* data)
>
> {
>
> byte[] res = new byte[len];
>
>
>
> fixed (byte* res0 = res)
>
> {
>
> CopyMemory(data, res0, len);
>
> }
>
>
>
> return res;
>
> }
>
>
>
> Is the ‘data’ pointer actually a pointer to the unmanaged memory in the
> off heap cache containing this element? If so, would this permit ‘user
> defined’ operations to performed, something like this? [or does Ignite.Net
> Linq already support this]?
>
>
>
> /// <summary>
>
> /// Perform action on a range of byte array elements with a
> delegate
>
> /// </summary>
>
> /// <param name="index">Start at.</param>
>
> /// <param name="len">Count.</param>
>
> /// <returns>
>
> /// Nothing
>
> /// </returns>
>
> protected static void PerformByteArrayOperation(int index, int
> len, Action action, byte* data)
>
> {
>
> fixed (byte* res0 = &data[index])
>
> {
>
> for (int i = 0; i < len; i++)
>
> {
>
>  action(res0++);
>
> }
>
> }
>
> }
>
>
>
> There’s probably a nice way to genericize this across multiple array
> types, but it’s useful as an example.
>
>
>
> In this way you can operate on the data without the need to move it around
> all the time between unmanaged and managed contexts.
>
>
>
> Thanks,
>
> Raymond.
>
>
>
> *From:* Pavel Tupitsyn [mailto:ptupit...@apache.org]
> *Sent:* Thursday, August 3, 2017 7:21 PM
>
> *To:* user@ignite.apache.org
> *Subject:* Re: Accessing array elements within items cached in Ignite
> without deserialising the entire item
>
>
>
> Great!
>
>
>
> Here's .NET development page, in case you haven't seen it yet:
> https://cwiki.apache.org/confluence/display/IGNITE/Ignite.NET+Development
>
> Let me know if you need any assistance.
>
>
>
> Pavel
>
>
>
> On Thu, Aug 3, 2017 at 5:28 AM, Raymond Wilson 
> wrote:
>
> Hi Pavel,
>
>
>
> Thanks for putting it on the plan.
>
>
>
> I’ve been reading through the ‘how to contribute’ documentation to see
> what’s required and have pulled a static download of the Git repository to
> start looking at the code. I’ll see… :)
>
>
>
> Thanks,
>
> Raymond.
>
>
>
> *From:* Pavel Tupitsyn [mailto:ptupit...@apache.org]
> *Sent:* Wednesday, August 2, 2017 9:08 PM
>
>
> *To:* user@ignite.apache.org
> *Subject:* Re: Accessing array elements within items cached in Ignite
> without deserialising the entire item
>
>
>
> Actually, you are right, we can add this easily, because internal API
> allows random stream access.
>
> I've filed a ticket: https://issues.apache.org/jira/browse/IGNITE-5904
>
>
>
> Thank you for a good suggestion!
>
> And, by the way, everyone is welcome to contribute, this ticket can be a
> perfect start!
>
>
>
> Pavel
>
>
>
> On Wed, Aug 2, 2017 at 12:46 AM, Raymond Wilson <
> raymond_wil...@trimble.com> wrote:
>
> Hi Pavel,
>
>
>
> Thanks for the clarifications. I certainly appreciate that cross platform
> protocols constrain what can be done…
>
>
>
> Thanks for pointing out IBinaryRawReader.
>
>
>
> Regarding random access into arrays, is this something that is on the
> books for a future version?
>
> Thanks,
>
> Raymond.
>
>
>
> *From:* Pavel Tupitsyn [mailto:ptupit...@apache.org]
> *Sent:* Tuesday, August 1, 2017 11:31 PM
> *To:* user@ignite.apache.org
> *Subject:* Re: Accessing array elements within items cached in Ignite
> without 

Re: how to append data to IGFS so that data gets saved to Hive partitioned table?

2017-08-04 Thread csumi
Let me try to clarify the sequence of steps performed:
-   Created a table with partitions through Hive using the query below. It
creates a directory in HDFS.
create table stocks3 (stock string, time timestamp, price float)
PARTITIONED BY (years bigint, months bigint, days bigint) ROW FORMAT
DELIMITED FIELDS TERMINATED BY ',';
-   Then, as streaming data arrives, it is saved to the Ignite Hadoop file
system using IgniteFileSystem's append/create methods.
-   Ran the select query below. No results returned.
select * from stocks3;
-   Stopped Ignite and ran the select again on Hive. No results, with the
logs below.

hive> select * from stocks3;
17/08/04 14:59:08 INFO conf.HiveConf: Using the default value passed in for
log id: b5e3e924-e46a-481c-8aef-30d48605a2da
17/08/04 14:59:08 INFO session.SessionState: Updating thread name to
b5e3e924-e46a-481c-8aef-30d48605a2da main
17/08/04 14:59:08 WARN operation.Operation: Unable to create operation log
file:
D:\tmp\hive\\operation_logs\b5e3e924-e46a-481c-8aef-30d48605a2da\137adad6-ea23-462c-a414-6ce260e5bd49
java.io.IOException: The system cannot find the path specified
at java.io.WinNTFileSystem.createFileExclusively(Native Method)
at java.io.File.createNewFile(File.java:1012)
at
org.apache.hive.service.cli.operation.Operation.createOperationLog(Operation.java:237)
at
org.apache.hive.service.cli.operation.Operation.beforeRun(Operation.java:279)
at
org.apache.hive.service.cli.operation.Operation.run(Operation.java:314)
at
org.apache.hive.service.cli.session.HiveSessionImpl.executeStatementInternal(HiveSessionImpl.java:499)
at
org.apache.hive.service.cli.session.HiveSessionImpl.executeStatementAsync(HiveSessionImpl.java:486)
at
org.apache.hive.service.cli.CLIService.executeStatementAsync(CLIService.java:295)
at
org.apache.hive.service.cli.thrift.ThriftCLIService.ExecuteStatement(ThriftCLIService.java:506)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:491)
at
org.apache.hive.jdbc.HiveConnection$SynchronizedHandler.invoke(HiveConnection.java:1412)
at com.sun.proxy.$Proxy21.ExecuteStatement(Unknown Source)
at
org.apache.hive.jdbc.HiveStatement.runAsyncOnServer(HiveStatement.java:308)
at
org.apache.hive.jdbc.HiveStatement.execute(HiveStatement.java:250)
at
org.apache.hive.beeline.Commands.executeInternal(Commands.java:988)
at org.apache.hive.beeline.Commands.execute(Commands.java:1160)
at org.apache.hive.beeline.Commands.sql(Commands.java:1074)
at org.apache.hive.beeline.BeeLine.dispatch(BeeLine.java:1148)
at org.apache.hive.beeline.BeeLine.execute(BeeLine.java:976)
at org.apache.hive.beeline.BeeLine.begin(BeeLine.java:886)
at org.apache.hive.beeline.cli.HiveCli.runWithArgs(HiveCli.java:35)
at org.apache.hive.beeline.cli.HiveCli.main(HiveCli.java:29)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:491)
at org.apache.hadoop.util.RunJar.run(RunJar.java:234)
at org.apache.hadoop.util.RunJar.main(RunJar.java:148)
17/08/04 14:59:08 INFO ql.Driver: Compiling
command(queryId=_20170804145908_b270c978-ab00-4160-a2a6-c19b42eab676):
select * from stocks3
17/08/04 14:59:08 INFO parse.CalcitePlanner: Starting Semantic Analysis
17/08/04 14:59:08 INFO parse.CalcitePlanner: Completed phase 1 of Semantic
Analysis
17/08/04 14:59:08 INFO parse.CalcitePlanner: Get metadata for source tables
17/08/04 14:59:08 INFO metastore.HiveMetaStore: 0: get_table : db=yt
tbl=stocks3
17/08/04 14:59:08 INFO HiveMetaStore.audit: ugi=  ip=unknown-ip-addr 
cmd=get_table : db=yt tbl=stocks3
17/08/04 14:59:08 INFO parse.CalcitePlanner: Get metadata for subqueries
17/08/04 14:59:08 INFO parse.CalcitePlanner: Get metadata for destination
tables
17/08/04 14:59:09 INFO ql.Context: New scratch dir is
hdfs://localhost:9000/tmp/hive//b5e3e924-e46a-481c-8aef-30d48605a2da/hive_2017-08-04_14-59-08_935_8316159022041430928-1
17/08/04 14:59:09 INFO parse.CalcitePlanner: Completed getting MetaData in
Semantic Analysis
17/08/04 14:59:09 INFO parse.CalcitePlanner: Get metadata for source tables
17/08/04 14:59:09 INFO metastore.HiveMetaStore: 0: get_table : db=yt
tbl=stocks3
17/08/04 14:59:09 INFO HiveMetaStore.audit: ugi=  ip=unknown-ip-addr 
cmd=get_table : db=yt tbl=stocks3
17/08/04 14:59:09 INFO parse.CalcitePlanner: Get metadata for subqueries
17/08/04 14:59:09 INFO parse.CalcitePlanner: Get metadata for 
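
Two things may be worth checking with a setup like this (assumptions, not a
confirmed diagnosis). First, Hive only sees partitions that are registered in
the metastore, so files written directly into partition directories are
invisible to queries until the partitions are added. Second, if IGFS is
running in PRIMARY mode, appended data lives only in Ignite's memory and is
never written through to HDFS; DUAL_SYNC or DUAL_ASYNC mode is needed for the
data to reach the secondary (HDFS) file system. A hedged sketch of the
partition-registration step:

```sql
-- Scan the table's directory tree and register any partition directories
-- that were written directly to the file system (Hive repair command).
MSCK REPAIR TABLE stocks3;

-- Or register a single partition explicitly (the partition values here
-- are examples, not taken from the original message).
ALTER TABLE stocks3 ADD IF NOT EXISTS
  PARTITION (years=2017, months=8, days=4);
```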

Re: Affinity Key field is not identified if binary configuration is used on cache key object

2017-08-04 Thread kotamrajuyashasvi
Hi

Thanks for the response. When I put cacheKeyConfiguration in the Ignite
configuration, affinity was working. But when I call Cache.Get() in the
client program I get the following error:

"Java exception occurred
[cls=org.apache.ignite.binary.BinaryObjectException, msg=Binary type has
different affinity key fields [typeName=PersonPK,
affKeyFieldName1=customer_ref, affKeyFieldName2=null]]"

I already did LoadCache before running the program.
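
This kind of error usually appears when the key configuration differs between
nodes (an assumption here: one node registered PersonPK with customer_ref as
its affinity key while another registered it with none). A sketch of declaring
it identically in every node's and the client's Spring config, with the type
and field names taken from the error message above:

```xml
<bean class="org.apache.ignite.configuration.IgniteConfiguration">
    <!-- Must be identical on all server nodes AND on the client,
         otherwise the binary metadata for PersonPK conflicts. -->
    <property name="cacheKeyConfiguration">
        <list>
            <bean class="org.apache.ignite.cache.CacheKeyConfiguration">
                <property name="typeName" value="PersonPK"/>
                <property name="affinityKeyFieldName" value="customer_ref"/>
            </bean>
        </list>
    </property>
</bean>
```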



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/Affinity-Key-field-is-not-identified-if-binary-configuration-is-used-on-cache-key-object-tp15959p15990.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Re: ODBC version is not supported issue

2017-08-04 Thread Igor Sapego
Yeah, "Data type is not supported" - this is the message I'm getting
currently. I was just making sure we are on the same page. I'm
working on this and other issues right now.

I'll notify you when I'm done. Thanks for your assistance.

Best Regards,
Igor

On Thu, Aug 3, 2017 at 10:06 PM, Dar Swift 
wrote:

> It appears the node is up. Qlik, which we are trying to connect to, is
> only able to connect via ODBC, which is why we are not using the JDBC driver.
>
>
>
> Here is another video with a lot more tests:
>
> https://github.com/graben1437/SparkToIgniteTest/blob/master/
> src/main/resources/IgniteODBCDriverStillNotWorking.mp4.gz
>
>
>
> With Respect,
>
> Wallace
>
>
>
> *From: *Igor Sapego 
> *Reply-To: *"user@ignite.apache.org" 
> *Date: *Thursday, August 3, 2017 at 11:49 AM
>
> *To: *user 
> *Subject: *Re: ODBC version is not supported issue
>
>
>
> I think that in your case the core issue may be not "SQL version
>
> is not supported", but the other one, "Cannot establish connection
>
> with the host". Is the node up and running?
>
>
>
>
> Best Regards,
>
> Igor
>
>
>
> On Thu, Aug 3, 2017 at 6:43 PM, Igor Sapego  wrote:
>
> Oh, from your video I can see that you are using RazorSQL 7.3.2.
>
> Thanks
>
>
> Best Regards,
>
> Igor
>
>
>
> On Thu, Aug 3, 2017 at 6:42 PM, Igor Sapego  wrote:
>
> RazorSQL 2.0.0? Do you know where I can download it?
>
>
>
> Also, have you tried our JDBC driver?
>
>
> Best Regards,
>
> Igor
>
>
>
> On Thu, Aug 3, 2017 at 6:28 PM, Dar Swift 
> wrote:
>
> Greetings Igor,
>
>
>
> We have been using RazorSQL 2.0.0 running on a Windows Server 2012 R2 with
> visual C++ installed.
>
> Ignite 2.0.0 is running on CentOS 7.3.1611 kernel - 3.10.0-514 x86_64.
>
>
> Here is a video run through of our attempt:
>
> https://github.com/graben1437/SparkToIgniteTest/tree/master/
> src/main/resources
>
>
>
> Thank You,
>
> Wallace
>
>
>
> *From: *Igor Sapego 
> *Reply-To: *"user@ignite.apache.org" 
> *Date: *Thursday, August 3, 2017 at 6:23 AM
> *To: *user 
> *Subject: *Re: ODBC version is not supported issue
>
>
>
> Hello again,
>
>
>
> I've started working on the RazorSQL support, downloaded the
>
> latest version from the official site and there is no issue with version
>
> you have mentioned (there are some other issues though). Can
>
> you point out which version of RazorSQL you were using,
>
> so that I can test with your version as well?
>
>
>
> Also, the operating system version could be helpful.
>
>
>
> Thanks in advance.
>
>
> Best Regards,
>
> Igor
>
>
>
> On Wed, Jul 26, 2017 at 3:28 PM, limabean  wrote:
>
> Thank you, Igor.  Very helpful !
>
>
>
> --
> View this message in context: http://apache-ignite-users.
> 70518.x6.nabble.com/ODBC-version-is-not-supported-issue-tp15641p15690.html
>
> Sent from the Apache Ignite Users mailing list archive at Nabble.com.
>
>
>
>
>
>
>
>
>


Re: Trouble using latest Ignite Web Console (2.1)

2017-08-04 Thread Muthu
Hi Alexey,

Try this,
https://github.com/softwarebrahma/IgniteCacheWithAutomaticExternalPersistence

Can be run like below,

java -jar -Ddb.host=localhost -Ddb.port=5432 -Ddb.name=postgres
-DIGNITE_QUIET=false target\CacheAutomaticPersistence-0.0.1-SNAPSHOT.jar

Regards,
Muthu

-- The real danger with modern technology isn't that machines will begin to
think like people, but that people will begin to think like machines.
-- Faith is to believe what you do not see; the reward of this faith is to
see what you believe.

On Thu, Aug 3, 2017 at 8:50 PM, Alexey Kuznetsov 
wrote:

> Muthu,
>
> Can you attach a simple example to reproduce this exception?
>
>
> On Fri, Aug 4, 2017 at 4:08 AM, Muthu  wrote:
>
>> Sure, Alexey... thanks. I am using PostgreSQL version 9.6. The driver is a
>> bit old... it's 9.4.1212 JDBC 4
>> (postgresql-9.4.1212.jre6.jar), but I was using it without any issues
>> with the 1.9 version of the web console (the previous one).
>>
>> On the workaround, the problem is that when I generate the
>> models/artifacts from it (console.gridgain.com) and deploy them on Ignite
>> 2.1.0, it fails on startup with the error below (it was failing even with
>> 2.0 and was fixed after I identified it; more details here:
>> https://stackoverflow.com/questions/44468342/automatic-persistence-unable-to-bring-up-cache-with-web-console-generated-mode/44901117).
>> I thought later on that the Ignite version from there is GridGain's Ignite
>> version, and that it may not be compatible with open source Ignite 2.1.0.
>> Is that not true?
>>
>> INFO: Started cache [name=ignite-sys-cache, memoryPolicyName=sysMemPlc,
>> mode=REPLICATED, atomicity=TRANSACTIONAL]
>> Aug 03, 2017 12:34:36 PM org.apache.ignite.logger.java.JavaLogger error
>> SEVERE: Failed to reinitialize local partitions (preloading will be
>> stopped): GridDhtPartitionExchangeId [topVer=AffinityTopologyVersion
>> [topVer=1, minorTopVer=0], nodeId=e9b2cf46, evt=NODE_JOINED]
>> class org.apache.ignite.IgniteCheckedException: Failed to register query
>> type: QueryTypeDescriptorImpl [cacheName=DeptCache, name=Dept,
>> tblName=DEPT, fields={DNAME=class java.lang.String, LOC=class java.lang
>> .String, DEPTID=class java.lang.String}, idxs={}, fullTextIdx=null,
>> keyCls=class java.lang.String, valCls=class java.lang.Object,
>> keyTypeName=java.lang.String, valTypeName=com.brocade.dcm.do
>> main.model.Dept,
>> valTextIdx=false, typeId=1038936471, affKey=null, keyFieldName=deptid,
>> valFieldName=null, obsolete=false]
>> at org.apache.ignite.internal.processors.query.h2.IgniteH2Index
>> ing.registerType(IgniteH2Indexing.java:1532)
>> at org.apache.ignite.internal.processors.query.GridQueryProcess
>> or.registerCache0(GridQueryProcessor.java:1424)
>> at org.apache.ignite.internal.processors.query.GridQueryProcess
>> or.onCacheStart0(GridQueryProcessor.java:784)
>> at org.apache.ignite.internal.processors.query.GridQueryProcess
>> or.onCacheStart(GridQueryProcessor.java:845)
>> at org.apache.ignite.internal.processors.cache.GridCacheProcess
>> or.startCache(GridCacheProcessor.java:1185)
>> at org.apache.ignite.internal.processors.cache.GridCacheProcess
>> or.prepareCacheStart(GridCacheProcessor.java:1884)
>> at org.apache.ignite.internal.processors.cache.GridCacheProcess
>> or.startCachesOnLocalJoin(GridCacheProcessor.java:1755)
>> at org.apache.ignite.internal.processors.cache.distributed.dht.
>> preloader.GridDhtPartitionsExchangeFuture.init(GridDhtPartit
>> ionsExchangeFuture.java:619)
>> at org.apache.ignite.internal.processors.cache.GridCachePartiti
>> onExchangeManager$ExchangeWorker.body(GridCacheP
>> artitionExchangeManager.java:1901)
>> at org.apache.ignite.internal.util.worker.GridWorker.run(GridWo
>> rker.java:110)
>> at java.lang.Thread.run(Unknown Source)
>> Caused by: org.h2.jdbc.JdbcSQLException: Syntax error in SQL statement
>> "CREATE TABLE ""DEPTS"".""DEPT"" (_KEY VARCHAR INVISIBLE[*] NOT NULL,_VAL
>> OTHER INVISIBLE,_VER OTHER INVISIBLE,""DNAME"" VARCHAR,""LOC""
>>  VARCHAR,""DEPTID"" VARCHAR) ENGINE ""org.apache.ignite.internal.p
>> rocessors.query.h2.H2TableEngine"" "; expected "(, FOR, UNSIGNED, NOT,
>> NULL, AS, DEFAULT, GENERATED, NOT, NULL, AUTO_INCREMENT, BIGSERIAL, SE
>> RIAL, IDENTITY, NULL_TO_DEFAULT, SEQUENCE, SELECTIVITY, COMMENT,
>> CONSTRAINT, PRIMARY, UNIQUE, NOT, NULL, CHECK, REFERENCES, ,, )"; SQL
>> statement:
>> CREATE TABLE "DEPTS"."DEPT" (_KEY VARCHAR INVISIBLE NOT NULL,_VAL OTHER
>> INVISIBLE,_VER OTHER INVISIBLE,"DNAME" VARCHAR,"LOC" VARCHAR,"DEPTID"
>> VARCHAR) engine "org.apache.ignite.internal.processors.query.h2.H
>> 2TableEngine" [42001-193]
>> at org.h2.message.DbException.getJdbcSQLException(DbException.
>> java:345)
>> at org.h2.message.DbException.getSyntaxError(DbException.java:
>> 205)
>> at 

Re: Explicit setting of write synchronization mode FULL_SYNC needed for replicated caches?

2017-08-04 Thread Andrey Gura
Hi Muthu,

You understand correctly. If you have write-then-read logic that can be
executed on a backup node for a particular key, then you should use the
FULL_SYNC write synchronization mode.

Another way to get similar behaviour is to set the readFromBackup property
to false. In that case you can still use the PRIMARY_SYNC synchronization
mode, but you should understand what consistency guarantees you have in
case of node failure.
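
Both options can be sketched in Spring XML (the cache name is a placeholder;
the property names correspond to CacheConfiguration's standard setters):

```xml
<bean class="org.apache.ignite.configuration.CacheConfiguration">
    <!-- Placeholder cache name. -->
    <property name="name" value="myReplicatedCache"/>
    <property name="cacheMode" value="REPLICATED"/>

    <!-- Option 1: every write waits until backups are updated. -->
    <property name="writeSynchronizationMode" value="FULL_SYNC"/>

    <!-- Option 2 (alternative to FULL_SYNC): keep the default
         PRIMARY_SYNC but always read from the primary copy.
    <property name="readFromBackup" value="false"/>
    -->
</bean>
```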

On Fri, Aug 4, 2017 at 4:48 AM, Muthu  wrote:
> Hi Folks,
>
> I understand from the docs (https://apacheignite.readme.io/docs/cache-modes)
> that replicated caches are implemented on top of partitioned caches, where
> every key has a primary copy and is also backed up on all other nodes in the
> cluster, and that when data is queried, lookups may be served from either the
> primary or a backup copy on the node handling the query.
>
> But I see that the default cache write synchronization mode is PRIMARY_SYNC,
> where the client will not wait for backups to be updated. Does that mean I
> have to explicitly set it to FULL_SYNC for replicated caches, since responses
> rely on lookups of both primary and backup copies?
>
> Regards,
> Muthu
>