Re: Issue with Ignite + Zeppelin

2017-07-20 Thread Megha Mittal
Hi,

I have already shared my configuration, but there is an exception in the logs.
It seems to be a version mismatch between my Ignite version (2.0.0) and
Zeppelin's Ignite version (1.9.0).

Anyway, can you suggest any other UI tool that can connect to my
cluster?

Thanks.



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/Issue-with-Ignite-Zeppelin-tp14990p15224.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Re: Ignite Cluster going OOM

2017-07-20 Thread Ankit Singhai
Thanks, Andrew. I will wait a few days for the new release and then update
you on the test results.



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/Ignite-Cluster-going-OOM-tp15175p15223.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


EOFException in Hadoop Datanode when writing to Ignite

2017-07-20 Thread Rodrigo Parra
Hi,

I am trying to run a Flink job that simply writes some data to IGFS (URI:
igfs://igfs@/tmp/output/mydata.bin), as Flink supports custom Hadoop
filesystems [1]. However, I get a timeout error on the job, and looking
further I found an exception logged by Hadoop's DataNode:

java.io.EOFException: End of File Exception between local host is:
> "cloud-7.mynetwork/130.149.21.11"; destination host is: "cloud-7":45000;
> : java.io.EOFException; For more details see:
> http://wiki.apache.org/hadoop/EOFException
> at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native
> Method)
> at
> sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
> at
> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
> at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
> at
> org.apache.hadoop.net.NetUtils.wrapWithMessage(NetUtils.java:792)
> at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:765)
> at org.apache.hadoop.ipc.Client.call(Client.java:1480)
> at org.apache.hadoop.ipc.Client.call(Client.java:1407)
> at
> org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:229)
> at com.sun.proxy.$Proxy14.sendHeartbeat(Unknown Source)
> at
> org.apache.hadoop.hdfs.protocolPB.DatanodeProtocolClientSideTranslatorPB.sendHeartbeat(DatanodeProtocolClientSideTranslatorPB.java:153)
> at
> org.apache.hadoop.hdfs.server.datanode.BPServiceActor.sendHeartBeat(BPServiceActor.java:553)
> at
> org.apache.hadoop.hdfs.server.datanode.BPServiceActor.offerService(BPServiceActor.java:653)
> at
> org.apache.hadoop.hdfs.server.datanode.BPServiceActor.run(BPServiceActor.java:823)
> at java.lang.Thread.run(Thread.java:745)
> Caused by: java.io.EOFException
> at java.io.DataInputStream.readInt(DataInputStream.java:392)
> at
> org.apache.hadoop.ipc.Client$Connection.receiveRpcResponse(Client.java:1079)
> at org.apache.hadoop.ipc.Client$Connection.run(Client.java:974)
> 17/07/21 04:33:56 INFO ipc.Client: Retrying connect to server: cloud-7/
> 130.149.21.11:45000. Already tried 0 time(s); retry policy is
> RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000
> MILLISECONDS)


Ignite is running in PRIMARY mode (as I want to make sure to operate in
memory) and I can use Hadoop CLI tools to query the filesystem (using
bin/hdfs dfs -ls igfs://igfs@/) and write to it (using bin/hdfs dfs
-copyFromLocal).

I'd appreciate any ideas as to what could cause the write to fail when
doing it programmatically. Thanks in advance.

Best,
Rodrigo



[1]
https://ci.apache.org/projects/flink/flink-docs-release-0.8/example_connectors.html


Re: Re:Re: about write behind problems

2017-07-20 Thread vkulichenko
Did you set the writeThrough property? It's required to enable writes to the
cache store, regardless of whether write-behind mode is used or not.

cacheCfg.setWriteThrough(true);
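
For reference, here is a minimal sketch of a cache configuration that combines
write-through and write-behind; the store class and the flush settings below
are illustrative only, not taken from this thread:

import java.io.Serializable;
import javax.cache.Cache;
import javax.cache.configuration.FactoryBuilder;
import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.Ignition;
import org.apache.ignite.cache.store.CacheStoreAdapter;
import org.apache.ignite.configuration.CacheConfiguration;

public class WriteBehindConfigSketch {
    /** Placeholder store; a real implementation would talk to the DB. */
    public static class MyCacheStore extends CacheStoreAdapter<Long, String>
        implements Serializable {
        @Override public String load(Long key) { return null; } // no-op read-through
        @Override public void write(Cache.Entry<? extends Long, ? extends String> e) {
            // here the entry would be written to the underlying database
        }
        @Override public void delete(Object key) { /* remove the row from the DB */ }
    }

    public static void main(String[] args) {
        CacheConfiguration<Long, String> cacheCfg = new CacheConfiguration<>("mycache1");

        // Write-through must be enabled, otherwise the store is never updated.
        cacheCfg.setWriteThrough(true);

        // Write-behind batches and defers those store updates.
        cacheCfg.setWriteBehindEnabled(true);
        cacheCfg.setWriteBehindFlushFrequency(5000); // flush every 5 seconds (illustrative)
        cacheCfg.setWriteBehindBatchSize(512);       // illustrative batch size

        cacheCfg.setCacheStoreFactory(FactoryBuilder.factoryOf(MyCacheStore.class));

        try (Ignite ignite = Ignition.start()) {
            IgniteCache<Long, String> cache = ignite.getOrCreateCache(cacheCfg);
            cache.put(1L, "value"); // propagated to the store asynchronously
        }
    }
}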

-Val



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/about-write-behind-problems-tp15160p15219.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


RE: Custom SQL Functions - ClassNotFoundException

2017-07-20 Thread vkulichenko
You should provide the fully qualified class name in the configuration.

-Val



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/RE-Custom-SQL-Functions-ClassNotFoundException-tp15215p15220.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Re: Re: issue in REPLICATED model cache between server and client

2017-07-20 Thread aa...@tophold.com
hi Val, 

like this:










Regards
Aaron


aa...@tophold.com
 
From: vkulichenko
Date: 2017-07-21 03:15
To: user
Subject: Re: issue in REPLICATED model cache between server and client
Aaron,
 
Memory policy is not required on client, but it must be configured on
server. Do you have memory policy configuration for Account_Region?
 
Take a look at MemoryPoliciesExample, it demonstrates how to do this.
 
-Val
 
 
 
--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/issue-in-REPLICATED-model-cache-between-server-and-client-tp15154p15204.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Re:Re: about write behind problems

2017-07-20 Thread Lucky
OK, in this case I was not asking about loading the cache.
It is about persisting data.
I want to insert 50 records into the DB.
My settings are:

   cacheCfg.setWriteBehindEnabled(true);
   cacheCfg.setWriteBehindFlushSize(10240);
   cacheCfg.setWriteBehindFlushFrequency(1);
   cacheCfg.setWriteBehindBatchSize(1);
   cacheCfg.setWriteBehindFlushThreadCount(3);

My questions are:

1. The writeBehindEnabled parameter is set to true, but it does not seem to work.

   String sql = "insert into \"mycache1\".tableA(_key,fid,fname) " +
       "select(fid,fid,orgid) from \"mycache2\".tableB";
   cache.query(new SqlFieldsQuery(sql));

   This updates the data in the DB, but the call waits until all data has been
   persisted to the DB. I want the flow to continue as soon as the cache is
   updated, without waiting for the persistence to finish.

2. setWriteBehindBatchSize is set to 1 and setWriteBehindFlushFrequency is
   set to 5 seconds, but in the log I see that the batch size is not 1 each
   time, and there is no interval of 10 seconds.


Thanks.
Lucky



At 2017-07-21 03:18:06, "vkulichenko"  wrote:
>Hi Lucky,
>
>Write behind affects the way how underlying store is updated when you update
>the cache. Load cache process is different and is not related to this. More
>information here: https://apacheignite.readme.io/docs/persistent-store
>
>-Val
>
>
>
>--
>View this message in context: 
>http://apache-ignite-users.70518.x6.nabble.com/about-write-behind-problems-tp15160p15205.html
>Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Re: Ignite REST

2017-07-20 Thread waterg
Thank you Val!



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/Ignite-REST-tp14952p15216.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


RE: Custom SQL Functions - ClassNotFoundException

2017-07-20 Thread Roger Fischer (CW)
Thanks, Alex.

Now I get a java.lang.ClassNotFoundException:

Caused by: java.lang.ClassNotFoundException: DateTimeFunctions.class
at java.net.URLClassLoader.findClass(URLClassLoader.java:381)
at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:335)
at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
at org.springframework.util.ClassUtils.forName(ClassUtils.java:250)
at 
org.springframework.util.ClassUtils.resolveClassName(ClassUtils.java:284)
... 40 more

Full exception is further below.

I verified that the class is in the library.

> jar tvf libs/icpoc-pojo.jar | grep DateTimeFunctions.class
  3583 Thu Jul 20 14:51:52 PDT 2017 
com/abc/poc/icpoc/function/DateTimeFunctions.class

And I verified that the jar is in the classpath (using echo in the ignite.sh). 
Besides, the same JAR also includes the POJO for the cached objects, which are 
loaded correctly.

The client also reports the same exception. The client is a fat jar with the 
DateTimeFunctions class included.

Configuration:





DateTimeFunctions.class



Version: ver. 2.0.0#20170430-sha1:d4eef3c6

Any suggestions?

Thanks…

Roger

PS: Full exception:

class org.apache.ignite.IgniteException: Failed to instantiate Spring XML 
application context (make sure all classes used in Spring configuration are 
present at CLASSPATH) 
[springUrl=file:/home/ignite/ignite/test-ignite-cassandra/config-server-cassandra.xml]
at 
org.apache.ignite.internal.util.IgniteUtils.convertException(IgniteUtils.java:949)
at org.apache.ignite.Ignition.start(Ignition.java:350)
at 
org.apache.ignite.startup.cmdline.CommandLineStartup.main(CommandLineStartup.java:302)
Caused by: class org.apache.ignite.IgniteCheckedException: Failed to 
instantiate Spring XML application context (make sure all classes used in 
Spring configuration are present at CLASSPATH) 
[springUrl=file:/home/ignite/ignite/test-ignite-cassandra/config-server-cassandra.xml]
at 
org.apache.ignite.internal.util.spring.IgniteSpringHelperImpl.applicationContext(IgniteSpringHelperImpl.java:387)
at 
org.apache.ignite.internal.util.spring.IgniteSpringHelperImpl.loadConfigurations(IgniteSpringHelperImpl.java:104)
at 
org.apache.ignite.internal.util.spring.IgniteSpringHelperImpl.loadConfigurations(IgniteSpringHelperImpl.java:98)
at 
org.apache.ignite.internal.IgnitionEx.loadConfigurations(IgnitionEx.java:668)
at org.apache.ignite.internal.IgnitionEx.start(IgnitionEx.java:869)
at org.apache.ignite.internal.IgnitionEx.start(IgnitionEx.java:778)
at org.apache.ignite.internal.IgnitionEx.start(IgnitionEx.java:648)
at org.apache.ignite.internal.IgnitionEx.start(IgnitionEx.java:617)
at org.apache.ignite.Ignition.start(Ignition.java:347)
... 1 more
Caused by: org.springframework.beans.factory.BeanCreationException: Error 
creating bean with name 'grid.cfg' defined in URL 
[file:/home/ignite/ignite/test-ignite-cassandra/config-server-cassandra.xml]: 
Cannot create inner bean 
'org.apache.ignite.configuration.CacheConfiguration#3b084709' of type 
[org.apache.ignite.configuration.CacheConfiguration] while setting bean 
property 'cacheConfiguration' with key [0]; nested exception is 
org.springframework.beans.factory.BeanCreationException: Error creating bean 
with name 'org.apache.ignite.configuration.CacheConfiguration#3b084709' defined 
in URL 
[file:/home/ignite/ignite/test-ignite-cassandra/config-server-cassandra.xml]: 
Initialization of bean failed; nested exception is 
org.springframework.beans.TypeMismatchException: Failed to convert property 
value of type 'java.util.ArrayList' to required type 'java.lang.Class[]' for 
property 'sqlFunctionClasses'; nested exception is 
java.lang.IllegalArgumentException: Cannot find class [DateTimeFunctions.class]
at 
org.springframework.beans.factory.support.BeanDefinitionValueResolver.resolveInnerBean(BeanDefinitionValueResolver.java:313)
at 
org.springframework.beans.factory.support.BeanDefinitionValueResolver.resolveValueIfNecessary(BeanDefinitionValueResolver.java:122)
at 
org.springframework.beans.factory.support.BeanDefinitionValueResolver.resolveManagedList(BeanDefinitionValueResolver.java:382)
at 
org.springframework.beans.factory.support.BeanDefinitionValueResolver.resolveValueIfNecessary(BeanDefinitionValueResolver.java:157)
at 
org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.applyPropertyValues(AbstractAutowireCapableBeanFactory.java:1531)
at 
org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.populateBean(AbstractAutowireCapableBeanFactory.java:1276)
at 

Re: Data loss validation. TopologyValidator

2017-07-20 Thread vkulichenko
Not sure why injection doesn't work, but you can always get the Ignite
instance via Ignition.ignite().
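
In case it helps, a minimal sketch of a TopologyValidator that obtains the
node-local Ignite instance via Ignition.ignite(); the class name and the
server-count threshold are illustrative, not from this thread:

import java.util.Collection;
import org.apache.ignite.Ignite;
import org.apache.ignite.Ignition;
import org.apache.ignite.cluster.ClusterNode;
import org.apache.ignite.configuration.TopologyValidator;

public class MinServerCountValidator implements TopologyValidator {
    private static final int MIN_SERVERS = 2; // illustrative threshold

    @Override public boolean validate(Collection<ClusterNode> nodes) {
        // Ignition.ignite() returns the Ignite instance started in this JVM.
        Ignite ignite = Ignition.ignite();

        long servers = nodes.stream().filter(n -> !n.isClient()).count();

        ignite.log().info("Validating topology, server nodes: " + servers);

        // The cache stays writable only while enough server nodes are present.
        return servers >= MIN_SERVERS;
    }
}

The validator would then be attached to the cache with
CacheConfiguration#setTopologyValidator(...).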

-Val



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/Data-loss-validation-TopologyValidator-tp15147p15214.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Re: Enforcing data to be stored in Heap only

2017-07-20 Thread vkulichenko
Amit Pundir wrote
> Does that mean the on-heap will benefit only when we have more 'get'
> operations but will degrade the performance with 'put' operations?

Yes, I think that's accurate. However, even for reads the performance benefit
will be noticeable only in certain scenarios. As I already mentioned, the
on-heap layer is just a cache; it's not designed to duplicate everything you
have off-heap.


Amit Pundir wrote
> Also, why Ignite 2.0 is duplicating data instead of maintaining one copy
> be in on-heap or off-heap?

Because that's the architecture of Ignite 2.0. It uses page memory, so there
are no on-heap objects at all. Everything is stored in binary format off-heap,
where Ignite allocates memory manually, bypassing the JVM and GC. Starting with
2.1 these pages can also be mapped to disk.

-Val




--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/Enforcing-data-to-be-stored-in-Heap-only-tp15141p15213.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Re: Ignite REST

2017-07-20 Thread vkulichenko
Performance of the Java API is much better than the REST API, for multiple
reasons. One of the major reasons is that an Ignite client node is aware of
data affinity and can always send a request directly to the node that holds a
particular piece of data, while with REST a single node acts as a router.
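
To illustrate, a small sketch (the cache name is illustrative, not part of the
original question) showing how the Java API exposes this affinity information,
so a client can see which node owns a given key:

import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.Ignition;
import org.apache.ignite.cluster.ClusterNode;

public class AffinityLookupSketch {
    public static void main(String[] args) {
        try (Ignite ignite = Ignition.start()) {
            IgniteCache<Integer, String> cache = ignite.getOrCreateCache("myCache");

            cache.put(42, "value");

            // The client-side affinity function maps the key to its primary node,
            // so cache operations are routed to that node directly, without a router.
            ClusterNode primary = ignite.affinity("myCache").mapKeyToNode(42);

            System.out.println("Key 42 is primarily stored on node " + primary.id());
        }
    }
}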

-Val



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/Ignite-REST-tp14952p15212.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Re: Data loss validation. TopologyValidator

2017-07-20 Thread Raja
Thank you very much for the response.

While implementing the TopologyValidator, I ran into the following issue: I
wasn't able to get/inject the Ignite instance.
Is there a way to inject/get hold of the 'ignite' resource/instance within the
TopologyValidator?

Thank you
Raja




--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/Data-loss-validation-TopologyValidator-tp15147p15211.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Re: Ignite REST

2017-07-20 Thread waterg
Mikhail, I found out that some other process was using port 8080, so Ignite
used 8081 instead. Good catch!

On another note, does Ignite REST support batch requests? And if not, how does
the performance of REST compare to the Java API?



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/Ignite-REST-tp14952p15210.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Re: Ignite REST

2017-07-20 Thread waterg
Mikhail, which version of Ignite did you use? Was it 2.0.0?
I used it on 1.8 before and it worked fine, but not on 2.0.

By the way, does Ignite REST support batch requests? And if not, how does the
performance of REST compare to the Java API?

Thank you for helping!



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/Ignite-REST-tp14952p15209.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Re: Multi Cloud Deployment Issue

2017-07-20 Thread Amit Pundir
Hi Mikhail, 
It is critical for us to prove Ignite performance on at least 2 clouds (not
necessarily data center replication) before enterprise-wide adoption. Any
tips to improve its performance would be appreciated.

Thanks



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/Multi-Cloud-Deployment-Issue-tp15092p15208.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Re: Large value of idleConnectionTimeout in TcpCommunicationSpi

2017-07-20 Thread vkulichenko
Amit,

I think I already answered that :) See above.

-Val



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/Large-value-of-idleConnectionTimeout-in-TcpCommunicationSpi-tp15138p15206.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Re: about write behind problems

2017-07-20 Thread vkulichenko
Hi Lucky,

Write-behind affects how the underlying store is updated when you update the
cache. The cache loading process is different and is not related to this. More
information here: https://apacheignite.readme.io/docs/persistent-store
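
A small sketch of the difference (the cache name is illustrative; it assumes a
cache configured with a CacheStore, write-through and write-behind):

import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.Ignition;

public class LoadVsWriteBehindSketch {
    public static void main(String[] args) {
        try (Ignite ignite = Ignition.start()) {
            IgniteCache<Long, String> cache = ignite.cache("mycache1");

            // DB -> cache: delegates to CacheStore.loadCache(); write-behind is not involved.
            cache.loadCache(null);

            // cache -> DB: the update is enqueued and flushed to the store
            // asynchronously by the write-behind mechanism.
            cache.put(1L, "new value");
        }
    }
}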

-Val



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/about-write-behind-problems-tp15160p15205.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Re: issue in REPLICATED model cache between server and client

2017-07-20 Thread vkulichenko
Aaron,

Memory policy is not required on client, but it must be configured on
server. Do you have memory policy configuration for Account_Region?

Take a look at MemoryPoliciesExample, it demonstrates how to do this.

-Val



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/issue-in-REPLICATED-model-cache-between-server-and-client-tp15154p15204.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Re: Multi Cloud Deployment Issue

2017-07-20 Thread Amit Pundir
Hi Mikhail,
It is critical for us to prove Ignite performance on at least 2 clouds (not
necessarily data center replication) before enterprise-wide adoption. Any
tips to improve its performance would be appreciated.

Thanks



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/Multi-Cloud-Deployment-Issue-tp15092p15203.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Re: Large value of idleConnectionTimeout in TcpCommunicationSpi

2017-07-20 Thread Amit Pundir
Hi Val,
So do you think we can keep a bigger value for idleConnectionTimeout (a few
hours) without any untoward behavior? Please advise.


Thanks



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/Large-value-of-idleConnectionTimeout-in-TcpCommunicationSpi-tp15138p15202.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Re: write behind problems

2017-07-20 Thread vkulichenko
Hi Lucky,

I don't understand the use case. Are you loading data from DB to cache, or
writing to cache? Can you provide exact steps describing what you're doing?

-Val



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/write-behind-problems-tp15152p15201.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Re: Exception when Ignite server starts up and also when a new node joins the cluster

2017-07-20 Thread vkulichenko
Jaipal, Ramesh,

Do either of you have a reproducer that can be shared? This looks like a
serious issue, but it obviously can be reproduced only under specific
circumstances; I doubt anyone will be able to do this without your help.

-Val



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/Exception-when-Ignite-server-starts-up-and-also-when-a-new-node-joins-the-cluster-tp13684p15200.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Re: Pagination with TextQuery

2017-07-20 Thread vkulichenko
Query cursor results are paginated automatically while you iterate over the
cursor. I.e., if the page size is 1024 (the default), you will never have more
than 1024 entries in local memory. After you finish iterating through the
first page, the second one is requested, and so on. This helps avoid
out-of-memory issues with large result sets.

To change the page size you can use the Query#setPageSize method.
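
A short sketch of setting the page size and iterating a cursor; a ScanQuery is
used here for brevity, and setPageSize() is inherited from Query, so it applies
to TextQuery as well (the cache name is illustrative):

import javax.cache.Cache;
import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.Ignition;
import org.apache.ignite.cache.query.QueryCursor;
import org.apache.ignite.cache.query.ScanQuery;

public class CursorPagingSketch {
    public static void main(String[] args) {
        try (Ignite ignite = Ignition.start()) {
            IgniteCache<Integer, String> cache = ignite.getOrCreateCache("myCache");

            ScanQuery<Integer, String> qry = new ScanQuery<>();
            qry.setPageSize(256); // fetch results from remote nodes 256 entries at a time

            try (QueryCursor<Cache.Entry<Integer, String>> cursor = cache.query(qry)) {
                for (Cache.Entry<Integer, String> e : cursor) {
                    // Only one page of entries is held locally at any time;
                    // the next page is requested as iteration proceeds.
                    System.out.println(e.getKey() + " -> " + e.getValue());
                }
            }
        }
    }
}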

-Val



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/Pagination-with-TextQuery-tp15176p15199.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Re: Re: Did Ignite query support embed DML ?

2017-07-20 Thread vkulichenko
If you just calculate the sum, then you don't need to join, right? Just
remove the WHERE clause.

-Val



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/Did-Ignite-query-support-embed-DML-tp15120p15198.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Re: Enforcing data to be stored in Heap only

2017-07-20 Thread vkulichenko
Amit,

The current architecture assumes that all data is stored off-heap; on-heap is
just a cache. I.e., when you update an entry, you update both copies anyway.

Also, serialization overhead exists regardless of whether you use on-heap or
off-heap, because values are always stored in binary format. To reduce this
overhead you can use the BinaryObject API:
https://apacheignite.readme.io/docs/binary-marshaller
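
A small sketch of reading a single field without deserializing the whole value
(the cache name and the Person class below are illustrative):

import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.Ignition;
import org.apache.ignite.binary.BinaryObject;

public class KeepBinarySketch {
    // Illustrative value class, not from the original thread.
    static class Person {
        private final String name;
        private final int age;

        Person(String name, int age) {
            this.name = name;
            this.age = age;
        }
    }

    public static void main(String[] args) {
        try (Ignite ignite = Ignition.start()) {
            IgniteCache<Integer, Person> cache = ignite.getOrCreateCache("persons");
            cache.put(1, new Person("Alice", 30));

            // Binary view of the same cache: values are returned as BinaryObject
            // without being deserialized into Person instances.
            IgniteCache<Integer, BinaryObject> binCache = cache.withKeepBinary();

            BinaryObject bo = binCache.get(1);
            String name = bo.field("name"); // reads a single field from the binary form

            System.out.println("Name: " + name);
        }
    }
}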

-Val



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/Enforcing-data-to-be-stored-in-Heap-only-tp15141p15197.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Re: Ignite Cache Console

2017-07-20 Thread vkulichenko
Ajay,

Visor and Web Console are two tools we have. If you need something that is
not available there, you can always create your own application that will
collect the required information.

-Val



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/Ignite-Cache-Console-tp15125p15196.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Re: Large value of idleConnectionTimeout in TcpCommunicationSpi

2017-07-20 Thread vkulichenko
Yes, you're right. Apparently I was looking at the wrong version of the code.

-Val



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/Large-value-of-idleConnectionTimeout-in-TcpCommunicationSpi-tp15138p15195.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Re: Ignite2.0 Affinity Key Distribution

2017-07-20 Thread vkulichenko
Ajay,

To get a better distribution you should have more partitions. Ideally, the
number of partitions should be at least an order of magnitude bigger than the
number of nodes. I also recommend using a power of 2 for the number of
partitions (128, 256, 512, ...).

The default value is 1024 and it works fine in the majority of cases. Why did
you change it in the first place?
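
For reference, a minimal sketch of setting the partition count explicitly;
the cache name and the value 256 are just examples:

import org.apache.ignite.cache.affinity.rendezvous.RendezvousAffinityFunction;
import org.apache.ignite.configuration.CacheConfiguration;

public class PartitionCountSketch {
    public static void main(String[] args) {
        CacheConfiguration<Integer, String> cacheCfg = new CacheConfiguration<>("myCache");

        // Second argument is the partition count; a power of 2 that is much larger
        // than the node count gives an even key distribution (1024 is the default).
        cacheCfg.setAffinity(new RendezvousAffinityFunction(false, 256));
    }
}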

-Val



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/Ignite2-0-Affinity-Key-Distribution-tp15123p15194.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Re: java.lang.IllegalStateException: Data streamer has been closed.

2017-07-20 Thread zbyszek
Thank you for confirming Valentin!
zb

  From: vkulichenko [via Apache Ignite Users] 

 To: zbyszek  
 Sent: Thursday, July 20, 2017 8:13 PM
 Subject: Re: java.lang.IllegalStateException: Data streamer has been closed.
   
 Hi Zbyszek,

You're subscribed now, all good.

-Val 
 



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/java-lang-IllegalStateException-Data-streamer-has-been-closed-tp15059p15193.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.

New video on Ignite's architecture's ACID-compliant transactional subsystem

2017-07-20 Thread Dieds
Igniters, there is a new video by technical evangelist Akmal B. Chaudhri
explaining Apache® Ignite™ architecture's ACID-compliant transactional
subsystem. http://bit.ly/2uf485B

After watching his recorded webinar, you'll have a better understanding of
why companies in many industries, including large banks and financial
institutions, trust Apache Ignite and make it their platform of choice.

During the video, Akmal debunks the myth that online financial operations
need to be delegated to relational databases due to their ACID (Atomicity,
Consistency, Isolation, Durability) transaction support. Please let me know
what you think of the webinar after watching it. I am happy to share any
questions or comments with Akmal. 




--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/New-video-on-Ignite-s-architecture-s-ACID-compliant-transactional-subsystem-tp15192.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Re: Ignite2.0 Affinity Key Distribution

2017-07-20 Thread mcherkasov
I don't know of any recommendations about this; you can use the default
value, which works well for a lot of users.



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/Ignite2-0-Affinity-Key-Distribution-tp15123p15191.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Re: java.lang.IllegalStateException: Data streamer has been closed.

2017-07-20 Thread vkulichenko
Hi Zbyszek,

You're subscribed now, all good.

-Val



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/java-lang-IllegalStateException-Data-streamer-has-been-closed-tp15059p15190.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Re: Ignite Cluster going OOM

2017-07-20 Thread Andrey Mashenkov
Looks weird.

I see a number of NODE_FAILED messages, which means some nodes don't respond
for some reason.
Possibly, Ignite can't release resources for such nodes during the failure
detection timeout.
Try decreasing the network timeout to allow failed nodes to be dropped faster.

Also, you can try switching to version 2.1, which should be available in the
next 3 days and will contain a lot of fixes.

On Thu, Jul 20, 2017 at 7:10 PM, Ankit Singhai  wrote:

> ignite-d662243f.gz
>  n15188/ignite-d662243f.gz>
>
> It is server log file.
>
> Note:- 20 Caches are created but only 4 is being used.
>
>
>
> --
> View this message in context: http://apache-ignite-users.
> 70518.x6.nabble.com/Ignite-Cluster-going-OOM-tp15175p15188.html
> Sent from the Apache Ignite Users mailing list archive at Nabble.com.
>



-- 
Best regards,
Andrey V. Mashenkov


Re: Ignite Cluster going OOM

2017-07-20 Thread Ankit Singhai
ignite-d662243f.gz
 
 

It is the server log file.

Note: 20 caches are created but only 4 are being used.



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/Ignite-Cluster-going-OOM-tp15175p15188.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Re: Ignite Cluster going OOM

2017-07-20 Thread Ankit Singhai
The load is 16K transactions/min; this includes creating, updating, accessing
and deleting entries.



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/Ignite-Cluster-going-OOM-tp15175p15187.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Re: Ignite Cluster going OOM

2017-07-20 Thread Ankit Singhai
Hi Andrew,
Below is the grid config







10.179.29.135:47500..47509
10.179.29.136:47500..47509
10.179.29.137:47500..47509













 










I have not tried with a lower number of clients or an increased number of
servers. Please advise in this regard.

Use case: all the caches store session-related data (such as login sessions
and SSO token management) which is created once; updates are not that
frequent, but each entry is accessed 50-60 times.




--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/Ignite-Cluster-going-OOM-tp15175p15186.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Re: java.lang.IllegalStateException: Data streamer has been closed.

2017-07-20 Thread zbyszek
Hi, I believe I have subscribed properly: I clicked the 'Subscribe' button on
the web page and sent a separate email, then followed the instructions I
received. Is there anything else I can do?
Thanks, Zbyszek
Sent from Yahoo Mail for iPhone
On Tuesday, July 18, 2017, 11:37 PM, vkulichenko [via Apache Ignite Users] 
 wrote:

 Hi Zbyszek,

Please properly subscribe to the mailing list so that the community can receive 
email notifications for your messages. To subscribe, send empty email to 
user-subscr...@ignite.apache.org and follow simple instructions in the reply.


zbyszek wrote: Hello All,

I was wondering if anybody has encountered the following issue.
I have 2 servers (Ignite 2.0.1) in the cluster. Each of these 2 servers loads
different caches (with different names) in LOCAL mode using a DataStreamer.
I start these servers simultaneously (say the second server is started 1 sec.
after the first one). Very often, say with a 25% chance, the first server's
addData(DataStreamerImpl.java:665) call fails with the error
"java.lang.IllegalStateException: Data streamer has been closed". Looking at
the log one can see that this error is always preceded by the error
"org.apache.ignite.IgniteCheckedException: Failed to finish operation (too
many remaps): 32".
I have already verified that it is not us calling close() on the DataStreamer.
I never seem to observe this issue when I start only the first (one) server.

Fragment of the log is attached below.

Thank you in advance for any help or suggestion,

Zbyszek



evt=DISCOVERY_CUSTOM_EVT, node=918cd6f8-e761-43d7-9467-a28f65163c8c]
2017-07-17 21:40:39 INFO [exchange-worker-#65%null%] 
o.a.i.i.p.cache.GridCacheProcessor [Slf4jLogger.java:99] Started cache 
[name=fvpd_Exchange_BOND-L1555-1500303702823, memoryPolicyName=null, mode=LOCAL]
2017-07-17 21:40:39 INFO [exchange-worker-#65%null%] 
o.a.i.i.p.c.GridCachePartitionExchangeManager [Slf4jLogger.java:99] Skipping 
rebalancing (obsolete exchange ID) [top=AffinityTopologyVersion [topVer=2, 
minorTopVer=125], evt=DISCOVERY_CUSTOM_EVT, 
node=918cd6f8-e761-43d7-9467-a28f65163c8c]
2017-07-17 21:40:39 INFO [exchange-worker-#65%null%] 
o.a.i.i.p.cache.GridCacheProcessor [Slf4jLogger.java:99] Started cache 
[name=fvpd_Exchange_EQUITY-U1300-1500296583907, memoryPolicyName=null, 
mode=LOCAL]
2017-07-17 21:40:39 INFO [exchange-worker-#65%null%] 
o.a.i.i.p.c.GridCachePartitionExchangeManager [Slf4jLogger.java:99] Skipping 
rebalancing (obsolete exchange ID) [top=AffinityTopologyVersion [topVer=2, 
minorTopVer=126], evt=DISCOVERY_CUSTOM_EVT, 
node=918cd6f8-e761-43d7-9467-a28f65163c8c]
2017-07-17 21:40:39 INFO [exchange-worker-#65%null%] 
o.a.i.i.p.cache.GridCacheProcessor [Slf4jLogger.java:99] Started cache 
[name=fvpd_Exchange_EQUITY-U1100-1500289395073, memoryPolicyName=null, 
mode=LOCAL]
2017-07-17 21:40:39 INFO [exchange-worker-#65%null%] 
o.a.i.i.p.c.GridCachePartitionExchangeManager [Slf4jLogger.java:99] Skipping 
rebalancing (obsolete exchange ID) [top=AffinityTopologyVersion [topVer=2, 
minorTopVer=127], evt=DISCOVERY_CUSTOM_EVT, 
node=918cd6f8-e761-43d7-9467-a28f65163c8c]
2017-07-17 21:40:39 INFO [exchange-worker-#65%null%] 
o.a.i.i.p.cache.GridCacheProcessor [Slf4jLogger.java:99] Started cache 
[name=fvpd_Exchange_EQUITY-U1200-1500293005458, memoryPolicyName=null, 
mode=LOCAL]
2017-07-17 21:40:39 INFO [exchange-worker-#65%null%] 
o.a.i.i.p.c.GridCachePartitionExchangeManager [Slf4jLogger.java:99] Skipping 
rebalancing (obsolete exchange ID) [top=AffinityTopologyVersion [topVer=2, 
minorTopVer=128], evt=DISCOVERY_CUSTOM_EVT, 
node=918cd6f8-e761-43d7-9467-a28f65163c8c]
2017-07-17 21:40:39 INFO [exchange-worker-#65%null%] 
o.a.i.i.p.cache.GridCacheProcessor [Slf4jLogger.java:99] Started cache 
[name=fvpd_Listing_EQUITY-U0800-1500278534000, memoryPolicyName=null, 
mode=LOCAL]
2017-07-17 21:40:39 INFO [exchange-worker-#65%null%] 
o.a.i.i.p.c.GridCachePartitionExchangeManager [Slf4jLogger.java:99] Skipping 
rebalancing (obsolete exchange ID) [top=AffinityTopologyVersion [topVer=2, 
minorTopVer=129], evt=DISCOVERY_CUSTOM_EVT, 
node=918cd6f8-e761-43d7-9467-a28f65163c8c]
2017-07-17 21:40:39 INFO [exchange-worker-#65%null%] 
o.a.i.i.p.cache.GridCacheProcessor [Slf4jLogger.java:99] Started cache 
[name=fvpd_Listing_EQUITY-U1000-1500285783155, memoryPolicyName=null, 
mode=LOCAL]
2017-07-17 21:40:39 INFO [exchange-worker-#65%null%] 
o.a.i.i.p.c.GridCachePartitionExchangeManager [Slf4jLogger.java:99] Skipping 
rebalancing (obsolete exchange ID) [top=AffinityTopologyVersion [topVer=2, 
minorTopVer=130], evt=DISCOVERY_CUSTOM_EVT, 
node=918cd6f8-e761-43d7-9467-a28f65163c8c]
2017-07-17 21:40:39 INFO [exchange-worker-#65%null%] 
o.a.i.i.p.cache.GridCacheProcessor [Slf4jLogger.java:99] Started cache 
[name=fvpd_Listing_EQUITY-U1400-1500300190375, memoryPolicyName=null, 
mode=LOCAL]
2017-07-17 21:40:39 ERROR [sys-#131%null%] o.a.i.i.p.d.DataStreamerImpl 
[Slf4jLogger.java:119] 

Re: Ignite client not seeing cache created by server

2017-07-20 Thread Maureen Lawless
Hi Mikhail,

I've attached a small Maven Spring Boot project where I've reproduced the
error.
I have tried a different key, with no success.

When I run: mvn spring-boot:run

I get the following:

>>> +--+
>>> Ignite ver. 2.0.0#20170430-sha1:d4eef3c68ff116ee34bc13648cd82c640b3ea072
>>> +--+
>>> OS name: Windows 10 10.0 amd64
>>> CPU(s): 4
>>> Heap: 1.0GB
>>> VM name: 15044@mlawlessJMLW742
>>> Local node [ID=5BBEA473-216B-497F-966E-7FC6D66AD194, order=1,
>>> clientMode=false]
>>> Local node addresses: [mlawlessJMLW742/0:0:0:0:0:0:0:1, /127.0.0.1,
>>> /172.19.29.194]
>>> Local ports: TCP:11211 TCP:47100 UDP:47400 TCP:47500

[16:35:52] Topology snapshot [ver=1, servers=1, clients=0, CPUs=4,
heap=1.0GB]
2017-07-20 16:35:52.728  INFO 15044 --- [   main]
o.a.i.i.m.d.GridDiscoveryManager : Topology snapshot [ver=1,
servers=1, clients=0, CPUs=4, heap=1.0GB]
2017-07-20 16:35:52.962 ERROR 15044 --- [orker-#29%null%]
.c.d.d.p.GridDhtPartitionsExchangeFuture : Failed to reinitialize local
partitions (preloading will be stopped): GridDhtPartitionExchangeId
[topVer=AffinityTopologyVersion [topVer=1, minorTopVer=1], nodeId=5bbea473,
evt=DISCOVERY_CUSTOM_EVT]

org.apache.ignite.IgniteCheckedException: Failed to register query type:
QueryTypeDescriptorImpl [space=IMAGE_CACHE, name=SurveyImage, tblName=null,
fields={gasCompany=class java.lang.String, surveyCompany=class
java.lang.String, survey=class java.lang.String, session=class
java.lang.String, camera=class java.lang.String, fileName=class
java.lang.String, fileLocation=class java.lang.String, latitude=class
java.lang.Float, longitude=class java.lang.Float, id=class java.lang.Long},
idxs={SurveyImage_id_idx=QueryIndexDescriptorImpl [name=SurveyImage_id_idx,
type=SORTED, inlineSize=-1]}, fullTextIdx=null, keyCls=class java.lang.Long,
valCls=class java.lang.Object, keyTypeName=java.lang.Long,
valTypeName=com.ignite.SurveyImage, valTextIdx=false, typeId=0, affKey=null,
keyFieldName=null, valFieldName=null, obsolete=false]
at
org.apache.ignite.internal.processors.query.h2.IgniteH2Indexing.registerType(IgniteH2Indexing.java:1866)
~[ignite-indexing-2.0.0.jar:2.0.0]
at
org.apache.ignite.internal.processors.query.GridQueryProcessor.registerCache0(GridQueryProcessor.java:1306)
~[ignite-core-2.0.0.jar:2.0.0]
at
org.apache.ignite.internal.processors.query.GridQueryProcessor.onCacheStart0(GridQueryProcessor.java:756)
~[ignite-core-2.0.0.jar:2.0.0]
at
org.apache.ignite.internal.processors.query.GridQueryProcessor.onCacheStart(GridQueryProcessor.java:817)
~[ignite-core-2.0.0.jar:2.0.0]
at
org.apache.ignite.internal.processors.cache.GridCacheProcessor.startCache(GridCacheProcessor.java:1265)
~[ignite-core-2.0.0.jar:2.0.0]
at
org.apache.ignite.internal.processors.cache.GridCacheProcessor.prepareCacheStart(GridCacheProcessor.java:1943)
~[ignite-core-2.0.0.jar:2.0.0]
at
org.apache.ignite.internal.processors.cache.GridCacheProcessor.prepareCacheStart(GridCacheProcessor.java:1833)
~[ignite-core-2.0.0.jar:2.0.0]
at
org.apache.ignite.internal.processors.cache.CacheAffinitySharedManager.onCacheChangeRequest(CacheAffinitySharedManager.java:379)
~[ignite-core-2.0.0.jar:2.0.0]

at
org.apache.ignite.internal.processors.cache.distributed.dht.preloader.GridDhtPartitionsExchangeFuture.onCacheChangeRequest(GridDhtPartitionsExchangeFuture.java:688)
~[ignite-core-2.0.0.jar:2.0.0]
at
org.apache.ignite.internal.processors.cache.distributed.dht.preloader.GridDhtPartitionsExchangeFuture.init(GridDhtPartitionsExchangeFuture.java:529)
~[ignite-core-2.0.0.jar:2.0.0]
at
org.apache.ignite.internal.processors.cache.GridCachePartitionExchangeManager$ExchangeWorker.body(GridCachePartitionExchangeManager.java:1806)
[ignite-core-2.0.0.jar:2.0.0]
at
org.apache.ignite.internal.util.worker.GridWorker.run(GridWorker.java:110)
[ignite-core-2.0.0.jar:2.0.0]
at java.lang.Thread.run(Thread.java:748) [na:1.8.0_131]
Caused by: org.h2.jdbc.JdbcSQLException: Syntax error in SQL statement
"CREATE TABLE ""IMAGE_CACHE"".SURVEYIMAGE (_KEY BIGINT INVISIBLE[*] NOT
NULL,_VAL OTHER INVISIBLE,_VER OTHER INVISIBLE,GASCOMPANY
VARCHAR,SURVEYCOMPANY VARCHAR,SURVEY VARCHAR,SESSION VARCHAR,CAMERA
VARCHAR,FILENAME VARCHAR,FILELOCATION VARCHAR,LATITUDE REAL,LONGITUDE
REAL,ID BIGINT) ENGINE
""org.apache.ignite.internal.processors.query.h2.IgniteH2Indexing$H2TableEngine""
"; expected "(, FOR, UNSIGNED, NOT, NULL, AS, DEFAULT, GENERATED, NOT, NULL,
AUTO_INCREMENT, BIGSERIAL, SERIAL, IDENTITY, NULL_TO_DEFAULT, SEQUENCE,
SELECTIVITY, COMMENT, CONSTRAINT, PRIMARY, UNIQUE, NOT, NULL, CHECK,
REFERENCES, ,, )"; SQL statement:
CREATE TABLE "IMAGE_CACHE".SurveyImage (_key BIGINT INVISIBLE NOT NULL,_val
OTHER INVISIBLE,_ver OTHER INVISIBLE,gasCompany VARCHAR,surveyCompany
VARCHAR,survey 

Re: Problem with Messages after client reconnect

2017-07-20 Thread Ilya Lantukh
Hi Sebastian,

Thanks for the example and the detailed steps to reproduce.

Unfortunately, this is a flaw in the current binary metadata exchange
algorithm. I have created a ticket for it:
https://issues.apache.org/jira/browse/IGNITE-5794.

To avoid this issue for now, you should either:
- ensure that there is always at least 1 server node alive,
- restart all client nodes after the last server node has left the cluster.


On Thu, Jul 20, 2017 at 9:18 AM, Sebastian Sindelar <
sebastian.sinde...@ibh-ks.de> wrote:

> Hi.
>
>
>
> Just figured out while the example setup doesn’t produce the error. The
> following TestClient will always cause the exception at the server.
>
>
>
> public class TestClient {
>
>     public static void main(String[] args) {
>         IgniteConfiguration igniteConfiguration = new IgniteConfiguration();
>         igniteConfiguration.setClientMode(true);
>         igniteConfiguration.setMetricsLogFrequency(0);
>         Ignite ignite = Ignition.start(igniteConfiguration);
>
>         Scanner scanner = new Scanner(System.in);
>         while (true) {
>             String message = scanner.nextLine();
>             IgniteMessaging messaging = ignite.message(ignite.cluster());
>             messaging.send("test", new ComplexMessage(message));
>         }
>     }
> }
>
>
>
> Ignite.message(…) is now called before each message is send. In the main
> application it is called only once.
>
> I also noticed I didn’t describe the steps to reproduce the problem very
> detailed yesterday evening.
>
>
>
>1. Start 1 Client and 1 Server
>2. Type in some message in the client console and hit Enter
>3. Check that message is received and logged by the server
>4. Stop the Server
>5. Start the Server and wait for the client to reconnect
>6. Type in another message in the client console and hit Enter
>7. See the Exception at the server log
>
>
>
> Also one additional note. This only happens if all cluster servers are
> offline and the cluster gets restartet. If you have multiple servers there
> is no problem. At least one server must survive to keep the record on how
> to unmarshall the sent class.
>
>
>
> Best regards,
> Sebastian Sindelar
>
> From: Sebastian Sindelar [mailto:sebastian.sinde...@ibh-ks.de]
> Sent: Wednesday, July 19, 2017 16:53
> To: user@ignite.apache.org
> Subject: Problem with Messages after client reconnect
>
>
>
> Hi.
>
>
>
> I just started with Ignite messaging and I encountered the following
> problem:
>
>
>
> I have a simple Setup with 1 server and 1 client. Sending messages from
> the client to the server works fine. But after a server restart I encounter
> the problem that when I send a message with a custom class I get an
> unmarshalling error:
>
>
>
>
>
> *org.apache.ignite.IgniteCheckedException*: null
>
>at org.apache.ignite.internal.util.IgniteUtils.unmarshal(
> *IgniteUtils.java:9893*) ~[ignite-core-2.0.0.jar:2.0.0]
>
>at org.apache.ignite.internal.managers.communication.GridIoManager$
> GridUserMessageListener.onMessage(*GridIoManager.java:2216*)
> ~[ignite-core-2.0.0.jar:2.0.0]
>
>at org.apache.ignite.internal.managers.communication.
> GridIoManager.invokeListener(*GridIoManager.java:1257*)
> [ignite-core-2.0.0.jar:2.0.0]
>
>at org.apache.ignite.internal.managers.communication.
> GridIoManager.access$2000(*GridIoManager.java:114*)
> [ignite-core-2.0.0.jar:2.0.0]
>
>at org.apache.ignite.internal.managers.communication.GridIoManager$
> GridCommunicationMessageSet.unwind(*GridIoManager.java:2461*)
> [ignite-core-2.0.0.jar:2.0.0]
>
>at org.apache.ignite.internal.managers.communication.GridIoManager.
> unwindMessageSet(*GridIoManager.java:1217*) [ignite-core-2.0.0.jar:2.0.0]
>
>at org.apache.ignite.internal.managers.communication.
> GridIoManager.access$2300(*GridIoManager.java:114*)
> [ignite-core-2.0.0.jar:2.0.0]
>
>at org.apache.ignite.internal.managers.communication.
> GridIoManager$8.run(*GridIoManager.java:1186*)
> [ignite-core-2.0.0.jar:2.0.0]
>
>at java.util.concurrent.ThreadPoolExecutor.runWorker(
> *ThreadPoolExecutor.java:1142*) [na:1.8.0_112]
>
>at java.util.concurrent.ThreadPoolExecutor$Worker.run(
> *ThreadPoolExecutor.java:617*) [na:1.8.0_112]
>
>at java.lang.Thread.run(*Thread.java:745*) [na:1.8.0_112]
>
> Caused by: *java.lang.NullPointerException*: null
>
>at org.apache.ignite.internal.processors.cache.binary.
> CacheObjectBinaryProcessorImpl.metadata(
> *CacheObjectBinaryProcessorImpl.java:492*) ~[ignite-core-2.0.0.jar:2.0.0]
>
>at org.apache.ignite.internal.processors.cache.binary.
> CacheObjectBinaryProcessorImpl$2.metadata(
> *CacheObjectBinaryProcessorImpl.java:174*) ~[ignite-core-2.0.0.jar:2.0.0]
>
>at 

Re: Ignite Cluster going OOM

2017-07-20 Thread Andrey Mashenkov
Hi,

Looks like a memory leak.
Would you please share a grid config and clarify the use case?
Does it happen with a lower number of clients as well?
Does increasing the number of server nodes or the heap size help?


On Thu, Jul 20, 2017 at 3:20 PM, Ankit Singhai  wrote:

> Hi Guys,
> I am using Ignite 1.8.0 with 3 servers and 30 clients. Where in each server
> has 4 GB of java heap. Server is going OOM very soon and heap dump shows
> the
> below as top consumer.
>
> Class Name
> | Shallow Heap | Retained Heap | Percentage
> 
> 
> --
> org.apache.ignite.internal.processors.cache.distributed.dht.
> GridDhtPartitionTopologyImpl
> @ 0x705dd8970|   72 |   767,921,176 | 17.96%
> org.apache.ignite.internal.processors.cache.distributed.dht.
> GridDhtPartitionTopologyImpl
> @ 0x700f38708|   72 |   749,744,568 | 17.53%
> org.apache.ignite.internal.processors.cache.distributed.dht.
> GridDhtPartitionTopologyImpl
> @ 0x702763d90|   72 |   723,323,712 | 16.91%
> org.apache.ignite.internal.processors.cache.distributed.dht.
> GridDhtPartitionTopologyImpl
> @ 0x702575918|   72 |   375,394,760 |  8.78%
> 
> 
> --
>
>
>
> I have 4 caches and they are configured to use OFFHEAP_TIERED
>
> Cache 1
> 
>
> 
>
> Cache2
> 
>
> 
>
> Cache3
> 
>
> 
>
>
> Cache4
> 
>
> 
>
> Please help me out.
>
>
>
> --
> View this message in context: http://apache-ignite-users.
> 70518.x6.nabble.com/Ignite-Cluster-going-OOM-tp15175.html
> Sent from the Apache Ignite Users mailing list archive at Nabble.com.
>



-- 
Best regards,
Andrey V. Mashenkov


Re: Transformer not called on every ScanQuery

2017-07-20 Thread Andrey Mashenkov
Hi Guilherme,

What version of Ignite do you use?
Can you share a reproducer?

On Thu, Jul 20, 2017 at 1:32 PM, Guilherme Melo  wrote:

> Hello,
> Has anyone had issues with Transformers not being called on every instance
> of the scanQuery?
> only about 10% of the objects get transformed, the rest are added to the
> list returned from cache.query() are IgniteBiTuple, not what was passed to
> the Transformer.
> this does not happen when running the query without the transformer.
>
> Thank you
>
>


-- 
Best regards,
Andrey V. Mashenkov


Re: Ignite SQL queries's metadata

2017-07-20 Thread Andrey Mashenkov
Hi Menglei,

It is fixed in IGNITE-5252 [1] and already merged to master.
The fix will be available in the upcoming 2.1 release.

[1] https://issues.apache.org/jira/browse/IGNITE-5252

On Wed, Jul 19, 2017 at 12:14 AM, menglei  wrote:

> Hi all,
>
> Currently we are heavily using the standard SQL query functionality
> provided
> by Ignite. We have a lot join across the cache and need to get many columns
> from the query.
>
> But right now the interface of SQL resultset (QueryCursor in ignite) has
> limited exposure of the API, which is an iterator of the rows and we can
> only able to use the index to get the columns or the object we want. There
> is no support for the column name based retrieve operation like JDBC. We
> also don't want to use JDBC which is not fully supported yet.
>
> The interesting part is the QueryCursorEx and the QueryCursorImpl have the
> method to get at least the metadata which we can make use of. But I am not
> confident to use the QueryCursorEx (it may be changed) in all our queries
> since the QueryCursor is the right API to use.
>
> Could someone give us some insight for how to get the metadata?
>
> Thanks,
> Menglei
>
>
>
> --
> View this message in context: http://apache-ignite-users.
> 70518.x6.nabble.com/Ignite-SQL-queries-s-metadata-tp15085.html
> Sent from the Apache Ignite Users mailing list archive at Nabble.com.
>



-- 
Best regards,
Andrey V. Mashenkov


Re: Ignite2.0 Affinity Key Distribution

2017-07-20 Thread Ajay
Hi,

What size (threshold limit) is recommended for a partition?

Thanks,
Ajay.



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/Ignite2-0-Affinity-Key-Distribution-tp15123p15178.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Re: Pagination with TextQuery

2017-07-20 Thread Andrey Mashenkov
Hi,

There is no way to implement pagination with TextQuery
as there is no result order guarantee.


On Thu, Jul 20, 2017 at 3:23 PM, iostream  wrote:

> Hi,
>
> How to implement pagination with TextQuery? I want to fetch only the first
> 10 matches for my text query on the first page and the next 10 (from 11 to
> 20) on the second page.
>
> Thanks!
>
>
>
> --
> View this message in context: http://apache-ignite-users.
> 70518.x6.nabble.com/Pagination-with-TextQuery-tp15176.html
> Sent from the Apache Ignite Users mailing list archive at Nabble.com.
>



-- 
Best regards,
Andrey V. Mashenkov


Pagination with TextQuery

2017-07-20 Thread iostream
Hi,

How do I implement pagination with TextQuery? I want to fetch only the first
10 matches for my text query on the first page and the next 10 (matches 11 to
20) on the second page.

Thanks!



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/Pagination-with-TextQuery-tp15176.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Ignite Cluster going OOM

2017-07-20 Thread Ankit Singhai
Hi Guys,
I am using Ignite 1.8.0 with 3 servers and 30 clients, where each server has
4 GB of Java heap. The servers go OOM very soon, and a heap dump shows the
entries below as the top consumers.

Class Name                                                                                              | Shallow Heap | Retained Heap | Percentage
----------------------------------------------------------------------------------------------------------------------------------------------------
org.apache.ignite.internal.processors.cache.distributed.dht.GridDhtPartitionTopologyImpl @ 0x705dd8970 |           72 |   767,921,176 |     17.96%
org.apache.ignite.internal.processors.cache.distributed.dht.GridDhtPartitionTopologyImpl @ 0x700f38708 |           72 |   749,744,568 |     17.53%
org.apache.ignite.internal.processors.cache.distributed.dht.GridDhtPartitionTopologyImpl @ 0x702763d90 |           72 |   723,323,712 |     16.91%
org.apache.ignite.internal.processors.cache.distributed.dht.GridDhtPartitionTopologyImpl @ 0x702575918 |           72 |   375,394,760 |      8.78%
----------------------------------------------------------------------------------------------------------------------------------------------------



I have 4 caches and they are configured to use OFFHEAP_TIERED

Cache 1




Cache2




Cache3





Cache4




Please help me out.



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/Ignite-Cluster-going-OOM-tp15175.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Re: Issue with Ignite + Zeppelin

2017-07-20 Thread Andrey Gura
Zeppelin works well with Ignite. It seems you have some configuration or
setup issue.

On July 20, 2017 at 9:56 AM, "Megha Mittal" <
meghamittal1...@gmail.com> wrote:

> Hi,
>
> I added ignite-core jar in the jdbc folder of ignite and even provided jar
> path as an artifact under dependencies section in Zeppelin, but still no
> success.
>
> Can you suggest me some other UI tool that I can coonect to my java
> application. I am not using any xml configurations for Ignite.
>
> Thanks.
>
>
>
> --
> View this message in context: http://apache-ignite-users.
> 70518.x6.nabble.com/Issue-with-Ignite-Zeppelin-tp14990p15161.html
> Sent from the Apache Ignite Users mailing list archive at Nabble.com.
>


Re: Ignite2.0 Affinity Key Distribution

2017-07-20 Thread mcherkasov
>I configured 10 partitions, when i started the cluster with two nodes i saw
8 partitions in one server and two partitions in other server, can we make
partitions equally?

Play a little bit with the partition number; try setting it to 128 or 256.

>If not then 8 partitions servers having more load compare to other right? 
Right.



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/Ignite2-0-Affinity-Key-Distribution-tp15123p15173.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Re: Data loss validation. TopologyValidator

2017-07-20 Thread ezhuravlev
Hi,

Yes, TopologyValidator was implemented exactly for these things.

Also, it's possible to listen for EVT_CACHE_REBALANCE_PART_DATA_LOST events
and implement some logic on top of them, but I think TopologyValidator is a
better solution.
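
A minimal sketch of such a listener (note that the event type also has to be
enabled via IgniteConfiguration#setIncludeEventTypes; the class name below is
illustrative):

import org.apache.ignite.Ignite;
import org.apache.ignite.Ignition;
import org.apache.ignite.events.CacheRebalancingEvent;
import org.apache.ignite.events.EventType;
import org.apache.ignite.lang.IgnitePredicate;

public class PartDataLostListenerSketch {
    public static void main(String[] args) {
        Ignite ignite = Ignition.start();

        IgnitePredicate<CacheRebalancingEvent> lsnr = evt -> {
            // Fired when a partition's data is lost during rebalancing.
            System.out.println("Data lost for cache " + evt.cacheName()
                + ", partition " + evt.partition());
            return true; // keep listening
        };

        ignite.events().localListen(lsnr, EventType.EVT_CACHE_REBALANCE_PART_DATA_LOST);
    }
}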

Evgenii



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/Data-loss-validation-TopologyValidator-tp15147p15172.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Transformer not called on every ScanQuery

2017-07-20 Thread Guilherme Melo
Hello,
Has anyone had issues with Transformers not being called on every instance
of the scanQuery?
only about 10% of the objects get transformed, the rest are added to the
list returned from cache.query() are IgniteBiTuple, not what was passed to
the Transformer.
this does not happen when running the query without the transformer.

Thank you


Re: Custom SQL Functions

2017-07-20 Thread afedotov
Hello,

1) No, functions can't be overloaded. Functions with the same name must
have a different number of parameters.
3) To register a class with custom SQL functions via XML, just use regular
Spring bean notation:


...


Functions1.class
Functions2
...
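
Since the XML snippet above was stripped by the archive, here is a hedged
sketch of the programmatic equivalent; in XML the sqlFunctionClasses property
takes the fully qualified class names. The function and cache name below are
illustrative:

import org.apache.ignite.cache.query.annotations.QuerySqlFunction;
import org.apache.ignite.configuration.CacheConfiguration;

public class SqlFunctionRegistrationSketch {
    // The class holding custom functions; methods must be public static
    // and annotated with @QuerySqlFunction.
    public static class Functions1 {
        @QuerySqlFunction
        public static int square(int x) {
            return x * x;
        }
    }

    public static void main(String[] args) {
        CacheConfiguration<Integer, String> cacheCfg = new CacheConfiguration<>("myCache");

        // Programmatic equivalent of the sqlFunctionClasses XML property;
        // in XML, list the fully qualified class names (no ".class" suffix).
        cacheCfg.setSqlFunctionClasses(Functions1.class);
    }
}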





Kind regards,
Alex.

On Thu, Jul 20, 2017 at 6:31 AM, Roger Fischer (CW) [via Apache Ignite
Users]  wrote:

> Hello,
>
>
>
> I have a couple of questions on custom SQL functions:
>
>
>
> 1) Can functions be overloaded, ie. could there be a to_string( Date) and
> a to_string( Double) function?
>
>
>
> 2) If yes, how is the applicable function identified? Based on parameters?
> Or does the function need to be qualified with the class name?
>
>
>
> 3) How are functions registered using XML configuration? The documentation
> only shows programmatic registration.
>
>
>
> Thanks…
>
>
>
> Roger
>
>
>
>
> 
>




--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/Custom-SQL-Functions-tp15153p15170.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.

Re: Ignite Cache Console

2017-07-20 Thread Ajay
Hi,
Yes, I am aware of Visor, but it had some bugs. Anyway, I need to get:

Cluster
|
|_ _ Nodes
|
|_ _Partitions
|
|_ _ Data resides

Thanks,
Ajay.



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/Ignite-Cache-Console-tp15125p15169.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Re: Enforcing data to be stored in Heap only

2017-07-20 Thread Amit Pundir
Hi Val,
Thanks for the response.

The on-heap get performance is better than getting from off-heap, especially
for our use cases. We get the data, update it and put it back multiple times.
With every such request, the deserialization overhead from off-heap adds up.
We can't accept such an overhead and still maintain our SLAs.

Also, the decision to use on-heap vs off-heap should be left to the
application. In our case, we are fine with a bigger heap (8 GB). But with
Ignite 2.0, the total RAM requirement shoots up drastically because it keeps
a copy of the on-heap data off-heap.

I am assuming that with on-heap enabled, the complete 'Entry' is stored
on-heap and not just the 'Key'. I will confirm this with a heap dump.

Thanks.



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/Enforcing-data-to-be-stored-in-Heap-only-tp15141p15168.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Re: Ignite2.0 Affinity Key Distribution

2017-07-20 Thread Ajay
Hi,

I configured 10 partitions. When I started the cluster with two nodes, I saw
8 partitions on one server and 2 partitions on the other. Can we make the
partitions equal? If not, then the server with 8 partitions carries more load
compared to the other, right?

Thanks,
Ajay.



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/Ignite2-0-Affinity-Key-Distribution-tp15123p15167.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Re:Re: encountering a character encoding problem when DML SQL is used

2017-07-20 Thread afedotov
Hi Martin,

Could you share a reproducer? I'll check it.

Kind regards,
Alex


On July 20, 2017 at 4:14 AM, "Martin [via Apache Ignite Users]" <
ml+s70518n15149...@n6.nabble.com> wrote:



I called it from Java, and I am using Ignite 2.0.
It seems many Chinese developers are encountering this problem.
Is there any configuration option that can set the encoding, or is there any
API method that can do this?



thanks,
Martin




At 2017-07-20 00:34:45, "afedotov" <[hidden email]
> wrote:

Hi Martin,

I've checked the same with 2.0 and it works on my side.

Which version of Ignite do you use?
Do you call it from Java, .NET or C++ ?
Please check that source files encoding is consistent. It's better to stick
with UTF-8.

Kind regards,
Alex.

On Wed, Jul 19, 2017 at 4:47 AM, Martin [via Apache Ignite Users] <[hidden
email] > wrote:

> Hi guys,
>
> I am now encountering a character encoding problem when DML SQL is
> used.
> For example, if i run the DML SQL like the following sentence:
>  update tableA set columnA="测试" and columnB="你好" where _key=1
> it can works, but when i get it(_key=1) from the cache , i get the messy
> word.
>
> any ideas about this? is there any way to set character encoding?
>
>
> thanks for your help
>
>
>
> Thanks,
> Martin
>
>
>
>
>
>
> 
>


--
View this message in context: Re: encountering a character encoding problem
when DML SQL is used
Sent from the Apache Ignite Users mailing list archive at Nabble.com.











--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/encountering-a-character-encoding-problem-when-DML-SQL-is-used-tp15094p15165.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.

Re: Question about QueryTextField

2017-07-20 Thread iostream
I corrected the problem. Using a capital "A", like you suggested, does not
work; I have to use the exact name of my attribute (which is a lowercase "a").

Thanks!



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/Question-about-QueryTextField-tp15111p15164.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Re: Question about QueryTextField

2017-07-20 Thread Evgenii Zhuravlev
Well, it works for me. Please share the reproducer and I will check it.

Evgenii

2017-07-20 10:10 GMT+03:00 iostream :

> When I use search as -> "A:123" it does not give me any result even though
> there is a cache entry with A = 123. Can somebody please help me?
>
>
>
> --
> View this message in context: http://apache-ignite-users.
> 70518.x6.nabble.com/Question-about-QueryTextField-tp15111p15162.html
> Sent from the Apache Ignite Users mailing list archive at Nabble.com.
>


Re: Question about QueryTextField

2017-07-20 Thread iostream
When I search with "A:123" it does not give me any results even though
there is a cache entry with A = 123. Can somebody please help me?



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/Question-about-QueryTextField-tp15111p15162.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Re: Issue with Ignite + Zeppelin

2017-07-20 Thread Megha Mittal
Hi,

I added the ignite-core jar to the jdbc folder of Ignite and even provided
the jar path as an artifact under the dependencies section in Zeppelin, but
still no success.

Can you suggest some other UI tool that I can connect to my Java
application? I am not using any XML configuration for Ignite.

Thanks.



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/Issue-with-Ignite-Zeppelin-tp14990p15161.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


AW: Problem with Messages after client reconnect

2017-07-20 Thread Sebastian Sindelar
Hi.

Just figured out why the example setup doesn't produce the error. The
following TestClient will always cause the exception at the server.

public class TestClient {

    public static void main(String[] args) {
        IgniteConfiguration igniteConfiguration = new IgniteConfiguration();
        igniteConfiguration.setClientMode(true);
        igniteConfiguration.setMetricsLogFrequency(0);
        Ignite ignite = Ignition.start(igniteConfiguration);

        Scanner scanner = new Scanner(System.in);
        while (true) {
            String message = scanner.nextLine();
            IgniteMessaging messaging = ignite.message(ignite.cluster());
            messaging.send("test", new ComplexMessage(message));
        }
    }
}

Ignite.message(...) is now called before each message is sent. In the main
application it is called only once.
I also noticed that I didn't describe the steps to reproduce the problem in
much detail yesterday evening.
detailed yesterday evening.


  1.  Start 1 Client and 1 Server
  2.  Type in some message in the client console and hit Enter
  3.  Check that message is received and logged by the server
  4.  Stop the Server
  5.  Start the Server and wait for the client to reconnect
  6.  Type in another message in the client console and hit Enter
  7.  See the Exception at the server log

Also one additional note: this only happens if all cluster servers are offline
and the cluster gets restarted. If you have multiple servers there is no
problem; at least one server must survive to keep the record of how to
unmarshal the sent class.

Best regards,
Sebastian Sindelar

From: Sebastian Sindelar [mailto:sebastian.sinde...@ibh-ks.de]
Sent: Wednesday, July 19, 2017 16:53
To: user@ignite.apache.org
Subject: Problem with Messages after client reconnect

Hi.

I just started with Ignite messaging and I encountered the following problem:

I have a simple Setup with 1 server and 1 client. Sending messages from the 
client to the server works fine. But after a server restart I encounter the 
problem that when I send a message with a custom class I get an unmarshalling 
error:


org.apache.ignite.IgniteCheckedException: null
   at 
org.apache.ignite.internal.util.IgniteUtils.unmarshal(IgniteUtils.java:9893) 
~[ignite-core-2.0.0.jar:2.0.0]
   at 
org.apache.ignite.internal.managers.communication.GridIoManager$GridUserMessageListener.onMessage(GridIoManager.java:2216)
 ~[ignite-core-2.0.0.jar:2.0.0]
   at 
org.apache.ignite.internal.managers.communication.GridIoManager.invokeListener(GridIoManager.java:1257)
 [ignite-core-2.0.0.jar:2.0.0]
   at 
org.apache.ignite.internal.managers.communication.GridIoManager.access$2000(GridIoManager.java:114)
 [ignite-core-2.0.0.jar:2.0.0]
   at 
org.apache.ignite.internal.managers.communication.GridIoManager$GridCommunicationMessageSet.unwind(GridIoManager.java:2461)
 [ignite-core-2.0.0.jar:2.0.0]
   at 
org.apache.ignite.internal.managers.communication.GridIoManager.unwindMessageSet(GridIoManager.java:1217)
 [ignite-core-2.0.0.jar:2.0.0]
   at 
org.apache.ignite.internal.managers.communication.GridIoManager.access$2300(GridIoManager.java:114)
 [ignite-core-2.0.0.jar:2.0.0]
   at 
org.apache.ignite.internal.managers.communication.GridIoManager$8.run(GridIoManager.java:1186)
 [ignite-core-2.0.0.jar:2.0.0]
   at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) 
[na:1.8.0_112]
   at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) 
[na:1.8.0_112]
   at java.lang.Thread.run(Thread.java:745) [na:1.8.0_112]
Caused by: java.lang.NullPointerException: null
   at 
org.apache.ignite.internal.processors.cache.binary.CacheObjectBinaryProcessorImpl.metadata(CacheObjectBinaryProcessorImpl.java:492)
 ~[ignite-core-2.0.0.jar:2.0.0]
   at 
org.apache.ignite.internal.processors.cache.binary.CacheObjectBinaryProcessorImpl$2.metadata(CacheObjectBinaryProcessorImpl.java:174)
 ~[ignite-core-2.0.0.jar:2.0.0]
   at 
org.apache.ignite.internal.binary.BinaryContext.metadata(BinaryContext.java:1231)
 ~[ignite-core-2.0.0.jar:2.0.0]
   at 
org.apache.ignite.internal.binary.BinaryReaderExImpl.getOrCreateSchema(BinaryReaderExImpl.java:1987)
 ~[ignite-core-2.0.0.jar:2.0.0]
   at 
org.apache.ignite.internal.binary.BinaryReaderExImpl.(BinaryReaderExImpl.java:283)
 ~[ignite-core-2.0.0.jar:2.0.0]
   at 
org.apache.ignite.internal.binary.BinaryReaderExImpl.(BinaryReaderExImpl.java:182)
 ~[ignite-core-2.0.0.jar:2.0.0]
   at 
org.apache.ignite.internal.binary.BinaryReaderExImpl.(BinaryReaderExImpl.java:161)
 ~[ignite-core-2.0.0.jar:2.0.0]
   at 
org.apache.ignite.internal.binary.GridBinaryMarshaller.deserialize(GridBinaryMarshaller.java:304)
 ~[ignite-core-2.0.0.jar:2.0.0]
   at 
org.apache.ignite.internal.binary.BinaryMarshaller.unmarshal0(BinaryMarshaller.java:99)
 ~[ignite-core-2.0.0.jar:2.0.0]
   at 

Re: SQL Syntax error in cross-cache query

2017-07-20 Thread afedotov
Hi,

You need to either adjust the entity/table name to upper case, like select ID
from "personCache".PERSON p limit 5, or
add a config option

Re: SQL Syntax error in cross-cache query

2017-07-20 Thread afedotov
Hi Bob,

For some reason my answer ended up in another conversation, so I am
duplicating it here.
Hi,

You need to either adjust the entity/table name to upper case, like select ID
from "personCache".PERSON p limit 5, or
add a config option