CacheJdbcPersonStore Example
Hi,

In the CacheJdbcPersonStore example [1], why is IgniteDataStreamer never used while bulk-loading elements from the persistent store? What is the purpose of IgniteBiInClosure in the example?

[1] https://github.com/apache/ignite/blob/master/examples/src/main/java/org/apache/ignite/examples/datagrid/store/jdbc/CacheJdbcPersonStore.java#L136

--Kamal
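For readers of the archive, here is a hedged sketch of what a CacheStore.loadCache implementation in the spirit of the linked example looks like. Ignite passes in the IgniteBiInClosure; each clo.apply(key, val) hands a loaded entry back to Ignite, which places it on the correct node, so the store itself never needs an IgniteDataStreamer. The JDBC URL, the SQL statement, and the Person class are illustrative assumptions, not the example's exact code, and surrounding class boilerplate and imports (java.sql.*, javax.cache.integration.CacheLoaderException, org.apache.ignite.lang.IgniteBiInClosure) are omitted.

```java
// Sketch only: a loadCache override inside a CacheStore<Long, Person>
// implementation (assumed Person class; assumed in-memory H2 URL).
@Override
public void loadCache(IgniteBiInClosure<Long, Person> clo, Object... args) {
    try (Connection conn = DriverManager.getConnection("jdbc:h2:mem:example");
         PreparedStatement st = conn.prepareStatement(
             "select id, first_name, last_name from PERSON")) {
        try (ResultSet rs = st.executeQuery()) {
            while (rs.next())
                // The closure is the bulk-load callback: Ignite collects the
                // entries it receives here and distributes them across the
                // cluster, so no explicit streamer is needed in the store.
                clo.apply(rs.getLong(1),
                    new Person(rs.getLong(1), rs.getString(2), rs.getString(3)));
        }
    }
    catch (SQLException e) {
        throw new CacheLoaderException("Failed to load cache.", e);
    }
}
```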
Re: beloved ignite vs. xxx: what is the actual status
On Thu, May 12, 2016 at 8:35 PM, minisoft_rm wrote:
> Hi dsetrakyan,
> Thank you so much. I have told it to my colleagues.
>
> Now I would like to know the difference between GridGain prod and Ignite...
> commerce vs. free, right?

Right, all the info is available on the GridGain site.

> and is the performance different as well? or the same as each other? thanks.

Performance is the same. GridGain does not change Ignite, it only adds to it.

> --
> View this message in context:
> http://apache-ignite-users.70518.x6.nabble.com/beloved-ignite-vs-xxx-what-is-the-actual-status-tp4901p4921.html
> Sent from the Apache Ignite Users mailing list archive at Nabble.com.
Re: Graceful shutdown
Hi,

Yes, ignite.close() cancels all currently running tasks. You can replace this call with the following:

    Ignition.stop(ignite.name(), false)

The second parameter means that the stopping node should not cancel tasks and will wait for their completion.

Hope this helps.

-Val

--
View this message in context: http://apache-ignite-users.70518.x6.nabble.com/Graceful-shutdown-tp4911p4920.html
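A minimal, self-contained sketch of the graceful stop described above (the configuration path is an illustrative assumption):

```java
import org.apache.ignite.Ignite;
import org.apache.ignite.Ignition;

public class GracefulStopExample {
    public static void main(String[] args) {
        // Assumed config path for illustration.
        Ignite ignite = Ignition.start("examples/config/example-ignite.xml");

        // ... submit compute tasks, services, etc. ...

        // cancel == false: currently running tasks are NOT cancelled;
        // the node waits for them to complete before shutting down.
        // (ignite.close() is equivalent to cancel == true.)
        Ignition.stop(ignite.name(), false);
    }
}
```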
Re: What will happen in case of running out of memory?
You can use the loadCache() method for bulk loading. It allows you to provide a set of optional parameters that can be used to specify different conditions, like time ranges.

-Val

--
View this message in context: http://apache-ignite-users.70518.x6.nabble.com/What-will-happend-in-case-of-run-out-of-memory-tp4446p4918.html
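A hedged sketch of how the optional arguments flow: IgniteCache.loadCache(filter, args) forwards its varargs to CacheStore.loadCache on every node, so the store can use them to restrict what it reads (e.g. a date range). The cache name, the timestamp variables, and how the store interprets them are assumptions for illustration; this is a fragment assuming an existing `ignite` instance and Person type.

```java
// Fragment: trigger a conditional bulk load from the persistent store.
IgniteCache<Long, Person> cache = ignite.cache("personCache");

long fromTimestamp = 1420070400000L; // assumed range start (2015-01-01)
long toTimestamp   = 1451606400000L; // assumed range end   (2016-01-01)

// null filter = keep everything the store loads; the two longs arrive as
// the Object... args of CacheStore.loadCache(clo, args), where the store
// can plug them into its SQL, e.g. "... where created between ? and ?".
cache.loadCache(null, fromTimestamp, toTimestamp);
```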
Re: NullPointerException When Use ReadThrough
Can you please provide the stack trace?

On Thu, May 12, 2016 at 2:12 AM, Level D <724172...@qq.com> wrote:
> Hi,
>
> When I use read-through to get a key that exists neither in the Ignite
> cache nor in HBase, the console shows a NullPointerException.
>
> Is it necessary for HBase to have the record? Apart from that, how
> can I avoid that exception?
Re: visorcmd alerts: improvements and fixes
Thanks Vasily! The custom alert logic is a great addition. Do we need to add documentation for it?

On Thu, May 12, 2016 at 2:30 AM, Vasiliy Sisko wrote:
> Hello Igniters.
>
> I implemented execution of a custom script on alert for visorcmd:
> https://issues.apache.org/jira/browse/IGNITE-2919. Now some logic can be
> run for specific grid conditions in the form of an sh/bat file.
>
> Also fixed the reaction to an alert condition switch:
> https://issues.apache.org/jira/browse/IGNITE-2930.
>
> --
> Vasiliy Sisko
> GridGain Systems
> www.gridgain.com
How REPLICATED cache is more performant comparing to PARTITIONED
I have a couple of caches which are initialized per system event and then stay almost untouched for the next 1 or 2 hours, and only reads are used. First of all, is this a good use case for a REPLICATED cache? The data is small, just an int-to-int mapping.

The main question is why a REPLICATED cache behaves better for frequent reads compared to PARTITIONED. As I understood from https://apacheignite.readme.io/docs/cache-modes#replicated-mode, a PARTITIONED cache with backups set to all nodes is used underneath. Is affinity collocation still in place for a REPLICATED cache? If so, it would have to go to the primary server every time anyway, so no different from PARTITIONED. So, what factors give a REPLICATED cache better read performance than a PARTITIONED one?

--
View this message in context: http://apache-ignite-users.70518.x6.nabble.com/How-REPLICATED-cache-is-more-performant-comparing-to-PARTITIONED-tp4915.html
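For reference, a sketch of configuring a REPLICATED cache for this kind of small, read-mostly data. The key point behind the read-performance difference is that every node holds a full copy, so reads are served from the local copy without a network hop to a primary. The cache name is an assumption, and the fragment assumes an existing `ignite` instance.

```java
// Fragment: a REPLICATED cache for a small int-to-int mapping.
CacheConfiguration<Integer, Integer> ccfg =
    new CacheConfiguration<>("intMap"); // assumed cache name

// REPLICATED: every node keeps a full copy, so gets hit local memory
// instead of routing to a remote primary node as in PARTITIONED mode.
ccfg.setCacheMode(CacheMode.REPLICATED);

IgniteCache<Integer, Integer> cache = ignite.getOrCreateCache(ccfg);
```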
Re: beloved ignite vs. xxx: what is the actual status
On Thu, May 12, 2016 at 8:39 AM, minisoft_rm wrote:
> Now may I get some professional performance statistics, for example:
> read/write performance per second?
> I know Redis can handle over 10k operations per second.

As an Apache project we are not allowed to publish any benchmark comparisons; however, GridGain has done some benchmark comparisons and published them on its site:
http://www.gridgain.com/resources/benchmarks/gridgain-benchmarks-results/
http://www.gridgain.com/resources/benchmarks/ignite-vs-hazelcast-benchmarks/

> besides, how does Ignite compare with something like... Infinispan? That
> looks cool too... it also supports all the functions, even a Query DSL.
> Please refer to:
> http://vschart.com/compare/jboss-infinispan/vs/redis-database
>
> and this website looks professional... is there any public comparison for
> Ignite? thanks.

Again, as an Apache project we are not allowed to publish comparisons. GridGain has several comparisons published, which you can look at:
http://www.gridgain.com/resources/feature-comparisons/

> --
> View this message in context:
> http://apache-ignite-users.70518.x6.nabble.com/beloved-ignite-vs-xxx-what-is-the-actual-status-tp4901.html
Re: ODBC Driver?
Hi Arthi,

In the new release you should not run libtoolize, aclocal and the other autotools for every directory. Now you only need to do that once, in the root directory, i.e. $IGNITE_HOME/platforms/cpp. Please refer to DEVNOTES.txt for detailed instructions.

Best Regards,
Igor

On Thu, May 12, 2016 at 5:16 PM, arthi wrote:
> Hi Igor,
>
> We tried to build using the binaries from the nightly build below:
>
> https://builds.apache.org/view/H-L/view/Ignite/job/Ignite-nightly/lastSuccessfulBuild/artifact/target/bin/apache-ignite-fabric-1.6.0-SNAPSHOT-bin.zip
>
> But the C++ is missing a few config files. We get errors on build:
> [root@dayrhegapd022 common]# libtoolize
> [root@dayrhegapd022 common]# aclocal
> aclocal: `configure.ac' or `configure.in' is required
> [root@dayrhegapd022 common]# autoheader
> autoheader: `configure.ac' or `configure.in' is required
>
> Is there another repository we should use?
>
> Thanks,
> Arthi
>
> --
> View this message in context:
> http://apache-ignite-users.70518.x6.nabble.com/ODBC-Driver-tp4557p4896.html
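A hedged outline of the build flow described above, run once from the platform root rather than per sub-directory. The exact command sequence and flags may differ per release; DEVNOTES.txt remains the authoritative source.

```shell
# Sketch, not authoritative: regenerate the autotools files once from the
# C++ platform root, then configure and build.
cd $IGNITE_HOME/platforms/cpp
libtoolize && aclocal && autoheader && automake --add-missing && autoreconf
./configure
make
```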
Re: Cluster Formation between nodes in different data centers
Hello,

Can you check the ping between the data centers? If the network delay can be long, you can increase the failure detection timeout (use org.apache.ignite.configuration.IgniteConfiguration#setFailureDetectionTimeout).

In addition, you need to make sure the communication ports are reachable on all nodes (by default org.apache.ignite.spi.communication.tcp.TcpCommunicationSpi#DFLT_PORT). You can see all ports that will be used in the debug-level log.

If nothing helps, please provide log files from all nodes and I can look into the details.

Also, I see you are not subscribed to the users mailing list. Can you please subscribe? You can do it from this page: https://ignite.apache.org/community/resources.html

visagan wrote:
> Hi,
>
> I have two nodes in data center A and two nodes in data center B. I have
> opened up the Ignite default ports and I am able to talk to all nodes and
> ports if I use the "nc" command.
>
> But when I try to make the Ignite cluster work, the four nodes join and
> form a cluster and then they leave.
>
> I am able to see the total memory increase from 24GB (when two nodes
> are in the cluster) to 36GB and then to 48GB, and then the other two nodes
> leave with a NODE_FAILED event.
> But the nodes in the corresponding data center form a cluster among
> themselves and live there happily.
> When I turn on the cross-data-center setup, though it is able to discover
> the nodes, it fails after joining with the other nodes.
> Is there any possibly known reason for this?

--
View this message in context: http://apache-ignite-users.70518.x6.nabble.com/Cluster-Formation-between-nodes-in-different-data-centers-tp4838p4903.html
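A minimal sketch of raising the failure detection timeout for a high-latency cross-data-center link. The 30-second value is an illustrative assumption; tune it to the observed inter-DC round-trip times.

```java
import org.apache.ignite.Ignite;
import org.apache.ignite.Ignition;
import org.apache.ignite.configuration.IgniteConfiguration;

public class CrossDcNode {
    public static void main(String[] args) {
        IgniteConfiguration cfg = new IgniteConfiguration();

        // Raise the timeout so slow inter-DC links are not mistaken for
        // node failures (value in milliseconds; 30s is an assumption).
        cfg.setFailureDetectionTimeout(30_000);

        Ignite ignite = Ignition.start(cfg);
    }
}
```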
Re: Unexpected exception
MRE - minimal reproducible example. This may be a simple project with a couple of files.

On Thu, May 12, 2016 at 10:31 PM, kajzur wrote:
> What is an MRE? The whole code of the project? A repository on GitHub?
>
> --
> View this message in context:
> http://apache-ignite-users.70518.x6.nabble.com/Unexpexted-exception-tp4847p4900.html

--
Alexey Kuznetsov
GridGain Systems
www.gridgain.com
beloved ignite vs. xxx: what is the actual status
Dear Experts,

I have been testing Ignite's functions for some weeks. Although there are a number of bugs to be fixed, I can successfully run it over a DB, hiding some tables behind the cache via SQL interaction with Ignite. This is really cool.

Now may I get some professional performance statistics, for example: read/write performance per second? I know Redis can handle over 10k operations per second.

Besides, how does Ignite compare with something like... Infinispan? That looks cool too... it also supports all the functions, even a Query DSL. Please refer to: http://vschart.com/compare/jboss-infinispan/vs/redis-database

And this website looks professional... is there any public comparison for Ignite? Thanks.

--
View this message in context: http://apache-ignite-users.70518.x6.nabble.com/beloved-ignite-vs-xxx-what-is-the-actual-status-tp4901.html
Get client node instance
Hi,

I have an Ignite grid started with 2 servers and 1 client. I want my C++ application to be able to retrieve the client node instance from the grid, without starting a new server/client. Is there an API I can use?

Ignition.ignite() fails:

Exception in thread "main" class org.apache.ignite.IgniteIllegalStateException: Ignite instance with provided name doesn't exist. Did you call Ignition.start(..) to start an Ignite instance? [name=cip]
        at org.apache.ignite.internal.IgnitionEx.grid(IgnitionEx.java:1235)
        at org.apache.ignite.Ignition.ignite(Ignition.java:516)
        at com.nielsen.poc.aggregation.ignite.datagrid.CacheMetrics.main(CacheMetrics.java:34)

Please advise.

Thanks,
Arthi

--
View this message in context: http://apache-ignite-users.70518.x6.nabble.com/Get-client-node-instance-tp4897.html
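One common approach to this situation (an assumption about the setup, not the only option): Ignition.ignite(name) only finds instances started in the same JVM, so a separate process cannot look up a remote node by name. Instead, the process can join the existing grid as a lightweight client node. The grid name "cip" is taken from the exception above.

```java
import org.apache.ignite.Ignite;
import org.apache.ignite.Ignition;
import org.apache.ignite.configuration.IgniteConfiguration;

public class ClientJoin {
    public static void main(String[] args) {
        IgniteConfiguration cfg = new IgniteConfiguration();

        cfg.setGridName("cip");  // must match the running grid's name
        cfg.setClientMode(true); // joins topology as a client: stores no data

        Ignite ignite = Ignition.start(cfg);

        // From here on, Ignition.ignite("cip") also works within this JVM.
    }
}
```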
Re: Tcp discovery with clustered docker nodes
To my understanding, no; the Java process running inside the Docker container has no possibility of binding to the externally visible ip/port.

Kristian

On 12 May 2016 at 3:22 p.m., "Alexei Scherbakov" <alexey.scherbak...@gmail.com> wrote:
> Hi,
>
> TcpDiscoverySpi has two properties (localAddress and localPort), which
> allow you to set the local address and port for the Discovery SPI to
> bind to.
> Does it solve your problem?
>
> 2016-05-12 10:56 GMT+03:00 Kristian Rosenvold:
>
>> We have been using JDBC-based discovery successfully, but have been
>> trying to get this to work with Docker. The problem is of course that all
>> the internal IPs inside the Docker container are useless, so we'd really
>> need some way to override the discovery mechanism. Now you could of course
>> say that I should be using the static mechanism, but configuration-wise it
>> makes things really messy.
>>
>> It would be much cleaner to simply make each node publish its "correct"
>> external address to the discovery API, so that the JDBC discovery
>> mechanism would contain only the external address (ignoring loopbacks
>> here, unsure if there is a use case).
>>
>> By reading the source it would appear that this is not really supported,
>> or am I missing something? (It would appear that I need some kind of
>> override in the TcpDiscoveryMulticastIpFinder#initializeLocalAddresses /
>> U.resolveLocalAddresses area.)
>>
>> Kristian
>
> --
> Best regards,
> Alexei Scherbakov
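For reference, a sketch of the two TcpDiscoverySpi properties mentioned in the reply. Note Kristian's caveat: inside a Docker container the JVM usually cannot bind to the externally visible address, so these properties alone may not solve NAT'd setups. The address and port values are illustrative assumptions.

```java
import org.apache.ignite.Ignite;
import org.apache.ignite.Ignition;
import org.apache.ignite.configuration.IgniteConfiguration;
import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi;

public class DiscoveryBindExample {
    public static void main(String[] args) {
        TcpDiscoverySpi discoSpi = new TcpDiscoverySpi();

        discoSpi.setLocalAddress("10.0.0.5"); // address Discovery SPI binds to (assumed)
        discoSpi.setLocalPort(47500);         // base discovery port (Ignite's default)

        IgniteConfiguration cfg = new IgniteConfiguration();
        cfg.setDiscoverySpi(discoSpi);

        Ignite ignite = Ignition.start(cfg);
    }
}
```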
Re: Can't stop Ignite instance using Ignition.stopAll() or Ignition.kill() until writeBehindFlushFrequency passed
Hello,

I cannot reproduce the issue. In my test case all data entries are saved immediately after calling Ignition.stopAll(true). If you have an example which demonstrates the behavior, can you please provide the source code?

--
View this message in context: http://apache-ignite-users.70518.x6.nabble.com/Can-t-stop-Ignite-instance-using-Ignition-stopAll-or-Ignition-kill-until-writeBehindFlushFrequency-pd-tp4837p4889.html
Re: What will happen in case of running out of memory?
Yes, I remember the case of SQL queries. In practice, before executing a query the Apache Ignite client should use the .get method to load the actual data (queried by SQL).

Is it possible to load into the cache only data that fits some conditions? For example, data from the last year (the table contains a date column).

--
View this message in context: http://apache-ignite-users.70518.x6.nabble.com/What-will-happend-in-case-of-run-out-of-memory-tp4446p4888.html
NullPointerException When Use ReadThrough
Hi,

When I use read-through to get a key that exists neither in the Ignite cache nor in HBase, the console shows a NullPointerException.

Is it necessary for HBase to have the record? Apart from that, how can I avoid that exception?
How to cache Spark Dataframe in Apache ignite
Hello,

I am writing code to cache RDBMS data using a Spark SQLContext JDBC connection. Once a DataFrame is created I want to cache that resultset using Apache Ignite, thereby making it available to other applications. Here is the code snippet:

object test {
  def main(args: Array[String]) {
    val configuration = new Configuration()
    val config = "src/main/scala/config.xml"
    val sparkConf = new SparkConf().setAppName("test").setMaster("local[*]")
    val sc = new SparkContext(sparkConf)
    val sqlContext = new org.apache.spark.sql.SQLContext(sc)
    val sql_dump1 = sqlContext.read.format("jdbc")
      .option("url", "jdbc URL")
      .option("driver", "com.mysql.jdbc.Driver")
      .option("dbtable", mysql_table_statement)
      .option("user", "username")
      .option("password", "pass")
      .load()
    val ic = new IgniteContext[Integer, Integer](sc, config)
    val sharedrdd = ic.fromCache("hbase_metadata")
    // How to cache the sql_dump1 dataframe?
  }
}

Now the question is how to cache a DataFrame. IgniteRDD has a savePairs method, but it accepts key and value as RDD[Integer], and I have a DataFrame; even if I convert it to an RDD I would only be getting RDD[Row]. And beyond the Integer-specific savePairs, what if I have an RDD of String as the value? Is it good to cache a DataFrame, or is there a better approach to cache the resultset?

Thanks and Regards,
Vignesh
Tcp discovery with clustered docker nodes
We have been using JDBC-based discovery successfully, but have been trying to get this to work with Docker. The problem is of course that all the internal IPs inside the Docker container are useless, so we'd really need some way to override the discovery mechanism. Now you could of course say that I should be using the static mechanism, but configuration-wise it makes things really messy.

It would be much cleaner to simply make each node publish its "correct" external address to the discovery API, so that the JDBC discovery mechanism would contain only the external address (ignoring loopbacks here, unsure if there is a use case).

By reading the source it would appear that this is not really supported, or am I missing something? (It would appear that I need some kind of override in the TcpDiscoveryMulticastIpFinder#initializeLocalAddresses / U.resolveLocalAddresses area.)

Kristian
Re: Unexpected exception
Hi,

Do you have an MRE on GitHub or elsewhere so I can reproduce it?

-Roman

On Thursday, May 12, 2016 3:23 AM, kajzur wrote:

Hi, thanks, now it's working, but I have another problem. Everything works but my session isn't saved. I have an HttpSession object; in one servlet I set an attribute, and in another servlet I try to get this attribute and it's not there. I checked in the debugger - my session is a WebSession from Ignite. Do you maybe have any idea why it's not working?

I have four servers:
[18:13:58,879][INFO][disco-event-worker-#44%null%][GridDiscoveryManager] Topology snapshot [ver=148, servers=4, clients=0, CPUs=6, heap=4.7GB]

--
View this message in context: http://apache-ignite-users.70518.x6.nabble.com/Unexpexted-exception-tp4847p4879.html