Ignite server side of perNodeParallelOperations?
Hello, in IgniteDataStreamer there is a configuration option, perNodeParallelOperations (int), which is set on the client side. Is there a similar configuration on the server side? Otherwise, if clients are free to set any value of perNodeParallelOperations they want, how does the server protect itself from crashing? Thanks. Ed
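For context, a minimal client-side sketch of where this knob lives in Ignite's IgniteDataStreamer API (the cache name, value type, and numbers below are placeholders, not recommendations):

```java
import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteDataStreamer;
import org.apache.ignite.Ignition;

try (Ignite ignite = Ignition.start();
     IgniteDataStreamer<Long, String> streamer = ignite.dataStreamer("myCache")) {
    // Client-side setting: max number of parallel streamer operations
    // this client keeps in flight per server node.
    streamer.perNodeParallelOperations(8);

    // Related client-side knob: entries buffered per node before a flush.
    streamer.perNodeBufferSize(1024);

    for (long i = 0; i < 1_000; i++)
        streamer.addData(i, "value-" + i);
}
```

Both settings are per streamer instance on the client; whether the server side imposes its own cap is exactly the question above.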
ignite cache as database table
Hello, I am trying to load 17M records, about 3.4 GB of data. If the POJO class fields are defined with the @QuerySqlField annotation, Ignite uses roughly 9 GB of memory. If we create the Ignite cache as a database table with QueryEntity / QueryField, it uses 20 GB in total. The difference is that the second approach creates an H2 database table. Does H2 then keep another copy, so that it uses double the memory? Another question: is there an easy way to define a cache as a table? Right now we have to use QueryEntity / QueryField; is there a @Table(name = "My_table_name") annotation? Thanks. Ed
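For reference, a sketch of the QueryEntity approach, including QueryEntity.setTableName, which is the closest equivalent I know of to a @Table(name = "...") annotation (Ignite has no such annotation as far as I know; the field names below are placeholders):

```java
import java.util.Collections;
import java.util.LinkedHashMap;
import org.apache.ignite.cache.QueryEntity;
import org.apache.ignite.configuration.CacheConfiguration;

CacheConfiguration<Long, Person> ccfg = new CacheConfiguration<>("personCache");

QueryEntity entity = new QueryEntity(Long.class.getName(), Person.class.getName());
entity.setTableName("MY_TABLE_NAME"); // custom SQL table name for this cache

// Columns exposed to SQL; order is preserved.
LinkedHashMap<String, String> fields = new LinkedHashMap<>();
fields.put("name", String.class.getName());
fields.put("age", Integer.class.getName());
entity.setFields(fields);

ccfg.setQueryEntities(Collections.singletonList(entity));
```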
Re: how to achieve this topology ?
Here is some background. Enterprise-wide, each application (like JVM1) will create its own cache, and this data is useful to another application, so it should be pushed to a server node like JVM3. But each client node, like JVM1, should be free to hold other caches based on some rules, though definitely not every cache in JVM3. In this case it looks like a node filter will work. So I think JVM1 and JVM2 should be client nodes with near caches, and should also have a node filter. Is my understanding correct? Thanks Stephen Darlington for the near cache idea.

On 3/9/2020 7:04 PM, Evgenii Zhuravlev wrote:
Hi, you can use a NodeFilter for caches. Please see this JavaDoc for information: https://www.javadoc.io/doc/org.apache.ignite/ignite-core/latest/org/apache/ignite/util/AttributeNodeFilter.html An example can be found here: https://github.com/ezhuravl/ignite-code-examples/blob/master/src/main/java/examples/nodefilter/cache/CacheNodeFilterExample.java
Evgenii

Fri, Mar 6, 2020 at 18:34, Edward Chen <mailto:java...@gmail.com>:
Hello, I want to achieve this topology; do you know how to configure it? The critical parts are: cache2 in JVM2 should not be replicated or copied to JVM1; cache1 in JVM1 should not be replicated or copied to JVM2; and JVM3 and JVM4 are fail-over backups for each other. Thanks. Ed
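A minimal sketch of the AttributeNodeFilter approach from the linked example, assuming a made-up attribute name "cache.group" (any attribute name/value pair works):

```java
import java.util.Collections;
import org.apache.ignite.configuration.CacheConfiguration;
import org.apache.ignite.configuration.IgniteConfiguration;
import org.apache.ignite.util.AttributeNodeFilter;

// On the server node that should host the cache (e.g. JVM3),
// tag the node with a custom user attribute:
IgniteConfiguration cfg = new IgniteConfiguration();
cfg.setUserAttributes(Collections.singletonMap("cache.group", "app1"));

// On the cache, keep data only on nodes carrying that attribute,
// so cache1 never lands on nodes tagged for other applications:
CacheConfiguration<Long, Object> ccfg = new CacheConfiguration<>("cache1");
ccfg.setNodeFilter(new AttributeNodeFilter("cache.group", "app1"));
```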
how to achieve this topology ?
Hello, I want to achieve this topology; do you know how to configure it? The critical parts are: cache2 in JVM2 should not be replicated or copied to JVM1; cache1 in JVM1 should not be replicated or copied to JVM2; and JVM3 and JVM4 are fail-over backups for each other. Thanks. Ed
Load cache data into another POJO with SQL
Hello, I am using Ignite SQL and wondering whether it is possible to load cache data into another POJO, just like an ORM, with SQL like this: select new MyPojo(p.name, p.age) from myCacheTable as p where p.age > 30. Thanks. Ed
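As far as I know, Ignite SQL does not support JPQL-style constructor expressions like select new MyPojo(...); the usual workaround is a fields query plus manual mapping. A runnable sketch of the mapping step (MyPojo and the row layout follow the question; the helper class and method names are made up):

```java
import java.util.ArrayList;
import java.util.List;

public class RowMapper {
    // Hypothetical target POJO from the question.
    public static class MyPojo {
        public final String name;
        public final int age;

        public MyPojo(String name, int age) {
            this.name = name;
            this.age = age;
        }
    }

    // Maps rows in the shape returned by a fields query
    // (each row is a List of column values) into POJOs by hand.
    public static List<MyPojo> map(List<List<?>> rows) {
        List<MyPojo> result = new ArrayList<>();
        for (List<?> row : rows)
            result.add(new MyPojo((String) row.get(0), ((Number) row.get(1)).intValue()));
        return result;
    }
}
```

With Ignite, the rows would come from something like cache.query(new SqlFieldsQuery("select p.name, p.age from Person p where p.age > ?").setArgs(30)).getAll().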
Where to download the ODBC driver?
Hello, according to the Ignite docs, Ignite ships with pre-built ODBC installers for Windows. I cannot find any ODBC MSI file in apache-ignite-2.7.6-bin.zip. Do you know how to get the ODBC driver? https://apacheignite-sql.readme.io/docs/odbc-driver#building-odbc-driver Thanks. Ed
Re: sql insert, but key is null
Yes, all of them are defined in Person.

On 2/11/2020 6:29 PM, Evgenii Zhuravlev wrote:
Did you add it to all fields in both key and value?
Evgenii

Tue, Feb 11, 2020 at 15:18, Edward Chen <mailto:java...@gmail.com>:
I just added @QuerySqlField to the Java fields. Does Ignite have an annotation for the primary key?

Tue, Feb 11, 2020 at 13:59, Edward Chen <mailto:java...@gmail.com>:
Hello, I am using Ignite 2.7.6 and testing its SQL insert function. I have this code:

    PersonKey {
        id: Long;
        type: String;
        // constructor, getters, setters, hashCode, toString ...
    }

    Person {
        id: Long;
        type: String;
        name: String;
        zip: String;
        public PersonKey getKey() { return new PersonKey(...); }
        // constructor, getters, setters, hashCode, toString ...
    }

Insert SQL:

    insert into Person(id, type, name, zip) values (100, 'S', 'John', '11223')

When getting the data back from the cache:

    Iterator<...> iter = cache.iterator();
    while (iter.hasNext()) {
        Cache.Entry entry = iter.next();
        entry.getKey();  // --> 0, null
    }

The last output is not correct; it should be "100, S". Any input please? Thanks
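For what it's worth, a sketch of one layout that makes a SQL INSERT populate the key object: the key fields (id, type) are annotated on the key class and not duplicated in the value class (moving them out of Person is my assumption, not something stated in the thread):

```java
import org.apache.ignite.cache.query.annotations.QuerySqlField;
import org.apache.ignite.configuration.CacheConfiguration;

class PersonKey {
    @QuerySqlField // key fields need the annotation too, or SQL INSERT cannot fill them
    private long id;

    @QuerySqlField
    private String type;
    // constructor, equals, hashCode ...
}

class Person {
    @QuerySqlField
    private String name;

    @QuerySqlField
    private String zip;
    // constructor, getters, setters ...
}

CacheConfiguration<PersonKey, Person> ccfg = new CacheConfiguration<>("personCache");
// Registers PersonKey + Person as one SQL table; key and value
// columns merge, so a field name must appear in only one class.
ccfg.setIndexedTypes(PersonKey.class, Person.class);
```

With this layout, insert into Person(id, type, name, zip) values (...) should map id and type into the PersonKey instance and name and zip into Person.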
sql insert, but key is null
Hello, I am using Ignite 2.7.6 and testing its SQL insert function. I have this code:

    PersonKey {
        id: Long;
        type: String;
        // constructor, getters, setters, hashCode, toString ...
    }

    Person {
        id: Long;
        type: String;
        name: String;
        zip: String;
        public PersonKey getKey() { return new PersonKey(...); }
        // constructor, getters, setters, hashCode, toString ...
    }

Insert SQL:

    insert into Person(id, type, name, zip) values (100, 'S', 'John', '11223')

When getting the data back from the cache:

    Iterator<...> iter = cache.iterator();
    while (iter.hasNext()) {
        Cache.Entry entry = iter.next();
        entry.getKey();  // --> 0, null
    }

The last output is not correct; it should be "100, S". Any input please? Thanks
Not working in spark-submit YARN mode
Hi, I am trying to load HDFS data into a cache from Spark. It works in local mode but fails in spark-submit YARN mode: it tries to find the Ignite home path on the cluster. Yes, it is true that Ignite is not installed on the cluster, but why is that needed? The Ignite instance is created inside my Java code and all of the Ignite jars are packaged, so why does it need to find Ignite information outside of my jar? My Ignite version is 2.3. Does anyone have a working example of spark-submit + YARN mode? Any web link is welcome. Thanks. Edward