Re: SQL to limit number of records per agId

2018-07-26 Thread Stephen Darlington
How about:

0: jdbc:ignite:thin://127.0.0.1/> select * from cache1;
'ID','AGID','VAL'
'1','100','10-15'
'2','100','17-20'
'3','100','30-50'
'4','101','10-15'
'5','101','17-20'
5 rows selected (0.003 seconds)
0: jdbc:ignite:thin://127.0.0.1/> select * from cache1 where id in (select min(id) from
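
A sketch of the same pattern via the thin JDBC driver. The original message is truncated mid-subquery, so the GROUP BY completion below is an assumption based on the thread's subject ("limit number of records per agId"); table and column names come from the session above.

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;

public class OnePerAgId {
    public static void main(String[] args) throws SQLException {
        // Assumes an Ignite node listening on the default thin-client port (10800).
        try (Connection conn = DriverManager.getConnection("jdbc:ignite:thin://127.0.0.1/");
             Statement stmt = conn.createStatement();
             // One row per agId: the row with the smallest id in each group.
             ResultSet rs = stmt.executeQuery(
                 "SELECT * FROM cache1 WHERE id IN (SELECT MIN(id) FROM cache1 GROUP BY agId)")) {
            while (rs.next())
                System.out.println(rs.getString("ID") + "," + rs.getString("AGID") + "," + rs.getString("VAL"));
        }
    }
}
```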

Re: Spark 2.3 Structured Streaming With Ignite

2018-10-08 Thread Stephen Darlington
There’s a ticket and a patch but it doesn’t work “out of the box” yet. https://issues.apache.org/jira/browse/IGNITE-9357 Regards, Stephen > On 5 Oct 2018, at 19:53, ApacheUser wrote: > > Hi, > > Is it possible to use Spark Structured

Re: Is there a provision to apply predicate on values in cache?

2018-10-23 Thread Stephen Darlington
If you adjust your data model to be a more traditional 1:M relationship it would Just Work. But you still don’t have to do it on the client side. If you use ignite.compute().broadcast(…) you can send the query out to your cluster, have the scans happen on the server side and only send the
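
A minimal sketch of that broadcast-and-scan pattern; the cache name, key/value types, and the filter predicate are illustrative assumptions:

```java
import java.util.ArrayList;
import java.util.Collection;
import java.util.List;
import javax.cache.Cache;
import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.Ignition;
import org.apache.ignite.cache.query.QueryCursor;
import org.apache.ignite.cache.query.ScanQuery;

public class BroadcastScan {
    public static void main(String[] args) {
        Ignite ignite = Ignition.start();
        // Run the scan on every node, against only that node's local data,
        // so filtering happens server-side and just the matches come back.
        Collection<List<String>> perNode = ignite.compute().broadcast(() -> {
            IgniteCache<Integer, String> cache = Ignition.localIgnite().cache("myCache");
            List<String> matches = new ArrayList<>();
            ScanQuery<Integer, String> scan = new ScanQuery<>((k, v) -> v.startsWith("A"));
            scan.setLocal(true); // scan only the partitions held by this node
            try (QueryCursor<Cache.Entry<Integer, String>> cur = cache.query(scan)) {
                for (Cache.Entry<Integer, String> e : cur)
                    matches.add(e.getValue());
            }
            return matches;
        });
        System.out.println(perNode);
    }
}
```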

Re: Ignite Cache Memory Size Information

2018-10-30 Thread Stephen Darlington
The capacity planning page on the documentation site has a good starting point: https://apacheignite.readme.io/docs/capacity-planning Regards, Stephen > On 30 Oct 2018, at 06:35, Hemasundara Rao > wrote: > > Hi , > I am looking for

Re: Distributed Priority QUEUE

2018-11-02 Thread Stephen Darlington
Ignite comes with a queue implementation. See: https://apacheignite.readme.io/docs/queue-and-set Maybe you could use that as a starting point? (Patches to make it a priority queue welcomed of course!) Regards, Stephen > On 2 Nov 2018, at

Re: IgniteCache.size() for different cache show the same number

2018-09-27 Thread Stephen Darlington
They’re different _types_ but you’ve given them both the same _name_. Try something like:

public IgniteCache comment() {
    return igniteSpringBean.getOrCreateCache("COMMENTS_CACHE");
}
public IgniteCache reply() {
    return
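
A hedged completion of that fragment: one cache name per cache. The value types and the second cache's name are assumptions, since the original message is truncated.

```java
import org.apache.ignite.IgniteCache;
import org.apache.ignite.IgniteSpringBean;

public class Caches {
    private IgniteSpringBean igniteSpringBean; // injected elsewhere (assumption)

    public IgniteCache<Long, Object> comment() {
        // Each cache needs its own, unique name
        return igniteSpringBean.getOrCreateCache("COMMENTS_CACHE");
    }

    public IgniteCache<Long, Object> reply() {
        // A different name, so size() is reported per cache
        return igniteSpringBean.getOrCreateCache("REPLIES_CACHE");
    }
}
```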

Re: Ignite ML withKeepBinary cache

2019-01-02 Thread Stephen Darlington
That’s a great investigation! I think the developer mailing list (http://apache-ignite-developers.2346864.n4.nabble.com) would be a better place to discuss the best way to fix it, though. Regards, Stephen > On 2 Jan 2019, at 07:20, otorreno wrote: > > Hi everyone, > > After the new release

Re: Apache Ignite: not able to cache Dataframe using Python thin client

2019-01-21 Thread Stephen Darlington
Can you share some code and the actual errors you’re getting? It’s not entirely clear to me what you’re trying to do? Are you using the new Python thin-client? Or are you using Spark's Python support along with Ignite’s support for DataFrames? And what do you mean by “complex object type”?

Re: Apache Ignite: not able to cache Dataframe using Python thin client

2019-01-22 Thread Stephen Darlington
I’m not sure I fully understand what you’re trying to do. It looks like you’re trying to put an entire DataFrame (a collection of records) into a single value in Ignite? Even if there’s only a single record, you probably want to put the row into Ignite rather than the whole DF. But I think

Re: Apache Ignite: not able to cache Dataframe using Python thin client

2019-01-22 Thread Stephen Darlington
Write to Ignite using the Ignite-Spark integration:

input = spark.read.parquet(HDFS_ACCOUNT)
input.write.format("ignite")
  .option("table","sfdc_account_parquet")
  .option("primaryKeyFields","key1,key2")
  .option("config",configFile)
  .save()

At that

Re: PySpark: Failed to find data source: ignite

2019-01-23 Thread Stephen Darlington
You don’t say what your full CLASSPATH is but you’re clearly missing something. Here’s how I did it: https://medium.com/@sdarlington/the-trick-to-successfully-integrating-apache-ignite-and-pyspark-890e436d09ba Regards, Stephen > On 23 Jan 2019, at 05:49, Balakumar > wrote: > > Hi, > > I'm

Re: Ignite in container enviroments?

2018-12-20 Thread Stephen Darlington
I guess we should update the documentation. Does this suggest that an IpFinder for Docker Swarm (much like the one for Kubernetes) would be useful? Regards, Stephen > On 20 Dec 2018, at 14:13, Ilya Kasnacheev wrote: > > Hello! > > #1 you should use

Re: Python for Ignite for Spark?

2018-12-12 Thread Stephen Darlington
You can use PySpark exactly as you normally do. So something like this works:

stuff = spark.read \
  .format("ignite") \
  .option("config", "ignite-client.xml") \
  .option("table", "Stuff") \
  .option("primaryKeyFields", "ID") \
  .load()

You might need to check the Java

Re: Python for Ignite for Spark?

2018-12-17 Thread Stephen Darlington
I’m not sure there are any — and you’re right, there probably should be. Having said that, integration is very straightforward. You run pyspark (or spark-submit), passing in the Ignite jar files (using the --jars parameter). For example: $SPARK_HOME/bin/spark-submit --jars

Re: cluster activation on k8s when native persistence is enabled

2018-12-05 Thread Stephen Darlington
If you expose the TCP connector port (default 11211), you should be able to connect to your k8s cluster using the --host parameter, e.g., ./control.sh --host service_ip --activate I didn’t test this, but it should work. You don’t need to reactivate your cluster again. You do need to add and

Re: cluster activation on k8s when native persistence is enabled

2018-12-05 Thread Stephen Darlington
hat and it does work. Regards, Stephen > On 5 Dec 2018, at 14:51, Stephen Darlington > wrote: > > If you expose the TCP connector port (default 11211), you should be able to > connect to your k8s cluster using the --host parameter, e.g., > > ./control.sh --host service_ip --a

Re: Geo redundancy support in Ignite?

2018-11-29 Thread Stephen Darlington
It’s absolutely possible. The difficulty comes not in making it work but in figuring out the failure scenarios and what sensible thing to do in each case. The obvious example: if your inter-site network goes down what happens? You now have two clusters, each that thinks it’s *the* cluster. I

Re: ignite-kubernetes seems to be missing the jackson-annotations dependency

2018-12-06 Thread Stephen Darlington
I literally just found this myself (with the same workaround). I’ve raised a ticket (https://issues.apache.org/jira/browse/IGNITE-10577 ) and will discuss more on the dev mailing list. Regards, Stephen > On 6 Dec 2018, at 13:35, kellan

Re: JDBC Streaming

2018-11-26 Thread Stephen Darlington
The streaming examples should work fine for you. “Grid caches” and “SQL tables” are not two, different things. They are just two ways of accessing the same underlying structures. You can happily insert data using a Data Streamer and access it later using SQL. Regards, Stephen > On 26 Nov

Re: Support for SELECT ... INTO ?

2018-11-27 Thread Stephen Darlington
If you’re looking for new features, raising it on the development email list might be better. What particular aspect of “SELECT… INTO” are you looking for? You can already do: INSERT INTO x SELECT * FROM y; Are you looking for the bulk loading facilities that some legacy databases provide this

Re: Ignite in Kubernetes not works correctly

2019-01-10 Thread Stephen Darlington
What kind of environment are you using? A public cloud? Your own data centre? And how are you killing the pod? I fired up a cluster using Minikube and your configuration and it worked as far as I could see. (I deleted the pod using the dashboard, for what that’s worth.) Regards, Stephen > On

Re: Cluster of two nodes with minimal port use

2019-01-08 Thread Stephen Darlington
Try putting the same list on both nodes: 172.24.10.79:3013 172.24.10.83:3013 Regards, Stephen > On 8 Jan 2019, at 14:13, Tobias König wrote: > > Hi there, > > I'm trying to get an Ignite cluster consisting of two nodes to work, that > uses a minimum number of exposed ports. I'm new to
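
As a sketch, that shared list would sit in each node's discovery configuration. The port numbers come from the thread; the surrounding bean structure is standard Ignite Spring XML, reproduced here as an assumption about the poster's setup:

```xml
<bean class="org.apache.ignite.configuration.IgniteConfiguration">
  <property name="discoverySpi">
    <bean class="org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi">
      <!-- Pin discovery to one port so only it needs to be exposed -->
      <property name="localPort" value="3013"/>
      <property name="localPortRange" value="0"/>
      <property name="ipFinder">
        <bean class="org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder">
          <property name="addresses">
            <list>
              <value>172.24.10.79:3013</value>
              <value>172.24.10.83:3013</value>
            </list>
          </property>
        </bean>
      </property>
    </bean>
  </property>
</bean>
```

The same file can be deployed unchanged to both machines, which is the point of the advice above.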

Re: Ignite SparkSQL need to pre-load data?

2019-01-04 Thread Stephen Darlington
You’re right, data needs to be loaded into Ignite before you can use its more efficient SQL engine from Spark. You can certainly load the data in using Spark as you describe. That’s probably the easiest, least code, way of doing it. If there’s a lot of it, it may be more efficient to load the

Re: Loading data from Spark Cluster to Ignite Cache to perform Ignite ML

2019-01-02 Thread Stephen Darlington
Where does the data in your Spark DataFrame come from? As I understand it, that would all be in Spark’s memory anyway? Anyway, I didn’t test this exact scenario, but it seems that writing directly to an Ignite DataFrame should work — why did you think it wouldn’t? I can’t say whether it

Re: Pain points of Ignite user community

2019-01-11 Thread Stephen Darlington
Nice work! > On 10 Jan 2019, at 15:26, Stanislav Lukyanov wrote: > > Hi Rohan, > > Sorry, the publishing took some time. > In case you’re still interested, here’s the article: > https://www.gridgain.com/resources/blog/checklist-assembling-your-first-apacher-ignitetm-cluster > >

Re: Ignite in Kubernetes not works correctly

2019-01-14 Thread Stephen Darlington
ind, result is the same. > > Maybe you need some more logs from us? > > On Thu, Jan 10, 2019 at 7:28 PM Stephen Darlington > mailto:stephen.darling...@gridgain.com>> > wrote: > What kind of environment are you using? A public cloud? Your own data centre? > And how ar

Re: Licencing cost

2019-03-21 Thread Stephen Darlington
You should direct this question to GridGain rather than the Ignite open source community. Regards, Stephen > On 20 Mar 2019, at 16:11, austin solomon wrote: > > Hi, > > Does the gridgain licensing cost vary depending on the number of physical > cores of each node? > > Can anyone tell me. >

Re: On Multiple Endpoints Mode of JDBC Driver

2019-02-27 Thread Stephen Darlington
If you’re already using Ignite-specific APIs (IgniteCallable), why not use the other Ignite-native APIs for reading/writing/processing data? That way you can use affinity functions for load balancing where it makes sense and Ignite’s normal load balancing processing for general compute tasks.

Re: How to avoid start multiple instances in single machine

2019-03-01 Thread Stephen Darlington
If you set the localPortRange to zero (a property of the TcpDiscoverySpi), Ignite will only start on the port number you specify. That way, if you bring up another node it will fail to start. Though automating how your environment is configured so this could never happen would probably be a
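
A sketch of that setting (the default discovery port is shown; adjust to your own):

```xml
<bean class="org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi">
  <!-- With a range of zero, a node binds to exactly this port or fails to start,
       so a second instance on the same machine cannot come up. -->
  <property name="localPort" value="47500"/>
  <property name="localPortRange" value="0"/>
</bean>
```

This fragment goes inside the `discoverySpi` property of the node's IgniteConfiguration.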

Re: Populating tables via IgniteDataStreamer

2019-02-19 Thread Stephen Darlington
Ignite comes with a nice sample for using the Data Streamer API: https://github.com/apache/ignite/blob/master/examples/src/main/java/org/apache/ignite/examples/streaming/StreamTransformerExample.java

Re: External multiple config files for Docker installation

2019-02-20 Thread Stephen Darlington
A lower-lift method might be to put your config files in the local file system and create a volume to make it accessible from your container. Something like: docker run --rm -it -v /home/me/config:/opt/ignite/config -e CONFIG_URI=file:///opt/ignite/config/local.xml apacheignite/ignite:2.7.0

Re: peerClassLoadingEnabled=true but java.lang.ClassNotFoundException

2019-03-07 Thread Stephen Darlington
From the documentation (https://apacheignite.readme.io/docs/service-grid): > Note, that by default it's required to have a Service class in the classpath > of all the cluster nodes. Peer class loading (P2P class loading) is not > supported for Service Grid So, generally, you have to deploy a

Re: [External]Re: Unable to read all cache data using ScanQuery API

2019-03-18 Thread Stephen Darlington
It looks like one of the nodes is unreachable or corrupt in some way. Look for disconnection or communication errors. As a general point, including text rather than screen-shots is preferable; the screenshots are barely legible. Regards, Stephen > On 18 Mar 2019, at 14:32, > wrote: > >

Re: Finding collocated data in ignite nodes

2019-03-18 Thread Stephen Darlington
> CREATE TABLE Country ( > country_id INT(10), > country_name CHAR(30), > Continent CHAR(30), > PRIMARY KEY (country_id) > ) WITH "template=partitioned, backups=1"; > > > CREATE TABLE City ( > city_id INT(15), > country_name CHAR(30), > city_name CHAR(50), > Dist CHAR(20), > PRIMARY KEY

Re: Finding collocated data in ignite nodes

2019-03-19 Thread Stephen Darlington
But you changed the schema from the documentation. In the guide, CountryCode is the primary key for the country table/cache. In your schema, you have country_id in the Country table and country_code in City. Two possible solutions: 1. Change the primary key of Country from country_id to

Re: exporting ignite table into CSV file

2019-03-19 Thread Stephen Darlington
Yes. You could use sqlline.sh:

0: jdbc:ignite:thin://127.0.0.1/> !set outputFormat csv
0: jdbc:ignite:thin://127.0.0.1/> !record a.csv
Saving all output to "/tmp/a.csv". Enter "record" with no arguments to stop it.
0: jdbc:ignite:thin://127.0.0.1/> select * from ignite;
select * from ignite;

Re: Ignite Java server with .net client limitations

2019-03-11 Thread Stephen Darlington
Invoke and the continuous query filters both mean sending code from the client to the server. The Java server does not, of course, understand the CLR. If you want to run .net code on your server, you need to run the ignite.net server. You can do that on a Linux server using Mono. Regards,

Re: Ignite Java server with .net client limitations

2019-03-11 Thread Stephen Darlington
I’m not sure what resolution you have in mind? The fundamental problem is that the JVM can’t run .net code and vice versa. Ignite solves that with a .net server that integrates with the usual Java server. The other alternative would be to have a Java client. Regards, Stephen > On 11 Mar 2019,

Re: I have a question about Java scan ignite cache

2019-02-14 Thread Stephen Darlington
The filter would run on the server side, so yes, the model class would need to be deployed there. Alternatively, you could use BinaryObject. Something like this should work:

QueryCursor<Cache.Entry<Object, BinaryObject>> query = cache.withKeepBinary().query(new ScanQuery<>(new IgniteBiPredicate<Object, BinaryObject>() { @Override

Re: Procedure for scale in and scale out of ignite nodes

2019-02-14 Thread Stephen Darlington
Note that, if you’re using persistence, the recommendation is to use StatefulSets so that the nodes are added/removed in a predictable way. https://apacheignite.readme.io/docs/stateful-deployment Regards, Stephen > On 14 Feb 2019, at

Re: IGNITE-3180 : please add support for median, stddev, var in Ignite SQL

2019-01-30 Thread Stephen Darlington
The weird thing about this is that the documentation says they do exist: https://apacheignite-sql.readme.io/docs/aggregate-functions (They don’t.) At the very least we need to update the documentation. Regards, Stephen > On 30

Re: Listen for cache changes

2019-04-15 Thread Stephen Darlington
You can use Continuous Queries to “listen” to changes in your caches: https://apacheignite.readme.io/docs/continuous-queries Regards, Stephen > On 15 Apr 2019, at 12:22, Mike Needham wrote: > > Hi All, > > I have a cache that has 3 SQL tables in it. There is a loop that listens to > a
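
A minimal sketch of a Continuous Query; the cache name and key/value types are assumptions:

```java
import javax.cache.Cache;
import javax.cache.event.CacheEntryEvent;
import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.Ignition;
import org.apache.ignite.cache.query.ContinuousQuery;
import org.apache.ignite.cache.query.QueryCursor;

public class ListenForChanges {
    public static void main(String[] args) {
        Ignite ignite = Ignition.start();
        IgniteCache<Integer, String> cache = ignite.getOrCreateCache("myCache");

        ContinuousQuery<Integer, String> qry = new ContinuousQuery<>();
        // Called on this node for every matching update anywhere in the cluster
        qry.setLocalListener(events -> {
            for (CacheEntryEvent<? extends Integer, ? extends String> e : events)
                System.out.println(e.getEventType() + ": " + e.getKey() + " -> " + e.getValue());
        });

        // Updates stream to the listener for as long as the cursor stays open
        try (QueryCursor<Cache.Entry<Integer, String>> cur = cache.query(qry)) {
            cache.put(1, "hello"); // triggers the listener
            Thread.sleep(1000);    // give the notification time to arrive
        } catch (InterruptedException ignored) {
        }
    }
}
```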

Re: max number of TCP connections

2019-06-19 Thread Stephen Darlington
This is the Apache Ignite user mailing list. For help with Apache Nifi you’ll need to engage with their community. See here: https://nifi.apache.org/mailing_lists.html Regards, Stephen > On 18 Jun 2019, at 17:02, Clay Teahouse wrote: > >

Re: Ignite Nodes Go Down - org.apache.spark.rpc.RpcTimeoutException: Futures timed out after [10 seconds]. This timeout is controlled by spark.executor.heartbeatInterval

2019-06-13 Thread Stephen Darlington
The documentation recommends against using embedded mode for what’s likely to be a related reason. Embedded mode implies starting Ignite server nodes within Spark executors which can cause unexpected rebalancing or even data loss. Therefore this mode is currently deprecated and will be

Re: Ignite Nodes Go Down - org.apache.spark.rpc.RpcTimeoutException: Futures timed out after [10 seconds]. This timeout is controlled by spark.executor.heartbeatInterval

2019-06-13 Thread Stephen Darlington
extrapolate this statement and say that Ignite should > be started as a standalone application as opposed to being embedded inside an > application server that has its own lifecycle and additional responsibilities? > > > > On Thu, Jun 13, 2019 at 7:48 AM Stephen Darlington > mailt

Re: Apache Ignite 2.7.5 requirements for AWS

2019-06-21 Thread Stephen Darlington
There’s no one-size-fits-all answer unfortunately. How much data do you have? Do you use a lot of SQL? A lot of compute? What are your resilience requirements? For an “average” deployment I’d start looking at the “Memory optimised” instances (r5 and r5a). Of course, no one has an average

Re: Limited set of DataStructure compared to REDIS

2019-06-24 Thread Stephen Darlington
I think you’d do operations like that using SQL, i.e., you’d use the SqlFieldsQuery. Regards, Stephen > On 24 Jun 2019, at 11:45, dhiman_nikhil wrote: > > I see many DS not present in Ignite like zrevrangeByScore, sortedSet, etc. > Is there a way I can get all D.S. provided by Redis in
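
For example, a Redis-style zrevrangeByScore becomes an ordinary ranged, ordered SQL query. The table and column names below are illustrative assumptions:

```java
import java.util.List;
import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.Ignition;
import org.apache.ignite.cache.query.FieldsQueryCursor;
import org.apache.ignite.cache.query.SqlFieldsQuery;

public class RangeByScore {
    public static void main(String[] args) {
        Ignite ignite = Ignition.start();
        IgniteCache<?, ?> cache = ignite.cache("PLAYER"); // an existing SQL-enabled cache

        // Rough equivalent of zrevrangeByScore: members scoring 50..100, highest first
        SqlFieldsQuery qry = new SqlFieldsQuery(
            "SELECT member, score FROM Player WHERE score BETWEEN ? AND ? ORDER BY score DESC")
            .setArgs(50, 100);

        try (FieldsQueryCursor<List<?>> cur = cache.query(qry)) {
            for (List<?> row : cur)
                System.out.println(row.get(0) + " = " + row.get(1));
        }
    }
}
```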

Checking for rebalancing?

2019-06-27 Thread Stephen Darlington
Hi, I’m looking to be able to automate a rolling update of Ignite, that is, take nodes down one at a time until the whole cluster has the new configuration. I have my caches configured with at least one backup. What’s the easiest way of checking that the cluster has finished rebalancing all

Re: Checking for rebalancing?

2019-06-28 Thread Stephen Darlington
tool do you use for such purpose? > > [1] > https://github.com/Mmuzaf/ignite/blob/master/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/GridCachePreloader.java#L128 > > On Thu, 27 Jun 2019 at 18:18, Stephen Darlington > wrote: >> >> Hi, >> &

Re: configuration of ignite client nodes

2019-07-08 Thread Stephen Darlington
The list of machines in your IP finder list does not need to be exhaustive. As long as a node can find at least one other it should be able to join the cluster. You don’t need to configure your clients to know about the other client nodes, but, by virtue of joining the cluster, they will learn

Re: Ignite Spark Example Question

2019-08-12 Thread Stephen Darlington
I don’t think there’s anything “out of the box,” but you could write a custom CacheStore to do that. See here for more details: https://apacheignite.readme.io/docs/3rd-party-store#section-custom-cachestore Regards, Stephen > On 9 Aug 2019, at 21:50, sri hari kali charan Tummala > wrote: >

Re: Ignite Spark Example Question

2019-08-13 Thread Stephen Darlington
e to keep looping to find new data > files in S3 and write to cache real time or is it already built in ? > > On Mon, Aug 12, 2019 at 5:43 AM Stephen Darlington > mailto:stephen.darling...@gridgain.com>> > wrote: > I don’t think there’s anything “out of the box,” but you

Re: Job Stealing node not stealing jobs

2019-09-10 Thread Stephen Darlington
I don’t know the answer to your job stealing question, but I do wonder if that’s the right configuration for your requirements. Why not use the weighted load balancer (https://apacheignite.readme.io/docs/load-balancing)? That’s designed to

Re: Complex Event Processing Using Ignite Streaming

2019-09-17 Thread Stephen Darlington
Unfortunately CEP isn’t really a focus of the project. Having said that, it is possible. For example, the old documentation for sliding windows still worked the last time I checked: https://apacheignite.readme.io/v1.4/docs/sliding-windows

Re: Affinity on non-key field

2019-07-30 Thread Stephen Darlington
No, and it’s not really possible. The problem is with something like IgniteCache.get(), the only information it has to find the value is the key. In that sense you’d be able to co-locate the data when you saved it to the cache, but you’d have no efficient way to find it again afterwards.

Re: Ignite backup/restore Cache-wise

2019-07-29 Thread Stephen Darlington
> Is WAL bringing cache to latest state (since I am not doing anthing with wal > folder in backup/restore ) ? Basically, yes. The WAL contains all the changes since the last snapshot. To do a backup you’ll need both the data files and the WAL files. The WAL archives files wouldn’t hurt,

Re: Partitioned cache didn't distributed for all the server nodes

2019-07-22 Thread Stephen Darlington
How many nodes in your cluster? How many keys in your dataset? Regards, Stephen > On 22 Jul 2019, at 16:16, raja24 wrote: > > Hi, > > I haven below configuration and partitioned cache didn't distributed for all > the server nodes. > >

Re: Failed to read magic header (too few bytes received)

2019-09-24 Thread Stephen Darlington
What’s on IP address 10.0.11.210? It’s sending Ignite something that it doesn’t understand. Maybe it’s not another copy of Ignite? Could it be a firewall setting truncating the message? Or perhaps the remote node has a different configuration, for example mixing up communication and discovery

Re: Failed to clean IP finder up.

2019-10-02 Thread Stephen Darlington
Looks like it missed being part of the 2.7.x release by a month or two. It will be resolved when 2.8.0 comes out. Regards, Stephen > On 2 Oct 2019, at 09:59, Marco Bernagozzi wrote: > > I'm getting this error when the nodes are shutting down. > What are the possible causes for this? > A bug

Re: Failed to clean IP finder up.

2019-10-02 Thread Stephen Darlington
It’s being discussed in the developer list right now. January seems to be the current target. Regards, Stephen > On 2 Oct 2019, at 10:41, Marco Bernagozzi wrote: > > Cool, thanks! > Any idea when 2.8 is planned to be released? > > Regards, > Marco > > On Wed, 2

Re: How does Apache Ignite distribute???

2019-10-31 Thread Stephen Darlington
The direct answer to your question is: implement your own org.apache.ignite.cache.affinity.AffinityFunction. But that’s hard to do correctly. You’d probably be better running, say, two copies of Ignite on the machines with more memory. Regards, Stephen > On 31 Oct 2019, at 15:25,

Re: Apache Ignite 2.8 release timeline ?

2019-11-11 Thread Stephen Darlington
See the wiki for the current status: https://cwiki.apache.org/confluence/display/IGNITE/Apache+Ignite+2.8 In summary, the target is mid-January. Regards, Stephen > On 12 Nov 2019, at 00:00, apada...@kent.edu wrote: > >

Re: How to insert data?

2019-11-05 Thread Stephen Darlington
One. The cache is cluster-wide, so once it’s created every node can see it. > On 5 Nov 2019, at 12:36, BorisBelozerov wrote: > > Thank you!! > How many nodes that I run your code?? > I only run the "CREATE database" code in one node or all nodes?? > > > > -- > Sent from:

Re: how to config On-Heap Caching by xml files?

2019-11-21 Thread Stephen Darlington
What do you mean by “doesn’t work”? The on-heap cache is, effectively, a cache of a cache. Data in Ignite is always stored off-heap. That setting creates another copy of the data on-heap. Regards, Stephen > On 21 Nov 2019, at 13:34, ?青春狂-^ wrote: > > hi: > I want know how to config On-Heap

Re: ValueExtractor support in Apache Ignite

2019-12-13 Thread Stephen Darlington
The “Ignite Way” of doing that would be to normalise the data, put the IdentifierMap as a separate, probably co-located cache. Regards, Stephen > On 13 Dec 2019, at 09:53, Rastogi, Arjit (CWM-NR) > wrote: > > Hi Ilya, > > We want to create index on the keys present in HashMap present in

Re: Standby nodes

2019-12-06 Thread Stephen Darlington
I’m not clear what that means? They’re part of the cluster but don’t store data or provide compute services? At what point would they leap into action? Maybe if you could describe what problem you’re trying to solve we could provide a more Ignite-y way to solve it. Regards, Stephen > On 6 Dec

Re: Standby nodes

2019-12-06 Thread Stephen Darlington
Why would this be better than having the same five nodes all sharing the workload? I can say why it’s worse: in the event of a failure, there’s more data to copy which means the system would take longer to rebalance and therefore the window where you have no redundancy is larger. The closest

Re: Ignite with Spark Intergration

2019-12-06 Thread Stephen Darlington
That’s just how the Spark integration works! I suppose you could use Spark’s JDBC connection to access Ignite, but you’d lose some of the flexibility. Regards, Stephen > On 6 Dec 2019, at 17:04, datta wrote: > > Hi, > > I have installed ignite in 2 machines . > > 1 - server and 1 as

Re: Fetching Server DataStorageMetric

2019-12-16 Thread Stephen Darlington
Use the compute grid. Something like:

Collection<DataStorageMetrics> mx = ignite.compute().broadcast(() -> {
    Ignite i = Ignition.ignite();
    return i.dataStorageMetrics();
});
System.out.println(mx);

Regards, Stephen > On 16 Dec 2019, at 11:28, Mahesh Renduchintala > wrote: > > Hi, > > I need to fetch

Re: Action performed multiple times when using HA ignite clients using continuous queries

2019-10-16 Thread Stephen Darlington
Your clients don’t “know” about each other, so, yes, they’re going to duplicate work. Could you use the Service Grid to run your client? That way it would fail over automatically. Regards, Stephen > On 16 Oct 2019, at 16:04, SunSatION wrote: > > Hi, > > We have a scenario where we're using

Re: Write python dataframe to ignite table.

2019-10-28 Thread Stephen Darlington
What have you tried? As long as your command-line includes the right JAR files it seems to more-or-less just work for me: https://medium.com/@sdarlington/the-trick-to-successfully-integrating-apache-ignite-and-pyspark-890e436d09ba

Re: Throttling getAll

2019-10-28 Thread Stephen Darlington
You might want to open a ticket. Of course, Ignite is open source and I’m sure the community would welcome a pull request. Regards, Stephen > On 28 Oct 2019, at 12:14, Abhishek Gupta (BLOOMBERG/ 919 3RD A) > wrote: > >  > Thanks Ilya for your response. > > Even if my value objects were not

Re: Apache Spark + Ignite Connection Issue

2019-10-18 Thread Stephen Darlington
You’re trying to connect a thick client (the Spark integration) to the thin client port (10800). Your example-default.xml file needs to have the same configuration as your server node(s). Regards, Stephen > On 17 Oct 2019, at 18:12, sri hari kali charan Tummala > wrote: > > Hi Community, >

Re: Write python dataframe to ignite table.

2019-10-29 Thread Stephen Darlington
o column > mapping while import csv files. > > > Regards, > Favas > > From: Stephen Darlington > Sent: Monday, October 28, 2019 5:05 PM > To: user@ignite.apache.org > Subject: Re: Write python dataframe to ignite table. > > What have you tried? As long as y

Re: Does any one have working Ignite cluster on AWS

2019-10-17 Thread Stephen Darlington
You have to tell it where to connect: ./sqlline -u jdbc:ignite:thin://127.0.0.1/ I also wrote this showing a few ways to load data without firing up an IDE: https://medium.com/@sdarlington/loading-data-into-apache-ignite-c0cb7c065a7

Re: Action performed multiple times when using HA ignite clients using continuous queries

2019-10-17 Thread Stephen Darlington
es to Kafka and therefore not sure if it's heavy to > perform the push the Service grid > > On Wed, Oct 16, 2019 at 6:02 PM Stephen Darlington > mailto:stephen.darling...@gridgain.com>> > wrote: > Your clients don’t “know” about each other, so, yes, they’re going to > duplic

Re: Transaction operations using the Ignite Thin Client Protocol

2019-11-29 Thread Stephen Darlington
The ticket says “Fix version: 2.8” so I would assume it would be available then. Currently planned for late January. > On 29 Nov 2019, at 13:58, dkurzaj wrote: > > Hello, > > Since this improvement : https://issues.apache.org/jira/browse/IGNITE-9410 > is resolved, I'd assume that it is now

Re: How does Apache Ignite distribute???

2019-10-31 Thread Stephen Darlington
What do you want to customise? How does the current behaviour not meet your requirements? > On 31 Oct 2019, at 08:02, BorisBelozerov wrote: > > Can I customize affinity by the way I want to? > How can I do it? Thank you!! > > > > -- > Sent from:

Re: How to insert data?

2019-11-04 Thread Stephen Darlington
You need to tell the SQL engine about your POJO. There are a number of ways of doing that, but one example would be:

val cacheConfiguration = new CacheConfiguration[Integer,DataX]()
val valQE = new QueryEntity()
valQE.setKeyFieldName("key")
valQE.setKeyType("java.lang.Integer")

Re: ValueExtractor support in Apache Ignite

2019-12-18 Thread Stephen Darlington
find intersection/ union result programmatically. > > Thanks & Regards, > Arjit Rastogi > > <>From: Stephen Darlington [mailto:stephen.darling...@gridgain.com] > Sent: Friday,December 13, 2019 4:44 PM > To: user@ignite.apache.org > Subject: Re: Value

Re: Is Apache ignite support tiering or it only support caching??

2020-02-25 Thread Stephen Darlington
Ignite supports persistence to disk. It’s configured using data regions. So you could have one data region that’s entirely in memory and another that’s on disk. We don’t call them “tiers” but that’s effectively what they allow. > On 25 Feb 2020, at 05:57, Preet wrote: > > I am new to

Re: cache put/clear atomicity query

2020-01-29 Thread Stephen Darlington
> org.apache.ignite.internal.util.worker.GridWorker.onIdle(GridWorker.java:297) > at > org.apache.ignite.internal.processors.timeout.GridTimeoutProcessor$TimeoutWorker.body(GridTimeoutProcessor.java:221) > at org.apache.ignite.internal.util.worker.GridWorker.run(GridWorker.java:120) > at java.lang.Thread.run(Thread.java:7

Re: K8S Deployment with Cache Partitioning

2020-01-29 Thread Stephen Darlington
When you say “replicas” are you talking about the number of pods, i.e., the “kubectl scale sts ignite --replicas=4” command? If so, that’s not related to your cache configuration. That’s simply the number of nodes in your cluster. If you loaded in the SQL file as described at the end of the

Re: cache put/clear atomicity query

2020-01-29 Thread Stephen Darlington
re. > Jan 29, 2020 5:40:18 PM java.util.logging.LogManager$RootLogger log > SEVERE: Stopping local node on Ignite failure: [failureCtx=FailureContext > [type=SEGMENTATION, err=null]] > > On Wed, Jan 29, 2020 at 4:46 PM Stephen Darlington > mailto:stephen.darling...@gridgain.com>

Re: How to do wildcard search by key in ignite?

2020-01-31 Thread Stephen Darlington
If you don’t use SQL, Ignite is basically a key-value store. That is, if you don’t know the key you have to look at every record to see if it matches. You can specify a filter on the ScanQuery:

ScanQuery<String, Object> q = new ScanQuery<>((k, v) -> k.equals("Stephen"));

That wouldn’t be indexed, though. If

Re: cache put/clear atomicity query

2020-01-28 Thread Stephen Darlington
First, I assume you mean remove rather than clear? Clear removes all entries in the cache (and takes no parameters). With that, yes, your sequence of events could happen. There are (at least) two alternative ways of doing it. “Put” doesn’t “check that a value already exists.” It simply puts

Re: How to do wildcard search by key in ignite?

2020-01-30 Thread Stephen Darlington
If you create an index on (A,B,C), SQL queries for all three variants you note should work and use the index. Having said that, “returning a huge number of rows” doesn’t seem like a good usage pattern with Ignite. You might be better distributing your query around the cluster rather than
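
For instance, over JDBC (table and column names are illustrative), all three WHERE shapes can be served by one composite index via leftmost-prefix matching:

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.SQLException;
import java.sql.Statement;

public class CompositeIndex {
    public static void main(String[] args) throws SQLException {
        try (Connection conn = DriverManager.getConnection("jdbc:ignite:thin://127.0.0.1/");
             Statement stmt = conn.createStatement()) {
            // One index covers queries on A, on (A,B), and on (A,B,C)
            stmt.execute("CREATE INDEX IF NOT EXISTS idx_abc ON MyTable (A, B, C)");
            stmt.executeQuery("SELECT * FROM MyTable WHERE A = 1");
            stmt.executeQuery("SELECT * FROM MyTable WHERE A = 1 AND B = 2");
            stmt.executeQuery("SELECT * FROM MyTable WHERE A = 1 AND B = 2 AND C = 3");
        }
    }
}
```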

Re: Ignite Web Console requires root?

2020-02-18 Thread Stephen Darlington
I suspect it’s missing a “not” in there somewhere. You certainly do not need root access. Regards, Stephen > On 17 Feb 2020, at 22:32, Andrew Munn wrote: > >  > https://www.gridgain.com/docs/web-console/latest/deploying-web-console > > says: > > By default, the Web Console control process

Re: Apache Ignite downloads are redirecting from https to http

2020-02-19 Thread Stephen Darlington
Should be fixed now. Thanks for reporting! > On 19 Feb 2020, at 09:33, Stephen Darlington > wrote: > > I forwarded to the developer mailing list. > >> On 18 Feb 2020, at 20:28, Devin Anderson wrote: >> >> ::Bump:: >> >> Devin >> >

Re: Is Apache ignite support tiering or it only support caching??

2020-02-21 Thread Stephen Darlington
What are you trying to achieve? You can do read/write through with an external third-party database, you can use Ignite’s transactional persistence, both of which allow you to have different tiers with varied speeds/volumes of data. Regards, Stephen > On 21 Feb 2020, at 09:58, Preet

Re: baseline topology questions

2020-02-11 Thread Stephen Darlington
Persistence doesn’t change anything about the distribution of data. It also doesn’t change anything about “rebalancing” the data. The only real difference is that you trigger rebalancing by changing the baseline topology manually, a process that is generally automatic when you use Ignite

Re: Apache Ignite downloads are redirecting from https to http

2020-02-19 Thread Stephen Darlington
I forwarded to the developer mailing list. > On 18 Feb 2020, at 20:28, Devin Anderson wrote: > > ::Bump:: > > Devin > > On 2/17/20 5:42 PM, Devin Anderson wrote: >> Hi all, >> >> I'm not sure if this is the correct mailing list to bring up this issue. If >> I'm writing the wrong mailing

Re: JDBC Connectivity

2020-01-15 Thread Stephen Darlington
> 1) Is this possible, in either client or server mode? Yes. > 2) If yes, I assume, I'd need one JDBC connection per cache, as I see it is > possible to specify only one cache per JDBC connection. Is this right? No. You can access any cache that has SQL enabled as long as you fully qualify
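A sketch of the qualification the reply refers to: with the JDBC driver, each SQL-enabled cache is exposed as a schema (named after the cache by default, and case-sensitive when quoted), so one connection can query several caches. Cache and table names below are illustrative:

```sql
-- One JDBC connection, two caches: qualify each table with its cache's schema.
SELECT * FROM "PersonCache".PERSON;
SELECT * FROM "CityCache".CITY;
```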

Re: JDBC Connectivity

2020-01-16 Thread Stephen Darlington
If you create a cache, either in code or XML, using the minimal list of parameter it won’t be accessible using SQL. There are a number of ways you can define what’s visible using SQL. You can use a POJO with the @QuerySqlField annotation (and the indexTypes property in the XML file) or define
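For illustration, here is a minimal query-entity definition in the XML configuration, which makes a cache's key/value fields visible to SQL without annotating a POJO (cache name, types, and field names are placeholders, not from the original thread):

```xml
<bean class="org.apache.ignite.configuration.CacheConfiguration">
    <property name="name" value="PersonCache"/>
    <property name="queryEntities">
        <list>
            <bean class="org.apache.ignite.cache.QueryEntity">
                <property name="keyType" value="java.lang.Long"/>
                <property name="valueType" value="com.example.Person"/>
                <property name="fields">
                    <map>
                        <entry key="name" value="java.lang.String"/>
                        <entry key="age" value="java.lang.Integer"/>
                    </map>
                </property>
            </bean>
        </list>
    </property>
</bean>
```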

Re: Kubernetes persistant volumes

2020-01-22 Thread Stephen Darlington
The general recommendation is to use stateful sets: https://www.gridgain.com/docs/latest/installation-guide/kubernetes/openshift-deployment > On 22 Jan 2020, at 06:04, Gokulnath Chidambaram wrote: > > Hello, > > We have pool of persistent volumes (PV's) in kubernetes and we want to use >

Re: JDBC Connectivity

2020-01-17 Thread Stephen Darlington
> > Can you send me an example where the cache and tables are entirely defined in > the XML configuration file (and no POJO), with query entity or just JDBC? > Let's assume that the sql codes run on a server node or a thick client. > > >> On Thu, Jan 16, 2020 at 8

Re: JDBC Connectivity

2020-01-21 Thread Stephen Darlington
he way JDBC types are used in defining tables with 3rd party > databases. > > thanks. > > On Mon, Jan 20, 2020 at 10:04 AM Stephen Darlington > mailto:stephen.darling...@gridgain.com>> > wrote: > Which JDBC settings? If you use the JDBC thick client, you can defi

Re: CacheKeyConfiguration

2020-01-20 Thread Stephen Darlington
Details on how to configure affinity colocation can be found in the documentation: https://www.gridgain.com/docs/latest/developers-guide/data-modeling/affinity-collocation In short, use the “indexedTypes” property in the XML file and the @AffinityKeyMapped annotation in your POJO key,
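A minimal sketch of the annotated key class the reply mentions (class and field names are placeholders; this is a definition fragment, not a runnable program):

```java
import org.apache.ignite.cache.affinity.AffinityKeyMapped;

// Illustrative composite key: all orders for the same customer end up
// on the node that owns that customerId.
public class OrderKey {
    private long orderId;

    @AffinityKeyMapped
    private long customerId;   // colocation is driven by this field

    public OrderKey(long orderId, long customerId) {
        this.orderId = orderId;
        this.customerId = customerId;
    }
}
```

With this key, entries from another cache keyed directly by `customerId` land in the same partitions, so colocated joins and compute stay local.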

Re: JDBC Connectivity

2020-01-20 Thread Stephen Darlington
ble to do the same with JDBC settings in the XML file. > > On Fri, Jan 17, 2020 at 2:10 AM Stephen Darlington > mailto:stephen.darling...@gridgain.com>> > wrote: > See the “Configuring Indexes using query entities” section of the > documentation: > https://www.gridgain

Re: how to achieve this topology ?

2020-03-10 Thread Stephen Darlington
Or JVM3 and JVM4 would be your Ignite cluster (server nodes) and JVM1 and JVM2 would be client nodes, possibly with near caches. > On 9 Mar 2020, at 23:04, Evgenii Zhuravlev wrote: > > Hi, > > You can use NodeFilter for caches. Please use this JavaDoc for information: >
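A sketch of the node-filter approach mentioned in the quoted reply: the cache is deployed only on nodes matching a predicate, typically keyed off a user attribute set at node start-up (the attribute name `role` and value `data` are assumptions for illustration):

```java
import org.apache.ignite.cluster.ClusterNode;
import org.apache.ignite.lang.IgnitePredicate;

// Deploy the cache only on nodes started with the user attribute role=data.
// A named class (rather than a lambda) keeps the filter cleanly serializable.
public class DataNodeFilter implements IgnitePredicate<ClusterNode> {
    @Override public boolean apply(ClusterNode node) {
        return "data".equals(node.attribute("role"));
    }
}

// Usage sketch:
// CacheConfiguration<Long, String> cfg = new CacheConfiguration<>("myCache");
// cfg.setNodeFilter(new DataNodeFilter());
```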
