Re: DataRegionConfiguration is a FINAL class but prefer it not be

2019-03-15 Thread Humphrey
Thanks, I'll post the same message there.

Humphrey



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: How to use enums in sql ‘where’ clause when using Ignite Web Console

2019-03-15 Thread aealexsandrov
Hi,

Could you please share your CacheConfiguration where you set the EnumField
as a field in QueryEntity (just to have the same configuration)? I will try
to reproduce your issue and check how it works.

BR,
Andrei



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Exception while trying to insert data in Key/value map in ignite

2019-03-15 Thread aealexsandrov
Hi,

Could you please provide more details:

1) Server configuration
2) Cache configuration
3) Logs
4) An example of the data that you are going to insert

BR,
Andrei



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: The field name of the query result becomes uppercase

2019-03-15 Thread aealexsandrov
Hi,

No, it's not possible, because all fields in Ignite SQL should be mapped to
Java fields. However, in Java you can do the following:

@QuerySqlField
private java.lang.String id;
@QuerySqlField
private java.lang.String Id;

Here id and Id will be different fields. But the H2 SQL engine expects
case-insensitive fields, so one field in SQL could be mapped to two fields
in Java. You can see the details in this ticket:
https://issues.apache.org/jira/browse/IGNITE-1338 (it was closed as Won't Fix).

BR,
Andrei



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: LIKE operator on Array column in Apache Ignite SQL

2019-03-15 Thread aealexsandrov
Hi,

The Ignite SQL engine supports only the following types:

https://apacheignite-sql.readme.io/docs/data-types

All available functions are listed here:

https://apacheignite-sql.readme.io/docs/sql-reference-overview

So there is no way to work with arrays as SQL data types, even if you set
them as the type in QueryEntity. One possible workaround is to add a boolean
SQL field to the object that contains the array:

public class Market {
    @QuerySqlField(index = true)
    private java.lang.String id;
    @QuerySqlField
    private java.lang.Boolean isContainLacl;
    @QuerySqlField
    private ArrayList<String> array;
}

select * from market where isContainLacl = true;

You can fill this boolean flag when generating the Market object.

BR,
Andrei



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: DataRegionConfiguration is a FINAL class but prefer it not be

2019-03-15 Thread aealexsandrov
Hi,

I think it was done to avoid issues that can arise because users generally
can't see the full logic under the hood.

If you are going to suggest an improvement, it would be better to create a
thread on the Apache Ignite developers mailing list:

http://apache-ignite-developers.2346864.n4.nabble.com/

BR,
Andrei



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Best way to upsert/delete records in Ignite from Kafka Stream?

2019-03-15 Thread aealexsandrov
Hi,

You can use the Kafka streamer for these purposes:

https://apacheignite-mix.readme.io/docs/kafka-streamer

Also, take a look at this thread. It contains examples of how to work with
JSON files:

http://apache-ignite-users.70518.x6.nabble.com/Kindly-tell-me-where-to-find-these-jar-files-td12649.html

Regarding the UPSERT semantics: you should set the allowOverwrite flag on the
IgniteDataStreamer used by the KafkaStreamer (otherwise it behaves as INSERT
only). You can do this via the following methods:

https://ignite.apache.org/releases/latest/javadoc/org/apache/ignite/stream/StreamAdapter.html#getStreamer--
https://ignite.apache.org/releases/latest/javadoc/org/apache/ignite/stream/StreamAdapter.html#setStreamer-org.apache.ignite.IgniteDataStreamer-
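
As a rough illustration, a minimal sketch (the cache name is hypothetical,
and the Kafka topic, consumer configuration, and tuple extractor that the
KafkaStreamer also needs before starting are omitted):

// Data streamer with overwrites enabled, so repeated keys act as UPSERT.
IgniteDataStreamer<String, String> streamer = ignite.dataStreamer("myCache");
streamer.allowOverwrite(true); // without this flag, existing keys are skipped

KafkaStreamer<String, String> kafkaStreamer = new KafkaStreamer<>();
kafkaStreamer.setIgnite(ignite);
kafkaStreamer.setStreamer(streamer); // pass the overwriting streamer from above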

BR,
Andrei



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


DataRegionConfiguration is a FINAL class but prefer it not be

2019-03-15 Thread Humphrey
Is there a very good reason why the DataRegionConfiguration is a *FINAL*
class?

I would like to be able to extend the DataRegionConfiguration.

In my extended class I would like to add a new method for the *setMaxSize()*
and *setInitialSize()* for example where the input is of the type
*org.springframework.util.unit.DataSize instead* of *long*.

This way I can do *super.setMaxSize(maxSize.toBytes())*, which lets Spring
Boot automatically convert a property set in my application.properties in a
form like *"1GB"* to the corresponding number of bytes.

See [1], "Properties Conversion".

[1]
https://docs.spring.io/spring-boot/docs/current/reference/html/boot-features-external-config.html#boot-features-external-config-conversion-datasize

The only option I have now is to wrap the class and delegate the methods,
but that makes me create two beans instead of one for each
DataRegionConfiguration.
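
For reference, a minimal sketch of the wrapper/delegation workaround
described above (assuming Spring Boot's org.springframework.util.unit.DataSize;
all names are illustrative):

import org.apache.ignite.configuration.DataRegionConfiguration;
import org.springframework.util.unit.DataSize;

public class DataRegionConfigurationWrapper {
    private final DataRegionConfiguration delegate = new DataRegionConfiguration();

    // Accepts Spring's DataSize ("1GB", "512MB", ...) instead of a raw long.
    public DataRegionConfigurationWrapper setMaxSize(DataSize maxSize) {
        delegate.setMaxSize(maxSize.toBytes());
        return this;
    }

    public DataRegionConfigurationWrapper setInitialSize(DataSize initialSize) {
        delegate.setInitialSize(initialSize.toBytes());
        return this;
    }

    // Expose the wrapped configuration to pass into DataStorageConfiguration.
    public DataRegionConfiguration unwrap() {
        return delegate;
    }
}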



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: How to use atomic operations on C++ thin client?

2019-03-15 Thread Павлухин Иван
Hi Jack,

It should be included in the next version [1]. Stay tuned.

[1] https://issues.apache.org/jira/browse/IGNITE-9904

Fri, Mar 15, 2019 at 01:32, jackluo923 :
>
> After digging deeper, it appears that thin-client atomic cache operations are
> not implemented. I have implemented and tested the atomic cache operations
> in the C++ thin client locally and they appear to work correctly, but I haven't
> done any extensive testing. Is there any reason why atomic operations are
> not available in the C++ thin client, but available in other languages?
>
>
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/



-- 
Best regards,
Ivan Pavlukhin


Re: Primary partitions return zero partitions before rebalance.

2019-03-15 Thread Павлухин Иван
Hi,

What Ignite version do you use?
How do you register your listener?
On what object do you call primaryPartitions/allPartitions?

It is true that Ignite uses late affinity assignment. It means that for
each topology change (node join or node leave) the partition assignment
changes twice. First, temporary backups are created, which should be
rebalanced from other nodes (EVT_CACHE_REBALANCE_STARTED takes place here).
Second, redundant partition replicas are marked as unusable (and unloaded
after that) (EVT_CACHE_REBALANCE_STOPPED). It is also useful to understand
that the Affinity interface calculates partition distribution using the
affinity function, and such a distribution might differ from the real
partition assignment; it differs while rebalance is in progress. See the
AffinityAssignment interface.
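
For reference, a hedged sketch of registering a local listener and querying
primary partitions when rebalance stops (the cache name is hypothetical, and
EVT_CACHE_REBALANCE_STOPPED must be enabled via
IgniteConfiguration.setIncludeEventTypes for the event to fire):

ignite.events().localListen(evt -> {
    // Ask the affinity function for this node's primary partitions.
    int[] primary = ignite.affinity("myCache")
        .primaryPartitions(ignite.cluster().localNode());
    System.out.println("Primary partitions now: " + primary.length);
    return true; // keep the listener registered
}, EventType.EVT_CACHE_REBALANCE_STOPPED);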

ср, 13 мар. 2019 г. в 21:59, Koitoer :
>
> Hi All.
>
> I'm trying to follow the rebalance events of my ignite cluster so I'm able to 
> track which partitions are assigned to each node at any point in time. I am 
> listening to the `EVT_CACHE_REBALANCE_STARTED` and 
> `EVT_CACHE_REBALANCE_STOPPED`
> events from Ignite and that is working well, except in the case where one node
> crashes and another takes its place.
>
> My cluster is 5 nodes.
> Ex. Node 1 has, let's say, 100 partitions. After I kill this node, the
> partitions that were assigned to it get rebalanced across the entire cluster.
> I'm able to track that with the STOPPED event, and checking the affinity
> function in each node using the `primaryPartitions` method confirms it: if
> I add up all those numbers I get 1024 partitions, which is what I was
> expecting.
>
> However, when a new node replaces the previous one, I see a rebalance process
> occur, and now some of the partitions `disappear` from the already existing
> nodes (which is expected, as the new node will take some partitions from
> them). But when the STOPPED event is received by this new node, calling
> `primaryPartitions` returns an empty list, while the `allPartitions` method
> does give me a list (I think at this point it is primary + backups).
>
> If I let some time pass and execute the `primaryPartitions` method again, I
> am able to retrieve the partitions that I was expecting to see after the
> STOPPED event. I read at
> https://cwiki.apache.org/confluence/display/IGNITE/%28Partition+Map%29+Exchange+-+under+the+hood#id-(PartitionMap)Exchange-underthehood-LateAffinityAssignment
> that it could be a late assignment: after the cache rebalance, the new node
> needs to bring in all the entries to fill the cache, and only after that
> will `primaryPartitions` return something.
> It would be great to know if this is actually what is happening.
>
> My question is whether there is any kind of event I should listen to so I
> can tell that this process (if this is what is happening) has already
> finished. I would like to say: "After you bring this node into the cluster,
> the partitions assigned to that node are the following: XXX, XXX".
>
> Also, I'm aware of the event `EVT_CACHE_REBALANCE_PART_LOADED`, but I'm seeing
> a ton of them, and I have no way to know when the last one arrives so I can
> say those are now my primary partitions.
>
> Thanks in advance.



-- 
Best regards,
Ivan Pavlukhin


Best way to upsert/delete records in Ignite from Kafka Stream?

2019-03-15 Thread John Smith
Hi, I have a bunch of Json records in Kafka. I would like to either UPSERT
or DELETE a record from my Ignite cache based on the "type" specified in
the Json record. What's the best way to do this or what feature of
Kafka/Ignite I should use?


Re: Finding collocated data in ignite nodes

2019-03-15 Thread NileshKhaire
Hello,

I have inserted 1000 cities now and 3 countries. 

1st Country = 500 cities
2nd Country = 300 cities
3rd Country = 200 cities

After executing the first query:

select * from City c join Country cc on cc.Code = c.CountryCode;  *record
count is 0*

After executing the second query:

select * from City c join Country cc on cc.Code != c.CountryCode;  *record
count is 1200*

This is the complete opposite of your result: in the first query you had 0
records, and for the second query you had some records.




--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Access a cache loaded by DataStreamer with SQL

2019-03-15 Thread Mike Needham
Perfect. Now the next question is: how would you do this for a more complex
object/table? Either one defined in a separate object or via SQL DDL?

On Fri, Mar 15, 2019 at 9:05 AM Ilya Kasnacheev 
wrote:

> Hello!
>
> You will have to specify schema name (or cache name?) in ALLCAPS when
> creating cache.
>
> Regards,
> --
> Ilya Kasnacheev
>
>
> Fri, Mar 15, 2019 at 16:45, Mike Needham :
>
>> I see. I did not have the "person" schema. Is there a way to not
>> have the quotes around that?
>>
>> On Fri, Mar 15, 2019 at 7:59 AM ilya.kasnacheev <
>> ilya.kasnach...@gmail.com> wrote:
>>
>>> Hello!
>>>
>>> Definitely works for me in DBeaver with this exact code:
>>>
>>> <
>>> http://apache-ignite-users.70518.x6.nabble.com/file/t1312/dbeaver-tables.png>
>>>
>>>
>>> Some of DBeaver's introspection does not work but statements are solid.
>>>
>>> Regards,
>>>
>>>
>>>
>>> --
>>> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>>>
>>
>>
>> --
>> *Some days it just not worth chewing through the restraints*
>>
>

-- 
*Some days it just not worth chewing through the restraints*


Re: Finding collocated data in ignite nodes

2019-03-15 Thread Ilya Kasnacheev
Hello!

Well, maybe both entries landed on the same node. You will need around a
dozen entries (in each table) to reliably observe this difference.

Regards,
-- 
Ilya Kasnacheev


Fri, Mar 15, 2019 at 17:45, NileshKhaire :

> Hello Ilya Kasnacheev,
>
> I executed both queries, and they give me the same result. It would be great if
> you could explain this in more detail.
>
> Thanks,
> Nilesh
>
>
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: Finding collocated data in ignite nodes

2019-03-15 Thread NileshKhaire
Hello Ilya Kasnacheev,

I executed both queries, and they give me the same result. It would be great if
you could explain this in more detail.

Thanks,
Nilesh





--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Deadlocked while Get/Set at the time ?

2019-03-15 Thread Tâm Nguyễn Mạnh
Hi,

Thank you for your advice. I will try SortedDictionary. But I wonder if
that's only a workaround. Shouldn't this be handled natively in Ignite
Core? What do you think?

On Fri, Mar 15, 2019 at 8:38 PM Ilya Kasnacheev 
wrote:

> Hello!
>
> I imagine you can use a SortedDictionary.
>
> Regards,
> --
> Ilya Kasnacheev
>
>
> Fri, Mar 15, 2019 at 16:02, Tâm Nguyễn Mạnh :
>
>> Hi,
>>
>> Yes, I'm using the .NET thin client PutAll method. It requires
>> IEnumerable<KeyValuePair<TK, TV>> as input. What should I do? Please
>> advise me.
>>
>>
>> https://ignite.apache.org/releases/latest/dotnetdoc/api/Apache.Ignite.Core.Client.Cache.ICacheClient-2.html#Apache_Ignite_Core_Client_Cache_ICacheClient_2_PutAll_System_Collections_Generic_IEnumerable_System_Collections_Generic_KeyValuePair__0__1___
>>
>>
>> On Fri, Mar 15, 2019 at 5:55 PM Ilya Kasnacheev <
>> ilya.kasnach...@gmail.com> wrote:
>>
>>> Hello!
>>>
>>> Are you using putAll()? Please make sure to always pass a TreeMap (or
>>> other sorted map) to putAll, since it is possible to get a deadlock when
>>> using a HashMap (keys are unordered).
>>>
>>> Regards,
>>> --
>>> Ilya Kasnacheev
>>>
>>>
>>> Fri, Mar 15, 2019 at 06:18, Tâm Nguyễn Mạnh :
>>>
 Hi Igniters,

 Today I run a loadtest for my system that use Ignite as a Cache Service
 but 1 of my nodes got crashed because of DeadLocked.

 Could you please help me to prevent this.

 --
 Thanks & Best Regards

 Tam, Nguyen Manh


>>
>> --
>> Thanks & Best Regards
>>
>> Tam, Nguyen Manh
>>
>>

-- 
Thanks & Best Regards

Tam, Nguyen Manh


Re: Deadlocked while Get/Set at the time ?

2019-03-15 Thread Ilya Kasnacheev
Hello!

It's a reasonable question that I have no answer to. You can raise this
issue on the developers mailing list if you like.

Regards,
-- 
Ilya Kasnacheev


Fri, Mar 15, 2019 at 17:30, Tâm Nguyễn Mạnh :

> Hi,
>
> Thank you for your advice. I will try SortedDictionary. But I wonder if
> that's only a workaround. Shouldn't this be handled natively in Ignite
> Core? What do you think?
>
> On Fri, Mar 15, 2019 at 8:38 PM Ilya Kasnacheev 
> wrote:
>
>> Hello!
>>
>> I imagine you can use a SortedDictionary.
>>
>> Regards,
>> --
>> Ilya Kasnacheev
>>
>>
>> Fri, Mar 15, 2019 at 16:02, Tâm Nguyễn Mạnh :
>>
>>> Hi,
>>>
>>> Yes, I'm using the .NET thin client PutAll method. It requires
>>> IEnumerable<KeyValuePair<TK, TV>> as input. What should I do? Please
>>> advise me.
>>>
>>>
>>> https://ignite.apache.org/releases/latest/dotnetdoc/api/Apache.Ignite.Core.Client.Cache.ICacheClient-2.html#Apache_Ignite_Core_Client_Cache_ICacheClient_2_PutAll_System_Collections_Generic_IEnumerable_System_Collections_Generic_KeyValuePair__0__1___
>>>
>>>
>>> On Fri, Mar 15, 2019 at 5:55 PM Ilya Kasnacheev <
>>> ilya.kasnach...@gmail.com> wrote:
>>>
 Hello!

 Are you using putAll()? Please make sure to always pass a TreeMap (or
 other sorted map) to putAll, since it is possible to get a deadlock when
 using a HashMap (keys are unordered).

 Regards,
 --
 Ilya Kasnacheev


 Fri, Mar 15, 2019 at 06:18, Tâm Nguyễn Mạnh <
 nguyenmanhtam...@gmail.com>:

> Hi Igniters,
>
> Today I run a loadtest for my system that use Ignite as a Cache
> Service but 1 of my nodes got crashed because of DeadLocked.
>
> Could you please help me to prevent this.
>
> --
> Thanks & Best Regards
>
> Tam, Nguyen Manh
>
>
>>>
>>> --
>>> Thanks & Best Regards
>>>
>>> Tam, Nguyen Manh
>>>
>>>
>
> --
> Thanks & Best Regards
>
> Tam, Nguyen Manh
>
>


Re: Access a cache loaded by DataStreamer with SQL

2019-03-15 Thread Ilya Kasnacheev
Hello!

You will have to specify the schema name (or cache name?) in ALLCAPS when
creating the cache.
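
For example, a hedged sketch (assuming CacheConfiguration.setSqlSchema; the
cache name comes from the earlier snippet):

CacheConfiguration<Integer, String> ccfg = new CacheConfiguration<>("person");
// An ALLCAPS schema lets unquoted identifiers resolve case-insensitively.
ccfg.setSqlSchema("PERSON");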

Regards,
-- 
Ilya Kasnacheev


Fri, Mar 15, 2019 at 16:45, Mike Needham :

> I see. I did not have the "person" schema. Is there a way to not
> have the quotes around that?
>
> On Fri, Mar 15, 2019 at 7:59 AM ilya.kasnacheev 
> wrote:
>
>> Hello!
>>
>> Definitely works for me in DBeaver with this exact code:
>>
>> <
>> http://apache-ignite-users.70518.x6.nabble.com/file/t1312/dbeaver-tables.png>
>>
>>
>> Some of DBeaver's introspection does not work but statements are solid.
>>
>> Regards,
>>
>>
>>
>> --
>> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>>
>
>
> --
> *Some days it just not worth chewing through the restraints*
>


Re: Access a cache loaded by DataStreamer with SQL

2019-03-15 Thread Mike Needham
I see. I did not have the "person" schema. Is there a way to not
have the quotes around that?

On Fri, Mar 15, 2019 at 7:59 AM ilya.kasnacheev 
wrote:

> Hello!
>
> Definitely works for me in DBeaver with this exact code:
>
> <
> http://apache-ignite-users.70518.x6.nabble.com/file/t1312/dbeaver-tables.png>
>
>
> Some of DBeaver's introspection does not work but statements are solid.
>
> Regards,
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


-- 
*Some days it just not worth chewing through the restraints*


Re: Deadlocked while Get/Set at the time ?

2019-03-15 Thread Ilya Kasnacheev
Hello!

I imagine you can use a SortedDictionary.

Regards,
-- 
Ilya Kasnacheev


Fri, Mar 15, 2019 at 16:02, Tâm Nguyễn Mạnh :

> Hi,
>
> Yes, I'm using the .NET thin client PutAll method. It requires
> IEnumerable<KeyValuePair<TK, TV>> as input. What should I do? Please
> advise me.
>
>
> https://ignite.apache.org/releases/latest/dotnetdoc/api/Apache.Ignite.Core.Client.Cache.ICacheClient-2.html#Apache_Ignite_Core_Client_Cache_ICacheClient_2_PutAll_System_Collections_Generic_IEnumerable_System_Collections_Generic_KeyValuePair__0__1___
>
>
> On Fri, Mar 15, 2019 at 5:55 PM Ilya Kasnacheev 
> wrote:
>
>> Hello!
>>
>> Are you using putAll()? Please make sure to always pass a TreeMap (or other
>> sorted map) to putAll, since it is possible to get a deadlock when using a
>> HashMap (keys are unordered).
>>
>> Regards,
>> --
>> Ilya Kasnacheev
>>
>>
>> Fri, Mar 15, 2019 at 06:18, Tâm Nguyễn Mạnh :
>>
>>> Hi Igniters,
>>>
>>> Today I run a loadtest for my system that use Ignite as a Cache Service
>>> but 1 of my nodes got crashed because of DeadLocked.
>>>
>>> Could you please help me to prevent this.
>>>
>>> --
>>> Thanks & Best Regards
>>>
>>> Tam, Nguyen Manh
>>>
>>>
>
> --
> Thanks & Best Regards
>
> Tam, Nguyen Manh
>
>


Re: Deadlocked while Get/Set at the time ?

2019-03-15 Thread Tâm Nguyễn Mạnh
Hi,

Yes, I'm using the .NET thin client PutAll method. It requires
IEnumerable<KeyValuePair<TK, TV>> as input. What should I do? Please
advise me.

https://ignite.apache.org/releases/latest/dotnetdoc/api/Apache.Ignite.Core.Client.Cache.ICacheClient-2.html#Apache_Ignite_Core_Client_Cache_ICacheClient_2_PutAll_System_Collections_Generic_IEnumerable_System_Collections_Generic_KeyValuePair__0__1___


On Fri, Mar 15, 2019 at 5:55 PM Ilya Kasnacheev 
wrote:

> Hello!
>
> Are you using putAll()? Please make sure to always pass a TreeMap (or other
> sorted map) to putAll, since it is possible to get a deadlock when using a
> HashMap (keys are unordered).
>
> Regards,
> --
> Ilya Kasnacheev
>
>
> Fri, Mar 15, 2019 at 06:18, Tâm Nguyễn Mạnh :
>
>> Hi Igniters,
>>
>> Today I run a loadtest for my system that use Ignite as a Cache Service
>> but 1 of my nodes got crashed because of DeadLocked.
>>
>> Could you please help me to prevent this.
>>
>> --
>> Thanks & Best Regards
>>
>> Tam, Nguyen Manh
>>
>>

-- 
Thanks & Best Regards

Tam, Nguyen Manh


Re: Access a cache loaded by DataStreamer with SQL

2019-03-15 Thread ilya.kasnacheev
Hello!

Definitely works for me in DBeaver with this exact code:

<http://apache-ignite-users.70518.x6.nabble.com/file/t1312/dbeaver-tables.png>

Some of DBeaver's introspection does not work but statements are solid.

Regards,



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Access a cache loaded by DataStreamer with SQL

2019-03-15 Thread Mike Needham
OK, that appears to be part of the way, but the caches are not queryable
from any SQL tools like Tableau or DBeaver. What am I missing?

On Fri, Mar 15, 2019 at 5:29 AM Ilya Kasnacheev 
wrote:

> Hello!
>
> Yes, I understand your confusion here.
>
> Take a look at the following elaborate snippet:
>
> package org.apache.ignite.examples;
> import java.util.Collections;
> import java.util.LinkedHashMap;
> import org.apache.ignite.Ignite;
> import org.apache.ignite.IgniteCache;
> import org.apache.ignite.IgniteDataStreamer;
> import org.apache.ignite.Ignition;
> import org.apache.ignite.cache.QueryEntity;
> import org.apache.ignite.cache.query.SqlFieldsQuery;
> import org.apache.ignite.configuration.CacheConfiguration;
>
> public class LoadTableWithDataStreamer {
>     public static void main(String[] args) {
>         try (Ignite ignite = Ignition.start()) {
>             IgniteCache<Integer, String> personCache =
>                 ignite.getOrCreateCache(new CacheConfiguration<Integer, String>("person")
>                     .setQueryEntities(Collections.singleton(
>                         new QueryEntity(Integer.class, String.class)
>                             .setTableName("person_table"))));
>
>             IgniteCache<Integer, String> placeCache =
>                 ignite.getOrCreateCache(new CacheConfiguration<Integer, String>("place")
>                     .setQueryEntities(Collections.singleton(
>                         new QueryEntity(Integer.class, String.class)
>                             .setTableName("place_table")
>                             // more decoration
>                             .setKeyFieldName("id").setValueFieldName("name")
>                             .setFields(new LinkedHashMap<String, String>() {{
>                                 // Note that extending LinkedHashMap isn't production-ready
>                                 put("id", Integer.class.getCanonicalName());
>                                 put("name", String.class.getCanonicalName());
>                             }}))));
>
>             try (IgniteDataStreamer<Integer, String> ds = ignite.dataStreamer("person")) {
>                 ds.addData(1, "John");
>             }
>
>             try (IgniteDataStreamer<Integer, String> ds = ignite.dataStreamer("place")) {
>                 ds.addData(1, "Siberia");
>             }
>
>             System.err.println("Query result");
>             personCache.query(new SqlFieldsQuery("select _key, _val from person_table"))
>                 .getAll().forEach(System.err::println);
>
>             // refer to a different cache's table
>             personCache.query(new SqlFieldsQuery("select id, name from \"place\".place_table"))
>                 .getAll().forEach(System.err::println);
>         }
>     }
> }
>
> Regards,
> --
> Ilya Kasnacheev
>
>
> Thu, Mar 14, 2019 at 22:59, Mike Needham :
>
>> Hi,
>>
>> Here is the code I am using
>>
>> package org.apache.ignite.examples;
>> import java.util.Collections;
>> import org.apache.ignite.Ignite;
>> import org.apache.ignite.IgniteCache;
>> import org.apache.ignite.IgniteDataStreamer;
>> import org.apache.ignite.Ignition;
>> import org.apache.ignite.cache.QueryEntity;
>> import org.apache.ignite.cache.query.SqlFieldsQuery;
>> import org.apache.ignite.configuration.CacheConfiguration;
>> import org.apache.ignite.configuration.IgniteConfiguration;
>>
>> public class LoadTableWithDataStreamer {
>>     public static void main(String[] args) {
>>         try (Ignite ignite =
>> Ignition.start("E:\\ignite\\apache-ignite-2.7.0-src\\examples\\config\\example-ignite.xml")) {
>>             IgniteCache<Integer, String> personCache =
>>                 ignite.getOrCreateCache(new CacheConfiguration<Integer, String>("PUBLIC")
>>                     .setQueryEntities(Collections.singleton(
>>                         new QueryEntity(Integer.class, String.class)
>>                             .setTableName("person_table"))));
>>
>>             IgniteCache<Integer, String> placeCache =
>>                 ignite.getOrCreateCache(new CacheConfiguration<Integer, String>("PUBLIC")
>>                     .setQueryEntities(Collections.singleton(
>>                         new QueryEntity(Integer.class, String.class)
>>                             .setTableName("place_table"))));
>>
>>             try (IgniteDataStreamer<Integer, String> ds = ignite.dataStreamer("person")) {
>>                 ds.addData(1, "John");
>>             }
>>
>>             System.err.println("Query result");
>>             personCache.query(new SqlFieldsQuery("select * from person_table"))
>>                 .getAll().forEach(System.err::println);
>>         }
>>     }
>> }
>>
>> I want to create two queryable tables that I can load using the data
>> streamer. The config is the default config in the Java examples.
>>
>>
>> On Mon, Mar 11, 2019 at 3:06 AM Ilya Kasnacheev <
>> ilya.kasnach...@gmail.com> wrote:
>>
>>> Hello!
>>>
>>> Can you perhaps post your config and code? Much easier than writing my
>>> own boilerplate.
>>>
>>> Regards,
>>> --
>>> Ilya Kasnacheev
>>>
>>>
>>> Fri, Mar 8, 2019 at 00:36, Mike Needham :
>>>
 Would it be possible for someone to provide a sample that uses the
 DataStreamer for multiple large caches and the resulting caches are
 queryable from ODBC tools like Tableau and DBeaver?  I can get one to work,
 but cannot get the cache naming to work for a second one.

 On Thu, Mar 7, 2019 at 7:53 AM Ilya Kasnacheev 

Re: Ignite for a temporal (Open Source) NoSQL storage system

2019-03-15 Thread Johannes Lichtenberger

Hi,

I'm just evaluating what is best to use for a distributed in-memory and
on-disk data storage system / NoSQL data store.


That is for instance:

- single master receiving writes => distributing them (the transaction 
log) to a number of nodes synchronously and to all others asynchronously


- providing real ACID transactions (maybe locking database changes until
most (and we need to define what "most" means) nodes respond that they
wrote the value, as some nodes can simply fail / shut down / whatever).
The old revisions can be read regardless of the lock.


- if a transaction on a node in the cluster fails, send an event to a
queue to roll back the most recent revision (maybe, if a .commit-file
exists, remove the most recent revision up to the latest committed one).


- need to know on which node in the cluster a specific resource of a 
database resides (indexes are always part of the resource).


- sending events with exactly-once semantics, maybe

- maybe multi-master replication between two master nodes in different
networks (but that's maybe a nice-to-have for some time later).


...

But I'm sure you know a lot more about all the problems in distributed 
systems ;-)


As of now the storage system simply runs on a single node (and the storage
engine, very similar to how ZFS works internally with indirect blocks, has
been written from scratch). I want to distribute it in the future to
provide horizontal scalability like MongoDB (probably without the eventual
consistency), CockroachDB, Cassandra...


I know it's not simple and likely needs a few years, but I think it's 
doable :)


First, I guess, replicate a resource in a database to a bunch of nodes
within a transaction, then look into partitioning, and then into how to
ship queries to specific nodes...


kind regards

Johannes


On 15.03.19 11:51, Ilya Kasnacheev wrote:

Hello!

Unfortunately, after re-reading your message several times, I still do 
not understand:


- What did you actually do.
- Whether you have any questions for community.
- Whether you have any specific use cases to share.

Regards,
--
Ilya Kasnacheev


Fri, Mar 15, 2019 at 11:19, Johannes Lichtenberger
>:


Hi,

as we are working with Ignite in the company I work for, basically
for
in-memory Grids and horizontal scaling in the cloud I guess Ignite is
also a perfect fit for adding replication/partitioning to a temporal
NoSQL storage system capable of storing revisions of both XML- and
JSON-documents (could also store any other kind of data) in a binary
format efficiently (https://sirix.io or
https://github.com/sirixdb/sirix
-- at least for the storage part itself). It started as a university
project, but now I'm really eager to put forth the idea of keeping
the
history of your data as efficiently as possible (minimal storage- and
query-overhead within the same asymptotic space and time
complexity as
other database systems, which usually do not keep the history -- for
instance through a novel sliding snapshot algorithm and copy-on-write
semantics at the per page / per record level, heavily inspired by
ZFS).

Maybe for the query plan rewriting (the AST of the query) and
distribution Apache Spark is better suited, but for distributing
transaction logs and executing transactions I think Ignite is the
way to go.

What do you think?

I just have to finish the JSONiq query language implementation, but
after releasing 1.0 in summer and stabilizing the core as well as
keeping the APIs stable (and defining a spec for the binary
representation, saving some space in page-headers for future
encryption
at rest for instance) I'm eager to work on clustering for Sirix :-)

Oh and if you're interested, go ahead, clone it, download the Zip,
the
Docker image, whatever and let me know what you think :)

kind regards

Johannes



LIKE operator on Array column in Apache Ignite SQL

2019-03-15 Thread Hyma
Hi, 

Is there a way to do a LIKE query on an array field in Ignite SQL?

I am able to retrieve records with an exact search using the array_contains
function, e.g. select * from market where array_contains(name, 'LACL') - here
name is an array object.



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Deadlocked while Get/Set at the time ?

2019-03-15 Thread Ilya Kasnacheev
Hello!

Are you using putAll()? Please make sure to always pass a TreeMap (or other
sorted map) to putAll, since it is possible to get a deadlock when using a
HashMap (keys are unordered).

Regards,
-- 
Ilya Kasnacheev


Fri, Mar 15, 2019 at 06:18, Tâm Nguyễn Mạnh :

> Hi Igniters,
>
> Today I run a loadtest for my system that use Ignite as a Cache Service
> but 1 of my nodes got crashed because of DeadLocked.
>
> Could you please help me to prevent this.
>
> --
> Thanks & Best Regards
>
> Tam, Nguyen Manh
>
>


Re: Ignite for a temporal (Open Source) NoSQL storage system

2019-03-15 Thread Ilya Kasnacheev
Hello!

Unfortunately, after re-reading your message several times, I still do not
understand:

- What did you actually do.
- Whether you have any questions for community.
- Whether you have any specific use cases to share.

Regards,
-- 
Ilya Kasnacheev


Fri, Mar 15, 2019 at 11:19, Johannes Lichtenberger <
johannes.lichtenber...@unitedplanet.com>:

> Hi,
>
> as we are working with Ignite in the company I work for, basically for
> in-memory Grids and horizontal scaling in the cloud I guess Ignite is
> also a perfect fit for adding replication/partitioning to a temporal
> NoSQL storage system capable of storing revisions of both XML- and
> JSON-documents (could also store any other kind of data) in a binary
> format efficiently (https://sirix.io or https://github.com/sirixdb/sirix
> -- at least for the storage part itself). It started as a university
> project, but now I'm really eager to put forth the idea of keeping the
> history of your data as efficiently as possible (minimal storage- and
> query-overhead within the same asymptotic space and time complexity as
> other database systems, which usually do not keep the history -- for
> instance through a novel sliding snapshot algorithm and copy-on-write
> semantics at the per page / per record level, heavily inspired by ZFS).
>
> Maybe for the query plan rewriting (the AST of the query) and
> distribution Apache Spark is better suited, but for distributing
> transaction logs and executing transactions I think Ignite is the way to
> go.
>
> What do you think?
>
> I just have to finish the JSONiq query language implementation, but
> after releasing 1.0 in summer and stabilizing the core as well as
> keeping the APIs stable (and defining a spec for the binary
> representation, saving some space in page-headers for future encryption
> at rest for instance) I'm eager to work on clustering for Sirix :-)
>
> Oh and if you're interested, go ahead, clone it, download the Zip, the
> Docker image, whatever and let me know what you think :)
>
> kind regards
>
> Johannes
>
>


Re: Access a cache loaded by DataStreamer with SQL

2019-03-15 Thread Ilya Kasnacheev
Hello!

Yes, I understand your confusion here.

Take a look at the following elaborate snippet:

package org.apache.ignite.examples;
import java.util.Collections;
import java.util.LinkedHashMap;
import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.IgniteDataStreamer;
import org.apache.ignite.Ignition;
import org.apache.ignite.cache.QueryEntity;
import org.apache.ignite.cache.query.SqlFieldsQuery;
import org.apache.ignite.configuration.CacheConfiguration;

public class LoadTableWithDataStreamer {
    public static void main(String[] args) {
        try (Ignite ignite = Ignition.start()) {
            IgniteCache<Integer, String> personCache =
                ignite.getOrCreateCache(new CacheConfiguration<Integer, String>("person")
                    .setQueryEntities(Collections.singleton(
                        new QueryEntity(Integer.class, String.class)
                            .setTableName("person_table"))));

            IgniteCache<Integer, String> placeCache =
                ignite.getOrCreateCache(new CacheConfiguration<Integer, String>("place")
                    .setQueryEntities(Collections.singleton(
                        new QueryEntity(Integer.class, String.class)
                            .setTableName("place_table")
                            // more decoration
                            .setKeyFieldName("id").setValueFieldName("name")
                            .setFields(new LinkedHashMap<String, String>() {{
                                // Note that extending LinkedHashMap isn't production-ready
                                put("id", Integer.class.getCanonicalName());
                                put("name", String.class.getCanonicalName());
                            }}))));

            try (IgniteDataStreamer<Integer, String> ds = ignite.dataStreamer("person")) {
                ds.addData(1, "John");
            }

            try (IgniteDataStreamer<Integer, String> ds = ignite.dataStreamer("place")) {
                ds.addData(1, "Siberia");
            }

            System.err.println("Query result");
            personCache.query(new SqlFieldsQuery("select _key, _val from person_table"))
                .getAll().forEach(System.err::println);

            // refer to a different cache's table
            personCache.query(new SqlFieldsQuery("select id, name from \"place\".place_table"))
                .getAll().forEach(System.err::println);
        }
    }
}

Regards,
-- 
Ilya Kasnacheev


Thu, Mar 14, 2019 at 22:59, Mike Needham :

> Hi,
>
> Here is the code I am using
>
> package org.apache.ignite.examples;
> import java.util.Collections;
> import org.apache.ignite.Ignite;
> import org.apache.ignite.IgniteCache;
> import org.apache.ignite.IgniteDataStreamer;
> import org.apache.ignite.Ignition;
> import org.apache.ignite.cache.QueryEntity;
> import org.apache.ignite.cache.query.SqlFieldsQuery;
> import org.apache.ignite.configuration.CacheConfiguration;
> import org.apache.ignite.configuration.IgniteConfiguration;
>
> public class LoadTableWithDataStreamer {
>     public static void main(String[] args) {
>         try (Ignite ignite =
> Ignition.start("E:\\ignite\\apache-ignite-2.7.0-src\\examples\\config\\example-ignite.xml")) {
>             IgniteCache<Integer, String> personCache =
>                 ignite.getOrCreateCache(new CacheConfiguration<Integer, String>("PUBLIC")
>                     .setQueryEntities(Collections.singleton(
>                         new QueryEntity(Integer.class, String.class)
>                             .setTableName("person_table"))));
>
>             IgniteCache<Integer, String> placeCache =
>                 ignite.getOrCreateCache(new CacheConfiguration<Integer, String>("PUBLIC")
>                     .setQueryEntities(Collections.singleton(
>                         new QueryEntity(Integer.class, String.class)
>                             .setTableName("place_table"))));
>
>             try (IgniteDataStreamer<Integer, String> ds = ignite.dataStreamer("person")) {
>                 ds.addData(1, "John");
>             }
>
>             System.err.println("Query result");
>             personCache.query(new SqlFieldsQuery("select * from person_table"))
>                 .getAll().forEach(System.err::println);
>         }
>     }
> }
>
> I want to create two queryable tables that I can load using the data
> streamer. The config is the default config in the Java examples.
>
>
> On Mon, Mar 11, 2019 at 3:06 AM Ilya Kasnacheev 
> wrote:
>
>> Hello!
>>
>> Can you perhaps post your config and code? Much easier than writing my
>> own boilerplate.
>>
>> Regards,
>> --
>> Ilya Kasnacheev
>>
>>
>> Fri, Mar 8, 2019 at 00:36, Mike Needham :
>>
>>> Would it be possible for someone to provide a sample that uses the
>>> DataStreamer for multiple large caches and the resulting caches are
>>> queryable from ODBC tools like Tableau and DBeaver?  I can get one to work,
>>> but cannot get the cache naming to work for a second one.
>>>
>>> On Thu, Mar 7, 2019 at 7:53 AM Ilya Kasnacheev <
>>> ilya.kasnach...@gmail.com> wrote:
>>>
 Hello!

 JDBC with SET STREAMING ON should work reasonably well. You could also
 use CacheStore.

 With .Net I guess you will have to stick to DataStreamer & learn how to
 integrate it with Indexing properly. Or use JDBC from .Net.

 Regards,
 --
 Ilya Kasnacheev


 Thu, Mar 7, 2019 at 16:31, Mike Needham :

> 

Ignite for a temporal (Open Source) NoSQL storage system

2019-03-15 Thread Johannes Lichtenberger

Hi,

As we are working with Ignite in the company I work for, basically for
in-memory grids and horizontal scaling in the cloud, I guess Ignite is
also a perfect fit for adding replication/partitioning to a temporal 
NoSQL storage system capable of storing revisions of both XML- and 
JSON-documents (could also store any other kind of data) in a binary 
format efficiently (https://sirix.io or https://github.com/sirixdb/sirix 
-- at least for the storage part itself). It started as a university 
project, but now I'm really eager to put forth the idea of keeping the 
history of your data as efficiently as possible (minimal storage- and 
query-overhead within the same asymptotic space and time complexity as 
other database systems, which usually do not keep the history -- for 
instance through a novel sliding snapshot algorithm and copy-on-write 
semantics at the per page / per record level, heavily inspired by ZFS).


Maybe for the query plan rewriting (the AST of the query) and 
distribution Apache Spark is better suited, but for distributing 
transaction logs and executing transactions I think Ignite is the way to go.


What do you think?

I just have to finish the JSONiq query language implementation, but 
after releasing 1.0 in summer and stabilizing the core as well as 
keeping the APIs stable (and defining a spec for the binary 
representation, saving some space in page-headers for future encryption 
at rest for instance) I'm eager to work on clustering for Sirix :-)


Oh and if you're interested, go ahead, clone it, download the Zip, the 
Docker image, whatever and let me know what you think :)


kind regards

Johannes