Re: Question on BinaryObject

2023-01-17 Thread Pavel Tupitsyn
Hello, yes, your understanding is correct.

On Tue, Jan 17, 2023 at 6:46 PM Peter  wrote:

> Hello,
>
> Do I understand correctly, that each BinaryObject that is returned by
> IgniteCache.get() and  IgniteCache.getAll() method calls on a local node
> contains an internal on-heap byte array, and object unmarshalling occurs
> from that array, and not from off-heap memory?
>


Question on BinaryObject

2023-01-17 Thread Peter
 Hello,

Do I understand correctly, that each BinaryObject that is returned by
IgniteCache.get() and  IgniteCache.getAll() method calls on a local node
contains an internal on-heap byte array, and object unmarshalling occurs
from that array, and not from off-heap memory?
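Pavel's confirmation can be made concrete with a short sketch. The cache name "people", the Integer key, and the "name" field are illustrative assumptions, and the code needs a running Ignite node: with withKeepBinary(), get() returns a BinaryObject, and field reads deserialize lazily from the object's internal byte array.

```java
import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.Ignition;
import org.apache.ignite.binary.BinaryObject;

public class KeepBinaryExample {
    public static void main(String[] args) {
        try (Ignite ignite = Ignition.start()) {
            // withKeepBinary() makes get()/getAll() return BinaryObject
            // instances instead of deserialized POJOs; each returned object
            // wraps an on-heap byte[] copy of the marshalled entry.
            IgniteCache<Integer, BinaryObject> cache =
                ignite.getOrCreateCache("people").withKeepBinary();

            BinaryObject person = cache.get(1);
            if (person != null) {
                // Field access unmarshals from the wrapped byte array.
                String name = person.field("name");
                System.out.println(name);
            }
        }
    }
}
```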


Re:Re: BinaryObject Data Can Not Mapping To SQL-Data

2022-04-20 Thread y
Hi Vladimir,
Yes, my first name is Tianyue... In fact, I work for an ERP company. ERP, you 
know: "there are strict latency requirements for data processing and for the 
data-loading phase", in your words. I have deployed Ignite in a small, simple 
production environment. Now I need to deploy another one in a more complex 
environment which has xx TB of data. For some business reasons, the previous code 
is no longer appropriate. The current plan is 'ComputeTask + BinarySerializer', 
but it also has some problems, which I will describe in the future. Besides, I 
have tried to customize CacheStore, and failed :(
To be honest, since I graduated from university only three years ago and my 
technical level is limited, exploring Ignite often frustrates me. (Damn it, why 
is it wrong again! Why?)

Tianyue Hu,
2022/4/20

PS: Zhenya sounds really like a Chinese name, maybe a Russian Chinese, just 
kidding :)







On 2022-04-20 12:50:42, vtcher...@gmail.com wrote:

Hi Tianyue,


IMHO a fully compilable project is useful for a newbie, while short code snippets 
are not. You can start a single-server cluster, debug the code in your IDE, and 
check some suggestions about how it works. A couple of years ago I found such a 
compilable project describing microservices, written by @DenisMagda; it helped a 
lot. So I hope my post will be useful.



Vladimir
PS
I know that out there in China the last name is written first, while here in 
Russia it is written last, so the names are Vladimir or Zhenya. Hope I am correct 
and your name is Tianyue.

5:12, 20 April 2022, y:





Hi Stanilovsky,

I don't know how to describe my problem to you, but I'm sure there is no error: 
the data was successfully inserted but not mapped to SQL data. Vladimir gave me 
a link: 
https://www.gridgain.com/resources/blog/how-fast-load-large-datasets-apache-ignite-using-key-value-api.
I decided to take a look at that link first.

Anyway, thanks for your advice, and I hope I can help you in the future.

Tianyue Hu,
2022/4/20













On 2022-04-19 14:50:51, "Zhenya Stanilovsky" wrote:

hi!
BinaryObjectBuilder oldBuilder = 
igniteClient.binary().builder("com.inspur...PubPartitionKeys_1_7");
 
do you call:
 
oldBuilder.build(); // after?
 
If so — what does this mean? Is "data is not mapped to sql" an error in the log, 
on the client side, or something else?
 
thanks!


Hi,
 
I have had the same experience without SQL, using the KV API only. My cluster 
consists of several data nodes and a self-written jar application that starts 
the client node. When started, the client node executes MapReduce tasks for data 
load and processing.
 
The workaround is as follows:
1. create a POJO on the client node;
2. convert it to a binary object;
3. on the data node, get the binary object over the network and get its builder 
(obj.toBuilder());
4. set some fields, build, and put it in the cache.
 
The builder on step 3 seems to be the same as the one on the client node.
 
Hope that helps,
Vladimir
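The four-step workaround above can be sketched as follows. This is a minimal, hedged sketch, not the poster's actual code: the cache name "people", the field name "loadedAt", and the key value are illustrative assumptions, and it needs a running Ignite node.

```java
import org.apache.ignite.Ignite;
import org.apache.ignite.binary.BinaryObject;
import org.apache.ignite.binary.BinaryObjectBuilder;

public class BuilderWorkaroundSketch {
    static void loadOne(Ignite ignite, Object pojo) {
        // Steps 1-2. On the client node: create a POJO and convert it to binary.
        BinaryObject template = ignite.binary().toBinary(pojo);

        // Step 3. On the data node (after the object has arrived over the
        // network): obtain a builder from the received binary object...
        BinaryObjectBuilder builder = template.toBuilder();

        // Step 4. ...set some fields, then build and put in the cache.
        builder.setField("loadedAt", System.currentTimeMillis());
        ignite.cache("people").withKeepBinary().put(1, builder.build());
    }
}
```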
 
13:06, 18 April 2022, y:
Hi,
When using binary to insert data, I need to get an existing 
BinaryObject/BinaryObjectBuilder from the database, similar to the code below.
[inline image: code snippet]

If I create a BinaryObjectBuilder directly, inserting binary data does not map 
to table data. The following code will not throw an error, but the data is not 
mapped to SQL. If there is no data in my table at first, how can I insert data?
[inline image: code snippet]

 
 

 



--
Sent from the Yandex Mail mobile app
 
 
 
 




 




Re: BinaryObject Data Can Not Mapping To SQL-Data

2022-04-19 Thread vtchernyi
Hi Tianyue,

IMHO a fully compilable project is useful for a newbie, while short code snippets 
are not. You can start a single-server cluster, debug the code in your IDE, and 
check some suggestions about how it works. A couple of years ago I found such a 
compilable project describing microservices, written by @DenisMagda; it helped a 
lot. So I hope my post will be useful.

Vladimir

PS: I know that out there in China the last name is written first, while here in 
Russia it is written last, so the names are Vladimir or Zhenya. Hope I am correct 
and your name is Tianyue.

Re: BinaryObject Data Can Not Mapping To SQL-Data

2022-04-19 Thread y



Hi Stanilovsky,

I don't know how to describe my problem to you, but I'm sure there is no error: 
the data was successfully inserted but not mapped to SQL data. Vladimir gave me 
a link: 
https://www.gridgain.com/resources/blog/how-fast-load-large-datasets-apache-ignite-using-key-value-api.
I decided to take a look at that link first.

Anyway, thanks for your advice, and I hope I can help you in the future.

Tianyue Hu,
2022/4/20













On 2022-04-19 14:50:51, "Zhenya Stanilovsky" wrote:

hi!
BinaryObjectBuilder oldBuilder = 
igniteClient.binary().builder("com.inspur...PubPartitionKeys_1_7");
 
do you call:
 
oldBuilder.build(); // after?
 
If so — what does this mean? Is "data is not mapped to sql" an error in the log, 
on the client side, or something else?
 
thanks!


Hi,
 
I have had the same experience without SQL, using the KV API only. My cluster 
consists of several data nodes and a self-written jar application that starts 
the client node. When started, the client node executes MapReduce tasks for data 
load and processing.
 
The workaround is as follows:
1. create a POJO on the client node;
2. convert it to a binary object;
3. on the data node, get the binary object over the network and get its builder 
(obj.toBuilder());
4. set some fields, build, and put it in the cache.
 
The builder on step 3 seems to be the same as the one on the client node.
 
Hope that helps,
Vladimir
 
13:06, 18 April 2022, y:
Hi,
When using binary to insert data, I need to get an existing 
BinaryObject/BinaryObjectBuilder from the database, similar to the code below.
[inline image: code snippet]

If I create a BinaryObjectBuilder directly, inserting binary data does not map 
to table data. The following code will not throw an error, but the data is not 
mapped to SQL. If there is no data in my table at first, how can I insert data?
[inline image: code snippet]

 
 

 



 
 
 
 

Re:Re: Re: BinaryObject Data Can Not Mapping To SQL-Data

2022-04-19 Thread y
Hello Vladimir,

Thanks for your reply; I will carefully study the link you sent to see how it is 
written differently from mine. If there is a problem, I will ask for help again 
(hope not).
By the way, I am not Russian. I am a young developer from China and have studied 
Ignite for more than a year : )


Tianyue Hu
2022/4/20




On 2022-04-19 17:12:39, "Vladimir Tchernyi" wrote:

Hello Huty,
please read my post [1]. The approach in that paper has worked successfully in 
production for more than one year and seems to be correct.
[1] 
https://www.gridgain.com/resources/blog/how-fast-load-large-datasets-apache-ignite-using-key-value-api


Vladimir
telegram @vtchernyi


PS
hope I named you correctly; the name is not widespread here in Russia


Tue, 19 Apr 2022 at 09:46, y:

Hi Vladimir,
Thank you for your answer. Emmm... Actually, most of my methods are the same 
as yours, except for the following two points:
1. I didn't use ComputeTask. The data is sent to the server node through the 
thin client.

2. I didn't use a standard POJO. The key type is the following code and the value 
type is an empty class. That means all columns are dynamically specified through 
BinaryObjectBuilder. 

public class PubPartionKeys_1_7 {
@AffinityKeyMapped
private String TBDATA_DX01;
private String TBDATA_DX02;
private String TBDATA_DX03;
private String TBDATA_DX04;
private String TBDATA_DX05;
private String TBDATA_DX06;
private String TBDATA_DX07;

public PubPartionKeys_1_7() {

}

// get/set method 
// .

}
I would appreciate it very much if you attach your code back! :)

Huty,
2022/4/19




At 2022-04-19 12:40:20, vtcher...@gmail.com wrote:

Hi,


I have had the same experience without sql, using KV API only. My cluster 
consists of several data nodes and self-written jar application that starts the 
client node. When started, client node executes mapreduce tasks for data load 
and processing.


The workaround is as follows:
1. create POJO on the client node;
2. convert it to the binary object;
3. on the data node, get binary object over the network and get its builder 
(obj.toBuilder());
4. set some fields, build and put in the cache.


The builder on step 3 seems to be the same as the one on the client node.


Hope that helps,
Vladimir


13:06, 18 April 2022, y:

Hi,
When using binary to insert data, I need to get an existing 
BinaryObject/BinaryObjectBuilder from the database, similar to the code below.
[inline image: code snippet]

If I create a BinaryObjectBuilder directly, inserting binary data does not map 
to table data. The following code will not throw an error, but the data is not 
mapped to SQL. If there is no data in my table at first, how can I insert data?
[inline image: code snippet]







 







 

Re: Re: BinaryObject Data Can Not Mapping To SQL-Data

2022-04-19 Thread Vladimir Tchernyi
Hello Huty,
please read my post [1]. The approach in that paper has worked successfully in
production for more than one year and seems to be correct.
[1]
https://www.gridgain.com/resources/blog/how-fast-load-large-datasets-apache-ignite-using-key-value-api

Vladimir
telegram @vtchernyi

PS
hope I named you correctly; the name is not widespread here in Russia

Tue, 19 Apr 2022 at 09:46, y:

> Hi Vladimir,
> Thank you for your answer. Emmm... Actually, most of my methods are the
> same as yours, except for the following two points:
> 1. I didn't use ComputeTask. The data is sent to the server node through
> the thin client.
>
> 2. I didn't use a standard POJO. The key type is the following code and
> the value type is an empty class. That means all columns are dynamically
> specified through BinaryObjectBuilder.
>
> public class PubPartionKeys_1_7 {
> @AffinityKeyMapped
> private String TBDATA_DX01;
> private String TBDATA_DX02;
> private String TBDATA_DX03;
> private String TBDATA_DX04;
> private String TBDATA_DX05;
> private String TBDATA_DX06;
> private String TBDATA_DX07;
>
> public PubPartionKeys_1_7() {
> }
>
> // get/set method
> // .
> }
>
> I would appreciate it very much if you attach your code back! :)
>
> Huty,
> 2022/4/19
>
>
> At 2022-04-19 12:40:20, vtcher...@gmail.com wrote:
>
> Hi,
>
> I have had the same experience without sql, using KV API only. My cluster
> consists of several data nodes and self-written jar application that starts
> the client node. When started, client node executes mapreduce tasks for
> data load and processing.
>
> The workaround is as follows:
> 1. create POJO on the client node;
> 2. convert it to the binary object;
> 3. on the data node, get binary object over the network and get its
> builder (obj.toBuilder());
> 4. set some fields, build and put in the cache.
>
> The builder on step 3 seems to be the same as the one on the client
> node.
>
> Hope that helps,
> Vladimir
>
> 13:06, 18 April 2022, y:
>
> Hi,
> When using binary to insert data, I need to get an existing
> BinaryObject/BinaryObjectBuilder from the database, similar to
> the code below.
> [inline image: code snippet]
>
> If I create a BinaryObjectBuilder directly, inserting binary data does not
> map to table data. The following code will not throw an error, but the data
> is not mapped to SQL. If there is no data in my table at first, how
> can I insert data?
> [inline image: code snippet]
>
>
>
>
>
>
>
>
>
>
>
>
>


Re[2]: BinaryObject Data Can Not Mapping To SQL-Data

2022-04-19 Thread Zhenya Stanilovsky

hi!
BinaryObjectBuilder oldBuilder = 
igniteClient.binary().builder("com.inspur...PubPartitionKeys_1_7");
 
do you call:
 
oldBuilder.build(); // after?
 
If so — what does this mean? Is "data is not mapped to sql" an error in the log, 
on the client side, or something else?
 
thanks!
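A common cause of "data is not mapped to sql" with this pattern is a mismatch between the builder's binary type name and the value type declared in the cache's QueryEntity. A minimal, hedged sketch of the matching setup (cache, table, and field names here are illustrative assumptions, not the poster's actual configuration; it needs a running Ignite node):

```java
import java.util.Collections;
import java.util.LinkedHashMap;

import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.binary.BinaryObject;
import org.apache.ignite.cache.QueryEntity;
import org.apache.ignite.configuration.CacheConfiguration;

public class BuilderSqlMappingSketch {
    static IgniteCache<String, BinaryObject> create(Ignite ignite) {
        LinkedHashMap<String, String> fields = new LinkedHashMap<>();
        fields.put("TBDATA_DX01", "java.lang.String");

        // The value type registered for SQL...
        QueryEntity qe = new QueryEntity("java.lang.String", "PubPartionKeys_1_7");
        qe.setFields(fields);

        CacheConfiguration<String, BinaryObject> cfg =
            new CacheConfiguration<String, BinaryObject>("pub")
                .setQueryEntities(Collections.singleton(qe));

        return ignite.<String, BinaryObject>getOrCreateCache(cfg).withKeepBinary();
    }

    static void insert(Ignite ignite, IgniteCache<String, BinaryObject> cache) {
        // ...must match the builder's type name exactly, or SQL queries
        // will not see the inserted rows.
        BinaryObject row = ignite.binary().builder("PubPartionKeys_1_7")
            .setField("TBDATA_DX01", "v1")
            .build();
        cache.put("k1", row);
    }
}
```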
>Hi,
> 
>I have had the same experience without sql, using KV API only. My cluster 
>consists of several data nodes and self-written jar application that starts 
>the client node. When started, client node executes mapreduce tasks for data 
>load and processing.
> 
>The workaround is as follows:
>1. create POJO on the client node;
>2. convert it to the binary object;
>3. on the data node, get binary object over the network and get its builder 
>(obj.toBuilder());
>4. set some fields, build and put in the cache.
> 
>The builder on step 3 seems to be the same as the one on the client node.
> 
>Hope that helps,
>Vladimir
>  13:06, 18 апреля 2022 г., y < hty1994...@163.com >:
>>Hi,
>>When using binary to insert data, I need to get an existing
>>BinaryObject/BinaryObjectBuilder from the database, similar to the code
>>below.
>>[inline image: code snippet]
>>
>>If I create a BinaryObjectBuilder directly, inserting binary data does not
>>map to table data. The following code will not throw an error, but the data
>>is not mapped to SQL. If there is no data in my table at first, how can I
>>insert data?
>>[inline image: code snippet]
>>
>>   
>> 
>
 
 
 
 

Re:Re: BinaryObject Data Can Not Mapping To SQL-Data

2022-04-19 Thread y
Hi Vladimir,
Thank you for your answer. Emmm... Actually, most of my methods are the same 
as yours, except for the following two points:
1. I didn't use ComputeTask. The data is sent to the server node through the 
thin client.

2. I didn't use a standard POJO. The key type is the following code and the value 
type is an empty class. That means all columns are dynamically specified through 
BinaryObjectBuilder. 

public class PubPartionKeys_1_7 {
@AffinityKeyMapped
private String TBDATA_DX01;
private String TBDATA_DX02;
private String TBDATA_DX03;
private String TBDATA_DX04;
private String TBDATA_DX05;
private String TBDATA_DX06;
private String TBDATA_DX07;

public PubPartionKeys_1_7() {

}

// get/set method 
// .

}
I would appreciate it very much if you attach your code back! :)

Huty,
2022/4/19




At 2022-04-19 12:40:20, vtcher...@gmail.com wrote:

Hi,


I have had the same experience without SQL, using the KV API only. My cluster 
consists of several data nodes and a self-written jar application that starts 
the client node. When started, the client node executes MapReduce tasks for data 
load and processing.


The workaround is as follows:
1. create a POJO on the client node;
2. convert it to a binary object;
3. on the data node, get the binary object over the network and get its builder 
(obj.toBuilder());
4. set some fields, build, and put it in the cache.


The builder on step 3 seems to be the same as the one on the client node.


Hope that helps,
Vladimir


13:06, 18 April 2022, y:

Hi,
When using binary to insert data, I need to get an existing 
BinaryObject/BinaryObjectBuilder from the database, similar to the code below.
[inline image: code snippet]

If I create a BinaryObjectBuilder directly, inserting binary data does not map 
to table data. The following code will not throw an error, but the data is not 
mapped to SQL. If there is no data in my table at first, how can I insert data?
[inline image: code snippet]







 




Re: BinaryObject Data Can Not Mapping To SQL-Data

2022-04-18 Thread vtchernyi
Hi,

I have had the same experience without SQL, using the KV API only. My cluster 
consists of several data nodes and a self-written jar application that starts 
the client node. When started, the client node executes MapReduce tasks for data 
load and processing.

The workaround is as follows:
1. create a POJO on the client node;
2. convert it to a binary object;
3. on the data node, get the binary object over the network and get its builder 
(obj.toBuilder());
4. set some fields, build, and put it in the cache.

The builder on step 3 seems to be the same as the one on the client node.

Hope that helps,
Vladimir

BinaryObject Data Can Not Mapping To SQL-Data

2022-04-18 Thread y
Hi,
When using binary to insert data, I need to get an existing 
BinaryObject/BinaryObjectBuilder from the database, similar to the code below.


If I create a BinaryObjectBuilder directly, inserting binary data does not map 
to table data. The following code will not throw an error, but the data is not 
mapped to SQL. If there is no data in my table at first, how can I insert data?





[2.7.6] Question about BinaryObject/SQL schema

2021-01-26 Thread maxi628
Hello everyone.

I'm creating a partitioned cache with persistence enabled.
I use Java's thin client to connect my app to Ignite.
I have a pool of reusable connections, as suggested in the documentation,
since IgniteClient isn't thread-safe.
After the cache is created, I use SQL "ALTER TABLE" to change its schema
to drop/create columns, and I also create some indexes if needed.
I use the cache API to populate the cache.
Now, if I have any opened connections in my pool after doing the ALTER TABLE,
those binary types keep the old schema, which leads to problems.

Sometimes I get:


An example would be:
- create connection A
- create cache1 (let's call it cache1-A)
- create connection B
- get cache1 from connection B (let's call it cache1-B)
- And here I can either
- run alter table on cache1-B and add/drop a column
- make a cache.put(1, binaryObjectBuilder with new field)

Now if I get the binary().types() from each connection, they are different,
in the sense that one of those two is missing the new field.

Is there any way to refresh/update the binary types of an existing thin
client connection?



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: BinaryObject with '.' (dots) in field names

2020-10-02 Thread akurbanov
Hello Scott,

Unfortunately, there is no workaround available to safely use a dot within a
BinaryObject field name.

I am not aware of any other such characters; the dot seems to be the only symbol
that affects how a BinaryObject is aligned.

Best regards,
Anton





Re: BinaryObject with '.' (dots) in field names

2020-10-02 Thread scottmf
Thanks Anton.

We can deal with it by using a placeholder for the dot when interacting with
Ignite (since our notation already uses dots).

Going back to the questions:

1. It sounds like we cannot work around this limitation since it is a
reserved character; is that correct?
2. Are there any other characters that we should be aware of besides the '.'?

thanks.





Re: BinaryObject with '.' (dots) in field names

2020-10-02 Thread akurbanov
Hello Scott, 

I would recommend sticking with the underscore character, as the dot (.) is
reserved for referencing a nested object in the QueryEntity. When you mark the
query entity field as "org.id", it expects that there is a field named "org" in
this object that has a nested field "id".

The field names in the query will be flattened, and in this case you will end up
with 3 fields in your QueryEntity type descriptor instead of the expected 5:
"id", "name", and "owner", which all have the same parent object "org" that is
missing in this object (the field should be a BinaryObject field named "org"
with 3 nested fields); thus, the returned data will be evaluated as nulls.

Best regards,
Anton
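The flattening rule Anton describes can be illustrated with a small self-contained sketch. This is not Ignite's actual implementation, just a mimic of the nested-path lookup: a name containing a dot is resolved as a nested path, so a flat field literally named "org.id" is never found and evaluates to null.

```java
import java.util.Collections;
import java.util.HashMap;
import java.util.Map;

public class DotFieldDemo {
    // Mimics how a dotted name is treated as a nested path:
    // "org.id" is split on '.' and each part is looked up in turn.
    static Object resolve(Map<String, Object> obj, String path) {
        Object cur = obj;
        for (String part : path.split("\\.")) {
            if (!(cur instanceof Map))
                return null;                  // path walks off the object
            cur = ((Map<?, ?>) cur).get(part);
        }
        return cur;
    }

    public static void main(String[] args) {
        // A flat object whose field name literally contains a dot:
        Map<String, Object> flat = new HashMap<>();
        flat.put("org.id", "42");

        // A nested object: a field "org" holding a field "id":
        Map<String, Object> nested = new HashMap<>();
        nested.put("org", Collections.singletonMap("id", "42"));

        System.out.println(resolve(flat, "org.id"));   // null: no "org" field
        System.out.println(resolve(nested, "org.id")); // 42
    }
}
```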





BinaryObject with '.' (dots) in field names

2020-10-02 Thread scottmf
Hi,
Running the code below doesn't work properly unless I change the field names
from '.' (dots) to '_' (underscores).
Questions:
1. What are the restrictions around field names? In other words, are there
other characters that I can't use?
2. Is there a way to work around this and use dots in the field names?

thanks!
Ignite ignite = Ignition.start();
String id = "id";
String name = "name";
String orgId = "org.id";
String orgName = "org.name";
String orgOwner = "org.owner";

CacheConfiguration<String, BinaryObject> cfg = new CacheConfiguration<>();
cfg.setName("deployment");

LinkedHashMap<String, String> fields = new LinkedHashMap<>();
fields.put("id", "java.lang.String");
fields.put("name", "java.lang.String");
fields.put("org.id", "java.lang.String");
fields.put("org.name", "java.lang.String");
fields.put("org.owner", "java.lang.String");

QueryEntity queryEntity = new QueryEntity();
queryEntity.setKeyType("java.lang.String");
queryEntity.setValueType("deployment");
queryEntity.setFields(fields);
cfg.setQueryEntities(Collections.singleton(queryEntity));

IgniteCache<String, BinaryObject> deployment =
ignite.getOrCreateCache(cfg).withKeepBinary();

BinaryObject binaryObject = ignite.binary().builder("deployment")
    .setField("id", id)
    .setField("name", name)
    .setField("org.id", orgId)
    .setField("org.name", orgName)
    .setField("org.owner", orgOwner)
    .build();

deployment.put(id, binaryObject);
Object o = deployment.get("id");
System.err.println("cache -> " + o);

FieldsQueryCursor<List<?>> query = deployment.query(new SqlFieldsQuery("select * from deployment"));
System.err.println(query.getAll());

output:
cache -> deployment [idHash=1984294974, hash=1137170470, id=id, name=name,
org.id=org.id, org.name=org.name, org.owner=org.owner]
[[null, null, null]]





RE: Unable to read fields value from BinaryObject

2020-09-11 Thread Alexandr Shapkin
Hi,

Just for the record, I noticed that the following tickets have been filed:
https://issues.apache.org/jira/browse/IGNITE-13415
https://issues.apache.org/jira/browse/IGNITE-13436

Also, it seems the consumer would work if you do at least one cache#put operation.

-Alex Shapkin


Re: Unable to read fields value from BinaryObject

2020-09-10 Thread Alexandr Shapkin
Hi,

Yes, I can reproduce the behavior. It could be an issue with the custom
binary mappers. 

The weird thing is that the CustomBinaryIdMapper works fine with a single
node, i.e. if it reads and writes the value from a cache, then the mapper is
working. But fails on the other node.

Meanwhile, it's quite interesting, why do you want to replace the default
mapper with a custom one? What tasks are you trying to solve?



-
Alex Shapkin


Unable to read fields value from BinaryObject

2020-09-07 Thread Wasim Bari
We are facing an issue while retrieving values from binary objects when we use
a custom id mapper.

We have two Ignite clients:
1) Producer --> creates a cache and adds entries to it.
2) Consumer --> consumes cache entries created by the producer; reads data in
BinaryObject form and extracts values from it.

We notice that if we provide a custom id mapper, the consumer is not able to
fetch values from the BinaryObject; it always returns null. However, if we do
not provide a custom id mapper, then we are able to fetch values from the
BinaryObject in the consumer. Below are the Java classes for your reference.

Employee Class
package com.myorg.ignite.producerconsumer;

import java.io.Serializable;
public class Employee implements Serializable{
private String empName;
private String deparment;
private float height;
private int experience;

public Employee() {

}
public Employee(String empName, String department, float height, int
experience) {
this.empName = empName;
this.deparment = department;
this.height = height;
this.experience = experience;
}
public String getEmpName() {
return empName;
}
public void setEmpName(String empName) {
this.empName = empName;
}
public String getDeparment() {
return deparment;
}
public void setDeparment(String deparment) {
this.deparment = deparment;
}
public float getHeight() {
return height;
}
public void setHeight(float height) {
this.height = height;
}
public int getExperience() {
return experience;
}
public void setExperience(int experience) {
this.experience = experience;
}

}

Utility class for cache creation
package com.myorg.ignite.producerconsumer;

import java.util.Arrays;
import org.apache.ignite.Ignite;
import org.apache.ignite.Ignition;
import org.apache.ignite.configuration.BinaryConfiguration;
import org.apache.ignite.configuration.IgniteConfiguration;
import org.apache.ignite.logger.slf4j.Slf4jLogger;
import org.apache.ignite.spi.communication.tcp.TcpCommunicationSpi;
import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi;
import
org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder;

import com.myorg.ignite.custommapper.CustomBinaryIdMapper;

public class CacheUtil {
public static String cacheName = "mycache";
public static String instanceName = "local-ignite";

  public static Ignite getIgniteInstance() {
IgniteConfiguration cfg = new IgniteConfiguration();
cfg.setIgniteInstanceName(instanceName);
cfg.setClientMode(true);
cfg.setGridLogger(new Slf4jLogger());
cfg.setBinaryConfiguration(getBinaryConfiguration());


TcpDiscoverySpi spi = new TcpDiscoverySpi();
TcpDiscoveryVmIpFinder ipFinder = new 
TcpDiscoveryVmIpFinder();
ipFinder.setAddresses(Arrays.asList("localhost"));
spi.setLocalPort(37508);
spi.setLocalPortRange(10);
TcpCommunicationSpi commSpi=new TcpCommunicationSpi();

commSpi.setLocalPort(37509);
spi.setIpFinder(ipFinder);

cfg.setDiscoverySpi(spi);
cfg.setCommunicationSpi(commSpi);

return Ignition.start(cfg);
}
  
  private static BinaryConfiguration getBinaryConfiguration() {
BinaryConfiguration bc = new BinaryConfiguration();
CustomBinaryIdMapper imlIdMapper = new 
CustomBinaryIdMapper();
bc.setIdMapper(imlIdMapper);
return bc;
  }

}

Producer class

package com.myorg.ignite.producerconsumer;

import java.io.Serializable;
import javax.cache.Cache;
import org.apache.ignite.Ignite;
import org.apache.ignite.configuration.CacheConfiguration;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class CacheProducer {
private static final Logger LOG =
LoggerFactory.getLogger(CacheProducer.class);
public static void main(String ...s) {
Ignite ignite = CacheUtil.getIgniteInstance();
Cache<String, Employee> cache = createCache(CacheUtil.cacheName, ignite);
cache.getAndPut("emp1", new Employee("David", "Transport", 5.8f, 5));
cache.close();
}
  private static Cache createCache(String 
cacheName,
Ignite 

Re: BinaryObject field is not update

2020-05-18 Thread Ilya Kasnacheev
Hello!

Before put() is completed, the Binary Object Schema is not created/updated, so
the type names are not reflected there.

Unfortunately, if you need to read all fields of binary objects, you may need
to use internal APIs (which may not work as expected or may change between
releases), such as, in this case, BinaryObjectImpl.createSchema().fieldIds()
and BinaryObjectImpl.field(int fieldId).

I recommend putting any variable properties in a map field as opposed to
re-building the binary object on the fly.

Regards,
-- 
Ilya Kasnacheev
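Ilya's map-field suggestion can be sketched like this. It is a hedged illustration, not code from the thread: the type name "DynamicRecord", the field name "props", and the cache name "data" are assumptions, and it needs a running Ignite node. Keeping variable attributes inside one map field means the binary schema never grows per put, so type().fieldNames() stays stable.

```java
import java.util.HashMap;
import java.util.Map;

import org.apache.ignite.Ignite;
import org.apache.ignite.binary.BinaryObject;

public class MapFieldSketch {
    static void putWithDynamicProps(Ignite ignite) {
        Map<String, Object> props = new HashMap<>();
        props.put("AddField", 123);   // the variable property lives in the map,
                                      // not as a new top-level binary field

        BinaryObject obj = ignite.binary().builder("DynamicRecord")
            .setField("props", props) // the schema always has this one field
            .build();

        ignite.cache("data").withKeepBinary().put(1, obj);
    }
}
```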


Sat, 9 May 2020 at 18:46, takumi:

> Hello. As I am in trouble, please help me.
>
> I use Java ThinClient and Ignite Cache Server.
> In client code, I update BinaryObject by BinaryObjectBuilder API.
> ex) BinaryObject bo = clientCache.get(KEY);
>  clientCache.put(KEY, bo.toBuilder().setField("AddField",
> ...).build());
>
> But MyCacheInterceptor#onBeforePut(entry, newVal) does not get an updated
> newVal.type().fieldNames():
> newVal.type().fieldNames() does not return the "AddField" field.
>
> I can get newVal.field("AddField");
>
> How can I update type().fieldNames() result?
> I want to get a "newVal" in response list of type().fieldNames().
>
> Sorry, I am weak in English, and a sentence is not good.
>
>
>
>


Re: BinaryObject field is not update

2020-05-14 Thread akorensh
Hi,
  See:
https://github.com/apache/ignite/blob/master/examples/src/main/java/org/apache/ignite/examples/binary/datagrid/CacheClientBinaryPutGetExample.java


  and:
https://ignite.apache.org/releases/latest/javadoc/org/apache/ignite/binary/BinaryObject.html

  https://apacheignite.readme.io/docs/binary-marshaller

https://github.com/apache/ignite/blob/master/examples/src/main/java/org/apache/ignite/examples/datagrid/CacheEntryProcessorExample.java
  
 
https://ignite.apache.org/releases/latest/javadoc/org/apache/ignite/cache/CacheEntry.html


  If you are still having issues, please send a full reproducer and I'll
take a look.
Thanks, Alex

   



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


BinaryObject field is not update

2020-05-09 Thread takumi
Hello. As I am in trouble, please help me.

I use Java ThinClient and Ignite Cache Server.
In client code, I update BinaryObject by BinaryObjectBuilder API.
ex) BinaryObject bo = clientCache.get(KEY);
 clientCache.put(KEY, bo.toBuilder().setField("AddField",
...).build());

But MyCacheInterceptor#onBeforePut(entry, newVal) is not update
newVal.type().fieldNames().
newVal.type().fieldNames() does not return "AddField" field.

I can get newVal.field("AddField");

How can I update the type().fieldNames() result?
I want the new field to appear in the list returned by type().fieldNames().

Sorry, my English is not strong, so my wording may not be clear.



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Ignite crashes with CorruptedTreeException: "B+Tree is corrupted" on a composite BinaryObject scenario

2020-04-24 Thread Ilya Kasnacheev
Hello!

I have added comment to this ticket describing why does it happen and how
to fix it.

In Ignite, when you get key from your value binary object, it's actually a
wrapper pointing at some position in its parent (value) binary object. When
you try to put it to cache, indexing cannot process it correctly.

We should add code which tries to detach such objects, and throws some
exception when it cannot be detached (such as, if it references another
object inside parent)
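A sketch of the detaching workaround described above (type and field names are made up, loosely following the reproducer later in this thread): a key extracted from a value is a wrapper pointing into the parent's byte array, so rebuilding it with toBuilder().build() produces a standalone object that is safe to use as a cache key.

```java
import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.Ignition;
import org.apache.ignite.binary.BinaryObject;

public class DetachedKeySketch {
    public static void main(String[] args) {
        try (Ignite ignite = Ignition.start()) {
            IgniteCache<BinaryObject, BinaryObject> cache =
                ignite.<BinaryObject, BinaryObject>getOrCreateCache("employees").withKeepBinary();

            BinaryObject key = ignite.binary().builder("EmployeeId")
                .setField("employeeNumber", 65348765)
                .setField("departmentNumber", 123)
                .build();

            BinaryObject emp = ignite.binary().builder("Employee")
                .setField("firstName", "John")
                .setField("id", key) // the key is embedded in the value
                .build();

            cache.put(key, emp);

            // emp.field("id") returns a wrapper into emp's bytes;
            // rebuilding detaches it before it is used as a cache key.
            BinaryObject extracted = emp.field("id");
            BinaryObject detached = extracted.toBuilder().build();

            cache.put(detached, emp); // safe, unlike put(extracted, emp) with indexing enabled
        }
    }
}
```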

Regards,
-- 
Ilya Kasnacheev


Fri, Apr 17, 2020 at 20:52, akorensh :

> Maxim,
> I've filed an appropriate ticket:
> https://issues.apache.org/jira/browse/IGNITE-12911
> Thanks, Alex
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: BinaryObject and CustomKey

2020-04-20 Thread Evgenii Zhuravlev
Please use the builder from my example. You are using the wrong types here:


BinaryObjectBuilder builder1 = ignite.binary().builder("keys");

BinaryObjectBuilder builder2 = ignite.binary().builder("fields");

You should use the same type names that you have in the XML configuration.

Yes, you need to create 2 binary objects - one with key fields and the
second with value fields only.
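A sketch of the two-builder approach, using the type names from the configuration discussed in this thread ("CustomKey" / "Customer"); the cache and field names are taken from the question and may differ in a real setup.

```java
import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteDataStreamer;
import org.apache.ignite.Ignition;
import org.apache.ignite.binary.BinaryObject;

public class StreamerSketch {
    public static void main(String[] args) {
        try (Ignite ignite = Ignition.start()) {
            ignite.getOrCreateCache("Customer"); // the streamer needs an existing cache

            try (IgniteDataStreamer<BinaryObject, BinaryObject> streamer =
                     ignite.dataStreamer("Customer")) {
                streamer.keepBinary(true);

                // Key object: only the key fields, built with the configured key type name.
                BinaryObject key = ignite.binary().builder("CustomKey")
                    .setField("Client_ID", 1)
                    .setField("Customer_ID", 1)
                    .build();

                // Value object: only the non-key fields, with the configured value type name.
                BinaryObject val = ignite.binary().builder("Customer")
                    .setField("Customer_name", "jim")
                    .setField("Client_name", "joe")
                    .build();

                streamer.addData(key, val);
            } // close() flushes any buffered data
        }
    }
}
```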

Mon, Apr 20, 2020 at 11:18, narges saleh :

> Thanks Evgenii. I am still a little bit confused.
> So if my CustomKey is composed of Client_ID, Customer_ID, and other fields
> are Customer_name, Client_name, then I populate the cache this way? Do I
> need to build two binary objects, one for the custom key and another for
> the rest of the fields?
>
> IgniteDataStreamer streamer =
> grid.dataStreamer("Customer");
>
> streamer.keepBinary(true);
>
> BinaryObjectBuilder builder1 = ignite.binary().builder("keys");
>
> BinaryObjectBuilder builder2 = ignite.binary().builder("fields");
>
> BinaryObject b1 = null;
>
> BinaryObject b2 = null;
>
>  builder1.setField("Client_ID", 1);
>
>  builder1.setField("Customer_ID", 1);
>
>  builder2.setField("customer_name", "jim");
>
> builder2.setField("client_name", "joe");
>
>  b1 = builder1.build();
>
>  b2 = builder2.build();
>
>  streamer.addData(b1, b2);
>
>
> On Mon, Apr 20, 2020 at 12:22 PM Evgenii Zhuravlev <
> e.zhuravlev...@gmail.com> wrote:
>
>>  Hi,
>>
>> >How do I do streamer.addData to add the binary objects?
>> If you want to work without classes, you can use binary object builder,
>> just take it for these types:
>>
>> ignite.binary().builder(CustomKey);
>>
>> ignite.binary().builder(Customer);
>>
>> And set fields in it:
>>
>> builder.setField("Client_ID", value);
>>
>> These objects can be used for streaming.
>>
>> However, if you want to use Java objects in the future, you will need to change
>> keyType and valueType to the full names with a package.
>>
>>
>> Evgenii
>>
>>
>> Mon, Apr 20, 2020 at 08:53, narges saleh :
>>
>>> Hi All,
>>> If I have a query entity defined with composite CustomKey, how do I
>>> insert to the cache,here, Customer, via the DataStreamer, using binary
>>> object builder? Do I need to define an object for the composite CustomKey?
>>> I am trying to define all the tables/caches via the configuration file. How
>>> do I do streamer.addData to add the binary objects?
>>> For example, the QE is defined as:
>>>
>>> <bean class="org.apache.ignite.cache.QueryEntity">
>>>     <property name="keyType" value="CustomKey"/>
>>>     <property name="valueType" value="Customer"/>
>>>     <property name="tableName" value="Customer"/>
>>>     <property name="keyFields">
>>>         <set>
>>>             <value>Client_ID</value>
>>>             <value>Customer_ID</value>
>>>         </set>
>>>     </property>
>>> </bean>
>>> thanks.
>>>
>>>


Re: BinaryObject and CustomKey

2020-04-20 Thread narges saleh
Thanks Evgenii. I am still a little bit confused.
So if my CustomKey is composed of Client_ID, Customer_ID, and other fields
are Customer_name, Client_name, then I populate the cache this way? Do I
need to build two binary objects, one for the custom key and another for
the rest of the fields?

IgniteDataStreamer streamer =
grid.dataStreamer("Customer");

streamer.keepBinary(true);

BinaryObjectBuilder builder1 = ignite.binary().builder("keys");

BinaryObjectBuilder builder2 = ignite.binary().builder("fields");

BinaryObject b1 = null;

BinaryObject b2 = null;

 builder1.setField("Client_ID", 1);

 builder1.setField("Customer_ID", 1);

 builder2.setField("customer_name", "jim");

builder2.setField("client_name", "joe");

 b1 = builder1.build();

 b2 = builder2.build();

 streamer.addData(b1, b2);


On Mon, Apr 20, 2020 at 12:22 PM Evgenii Zhuravlev 
wrote:

>  Hi,
>
> >How do I do streamer.addData to add the binary objects?
> If you want to work without classes, you can use binary object builder,
> just take it for these types:
>
> ignite.binary().builder(CustomKey);
>
> ignite.binary().builder(Customer);
>
> And set fields in it:
>
> builder.setField("Client_ID", value);
>
> These objects can be used for streaming.
>
> However, if you want to use Java objects in the future, you will need to change
> keyType and valueType to the full names with a package.
>
>
> Evgenii
>
>
> Mon, Apr 20, 2020 at 08:53, narges saleh :
>
>> Hi All,
>> If I have a query entity defined with composite CustomKey, how do I
>> insert to the cache,here, Customer, via the DataStreamer, using binary
>> object builder? Do I need to define an object for the composite CustomKey?
>> I am trying to define all the tables/caches via the configuration file. How
>> do I do streamer.addData to add the binary objects?
>> For example, the QE is defined as:
>>
>> <bean class="org.apache.ignite.cache.QueryEntity">
>>     <property name="keyType" value="CustomKey"/>
>>     <property name="valueType" value="Customer"/>
>>     <property name="tableName" value="Customer"/>
>>     <property name="keyFields">
>>         <set>
>>             <value>Client_ID</value>
>>             <value>Customer_ID</value>
>>         </set>
>>     </property>
>> </bean>
>> thanks.
>>
>>


Re: BinaryObject and CustomKey

2020-04-20 Thread Evgenii Zhuravlev
 Hi,

>How do I do streamer.addData to add the binary objects?
If you want to work without classes, you can use binary object builder,
just take it for these types:

ignite.binary().builder(CustomKey);

ignite.binary().builder(Customer);

And set fields in it:

builder.setField("Client_ID", value);

These objects can be used for streaming.

However, if you want to use Java objects in the future, you will need to
change keyType and valueType to the full names with a package.


Evgenii


Mon, Apr 20, 2020 at 08:53, narges saleh :

> Hi All,
> If I have a query entity defined with composite CustomKey, how do I insert
> to the cache,here, Customer, via the DataStreamer, using binary object
> builder? Do I need to define an object for the composite CustomKey? I am
> trying to define all the tables/caches via the configuration file. How do I
> do streamer.addData to add the binary objects?
> For example, the QE is defined as:
>
> <bean class="org.apache.ignite.cache.QueryEntity">
>     <property name="keyType" value="CustomKey"/>
>     <property name="valueType" value="Customer"/>
>     <property name="tableName" value="Customer"/>
>     <property name="keyFields">
>         <set>
>             <value>Client_ID</value>
>             <value>Customer_ID</value>
>         </set>
>     </property>
> </bean>
> thanks.
>
>


BinaryObject and CustomKey

2020-04-20 Thread narges saleh
Hi All,
If I have a query entity defined with composite CustomKey, how do I insert
to the cache,here, Customer, via the DataStreamer, using binary object
builder? Do I need to define an object for the composite CustomKey? I am
trying to define all the tables/caches via the configuration file. How do I
do streamer.addData to add the binary objects?
For example the QE is defined as
 






  
Client_ID
Customer_ID

 
thanks.


Re: Ignite crashes with CorruptedTreeException: "B+Tree is corrupted" on a composite BinaryObject scenario

2020-04-17 Thread akorensh
Maxim,
  I've filed an appropriate ticket:
https://issues.apache.org/jira/browse/IGNITE-12911
Thanks, Alex



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Ignite crashes with CorruptedTreeException: "B+Tree is corrupted" on a composite BinaryObject scenario

2020-04-17 Thread max904
Thank you, Alex!
Yes, I figured that this line is indeed what causes the crash. And yes, I'm
using queries, so I need this line.
For now, I'm using the workaround I described (rebuilding the key
BinaryObject), but it's quite an expensive operation and I would like to
avoid it.
This looks like a bug to me; in any case, an Ignite node should not crash so
miserably, especially on such a common condition.
If you file a ticket, could you please post its reference number so I can
track it?

Best regards,
Maxim



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/



Re: Ignite crashes with CorruptedTreeException: "B+Tree is corrupted" on a composite BinaryObject scenario

2020-04-16 Thread akorensh
Hi,
  I was able to reproduce your use-case.
  This line causes the issues you describe:
cacheConfig.setIndexedTypes(EmployeeId.class, Employee.class);
  Comment it out, and everything works.

   If you need indexes, define them as needed, but remove these lines:
 employeeCache.put(key2, emp1); // CRASH!!! CorruptedTreeException:
B+Tree is corrupted

 employeeCache.put(key2.clone(), emp1); // CRASH!!!

   Use other means to compose key2.
   

   I will look into why this failure occurs, and raise appropriate tickets
if necessary.
Thanks, Alex




--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Ignite crashes with CorruptedTreeException: "B+Tree is corrupted" on a composite BinaryObject scenario

2020-04-15 Thread max904
Yes, of course I'm using "cache.withKeepBinary()". Below is my exact
reproducer:

final URL configUrl =
getClass().getClassLoader().getResource("example-ignite.xml");
final Ignite ignite = Ignition.start(configUrl);

final CacheConfiguration<EmployeeId, Employee> cacheConfig = new
CacheConfiguration<>("Employee");
cacheConfig.setIndexedTypes(EmployeeId.class, Employee.class);

IgniteCache<EmployeeId, Employee> cache =
ignite.getOrCreateCache(cacheConfig);
IgniteCache<BinaryObject, BinaryObject> employeeCache =
cache.withKeepBinary();

try {
  BinaryObjectBuilder key1Builder =
ignite.binary().builder(EmployeeId.class.getName());
  key1Builder.setField("employeeNumber", 65348765, Integer.class);
  key1Builder.setField("departmentNumber", 123, Integer.class);
  BinaryObject key1 = key1Builder.build();
  BinaryObjectBuilder emp1Builder =
ignite.binary().builder(Employee.class.getName());
  emp1Builder.setField("firstName", "John", String.class);
  emp1Builder.setField("lastName", "Smith", String.class);
  emp1Builder.setField("id", key1);
  BinaryObject emp1 = emp1Builder.build();

  employeeCache.put(key1, emp1);
  BinaryObject emp2 = employeeCache.get(key1);
  assertThat(emp2).isNotNull();
  assertThat(emp2).isEqualTo(emp1);

  employeeCache.put(key1, emp1);

  BinaryObject key2 = emp1.field("id");
  employeeCache.put(key2, emp1); // CRASH!!! CorruptedTreeException: B+Tree
is corrupted

  //employeeCache.put(key2.clone(), emp1); // CRASH!!!
CorruptedTreeException: B+Tree is corrupted

  employeeCache.put(key2.toBuilder().build(), emp1); // OK!
} finally {
  Ignition.stop(true);
}



Where the data types are the following:

public interface EmployeeId {
  int getEmployeeNumber();
  void setEmployeeNumber(int employeeNumber);

  int getDepartmentNumber();
  void setDepartmentNumber(int departmentNumber);
}

public interface Employee {

  EmployeeId getId();
  void setId(EmployeeId id);

  String getFirstName();
  void setFirstName(String firstName);

  String getLastName();
  void setLastName(String lastName);

  Date getBirthDate();
  void setBirthDate(Date birthDate);

  ...
}





--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Ignite crashes with CorruptedTreeException: "B+Tree is corrupted" on a composite BinaryObject scenario

2020-04-10 Thread akorensh
Hi,
   When running your code I created the cache as follows:
  IgniteCache<BinaryObject, BinaryObject> employeeCache =
ignite.<BinaryObject, BinaryObject>getOrCreateCache("employeeCache").withKeepBinary();

  With the ".withKeepBinary()" flag set, the code works; otherwise there are
serialization errors.

  If you still get errors w/this flag set, send a reproducer, and I'll take
a look.

   more info:
https://www.gridgain.com/docs/latest/developers-guide/key-value-api/binary-objects
   Binary Objects Example:
https://github.com/apache/ignite/blob/master/examples/src/main/java/org/apache/ignite/examples/binary/datagrid/CacheClientBinaryPutGetExample.java

Thanks, Alex



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Ignite crashes with CorruptedTreeException: "B+Tree is corrupted" on a composite BinaryObject scenario

2020-04-09 Thread max904
Ignite crashes when I use a composite BinaryObject as a key and also include it
in the value object.

Here is the BinaryObject scenario:

// create a new composite key
BinaryObjectBuilder key1Builder =
Ignition.ignite().binary().builder(EmployeeId.class.getName());
key1Builder.setField("employeeNumber", 65348765, Integer.class);
key1Builder.setField("departmentNumber", 123, Integer.class);
BinaryObject key1 = key1Builder.build();

// create a new value
BinaryObjectBuilder emp1Builder =
Ignition.ignite().binary().builder(Employee.class.getName());
emp1Builder.setField("firstName", "John", String.class);
emp1Builder.setField("lastName", "Smith", String.class);
emp1Builder.setField("id", key1); // The composite key is also a part of the
value!
BinaryObject emp1 = emp1Builder.build();

// put the record to the DB - OK
employeeCache.put(key1, emp1);
// read it back - OK
BinaryObject emp2 = employeeCache.get(key1);
assertThat(emp2).isNotNull();
assertThat(emp2).isEqualTo(emp1);

// put the same key and value back to the DB - OK
employeeCache.put(key1, emp1); // OK!

// extract a key from the value
BinaryObject key2 = emp1.field("id");

// try to put a record with the extracted key - CRASH
employeeCache.put(key2, emp1); // CRASH!!! CorruptedTreeException: B+Tree is
corrupted ...

// try to put a record with the extracted key clone - CRASH
employeeCache.put(key2.clone(), emp1); // CRASH!!! CorruptedTreeException:
B+Tree is corrupted ...

// try to put a record with the extracted key rebuilt - OK
employeeCache.put(key2.toBuilder().build(), emp1); // OK!

This is clearly a bug, as an Ignite node crashes on such a basic use case!
I expect the scenario should work in all three cases, not only when I
explicitly rebuild the extracted key.
I verified it fails in both Ignite 2.8.0 and 2.7.6 (with a different error
though).
I can provide all the stack traces upon request.



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Are data in NearCache BinaryObject?

2019-12-12 Thread Prasad Bhalerao
I debugged the near cache once and found that the near cache simply
stores the entries in a ConcurrentHashMap in deserialized format.

But be careful with near cache usage. I faced many issues and finally
removed it.
I reported an issue on this forum; I couldn't create a reproducer for it,
but it kept giving me exceptions in the running application. Check the
near cache issues on the user list.


On Thu 12 Dec, 2019, 9:09 PM Cong Guo wrote:

> Hi,
>
> My application needs to read all entries in the cache frequently. The
> entries may be updated by others. I'm thinking about two solutions to avoid
> a lot of deserialization. First, I can maintain my own local hash map and
> rely on continuous queries to get the update events. Second, I can use a
> NearCache, but if the data in the NearCache are still serialized, this method
> does not work for my application.
>
> Thanks,
> Nap
>
> On Thu, Dec 12, 2019 at 5:37 AM Ilya Kasnacheev 
> wrote:
>
>> Hello!
>>
>> It is actually hard to say without debugging. I expect that it is
>> BinaryObject or primitive type or byte[].
>>
>> It is possible to enable on-heap caching, in which case objects will be
>> held as is, and also set copyOnRead=false, in which case objects will not
>> even be copied.
>> However, I'm not sure if Near Cache will interact with onheap caching.
>>
>> Why does it matter for your use case?
>>
>> Regards,
>> --
>> Ilya Kasnacheev
>>
>>
>> Wed, Dec 11, 2019 at 22:54, Cong Guo :
>>
>>> Hi,
>>>
>>> Are the entries stored in local NearCache on my client node in the
>>> format of deserialized java objects or BinaryObject? Will the entry in
>>> local on-heap NearCache be deserialized from BinaryObject when I call the
>>> get function?
>>>
>>> Thanks,
>>> Nap
>>>
>>


Re: Are data in NearCache BinaryObject?

2019-12-12 Thread Cong Guo
Hi,

My application needs to read all entries in the cache frequently. The
entries may be updated by others. I'm thinking about two solutions to avoid
a lot of deserialization. First, I can maintain my own local hash map and
rely on continuous queries to get the update events. Second, I can use a
NearCache, but if the data in the NearCache are still serialized, this method
does not work for my application.
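The first option can be sketched as follows (not from the thread; the cache name and value type are made up): a local ConcurrentHashMap is populated by a continuous query's initial query and then kept current by its local listener, so reads hit already-deserialized objects.

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

import javax.cache.Cache;
import javax.cache.event.CacheEntryEvent;

import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.Ignition;
import org.apache.ignite.cache.query.ContinuousQuery;
import org.apache.ignite.cache.query.QueryCursor;
import org.apache.ignite.cache.query.ScanQuery;

public class LocalCopySketch {
    public static void main(String[] args) {
        try (Ignite ignite = Ignition.start()) {
            IgniteCache<Integer, String> cache = ignite.getOrCreateCache("data");

            ConcurrentMap<Integer, String> local = new ConcurrentHashMap<>();

            ContinuousQuery<Integer, String> qry = new ContinuousQuery<>();
            qry.setInitialQuery(new ScanQuery<>()); // fetch existing entries
            qry.setLocalListener(evts -> {          // apply future updates
                for (CacheEntryEvent<? extends Integer, ? extends String> e : evts)
                    local.put(e.getKey(), e.getValue());
            });

            QueryCursor<Cache.Entry<Integer, String>> cur = cache.query(qry);
            for (Cache.Entry<Integer, String> e : cur)
                local.put(e.getKey(), e.getValue());

            cache.put(1, "one"); // the listener keeps `local` up to date

            // ... read-heavy work goes against `local`, with no per-read deserialization ...

            cur.close(); // stop receiving updates
        }
    }
}
```

Note the cursor must stay open for as long as updates should flow into the local map.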

Thanks,
Nap

On Thu, Dec 12, 2019 at 5:37 AM Ilya Kasnacheev 
wrote:

> Hello!
>
> It is actually hard to say without debugging. I expect that it is
> BinaryObject or primitive type or byte[].
>
> It is possible to enable on-heap caching, in which case objects will be held
> as is, and also set copyOnRead=false, in which case objects will not even be
> copied.
> However, I'm not sure if Near Cache will interact with onheap caching.
>
> Why does it matter for your use case?
>
> Regards,
> --
> Ilya Kasnacheev
>
>
> Wed, Dec 11, 2019 at 22:54, Cong Guo :
>
>> Hi,
>>
>> Are the entries stored in local NearCache on my client node in the format
>> of deserialized java objects or BinaryObject? Will the entry in local
>> on-heap NearCache be deserialized from BinaryObject when I call the get
>> function?
>>
>> Thanks,
>> Nap
>>
>


Re: Are data in NearCache BinaryObject?

2019-12-12 Thread Ilya Kasnacheev
Hello!

It is actually hard to say without debugging. I expect that it is
BinaryObject or primitive type or byte[].

It is possible to enable on-heap caching, in which case objects will be held
as is, and also set copyOnRead=false, in which case objects will not even be
copied.
However, I'm not sure if Near Cache will interact with onheap caching.
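A sketch of the on-heap option mentioned above (cache name made up): setOnheapCacheEnabled(true) keeps deserialized entries on the Java heap, and copyOnRead=false hands out those instances without copying. Since the interaction with a near cache is left open above, the near cache is omitted here.

```java
import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.Ignition;
import org.apache.ignite.configuration.CacheConfiguration;

public class OnheapSketch {
    public static void main(String[] args) {
        try (Ignite ignite = Ignition.start()) {
            CacheConfiguration<Integer, String> cfg = new CacheConfiguration<>("onheap");
            cfg.setOnheapCacheEnabled(true); // keep entries on the Java heap
            cfg.setCopyOnRead(false);        // return the cached instance itself, no copy

            IgniteCache<Integer, String> cache = ignite.getOrCreateCache(cfg);
            cache.put(1, "value");
            System.out.println(cache.get(1));
        }
    }
}
```

With copyOnRead=false, callers must treat returned objects as read-only, since mutating them would corrupt the cached copy.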

Why does it matter for your use case?

Regards,
-- 
Ilya Kasnacheev


Wed, Dec 11, 2019 at 22:54, Cong Guo :

> Hi,
>
> Are the entries stored in local NearCache on my client node in the format
> of deserialized java objects or BinaryObject? Will the entry in local
> on-heap NearCache be deserialized from BinaryObject when I call the get
> function?
>
> Thanks,
> Nap
>


Are data in NearCache BinaryObject?

2019-12-11 Thread Cong Guo
Hi,

Are the entries stored in local NearCache on my client node in the format
of deserialized java objects or BinaryObject? Will the entry in local
on-heap NearCache be deserialized from BinaryObject when I call the get
function?

Thanks,
Nap


Re: Can I update specific field of a binaryobject

2019-03-11 Thread Ilya Kasnacheev
Hello!

You can do cache.invoke() with a callback that will update a single field
in an object. It will be sent to a specific node in cluster and object in
question will not be transferred via network, but processed locally.

Note that an SQL UPDATE will probably only send the request to the node
containing the relevant key if the key is specified, and not broadcast it.
Regards,
-- 
Ilya Kasnacheev


Fri, Mar 8, 2019 at 11:28, BinaryTree :

> Hi Igniters -
>
> As far as I know, igniteCache.put(K, V) will replace the value of K
> with V, but sometimes I just want to update a specific field of V
> instead of the whole object.
> I know that I can update the specific field via
> igniteCache.query(SqlFieldsQuery), but how-ignite-sql-works writes
> that Ignite will generate SELECT queries internally before UPDATE or DELETE
> a set of records, so it may not be as good as igniteCache.put(K, V). So is
> there a way to update a specific field without a query?
>
>


Re: Can I update specific field of a binaryobject

2019-03-08 Thread Justin Ji
Besides the question above, I have another question.
If I use a composed key like this:
public class DpKey implements Serializable {
//key=devId + "_" + dpId
private String key;
@AffinityKeyMapped
private String devId;
   //getter setter
}

Now I need to add records like this,
IgniteCache cache = ignite.cache("cache");
cache.query(new SqlFieldsQuery("MERGE INTO t_cache(id, devId, dpId)" +
   " values (1, 'devId001', '001'), (2, 'devId002', '002')"));
And I also need to get the value with igniteCache.get(key), but I do not know
what the key is.

Can I assign a key when inserting a record?







--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Can I update specific field of a binaryobject

2019-03-08 Thread BinaryTree
Hi Igniters - 


As far as I know, igniteCache.put(K, V) will replace the value of K with V,
but sometimes I just want to update a specific field of V instead of the
whole object.
I know that I can update the specific field via
igniteCache.query(SqlFieldsQuery), but how-ignite-sql-works writes that Ignite
will generate SELECT queries internally before UPDATE or DELETE a set of
records, so it may not be as good as igniteCache.put(K, V). So is there a way
to update a specific field without a query?

Re: Error while persisting from Ignite to Hive for a BinaryObject

2019-01-31 Thread Ilya Kasnacheev
Hello!

Do you have this problem also if you swap Hive with some other DB (H2 comes
to mind)?

If so, can you create a reproducer project out of those files, post it to
e.g. github? Then I will look.

My immediate idea is that after you supply aliases, Ignite can't find apn_id
(or apnId) in your binary object, so it will just bind the whole parent
object (ApnDiameter5Min or *Key) to the JDBC parameter.
I think you could check that with a debugger.

Regards,
-- 
Ilya Kasnacheev


Wed, Jan 30, 2019 at 15:09, Premachandran, Mahesh (Nokia - IN/Bangalore) <
mahesh.premachand...@nokia.com>:

> Hello,
>
>
>
>
>
> I have attached the XML am using right now as well as the classes. All of
> these were exported from the webconsole which was pointing to a hive table
> created with the following statement.
>
>
>
> CREATE TABLE apn_diameter_5_min (id VARCHAR(36), report_start_time
> BIGINT,report_end_time BIGINT, apn_id
> VARCHAR(200),ggsn_diameter_total_events BIGINT, apn_id_vector_item_count
> BIGINT, request_type BIGINT,request_type_number_events BIGINT,
> request_type_imsi VARCHAR(16), request_type_imsi_vector_item_count BIGINT,
> request_type_success_events BIGINT, imsi_diameter_success
> VARCHAR(16),imsi_diameter_success_vector_item_count BIGINT,
> diameter_requests_unsuccessful BIGINT, imsi_diameter_unsuccessful
> VARCHAR(16), imsi_diameter_unsuccessful_vector_item_count BIGINT,
> request_delay_sum DOUBLE, request_delay_events BIGINT, result_code BIGINT,
> result_code_events BIGINT, result_code_imsi VARCHAR(16),
> result_code_imsi_vector_item_count BIGINT, termination_cause BIGINT,
> termination_cause_event BIGINT) clustered  by (id) into 2 buckets STORED AS
> orc TBLPROPERTIES('transactional'='true');
>
>
>
> While streaming, the data is going into the ignite cache, but not HIVE and
>  on the console I get the following error.
>
>
>
> "GridCacheWriteBehindStore","timezone":"UTC","marker":"","log":"Unable to
> update underlying store: CacheJdbcPojoStore [] javax.cache.CacheException:
> Failed to set statement parameter name: apn_id
>
> at
> org.apache.ignite.cache.store.jdbc.CacheAbstractJdbcStore.fillParameter(CacheAbstractJdbcStore.java:1391)
>
> at
> org.apache.ignite.cache.store.jdbc.CacheAbstractJdbcStore.fillValueParameters(CacheAbstractJdbcStore.java:1443)
>
> at
> org.apache.ignite.cache.store.jdbc.CacheAbstractJdbcStore.writeUpsert(CacheAbstractJdbcStore.java:919)
>
> at
> org.apache.ignite.cache.store.jdbc.CacheAbstractJdbcStore.writeAll(CacheAbstractJdbcStore.java:1161)
>
> at
> org.apache.ignite.internal.processors.cache.store.GridCacheWriteBehindStore.updateStore(GridCacheWriteBehindStore.java:816)
>
> at
> org.apache.ignite.internal.processors.cache.store.GridCacheWriteBehindStore.applyBatch(GridCacheWriteBehindStore.java:726)
>
> at
> org.apache.ignite.internal.processors.cache.store.GridCacheWriteBehindStore.access$2400(GridCacheWriteBehindStore.java:76)
>
> at
> org.apache.ignite.internal.processors.cache.store.GridCacheWriteBehindStore$Flusher.flushCacheCoalescing(GridCacheWriteBehindStore.java:1147)
>
> at
> org.apache.ignite.internal.processors.cache.store.GridCacheWriteBehindStore$Flusher.body(GridCacheWriteBehindStore.java:1018)
>
> at
> org.apache.ignite.internal.util.worker.GridWorker.run(GridWorker.java:110)
>
> at java.lang.Thread.run(Thread.java:748)
>
> Caused by: java.sql.SQLException: Can't infer the SQL type to use for an
> instance of org.apache.ignite.internal.binary.BinaryObjectImpl. Use
> setObject() with an explicit Types value to specify the type to use.
>
> at
> org.apache.hive.jdbc.HivePreparedStatement.setObject(HivePreparedStatement.java:624)
>
> at
> org.apache.ignite.cache.store.jdbc.CacheAbstractJdbcStore.fillParameter(CacheAbstractJdbcStore.java:1385)
>
> ... 10 more
>
> "}
>
>
>
>
>
> Mahesh
>
>
>
>
>
> *From:* Ilya Kasnacheev 
> *Sent:* Tuesday, January 29, 2019 10:59 PM
> *To:* user@ignite.apache.org
> *Subject:* Re: Error while persisting from Ignite to Hive for a
> BinaryObject
>
>
>
> Hello!
>
>
>
> What is the type that you are storing in this cache? Can you please show
> full cache configuration & key-value classes?
>
>
>
> Regards,
>
> --
>
> Ilya Kasnacheev
>
>
>
>
>
> Fri, Jan 25, 2019 at 00:49, Premachandran, Mahesh (Nokia - IN/Bangalore) <
> mahesh.premachand...@nokia.com>:
>
> Hi,
>
>
>
> Sorry for the earlier confusion, the type of apn_id/apnId is indeed
> String. I had writte

Re: Error while persisting from Ignite to Hive for a BinaryObject

2019-01-29 Thread Ilya Kasnacheev
Hello!

What is the type that you are storing in this cache? Can you please show
full cache configuration & key-value classes?

Regards,
-- 
Ilya Kasnacheev


Fri, Jan 25, 2019 at 00:49, Premachandran, Mahesh (Nokia - IN/Bangalore) <
mahesh.premachand...@nokia.com>:

> Hi,
>
>
>
> Sorry for the earlier confusion, the type of apn_id/apnId is indeed
> String. I had written a simple producer to publish messages to kafka topics
> with random values the types of which are
>
>
>
> id  java.lang.String
>
> reportStartTime  java.lang.Long
>
> reportEndTime  java.lang.Long
>
> apnId  java.lang.String
>
> ggsnDiameterTotalEvents  java.lang.Long
>
> apnIdVectorItemCount  java.lang.Long
>
> requestType  java.lang.Long
>
> requestTypeNumberEvents  java.lang.Long
>
> requestTypeImsi  java.lang.String
>
> requestTypeImsiVectorItemCount  java.lang.Long
>
> requestTypeSuccessEvents  java.lang.Long
>
> imsiDiameterSuccess  java.lang.String
>
> imsiDiameterSuccessVectorItemCount  java.lang.Long
>
> diameterRequestsUnsuccessful  java.lang.Long
>
> imsiDiameterUnsuccessful  java.lang.String
>
> imsiDiameterUnsuccessfulVectorItemCount  java.lang.Long
>
> requestDelaySum  java.lang.Double
>
> requestDelayEvents  java.lang.Long
>
> resultCode  java.lang.Long
>
> resultCodeEvents  java.lang.Long
>
> resultCodeImsi  java.lang.String
>
> resultCodeImsiVectorItemCount  java.lang.Long
>
> terminationCause  java.lang.Long
>
> terminationCauseEvent  java.lang.Long
>
>
>
> This is the statement that was used to create the table on HIVE.
>
>
>
> CREATE TABLE apn_diameter_5_min (id VARCHAR(36), report_start_time
> BIGINT,report_end_time BIGINT, apn_id
> VARCHAR(200),ggsn_diameter_total_events BIGINT, apn_id_vector_item_count
> BIGINT, request_type BIGINT,request_type_number_events BIGINT,
> request_type_imsi VARCHAR(16), request_type_imsi_vector_item_count BIGINT,
> request_type_success_events BIGINT, imsi_diameter_success
> VARCHAR(16),imsi_diameter_success_vector_item_count BIGINT,
> diameter_requests_unsuccessful BIGINT, imsi_diameter_unsuccessful
> VARCHAR(16), imsi_diameter_unsuccessful_vector_item_count BIGINT,
> request_delay_sum DOUBLE, request_delay_events BIGINT, result_code BIGINT,
> result_code_events BIGINT, result_code_imsi VARCHAR(16),
> result_code_imsi_vector_item_count BIGINT, termination_cause BIGINT,
> termination_cause_event BIGINT) clustered  by (id) into 2 buckets STORED AS
> orc TBLPROPERTIES('transactional'='true');
>
>
>
>
>
> I am populating a BinaryObject using the BinaryObjectBuilder in my
> implementation of  StreamSingleTupleExtractor.
>
>
>
> Mahesh
>
>
>
> *From:* Ilya Kasnacheev 
> *Sent:* Thursday, January 24, 2019 7:39 PM
> *To:* user@ignite.apache.org
> *Subject:* Re: Error while persisting from Ignite to Hive for a
> BinaryObject
>
>
>
> Hello!
>
>
>
> In your XML apn_id looks like String. Is it possible that actual type of
> apnId in ApnDiameter5Min is neither Long nor String but some other
> complex type? Can you attach those types?
>
>
>
> Regards,
>
> --
>
> Ilya Kasnacheev
>
>
>
>
>
> Wed, Jan 23, 2019 at 18:37, Premachandran, Mahesh (Nokia - IN/Bangalore) <
> mahesh.premachand...@nokia.com>:
>
> Hi Ilya,
>
>
>
> The field apn_id is of type Long. I have been using the
>  CacheJdbcPojoStore, does that map the BinaryObjects to the database
> schema? or is it only for java pojos? I have attached the xml I am using
> with the client.
>
>
>
> Mahesh
>
>
>
> *From:* Ilya Kasnacheev 
> *Sent:* Wednesday, January 23, 2019 6:43 PM
> *To:* user@ignite.apache.org
> *Subject:* Re: Error while persisting from Ignite to Hive for a
> BinaryObject
>
>
>
> Hello!
>
>
>
> I think that your CacheStore implementation is confused by nested fields
> or binary object values (what is the type of apn_id?). Consider using
> CacheJdbcBlobStoreFactory instead, which will serialize the value to one big
> field in BinaryObject format.
>
>
>
> Regards,
>
> --
>
> Ilya Kasnacheev
>
>
>
>
>
> Wed, Jan 23, 2019 at 15:47, Premachandran, Mahesh (Nokia - IN/Bangalore) <
> mahesh.premachand...@nokia.com>:
>
> Hi all,
>
>
>
> I am trying to stream some data from Kafka to Ignite using
> IgniteDataStreamer and use 3rd party persistence to move it to HIVE. The
> data on Kafka is in avro format, which I am deserailising, populating an
> Ignite BinaryObject using the binary builder and pushing it to Ignite. It
> works well when I do not enable 3rd party persistence, but once that is
> en

RE: Error while persisting from Ignite to Hive for a BinaryObject

2019-01-24 Thread Premachandran, Mahesh (Nokia - IN/Bangalore)
Hi,

Sorry for the earlier confusion; the type of apn_id/apnId is indeed String. I 
had written a simple producer to publish messages to Kafka topics with random 
values, the types of which are:

id  java.lang.String
reportStartTime  java.lang.Long
reportEndTime  java.lang.Long
apnId  java.lang.String
ggsnDiameterTotalEvents  java.lang.Long
apnIdVectorItemCount  java.lang.Long
requestType  java.lang.Long
requestTypeNumberEvents  java.lang.Long
requestTypeImsi  java.lang.String
requestTypeImsiVectorItemCount  java.lang.Long
requestTypeSuccessEvents  java.lang.Long
imsiDiameterSuccess  java.lang.String
imsiDiameterSuccessVectorItemCount  java.lang.Long
diameterRequestsUnsuccessful  java.lang.Long
imsiDiameterUnsuccessful  java.lang.String
imsiDiameterUnsuccessfulVectorItemCount  java.lang.Long
requestDelaySum  java.lang.Double
requestDelayEvents  java.lang.Long
resultCode  java.lang.Long
resultCodeEvents  java.lang.Long
resultCodeImsi  java.lang.String
resultCodeImsiVectorItemCount  java.lang.Long
terminationCause  java.lang.Long
terminationCauseEvent  java.lang.Long

This is the statement that was used to create the table on HIVE.

CREATE TABLE apn_diameter_5_min (id VARCHAR(36), report_start_time 
BIGINT,report_end_time BIGINT, apn_id VARCHAR(200),ggsn_diameter_total_events 
BIGINT, apn_id_vector_item_count BIGINT, request_type 
BIGINT,request_type_number_events BIGINT, request_type_imsi VARCHAR(16), 
request_type_imsi_vector_item_count BIGINT, request_type_success_events BIGINT, 
imsi_diameter_success VARCHAR(16),imsi_diameter_success_vector_item_count 
BIGINT, diameter_requests_unsuccessful BIGINT, imsi_diameter_unsuccessful 
VARCHAR(16), imsi_diameter_unsuccessful_vector_item_count BIGINT, 
request_delay_sum DOUBLE, request_delay_events BIGINT, result_code BIGINT, 
result_code_events BIGINT, result_code_imsi VARCHAR(16), 
result_code_imsi_vector_item_count BIGINT, termination_cause BIGINT, 
termination_cause_event BIGINT) clustered  by (id) into 2 buckets STORED AS orc 
TBLPROPERTIES('transactional'='true');


I am populating a BinaryObject using the BinaryObjectBuilder in my 
implementation of  StreamSingleTupleExtractor.

Mahesh

From: Ilya Kasnacheev 
Sent: Thursday, January 24, 2019 7:39 PM
To: user@ignite.apache.org
Subject: Re: Error while persisting from Ignite to Hive for a BinaryObject

Hello!

In your XML apn_id looks like a String. Is it possible that the actual type of apnId 
in ApnDiameter5Min is neither Long nor String but some other complex type? Can 
you attach those types?

Regards,
--
Ilya Kasnacheev


Wed, 23 Jan 2019 at 18:37, Premachandran, Mahesh (Nokia - IN/Bangalore) 
<mahesh.premachand...@nokia.com>:
Hi Ilya,

The field apn_id is of type Long. I have been using the CacheJdbcPojoStore; 
does that map the BinaryObjects to the database schema, or is it only for Java 
POJOs? I have attached the XML I am using with the client.

Mahesh

From: Ilya Kasnacheev <ilya.kasnach...@gmail.com>
Sent: Wednesday, January 23, 2019 6:43 PM
To: user@ignite.apache.org
Subject: Re: Error while persisting from Ignite to Hive for a BinaryObject

Hello!

I think that your CacheStore implementation is confused by nested fields or 
binary object values (what is the type of apn_id?). Consider using  
CacheJdbcBlobStoreFactory instead, which will serialize the value to one big field 
in BinaryObject format.

Regards,
--
Ilya Kasnacheev


Wed, 23 Jan 2019 at 15:47, Premachandran, Mahesh (Nokia - IN/Bangalore) 
<mahesh.premachand...@nokia.com>:
Hi all,

I am trying to stream some data from Kafka to Ignite using IgniteDataStreamer 
and use 3rd party persistence to move it to HIVE. The data on Kafka is in avro 
format, which I am deserialising, populating an Ignite BinaryObject using the 
binary builder and pushing it to Ignite. It works well when I do not enable 3rd 
party persistence, but once that is enabled, it throws the following exception.

[12:32:07] (err) Failed to execute compound future reducer: GridCompoundFuture 
[rdc=null, initFlag=1, lsnrCalls=2, done=true, cancelled=false, err=class 
o.a.i.IgniteCheckedException: DataStreamer request failed 
[node=292ab229-61fb-4d61-8f08-33c8abd310a2], futs=[true, true, true]]class 
org.apache.ignite.IgniteCheckedException: DataStreamer request failed 
[node=292ab229-61fb-4d61-8f08-33c8abd310a2]
at 
org.apache.ignite.internal.processors.datastreamer.DataStreamerImpl$Buffer.onResponse(DataStreamerImpl.java:1912)
at 
org.apache.ignite.internal.processors.datastreamer.DataStreamerImpl$3.onMessage(DataStreamerImpl.java:346)
at 
org.apache.ignite.internal.managers.communication.GridIoManager.invokeListener(GridIoManager.java:1556)
at 
org.apache.ignite.internal.managers.communication.GridIoManager.processRegularMessage0(GridIoManager.java:1184)
at 
org.apache.ignite.internal.managers.communication.GridIoManager.access$420

Re: Error while persisting from Ignite to Hive for a BinaryObject

2019-01-24 Thread Ilya Kasnacheev
Hello!

In your XML apn_id looks like a String. Is it possible that the actual type of
apnId in ApnDiameter5Min is neither Long nor String but some other complex
type? Can you attach those types?

Regards,
-- 
Ilya Kasnacheev
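One way to settle the "what is the actual type of that field" question is to read the binary metadata that Ignite keeps for every stored type. A hedged sketch, assuming a running Ignite node and a cache holding the BinaryObject values; the cache name "apnCache" is illustrative and not from this thread:

```java
import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.binary.BinaryObject;
import org.apache.ignite.binary.BinaryType;

public class FieldTypeInspector {
    /** Prints the stored type name of every field of the value under the given key. */
    public static void printFieldTypes(Ignite ignite, Object key) {
        // withKeepBinary() returns entries as BinaryObject instead of deserializing.
        IgniteCache<Object, BinaryObject> cache =
            ignite.cache("apnCache").withKeepBinary();

        BinaryObject value = cache.get(key);
        BinaryType meta = value.type(); // type metadata for this binary object

        for (String field : meta.fieldNames())
            System.out.println(field + " -> " + meta.fieldTypeName(field));
    }
}
```

Running this against one of the problematic entries would show whether apnId is stored as String, long, or some nested object type.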


Wed, 23 Jan 2019 at 18:37, Premachandran, Mahesh (Nokia - IN/Bangalore) <
mahesh.premachand...@nokia.com>:

> Hi Ilya,
>
>
>
> The field apn_id is of type Long. I have been using the
> CacheJdbcPojoStore; does that map the BinaryObjects to the database
> schema, or is it only for Java POJOs? I have attached the XML I am using
> with the client.
>
>
>
> Mahesh
>
>
>
> *From:* Ilya Kasnacheev 
> *Sent:* Wednesday, January 23, 2019 6:43 PM
> *To:* user@ignite.apache.org
> *Subject:* Re: Error while persisting from Ignite to Hive for a
> BinaryObject
>
>
>
> Hello!
>
>
>
> I think that your CacheStore implementation is confused by nested fields
> or binary object values (what is the type of apn_id?). Consider using
> CacheJdbcBlobStoreFactory instead, which will serialize the value to one big
> field in BinaryObject format.
>
>
>
> Regards,
>
> --
>
> Ilya Kasnacheev
>
>
>
>
>
> Wed, 23 Jan 2019 at 15:47, Premachandran, Mahesh (Nokia - IN/Bangalore) <
> mahesh.premachand...@nokia.com>:
>
> Hi all,
>
>
>
> I am trying to stream some data from Kafka to Ignite using
> IgniteDataStreamer and use 3rd party persistence to move it to HIVE. The
> data on Kafka is in avro format, which I am deserialising, populating an
> Ignite BinaryObject using the binary builder and pushing it to Ignite. It
> works well when I do not enable 3rd party persistence, but once that is
> enabled, it throws the following exception.
>
>
>
> [12:32:07] (err) Failed to execute compound future reducer:
> GridCompoundFuture [rdc=null, initFlag=1, lsnrCalls=2, done=true,
> cancelled=false, err=class o.a.i.IgniteCheckedException: DataStreamer
> request failed [node=292ab229-61fb-4d61-8f08-33c8abd310a2], futs=[true,
> true, true]]class org.apache.ignite.IgniteCheckedException: DataStreamer
> request failed [node=292ab229-61fb-4d61-8f08-33c8abd310a2]
>
> at
> org.apache.ignite.internal.processors.datastreamer.DataStreamerImpl$Buffer.onResponse(DataStreamerImpl.java:1912)
>
> at
> org.apache.ignite.internal.processors.datastreamer.DataStreamerImpl$3.onMessage(DataStreamerImpl.java:346)
>
> at
> org.apache.ignite.internal.managers.communication.GridIoManager.invokeListener(GridIoManager.java:1556)
>
> at
> org.apache.ignite.internal.managers.communication.GridIoManager.processRegularMessage0(GridIoManager.java:1184)
>
> at
> org.apache.ignite.internal.managers.communication.GridIoManager.access$4200(GridIoManager.java:125)
>
> at
> org.apache.ignite.internal.managers.communication.GridIoManager$9.run(GridIoManager.java:1091)
>
> at
> org.apache.ignite.internal.util.StripedExecutor$Stripe.run(StripedExecutor.java:511)
>
> at java.lang.Thread.run(Thread.java:748)
>
> Caused by: javax.cache.integration.CacheWriterException: class
> org.apache.ignite.internal.processors.cache.CachePartialUpdateCheckedException:
> Failed to update keys (retry update if possible).: [2]
>
> at
> org.apache.ignite.internal.processors.cache.GridCacheUtils.convertToCacheException(GridCacheUtils.java:1280)
>
> at
> org.apache.ignite.internal.processors.cache.IgniteCacheProxyImpl.cacheException(IgniteCacheProxyImpl.java:1734)
>
> at
> org.apache.ignite.internal.processors.cache.IgniteCacheProxyImpl.put(IgniteCacheProxyImpl.java:1087)
>
> at
> org.apache.ignite.internal.processors.cache.GatewayProtectedCacheProxy.put(GatewayProtectedCacheProxy.java:788)
>
> at
> org.apache.ignite.internal.processors.datastreamer.DataStreamerCacheUpdaters$Individual.receive(DataStreamerCacheUpdaters.java:121)
>
> at
> org.apache.ignite.internal.processors.datastreamer.DataStreamerUpdateJob.call(DataStreamerUpdateJob.java:140)
>
> at
> org.apache.ignite.internal.processors.datastreamer.DataStreamProcessor.localUpdate(DataStreamProcessor.java:400)
>
> at
> org.apache.ignite.internal.processors.datastreamer.DataStreamProcessor.processRequest(DataStreamProcessor.java:305)
>
>at
> org.apache.ignite.internal.processors.datastreamer.DataStreamProcessor.access$000(DataStreamProcessor.java:60)
>
> at
> org.apache.ignite.internal.processors.datastreamer.DataStreamProcessor$1.onMessage(DataStreamProcessor.java:90)
>
> ... 6 more
>
> Caused by: class
> org.apache.ignite.internal.processors.cache.CachePartialUpdateCheckedException:
> Failed to u

RE: Error while persisting from Ignite to Hive for a BinaryObject

2019-01-23 Thread Premachandran, Mahesh (Nokia - IN/Bangalore)
Hi Ilya,

The field apn_id is of type Long. I have been using the CacheJdbcPojoStore; 
does that map the BinaryObjects to the database schema, or is it only for Java 
POJOs? I have attached the XML I am using with the client.

Mahesh

From: Ilya Kasnacheev 
Sent: Wednesday, January 23, 2019 6:43 PM
To: user@ignite.apache.org
Subject: Re: Error while persisting from Ignite to Hive for a BinaryObject

Hello!

I think that your CacheStore implementation is confused by nested fields or 
binary object values (what is the type of apn_id?). Consider using  
CacheJdbcBlobStoreFactory instead, which will serialize the value to one big field 
in BinaryObject format.

Regards,
--
Ilya Kasnacheev


Wed, 23 Jan 2019 at 15:47, Premachandran, Mahesh (Nokia - IN/Bangalore) 
<mahesh.premachand...@nokia.com>:
Hi all,

I am trying to stream some data from Kafka to Ignite using IgniteDataStreamer 
and use 3rd party persistence to move it to HIVE. The data on Kafka is in avro 
format, which I am deserialising, populating an Ignite BinaryObject using the 
binary builder and pushing it to Ignite. It works well when I do not enable 3rd 
party persistence, but once that is enabled, it throws the following exception.

[12:32:07] (err) Failed to execute compound future reducer: GridCompoundFuture 
[rdc=null, initFlag=1, lsnrCalls=2, done=true, cancelled=false, err=class 
o.a.i.IgniteCheckedException: DataStreamer request failed 
[node=292ab229-61fb-4d61-8f08-33c8abd310a2], futs=[true, true, true]]class 
org.apache.ignite.IgniteCheckedException: DataStreamer request failed 
[node=292ab229-61fb-4d61-8f08-33c8abd310a2]
at 
org.apache.ignite.internal.processors.datastreamer.DataStreamerImpl$Buffer.onResponse(DataStreamerImpl.java:1912)
at 
org.apache.ignite.internal.processors.datastreamer.DataStreamerImpl$3.onMessage(DataStreamerImpl.java:346)
at 
org.apache.ignite.internal.managers.communication.GridIoManager.invokeListener(GridIoManager.java:1556)
at 
org.apache.ignite.internal.managers.communication.GridIoManager.processRegularMessage0(GridIoManager.java:1184)
at 
org.apache.ignite.internal.managers.communication.GridIoManager.access$4200(GridIoManager.java:125)
at 
org.apache.ignite.internal.managers.communication.GridIoManager$9.run(GridIoManager.java:1091)
at 
org.apache.ignite.internal.util.StripedExecutor$Stripe.run(StripedExecutor.java:511)
at java.lang.Thread.run(Thread.java:748)
Caused by: javax.cache.integration.CacheWriterException: class 
org.apache.ignite.internal.processors.cache.CachePartialUpdateCheckedException: 
Failed to update keys (retry update if possible).: [2]
at 
org.apache.ignite.internal.processors.cache.GridCacheUtils.convertToCacheException(GridCacheUtils.java:1280)
at 
org.apache.ignite.internal.processors.cache.IgniteCacheProxyImpl.cacheException(IgniteCacheProxyImpl.java:1734)
at 
org.apache.ignite.internal.processors.cache.IgniteCacheProxyImpl.put(IgniteCacheProxyImpl.java:1087)
at 
org.apache.ignite.internal.processors.cache.GatewayProtectedCacheProxy.put(GatewayProtectedCacheProxy.java:788)
at 
org.apache.ignite.internal.processors.datastreamer.DataStreamerCacheUpdaters$Individual.receive(DataStreamerCacheUpdaters.java:121)
at 
org.apache.ignite.internal.processors.datastreamer.DataStreamerUpdateJob.call(DataStreamerUpdateJob.java:140)
at 
org.apache.ignite.internal.processors.datastreamer.DataStreamProcessor.localUpdate(DataStreamProcessor.java:400)
at 
org.apache.ignite.internal.processors.datastreamer.DataStreamProcessor.processRequest(DataStreamProcessor.java:305)
   at 
org.apache.ignite.internal.processors.datastreamer.DataStreamProcessor.access$000(DataStreamProcessor.java:60)
at 
org.apache.ignite.internal.processors.datastreamer.DataStreamProcessor$1.onMessage(DataStreamProcessor.java:90)
... 6 more
Caused by: class 
org.apache.ignite.internal.processors.cache.CachePartialUpdateCheckedException: 
Failed to update keys (retry update if possible).: [2]
at 
org.apache.ignite.internal.processors.cache.distributed.dht.atomic.GridNearAtomicAbstractUpdateFuture.onPrimaryError(GridNearAtomicAbstractUpdateFuture.java:397)
at 
org.apache.ignite.internal.processors.cache.distributed.dht.atomic.GridNearAtomicSingleUpdateFuture.onPrimaryResponse(GridNearAtomicSingleUpdateFuture.java:253)
at 
org.apache.ignite.internal.processors.cache.distributed.dht.atomic.GridNearAtomicAbstractUpdateFuture$1.apply(GridNearAtomicAbstractUpdateFuture.java:303)
at 
org.apache.ignite.internal.processors.cache.distributed.dht.atomic.GridNearAtomicAbstractUpdateFuture$1.apply(GridNearAtomicAbstractUpdateFuture.java:300)
at 
org.apache.ignite.internal.processors.cache.distributed.dht.atomic.GridDhtAtomicAbstractUpdateFuture.map(GridDhtAtomicAbstractUpdateFuture.ja

Re: Error while persisting from Ignite to Hive for a BinaryObject

2019-01-23 Thread Ilya Kasnacheev
Hello!

I think that your CacheStore implementation is confused by nested fields or
binary object values (what is the type of apn_id?). Consider using
CacheJdbcBlobStoreFactory instead, which will serialize the value to one big
field in BinaryObject format.

Regards,
-- 
Ilya Kasnacheev
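The CacheJdbcBlobStoreFactory suggestion above can be sketched in code. This is a hedged starting point, not a verified Hive setup: the cache name, JDBC URL, and credentials are placeholders, and whether Hive's JDBC driver works with the store's default table layout is an open question.

```java
import org.apache.ignite.binary.BinaryObject;
import org.apache.ignite.cache.store.jdbc.CacheJdbcBlobStoreFactory;
import org.apache.ignite.configuration.CacheConfiguration;

public class BlobStoreConfig {
    public static CacheConfiguration<Object, BinaryObject> cacheConfig() {
        // The blob store serializes the whole value into a single binary column,
        // so nested fields no longer need to map onto individual table columns.
        CacheJdbcBlobStoreFactory<Object, BinaryObject> storeFactory =
            new CacheJdbcBlobStoreFactory<>();
        storeFactory.setConnectionUrl("jdbc:hive2://localhost:10000/default"); // placeholder URL
        storeFactory.setUser("user");         // placeholder credentials
        storeFactory.setPassword("password");

        CacheConfiguration<Object, BinaryObject> ccfg =
            new CacheConfiguration<>("apnCache"); // illustrative cache name
        ccfg.setCacheStoreFactory(storeFactory);
        ccfg.setWriteThrough(true); // push every cache update through to the store
        return ccfg;
    }
}
```

The trade-off is that the persisted value is an opaque blob, so it is no longer queryable from the SQL side of the external database.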


Wed, 23 Jan 2019 at 15:47, Premachandran, Mahesh (Nokia - IN/Bangalore) <
mahesh.premachand...@nokia.com>:

> Hi all,
>
>
>
> I am trying to stream some data from Kafka to Ignite using
> IgniteDataStreamer and use 3rd party persistence to move it to HIVE. The
> data on Kafka is in avro format, which I am deserialising, populating an
> Ignite BinaryObject using the binary builder and pushing it to Ignite. It
> works well when I do not enable 3rd party persistence, but once that is
> enabled, it throws the following exception.
>
>
>
> [12:32:07] (err) Failed to execute compound future reducer:
> GridCompoundFuture [rdc=null, initFlag=1, lsnrCalls=2, done=true,
> cancelled=false, err=class o.a.i.IgniteCheckedException: DataStreamer
> request failed [node=292ab229-61fb-4d61-8f08-33c8abd310a2], futs=[true,
> true, true]]class org.apache.ignite.IgniteCheckedException: DataStreamer
> request failed [node=292ab229-61fb-4d61-8f08-33c8abd310a2]
>
> at
> org.apache.ignite.internal.processors.datastreamer.DataStreamerImpl$Buffer.onResponse(DataStreamerImpl.java:1912)
>
> at
> org.apache.ignite.internal.processors.datastreamer.DataStreamerImpl$3.onMessage(DataStreamerImpl.java:346)
>
> at
> org.apache.ignite.internal.managers.communication.GridIoManager.invokeListener(GridIoManager.java:1556)
>
> at
> org.apache.ignite.internal.managers.communication.GridIoManager.processRegularMessage0(GridIoManager.java:1184)
>
> at
> org.apache.ignite.internal.managers.communication.GridIoManager.access$4200(GridIoManager.java:125)
>
> at
> org.apache.ignite.internal.managers.communication.GridIoManager$9.run(GridIoManager.java:1091)
>
> at
> org.apache.ignite.internal.util.StripedExecutor$Stripe.run(StripedExecutor.java:511)
>
> at java.lang.Thread.run(Thread.java:748)
>
> Caused by: javax.cache.integration.CacheWriterException: class
> org.apache.ignite.internal.processors.cache.CachePartialUpdateCheckedException:
> Failed to update keys (retry update if possible).: [2]
>
> at
> org.apache.ignite.internal.processors.cache.GridCacheUtils.convertToCacheException(GridCacheUtils.java:1280)
>
> at
> org.apache.ignite.internal.processors.cache.IgniteCacheProxyImpl.cacheException(IgniteCacheProxyImpl.java:1734)
>
> at
> org.apache.ignite.internal.processors.cache.IgniteCacheProxyImpl.put(IgniteCacheProxyImpl.java:1087)
>
> at
> org.apache.ignite.internal.processors.cache.GatewayProtectedCacheProxy.put(GatewayProtectedCacheProxy.java:788)
>
> at
> org.apache.ignite.internal.processors.datastreamer.DataStreamerCacheUpdaters$Individual.receive(DataStreamerCacheUpdaters.java:121)
>
> at
> org.apache.ignite.internal.processors.datastreamer.DataStreamerUpdateJob.call(DataStreamerUpdateJob.java:140)
>
> at
> org.apache.ignite.internal.processors.datastreamer.DataStreamProcessor.localUpdate(DataStreamProcessor.java:400)
>
> at
> org.apache.ignite.internal.processors.datastreamer.DataStreamProcessor.processRequest(DataStreamProcessor.java:305)
>
>at
> org.apache.ignite.internal.processors.datastreamer.DataStreamProcessor.access$000(DataStreamProcessor.java:60)
>
> at
> org.apache.ignite.internal.processors.datastreamer.DataStreamProcessor$1.onMessage(DataStreamProcessor.java:90)
>
> ... 6 more
>
> Caused by: class
> org.apache.ignite.internal.processors.cache.CachePartialUpdateCheckedException:
> Failed to update keys (retry update if possible).: [2]
>
> at
> org.apache.ignite.internal.processors.cache.distributed.dht.atomic.GridNearAtomicAbstractUpdateFuture.onPrimaryError(GridNearAtomicAbstractUpdateFuture.java:397)
>
> at
> org.apache.ignite.internal.processors.cache.distributed.dht.atomic.GridNearAtomicSingleUpdateFuture.onPrimaryResponse(GridNearAtomicSingleUpdateFuture.java:253)
>
> at
> org.apache.ignite.internal.processors.cache.distributed.dht.atomic.GridNearAtomicAbstractUpdateFuture$1.apply(GridNearAtomicAbstractUpdateFuture.java:303)
>
> at
> org.apache.ignite.internal.processors.cache.distributed.dht.atomic.GridNearAtomicAbstractUpdateFuture$1.apply(GridNearAtomicAbstractUpdateFuture.java:300)
>
> at
> org.apache.ignite.internal.processors.cache.distributed.dht.atomic.GridDhtAtomicAbstractUpdateFuture.map(GridDhtAtomicAbstractUpdateFuture.java:390)
>
> 

Error while persisting from Ignite to Hive for a BinaryObject

2019-01-23 Thread Premachandran, Mahesh (Nokia - IN/Bangalore)
Hi all,

I am trying to stream some data from Kafka to Ignite using IgniteDataStreamer 
and use 3rd party persistence to move it to HIVE. The data on Kafka is in avro 
format, which I am deserialising, populating an Ignite BinaryObject using the 
binary builder and pushing it to Ignite. It works well when I do not enable 3rd 
party persistence, but once that is enabled, it throws the following exception.

[12:32:07] (err) Failed to execute compound future reducer: GridCompoundFuture 
[rdc=null, initFlag=1, lsnrCalls=2, done=true, cancelled=false, err=class 
o.a.i.IgniteCheckedException: DataStreamer request failed 
[node=292ab229-61fb-4d61-8f08-33c8abd310a2], futs=[true, true, true]]class 
org.apache.ignite.IgniteCheckedException: DataStreamer request failed 
[node=292ab229-61fb-4d61-8f08-33c8abd310a2]
at 
org.apache.ignite.internal.processors.datastreamer.DataStreamerImpl$Buffer.onResponse(DataStreamerImpl.java:1912)
at 
org.apache.ignite.internal.processors.datastreamer.DataStreamerImpl$3.onMessage(DataStreamerImpl.java:346)
at 
org.apache.ignite.internal.managers.communication.GridIoManager.invokeListener(GridIoManager.java:1556)
at 
org.apache.ignite.internal.managers.communication.GridIoManager.processRegularMessage0(GridIoManager.java:1184)
at 
org.apache.ignite.internal.managers.communication.GridIoManager.access$4200(GridIoManager.java:125)
at 
org.apache.ignite.internal.managers.communication.GridIoManager$9.run(GridIoManager.java:1091)
at 
org.apache.ignite.internal.util.StripedExecutor$Stripe.run(StripedExecutor.java:511)
at java.lang.Thread.run(Thread.java:748)
Caused by: javax.cache.integration.CacheWriterException: class 
org.apache.ignite.internal.processors.cache.CachePartialUpdateCheckedException: 
Failed to update keys (retry update if possible).: [2]
at 
org.apache.ignite.internal.processors.cache.GridCacheUtils.convertToCacheException(GridCacheUtils.java:1280)
at 
org.apache.ignite.internal.processors.cache.IgniteCacheProxyImpl.cacheException(IgniteCacheProxyImpl.java:1734)
at 
org.apache.ignite.internal.processors.cache.IgniteCacheProxyImpl.put(IgniteCacheProxyImpl.java:1087)
at 
org.apache.ignite.internal.processors.cache.GatewayProtectedCacheProxy.put(GatewayProtectedCacheProxy.java:788)
at 
org.apache.ignite.internal.processors.datastreamer.DataStreamerCacheUpdaters$Individual.receive(DataStreamerCacheUpdaters.java:121)
at 
org.apache.ignite.internal.processors.datastreamer.DataStreamerUpdateJob.call(DataStreamerUpdateJob.java:140)
at 
org.apache.ignite.internal.processors.datastreamer.DataStreamProcessor.localUpdate(DataStreamProcessor.java:400)
at 
org.apache.ignite.internal.processors.datastreamer.DataStreamProcessor.processRequest(DataStreamProcessor.java:305)
   at 
org.apache.ignite.internal.processors.datastreamer.DataStreamProcessor.access$000(DataStreamProcessor.java:60)
at 
org.apache.ignite.internal.processors.datastreamer.DataStreamProcessor$1.onMessage(DataStreamProcessor.java:90)
... 6 more
Caused by: class 
org.apache.ignite.internal.processors.cache.CachePartialUpdateCheckedException: 
Failed to update keys (retry update if possible).: [2]
at 
org.apache.ignite.internal.processors.cache.distributed.dht.atomic.GridNearAtomicAbstractUpdateFuture.onPrimaryError(GridNearAtomicAbstractUpdateFuture.java:397)
at 
org.apache.ignite.internal.processors.cache.distributed.dht.atomic.GridNearAtomicSingleUpdateFuture.onPrimaryResponse(GridNearAtomicSingleUpdateFuture.java:253)
at 
org.apache.ignite.internal.processors.cache.distributed.dht.atomic.GridNearAtomicAbstractUpdateFuture$1.apply(GridNearAtomicAbstractUpdateFuture.java:303)
at 
org.apache.ignite.internal.processors.cache.distributed.dht.atomic.GridNearAtomicAbstractUpdateFuture$1.apply(GridNearAtomicAbstractUpdateFuture.java:300)
at 
org.apache.ignite.internal.processors.cache.distributed.dht.atomic.GridDhtAtomicAbstractUpdateFuture.map(GridDhtAtomicAbstractUpdateFuture.java:390)
at 
org.apache.ignite.internal.processors.cache.distributed.dht.atomic.GridDhtAtomicCache.updateAllAsyncInternal0(GridDhtAtomicCache.java:1805)
at 
org.apache.ignite.internal.processors.cache.distributed.dht.atomic.GridDhtAtomicCache.updateAllAsyncInternal(GridDhtAtomicCache.java:1628)
at 
org.apache.ignite.internal.processors.cache.distributed.dht.atomic.GridNearAtomicAbstractUpdateFuture.sendSingleRequest(GridNearAtomicAbstractUpdateFuture.java:299)
at 
org.apache.ignite.internal.processors.cache.distributed.dht.atomic.GridNearAtomicSingleUpdateFuture.map(GridNearAtomicSingleUpdateFuture.java:483)
at 
org.apache.ignite.internal.processors.cache.distributed.dht.atomic.GridNearAtomicSingleUpdateFuture.mapOnTopology(GridNearAtomicSingleUpdateFuture.java:443

Re: BinaryObject nested fields syntax?

2018-11-25 Thread Evgenii Zhuravlev
Hi,

In this case, customer is also a BinaryObject, so you have to
retrieve it first and only then read a certain field from it.

Evgenii
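The approach described above can be sketched as a small helper: the nested object comes back as another BinaryObject, which is then queried for its own field. The names "trans", "cust", and "lastname" follow the model from the quoted question below; note the JSON uses "cust", not "customer".

```java
import org.apache.ignite.binary.BinaryObject;

public class NestedFieldAccess {
    /** True if the transaction's nested customer has the given last name. */
    public static boolean lastnameMatches(BinaryObject trans, String lastname) {
        // The nested value is itself a BinaryObject -- no deserialization needed.
        BinaryObject cust = trans.field("cust");
        return cust != null && lastname.equals(cust.field("lastname"));
    }
}
```

Inside the ScanQuery predicate this replaces the `trans.field("customer.lastname")` attempt, since dotted paths are not resolved by `field()`.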

Sat, 24 Nov 2018 at 00:38, joseheitor :

> Given the following model data structure for a given document record:
>
> {
>   "trans": {
> "cust": {
>   "firstname": "Bone",
>   "lastname": "Klebes",
>   "email": "bkleb...@usgs.gov",
>   "gender": "Male"
> },
> "ipaddress": "104.89.149.184",
> "date": "2017-12-01",
> "amount": 1217,
> "currency": "NOK"
>   }
> }
>
> ...modelled in Java by a Transaction class and a Customer class,
>
> And the following code to perform a ScanQuery:
>
> String date = "2017-12-01";
> int amount = 1000;
> String lastname = "Klebes";
>
> IgniteCache<Integer, BinaryObject> cache =
>     database.getCache().withKeepBinary();
> ScanQuery<Integer, BinaryObject> filter = new ScanQuery<>(
>   new IgniteBiPredicate<Integer, BinaryObject>() {
>     @Override
>     public boolean apply(Integer key, BinaryObject trans)
>     {
>       if (!trans.field("date").equals(date))
>         return false;
>       if (trans.<Integer>field("amount") <= amount)
>         return false;
>       *(???) if (!trans.field("customer.lastname").equals(lastname))*
>         return false;
>       return true;
>     }
>   }
> );
> List<Cache.Entry<Integer, BinaryObject>> result = cache.query(filter).getAll();
>
> What is the correct syntax for accessing the nested 'Customer.lastname'
> field?
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


BinaryObject nested fields syntax?

2018-11-24 Thread joseheitor
Given the following model data structure for a given document record:

{
  "trans": {
"cust": {
  "firstname": "Bone",
  "lastname": "Klebes",
  "email": "bkleb...@usgs.gov",
  "gender": "Male"
},
"ipaddress": "104.89.149.184",
"date": "2017-12-01",
"amount": 1217,
"currency": "NOK"
  }
}

...modelled in Java by a Transaction class and a Customer class,

And the following code to perform a ScanQuery:

String date = "2017-12-01";
int amount = 1000;
String lastname = "Klebes";

IgniteCache<Integer, BinaryObject> cache =
    database.getCache().withKeepBinary();
ScanQuery<Integer, BinaryObject> filter = new ScanQuery<>(
  new IgniteBiPredicate<Integer, BinaryObject>() {
    @Override
    public boolean apply(Integer key, BinaryObject trans)
    {
      if (!trans.field("date").equals(date))
        return false;
      if (trans.<Integer>field("amount") <= amount)
        return false;
      *(???) if (!trans.field("customer.lastname").equals(lastname))*
        return false;
      return true;
    }
  }
);
List<Cache.Entry<Integer, BinaryObject>> result = cache.query(filter).getAll();

What is the correct syntax for accessing the nested 'Customer.lastname'
field?



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: How to use BinaryObject Cache API to query a table created from JDBC?

2018-10-12 Thread aealexsandrov
Hi,

Try the next example and follow the same approach for yours:

// Open the JDBC connection.
try {
Connection conn =
DriverManager.getConnection("jdbc:ignite:thin://127.0.0.1;user=user;password=password");
// remove user/password if you don't have security

conn.createStatement().execute("CREATE TABLE IF NOT EXISTS
Person(\n" +
"  id int,\n" +
"  city_id int,\n" +
"  name varchar,\n" +
"  age int, \n" +
"  company varchar,\n" +
"  PRIMARY KEY (id, city_id)\n" +
") WITH
\"template=partitioned,backups=1,affinity_key=city_id, key_type=PersonKey,
value_type=Person\";"); //IMPORTANT add key_type and value_type

conn.createStatement().execute("INSERT INTO Person (id, name,
city_id) VALUES (1, 'John Doe1', 3);");
conn.createStatement().execute("INSERT INTO Person (id, name,
city_id) VALUES (2, 'John Doe2', 3);");
conn.createStatement().execute("INSERT INTO Person (id, name,
city_id) VALUES (3, 'John Doe3', 3);");
conn.createStatement().execute("INSERT INTO Person (id, name,
city_id) VALUES (4, 'John Doe4', 3);");
conn.createStatement().execute("INSERT INTO Person (id, name,
city_id) VALUES (5, 'John Doe5', 3);");
conn.createStatement().execute("INSERT INTO Person (id, name,
city_id) VALUES (6, 'John Doe6', 3);");
    conn.createStatement().execute("INSERT INTO Person (id, name,
city_id) VALUES (7, 'John Doe7', 3);");

conn.close();

BinaryObject key = ignite.binary().builder("PersonKey") // same as key_type
.setField("id", 1)
.setField("city_id", 3)
.build();

IgniteCache<BinaryObject, BinaryObject> cache1 =
ignite.getOrCreateCache("SQL_PUBLIC_PERSON").withKeepBinary();

BinaryObject value = cache1.get(key);

System.out.println("Name is " + value.field("name"));
}
catch (SQLException e) {
    e.printStackTrace();
}

Output is:

Name is John Doe1

BR,
Andrei




--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


How to use BinaryObject Cache API to query a table created from JDBC?

2018-10-12 Thread Ray
Let's say I create a table from jdbc client using this command.

create table test(a varchar, b varchar, c varchar,primary key(a,b));

I inserted one record in this table.

select * from test;
+---+---+---+
| A | B | C |
+---+---+---+
| 8 | 8 | 9 |
+---+---+---+

How can I query this table using BinaryObject Cache API?

I tried the code below, but it returns null for object o.

public class TestKey {

private String a;

private String b;

public TestKey() {
}

public String getA() {
return a;
}

public void setA(String a) {
this.a = a;
}

public String getB() {
return b;
}

public void setB(String b) {
this.b = b;
}
}



TestKey tk = new TestKey();
tk.setA("8");
tk.setB("8");
Object o = ignite.cache("SQL_PUBLIC_TEST").get(tk);



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Off heap constantly grow on use BinaryObject as field of cached data

2018-10-03 Thread Ilya Kasnacheev
Hello!

This is the first time such a problem has been observed, so maybe your case is
very uncommon for some reason.

Ignite has a thing called a "free list", which normally helps plug holes in
Durable Memory pages.

Regards,
-- 
Ilya Kasnacheev


Fri, 28 Sep 2018 at 18:40, Serg :

> Hi,
>
> I have changed my data model and the problem is gone.
>
> But it looks like I should be careful with the data I upload, and it would be
> nice to know which data I can use; maybe I missed something in the docs about
> data preparation?
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: BinaryObject vs a HashMap

2018-10-02 Thread Ilya Kasnacheev
Hello!

You cannot avoid deserialization, since data is stored off-heap in
serialized form and may also be sent over the network.

If you have a lot of different types, using a HashMap is preferable (with
primitive keys/values).
You could also try storing byte[] values to have precise control over
serialization.

Regards,
-- 
Ilya Kasnacheev
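The byte[] suggestion above can be sketched with plain JDK streams. A minimal, illustrative codec for a small String-to-String map (standard java.io only, not an Ignite API), showing the kind of precise control over the serialized form that storing byte[] values gives you:

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.DataInputStream;
import java.io.DataOutputStream;
import java.io.IOException;
import java.util.LinkedHashMap;
import java.util.Map;

public class TinyMapCodec {
    /** Encode a small String->String map into a compact byte[]: size, then key/value pairs. */
    public static byte[] encode(Map<String, String> map) throws IOException {
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        DataOutputStream out = new DataOutputStream(bos);
        out.writeInt(map.size());
        for (Map.Entry<String, String> e : map.entrySet()) {
            out.writeUTF(e.getKey());
            out.writeUTF(e.getValue());
        }
        out.flush();
        return bos.toByteArray();
    }

    /** Decode the byte[] back into a map, preserving insertion order. */
    public static Map<String, String> decode(byte[] data) throws IOException {
        DataInputStream in = new DataInputStream(new ByteArrayInputStream(data));
        int n = in.readInt();
        Map<String, String> map = new LinkedHashMap<>();
        for (int i = 0; i < n; i++)
            map.put(in.readUTF(), in.readUTF());
        return map;
    }

    public static void main(String[] args) throws IOException {
        Map<String, String> m = new LinkedHashMap<>();
        m.put("firstname", "Bone");
        m.put("lastname", "Klebes");
        byte[] bytes = encode(m);
        System.out.println(decode(bytes).get("lastname")); // prints "Klebes"
    }
}
```

With 5-10 string entries per value, this avoids any per-type metadata entirely: the cache just holds opaque byte[] values, at the cost of decoding the whole blob to read one key.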


Tue, 2 Oct 2018 at 11:52, Mikael :

> Hi!
>
> Is there some meta data storage overhead for binary object "types" ? I
> need a cache where the value is pretty much a small key value store
> (5-10 keys and values, all strings, every value different) and with that
> few values maybe just two arrays with keys and values or a hash map, but
> as I will read out single values all the time I thought using binary
> objects would be a better choice to avoid deserialization, but all
> objects will be different, so if the cache has 1 entries all 1
> might be different "type", would it be a good choice to use a binary
> object as a simple key/value storage or would I be better off using a
> POJO or HashMap ?
>
> What I don't get is if there is some meta data stored behind the scenes
> for each different type of binary object I create or if it's fine to
> have all binary objects in a cache all being different.
>
> Mikael
>
>
>


BinaryObject vs a HashMap

2018-10-02 Thread Mikael

Hi!

Is there some meta data storage overhead for binary object "types" ? I 
need a cache where the value is pretty much a small key value store 
(5-10 keys and values, all strings, every value different) and with that 
few values maybe just two arrays with keys and values or a hash map, but 
as I will read out single values all the time I thought using binary 
objects would be a better choice to avoid deserialization, but all 
objects will be different, so if the cache has 1 entries all 1 
might be different "type", would it be a good choice to use a binary 
object as a simple key/value storage or would I be better off using a 
POJO or HashMap ?


What I don't get is if there is some meta data stored behind the scenes 
for each different type of binary object I create or if it's fine to 
have all binary objects in a cache all being different.


Mikael




Re: Off heap constantly grow on use BinaryObject as field of cached data

2018-09-28 Thread Serg
Hi,  

I have changed my data model and the problem is gone.

But it looks like I should be careful with the data I upload, and it would be
nice to know which data I can use. Maybe I missed something in the docs about
data preparation?



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Off heap constantly grow on use BinaryObject as field of cached data

2018-09-28 Thread ilya.kasnacheev
Hello!

Is it still a problem for you? I can see a slight decrease of fill factor on
your chart coupled with a slight increase of data region usage. With regard
to fragmentation, that is to be expected.

I have tried your test:
15:04:54,356 INFO  [grid-timeout-worker-#23] 
Metrics for local node (to disable set 'metricsLogFrequency' to 0)
^-- Node [id=14b5701b, uptime=00:01:00.014]
^-- PageMemory [pages=6157]
^-- Heap [used=105MB, free=69.72%, comm=350MB]
^-- Non heap [used=62MB, free=-1%, comm=65MB]

Then, ten minutes later:

15:15:54,402 INFO  [grid-timeout-worker-#23] 
Metrics for local node (to disable set 'metricsLogFrequency' to 0)
^-- Node [id=14b5701b, uptime=00:12:00.058]
^-- PageMemory [pages=6212]
^-- Heap [used=230MB, free=34.11%, comm=350MB]
^-- Non heap [used=67MB, free=-1%, comm=68MB]

Regards,




--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Off heap constantly grow on use BinaryObject as field of cached data

2018-09-20 Thread Serg
Hello Ilya,

I have found that the cause of the problem is objects whose data differs in
size.
As a result, I have a situation where pageFillFactor increases and offHeapSize
decreases, and this trend never ends.

Grafana screen:
https://www.awesomescreenshot.com/image/3619800/ad604d44bae0f241a2618197c2be8af4

I updated reproducer with static data set in csv (
ignite-server-docker/src/main/docker/5.csv ) . 
Reproducer : https://github.com/SergeyMagid/ignite-reproduce-grow-memory

Can you look at this ?
Can this be a bug or we should use different data model?





--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Off heap constantly grow on use BinaryObject as field of cached data

2018-09-11 Thread Ilya Kasnacheev
Hello!

So I was increasing the amount of RAM in the memory model, and it turns out
that off-heap usage will not grow past:

2018-09-11 12:47:46,603 INFO  [pub-#290] log4j.Log4JLogger
(Log4JLogger.java:566) - #
2018-09-11 12:47:56,605 INFO  [pub-#292] log4j.Log4JLogger
(Log4JLogger.java:566) - Show metrics inside ignite
c2785f18-983c-490e-8ebc-3198b54ae132
2018-09-11 12:47:56,606 INFO  [pub-#292] log4j.Log4JLogger
(Log4JLogger.java:566) - Size : 10 of cache contactsEx
2018-09-11 12:47:56,606 INFO  [pub-#292] log4j.Log4JLogger
(Log4JLogger.java:566) - #
2018-09-11 12:47:56,606 INFO  [pub-#292] log4j.Log4JLogger
(Log4JLogger.java:566) - >>> Memory Region Name: Default_Region
2018-09-11 12:47:56,606 INFO  [pub-#292] log4j.Log4JLogger
(Log4JLogger.java:566) - AllocationRate: 0.0
2018-09-11 12:47:56,606 INFO  [pub-#292] log4j.Log4JLogger
(Log4JLogger.java:566) - PagesFillFactor: 0.8341349
2018-09-11 12:47:56,606 INFO  [pub-#292] log4j.Log4JLogger
(Log4JLogger.java:566) - PhysicalMemoryPages: 35493
2018-09-11 12:47:56,606 INFO  [pub-#292] log4j.Log4JLogger
(Log4JLogger.java:566) - OffHeapSize: 209715200
2018-09-11 12:47:56,606 INFO  [pub-#292] log4j.Log4JLogger
(Log4JLogger.java:566) - CheckpointBufferSize: 0
2018-09-11 12:47:56,606 INFO  [pub-#292] log4j.Log4JLogger
(Log4JLogger.java:566) - CheckpointBufferPages: 0
2018-09-11 12:47:56,607 INFO  [pub-#292] log4j.Log4JLogger
(Log4JLogger.java:566) - OffheapUsedSize: 145379328
2018-09-11 12:47:56,607 INFO  [pub-#292] log4j.Log4JLogger
(Log4JLogger.java:566) - #

After half an hour usage is still at this point. So I imagine that storing
this amount of data in Apache Ignite takes 140M and not less. But it won't
grow past this point.

There may be a lot of reasons why this number may grow before it reaches a
plateau. There are, obviously, a lot of metadata pages, some of which may not
be allocated immediately. Then there's fragmentation: if you remove an
object from a page and write a slightly larger object, it may not fit the
page, so it will use up space on some other page. PagesFillFactor is the
metric that tracks this.

Note that for such a small cache, PDS will be absolutely dominated by
metadata. On large datasets you will see growth due to fragmentation. But
neither of those are runaway growth. Unfortunately, your reproducer does
not show runaway growth either, so I can't tell you anything further.
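
To make the fragmentation arithmetic concrete, here is a small plain-Java sketch (the 4 KiB page size is Ignite's default; the page and byte counts are hypothetical):

```java
public class FillFactorDemo {
    // Ignite's default data page size, in bytes.
    static final int PAGE_SIZE = 4096;

    // Fraction of the allocated page space that actually holds live data.
    static double fillFactor(long usedBytes, long pages) {
        return (double) usedBytes / ((double) pages * PAGE_SIZE);
    }

    public static void main(String[] args) {
        long liveData = 20L * 1024 * 1024; // ~20 MiB of live entries (hypothetical)

        // Freshly loaded: data is packed tightly into 6000 pages.
        double fresh = fillFactor(liveData, 6_000);

        // After many update cycles, freed slots are too small for the new,
        // slightly larger values, so extra pages get allocated even though
        // the amount of live data is unchanged.
        double fragmented = fillFactor(liveData, 7_000);

        System.out.printf("fresh=%.2f fragmented=%.2f%n", fresh, fragmented);
    }
}
```

The off-heap size grows with the page count, while the fill factor drops; once the freed holes start being reused, both level off.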

Regards,
-- 
Ilya Kasnacheev


вт, 11 сент. 2018 г. в 11:28, Serg :

> Hi Ilya,
>
> I created reproducer with two tests
> https://github.com/SergeyMagid/ignite-reproduce-grow-memory
>
> The only difference between these tests is the data inserted into the cache.
> I had previously supposed that the problem was caused by BinaryObject only,
> but I reproduced it without BinaryObject too.
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: Off heap constantly grow on use BinaryObject as field of cached data

2018-09-11 Thread Serg
Hi Ilya,

I created reproducer with two tests
https://github.com/SergeyMagid/ignite-reproduce-grow-memory

The only difference between these tests is the data inserted into the cache.
I had previously supposed that the problem was caused by BinaryObject only, but I
reproduced it without BinaryObject too.



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Off heap constantly grow on use BinaryObject as field of cached data

2018-09-10 Thread Ilya Kasnacheev
Hello!

There's a WAL reader somewhere in the code, it could help if you have
persistence. Note that both the invocation and the output of this tool are confusing.

It would be nice if you had a reproducer which would show this behavior.
The snippet that you have posted previously isn't very clear on expected
behavior. Can you please try to devise something stand-alone on Github?

Regards,
-- 
Ilya Kasnacheev


пн, 10 сент. 2018 г. в 12:04, Serg :

> Hi Ilya,
>
> Yes, the growth is not that quick, but in production we lose nearly 1GB every
> day with 15GB of data on each node.
> I have simplified the data classes by removing annotations, and this did not help.
>
> Is it possible to debug off-heap memory? How can I understand where the
> memory goes?
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: Off heap constantly grow on use BinaryObject as field of cached data

2018-09-10 Thread Serg
Hi Ilya,

Yes, the growth is not that quick, but in production we lose nearly 1GB every day
with 15GB of data on each node.
I have simplified the data classes by removing annotations, and this did not help.

Is it possible to debug off-heap memory? How can I understand where the
memory goes?



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Off heap constantly grow on use BinaryObject as field of cached data

2018-09-07 Thread Ilya Kasnacheev
Hello!

I have ran your test and I don't observe any off-heap growth:
[18:17:14,655][INFO ][grid-timeout-worker-#24%ignite.GrowTest0%][GrowTest0]
Metrics for local node (to disable set 'metricsLogFrequency' to 0)
^-- Node [id=23ad15f7, name=ignite.GrowTest0, uptime=00:00:40.015]
^-- H/N/C [hosts=1, nodes=1, CPUs=8]
^-- CPU [cur=31,67%, avg=31,69%, GC=0,2%]
*^-- PageMemory [pages=3314]*
^-- Heap [used=2419MB, free=66,03%, comm=2796MB]
^-- Non heap [used=48MB, free=-1%, comm=53MB]
^-- Outbound messages queue [size=0]
^-- Public thread pool [active=0, idle=6, qSize=0]
^-- System thread pool [active=0, idle=8, qSize=0]

and then after a few minutes:
[18:19:14,710][INFO ][grid-timeout-worker-#24%ignite.GrowTest0%][GrowTest0]
Metrics for local node (to disable set 'metricsLogFrequency' to 0)
^-- Node [id=23ad15f7, name=ignite.GrowTest0, uptime=00:02:40.071]
^-- H/N/C [hosts=1, nodes=1, CPUs=8]
^-- CPU [cur=31,93%, avg=31,67%, GC=0,2%]
*^-- PageMemory [pages=3317]*
^-- Heap [used=1442MB, free=79,74%, comm=2828MB]
^-- Non heap [used=48MB, free=-1%, comm=53MB]
^-- Outbound messages queue [size=0]
^-- Public thread pool [active=0, idle=0, qSize=0]
^-- System thread pool [active=0, idle=6, qSize=0]

There's barely any growth at all.

Maybe it's something in your post-processing via annotations that adds
something to the classes and causes them to bloat in size?

Regards,
-- 
Ilya Kasnacheev


пт, 7 сент. 2018 г. в 10:39, Serg :

> Hi All
>
> We have got a problem with growing off-heap on each sync data.
>
> *Precondition *
> We have ignite in memory cache. with key Integer and value Contact
> Each hour we sync full data in cache via api cache.put(key, value); (cache
> full load)
> Also we have increment update via the api cache.put(key, value);
> (streaming)
>
> Data:
>
> @Builder
> @Data
> @AllArgsConstructor
> @NoArgsConstructor
> private class Contact {
> private int id;
> private Map<String, BinaryObject> customFields;
> }
>
> @Data
> private class CustomInfo {
>
> private Date updateTime;
> private boolean deleted;
>
> }
>
> /if we do not use BinaryObject in Contact all works as expected./
>
> *Problem*
> From time to time, off-heap memory allocates new memory pages for the same keys.
> As a result, memory is constantly growing, but the data size is not
> increasing.
> If we stop Ignite and load the data again, off-heap has a normal size.
>
>
> Example to reproduce problem
>
>  @Test
> public void testStreamAndUpdateContact() {
>  HashMap<String, BinaryObject> baseCustomFields = new HashMap<>();
> baseCustomFields.put("customX",
>
> ignite.binary().builder(CustomInfo.class.getName()).build());
> baseCustomFields.put("customY",
> ignite.binary().builder(CustomInfo.class.getName()).build());
> baseCustomFields.put("customY1",
> ignite.binary().builder(CustomInfo.class.getName()).build());
> baseCustomFields.put("customY2",
> ignite.binary().builder(CustomInfo.class.getName()).build());
> baseCustomFields.put("customY3",
> ignite.binary().builder(CustomInfo.class.getName()).build());
>
> {
> CacheConfiguration<Integer, Contact> cfg = new
> CacheConfiguration<>("contactsEx");
> cfg.setCacheMode(CacheMode.PARTITIONED);
> cfg.setAtomicityMode(CacheAtomicityMode.TRANSACTIONAL);
> IgniteCache<Integer, Contact> cache =
> ignite.getOrCreateCache(cfg);
> }
>
> for (int z = 0; z < 50_000; z++) {
>
> //Fill base contacts
> {
> IgniteCache<Integer, Contact> cache =
> ignite.getOrCreateCache("contactsEx");
> IntStream.range(0, 10_000).forEach(counter -> {
> Lock lock = cache.lock(counter);
> lock.lock();
> try {
> cache.put(counter,
> Contact.builder()
> .id(counter)
> .customFields(baseCustomFields)
> .build());
> } finally {
> lock.unlock();
> }
>
> });
> }
>
> //events
>
> ExecutorService executorService =
> Executors.newSingleThreadExecutor();
> executorService.execute(() -> {
> {
> IgniteCache<Integer, Contact> cache =
> ignite.getOrCreateCache("contactsEx");
> IntStream.range(0, 10_000).forEach(counter -> {
>
>  

Off heap constantly grow on use BinaryObject as field of cached data

2018-09-07 Thread Serg
Hi All

We have got a problem with off-heap memory growing on each data sync.

*Precondition *
We have an Ignite in-memory cache with key Integer and value Contact.
Each hour we sync the full data into the cache via the API cache.put(key, value)
(full cache load).
We also have incremental updates via the API cache.put(key, value) (streaming).

Data:

@Builder
@Data
@AllArgsConstructor
@NoArgsConstructor
private class Contact {
private int id;
private Map<String, BinaryObject> customFields;
}

@Data
private class CustomInfo {

private Date updateTime;
private boolean deleted;

}

/if we do not use BinaryObject in Contact all works as expected./

*Problem*
From time to time, off-heap memory allocates new memory pages for the same keys.
As a result, memory is constantly growing, but the data size is not
increasing.
If we stop Ignite and load the data again, off-heap has a normal size.


Example to reproduce problem

@Test
public void testStreamAndUpdateContact() {
    HashMap<String, BinaryObject> baseCustomFields = new HashMap<>();
    baseCustomFields.put("customX",
        ignite.binary().builder(CustomInfo.class.getName()).build());
    baseCustomFields.put("customY",
        ignite.binary().builder(CustomInfo.class.getName()).build());
    baseCustomFields.put("customY1",
        ignite.binary().builder(CustomInfo.class.getName()).build());
    baseCustomFields.put("customY2",
        ignite.binary().builder(CustomInfo.class.getName()).build());
    baseCustomFields.put("customY3",
        ignite.binary().builder(CustomInfo.class.getName()).build());

    {
        CacheConfiguration<Integer, Contact> cfg =
            new CacheConfiguration<>("contactsEx");
        cfg.setCacheMode(CacheMode.PARTITIONED);
        cfg.setAtomicityMode(CacheAtomicityMode.TRANSACTIONAL);
        IgniteCache<Integer, Contact> cache = ignite.getOrCreateCache(cfg);
    }

    for (int z = 0; z < 50_000; z++) {

        // Fill base contacts
        {
            IgniteCache<Integer, Contact> cache =
                ignite.getOrCreateCache("contactsEx");
            IntStream.range(0, 10_000).forEach(counter -> {
                Lock lock = cache.lock(counter);
                lock.lock();
                try {
                    cache.put(counter,
                        Contact.builder()
                            .id(counter)
                            .customFields(baseCustomFields)
                            .build());
                } finally {
                    lock.unlock();
                }
            });
        }

        // events

        ExecutorService executorService =
            Executors.newSingleThreadExecutor();
        executorService.execute(() -> {
            {
                IgniteCache<Integer, Contact> cache =
                    ignite.getOrCreateCache("contactsEx");
                IntStream.range(0, 10_000).forEach(counter -> {
                    Lock lock = cache.lock(counter);
                    lock.lock();
                    try {
                        Contact c = cache.get(counter);
                        Map<String, BinaryObject> customFields =
                            c.getCustomFields();
                        customFields.put("customY", ignite.binary()
                            .builder(CustomInfo.class.getName())
                            .setField("updateTime", new Date())
                            .build());
                        cache.put(counter, c);
                    } finally {
                        lock.unlock();
                    }
                });
            }
            executorService.shutdown();
        });
    }
}





--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Determining BinaryObject field type

2018-03-28 Thread David Harvey
 I had stopped at BinaryObject and didn't follow the indirection through
type() to BinaryType. I think I assumed that this had only information at
the higher level and wouldn't drill down into the fields.
This also answers a question about how to enumerate the fields.

Thanks.
-DH

On Wed, Mar 28, 2018 at 3:10 AM, Pavel Vinokurov <vinokurov.pa...@gmail.com>
wrote:

> Dave,
>
> I suppose there isn't way to delete the schema.
> You could get the meta information about binary objects using Ignite#binary()
> method.
> For example ignite.binary().type("package.Employeee").fieldTypeName("name").
>
>
>
> Thanks,
> Pavel
>
> 2018-03-24 1:10 GMT+03:00 Dave Harvey <dhar...@jobcase.com>:
>
>> Once a BinaryObjectSchema is created, it is not possible to change the
>> type
>> of a field for a known field name.
>>
>> My question is whether there is any way to determine the type of that
>> field
>> in the Schema.
>>
>> We are hitting a case where the way we get the data out of a different
>> database returns a TIMESTAMP, but our binary object wants a DATE. In
>> this test case, I could figure out that, but in the general case,  I have
>> a
>> BinaryObject type name, and a field name, and an exception if I try to put
>> the wrong type in that field.
>>
>> The hokey general solutions I have come up with are:
>> 1) Parse the exception message to see what type it wants
>> 2) Have a list of conversions to try for the source type, and step through
>> them on each exception.
>> 3) Get the field from an existing binary object of that type, and use the
>> class of the result.   But there is the chicken/egg problem.
>>
>> I have found that I can create a cache on a cluster with persistence, with
>> some type definition, then delete that cache, the cluster will remember
>> the
>> BinaryObjectSchema for that type, and refuse to allow me to change the
>> field's type.  If I  don't remember the field's type, how can I
>> build the binary object?
>>
>> Is there any way to delete the schema without nuking some of the
>> binary-meta/marshaller files when the cluster is down?
>>
>>
>>
>> --
>> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>>
>
>
>
> --
>
> Regards
>
> Pavel Vinokurov
>



Re: Determining BinaryObject field type

2018-03-28 Thread Pavel Vinokurov
Dave,

There is one way to delete metadata.
You could find the typeId using ignite.binary().type("package.Employeee").typeId()
and remove the .bin files in all *binary_meta* subfolders.


Re: Determining BinaryObject field type

2018-03-28 Thread Pavel Vinokurov
Dave,

I suppose there isn't way to delete the schema.
You could get the meta information about binary objects using Ignite#binary()
method.
For example ignite.binary().type("package.Employeee").fieldTypeName("name").
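
The same API can enumerate all the fields of a type, which covers the general case as well. A sketch, assuming a started Ignite instance and an already-registered binary type:

```java
import org.apache.ignite.Ignite;
import org.apache.ignite.binary.BinaryType;

public class BinaryTypeInspector {
    // Prints every declared field of a binary type together with its
    // declared type name, e.g. "updateTime -> Date".
    static void describe(Ignite ignite, String typeName) {
        BinaryType type = ignite.binary().type(typeName);
        if (type == null)
            return; // type not registered on the cluster yet

        for (String field : type.fieldNames())
            System.out.println(field + " -> " + type.fieldTypeName(field));
    }
}
```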



Thanks,
Pavel

2018-03-24 1:10 GMT+03:00 Dave Harvey <dhar...@jobcase.com>:

> Once a BinaryObjectSchema is created, it is not possible to change the type
> of a field for a known field name.
>
> My question is whether there is any way to determine the type of that field
> in the Schema.
>
> We are hitting a case where the way we get the data out of a different
> database returns a TIMESTAMP, but our binary object wants a DATE. In
> this test case, I could figure out that, but in the general case,  I have a
> BinaryObject type name, and a field name, and an exception if I try to put
> the wrong type in that field.
>
> The hokey general solutions I have come up with are:
> 1) Parse the exception message to see what type it wants
> 2) Have a list of conversions to try for the source type, and step through
> them on each exception.
> 3) Get the field from an existing binary object of that type, and use the
> class of the result.   But there is the chicken/egg problem.
>
> I have found that I can create a cache on a cluster with persistence, with
> some type definition, then delete that cache, the cluster will remember the
> BinaryObjectSchema for that type, and refuse to allow me to change the
> field's type.  If I  don't remember the field's type, how can I
> build the binary object?
>
> Is there any way to delete the schema without nuking some of the
> binary-meta/marshaller files when the cluster is down?
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>



-- 

Regards

Pavel Vinokurov


Determining BinaryObject field type

2018-03-23 Thread Dave Harvey
Once a BinaryObjectSchema is created, it is not possible to change the type
of a field for a known field name.

My question is whether there is any way to determine the type of that field
in the Schema.  

We are hitting a case where the way we get the data out of a different
database returns a TIMESTAMP, but our binary object wants a DATE. In
this test case, I could figure out that, but in the general case,  I have a
BinaryObject type name, and a field name, and an exception if I try to put
the wrong type in that field.

The hokey general solutions I have come up with are:
1) Parse the exception message to see what type it wants
2) Have a list of conversions to try for the source type, and step through
them on each exception.
3) Get the field from an existing binary object of that type, and use the
class of the result.   But there is the chicken/egg problem.

I have found that I can create a cache on a cluster with persistence, with
some type definition, then delete that cache, and the cluster will remember the
BinaryObjectSchema for that type and refuse to allow me to change the
field's type. If I don't remember the field's type, how can I
build the binary object?

Is there any way to delete the schema without nuking some of the
binary-meta/marshaller files when the cluster is down?



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: query on BinaryObject index and table

2018-02-14 Thread Vladimir Ozerov
Hi Rajesh,

Method CacheConfiguration.setIndexedTypes() should only be used for classes
with SQL annotations. Since you operate on binary objects, you should use
CacheConfiguration.setQueryEntity(), and define QueryEntity with all
necessary fields. Also there is a property QueryEntity.tableName which you
can use to specify a concrete table name.

Vladimir.
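
A sketch of such a configuration (illustrative only; the value type name "Person", the fields, and the table name are assumptions, not from this thread):

```java
import java.util.Collections;
import java.util.LinkedHashMap;

import org.apache.ignite.binary.BinaryObject;
import org.apache.ignite.cache.QueryEntity;
import org.apache.ignite.cache.QueryIndex;
import org.apache.ignite.configuration.CacheConfiguration;

public class BinaryCacheConfig {
    // Builds a cache configuration whose SQL schema is declared explicitly,
    // since binary objects carry no annotations to scan.
    public static CacheConfiguration<Long, BinaryObject> personCacheCfg() {
        LinkedHashMap<String, String> fields = new LinkedHashMap<>();
        fields.put("id", Long.class.getName());
        fields.put("name", String.class.getName());

        QueryEntity entity = new QueryEntity(Long.class.getName(), "Person")
            .setTableName("PERSON") // concrete SQL table name
            .setFields(fields)
            .setIndexes(Collections.singletonList(new QueryIndex("name")));

        return new CacheConfiguration<Long, BinaryObject>("personCache")
            .setQueryEntities(Collections.singletonList(entity));
    }
}
```

With this in place, SQL queries should address the table as PERSON regardless of the binary type name.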


On Mon, Jan 22, 2018 at 7:41 PM, Denis Magda <dma...@apache.org> wrote:

> The schema can be changed with ALTER TABLE ADD COLUMN command:
> https://apacheignite-sql.readme.io/docs/alter-table
>
> To my knowledge this is supported for schemas that were initially
> configured by both DDL and QueryEntity/Annotations.
>
> —
> Denis
>
>
> On Jan 22, 2018, at 5:44 AM, Ilya Kasnacheev <ilya.kasnach...@gmail.com>
> wrote:
>
> Hello Rajesh!
>
> Table name can be specified in cache configuration's query entity. If not
> supplied, by default it is equal to value type name, e.g. BinaryObject :)
>
> Also, note that SQL tables have fixed schemas. This means you won't be
> able to add a random set of fields in BinaryObject and be able to do SQL
> queries on them all. You will have to declare all fields that you are going
> to use via SQL, either by annotations or query entity:
> see https://apacheignite-sql.readme.io/docs/schema-and-indexes
>
> To add index, you should either specify it in annotations (via index=true)
> or in query entity.
>
> Regards,
> Ilya.
>
> --
> Ilya Kasnacheev
>
> 2018-01-21 15:12 GMT+03:00 Rajesh Kishore <rajesh10si...@gmail.com>:
>
>> Hi Denis,
>>
>> This is my code:
>>
>> CacheConfiguration<Long, BinaryObject> cacheCfg =
>> new CacheConfiguration<>(ORG_CACHE);
>>
>> cacheCfg.setAtomicityMode(CacheAtomicityMode.TRANSACTIONAL);
>> cacheCfg.setBackups(1);
>> cacheCfg
>> .setWriteSynchronizationMode(CacheWriteSynchronizationMode.FULL_SYNC);
>> cacheCfg.setIndexedTypes(Long.class, BinaryObject.class);
>>
>> IgniteCache<Long, BinaryObject> cache = ignite.getOrCreateCache(cacheCfg);
>>
>> if ( UPDATE ) {
>>   System.out.println("Populating the cache...");
>>
>>   try (IgniteDataStreamer<Long, BinaryObject> streamer =
>>   ignite.dataStreamer(ORG_CACHE)) {
>> streamer.allowOverwrite(true);
>> IgniteBinary binary = ignite.binary();
>> BinaryObjectBuilder objBuilder = binary.builder(ORG_CACHE);
>> ;
>> for ( long i = 0; i < 100; i++ ) {
>>   streamer.addData(i,
>>   objBuilder.setField("id", i)
>>   .setField("name", "organization-" + i).build());
>>
>>   if ( i > 0 && i % 100 == 0 )
>> System.out.println("Done: " + i);
>> }
>>   }
>> }
>>
>> IgniteCache<Long, BinaryObject> binaryCache =
>> ignite.cache(ORG_CACHE).withKeepBinary();
>> BinaryObject binaryPerson = binaryCache.get(54l);
>> System.out.println("name " + binaryPerson.field("name"));
>>
>>
>> Not sure if I am missing some context here. If I have to use sqlquery,
>> what table name should I specify? I did not create a table explicitly; do I
>> need to do that?
>> How would I create the index?
>>
>> Thanks,
>> Rajesh
>>
>> On Sun, Jan 21, 2018 at 12:25 PM, Denis Magda <dma...@apache.org> wrote:
>>
>>>
>>>
>>> > On Jan 20, 2018, at 7:20 PM, Rajesh Kishore <rajesh10si...@gmail.com>
>>> wrote:
>>> >
>>> > Hi,
>>> >
>>> > I have requirement that my schema is not fixed , so I have to use the
>>> BinaryObject approach instead of fixed POJO
>>> >
>>> > I am relying on OOTB file system persistence mechanism
>>> >
>>> > My questions are:
>>> > - How can I specify the indexes on BinaryObject?
>>>
>>> https://apacheignite-sql.readme.io/docs/create-index
>>> https://apacheignite-sql.readme.io/docs/schema-and-indexes
>>>
>>> > - If I have to use sql query for retrieving objects , what table name
>>> should I specify, the one which is used for cache name does not work
>>> >
>>>
>>> Was the table and its queryable fields/indexes created with CREATE TABLE
>>> or Java annotations/QueryEntity?
>>>
>>> If the latter approach was taken then the table name corresponds to the
>>> Java type name as shown in this doc:
>>> https://apacheignite-sql.readme.io/docs/schema-and-indexes
>>>
>>> —
>>> Denis
>>>
>>> > -Rajesh
>>>
>>>
>>
>
>


Re: How to use BinaryObject from existing data

2018-01-25 Thread vkulichenko
When you create a table via SQL, you already fully describe its schema, so
there is no need for QueryEntity. Can you clarify what you're trying to
achieve?

-Val



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: How to use BinaryObject from existing data

2018-01-24 Thread Humphrey
Is it possible to create a table by SQL and then add the QueryEntity
(probably when creating the table) so that later we are able to search with
the SqlFieldsQuery API?

I mean without first creating a POJO and defining it in the cache
configuration.



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: How to use BinaryObject from existing data

2018-01-24 Thread ezhuravlev
Hi, 

In your case you most probably use the id as the key, and it could be Long or Int.
So, you just need to get the value by this key. The example you provided describes
the case with a complex primary key.
Evgenii



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


How to use BinaryObject from existing data

2018-01-24 Thread Thomas Isaksen
Hi

I have created a table with three columns, which I populated using SQL.

The table is called descriptor and the columns are id, name, description.

Now I wonder, how can I query this table to get a list of BinaryObjects?

In the documentation I found this example:

BinaryObject key = ignite.binary().toBinary(new MyKey());

BinaryObject result = cache.get(key);

What does the class MyKey look like?


./t
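
In general, MyKey is just whatever POJO the cache was keyed with; Ignite compares keys in their binary form, which boils down to field-by-field equality. A hypothetical composite key might look like this (illustrative only; the field names are made up):

```java
import java.util.Objects;

// A hypothetical composite key: ignite.binary().toBinary(new MyKey(1L, "a"))
// would marshal these fields, and lookups match on their values.
public class MyKey {
    private final long id;
    private final String name;

    public MyKey(long id, String name) {
        this.id = id;
        this.name = name;
    }

    @Override
    public boolean equals(Object o) {
        if (this == o) return true;
        if (!(o instanceof MyKey)) return false;
        MyKey k = (MyKey) o;
        return id == k.id && Objects.equals(name, k.name);
    }

    @Override
    public int hashCode() {
        return Objects.hash(id, name);
    }
}
```

If the table was created with a single-column primary key, as here with id, the key is simply that column's type (e.g. Long), and a plain cache.get(42L) is enough.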


Re: query on BinaryObject index and table

2018-01-22 Thread Denis Magda
The schema can be changed with ALTER TABLE ADD COLUMN command:
https://apacheignite-sql.readme.io/docs/alter-table 
<https://apacheignite-sql.readme.io/docs/alter-table>

To my knowledge this is supported for schemas that were initially configured by 
both DDL and QueryEntity/Annotations.

—
Denis

> On Jan 22, 2018, at 5:44 AM, Ilya Kasnacheev <ilya.kasnach...@gmail.com> 
> wrote:
> 
> Hello Rajesh!
> 
> Table name can be specified in cache configuration's query entity. If not 
> supplied, by default it is equal to value type name, e.g. BinaryObject :)
> 
> Also, note that SQL tables have fixed schemas. This means you won't be able 
> to add a random set of fields in BinaryObject and be able to do SQL queries 
> on them all. You will have to declare all fields that you are going to use 
> via SQL, either by annotations or query entity:
> see https://apacheignite-sql.readme.io/docs/schema-and-indexes 
> <https://apacheignite-sql.readme.io/docs/schema-and-indexes>
> 
> To add index, you should either specify it in annotations (via index=true) or 
> in query entity.
> 
> Regards,
> Ilya.
> 
> -- 
> Ilya Kasnacheev
> 
> 2018-01-21 15:12 GMT+03:00 Rajesh Kishore <rajesh10si...@gmail.com 
> <mailto:rajesh10si...@gmail.com>>:
> Hi Denis,
> 
> This is my code:
> 
> CacheConfiguration<Long, BinaryObject> cacheCfg =
> new CacheConfiguration<>(ORG_CACHE);
> 
> cacheCfg.setAtomicityMode(CacheAtomicityMode.TRANSACTIONAL);
> cacheCfg.setBackups(1);
> cacheCfg
> .setWriteSynchronizationMode(CacheWriteSynchronizationMode.FULL_SYNC);
> cacheCfg.setIndexedTypes(Long.class, BinaryObject.class);
> 
> IgniteCache<Long, BinaryObject> cache = ignite.getOrCreateCache(cacheCfg);
> 
> if ( UPDATE ) {
>   System.out.println("Populating the cache...");
> 
>   try (IgniteDataStreamer<Long, BinaryObject> streamer =
>   ignite.dataStreamer(ORG_CACHE)) {
> streamer.allowOverwrite(true);
> IgniteBinary binary = ignite.binary();
> BinaryObjectBuilder objBuilder = binary.builder(ORG_CACHE);
> ;
> for ( long i = 0; i < 100; i++ ) {
>   streamer.addData(i,
>   objBuilder.setField("id", i)
>   .setField("name", "organization-" + i).build());
> 
>   if ( i > 0 && i % 100 == 0 )
> System.out.println("Done: " + i);
> }
>   }
> }
> 
> IgniteCache<Long, BinaryObject> binaryCache =
> ignite.cache(ORG_CACHE).withKeepBinary();
> BinaryObject binaryPerson = binaryCache.get(54l);
> System.out.println("name " + binaryPerson.field("name"));
> 
> 
> Not sure if I am missing some context here. If I have to use sqlquery,
> what table name should I specify? I did not create a table explicitly; do I
> need to do that?
> How would I create the index?
> 
> Thanks,
> Rajesh
> 
> On Sun, Jan 21, 2018 at 12:25 PM, Denis Magda <dma...@apache.org 
> <mailto:dma...@apache.org>> wrote:
> 
> 
> > On Jan 20, 2018, at 7:20 PM, Rajesh Kishore <rajesh10si...@gmail.com 
> > <mailto:rajesh10si...@gmail.com>> wrote:
> >
> > Hi,
> >
> > I have requirement that my schema is not fixed , so I have to use the 
> > BinaryObject approach instead of fixed POJO
> >
> > I am relying on OOTB file system persistence mechanism
> >
> > My questions are:
> > - How can I specify the indexes on BinaryObject?
> 
> https://apacheignite-sql.readme.io/docs/create-index 
> <https://apacheignite-sql.readme.io/docs/create-index>
> https://apacheignite-sql.readme.io/docs/schema-and-indexes 
> <https://apacheignite-sql.readme.io/docs/schema-and-indexes>
> 
> > - If I have to use sql query for retrieving objects , what table name 
> > should I specify, the one which is used for cache name does not work
> >
> 
> Was the table and its queryable fields/indexes created with CREATE TABLE or 
> Java annotations/QueryEntity?
> 
> If the latter approach was taken then the table name corresponds to the Java 
> type name as shown in this doc:
> https://apacheignite-sql.readme.io/docs/schema-and-indexes 
> <https://apacheignite-sql.readme.io/docs/schema-and-indexes>
> 
> —
> Denis
> 
> > -Rajesh
> 
> 
> 



Re: query on BinaryObject index and table

2018-01-22 Thread Ilya Kasnacheev
Hello Rajesh!

Table name can be specified in cache configuration's query entity. If not
supplied, by default it is equal to value type name, e.g. BinaryObject :)

Also, note that SQL tables have fixed schemas. This means you won't be able
to add a random set of fields in BinaryObject and be able to do SQL queries
on them all. You will have to declare all fields that you are going to use
via SQL, either by annotations or query entity:
see https://apacheignite-sql.readme.io/docs/schema-and-indexes

To add index, you should either specify it in annotations (via index=true)
or in query entity.

Regards,
Ilya.

-- 
Ilya Kasnacheev

2018-01-21 15:12 GMT+03:00 Rajesh Kishore <rajesh10si...@gmail.com>:

> Hi Denis,
>
> This is my code:
>
> CacheConfiguration<Long, BinaryObject> cacheCfg =
> new CacheConfiguration<>(ORG_CACHE);
>
> cacheCfg.setAtomicityMode(CacheAtomicityMode.TRANSACTIONAL);
> cacheCfg.setBackups(1);
> cacheCfg
> .setWriteSynchronizationMode(CacheWriteSynchronizationMode.FULL_SYNC);
> cacheCfg.setIndexedTypes(Long.class, BinaryObject.class);
>
> IgniteCache<Long, BinaryObject> cache = ignite.getOrCreateCache(cacheCfg);
>
> if ( UPDATE ) {
>   System.out.println("Populating the cache...");
>
>   try (IgniteDataStreamer<Long, BinaryObject> streamer =
>   ignite.dataStreamer(ORG_CACHE)) {
> streamer.allowOverwrite(true);
> IgniteBinary binary = ignite.binary();
> BinaryObjectBuilder objBuilder = binary.builder(ORG_CACHE);
> ;
> for ( long i = 0; i < 100; i++ ) {
>   streamer.addData(i,
>   objBuilder.setField("id", i)
>   .setField("name", "organization-" + i).build());
>
>   if ( i > 0 && i % 100 == 0 )
>     System.out.println("Done: " + i);
> }
>   }
> }
>
> IgniteCache<Long, BinaryObject> binaryCache =
> ignite.cache(ORG_CACHE).withKeepBinary();
> BinaryObject binaryPerson = binaryCache.get(54l);
> System.out.println("name " + binaryPerson.field("name"));
>
>
> Not sure if I am missing some context here. If I have to use sqlquery,
> what table name should I specify? I did not create a table explicitly; do I
> need to do that?
> How would I create the index?
>
> Thanks,
> Rajesh
>
> On Sun, Jan 21, 2018 at 12:25 PM, Denis Magda <dma...@apache.org> wrote:
>
>>
>>
>> > On Jan 20, 2018, at 7:20 PM, Rajesh Kishore <rajesh10si...@gmail.com>
>> wrote:
>> >
>> > Hi,
>> >
>> > I have requirement that my schema is not fixed , so I have to use the
>> BinaryObject approach instead of fixed POJO
>> >
>> > I am relying on OOTB file system persistence mechanism
>> >
>> > My questions are:
>> > - How can I specify the indexes on BinaryObject?
>>
>> https://apacheignite-sql.readme.io/docs/create-index
>> https://apacheignite-sql.readme.io/docs/schema-and-indexes
>>
>> > - If I have to use sql query for retrieving objects , what table name
>> should I specify, the one which is used for cache name does not work
>> >
>>
>> Was the table and its queryable fields/indexes created with CREATE TABLE
>> or Java annotations/QueryEntity?
>>
>> If the latter approach was taken then the table name corresponds to the
>> Java type name as shown in this doc:
>> https://apacheignite-sql.readme.io/docs/schema-and-indexes
>>
>> —
>> Denis
>>
>> > -Rajesh
>>
>>
>


Re: query on BinaryObject index and table

2018-01-21 Thread Rajesh Kishore
Hi Denis,

This is my code:

CacheConfiguration<Long, BinaryObject> cacheCfg =
new CacheConfiguration<>(ORG_CACHE);

cacheCfg.setAtomicityMode(CacheAtomicityMode.TRANSACTIONAL);
cacheCfg.setBackups(1);
cacheCfg.setWriteSynchronizationMode(CacheWriteSynchronizationMode.FULL_SYNC);
cacheCfg.setIndexedTypes(Long.class, BinaryObject.class);

IgniteCache<Long, BinaryObject> cache =
ignite.getOrCreateCache(cacheCfg);

if ( UPDATE ) {
  System.out.println("Populating the cache...");

  try (IgniteDataStreamer<Long, BinaryObject> streamer =
  ignite.dataStreamer(ORG_CACHE)) {
streamer.allowOverwrite(true);
IgniteBinary binary = ignite.binary();
BinaryObjectBuilder objBuilder = binary.builder(ORG_CACHE);
for ( long i = 0; i < 100; i++ ) {
  streamer.addData(i,
  objBuilder.setField("id", i)
  .setField("name", "organization-" + i).build());

  if ( i > 0 && i % 100 == 0 )
System.out.println("Done: " + i);
}
  }
}

IgniteCache<Long, BinaryObject> binaryCache =
ignite.cache(ORG_CACHE).withKeepBinary();
BinaryObject binaryPerson = binaryCache.get(54L);
System.out.println("name " + binaryPerson.field("name"));


Not sure if I am missing some context here. If I have to use a SQL query,
what table name should I specify? I did not create a table explicitly - do I
need to do that?
How would I create the index?

Thanks,
Rajesh

On Sun, Jan 21, 2018 at 12:25 PM, Denis Magda <dma...@apache.org> wrote:

>
>
> > On Jan 20, 2018, at 7:20 PM, Rajesh Kishore <rajesh10si...@gmail.com>
> wrote:
> >
> > Hi,
> >
> > I have requirement that my schema is not fixed , so I have to use the
> BinaryObject approach instead of fixed POJO
> >
> > I am relying on OOTB file system persistence mechanism
> >
> > My questions are:
> > - How can I specify the indexes on BinaryObject?
>
> https://apacheignite-sql.readme.io/docs/create-index
> https://apacheignite-sql.readme.io/docs/schema-and-indexes
>
> > - If I have to use sql query for retrieving objects , what table name
> should I specify, the one which is used for cache name does not work
> >
>
> Was the table and its queryable fields/indexes created with CREATE TABLE
> or Java annotations/QueryEntity?
>
> If the latter approach was taken then the table name corresponds to the
> Java type name as shown in this doc:
> https://apacheignite-sql.readme.io/docs/schema-and-indexes
>
> —
> Denis
>
> > -Rajesh
>
>


Re: query on BinaryObject index and table

2018-01-20 Thread Denis Magda


> On Jan 20, 2018, at 7:20 PM, Rajesh Kishore <rajesh10si...@gmail.com> wrote:
> 
> Hi,
> 
> I have requirement that my schema is not fixed , so I have to use the 
> BinaryObject approach instead of fixed POJO
> 
> I am relying on OOTB file system persistence mechanism
> 
> My questions are:
> - How can I specify the indexes on BinaryObject?

https://apacheignite-sql.readme.io/docs/create-index
https://apacheignite-sql.readme.io/docs/schema-and-indexes

> - If I have to use sql query for retrieving objects , what table name should 
> I specify, the one which is used for cache name does not work
> 

Was the table and its queryable fields/indexes created with CREATE TABLE or 
Java annotations/QueryEntity?

If the latter approach was taken then the table name corresponds to the Java 
type name as shown in this doc:
https://apacheignite-sql.readme.io/docs/schema-and-indexes
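A runnable sketch of what this means in practice - assuming the
ignite-indexing module is on the classpath, and using illustrative names
("orgCache", "Organization") rather than anything from this thread:

```java
import java.util.Arrays;
import java.util.LinkedHashMap;
import java.util.List;

import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.Ignition;
import org.apache.ignite.binary.BinaryObject;
import org.apache.ignite.cache.QueryEntity;
import org.apache.ignite.cache.query.SqlFieldsQuery;
import org.apache.ignite.configuration.CacheConfiguration;

public class BinaryTableNameExample {
    /** Puts one binary object and reads it back via SQL, returning the queried name. */
    public static String putAndQuery(Ignite ignite) {
        LinkedHashMap<String, String> fields = new LinkedHashMap<>();
        fields.put("id", Long.class.getName());
        fields.put("name", String.class.getName());

        CacheConfiguration<Long, BinaryObject> cfg =
            new CacheConfiguration<Long, BinaryObject>("orgCache")
                // The QueryEntity value type ("Organization") becomes the SQL table name.
                .setQueryEntities(Arrays.asList(
                    new QueryEntity(Long.class.getName(), "Organization").setFields(fields)));

        IgniteCache<Long, BinaryObject> cache =
            ignite.<Long, BinaryObject>getOrCreateCache(cfg).withKeepBinary();

        cache.put(1L, ignite.binary().builder("Organization")
            .setField("id", 1L)
            .setField("name", "organization-1")
            .build());

        // Query by the value type name, not by the cache name.
        List<List<?>> rows = cache.query(
            new SqlFieldsQuery("select name from Organization where id = ?").setArgs(1L)).getAll();

        return (String) rows.get(0).get(0);
    }

    public static void main(String[] args) {
        try (Ignite ignite = Ignition.start()) {
            System.out.println(putAndQuery(ignite));
        }
    }
}
```

The cache name only appears in getOrCreateCache(); SQL sees the table named
after the QueryEntity value type.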

—
Denis

> -Rajesh



query on BinaryObject index and table

2018-01-20 Thread Rajesh Kishore
Hi,

I have a requirement that my schema is not fixed, so I have to use the
BinaryObject approach instead of a fixed POJO.

I am relying on the OOTB file system persistence mechanism.

My questions are:
- How can I specify the indexes on a BinaryObject?
- If I have to use a SQL query for retrieving objects, what table name
should I specify? The one which matches the cache name does not work.

-Rajesh


Re: Create BinaryObject without starting Ignite?

2018-01-16 Thread zbyszek
Hi Val,

thank you for confirmation.

>> What is the purpose of this?
The purpose was to prepare an object prototype (an object with the same
structure layout, to ensure the same schema version for all updates) without
having access to Ignite.
But as it turned out, I managed to obtain a reference to Ignite, so my
particular problem is solved.

Still, it is good to know that it is not easily achievable.

thanx and regards,
zbyszek



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Create BinaryObject without starting Ignite?

2018-01-15 Thread vkulichenko
zbyszek,

Generally, the answer is no. Binary format depends on internal Ignite
context, so there is no clean way to create a binary object without starting
Ignite. The code that was provided in the referenced thread is a hacky
workaround which probably worked in one of the previous versions, but there
is a big chance it doesn't work anymore in the latest one.
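For reference, the supported route is to build the prototype through a
started node (a client node works too), so the binary schema gets registered
cluster-wide. A minimal sketch, reusing the field names from the question:

```java
import org.apache.ignite.Ignite;
import org.apache.ignite.Ignition;
import org.apache.ignite.binary.BinaryObject;

public class BinaryPrototypeExample {
    /**
     * Builds a prototype with a fixed field layout through a running node.
     * The three-argument setField overload pins the field type even though
     * the value is null, keeping the schema stable across updates.
     */
    public static BinaryObject createPrototype(Ignite ignite) {
        return ignite.binary().builder("MyBinaryName")
            .setField("f1", null, String.class)
            .setField("f2", null, String.class)
            .setField("f3", null, String.class)
            .build();
    }

    public static void main(String[] args) {
        try (Ignite ignite = Ignition.start()) {
            System.out.println(createPrototype(ignite).type().typeName());
        }
    }
}
```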

What is the purpose of this?

-Val



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Create BinaryObject without starting Ignite?

2018-01-15 Thread zbyszek
Hello Igniters,

Is it possible to create BinaryObject without starting Ignite?

I was trying the following:

private static BinaryObject createPrototype() throws IgniteCheckedException {
    // based on
    // http://apache-ignite-users.70518.x6.nabble.com/Working-Directly-with-Binary-objects-td5131.html
    IgniteConfiguration iCfg = new IgniteConfiguration();
    BinaryConfiguration bCfg = new BinaryConfiguration();
    iCfg.setBinaryConfiguration(bCfg);
    BinaryContext ctx =
        new BinaryContext(BinaryCachingMetadataHandler.create(), iCfg, new NullLogger());
    BinaryMarshaller marsh = new BinaryMarshaller();
    marsh.setContext(new MarshallerContextImpl(null));
    IgniteUtils.invoke(BinaryMarshaller.class, marsh, "setBinaryContext", ctx, iCfg);
    BinaryObjectBuilder builder = new BinaryObjectBuilderImpl(ctx, "MyBinaryName");
    builder.setField("f1", (String) null);
    builder.setField("f2", (String) null);
    builder.setField("f3", (String) null);
    BinaryObject res = builder.build(); // ---> throws NPE here
    return res;
}

but this throws NPE on builder.build() due to null transport member in
MarshallerContextImpl.

Thank you for your help,
zbyszek



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: How to configure a QueryEntity for a BinaryObject

2017-09-21 Thread Savagearts
Thanks Evgenii, it does work!



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: How to configure a QueryEntity for a BinaryObject

2017-09-20 Thread ezhuravlev
Check QueryEntity class, it contains tableName property.
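For example (a sketch; the entity names are taken from this thread, the
explicit table name "FOO" is illustrative):

```java
import java.util.LinkedHashMap;

import org.apache.ignite.cache.QueryEntity;

public class ExplicitTableName {
    public static QueryEntity fooEntity() {
        LinkedHashMap<String, String> fields = new LinkedHashMap<>();
        fields.put("name", String.class.getName());

        // By default the SQL table name is derived from the value type
        // ("com.example.Foo" -> table "Foo"); tableName overrides it.
        return new QueryEntity(String.class.getName(), "com.example.Foo")
            .setFields(fields)
            .setTableName("FOO");
    }
}
```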

Evgenii



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: How to configure a QueryEntity for a BinaryObject

2017-09-20 Thread Savagearts
Thanks, Evgenii. I removed the indexedTypes configuration according to your
suggestion, but it still doesn't work. Ignite throws an exception: "Failed
to find SQL table for type: com.example.Foo". (There was an error in the
configuration of my previous post; I changed the valType from "com.Foo.Bar"
to "com.example.Foo".)



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: How to configure a QueryEntity for a BinaryObject

2017-09-20 Thread Evgenii Zhuravlev
If you're configuring QueryEntity, you don't need to add to the
configuration indexedTypes too. It's just different ways to configure
Indexes and queryable fields.

Evgenii

2017-09-20 10:10 GMT+03:00 Savagearts <eisen.zh...@huawei.com>:

> I failed to configure the BinaryObject IgniteCache. My cache configuration
> is as follows:
>
> [Spring XML cache configuration; markup stripped by the list archive. It
> declared a CacheConfiguration bean with a QueryEntity (keyType
> java.lang.String, queryable field "name") and indexedTypes java.lang.String
> and org.apache.ignite.binary.BinaryObject.]
> I can use BinaryObjectBuilder to build a binary object with type name
> "com.example.Foo" and put it in the cache. But when I run a SqlQuery, an
> IgniteSQLException is thrown with the message "Failed to find SQL table for
> type: com.example.Foo". And when I change the cache configuration's
> indexedTypes to com.example.Foo, Ignite fails to create the cache, because
> the class "com.example.Foo" doesn't exist. My binary object builder code is
> as follows:
>
> final Collection<BinaryObject> result = new ArrayList<>(numbers);
> IntStream.range(1, numbers).forEach((i) -> {
>     BinaryObjectBuilder fooBuilder = ignite.binary().builder("com.example.Foo");
>     fooBuilder.setField("name", "foo" + i).setField("age", i);
>     result.add(fooBuilder.build());
> });
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: How to configure a QueryEntity for a BinaryObject

2017-09-20 Thread Savagearts
I failed to configure the BinaryObject IgniteCache. My cache configuration
is as follows:

[Spring XML cache configuration; markup stripped by the list archive. It
declared a CacheConfiguration bean with a QueryEntity (keyType
java.lang.String, queryable field "name") and indexedTypes java.lang.String
and org.apache.ignite.binary.BinaryObject.]
I can use BinaryObjectBuilder to build a binary object with type name
"com.example.Foo" and put it in the cache. But when I run a SqlQuery, an
IgniteSQLException is thrown with the message "Failed to find SQL table for
type: com.example.Foo". And when I change the cache configuration's
indexedTypes to com.example.Foo, Ignite fails to create the cache, because
the class "com.example.Foo" doesn't exist. My binary object builder code is
as follows:

final Collection<BinaryObject> result = new ArrayList<>(numbers);
IntStream.range(1, numbers).forEach((i) -> {
    BinaryObjectBuilder fooBuilder = ignite.binary().builder("com.example.Foo");
    fooBuilder.setField("name", "foo" + i).setField("age", i);
    result.add(fooBuilder.build());
});



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: How to configure a QueryEntity for a BinaryObject

2017-09-17 Thread Savagearts
Thanks, I'll give it a try.



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: How to configure a QueryEntity for a BinaryObject

2017-09-15 Thread vkulichenko
Hi,

Yes, you can do this - just provide the field name and its type in the
QueryEntity#fields map. Is there anything in particular that doesn't work?

-Val



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


How to configure a QueryEntity for a BinaryObject

2017-09-15 Thread Savagearts
Hi All:

I'm trying to build a cache of type IgniteCache<String, BinaryObject>.
All the examples I've checked in the documentation cover only POJO
definitions with a query entity configuration. Can I configure a binary
object with a query entity configuration? For instance, I have a
BinaryObject with type name "com.example.Foo" that contains a field "name".
How do I configure the field "name" in a query entity?



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: BinaryObject model replace() in client/server topology fails with ClassNotFoundFunction

2017-09-11 Thread sai kiran nukala
I don't think an EntryProcessor will work, because the anonymous class will
not be present on the Ignite server node's classpath - our aim is to avoid
having classes on the server's classpath.

On Sep 11, 2017 4:30 PM, "Andrey Mashenkov" <andrey.mashen...@gmail.com>
wrote:

> I've created a ticket for this [1].
> As a workaround you can try to use cache.invoke() with own comparison
> implementation inside EntryProcessor.
>
> Unfortunately, there is no release dated filled on apache ignite releases
> page [2].
> Usually, new Ignite release become available twice a year.
>
> [1] https://issues.apache.org/jira/browse/IGNITE-6332
> [2] https://issues.apache.org/jira/projects/IGNITE?
> selectedItem=com.atlassian.jira.jira-projects-plugin:release-page
>
> On Thu, Sep 7, 2017 at 9:50 PM, sai kiran nukala <saikiran...@gmail.com>
> wrote:
>
>> Thanks for the reply.
>>
>> How do I file a ticket ?
>>
>> I know it is still early stage, usually how long it takes to fix this bug
>> and release a version because we need this functionality working for our
>> use case.
>>
>> On Sep 7, 2017 10:22 PM, "Andrey Mashenkov" <andrey.mashen...@gmail.com>
>> wrote:
>>
>> Hi,
>> Looks like a bug and CacheEntryPredicateContainsValue shouldn't
>> deserialize value to compare BinaryObjects in case of replace() operation
>> .
>> Feel free to fill a ticket for this.
>>
>>
>>
>> On Thu, Sep 7, 2017 at 9:50 AM, saikiran939 <saikiran...@gmail.com>
>> wrote:
>>
>>> Hi Team,
>>>
>>> Our team on working on a usecase in which we don't want to have any
>>> classes
>>> on Ignite Server node's classpath. To achieve this we are making use of
>>> BinaryObject based querying and putting/replacing values into cache.
>>>
>>> We are also using Optimistic Locking to replace the values into cache
>>> using
>>> "binaryObjectcache.replace(key, oldValue, newValue)" API - this method
>>> fails
>>> with ClassNotFoundException when used in client/server topology if the
>>> cache
>>> value class is not present at server's classpath. Sample piece of code is
>>> given below:
>>>
>>> String key = "key1";
>>> CacheValue entry1 = new CacheValue("putsomevalue");
>>> IgniteCache<String, CacheValue> cache = ignite.getOrCreateCache(cacheC
>>> fg);
>>> cache.put(key, entry1); //put works fine even if there is no class
>>> present
>>> in server's classpath
>>>
>>> CacheValue replaceEntry1 = cache.get(key);
>>> replaceEntry1.location= "test";
>>>
>>> IgniteCache<String, BinaryObject> binaryCacheProjection =
>>> cache.withKeepBinary();
>>> BinaryObject oldValueInBinary = binaryCacheProjection.get(key);
>>> BinaryObject newValueInBinary = ignite.binary().toBinary(replaceEntry1);
>>> binaryCacheProjection.replace(key, oldValueInBinary, newValueInBinary);
>>>
>>> The last replace() method call fails with below exception, is this bug in
>>> ignite because put() API works or is there anyway to workaround this
>>> exception ? I think from the stacktrace ignite server is trying to
>>> deserialize value object even if it is binary. I get the same exception
>>> with
>>> normal cache.replace() API .
>>>
>>> Exception in thread "main"
>>> org.apache.ignite.cache.CachePartialUpdateException: Failed to update
>>> keys
>>> (retry update if possible).: [OP21|SHARED]
>>> at
>>> org.apache.ignite.internal.processors.cache.GridCacheUtils.c
>>> onvertToCacheException(GridCacheUtils.java:1488)
>>> at
>>> org.apache.ignite.internal.processors.cache.IgniteCacheProxy
>>> .cacheException(IgniteCacheProxy.java:2021)
>>> at
>>> org.apache.ignite.internal.processors.cache.IgniteCacheProxy
>>> .replace(IgniteCacheProxy.java:1393)
>>> at
>>> com.ignite.binary.TestReplaceBinaryObject.populateCache(Test
>>> ReplaceBinaryObject.java:166)
>>> at
>>> com.ignite.binary.TestReplaceBinaryObject.main(TestReplaceBi
>>> naryObject.java:60)
>>> at com.ignite.binary.IgniteDriver.main(IgniteDriver.java:11)
>>> Caused by: class
>>> org.apache.ignite.internal.processors.cache.CachePartialUpda
>>> teCheckedException:
>>> Failed to update keys (retry update if possible).: [OP21|SHARED]
>>> at
>>> org.apa

Re: BinaryObject model replace() in client/server topology fails with ClassNotFoundFunction

2017-09-11 Thread Andrey Mashenkov
I've created a ticket for this [1].
As a workaround you can try to use cache.invoke() with own comparison
implementation inside EntryProcessor.
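A sketch of that workaround - an assumption about how it could look, and
note the processor class itself must still be reachable by the server,
either deployed on its classpath or via peer class loading:

```java
import javax.cache.processor.MutableEntry;

import org.apache.ignite.binary.BinaryObject;
import org.apache.ignite.cache.CacheEntryProcessor;

/**
 * Compares the expected and current values in their binary form on the
 * server node, so the value class never has to be deserialized there.
 */
public class BinaryReplaceProcessor implements CacheEntryProcessor<String, BinaryObject, Boolean> {
    @Override
    public Boolean process(MutableEntry<String, BinaryObject> entry, Object... args) {
        BinaryObject expected = (BinaryObject) args[0];
        BinaryObject newVal = (BinaryObject) args[1];

        // BinaryObject.equals() compares the serialized representations.
        if (entry.exists() && entry.getValue().equals(expected)) {
            entry.setValue(newVal);
            return true;
        }
        return false;
    }
}

// Usage on a withKeepBinary() projection, e.g.:
// Boolean replaced = binaryCache.invoke(key, new BinaryReplaceProcessor(),
//     oldValueInBinary, newValueInBinary);
```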

Unfortunately, there are no release dates filled in on the Apache Ignite
releases page [2].
Usually, a new Ignite release becomes available twice a year.

[1] https://issues.apache.org/jira/browse/IGNITE-6332
[2]
https://issues.apache.org/jira/projects/IGNITE?selectedItem=com.atlassian.jira.jira-projects-plugin:release-page

On Thu, Sep 7, 2017 at 9:50 PM, sai kiran nukala <saikiran...@gmail.com>
wrote:

> Thanks for the reply.
>
> How do I file a ticket ?
>
> I know it is still early stage, usually how long it takes to fix this bug
> and release a version because we need this functionality working for our
> use case.
>
> On Sep 7, 2017 10:22 PM, "Andrey Mashenkov" <andrey.mashen...@gmail.com>
> wrote:
>
> Hi,
> Looks like a bug and CacheEntryPredicateContainsValue shouldn't
> deserialize value to compare BinaryObjects in case of replace() operation
> .
> Feel free to fill a ticket for this.
>
>
>
> On Thu, Sep 7, 2017 at 9:50 AM, saikiran939 <saikiran...@gmail.com> wrote:
>
>> Hi Team,
>>
>> Our team on working on a usecase in which we don't want to have any
>> classes
>> on Ignite Server node's classpath. To achieve this we are making use of
>> BinaryObject based querying and putting/replacing values into cache.
>>
>> We are also using Optimistic Locking to replace the values into cache
>> using
>> "binaryObjectcache.replace(key, oldValue, newValue)" API - this method
>> fails
>> with ClassNotFoundException when used in client/server topology if the
>> cache
>> value class is not present at server's classpath. Sample piece of code is
>> given below:
>>
>> String key = "key1";
>> CacheValue entry1 = new CacheValue("putsomevalue");
>> IgniteCache<String, CacheValue> cache = ignite.getOrCreateCache(cacheC
>> fg);
>> cache.put(key, entry1); //put works fine even if there is no class present
>> in server's classpath
>>
>> CacheValue replaceEntry1 = cache.get(key);
>> replaceEntry1.location= "test";
>>
>> IgniteCache<String, BinaryObject> binaryCacheProjection =
>> cache.withKeepBinary();
>> BinaryObject oldValueInBinary = binaryCacheProjection.get(key);
>> BinaryObject newValueInBinary = ignite.binary().toBinary(replaceEntry1);
>> binaryCacheProjection.replace(key, oldValueInBinary, newValueInBinary);
>>
>> The last replace() method call fails with below exception, is this bug in
>> ignite because put() API works or is there anyway to workaround this
>> exception ? I think from the stacktrace ignite server is trying to
>> deserialize value object even if it is binary. I get the same exception
>> with
>> normal cache.replace() API .
>>
>> Exception in thread "main"
>> org.apache.ignite.cache.CachePartialUpdateException: Failed to update
>> keys
>> (retry update if possible).: [OP21|SHARED]
>> at
>> org.apache.ignite.internal.processors.cache.GridCacheUtils.c
>> onvertToCacheException(GridCacheUtils.java:1488)
>> at
>> org.apache.ignite.internal.processors.cache.IgniteCacheProxy
>> .cacheException(IgniteCacheProxy.java:2021)
>> at
>> org.apache.ignite.internal.processors.cache.IgniteCacheProxy
>> .replace(IgniteCacheProxy.java:1393)
>> at
>> com.ignite.binary.TestReplaceBinaryObject.populateCache(Test
>> ReplaceBinaryObject.java:166)
>> at
>> com.ignite.binary.TestReplaceBinaryObject.main(TestReplaceBi
>> naryObject.java:60)
>> at com.ignite.binary.IgniteDriver.main(IgniteDriver.java:11)
>> Caused by: class
>> org.apache.ignite.internal.processors.cache.CachePartialUpda
>> teCheckedException:
>> Failed to update keys (retry update if possible).: [OP21|SHARED]
>> at
>> org.apache.ignite.internal.processors.cache.distributed.dht.
>> atomic.GridNearAtomicSingleUpdateFuture.onResult(GridNearAto
>> micSingleUpdateFuture.java:232)
>> at
>> org.apache.ignite.internal.processors.cache.distributed.dht.
>> atomic.GridDhtAtomicCache.processNearAtomicUpdateResponse(Gr
>> idDhtAtomicCache.java:2969)
>> at
>> org.apache.ignite.internal.processors.cache.distributed.dht.
>> atomic.GridDhtAtomicCache.access$700(GridDhtAtomicCache.java:130)
>> at
>> org.apache.ignite.internal.processors.cache.distributed.dht.
>> atomic.GridDhtAtomicCache$6.apply(GridDhtAtomicCache.java:274)
>> at
>> 

Re: BinaryObject model replace() in client/server topology fails with ClassNotFoundFunction

2017-09-07 Thread sai kiran nukala
Thanks for the reply.

How do I file a ticket?

I know it is still at an early stage - usually, how long does it take to fix
a bug like this and release a version? We need this functionality working
for our use case.

On Sep 7, 2017 10:22 PM, "Andrey Mashenkov" <andrey.mashen...@gmail.com>
wrote:

Hi,
Looks like a bug and CacheEntryPredicateContainsValue shouldn't deserialize
value to compare BinaryObjects in case of replace() operation.
Feel free to fill a ticket for this.



On Thu, Sep 7, 2017 at 9:50 AM, saikiran939 <saikiran...@gmail.com> wrote:

> Hi Team,
>
> Our team on working on a usecase in which we don't want to have any classes
> on Ignite Server node's classpath. To achieve this we are making use of
> BinaryObject based querying and putting/replacing values into cache.
>
> We are also using Optimistic Locking to replace the values into cache using
> "binaryObjectcache.replace(key, oldValue, newValue)" API - this method
> fails
> with ClassNotFoundException when used in client/server topology if the
> cache
> value class is not present at server's classpath. Sample piece of code is
> given below:
>
> String key = "key1";
> CacheValue entry1 = new CacheValue("putsomevalue");
> IgniteCache<String, CacheValue> cache = ignite.getOrCreateCache(cacheCfg);
> cache.put(key, entry1); //put works fine even if there is no class present
> in server's classpath
>
> CacheValue replaceEntry1 = cache.get(key);
> replaceEntry1.location= "test";
>
> IgniteCache<String, BinaryObject> binaryCacheProjection =
> cache.withKeepBinary();
> BinaryObject oldValueInBinary = binaryCacheProjection.get(key);
> BinaryObject newValueInBinary = ignite.binary().toBinary(replaceEntry1);
> binaryCacheProjection.replace(key, oldValueInBinary, newValueInBinary);
>
> The last replace() method call fails with below exception, is this bug in
> ignite because put() API works or is there anyway to workaround this
> exception ? I think from the stacktrace ignite server is trying to
> deserialize value object even if it is binary. I get the same exception
> with
> normal cache.replace() API .
>
> Exception in thread "main"
> org.apache.ignite.cache.CachePartialUpdateException: Failed to update keys
> (retry update if possible).: [OP21|SHARED]
> at
> org.apache.ignite.internal.processors.cache.GridCacheUtils.c
> onvertToCacheException(GridCacheUtils.java:1488)
> at
> org.apache.ignite.internal.processors.cache.IgniteCacheProxy
> .cacheException(IgniteCacheProxy.java:2021)
> at
> org.apache.ignite.internal.processors.cache.IgniteCacheProxy
> .replace(IgniteCacheProxy.java:1393)
> at
> com.ignite.binary.TestReplaceBinaryObject.populateCache(Test
> ReplaceBinaryObject.java:166)
> at
> com.ignite.binary.TestReplaceBinaryObject.main(TestReplaceBi
> naryObject.java:60)
> at com.ignite.binary.IgniteDriver.main(IgniteDriver.java:11)
> Caused by: class
> org.apache.ignite.internal.processors.cache.CachePartialUpda
> teCheckedException:
> Failed to update keys (retry update if possible).: [OP21|SHARED]
> at
> org.apache.ignite.internal.processors.cache.distributed.dht.
> atomic.GridNearAtomicSingleUpdateFuture.onResult(GridNearAto
> micSingleUpdateFuture.java:232)
> at
> org.apache.ignite.internal.processors.cache.distributed.dht.
> atomic.GridDhtAtomicCache.processNearAtomicUpdateResponse(
> GridDhtAtomicCache.java:2969)
> at
> org.apache.ignite.internal.processors.cache.distributed.dht.
> atomic.GridDhtAtomicCache.access$700(GridDhtAtomicCache.java:130)
> at
> org.apache.ignite.internal.processors.cache.distributed.dht.
> atomic.GridDhtAtomicCache$6.apply(GridDhtAtomicCache.java:274)
> at
> org.apache.ignite.internal.processors.cache.distributed.dht.
> atomic.GridDhtAtomicCache$6.apply(GridDhtAtomicCache.java:272)
> at
> org.apache.ignite.internal.processors.cache.GridCacheIoManag
> er.processMessage(GridCacheIoManager.java:748)
> at
> org.apache.ignite.internal.processors.cache.GridCacheIoManag
> er.onMessage0(GridCacheIoManager.java:353)
> at
> org.apache.ignite.internal.processors.cache.GridCacheIoManag
> er.handleMessage(GridCacheIoManager.java:277)
> at
> org.apache.ignite.internal.processors.cache.GridCacheIoManag
> er.access$000(GridCacheIoManager.java:88)
> at
> org.apache.ignite.internal.processors.cache.GridCacheIoManag
> er$1.onMessage(GridCacheIoManager.java:231)
> at
> org.apache.ignite.internal.managers.communication.GridIoMana
> ger.invokeListener(GridIoManager.java:1238)
> at
> org.apache.ignite.internal.managers.communicati

Re: BinaryObject model replace() in client/server topology fails with ClassNotFoundFunction

2017-09-07 Thread Andrey Mashenkov
Hi,
Looks like a bug - CacheEntryPredicateContainsValue shouldn't deserialize
the value to compare BinaryObjects in the case of a replace() operation.
Feel free to file a ticket for this.



On Thu, Sep 7, 2017 at 9:50 AM, saikiran939 <saikiran...@gmail.com> wrote:

> Hi Team,
>
> Our team on working on a usecase in which we don't want to have any classes
> on Ignite Server node's classpath. To achieve this we are making use of
> BinaryObject based querying and putting/replacing values into cache.
>
> We are also using Optimistic Locking to replace the values into cache using
> "binaryObjectcache.replace(key, oldValue, newValue)" API - this method
> fails
> with ClassNotFoundException when used in client/server topology if the
> cache
> value class is not present at server's classpath. Sample piece of code is
> given below:
>
> String key = "key1";
> CacheValue entry1 = new CacheValue("putsomevalue");
> IgniteCache<String, CacheValue> cache = ignite.getOrCreateCache(cacheCfg);
> cache.put(key, entry1); //put works fine even if there is no class present
> in server's classpath
>
> CacheValue replaceEntry1 = cache.get(key);
> replaceEntry1.location= "test";
>
> IgniteCache<String, BinaryObject> binaryCacheProjection =
> cache.withKeepBinary();
> BinaryObject oldValueInBinary = binaryCacheProjection.get(key);
> BinaryObject newValueInBinary = ignite.binary().toBinary(replaceEntry1);
> binaryCacheProjection.replace(key, oldValueInBinary, newValueInBinary);
>
> The last replace() method call fails with below exception, is this bug in
> ignite because put() API works or is there anyway to workaround this
> exception ? I think from the stacktrace ignite server is trying to
> deserialize value object even if it is binary. I get the same exception
> with
> normal cache.replace() API .
>
> Exception in thread "main"
> org.apache.ignite.cache.CachePartialUpdateException: Failed to update keys
> (retry update if possible).: [OP21|SHARED]
> at
> org.apache.ignite.internal.processors.cache.GridCacheUtils.
> convertToCacheException(GridCacheUtils.java:1488)
> at
> org.apache.ignite.internal.processors.cache.IgniteCacheProxy.
> cacheException(IgniteCacheProxy.java:2021)
> at
> org.apache.ignite.internal.processors.cache.IgniteCacheProxy.replace(
> IgniteCacheProxy.java:1393)
> at
> com.ignite.binary.TestReplaceBinaryObject.populateCache(
> TestReplaceBinaryObject.java:166)
> at
> com.ignite.binary.TestReplaceBinaryObject.main(
> TestReplaceBinaryObject.java:60)
> at com.ignite.binary.IgniteDriver.main(IgniteDriver.java:11)
> Caused by: class
> org.apache.ignite.internal.processors.cache.CachePartialUpdateCheckedExcep
> tion:
> Failed to update keys (retry update if possible).: [OP21|SHARED]
> at
> org.apache.ignite.internal.processors.cache.distributed.dht.atomic.
> GridNearAtomicSingleUpdateFuture.onResult(GridNearAtomicSingleUpdateFutu
> re.java:232)
> at
> org.apache.ignite.internal.processors.cache.distributed.
> dht.atomic.GridDhtAtomicCache.processNearAtomicUpdateRespons
> e(GridDhtAtomicCache.java:2969)
> at
> org.apache.ignite.internal.processors.cache.distributed.
> dht.atomic.GridDhtAtomicCache.access$700(GridDhtAtomicCache.java:130)
> at
> org.apache.ignite.internal.processors.cache.distributed.
> dht.atomic.GridDhtAtomicCache$6.apply(GridDhtAtomicCache.java:274)
> at
> org.apache.ignite.internal.processors.cache.distributed.
> dht.atomic.GridDhtAtomicCache$6.apply(GridDhtAtomicCache.java:272)
> at
> org.apache.ignite.internal.processors.cache.GridCacheIoManager.
> processMessage(GridCacheIoManager.java:748)
> at
> org.apache.ignite.internal.processors.cache.GridCacheIoManager.onMessage0(
> GridCacheIoManager.java:353)
> at
> org.apache.ignite.internal.processors.cache.GridCacheIoManager.
> handleMessage(GridCacheIoManager.java:277)
> at
> org.apache.ignite.internal.processors.cache.GridCacheIoManager.access$000(
> GridCacheIoManager.java:88)
> at
> org.apache.ignite.internal.processors.cache.GridCacheIoManager$1.
> onMessage(GridCacheIoManager.java:231)
> at
> org.apache.ignite.internal.managers.communication.
> GridIoManager.invokeListener(GridIoManager.java:1238)
> at
> org.apache.ignite.internal.managers.communication.GridIoManager.
> processRegularMessage0(GridIoManager.java:866)
> at
> org.apache.ignite.internal.managers.communication.
> GridIoManager.access$1700(GridIoManager.java:106)
> at
> org.apache.ignite.internal.managers.communication.GridIoManager$5.run(
> GridI

BinaryObject model replace() in client/server topology fails with ClassNotFoundFunction

2017-09-07 Thread saikiran939
Hi Team,

Our team is working on a use case in which we don't want to have any classes
on the Ignite server node's classpath. To achieve this we are making use of
BinaryObject-based querying and putting/replacing values into the cache.

We are also using optimistic locking to replace values in the cache using the
"binaryObjectCache.replace(key, oldValue, newValue)" API - this method fails
with ClassNotFoundException when used in a client/server topology if the cache
value class is not present on the server's classpath. A sample piece of code
is given below:

String key = "key1";
CacheValue entry1 = new CacheValue("putsomevalue");
IgniteCache<String, CacheValue> cache = ignite.getOrCreateCache(cacheCfg);
cache.put(key, entry1); //put works fine even if there is no class present
in server's classpath

CacheValue replaceEntry1 = cache.get(key);
replaceEntry1.location= "test";
    
IgniteCache<String, BinaryObject> binaryCacheProjection =
cache.withKeepBinary();
BinaryObject oldValueInBinary = binaryCacheProjection.get(key);
BinaryObject newValueInBinary = ignite.binary().toBinary(replaceEntry1);
binaryCacheProjection.replace(key, oldValueInBinary, newValueInBinary);

The last replace() method call fails with the exception below. Is this a bug
in Ignite, given that the put() API works, or is there any way to work around
this exception? Judging from the stacktrace, the Ignite server is trying to
deserialize the value object even though it is binary. I get the same
exception with the normal cache.replace() API.

Exception in thread "main"
org.apache.ignite.cache.CachePartialUpdateException: Failed to update keys
(retry update if possible).: [OP21|SHARED]
at
org.apache.ignite.internal.processors.cache.GridCacheUtils.convertToCacheException(GridCacheUtils.java:1488)
at
org.apache.ignite.internal.processors.cache.IgniteCacheProxy.cacheException(IgniteCacheProxy.java:2021)
at
org.apache.ignite.internal.processors.cache.IgniteCacheProxy.replace(IgniteCacheProxy.java:1393)
at
com.ignite.binary.TestReplaceBinaryObject.populateCache(TestReplaceBinaryObject.java:166)
at
com.ignite.binary.TestReplaceBinaryObject.main(TestReplaceBinaryObject.java:60)
at com.ignite.binary.IgniteDriver.main(IgniteDriver.java:11)
Caused by: class
org.apache.ignite.internal.processors.cache.CachePartialUpdateCheckedException:
Failed to update keys (retry update if possible).: [OP21|SHARED]
at
org.apache.ignite.internal.processors.cache.distributed.dht.atomic.GridNearAtomicSingleUpdateFuture.onResult(GridNearAtomicSingleUpdateFuture.java:232)
at
org.apache.ignite.internal.processors.cache.distributed.dht.atomic.GridDhtAtomicCache.processNearAtomicUpdateResponse(GridDhtAtomicCache.java:2969)
at
org.apache.ignite.internal.processors.cache.distributed.dht.atomic.GridDhtAtomicCache.access$700(GridDhtAtomicCache.java:130)
at
org.apache.ignite.internal.processors.cache.distributed.dht.atomic.GridDhtAtomicCache$6.apply(GridDhtAtomicCache.java:274)
at
org.apache.ignite.internal.processors.cache.distributed.dht.atomic.GridDhtAtomicCache$6.apply(GridDhtAtomicCache.java:272)
at
org.apache.ignite.internal.processors.cache.GridCacheIoManager.processMessage(GridCacheIoManager.java:748)
at
org.apache.ignite.internal.processors.cache.GridCacheIoManager.onMessage0(GridCacheIoManager.java:353)
at
org.apache.ignite.internal.processors.cache.GridCacheIoManager.handleMessage(GridCacheIoManager.java:277)
at
org.apache.ignite.internal.processors.cache.GridCacheIoManager.access$000(GridCacheIoManager.java:88)
at
org.apache.ignite.internal.processors.cache.GridCacheIoManager$1.onMessage(GridCacheIoManager.java:231)
at
org.apache.ignite.internal.managers.communication.GridIoManager.invokeListener(GridIoManager.java:1238)
at
org.apache.ignite.internal.managers.communication.GridIoManager.processRegularMessage0(GridIoManager.java:866)
at
org.apache.ignite.internal.managers.communication.GridIoManager.access$1700(GridIoManager.java:106)
at
org.apache.ignite.internal.managers.communication.GridIoManager$5.run(GridIoManager.java:829)
at
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:722)
Suppressed: class org.apache.ignite.IgniteCheckedException: Failed to
update keys on primary node.
at
org.apache.ignite.internal.processors.cache.distributed.dht.atomic.GridNearAtomicUpdateResponse.addFailedKey(GridNearAtomicUpdateResponse.java:350)
at
org.apache.ignite.internal.processors.cache.distributed.dht.atomic.GridDhtAtomicCache.updateSingle(GridDhtAtomicCache.java:2393)
at
org.apache.ignite.internal.processors.cache.distributed.dht.atomic.GridDhtAt
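
A possible workaround (a sketch, not from this thread; it assumes the "cache"
and "key" variables from the snippet above and a running cluster): derive the
replacement value from the existing binary object with BinaryObject.toBuilder(),
so the CacheValue class is never involved in marshalling at all:

```java
// Sketch only: assumes the "cache" and "key" from the snippet above.
IgniteCache<String, BinaryObject> binaryCache = cache.withKeepBinary();

BinaryObject oldValue = binaryCache.get(key);

// Build the new value from the old binary object rather than marshalling a
// deserialized CacheValue via ignite.binary().toBinary(...), so the old and
// new values share the same binary representation.
BinaryObject newValue = oldValue.toBuilder()
    .setField("location", "test")
    .build();

// Compare-and-replace stays entirely in binary form; no value class is
// needed on either the client or the server.
binaryCache.replace(key, oldValue, newValue);
```

This avoids any deserialization of the old value during the compare step.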

Re: UPDATE SQL for nested BinaryObject throws exception.

2017-09-05 Thread afedotov
Hi,

FYI. Created tickets related to the subject:
1) https://issues.apache.org/jira/browse/IGNITE-6265
2) https://issues.apache.org/jira/browse/IGNITE-6266
3) https://issues.apache.org/jira/browse/IGNITE-6268



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: UPDATE SQL for nested BinaryObject throws exception.

2017-09-04 Thread afedotov
Hi,

Actually, flattening nested properties with aliases currently works only one
level deep.
Looks like a bug; I'll file a JIRA ticket for it.

Kind regards,
Alex





Re: UPDATE SQL for nested BinaryObject throws exception.

2017-09-03 Thread Dmitriy Setrakyan
Cross sending to dev@

Igniters, up until version 1.9 nested fields were supported by flattening the
names. Do we still support this? I cannot seem to find documentation for it.

D.

On Thu, Aug 31, 2017 at 7:12 AM, takumi <bla...@kss.biglobe.ne.jp> wrote:

> This is a part of the real code that I wrote.
>
> -
>   List<QueryEntity> entities = new ArrayList<>();
>   QueryEntity qe = new QueryEntity(String.class.getName(), "cache");
>   qe.addQueryField("attribute.prop1", Double.class.getName(), "prop3");
>   qe.addQueryField("attribute.prop2", String.class.getName(), "prop4");
>   qe.addQueryField("attribute.prop.prop1", Double.class.getName(),
> "prop5");
>   qe.addQueryField("attribute.prop.prop2", String.class.getName(),
> "prop6");
>
>   BinaryObject bo = ib.builder("cache").setField("attribute",
> ib.builder("cache.attribute")
>   .setField("prop",
> ib.builder("cache.attribute.prop")
>.setField("prop1", 50.0, Double.class)
>.setField("prop2", "old", String.class))
>   .setField("prop1", 50.0, Double.class)
>   .setField("prop2", "old", String.class)).build();
>
>   cache.put("key1", bo);
>   cache.query(new SqlFieldsQuery("update cache set prop4 = 'new'  where
> prop3 >= 20.0"));//OK
>   cache.query(new SqlFieldsQuery("update cache set prop6 = 'new'  where
> prop5 >= 20.0"));//NG
> -
>
> I can update 'prop4' via SQL, but I cannot update 'prop6' via SQL.
>
>
>
>


Re: UPDATE SQL for nested BinaryObject throws exception.

2017-08-31 Thread takumi
This is a part of the real code that I wrote.

-
  List<QueryEntity> entities = new ArrayList<>();
  QueryEntity qe = new QueryEntity(String.class.getName(), "cache");
  qe.addQueryField("attribute.prop1", Double.class.getName(), "prop3");
  qe.addQueryField("attribute.prop2", String.class.getName(), "prop4");
  qe.addQueryField("attribute.prop.prop1", Double.class.getName(), "prop5");
  qe.addQueryField("attribute.prop.prop2", String.class.getName(), "prop6");

  BinaryObject bo = ib.builder("cache").setField("attribute",
ib.builder("cache.attribute")
  .setField("prop",
ib.builder("cache.attribute.prop")
   .setField("prop1", 50.0, Double.class)
   .setField("prop2", "old", String.class))
  .setField("prop1", 50.0, Double.class)
  .setField("prop2", "old", String.class)).build();

  cache.put("key1", bo);
  cache.query(new SqlFieldsQuery("update cache set prop4 = 'new'  where
prop3 >= 20.0"));//OK
  cache.query(new SqlFieldsQuery("update cache set prop6 = 'new'  where
prop5 >= 20.0"));//NG
-

I can update 'prop4' via SQL, but I cannot update 'prop6' via SQL.





Re: UPDATE SQL for nested BinaryObject throws exception.

2017-08-31 Thread afedotov
Hi,

Currently, referencing a nested object's fields from SQL isn't supported,
either for regular Java objects or for BinaryObjects.

In other words, having 

class B {
private String field1;
}

class A {
private B bField;
}

you cannot update it like `update A set bField.field1=?`
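
The one-level flattening workaround used elsewhere in this thread can be
sketched as follows (hypothetical type and alias names; assumes QueryEntity
alias support and, per the messages above, only one level of nesting):

```java
// Sketch: expose the nested field bField.field1 of class A as the flat
// SQL column "bfield1" via a QueryEntity alias.
QueryEntity qe = new QueryEntity(String.class.getName(), "A");
qe.addQueryField("bField.field1", String.class.getName(), "bfield1");

CacheConfiguration<String, BinaryObject> ccfg =
    new CacheConfiguration<String, BinaryObject>("A")
        .setQueryEntities(Collections.singletonList(qe));

// SQL then targets the alias instead of the unsupported nested path:
// cache.query(new SqlFieldsQuery("update A set bfield1 = ?").setArgs("new"));
```

The alias makes the nested value addressable as an ordinary column, which is
why `update A set bfield1 = ?` works while `update A set bField.field1 = ?`
does not.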

Kind regards,
Alex.





Re: UPDATE SQL for nested BinaryObject throws exception.

2017-08-31 Thread Andrey Mashenkov
Does this work for regular objects?


BTW: it looks like a typo that you set the builder as the field value instead
of the binary object itself ("bb.build()").

On Thu, Aug 31, 2017 at 4:32 PM, takumi <bla...@kss.biglobe.ne.jp> wrote:

> I use SqlQuery for nested BinaryObject.
> The BinaryObject instance is following structure.
>
> BinaryObjectBuilder bb2 = binary.builder("nested2 hoge").setField("field2", "old", String.class);
> BinaryObjectBuilder bb = binary.builder("nested hoge").setField("field1", bb2);
> binary.builder("hoge").setField("field0", bb);
>
> The sample SQL that throws an exception is "update " + CACHE_NAME + " set
> field2 = 'new'".
>
> The exception is thrown because I update a field of a BinaryObject that is a
> child of a nested BinaryObject.
> When I update a field of the BinaryObject that is nested directly, no
> exception is thrown.
>
> Should I not use this kind of SQL for a child of a nested BinaryObject?
>
>
>
>



-- 
Best regards,
Andrey V. Mashenkov

