Re: Hadoop Accelerator doesn't work when using SnappyCodec compression

2017-10-16 Thread C Reid
Hi Evgenii,

Checked, as shown:

17/10/17 13:43:12 DEBUG util.NativeCodeLoader: Trying to load the custom-built 
native-hadoop library...
17/10/17 13:43:12 DEBUG util.NativeCodeLoader: Loaded the native-hadoop library
17/10/17 13:43:12 WARN bzip2.Bzip2Factory: Failed to load/initialize 
native-bzip2 library system-native, will use pure-Java version
17/10/17 13:43:12 INFO zlib.ZlibFactory: Successfully loaded & initialized 
native-zlib library
Native library checking:
hadoop:  true /opt/hadoop-2.8.1-all/lib/native/libhadoop.so
zlib:    true /lib64/libz.so.1
snappy:  true /usr/lib64/libsnappy.so.1
lz4: true revision:10301
bzip2:   false
openssl: true /usr/lib64/libcrypto.so
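
(For completeness, a minimal sketch to run the same check from inside a JVM: "hadoop checknative" reports on the CLI's own JVM, while the UnsatisfiedLinkError below is thrown in the Ignite node's JVM, which may have a different java.library.path. The NativeCodeLoader methods used here are real hadoop-common API; the JVM-mismatch diagnosis itself is only an assumption.)

import org.apache.hadoop.util.NativeCodeLoader;

public class NativeCheck {
    public static void main(String[] args) {
        // The path this JVM searches for libhadoop.so / libsnappy.so.
        System.out.println("java.library.path = " + System.getProperty("java.library.path"));
        // True only if libhadoop.so was actually found and loaded by this JVM.
        System.out.println("native-hadoop loaded: " + NativeCodeLoader.isNativeCodeLoaded());
        if (NativeCodeLoader.isNativeCodeLoaded())
            System.out.println("snappy supported: " + NativeCodeLoader.buildSupportsSnappy());
    }
}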


From: Evgenii Zhuravlev 
Sent: 17 October 2017 13:34
To: user@ignite.apache.org
Subject: Re: Hadoop Accelerator doesn't work when using SnappyCodec compression

Hi,

Have you checked "hadoop checknative -a"? What does it show for snappy?

Evgenii

2017-10-17 7:12 GMT+03:00 C Reid:
Hi all Igniters,

I have tried many ways to include the native jar and the snappy jar, but the
exceptions below keep being thrown. (I'm sure HDFS and YARN support Snappy,
since jobs run fine in the YARN framework with SnappyCodec.) Hoping to get
some help and suggestions from the community.

[NativeCodeLoader] Unable to load native-hadoop library for your platform... 
using builtin-java classes where applicable

and

java.lang.UnsatisfiedLinkError: org.apache.hadoop.util.NativeCodeLoader.buildSupportsSnappy()Z
    at org.apache.hadoop.util.NativeCodeLoader.buildSupportsSnappy(Native Method)
    at org.apache.hadoop.io.compress.SnappyCodec.checkNativeCodeLoaded(SnappyCodec.java:63)
    at org.apache.hadoop.io.compress.SnappyCodec.getCompressorType(SnappyCodec.java:136)
    at org.apache.hadoop.io.compress.CodecPool.getCompressor(CodecPool.java:150)
    at org.apache.hadoop.io.compress.CompressionCodec$Util.createOutputStreamWithCodecPool(CompressionCodec.java:131)
    at org.apache.hadoop.io.compress.SnappyCodec.createOutputStream(SnappyCodec.java:101)
    at org.apache.hadoop.mapreduce.lib.output.TextOutputFormat.getRecordWriter(TextOutputFormat.java:126)
    at org.apache.ignite.internal.processors.hadoop.impl.v2.HadoopV2Task.prepareWriter(HadoopV2Task.java:104)
    at org.apache.ignite.internal.processors.hadoop.impl.v2.HadoopV2ReduceTask.run0(HadoopV2ReduceTask.java:64)
    at org.apache.ignite.internal.processors.hadoop.impl.v2.HadoopV2Task.run(HadoopV2Task.java:55)
    at org.apache.ignite.internal.processors.hadoop.impl.v2.HadoopV2TaskContext.run(HadoopV2TaskContext.java:266)
    at org.apache.ignite.internal.processors.hadoop.taskexecutor.HadoopRunnableTask.runTask(HadoopRunnableTask.java:209)
    at org.apache.ignite.internal.processors.hadoop.taskexecutor.HadoopRunnableTask.call0(HadoopRunnableTask.java:144)
    at org.apache.ignite.internal.processors.hadoop.taskexecutor.HadoopRunnableTask$1.call(HadoopRunnableTask.java:116)
    at org.apache.ignite.internal.processors.hadoop.taskexecutor.HadoopRunnableTask$1.call(HadoopRunnableTask.java:114)
    at org.apache.ignite.internal.processors.hadoop.impl.v2.HadoopV2TaskContext.runAsJobOwner(HadoopV2TaskContext.java:573)
    at org.apache.ignite.internal.processors.hadoop.taskexecutor.HadoopRunnableTask.call(HadoopRunnableTask.java:114)
    at org.apache.ignite.internal.processors.hadoop.taskexecutor.HadoopRunnableTask.call(HadoopRunnableTask.java:46)
    at org.apache.ignite.internal.processors.hadoop.taskexecutor.HadoopExecutorService$2.body(HadoopExecutorService.java:186)
    at org.apache.ignite.internal.util.worker.GridWorker.run(GridWorker.java:110)


Regards,

RC.



Re: Hadoop Accelerator doesn't work when using SnappyCodec compression

2017-10-16 Thread Evgenii Zhuravlev
Hi,

Have you checked "hadoop checknative -a"? What does it show for snappy?

Evgenii

2017-10-17 7:12 GMT+03:00 C Reid:

> Hi all Igniters,
>
> I have tried many ways to include the native jar and the snappy jar, but
> the exceptions below keep being thrown. (I'm sure HDFS and YARN support
> Snappy, since jobs run fine in the YARN framework with SnappyCodec.)
> Hoping to get some help and suggestions from the community.
>
> [NativeCodeLoader] Unable to load native-hadoop library for your
> platform... using builtin-java classes where applicable
>
> and
>
> java.lang.UnsatisfiedLinkError: org.apache.hadoop.util.NativeCodeLoader.buildSupportsSnappy()Z
> at org.apache.hadoop.util.NativeCodeLoader.buildSupportsSnappy(Native Method)
> at org.apache.hadoop.io.compress.SnappyCodec.checkNativeCodeLoaded(SnappyCodec.java:63)
> at org.apache.hadoop.io.compress.SnappyCodec.getCompressorType(SnappyCodec.java:136)
> at org.apache.hadoop.io.compress.CodecPool.getCompressor(CodecPool.java:150)
> at org.apache.hadoop.io.compress.CompressionCodec$Util.createOutputStreamWithCodecPool(CompressionCodec.java:131)
> at org.apache.hadoop.io.compress.SnappyCodec.createOutputStream(SnappyCodec.java:101)
> at org.apache.hadoop.mapreduce.lib.output.TextOutputFormat.getRecordWriter(TextOutputFormat.java:126)
> at org.apache.ignite.internal.processors.hadoop.impl.v2.HadoopV2Task.prepareWriter(HadoopV2Task.java:104)
> at org.apache.ignite.internal.processors.hadoop.impl.v2.HadoopV2ReduceTask.run0(HadoopV2ReduceTask.java:64)
> at org.apache.ignite.internal.processors.hadoop.impl.v2.HadoopV2Task.run(HadoopV2Task.java:55)
> at org.apache.ignite.internal.processors.hadoop.impl.v2.HadoopV2TaskContext.run(HadoopV2TaskContext.java:266)
> at org.apache.ignite.internal.processors.hadoop.taskexecutor.HadoopRunnableTask.runTask(HadoopRunnableTask.java:209)
> at org.apache.ignite.internal.processors.hadoop.taskexecutor.HadoopRunnableTask.call0(HadoopRunnableTask.java:144)
> at org.apache.ignite.internal.processors.hadoop.taskexecutor.HadoopRunnableTask$1.call(HadoopRunnableTask.java:116)
> at org.apache.ignite.internal.processors.hadoop.taskexecutor.HadoopRunnableTask$1.call(HadoopRunnableTask.java:114)
> at org.apache.ignite.internal.processors.hadoop.impl.v2.HadoopV2TaskContext.runAsJobOwner(HadoopV2TaskContext.java:573)
> at org.apache.ignite.internal.processors.hadoop.taskexecutor.HadoopRunnableTask.call(HadoopRunnableTask.java:114)
> at org.apache.ignite.internal.processors.hadoop.taskexecutor.HadoopRunnableTask.call(HadoopRunnableTask.java:46)
> at org.apache.ignite.internal.processors.hadoop.taskexecutor.HadoopExecutorService$2.body(HadoopExecutorService.java:186)
> at org.apache.ignite.internal.util.worker.GridWorker.run(GridWorker.java:110)
>
>
> Regards,
>
> RC.
>


Re: Inserting data into Ignite got stuck when memory is full with persistent store enabled.

2017-10-16 Thread Ray
The above log was captured when the data ingestion slowed down, not when it
was stuck completely.
The job has been running for two and a half hours now, and the total number
of records to be ingested is 550 million.
During the last ten minutes, fewer than one million records have been
ingested into Ignite.

Write performance using IgniteDataStreamer is really slow with the
persistent store enabled.
I also ran this test on another cluster without the persistent store
enabled; it took about forty minutes to save all 550 million records using
the same code.
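
(For context, a rough sketch of the IgniteDataStreamer knobs usually checked first when ingestion slows down; the cache name, Record type, and values below are placeholders, not recommendations for this cluster.)

import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteDataStreamer;

// Inside some loading method, with 'ignite' and 'records' in scope:
try (IgniteDataStreamer<Long, Record> streamer = ignite.dataStreamer("myCache")) {
    streamer.allowOverwrite(false);        // overwrite mode is significantly slower
    streamer.perNodeBufferSize(1024);      // entries batched per request
    streamer.perNodeParallelOperations(8); // concurrent batches per node
    for (Record r : records)
        streamer.addData(r.getId(), r);
}   // close() flushes the remaining buffered data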



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Hadoop Accelerator doesn't work when using SnappyCodec compression

2017-10-16 Thread C Reid
Hi all Igniters,

I have tried many ways to include the native jar and the snappy jar, but the
exceptions below keep being thrown. (I'm sure HDFS and YARN support Snappy,
since jobs run fine in the YARN framework with SnappyCodec.) Hoping to get
some help and suggestions from the community.

[NativeCodeLoader] Unable to load native-hadoop library for your platform... 
using builtin-java classes where applicable

and

java.lang.UnsatisfiedLinkError: org.apache.hadoop.util.NativeCodeLoader.buildSupportsSnappy()Z
    at org.apache.hadoop.util.NativeCodeLoader.buildSupportsSnappy(Native Method)
    at org.apache.hadoop.io.compress.SnappyCodec.checkNativeCodeLoaded(SnappyCodec.java:63)
    at org.apache.hadoop.io.compress.SnappyCodec.getCompressorType(SnappyCodec.java:136)
    at org.apache.hadoop.io.compress.CodecPool.getCompressor(CodecPool.java:150)
    at org.apache.hadoop.io.compress.CompressionCodec$Util.createOutputStreamWithCodecPool(CompressionCodec.java:131)
    at org.apache.hadoop.io.compress.SnappyCodec.createOutputStream(SnappyCodec.java:101)
    at org.apache.hadoop.mapreduce.lib.output.TextOutputFormat.getRecordWriter(TextOutputFormat.java:126)
    at org.apache.ignite.internal.processors.hadoop.impl.v2.HadoopV2Task.prepareWriter(HadoopV2Task.java:104)
    at org.apache.ignite.internal.processors.hadoop.impl.v2.HadoopV2ReduceTask.run0(HadoopV2ReduceTask.java:64)
    at org.apache.ignite.internal.processors.hadoop.impl.v2.HadoopV2Task.run(HadoopV2Task.java:55)
    at org.apache.ignite.internal.processors.hadoop.impl.v2.HadoopV2TaskContext.run(HadoopV2TaskContext.java:266)
    at org.apache.ignite.internal.processors.hadoop.taskexecutor.HadoopRunnableTask.runTask(HadoopRunnableTask.java:209)
    at org.apache.ignite.internal.processors.hadoop.taskexecutor.HadoopRunnableTask.call0(HadoopRunnableTask.java:144)
    at org.apache.ignite.internal.processors.hadoop.taskexecutor.HadoopRunnableTask$1.call(HadoopRunnableTask.java:116)
    at org.apache.ignite.internal.processors.hadoop.taskexecutor.HadoopRunnableTask$1.call(HadoopRunnableTask.java:114)
    at org.apache.ignite.internal.processors.hadoop.impl.v2.HadoopV2TaskContext.runAsJobOwner(HadoopV2TaskContext.java:573)
    at org.apache.ignite.internal.processors.hadoop.taskexecutor.HadoopRunnableTask.call(HadoopRunnableTask.java:114)
    at org.apache.ignite.internal.processors.hadoop.taskexecutor.HadoopRunnableTask.call(HadoopRunnableTask.java:46)
    at org.apache.ignite.internal.processors.hadoop.taskexecutor.HadoopExecutorService$2.body(HadoopExecutorService.java:186)
    at org.apache.ignite.internal.util.worker.GridWorker.run(GridWorker.java:110)


Regards,

RC.


Re: integrate with prestodb

2017-10-16 Thread Denis Magda
Shawn,

What's the purpose of Presto, then, if you consider holding all the data in
RAM? From what I see, Presto is intended for joining data stored in
different storages.

As for the Ignite persistence, here are some performance hints you might
need to apply:
https://apacheignite.readme.io/docs/durable-memory-tuning

--
Denis


On Mon, Oct 16, 2017 at 7:42 PM, shawn.du wrote:

> Hi Denis,
>
> We are evaluating this feature (our production uses Ignite 1.9 and we are
> testing Ignite 2.2); it does make things simpler, but we don't want to lose
> performance.
> We need careful testing; judging from our first round of test results, the
> disk IO will be the bottleneck.
> The load average is higher than Ignite 1.9 without this feature. Also, I
> don't know whether Ignite loading data from disk will be fast enough
> compared with decoding the data in memory.
>
> Thanks
> Shawn
>
> On 10/17/2017 10:25, Denis Magda wrote:
>
> Shawn,
>
> Then my suggestion would be to enable Ignite persistence [1], which will
> store the whole data set you have. RAM will keep only a subset for the
> performance benefits. Ignite SQL is fully supported for the persistence;
> you can even join in-RAM and disk-only data sets. Plus, your compression
> becomes optional.
>
>
> [1] https://ignite.apache.org/features/persistence.html
>
> —
> Denis
>
> > On Oct 16, 2017, at 7:18 PM, shawn.du wrote:
> >
> > Hi Denis,
> >
> > Yes, we do want to limit the RAM to less than 64G;
> > RAM is still an expensive resource.
> > If we store our data in an Ignite SQL-queryable format,
> > our data may use more than 640G. This is too expensive for us.
> > So we store data in a binary format that works a
> > bit like ORC or Parquet. Only a few important columns are
> > SQL-queryable and the others are not. This way, we
> > use less RAM, but we have to use map-reduce to
> > query the data, which is a bit complex: query
> > in the client with SQL, then submit jobs to Ignite compute,
> > and finally do some post-aggregation in the client.
> > This is why I want to give Presto a try. We like SQL;
> > we want all computation on the server side.
> >
> > welcome your comments.
> >
> > Thanks
> > Shawn
> >
> > On 10/17/2017 07:57, Denis Magda wrote:
> > Hello Shawn,
> >
> > Do I understand properly that you have scarce RAM
> > resources and think to exploit Presto as an alternative SQL engine in
> > Ignite that queries both RAM and disk data sets? If that’s
> > the case, then just enable Ignite native persistence [1]
> > and you’ll get all the data stored on disk and as much as
> > you can afford in RAM. The SQL works over both tiers transparently for you.
> >
> > [1] https://ignite.apache.org/features/persistence.html
> >
> > —
> > Denis
> >
> > > On Oct 16, 2017, at 2:19 AM, Alexey Kukushkin wrote:
> > >
> > > Cross-sending to the DEV community.
> > >
> > > On Mon, Oct 16, 2017 at 12:14 PM, shawn.du wrote:
> > > Hi community,
> > >
> > > I am trying to implement a connector for Presto to connect to Ignite.
> > > I think it will be a very interesting thing to connect Ignite and
> > > Presto.
> > >
> > > In fact, currently we use Ignite and it works very well,
> > > but in order to save memory, we build compressed binary data.
> > > Thus we cannot query it using SQL. We use Ignite map-reduce to query
> > > the data.
> > >
> > > Using Presto, we may use SQL again. If it is fast
> > > enough, Ignite will be our in-memory storage and not
> > > responsible for computing, or only for simple queries.
> > > The only thing I am concerned about is whether Presto is fast
> > > enough compared with Ignite. For now all Ignite queries cost
> > > less than 5 seconds and most are hundreds of milliseconds.
> > > Also, Presto provides a connector for Redis. I don't
> > > know whether the community has interest in contributing to presto-ignite?
> > >
> > > Thanks
> > > Shawn
> > >
> > >
> > >
> > >
> > > --
> > > Best regards,
> > > Alexey
> >
>
>


Re: integrate with prestodb

2017-10-16 Thread shawn.du






Hi Denis,

We are evaluating this feature (our production uses Ignite 1.9 and we are testing Ignite 2.2); it does make things simpler, but we don't want to lose performance. We need careful testing; judging from our first round of test results, the disk IO will be the bottleneck. The load average is higher than Ignite 1.9 without this feature. Also, I don't know whether Ignite loading data from disk will be fast enough compared with decoding the data in memory.

Thanks,
Shawn

On 10/17/2017 10:25, Denis Magda wrote:


Shawn,

Then my suggestion would be to enable Ignite persistence [1], which will store the whole data set you have. RAM will keep only a subset for the performance benefits. Ignite SQL is fully supported for the persistence; you can even join in-RAM and disk-only data sets. Plus, your compression becomes optional.

[1] https://ignite.apache.org/features/persistence.html 

—
Denis

> On Oct 16, 2017, at 7:18 PM, shawn.du wrote:
> 
> Hi Denis,
> 
> Yes, we do want to limit the RAM to less than 64G; RAM is still an expensive resource.
> If we store our data in an Ignite SQL-queryable format, our data may use more than 640G. This is too expensive for us.
> So we store data in a binary format that works a bit like ORC or Parquet. Only a few important columns are SQL-queryable and the others are not. This way we use less RAM, but we have to use map-reduce to query the data, which is a bit complex: query in the client with SQL, then submit jobs to Ignite compute, and finally do some post-aggregation in the client.
> This is why I want to give Presto a try. We like SQL; we want all computation on the server side.
> 
> welcome your comments.
> 
> Thanks
> Shawn
> 
> On 10/17/2017 07:57, Denis Magda wrote:
> Hello Shawn, 
> 
> Do I understand properly that you have scarce RAM resources and think to exploit Presto as an alternative SQL engine in Ignite that queries both RAM and disk data sets? If that’s the case, then just enable Ignite native persistence [1] and you’ll get all the data stored on disk and as much as you can afford in RAM. The SQL works over both tiers transparently for you.
> 
> [1] https://ignite.apache.org/features/persistence.html
> 
> — 
> Denis 
> 
> > On Oct 16, 2017, at 2:19 AM, Alexey Kukushkin wrote:
> >  
> > Cross-sending to the DEV community. 
> >  
> > On Mon, Oct 16, 2017 at 12:14 PM, shawn.du wrote:
> > Hi community, 
> >  
> > I am trying to implement a connector for Presto to connect to Ignite.
> > I think it will be a very interesting thing to connect Ignite and Presto.
> >
> > In fact, currently we use Ignite and it works very well, but in order to save memory, we build compressed binary data.
> > Thus we cannot query it using SQL. We use Ignite map-reduce to query the data.
> >
> > Using Presto, we may use SQL again. If it is fast enough, Ignite will be our in-memory storage and not responsible for computing, or only for simple queries.
> > The only thing I am concerned about is whether Presto is fast enough compared with Ignite. For now all Ignite queries cost less than 5 seconds and most are hundreds of milliseconds.
> > Also, Presto provides a connector for Redis. I don't know whether the community has interest in contributing to presto-ignite?
> >  
> > Thanks 
> > Shawn 
> >  
> >  
> >  
> >  
> > --  
> > Best regards, 
> > Alexey 
> 






Re: integrate with prestodb

2017-10-16 Thread Denis Magda
Shawn,

Then my suggestion would be to enable Ignite persistence [1], which will store
the whole data set you have. RAM will keep only a subset for the performance
benefits. Ignite SQL is fully supported for the persistence; you can even join
in-RAM and disk-only data sets. Plus, your compression becomes optional.

[1] https://ignite.apache.org/features/persistence.html 


—
Denis

> On Oct 16, 2017, at 7:18 PM, shawn.du wrote:
> 
> Hi Denis,
> 
> Yes, we do want to limit the RAM to less than 64G; RAM is still an
> expensive resource.
> If we store our data in an Ignite SQL-queryable format, our data may use
> more than 640G. This is too expensive for us.
> So we store data in a binary format that works a bit like ORC or Parquet.
> Only a few important columns are SQL-queryable and the others are not. This
> way we use less RAM, but we have to use map-reduce to query the data, which
> is a bit complex: query in the client with SQL, then submit jobs to Ignite
> compute, and finally do some post-aggregation in the client.
> This is why I want to give Presto a try. We like SQL; we want all
> computation on the server side.
> 
> welcome your comments.
> 
> Thanks
> Shawn
> 
> On 10/17/2017 07:57, Denis Magda wrote:
> Hello Shawn, 
> 
> Do I understand properly that you have scarce RAM resources and think to
> exploit Presto as an alternative SQL engine in Ignite that queries both RAM
> and disk data sets? If that’s the case, then just enable Ignite native
> persistence [1] and you’ll get all the data stored on disk and as much as
> you can afford in RAM. The SQL works over both tiers transparently for you.
> 
> [1] https://ignite.apache.org/features/persistence.html
> 
> — 
> Denis 
> 
> > On Oct 16, 2017, at 2:19 AM, Alexey Kukushkin wrote:
> >  
> > Cross-sending to the DEV community. 
> >  
> > On Mon, Oct 16, 2017 at 12:14 PM, shawn.du wrote:
> > Hi community, 
> >  
> > I am trying to implement a connector for Presto to connect to Ignite.
> > I think it will be a very interesting thing to connect Ignite and Presto.
> >
> > In fact, currently we use Ignite and it works very well, but in order to
> > save memory, we build compressed binary data.
> > Thus we cannot query it using SQL. We use Ignite map-reduce to query the
> > data.
> >
> > Using Presto, we may use SQL again. If it is fast enough, Ignite will be
> > our in-memory storage and not responsible for computing, or only for
> > simple queries.
> > The only thing I am concerned about is whether Presto is fast enough
> > compared with Ignite. For now all Ignite queries cost less than 5 seconds
> > and most are hundreds of milliseconds.
> > Also, Presto provides a connector for Redis. I don't know whether the
> > community has interest in contributing to presto-ignite?
> >  
> > Thanks 
> > Shawn 
> >  
> >  
> >  
> >  
> > --  
> > Best regards, 
> > Alexey 
> 



Re: integrate with prestodb

2017-10-16 Thread shawn.du






Hi Denis,

Yes, we do want to limit the RAM to less than 64G; RAM is still an expensive resource.
If we store our data in an Ignite SQL-queryable format, our data may use more than 640G. This is too expensive for us.
So we store data in a binary format that works a bit like ORC or Parquet. Only a few important columns are SQL-queryable and the others are not. This way we use less RAM, but we have to use map-reduce to query the data, which is a bit complex: query in the client with SQL, then submit jobs to Ignite compute, and finally do some post-aggregation in the client.
This is why I want to give Presto a try. We like SQL; we want all computation on the server side.

Welcome your comments.

Thanks,
Shawn

On 10/17/2017 07:57, Denis Magda wrote:


Hello Shawn,

Do I understand properly that you have scarce RAM resources and think to exploit Presto as an alternative SQL engine in Ignite that queries both RAM and disk data sets? If that’s the case, then just enable Ignite native persistence [1] and you’ll get all the data stored on disk and as much as you can afford in RAM. The SQL works over both tiers transparently for you.

[1] https://ignite.apache.org/features/persistence.html 

—
Denis

> On Oct 16, 2017, at 2:19 AM, Alexey Kukushkin wrote:
> 
> Cross-sending to the DEV community.
> 
> On Mon, Oct 16, 2017 at 12:14 PM, shawn.du wrote:
> Hi community,
> 
> I am trying to implement a connector for Presto to connect to Ignite.
> I think it will be a very interesting thing to connect Ignite and Presto.
>
> In fact, currently we use Ignite and it works very well, but in order to save memory, we build compressed binary data.
> Thus we cannot query it using SQL. We use Ignite map-reduce to query the data.
>
> Using Presto, we may use SQL again. If it is fast enough, Ignite will be our in-memory storage and not responsible for computing, or only for simple queries.
> The only thing I am concerned about is whether Presto is fast enough compared with Ignite. For now all Ignite queries cost less than 5 seconds and most are hundreds of milliseconds.
> Also, Presto provides a connector for Redis. I don't know whether the community has interest in contributing to presto-ignite?
> 
> Thanks
> Shawn
> 
> 
> 
> 
> -- 
> Best regards,
> Alexey






Re: integrate with prestodb

2017-10-16 Thread Denis Magda
Hello Shawn,

Do I understand properly that you have scarce RAM resources and think to
exploit Presto as an alternative SQL engine in Ignite that queries both RAM and
disk data sets? If that’s the case, then just enable Ignite native persistence
[1] and you’ll get all the data stored on disk and as much as you can afford in
RAM. The SQL works over both tiers transparently for you.

[1] https://ignite.apache.org/features/persistence.html 


—
Denis

> On Oct 16, 2017, at 2:19 AM, Alexey Kukushkin wrote:
> 
> Cross-sending to the DEV community.
> 
> On Mon, Oct 16, 2017 at 12:14 PM, shawn.du wrote:
> Hi community,
> 
> I am trying to implement a connector for Presto to connect to Ignite.
> I think it will be a very interesting thing to connect Ignite and Presto.
>
> In fact, currently we use Ignite and it works very well, but in order to
> save memory, we build compressed binary data.
> Thus we cannot query it using SQL. We use Ignite map-reduce to query the
> data.
>
> Using Presto, we may use SQL again. If it is fast enough, Ignite will be our
> in-memory storage and not responsible for computing, or only for simple
> queries.
> The only thing I am concerned about is whether Presto is fast enough
> compared with Ignite. For now all Ignite queries cost less than 5 seconds
> and most are hundreds of milliseconds.
> Also, Presto provides a connector for Redis. I don't know whether the
> community has interest in contributing to presto-ignite?
> 
> Thanks
> Shawn
> 
> 
> 
> 
> -- 
> Best regards,
> Alexey
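
(A minimal sketch of what "enable Ignite persistence" looks like on the Ignite 2.1/2.2 API discussed in this thread; defaults are used, and note the cluster must be activated after start once persistence is on.)

import org.apache.ignite.Ignite;
import org.apache.ignite.Ignition;
import org.apache.ignite.configuration.IgniteConfiguration;
import org.apache.ignite.configuration.PersistentStoreConfiguration;

public class PersistentNode {
    public static void main(String[] args) {
        IgniteConfiguration cfg = new IgniteConfiguration();
        // Native persistence: full data set on disk, a hot subset in RAM.
        cfg.setPersistentStoreConfiguration(new PersistentStoreConfiguration());

        Ignite ignite = Ignition.start(cfg);
        ignite.active(true); // persistence-enabled clusters start inactive
    }
}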



Re: Custom CacheStoreAdapter implementation - fields are null

2017-10-16 Thread matt
Hi, thanks for the reply. I should've mentioned that I do use a factory, and
the http-client is passed into the factory, which is then set on a field.
When the create() method is called, I return a new CacheStoreAdapter along
with the client that's in the factory. At the time the create() method is
called, the http client is present and is properly passed to the
CacheStoreAdapter instance. But when my loadAll() method is called, it is
null.

- Matt



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Custom CacheStoreAdapter implementation - fields are null

2017-10-16 Thread vkulichenko
Hi Matt,

A CacheStore implementation usually should not be serialized in the first
place (actually, I doubt it would even be possible to properly serialize an
HTTP client instance). In CacheConfiguration you provide a Factory instead,
so you can supply your own implementation that will correctly create the
store instance. Don't forget to deploy the factory class on all the nodes
before starting.

-Val
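
(To illustrate the factory approach: a rough sketch where only serializable state crosses the wire and the client is rebuilt on each node inside create(). HttpClient here is a hypothetical stand-in for whatever client class is actually used.)

import javax.cache.Cache;
import javax.cache.configuration.Factory;
import org.apache.ignite.cache.store.CacheStore;
import org.apache.ignite.cache.store.CacheStoreAdapter;

public class HttpStoreFactory implements Factory<CacheStore<Long, String>> {
    private final String endpoint; // serializable state only

    public HttpStoreFactory(String endpoint) { this.endpoint = endpoint; }

    @Override public CacheStore<Long, String> create() {
        // Recreated locally on each node; the client itself is never serialized.
        HttpClient client = new HttpClient(endpoint); // hypothetical client type

        return new CacheStoreAdapter<Long, String>() {
            @Override public String load(Long key) { return client.get(key); }
            @Override public void write(Cache.Entry<? extends Long, ? extends String> e) {
                client.put(e.getKey(), e.getValue());
            }
            @Override public void delete(Object key) { client.remove(key); }
        };
    }
}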



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Query performance against table with/out backup

2017-10-16 Thread vkulichenko
Hi,

Do you have indexes configured and (if yes) are they applied properly to the
query? Did you check the execution plan?

It sounds like your query has to scan the whole cache, which gets slower
with backups. Can you provide your full cache configuration and data model?

-Val
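
(For reference, a minimal sketch of pulling the execution plan from the Java API; the cache name, table, and columns are placeholders echoing the question, not a verified reproduction.)

import java.util.List;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.cache.query.SqlFieldsQuery;

// With 'ignite' in scope:
IgniteCache<?, ?> cache = ignite.cache("myCache");
// EXPLAIN returns the H2 query plan as rows of text.
List<List<?>> plan = cache.query(new SqlFieldsQuery(
    "EXPLAIN SELECT COUNT(*) FROM Table WHERE column1 > ? AND column2 > ?")
    .setArgs(0.75, 0.75)).getAll();
for (List<?> row : plan)
    System.out.println(row.get(0)); // shows whether an index or a full scan is used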



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Custom CacheStoreAdapter implementation - fields are null

2017-10-16 Thread matt
Hi,

I've implemented a CacheStoreAdapter and am seeing that when Ignite starts
to use this class (loadAll, etc.), the fields that I set in my constructor
are null when the methods are called. I realize there's something I'm doing
wrong in terms of how my CacheStoreAdapter is serialized, but I'm not sure
what to do. The values passed into my CacheStoreAdapter constructor are
arbitrary, but one is an HTTP client and another is a basic Java class used
for cache key/field mapping. How can I make sure that my adapter has access
to the objects it requires when Ignite is calling on it?

Thanks,
Matt



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Client Near Cache Configuration Lost after Cluster Node Removed

2017-10-16 Thread torjt
Hello All,

We are having an issue with Ignite client near caches being "lost" when a
node is removed from the Ignite cluster.  Furthermore, using version 2.1.0,
we see this issue when another client joins the topology.  I built Ignite
from Git today, 10/16/17, with the latest changes, ver. 2.3.0-SNAPSHOT.  As
of version 2.3.0-SNAPSHOT, bringing clients up/down does not cause an active
client to lose its near cache, and performance is good.  However, when we
remove a node from the cluster, the client immediately communicates with the
cluster and disregards its near cache.  Restarting the client remedies the
issue.

Here are the steps to reproduce the issue (a near-cache configuration sketch
for step 2 follows the list):
Apache Ignite Version:
*SNAPSHOT#20171016-sha1:ca6662bcb4eecc62493e2e25a572ed0b982c046c*
1.  Start 2 Ignite servers
2.  Start a client with caches configured as near-cache
3.  Access the caches
4.  Stop the node the client is connected to
4a. The client immediately bypasses the near cache and accesses the cluster
for the cache miss
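
(For step 2, a minimal sketch of how a client-side near cache is typically created; the cache name and eviction size are placeholders, not the reporter's actual configuration.)

import org.apache.ignite.IgniteCache;
import org.apache.ignite.cache.eviction.lru.LruEvictionPolicy;
import org.apache.ignite.configuration.NearCacheConfiguration;

// On the client node, with 'ignite' in scope:
NearCacheConfiguration<Integer, String> nearCfg = new NearCacheConfiguration<>();
// Bound the near cache; least-recently-used entries are evicted locally.
nearCfg.setNearEvictionPolicy(new LruEvictionPolicy<>(10_000));

// Attaches a client-side near cache to the existing distributed cache.
IgniteCache<Integer, String> cache = ignite.getOrCreateNearCache("myCache", nearCfg);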





--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


RE: Error with ScanQuery

2017-10-16 Thread Raymond Wilson
I just read through the Jira ticket and wonder if this is the underlying
cause of what I am seeing, as I have KeepBinaryInStore=false in my
configuration.



*From:* Raymond Wilson [mailto:raymond_wil...@trimble.com]
*Sent:* Tuesday, October 17, 2017 9:49 AM
*To:* 'user@ignite.apache.org' 
*Subject:* RE: Error with ScanQuery



I am using the Ignite native persistence introduced in the 2.* releases.



*From:* Alexey Kukushkin [mailto:kukushkinale...@gmail.com]
*Sent:* Tuesday, October 17, 2017 12:48 AM
*To:* user@ignite.apache.org
*Subject:* Re: Error with ScanQuery



Raymond,



Do you have native or third-party persistence? If you have third-party
persistence, then this "Requesting mapping from grid failed" error I see in
your screenshot was fixed in release 2.3.


RE: Error with ScanQuery

2017-10-16 Thread Raymond Wilson
I am using the Ignite native persistence introduced in the 2.* releases.



*From:* Alexey Kukushkin [mailto:kukushkinale...@gmail.com]
*Sent:* Tuesday, October 17, 2017 12:48 AM
*To:* user@ignite.apache.org
*Subject:* Re: Error with ScanQuery



Raymond,



Do you have native or third-party persistence? If you have third-party
persistence, then this "Requesting mapping from grid failed" error I see in
your screenshot was fixed in release 2.3.


Re: JDBC store Date deserialization problem

2017-10-16 Thread franck102
My bad, the issue is not resolved but my transformer workaround was not
properly installed.

Franck



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Cannot insert data into table using JDBC

2017-10-16 Thread James
Here is my code:

Connection igniteJdbcConnection =
DriverManager.getConnection("jdbc:ignite:cfg://streaming=true:cache=am_jdbc@file:/E:/ignite-jdbc.xml");

PreparedStatement igniteInsertPrepareStatement =
igniteJdbcConnection.prepareStatement("insert into Sample_Superstore(Row_ID,
...");

The above prepareStatement throws
org.apache.ignite.internal.processors.query.IgniteSQLException: Failed to
set schema for DB connection for thread [schema=].



The following is what I observed while debugging.
---
/**
 * JdbcConnection.prepareNativeStatement
 */
PreparedStatement prepareNativeStatement(String sql) throws SQLException
{
return
ignite().context().query().prepareNativeStatement(schemaName(), sql);
}
-CALL the following
method--
/**
 *
 * GridQueryProcessor.prepareNativeStatement
 */
public PreparedStatement prepareNativeStatement(String cacheName, String
sql) throws SQLException {
checkxEnabled();

String schemaName = idx.schema(cacheName);

return idx.prepareNativeStatement(schemaName, sql);
}

Right now, the cacheName parameter is "PUBLIC" at runtime.

CALL following
method---

/** {@inheritDoc} */
@Override  public String schema(String cacheName) {
String res = cacheName2schema.get(cacheName);

if (res == null)
res = "";

return res;
}

Now cacheName2schema has this value - {SQL_PUBLIC_SAMPLE_SUPERSTORE=PUBLIC,
ignite-sys-cache=ignite-sys-cache, am_jdbc=PUBLIC}.

I think the problem is that my prepareStatement carries "PUBLIC" as the
cache name.

Thank you so much!

James





--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Cannot insert data into table using JDBC

2017-10-16 Thread Alexey Kukushkin
James,

Just to confirm - are you saying that Statement.executeUpdate("create table
Sample_Superstore ...") throws that exception but still creates the cache
as expected?


Query performance against table with/out backup

2017-10-16 Thread blackfield
Hello,

We have a table with the following configuration:
1. Persistence is enabled
2. Partition (not replicated)
3. Backup = 1 vs. 0
Everything else, pretty much use default.

We have a table in which we perform the following query:
SELECT COUNT(*) FROM Table WHERE column1 > 0.75 AND column2 > 0.75 AND zone
IN (27000 zones...);

The table has about 75000 rows and zone is the primary key.

I ran the above query with many other options and many different numbers of
client threads (1-50); with backup == 1 it is consistently about twice as
slow as with backup == 0.

The Ignite documentation mentions in many different places that specifying
backups impacts performance.

I understand if the write performance is impacted when backup is specified.

What I am trying to understand is why read performance appears to be
heavily impacted when we specify a backup.








--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Partition exchange timeout can produce OOM with binary cache

2017-10-16 Thread Timofey Fedyanin

Hi,
I use an Ignite 2.2 cache with binary objects produced by
org.apache.ignite.binary.BinaryObjectBuilder.
An exception occurred during
org.apache.ignite.internal.processors.cache.distributed.dht.preloader.GridDhtPartitionsExchangeFuture#waitPartitionRelease:1068,
which eventually led to
org.apache.ignite.internal.processors.cache.GridCachePartitionExchangeManager#dumpPendingObjects:1656
calling the IgniteInternalTx::toString method, which produces a HUGE log and
a very big StringBuilder (about 3GB per transaction) on the heap.

I found org.apache.ignite.internal.util.tostring.GridToStringExclude,
but I can't apply it to BinaryObject.
How can I solve this issue (disable GridToStringBuilder for
BinaryObjectImpl)?


Sincerely,

Timofey Fedyanin

tfedyan...@gmail.com
+7 985 446 9277

Skype: timofeyfedyanin



Re: Cannot insert data into table using JDBC

2017-10-16 Thread James
I found out that the following code in IgniteH2Indexing.java throws an
exception:

stmt.executeUpdate("SET SCHEMA " + H2Utils.withQuotes(schema));

In the Ignite implementation, the schema is "PUBLIC" at the beginning at
runtime, then changes to "".

But I cannot figure out why my code gets this exception. I need your help
urgently. Thanks.





--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: JDBC store Date deserialization problem

2017-10-16 Thread franck102
Hi Val,

Was this fixed in 2.2.0? It looks like my workaround doesn't work anymore
with that version...

SQL_TYPES now includes java.util.Date; however, the type of my binary field
is ignored when serializing by the (very strange...) implementation of
org.apache.ignite.internal.binary.builder.BinaryValueWithType:
coming out of the store I have a nice instance with { val=;
type=11 }, and the writeTo method ignores the type field and lets the
context serialize the field (ctx.writeValue).

The context uses the Map below, which is missing java.util.Date :(





--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Cannot insert data into table using JDBC

2017-10-16 Thread James
I am running a local server with native persistence.

CacheConfiguration cacheConfiguration = new CacheConfiguration<>();
cacheConfiguration.setSqlSchema("public");
cacheConfiguration.setName("am_jdbc");
igniteConfiguration.setCacheConfiguration(cacheConfiguration);

I am using
DriverManager.getConnection("jdbc:ignite:cfg://streaming=true:cache=am_jdbc@file:/E:/ignite-jdbc.xml");

I can run Statement.executeUpdate() to create a table. SQL is listed below:

create table Sample_Superstore(Row_ID decimal, Order_ID varchar, Order_Date
timestamp, Ship_Date timestamp, Ship_Mode varchar, Customer_ID varchar,
Customer_Name varchar, Segment varchar, Country varchar, City varchar, State
varchar, Postal_Code decimal, Region varchar, Product_ID varchar, Category
varchar, Sub_Category varchar, Product_Name varchar, Sales decimal, Quantity
decimal, Discount decimal, Profit decimal, primary key (Row_ID)) with
"template=partitioned,backups=1"

I can confirm a cache is created with a name - SQL_PUBLIC_SAMPLE_SUPERSTORE.

I got the following exception:

class org.apache.ignite.internal.processors.query.IgniteSQLException: Failed
to set schema for DB connection for thread [schema=]

org.h2.jdbc.JdbcSQLException: Schema  not found; SQL statement:
SET SCHEMA "" [90079-195]
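
(One thing worth checking, stated as an assumption rather than a confirmed diagnosis: schema names set via setSqlSchema are case-sensitive, and H2 stores unquoted schema names in upper case, so the lower-case "public" above may be what fails to resolve at "SET SCHEMA" time. A sketch of the same configuration upper-cased:)

CacheConfiguration cacheConfiguration = new CacheConfiguration<>();
cacheConfiguration.setSqlSchema("PUBLIC"); // upper case, matching H2's PUBLIC schema
cacheConfiguration.setName("am_jdbc");
igniteConfiguration.setCacheConfiguration(cacheConfiguration);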

The whole log is listed below:


>>>__    
>>>   /  _/ ___/ |/ /  _/_  __/ __/  
>>>  _/ // (7 7// /  / / / _/
>>> /___/\___/_/|_/___/ /_/ /___/   
>>> 
>>> ver. 2.1.0#20170721-sha1:a6ca5c8a
>>> 2017 Copyright(C) Apache Software Foundation
>>> 
>>> Ignite documentation: http://ignite.apache.org

2017-10-16 23:12:04 INFO 
IgniteKernal%ignite-jdbc-driver-ed1473ab-40b2-47ff-90af-372c3ec22930:475 -
Config URL: n/a
2017-10-16 23:12:04 INFO 
IgniteKernal%ignite-jdbc-driver-ed1473ab-40b2-47ff-90af-372c3ec22930:475 -
Daemon mode: off
2017-10-16 23:12:04 INFO 
IgniteKernal%ignite-jdbc-driver-ed1473ab-40b2-47ff-90af-372c3ec22930:475 -
OS: Windows 7 6.1 amd64
2017-10-16 23:12:04 INFO 
IgniteKernal%ignite-jdbc-driver-ed1473ab-40b2-47ff-90af-372c3ec22930:475 -
OS user: tester
2017-10-16 23:12:04 INFO 
IgniteKernal%ignite-jdbc-driver-ed1473ab-40b2-47ff-90af-372c3ec22930:475 -
PID: 113872
2017-10-16 23:12:04 INFO 
IgniteKernal%ignite-jdbc-driver-ed1473ab-40b2-47ff-90af-372c3ec22930:475 -
Language runtime: Java Platform API Specification ver. 1.8
2017-10-16 23:12:04 INFO 
IgniteKernal%ignite-jdbc-driver-ed1473ab-40b2-47ff-90af-372c3ec22930:475 -
VM information: Java(TM) SE Runtime Environment 1.8.0_102-b14 Oracle
Corporation Java HotSpot(TM) 64-Bit Server VM 25.102-b14
2017-10-16 23:12:04 INFO 
IgniteKernal%ignite-jdbc-driver-ed1473ab-40b2-47ff-90af-372c3ec22930:475 -
VM total memory: 3.5GB
2017-10-16 23:12:04 INFO 
IgniteKernal%ignite-jdbc-driver-ed1473ab-40b2-47ff-90af-372c3ec22930:475 -
Remote Management [restart: off, REST: off, JMX (remote: off)]
2017-10-16 23:12:04 INFO 
IgniteKernal%ignite-jdbc-driver-ed1473ab-40b2-47ff-90af-372c3ec22930:475 -
IGNITE_HOME=D:\apache\ignite\2.0.0
2017-10-16 23:12:04 INFO 
IgniteKernal%ignite-jdbc-driver-ed1473ab-40b2-47ff-90af-372c3ec22930:475 -
VM arguments:
[-agentlib:jdwp=transport=dt_socket,suspend=y,address=localhost:53806,
-Dmaven.home=EMBEDDED,
-Dclassworlds.conf=E:\workspace10062016\.metadata\.plugins\org.eclipse.m2e.launching\launches\m2conf2309621012848687193.tmp,
-Dmaven.multiModuleProjectDirectory=E:\workspace\am-data-analysis,
-javaagent:C:\Users\tester\.p2\pool\plugins\com.ifedorenko.m2e.sourcelookup_1.1.0.201506181114\com.ifedorenko.m2e.sourcelookup.javaagent.jar,
-Xms512m, -XX:MaxPermSize=256m, -Dfile.encoding=UTF-8]
2017-10-16 23:12:04 INFO 
IgniteKernal%ignite-jdbc-driver-ed1473ab-40b2-47ff-90af-372c3ec22930:475 -
Configured caches [in 'sysMemPlc' memoryPolicy: ['ignite-sys-cache']]
2017-10-16 23:12:04 WARN 
IgniteKernal%ignite-jdbc-driver-ed1473ab-40b2-47ff-90af-372c3ec22930:480 -
Peer class loading is enabled (disable it in production for performance and
deployment consistency reasons)
2017-10-16 23:12:04 INFO 
IgniteKernal%ignite-jdbc-driver-ed1473ab-40b2-47ff-90af-372c3ec22930:475 -
3-rd party licenses can be found at: D:\apache\ignite\2.0.0\libs\licenses
2017-10-16 23:12:04 INFO  IgnitePluginProcessor:475 - Configured plugins:
2017-10-16 23:12:04 INFO  IgnitePluginProcessor:475 -   ^-- None
2017-10-16 23:12:04 INFO  IgnitePluginProcessor:475 - 
2017-10-16 23:12:04 INFO  TcpCommunicationSpi:475 - Successfully bound
communication NIO server to TCP port [port=47101, locHost=0.0.0.0/0.0.0.0,
selectorsCnt=4, selectorSpins=0, pairedConn=false]
2017-10-16 23:12:04 WARN  TcpCommunicationSpi:480 - Message queue limit is
set to 0 which may lead to potential OOMEs when running cache operations in
FULL_ASYNC or PRIMARY_SYNC modes due to message queues growth on sender and
receiver sides.
2017-10-16 23:12:04 WARN  NoopCheckpointSpi:480 - 

Re: DML sql transaction

2017-10-16 Thread dkarachentsev
Hi Alisher,

This issue is under active development:
https://issues.apache.org/jira/browse/IGNITE-3478

Thanks!
-Dmitry
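
(For contrast with SQL DML, which is not transactional until that ticket lands, a minimal sketch of an explicit transaction over the cache API on a TRANSACTIONAL cache; the cache name and values are placeholders.)

import org.apache.ignite.IgniteCache;
import org.apache.ignite.transactions.Transaction;

// With 'ignite' in scope and "txCache" configured as TRANSACTIONAL:
try (Transaction tx = ignite.transactions().txStart()) {
    IgniteCache<Integer, String> cache = ignite.cache("txCache");
    cache.put(1, "one"); // enlisted in the surrounding transaction
    cache.put(2, "two");
    tx.commit();         // without commit(), close() rolls the transaction back
}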



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


DML sql transaction

2017-10-16 Thread Ali
Hello!

Does Ignite open a transaction for DML operations by default, even if a
transaction is not opened programmatically?

P.S. CacheAtomicityMode is TRANSACTIONAL



Re: Error with ScanQuery

2017-10-16 Thread Alexey Kukushkin
Raymond,

Do you have native or third-party persistence? If you have third-party
persistence, then this "Requesting mapping from grid failed" error I see in
your screenshot was fixed in release 2.3.


RE: Error with ScanQuery

2017-10-16 Thread Raymond Wilson
Hi Pavel,



Thanks for the tips on MemoryStream versus byte[]. I’ll look into it.



Thanks,

Raymond.





*From:* Pavel Tupitsyn [mailto:ptupit...@apache.org]
*Sent:* Monday, October 16, 2017 11:00 PM
*To:* user@ignite.apache.org
*Subject:* Re: Error with ScanQuery



Hi Raymond,



1) Please attach full exception details, including all inner exceptions
(can be obtained with ex.ToString())



2) Storing MemoryStream in cache is not a very good idea: there is a lot of
unnecessary overhead

(the underlying buffer is usually bigger than the real data, there is some
state, etc).

Consider storing byte[] instead (call MemoryStream.ToArray()).



Thanks,

Pavel



On Mon, Oct 16, 2017 at 11:46 AM, Raymond Wilson wrote:

I can do that tomorrow.

In the meantime, can you speak to what that error means?

Thanks,
Raymond.

Sent from my iPhone


> On 16/10/2017, at 9:18 PM, dkarachentsev wrote:
>
> Hi Raymond,
>
> Could you please attach full log and config for failed node?
>
> Thanks!
> -Dmitry
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/


RE: Error with ScanQuery

2017-10-16 Thread Raymond Wilson
Below is output from ex.ToString()



---



---

Apache.Ignite.Core.Binary.BinaryObjectException: Requesting mapping from
grid failed for [platformId=1, typeId=349095370] --->
Apache.Ignite.Core.Common.JavaException: class
org.apache.ignite.binary.BinaryObjectException: Requesting mapping from
grid failed for [platformId=1, typeId=349095370]

at
org.apache.ignite.internal.processors.platform.binary.PlatformBinaryProcessor.processInStreamOutStream(PlatformBinaryProcessor.java:126)

at
org.apache.ignite.internal.processors.platform.PlatformTargetProxyImpl.inStreamOutStream(PlatformTargetProxyImpl.java:155)

Caused by: java.lang.ClassNotFoundException: Requesting mapping from grid
failed for [platformId=1, typeId=349095370]

at
org.apache.ignite.internal.MarshallerContextImpl.getClassName(MarshallerContextImpl.java:383)

at
org.apache.ignite.internal.processors.platform.binary.PlatformBinaryProcessor.processInStreamOutStream(PlatformBinaryProcessor.java:120)

... 1 more





   --- End of inner exception stack trace ---

   at Apache.Ignite.Core.Impl.Unmanaged.UnmanagedCallbacks.Error(Void*
target, Int32 errType, SByte* errClsChars, Int32 errClsCharsLen, SByte*
errMsgChars, Int32 errMsgCharsLen, SByte* stackTraceChars, Int32
stackTraceCharsLen, Void* errData, Int32 errDataLen)

   at
Apache.Ignite.Core.Impl.Unmanaged.IgniteJniNativeMethods.TargetInStreamOutStream(Void*
ctx, Void* target, Int32 opType, Int64 inMemPtr, Int64 outMemPtr)

   at Apache.Ignite.Core.Impl.PlatformTarget.DoOutInOp[TR](Int32 type,
Action`1 outAction, Func`2 inAction)

   at Apache.Ignite.Core.Impl.Binary.BinaryProcessor.GetTypeName(Int32 id)

   at Apache.Ignite.Core.Impl.Binary.Marshaller.GetDescriptor(Boolean
userType, Int32 typeId, Boolean requiresType, String typeName, Type
knownType)

   at Apache.Ignite.Core.Impl.Binary.BinaryReader.ReadFullObject[T](Int32
pos, Type typeOverride)

   at Apache.Ignite.Core.Impl.Binary.BinaryReader.TryDeserialize[T](T& res,
Type typeOverride)

   at Apache.Ignite.Core.Impl.Binary.BinaryReader.Deserialize[T](Type
typeOverride)

   at
Apache.Ignite.Core.Impl.Binary.BinaryReader.ReadBinaryObject[T](Boolean
doDetach)

   at Apache.Ignite.Core.Impl.Binary.BinaryReader.TryDeserialize[T](T& res,
Type typeOverride)

   at Apache.Ignite.Core.Impl.Binary.BinaryReader.Deserialize[T](Type
typeOverride)

   at Apache.Ignite.Core.Impl.Cache.Query.QueryCursor`2.Read(BinaryReader
reader)

   at
Apache.Ignite.Core.Impl.Cache.Query.AbstractQueryCursor`1.ConvertGetBatch(IBinaryStream
stream)

   at Apache.Ignite.Core.Impl.PlatformTarget.DoInOp[T](Int32 type, Func`2
action)

   at
Apache.Ignite.Core.Impl.Cache.Query.AbstractQueryCursor`1.RequestBatch()

   at Apache.Ignite.Core.Impl.Cache.Query.AbstractQueryCursor`1.MoveNext()

   at VSS.Raptor.IgnitePOC.TestApp.Form1.button1_Click(Object sender,
EventArgs e) in
C:\Dev\VSS.Raptor.IgnitePOC\WindowsFormsApplication1\Form1.cs:line 254

---

OK

---



*From:* Raymond Wilson [mailto:raymond_wil...@trimble.com]
*Sent:* Tuesday, October 17, 2017 12:17 AM
*To:* 'user@ignite.apache.org' 
*Subject:* RE: Error with ScanQuery



Hi Dmitry,



I don’t seem to get any Java exceptions reported in the log.



Below is the inner exception detail from the IDE error dialog:



class org.apache.ignite.binary.BinaryObjectException: Requesting mapping
from grid failed for [platformId=1, typeId=349095370]

at
org.apache.ignite.internal.processors.platform.binary.PlatformBinaryProcessor.processInStreamOutStream(PlatformBinaryProcessor.java:126)

at
org.apache.ignite.internal.processors.platform.PlatformTargetProxyImpl.inStreamOutStream(PlatformTargetProxyImpl.java:155)

Caused by: java.lang.ClassNotFoundException: Requesting mapping from grid
failed for [platformId=1, typeId=349095370]

at
org.apache.ignite.internal.MarshallerContextImpl.getClassName(MarshallerContextImpl.java:383)

at
org.apache.ignite.internal.processors.platform.binary.PlatformBinaryProcessor.processInStreamOutStream(PlatformBinaryProcessor.java:120)

... 1 more



Raymond.



-Original Message-
From: dkarachentsev [mailto:dkarachent...@gridgain.com]
Sent: Monday, October 16, 2017 11:18 PM
To: user@ignite.apache.org
Subject: Re: Error with ScanQuery



Raymond,



Without logs I see just that deserialization failed for some reason.
Actually I'm more interested in exceptions that come from Ignite's Java
part, if any.



Thanks!

-Dmitry







--

Sent from: http://apache-ignite-users.70518.x6.nabble.com/


RE: Error with ScanQuery

2017-10-16 Thread Raymond Wilson
Hi Dmitry,



I don’t seem to get any Java exceptions reported in the log.



Below is the inner exception detail from the IDE error dialog:



class org.apache.ignite.binary.BinaryObjectException: Requesting mapping
from grid failed for [platformId=1, typeId=349095370]

at
org.apache.ignite.internal.processors.platform.binary.PlatformBinaryProcessor.processInStreamOutStream(PlatformBinaryProcessor.java:126)

at
org.apache.ignite.internal.processors.platform.PlatformTargetProxyImpl.inStreamOutStream(PlatformTargetProxyImpl.java:155)

Caused by: java.lang.ClassNotFoundException: Requesting mapping from grid
failed for [platformId=1, typeId=349095370]

at
org.apache.ignite.internal.MarshallerContextImpl.getClassName(MarshallerContextImpl.java:383)

at
org.apache.ignite.internal.processors.platform.binary.PlatformBinaryProcessor.processInStreamOutStream(PlatformBinaryProcessor.java:120)

... 1 more



Raymond.



-Original Message-
From: dkarachentsev [mailto:dkarachent...@gridgain.com]
Sent: Monday, October 16, 2017 11:18 PM
To: user@ignite.apache.org
Subject: Re: Error with ScanQuery



Raymond,



Without logs I see just that deserialization failed for some reason.
Actually I'm more interested in exceptions that come from Ignite's Java
part, if any.



Thanks!

-Dmitry







--

Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Security question

2017-10-16 Thread franck102
Thanks Dmitry, that makes sense, we will make sure that client-side code is
trusted.

Franck



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Error with ScanQuery

2017-10-16 Thread dkarachentsev
Raymond,

Without logs I see just that deserialization failed for some reason. Actually
I'm more interested in exceptions that come from Ignite's Java part, if any.

Thanks!
-Dmitry



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Security question

2017-10-16 Thread dkarachentsev
Franck,

You're definitely right, but this is more like client roles than regular
security.

On "they have a number of connected clients with actual applications" I
meant that user's application is connected to the grid via clients with
their local permissions. But end user cannot access the grid directly, only
via user's API.

Anyway, I don't think that it would be changed in nearest time.

Thanks!
-Dmitry



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Error with ScanQuery

2017-10-16 Thread Pavel Tupitsyn
Hi Raymond,

1) Please attach full exception details, including all inner exceptions
(can be obtained with ex.ToString())

2) Storing MemoryStream in cache is not a very good idea: there is a lot of
unnecessary overhead
(the underlying buffer is usually bigger than the real data, there is some
state, etc).
Consider storing byte[] instead (call MemoryStream.ToArray()).

Thanks,
Pavel

On Mon, Oct 16, 2017 at 11:46 AM, Raymond Wilson wrote:

> I can do that tomorrow.
>
> In the meantime, can you speak to what that error means?
>
> Thanks,
> Raymond.
>
> Sent from my iPhone
>
> > On 16/10/2017, at 9:18 PM, dkarachentsev wrote:
> >
> > Hi Raymond,
> >
> > Could you please attach full log and config for failed node?
> >
> > Thanks!
> > -Dmitry
> >
> >
> >
> > --
> > Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: How to cancel IgniteRunnale on remote node?

2017-10-16 Thread james wu
Thanks a lot

Can the remote IgniteRunnable be gracefully shut down if I implement the
ComputeJobMasterLeaveAware interface in my Kafka consumer IgniteRunnable
and close the Kafka streamer in onMasterNodeLeft()?

And if the remote server wants to receive the master-node-left event, is
there any configuration that needs to be added to IgniteConfiguration?

Thanks

James
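
(A rough sketch of the shape this could take; whether the closure route gets the callback exactly like a compute job is the open question above, and the Kafka polling/closing details are stand-ins, not a tested recipe.)

import org.apache.ignite.compute.ComputeJobMasterLeaveAware;
import org.apache.ignite.compute.ComputeTaskSession;
import org.apache.ignite.lang.IgniteRunnable;

public class ConsumerJob implements IgniteRunnable, ComputeJobMasterLeaveAware {
    private volatile boolean stopped;

    @Override public void run() {
        while (!stopped) {
            // poll Kafka and stream records into Ignite here
        }
        // close the Kafka consumer/streamer gracefully once the loop exits
    }

    @Override public void onMasterNodeLeft(ComputeTaskSession ses) {
        stopped = true; // run() notices the flag on its next iteration
    }
}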



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: integrate with prestodb

2017-10-16 Thread Alexey Kukushkin
Cross-sending to the DEV community.

On Mon, Oct 16, 2017 at 12:14 PM, shawn.du wrote:

> Hi community,
>
> I am trying to implement a connector for Presto to connect to Ignite.
> I think it will be a very interesting thing to connect Ignite and Presto.
>
> In fact, currently we use Ignite and it works very well, but in order to
> save memory, we build compressed binary data.
> Thus we cannot query it using SQL. We use Ignite map-reduce to query the
> data.
>
> Using Presto, we may use SQL again. If it is fast enough, Ignite will be
> our in-memory storage and not responsible for computing, or only for simple
> queries.
> The only thing I am concerned about is whether Presto is fast enough
> compared with Ignite. For now all Ignite queries cost less than 5 seconds
> and most are hundreds of milliseconds.
> Also, Presto provides a connector for Redis. I don't know whether the
> community has interest in contributing to presto-ignite?
>
> Thanks
> Shawn
>
>


-- 
Best regards,
Alexey


integrate with prestodb

2017-10-16 Thread shawn.du






Hi community,

I am trying to implement a connector for Presto to connect to Ignite.
I think it will be a very interesting thing to connect Ignite and Presto.

In fact, currently we use Ignite and it works very well, but in order to save memory, we build compressed binary data.
Thus we cannot query it using SQL. We use Ignite map-reduce to query the data.

Using Presto, we may use SQL again. If it is fast enough, Ignite will be our in-memory storage and not responsible for computing, or only for simple queries.
The only thing I am concerned about is whether Presto is fast enough compared with Ignite. For now all Ignite queries cost less than 5 seconds and most are hundreds of milliseconds.
Also, Presto provides a connector for Redis. I don't know whether the community has interest in contributing to presto-ignite?






Thanks,
Shawn









Re: custom restful service using jetty server on top of ignite grid to perform crud operations and configure cache store

2017-10-16 Thread siva
Is this the correct way to create a Jetty REST server in an Ignite service?



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: custom restful service using jetty server on top of ignite grid to perform crud operations and configure cache store

2017-10-16 Thread siva
Hi,

Yes, I have also used *IgniteInstanceResource* in the code to get the
instance. As per my code snippet, when I call the *create* method using the
*proxy* it works fine, but when I call the same method using my custom REST
*ignite* instance I get null.




--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Error with ScanQuery

2017-10-16 Thread Raymond Wilson
I can do that tomorrow. 

In the meantime, can you speak to what that error means?

Thanks,
Raymond. 

Sent from my iPhone

> On 16/10/2017, at 9:18 PM, dkarachentsev  wrote:
> 
> Hi Raymond,
> 
> Could you please attach full log and config for failed node?
> 
> Thanks!
> -Dmitry
> 
> 
> 
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Error with ScanQuery

2017-10-16 Thread dkarachentsev
Hi Raymond,

Could you please attach full log and config for failed node?

Thanks!
-Dmitry



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: custom restful service using jetty server on top of ignite grid to perform crud operations and configure cache store

2017-10-16 Thread dkarachentsev
Hi,

The @IgniteInstanceResource annotation is the correct and best way to get
the Ignite instance in a service.

Thanks!
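
(A minimal sketch of the injection pattern being discussed; the Jetty wiring is only indicated in comments, and the class name is a placeholder.)

import org.apache.ignite.Ignite;
import org.apache.ignite.resources.IgniteInstanceResource;
import org.apache.ignite.services.Service;
import org.apache.ignite.services.ServiceContext;

public class RestService implements Service {
    // Injected by Ignite on the node where the service is deployed; transient
    // because the service instance itself is serialized for deployment.
    @IgniteInstanceResource
    private transient Ignite ignite;

    @Override public void init(ServiceContext ctx) {
        // start the embedded Jetty server here; 'ignite' is already injected
    }

    @Override public void execute(ServiceContext ctx) {
        // serve requests until the service is cancelled
    }

    @Override public void cancel(ServiceContext ctx) {
        // stop the Jetty server
    }
}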



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Benchmark results questions

2017-10-16 Thread Ray
I already tried tuning a few parameters; here's a quick update.

Increasing 128 threads to 256 threads in the yardstick configuration for the
8-node setup, the result is still the same.

Increasing the warm-up time from 60s to 120s in the yardstick configuration
for the 8-node setup, the result is still the same.

Increasing 1 driver to 2 drivers in the yardstick configuration for the
8-node setup, the result is still the same.



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/