Re: Ignite Cluster Communication with SSH Tunnels

2016-10-11 Thread Denis Magda
Hi Krzysztof,

Definitely, AddressResolver can be used when you need to connect nodes
that are behind NAT with the rest of the cluster.

Basically, in an AddressResolver configuration each node should simply list
a mapping of *its* private addresses to *its* external addresses. After
that, when a node joins a cluster, it sends its mapped addresses to the rest
of the cluster so that every other node can connect to it through
TcpCommunicationSpi.
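
For illustration, a minimal sketch of such a mapping, implementing the
AddressResolver interface directly; the private and public addresses below
are hypothetical placeholders:

import java.net.InetSocketAddress;
import java.util.Collection;
import java.util.Collections;

import org.apache.ignite.Ignition;
import org.apache.ignite.configuration.AddressResolver;
import org.apache.ignite.configuration.IgniteConfiguration;

public class NatNodeStartup {
    public static void main(String[] args) {
        IgniteConfiguration cfg = new IgniteConfiguration();

        // Map this node's private address to the external address that
        // other nodes should use to reach it (addresses are placeholders).
        cfg.setAddressResolver(new AddressResolver() {
            @Override public Collection<InetSocketAddress> getExternalAddresses(
                InetSocketAddress addr) {
                if ("10.0.0.5".equals(addr.getAddress().getHostAddress()))
                    return Collections.singleton(
                        new InetSocketAddress("203.0.113.7", addr.getPort()));

                // Addresses without a mapping are returned unchanged.
                return Collections.singleton(addr);
            }
        });

        Ignition.start(cfg);
    }
}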

In my understanding there is a misconfiguration at the level of
AddressResolver, TcpCommunicationSpi or TcpDiscoverySpi. Please double
check that:

   1. Every cluster node provides private->public addresses mapping of its
   own IPs in the AddressResolver.
   2. TcpDiscoverySpi of every node contains only public addresses.
   3. All the ports that are used by both TcpDiscoverySpi and
   TcpCommunicationSpi are open.

If nothing helps, please share the full configuration you use.

--
Denis


On Tue, Oct 11, 2016 at 4:27 PM, Krzysztof  wrote:

> Hello,
>
> We would appreciate confirmation that
> GridDhtAssignmentFetchFuture::requestFromNextNode() makes it impossible
> to have a client communicating via NAT, even with AddressResolver - or a
> pointer to where we are making a mistake in the code/logic.
> Judging by the code, GridDhtAssignmentFetchFuture has no notion of
> AddressResolver at all and will always try to contact the returned nodes
> from behind the NAT.
>
> Is there any way to make it work? We must decide whether to use Ignite
> in our project, and this seems to be a blocker.
>
> Best Regards
>
>
>
> --
> View this message in context: http://apache-ignite-users.
> 70518.x6.nabble.com/Ignite-Cluster-Communication-with-
> SSH-Tunnels-tp273p8225.html
> Sent from the Apache Ignite Users mailing list archive at Nabble.com.
>


Re: Ignite Cluster Communication with SSH Tunnels

2016-10-11 Thread Krzysztof
Hello,

We would appreciate confirmation that
GridDhtAssignmentFetchFuture::requestFromNextNode() makes it impossible to
have a client communicating via NAT, even with AddressResolver - or a
pointer to where we are making a mistake in the code/logic.
Judging by the code, GridDhtAssignmentFetchFuture has no notion of
AddressResolver at all and will always try to contact the returned nodes
from behind the NAT.

Is there any way to make it work? We must decide whether to use Ignite in
our project, and this seems to be a blocker.

Best Regards



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/Ignite-Cluster-Communication-with-SSH-Tunnels-tp273p8225.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Re: Ignite - Exception - Failed to unmarshal discovery data for component: 1 - on starting server from console and an application in Eclipse IDE

2016-10-11 Thread M Singh
I will definitely let you know if I can reproduce it, but just for my own
understanding: as you saw in the sample, I was just storing strings, so what
could have caused the serialization exception? I just want to avoid misusing
the API.


On Tuesday, October 11, 2016 8:14 AM, Sergej Sidorov 
 wrote:
 

 Hi,

For internal activities, Ignite uses only the work directory.
Let me know if the issue reproduces again.

-Sergej



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/Iginte-Exception-Failed-to-unmarshal-discovery-data-for-component-1-on-starting-server-from-console-E-tp8160p8215.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


   

Re: Near cache

2016-10-11 Thread javastuff....@gmail.com
Thank you for your reply. I would like to add more details to the 3rd point,
as it may not have come across clearly.

Let's assume there are 4 nodes running. Node A brings data to the
distributed cache; following the near-cache concept, I push the data to the
distributed cache while node A also keeps it on heap in a Map
implementation. Later, each node uses the data from the distributed cache,
and each node then brings that data into its own local heap-based map.
Now comes the case of cache invalidation: one of the nodes initiates a
REMOVE call, which removes the local heap copy on that acting node and the
entry in the distributed cache. This invokes an EVT_CACHE_OBJECT_REMOVED
event. However, this event is generated only on the one node that has the
data in its partition (this is what I have observed: a remote event for the
owner node and a local event for the acting node). In that case the owner
node has the responsibility to tell all other nodes to invalidate their
local map-based copies.
So I am combining an EVENT and a TOPIC to implement this.
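
A minimal sketch of that event-plus-topic wiring, assuming a hypothetical
local map named localMap and a topic name "invalidate"; it also assumes
EVT_CACHE_OBJECT_REMOVED has been enabled via
IgniteConfiguration.setIncludeEventTypes:

import java.util.Map;
import java.util.UUID;
import java.util.concurrent.ConcurrentHashMap;

import org.apache.ignite.Ignite;
import org.apache.ignite.events.CacheEvent;
import org.apache.ignite.events.EventType;
import org.apache.ignite.lang.IgniteBiPredicate;
import org.apache.ignite.lang.IgnitePredicate;

public class NearMapInvalidation {
    // Hypothetical on-heap copy of the distributed cache.
    private static final Map<Object, Object> localMap = new ConcurrentHashMap<>();

    private static final String TOPIC = "invalidate"; // hypothetical topic name

    public static void wire(Ignite ignite) {
        // The node that sees the remove event broadcasts the key on a topic.
        ignite.events().localListen((IgnitePredicate<CacheEvent>)evt -> {
            ignite.message().send(TOPIC, evt.key());
            return true; // keep listening
        }, EventType.EVT_CACHE_OBJECT_REMOVED);

        // Every node listens on the topic and drops its local copy.
        ignite.message().localListen(TOPIC,
            (IgniteBiPredicate<UUID, Object>)(nodeId, key) -> {
                localMap.remove(key);
                return true; // keep listening
            });
    }
}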

Is this the right approach, or is there a better one?

The cache remove event is generated only for the owner node (the node
holding the data in its partition) and the node initiating the remove API
call. Is this correct, or is it supposed to generate the event on all nodes?
Conceptually both behaviors have their own meaning and use, so I think both
are correct.




--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/Near-cache-tp8192p8223.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Re: Cache read through with expiry policy

2016-10-11 Thread o4oxide
Hello Val,

Sorry, I got it wrong on how expiry works with the cache store. I had
thought it would call a delete to remove the entry from the store when it
expires. I just realized that is not the case, so expiry only removes the
entry from the cache, not from the cache store.
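
For reference, a minimal sketch of that behavior, assuming a hypothetical
cache named "data": entries expire out of the cache itself, while the
underlying store is left untouched.

import java.util.concurrent.TimeUnit;

import javax.cache.expiry.CreatedExpiryPolicy;
import javax.cache.expiry.Duration;

import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.Ignition;
import org.apache.ignite.configuration.CacheConfiguration;

public class ExpiryExample {
    public static void main(String[] args) {
        try (Ignite ignite = Ignition.start()) {
            CacheConfiguration<Integer, String> ccfg =
                new CacheConfiguration<>("data"); // hypothetical cache name

            // Entries expire 30 seconds after creation. Expiry only evicts
            // the entry from the cache; no delete is issued to the store.
            ccfg.setExpiryPolicyFactory(CreatedExpiryPolicy.factoryOf(
                new Duration(TimeUnit.SECONDS, 30)));

            IgniteCache<Integer, String> cache = ignite.getOrCreateCache(ccfg);
            cache.put(1, "expires from the cache, stays in the store");
        }
    }
}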

Thanks for the good job.

Best Regards,



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/Cache-read-through-with-expiry-policy-tp2521p8222.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


RE: Evicted entry appears in Write-behind cache

2016-10-11 Thread Pradeep Badiger
Hi Denis,

I did a get() on the evicted entry from the cache, and it still returned the
value without calling load() on the store. As you said, the entry would be
cached in the write-behind store even for the evicted entry. Is that true?

Thanks,
Pradeep V.B.
From: Denis Magda [mailto:dma...@gridgain.com]
Sent: Monday, October 10, 2016 9:13 PM
To: user@ignite.apache.org
Subject: Re: Evicted entry appears in Write-behind cache

Hi,

How do you see that the evicted entries are still in the cache? If you check
this by calling cache get-like operations, then entries can be loaded back
from the write-behind store or from your underlying store.

—
Denis

On Oct 8, 2016, at 1:00 PM, Pradeep Badiger 
> wrote:

Hi,

I am trying to evaluate Apache Ignite and explore the eviction policy and
write-behind features. I am seeing that whenever a cache is configured with
an eviction policy and write-behind, the write-behind cache always has all
the changed entries, including the ones that were evicted, before the write
cache is flushed. But soon after it is flushed, the store loads them again
from the DB. Is this the expected behavior? Is there documentation on how
the write-behind cache works?

Thanks,
Pradeep V.B.
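
A hedged configuration sketch of the combination being discussed (eviction
plus write-behind); the cache name and the no-op store are placeholders for
the real setup:

import javax.cache.Cache;
import javax.cache.configuration.FactoryBuilder;

import org.apache.ignite.cache.eviction.lru.LruEvictionPolicy;
import org.apache.ignite.cache.store.CacheStoreAdapter;
import org.apache.ignite.configuration.CacheConfiguration;

public class WriteBehindConfig {
    /** Minimal no-op store standing in for a real DB-backed implementation. */
    public static class DummyStore extends CacheStoreAdapter<Integer, String> {
        @Override public String load(Integer key) {
            System.out.println("load() called for " + key);
            return null;
        }

        @Override public void write(Cache.Entry<? extends Integer, ? extends String> e) {
            System.out.println("write-behind flush of " + e.getKey());
        }

        @Override public void delete(Object key) {
            // no-op
        }
    }

    public static CacheConfiguration<Integer, String> cacheConfig() {
        CacheConfiguration<Integer, String> ccfg = new CacheConfiguration<>("test");

        // Evict on-heap entries beyond 1000, in LRU order.
        ccfg.setEvictionPolicy(new LruEvictionPolicy<>(1000));

        // Buffer updates and flush them to the store asynchronously.
        ccfg.setWriteThrough(true);
        ccfg.setWriteBehindEnabled(true);
        ccfg.setWriteBehindFlushFrequency(5000);
        ccfg.setCacheStoreFactory(FactoryBuilder.factoryOf(DummyStore.class));

        return ccfg;
    }
}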


Re: Loading Hbase data into Ignite

2016-10-11 Thread Anil
Thank you Vladislav and Andrey. I will look at the document and give a try.

Thanks again.

On 11 October 2016 at 20:47, Andrey Gura  wrote:

> Hi,
>
> HBase regions don't map to Ignite nodes due to architectural
> differences. Each HBase region contains rows in some range of keys,
> sorted lexicographically, while the distribution of keys in Ignite
> depends on the affinity function and the key hash code. Also, how would
> you remap a region to nodes if the region is split?
>
> Of course, you can get the node ID in the cluster for a given key, but
> because HBase keeps rows sorted by key lexicographically, you would have
> to perform a full scan of the HBase table. So the simplest way to
> parallelize data loading from HBase to Ignite is to concurrently scan the
> regions and stream all rows to one or more DataStreamers.
>
>
> On Tue, Oct 11, 2016 at 4:11 PM, Anil  wrote:
>
>> Hi,
>>
>> We have around 18M records in HBase which need to be loaded into an
>> Ignite cluster.
>>
>> I was looking at
>>
>> http://apacheignite.gridgain.org/v1.7/docs/data-loading
>>
>> https://github.com/apache/ignite/tree/master/examples
>>
>> Is there any approach where each Ignite node loads the data of one HBase
>> region?
>>
>> Do you have any recommendations?
>>
>> Thanks.
>>
>
>


Re: Loading Hbase data into Ignite

2016-10-11 Thread Anil
Hi Alexey,

We are planning to have a 4-node cluster; we will increase the number of
nodes based on performance.

The key is a unique string (part of the HBase record's primary key, which
is unique). Each record has around 25-30 fields, but they are small. A
record won't have much content.

All 18M records are related to one use case, so we are planning to keep
them in a single cache so that pagination, filtering and sorting are
supported at the cache level itself.

The initial load will just write to the cache; changes (or new objects)
will be added to or updated in the existing cache via a Kafka stream.

Thanks.


On 11 October 2016 at 19:03, Alexey Kuznetsov  wrote:

> Hi, Anil.
>
> It depends on your use case.
> How many nodes will be in your cluster?
> Will all 18M records be in one cache or in many caches?
> How big is a single record? What will be the key?
> Do you only need to load data, or do you also need to write changed/new
> objects in the cache back to HBase?
>
> On Tue, Oct 11, 2016 at 8:11 PM, Anil  wrote:
>
>> Hi,
>>
>> We have around 18M records in HBase which need to be loaded into an
>> Ignite cluster.
>>
>> I was looking at
>>
>> http://apacheignite.gridgain.org/v1.7/docs/data-loading
>>
>> https://github.com/apache/ignite/tree/master/examples
>>
>> Is there any approach where each Ignite node loads the data of one HBase
>> region?
>>
>> Do you have any recommendations?
>>
>> Thanks.
>>
>
>
>
> --
> Alexey Kuznetsov
>


Re: Loading Hbase data into Ignite

2016-10-11 Thread Andrey Gura
Hi,

HBase regions don't map to Ignite nodes due to architectural differences.
Each HBase region contains rows in some range of keys, sorted
lexicographically, while the distribution of keys in Ignite depends on the
affinity function and the key hash code. Also, how would you remap a region
to nodes if the region is split?

Of course, you can get the node ID in the cluster for a given key, but
because HBase keeps rows sorted by key lexicographically, you would have to
perform a full scan of the HBase table. So the simplest way to parallelize
data loading from HBase to Ignite is to concurrently scan the regions and
stream all rows to one or more DataStreamers.
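
A hedged sketch of that scan-and-stream approach, assuming a hypothetical
HBase table "records" with a single column read per row, and an Ignite
cache of the same name (HBase 1.x client API):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.ResultScanner;
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.client.Table;
import org.apache.hadoop.hbase.util.Bytes;

import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteDataStreamer;

public class HBaseToIgnite {
    /** Scans one region's key range and streams its rows into an Ignite cache. */
    static void loadRange(Ignite ignite, byte[] startKey, byte[] stopKey) throws Exception {
        Configuration conf = HBaseConfiguration.create();

        try (Connection conn = ConnectionFactory.createConnection(conf);
             Table table = conn.getTable(TableName.valueOf("records"));
             IgniteDataStreamer<String, String> streamer = ignite.dataStreamer("records")) {

            Scan scan = new Scan(startKey, stopKey); // one region's range

            try (ResultScanner scanner = table.getScanner(scan)) {
                for (Result r : scanner) {
                    String key = Bytes.toString(r.getRow());
                    String val = Bytes.toString(r.getValue(
                        Bytes.toBytes("cf"), Bytes.toBytes("payload"))); // hypothetical column

                    streamer.addData(key, val);
                }
            }
        }
    }
}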


On Tue, Oct 11, 2016 at 4:11 PM, Anil  wrote:

> Hi,
>
> We have around 18M records in HBase which need to be loaded into an
> Ignite cluster.
>
> I was looking at
>
> http://apacheignite.gridgain.org/v1.7/docs/data-loading
>
> https://github.com/apache/ignite/tree/master/examples
>
> Is there any approach where each Ignite node loads the data of one HBase
> region?
>
> Do you have any recommendations?
>
> Thanks.
>


Re: Ignite - Exception - Failed to unmarshal discovery data for component: 1 - on starting server from console and an application in Eclipse IDE

2016-10-11 Thread Sergej Sidorov
Hi,

For internal activities, Ignite uses only the work directory.
Let me know if the issue reproduces again.

-Sergej



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/Iginte-Exception-Failed-to-unmarshal-discovery-data-for-component-1-on-starting-server-from-console-E-tp8160p8215.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Re: Is it possible to enable both REPLICATED and PARTITIONED?

2016-10-11 Thread Tracy Liang (BLOOMBERG/ 731 LEX)
Thanks Alexey. Maybe in the future; right now it's hard. Also, from a
consistency point of view, I saw there is FULL_SYNC for strong consistency,
but that might hurt write availability. Does Ignite support quorum
consistency?

From: user@ignite.apache.org 
Subject: Re: Is it possible to enable both REPLICATED and PARTITIONED?

Hi Tracy,

In addition to Vlad's answer about SQL fields queries, I would like to
mention that maybe you can implement your functionality directly in Ignite,
without Spark?
Ignite has a lot of features: in-memory key-value storage, SQL and scan
queries, and a Compute engine for map-reduce (and much more).
See: https://ignite.apache.org/features.html

On Tue, Oct 11, 2016 at 3:25 PM, Vladislav Pyatkov  wrote:

Hi Tracy,

Ignite supports SqlFieldsQuery for this purpose [1].
SQL with the default (binary) marshaller will read only the needed fields
during evaluation.

[1]: https://apacheignite.readme.io/docs/sql-queries#fields-queries
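
For illustration, a minimal fields-query sketch, assuming a hypothetical
Person type with name and age fields visible to SQL:

import java.util.List;

import org.apache.ignite.IgniteCache;
import org.apache.ignite.cache.query.QueryCursor;
import org.apache.ignite.cache.query.SqlFieldsQuery;

public class FieldsQueryExample {
    static void printNames(IgniteCache<Long, ?> cache) {
        // Returns only the selected columns, not whole Person objects.
        SqlFieldsQuery qry = new SqlFieldsQuery(
            "select name, age from Person where age > ?").setArgs(30);

        try (QueryCursor<List<?>> cursor = cache.query(qry)) {
            for (List<?> row : cursor)
                System.out.println(row.get(0) + " is " + row.get(1));
        }
    }
}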

On Mon, Oct 10, 2016 at 8:54 PM, Tracy Liang (BLOOMBERG/ 731 LEX) 
 wrote:

Thanks for this clear explanation, Alexey. Basically, I want to use Ignite
as a shared in-memory layer among multiple Spark server instances. I also
have another question: does an Ignite cache support predicate pushdown or a
logical view of a cache? For example, I only want a certain column of the
value instead of having the entire universe returned. How do I do that?


From: user@ignite.apache.org 
Subject: Re: Is it possible to enable both REPLICATED and PARTITIONED?

Tracy,

First of all, cache mode and the number of backups can be set only once, on
cache start.
So, if you know the size of your cluster, you can set the number of backups
before cache start.
But I think it is not reasonable to set the number of backups equal to the
number of nodes.
If you need 100% high availability, just use a replicated cache. But I
would recommend thinking about how many nodes can be lost at once.
Maybe it is reasonable to set backups = 2? The more backups you choose, the
more memory will be consumed by backup partitions, and the grid will also
spend time rebalancing data.
What is your use case?
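
A minimal configuration sketch of the trade-off described above; the cache
name is a placeholder:

import org.apache.ignite.cache.CacheMode;
import org.apache.ignite.cache.CacheWriteSynchronizationMode;
import org.apache.ignite.configuration.CacheConfiguration;

public class BackupConfig {
    static CacheConfiguration<Long, String> partitionedWithBackups() {
        CacheConfiguration<Long, String> ccfg = new CacheConfiguration<>("data");

        ccfg.setCacheMode(CacheMode.PARTITIONED);

        // Two backup copies: tolerates losing up to two nodes at once.
        ccfg.setBackups(2);

        // Strong consistency at the cost of write latency, per the thread above.
        ccfg.setWriteSynchronizationMode(CacheWriteSynchronizationMode.FULL_SYNC);

        return ccfg;
    }
}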


On Mon, Oct 10, 2016 at 11:18 PM, Tracy Liang (BLOOMBERG/ 731 LEX) 
 wrote:

Thanks, and PARTITIONED mode can have any number of backups, right? I want
backups for high availability, and my dataset is also large. I guess I will
use PARTITIONED mode and configure the number of backups based on actual
needs in that case, right?

From: user@ignite.apache.org 
Subject: Re: Is it possible to enable both REPLICATED and PARTITIONED?

Hi, Tracyl.

Actually, a REPLICATED cache is a PARTITIONED cache with backups on all
nodes.

But why do you need this?

On Mon, Oct 10, 2016 at 10:46 AM, Tracyl  wrote:

As subject shows.


--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/Is-it-possible-to-enable-both-REPLICATED-and-PARTITIONED-tp8167.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


-- 
Alexey Kuznetsov


-- 
Alexey Kuznetsov


-- 
Vladislav Pyatkov


-- 
Alexey Kuznetsov




Re: Is it possible to enable both REPLICATED and PARTITIONED?

2016-10-11 Thread Tracy Liang (BLOOMBERG/ 731 LEX)
Thanks Vlad. So it's like I define a custom class for my use case
(basically a logical view of the table and columns) like this:
https://github.com/apache/ignite/blob/master/examples/src/main/java/org/apache/ignite/examples/model/Person.java.
Then I create an Ignite cache with the Person type and populate it with
constructed Person objects. After that I could directly use something like
"select * from person where age > 33" to get a subset of it, right? But the
issue is that my original data format is a dataframe. If I do it this way,
does that mean I have to manually parse the dataframe and construct Person
objects from the underlying data? Is this the only way to enable
subset/filter pushdown (or could I use the Spark JDBC API to do that
mapping for me: JdbcUtils.saveTable(personDF, url, table, props))?
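
A hedged sketch of that flow, assuming a hypothetical Person class with
SQL-annotated fields (the dataframe-to-Person mapping is left out):

import java.util.List;

import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.cache.query.SqlFieldsQuery;
import org.apache.ignite.cache.query.annotations.QuerySqlField;
import org.apache.ignite.configuration.CacheConfiguration;

public class PersonCacheExample {
    public static class Person {
        @QuerySqlField
        private final String name;

        @QuerySqlField(index = true)
        private final int age;

        public Person(String name, int age) {
            this.name = name;
            this.age = age;
        }
    }

    static void run(Ignite ignite) {
        CacheConfiguration<Long, Person> ccfg = new CacheConfiguration<>("person");
        ccfg.setIndexedTypes(Long.class, Person.class); // enables SQL over Person

        IgniteCache<Long, Person> cache = ignite.getOrCreateCache(ccfg);
        cache.put(1L, new Person("Alice", 40));

        // Only rows matching the predicate come back.
        List<List<?>> rows = cache.query(new SqlFieldsQuery(
            "select name from Person where age > ?").setArgs(33)).getAll();

        rows.forEach(r -> System.out.println(r.get(0)));
    }
}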

From: user@ignite.apache.org 
Subject: Re: Is it possible to enable both REPLICATED and PARTITIONED?

Hi Tracy,

Ignite supports SqlFieldsQuery for this purpose [1].
SQL with the default (binary) marshaller will read only the needed fields
during evaluation.

[1]: https://apacheignite.readme.io/docs/sql-queries#fields-queries

On Mon, Oct 10, 2016 at 8:54 PM, Tracy Liang (BLOOMBERG/ 731 LEX) 
 wrote:

Thanks for this clear explanation, Alexey. Basically, I want to use Ignite
as a shared in-memory layer among multiple Spark server instances. I also
have another question: does an Ignite cache support predicate pushdown or a
logical view of a cache? For example, I only want a certain column of the
value instead of having the entire universe returned. How do I do that?


From: user@ignite.apache.org 
Subject: Re: Is it possible to enable both REPLICATED and PARTITIONED?

Tracy,

First of all, cache mode and the number of backups can be set only once, on
cache start.
So, if you know the size of your cluster, you can set the number of backups
before cache start.
But I think it is not reasonable to set the number of backups equal to the
number of nodes.
If you need 100% high availability, just use a replicated cache. But I
would recommend thinking about how many nodes can be lost at once.
Maybe it is reasonable to set backups = 2? The more backups you choose, the
more memory will be consumed by backup partitions, and the grid will also
spend time rebalancing data.
What is your use case?


On Mon, Oct 10, 2016 at 11:18 PM, Tracy Liang (BLOOMBERG/ 731 LEX) 
 wrote:

Thanks, and PARTITIONED mode can have any number of backups, right? I want
backups for high availability, and my dataset is also large. I guess I will
use PARTITIONED mode and configure the number of backups based on actual
needs in that case, right?

From: user@ignite.apache.org 
Subject: Re: Is it possible to enable both REPLICATED and PARTITIONED?

Hi, Tracyl.

Actually, a REPLICATED cache is a PARTITIONED cache with backups on all
nodes.

But why do you need this?

On Mon, Oct 10, 2016 at 10:46 AM, Tracyl  wrote:

As subject shows.


--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/Is-it-possible-to-enable-both-REPLICATED-and-PARTITIONED-tp8167.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


-- 
Alexey Kuznetsov


-- 
Alexey Kuznetsov


-- 
Vladislav Pyatkov



Re: Loading Hbase data into Ignite

2016-10-11 Thread Vladislav Pyatkov
Hi,

The easiest way to do this is to use a DataStreamer [1] from all server
nodes, each with its own specific part of the data.
You can do that using Ignite compute [2] (matching by node ID, for example,
or by a node parameter, or anything else), with the part number as a
parameter to the SQL query for HBase.

[1]: http://apacheignite.gridgain.org/docs/data-streamers
[2]:
http://apacheignite.gridgain.org/docs/distributed-closures#broadcast-methods
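
A hedged sketch of that broadcast idea; loadPart(...) stands in for the
per-node HBase scan, and deriving the part number from the node's position
among the server nodes is just one possible scheme:

import java.util.List;
import java.util.UUID;
import java.util.stream.Collectors;

import org.apache.ignite.Ignite;
import org.apache.ignite.lang.IgniteRunnable;
import org.apache.ignite.resources.IgniteInstanceResource;

public class ParallelLoad {
    static void loadOnAllServers(Ignite ignite) {
        ignite.compute().broadcast(new IgniteRunnable() {
            @IgniteInstanceResource
            private transient Ignite localIgnite;

            @Override public void run() {
                // Derive this node's part number from its position among
                // the sorted server node IDs.
                List<UUID> ids = localIgnite.cluster().forServers().nodes().stream()
                    .map(n -> n.id()).sorted().collect(Collectors.toList());

                int part = ids.indexOf(localIgnite.cluster().localNode().id());
                int parts = ids.size();

                // loadPart(part, parts) would scan this node's share of the
                // HBase data and feed it to a DataStreamer (hypothetical).
                System.out.println("Loading part " + part + " of " + parts);
            }
        });
    }
}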

On Tue, Oct 11, 2016 at 4:11 PM, Anil  wrote:

> Hi,
>
> We have around 18M records in HBase which need to be loaded into an
> Ignite cluster.
>
> I was looking at
>
> http://apacheignite.gridgain.org/v1.7/docs/data-loading
>
> https://github.com/apache/ignite/tree/master/examples
>
> Is there any approach where each Ignite node loads the data of one HBase
> region?
>
> Do you have any recommendations?
>
> Thanks.
>



-- 
Vladislav Pyatkov


Re: Loading Hbase data into Ignite

2016-10-11 Thread Alexey Kuznetsov
Hi, Anil.

It depends on your use case.
How many nodes will be in your cluster?
All 18M records will be in one cache or many caches?
How big single record? What will be the key?
You need only load or you also need write changed / new objects in cache to
HBase?

On Tue, Oct 11, 2016 at 8:11 PM, Anil  wrote:

> Hi,
>
> We have around 18M records in HBase which need to be loaded into an
> Ignite cluster.
>
> I was looking at
>
> http://apacheignite.gridgain.org/v1.7/docs/data-loading
>
> https://github.com/apache/ignite/tree/master/examples
>
> Is there any approach where each Ignite node loads the data of one HBase
> region?
>
> Do you have any recommendations?
>
> Thanks.
>



-- 
Alexey Kuznetsov


Re: secured mesos deployment

2016-10-11 Thread Nikolay Tikhonov
Hi Vincent,

The Ignite Mesos Framework connects to Mesos as the current user. I've
created a ticket to add functionality that will allow setting the user name
and principal (https://issues.apache.org/jira/browse/IGNITE-4052, feel free
to contribute ;) ). But if your user is the same as the user for the Mesos
cluster, then you can pass the authentication options via the
MESOS_AUTHENTICATE, DEFAULT_PRINCIPAL and DEFAULT_SECRET environment
options.

On Tue, Oct 11, 2016 at 9:50 AM, vincent gromakowski <
vincent.gromakow...@gmail.com> wrote:

> Hi
> My Mesos is configured to ask for a login and password, so it rejects the
> Ignite framework...
>
> On 11 Oct 2016 3:10 AM, "Denis Magda"  wrote:
>
>> I'm not an expert in Apache Mesos, but I think this can be done using
>> standard Mesos configuration files. However, it depends on what part of
>> the deployment you wish to secure.
>>
>> Please provide more details and I hope Nikolay Tikhonov (copied) will be
>> able to give a more precise answer.
>>
>> —
>> Denis
>>
>> On Oct 10, 2016, at 12:29 AM, vincent gromakowski <
>> vincent.gromakow...@gmail.com> wrote:
>>
>> Can somebody confirm it's not supported ?
>>
>> 2016-10-06 21:54 GMT+02:00 vincent gromakowski <
>> vincent.gromakow...@gmail.com>:
>>
>>> Hi,
>>> Is there any way to configure mesos credentials in ignite-mesos ?
>>>
>>> Vincent
>>>
>>
>>
>>


Loading Hbase data into Ignite

2016-10-11 Thread Anil
Hi,

We have around 18M records in HBase which need to be loaded into an Ignite
cluster.

I was looking at

http://apacheignite.gridgain.org/v1.7/docs/data-loading

https://github.com/apache/ignite/tree/master/examples

Is there any approach where each Ignite node loads the data of one HBase
region?

Do you have any recommendations?

Thanks.


Re: Near cache

2016-10-11 Thread Vladislav Pyatkov
Hi,

I think you will get a delay when updating a local map entry, and the
general issue is that you will generate garbage, which increases garbage
collector load and heap consumption.

The delay will not be long, but it depends on the communication layer.

I have not understood this one. Why can you not use a single listener for
several event types [1]?

I think in your case you need to use a remote listener (which will be
invoked not only on the local node). You should use send, because
sendOrdered adds additional performance overhead.

I cannot say anything about this; you need to test the implementation.

In general, you must not block inside listeners, because blocking will
badly affect other tasks in the same thread pool. Also, you can always tune
the thread count of a specific pool [2].

[1]: https://apacheignite.readme.io/docs/events
[2]:
https://apacheignite.readme.io/docs/performance-tips#tune-cache-data-rebalancing

On Tue, Oct 11, 2016 at 2:26 AM, javastuff@gmail.com <
javastuff@gmail.com> wrote:

> Hi,
>
> Because Ignite always serializes data, we are not able to take advantage
> of the near cache where cached objects are heavy in terms of
> serialization/deserialization and there is no requirement for queries or
> eviction. We are talking about a 5-minute vs. 2-hour difference if we do
> not cache the meta information of a small number of frequently accessed
> objects.
>
> I am working on a standard Java map-based implementation to simulate a
> near cache, where the local heap holds non-serialized objects backed by a
> partitioned Ignite distributed cache. Cache invalidation is a challenge:
> when one node invalidates the partitioned distributed cache, we then have
> to invalidate the local copy of the same cache on every other node.
>
> I need an expert opinion on my POC:
>
> For a particular cache, add an EVT_CACHE_OBJECT_REMOVED cache event
> listener. The remove cache event will be generated on the local node and
> on the remote node that owns the cache key.
> The remove event handler publishes a message to a topic.
> On receiving the message on the topic, each node removes the object from
> its local map-based copy.
>
> Questions:
> 1. Do you see any issue with this kind of implementation when cache
> invalidation is rare but reads are frequent?
> 2. Approximately how much delay should one expect between event
> generation and the message reaching all nodes?
> 3. I have observed that the remove event is generated only on the acting
> local node or the owner node, so I need to combine an event with a topic.
> Is there any other way this can be achieved?
> 4. In terms of performance:
>  - Should we use a local listener or a remote listener? In the POC I have
> added a remote listener.
>  - Should we use sendOrdered or send?
> 5. Is there any specific sizing needed for the REMOVE event and the
> TOPIC? We have around 20 such caches and I am planning to have a single
> topic.
>
> Regarding topology: at minimum 2 nodes with 20 GB off-heap.
>
> Thanks,
> -Sam
>
>
>
> --
> View this message in context: http://apache-ignite-users.
> 70518.x6.nabble.com/Near-cache-tp8192.html
> Sent from the Apache Ignite Users mailing list archive at Nabble.com.
>



-- 
Vladislav Pyatkov


Re: IN Query

2016-10-11 Thread Vladislav Pyatkov
Hi,

Try it like this:

SqlFieldsQuery query = new SqlFieldsQuery(sql).setArgs(inParameter.toArray(),
anotherINParameter.toArray());

If you pass each array as a separate argument like this, the method will
work with more than one IN list.
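
For completeness, a hedged end-to-end sketch of the two-array variant
above (cache and field names are assumptions based on the thread):

import java.util.List;

import org.apache.ignite.IgniteCache;
import org.apache.ignite.cache.query.SqlFieldsQuery;

public class InQueryExample {
    static List<List<?>> find(IgniteCache<String, ?> cache) {
        String sql = "select p.id, p.name from Person p "
            + "join table(name varchar(15) = ?) i on p.name = i.name "
            + "join table(id varchar(15) = ?) k on p.id = k.id";

        Object[] names = {"alice", "bob"};
        Object[] ids = {"1", "2"};

        // Each array is passed as ONE positional argument; H2's table()
        // function expands it into an in-memory table to join against.
        return cache.query(new SqlFieldsQuery(sql).setArgs(names, ids)).getAll();
    }
}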

On Tue, Oct 11, 2016 at 3:08 PM, Anil  wrote:

> Does Ignite support multiple IN clauses? I tried the following with no
> luck:
>
> String sql = "select p.id, p.name from Person p join table(name
> VARCHAR(15) = ?) i on p.name = i.name join table(id varchar(15) = ?) k on
> p.id = k.id";
>
> SqlFieldsQuery query = new SqlFieldsQuery(sql).setArgs(new Object[] {
> inParameter.toArray()}, new Object[] { anotherINParameter.toArray()});
>
> Thanks
>



-- 
Vladislav Pyatkov


Re: IN Query

2016-10-11 Thread Anil
Does Ignite support multiple IN clauses? I tried the following with no
luck:

String sql = "select p.id, p.name from Person p join table(name VARCHAR(15)
= ?) i on p.name = i.name join table(id varchar(15) = ?) k on p.id = k.id";

SqlFieldsQuery query = new SqlFieldsQuery(sql).setArgs(new Object[] {
inParameter.toArray()}, new Object[] { anotherINParameter.toArray()});

Thanks


Re: DataStreamer pool

2016-10-11 Thread thammoud
Meant to say Partitioned cache. Sorry.



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/DataStreamer-pool-tp5857p8203.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Re: Ignite .NET on Linux

2016-10-11 Thread Pavel Tupitsyn
Hi,

I think there are only two Windows-specific things in
Apache.Ignite.Core.dll:
* An embedded Windows DLL with JNI interop code, which is loaded in the
UnmanagedUtils class initializer via the LoadLibrary system call
* IgniteUtils.LoadJvmDll, which uses the registry to locate Java
installation paths

So if you are willing to contribute, it should not be too hard to add a
couple of "#if __MonoCS__" directives there.
A JIRA ticket already exists:
https://issues.apache.org/jira/browse/IGNITE-1628

I am not sure if there is an easier way to do this.

Pavel.

On Mon, Oct 10, 2016 at 7:12 PM, exones  wrote:

> Hello,
>
> We are "gridifying" our application with Ignite and would like to know if
> it's possible to run a .NET application *on Linux using Mono* which
> itself runs an Ignite node inside (via Ignition.Start). So basically we
> have app.exe, which runs Ignition.Start(). Then we use .NET Ignite
> compute functions to do the job.
>
> I looked at the code, and it looks like Ignite .NET is quite tightly
> bound to Windows, with no consideration for running on Linux.
>
> We considered options like running a Java app which calls the .NET
> implementation (either via IKVM or JNI or some kind of IPC). Anything
> more elegant?
>
> Can you advise me on this?
>
> Thank you in advance.
>
>
>
>
>
> --
> View this message in context: http://apache-ignite-users.
> 70518.x6.nabble.com/Ignite-NET-on-Linux-tp8185.html
> Sent from the Apache Ignite Users mailing list archive at Nabble.com.
>


Re: Is it possible to enable both REPLICATED and PARTITIONED?

2016-10-11 Thread Alexey Kuznetsov
Hi Tracy,

In addition to Vlad's answer about SQL fields queries, I would like to
mention that maybe you can implement your functionality directly in Ignite,
without Spark?
Ignite has a lot of features: in-memory key-value storage, SQL and scan
queries, and a Compute engine for map-reduce (and much more).
See: https://ignite.apache.org/features.html

On Tue, Oct 11, 2016 at 3:25 PM, Vladislav Pyatkov 
wrote:

> Hi Tracy,
>
> Ignite supports SqlFieldsQuery for this purpose [1].
> SQL with the default (binary) marshaller will read only the needed fields
> during evaluation.
>
> [1]: https://apacheignite.readme.io/docs/sql-queries#fields-queries
>
> On Mon, Oct 10, 2016 at 8:54 PM, Tracy Liang (BLOOMBERG/ 731 LEX) <
> tlian...@bloomberg.net> wrote:
>
>> Thanks for this clear explanation, Alexey. Basically, I want to use
>> Ignite as a shared in-memory layer among multiple Spark server
>> instances. I also have another question: does an Ignite cache support
>> predicate pushdown or a logical view of a cache? For example, I only
>> want a certain column of the value instead of having the entire universe
>> returned. How do I do that?
>>
>>
>> From: user@ignite.apache.org
>> Subject: Re: Is it possible to enable both REPLICATED and PARTITIONED?
>>
>> Tracy,
>>
>> First of all, cache mode and the number of backups can be set only
>> once, on cache start.
>> So, if you know the size of your cluster, you can set the number of
>> backups before cache start.
>> But I think it is not reasonable to set the number of backups equal to
>> the number of nodes.
>> If you need 100% high availability, just use a replicated cache. But I
>> would recommend thinking about how many nodes can be lost at once.
>> Maybe it is reasonable to set backups = 2? The more backups you choose,
>> the more memory will be consumed by backup partitions, and the grid will
>> also spend time rebalancing data.
>> What is your use case?
>>
>>
>> On Mon, Oct 10, 2016 at 11:18 PM, Tracy Liang (BLOOMBERG/ 731 LEX) <
>> tlian...@bloomberg.net> wrote:
>>
>>> Thanks, and PARTITIONED mode can have any number of backups, right? I
>>> want backups for high availability, and my dataset is also large. I
>>> guess I will use PARTITIONED mode and configure the number of backups
>>> based on actual needs in that case, right?
>>>
>>> From: user@ignite.apache.org
>>> Subject: Re: Is it possible to enable both REPLICATED and PARTITIONED?
>>>
>>> Hi, Tracyl.
>>>
>>> Actually, a REPLICATED cache is a PARTITIONED cache with backups on
>>> all nodes.
>>>
>>> But why do you need this?
>>>
>>> On Mon, Oct 10, 2016 at 10:46 AM, Tracyl  wrote:
>>>
 As subject shows.



 --
 View this message in context: http://apache-ignite-users.705
 18.x6.nabble.com/Is-it-possible-to-enable-both-REPLICATED-
 and-PARTITIONED-tp8167.html
 Sent from the Apache Ignite Users mailing list archive at Nabble.com.

>>>
>>>
>>>
>>> --
>>> Alexey Kuznetsov
>>>
>>>
>>>
>>
>>
>> --
>> Alexey Kuznetsov
>>
>>
>>
>
>
> --
> Vladislav Pyatkov
>



-- 
Alexey Kuznetsov


Re: Cache read through with expiry policy

2016-10-11 Thread o4oxide
Hi vkulichenko,

This is still open, and I noticed it affects write-through and write-behind
as well.

Could you advise a workaround for now, as I really need this feature?

Thanks



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/Cache-read-through-with-expiry-policy-tp2521p8199.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Re: secured mesos deployment

2016-10-11 Thread vincent gromakowski
Hi
My Mesos is configured to ask for a login and password, so it rejects the
Ignite framework...

On 11 Oct 2016 3:10 AM, "Denis Magda"  wrote:

> I'm not an expert in Apache Mesos, but I think this can be done using
> standard Mesos configuration files. However, it depends on what part of
> the deployment you wish to secure.
>
> Please provide more details and I hope Nikolay Tikhonov (copied) will be
> able to give a more precise answer.
>
> —
> Denis
>
> On Oct 10, 2016, at 12:29 AM, vincent gromakowski <
> vincent.gromakow...@gmail.com> wrote:
>
> Can somebody confirm it's not supported ?
>
> 2016-10-06 21:54 GMT+02:00 vincent gromakowski <
> vincent.gromakow...@gmail.com>:
>
>> Hi,
>> Is there any way to configure mesos credentials in ignite-mesos ?
>>
>> Vincent
>>
>
>
>