100% Consistency

2018-02-14 Thread Prasad Bhalerao
Hi,
As per the docs, Ignite is a strongly (i.e. 100%) consistent system.

For example, I have a partitioned cache with backup count 1 and two nodes
in the cluster. I update an entry on one node, and that node crashes before
the value is propagated to the backup copy (is the update to the backup
sync or async?). In this scenario the other node becomes primary for the
backup data. If I now read the same key for which the update to the backup
failed, will I get stale data?
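A minimal sketch of the cache configuration in question (cache name and
key/value types are assumptions; FULL_SYNC makes the backup update synchronous,
while the default PRIMARY_SYNC completes the write before backups are updated):

import org.apache.ignite.cache.CacheMode;
import org.apache.ignite.cache.CacheWriteSynchronizationMode;
import org.apache.ignite.configuration.CacheConfiguration;

CacheConfiguration<Long, String> cacheCfg = new CacheConfiguration<>("myCache"); // name is an assumption
cacheCfg.setCacheMode(CacheMode.PARTITIONED);
cacheCfg.setBackups(1);
// FULL_SYNC: a write returns only after the backup copy has been updated as well.
// The default, PRIMARY_SYNC, returns once the primary is updated and syncs backups asynchronously.
cacheCfg.setWriteSynchronizationMode(CacheWriteSynchronizationMode.FULL_SYNC);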


Thanks,
Prasad


Re: Ignite performance

2018-02-14 Thread Prasad Bhalerao
Hi luqmanahmed,

Could you please tell me why you need 1,200 nodes to hold 8 GB in memory?
Is it a replicated cache? Do you need 1,200 nodes to serve a high number of
requests/sec? If so, does each node act as a primary for the whole data
set? And if so, do you use a load balancer to route the requests?

Thanks,
Prasad

On Feb 14, 2018 11:21 PM, "luqmanahmad" <@gmail.com> wrote:

Hi Igniters,

I will try to keep it as short as I can. We have 8 caches with a total
size of around 6 GB to 8 GB in Redis, and the cluster is made up of 1,200
nodes. All the operations are read-only. It is a very low-latency system
where each request must not take more than 2 ms, or in very rare cases 3-4
ms. Everything is hosted in our own data centers and works out of the box.
We have very busy traffic: roughly 4.4 billion requests in an hour.

Now the challenge we have is to move everything to AWS, as the company is
rolling all projects to the cloud. ElastiCache is reaching its limit with
only 15% of the traffic, and with no multi-region support (which means each
region needs its own cache) it is not ideal for us.

The company is really not keen to move away from Redis, but looking at the
ElastiCache limitations it has agreed to consider alternative solutions. I
want to go ahead with Ignite, but I am really not sure whether Ignite can
handle that much traffic. I have been an Ignite user for a very long time
and have firm faith in it :) but with such low latency and high traffic, is
it really possible? All I want is your views on whether Ignite can handle
that much traffic. How many nodes would be sufficient for a cluster serving
that much traffic? What is the maximum number of Ignite nodes ever deployed
in a cluster?

Best regards,
L





RE: Write ahead log and early eviction of new elements

2018-02-14 Thread Raymond Wilson
Thanks for the clarification, Mike.

-Original Message-
From: Mikhail [mailto:michael.cherka...@gmail.com]
Sent: Thursday, February 15, 2018 5:15 AM
To: user@ignite.apache.org
Subject: Re: Write ahead log and early eviction of new elements

Hi Raymond,

>I understand when I add an element to a cache that element is
>serialized, placed into the local memory for the cache on that server
>and then placed into the WAL pending checkpointing (merging into the
>persistence store).

First, the update is written into the WAL, and only then into local memory.


>What happens if the newly added element is evicted and  then re-read
>from the cache by the client before the next checkpoint  occurs?

What do you mean by "evicted"? Ignite evicts memory pages to disk when
there is not enough space to store a new record, or when it needs to load a
page from disk and has to evict some page from memory to make room for it.
But it will only evict a page that has already been saved to disk.
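A minimal sketch of the native persistence setup this write path applies to
(the data region and WAL mode values are assumptions, not taken from this thread):

import org.apache.ignite.Ignite;
import org.apache.ignite.Ignition;
import org.apache.ignite.configuration.DataStorageConfiguration;
import org.apache.ignite.configuration.IgniteConfiguration;
import org.apache.ignite.configuration.WALMode;

IgniteConfiguration cfg = new IgniteConfiguration();

DataStorageConfiguration storageCfg = new DataStorageConfiguration();
// With native persistence enabled, an update is logged to the WAL first, then
// applied to the in-memory page, and merged into partition files at the next checkpoint.
storageCfg.getDefaultDataRegionConfiguration().setPersistenceEnabled(true);
// WAL mode is an assumption for this sketch; LOG_ONLY flushes the record to the OS on commit.
storageCfg.setWalMode(WALMode.LOG_ONLY);

cfg.setDataStorageConfiguration(storageCfg);
Ignite ignite = Ignition.start(cfg);
// With persistence enabled the cluster must be activated before caches can be used.
ignite.cluster().active(true);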

Thanks,
Mike.








Re: Ignite performance

2018-02-14 Thread vkulichenko
Hi Luqman,

I don't see why not. It will probably require a pretty big cluster, but it
looks like your Redis cluster is not very small either :) Ignite is a
highly scalable system, so you can test with smaller clusters of different
sizes, check what maximum throughput they provide, and then extrapolate to
estimate how many nodes you need.

-Val





Re: QuerySqlFunction

2018-02-14 Thread vkulichenko
That's correct. Custom SQL functions must be explicitly deployed on all nodes
and can't be deployed dynamically.
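A minimal sketch of calling such a function once its class is deployed (the
cache name is an assumption; sqr() is assumed to be registered on that cache
via setSqlFunctionClasses, as in the thread below):

import java.util.List;
import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.Ignition;
import org.apache.ignite.cache.query.SqlFieldsQuery;

Ignite ignite = Ignition.ignite();                  // assumes a node is already started in this JVM
IgniteCache<?, ?> cache = ignite.cache("myCache");  // cache configured with setSqlFunctionClasses(MyFunctions.class)

// The function is resolved in the schema of the cache the query is executed against.
List<List<?>> rows = cache.query(new SqlFieldsQuery("SELECT sqr(4)")).getAll();
System.out.println(rows.get(0).get(0)); // 16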

-Val





Re: QuerySqlFunction

2018-02-14 Thread Alexey Kuznetsov
Hi,

AFAIK, peer class loading works only with the Ignite compute subsystem.
For SQL functions, you need to deploy them in the cluster (i.e. put the
class on every node's classpath) before use.
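A minimal sketch of that server-side deployment (assuming the MyFunctions class
from the question below is on every server node's classpath; the cache name is
an assumption):

import org.apache.ignite.Ignite;
import org.apache.ignite.Ignition;
import org.apache.ignite.configuration.CacheConfiguration;
import org.apache.ignite.configuration.IgniteConfiguration;

IgniteConfiguration cfg = new IgniteConfiguration();   // server node configuration

CacheConfiguration<Long, String> cacheCfg = new CacheConfiguration<>("myCache");
// MyFunctions must be present on the classpath of every server node;
// peer class loading does not cover SQL functions.
cacheCfg.setSqlFunctionClasses(MyFunctions.class);

cfg.setCacheConfiguration(cacheCfg);
Ignite ignite = Ignition.start(cfg);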



On Thu, Feb 15, 2018 at 4:29 AM, Williams, Michael <
michael.willi...@transamerica.com> wrote:

> What changes do I need to make to get zero deployment working with
> QuerySqlFunction definitions? I'm following the example and adding the class
> as follows, but even with peer class loading enabled, I get a gnarly error.
> Can clients marshal to servers? Any advice?
>
>
>
>
>
> import org.apache.ignite.cache.query.annotations.QuerySqlFunction;
>
>
>
> public class MyFunctions {
>
> @QuerySqlFunction
>
> public static int sqr(int x) {
>
> return x * x;
>
> }
>
> }
>
>
>
> …
>
> cfg.setPeerClassLoadingEnabled(true);
>
> cfg.setClientMode(true);
>
> cfg.setDeploymentMode(DeploymentMode.CONTINUOUS);
>
> try(Ignite ignite = Ignition.start(cfg))
>
> …
>
> myCache.setSqlFunctionClasses(MyFunctions.class);
>
> …
>
>
>
>
>
> Error:
>
> class org.apache.ignite.IgniteCheckedException: Failed to find class with
> given class loader for unmarshalling (make sure same versions of all
> classes are available on all nodes or enable peer-class-loading)
> [clsLdr=sun.misc.Launcher$AppClassLoader@764c12b6, cls=IgniteStartup.MyFunctions]
>
> at org.apache.ignite.marshaller.jdk.JdkMarshaller.unmarshal0(JdkMarshaller.java:126)
> at org.apache.ignite.marshaller.AbstractNodeNameAwareMarshaller.unmarshal(AbstractNodeNameAwareMarshaller.java:94)
> at org.apache.ignite.marshaller.jdk.JdkMarshaller.unmarshal0(JdkMarshaller.java:143)
> at org.apache.ignite.marshaller.AbstractNodeNameAwareMarshaller.unmarshal(AbstractNodeNameAwareMarshaller.java:82)
> at org.apache.ignite.internal.util.IgniteUtils.unmarshal(IgniteUtils.java:9795)
> at org.apache.ignite.spi.discovery.tcp.messages.TcpDiscoveryCustomEventMessage.message(TcpDiscoveryCustomEventMessage.java:81)
> at org.apache.ignite.spi.discovery.tcp.ServerImpl$RingMessageWorker.notifyDiscoveryListener(ServerImpl.java:5460)
> at org.apache.ignite.spi.discovery.tcp.ServerImpl$RingMessageWorker.processCustomMessage(ServerImpl.java:5282)
> at org.apache.ignite.spi.discovery.tcp.ServerImpl$RingMessageWorker.processMessage(ServerImpl.java:2656)
>
>
>
> Thanks,
>
> *Mike Williams*
>
>
>



-- 
Alexey Kuznetsov


QuerySqlFunction

2018-02-14 Thread Williams, Michael
What changes do I need to make to get zero deployment working with
QuerySqlFunction definitions? I'm following the example and adding the class
as follows, but even with peer class loading enabled, I get a gnarly error.
Can clients marshal to servers? Any advice?


import org.apache.ignite.cache.query.annotations.QuerySqlFunction;

public class MyFunctions {
@QuerySqlFunction
public static int sqr(int x) {
return x * x;
}
}

...
cfg.setPeerClassLoadingEnabled(true);
cfg.setClientMode(true);
cfg.setDeploymentMode(DeploymentMode.CONTINUOUS);
try(Ignite ignite = Ignition.start(cfg))
...
myCache.setSqlFunctionClasses(MyFunctions.class);
...


Error:
class org.apache.ignite.IgniteCheckedException: Failed to find class with given
class loader for unmarshalling (make sure same versions of all classes are
available on all nodes or enable peer-class-loading)
[clsLdr=sun.misc.Launcher$AppClassLoader@764c12b6, cls=IgniteStartup.MyFunctions]
at org.apache.ignite.marshaller.jdk.JdkMarshaller.unmarshal0(JdkMarshaller.java:126)
at org.apache.ignite.marshaller.AbstractNodeNameAwareMarshaller.unmarshal(AbstractNodeNameAwareMarshaller.java:94)
at org.apache.ignite.marshaller.jdk.JdkMarshaller.unmarshal0(JdkMarshaller.java:143)
at org.apache.ignite.marshaller.AbstractNodeNameAwareMarshaller.unmarshal(AbstractNodeNameAwareMarshaller.java:82)
at org.apache.ignite.internal.util.IgniteUtils.unmarshal(IgniteUtils.java:9795)
at org.apache.ignite.spi.discovery.tcp.messages.TcpDiscoveryCustomEventMessage.message(TcpDiscoveryCustomEventMessage.java:81)
at org.apache.ignite.spi.discovery.tcp.ServerImpl$RingMessageWorker.notifyDiscoveryListener(ServerImpl.java:5460)
at org.apache.ignite.spi.discovery.tcp.ServerImpl$RingMessageWorker.processCustomMessage(ServerImpl.java:5282)
at org.apache.ignite.spi.discovery.tcp.ServerImpl$RingMessageWorker.processMessage(ServerImpl.java:2656)

Thanks,
Mike Williams



RE: Adding new fields without server restart

2018-02-14 Thread Tim Newman
Hi Val,

Thanks for the response.

We will destroy the cache this time around. We will look to upgrade our 
environments to 2.3 (currently on 2.1) so we can dynamically update the cache 
configuration next time.

Thanks again!

-Original Message-
From: vkulichenko [mailto:valentin.kuliche...@gmail.com] 
Sent: Monday, February 12, 2018 03:33 PM
To: user@ignite.apache.org
Subject: Re: Adding new fields without server restart

Hi Tim,

Cache configuration is defined when the cache is started, so a @QuerySqlField
annotation on a new field does not have any effect unless you restart the
cluster, or at least destroy the cache and recreate it with the new
configuration.
Fields are added at the object level transparently, but to modify the SQL
schema at runtime you need to use ALTER TABLE and CREATE INDEX:
https://apacheignite-sql.readme.io/docs/ddl
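A minimal sketch of running that DDL from Java (the cache, table, column and
index names are assumptions; ALTER TABLE ADD COLUMN requires Ignite 2.3+):

import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.Ignition;
import org.apache.ignite.cache.query.SqlFieldsQuery;

Ignite ignite = Ignition.ignite();                      // assumes a node is already started in this JVM
IgniteCache<?, ?> cache = ignite.cache("PersonCache");  // any cache handle can run DDL; name is an assumption

// Add the new column to the SQL schema at runtime.
cache.query(new SqlFieldsQuery(
    "ALTER TABLE Person ADD COLUMN middleName VARCHAR")).getAll();

// Index the new column if it is used in query conditions.
cache.query(new SqlFieldsQuery(
    "CREATE INDEX IF NOT EXISTS person_middlename_idx ON Person (middleName)")).getAll();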

-Val





Ignite performance

2018-02-14 Thread luqmanahmad
Hi Igniters,

I will try to keep it as short as I can. We have 8 caches with a total
size of around 6 GB to 8 GB in Redis, and the cluster is made up of 1,200
nodes. All the operations are read-only. It is a very low-latency system
where each request must not take more than 2 ms, or in very rare cases 3-4
ms. Everything is hosted in our own data centers and works out of the box.
We have very busy traffic: roughly 4.4 billion requests in an hour.

Now the challenge we have is to move everything to AWS, as the company is
rolling all projects to the cloud. ElastiCache is reaching its limit with
only 15% of the traffic, and with no multi-region support (which means each
region needs its own cache) it is not ideal for us.

The company is really not keen to move away from Redis, but looking at the
ElastiCache limitations it has agreed to consider alternative solutions. I
want to go ahead with Ignite, but I am really not sure whether Ignite can
handle that much traffic. I have been an Ignite user for a very long time
and have firm faith in it :) but with such low latency and high traffic, is
it really possible? All I want is your views on whether Ignite can handle
that much traffic. How many nodes would be sufficient for a cluster serving
that much traffic? What is the maximum number of Ignite nodes ever deployed
in a cluster?

Best regards,
L





Webinar: "Getting Started with Apache Ignite as a Distributed Database" (Today at 11 a.m. Pacific)

2018-02-14 Thread Tom Diederich
Igniters, Valentin Kulichenko is hosting a free webinar today at 11 a.m.
Pacific time that will give you the tools and a blueprint to build your very
own distributed database using Apache Ignite. The webinar will be recorded;
you can register to attend the event live or catch the recording from the
same link.





Re: Write ahead log and early eviction of new elements

2018-02-14 Thread Mikhail
Hi Raymond,

>I understand when I add an element to a cache that element is serialized, 
>placed into the local memory for the cache on that server and then placed 
>into the WAL pending checkpointing (merging into the persistence store).

First, the update is written into the WAL, and only then into local memory.

 
>What happens if the newly added element is evicted and
> then re-read from the cache by the client before the next checkpoint
> occurs?

What do you mean by "evicted"? Ignite evicts memory pages to disk when
there is not enough space to store a new record, or when it needs to load a
page from disk and has to evict some page from memory to make room for it.
But it will only evict a page that has already been saved to disk.

Thanks,
Mike.








Spark as a service

2018-02-14 Thread jay kapadnis
Hi all,

I am trying to build a prototype of a workflow application, where each task
in the workflow is a Spark application. For this I am using the Data and
Service grids. (I can't use the Compute grid due to limitations from the
customer.)

What I am trying to do is encapsulate Spark execution inside an Ignite
service and deploy it on an Ignite node, so that different services (which
execute as Spark applications) can share RDDs among themselves. It's working
fine with the Spark master set to local.
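A minimal sketch of that encapsulation (class, service and session names are
illustrative assumptions; the Spark master is hard-coded to local here):

import org.apache.ignite.Ignite;
import org.apache.ignite.resources.IgniteInstanceResource;
import org.apache.ignite.services.Service;
import org.apache.ignite.services.ServiceContext;
import org.apache.spark.sql.SparkSession;

public class SparkTaskService implements Service {
    // Injected Ignite instance, available for cache access from the service.
    @IgniteInstanceResource
    private Ignite ignite;

    private transient SparkSession spark;

    @Override public void init(ServiceContext ctx) {
        // Assumption: local master; on a cluster this would be "yarn".
        spark = SparkSession.builder()
            .appName("workflow-task")
            .master("local[*]")
            .getOrCreate();
    }

    @Override public void execute(ServiceContext ctx) {
        // Run the Spark job for this workflow task; details omitted.
    }

    @Override public void cancel(ServiceContext ctx) {
        if (spark != null)
            spark.stop();
    }
}

// Deployment from any node, e.g.:
// ignite.services().deployClusterSingleton("sparkTaskService", new SparkTaskService());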

I just wanted to know whether, with this approach, there could be any
challenges in a production environment when I use YARN for Spark.

Also, how do I configure Ignite on HDP without IGFS? Is it possible?


Thanks & Regards,
Jay Kapadnis


Re: Logging using Log4Net

2018-02-14 Thread ozgurnevres
Great. Many thanks!





Re: slow query performance against berkley db

2018-02-14 Thread Rajesh Kishore
Thanks Stan for looking into it.
Unfortunately, it still takes 23 seconds on a system with 240 GB of RAM.
The corresponding EXPLAIN PLAN is:

[[SELECT
ST.ENTRYID,
ST.ATTRNAME,
ST.ATTRVALUE,
ST.ATTRSTYPE
FROM "objectclass".IGNITE_OBJECTCLASS T
/* "objectclass".OBJECTCLASSNDEXED_ATTRVAL_IDX: ATTRVALUE = ?1 */
/* WHERE T.ATTRVALUE = ?1
*/
INNER JOIN "Ignite_DSAttributeStore".IGNITE_DSATTRIBUTESTORE ST
/* "Ignite_DSAttributeStore".IGNITE_DSATTRIBUTESTORE_ENTRYID_IDX:
ENTRYID = T.ENTRYID
AND ENTRYID = T.ENTRYID
 */
ON 1=1
/* WHERE (ST.ATTRKIND IN('u', 'o'))
AND (ST.ENTRYID = T.ENTRYID)
*/
INNER JOIN "dn".IGNITE_DN DNT
/* "dn".EP_DN_IDX: ENTRYID = ST.ENTRYID
AND PARENTDN >= 'dc=ignite,'
AND PARENTDN < 'dc=ignite-'
AND ENTRYID = ST.ENTRYID
 */
ON 1=1
WHERE (((ST.ATTRKIND IN('u', 'o'))
AND (T.ATTRVALUE = ?1))
AND (DNT.PARENTDN LIKE ?2))
AND ((ST.ENTRYID = DNT.ENTRYID)
AND (ST.ENTRYID = T.ENTRYID))]]

Pls advise

Thanks,
Rajesh

On Tue, Feb 13, 2018 at 8:48 PM, Stanislav Lukyanov 
wrote:

> Hi Rajesh,
>
> While I don't have - and, probably, no one has - any benchmarks comparing
> Ignite vs Berkeley DB in a single-node configuration (as others have said,
> this is not really a common use case for Ignite), I can say that the
> performance problems you see are likely caused by your query structure.
>
> Rule of thumb for Ignite's SQL - avoid nested SELECTs. Also make sure you
> have proper indexes for the fields you use in conditions. Usually you also
> need to make sure that your data is efficiently collocated, but that only
> applies to cases where you have multiple nodes.
>
> I've attempted to optimize the SELECT you've posted - here it is:
> SELECT st.entryID, st.attrName, st.attrValue, st.attrsType
> FROM "objectclass".Ignite_ObjectClass as t
> JOIN "Ignite_DSAttributeStore".IGNITE_DSATTRIBUTESTORE AS st
> ON st.entryID = t.entryID
> JOIN "dn".Ignite_DN AS dnt
> ON st.entryID = dnt.entryID
> WHERE t.attrValue= ?
> AND (st.attrKind = 'u' OR st.attrKind = 'o')
> AND dnt.parentDN LIKE ?
>
> I can't really verify its correctness, but I guess it can be a decent place
> to start.
>
> Thanks,
> Stan
>
>
>
>


Re: query on BinaryObject index and table

2018-02-14 Thread Vladimir Ozerov
Hi Rajesh,

The CacheConfiguration.setIndexedTypes() method should only be used for
classes with SQL annotations. Since you operate on binary objects, you
should use CacheConfiguration.setQueryEntities() and define a QueryEntity
with all the necessary fields. There is also a QueryEntity.tableName
property which you can use to specify a concrete table name.
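A minimal sketch of such a QueryEntity-based configuration (the value type,
table, field and index names are assumptions loosely based on Rajesh's example):

import java.util.Collections;
import java.util.LinkedHashMap;
import org.apache.ignite.binary.BinaryObject;
import org.apache.ignite.cache.QueryEntity;
import org.apache.ignite.cache.QueryIndex;
import org.apache.ignite.configuration.CacheConfiguration;

CacheConfiguration<Long, BinaryObject> cacheCfg = new CacheConfiguration<>("ORG_CACHE");

// The value type name must match the binary type name used in ignite.binary().builder(...).
QueryEntity entity = new QueryEntity(Long.class.getName(), "Organization");
entity.setTableName("ORGANIZATION");   // concrete SQL table name

LinkedHashMap<String, String> fields = new LinkedHashMap<>();
fields.put("id", Long.class.getName());
fields.put("name", String.class.getName());
entity.setFields(fields);

// Secondary index on the "name" field.
entity.setIndexes(Collections.singletonList(new QueryIndex("name")));

cacheCfg.setQueryEntities(Collections.singletonList(entity));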

Vladimir.


On Mon, Jan 22, 2018 at 7:41 PM, Denis Magda  wrote:

> The schema can be changed with ALTER TABLE ADD COLUMN command:
> https://apacheignite-sql.readme.io/docs/alter-table
>
> To my knowledge this is supported for schemas that were initially
> configured by both DDL and QueryEntity/Annotations.
>
> —
> Denis
>
>
> On Jan 22, 2018, at 5:44 AM, Ilya Kasnacheev 
> wrote:
>
> Hello Rajesh!
>
> Table name can be specified in cache configuration's query entity. If not
> supplied, by default it is equal to value type name, e.g. BinaryObject :)
>
> Also, note that SQL tables have fixed schemas. This means you won't be
> able to add a random set of fields in BinaryObject and be able to do SQL
> queries on them all. You will have to declare all fields that you are going
> to use via SQL, either by annotations or query entity:
> see https://apacheignite-sql.readme.io/docs/schema-and-indexes
>
> To add an index, you should either specify it in annotations (via
> index = true) or in the query entity.
>
> Regards,
> Ilya.
>
> --
> Ilya Kasnacheev
>
> 2018-01-21 15:12 GMT+03:00 Rajesh Kishore :
>
>> Hi Denis,
>>
>> This is my code:
>>
>> CacheConfiguration cacheCfg = new CacheConfiguration<>(ORG_CACHE);
>>
>> cacheCfg.setAtomicityMode(CacheAtomicityMode.TRANSACTIONAL);
>> cacheCfg.setBackups(1);
>> cacheCfg.setWriteSynchronizationMode(CacheWriteSynchronizationMode.FULL_SYNC);
>> cacheCfg.setIndexedTypes(Long.class, BinaryObject.class);
>>
>> IgniteCache cache = ignite.getOrCreateCache(cacheCfg);
>>
>> if ( UPDATE ) {
>>   System.out.println("Populating the cache...");
>>
>>   try (IgniteDataStreamer streamer = ignite.dataStreamer(ORG_CACHE)) {
>>     streamer.allowOverwrite(true);
>>     IgniteBinary binary = ignite.binary();
>>     BinaryObjectBuilder objBuilder = binary.builder(ORG_CACHE);
>>     for ( long i = 0; i < 100; i++ ) {
>>       streamer.addData(i,
>>           objBuilder.setField("id", i)
>>               .setField("name", "organization-" + i).build());
>>
>>       if ( i > 0 && i % 100 == 0 )
>>         System.out.println("Done: " + i);
>>     }
>>   }
>> }
>>
>> IgniteCache binaryCache = ignite.cache(ORG_CACHE).withKeepBinary();
>> BinaryObject binaryPerson = binaryCache.get(54l);
>> System.out.println("name " + binaryPerson.field("name"));
>>
>>
>> Not sure if I am missing some context here. If I have to use an SQL query,
>> what table name should I specify? I did not create a table explicitly; do I
>> need to do that?
>> How would I create the index?
>>
>> Thanks,
>> Rajesh
>>
>> On Sun, Jan 21, 2018 at 12:25 PM, Denis Magda  wrote:
>>
>>>
>>>
>>> > On Jan 20, 2018, at 7:20 PM, Rajesh Kishore 
>>> wrote:
>>> >
>>> > Hi,
>>> >
>>> > I have a requirement that my schema is not fixed, so I have to use the
>>> BinaryObject approach instead of a fixed POJO
>>> >
>>> > I am relying on the OOTB file system persistence mechanism
>>> >
>>> > My questions are:
>>> > - How can I specify the indexes on BinaryObject?
>>>
>>> https://apacheignite-sql.readme.io/docs/create-index
>>> https://apacheignite-sql.readme.io/docs/schema-and-indexes
>>>
>>> > - If I have to use an SQL query for retrieving objects, what table name
>>> should I specify? The one which is used for the cache name does not work
>>> >
>>>
>>> Was the table and its queryable fields/indexes created with CREATE TABLE
>>> or Java annotations/QueryEntity?
>>>
>>> If the latter approach was taken then the table name corresponds to the
>>> Java type name as shown in this doc:
>>> https://apacheignite-sql.readme.io/docs/schema-and-indexes
>>>
>>> —
>>> Denis
>>>
>>> > -Rajesh
>>>
>>>
>>
>
>