Re: Clarity on Ignite filesystem

2019-04-01 Thread Vladimir Ozerov
Hi Maros,

Over the years of IGFS's existence we came to the conclusion that it doesn't
fit typical Ignite use cases well. Development of this component is currently
frozen, and most likely it will be removed in the near future.
Hence, there are no plans to improve it or extend it with new APIs.

On Mon, Apr 1, 2019 at 11:14 AM maros.urbanec 
wrote:

> Hi all, I've been recently looking into Ignite Filesystem (IGFS for short)
> and couldn't find much assistance by Google searches.
>
> Reading the documentation, it wasn't immediately obvious whether IGFS as a
> cache for HDFS is an option or a must. In other words, is it possible to
> use IGFS without Hadoop or any other secondary file store, storing files
> in the Ignite cache only?
>
> Secondly, I've tried to start Ignite with the
> examples\config\filesystem\example-igfs.xml configuration, only for the Java
> client to obstinately claim "IGFS is not configured: igfs". The
> configuration is being read by Ignite, since enabling the shared memory
> endpoint ends up in an error (I'm running on Windows). Is there any step
> I'm missing?
>
> As a last question: there is no C++ API for IGFS at this point. However, a
> recurring topic on these mailing lists is an integration of IGFS as a
> userspace filesystem. Is this item on the roadmap?
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: SQL Query plan confusion

2018-12-07 Thread Vladimir Ozerov
Hi Jose,

I am a bit lost in the two provided EXPLAIN outputs. Both the "good" and "bad"
ones contain the "k = 'trans.cust.first_name'" condition. Could you please
confirm that they are correct? Specifically, I cannot understand why this
condition is present in the "good" explain.

On Tue, Nov 27, 2018 at 12:06 PM joseheitor  wrote:

> 1. - I have a nested join query on a table of 8,000,000 records which
> performs similarly to or better than PostgreSQL (~10ms) on my small test
> setup (2x nodes, 8GB, 2 CPU):
>
> SELECT
>     mainTable.pk, mainTable.id, mainTable.k, mainTable.v
> FROM
>     public.test_data AS mainTable
>     INNER JOIN (
>         SELECT
>             lastName.id
>         FROM
>             (SELECT id FROM public.test_data
>              WHERE k = 'trans.cust.last_name' AND v = 'Smythe-Veall') AS lastName
>             INNER JOIN
>             (SELECT id FROM public.test_data
>              WHERE k = 'trans.date' AND v = '2017-12-21') AS transDate
>                 ON transDate.id = lastName.id
>             INNER JOIN
>             (SELECT id FROM public.test_data
>              WHERE k = 'trans.amount' AND CAST(v AS integer) > 9) AS transAmount
>                 ON transAmount.id = lastName.id
>     ) AS subTable ON mainTable.id = subTable.id
> ORDER BY 1, 2
>
>
> 2. - By simply adding a WHERE clause at the end, the performance becomes
> catastrophic on Ignite (~10s for subsequent queries - the first query takes
> many minutes). On PostgreSQL the performance does not change...
>
> SELECT
>     mainTable.pk, mainTable.id, mainTable.k, mainTable.v
> FROM
>     public.test_data AS mainTable
>     INNER JOIN (
>         SELECT
>             lastName.id
>         FROM
>             (SELECT id FROM public.test_data
>              WHERE k = 'trans.cust.last_name' AND v = 'Smythe-Veall') AS lastName
>             INNER JOIN
>             (SELECT id FROM public.test_data
>              WHERE k = 'trans.date' AND v = '2017-12-21') AS transDate
>                 ON transDate.id = lastName.id
>             INNER JOIN
>             (SELECT id FROM public.test_data
>              WHERE k = 'trans.amount' AND CAST(v AS integer) > 9) AS transAmount
>                 ON transAmount.id = lastName.id
>     ) AS subTable ON mainTable.id = subTable.id
> *WHERE
>     mainTable.k = 'trans.cust.first_name'*
> ORDER BY 1, 2
>
> What can I do to optimise this query for Ignite???
>
> (Table structure and query plans attached for reference)
>
> Thanks,
> Jose
> table.sql
> good-join-query.txt
> <http://apache-ignite-users.70518.x6.nabble.com/file/t1652/good-join-query.txt>
> bad-join-query.txt
> <http://apache-ignite-users.70518.x6.nabble.com/file/t1652/bad-join-query.txt>
>
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: Difference between replicated and local cache mode regarding time execution of query

2018-09-07 Thread Vladimir Ozerov
Hi Moti,

Could you please attach execution plans for both LOCAL and REPLICATED cases?

On Wed, Sep 5, 2018 at 6:13 PM ilya.kasnacheev 
wrote:

> Hello!
>
> Unfortunately I'm not aware of why you see such a big difference in this case.
> It should be comparable. Maybe the SQL folks will chime in?
>
> Regards,
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: Ignite SQL Queries not getting all data back in ignite 2.4 and 2.6

2018-08-16 Thread Vladimir Ozerov
Dmitriy,

I wanted to suggest this, but it looks like we do not have direct links for
the regular distribution. Users can download it from TeamCity, but a
username/password is needed for that. Maybe I am missing something.

On Thu, Aug 16, 2018 at 11:44 AM Dmitriy Setrakyan 
wrote:

> I also want to point out that Ignite has nightly builds, so you can try
> them instead of doing your own build as well.
>
> https://ignite.apache.org/download.cgi#nightly-builds
>
> D.
>
> On Thu, Aug 16, 2018 at 1:38 AM, Vladimir Ozerov 
> wrote:
>
>> Hi,
>>
>> There have been a lot of changes in the product since 2.3 which may affect it.
>> The most important change was baseline topology, as already mentioned.
>> I am aware of a case where an incorrect result might be returned [1], which
>> is already fixed in *master*. Not sure if this is the same issue, but
>> you may try to build Ignite from a recent master and check if the problem is
>> still there.
>>
>> Is it possible to create an isolated reproducer for this issue?
>>
>> [1] https://issues.apache.org/jira/browse/IGNITE-8900
>>
>> On Wed, Aug 15, 2018 at 11:34 PM bintisepaha 
>> wrote:
>>
>>> Thanks for getting back, but we do not use Ignite's native persistence.
>>> Anything else changed from 2.3 to 2.4 to cause this around SQL Queries?
>>>
>>>
>>>
>>>
>>> --
>>> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>>>
>>
>


Re: Ignite running on JDK10?

2018-08-15 Thread Vladimir Ozerov
Hi,

Please try adding these flags to the JVM startup arguments.
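
The flags themselves are not quoted in this archived reply. As a hedged
reference, the JVM options Ignite documented for Java 9/10 at the time took
this form (the first flag follows directly from the error quoted below;
treat the full list as an assumption, not the authoritative set):

--add-exports=java.base/jdk.internal.misc=ALL-UNNAMED
--add-exports=java.base/sun.nio.ch=ALL-UNNAMED
--add-exports=java.management/com.sun.jmx.mbeanserver=ALL-UNNAMED
--add-exports=jdk.internal.jvmstat/sun.jvmstat.monitor=ALL-UNNAMED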

On Fri, Aug 10, 2018 at 5:31 PM KJQ  wrote:

> As a note, I downgraded all of the Docker containers to use JDK 9 (9.0.4)
> and
> I still get the same problem running the SpringBoot 2 application.  Running
> in my IDE a test case works perfectly fine.
>
> *Caused by: java.lang.RuntimeException: jdk.internal.misc.JavaNioAccess
> class is unavailable.*
>
> *Caused by: java.lang.IllegalAccessException: class
> org.apache.ignite.internal.util.GridUnsafe cannot access class
> jdk.internal.misc.SharedSecrets (in module java.base) because module
> java.base does not export jdk.internal.misc to unnamed module @78a89eea*
>
>
>
> -
> KJQ
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: Delete SQL is failing with IN clause for a table which has composite key

2018-04-12 Thread Vladimir Ozerov
Hi,

The problem is already fixed [1]. The fix will be available in Apache Ignite 2.5.

[1] https://issues.apache.org/jira/browse/IGNITE-8147

On Wed, Apr 11, 2018 at 9:21 AM, Naveen  wrote:

> Hi
>
> These are the queries I used; I was getting the error every time I executed
> them. Nothing else to share.
>
> Thanks
> Naveen
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: SQL Transactional Support for Commit and rollback operation.

2018-04-11 Thread Vladimir Ozerov
Hi,

Transactional SQL is an extremely large, multi-man-year effort. We hope to have
it in the product in Q3. We are working on it in normal (not "ASAP") mode,
as our ultimate goal is a great product.

On Wed, Apr 11, 2018 at 2:32 PM, joseheitor  wrote:

> Yes - please clarify?
>
> We are also developing a product for which we are keen to use Ignite
> (instead of PostgreSQL), but it is a complete BLOCKER if there is no
> persistent SQL Transactional support.
>
> So we also require this urgently!
> (Please, please, please...)
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: Upgrade from 2.1.0 to 2.4.0 resulting in error within transaction block

2018-04-05 Thread Vladimir Ozerov
I've just fixed a possible root cause in master [1]. However, as the exact use
case details are not known, it may have been something else.
Is it possible to provide more info on the use case: cache configuration,
model classes?

[1] https://issues.apache.org/jira/browse/IGNITE-8147

On Mon, Apr 2, 2018 at 12:21 PM, Yakov Zhdanov <yzhda...@apache.org> wrote:

> Cross posting to dev.
>
> Vladimir Ozerov, can you please take a look at NPE from query processor
> (see below - GridQueryProcessor.typeByValue(GridQueryProcessor.java:1901)
> )?
>
> --Yakov
>
> 2018-03-29 0:19 GMT+03:00 smurphy <smur...@trustwave.com>:
>
>> Code works in Ignite 2.1.0. Upgrading to 2.4.0 produces the stack trace
>> below. The delete statement that is causing the error is:
>>
>> SqlFieldsQuery sqlQuery = new SqlFieldsQuery(
>>     "delete from EngineFragment where " + criteria());
>> fragmentCache.query(sqlQuery.setArgs(criteria.getArgs()));
>>
>> The code above is called from within a transactional block managed by a
>> PlatformTransactionManager which is in turn managed by Spring's
>> ChainedTransactionManager. If the @Transactional annotation is removed
>> from
>> the surrounding code, then the code works ok...
>>
>> 2018-03-28 15:50:05,748 WARN  [engine 127.0.0.1] progress_monitor_2 unknown unknown
>> {ProgressMonitorImpl.java:112} - Scan [ec7af5e8-a773-40fd-9722-f81103de73dc] is unable to process!
>> javax.cache.CacheException: Failed to process key '247002'
>> at org.apache.ignite.internal.processors.cache.IgniteCacheProxyImpl.query(IgniteCacheProxyImpl.java:618)
>> at org.apache.ignite.internal.processors.cache.IgniteCacheProxyImpl.query(IgniteCacheProxyImpl.java:557)
>> at org.apache.ignite.internal.processors.cache.GatewayProtectedCacheProxy.query(GatewayProtectedCacheProxy.java:382)
>> at com.company.core.dao.ignite.IgniteFragmentDao.delete(IgniteFragmentDao.java:143)
>> at com.company.core.dao.ignite.IgniteFragmentDao$$FastClassBySpringCGLIB$$c520aa1b.invoke()
>> at org.springframework.cglib.proxy.MethodProxy.invoke(MethodProxy.java:204)
>> at org.springframework.aop.framework.CglibAopProxy$CglibMethodInvocation.invokeJoinpoint(CglibAopProxy.java:720)
>> at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:157)
>> at org.springframework.dao.support.PersistenceExceptionTranslationInterceptor.invoke(PersistenceExceptionTranslationInterceptor.java:136)
>> at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:179)
>> at org.springframework.aop.framework.CglibAopProxy$DynamicAdvisedInterceptor.intercept(CglibAopProxy.java:655)
>> at com.company.core.dao.ignite.IgniteFragmentDao$$EnhancerBySpringCGLIB$$ce60f71c.delete()
>> at com.company.core.core.service.impl.InternalScanServiceImpl.purgeScanFromGrid(InternalScanServiceImpl.java:455)
>> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>> at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>> at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>> at java.lang.reflect.Method.invoke(Method.java:498)
>> at org.springframework.aop.support.AopUtils.invokeJoinpointUsingReflection(AopUtils.java:302)
>> at org.springframework.aop.framework.JdkDynamicAopProxy.invoke(JdkDynamicAopProxy.java:202)
>> at com.sun.proxy.$Proxy417.purgeScanFromGrid(Unknown Source)
>> at com.company.core.core.async.tasks.PurgeTask.process(PurgeTask.java:85)
>> at sun.reflect.GeneratedMethodAccessor197.invoke(Unknown Source)
>> at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>> at java.lang.reflect.Method.invoke(Method.java:498)
>> at org.springframework.aop.support.AopUtils.invokeJoinpointUsingReflection(AopUtils.java:302)
>> at org.springframework.aop.framework.ReflectiveMethodInvocation.invokeJoinpoint(ReflectiveMethodInvocation.java:190)
>> at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(Re

Re: Affinity Key column to be always part of the Primary Key

2018-03-20 Thread Vladimir Ozerov
Because without the AFFINITY KEY option we do not know the order of fields
within the composite PK, which is very important for index creation.

On Tue, Mar 20, 2018 at 19:58, Dmitriy Setrakyan <dsetrak...@apache.org> wrote:

> On Tue, Mar 20, 2018 at 2:09 PM, Vladimir Ozerov <voze...@gridgain.com>
> wrote:
>
>> Internally, Ignite is a key-value storage. It uses the key to derive the
>> partition it belongs to. By default the whole key is used. Alternatively you
>> can use the @AffinityKey annotation in the cache API or the "affinityKey"
>> option in CREATE TABLE to specify *part of the key* to be used for affinity
>> calculation. The affinity column cannot belong to the value because in that
>> case a single key-value pair could migrate between nodes during updates, and
>> IgniteCache.get(K) would not be able to locate the key in the cluster.
>>
>
> Vladimir, while it makes sense that the key must be composed of the ID and
> Affinity Key, I still do not understand why we require that user declares
> them both as PRIMARY KEY. Why do you need to enforce that explicitly? In my
> view you can do it automatically, if you see that the table has both,
> PRIMARY KEY and AFFINITY KEY declared.
>
>


Re: Affinity Key column to be always part of the Primary Key

2018-03-20 Thread Vladimir Ozerov
Internally, Ignite is a key-value storage. It uses the key to derive the
partition it belongs to. By default the whole key is used. Alternatively you
can use the @AffinityKey annotation in the cache API or the "affinityKey"
option in CREATE TABLE to specify *part of the key* to be used for affinity
calculation. The affinity column cannot belong to the value because in that
case a single key-value pair could migrate between nodes during updates, and
IgniteCache.get(K) would not be able to locate the key in the cluster.
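
For the cache-API route, a minimal sketch of a key class (PersonKey and its
fields are hypothetical, mirroring the DDL discussed below; the annotation is
org.apache.ignite.cache.affinity.AffinityKeyMapped):

import org.apache.ignite.cache.affinity.AffinityKeyMapped;

public class PersonKey {
    /** Unique part of the key. */
    private int id;

    /** Used for partition calculation; note it lives in the key, not the value. */
    @AffinityKeyMapped
    private long cityId;

    public PersonKey(int id, long cityId) {
        this.id = id;
        this.cityId = cityId;
    }

    // equals() and hashCode() over both fields are required for key classes.
}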

On Fri, Mar 16, 2018 at 4:56 PM, David Harvey  wrote:

> Yes, the affinity key must be part of the primary key. Welcome to my
> world.
>
> On Fri, Mar 16, 2018 at 3:23 AM, Naveen  wrote:
>
>> Hi Mike
>>
>> I have created a table called CITY
>>
>> 0: jdbc:ignite:thin://127.0.0.1> CREATE TABLE City (city_id LONG PRIMARY
>> KEY, name VARCHAR) WITH "template=replicated";
>> No rows affected (0.224 seconds)
>>
>> Creating a table called Person with affinity key as city_id
>>
>> 0: jdbc:ignite:thin://127.0.0.1> CREATE TABLE IF NOT EXISTS Person ( age
>> int, id int, city_id LONG , name varchar, company varchar,  PRIMARY KEY
>> (name, id)) WITH "template=partitioned,backups=1,affinitykey=city_id,
>> key_type=PersonKey, value_type=MyPerson";
>>
>> This is the exception I get
>>
>> Error: Affinity key column must be one of key columns: CITY_ID
>> (state=42000,code=0)
>> java.sql.SQLException: Affinity key column must be one of key columns:
>> CITY_ID
>> at
>> org.apache.ignite.internal.jdbc.thin.JdbcThinConnection.send
>> Request(JdbcThinConnection.java:671)
>> at
>> org.apache.ignite.internal.jdbc.thin.JdbcThinStatement.execu
>> te0(JdbcThinStatement.java:130)
>> at
>> org.apache.ignite.internal.jdbc.thin.JdbcThinStatement.execu
>> te(JdbcThinStatement.java:299)
>> at sqlline.Commands.execute(Commands.java:823)
>> at sqlline.Commands.sql(Commands.java:733)
>> at sqlline.SqlLine.dispatch(SqlLine.java:795)
>> at sqlline.SqlLine.begin(SqlLine.java:668)
>> at sqlline.SqlLine.start(SqlLine.java:373)
>> at sqlline.SqlLine.main(SqlLine.java:265)
>> 0: jdbc:ignite:thin://127.0.0.1>
>>
>> And when I change the primary key to include the affinity column, the DDL
>> below works fine.
>> 0: jdbc:ignite:thin://127.0.0.1> CREATE TABLE IF NOT EXISTS Person ( age
>> int, id int, city_id LONG , name varchar, company varchar,  PRIMARY KEY
>> (name, id,city_id)) WITH
>> "template=partitioned,backups=1,affinitykey=city_id, key_type=PersonKey,
>> value_type=MyPerson";
>>
>> This is what I was trying to explain: does the affinity key have to be part
>> of the primary key?
>>
>> If this is the case, my whole data model will change drastically.
>>
>> Thanks
>> Naveen
>>
>>
>>
>> --
>> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>>
>
>
>
>


Re: query on BinaryObject index and table

2018-02-14 Thread Vladimir Ozerov
Hi Rajesh,

Method CacheConfiguration.setIndexedTypes() should only be used for classes
with SQL annotations. Since you operate on binary objects, you should use
CacheConfiguration.setQueryEntities(), and define a QueryEntity with all
necessary fields. Also, there is a property QueryEntity.tableName which you
can use to specify a concrete table name.

Vladimir.
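
As an illustration, a hedged sketch of that configuration (assuming an Ignite
instance "ignite" and the ORG_CACHE name from Rajesh's snippet below; the
field set is an assumption, and java.util imports are omitted for brevity):

QueryEntity entity = new QueryEntity(Long.class.getName(), "Organization");
entity.setTableName("ORGANIZATION"); // explicit SQL table name

LinkedHashMap<String, String> fields = new LinkedHashMap<>();
fields.put("id", Long.class.getName());
fields.put("name", String.class.getName());
entity.setFields(fields);

// Secondary index on the "name" field.
entity.setIndexes(Collections.singletonList(new QueryIndex("name")));

CacheConfiguration<Long, BinaryObject> cacheCfg = new CacheConfiguration<>(ORG_CACHE);
cacheCfg.setQueryEntities(Collections.singletonList(entity));

IgniteCache<Long, BinaryObject> cache =
    ignite.getOrCreateCache(cacheCfg).withKeepBinary();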


On Mon, Jan 22, 2018 at 7:41 PM, Denis Magda  wrote:

> The schema can be changed with ALTER TABLE ADD COLUMN command:
> https://apacheignite-sql.readme.io/docs/alter-table
>
> To my knowledge this is supported for schemas that were initially
> configured by both DDL and QueryEntity/Annotations.
>
> —
> Denis
>
>
> On Jan 22, 2018, at 5:44 AM, Ilya Kasnacheev 
> wrote:
>
> Hello Rajesh!
>
> Table name can be specified in cache configuration's query entity. If not
> supplied, by default it is equal to value type name, e.g. BinaryObject :)
>
> Also, note that SQL tables have fixed schemas. This means you won't be
> able to add a random set of fields in BinaryObject and be able to do SQL
> queries on them all. You will have to declare all fields that you are going
> to use via SQL, either by annotations or query entity:
> see https://apacheignite-sql.readme.io/docs/schema-and-indexes
>
> To add index, you should either specify it in annotations (via index=true)
> or in query entity.
>
> Regards,
> Ilya.
>
> --
> Ilya Kasnacheev
>
> 2018-01-21 15:12 GMT+03:00 Rajesh Kishore :
>
>> Hi Denis,
>>
>> This is my code:
>>
>> CacheConfiguration cacheCfg =
>> new CacheConfiguration<>(ORG_CACHE);
>>
>> cacheCfg.setAtomicityMode(CacheAtomicityMode.TRANSACTIONAL);
>> cacheCfg.setBackups(1);
>> cacheCfg
>> .setWriteSynchronizationMode(CacheWriteSynchronizationMode.F
>> ULL_SYNC);
>> cacheCfg.setIndexedTypes(Long.class, BinaryObject.class);
>>
>> IgniteCache cache = ignite.getOrCreateCache(cacheC
>> fg);
>>
>> if ( UPDATE ) {
>>   System.out.println("Populating the cache...");
>>
>>   try (IgniteDataStreamer streamer =
>>   ignite.dataStreamer(ORG_CACHE)) {
>> streamer.allowOverwrite(true);
>> IgniteBinary binary = ignite.binary();
>> BinaryObjectBuilder objBuilder = binary.builder(ORG_CACHE);
>> for ( long i = 0; i < 100; i++ ) {
>>   streamer.addData(i,
>>   objBuilder.setField("id", i)
>>   .setField("name", "organization-" + i).build());
>>
>>   if ( i > 0 && i % 100 == 0 )
>> System.out.println("Done: " + i);
>> }
>>   }
>> }
>>
>> IgniteCache binaryCache =
>> ignite.cache(ORG_CACHE).withKeepBinary();
>> BinaryObject binaryPerson = binaryCache.get(54l);
>> System.out.println("name " + binaryPerson.field("name"));
>>
>>
>> Not sure if I am missing some context here. If I have to use SqlQuery,
>> what table name should I specify? I did not create a table explicitly; do I
>> need to do that?
>> How would I create the index?
>>
>> Thanks,
>> Rajesh
>>
>> On Sun, Jan 21, 2018 at 12:25 PM, Denis Magda  wrote:
>>
>>>
>>>
>>> > On Jan 20, 2018, at 7:20 PM, Rajesh Kishore 
>>> wrote:
>>> >
>>> > Hi,
>>> >
>>> > I have requirement that my schema is not fixed , so I have to use the
>>> BinaryObject approach instead of fixed POJO
>>> >
>>> > I am relying on OOTB file system persistence mechanism
>>> >
>>> > My questions are:
>>> > - How can I specify the indexes on BinaryObject?
>>>
>>> https://apacheignite-sql.readme.io/docs/create-index
>>> https://apacheignite-sql.readme.io/docs/schema-and-indexes
>>>
>>> > - If I have to use sql query for retrieving objects , what table name
>>> should I specify, the one which is used for cache name does not work
>>> >
>>>
>>> Was the table and its queryable fields/indexes created with CREATE TABLE
>>> or Java annotations/QueryEntity?
>>>
>>> If the latter approach was taken then the table name corresponds to the
>>> Java type name as shown in this doc:
>>> https://apacheignite-sql.readme.io/docs/schema-and-indexes
>>>
>>> —
>>> Denis
>>>
>>> > -Rajesh
>>>
>>>
>>
>
>


Re: Cannot insert data into table using JDBC

2018-01-17 Thread Vladimir Ozerov
Hi Michael,

The issue is almost fixed. The fix will be available as part of the Apache
Ignite 2.4 release.

On Tue, Dec 19, 2017 at 1:48 PM, Michael Jay <841519...@qq.com> wrote:

> Hi, has it been solved or is it a bug? I just met the same problem. When I
> set "streaming=false", it worked; data could be inserted via the
> JdbcClientDriver.
> However, when streaming=true, I got a message saying "schema not found".
> Thanks.
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: BinaryObjectImpl.deserializeValue with specific ClassLoader

2018-01-15 Thread Vladimir Ozerov
The ticket is on the radar, but not in our immediate plans. The problem might
sound simple at first glance, but we have already spent considerable time on
implementation and review, because we heavily rely on class caching, and
a lot of internal BinaryMarshaller infrastructure would have to be reworked to
allow for this change. Hopefully, we will have it in Apache Ignite 2.5.

On Thu, Jan 4, 2018 at 2:28 AM, Valentin Kulichenko <
valentin.kuliche...@gmail.com> wrote:

> Ticket is still open. Vladimir, looks like it's assigned to you. Do you
> have any plans to work on it?
>
> https://issues.apache.org/jira/browse/IGNITE-5038
>
> -Val
>
> On Wed, Jan 3, 2018 at 1:26 PM, Abeneazer Chafamo <
> chaf...@nodalexchange.com> wrote:
>
>> Is there any update on the suggested functionality to resolve cache entry
>> classes based on the caller's context first instead of relying on Ignite's
>> classloader?
>>
>>
>>
>> --
>> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>>
>
>


Re: insert so slow via JDBC thin driver

2018-01-15 Thread Vladimir Ozerov
Streaming mode for the thin JDBC driver is expected in Apache Ignite 2.5.
Meanwhile you can load data using the thick driver, which supports streaming,
and then switch to the thin driver for normal operations if you prefer it
over the thick one.
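
A hedged sketch of that load path with the thick (client-node) driver; the
config path, cache name, and table are assumptions:

// The thick driver starts an Ignite client node from the given Spring config.
Class.forName("org.apache.ignite.IgniteJdbcDriver");

try (Connection conn = DriverManager.getConnection(
         "jdbc:ignite:cfg://cache=default:streaming=true@file:///path/to/ignite-config.xml");
     PreparedStatement ps = conn.prepareStatement("INSERT INTO my_table (id, v) VALUES (?, ?)")) {
    for (long i = 0; i < 1_000_000; i++) {
        ps.setLong(1, i);
        ps.setString(2, "value-" + i);
        ps.executeUpdate(); // buffered by the streamer rather than applied row by row
    }
}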


On Fri, Dec 15, 2017 at 9:09 PM, Denis Magda  wrote:

> Hi Michael,
>
> Yes, I heard from Ignite SQL experts that Ignite thin client is not
> optimal for data loading yet. However, there is already some work in
> progress to speed up the loading and batching of a myriad of rows.
>
> Vladimir, could you please step in and comment here.
>
> —
> Denis
>
> > On Dec 14, 2017, at 1:02 AM, Michael Jay <841519...@qq.com> wrote:
> >
> > Hello, I am a new Ignite learner. I want to insert 50,000,000 rows into a
> > table. Here, I ran into a problem.
> > With one host and one server node, the insert speed is about 2,000,000 per
> > minute and the CPU usage is 30-40%; however, with two hosts and two server
> > nodes, it is about 100,000 per minute and the CPU usage is only 5%. It's so
> > slow; what can I do to improve the performance? Thanks.
> >
> > my default-config.xml:
> >   
> >   
> >   
> >   
> > > class="org.apache.ignite.configuration.MemoryConfiguration">
> >value="19327352832"/>
> >
> >value="16"/>
> >
> >   
> >   
> >   
> >   
> >
> >
> > > class="org.apache.ignite.configuration.CacheConfiguration">
> >   
> >value="PARTITIONED"/>
> >value="4"/>
> >
> >
> >
> >value="false"/>
> >value="false"/>
> >value="false"/>
> >
> >name="rebalanceBatchSize" value="#{256L * 1024 * 1024}"/>
> >
> >value="0"/>
> >
> >name="rebalanceThreadPoolSize" value="8"/>
> >
> >value="false"/>
> >
> >
> >
> >
> >
> > > class="org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi">
> >
> >
> > > class="org.apache.ignite.spi.discovery.tcp.ipfinder.multicast.
> TcpDiscoveryMulticastIpFinder">
> >
> >   
> >
> >
> >
>  10x.x.x.226:47500..47509
> >
>  10x.x.x.75:47500..47509
> >
> >
> >
> >
> >
> >
> >   
> > 
> >
> >
> >
> >
> > --
> > Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>
>


Re: How can I get Ignite security plugin to work with JDBC thin client?

2017-11-09 Thread Vladimir Ozerov
Hi Caleb,

This appears to be a problem with our query execution engine, rather than
with the thin JDBC driver. I created a ticket to fix it [1].

[1] https://issues.apache.org/jira/browse/IGNITE-6856

On Tue, Oct 31, 2017 at 4:49 PM, Andrey Mashenkov <
andrey.mashen...@gmail.com> wrote:

> Caleb,
>
> I've found that authorization should work only when you use a URL like
> "jdbc:ignite://" with the thick driver,
> and won't work if "jdbc:ignite:cfg://" is used.
>
> On Tue, Oct 31, 2017 at 4:33 PM, calebs  wrote:
>
>> The javadoc for the jdbc thick client states that property:
>> "ignite.client.credentials" is available to pass "Client credentials used
>> in
>> authentication process."
>>
>> Is this not being used for authentication/authorization?
>>
>>
>>
>> --
>> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>>
>
>
>
> --
> Best regards,
> Andrey V. Mashenkov
>


Re: Ignite 2.0.0 GridUnsafe unmonitor

2017-10-31 Thread Vladimir Ozerov
Guys,

Printing a warning in this case is a really strange idea. First, how would
you explain it in the case of OPTIMISTIC/SERIALIZABLE transactions, where
deadlocks are impossible? Second, what would you do when two sorted maps are
passed one by one in a transaction? The user may still get a deadlock. Last,
we are moving towards the SQL world, where "maps" simply do not exist and
virtually any update could easily lead to a deadlock.

Let's avoid strange warnings for a normal usage scenario. Denis, please close
the ticket :-)))

Vladimir.
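
For context, a minimal sketch of the pattern under debate (cache, keys, and
transaction settings are hypothetical):

// With a HashMap, two concurrent PESSIMISTIC transactions may lock the same
// keys in different orders and deadlock. A sorted map fixes the lock order.
Map<Integer, String> entries = new TreeMap<>();
entries.put(1, "a");
entries.put(2, "b");

try (Transaction tx = ignite.transactions().txStart(
        TransactionConcurrency.PESSIMISTIC, TransactionIsolation.REPEATABLE_READ)) {
    cache.putAll(entries); // keys are locked in ascending order
    tx.commit();
}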

On Tue, Oct 31, 2017 at 8:34 PM, Denis Magda  wrote:

> Here is a ticket for the improvement:
> https://issues.apache.org/jira/browse/IGNITE-6804
>
> —
> Denis
>
> > On Oct 31, 2017, at 3:55 AM, Dmitry Pavlov 
> wrote:
> >
> > I agree with Denis, if we don't have such warning we should continiously
> warn users in wiki pages/blogs/presentations. It is simpler to warn from
> code.
> >
> > What do you think if we will issue warning only if size > 1. HashMap
> with 1 item will not cause deadlock. Moreover where can be some custom
> singleton Map provided by user.
> >
> > Sincerely,
> > Dmitriy Pavlov
> >
> > вт, 31 окт. 2017 г. в 7:18, Dmitriy Setrakyan  >:
> > Denis,
> >
> > We should definitely print out a thorough warning if a HashMap is passed
> > into a bulk method (instead of a SortedMap). However, we should make sure
> > that we only print that warning once and not every time the API is called.
> >
> > Can you please file a ticket for 2.4?
> >
> > D.
> >
> > On Thu, Oct 26, 2017 at 11:05 AM, Denis Magda  > wrote:
> >
> > > + dev list
> > >
> > > Igniters, that's a relevant point below. Newcomers to Ignite tend to
> > > stumble on deadlocks simply because the keys are passed in an unordered
> > > HashMap. I propose to do the following:
> > > - update the bulk operations Javadoc.
> > > - print out a warning if a HashMap is used and it exceeds one element.
> >
> >
> > > Thoughts?
> > >
> > > —
> > > Denis
> > >
> > > > On Oct 21, 2017, at 6:16 PM, dark > wrote:
> > > >
> > > > Many people seem likely to send cache entries in bulk via a HashMap.
> > > > How would you expose a warning by checking, inside the code, whether
> > > > the map passed to putAll is a TreeMap?
> > > >
> > > >
> > > >
> > > >
> > > > --
> > > > Sent from: http://apache-ignite-users.70518.x6.nabble.com/ <
> http://apache-ignite-users.70518.x6.nabble.com/>
> > >
> > >
>
>


Re: Query performance against table with/out backup

2017-10-27 Thread Vladimir Ozerov
Hi,

We know one serious source of slowdown when backups are enabled. See [1]
and [2]. It will be fixed in 2.4.

Vladimir.

[1] https://issues.apache.org/jira/browse/IGNITE-6624
[2] https://issues.apache.org/jira/browse/IGNITE-6626

On Thu, Oct 19, 2017 at 9:29 PM, blackfield 
wrote:

> Here, I am trying to ascertain that I set backups == 2 properly since, as I
> mentioned above, I do not see a query performance difference between
> backups == 1 and backups == 2.
>
> I want to make sure that I configure my cache properly.
>
> When I set the backup==2 (to have three copies), I notice the following via
> visor.
>
> The Affinity Backups value is still equal to 1. Is this a different property
> than the number of backups? If it is not, how does one see the number of
> backups a cache is configured for?
>
> Invoking "cache -a" to see the detailed cache stats, with backups == 2, under
> the size column the sum of entries on all nodes is equal to the number of
> rows in the table * 2. It appears this is the case for any backups >= 1?
>
> As in, only one set of backups will be stored off-heap regardless of the
> number of backups specified?
>
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: work around for problem where ignite query does not include objects added into Cache from within a transaction

2017-09-18 Thread Vladimir Ozerov
Hi,

Looks like you need a feature which is simply not implemented in Ignite at
the moment. SELECTs are not transactional and there is no easy way to make
not-yet-committed updates visible to them. Solution 1 doesn't work, as there
are no "temporary" caches in Ignite; not-yet-committed updates are stored
in internal data structures. Solution 2 would lead to dramatic
performance degradation.

I would suggest you rethink the whole approach - SELECTs do not have any
transactional guarantees at the moment. Maybe you will be able to split the
single transaction into multiple ones and then run the SQL query in between.

Vladimir.
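
A hedged sketch of that split (cache, keys, and SQL are hypothetical):

// First transaction: commit the first batch so SQL can see it.
try (Transaction tx = ignite.transactions().txStart()) {
    cache.put(1L, "updated-value");
    tx.commit();
}

// The SELECT runs between the transactions and sees the committed batch.
List<List<?>> rows = cache.query(
    new SqlFieldsQuery("SELECT _key, _val FROM String")).getAll();

// Second transaction: the remaining updates.
try (Transaction tx = ignite.transactions().txStart()) {
    cache.put(2L, "another-value");
    tx.commit();
}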

On Sat, Sep 16, 2017 at 10:17 AM, kotamrajuyashasvi <
kotamrajuyasha...@gmail.com> wrote:

> Hi
>
> When a transaction is started and some rows are updated/inserted/deleted in
> a cache which is in transactional mode, and after that I perform a
> select query, it does not reflect the results of the update/insert/delete
> done earlier within the transaction. I found out that Ignite SQL does not
> have this functionality yet, but I need a temporary workaround for this
> problem. I thought of two possible solutions. I do not know the feasibility
> of these solutions, nor their impact on performance or functionality.
> Hence I need your help.
>
> solution 1: Assuming all updates/inserts/deletes with in transaction are
> temporarily stored in some cache, and then flushed to original cache when
> commited or rolledback, Is there any way I could access that cache to
> perform select query on that temporary cache and then decide what rows
> should I include as result of select query.
>
> solution 2: When ever I start a Transaction, for every original cache, I
> initialize a new dynamic temporary cache which is not in  transactional
> mode[also does not have any persistent store] using 'getOrCreateCache(..)'
> as mentioned in docs. Now within transaction when ever any update/insert
> happens I insert into the corresponding temporary cache. During the select
> query I perform select query on both main and corresponding temporary
> cache.
> Then I take results from both the queries and decide the actual result.I
> also maintain a list of rows/keys deleted within transaction, and during
> select query I will exclude the keys present in this list. During commit I
> sync the temporary caches into original caches. Will creating temporary
> caches for every transaction degrade performance? Is there any way to
> create
> a temporary cache local to transaction or node so that it does not affect
> performance, since Im using temporary cache mainly for performing sql
> queries.
>
>
> Please suggest if there are any drawbacks to the above solutions and also
> if
> there are any other workarounds for this problem.
>
>
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: GROUP_CONCAT function is unsupported

2017-09-08 Thread Vladimir Ozerov
Hi,

This function is not supported yet. There are no immediate plans to fix it,
as there are lots of tasks with higher priority and impact, but we will
definitely support it at some point.

On Wed, Sep 6, 2017 at 11:38 AM, mhetea  wrote:

> Hello,
> I saw that the ticket was not fixed.
> Is there a workaround for this issue?
> I wouldn't like to iterate over the results and then concatenate them
> because the result list can be large.
> Thank you!
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: New API docs look for Ignite.NET

2017-09-06 Thread Vladimir Ozerov
Looks good to me. Clean layout, fast build - do we need anything else?

On Tue, Sep 5, 2017 at 12:06 PM, Oleg Ostanin  wrote:

> Great news, thanks a lot!
>
> On Tue, Sep 5, 2017 at 11:47 AM, Pavel Tupitsyn 
> wrote:
>
>> DocFX takes around 30 seconds on my machine.
>>
>> > if you already tried that
>> Yes, everything is done on my side, see JIRA ticket [4] and preview [5]
>> above.
>>
>> On Tue, Sep 5, 2017 at 11:45 AM, Ilya Suntsov 
>> wrote:
>>
>> > Pavel, thanks!
>> > It is the great news!
>> > Looks like DocFX will save 30-40 min.
>> >
>> > 2017-09-05 11:16 GMT+03:00 Pavel Tupitsyn :
>> >
>> > > Igniters and users,
>> > >
>> > > Historically we've been using Doxygen [1] to generate .NET API
>> > > documentation [2].
>> > >
>> > > Recently it became very slow on our code base (more than 30 minutes to
>> > > generate), and I could not find any solution or tweak to fix that.
>> Other
>> > > issues include outdated looks and limited customization possibilities.
>> > >
>> > > I propose to replace it with DocFX [3] [4]:
>> > > - Popular .NET Foundation project
>> > > - Good looks and usability out of the box
>> > > - Easy to set up
>> > >
>> > > Our docs will look like this: [5]
>> > > Let me know if you have any objections or suggestions.
>> > >
>> > > Pavel
>> > >
>> > >
>> > > [1] http://www.stack.nl/~dimitri/doxygen/
>> > > [2] https://ignite.apache.org/releases/latest/dotnetdoc/index.html
>> > > [3] https://dotnet.github.io/docfx/
>> > > [4] https://issues.apache.org/jira/browse/IGNITE-6253
>> > > [5] https://ptupitsyn.github.io/docfx-test/api/index.html
>> > >
>> >
>> >
>> >
>> > --
>> > Ilya Suntsov
>> >
>>
>
>


Re: SQL query is slow

2017-09-04 Thread Vladimir Ozerov
Hi Mihaela,

The index is not used in your case because you specify a function-based
condition. Usually this is resolved by adding a functional index, but
unfortunately Ignite doesn't support those at the moment. Is it possible to
"materialize" the condition "POSITION('Z', manufacturerCode) > 0" as an
additional attribute and add an index on it? In that case the SQL would look
like this and the index would be used:

SELECT COUNT(_KEY) FROM IgniteProduct AS product
WHERE manufacturerCodeZ = 1

Another important thing is selectivity - what fraction of the records falls
under this condition?
Also, I would recommend changing "COUNT(_KEY)" to "COUNT(*)".

Vladimir.
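
A hedged sketch of the full flow once the flag column exists (the column name
follows the example above; the index name is an assumption; CREATE INDEX is
Ignite's documented DDL):

CREATE INDEX idx_product_mcz ON IgniteProduct (manufacturerCodeZ);

SELECT COUNT(*) FROM IgniteProduct AS product
WHERE manufacturerCodeZ = 1;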

On Tue, Aug 29, 2017 at 6:05 PM, Andrey Mashenkov <
andrey.mashen...@gmail.com> wrote:

> It is possible the returned dataset is too large and causes high network
> pressure, which results in a long query execution time.
>
> There is no recommendation for grid node count.
> Simple SQL queries can work slower on a large grid, as most of the time is
> spent in inter-node communication.
> Heavy SQL queries may show better results on a larger grid, as every node
> will have a smaller dataset.
>
> You can try to look at the page memory statistics [1] to get estimated numbers.
>
> Really, there is an issue with a large OFFSET, as Ignite can't just skip
> entries and has to fetch all of them from the nodes.
> OFFSET makes no sense without ORDER, as Ignite fetches rows from other nodes
> in an async way and row order should be preserved between such queries.
> OFFSET is applied on the query initiator node (reduce side) after the results
> are merged, as there is no way to understand on the map side which rows
> should be skipped.
>
> Looks like the underlying H2 tries to use an index scan, but I don't think an
> index can help in the case of a functional condition.
> You can try to make Ignite inline values in the index, or use a separate
> field with a smaller type that can be inlined. By default, index inlining is
> enabled for values up to 10 bytes long.
> See the IGNITE_MAX_INDEX_PAYLOAD_SIZE_DEFAULT system property docs and [2].
>
> [1] https://apacheignite.readme.io/v2.1/docs/memory-metrics
> [2] https://issues.apache.org/jira/browse/IGNITE-6060
>
> On Tue, Aug 29, 2017 at 3:59 PM, mhetea  wrote:
>
>> Thank you for your response.
> I used query parallelism and the time dropped to ~2.3s, which is still too
> much.
> Regarding 1: is there any documentation about configuration parameters
> (recommended number of nodes, how much data should be stored on each
> node)?
> We currently have 2 nodes with 32GB RAM each. Every 1 million records from
> our cache occupies about 1GB (is there a way to see how much memory a cache
> actually occupies? we currently look at the "Allocated next memory segment"
> log info).
> For 3: it seems from the execution plan that the index is hit:
>  /* "productCache".IGNITEPRODUCT_MANUFACTURERCODE_IDX */
> No?
>
> We also have this issue when we use a large OFFSET (we execute this kind of
> query because we want paginated results).
>
> Also, this cache will be updated frequently, so we expect it to grow in
> size.
>
> Thank you!
>>
>>
>>
>> --
>> View this message in context: http://apache-ignite-users.705
>> 18.x6.nabble.com/SQL-query-is-slow-tp16475p16487.html
>> Sent from the Apache Ignite Users mailing list archive at Nabble.com.
>>
>
>
>
> --
> Best regards,
> Andrey V. Mashenkov
>


Re: using hdfs as secondary file system

2017-06-14 Thread Vladimir Ozerov
Hi Antonio,

Is it possible to attach logs from server nodes?



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/using-hdfs-as-secondary-file-system-tp13580p13696.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Re: odd query plan with joins?

2017-06-14 Thread Vladimir Ozerov
Hi,

"1=1" is normal thing in the plan provided that index is used (see "/*
PropertyCache.PROPERTY_CITY_ID_IDX: CITY_ID = 59053 */"). Is it possible to
attach a reproducer?

Vladimir.
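
For reference, a plan like the one quoted below is obtained with EXPLAIN (the
table and filter follow the quoted query):

EXPLAIN SELECT p.property_id
FROM "PropertyCache".property AS p
WHERE p.city_id = 59053;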

On Wed, Jun 14, 2017 at 5:10 AM, djardine  wrote:

> Hi,
> I'm attempting to run a query that does a join between 2 large tables (one
> with 6m rows, another with 80m rows). In the query plan I see a join "on
> 1=1" and separately i see filters for my join under the "where" clause. I'm
> not sure if this is standard output in the query plan or if it's doing a
> ridiculously expensive join where it's combining every possible permutation
> between the tables and then filtering it down. The query basically runs
> forever, never returning and eventually will kill the server node that it's
> running on (maybe OOM?). I've tried this on both PARTITIONED and REPLICATED
> clusters. "property_id" fields are indexed.
>
> The query is:
>
> SELECT p.property_id,
>        sum(cm.days_r)::real / (sum(cm.days_r) + sum(cm.days_a)) AS occ_rate,
>        p.bedrooms, cm.period, p.room_type
> FROM calendar_metric AS cm
> JOIN "PropertyCache".property AS p ON p.property_id = cm.property_id
> WHERE cm.days_r > 0
>   AND p.bedrooms IS NOT NULL
>   AND (p.room_type = 'Entire home/apt')
>   AND cm.period BETWEEN '2016-1-1' AND '2016-8-1'
>   AND p.city_id = 59053
> GROUP BY cm.period, p.room_type, p.bedrooms, p.property_id
>
> The query plan shows:
>
> SELECT
> P__Z1.PROPERTY_ID AS __C0_0,
> SUM(CM__Z0.DAYS_R) AS __C0_1,
> P__Z1.BEDROOMS AS __C0_2,
> CM__Z0.PERIOD AS __C0_3,
> P__Z1.ROOM_TYPE AS __C0_4,
> SUM(CM__Z0.DAYS_R) AS __C0_5,
> SUM(CM__Z0.DAYS_A) AS __C0_6
> FROM CalendarMetricCache.CALENDAR_METRIC CM__Z0
> /* CalendarMetricCache.CALENDAR_METRIC_PERIOD_DAYS_R_IDX:
>        PERIOD >= DATE '2016-01-01'
>        AND PERIOD <= DATE '2016-08-01'
>        AND DAYS_R > 0
> */
> /* WHERE (CM__Z0.DAYS_R > 0)
>        AND ((CM__Z0.PERIOD >= DATE '2016-01-01')
>        AND (CM__Z0.PERIOD <= DATE '2016-08-01'))
> */
> INNER JOIN PropertyCache.PROPERTY P__Z1
> /* PropertyCache.PROPERTY_CITY_ID_IDX: CITY_ID = 59053 */
> ON 1=1
> WHERE (P__Z1.PROPERTY_ID = CM__Z0.PROPERTY_ID)
> AND ((P__Z1.CITY_ID = 59053)
> AND (((CM__Z0.PERIOD >= DATE '2016-01-01')
> AND (CM__Z0.PERIOD <= DATE '2016-08-01'))
> AND ((P__Z1.ROOM_TYPE = 'Entire home/apt')
> AND ((CM__Z0.DAYS_R > 0)
> AND (P__Z1.BEDROOMS IS NOT NULL)
> GROUP BY CM__Z0.PERIOD, P__Z1.ROOM_TYPE, P__Z1.BEDROOMS, P__Z1.PROPERTY_ID
>
> SELECT
> __C0_0 AS PROPERTY_ID,
> (CAST(SUM(__C0_1) AS REAL) / (SUM(__C0_5) + SUM(__C0_6))) AS OCC_RATE,
> __C0_2 AS BEDROOMS,
> __C0_3 AS PERIOD,
> __C0_4 AS ROOM_TYPE
> FROM PUBLIC.__T0
> /* CalendarMetricCache.merge_scan */
> GROUP BY __C0_3, __C0_4, __C0_2, __C0_0
>
>
>
> --
> View this message in context: http://apache-ignite-users.
> 70518.x6.nabble.com/odd-query-plan-with-joins-tp13680.html
> Sent from the Apache Ignite Users mailing list archive at Nabble.com.
>


Re: How to properly close igfs file after appending it

2017-06-09 Thread Vladimir Ozerov
Is it possible to provide a reproducer?

On Fri, Jun 9, 2017 at 9:11 AM, ishan-jain  wrote:

> Ivan, please elaborate.
>
>
>
> --
> View this message in context: http://apache-ignite-users.
> 70518.x6.nabble.com/How-to-properly-close-igfs-file-after-appending-it-
> tp11613p13551.html
> Sent from the Apache Ignite Users mailing list archive at Nabble.com.
>


Re: Memory Leak or my Error?

2017-06-08 Thread Vladimir Ozerov
Hi Devis,

The behavior you saw doesn't depend on how many rows are returned, but rather
on how many threads are created. There should not be any problems when
switching from 100 to 2000 rows.

On Wed, Jun 7, 2017 at 9:20 PM, Devis Balsemin <
devis.balse...@flexvalley.com> wrote:

> Hi Vladimir,
>
> for now I have changed my code in this way:
>
> From:
>
> queryCursor = cache.query(new SqlQuery(cls, whereCondition).setArgs(whereParams));
>
> To:
>
> cache.iterator() and Java 8 Stream/Filter/…
>
>
>
> My memory now works very well: over 37.4% fewer resources and CMS objects
> in garbage.
>
> What do you think about this?
>
> In your view, what will happen with over 2000 rows (right now I'm testing
> with only 100 rows)?
>
> Should I expect a decrease in performance with this approach?
>
> Thanks for your help,
>
> Devis
>
>
>
>
>
>
> *From:* devis76 [mailto:devis.balse...@flexvalley.com]
> *Sent:* Tuesday, June 6, 2017 17:12
> *To:* user@ignite.apache.org
> *Subject:* Re: Memory Leak or my Error?
>
>
>
> Hi Vladimir,
>
> sorry…
>
> when I call "submit"…
>
> I'm using an IgniteThreadPoolExecutor in this way:
>
>
>
> threadPoolExecutor = new IgniteThreadPoolExecutor(coreSize, maxSize,
>     keepAlive, arrayBlockingQueue, threadFactory,
>     new ThreadPoolExecutor.CallerRunsPolicy());
> threadPoolExecutor.allowCoreThreadTimeOut(true);
>
>
>
>
>
>
>
>
> *From:* Vladimir Ozerov [via Apache Ignite Users] [mailto:[hidden email]]
> *Sent:* Tuesday, June 6, 2017 16:47
> *To:* devis76 <[hidden email]>
> *Subject:* Re: Memory Leak or my Error?
>
>
>
> Hi Devis,
>
>
>
> Is it correct that your application creates lots of threads? Currently we
> cache connections on a per-thread level for performance reasons. If there are
> many threads created over and over again, it could consume more and more
> memory. Probably we should improve this piece of code.
>
>
>
> On Tue, Jun 6, 2017 at 4:38 PM, Devis Balsemin <[hidden email]
> <http:///user/SendEmail.jtp?type=node=13425=0>> wrote:
>
> Hi Vladimir,
>
>
>
> this is my cache configuration
>
>
>
>
>    CacheConfiguration<K, V> cacheCfg = new CacheConfiguration<>();
>    cacheCfg.setTypes(keyType, valueType);
>    cacheCfg.setBackups(0);
>    cacheCfg.setName(ctx.name());
>    cacheCfg.setCacheMode(PARTITIONED);
>    cacheCfg.setEagerTtl(false);
>    cacheCfg.setCopyOnRead(true);
>    cacheCfg.setExpiryPolicyFactory(CreatedExpiryPolicy.factoryOf(new Duration(TimeUnit.HOURS, 24)));
>    cacheCfg.setIndexedTypes(keyType, valueType);
>    ignite.configuration().setCacheConfiguration(cacheCfg);
>    cache = ignite.getOrCreateCache(cacheCfg);
>
>
>
>
>

Re: Is it necessary to configure setting HBase which use Secondary File System?

2017-06-08 Thread Vladimir Ozerov
Hi Takashi,

"igfs://" prefix should be used in your application code, in those places
where data is accessed. It is illegal to change "hbase.wal.dir" property,
as it breaks HBase internals.
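
For illustration, a hedged sketch of application-side IGFS access through the
Hadoop FileSystem API (host, port, and paths are assumptions):

Configuration conf = new Configuration();
conf.set("fs.igfs.impl", "org.apache.ignite.hadoop.fs.v1.IgniteHadoopFileSystem");

FileSystem igfs = FileSystem.get(URI.create("igfs://igfs@localhost:10500/"), conf);
igfs.copyFromLocalFile(new Path("/tmp/input.dat"), new Path("/user/data/input.dat"));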

On Thu, Jun 8, 2017 at 6:30 AM, Takashi Sasaki  wrote:

> Hello,
>
> I used igfs:// instead of hdfs:// for hbase.wal.dir property, then
> HBase Master Server throwed Exception.
>
> 2017-06-08 02:51:56,745 ERROR [main] master.HMasterCommandLine: Master exiting
> java.lang.RuntimeException: Failed construction of Master: class org.apache.hadoop.hbase.master.HMaster.
> at org.apache.hadoop.hbase.master.HMaster.constructMaster(HMaster.java:2577)
> at org.apache.hadoop.hbase.master.HMasterCommandLine.startMaster(HMasterCommandLine.java:231)
> at org.apache.hadoop.hbase.master.HMasterCommandLine.run(HMasterCommandLine.java:137)
> at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
> at org.apache.hadoop.hbase.util.ServerCommandLine.doMain(ServerCommandLine.java:126)
> at org.apache.hadoop.hbase.master.HMaster.main(HMaster.java:2587)
> Caused by: java.io.IOException: File system is already initialized: org.apache.ignite.internal.processors.hadoop.impl.igfs.HadoopIgfsWrapper@1dbd580
> at org.apache.ignite.hadoop.fs.v1.IgniteHadoopFileSystem.initialize(IgniteHadoopFileSystem.java:215)
> at org.apache.hadoop.hbase.fs.HFileSystem.<init>(HFileSystem.java:87)
> at org.apache.hadoop.hbase.regionserver.HRegionServer.initializeFileSystem(HRegionServer.java:634)
> at org.apache.hadoop.hbase.regionserver.HRegionServer.<init>(HRegionServer.java:576)
> at org.apache.hadoop.hbase.master.HMaster.<init>(HMaster.java:397)
> at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
>
> I checked the Ignite source code.
> It seems the initialize method is called more than once, so the server
> throws the exception.
>
> I added these properties to core-site.xml:
> <property>
>   <name>fs.igfs.impl</name>
>   <value>org.apache.ignite.hadoop.fs.v1.IgniteHadoopFileSystem</value>
> </property>
> <property>
>   <name>fs.AbstractFileSystem.igfs.impl</name>
>   <value>org.apache.ignite.hadoop.fs.v2.IgniteHadoopFileSystem</value>
> </property>
>
> I changed this property in hbase-site.xml:
> <property>
>   <name>hbase.wal.dir</name>
>   <value>igfs://igfs@/user/hbase/WAL</value>
> </property>
>
> Hadoop version: 2.7.3
> HBase version: 1.3.0
> Ignite version: 2.0.0
>
> How can I solve this problem?
> Give me advice if you like.
>
> Thanks,
> Takashi
>
> 2017-06-07 21:38 GMT+09:00 Takashi Sasaki :
> > Hello,
> >
> > I'm newbie of Ignite, so have some question.
> >
> > When I use Secondary File System to write HBase WAL, should I use
> > igfs:// instead of hdfs:// ?
> >
> > hbase-site.xml(default) is hdfs://.
> >
> > --
> > <property>
> >   <name>hbase.wal.dir</name>
> >   <value>hdfs://[dnsname]:[port]/user/hbase/WAL</value>
> > </property>
> > --
> >
> > Does the secondary file system require some configuration changes to
> > HBase?
> >
> > Please give me advice.
> >
> > Thanks,
> > Takashi
>


Re: igfs-meta behavior when node restarts

2017-06-08 Thread Vladimir Ozerov
Hi,

Is it possible to create an isolated reproducer which will show the gradual
increase of failed operations over time?

On Wed, Jun 7, 2017 at 6:39 AM, joewang  wrote:

> Would it be helpful if I uploaded logs collected from the cluster? I can
> point to the time when the behavior begins to occur.
>
> My concern with this is that the recurrent transactions (removing entries
> from the TRASH) and rollbacks are being performed over and over again for
> the same set of entries -- over time, this set continues to grow. In my
> observation, over the course of 3-4 days, the initial 4 TX op/s grows to
> ~200 TX op/s across all my nodes. I'm assuming if this grows unbounded, the
> cluster performance and stability will eventually be affected.
>
>
>
> --
> View this message in context: http://apache-ignite-users.
> 70518.x6.nabble.com/igfs-meta-behavior-when-node-restarts-
> tp13155p13446.html
> Sent from the Apache Ignite Users mailing list archive at Nabble.com.
>


Re: Memory Leak or my Error?

2017-06-06 Thread Vladimir Ozerov
Hi Devis,

Is it correct that your application creates lots of threads? Currently we
cache connections on a per-thread level for performance reasons. If there are
many threads created over and over again, it could consume more and more
memory. Probably we should improve this piece of code.
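
For illustration, a minimal sketch of the mitigation implied here: reuse a
bounded, long-lived pool so the number of per-thread cached connections stays
bounded (the pool size and query are assumptions):

ExecutorService pool = Executors.newFixedThreadPool(8); // long-lived worker threads

pool.submit(() -> {
    try (QueryCursor<Cache.Entry<Long, Person>> cur =
             cache.query(new SqlQuery<Long, Person>(Person.class, "name = ?").setArgs("x"))) {
        cur.getAll(); // at most one cached H2 connection per pool thread
    }
});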

On Tue, Jun 6, 2017 at 4:38 PM, Devis Balsemin <
devis.balse...@flexvalley.com> wrote:

> Hi Vladimir,
>
>
>
> this is my cache configuration
>
>
>
>
>    CacheConfiguration<K, V> cacheCfg = new CacheConfiguration<>();
>    cacheCfg.setTypes(keyType, valueType);
>    cacheCfg.setBackups(0);
>    cacheCfg.setName(ctx.name());
>    cacheCfg.setCacheMode(PARTITIONED);
>    cacheCfg.setEagerTtl(false);
>    cacheCfg.setCopyOnRead(true);
>    cacheCfg.setExpiryPolicyFactory(CreatedExpiryPolicy.factoryOf(new Duration(TimeUnit.HOURS, 24)));
>    cacheCfg.setIndexedTypes(keyType, valueType);
>    ignite.configuration().setCacheConfiguration(cacheCfg);
>    cache = ignite.getOrCreateCache(cacheCfg);
>
>
>
>
>
>
>
>
> *From:* Vladimir Ozerov [mailto:voze...@gridgain.com]
> *Sent:* Tuesday, June 6, 2017 15:02
> *To:* user@ignite.apache.org
> *Subject:* Re: Memory Leak or my Error?
>
>
>
> Hi Devis,
>
>
>
> Can you show GC roots of these Session objects?
>
>
>
> On Tue, Jun 6, 2017 at 3:52 PM, Devis Balsemin <
> devis.balse...@flexvalley.com> wrote:
>
> Hi,
>
> I’m using Ignite 1.7.
>
> I have a function nativeSQL that is called at many points in my programs.
>
> But after 2-3 days I get an OOM (this function is called every 0.25 ms
> by 10 concurrent users).
>
> My dump shows me:
>
>
>
> Class Name                                                               | Objects | Shallow Heap | Retained Heap
> --------------------------------------------------------------------------------------------------------------
> org.h2.engine.Session                                                    | 201.406 |   51.559.936 | >= 95.063.648
> char[]                                                                   | 501.160 |   40.283.480 | >= 40.283.480
> java.lang.Object[]                                                       | 625.053 |   21.469.296 | >= 34.984.640
> org.h2.jdbc.JdbcConnection                                               | 201.404 |   17.723.552 | >= 43.503.144
> java.util.ArrayList                                                      | 616.718 |   14.801.232 | >= 38.616.296
> java.util.HashMap$Node                                                   | 470.953 |   11.302.872 | >= 16.857.176
> org.h2.engine.UndoLog                                                    | 201.406 |    9.667.488 | >= 32.224.960
> org.apache.ignite.internal.util.GridCircularBuffer$Item                  | 343.040 |    8.232.960 | >=  8.232.968
> java.lang.String                                                         | 500.219 |    8.003.504 | >= 47.308.152
> org.h2.util.CloseWatcher                                                 | 201.404 |    6.444.928 | >= 13.375.872
> java.util.HashMap$Node[]                                                 |   9.546 |    5.074.192 | >= 19.757.480
> org.h2.message.Trace                                                     | 201.411 |    4.833.864 | >= 16.033.024
> byte[]                                                                   |   5.104 |    1.698.496 | >=  1.698.496
> org.jsr166.ConcurrentHashMap8$Node[]                                     |      85 |    1.531.792 | >=  2.455.608
> org.apache.ignite.internal.util.GridCircularBuffer$Item[]                |   4.216 |    1.439.616 | >=  9.672.576
> org.apache.ignite.internal.processors.jobmetrics.GridJobMetricsSnapshot  |  16.384 |    1.310.720 | >=  1.310.720
> org.jsr166.ConcurrentLinkedHashMap$HashEntry[]                           |   1.308 |    1.225.408 | >=  1.225.472
>
>
>
>
>

Re: Memory Leak or my Error?

2017-06-06 Thread Vladimir Ozerov
Hi Devis,

Can you show GC roots of these Session objects?

On Tue, Jun 6, 2017 at 3:52 PM, Devis Balsemin <
devis.balse...@flexvalley.com> wrote:

> Hi,
>
> I’m using Ignite 1.7.
>
> I have a function nativeSQL that is called at many points in my programs.
>
> But after 2-3 days I get an OOM (this function is called every 0.25 ms
> by 10 concurrent users).
>
> My dump shows me:
>
>
>
> Class Name                                                               | Objects | Shallow Heap | Retained Heap
> --------------------------------------------------------------------------------------------------------------
> org.h2.engine.Session                                                    | 201.406 |   51.559.936 | >= 95.063.648
> char[]                                                                   | 501.160 |   40.283.480 | >= 40.283.480
> java.lang.Object[]                                                       | 625.053 |   21.469.296 | >= 34.984.640
> org.h2.jdbc.JdbcConnection                                               | 201.404 |   17.723.552 | >= 43.503.144
> java.util.ArrayList                                                      | 616.718 |   14.801.232 | >= 38.616.296
> java.util.HashMap$Node                                                   | 470.953 |   11.302.872 | >= 16.857.176
> org.h2.engine.UndoLog                                                    | 201.406 |    9.667.488 | >= 32.224.960
> org.apache.ignite.internal.util.GridCircularBuffer$Item                  | 343.040 |    8.232.960 | >=  8.232.968
> java.lang.String                                                         | 500.219 |    8.003.504 | >= 47.308.152
> org.h2.util.CloseWatcher                                                 | 201.404 |    6.444.928 | >= 13.375.872
> java.util.HashMap$Node[]                                                 |   9.546 |    5.074.192 | >= 19.757.480
> org.h2.message.Trace                                                     | 201.411 |    4.833.864 | >= 16.033.024
> byte[]                                                                   |   5.104 |    1.698.496 | >=  1.698.496
> org.jsr166.ConcurrentHashMap8$Node[]                                     |      85 |    1.531.792 | >=  2.455.608
> org.apache.ignite.internal.util.GridCircularBuffer$Item[]                |   4.216 |    1.439.616 | >=  9.672.576
> org.apache.ignite.internal.processors.jobmetrics.GridJobMetricsSnapshot  |  16.384 |    1.310.720 | >=  1.310.720
> org.jsr166.ConcurrentLinkedHashMap$HashEntry[]                           |   1.308 |    1.225.408 | >=  1.225.472
>
>
>
>
>
>
>
> Class Name   | Objects | Shallow Heap
>
> --
>
> org.h2.engine.Session| 201.406 |   51.559.936
>
> |- org.h2.jdbc.JdbcConnection| 201.404 |   17.723.552
>
> |- org.h2.util.CloseWatcher  | 201.404 |6.444.928
>
> |- java.util.HashMap$Node| 201.404 |4.833.696
>
> |- org.h2.command.dml.Select |  65 |   11.440
>
> |- org.h2.table.TableFilter  |  65 |6.760
>
> |- org.h2.jdbc.JdbcPreparedStatement |  65 |5.720
>
> |- org.h2.index.IndexCursor  |  64 |4.608
>
> |- org.h2.command.CommandContainer   |  65 |2.600
>
> |- org.h2.engine.Database|   1 |  368
>
> |- java.lang.Thread  |   1 |  120
>
> |- org.jsr166.ConcurrentHashMap8$Node|   4 |   96
>
> |- org.h2.result.LocalResult |   1 |   72
>
> '- Total: 12 entries | |
>
>
>
> Class Name
> | Objects | Shallow Heap
>
> 
> ---
>
> org.h2.jdbc.JdbcConnection
> | 201.404 |   17.723.552
>
> |- org.h2.util.CloseWatcher
> | 201.404 |6.444.928
>
> |- java.lang.Object[]
> |   1 |  960.400
>
> |- org.h2.jdbc.JdbcPreparedStatement
>  |  65 |5.720
>
> |- 
> org.apache.ignite.internal.processors.query.h2.IgniteH2Indexing$ConnectionWrapper
> |  14 |  224
>
> |- java.lang.Thread
> |   1 |  120
>
> |- org.apache.ignite.internal.processors.query.h2.twostep.
> GridReduceQueryExecutor$QueryRun|   1 |   32
>
>
>
>
>
>
>
>
>
> Class Name |
> Objects | Shallow Heap
>
> 
> 
>
> java.util.ArrayList|
> 616.718 |   14.801.232
>
> |- org.h2.engine.Session   |
> 201.406 |   51.559.936
>
> |- org.h2.engine.UndoLog   |
> 201.406 |9.667.488
>
> |- org.apache.felix.framework.capabilityset.SimpleFilter   |
> 5.798 |  139.152
>
> |- org.apache.felix.framework.wiring.BundleCapabilityImpl  |
> 1.777 |   71.080
>
> |- java.lang.Object[]   

Re: igfs-meta behavior when node restarts

2017-06-06 Thread Vladimir Ozerov
Hi,

Ignite performs deletes in a "soft" fashion:
1) When a "remove" command is executed, we propagate it to the secondary file
system;
2) For the IGFS meta cache, we do not remove all records immediately, but
rather execute a single "move" operation and move the removed tree to a hidden
"TRASH" folder.
3) The "TRASH" folder is cleared periodically - this is what you see in the logs.

Removal of trash content should not interfere with normal operations in any
way, nor should it cause any performance issues. Do you observe a real
slowdown, or are you only concerned with the metrics summary?

Vladimir.


On Thu, Jun 1, 2017 at 8:39 PM, joewang  wrote:

> The reads are from a non-IGFS source, but the writes are through IGFS.
> Spark
> uses Hadoop's FileOutputCommitter to write the output to IGFS. I think what
> happens is essentially:
>
> - During processing, temporary files are written by each of n executors
> running on different nodes to some /data/path/output/_temporary/part-n...
> - When the job completes, each of the executor performs the final "commit"
> by renaming the files under /data/path/output/_temporary/part-n... to
> /data/path/output/part-n... and deletes the _temporary directory.
>
>
>
> --
> View this message in context: http://apache-ignite-users.
> 70518.x6.nabble.com/igfs-meta-behavior-when-node-restarts-
> tp13155p13322.html
> Sent from the Apache Ignite Users mailing list archive at Nabble.com.
>


Re: Adding new fields to QueryEntity

2017-05-21 Thread Vladimir Ozerov
Denis,

I think the question is not about new indexes, but about new fields. The
answer is: you still must change the QueryEntity manually.

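For illustration, a minimal sketch of such a manual change, reusing the cache
configuration from the example quoted below (the cfg variable and imports are
assumptions):

    QueryEntity e = new QueryEntity();
    e.setKeyType("java.lang.Integer");
    e.setValueType("BinaryTest");

    LinkedHashMap<String, String> fields = new LinkedHashMap<>();
    fields.put("X", "java.lang.String");
    fields.put("Y", "java.lang.String");
    fields.put("Z", "java.lang.String");
    fields.put("W", "java.lang.String"); // the newly added field
    e.setFields(fields);

    cfg.setQueryEntities(Collections.singletonList(e));
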
Sat, May 20, 2017 at 16:25, Denis Magda :

> Hi,
>
> You need to use CREATE index command in runtime to achieve that:
> https://apacheignite.readme.io/docs/distributed-ddl
>
> Denis
>
>
> On Saturday, May 20, 2017, fatality  wrote:
>
>> Hi
>>
>> I am wondering if it is possible to add new fields for sql queries to
>> BinaryObject caches.
>>
>> For example imagine I already have BinaryObjects that has fields X,Y,Z in
>> my
>> cache. And later I wanted to add one more field which is 'W'. Is it going
>> to
>> be possible to just adding this field to new BinaryObjects to do sql
>> queries
>> on 'W' as in below 'Step1' or do I have to do more?
>>
>> Imagining something like below Step1 should be enough to start querying on
>> the existing cache with fields X,Y,Z so that I can make a query like
>> "select
>> X,W from BinaryTest where W=32"
>>
>> cfg.setQueryEntities(new ArrayList() {{
>>QueryEntity e = new QueryEntity();
>>e.setKeyType("java.lang.Integer");
>>e.setValueType("BinaryTest");
>>e.setFields(new LinkedHashMap(){{
>>put("X", "java.lang.String");
>>put("Y", "java.lang.String");
>>put("Z", "java.lang.String");
>>put("W", "java.lang.String"); //Step1
>>  }});
>>
>>
>>
>>
>> --
>> View this message in context:
>> http://apache-ignite-users.70518.x6.nabble.com/Adding-new-fields-to-QueryEntity-tp13043.html
>> Sent from the Apache Ignite Users mailing list archive at Nabble.com.
>>
>


Re: [ANNOUNCE] Apache Ignite 2.0.0 Released

2017-05-06 Thread Vladimir Ozerov
Paolo,

Yes, there were breaking API changes. Please refer to the Migration Guide [1].

[1]
https://cwiki.apache.org/confluence/display/IGNITE/Apache+Ignite+2.0+Migration+Guide

On Sat, May 6, 2017 at 11:58 AM, Paolo Di Tommaso  wrote:

> Congrats people!
>
> The new ML Grid feature looks *very* promising.
>
> Is there any breaking change in version 2.0  ?
>
>
> Cheers,
> Paolo
>
>
> On Fri, May 5, 2017 at 10:13 PM, Denis Magda  wrote:
>
>> The Apache Ignite Community is pleased to announce the release of Apache
>> Ignite 2.0.0.
>>
>> Apache Ignite In-Memory Data Fabric [1] is a high-performance, integrated
>> and distributed in-memory platform for computing and transacting on
>> large-scale data sets in real-time, orders of magnitude faster than
>> possible with traditional disk-based or flash-based technologies.
>>
>> The Fabric is a collection of independent and well integrated components
>> some of which are the following:
>> - Data Grid
>> - SQL Grid
>> - Compute Grid
>> - Service Grid
>> - Machine Learning Grid (NEW!)
>>
>> This release incorporates tremendous changes that shifted Ignite 2.0 to
>> the next level. The whole off-heap memory architecture was redesigned from
>> scratch and, going forward, Ignite can be easily integrated with Flash and
>> SSD drives (stay tuned!). SQL Grid has been enriched with Data Definition
>> Language support. A completely new component was added to the fabric -
>> Machine Learning Grid. We provided integrations with RocketMQ and Spring
>> Data, updated Hibernate integration, introduced a plugin system for .NET
>> and found out how to execute C++ code on remote node, and many more...
>>
>> Get more details from this blog post: https://blogs.apache.org/ignit
>> e/entry/apache-ignite-2-0-redesigned
>>
>> The full list of the changes can be found here [2].
>>
>> Please visit this page if you’re ready to try the release out:
>> https://ignite.apache.org/download.cgi
>>
>> Please let us know [3] if you encounter any problems.
>>
>> Regards,
>>
>> The Apache Ignite Community
>>
>> [1] https://ignite.apache.org
>> [2] https://ignite.apache.org/releases/2.0.0/release_notes.html
>> [3] https://ignite.apache.org/community/resources.html#ask
>
>
>


Re: Integrate IGFS and Impala

2017-03-26 Thread Vladimir Ozerov
It looks like Impala expects some specific file system types. There is
nothing we can do about it. IGFS will never extend
"org.apache.hadoop.hdfs.DistributedFileSystem" because it is a completely
different implementation.

The question should be addressed to the Impala developers.

On Sun, Mar 26, 2017 at 9:19 PM, Denis Magda  wrote:

> Cross-posting to the dev list.
>
> IGFS gurus, are you aware of any existing and workable plug-n-play
> integration of Impala and IGFS? Looks this is unattainable unless Impala is
> supported directly by our in-memory file system.
>
> —
> Denis
>
> > On Mar 24, 2017, at 10:23 PM, Masayuki Takahashi 
> wrote:
> >
> > Hi,
> >
> > I want to use Impala with IGFS. But it failed.
> > The reason why is that Impala depends on
> > org.apache.hadoop.hdfs.DistributedFileSystem.
> >
> > https://github.com/cloudera/Impala/blob/2717f7378c406195eab241b34b184e
> ee5f574f91/fe/src/main/java/org/apache/impala/service/
> JniFrontend.java#L719
> >
> > IgniteHadoopFileSystem does not extend it.
> >
> > https://ignite.apache.org/releases/mobile/org/apache/
> ignite/hadoop/fs/v1/IgniteHadoopFileSystem.html
> > https://ignite.apache.org/releases/mobile/org/apache/
> ignite/hadoop/fs/v2/IgniteHadoopFileSystem.html
> >
> > Have anyone tried it?
> >
> > thanks.
> >
> > --
> > Masayuki Takahashi
>
>


Re: Remote SPI with the same name is not configured

2017-03-02 Thread Vladimir Ozerov
Hi,

Looks like some of your caches have the *swapEnabled *property set to *true*.
Please try setting *FileSwapSpaceSpi *explicitly in the configuration of your
nodes.

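A minimal programmatic sketch of that change (the cfg variable is an
assumption; the same can be expressed as the swapSpaceSpi property in the
Spring XML configuration):

    // Set the same swap SPI explicitly on every node so that the
    // configuration consistency check passes.
    IgniteConfiguration cfg = new IgniteConfiguration();
    cfg.setSwapSpaceSpi(new FileSwapSpaceSpi());
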
Vladimir.

On Thu, Mar 2, 2017 at 3:00 PM, ght230  wrote:

> I start an Ingnite Server node first, Sometimes when I start the second
> Server Node, following information outputted.
>
> [09:52:21] Security status [authentication=off, tls/ssl=off]
> [09:52:23,921][ERROR][main][IgniteKernal] Got exception while starting
> (will
> rollback startup routine).
> org.apache.ignite.IgniteCheckedException: Failed to initialize SPI
> context.
> at
> org.apache.ignite.internal.managers.GridManagerAdapter.onKernalStart(
> GridManagerAdapter.java:586)
> ~[ignite-core-1.6.12.jar:1.6.12]
> at org.apache.ignite.internal.IgniteKernal.start(
> IgniteKernal.java:976)
> [ignite-core-1.6.12.jar:1.6.12]
> at
> org.apache.ignite.internal.IgnitionEx$IgniteNamedInstance.start0(
> IgnitionEx.java:1798)
> [ignite-core-1.6.12.jar:1.6.12]
> at
> org.apache.ignite.internal.IgnitionEx$IgniteNamedInstance.start(
> IgnitionEx.java:1602)
> [ignite-core-1.6.12.jar:1.6.12]
> at org.apache.ignite.internal.IgnitionEx.start0(IgnitionEx.
> java:1042)
> [ignite-core-1.6.12.jar:1.6.12]
> at
> org.apache.ignite.internal.IgnitionEx.startConfigurations(
> IgnitionEx.java:964)
> [ignite-core-1.6.12.jar:1.6.12]
> at org.apache.ignite.internal.IgnitionEx.start(IgnitionEx.
> java:850)
> [ignite-core-1.6.12.jar:1.6.12]
> at org.apache.ignite.internal.IgnitionEx.start(IgnitionEx.
> java:749)
> [ignite-core-1.6.12.jar:1.6.12]
> at org.apache.ignite.internal.IgnitionEx.start(IgnitionEx.
> java:619)
> [ignite-core-1.6.12.jar:1.6.12]
> at org.apache.ignite.internal.IgnitionEx.start(IgnitionEx.
> java:589)
> [ignite-core-1.6.12.jar:1.6.12]
> at org.apache.ignite.Ignition.start(Ignition.java:347)
> [ignite-core-1.6.12.jar:1.6.12]
> at
> org.apache.ignite.startup.cmdline.CommandLineStartup.
> main(CommandLineStartup.java:302)
> [ignite-core-1.6.12.jar:1.6.12]
> Caused by: org.apache.ignite.spi.IgniteSpiException: Remote SPI with the
> same name is not configured (fix configuration or set
> -DIGNITE_SKIP_CONFIGURATION_CONSISTENCY_CHECK=true system property)
> [name=FileSwapSpaceSpi,
> loc=org.apache.ignite.spi.swapspace.file.FileSwapSpaceSpi]
> at
> org.apache.ignite.spi.IgniteSpiAdapter.checkConfigurationConsistency(
> IgniteSpiAdapter.java:550)
> ~[ignite-core-1.6.12.jar:1.6.12]
> at
> org.apache.ignite.spi.IgniteSpiAdapter.onContextInitialized(
> IgniteSpiAdapter.java:231)
> ~[ignite-core-1.6.12.jar:1.6.12]
> at
> org.apache.ignite.internal.managers.GridManagerAdapter.onKernalStart(
> GridManagerAdapter.java:337)
> ~[ignite-core-1.6.12.jar:1.6.12]
> ... 11 more
> [09:52:24] Ignite node stopped OK [uptime=00:00:04:119]
> class org.apache.ignite.spi.IgniteSpiException: Remote SPI with the same
> name is not configured (fix configuration or set
> -DIGNITE_SKIP_CONFIGURATION_CONSISTENCY_CHECK=true system property)
> [name=FileSwapSpaceSpi,
> loc=org.apache.ignite.spi.swapspace.file.FileSwapSpaceSpi]
> at
> org.apache.ignite.spi.IgniteSpiAdapter.checkConfigurationConsistency(
> IgniteSpiAdapter.java:550)
> at
> org.apache.ignite.spi.IgniteSpiAdapter.onContextInitialized(
> IgniteSpiAdapter.java:231)
> at
> org.apache.ignite.internal.managers.GridManagerAdapter.onKernalStart(
> GridManagerAdapter.java:337)
> at org.apache.ignite.internal.IgniteKernal.start(
> IgniteKernal.java:976)
> at
> org.apache.ignite.internal.IgnitionEx$IgniteNamedInstance.start0(
> IgnitionEx.java:1798)
> at
> org.apache.ignite.internal.IgnitionEx$IgniteNamedInstance.start(
> IgnitionEx.java:1602)
> at org.apache.ignite.internal.IgnitionEx.start0(IgnitionEx.
> java:1042)
> at
> org.apache.ignite.internal.IgnitionEx.startConfigurations(
> IgnitionEx.java:964)
> at org.apache.ignite.internal.IgnitionEx.start(IgnitionEx.
> java:850)
> at org.apache.ignite.internal.IgnitionEx.start(IgnitionEx.
> java:749)
> at org.apache.ignite.internal.IgnitionEx.start(IgnitionEx.
> java:619)
> at org.apache.ignite.internal.IgnitionEx.start(IgnitionEx.
> java:589)
> at org.apache.ignite.Ignition.start(Ignition.java:347)
> at
> org.apache.ignite.startup.cmdline.CommandLineStartup.
> main(CommandLineStartup.java:302)
> Failed to start grid: Remote SPI with the same name is not configured (fix
> configuration or set -DIGNITE_SKIP_CONFIGURATION_CONSISTENCY_CHECK=true
> system property) [name=FileSwapSpaceSpi,
> loc=org.apache.ignite.spi.swapspace.file.FileSwapSpaceSpi]
>
>
>
>
> --
> View this message in context: http://apache-ignite-users.
> 70518.x6.nabble.com/Remote-SPI-with-the-same-name-is-not-
> configured-tp10989.html
> Sent from the Apache Ignite Users mailing list archive at Nabble.com.
>


Re: @SqlQueryField on composite primary key

2017-02-16 Thread Vladimir Ozerov
Zaid,
In this case an "id" column will be created. If you query this attribute,
PersonPK will be returned.

Sergi,
This is in fact a widely-used pattern which I have seen many times in
production systems. It allows the user to have a self-contained object.
Otherwise one would have to introduce a third value object holding both the
key and value parts.

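To illustrate the first point, a minimal sketch using the actual
org.apache.ignite.cache.query.annotations.QuerySqlField annotation (class
bodies shortened):

    public class Person {
        @QuerySqlField
        private PersonPK id; // exposed to SQL as the "id" column

        @QuerySqlField
        private String name;
    }

    // SELECT id, name FROM Person -- the "id" column yields the PersonPK object
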
On Thu, Feb 16, 2017 at 2:15 PM, Sergi Vladykin 
wrote:

> If you are going to store this stuff in cache like
>
> cache.put(myPersonPK, myPerson)
>
> then storing PersonPK inside of Person is a bad idea, because the PK will be
> stored twice in the cache.
>
> I'm not sure I understand your question, but it seems to me that you don't
> need to do anything here because cache key is always accessible from SQL as
> _KEY field.
>
> Sergi
>
> 2017-02-16 13:48 GMT+03:00 zaid :
>
>> Hi,
>>
>> My POJO has composite primary key.
>>
>> e.g.
>>
>> public class Person {
>>
>> PersonPK id;
>>
>> String name;
>>
>> }
>>
>> public class PersonPK {
>> Long id1;
>> String personType;
>> }
>>
>> These classes are just for example purpose. My question is - can I apply
>> @SqlQueryField directly on id field in Person class which is of type
>> PersonPK. How this will be used while creating table in H2?
>>
>>
>>
>>
>>
>> --
>> View this message in context: http://apache-ignite-users.705
>> 18.x6.nabble.com/SqlQueryField-on-composite-primary-key-tp10666.html
>> Sent from the Apache Ignite Users mailing list archive at Nabble.com.
>>
>
>


Re: ignite as Hadoop's Map Reduce engine - how it works?

2017-02-13 Thread Vladimir Ozerov
Hi,

Our Hadoop Accelerator has its own shuffle algorithm. When a job request
arrives, we assign mappers and reducers to the most appropriate nodes in
terms of data locality and available resources.

The shuffle itself adds K-V pairs destined for the local reducer to a sorted
collection right away. K-V pairs for remote reducers are packed into batches,
sent to those nodes, and then added to their sorted collections.

Internally we share job state between nodes with the help of a cache, which
is why you may see occasional transactions.



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/ignite-as-Hadoop-s-Map-Reduce-engine-how-it-works-tp10551p10592.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Re: IGFS Questions

2017-01-26 Thread Vladimir Ozerov
Hi,

1. Durability depends on the IGFS mode. In PRIMARY mode there is no
durability. In other modes IGFS will propagate writes to the underlying file
system (e.g. to HDFS).
2. Files stored in IGFS are always partitioned. You can specify the number of
backups and/or REPLICATED mode in the data cache configuration.
3. Yes, as long as you have a Hadoop-compliant implementation of the S3 file
system (e.g. org.apache.hadoop.fs.s3.S3FileSystem).
4. You can configure evictions from the data cache. Please refer to the
org.apache.ignite.cache.eviction.igfs.IgfsPerBlockLruEvictionPolicy class.
5. The underlying file system must be shared between all nodes in the
cluster. If it is, then you can use
org.apache.ignite.igfs.secondary.local.LocalIgfsSecondaryFileSystem.
6. You can define pinned files before node start, but you cannot change them
at runtime. Please refer to the
IgfsPerBlockLruEvictionPolicy.setExcludePaths() method (see the sketch after
this list).
7. PROXY mode doesn't cache any data, but simply delegates to the secondary
file system. It is useful when you do not want to cache a certain part of the
data at all, for example if you access it only once.
8. No. Write-behind speeds up user operations at the cost of consistency
guarantees.

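Regarding items 4 and 6, a minimal configuration sketch (the cache
configuration variable, the size and the path pattern are illustrative
assumptions):

    // Evict IGFS data blocks above a 512MB threshold, but never evict
    // (i.e. pin) files matching /pinned/.* - configured before node start.
    IgfsPerBlockLruEvictionPolicy plc = new IgfsPerBlockLruEvictionPolicy();
    plc.setMaxSize(512L * 1024 * 1024);
    plc.setExcludePaths(Collections.singleton("/pinned/.*"));

    dataCacheCfg.setEvictionPolicy(plc); // the IGFS data cache configuration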

On Tue, Jan 24, 2017 at 5:00 PM, pragmaticbigdata 
wrote:

> I have some questions of deploying IGFS as a cache layer given that ignite
> could be deployed both as a key-value store and as a file system
>
> 1. How does IGFS behave when deployed in standalone mode? I wanted to
> confirm that there is no durability in this mode. Assuming I persist a
> parquet file on IGFS, if the cluster goes down I lose the file, right?
> 2. Do we get the ability to specify the fact that the file stored in IGFS
> could be both partitioned (with backup nodes) or replicated?
> 3. IGFS can act as a cache layer over HDFS and local file system, can it
> act
> as a caching layer over S3 store?
> 4. As with the key-value store, can I configure tiered storage in IGFS i.e.
> given that IGFS is configured with local file system as the secondary store
> and the ignite cluster of 3 server nodes configured with 5GB memory each,
> would the data spill over to the local disk if I try to load a 25GB file
> into IGFS? If so, what is the configuration needed?
> 5. Can I configure local SSD disks as the secondary store for IGFS?
> 6. I browsed through the documentation but I didn't find the capability of
> pinning and unpinning files in IGFS. I am looking something similar to what
> alluxio
>  Alluxio.html#pinning-files>
> provides. Can it be implemented?
> 7. Could you elaborate a bit on how IGFS Proxy Mode works? What is its
> recommended use case?
> 8. With DUAL_ASYNC (write-behind mode), does ignite have failover
> guarantees
> which is lacking in the key-value store -
> https://issues.apache.org/jira/browse/IGNITE-1897?
>
> Thanks in advance.
>
>
>
> --
> View this message in context: http://apache-ignite-users.
> 70518.x6.nabble.com/IGFS-Questions-tp10217.html
> Sent from the Apache Ignite Users mailing list archive at Nabble.com.
>


Re: Ignite cluster will be jammed for long-running queries

2017-01-25 Thread Vladimir Ozerov
IGNITE-4105 is a must-have ticket for Apache Ignite 2.0.

On Wed, Jan 25, 2017 at 8:58 AM, ght230  wrote:

> Yes, some queries will be long running, they are unavoidable.
>
> Query slow is acceptable, but the cluster jammed is a bit of trouble.
>
> I will be grateful if IGNITE-4105 can be solved as soon as possible.
>
>
>
> --
> View this message in context: http://apache-ignite-users.
> 70518.x6.nabble.com/Ignite-cluster-will-be-jammed-for-
> long-running-queries-tp10210p10238.html
> Sent from the Apache Ignite Users mailing list archive at Nabble.com.
>


Re: Abandon of the non-throwing version of C++ client API

2017-01-23 Thread Vladimir Ozerov
Originally I tried to cover the situation when a user has disabled exceptions.
In that case throwing anything will lead to an application crash. However, I
now believe that such a use case is not very likely, as Ignite is a relatively
high-level product. For this reason I would prefer to keep a clean and compact
API which throws exceptions, but still have a workaround for users who do not
want that. For instance, we may consult some environment variable at runtime,
or some preprocessor flag at compile time, and expose an additional static
"GetLastError" method.

On Mon, Jan 23, 2017 at 11:35 PM, Denis Magda <dma...@apache.org> wrote:

> Guys,
>
> I found the initial discussion from the early times of our C++ client:
> http://apache-ignite-developers.2346864.n4.nabble.
> com/C-exception-handling-strategy-td778.html
>
> Vovan, that time you were on the side of error-code methods. Why have you
> changed your mind, proposing to make the throwing version the default one?
> Any new tendency in the C++ community?
>
> —
> Denis
>
> On Jan 23, 2017, at 2:56 AM, Vladimir Ozerov <voze...@gridgain.com> wrote:
>
> +1 to Igor's idea. Ignite is relatively high-level product and we do not
> expect ultra-optimized users who cannot allow exceptions to be enabled.
> Macros should be a good workaround for them, though.
>
> On Sat, Jan 21, 2017 at 6:47 PM, Denis Magda <dma...@gridgain.com> wrote:
>
>> Hi Igor,
>>
>> My C++ experience is based only on error code methods. This is why I
>> thought that exceptions based approach is unrelated to C++ at all.
>>
>> I do remember we discussed all the pros and cons of these ways before.
>> Could you find that old discussion and share it here? I'm on a mobile now,
>> not easy to do on my own.
>>
>> Denis
>>
>>
>> On Friday, January 20, 2017, Igor Sapego <isap...@gridgain.com> wrote:
>>
>>> Hi Igniters,
>>>
>>> I'm the guy who mostly contribute in C++ Ignite client and I
>>> need your advice. Mostly I'd like to hear from our users and
>>> those who are experienced in C++. Currently we have two
>>> versions of most API methods - the throwing one and the
>>> one that returns error through output argument. This was initially
>>> done because we were not sure which way of error-reporting
>>> is going to be preferred by our users.
>>>
>>> Now this approach bloats C++ API a lot and makes it harder to
>>> maintain and optimize code. I propose like to abandon and deprecate
>>> non-throwing version of API and only leave throwing version,
>>> but first I want to hear from you guys - what do you think? Does
>>> anyone use non-throwing version of the API? Maybe your toolchain
>>> does not support exceptions or are you disabling them on purpose?
>>>
>>> For those who prefer disabling exceptions I propose to introduce
>>> some macros like IGNITE_DISABLE_EXCEPTIONS and add
>>> some thread-local error-storing mechanism like ignite::GetLastError().
>>>
>>> What do you guys think?
>>>
>>> Best Regards,
>>> Igor
>>>
>>
>
>


Re: How IGFS keep sync with HDFS?

2016-12-02 Thread Vladimir Ozerov
Hi,

IGFS always propagates updates to HDFS immediately (or with a slight delay in
the case of writes in DUAL_ASYNC mode). It doesn't remove data from memory
after flushing it to HDFS. You can try configuring
*org.apache.ignite.cache.eviction.igfs.IgfsPerBlockLruEvictionPolicy
*to evict some data blocks from the IGFS data cache.

As for the expiry policy, it doesn't affect the flushing logic in any way. If
eviction happens after the flush, the data will not be lost; otherwise it can
be lost. Though, this is possible only in DUAL_ASYNC mode with a very low TTL.

Vladimir.

On Tue, Nov 29, 2016 at 2:17 PM, Kaiming Wan <344277...@qq.com> wrote:

> If IGFS can't hold more input data, will it flush data to HDFS? How does
> IGFS flush data to HDFS? Async or sync?
>
> If it is async mode, when IGFS can't hold more data, it will use the cache
> expiry policy. And if the expired data is not persisted to HDFS in time,
> the data will be lost. Is this possible?
>
>
>
> --
> View this message in context: http://apache-ignite-users.
> 70518.x6.nabble.com/How-IGFS-keep-sync-with-HDFS-tp9258p9262.html
> Sent from the Apache Ignite Users mailing list archive at Nabble.com.
>


Re: BinaryObject pros/cons

2016-10-30 Thread Vladimir Ozerov
Valya,

I have several concerns:
1) Correctness: hasField() will not work properly. But probably we can fix
that by adding this info to the schema.
2) Performance: we have lots of optimizations which depend on either a
"stable" object schema or a low number of schemas. We would effectively turn
them off. But what concerns me even more is that we may end up with an
enormous number of schemas. E.g. consider an object with 10 numeric fields:
if each field may or may not be zero, we may end up with something like 2^10
schemas.

Vladimir.

29 окт. 2016 г. 0:37 пользователь "Valentin Kulichenko" <
valentin.kuliche...@gmail.com> написал:

> Vova,
>
> Why do we need to write zeros and nulls in the first place? What's the
> value of having them in the byte array?
>
> -Val
>
> On Fri, Oct 28, 2016 at 1:18 AM, Vladimir Ozerov <voze...@gridgain.com>
> wrote:
>
>> Valya,
>>
>> Currently null value is written as one byte, while zero value of long
>> type is written as 9 bytes. I want to improve that and write zeros as one
>> byte as well.
>>
>> As per var-length encoding, I am strongly against it. It saves IO and
>> memory at the cost of CPU. If we encode numbers in this way we will
>> slowdown SQL (which is already not very fast, to be honest). Because
>> instead of a single read memory read, we will have to perform multiple
>> reads and then apply some mechanics to restore original value. We already
>> have such problem with Strings - Java stores them as UTF-16, but we encode
>> them as UTF-8. As a result every read of a string field in SQL results in
>> decoding overhead.
>>
>> Vladimir.
>>
>> On Fri, Oct 28, 2016 at 6:07 AM, Valentin Kulichenko <
>> valentin.kuliche...@gmail.com> wrote:
>>
>>> Cross-posting this to dev list.
>>>
>>> Vladimir,
>>>
>>> To be honest, I don't see much difference between null values for
>>> objects and zero values for primitives. From BinaryObject semantics
>>> standpoint, both are default values for corresponding types. These values
>>> will be returned from the BinaryObject.field() method regardless of whether
>>> we actually save them in the byte array or not. Having said that, why don't
>>> we just skip them during write?
>>>
>>> Your optimization will still be useful though, because there are often a
>>> lot of ints and longs that are not zeros, but still small and can fit 1-2
>>> bytes. We already added such compaction in direct message marshaling and it
>>> reduced overall traffic by around 30%.
>>>
>>> -Val
>>>
>>>
>>> On Thu, Oct 27, 2016 at 2:21 PM, Vladimir Ozerov <voze...@gridgain.com>
>>> wrote:
>>>
>>>> Hi,
>>>>
>>>> I am not very concerned with null fields overhead, because usually it
>>>> won't be significant. However, there is a problem with zeros. User object
>>>> might have lots of int/long zeros, this is not uncommon. And each zero will
>>>> consume 4-8 additional bytes. We probably will implement special
>>>> optimization which will write such fields in special compact format.
>>>>
>>>> Vladimir.
>>>>
>>>> On Thu, Oct 27, 2016 at 10:55 PM, vkulichenko <
>>>> valentin.kuliche...@gmail.com> wrote:
>>>>
>>>>> Hi,
>>>>>
>>>>> Yes, null values consume memory. I believe this can be optimized, but I
>>>>> haven't seen issues with this so far. Unless you have hundreds of
>>>>> fields
>>>>> most of which are nulls (very rare case), the overhead is minimal.
>>>>>
>>>>> -Val
>>>>>
>>>>>
>>>>>
>>>>> --
>>>>> View this message in context: http://apache-ignite-users.705
>>>>> 18.x6.nabble.com/BinaryObject-pros-cons-tp8541p8563.html
>>>>> Sent from the Apache Ignite Users mailing list archive at Nabble.com.
>>>>>
>>>>
>>>>
>>>
>>
>


Re: BinaryObject pros/cons

2016-10-28 Thread Vladimir Ozerov
Valya,

Currently a null value is written as one byte, while a zero value of the long
type is written as 9 bytes (a header byte plus 8 value bytes). I want to
improve that and write zeros as one byte as well.

As for var-length encoding, I am strongly against it. It saves IO and
memory at the cost of CPU. If we encode numbers in this way we will slow
down SQL (which is already not very fast, to be honest), because instead
of a single memory read we will have to perform multiple reads and then
apply some mechanics to restore the original value. We already have such
a problem with Strings: Java stores them as UTF-16, but we encode them as
UTF-8. As a result, every read of a string field in SQL incurs decoding
overhead.

Vladimir.

On Fri, Oct 28, 2016 at 6:07 AM, Valentin Kulichenko <
valentin.kuliche...@gmail.com> wrote:

> Cross-posting this to dev list.
>
> Vladimir,
>
> To be honest, I don't see much difference between null values for objects
> and zero values for primitives. From BinaryObject semantics standpoint,
> both are default values for corresponding types. These values will be
> returned from the BinaryObject.field() method regardless of whether we
> actually save them in the byte array or not. Having said that, why don't we
> just skip them during write?
>
> Your optimization will still be useful though, because there are often a
> lot of ints and longs that are not zeros, but still small and can fit 1-2
> bytes. We already added such compaction in direct message marshaling and it
> reduced overall traffic by around 30%.
>
> -Val
>
>
> On Thu, Oct 27, 2016 at 2:21 PM, Vladimir Ozerov <voze...@gridgain.com>
> wrote:
>
>> Hi,
>>
>> I am not very concerned with null fields overhead, because usually it
>> won't be significant. However, there is a problem with zeros. User object
>> might have lots of int/long zeros, this is not uncommon. And each zero will
>> consume 4-8 additional bytes. We probably will implement special
>> optimization which will write such fields in special compact format.
>>
>> Vladimir.
>>
>> On Thu, Oct 27, 2016 at 10:55 PM, vkulichenko <
>> valentin.kuliche...@gmail.com> wrote:
>>
>>> Hi,
>>>
>>> Yes, null values consume memory. I believe this can be optimized, but I
>>> haven't seen issues with this so far. Unless you have hundreds of fields
>>> most of which are nulls (very rare case), the overhead is minimal.
>>>
>>> -Val
>>>
>>>
>>>
>>> --
>>> View this message in context: http://apache-ignite-users.705
>>> 18.x6.nabble.com/BinaryObject-pros-cons-tp8541p8563.html
>>> Sent from the Apache Ignite Users mailing list archive at Nabble.com.
>>>
>>
>>
>


Re: BinaryObject pros/cons

2016-10-27 Thread Vladimir Ozerov
Hi,

I am not very concerned with the null fields overhead, because usually it
won't be significant. However, there is a problem with zeros. A user object
might have lots of int/long zeros; this is not uncommon. And each zero will
consume 4-8 additional bytes. We will probably implement a special
optimization which writes such fields in a special compact format.

Vladimir.

On Thu, Oct 27, 2016 at 10:55 PM, vkulichenko  wrote:

> Hi,
>
> Yes, null values consume memory. I believe this can be optimized, but I
> haven't seen issues with this so far. Unless you have hundreds of fields
> most of which are nulls (very rare case), the overhead is minimal.
>
> -Val
>
>
>
> --
> View this message in context: http://apache-ignite-users.
> 70518.x6.nabble.com/BinaryObject-pros-cons-tp8541p8563.html
> Sent from the Apache Ignite Users mailing list archive at Nabble.com.
>


Re: Does Apache Ignite use any proprietary api?

2016-10-19 Thread Vladimir Ozerov
Hi,

Thank you for pointing this out. I asked the Spring folks to clarify what
they mean.

Vladimir.

On Wed, Oct 19, 2016 at 10:11 AM, edwardk  wrote:

> Hi,
>
> Does Apache Ignite use any proprietary api within it.
>
> It seems the folks at Spring believe Apache Ignite uses proprietary api and
> so they do not want to support dependency management for it. Apache Ignite
> does not come in their list of supported caching providers in their popular
> framework Spring Boot.
>
> Check out the issue raised for it in Spring Boot as below.
> https://github.com/spring-projects/spring-boot/issues/6373
>
> I do not believe this to be true. I haven't see anything about that in the
> apache ignite site docs too.
>
> Can someone confirm on this.
>
>
> Thanks,
> edwardk
>
>
>
> --
> View this message in context: http://apache-ignite-users.
> 70518.x6.nabble.com/Does-Apache-Ignite-use-any-proprietary-api-tp8353.html
> Sent from the Apache Ignite Users mailing list archive at Nabble.com.
>


Re: MapReduce with Apache-Ignite

2016-10-15 Thread Vladimir Ozerov
Hi,

Currently mapper output is stored in memory only. We are working on a spilling
algorithm at the moment, which will flush part of it to disk according to
some pre-configured thresholds.

As for job logs, they are not supported by Ignite at the moment in the same
way as in Hadoop. We will add similar functionality soon.



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/MapReduce-with-Apache-Ignite-tp8007p8314.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Re: MapReduce with Apache-Ignite

2016-09-29 Thread Vladimir Ozerov
Hi,

1. Mapper output should be written to the same place as if the job was run
through the native Apache Hadoop engine. The Apache Ignite Hadoop Accelerator
can work without IGFS at all.
2. Currently you will not see jobs in the Resource Manager because they are
executed through a separate engine. We will improve this in the future.
Please clarify which logs you mean.

Vladimir.



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/MapReduce-with-Apache-Ignite-tp8007p8016.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Re: Hive on Ignited- Yarn

2016-09-27 Thread Vladimir Ozerov
Hi,

First of all we need to ensure that Ignite is really used in your case.
Please advise which Hadoop deployment you use and where the mapred-site.xml
and core-site.xml files you mentioned are located.

Vladimir.



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/Hive-on-Ignited-Yarn-tp7838p7964.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Re: Is ignite installation required on all nodes of hadoop cluster

2016-09-27 Thread Vladimir Ozerov
Hi,

Could you please provide more information on your use case? 

By default we recommend having a single Ignite node started on every HDFS
data node so that we can take advantage of data locality and minimize
network traffic. But this is not a strict requirement, and the final cluster
deployment may vary depending on the use case and technology stack. So the
more details we have about your use case, the better solution we can advise.

Vladimir.



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/Is-ignite-installation-required-on-all-nodes-of-hadoop-cluster-tp7919p7963.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Re: Is ignite supported on Mapr File system (mfs)

2016-09-27 Thread Vladimir Ozerov
Hi,

MapR support will be available in Apache Ignite 1.8.

Vladimir.



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/Is-ignite-supported-on-Mapr-File-system-mfs-tp7914p7962.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Re: Affinity and Topology versions

2016-09-26 Thread Vladimir Ozerov
Hi,

Yes, Ignite guarantees that all nodes participating in a transaction operate
on the same topology version. I explained the problem with the assumed race
condition in the relevant thread.

Vladimir.



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/Affinity-and-Topology-versions-tp7880p7948.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Re: Race condition with partition affinity mapping

2016-09-26 Thread Vladimir Ozerov
Hi.

Looks like you are observing the "late affinity assignment" feature, which is
aimed at achieving better overall cluster performance during rebalance. Please
see the IgniteConfiguration.isLateAffinityAssignment() method for more
information.

Vladimir.



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/Race-condition-with-partition-affinity-mapping-tp7848p7947.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Re: IgniteException: No FileSystem for scheme: hdfs

2016-08-15 Thread Vladimir Ozerov
Hi Vikram,

As you delegate to HDFS, you should have all relevant HDFS classes in the
classpath. Please make sure that you have added all JAR files from the following
directories to your startup script:

/opt/cloudera/parcels/CDH-5.8.0-1.cdh5.8.0.p0.42/lib/hadoop/ 
/opt/cloudera/parcels/CDH-5.8.0-1.cdh5.8.0.p0.42/lib/hadoop-hdfs/ 
/opt/cloudera/parcels/CDH-5.8.0-1.cdh5.8.0.p0.42/lib/hadoop-mapreduce/ 

This is not needed when you start the node using ignite.sh, as these JAR
files are added automatically. But this is not the case when starting an
application with embedded Ignite from the command line.

Vladimir.



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/IgniteException-No-FileSystem-for-scheme-hdfs-tp7014p7061.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Re: Cannot get the data in Tableau

2016-08-12 Thread Vladimir Ozerov
Hi Austin,

Most probably Igor Sapego will be able to assist you. As far as I know he
is unavailable this week, but I hope he will be able to answer you at the
beginning of next week.

Vladimir.

On Tue, Aug 9, 2016 at 2:34 PM, austin solomon 
wrote:

> Hi,
>
> I am using Apache ignite 1.7.0-SNAPSHOT version.
>
> My ignite cluster is running in CentOS server and I'm trying to connect to
> it through ODBC driver from Windows PC.
>
> My confguration is like below:
>
> 
> 
> 
>
> 
> 
> 
> 
> 
> 
>  value="FULL_SYNC"/>
>
> 
> 
> 
> 
>  value="java.lang.Long"/>
> 
> 
> 
>  value="java.lang.Long"/>
> 
> 
> 
> 
> 
>  value="java.lang.String"/>
>  value="java.lang.String"/>
>  value="java.lang.String"/>
>  value="java.lang.Integer"/>
> 
> 
> 
> 
> 
> 
>
> 
> 
> 
> 
>  value="FULL_SYNC"/>
>
> 
> 
> 
> 
>  value="java.lang.Long"/>
>  value="Organization"/>
> 
> 
>  value="java.lang.String"/>
> 
> 
> 
> 
> 
> 
> 
> 
>
> My problem here is, I am not able to get the data present in the cache that
> I have created. In Tableau I can see only the columns.
>
> When i checked the cache using ignitevisorcmd.sh i got the following
> output:
> Time of the snapshot: 08/09/16, 16:52:46
> +===
> ==+
> |  Name(@)  |Mode | Nodes | Entries (Heap / Off heap) |
> Hits|  Misses   |   Reads   |  Writes   |
> +===
> ==+
> | Organization(@c0) | PARTITIONED | 1 | min: 2 (2 / 0)|
> min: 0| min: 0| min: 0| min: 0|
> |   | |   | avg: 2.00 (2.00 / 0.00)   |
> avg: 0.00 | avg: 0.00 | avg: 0.00 | avg: 0.00 |
> |   | |   | max: 2 (2 / 0)|
> max: 0| max: 0| max: 0| max: 0|
> +---+-+---+-
> --+---+---+---+---+
> | Person(@c1)   | PARTITIONED | 1 | min: 5 (5 / 0)|
> min: 0| min: 0| min: 0| min: 0|
> |   | |   | avg: 5.00 (5.00 / 0.00)   |
> avg: 0.00 | avg: 0.00 | avg: 0.00 | avg: 0.00 |
> |   | |   | max: 5 (5 / 0)|
> max: 0| max: 0| max: 0| max: 0|
> +---
> --+
>
> I have attached the screen shot.
>
> Can you please tell me what I am missing here?
>
> Thanks
> Austin
>


Re: MR jobs NOT starting in HIVE over IGFS

2016-08-12 Thread Vladimir Ozerov
Hi,

Could you please subscribe to the user list so that we see your messages not
only on the forum, but in the mailing lists as well? Instructions can be
found here: http://ignite.apache.org/community/resources.html#mail-lists

As for your problem, we need additional information. In particular:
1) Please attach the full Ignite XML configuration. You provided only some
beans, but we need to get the whole picture.
2) Please attach the Ignite server logs. Chances are that we will see some
exceptions or warnings there.

Vladimir.



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/MR-jobs-NOT-starting-in-HIVE-over-IGFS-tp6870p7006.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Re: 1.7.0 release on mvn central?

2016-08-08 Thread Vladimir Ozerov
Hi,

Yes, it was just released. E.g.:
https://repository.apache.org/content/repositories/releases/org/apache/ignite/ignite-core/
It will take some time for the release to appear on Maven repo search sites.

Vladimir.

On Mon, Aug 8, 2016 at 5:46 AM, barrettbnr  wrote:

> Hello All,
>
> Will there be a 1.7.0 release pushed to maven central? We can of course
> build from source but very convenient to find it there.
>
> Thanks!
>
> -b
>
>
>
> --
> View this message in context: http://apache-ignite-users.
> 70518.x6.nabble.com/1-7-0-release-on-mvn-central-tp6841.html
> Sent from the Apache Ignite Users mailing list archive at Nabble.com.
>


Re: question about ignite benchmark test

2016-08-04 Thread Vladimir Ozerov
Hi Kevin,

This is the number of keys involved in a PUT or GET operation. If there is
1 key, then *IgniteCache.get() *or *IgniteCache.put() *operation was used.
If there are more keys, then we benchmarked *IgniteCache.getAll() *or
*IgniteCache.putAll()
*operations.

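Illustratively (the cache and values are arbitrary):

    cache.put(1, "v1");                  // the "1 key" graphs: get()/put()

    Map<Integer, String> batch = new HashMap<>();
    batch.put(1, "v1");
    batch.put(2, "v2");
    cache.putAll(batch);                 // the "2 keys" graphs: getAll()/putAll()
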
On Thu, Aug 4, 2016 at 11:08 AM, Zhengqingzheng 
wrote:

> Hi  there,
>
> I am reading ignite benchmark test result from this address:
> http://www.gridgain.com/resources/benchmarks/ignite-vs-hazelcast-benchmarks
>
> And I see there are some graph result labeled with descriptions like this:
> Graphs: 1 key
> 
> , 2 keys
> 
> , 6 keys
> 
> , 10 keys
> 
> .
>
> Can anyone gives me a rough description for the meaning of different keys
> in the test?
>
>
>
>
>
> Best regards,
>
> Kevin
>
>
>
>
>


Re: Problem about Ignite Hadoop Accelerator MRv2 with CDH5.5.2 and kerberos

2016-07-28 Thread Vladimir Ozerov
Hi Mao,

Ignite has a special file system factory for Kerberized environments -
*org.apache.ignite.hadoop.fs.KerberosHadoopFileSystemFactory*. Please try
using it to resolve your problem.
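
A minimal sketch of such a setup (the URI, keytab path and principal are
placeholder assumptions):

    KerberosHadoopFileSystemFactory factory = new KerberosHadoopFileSystemFactory();
    factory.setUri("hdfs://namenode:9000");    // secondary HDFS
    factory.setKeyTab("/path/to/user.keytab"); // Kerberos keytab file
    factory.setKeyTabPrincipal("user@XX.COM"); // Kerberos principal

    IgniteHadoopIgfsSecondaryFileSystem secFs = new IgniteHadoopIgfsSecondaryFileSystem();
    secFs.setFileSystemFactory(factory);
    igfsCfg.setSecondaryFileSystem(secFs);     // FileSystemConfiguration
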
Should you have any questions please provide your current XML configuration
and we will try to help you.

Vladimir.

On Fri, Jul 22, 2016 at 6:38 AM, mao guo  wrote:

> Hi,all
>
> In my env  CDH5.5.2 and kerberos,I configure ignite to Accelerator
> mapreduce 2 as the office docs,but I got a problem when submit a MR job
> like this:
> 16/07/22 11:16:42 WARN security.UserGroupInformation:
> PriviledgedActionException as:@XX.COM (auth:KERBEROS)
> cause:java.io.IOException: Failed to submit job (null status obtained):
> job_666baa91-cf92-49d9-a2ab-4cdc702019aa_0003
> java.io.IOException: Failed to submit job (null status obtained):
> job_666baa91-cf92-49d9-a2ab-4cdc702019aa_0003
> at
> org.apache.ignite.internal.processors.hadoop.proto.HadoopClientProtocol.submitJob(HadoopClientProtocol.java:123)
> at
> org.apache.hadoop.mapreduce.JobSubmitter.submitJobInternal(JobSubmitter.java:243)
> at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1307)
> at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1304)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:415)
> at
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1671)
> at org.apache.hadoop.mapreduce.Job.submit(Job.java:1304)
> at org.apache.hadoop.mapreduce.Job.waitForCompletion(Job.java:1325)
> at
> org.apache.hadoop.examples.QuasiMonteCarlo.estimatePi(QuasiMonteCarlo.java:306)
> at
> org.apache.hadoop.examples.QuasiMonteCarlo.run(QuasiMonteCarlo.java:354)
> at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
> at
> org.apache.hadoop.examples.QuasiMonteCarlo.main(QuasiMonteCarlo.java:363)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
> at
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:606)
> at
> org.apache.hadoop.util.ProgramDriver$ProgramDescription.invoke(ProgramDriver.java:71)
> at org.apache.hadoop.util.ProgramDriver.run(ProgramDriver.java:144)
> at
> org.apache.hadoop.examples.ExampleDriver.main(ExampleDriver.java:74)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
> at
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:606)
> at org.apache.hadoop.util.RunJar.run(RunJar.java:221)
> at org.apache.hadoop.util.RunJar.main(RunJar.java:136)
>
> Did I configure something wrong, or does Ignite not support MRv2 with
> Kerberos?
>
>


Re: 答复: ignite Connection refused exception

2016-07-21 Thread Vladimir Ozerov
Hi Kevin,

The last error message suggests that there was a problem when trying to
establish communication between two nodes on the same machine using the
shared memory mechanics. It is not yet clear what the reason for this is, but
I would try disabling shared memory and seeing if it helps.

In the XML file please add the following bean:

<bean class="org.apache.ignite.configuration.IgniteConfiguration">
    ...
    <property name="communicationSpi">
        <bean class="org.apache.ignite.spi.communication.tcp.TcpCommunicationSpi">
            <property name="sharedMemoryPort" value="-1"/>
        </bean>
    </property>
</bean>

And add the following line to your programmatic configuration:

TcpCommunicationSpi commSpi = new TcpCommunicationSpi();
commSpi.setSharedMemoryPort(-1);

igniteCCF.setCommunicationSpi(commSpi);

Please let me know if it resolves the problem. Otherwise please attach the
whole logs from all nodes.

Vladimir.

On Thu, Jul 21, 2016 at 12:03 PM, Zhengqingzheng 
wrote:

> Hi Denis,
>
> I have configured TcpDiscoverySpi both on the server side and client side.
>
> On the server side, my configuration is as follows:
>
>
>
> <property name="discoverySpi">
>     <bean class="org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi">
>         <property name="ipFinder">
>             <bean class="org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder">
>                 <property name="addresses">
>                     <list>
>                         <value>10.120.70.122:47500..47509</value>
>                         <value>10.120.89.196:47500..47509</value>
>                     </list>
>                 </property>
>             </bean>
>         </property>
>     </bean>
> </property>
>
> On the client side, I am using java to configure the cache, here is the
> code:
>
>
>
>TcpDiscoveryVmIpFinder  ipFinder = new
> TcpDiscoveryVmIpFinder(false);
>
>
>
> List addrs = new ArrayList();
>
> addrs.add(IGNITE_NEW_ADDRESS);
>
> addrs.add(IGNITE_ADDRESS);
>
> ipFinder.setAddresses(addrs);
>
>
>
>
>
> TcpDiscoverySpi discoverySpi = new TcpDiscoverySpi();
>
> discoverySpi.setIpFinder(ipFinder);
>
>
>
>// discoverySpi.setLocalAddress(instanceName);
>
>// discoverySpi.setLocalPort(47505);
>
> igniteCCF.setDiscoverySpi(discoverySpi);
>
>
>
>
>
>
>
> TcpCommunicationSpi commSpi = new TcpCommunicationSpi();
>
>
>
> igniteCCF.setCommunicationSpi(commSpi);
>
>
>
>
>
>
>
> //create ignite instance
>
> ignite = Ignition.start(igniteCCF);
>
>
>
>
>
> But I still cannot connect to the server. Or I should say, it did connect
> to the server; however, it closed without communication.
>
> When I check the server console, I can see the client node joined, but
> return the following error:
>
>  at
> org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.runTests(RemoteTestRunner.java:683)
>
>  at
> org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.run(RemoteTestRunner.java:390)
>
>  at
> org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.main(RemoteTestRunner.java:197)
>
> Caused by: class org.apache.ignite.IgniteCheckedException: Failed to start
> SPI: TcpDiscoverySpi [addrRslvr=null, sockTimeout=5000, ackTimeout=5000,
> reconCnt=10, maxAckTimeout=60, forceSrvMode=false,
> clientReconnectDisabled=false]
>
>  at
> org.apache.ignite.internal.managers.GridManagerAdapter.startSpi(GridManagerAdapter.java:258)
>
>  at
> org.apache.ignite.internal.managers.discovery.GridDiscoveryManager.start(GridDiscoveryManager.java:660)
>
>  at
> org.apache.ignite.internal.IgniteKernal.startManager(IgniteKernal.java:1505)
>
>  ... 36 more
>
> Caused by: class org.apache.ignite.spi.IgniteSpiException: Failed to
> connect to cluster, connection failed and failed to reconnect.
>
>  at
> org.apache.ignite.spi.discovery.tcp.ClientImpl$Reconnector.body(ClientImpl.java:1287)
>
>  at
> org.apache.ignite.spi.IgniteSpiThread.run(IgniteSpiThread.java:62)
>
>
>
>
>
> also I checked the log, the there are two types of error messages:
>
> 1.   
> [21:15:30,801][ERROR][grid-time-coordinator-#50%null%][GridClockSyncProcessor]
> Failed to send time sync snapshot to remote node (did not leave grid?)
> [nodeId=de5a1f4c-d051-4dec-97d0-37da943ebd88,
> msg=GridClockDeltaSnapshotMessage [snapVer=GridClockDeltaVersion [ver=55,
> topVer=30], deltas={66198003-510d-4170-8b8f-5316c01f3d58=8740,
> 8c38e19b-3aeb-4865-834b-ee6327913980=96665,
> 8fb43d92-da97-4be9-8ecd-50c6456d0362=72055,
> 2623a147-609e-4e64-97e3-e8a7fc9ccc42=96665,
> 109e1f29-6c93-4e77-a6e0-ba09adbc79eb=-3227,
> 6cbddaee-3fc9-4132-88ed-19ab1e5195a1=-5236,
> de5a1f4c-d051-4dec-97d0-37da943ebd88=-10946}], err=Failed to send message
> (node may have left the grid or TCP connection cannot be established due to
> firewall issues) [node=TcpDiscoveryNode
> [id=de5a1f4c-d051-4dec-97d0-37da943ebd88, addrs=[0:0:0:0:0:0:0:1,
> 10.135.66.169, 127.0.0.1], sockAddrs=[/0:0:0:0:0:0:0:1:0,
> /0:0:0:0:0:0:0:1:0, /10.135.66.169:0, /127.0.0.1:0], discPort=0,
> order=30, intOrder=20, 

Re: igfs.withAsync is still synchronous in most file operations?

2016-07-21 Thread Vladimir Ozerov
Hi Nicolae,

This is not very easy question.

First, "withAsync()" was introduced to IGFS mainly to support task
execution (methods "execute(...)"). For now it is pretty clear that these
methods are of very little use because there are much more convenient
frameworks to achieve the same goals - Hadoop and Spark. And IGFS can be
plugged into them easily. So I think there are rather high chances that
almost all async methods will be removed in Apache Ignite 2.0.

Second, we already have a kind of asynchrony for file writes: the special
DUAL_ASYNC mode flushes data to the secondary file system asynchronously.
Having two "flavors" of asynchrony makes the API complex and dirty.

But having said that, I still think that asynchronous execution of standard
file system operations like "mkdirs", "remove", etc. could be useful. E.g.
removal of a directory with a million files may take substantial time, and
the user may want this process to happen in the background.

I hope we will come to some clear solution in Apache Ignite 2.0 and most
methods will have async counterparts.

Vladimir.


On Thu, Jul 21, 2016 at 12:48 AM, vkulichenko  wrote:

> Hi,
>
> Async execution is supported only for methods that are marked with
> @IgniteAsyncSupported annotation. read/write/create/delete operations are
> not among them.
>
> I'm not aware of any plans to provide this support. Can someone else from
> the community chime in?
>
> -Val
>
>
>
> --
> View this message in context:
> http://apache-ignite-users.70518.x6.nabble.com/igfs-withAsync-is-still-synchronous-in-most-file-operations-tp6420p6425.html
> Sent from the Apache Ignite Users mailing list archive at Nabble.com.
>


Re: How to use rest api to put an object into cache?

2016-06-28 Thread Vladimir Ozerov
Hi Kevin,

Yes, currently the REST protocol interprets everything as a String. At this
moment you can use the *ConnectorMessageInterceptor *interface. Once you
implement and configure it, you will start receiving callbacks for all keys
and values passed back and forth. So you can encode your object as a String
somehow and then convert it to the real object inside the interceptor. And
the opposite: before returning an object from the cache you can convert it to
some String form.

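A minimal sketch of the interceptor approach (the toJson/fromJson helpers and
the MyObject type are hypothetical placeholders for your own encoding):

    public class StringCodecInterceptor implements ConnectorMessageInterceptor {
        @Override public Object onReceive(Object obj) {
            // Decode incoming REST strings back into real objects.
            return obj instanceof String ? fromJson((String)obj) : obj;
        }

        @Override public Object onSend(Object obj) {
            // Encode outgoing objects into strings for the REST response.
            return obj instanceof MyObject ? toJson((MyObject)obj) : obj;
        }
    }

    // Register it via the connector configuration:
    ConnectorConfiguration connCfg = new ConnectorConfiguration();
    connCfg.setMessageInterceptor(new StringCodecInterceptor());
    igniteCfg.setConnectorConfiguration(connCfg);
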
This is not very convenient, but it will allow you to move forward without
waiting for any tickets to be implemented.

Vladimir.

On Mon, Jun 27, 2016 at 6:42 PM, Alexey Kuznetsov 
wrote:

> Hi Kevin.
>
> >> 1.   Does Rest API only support String as key and value? When I
> try to use Integer as key, and gives null result.
> See: https://issues.apache.org/jira/browse/IGNITE-3345
>
> >> 2.   Assume I have a key object and value object, If I want to
> store this object in server side cache, Do I have to store them as json
> format string, and parse it on client side?
> >> In this case, How can I set read/write through to enable database
> interaction?
>
> See: https://issues.apache.org/jira/browse/IGNITE-962
>
> It seems both your questions is not implemented yet.
>
> IGNITE-3345 - could be easily implemented IMHO. Do you interested to
> contribute?
>
>
> --
> Alexey Kuznetsov
> GridGain Systems
> www.gridgain.com
>


Re: Can I get a better performance on BinaryMarshaller?

2016-06-24 Thread Vladimir Ozerov
Hi Lin,

If you cannot change the source code of the classes, then you can use the
*BinarySerializer* interface. You can implement it and then associate it with
a particular class using *BinaryTypeConfiguration*.

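A minimal sketch (ThirdPartyContact stands in for a class whose source you
cannot modify; imports are omitted):

    BinaryTypeConfiguration typeCfg = new BinaryTypeConfiguration(ThirdPartyContact.class.getName());
    typeCfg.setSerializer(new BinarySerializer() {
        @Override public void writeBinary(Object obj, BinaryWriter writer) throws BinaryObjectException {
            // Raw writes produce the compact form discussed earlier in this thread.
            writer.rawWriter().writeString(((ThirdPartyContact)obj).getName());
        }

        @Override public void readBinary(Object obj, BinaryReader reader) throws BinaryObjectException {
            // Ignite instantiates the object; we only populate its fields.
            ((ThirdPartyContact)obj).setName(reader.rawReader().readString());
        }
    });

    BinaryConfiguration binCfg = new BinaryConfiguration();
    binCfg.setTypeConfigurations(Collections.singletonList(typeCfg));
    igniteCfg.setBinaryConfiguration(binCfg);
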
Vladimir.

On Fri, Jun 24, 2016 at 12:24 PM, Lin <m...@linlyu.com> wrote:

> Hi Vladimir,
>
> I got it, thank you for your fast reply.  There are some third party jar
> files without source code, so we can't implementing the Binarylizable
> interface directly.
>
> Thanks again.
>
> Lin.
>
>
>
> -- Original --
> *From: * "Vladimir Ozerov";<voze...@gridgain.com>;
> *Date: * Fri, Jun 24, 2016 05:03 PM
> *To: * "user"<user@ignite.apache.org>;
> *Subject: * Re: Can I get a better performance on BinaryMarshaller?
>
> Hi Lin,
>
> Compact object representation was never our main goal when creating the
> protocol. Our goal was to give users ability to easily work with objects
> without necessity to deserialize them.
>
> You can make object representation more compact if you write them directly
> with help of *BinaryRawWriter *class. Please try implementing
> *Binarylizable* interface on *Address* and *Contact* classes. Then access
> raw writer using *BinaryWriter.rawWriter()* method, and finally write all
> the fields by hand. This should give you more compact result.
>
> Vladimir.
>


Re: Where to watch the cache informations in IGFS when i use SecondryFileSystem

2016-06-15 Thread Vladimir Ozerov
Hi,

Currently you can only access IGFS metrics programmatically, using the
*IgniteFileSystem.metrics()* method (a short sketch follows after the list
below). If this is not an option for you, please attach the following:
1) Ignite configuration
2) Content of *core-site.xml* used for job execution.
3) Command line you use to start the job.

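For the programmatic route, a minimal sketch (the IGFS name "igfs" and the
chosen metric getters are illustrative assumptions):

    IgniteFileSystem fs = ignite.fileSystem("igfs");
    IgfsMetrics m = fs.metrics();

    // Compare total block reads with reads that went to the secondary file
    // system; a large gap means the in-memory cache is actually being hit.
    System.out.println("Blocks read total:  " + m.blocksReadTotal());
    System.out.println("Blocks read remote: " + m.blocksReadRemote());
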
Vladimir.

On Wed, Jun 15, 2016 at 5:15 AM, Xun Zhai <410353...@qq.com> wrote:

> Hi @Vasiliy
> The reason why I want to watch the IGFS cache information is that with the
> SecondaryFileSystem my job's performance shows no obvious change; I think
> this is because no cache is hit, am I right? Or could you give me some
> advice on how to analyze this situation.
>
> Best Regards
>
>
>
> --
> View this message in context:
> http://apache-ignite-users.70518.x6.nabble.com/Where-to-watch-the-cache-informations-in-IGFS-when-i-use-SecondryFileSystem-tp5587p5633.html
> Sent from the Apache Ignite Users mailing list archive at Nabble.com.
>


Re: ignite in-memory sql query performance issue

2016-06-07 Thread Vladimir Ozerov
Hi Kevin,

Currently Ignite uses H2 as the underlying database engine, and according to
the H2 documentation, group indexes are only used when all fields from the
index participate in a query. For this reason it might be necessary to have
several indexes or index groups if multiple different queries are executed
against the cache.

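For reference, a sketch of a two-field group index (the field names follow
your earlier message; the group name is arbitrary):

    public class SelectedClass {
        @QuerySqlField(orderedGroups = {@QuerySqlField.Group(name = "gid_oid_idx", order = 0)})
        private String gId;

        @QuerySqlField(orderedGroups = {@QuerySqlField.Group(name = "gid_oid_idx", order = 1)})
        private String oId;
    }
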
Please let me know if you have any further questions.

Vladimir.

On Mon, Jun 6, 2016 at 12:19 PM, Zhengqingzheng <zhengqingzh...@huawei.com>
wrote:

> Hi Vladimir,
>
> I have tried to reset the group index definition.
>
> Using gId and oId as the group index, the time used to retrieve the query
> reduced to 16ms.
>
>
>
> In order to speed up the sql queries, do I need to set all the possible
> group indexes ?
>
>
>
> Best regards,
>
> Kevin
>
>
>
> *发件人:* Vladimir Ozerov [mailto:voze...@gridgain.com]
> *发送时间:* 2016年6月6日 16:10
> *收件人:* user@ignite.apache.org
> *主题:* Re: ignite in-memory sql query performance issue
>
>
>
> Hi Kevin,
>
>
>
> Could you please provide the source code of SelectedClass and estimate
> number of entries in the cache? As Vladislav mentioned, most probably this
> is a matter of setting indexes on relevant fields. If you provide the
> source code, we will be able to give you exact example on how to do that.
>
>
>
> Vladimir.
>
>
>
> On Mon, Jun 6, 2016 at 5:56 AM, Zhengqingzheng <zhengqingzh...@huawei.com>
> wrote:
>
> Hi there,
>
> When using sql query to get a list of objects, I find that the performance
> is really slow. I am wondering, is this normal?
>
> I tried to call a sql query as follows:
>
> String qryStr = "select * from SelectedClass where  field1= ? and
> field2=?";
>
> SqlQuery<BinaryObject, BinaryObject> qry = new
> SqlQuery(SelectedCalss.class, qryStr);
>
> qry.setArgs( "97901336", "a88");
>
>
>
> If I call getAll() method like this:
>
> List<Entry<BinaryObject, BinaryObject>> result =
> cache.withKeepBinary().query(qry).getAll();
>
> It took 160ms to get all the objects (only two objects inside the list)
>
>
>
> it takes 1ms to get a querycursor object, like this:
>
>  QueryCursor qc = cache.withKeepBinary().query(qry);
>
> But still need 160ms to put the objects into a list and return;
>
>
>
> Best regards,
>
> Kevin
>
>
>
>
>
>
>
>
>


Re: ignite in-memory sql query performance issue

2016-06-06 Thread Vladimir Ozerov
Hi Kevin,

Could you please provide the source code of SelectedClass and the estimated
number of entries in the cache? As Vladislav mentioned, most probably this
is a matter of setting indexes on relevant fields. If you provide the
source code, we will be able to give you exact example on how to do that.

Vladimir.

On Mon, Jun 6, 2016 at 5:56 AM, Zhengqingzheng 
wrote:

> Hi there,
>
> When using sql query to get a list of objects, I find that the performance
> is really slow. I am wondering, is this normal?
>
> I tried to call a sql query as follows:
>
> String qryStr = "select * from SelectedClass where  field1= ? and
> field2=?";
>
> SqlQuery<BinaryObject, BinaryObject> qry = new
> SqlQuery<>(SelectedClass.class, qryStr);
>
> qry.setArgs( "97901336", "a88");
>
>
>
> If I call getAll() method like this:
>
> List<Entry<BinaryObject, BinaryObject>> result =
> cache.withKeepBinary().query(qry).getAll();
>
> It took 160ms to get all the objects (only two objects inside the list)
>
>
>
> it takes 1ms to get a querycursor object, like this:
>
>  QueryCursor qc = cache.withKeepBinary().query(qry);
>
> But still need 160ms to put the objects into a list and return;
>
>
>
> Best regards,
>
> Kevin
>
>
>
>
>
>
>


Re: Data Streamer

2016-05-23 Thread Vladimir Ozerov
Hi Alexander,

Please make sure that you flush data streamer before checking the "sum"
value:

fileStream.mapToInt(Integer::parseInt).forEach(line->streamer.addData(line,1L));
streamer.flush();
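
For completeness, a compact rewrite of the fragment above (a sketch, untested;
same imports as your snippet). With try-with-resources the flush becomes
automatic, because close() flushes any buffered entries. Note also that, as far
as I remember, a non-default StreamReceiver such as StreamTransformer requires
allowOverwrite(true):

try (IgniteDataStreamer<Integer, Long> streamer = ignite.dataStreamer("cache")) {
    // Assumed to be required for a custom receiver such as StreamTransformer.
    streamer.allowOverwrite(true);

    streamer.receiver(StreamTransformer.from((entry, arg) -> {
        Long value = entry.getValue();
        entry.setValue(value == null ? 1L : value + 1L);
        return entry;
    }));

    Files.lines(Paths.get("test.csv"))
        .forEach(line -> streamer.addData(Integer.parseInt(line), 1L));
} // close() flushes the remaining buffered entries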

Vladimir.


On Mon, May 23, 2016 at 10:35 AM, Александр Савинов 
wrote:

>
> Hello.
> I have a problem with the stream API and Ignite. The value of the "sum"
> variable should be 1000000 (equal to the number of lines in test.csv), but it
> equals 999424. If the file is small (10 or even 1000 lines) there is nothing
> in the cache.
> Thank you.
> If you know Russian, please reply in Russian.
> 
>
> IgniteConfiguration igniteConfiguration = new IgniteConfiguration();
> igniteConfiguration.setPeerClassLoadingEnabled(true);
> Ignite ignite = Ignition.start(igniteConfiguration);
> CacheConfiguration<Integer, Long> cacheConfiguration = new
> CacheConfiguration<>("cache");
> IgniteCache<Integer, Long> cache =
> ignite.getOrCreateCache(cacheConfiguration);
> IgniteDataStreamer<Integer, Long> streamer = ignite.dataStreamer("cache");
> streamer.receiver(StreamTransformer.from((entry, arg)->{
> Long value = entry.getValue();
> entry.setValue(value==null ? 1L : value + 1L);
> return entry;
> }));
> Stream<String> fileStream = Files.lines(Paths.get("test.csv"));
> fileStream.mapToInt(Integer::parseInt).forEach(line->streamer.addData(line,1L));
> cache.forEach((entry)->System.out.println(entry.getKey() + ": " + 
> entry.getValue()));
> int s = 0;
> Iterator<Cache.Entry<Integer, Long>> iterator = cache.iterator();
> while(iterator.hasNext()){
> Cache.Entry<Integer, Long> entry = iterator.next();
> s+=entry.getValue();
> }
> System.out.println(s);
> cache.clear();
> ignite.close();
>
>
>
> --
> Best regards,
> Alexander.
>


Re: Problem with getting started on Windows 7

2016-05-06 Thread Vladimir Ozerov
Hi Alexey,

Please try unpacking Ignite to some directory other than "Program Files",
so that there are no spaces in the path.

Vladimir.

On Fri, May 6, 2016 at 4:35 PM, Alexey  wrote:

> Hi, I am trying to start Ignite on Windows using cygwin.
> I downloaded Apache Ignite as zip, unzip it and set IGNITE_HOME
> When I try to start Ignite from command line I see the following errors.
>
> $ bin/ignite.sh
> Error: Could not find or load main class
> org.apache.ignite.startup.cmdline.CommandLineRandomNumberGenerator
> Error: Could not find or load main class
> org.apache.ignite.internal.util.portscanner.GridJmxPortFinder
> bin/ignite.sh, WARN: Failed to resolve JMX host (JMX will be disabled):
> WIN-1DIQOJOBO9G
> Error: Could not find or load main class
> org.apache.ignite.startup.cmdline.CommandLineStartup
>
> $ bin/ignite.bat
> Error: Could not find or load main class Files\Ignite\libs\*;c:\Program\*
> Error: Could not find or load main class Files\Ignite\libs\*;c:\Program\*
> Error: Could not find or load main class Files\Ignite\libs\*;c:\Program\*
> "C:\Program Files\Ignite\bin\ignite.bat", WARN: Failed to resolve JMX host.
> JMX will be disabled.
> Error: Could not find or load main class
> org.apache.ignite.startup.cmdline.CommandLineStartup
> Press any key to continue . . .
>
> Could you please advise how to fix it?
>
> Thanks.
>
>
>
>
> --
> View this message in context:
> http://apache-ignite-users.70518.x6.nabble.com/Problem-with-getting-started-on-Windows-7-tp4830.html
> Sent from the Apache Ignite Users mailing list archive at Nabble.com.
>


Re: Error starting c++ client node using 1.6

2016-05-04 Thread Vladimir Ozerov
Hi Murthy,

The stack trace you provided shows that you are still using the old unpatched
version. This can be seen from the line "at
org.apache.ignite.Ignition.start(Ignition.java:322)". The patched version does
not perform this call any more and goes directly to
"IgnitionEx.start()".
You can see the fix made by myself here:
https://github.com/apache/ignite/commit/22263773313dde6694f41d8eff2cd7af3fb72936#diff-016bf8b9581f2cf9c0e7255d54bd9f83R43

Could you please double-check that correct version of ignite-core.jar is
picked?

Vladimir.


On Mon, May 2, 2016 at 7:33 PM, Murthy Kakarlamudi  wrote:

> Hi Denis..Thanks for your response. I tried that too, but am getting an
> Spring Context not Injected error as below:
>
> [12:56:43,819][SEVERE][main][IgniteKernal] Got exception while starting
>> (will rollback startup routine).
>> class org.apache.ignite.IgniteException: Spring application context
>> resource is not injected.
>> at
>> org.apache.ignite.cache.store.jdbc.CacheJdbcPojoStoreFactory.create(CacheJdbcPojoStoreFactory.java:156)
>> at
>> org.apache.ignite.cache.store.jdbc.CacheJdbcPojoStoreFactory.create(CacheJdbcPojoStoreFactory.java:96)
>> at
>> org.apache.ignite.internal.processors.cache.GridCacheProcessor.createCache(GridCacheProcessor.java:1260)
>> at
>> org.apache.ignite.internal.processors.cache.GridCacheProcessor.onKernalStart(GridCacheProcessor.java:785)
>> at
>> org.apache.ignite.internal.IgniteKernal.start(IgniteKernal.java:922)
>> at
>> org.apache.ignite.internal.IgnitionEx$IgniteNamedInstance.start0(IgnitionEx.java:1736)
>> at
>> org.apache.ignite.internal.IgnitionEx$IgniteNamedInstance.start(IgnitionEx.java:1589)
>> at
>> org.apache.ignite.internal.IgnitionEx.start0(IgnitionEx.java:1042)
>> at
>> org.apache.ignite.internal.IgnitionEx.start(IgnitionEx.java:569)
>> at
>> org.apache.ignite.internal.IgnitionEx.start(IgnitionEx.java:516)
>> at org.apache.ignite.Ignition.start(Ignition.java:322)
>> at
>> org.apache.ignite.internal.processors.platform.PlatformAbstractBootstrap.start(PlatformAbstractBootstrap.java:36)
>> at
>> org.apache.ignite.internal.processors.platform.PlatformIgnition.start(PlatformIgnition.java:72)
>>
>
>
> Below is the c++ client config I used that had Java based Cachestore
> Implementation details. Please let me know if I am doing anything wrong
> here.
>
> [The Spring XML for the C++ client node was mangled by the mailing list
> archive. The legible remnants: a DriverManagerDataSource bean using
> com.microsoft.sqlserver.jdbc.SQLServerDriver with the URL
> jdbc:sqlserver://localhost;databaseName=test;integratedSecurity=true, an
> IgniteConfiguration whose cache configuration set a CacheJdbcPojoStoreFactory,
> and a TcpDiscoverySpi with a TcpDiscoveryVmIpFinder listing
> 127.0.0.1:47500..47501.]
>
>
>>
> On Mon, May 2, 2016 at 12:18 PM, Denis Magda  wrote:
>
>> Hi Murthy,
>>
>> In my understanding you can only set Java based CacheStore
>> implementations. That’s why there are no .net and c++ examples for this
>> kind of functionality.
>>
>> You need to specify Java based CacheStore implementation via an XML
>> configuration and everything should work fine out of the box after that.
>>
>> Regards,
>> Denis
>>
>> On May 2, 2016, at 8:08 PM, Murthy Kakarlamudi  wrote:
>>
>> Any help on this issue please. Basically I am stuck at a point where I
>> have to access the database from c++ client node. I could not find an
>> equivalent java/.net cachestore example for c++. Looking for guidance on
>> how to access persistence store from c++.
>>
>> Thanks,
>> Murthy.
>>
>> On Sat, Apr 30, 2016 at 1:19 PM, Murthy Kakarlamudi 
>> wrote:

Re: Ignite Write Behind

2016-05-04 Thread Vladimir Ozerov
Hi,

The writeAll() method is only executed for entries located in the same cache.
As you have only one entry per cache per transaction, it is expected
behavior that writeAll() is not called. Normally, you should not make your
logic dependent on whether write() or writeAll() is called, because it is
up to Ignite to decide which method to call.
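
As far as I remember, CacheStoreAdapter already implements writeAll() by
iterating over the entries and calling write(), so extending it is one easy
way to keep the behavior identical whichever method Ignite invokes. A minimal
sketch (key/value types and the persistence logic are placeholders):

import javax.cache.Cache;
import org.apache.ignite.cache.store.CacheStoreAdapter;

public class MyStore extends CacheStoreAdapter<Long, String> {
    @Override public String load(Long key) {
        return null; // read-through omitted in this sketch
    }

    @Override public void write(Cache.Entry<? extends Long, ? extends String> entry) {
        // Single-entry path; the inherited writeAll() funnels through here too.
        persist(entry.getKey(), entry.getValue());
    }

    @Override public void delete(Object key) {
        // delete omitted in this sketch
    }

    private void persist(Long key, String val) {
        // The actual JDBC insert/update would go here.
    }
}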

Vladimir.

On Wed, May 4, 2016 at 5:54 AM, amitpa  wrote:

> Hi,
>
> I have 2 nodes setup and there are around 5 different caches involved, with
> 1 entry per cache per transaction.
>
> I have writeThrough set to true and writeBehindEnabled set to true.
>
>
>
> --
> View this message in context:
> http://apache-ignite-users.70518.x6.nabble.com/Ignite-WRite-Behind-tp4741p4748.html
> Sent from the Apache Ignite Users mailing list archive at Nabble.com.
>


Re: Number of partitions of IgniteRDD

2016-04-29 Thread Vladimir Ozerov
Hi Vij,

I see method "getPartitions" in IgniteRDD, not "getNumPartitions". Please
confirm that we are talking about the same thing.

Anyway, the logic of this method is extremely straightforward - it simply calls
the Ignite.affinity("name_of_your_cache").partitions() method, so it should
return the actual number of partitions.
"getPartitions" returns an array; could you please show what is printed to the
console by your code?
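
For a quick cross-check from the application code (cache name assumed), the
same source of truth can be printed directly:

int parts = ignite.affinity("name_of_your_cache").partitions();
System.out.println("Partition count: " + parts);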


Vladimir.

On Fri, Apr 29, 2016 at 3:10 PM, vijayendra bhati <veejayend...@yahoo.com>
wrote:

> Yes its Spark RDD's standard method, but it has been overridden in
> IgniteRDD.
>
> Regards,
> Vij
>
>
> On Friday, April 29, 2016 5:25 PM, Vladimir Ozerov <voze...@gridgain.com>
> wrote:
>
>
> Hi Vij,
>
> I do not quite understand where the method "getNumPartitions" comes from.
> Is it in the standard Spark API? I do not see it on the
> *org.apache.spark.api.java.JavaRDD* class.
>
> Vladimir.
>
> On Fri, Apr 29, 2016 at 7:50 AM, vijayendra bhati <veejayend...@yahoo.com>
> wrote:
>
> Hi Val,
>
> I am creating DataFrame using below code -
>
> public DataFrame getStockSimulationReturnsDataFrame(LocalDate businessDate, String stock) {
>     /*
>      * If we use a sql query, we are assuming that the data is in the cache.
>      */
>     String sql = "select simulationUUID,stockReturn from STOCKSIMULATIONRETURNSVAL where businessDate = ? and symbol = ?";
>     DataFrame df = jic.fromCache(PARTITIONED_CACHE_NAME).sql(sql, businessDate, stock);
>     return df;
> }
>
>
> And to check partitions I am doing -
>
> private JavaRDD getSimulationsForStock(String stock,LocalDate
> businessDate)
> {
> DataFrame df =  StockSimulationsReaderFactory.getStockSimulationStore(jsc,
> businessDate,
> businessDate).getStockSimulationReturnsDataFrame(businessDate, stock);
> System.out.println(""+df.javaRDD().getNumPartitions());
> return df.javaRDD();
> }
>
> Regards,
> Vij
>
>
> On Friday, April 29, 2016 3:09 AM, vkulichenko <
> valentin.kuliche...@gmail.com> wrote:
>
>
> Hi Vij,
>
>
> How do you check the number of partitions and what are you trying to
> achieve? Can you show the code?
>
> -Val
>
>
>
> --
> View this message in context:
> http://apache-ignite-users.70518.x6.nabble.com/Number-of-partitions-of-IgniteRDD-tp4644p4671.html
> Sent from the Apache Ignite Users mailing list archive at Nabble.com.
>
>
>
>
>
>
>


Re: Affinitykey is not working

2016-04-29 Thread Vladimir Ozerov
Hi,

Could you please explain how you detect the node to which a key is mapped?
Do you use the Affinity API?
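
For reference, one way to check the mapping is through the Affinity API (a
Java sketch; the key values below are placeholders):

ToleranceCacheKey key = new ToleranceCacheKey("instr1", 1, 42, new Date());
ClusterNode node = ignite.affinity("toleranceCache").mapKeyToNode(key);
System.out.println("Key is primary on node " + node.id());

If two keys with the same marketSectorId print different node IDs here, the
@AffinityKeyMapped annotation is probably not being picked up.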

Vladimir.

On Fri, Apr 29, 2016 at 11:48 AM, nikhilknk  wrote:

> I used the ToleranceCacheKey below as the key. I want to keep all the keys
> with the same marketSectorId on the same node, so I put the
> "@AffinityKeyMapped" annotation on marketSectorId.
>
> I started a 3-node Ignite cluster, but instruments with the same
> marketSectorId are shared among the three nodes.
>
> Does affinity not work in this case? Please suggest if I am missing
> anything.
>
> I am using ignite 1.5 and scala 2.10.5 versions .
>
> /**
>  *
>  */
> package com.spse.pricing.domain
>
> import java.util.Date
> import org.apache.ignite.cache.affinity.AffinityKeyMapped
> /**
>  * @author nkakkireni
>  *
>  */
> case class ToleranceCacheKey (
>
> val instrumentId:String = null,
> val cycleId:Int = 0,
> @AffinityKeyMapped val marketSectorId:Int = 0,
> val runDate :Date = null
> )
>
>
>
> my cache configuration
>
> val toleranceCache = {
> val temp = ignite match {
>   case Some(s) => {
>
>  val toleranceCache = new
>
> CacheConfiguration[ToleranceCacheKey,ToleranceCacheValue]("toleranceCache");
> toleranceCache.setCacheMode(CacheMode.PARTITIONED);
> toleranceCache.setTypeMetadata(toleranceCacheMetadata());
>
> val cache = s.getOrCreateCache(toleranceCache)
> cache
>   }
>   case _ => logError("Getting toleranceCache cache failed")
> throw new Throwable("Getting toleranceCache cache failed")
>
> }
> temp
>
>   }
>
> def toleranceCacheMetadata() = {
> val types = new ArrayList[CacheTypeMetadata]();
>
> val cacheType = new CacheTypeMetadata();
> cacheType.setValueType(classOf[ToleranceCacheValue].getName);
>
>   val qryFlds = cacheType.getQueryFields();
> qryFlds.put("tradingGroupId", classOf[Int]);
>
> val indexedFlds=cacheType.getAscendingFields
> indexedFlds.put("instrumentId", classOf[String]);
> indexedFlds.put("cycleId", classOf[Int]);
> indexedFlds.put("runDate", classOf[Date]);
> indexedFlds.put("marketSectorId", classOf[Int]);
>
> types.add(cacheType);
>
> types;
> }
>
>
>
> --
> View this message in context:
> http://apache-ignite-users.70518.x6.nabble.com/Affinitykey-is-not-working-tp4685.html
> Sent from the Apache Ignite Users mailing list archive at Nabble.com.
>


Re: Issue with Java 8 datatype LocalDate while using IgniteRDD

2016-04-29 Thread Vladimir Ozerov
Hi Vij,

Do you see an exception or some other kind of error? Please provide a more
detailed description of the error.

Vladimir.

On Fri, Apr 29, 2016 at 2:47 PM, vijayendra bhati 
wrote:

> Hi Guys,
>
> I am trying to store an object which contains a field of the Java 8
> LocalDate type from the java.time API.
> I am facing issues with it while working with IgniteRDD.
> It looks like LocalDate is not handled in IgniteRDD, and maybe not in Spark
> either.
>
> Anybody can help here ?
>
> Regards,
> Vij
>


Re: Ignite Installation with Spark under CDH

2016-04-29 Thread Vladimir Ozerov
Hi Michael,

Ok, so it looks like the process didn't have enough heap.
Thank you for your inputs about CDH configuration. We will improve our
documentation based on this.

Vladimir

On Thu, Apr 28, 2016 at 5:15 PM, mdolgonos 
wrote:

> Vladimir,
>
> I fixed this by changing the way I start Ignite based on a recommendation
> from another post here and the OOME has gone:
> ignite.sh -J-Xmx10g
> The data that I put in cache is about 1.5GB
>
> Thank you,
>
>
>
> --
> View this message in context:
> http://apache-ignite-users.70518.x6.nabble.com/Ignite-Installation-with-Spark-under-CDH-tp4457p4665.html
> Sent from the Apache Ignite Users mailing list archive at Nabble.com.
>


Re: Ignite Installation with Spark under CDH

2016-04-28 Thread Vladimir Ozerov
Hi Michael,

Did you have a chance to analyze the heap dump to understand what caused the
OOME? As for *IgniteConfiguration*, it is made non-serializable intentionally,
because we do not expect it to be passed over the wire. Could you please
provide a stack trace where you see it is being serialized?

Vladimir.

On Wed, Apr 27, 2016 at 10:31 PM, mdolgonos 
wrote:

> Vladimir,
>
> Update - I think I solved the ClassNotFound exception. It looks like the
> Ignite installation document for Spark and CDH is outdated and doesn't
> contain complete information on integrating Ignite with Spark running in a
> 'Yarn' (cluster) mode on CDH which I have. This is what I have done and now
> am able to run my Spark program (however, with exception described below):
> went to
> Cloudera Manager -> YARN (MR2 Included) -> Configuration -> Service Wide ->
> Advanced -> Spark Client Advanced Configuration Snippet (Safety Valve) for
> spark-conf/spark-defaults.conf
> Added etc/ignite-fabric-1.5.0/libs/* to already present
>
> spark.executor.extraClassPath=/opt/cloudera/parcels/CDH-5.5.2-1.cdh5.5.2.p0.4/jars/htrace-core-3.2.0-incubating.jar.
> The combined line looks like this:
>
> spark.executor.extraClassPath=/opt/cloudera/parcels/CDH-5.5.2-1.cdh5.5.2.p0.4/jars/htrace-core-3.2.0-incubating.jar:/etc/ignite-fabric-1.5.0/libs/*
>
> However, after starting Ignite and submitting my Spark program (in another
> window) I see the following exception:
> java.lang.OutOfMemoryError: GC overhead limit exceeded
>
> The only line of code where I put something into the Ignite cache where I'm
> trying to save RDD there:
> cacheIgRDD.savePairs(partRDD, true)
>
> One thing to note: it looks like IgniteConfiguration is not Serializable,
> so I had to create MyIgniteConfiguration extends IgniteConfiguration with
> Serializable
>
>
>
>
> --
> View this message in context:
> http://apache-ignite-users.70518.x6.nabble.com/Ignite-Installation-with-Spark-under-CDH-tp4457p4624.html
> Sent from the Apache Ignite Users mailing list archive at Nabble.com.
>


Re: Ignite Installation with Spark under CDH

2016-04-27 Thread Vladimir Ozerov
Hi Michael,

I meant echoing to the console the SPARK_CLASSPATH variable which you created
in *spark-env.sh* following the recommendations from the Ignite docs. We need
to ensure that the JCache jar is included in it.
BTW, do you see the Ignite jars in the "java.class.path" property?

Vladimir.

On Wed, Apr 27, 2016 at 4:30 PM, mdolgonos 
wrote:

> Vladimir,
>
> I don't see this property. I put the following code at the very beginning
> of
> my main() method:
> println(" Props=" + System.getProperties())
> The only classpath related property I see is java.class.path which doesn't
> contain cache-api-1.0.0.jar despite my changing spark-env.sh as described
> in
> the documentation.
>
> Thank you
>
>
>
> --
> View this message in context:
> http://apache-ignite-users.70518.x6.nabble.com/Ignite-Installation-with-Spark-under-CDH-tp4457p4608.html
> Sent from the Apache Ignite Users mailing list archive at Nabble.com.
>


Re: Error running nodes in .net and c++

2016-04-27 Thread Vladimir Ozerov
Murthy,

As for the initial issue - I created a ticket and fixed the bug causing your
initial problem (*"org.apache.ignite.IgniteException: Spring application
context resource is not injected"*). The fix will be included in the upcoming
Ignite 1.6 release.

Vladimir.

On Wed, Apr 27, 2016 at 11:50 AM, Vladimir Ozerov <voze...@gridgain.com>
wrote:

> Hi Murthy,
>
> Please provide configs you used to start clients and servers.
>
> Vladimir.
>
> On Wed, Apr 27, 2016 at 5:45 AM, Murthy Kakarlamudi <ksa...@gmail.com>
> wrote:
>
>> Can someone please explain how Ignite works for the following use case. The
>> server node loads data from the persistent store into the cache upon
>> start-up. There will be a couple of client nodes (C++ and .NET based) that
>> need to access the cache.
>> The server node will have the configuration for the cache store. Should the
>> client nodes also have the cache store configuration? I am hoping not,
>> because all they need is to read the cache.
>> But I am assuming that if these client nodes can also update the cache, then
>> the cache store config is required if write-through is enabled.
>> Please validate my assumptions.
>>
>> Thanks,
>> Satya...
>>
>> On Tue, Apr 26, 2016 at 9:44 AM, Murthy Kakarlamudi <ksa...@gmail.com>
>> wrote:
>>
>>> No..I am not. I have different configs for my server node in java vs my
>>> client node in c++. That was the question I had. In my server node that
>>> loads the data from persistent store to cache, I configured cachestore. But
>>> my c++ node is only a client node that needs to access cache. So I was not
>>> sure if my client node config should have the cachestore details as well.
>>>
>>> Let me try the option you suggested.
>>>
>>> On Tue, Apr 26, 2016 at 9:40 AM, Vladimir Ozerov <voze...@gridgain.com>
>>> wrote:
>>>
>>>> HI Murthy,
>>>>
>>>> Do you start all nodes with the same XML configuration? Please ensure
>>>> that this is so, and all nodes know all caches from configuration in
>>>> advance.
>>>>
>>>> Vladimir.
>>>>
>>>> On Tue, Apr 26, 2016 at 3:27 PM, Murthy Kakarlamudi <ksa...@gmail.com>
>>>> wrote:
>>>>
>>>>> Hi Vladimir...I made the update and still running into the same issue.
>>>>>
>>>>> Here is the updated spring config for my Java node:
>>>>> [The updated Spring configuration was mangled by the mailing list
>>>>> archive. The legible remnants: a DriverManagerDataSource bean using
>>>>> com.microsoft.sqlserver.jdbc.SQLServerDriver with the URL
>>>>> jdbc:sqlserver://LAPTOP-QIT4AVOG\MSSQLSERVER64;databaseName=PrimeOne;integratedSecurity=true,
>>>>> and an IgniteConfiguration whose cache used a CacheJdbcPojoStoreFactory.]

Re: ignite nodes close exception when using cache -clear -c=@c7

2016-04-27 Thread Vladimir Ozerov
Hi Kevin,

Looks like the topology got broken for some reason. Could you please attach
logs from all nodes so that I can investigate it deeper?

Vladimir.

On Wed, Apr 27, 2016 at 1:46 PM, Zhengqingzheng 
wrote:

> Hi there,
>
> When I tried to clear one specific cache, server nodes shut down unexpectedly.
>
>
>
>
>
> visor> cache -clear -c=@c7
>
> [17:26:35] Topology snapshot [ver=62, servers=9, clients=0, CPUs=16,
> heap=63.0GB]
>
> [17:26:38,009][SEVERE][tcp-disco-msg-worker-#2%null%][TcpDiscoverySpi]
> TcpDiscoverSpi's message worker thread failed abnormally. Stopping the node
> in order to prevent cluster wide instability.
>
> java.lang.InterruptedException
>
> at
> java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.reportInterruptAfterWait(AbstractQueuedSynchronizer.java:2017)
>
> at
> java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2095)
>
> at
> java.util.concurrent.LinkedBlockingDeque.pollFirst(LinkedBlockingDeque.java:519)
>
> at
> java.util.concurrent.LinkedBlockingDeque.poll(LinkedBlockingDeque.java:682)
>
> at
> org.apache.ignite.spi.discovery.tcp.ServerImpl$MessageWorkerAdapter.body(ServerImpl.java:5779)
>
> at
> org.apache.ignite.spi.discovery.tcp.ServerImpl$RingMessageWorker.body(ServerImpl.java:2161)
>
> at
> org.apache.ignite.spi.IgniteSpiThread.run(IgniteSpiThread.java:62)
>
> [17:26:38] Topology snapshot [ver=71, servers=1, clients=0, CPUs=8,
> heap=7.0GB]
>
> [17:26:43] Topology snapshot [ver=62, servers=9, clients=0, CPUs=16,
> heap=63.0GB]
>
> [17:26:43] Topology snapshot [ver=62, servers=9, clients=0, CPUs=16,
> heap=63.0GB]
>
> [17:26:44] Topology snapshot [ver=62, servers=9, clients=0, CPUs=16,
> heap=63.0GB]
>
> [17:27:19] Topology snapshot [ver=63, servers=8, clients=0, CPUs=16,
> heap=56.0GB]
>
> [17:27:19] Topology snapshot [ver=63, servers=8, clients=0, CPUs=16,
> heap=56.0GB]
>
> [17:27:19] Topology snapshot [ver=64, servers=7, clients=0, CPUs=16,
> heap=49.0GB]
>
> [17:27:19] Topology snapshot [ver=64, servers=7, clients=0, CPUs=16,
> heap=49.0GB]
>
> [17:27:19] Topology snapshot [ver=65, servers=6, clients=0, CPUs=16,
> heap=42.0GB]
>
> [17:27:19] Topology snapshot [ver=65, servers=6, clients=0, CPUs=16,
> heap=42.0GB]
>
> [17:27:19] Topology snapshot [ver=67, servers=5, clients=0, CPUs=16,
> heap=35.0GB]
>
> [17:27:19] Topology snapshot [ver=67, servers=4, clients=0, CPUs=16,
> heap=28.0GB]
>
> [17:27:19] Topology snapshot [ver=67, servers=5, clients=0, CPUs=16,
> heap=35.0GB]
>
> [17:27:19] Topology snapshot [ver=67, servers=4, clients=0, CPUs=16,
> heap=28.0GB]
>
> [17:27:23,326][SEVERE][sys-#19%null%][GridCachePartitionExchangeManager]
> Failed to send local partition map to node [node=TcpDiscoveryNode
> [id=b247699c-8545-40a4-9b9c-aa478ea3ca55, addrs=[0:0:0:0:0:0:0:1%lo,
> 10.120.70.122, 127.0.0.1], sockAddrs=[/0:0:0:0:0:0:0:1%lo:47501,
> /0:0:0:0:0:0:0:1%lo:47501, /10.120.70.122:47501, /127.0.0.1:47501],
> discPort=47501, order=2, intOrder=2, lastExchangeTime=1461673945474,
> loc=false, ver=1.5.0#20151229-sha1:f1f8cda2, isClient=false],
> exchId=GridDhtPartitionExchangeId [topVer=AffinityTopologyVersion
> [topVer=62, minorTopVer=0], nodeId=fe76324d, evt=NODE_FAILED]]
>
> class org.apache.ignite.IgniteCheckedException: Failed to send message
> (node may have left the grid or TCP connection cannot be established due to
> firewall issues) [node=TcpDiscoveryNode
> [id=b247699c-8545-40a4-9b9c-aa478ea3ca55, addrs=[0:0:0:0:0:0:0:1%lo,
> 10.120.70.122, 127.0.0.1], sockAddrs=[/0:0:0:0:0:0:0:1%lo:47501,
> /0:0:0:0:0:0:0:1%lo:47501, /10.120.70.122:47501, /127.0.0.1:47501],
> discPort=47501, order=2, intOrder=2, lastExchangeTime=1461673945474,
> loc=false, ver=1.5.0#20151229-sha1:f1f8cda2, isClient=false],
> topic=TOPIC_CACHE, msg=GridDhtPartitionsSingleMessage
> [parts={1=GridDhtPartitionMap2 [moving=12, size=121],
> -2146922738=GridDhtPartitionMap2 [moving=12, size=121],
> 745661760=GridDhtPartitionMap2 [moving=12, size=121],
> -2100569601=GridDhtPartitionMap2 [moving=0, size=100],
> -1071296927=GridDhtPartitionMap2 [moving=12, size=121],
> -1667118441=GridDhtPartitionMap2 [moving=12, size=121],
> 689859866=GridDhtPartitionMap2 [moving=12, size=121],
> 810756007=GridDhtPartitionMap2 [moving=12, size=121],
> -1582327725=GridDhtPartitionMap2 [moving=12, size=121],
> 1316949047=GridDhtPartitionMap2 [moving=12, size=121],
> 1325947219=GridDhtPartitionMap2 [moving=0, size=20]}, partCntrs=null,
> client=false, super=GridDhtPartitionsAbstractMessage
> [exchId=GridDhtPartitionExchangeId [topVer=AffinityTopologyVersion
> [topVer=62, minorTopVer=0], nodeId=fe76324d, evt=NODE_FAILED],
> lastVer=GridCacheVersion [topVer=73153881, nodeOrderDrId=8,
> globalTime=1461749161426, order=1461727649806], super=GridCacheMessage
> [msgId=12782, depInfo=null, err=null, skipPrepare=false, cacheId=0,
> cacheId=0]]], policy=2]
>
> at

Re: ODBC Driver?

2016-04-27 Thread Vladimir Ozerov
Hi Vij,

It is not merged to master yet. We think this will happen in the next few
days.

Vladimir.

On Wed, Apr 27, 2016 at 3:34 PM, vijayendra bhati 
wrote:

> If I build Ignite from the nightly build, will I get a working
> ODBC driver with it?
> I need to integrate it with Tableau.
>
> Regards,
> Vij
>
>
> On Wednesday, April 27, 2016 5:55 PM, Igor Sapego 
> wrote:
>
>
> Hi Arthi,
>
> The ODBC driver supports rowset binding, though currently only
> fetching of a single row per call is supported, i.e. the
> SQL_ATTR_ROW_ARRAY_SIZE
> attribute can only be set to 1 right now.
>
> Best Regards,
> Igor
>
> On Wed, Apr 27, 2016 at 1:56 PM, arthi  > wrote:
>
> Thank You.
>
> Arthi
>
>
>
> --
> View this message in context:
> http://apache-ignite-users.70518.x6.nabble.com/ODBC-Driver-tp4557p4599.html
> Sent from the Apache Ignite Users mailing list archive at Nabble.com.
>
>
>
>
>


Re: Ignite cache data size problem.

2016-04-27 Thread Vladimir Ozerov
Kevin,

I would say that your assumption - more nodes, faster data load - makes
perfect sense: N nodes will have N threads in total performing the load.
The problem is that the current JDBC POJO store implementation makes this
process not very efficient, because each node iterates over the whole data
set. A more efficient approach is to split the whole data set into several
parts (e.g. by an attribute's value) and then let each node iterate over only
a part of the data, taking the affinity function into account. This will be
the most efficient approach, I believe.

We will improve that in further Ignite versions.
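
To make the streamer recommendation concrete, here is a minimal sketch of the
scan-and-stream loop (the cache name, table, and columns are placeholders, not
your schema):

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;
import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteDataStreamer;

public class BulkLoad {
    public static void load(Ignite ignite, String jdbcUrl) throws Exception {
        try (IgniteDataStreamer<String, String> streamer = ignite.dataStreamer("myCache");
             Connection conn = DriverManager.getConnection(jdbcUrl);
             Statement stmt = conn.createStatement();
             ResultSet rs = stmt.executeQuery("SELECT id, payload FROM my_table")) {
            while (rs.next())
                streamer.addData(rs.getString("id"), rs.getString("payload"));
        } // closing the streamer flushes the remaining batches to the cluster
    }
}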

Vladimir.

On Wed, Apr 27, 2016 at 1:52 PM, Zhengqingzheng <zhengqingzh...@huawei.com>
wrote:

> Thank you Vladimir,
>
> I will try to use the second recommendation first, and let you know the
> result asap.
>
>
>
> I thought more nodes would result in faster data loading, because I assumed
> that each node could load different data simultaneously. Clearly, that was my
> misunderstanding about increasing the number of nodes.
>
>
>
> Best regards,
>
> Kevin
>
>
>
> *From:* Vladimir Ozerov [mailto:voze...@gridgain.com]
> *Sent:* April 27, 2016 18:03
> *To:* user@ignite.apache.org
> *Subject:* Re: Ignite cache data size problem.
>
>
>
> Hi Kevin,
>
>
>
> My considerations:
>
> 1) I see that the amount of allocated heap gradually increases over time.
> Can you confirm that you use the configuration with OFFHEAP which you showed
> several posts earlier?
>
> 2) Why do you have several nodes per host? Our recommended approach is to
> have a single node per machine.
>
> 3) The main thing: you invoke the loadCache() method. This method will
> execute the same query - "SELECT *" as you mentioned earlier - on all
> nodes. It means that your cluster will have to perform a full scan N times,
> where N is the number of nodes.
>
>
>
> I looked closely at our store API and I believe it is not well suited for
> such cases (millions of rows, lots of nodes in the topology). We will think
> of a better API to handle such scenarios.
>
>
>
> My recommendations for now:
>
>
>
> 1) Start no more than 1 node per host. This way you will decrease amount
> of scans from 16 to 2-3, which should make perf much better. Let each node
> take as much memory as possible.
>
>
>
> 2) If this approach still doesn't show good enough numbers, consider
> switching to *IgniteDataStreamer *instead -
> https://apacheignite.readme.io/docs/data-streamers
>
> It was designed specifically for efficient bulk data load by employing
> batching and affinity co-location techniques. You can do that in the
> following way:
>
> - Create *IgniteDataStreamer *for your cache;
>
> - Scan the table using JDBC, Hibernate or any other framework you have;
>
> - For each returned row, create appropriate cache key and value object;
>
> - Put the key-value pair to the streamer: *IgniteDataStreamer.addData()*;
>
> - When scan is finished, close the streamer: IgniteDataStreamer.close().
>
> This should be done only on one node.
>
>
>
> I believe the second approach should show much better numbers.
>
>
>
> Vladimir.
>
>
>
>
>
> On Wed, Apr 27, 2016 at 6:06 AM, Zhengqingzheng <zhengqingzh...@huawei.com>
> wrote:
>
> Hi Vladimir,
>
> Sorry to reply so late. The loadCache process took 8 hours to load all
> the data (this time no exception occurred, but memory consumption went up
> to 56 GB, 80% of all the heap I have configured, which spans 10
> nodes with 7 GB of heap allocated to each).
> I have attached all the log files from the work/log/ folder.
> Please see the attachment.
> Please see the attachment.
>
>
>
>
>
> *From:* Vladimir Ozerov [mailto:voze...@gridgain.com]
> *Sent:* April 25, 2016 20:21
> *To:* user@ignite.apache.org
> *Subject:* Re: Ignite cache data size problem.
>
>
>
> Hi Kevin,
>
>
>
> I performed several experiments. Essentially, I put 1M entries of the
> class you provided with fields initialized as follows:
>
>
>
> for (int i = 0; i < 1_000_000; i++) {
>     UniqueField field = new UniqueField();
>
>     field.setDate(new Date());
>     field.setGuid(UUID.randomUUID().toString());
>     field.setMsg(String.valueOf(i));
>     field.setNum(BigDecimal.valueOf(ThreadLocalRandom.current().nextDouble()));
>     field.setOId(String.valueOf(i));
>     field.setOrgId(String.valueOf(i));
>
>     cache.put(i, field);
> }
>
>
>
> My results are:
>
> 1) Onheap, no indexes - about 400Mb is required to store 1M objects, or
> ~20Gb for 47M objects.
>
> 2) Onheap, with indexes - about 650Mb, or ~30Gb for 47M objects.
>
> 3) Offheap, with indexes - about 4

Re: Gar file gives me java.lang.ClassNotFoundException

2016-04-27 Thread Vladimir Ozerov
Hi,

Please provide the full stack trace and (if possible) code to reproduce the
problem.

Vladimir.

On Wed, Apr 27, 2016 at 9:26 AM, mortias  wrote:

> Hi all,
>
> I'm trying to use a gar file to deploy my project however I get a
> java.lang.ClassNotFoundException
>
> In short, I've created a gar file with a meta-inf file and a lib folder
> (my lib folder contains all dependencies and my own project packaged in a
> xxx-snapshot.jar; basically I use the
> "appassembler-maven-plugin:create-repository" goal).
>
> when I run Ignite it doesn't seem to find my classes:
>   org.apache.ignite.spi.IgniteSpiException: Failed to get tasks declared in
> xml file
>   .. cannot find class .. caused by java.lang.ClassNotFoundException
>
> any ideas ?
>
> tx !
>
>
>
>
>
>
> --
> View this message in context:
> http://apache-ignite-users.70518.x6.nabble.com/Gar-file-gives-me-java-lang-ClassNotFoundException-tp4582.html
> Sent from the Apache Ignite Users mailing list archive at Nabble.com.
>


Re: Affinity Collocation

2016-04-27 Thread Vladimir Ozerov
Hi,

There should not be any problems with a config like this, because all
Organization entries will be located on all nodes in the cluster.
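
For reference, a minimal Java sketch of such a pair of configurations (cache
names and the Person/Organization value classes are placeholders):

import org.apache.ignite.Ignite;
import org.apache.ignite.cache.CacheAtomicityMode;
import org.apache.ignite.cache.CacheMode;
import org.apache.ignite.configuration.CacheConfiguration;

public class CacheSetup {
    public static void createCaches(Ignite ignite) {
        // Partitioned, atomic cache for Person entries.
        CacheConfiguration<Long, Person> personCfg = new CacheConfiguration<>("person");
        personCfg.setCacheMode(CacheMode.PARTITIONED);
        personCfg.setAtomicityMode(CacheAtomicityMode.ATOMIC);

        // Replicated, transactional cache: every node keeps a full copy of
        // Organization entries, so they are always local to Person lookups.
        CacheConfiguration<Long, Organization> orgCfg = new CacheConfiguration<>("organization");
        orgCfg.setCacheMode(CacheMode.REPLICATED);
        orgCfg.setAtomicityMode(CacheAtomicityMode.TRANSACTIONAL);

        ignite.getOrCreateCache(personCfg);
        ignite.getOrCreateCache(orgCfg);
    }
}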

Vladimir.

On Wed, Apr 27, 2016 at 9:47 AM, Kamal C  wrote:

> What do you mean by cache configuration?
>
> If I go with the below configuration, will it create any problem ?
>
> Person cache  - Partitioned, Atomic Mode
> Organization / Company cache - Replicated, Transactional Mode
>
> --Kamal
>
> On Wed, Apr 27, 2016 at 10:46 AM, Alexey Goncharuk <
> alexey.goncha...@gmail.com> wrote:
>
>> Hi,
>>
>> As long as cache configuration is the same, affinity assignment for such
>> caches will be identical, so you do not need to explicitly specify cache
>> dependency. On the other hand, if cache configurations do differ, it is not
>> always possible to collocate keys properly, so for this case such a
>> dependency also does not seem legit.
>>
>> Makes sense?
>> ​
>>
>
>


Re: Caused by: class org.apache.ignite.binary.BinaryInvalidTypeException:

2016-04-27 Thread Vladimir Ozerov
Hi.

Yes, normally computations are performed on both primary and backup nodes.

Vladimir.

On Wed, Apr 27, 2016 at 10:19 AM, kcheng.mvp  wrote:

> Say the cache mode is "PARTITIONED"; in this case both the primary node and
> the backup nodes would execute the same piece of code, right?
>
>
>
> --
> View this message in context:
> http://apache-ignite-users.70518.x6.nabble.com/Caused-by-class-org-apache-ignite-binary-BinaryInvalidTypeException-tp4311p4585.html
> Sent from the Apache Ignite Users mailing list archive at Nabble.com.
>


Re: Ignite Installation with Spark under CDH

2016-04-27 Thread Vladimir Ozerov
Hi Michael,

Could you please print the resulting value of the SPARK_CLASSPATH variable?

Vladimir.

On Tue, Apr 26, 2016 at 8:55 PM, mdolgonos 
wrote:

> Vladimir,
> I verified that the cache jar is in the Cloudera jars directory. All the
> cache packages are also included in the deployment jar-with-dependencies as
> I used
> <dependency>
>     <groupId>org.apache.ignite</groupId>
>     <artifactId>ignite-spark</artifactId>
>     <version>${ignite.version}</version>
>     <scope>compile</scope>
> </dependency>
> <dependency>
>     <groupId>javax.cache</groupId>
>     <artifactId>cache-api</artifactId>
>     <version>1.0.0</version>
>     <scope>compile</scope>
> </dependency>
> Not sure what else I can take a look at.
> Thank you again for your help.
>
>
>
> --
> View this message in context:
> http://apache-ignite-users.70518.x6.nabble.com/Ignite-Installation-with-Spark-under-CDH-tp4457p4558.html
> Sent from the Apache Ignite Users mailing list archive at Nabble.com.
>


Re: Error running nodes in .net and c++

2016-04-26 Thread Vladimir Ozerov
=o.a.i.cache.affinity.rendezvous.RendezvousAffinityFunction@76311661,
> cacheMode=PARTITIONED, atomicityMode=ATOMIC, atomicWriteOrderMode=PRIMARY,
> backups=1, invalidate=false, tmLookupClsName=null, rebalanceMode=ASYNC,
> rebalanceOrder=0, rebalanceBatchSize=524288,
> rebalanceBatchesPrefetchCount=2, offHeapMaxMem=-1, swapEnabled=false,
> maxConcurrentAsyncOps=500, writeBehindEnabled=false,
> writeBehindFlushSize=10240, writeBehindFlushFreq=5000,
> writeBehindFlushThreadCnt=1, writeBehindBatchSize=512,
> memMode=ONHEAP_TIERED,
> affMapper=o.a.i.i.processors.cache.CacheDefaultBinaryAffinityKeyMapper@2e41d426,
> rebalanceDelay=0, rebalanceThrottle=0, interceptor=null,
> longQryWarnTimeout=3000, readFromBackup=true,
> nodeFilter=o.a.i.configuration.CacheConfiguration$IgniteAllNodesPredicate@d211e68,
> sqlSchema=null, sqlEscapeAll=false, sqlOnheapRowCacheSize=10240,
> snapshotableIdx=false, cpOnRead=true, topValidator=null], cacheType=USER,
> initiatingNodeId=bc7d2aa2-4a64-467f-8097-d0f579dec0b3, nearCacheCfg=null,
> clientStartOnly=true, stop=false, close=false, failIfExists=false,
> template=false, exchangeNeeded=true, cacheFutTopVer=null,
> cacheName=buCache]], clientNodes=null,
> id=45ec9825451-cbb8263a-223e-4f3e-8492-71f2612ddae6,
> clientReconnect=false], affTopVer=AffinityTopologyVersion [topVer=11,
> minorTopVer=1], super=DiscoveryEvent [evtNode=TcpDiscoveryNode
> [id=bc7d2aa2-4a64-467f-8097-d0f579dec0b3, addrs=[0:0:0:0:0:0:0:1,
> 127.0.0.1, 192.168.0.5, 2001:0:9d38:90d7:1064:ea:bb9b:11d9,
> 2600:8806:0:8d00:0:0:0:1, 2600:8806:0:8d00:15e5:c0bf:286e:8785,
> 2600:8806:0:8d00:3ccf:1e94:1ab4:83a9], sockAddrs=[LAPTOP-QIT4AVOG/
> 192.168.0.5:0, /0:0:0:0:0:0:0:1:0, LAPTOP-QIT4AVOG/192.168.0.5:0, /
> 127.0.0.1:0, LAPTOP-QIT4AVOG/192.168.0.5:0, /192.168.0.5:0,
> LAPTOP-QIT4AVOG/192.168.0.5:0, /2001:0:9d38:90d7:1064:ea:bb9b:11d9:0,
> LAPTOP-QIT4AVOG/192.168.0.5:0, /2600:8806:0:8d00:0:0:0:1:0,
> /2600:8806:0:8d00:15e5:c0bf:286e:8785:0,
> /2600:8806:0:8d00:3ccf:1e94:1ab4:83a9:0], discPort=0, order=11, intOrder=0,
> lastExchangeTime=1461673644205, loc=true, ver=1.5.0#20151229-sha1:f1f8cda2,
> isClient=true], topVer=11, nodeId8=bc7d2aa2, msg=null,
> type=DISCOVERY_CUSTOM_EVT, tstamp=1461673645026]],
> rcvdIds=GridConcurrentHashSet [elements=[]], rmtIds=null,
> exchId=GridDhtPartitionExchangeId [topVer=AffinityTopologyVersion
> [topVer=11, minorTopVer=1], nodeId=bc7d2aa2, evt=DISCOVERY_CUSTOM_EVT],
> init=true, ready=false, replied=false, added=true,
> initFut=GridFutureAdapter [resFlag=2, res=false, startTime=1461673645026,
> endTime=1461673645046, ignoreInterrupts=false, lsnr=null, state=DONE],
> topSnapshot=null, lastVer=null, partReleaseFut=null, skipPreload=false,
> clientOnlyExchange=false, initTs=1461673645026,
> oldest=7700cd68-08b1-4571-8744-0e91dcdad9b0, oldestOrder=1, evtLatch=0,
> remaining=[], super=GridFutureAdapter [resFlag=1, res=class
> o.a.i.IgniteException: Spring application context resource is not
> injected., startTime=1461673645026, endTime=1461673645046,
> ignoreInterrupts=false, lsnr=null, state=DONE]]
> class org.apache.ignite.IgniteCheckedException: Spring application context
> resource is not injected.
> at
> org.apache.ignite.internal.util.IgniteUtils.cast(IgniteUtils.java:7005)
> at
> org.apache.ignite.internal.util.future.GridFutureAdapter.get0(GridFutureAdapter.java:166)
> at
> org.apache.ignite.internal.util.future.GridFutureAdapter.get(GridFutureAdapter.java:115)
> at
> org.apache.ignite.internal.processors.cache.GridCachePartitionExchangeManager$ExchangeWorker.body(GridCachePartitionExchangeManager.java:1299)
> at
> org.apache.ignite.internal.util.worker.GridWorker.run(GridWorker.java:110)
> at java.lang.Thread.run(Thread.java:745)
> Caused by: class org.apache.ignite.IgniteException: Spring application
> context resource is not injected.
> at
> org.apache.ignite.cache.store.jdbc.CacheJdbcPojoStoreFactory.create(CacheJdbcPojoStoreFactory.java:156)
> at
> org.apache.ignite.cache.store.jdbc.CacheJdbcPojoStoreFactory.create(CacheJdbcPojoStoreFactory.java:96)
> at
> org.apache.ignite.internal.processors.cache.GridCacheProcessor.createCache(GridCacheProcessor.java:1243)
> at
> org.apache.ignite.internal.processors.cache.GridCacheProcessor.prepareCacheStart(GridCacheProcessor.java:1638)
> at
> org.apache.ignite.internal.processors.cache.GridCacheProcessor.prepareCachesStart(GridCacheProcessor.java:1563)
> at
> org.apache.ignite.internal.processors.cache.distributed.dht.preloader.GridDhtPartitionsExchangeFuture.startCaches(GridDhtPartitionsExchangeFuture.java:956)
> at
> org.apache.ignite.internal.processors.cache.distributed.dht.preloader.GridDhtPartitionsExchangeFuture.init(

Re: Ignite cache data size problem.

2016-04-26 Thread Vladimir Ozerov
Hi Kevin,

Yes, log files are created under this directory by default.

Vladimir.

On Tue, Apr 26, 2016 at 3:18 PM, Zhengqingzheng <zhengqingzh...@huawei.com>
wrote:

> Hi Vladimir,
>
> No problem.
>
> I will re-run the loading process and give you  the log file.
>
> To be clear, when you say log file, do you mean files located at
> work/log/*?
>
> I use IgniteCache.loadCache(null, "select * from table") to load all the
> data.
>
>
>
> Best regards,
>
> Kevin
>
>
>
> *From:* Vladimir Ozerov [mailto:voze...@gridgain.com]
> *Sent:* April 26, 2016 20:15
> *To:* user@ignite.apache.org
> *Subject:* Re: Ignite cache data size problem.
>
>
>
> Hi Kevin,
>
>
>
> Could you please re-run your case and attach the Ignite logs and GC logs from
> all participating servers? I would expect that this is either a kind of
> out-of-memory problem or network saturation. Also, please explain how
> exactly you load data into Ignite. Do you use *DataStreamer*, or maybe
> *IgniteCache.loadCache*, or *IgniteCache.put*?
>
>
>
> Vladimir.
>
>
>
> On Tue, Apr 26, 2016 at 3:01 PM, Zhengqingzheng <zhengqingzh...@huawei.com>
> wrote:
>
> Hi Vladimir,
>
> Thank you for your help.
>
> I tried to load 1 million records and calculate each object's size
> (including the key and value objects, where the key is a string type); summed
> together, the total memory consumption is 130 MB.
>
>
>
> Because the ./ignitevisor.sh command only shows the number of records and no
> data allocation information, I don't know how much memory has been consumed
> by each type of cache.
>
>
>
> My result is as follows:
>
> Log:
>
> Found big object: [Ljava.util.HashMap$Node;@24833164 size:
> 30.888206481933594Mb
>
> Found big object:   java.util.HashMap@15819383 size: 30.88824462890625Mb
>
> Found big object: java.util.HashSet@10236420 size: 30.888259887695312Mb
>
> key size: 32388688 human readable data: 30.888259887695312Mb
>
> Found big object:   [Lorg.jsr166.ConcurrentHashMap8$Node;@29556439 size:
> 129.99818420410156Mb
>
> Found big object: org.jsr166.ConcurrentHashMap8@19238297 size:
> *129.99822235107422Mb*
>
> value size: 136313016 human readable data: 129.99822235107422Mb
>
> The whole number of records is 47 million, so the data size inside the
> cache should be 130*47 = 6110 MB (around 6 GB).
>
>
>
> However, when I try to load the whole data into the cache, I still get
> exceptions:
>
> The exception information is listed as follows:
>
> 1.   -- exception info from client
> 
>
> Before the exception occurred, I had 10 nodes on two servers: server1 (48 GB
> RAM) has 6 nodes, each node assigned 7 GB of JVM heap; server2 (32 GB RAM)
> has 4 nodes with the same JVM settings as the previous ones.
>
> After the exception, the client stopped and 8 nodes remained (all of
> server1's nodes remained, with no exception on that server; on server2,
> 2 nodes remained and two nodes dropped).
>
> The total number of records loaded is 37Million.
>
>
>
> [19:01:47] Topology snapshot [ver=77, servers=9, clients=1, CPUs=20,
> heap=64.0GB]
>
> [19:01:47,463][SEVERE][pub-#46%null%][GridTaskWorker] Failed to obtain
> remote job result policy for result from ComputeTask.result(..) method
> (will fail the whole task): GridJobResultImpl [job=C2 [],
> sib=GridJobSiblingImpl
> [sesId=0dbc0f15451-880a2dd1-bc95-4084-a705-4effcec5d2cd,
> jobId=6dbc0f15451-832bed3e-dc5d-4743-9853-127e3b516924,
> nodeId=832bed3e-dc5d-4743-9853-127e3b516924, isJobDone=false],
> jobCtx=GridJobContextImpl
> [jobId=6dbc0f15451-832bed3e-dc5d-4743-9853-127e3b516924, timeoutObj=null,
> attrs={}], node=TcpDiscoveryNode [id=832bed3e-dc5d-4743-9853-127e3b516924,
> addrs=[0:0:0:0:0:0:0:1%lo, 10.120.70.122, 127.0.0.1],
> sockAddrs=[/0:0:0:0:0:0:0:1%lo:47500, /0:0:0:0:0:0:0:1%lo:47500, /
> 10.120.70.122:47500, /127.0.0.1:47500], discPort=47500, order=7,
> intOrder=7, lastExchangeTime=1461663619161, loc=false,
> ver=1.5.0#20151229-sha1:f1f8cda2, isClient=false], ex=class
> o.a.i.cluster.ClusterTopologyException: Node has left grid:
> 832bed3e-dc5d-4743-9853-127e3b516924, hasRes=true, isCancelled=false,
> isOccupied=true]
>
> class org.apache.ignite.cluster.ClusterTopologyException: Node has left
> grid: 832bed3e-dc5d-4743-9853-127e3b516924
>
>  at
> org.apache.ignite.internal.processors.task.GridTaskWorker.onNodeLeft(GridTaskWorker.java:1315)
>
>  at
> org.apache.ignite.internal.processors.task.GridTaskProcessor$TaskDiscoveryListener$1.run(GridTaskProcessor.java:1246)
>
>  at
> org.apache.ignite.internal.util

Re: Ignite cache data size problem.

2016-04-26 Thread Vladimir Ozerov
t;35700K(1013632K), 0.0103066 secs] [Times:
> user=0.02 sys=0.00, real=0.02 secs]
>
> 2016-04-26T18:46:28.413+0800: 3975.223: [GC (Allocation Failure)
> 2016-04-26T18:46:28.413+0800: 3975.223: [DefNew: 281296K->1679K(314560K),
> 0.0080063 secs] 315316K->35706K(1013632K), 0.0081820 secs] [Times:
> user=0.00 sys=0.00, real=0.01 secs]
>
> 2016-04-26T18:47:04.190+0800: 4011.234: [GC (Allocation Failure)
> 2016-04-26T18:47:04.190+0800: 4011.234: [DefNew: 281295K->1678K(314560K),
> 0.0102572 secs] 315322K->35712K(1013632K), 0.0104334 secs] [Times:
> user=0.02 sys=0.00, real=0.01 secs]
>
> 2016-04-26T18:47:40.161+0800: 4047.120: [GC (Allocation Failure)
> 2016-04-26T18:47:40.161+0800: 4047.120: [DefNew: 281294K->1677K(314560K),
> 0.0109581 secs] 315328K->35718K(1013632K), 0.0111609 secs] [Times:
> user=0.01 sys=0.00, real=0.02 secs]
>
> 2016-04-26T18:48:15.352+0800: 4082.428: [GC (Allocation Failure)
> 2016-04-26T18:48:15.352+0800: 4082.428: [DefNew: 281293K->1677K(314560K),
> 0.0087567 secs] 315334K->35724K(1013632K), 0.0089472 secs] [Times:
> user=0.00 sys=0.00, real=0.01 secs]
>
> 2016-04-26T18:48:51.254+0800: 4118.407: [GC (Allocation Failure)
> 2016-04-26T18:48:51.254+0800: 4118.407: [DefNew: 281293K->1677K(314560K),
> 0.0115241 secs] 315340K->35731K(1013632K), 0.0117504 secs] [Times:
> user=0.02 sys=0.00, real=0.02 secs]
>
> 2016-04-26T18:49:26.547+0800: 4153.705: [GC (Allocation Failure)
> 2016-04-26T18:49:26.547+0800: 4153.705: [DefNew: 281293K->1967K(314560K),
> 0.0095867 secs] 315347K->36027K(1013632K), 0.0097683 secs] [Times:
> user=0.01 sys=0.00, real=0.01 secs]
>
> 2016-04-26T18:50:02.043+0800: 4189.206: [GC (Allocation Failure)
> 2016-04-26T18:50:02.043+0800: 4189.206: [DefNew: 281583K->1676K(314560K),
> 0.0096753 secs] 315643K->35742K(1013632K), 0.0098599 secs] [Times:
> user=0.02 sys=0.00, real=0.01 secs]
>
> 2016-04-26T18:50:37.663+0800: 4224.505: [GC (Allocation Failure)
> 2016-04-26T18:50:37.663+0800: 4224.505: [DefNew: 281292K->1675K(314560K),
> 0.0077731 secs] 315358K->35749K(1013632K), 0.0079524 secs] [Times:
> user=0.00 sys=0.00, real=0.00 secs]
>
> 2016-04-26T18:51:13.058+0800: 4259.780: [GC (Allocation Failure)
> 2016-04-26T18:51:13.058+0800: 4259.780: [DefNew: 281291K->1676K(314560K),
> 0.0086972 secs] 315365K->35756K(1013632K), 0.0088771 secs] [Times:
> user=0.02 sys=0.00, real=0.01 secs]
>
> 2016-04-26T18:51:48.677+0800: 4295.269: [GC (Allocation Failure)
> 2016-04-26T18:51:48.677+0800: 4295.269: [DefNew: 281292K->1931K(314560K),
> 0.0099264 secs] 315372K->36016K(1013632K), 0.0101337 secs] [Times:
> user=0.02 sys=0.00, real=0.02 secs]
>
> 2016-04-26T18:52:23.956+0800: 4330.493: [GC (Allocation Failure)
> 2016-04-26T18:52:23.956+0800: 4330.493: [DefNew: 281547K->1673K(314560K),
> 0.0079896 secs] 315632K->35766K(1013632K), 0.0081558 secs] [Times:
> user=0.00 sys=0.00, real=0.00 secs]
>
> 2016-04-26T18:52:59.459+0800: 4366.025: [GC (Allocation Failure)
> 2016-04-26T18:52:59.459+0800: 4366.025: [DefNew: 281289K->1942K(314560K),
> 0.0099881 secs] 315382K->36041K(1013632K), 0.0101999 secs] [Times:
> user=0.00 sys=0.00, real=0.00 secs]
>
> 2016-04-26T18:53:35.248+0800: 4401.571: [GC (Allocation Failure)
> 2016-04-26T18:53:35.248+0800: 4401.571: [DefNew: 281558K->1675K(314560K),
> 0.0080323 secs] 315657K->35779K(1013632K), 0.0081949 secs] [Times:
> user=0.00 sys=0.00, real=0.00 secs]
>
> 2016-04-26T18:54:11.231+0800: 4437.434: [GC (Allocation Failure)
> 2016-04-26T18:54:11.231+0800: 4437.434: [DefNew: 281291K->1672K(314560K),
> 0.0107709 secs] 315395K->35785K(1013632K), 0.0109682 secs] [Times:
> user=0.02 sys=0.00, real=0.01 secs]
>
> 2016-04-26T18:54:47.360+0800: 4473.741: [GC (Allocation Failure)
> 2016-04-26T18:54:47.360+0800: 4473.741: [DefNew: 281288K->1675K(314560K),
> 0.0077845 secs] 315401K->35796K(1013632K), 0.0079611 secs] [Times:
> user=0.00 sys=0.00, real=0.00 secs]
>
> 2016-04-26T18:55:22.937+0800: 4509.361: [GC (Allocation Failure)
> 2016-04-26T18:55:22.937+0800: 4509.361: [DefNew: 281291K->1675K(314560K),
> 0.0082094 secs] 315412K->35802K(1013632K), 0.0083421 secs] [Times:
> user=0.01 sys=0.00, real=0.01 secs]
>
> 2016-04-26T18:55:58.273+0800: 4544.708: [GC (Allocation Failure)
> 2016-04-26T18:55:58.273+0800: 4544.708: [DefNew: 281291K->1674K(314560K),
> 0.0100806 secs] 315418K->35807K(1013632K), 0.0102764 secs] [Times:
> user=0.00 sys=0.00, real=0.00 secs]
>
> 2016-04-26T18:56:33.184+0800: 4579.506: [GC (Allocation Failure)
> 2016-04-26T18:56:33.184+0800: 4579.507: [DefNew: 281290K->1674K(314560K),
> 0.0111419 secs] 315423K->35814K(1013632K), 0.0113590 secs] [Times:
> user=0.00 sys=0.00, real=0.00 

Re: Error running nodes in .net and c++

2016-04-26 Thread Vladimir Ozerov
Hi Murthy,

Seems that you faced a kind of usability issue, which happens only in some
specific cases. Please try replacing the following line in your config:

[the original configuration snippet was stripped by the archive]

with this:

[the replacement snippet was stripped by the archive]
It should help.

Vladimir.

On Tue, Apr 26, 2016 at 1:36 AM, Murthy Kakarlamudi <ksa...@gmail.com>
wrote:

> Hi Alexey...Apologize the delay in my response. Below are the 2 links from
> gdrive for my Java and c++ projects.
>
> Java Project:
> https://drive.google.com/open?id=0B8lM91-_3MwRZmF6N0tnN1pyN2M
>
> C++ Project:
> https://drive.google.com/open?id=0B8lM91-_3MwRMGE5akVWVXc0RXc
>
> Please let me know if you have any difficulty downloading the projects.
>
> Thanks,
> Satya.
>
> On Mon, Apr 25, 2016 at 10:49 AM, Alexey Kuznetsov <
> akuznet...@gridgain.com> wrote:
>
>> I see in the stack trace "Caused by: class org.apache.ignite.IgniteException:
>> Spring application context resource is not injected."
>>
>> Also, CacheJdbcPojoStoreFactory contains this declaration:
>> @SpringApplicationContextResource
>> private transient Object appCtx;
>>
>> Does anybody know why appCtx might not be injected?
>>
>> Also, Satya, is it possible for you to prepare a small reproducible example
>> that we could debug?
>>
>>
>> On Mon, Apr 25, 2016 at 9:39 PM, Vladimir Ozerov <voze...@gridgain.com>
>> wrote:
>>
>>> Alexey Kuznetsov,
>>>
>>> Given that you have more expertise with the POJO store, could you please
>>> advise what could cause this exception? It seems that the POJO store
>>> expects some injection which doesn't happen.
>>> Are there any specific requirements here? C++ node starts as a regular
>>> node and also use Spring.
>>>
>>> Vladimir.
>>>
>>> On Mon, Apr 25, 2016 at 5:32 PM, Murthy Kakarlamudi <ksa...@gmail.com>
>>> wrote:
>>>
>>>> Any help on this issue please...
>>>>
>>>> On Sat, Apr 16, 2016 at 7:29 PM, Murthy Kakarlamudi <ksa...@gmail.com>
>>>> wrote:
>>>>
>>>>> Hi,
>>>>>In my use case, I am starting a node from .NET which loads data
>>>>> from a SQL Server table into the cache upon start-up. I have to read
>>>>> those entries from the cache from a C++ node that acts as a client. I am
>>>>> getting the below error when trying to start the node from C++.
>>>>>
>>>>> [19:08:57] Security status [authentication=off, tls/ssl=off]
>>>>> [19:08:58,163][SEVERE][main][IgniteKernal] Failed to start manager:
>>>>> GridManagerAdapter [enabled=true,
>>>>> name=o.a.i.i.managers.discovery.GridDiscoveryManager]
>>>>> class org.apache.ignite.IgniteCheckedException: Remote node has peer
>>>>> class loading enabled flag different from local [locId8=f02445af,
>>>>> locPeerClassLoading=true, rmtId8=8e52f9c9, rmtPeerClassLoading=false,
>>>>> rmtAddrs=[LAPTOP-QIT4AVOG/0:0:0:0:0:0:0:1, LAPTOP-QIT4AVOG/127.0.0.1,
>>>>> LAPTOP-QIT4AVOG/192.168.0.5,
>>>>> LAPTOP-QIT4AVOG/2001:0:9d38:90d7:145b:5bf:bb9b:11d9,
>>>>> LAPTOP-QIT4AVOG/2600:8806:0:8d00:0:0:0:1,
>>>>> /2600:8806:0:8d00:3ccf:1e94:1ab4:83a9,
>>>>> /2600:8806:0:8d00:f114:bf30:2068:352d]]
>>>>> at
>>>>> org.apache.ignite.internal.managers.discovery.GridDiscoveryManager.checkAttributes(GridDiscoveryManager.java:1027)
>>>>> at
>>>>> org.apache.ignite.internal.managers.discovery.GridDiscoveryManager.start(GridDiscoveryManager.java:680)
>>>>> at
>>>>> org.apache.ignite.internal.IgniteKernal.startManager(IgniteKernal.java:1505)
>>>>> at
>>>>> org.apache.ignite.internal.IgniteKernal.start(IgniteKernal.java:917)
>>>>> at
>>>>> org.apache.ignite.internal.IgnitionEx$IgniteNamedInstance.start0(IgnitionEx.java:1688)
>>>>> at
>>>>> org.apache.ignite.internal.IgnitionEx$IgniteNamedInstance.start(IgnitionEx.java:1547)
>>>>> at
>>>>> org.apache.ignite.internal.IgnitionEx.start0(IgnitionEx.java:1003)
>>>>> at
>>>>> org.apache.ignite.internal.IgnitionEx.start(IgnitionEx.java:534)
>>>>> at
>>>>> org.apache.ignite.internal.IgnitionEx.start(IgnitionEx.java:515)
>>>>> at org.apache.ignite.Ignition.start(Ignition.java:322)
>>>>> at
>>>>> org.apache.ignite.internal.processors.platform.PlatformAbstractBootstrap.start(Pl

Re: Ignite Installation with Spark under CDH

2016-04-26 Thread Vladimir Ozerov
Hi,

This is pretty hard to say what is the root cause, especially in complex
deployments like CDH. Most probably you JAR is packaged incorrectly because
your application is able to load Ignite classes, but cannot load jcache
API.
Could you try to simply put cache-api-1.0.0.jar to all places and scripts
where you added Igntte JARs and/or your jar-with-dependencies?

Vladimir.

On Tue, Apr 26, 2016 at 12:43 AM, mdolgonos 
wrote:

> Vladimir,
> There are 2 things that I'm experiencing so far:
> 1. I have added the following code to spark-env.sh in my CDH installation
> IGNITE_HOME=/etc/ignite-1.5.0
> IGNITE_LIBS="${IGNITE_HOME}/libs/*"
>
> for file in ${IGNITE_HOME}/libs/*
> do
> if [ -d ${file} ] && [ "${file}" != "${IGNITE_HOME}"/libs/optional ];
> then
> IGNITE_LIBS=${IGNITE_LIBS}:${file}/*
> fi
> done
>
> but Ignite jars are still not recognized by Spark after restarting Spark as
> well as the entire CDH
> . My location of CDH is the default one:
> /opt/cloudera/parcels/CDH-5.5.2-1.cdh5.5.2.p0.4/etc/spark/conf.dist
> So I decided to compile my code into a jar-with-dependencies which led me
> to
> the second issue:
>
> 2. Looks like the jars were discovered by spark-submit, however, now I'm
> getting the following exception:
> Exception in thread "main" java.lang.NoClassDefFoundError:
> javax/cache/configuration/MutableConfiguration
> at java.lang.ClassLoader.defineClass1(Native Method)
> at java.lang.ClassLoader.defineClass(ClassLoader.java:800)
> at
> java.security.SecureClassLoader.defineClass(SecureClassLoader.java:142)
> at java.net.URLClassLoader.defineClass(URLClassLoader.java:449)
> at java.net.URLClassLoader.access$100(URLClassLoader.java:71)
> at java.net.URLClassLoader$1.run(URLClassLoader.java:361)
> at java.net.URLClassLoader$1.run(URLClassLoader.java:355)
> at java.security.AccessController.doPrivileged(Native Method)
> at java.net.URLClassLoader.findClass(URLClassLoader.java:354)
> at java.lang.ClassLoader.loadClass(ClassLoader.java:425)
> at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:308)
> at java.lang.ClassLoader.loadClass(ClassLoader.java:358)
> at
>
> org.apache.ignite.internal.IgnitionEx$IgniteNamedInstance.marshallerSystemCache(IgnitionEx.java:2098)
> at
>
> org.apache.ignite.internal.IgnitionEx$IgniteNamedInstance.initializeDefaultCacheConfiguration(IgnitionEx.java:1914)
> at
>
> org.apache.ignite.internal.IgnitionEx$IgniteNamedInstance.initializeConfiguration(IgnitionEx.java:1899)
> at
>
> org.apache.ignite.internal.IgnitionEx$IgniteNamedInstance.start0(IgnitionEx.java:1573)
> at
>
> org.apache.ignite.internal.IgnitionEx$IgniteNamedInstance.start(IgnitionEx.java:1547)
> at
> org.apache.ignite.internal.IgnitionEx.start0(IgnitionEx.java:1003)
> at org.apache.ignite.internal.IgnitionEx.start(IgnitionEx.java:534)
> at org.apache.ignite.internal.IgnitionEx.start(IgnitionEx.java:515)
> at org.apache.ignite.Ignition.start(Ignition.java:322)
> at
> org.apache.ignite.spark.IgniteContext.ignite(IgniteContext.scala:153)
> at
> org.apache.ignite.spark.IgniteContext.<init>(IgniteContext.scala:62)
> at
> training.PnLMergerVectorIgnite$.main(PnLMergerVectorIgnite.scala:50)
> at training.PnLMergerVectorIgnite.main(PnLMergerVectorIgnite.scala)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at
>
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
> at
>
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:606)
> at
>
> org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:672)
> at
> org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:180)
> at
> org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:205)
> at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:120)
> at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
> Caused by: java.lang.ClassNotFoundException:
> javax.cache.configuration.MutableConfiguration
> at java.net.URLClassLoader$1.run(URLClassLoader.java:366)
> at java.net.URLClassLoader$1.run(URLClassLoader.java:355)
> at java.security.AccessController.doPrivileged(Native Method)
> at java.net.URLClassLoader.findClass(URLClassLoader.java:354)
> at java.lang.ClassLoader.loadClass(ClassLoader.java:425)
> at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:308)
> at java.lang.ClassLoader.loadClass(ClassLoader.java:358)
> I know that the class javax.cache.configuration.MutableConfiguration belongs
> to cache-api-1.0.0.jar and I see it present in my jar-with-dependencies.
>
> Thank you for 

Re: Detecting a node leaving the cluster

2016-04-26 Thread Vladimir Ozerov
Hi Ralph,

Yes, this is how we normally respond to node failures - by listening for
events. However, please note that you should not perform heavy or blocking
operations in the callback, as that might have adverse effects on node
communication. Instead, it is better to move heavy operations into a
separate thread or thread pool.

Vladimir.
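
A minimal sketch of that pattern (assuming the node was started with these
event types enabled via IgniteConfiguration#setIncludeEventTypes();
reestablishSessions() is a hypothetical application method):

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

import org.apache.ignite.events.DiscoveryEvent;
import org.apache.ignite.events.Event;
import org.apache.ignite.events.EventType;
import org.apache.ignite.lang.IgnitePredicate;

// Offload the real work so the event callback returns quickly.
ExecutorService pool = Executors.newSingleThreadExecutor();

IgnitePredicate<Event> lsnr = evt -> {
    DiscoveryEvent discoEvt = (DiscoveryEvent) evt;
    pool.submit(() -> reestablishSessions(discoEvt.eventNode())); // hypothetical method
    return true; // Keep listening.
};

ignite.events().localListen(lsnr, EventType.EVT_NODE_LEFT, EventType.EVT_NODE_FAILED);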

On Mon, Apr 25, 2016 at 2:54 PM, Ralph Goers <ralph.go...@dslextreme.com>
wrote:

> Great, thanks!
>
> Is listening for that the way you would implement what I am trying to do?
>
> Ralph
>
> On Apr 25, 2016, at 4:22 AM, Vladimir Ozerov <voze...@gridgain.com> wrote:
>
> Ralph,
>
> EVT_NODE_LEFT and EVT_NODE_FAILED occur on local node. They essentially
> mean "I saw that remote node went down".
>
> Vladimir.
>
> On Sat, Apr 23, 2016 at 5:48 PM, Ralph Goers <ralph.go...@dslextreme.com>
> wrote:
>
>> Some more information that may be of help.
>>
>> Each user of a client application creates a “session” that is represented
>> in the distributed cache. Each session has its own connection to the third
>> party application. If a user uses multiple client applications they will
>> reuse the same session and connection with the third party application. So
>> when a single node goes down all the user’s sessions need to become “owned”
>> by different nodes.
>>
>> In the javadoc I do see IgniteEvents.localListen(), but the description
>> says it listens for “local” events. I wouldn’t expect EVT_NODE_LEFT or
>> EVT_NODE_FAILED to be considered local events, so I am a bit confused as to
>> what the method does.
>>
>> Ralph
>>
>> On Apr 23, 2016, at 6:49 AM, Ralph Goers <ralph.go...@dslextreme.com>
>> wrote:
>>
>> From what I understand in the documentation client mode will mean I will
>> lose high availability, which is the point of using a distributed cache.
>>
>> The architecture is such that we have multiple client applications that
>> need to communicate with the service that has the clustered cache. The
>> client applications expect to get callbacks when events occur in the third
>> party application the service is communicating with. If one of the service
>> nodes fail - for example during a rolling deployment - we need one of the
>> other nodes to re-establish the connection with the third party so it can
>> continue to monitor for the events. Note that the service servers are
>> load-balanced so they may each have an arbitrary number of connections with
>> the third party.
>>
>> So I either need a listener that tells me when one of the nodes in the
>> cluster has left or a way of creating the connection using something ignite
>> provides so that it automatically causes the connection to be recreated
>> when a node leaves.
>>
>> Ralph
>>
>>
>> On Apr 23, 2016, at 12:01 AM, Владислав Пятков <vldpyat...@gmail.com>
>> wrote:
>>
>> Hello Ralph,
>>
>> I think the correct way is to use a client node (with setClientMode set to
>> true) to control the cluster. A client node is isolated from data
>> processing and is not subject to failure under load.
>> Why do you connect each node to the third-party application instead of
>> doing that only from the client?
>>
>> On Sat, Apr 23, 2016 at 4:10 AM, Ralph Goers <ralph.go...@dslextreme.com>
>> wrote:
>>
>>> I have an application that is using Ignite for a clustered cache.  Each
>>> member of the cache will have connections open with a third party
>>> application. When a cluster member stops its connections must be
>>> re-established on other cluster members.
>>>
>>> I can do this manually if I have a way of detecting a node has left the
>>> cluster, but I am hoping that there is some other recommended way of
>>> handling this.
>>>
>>> Any suggestions?
>>>
>>> Ralph
>>>
>>
>>
>>
>> --
>> Vladislav Pyatkov
>>
>>
>>
>>
>


Re: Not able to save data in Ignite datagrid using JavaIgniteRDD

2016-04-25 Thread Vladimir Ozerov
Hi Vij,

I am a bit confused - do you still have any problems? One message earlier
you mentioned that it finally worked. Is any assistance still needed?

Vladimir.
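
For later readers, Vij's fix below looks roughly like this with the Java
API (a sketch, not verified against this exact release; Person is a
hypothetical value class, and sparkCtx/pairRdd are pre-existing Spark
objects):

import org.apache.ignite.configuration.CacheConfiguration;
import org.apache.ignite.spark.JavaIgniteContext;
import org.apache.ignite.spark.JavaIgniteRDD;

// Passing a CacheConfiguration (rather than only a cache name) lets the
// cache be created with its indexes before savePairs() writes into it.
CacheConfiguration<Integer, Person> ccfg = new CacheConfiguration<>("persons");
ccfg.setIndexedTypes(Integer.class, Person.class);

JavaIgniteContext<Integer, Person> ic =
    new JavaIgniteContext<>(sparkCtx, "example-ignite.xml");
JavaIgniteRDD<Integer, Person> igniteRdd = ic.fromCache(ccfg);
igniteRdd.savePairs(pairRdd); // pairRdd: an existing JavaPairRDD<Integer, Person>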

On Thu, Apr 21, 2016 at 7:32 AM, vijayendra bhati 
wrote:

> Hi Alexei,
>
> Here it is, the file StockSimulationsCacheWriter (2).java is the one
> which is not working.
>
> Regards,
> Vij
>
>
> On Thursday, April 21, 2016 1:54 AM, Alexei Scherbakov <
> alexey.scherbak...@gmail.com> wrote:
>
>
> Hi,
>
> It's very strange, because JavaIgniteContext is just a wrapper around
> IgniteContext.
> Could you provide the source code describing your problem?
>
> 2016-04-20 21:14 GMT+03:00 vijayendra bhati :
>
> It's working now. I moved from JavaIgniteContext to IgniteContext.
> Also invoked igniteContext.fromCache(cacheConfiguration) rather than
> igniteContext.fromCache(cacheName).
>
> Also, while initializing cacheConfiguration, added an index to it.
>
> Thanks,
> Vij
>
>
> On Wednesday, April 20, 2016 11:17 PM, vijayendra bhati <
> veejayend...@yahoo.com> wrote:
>
>
> Hi,
> I am trying to save data in the Ignite data grid using JavaIgniteRDD by
> calling the savePairs() method.
> Somehow the job is finishing properly and I am not getting any exception,
> yet there is no data in the cache. I am checking through the H2 debug
> console.
>
> I am not able to understand what the reason could be. The other thing is
> how to specify indexes when using JavaIgniteContext.
>
> Regards,
> Vij
>
>
>
>
>
> --
>
> Best regards,
> Alexei Scherbakov
>
>
>


Re: Error running nodes in .net and c++

2016-04-25 Thread Vladimir Ozerov
Alexey Kuznetsov,

Since you have more expertise with the POJO store, could you please advise
what could cause this exception? It seems the POJO store expects some
injection, which doesn't happen.
Are there any specific requirements here? The C++ node starts as a regular
node and also uses Spring.

Vladimir.
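
Note also that the exception quoted below is, on its face, a configuration
mismatch rather than a store problem: peerClassLoadingEnabled must have the
same value on every node (.NET, C++ and Java alike). A minimal sketch of
setting it programmatically (the same property can be set in the Spring XML):

import org.apache.ignite.Ignite;
import org.apache.ignite.Ignition;
import org.apache.ignite.configuration.IgniteConfiguration;

IgniteConfiguration cfg = new IgniteConfiguration();
// true or false - just keep it identical across all nodes.
cfg.setPeerClassLoadingEnabled(true);
Ignite ignite = Ignition.start(cfg);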

On Mon, Apr 25, 2016 at 5:32 PM, Murthy Kakarlamudi 
wrote:

> Any help on this issue please...
>
> On Sat, Apr 16, 2016 at 7:29 PM, Murthy Kakarlamudi 
> wrote:
>
>> Hi,
>>In my use case, I am starting a node from .NET which loads data from a
>> SQL Server table into the cache on startup. I have to read those entries
>> from the cache from a C++ node that acts as a client. I am getting the
>> below error trying to start the node from C++.
>>
>> [19:08:57] Security status [authentication=off, tls/ssl=off]
>> [19:08:58,163][SEVERE][main][IgniteKernal] Failed to start manager:
>> GridManagerAdapter [enabled=true,
>> name=o.a.i.i.managers.discovery.GridDiscoveryManager]
>> class org.apache.ignite.IgniteCheckedException: Remote node has peer
>> class loading enabled flag different from local [locId8=f02445af,
>> locPeerClassLoading=true, rmtId8=8e52f9c9, rmtPeerClassLoading=false,
>> rmtAddrs=[LAPTOP-QIT4AVOG/0:0:0:0:0:0:0:1, LAPTOP-QIT4AVOG/127.0.0.1,
>> LAPTOP-QIT4AVOG/192.168.0.5,
>> LAPTOP-QIT4AVOG/2001:0:9d38:90d7:145b:5bf:bb9b:11d9,
>> LAPTOP-QIT4AVOG/2600:8806:0:8d00:0:0:0:1,
>> /2600:8806:0:8d00:3ccf:1e94:1ab4:83a9,
>> /2600:8806:0:8d00:f114:bf30:2068:352d]]
>> at
>> org.apache.ignite.internal.managers.discovery.GridDiscoveryManager.checkAttributes(GridDiscoveryManager.java:1027)
>> at
>> org.apache.ignite.internal.managers.discovery.GridDiscoveryManager.start(GridDiscoveryManager.java:680)
>> at
>> org.apache.ignite.internal.IgniteKernal.startManager(IgniteKernal.java:1505)
>> at
>> org.apache.ignite.internal.IgniteKernal.start(IgniteKernal.java:917)
>> at
>> org.apache.ignite.internal.IgnitionEx$IgniteNamedInstance.start0(IgnitionEx.java:1688)
>> at
>> org.apache.ignite.internal.IgnitionEx$IgniteNamedInstance.start(IgnitionEx.java:1547)
>> at
>> org.apache.ignite.internal.IgnitionEx.start0(IgnitionEx.java:1003)
>> at
>> org.apache.ignite.internal.IgnitionEx.start(IgnitionEx.java:534)
>> at
>> org.apache.ignite.internal.IgnitionEx.start(IgnitionEx.java:515)
>> at org.apache.ignite.Ignition.start(Ignition.java:322)
>> at
>> org.apache.ignite.internal.processors.platform.PlatformAbstractBootstrap.start(PlatformAbstractBootstrap.java
>>
>> Below is my config for the .NET node:
>> 
>>
>> http://www.springframework.org/schema/beans;
>>xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance;
>>xsi:schemaLocation="
>> http://www.springframework.org/schema/beans
>> http://www.springframework.org/schema/beans/spring-beans.xsd;>
>>   > class="org.apache.ignite.configuration.IgniteConfiguration">
>> 
>>   > class="org.apache.ignite.configuration.ConnectorConfiguration">
>> 
>>   
>> 
>>
>> 
>>   
>> 
>>   
>>   
>>   
>>   
>>   
>>   
>> > class="org.apache.ignite.platform.dotnet.PlatformDotNetCacheStoreFactory">
>>   > value="TestIgniteDAL.SQLServerStore, TestIgniteDAL"/>
>> 
>>   
>>   
>> 
>>   
>> 
>> 
>> 
>> 
>>   
>> 
>> 
>> 
>> 
>> 
>> 
>>   
>> 
>> 
>> 
>>   
>> 
>>   
>> 
>>   
>> 
>>   
>> 
>>   
>> 
>>
>> 
>>   > class="org.apache.ignite.spi.communication.tcp.TcpCommunicationSpi">
>> 
>>   
>> 
>>
>> 
>>   
>> 
>>   > class="org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder">
>> 
>>   
>> 127.0.0.1:47500..47509
>>   
>> 
>>   
>> 
>>   
>> 
>>   
>> 
>>
>>
>> Below is my config for the node from C++:
>> 
>>
>> 
>>
>> http://www.springframework.org/schema/beans;
>>xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance;
>>xmlns:util="http://www.springframework.org/schema/util;
>>xsi:schemaLocation="
>> http://www.springframework.org/schema/beans
>> http://www.springframework.org/schema/beans/spring-beans.xsd
>> http://www.springframework.org/schema/util
>> http://www.springframework.org/schema/util/spring-util.xsd;>
>> > class="org.apache.ignite.configuration.IgniteConfiguration">
>> 
>> 
>>
>> 

Re: IgniteSemaphore problem

2016-04-25 Thread Vladimir Ozerov
Hi,

There were several issues associated with the distributed semaphore in the
1.5.0 release. As far as I know, most of them will be fixed in the 1.6.0
release.
Could you try building from the current development master and see if the
problem is still there?

Vladimir.

On Wed, Apr 20, 2016 at 1:00 PM, swoky  wrote:

> We now have two nodes. We created an IgniteSemaphore on node A, and then we
> can use it from node B.
>
> But when node A goes down, we can't use it from node B any more.
>
> Is there some configuration I missed?
>
> below is my Configuration file
>
> -
> http://www.springframework.org/schema/beans;
>xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance;
>xmlns:util="http://www.springframework.org/schema/util;
>xsi:schemaLocation="
> http://www.springframework.org/schema/beans
> http://www.springframework.org/schema/beans/spring-beans.xsd
> http://www.springframework.org/schema/util
> http://www.springframework.org/schema/util/spring-util.xsd;>
>
> 
>
> 
>  class="org.apache.ignite.configuration.CacheConfiguration">
>
> 
>
> 
> 
> 
>
>
> 
>  class="org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi">
> 
> 
> class="org.apache.ignite.spi.discovery.tcp.ipfinder.zk.TcpDiscoveryZookeeperIpFinder">
>  value="false" />
>  value="127.0.0.1:2181"/>
> 
> 
> 
> 
> 
> 
> 
> and the java code :
> ---
> Ignite ignite = Ignition.start("d:\\x\\ignite.xml");
>
> IgniteSemaphore semaphore = ignite.semaphore(
> "semName", // Distributed semaphore name.
> 90,        // Number of permits.
> true,      // Release acquired permits if the node that owned them left topology.
> true       // Create if it doesn't exist.
> );
> -
>
>
>
> --
> View this message in context:
> http://apache-ignite-users.70518.x6.nabble.com/IgniteSemaphore-problem-tp4366.html
> Sent from the Apache Ignite Users mailing list archive at Nabble.com.
>


Re: Getting a null pointer exception when trying to bulk load a cache using LoadCache

2016-04-25 Thread Vladimir Ozerov
Hi,

Could you please clarify what ROCCache and ROCCacheConfiguration are? And
how many nodes do you have in the topology?

Vladimir.
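
For comparison, a bulk load with plain Ignite APIs looks roughly like the
sketch below (ROCCache appears to be a custom wrapper; Organization, the
JdbcType setup and the "loadCacheAll" name come from the quoted test, while
the "myDataSource" bean name is assumed). Note that the JDBC driver and the
POJO classes must be on the classpath of every server node, or loadCache
will fail remotely.

import org.apache.ignite.IgniteCache;
import org.apache.ignite.cache.store.jdbc.CacheJdbcPojoStoreFactory;
import org.apache.ignite.configuration.CacheConfiguration;

CacheConfiguration<Integer, Organization> ccfg = new CacheConfiguration<>("loadCacheAll");
CacheJdbcPojoStoreFactory<Integer, Organization> storeFactory = new CacheJdbcPojoStoreFactory<>();
storeFactory.setDataSourceBean("myDataSource"); // assumed Spring bean name
storeFactory.setTypes(jdbcType);                // the JdbcType built in the quoted test
ccfg.setCacheStoreFactory(storeFactory);
ccfg.setReadThrough(true);
ccfg.setWriteThrough(true);

IgniteCache<Integer, Organization> cache = ignite.getOrCreateCache(ccfg);
cache.loadCache(null); // null filter - load everything the store returns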

On Thu, Apr 21, 2016 at 10:34 AM, abhishek94 
wrote:

> 1 client node and 1 remote server node. I have added the POJO jar and the
> MySQL jar to the remote node's libs, but loadCache still throws a
> NullPointerException.
>
> The code  is as follows
>
>   @Test
>   public void testClusterNormalCache() {
> Connection con = null;
> Statement stmt = null;
> String sql = null;
> IgniteCluster igniteCluster = rocCachemanager.getCluster();
> HashMap<String, Object> defaults = new HashMap<String, Object>();
> HashMap<String, Object> hmLocal = new HashMap<String, Object>();
> hmLocal.put("host", "10.113.56.231");
> hmLocal.put("port", 22);
> defaults.put("port", 22);
> hmLocal.put("uname", "root");
> hmLocal.put("passwd", "root123");
> hmLocal.put("nodes", 1);
> hmLocal.put("igniteHome",
> "/home/benakaraj/Downloads/apache-ignite-fabric-1.5.0.final-bin");
> hmLocal.put("cfg", "conf/spring_igniteConfig.xml");
> Collection<Map<String, Object>> hosts = new ArrayList<Map<String, Object>>();
> hosts.add(hmLocal);
> try {
>   Collection<ClusterStartNodeResult> result =
> igniteCluster.startNodes(hosts, defaults, true, 1, 2);
>   int waitTime = 0;
>   while (waitTime < 15000) {
> synchronized (this) {
>
>   try {
> wait(2000);
>   } catch (InterruptedException e) {
> // TODO Auto-generated catch block
> e.printStackTrace();
>   }
> }
>
> waitTime += 2000;
>   }
>   Collection<UUID> ids = new ArrayList<UUID>();
>   for (ClusterStartNodeResult res : result) {
> if (!res.isSuccess())
>   System.out.println(res.getError());
> else {
>   System.out.println(res.getHostName());
>   ClusterMetrics metrics = igniteCluster.metrics();
>   ClusterGroup group = igniteCluster.forRemotes();
>   // ClusterNode node=group.node();
>   Collection<ClusterNode> nodes = group.nodes();
>   System.out.println("Nodes--" + nodes.size());
>   for (ClusterNode node : nodes) {
> ids.add(node.id());
> Map<String, Object> hm = node.attributes();
> for (java.util.Map.Entry<String, Object> entry : hm.entrySet())
> {
>   // System.out.println(entry.getKey()+"---"+entry.getValue());
> }
>
>
> System.out.println("");
>
> System.out.println("");
>
> System.out.println("");
>
> System.out.println("");
>
>   }
>
>
> }
>   }
>   try {
> con = ds.getConnection();
>
> stmt = con.createStatement();
> sql = "create table Organization(org_id int,org_name
> varchar(20),primary key(org_id))";
> stmt.executeUpdate(sql);
> sql = "insert into Organization values(1,'Subex')";
> stmt.executeUpdate(sql);
> sql = "insert into Organization values(2,'Informatica')";
> stmt.executeUpdate(sql);
> sql = "insert into Organization values(3,'SAP')";
> stmt.executeUpdate(sql);
> sql = "insert into Organization values(4,'William O Niel')";
> stmt.executeUpdate(sql);
> sql = "insert into Organization values(5,'Subex1')";
> stmt.executeUpdate(sql);
> sql = "insert into Organization values(6,'Informatica1')";
> stmt.executeUpdate(sql);
> sql = "insert into Organization values(7,'SAP1')";
> stmt.executeUpdate(sql);
> sql = "insert into Organization values(8,'William O Niel1')";
> stmt.executeUpdate(sql);
> sql = "insert into Organization values(9,'Subex2')";
> stmt.executeUpdate(sql);
> sql = "insert into Organization values(10,'Informatica2')";
> stmt.executeUpdate(sql);
> sql = "insert into Organization values(11,'SAP2')";
> stmt.executeUpdate(sql);
> sql = "insert into Organization values(12,'William O Niel2')";
> stmt.executeUpdate(sql);
>   } catch (Exception e) {
>   }
>
>   try {
>
> ROCCacheConfiguration<Integer, Organization> new4 = new
> ROCCacheConfiguration<>();
> new4.setName("loadCacheAll");
> new4.setCacheMode(CacheMode.REPLICATED);
> new4.setReadThrough(true);
> new4.setWriteThrough(true);
> JdbcType jdbcType = new JdbcType();
>
> jdbcType.setCacheName("loadCacheAll");
>
> jdbcType.setDatabaseSchema(con.getCatalog());
>
> jdbcType.setDatabaseTable("Organization");
> jdbcType.setKeyType(Integer.class);
> jdbcType.setValueType(Organization.class); // Key fields for
> PERSON.
>
> Collection keys = new 

Re: Ignite Installation with Spark under CDH

2016-04-25 Thread Vladimir Ozerov
Hi,

Could you please clarify the exact problem? Do you see any exception?

Vladimir.

On Fri, Apr 22, 2016 at 4:57 PM, mdolgonos 
wrote:

> I'm trying to install and integrate Ignite with Spark under CDH by
> following the recommendation at
> https://apacheignite-fs.readme.io/docs/installation-deployment for
> Standalone Deployment. I modified the existing
> $SPARK_HOME/conf/spark-env.sh as recommended, but the following command
> still can't recognize the Ignite package:
>
> import org.apache.ignite.configuration._
>
> Can somebody advise if I'm missing anything?
> Thank you,
>
>
>
> --
> View this message in context:
> http://apache-ignite-users.70518.x6.nabble.com/Ignite-Installation-with-Spark-under-CDH-tp4457.html
> Sent from the Apache Ignite Users mailing list archive at Nabble.com.
>


Re: Detecting a node leaving the cluster

2016-04-25 Thread Vladimir Ozerov
Ralph,

EVT_NODE_LEFT and EVT_NODE_FAILED occur on local node. They essentially
mean "I saw that remote node went down".

Vladimir.

On Sat, Apr 23, 2016 at 5:48 PM, Ralph Goers 
wrote:

> Some more information that may be of help.
>
> Each user of a client application creates a “session” that is represented
> in the distributed cache. Each session has its own connection to the third
> party application. If a user uses multiple client applications they will
> reuse the same session and connection with the third party application. So
> when a single node goes down all the user’s sessions need to become “owned”
> by different nodes.
>
> In the javadoc I do see IgniteEvents.localListen(), but the description
> says it listens for “local” events. I wouldn’t expect EVT_NODE_LEFT or
> EVT_NODE_FAILED to be considered local events, so I am a bit confused as to
> what the method does.
>
> Ralph
>
> On Apr 23, 2016, at 6:49 AM, Ralph Goers 
> wrote:
>
> From what I understand in the documentation client mode will mean I will
> lose high availability, which is the point of using a distributed cache.
>
> The architecture is such that we have multiple client applications that
> need to communicate with the service that has the clustered cache. The
> client applications expect to get callbacks when events occur in the third
> party application the service is communicating with. If one of the service
> nodes fail - for example during a rolling deployment - we need one of the
> other nodes to re-establish the connection with the third party so it can
> continue to monitor for the events. Note that the service servers are
> load-balanced so they may each have an arbitrary number of connections with
> the third party.
>
> So I either need a listener that tells me when one of the nodes in the
> cluster has left or a way of creating the connection using something ignite
> provides so that it automatically causes the connection to be recreated
> when a node leaves.
>
> Ralph
>
>
> On Apr 23, 2016, at 12:01 AM, Владислав Пятков 
> wrote:
>
> Hello Ralph,
>
> I think the correct way is to use a client node (with setClientMode set to
> true) to control the cluster. A client node is isolated from data
> processing and is not subject to failure under load.
> Why do you connect each node to the third-party application instead of
> doing that only from the client?
>
> On Sat, Apr 23, 2016 at 4:10 AM, Ralph Goers 
> wrote:
>
>> I have an application that is using Ignite for a clustered cache.  Each
>> member of the cache will have connections open with a third party
>> application. When a cluster member stops its connections must be
>> re-established on other cluster members.
>>
>> I can do this manually if I have a way of detecting a node has left the
>> cluster, but I am hoping that there is some other recommended way of
>> handling this.
>>
>> Any suggestions?
>>
>> Ralph
>>
>
>
>
> --
> Vladislav Pyatkov
>
>
>
>


Re: java.lang.OutOfMemoryError in normal Ignite usage

2016-04-25 Thread Vladimir Ozerov
Hi,

Most probably, you just have insufficient heap, as queries require some
heap space during execution. You should give your node more heap and
monitor it for some time. If all is fine, you will see a saw-tooth
pattern - memory is allocated and then released. If there is a leak, you
will see constant heap growth.

Vladimir.
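
A minimal, JDK-only way to watch for that pattern from inside the process
(a sketch; a profiler or JMX console is more convenient in practice):

import java.lang.management.ManagementFactory;
import java.lang.management.MemoryUsage;

public class HeapWatcher {
    public static void main(String[] args) throws InterruptedException {
        while (true) {
            MemoryUsage heap = ManagementFactory.getMemoryMXBean().getHeapMemoryUsage();
            // Saw-tooth (used drops after each GC) = healthy;
            // steady growth toward max = likely leak or undersized heap.
            System.out.printf("heap used: %d MB of %d MB%n",
                heap.getUsed() >> 20, heap.getMax() >> 20);
            Thread.sleep(5_000);
        }
    }
}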

On Sun, Apr 24, 2016 at 1:53 AM, Eduardo Julian 
wrote:

> Hi, everyone.
>
> I started a program 3 days ago on a server with 1GB RAM.
>
> Everything was running smoothly until a couple of hours ago, when the
> service wasn't responding.
> When we go check the logs, there is an OOMError with an Ignite stack-trace
> tagging along.
>
> All that I store on that server is stored on Ignite (with read-through and
> write-through to a MySQL DB).
> It's fairly light stuff, just some objects with 10+ fields, and it's just
> 1 cache right now.
>
> However, in the logs, the bulk of the operations preceding the OOM error
> were read-operations that neither created nor updated the data on Ignite.
> Just performed some queries.
>
> And yet, for some reason, there was a HashMap being updated somewhere,
> which triggered the OOM error.
>
> Check it out:
>
> Apr 23, 2016 4:35:44 PM org.apache.ignite.logger.java.JavaLogger error
> SEVERE: Caught unhandled exception in NIO worker thread (restart the node).
> java.lang.OutOfMemoryError: Java heap space
> at java.util.HashMap$KeySet.iterator(HashMap.java:912)
> at java.util.HashSet.iterator(HashSet.java:172)
> at
> java.util.Collections$UnmodifiableCollection$1.<init>(Collections.java:1039)
> at
> java.util.Collections$UnmodifiableCollection.iterator(Collections.java:1038)
> at
> org.apache.ignite.internal.util.nio.GridNioServer$AbstractNioClientWorker.checkIdle(GridNioServer.java:1489)
> at
> org.apache.ignite.internal.util.nio.GridNioServer$AbstractNioClientWorker.bodyInternal(GridNioServer.java:1406)
> at
> org.apache.ignite.internal.util.nio.GridNioServer$AbstractNioClientWorker.body(GridNioServer.java:1280)
> at
> org.apache.ignite.internal.util.worker.GridWorker.run(GridWorker.java:110)
> at java.lang.Thread.run(Thread.java:745)
>
> Apr 23, 2016 4:35:45 PM org.apache.ignite.logger.java.JavaLogger error
> SEVERE: Runtime error caught during grid runnable execution: GridWorker
> [name=grid-nio-worker-0, gridName=null, finished=false, isCancelled=false,
> hashCode=1645791145, interrupted=false, runner=grid-nio-worker-0-#40%null%]
> java.lang.OutOfMemoryError: Java heap space
> at java.util.HashMap$KeySet.iterator(HashMap.java:912)
> at java.util.HashSet.iterator(HashSet.java:172)
> at
> java.util.Collections$UnmodifiableCollection$1.<init>(Collections.java:1039)
> at
> java.util.Collections$UnmodifiableCollection.iterator(Collections.java:1038)
> at
> org.apache.ignite.internal.util.nio.GridNioServer$AbstractNioClientWorker.checkIdle(GridNioServer.java:1489)
> at
> org.apache.ignite.internal.util.nio.GridNioServer$AbstractNioClientWorker.bodyInternal(GridNioServer.java:1406)
> at
> org.apache.ignite.internal.util.nio.GridNioServer$AbstractNioClientWorker.body(GridNioServer.java:1280)
> at
> org.apache.ignite.internal.util.worker.GridWorker.run(GridWorker.java:110)
> at java.lang.Thread.run(Thread.java:745)
>
> Apr 23, 2016 4:35:46 PM org.apache.ignite.logger.java.JavaLogger error
> SEVERE: Runtime error caught during grid runnable execution: GridWorker
> [name=partition-exchanger, gridName=null, finished=false,
> isCancelled=false, hashCode=685490869, interrupted=false,
> runner=exchange-worker-#45%null%]
> java.lang.OutOfMemoryError: Java heap space
> at java.util.HashMap.newNode(HashMap.java:1734)
> at java.util.HashMap.putVal(HashMap.java:630)
> at java.util.HashMap.put(HashMap.java:611)
> at
> org.apache.ignite.internal.processors.cache.distributed.dht.preloader.GridDhtPartitionMap2.put(GridDhtPartitionMap2.java:112)
> at
> org.apache.ignite.internal.processors.cache.distributed.dht.preloader.GridDhtPartitionMap2.<init>(GridDhtPartitionMap2.java:96)
> at
> org.apache.ignite.internal.processors.cache.distributed.dht.preloader.GridDhtPartitionFullMap.<init>(GridDhtPartitionFullMap.java:107)
> at
> org.apache.ignite.internal.processors.cache.distributed.dht.GridDhtPartitionTopologyImpl.partitionMap(GridDhtPartitionTopologyImpl.java:841)
> at
> org.apache.ignite.internal.processors.cache.GridCachePartitionExchangeManager.sendAllPartitions(GridCachePartitionExchangeManager.java:747)
> at
> org.apache.ignite.internal.processors.cache.GridCachePartitionExchangeManager.refreshPartitions(GridCachePartitionExchangeManager.java:698)
> at
> org.apache.ignite.internal.processors.cache.GridCachePartitionExchangeManager.refreshPartitions(GridCachePartitionExchangeManager.java:724)
> at
> org.apache.ignite.internal.processors.cache.GridCachePartitionExchangeManager.access$1600(GridCachePartitionExchangeManager.java:107)
> at
> 

Re: Compute Grid API in C++/Scala

2016-04-25 Thread Vladimir Ozerov
Hi Arthi,

I think the C++ Compute API will be ready somewhere around version 1.7.
The exact timeline is not known for now, but I think this is a matter of
several months.

Vladimir.

On Mon, Apr 25, 2016 at 8:47 AM, arthi 
wrote:

> Thanks Igor & Val.
>
> Is there a tentative timeline when the next release with C++ compute grid
> will come out?
>
> Thanks,
> Arthi
>
>
>
> --
> View this message in context:
> http://apache-ignite-users.70518.x6.nabble.com/Compute-Grid-API-in-C-Scala-tp4456p4487.html
> Sent from the Apache Ignite Users mailing list archive at Nabble.com.
>


Re: Ignite cache data size problem.

2016-04-22 Thread Vladimir Ozerov
Hi,

It looks like you have relatively small entries (roughly 60-70 bytes per
key-value pair). Ignite also has some intrinsic per-entry overhead, which
could be more than the actual data in this case. However, I certainly would
not expect it to not fit into 80 GB.

Could you please share your key and value model classes and your XML
configuration to investigate it further?

Vladimir.
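
(For rough intuition, with illustrative numbers only: 47M pairs x ~65 B of
raw data is about 3 GB, but if per-entry overhead - entry wrappers, hash map
slots, indexes - adds, say, a couple of hundred bytes, then 47M x ~300 B is
about 14 GB per copy, and one backup copy doubles that. Even then it should
fit into 80 GB, which is why the configuration is worth checking.)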

On Fri, Apr 22, 2016 at 2:02 PM, Zhengqingzheng 
wrote:

> Hi there,
>
> I am trying to load a table with 47 million records, in which the data
> size is less than 3 GB. However, when I load it into memory (two VMs with
> 48 + 32 = 80 GB), it still crashes due to insufficient memory. This
> problem occurred when I instantiated 6 + 4 nodes.
>
> Why does the cache model need so much space (3 GB vs 80 GB)?
>
> Any idea to explain this issue?
>
>
>
> Kind regards,
>
> Kevin
>
>
>


Re: Performance Issue - Threads blocking

2016-04-22 Thread Vladimir Ozerov
Hi,

Could you please explain why you think that the thread is blocked? I see it
is in a RUNNABLE state.

Vladimir.

On Fri, Apr 22, 2016 at 2:41 AM, ccanning  wrote:

> We seem to be having some serious performance issues after adding an Apache
> Ignite local cache to our APIs. Looking at a heap dump, we seem to have a
> bunch of threads blocked by this lock:
>
> "ajp-0.0.0.0-8009-70" - Thread t@641
>java.lang.Thread.State: RUNNABLE
> at
>
> org.apache.ignite.internal.binary.BinaryReaderExImpl.<init>(BinaryReaderExImpl.java:166)
> at
>
> org.apache.ignite.internal.binary.BinaryUtils.doReadObject(BinaryUtils.java:1486)
> at
>
> org.apache.ignite.internal.binary.BinaryUtils.deserializeOrUnmarshal(BinaryUtils.java:1830)
> at
>
> org.apache.ignite.internal.binary.BinaryUtils.doReadMap(BinaryUtils.java:1813)
> at
>
> org.apache.ignite.internal.binary.BinaryReaderExImpl.deserialize(BinaryReaderExImpl.java:1597)
> at
>
> org.apache.ignite.internal.binary.BinaryReaderExImpl.readField(BinaryReaderExImpl.java:1646)
> at
>
> org.apache.ignite.internal.binary.BinaryFieldAccessor$DefaultFinalClassAccessor.read(BinaryFieldAccessor.java:643)
> at
>
> org.apache.ignite.internal.binary.BinaryClassDescriptor.read(BinaryClassDescriptor.java:714)
> at
>
> org.apache.ignite.internal.binary.BinaryReaderExImpl.deserialize(BinaryReaderExImpl.java:1450)
> at
>
> org.apache.ignite.internal.binary.BinaryObjectImpl.deserializeValue(BinaryObjectImpl.java:537)
> at
>
> org.apache.ignite.internal.binary.BinaryObjectImpl.value(BinaryObjectImpl.java:117)
> at
>
> org.apache.ignite.internal.processors.cache.CacheObjectContext.unwrapBinary(CacheObjectContext.java:280)
> at
>
> org.apache.ignite.internal.processors.cache.CacheObjectContext.unwrapBinaryIfNeeded(CacheObjectContext.java:145)
> at
>
> org.apache.ignite.internal.processors.cache.GridCacheEventManager.addEvent(GridCacheEventManager.java:276)
> at
>
> org.apache.ignite.internal.processors.cache.GridCacheEventManager.addEvent(GridCacheEventManager.java:159)
> at
>
> org.apache.ignite.internal.processors.cache.GridCacheEventManager.addEvent(GridCacheEventManager.java:92)
> at
>
> org.apache.ignite.internal.processors.cache.GridCacheMapEntry.innerGet0(GridCacheMapEntry.java:862)
> - locked <70d32489> (a
> org.apache.ignite.internal.processors.cache.local.GridLocalCacheEntry)
> at
>
> org.apache.ignite.internal.processors.cache.GridCacheMapEntry.innerGet(GridCacheMapEntry.java:669)
> at
>
> org.apache.ignite.internal.processors.cache.local.atomic.GridLocalAtomicCache.getAllInternal(GridLocalAtomicCache.java:587)
> at
>
> org.apache.ignite.internal.processors.cache.local.atomic.GridLocalAtomicCache.get(GridLocalAtomicCache.java:483)
> at
>
> org.apache.ignite.internal.processors.cache.GridCacheAdapter.get(GridCacheAdapter.java:1378)
> at
>
> org.apache.ignite.internal.processors.cache.IgniteCacheProxy.get(IgniteCacheProxy.java:864)
> at
> org.apache.ignite.cache.spring.SpringCache.get(SpringCache.java:52)
>
>  - locked <70d32489> (a
> org.apache.ignite.internal.processors.cache.local.GridLocalCacheEntry)
>
> Should this be causing blocking in a high-throughput API? Do you have any
> pointers in how we could solve this issue?
>
> Thanks.
>
>
>
> --
> View this message in context:
> http://apache-ignite-users.70518.x6.nabble.com/Performance-Issue-Threads-blocking-tp4433.html
> Sent from the Apache Ignite Users mailing list archive at Nabble.com.
>


Re: Map-reduce processing

2016-04-20 Thread Vladimir Ozerov
Hi,

If you broadcast the job and want to iterate over the cache inside it, then
please make sure that you iterate only over local entries (e.g.
IgniteCache.localEntries(), ScanQuery.setLocal(true), etc.). Otherwise your
jobs will duplicate work and performance will suffer.
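
A sketch of that pattern (Person and the "persons" cache name are
hypothetical; CachePeekMode.PRIMARY ensures each entry is visited on exactly
one node, even with backups configured):

import java.util.Collection;
import javax.cache.Cache;
import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.cache.CachePeekMode;
import org.apache.ignite.lang.IgniteCallable;
import org.apache.ignite.resources.IgniteInstanceResource;

Collection<Long> partials = ignite.compute().broadcast(new IgniteCallable<Long>() {
    @IgniteInstanceResource
    private Ignite localIgnite;

    @Override public Long call() {
        long n = 0;
        IgniteCache<Integer, Person> cache = localIgnite.cache("persons");
        for (Cache.Entry<Integer, Person> e : cache.localEntries(CachePeekMode.PRIMARY))
            if (e.getValue().getAge() > 30)
                n++;
        return n;
    }
});
// Reduce on the caller: sum the per-node partial counts.
long total = partials.stream().mapToLong(Long::longValue).sum();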

Also please note that the returned result set might be incomplete if one of
the nodes fails during job processing. If you care about that, you should
either implement some failover, or use Ignite's built-in queries (ScanQuery,
SqlQuery), which already take care of it.

Anyway, I strongly recommend focusing on SqlQuery first. You can configure
indexes on the cache, and they can give you a great boost: instead of
iterating over the whole cache, Ignite will use the indexes for fast data
lookup.

Vladimir.
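
For reference, the SqlQuery approach recommended above, sketched with the
same hypothetical Person type (assumes its "age" field is annotated with
@QuerySqlField(index = true)):

import java.util.List;
import javax.cache.Cache;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.cache.query.SqlQuery;
import org.apache.ignite.configuration.CacheConfiguration;

CacheConfiguration<Integer, Person> ccfg = new CacheConfiguration<>("persons");
ccfg.setIndexedTypes(Integer.class, Person.class); // Registers Person's SQL fields and indexes.
IgniteCache<Integer, Person> cache = ignite.getOrCreateCache(ccfg);

SqlQuery<Integer, Person> qry = new SqlQuery<>(Person.class, "age > ?");
List<Cache.Entry<Integer, Person>> res = cache.query(qry.setArgs(30)).getAll();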

On Wed, Apr 20, 2016 at 12:31 PM, dmreshet  wrote:

> Yes, I know.
> I want to compare the performance of SQL, SQL with indexes, and a MapReduce
> job.
> I have found that I can use broadcast to guarantee that my MapReduce job
> will be executed on each node exactly once.
> So now my job uses code like:
>
> Collection<...> result =
> ignite.compute(ignite.cluster()).broadcast((IgniteCallable<...>) () -> {...});
>
> And than I will reduce the result.
>
> Is that the best practise to implement MapReduce job in case that I should
> process data from cache?
>
>
>
> --
> View this message in context:
> http://apache-ignite-users.70518.x6.nabble.com/Map-reduce-proceesing-tp4357p4364.html
> Sent from the Apache Ignite Users mailing list archive at Nabble.com.
>

