Re: IgniteDataStreamer

2018-01-15 Thread Alexey Kukushkin
Thank you Thomas, I did not even know that the star "*" causes a
CacheConfiguration bean to be registered as a template. I am not sure Ignite
has it documented anywhere.


RE: IgniteDataStreamer

2018-01-15 Thread Thomas Isaksen
Hi

You can configure a template as follows:

 
 




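A minimal Java sketch of the same idea, assuming a cache configuration whose
name ends with '*' (which Ignite treats as a template rather than a cache to
start); the cache mode and backup count are illustrative:

import org.apache.ignite.Ignite;
import org.apache.ignite.Ignition;
import org.apache.ignite.cache.CacheMode;
import org.apache.ignite.configuration.CacheConfiguration;
import org.apache.ignite.configuration.IgniteConfiguration;

public class TemplateNode {
    public static void main(String[] args) {
        IgniteConfiguration cfg = new IgniteConfiguration();

        // Trailing '*' => registered as a template named "myCache", not started as a cache.
        CacheConfiguration<Object, Object> tpl = new CacheConfiguration<>("myCache*");
        tpl.setCacheMode(CacheMode.PARTITIONED);
        tpl.setBackups(1);

        cfg.setCacheConfiguration(tpl);

        try (Ignite ignite = Ignition.start(cfg)) {
            // CREATE TABLE ... WITH "template=myCache, cache_name=whateverYouWant" now works.
        }
    }
}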
Then you can do 

CREATE TABLE IF NOT EXISTS MyTable 
(
id bigint,
myValue VARCHAR,
PRIMARY KEY (id)
)
WITH "template=myCache, cache_name=whateverYouWant";

If you don't set "cache_name" your cache will be called "SQL_PUBLIC_MYTABLE"


./t


-Original Message-
From: gene [mailto:gene.foj...@gm.com] 
Sent: tirsdag 16. januar 2018 04.31
To: user@ignite.apache.org
Subject: Re: IgniteDataStreamer

I saw that method.  But the 2.3 documentation for CREATE TABLE indicates you
can pass the template via WITH.

WITH - accepts additional parameters not defined by ANSI-99 SQL:
TEMPLATE= - case-sensitive name of a cache template
registered in Ignite to use as a configuration for the distributed cache that 
is deployed by the CREATE TABLE command. A template is an instance of the 
CacheConfiguration class registered with Ignite.addCacheConfiguration in the 
cluster. Use predefined TEMPLATE=PARTITIONED or TEMPLATE=REPLICATED templates 
to create the cache with the corresponding replication mode. The rest of the 
parameters will be those that are defined in the CacheConfiguration object. By 
default, TEMPLATE=PARTITIONED is used if the template is not specified 
explicitly.

-g



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Purpose of cache in cache.query(Create Table ...) statement

2018-01-15 Thread Shravya Nethula
Hi Andrey,

Thank you for the information.

I want to create some tables using cache.query(Create Table ...) statement.
Is there any way in which I can group some of my tables in one cache? Is
there any hierarchy in organizing the caches like a super cache holding some
sub caches? 

Regards,
Shravya Nethula.



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Connection problem between client and server

2018-01-15 Thread Jeff Jiao
Thank you very much Denis, we will give it a try and let you know, I think
this should solve our problem.



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: IgniteDataStreamer

2018-01-15 Thread gene
I saw that method.  But the 2.3 documentation for CREATE TABLE indicates you
can pass the template via WITH.

WITH - accepts additional parameters not defined by ANSI-99 SQL:
TEMPLATE= - case-sensitive name of a cache template
registered in Ignite to use as a configuration for the distributed cache
that is deployed by the CREATE TABLE command. A template is an instance of
the CacheConfiguration class registered with Ignite.addCacheConfiguration in
the cluster. Use predefined TEMPLATE=PARTITIONED or TEMPLATE=REPLICATED
templates to create the cache with the corresponding replication mode. The
rest of the parameters will be those that are defined in the
CacheConfiguration object. By default, TEMPLATE=PARTITIONED is used if the
template is not specified explicitly.

-g



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Limit cache size ?

2018-01-15 Thread Jeff Jiao
Hi Val,

Is there a max entry limit for a single cache by default?
Is it Integer.MAX_VALUE? what will happen if the data amount reaches the
limit or pass it?


Thanks,
Jeff



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Create BinaryObject without starting Ignite?

2018-01-15 Thread vkulichenko
zbyszek,

Generally, the answer is no. Binary format depends on internal Ignite
context, so there is no clean way to create a binary object without starting
Ignite. The code that was provided in the referenced thread is a hacky
workaround which probably worked in one of the previous versions, but
there is a big chance it no longer works in the latest one.

What is the purpose of this?

-Val



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Semaphore Stuck when no acquirers to assign permit

2018-01-15 Thread Timay
I saw a release date set for 2.4 but have not had any feedback on the JIRA
ticket, so I wanted to check in on this. Can this make it into the 2.4 release?

Thanks
Tim



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: IgniteDataStreamer

2018-01-15 Thread Alexey Kukushkin
I do not think there is a way to add a cache template via an XML configuration
file. You have to write code: ignite.addCacheConfiguration(cacheTemplate),
where cacheTemplate is an instance of CacheConfiguration that you reference
from CREATE TABLE ... WITH "template=NAME" by NAME.
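For reference, a rough sketch of that call (the template name is illustrative):

// Assumes an already started node; "mytemplate" is an illustrative name.
Ignite ignite = Ignition.ignite();

CacheConfiguration<Object, Object> cacheTemplate = new CacheConfiguration<>("mytemplate");
cacheTemplate.setCacheMode(CacheMode.REPLICATED);

// Registers the configuration cluster-wide as a template.
ignite.addCacheConfiguration(cacheTemplate);

// Afterwards: CREATE TABLE ... WITH "template=mytemplate"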


RE: SQL Error: Failed to query Ignite

2018-01-15 Thread Thomas Isaksen
I deleted everything under $IGNITE_HOME/work and it's working as expected now. 
I probably messed things up with all my different configuration attempts :)
Again, thank you so much for your help!

./t

-Original Message-
From: slava.koptilin [mailto:slava.kopti...@gmail.com] 
Sent: mandag 15. januar 2018 16.44
To: user@ignite.apache.org
Subject: RE: SQL Error: Failed to query Ignite

Hi,

> Now my question is simply, why the star? What does it do?
"*" means that it is a cache template :)

> I now have two "identical" caches
Hmm, I cannot reproduce that behavior. Please share your Ignite configuration.

Thanks!




--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Test Ignite Client

2018-01-15 Thread afedotov
Hi Humphrey,

Consider forcing server mode on the client
https://apacheignite.readme.io/docs/clients-vs-servers#section-forcing-server-mode-on-client-nodes.

Another option is to enforce the bean definition registration order. You could
explicitly order the configurations so that the server bean gets registered
before the client one. Another trick is to use the
@ConditionalOnBean(name = "serverBean") annotation to impose the bean
definition order.
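A small sketch of the forcing-server-mode option, assuming TCP discovery:

IgniteConfiguration cfg = new IgniteConfiguration();
cfg.setClientMode(true);

// Make the client join the discovery ring like a server node,
// so it can start even before any server nodes are up.
TcpDiscoverySpi discoverySpi = new TcpDiscoverySpi();
discoverySpi.setForceServerMode(true);
cfg.setDiscoverySpi(discoverySpi);

Ignition.start(cfg);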

Kind regards,
Alex



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: IgniteOutOfMemoryException when using putAll instead of put

2018-01-15 Thread Alexey Popov
Hi Larry,

I am without my PC for a while. I will check the file you attached later
this week.

Thanks,
Alexey



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Default Cache template

2018-01-15 Thread slava.koptilin
Hello,

A cache template can be configured in the following way:











In that case, you should be able to create a table: 
CREATE TABLE TEST(id LONG, name VARCHAR, PRIMARY KEY (id)) WITH 
template=cachetemplate;

Thanks, 
Slava.



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Default Cache template

2018-01-15 Thread Evgenii Zhuravlev
Hi gene,

You just need to create the template in Ignite before running it:
https://apacheignite.readme.io/v2.1/docs/distributed-ddl#section-extended-parameters

The template should be registered like a regular cache, but with the symbol '*'
at the end of its name.

If you are not able to create it from the Java API or an XML config file, you
can use one of the default templates, for example: TEMPLATE=PARTITIONED or
TEMPLATE=REPLICATED.

If you still can't use templates, please share what you've already tried.

Evgenii

2018-01-15 19:27 GMT+03:00 gene :

> Hello,  I'm having difficulties setting the default template while using
> the
> DDL create table.  I've tried multiple ways, however I don't believe I have
> the register cache template part down.  While trying to create any Table w/
> DDL using template= I get a cache not found error.
>
> please advise.
>
> -gene
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: IgniteDataStreamer

2018-01-15 Thread gene
thank you.

-g



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: IgniteDataStreamer

2018-01-15 Thread gene
Thank you.  I've moved past this issue now and have been able to create
Table/Cache's using the DDL and WITH parameters.  The only issue I'm facing
now is how to register a cache template that I can call using the WITH
Template=

-g



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Default Cache template

2018-01-15 Thread gene
Hello, I'm having difficulties setting the default template while using the
DDL CREATE TABLE. I've tried multiple ways, however I don't believe I have
the cache template registration part down. While trying to create any table
with DDL using template= I get a cache not found error.

please advise.

-gene



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Purpose of cache in cache.query(Create Table ...) statement

2018-01-15 Thread Andrey Mashenkov
Hi Shravya,


This is because the Ignite API currently requires a cache instance to run a
query.

We are going to deprecate this API and implement a higher-level SQL API to
avoid dummy cache creation, but the old API can't be dropped until the next
major (3.0) version due to compatibility requirements.

On Mon, Jan 15, 2018 at 5:12 PM, Shravya Nethula <
shravya.neth...@aline-consulting.com> wrote:

> Hi guys,
>
> I am new to the world of Ignite. I am currently going through the examples
> folder.
> The following are the statements from example CacheQueryDdlExample
> (apache-ignite-fabric-2.3.0-bin/examples/src/main/java/
> org/apache/ignite/examples/datagrid/CacheQueryDdlExample.java):
>
> cache.query(new SqlFieldsQuery(
> "CREATE TABLE city (id LONG PRIMARY KEY, name VARCHAR)
> WITH \"template=replicated\"")).getAll();
>
> cache.query(new SqlFieldsQuery(
> "CREATE TABLE person (id LONG, name VARCHAR, city_id
> LONG, PRIMARY KEY (id, city_id)) " +
> "WITH \"backups=1, affinityKey=city_id\"")).getAll();
>
> From the above two statements, "cache" is used to create two other caches
> "SQL_PUBLIC_CITY" and "SQL_PUBLIC_PERSON". Why do we need one cache to
> create another cache? What is the main purpose of "cache"?
>
> I kindly request you guys to throw some light on this topic. Any related
> information is appreciable.
>
> Regards,
> Shravya Nethula.
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>



-- 
Best regards,
Andrey V. Mashenkov


Re: Upgrading Ignite Persistence from 2.1 to 2.3

2018-01-15 Thread Andrey Mashenkov
Hi Josephine,

Just try to update Ignite to 2.3, specifying pageSize=2048 in the
configuration.
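A minimal sketch of that setting with the 2.3-style storage configuration
(everything except the page size is illustrative):

DataStorageConfiguration storageCfg = new DataStorageConfiguration();

// Keep the page size the 2.1 store was created with; the default changed from 2048 to 4096.
storageCfg.setPageSize(2048);
storageCfg.getDefaultDataRegionConfiguration().setPersistenceEnabled(true);

IgniteConfiguration cfg = new IgniteConfiguration();
cfg.setDataStorageConfiguration(storageCfg);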


On Thu, Jan 11, 2018 at 2:58 PM, Josephine Barboza <
josephine.barb...@nviz.com> wrote:

> Hi Andrey,
>
> Thanks for the info. Could you also let me know how to migrate the data
> from 2.1 to 2.3. Are there any APIs/feature which I can use to do that?
>
>
>
> *From:* Andrey Mashenkov [mailto:andrey.mashen...@gmail.com]
> *Sent:* Thursday, January 11, 2018 5:04 PM
> *To:* user@ignite.apache.org
> *Subject:* Re: Upgrading Ignite Persistence from 2.1 to 2.3
>
>
>
> Hi,
>
>
>
> In 2.3 some configuration methods are deprecated and will be removed in
> next major release (ignite-3.0).
>
> You can safely continue to use old-style configuration, but of course we
> recommend to switch to new-style if possible to be able to use new features.
>
>
> I'm not sure 2.3 can support old storage out of box, but you can try.
>
> Also, please, note the default pagesize was changed in between 2.2 and 2.3
> releases from 2048 to 4096. So, you may need to specify it explicitly in
> your config.
>
>
>
>
>
> On Thu, Jan 11, 2018 at 11:23 AM, Josephine Barboza <
> josephine.barb...@nviz.com> wrote:
>
> Hi,
>
>
>
> I am currently using version 2.1 of ignite and want to upgrade to latest
> version 2.3. My application uses Ignite 
> Persistence(org.apache.ignite.configuration.
> PersistentStoreConfiguration) to persist data. As part of upgrading to
> version 2.3 do I need to change this to DataStoreConfiguration or can I
> continue using PersistentStoreConfiguration. What is recommended? Also, if
> I do need to change to DataStoreConfiguration how do I migrate the existing
> data?
>
>
>
> Please help.
>
>
>
> Thanks,
>
> Josephine
>
> *IMPORTANT NOTICE: This email and any files transmitted with it are
> confidential and intended solely for the use of the individual or entity to
> whom they are addressed. If you have received this email in error, please
> notify the system manager and/or the sender immediately.*
>
>
>
>
>
> --
>
> Best regards,
> Andrey V. Mashenkov
>



-- 
Best regards,
Andrey V. Mashenkov


RE: SQL Error: Failed to query Ignite

2018-01-15 Thread slava.koptilin
Hi,

> Now my question is simply, why the star? What does it do?
"*" means that it is a cache template :)

> I now have two "identical" caches
Hmm, I cannot reproduce that behavior. Please share your Ignite
configuration.

Thanks!




--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Create BinaryObject without starting Ignite?

2018-01-15 Thread zbyszek
Hello Igniters,

Is it possible to create BinaryObject without starting Ignite?

I was trying the following:

private static BinaryObject createPrototype() throws IgniteCheckedException {
    // based on http://apache-ignite-users.70518.x6.nabble.com/Working-Directly-with-Binary-objects-td5131.html
    IgniteConfiguration iCfg = new IgniteConfiguration();
    BinaryConfiguration bCfg = new BinaryConfiguration();
    iCfg.setBinaryConfiguration(bCfg);
    BinaryContext ctx = new BinaryContext(BinaryCachingMetadataHandler.create(), iCfg, new NullLogger());
    BinaryMarshaller marsh = new BinaryMarshaller();
    marsh.setContext(new MarshallerContextImpl(null));
    IgniteUtils.invoke(BinaryMarshaller.class, marsh, "setBinaryContext", ctx, iCfg);
    BinaryObjectBuilder builder = new BinaryObjectBuilderImpl(ctx, "MyBinaryName");
    builder.setField("f1", (String) null);
    builder.setField("f2", (String) null);
    builder.setField("f3", (String) null);
    BinaryObject res = builder.build(); // ---> throws NPE here
    return res;
}

but this throws NPE on builder.build() due to null transport member in
MarshallerContextImpl.

Thank you for your help,
zbyszek



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Segmentation fault (JVM crash) while memory restoring on start with native persistance

2018-01-15 Thread Andrey Mashenkov
Hi Arseny,

Have you had any success reproducing the issue and getting a stacktrace?
Do you observe the same behavior on OracleJDK?

On Mon, Jan 15, 2018 at 5:50 PM, Andrey Mashenkov <
andrey.mashen...@gmail.com> wrote:

> Hi Arseny,
>
> Have you success with reproducing the issue and getting stacktrace?
> Do you observe same behavior on OracleJDK?
>
> On Tue, Dec 26, 2017 at 2:43 PM, Andrey Mashenkov <
> andrey.mashen...@gmail.com> wrote:
>
>> Hi Arseny,
>>
>> This looks like a known issues that is unresolved yet [1],
>> but we can't sure it is same issue as there is no stacktrace in logs
>> attached.
>>
>>
>> [1] https://issues.apache.org/jira/browse/IGNITE-7278
>>
>> On Tue, Dec 26, 2017 at 12:54 PM, Arseny Kovalchuk <
>> arseny.kovalc...@synesis.ru> wrote:
>>
>>> Hi guys.
>>>
>>> We've successfully tested Ignite as in-memory solution, it showed
>>> acceptable performance. But we cannot get stable work of Ignite cluster
>>> with native persistence enabled. Our first error we've got is Segmentation
>>> fault (JVM crash) while memory restoring on start.
>>>
>>> [2017-12-22 11:11:51,992]  INFO [exchange-worker-#46%ignite-instance-0%]
>>> org.apache.ignite.internal.processors.cache.persistence.GridCacheDatabaseSharedManager:
>>> - Read checkpoint status [startMarker=/ignite-work-dire
>>> ctory/db/ignite_instance_0/cp/1513938154201-8c574131-763d-4c
>>> fa-99b6-0ce0321d61ab-START.bin, endMarker=/ignite-work-directo
>>> ry/db/ignite_instance_0/cp/1513932413840-55ea1713-8e9e-44cd-
>>> b51a-fcad8fb94de1-END.bin]
>>> [2017-12-22 11:11:51,993]  INFO [exchange-worker-#46%ignite-instance-0%]
>>> org.apache.ignite.internal.processors.cache.persistence.GridCacheDatabaseSharedManager:
>>> - Checking memory state [lastValidPos=FileWALPointer [idx=391,
>>> fileOffset=220593830, len=19573, forceFlush=false],
>>> lastMarked=FileWALPointer [idx=394, fileOffset=38532201, len=19573,
>>> forceFlush=false], lastCheckpointId=8c574131-763d
>>> -4cfa-99b6-0ce0321d61ab]
>>> [2017-12-22 11:11:51,993]  WARN [exchange-worker-#46%ignite-instance-0%]
>>> org.apache.ignite.internal.processors.cache.persistence.GridCacheDatabaseSharedManager:
>>> - Ignite node stopped in the middle of checkpoint. Will restore memory
>>> state and finish checkpoint on node start.
>>> [CodeBlob (0x7f9b58f24110)]
>>> Framesize: 0
>>> BufferBlob (0x7f9b58f24110) used for StubRoutines (2)
>>> #
>>> # A fatal error has been detected by the Java Runtime Environment:
>>> #
>>> #  Internal Error (sharedRuntime.cpp:842), pid=221,
>>> tid=0x7f9b473c1ae8
>>> #  fatal error: exception happened outside interpreter, nmethods and
>>> vtable stubs at pc 0x7f9b58f248f6
>>> #
>>> # JRE version: OpenJDK Runtime Environment (8.0_151-b12) (build
>>> 1.8.0_151-b12)
>>> # Java VM: OpenJDK 64-Bit Server VM (25.151-b12 mixed mode linux-amd64
>>> compressed oops)
>>> # Derivative: IcedTea 3.6.0
>>> # Distribution: Custom build (Tue Nov 21 11:22:36 GMT 2017)
>>> # Core dump written. Default location: /opt/ignite/core or core.221
>>> #
>>> # An error report file with more information is saved as:
>>> # /ignite-work-directory/core_dump_221.log
>>> #
>>> # If you would like to submit a bug report, please include
>>> # instructions on how to reproduce the bug and visit:
>>> #   http://icedtea.classpath.org/bugzilla
>>> #
>>>
>>>
>>>
>>> Please find logs and configs attached.
>>>
>>> We deploy Ignite along with our services in Kubernetes (v 1.8) on
>>> premises. Ignite cluster is a StatefulSet of 5 Pods (5 instances) of Ignite
>>> version 2.3. Each Pod mounts PersistentVolume backed by CEPH RBD.
>>>
>>> We put about 230 events/second into Ignite, 70% of events are ~200KB in
>>> size and 30% are 5000KB. Smaller events have indexed fields and we query
>>> them via SQL.
>>>
>>> The cluster is activated from a client node which also streams events
>>> into Ignite from Kafka. We use custom implementation of streamer which uses
>>> cache.putAll() API.
>>>
>>> We got the error when we stopped and restarted cluster again. It
>>> happened only on one instance.
>>>
>>> The general question is:
>>>
>>> *Is it possible to tune up (or implement) native persistence in a way
>>> when it just reports about error in data or corrupted data, then skip it
>>> and continue to work without that corrupted part. Thus it will make the
>>> cluster to continue operating regardless of errors on storage?*
>>>
>>>
>>> ​
>>> Arseny Kovalchuk
>>>
>>> Senior Software Engineer at Synesis
>>> skype: arseny.kovalchuk
>>> mobile: +375 (29) 666-16-16
>>> ​LinkedIn Profile ​
>>>
>>
>>
>>
>> --
>> Best regards,
>> Andrey V. Mashenkov
>>
>
>
>
> --
> Best regards,
> Andrey V. Mashenkov
>



-- 
Best regards,
Andrey V. Mashenkov


Re: Segmentation fault (JVM crash) while memory restoring on start with native persistance

2018-01-15 Thread Andrey Mashenkov
Hi Arseny,

Have you had any success reproducing the issue and getting a stacktrace?
Do you observe the same behavior on OracleJDK?

On Tue, Dec 26, 2017 at 2:43 PM, Andrey Mashenkov <
andrey.mashen...@gmail.com> wrote:

> Hi Arseny,
>
> This looks like a known issues that is unresolved yet [1],
> but we can't sure it is same issue as there is no stacktrace in logs
> attached.
>
>
> [1] https://issues.apache.org/jira/browse/IGNITE-7278
>
> On Tue, Dec 26, 2017 at 12:54 PM, Arseny Kovalchuk <
> arseny.kovalc...@synesis.ru> wrote:
>
>> Hi guys.
>>
>> We've successfully tested Ignite as in-memory solution, it showed
>> acceptable performance. But we cannot get stable work of Ignite cluster
>> with native persistence enabled. Our first error we've got is Segmentation
>> fault (JVM crash) while memory restoring on start.
>>
>> [2017-12-22 11:11:51,992]  INFO [exchange-worker-#46%ignite-instance-0%]
>> org.apache.ignite.internal.processors.cache.persistence.GridCacheDatabaseSharedManager:
>> - Read checkpoint status [startMarker=/ignite-work-dire
>> ctory/db/ignite_instance_0/cp/1513938154201-8c574131-763d-
>> 4cfa-99b6-0ce0321d61ab-START.bin, endMarker=/ignite-work-directo
>> ry/db/ignite_instance_0/cp/1513932413840-55ea1713-8e9e-
>> 44cd-b51a-fcad8fb94de1-END.bin]
>> [2017-12-22 11:11:51,993]  INFO [exchange-worker-#46%ignite-instance-0%]
>> org.apache.ignite.internal.processors.cache.persistence.GridCacheDatabaseSharedManager:
>> - Checking memory state [lastValidPos=FileWALPointer [idx=391,
>> fileOffset=220593830, len=19573, forceFlush=false],
>> lastMarked=FileWALPointer [idx=394, fileOffset=38532201, len=19573,
>> forceFlush=false], lastCheckpointId=8c574131-763d-4cfa-99b6-0ce0321d61ab]
>> [2017-12-22 11:11:51,993]  WARN [exchange-worker-#46%ignite-instance-0%]
>> org.apache.ignite.internal.processors.cache.persistence.GridCacheDatabaseSharedManager:
>> - Ignite node stopped in the middle of checkpoint. Will restore memory
>> state and finish checkpoint on node start.
>> [CodeBlob (0x7f9b58f24110)]
>> Framesize: 0
>> BufferBlob (0x7f9b58f24110) used for StubRoutines (2)
>> #
>> # A fatal error has been detected by the Java Runtime Environment:
>> #
>> #  Internal Error (sharedRuntime.cpp:842), pid=221, tid=0x7f9b473c1ae8
>> #  fatal error: exception happened outside interpreter, nmethods and
>> vtable stubs at pc 0x7f9b58f248f6
>> #
>> # JRE version: OpenJDK Runtime Environment (8.0_151-b12) (build
>> 1.8.0_151-b12)
>> # Java VM: OpenJDK 64-Bit Server VM (25.151-b12 mixed mode linux-amd64
>> compressed oops)
>> # Derivative: IcedTea 3.6.0
>> # Distribution: Custom build (Tue Nov 21 11:22:36 GMT 2017)
>> # Core dump written. Default location: /opt/ignite/core or core.221
>> #
>> # An error report file with more information is saved as:
>> # /ignite-work-directory/core_dump_221.log
>> #
>> # If you would like to submit a bug report, please include
>> # instructions on how to reproduce the bug and visit:
>> #   http://icedtea.classpath.org/bugzilla
>> #
>>
>>
>>
>> Please find logs and configs attached.
>>
>> We deploy Ignite along with our services in Kubernetes (v 1.8) on
>> premises. Ignite cluster is a StatefulSet of 5 Pods (5 instances) of Ignite
>> version 2.3. Each Pod mounts PersistentVolume backed by CEPH RBD.
>>
>> We put about 230 events/second into Ignite, 70% of events are ~200KB in
>> size and 30% are 5000KB. Smaller events have indexed fields and we query
>> them via SQL.
>>
>> The cluster is activated from a client node which also streams events
>> into Ignite from Kafka. We use custom implementation of streamer which uses
>> cache.putAll() API.
>>
>> We got the error when we stopped and restarted cluster again. It happened
>> only on one instance.
>>
>> The general question is:
>>
>> *Is it possible to tune up (or implement) native persistence in a way
>> when it just reports about error in data or corrupted data, then skip it
>> and continue to work without that corrupted part. Thus it will make the
>> cluster to continue operating regardless of errors on storage?*
>>
>>
>> ​
>> Arseny Kovalchuk
>>
>> Senior Software Engineer at Synesis
>> skype: arseny.kovalchuk
>> mobile: +375 (29) 666-16-16
>> ​LinkedIn Profile ​
>>
>
>
>
> --
> Best regards,
> Andrey V. Mashenkov
>



-- 
Best regards,
Andrey V. Mashenkov


Re: Error of start with multiple data regions

2018-01-15 Thread slava.koptilin
Hi,

I've created JIRA ticket https://issues.apache.org/jira/browse/IGNITE-7414
As a temporary workaround, I'd suggest deactivating the cluster before
closing.
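In code, the workaround amounts to something like this (sketch):

// Deactivate the cluster before stopping the node, until IGNITE-7414 is fixed.
ignite.active(false);
ignite.close();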

Thanks!



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Apache Ignite & unixODBC and truncating text

2018-01-15 Thread bagsiur
ok,

I ask becouse I wont to be sure. Fix will be included in Ignite 2.4 or 2.5? 

On the ticket: https://issues.apache.org/jira/browse/IGNITE-7362 in details
is write that fix this bug is planning for 2.5 version...



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Purpose of cache in cache.query(Create Table ...) statement

2018-01-15 Thread Shravya Nethula
Hi guys,

I am new to the world of Ignite. I am currently going through the examples
folder.
The following are the statements from example CacheQueryDdlExample
(apache-ignite-fabric-2.3.0-bin/examples/src/main/java/org/apache/ignite/examples/datagrid/CacheQueryDdlExample.java):

cache.query(new SqlFieldsQuery(
"CREATE TABLE city (id LONG PRIMARY KEY, name VARCHAR)
WITH \"template=replicated\"")).getAll();

cache.query(new SqlFieldsQuery(
"CREATE TABLE person (id LONG, name VARCHAR, city_id
LONG, PRIMARY KEY (id, city_id)) " +
"WITH \"backups=1, affinityKey=city_id\"")).getAll();

From the above two statements, "cache" is used to create two other caches
"SQL_PUBLIC_CITY" and "SQL_PUBLIC_PERSON". Why do we need one cache to
create another cache? What is the main purpose of "cache"?

I kindly request you guys to throw some light on this topic. Any related
information is appreciable.

Regards,
Shravya Nethula.



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Connection problem between client and server

2018-01-15 Thread Denis Mekhanikov
Hi Jeff!

There is a workaround for the problem of huge discovery messages.
Try changing the IgniteConfiguration.includeProperties parameter to an empty
array. It will make Ignite filter out all environment variables.
It should improve node connection time in your case.

Add the following lines to IgniteConfiguration in your XML:




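A sketch of the equivalent programmatic setting (setIncludeProperties takes
the names of the properties to include; passing none excludes them all):

IgniteConfiguration cfg = new IgniteConfiguration();

// No names passed => no system/environment properties are attached to node attributes.
cfg.setIncludeProperties();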
Denis

ср, 10 янв. 2018 г. в 11:37, Denis Mekhanikov :

> Hi Jeff.
>
> Looks like my letter wasn't noticed by the developer community.
>
> I sent a message to the dev list one more time:
> http://apache-ignite-developers.2346864.n4.nabble.com/Irrelevant-data-in-discovery-messages-td25927.html
>
> In the meanwhile make sure, that this is really the cause of the discovery
> process being slow. Try deploying nodes on the same environment, but
> without additional jar files on the classpath. Will it make discovery work
> faster?
>
> Denis
>
> ср, 10 янв. 2018 г. в 8:39, Jeff Jiao :
>
>> Hi Denis,
>>
>> Does Ignite dev team give any feedback for this?
>>
>> Thanks
>>
>>
>>
>> --
>> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>>
>


RE: SQL Error: Failed to query Ignite

2018-01-15 Thread Thomas Isaksen
Another thing, I now have two "identical" caches, is this correct? I assumed I 
would just end up with SQL_PUBLIC_TOKEN

+-+-+---+---+---+---+---+---+
| SQL_PUBLIC_TOKEN(@c2)   | PARTITIONED | 2 | min: 0 (0 / 0)
| min: 0| min: 0| min: 0| min: 0|
| | |   | avg: 0.00 (0.00 / 0.00)   
| avg: 0.00 | avg: 0.00 | avg: 0.00 | avg: 0.00 |
| | |   | max: 0 (0 / 0)
| max: 0| max: 0| max: 0| max: 0|
+-+-+---+---+---+---+---+---+
| tokenCache(@c3) | PARTITIONED | 2 | min: 0 (0 / 0)
| min: 0| min: 0| min: 0| min: 0|
| | |   | avg: 0.50 (0.00 / 0.50)   
| avg: 0.00 | avg: 0.00 | avg: 0.00 | avg: 0.00 |
| | |   | max: 1 (0 / 1)
| max: 0| max: 0| max: 0| max: 0|
+---+


-Original Message-
From: Thomas Isaksen [mailto:thomas.isak...@sysco.no] 
Sent: mandag 15. januar 2018 12.19
To: user@ignite.apache.org
Subject: RE: SQL Error: Failed to query Ignite

Yes! That did it! Now my question is simply, why the star? What does it do? I
thought I had to use template=

Thanks heaps!

./t

-Original Message-
From: slava.koptilin [mailto:slava.kopti...@gmail.com] 
Sent: mandag 15. januar 2018 11.55
To: user@ignite.apache.org
Subject: Re: SQL Error: Failed to query Ignite

Hi Thomas,

Please try to configure your template in the following way:




**"
/>








In that case, you should be able to create a table via thin driver:

CREATE TABLE TOKEN(id LONG, token VARCHAR, PRIMARY KEY (id)) WITH 
template=tokenCache;

Thanks,
Slava.




--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


RE: SQL Error: Failed to query Ignite

2018-01-15 Thread Thomas Isaksen
Yes! That did it! Now my question is simply, why the star? What does it do? I
thought I had to use template=

Thanks heaps!

./t

-Original Message-
From: slava.koptilin [mailto:slava.kopti...@gmail.com] 
Sent: mandag 15. januar 2018 11.55
To: user@ignite.apache.org
Subject: Re: SQL Error: Failed to query Ignite

Hi Thomas,

Please try to configure your template in the following way:




**"
/>








In that case, you should be able to create a table via thin driver:

CREATE TABLE TOKEN(id LONG, token VARCHAR, PRIMARY KEY (id)) WITH 
template=tokenCache;

Thanks,
Slava.




--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


RE: Problem enabling read-through on cache - no suitable driver

2018-01-15 Thread slava.koptilin
Hi,

please see my answer to that thread. I hope it solves the issue.

Thanks!



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: SQL Error: Failed to query Ignite

2018-01-15 Thread slava.koptilin
Hi Thomas,

Please try to configure your template in the following way:




**"
/>








In that case, you should be able to create a table via thin driver:

CREATE TABLE TOKEN(id LONG, token VARCHAR, PRIMARY KEY (id)) WITH
template=tokenCache;

Thanks,
Slava.




--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


RE: Problem enabling read-through on cache - no suitable driver

2018-01-15 Thread slava.koptilin
Hi Thomas,

> however, when I try using my own templates I simply get SQL Error[5]
> "Cache doesn't exist!"
>- not sure why this is happening but maybe I'm not connecting to the right
DB? 
Could you please provide us with the template you are trying to use?

> Is the jdbc:ignite:thin connection not a connection to the H2 instance
> used by Ignite internally?
No, jdbc:ignite:thin is not a connection to the underlying H2 db. It is a
thin driver which is used to connect to an Ignite node and allows processing
distributed data using the Ignite cluster. Please see the following page for
details [1].

[1] https://apacheignite-sql.readme.io/docs/jdbc-driver#jdbc-thin-driver
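For completeness, a minimal sketch of querying through the thin driver
(host and table are illustrative; uses the java.sql Connection, DriverManager,
Statement and ResultSet classes):

// Register the thin driver and run a query against the node on localhost.
Class.forName("org.apache.ignite.IgniteJdbcThinDriver");

try (Connection conn = DriverManager.getConnection("jdbc:ignite:thin://127.0.0.1/");
     Statement stmt = conn.createStatement();
     ResultSet rs = stmt.executeQuery("SELECT id, token FROM TOKEN")) {
    while (rs.next())
        System.out.println(rs.getLong("id") + " " + rs.getString("token"));
}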

Thanks,
Slava.



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: BinaryObjectImpl.deserializeValue with specific ClassLoader

2018-01-15 Thread Vladimir Ozerov
The ticket is on the radar, but not in immediate plans. The problem might
sound simple at first glance, but we already spent considerable time on
implementation and review, because we heavily rely on class caching, and
a lot of internal BinaryMarshaller infrastructure has to be reworked to
allow for this change. Hopefully, we will have it in Apache Ignite 2.5.

On Thu, Jan 4, 2018 at 2:28 AM, Valentin Kulichenko <
valentin.kuliche...@gmail.com> wrote:

> Ticket is still open. Vladimir, looks like it's assigned to you. Do you
> have any plans to work on it?
>
> https://issues.apache.org/jira/browse/IGNITE-5038
>
> -Val
>
> On Wed, Jan 3, 2018 at 1:26 PM, Abeneazer Chafamo <
> chaf...@nodalexchange.com> wrote:
>
>> Is there any update on the suggested functionality to resolve cache entry
>> classes based on the caller's context first instead of relying on Ignite's
>> classloader?
>>
>>
>>
>> --
>> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>>
>
>


Re: insert so slow via JDBC thin driver

2018-01-15 Thread Vladimir Ozerov
Streaming mode for the thin JDBC driver is expected in Apache Ignite 2.5.
Meanwhile you can load data using the thick driver, which supports streaming,
and then switch to the thin driver for normal operations if you prefer it over
the thick one.


On Fri, Dec 15, 2017 at 9:09 PM, Denis Magda  wrote:

> Hi Michael,
>
> Yes, I heard from Ignite SQL experts that Ignite thin client is not
> optimal for data loading yet. However, there is already some work in
> progress to speed up the loading and batching of a myriad of rows.
>
> Vladimir, could you please step in and comment here.
>
> —
> Denis
>
> > On Dec 14, 2017, at 1:02 AM, Michael Jay <841519...@qq.com> wrote:
> >
> > Hello, I am a new Ignite learner. I want to insert 50,000,000 rows into a
> > table. Here, I ran into a problem.
> > With one host and one server node, the insert speed is about 2,000,000 per
> > minute and the CPU usage is 30-40%; however, with two hosts and two server
> > nodes, it is about 100,000 per minute and the CPU usage is only 5%. It's so
> > slow. What can I do to improve the performance? Thanks.
> >
> > my default-config.xml:
> >   
> >   
> >   
> >   
> > > class="org.apache.ignite.configuration.MemoryConfiguration">
> >value="19327352832"/>
> >
> >value="16"/>
> >
> >   
> >   
> >   
> >   
> >
> >
> > > class="org.apache.ignite.configuration.CacheConfiguration">
> >   
> >value="PARTITIONED"/>
> >value="4"/>
> >
> >
> >
> >value="false"/>
> >value="false"/>
> >value="false"/>
> >
> >name="rebalanceBatchSize" value="#{256L * 1024 * 1024}"/>
> >
> >value="0"/>
> >
> >name="rebalanceThreadPoolSize" value="8"/>
> >
> >value="false"/>
> >
> >
> >
> >
> >
> > > class="org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi">
> >
> >
> > > class="org.apache.ignite.spi.discovery.tcp.ipfinder.multicast.
> TcpDiscoveryMulticastIpFinder">
> >
> >   
> >
> >
> >
>  10x.x.x.226:47500..47509
> >
>  10x.x.x.75:47500..47509
> >
> >
> >
> >
> >
> >
> >   
> > 
> >
> >
> >
> >
> > --
> > Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>
>


SQL Error: Failed to query Ignite

2018-01-15 Thread Thomas Isaksen
Hi guys !
I am not in any way sure that I'm on the right track here so please bear with 
me.

I have a single node running with my cache:

[10:48:34,480][INFO][main][IgniteKernal] Configured caches [in 'sysMemPlc' 
dataRegion: ['ignite-sys-cache'], in 'default' dataRegion: ['tokenCache']]

I want my table "Token" to use the "tokenCacheTemplate" configuration and I 
want to use native persistence (no 3rd. party db.)

I am connecting with DBeaver with the following connect string:

jdbc:ignite:cfg://file:///D:\apache-ignite-fabric-2.3.0-bin\config\gatekeeper-token-config.xml
(I can connect to jdbc:ignite:thin://127.0.0.1 but then I'll just be connected 
to ignite-sys-cache - seemingly.)

I get a connection but once I try to create a table I get:

SQL Error: Failed to query Ignite.
  Failed to query Ignite.
Ouch! Argument is invalid: Cache name must not be null or empty.

The table def. is :

CREATE TABLE IF NOT EXISTS Token
(
   id bigint,
   token VARCHAR,
   PRIMARY KEY (id)
)
WITH "template=tokenCacheTemplate";

Where tokenCacheTemplate is defined in the above config file as follows:



   
  
 





 
  
   
   
  
 

   

 
  
   



Thanks!


Re: Ignite 2.3 Swap Path configuration is causing issue

2018-01-15 Thread Alexey Goncharuk
Hi,

Just to reiterate and clarify the behavior: the region maxSize defines the
total size of the data region, and you will get an OOME if your data size
exceeds the maxSize. However, when using swap, you can set maxSize _bigger_
than the RAM size; in this case, the OS will take care of the swapping.
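A rough sketch of such a region (name, size and path are illustrative):

DataRegionConfiguration regionCfg = new DataRegionConfiguration();
regionCfg.setName("bigRegion");
regionCfg.setMaxSize(64L * 1024 * 1024 * 1024); // may exceed physical RAM when swap is used
regionCfg.setSwapPath("/path/to/swap");         // enables OS-managed swapping for this region

DataStorageConfiguration storageCfg = new DataStorageConfiguration();
storageCfg.setDataRegionConfigurations(regionCfg);

IgniteConfiguration cfg = new IgniteConfiguration();
cfg.setDataStorageConfiguration(storageCfg);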

2017-12-21 11:57 GMT+03:00 Alexey Kukushkin :

> You will get OutOfMemory error if data does not fit into data region.
> Configure an eviction policy or persistence to avoid that.
>


Re: Ignite yarn cluster deployment

2018-01-15 Thread ilya.kasnacheev
Hello Andrey!

You basically need to run:

IGNITE_YARN_JAR=/mnt/ignite/apache-ignite-2.3.0-src/modules/yarn/target/ignite-yarn-2.3.0.jar
yarn jar ${IGNITE_YARN_JAR} ${IGNITE_YARN_JAR} /mnt/ignite/ignite_yarn.properties

As you can see there are two local paths - for the JAR and for the properties
file. If you ensure they are delivered to the nodes at predictable locations,
you can run it with a bash script. Unfortunately it doesn't look like either
of these is taken from e.g. HDFS.

With regards to embedded mode - I think you only need IgniteContext; it's
deprecated all over but seems to be actively used by the Ignite RDD.

I can see it will start spark.executor.instances # of Ignite instances. 

Regards,



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: How to re-direct Ignite logs to log4j2?

2018-01-15 Thread ilya.kasnacheev
Hello Sherry!

While you're waiting for the response from topic starter, this link may come
in handy:

https://apacheignite.readme.io/docs/logging#section-log4j2

...path to log4j2.xml should be either absolute or a relative local file
system path, relative to META-INF in classpath, or relative to IGNITE_HOME.

So it seems that you've got three options.
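For instance, the programmatic variant looks roughly like this (requires the
ignite-log4j2 module on the classpath; the path is illustrative and resolved
by the rules above):

IgniteConfiguration cfg = new IgniteConfiguration();

// Log4J2Logger(String) throws IgniteCheckedException if the config cannot be resolved.
cfg.setGridLogger(new Log4J2Logger("config/ignite-log4j2.xml"));

Ignition.start(cfg);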

Regards,



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Exception Occuredjava.sql.BatchUpdateException: ORA-00001: unique constraint

2018-01-15 Thread Evgenii Zhuravlev
Hi,

As for 9800 entries instead of 10K - most likely you just have
a small mistake in how you work with your counter, and these entries were not
written to the DB: if(counter==100) - see the sketch below.
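A minimal sketch of what the corrected batching could look like (hypothetical
writeAll() method, assuming a prepared "insert into PERSON ..." statement;
note the counter increment and the flush of the final partial batch):

private void writeAll(PreparedStatement st, Collection<Person> persons) throws SQLException {
    int counter = 0;
    for (Person val : persons) {
        st.setLong(1, val.getId());
        st.setString(2, val.getFirstName());
        st.setString(3, val.getLastName());
        st.addBatch();
        if (++counter == 100) {   // increment, then flush every 100 rows
            st.executeBatch();
            counter = 0;
        }
    }
    if (counter > 0)
        st.executeBatch();        // flush the remaining rows, otherwise they are lost
}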

Anyway, for both problems community need additional information for an
investigation. It would be great if you could share your full project with
all configs and code, so the community can fast reproduce your issue and
help you.

Evgenii


2018-01-14 10:32 GMT+03:00 Rajarshi Pain :

> Hello,
>
> Can someone please help me on this scenario if somewhere I am making any
> mistake ?
>
> Thanks,
> Raj
>
> On Fri, Jan 12, 2018 at 11:57 PM, Rajarshi Pain 
> wrote:
>
>> Hi,
>>
>> We were doing a POC(Ignite 2.3 - oracle) to check how cache Persistence
>> Store works using writeBehind, We are updating data as a batch rather than
>> updating one by one record.
>>
>> though there are no duplicate records, we are still getting a "unique
>> constraint" exception.
>>
>> We couldn't find any root cause for this and are not sure whether this is
>> a possible Ignite bug, as this error comes up randomly.
>>
>> Also in another POC we have seen that it's not inserting all the data to
>> the database.
>> For example we have pushed 10K but in Database we can only see 9800,
>> though there is no error or exception.
>>
>> could you please shade some lights on these issues ?
>>
>> Sample Code:
>>
>> for(int i=1;i<=10;i++)
>> {
>> frstnm="test1name"+i;
>> scndnm="test2name";
>> p1 = new Person(i, frstnm, scndnm);
>> cache.put((long)i, p1);
>> }
>>
>>
>> insert into PERSON (id, firstname, lastname) values (?, ?, ?)
>>
>> *inside write: *
>>
>> st.setLong(1, val.getId());
>> st.setString(2, val.getFirstName());
>> st.setString(3, val.getLastName());
>> st.addBatch();
>> if(counter==100)
>> {
>> System.out.println("Counter "+ counter);
>> st.executeBatch();
>> counter=0;
>> }
>>
>> got exception after 11K records
>>
>> Exception occurred: java.sql.BatchUpdateException: ORA-00001: unique
>> constraint
>>
>> --
>> Regards,
>> Rajarshi Pain
>>
>
>
>
> --
> Regards,
> Rajarshi Pain
>