apache ignite installation fails in kubernetes

2019-11-05 Thread Gokulnath Chidambaram
Hello,

I am trying to run Apache Ignite 2.5.0 inside a Kubernetes cluster. My
organization's security policy doesn't allow running as 'root' inside any
container. I tried adding a security context (runAsNonRoot) to the Kubernetes
YAML file, but I always get the following error:

cp: can't create '/opt/ignite/apache-ignite-fabric/libs/README.txt': File
exists
cp: can't create
'/opt/ignite/apache-ignite-fabric/libs/ignite-kubernetes-2.5.0.jar':
Permission denied
cp: can't create
'/opt/ignite/apache-ignite-fabric/libs/jackson-core-asl-1.9.13.jar':
Permission denied
cp: can't create
'/opt/ignite/apache-ignite-fabric/libs/jackson-mapper-asl-1.9.13.jar':
Permission denied
cp: can't create
'/opt/ignite/apache-ignite-fabric/libs/licenses/apache-2.0.txt': File exists
cp: can't create
'/opt/ignite/apache-ignite-fabric/libs/licenses/ignite-kubernetes-licenses.txt':
Permission denied
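
For reference, this is the kind of securityContext stanza I am adding; the
user and group IDs below are illustrative placeholders, not our real values:

securityContext:
  runAsNonRoot: true
  runAsUser: 1000
  fsGroup: 1000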

Any insights are appreciated.

Thanks


Re: Ignite single instance

2019-11-05 Thread Siew Wai Yow
Hi Stan

The result is the same without transactions, even in atomic mode, so I am 
wondering whether something else is capping the performance. Attached is our POC 
source code, which gives around 1700 TPS with or without transactions; maybe you 
can advise what's wrong with it.

-DIGNITE_MAX_INDEX_PAYLOAD_SIZE=64 is already in use.

We chose ODBC because it is more compatible with our existing product's API, 
which needs to create tables/caches on the fly without pre-defined C objects. 
Ignite C++ needs the cache's C object ready before it runs.

Also, in an Ignite cluster, how do we connect to the cluster - just connect to 
one of the hosts? And when that host crashes, do we have to manually reconnect 
to another node? The ODBC connection string doesn't seem to allow multiple host 
addresses.

Thanks in advance.

Regards,
Yow



From: Stanislav Lukyanov 
Sent: Wednesday, November 6, 2019 12:33 AM
To: user@ignite.apache.org 
Subject: Re: Ignite single instance

First, 1700 TPS given your transaction structure (100 statements per transaction) 
is 170,000 simple operations per second, which is quite substantial - especially 
if you're doing that from a single thread / single ODBC client.

Second, note that TRANSACTIONAL_SNAPSHOT is in beta and is not ready for 
production use. There are no claims about performance or stability of that 
cache mode.

Third, you don't need the index on key01 - it is created automatically because 
it is a primary key. But you do need to set the INLINE_SIZE of the default index. 
Run your Ignite server with the system property or environment variable 
IGNITE_MAX_INDEX_PAYLOAD_SIZE=64.

Finally, the operations you're doing don't look like something to be done with 
SQL. Consider using the key-value API in Ignite C++ instead - 
https://apacheignite-cpp.readme.io/docs.

Stan

On Sat, Nov 2, 2019 at 8:15 PM Evgenii Zhuravlev <e.zhuravlev...@gmail.com> wrote:
Hi,

Do you use only one ODBC client? Can you start one more ODBC client and check 
the performance?

Thanks,
Evgenii

Sat, Nov 2, 2019 at 16:47, Siew Wai Yow <wai_...@hotmail.com>:
Hi,

We are doing a POC on Ignite performance using the ODBC driver, but the performance 
is always capped at around 1700 TPS, which is too slow. It is a local Ignite 
service. All the tuning tips from the Ignite page have been applied, and there is 
no bottleneck in CPU or memory. Persistence is not turned on yet; it will be worse 
once it is on. This POC is very crucial to our product roadmap.

Any tips? Thank you.

Test case,
1 x insert --> 49 x select --> 49 x update --> 1 x delete
repeated 5 times.

const char* insert_sql = "insert into CDRTEST 
values(?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?)";
const char* update_sql = "update CDRTEST set 
value01=?,value02=?,value03=?,value04=?,value05=?,value21=?,value22=?,value23=?,value24=?,value25=?
 where key01=?";
const char* delete_sql = "delete from CDRTEST where key01=?";
const char* select_sql = "select value01 from CDRTEST where key01=?";

retcode = SQLExecDirect(hstmt, 
reinterpret_cast<SQLCHAR*>(const_cast<char*>("CREATE TABLE IF NOT EXISTS 
CDRTEST ( "
  "key01 VARCHAR PRIMARY KEY, "
  "value01 LONG, "
  "value02 LONG, "
  "value03 LONG, "
  "value04 LONG, "
  "value05 LONG, "
  "value06 LONG, "
  "value07 LONG, "
  "value08 LONG, "
  "value09 LONG, "
  "value10 LONG, "
  "value11 LONG, "
  "value12 LONG, "
  "value13 LONG, "
  "value14 LONG, "
  "value15 LONG, "
  "value16 LONG, "
  "value17 LONG, "
  "value18 LONG, "
  "value19 LONG, "
  "value20 LONG, "
  "value21 VARCHAR, "
  "value22 VARCHAR, "
  "value23 VARCHAR, "
  "value24 VARCHAR, "
  "value25 VARCHAR, "
  "value26 VARCHAR, "
  "value27 VARCHAR, "
  "value28 VARCHAR, "
  "value29 VARCHAR, "
  "value30 VARCHAR, "
  "value31 VARCHAR, "
  "value32 VARCHAR, "
  "value33 VARCHAR, "
  "value34 VARCHAR, "
  "value35 VARCHAR, "
  "value36 VARCHAR, "
  "value37 VARCHAR, "
  "value38 VARCHAR, "
  "value39 VARCHAR) "
  "WITH 
\"template=partitioned,atomicity=TRANSACTIONAL_SNAPSHOT,WRITE_SYNCHRONIZATION_MODE=FULL_ASYNC\"")),
 SQL_NTS);
CHECK_ERROR(retcode, "Fail to create table", hstmt, 
SQL_HANDLE_STMT);

retcode = SQLExecDirect(hstmt, 
reinterpret_cast<SQLCHAR*>(const_cast<char*>("CREATE INDEX key01t_idx ON 
CDRTEST(key01) INLINE_SIZE 64")), SQL_NTS);
CHECK_ERROR(retcode, "Fail to create index", hstmt, 
SQL_HANDLE_STMT);


Below is the configuration we used:



<beans xmlns="http://www.springframework.org/schema/beans"
       xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
       xsi:schemaLocation="http://www.springframework.org/schema/beans
                           http://www.springframework.org/schema/beans/spring-beans.xsd">

    <bean class="org.apache.ignite.configuration.IgniteConfiguration">

        <property name="binaryConfiguration">
            <bean class="org.apache.ignite.configuration.BinaryConfiguration">
                <property name="idMapper">
                    <bean class="org.apache.ignite.binary.BinaryBasicIdMapper">
                        ...
                    </bean>
                </property>
                ...
            </bean>
        </property>

        <property name="clientConnectorConfiguration">
            <bean class="org.apache.ignite.configuration.ClientConnectorConfiguration">
                ...
            </bean>
        </property>

        <property name="dataStorageConfiguration">
            <bean class="org.apache.ignite.configuration.DataStorageConfiguration">
                ...
            </bean>
        </property>
    </bean>
</beans>
 

Service grid webinar

2019-11-05 Thread Denis Mekhanikov
Hi Igniters!

I’ve been working on the Service Grid functionality in Apache Ignite for a 
while, and at some point I've decided to make a webinar with a high-level 
overview of this part of the project.

If you want to learn more about services, look at some use cases, or just ask a 
few questions of somebody who takes part in the development, please feel free to 
join the presentation on November 6th at 10 AM PST.

You can sign up using the following link: 
https://www.gridgain.com/resources/webinars/best-practices-microservices-architecture-apache-ignite

Denis


Ignite 2.7.0 YARN deployment - IGNITE_JVM_OPTS doesn't seem to work

2019-11-05 Thread Seshan, Manoj N. (TR Tech, Content & Ops)
We are trying to ...

  1.  Use the G1 garbage collector for smaller stop-the-world GC pauses.
  2.  Obtain verbose logs from Ignite for troubleshooting purposes

We have tried setting IGNITE_JVM_OPTS="-DIGNITE_QUIET=false -XX:+UseG1GC", but 
the JVM options are not taking effect on the Server nodes.
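
For completeness, this is the kind of cluster properties file we pass to the
YARN deployer (a sketch; all values besides IGNITE_JVM_OPTS are illustrative):

IGNITE_JVM_OPTS=-DIGNITE_QUIET=false -XX:+UseG1GC
IGNITE_NODE_COUNT=2
IGNITE_MEMORY_PER_NODE=2048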

Please help.

Rgds, Manoj

Manoj Seshan - Senior Architect
Platform Content Technology, Bangalore
Voice: +91-98806 72987  +91-80-67492572

From: Seshan, Manoj N. (TR Tech, Content & Ops)
Sent: Tuesday, November 05, 2019 11:45 PM
To: user@ignite.apache.org
Subject: Ignite YARN deployment - how to use TCP IP Discovery?

We are using Ignite as a distributed in-memory cache, deployed using YARN on a 
Hadoop cluster. We have configured ZooKeeper discovery, and this is working 
fine.

Given that this is a small 20-node Ignite cluster, ZooKeeper discovery seems like 
overkill. Would it be possible to switch to TCP discovery? The multicast IP finder 
is not an option, as multicast is disabled. The static IP finder would also not 
work, as the Ignite containers are dynamically allocated by YARN to arbitrary 
nodes of the Hadoop cluster.

Rgds

Manoj Seshan - Senior Architect
Platform Content Technology, Bangalore
Voice: +91-98806 72987  +91-80-67492572



Re: Apache Ignite .NET with multiple nodes

2019-11-05 Thread Pavel Tupitsyn
Hello, please make sure Ignite is allowed through the firewall, ports:
47500~47600 for discovery, 47100~47200 for communication
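
If the ports are open and the nodes still don't see each other, a static IP
finder that lists both machines is a quick way to rule out discovery issues.
A minimal Java sketch follows (the addresses are placeholders; Ignite.NET
exposes the same discovery classes, e.g. TcpDiscoveryStaticIpFinder):

import java.util.Arrays;

import org.apache.ignite.Ignite;
import org.apache.ignite.Ignition;
import org.apache.ignite.configuration.IgniteConfiguration;
import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi;
import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder;

public class TwoMachineNode {
    public static void main(String[] args) {
        // List every machine that may host a node; 47500..47509 is the
        // default discovery port range.
        TcpDiscoveryVmIpFinder ipFinder = new TcpDiscoveryVmIpFinder();
        ipFinder.setAddresses(Arrays.asList(
            "192.168.1.10:47500..47509",
            "192.168.1.11:47500..47509"));

        TcpDiscoverySpi discovery = new TcpDiscoverySpi();
        discovery.setIpFinder(ipFinder);

        IgniteConfiguration cfg = new IgniteConfiguration();
        cfg.setDiscoverySpi(discovery);

        Ignite ignite = Ignition.start(cfg);
    }
}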


On Tue, Nov 5, 2019 at 9:26 PM Manan Joshi  wrote:

> Hello Ignite Community,
>
>
>
> We are trying to use Apache Ignite as our distributed caching solution and
> we want to cluster the Ignite nodes in a distributed environment. So
> far we know that nodes are discovered automatically on a single machine, but
> when we run two instances on different machines on the same network, the server
> nodes fail to discover each other.
>
>
>
> Below is sample code showing how we are initiating the Ignite nodes in a
> distributed environment. I have also attached the complete solution to
> give a complete idea of the problem we are facing.
>
>
> [image: image.png]
>
>
>
> Thanks,
> Manan Joshi
>
>
>


Apache Ignite .NET with multiple nodes

2019-11-05 Thread Manan Joshi
Hello Ignite Community,



We are trying to use Apache Ignite as our distributed caching solution and
we want to cluster the Ignite nodes in a distributed environment. So
far we know that nodes are discovered automatically on a single machine, but
when we run two instances on different machines on the same network, the server
nodes fail to discover each other.



Below is sample code showing how we are initiating the Ignite nodes in a
distributed environment. I have also attached the complete solution to
give a complete idea of the problem we are facing.


[image: image.png]



Thanks,
Manan Joshi




Ignite YARN deployment - how to use TCP IP Discovery?

2019-11-05 Thread Seshan, Manoj N. (TR Tech, Content & Ops)
We are using Ignite as a distributed in-memory cache, deployed using YARN on a 
Hadoop cluster. We have configured ZooKeeper discovery, and this is working 
fine.

Given that this is a small 20-node Ignite cluster, ZooKeeper discovery seems like 
overkill. Would it be possible to switch to TCP discovery? The multicast IP finder 
is not an option, as multicast is disabled. The static IP finder would also not 
work, as the Ignite containers are dynamically allocated by YARN to arbitrary 
nodes of the Hadoop cluster.

Rgds

Manoj Seshan - Senior Architect
Platform Content Technology, Bangalore
Voice: +91-98806 72987  +91-80-67492572



Re: Error while adding the node the baseline topology

2019-11-05 Thread Stanislav Lukyanov
This message actually looks worrisome:
[2019-10-22 10:31:42,441][WARN
][data-streamer-stripe-3-#52][PageMemoryImpl] Parking
thread=data-streamer-stripe-3-#52 for timeout (ms)=771038

It means that Ignite's throttling algorithm has decided to put a thread to
sleep for 771 seconds.

Can you share your persistence configuration (DataStorageConfiguration or the
legacy PersistentStoreConfiguration)?
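
For reference, the throttling in question is controlled from the storage
configuration. A minimal Java sketch of the relevant knobs (an illustration,
not your actual settings):

import org.apache.ignite.configuration.DataStorageConfiguration;
import org.apache.ignite.configuration.IgniteConfiguration;

public class StorageConfigSketch {
    public static IgniteConfiguration configure() {
        DataStorageConfiguration storage = new DataStorageConfiguration();

        // Write throttling parks writer threads when checkpointing can't
        // keep up; the "Parking thread ... for timeout" message above comes
        // from this mechanism.
        storage.setWriteThrottlingEnabled(true);
        storage.getDefaultDataRegionConfiguration().setPersistenceEnabled(true);

        IgniteConfiguration cfg = new IgniteConfiguration();
        cfg.setDataStorageConfiguration(storage);
        return cfg;
    }
}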

Thanks,
Stan

On Thu, Oct 31, 2019 at 2:39 AM Denis Magda  wrote:

> Have you tried to turn off the failure handling following the previously
> shared documentation page? It looks like some timeouts need to be tuned.
>
> Denis
>
> On Friday, October 25, 2019, krkumar24061...@gmail.com <
> krkumar24061...@gmail.com> wrote:
>
>> Hi - The application is doing two things: one thread is writing 2KB
>> events to the Ignite cache as key-value pairs, and another thread is
>> executing Ignite SQL through Ignite JDBC connections. The throughput is
>> anything between 25K and 40K events per second on the cache side. We are
>> using the data streamer for writing to the key-value cache. The cluster
>> has 4 nodes with 198GB RAM and 48 cores.
>>
>> We got a similar error again and here is the error description:
>>
>> [2019-10-25 10:16:45,399][ERROR][disco-event-worker-#142][G] Blocked
>> system-critical thread has been detected. This can lead to cluster-wide
>> undefined behaviour [threadName=data-streamer-stripe-0, blockedFor=2032s]
>> [2019-10-25 10:16:45,399][WARN ][disco-event-worker-#142][G] Thread
>> [name="data-streamer-stripe-0-#49", id=80, state=WAITING, blockCnt=7,
>> waitCnt=5352642]
>>
>> [2019-10-25 10:16:45,399][ERROR][disco-event-worker-#142][root] Critical
>> system error detected. Will be handled accordingly to configured handler
>> [hnd=StopNodeOrHaltFailureHandler [tryStop=false, timeout=0,
>> super=AbstractFailureHandler [ignoredFailureTypes=[SYSTEM_WORKER_BLOCKED,
>> SYSTEM_CRITICAL_OPERATION_TIMEOUT]]], failureCtx=FailureContext
>> [type=SYSTEM_WORKER_BLOCKED, err=class o.a.i.IgniteException: GridWorker
>> [name=data-streamer-stripe-0, igniteInstanceName=null, finished=false,
>> heartbeatTs=1572010973019]]]
>>
>> Thanx and Regards,
>> KR Kumar
>>
>>
>>
>> --
>> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>>
>
>
> --
> -
> Denis
>
>


Re: How to insert data?

2019-11-05 Thread Stanislav Lukyanov
There are multiple ways to configure a cache to use SQL. The easiest is to
use the @QuerySqlField annotation. Check out this doc:
https://www.gridgain.com/docs/8.7.6/developers-guide/SQL/sql-api#querysqlfield-annotation
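
A minimal sketch of that approach, reusing the type and cache names from your
code (the value field is illustrative):

import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.Ignition;
import org.apache.ignite.cache.query.annotations.QuerySqlField;
import org.apache.ignite.configuration.CacheConfiguration;

public class DataX {
    /** Annotated fields become SQL columns; index = true also builds an index. */
    @QuerySqlField(index = true)
    private String value;

    public DataX(String value) {
        this.value = value;
    }

    public static void main(String[] args) {
        Ignite ignite = Ignition.start();

        CacheConfiguration<Integer, DataX> cfg = new CacheConfiguration<>("tdCache");
        // Derives the SQL schema for the cache from the annotations above.
        cfg.setIndexedTypes(Integer.class, DataX.class);

        IgniteCache<Integer, DataX> cache = ignite.getOrCreateCache(cfg);
        cache.put(1, new DataX("hello"));
    }
}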

On Tue, Nov 5, 2019 at 5:52 PM BorisBelozerov 
wrote:

> I have 3 nodes, and I code in each node:
> The 1st node: in the Main function
> Ignite ignite = Ignition.start();
> CacheConfiguration<Integer, DataX> cacheConfiguration = new
> CacheConfiguration<>();
> QueryEntity valQE = new QueryEntity();
> valQE.setKeyFieldName("key");
> valQE.setKeyType("java.lang.Integer");
> valQE.setValueType("DataX");
> valQE.addQueryField("key", "java.lang.Integer", "key");
> valQE.addQueryField("value", "java.lang.String", "value");
>
> cacheConfiguration.setQueryEntities(java.util.Arrays.asList(valQE));
> cacheConfiguration.setName("tdCache");
> IgniteCache<Integer, DataX> dtCache =
> ignite.getOrCreateCache(cacheConfiguration);
> The 2nd and 3rd nodes:
> Ignite ignite = Ignition.start();
> I open ignitevisorcmd.bat but I can't see DataX or tdCache.
> I use sqlline.bat: select * from "tdCache".DATAX; but I can't select it:
> Schema "tdCache" not found
> Can you help me?? Thank you
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: ignite-rest-http in classpath error

2019-11-05 Thread Stanislav Lukyanov
Hi,

Web Console requires the ignite-rest-http module to be enabled. It is not
enabled by default in the Ignite binaries or the Docker image.
The steps that you've taken are done while the container is running - so,
AFTER the Ignite process has started. That's why copying the module has no
effect.

Try setting the env variable OPTION_LIBS=ignite-rest-http for your Docker
image. It will tell the script running in the image to copy the module
before starting the Ignite process.
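
For example (a sketch against the stock image):

docker run -d -e "OPTION_LIBS=ignite-rest-http" apacheignite/ignite:2.7.6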

Stan

On Tue, Nov 5, 2019 at 12:35 AM jrobison-sb  wrote:

> We're attempting to run Apache Ignite as a standalone application, not
> bundled into our own custom application or anything. We're running it using
> the apacheignite/ignite:2.7.6 docker image.
>
> We're trying to use the ignite-web-agent, but when we start the agent via
> `./apache-ignite/libs/ignite-web-agent-2.7.0/ignite-web-agent.sh`, it fails
> saying:
>
> ```
> [2019-11-04 19:51:15,698][WARN ][pool-1-thread-1][RestExecutor] Failed
> connect to cluster. Please ensure that nodes have [ignite-rest-http] module
> in classpath (was copied from libs/optional to libs folder).
> ```
>
> Can anybody tell me what we're doing wrong?
>
> Steps we're taking:
>
> 1. Run the apacheignite/ignite:2.7.6 docker image. The following steps then
> happen inside that container.
> 2. Unzip ignite-web-agent-2.7.0.zip into apache-ignite/libs
> 3. mv apache-ignite/libs/optional/ignite-rest-http apache-ignite/libs/
> 4. Set up apache-ignite/libs/ignite-web-agent-2.7.0/default.properties with
> the necessary server-uri and token
> 5. Run ./apache-ignite/libs/ignite-web-agent-2.7.0/ignite-web-agent.sh and
> get the above error saying that ignite-rest-http isn't in the classpath.
>
> Thanks in advance.
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: Ignite single instance

2019-11-05 Thread Stanislav Lukyanov
First, 1700 TPS given your transaction structure (100 statements per
transaction) is 170,000 simple operations per second, which is quite
substantial - especially if you're doing that from a single thread / single
ODBC client.

Second, note that TRANSACTIONAL_SNAPSHOT is in beta and is not ready for
production use. There are no claims about performance or stability of that
cache mode.

Third, you don't need the index on key01 - it is created automatically
because it is a primary key. But you do need to set the INLINE_SIZE of the
default index. Run your Ignite server with the system property or environment
variable IGNITE_MAX_INDEX_PAYLOAD_SIZE=64.
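
A minimal sketch of setting it programmatically, assuming the server is
started from Java (the config file name is a placeholder):

import org.apache.ignite.Ignite;
import org.apache.ignite.Ignition;

public class ServerStart {
    public static void main(String[] args) {
        // Must be set before the node starts; equivalent to passing
        // -DIGNITE_MAX_INDEX_PAYLOAD_SIZE=64 on the JVM command line.
        System.setProperty("IGNITE_MAX_INDEX_PAYLOAD_SIZE", "64");

        Ignite ignite = Ignition.start("ignite-config.xml");
    }
}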

Finally, the operations you're doing don't look like something to be done
with SQL. Consider using the key-value API in Ignite C++ instead -
https://apacheignite-cpp.readme.io/docs.
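
For illustration, one key-value round trip replaces a whole
insert/select/delete sequence. A Java sketch, assuming a String key and a
single long value (the C++ API linked above mirrors it):

import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.Ignition;

public class KeyValueSketch {
    public static void main(String[] args) {
        try (Ignite ignite = Ignition.start()) {
            IgniteCache<String, Long> cache = ignite.getOrCreateCache("CDRTEST");

            cache.put("key-1", 42L);           // instead of INSERT/UPDATE
            Long value01 = cache.get("key-1"); // instead of SELECT
            cache.remove("key-1");             // instead of DELETE
        }
    }
}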

Stan

On Sat, Nov 2, 2019 at 8:15 PM Evgenii Zhuravlev 
wrote:

> Hi,
>
> Do you use only one ODBC client? Can you start one more ODBC client and
> check the performance?
>
> Thanks,
> Evgenii
>
Sat, Nov 2, 2019 at 16:47, Siew Wai Yow :
>
>> Hi,
>>
>> We are doing a POC on Ignite performance using the ODBC driver, but the
>> performance is always capped at around 1700 TPS, which is too slow. It is a
>> local Ignite service. All the tuning tips from the Ignite page have been
>> applied, and there is no bottleneck in CPU or memory. Persistence is not
>> turned on yet; it will be worse once it is on. This POC is very crucial to
>> our product roadmap.
>>
>> Any tips? Thank you.
>>
>> Test case,
>> 1 x insert --> 49 x select --> 49 x update --> 1 x delete
>> repeated 5 times.
>>
>> const char* insert_sql = "insert into CDRTEST
>> values(?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?)";
>> const char* update_sql = "update CDRTEST set
>> value01=?,value02=?,value03=?,value04=?,value05=?,value21=?,value22=?,value23=?,value24=?,value25=?
>> where key01=?";
>> const char* delete_sql = "delete from CDRTEST where key01=?";
>> const char* select_sql = "select value01 from CDRTEST where
>> key01=?";
>>
>> retcode = SQLExecDirect(hstmt,
>> reinterpret_cast<SQLCHAR*>(const_cast<char*>("CREATE TABLE IF NOT EXISTS
>> CDRTEST ( "
>>   "key01 VARCHAR PRIMARY KEY, "
>>   "value01 LONG, "
>>   "value02 LONG, "
>>   "value03 LONG, "
>>   "value04 LONG, "
>>   "value05 LONG, "
>>   "value06 LONG, "
>>   "value07 LONG, "
>>   "value08 LONG, "
>>   "value09 LONG, "
>>   "value10 LONG, "
>>   "value11 LONG, "
>>   "value12 LONG, "
>>   "value13 LONG, "
>>   "value14 LONG, "
>>   "value15 LONG, "
>>   "value16 LONG, "
>>   "value17 LONG, "
>>   "value18 LONG, "
>>   "value19 LONG, "
>>   "value20 LONG, "
>>   "value21 VARCHAR, "
>>   "value22 VARCHAR, "
>>   "value23 VARCHAR, "
>>   "value24 VARCHAR, "
>>   "value25 VARCHAR, "
>>   "value26 VARCHAR, "
>>   "value27 VARCHAR, "
>>   "value28 VARCHAR, "
>>   "value29 VARCHAR, "
>>   "value30 VARCHAR, "
>>   "value31 VARCHAR, "
>>   "value32 VARCHAR, "
>>   "value33 VARCHAR, "
>>   "value34 VARCHAR, "
>>   "value35 VARCHAR, "
>>   "value36 VARCHAR, "
>>   "value37 VARCHAR, "
>>   "value38 VARCHAR, "
>>   "value39 VARCHAR) "
>>   "WITH
>> \"template=partitioned,atomicity=TRANSACTIONAL_SNAPSHOT,WRITE_SYNCHRONIZATION_MODE=FULL_ASYNC\"")),
>> SQL_NTS);
>> CHECK_ERROR(retcode, "Fail to create table", hstmt,
>> SQL_HANDLE_STMT);
>>
>> retcode = SQLExecDirect(hstmt,
>> reinterpret_cast<SQLCHAR*>(const_cast<char*>("CREATE INDEX key01t_idx ON
>> CDRTEST(key01) INLINE_SIZE 64")), SQL_NTS);
>>
>> CHECK_ERROR(retcode, "Fail to create index", hstmt,
>> SQL_HANDLE_STMT);
>>
>>
>> Below is the configuration we used:
>>
>> <beans xmlns="http://www.springframework.org/schema/beans"
>>        xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
>>        xsi:schemaLocation="http://www.springframework.org/schema/beans
>>                            http://www.springframework.org/schema/beans/spring-beans.xsd">
>>
>>     <bean class="org.apache.ignite.configuration.IgniteConfiguration">
>>
>>         <property name="binaryConfiguration">
>>             <bean class="org.apache.ignite.configuration.BinaryConfiguration">
>>                 <property name="idMapper">
>>                     <bean class="org.apache.ignite.binary.BinaryBasicIdMapper">
>>                         ...
>>                     </bean>
>>                 </property>
>>                 ...
>>             </bean>
>>         </property>
>>
>>         <property name="clientConnectorConfiguration">
>>             <bean class="org.apache.ignite.configuration.ClientConnectorConfiguration">
>>                 ...
>>             </bean>
>>         </property>
>>
>>         <property name="dataStorageConfiguration">
>>             <bean class="org.apache.ignite.configuration.DataStorageConfiguration">
>>                 ...

Re: Cluster in AWS can not have more than 100 nodes?

2019-11-05 Thread Stanislav Lukyanov
Each node is supposed to add its own IP and port to the S3 bucket when it
starts. That said, I wouldn't check the cluster state based on the contents
of the bucket alone.
Check your logs for errors. Try using some tools to see whether all the nodes
are there - e.g. check out the Web Console, either the one in Ignite
(https://apacheignite-tools.readme.io/docs/getting-started) or the one from
GridGain
(https://www.gridgain.com/products/software/enterprise-edition/management-tool).

If you don't see any clues, please share your logs from any of the running
servers. Ideally, a log from one of the 100 servers which joined the
cluster, and a log from one of the 28 that didn't.

Stan

On Fri, Nov 1, 2019 at 2:41 AM codeboyyong  wrote:

> No. The thing is, we started Ignite in embedded mode as a web app, and we can
> see that there are 128 instances.
> But when we check the S3 bucket, it always only has 100 items, which means
> only 100 nodes joined the Ignite cluster. Not sure if there is another way
> to get the cluster size.
> Since we are running as a web app, we don't really have access to the
> command-line tools.
>
> Thank You
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: How does Apache Ignite distribute???

2019-11-05 Thread Stanislav Lukyanov
I believe that the correct answer to your question is: don't do that.

The strength of distributed systems is that you have a number of identical
pieces which you can scale out virtually with no limits.

If your cluster is heterogeneous - i.e. the nodes differ in size, amount of
data and power - you may easily run into issues with uneven load distribution
and eventually into performance issues.
Even more, you can run into cluster management issues: e.g. if one big node
leaves the cluster, then the remaining small nodes will redistribute the
data and may not have enough space to do that.

If you really, really want something like that, and you think that your
specific use case is not going to suffer from uneven load or cluster
management issues, then go with the suggestion from Stephen - run multiple
nodes on big machines.
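
A sketch of what that looks like - here two server nodes hosted in one JVM
purely for illustration (in a real deployment you would typically run one
JVM per node on the big machine):

import org.apache.ignite.Ignition;
import org.apache.ignite.configuration.IgniteConfiguration;

public class BigMachineNodes {
    public static void main(String[] args) {
        // Each node needs a distinct instance name to coexist in one process.
        Ignition.start(new IgniteConfiguration().setIgniteInstanceName("node-1"));
        Ignition.start(new IgniteConfiguration().setIgniteInstanceName("node-2"));
    }
}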

Stan

On Fri, Nov 1, 2019 at 4:36 AM BorisBelozerov 
wrote:

> Can you give me some code to implement it? Thank you!!
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: Issue querying id column only

2019-11-05 Thread Ivan Pavlukhin
Hi Kurt,

It might be that you faced https://issues.apache.org/jira/browse/IGNITE-12068,
which was fixed in 2.7.6.

Mon, Oct 21, 2019 at 17:56, Denis Mekhanikov :

>
> Kurt,
>
> I tried reproducing this issue using your data, but I didn’t manage to make 
> the query for id return only one entry.
>
> What version of Ignite do you use? How did you insert the data? SQL or cache 
> API?
>
> Denis
> On 17 Oct 2019, 15:21 +0300, Kurt Semba , wrote:
>
> Hi Denis,
>
>
>
> The cache was defined in the Spring configuration as follows, and was not 
> generated through SQL DDL:
>
>
>
>  class="org.apache.ignite.configuration.CacheConfiguration">
>
> 
>
> 
>
> 
>
>  class="javax.cache.configuration.FactoryBuilder$SingletonFactory">
>
> 
>
>  class="com.extremenetworks.ignite.store.DomainNodeStore" />
>
> 
>
> 
>
> 
>
>
>
>
>
> 
>
> 
>
> java.lang.String
>
> 
> com.extremenetworks.ignite.model.CachedDomainNode
>
> 
>
> 
>
> 
>
>
>
> The annotated fields in CachedDomainNode:
>
>
>
>@QuerySqlField(index = true)
>
> private String id;
>
>
>
> @QuerySqlField
>
> private String name;
>
>
>
> @QuerySqlField
>
> private String ipAddress;
>
>
>
> @QuerySqlField
>
> private Long lastUpdate;
>
>
>
> @QuerySqlField
>
> private int nodeType;
>
>
>
>
>
> 0: jdbc:ignite:thin://127.0.0.1> select * from account.cacheddomainnode;
> +--------------------------------------+--------------+----------------+---------------+----------+
> | ID                                   | NAME         | IPADDRESS      | LASTUPDATE    | NODETYPE |
> +--------------------------------------+--------------+----------------+---------------+----------+
> | ----                                 | Test-XMC     | /127.0.0.1     | 1571185515950 | 0        |
> | 99488ecd-4cae-4ecc-8306-c3b4215452c2 | XMC-Justice1 | /10.51.102.191 | 1571278075994 | 0        |
> | 99488ecd-4cae-4ecc-8306-c3b4215452c3 | XMC-Justice2 | /10.51.102.192 | 1571278108222 | 0        |
> +--------------------------------------+--------------+----------------+---------------+----------+
> 3 rows selected (0.04 seconds)
>
> 0: jdbc:ignite:thin://127.0.0.1> select id from account.cacheddomainnode;
> +--------------------------------------+
> | ID                                   |
> +--------------------------------------+
> | ----                                 |
> +--------------------------------------+
> 1 row selected (0.013 seconds)
>
> 0: jdbc:ignite:thin://127.0.0.1> select _key from account.cacheddomainnode;
> +--------------------------------------+
> | _KEY                                 |
> +--------------------------------------+
> | ----                                 |
> | 99488ecd-4cae-4ecc-8306-c3b4215452c2 |
> | 99488ecd-4cae-4ecc-8306-c3b4215452c3 |
> +--------------------------------------+
>
>
>
>
>
> From: Denis Mekhanikov 
> Sent: Thursday, October 17, 2019 11:52 AM
> To: user@ignite.apache.org
> Subject: Re: Issue querying id column only
>
>
>
>
> Are you able to reproduce this issue using SQL only?
>
> Could you share the DDL, insert and select statements that lead to the 
> described issue?
>
>
>
> I tried the following queries, but they work as expected.
>
>
>
> CREATE TABLE people (id int PRIMARY key, first_name varchar, last_name 
> varchar);
>
>
>
> INSERT INTO people (id, first_name, last_name) VALUES (1, 'John', 'Doe');
>
> INSERT INTO people (id, first_name, last_name) VALUES (2, 'John', 'Foe');
>
>
>
> SELECT id FROM people;
>
>
>
> Denis
>
> On 17 Oct 2019, 09:02 +0300, Kurt Semba , wrote:
>
> Hi all,
>
>
>
> Is it possible for a table queried through the SQL interface to return only
> a subset of the data when querying against a specific column?
>
>
>
> e.g.
>
>
>
> We have a cache configuration defined based on Java SQL query annotations 
> that contains an id field and some other string fields. (The value of the id 
> field in all entries also matches the value of the cache entry key.)
>
>
>
> The table contains 3 entries, however if I execute “select id from table” 
> through sqlline, I only am 

Re: How to insert data?

2019-11-05 Thread BorisBelozerov
I have 3 nodes, and I code in each node:
The 1st node: in the Main function
Ignite ignite = Ignition.start();
CacheConfiguration<Integer, DataX> cacheConfiguration = new
CacheConfiguration<>();
QueryEntity valQE = new QueryEntity();
valQE.setKeyFieldName("key");
valQE.setKeyType("java.lang.Integer");
valQE.setValueType("DataX");
valQE.addQueryField("key", "java.lang.Integer", "key");
valQE.addQueryField("value", "java.lang.String", "value");

cacheConfiguration.setQueryEntities(java.util.Arrays.asList(valQE));
cacheConfiguration.setName("tdCache");
IgniteCache<Integer, DataX> dtCache =
ignite.getOrCreateCache(cacheConfiguration);
The 2nd and 3rd nodes:
Ignite ignite = Ignition.start();
I open ignitevisorcmd.bat but I can't see DataX or tdCache.
I use sqlline.bat: select * from "tdCache".DATAX; but I can't select it:
Schema "tdCache" not found
Can you help me?? Thank you



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Does Leap Day 29.02.2020 has any impact on Apache Ignite?

2019-11-05 Thread Ilya Kasnacheev
Hello!

I don't think so, since we don't use clock for anything critical.

Regards,
-- 
Ilya Kasnacheev


Mon, Nov 4, 2019 at 14:49, Shiva Kumar :

> Hi all,
>
> I wanted to know if Leap Day 29.02.2020 has any impact on Apache Ignite?
>
> best regards,
> shiva
>


Re: Ignite-spring-data_2.0 not working

2019-11-05 Thread Ilya Kasnacheev
Hello!

Please try the following dependency:


<dependency>
    <groupId>org.springframework.data</groupId>
    <artifactId>spring-data-commons</artifactId>
    <version>2.0.9.RELEASE</version>
    <exclusions>
        <exclusion>
            <groupId>org.slf4j</groupId>
            <artifactId>jcl-over-slf4j</artifactId>
        </exclusion>
    </exclusions>
</dependency>





It seems that they have changed the API in 2.2 (which you get by default), and
we're not compatible with that.

Regards,
-- 
Ilya Kasnacheev


Sat, Nov 2, 2019 at 20:50, Humphrey :

> I see that some text went missing after the post:
>
> Here the stack trace log:
>
> Error starting ApplicationContext. To display the conditions report re-run
> your application with 'debug' enabled.
> 2019-11-02 13:16:56.482 ERROR 13627 --- [   main]
> o.s.boot.SpringApplication   : Application run failed
>
> org.springframework.beans.factory.BeanCreationException: Error creating
> bean
> with name 'personRepository': Invocation of init method failed; nested
> exception is java.lang.IllegalStateException: You have defined query method
> in the repository but you don't have any query lookup strategy defined. The
> infrastructure apparently does not support query methods!
> at
>
> org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.initializeBean(AbstractAutowireCapableBeanFactory.java:1803)
> ~[spring-beans-5.2.0.RELEASE.jar:5.2.0.RELEASE]
> at
>
> org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.doCreateBean(AbstractAutowireCapableBeanFactory.java:595)
> ~[spring-beans-5.2.0.RELEASE.jar:5.2.0.RELEASE]
> at
>
> org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.createBean(AbstractAutowireCapableBeanFactory.java:517)
> ~[spring-beans-5.2.0.RELEASE.jar:5.2.0.RELEASE]
> at
>
> org.springframework.beans.factory.support.AbstractBeanFactory.lambda$doGetBean$0(AbstractBeanFactory.java:323)
> ~[spring-beans-5.2.0.RELEASE.jar:5.2.0.RELEASE]
> at
>
> org.springframework.beans.factory.support.DefaultSingletonBeanRegistry.getSingleton(DefaultSingletonBeanRegistry.java:222)
> ~[spring-beans-5.2.0.RELEASE.jar:5.2.0.RELEASE]
> at
>
> org.springframework.beans.factory.support.AbstractBeanFactory.doGetBean(AbstractBeanFactory.java:321)
> ~[spring-beans-5.2.0.RELEASE.jar:5.2.0.RELEASE]
> at
>
> org.springframework.beans.factory.support.AbstractBeanFactory.getBean(AbstractBeanFactory.java:202)
> ~[spring-beans-5.2.0.RELEASE.jar:5.2.0.RELEASE]
> at
>
> org.springframework.beans.factory.support.DefaultListableBeanFactory.preInstantiateSingletons(DefaultListableBeanFactory.java:860)
> ~[spring-beans-5.2.0.RELEASE.jar:5.2.0.RELEASE]
> at
>
> org.springframework.context.support.AbstractApplicationContext.finishBeanFactoryInitialization(AbstractApplicationContext.java:878)
> ~[spring-context-5.2.0.RELEASE.jar:5.2.0.RELEASE]
> at
>
> org.springframework.context.support.AbstractApplicationContext.refresh(AbstractApplicationContext.java:550)
> ~[spring-context-5.2.0.RELEASE.jar:5.2.0.RELEASE]
> at
>
> org.springframework.boot.SpringApplication.refresh(SpringApplication.java:747)
> ~[spring-boot-2.2.0.RELEASE.jar:2.2.0.RELEASE]
> at
>
> org.springframework.boot.SpringApplication.refreshContext(SpringApplication.java:397)
> ~[spring-boot-2.2.0.RELEASE.jar:2.2.0.RELEASE]
> at
> org.springframework.boot.SpringApplication.run(SpringApplication.java:315)
> ~[spring-boot-2.2.0.RELEASE.jar:2.2.0.RELEASE]
> at
> org.springframework.boot.SpringApplication.run(SpringApplication.java:1226)
> ~[spring-boot-2.2.0.RELEASE.jar:2.2.0.RELEASE]
> at
> org.springframework.boot.SpringApplication.run(SpringApplication.java:1215)
> ~[spring-boot-2.2.0.RELEASE.jar:2.2.0.RELEASE]
> at
>
> org.bug.ignite.springdatabug.SpringDataBugApplication.main(SpringDataBugApplication.java:10)
> ~[classes/:na]
> Caused by: java.lang.IllegalStateException: You have defined query method
> in
> the repository but you don't have any query lookup strategy defined. The
> infrastructure apparently does not support query methods!
> at
>
> org.springframework.data.repository.core.support.RepositoryFactorySupport$QueryExecutorMethodInterceptor.(RepositoryFactorySupport.java:553)
> ~[spring-data-commons-2.2.0.RELEASE.jar:2.2.0.RELEASE]
> at
>
> org.springframework.data.repository.core.support.RepositoryFactorySupport.getRepository(RepositoryFactorySupport.java:332)
> ~[spring-data-commons-2.2.0.RELEASE.jar:2.2.0.RELEASE]
> at
>
> org.springframework.data.repository.core.support.RepositoryFactoryBeanSupport.lambda$afterPropertiesSet$5(RepositoryFactoryBeanSupport.java:297)
> ~[spring-data-commons-2.2.0.RELEASE.jar:2.2.0.RELEASE]
> at org.springframework.data.util.Lazy.getNullable(Lazy.java:212)
> ~[spring-data-commons-2.2.0.RELEASE.jar:2.2.0.RELEASE]
> at org.springframework.data.util.Lazy.get(Lazy.java:94)
> ~[spring-data-commons-2.2.0.RELEASE.jar:2.2.0.RELEASE]
> at
>
> 

Re: How to insert data?

2019-11-05 Thread BorisBelozerov
val cacheConfiguration = new CacheConfiguration[Integer,DataX]()
val valQE = new QueryEntity()
valQE.setKeyFieldName("key")
valQE.setKeyType("java.lang.Integer")
valQE.setValueType("DataX")
valQE.addQueryField("key", "java.lang.Integer", "key")
valQE.addQueryField("value","java.lang.String","value")
cacheConfiguration.setQueryEntities(java.util.Arrays.asList(valQE))
cacheConfiguration.setName("tdCache")
val dtCache = ignite.getOrCreateCache(cacheConfiguration)

Your code is not recognized in Eclipse; can you give me some code that can run
in a .java file, in Eclipse?
Thank you!!




--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Getting same erasure error during compilation

2019-11-05 Thread Ilya Kasnacheev
Hello!

I think I saw one here:
https://bitbucket.org/lokesh_blue/bugdemo/src/master/

Regards,
-- 
Ilya Kasnacheev


Wed, Oct 30, 2019 at 21:34, niamin :

> Can you share a pom.xml that includes the dependencies that have been
> tested and known to work with Spring Data? I used spring-data-2.0 but still
> cannot get the Spring Data integration to work due to additional errors.
>
> Thanks,
> Naushad
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: How to insert data?

2019-11-05 Thread Stephen Darlington
One. The cache is cluster-wide, so once it’s created every node can see it.

> On 5 Nov 2019, at 12:36, BorisBelozerov  wrote:
> 
> Thank you!!
> How many nodes that I run your code??
> I only run the "CREATE database" code in one node or all nodes??
> 
> 
> 
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/




Re: Issue with adding nested index dynamically

2019-11-05 Thread Ivan Pavlukhin
Hi Hemambara,

You can write an email to d...@ignite.apache.org with a reference to
the issue and a description of the fix. It might be that someone will
be ready to do a review and merge.

Tue, Nov 5, 2019 at 15:57, Hemambara :
>
> Okay, so the issue you are facing is an incorrect data type, which is valid,
> so it's not an issue then.
>
> Yes, agreed that it requires more testing, but I feel the fix that is going
> in is safe and good to do. This fix is really important for us to proceed
> further. I have tested a few other scenarios and it's working fine for me. Is
> there any way we can plan to merge? Or do you not want to do it now?
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/



-- 
Best regards,
Ivan Pavlukhin


Re: Issue with adding nested index dynamically

2019-11-05 Thread Hemambara
Okay, so the issue you are facing is an incorrect data type, which is valid,
so it's not an issue then.

Yes, agreed that it requires more testing, but I feel the fix that is going
in is safe and good to do. This fix is really important for us to proceed
further. I have tested a few other scenarios and it's working fine for me. Is
there any way we can plan to merge? Or do you not want to do it now?



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: How to insert data?

2019-11-05 Thread BorisBelozerov
Thank you!!
How many nodes that I run your code??
I only run the "CREATE database" code in one node or all nodes??



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Issue with adding nested index dynamically

2019-11-05 Thread Ivan Pavlukhin
Hi Hemambara,

Check my example [1].
1. Launch step1.
2. Uncomment code and enable Address.number field.
3. Launch step2.

Actually, the problem here is a type mismatch for "number": int vs
varchar. Working with nested fields can be really tricky. Consequently,
there is not much activity to improve their support. Applying the proposed
fix requires thorough testing and review. A lot of end-to-end
scenarios should be covered by tests.

[1] https://gist.github.com/pavlukhin/53d8a23b48ca0018481a203ceb06065f

Fri, Nov 1, 2019 at 11:28, Ivan Pavlukhin :
>
> Hi Hemambara,
>
> I apologize, but I will be able to share a problematic example only
> next week.
>
Thu, Oct 31, 2019 at 19:49, Hemambara :
> >
> > I did not face any issue. It's working fine for me. Can you share your code
> > and the exception that you are getting?
> >
> > I tried like below and it worked for me.
> > ((Person)cache.get(1)).address.community)
> >
> >
> >
> >
> >
> > --
> > Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>
>
>
> --
> Best regards,
> Ivan Pavlukhin



-- 
Best regards,
Ivan Pavlukhin


Re: Apache Spark and Apache Ignite

2019-11-05 Thread Wellington Alves das Neves
Hi,

1. What's your main goal?
   We need to run genetic algorithms in parallel to return the best
execution plan for a factory production line.

To solve an optimization task on the data in Ignite or in Spark?
   As I have little experience, I'm evaluating the best architecture we
can apply to this scenario, keeping in mind that we will run it on the AWS
Elastic MapReduce (EMR) service.

Regarding the architecture, could you suggest something for this scenario?


2. What's the average size of the initial population?
   We are evaluating the size.


Thks!



On Tue, Nov 5, 2019 at 08:24, zaleslaw wrote:

> Hi, Wellington
>
> I'll be happy to help you if you give me more information about your goal.
>
> I have a few questions; could you answer them, please?
>
> 1. What's your main goal? To solve an optimization task on the data in Ignite
> or in Spark?
> 2. What's the average size of the initial population?
> 3. Did you run these examples
> <https://github.com/apache/ignite/tree/master/examples/src/main/java/org/apache/ignite/examples/ml/genetic>
> to solve, for example, the knapsack problem?
> 4. Did you have a look at the docs here:
> https://apacheignite.readme.io/docs/genetic-algorithms
>
> Yes, Apache Ignite and Apache Spark have an integration bridge
> which gives us the ability to use
> Ignite instead of .cache() or persist() to keep dataframes in memory for
> intermediate calculations.
>
> But we have no support for the GA framework or other ML parts as part of the
> extended Spark API.
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


-- 
Wellington Alves das Neves
*Software Engineer*

+55 (17) 99194-7119
wellingtonalvesne...@gmail.com



Re: Apache Spark and Apache Ignite

2019-11-05 Thread zaleslaw
Hi, Wellington

I'll be happy to help you if you give me more information about your goal.

I have a few questions; could you answer them, please?

1. What's your main goal? To solve an optimization task on the data in Ignite
or in Spark?
2. What's the average size of the initial population?
3. Did you run these examples
<https://github.com/apache/ignite/tree/master/examples/src/main/java/org/apache/ignite/examples/ml/genetic>
to solve, for example, the knapsack problem?
4. Did you have a look at the docs here:
https://apacheignite.readme.io/docs/genetic-algorithms

Yes, Apache Ignite and Apache Spark have an integration bridge
which gives us the ability to use
Ignite instead of .cache() or persist() to keep dataframes in memory for
intermediate calculations.

But we have no support for the GA framework or other ML parts as part of the
extended Spark API.



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Apache Spark and Apache Ignite

2019-11-05 Thread Wellington Alves das Neves
Hi,

I am currently researching an architecture on AWS Elastic MapReduce (EMR)
to run genetic algorithms (GA).
Apache Ignite already has some genetic algorithms defined, so we would
like to do some testing with them integrated with Apache Spark.

But I have found little material on how to implement this Apache Ignite GA
architecture along with Apache Spark.

Thks!

-- 
Wellington Alves das Neves
*Software Engineer*

+55 (17) 99194-7119
wellingtonalvesne...@gmail.com