Re:

2019-01-29 Thread Pavel Tupitsyn
Ilya is right, what you need is:

var queryEntity = new QueryEntity
{
    KeyType = typeof(string),
    ValueType = typeof(int),
    KeyFieldName = "ClassCode",
    ValueFieldName = "Priority"
};

(KeyTypeName and ValueTypeName are the names of the equivalent Java types.
Prefer the KeyType and ValueType properties.)


On Tue, Jan 29, 2019 at 8:34 PM Ilya Kasnacheev 
wrote:

> Hello!
>
> > Fields = new[] { new QueryField("ClassCode", typeof(
> string)) { IsKeyField = true }, new QueryField("Priority", typeof(int)) {
> IsKeyField = false } }
>
> Here you declare that you have a field ClassCode *in* your composite key,
> and Priority *in* your composite value.
> But you have neither.
>
> You should use KeyFieldName/ValueFieldName instead (if they are present in
> the .NET API).
>
> Regards,
> --
> Ilya Kasnacheev
>
>
> Mon, 28 Jan 2019 at 15:33, Som Som <2av10...@gmail.com>:
>
>> .Net part:
>>
>> using (var ignite = Ignition.StartClient(Ignition.ClientConfigurationSectionName))
>> {
>>     var queryEntity = new QueryEntity
>>     {
>>         KeyTypeName = typeof(string).FullName,
>>         KeyType = typeof(string),
>>         ValueTypeName = typeof(int).FullName,
>>         ValueType = typeof(int),
>>         Fields = new[]
>>         {
>>             new QueryField("ClassCode", typeof(string)) { IsKeyField = true },
>>             new QueryField("Priority", typeof(int)) { IsKeyField = false }
>>         }
>>     };
>>
>>     queryEntity.TableName = IgniteCacheName.QUIK_CLASSCODE_PRIORITY;
>>
>>     var cfg = new CacheClientConfiguration(IgniteCacheName.QUIK_CLASSCODE_PRIORITY,
>>         new[] { queryEntity })
>>     {
>>         DataRegionName = "persistent",
>>         Backups = 1,
>>         SqlSchema = "PUBLIC"
>>     };
>>
>>     var c = ignite.GetOrCreateCache<string, int>(cfg);
>>
>>     c.Put("a", 1);
>> }
>>
>>
>>
>> Sql query part:
>>
>> This query works OK: SELECT _Key, _Val FROM "QUIK.CLASSCODEPRIORITY"
>>
>> But this one throws the error mentioned above: SELECT SecCode, Prioruty
>> FROM "QUIK.CLASSCODEPRIORITY"
>>
>> Mon, 28 Jan 2019, 14:23 Ilya Kasnacheev ilya.kasnach...@gmail.com:
>>
>>> Hello!
>>>
>>> Can you please show your cache configuration and the exact SQL statement
>>> used?
>>>
>>> What happens here is that Ignite expects some composite value type as
>>> opposed to a bare integer. It is not yet clear why.
>>>
>>> Regards,
>>> --
>>> Ilya Kasnacheev
>>>
>>>
>>> Mon, 28 Jan 2019 at 14:12, Som Som <2av10...@gmail.com>:
>>>

 hi, I've got a problem reading the cache through ODBC.

 the cache was created in .NET; the key type is string, the value type is int.

 error text:

 SQL Error [5]: javax.cache.CacheException: Failed to execute map
 query on remote node [nodeId=c3ef8d97-09d0-432d-a0a2-7fd73e8413bc,
 errMsg=Failed to execute SQL query. General error: "class
 org.apache.ignite.IgniteCheckedException: Unexpected binary object class
 [type=class java.lang.Integer]"; SQL statement:

 SELECT

 __Z0.CLASSCODE __C0_0,

 __Z0.PRIORITY __C0_1

 FROM PUBLIC."QUIK.CLASSCODEPRIORITY" __Z0 [5-197]]


>>>


Ignite grid stops after a few days of uptime

2019-01-29 Thread manish
After our cluster has been up for 2-3 days, the grid on one of the two nodes stops
without proper details.
In the logs I can see the NPE below.

o.a.i.s.d.tcp.TcpDiscoverySpi - TcpDiscoverySpi's message worker thread
failed abnormally. Stopping the node in order to prevent cluster wide
instability.
java.lang.NullPointerException: null
  at
org.apache.ignite.internal.managers.discovery.GridDiscoveryManager$7.cacheMetrics(GridDiscoveryManager.java:1150)
  at
org.apache.ignite.spi.discovery.tcp.ServerImpl$RingMessageWorker.processMetricsUpdateMessage(ServerImpl.java:5077)
  at
org.apache.ignite.spi.discovery.tcp.ServerImpl$RingMessageWorker.processMessage(ServerImpl.java:2647)
  at
org.apache.ignite.spi.discovery.tcp.ServerImpl$RingMessageWorker.processMessage(ServerImpl.java:2447)
  at
org.apache.ignite.spi.discovery.tcp.ServerImpl$MessageWorkerAdapter.body(ServerImpl.java:6648)
  at
org.apache.ignite.spi.discovery.tcp.ServerImpl$RingMessageWorker.body(ServerImpl.java:2533)
  at org.apache.ignite.spi.IgniteSpiThread.run(IgniteSpiThread.java:62)


Can someone please provide some input on what is going wrong?
We are using Ignite version 2.3.0, and the only change we made recently
was to enable statistics on the cache and fetch metrics from it.

IgniteCache<?, ?> cache = ignite.cache(dictionary.getCacheName());
CacheMetrics metrics = cache.metrics();

Thanks in advance



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: failure due to IGNITE_BPLUS_TREE_LOCK_RETRIES

2019-01-29 Thread mahesh76private
On further debugging, we realised we had set the index inline size option to
2048 while creating an index. This problem (the LOCK_RETRIES issue) started
only after that; before that we didn't face many problems with indexes.

Another notable observation: setting the index inline size to 2048 also made
the index huge, running into hundreds of gigabytes.

Anyway, we are not using this option anymore. Frankly, it seems to be an
internal detail of Ignite, and we are not sure why it is exposed as an option
for users to configure.





Re: Client and server authentication

2019-01-29 Thread Aat
You have to create a plugin or buy an enterprise solution!





Ignite 2.7.0 and Hadoop Accelerator

2019-01-29 Thread Sergio Hernández Martínez
Hello everybody,

After seeing the download page, I have one question.

On the download page there are binaries for Apache Ignite 2.7.0, but I don't
see Hadoop Accelerator binaries for the 2.7.0 version.

Is IGFS for Hadoop deprecated?

Thank you very much!


Re:

2019-01-29 Thread Ilya Kasnacheev
Hello!

> Fields = new[] { new QueryField("ClassCode", typeof(
string)) { IsKeyField = true }, new QueryField("Priority", typeof(int)) {
IsKeyField = false } }

Here you declare that you have a field ClassCode *in* your composite key,
and Priority *in* your composite value.
But you have neither.

You should use KeyFieldName/ValueFieldName instead (if they are present in the
.NET API).

Regards,
-- 
Ilya Kasnacheev


Mon, 28 Jan 2019 at 15:33, Som Som <2av10...@gmail.com>:

> .Net part:
>
> using (var ignite = Ignition.StartClient(Ignition.ClientConfigurationSectionName))
> {
>     var queryEntity = new QueryEntity
>     {
>         KeyTypeName = typeof(string).FullName,
>         KeyType = typeof(string),
>         ValueTypeName = typeof(int).FullName,
>         ValueType = typeof(int),
>         Fields = new[]
>         {
>             new QueryField("ClassCode", typeof(string)) { IsKeyField = true },
>             new QueryField("Priority", typeof(int)) { IsKeyField = false }
>         }
>     };
>
>     queryEntity.TableName = IgniteCacheName.QUIK_CLASSCODE_PRIORITY;
>
>     var cfg = new CacheClientConfiguration(IgniteCacheName.QUIK_CLASSCODE_PRIORITY,
>         new[] { queryEntity })
>     {
>         DataRegionName = "persistent",
>         Backups = 1,
>         SqlSchema = "PUBLIC"
>     };
>
>     var c = ignite.GetOrCreateCache<string, int>(cfg);
>
>     c.Put("a", 1);
> }
>
>
>
> Sql query part:
>
> This query works OK: SELECT _Key, _Val FROM "QUIK.CLASSCODEPRIORITY"
>
> But this one throws the error mentioned above: SELECT SecCode, Prioruty FROM
> "QUIK.CLASSCODEPRIORITY"
>
> Mon, 28 Jan 2019, 14:23 Ilya Kasnacheev ilya.kasnach...@gmail.com:
>
>> Hello!
>>
>> Can you please show your cache configuration and the exact SQL statement
>> used?
>>
>> What happens here is that Ignite expects some composite value type as
>> opposed to a bare integer. It is not yet clear why.
>>
>> Regards,
>> --
>> Ilya Kasnacheev
>>
>>
>> Mon, 28 Jan 2019 at 14:12, Som Som <2av10...@gmail.com>:
>>
>>>
>>> hi, I've got a problem reading the cache through ODBC.
>>>
>>> the cache was created in .NET; the key type is string, the value type is int.
>>>
>>> error text:
>>>
>>> SQL Error [5]: javax.cache.CacheException: Failed to execute map
>>> query on remote node [nodeId=c3ef8d97-09d0-432d-a0a2-7fd73e8413bc,
>>> errMsg=Failed to execute SQL query. General error: "class
>>> org.apache.ignite.IgniteCheckedException: Unexpected binary object class
>>> [type=class java.lang.Integer]"; SQL statement:
>>>
>>> SELECT
>>>
>>> __Z0.CLASSCODE __C0_0,
>>>
>>> __Z0.PRIORITY __C0_1
>>>
>>> FROM PUBLIC."QUIK.CLASSCODEPRIORITY" __Z0 [5-197]]
>>>
>>>
>>


Re: Error while persisting from Ignite to Hive for a BinaryObject

2019-01-29 Thread Ilya Kasnacheev
Hello!

What is the type that you are storing in this cache? Can you please show
full cache configuration & key-value classes?

Regards,
-- 
Ilya Kasnacheev


Fri, 25 Jan 2019 at 00:49, Premachandran, Mahesh (Nokia - IN/Bangalore) <
mahesh.premachand...@nokia.com>:

> Hi,
>
>
>
> Sorry for the earlier confusion; the type of apn_id/apnId is indeed
> String. I had written a simple producer to publish messages to Kafka topics
> with random values, the types of which are
>
>
>
> id  java.lang.String
>
> reportStartTime  java.lang.Long
>
> reportEndTime  java.lang.Long
>
> apnId  java.lang.String
>
> ggsnDiameterTotalEvents  java.lang.Long
>
> apnIdVectorItemCount  java.lang.Long
>
> requestType  java.lang.Long
>
> requestTypeNumberEvents  java.lang.Long
>
> requestTypeImsi  java.lang.String
>
> requestTypeImsiVectorItemCount  java.lang.Long
>
> requestTypeSuccessEvents  java.lang.Long
>
> imsiDiameterSuccess  java.lang.String
>
> imsiDiameterSuccessVectorItemCount  java.lang.Long
>
> diameterRequestsUnsuccessful  java.lang.Long
>
> imsiDiameterUnsuccessful  java.lang.String
>
> imsiDiameterUnsuccessfulVectorItemCount  java.lang.Long
>
> requestDelaySum  java.lang.Double
>
> requestDelayEvents  java.lang.Long
>
> resultCode  java.lang.Long
>
> resultCodeEvents  java.lang.Long
>
> resultCodeImsi  java.lang.String
>
> resultCodeImsiVectorItemCount  java.lang.Long
>
> terminationCause  java.lang.Long
>
> terminationCauseEvent  java.lang.Long
>
>
>
> This is the statement that was used to create the table on HIVE.
>
>
>
> CREATE TABLE apn_diameter_5_min (id VARCHAR(36), report_start_time
> BIGINT,report_end_time BIGINT, apn_id
> VARCHAR(200),ggsn_diameter_total_events BIGINT, apn_id_vector_item_count
> BIGINT, request_type BIGINT,request_type_number_events BIGINT,
> request_type_imsi VARCHAR(16), request_type_imsi_vector_item_count BIGINT,
> request_type_success_events BIGINT, imsi_diameter_success
> VARCHAR(16),imsi_diameter_success_vector_item_count BIGINT,
> diameter_requests_unsuccessful BIGINT, imsi_diameter_unsuccessful
> VARCHAR(16), imsi_diameter_unsuccessful_vector_item_count BIGINT,
> request_delay_sum DOUBLE, request_delay_events BIGINT, result_code BIGINT,
> result_code_events BIGINT, result_code_imsi VARCHAR(16),
> result_code_imsi_vector_item_count BIGINT, termination_cause BIGINT,
> termination_cause_event BIGINT) clustered  by (id) into 2 buckets STORED AS
> orc TBLPROPERTIES('transactional'='true');
>
>
>
>
>
> I am populating a BinaryObject using the BinaryObjectBuilder in my
> implementation of  StreamSingleTupleExtractor.
>
>
>
> Mahesh
>
>
>
> *From:* Ilya Kasnacheev 
> *Sent:* Thursday, January 24, 2019 7:39 PM
> *To:* user@ignite.apache.org
> *Subject:* Re: Error while persisting from Ignite to Hive for a
> BinaryObject
>
>
>
> Hello!
>
>
>
> In your XML apn_id looks like a String. Is it possible that the actual type
> of apnId in ApnDiameter5Min is neither Long nor String but some other
> complex type? Can you attach those types?
>
>
>
> Regards,
>
> --
>
> Ilya Kasnacheev
>
>
>
>
>
> Wed, 23 Jan 2019 at 18:37, Premachandran, Mahesh (Nokia - IN/Bangalore) <
> mahesh.premachand...@nokia.com>:
>
> Hi Ilya,
>
>
>
> The field apn_id is of type Long. I have been using
> CacheJdbcPojoStore; does that map BinaryObjects to the database
> schema, or is it only for Java POJOs? I have attached the XML I am using
> with the client.
>
>
>
> Mahesh
>
>
>
> *From:* Ilya Kasnacheev 
> *Sent:* Wednesday, January 23, 2019 6:43 PM
> *To:* user@ignite.apache.org
> *Subject:* Re: Error while persisting from Ignite to Hive for a
> BinaryObject
>
>
>
> Hello!
>
>
>
> I think that your CacheStore implementation is confused by nested fields
> or binary object values (what is the type of apn_id?). Consider using
> CacheJdbcBlobStoreFactory instead, which will serialize the value to one big
> field in BinaryObject format.
>
>
>
> Regards,
>
> --
>
> Ilya Kasnacheev
>
>
>
>
>
> Wed, 23 Jan 2019 at 15:47, Premachandran, Mahesh (Nokia - IN/Bangalore) <
> mahesh.premachand...@nokia.com>:
>
> Hi all,
>
>
>
> I am trying to stream some data from Kafka to Ignite using
> IgniteDataStreamer and use 3rd-party persistence to move it to Hive. The
> data on Kafka is in Avro format, which I am deserialising, populating an
> Ignite BinaryObject using the binary builder, and pushing it to Ignite. It
> works well when I do not enable 3rd-party persistence, but once that is
> enabled, it throws the following exception.
>
>
>
> [12:32:07] (err) Failed to execute compound future reducer:
> GridCompoundFuture [rdc=null, initFlag=1, lsnrCalls=2, done=true,
> cancelled=false, err=class o.a.i.IgniteCheckedException: DataStreamer
> request failed [node=292ab229-61fb-4d61-8f08-33c8abd310a2], futs=[true,
> true, true]]class org.apache.ignite.IgniteCheckedException: DataStreamer
> request failed [node=292ab229-61fb-4d61-8f08-33c8abd310a2]
>
> at
> 

Re: Blocked system-critical thread has been detected. This can lead to cluster-wide undefined behaviour

2019-01-29 Thread Ilya Kasnacheev
Hello!

1) I think it's the public thread pool. Your solution should be OK.
2) Right. When would you listen on this future? I hope it isn't in the event
listener :)
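For what it's worth, the pattern can be sketched without any Ignite classes; the stub task below stands in for ignite.compute().run(new MyRunnable(...)), and all names are illustrative. One detail about the snippet quoted below: passing Executors.newFixedThreadPool(10) directly into runAsync creates a fresh pool per event, so the pool should be created once and reused:

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

public class EventOffload {
    // One bounded pool shared by all events; creating a new pool per
    // event would leak threads.
    static final ExecutorService POOL = Executors.newFixedThreadPool(10);
    static final AtomicInteger done = new AtomicInteger();

    // Called from the event listener; returns immediately, so the
    // listener thread is never blocked by long-running work.
    static void handleEvent(String value) {
        CompletableFuture.runAsync(() -> {
            // the long-running ignite.compute().run(...) would go here
            done.incrementAndGet();
        }, POOL);
    }

    public static void main(String[] args) throws Exception {
        for (int i = 0; i < 100; i++)
            handleEvent("evt-" + i);
        POOL.shutdown();
        POOL.awaitTermination(10, TimeUnit.SECONDS);
        System.out.println(done.get()); // prints 100
    }
}
```

Excess events simply queue inside the pool until one of the 10 workers frees up, which is the behaviour asked about in question 2.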

Regards,
-- 
Ilya Kasnacheev


Mon, 28 Jan 2019 at 17:15, Humphrey :

> Hi Ilya,
>
> 1) Which thread pool is used by compute? (is that the ignite public thread
> pool [1])?
>
> I'm now using the following from when I listen to events:
>
> CompletableFuture.runAsync(() -> {
>   ignite.compute().run(new MyRunnable(event.getValue()))
> }, Executors.newFixedThreadPool(10));
>
> This seems to work now but I'm not sure if this is the correct way to
> handle
> the long running events.
> 2) I think this will queue all those jobs until a thread (one of the
> 10) finishes its job, right?
>
> I've also tried compute.runAsync and then listening on the future,
> doing the put in the callback method.
> 3) Which of these is the best approach?
>
> Humphrey
>
>
> [1] https://apacheignite.readme.io/docs/thread-pools
>
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: When the IgniteCache.clear() method is being executed, if SELECT COUNT(*) FROM CACHE is executed, the whole cluster will collapse.

2019-01-29 Thread Ilya Kasnacheev
Hello!

It's hard for me to see what's going on since I can't immediately run it.

Can you please provide logs from all nodes in affected cluster? If you
experience a deadlock please also gather thread dumps.

Regards,
-- 
Ilya Kasnacheev


Thu, 24 Jan 2019 at 16:19, 李玉珏@163 <18624049...@163.com>:

> Hi,
>
> The configuration is as follows:
>
> [Spring XML configuration stripped by the mailing-list archive; only the
> schema declarations survived.]
>
> The Kotlin code is as follows:
>
> fun main(args: Array<String>) {
> if (args.isNullOrEmpty()) {
> println("pls set accountSize orderSize threadSize")
> return
> }
> val executor = Executors.newFixedThreadPool(args[2].toInt())
> val ignite = Ignition.start("zk.xml")
> ignite.cluster().active(true)
>
> val accountCfg = CacheConfiguration<Long, Account>(CACHE_ACCOUNT)
> accountCfg.cacheMode = CacheMode.REPLICATED
> accountCfg.atomicityMode = CacheAtomicityMode.TRANSACTIONAL
> accountCfg.setIndexedTypes(Long::class.java, Account::class.java)
> val accounts = ignite.getOrCreateCache(accountCfg)
>
> val orderCfg = CacheConfiguration<Long, Order>(CACHE_ORDER)
> orderCfg.cacheMode = CacheMode.REPLICATED
> orderCfg.atomicityMode = CacheAtomicityMode.TRANSACTIONAL
> orderCfg.setIndexedTypes(Long::class.java, Order::class.java)
> val orders = ignite.getOrCreateCache(orderCfg)
>
> val positionCfg = CacheConfiguration<Long, Position>(CACHE_POSITION)
> positionCfg.cacheMode = CacheMode.REPLICATED
> positionCfg.atomicityMode = CacheAtomicityMode.TRANSACTIONAL
> positionCfg.setIndexedTypes(Long::class.java, Position::class.java)
> val positions = ignite.getOrCreateCache(positionCfg)
>
> val tradeResultCfg = CacheConfiguration<Long, MatchTradeResult>(CACHE_MATCH_TRADE_RESULT)
> tradeResultCfg.cacheMode = CacheMode.REPLICATED
> tradeResultCfg.atomicityMode = CacheAtomicityMode.TRANSACTIONAL
> tradeResultCfg.setIndexedTypes(Long::class.java,
> MatchTradeResult::class.java)
> val tradeResults = ignite.getOrCreateCache(tradeResultCfg)
>
> val cancelResultCfg = CacheConfiguration<Long, MatchCancelResult>(CACHE_MATCH_CANCEL_RESULT)
> cancelResultCfg.cacheMode = CacheMode.REPLICATED
> cancelResultCfg.atomicityMode = CacheAtomicityMode.TRANSACTIONAL
> cancelResultCfg.setIndexedTypes(Long::class.java,
> MatchCancelResult::class.java)
> val cancelResults = ignite.getOrCreateCache(cancelResultCfg)
>
> // clean
> orders.clear()
> tradeResults.clear()
> positions.clear()
> accounts.clear()
> cancelResults.clear()
>
> // init Account
> val accountSize = args[0].toInt()
> val orderSize = args[1].toInt()
> val accountDownLatch = CountDownLatch(accountSize)
> repeat(accountSize) {
> val account = AccountHelper.genAccount()
> accounts.put(account.id, account)
> accountDownLatch.countDown()
> }
> accountDownLatch.await()
>
> //init order
> val orderDownLatch = CountDownLatch(orderSize)
> repeat(orderSize) {
> executor.submit {
> val order = OrderHelper.genOrder(accountSize)
> val result = TradeResultHelper.genResult(order)
> orders.put(order.id, order)
> tradeResults.put(result.id, result)
> orderDownLatch.countDown()
> }
> }
> orderDownLatch.await()
>
> // clear
> val traded = tradeResults.query(
> SqlQuery(MatchTradeResult::class.java, "
> status = ?").setArgs(CLEAR_STATUS_INIT)
> ).all
>
> for (a in TransactionConcurrency.values()) {
> for (b in TransactionIsolation.values()) {
> val countDownLatch = CountDownLatch(traded.size)
> val begin = System.currentTimeMillis()
> for (item in traded) {
> executor.submit {
> val result = item.value
> var done = false
> while (!done) {
> try {
> ignite.transactions().txStart(a, b).use {
> result.status = CLEAR_STATUS_INIT
> val accountBuy =
> accounts.get(result.buyUserId)
> val accountSell =
> accounts.get(result.sellUserId)
> val positionBuy =
> PositionHelp.genPosition(result.buyUserId, result.contractId)
> val positionSell =
> PositionHelp.genPosition(result.sellUserId, result.contractId)
> accountBuy?.let {
> accountBuy.balance -= result.amount
> accounts.put(accountBuy.id, accountBuy)
> 

Re: ZookeeperDiscovery block when communication error

2019-01-29 Thread wangsan
Thank you!
When I use ZooKeeper discovery, I find many nodes under the ZooKeeper path /jd/.
In my understanding, when a new node joins, a new child node is created under
/jd/, and once the node has successfully joined the cluster the /jd/ entry
should be removed. But in my cluster there are many leftover /jd/ nodes.





Re: Why GridDiscoveryManager onSegmentation use StopNodeFailureHandler?

2019-01-29 Thread wangsan
Thanks!
The IgniteConfiguration.segmentationPolicy value RESTART_JVM can be a little
misleading: it exits the JVM with a special exit code, and a plain Java
application will ignore that exit code unless it runs under a wrapper that
restarts it.





RE: Failed to read data from remote connection

2019-01-29 Thread wangsan
When connections are checked, many NIO sockets will be created (one socket per
node). Will direct memory then grow with the node count?





Re: Ignite and dynamic linking

2019-01-29 Thread F.D.
Hi Igor,

thanks for your reply, I've added this code:

void Ignition::DestroyJVM()
{
    factoryLock.Enter();

    JniErrorInfo jniErr;

    SharedPointer<JniContext> ctx(JniContext::Create(0, 0, JniHandlers(), &jniErr));

    IgniteError err;
    IgniteError::SetError(jniErr.code, jniErr.errCls, jniErr.errMsg, err);

    if (err.GetCode() == IgniteError::IGNITE_SUCCESS)
        ctx.Get()->DestroyJvm();

    factoryLock.Leave();
}

And I call it before FreeLibrary(). Now when I call start I get an
unknown error. Any ideas?

Thanks,
   F.D.


On Mon, Jan 28, 2019 at 5:08 PM Igor Sapego  wrote:

> Hi,
>
> Currently, Ignite on start creates JVM instance internally, but
> it never stops it. Also, it currently can not work with already started
> JVM.
>
> So when you start Ignite the first time, it loads JVM, when you stop
> and unload it, the JVM remains loaded in process memory. When
> you start Ignite again, it discovers that JVM was already loaded, and
> as it can not work with pre-loaded JVM, it just returns you the error.
>
> To solve the issue, the following ticket should be implemented [1], but
> currently, it is not. As a workaround you may try to call
> JNI_DestroyJavaVM() after you have unloaded Ignite, though I'm not sure
> of the result. This simply is not a use case we have tested.
>
> [1] - https://issues.apache.org/jira/browse/IGNITE-4618
>
> Best Regards,
> Igor
>
>
> On Mon, Jan 28, 2019 at 3:49 PM F.D.  wrote:
>
>> Hi Igniters,
>> I'm trying to use Ignite in a DLL (using C++) that is dynamically loaded.
>> I wrapped the start/stop/... methods behind a pure "C" interface that I
>> export.
>>
>> It works quite well. I can call the LoadLibrary and start a Ignite node.
>> I can stop it and restart it again smoothly.
>>
>> I've the problem when I LoadLibrary and then I call FreeLibrary (and
>> until here it works), but when I try to LoadLibrary again and to start the
>> node, I get the error: Failed to initialize JVM [errCls=, errMsg=JVM
>> already created.]
>>
>> Do you have any idea why I get this error?
>>
>> Thanks,
>>F.D.
>>
>


Re: Recovering from a data region OOM condition

2019-01-29 Thread colinc
This appears to be a problem that is fixed in Ignite 2.7.





Storing parent child relationship

2019-01-29 Thread shishal
Hi,

I am storing a parent-child relationship in an Ignite cache. It is an infinite
runtime stream of data (from Kafka), partitioned across many nodes.

Data is relevant for 30 days only, so expiry is set to 30 days.

Right now my input record structure looks like the following:
{
  id: (String: UUID)
  parentId: (String: UUID)
}

In Ignite I store these records with id as the key and the object as the value,
with both id and parentId indexed.

The following assumptions hold for input records:
- The same input record can arrive multiple times.
- A record is considered a root if its parentId does not appear as the id of
any other record.
- Records can arrive in any order, i.e. leaf nodes can arrive before the root
node.

My use case is: when I search for an id, I need to get all parents in the
upward direction (i.e. the parent node, the parent's parent, and so on up to
the root node).

I am currently making recursive calls to get it.

My question is: how can I optimize this? Write speed should not be compromised,
as it has to keep up with the fast incoming data. Right now it is
10K-20K/sec, but it should be able to scale further.

I also thought about affinity collocation, but I am not sure how to ensure
that all records of a tree go to the same Ignite node.

Note: my previous solution was based on Neo4j, but I need something that can
scale horizontally, as Neo4j expects the whole database to be on a single
machine.
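A minimal sketch of the upward walk, with a plain HashMap standing in for the Ignite cache (all names here are illustrative). An iterative loop with a visited set avoids unbounded recursion and protects against accidental cycles in dirty data; on a partitioned cache each lookup in this loop is a potential network hop, which is exactly why collocating a whole tree on one node would help:

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.HashSet;
import java.util.List;
import java.util.Map;
import java.util.Set;

public class ParentChain {
    // Plain map standing in for the Ignite cache: id -> parentId.
    static final Map<String, String> parentOf = new HashMap<>();

    // Walk upward iteratively; stop when the parentId is unknown
    // (root reached) or already seen (cycle in dirty data).
    static List<String> ancestors(String id) {
        List<String> chain = new ArrayList<>();
        Set<String> seen = new HashSet<>();
        String cur = parentOf.get(id);
        while (cur != null && seen.add(cur)) {
            chain.add(cur);
            cur = parentOf.get(cur);
        }
        return chain;
    }

    public static void main(String[] args) {
        parentOf.put("leaf", "mid");
        parentOf.put("mid", "root");
        System.out.println(ancestors("leaf")); // prints [mid, root]
    }
}
```

One collocation approach would be to key records by AffinityKey(id, rootId) so that a whole tree hashes to one partition, but since records can arrive in any order the root is not known at insert time, so this would need a second pass that rewrites keys once the root is discovered.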













Re: Unable to form connection between ignite(v 2.7) node inside kubernetes-1.11.3

2019-01-29 Thread Павлухин Иван
Hi Lalit,

Usually topics related to some sort of contribution are discussed on the
dev list. I have added the user list to the recipients; you will get an
answer to usability questions quicker on the user list.

Tue, 29 Jan 2019 at 00:00, Lalit Jadhav :
>
> While starting one node, it comes up with a delay of around 50-60 sec, but
> when we scale the deployment to 2-3 nodes, those nodes are unable to connect
> to the first node.
>
> We also get the error below on the 2nd and 3rd nodes.
>
> ERROR TcpDiscoverySpi:586 - Failed to get registered addresses from IP
> > finder on start (retrying every 2000ms; change 'reconnectDelay' to
> > configure the frequency of retries). class
> > org.apache.ignite.spi.IgniteSpiException: Failed to retrieve Ignite pods IP
> > addresses. at
> > org.apache.ignite.spi.discovery.tcp.ipfinder.kubernetes.TcpDiscoveryKubernetesIpFinder.getRegisteredAddresses(TcpDiscoveryKubernetesIpFinder.java:172)
> > at
> > org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi.registeredAddresses(TcpDiscoverySpi.java:1900)
> > at
> > org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi.resolvedAddresses(TcpDiscoverySpi.java:1848)
> > at
> > org.apache.ignite.spi.discovery.tcp.ServerImpl.sendJoinRequestMessage(ServerImpl.java:1049)
> > at
> > org.apache.ignite.spi.discovery.tcp.ServerImpl.joinTopology(ServerImpl.java:910)
> > at
> > org.apache.ignite.spi.discovery.tcp.ServerImpl.spiStart(ServerImpl.java:391)
> > at
> > org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi.spiStart(TcpDiscoverySpi.java:2020)
> > at
> > org.apache.ignite.internal.managers.GridManagerAdapter.startSpi(GridManagerAdapter.java:297)
> > at
> > org.apache.ignite.internal.managers.discovery.GridDiscoveryManager.start(GridDiscoveryManager.java:939)
> > at
> > org.apache.ignite.internal.IgniteKernal.startManager(IgniteKernal.java:1682)
> > at org.apache.ignite.internal.IgniteKernal.start(IgniteKernal.java:1066) at
> > org.apache.ignite.internal.IgnitionEx$IgniteNamedInstance.start0(IgnitionEx.java:2038)
> > at
> > org.apache.ignite.internal.IgnitionEx$IgniteNamedInstance.start(IgnitionEx.java:1730)
> > at org.apache.ignite.internal.IgnitionEx.start0(IgnitionEx.java:1158) at
> > org.apache.ignite.internal.IgnitionEx.start(IgnitionEx.java:678) at
> > org.apache.ignite.internal.IgnitionEx.start(IgnitionEx.java:618) at
> > org.apache.ignite.Ignition.getOrStart(Ignition.java:415) at
> > com.cloud.ignite.server.IgniteServer.startIgnite(IgniteServer.java:57) at
> > com.cloud.ignite.server.IgniteServer.(IgniteServer.java:39) at
> > com.cloud.ignite.server.IgniteServer.getInstance(IgniteServer.java:107) at
> > com.cloud.ignite.server.IgniteServer.main(IgniteServer.java:133) Caused by:
> > java.net.ConnectException: Connection refused (Connection refused) at
> > java.net.PlainSocketImpl.socketConnect(Native Method) at
> > java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:350)
> > at
> > java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:206)
> > at
> > java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:188)
> > at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392) at
> > java.net.Socket.connect(Socket.java:589) at
> > sun.security.ssl.SSLSocketImpl.connect(SSLSocketImpl.java:673) at
> > sun.security.ssl.BaseSSLSocketImpl.connect(BaseSSLSocketImpl.java:173) at
> > sun.net.NetworkClient.doConnect(NetworkClient.java:180) at
> > sun.net.www.http.HttpClient.openServer(HttpClient.java:463) at
> > sun.net.www.http.HttpClient.openServer(HttpClient.java:558) at
> > sun.net.www.protocol.https.HttpsClient.(HttpsClient.java:264) at
> > sun.net.www.protocol.https.HttpsClient.New(HttpsClient.java:367) at
> > sun.net.www.protocol.https.AbstractDelegateHttpsURLConnection.getNewHttpClient(AbstractDelegateHttpsURLConnection.java:191)
> > at
> > sun.net.www.protocol.http.HttpURLConnection.plainConnect0(HttpURLConnection.java:1156)
> > at
> > sun.net.www.protocol.http.HttpURLConnection.plainConnect(HttpURLConnection.java:1050)
> > at
> > sun.net.www.protocol.https.AbstractDelegateHttpsURLConnection.connect(AbstractDelegateHttpsURLConnection.java:177)
> > at
> > sun.net.www.protocol.http.HttpURLConnection.getInputStream0(HttpURLConnection.java:1564)
> > at
> > sun.net.www.protocol.http.HttpURLConnection.getInputStream(HttpURLConnection.java:1492)
> > at
> > sun.net.www.protocol.https.HttpsURLConnectionImpl.getInputStream(HttpsURLConnectionImpl.java:263)
> > at
> > org.apache.ignite.spi.discovery.tcp.ipfinder.kubernetes.TcpDiscoveryKubernetesIpFinder.getRegisteredAddresses(TcpDiscoveryKubernetesIpFinder.java:153)
>
>
>
>
> --
> Thanks and Regards,
> Lalit Jadhav.



-- 
Best regards,
Ivan Pavlukhin


Re: Default Cache template

2019-01-29 Thread mahesh76private
Hi,

I added the configuration below to the node config.xml file. However, SQL table
creation from the client side keeps complaining that the "SQLTABLE_TEMPLATE"
template is not found.

[XML template configuration stripped by the mailing-list archive.]

The only way this works is from Java code, when I use addCacheConfiguration
and register the template with the cluster.

What I need is to set the template in the node config XML and have it
registered automatically, with no need to register it explicitly.

Please let me know if I am doing something wrong.
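For reference, a template can be declared purely in the node XML as an ordinary cache configuration bean whose name ends with an asterisk; the trailing '*' is what registers it as a template rather than starting a cache. A minimal sketch inside the IgniteConfiguration bean (the property values besides the name are assumptions):

```xml
<property name="cacheConfiguration">
    <list>
        <bean class="org.apache.ignite.configuration.CacheConfiguration">
            <!-- The trailing '*' marks this configuration as a template. -->
            <property name="name" value="SQLTABLE_TEMPLATE*"/>
            <property name="cacheMode" value="PARTITIONED"/>
            <property name="backups" value="1"/>
        </bean>
    </list>
</property>
```

A table would then be created with CREATE TABLE ... WITH "template=SQLTABLE_TEMPLATE". If the template is still not found, it is worth checking that the XML change is applied to every server node.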





