enum behavior in REST

2020-07-07 Thread Maxim Volkomorov
I have "type":{"platformType":false}" in REST response for my
enum object property.

Object property:
public class Organization {
    // ...
    private OrganizationType type;
    // ...

    public OrganizationType type() {
        return type;
    }
    // ...
}

OrganizationType:
public enum OrganizationType {
    /** Non-profit organization. */
    NON_PROFIT,

    /** Private organization. */
    PRIVATE,

    /** Government organization. */
    GOVERNMENT
}

The value is correct when logged:

... type=PRIVATE ...

Should I implement custom serialization for the REST response? And could I
create a custom REST method for retrieving only some fields of an object?
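
One workaround I'm considering (just a sketch; the typeName field below is
hypothetical) is to keep the enum name in a plain String field alongside the
enum itself, so the REST layer has simple text to return:

public class Organization {
    // ...
    private OrganizationType type;

    // Hypothetical companion field kept in sync with 'type'; REST sees a plain string such as "PRIVATE".
    private String typeName;

    public void type(OrganizationType type) {
        this.type = type;
        this.typeName = type == null ? null : type.name();
    }

    public OrganizationType type() {
        return type;
    }
}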


Re: Ignite server of perNodeParallelOperations?

2020-07-07 Thread akorensh
Hi,
   A DataStreamer collects entries into a buffer and, when the buffer is full,
sends it to a server node. See perNodeBufferSize(int):
https://ignite.apache.org/releases/latest/javadoc/org/apache/ignite/IgniteDataStreamer.html#perNodeBufferSize-int-

   perNodeParallelOperations is a client-side setting that determines how
many buffers may be sent to a given server node without waiting for an
acknowledgement. The server node will try to process and acknowledge as many
buffers as it is given. perNodeParallelOperations is a throttling parameter
that makes sure receiver nodes are not overloaded.

   IgniteDataStreamer.addData will block when the limit specified by
perNodeParallelOperations is reached.

see:
https://ignite.apache.org/releases/latest/javadoc/org/apache/ignite/IgniteDataStreamer.html#perNodeParallelOperations-int-


from the doc:
  perNodeParallelOperations(int) - sometimes data may be added to the data
streamer via addData(Object, Object) method faster than it can be put in
cache. In this case, new buffered stream messages are sent to remote nodes
before responses from previous ones are received. This could cause unlimited
heap memory utilization growth on local and remote nodes. To control memory
utilization, this setting limits maximum allowed number of parallel buffered
stream messages that are being processed on remote nodes. If this number is
exceeded, then addData(Object, Object) method will block to control memory
utilization. Default is equal to CPU count on remote node multiply by
DFLT_PARALLEL_OPS_MULTIPLIER.

see:
https://ignite.apache.org/releases/latest/javadoc/org/apache/ignite/IgniteDataStreamer.html
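
For illustration, a minimal client-side sketch of where both knobs are set
(the cache name and the numbers below are arbitrary placeholders):

import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteDataStreamer;
import org.apache.ignite.Ignition;

public class StreamerSketch {
    public static void main(String[] args) {
        try (Ignite ignite = Ignition.start()) {
            ignite.getOrCreateCache("myCache");

            try (IgniteDataStreamer<Integer, String> streamer = ignite.dataStreamer("myCache")) {
                streamer.perNodeBufferSize(1024);      // entries collected per buffer before it is sent
                streamer.perNodeParallelOperations(8); // max unacknowledged buffers per server node

                for (int i = 0; i < 1_000_000; i++)
                    streamer.addData(i, Integer.toString(i)); // blocks once the parallel-ops limit is hit
            }
        }
    }
}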

Thanks, Alex



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: What does all partition owners have left the grid on the client side mean?

2020-07-07 Thread Evgenii Zhuravlev
John,

Unfortunately, I didn't find any messages about lost partitions for this
cache; there is a chance that it happened earlier. What partition loss policy
do you have?

The logs say that there is a problem with partition distribution:
 "Local node affinity assignment distribution is not ideal [cache=cache1,
expectedPrimary=512.00, actualPrimary=493, expectedBackups=512.00,
actualBackups=171, warningThreshold=50.00%]"
How do you restart the nodes? Do you wait until rebalancing completes?
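
(For reference, the partition loss policy is set per cache configuration; a
minimal sketch, with the backup count and policy value only as examples:)

import org.apache.ignite.cache.PartitionLossPolicy;
import org.apache.ignite.configuration.CacheConfiguration;

public class Cache1ConfigSketch {
    /** Cache name taken from the log above; backups/policy values are illustrative. */
    public static CacheConfiguration<Long, String> cache1() {
        CacheConfiguration<Long, String> cfg = new CacheConfiguration<>("cache1");
        cfg.setBackups(1);
        // With READ_WRITE_SAFE, reads and writes on lost partitions fail fast instead of returning stale/empty data.
        cfg.setPartitionLossPolicy(PartitionLossPolicy.READ_WRITE_SAFE);
        return cfg;
    }
}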

Evgenii



Fri, 3 Jul 2020 at 09:03, John Smith :

> Hi Evgenii, did you have a chance to look at the latest logs?
>
> On Thu, 25 Jun 2020 at 11:32, John Smith  wrote:
>
>> Ok
>>
>> stdout.copy.zip
>>
>> https://www.dropbox.com/sh/ejcddp2gcml8qz2/AAD_VfUecE0hSNZX7wGbfDh3a?dl=0
>>
>> On Thu, 25 Jun 2020 at 11:01, John Smith  wrote:
>>
>>> Because in between it's all the business logs. Let me make sure I didn't
>>> filter anything relevant. So maybe in those 13 hours nothing happened?
>>>
>>>
>>> On Thu, 25 Jun 2020 at 10:53, Evgenii Zhuravlev <
>>> e.zhuravlev...@gmail.com> wrote:
>>>
 This doesn't seem to be a full log. There is a gap for more than 13
 hours in the log :
 {"appTimestamp":"2020-06-23T23:06:41.658+00:00","threadName":"ignite-update-notifier-timer","level":"WARN","loggerName":"org.apache.ignite.internal.processors.cluster.GridUpdateNotifier","message":"New
 version is available at ignite.apache.org: 2.8.1"}
 {"appTimestamp":"2020-06-24T12:58:42.294+00:00","threadName":"disco-event-worker-#35%xx%","level":"INFO","loggerName":"org.apache.ignite.internal.managers.discovery.GridDiscoveryManager","message":"Node
 left topology: TcpDiscoveryNode [id=02949ae0-4eea-4dc9-8aed-b3f50e8d7238,
 addrs=[0:0:0:0:0:0:0:1%lo, 127.0.0.1, xxx.xxx.xxx.73],
 sockAddrs=[0:0:0:0:0:0:0:1%lo:0, /127.0.0.1:0,
 xx-task-0003/xxx.xxx.xxx.73:0], discPort=0, order=1258, intOrder=632,
 lastExchangeTime=1592890182021, loc=false,
 ver=2.7.0#20181130-sha1:256ae401, isClient=true]"}

 I don't see any exceptions in the log. When did the issue happen? Can
 you share the full log?

 Evgenii

 Thu, 25 Jun 2020 at 07:36, John Smith :

> Hi Evgenii, same folder shared stdout.copy
>
> Just in case:
> https://www.dropbox.com/sh/ejcddp2gcml8qz2/AAD_VfUecE0hSNZX7wGbfDh3a?dl=0
>
> On Wed, 24 Jun 2020 at 21:23, Evgenii Zhuravlev <
> e.zhuravlev...@gmail.com> wrote:
>
>> No, it's not. It's not clear when it happened and what was with the
>> cluster and the client node itself at this moment.
>>
>> Evgenii
>>
>> Wed, 24 Jun 2020 at 18:16, John Smith :
>>
>>> Ok I'll try... The stack trace isn't enough?
>>>
>>> On Wed., Jun. 24, 2020, 4:30 p.m. Evgenii Zhuravlev, <
>>> e.zhuravlev...@gmail.com> wrote:
>>>
 John, right, didn't notice them before. Can you share the full log
 for the client node with an issue?

 Evgenii

 Wed, 24 Jun 2020 at 12:29, John Smith :

> I thought I did! The link doesn't have them?
>
> On Wed., Jun. 24, 2020, 2:43 p.m. Evgenii Zhuravlev, <
> e.zhuravlev...@gmail.com> wrote:
>
>> Can you share full log files from server nodes?
>>
>> Wed, 24 Jun 2020 at 10:47, John Smith :
>>
>>> The logs for server are here:
>>> https://www.dropbox.com/sh/ejcddp2gcml8qz2/AAD_VfUecE0hSNZX7wGbfDh3a?dl=0
>>>
>>> The error from the client:
>>>
>>> javax.cache.CacheException: class
>>> org.apache.ignite.internal.processors.cache.CacheInvalidStateException:
>>> Failed to execute cache operation (all partition owners have left 
>>> the grid,
>>> partition data has been lost) [cacheName=cache1, part=580,
>>> key=UserKeyCacheObjectImpl [part=580, val=14385045508, 
>>> hasValBytes=false]]
>>> at
>>> org.apache.ignite.internal.processors.cache.GridCacheUtils.convertToCacheException(GridCacheUtils.java:1337)
>>> at
>>> org.apache.ignite.internal.processors.cache.IgniteCacheFutureImpl.convertException(IgniteCacheFutureImpl.java:62)
>>> at
>>> org.apache.ignite.internal.util.future.IgniteFutureImpl.get(IgniteFutureImpl.java:137)
>>> at
>>> com.xx.common.vertx.ext.data.impl.IgniteCacheRepository.lambda$executeAsync$d94e711a$1(IgniteCacheRepository.java:55)
>>> at
>>> org.apache.ignite.internal.util.future.AsyncFutureListener$1.run(AsyncFutureListener.java:53)
>>> at
>>> com.xx.common.vertx.ext.data.impl.VertxIgniteExecutorAdapter.lambda$execute$0(VertxIgniteExecutorAdapter.java:18)
>>> at
>>> io.vertx.core.impl.ContextImpl.executeTask(ContextImpl.java:369)
>>> at
>>> io.vertx.core.impl.WorkerContext.lambda$wrapTask$0(WorkerContext.java:35)
>>> at 

Ignite server of perNodeParallelOperations?

2020-07-07 Thread Edward Chen

Hello,

In IgniteDataStreamer there is a setting, perNodeParallelOperations(int),
which is configured on the client side. Is there a similar configuration on
the server side? Otherwise, if clients are free to set
perNodeParallelOperations to any value they want, how does the server
prevent itself from being overloaded and crashing?


Thanks. Ed




Re: Create two table with the same VALUE_TYPE

2020-07-07 Thread Surkov.Aleksandr
Nikolay, I think that one of the solutions could be to create the
corresponding classes:

class MyObjInteger {
    private Integer id;
    private Integer value;
}

class MyObjString {
    private Integer id;
    private String value;
}

Maybe there is another solution?



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Create two table with the same VALUE_TYPE

2020-07-07 Thread Surkov.Aleksandr
Thank you!



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Ignite Backup and Restore

2020-07-07 Thread marble.zh...@coinflex.com
And by the way, is there a tool that helps export and import data, for a
logical backup and restore like in a traditional database?

Thanks.



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Ignite Backup and Restore

2020-07-07 Thread marble.zh...@coinflex.com
Hello Anton, 

We haven't actually hit data loss yet; we just want to put a data
backup/restore policy in place.

Or is there no such concept in Ignite, because the data is already replicated
in the cluster and will be restored automatically?

As our Ignite instance data is very critical, we need to care about data
safety.

Thanks.



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


[Announcement] Cloud data lake conference with heavy focus on open source

2020-07-07 Thread ldazaa11
Hey Igniters, 

If you’re interested in how Ignite is being applied in cloud data lake
environments, then you should check out a new 1-day live, virtual conference
on July 30. The conference is called Subsurface, and the focus is technical
talks tailored specifically for data architects and engineers building cloud
data lakes and related technologies.

Here are some of the speakers presenting at the event:

Wes McKinney - Director at Ursa Labs, Pandas Creator and Apache Arrow
co-creator.
Maxime Beauchemin - CEO and Founder, Preset. Apache Superset and Airflow
Creator.
Julien Le Dem - Co-founder and CTO at Datakin. Apache Parquet Co-creator.
Daniel Weeks - Big Data Compute Team Lead, Netflix - Parquet Committer and
Hive Contributor.

You can join here (there’s no cost).

The Subsurface Team
@subsurfaceconf




--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Create two table with the same VALUE_TYPE

2020-07-07 Thread Surkov.Aleksandr
Exception:
[14:34:21,079][SEVERE][client-connector-#81%7e375abc-4354-4c8d-a9e1-d193171826c0%][ClientListenerNioListener]
Failed to process client request
[req=o.a.i.i.processors.platform.client.cache.ClientCacheSqlFieldsQueryRequest@78f07b83]
class org.apache.ignite.binary.BinaryObjectException: Wrong value has been
set
[typeName=org.apache.ignite.internal.processors.cache.CreateTwoTablesWithDifferentSchemaTest$MyObj,
fieldName=VALUE, fieldType=int, assignedValueType=String]
at
org.apache.ignite.internal.binary.builder.BinaryObjectBuilderImpl.checkMetadata(BinaryObjectBuilderImpl.java:433)
at
org.apache.ignite.internal.binary.builder.BinaryObjectBuilderImpl.serializeTo(BinaryObjectBuilderImpl.java:321)
at
org.apache.ignite.internal.binary.builder.BinaryObjectBuilderImpl.build(BinaryObjectBuilderImpl.java:188)
at
org.apache.ignite.internal.processors.query.h2.dml.UpdatePlan.processRow(UpdatePlan.java:279)
at
org.apache.ignite.internal.processors.query.h2.dml.DmlUtils.dmlDoInsert(DmlUtils.java:195)
at
org.apache.ignite.internal.processors.query.h2.dml.DmlUtils.processSelectResult(DmlUtils.java:168)
at
org.apache.ignite.internal.processors.query.h2.IgniteH2Indexing.executeUpdateNonTransactional(IgniteH2Indexing.java:2899)
at
org.apache.ignite.internal.processors.query.h2.IgniteH2Indexing.executeUpdate(IgniteH2Indexing.java:2753)
at
org.apache.ignite.internal.processors.query.h2.IgniteH2Indexing.executeUpdateDistributed(IgniteH2Indexing.java:2683)
at
org.apache.ignite.internal.processors.query.h2.IgniteH2Indexing.executeDml(IgniteH2Indexing.java:1186)
at
org.apache.ignite.internal.processors.query.h2.IgniteH2Indexing.querySqlFields(IgniteH2Indexing.java:1112)
at
org.apache.ignite.internal.processors.query.GridQueryProcessor$4.applyx(GridQueryProcessor.java:2574)
at
org.apache.ignite.internal.processors.query.GridQueryProcessor$4.applyx(GridQueryProcessor.java:2570)
at
org.apache.ignite.internal.util.lang.IgniteOutClosureX.apply(IgniteOutClosureX.java:36)
at
org.apache.ignite.internal.processors.query.GridQueryProcessor.executeQuery(GridQueryProcessor.java:3097)
at
org.apache.ignite.internal.processors.query.GridQueryProcessor.lambda$querySqlFields$1(GridQueryProcessor.java:2590)
at
org.apache.ignite.internal.processors.query.GridQueryProcessor.executeQuerySafe(GridQueryProcessor.java:2628)
at
org.apache.ignite.internal.processors.query.GridQueryProcessor.querySqlFields(GridQueryProcessor.java:2564)
at
org.apache.ignite.internal.processors.query.GridQueryProcessor.querySqlFields(GridQueryProcessor.java:2491)
at
org.apache.ignite.internal.processors.query.GridQueryProcessor.querySqlFields(GridQueryProcessor.java:2447)
at
org.apache.ignite.internal.processors.platform.client.cache.ClientCacheSqlFieldsQueryRequest.process(ClientCacheSqlFieldsQueryRequest.java:110)
at
org.apache.ignite.internal.processors.platform.client.ClientRequestHandler.handle(ClientRequestHandler.java:99)
at
org.apache.ignite.internal.processors.odbc.ClientListenerNioListener.onMessage(ClientListenerNioListener.java:200)
at
org.apache.ignite.internal.processors.odbc.ClientListenerNioListener.onMessage(ClientListenerNioListener.java:54)
at
org.apache.ignite.internal.util.nio.GridNioFilterChain$TailFilter.onMessageReceived(GridNioFilterChain.java:279)
at
org.apache.ignite.internal.util.nio.GridNioFilterAdapter.proceedMessageReceived(GridNioFilterAdapter.java:109)
at
org.apache.ignite.internal.util.nio.GridNioAsyncNotifyFilter$3.body(GridNioAsyncNotifyFilter.java:97)
at
org.apache.ignite.internal.util.worker.GridWorker.run(GridWorker.java:120)
at
org.apache.ignite.internal.util.worker.GridWorkerPool$1.run(GridWorkerPool.java:70)
at
java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
at
java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
at java.base/java.lang.Thread.run(Thread.java:834)
org.apache.ignite.internal.client.thin.ClientServerError: Ignite failed to
process request [4]: Wrong value has been set
[typeName=org.apache.ignite.internal.processors.cache.CreateTwoTablesWithDifferentSchemaTest$MyObj,
fieldName=VALUE, fieldType=int, assignedValueType=String] (server status
code [1])
at
org.apache.ignite.internal.client.thin.TcpClientChannel.processNextMessage(TcpClientChannel.java:390)
at
org.apache.ignite.internal.client.thin.TcpClientChannel.lambda$initReceiverThread$0(TcpClientChannel.java:314)
at java.base/java.lang.Thread.run(Thread.java:834)



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Create two table with the same VALUE_TYPE

2020-07-07 Thread Surkov.Aleksandr
I have a class MyObj that I want to use for multiple tables. MyObj has a
field named value whose type is only known at runtime, therefore the type of
this field is Object. I try to create two tables using the MyObj class, but
when an insert into the second table occurs, an exception is thrown.

public class CreateTwoTablesWithDifferentSchemaTest extends GridCommonAbstractTest {
    /** Create two tables/caches with the same VALUE_TYPE. */
    @Test
    public void executeTest() throws Exception {
        try (final Ignite server = Ignition.start(Config.getServerConfiguration());
             final IgniteClient client = Ignition.startClient(new ClientConfiguration()
                 .setAddresses(Config.SERVER)
                 .setBinaryConfiguration(new BinaryConfiguration().setCompactFooter(true)))) {

            createTableAndInsert(client, "MyTable1", "int");
            createTableAndInsert(client, "MyTable2", "varchar");
        }
    }

    private void createTableAndInsert(IgniteClient client, String tableName, String typeField) {
        /*
         * Can use tableName instead of MyObj.class.getName(), but then you
         * can't use the {@link ClientCache#get(Object)} method.
         */
        client.query(new SqlFieldsQuery("CREATE TABLE IF NOT EXISTS " + tableName +
            " (id int primary key, value " + typeField + ") " +
            "WITH \"VALUE_TYPE=" + MyObj.class.getName() + ",CACHE_NAME=" + tableName +
            ",atomicity=transactional,template=partitioned\"")
        ).getAll();

        final Object value = "int".equals(typeField) ? 1 : "value_1";

        try {
            client.query(new SqlFieldsQuery("INSERT INTO " + tableName + " (id, value) values(?,?)")
                .setArgs(1, value)
            ).getAll();
        } catch (Exception e) {
            e.printStackTrace();
            fail();
        }
    }

    static class MyObj {
        private Integer id;

        // It is assumed that the type of this property will depend on the type of the field in the table.
        private Object value;

        public MyObj(Integer id, Object value) {
            this.id = id;
            this.value = value;
        }
    }
}

Can I somehow solve this problem?



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Blocked system-critical thread has been detected - After upgrade to 2.8.1

2020-07-07 Thread Manu
Hi Alex

We can't share the GC logs as they contain sensitive data. We solved the
problem by creating a new cluster with persistence enabled and moving the
data from the problematic cluster to the new one.

As far as we can see, the problem is in the checkpoint process: for some
unknown reason (maybe related to the migration from 2.7.6 to 2.8.1 and the
changes in persistence management), the checkpoint thread gets blocked.

Anyway, we will be attentive to this topic.

Thank you very much, greetings!




--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: ScanQuery Transform: fail to deserialze! ignite 2.8.0, java 11

2020-07-07 Thread Rafael Troilo
Thank you Ilya for creating the issue.

And a big thank you that you already fixed it :-).

Best,
Rafael

On 7/3/20 5:56 PM, Ilya Kasnacheev wrote:
> Hello!
> 
> There is now a ticket about this issue, which I indeed can see:
> https://issues.apache.org/jira/browse/IGNITE-13212
> 
> While it is not fixed, you will have to make sure that filter code is
> deployed to all nodes.
> 
> Regards,
> 

-- 
Rafael Troilo, Dipl.-Inform. (FH)
   GIScience Research Group 
  
   Heidelberg Institute for Geoinformation Technology

   rafael.tro...@uni-heidelberg.de
   http://giscience.uni-hd.de
   http://www.geog.uni-heidelberg.de/gis/heigit.html
  
   Berliner Str. 45 (Mathematikon), D-69120 Heidelberg, Germany
   fon: +49(0)6221 / 54 / 19704


Re: Create two table with the same VALUE_TYPE

2020-07-07 Thread Nikolay Izhikov
Hello, Alexander.

> Can I somehow solve this problem?

You can't use the same VALUE_TYPE for two tables with inconsistent field
types. This happens because the VALUE_TYPE name is actually used as the
binary object type name.

After that, all rows with the same VALUE_TYPE name are checked against the
first created row.
Please note that `VALUE_TYPE` can be an arbitrary string, not a Java class
name:

"The name should correspond to a Java, .NET or C++ class, or it can be a
random one if BinaryObjects is used instead of a custom class"

https://apacheignite-sql.readme.io/docs/create-table
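
For illustration, a sketch of the workaround, assuming separate value types
per table are acceptable (the table and type names below are arbitrary): give
each table its own VALUE_TYPE so the binary metadata of one table does not
constrain the other.

import org.apache.ignite.cache.query.SqlFieldsQuery;
import org.apache.ignite.client.IgniteClient;

public class DistinctValueTypesSketch {
    /** The type names MyObjInt/MyObjStr are arbitrary strings, not Java classes. */
    public static void createTables(IgniteClient client) {
        client.query(new SqlFieldsQuery(
            "CREATE TABLE IF NOT EXISTS MyTable1 (id int primary key, value int) " +
            "WITH \"VALUE_TYPE=MyObjInt,CACHE_NAME=MyTable1,template=partitioned\"")).getAll();

        client.query(new SqlFieldsQuery(
            "CREATE TABLE IF NOT EXISTS MyTable2 (id int primary key, value varchar) " +
            "WITH \"VALUE_TYPE=MyObjStr,CACHE_NAME=MyTable2,template=partitioned\"")).getAll();
    }
}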




Tue, 7 Jul 2020 at 15:31, Surkov.Aleksandr :

> Exception:
>
> [14:34:21,079][SEVERE][client-connector-#81%7e375abc-4354-4c8d-a9e1-d193171826c0%][ClientListenerNioListener]
> Failed to process client request
>
> [req=o.a.i.i.processors.platform.client.cache.ClientCacheSqlFieldsQueryRequest@78f07b83
> ]
> class org.apache.ignite.binary.BinaryObjectException: Wrong value has been
> set
>
> [typeName=org.apache.ignite.internal.processors.cache.CreateTwoTablesWithDifferentSchemaTest$MyObj,
> fieldName=VALUE, fieldType=int, assignedValueType=String]
> at
>
> org.apache.ignite.internal.binary.builder.BinaryObjectBuilderImpl.checkMetadata(BinaryObjectBuilderImpl.java:433)
> at
>
> org.apache.ignite.internal.binary.builder.BinaryObjectBuilderImpl.serializeTo(BinaryObjectBuilderImpl.java:321)
> at
>
> org.apache.ignite.internal.binary.builder.BinaryObjectBuilderImpl.build(BinaryObjectBuilderImpl.java:188)
> at
>
> org.apache.ignite.internal.processors.query.h2.dml.UpdatePlan.processRow(UpdatePlan.java:279)
> at
>
> org.apache.ignite.internal.processors.query.h2.dml.DmlUtils.dmlDoInsert(DmlUtils.java:195)
> at
>
> org.apache.ignite.internal.processors.query.h2.dml.DmlUtils.processSelectResult(DmlUtils.java:168)
> at
>
> org.apache.ignite.internal.processors.query.h2.IgniteH2Indexing.executeUpdateNonTransactional(IgniteH2Indexing.java:2899)
> at
>
> org.apache.ignite.internal.processors.query.h2.IgniteH2Indexing.executeUpdate(IgniteH2Indexing.java:2753)
> at
>
> org.apache.ignite.internal.processors.query.h2.IgniteH2Indexing.executeUpdateDistributed(IgniteH2Indexing.java:2683)
> at
>
> org.apache.ignite.internal.processors.query.h2.IgniteH2Indexing.executeDml(IgniteH2Indexing.java:1186)
> at
>
> org.apache.ignite.internal.processors.query.h2.IgniteH2Indexing.querySqlFields(IgniteH2Indexing.java:1112)
> at
>
> org.apache.ignite.internal.processors.query.GridQueryProcessor$4.applyx(GridQueryProcessor.java:2574)
> at
>
> org.apache.ignite.internal.processors.query.GridQueryProcessor$4.applyx(GridQueryProcessor.java:2570)
> at
>
> org.apache.ignite.internal.util.lang.IgniteOutClosureX.apply(IgniteOutClosureX.java:36)
> at
>
> org.apache.ignite.internal.processors.query.GridQueryProcessor.executeQuery(GridQueryProcessor.java:3097)
> at
>
> org.apache.ignite.internal.processors.query.GridQueryProcessor.lambda$querySqlFields$1(GridQueryProcessor.java:2590)
> at
>
> org.apache.ignite.internal.processors.query.GridQueryProcessor.executeQuerySafe(GridQueryProcessor.java:2628)
> at
>
> org.apache.ignite.internal.processors.query.GridQueryProcessor.querySqlFields(GridQueryProcessor.java:2564)
> at
>
> org.apache.ignite.internal.processors.query.GridQueryProcessor.querySqlFields(GridQueryProcessor.java:2491)
> at
>
> org.apache.ignite.internal.processors.query.GridQueryProcessor.querySqlFields(GridQueryProcessor.java:2447)
> at
>
> org.apache.ignite.internal.processors.platform.client.cache.ClientCacheSqlFieldsQueryRequest.process(ClientCacheSqlFieldsQueryRequest.java:110)
> at
>
> org.apache.ignite.internal.processors.platform.client.ClientRequestHandler.handle(ClientRequestHandler.java:99)
> at
>
> org.apache.ignite.internal.processors.odbc.ClientListenerNioListener.onMessage(ClientListenerNioListener.java:200)
> at
>
> org.apache.ignite.internal.processors.odbc.ClientListenerNioListener.onMessage(ClientListenerNioListener.java:54)
> at
>
> org.apache.ignite.internal.util.nio.GridNioFilterChain$TailFilter.onMessageReceived(GridNioFilterChain.java:279)
> at
>
> org.apache.ignite.internal.util.nio.GridNioFilterAdapter.proceedMessageReceived(GridNioFilterAdapter.java:109)
> at
>
> org.apache.ignite.internal.util.nio.GridNioAsyncNotifyFilter$3.body(GridNioAsyncNotifyFilter.java:97)
> at
> org.apache.ignite.internal.util.worker.GridWorker.run(GridWorker.java:120)
> at
>
> org.apache.ignite.internal.util.worker.GridWorkerPool$1.run(GridWorkerPool.java:70)
> at
>
> java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
> at
>
> java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
> 

Re: DataRegion size with node start

2020-07-07 Thread Alex Plehanov
Hello,

For in-memory data regions, DataRegionConfiguration.getInitialSize() is
allocated on node startup.
Additional memory is then allocated on demand in chunks until
DataRegionConfiguration.getMaxSize() is reached.
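
A minimal configuration sketch (the region name and sizes below are
arbitrary):

import org.apache.ignite.configuration.DataRegionConfiguration;
import org.apache.ignite.configuration.DataStorageConfiguration;
import org.apache.ignite.configuration.IgniteConfiguration;

public class DataRegionSketch {
    public static IgniteConfiguration config() {
        DataRegionConfiguration region = new DataRegionConfiguration()
            .setName("default")
            .setInitialSize(256L * 1024 * 1024)   // allocated on node startup
            .setMaxSize(2L * 1024 * 1024 * 1024); // grown to on demand, chunk by chunk

        return new IgniteConfiguration().setDataStorageConfiguration(
            new DataStorageConfiguration().setDefaultDataRegionConfiguration(region));
    }
}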

Tue, 7 Jul 2020 at 16:17, 배혜원 :

> Hello!
>
> But I’m not using a persistent cache.
> What happens then?
>
> Sent from my iPhone
>
> > On 7 Jul 2020, at 7:43 PM, akurbanov  wrote:
> >
> > Hello kay,
> >
> > The data region memory specified will be allocated as soon as you will
> start
> > your first persistent cache.
> >
> > Best regards,
> > Anton
> >
> >
> >
> > --
> > Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: DataRegion size with node start

2020-07-07 Thread 배혜원
Hello!

But I’m not using a persistent cache.
What happens then?

Sent from my iPhone

> On 7 Jul 2020, at 7:43 PM, akurbanov  wrote:
> 
> Hello kay,
> 
> The data region memory specified will be allocated as soon as you will start
> your first persistent cache.
> 
> Best regards,
> Anton
> 
> 
> 
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Ignite Backup and Restore

2020-07-07 Thread akurbanov
Hello,

What exactly does the data loss look like in your case?

Best regards,
Anton



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: DataRegion size with node start

2020-07-07 Thread akurbanov
Hello kay,

The specified data region memory will be allocated as soon as you start
your first persistent cache.

Best regards,
Anton



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Force backup on different physical machine

2020-07-07 Thread Stephen Darlington
You can configure the affinity function (RendezvousAffinityFunction). If you
set the backup filter, you can customise which nodes are considered for use
as backups:

cacheConfiguration.setBackups(1);
cacheConfiguration.setAffinity(new RendezvousAffinityFunction(1024, (n, p) -> {
    return !p.hostNames().equals(n.hostNames());
}));
Having said that, you’re probably best configuring k8s not to put two Ignite 
server nodes on a single machine.

Regards,
Stephen

> On 7 Jul 2020, at 10:42, Humphrey  wrote:
> 
> Let say I have 2 kubernetes nodes and I have 4 ignite server nodes.
> This will result (if kubernetes have 2 pods running on each node) in the
> following:
> 
> *kubernetes_node1:* ignite_node1, ignite_node2
> *kubernetes_node2:* ignite_node3, ignite_node4
> 
> I specify that my cache backup = 1
> 
> Is there a way to configure that the backup data of the ignite_node1 goes on
> the ignite_node3 or ignite_node4 and NOT on ignite_node2 (same physical
> machine/kubernetes node)? Is there any configuration for this (I assume it's
> something on runtime, because we don't know where kubernetes will schedule
> the pod)?
> 
> Background:
> If kubernetes_node1 goes down, then there won't be any data loss.
> 
> Humphrey
> 
> 
> 
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/




Re: Force backup on different physical machine

2020-07-07 Thread Pavel Tupitsyn
Normally there are two ways to achieve this: excludeNeighbors
and AffinityBackupFilter [1]

However, excludeNeighbors won't work across pods on the same k8s node,
since it relies on MAC addresses.

So your best bet is to use ClusterNodeAttributeAffinityBackupFilter:
* set ClusterNodeAttributeAffinityBackupFilter with env var K8S_NODE_NAME
as described in [2]
* export K8S_NODE_NAME environment variable in Kubernetes as described in
[3]:

  env:
- name: K8S_NODE_NAME
  valueFrom:
fieldRef:
  fieldPath: spec.nodeName

This way backups won't end up on the same k8s node.

[1]
https://apacheignite.readme.io/docs/affinity-collocation#crash-safe-affinity
[2]
https://ignite.apache.org/releases/latest/javadoc/org/apache/ignite/cache/affinity/rendezvous/ClusterNodeAttributeAffinityBackupFilter.html
[3]
https://kubernetes.io/docs/tasks/inject-data-application/environment-variable-expose-pod-information/#use-pod-fields-as-values-for-environment-variables
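
A rough sketch of the Java side of this setup (the attribute name matches the
env var above; the cache name and backup count are illustrative):

import java.util.Collections;

import org.apache.ignite.cache.affinity.rendezvous.ClusterNodeAttributeAffinityBackupFilter;
import org.apache.ignite.cache.affinity.rendezvous.RendezvousAffinityFunction;
import org.apache.ignite.configuration.CacheConfiguration;
import org.apache.ignite.configuration.IgniteConfiguration;

public class K8sBackupFilterSketch {
    public static IgniteConfiguration config() {
        // Expose the k8s node name as a node attribute (K8S_NODE_NAME is injected via the pod spec above).
        IgniteConfiguration cfg = new IgniteConfiguration()
            .setUserAttributes(Collections.singletonMap("K8S_NODE_NAME", System.getenv("K8S_NODE_NAME")));

        RendezvousAffinityFunction affinity = new RendezvousAffinityFunction();
        // Backups are placed only on nodes whose K8S_NODE_NAME attribute differs from the primary's.
        affinity.setAffinityBackupFilter(new ClusterNodeAttributeAffinityBackupFilter("K8S_NODE_NAME"));

        CacheConfiguration<Object, Object> cacheCfg = new CacheConfiguration<>("myCache")
            .setBackups(1)
            .setAffinity(affinity);

        return cfg.setCacheConfiguration(cacheCfg);
    }
}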

On Tue, Jul 7, 2020 at 12:42 PM Humphrey  wrote:

> Let say I have 2 kubernetes nodes and I have 4 ignite server nodes.
> This will result (if kubernetes have 2 pods running on each node) in the
> following:
>
> *kubernetes_node1:* ignite_node1, ignite_node2
> *kubernetes_node2:* ignite_node3, ignite_node4
>
> I specify that my cache backup = 1
>
> Is there a way to configure that the backup data of the ignite_node1 goes
> on
> the ignite_node3 or ignite_node4 and NOT on ignite_node2 (same physical
> machine/kubernetes node)? Is there any configuration for this (I assume
> it's
> something on runtime, because we don't know where kubernetes will schedule
> the pod)?
>
> Background:
> If kubernetes_node1 goes down, then there won't be any data loss.
>
> Humphrey
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Ignite Backup and Restore

2020-07-07 Thread marble.zh...@coinflex.com
Hi Guru, 

Are there any docs about the Ignite backup and restore mechanism, i.e. how to
recover the data if it is lost from the cache?

Currently we keep the WAL and the DB in separate folders; if we do hit data
loss, how do I recover the data from the WAL?

Thanks.



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Force backup on different physical machine

2020-07-07 Thread Humphrey
Let's say I have 2 Kubernetes nodes and 4 Ignite server nodes.
This will result (if Kubernetes has 2 pods running on each node) in the
following:

*kubernetes_node1:* ignite_node1, ignite_node2
*kubernetes_node2:* ignite_node3, ignite_node4

I specify that my cache backup = 1

Is there a way to configure it so that the backup data of ignite_node1 goes to
ignite_node3 or ignite_node4 and NOT to ignite_node2 (the same physical
machine/Kubernetes node)? Is there any configuration for this (I assume it has
to happen at runtime, because we don't know where Kubernetes will schedule
the pods)?

Background:
If kubernetes_node1 goes down, then there won't be any data loss.

Humphrey



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/