Where to get documentation about Ignite architecture or technical advantages?

2016-10-13 Thread 胡永亮/Bob
Hi, everyone

I have only found documents about Ignite's features, but I want to know how
those features are implemented.
Where can I get an architecture document?

I also have some deeper questions that need explaining.

For example:
1. Is Ignite's SQL query implemented with MapReduce, or with fully
distributed MPP technology?
2. Is an object with many columns stored in the Ignite cache in a
column-oriented or row-oriented way?
3. What advantages does Ignite's stream computing have compared to Storm?
4. Does a single Ignite cluster support deployment across multiple data
centers, such as one node in one province and another node in a different
province?

Thanks for your reply.



Bob


---
Confidentiality Notice: The information contained in this e-mail and any
accompanying attachment(s) is intended only for the use of the intended
recipient and may be confidential and/or privileged of Neusoft Corporation,
its subsidiaries and/or its affiliates. If any reader of this communication is
not the intended recipient, unauthorized use, forwarding, printing, storing,
disclosure or copying is strictly prohibited, and may be unlawful. If you have
received this communication in error, please immediately notify the sender by
return e-mail, and delete the original message and all copies from your
system. Thank you.
---


Re: Execute SQL on Ignite cache of BinaryObjects

2016-10-13 Thread vkulichenko
Hi,

I responded on SO:
http://stackoverflow.com/questions/40019506/execute-sql-on-ignite-cache-of-binaryobjects

-Val



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/Execute-SQL-on-Ignite-cache-of-BinaryObjects-tp8269p8281.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Re: Couchbase as persistent store

2016-10-13 Thread vkulichenko
Hi,

Can you provide a small project that will reproduce the issue?

-Val



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/Couchbase-as-persistent-store-tp7476p8280.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Re: Random SSL unsupported record version

2016-10-13 Thread vkulichenko
Hi,

Looks like there is an issue with SSL support when long keys are used. I'm
investigating it right now. How long are the keys you're using?

-Val



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/Random-SSL-unsupported-record-version-tp8236p8278.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Re: sample code for customised Partition logic

2016-10-13 Thread vkulichenko
Hi,

What exactly are you trying to achieve? Modifying the affinity function can
be a challenging task and is generally needed only for very rare specific
use cases.

-Val



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/sample-code-for-customised-Partition-logic-tp8270p8277.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Re: randomEntry() deprecated

2016-10-13 Thread vkulichenko
Hi,

You can also use SQL queries for indexed search:
https://apacheignite.readme.io/docs/sql-queries

-Val



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/randomEntry-deprecated-tp8274p8276.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Re: randomEntry() deprecated

2016-10-13 Thread ggoleash
Nevermind -- localEntries() is better anyway.  But if anyone knows of a
better way to iterate over unknown cache entries, please advise.



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/randomEntry-deprecated-tp8274p8275.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


randomEntry() deprecated

2016-10-13 Thread ggoleash
Since this method of the IgniteCache interface is deprecated, does anyone
know how to get an Entry without knowing any of the keys in the cache?

For example, if an external process (maybe via Nifi) is writing directly to
a cache, how does a separate application read from the cache without knowing
any of the keys? 






--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/randomEntry-deprecated-tp8274.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Re: Killing a node under load stalls the grid with ignite 1.7

2016-10-13 Thread vdpyatkov
Hi Binti,

This looks like a lock between GridCacheWriteBehindStore and
GridCachePartitionExchangeManager.

Could you provide a working example of this?
If not, I will try to reproduce it tomorrow.



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/Killing-a-node-under-load-stalls-the-grid-with-ignite-1-7-tp8130p8273.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


RE: Evicted entry appears in Write-behind cache

2016-10-13 Thread Pradeep Badiger
Hi Vladislav,

Please see the cache configuration below.

LruEvictionPolicy evictionPolicy =
    new LruEvictionPolicy<>(getIntProperty(envConfig, CACHE_SIZE, 1));

cacheConfiguration
    .setEvictionPolicy(evictionPolicy)
    .setWriteBehindFlushSize(getIntProperty(envConfig, CACHE_WB_FLUSH_SIZE, 0))
    .setWriteBehindBatchSize(getIntProperty(envConfig, CACHE_WB_BATCH_SIZE, 200))
    .setWriteBehindEnabled(stateQoS.isWriteBehindEnabled())
    .setWriteBehindFlushFrequency(getIntProperty(envConfig, CACHE_WB_FLUSH_FREQ_MS, 5000))
    .setWriteBehindFlushThreadCount(getIntProperty(envConfig, CACHE_WB_FLUSH_THREADS, 10))
    .setCacheStoreFactory(new StateCacheStoreFactory(cacheName, storageManager))
    .setName(cacheName)
    .setReadThrough(stateQoS.isReadThroughEnabled())
    .setWriteThrough(stateQoS.isWriteThroughEnabled());

Thanks,
Pradeep V.B.

From: Vladislav Pyatkov [mailto:vldpyat...@gmail.com]
Sent: Wednesday, October 12, 2016 2:59 AM
To: user@ignite.apache.org
Subject: Re: Evicted entry appears in Write-behind cache

Hi Pradeep,

Could you please provide cache configuration?

On Tue, Oct 11, 2016 at 6:57 PM, Denis Magda wrote:
Looks like that my initial understanding was wrong. There is a related 
discussion
http://apache-ignite-users.70518.x6.nabble.com/Cache-read-through-with-expiry-policy-td2521.html

—
Denis

On Oct 11, 2016, at 8:55 AM, Pradeep Badiger wrote:

Hi Denis,

I called get() on the evicted entry, and it still returned the value without
calling load() on the store. As you said, the entry would still be held in the
write-behind store even after eviction. Is that true?

Thanks,
Pradeep V.B.
From: Denis Magda [mailto:dma...@gridgain.com]
Sent: Monday, October 10, 2016 9:13 PM
To: user@ignite.apache.org
Subject: Re: Evicted entry appears in Write-behind cache

Hi,

How do you see that the evicted entries are still in the cache? If you check 
this by calling cache get like operations then entries can be loaded back from 
the write-behind store or from your underlying store.

—
Denis

On Oct 8, 2016, at 1:00 PM, Pradeep Badiger wrote:

Hi,

I am evaluating Apache Ignite and exploring the eviction policy and
write-behind features. I am seeing that whenever a cache is configured with
both an eviction policy and write-behind, the write-behind cache always holds
all the changed entries, including the ones that were evicted, until the write
cache is flushed. But soon after it is flushed, the store loads the entries
again from the DB. Is this the expected behavior? Is there documentation on
how the write-behind cache works?

Thanks,
Pradeep V.B.
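Conceptually, the behavior described above follows from write-behind keeping its own buffer of pending updates, separate from the cache and its eviction policy. The following is a minimal pure-Java illustration of that idea (a sketch, not Ignite's actual implementation): an evicted entry can still be served from the pending write-behind buffer until the buffer is flushed to the store.

```java
import java.util.HashMap;
import java.util.Map;

// Sketch only: models why a get() can still succeed after eviction
// when write-behind is enabled, until the buffer is flushed.
class WriteBehindSketch {
    final Map<String, String> cache = new HashMap<>();   // on-heap cache entries
    final Map<String, String> pending = new HashMap<>(); // write-behind buffer
    final Map<String, String> store = new HashMap<>();   // underlying DB

    void put(String k, String v) {
        cache.put(k, v);
        pending.put(k, v); // queued for asynchronous write-out
    }

    void evict(String k) {
        cache.remove(k); // eviction does not touch the write-behind buffer
    }

    void flush() {
        store.putAll(pending); // the flusher thread writes the batch to the DB
        pending.clear();
    }

    String get(String k) {
        if (cache.containsKey(k)) return cache.get(k);
        if (pending.containsKey(k)) return pending.get(k); // no store.load() needed
        return store.get(k); // read-through after the flush
    }
}
```

Under this model, an entry evicted before the flush is still returned by get() without a load(), which matches what Pradeep observes.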
This email and any files transmitted with it are confidential, proprietary and 
intended solely for the individual or entity to whom they are addressed. If you 
have received this email in error please delete it immediately.





--
Vladislav Pyatkov


Re: Network Segmentation configuarion

2016-10-13 Thread Vladislav Pyatkov
Hi Yitzhak,

You can configure a specific timeout in the discovery SPI, or increase the
common timeout via IgniteConfiguration#setFailureDetectionTimeout.
But note that a long timeout means the grid can stall for up to that long
when a node actually fails.

If you want to handle the segmentation event:

ignite.events().localListen(new IgnitePredicate<Event>() {
    @Override public boolean apply(Event event) {
        System.out.println("Execute custom logic...");

        return true;
    }
}, EventType.EVT_NODE_SEGMENTED);

But what would you do in that handler?
You cannot simply wait for the node to come back, because the cluster has
already segmented the node out (its data will be rebalanced to the other
nodes).

On Thu, Oct 13, 2016 at 12:40 PM, Yitzhak Molko 
wrote:

> While I didn't configure any network segmentation properties
> (SegmentationResolvers, SegmentationResolveAttempts,
> SegmentCheckFrequency, etc.), a node is shut down from time to time:
> WARN : [discovery.tcp.TcpDiscoverySpi] Date=2016/10/13/07/42/52/009|Node
> is out of topology (probably, due to short-time network problems).
> WARN : [managers.discovery.GridDiscoveryManager]
> Date=2016/10/13/07/42/52/009|Local node SEGMENTED: TcpDiscoveryNode
> [id=4b3349f5-fda0-4e9d-a528-c8b5f4401717, addrs=[0:0:0:0:0:0:0:1%1,
> 10.0.0.5, 127.0.0.1], sockAddrs=[/127.0.0.1:47500,
> /0:0:0:0:0:0:0:1%1:47500, /10.0.0.5:47500], discPort=47500, order=271,
> intOrder=145, lastExchangeTime=1476344572001, loc=true,
> ver=1.7.0#20160801-sha1:383273e3, isClient=false]
> WARN : [managers.discovery.GridDiscoveryManager]
> Date=2016/10/13/07/42/52/092|Stopping local node according to configured
> segmentation policy.
>
> I would like to understand what the default behavior is, since I didn't
> configure any SegmentationResolvers.
> I can probably set SegmentationPolicy to NOOP to avoid the node shutdown,
> but I don't think it's a good idea for a node to be out of the topology for
> a long time.
> Is it possible to wait longer before getting the SEGMENTED event?
>
> We are using Ignite 1.7.0 and running a cluster with 20 nodes.
>
> Thank you,
> Yitzhak
> --
>
> Yitzhak Molko
>



-- 
Vladislav Pyatkov


sample code for customised Partition logic

2016-10-13 Thread minisoft_rm
Dear All, I cannot find sample code showing how to customise partitioning,
and I have almost finished implementing it myself.

However, the weird thing is that it only works occasionally; most of the time
it doesn't work.

My code is:

public class UCSRendezvousAffinityFunction extends RendezvousAffinityFunction
{
    public int partitions()
    {
        return 2; // only test 2 partitions :-)
    }

    public int partition(final Object key)
    {
        ...
        return xxx % 2;
    }

    // is this API usage correct??
    public List<List<ClusterNode>> assignPartitions(final AffinityFunctionContext affCtx)
    {
        final List<List<ClusterNode>> assignments = new ArrayList<>(partitions());

        final boolean exclNeighbors = false;

        final Map<UUID, Collection<ClusterNode>> neighborhoodCache = exclNeighbors
            ? GridCacheUtils.neighbors(affCtx.currentTopologySnapshot()) : null;

        for (int i = 0; i < partitions(); i++)
        {
            final List<ClusterNode> partAssignment = assignPartition(i,
                affCtx.currentTopologySnapshot(), affCtx.backups(), neighborhoodCache);

            assignments.add(partAssignment);
        }

        return assignments;
    }
}


So, as I said, it only works from time to time... why?

Could you tell me whether my override of assignPartitions() is correct, or
perhaps show me your own code? Thanks a lot.
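For what it's worth, one common reason a custom partition(Object key) works only intermittently is that Java's % operator returns negative results for negative hashCode() values, which would yield an invalid partition number for some keys. A small, hypothetical sketch of a sign-safe version (the xxx % 2 body above is elided, so this is only a guess at the intended logic):

```java
public class SafePartition {
    static final int PARTS = 2; // matches the two-partition test above

    // Mask off the sign bit so the result is always in [0, PARTS),
    // even for keys whose hashCode() is negative.
    static int partition(Object key) {
        return (key.hashCode() & Integer.MAX_VALUE) % PARTS;
    }

    public static void main(String[] args) {
        System.out.println(partition(-17)); // a key with a negative hash
        System.out.println(partition(4));
    }
}
```

A plain key.hashCode() % PARTS would return -1 for such keys, which an affinity function must never do.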



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/sample-code-for-customised-Partition-logic-tp8270.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Execute SQL on Ignite cache of BinaryObjects

2016-10-13 Thread vatsal mevada
I am creating a cache of BinaryObject from a Spark dataframe, and then I want
to perform SQL on that Ignite cache.

Here is my code, where bank is the dataframe which contains three fields
(id, name and age):

val ic = new IgniteContext(sc, () => new IgniteConfiguration())
val cacheConfig = new CacheConfiguration[BinaryObject, BinaryObject]()
cacheConfig.setName("test123")
cacheConfig.setStoreKeepBinary(true)
cacheConfig.setIndexedTypes(classOf[BinaryObject], classOf[BinaryObject])

val qe = new QueryEntity()
qe.setKeyType(TestKey)
qe.setValueType(TestValue)
val fields = new java.util.LinkedHashMap[String, String]()
fields.put("id", "java.lang.Long")
fields.put("name", "java.lang.String")
fields.put("age", "java.lang.Int")
qe.setFields(fields)
val qes = new java.util.ArrayList[QueryEntity]()
qes.add(qe)

cacheConfig.setQueryEntities(qes)

val cache = ic.fromCache[BinaryObject, BinaryObject](cacheConfig)

cache.savePairs(bank.rdd, (row: Bank, iContext: IgniteContext) => {
  val keyBuilder = iContext.ignite().binary().builder("TestKey")
  keyBuilder.setField("id", row.id)
  val key = keyBuilder.build()

  val valueBuilder = iContext.ignite().binary().builder("TestValue")
  valueBuilder.setField("name", row.name)
  valueBuilder.setField("age", row.age)
  val value = valueBuilder.build()
  (key, value)
}, true)

Now I am trying to execute an SQL query like this:

cache.sql("select age from TestValue")

which fails with the following exception:

Caused by: org.h2.jdbc.JdbcSQLException: Column "AGE" not found; SQL
statement:
select age from TestValue [42122-191]
  at org.h2.message.DbException.getJdbcSQLException(DbException.java:345)
  at org.h2.message.DbException.get(DbException.java:179)
  at org.h2.message.DbException.get(DbException.java:155)
  at org.h2.expression.ExpressionColumn.optimize(ExpressionColumn.java:147)
  at org.h2.command.dml.Select.prepare(Select.java:852)

What am I doing wrong here?



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/Execute-SQL-on-Ignite-cache-of-BinaryObjects-tp8269.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Re: Near cache

2016-10-13 Thread Vladislav Pyatkov
Hi,

Please clarify: did you enable the event (EVT_CACHE_OBJECT_REMOVED) in the
config?
I would expect all nodes to get EVT_CACHE_OBJECT_REMOVED in that case.






...

On Tue, Oct 11, 2016 at 8:56 PM, javastuff@gmail.com <
javastuff@gmail.com> wrote:

> Thank you for your reply. I would like to add more details to the 3rd point,
> since you have not fully understood it.
>
> Let's assume there are 4 nodes running. Node A brings data to the distributed
> cache; following the near-cache concept, I push the data to the distributed
> cache while node A also keeps it on heap in a Map implementation. Later, each
> node uses data from the distributed cache, and each node then brings that
> data into its own local heap-based map implementation.
> Now comes the case of cache invalidation: one of the nodes initiates a REMOVE
> call, which removes the local heap copy on that acting node and the entry in
> the distributed cache. This invokes the EVT_CACHE_OBJECT_REMOVED event.
> However, this event is generated only on the one node that has the data in
> its partition (this is what I have observed: a remote event for the owner
> node and a local event for the acting node). In that case, the owner node has
> the responsibility to tell all the other nodes to invalidate their local
> map-based copies. So I am combining an EVENT and a TOPIC to implement this.
>
> Is this the right approach, or is there a better approach?
>
> The cache remove event is generated only for the owner node (the node holding
> the data in its partition) and the node initiating the remove API call. Is
> this correct, or is it supposed to generate the event on all nodes?
> Conceptually both behaviors have their own meaning and use, so I think both
> are correct.
>
>
>
>
>
> --
> View this message in context: http://apache-ignite-users.
> 70518.x6.nabble.com/Near-cache-tp8192p8223.html
> Sent from the Apache Ignite Users mailing list archive at Nabble.com.
>



-- 
Vladislav Pyatkov


Re: Couchbase as persistent store

2016-10-13 Thread kvipin
any clue guys?


thanks,



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/Couchbase-as-persistent-store-tp7476p8267.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Re: Ignite - FileNotFoundException

2016-10-13 Thread Taras Ledkov

I would guess 64K-128K should be enough.
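To check what limit the Ignite JVM process actually received, the JDK exposes the open and maximum file descriptor counts via a HotSpot-specific MXBean. A small sketch (assuming a Unix-like OS and a HotSpot/OpenJDK JVM, where com.sun.management.UnixOperatingSystemMXBean is available):

```java
import java.lang.management.ManagementFactory;
import com.sun.management.UnixOperatingSystemMXBean;

public class FdCheck {
    // Per-process maximum file descriptor count seen by this JVM.
    static long maxFds() {
        UnixOperatingSystemMXBean os = (UnixOperatingSystemMXBean)
            ManagementFactory.getOperatingSystemMXBean();
        return os.getMaxFileDescriptorCount();
    }

    // Number of file descriptors currently open in this JVM.
    static long openFds() {
        UnixOperatingSystemMXBean os = (UnixOperatingSystemMXBean)
            ManagementFactory.getOperatingSystemMXBean();
        return os.getOpenFileDescriptorCount();
    }

    public static void main(String[] args) {
        System.out.println("open fds: " + openFds() + " / max fds: " + maxFds());
    }
}
```

If maxFds() prints the common default of 1024, raising the OS limit for the Ignite process is the usual fix for the "Too many open files" error above.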


On 13.10.2016 13:04, Anil wrote:

What is the recommended file descriptor limit for Ignite data load?

Thanks.

On 13 October 2016 at 15:16, Taras Ledkov wrote:


Please check the file descriptors OS limits.


On 13.10.2016 12:36, Anil wrote:


When loading huge data into Ignite I see the following
exception. My configuration includes off-heap memory set to 0 and swap
storage set to true.

 org.apache.ignite.logger.java.JavaLogger error
SEVERE: Failed to process directory: /tmp/ignite/work/ipc/shmem
java.io.FileNotFoundException:
/tmp/ignite/work/ipc/shmem/lock.file (Too many open files)
at java.io.RandomAccessFile.open0(Native Method)
at
java.io.RandomAccessFile.open(RandomAccessFile.java:316)
at
java.io.RandomAccessFile.<init>(RandomAccessFile.java:243)
at

org.apache.ignite.internal.util.ipc.shmem.IpcSharedMemoryServerEndpoint$GcWorker.cleanupResources(IpcSharedMemoryServerEndpoint.java:608)
at

org.apache.ignite.internal.util.ipc.shmem.IpcSharedMemoryServerEndpoint$GcWorker.body(IpcSharedMemoryServerEndpoint.java:565)
at

org.apache.ignite.internal.util.worker.GridWorker.run(GridWorker.java:110)
at java.lang.Thread.run(Thread.java:745)


Looks like it is due to high load.

How can we avoid this exception?

Thanks.



-- 
Taras Ledkov

Mail-To: tled...@gridgain.com 







Re: How to get the load status of the Ignite cluster

2016-10-13 Thread Vladislav Pyatkov
Hi,

You can look at the code [1] to see how all of the metrics are computed.
The task uses the public API, but information about thread pools in
particular is available through MBeans:

*org.apache:clsLdr=764c12b6,group=Thread Pools,name=GridExecutionExecutor*

[1]: https://github.com/apache/ignite/blob/master/modules/
core/src/main/java/org/apache/ignite/internal/IgniteKernal.java#L1005
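These pool attributes can also be read programmatically through standard JMX. The sketch below demonstrates the mechanism against a built-in JVM MBean that always exists; for Ignite you would substitute the pool bean name from the log above, and an attribute name such as ActiveCount or QueueSize (those attribute names are assumptions to verify in an MBean browser like JConsole):

```java
import java.lang.management.ManagementFactory;
import javax.management.MBeanServer;
import javax.management.ObjectName;

public class MBeanRead {
    // Generic helper: read one attribute of one MBean by its object name.
    static Object read(String objectName, String attribute) {
        try {
            MBeanServer srv = ManagementFactory.getPlatformMBeanServer();
            return srv.getAttribute(new ObjectName(objectName), attribute);
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
    }

    public static void main(String[] args) {
        // Demonstrated on a standard JVM bean; for Ignite, pass the thread
        // pool bean name from the node's log instead.
        System.out.println(read("java.lang:type=Threading", "ThreadCount"));
    }
}
```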

On Thu, Oct 13, 2016 at 4:24 AM, ght230  wrote:

> From the work log, I can see the Metrics for local node, such as
> "^-- Public thread pool [active=0, idle=512, qSize=0]"
>
> I want to know which API of metrics can I use to get the value of "active",
> "idle" and "qsize".
>
>
>
> --
> View this message in context: http://apache-ignite-users.
> 70518.x6.nabble.com/How-to-get-the-load-status-of-the-
> Ignite-cluster-tp8232p8259.html
> Sent from the Apache Ignite Users mailing list archive at Nabble.com.
>



-- 
Vladislav Pyatkov


Re: CacheContinuousQuery did not work after the second servernodejoinned into the topology.

2016-10-13 Thread Nikolai Tikhonov
Hi Lin,

In your case the autoUnsubscribe flag should be set to false.

Could you describe how performance changed after you enabled cache events?

Thanks,
Nikolay

On Mon, Oct 10, 2016 at 6:59 AM, Lin  wrote:

> Hi Nikolay,
>
> I have a requirement to use CQ to implement event-listener-like
> functionality. The client initializes and adds one listener to the cluster,
> and hopes to receive the expected CacheEntryEvents persistently, regardless
> of nodes leaving or joining.
>
> At first, I implemented this feature with Ignite.events, but the
> performance was unacceptable.
>
> Any advices are welcome.
>
> Lin.
>
>
> -- Original --
> *From: * "Nikolai Tikhonov";;
> *Date: * Fri, Oct 7, 2016 09:34 PM
> *To: * "user";
> *Subject: * Re: CacheContinuousQuery did not work after the second
> servernodejoinned into the topology.
>
> Hi Lin!
>
> It's a bug. I've created a ticket and you can track progress there:
> https://issues.apache.org/jira/browse/IGNITE-4047. As a workaround you can
> start the CQ with setAutoUnsubscribe(true).
>
> BTW: why do you use CQ with auto-unsubscribe false?
>
> Thanks,
> Nikolay
>
> On Fri, Sep 30, 2016 at 7:18 AM, Lin  wrote:
>
>> Hi Vladislav,
>>
>> Thank you for your response. I can reproduce this issue with the maven
>> project you gave.
>>
>> My problem is that after the second server node joined the topology, I put
>> some data into the cache. The result is that the CQ query works on the
>> first and second server nodes (the remote filter produced the system
>> output as expected), but the CQ client node was not working as expected
>> (the CacheEntryUpdatedListener was not triggered any more).
>>
>> I have modified the pom for my environment (only some modifications to
>> package versions), and added some Windows shell scripts to reproduce the
>> issue easily.
>> My environment is ignite ver. 1.6.0#20160518-sha1:0b22c45b; the details
>> can be found in the log file "log/s1.debug.log", which was produced with
>> the "-X" parameter in maven (see script server.bat).
>>
>> Here are the steps to reproduce the issue in my environment.
>> 1. mvn compile, to produce the target classes and ignite-*.xml.
>> 2. the first test
>> 2.1 run the server.bat to start the first server node, the console
>> outputs were piped into s1.log.
>> 2.2 run the CQClient.bat to create a client with a CQ query; when a
>> CacheContinuousQueryEvent is received, it will produce outputs like
>> `
>> sys-#5%null% receive CacheEntryEvent CacheContinuousQueryEvent
>> [evtType=CREATED, key=5, newVal=0, oldVal=null]
>> sys-#5%null% receive CacheEntryEvent CacheContinuousQueryEvent
>> [evtType=UPDATED, key=5, newVal=1, oldVal=0]
>> `
>> in the client node, and the server node will produce outputs like
>> `
>> CacheEntryEventRemoteFilter.evaluate CacheContinuousQueryEvent
>> [evtType=CREATED, key=5, newVal=0, oldVal=null], with ret true
>> CacheEntryEventRemoteFilter.evaluate CacheContinuousQueryEvent
>> [evtType=UPDATED, key=5, newVal=1, oldVal=0], with ret true
>> `
>> 2.3 run the DataClient.bat to put 2 kv pairs ((5, 0), (5, 1)) into the
>> given cache and exit. This will cause server1 to produce outputs from the
>> remote filter
>> `
>> CacheEntryEventRemoteFilter.evaluate CacheContinuousQueryEvent
>> [evtType=CREATED, key=5, newVal=0, oldVal=null], with ret true
>> CacheEntryEventRemoteFilter.evaluate CacheContinuousQueryEvent
>> [evtType=UPDATED, key=5, newVal=1, oldVal=0], with ret true
>> `
>> and causes the CQ client to produce outputs from the
>> CacheEntryUpdatedListener in the client,
>> `
>> sys-#5%null% receive CacheEntryEvent CacheContinuousQueryEvent
>> [evtType=CREATED, key=5, newVal=0, oldVal=null]
>> sys-#5%null% receive CacheEntryEvent CacheContinuousQueryEvent
>> [evtType=UPDATED, key=5, newVal=1, oldVal=0]
>> `
>>
>>
>> 3. continue with the second test, where the issue occurs:
>> 3.1 run the server.bat to start the second server node, piping its
>> output into s2.log.
>> 3.2 run the DataClient.bat to put the same 2 kv pairs into the cache; in
>> server1's and server2's outputs, the remote filter outputs are the same,
>> `
>> CacheEntryEventRemoteFilter.evaluate CacheContinuousQueryEvent
>> [evtType=UPDATED, key=5, newVal=0, oldVal=1], with ret true
>> CacheEntryEventRemoteFilter.evaluate CacheContinuousQueryEvent
>> [evtType=UPDATED, key=5, newVal=1, oldVal=0], with ret true
>> `
>> but in the CQClient's output there is nothing; the expected output
>> should be something like
>> `
>> sys-#5%null% receive CacheEntryEvent CacheContinuousQueryEvent
>> [evtType=UPDATED, key=5, newVal=0, oldVal=1]
>> sys-#5%null% receive CacheEntryEvent CacheContinuousQueryEvent
>> [evtType=UPDATED, key=5, newVal=1, oldVal=0]
>> `
>> but it is not produced.
>>
>> It looks like the remote filter is initialized on the server2 node
>> with method org.apache.ignite.internal.processors.cache.query.continuous

Re: Ignite - FileNotFoundException

2016-10-13 Thread Anil
What is the recommended file descriptor limit for Ignite data load?

Thanks.

On 13 October 2016 at 15:16, Taras Ledkov  wrote:

> Please check the file descriptors OS limits.
>
>
> On 13.10.2016 12:36, Anil wrote:
>
>>
>> When loading huge data into Ignite I see the following exception. My
>> configuration includes off-heap memory set to 0 and swap storage set to true.
>>
>>  org.apache.ignite.logger.java.JavaLogger error
>> SEVERE: Failed to process directory: /tmp/ignite/work/ipc/shmem
>> java.io.FileNotFoundException: /tmp/ignite/work/ipc/shmem/lock.file (Too
>> many open files)
>> at java.io.RandomAccessFile.open0(Native Method)
>> at java.io.RandomAccessFile.open(RandomAccessFile.java:316)
>> at java.io.RandomAccessFile.<init>(RandomAccessFile.java:243)
>> at org.apache.ignite.internal.util.ipc.shmem.IpcSharedMemorySer
>> verEndpoint$GcWorker.cleanupResources(IpcSharedMemo
>> ryServerEndpoint.java:608)
>> at org.apache.ignite.internal.util.ipc.shmem.IpcSharedMemorySer
>> verEndpoint$GcWorker.body(IpcSharedMemoryServerEndpoint.java:565)
>> at org.apache.ignite.internal.util.worker.GridWorker.run(GridWo
>> rker.java:110)
>> at java.lang.Thread.run(Thread.java:745)
>>
>>
>> Looks like it is due to high load.
>>
>> How can we avoid this exception? thanks.
>>
>> Thanks.
>>
>>
>>
> --
> Taras Ledkov
> Mail-To: tled...@gridgain.com
>
>


Re: Ignite - FileNotFoundException

2016-10-13 Thread Taras Ledkov

Please check the file descriptors OS limits.


On 13.10.2016 12:36, Anil wrote:


When loading huge data into Ignite I see the following exception.
My configuration includes off-heap memory set to 0 and swap storage set to true.


 org.apache.ignite.logger.java.JavaLogger error
SEVERE: Failed to process directory: /tmp/ignite/work/ipc/shmem
java.io.FileNotFoundException: /tmp/ignite/work/ipc/shmem/lock.file 
(Too many open files)

at java.io.RandomAccessFile.open0(Native Method)
at java.io.RandomAccessFile.open(RandomAccessFile.java:316)
at java.io.RandomAccessFile.<init>(RandomAccessFile.java:243)
at 
org.apache.ignite.internal.util.ipc.shmem.IpcSharedMemoryServerEndpoint$GcWorker.cleanupResources(IpcSharedMemoryServerEndpoint.java:608)
at 
org.apache.ignite.internal.util.ipc.shmem.IpcSharedMemoryServerEndpoint$GcWorker.body(IpcSharedMemoryServerEndpoint.java:565)
at 
org.apache.ignite.internal.util.worker.GridWorker.run(GridWorker.java:110)

at java.lang.Thread.run(Thread.java:745)


Looks like it is due to high load.

How can we avoid this exception?

Thanks.




--
Taras Ledkov
Mail-To: tled...@gridgain.com



Network Segmentation configuarion

2016-10-13 Thread Yitzhak Molko
While I didn't configure any network segmentation properties
(SegmentationResolvers, SegmentationResolveAttempts, SegmentCheckFrequency,
etc.), a node is shut down from time to time:
WARN : [discovery.tcp.TcpDiscoverySpi] Date=2016/10/13/07/42/52/009|Node is
out of topology (probably, due to short-time network problems).
WARN : [managers.discovery.GridDiscoveryManager]
Date=2016/10/13/07/42/52/009|Local node SEGMENTED: TcpDiscoveryNode
[id=4b3349f5-fda0-4e9d-a528-c8b5f4401717, addrs=[0:0:0:0:0:0:0:1%1,
10.0.0.5, 127.0.0.1], sockAddrs=[/127.0.0.1:47500,
/0:0:0:0:0:0:0:1%1:47500, /10.0.0.5:47500], discPort=47500, order=271,
intOrder=145, lastExchangeTime=1476344572001, loc=true,
ver=1.7.0#20160801-sha1:383273e3, isClient=false]
WARN : [managers.discovery.GridDiscoveryManager]
Date=2016/10/13/07/42/52/092|Stopping local node according to configured
segmentation policy.

I would like to understand what the default behavior is, since I didn't
configure any SegmentationResolvers.
I can probably set SegmentationPolicy to NOOP to avoid the node shutdown, but
I don't think it's a good idea for a node to be out of the topology for a
long time.
Is it possible to wait longer before getting the SEGMENTED event?

We are using Ignite 1.7.0 and running a cluster with 20 nodes.

Thank you,
Yitzhak
-- 

Yitzhak Molko