I am using Apache Ignite version 1.6 and am trying to implement a security
plugin by following this post:
http://smartkey.co.uk/development/securing-an-apache-ignite-cluster/.
Since the plugin API has changed since that post was written, I am unable to
activate the plugin and configure only an authenticated a
I started Ignite on YARN and then tried to submit a sample Spark job. The
Ignite job took all the available memory, so the Spark job stayed in the
"accepted" state forever; it never had enough resources to run.
So I copied the following file to the HDFS path /tmp/ignite and set
IGNITE_XML_CONFIG=/tmp/
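For reference, Ignite's YARN deployment reads its settings from a properties file; a minimal sketch follows. The node count, CPU, and memory values, and the ignite-config.xml file name, are illustrative assumptions — IGNITE_MEMORY_PER_NODE is the knob that caps how much cluster memory Ignite's containers claim, which is what starves the Spark job in the situation above:

```properties
# cluster.properties -- illustrative values, not tuned recommendations
IGNITE_NODE_COUNT=2
IGNITE_RUN_CPU_PER_NODE=2
# Cap Ignite's memory per container (MB) so Spark still gets resources:
IGNITE_MEMORY_PER_NODE=2048
# HDFS path to the Ignite XML configuration (file name is hypothetical):
IGNITE_XML_CONFIG=/tmp/ignite/ignite-config.xml
```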
Hi,
Please properly subscribe to the mailing list so that the community can
receive email notifications for your messages. To subscribe, send an empty
email to user-subscr...@ignite.apache.org and follow the simple instructions
in the reply.
twisterius wrote
> I am trying to run ignite (1.6) via spark
Hi,
For some reason you never have more than two server nodes in the topology. So
basically your Spark RDD is split between three Ignite RDDs and you get
incorrect results.
Make sure that all nodes can connect to each other and that nothing is
blocked by firewalls. Note that by default Ignite uses multicast for node
discovery.
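When multicast is blocked (common in YARN and cloud environments), switching to a static IP finder is the usual workaround. A sketch of the discovery section of the Ignite XML configuration, with placeholder host addresses:

```xml
<bean class="org.apache.ignite.configuration.IgniteConfiguration">
  <property name="discoverySpi">
    <bean class="org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi">
      <property name="ipFinder">
        <bean class="org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder">
          <property name="addresses">
            <list>
              <!-- Replace with the real addresses of your server nodes -->
              <value>10.0.0.1:47500..47509</value>
              <value>10.0.0.2:47500..47509</value>
            </list>
          </property>
        </bean>
      </property>
    </bean>
  </property>
</bean>
```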
Yuci,
All the settings provided in the documentation are based on testing of
high-load applications. A small young generation makes sense for on-heap
caches where most of the objects are long-lived.
However, there is no silver bullet, and each use case can require its own
tuning.
-Val
P.S. The replace() method is indeed equivalent to the snippet above, but it
is atomic: it checks for the key's existence and updates the value as a
single operation.
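IgniteCache.replace() follows the JCache contract, and java.util.concurrent's ConcurrentMap has the same check-then-update semantics, so the behavior can be sketched without a running Ignite node; the non-atomic equivalent is shown for contrast:

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

public class ReplaceSemanticsDemo {
    // Non-atomic equivalent of replace(k, v): another thread could remove
    // the key between the containsKey check and the put.
    static <K, V> boolean checkThenPut(ConcurrentMap<K, V> map, K key, V val) {
        if (map.containsKey(key)) {
            map.put(key, val);
            return true;
        }
        return false;
    }

    public static void main(String[] args) {
        ConcurrentMap<Integer, String> cache = new ConcurrentHashMap<>();
        cache.put(1, "one");
        System.out.println(cache.replace(1, "ONE")); // true: key present, updated
        System.out.println(cache.replace(2, "two")); // false: key absent, not inserted
        System.out.println(cache.get(1));            // ONE
    }
}
```

The same calls on an IgniteCache behave identically, except that the check and the update are also atomic across the cluster, not just within one JVM.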
-Val
--
View this message in context:
http://apache-ignite-users.70518.x6.nabble.com/what-are-the-differences-between-pu
Hi Ranijt,
I created a ticket for this:
https://issues.apache.org/jira/browse/IGNITE-3710. Hopefully someone in the
community will pick it up.
-Val
--
View this message in context:
http://apache-ignite-users.70518.x6.nabble.com/IncompatibleClassChange-error-in-Java-Spark-Ignite-program-tp7053
Vikram,
You should add JARs to the classpath, not directories that contain JARs. You
can use wildcards as well, e.g.:
/lib/hadoop/*
(a classpath entry ending in * expands to all JAR files in that directory;
*.jar is not treated as a wildcard).
Please refer to the Java documentation for more information on how to
properly specify the classpath for your application.
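A sketch of the resulting invocation (paths and the main class name are placeholders for your deployment; the wildcard is quoted so the JVM, not the shell, expands it):

```shell
java -cp "/lib/hadoop/*:/opt/myapp/classes" com.example.MyIgniteApp
```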
-Val
Thanks for your suggestion, Vladislav.
After applying them, the same issue still happens.
There must be some logic that causes race contention between eviction
and cache partition rebalancing.
I've attached the logs for all the server nodes and all the config files.
Please help take a look, if any fur
Hi, Sergii.
I can't reproduce this issue. I have written a test:
https://github.com/EdShangGG/ignite/commit/8a9462c3a55c6c0317fb8bdc9ef26e12fdbcfb9a
It uses the same configuration as you mentioned.
Could you take a look? Maybe I am missing something.
On Tue, Aug 16, 2016 at 12:54 PM, Sergii Ty
Hi,
Can anyone provide me a solution for this exception? How should I handle it?
Thanks & Regards,
Vikram Taori
--
View this message in context:
http://apache-ignite-users.70518.x6.nabble.com/IgniteException-No-FileSystem-for-scheme-hdfs-tp7014p7150.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.
Hi,
If you think that there's a deadlock, you can increase
IGNITE_LONG_OPERATIONS_DUMP_TIMEOUT (through JVM system properties) and
networkTimeout (through the Ignite configuration XML) to several minutes.
-DIGNITE_LONG_OPERATIONS_DUMP_TIMEOUT=30
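For reference, the two knobs look roughly like this; the 300000 ms (5 minute) values below are illustrative, chosen to match the "several minutes" advice, and both settings are in milliseconds:

```xml
<!-- JVM flag: -DIGNITE_LONG_OPERATIONS_DUMP_TIMEOUT=300000 -->
<!-- Ignite configuration XML: -->
<bean class="org.apache.ignite.configuration.IgniteConfiguration">
    <property name="networkTimeout" value="300000"/>
</bean>
```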
On Thu, Aug 18, 2016 at 12:22 PM, Jason wr
Thanks, Vladislav.
I will try to reproduce this issue again; it seems that this only happens in
a big cluster.
BTW, after the new node joins, when it tries to do the partition map
exchange, there seems to be a deadlock.
On some nodes, "Failed to wait for partition map ..." and on other nodes,
"Failed t
Thank you for your response and for pointing me in a direction to think.
I solved the problem:
The issue was that NuGet did not properly update the project. Just running
update was not enough for some reason.
1. I removed all Ignite NuGet dependencies.
2. Just to be safe, I manually cleaned the debug/release
Can anyone help me fix this issue? Thanks.
--
View this message in context:
http://apache-ignite-users.70518.x6.nabble.com/Embedded-mode-ignite-on-spark-tp6942p7146.html
Hello,
You need to take care when dedicating the minimum heap size, because the JVM
does not work with unsafe (off-heap) memory directly. Data is copied from
off-heap to heap and deserialized on every cache operation (like get,
getAll, or query execution).
An OutOfMemoryError will be thrown if the heap overflows.
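A sketch of sizing the JVM accordingly; the flag values and the main class name are illustrative, not recommendations:

```shell
# Heap must hold the deserialized working set on top of Ignite's off-heap space.
java -Xms4g -Xmx4g -XX:+UseG1GC -cp "libs/*" com.example.MyIgniteApp
```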
Y