Hi,
I have not seen Ignite containerized within PCF, but I am not aware of any
limitation that would prevent it.
Maybe someone from the community knows more about running Ignite on PCF?
--
View this message in context:
http://apache-ignite-users.70518.x6.nabble.com/PCF-tile-for-Ignite-tp12466p14034.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.
Hi,
One way to re-join a node to the topology is to stop it and restart.
The node cannot rejoin the cluster by itself, because it is not possible to
merge data between a single segmented node (separated due to a GC pause) and
the rest of the topology.
After the node left the cluster, some cache operations (put, remove) could
be and
Hi,
The method will be invoked on the server side, but only on the particular
nodes where the data entries are stored. A batch of data is submitted to the
receiver either when the buffer (IgniteDataStreamer#perNodeBufferSize()) is
full, or periodically by the flush frequency
(IgniteDataStreamer#autoFlushFrequency()) or
Hi,
What do you want to check? A process cannot detect by itself whether it is
frozen.
Also, you should set an alarm on this metric:
^-- CPU [cur=100%, avg=1.38%, *GC=1078.23%* ]
It means your node suffers from a lot of garbage (which leads to
stop-the-world pauses).
If you want to know when the node is
Hi,
cache.containsKey() never locks the key (I checked this in Ignite versions
1.7 through 2.0), but you can use a simple cache.get() instead.
That method (cache.get) will acquire a lock for the particular key, even if
it returns null.
Hi,
Could you please explain what you are doing?
Do you get an exception on start?
Usually it is enough to download Ignite, extract it and run ignite.(bat|sh).
If necessary, modify default-config.xml.
Hi,
VisorCmd computes the average size of the first few cache rows and then
multiplies it by the total element count. It does not give the exact cache
size, only an estimate.
Furthermore, in the latest Ignite version 2.0 the memory architecture
changed radically[1], and estimating by elements is no longer possible.
If you saw
Hi,
If I understand correctly, you mean Ignite 1.x.
Whether swap is used depends on your cache configuration: you can specify
the whole chain, where data is put on the Java heap, then evicted to
off-heap, and finally swapped to disk. You should specify an eviction policy
for the rules from on-heap to off-heap,
Hi,
You cannot use an Ignite alert with a custom event, because alerts work by
time only.
But you can use a local listener for specific events[1], or extend
EventStorageSpi (IgniteConfiguration#setEventStorageSpi) to handle all
local events.
[1]:
Hi,
I see that your grid suffers from long-running operations:
[19:20:19,285][WARNING][grid-timeout-worker-#63%null%][GridCachePartitionExchangeManager]
Found long running cache future [startTime=19:18:55.302,
curTime=19:20:19.280, fut=GridDhtAtomicSingleUpdateFuture
Hi,
If Ignite cannot write to its home directory, there will be side effects.
Could you please provide a thread dump from this server node?
Hi,
If an Ignite server fails at some moment, you prevent data loss only if you
have enough nodes with backups.
But if your whole cluster is down, you will surely get a failure in the web
application.
Siva Annapareddy wrote
> Application nodes store web Session in their own JVM
Real user can
Hi,
Ignite supports a simple continuous mapping API[1] for a stream of data.
I do not understand why you cannot filter the data in the map job or
implement your own interface. But I think you can run into collocation
issues with this approach.
[1]:
If you configure sslContextFactory, then the client communicates through a
secure socket, because the client works as a node of the cluster (using
CommunicationSPI).
rick_tem,
Why not use the SSL/TLS configuration[1]?
In this case, all nodes (including visorcmd) will communicate through a
secure socket.
jackbaru,
From my point of view, the places flagged in the report are not relevant to
security. This is internal usage of the standard platform
First of all, if you have only one backup, you will surely lose data when
you kill 3 nodes at once (you can only do that safely if you kill them one
by one and wait for rebalancing to complete after each).
Could you please attach a full log file from at least one node where the
remap failed messages are present?
How often do sessions time out?
Was node fe6409cb-88a2-43da-9eb7-9b17cf5debcb still alive when the message
appeared?
This looks like the topology is breaking down.
Please provide all logs from each node.
Hi,
I think it is possible for java.util.Date, but you should format the date in
a form that the H2 database understands.
Alternatively, you can pass the constants as query parameters, which is more
flexible.
Dynamic SQL parameters are useful when more than one query is generated from
one query text, but
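As a stdlib-only sketch of the first option (the helper name is illustrative, not from the thread): H2 parses timestamp literals in the form yyyy-MM-dd HH:mm:ss, so a java.util.Date can be formatted accordingly before it is embedded in the query text.

```java
import java.text.SimpleDateFormat;
import java.util.Date;

public class H2DateFormat {
    // H2 understands TIMESTAMP literals shaped like yyyy-MM-dd HH:mm:ss.
    static String toH2Timestamp(Date d) {
        return new SimpleDateFormat("yyyy-MM-dd HH:mm:ss").format(d);
    }

    public static void main(String[] args) {
        // The epoch, rendered in the local time zone.
        System.out.println(toH2Timestamp(new Date(0L)));
    }
}
```

Using a bound query parameter instead of an inlined literal avoids the formatting question entirely.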
Hi,
I do not think any of these places in the code is a security concern.
Ignite by itself is not a cryptographic framework, but if you want to
implement security logic you are always able to use your own random
algorithm.
It would help if you explained what you are worried about.
Please
Hi,
You will get the message "Task was not deployed or was redeployed" when the
topology is unstable.
You need to look at heap utilization and GC activity.
I recommend trying the G1 garbage collector on JDK 1.8 with options like
these:
-server
-XX:SurvivorRatio=12
-XX:ConcGCThreads=8
-XX:+AlwaysPreTouch
I have reproduced your issue.
It seems to be a Java heap issue.
If the test runs on one physical machine the issue does not reproduce,
because the cost of network communication is small. But when you spread the
nodes across separate computers, the overhead grows and heap space
becomes
Hi,
I will try to check your example in the near future.
But I want to confirm how to do that:
1) I should start an empty Ignite server on a dedicated machine.
2) After the empty server starts, I need to run your example with n greater
than 1000.
Is that the correct scenario?
Hi,
I have found the issue[1].
Unfortunately, it has not been resolved yet.
[1]: https://issues.apache.org/jira/browse/IGNITE-2738
Actually, I was not right (it should be the remapper).
Could you provide a reproducer?
Hi,
How many backups (o.a.i.configuration.CacheConfiguration#setBackups) do you
use?
If your cluster does not have backups, a batch will not be remapped until
rebalancing has finished.
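For reference, a hedged Spring XML sketch of a cache configured with one backup (the cache name is illustrative):

```xml
<bean class="org.apache.ignite.configuration.CacheConfiguration">
    <!-- Illustrative cache name. -->
    <property name="name" value="myCache"/>
    <property name="cacheMode" value="PARTITIONED"/>
    <!-- One backup copy per partition, so a batch can be remapped to a backup. -->
    <property name="backups" value="1"/>
</bean>
```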
Hi,
The DataStreamer packs all cache operations into batch requests to the grid
server nodes. If a batch was not executed correctly on a particular node, it
will be remapped to another one.
The maximum remap count is 32 (DFLT_MAX_REMAP_CNT).
Hi,
Could you try to exclude the peer class loader (set
IgniteConfiguration#setPeerClassLoadingEnabled to false), put the jar (with
the closure implementation) on each node, and retry?
It looks like some kind of race, but I could not reproduce it without a full
working example.
If it is easy, please, can you
Hi,
Up to the latest version, the Ignite DataStreamer with
IgniteDataStreamer#allowOverwrite set to false does not work correctly on an
unstable topology.
If you want the failover SPI to be used, you can set the property
(IgniteDataStreamer#allowOverwrite) to true.
Hi,
1) Yes, the default failover SPI configuration should fit. There is no need
to configure it specially in most cases.
2) Why do you think so? If that is the case, what are the results when you
try to execute a task while the node is out of the topology?
Can you provide a reproducer?
3) That information is often contained in the logs
Hi,
The message appears when an operation tries to execute on a stopping (or
starting) node.
If the message was thrown, it means the node is out of the cluster.
Tasks which would have been executed on that node should be routed to
another one (by the failover SPI).
Please explain where you saw the problem.
Hi,
If you use a port range, then this can consume more time, because the
discovery SPI tries to connect to every port in the range.
But if you specify a single port, the connection should be fast.
Why do you think the delay occurs in the discovery phase?
Hi,
How are the Ignite nodes mapped to physical machines?
Where there are more Ignite nodes, CPU utilization is greater, isn't it?
Where were the SQL queries generated (on one of the server nodes, or
elsewhere)?
I think CPU utilization should be the same across nodes, because SQL is
executed as a map-reduce task.
Hi,
Even if you use an SQL query, the whole data set will be loaded to the
client side (in the reduce phase) and sorted locally.
That behavior changes only in version 2.0[1].
[1]: https://issues.apache.org/jira/browse/IGNITE-3013
Hi,
You can do it using a ScanQuery[1] with a specific filter, and then reorder
the results with your own logic.
You cannot search in a nested collection with an SQL query. To do this, you
should store Person and Phone separately and link them using foreign keys.
You can then join these two tables in your SQL query[2].
[1]:
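A hedged sketch of the join approach, assuming Person and Phone live in separate caches and Phone carries a personId foreign key (the cache and column names are illustrative, not from the thread):

```sql
SELECT p.name, ph.number
FROM "personCache".Person p
JOIN "phoneCache".Phone ph
  ON ph.personId = p.id
WHERE p.name = ?;
```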
Hi,
Ignite has an internal ThreadPoolExecutor implementation, which is used in
Ignite compute, and it has an internal BlockingQueue with Integer#MAX_VALUE
capacity.
Look at IgniteThreadPoolExecutor[1].
[1]:
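As an illustration of why tasks queue up rather than being rejected, here is a plain java.util.concurrent sketch (not Ignite's actual class) of a fixed-size pool backed by a queue with Integer.MAX_VALUE capacity:

```java
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

public class UnboundedQueuePool {
    static int runTasks(int n) throws InterruptedException {
        // Two worker threads, effectively unbounded queue.
        ThreadPoolExecutor pool = new ThreadPoolExecutor(
            2, 2, 0L, TimeUnit.MILLISECONDS,
            new LinkedBlockingQueue<>(Integer.MAX_VALUE));
        AtomicInteger done = new AtomicInteger();
        for (int i = 0; i < n; i++)
            pool.submit(done::incrementAndGet); // never rejected, just queued
        pool.shutdown();
        pool.awaitTermination(10, TimeUnit.SECONDS);
        return done.get();
    }

    public static void main(String[] args) throws Exception {
        System.out.println(runTasks(100)); // all 100 tasks complete
    }
}
```

Because the queue never fills, submissions block memory rather than triggering a RejectedExecutionException.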
Hi,
Ignite compute[2] is the general mechanism for executing distributed tasks.
If you mean the executor service[1], it is only a convenience API.
Distributed tasks will stay in the queue if the public thread pool is full.
If you need to increase the thread count in the public pool, you can do
that[3].
...
I do not see any memory issues in the model, but you should always remember
the per-entry cache overhead[1]:
each entry will be around 200 bytes bigger than its raw data.
[1]:
https://apacheignite.readme.io/docs/capacity-planning-bak#calculating-memory-usage
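As a quick back-of-the-envelope sketch of that overhead (the entry count and value size below are made-up numbers, not from the thread):

```java
public class CapacityEstimate {
    // Rough estimate: payload size plus the ~200 byte per-entry overhead.
    static long estimateMb(long entries, long avgValueBytes) {
        long overheadBytes = 200;
        return entries * (avgValueBytes + overheadBytes) / (1024 * 1024);
    }

    public static void main(String[] args) {
        // Hypothetical workload: 10M entries of ~300 bytes each.
        System.out.println(estimateMb(10_000_000L, 300) + " MB"); // prints "4768 MB"
    }
}
```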
Hi,
Could you please provide a reproduction example?
Hi,
1) No, the cache name does not affect memory utilization.
2) Yes, the key can
3) Ignite stores the description of a class only once, and keeps a hash for
each object; class or field name length does not affect memory.
4) Off-heap may be cheaper in memory consumption, because in some cases two
And take note: "DataStreamer" and "Ignite Compute" use the same thread pool
(the public thread pool). Hence you should execute work with the streamer
asynchronously (otherwise you risk thread starvation).
Hi,
Currently one Ignite node executes the operation in only one thread. You
should use one thread per partition, i.e. a scan query[1] with the
"setPartition" method (instead of local entries).
Use affinity[2] to determine the partitions on each node and run your task
for each of them.
You can slightly increase
Hi,
Ignite does not have snapshot isolation in transactions, although the
community seriously wants to implement it[1].
If you want something like that, I think you can add a Date field to the
object (with an index on it) and execute SQL against a concrete timestamp.
[1]:
Hi,
I recommend using a PESSIMISTIC, READ_COMMITTED transaction (without any
explicit locks) for each account. Also look at the comment from Val[1].
You are doing something like this already, as I understand from your first
post.
Why is that method not suitable?
[1]:
Hi,
How did you measure the size of the data (370MB)?
If you use ONHEAP_TIERED mode, entries can be stored in both states
(serialized and deserialized), see issues [1, 2], but you can avoid this
behavior if the server does not have the object classes on its classpath.
[1]: https://issues.apache.org/jira/browse/IGNITE-3347
[2]:
Hi,
Look at the method (o.a.i.cache.query.SqlFieldsQuery#setPageSize), which
allows you to set the SQL page size.
If you iterate through the cursor (the result of cache.query()) or execute
"getAll", then you get all of the result data.
The client gets data page by page if the cursor is used (and sends a request
for the next
Hi,
Yes, you are right.
You should preload[1] the data before the SQL query executes.
[1]: https://apacheignite.readme.io/docs/data-loading
Hi,
I think the INFO log level would be enough (from all nodes).
Hi,
You should enable cache statistics:
...
to get correct values from the metrics.
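For reference, a hedged Spring XML sketch of enabling cache statistics (the cache name is illustrative, and this is not the elided snippet from the original mail):

```xml
<bean class="org.apache.ignite.configuration.CacheConfiguration">
    <property name="name" value="myCache"/>
    <!-- Statistics are off by default for performance reasons. -->
    <property name="statisticsEnabled" value="true"/>
</bean>
```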
Hi Anil,
I tried to reproduce your case, but got the expected behavior (indexes are
used).
I slightly modified your SQL for my cache configuration:
select * from (
(select p.id, p.name from "simple_cache".Person p join table(joinId INT =
(1, 3)) i on p.id = i.joinId)
UNION
(select p.id, p.name
Hi Alsex,
Can you please provide the class com.testlab.api.inf.dao.RepositoryDao?
It may have a serialization issue for a particular class.
Hi,
Ignite is fully compatible with JDK 7.
I think you are trying to run Ignite on a version lower than 7.
Class file version 51.0 corresponds to JDK 1.7.
Please make sure that Ignite runs on the correct JDK (check your
environment).
Hi Alex,
I think these threads execute inside thread pools, and the number of threads
is always restricted by the pool size[1].
You can configure the sizes manually:
[1]:
https://apacheignite.readme.io/v1.7/docs/performance-tips#configure-thread-pools
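A hedged Spring XML sketch of tuning those pool sizes (the values are arbitrary examples):

```xml
<bean class="org.apache.ignite.configuration.IgniteConfiguration">
    <!-- Example sizes only; the defaults are usually derived from the CPU count. -->
    <property name="publicThreadPoolSize" value="16"/>
    <property name="systemThreadPoolSize" value="16"/>
</bean>
```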
Hi,
You can use CacheConfiguration#setIndexedTypes(Key.class, Val.class) with
annotations, or (not both) a QueryEntity and configure the indexes in it.
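A hedged sketch of the QueryEntity alternative in Spring XML (the key/value types and field names are illustrative):

```xml
<bean class="org.apache.ignite.configuration.CacheConfiguration">
    <property name="queryEntities">
        <list>
            <bean class="org.apache.ignite.cache.QueryEntity">
                <property name="keyType" value="java.lang.Integer"/>
                <!-- Hypothetical value class. -->
                <property name="valueType" value="com.example.Person"/>
                <property name="fields">
                    <map>
                        <entry key="name" value="java.lang.String"/>
                    </map>
                </property>
                <property name="indexes">
                    <list>
                        <bean class="org.apache.ignite.cache.QueryIndex">
                            <constructor-arg value="name"/>
                        </bean>
                    </list>
                </property>
            </bean>
        </list>
    </property>
</bean>
```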
Hi,
It should not be very slow.
If you do not use indexes, then SQL will work slowly.
You can try to use QueryEntity[1] without indexes on the fields.
How many indexes do you use?
Could you please provide these classes?
Hi,
If your caches are off-heap, they basically do not consume heap.
Could you please take a heap dump and try to provide a brief analysis of
which objects consume the heap?
Hi Tracyl,
You can use "invoke"; the method is the most efficient way to retrieve part
of a value from the cache. For a withKeepBinary off-heap cache, "invoke"
takes a lock on the entry and can manipulate the off-heap pointer wrapped in
a BinaryObject.
Hi Duke,
I don't know why one node's failure leads to the failure of all the others.
That is not normal behavior.
What is the reason the first node failed, and the others?
Can you increase failureDetectionTimeout, if you think this is a network
issue (or the network timeouts in the discovery SPI)?
Could I look at the logs from each node?
Hi,
It is difficult to understand what you meant.
Could you please provide a full working example?
Hi Anil,
You should not pay attention to the version warning message in a nightly
build or a manually built version, because the message is correct only for
versions which have already been released (it is a simple comparison with
the last released version on the site
http://ignite.run/update_status_ignite.php).
When
Hi,
In my view, you are trying to do something strange. You got an exception at
compile time; how do you expect to assign a new reference to a return value?
Please create a ticket in the Ignite Jira[1], as Roman said.
[1]: https://issues.apache.org/jira/browse/IGNITE
Hi,
It looks like the node cannot establish a connection over CommunicationSPI.
Please check whether the communication ports (by default 47100, with a range
of 100) are open on the remote server machine.
Additionally, see this article[1].
[1]:
I think 100 nodes is a large cluster, but if you configure each machine
identically (file system included, look at this document[1]) and debug the
setup, it can be done.
100 nodes is a difficult task, because the cluster will have many points of
failure, but on the other hand the cluster will have high failover
tolerance.
It is a duplicate of the topic:
http://apache-ignite-users.70518.x6.nabble.com/java-lang-ClassNotFoundException-Failed-to-peer-load-class-tt8778.html
Hi,
Please properly subscribe to the user list so that we can see your questions
as soon as possible and provide answers on them quicker. All you need to do
is send an email to "user-subscr...@ignite.apache.org" and follow simple
instructions in the reply.
The class is part of hadoop-common
Hi,
Ignite fully supports the JCache API: it
Also, yes, Ignite 1.7 depends on H2 1.4, while Ignite 1.6 depends on H2 1.3.
Hi,
Could you please provide the full stack trace of
Hi Anil,
I doubt that these fields can be serialized correctly:
private Scan scan;
private QueryPlan queryPlan;
You will need to get rid of these fields in the serialized object.
Hi Binti,
Hi,
This looks like a lock between GridCacheWriteBehindStore and
GridCachePartitionExchangeManager.
Could you give a working example of this?
If not, I will try to reproduce it tomorrow.
Hi,
This is a duplicate.
I have given an answer in the thread[1].
[1]:
http://apache-ignite-users.70518.x6.nabble.com/How-to-get-the-load-status-of-the-Ignite-cluster-tc8232.html
Hi,
Yes, you are right. In your case, the data is not loaded from the persistent
store before it is obtained from the cache.
If you invoke wn.ignite.test.Get between these phases:
8. Start both nodes
> Run Get Client: it gets 10 records from MyCache
9. Run Node3
you get the expected behavior.
You can
Hi,
Could you please provide the full call stack of the exception?
Hi,
Discussion has been moved to another thread.
http://apache-ignite-users.70518.x6.nabble.com/Enter-Lock-is-not-working-tc8040.html#a8142
Hi,
If you cannot get the lock, it means the lock has already been acquired.
Could you please provide thread dumps from all cluster nodes?
Hi,
Lockups may be caused by some locks not being released.
The discussion has moved to another thread.
Let the community know about the solution if you have solved it already...
Hi,
This can happen if you change data in the database (for example, using an
SQL update) after it has been loaded into the Ignite cache.
Ignite does not load data that has already been loaded (identification is by
key).
If you use setReadThrough and setWriteThrough, you need to modify the data
through the cache only.
junyoung.kang
Hi,
As far as I know, you can configure CacheJdbcPojoStore so that pairs of
different object types are saved to different tables. But in that case the
key types must be different.
All nodes of the cluster need to have the value classes on the classpath, or
to configure
As for loading data from the persistent store, you can
Hi,
If SSL is configured, the communication and discovery SPIs will work through
secure sockets.
All discovery and communication ports are used; which port do you mean?
ctalluri wrote
> If we enable SSL, what is the default SSL port for ignite
>
> -Thanks in advance
Hi,
Unfortunately, it is not supported in the current version.
But look at this ticket[1]; it is planned for the next Ignite version.
[1]: https://issues.apache.org/jira/browse/IGNITE-735
Hi,
I think it depends on the transaction type (optimistic / pessimistic).
A pessimistic transaction acquires a lock on the key at the first access
attempt, and the later operation (getAndPutIfAbsent) on this key will only
be able to proceed after the previous transaction commits.
In an optimistic transaction everything occurs
I think you are right.
Could you please create an issue for Ignite in the Apache Jira[1]?
Until then, you can use something like this as a workaround:
Ignite ig = Ignition.ignite();
CacheStoreBackend backend = ig.services().service(CacheStoreBackend.SERVICE_NAME);
[1]:
Hello,
Please properly subscribe to the mailing list so that the community can
receive email notifications for your messages. To subscribe, send empty
email to user-subscr...@ignite.apache.org and follow simple instructions in
the reply.
The reason for segmentation in most cases is delays:
Hello,
Thanks for the information.
I have created matching issue:
Please properly subscribe to the mailing list so that the community can
receive email notifications for your messages. Here is the instruction:
http://apache-ignite-users.70518.x6.nabble.com/mailing_list/MailingListOptions.jtp?forum=1
Look at the article[1]. You need to have a TxManagerFactory
Hello,
The replace method is equivalent to this code snippet (performed
atomically):
if (cache.containsKey(key)) {
    cache.put(key, value);
    return true;
} else {
    return false;
}
Have a look at the JCache Javadoc:
http://ignite.apache.org/jcache/1.0.0/javadoc/javax/cache/Cache.html#replace(K,V)
The discussion continues here:
http://apache-ignite-users.70518.x6.nabble.com/IgniteDataStreamer-can-t-wirte-data-to-ignite-cluster-tt7012.html
Hello,
Could you please provide a full example where the hang
vdpyatkov wrote
> Hello,
>
> Unfortunately I was not able to get access to logs files.
> I think you need to check IGNITE_HOME environment variable.
>
> You can find information about in the article
> https://apacheignite.readme.io/docs/getting-started in *
bearrito wrote
> Group,
>
> Are there best practices or recommendations on the number of caches one
> should create?
>
> In particular I'm looking to create append-only log type functionality
> with evictions. Suppose for instance I had many financial instruments and
> I wanted to store the last
Hello,
My first answer was corrupted, so I am sending it again.
I would use the asynchronous operation support, and cancel the IgniteFuture
if needed.
For example:
1) Create a cluster group by attribute:
ClusterGroup cg = cluster.forAttribute("ROLE", "worker");
2) Make the compute instance asynchronous:
IgniteCompute c = igniteClient.compute(cg).withAsync();
fs12345 wrote
> Hi all,
>
> Let me start by saying I am quite a novice with Apache Ignite.
>
> Here is my question : I want to be able to monitor and cancel tasks
> running or that are going to run on worker nodes of an Ignite Cluster (say
> for example if one task takes too long to complete),
Hello dilipramji,
You can use TcpDiscoveryS3IpFinder on AWS, or TcpDiscoveryCloudIpFinder, for
adding nodes to the cluster.
Also see these articles:
https://apacheignite.readme.io/docs/aws-config
https://apacheignite.readme.io/docs/generic-cloud-configuration
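A hedged Spring XML sketch of the S3-based discovery setup (the bucket name and credentials bean are placeholders):

```xml
<bean class="org.apache.ignite.configuration.IgniteConfiguration">
    <property name="discoverySpi">
        <bean class="org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi">
            <property name="ipFinder">
                <bean class="org.apache.ignite.spi.discovery.tcp.ipfinder.s3.TcpDiscoveryS3IpFinder">
                    <!-- Placeholder AWS credentials bean and bucket name. -->
                    <property name="awsCredentials" ref="aws.creds"/>
                    <property name="bucketName" value="my-ignite-bucket"/>
                </bean>
            </property>
        </bean>
    </property>
</bean>
```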
dilipramji wrote
> Hi,
>
> Does
Hello,
Can you check the ping between the data centers?
If the network delay can be long, you can increase the failure detection
timeout (use
org.apache.ignite.configuration.IgniteConfiguration#setFailureDetectionTimeout).
In addition, you need to make sure the communication ports are reachable on
all nodes (by
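A hedged Spring XML sketch of raising that timeout (the value is an example; the default is 10 seconds):

```xml
<bean class="org.apache.ignite.configuration.IgniteConfiguration">
    <!-- Tolerate longer cross-datacenter network delays. -->
    <property name="failureDetectionTimeout" value="30000"/>
</bean>
```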
Hello,
I cannot reproduce the issue.
In my test case, all data entries are saved immediately after calling
Ignition.stopAll(true).
If you have an example which demonstrates the behavior, can you please
provide the source code?
Hello Ravi,
Around three to four years of Java experience is usually enough.
Experience with Spring XML will also help, but much more important is an
understanding of multithreaded environments, patterns and approaches.
I think you will cope.
Hello,
Yes, cache metrics do not work by default, for performance reasons. You can
enable them through the cache configuration property
javax.cache.configuration.MutableConfiguration#setStatisticsEnabled (or in
the XML configuration).