Hi,
I am playing with the Kafka streamer for my use case and noticed that the
Kafka message becomes the value of the Ignite cache entry:
getStreamer().addData(msg.key(), msg.message());
(
https://github.com/apache/ignite/blob/master/modules/kafka/src/main/java/org/apache/ignite/stream/kafka/KafkaStreamer.java
)
Hi everyone,
I have a model Kc21; akc273 is one of its String columns.
I create the index on this column as follows:
@QuerySqlField(index = true)
private String akc273;
Then I load data into the cache from Oracle, 47,535,542 rows in total.
I execute the SQL query to get the
Hi Bob,
If you put annotations on fields, then you need to use
CacheConfiguration.setIndexedTypes. But for QueryEntity, you must
describe the entity in the configuration (QueryEntity.setIndexes) without
annotations.
Please look at [1].
If it doesn't help, provide your query configuration.
[1]:
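As a sketch of the two approaches above (the cache name is an assumption, and Kc21 is reduced to a minimal stub of the model from the earlier question):

```java
import java.util.Collections;
import java.util.LinkedHashMap;

import org.apache.ignite.cache.QueryEntity;
import org.apache.ignite.cache.QueryIndex;
import org.apache.ignite.cache.query.annotations.QuerySqlField;
import org.apache.ignite.configuration.CacheConfiguration;

public class IndexConfigSketch {
    // Minimal stub of the model class from the question.
    static class Kc21 {
        @QuerySqlField(index = true)
        private String akc273;
    }

    // Option 1: annotation-based. The @QuerySqlField(index = true) annotation
    // is picked up when the type is registered via setIndexedTypes.
    static CacheConfiguration<Long, Kc21> annotated() {
        CacheConfiguration<Long, Kc21> ccfg = new CacheConfiguration<>("kc21Cache");
        ccfg.setIndexedTypes(Long.class, Kc21.class); // key type, value type
        return ccfg;
    }

    // Option 2: QueryEntity-based, with no annotations on the model class.
    static CacheConfiguration<Long, Kc21> queryEntity() {
        QueryEntity entity = new QueryEntity(Long.class.getName(), Kc21.class.getName());

        LinkedHashMap<String, String> fields = new LinkedHashMap<>();
        fields.put("akc273", String.class.getName());
        entity.setFields(fields);

        entity.setIndexes(Collections.singleton(new QueryIndex("akc273")));

        CacheConfiguration<Long, Kc21> ccfg = new CacheConfiguration<>("kc21Cache");
        ccfg.setQueryEntities(Collections.singleton(entity));
        return ccfg;
    }
}
```

Use one approach or the other for a given cache; mixing both for the same type is where the misconfiguration usually comes from.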
Hi, actually we use a lot of caches from the cache store's writeAll().
To confirm whether that is the cause of the grid stall, we would have to
completely change our design.
Can someone confirm that this is what causes the grid to stall: calling
cache.get() from a cache store and then killing or
The server I ran the experiment on is not a powerful one. I set the duration
to 2-3 minutes. My plan is to check it with a Zeppelin query by counting all
the entries, which should not vary too much, and with
select * from cache order by timestamp desc limit 10, so that I can see the
timestamp updating.
However,
I had put my data onto 3 nodes using affinity collocation (collocating data
with data), and found that the data is not evenly distributed across the
three nodes: Node A saved 30 pieces of data, Node B saved 30 pieces of data,
and Node C saved 900 pieces of data.
The example data is as follows:
Node  AffinityKey  count
A
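Such skew typically appears when there are only a few distinct affinity key values: every entry with the same affinity key maps to the same partition, and therefore to the same node. A minimal sketch of an affinity-mapped key (class and field names are hypothetical):

```java
import org.apache.ignite.cache.affinity.AffinityKeyMapped;

// Hypothetical composite key: all entries sharing the same affKey value
// land in the same partition, and therefore on the same node.
public class EntryKey {
    private final long id; // unique per entry

    @AffinityKeyMapped
    private final String affKey; // few distinct values => skewed distribution

    public EntryKey(long id, String affKey) {
        this.id = id;
        this.affKey = affKey;
    }
}
```

With only a handful of distinct affKey values, very uneven per-node counts are expected; distribution evens out once the number of distinct affinity keys is much larger than the partition count.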
No, Val. A message cannot be converted into multiple cache entries using
a value decoder. Am I wrong?
Thanks.
On 25 October 2016 at 02:42, vkulichenko
wrote:
> Hi,
>
> There are keyDecoder and valueDecoder that you can specify when creating
> the
> KafkaStreamer.
Hi Manu,
My code does contain "return true"; I omitted that part when I copied the
code into the textbox.
--
View this message in context:
http://apache-ignite-users.70518.x6.nabble.com/How-to-continuously-subscribe-for-event-tp8438p8453.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.
Hi vkulichenko,
Thanks for the reply! I already subscribed.
What if I have multiple users querying at the same time? Does one user hold
the Ignite client while the others just wait?
--
View this message in context:
Thanks Manu.
If I understand it correctly, if the connection is closed due to a cluster
node failure, the client will automatically recreate the connection using
the discovery configuration.
And the JDBC connection does not support connection pooling.
Thanks for your help.
On 24 October 2016 at 18:12, Manu
Hi,
As you know, org.apache.ignite.internal.jdbc2.JdbcConnection is an
implementation of java.sql.Connection. It always works in client mode (this
flag is hardcoded to true when the XML configuration passed in the connection
URL is loaded) and works in read mode (SELECT only). On the same Java VM instance,
Hi team,
My cache subscribes to EVT_CACHE_OBJECT_EXPIRED, and I want the event to
trigger every minute.
But I find that the event triggers only once.
My code:
IgnitePredicate rmtLsnr = new Task();
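For context, a remote expiry listener that keeps firing needs its predicate to return true on every invocation. A sketch, assuming an Ignite instance is at hand and that EVT_CACHE_OBJECT_EXPIRED is enabled via IgniteConfiguration.setIncludeEventTypes:

```java
import org.apache.ignite.Ignite;
import org.apache.ignite.events.CacheEvent;
import org.apache.ignite.events.EventType;
import org.apache.ignite.lang.IgnitePredicate;

public class ExpiryListenerSketch {
    static void listen(Ignite ignite) {
        // Remote filter: runs on the nodes where entries expire.
        IgnitePredicate<CacheEvent> rmtLsnr = evt -> {
            System.out.println("Expired key: " + evt.key());
            return true; // returning false unsubscribes after the first event
        };

        // No local listener (null); subscribe only to expiry events.
        ignite.events().remoteListen(null, rmtLsnr,
            EventType.EVT_CACHE_OBJECT_EXPIRED);
    }
}
```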
Typo correction:
Thanks Manu.
>
> if i understand it correctly, if connection is closed due to cluster node
> failure, client will automatically recreate connection using discovery
> configuration.
>
> and *jdbc connection does support connection pool*.
>
> thanks for your help.
>
>
>
>
>
> On 24
You are right: if the connection is closed due to cluster *client* node
disconnection, the client will automatically recreate the connection using the
discovery configuration. Pooling is also supported, but N pooled instances of
org.apache.ignite.internal.jdbc2.JdbcConnection for the same URL on the same
Java VM will use
If you use the Ignite JDBC driver, to ensure that you always get a valid
Ignite instance before calling an Ignite operation, I recommend using a
DataSource implementation that validates the connection before calls and
creates a new one otherwise.
For common operations with an Ignite instance, I use this method
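The method itself is cut off above; as a hypothetical sketch of the validate-before-use pattern it describes (the connection URL, configuration path, and timeout are assumptions):

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.SQLException;

// Hypothetical holder: hands out a valid Ignite JDBC connection,
// recreating it if the previous one has gone stale (e.g. after a
// cluster restart).
public class IgniteConnectionHolder {
    private static final String URL =
        "jdbc:ignite:cfg://file:///path/to/ignite-client.xml"; // assumed URL

    private Connection conn;

    public synchronized Connection getConnection() throws SQLException {
        // isValid() pings the connection with a short timeout;
        // on failure we open a fresh one instead of returning it.
        if (conn == null || !conn.isValid(2))
            conn = DriverManager.getConnection(URL);

        return conn;
    }
}
```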
Hi,
You need to return true on apply method to continuously listen.
--
View this message in context:
http://apache-ignite-users.70518.x6.nabble.com/How-to-continuously-subscribe-for-event-tp8438p8442.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.
Hi,
1) You can exclude the slf4j-log4j12 dependency from ignite-rest-http,
like this:
compile('org.apache.ignite:ignite-rest-http:1.6.0') {
    exclude group: "org.slf4j", name: "slf4j-log4j12"
}
2) Ignite 1.6 supports H2 version 1.3. You need to use the latest version 1.7
of Ignite, which supports H2
I apologize, yes it is. It does have more information than the previous post.
The last suggestion from the group was to change the key size. I created
certificates with 1024-bit keys and still have the same issue.
--
View this message in context:
My current dependency looks like [1] below (I get an error pop-up in Eclipse
when I use "name" in the exclude as you suggested), but I still get the same
exception [2] mentioned below:
1. dependencies {
       compile("org.apache.ignite:ignite-rest-http:1.7.0") {
           exclude
Hi Jeff,
Please properly subscribe to the mailing list so that the community can
receive email notifications for your messages. To subscribe, send empty
email to user-subscr...@ignite.apache.org and follow simple instructions in
the reply.
Jeff Jiao wrote
> Every time when I start an Ignite
Hi,
There are keyDecoder and valueDecoder that you can specify when creating the
KafkaStreamer. Is that what you're looking for?
-Val
--
View this message in context:
http://apache-ignite-users.70518.x6.nabble.com/Kafka-Streamer-tp8432p8447.html
Sent from the Apache Ignite Users mailing list
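A sketch of wiring decoders into the KafkaStreamer as Val describes (the generic signature and setters follow the Ignite 1.x Kafka module and may differ across versions; the cache name, topic, thread count, and consumer config are assumptions):

```java
import kafka.consumer.ConsumerConfig;
import kafka.serializer.StringDecoder;
import kafka.utils.VerifiableProperties;

import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteDataStreamer;
import org.apache.ignite.stream.kafka.KafkaStreamer;

public class KafkaStreamerSketch {
    static void start(Ignite ignite, ConsumerConfig consumerCfg) {
        IgniteDataStreamer<String, String> stmr = ignite.dataStreamer("myCache");

        KafkaStreamer<String, String, String> kafkaStreamer = new KafkaStreamer<>();
        kafkaStreamer.setIgnite(ignite);
        kafkaStreamer.setStreamer(stmr);
        kafkaStreamer.setTopic("myTopic");
        kafkaStreamer.setThreads(4);
        kafkaStreamer.setConsumerConfig(consumerCfg);

        // Decoders turn raw Kafka message bytes into the cache key/value types
        // before addData(msg.key(), msg.message()) is called internally.
        kafkaStreamer.setKeyDecoder(new StringDecoder(new VerifiableProperties()));
        kafkaStreamer.setValueDecoder(new StringDecoder(new VerifiableProperties()));

        kafkaStreamer.start();
    }
}
```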