I'm not sure I understand you properly.
However, in any case you can use a code block like the one below to get an
instance of a cache related to a specific class:
String cacheName = object.getClass().getSimpleName().toLowerCase();
IgniteCache<Object, Object> cache = ignite.cache(cacheName);
The caches have to
Hi Kristian,
Thanks for reporting this. I've opened an issue in the Apache Ignite JIRA:
https://issues.apache.org/jira/browse/IGNITE-3011
As a workaround, as you already noted, you can pass
-Djava.net.preferIPv4Stack=true to the JVM on startup.
Another solution that may work in your case is to set a
Yes, what IDE generates should be fine.
-Val
--
View this message in context:
http://apache-ignite-users.70518.x6.nabble.com/multiple-value-key-tp4138p4209.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.
You can use loadCache method to load any set of data from the DB. It takes
optional list of arguments that you can use to parametrize the loading
process (e.g., use them to provide time bounds and query the DB based on
these bounds).
You can load based on any criteria; there are no limitations, and
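As a minimal sketch of the above (the cache name, value type, and the meaning of the arguments are assumptions for illustration; the configured CacheStore decides how to interpret the arguments in its loadCache() implementation):

```java
// Sketch: trigger a cache load with optional arguments (here, time bounds).
// Requires a running Ignite node with a CacheStore configured for the cache.
Ignite ignite = Ignition.ignite();
IgniteCache<Long, Person> cache = ignite.cache("personCache");

long from = java.sql.Timestamp.valueOf("2000-01-01 00:00:00").getTime();
long to   = java.sql.Timestamp.valueOf("2012-12-31 23:59:59").getTime();

// The varargs are passed through to CacheStore.loadCache(clo, args...),
// which can use them to bound the DB query.
cache.loadCache(null, from, to);
```

The first argument is an optional filter predicate; passing null loads everything the store returns for the given bounds.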
Hi Binti,
Can you please properly subscribe to the mailing list so that the community
can receive email notifications? Here is the instruction:
http://apache-ignite-users.70518.x6.nabble.com/mailing_list/MailingListOptions.jtp?forum=1
bintisepaha wrote
> We have a use case where multiple
No, I know that. I want to know whether we can use an UPDATE SQL query with
Ignite or not.
Because I want to do something like this: I have one table in each of 2
different caches, and I want to update some column entries in one table (and
in the cache) by performing a cross-cache SqlFieldsQuery join. How do I do that?
Thanks for your quick response Val!
I'll test thoroughly and update here.
--Kamal
On Thu, Apr 14, 2016 at 11:57 PM, vkulichenko wrote:
> Kamal,
>
> I'm not sure I understood what you're trying to achieve. When you use cache
> API, all affinity mappings are done
Kamal,
I'm not sure I understood what you're trying to achieve. When you use cache
API, all affinity mappings are done automatically, so you don't need to
worry about this.
In your particular case, the client is not aware of affinity and essentially
sends a request to a random node, so the cache
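For reference, a minimal sketch of inspecting the affinity mapping explicitly (the cache name and the key value below are assumptions; this requires a running node):

```java
// Sketch: ask Ignite which node a given key maps to. Normally you don't
// need this, since the cache API routes requests automatically.
Ignite ignite = Ignition.ignite();
Affinity<String> aff = ignite.affinity("personCache");

// Primary node for affinity key "A"; collocated computations can be routed
// there with IgniteCompute.affinityRun()/affinityCall() instead.
ClusterNode primary = aff.mapKeyToNode("A");
```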
Kamal,
Generally, the binary format is recommended for all use cases.
OptimizedMarshaller implements the legacy serialization format; it is also
compact and efficient, but it requires classes to be available on all nodes
and does not support dynamic schema changes. If there are any limitations in
the binary format, most
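A minimal sketch of explicitly selecting the legacy marshaller in code (when no marshaller is set, the binary format is used; configuration style is illustrative):

```java
// Sketch: switch a node to the legacy OptimizedMarshaller. Leave the
// marshaller unset to use the recommended binary format instead.
IgniteConfiguration cfg = new IgniteConfiguration();
cfg.setMarshaller(new OptimizedMarshaller());

Ignite ignite = Ignition.start(cfg);
```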
Hi Vladimir,
Not really, we do not want to store historical data in the cache, or maybe
cache it for a few hours and then evict it. But if recent data is missing in
the cache, then yes, we want to cache it. So it would require some custom
caching logic to decide which data to cache. So it seems like storing
1. Yes, this is possible.
2. In a key-value storage, each entry has to have a unique key. Entries with
equal keys will overwrite each other.
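The same overwrite semantics can be illustrated with a plain java.util.HashMap, which follows the same key-value contract:

```java
import java.util.HashMap;
import java.util.Map;

public class OverwriteDemo {
    public static void main(String[] args) {
        Map<Integer, String> map = new HashMap<>();
        map.put(1, "first");
        map.put(1, "second"); // equal key: the previous value is replaced
        System.out.println(map.size()); // prints 1
        System.out.println(map.get(1)); // prints second
    }
}
```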
-Val
Hi Shaomin,
1. Yes, this is a per-node setting. So if there are two nodes on one box,
it's possible that 20G will be allocated on this box. You should make sure
that this limit correlates with the amount of physical memory you have.
2. In 1.5, IgniteCache.metrics() returns values for the local node
No, SQL is read-only now. Support for UPDATE and INSERT statements is on the
roadmap, though.
-Val
Hi all,
I have a cluster of 2 Ignite server nodes + 1 client node (non-Ignite).
I've collocated the data residing in the Ignite cache based on an affinity key.
e.g.
Server 1 - contains all the data related to the affinity key (A, C, E)
Server 2 - contains all the data related to the affinity key (B,
Val,
Can you explain, with use cases, when to use the Binary, Optimized,
GridOptimized and JDK marshallers?
--Kamal
On Tue, Apr 5, 2016 at 3:41 AM, edwardkblk
wrote:
> Yes, it works with OptimizedMarshaller. Thank you.
>
>
>
Thanks it works now!
Could you tell me more about it? How do I do it?
Could you tell me how to do it?
The Thrift server may read data from a filesystem (for example, Parquet files).
Please try to reply to my questions:
1. When my key is a timestamp, is it possible to load into cache memory all
rows from year 2000 to year 2012?
2. When my key is a timestamp and an id (Integer), is it possible to load
into cache memory all rows from year 2000 to year 2012? Note that I don't set
the value of the key id
IGFS is a Hadoop-compatible file system. If ThriftServer doesn't have
strong dependencies on some HDFS-specific features, then yes, it could be
used instead of HDFS.
Could you please provide a more detailed explanation of your use case with
ThriftServer?
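For context, pointing a Hadoop client at IGFS is typically done through core-site.xml; the igfs instance name, host, and port below are illustrative assumptions:

```xml
<configuration>
  <!-- Register the IGFS Hadoop file-system implementation. -->
  <property>
    <name>fs.igfs.impl</name>
    <value>org.apache.ignite.hadoop.fs.v1.IgniteHadoopFileSystem</value>
  </property>
  <!-- Default FS URI: igfs://<igfs-name>@<host>:<port>/ (example values). -->
  <property>
    <name>fs.default.name</name>
    <value>igfs://igfs@localhost:10500/</value>
  </property>
</configuration>
```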
On Wed, Apr 13, 2016 at 3:17 PM, tomk
Hi Vij,
Storing hot recent data in cache, and historical data in persistent store
sounds like a perfectly reasonable idea.
If you decide to store historical data in HDFS, then you should be able to
construct HDFS path from the key because store interface accepts keys to
store/load data. If this
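A very rough sketch of that idea (the class name, the key-to-path scheme, and the byte-array value type are all hypothetical; real code would need a properly configured Hadoop FileSystem and more careful error handling):

```java
import java.io.IOException;
import javax.cache.Cache;
import javax.cache.integration.CacheLoaderException;
import javax.cache.integration.CacheWriterException;

import org.apache.hadoop.fs.FSDataInputStream;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.ignite.cache.store.CacheStoreAdapter;

/** Hypothetical store: one HDFS file per cache key under /ignite-store/<key>. */
public class HdfsCacheStore extends CacheStoreAdapter<String, byte[]> {
    private final FileSystem fs; // configured Hadoop FileSystem, injected elsewhere

    public HdfsCacheStore(FileSystem fs) { this.fs = fs; }

    // The illustrative part: construct the HDFS path from the cache key.
    private Path pathFor(Object key) { return new Path("/ignite-store/" + key); }

    @Override public byte[] load(String key) {
        try (FSDataInputStream in = fs.open(pathFor(key))) {
            return in.readAllBytes();
        } catch (IOException e) { throw new CacheLoaderException(e); }
    }

    @Override public void write(Cache.Entry<? extends String, ? extends byte[]> e) {
        try (FSDataOutputStream out = fs.create(pathFor(e.getKey()), true)) {
            out.write(e.getValue());
        } catch (IOException ex) { throw new CacheWriterException(ex); }
    }

    @Override public void delete(Object key) {
        try { fs.delete(pathFor(key), false); }
        catch (IOException e) { throw new CacheWriterException(e); }
    }
}
```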
Hi,
I'm trying to execute a cross-cache SQL fields query with a join, and it
executes fine as long as I don't call setArgs and pass an argument. When I
need to pass an argument in the WHERE clause, it gives an error: Failed
to execute local query: GridQueryRequest [reqId=1, pageSize=1024,
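For comparison, a parameterized cross-cache query normally looks like this sketch (cache names, table/column names, and the argument value are assumptions):

```java
// Sketch: cross-cache join with a positional argument bound via setArgs().
// The second cache is referenced by its quoted schema (cache) name.
IgniteCache<Long, Order> orders = ignite.cache("orderCache");

SqlFieldsQuery qry = new SqlFieldsQuery(
    "select o.id, p.name " +
    "from Order o, \"personCache\".Person p " +
    "where o.personId = p.id and o.amount > ?");

qry.setArgs(100); // bound to the single '?' placeholder

for (List<?> row : orders.query(qry).getAll())
    System.out.println(row);
```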
Thanks Vladimir !!
The drawback of using HDFS as a persistent store behind an Ignite cache is
how we will take care of appending a single key-value pair to an HDFS file.
Ideally we should use some NoSQL store or RDBMS as a persistent backup behind
the Ignite cache and then run some scheduled batch to
Hi, Ravi!
Not yet, but we have an issue for that:
https://issues.apache.org/jira/browse/IGNITE-962
You can track it in JIRA.
On Thu, Apr 14, 2016 at 6:20 PM, Ravi Puri
wrote:
> I want to know, do you support JSON-based data being cached, and is it
> implemented?
>
>
>
> --
>
I want to know, do you support JSON-based data being cached, and is it implemented?
I was seeing quite substantial instabilities in my newly configured 1.5.0
cluster, where messages like this would pop up, resulting in the termination
of the node:
java.net.UnknownHostException: no such interface lo
at java.net.Inet6Address.initstr(Inet6Address.java:487) ~[na:1.8.0_60]
at
You are right, Val, I did not read all the examples. Hopefully the problem is solved.
Cheers,
Kevin
-----Original Message-----
From: vkulichenko [mailto:valentin.kuliche...@gmail.com]
Sent: April 14, 2016 14:34
To: user@ignite.apache.org
Subject: Re: problem of using object as key in cache configurations
We already have such examples. For example, CacheClientBinaryPutGetExample.
But I agree with Andrey that this message is very confusing. I will fix it
shortly.
-Val
Thank you, planning a test session next week!