Thanks Mikael! I'll give it a try.
--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/
Hi Ilya,
thx for helping me
the column is collocated. Still, I need around 10+ servers to match the speed
of one PG server for a GROUP BY.
What do you mean by use "collocated=true"?
I thought it was only needed when you don't have collocated data and still
want the full result set.
and it's not ne
I have a cache called Person which was initially created like so:
personCache = igniteClient.getOrCreateCache("Person")
and a Person is defined as:
data class Person(
    val name: String,
    val age: Int,
    val city: String
)
I now want to be able to run SQL queries on this cache and based
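Since the question is about enabling SQL on an existing cache, here is a minimal, hedged sketch of one common approach: annotating the queryable fields and registering the indexed types in the cache configuration. The annotations and configuration here are assumptions to illustrate the idea, not code from the original post.

```kotlin
// Sketch: to make a cache SQL-queryable, declare the value type's queryable
// fields and register the types before the cache is first created.
import org.apache.ignite.Ignition
import org.apache.ignite.cache.query.SqlFieldsQuery
import org.apache.ignite.cache.query.annotations.QuerySqlField
import org.apache.ignite.configuration.CacheConfiguration

data class Person(
    @field:QuerySqlField val name: String,
    @field:QuerySqlField val age: Int,
    @field:QuerySqlField(index = true) val city: String
)

fun main() {
    val ignite = Ignition.start()
    val cfg = CacheConfiguration<Long, Person>("Person")
        // Registers Person's annotated fields as SQL columns.
        .setIndexedTypes(Long::class.java, Person::class.java)
    val personCache = ignite.getOrCreateCache(cfg)

    personCache.put(1L, Person("Alice", 30, "Paris"))
    val rows = personCache.query(
        SqlFieldsQuery("SELECT name FROM Person WHERE age > ?").setArgs(25)
    ).all
    println(rows)
}
```

Note that a cache created with just a name, as in the question, has no SQL schema; the query-enabled configuration generally has to be supplied when the cache is first created (or the table defined via CREATE TABLE instead).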
We have a situation where a development team is trying to embed an Ignite grid
into an existing large legacy (BBM) application that already runs with a high
heap (up to 10 GB). Now they're making things worse by adding offline disk
storage configuration. It's on Windows VMs with HDD disks, and by-chance
Hi!
Ignite iterates over the whole key collection and then calls remove(keys).
So with a lot of items and a distributed cluster, this can take a long
time.
You can take a look at the regular IgniteCache, which has a destroy()
method.
And use put/get for insert/check o
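As an illustration (a sketch with invented names, not code from the thread), the suggested put/get-plus-destroy() pattern could look like:

```kotlin
// Sketch: removeAll() walks every key across the cluster, while destroy()
// drops the whole cache structure at once.
import org.apache.ignite.Ignition

fun main() {
    val ignite = Ignition.start()
    val ids = ignite.getOrCreateCache<String, Boolean>("importedIds")

    // getAndPutIfAbsent returns null when the id was not seen before,
    // which doubles as the uniqueness check during import.
    val firstTime = ids.getAndPutIfAbsent("id-42", true) == null
    println(firstTime)

    // Slow for millions of entries: removes key by key across nodes.
    // ids.removeAll()

    // Fast: drop the cache entirely once the import is finished.
    ids.destroy()
}
```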
Hello!
Have you tried igniteSet.close()?
Regards,
--
Ilya Kasnacheev
Tue, Jun 4, 2019 at 12:47, yann.blaz...@externe.bnpparibas.com <
yann.blaz...@externe.bnpparibas.com>:
> Hello all ! :)
>
> We are using an Ignite HashSet in our processes to test uniqueness of IDs when
> we are importing files
Hello!
It would help to make the GROUP BY column an affinity key. Then you can use
the collocated=true setting to make this faster.
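Concretely, this could look as follows (a hedged sketch; the table, column names, and counts are illustrative, not from the thread):

```kotlin
// Sketch: declare the GROUP BY column as the affinity key so rows with the
// same value land on the same node, then flag the query as collocated.
import org.apache.ignite.Ignition
import org.apache.ignite.cache.query.SqlFieldsQuery

fun main() {
    val ignite = Ignition.start()
    val cache = ignite.getOrCreateCache<Any, Any>("ddl")

    // affinity_key must be one of the primary key columns.
    cache.query(SqlFieldsQuery(
        """
        CREATE TABLE person (
            id LONG, city VARCHAR,
            PRIMARY KEY (id, city)
        ) WITH "affinity_key=city"
        """.trimIndent()).setSchema("PUBLIC")).all

    // collocated=true lets each node finish its aggregation locally
    // instead of shipping partial groups for a second reduce step.
    val q = SqlFieldsQuery("SELECT city, COUNT(*) FROM person GROUP BY city")
        .setCollocated(true)
        .setSchema("PUBLIC")
    println(cache.query(q).all)
}
```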
Regards,
--
Ilya Kasnacheev
Tue, Jun 4, 2019 at 15:15, David :
> Hi all,
>
> I need to analyze a performance bottleneck in a complex environment.
> So I decided to split
Hey,
Have you tried these optimization approaches?
https://apacheignite-sql.readme.io/docs/performance-and-debugging
I'm not sure that splitting into multiple tables is the best way to tackle
this.
-
Denis
On Tue, Jun 4, 2019 at 9:17 AM KR Kumar wrote:
> Hi Guys - I am using ignite file bas
Have a look at the Ignite data streamer API; it is designed for data
streaming and loading big chunks of data:
https://apacheignite.readme.io/docs/streaming--cep
Or, switch the SQL engine to streaming mode for the duration of that INSERT:
https://apacheignite-sql.readme.io/docs/set
Let me know if a
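For reference, a minimal data streamer sketch (the cache name and sizes here are illustrative assumptions):

```kotlin
// Sketch: IgniteDataStreamer batches entries and routes them per node,
// which is usually much faster than individual SQL INSERTs for bulk loads.
import org.apache.ignite.Ignition

fun main() {
    val ignite = Ignition.start()
    ignite.getOrCreateCache<Long, String>("bulk")

    ignite.dataStreamer<Long, String>("bulk").use { streamer ->
        streamer.perNodeBufferSize(1024)  // tune the per-node batch size
        for (i in 0L until 1_000_000L) {
            streamer.addData(i, "row-$i")
        }
    } // close() flushes any remaining buffered entries
}
```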
Well, there are several options in addition to what you do with Kafka.
1. Set up MySQL triggers and let them push data to Ignite via JMX or
another streaming tool.
2. Use the Oracle GoldenGate integration, which works for MySQL. This tool
does this much more efficiently:
https://www.gr
It's possible to do that, but I'm not sure the community has a
ready-to-use code sample to share.
As for already available solutions, GridGain provides a so-called Hadoop
Connector that solves the data-loading task:
https://docs.gridgain.com/docs/bdb-getting-started#section-gridgain-hadoop-connector
-
Hi, I am trying to load HDFS data into a cache from Spark. It is working
in local mode but fails in spark-submit YARN mode: it tries to find the
Ignite home path on the cluster. Yes, it is true that Ignite is not
installed on the cluster, but why is it needed? The Ignite instance is
created inside my Java code.
Hi,
Without cache stores, you can try the ignite-spark integration, but it
will not provide real-time synchronization.
Also, you can try to set up some incremental data loading over JDBC, for
example via Apache Sqoop.
BR,
Andrei
Hi all,
I need to analyze a performance bottleneck in a complex environment.
So I decided to break down each component and reduce the query to the
simplest level.
*scenario:*
- only 1 table "person" with 2 columns:
  id (long), CURRENTCITYID (long), both indexed
- 10_000_000 rows
*Task:*
- make
Hello, finally I did the following trick.
I broadcast my SELECT to each node, executing it with lazy and collocated
enabled; then I use a QueryCursor and create BinaryObjects that I insert into
the cache in batches of 5000.
Which seems to be good enough.
Thanks for your help.
On May 30, 2019 at 1
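The trick described above might look roughly like this (a hedged reconstruction under assumptions: names are invented, and plain putAll stands in for the author's BinaryObject construction):

```kotlin
// Sketch of the described trick: each node runs the SELECT locally with
// lazy + collocated, then writes the rows back in batches of 5000.
import org.apache.ignite.Ignition
import org.apache.ignite.cache.query.SqlFieldsQuery
import org.apache.ignite.lang.IgniteRunnable

fun main() {
    val ignite = Ignition.start()
    ignite.compute().broadcast(IgniteRunnable {
        val local = Ignition.localIgnite()
        val cursor = local.getOrCreateCache<Any, Any>("source").query(
            SqlFieldsQuery("SELECT id, name FROM Person")
                .setLazy(true)        // stream rows instead of materializing
                .setCollocated(true)  // data is collocated per node
                .setLocal(true)       // each node reads its own partitions
        )
        val target = local.getOrCreateCache<Any, Any>("target")
        val batch = HashMap<Any, Any>(5000)
        for (row in cursor) {
            batch[row[0]!!] = row[1]!!  // columns assumed non-null
            if (batch.size == 5000) { target.putAll(batch); batch.clear() }
        }
        if (batch.isNotEmpty()) target.putAll(batch)
    })
}
```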
Hello all ! :)
We are using an Ignite HashSet in our processes to test uniqueness of IDs when we
are importing files into our system.
I have 5 or 6 HashSets which in my example can contain millions of lines. Insert
and check work great.
But in the next step, when we do not need it anymore, we clear e
Hi!
You only need persistence for the default data region; you can put your
caches in a different data region without persistence, which is what I do.
Mikael
Like:
<bean class="org.apache.ignite.configuration.DataRegionConfiguration">
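The same idea can be sketched programmatically (a hedged example; the region and cache names are invented):

```kotlin
// Sketch: keep persistence on the default region only; caches placed in a
// separate in-memory region are not persisted.
import org.apache.ignite.Ignition
import org.apache.ignite.configuration.CacheConfiguration
import org.apache.ignite.configuration.DataRegionConfiguration
import org.apache.ignite.configuration.DataStorageConfiguration
import org.apache.ignite.configuration.IgniteConfiguration

fun main() {
    val storage = DataStorageConfiguration()
    storage.defaultDataRegionConfiguration.isPersistenceEnabled = true

    val memOnly = DataRegionConfiguration()
        .setName("in-memory")
        .setPersistenceEnabled(false)
    storage.setDataRegionConfigurations(memOnly)

    val cfg = IgniteConfiguration().setDataStorageConfiguration(storage)
    val ignite = Ignition.start(cfg)

    // This cache lives in the non-persistent region.
    val cacheCfg = CacheConfiguration<Long, String>("volatileCache")
        .setDataRegionName("in-memory")
    ignite.getOrCreateCache(cacheCfg)
}
```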
Hi Igniters,
We want to enable the authentication feature for our Ignite cluster, but
currently it still requires us to enable Ignite native persistence, which is
not suitable for our use case.
Is there a way to enable persistence in IgniteConfiguration but disable it for
all the caches inside?
If
Hi guys, I am using Ignite file-based persistence. I have a cache table that
I created using the JDBC driver. This table has grown very big, so I have
split it into 100 tables to improve query performance. Now the problem:
the application has become very slow and I also see long pauses.
One t