I agree - with a new project today you should probably start with JDK 21 (LTS); it has matured for years now. I have also observed that a couple of third-party libraries (e.g. Spring) no longer support JDK 8, so security fixes etc. are no longer provided for those.
On 18.01.2024 at 11:14
Maybe you have an instance limit of 100 on the AWS side?
> On 31.10.2019 at 19:09, codeboyyong wrote:
>
> Hi Friends,
> I have a big ignite cluster running in a private AWS-like cloud, I use
> TcpDiscoverySpi with TcpDiscoveryS3IpFinder and run with 96 nodes, it works
> fine.
> Today I redeploy it
I would not recommend it from a security perspective. Use a separate keystore
per node. Regarding the trustStore - do you have your own CA? It is not
recommended to secure both with a self-signed certificate.
> On 08.12.2018 at 06:48, Shesha Nanda wrote:
>
>
> Hi,
>
> I have enabled SSL securit
I think the more important question is why you need this. There are many
different ways to accelerate a warehouse, depending on what you want to achieve.
> On 23.11.2018 at 07:56, lk_hadoop wrote:
>
> hi all,
> I think use hive as DW and use spark do some OLAP on hive is quite common
> . S
I think you also need to look at the processes that are using the id in case of
a split-brain scenario.
A unique identifier always implies some centralized approach: either it is
generated by one central service, or by a central rule that is enforced in a
distributed fashion.
For instance, in your case you ca
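The "central rule enforced in a distributed fashion" idea can be sketched as a node-prefixed counter - a minimal illustration assuming each node is assigned a unique number out of band (the class name and the bit split are my own choices, not from this thread):

```java
import java.util.concurrent.atomic.AtomicLong;

// Illustrative sketch: each node owns a disjoint ID space by embedding its
// node number in the high bits, so no central service is needed at runtime.
// The 10-bit node id / 54-bit counter split is an arbitrary choice here.
public class NodePrefixedIdGenerator {
    private final long nodeId;                       // unique per node: the "central rule"
    private final AtomicLong counter = new AtomicLong();

    public NodePrefixedIdGenerator(long nodeId) {
        if (nodeId < 0 || nodeId >= (1L << 10))
            throw new IllegalArgumentException("nodeId must fit in 10 bits");
        this.nodeId = nodeId;
    }

    public long nextId() {
        // high bits: node id, low 54 bits: node-local monotonic counter
        return (nodeId << 54) | (counter.getAndIncrement() & ((1L << 54) - 1));
    }
}
```

IDs from different nodes can never collide as long as the node numbers are unique - and keeping that assignment unique under split brain is exactly the hard part discussed above.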
Toad itself specializes in certain DBMSs only. The reasoning is that it also
offers functionality to manage databases in the Toad UI (e.g. listing tables,
monitoring etc.), and these features are based on non-standard SQL queries
specific to the DBMS.
There are plenty of other free clients and I believe develo
This normally does not make sense, because most graph databases keep the graph
structure (not necessarily the vertex details, but vertices and edges)
in memory. As far as I know, Ignite does not provide graph data structures such
as an adjacency matrix/list.
If you have a very huge graph of which t
Maybe you can elaborate more on your use case, because usually it is not a
technical decision, but one driven by user requirements.
> On 9. Jul 2018, at 10:01, Mahesh Talreja wrote:
>
> Hi Team,
> I am working on Dot Net project and trying to implement
> Ignite.Net.
> Being new to the
> We've been running Tomcat 7 on JDK9 for over 3 months now with no other
> issues.
>
> [1] http://tomcat.apache.org/tomcat-7.0-doc/changelog.html
>
>> On Fri, Mar 30, 2018 at 11:52 AM, Jörn Franke wrote:
>> Tomcat 7 does not support JDK 9
>>
>>> On
Tomcat 7 does not support JDK 9
> On 30. Mar 2018, at 18:30, Eric Ham wrote:
>
> I'm running Tomcat 7 with Oracle JDK 9.0.4 and am attempting to use web
> session clustering based on the following pages [1] and [2] as I saw the
> 2.4.0 release notes say Java 9 is now supported. I copied the fo
You should first do a performance test with your data and your calculations
using a standard VM.
Then use this as a benchmark for non-standard VMs.
Do not rely on other benchmarks - different use cases and calculations.
In particular, do your own benchmark and do not listen to advertisement materia
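A minimal sketch of the "do your own benchmark" advice - warm up first, repeat, take the median (names and the sampling scheme are my own; for serious measurements use a dedicated harness such as JMH):

```java
import java.util.Arrays;
import java.util.function.Supplier;

// Tiny timing harness sketch: warm up the JIT, then report the median of
// several timed runs rather than trusting a single cold measurement.
public class MiniBench {
    public static <T> long medianNanos(Supplier<T> work, int warmup, int runs) {
        for (int i = 0; i < warmup; i++) work.get();   // JIT warm-up
        long[] samples = new long[runs];
        for (int i = 0; i < runs; i++) {
            long t0 = System.nanoTime();
            work.get();
            samples[i] = System.nanoTime() - t0;
        }
        Arrays.sort(samples);
        return samples[runs / 2];                      // median sample
    }
}
```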
Not exactly sure if this is planned.
However, using a key-value store as graph storage should be avoided (it will
not perform well, because graph structures need to be stored differently).
Why not use a graph database?
You get the best performance by using the right data structure for the right
pr
What is the Java source code? Most people have difficulty writing proper Java
JDBC code for bulk inserts (even for normal databases). It requires some
thought about threading, buffers and, of course, selecting the right insert
methodology etc.
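The buffering pattern behind efficient bulk inserts can be sketched independently of any database: accumulate rows and flush them in chunks, which is what PreparedStatement.addBatch()/executeBatch() does against a real JDBC connection. The class below is a hypothetical illustration of that pattern, not Ignite or JDBC API:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Consumer;

// Sketch of chunked bulk writing: rows are buffered and flushed batchSize at
// a time. The flush Consumer stands in for executeBatch() on a real database.
public class BatchWriter<T> {
    private final int batchSize;
    private final Consumer<List<T>> flush;
    private final List<T> buffer = new ArrayList<>();
    private int flushCount = 0;

    public BatchWriter(int batchSize, Consumer<List<T>> flush) {
        this.batchSize = batchSize;
        this.flush = flush;
    }

    public void add(T row) {
        buffer.add(row);
        if (buffer.size() >= batchSize) flushNow();
    }

    public void close() {                 // flush the trailing partial batch
        if (!buffer.isEmpty()) flushNow();
    }

    private void flushNow() {
        flush.accept(new ArrayList<>(buffer));
        buffer.clear();
        flushCount++;
    }

    public int getFlushCount() { return flushCount; }
}
```

With a batch size of 10, inserting 25 rows results in three round trips instead of 25 - the same saving that JDBC batching buys you.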
> On 24. Jan 2018, at 08:20, Ganesh Sarde
Probably you have an RDD with Java objects which consume a huge amount of
memory. If you use RDDs, you can try the KryoSerializer, which saves memory and
may even be faster.
> On 29. Oct 2017, at 08:23, Yair Ogen wrote:
>
> Hi,
>
> I'm trying out the ignite-spark support. I have a dataframe that was c
In theory you can use any JDBC driver in SAP BO.
If Ignite does not work, then Hive+Tez+LLAP is suitable for interactive queries
(e.g. with ORC as the underlying format).
However, your use case also sounds like you need another reporting tool, such
as SAP Lumira, Tableau, QlikView etc.
> On 13. A
I think it is still not clear what you are doing. What do you mean by using the
fs.append function? Can you please provide each query that you execute? From
where is the data inserted? Did you check all the logfiles of Hive and YARN?
Also, single inserts are highly inefficient. Try to use crea
Which database? Some databases can notify an application when they are updated.
You could read these updates with a Java application and insert them into the
Ignite cache.
> On 17. Jul 2017, at 16:45, luqmanahmad wrote:
>
> Hi, We have a legacy system, 15 years old at-least, which we are working o
make a graph with the prices of that stock over time.
>
>> On Mon, Jun 12, 2017 at 1:03 PM, Jörn Franke wrote:
>> First you need the user requirements - without them answering your questions
>> will be difficult
>>
>> > On 12. Jun 2017, at 07:08, ishan-jain
First you need the user requirements - without them answering your questions
will be difficult
> On 12. Jun 2017, at 07:08, ishan-jain wrote:
>
> I am new to BIG Data .Just been working for a month.
> I have HDFS data of stock prices. I need to perform data analysis(maybe some
> ML) and visual
Access it via Hive (Tez+LLAP) - you can connect to Hive via any analytical tool.
Hive exposes the data on IGFS as tables that can then be accessed by those
tools.
> On 9. Jun 2017, at 08:43, ishan-jain wrote:
>
> I am using hdfs to store my data. I have to implement a cache on it for
> faster u
I would not expect any of the things that you mention. A cache is not supposed
to slow down writing; this does not make sense from my point of view. Splitting
a block into several smaller ones is also not feasible - the data has to go
somewhere before splitting.
I think what you refer to is cer
That being said, it is rather easy to include the Hadoop client libraries in
your application and use any of the available InputFormats. You do not need a
Hadoop cluster to read files; they can even be read from the local file system.
This is also done by Spark and others.
> On 10. Apr 2017, at 19:
Not sure I got the picture of your setup, but the Ignite cache should be
started independently of the application and not within the application.
Aside from that, can you please elaborate more on the problem you would like to
solve - maybe with pseudocode? I am not sure if the approach you have selected
Well, the data is in memory - are you concerned that another process on the
same machine as the Ignite daemon can read it? There might be better ways than
encryption to address that. If you are concerned about swapping to disk, then
try to reduce the risk and/or encrypt the hard drive.
In the sc
Depending on the jobs, you may use Spark for these regular jobs together with
Ignite and Kafka.
Nevertheless, the focus of Storm is event-based processing. If this is not your
use case, you may want to go along with the architecture below.
Do you have fault-tolerance requirements? You may need a f
As already said, it is not really a cache use case. Aside from that, performance
tests on single nodes simply do not make sense for a distributed system.
Maybe you can describe your real use case in more detail and we can help you.
There are many areas where you can tune, and the cache is only one possibility.
Hi,
For me that looks more like something suitable for stomp.js + a messaging bus
(e.g. RabbitMQ).
> On 21 Oct 2016, at 07:08, Alexandr Porunov wrote:
>
> Hello,
>
> I am developing a messaging system with notifications via WebSockets (When
> the user 'A' sends a message to the user 'B' I need to
You have to understand what the database cache is good for: lookups of
single/few rows. This is due to the data structure of a cache; in this sense
you are using the cache wrongly. Aside from this, I think select * is really the
worst way to do a professional performance evaluation of your architecture.
>
Depends how you do the lookup. Is it by ID? Then keep the IDs as small as
possible. Lookup is fastest in a hash-map type of data structure; in a
distributed setting this can be supported by a Bloom filter.
Apache Ignite can be seen as suitable.
Depending on what you need to do (maybe your approach re
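A toy illustration of the Bloom-filter-assisted lookup mentioned above: a node can answer "definitely not here" from a small bit set without touching the data itself. Sizes and the hash mixing below are arbitrary illustrative choices, not a production design:

```java
import java.util.BitSet;

// Toy Bloom filter: add() sets a few bits per key; mightContain() returning
// false means "definitely absent", true means "possibly present".
// In a distributed lookup this lets most misses skip a network round trip.
public class ToyBloomFilter {
    private final BitSet bits;
    private final int size;
    private final int hashes;

    public ToyBloomFilter(int size, int hashes) {
        this.bits = new BitSet(size);
        this.size = size;
        this.hashes = hashes;
    }

    // derive the i-th bit index from the key with some cheap mixing
    private int index(long key, int i) {
        long h = key * 0x9E3779B97F4A7C15L + (long) i * 0xC2B2AE3D27D4EB4FL;
        h ^= (h >>> 32);
        return (int) Math.floorMod(h, (long) size);
    }

    public void add(long key) {
        for (int i = 0; i < hashes; i++) bits.set(index(key, i));
    }

    public boolean mightContain(long key) {
        for (int i = 0; i < hashes; i++)
            if (!bits.get(index(key, i))) return false;
        return true;
    }
}
```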
You need to configure IGFS in the HDFS configuration file. Then you use the
standard APIs to access HDFS files and requests will automatically go through
the cache.
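A sketch of the relevant Hadoop core-site.xml entries, based on the Ignite Hadoop Accelerator documentation of that era - verify the exact class and property names against your Ignite version:

```xml
<!-- core-site.xml: route igfs:// URIs through Ignite's Hadoop file system -->
<property>
  <name>fs.igfs.impl</name>
  <value>org.apache.ignite.hadoop.fs.v1.IgniteHadoopFileSystem</value>
</property>
<property>
  <name>fs.AbstractFileSystem.igfs.impl</name>
  <value>org.apache.ignite.hadoop.fs.v2.IgniteHadoopFileSystem</value>
</property>
```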
> On 4 Oct 2016, at 07:35, Sateesh Karuturi wrote:
>
> Hello experts,
> I am new to the Apache Ignite , and i need to access igfs
I am not sure that this will be performant. What do you want to achieve here?
Fast lookups? Then the Cassandra Ignite store might be the right solution. If
you want to do more analytics-style queries, then you can put the data on
HDFS/Hive and use the Ignite HDFS cache to cache certain partitio
Since all requests go through IGFS, they are noticed. I am not aware of any
situation where you can circumvent it, if it is configured correctly.
> On 28 Sep 2016, at 20:53, faizshah wrote:
>
> Hi,
>
> Is it possible to query IGFS to see what blocks are in cache?
>
> Additionally, how does IGFS check
You can also have the case that both nodes crash... The bottom line is that a
write loss can occur in any system. I am always surprised to hear even senior
consultants say that in a high-reliability database no write loss can occur, or
that the risk is low (think about the human factor! E.g. an admin acc
Hmm, this would require more details. You can, for example, use Ignite as an
HDFS cache for Hive, with Hive (minimum 1.2) + Tez + ORC as the SQL layer. This
is probably one of the fastest ways currently available. However, it depends on
your use case.
> On 01 Aug 2016, at 14:45, Labard wrote:
id that the speed was improved), also I looked that Ignite can
> be used as Spark chache with Ignite RDD maybe that could be another approach.
>
> Thanks
>
>> On Fri, Jun 17, 2016 at 2:29 AM, Jörn Franke wrote:
>>
>> This depends on the type of queries!
>>
This depends on the type of queries!
In any case: before you go in-memory, optimize your current data model and
exploit your current technology. I have often seen poorly designed data models
that do not leverage the underlying technology well.
> On 16 Jun 2016, at 23:20, Andrés Ival
In addition to that, you should make sure that you run JDK 8; it has a lot of
optimizations.
> On 21 Apr 2016, at 21:06, vkulichenko wrote:
>
> In most cases it's OK to have one node per machine, but you should not
> allocate more than 10-12G of heap memory, because otherwise you will likely
> ha
You seem to be looking for streaming solutions, such as Spark Streaming, Flink
Streaming, or Storm.
> On 18 Jan 2016, at 17:19, Dood@ODDO wrote:
>
> Kafka may suit your needs as a "queue" with producer/consumer and persistence
> capabilities also.
>
>> On 1/18/2016 9:52 AM, Murthy Kakarlamudi wrot
Are you using AWS? What is the ping time between the nodes?
> On 18 Jan 2016, at 06:48, Babu Prasad wrote:
>
> I did simple sequential puts to the cache. The latencies kept spiking
> intermittently to 30ms or higher.
> The test took about 30 minutes to load 1M records. I am using the s3 ip
>