INFO 2017-06-13 20:36:22 [localhost-startStop-1] org.apache.ignite.internal.IgniteKernal%svip - [[OS: Linux 4.1.12-61.1.18.el7uek.x86_64 amd64]]
INFO 2017-06-13 20:36:22 [localhost-startStop-1] org.apache.ignite.internal.IgniteKernal%svip - [[PID: 7608]]
[20:36:22] VM information: Java(TM) SE
Muthu,
Look at the IgniteUuid#randomUuid() method. I think it will provide the needed
guarantees for your case.
On Mon, Jun 12, 2017 at 9:53 PM, Muthu wrote:
> Thanks Nikolai... this is what I am doing... not sure if this is too
> much... what do you think... the goal is to make
Hi Raymond,
I think your use case fits well into traditional Ignite model of
write-through cache store with backing database.
Why do you want to avoid a DB? Do you plan to store data on disk directly
as a set of files?
Pavel
On Mon, Jun 12, 2017 at 2:14 AM, Raymond Wilson
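The write-through model Pavel mentions can be sketched in plain Java. This is just the pattern, with a Map standing in for the database; it is not Ignite's actual CacheStore API, and the class name is hypothetical:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Minimal write-through sketch: every put goes to the backing store
// synchronously before the in-memory copy is updated, so the store and
// the cache cannot disagree after a successful put.
class WriteThroughCache<K, V> {
    private final Map<K, V> cache = new ConcurrentHashMap<>();
    private final Map<K, V> backingStore; // stands in for the database

    WriteThroughCache(Map<K, V> backingStore) {
        this.backingStore = backingStore;
    }

    void put(K key, V value) {
        backingStore.put(key, value); // write to the "database" first
        cache.put(key, value);        // then refresh the in-memory copy
    }

    V get(K key) {
        // read-through on a miss: fall back to the backing store
        return cache.computeIfAbsent(key, backingStore::get);
    }
}
```

In Ignite itself this role is played by a CacheStore implementation plugged into the cache configuration with write-through enabled.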
Hi,
Explicit configuration is not required; the connector starts up automatically
and listens on port 8080.
Which HTTP code did you get when accessing it via one of the methods
from the API? For example, curl http://:8080/ignite?cmd=version.
Also, try accessing the REST API directly from the server, for
Hi,
We need a little bit more info. What are the expected and actual values?
Do JDBC, ODBC and the API show the same value?
Best Regards,
Igor
On Sat, Jun 10, 2017 at 1:19 AM, pingzing wrote:
> Anyone with experience? A simple query against a BigDecimal value field
> doesn't
>
Hello,
Having to explain the choice of Ignite internally, I wonder what the
"official" position of Apache Ignite is towards Storage Class Memory and using
GPUs.
On the SCM story, I guess it is just another way of allocating/freeing
memory in a kind of off-heap mode but on disk.
On the GPUs
I got it! If you do it yourself, don't be shy about sharing your experience
with the community. ;)
On Mon, Jun 12, 2017 at 7:23 PM, Antonio Si wrote:
> Thanks Nikolai. I am wondering if anyone has done something similar.
>
> Thanks.
>
> Antonio.
>
> On Mon, Jun 12, 2017 at 3:30 AM,
Hi,
Please properly subscribe to the mailing list so that the community can
receive email notifications for your messages. To subscribe, send an empty
email to user-subscr...@ignite.apache.org and follow the simple instructions in
the reply.
This example works for me without problems. Did you change
Muthu,
Please create separate threads on the user list for the mentioned problems: code
generation (if needed), Spring transactions, JDBC.
I'm lost in this "big" number of text lines.
--
Alexey Kuznetsov
Hi,
Try adding /usr/local/lib/ to the LD_LIBRARY_PATH environment variable.
Best Regards,
Igor
On Tue, Jun 13, 2017 at 4:54 PM, Riccardo Iacomini <
riccardo.iacom...@rdslab.com> wrote:
> Hello,
> I am trying to access Ignite 2.0 using the ODBC driver. I've followed the
> guide
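Igor's suggestion can be applied like this; /usr/local/lib is the path from this thread's setup, so adjust it to wherever libignite-odbc.so actually lives on your machine:

```shell
# Prepend the driver's directory to the dynamic linker search path
# so the ODBC driver manager can find the Ignite driver library.
export LD_LIBRARY_PATH="/usr/local/lib${LD_LIBRARY_PATH:+:$LD_LIBRARY_PATH}"
echo "$LD_LIBRARY_PATH"
```

To make this permanent, add the export line to your shell profile (e.g. ~/.bashrc).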
Raymond,
Then Ignite Persistent Store is exactly for your use case. Please refer to this
discussion on the dev list:
http://apache-ignite-developers.2346864.n4.nabble.com/GridGain-Donates-Persistent-Distributed-Store-To-ASF-Apache-Ignite-td16788.html#a16838
Also it was covered a bit in that
Hi Nikolai,
I looked at the code for this method earlier (reproduced below)... the UUID
is generated once (via VM_ID) per cluster node JVM, and the atomic long again
is local to the cluster node JVM (unlike the *igniteAtomicSequence*). Do
you think it's still okay to use it... I thought we at least
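For reference, the scheme described above can be mimicked with the JDK alone. The class below is a hypothetical illustration of the idea (one random UUID per JVM plus a JVM-local atomic counter), not the actual IgniteUuid source:

```java
import java.util.UUID;
import java.util.concurrent.atomic.AtomicLong;

// Illustration of the IgniteUuid scheme: the UUID part makes ids from
// different JVMs distinct, the counter part makes ids within one JVM
// distinct, so no cluster-wide coordination is needed.
final class VmScopedUuid {
    private static final UUID VM_ID = UUID.randomUUID(); // once per JVM
    private static final AtomicLong COUNTER = new AtomicLong();

    private final UUID globalId;
    private final long localId;

    private VmScopedUuid(UUID globalId, long localId) {
        this.globalId = globalId;
        this.localId = localId;
    }

    static VmScopedUuid random() {
        return new VmScopedUuid(VM_ID, COUNTER.incrementAndGet());
    }

    @Override
    public String toString() {
        return globalId + "-" + Long.toHexString(localId);
    }
}
```

Note that, unlike igniteAtomicSequence, the counter here is purely local; global uniqueness comes only from the per-JVM UUID part.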
Hi Pavel,
It’s a little complicated. The system is essentially a DB in its own right;
actually it’s an IMDG a bit like Ignite, but developed 8 years ago to
fulfill a need we had.
Today, I am looking to modernize that system and rather than continuing to
build and maintain all the core
Denis,
Ah! Looks very interesting. Thanks for the pointer.
Raymond.
*From:* Denis Magda [mailto:dma...@apache.org]
*Sent:* Wednesday, June 14, 2017 9:41 AM
*To:* user@ignite.apache.org
*Subject:* Re: Write behind using Grid Gain
Raymond,
Then Ignite Persistent Store is exactly for
Denis,
Thanks for clearing it out. That might be the reason for the memory
difference.
--
View this message in context:
http://apache-ignite-users.70518.x6.nabble.com/Off-Heap-On-Heap-in-Ignite-2-0-0-tp13548p13683.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.
I am getting the following exception when starting Ignite in server mode (2.0),
and from then on Ignite stops all the caches.
Exception in thread "main" class org.apache.ignite.IgniteException:
Attempted to release write lock while not holding it [lock=7eff50271580,
state=00022639
Okay, I started using the file system API that Ignite provides. I start an
Ignite node and load data through it.
Now I have to put the data into a normal Ignite cache to use SQL queries on
it. The Ignite node has already been started from the program in IGFS mode. How
will I create a normal Ignite
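For the "normal cache" part of the question, an SQL-queryable cache is usually declared with indexed types in the node configuration. A minimal Spring XML sketch, where the cache name and the com.example.Stock value class are placeholders:

```xml
<bean class="org.apache.ignite.configuration.CacheConfiguration">
    <!-- Hypothetical cache name -->
    <property name="name" value="stockCache"/>
    <!-- Key/value classes registered here become visible to SQL queries -->
    <property name="indexedTypes">
        <list>
            <value>java.lang.String</value>
            <value>com.example.Stock</value>
        </list>
    </property>
</bean>
```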
Hi,
I'm attempting to run a query that does a join between 2 large tables (one
with 6m rows, another with 80m rows). In the query plan I see a join "on
1=1", and separately I see filters for my join under the "where" clause. I'm
not sure if this is standard output in the query plan or if it's doing
By starting another Ignite node I mean an Ignite node with a different
config. As seen in the Ignite data fabric example of streaming words from a
file, we also have to use some event types.
On Wed, Jun 14, 2017 at 10:38 AM, Ishan Jain wrote:
> Okay i start using file system api
Megha,
The objects are stored in the deserialized form both in heap (AI 1.x) and
off-heap (AI 2.0). The difference is that when an object is in the Java heap, we
need to create an extra wrapper object around it so that it can be used by an
application running on top of the JVM. Plus there might be some
Hello Steve,
Starting with Apache Ignite 2.0, the project is no longer considered an
in-memory-only technology.
The new virtual memory architecture that sits at the core of the platform
allows considering Ignite a memory-first (memory-optimized) computational
platform that distributes
What is the size of the data?
To me it looks more like ORC or Parquet would be enough.
I do not see specific in-memory requirements here.
> On 12. Jun 2017, at 09:59, Ishan Jain wrote:
>
> I need to just get the price of a stock which is stored in hdfs with
>
Hi, Valentine.
Thanks for your response! It's a shame I overlooked DML.
The size would be very large, as stock prices would be streamed every hour.
On Tue, Jun 13, 2017 at 12:05 PM, Jörn Franke wrote:
> What is the size of the data?
> For me it looks more that orc or parquet would be enough.
>
> I do not see here specific in-memory requirements.
>
Hi,
We are performing parallel data loading into memory using multiple
instances of Ignite (from Kafka to Ignite) on a single node. While caching, CPU
is not getting utilized above 70%. How can we improve this?
Thanks
I basically need remote SQL query access from tools like Tableau or
Zeppelin, and fast MapReduce functions.
Hi,
I subscribed to the mailing list now. Thank you.
I did not change anything related to the project; I was just trying to execute
it as-is.
I wanted to run IgniteNodeStartup and I get this error.
I mentioned the h2 jar to let you know that the specific version I was using is
the latest as
Thank you for the reply Igor,
the error just changed into:
*pyodbc.Error: ('01000', "[01000] [unixODBC][Driver Manager]Can't open lib
> 'Apache Ignite' : file not found (0) (SQLDriverConnect)")*
The Ignite driver seems to be installed. Here's my /etc/odbcinst.ini:
[Apache Ignite]
>
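The odbcinst.ini content is cut off above; for comparison, a minimal entry typically looks like the sketch below. The Driver path is an assumption (a default /usr/local/lib install, matching Igor's earlier advice); adjust it to wherever libignite-odbc.so really is:

```ini
[Apache Ignite]
Description = Apache Ignite ODBC driver
Driver      = /usr/local/lib/libignite-odbc.so
```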