Hi Sergi,
Yes, I refreshed after node start, and the JDBC URL is using the current node id. Is
there anything wrong in my JDBC configuration? I started one server and one client
on my desktop, and both versions are 1.7.0.
Ming
From: Sergi Vladykin [mailto:sergi.vlady...@gmail.com]
Sent: Wednesday, September
Hi,
You may need to hit the `refresh` button, because the Console may start before
the caches are initialized.
Sergi
2016-09-14 8:52 GMT+03:00 :
> Hi All,
>
>
>
> After enabling the H2 web console, I can't see the tables as in the guide. Is there
> anything I missed in CacheConfiguration? But I can see the cache is created in
Hi All,
After enabling the H2 web console, I can't see the tables as in the guide. Is there
anything I missed in CacheConfiguration? But I can see in the log that the cache is created.
Mine: [inline screenshot]
Guide: [inline screenshot]
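For reference, tables show up in the H2 debug console only for types registered for SQL. A minimal sketch of such a registration (the value class and cache name here are hypothetical); the console itself is enabled with the IGNITE_H2_DEBUG_CONSOLE system property:

```xml
<!-- Sketch only: org.example.Person is a hypothetical class with @QuerySqlField fields. -->
<bean class="org.apache.ignite.configuration.CacheConfiguration">
    <property name="name" value="personCache"/>
    <!-- Registers key/value type pairs for SQL, so a PERSON table appears in H2. -->
    <property name="indexedTypes">
        <list>
            <value>java.lang.Long</value>
            <value>org.example.Person</value>
        </list>
    </property>
</bean>
```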
Hi, @matt!
I see in stack trace "Caused by: java.util.ConcurrentModificationException".
Maybe your application is concurrently modifying your POJOs while Ignite is in the
middle of marshaling them?
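The failure mode being described can be reproduced in plain Java, independent of Ignite: a fail-fast iterator (as used when walking a POJO's collection field) throws ConcurrentModificationException if the collection is mutated mid-iteration. A minimal, self-contained demo:

```java
import java.util.ArrayList;
import java.util.ConcurrentModificationException;
import java.util.List;

public class CmeDemo {
    // Simulates a "marshaler" walking a POJO's collection field
    // while another part of the application mutates it.
    static boolean serializeWhileMutating(List<Integer> field) {
        try {
            for (Integer i : field)   // iteration, like field-by-field marshaling
                field.add(i + 1);     // structural modification during iteration
            return false;
        } catch (ConcurrentModificationException e) {
            return true;              // the fail-fast iterator detected the race
        }
    }

    public static void main(String[] args) {
        List<Integer> field = new ArrayList<>(List.of(1, 2, 3));
        System.out.println(serializeWhileMutating(field)); // prints "true"
    }
}
```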
On Wed, Sep 14, 2016 at 9:04 AM, matt wrote:
> Looks like it's a problem with the object I'm sending in. I swa
I use a Lucene index.
By the way,
what do these two files mean?
--
View this message in context:
http://apache-ignite-users.70518.x6.nabble.com/Cache-Problems-tp7477p7697.html
Sent from the Apache Ignite Users mailing list archive at Nabbl
Looks like it's a problem with the object I'm sending in. I swapped out the
object (a custom POJO) with an Integer, and all works fine. Is there a way I
can find out what it is about this object that's causing the marshaling
error? Any sample code around that shows how to run the marshaler
stand-alone
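One way to exercise marshaling in isolation (a sketch, not an official recipe; it assumes a started node, and MyPojo is a placeholder for the failing class) is the binary API:

```java
import org.apache.ignite.Ignite;
import org.apache.ignite.Ignition;
import org.apache.ignite.binary.BinaryObject;

// Sketch: converts a POJO to Ignite's binary form without touching a cache,
// so marshaling errors surface directly.
public class MarshalCheck {
    public static void main(String[] args) {
        try (Ignite ignite = Ignition.start()) {
            BinaryObject bin = ignite.binary().toBinary(new MyPojo()); // MyPojo is hypothetical
            System.out.println(bin.type().typeName());
        }
    }
}
```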
Hi,
I have Ignite (version 1.7.0) set up to do streaming, with an
autoFlushFrequency of 15 (for now). In the logs, I'm getting this during
processing; any ideas on where this is coming from?
2016-09-13T17:31:52,580 - ERROR
[grid-data-loader-flusher-#55%null%:Slf4jLogger@112] - Runtime error caught
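For context, a minimal sketch of how autoFlushFrequency is typically set (the cache name and values are assumptions; note the parameter is in milliseconds, so 15 is a very aggressive flush interval):

```java
import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteDataStreamer;
import org.apache.ignite.Ignition;

// Sketch only: assumes a running cluster and an existing cache named "myCache".
public class StreamerSketch {
    public static void main(String[] args) {
        try (Ignite ignite = Ignition.start();
             IgniteDataStreamer<Integer, String> streamer = ignite.dataStreamer("myCache")) {
            streamer.autoFlushFrequency(15); // milliseconds between automatic flushes
            for (int i = 0; i < 1_000; i++)
                streamer.addData(i, "value-" + i);
        } // close() flushes any remaining buffered data
    }
}
```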
Thanks Denis, it worked! You made my day.
I appreciate your efforts and suggestions.
Regards
Abhishek
On Tue, Sep 13, 2016 at 11:43 AM, Denis Magda wrote:
> Looks like the CLASSPATH env variable is ignored on this Linux distribution's
> side. You should refer to the basic documentation from Oracle (JR
Yes, in theory I could know all the changes I made. But in practice it would make
the application logic so much more complicated!
E.g., in one part of a transaction I create an object, and in some other part I
execute a query which should include this object in the result cursor, taking into
account WHERE and ORDER BY. D
Got it. It's not possible then. I would recommend revisiting your logic
and checking if it's possible to avoid using these features. I don't think
nested and/or autonomous transactions will be supported in Ignite in the
foreseeable future.
-Val
--
View this message in context:
http://apache-ign
Hi,
In the current implementation the write-behind store can lose writes even if only
one node fails. This can be improved by adding backup queues [1], but it's
not implemented yet. Of course, this will not help if you lose more nodes
than the number of backups you have (in this case data in memory is also l
Hi,
I'm using a wrapper to start the nodes as service.
--
View this message in context:
http://apache-ignite-users.70518.x6.nabble.com/Ignite-Atomic-Long-tp7706p7720.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.
Hi,
This just means that the thread was interrupted. You should check the log
before the exception to see what could be a reason. It also looks like
you're using some kind of wrapper, probably it tried to stop the process?
-Val
--
View this message in context:
http://apache-ignite-users.70518
Hi Josh,
I think Binarylizable is the right choice. But you're right that you will
have to write all the fields manually in this case. It works similarly to
Externalizable in plain Java serialization.
-Val
--
View this message in context:
http://apache-ignite-users.70518.x6.nabble.com/Custom-
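A minimal sketch of what Val describes (the class and fields here are hypothetical): as with Externalizable, every field is read and written by hand, which also gives you the hook for the one field needing custom handling.

```java
import org.apache.ignite.binary.BinaryObjectException;
import org.apache.ignite.binary.BinaryReader;
import org.apache.ignite.binary.BinaryWriter;
import org.apache.ignite.binary.Binarylizable;

// Hypothetical class; all field handling is manual, like Externalizable.
public class Order implements Binarylizable {
    private long id;
    private String note;

    @Override public void writeBinary(BinaryWriter writer) throws BinaryObjectException {
        writer.writeLong("id", id);
        // custom handling for one field, e.g. normalization before write
        writer.writeString("note", note == null ? "" : note.trim());
    }

    @Override public void readBinary(BinaryReader reader) throws BinaryObjectException {
        id = reader.readLong("id");
        note = reader.readString("note");
    }
}
```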
If you're executing a query from the same transaction where you updated the
data, I'm pretty sure you can find a workaround, because you know everything
about the updates made within the transaction. The transactional SQL feature is
mostly designed to avoid dirty reads in case a query transaction and u
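The kind of workaround described above can be sketched in plain Java (names are hypothetical; this ignores WHERE/ORDER BY re-evaluation, which the thread notes is the hard part): keep a local map of this transaction's writes and overlay it on the committed query result.

```java
import java.util.Map;
import java.util.TreeMap;

// Hypothetical sketch: overlay uncommitted, in-transaction changes on top of
// a query result, since the application knows every update it made.
public class TxOverlay {
    // committedResult: what SQL returned; pendingPuts: keys/values written in this tx.
    static Map<Long, String> overlay(Map<Long, String> committedResult,
                                     Map<Long, String> pendingPuts) {
        Map<Long, String> merged = new TreeMap<>(committedResult);
        merged.putAll(pendingPuts); // in-tx writes win over committed data
        return merged;
    }

    public static void main(String[] args) {
        Map<Long, String> fromQuery = Map.of(1L, "old", 2L, "two");
        Map<Long, String> pending = Map.of(1L, "new", 3L, "three");
        System.out.println(overlay(fromQuery, pending)); // prints {1=new, 2=two, 3=three}
    }
}
```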
Looks like the CLASSPATH env variable is ignored on this Linux distribution's side.
You should refer to the basic documentation from Oracle (the JRE/JDK owner) or use
the ignite.sh script in the following way:
add MAIN_CLASS env variable referring to Demo class (export
MAIN_CLASS=org.apache.ignite.schema.Demo)
a
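The step above, sketched as shell commands (paths are assumptions; the class name is taken from this thread):

```shell
# Assumes IGNITE_HOME points at your Ignite installation.
export MAIN_CLASS=org.apache.ignite.schema.Demo
"$IGNITE_HOME"/bin/ignite.sh
```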
It depends on when someone from the community takes over this task and gets it
done. Judging by the previous discussion, we should expect the feature to appear in
Ignite at the beginning of next year.
—
Denis
> On Sep 13, 2016, at 3:15 AM, akaptsan wrote:
>
> I would say it's a critical problem.
You can work with Ignite using the JDBC [1] or ODBC [2] drivers to execute SQL
queries. DML and DDL should be part of the product soon. Until they are
available, you need to use the Ignite Cache API to execute transactions.
Note, however, that nested transactions are not a part of Ignite.
[1] https://apach
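A minimal sketch of the JDBC route (the URL format and config path are assumptions for the 1.x driver; check the driver documentation for your version):

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

// Sketch only; "Person" and the config path are hypothetical.
public class JdbcSketch {
    public static void main(String[] args) throws Exception {
        Class.forName("org.apache.ignite.IgniteJdbcDriver");
        try (Connection conn = DriverManager.getConnection(
                 "jdbc:ignite:cfg://file:///path/to/ignite-config.xml");
             Statement stmt = conn.createStatement();
             ResultSet rs = stmt.executeQuery("SELECT name FROM Person")) {
            while (rs.next())
                System.out.println(rs.getString(1));
        }
    }
}
```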
Hello Josh,
I think BinarySerializer will match your case.
Look at the article [1] and the javadoc [2].
[1]:
http://apacheignite.gridgain.org/docs/binary-marshaller#configuring-binary-objects
[2]:
https://ignite.apache.org/releases/1.5.0.final/javadoc/org/apache/ignite/binary/BinarySerializer.html
On Tue
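A sketch of the BinarySerializer approach (class and field names are hypothetical): unlike Binarylizable, the serializer lives outside the domain class.

```java
import org.apache.ignite.binary.BinaryObjectException;
import org.apache.ignite.binary.BinaryReader;
import org.apache.ignite.binary.BinarySerializer;
import org.apache.ignite.binary.BinaryWriter;

// Hypothetical domain class; it stays untouched by serialization concerns.
class Invoice {
    long id;
    String note;
}

// External serializer for Invoice.
public class InvoiceSerializer implements BinarySerializer {
    @Override public void writeBinary(Object obj, BinaryWriter writer) throws BinaryObjectException {
        Invoice inv = (Invoice)obj;
        writer.writeLong("id", inv.id);
        writer.writeString("note", inv.note);
    }

    @Override public void readBinary(Object obj, BinaryReader reader) throws BinaryObjectException {
        Invoice inv = (Invoice)obj;
        inv.id = reader.readLong("id");
        inv.note = reader.readString("note");
    }
}
```

To wire it up, create a BinaryTypeConfiguration for the type, call setSerializer(new InvoiceSerializer()) on it, and register it through BinaryConfiguration on the IgniteConfiguration, as the linked article shows.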
Hello,
Unfortunately, Ignite does not support anything from the list (nested
transactions, autonomous transactions, DDL) yet.
I think that if you want to use the key-value approach, the changes to your
logic will be significant.
On Tue, Sep 13, 2016 at 12:50 PM, akaptsan wrote:
> We have OLTP system based on Orac
I have an object I need to cache which has more than 100 fields. One
particular field needs some custom handling when being (de)serialized. I've
considered implementing Binarylizable; however, I'd rather not manage all
of the fields via the read and write binary methods.
Where should I look in th
You can use installer from here: [1].
What kind of problem do you mean?
[1] -
https://github.com/isapego/ignite/tree/ignite-3868/modules/platforms/cpp/odbc/install
Best Regards,
Igor
On Tue, Sep 13, 2016 at 4:57 PM, amitpa wrote:
> Also is this a problem with Visual Studio 2015 and Ignite and
Hello, everyone.
Recently, while using service injection in custom CacheStore
implementation, we faced the problem that with startup deployment of
services and caches through Spring, the services are not injected into
CacheStore. It doesn't happen when we deploy our services and caches
manual
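For reference, the kind of injection being described might look like this (the store, service interface, and service name are hypothetical):

```java
import javax.cache.Cache;
import org.apache.ignite.cache.store.CacheStoreAdapter;
import org.apache.ignite.resources.ServiceResource;

// Hypothetical service interface, deployed as an Ignite service named "myService".
interface MyService {
    String lookup(Long key);
}

public class MyCacheStore extends CacheStoreAdapter<Long, String> {
    @ServiceResource(serviceName = "myService")
    private MyService svc; // expected to be injected when the store is initialized

    @Override public String load(Long key) {
        return svc.lookup(key);
    }

    @Override public void write(Cache.Entry<? extends Long, ? extends String> e) {
        // persist e.getKey()/e.getValue() to the underlying store
    }

    @Override public void delete(Object key) {
        // remove key from the underlying store
    }
}
```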
Also, is this a problem with Visual Studio 2015 and Ignite that doesn't
happen when we use other VS versions like 2010?
--
View this message in context:
http://apache-ignite-users.70518.x6.nabble.com/Trouble-with-Using-Ignite-1-8-ODBC-Driver-tp7656p7707.html
Sent from the Apache Ignite Users m
Hi,
I'm having some trouble with an atomic long inside my compute job. I'm getting
the following exception once in a while:
FINEST|4532/0|Service AntheusMatchServerNode|16-09-13
10:51:23|[13:51:23,382][SEVERE][pub-#46%null%][GridCacheAtomicLongImpl]
Failed to add and get:
o.a.i.i.processors.datastru
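For context, a typical atomic-long usage inside a job looks roughly like this (the counter name is hypothetical; the call can fail with the error above if the node is being stopped mid-operation, e.g. by a service wrapper):

```java
import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteAtomicLong;
import org.apache.ignite.Ignition;

// Sketch only; assumes the job runs on a started node.
public class CounterSketch {
    static long bump() {
        Ignite ignite = Ignition.ignite(); // default local node
        // create = true: creates the atomic long with initial value 0 if absent
        IgniteAtomicLong counter = ignite.atomicLong("matchCounter", 0, true);
        return counter.addAndGet(1);
    }
}
```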
Hi Alexey,
thank you for your response. I've implemented your suggestions and now my
cluster is with 100% cpu utilization.
Best regards,
Caio
--
View this message in context:
http://apache-ignite-users.70518.x6.nabble.com/JVM-tuning-tp7532p7705.html
Sent from the Apache Ignite Users mailing
It looks like the known issue https://issues.apache.org/jira/browse/IGNITE-2714
On Tue, Sep 13, 2016 at 11:13 AM, kromulan wrote:
> I've had the same problem in the past, but it was a memory leak in the indexing
> code.
> Do you use indexing on your entities ?
>
>
>
> --
> View this message in context: http:
I would say it's a critical problem. Can you estimate when it will be fixed?
(I don't have access to Jira.)
--
View this message in context:
http://apache-ignite-users.70518.x6.nabble.com/Query-does-not-include-objects-added-into-Cache-from-within-a-transaction-tp7651p7703.html
Sent from the Apac
We have an OLTP system based on Oracle, and we heavily use nested
transactions.
We would like to replace Oracle with Ignite. That's why we need all these
features: nested transactions, autonomous transactions, DDL ...
--
View this message in context:
http://apache-ignite-users.70518.x6.nabble.
You can also have the case where both nodes crash ... The bottom line is that a write
loss can occur in any system. I am always surprised to hear even senior
consultants saying that in a high-reliability database no write loss can occur
or that the risk is low (think about the human factor! E.g., an admin acc
Only 118 jobs for this test.
Bob
From: Taras Ledkov
Date: 2016-09-13 14:52
To: user@ignite.apache.org
Subject: Re: Re: Increase Ignite instances can't increase the speed of compute
Hi,
How many MatchingJobs do you submit?
On Tue, Sep 13, 2016 at 12:29 PM, 胡永亮/Bob wrote:
Hello, Vladis
Great news!
Could you please give me a hint on how to access this function?
--
View this message in context:
http://apache-ignite-users.70518.x6.nabble.com/SQL-Queries-propagate-new-CacheConfiguration-queryEntities-over-the-cluster-on-an-already-started-cae-tp5802p7699.html
Sent from the Apache I
Not exactly. An autonomous transaction should see uncommitted data of the parent
transaction.
--
View this message in context:
http://apache-ignite-users.70518.x6.nabble.com/Autonomous-transaction-tp7672p7698.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.
I've had the same problem in the past, but it was a memory leak in the indexing code.
Do you use indexing on your entities?
--
View this message in context:
http://apache-ignite-users.70518.x6.nabble.com/Cache-Problems-tp7477p7697.html
Sent from the Apache Ignite Users mailing list archive at Nabble.co
Hi,
I've been trying to find information on cache persistence write-behind behavior
that might lead to entries not being written to the persistent store. Most
notably: what happens in the following scenario:
* Cache 'cache' has two backup copies on nodes A and B respectively and
is c
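For reference, a write-behind setup is typically enabled like this (values are assumptions; a cacheStoreFactory, omitted here, is also required). Note that, as discussed above, the write-behind queue itself is not backed up, so buffered entries can be lost if the owning node fails before the flush:

```xml
<bean class="org.apache.ignite.configuration.CacheConfiguration">
    <property name="name" value="cache"/>
    <property name="backups" value="2"/>
    <property name="writeThrough" value="true"/>
    <property name="writeBehindEnabled" value="true"/>
    <!-- flush at most every 5 seconds, or once 10240 entries are buffered -->
    <property name="writeBehindFlushFrequency" value="5000"/>
    <property name="writeBehindFlushSize" value="10240"/>
</bean>
```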