Lucky,
What tool are you using to access Ignite over JDBC? Does this problem
reproduce, if you use DBeaver, for example?
As Taras said, looks like the same JDBC connection is used concurrently.
Denis
Wed, Nov 29, 2017 at 5:17, Lucky:
> OK,This is the stack trace.
>
Hi,
I opened a ticket for this issue:
https://issues.apache.org/jira/browse/IGNITE-7065. If no one picks it up by
next week, then I might find time to fix it next week (although I cannot
promise). I am not sure when the community is going to release 2.4, but I (or
anyone who fixes it) could share a
Hi,
No, you cannot activate the cluster through XML config. Do you really need
persistence enabled? The cluster is active by default if you disable
persistence.
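For reference, a sketch of what enabling persistence looks like in the XML config (this uses the Ignite 2.3-era PersistentStoreConfiguration bean; check the name against your release). It is this block that makes manual activation necessary; without it the cluster starts active:

```xml
<bean class="org.apache.ignite.configuration.IgniteConfiguration">
    <!-- Enabling native persistence puts the cluster into inactive state on startup. -->
    <property name="persistentStoreConfiguration">
        <bean class="org.apache.ignite.configuration.PersistentStoreConfiguration"/>
    </property>
</bean>
```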
You might also consider creating a simple ignite.sh wrapper that would also
call "control.sh --activate" to activate the cluster. In this
Hello!
Are you still having this problem?
There's an IgniteCheckedException for which we need full stack traces
(including causes) in order to proceed.
Regards,
--
Ilya Kasnacheev
2017-10-20 17:36 GMT+03:00 daniels :
> Hi, thank you for the response.
>
> No, there isn't any
Hi All,
We use exactly the same configuration, with the same CacheEntryListener and
CacheEntryEventFilter, in both ContinuousQuery and
MutableCacheEntryListenerConfiguration.
But the ContinuousQuery never seems to trigger any events, while the
MutableCacheEntryListenerConfiguration
Hi Colin,
I would suggest you send a new message to user list with a detailed
description of your use case, I think the community could check it and give
some comments.
Evgenii
--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/
Lucky,
Try enabling the *enforceJoinOrder* parameter in the JDBC connection string and
let us know the result.
Your JDBC connection string should look like this:
jdbc:ignite:thin://127.0.0.1?enforceJoinOrder=true
Denis
Wed, Nov 29, 2017 at 10:22, Lucky:
> Have you come to a
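A minimal, runnable sketch of composing such a connection string. The helper below is illustrative, not part of any Ignite API; the commented-out call assumes the thin JDBC driver is on the classpath:

```java
public class JdbcUrlExample {
    // Builds a thin-driver URL of the form suggested above.
    static String buildUrl(String host, boolean enforceJoinOrder) {
        return "jdbc:ignite:thin://" + host + "?enforceJoinOrder=" + enforceJoinOrder;
    }

    public static void main(String[] args) {
        System.out.println(buildUrl("127.0.0.1", true));
        // With ignite-core on the classpath, the URL would be used as:
        // java.sql.Connection conn =
        //     java.sql.DriverManager.getConnection(buildUrl("127.0.0.1", true));
    }
}
```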
How do we set '-DIGNITE_REST_START_ON_CLIENT=true' in the config XML? I
wanted to enable REST for a client node.
What is the XML property?
Regards
Naveen
BTW, when I use SQL to update the cache, it does not trigger the ContinuousQuery,
while if I update entries one by one it seems to work.
Is this the reason?
SqlFieldsQuery update = new SqlFieldsQuery(UPDATE)
    .setArgs(Utils.utcEpochMills())
    .setTimeout(20, TimeUnit.SECONDS)
Hi Vladimir,
Unfortunately, I don't have AIX or any other big-endian machine at hand, so
could you please assist me with fixing this bug?
Could you please run the following unit test on your AIX box:
org.apache.ignite.internal.direct.stream.v2.DirectByteBufferStreamImplV2ByteOrderSelfTest
Could
Is there any ORM for Ignite? This question is part of my larger enquiry on
how seamlessly it would be possible to migrate a Spring-based micro-service
into an ignite-services based one. I'm aware of the need to change the
inter-service communication part, but I would like to maintain the internal
Hi,
I was working on Ignite 2.3, and my colleague is working on 1.8. We are
running different applications on our local machines using the example Ignite
XML, but it is failing with the below exception:
We are using our local system IP under TcpDiscoverySpi, but still, it is
detecting each
Ignite has built-in object mapping.
Basically, you put your objects into a cache, and it is possible to query them
with SQL.
On Wed, Nov 29, 2017 at 5:56 PM, nunoo wrote:
> Is there any ORM for Ignite? This question is part of my larger enquiry on
> how seamlessly it
Ray,
Seems you're looking
for org.apache.ignite.cache.query.SqlFieldsQuery#timeout?
On Tue, Nov 28, 2017 at 5:30 PM, Alexey Kukushkin wrote:
> Ignite Developers,
>
> I know the community is developing an "Internal Problems Detection" feature
>
Is it possible to fully configure the connection between Apache Ignite v2.3 and
Cassandra? I started the Cassandra integration with 1.8 and, at the time,
the configuration had to be done fully with XML. I'm just curious if there
have been enough changes and updates over time (that I'm unaware of)
Ray,
I think to avoid excessive GC and OOM you could try switching to a lazy
result set:
https://apacheignite-sql.readme.io/docs/performance-and-debugging#result-set-lazy-load
- Nick
On Wed, Nov 29, 2017, 7:19 AM Anton Vinogradov
wrote:
> Ray,
>
> Seems you're
Hi,
In this case, you should specify ports for addresses, for example:
10.23.153.56:47500
Evgenii
2017-11-29 18:20 GMT+03:00 Rajarshi Pain :
> Hi,
>
> I was working on Ignite 2.3, and my colleague is working on 1.8. We are
> running different applications on our local
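A sketch of how a static IP finder with an explicit port (or port range) might look in the XML config; the address is an example, and 47500..47509 is the default discovery port range:

```xml
<property name="discoverySpi">
    <bean class="org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi">
        <property name="ipFinder">
            <bean class="org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder">
                <property name="addresses">
                    <list>
                        <!-- Specify the port (or a port range) for each address. -->
                        <value>10.23.153.56:47500..47509</value>
                    </list>
                </property>
            </bean>
        </property>
    </bean>
</property>
```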
Jin,
It's not a bug. All nodes that will be executing the initial query should
have metadata of the query class.
When you register an initial query or a remote filter for a CQ, their
objects should be sent to remote nodes, so a metadata update is triggered.
It's possible that anonymous classes,
Hi Naveen,
It seems there is no such XML property.
Anyway, I don't think there is much sense in starting a REST server on a client
node, because it is always better to access a server node directly if possible.
Thanks.
Hello,
It looks like there is a new possibility which allows decreasing the number
of open file handles.
As of v2.2, Apache Ignite provides a new concept: Cache Groups [1].
Caches within a single cache group share various internal structures, and this
allows mixing data of caches in shared
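A sketch of assigning two caches to the same cache group via the `groupName` property (cache and group names here are examples):

```xml
<property name="cacheConfiguration">
    <list>
        <bean class="org.apache.ignite.configuration.CacheConfiguration">
            <property name="name" value="cacheA"/>
            <property name="groupName" value="group1"/>
        </bean>
        <bean class="org.apache.ignite.configuration.CacheConfiguration">
            <property name="name" value="cacheB"/>
            <!-- Same group: both caches share partition files and other structures. -->
            <property name="groupName" value="group1"/>
        </bean>
    </list>
</property>
```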
Switched to Ignite 2.3.0 in the hope it has better behavior.
Unfortunately, it does not.
During the execution of the Spark job the number of cache rows is growing, but
after the Spark job completes, it looks like some entries have been removed.
JavaIgniteRDD shows the correct count, but again the final result is incorrect.
I
Hi,
I think that most of the open file descriptors relate to partition files.
Perhaps, in terms of performance, opening/closing files after each read/write
is not a very good approach.
Thanks.
I am using 2.3.
What could be the issue with the below create index command?
: jdbc:ignite:thin://127.0.0.1> select * from "Customer".CUSTOMER
where ACCOUNT_ID_LIST = 'A10001';
But that object is not modeled as a JPA entity, is it? That approach is using
a JDBC-like API, right?
Val, thank you for confirmation.
zbyszek
Hi Kenan,
Nothing changed there, ticket is still open:
https://issues.apache.org/jira/browse/IGNITE-4555
-Val
Looks like you're running in embedded mode. In this mode server nodes are
started within Spark executors, so when an executor is stopped, some of the
data is lost. Try to start a separate Ignite cluster and create IgniteContext
with standalone=true.
-Val
I don't think raksja had an issue with only one record in the RDD.
IgniteRDD#count redirects directly to IgniteCache#size, so if it returns 1,
you indeed have only one entry in the cache for some reason.
-Val
I am not sure how a client-server deployment would look in our case. Our
application starts Ignite in server mode, initializes caches, and later
accesses those caches. To switch to a client-server deployment, the application
will start a node in server mode and initialize caches. Then the
Biren,
If half of your operations became multiple times slower, why would you
expect throughput to increase? If you don't use collocation, I would
recommend you switch to a client-server deployment. Initial performance
with two nodes and a fully replicated cache can be slower than now, but
Amit,
Does it work if you start without nohup?
-Val
Hi,
Thanks for your help!
We define it using QueryEntity by providing 'keyFields' property:
triggerId
biEventId
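A sketch of what such a QueryEntity definition might look like in XML; the key/value type names are placeholders, and only the 'keyFields' names are taken from the message above:

```xml
<bean class="org.apache.ignite.cache.QueryEntity">
    <property name="keyType" value="com.example.EventKey"/>   <!-- placeholder -->
    <property name="valueType" value="com.example.Event"/>    <!-- placeholder -->
    <property name="fields">
        <map>
            <entry key="triggerId" value="java.lang.String"/>
            <entry key="biEventId" value="java.lang.String"/>
        </map>
    </property>
    <!-- Fields that compose the cache key. -->
    <property name="keyFields">
        <set>
            <value>triggerId</value>
            <value>biEventId</value>
        </set>
    </property>
</bean>
```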
Hi,
Any thoughts on this? Do you need any further details about the setup?
Thanks,
Juan
On Tue, Nov 21, 2017 at 8:59 AM, Juan Rodríguez Hortalá <
juan.rodriguez.hort...@gmail.com> wrote:
> Hi,
>
> Anyone might help a newbie ramping up with Ignite on YARN?
>
> Thanks,
>
> Juan
>
>
> On Sun,
Hello Alexey,
sorry for the delayed answer, here's the PHP example which demonstrates
the problem:
https://gist.github.com/wolframite/ee23b08bdd26bc1284cacc5259b850f8
Cheers
Wolfram
On 28/11/2017 03:15, Alexey Popov wrote:
Wolfram,
The buffer size is hardcoded now, but it could be made
Lucky,
If it's possible that this code is executed concurrently, then you need to
add additional synchronization. Otherwise, correct work of the JDBC driver is
not guaranteed.
Denis
On Thu, Nov 30, 2017, 09:10 Lucky wrote:
> Denis,
> I used Java code, just like this:
>
Denis,
I used Java code, just like this:
Connection conn = getConn();
PreparedStatement stmt = conn.prepareStatement(sql);
ResultSet rs = stmt.executeQuery();
public Connection getConn() throws SQLException {
    if (conn == null || conn.isClosed()) {
        try {
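The lazy initialization in getConn() above is exactly the kind of code the earlier reply warns about: without synchronization, two threads can race through the null check and each create a connection. A runnable sketch of the synchronized variant, using a stand-in class instead of a real java.sql.Connection so it runs without a cluster:

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class SafeConnHolder {
    static class FakeConnection {}  // stand-in for java.sql.Connection

    private static FakeConnection conn;

    // Synchronizing the accessor makes the check-then-create step atomic,
    // so all threads share one connection instance.
    static synchronized FakeConnection getConn() {
        if (conn == null)
            conn = new FakeConnection();
        return conn;
    }

    public static void main(String[] args) throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(8);
        Future<FakeConnection> a = pool.submit(SafeConnHolder::getConn);
        Future<FakeConnection> b = pool.submit(SafeConnHolder::getConn);
        System.out.println(a.get() == b.get());  // same instance
        pool.shutdown();
    }
}
```

Note that even with a single shared connection, concurrent statement execution on it is still unsafe; per-thread connections or a connection pool avoid the problem entirely.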
Hi Evgenii,
We both tried to specify the port, but we are still getting the same error if
we are both running it at the same time on our systems.
On Wed 29 Nov, 2017, 21:04 Evgenii Zhuravlev,
wrote:
> Hi,
>
> In this case, you should specify ports for addresses, for example:
>
>
Hi Biren,
Are you doing reads from a client or directly on server nodes? If the
latter, then I guess you just do not collocate properly. With two nodes and
one backup, all data is available on both nodes, so any server-side read
would be local. With four nodes, some of them would be remote, which is
Hi,
How do you define the schema? Using DDL or in CacheConfiguration?
Basically, you don't need to use _key at all here, since TriggerId and
BiEventId actually compose the primary key. But you need to make sure Ignite
knows about that. For example, if you create a table like this:
CREATE TABLE
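The message is cut off at the CREATE TABLE; a hypothetical DDL of the kind being described (the table name and column types are assumptions, the composite PRIMARY KEY is the point) might look like:

```sql
CREATE TABLE events (
    TriggerId VARCHAR,
    BiEventId VARCHAR,
    Payload   VARCHAR,
    -- Declaring both columns in the primary key tells Ignite they compose the cache key.
    PRIMARY KEY (TriggerId, BiEventId)
);
```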
I'm using Ignite 2.0, and in my JUnit test Ignite throws the below exception,
but my application works properly; the error occurs only during the JUnit
test. The configurations are the same in both.
I did some debugging and noticed that almost the same query works properly in
my application, but in the JUnit test it
Hi Val,
We are doing reads on the server only. I understand the get will be slow in the
case of 4 nodes because it's doing remote gets for some cache entries. But it
should not bring down the overall throughput of the application. Just to give
you some context, with 2 nodes the application can process 500K
Biren,
That's a wrong expectation, because a local in-memory read is drastically
faster than a network read. How do you choose a server node to read from?
What is the overall use case?
-Val
Well, I am not expecting that by doubling the number of nodes I will get 2x
throughput. But it should scale at some linear rate, and it definitely should
not bring the throughput down. We have embedded Ignite in the application. On
start of the application we start Ignite in server mode. We
Found these very promising links :)))
https://dzone.com/articles/apache-ignite-with-jpa-a-missing-element
https://www.javacodegeeks.com/2017/07/apache-ignite-spring-data.html
https://apacheignite-mix.readme.io/v2.0/docs/spring-data
My understanding with some other in-memory product is:
We have seeders and leeches; seeders are the ones holding the data, and leeches
are the ones exposed to the clients, responsible for processing the
incoming requests. The basic idea was to offload the connection/disconnection
activities from the