Hi,
We are evaluating Ignite, and our ultimate goal is to share an RDD between
multiple Spark jobs, so that one job can cache its computation in Ignite and
another can use it for its own computation.
We set up Ignite servers on the master node and 6 worker nodes of an AWS EMR
cluster and ran spark-submit with
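For reference, the usual way to share state like this is Ignite's shared RDD support: one job writes a cache through an IgniteContext-backed RDD and another job reads the same cache. A rough sketch of the idea (not executed here; it assumes the ignite-spark module on the classpath, a running cluster described by example-ignite.xml, and a made-up cache name):

```java
import org.apache.ignite.spark.JavaIgniteContext;
import org.apache.ignite.spark.JavaIgniteRDD;
import org.apache.spark.api.java.JavaSparkContext;
import java.util.Arrays;

public class SharedRddSketch {
    public static void main(String[] args) {
        JavaSparkContext sc = new JavaSparkContext("local[*]", "shared-rdd");
        JavaIgniteContext<Integer, String> ic =
            new JavaIgniteContext<>(sc, "example-ignite.xml");

        // Job 1: cache its computation in the "shared" Ignite cache.
        JavaIgniteRDD<Integer, String> shared = ic.fromCache("shared");
        shared.savePairs(sc.parallelizePairs(Arrays.asList(
            new scala.Tuple2<>(1, "one"),
            new scala.Tuple2<>(2, "two"))));

        // Job 2 (a separate Spark application) would call
        // ic.fromCache("shared") and see the same data.
        sc.stop();
    }
}
```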
Hi Alexey,
Just to add a little more context and summarize: I have been trying to get
this to work for a while so I can begin deploying Ignite, but as yet I
haven't managed to get it working. Below is the current list of issues:
1. The cache fails to load with the configurations
Igniters, today Akmal B. Chaudhri published the 7th post in his series on
"Getting Started with Apache Ignite." This time round the focus is on the
new Machine Learning (ML) Grid. http://bit.ly/2v6QKQ1
Hi Matt,
javax.cache.Caching.getCachingProvider() does not start Apache Ignite.
It looks like you are trying to call (explicitly or implicitly)
CachingProvider.getCacheManager(), which has to start an Ignite server.
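To illustrate the distinction (a sketch, not executed here; it assumes the JSR-107 cache-api jar and ignite-core on the classpath):

```java
import javax.cache.CacheManager;
import javax.cache.Caching;
import javax.cache.spi.CachingProvider;

public class ProviderSketch {
    public static void main(String[] args) {
        // SPI lookup only: no Ignite node is started by this call.
        CachingProvider provider = Caching.getCachingProvider();

        // This is the call that actually boots an Ignite server node
        // (and prints the welcome banner) when Ignite is the provider.
        CacheManager manager = provider.getCacheManager();

        manager.close(); // stops the node that getCacheManager() started
    }
}
```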
Thanks.
Hi,
Could you please clarify: if you run all the actions using IGFS, but instead
of fs.append use Hive, like:
insert into table stocks PARTITION (years=2004,months=12,days=3)
values('AAPL',1501236980,120.34);
Does select work this time?
Thanks,
Mikhail.
2017-08-04 12:56 GMT+03:00 csumi
Hi Pradeep,
I haven't reproduced the issue yet; I need more time for this.
But could you please share the full stack trace of the exception, because
the most interesting part is cut off.
Thanks,
Mikhail.
Hi Pradeep,
I can't run it with 1.9 either; however, the same config works fine with 2.0.
You can try to build Zeppelin yourself: just apply the following patch:
https://github.com/apache/zeppelin/pull/2445/files
to some stable version of Zeppelin.
I'm not sure, but it should work; anyway, the only
Could you please clarify what you mean by "I already did LoadCache
before running the program"?
Also, it would be good if you could share a minimal reproducer.
On Fri, Aug 4, 2017 at 4:55 PM, kotamrajuyashasvi
wrote:
> Hi
> All nodes use the same config XML and POJOs.
Thanks, Igor.
That enhancement will be very useful. Both faster loading (parallel) and
better efficiency (not transferring all the data every time) are highly
desirable.
Roger
From: Igor Rudyak [mailto:irud...@gmail.com]
Sent: Thursday, August 03, 2017 10:58 PM
To: user@ignite.apache.org
Subject: Re:
Hi Kestas,
There are several possible reasons for that.
In your case, I think you are trying to put or get huge objects that
contain internal collections and so on.
> at o.a.i.i.binary.BinaryUtils.doReadCollection(BinaryUtils.java:2016)
> at
Thanks Christos, it is for the application code. For upgrading the Ignite
version we can schedule downtime; that's why 99.9% :)
--
View this message in context:
http://apache-ignite-users.70518.x6.nabble.com/Platform-updates-tp15998p16001.html
Sent from the Apache Ignite Users mailing list
Luqman, do you want to update the application or the actual Ignite version?
If it's your application, Kara, then as long as you can manage multiple
versions of your app for a phased upgrade, then sure. But if it's for Ignite,
then it is not possible to have 2 different versions running in the same
I have the Ignite dependencies set in my project. I have some test code
that is plain, non-Ignite Java; no test code actually touches Ignite or
attempts to call code that does. As soon as I run
javax.cache.Caching.getCachingProvider(), Ignite starts up and prints the
normal Ignite welcome ASCII art.
Hi
It seems that you have different configurations on the nodes (e.g. one node
has cacheKeyConfiguration while another doesn't). Isn't that so?
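For example, a cacheKeyConfiguration block like the one below (the key type and affinity field names here are hypothetical) would have to appear identically in every node's IgniteConfiguration:

```xml
<property name="cacheKeyConfiguration">
    <list>
        <bean class="org.apache.ignite.cache.CacheKeyConfiguration">
            <!-- key type and its affinity-key field (hypothetical names) -->
            <constructor-arg value="com.example.PersonKey"/>
            <constructor-arg value="companyId"/>
        </bean>
    </list>
</property>
```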
On Fri, Aug 4, 2017 at 12:54 PM, kotamrajuyashasvi
wrote:
> Hi
>
> Thanks for the response. When I put cacheKeyConfiguration in ignite
Hi,
For data frames you can try to save records one by one, but I'm not sure
that Spark won't use a batch store.
The second option is transforming the data frame into an RDD and using the
savePairs method.
Will this work for you?
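The second option might look roughly like this (a sketch, not executed here; it assumes a DataFrame df whose first two columns are the key and value, a JavaSparkContext sc, the ignite-spark module, and a made-up cache name):

```java
// Sketch: convert the data frame to a pair RDD, then write it through
// IgniteRDD.savePairs (column positions and cache name are hypothetical).
JavaPairRDD<Long, String> pairs = df.javaRDD().mapToPair(row ->
    new scala.Tuple2<>(row.getLong(0), row.getString(1)));

JavaIgniteContext<Long, String> ic =
    new JavaIgniteContext<>(sc, "example-ignite.xml");
ic.fromCache("frameCache").savePairs(pairs);
```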
Hi,
Sometimes we get this message in the logs. What does it mean?
Jul 26, 2017 11:43:25 AM org.apache.ignite.logger.java.JavaLogger warning
WARNING: >>> Possible starvation in striped pool.
Thread name: sys-stripe-3-#4%null%
Queue: []
Deadlock: false
Completed: 17
Thread
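As I understand it, this warning typically means one stripe (a single-threaded lane in Ignite's striped pool) has been occupied by a long-running or blocked task, so work behind it has to wait; with Deadlock: false it is a latency symptom rather than a hang. A plain-Java analogy of the situation (no Ignite APIs; all names here are mine):

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

// Illustration only: a "stripe" is modeled as a single-threaded executor.
// A long task occupies the stripe, so the next task queues behind it --
// the condition Ignite's starvation checker is warning about.
public class StarvationDemo {
    static String run() throws Exception {
        ExecutorService stripe = Executors.newSingleThreadExecutor();

        // Long-running task occupies the stripe.
        stripe.submit(() -> {
            try { Thread.sleep(500); } catch (InterruptedException ignored) { }
        });

        // This task is queued, not deadlocked: it runs once the stripe frees up.
        Future<String> queued = stripe.submit(() -> "done");
        String result = queued.get();
        stripe.shutdown();
        return result;
    }

    public static void main(String[] args) throws Exception {
        System.out.println(run());
    }
}
```

The second task here is starved only while the stripe is busy, which mirrors why the warning is often transient.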
> git refused to clone the repo in GitExtensions
Running "git clone https://github.com/apache/ignite.git" in a console should work.
> Is the ‘data’ pointer actually a pointer to the unmanaged memory in the
> off-heap cache containing this element?
It is just a pointer; it can point both to managed or
Let me try to clarify the sequence of steps performed.
- Created a table with partitions through Hive using the query below. It
creates a directory in HDFS.
create table stocks3 (stock string, time timestamp, price float)
PARTITIONED BY (years bigint, months bigint, days bigint)
Hi
Thanks for the response. When I put cacheKeyConfiguration in the Ignite
configuration, the affinity was working. But when I call Cache.Get() in the
client program I get the following error:
"Java exception occurred
[cls=org.apache.ignite.binary.BinaryObjectException, msg=Binary type has
Yeah, "Data type is not supported" - this is the message I'm getting
currently. I was just making sure we are on the same page. I'm
working on this and other issues right now.
I'll notify you when I'm done. Thanks for your assistance.
Best Regards,
Igor
On Thu, Aug 3, 2017 at 10:06 PM, Dar
Hi Alexey,
Try this,
https://github.com/softwarebrahma/IgniteCacheWithAutomaticExternalPersistence
Can be run like below,
java -jar -Ddb.host=localhost -Ddb.port=5432 -Ddb.name=postgres
-DIGNITE_QUIET=false target\CacheAutomaticPersistence-0.0.1-SNAPSHOT.jar
Regards,
Muthu
Hi Muthu,
You understand correctly. If you have write-then-read logic that can be
executed on a backup node for a particular key, then you should use the
FULL_SYNC write synchronization mode.
Another way to get similar behaviour is setting the readFromBackup property
to false. In this case you still can
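In Spring XML the two options above look like this (cache name hypothetical; pick one of the two properties depending on the behaviour you want):

```xml
<bean class="org.apache.ignite.configuration.CacheConfiguration">
    <property name="name" value="myCache"/>

    <!-- Option 1: every write waits for backups to be updated. -->
    <property name="writeSynchronizationMode" value="FULL_SYNC"/>

    <!-- Option 2: always read the primary copy, never a backup. -->
    <property name="readFromBackup" value="false"/>
</bean>
```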