hi @yakov
yakov wrote
> Yes, however, you can still return results from each job and use it.
> Please
> see javadoc for org.apache.ignite.compute.ComputeJobResult#getData
yes, it's good to have such an opportunity at least at the "result" step.
But I'm still very curious why the overhead is so big.
Yes, however, you can still return results from each job and use it. Please
see javadoc for org.apache.ignite.compute.ComputeJobResult#getData
--Yakov
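For readers finding this thread later: Yakov's hint can be sketched roughly as follows. This is an illustration, not code from the thread (the task name and the summing logic are invented), and it needs ignite-core on the classpath. Each job's return value reaches the task's result() callback as it arrives, where ComputeJobResult#getData hands it back for on-the-fly aggregation.

```java
import java.util.ArrayList;
import java.util.Collection;
import java.util.List;
import java.util.concurrent.atomic.LongAdder;
import org.apache.ignite.compute.*;

// Hypothetical task: sums job results as they arrive in result(),
// instead of waiting for the full list in reduce().
public class SumTask extends ComputeTaskSplitAdapter<Integer, Long> {
    private final LongAdder sum = new LongAdder();

    @Override protected Collection<? extends ComputeJob> split(int gridSize, Integer jobCount) {
        List<ComputeJob> jobs = new ArrayList<>();
        for (int i = 0; i < jobCount; i++) {
            final int val = i;
            jobs.add(new ComputeJobAdapter() {
                @Override public Object execute() { return val; }  // job's return value
            });
        }
        return jobs;
    }

    // Called once per finished job; getData() returns what execute() returned.
    @Override public ComputeJobResultPolicy result(ComputeJobResult res, List<ComputeJobResult> rcvd) {
        sum.add(res.<Integer>getData());
        return ComputeJobResultPolicy.WAIT;  // keep waiting for remaining jobs
    }

    @Override public Long reduce(List<ComputeJobResult> results) {
        return sum.sum();  // everything was already aggregated in result()
    }
}
```

Consuming results in result() is what keeps the task memory-flat when combined with the @ComputeTaskNoResultCache annotation discussed in this thread.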
yakov wrote
> What are your timings now?
on two local nodes, after the JVM is warmed up (~100 executions), it runs in
30 ms on average, instead of 6 sec when the result is returned in the
return/reduce phase. This is a huge improvement!
I can now take it as a basis and start adding some additional behavior.
You are welcome!
What are your timings now?
--Yakov
2017-09-07 15:01 GMT+03:00 ihorps :
> hi @yakov
>
> yakov wrote
> > Try attaching @ComputeTaskNoResultCache to your task.
>
> Thank you for the hint. It speeds up task management processing
> drastically!
>
> --
hi @yakov
yakov wrote
> Try attaching @ComputeTaskNoResultCache to your task.
Thank you for the hint. It speeds up task management processing drastically!
--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/
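For context, the annotation from the hint is attached at the task class level. A minimal sketch (task name and job bodies invented; requires ignite-core):

```java
import java.util.ArrayList;
import java.util.Collection;
import java.util.List;
import org.apache.ignite.compute.*;

// With this annotation Ignite stops accumulating per-job results inside
// the task session, which is what makes large fan-outs cheap.
@ComputeTaskNoResultCache
public class NoCacheTask extends ComputeTaskSplitAdapter<Integer, Integer> {
    @Override protected Collection<? extends ComputeJob> split(int gridSize, Integer n) {
        List<ComputeJob> jobs = new ArrayList<>();
        for (int i = 0; i < n; i++)
            jobs.add(new ComputeJobAdapter() {
                @Override public Object execute() { return null; }  // no-op job
            });
        return jobs;
    }

    // 'results' arrives empty here because result caching is disabled.
    @Override public Integer reduce(List<ComputeJobResult> results) { return 0; }
}
```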
Try attaching @ComputeTaskNoResultCache to your task.
Also filed a ticket - https://issues.apache.org/jira/browse/IGNITE-6284
As for 2 - I meant empty runnables submitted to a JDK thread pool
executor - submission requires acquiring a lock and notifying a pool
thread. So the overhead is very
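Yakov's point here can be seen even without Ignite: submitting empty tasks to a plain JDK pool already pays for queue locking and worker signaling on every submit. A small self-contained sketch (class and method names invented):

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class NoOpSubmitCost {
    // Submit n no-op tasks and return the elapsed wall time in milliseconds.
    static long measure(int n) throws InterruptedException {
        ExecutorService pool = Executors.newFixedThreadPool(4);
        CountDownLatch done = new CountDownLatch(n);
        long start = System.nanoTime();
        for (int i = 0; i < n; i++)
            pool.execute(done::countDown);  // enqueue under the queue lock, may signal a worker
        done.await();
        long ms = (System.nanoTime() - start) / 1_000_000;
        pool.shutdown();
        return ms;
    }

    public static void main(String[] args) throws InterruptedException {
        // Even empty tasks take measurable time in aggregate.
        System.out.println("100k no-op submissions took " + measure(100_000) + " ms");
    }
}
```

In Ignite the per-job cost is higher still, since each job request also travels through the communication layer before reaching the remote node's pool.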
hi @yakov
Thank you for your feedback.
1. yes, warming up the JVM - this is what I missed at the beginning (no
doubts here at all). I can confirm that on average it gets better after a
few dozen runs.
2. did you mean the IgniteRunnable/IgniteCallable here (efficiency for a
no-op task/job)? I'd like
Guys,
I see the following issues with the benchmark:
1. There is only one iteration. I would put it in a loop and measure at
least a hundred iterations.
2. no-op jobs are not a real-world example at all =) job requests are
processed in a thread pool executor, which is not very effective for
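The first point - loop and average rather than timing a single shot - can be sketched with a plain-Java harness (all names invented; the Ignite task execution is represented by a placeholder Runnable):

```java
public class BenchLoop {
    // Average latency over 'iterations' runs, after 'warmup' discarded runs
    // that let the JIT compile the hot paths.
    static double avgMillis(Runnable task, int warmup, int iterations) {
        for (int i = 0; i < warmup; i++) task.run();
        long start = System.nanoTime();
        for (int i = 0; i < iterations; i++) task.run();
        return (System.nanoTime() - start) / 1_000_000.0 / iterations;
    }

    public static void main(String[] args) {
        // Placeholder workload; in this thread it would be something like
        // ignite.compute().execute(task, arg) instead.
        Runnable task = () -> { long s = 0; for (int i = 0; i < 1_000; i++) s += i; };
        System.out.printf("avg: %.4f ms over 100 iterations%n", avgMillis(task, 100, 100));
    }
}
```

This matches the warm-up effect reported later in the thread (~100 executions before timings stabilize).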
But of course, it could change. If the wiki doesn't have information about
it, the community hasn't decided yet.
2017-09-05 17:46 GMT+03:00 Evgenii Zhuravlev :
> I think it was planned at the end of October.
>
> Evgenii
>
> 2017-09-05 17:41 GMT+03:00 ihorps
I think it was planned at the end of October.
Evgenii
2017-09-05 17:41 GMT+03:00 ihorps :
> hi, @ezhuravlev
>
> This is what I'm looking for, many thanks!
>
> Any hints on when v2.3 is planned to be released (I can't find it on the wiki)?
>
> I'd rather wait for this API in Ignite
hi, @ezhuravlev
This is what I'm looking for, many thanks!
Any hints on when v2.3 is planned to be released (I can't find it on the wiki)?
I'd rather wait for this API in Ignite than implement it myself and throw
it away later, since I'm in the evaluation/prototype phase now.
Best regards,
ihorps
--
Hi,
Here is a ticket for exactly what you want, it's in progress right now:
https://issues.apache.org/jira/browse/IGNITE-5037
If you don't want to wait until it is implemented, you can use
affinityCall(...) or affinityRun(...) and reduce the results yourself after
they are returned.
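The suggested workaround might look like this (the cache name, keys, and the computation are invented for illustration, and the cache is assumed to already exist; requires ignite-core and a running node):

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;
import org.apache.ignite.Ignite;
import org.apache.ignite.Ignition;

public class AffinityReduceExample {
    public static void main(String[] args) {
        try (Ignite ignite = Ignition.start()) {
            List<Integer> partials = new ArrayList<>();
            for (String key : Arrays.asList("k1", "k2", "k3")) {
                // Executes on the node that owns 'key' in cache "myCache".
                partials.add(ignite.compute().affinityCall("myCache", key,
                    () -> key.length()));  // placeholder computation
            }
            // "Reduce the results yourself" on the caller side.
            int total = partials.stream().mapToInt(Integer::intValue).sum();
            System.out.println("reduced: " + total);
        }
    }
}
```

The calls here run sequentially for simplicity; the asynchronous variants of the compute API could overlap them.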
Evgenii
Hi,
I've added Thread.sleep(200) to Jobs to simulate a small load.
Here is what I've got:
1node: 1 Task 2000 Jobs ~25 sec
2nodes(on the same machine): 1 Task 2000 Jobs ~13 sec
What I want to say here is that this overhead will not be noticeable on real jobs.
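A sketch of the load simulation described above (class name and structure invented; requires ignite-core and running nodes). With 2000 jobs each sleeping 200 ms, the reported ~25 s (1 node) and ~13 s (2 nodes) timings are dominated by the simulated work, not by job-management overhead:

```java
import java.util.ArrayList;
import java.util.Collection;
import org.apache.ignite.Ignite;
import org.apache.ignite.Ignition;
import org.apache.ignite.lang.IgniteRunnable;

public class SleepJobs {
    public static void main(String[] args) {
        try (Ignite ignite = Ignition.start()) {
            Collection<IgniteRunnable> jobs = new ArrayList<>();
            for (int i = 0; i < 2000; i++)
                jobs.add(() -> {
                    try { Thread.sleep(200); }  // simulate a small load
                    catch (InterruptedException e) { Thread.currentThread().interrupt(); }
                });
            long start = System.currentTimeMillis();
            ignite.compute().run(jobs);  // jobs are distributed across the cluster
            System.out.println((System.currentTimeMillis() - start) + " ms");
        }
    }
}
```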
What about some configuration changes -
hello
So here are the results for NoOpTask + NoOpJob on two different hosts
(hardware spec is the same as mentioned above):
1. 1 Task - 100 Jobs -> ~0.1 sec
2. 1 Task - 1000 Jobs -> ~4 sec
3. 1 Task - 2000 Jobs -> ~15 sec
4. 1 Task - 3000 Jobs -> ~36 sec
5. 1 Task - 5000 Jobs -> ~96 sec
--
ezhuravlev wrote
> Also, maybe it's better to compare your current solution with Ignite on
> some real tasks? Or at least more approximate to the real use case
>
> Evgenii
Hi @ezhuravlev
Thank you for your reply!
I'm preparing a more "fair" comparison with our custom-made solution, but
it can't be
Also, maybe it's better to compare your current solution with Ignite on
some real tasks? Or at least more approximate to the real use case
Evgenii
Hi,
I don't really understand what you tried to measure here.
If you run two nodes on the same machine, you will have more CPU context
switching. In this case, your CPU runs internal Ignite threads not from
only one node, but from two nodes. Additionally, when you use more than
one node -
It was tested on:
- Windows 7 SP1
- Intel I7-4700MQ 2.40GHz
- 16GB RAM
- SSD
- java 1.8.0_112
- Apache Ignite 2.1.0
--
Hi all
[brief overview]
I'm evaluating the Apache Ignite framework as a replacement for Hazelcast.
One of the use cases where it's planned to be compared is task/job
processing. We implemented task management ourselves on top of Hazelcast,
without using their MapReduce framework (since it was very