No. Should I have done that?
On 02/12/2014 02:57 PM, Claudio Martella wrote:
did you also set stickyPartitions to some numbers?
On Wed, Feb 12, 2014 at 1:00 PM, Sebastian Schelter <s...@apache.org> wrote:
Updating documentation is never a bad idea :)
I reran my test with giraph.maxPartitionsInMemory > max(giraph.numComputeThreads, giraph.numInputThreads, giraph.numOutputThreads) and still got the same behavior. I'll wait for the updated patch.
Get well Armando!
On 02/12/2014 12:53 PM, Armando Miraglia wrote:
btw: I was also thinking of updating the documentation page on the Giraph
website to better explain the sticky-partition logic. What do you think?
Cheers,
Armando
On Wed, Feb 12, 2014 at 12:50:25PM +0100, Armando Miraglia wrote:
Indeed, yesterday I was fixing a couple of things and I think I missed a
case that I have to exclude. Sorry for this, I have a fever at the moment,
so it could be that yesterday I was under its effect :D
I checked that the tests were passing, but I think I missed something.
I'll come back to you very soon.
On Wed, Feb 12, 2014 at 10:26:12AM +0100, Claudio Martella wrote:
the problem is that you're running with more threads than in-memory
partitions. increase the number of partitions in memory to be at least the
number of threads. i have no time right now to check the latest code, but
you should not set the number of sticky partitions by hand.
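[Editor's note: as a sketch only, a launch command along these lines would satisfy the constraint Claudio describes. The option names are the ones quoted elsewhere in this thread; the jar/class names and values are illustrative, not taken from the original mails.]

```shell
# Illustrative sketch: keep giraph.maxPartitionsInMemory at or above the
# largest of the compute/input/output thread counts, so no thread blocks
# forever waiting for an in-memory partition slot. Do not set
# giraph.stickyPartitions by hand. Jar and class names are placeholders.
hadoop jar giraph-examples.jar org.apache.giraph.GiraphRunner \
  my.example.MyComputation \
  -ca giraph.useOutOfCoreGraph=true \
  -ca giraph.numComputeThreads=15 \
  -ca giraph.numInputThreads=15 \
  -ca giraph.numOutputThreads=15 \
  -ca giraph.maxPartitionsInMemory=16
```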
On Wed, Feb 12, 2014 at 10:03 AM, Sebastian Schelter <s...@apache.org>
wrote:
I ran a first test with the new DiskBackedPartitionStore and unfortunately
it didn't work for me. The job never leaves the input phase (superstep -1).
I ssh'd onto one of the workers and it seems to wait forever in
DiskBackedPartitionStore.getOrCreatePartition:
java.lang.Thread.State: BLOCKED (on object monitor)
at org.apache.giraph.partition.DiskBackedPartitionStore.
getOrCreatePartition(DiskBackedPartitionStore.java:226)
- waiting to lock <0x00000000aeb757c8> (a
org.apache.giraph.partition.DiskBackedPartitionStore$MetaPartition)
Here are the custom arguments for my run, let me know if I should do
another run with a different config.
giraph.oneToAllMsgSending=true
giraph.isStaticGraph=true
giraph.numComputeThreads=15
giraph.numInputThreads=15
giraph.numOutputThreads=15
giraph.maxNumberOfSupersteps=30
giraph.useOutOfCoreGraph=true
giraph.stickyPartitions=5
I also ran the job without using oneToAllMsgSending and saw the same
behavior.
Best,
Sebastian
On 02/12/2014 12:44 AM, Claudio Martella wrote:
please give it a test. i've been working on this with armando. i'll give it
a review, but we have been testing it for a while. we'd really appreciate it
if somebody else could run some additional tests as well. thanks!
On Wed, Feb 12, 2014 at 12:39 AM, Sebastian Schelter <s...@apache.org>
wrote:
I'll test the patch from GIRAPH-825 this week.
On 02/12/2014 12:10 AM, Roman Shaposhnik wrote:
Hi!
Given how big the diffs here are:
https://issues.apache.org/jira/browse/GIRAPH-825
https://issues.apache.org/jira/browse/GIRAPH-840
I am wondering whether it is realistic to have them in 1.1.0.
Would appreciate folks chiming in.
Thanks,
Roman.
--
Claudio Martella