[jira] [Commented] (FLINK-8414) Gelly performance seriously decreases when using the suggested parallelism configuration
[ https://issues.apache.org/jira/browse/FLINK-8414?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16326285#comment-16326285 ]

Greg Hogan commented on FLINK-8414:
-----------------------------------

You can certainly measure scalability, but as you have discovered, performance will not increase monotonically. A redistributing operator requires a channel between each pair of tasks, so with a parallelism of 2^7 you will have 2^14 channels between each pair of communicating operators, for each iteration.

There are many reasons to use Flink and Gelly, but for some use cases and certain algorithms you may even get better performance from a single-threaded implementation; see "Scalability! But at what COST?". ConnectedComponents and PageRank require, respectively, no and very little intermediate data, whereas the similarity measures JaccardIndex and AdamicAdar, as well as triangle metrics such as ClusteringCoefficient, produce super-linear intermediate data and benefit much more from Flink's scalability.

When comparing against non-distributed implementations, it is important to note that all Gelly algorithms process generic data, whereas many "optimized" algorithms assume compact integer representations.

> Gelly performance seriously decreases when using the suggested parallelism
> configuration
>
> Key: FLINK-8414
> URL: https://issues.apache.org/jira/browse/FLINK-8414
> Project: Flink
> Issue Type: Bug
> Components: Configuration, Documentation, Gelly
> Reporter: flora karniav
> Priority: Minor
>
> I am running Gelly examples with different datasets in a cluster of 5
> machines (1 JobManager and 4 TaskManagers) of 32 cores each.
> The number of slots parameter is set to 32 (as suggested) and the
> parallelism to 128 (32 cores * 4 TaskManagers).
> I observe a vast performance degradation using these suggested settings
> compared to setting parallelism.default to 16, for example, where the same
> job completes in ~60 seconds vs ~140 in the 128-parallelism case.
> Is there something wrong in my configuration? Should I decrease
> parallelism, and if so, will this inevitably decrease CPU utilization?
> Another matter that may be related to this is the number of partitions of
> the data. Is this somehow related to parallelism? How many partitions are
> created in the case of parallelism.default=128?

--
This message was sent by Atlassian JIRA
(v7.6.3#76005)
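The quadratic channel growth described in the comment above can be sketched with a short back-of-the-envelope calculation. This is not Flink code, just illustrative arithmetic; the class and method names are hypothetical.

```java
// Illustrative sketch (not part of Flink): for one all-to-all
// (redistributing) exchange, every producer task opens a channel to
// every consumer task, so the channel count grows with the square of
// the parallelism.
public class ChannelCount {
    static long channelsPerExchange(int parallelism) {
        // p producers x p consumers = p^2 channels
        return (long) parallelism * parallelism;
    }

    public static void main(String[] args) {
        System.out.println(channelsPerExchange(16));   // 256 channels
        System.out.println(channelsPerExchange(128));  // 16384 = 2^14 channels
    }
}
```

Going from parallelism 16 to 128 thus multiplies the per-exchange channel count by 64, which for a tiny dataset means far more coordination overhead per byte of useful work.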
[jira] [Commented] (FLINK-8414) Gelly performance seriously decreases when using the suggested parallelism configuration
[ https://issues.apache.org/jira/browse/FLINK-8414?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16325157#comment-16325157 ]

flora karniav commented on FLINK-8414:
--------------------------------------

Thank you for the information. I understand that lower parallelism levels are sufficient for these small datasets, but why would performance decrease at larger parallelism values? Because of this, I cannot measure performance across datasets of different sizes (varying from MBs to GBs) with the same Flink setup and configuration. In addition, even if I know the graph size a priori (using VertexMetrics), is there a formula or some standard way to choose the parallelism level accordingly, or is brute force the only way? Thank you.
[jira] [Commented] (FLINK-8414) Gelly performance seriously decreases when using the suggested parallelism configuration
[ https://issues.apache.org/jira/browse/FLINK-8414?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16324542#comment-16324542 ]

Greg Hogan commented on FLINK-8414:
-----------------------------------

It is incumbent on the user to configure an appropriate parallelism for the quantity of data. Those graphs contain only a few tens of megabytes of data, so it is not surprising that the optimal parallelism is around (or even lower than) 16. You can use `VertexMetrics` to pre-compute the size of the graph and adjust the parallelism at runtime (`ExecutionConfig#setParallelism`). Flink and Gelly are designed to scale to 100s to 1000s of parallel tasks and GBs to TBs of data.
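The runtime-adjustment approach suggested above could be sketched as follows. The heuristic is purely illustrative: the 16-byte edge size and the ~10 MB-per-task target are assumptions, not Flink recommendations, and the helper class is hypothetical. In a real job the edge count would come from Gelly's VertexMetrics and the result would be applied via `ExecutionConfig#setParallelism` before submission.

```java
// Hypothetical heuristic: derive a parallelism from the graph's edge
// count instead of always using the cluster maximum. All constants
// below are illustrative assumptions.
public class AdaptiveParallelism {
    static int suggestParallelism(long edgeCount, int maxParallelism) {
        final long bytesPerEdge = 16L;          // assumed serialized edge size
        final long bytesPerTask = 10_000_000L;  // target ~10 MB of edges per task
        // ceiling division: tasks needed to hit the per-task data target
        long tasks = (edgeCount * bytesPerEdge + bytesPerTask - 1) / bytesPerTask;
        // clamp to [1, maxParallelism]
        return (int) Math.max(1, Math.min(tasks, maxParallelism));
    }

    public static void main(String[] args) {
        // ~2.4M edges: the egonets-Twitter dataset mentioned in this thread
        System.out.println(suggestParallelism(2_420_766L, 128));
    }
}
```

Under these assumptions the ~2.4M-edge Twitter graph maps to a single-digit parallelism, consistent with the observation in this thread that small parallelism values outperform 128 on these datasets.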
[jira] [Commented] (FLINK-8414) Gelly performance seriously decreases when using the suggested parallelism configuration
[ https://issues.apache.org/jira/browse/FLINK-8414?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16324228#comment-16324228 ]

flora karniav commented on FLINK-8414:
--------------------------------------

Thank you for your reply. I am running the ConnectedComponents and PageRank algorithms from the Gelly examples on two SNAP datasets:
1) https://snap.stanford.edu/data/egonets-Twitter.html - 81,306 vertices and 2,420,766 edges.
2) https://snap.stanford.edu/data/com-Youtube.html - 1,134,890 vertices and 2,987,624 edges.
I also want to point out that I looked into CPU utilization when changing the parallelism level, and it grows as expected; however, performance is still reduced. (I am sorry if I posted in an inappropriate section, but I thought the issue bizarre enough to be configuration- or bug-related.)
[jira] [Commented] (FLINK-8414) Gelly performance seriously decreases when using the suggested parallelism configuration
[ https://issues.apache.org/jira/browse/FLINK-8414?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16324133#comment-16324133 ]

Greg Hogan commented on FLINK-8414:
-----------------------------------

This is more of a question than a reported bug and may be more appropriate for the flink-user mailing list. Are you able to share what algorithm(s) you are running and describe the dataset(s)?