In fact, I don't use it directly. I just had to trace back through the runtime implementation to find the point where the parallelism was dropping from 32 to 8.
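To illustrate what we saw in the debugger, here is a minimal standalone sketch (plain Scala, not Flink code, and the object name is just made up for the example): a default-constructed ForkJoinPool sizes itself to the number of available cores, which matches the 8 we keep hitting on an 8-core machine, while passing an explicit parallelism lifts the limit.

    import java.util.concurrent.ForkJoinPool
    import scala.concurrent.ExecutionContext

    object ForkJoinPoolParallelism {
      def main(args: Array[String]): Unit = {
        // A default-constructed pool sizes itself to the number of available cores.
        val defaultPool = new ForkJoinPool()
        // Passing an explicit parallelism overrides the core-count default.
        val widePool = new ForkJoinPool(32)

        println(s"cores:        ${Runtime.getRuntime.availableProcessors}")
        println(s"default pool: ${defaultPool.getParallelism}") // 8 on an 8-core machine
        println(s"wide pool:    ${widePool.getParallelism}")    // 32

        // Wrapping a pool in an ExecutionContext (as the scheduler does) keeps
        // whatever parallelism the underlying pool was created with.
        val ec: ExecutionContext = ExecutionContext.fromExecutor(widePool)
        println(ec)

        defaultPool.shutdown()
        widePool.shutdown()
      }
    }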
saluti, Stefano

2016-03-29 12:24 GMT+02:00 Till Rohrmann <till.rohrm...@gmail.com>:

> Hi,
>
> what do you use the ExecutionContext for? That should actually be
> something you shouldn't be concerned with, since it is only used
> internally by the runtime.
>
> Cheers,
> Till
>
>
> On Tue, Mar 29, 2016 at 12:09 PM, Stefano Bortoli <s.bort...@gmail.com>
> wrote:
>
>> Well, in theory yes. Each task has a thread, but only a number of them
>> run in parallel (the job of the scheduler). Parallelism is set in the
>> environment. However, whereas the parallelism parameter is set and read
>> correctly, when it comes to the actual starting of the threads, the
>> number is fixed to 8. We ran a debugger to get to the point where the
>> thread was started. As Flavio mentioned, the ExecutionContext has the
>> parallelism set to 8. We have a pool of connections to an RDBMS, and it
>> logs the creation of just 8 connections although the parallelism is much
>> higher.
>>
>> My question is whether this is a bug (or a feature) of the
>> LocalMiniCluster. :-) I am not a Scala expert, but I see some variable
>> assignments in the setup of the MiniCluster involving parallelism and
>> 'default values'. Default values in terms of parallelism are based on
>> the number of cores.
>>
>> Thanks a lot for the support!
>>
>> saluti,
>> Stefano
>>
>> 2016-03-29 11:51 GMT+02:00 Ufuk Celebi <u...@apache.org>:
>>
>>> Hey Stefano,
>>>
>>> this should work by setting the parallelism on the environment, e.g.
>>>
>>> env.setParallelism(32)
>>>
>>> Is this what you are doing?
>>>
>>> The task threads are not part of a pool, but each submitted task
>>> creates its own Thread.
>>>
>>> – Ufuk
>>>
>>>
>>> On Fri, Mar 25, 2016 at 9:10 PM, Flavio Pompermaier
>>> <pomperma...@okkam.it> wrote:
>>> > Any help here? I think that the problem is that the JobManager
>>> > creates the executionContext of the scheduler with
>>> >
>>> >     val executionContext = ExecutionContext.fromExecutor(new ForkJoinPool())
>>> >
>>> > and thus the number of concurrently running threads is limited to the
>>> > number of cores (using the default constructor of the ForkJoinPool).
>>> > What do you think?
>>> >
>>> >
>>> > On Wed, Mar 23, 2016 at 6:55 PM, Stefano Bortoli <s.bort...@gmail.com>
>>> > wrote:
>>> >>
>>> >> Hi guys,
>>> >>
>>> >> I am trying to test a job that should run a number of tasks to read
>>> >> from an RDBMS using an improved JDBC connector. The connection and
>>> >> the reading run smoothly, but I cannot seem to move above the limit
>>> >> of 8 concurrent threads running. 8 is, of course, the number of
>>> >> cores of my machine.
>>> >>
>>> >> I have tried working around configurations and settings, but the
>>> >> Executor within the ExecutionContext keeps having a parallelism of
>>> >> 8, although, of course, the parallelism of the execution environment
>>> >> is much higher (in fact I have many more tasks to be allocated).
>>> >>
>>> >> I feel it may be an issue of the LocalMiniCluster configuration that
>>> >> may just override/neglect my wish for a higher degree of
>>> >> parallelism. Is there a way for me to work around this issue?
>>> >>
>>> >> Please let me know. Thanks a lot for your help! :-)
>>> >>
>>> >> saluti,
>>> >> Stefano
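For completeness, a minimal sketch of the kind of job under discussion, with a dummy in-memory source standing in for the actual JDBC connector (the object name and element counts are placeholders, not the real code): the parallelism is requested on the environment, as Ufuk suggests, and each map subtask reports which parallel instance it runs in, so you can see how many task threads actually execute.

    import org.apache.flink.api.scala._
    import org.apache.flink.api.common.functions.RichMapFunction

    object ParallelismCheck {
      def main(args: Array[String]): Unit = {
        // Run from the IDE this starts a local mini cluster.
        val env = ExecutionEnvironment.getExecutionEnvironment
        env.setParallelism(32) // requested degree of parallelism for the job's tasks

        // Dummy source standing in for the JDBC input; rebalance so the
        // downstream mappers actually run with the requested parallelism.
        env.fromCollection(1 to 256)
          .rebalance()
          .map(new RichMapFunction[Int, String] {
            override def map(i: Int): String =
              s"element $i handled by subtask ${getRuntimeContext.getIndexOfThisSubtask}"
          })
          .print()
      }
    }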