On Sat, Jan 28, 2017 at 3:40 PM Brian Matherly <c...@brianmatherly.com>
wrote:

> I've not used the slices module nor studied it deeply. So here are some
> random comments for your consideration.
>
> * Who would be responsible for *first* initializing the global pool?
>
> * Whoever calls it first "wins" by being able to decide the number of
> threads.
>

The first caller, which is currently any one of the modules that have
integrated mlt_slices. However, it could also be the app, before doing
anything else, around the same place it calls mlt_factory_init().
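
For illustration, here is a minimal sketch of that start-up order from the
application side, assuming the mlt_slices_init_global_pool() proposed in the
patch below (a thread count of 0 meaning one thread per CPU, per its doc
comment):

    #include <framework/mlt.h>
    #include <framework/mlt_slices.h>

    int main( void )
    {
        // Load the factory first, as usual.
        mlt_factory_init( NULL );

        // Claim the global pool before any module does, so the app decides
        // the thread count (0 = number of CPUs). A hypothetical environment
        // variable, as suggested below, could feed this value instead.
        mlt_slices_init_global_pool( 0 );

        /* ... build producers, filters, and consumers as usual ... */

        // Shutdown: see the note further down about pairing this with
        // mlt_slices_close() before mlt_factory_close().
        mlt_factory_close( );
        return 0;
    }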


> Maybe the number of threads should not be a parameter to
> mlt_slices_init_global_pool(). It could be an environment variable, instead.
>
> * Should there also be a mlt_slices_close_global_pool() so that the library
> can be shut down cleanly? Alternatively, you could register the pool with
> mlt_factory_register_for_clean_up() in mlt_slices_init_global_pool(). For
> that matter, the global pool could be created by mlt_factory_init().
>
>
Currently the pools are tracked in a hidden global mlt_properties, which
does not get closed anywhere. If the app calls mlt_slices_init_global_pool(),
it should call mlt_slices_close() on the returned pool at exit or shutdown.
Once all references are released, the threads are closed, and the only thing
remaining is an mlt_properties object in memory that the kernel reclaims when
the process exits.
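
Sketched from the app's perspective, assuming it kept the handle returned by
the proposed mlt_slices_init_global_pool() at start-up:

    #include <framework/mlt.h>
    #include <framework/mlt_slices.h>

    // Hypothetical shutdown path; app_pool is the handle saved at start-up.
    void app_shutdown( mlt_slices app_pool )
    {
        // Dropping the last reference stops the worker threads; only the
        // hidden tracking mlt_properties stays behind until process exit.
        mlt_slices_close( app_pool );
        mlt_factory_close( );
    }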

It could instead be stored in mlt_global_properties(), which gets
initialized in mlt_factory_init() and closed in mlt_factory_close(). Then,
an app could initialize the global pool with a specific number of threads
just after calling mlt_factory_init().
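
One way that could look, as a rough sketch only (this is not part of the
patch below): the constructor could park the context in
mlt_global_properties() with mlt_slices_close() registered as its destructor,
so mlt_factory_close() tears it down automatically:

    #include <framework/mlt.h>
    #include <framework/mlt_slices.h>

    // Hypothetical variant of the proposed constructor: register the pool in
    // mlt_global_properties() so mlt_factory_close() releases it.
    mlt_slices mlt_slices_init_global_pool( int threads )
    {
        mlt_properties globals = mlt_global_properties( );
        mlt_slices ctx = mlt_properties_get_data( globals, "_mlt_slices_global", NULL );
        if ( !ctx )
        {
            // Same scheduling arguments as the patch's global pool.
            ctx = mlt_slices_init_pool( threads, -1, -1, "_mlt_slices_global" );
            // The destructor fires when mlt_factory_close() destroys the
            // global properties, releasing this reference to the pool.
            mlt_properties_set_data( globals, "_mlt_slices_global", ctx, 0,
                (mlt_destructor) mlt_slices_close, NULL );
        }
        return ctx;
    }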


> * I wonder if the global pool should be initialized and closed by the
> application, or maybe as part of some other initialization like
> mlt_factory_init().
>
> * Theoretically speaking, MLT could do well with only two global thread
> pools: one for high-priority tasks and one for normal-priority tasks. Maybe
> there should only be two thread pools; both could be global, and services
> would be discouraged from creating additional pools. Additionally, in this
> hypothetical scenario, there would only be two thread counts to worry about:
> the number of global high-priority threads and the number of global
> normal-priority threads.
>
>
> Those are my random ideas.
>
> Cheers,
>
> ~Brian
>
> ------------------------------
> From: Dan Dennedy <d...@dennedy.org>
> To: Maksym Veremeyenko <ve...@m1stereo.tv>; mlt-devel <mlt-devel@lists.sourceforge.net>
> Sent: Saturday, January 28, 2017 3:19 PM
> Subject: [Mlt-devel] RFC mlt_slices global shared pool
>
> It seems to me that, outside of the special high-priority, low-latency
> producers and consumers, most services should use one global shared pool to
> prevent creating too many pools and threads. Here is a code change I am
> playing with that adds the function mlt_slices_init_global_pool(int threads),
> which runs at normal priority. What do you think?
>
> --- src/framework/mlt.vers
> +++ src/framework/mlt.vers
> @@ -493,4 +493,5 @@ MLT_6.6.0 {
>    global:
>      mlt_slices_count;
>      mlt_slices_init_pool;
> +    mlt_slices_init_global_pool;
>  } MLT_6.4.0;
>
> --- src/framework/mlt_slices.c
> +++ src/framework/mlt_slices.c
> @@ -27,6 +27,7 @@
>  #include <stdlib.h>
>  #include <unistd.h>
>  #include <pthread.h>
> +#include <sched.h>
>  #ifdef _WIN32
>  #ifdef _WIN32_WINNT
>  #undef _WIN32_WINNT
> @@ -129,9 +130,9 @@ static void* mlt_slices_worker( void* p )
>  /** Initialize a sliced threading context
>   *
>   * \public \memberof mlt_slices_s
> - * \param threads number of threads to use for job list
> - * \param policy scheduling policy of processing threads
> - * \param priority priority value that can be used with the scheduling algorithm
> + * \param threads number of threads to use for job list, 0 for #cpus
> + * \param policy scheduling policy of processing threads, -1 for normal
> + * \param priority priority value that can be used with the scheduling algorithm, -1 for maximum
>   * \return the context pointer
>   */
> @@ -184,6 +185,10 @@ mlt_slices mlt_slices_init( int threads, int policy, int priority )
>       pthread_cond_init ( &ctx->cond_var_job, NULL );
>       pthread_cond_init ( &ctx->cond_var_ready, NULL );
>       pthread_attr_init( &tattr );
> +    if ( policy < 0 )
> +        policy = SCHED_OTHER;
> +    if ( priority < 0 )
> +        priority = sched_get_priority_max( policy );
>       pthread_attr_setschedpolicy( &tattr, policy );
>       param.sched_priority = priority;
>       pthread_attr_setschedparam( &tattr, &param );
> @@ -309,9 +314,9 @@ void mlt_slices_run( mlt_slices ctx, int jobs, mlt_slices_proc proc, void* cooki
>  /** Initialize a sliced threading context pool
>   *
>   * \public \memberof mlt_slices_s
> - * \param threads number of threads to use for job list
> - * \param policy scheduling policy of processing threads
> - * \param priority priority value that can be used with the scheduling algorithm
> + * \param threads number of threads to use for job list, 0 for #cpus
> + * \param policy scheduling policy of processing threads, -1 for normal
> + * \param priority priority value that can be used with the scheduling algorithm, -1 for maximum
>   * \param name name of pool of threads
>   * \return the context pointer
>   */
> @@ -355,6 +360,18 @@ mlt_slices mlt_slices_init_pool( int threads, int policy, int priority, const ch
>       return ctx;
>  }
> +/** Initialize the global sliced thread pool.
> + *
> + * \public \memberof mlt_slices_s
> + * \param threads number of threads to use for job list, 0 for #cpus
> + * \return the context pointer
> + */
> +
> +mlt_slices mlt_slices_init_global_pool(int threads)
> +{
> +    return mlt_slices_init_pool( threads, MLT_SLICES_SCHED_NORMAL, MLT_SLICES_SCHED_NORMAL, "_mlt_slices_global" );
> +}
> +
>  /** Get the number of slices.
>   *
>   * \public \memberof mlt_slices_s
>
> --- src/framework/mlt_slices.h
> +++ src/framework/mlt_slices.h
> @@ -23,6 +23,8 @@
>  #ifndef MLT_SLICES_H
>  #define MLT_SLICES_H
> +#define MLT_SLICES_SCHED_NORMAL (-1)
> +
>  struct mlt_slices_s;
>  typedef struct mlt_slices_s *mlt_slices;                  /**< pointer to Sliced processing context object */
> @@ -38,4 +40,6 @@ extern int mlt_slices_count( mlt_slices ctx );
>  extern mlt_slices mlt_slices_init_pool( int threads, int policy, int priority, const char* name );
> +extern mlt_slices mlt_slices_init_global_pool( int threads );
> +
>  #endif