Instead of adding mlt_slices_init_global_pool() in mlt_slices.h, here is an
alternative proposal.

Advantages:
* Since this is a global object, I think it makes more sense in mlt_factory.
* The user can control the number of threads by setting
MLT_GLOBAL_SLICES_COUNT - this would also work for melt.
* The application can control the number of threads by setting
MLT_GLOBAL_SLICES_COUNT.
* Any service that uses the global pool has *no* control over the number of
threads.
* If the global pool is never used, it is never created.
* The pool is destroyed when the factory is closed.



diff --git a/src/framework/mlt_factory.c b/src/framework/mlt_factory.c
index e43f08f..a4ff248 100644
--- a/src/framework/mlt_factory.c
+++ b/src/framework/mlt_factory.c
@@ -61,6 +61,8 @@ static mlt_repository repository = NULL;
 static mlt_properties event_object = NULL;
 /** for tracking the unique_id set on each constructed service */
 static int unique_id = 0;
+/** a global slices pool */
+static mlt_slices global_slices = NULL;
 
 /* Event transmitters. */
 
@@ -458,3 +460,23 @@ mlt_properties mlt_global_properties( )
 {
     return global_properties;
 }
+
+/** Get the global sliced thread pool.
+ *
+ * One can override the number of threads in the global slices pool
+ * by setting the environment variable MLT_GLOBAL_SLICES_COUNT.
+ *
+ * \return the global slices object
+ */
+
+mlt_slices mlt_factory_global_slices()
+{
+    if( !global_slices )
+    {
+        char *env = getenv( "MLT_GLOBAL_SLICES_COUNT" );
+        int threads = env ? atoi(env) : -1;
+        global_slices = mlt_slices_init_pool( threads, MLT_SLICES_SCHED_NORMAL, MLT_SLICES_SCHED_NORMAL, "_mlt_slices_global" );
+        mlt_factory_register_for_clean_up( global_slices, mlt_slices_close );
+    }
+    return global_slices;
+}
diff --git a/src/framework/mlt_factory.h b/src/framework/mlt_factory.h
index dfc2024..9ae9d06 100644
--- a/src/framework/mlt_factory.h
+++ b/src/framework/mlt_factory.h
@@ -58,5 +58,6 @@ extern mlt_consumer mlt_factory_consumer( mlt_profile profile, const char *name,
 extern void mlt_factory_register_for_clean_up( void *ptr, mlt_destructor destructor );
 extern void mlt_factory_close( );
 extern mlt_properties mlt_global_properties( );
+extern mlt_slices mlt_factory_global_slices();
 
 #endif
diff --git a/src/framework/mlt_types.h b/src/framework/mlt_types.h
index 7850c5c..202f0d7 100644
--- a/src/framework/mlt_types.h
+++ b/src/framework/mlt_types.h
@@ -166,6 +166,7 @@ typedef struct mlt_repository_s *mlt_repository;        /**< pointer to Reposito
 typedef struct mlt_cache_s *mlt_cache;                  /**< pointer to Cache object */
 typedef struct mlt_cache_item_s *mlt_cache_item;        /**< pointer to CacheItem object */
 typedef struct mlt_animation_s *mlt_animation;          /**< pointer to Property Animation object */
+typedef struct mlt_slices_s *mlt_slices;                /**< pointer to Sliced processing context object */
 
 typedef void ( *mlt_destructor )( void * );             /**< pointer to destructor function */
 typedef char *( *mlt_serialiser )( void *, int length );/**< pointer to serialization function */


      From: Dan Dennedy <d...@dennedy.org>
 To: Brian Matherly <c...@brianmatherly.com>; Maksym Veremeyenko 
<ve...@m1stereo.tv>; mlt-devel <mlt-devel@lists.sourceforge.net> 
 Sent: Saturday, January 28, 2017 8:07 PM
 Subject: Re: [Mlt-devel] RFC mlt_slices global shared pool
   


On Sat, Jan 28, 2017 at 3:40 PM Brian Matherly <c...@brianmatherly.com> wrote:

I've not used the slices module nor studied it deeply. So here are some random 
comments for your consideration.
* Who would be responsible for *first* initializing the global pool?
* Whoever calls it first "wins" by being able to decide the number of threads. 

First caller, which is currently any one of the modules that integrated 
mlt_slices. However, it could also be the app before doing anything else around 
the same place it calls mlt_factory_init(). 
Maybe the number of threads should not be a parameter to 
mlt_slices_init_global_pool(). It could be an environment variable, instead.

* Should there also be a mlt_slices_close_global_pool() so that the library can 
be shut down cleanly? Alternately, you could register the pool with 
mlt_factory_register_for_clean_up() in mlt_slices_init_global_pool(). For that 
matter, the global pool could be created by mlt_factory_init().



Currently the pools are tracked in a hidden global mlt_properties, which does 
not get closed anywhere. If the app calls mlt_slices_init_global_pool(), it 
should call mlt_slices_close() at exit or shutdown. If all references are
released, then threads will be closed, and the only remaining thing is an 
mlt_properties in memory that gets released by the kernel when the process 
exits.
It could instead be stored in mlt_global_properties(), which gets initialized 
in mlt_factory_init() and closed in mlt_factory_close(). Then, an app could 
initialize with a specific number of threads just after calling 
mlt_factory_init(). 
* I wonder if the global pool should be initialized and closed by the 
application. Or maybe as part of some other initialization like 
mlt_factory_init()

* Theoretically speaking, MLT could do well with only two global thread pools: 
one for high priority tasks and one for normal priority tasks. Maybe there 
should only be two thread pools; they could both be global, and services would
be discouraged from creating additional pools. Additionally, in this 
hypothetical scenario, there would only be two thread counts to worry about: 
the number of global high priority threads and the number of global normal 
priority threads.

Those are my random ideas.
Cheers,
~Brian

      From: Dan Dennedy <d...@dennedy.org>
 To: Maksym Veremeyenko <ve...@m1stereo.tv>; mlt-devel 
<mlt-devel@lists.sourceforge.net> 
 Sent: Saturday, January 28, 2017 3:19 PM
 Subject: [Mlt-devel] RFC mlt_slices global shared pool
  
It seems to me that outside of the special high-priority, low-latency producer 
and consumers, most services should be using one global shared pool to prevent 
too many pools and threads created. Here is a code change I am playing with to 
add function mlt_slices_init_global_pool(int threads), which runs at normal 
priority. What do you think?
--- src/framework/mlt.vers
+++ src/framework/mlt.vers
@@ -493,4 +493,5 @@ MLT_6.6.0 {
   global:
     mlt_slices_count;
     mlt_slices_init_pool;
+    mlt_slices_init_global_pool;
 } MLT_6.4.0;
--- src/framework/mlt_slices.c
+++ src/framework/mlt_slices.c
@@ -27,6 +27,7 @@
 #include <stdlib.h>
 #include <unistd.h>
 #include <pthread.h>
+#include <sched.h>
 #ifdef _WIN32
 #ifdef _WIN32_WINNT
 #undef _WIN32_WINNT
@@ -129,9 +130,9 @@ static void* mlt_slices_worker( void* p )
 /** Initialize a sliced threading context
  *
  * \public \memberof mlt_slices_s
- * \param threads number of threads to use for job list
- * \param policy scheduling policy of processing threads
- * \param priority priority value that can be used with the scheduling algorithm
+ * \param threads number of threads to use for job list, 0 for #cpus
+ * \param policy scheduling policy of processing threads, -1 for normal
+ * \param priority priority value that can be used with the scheduling algorithm, -1 for maximum
  * \return the context pointer
  */
@@ -184,6 +185,10 @@ mlt_slices mlt_slices_init( int threads, int policy, int priority )
 	pthread_cond_init ( &ctx->cond_var_job, NULL );
 	pthread_cond_init ( &ctx->cond_var_ready, NULL );
 	pthread_attr_init( &tattr );
+	if ( policy < 0 )
+		policy = SCHED_OTHER;
+	if ( priority < 0 )
+		priority = sched_get_priority_max( policy );
 	pthread_attr_setschedpolicy( &tattr, policy );
 	param.sched_priority = priority;
 	pthread_attr_setschedparam( &tattr, &param );
@@ -309,9 +314,9 @@ void mlt_slices_run( mlt_slices ctx, int jobs, mlt_slices_proc proc, void* cooki
 /** Initialize a sliced threading context pool
  *
  * \public \memberof mlt_slices_s
- * \param threads number of threads to use for job list
- * \param policy scheduling policy of processing threads
- * \param priority priority value that can be used with the scheduling algorithm
+ * \param threads number of threads to use for job list, 0 for #cpus
+ * \param policy scheduling policy of processing threads, -1 for normal
+ * \param priority priority value that can be used with the scheduling algorithm, -1 for maximum
  * \param name name of pool of threads
  * \return the context pointer
  */
@@ -355,6 +360,18 @@ mlt_slices mlt_slices_init_pool( int threads, int policy, int priority, const ch
 	return ctx;
 }
 
+/** Initialize the global sliced thread pool.
+ *
+ * \public \memberof mlt_slices_s
+ * \param threads number of threads to use for job list, 0 for #cpus
+ * \return the context pointer
+ */
+
+mlt_slices mlt_slices_init_global_pool(int threads)
+{
+	return mlt_slices_init_pool( threads, MLT_SLICES_SCHED_NORMAL, MLT_SLICES_SCHED_NORMAL, "_mlt_slices_global" );
+}
+
 /** Get the number of slices.
  *
  * \public \memberof mlt_slices_s
--- src/framework/mlt_slices.h
+++ src/framework/mlt_slices.h
@@ -23,6 +23,8 @@
 #ifndef MLT_SLICES_H
 #define MLT_SLICES_H
 
+#define MLT_SLICES_SCHED_NORMAL (-1)
+
 struct mlt_slices_s;
 typedef struct mlt_slices_s *mlt_slices;                  /**< pointer to Sliced processing context object */
@@ -38,4 +40,6 @@ extern int mlt_slices_count( mlt_slices ctx );
 
 extern mlt_slices mlt_slices_init_pool( int threads, int policy, int priority, const char* name );
 
+extern mlt_slices mlt_slices_init_global_pool( int threads );
+
 #endif


------------------------------------------------------------------------------
Check out the vibrant tech community on one of the world's most
engaging tech sites, SlashDot.org! http://sdm.link/slashdot
_______________________________________________
Mlt-devel mailing list
Mlt-devel@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/mlt-devel


   


   