This is an automated email from the ASF dual-hosted git repository.

okislal pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/madlib.git
The following commit(s) were added to refs/heads/master by this push:
     new 1a9b557  DL: Clean up user docs
1a9b557 is described below

commit 1a9b557ab3c221a2c3956a201cfe2ef060977f39
Author: Orhan Kislal <okis...@apache.org>
AuthorDate: Thu Mar 26 16:53:33 2020 -0400

    DL: Clean up user docs
---
 doc/mainpage.dox.in                                            |  1 -
 src/ports/postgres/modules/deep_learning/madlib_keras.sql_in   | 12 ++++++------
 .../deep_learning/madlib_keras_fit_multiple_model.sql_in       | 12 ++++++------
 3 files changed, 12 insertions(+), 13 deletions(-)

diff --git a/doc/mainpage.dox.in b/doc/mainpage.dox.in
index 82be4a5..189b948 100644
--- a/doc/mainpage.dox.in
+++ b/doc/mainpage.dox.in
@@ -296,7 +296,6 @@ Interface and implementation are subject to change.
 @brief Train multiple deep learning models at the same time for model architecture search and hyperparameter selection.
 @details Train multiple deep learning models at the same time for model architecture search and hyperparameter selection.
 @{
-    @defgroup grp_automl AutoML
     @defgroup grp_keras_run_model_selection Run Model Selection
     @defgroup grp_keras_setup_model_selection Setup Model Selection
 @}
diff --git a/src/ports/postgres/modules/deep_learning/madlib_keras.sql_in b/src/ports/postgres/modules/deep_learning/madlib_keras.sql_in
index 75fa56a..b0c77a3 100644
--- a/src/ports/postgres/modules/deep_learning/madlib_keras.sql_in
+++ b/src/ports/postgres/modules/deep_learning/madlib_keras.sql_in
@@ -87,15 +87,15 @@ Note that the following MADlib functions are targeting
 a specific Keras version (2.2.4) with a specific TensorFlow kernel version (1.14).
 Using a newer or older version may or may not work as intended.
 
-@note CUDA GPU memory cannot be released until the process holding it is terminated.
-When a MADlib deep learning function is called with GPUs, Greenplum internally
-creates a process (called a slice) which calls TensorFlow to do the computation.
+@note CUDA GPU memory cannot be released until the process holding it is terminated. 
+When a MADlib deep learning function is called with GPUs, Greenplum internally 
+creates a process (called a slice) which calls TensorFlow to do the computation. 
 This process holds the GPU memory until one of the following two things happen:
-query finishes and user logs out of the Postgres client/session; or,
-query finishes and user waits for the timeout set by `gp_vmem_idle_resource_timeout`.
+query finishes and user logs out of the Postgres client/session; or,
+query finishes and user waits for the timeout set by gp_vmem_idle_resource_timeout.
 The default value for this timeout is 18 sec [8]. So the recommendation is:
 log out/reconnect to the session after every GPU query; or
-wait for `gp_vmem_idle_resource_timeout` before you run another GPU query (you can
+wait for gp_vmem_idle_resource_timeout before you run another GPU query (you can
 also set it to a lower value).
 
 @anchor keras_fit
diff --git a/src/ports/postgres/modules/deep_learning/madlib_keras_fit_multiple_model.sql_in b/src/ports/postgres/modules/deep_learning/madlib_keras_fit_multiple_model.sql_in
index b929724..4d1eb09 100644
--- a/src/ports/postgres/modules/deep_learning/madlib_keras_fit_multiple_model.sql_in
+++ b/src/ports/postgres/modules/deep_learning/madlib_keras_fit_multiple_model.sql_in
@@ -94,15 +94,15 @@ release the disk space once the fit multiple query has completed execution.
 This is not the case for GPDB 6+ where disk space is released during
 the fit multiple query.
 
-@note CUDA GPU memory cannot be released until the process holding it is terminated.
-When a MADlib deep learning function is called with GPUs, Greenplum internally
-creates a process (called a slice) which calls TensorFlow to do the computation.
+@note CUDA GPU memory cannot be released until the process holding it is terminated.
+When a MADlib deep learning function is called with GPUs, Greenplum internally
+creates a process (called a slice) which calls TensorFlow to do the computation. 
 This process holds the GPU memory until one of the following two things happen:
-query finishes and user logs out of the Postgres client/session; or,
-query finishes and user waits for the timeout set by `gp_vmem_idle_resource_timeout`.
+query finishes and user logs out of the Postgres client/session; or,
+query finishes and user waits for the timeout set by gp_vmem_idle_resource_timeout.
 The default value for this timeout is 18 sec [8]. So the recommendation is:
 log out/reconnect to the session after every GPU query; or
-wait for `gp_vmem_idle_resource_timeout` before you run another GPU query (you can
+wait for gp_vmem_idle_resource_timeout before you run another GPU query (you can
 also set it to a lower value).
 
 @anchor keras_fit
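The note edited in both files recommends lowering `gp_vmem_idle_resource_timeout` rather than waiting for it. A minimal sketch of what that looks like in a psql session, assuming a Greenplum database where the GUC is session-settable (its value is in milliseconds; the 18000 ms default corresponds to the 18 sec mentioned in the docs):

```sql
-- Sketch only: lower the idle-resource timeout for the current session so
-- the slice process holding CUDA GPU memory is terminated sooner after a
-- GPU query finishes. 5000 ms is an illustrative value, not a recommendation.
SET gp_vmem_idle_resource_timeout = 5000;

-- ... run a GPU-enabled call such as madlib.madlib_keras_fit(...) here ...

-- Confirm the current setting.
SHOW gp_vmem_idle_resource_timeout;
```

The alternative from the note, logging out and reconnecting after each GPU query, releases the memory immediately but is harder to automate from a long-lived client session.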