[ https://issues.apache.org/jira/browse/SVN-4796?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
BINETIX SUPPORT TEAM updated SVN-4796:
--------------------------------------
Description:
This is a clearly reproducible problem on SVN versions 1.10.2 and the latest
1.11.0.
We found two similar, already closed issues, SVN-1964 and SVN-3411, but they
do not cover our case; they are mentioned here only for reference.
*Description*
We use the SVN API to create and then commit multiple transactions with
multiple files. To simplify the scenario, the code below applies only 3
actions per transaction:
# Create a transaction
# Add only one directory or one empty file (a file with content makes no
difference)
# Commit the transaction
Now let's run many transactions like this one in a sequential loop (no
concurrent usage).
After each commit SVN's memory consumption grows faster than linearly, and the
memory is never released afterwards, as the table below shows (number of
committed transactions vs. the application's memory usage):
||Transactions committed||Application memory usage||
|0|380 KB (initial state, application just started, 0 transactions)|
|1,000|20 MB|
|2,000|66 MB|
|3,000|120 MB|
|4,000|210 MB|
|5,000|320 MB|
|6,000|455 MB|
|7,000|607 MB|
|8,000|808 MB|
|9,000|1.1 GB|
|10,000|1.4 GB|
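How the memory usage itself is sampled is not essential to the report; as an
illustration only, the hypothetical helper below reads the process's resident
set size from /proc/self/status on Linux and could be called, e.g., once every
1,000 commits to obtain a table like the one above.
{code:c}
/* Illustration only -- sample_vmrss_kb() is a hypothetical helper, not part
 * of our test program.  It reads the VmRSS line from /proc/self/status
 * (Linux) and returns the resident set size in kB, or -1 on error. */
#include <stdio.h>
#include <string.h>

static long sample_vmrss_kb(void)
{
    FILE *fp = fopen("/proc/self/status", "r");
    char line[256];
    long kb = -1;

    if (!fp)
        return -1;
    while (fgets(line, sizeof(line), fp)) {
        /* The line of interest looks like: "VmRSS:     1234 kB" */
        if (strncmp(line, "VmRSS:", 6) == 0) {
            sscanf(line + 6, "%ld", &kb);
            break;
        }
    }
    fclose(fp);
    return kb;
}
{code}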
–
*Observations*
A few curious facts were observed during the tests:
* If we delete the added file before committing, so that the transaction
remains empty, then there is no memory leak;
* If we add not just one file or directory but 40,000 - 48,000 entries in a
single transaction, then the files are created VERY slowly, and the memory
consumption is doubled or even tripled compared to the table above;
* If we create and commit 10,000 entries (in our example) in the root
directory (it is important that this happens in the root) and then stop the
application, then on the next application start with the same repository there
are three cases:
** if we add more entries in the root again, the memory leak grows in the same
proportions as described above, BUT,
** if we add more entries in a subdirectory, the memory usage stays at about
20 MB until a threshold of roughly 9,000 files is reached; above this relative
threshold, adding more entries in the same subdirectory makes the memory leak
grow again, AND FINALLY,
** if we add more entries in any other subdirectory and stay below the
described relative threshold, no leak is observed. In other words, once we
have created (relatively speaking) 10,000 files in the root, we can add up to
9,000 files in each subdirectory without a memory leak;
* When the transaction is destroyed instead of committed, there is no memory
leak (see the sketch right after this list);
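As an illustration of the last observation (this helper is hypothetical and
not part of our test program; it assumes the same headers and the same
create_unique_file_name() helper as the sample code below), the transaction is
built exactly as in the loop body below but destroyed with svn_fs_abort_txn()
instead of committed; in this case we observe no memory growth:
{code:c}
/* Hypothetical sketch of the "transaction destroy" case: build the
 * transaction as in the sample code below, then abort it instead of
 * committing it.  No memory growth is observed in this variant. */
static svn_error_t *
make_and_abort_txn(svn_repos_t *repos, svn_fs_t *fs, apr_pool_t *subpool)
{
    svn_fs_txn_t *txn;
    svn_fs_root_t *txn_root;
    svn_revnum_t youngest;

    SVN_ERR(svn_fs_youngest_rev(&youngest, fs, subpool));
    SVN_ERR(svn_repos_fs_begin_txn_for_commit(&txn, repos, youngest,
                                              "user_name", "comment default",
                                              subpool));
    SVN_ERR(svn_fs_txn_root(&txn_root, txn, subpool));
    SVN_ERR(svn_fs_make_file(txn_root, create_unique_file_name(subpool),
                             subpool));

    /* Abort instead of commit: no leak observed. */
    SVN_ERR(svn_fs_abort_txn(txn, subpool));
    return SVN_NO_ERROR;
}
{code}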
Despite these observations and the strange "workarounds", we obviously CANNOT
use Subversion under production conditions in this state. We hope you will
find a way to fix this issue.
–
*Sample code for reproduction*
{code:c}
#include <stdio.h>

#include <apr_pools.h>

#include "svn_error.h"
#include "svn_fs.h"
#include "svn_pools.h"
#include "svn_repos.h"

#define MAX_LOOP 10000

/* Helper (implementation not shown): returns a unique path under "/". */
const char *create_unique_file_name(apr_pool_t *pool);

svn_error_t *test_func(const char *_szRepository_path, apr_pool_t *pool)
{
    svn_repos_t *repos;
    svn_fs_t *fs;

    SVN_ERR(svn_repos_open3(&repos, _szRepository_path, NULL, pool, pool));
    fs = svn_repos_fs(repos);

    apr_pool_t *subpool = svn_pool_create(pool);
    for (int i = 0; i < MAX_LOOP; i++) {
        svn_fs_txn_t *txn;
        svn_fs_root_t *txn_root;
        svn_revnum_t youngest;

        svn_pool_clear(subpool);

        SVN_ERR(svn_fs_youngest_rev(&youngest, fs, subpool));
        SVN_ERR(svn_repos_fs_begin_txn_for_commit(&txn, repos, youngest,
                                                  "user_name",
                                                  "comment default",
                                                  subpool));

        /*---- Create unique file name and add to "/" in repository */
        SVN_ERR(svn_fs_txn_root(&txn_root, txn, subpool));
        const char *pszFile_name = create_unique_file_name(subpool);
        SVN_ERR(svn_fs_make_file(txn_root, pszFile_name, subpool));

        if ((i % 100) == 0)
            printf("added: %d\n", i + 1);

        const char *conflict = NULL;
        svn_revnum_t new_rev = SVN_INVALID_REVNUM;

        SVN_ERR(svn_fs_commit_txn(&conflict, &new_rev, txn, subpool));
        if (!SVN_IS_VALID_REVNUM(new_rev))
            return svn_error_create(SVN_ERR_FS_GENERAL, NULL,
                                    "Commit failed...");
    }
    svn_pool_destroy(subpool);
    return SVN_NO_ERROR;
}
{code}
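The driver around test_func() is omitted above; a minimal, purely illustrative
sketch of one (not our original test program) could look like this:
{code:c}
/* Minimal driver sketch -- illustration only, not the original test program.
 * It initializes APR, creates a root pool, and runs test_func() against an
 * existing FSFS repository whose path is given on the command line. */
#include <stdio.h>
#include <stdlib.h>

#include <apr_general.h>

#include "svn_error.h"
#include "svn_pools.h"
#include "svn_types.h"

svn_error_t *test_func(const char *_szRepository_path, apr_pool_t *pool);

int main(int argc, char *argv[])
{
    apr_pool_t *pool;
    svn_error_t *err;
    int exit_code = EXIT_SUCCESS;

    if (argc < 2) {
        fprintf(stderr, "usage: %s REPOS_PATH\n", argv[0]);
        return EXIT_FAILURE;
    }
    if (apr_initialize() != APR_SUCCESS)
        return EXIT_FAILURE;

    pool = svn_pool_create(NULL);
    err = test_func(argv[1], pool);
    if (err) {
        svn_handle_error2(err, stderr, FALSE, "repro: ");
        svn_error_clear(err);
        exit_code = EXIT_FAILURE;
    }

    svn_pool_destroy(pool);
    apr_terminate();
    return exit_code;
}
{code}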
> SVN leaks exponentially significant amount of memory on commit
> --------------------------------------------------------------
>
> Key: SVN-4796
> URL: https://issues.apache.org/jira/browse/SVN-4796
> Project: Subversion
> Issue Type: Bug
> Components: libsvn_delta, libsvn_fs, libsvn_fs_fs, libsvn_fs_x,
> libsvn_repos, libsvn_subr
> Affects Versions: 1.11.0, 1.10.2
> Environment: Test environment:
> * Linux (Ubuntu 18.04.1 LTS)
> * 8 & 16GB RAM dev environments
> * GCC/G++ 7.3.0
> * Apache Portable Runtime Library, version 1.6.3-2 amd64
> * Repositories are created on EXT4 or XFS
> * Repositories are created on SSD
> * SVN repositories' type is FSFS
> * SVN repositories' file system version is the latest for the respective API
> Reporter: BINETIX SUPPORT TEAM
> Priority: Critical
> Labels: features, ready-to-commit, usability