Re: svn x-shelve fails with "checksum mismatch" error
On 03.08.2020 15:19, Johan Corveleyn wrote:
> On Mon, Aug 3, 2020 at 2:44 PM Marc Strapetz wrote:
>> ...
>> D:\temp\tiny.dir.svn>svn x-shelve shelf1
>> --- Shelve 'shelf1' in WC root 'D:/temp/tiny.dir.svn'
>> --- Shelving...
>> Updating '.svn\experimental\shelves\v3\7368656c6631-001.wc':
>> At revision 960.
>> Sending        sub.txt
>> Transmitting file data .
>> svn: E200014: Checksum mismatch for 'D:\temp\tiny.dir.svn\.svn\experimental\shelves\v3\7368656c6631-001.wc\sub.txt':
>>    expected:  ec87e2cd3ddf5490cbe10e07301c114c
>>      actual:  b89724a2cec6a05364bb3af4c74fd452
>>
>> Tested on Windows 8.1 with Subversion 1.14.0 (built ourselves).
>
> I think this is known as issue SVN-4827 (svn x-shelve gives E200014:
> Checksum mismatch when using eol-style=native or keywords)
> https://issues.apache.org/jira/browse/SVN-4827

Thanks, I can confirm that after resetting svn:eol-style, shelve works
as expected.

--
Best regards,
Marc Strapetz
syntevo GmbH
http://www.syntevo.com
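For context on why svn:eol-style triggers this: a minimal Java sketch, not from this thread (class name and sample content are illustrative), showing that the same logical file content hashes to different MD5 digests depending on line endings. When the working-copy file is stored with translated EOLs but the recorded checksum was computed over the untranslated form, a comparison like the one in the error above fails.

```java
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;

public class EolChecksumDemo {
    // Hex-encoded MD5 of the given text, the digest Subversion uses
    // for working-copy file checksums.
    static String md5(String text) {
        try {
            byte[] digest = MessageDigest.getInstance("MD5")
                    .digest(text.getBytes(StandardCharsets.UTF_8));
            StringBuilder sb = new StringBuilder();
            for (byte b : digest)
                sb.append(String.format("%02x", b));
            return sb.toString();
        } catch (NoSuchAlgorithmException e) {
            throw new IllegalStateException(e); // MD5 is always available
        }
    }

    public static void main(String[] args) {
        // Same logical content, different EOLs -> different checksums.
        String lf = "1\n2\n3\n4\n";
        String crlf = "1\r\n2\r\n3\r\n4\r\n";
        System.out.println(md5(lf));
        System.out.println(md5(crlf));
        System.out.println(md5(lf).equals(md5(crlf))); // prints "false"
    }
}
```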
svn x-shelve fails with "checksum mismatch" error
I just gave the experimental shelve-feature a try and ran into a
"checksum mismatch" error when invoking "svn x-shelve". Any ideas how
to resolve this?

D:\temp\tiny.dir.svn>dir
...
03.08.2020  14:30                 .
03.08.2020  14:30                 ..
03.08.2020  14:30              36 sub.txt
               1 File(s)             36 bytes
               2 Dir(s)   4 923 518 976 bytes free

D:\temp\tiny.dir.svn>svn diff
Index: sub.txt
===================================================================
--- sub.txt     (revision 960)
+++ sub.txt     (working copy)
@@ -1,4 +1,4 @@
-1
+1a
 2
 3
 4

D:\temp\tiny.dir.svn>svn info
Path: .
Working Copy Root Path: D:\temp\tiny.dir.svn
URL: file://localhost/D:/svnrepos/test/tiny/trunk/dir
Relative URL: ^/tiny/trunk/dir
Repository Root: file://localhost/D:/svnrepos/test
Repository UUID: b64c88d0-5acc-524b-887a-f04c2292f55f
Revision: 960
Node Kind: directory
Schedule: normal
Last Changed Author: marc
Last Changed Rev: 893
Last Changed Date: 2017-09-26 16:04:19 +0200 (Di, 26 Sep 2017)

D:\temp\tiny.dir.svn>svn x-shelve shelf1
--- Shelve 'shelf1' in WC root 'D:/temp/tiny.dir.svn'
--- Shelving...
Updating '.svn\experimental\shelves\v3\7368656c6631-001.wc':
At revision 960.
Sending        sub.txt
Transmitting file data .
svn: E200014: Checksum mismatch for 'D:\temp\tiny.dir.svn\.svn\experimental\shelves\v3\7368656c6631-001.wc\sub.txt':
   expected:  ec87e2cd3ddf5490cbe10e07301c114c
     actual:  b89724a2cec6a05364bb3af4c74fd452

Tested on Windows 8.1 with Subversion 1.14.0 (built ourselves).

--
Best regards,
Marc Strapetz
syntevo GmbH
http://www.syntevo.com
Re: Subversion 2.0
On 25.06.2019 23:35, Branko Čibej wrote:
> On 25.06.2019 19:15, Thomas Singer wrote:
>> What I don't like:
>> - after more than a decade the umlaut problem of composed/decomposed
>>   UTF-8 has not been solved
>
> It has, actually, in Apple's APFS, where the fix belongs.

That sounds interesting. Just to be sure, you are referring to this
problem: https://issues.apache.org/jira/browse/SVN-2464 ?

It would be great to have some more information about which OSX
versions and which file systems resolve the problem.

-Marc
Re: org.apache.subversion.javahl.ClientException: Found invalid algorithm in certificate
On 10.01.2018 02:44, Philip Martin wrote:
> Marc Strapetz writes:
>>> Marc, please let us know if you learnt any more about this problem.
>>
>> Unfortunately we didn't make progress here since my posting.
>
> I have just fixed a bug in the JavaHL implementation of SSL trust
> prompting, see r1820718. I don't know if it applies in your case, but
> it might if the client was attempting to accept the cert failures
> temporarily.

We have cherry-picked your fix onto the 1.9.7 tag but unfortunately it
doesn't solve the problem for the user.

-Marc
Re: org.apache.subversion.javahl.ClientException: Found invalid algorithm in certificate
> Marc, please let us know if you learnt any more about this problem.

Unfortunately we didn't make progress here since my posting.

-Marc

On 02.01.2018 11:26, Julian Foad wrote:
> Ping! Anyone? (I noticed this message had no response on the list so
> far.)
>
> Marc, please let us know if you learnt any more about this problem.
>
> Thanks,
> - Julian
>
> Marc Strapetz wrote:
>> One of our users is encountering the following exception:
>>
>>   svn: Java exception
>>   svn: Wrapped Java Exception
>>         at org.apache.subversion.javahl.remote.RemoteFactory.open(Native Method)
>>         at org.apache.subversion.javahl.remote.RemoteFactory.openRemoteSession(RemoteFactory.java:228)
>>         ...
>>   Caused by: org.apache.subversion.javahl.ClientException: Found invalid algorithm in certificate
>>   Unexpected ASN1 tag
>>         ...
>>
>> with Subversion 1.9.7. He writes that the command line client works
>> fine for him. It warns about the certificate not being "issued by a
>> trusted authority" and then shows the expected "(R)eject, accept
>> (t)emporarily or accept (p)ermanently?" question. This happens on
>> Linux.
>>
>> Any ideas why the command line client and JavaHL might behave
>> differently here? Note that the command line binaries are not
>> identical to the JavaHL binaries, but both have been compiled from
>> Subversion 1.9.7. Could this be a problem in our build process?
>>
>> -Marc
org.apache.subversion.javahl.ClientException: Found invalid algorithm in certificate
One of our users is encountering the following exception:

  svn: Java exception
  svn: Wrapped Java Exception
        at org.apache.subversion.javahl.remote.RemoteFactory.open(Native Method)
        at org.apache.subversion.javahl.remote.RemoteFactory.openRemoteSession(RemoteFactory.java:228)
        ...
  Caused by: org.apache.subversion.javahl.ClientException: Found invalid algorithm in certificate
  Unexpected ASN1 tag
        ...

with Subversion 1.9.7. He writes that the command line client works
fine for him. It warns about the certificate not being "issued by a
trusted authority" and then shows the expected "(R)eject, accept
(t)emporarily or accept (p)ermanently?" question. This happens on
Linux.

Any ideas why the command line client and JavaHL might behave
differently here? Note that the command line binaries are not identical
to the JavaHL binaries, but both have been compiled from Subversion
1.9.7. Could this be a problem in our build process?

-Marc
Re: JavaHL: redirect cycle detected for non-cyclic redirects
On 24.05.2017 19:59, Branko Čibej wrote:
> On 24.05.2017 19:37, Branko Čibej wrote:
>> On 24.05.2017 12:19, Marc Strapetz wrote:
>>> I have the following Apache virtual host configuration which
>>> contains a redirect:
>>>
>>>   RedirectMatch 301 ^/svntest/(.*)$ /svntests/$1
>>>
>>>   DAV svn
>>>   SVNParentPath /misc/svntests
>>>   ...
>>>
>>> When trying to access a redirected repository from the command
>>> line, this works fine:
>>>
>>>   $ svn ls https://host/svntest/test1
>>>   Redirecting to URL 'https://host/svntests/test1':
>>>   project1/
>>>
>>> When trying to access it using JavaHL, a "Redirect cycle detected
>>> for URL" SubversionException is thrown. Code snippet:
>>>
>>>   RemoteFactory remoteFactory = new RemoteFactory();
>>>   remoteFactory.openRemoteSession("https://host/svntest/test1", 100);
>>>
>>> As the definition is not cyclic and retryAttempts=100 should be
>>> sufficient, it looks like there is a JavaHL problem related to
>>> redirects?
>>
>> Could be a bug in the redirect detection logic in JavaHL. I'll take
>> a look.
>
> Can you try this patch, please?
>
> Index: subversion/bindings/javahl/native/RemoteSession.cpp
> ===================================================================
> --- subversion/bindings/javahl/native/RemoteSession.cpp (revision 1796083)
> +++ subversion/bindings/javahl/native/RemoteSession.cpp (working copy)
> @@ -214,8 +214,9 @@ RemoteSession::RemoteSession(int retryAttempts,
>            cycle_detected = true;
>            break;
>          }
> -      /* ### Shouldn't url be updated for the next attempt?
> -         ### There is no real cycle if we just do the same thing twice? */
> +
> +      url = corrected_url;
> +      corrected_url = NULL;
>      }
>
>    if (cycle_detected)

Thanks, Brane! We have applied the patch to the 1.9.x branch and I can
confirm that it's working. Will it be possible to backport the patch to
the 1.9.x branch in the Subversion repository, too?

-Marc
JavaHL: redirect cycle detected for non-cyclic redirects
I have the following Apache virtual host configuration which contains
a redirect:

  RedirectMatch 301 ^/svntest/(.*)$ /svntests/$1

  DAV svn
  SVNParentPath /misc/svntests
  ...

When trying to access a redirected repository from the command line,
this works fine:

  $ svn ls https://host/svntest/test1
  Redirecting to URL 'https://host/svntests/test1':
  project1/

When trying to access it using JavaHL, a "Redirect cycle detected for
URL" SubversionException is thrown. Code snippet:

  RemoteFactory remoteFactory = new RemoteFactory();
  remoteFactory.openRemoteSession("https://host/svntest/test1", 100);

As the definition is not cyclic and retryAttempts=100 should be
sufficient, it looks like there is a JavaHL problem related to
redirects?

Tested with Subversion 1.9.5

-Marc
JavaHL: specify --extensions for blame
We have been requested to support

  svn blame -x "--ignore-eol-style -w"

for our Java client. AFAIU, this is currently not possible using
JavaHL? If so, please take this as an RFE: support for the --extensions
option for blame, diff and related operations.

-Marc
Re: JavaHL: "Not implemented" error in StatusEditor.addAbsent
On 25.11.2015 17:04, Bert Huijben wrote:
>> -----Original Message-----
>> From: Marc Strapetz [mailto:marc.strap...@syntevo.com]
>> Sent: woensdag 25 november 2015 16:09
>> To: Branko Čibej; dev@subversion.apache.org
>> Subject: Re: JavaHL: "Not implemented" error in StatusEditor.addAbsent
>>
>> On 25.11.2015 10:43, Branko Čibej wrote:
>>> On 25.11.2015 09:49, Marc Strapetz wrote:
>>>> One of our users has reported the following exception against
>>>> Subversion 1.9.2:
>>>>
>>>>   Caused by: java.lang.RuntimeException: Not implemented: StatusEditor.addAbsent
>>>>         at org.apache.subversion.javahl.remote.StatusEditor.addAbsent(StatusEditor.java:110)
>>>>         ... 15 more
>>>>
>>>> Actually, StatusEditor.addAbsent looks like this:
>>>>
>>>>   public void addAbsent(String relativePath,
>>>>                         NodeKind kind,
>>>>                         long replacesRevision)
>>>>   {
>>>>       //DEBUG:System.err.println("  [J] StatusEditor.addAbsent");
>>>>       checkState();
>>>>       throw new RuntimeException("Not implemented: StatusEditor.addAbsent");
>>>>   }
>>>>
>>>> Is there any more debug information I should try to collect?
>>>
>>> Well, it's not implemented ... I can't think of anything more
>>> specific?
>>
>> I'm wondering whether it was not implemented by intention, because
>> it's not expected to be called (same as for copy/move)?
>>
>> Actually, this is the only user who is experiencing this problem, so
>> the conditions causing this problem seem to be very specific.
>> Fortunately it's currently perfectly reproducible for him. Should I
>> ask for an "svn status -u" output?
>
> This should be perfectly reproducible when you call status on a
> directory that contains subdirectories that you are not allowed to
> read (via a mod_authz_svn config file or similar svnserve config).
> svn status -u explicitly ignores absent nodes.

Thanks, it actually was for another user now. The attached patch
ignores absent nodes too and has been confirmed to resolve the problem.

-Marc

Index: subversion/bindings/javahl/src/org/apache/subversion/javahl/remote/StatusEditor.java
===================================================================
--- subversion/bindings/javahl/src/org/apache/subversion/javahl/remote/StatusEditor.java (revision 1718836)
+++ subversion/bindings/javahl/src/org/apache/subversion/javahl/remote/StatusEditor.java (working copy)
@@ -107,7 +107,7 @@
     {
         //DEBUG:System.err.println("  [J] StatusEditor.addAbsent");
         checkState();
-        throw new RuntimeException("Not implemented: StatusEditor.addAbsent");
+        // ignore this callback, as svn status -u does
     }

     public void alterDirectory(String relativePath,
svn cleanup error "svn: E720032: Can't remove '...\.svn\tmp\svn-...'
One of our users is reporting frequent clean up errors:

  svn: E720032: Can't remove 'C:\Project\Path\To\WorkingCopy\.svn\tmp\svn-30D0973'
  svn: E720032: Can't remove file 'C:\Project\Path\To\WorkingCopy\.svn\tmp\svn-30D0973':
  Det går inte att komma åt filen eftersom den används av en annan process.
  [Swedish: "The file cannot be accessed because it is being used by
  another process."]

When investigating file system locks, smartsvn.exe (which is using the
Subversion binaries) is the only process which is holding locks on this
file. The locks are held until SmartSVN is closed. Which parts of the
code might hold locks on such temporary files, and might this be caused
by wrong API usage?

As a side note, he is only able to perform a clean up with NetBeans
(which is using SVNKit) once SmartSVN has been closed. This means that
he is also using SVNKit on the same working copy (but not necessarily
concurrently).

-Marc
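As a general illustration of the suspected failure mode (this is not SmartSVN's actual code; names are hypothetical): a stream opened on a temp file and never closed keeps a handle alive for the process lifetime, and on Windows an open handle blocks deletion, which matches the E720032 symptom above. try-with-resources guarantees the handle is released:

```java
import java.io.IOException;
import java.io.OutputStream;
import java.nio.file.Files;
import java.nio.file.Path;

public class TempFileLockDemo {
    // Writes to the temp file and closes the stream; after this method
    // returns, no handle is held and a later cleanup can delete the file.
    static void writeAndRelease(Path tmp) throws IOException {
        try (OutputStream out = Files.newOutputStream(tmp)) {
            out.write("temporary data".getBytes());
        } // handle released here, even if write() threw
    }

    public static void main(String[] args) throws IOException {
        Path tmp = Files.createTempFile("svn-", ".tmp");
        writeAndRelease(tmp);
        // Deletion succeeds because nothing holds the file open anymore.
        System.out.println(Files.deleteIfExists(tmp)); // prints "true"
    }
}
```

A handle leaked by skipping the close (or by a native layer that never releases its descriptor) would make the `deleteIfExists` step fail with "used by another process" on Windows.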
Re: JavaHL: "Not implemented" error in StatusEditor.addAbsent
On 25.11.2015 10:43, Branko Čibej wrote:
> On 25.11.2015 09:49, Marc Strapetz wrote:
>> One of our users has reported the following exception against
>> Subversion 1.9.2:
>>
>>   Caused by: java.lang.RuntimeException: Not implemented: StatusEditor.addAbsent
>>         at org.apache.subversion.javahl.remote.StatusEditor.addAbsent(StatusEditor.java:110)
>>         ... 15 more
>>
>> Actually, StatusEditor.addAbsent looks like this:
>>
>>   public void addAbsent(String relativePath,
>>                         NodeKind kind,
>>                         long replacesRevision)
>>   {
>>       //DEBUG:System.err.println("  [J] StatusEditor.addAbsent");
>>       checkState();
>>       throw new RuntimeException("Not implemented: StatusEditor.addAbsent");
>>   }
>>
>> Is there any more debug information I should try to collect?
>
> Well, it's not implemented ... I can't think of anything more
> specific?

I'm wondering whether it was not implemented by intention, because it's
not expected to be called (same as for copy/move)?

Actually, this is the only user who is experiencing this problem, so
the conditions causing this problem seem to be very specific.
Fortunately it's currently perfectly reproducible for him. Should I ask
for an "svn status -u" output?

-Marc
JavaHL: "Not implemented" error in StatusEditor.addAbsent
One of our users has reported the following exception against
Subversion 1.9.2:

  Caused by: java.lang.RuntimeException: Not implemented: StatusEditor.addAbsent
        at org.apache.subversion.javahl.remote.StatusEditor.addAbsent(StatusEditor.java:110)
        ... 15 more

Actually, StatusEditor.addAbsent looks like this:

  public void addAbsent(String relativePath,
                        NodeKind kind,
                        long replacesRevision)
  {
      //DEBUG:System.err.println("  [J] StatusEditor.addAbsent");
      checkState();
      throw new RuntimeException("Not implemented: StatusEditor.addAbsent");
  }

Is there any more debug information I should try to collect?

-Marc
Re: svn status API and missing switched flag
On 12.10.2015 13:49, Bert Huijben wrote:
>> -----Original Message-----
>> From: Marc Strapetz [mailto:marc.strap...@syntevo.com]
>> Sent: maandag 12 oktober 2015 13:37
>> To: Bert Huijben; dev@subversion.apache.org
>> Subject: Re: svn status API and missing switched flag
>>
>>> The old behavior makes sense if you think of the 'S' as switched
>>> against the ancestor... If you are not looking at an ancestor it
>>> can't be switched. If you see the behavior in 1.7, 1.8 or 1.9 then
>>> I agree that the api should at least provide switched information.
>>
>> I see this behavior with SVN 1.9.2.
>
> I already reproduced the behavior, but thanks for confirming. I'm
> looking at what the options are here...
>
> It looks like it was an explicit api decision for the status walker.
> (The single status result api does provide the switched flag!)
>
> Just adding the flag as a simple fix would make the status much
> slower on a single directory without recursion. Every db transaction
> counts in this performance critical scenario. And as this was
> implemented explicitly this way for wc-ng, we really have an api
> change at hand.

Just to mention, I'm using JavaHL. So I guess I don't have access to
the single status API at all?

Either way, I've now worked around the problem by manually comparing
the expected URL (using the parent URL, which I always know in my case)
against the actual URL and deriving the 'switched' state from this
comparison. So no problem to wait for an API change in a future
release.

-Marc
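The workaround described in this reply can be sketched as a pure URL comparison. A minimal sketch, with a hypothetical helper name (this is not JavaHL API): a child is considered switched when its repository URL differs from the URL its parent's URL would predict.

```java
public class SwitchedCheck {
    // Hypothetical helper: a child is "switched" when its actual
    // repository URL is not simply parentUrl + "/" + name.
    static boolean isSwitched(String parentUrl, String name, String actualUrl) {
        String expected = parentUrl.endsWith("/")
                ? parentUrl + name
                : parentUrl + "/" + name;
        return !expected.equals(actualUrl);
    }

    public static void main(String[] args) {
        // "dir" points into a branch although its parent is trunk: switched.
        System.out.println(isSwitched(
                "https://host/repo/trunk", "dir",
                "https://host/repo/branches/feature/dir")); // prints "true"
        // URL matches the parent-derived expectation: not switched.
        System.out.println(isSwitched(
                "https://host/repo/trunk", "dir",
                "https://host/repo/trunk/dir")); // prints "false"
    }
}
```

A real implementation would also normalize URI encoding before comparing, which this sketch omits.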
Re: svn status API and missing switched flag
On 12.10.2015 12:31, Bert Huijben wrote:
>> -----Original Message-----
>> From: Marc Strapetz [mailto:marc.strap...@syntevo.com]
>> Sent: maandag 12 oktober 2015 10:56
>> To: dev@subversion.apache.org
>> Subject: svn status API and missing switched flag
>>
>> Consider the following working copy for which directory "dir" is
>> switched:
>>
>>   $ svn status -v
>>                  814      813  marc         .
>>   S              814      813  marc         dir
>>                  814      356  strapetz     dir\sub.txt
>>
>> Now, when invoking "svn status" in sub-directory "dir", the
>> "switched" state is not displayed anymore:
>>
>>   $ svn status -v
>>                  814      813  marc         .
>>                  814      356  strapetz     sub.txt
>>
>> From the command line, this may be reasonable, because the user may
>> expect to see the status "relative" to his current working
>> directory. From the API perspective, the missing "switched" flag is
>> not expected. I guess that usually a non-root and non-infinity "svn
>> status" will be invoked to efficiently update the state of a
>> certain directory (at least we do so). Still the state is usually
>> expected to be relative to the working copy root.
>>
>> To resolve this, I'd propose to change core "svn status" itself to
>> evaluate the "switched" flag for the status root directory. This
>> will result in an additional "S", but won't do any harm:
>>
>>   $ svn status -v
>>   S              814      813  marc         .
>>                  814      356  strapetz     sub.txt
>
> Which version of Subversion did you use for this?
>
> Without checking any of the code I would have expected the behavior
> you describe for Subversion <= 1.6, while I would have guessed this
> behavior changed with Subversion 1.7 when we moved to the single
> database per working copy. (Pre 1.7 we simply couldn't open a
> directory above the current working copy in a portable way.)
>
> The old behavior makes sense if you think of the 'S' as switched
> against the ancestor... If you are not looking at an ancestor it
> can't be switched. If you see the behavior in 1.7, 1.8 or 1.9 then I
> agree that the api should at least provide switched information.

I see this behavior with SVN 1.9.2.

-Marc
svn status API and missing switched flag
Consider the following working copy for which directory "dir" is
switched:

  $ svn status -v
                 814      813  marc         .
  S              814      813  marc         dir
                 814      356  strapetz     dir\sub.txt

Now, when invoking "svn status" in sub-directory "dir", the "switched"
state is not displayed anymore:

  $ svn status -v
                 814      813  marc         .
                 814      356  strapetz     sub.txt

From the command line, this may be reasonable, because the user may
expect to see the status "relative" to his current working directory.
From the API perspective, the missing "switched" flag is not expected.
I guess that usually a non-root and non-infinity "svn status" will be
invoked to efficiently update the state of a certain directory (at
least we do so). Still the state is usually expected to be relative to
the working copy root.

To resolve this, I'd propose to change core "svn status" itself to
evaluate the "switched" flag for the status root directory. This will
result in an additional "S", but won't do any harm:

  $ svn status -v
  S              814      813  marc         .
                 814      356  strapetz     sub.txt
JavaHL: strange/impossible IOException
We have just received the following bug report which shows an
impossible stack trace:

  java.io.IOException: No space left on device
        at java.io.FileOutputStream.writeBytes(Native Method)
        at java.io.FileOutputStream.write(Unknown Source)
        at org.apache.subversion.javahl.remote.RemoteSession.nativeGetFile(Native Method)
        at org.apache.subversion.javahl.remote.RemoteSession.getFile(RemoteSession.java:167)

I'm considering it "impossible" because RemoteSession.nativeGetFile
only throws a ClientException and no IOException. I guess the only
possible way to throw a checked exception which is not declared is from
native code. Now I'm wondering whether this might be related to the
exception wrapping/unwrapping problem which Bert has addressed in
r1664939 (and following)?

Btw, these kinds of stack traces have also been reported for the
javahl-1.8-extensions branch.

-Marc
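The "impossible" trace is in fact possible: exceptions raised from native code via JNI are not checked against the Java method's throws clause, so a checked IOException can surface from a method that never declares it. The same effect can be simulated in pure Java with the well-known "sneaky throw" idiom (this demo is illustrative; it is not the Subversion code path):

```java
public class SneakyThrowDemo {
    // Type erasure: the cast is unchecked and T is inferred as
    // RuntimeException at the call site, so the compiler never sees
    // the checked exception escaping.
    @SuppressWarnings("unchecked")
    static <T extends Throwable> void sneakyThrow(Throwable t) throws T {
        throw (T) t;
    }

    // Declares no checked exceptions, just like nativeGetFile, yet a
    // checked IOException escapes from it at runtime.
    static void looksLikeNative() {
        sneakyThrow(new java.io.IOException("No space left on device"));
    }

    public static void main(String[] args) {
        try {
            looksLikeNative();
        } catch (Exception e) {
            System.out.println(e.getClass().getName()); // prints "java.io.IOException"
        }
    }
}
```

JNI's `Throw`/`ThrowNew` behave like `sneakyThrow` here: the JVM propagates whatever Throwable the native code posts, regardless of the Java-level method signature.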
Re: JavaHL, 1.9: "Bad file descriptor", "Stream doesn't support this capability" errors
On 14.08.2015 11:21, Philip Martin wrote:
> Marc Strapetz writes:
>> It's reproducible with an empty repository on the server (just
>> initialized with svnadmin) and a local repository which has been
>> prepared for the initial import:
>>
>>   C:\temp\svn>svn status -v
>>   0        0  ?            .
>>   A-       ?  ?            dir
>>   A-       ?  ?            dir\subfile
>>   A-       ?  ?            file
>>
>>   C:\temp\svn>svn commit -m "initial import"
>>   svn: E140004: Commit failed (details follow):
>>   svn: E140004: Stream doesn't support this capability
>>   svn: E09: Polling for available data on filestream failed: Bad file descriptor
>>
>> On the server, we are running SVN 1.6.17.
>
> That's the apr_poll() call in data_available_handler_apr() failing,
> and E09 could be EBADF. I suppose the file could have been closed, or
> the file descriptor could have been overwritten. What do you see in
> the debugger?

Philip, is there any input you are expecting from my side? Because I
don't have an idea how I should debug this on the server side. Do you
think the problem can be caused by the rather old version SVN 1.6.17 on
the server? Either way, something must have happened in the SVN 1.9
release as well, breaking this.

-Marc
Re: JavaHL, 1.9: "Bad file descriptor", "Stream doesn't support this capability" errors
On 14.08.2015 00:20, Branko Čibej wrote:
> On 13.08.2015 13:32, Marc Strapetz wrote:
>> On 27.07.2015 09:21, Branko Čibej wrote:
>>> On 27.07.2015 09:17, Marc Strapetz wrote:
>>>> One of our 1.9 (early-access) users is reporting problems when
>>>> performing remote commands, for example a copy URL->URL:
>>>>
>>>>   org.apache.subversion.javahl.ClientException: Stream doesn't support this capability
>>>>   Bad file descriptor
>>>>   svn: Polling for available data on filestream failed: Bad file descriptor
>>>>         at org.apache.subversion.javahl.SVNClient.copy(Native Method)
>>>>         at ...
>>>>
>>>> He hasn't encountered such problems with 1.8 versions. AFAIU, he
>>>> is connecting using SSH. Is this an SSH-related problem? Could it
>>>> be related to the underlying SSH client?
>>>
>>> Which platform is this? Can the user reproduce this problem with
>>> the command-line svn on the same machine?
>>
>> It's on Windows, in combination with SSH. I'm now able to reproduce
>> this problem myself and it looks like a regression to me:
>>
>> - It's reproducible with our own Windows binaries as well as with
>>   the WANdisco binaries.
>> - It's reproducible with Plink/Pageant as well as with Trilead SSH.
>> - The commit works fine with Subversion 1.8.
>>
>> Is there any additional information/debugging I can do on my side?
>
> I'd still want to know if the command-line client works. If not, a
> minimal Java program using JavaHL that demonstrates the problem would
> be a real help.

No, the command-line client does not work: neither the binaries we are
building nor WANdisco's binaries.

It's reproducible with an empty repository on the server (just
initialized with svnadmin) and a local repository which has been
prepared for the initial import:

  C:\temp\svn>svn status -v
  0        0  ?            .
  A-       ?  ?            dir
  A-       ?  ?            dir\subfile
  A-       ?  ?            file

  C:\temp\svn>svn commit -m "initial import"
  svn: E140004: Commit failed (details follow):
  svn: E140004: Stream doesn't support this capability
  svn: E09: Polling for available data on filestream failed: Bad file descriptor

On the server, we are running SVN 1.6.17.

-Marc
Re: JavaHL, 1.9: "Bad file descriptor", "Stream doesn't support this capability" errors
On 27.07.2015 09:21, Branko Čibej wrote:
> On 27.07.2015 09:17, Marc Strapetz wrote:
>> One of our 1.9 (early-access) users is reporting problems when
>> performing remote commands, for example a copy URL->URL:
>>
>>   org.apache.subversion.javahl.ClientException: Stream doesn't support this capability
>>   Bad file descriptor
>>   svn: Polling for available data on filestream failed: Bad file descriptor
>>         at org.apache.subversion.javahl.SVNClient.copy(Native Method)
>>         at ...
>>
>> He hasn't encountered such problems with 1.8 versions. AFAIU, he is
>> connecting using SSH. Is this an SSH-related problem? Could it be
>> related to the underlying SSH client?
>
> Which platform is this? Can the user reproduce this problem with the
> command-line svn on the same machine?

It's on Windows, in combination with SSH. I'm now able to reproduce
this problem myself and it looks like a regression to me:

- It's reproducible with our own Windows binaries as well as with the
  WANdisco binaries.
- It's reproducible with Plink/Pageant as well as with Trilead SSH.
- The commit works fine with Subversion 1.8.

Is there any additional information/debugging I can do on my side?

-Marc
Re: JavaHL, 1.9: "Bad file descriptor", "Stream doesn't support this capability" errors
On 27.07.2015 09:21, Branko Čibej wrote:
> On 27.07.2015 09:17, Marc Strapetz wrote:
>> One of our 1.9 (early-access) users is reporting problems when
>> performing remote commands, for example a copy URL->URL:
>>
>>   org.apache.subversion.javahl.ClientException: Stream doesn't support this capability
>>   Bad file descriptor
>>   svn: Polling for available data on filestream failed: Bad file descriptor
>>         at org.apache.subversion.javahl.SVNClient.copy(Native Method)
>>         at ...
>>
>> He hasn't encountered such problems with 1.8 versions. AFAIU, he is
>> connecting using SSH. Is this an SSH-related problem? Could it be
>> related to the underlying SSH client?
>
> Which platform is this?

It's Windows 8.1.

> Can the user reproduce this problem with the command-line svn on the
> same machine?

I'm going to ask him and would point him to the binaries we are
building, unless this could be a problem of the build process -- in
this case, which binaries do you recommend on Windows?

-Marc
JavaHL, 1.9: "Bad file descriptor", "Stream doesn't support this capability" errors
One of our 1.9 (early-access) users is reporting problems when
performing remote commands, for example a copy URL->URL:

  org.apache.subversion.javahl.ClientException: Stream doesn't support this capability
  Bad file descriptor
  svn: Polling for available data on filestream failed: Bad file descriptor
        at org.apache.subversion.javahl.SVNClient.copy(Native Method)
        at ...

He hasn't encountered such problems with 1.8 versions. AFAIU, he is
connecting using SSH. Is this an SSH-related problem? Could it be
related to the underlying SSH client?

-Marc
New source for Subversion binaries
Starting with Subversion 1.9, as part of the SmartSVN build process we
are also creating Subversion command line binaries (client-side only)
which we are now providing as a separate download for Windows (32 bit
only) and OSX.

Windows binaries are built in a Windows 7 VM with the minimum
requirements installed. OSX binaries are built on a dedicated machine.
Other properties of the bundles:

- portable, no installer
- no registration
- no certification

Currently, they are only available for the 1.9 preview builds:

  http://www.smartsvn.com/preview#svn

Probably they are not perfect yet, so it would be great if Windows and
OSX developers could have a look and let me know about possible
problems.

We would also like to create portable (universal) Linux binaries for
32- and 64-bit platforms. AFAIU, this should be possible if the linking
between the libraries were relative (like on OSX). Unfortunately, we
currently don't have a clue how to teach the linker to do so. Does
anyone have ideas, or has anyone already succeeded in creating such
portable binaries?

-Marc
Re: JavaHL: ClientNotifyCallback reports unexpected kind "file" for symlinks
On 19.05.2015 16:40, Bert Huijben wrote:
>> -----Original Message-----
>> From: Marc Strapetz [mailto:marc.strap...@syntevo.com]
>> Sent: dinsdag 19 mei 2015 15:59
>> To: dev@subversion.apache.org
>> Subject: JavaHL: ClientNotifyCallback reports unexpected kind "file"
>> for symlinks
>>
>> When recursively adding a directory "test" which contains another
>> directory "sub" and a symlink "sub.link" pointing to "sub",
>> "sub.link" is reported with kind=file where I would expect to
>> receive kind=symlink. The problem can be reproduced by the following
>> code snippet, using quite recent 1.9 binaries:
>
> I don't think Subversion uses kind=symlink anywhere in its public api
> (yet), so this is totally expected.
>
> When we built WC-NG for Subversion 1.7 we introduced database support
> for storing symlinks as their own kind, but we never switched to this
> storage yet. Currently symlinks are still files with an 'svn:special'
> property set on them internally, for Subversion repositories.
>
> The node kind enum was extended when we moved to a single enum for
> node kinds, but changing how we report and store symlinks is far from
> trivial.

Thanks, Bert. I was pretty sure to have seen a "symlink" kind reported
somewhere. Now I think it might just be our own code which uses (or
checks) for "symlink" ... I'll investigate in more detail.

Either way, from an API user perspective, it would be helpful to
distinguish between normal files and symlinks, especially because
symlinks may refer to (local) directories and usually need a different
treatment. Should I file an RFE in the issue tracker? Or would this
happen implicitly when switching to the planned 1.7 storage?

-Marc
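Until the API reports symlink as its own kind, a client can make the distinction itself with java.nio (standard JDK API; the helper name below is illustrative), checking the symlink property before the directory/file checks so that links to directories are not misclassified:

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;

public class SymlinkKindDemo {
    // Client-side fallback: classify a path, treating symlinks as their
    // own kind even when the versioning layer reports them as files.
    // isSymbolicLink() must come first; isDirectory() follows links.
    static String kindOf(Path path) {
        if (Files.isSymbolicLink(path)) return "symlink";
        if (Files.isDirectory(path)) return "dir";
        if (Files.isRegularFile(path)) return "file";
        return "none";
    }

    public static void main(String[] args) throws IOException {
        // Mirrors the "test"/"sub"/"sub.link" layout from the report.
        Path root = Files.createTempDirectory("symlink-demo");
        Path sub = Files.createDirectory(root.resolve("sub"));
        Path link = Files.createSymbolicLink(root.resolve("sub.link"), sub);
        System.out.println(kindOf(sub));  // prints "dir"
        System.out.println(kindOf(link)); // prints "symlink"
    }
}
```

Note the ordering: `Files.isDirectory(link)` would return true for a link to a directory, which is exactly the case the email says needs different treatment.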
JavaHL: ClientNotifyCallback reports unexpected kind "file" for symlinks
When recursively adding a directory "test" which contains another
directory "sub" and a symlink "sub.link" pointing to "sub", "sub.link"
is reported with kind=file where I would expect to receive
kind=symlink. The problem can be reproduced by the following code
snippet, using quite recent 1.9 binaries:

  final File root = ...;
  final File dir = new File(root, "test");
  dir.mkdirs();
  final File sub = new File(dir, "sub");
  sub.mkdirs();
  final File subLink = new File(dir, "sub.link");
  Runtime.getRuntime().exec("ln -s " + sub.getAbsolutePath() + " "
                            + subLink.getAbsolutePath());

  final ISVNClient client = new SVNClient();
  // client.revert(dir.getAbsolutePath(), Depth.infinity,
  //               new ArrayList<String>());
  client.notification2(new ClientNotifyCallback() {
      @Override
      public void onNotify(ClientNotifyInformation cni) {
          System.out.println("[" + cni.getKind() + "] " + cni.getPath()
                             + " " + cni.getAction());
      }
  });
  client.add(dir.getAbsolutePath(), Depth.infinity, false, false, false);

-Marc
Re: 1.9 JavaHL memory leak in ISVNRemote#status
On 29.04.2015 17:44, Branko Čibej wrote:
> On 29.04.2015 17:03, Branko Čibej wrote:
>> On 29.04.2015 16:02, Branko Čibej wrote:
>>> On 29.04.2015 11:57, Marc Strapetz wrote:
>>>> On 29.04.2015 05:31, Branko Čibej wrote:
>>>>> On 28.04.2015 21:22, Bert Huijben wrote:
>>>>>>> Also, I should add that according to the Profiler, the byte[]s
>>>>>>> are referenced from the Checksums. The char[]s are referenced
>>>>>>> from the Strings. And the Strings are referenced directly as
>>>>>>> JNI local references. Browsing through these Strings, they
>>>>>>> seem to be server-side paths ("subversion/branches/1.8.x/...")
>>>>>>
>>>>>> Just guessing: Notifications?
>>>>>
>>>>> No, this is an RA status edit drive; there are no notifications,
>>>>> only editor callbacks, and the checksum objects are created in
>>>>> the callbacks related to file content changes (file contents
>>>>> streams and checksums always come in pairs).
>>>>>
>>>>> I counted creations, finalizations and garbage collections
>>>>> again. I added forced finalization and GC calls to the test
>>>>> case. For every loop in the test, we create 57 Checksum
>>>>> instances, but only one of them is finalized, no matter how
>>>>> often the finalizer and GC are run. All the Checksum objects are
>>>>> created in the same way, and there are /no/ references anywhere
>>>>> to the remaining 56 objects, yet they're neither finalized nor
>>>>> garbage-collected. The fields (byte array and kind) /are/
>>>>> collected; all the "live" (according to the heap profiler)
>>>>> Checksum objects have their fields set to null.
>>>>
>>>> I've been testing on Windows. According to JProfiler and
>>>> JVisualVM, byte[]s are still referenced from the Checksums.
>>>> Hence, I would expect that they are not garbage collected.
>>>>
>>>>> clearly, the code is cleaning up the references correctly
>>>>
>>>> I don't have a detailed understanding of the "jniwrapper"
>>>> package, but I tend to agree with you. In the native code,
>>>> CreateJ::Checksum and CreateJ::PropertyMap are basically doing
>>>> the same thing, so there is no reason why Checksums would remain
>>>> referenced while HashMaps properly do not.
>>>>
>>>> I've also tried to comment out all env.CallVoidMethod() callbacks
>>>> in EditorProxy.cpp, so created object references would not even
>>>> be passed into the Java code. Still the same, Checksums remain as
>>>> "JNI local reference". Finally, I've tried to explicitly call
>>>> DeleteLocalRef(). This /solves/ the memory leak (at least for
>>>> Checksums), but I don't understand why this is necessary and
>>>> whether this is correct.
>>>>
>>>>   svn_error_t*
>>>>   EditorProxy::cb_alter_file(void *baton,
>>>>                              const char *relpath,
>>>>                              ...
>>>>     jstring jrelpath = JNIUtil::makeJString(relpath);
>>>>     SVN_JAVAHL_OLDSTYLE_EXCEPTION_CHECK(env);
>>>>     jobject jchecksum = CreateJ::Checksum(checksum);
>>>>     SVN_JAVAHL_OLDSTYLE_EXCEPTION_CHECK(env);
>>>>     jobject jprops = CreateJ::PropertyMap(props, scratch_pool);
>>>>     SVN_JAVAHL_OLDSTYLE_EXCEPTION_CHECK(env);
>>>>     jobject jcontents = NULL;
>>>>     if (contents != NULL)
>>>>       jcontents = wrap_input_stream(contents);
>>>>     env.CallVoidMethod(ep->m_jeditor, mid,
>>>>                        jrelpath, jlong(revision),
>>>>                        jchecksum, jcontents, jprops);
>>>>     env.DeleteLocalRef(jrelpath);
>>>>     env.DeleteLocalRef(jchecksum);
>>>>     env.DeleteLocalRef(jprops);
>>>>     if (contents != NULL)
>>>>       env.DeleteLocalRef(jcontents);
>>>>     ...
>>>
>>> ... but for some unfathomable reason, the collector keeps them
>>> alive for a while.
>>>
>>>> I'm not entirely sure about the exact difference of the live data
>>>> in the VM and a heap dump, but IMO the Checksums are still
>>>> considered as referenced ("JNI local reference") and hence will
>>>> never be garbage collected. The profilers confirm this. Given
>>>> that DeleteLocalRef solves the problem, I think this is either a
>>>> bug in the jniwrapper or a bug in JNI itself.
>>>
>>> The latest code wraps the callback implementations with
>>> PushLocalFrame/PopLocalFrame; any references created within a
>>> local frame should be automatically deleted by PopLocalFrame,
>>> according to all JNI docs I can find.
>>>
>>> I can add the explicit deletions, but it's a shame that frame
>>> management wouldn't work as expected. :(
>>>
>>> So, I'm going to double-check if we're actually getting the frame
>>> management right. I can't imagine why the HashMaps and
>>> NativeInputStreams would be released, but not the Checksums. All
>>> in all, I agree with you that this looks like a JNI bug ... the
>>> trick now will be to prove that with a minimal test case and
>>> report it upstream. :)
>>>
>>> (FWIW, I'm using Java 8u45 64-bit on OSX.)
>>
>> So, interesting data point ... I moved the creation of the Checksum
>> objects after the creation of the property maps ... and now they're
>> getting garbage-collected. This is becoming extremely weird.
>
> Hah. Fixed it. http://svn.apache.org/r1676771
>
> We were not properly popping off a JNI frame in CreateJ::PropertyMap,
> so the c
Re: 1.9 JavaHL memory leak in ISVNRemote#status
On 29.04.2015 05:31, Branko Čibej wrote: On 28.04.2015 21:22, Bert Huijben wrote: -Original Message- From: Marc Strapetz [mailto:marc.strap...@syntevo.com] Sent: dinsdag 28 april 2015 20:26 To: Branko Čibej Cc: Subversion Development Subject: Re: 1.9 JavaHL memory leak in ISVNRemote#status Also, I should add that according to the Profiler, the byte[]s are referenced from the Checksums. The char[]s are referenced from the Strings. And the Strings are referenced directly as JNI local references. Browsing through these Strings, they seem to be server-side paths ("subversion/branches/1.8.x/...") Just guessing: Notifications? No, this is an RA status edit drive; there are no notifications, only editor callbacks, and the checksum objects are created in the callbacks related to file content changes (file contents streams and checksums always come in pairs). I counted creations, finalizations and garbage collections again. I added forced finalization and GC calls to the test case. For every loop in the test, we create 57 Checksum instances, but only one of them is finalized, no matter how often the finalizer and GC are run. All the Checksum objects are created in the same way, and there are /no/ references anywhere to the remaining 56 objects, yet they're neither finalized nor garbage-collected. The fields (byte array and kind) /are/ collected; all the "live" (according to the heap profiler) Checksum objects have their fields set to null. I've been testing on Windows. According to JProfiler and JVisualVM, byte[]s are still referenced from the Checksums. Hence, I would expect that they are not garbage collected. clearly, the code is cleaning up the references correctly I don't have a detailed understanding of the "jniwrapper" package, but I tend to agree with you. In the native code, CreateJ::Checksum and CreateJ::PropertyMap are basically doing the same thing, so there is no reason why Checksums would remain referenced while HashMaps properly do not. 
I've also tried to comment out all env.CallVoidMethod()-callbacks in EditorProxy.cpp, so created object references would not even be passed into the Java code. Still the same, Checksums remain as "JNI local reference". Finally, I've tried to explicitly call DeleteLocalRef(). This /solves/ the memory leak (at least for Checksums), but I don't understand why this is necessary and whether this is correct.

  svn_error_t*
  EditorProxy::cb_alter_file(void *baton,
                             const char *relpath,
                             ...
    jstring jrelpath = JNIUtil::makeJString(relpath);
    SVN_JAVAHL_OLDSTYLE_EXCEPTION_CHECK(env);

    jobject jchecksum = CreateJ::Checksum(checksum);
    SVN_JAVAHL_OLDSTYLE_EXCEPTION_CHECK(env);

    jobject jprops = CreateJ::PropertyMap(props, scratch_pool);
    SVN_JAVAHL_OLDSTYLE_EXCEPTION_CHECK(env);

    jobject jcontents = NULL;
    if (contents != NULL)
      jcontents = wrap_input_stream(contents);

    env.CallVoidMethod(ep->m_jeditor, mid,
                       jrelpath, jlong(revision),
                       jchecksum, jcontents, jprops);

    env.DeleteLocalRef(jrelpath);
    env.DeleteLocalRef(jchecksum);
    env.DeleteLocalRef(jprops);
    if (contents != NULL)
      env.DeleteLocalRef(jcontents);
    ...

but for some unfathomable reason, the collector keeps them alive for a while. I'm not entirely sure about the exact difference of the live data in the VM and a heap dump, but IMO the Checksums are still considered as referenced ("JNI local reference") and hence will never be garbage collected. The profilers confirm this. Given that DeleteLocalRef solves the problem, I think this is either a bug in the jniwrapper or a bug in JNI itself. -Marc
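The creation/finalization counting methodology described above can be sketched in plain Java. This is purely illustrative: Tracker is a stand-in class of ours, not JavaHL's Checksum. Objects that are truly unreferenced should eventually be finalized after forced GC rounds; objects pinned by a leaked JNI local reference would not be.

```java
import java.util.concurrent.atomic.AtomicInteger;

public class FinalizationCount {
    static final AtomicInteger created = new AtomicInteger();
    static final AtomicInteger finalized = new AtomicInteger();

    // Stand-in for the tracked native-wrapper class (hypothetical, not JavaHL).
    static class Tracker {
        Tracker() { created.incrementAndGet(); }
        @Override protected void finalize() { finalized.incrementAndGet(); }
    }

    public static void main(String[] args) throws Exception {
        // Create instances without keeping any references, as in the test case.
        for (int i = 0; i < 57; i++) new Tracker();

        // Force several GC/finalization rounds; unreachable objects should
        // eventually be finalized -- objects held by a leaked JNI local
        // reference would stay alive instead.
        for (int i = 0; i < 5; i++) {
            System.gc();
            System.runFinalization();
            Thread.sleep(50);
        }
        System.out.println(created.get() + " created, "
                           + finalized.get() + " finalized");
    }
}
```

Note that finalization is not deterministic, so the finalized count is only a lower-bound observation; the profiler's reference view (as used above) is what actually distinguishes "not yet collected" from "still referenced".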
Re: 1.9 JavaHL memory leak in ISVNRemote#status
On 28.04.2015 20:06, Marc Strapetz wrote: On 28.04.2015 18:12, Branko Čibej wrote: On 28.04.2015 18:03, Marc Strapetz wrote: Hi Brane, On 28.04.2015 07:36, Branko Čibej wrote: On 24.04.2015 14:11, Branko Čibej wrote: Hi Marc, Just a quick note: your last msg jogged my memory and I think I know the root cause of the leak: improper JNI frame management within a loop. If I'm right, I can both fix the leak and remove the close-stream requirement I just added. On 24 Apr 2015 11:00 am, "Marc Strapetz" mailto:marc.strap...@syntevo.com>> wrote: On 24.04.2015 06 :34, Branko Čibej wrote: On 22.03.2015 05 :06, Branko Čibej wrote: On 21.03.2015 16 :23, Branko Čibej wrote: On 19.03.2015 11:43, Marc Strapetz wrote: Attached example performs an endless series of remote status against the Subversion repository. When invoked with -Xmx24M, the VM will run out of memory soon. Monitoring with jvisualvm shows that the used heap size constantly grows. Monitoring with the Task Manager shows that the allocated memory grows even more (significantly). Looks like a memory leak, for which a large amount of native memory is involved, too. Tested on Windows 8.1 with almost latest Subversion 1.9 JavaHL builds. I can confirm that this happens on the Mac, too, and it's not a garbage collector artefact. I'm trying to trace where the leak is happening ... valgrind with APR pool debugging doesn't tell me much (no surprise there). Just to make sure we weren't doing something bad in our libraries, I wrote a small C program that does the same as your Java example (Ev2 shims included), and memory usage is completely steady. So it is something in JavaHL, but I have no clue yet what the problem is. I have to say this was one of the more "interesting" bug-hunts in my not entirely boring career, and that's not really obvious from the fix itself. :) http://svn.apache.org/r1675771 Marc: this will not be in RC1, but please give the patch a spin and let me know if it fixes your problem. 
I tested this with the Java program you attached to your original report, and heap size no longer grows without bounds. Great hunt, Brane! The native leak seems to be fixed. I've run my remote status loop with -Xmx24M and still get an OOME after ~170 loop iterations. The memory leak is significantly smaller and this time it seems to be in the Java part. According to the profiler, most memory is allocated by HashMap and friends, referenced from JNI code. Only two org.apache.subversion classes show up, but I guess they indicate the source of the leak: org.apache.subversion.javahl.types.Checksum (~10K instances) org.apache.subversion.javahl.types.NativeInputStream (~10K instances) Let me know if more profiler statistics would be helpful. So I've been looking at this in depth. At first I thought that one of the problems was that we didn't release JNI local references; I added code to make sure this happens in the status callbacks (not committed yet) and I verified that all the native wrapped objects do get finalized. However, the Java objects still hang around. One of the problems is that all the callbacks happen within the scope of the ISVNReporter.finishReport call, which means that the whole edit drive is considered a single JNI call (despite the callbacks to Java) and the garbage collector can't reclaim space for the objects created within JNI during that time. But even a forced GC after the report is done and the remote session disposed won't release all the native references. I'm a bit stumped here ... JVM's built-in memory profiler shows the live references and where they're allocated, but doesn't show why they're not released even when I explicitly create and destroy JNI frames. Can you please commit your current state somewhere or send me a patch? I can give this one a try in JProfiler and see whether I can gather some more useful information. Here's the complete patch against 1.9.x. 
I do see some NativeInputStream objects (but not all) being garbage-collected, but there are a number of other objects, even those allocated in Java code in the callback, that are just hanging around, even if I force a GC. Any help in tracking this down will be greatly appreciated.
Re: 1.9 JavaHL memory leak in ISVNRemote#status
On 28.04.2015 18:12, Branko Čibej wrote: On 28.04.2015 18:03, Marc Strapetz wrote: Hi Brane, On 28.04.2015 07:36, Branko Čibej wrote: On 24.04.2015 14:11, Branko Čibej wrote: Hi Marc, Just a quick note: your last msg jogged my memory and I think I know the root cause of the leak: improper JNI frame management within a loop. If I'm right, I can both fix the leak and remove the close-stream requirement I just added. On 24 Apr 2015 11:00 am, "Marc Strapetz" mailto:marc.strap...@syntevo.com>> wrote: On 24.04.2015 06 :34, Branko Čibej wrote: On 22.03.2015 05 :06, Branko Čibej wrote: On 21.03.2015 16 :23, Branko Čibej wrote: On 19.03.2015 11:43, Marc Strapetz wrote: Attached example performs an endless series of remote status against the Subversion repository. When invoked with -Xmx24M, the VM will run out of memory soon. Monitoring with jvisualvm shows that the used heap size constantly grows. Monitoring with the Task Manager shows that the allocated memory grows even more (significantly). Looks like a memory leak, for which a large amount of native memory is involved, too. Tested on Windows 8.1 with almost latest Subversion 1.9 JavaHL builds. I can confirm that this happens on the Mac, too, and it's not a garbage collector artefact. I'm trying to trace where the leak is happening ... valgrind with APR pool debugging doesn't tell me much (no surprise there). Just to make sure we weren't doing something bad in our libraries, I wrote a small C program that does the same as your Java example (Ev2 shims included), and memory usage is completely steady. So it is something in JavaHL, but I have no clue yet what the problem is. I have to say this was one of the more "interesting" bug-hunts in my not entirely boring career, and that's not really obvious from the fix itself. :) http://svn.apache.org/r1675771 Marc: this will not be in RC1, but please give the patch a spin and let me know if it fixes your problem. 
I tested this with the Java program you attached to your original report, and heap size no longer grows without bounds. Great hunt, Brane! The native leak seems to be fixed. I've run my remote status loop with -Xmx24M and still get an OOME after ~170 loop iterations. The memory leak is significantly smaller and this time it seems to be in the Java part. According to the profiler, most memory is allocated by HashMap and friends, referenced from JNI code. Only two org.apache.subversion classes show up, but I guess they indicate the source of the leak: org.apache.subversion.javahl.types.Checksum (~10K instances) org.apache.subversion.javahl.types.NativeInputStream (~10K instances) Let me know if more profiler statistics would be helpful. So I've been looking at this in depth. At first I thought that one of the problems was that we didn't release JNI local references; I added code to make sure this happens in the status callbacks (not committed yet) and I verified that all the native wrapped objects do get finalized. However, the Java objects still hang around. One of the problems is that all the callbacks happen within the scope of the ISVNReporter.finishReport call, which means that the whole edit drive is considered a single JNI call (despite the callbacks to Java) and the garbage collector can't reclaim space for the objects created within JNI during that time. But even a forced GC after the report is done and the remote session disposed won't release all the native references. I'm a bit stumped here ... JVM's built-in memory profiler shows the live references and where they're allocated, but doesn't show why they're not released even when I explicitly create and destroy JNI frames. Can you please commit your current state somewhere or send me a patch? I can give this one a try in JProfiler and see whether I can gather some more useful information. Here's the complete patch against 1.9.x. 
I do see some NativeInputStream objects (but not all) being garbage-collected, but there are a number of other objects, even those allocated in Java code in the callback, that are just hanging around, even if I force a GC. Any help in tracking this down will be greatly appreciated. Thanks,
Re: 1.9 JavaHL memory leak in ISVNRemote#status
Hi Brane, On 28.04.2015 07:36, Branko Čibej wrote: On 24.04.2015 14:11, Branko Čibej wrote: Hi Marc, Just a quick note: your last msg jogged my memory and I think I know the root cause of the leak: improper JNI frame management within a loop. If I'm right, I can both fix the leak and remove the close-stream requirement I just added. On 24 Apr 2015 11:00 am, "Marc Strapetz" <marc.strap...@syntevo.com> wrote: On 24.04.2015 06:34, Branko Čibej wrote: On 22.03.2015 05:06, Branko Čibej wrote: On 21.03.2015 16:23, Branko Čibej wrote: On 19.03.2015 11:43, Marc Strapetz wrote: Attached example performs an endless series of remote status against the Subversion repository. When invoked with -Xmx24M, the VM will run out of memory soon. Monitoring with jvisualvm shows that the used heap size constantly grows. Monitoring with the Task Manager shows that the allocated memory grows even more (significantly). Looks like a memory leak, for which a large amount of native memory is involved, too. Tested on Windows 8.1 with almost latest Subversion 1.9 JavaHL builds. I can confirm that this happens on the Mac, too, and it's not a garbage collector artefact. I'm trying to trace where the leak is happening ... valgrind with APR pool debugging doesn't tell me much (no surprise there). Just to make sure we weren't doing something bad in our libraries, I wrote a small C program that does the same as your Java example (Ev2 shims included), and memory usage is completely steady. So it is something in JavaHL, but I have no clue yet what the problem is. I have to say this was one of the more "interesting" bug-hunts in my not entirely boring career, and that's not really obvious from the fix itself. :) http://svn.apache.org/r1675771 Marc: this will not be in RC1, but please give the patch a spin and let me know if it fixes your problem. I tested this with the Java program you attached to your original report, and heap size no longer grows without bounds. Great hunt, Brane! 
The native leak seems to be fixed. I've run my remote status loop with -Xmx24M and still get an OOME after ~170 loop iterations. The memory leak is significantly smaller and this time it seems to be in the Java part. According to the profiler, most memory is allocated by HashMap and friends, referenced from JNI code. Only two org.apache.subversion classes show up, but I guess they indicate the source of the leak: org.apache.subversion.javahl.types.Checksum (~10K instances) org.apache.subversion.javahl.types.NativeInputStream (~10K instances) Let me know if more profiler statistics would be helpful. So I've been looking at this in depth. At first I thought that one of the problems was that we didn't release JNI local references; I added code to make sure this happens in the status callbacks (not committed yet) and I verified that all the native wrapped objects do get finalized. However, the Java objects still hang around. One of the problems is that all the callbacks happen within the scope of the ISVNReporter.finishReport call, which means that the whole edit drive is considered a single JNI call (despite the callbacks to Java) and the garbage collector can't reclaim space for the objects created within JNI during that time. But even a forced GC after the report is done and the remote session disposed won't release all the native references. I'm a bit stumped here ... JVM's built-in memory profiler shows the live references and where they're allocated, but doesn't show why they're not released even when I explicitly create and destroy JNI frames. Can you please commit your current state somewhere or send me a patch? I can give this one a try in JProfiler and see whether I can gather some more useful information. -Marc
JavaHL RFE: ISVNRemote should provide API to retrieve a contents of a specific file
To allow users to browse through all contents of a file (as part of an interactive blame), it's necessary to have an efficient API to retrieve these file contents. AFAIU, the low-level file_rev_handler already provides this information via svn_txdelta_window_handler_t. Unfortunately, in RemoteSession.cpp this information is converted to just a boolean (delta_handler != NULL) and passed to the JavaHL callback afterwards. I don't think it's necessary (or even desirable) to provide the patch/stream logic, like svn_stream_open_readonly, as a Java API, just a way to retrieve complete file contents for all revisions. Suggestion:

  interface ISVNRemote {
      /**
       * @param contentsHandler may be null
       */
      void getFileRevisions(String path,
                            long startRevision,
                            long endRevision,
                            boolean includeMergedRevisions,
                            RemoteFileRevisionsCallback handler,
                            RemoteFileContentsCallback contentsHandler)
          throws ClientException;
  }

  interface RemoteFileContentsCallback {
      void doFileContent(ISVNRemote.FileRevision fileRevision,
                         InputStream content);
  }

-Marc
Re: 1.9 JavaHL memory leak in ISVNRemote#status
On 24.04.2015 06:34, Branko Čibej wrote: On 22.03.2015 05:06, Branko Čibej wrote: On 21.03.2015 16:23, Branko Čibej wrote: On 19.03.2015 11:43, Marc Strapetz wrote: Attached example performs an endless series of remote status against the Subversion repository. When invoked with -Xmx24M, the VM will run out of memory soon. Monitoring with jvisualvm shows that the used heap size constantly grows. Monitoring with the Task Manager shows that the allocated memory grows even more (significantly). Looks like a memory leak, for which a large amount of native memory is involved, too. Tested on Windows 8.1 with almost latest Subversion 1.9 JavaHL builds. I can confirm that this happens on the Mac, too, and it's not a garbage collector artefact. I'm trying to trace where the leak is happening ... valgrind with APR pool debugging doesn't tell me much (no surprise there). Just to make sure we weren't doing something bad in our libraries, I wrote a small C program that does the same as your Java example (Ev2 shims included), and memory usage is completely steady. So it is something in JavaHL, but I have no clue yet what the problem is. I have to say this was one of the more "interesting" bug-hunts in my not entirely boring career, and that's not really obvious from the fix itself. :) http://svn.apache.org/r1675771 Marc: this will not be in RC1, but please give the patch a spin and let me know if it fixes your problem. I tested this with the Java program you attached to your original report, and heap size no longer grows without bounds. Great hunt, Brane! The native leak seems to be fixed. I've run my remote status loop with -Xmx24M and still get an OOME after ~170 loop iterations. The memory leak is significantly smaller and this time it seems to be in the Java part. According to the profiler, most memory is allocated by HashMap and friends, referenced from JNI code. 
Only two org.apache.subversion classes show up, but I guess they indicate the source of the leak: org.apache.subversion.javahl.types.Checksum (~10K instances) org.apache.subversion.javahl.types.NativeInputStream (~10K instances) Let me know if more profiler statistics would be helpful. -Marc
Re: RFE: copy with metadataOnly should allow removed/replaced sources and added/replaced targets
On 23.04.2015 16:59, Julian Foad wrote: Marc Strapetz wrote: Using copy with the new metadataOnly option (through the API) only allows one to "move" or "copy" a missing file onto an unversioned file. It could also be helpful to copy/move metadata from a removed (or replaced) source to an already added (or replaced) target. Use case 1: the user has removed file "a" and moved file "b" to file "a" without using SVN:

  $ svn status
  M a
  ! b

Goal is to preserve "b"'s history for the new "a" and have the history of the old "a" ended. Marc, If I understand correctly, the goal of this example is to make the version control operations reflect the filesystem operations, so that we end up with: path 'a': replaced with a copy from 'b'; path 'b': deleted (or rather the object that was here has been moved to path 'a'). That's correct. With metadataOnly being more tolerant, this could then be done by:

  $ svn rm --keep-local a
  $ svn add a
  $ svn cp --metadata-only b a

I don't understand why you suggest that sequence of commands. I don't expect 'svn cp' should allow copying to a destination that's already under version control (that is, 'a' after 'svn add a'), metadata-only or not. I would expect 'svn cp --metadata-only' to do everything just the same as plain 'svn cp' except not touch or look at what's on disk: so not try to copy the disk file and not care about whether the file is present on disk at either the source or target location. Therefore I think the appropriate sequence for your example would be:

  $ svn rm --keep-local a
  $ svn mv --metadata-only b a

I agree that this would result in the same state (though it currently doesn't work either). 
The reason why I was using the additional "svn add" is that SmartSVN provides two GUI commands for this procedure:

- "mark as replaced", which is applicable on a modified file and turns it into a replaced file by invoking:

  $ svn rm --keep-local a
  $ svn add a

- "move", which is applicable on a single, versioned file and on two files for which one must be missing and the other one unversioned. The latter two-files version invokes:

  $ svn mv --metadata-only b a

With a hack, the two-files version is (or rather, was) also applicable on combinations like (missing, added), (missing, replaced), (removed, added), (removed, replaced) ... With "mark as replaced" and "move" both use cases could be satisfied and it is still close to the user's thinking: a has been replaced, hence split the history of a; a is actually b, hence link b's history with a. For:

  $ svn rm --keep-local a
  $ svn mv --metadata-only b a

This can also be done with two GUI commands: "remove" and "move"; however, the procedure isn't that close to the user's thinking: it requires the user to remove a file to make room for a "virtual" move operation which would otherwise fail. This is harder to comprehend, at least IMO. For command line usage, I would agree that --metadata-only should probably be limited to (missing, unversioned) to avoid possible misuse directly by the user. The API could be more flexible, assuming clients using the API will pass this flexibility only in a controlled and careful way to the user :) -Marc
Re: RFE: copy with metadataOnly should allow removed/replaced sources and added/replaced targets
On 23.04.2015 16:01, Branko Čibej wrote: On 22.04.2015 20:28, Marc Strapetz wrote: Using copy with the new metadataOnly option (through the API) only allows one to "move" or "copy" a missing file onto an unversioned file. It could also be helpful to copy/move metadata from a removed (or replaced) source to an already added (or replaced) target. Use case 1: the user has removed file "a" and moved file "b" to file "a" without using SVN:

  $ svn status
  M a
  ! b

Goal is to preserve "b"'s history for the new "a" and have the history of the old "a" ended. With metadataOnly being more tolerant, this could then be done by:

  $ svn rm --keep-local a
  $ svn add a
  $ svn cp --metadata-only b a

What happens if you do (the API equivalent of):

  $ svn cp --metadata-only b@BASE a

Brane, I take it this is a question about what currently happens, not a counter-example which would result in problems? I've tried now with the following JavaHL code, but the problem remains the same (the target file already exists):

  client.remove(Collections.singleton(a.getAbsolutePath()),
                false, true, null, null, null);
  client.add(a.getAbsolutePath(), Depth.empty, false, false, false, true);
  client.copy(Collections.singletonList(
                  new CopySource(b.getAbsolutePath(), Revision.BASE, Revision.BASE)),
              a.getAbsolutePath(),
              false, true, true, true, false, null, null, null, null);

Output:

  Exception in thread "main" org.apache.subversion.javahl.ClientException: Entry already exists
  svn: Path 'D:\svntest\small\a.txt' already exists
      at org.apache.subversion.javahl.SVNClient.copy(Native Method)

-Marc
1.9: javahl.ISVNClient#cleanup(String) always fails with "Attempted to lock an already-locked dir"
Cleanup-related code which works fine with 1.8 JavaHL starts failing with 1.9 JavaHL. According to the docs, ISVNClient#cleanup(String) does not break locks, which seems to cause the problems:

  /**
   * Recursively cleans up a local directory, finishing any
   * incomplete operations, removing lockfiles, etc.
   *
   * Behaves like the 1.9 version with breakLocks and
   * includeExternals set to false, and the
   * other flags to true.
   * @param path a local directory.
   * @throws ClientException
   */

When using ISVNClient.cleanup(path, *true*, true, true, true, false), the code works. -Marc
Re: Subversion 1.9: svn cp --pin-externals may produce dummy log entries
On 23.04.2015 11:27, Stefan Sperling wrote: On Wed, Apr 22, 2015 at 07:58:35PM +0200, Marc Strapetz wrote:

  $ svn proplist -r2 -R -v ^/
  ...
  Properties on 'file://localhost/D:/temp/externals/repo/dst/dir':
    svn:externals
      ^/ext@1 ext

  $ svn proplist -r1 -R -v ^/
  ...
  Properties on 'file://localhost/D:/temp/externals/repo/src/dir':
    svn:externals
      ^/ext@1 ext

To rule out whitespace differences, can you please send these outputs through hexdump or diff to ensure they really are identical? I believe they are, just checking. The contents seem to be actually identical:

  $ svn proplist -r2 -R -v ^/ > hexdump
  ...
  0A 20 20 20 20 0D 0A 50 - 72 6F 70 65 72 74 69 65 | Propertie|
  73 20 6F 6E 20 27 66 69 - 6C 65 3A 2F 2F 6C 6F 63 |s on 'file://loc|
  61 6C 68 6F 73 74 2F 44 - 3A 2F 74 65 6D 70 2F 65 |alhost/D:/temp/e|
  78 74 65 72 6E 61 6C 73 - 2F 72 65 70 6F 2F 64 73 |xternals/repo/ds|
  74 2F 64 69 72 27 3A 0D - 0A 20 20 73 76 6E 3A 65 |t/dir':svn:e|
  78 74 65 72 6E 61 6C 73 - 0D 0A 20 20 20 20 5E 2F |xternals ^/|
  65 78 74 40 31 20 65 78 - 74 0D 0D 0A 20 20 20 20 |ext@1 ext |
  0D 0A - | |

  $ svn proplist -r1 -R -v ^/
  ...
  20 20 20 0D 0A 50 72 6F - 70 65 72 74 69 65 73 20 | Properties |
  6F 6E 20 27 66 69 6C 65 - 3A 2F 2F 6C 6F 63 61 6C |on 'file://local|
  68 6F 73 74 2F 44 3A 2F - 74 65 6D 70 2F 65 78 74 |host/D:/temp/ext|
  65 72 6E 61 6C 73 2F 72 - 65 70 6F 2F 73 72 63 2F |ernals/repo/src/|
  64 69 72 27 3A 0D 0A 20 - 20 73 76 6E 3A 65 78 74 |dir':svn:ext|
  65 72 6E 61 6C 73 0D 0A - 20 20 20 20 5E 2F 65 78 |ernals ^/ex|
  74 40 31 20 65 78 74 0D - 0D 0A 20 20 20 20 0D 0A |t@1 ext |

-Marc
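The whitespace check above can also be done programmatically. Here is a small illustrative Java helper (the class and method names are ours, not part of any svn tool) that renders strings as hex bytes, making invisible CRLF-vs-LF differences obvious:

```java
import java.nio.charset.StandardCharsets;

public class HexCompare {
    // Render a string as space-separated hex bytes, mirroring the manual
    // hexdump comparison done above.
    static String hex(String s) {
        StringBuilder sb = new StringBuilder();
        for (byte b : s.getBytes(StandardCharsets.UTF_8))
            sb.append(String.format("%02X ", b & 0xFF));
        return sb.toString().trim();
    }

    public static void main(String[] args) {
        // A CRLF vs LF difference is invisible in normal output
        // but obvious in hex (0D 0A vs 0A):
        String crlf = "^/ext@1 ext\r\n";
        String lf   = "^/ext@1 ext\n";
        System.out.println(hex(crlf));
        System.out.println(hex(lf));
        System.out.println(crlf.equals(lf) ? "identical" : "differ");
    }
}
```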
RFE: copy with metadataOnly should allow removed/replaced sources and added/replaced targets
Using copy with the new metadataOnly option (through the API) only allows one to "move" or "copy" a missing file onto an unversioned file. It could also be helpful to copy/move metadata from a removed (or replaced) source to an already added (or replaced) target. Use case 1: the user has removed file "a" and moved file "b" to file "a" without using SVN:

  $ svn status
  M a
  ! b

Goal is to preserve "b"'s history for the new "a" and have the history of the old "a" ended. With metadataOnly being more tolerant, this could then be done by:

  $ svn rm --keep-local a
  $ svn add a
  $ svn cp --metadata-only b a

Use case 2: the user has moved file "a" to file "b" and created a new file "a" without using SVN:

  $ svn status
  M a
  ? b

Goal is to preserve old "a"'s history for "b" and start a new history for new "a". With metadataOnly being more tolerant, this could then be done by:

  $ svn rm --keep-local a
  $ svn add a
  $ svn cp --metadata-only a b

Btw, currently "svn help cp" does not show a "--metadata-only" option at all. Is this option intentionally not available from the command line? -Marc
Subversion 1.9: svn cp --pin-externals may produce dummy log entries
After invoking the following series of commands:

  svnadmin create repo
  svn checkout file://localhost/d:/temp/externals/repo wc
  mkdir wc
  cd wc
  mkdir ext
  touch ext\file
  mkdir src\dir
  svn add *
  svn propset svn:externals "^/ext ext" src
  svn propset svn:externals "^/ext@1 ext" src\dir
  svn commit -m "initial import"
  svn up
  svn cp --pin-externals src ^^/dst -m "copy"
  svn log -r2 -v
  svn proplist -r2 -R -v ^^/
  svn proplist -r1 -R -v ^^/

The final log output shows /dst/dir as modified:

  -
  r2 | marc | 2015-04-22...
  Changed paths:
     A /dst (from /src:1)
     M /dst/dir

  copy
  -

However, no modification is expected here, because the src\dir external already has a revision number set. The proplist outputs confirm that the property hasn't been modified:

  $ svn proplist -r2 -R -v ^/
  ...
  Properties on 'file://localhost/D:/temp/externals/repo/dst/dir':
    svn:externals
      ^/ext@1 ext

  $ svn proplist -r1 -R -v ^/
  ...
  Properties on 'file://localhost/D:/temp/externals/repo/src/dir':
    svn:externals
      ^/ext@1 ext

-Marc
Differences in tree conflict reporting between 1.8 and 1.9 (possible regression?)
For a working copy which has been checked out using SVN 1.8, "svn info" output is slightly different between version 1.8 and 1.9 -- for 1.9 the local *dir* is missing. For 1.8:

  $ svn info a/b
  Path: a\b
  Name: b
  Repository Root: ...
  Repository UUID: ...
  Node Kind: none
  Schedule: normal
  Tree conflict: local *dir* missing, incoming dir edit upon merge
    Source  left: (dir) ^/trunk/a/b@1
    Source right: (dir) ^/branch/a/b@4

For 1.9:

  $ svn info a/b
  Path: a\b
  Name: b
  Repository Root: ...
  Repository UUID: ...
  Node Kind: none
  Schedule: normal
  Tree conflict: local missing or deleted or moved away, incoming dir edit upon merge
    Source  left: (dir) ^/trunk/a/b@1
    Source right: (dir) ^/branch/a/b@4

In terms of JavaHL this means that for version 1.8: info.conflicts[0].nodeKind="dir" while for version 1.9: info.conflicts[0].nodeKind="none" I can provide a test repository to reproduce this difference. -Marc
Re: JavaHL: Exceptions in LogMessageCallback.singleMessage should abort the log immediately
On 16.03.2015 17:54, Bert Huijben wrote: -Original Message- From: Marc Strapetz [mailto:marc.strap...@syntevo.com] Sent: maandag 16 maart 2015 17:30 To: dev@subversion.apache.org Subject: JavaHL: Exceptions in LogMessageCallback.singleMessage should abort the log immediately If e.g. a RuntimeException is thrown in LogMessageCallback#singleMessage, it's not processed in LogMessageCallback::singleMessage and the log is continued nevertheless: (1) At line 77 in LogMessageCallback.cpp, an appropriate error code should be returned. (2) After line 122, JNIUtil::isJavaExceptionThrown() should be called and an appropriate error code should be returned. In both cases, the returned error code should result in stopping the low-level log; rethrowing the Exception in RemoteSession::getLog won't be necessary, as this can be established easily from within client code itself. This is a common problem that applies to almost all callbacks in JavaHL in <= 1.9. A fix for this generic problem has been applied to trunk in r1664938 (further tweaks/extensions in r1664939, r1664940, r1664978, r1664984). This introduces some behavior changes (such as the one you noted), so backporting needs discussion here. Thanks for starting the discussion ;-) As JavaHL was reworked significantly for Subversion 1.9, is there a possibility to get this change backported? -Marc
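The requested abort semantics can be sketched in plain Java. This is an illustrative model only (the names drive/entries are ours, not JavaHL's): the driver delivers log entries to a callback, and an exception thrown by the callback propagates out and stops the drive instead of being swallowed.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Consumer;

public class AbortOnCallbackError {
    // Minimal model of a log drive: deliver entries to the callback and let
    // a RuntimeException propagate, aborting the remaining entries.
    static int drive(List<String> entries, Consumer<String> callback) {
        int delivered = 0;
        for (String entry : entries) {
            callback.accept(entry); // an exception here stops the loop
            delivered++;
        }
        return delivered;
    }

    public static void main(String[] args) {
        List<String> seen = new ArrayList<>();
        try {
            drive(List.of("r1", "r2", "r3", "r4"), e -> {
                seen.add(e);
                if (e.equals("r2"))
                    throw new RuntimeException("client abort");
            });
        } catch (RuntimeException ex) {
            // Only r1 and r2 were delivered; r3 and r4 were never visited.
            System.out.println("aborted after " + seen.size() + " entries");
        }
    }
}
```

The pre-fix behavior complained about above corresponds to catching the callback's exception inside the loop and continuing, which is exactly what returning an error code from the native callback is meant to prevent.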
1.9 JavaHL memory leak in ISVNRemote#status
Attached example performs an endless series of remote status calls against the Subversion repository. When invoked with -Xmx24M, the VM soon runs out of memory. Monitoring with jvisualvm shows that the used heap size grows constantly. Monitoring with the Task Manager shows that the allocated memory grows significantly more. This looks like a memory leak which also involves a large amount of native memory. Tested on Windows 8.1 with almost-latest Subversion 1.9 JavaHL builds. -Marc

import java.io.*;
import org.apache.subversion.javahl.*;
import org.apache.subversion.javahl.callback.*;
import org.apache.subversion.javahl.remote.*;
import org.apache.subversion.javahl.types.*;

public class RemoteStatusMain {

    // ==================== Static ====================

    public static void main(String[] args) throws Exception {
        final RemoteFactory remoteFactory = new RemoteFactory();
        for (;;) {
            System.out.println("\n\n\n");
            final ISVNRemote remote = remoteFactory.openRemoteSession(
                    "http://svn.apache.org/repos/asf/subversion/branches/1.8.x");
            try {
                final ISVNReporter status = remote.status(
                        "/", Revision.SVN_INVALID_REVNUM, Depth.infinity,
                        new MyRemoteStatus());
                try {
                    status.setPath("", 166, Depth.infinity, false, null);
                    status.finishReport();
                }
                finally {
                    status.dispose();
                }
            }
            finally {
                remote.dispose();
            }
        }
    }

    // ==================== Inner Classes ====================

    private static class MyRemoteStatus implements RemoteStatus {
        @Override
        public void addedDirectory(String relativePath) {
            System.out.println("A D " + relativePath);
        }

        @Override
        public void addedFile(String relativePath) {
            System.out.println("A F " + relativePath);
        }

        @Override
        public void addedSymlink(String relativePath) {
            System.out.println("A S " + relativePath);
        }

        @Override
        public void modifiedDirectory(String relativePath, boolean childrenModified,
                                      boolean propsModified, Entry nodeInfo) {
            System.out.println("M D " + relativePath + " " + childrenModified + " " + propsModified);
        }

        @Override
        public void modifiedFile(String relativePath, boolean textModified,
                                 boolean propsModified, Entry nodeInfo) {
            System.out.println("M F " + relativePath + " " + textModified + " " + propsModified);
        }

        @Override
        public void modifiedSymlink(String relativePath, boolean targetModified,
                                    boolean propsModified, Entry nodeInfo) {
            System.out.println("M S " + relativePath + " " + targetModified + " " + propsModified);
        }

        @Override
        public void deleted(String relativePath) {
            System.out.println("D " + relativePath);
        }
    }
}
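A plain-JDK way to observe the steady heap growth described above (no JavaHL involved; class name and iteration count are made up for the sketch) is to sample the used heap between iterations. With a leaking loop body inserted between samples, the reported value keeps climbing; the native-memory part of the leak would only show up in the Task Manager, as noted above.

```java
public class HeapWatch {
    // Used heap in megabytes, as jvisualvm would roughly report it.
    static long usedMb() {
        Runtime rt = Runtime.getRuntime();
        return (rt.totalMemory() - rt.freeMemory()) / (1024 * 1024);
    }

    public static void main(String[] args) {
        for (int i = 0; i < 3; i++) {
            System.gc(); // best effort only, but makes samples more comparable
            System.out.println("iteration " + i + ": ~" + usedMb() + " MB used");
        }
    }
}
```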
Re: Estimated release date of version 9
On 17.03.2015 13:28, Marc Strapetz wrote: We are currently faced with the decision whether to release a new "major" version of SmartSVN which is compatible with Subversion 8 or wait for the Subversion 9 release. The two main factors driving this decision are: (i) whether Subversion 1.9 will be able to access Subversion 1.8 working copies, without making them unusable for version 1.8. As far as I understood from [1], this will most likely be the case. (ii) when the anticipated release date of version 9 will be (measured in months). Sorry for the version number confusion; it should always read "version 1.8" instead of "version 8" and "version 1.9" instead of "version 9". [1] http://svn.haxx.se/users/archive-2015-03/0098.shtml -Marc
Estimated release date of version 9
We are currently faced with the decision whether to release a new "major" version of SmartSVN which is compatible with Subversion 8, or to wait for the Subversion 9 release. The two main factors driving this decision are: (i) whether Subversion 1.9 will be able to access Subversion 1.8 working copies without making them unusable for version 1.8. As far as I understood from [1], this will most likely be the case. (ii) when the anticipated release date of version 9 will be (measured in months). I'd appreciate your ideas on (ii). Of course I understand that this is a moving target and current estimates may be wrong. [1] http://svn.haxx.se/users/archive-2015-03/0098.shtml -Marc
Re: JavaHL: Exceptions in LogMessageCallback.singleMessage should abort the log immediately
On 16.03.2015 17:54, Bert Huijben wrote: A fix for this generic problem has been applied to trunk in r1664938 (further tweaks/extensions in r1664939, r1664940, r1664978, and r1664984). Great -- this is exactly what I was looking for. This introduces some behavior changes (such as the one you noted), so backporting needs discussion here. Thanks for starting the discussion ;-) Silently dropping an exception and continuing to process the operation is quite unexpected, so this could be considered a bugfix rather than a behavior change :) SVNKit usually allows returning a SubversionException from its various callbacks. As far as I understand your patch, it delivers the same exception object that is thrown in the Java code (possibly wrapped), so extending JavaHL's callback signatures with SubversionException would be reasonable, too. The main use case is delivering checked ProgressCancelledExceptions. -Marc
A fix for this generic problem has been applied to trunk in r1664938 (further tweaks/extensions in r1664939, r1664940, r1664978, and r1664984). This introduces some behavior changes (such as the one you noted), so backporting needs discussion here. Bert
JavaHL: Exceptions in LogMessageCallback.singleMessage should abort the log immediately
If e.g. a RuntimeException is thrown in LogMessageCallback#singleMessage, it is not processed in LogMessageCallback::singleMessage, and the log continues nevertheless: (1) At line 77 in LogMessageCallback.cpp, an appropriate error code should be returned. (2) After line 122, JNIUtil::isJavaExceptionThrown() should be called and an appropriate error code should be returned. In both cases, the returned error code should result in stopping the low-level log; rethrowing the exception in RemoteSession::getLog won't be necessary, as this can easily be done from within the client code itself. -Marc
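The desired behavior can be illustrated with generic JDK code (this is not JavaHL; class, method, and entry names are invented for the sketch): an exception thrown from the per-entry callback must propagate to the caller and abort the whole iteration, instead of being swallowed while the iteration continues.

```java
import java.util.List;
import java.util.function.Consumer;

public class AbortingLog {
    static int delivered = 0;

    // Deliver entries one by one; any RuntimeException from the callback
    // propagates out of the loop and stops the operation immediately.
    static void log(List<Long> revisions, Consumer<Long> callback) {
        for (Long rev : revisions) {
            callback.accept(rev); // not caught here: aborting is the point
            delivered++;
        }
    }

    public static void main(String[] args) {
        try {
            log(List.of(3L, 2L, 1L), rev -> {
                if (rev == 2L) throw new RuntimeException("cancelled by client");
            });
        } catch (RuntimeException e) {
            // Only the first entry was fully delivered before the abort.
            System.out.println("log aborted after " + delivered + " delivered entries");
        }
    }
}
```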
State of issue 2464 (Canonicalize / stringprep UTF-8 filenames to handle composed / decomposed differences shown by e.g. Mac OS X HFS+)
Since SmartSVN switched from SVNKit to JavaHL (in version 8.5), we have received quite a large number of user complaints about problems with special characters in file names on Mac OS X. This is a particular problem for SmartSVN because SVNKit automatically converted file names to composed form before storing them in the repository and decomposed them automatically when writing to the file system. Hence, repositories and working copies which work fine with version 8 suddenly start breaking with version 8.5. I understand that an all-embracing solution to issue 2464 is not trivial. On the other hand, there seem to be patches which work quite well and which could solve the mentioned issues when combined with a Subversion config option like "miscellany.autoComposeUTF8", so users have the choice of *always* storing file names in composed or decomposed form in the repository. Any ideas on this issue are appreciated. -Marc
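For illustration, what SVNKit did corresponds to Unicode normalization: HFS+ stores file names in a decomposed (NFD-like) form, while most other systems use the composed (NFC) form. The JDK exposes this conversion via java.text.Normalizer; the example file name below is made up.

```java
import java.text.Normalizer;

public class ComposeDemo {
    public static void main(String[] args) {
        // "ä.txt" as HFS+ stores it: 'a' followed by a combining diaeresis (NFD).
        String nfd = "a\u0308.txt";
        // Composed form (NFC): a single precomposed 'ä' code point.
        String nfc = Normalizer.normalize(nfd, Normalizer.Form.NFC);
        System.out.println(nfd.length() + " -> " + nfc.length()); // 6 -> 5
        // Both strings render identically but compare as different,
        // which is exactly why mixed-form repositories break.
        System.out.println(nfd.equals(nfc)); // false
    }
}
```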
Re: 1.9.x JavaHL: long initial delay when performing a log
On 16.03.2015 01:50, Bert Huijben wrote: -----Original Message----- From: Marc Strapetz [mailto:marc.strap...@syntevo.com] Once the log responds, a bunch of revisions are reported, so it seems that there is some kind of caching of log records. I've tested with the latest 1.9.x sources on Windows but have seen the same behavior with the javahl-1.8-extensions branch on Linux, too. I can only find server-side buffering... But this might explain what you see. Our server reports use an apr feature that buffers ~8 KB of data before sending out the first data. In this specific JavaHL case you ask for just the revision numbers. (Unlike the C API, JavaHL's session.getLog() appears to handle a null list of revision properties as no revision properties, not the standard set!) I think every revision (encoded in our XML protocol) would cost about 70 bytes, so at least 100 revisions would fit in that buffer. That's it! I now ran my previous example as it was (3 times), and it took 19 s on average until the first revision was reported. With discoverPath set to true, the time until the first response dropped to 7 s on average. With discoverPath set back to false but revisionProperties set to "svn:log", the time until the first response is 1.5-2 s -- timings like those of the command line client! Regarding your patch, it could make sense to stop doubling next_forced_flush at a certain limit (say 128) if the sent log records are small, i.e. if discoverPath=false and revisionProperties=null. -Marc
svn log http://svn.apache.org/repos/asf/subversion/branches/1.8.x shows first results after 2-3 seconds, while the following code snippet takes at least 20 seconds (sometimes significantly more; it might depend on the server's load):

ISVNRemote session = factory.openRemoteSession("http://svn.apache.org/repos/asf");
List paths = Collections.singletonList("subversion/branches/1.8.x");
session.getLog(paths, Revision.SVN_INVALID_REVNUM, 0, 0, false, false, false, null,
        new LogMessageCallback() {
    public void singleMessage(Set changedPaths, long revision, Map revprops, boolean hasChildren) {
        System.out.println("DATA");
    }
});

Once the log responds, a bunch of revisions are reported, so it seems that there is some kind of caching of log records. I've tested with the latest 1.9.x sources on Windows but have seen the same behavior with the javahl-1.8-extensions branch on Linux, too. I can only find server-side buffering... But this might explain what you see. Our server reports use an apr feature that buffers ~8 KB of data before sending out the first data. In this specific JavaHL case you ask for just the revision numbers. (Unlike the C API, JavaHL's session.getLog() appears to handle a null list of revision properties as no revision properties, not the standard set!) I think every revision (encoded in our XML protocol) would cost about 70 bytes, so at least 100 revisions would fit in that buffer. For each of these revisions the server has to handle a security check for every changed path... And many branch revisions involve more than a few paths. This looks like an extreme worst case for this operation. The first 100 revisions would have to be fully processed before you get the first result. I think a patch like the one attached should fix most of the use cases without affecting server performance too much... But it has to be applied at the server. (I'm trying to create a testcase to see how much this helps) Bert
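The buffering arithmetic above can be checked with a couple of lines; the 8 KB buffer size and the ~70 bytes per revision are Bert's estimates, not measured values.

```java
public class BufferMath {
    public static void main(String[] args) {
        int bufferBytes = 8 * 1024;   // approximate apr output buffer before the first flush
        int bytesPerRevision = 70;    // rough XML cost of a revision-number-only log entry
        // Number of log entries the server can hold back before the client sees anything.
        System.out.println(bufferBytes / bytesPerRevision + " revisions fit in the buffer");
    }
}
```

This matches Bert's "at least 100 revisions" estimate: with roughly 117 small entries buffered, the whole delay is spent on server-side processing before the first byte reaches the client.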
Re: 1.9.x JavaHL: long initial delay when performing a log
Same here on OS X. However, I can't find any place in the code that would cause the delay. I added similar time-printing code to the C++ part of JavaHL and got extremely strange results: TestStatus (Java): 2015-03-13 22:21:40.403 svn_ra_get_log2: 2015-03-13T21:21:40.404731Z callback: 2015-03-13T21:21:50.098592Z invoke: 2015-03-13T21:21:50.098671Z TestStatus (Java): 2015-03-13 22:21:50.098 1666354 return: 2015-03-13T21:21:50.099058Z I can confirm this delay in native code on Windows. I've tried to dig deeper into svn_ra_get_log2; however, I'm lost at session->vtable->get_log ... is there some kind of "core loop" which processes incoming HTTP data, where we could place debug output? Now I'm really beginning to wonder what the native JavaHL implementation is doing differently from libsvn_client. Just a vague idea: could there be some kind of input caching in low-level HTTP libraries before information is sent to Subversion and converted to log entries? Maybe JavaHL initializes this caching differently than the command line client, or not at all? -Marc On 13.03.2015 22:28, Branko Čibej wrote: [Since when are we top-posting? grr...] On 13.03.2015 21:17, Bert Huijben wrote: Are you requesting the results in the same order in both cases? (I don't know what the arguments in your code represent) If you retrieve oldest to youngest, some delay is expected, as first all interesting revisions are fetched (youngest to oldest) and then results+details are spooled back the other way. It's a standard youngest-to-oldest log; the parameters for getLog are (paths, start-revision, end-revision, limit, ...). The normal svn invocation you compare to is the most efficient one... Bert ---- From: Marc Strapetz Sent: 13-3-2015 20:35 To: dev@subversion.apache.org Subject: 1.9.x JavaHL: long initial delay when performing a log I'm experiencing a strange initial delay when performing a log using JavaHL.
svn log http://svn.apache.org/repos/asf/subversion/branches/1.8.x shows first results after 2-3 seconds, while the following code snippet takes at least 20 seconds (sometimes significantly more; it might depend on the server's load):

ISVNRemote session = factory.openRemoteSession("http://svn.apache.org/repos/asf");
List paths = Collections.singletonList("subversion/branches/1.8.x");
session.getLog(paths, Revision.SVN_INVALID_REVNUM, 0, 0, false, false, false, null,
        new LogMessageCallback() {
    public void singleMessage(Set changedPaths, long revision, Map revprops, boolean hasChildren) {
        System.out.println("DATA");
    }
});

Once the log responds, a bunch of revisions are reported, so it seems that there is some kind of caching of log records. I tried this with a slight change, setting the limit parameter of getLog() to 1 instead of 0; here are the results:

$ time svn log --limit 1 http://svn.apache.org/repos/asf/subversion/branches/1.8.x
...
real 0m1.574s
user 0m0.007s
sys  0m0.006s

$ time java -cp ... -Djava.library.path=... TestStatus
DATA
real 0m1.430s
user 0m0.138s
sys  0m0.036s

So, no real difference here. Without the limit, it does take a bit more than 10 seconds to begin displaying results. So I tested when the callback was actually invoked: I added code to print the current time just before the call to getLog(), and the current time and revision in the log receiver callback. The output confirms that the delay is real and not, for example, an artefact of some caching in stdout:

$ time java -cp ... -Djava.library.path=... TestStatus
2015-03-13 21:59:46.731
2015-03-13 21:59:57.223 1666354
2015-03-13 21:59:57.223 1666269
...
2015-03-13 22:00:27.318 836421
2015-03-13 22:00:27.318 836420

I've tested with the latest 1.9.x sources on Windows but have seen the same behavior with the javahl-1.8-extensions branch on Linux, too. Same here on OS X. However, I can't find any place in the code that would cause the delay.
I added similar time-printing code to the C++ part of JavaHL and got extremely strange results:

TestStatus (Java): 2015-03-13 22:21:40.403
svn_ra_get_log2:   2015-03-13T21:21:40.404731Z
callback:          2015-03-13T21:21:50.098592Z
invoke:            2015-03-13T21:21:50.098671Z
TestStatus (Java): 2015-03-13 22:21:50.098 1666354
return:            2015-03-13T21:21:50.099058Z

(note that there's an hour of difference between the local time printed by Java and the UTC printed from the native code). This confirms that there is an actual delay of 10 seconds in the *native* code between the call to svn_ra_get_log2() and the first invocation of the (native) callback wrapper; each callback invocation takes about half a millisecond. Now I'm really beginning to wonder what the native JavaHL implementation is doing differently from libsvn_client. -- Brane
1.9.x JavaHL: long initial delay when performing a log
I'm experiencing a strange initial delay when performing a log using JavaHL. svn log http://svn.apache.org/repos/asf/subversion/branches/1.8.x shows first results after 2-3 seconds, while the following code snippet takes at least 20 seconds (sometimes significantly more; it might depend on the server's load):

ISVNRemote session = factory.openRemoteSession("http://svn.apache.org/repos/asf");
List paths = Collections.singletonList("subversion/branches/1.8.x");
session.getLog(paths, Revision.SVN_INVALID_REVNUM, 0, 0, false, false, false, null,
        new LogMessageCallback() {
    public void singleMessage(Set changedPaths, long revision, Map revprops, boolean hasChildren) {
        System.out.println("DATA");
    }
});

Once the log responds, a bunch of revisions are reported, so it seems that there is some kind of caching of log records. I've tested with the latest 1.9.x sources on Windows but have seen the same behavior with the javahl-1.8-extensions branch on Linux, too. -Marc
Re: RFE: API for an efficient retrieval of server-side mergeinfo data
On 19.02.2014 16:06, Julian Foad wrote: > Marc Strapetz wrote: >> Julian Foad wrote: >>> It looks like we have an agreement in principle. Would you like >>> to file an enhancement issue? >> >> Great. I've filed an issue now: >> >> http://subversion.tigris.org/issues/show_bug.cgi?id=4469 >> >> Would you please review the various attributes (Subcomponent, >> ...)? > > [...] > > SmartSVN and other front ends like to be able to draw a merge graph. > Even the 'svn mergeinfo' command-line command now draws a little > ASCII-art graph showing limited information about the most recent > merge. At present they all have to interpret mergeinfo themselves, at > a pretty low level, and the interpretation is subtle and poorly > understood. (I don't understand the edge cases related to adds and > deletes properly, and I've been working with it for years.) > So it seems like a good idea to encapsulate the interpretation of > mergeinfo a bit more, and expose data in a form that is geared > specifically towards explaining the history in the way that users can > understand it. Maybe think of it as an extended 'log' operation, > adding a small number of new notification types such as: > > * there is a full merge into here, bringing in all the new changes > from PATH up to REV; > * there is a partial merge to here, bringing in > some changes from PATH between REV1 and REV2; > > What do you think of that sort of interface? That definitely sounds good. Just to note that the extended-log-information should be easily receivable and cacheable for the entire repository and it must be rich enough to easily extract information for a specific path. Examples: - allow to include/exclude subtree merges for merge arrows - allow merge arrow display for sub-directories and individual files Ultimately, when having received all extended-log-information for all revisions, one should be able to recreate raw svn:mergeinfo for all paths of all revisions. 
I think this will guarantee that we won't miss any possible use case when defining the protocol and data structures. > Does your code already calculate something like that? Yes, and I recall having a hard time when writing this code :) -Marc
Re: RFE: API for an efficient retrieval of server-side mergeinfo data
On 18.02.2014 15:26, Julian Foad wrote: > Marc Strapetz wrote: >> On 17.02.2014 18:36, Julian Foad wrote: >>> Marc Strapetz wrote: >>>> Hence an API like the following should work well for us: >>>> >>>> interface MergeinfoDiffCallback { >>>>void mergeinfoDiff(int revision, >>>> Map pathToAddedMergeinfo, >>>> Map pathToRemovedMergeinfo); >>>> } >>>> >>>> void getMergeinfoDiff(String rootPath, >>>>long fromRev, long toRev, >>>>MergeinfoDiffCallback callback) >>>>throws ClientException; >>>> >>>> This should give us all mergeinfo which affects any path at or below >>>> rootPath. > [...] >>> let's use the simpler version that's sufficient for your use case. >> >> That will be fine. > [...] >> From cache perspective it's easier to build the cache starting at r0: >> [...] Anyway, I agree that receiving mergeinfo for more recent >> revisions first is reasonable as well. Hence if you say the effort is >> the same, then we could allow both: fromRev <= toRev, in which case we >> will received mergeinfo in ascending order and fromRev > toRev in which >> case it will be descending order? > > Could do. It seems like a relatively minor decision. > >>>> [...] important that ranges for which no mergeinfo diff is present >>>> will be processed quickly on the server-side, otherwise we could run >>>> into some kind of endless loop, if the cache building process is >>>> shutdown and resumed frequently. >>> >>> [...] There is a client-side work-around: request ranges of say a thousand >>> revisions at a time, and then you can easily keep track of how many of these >>> requests have been completed. >> >> OK, that will work. > > It looks like we have an agreement in principle. Would you like to file an > enhancement issue? Great. I've filed an issue now: http://subversion.tigris.org/issues/show_bug.cgi?id=4469 Would you please review the various attributes (Subcomponent, ...)? -Marc
Re: RFE: API for an efficient retrieval of server-side mergeinfo data
On 17.02.2014 18:36, Julian Foad wrote: > Marc Strapetz wrote: > >>> ... I'll dig into the cache code ... >> >> I did that now and the storage is quite simple: we have a main file >> which contains the diff (added, removed) for every path in every >> revision and a revision-based index file with constant record length (to >> quickly locate entries in the main file). >> >> This storage allows to efficiently query for the mergeinfo diff for a >> path in a certain revision. That's sufficient to build the merge arrows. >> Assembling the complete mergeinfo for a certain revision is hard with >> this cache, but actually not necessary for our use case. >> >> Hence an API like the following should work well for us: >> >> interface MergeinfoDiffCallback { >> void mergeinfoDiff(int revision, >> Map pathToAddedMergeinfo, >> Map pathToRemovedMergeinfo); >> } >> >> void getMergeinfoDiff(String rootPath, >> long fromRev, long toRev, >> MergeinfoDiffCallback callback) >> throws ClientException; >> >> This should give us all mergeinfo which affects any path at or below >> rootPath. >> >> When disregarding our particular use case, a more consistent API could be: >> >> void getMergeinfoDiff(Iterable paths, >> long fromRev, long toRev, >> Mergeinfo.Inheritance inherit, >> boolean includeDescendants, >> MergeinfoDiffCallback callback) >> throws ClientException; > > I want to discourage callers from knowing or caring how the mergeinfo is > stored, so I want to leave out the 'inherit' parameter. > > I also think it makes sense not to offer the options of ignoring descendants > (that is, subtree mergeinfo), or specifying multiple paths. After all, this > is not a low level API to be used for implementing the mergeinfo subsystem, > it's a high level query. > > So let's use the simpler version that's sufficient for your use case. That will be fine. >> The mergeinfo diff should be received starting at fromRev and ending at >> toRev. 
No callback is expected if there is no mergeinfo diff for a >> certain revision. Depending on the server-side storage, we may require >> to always have fromRev >= toRev or always fromRev <= toRev. If it >> doesn't matter, better have always fromRev <= toRev (for reasons given >> below). > > The same procedure could work either forwards or backwards, it doesn't really > matter as long as you know which way it is going. Often it is useful to know > about the more recent changes first, and have the option to look back right > to revision 0 if necessary. From the cache perspective it's easier to build the cache starting at r0: then cache files will contain information for older revisions at lower positions. This allows us to crop the files easily at a certain revision and rebuild them from there. That's something we do if a log message is modified from within the GUI (it might not play a role for mergeinfo, though). Anyway, I agree that receiving mergeinfo for more recent revisions first is reasonable as well. Hence, if you say the effort is the same, then we could allow both: fromRev <= toRev, in which case we will receive mergeinfo in ascending order, and fromRev > toRev, in which case it will be in descending order? >> Regarding the usage, let's assume always fromRev <= toRev, then we will >> invoke >> >> getMergeinfoDiff(cacheRoot, 0, head, callback) >> >> This should start returning mergeinfo diff immediately, starting at >> revision 0, so we quickly make at least a bit of progress. Now, if the >> cache building process is shutdown and restarted later, it will resume >> with the latest known revision: >> >> getMergeinfoDiff(cacheRoot, latestKnownRevision, head, callback) >> >> This procedure will be performed until we have caught up with head. >> Note, that the latestKnownRevision is the last revision for which we >> have received a callback.
Depending on the server-side storage, this may >> be different from the revision which the server is currently >> processing at the time the cache building process is shut down. Hence it >> will be important that ranges for which no mergeinfo diff is present >> are processed quickly on the server side; otherwise we could run >> into some kind of endless loop if the cache building process is >> shut down and resumed frequently.
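The work-around agreed above (request ranges of, say, a thousand revisions and track which requests completed) could look roughly like this on the client side. getMergeinfoDiff is the proposed API and does not exist yet, so it is stubbed here; the chunk size and names are illustrative only.

```java
import java.util.ArrayList;
import java.util.List;

public class ChunkedFetch {
    // Records (fromRev, toRev) of each stubbed server request, for inspection.
    static List<long[]> requests = new ArrayList<>();

    // Stub standing in for the proposed getMergeinfoDiff(root, fromRev, toRev, callback).
    static void getMergeinfoDiff(String root, long fromRev, long toRev) {
        requests.add(new long[] {fromRev, toRev});
    }

    // Fetch in fixed-size revision chunks; after each chunk, 'checkpoint' is a
    // safe resume point, so an interrupted cache build never loses a full run.
    static long buildCache(long checkpoint, long head, long chunk) {
        while (checkpoint < head) {
            long to = Math.min(checkpoint + chunk, head);
            getMergeinfoDiff("/", checkpoint + 1, to);
            checkpoint = to; // persist this value to resume after a shutdown
        }
        return checkpoint;
    }

    public static void main(String[] args) {
        System.out.println(buildCache(0, 2500, 1000)); // 2500
        System.out.println(requests.size() + " chunk requests"); // 3 chunk requests
    }
}
```

Because the checkpoint advances even for chunks that contain no mergeinfo diff at all, the endless-loop concern above disappears on the client side regardless of server timing.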
Re: RFE: API for an efficient retrieval of server-side mergeinfo data
On 14.02.2014 14:18, Marc Strapetz wrote: >>> Can we think of a better way to design the API so that it returns the >>> interesting data without all the redundancy? Basically I think we want to >>> describe changes to mergeinfo, rather than raw mergeinfo. >> >> Marc, >> >> Perhaps a better way to ask the question is: Can I encourage you to write >> the API that you want? You already designed a cache for the data. What is >> the shape of the data >> in your cache, and can the API get the data you want in the form you >> want it, directly? We'd be glad to help implement it. Even if you start with >> an API which simply iterates over a range of revisions, at least that would >> allow for the possibility of improving the efficiency internally at various >> layers. > > Looks like our emails have crossed :) I'll dig into the cache code and > will try to come back with a more detailed API suggestion soon. I did that now and the storage is quite simple: we have a main file which contains the diff (added, removed) for every path in every revision and a revision-based index file with constant record length (to quickly locate entries in the main file). This storage allows to efficiently query for the mergeinfo diff for a path in a certain revision. That's sufficient to build the merge arrows. Assembling the complete mergeinfo for a certain revision is hard with this cache, but actually not necessary for our use case. Hence an API like the following should work well for us: interface MergeinfoDiffCallback { void mergeinfoDiff(int revision, Map pathToAddedMergeinfo, Map pathToRemovedMergeinfo); } void getMergeinfoDiff(String rootPath, long fromRev, long toRev, MergeinfoDiffCallback callback) throws ClientException; This should give us all mergeinfo which affects any path at or below rootPath. 
When disregarding our particular use case, a more consistent API could be:

void getMergeinfoDiff(Iterable paths,
                      long fromRev, long toRev,
                      Mergeinfo.Inheritance inherit,
                      boolean includeDescendants,
                      MergeinfoDiffCallback callback)
    throws ClientException;

The mergeinfo diff should be received starting at fromRev and ending at toRev. No callback is expected if there is no mergeinfo diff for a certain revision. Depending on the server-side storage, we may require to always have fromRev >= toRev or always fromRev <= toRev. If it doesn't matter, better always have fromRev <= toRev (for reasons given below). Regarding the usage, let's assume always fromRev <= toRev; then we will invoke

getMergeinfoDiff(cacheRoot, 0, head, callback)

This should start returning the mergeinfo diff immediately, starting at revision 0, so we quickly make at least a bit of progress. Now, if the cache building process is shut down and restarted later, it will resume with the latest known revision:

getMergeinfoDiff(cacheRoot, latestKnownRevision, head, callback)

This procedure will be performed until we have caught up with head. Note that the latestKnownRevision is the last revision for which we have received a callback. Depending on the server-side storage, this may be different from the revision which the server is currently processing at the time the cache building process is shut down. Hence it will be important that ranges for which no mergeinfo diff is present are processed quickly on the server side; otherwise we could run into some kind of endless loop if the cache building process is shut down and resumed frequently. -Marc
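For illustration, the client-side storage described above (a main file with variable-length mergeinfo-diff entries, plus an index file with one constant-length record per revision) can be sketched with plain JDK I/O; the class name and the entry format are invented for the sketch.

```java
import java.io.File;
import java.io.IOException;
import java.io.RandomAccessFile;

public class MergeinfoCacheSketch {
    static final int RECORD_LEN = 8; // one fixed-length long (a file offset) per revision

    // Write diffs for revisions 0..diffs.length-1, then look up the entry
    // for 'rev' in constant time via the index file.
    static String demo(String[] diffs, int rev) throws IOException {
        File index = File.createTempFile("mi", ".idx");
        File data = File.createTempFile("mi", ".dat");
        try (RandomAccessFile idx = new RandomAccessFile(index, "rw");
             RandomAccessFile dat = new RandomAccessFile(data, "rw")) {
            for (int r = 0; r < diffs.length; r++) {
                idx.seek((long) r * RECORD_LEN);
                idx.writeLong(dat.getFilePointer()); // where this revision's entry starts
                dat.writeUTF(diffs[r]);              // variable-length entry in the main file
            }
            // Lookup: one seek in the index, one seek in the data file.
            idx.seek((long) rev * RECORD_LEN);
            dat.seek(idx.readLong());
            return dat.readUTF();
        } finally {
            index.delete();
            data.delete();
        }
    }

    public static void main(String[] args) throws IOException {
        String[] diffs = {"", "+ /trunk:1-10", "- /trunk:5"};
        System.out.println(demo(diffs, 1)); // + /trunk:1-10
    }
}
```

Because index records have constant length, cropping both files at a revision boundary and rebuilding from there (as described for the log cache) is a simple truncate.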
Re: RFE: API for an efficient retrieval of server-side mergeinfo data
>> Can we think of a better way to design the API so that it returns the >> interesting data without all the redundancy? Basically I think we want to >> describe changes to mergeinfo, rather than raw mergeinfo. > > Marc, > > Perhaps a better way to ask the question is: Can I encourage you to write the > API that you want? You already designed a cache for the data. What is the > shape of the data > in your cache, and can the API get the data you want in the form you > want it, directly? We'd be glad to help implement it. Even if you start with > an API which simply iterates over a range of revisions, at least that would > allow for the possibility of improving the efficiency internally at various > layers. Looks like our emails have crossed :) I'll dig into the cache code and will try to come back with a more detailed API suggestion soon. -Marc
Re: RFE: API for an efficient retrieval of server-side mergeinfo data
On 14.02.2014 11:38, Julian Foad wrote: > Marc Strapetz wrote: >> For SmartSVN we are optionally displaying merge arrows in the Revision >> Graph. Here is a sample image, how this looks like: >> >> http://imgur.com/MzrLq00 >> >>> From the JavaHL sources I understand that there is currently only one >>> method to retrieve server-side mergeinfo and this one works on a single >>> revision only: >> >> Map getMergeinfo(Iterable paths, >> long revision, >> Mergeinfo.Inheritance inherit, >> boolean includeDescendants) > > Right. This is a wrapper around the core library function > svn_ra_get_mergeinfo(). > >> This makes the Merge Arrow feature practically unusable for larger graphs. >> >> To improve performance, in earlier versions we were using a client-side >> mergeinfo cache (similar as the main log-cache, which TSVN is using as >> well). However, populating this cache (i.e. querying for mergeinfo for >> *every* revision of the repository) often resulted in bringing the >> entire Apache server down, especially if many users were building their >> log cache at the same time. >> >> To address these problems, it would be great to have a more powerful >> API, which allows either to retrieve all mergeinfo for a *revision >> range* or for a *set of revisions*. > > The request for a more powerful API certainly makes sense, but what form of > API? > > In the Subversion project source code: > > # How many lines/bytes of mergeinfo in trunk, right now? > $ svn pg -R svn:mergeinfo | wc -lc > 245 24063 > > # How many branches and tags? > $ svn ls ^/subversion/tags/ ^/subversion/branches/ | wc -l > 288 > > # Approx. total lines/bytes mergeinfo per revision? > $ echo $((245 * 289)) $((24063 * 289)) > 70805 6954207 > > So in each revision there are roughly 70,000 lines of mergeinfo, occupying 7 > MB in plain text representation. > > The mergeinfo properties change whenever a merge is done. All other commits > leave all the mergeinfo unchanged. 
So mergeinfo is unchanged in, what, 99% of > revisions? > > It doesn't seem logical to simply request all the mergeinfo for each revision > in turn, and return it all in raw form. > > Can we think of a better way to design the API so that it returns the > interesting data without all the redundancy? Basically I think we want to > describe changes to mergeinfo, rather than raw mergeinfo. True, actually on the client side we are interested in the diff anyway. So some kind of callback: interface MergeInfoDiffCallback { void mergeInfoDiff(int revision, Mergeinfo added, Mergeinfo removed); } would be convenient. This would work for revision ranges as well as for a set of revisions. -Marc
RFE: API for an efficient retrieval of server-side mergeinfo data
For SmartSVN we are optionally displaying merge arrows in the Revision
Graph. Here is a sample image of how this looks:

http://imgur.com/MzrLq00

From the JavaHL sources I understand that there is currently only one
method to retrieve server-side mergeinfo, and this one works on a single
revision only:

  Map getMergeinfo(Iterable paths,
                   long revision,
                   Mergeinfo.Inheritance inherit,
                   boolean includeDescendants)

This makes the Merge Arrow feature practically unusable for larger
graphs.

To improve performance, in earlier versions we were using a client-side
mergeinfo cache (similar to the main log cache, which TSVN uses as
well). However, populating this cache (i.e. querying the mergeinfo of
*every* revision of the repository) often resulted in bringing the
entire Apache server down, especially if many users were building their
log cache at the same time.

To address these problems, it would be great to have a more powerful API
which allows retrieving all mergeinfo either for a *revision range* or
for a *set of revisions*. Querying a set of revisions would be more
flexible and would allow generating merge arrows on the fly. On the
other hand, to relieve the server, it's desirable to cache retrieved
mergeinfo on the client side anyway, hence a range query would be fine
as well.

-Marc
JavaHL: support for authenticating with different credentials for the same realm
For SmartSVN, we are looking for a way to support multiple credentials
(username and password) for the same realm. When using the command-line
client, --username will do the job; however, this option is not well
suited for a GUI client. Here, a flexible as well as intuitive approach
would be to support the user@ specification as part of the URL. To be
able to implement that, I'd suggest extending JavaHL in either of the
following ways:

(A) UserPasswordCallback.prompt(String realm, String username) should
give the user@ part of the requested URL as the default "username", if
present. Hence, the precedence for this default would be:

  (1) ISVNClient.username() (--username option)
  (2) "user@" from the URL
  (3) system user (as currently)

(B) Rework UserPasswordCallback to include the accessed URL, like:

  UserPasswordCallback.prompt(String realm, String username, String url)

(B) would allow parsing the user@ from the specified URL and has the
additional advantage that the credentials prompt will also be able to
display for which URL credentials are required. This can help to design
a clearer "Login" dialog.

Taken as a whole, I'd appreciate a more general review of
UserPasswordCallback, which is currently hard to implement for a GUI
client: e.g. to provide better support for SSL, more specific methods
than askQuestion with a single String parameter would be needed. I can
post a proposal if this would be helpful.

Thanks for your consideration.

-Marc
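For approach (B), extracting the user@ part from the URL on the client side is straightforward with java.net.URI, since Subversion URLs follow generic URI syntax. A minimal sketch (the class and method names here are hypothetical, not part of JavaHL):

```java
import java.net.URI;
import java.net.URISyntaxException;

public class UrlUserInfo {

    // Extracts the "user@" part of a checkout URL so it can be offered as
    // the default username in a credentials prompt. Returns null if the
    // URL carries no userinfo or cannot be parsed.
    static String userFromUrl(String url) {
        try {
            String userInfo = new URI(url).getUserInfo();
            if (userInfo == null)
                return null;
            // userinfo may be "user" or "user:password"; keep the user only.
            int colon = userInfo.indexOf(':');
            return colon < 0 ? userInfo : userInfo.substring(0, colon);
        } catch (URISyntaxException ex) {
            return null;
        }
    }

    public static void main(String[] args) {
        System.out.println(userFromUrl("https://marc@svn.example.com/repos/trunk"));
        System.out.println(userFromUrl("https://svn.example.com/repos/trunk"));
    }
}
```

The same parsing works for svn+ssh:// URLs, where user@ is already common practice, which is one reason the syntax feels natural in a GUI client as well.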
Re: SVN 1.7 problems with case insensitive file systems (Windows)
On 12.09.2011 11:15, Philip Martin wrote:
> Marc Strapetz writes:
>
>> There are some problems when the capitalization of a file or directory
>> name changes in the working copy (at least on Windows). I'm starting
>> off with the following tree:
>>
>> # svn status -v
>> 11 Marc .
>> 11 Marc a
>> 11 Marc a\mu
>> 11 Marc a\b
>> 11 Marc a\b\lambda
>> 11 Marc a\b\e
>> 11 Marc a\b\e\alpha
>> 11 Marc a\b\e\beta
>> 11 Marc a\d
>> 11 Marc a\d\gamma
>> 11 Marc iota
>>
>> Then a/b/e will be changed to upper case a/b/E:
>>
>> # svn status
>> ! a\b\e
>> ? a\b\E
>>
>> This is somewhat strange, as a/b/e is missing but a/b/e/alpha and beta
>> are not.
>
> There is an assumption in the code that when a directory is missing the
> whole tree is missing, so that is the expected behaviour.

I can't confirm that. If I start off with a clean working copy (without
a/b/e having changed its case) and remove a/b/e, I get:

# svn status
! a\b\e
! a\b\e\alpha
! a\b\e\beta

>> Adding the unversioned directory and removing the missing one
>> seems to work:
>>
>> # svn add a/b/E
>> # svn rm a/b/e
>> # svn status
>> ! a\b\E
>> ! a\b\E\alpha
>> ! a\b\E\beta
>> D a\b\e
>> D a\b\e\alpha
>> D a\b\e\beta
>>
>> However, a subsequent commit fails:
>>
>> # svn commit -m "a/b/e moved to a/b/E"
>> svn: E155010: Commit failed (details follow):
>> svn: E155010: 'D:\greek-tree.svn\a\b\E' is scheduled for addition, but
>> is missing
>
> That's odd. It looks like a case-only rename and issue 3702 claims to
> be fixed:
>
> http://subversion.tigris.org/issues/show_bug.cgi?id=3702
>
> If you start with a pristine, unmodified tree and run
>
>   svn mv a\b\e a\b\E
>
> can you commit that?

Yes, that works.

>> When adding a\b\E now, alpha gets duplicated:
>>
>> # svn add a\b\E
>> # svn status
>> ! a\b\e
>> M a\b\e\alpha
>> A a\b\E
>> A a\b\E\alpha
>> A a\b\E\beta
>>
>> Removing a\b\e doesn't work:
>>
>> # svn rm a\b\e
>> svn: E195006: Use --force to override this restriction (local
>> modifications may be lost)
>> svn: E195006: 'D:\greek-tree.svn\a\b\e\alpha' has local modifications
>> -- commit or revert them first
>
> Does adding force work?

Yes, that works.

# svn status
! a\b\E
! a\b\E\alpha
! a\b\E\beta
D a\b\e
D a\b\e\alpha
D a\b\e\beta

However, commit fails with a similar error message as before:

# svn commit -m "a b/e removed to a/b/E"
svn: E155010: Commit failed (details follow):
svn: E155010: 'D:\greek-tree\a\b\E' is scheduled for addition, but is
missing

That's the content of wc.db after "svn rm --force":

sqlite> select local_relpath, op_depth, presence from nodes;
a|0|normal
a/b|0|normal
a/b/e|0|normal
a/b/e/alpha|0|normal
a/b/e/beta|0|normal
a/b/lambda|0|normal
a/d|0|normal
a/d/gamma|0|normal
a/mu|0|normal
iota|0|normal
|0|normal
a/b/E|3|normal
a/b/E/alpha|4|normal
a/b/E/beta|4|normal
a/b/e|3|base-deleted
a/b/e/alpha|3|base-deleted
a/b/e/beta|3|base-deleted

--
Best regards,
Marc Strapetz
=
syntevo GmbH
http://www.syntevo.com
http://blog.syntevo.com
SVN 1.7 problems with case insensitive file systems (Windows)
There are some problems when the capitalization of a file or directory
name changes in the working copy (at least on Windows). I'm starting off
with the following tree:

# svn status -v
11 Marc .
11 Marc a
11 Marc a\mu
11 Marc a\b
11 Marc a\b\lambda
11 Marc a\b\e
11 Marc a\b\e\alpha
11 Marc a\b\e\beta
11 Marc a\d
11 Marc a\d\gamma
11 Marc iota

Then a/b/e will be changed to upper case a/b/E:

# svn status
! a\b\e
? a\b\E

This is somewhat strange, as a/b/e is missing but a/b/e/alpha and beta
are not. Adding the unversioned directory and removing the missing one
seems to work:

# svn add a/b/E
# svn rm a/b/e
# svn status
! a\b\E
! a\b\E\alpha
! a\b\E\beta
D a\b\e
D a\b\e\alpha
D a\b\e\beta

However, a subsequent commit fails:

# svn commit -m "a/b/e moved to a/b/E"
svn: E155010: Commit failed (details follow):
svn: E155010: 'D:\greek-tree.svn\a\b\E' is scheduled for addition, but
is missing

There are more unexpected results when starting off with a modified
a/b/e/alpha:

# svn status
M a\b\e\alpha
# svn cat a\b\e\alpha
new file content

Again, a/b/e is changed to a/b/E. Then status is more or less OK:

# svn status
! a\b\e
M a\b\e\alpha
? a\b\E

When adding a\b\E now, alpha gets duplicated:

# svn add a\b\E
# svn status
! a\b\e
M a\b\e\alpha
A a\b\E
A a\b\E\alpha
A a\b\E\beta

Removing a\b\e doesn't work:

# svn rm a\b\e
svn: E195006: Use --force to override this restriction (local
modifications may be lost)
svn: E195006: 'D:\greek-tree.svn\a\b\e\alpha' has local modifications --
commit or revert them first

Committing seems to work:

# svn commit -m "a/b/e moved to a/b/E"
Adding a\b\E
Adding a\b\E\alpha
Adding a\b\E\beta
Sending a\b\e\alpha
Transmitting file data ...
Committed revision 2.
It has added a/b/E, as expected:

# svn ls file://localhost/d:/greek-tree.repo
a/
a/b/
a/b/E/
a/b/E/alpha
a/b/E/beta
a/b/e/
a/b/e/alpha
a/b/e/beta
a/b/lambda
a/d/
a/d/gamma
a/mu
iota

But it has modified a/b/e/alpha as well, unexpectedly:

# svn cat file://localhost/d:/greek-tree.repo/a/b/e/alpha
new file content
# svn cat file://localhost/d:/greek-tree.repo/a/b/E/alpha
new file content

Apart from a possible fix in the 1.7 series (which I understand might be
quite complex), what would be the correct, or rather expected, behavior?
From the perspective of a UI client, would you recommend rejecting work
with case-changed entries entirely, to avoid the problems mentioned?
Instead, the user could be told to fix the file-name case changes, or
the client could do that automatically (like TSVN does for SVN 1.6).

--
Best regards,
Marc Strapetz
=
syntevo GmbH
http://www.syntevo.com
http://blog.syntevo.com
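One piece of such a client-side safeguard would be recognizing the suspicious "! a\b\e" / "? a\b\E" status pair as a probable case-only rename and warning the user before any add/rm is attempted. A minimal sketch of that check (the helper class is hypothetical, not part of any Subversion API):

```java
public class CaseOnlyRename {

    // Returns true if the two paths differ only in letter case, i.e. a
    // missing entry and an unversioned entry that probably resulted from
    // a case-only rename on a case-insensitive file system.
    static boolean isCaseOnlyRename(String missingPath, String unversionedPath) {
        return !missingPath.equals(unversionedPath)
                && missingPath.equalsIgnoreCase(unversionedPath);
    }

    public static void main(String[] args) {
        System.out.println(isCaseOnlyRename("a/b/e", "a/b/E")); // case-only rename
        System.out.println(isCaseOnlyRename("a/b/e", "a/b/f")); // unrelated paths
    }
}
```

Note that equalsIgnoreCase only approximates Windows' actual case folding (which is locale- and file-system-dependent), so a real client might want to compare using the file system's own name resolution instead.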