Hi Bert! No, that wasn't running. And just to make sure, I redid my checkout tests, closed all Explorer windows, and checked with Task Manager. Got the same timing results as before.

Regards, Michael
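(For anyone repeating this check: a scriptable alternative to Task Manager, assuming a standard Windows setup where the TortoiseSVN cache process is named TSVNCache.exe, is:

    REM Lists a matching row if the TortoiseSVN cache process is running
    tasklist /FI "IMAGENAME eq TSVNCache.exe"

    REM Asks the cache process to exit before re-running the timing tests
    taskkill /IM TSVNCache.exe

Note that browsing the working copy, or any parent directory of it, in Explorer can start TSVNCache again mid-test, as Bert describes below.)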
________________________________
From: Bert Huijben [b...@qqmail.nl]
Sent: Thursday, August 11, 2011 12:52
To: Ketting, Michael; dev@subversion.apache.org
Subject: RE: Significant checkout performance degradation between 1.6.1 and 1.7b2

A completely different question: do you have a recent TortoiseSVN (TSVNCache.exe) running while checking out for those tests?

I just ruined a test run by accidentally enabling TSVNCache on it (by accessing a parent directory with Windows Explorer), and for the rest of the checkout TSVNCache consistently took three times more CPU than the checkout process by continuously calling ‘svn status’ on the same working copy.

Bert

From: Ketting, Michael [mailto:michael.kett...@rubicon.eu]
Sent: Thursday, August 11, 2011 12:13
To: dev@subversion.apache.org
Subject: RE: Significant checkout performance degradation between 1.6.1 and 1.7b2

Bert, you can access the repository here, in case you want to take a closer look: https://svn.re-motion.org/svn/Remotion/trunk/

> What about svn:needs-lock?

No, I don't have any locked files.

> svn:eol-style

Never heard of that one before. No, I've never set it. And as far as I know, it's not set in the other repositories, either.

> svn:keywords

Mainly, it's a couple of MIME types for binary files. Maybe 100? 200? Plus a few ignores, and probably some merge info sprinkled across the trunk.

Michael

________________________________
From: Bert Huijben [b...@qqmail.nl]
Sent: Thursday, August 11, 2011 12:01
To: Ketting, Michael; dev@subversion.apache.org
Subject: RE: Significant checkout performance degradation between 1.6.1 and 1.7b2

Can you tell a bit more about this ‘worst case’ working copy? Does it use svn:keywords in many places? What about svn:needs-lock? More svn:eol-style properties than the other working copies?

Bert

From: Ketting, Michael [mailto:michael.kett...@rubicon.eu]
Sent: Thursday, August 11, 2011 10:54
To: dev@subversion.apache.org
Subject: RE: Significant checkout performance degradation between 1.6.1 and 1.7b2

Just a bit more information: I've now also tried the checkout tests with other big trunks in our company. One took 7 min (svn 1.6) vs. 9 min (svn 1.7), the other 4 min (svn 1.6) vs. 6 min (svn 1.7). So both are slower, but in the range also measured with the benchmarks. Looks like my own project really is the worst-case scenario :)

Regards, Michael

________________________________
From: Mark Phippard [markp...@gmail.com]
Sent: Tuesday, August 09, 2011 17:05
To: Ketting, Michael
Cc: dev@subversion.apache.org
Subject: Re: Significant checkout performance degradation between 1.6.1 and 1.7b2

On Tue, Aug 9, 2011 at 8:07 AM, Mark Phippard <markp...@gmail.com> wrote:

> Is this via http? Given that export is slower, I'd be willing to bet the performance difference is from the new HTTP client library, serf. It is typically slower than Neon. Try switching to Neon and run it again.

I updated to the latest beta of TortoiseSVN, and it looks to me like they have changed the default HTTP client to Neon already. So unless you have specifically made serf the default client in your servers file, it is not likely that this is your problem.
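(For reference, a sketch of the setting Mark is referring to: Subversion 1.6/1.7 select the HTTP client library via an http-library option in the per-user ‘servers’ runtime configuration file, for example:

    [global]
    # Choose the HTTP access library: 'neon' or 'serf'
    http-library = neon

On Windows the file is %APPDATA%\Subversion\servers; on Unix-like systems it is ~/.subversion/servers. The option can also be set per server group in the same file.)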
I developed a set of open-source benchmarks to measure Subversion performance that you can get here: https://ctf.open.collab.net/sf/sfmain/do/viewProject/projects.csvn

Perhaps you could set up the repository on your server and run the benchmarks using 1.6 and 1.7 to see what kind of results you get? When I run the tests, I see a considerable performance gain with 1.7. The "FolderTests" are probably the closest tests to your scenario.

It will be easier to focus on any remaining performance issues if we can identify and measure them in an open and consistent manner, so we can see progress and the impact of different changes. If these benchmarks do not show the same problems you see on your real code, then we need to add more benchmarks so that we can capture whatever the problem is.

--
Thanks

Mark Phippard
http://markphip.blogspot.com/
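(For anyone wanting to reproduce the raw timings outside the benchmark suite, a minimal sketch using PowerShell's Measure-Command; the install paths are hypothetical, and the URL is the repository Michael posted earlier in the thread:

    REM Time a fresh checkout with each client into separate, empty directories
    powershell -Command "Measure-Command { C:\svn-1.6\svn.exe checkout https://svn.re-motion.org/svn/Remotion/trunk/ wc-1.6 }"
    powershell -Command "Measure-Command { C:\svn-1.7\svn.exe checkout https://svn.re-motion.org/svn/Remotion/trunk/ wc-1.7 }"

Timing ‘svn export’ the same way helps separate server and network cost from working-copy overhead, since export writes no administrative data.)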