Re: svn commit: r1697654 - /subversion/branches/1.9.x/STATUS

2015-08-25 Thread Branko Čibej
On 25.08.2015 17:31, Stefan Fuhrmann wrote:
 On Tue, Aug 25, 2015 at 12:55 PM, Branko Čibej br...@wandisco.com wrote:

 On 25.08.2015 13:49, br...@apache.org wrote:
 Author: brane
 Date: Tue Aug 25 11:49:09 2015
 New Revision: 1697654

 URL: http://svn.apache.org/r1697654
 Log:
 * branches/1.9.x/STATUS:
- Approve r1693886.
- Temporarily veto r1694481; the change looks broken.
 [...]

 @@ -98,5 +84,22 @@ Candidate changes:
  Veto-blocked changes:
  =

 + * r1694481
 +   Fix Unix build on systems without GPG agent.
 +   Justification:
 + This is a user-reported issue.
 +   Votes:
 + +1: stefan2, philip
 + -1: brane (You can't just remove a public API implementation,
 +even if it is deprecated. And the prototype is still
 +right there in svn_auth.h)
 +
  Approved changes:
  =
 r1694481 (conditionally) removes the implementation of a public API,
 whilst leaving the prototype in svn_auth.h untouched. This is a
 violation of our ABI compatibility rules, and also a linking error
 waiting to happen.

 Except that the very problem is that
 svn_auth__get_gpg_agent_simple_provider
 is not implemented either if SVN_HAVE_GPG_AGENT
 is not defined. And that linker problem is the one being
 already reported and fixed by the patch.

 You are still right that we introduce another linker problem
 further down the road for some people that stumbled
 across the first one in the past. And not implementing
 the public API is a bad thing.

 So, I think we need to do some coding to fix this on /trunk.
 Question is whether we want to ship r1694481 as a stop-gap
 patch for 1.9.1 and enable people to build SVN again.


Daniel suggested inserting a dummy handler if we don't have the GPG
agent support. I think that may be the only reasonable solution for both
trunk and 1.9.1 (or .x if we don't think it's important enough for .1).

The real effort here is double-checking that a dummy handler won't break
credentials resolution.

-- Brane


Re: svn commit: r1694533 - /subversion/trunk/subversion/libsvn_ra_svn/marshal.c

2015-08-25 Thread Branko Čibej
On 25.08.2015 14:10, Branko Čibej wrote:
 On 06.08.2015 18:10, stef...@apache.org wrote:
 Author: stefan2
 Date: Thu Aug  6 16:10:39 2015
 New Revision: 1694533

 URL: http://svn.apache.org/r1694533
 Log:
 Fix an alignment problem on machines with 32 bit pointers but atomic 64
 data access that may not be misaligned.

 * subversion/libsvn_ra_svn/marshal.c
  (read_item): Ensure that the array contents always have the APR
default alignment.

 Found by: Rainer Jung rainer.jung{_AT_}kippdata.de

 Modified:
 subversion/trunk/subversion/libsvn_ra_svn/marshal.c

 Modified: subversion/trunk/subversion/libsvn_ra_svn/marshal.c
 URL: 
 http://svn.apache.org/viewvc/subversion/trunk/subversion/libsvn_ra_svn/marshal.c?rev=1694533&r1=1694532&r2=1694533&view=diff
 ==
 --- subversion/trunk/subversion/libsvn_ra_svn/marshal.c (original)
 +++ subversion/trunk/subversion/libsvn_ra_svn/marshal.c Thu Aug  6 16:10:39 
 2015
 @@ -1190,14 +1190,20 @@ static svn_error_t *read_item(svn_ra_svn
  }
else if (c == '(')
  {
 +  /* On machines with 32 bit pointers, array headers are only 20 bytes
 +   * which is not enough for our standard 64 bit alignment.
 +   * So, determine a suitable block size for the APR array header that
 +   * keeps proper alignment for following structs. */
 +  const apr_size_t header_size
 += APR_ALIGN_DEFAULT(sizeof(apr_array_header_t));
 +
/* Allocate an APR array with room for (initially) 4 items.
 * We do this manually because lists are the most frequent protocol
 * element, often used to frame a single, optional value.  We save
 * about 20% of total protocol handling time. */
 -  char *buffer = apr_palloc(pool, sizeof(apr_array_header_t)
 -  + 4 * sizeof(svn_ra_svn_item_t));
 -  svn_ra_svn_item_t *data
 -= (svn_ra_svn_item_t *)(buffer + sizeof(apr_array_header_t));
 +  char *buffer = apr_palloc(pool,
 +header_size + 4 * 
 sizeof(svn_ra_svn_item_t));
 +  svn_ra_svn_item_t *data = (svn_ra_svn_item_t *)(buffer + header_size);
  
item->kind = SVN_RA_SVN_LIST;
item->u.list = (apr_array_header_t *)buffer;

 How exactly is all this different from:

 apr_array_make(pool, 4, sizeof(svn_ra_svn_item_t))?

 Other than relying on magic knowledge of APR implementation details?

All right, so I figured that the difference is that apr_array_make does
two allocations compared to one in this code. Still: relying on
knowledge of APR implementation details is a really bad idea. As it
stands, APR will correctly resize this array if necessary; but there's
no guarantee that, sometime in the future, resizing such arrays won't
fail horribly.

I very strongly suggest that we put a plain ol' apr_array_make here and,
if this is really perceived as such a huge *overall* performance
problem, put the allocation hack into APR itself.

I suspect that apr_palloc is fast enough that we really don't need this
hack.

-- Brane


Re: Issue #4588, part 1: FSFS access error

2015-08-25 Thread Stefan Fuhrmann
On Tue, Aug 25, 2015 at 12:47 AM, Evgeny Kotkov evgeny.kot...@visualsvn.com
 wrote:

 Stefan Fuhrmann stefan.fuhrm...@wandisco.com writes:

  My current hypothesis is that the server did not get restarted after
  replacing the repository.  Because we decided not to make the instance ID
  part of the cache key, we could easily have picked up cached format 6
 data
  for the format 7 repository.

 [...]

  That said, are we still happy with the decision to not make the instance
  ID part of the cache key? The rationale has basically been fail early
  because failure to restart or reconfigure the server after the repo files
  got modified might lead to any kind of unknown problems (much) further
 down
  the road.

 As I see it, there are two separate problems:

[...]


 2) The second part of the problem, to my mind, is the offset and item-based
addressing.  Irrespectively of whether we use instance IDs in the cache
keys, or not, I find it rather questionable that the same entry in the
cache can mean two different things, depending on how you look at it.


3 different things, in fact: two in format 7 and two in older formats,
with the first one common to both. The 3 different addressing modes are:

1. Absolute position (phys) in a rev / pack file.
   This is where we need to navigate to when reading data.
2. Offset within a revision. phys = manifest[rev] + offset
3. Item number within a revision: phys = L2P[rev, item]

So, we always need to translate pairs of (rev, item-index)
to phys. SVN 1.9 uses item-index as the umbrella term
for case 2. and 3. Only a single function, svn_fs_fs__item_offset,
deals with the differences between the two - modulo the
format specific ways to find the rev root node and changed
paths list.

In that sense, the cache content not only *means* the
same thing (this is item xyz) but even uses the same
generalized data type. The problem here is simply that
the cache contents become invalid as soon as history
is being rewritten.


What happens if we're unlucky enough, and the offset in the revision
file also is a valid index in the l2p lookup table?  Is there something
we can do about it — say, associate the addressing type with the
corresponding cache entry?


Since they are homogeneous throughout a repository, we
could simply add the repo format, addressing mode and
sharding size (manifest cache vs. resharding) to the cache
key prefix.

The practical implications would be similar to adding the
instance ID, except working for pre-f7 repos and being
less specific. So, if we decide to add the instance ID to
the cache key, we should add the other parts as well.

-- Stefan^2.


Re: JavaHL, 1.9: "Bad file descriptor", "Stream doesn't support this capability" errors

2015-08-25 Thread Thomas Singer

The fix is already on the 1.9.x branch and will be
part of the 1.9.1 release


Perfect, I can confirm that it is working now. Thanks.

--
Tom


On 25.08.2015 12:39, Branko Čibej wrote:

On 25.08.2015 12:36, Thomas Singer wrote:

Is this bug already reported in the issue tracker? I've searched but
could not find it. Should I report it?



We don't need that. The fix is already on the 1.9.x branch and will be
part of the 1.9.1 release, which we'll probably expedite precisely
because of this bug.

-- Brane




--
Best regards,
Thomas Singer
=
syntevo GmbH
http://www.syntevo.com
http://www.syntevo.com/blog


On 20.08.2015 09:28, b...@qqmail.nl wrote:

Hi Thomas,

The Subversion version of the fix is nominated for 1.9.1
[[
+ * r1696222, r1696225
+   Implement polling on named pipes on Windows to resolve svn+ssh://
problems
+   Justification:
+ Without this patch operations like checkout just fail with an
error.
+ This is a regression as older Subversion releases ignored the
error.
+   Votes:
]]

APR just returns EBADF on any attempt to poll on a file or pipe on
Windows. This workaround on the Subversion side makes us support
polling on pipes. (In Subversion < 1.8 we always assumed that there
was no data waiting, because we never bothered to check for an error)

The APR version of the patch is harder than I expected, because the
poll function supports many flags of which we can really only
implement one new variant, that can’t really be combined with any
other flags.


We need one more vote on this, before we can start the release
process on 1.9.1.

I would hope we can also get r1696695 in the release, as that problem
appears to break Subversion for at least some Linux distributions.



We might want to consider also fixing the polling in 1.8 (and 1.7?),
as that would allow exiting much earlier when a commit fails.
Currently Subversion (when using svn+ssh:// on Windows) just goes on
trying to commit on Windows and only notices that svnserve returned
an error when done.


Bert

Sent from Mail for Windows 10



From: Thomas Singer
Sent: Thursday, 20 August 2015 08:23
To: dev@subversion.apache.org
Subject: Re: JavaHL, 1.9: "Bad file descriptor", "Stream doesn't
support this capability" errors


Hi Bert,


I'll see if I can properly fix this (preferably in both Subversion
and Apr), without reverting to just ignoring errors.


Did you have any success fixing it yet? Is there already a bug entered
in the issue tracker? I'd rate it as blocker priority.

--
Best regards,
Thomas Singer
=
syntevo GmbH
http://www.syntevo.com
http://www.syntevo.com/blog








RE: New 'svn cp --pin-externals' feature compat question.

2015-08-25 Thread Bert Huijben
Hi,

 

The code actually detects if the definition is in the old or new format and 
uses the same format to apply the pegging. 

See ‘make_external_description’ which uses 
‘svn_wc__external_description_format_1’ and 
‘svn_wc__external_description_format_2’ to handle the different formats.

 

(In the pre-1.5 format the ‘-r’ is interpreted as a peg revision)
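For illustration, here is how the same pinned external could look in the two svn:externals formats (the URL and paths are made up):

```
# Pre-1.5 format: local directory first, optional -r, then the URL
third-party/skins -r 123 http://svn.example.com/skinproj

# 1.5+ format: URL (optionally peg-pinned with @REV) first, then the directory
http://svn.example.com/skinproj@123 third-party/skins
```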

 

Bert

 

From: C. Michael Pilato [mailto:cmpil...@collab.net] 
Sent: Tuesday, 25 August 2015 17:07
To: Subversion Development dev@subversion.apache.org
Subject: New 'svn cp --pin-externals' feature compat question.

 

I was reading up on the new 'svn cp --pin-externals' feature in the 1.9 release 
notes.  Great addition, by the way, and one that I hope to use myself with 
ViewVC's release tags.

 

One question came to mind, though.  The use of the feature appears to result in 
pegged externals definitions (as in, @-bearing URLs).  That's great and 
obviously the correct approach.  But if I recall correctly, this means that use 
of the feature will cause the copy's externals to be written in a way that 
older (pre-1.5) clients do not understand.

 

Is that correct?  If so, I think that's a fine limitation to have -- no 
complaints at all here.  But perhaps this merits a mention in the Compatibility 
section of the release notes?

 

-- Mike



Re: svn commit: r1697390 - /subversion/branches/1.9.x/STATUS

2015-08-25 Thread Branko Čibej
On 24.08.2015 15:32, stef...@apache.org wrote:
 Author: stefan2
 Date: Mon Aug 24 13:32:41 2015
 New Revision: 1697390

 URL: http://svn.apache.org/r1697390
 Log:
 * STATUS: Add svnfsfs load-index fixes (r1697381, r1697384, r1697387).

 Modified:
 subversion/branches/1.9.x/STATUS

 Modified: subversion/branches/1.9.x/STATUS
 URL: 
 http://svn.apache.org/viewvc/subversion/branches/1.9.x/STATUS?rev=1697390&r1=1697389&r2=1697390&view=diff
 ==
 --- subversion/branches/1.9.x/STATUS (original)
 +++ subversion/branches/1.9.x/STATUS Mon Aug 24 13:32:41 2015
 @@ -129,6 +129,15 @@ Candidate changes:
 Votes:
   +1: rhuijben
  
 + * r1697381, r1697384, r1697387
 +   Make 'svnfsfs load-index' work as advertised in the documentation
 +   Justification:
 + svnfsfs load-index would only ever be used in high-stress situations
 + like disaster recovery.  So, while workarounds are possible, having
 + people jump through a few extra hoops is a bad thing in that context.
 +   Votes:
 + +1: stefan2
 +

I have to admit that I don't have a clue what this backport proposal is
about. What doesn't work as advertised? What kind of workarounds are we
talking about?

According to the log messages of these three revisions, it would appear
that they're three (unrelated?) bug fixes in svnfsfs. However, I can't
find any test cases that would help me verify that the fixes actually
perform as advertised.

In other words, I've no idea how to vote for this backport.

-- Brane


New 'svn cp --pin-externals' feature compat question.

2015-08-25 Thread C. Michael Pilato
I was reading up on the new 'svn cp --pin-externals' feature in the 1.9
release notes.  Great addition, by the way, and one that I hope to use
myself with ViewVC's release tags.

One question came to mind, though.  The use of the feature appears to
result in pegged externals definitions (as in, @-bearing URLs).  That's
great and obviously the correct approach.  But if I recall correctly, this
means that use of the feature will cause the copy's externals to be written
in a way that older (pre-1.5) clients do not understand.

Is that correct?  If so, I think that's a fine limitation to have -- no
complaints at all here.  But perhaps this merits a mention in the
Compatibility section of the release notes?

-- Mike


Re: Issue #4588, part 1: FSFS access error

2015-08-25 Thread Stefan Fuhrmann
 On Tue, Aug 25, 2015 at 7:15 AM, Daniel Shahaf d...@daniel.shahaf.name
wrote:

 Evgeny Kotkov wrote on Tue, Aug 25, 2015 at 02:47:16 +0300:
  Stefan Fuhrmann stefan.fuhrm...@wandisco.com writes:
 
   My current hypothesis is that the server did not get restarted after
   replacing the repository.  Because we decided not to make the instance
 ID
   part of the cache key, we could easily have picked up cached format 6
 data
   for the format 7 repository.
 
  [...]
 
   That said, are we still happy with the decision to not make the
 instance
   ID part of the cache key? The rationale has basically been fail early
   because failure to restart or reconfigure the server after the repo
 files
   got modified might lead to any kind of unknown problems (much) further
 down
   the road.
 
  As I see it, there are two separate problems:
 

 I see it the same way.

  1) First of all, I am thinking that we should indeed revisit the
 decision of
 not making instance IDs a part of the cache keys.  As far as I
 recall, this
 part of the change was reverted after we agreed that failing fast is
 better
 than narrowing the window when the cache issues exist [1] (there are
 still
 scenarios like backing up and restoring the repository with cp -a).
 
 Now I am almost sure that we should redo this change.  The experience
 of
 receiving errors related to stale cache entries isn't exactly
 educational, but rather frustrating,


I think this is the core issue. If you do it wrong, you are
likely to experience frustration at some later point; this
is the very definition of doing it wrong. So, if we want
to spare the user this frustration without them changing
their behaviour, we must be sure to *always* make it
right.


 and the procedures describing dump,
 load and hotcopy rarely say something about a server restart.


Yes, the SVN book is not very explicit about that and
the release notes have been amended only yesterday.

In the past, our thinking has been along the lines of "It's a
DB server and who in their right mind would fiddle with
the disk data and expect everything to work just fine?"
To an occasional user, it may be much less clear that e.g.
a tool called svnadmin does NOT interact with running
servers whenever necessary.


 As we have
 the mechanism to make a huge set of cases work without the necessity
 of
 performing additional actions, I think that we should do it, leaving
 the
 edge cases as they are.  In other words, people who are used to
 svnadmin
 dump/load/hotcopy shouldn't struggle, because we think that they also
 could
 be doing something like the aforementioned cp -a.


Well, getting all cases that involve svnadmin right would
be enough in my book. People actually fiddling with
repository details on disk need to know what they are
doing. Those are the same kind of people that need to
know about the rep-cache.db.

We would also need an explicit way to bump the
instance ID to support restore scenarios to cover the
full range of admin activity. Maybe some cache reset
UI on the server would be useful as well.


 What are the downsides to having instance IDs as part of the cache key?
 I haven't been able to understand that from your post or from Ben's post
 you link to (53f4c92b.2010...@reser.org), which only mentions a failure
 to clear the cache race that was discussed offlist.

 Is this the problem scenario? ---

 1. Open an svn_fs_t handle

 2. Replace the repository with another repository having the same UUID
but different contents

 3. Make a commit from the svn_fs_t opened in (1)

 4. The commit creates a corrupted revision due to stale (false positive)
cache hits


That is the worst-case scenario. By adding the instance ID
to the cache key, the only race I can think of is:

* open svn_fs_t, read repo format (sharding!) and config
* replace repo on disk
* begin txn
* ...
* commit

SVN 1.9 rereads the format upon commit, *probably* making
the commit fail if things are inconsistent now (txn and rev paths).
If the repo gets replaced earlier than that, it's obviously no problem.
If it was replaced later, the TXN directory will very likely be
missing.

However, there is more state than caches. So, here are two
counter-arguments to relying on the instance ID:

* Client connections still become invalid, i.e. reports in flight
  are likely to error out, future requests may or may not result
  in an error.
* The instance ID is only available in format 7 repositories.
  The problem exists in any dump/load scenario - even if
  the format number does not change.

The first one is an inconvenience, the second one is somewhat
stronger.

-- Stefan^2.


Re: New 'svn cp --pin-externals' feature compat question.

2015-08-25 Thread C. Michael Pilato
Nice!  Thanks for the info, Bert.

On Tue, Aug 25, 2015 at 11:30 AM, Bert Huijben b...@qqmail.nl wrote:

 Hi,



 The code actually detects if the definition is in the old or new format
 and uses the same format to apply the pegging.

 See ‘make_external_description’ which uses ‘
 svn_wc__external_description_format_1’ and ‘
 svn_wc__external_description_format_2’ to handle the different formats.



 (In the pre-1.5 format the ‘-r’ is interpreted as a peg revision)



 Bert



 *From:* C. Michael Pilato [mailto:cmpil...@collab.net]
 *Sent:* Tuesday, 25 August 2015 17:07
 *To:* Subversion Development dev@subversion.apache.org
 *Subject:* New 'svn cp --pin-externals' feature compat question.



 I was reading up on the new 'svn cp --pin-externals' feature in the 1.9
 release notes.  Great addition, by the way, and one that I hope to use
 myself with ViewVC's release tags.



 One question came to mind, though.  The use of the feature appears to
 result in pegged externals definitions (as in, @-bearing URLs).  That's
 great and obviously the correct approach.  But if I recall correctly, this
 means that use of the feature will cause the copy's externals to be written
 in a way that older (pre-1.5) clients do not understand.



 Is that correct?  If so, I think that's a fine limitation to have -- no
 complaints at all here.  But perhaps this merits a mention in the
 Compatibility section of the release notes?



 -- Mike



Re: svn commit: r1697654 - /subversion/branches/1.9.x/STATUS

2015-08-25 Thread Branko Čibej
On 25.08.2015 23:12, Stefan Fuhrmann wrote:
 On Tue, Aug 25, 2015 at 4:43 PM, Branko Čibej br...@wandisco.com wrote:

 On 25.08.2015 17:31, Stefan Fuhrmann wrote:
 On Tue, Aug 25, 2015 at 12:55 PM, Branko Čibej br...@wandisco.com
 wrote:
 On 25.08.2015 13:49, br...@apache.org wrote:
 Author: brane
 Date: Tue Aug 25 11:49:09 2015
 New Revision: 1697654

 URL: http://svn.apache.org/r1697654
 Log:
 * branches/1.9.x/STATUS:
- Approve r1693886.
- Temporarily veto r1694481; the change looks broken.
 [...]

 @@ -98,5 +84,22 @@ Candidate changes:
  Veto-blocked changes:
  =

 + * r1694481
 +   Fix Unix build on systems without GPG agent.
 +   Justification:
 + This is a user-reported issue.
 +   Votes:
 + +1: stefan2, philip
 + -1: brane (You can't just remove a public API implementation,
 +even if it is deprecated. And the prototype is still
 +right there in svn_auth.h)
 +
  Approved changes:
  =
 r1694481 (conditionally) removes the implementation of a public API,
 whilst leaving the prototype in svn_auth.h untouched. This is a
 violation of our ABI compatibility rules, and also a linking error
 waiting to happen.

 Except that the very problem is that
 svn_auth__get_gpg_agent_simple_provider
 is not implemented either if SVN_HAVE_GPG_AGENT
 is not defined. And that linker problem is the one being
 already reported and fixed by the patch.

 You are still right that we introduce another linker problem
 further down the road for some people that stumbled
 across the first one in the past. And not implementing
 the public API is a bad thing.

 So, I think we need to do some coding to fix this on /trunk.
 Question is whether we want to ship r1694481 as a stop-gap
 patch for 1.9.1 and enable people to build SVN again.

 Daniel suggested inserting a dummy handler if we don't have the GPG
 agent support. I think that may be the only reasonable solution for both
 trunk and 1.9.1 (or .x if we don't think it's important enough for .1).

 The real effort here is double-checking that a dummy handler won't break
 credentials resolution.

 I think just starting with a full copy of the GPG agent handler and
 making each call return failed should work. Didn't try it, though.

I'll give this a go and hopefully come up with a test case, too.

-- Brane


Re: Issue #4588, part 1: FSFS access error

2015-08-25 Thread Daniel Shahaf
(abbreviating instance ID to IID)

Stefan Fuhrmann wrote on Tue, Aug 25, 2015 at 15:13:04 +0100:
  On Tue, Aug 25, 2015 at 7:15 AM, Daniel Shahaf d...@daniel.shahaf.name
 wrote:
 
  Evgeny Kotkov wrote on Tue, Aug 25, 2015 at 02:47:16 +0300:
   Stefan Fuhrmann stefan.fuhrm...@wandisco.com writes:
  
My current hypothesis is that the server did not get restarted after
replacing the repository.  Because we decided not to make the instance
  ID
part of the cache key, we could easily have picked up cached format 6
  data
for the format 7 repository.
  
   [...]
  
That said, are we still happy with the decision to not make the
  instance
ID part of the cache key? The rationale has basically been fail early
because failure to restart or reconfigure the server after the repo
  files
got modified might lead to any kind of unknown problems (much) further
  down
the road.
  
   As I see it, there are two separate problems:
  
 
  I see it the same way.
 
   1) First of all, I am thinking that we should indeed revisit the
  decision of
  not making instance IDs a part of the cache keys.  As far as I
  recall, this
  part of the change was reverted after we agreed that failing fast is
  better
  than narrowing the window when the cache issues exist [1] (there are
  still
  scenarios like backing up and restoring the repository with cp -a).
  
  Now I am almost sure that we should redo this change.  The experience
  of
   receiving errors related to stale cache entries isn't exactly
   educational, but rather frustrating,
 
 
 I think this is the core issue. If you do it wrong, you are
 likely to experience frustration at some later point; this
 is the very definition of doing it wrong. So, if we want
 to spare the user this frustration without them changing
 their behaviour, we must be sure to *always* make it
 right.

Well, yes, but another reasonable option would be to throw a clear and
unambiguous error as soon as the user has done something he shouldn't
have.  I imagine an error such as:

svn: E123456: Repository replaced without server restart; restart the server

... although I realize it might not be easy to detect that situation.
Could we re-read the IID under the commit write lock and compare it to
the cached value?  I.e., check that either the IID was unset at
svn_fs_open2() time and is still unset at the start of commit_body(), or
that it was set at svn_fs_open2() time and is still set to the same
value on disk?  That wouldn't be bulletproof,¹ but it should narrow the
window for the race.


¹ For example, someone could replace the repository while commit_body()
is running after it checked the IID.

  and the procedures describing dump,
  load and hotcopy rarely say something about a server restart.
 
 
 Yes, the SVN book is not very explicit about that and
 the release notes have been amended only yesterday.
 
 In the past, our thinking has been along the lines of "It's a
 DB server and who in their right mind would fiddle with
 the disk data and expect everything to work just fine?"
 To an occasional user, it may be much less clear that e.g.
 a tool called svnadmin does NOT interact with running
 servers whenever necessary.
 

(See below.)

  What are the downsides to having instance IDs as part of the cache key?
  I haven't been able to understand that from your post or from Ben's post
  you link to (53f4c92b.2010...@reser.org), which only mentions a failure
  to clear the cache race that was discussed offlist.
 
  Is this the problem scenario? ---
 
  1. Open an svn_fs_t handle
 
  2. Replace the repository with another repository having the same UUID
 but different contents
 
  3. Make a commit from the svn_fs_t opened in (1)
 
  4. The commit creates a corrupted revision due to stale (false positive)
 cache hits
 
 
 That is the worst-case scenario. By adding the instance ID
 to the cache key, the only race I can think of is:
 
 * open svn_fs_t, read repo format (sharding!) and config
 * replace repo on disk
 * begin txn
 * ...
 * commit
 
 SVN 1.9 rereads the format upon commit, *probably* making
 the commit fail if things are inconsistent now (txn and rev paths).
 If the repo gets replaced earlier than that, it's obviously no problem.
 If it was replaced later, the TXN directory will very likely be
 missing.
 
 However, there is more state than caches. So, here are two
 counter-arguments to relying on the instance ID:
 
 * Client connections still become invalid, i.e. reports in flight
   are likely to error out, future requests may or may not result
   in an error.

I don't see a way we can make it work in the general case.  For example,
even without IIDs, we would still error out if the repository is
replaced with an 'svndumpfilter exclude'ed copy of itself, and somebody
is checking out the excluded path.

I don't think there's an easy solution here.  The FSFS code is designed
around the assumption that history 

Re: svn commit: r1697654 - /subversion/branches/1.9.x/STATUS

2015-08-25 Thread Daniel Shahaf
Branko Čibej wrote on Wed, Aug 26, 2015 at 07:26:47 +0200:
 On 25.08.2015 23:12, Stefan Fuhrmann wrote:
  On Tue, Aug 25, 2015 at 4:43 PM, Branko Čibej br...@wandisco.com wrote:
  Daniel suggested inserting a dummy handler if we don't have the GPG
  agent support. I think that may be the only reasonable solution for both
  trunk and 1.9.1 (or .x if we don't think it's important enough for .1).
 
  The real effort here is double-checking that a dummy handler won't break
  credentials resolution.
 
  I think just starting with a full copy of the GPG agent handler and
  making each call return failed should work. Didn't try it, though.
 
 I'll give this a go and hopefully come up with a test case, too.

I think you might be able to get away with:

static svn_auth_provider_t dummy = {
.cred_kind = "dummy", NULL, NULL, NULL
};


Re: Issue #4588, part 1: FSFS access error

2015-08-25 Thread Daniel Shahaf
Evgeny Kotkov wrote on Tue, Aug 25, 2015 at 02:47:16 +0300:
 Stefan Fuhrmann stefan.fuhrm...@wandisco.com writes:
 
  My current hypothesis is that the server did not get restarted after
  replacing the repository.  Because we decided not to make the instance ID
  part of the cache key, we could easily have picked up cached format 6 data
  for the format 7 repository.
 
 [...]
 
  That said, are we still happy with the decision to not make the instance
  ID part of the cache key? The rationale has basically been fail early
  because failure to restart or reconfigure the server after the repo files
  got modified might lead to any kind of unknown problems (much) further down
  the road.
 
 As I see it, there are two separate problems:
 

I see it the same way.

 1) First of all, I am thinking that we should indeed revisit the decision of
not making instance IDs a part of the cache keys.  As far as I recall, this
part of the change was reverted after we agreed that failing fast is better
than narrowing the window when the cache issues exist [1] (there are still
scenarios like backing up and restoring the repository with cp -a).
 
Now I am almost sure that we should redo this change.  The experience of
 receiving errors related to stale cache entries isn't exactly educational,
 but rather frustrating, and the procedures describing dump,
load and hotcopy rarely say something about a server restart.  As we have
the mechanism to make a huge set of cases work without the necessity of
performing additional actions, I think that we should do it, leaving the
edge cases as they are.  In other words, people who are used to svnadmin
dump/load/hotcopy shouldn't struggle, because we think that they also could
be doing something like the aforementioned cp -a.

What are the downsides to having instance IDs as part of the cache key?
I haven't been able to understand that from your post or from Ben's post
you link to (53f4c92b.2010...@reser.org), which only mentions a "failure
to clear the cache" race that was discussed offlist.

Is this the problem scenario? ---

1. Open an svn_fs_t handle

2. Replace the repository with another repository having the same UUID
   but different contents

3. Make a commit from the svn_fs_t opened in (1)

4. The commit creates a corrupted revision due to stale (false positive)
   cache hits

?

Thanks,

Daniel

 [1] http://svn.haxx.se/dev/archive-2014-08/0239.shtml
 
 
 Regards,
 Evgeny Kotkov
 



Re: JavaHL, 1.9: "Bad file descriptor", "Stream doesn't support this capability" errors

2015-08-25 Thread Branko Čibej
On 25.08.2015 12:36, Thomas Singer wrote:
 Is this bug already reported in the issue tracker? I've searched but
 could not find it. Should I report it?


We don't need that. The fix is already on the 1.9.x branch and will be
part of the 1.9.1 release, which we'll probably expedite precisely
because of this bug.

-- Brane



 -- 
 Best regards,
 Thomas Singer
 =
 syntevo GmbH
 http://www.syntevo.com
 http://www.syntevo.com/blog


 On 20.08.2015 09:28, b...@qqmail.nl wrote:
 Hi Thomas,

 The Subversion version of the fix is nominated for 1.9.1
 [[
 + * r1696222, r1696225
 +   Implement polling on named pipes on Windows to resolve svn+ssh://
 problems
 +   Justification:
 + Without this patch operations like checkout just fail with an
 error.
 + This is a regression as older Subversion releases ignored the
 error.
 +   Votes:
 ]]

 Apr just returns EBADF on any attempt to poll on a file or pipe on
 Windows. This workaround on the Subversion side makes us support
 polling on pipes. (In Subversion < 1.8 we always assumed that there
 was no data waiting, because we never bothered to check for an error)

 The apr version of the patch is harder than I expected, because the
 poll function supports many flags of which we can really only
 implement one new variant, that can’t really be combined with any
 other flags.


 We need one more vote on this, before we can start the release
 process on 1.9.1.

 I would hope we can also get r1696695 in the release, as that problem
 appears to break Subversion for at least some Linux distributions.



 We might want to consider also fixing the polling in 1.8 (and 1.7?),
 as that would allow exiting much earlier when a commit fails.
 Currently Subversion (when using svn+ssh:// on Windows) just goes on
 trying to commit on Windows and only notices that svnserve returned
 an error when done.


 Bert

 Sent from Mail for Windows 10



 From: Thomas Singer
 Sent: Thursday, 20 August 2015 08:23
 To: dev@subversion.apache.org
 Subject: Re: JavaHL, 1.9: Bad file descriptor, Stream doesn't
 support this capability errors


 Hi Bert,

 I'll see if I can properly fix this (preferably in both Subversion
 and Apr), without reverting to just ignoring errors.

 Did you have any success fixing it yet? Is there already a bug entered
 in the issue tracker? I'd rate it as blocker priority.

 -- 
 Best regards,
 Thomas Singer
 =
 syntevo GmbH
 http://www.syntevo.com
 http://www.syntevo.com/blog






Re: JavaHL, 1.9: Bad file descriptor, Stream doesn't support this capability errors

2015-08-25 Thread Thomas Singer
Is this bug already reported in the issue tracker? I've searched but 
could not find it. Should I report it?


--
Best regards,
Thomas Singer
=
syntevo GmbH
http://www.syntevo.com
http://www.syntevo.com/blog


On 20.08.2015 09:28, b...@qqmail.nl wrote:

Hi Thomas,

The Subversion version of the fix is nominated for 1.9.1
[[
+ * r1696222, r1696225
+   Implement polling on named pipes on Windows to resolve svn+ssh:// problems
+   Justification:
+ Without this patch operations like checkout just fail with an error.
+ This is a regression as older Subversion releases ignored the error.
+   Votes:
]]

Apr just returns EBADF on any attempt to poll on a file or pipe on Windows. This 
workaround on the Subversion side makes us support polling on pipes. (In 
Subversion < 1.8 we always assumed that there was no data waiting, because we 
never bothered to check for an error)

The apr version of the patch is harder than I expected, because the poll 
function supports many flags of which we can really only implement one new 
variant, that can’t really be combined with any other flags.


We need one more vote on this, before we can start the release process on 1.9.1.

I would hope we can also get r1696695 in the release, as that problem appears 
to break Subversion for at least some Linux distributions.



We might want to consider also fixing the polling in 1.8 (and 1.7?), as that 
would allow exiting much earlier when a commit fails. Currently Subversion 
(when using svn+ssh:// on Windows) just goes on trying to commit on Windows and 
only notices that svnserve returned an error when done.


Bert

Sent from Mail for Windows 10



From: Thomas Singer
Sent: Thursday, 20 August 2015 08:23
To: dev@subversion.apache.org
Subject: Re: JavaHL, 1.9: Bad file descriptor, Stream doesn't support 
this capability errors


Hi Bert,


I'll see if I can properly fix this (preferably in both Subversion and Apr), 
without reverting to just ignoring errors.


Did you have any success fixing it yet? Is there already a bug entered
in the issue tracker? I'd rate it as blocker priority.

--
Best regards,
Thomas Singer
=
syntevo GmbH
http://www.syntevo.com
http://www.syntevo.com/blog





Re: svn-mergeinfo-normalizer ideas

2015-08-25 Thread Stefan Hett

Hi,
On Fri, Jul 10, 2015 at 12:40 AM, Stefan Fuhrmann
stefan.fuhrm...@wandisco.com wrote:


On Thu, Jun 25, 2015 at 5:10 PM, Stefan Hett ste...@egosoft.com wrote:

Hi,

I'm dealing with one remaining case svn-mergeinfo-normalizer
normalize doesn't seem to be able to handle yet. Would it be
possible to add support for this?

Case: Eliminate incorrect mergeinfos of pre-branch-revisions.

Looking at the following output:
Trying to elide mergeinfo from path
E:/[projects]/XR/clean_source_branch/src/SDKs/bullet
into mergeinfo at path
E:/[projects]/XR/clean_source_branch/src

All branches still exist in HEAD.

Try to elide remaining branches:
CANNOT elide branch /XRebirth/trunk/src/SDKs/bullet
revisions not movable to parent:
173817,174057,180942,181150

Branches not mentioned in parent:
/SDKs/BulletPhysics/tags/2.82
/SDKs/BulletPhysics/trunk

Sub-tree merge info cannot be elided due to the following
branches:
/SDKs/BulletPhysics/tags/2.82
/SDKs/BulletPhysics/trunk
/XRebirth/trunk/src/SDKs/bullet

here you see that the revisions 173817,174057,180942,181150
are reported to not be movable to the parent.

The thing here is that all of these revisions are effectively
referencing themselves and therefore should be removable in
principle.

The WC is a check-out of
repo
\proj1
\branches
\proj1v1

The proj1v1 branch was created in revision 184223 from trunk -
aka from:
repo
\proj1
\trunk

so all the revisions (173817,174057,180942,181150) are
referring to the same thing which is implicitly included in
the branch (due to its creation from trunk at r184223). So
these should simply be eliminated, no?


Yes, for all paths, their natural history can be considered an
implicit part of their merge history, i.e. they contain all their
own changes.  Hence, they can't actually be missing, and the branch
entry in your example should be removed from the mergeinfo.

To do this efficiently requires a fair amount of coding, though. I'll
give it a try.


The tool is on /trunk now and will detect overlapping natural path 
history.
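
The rule described above — a branch's own natural history up to its copy point is implicitly merged — can be sketched like this; the type and function names are simplified stand-ins, not the real mergeinfo code:

```c
#include <assert.h>

/* Simplified sketch: a mergeinfo entry for the branch's own copy
 * source is redundant if every recorded revision range lies at or
 * below the revision the branch was copied from, because those
 * changes are already part of the branch's natural history. */
typedef struct range_t { long start; long end; } range_t;

static int
covered_by_natural_history(const range_t *ranges, int count,
                           long branch_point_rev)
{
  int i;
  for (i = 0; i < count; i++)
    if (ranges[i].end > branch_point_rev)
      return 0;  /* a range reaches past the copy point: keep the entry */
  return 1;      /* all ranges predate the copy: entry can be elided */
}
```

In the example above, revisions 173817 through 181150 all precede the copy at r184223, so the entry qualifies for removal.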


-- Stefan^2.


Thanks so much. Just tested it and it works like a charm.

--
Regards,
Stefan Hett



Re: svn commit: r1697654 - /subversion/branches/1.9.x/STATUS

2015-08-25 Thread Branko Čibej
On 25.08.2015 13:49, br...@apache.org wrote:
 Author: brane
 Date: Tue Aug 25 11:49:09 2015
 New Revision: 1697654

 URL: http://svn.apache.org/r1697654
 Log:
 * branches/1.9.x/STATUS:
- Approve r1693886.
- Temporarily veto r1694481; the change looks broken.

[...]

 @@ -98,5 +84,22 @@ Candidate changes:
  Veto-blocked changes:
  =
  
 + * r1694481
 +   Fix Unix build on systems without GPG agent.
 +   Justification:
 + This is a user-reported issue.
 +   Votes:
 + +1: stefan2, philip
 + -1: brane (You can't just remove a public API implementation,
 +even if it is deprecated. And the prototype is still
 +right there in svn_auth.h)
 +
  Approved changes:
  =

r1694481 (conditionally) removes the implementation of a public API,
whilst leaving the prototype in svn_auth.h untouched. This is a
violation of our ABI compatibility rules, and also a linking error
waiting to happen.

I suggest the correct fix is for the function to just do nothing if GPG
support is not available. Ideally it would return an error, but the
prototype doesn't let us do that.

For example:

--- ../trunk/subversion/libsvn_subr/deprecated.c(revision 1697651)
+++ ../trunk/subversion/libsvn_subr/deprecated.c(working copy)
@@ -1497,14 +1497,14 @@
 #endif /* DARWIN */
 
 #if !defined(WIN32)
-#ifdef SVN_HAVE_GPG_AGENT
 void
 svn_auth_get_gpg_agent_simple_provider(svn_auth_provider_object_t **provider,
apr_pool_t *pool)
 {
+#ifdef SVN_HAVE_GPG_AGENT
   svn_auth__get_gpg_agent_simple_provider(provider, pool);
-}
 #endif /* SVN_HAVE_GPG_AGENT */
+}
 #endif /* !WIN32 */
 
 svn_error_t *

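The pattern the patch above applies can be shown in isolation: keep the public symbol defined unconditionally and compile out only its body. The names below are hypothetical stand-ins (HAVE_FEATURE plays the role of SVN_HAVE_GPG_AGENT), not the real svn_auth API:

```c
#include <stddef.h>

/* Hypothetical illustration: the public symbol stays defined whether
 * or not the feature is available, so callers always link; only the
 * body changes.  HAVE_FEATURE stands in for SVN_HAVE_GPG_AGENT. */
void
get_provider(void **provider)
{
#ifdef HAVE_FEATURE
  *provider = make_real_provider();  /* normal path when compiled in */
#else
  *provider = NULL;  /* symbol still links; the call is a no-op */
#endif
}
```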

-- Brane



Re: svn commit: r1694533 - /subversion/trunk/subversion/libsvn_ra_svn/marshal.c

2015-08-25 Thread Stefan Fuhrmann
./subversion/svnserve/svnserve -Tdr
/media/stefan/033c7ee9-9980-45a8-b9c6-f911dc2ac664/f7-test/ -M 1000 -c 0
--client-speed 10
stefan@macbook:~/develop/trunk$ time ./subversion/svn/svn diff --summarize
svn://localhost/ruby-f6-packed -r0:HEAD > /dev/null

real    0m2.150s
user    0m1.964s
sys     0m0.104s
stefan@macbook:~/develop/trunk$ time ./subversion/svn/svn diff --summarize
svn://localhost/ruby-f6-packed -r0:HEAD > /dev/null

real    0m1.960s
user    0m1.764s
sys     0m0.120s

==24293== I   refs:  14,916,573,539

real    0m2.335s
user    0m2.176s
sys     0m0.100s
stefan@macbook:~/develop/trunk$ time ./subversion/svn/svn diff --summarize
svn://localhost/ruby-f6-packed -r0:HEAD > /dev/null

real    0m2.132s
user    0m1.916s
sys     0m0.128s

==5689== I   refs:  16,225,780,792

3.1Gb

On Tue, Aug 25, 2015 at 3:23 PM, Branko Čibej br...@wandisco.com wrote:

 On 25.08.2015 14:10, Branko Čibej wrote:
  On 06.08.2015 18:10, stef...@apache.org wrote:
  Author: stefan2
  Date: Thu Aug  6 16:10:39 2015
  New Revision: 1694533
 
  URL: http://svn.apache.org/r1694533
  Log:
  Fix an alignment problem on machines with 32 bit pointers but atomic 64
  data access that may not be misaligned.
 
  * subversion/libsvn_ra_svn/marshal.c
   (read_item): Ensure that the array contents always have the APR
 default alignment.
 
  Found by: Rainer Jung rainer.jung{_AT_}kippdata.de
 
  Modified:
  subversion/trunk/subversion/libsvn_ra_svn/marshal.c
 
  Modified: subversion/trunk/subversion/libsvn_ra_svn/marshal.c
  URL:
 http://svn.apache.org/viewvc/subversion/trunk/subversion/libsvn_ra_svn/marshal.c?rev=1694533r1=1694532r2=1694533view=diff
 
 ==
  --- subversion/trunk/subversion/libsvn_ra_svn/marshal.c (original)
  +++ subversion/trunk/subversion/libsvn_ra_svn/marshal.c Thu Aug  6
 16:10:39 2015
  @@ -1190,14 +1190,20 @@ static svn_error_t *read_item(svn_ra_svn
   }
 else if (c == '(')
   {
  +  /* On machines with 32 bit pointers, array headers are only 20
 bytes
  +   * which is not enough for our standard 64 bit alignment.
  +   * So, determine a suitable block size for the APR array header
 that
  +   * keeps proper alignment for following structs. */
  +  const apr_size_t header_size
  += APR_ALIGN_DEFAULT(sizeof(apr_array_header_t));
  +
 /* Allocate an APR array with room for (initially) 4 items.
  * We do this manually because lists are the most frequent
 protocol
  * element, often used to frame a single, optional value.  We
 save
  * about 20% of total protocol handling time. */
  -  char *buffer = apr_palloc(pool, sizeof(apr_array_header_t)
  -  + 4 * sizeof(svn_ra_svn_item_t));
  -  svn_ra_svn_item_t *data
  -= (svn_ra_svn_item_t *)(buffer + sizeof(apr_array_header_t));
  +  char *buffer = apr_palloc(pool,
  +header_size + 4 *
 sizeof(svn_ra_svn_item_t));
  +  svn_ra_svn_item_t *data = (svn_ra_svn_item_t *)(buffer +
 header_size);
 
 item->kind = SVN_RA_SVN_LIST;
 item->u.list = (apr_array_header_t *)buffer;
 
  How exactly is all this different from:
 
  apr_array_make(pool, 4, sizeof(svn_ra_svn_item_t))?
 
  Other than relying on magic knowledge of APR implementation details?

 All right, so I figured that the difference is that apr_array_make does
 two allocations compared to one in this code. Still: relying on
 knowledge of APR implementation details is a really bad idea.


The structure definition of apr_array_header_t is part of
the public API, i.e. will never change.


 As it
 stands, APR will correctly resize this array if necessary; but there's
 no guarantee that, sometime in the future, resizing such arrays would
 fail horribly.


Even if that was true, the resize is done outside APR as
well - a few lines further down.


 I very strongly suggest that we put a plain ol' apr_array_make here and,
 if this is really perceived as such a huge *overall* performance
 problem, put the allocation hack into APR itself.

 I suspect that apr_palloc is fast enough that we really don't need this
 hack.


Is it absolutely necessary to use that hand-crafted section
of code? Of course not. Does it have a significant impact
on ra_svn throughput? Yes. The whole section of local
array management saved 20% runtime in command-heavy
reports like this one:

$ time svn diff --summarize svn://localhost/ruby -r0:HEAD > /dev/null
real    0m1.960s
user    0m1.764s
sys     0m0.120s

Going back to apr_array_make alone sets us back by ~10%:

real    0m2.132s
user    0m1.916s
sys     0m0.128s

The reason is that the protocol uses lists for every optional
element of any struct being transmitted. Of the 36M protocol
items passing through read_item(), 16M are lists; the rest
is numbers and strings.

Based on the outrage that 3% and 5% slowdowns caused
last 
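
The single-allocation layout at the center of this thread can be illustrated in isolation. ALIGN_DEFAULT below reproduces the APR_ALIGN_DEFAULT arithmetic for self-containment, and the struct definitions are simplified stand-ins for apr_array_header_t and svn_ra_svn_item_t:

```c
#include <stddef.h>

/* Round SIZE up to an 8-byte boundary, mirroring what
 * APR_ALIGN_DEFAULT does on typical platforms. */
#define ALIGN_DEFAULT(size) (((size) + 7) & ~(size_t)7)

/* Simplified stand-ins for the real APR/SVN structs. */
typedef struct header_t { int nalloc; int nelts; char *elts; } header_t;
typedef struct item_t { long long number; void *ptr; } item_t;

/* Offset of the element area inside a single allocation holding the
 * header followed by the items.  Without the rounding, a 20-byte
 * header on a 32-bit platform would misalign the 64-bit members of
 * the items that follow it. */
static size_t
element_offset(void)
{
  return ALIGN_DEFAULT(sizeof(header_t));
}
```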

Re: svn commit: r1694533 - /subversion/trunk/subversion/libsvn_ra_svn/marshal.c

2015-08-25 Thread Stefan Fuhrmann
On Tue, Aug 25, 2015 at 10:08 PM, Stefan Fuhrmann 
stefan.fuhrm...@wandisco.com wrote:

 ./subversion/svnserve/svnserve -Tdr
 /media/stefan/033c7ee9-9980-45a8-b9c6-f911dc2ac664/f7-test/ -M 1000 -c 0
 --client-speed 10
 stefan@macbook:~/develop/trunk$ time ./subversion/svn/svn diff
  --summarize svn://localhost/ruby-f6-packed -r0:HEAD > /dev/null

 [...]

Args! Copy'n'paste left-over.

-- Stefan^2.


Re: svn commit: r1697654 - /subversion/branches/1.9.x/STATUS

2015-08-25 Thread Stefan Fuhrmann
On Tue, Aug 25, 2015 at 4:43 PM, Branko Čibej br...@wandisco.com wrote:

 On 25.08.2015 17:31, Stefan Fuhrmann wrote:
  On Tue, Aug 25, 2015 at 12:55 PM, Branko Čibej br...@wandisco.com
 wrote:
 
  On 25.08.2015 13:49, br...@apache.org wrote:
  Author: brane
  Date: Tue Aug 25 11:49:09 2015
  New Revision: 1697654
 
  URL: http://svn.apache.org/r1697654
  Log:
  * branches/1.9.x/STATUS:
 - Approve r1693886.
 - Temporarily veto r1694481; the change looks broken.
  [...]
 
  @@ -98,5 +84,22 @@ Candidate changes:
   Veto-blocked changes:
   =
 
  + * r1694481
  +   Fix Unix build on systems without GPG agent.
  +   Justification:
  + This is a user-reported issue.
  +   Votes:
  + +1: stefan2, philip
  + -1: brane (You can't just remove a public API implementation,
  +even if it is deprecated. And the prototype is still
  +right there in svn_auth.h)
  +
   Approved changes:
   =
  r1694481 (conditionally) removes the implementation of a public API,
  whilst leaving the prototype in svn_auth.h untouched. This is a
  violation of our ABI compatibility rules, and also a linking error
  waiting to happen.
 
  Except that the very problem is that
  svn_auth__get_gpg_agent_simple_provider
  is not implemented either if SVN_HAVE_GPG_AGENT
  is not defined. And that linker problem is the one being
  already reported and fixed by the patch.
 
  You are still right that we introduce another linker problem
  further down the road for some people that stumbled
  across the first one in the past. And not implementing
  the public API is a bad thing.
 
  So, I think we need to do some coding to fix this on /trunk.
  Question is whether we want to skip r1694481 as a stop-gap
  patch for 1.9.1 and enable people to build SVN again.


 Daniel suggested inserting a dummy handler if we don't have the GPG
 agent support. I think that may be the only reasonable solution for both
 trunk and 1.9.1 (or .x if we don't think it's important enough for .1).

 The real effort here is double-checking that a dummy handler won't break
 credentials resolution.


I think just starting with a full copy of the GPG agent handler and
making each call return failure should work. Didn't try it, though.
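
The dummy-handler idea can be sketched with a simplified vtable. This is not the real svn_auth_provider_t layout, just an illustration of callbacks that always report "no credentials", letting resolution fall through to the next provider in the chain:

```c
#include <stddef.h>

/* Simplified, hypothetical stand-in for an auth provider vtable. */
typedef struct provider_vtable_t
{
  int (*first_credentials)(void **credentials, void *baton);
  int (*next_credentials)(void **credentials, void *baton);
} provider_vtable_t;

static int
dummy_first_credentials(void **credentials, void *baton)
{
  (void)baton;
  *credentials = NULL;  /* nothing available */
  return 0;             /* success, but no credentials found */
}

static int
dummy_next_credentials(void **credentials, void *baton)
{
  (void)baton;
  *credentials = NULL;
  return 0;
}

/* A provider built from the dummy callbacks: safe to register, never
 * supplies credentials. */
static const provider_vtable_t dummy_provider =
{
  dummy_first_credentials,
  dummy_next_credentials
};
```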

-- Stefan^2.


Re: svn commit: r1694533 - /subversion/trunk/subversion/libsvn_ra_svn/marshal.c

2015-08-25 Thread Branko Čibej
On 06.08.2015 18:10, stef...@apache.org wrote:
 Author: stefan2
 Date: Thu Aug  6 16:10:39 2015
 New Revision: 1694533

 URL: http://svn.apache.org/r1694533
 Log:
 Fix an alignment problem on machines with 32 bit pointers but atomic 64
 data access that may not be misaligned.

 * subversion/libsvn_ra_svn/marshal.c
   (read_item): Ensure that the array contents always have the APR
default alignment.

 Found by: Rainer Jung rainer.jung{_AT_}kippdata.de

 Modified:
 subversion/trunk/subversion/libsvn_ra_svn/marshal.c

 Modified: subversion/trunk/subversion/libsvn_ra_svn/marshal.c
 URL: 
 http://svn.apache.org/viewvc/subversion/trunk/subversion/libsvn_ra_svn/marshal.c?rev=1694533r1=1694532r2=1694533view=diff
 ==
 --- subversion/trunk/subversion/libsvn_ra_svn/marshal.c (original)
 +++ subversion/trunk/subversion/libsvn_ra_svn/marshal.c Thu Aug  6 16:10:39 
 2015
 @@ -1190,14 +1190,20 @@ static svn_error_t *read_item(svn_ra_svn
  }
else if (c == '(')
  {
 +  /* On machines with 32 bit pointers, array headers are only 20 bytes
 +   * which is not enough for our standard 64 bit alignment.
 +   * So, determine a suitable block size for the APR array header that
 +   * keeps proper alignment for following structs. */
 +  const apr_size_t header_size
 += APR_ALIGN_DEFAULT(sizeof(apr_array_header_t));
 +
/* Allocate an APR array with room for (initially) 4 items.
 * We do this manually because lists are the most frequent protocol
 * element, often used to frame a single, optional value.  We save
 * about 20% of total protocol handling time. */
 -  char *buffer = apr_palloc(pool, sizeof(apr_array_header_t)
 -  + 4 * sizeof(svn_ra_svn_item_t));
 -  svn_ra_svn_item_t *data
 -= (svn_ra_svn_item_t *)(buffer + sizeof(apr_array_header_t));
 +  char *buffer = apr_palloc(pool,
 +header_size + 4 * sizeof(svn_ra_svn_item_t));
 +  svn_ra_svn_item_t *data = (svn_ra_svn_item_t *)(buffer + header_size);
  
item->kind = SVN_RA_SVN_LIST;
item->u.list = (apr_array_header_t *)buffer;


How exactly is all this different from:

apr_array_make(pool, 4, sizeof(svn_ra_svn_item_t))?

Other than relying on magic knowledge of APR implementation details?

-- Brane


Re: svn-normalizer tool error E160013

2015-08-25 Thread Stefan Hett

Hi,
On Mon, Jul 20, 2015 at 6:08 PM, Stefan Hett ste...@egosoft.com wrote:


Hi,

(sending to dev rather than to user since it's still an unreleased
tool)

I just tried to do a test-run on another checked-out path from the
same repository I already ran svn-normalizer on, but get a weird
error which I don't understand the reason for... Maybe you have an
idea?

E:\[projects]\SDKs>E:\[delete]SVNTest\SVNtest\svn-mergeinfo-normalizer normalize

Scanning working copy E:/[projects]/SDKs ...
Found mergeinfo on 16 nodes.
Found 688 branch entries.
Found 1472 merged revision ranges.

Fetching log for http://192.168.3.2:8055/svn/Egosoft/Foo ...
svn: E160013: '/svn/Egosoft/!svn/rvr/198196/Foo' path not found


The only way that could happen is if the working copy refers to
a path that does not exist (anymore), e.g. because it got renamed.

As of r1696183, the tool should work as long as at least the
repository still exists and is accessible.

-- Stefan^2.
It's possible. Unfortunately I don't have the environment available 
anymore to confirm this was the case. Think I've to assume that was 
indeed the situation. Thanks for improving the handling.


--
Regards,
Stefan Hett



Re: svn-normalizer tool error E160013

2015-08-25 Thread Stefan Hett

Hi,

Hi,
On Mon, Jul 20, 2015 at 6:08 PM, Stefan Hett ste...@egosoft.com wrote:


Hi,

(sending to dev rather than to user since it's still an
unreleased tool)

I just tried to do a test-run on another checked-out path from
the same repository I already ran svn-normalizer on, but get a
weird error which I don't understand the reason for... Maybe you
have an idea?

E:\[projects]\SDKs>E:\[delete]SVNTest\SVNtest\svn-mergeinfo-normalizer normalize

Scanning working copy E:/[projects]/SDKs ...
Found mergeinfo on 16 nodes.
Found 688 branch entries.
Found 1472 merged revision ranges.

Fetching log for http://192.168.3.2:8055/svn/Egosoft/Foo ...
svn: E160013: '/svn/Egosoft/!svn/rvr/198196/Foo' path not found


The only way that could happen is if the working copy refers to
a path that does not exist (anymore), e.g. because it got renamed.

As of r1696183, the tool should work as long as at least the
repository still exists and is accessible.

-- Stefan^2.
It's possible. Unfortunately I don't have the environment available 
anymore to confirm this was the case. Think I've to assume that was 
indeed the situation. Thanks for improving the handling.

I just found a backup from that earlier state.
So the WC root was on E:\[projects]\SDKs and linked to the repository at 
http://192.168.3.2:8055/svn/Egosoft/SDKs (repo root is 
http://192.168.3.2:8055/svn/Egosoft). The URL is still valid on HEAD 
revision.
So not sure what you mean by "the working copy refers to a path that
does not exist". If you mean that some of the checked-out paths inside 
the WC are/were no longer present on HEAD in the repository, then this 
might be possible. If you mean that the WC path no longer existed on the 
repository HEAD then this certainly was not the case.


Anyway. Retested running the current version on trunk of 
svn-mergeinfo-normalizer and it now passes without problems in this 
case. So the fix/improvement seemed to work. :-)


--
Regards,
Stefan Hett



Re: detection of moved branches for svn-normalizer tool

2015-08-25 Thread Stefan Hett

Hi,
On Wed, Jul 22, 2015 at 12:07 PM, Stefan Hett ste...@egosoft.com wrote:


Hi,

I came across a case where svn-normalizer did remove mergeinfo for
a branch which was still present but got renamed in one revision.
I understand why it behaves the current way, but maybe in favor of
the improvements done for handling moves it might also be a good
idea to have svn-normalizer better handle these scenarios?


The latest tool version distinguishes between 3 different cases of
missing branches:

* potential branch - the (sub-)path never existed.  That tends to
  happen for old branches which have not been sync'ed with /trunk
  for a long time.
* surviving copies - the path got deleted but there are still copies
  of it (or copies of copies).  Even if only parts got copied, they
  still count.
* deleted with no surviving copies

Only the last kind will be removed automatically.  In the other cases,
it is up to the user to decide whether to keep the mergeinfo or not.

For a quick demo/test-case showing the underlying issue, I've
attached a batch-file (for windows) demonstrating the case.

As you see the resulting mergeinfo for FooProject/FooSDK is:
/SDK/FooSDK/trunk:2-3
/SDK/FooSDK2/tags/v2:6
/SDK/FooSDK2/trunk:4

Running svn-normalizer analyse now reports
/SDK/FooSDK as a deleted branch (in r4) and running svn-normalizer
normalize --remove-obsoletes would then drop this mergeinfo.

Suggested behavior would be that if a branch is renamed and still
present on trunk, it would not be removed when using
svn-normalizer normalize --remove-obsoletes.


That's how it is implemented now.

-- Stefan^2.
So if I get you right, running svn-mergeinfo-normalizer normalize 
--remove-obsoletes should not remove FooSDK.
If so, I think I ran into a bug here. Tested on our productive 
environment (rather than on this test-case) and it removed the entry in 
this case for SDKs\CrashServer\trunk.


Following is the relevant output from the analyse -v run:
[...]
Trying to elide mergeinfo from path
C:/[...]/src/SDKs/DrDump
into mergeinfo at path
C:/[...]/src
[...]
Removing obsolete entries ...
has SURVIVING COPIES: /SDKs/CrashServer/trunk
e.g.: /SDKs/DrDump/trunk
[...]
Encountered 1 missing branches:
[r185624, copied or moved] /SDKs/CrashServer
- /SDKs/DrDump
[...]

So to me the analyse output sounds correct. It detected that 
/SDKs/CrashServer/trunk no longer exists on head but was renamed to 
/SDKs/DrDump/trunk and therefore flagged as having surviving copies.

Still the normalize run seems to remove the entry.

Am I getting something wrong here?

--
Regards,
Stefan Hett