Re: Tool for upgrading many svn repos with dump/load?

2015-07-20 Thread Les Mikesell
On Sat, Jul 11, 2015 at 8:23 AM, Thorsten Schöning
tschoen...@am-soft.de wrote:
 Good day Andrew Reedick,
 on Friday, 10 July 2015 at 19:26, you wrote:

 Since you're moving from Windows to Ubuntu

 I've already moved to Ubuntu some years ago. ;-)

 As for merging the configurations, short of creating a temp repo in
 which you check in the default repo's auth/conf/hook files into
 trunk, and then checking in your live repo auth/conf/hook scripts
 into a branch off of trunk, then merging the branch into the trunk
 to effectively merge the files and then copying the merged files to
 the new repos, I don't know of anything.  =(

 That really sounds a bit complicated, and though I thought of using
 some diff tool myself, I guess the easiest is to just copy the
 configured lines I know I did change in my repos, which are most
 likely just those not commented. Treating the configuration as a
 key/value file and just copying some special keys may be enough for my
 case...


I've always liked 'meld' as a visual merge tool.  If you can get a
copy of your old, edited file onto the same system (running an X
desktop) as the updated and possibly slightly different copy, it is
fairly easy to identify and carry over your changes and make any other
needed edits.
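The key/value idea quoted above can be sketched in a few lines. This is only a rough illustration; the config contents and key names below are invented examples, not anything from a real repository:

```python
# Rough sketch of the "copy only the keys I changed" idea for svn config
# files.  The contents and key names below are invented examples.
import re

def carry_over(old_text, new_text, keys):
    """Return new_text with the values of `keys` taken from old_text."""
    merged = new_text
    for key in keys:
        m = re.search(rf"^{re.escape(key)}\s*=\s*(.+)$", old_text, re.M)
        if m:  # the key was actively set (not commented) in the old file
            merged = re.sub(rf"^#?\s*{re.escape(key)}\s*=.*$",
                            f"{key} = {m.group(1)}", merged, flags=re.M)
    return merged

old_conf = "anon-access = none\npassword-db = passwd\n"
new_conf = "# anon-access = read\n# password-db = passwd\n"
print(carry_over(old_conf, new_conf, ["anon-access", "password-db"]))
```

Anything not in the known-keys list is left at the new default, which matches the "just those not commented" observation.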

-- 
   Les Mikesell
lesmikes...@gmail.com


Re: Best version for RHEL7/CentOS 7?

2015-05-22 Thread Les Mikesell
On Thu, May 21, 2015 at 1:38 AM, Ignacio González (Eliop)
igtorque.el...@googlemail.com wrote:
 Les, I've installed Collabnet Subversion Edge 4.0.13 over CentOS 7 (minimal
 ISO) and it works fine.


Thanks - does this include a working mod_dav_svn for http access using apache?

-- 
Les Mikesell
  lesmikes...@gmail.com


Re: Best version for RHEL7/CentOS 7?

2015-05-22 Thread Les Mikesell
On Fri, May 22, 2015 at 7:52 AM, Ignacio González (Eliop)
igtorque.el...@googlemail.com wrote:

 And this is part of the content of
 /opt/csvn/data/conf/csvn_modules_httpd.conf (Collabnet embeds an Apache
 httpd server in the installation program, I don't know what tweaks they may
 have done):

 LoadModule dav_module lib/modules/mod_dav.so
 LoadModule dav_fs_module lib/modules/mod_dav_fs.so
 LoadModule dav_svn_module lib/modules/mod_dav_svn.so
 LoadModule authz_core_module lib/modules/mod_authz_core.so
 LoadModule authz_host_module lib/modules/mod_authz_host.so


A custom apache may be what it takes to work around the problem.   I
was hoping to avoid that since I run several other web services on the
same host.

-- 
  Les Mikesell
lesmikes...@gmail.com


Re: Best version for RHEL7/CentOS 7?

2015-05-22 Thread Les Mikesell
On Fri, May 22, 2015 at 8:14 AM, Mark Phippard markp...@gmail.com wrote:
 
 Thanks - does this include a working mod_dav_svn for http access using
 apache?


 Les,

 I thought you used to use SVN Edge (which I maintain), so that is why I did
 not recommend it.  I assumed you were not considering it for a reason.

I think we may have once had support with Collabnet but just used the
stock CentOS binaries, and later switched to Wandisco to use their
authentication proxy against AD because it was getting cumbersome to
manage passwords and the authz file.  I wasn't very involved in
choosing that part, but I don't think there were any problems before
or after the change.  But we still have a mix of http:// and svn://
access for an assortment of complicated reasons, and I'm just looking
to move it from CentOS 5 to something newer without breaking anything.

 SVN Edge includes everything, but that might not be what you want.  It does
 not use or replace the packages that come with CentOS.  Instead it provides
 a full private stack of everything.  It also needs Java installed to run the
 web GUI.

That may be a wise choice if the issue is really a bug in the stock
version's mod_dav module.  Given the Red Hat/CentOS policy of not
changing behavior for the life of a major release, it's hard
to guess when/if it might be fixed.  But the path of least resistance
here might be to move to CentOS 6, where the Wandisco 1.8.x rpms would
just drop in and work, and we'd be good for several more years with
minimal changes to the configuration.

Thanks for commenting, though - I'm still trying to understand all of
the options.

-- 
   Les Mikesell
 lesmikes...@gmail.com


Re: Best version for RHEL7/CentOS 7?

2015-05-20 Thread Les Mikesell
On Tue, Apr 28, 2015 at 4:37 PM, Les Mikesell lesmikes...@gmail.com wrote:

 Can someone comment on the best version of subversion to run on
 CentOS 7?  The distribution supplies 1.7.14.  I was thinking of
 using the packaged 1.8.x from Wandisco, but I don't see a mod_dav_svn
 here:
 http://opensource.wandisco.com/centos/7/svn-1.8/RPMS/x86_64/
 Does that mean there is a problem building it or am I missing something?


 In general, the CentOS releases will be based on more regression
 tested, stable source from RHEL. Even though I publish the tools to
 build backport SRPM's and RPM's myself, at https://github.com/nkadel/,
 I'd encourage stable, upstream tested tools where feasible. And I'd
 jump to 1.7.20, or even to the noticeable performance and integration
 improvements of 1.8.13, only if I really need some of the new features.

 This is especially true if you have NFS or CIFS based home directories
 and will access them with 1.8.x on one upgraded system and 1.7.x on
 another system for the same working copy.


Looking into this a bit more, it appears that the mod_dav shipped in
RHEL/CentOS7 has a bug that breaks mod_dav_svn.  Has anyone worked
around this yet or am I the only person who wants to run a
newer-than-stock subversion with http access on RHEL/CentOS 7?

-- 
   Les Mikesell
 lesmikes...@gmail.com


Re: bug or a feature :)

2015-05-06 Thread Les Mikesell
On Wed, May 6, 2015 at 8:07 PM, Scott Aron Bloom sc...@towel42.com wrote:
 So I noticed something..



 We use svn:externals quite a bit, using the format

 -r 1234 ^/externals/ lcldr



 And it works just fine, we never moved to the “new” format of
 ^/externals/@1234



 However, recently, we found that in restructuring one of our externals
 directories, which caused  to go away, svn up fails, saying it can't find
 the external.  However, switching to the @1234 format works fine.



 Is this a known issue?

It is not an issue - they just mean different things - see 'peg
revisions' in the documentation.  path@rev means go back to that
revision and find what was at the path.  -r rev path means look at the
path in the head revision, and back up to the specified revision
number in that object's history.  If there have been deletions/moves
they may be different things.  Usually path@rev is what you expect.
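To make the distinction concrete, here is a toy model in plain Python (not svn itself) of a path whose line of history was broken by a delete; the object names and revision numbers are invented:

```python
# Toy model (plain Python, not svn) of peg vs. operative revisions.
# Invented history: object A lived at /ext from r10 until it was deleted
# in r50; an unrelated object B was added at /ext in r60.  HEAD is r100.
lifetimes = {"A": range(10, 50), "B": range(60, 101)}
HEAD = 100

def peg(rev):
    """/ext@rev: go back to rev and take whatever lived at the path then."""
    return next((o for o, revs in lifetimes.items() if rev in revs), None)

def operative(rev):
    """-r rev /ext: find the object at /ext in HEAD, then walk that same
    object's own history back to rev (None if it didn't exist yet)."""
    head_obj = peg(HEAD)
    return head_obj if rev in lifetimes[head_obj] else None

print(peg(40))        # -> A: the old object, even though it was later deleted
print(operative(40))  # -> None: HEAD's /ext (B) has no r40 in its history
```

When the path's history is unbroken, both forms name the same object; the difference only shows up after deletions or moves, which is exactly the restructuring case described above.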

-- 
   Les Mikesell
  lesmikes...@gmail.com


Re: Best version for RHEL7/CentOS 7?

2015-04-28 Thread Les Mikesell
On Tue, Apr 28, 2015 at 4:06 PM, Nico Kadel-Garcia nka...@gmail.com wrote:
 On Tue, Apr 28, 2015 at 2:15 PM, Les Mikesell lesmikes...@gmail.com wrote:
 Can someone comment on the best version of subversion to run on
 CentOS 7?  The distribution supplies 1.7.14.  I was thinking of
 using the packaged 1.8.x from Wandisco, but I don't see a mod_dav_svn
 here:
 http://opensource.wandisco.com/centos/7/svn-1.8/RPMS/x86_64/
 Does that mean there is a problem building it or am I missing something?


 In general, the CentOS releases will be based on more regression
 tested, stable source from RHEL. Even though I publish the tools to
 build backport SRPM's and RPM's myself, at https://github.com/nkadel/,
 I'd encourage stable, upstream tested tools where feasible. And I'd
 jump to 1.7.20, or even to the noticeable performance and integration
 improvements of 1.8.13, only if I really need some of the new features.

 This is especially true if you have NFS or CIFS based home directories
 and will access them with 1.8.x on one upgraded system and 1.7.x on
 another system for the same working copy.

I'd expect the wandisco packaged versions to have reasonable
stability, and was hoping for something that would need nothing but
'yum update' to maintain for many years in the future - and would like
to jump to 1.8 if possible.  I see Wandisco does have mod_dav_svn
built for RHEL/CentOS 6 along with the 1.8 subversion rpms.  Using
CentOS 6.x would be an alternative, but I'd like to get as much life
out of this system as possible.

-- 
   Les Mikesell
 lesmikes...@gmail.com


Best version for RHEL7/CentOS 7?

2015-04-28 Thread Les Mikesell
Can someone comment on the best version of subversion to run on
CentOS 7?  The distribution supplies 1.7.14.  I was thinking of
using the packaged 1.8.x from Wandisco, but I don't see a mod_dav_svn
here:
http://opensource.wandisco.com/centos/7/svn-1.8/RPMS/x86_64/
Does that mean there is a problem building it or am I missing something?

-- 
   Les Mikesell
 lesmikes...@gmail.com


Re: Merge trunk and prod directories without workspace

2015-03-21 Thread Les Mikesell
On Fri, Mar 20, 2015 at 9:14 AM, Lathan Bidwell lat...@andrews.edu wrote:

  file                trunk rev   prod rev   status
  /a/b/c              5000        4850       incoming update
  /1/2/3              2000        2001       up to date
  /x/y/z              9000        7438       incoming update
  /x/y/z/index.html   8000        8001       up to date

 So only one outstanding change (set) is possible?


 A Publish action operates on one file or folder at a time.

But what about concurrent but different change sets?  That is, how do
you handle someone who might be preparing a large set of changes that
may take some time to complete and preview, but meanwhile needs to
make some small updates to the existing content?   I'd expect
something this complex to need multiple branches for development with
the changes being merged to the preview workspace.   Or maybe even a
tree of external linkages with the ability to either track the HEAD of
the external parts or to be pegged to revisions as determined by the
preview results.


 Where are the edits/commits happening?  If they are not made on the
 preview system, I don't think it would change much to do a merge from
 trunk into the previous production workspace there, and publish by
 committing to production.

 most of the commits are using direct repository actions. There are a couple
 actions that do a sparse checkout for an action.

New imports or copies from branches you haven't mentioned?  Imports
won't have any other ancestry.

-- 
  Les Mikesell
lesmikes...@gmail.com


Re: Merge trunk and prod directories without workspace

2015-03-17 Thread Les Mikesell
On Tue, Mar 17, 2015 at 9:45 PM, Les Mikesell lesmikes...@gmail.com wrote:

Sorry - accidentally sent before finished...

 On Tue, Mar 17, 2015 at 8:21 AM, Lathan Bidwell lat...@andrews.edu wrote:


 Also, these publishes are not like putting out a full release of a software
 package. The entire branch is a website for a university. We never publish
 everything at the same time. So, I'm not sure how I could implement a useful
 TAG every time someone decided to publish a subfolder or index.html file.


 If the sections are independent subdirectories, you might want to
 manage them independently.


  The problem with using switch is it's hard to know where your production
  branch is, and quite easy to accidentally svn update -r HEAD and
  accidentally deploy things.

 It's a matter of workflow.  I don't see why it isn't just as easy to
 deploy by incorrectly publishing something to your branch head.


 The difference is, if you publish something to the branch head, there is a
 revision that you see in a log, and could revert.

 if my prod checkout has a bunch of folders each switched to a different
 revision, if I lose that filesystem and have to recheck out that workspace
 I've lost all information about what is published.

 Except for special cases where you've reverted, that would normally be
 your latest release tag. But, with a workflow of publishing by
 tracking tags you would probably track the tag names with some
 process, maybe logging who approved it and when.

 if I copy my entire 250 gig branch, is SVN going to deduplicate that
 internally and not need more disk?

 Svn copies are very cheap. Probably much more so than a merge that
 ends up not being an exact copy of an ancestor revision.

 Most of my publishes happen on subfolders of the full tree, so basically
 every folder / file could have a different publish status: incoming add,
 incoming update, incoming delete, with different revisions.

 file                trunk rev   prod rev   status
 /a/b/c              5000        4850       incoming update
 /1/2/3              2000        2001       up to date
 /x/y/z              9000        7438       incoming update
 /x/y/z/index.html   8000        8001       up to date

So only one outstanding change (set) is possible?

 Do you mean diffs against trunk/HEAD?  That should be the same
 regardless of the workspace url.

 What I currently do is compare the rev number from the prod branch and the
 trunk branch for an item, and if there are newer trunk revisions, then I
 show the user that this file has incoming updates.

That would not be much different if your published copy was a tag.

 My preview runs off my trunk branch, so when they preview they see the most
 up-to-date (albeit unpublished) version.

Where are the edits/commits happening?  If they are not made on the
preview system, I don't think it would change much to do a merge from
trunk into the previous production workspace there, and publish by
committing to production.

-- 
  Les Mikesell
lesmikes...@gmail.com


Re: Merge trunk and prod directories without workspace

2015-03-17 Thread Les Mikesell
On Tue, Mar 17, 2015 at 8:21 AM, Lathan Bidwell lat...@andrews.edu wrote:


 Also, these publishes are not like putting out a full release of a software
 package. The entire branch is a website for a university. We never publish
 everything at the same time. So, I'm not sure how I could implement a useful
 TAG every time someone decided to publish a subfolder or index.html file.


If the sections are independent subdirectories, you might want to
manage them independently.


  The problem with using switch is it's hard to know where your production
  branch is, and quite easy to accidentally svn update -r HEAD and
  accidentally deploy things.

 It's a matter of workflow.  I don't see why it isn't just as easy to
 deploy by incorrectly publishing something to your branch head.


 The difference is, if you publish something to the branch head, there is a
 revision that you see in a log, and could revert.

 if my prod checkout has a bunch of folders each switched to a different
 revision, if I lose that filesystem and have to recheck out that workspace
 I've lost all information about what is published.

Except for special cases where you've reverted, that would normally be
your latest release tag. But, with a workflow of publishing by
tracking tags you would probably track the tag names with some
process, maybe logging who approved it and when.

 if I copy my entire 250 gig branch, is SVN going to deduplicate that
 internally and not need more disk?

Svn copies are very cheap. Probably much more so than a merge that
ends up not being an exact copy of an ancestor revision.
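The reason repository copies are cheap can be shown with a toy model (plain Python, nothing like svn's actual storage format, and the tag name is invented): a copy records a new name pointing at the existing node rather than duplicating any content.

```python
# Toy model of copy-on-write "cheap copies": a copy adds a name, not data.
repo = {"trunk": {"index.html": "content-r9000"}}

def cheap_copy(src, dst):
    repo[dst] = repo[src]  # O(1): both names now share the same node

cheap_copy("trunk", "tags/publish-2015-03-17")
# Both paths resolve to the identical node; no content was duplicated.
print(repo["tags/publish-2015-03-17"] is repo["trunk"])  # -> True
```

So copying a 250 GB branch to a tag costs roughly one new directory entry, not 250 GB of new disk.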

 Most of my publishes happen on subfolders of the full tree, so basically
 every folder / file could have a different publish status: incoming add,
 incoming update, incoming delete, with different revisions.

 file                trunk rev   prod rev   status
 /a/b/c              5000        4850       incoming update
 /1/2/3              2000        2001       up to date
 /x/y/z              9000        7438       incoming update
 /x/y/z/index.html   8000        8001       up to date

But only one i




   I'd also like to know in SVN if there are unpublished changes to a file or
   folder (separate topic), which just using switch on the workspace would
   make more complicated.

 Do you mean diffs against trunk/HEAD?  That should be the same
 regardless of the workspace url.

 What I currently do is compare the rev number from the prod branch and the
 trunk branch for an item, and if there are newer trunk revisions, then I
 show the user that this file has incoming updates.


   I need to be able to stage changes and preview them (preview server runs
   off the /trunk/ branch).
 
  Alternatively, you could merge the trunk changes into your preview
  workspace and commit that to production, with the edits actually being
  done elsewhere.
 
  I will talk with my colleague about that idea, although I think the last
  time I mentioned it there was some reason why it was problematic.

 I would think you would really want to preview the exact copy of what
 is about to be pushed to production instead of hoping a merge ends
 up with the changes you wanted.  And along those lines, it is possible
 to have things in your staging/preview workspace that aren't even
 committed if you aren't careful about it.  Copy to tag / preview
 tag / switch production to tag is a safer approach to be sure no
 uncommitted changes happen in between.


 You are correct in why I have not used merge for this operation before.

 My preview runs off my trunk branch, so when they preview they see the most
 up-to-date (albeit unpublished) version.


 --
Les Mikesell
 lesmikes...@gmail.com





Re: Merge trunk and prod directories without workspace

2015-03-16 Thread Les Mikesell
On Mon, Mar 16, 2015 at 4:33 PM, Lathan Bidwell lat...@andrews.edu wrote:


 I usually think in revision numbers or tag names instead of pretending
 there was only one. If, instead of tracking HEAD, you copied each
 release to a new TAG with your own naming scheme you could just switch
 your production workspace to that TAG instead of arranging for what
 you want to be at the head of a branch.  And as a side effect you get
 an easily tracked name for the tag you would need to revert those
 changes.


 Hard to make friendly names automatically.

They usually end up being release numbers with a base name and
major/minor components, but much more related to the release/publish
step than intermediate commits.

 What I failed to mention before is that these publishes happen closer to the
 leaf nodes, more like: blah/foo/bar and blah/foo/hi both get published, but
 blah/foo/bye didn't get published.

 Each user in the content Management System has folders that they have access
 to, and they can publish any files or folders in their area.

So, how would you track those if you wanted to revert?

 The problem with using switch is it's hard to know where your production
 branch is, and quite easy to accidentally svn update -r HEAD and
 accidentally deploy things.

It's a matter of workflow.  I don't see why it isn't just as easy to
deploy by incorrectly publishing something to your branch head.

 I'd also like to know in SVN if there are unpublished changes to a file or
 folder (separate topic), which just using switch on the workspace would make
 more complicated.

Do you mean diffs against trunk/HEAD?  That should be the same
regardless of the workspace url.

  I need to be able to stage changes and preview them (preview server runs
  off
  the /trunk/ branch).

 Alternatively, you could merge the trunk changes into your preview
 workspace and commit that to production, with the edits actually being
 done elsewhere.

 I will talk with my colleague about that idea, although I think the last
 time I mentioned it there was some reason why it was problematic.

I would think you would really want to preview the exact copy of what
is about to be pushed to production instead of hoping a merge ends
up with the changes you wanted.  And along those lines, it is possible
to have things in your staging/preview workspace that aren't even
committed if you aren't careful about it.  Copy to tag / preview
tag / switch production to tag is a safer approach to be sure no
uncommitted changes happen in between.

-- 
   Les Mikesell
lesmikes...@gmail.com


Re: Merge trunk and prod directories without workspace

2015-03-16 Thread Les Mikesell
On Mon, Mar 16, 2015 at 3:16 PM, Lathan Bidwell lat...@andrews.edu wrote:


  I have a content management system running on top of SVN. My web servers
  run a post commit hook that does svn update off of svnlook after every
  commit.
 
  I currently have a Publish operation which I implement by doing svn
  delete $prod_url && svn cp $trunk_url $prod_url. (both repo urls)
 
  This causes problems because the post commit hook triggers a delete of
  the folder on my production web server, and then sometimes takes longer to
  re-download all the content (some folders have some decent media, about
  15-30 gig).

Don't you really want to just 'svn switch' your production workspace
to the new production target url instead of deleting and checking out
again?  As long as the content shares ancestry it should just move the
differences.
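A toy illustration (plain Python, not the svn wire protocol) of why switching between related trees is cheap: only paths whose content differs need to be transferred, so bulky unchanged media stays in place. File names and sizes here are invented:

```python
# Toy model: an update/switch between related trees only has to touch
# the paths whose content differs.  File names and sizes are invented.
current = {"a.html": "v1", "media/big.mp4": "blob-15GB", "b.html": "v1"}
target  = {"a.html": "v1", "media/big.mp4": "blob-15GB", "b.html": "v2"}

to_transfer = sorted(p for p in target if current.get(p) != target[p])
to_delete = sorted(p for p in current if p not in target)

print(to_transfer)  # -> ['b.html']: the 15 GB of media is left alone
print(to_delete)    # -> []
```

A fresh checkout, by contrast, behaves as if `current` were empty and re-downloads everything, which is why the delete-then-checkout cycle is so slow on media-heavy folders.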

-- 
   Les Mikesell
 lesmikes...@gmail.com


Re: Merge trunk and prod directories without workspace

2015-03-16 Thread Les Mikesell
On Mon, Mar 16, 2015 at 4:04 PM, Lathan Bidwell lat...@andrews.edu wrote:


 Don't you really want to just 'svn switch' your production workspace
 to the new production target url instead of deleting and checking out
 again?  As long as the content shares ancestry it should just move the
 differences.


 The copy and delete is not ideal. What I am really trying to do is deploy
 the version of the trunk branch to the production branch.

I don't see why delete/copy in the repository is a problem.  But why
track the delete with a post-commit hook?

 I am not changing my production target url. I am trying to send new changes
 from trunk to prod, while keeping trunk as a separate branch.

 Before and after a publish action, there will still be those 2 branches:
 /trunk/blah
 /prod/blah

I usually think in revision numbers or tag names instead of pretending
there was only one. If, instead of tracking HEAD, you copied each
release to a new TAG with your own naming scheme you could just switch
your production workspace to that TAG instead of arranging for what
you want to be at the head of a branch.  And as a side effect you get
an easily tracked name for the tag you would need to revert those
changes.

 They just happen to have the same content until someone makes new changes to
 /trunk/blah/.

 Publish should make the /prod/ be the same as the /trunk/ while keeping them
 separate enough to make changes to /trunk/ and not touch /prod/ (until the
 next publish).

 I need to be able to stage changes and preview them (preview server runs off
 the /trunk/ branch).

Alternatively, you could merge the trunk changes into your preview
workspace and commit that to production, with the edits actually being
done elsewhere.

-- 
   Les Mikesell
 lesmikes...@gmail.com


Re: SVN commit does nothing

2015-03-12 Thread Les Mikesell
On Wed, Mar 11, 2015 at 12:32 PM, pascal.sand...@freescale.com
pascal.sand...@freescale.com wrote:
 I'm in a company that uses Red Hat as the base distribution. I asked for a web
 server to migrate some tools from an old server. I asked for recent versions
 of some tools like php, perl or mysql.
 For this reason (mostly to compile a newer version of php), and to be able to put
 all the binaries and configuration on storage with backup, they gave me a
 custom web stack compiled with httpd 2.2. Now I have to add svn, which is why I
 compiled it from source.

 I found that discussion where POST requests were not handled by svn. Except I
 have no error, it seems I face the same kind of problem:

[...]

 How can I trace requests and see where they are handled in httpd?

Does the httpd error_log show a problem?  It could be something simple
like needing to increase apache's setting for LimitRequestBody.  Or
not.
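For reference, that directive lives in the httpd configuration; the paths and the limit below are only example values, not anything from the poster's setup:

```apache
# Example only: raise the request-body cap for the svn location.
# The httpd default is 0 (unlimited), but custom stacks sometimes set
# a lower value that makes large commits over DAV fail or hang.
<Location /repos>
    DAV svn
    SVNParentPath /var/www/svn
    # Allow request bodies up to 100 MB
    LimitRequestBody 104857600
</Location>
```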

 Should I try to build an older version of svn?

Personally, I've gotten tired of fighting that kind of battle and
would either switch to running svnserve so you don't have to deal
with apache at all, or find a way to run a stock apache rpm and either
the stock older subversion or a packaged newer one like Wandisco's.
 Is there any chance of getting RHEL7 as your base system?  That
shouldn't be horribly outdated at this point.  You didn't say what
version you have, but mixing httpd 2.2 with new custom stuff seems
like asking for trouble.

-- 
   Les Mikesell
 lesmikes...@gmail.com


Re: SVN commit does nothing

2015-03-12 Thread Les Mikesell
On Thu, Mar 12, 2015 at 11:59 AM, pascal.sand...@freescale.com
pascal.sand...@freescale.com wrote:
 I also think that something is interfering with this request, but I don't
 know how to identify these issues, as I don't master these technologies and
 don't see any errors.
 But thanks, I take note that it would be better to install version 1.8 anyway.
 Pascal


If you had a stock RHEL5.x apache, I'd recommend starting here:
http://opensource.wandisco.com/rhel/5/svn-1.8/RPMS/.  You might be
able to get the src.rpms to rebuild against what you have, but I think
the apr* part has to match your apache build.

-- 
   Les Mikesell
 lesmikes...@gmail.com


Re: SVN commit does nothing

2015-03-11 Thread Les Mikesell
On Wed, Mar 11, 2015 at 11:51 AM, pascal.sand...@freescale.com
pascal.sand...@freescale.com wrote:
 Hi again

 In fact my previous conclusions are wrong.

 What exactly works is svn commit from a client that is version 1.6. Using a
 svn client 1.7 or 1.8, commit does not proceed until stopped by the timeout.

 svn version on server is 1.8.11.

 What is different between version 1.6 and 1.7 or 1.8 during a commit that
 could make it fail?

There have been some client issues involving the serf or neon
libraries mentioned before but usually things get better with newer
clients, not the other way around.  Taking a step back, why are you
compiling your own server?   Isn't there a packaged version available
for your platform?   It might help to start with a known-working
server build.

-- 
   Les Mikesell
 lesmikes...@gmail.com


Re: Copy and Reduce the size of SVn repos

2015-03-11 Thread Les Mikesell
On Wed, Mar 11, 2015 at 2:00 AM, Stümpfig, Thomas
thomas.stuemp...@siemens.com wrote:
 Actually splitting projects is not a solution to something that eliminates 
 old data.

Correct, but if we give up on getting a working obliterate, we are
left with dump/filter/load as the only way to administer content.  And
as a practical matter, how many dump/filter/load cycles do you want to
do on repositories after they go over a few hundred gigs with all of
your development teams waiting for you to get the filters right to
match all the distributed cruft?  Also, in many cases over the years
whole projects become obsolete so getting rid of or archiving that
part would be easy if you had used the 'directory of repositories'
approach instead of 'repository of projects' and everything would have
worked about the same.

-- 
   Les Mikesell
 lesmikes...@gmail.com


Re: Copy and Reduce the size of SVn repos

2015-03-10 Thread Les Mikesell
On Sun, Mar 8, 2015 at 8:27 PM, Nico Kadel-Garcia nka...@gmail.com wrote:
 
 Heh, I have to ask, where did you find that doctrine? There's no such
 thing. It's all a lot more mundane: First, you have to get people to

 I've had to deal with that doctrine personally and professionally
 since first working with Subversion in 2006. It comes up again every
 so often, for example in
 http://subversion.tigris.org/issues/show_bug.cgi?id=516 and is
 relevant to the original poster's request.

 There can be both software and legal reasons to ensure that the
 history is pristine and never forgets a single byte. But in most
 shops, for any lengthy project, *someone* is going to submit
 unnecessary bulky binaries, and *someone* is going to create spurious
 branches, tags, or other subdirectories that should go the way of the
 passenger pigeon.

 agree what obliterate actually means; there are about five meanings
 that I know of. And second, all five are insanely hard to implement with
 our current repository design (just ask Julian, he spent about a year
 trying to come up with a sane, moderately backwards-compatible solution).

 -- Brane

 I appreciate that it's been awkward. The only workable method
 now is the sort of svn export; svn import to new repo and discard old
 repo that I described, or a potentially dangerous and often fragile
 dump, filter, and reload approach that preserves the original URL's
 for the repo, but it's really not the same repo.

 It remains messy as heck. This is, in fact, one of the places where
 git or other systems' more gracious exclusion or garbage collection
 tools do better. Even CVS had the ability to simply delete a
 directory on the main fileserver to discard old debris: it's one of
 the risks of the more database-based approach of Subversion to
 managing the entire repository history.

Maybe it is time to change the request from 'obliterate' to _any_
reasonable way to fix a repository that has accumulated cruft.   And a
big warning to new users to put separate projects in separate
repositories from the start because they are too hard to untangle
later.  I've considered dumping ours and trying to split by project,
but I'm not even sure that is possible because many were imported from
CVS then subsequently moved to improve the layout.  So I can't really
filter by path.

-- 
   Les Mikesell
 lesmikes...@gmail.com


Re: Copy and Reduce the size of SVn repos

2015-03-08 Thread Les Mikesell
On Sun, Mar 8, 2015 at 3:31 PM, Tony Sweeney swee...@addr.com wrote:

 As I recall, this was feature request #13 after Perforce was released, and 
 was implemented the best part of 15 years ago.  As near as I can tell it's 
 architecturally impossible to implement in Subversion as a consequence of 
 some of the initial design choices.  Subversion has served me well, but this 
 has been a glaring misfeature since its inception:


I have to agree.  I can't imagine anyone using subversion for any
length of time without having some things committed that shouldn't be
there.   It probably would still be the main topic of conversation
here if everyone had not simply given up hope long ago.

-- 
  Les Mikesell
  lesmikes...@gmail.com


Re: Correct way to undo commit of a tag?

2015-02-24 Thread Les Mikesell
On Tue, Feb 24, 2015 at 8:49 AM, David Aldrich
david.aldr...@emea.nec.com wrote:

 My most recent commit was the creation of a tag.  I want to delete that tag.
 Should I reverse merge the commit or simply delete the tag?

In subversion the usual convention is that tags are never changed
after the copy that creates them. That is, they become human-friendly
names for a single revision. If you are following that convention,
then you should delete the tag if it was not what you intended so you
can reuse that tag name. However, changes tend to be ongoing so you
may want to name your tags with some version numbering scheme - in
which case you might create a newer tag later and ignore earlier
versions. Copies are cheap in subversion and it doesn't hurt to have
extra tags as long as the names are not confusing.

-- 
   Les Mikesell
  lesmikes...@gmail.com


Re: How to Switch, or Update, a file that exists only in a branch ?

2015-02-20 Thread Les Mikesell
On Fri, Feb 20, 2015 at 9:26 AM, Kerry, Richard richard.ke...@atos.net wrote:
 
 Also, to follow up points in Les' response:
 If you put your documents in directories and branch the directory instead
 of doing individual file operations, you'll have a place to switch to when
 you want to access the contents of that branch directory.

 I understand this.

Then I don't understand where you are expecting to switch to if you
don't create a containing directory. CVS stores branch/tag/revision
info inside individual files so it was possible to apply those
concepts to a single file - but much harder to hold project
directories together.  Subversion moves the whole concept of
branches and tags into the paths in the repository. If you don't have
a different path (which translates to a directory) you can't have a
branch - or a tag.

 For source code work, this almost always makes sense.

 Indeed - that's what I do for source code, where the folder makes sense as
 the unit of interest.

 If you are working on some random collection of individual files with no
 natural structure for branch copies it might not.

 That's pretty much the situation I have here.
 Either it's a folder of miscellaneous configuration files or documents.
 Sorry if that wasn't clear from my explanations (below).

And perhaps what I didn't make clear was that since you can't switch
to a branch without a path to give it a name you have to create one.
You can branch (copy) the entire directory if you want, copies are
cheap.  Or you can create the new branch directory and copy one or a
few files into it - but you'll have to remember where you put things.
Under a single path, you can only have revisions, and while you can
check out or update to any older revision, when you commit under that
path it can only go in as a newer revision number so that's not a
great way to work with multiple revisions in parallel.  You could
(svn) copy the document to a different name in the same directory -
and both will retain the previous history to that point, but then you
wouldn't 'switch' to go between them, you would just use their
different names for as long as they co-exist.
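A minimal sketch of the branch-the-directory-then-switch workflow described above, using a throwaway repository (the branches/draft name is hypothetical):

```shell
# Minimal sketch of branch-the-directory-then-switch, using a throwaway
# repository; the branches/draft name is hypothetical.
set -e
cd "$(mktemp -d)"
svnadmin create repo
REPO="file://$PWD/repo"
svn mkdir -m "initial layout" "$REPO/trunk" "$REPO/branches"
svn checkout "$REPO/trunk" wc

# Branching the whole directory is a cheap copy...
svn copy -m "branch trunk for drafts" "$REPO/trunk" "$REPO/branches/draft"

# ...and gives the working copy a path to switch to and back:
svn switch "$REPO/branches/draft" wc
svn info wc | grep '^URL'
svn switch "$REPO/trunk" wc
```

Because the branch was created by copying the trunk directory, the switch target shares ancestry with the working copy and the switch is a cheap, incremental operation.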

-- 
   Les Mikesell
 lesmikes...@gmail.com


Re: How to move folder from one path to another along with all revisions

2015-02-20 Thread Les Mikesell
On Fri, Feb 20, 2015 at 9:01 AM, Mohsin mohsinchan...@gmail.com wrote:
 Hi,

 Suppose I have one folder in Repo1 (http://abc.com/SVN/Repo1/folder1) and I
 have moved that folder to some new path which is (http://abc.com/SVN/Repo1).
 Point is I want to move that folder along with all its revisions to new path
 when i check history of that folder there should be all revisions instead of
 one revision. Can someone tell how can i move folder along with all its
 revisions to new path ?

_All_ svn copies and moves retain the revision history of the
contents.  That's how branches and tags work, but it also applies to
any other copies.  You can use a url-url copy or check out the
directory above and 'svn copy' in the workspace before committing back
with the same result.   Also note that even if you move a path, it can
still be accessed at its previous path and revisions using the peg
syntax:  path@revision.
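Both points can be sketched against a throwaway repository; the old/ and new/ paths below are hypothetical stand-ins for the paths in the question:

```shell
# Sketch with a throwaway repository: moving a directory keeps its history,
# and the pre-move path stays reachable with peg-revision syntax.
set -e
cd "$(mktemp -d)"
svnadmin create repo
REPO="file://$PWD/repo"
svn checkout -q "$REPO" wc
svn mkdir -q wc/old
echo data > wc/old/file.txt
svn add -q wc/old/file.txt
svn commit -q -m "add old/file.txt" wc                  # commits r1

svn move -m "relocate folder" "$REPO/old" "$REPO/new"   # commits r2
svn log -v "$REPO/new/file.txt"   # full history survives the move
svn ls "$REPO/old@1"              # → file.txt (old path, pegged at r1)
```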

-- 
Les Mikesell
  lesmikes...@gmail.com


Re: how to backup repository with all history?

2015-02-10 Thread Les Mikesell
On Mon, Feb 9, 2015 at 10:25 PM, James oldyounggu...@yahoo.com wrote:
 I have few repositories in my svn-root directory.
 I have put a conf folder at the same level as other repositories. that conf
 folder contains a authz and a passwd file which are shared by all
 repositories. By doing this I can setup users and their privilege in one
 place. But because of this conf folder, I cannot do hotcopy automatically. I
 am thinking just tar all of them daily.

 My purpose of backup is just in case my machine die. So how big the
 difference will be for my tar approach and the hotcopy?

If you are sure that no changes can happen during the tar run, that
should be fine - or rsync to another system might be more efficient.
But with any backup approach you need to understand that you can't
continue to use existing working copies that might have newer
revisions than the restored backup. You would have to check out a
fresh copy, then copy over any new work you want to keep from the old
working copy and commit again.

And yet another alternative if you have another system to hold the
data (and it's not much of a backup if you don't...) would be to
svnsync each repository and rsync your common config.
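A sketch of the svnsync route with two throwaway repositories; note the mirror must accept revision-property changes, which is why the permissive hook is installed first:

```shell
# Sketch of the svnsync route with two throwaway repositories. The mirror
# must accept revision-property changes, hence the permissive hook.
set -e
cd "$(mktemp -d)"
svnadmin create source
svnadmin create mirror
SRC="file://$PWD/source"
DST="file://$PWD/mirror"

printf '#!/bin/sh\nexit 0\n' > mirror/hooks/pre-revprop-change
chmod +x mirror/hooks/pre-revprop-change

svn mkdir -m "initial layout" "$SRC/trunk"
svnsync initialize "$DST" "$SRC"   # one-time pairing of mirror to source
svnsync synchronize "$DST"         # repeat (e.g. from cron) to stay current
svn ls "$DST"    # → trunk/
```

The shared conf directory is not part of any repository, so it would still need a plain rsync alongside the periodic synchronize runs.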

-- 
   Les Mikesell
 lesmikes...@gmail.com


Re: Sanity-check tool for authz file?

2015-01-28 Thread Les Mikesell
On Thu, Jan 22, 2015 at 6:30 PM, Ben Reser b...@reser.org wrote:
 On 1/22/15 1:00 PM, Les Mikesell wrote:
 Are there any tools to help find syntax issues or mismatches in paths
 between an authz file and the associated repo?

 The validate subcommand of svnauthz (1.8 or newer) or svnauthz-validate before
 that.

 Comes in the tools/server-side directory of the source distribution.  You may
 or may not have it depending on how you're getting Subversion.

 An authz file with this in it:
 [abc/be/de]
 * = r

 Will generate this:
 svnauthz: E220003: Section name 'abc/be/de' contains non-canonical fspath
 'abc/be/de'

Thanks!  This instance happens to be the stock 1.6.x maintained for
the RHEL/CentOS distribution but I'll keep that in mind when I
upgrade.

-- 
   Les Mikesell
 lesmikes...@gmail.com


Re: How to Switch, or Update, a file that exists only in a branch ?

2015-01-26 Thread Les Mikesell
On Mon, Jan 26, 2015 at 10:00 AM, Kerry, Richard richard.ke...@atos.net wrote:

 Given that Subversion is usually presented as better than CVS in all ways I’m 
 assuming it can do this but I have yet to work out how to tell it to do so.


The 'better' part about subversion is that it understands directories
as projects and holds the contents together during atomic operations
like commits.   If you put your documents in directories and branch
the directory instead of doing individual file operations, you'll have
a place to switch to when you want to access the contents of that
branch directory.  For source code work, this almost always makes
sense.  If you are working on some random collection of individual
files with no natural structure for branch copies it might not.

-- 
   Les Mikesell
 lesmikes...@gmail.com


Sanity-check tool for authz file?

2015-01-22 Thread Les Mikesell
Are there any tools to help find syntax issues or mismatches in paths
between an authz file and the associated repo?  I just spent some
time tracking down a typo that had the weird effect of some users
being able to create directories that they subsequently could not move
or delete.   Turned out to be a path missing the leading / in a
section considerably below what I thought should be the relevant entry,
and the log of 'reparent ' without any mention of a problem (just
no commit) on the denied action wasn't very helpful.  Is there a good
way to avoid such problems or find them after the fact?
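Until a proper validator is available (svnauthz in 1.8+), a crude grep over the section headers catches exactly this class of typo. A heuristic sketch only: it assumes the plain [path] and [repo:path] section forms and only flags paths missing the leading slash:

```shell
# Crude fallback check for installations without svnauthz: flag section
# headers whose path part lacks the leading slash. Heuristic sketch only -
# assumes the plain [path] and [repo:path] section forms.
set -e
d="$(mktemp -d)"
cat > "$d/authz" <<'EOF'
[groups]
devs = alice, bob

[/trunk]
@devs = rw

[abc/be/de]
* = r
EOF

grep -nE '^\[' "$d/authz" \
  | grep -vE ':\[(groups|aliases)\]$' \
  | grep -vE ':\[([^]:]+:)?/'
# → 7:[abc/be/de]
```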

-- 
   Les Mikesell
lesmikes...@gmail.com


Re: One data set - two repositories?

2014-10-29 Thread Les Mikesell
On Wed, Oct 29, 2014 at 5:33 PM, Andreas Stieger andreas.stie...@gmx.de wrote:
 Hello,

 On 29/10/14 21:07, c...@qgenuity.com wrote:
 I'm looking for a way to use Subversion to store data from a single data
 set across two repositories. More specifically, I want to have one
 repository which contains all of the data and a second that contains only
 specific directories.

 When I am at home, I will sync to the complete repository and when I am
 traveling, I will sync to the partial repository.

 How do I set this up so that the two repositories play nicely with each
 other?

 svn being a centralized system, the disconnected two-way element of your
 scenario is not part of its design. The keywords below will help you
 work around:
 a) svnsync, (one way) which has provisions for partial syncs
 b) same, but switching the direction as required
 c) two repositories with remote merging
 d) two repositories with vendor branches

Or, if you are only concerned with committing/updating certain
subdirectories in a working copy when traveling, you don't need
different repositories for that.  Just do what you need.
You could even make different 'top level' projects that pull in
the directories you want via svn external properties.   Updating from
the top level would update the whole tree of external references, but
you'd have to commit changes to them separately.
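A sketch of that externals approach with a throwaway repository: a 'travel' top level that pulls in only the docs directory. All names here are hypothetical:

```shell
# Sketch of the externals approach with a throwaway repository: a 'travel'
# top level that pulls in only the docs directory. All names hypothetical.
set -e
cd "$(mktemp -d)"
svnadmin create repo
REPO="file://$PWD/repo"
svn mkdir -m "initial layout" "$REPO/docs" "$REPO/travel"
svn checkout "$REPO/travel" wc

# New-style (1.5+) externals definition: URL first, then local directory.
svn propset svn:externals "$REPO/docs docs" wc
svn commit -m "pull docs in via an external" wc
svn update wc                  # fetches the external into wc/docs
svn info wc/docs | grep '^URL'
```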

-- 
   Les Mikesell
 lesmikes...@gmail.com


Re: restricting certain users to read a particular folders in the Repo

2014-10-24 Thread Les Mikesell
On Fri, Oct 24, 2014 at 3:08 AM, janardhan adatravu
surya.janard...@gmail.com wrote:
 Hello,

 Thank you for your reply.

 This method needs more activity on SVN administrator side.
 for example  a branch/tag is created from a trunk, paths should be updated
 in authz file.

 Is there any other way to restrict certain users not to checkout particular
 folders?

Paths equate to folders...  Perhaps you need to reconsider your
concept of users with access to 'all other folders except ...' and
nail down what they are actually allowed to access.   Or rearrange the
paths where branches/tags are copied (these are mostly arbitrary,
after all) if your administrator can't keep up with your needs.

-- 
   Les Mikesell
  lesmikes...@gmail.com


Re: SVN doesn't like .so files?

2014-10-10 Thread Les Mikesell
On Fri, Oct 10, 2014 at 1:19 PM, James oldyounggu...@yahoo.com wrote:
 I am trying to add an existing project into the SVN repository. It seems
 to work, but when I check them out in a new location, I found all .so files are
 not present.  Then I look at the repository, these .so files from JDK are
 not there.

 Any workaround? I am using 1.8.10(r1615264) svn. The command I used to add
 project is svn import -m "my message" .
 svn://homeNetworkIP/repositoryName.


The client where you committed was most likely configured to ignore
*.so files (and other common binary build results).   You can override
this by explicitly doing an 'svn add' of missing files and committing
them, or you can change the client configuration if you want different
default behavior.
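The client-side setting in question lives in the runtime config area (~/.subversion/config on Unix-like systems, or the Subversion directory under %APPDATA% on Windows). A sketch of overriding it; note that setting global-ignores replaces the entire built-in default list, which varies by release and includes patterns like *.o and *.so:

```ini
### ~/.subversion/config - [miscellany] section (sketch; the built-in
### default ignore list varies by release and includes *.o and *.so)
[miscellany]
### Keep common build and editor droppings ignored, but stop ignoring *.so:
global-ignores = *.o *.lo *.la .libs *.rej *~ #*# .#* .*.swp
```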

-- 
   Les Mikesell
 lesmikes...@gmail.com


Re: SVN doesn't like .so files?

2014-10-10 Thread Les Mikesell
On Fri, Oct 10, 2014 at 2:56 PM, James oldyounggu...@yahoo.com wrote:
 After I import, I have renamed my project folder to project.bak folder and
 created a new empty project folder.  I found this after I do co in my new
 project folder.

 How can I easily add these .so files or other possible ignored files into
 repository? There are about 900MB data (JDK and Eclipse). I don't want to
 miss any files.

First, make sure this is really what you want to do.  Normally you
would only want 'source' type files that you would check out in
working copies so you can edit and rebuild any binaries from them.  If
you do need some unrelated fixed-version binaries brought along with
every checkout, consider putting them elsewhere (in the same or a
different repository) and using svn:external properties to pull any
tools or supporting libraries into their own subdirectories of the
working space.  That way you can separate the versioning of 'your'
project from any separately-managed support tools/code/binaries.  If
you subsequently commit new versions of the tools/libs, you can
control the checked-out version by using peg-revision syntax or tag
copies for your external targets.

 1. Can I delete the new project folder and rename back the project.bak, and
 then use svn add .?

No.

 or
 2. Copy entire contents of project.bak folder (has ALL files) into the new
 project folder (missing some files), and then do svn add .?

Yes.  If you are sure you really want that.

 Does 'svn add .' find new files and ignore existing files when adding
 to the repository, after commit?

I'm not sure if the svn:ignore becomes a remembered property after the
first run or if it is strictly a client setting.  You'll be able to
tell by what the svn add command shows, though.  If it isn't taking
them, explicitly putting the filenames on the command line will work
and, at least on unix-like systems you can use wildcards like * */*
*/*/*, etc,. to have the shell expand all the filenames for you.
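If the shell-wildcard route gets unwieldy, the client can do the walking itself: 'svn add --force' recurses into already-versioned directories, and '--no-ignore' disregards the ignore patterns. A sketch against a throwaway repository:

```shell
# Sketch: let the client walk the tree instead of using shell wildcards.
# 'svn add --force' recurses into already-versioned directories, and
# '--no-ignore' disregards the ignore patterns. Throwaway repository.
set -e
cd "$(mktemp -d)"
svnadmin create repo
svn checkout -q "file://$PWD/repo" wc
cd wc
touch lib.so main.c        # *.so is in the default ignore list
svn add --force --no-ignore .
svn status                 # review what got scheduled before committing
```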

-- 
   Les Mikesell
 lesmikes...@gmail.com


Re: Every Version of Every File in a Repository

2014-10-08 Thread Les Mikesell
On Wed, Oct 8, 2014 at 3:38 PM, Andreas Stieger andreas.stie...@gmx.de wrote:
 Hi,

 On 08/10/14 21:08, Bob Archer wrote:
 I assume by “scan” you are talking about virus scanning.  I would
 question the need to do this. Yea, I know… but still, many request come
 from a lack of understanding of a technology.

 It is more likely that this is about a legal discovery or license/code
 review. Here then is a hint.

If you are looking to make it searchable, fisheye from Atlassian knows
how to do that.  Be prepared to wait a couple weeks for a large
repository while it does an 'svn cat' of every revision of every file
to feed to its indexer, though.

-- 
   Les Mikesell
  lesmikes...@gmail.com


Re: AW: SVN Commit Failed For Data larger Than 2 GB [How To Resolve]

2014-10-01 Thread Les Mikesell
On Wed, Oct 1, 2014 at 7:45 AM, Mohsin mohsinchan...@gmail.com wrote:
 Hi,

 we are using HTTP protocol for repository access in browser e.g

 http://server/some/path


I'm not sure it is clear from this thread whether you have succeeded
in committing 2Gb with a command line svn client using http protocol
(your import example showed file://).  If the command line client
still shows the problem over http, then the issue may be with apache
on the server side.  However if the command line svn works with an
http url but tortoise fails, then the issue is obviously with the
tortoise libraries on the client side.

-- 
   Les Mikesell
  lesmikes...@gmail.com


Re: AW: SVN Commit Failed For Data larger Than 2 GB [How To Resolve]

2014-10-01 Thread Les Mikesell
On Wed, Oct 1, 2014 at 10:23 AM, Mohsin mohsinchan...@gmail.com wrote:
 Dear,


 That's what i am saying from last 2 days I was successful in committing data
 by using svn command line on Linux server but I faced issue with tortoise
 svn client on my window machine .

Thanks - it would have been more clear if you had shown that svn
command line instead of the one that used the file:// protocol.

 That clearly shows the issue is with
 Tortoise SVN, I know that, but the most important thing I am eager to
 hear from you people is how to resolve this issue. What should I do to
 figure out this issue with the Tortoise SVN client?

First note that the svn command line client is available for windows
too, and it might be worth verifying that it can succeed in exactly
the same circumstances where tortoise fails.  As others have noted,
this mail list doesn't have much to do with the tortoise client, so
you probably won't get the best advice about this problem here.   But,
make sure you are using the latest tortoise -  if the issue is really
in the neon libraries, it looks like neon has been dropped in the 1.8
release:

http://subversion.apache.org/docs/release-notes/1.8.html#neon-deleted

-- 
   Les Mikesell
 lesmikes...@gmail.com


Re: AW: SVN Commit Failed For Data larger Than 2 GB [How To Resolve]

2014-10-01 Thread Les Mikesell
On Wed, Oct 1, 2014 at 11:27 AM, Mohsin mohsinchan...@gmail.com wrote:
 Thanks Dear,

Thanks - it would have been more clear if you had shown that svn
command line instead of the one that used the file:// protocol.

 Ignore my file:// protocol post - that was in some other context. I am using
 HTTP protocol for repository access.

First note that the svn command line client is available for windows
too, and it might be worth verifying that it can succeed in exactly
the same circumstances where tortoise fails.  As others have noted,
this mail list doesn't have much to do with the tortoise client, so
you probably won't get the best advice about this problem here.   But,
make sure you are using the latest tortoise -  if the issue is really
in the neon libraries, it looks like neon has been dropped in the 1.8
 release:

 How can I use svn command line for windows ? Can you tell me in this regard
 ?

The tortoise installer should offer to include a command line client
too, but it will probably be built with the same library as the GUI.
There are several other builds linked from:
https://subversion.apache.org/packages.html#windows
plus one from the Cygwin environment (which, being a more Linux-like
environment, may confuse your line endings if you aren't careful).

 I am using Tortoise svn client older version (1.6 or 1.7) may be due to this
 older version Tortoise svn client was not able to commit larger data. I'll
 upgrade my Tortoise svn client version to latest and try to commit data may
 be with latest version I will be able to commit data on windows machine too.
 I'll update you regarding this shortly.

As a general recommendation: I usually try to update free software
before fighting bugs that might already be fixed.

-- 
   Les Mikesell
 lesmikes...@gmail.com


ssh+svn vs. bash security bug?

2014-09-24 Thread Les Mikesell
Does the recently announced bash bug:
https://securityblog.redhat.com/2014/09/24/bash-specially-crafted-environment-variables-code-injection-attack/
affect the security of the way people generally configure svn+ssh access?
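Whether a given svn+ssh setup is exposed depends on whether anything in its path hands attacker-influenced strings to bash - for example a command= wrapper around svnserve in authorized_keys, executed through a bash login shell. The underlying bash bug itself can at least be probed locally with the widely circulated one-liner:

```shell
# Widely circulated local probe for the bash bug itself (CVE-2014-6271).
# A patched bash prints only "safe"; a vulnerable one also prints
# "vulnerable", because it executes the code smuggled in after a function
# definition in an environment variable.
env x='() { :;}; echo vulnerable' bash -c 'echo safe'
```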

-- 
   Les Mikesell
 lesmikes...@gmail.com


Re: Blocking root from SVN repository

2014-08-29 Thread Les Mikesell
On Thu, Aug 28, 2014 at 3:28 AM, Zé jose.pas...@gmx.com wrote:

 -Original Message- And I hate to repeat myself, but I'll
 repeat for the third time this question: if file:// is not intended
 to be used, then what are the available options for those who need
 a version control system and can't set up a server?

 Zé


 Does the file server support SSH?


 There is no file server.  This discussion is about local repositories on a
 local system (a workstation) managed and accessed by a single user.

And yet, there was the problem of accessing through the file
system under different user ids.

  By definition you have a server since the files are on it. Just run
  the svnserve daemon on it even if it is your workstation.



 This is the problem.  I doubt anyone who claims this is a reasonable
 approach has even considered the problem and thought about how the solution
 is simply unacceptable.

On the contrary, everyone arguing for using a server feels that
filesystem protections are inadequate.   If you don't care about that,
there is nothing inherent to subversion that makes file:// access a
problem.

 For example, picture the scenario where someone tries to pitch subversion to
 a version control newbie to use for such basic tasks such as track changes
 to a file tree present on his file system:

 newbie: this version tracking thing sounds neat.  how do I track this
 folder, then?

 svn supporter:  well, you start off by installing Apache and mod_dav_svn on
 your desktop, register a dedicated user account to run the server, and setup
 a subversion server.  Don't forget to read these books on the basics behind
 server management, or else you risk getting hit by a myriad of security
 problems...

For someone already using apache, that's trivial - just a module that
can co-exist with a myriad of others.   And if you aren't using
apache, svnserve is easy.  And all are packaged such that anyone
familiar with the operating system or installing any program can
do it easily.  More to the point, in any organization, only one person
has to set up the server and a large number of people only need the
client.

 Do you believe this is acceptable?  Even plain old rsync -a is a far better
 alternative than this.

If you just want backup copies of a few versions taken at random
points in time, there are lots of better solutions (I like backuppc
myself) but they all involve getting those copies onto a different
filesystem and stored such that the same hardware or software error or
user command that destroys the original won't take the backups with
it.

 Frankly, this approach makes no sense.  It makes much more sense and much
 more efficient to simply abandon subversion and migrate to pretty much any
 version control system.  I'm not aware of any other system who forces users
 to install, manage and run servers just to track changes made to a file.
 How is this acceptable?

Using file:// is no worse than any other mechanism that stores the
data so any user can corrupt it directly - but no better either.   If
that's what you want, subversion will let you do it.  However, it is
designed to do better and let you put your files under central control
with better management than the local filesystem can provide.   You
just have to decide what you want from it.   And you are right that
some of the version control systems designed for distributed use might
be better suited to having multiple copies sitting around in different
places in different states like you have to do if you just have files
with some backup copies somewhere - and you are willing to lose
versions if you have a problem with your local copy.   Subversion's
strength is in keeping one authoritative copy in a place where it can
be managed better than client's own filesystems.

-- 
   Les Mikesell
 lesmikes...@gmail.com


Re: Repository migrated via SVNSYNC is much smaller than one migrated using SVNADMIN DUMP/LOAD

2014-08-28 Thread Les Mikesell
On Thu, Aug 28, 2014 at 11:12 AM, Christopher Lamb
christopher.l...@ch.ibm.com wrote:

 4.4G/svn_repos/repo_loadedby_svnadmin/db/revs
 75M /svn_repos/repo_loadedby_svnsync/db/revs


 SVN LIST gives results for the repo loaded by SVNADMIN, but nothing for

 those loaded via SVNSYNC.

Either you made a typo in the source of the svnsync target (maybe
pointing at a path that doesn't exist) or the user you connect as does
not have permission to read the data.   Personally, I think the lack
of error messages in these data-losing scenarios is very dangerous -
you get no warning at all about these problems.

-- 
   Les Mikesell
  lesmikes...@gmail.com


Re: Blocking root from SVN repository

2014-08-27 Thread Les Mikesell
On Wed, Aug 27, 2014 at 6:36 AM, D'Arcy J.M. Cain da...@vex.net wrote:
 I have read the posts about trying to deal with an untrusted root.  I
 know that there is no point in even trying.  That's not my issue.  My
 issue is that sometimes I accidentally commit as root and files get
 changed to root ownership blocking normal access to the repository.
 All I want is something that recognizes root and prevents the commit.  I
 don't care if it is easily overcome by root as long as root can choose
 not to do so.  In other words, a warning would be good enough.


It's basically a bad idea to use file:// access at all for anything
that might be used under multiple user ids.  Maybe even for a single
user...  Svnserve or http(s) access have much better access control
and avoid the possibility of accidentally deleting or changing
ownership through filesystem access.   And it's also a bad idea to do
things carelessly as root - programs generally can't second-guess
that.

-- 
   Les Mikesell
 lesmikes...@gmail.com


Re: Blocking root from SVN repository

2014-08-27 Thread Les Mikesell
On Wed, Aug 27, 2014 at 9:34 AM, Zé jose.pas...@gmx.com wrote:
 On 08/27/2014 01:49 PM, Les Mikesell wrote:

  It's basically a bad idea to use file:// access at all for anything

 that might be used under multiple user ids.  Maybe even for a single
 user...


 Well, that sucks.  If file:// is not to be used then what are the available
 options to those who only need a local svn repository and don't have the
 option to set up a server?

It's not that you can't use it, just that it can't protect you from
the things that can happen through direct file system access.  Like
accidentally deleting the whole repo or changing ownership or
permissions.  If those things are important, then you (or if multiple
users are involved, someone in your organization) should set up a
server - it isn't difficult.
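For scale, the minimal svnserve setup referred to above is roughly: create the repository with svnadmin create /srv/svn/project (hypothetical path), start the daemon with svnserve -d -r /srv/svn, and put a few lines in the repository's conf/svnserve.conf. A sketch:

```ini
### /srv/svn/project/conf/svnserve.conf (hypothetical path) - minimal
### access control for svn://host/project. Comments must start the line.
[general]
### Refuse anonymous access entirely:
anon-access = none
### Authenticated users may read and write:
auth-access = write
### Usernames and passwords live in conf/passwd ([users] section):
password-db = passwd
realm = project
```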

-- 
   Les Mikesell
lesmikes...@gmail.com


Re: Blocking root from SVN repository

2014-08-27 Thread Les Mikesell
On Wed, Aug 27, 2014 at 10:08 AM, Zé jose.pas...@gmx.com wrote:
 On 08/27/2014 03:53 PM, Les Mikesell wrote:

 It's not that you can't use it, just that it can't protect you from
 the things that can happen through direct file system access.  Like
 accidentally deleting the whole repo or changing ownership or
 permissions.


 I don't see your point.  There's also a likelihood that those accidents can
 happen on a remote server.

Accidents can happen anywhere, but having files that are not writable
by ordinary users greatly reduces the possibility and having them on a
separate machine where only experienced administrators log in at shell
level even more so.

 But regarding my question, if file:// is not intended to be used, as you and
 Stefan Sperling argued, then what are the available options for those who
 need a version control system and can't set up a server?  Is it even
 possible to deploy subversion in that scenario?

There is nothing specific about subversion that is a problem with
file:// access.  It is just the nature of having direct write access
to anything that makes it a fragile situation. With svn:// or http://
access there is nothing a client can do to delete data or change
access for anyone else.   With file access it is as easy as typing
'rm -rf' in the wrong place, since you have to have write access to use
it at all.

-- 
   Les Mikesell
  lesmikes...@gmail.com


Re: Blocking root from SVN repository

2014-08-27 Thread Les Mikesell
On Wed, Aug 27, 2014 at 10:28 AM, Zé jose.pas...@gmx.com wrote:
 

 And I hate to repeat myself, but I'll repeat for the third time this
 question: if file:// is not intended to be used, then what are the available
 options for those who need a version control system and can't set up a
 server?

The answer isn't going to change, no matter how many times we repeat
it.  Subversion works with file:// access, but it can't protect you
from all the other ways that the filesystem allows write access and it
can't work that way without write access.  If that bothers you, set up
a server - or just keep good backups and realize that if you switch to
a backup repository copy that is not exactly in sync, you need to
check workspaces back out again.   If you really can't set up a
server you might be better off with one of the version control systems
that are intended to have distributed copies - and keep several,
updated frequently.   Subversion's real advantage is where you want
one centrally-managed and authoritative copy.   But I don't understand
why you can't set up a server when the advantages it provides seem
important to you.

-- 
Les Mikesell
   lesmikes...@gmail.com


Re: Subversion Windows Performance compared to Linux

2014-04-28 Thread Les Mikesell
On Fri, Apr 25, 2014 at 11:58 PM, Branko Čibej br...@wandisco.com wrote:
 
 Mostly read-only would be a pretty good description of mature
 project maintenance - which in my experience is where most developer
 time goes.


 You're confusing the contents of versioned files with working copy metadata.
 The latter is never mostly read-only; even a simple svn update that
 doesn't change any working file can modify lots of metadata, and this is
 where locking is involved.

Will the subversion performance issue affect local storage that is
exported via nfs or just the clients mounting it remotely?

-- 
   Les Mikesell
 lesmikes...@gmail.com


Re: Subversion Windows Performance compared to Linux

2014-04-28 Thread Les Mikesell
On Mon, Apr 28, 2014 at 10:23 AM, Branko Čibej br...@wandisco.com wrote:

 Mostly read-only would be a pretty good description of mature
 project maintenance - which in my experience is where most developer
 time goes.


 You're confusing the contents of versioned files with working copy metadata.
 The latter is never mostly read-only; even a simple svn update that
 doesn't change any working file can modify lots of metadata, and this is
 where locking is involved.

 Will the subversion performance issue affect local storage that is
 exported via nfs or just the clients mounting it remotely?


 It's not a Subversion performance issue, it's an NFS performance and
 correctness issue; let's not confuse issues here. :)

Call it what you will - I just want to be realistic about what to
expect from application performance in common environments.

 That said, simultaneous local and (NFS) remote access to the same working
 copy is an extremely bad idea; it makes triggering the NFS atomicity bug far
 more likely.

There's no concurrent access happening - just home directories where a
user will be working on one machine or another - which is mostly
transparent to normal applications.  Should there be a difference if
they work on the server hosting the exported partition or will it
still be slow due to locking?

-- 
   Les Mikesell
 lesmikes...@gmail.com


Re: Disjoint working copy

2014-01-31 Thread Les Mikesell
On Fri, Jan 31, 2014 at 9:18 AM, Ræstad Atle Eivind
atle.ras...@saabgroup.com wrote:
 Is there a way to get full status information for a disjoint working copy 
 without performing svn st on each sub working copy?

 A workaround on Linux with svn client 1.7 or newer is to use a command 
 similar to:
 find . -type d -name \.svn | sed 's/\.svn$//' | xargs svn st


Is that really easier than setting an external property?
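For those who do stick with the find workaround, a variant that is safe for paths containing spaces, assuming GNU findutils ('-printf %h' emits each .svn directory's parent, NUL-separated):

```shell
# Whitespace-safe variant of the disjoint-working-copy status workaround,
# assuming GNU findutils (-printf and NUL-separated xargs):
find . -type d -name .svn -printf '%h\0' | xargs -0 -n1 svn st
```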

-- 
  Les Mikesell
 lesmikes...@gmail.com


Re: Merging SVN repositories

2014-01-22 Thread Les Mikesell
On Wed, Jan 22, 2014 at 5:21 AM, Deepak Saraswat
deepak_saras...@hotmail.com wrote:
 Hi All,



 I need to do copy the content from one SVN repository to the other SVN
 repositories. Also, I need to make sure that Bidirectional merges are
 working after I copy content from one SVN to other.

 Does anybody know how to do that. I think we have a svnsync command to copy
 the contents from one svn to other. But I am not sure if this allows
 bidirectional merging or not or bidirectional merging is even possible or
 not.

You really can't keep 2 writable repositories in sync because
subversion needs to strictly serialize transactions and the
repository's global revision number.   Depending on what you are
doing, you might be able to set up a proxy that reads from the nearby
repository but writes to the master - with svnsync duplicating the
changes.   Or, Wandisco has a commercial tool to do this across a
larger number of locations.

Or, if groups at different locations normally work on different
components of your project(s), you might split them into different
repositories, using svn externals to pull everything together.

-- 
   Les Mikesell
 lesmikes...@gmail.com


Re: Update-Only Checkout Enhancement

2013-12-15 Thread Les Mikesell
On Fri, Dec 13, 2013 at 7:21 AM, Branko Čibej br...@wandisco.com wrote:
 On 12.12.2013 17:18, Les Mikesell wrote:
 On Thu, Dec 12, 2013 at 4:02 AM, Branko Čibej br...@wandisco.com wrote:
 Some things...  But not the things you really need to complete any
 amount of actual work - like updates and commits.
 You're forgetting diff. If you use Subversion daily, you've become so
 used to it being local that you can't appreciate how slow it would be
 without locally cached pristine copies.
 But (a) it is trivial

 Frankly, I'm a bit tired of people who have no idea what they're talking
 about telling us what's trivial and what isn't. If it's trivial, I'll be
 happy to take time to review the patch you produce in the next couple of
 days.

I meant it is trivial for a user to make his own snapshot copy of a
file/directory at any state if he thinks he is going to have some need
to diff against that particular state later without server-side
support.   That has nothing to do with coding or patching anything.
It just requires an assumption of plenty of local disk space to stash
things (which seems to have already been made).

-- 
   Les Mikesell
 lesmikes...@gmail.com


Re: Update-Only Checkout Enhancement

2013-12-12 Thread Les Mikesell
On Thu, Dec 12, 2013 at 4:02 AM, Branko Čibej br...@wandisco.com wrote:

 Some things...  But not the things you really need to complete any
 amount of actual work - like updates and commits.

 You're forgetting diff. If you use Subversion daily, you've become so
 used to it being local that you can't appreciate how slow it would be
 without locally cached pristine copies.

But (a) it is trivial to make your own snapshot copy of file versions
if you expect to need them (as you must anyway for any offline state
except 'as checked out' and 'completely current').   And (b), I'm as
likely to want to diff against a different branch or someone else's
subsequent commit as my own pristine version (and no, that isn't
unreasonably slow...).  Also, most of this discussion is related to
scenarios where active development is not happening - like the jenkins
build server example.

 Anyway, this thread has gone way off any kind of reasonable topic. The
 short answer to the original poster is that Subversion is not intended
 to be a replacement for rsync. You can take that as a hint as to what
 the solution to your problem might be.

But it _was_ intended to be a replacement for cvs, which has nowhere
near the client-side resource requirements.

-- 
   Les Mikesell
  lesmikes...@gmail.com


Re: Update-Only Checkout Enhancement

2013-12-11 Thread Les Mikesell
On Wed, Dec 11, 2013 at 11:16 AM, Bob Archer bob.arc...@amsi.com wrote:
 Yes, I understand the export function.  I want functionality for release
 management into test and production environments.

 For these environments I have a few requirements:
   Files in these environments will NEVER be edited
   For new releases I will need to perform an update to revision, which
 will add, update and delete needed files
   I want as small of a .svn directory as possible

 Why? Disk is cheap. Much cheaper than the time it would take to modify svn to 
 work the way you are requesting. Heck, I just bought a 1TB SSD for ~$500. The 
 spinning version was about $120.

Sometimes it is a good idea to distribute work out to clients and
sometimes you really want the client to just be a lightweight client
and make the server do the work of serving.   I've always liked the
minimal amount of data that CVS needs on the clients and sometimes
wished that svn could match that.

 I would recommend you write a script that does an export rather than using 
 the update feature. Although, this would probably mean the task would take 
 longer, since you don't have the benefit of the .svn pristines to know what 
 changed files need to be requested from the server. But, it sounds like you 
 care more about disk space than network traffic and/or time. Of course, that 
 is your decision to make.

Within reasonable limits it doesn't cost anything more to send more
network traffic.   But the cost of client disks scales up by the
number of clients.   Sometimes you can get by mounting a network disk
into all the clients, but then performance suffers, especially with
windows clients.

-- 
   Les Mikesell
 lesmikes...@gmail.com


Re: Update-Only Checkout Enhancement

2013-12-11 Thread Les Mikesell
On Wed, Dec 11, 2013 at 12:24 PM, Ben Reser b...@reser.org wrote:
 On 12/11/13 9:47 AM, Les Mikesell wrote:
 Within reasonable limits it doesn't cost anything more to send more
 network traffic.   But the cost of client disks scales up by the
 number of clients.   Sometimes you can get by mounting a network disk
 into all the clients, but then performance suffers, especially with
 windows clients.

 Network traffic has scaling costs just like storage space.

Not exactly.  Network traffic is generally bursty.  Clients rarely
spend 100% of their time checking out files, so a very large number
could share a local network even if they always deleted their
workspaces and checked out fresh copies.  But when storing the
pristine copies, they can't share anything - even if you map a shared
network volume you don't want to share the workspaces.

 If we'd made the
 decision to not store pristines and you had to go to the server for pristine
 copies then the discussion here would be reversed.  Someone would be asking 
 why
 we don't just store pristines and pointing out how disk space is cheap 
 compared
 to the cost of converting their entire network to have more capacity.

Sure - if you aren't local or the server is overloaded it is nice to
have the pristine copies.

  Can't make everyone happy all the time.

Well, that's why programs have options - all situations are not the same...

-- 
   Les Mikesell
  lesmikes...@gmail.com


Re: Update-Only Checkout Enhancement

2013-12-11 Thread Les Mikesell
On Wed, Dec 11, 2013 at 3:05 PM, Bob Archer bob.arc...@amsi.com wrote:

  Wouldn't that mean that you need to have some daemon service (or file
 watcher or something) running to determine if a file is modified?

 Yes.

Why would you need that in real-time instead of only when an svn
operation is done (possibly never...).

  Also, it would mean you would need a constant connection to the server to
 use a subversion working copy.

That's hardly a problem these days,

 Not necessarily; we don't need a pristine copy to check if a file is 
 modified, or
 if it's out of date WRT the repository. But the former problem (requiring a
 daemon) is already a non-starter.

 Right, but if a file is modified you would need to contact the repository to 
 get the pristine because you are going to get an event after the file is 
 modified. There may be some transactional file systems that allow you to get 
 an event before the modification is committed to the file system so you can 
 access the original copy, but I think they are few and far between.

Assuming you have metadata to know the revision of the file, what
possible scenario can there be that you could not get a copy of that
revision back from the server if you happen to need it?   Isn't that
why you put it there in the first place?  Or if you were headed this
route, wouldn't it be better to send the new copy to the server and
let it do the server work?   Unless your pending operation is a
revert, in which case you would want that copy from the server.

-- 
   Les Mikesell
  lesmikes...@gmail.com


Re: Update-Only Checkout Enhancement

2013-12-11 Thread Les Mikesell
On Wed, Dec 11, 2013 at 8:26 PM, Ben Reser b...@reser.org wrote:

 Absolutely, the answer here isn't a one size fits all.  Nobody is objecting to
 the idea of allowing this.  The problem is that the code is not designed to
 allow this and it's a ton of work to change that.  I can think of a good 10
 other things that would require the same level of effort that would improve
 things for a lot more people

 Once you no longer need a pristine there are a lot of potential scenarios that
 require different behaviors.  So it's not even a simple matter of just 
 changing
 the working copy library to support pristines not existing.  It becomes
 thinking about how to deal with these scenarios (not that all of them need to
 be implemented immediately, but you probably want to not pigeon hole yourself
 into an implementation that doesn't support them).

I guess I don't understand why it couldn't be as simple as having the
library get a pristine copy on demand if some operation needs it.
Isn't there already a recovery procedure for a missing pristine file?
 And then make saving it optional.

As a case in point, consider the accumulation of cruft on a typical
jenkins build server where a large set of projects are built and
rarely removed -  you have to allow much more disk space to each build
slave to accommodate the pristine files that don't have a whole lot of
use.

-- 
   Les Mikesell
 lesmikes...@gmail.com


Re: Update-Only Checkout Enhancement

2013-12-11 Thread Les Mikesell
On Wed, Dec 11, 2013 at 11:01 PM, Ryan Schmidt
subversion-20...@ryandesign.com wrote:

 On Dec 11, 2013, at 19:19, Les Mikesell wrote:

 Also, it would mean you would need a constant connection to the server to
 use a subversion working copy.

 That's hardly a problem these days,

 You apparently don’t try to work at the kinds of coffee shops I go to, where 
 50 college students are all watching youtube videos and making the network 
 unbearable and Subversion’s ability to do some things offline very valuable 
 to me.

Some things...  But not the things you really need to complete any
amount of actual work - like updates and commits.

-- 
   Les Mikesell
  lesmikes...@gmail.com


Re: SQLite appears to be compiled without large file support

2013-12-05 Thread Les Mikesell
On Thu, Dec 5, 2013 at 11:07 AM, Adam Daughterson
adam.daughter...@dothill.com wrote:

 Checkouts on the local disk do work, and checkouts to Samba shares (not
 Windows) work as well.  I've only found operations on WC living on a Windows
 share to not work.

This is starting to sound like one of those "if it hurts, don't do it"
things.  But maybe it's a version-specific samba bug.

-- 
   Les Mikesell
 lesmikes...@gmail.com


Re: SQLite appears to be compiled without large file support

2013-12-05 Thread Les Mikesell
On Thu, Dec 5, 2013 at 11:32 AM, Adam Daughterson
adam.daughter...@dothill.com wrote:

 Checkouts on the local disk do work, and checkouts to Samba shares (not
 Windows) work as well.  I've only found operations on WC living on a
 Windows
 share to not work.

 This is starting to sound like one of those "if it hurts, don't do it"
 things.  But maybe it's a version-specific samba bug.

 Personally, I blame Windows.  That's not trite at all, is it? ;)

I blame Microsoft for not opening the file share protocol long ago so
samba could interoperate better, but I don't think this specific
problem is on the server side.

 I am all for not doing it, but the rest of the organization will
 undoubtedly be difficult to convince.

It is an odd scenario.  Doing it the other way around is probably more
common - but I thought Microsoft had a reasonable NFS these days that
MS-centric admins might be able to set up.

Anyway google turned up a few hits on possibly similar samba issues.
Does adding ,nounix,noserverinfo to the mount options help?
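For reference, a sketch of where those options would go (server, share, and mount point are hypothetical; this assumes the working copy lives on a cifs mount from the Windows box):

```shell
# Remount the share carrying the working copy with the suggested
# options, then retry the svn operation that failed.
umount /mnt/projects
mount -t cifs //winserver/projects /mnt/projects \
    -o username=builduser,nounix,noserverinfo
```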

-- 
   Les Mikesell
 lesmikes...@gmail.com


Re: Unable to open repository following reboot

2013-12-04 Thread Les Mikesell
On Wed, Dec 4, 2013 at 4:06 PM, Pat Haley pha...@mit.edu wrote:

 svn: Unable to open repository
 'file:///home/phaley/Papers/2011/ArpitVel/SvnPaper'
 svn: disk I/O error
 svn: disk I/O error

 Sorry that I forgot to mention it in my original Email,
 but yes I can look in the directory, and I seem to
 see everything I expected

 % ls -al /home/phaley/Papers/2011/ArpitVel/SvnPaper

How about 'svnadmin verify /home/phaley/Papers/2011/ArpitVel/SvnPaper'?

Also, does the NAS physically holding the disks have any logging
facility where you could see physical disk errors reported?

-- 
   Les Mikesell
 lesmikes...@gmail.com


Re: Linux svn clients 1.7 have issues operating on WC which physically lives on Windows shares

2013-12-04 Thread Les Mikesell
On Wed, Dec 4, 2013 at 4:05 PM, Ben Reser b...@reser.org wrote:
 On 12/4/13 12:59 PM, Adam Daughterson wrote:
 Prior to upgrading to WanDisco svn client 1.7.x I was able to operate on
 working copies which physically live on Windows shares.  After the upgrade, I
 get the following error when trying to do a fresh checkout:

 me@here:tmp$ svn co http://myThing/trunk myThing
 *svn: E200030: sqlite[S22]: large file support is disabled*

 I've verified that I can perform this same operation on a Linux samba share
 with the new clients, and nothing has changed on the Windows side of things.

 Anyone have any idea as to WTH this is all about?

 Subversion 1.7 moved from having .svn directories in each directory and flat
 file entries files to maintain the state of the working copy to using a single
 .svn directory at the root of the working copy and a sqlite database.

 The error you're getting is because the working copy database file
 (myThing/.svn/wc.db in your case) is larger than 2GB.  Your Windows server may
 not support files over 2GB.

Or if the filesystem is on VFAT, you might be hitting the 4GB file
size limit.

 All that said, if you have that large of a working copy database, I doubt
 you're going to have very good performance using the working copy over a
 network share.

Not to mention what happens if you have linux filenames that differ
only in case.

-- 
   Les Mikesell
 lesmikes...@gmail.com


Re: Unable to open repository following reboot

2013-12-04 Thread Les Mikesell
On Wed, Dec 4, 2013 at 4:31 PM, Pat Haley pha...@mit.edu wrote:

 How about 'svnadmin verify /home/phaley/Papers/2011/ArpitVel/SvnPaper'?


 No, that also fails

 mseas(MeanAvg)% svnadmin verify /home/phaley/Papers/2011/ArpitVel/SvnPaper
 svnadmin: disk I/O error
 svnadmin: disk I/O error

So, I'm guessing it's a disk error...

 Also, does the NAS physically holding the disks have any logging
 facility where you could see physical disk errors reported?


 I looked in /var/log, but none of the files had any
 messages at the times I did the above tests.

If it is a generic linux box, 'dmesg' might show it.  If it is some
dedicated file server you'll have to find its own diagnostic
procedures.

-- 
  Les Mikesell
lesmikes...@gmail.com


Re: Unable to open repository following reboot

2013-12-04 Thread Les Mikesell
On Wed, Dec 4, 2013 at 7:09 PM, Pat Haley pha...@mit.edu wrote:

 One thing that didn't stand out in my original Email was the reason
 for the reboot.  We turned quotas on.  Would svn react poorly
 to this?

Only on a write that exceeds quota.

 It is a generic linux box.  However doing dmesg before and
 after generating an svn error doesn't show anything new.

Can you run the svnadmin verify on the box with the drives?   Maybe it
is just a client/network/mount issue and the things that worked (ls,
etc.) were running from cache.  Can you unmount/mount or reboot a
client?

-- 
   Les Mikesell
 lesmikes...@gmail.com


Re: Branch/switch/merge question

2013-11-28 Thread Les Mikesell
On Thu, Nov 28, 2013 at 8:51 AM, Edward Ned Harvey (svn4)
s...@nedharvey.com wrote:

 But I prefer to do this:
 svn co --depth=immediates $URL
 svn update --set-depth infinity project/trunk
 svn update --set-depth immediates project/branches
 svn update --set-depth infinity project/branches/eharvey

 Because in the latter sparse checkout, all the directories retain some 
 context about the directories around them, and I can issue a single svn 
 update at the top level in order to update both, and I won't get confused 
 about the path relation between two independent checkouts, while I'm browsing 
 around my local filesystem, and stuff like that.

 But as far as branching/merging is concerned, it's functionally equivalent.

We just think the opposite way about the relationship.  My approach is
that the checked out copies are completely independent things and any
relationship that might exist is best maintained by separate
commit/update cycles and eventual merges - just as it would be if
different people were working on the separate copies.   What commit
log message would ever be appropriate if you commit to both the trunk
and branch through an upper level directory that ties them together?

-- 
   Les Mikesell
 lesmikes...@gmail.com


Re: Branch/switch/merge question

2013-11-27 Thread Les Mikesell
On Tue, Nov 26, 2013 at 8:02 PM, Edward Ned Harvey (svn4)
s...@nedharvey.com wrote:


 At first, I was doing a sparse checkout.  I non-recursively checked out /, 
 and then I made /trunk fully recursive, and then I went one level deeper into 
 /branches, and then I made /branches/eharvey fully recursive...  And then I 
 discovered how natural it was to switch & merge, so I got rid of my whole 
 working copy, and re-checked out recursively (non-sparse) /trunk.  But it 
 sounds like you suggest going back to the non-recursive checkout of /, with 
 recursive /trunk, and recursive /branches/eharvey.

Not sure what you mean about sparse and recursive checkouts or
why you'd start with /.   If there is one project in the repository
you would normally just check out /trunk.  Or with multiple projects,
/project_name/trunk.

 Just to keep the branch & trunk logically separate from each other and 
 eliminate any user error regarding Which what, oh, where am I?  I forget... 
   You might be right...  and if I say so to the other guys, they might use 
 this to bludgeon me into git.   ;-)

You can have multiple checkouts that don't really affect each other,
so yes sometimes it is simpler to just have a parallel checkout of a
branch unless there are bandwidth or disk space constraints that make
switching more efficient.

If your project is large enough to split out libraries that can be
developed separately, consider moving them out to project-level
directories or separate repositories, and assembling the components
you want in the main project with svn externals.   Then each component
can have its own release scheduling (make tags, then when a project
using the component wants to update, change the external to the newer
tag).  This lets your team work on different parts without too much
impact on each other while still being able to build the whole thing
easily.
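A sketch of that arrangement, with a hypothetical repository layout and tag names:

```shell
# In the consuming project, pin a shared component at a released tag:
svn propset svn:externals 'libcore  ^/components/libcore/tags/1.4' .
svn commit -m "Pull in libcore 1.4 via external"

# When this project is ready for a newer release of the component,
# repoint the external at the newer tag and commit again:
svn propset svn:externals 'libcore  ^/components/libcore/tags/1.5' .
svn commit -m "Move to libcore 1.5"
```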

-- 
Les Mikesell
  lesmikes...@gmail.com


Re: Svndumpfilter

2013-10-30 Thread Les Mikesell
On Wed, Oct 30, 2013 at 8:15 AM, Thorsten Schöning
tschoen...@am-soft.de wrote:
 Guten Tag Somashekarappa, Anup (CWM-NR),
 am Mittwoch, 30. Oktober 2013 um 13:43 schrieben Sie:

 May I know how to include the '/GTS/ENUMS' part?

 Please always respond to the list. I'm not sure but you may give more
 than one path separated by spaces, like /GTS /GTS/ENUMS.


Is there any way to actually clean things up and make the
dumped/filtered/loaded version start at a path after a move while
preserving the history of changes other than the top-level move?  For
example, we have some projects that were initially converted from cvs
and loaded in one place, then (probably with no other changes) moved
to a different place in the tree of projects.  If I ever wanted to
extract individual projects into their own repository, would there be
a way to keep the whole history of changes within the project but lose
the old paths?   Coming from a cvs background, the idea that the
location of the top level of a project permanently becomes part of its
history was, ummm, surprising.
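For the basic extraction (which does not solve the renamed-path problem), the usual sequence looks something like this, using the paths from the thread above and hypothetical repository locations:

```shell
# Dump the whole repository, keep only the wanted subtree(s),
# and load the result into a fresh repository.
svnadmin dump /var/svn/bigrepo > bigrepo.dump
svndumpfilter include /GTS /GTS/ENUMS \
    --drop-empty-revs --renumber-revs < bigrepo.dump > gts.dump
svnadmin create /var/svn/gts
svnadmin load /var/svn/gts < gts.dump
```

Note that history recorded before a move still references the old paths, which is exactly the limitation being asked about here.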

-- 
   Les Mikesell
 lesmikes...@gmail.com


Re: Copy changes from one branch to another

2013-09-30 Thread Les Mikesell
On Mon, Sep 30, 2013 at 12:13 PM, Bob Archer bob.arc...@amsi.com wrote:

 But that has the effect that I will have all the changes from trunk in 
 branch A,
 which is not what I want. I only want some certain changes inside there, the
 changes committed from our team.


 Common stuff should be brought in with Externals rather than a merge. That 
 may be a better use case considering what you are asking for.

Yes, if different teams are working on different components, using
externals is the way to go even if it takes some effort to arrange
things into neat subdirectories for easy reference.  Besides giving
you a way to pull in the changes cleanly, it also gives you a way to
version those points separately and each component can have its own
release/tagging process and the consuming projects can adjust their
references when they are ready.

-- 
Les Mikesell
   lesmikes...@gmail.com


samba performance with svn?

2013-09-24 Thread Les Mikesell
I think this has been discussed here previously but I don't remember
any definitive conclusions so I'll ask again.   We now have some
Windows users with samba-mounted disk space and large svn checkouts
and commits are very slow compared to local disk access.   Are there
any samba server settings that would be likely to help with the speed?

-- 
Les Mikesell
 lesmikes...@gmail.com


Re: Shared branch vs single branch

2013-09-23 Thread Les Mikesell
On Mon, Sep 23, 2013 at 1:50 PM, Bob Archer bob.arc...@amsi.com wrote:
 It really depends. I think all work for a specific release should be done in 
 a single branch/folder. Many people follow the stable trunk model. In this 
 model you generally do all work on trunk and then branch for a release. This 
 is the same model svn itself is developed under. In this model you would 
 also use what are called feature branches. This is generally for a 
 feature/use case that will take more than a day to complete or will be 
 worked on by more than one developer.

 Once again, it's up to the people not the tool to ensure your release 
 management is done properly.

Well, sort-of.   It is always a good idea to (a) include tests for new
code and (b) have a workflow that ensures that the tests are run and
that someone checks the results.   Expecting one person to never make
a mistake just doesn't always work out.

A general rule can't cover all cases, but in general I think the
longer you let branches diverge with isolated work, the more likely
they are to have conflicting changes that will take extra work to
resolve when you finally do merge.

-- 
  Les Mikesell
lesmikes...@gmail.com


Re: Shared branch vs single branch

2013-09-23 Thread Les Mikesell
On Mon, Sep 23, 2013 at 2:35 PM, Bob Archer bob.arc...@amsi.com wrote:
 On Mon, Sep 23, 2013 at 1:50 PM, Bob Archer bob.arc...@amsi.com wrote:
  It really depends. I think all work for a specific release should be done 
  in a
 single branch/folder. Many people follow the stable trunk model. In this 
 model
 you generally do all work on trunk and then branch for a release. This is the
 same model svn itself is developed under. In this model you would also use
 what are called feature branches. This is generally for a feature/use case 
 that
 will take more than a day to complete or will be worked on by more than one
 developer.
 
  Once again, it's up to the people not the tool to ensure your release
 management is done properly.

 Well, sort-of.   It is always a good idea to (a) include tests for new
 code and (b) have a workflow that ensures that the tests are run and
 that someone checks the results.   Expecting one person to never make
 a mistake just doesn't always work out.

 Isn't it up to the people to put those processes in place? To create the 
 correct workflow? To write the automation?

 I don't think I ever said it should be ONE person's responsibility to 
 manually do this work. Where did I say that?

You didn't explicitly say it was one person's fault, but what you said
could easily be interpreted that way by anyone who had to ask the
question in the first place.   Yes, people have to set things up,
but there are tools that can help.

-- 
   Les Mikesell
 lesmikes...@gmail.com


Re: Push ?

2013-09-17 Thread Les Mikesell
On Tue, Sep 17, 2013 at 7:11 AM, Nico Kadel-Garcia nka...@gmail.com wrote:

 There is always the trick of ssh-ing a command from inside the
 firewall to the DMZ box that (a) sets up port-forwarding and (b) runs
 the svn command as though the repo is on localhost.  Technically, and
 from the firewall's point of view, the connection is established
 outbound.


 This is also a firing offense in many environments.

Yes, I can understand institutions and security policies that blindly
outlaw tunnels, but note that in this case it goes the 'right'
direction - that is, the control and connection come from the 'more
secure' side and the tunnel is just because the program that needs to
run won't make its own connection in the direction you need.

 I once had a chief
 developer, with various root SSH key access, running just such tunnels to
 and from his home machine, tunnels that I happened to notice. He was also
 using non-passphrase protected SSH keys, and had *built* the previous
 version of Subversion in use at that company. Given the secure data he had
access to this way, from offsite, it caused a serious scandal behind closed
 doors, (And I replaced that Subversion with a source controlled one, owned
 by root, instead of the one owned by him individually!)

First, it is kind of foolish to assume that anyone with an
unrestricted ssh login doesn't have complete access to all the data
that account can read (or reach from either side of the connection),
but also note that this is the opposite case, where the connection
origin and tunnel destination are on the 'less secure' side and the
controlling keys are also outside.

-- 
   Les Mikesell
 lesmikes...@gmail.com


Re: Push ?

2013-09-16 Thread Les Mikesell
On Mon, Sep 16, 2013 at 2:53 PM, Dan White d_e_wh...@icloud.com wrote:
 The described solution is one we already use within our network space, but
 Security will not allow a connection from DMZ to the internal SVN server.
 It violates the whole purpose of having a DMZ in the first place.


There is always the trick of ssh-ing a command from inside the
firewall to the DMZ box that (a) sets up port-forwarding and (b) runs
the svn command as though the repo is on localhost.  Technically, and
from the firewall's point of view, the connection is established
outbound.
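A sketch of that trick (host names hypothetical): run it from inside the firewall, so the only TCP connection crossing into the DMZ is outbound, while the reverse forward makes the internal repository reachable as localhost on the DMZ box:

```shell
# Outbound ssh from the internal network to the DMZ host; the -R
# reverse forward lets the DMZ side reach the internal svnserve
# as svn://localhost/ for the duration of the command.
ssh -R 3690:svn.internal.example.com:3690 deploy@dmz-host \
    'svn export --force svn://localhost/repos/site/trunk /var/www/site'
```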

-- 
   Les Mikesell
 lesmikes...@gmail.com


Re: Breaking up a monolithic repository

2013-09-12 Thread Les Mikesell
On Wed, Sep 11, 2013 at 10:49 PM, Nico Kadel-Garcia nka...@gmail.com wrote:
 Les, disk space isn't the issue for the empty revs. It's any operations that
 try to scan or assemble information from the revisions. 5000 empty objects
 is still a logistical burden, especially if assembling any kind of change
 history for the new repository.

I don't see how that imposes a bigger computational burden than the
same number of unrelated revisions did in the combined repo. - which
typically is not a problem.  We are at rev 186767 on a large
multi-project repo which, although I wish it had been created as
separate repos for easier future maintenance, does not have serious
performance issues.

 And since the new repositories are
 effectively a rebase of a subset of the code, you don't normally *gain*
 anything from having empty revisions for code that is in the other new
 repositories. You can't meaninglfully merge content between the new smaller
 repositories and the old repo, barring some seriously weird cases, so it's
 safer to treat them as completely distinct and not bother to preserve all
 the empty revisions.

 "The revision numbers are stored in support tickets" is the only reason I
 can think of to keep them.

Or pegged externals if they stay in the same relative location.  Or
any email, documentation or recorded discussion referring to the
changes in a revision.   My point is that any change that requires new
training or human intervention to fix something is never going to win
back that time.   Someone who completely understands the current
process and user base might be able to optimize and improve it with
drastic changes, but that seems unlikely if they are asking for advice
on a mail list.

-- 
   Les Mikesell
lesmikes...@gmail.com


Re: Breaking up a monolithic repository

2013-09-10 Thread Les Mikesell
On Tue, Sep 10, 2013 at 6:22 AM, Nico Kadel-Garcia nka...@gmail.com wrote:

 Even if the history is considered sacrosanct (and this is often a
 theological policy, not an engineering one!), an opportunity to reduce the
 size of each repository by discarding deadwood at switchover time should be
 taken seriously.

Those empty revs take what, a couple of dollars worth of disk space
(OK, x3 or 4 for backups...), vs. how much human time will it take to
make everyone involved understand that you use one procedure for
revisions before a certain date, and a different one after, and to get
diffs between them you have to either check out both copies and use
local tools or map the rev number from your old reference to the new
numbering scheme?   And then there are likely to be pegged externals
to pull in components that you'll have to fix even if they stay within
the same project repo and use relative notation.   I'd call not
unnecessarily changing the history you use a version control system to
preserve to be 'philosophically correct'  as opposed to a theological
requirement.  If your engineering choices were always right the first
time, you probably wouldn't have all these revisions in the first
place.

-- 
   Les Mikesell
  lesmikes...@gmail.com


Re: Breaking up a monolithic repository

2013-09-10 Thread Les Mikesell
On Tue, Sep 10, 2013 at 4:36 PM, Bob Archer bob.arc...@amsi.com wrote:

 Also part of the reason to split up the  repos is to make access
 control easier, and it looks bad if Alice (who  should have access to
 project 1 but not project 2) can see Bob's old  commit metadata to
 project 2, even if she can't see the commit bodies  after the split.
 
  How does this work now in the combined repository?

 Right now, they don't have it with the combined repo.  Anyone in the svn 
 group
 can read everything.  (This is one of the reasons they want to break up the
 single repo into per-project repos.)

 You should knock the reason off the list. You can set up path based 
 authorization fairly easily. (especially compared to breaking it up into 
 multiple repos.)


Unless you already have a central authentication source you'll have a
certain tradeoff in complexity between maintaining password control
for multiple repos vs. path-based control in a single one and if there
are external references where different groups use each others'
libraries it can be a little messy either way.
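For comparison, the path-based alternative in a single repository is just an authz file along these lines (users, groups, and paths hypothetical):

```ini
[groups]
proj1 = alice
proj2 = bob

[/project1]
@proj1 = rw
* =

[/project2]
@proj2 = rw
* =
```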

-- 
   Les Mikesell
lesmikes...@gmail.com


Re: Breaking up a monolithic repository

2013-09-09 Thread Les Mikesell
On Sun, Sep 8, 2013 at 8:13 PM, Trent W. Buck trentb...@gmail.com wrote:

 I'm stuck.  Since it's no fun to have tens of thousands of empty revs
 in each project repo, my current approach is to leave existing
 projects in the monolithic repo, and new projects get separate repos.


Why do you think an empty rev will bother anyone any more in a
per-project rev that having the rev number jump from a commit to an
unrelated project does in the combined repo?   It shouldn't be a
problem in either case.  Rev numbers for any particular use don't need
to be sequential, you just need to know what they are.

-- 
   Les Mikesell
 lesmikes...@gmail.com


Re: Breaking up a monolithic repository

2013-09-09 Thread Les Mikesell
On Mon, Sep 9, 2013 at 8:03 AM, Grierson, David
david.grier...@bskyb.com wrote:
 I can see Trent's view point that people are weird and get freaked out by the 
 unexpected (where they might expect the revision numbers to be relatively 
 low).


I could see that for someone who had never used subversion before and
did not understand the concept of global revision numbers, but not for
anyone who has used a multi-project repository.

 I guess what we should be providing him are points like you do make to help 
 him sell why this isn't an issue to the end users.

 Like Les says, if someone performs a large batch of commits to a particular 
 branch then the trunk revision numbers are going to leap forward 
 (unexpectedly). So what to sell those folks concerned about it is that 
 they're experiencing this already.

Revision numbers aren't something you guess at or expect anything
from.  They are only useful in terms of the repository history, and it
doesn't matter if your project runs sequentially or not.   If you want
names/numbers that make human sense, you'll be copying to tags for
easier reference anyway.
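Creating such a reference point is a single cheap server-side copy (tag name hypothetical):

```shell
# Copies are cheap, constant-time operations on the server; the tag
# name is the human-friendly reference, independent of rev numbers.
svn copy ^/trunk ^/tags/release-1.2 \
    -m "Tag release 1.2 from current trunk"
```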

-- 
Les Mikesell
  lesmikes...@gmail.com


Re: Breaking up a monolithic repository

2013-09-09 Thread Les Mikesell
On Mon, Sep 9, 2013 at 7:23 PM, Trent W. Buck trentb...@gmail.com wrote:
 Ryan Schmidt subversion-20...@ryandesign.com writes:

 As someone used to Subversion's usually sequential revision numbers,
 that bugs me aesthetically, but it works fine.

 I think that's the crux of it.

Have you checked if the users have/need anything (emails, ticket
system, etc.) that refer to specific revisions or the history of
changes made there?   It seems kind of drastic to throw that away
because you think the numbers aren't pretty enough.

 Also part of the reason to split up the
 repos is to make access control easier, and it looks bad if Alice (who
 should have access to project 1 but not project 2) can see Bob's old
 commit metadata to project 2, even if she can't see the commit bodies
 after the split.

How does this work now in the combined repository?

-- 
   Les Mikesell
  lesmikes...@gmail.com


Re: Switching

2013-08-24 Thread Les Mikesell
On Sat, Aug 24, 2013 at 2:48 AM, Branko Čibej br...@wandisco.com wrote:
 On 24.08.2013 03:44, Ryan Schmidt wrote:
 On Aug 23, 2013, at 13:31, Les Mikesell wrote:

 On Fri, Aug 23, 2013 at 1:09 PM, Edwin Castro wrote:

 I can't, off the top of my head, think of a scenario where it would be
 harmful to replace an unversioned directory with a versioned instance,
 leaving any unversioned local files that happen to be there alone.
 Leaving unversioned local files alone in a directory is not the problem.
 I think it is the problem we've been discussing.  Leaving them means
 you have to keep the containing directory, which becomes unversioned
 as you switch away from the branch having it,
 Correct.

 and then a conflict when
 you switch back.
 *This* is the problem we're discussing. *This* is what Subversion should be 
 smart enough to avoid. None of the discussion I've read thus far gives me a 
 convincing explanation for why this should not be possible.

 You're assuming it is correct, in all cases, to silently make a
 directory versioned because the incoming directory happens to have the
 same name. It is not. It may be marginally correct in your case,
 however, Subversion has no way of knowing that the unversioned directory
 it sees is in any way related to whatever is on the switched branch. It
 needs user input; it cannot magically become smart enough.

 For example, consider a typical Java module which has build.xml file
 and two directories, src and test. You add such a module called A
 on the branch. Someone else creates a completely different and unrelated
 module in their working copy, incidentally also calling it A. Then
 they switch to the branch. What happens?

 You're proposing that Subversion would say, Oh, this unversioned thing
 I have here is also called A, I'm going to assume it's the same as the
 incoming directory, let's make it so. And in the next step: Oh, I have
 an unversioned file called build.xml, I'll just assume it's the same as
 the incoming and merge changes in boom, instant merge conflict.

 It actually gets worse, because following your proposal, Subversion will
 happily recurse in the same way into src and test -- the final result
 being an unholy mess that you're going to have a fine time untangling,
 not to mention that you just messed up the poor user's unversioned local
 changes.

 And of course, all of the above is not specific to switch -- but also to
 update, when there are no branches involved.

 So, yeah, it ain't gonna happen. You /do/ have the option to use
 --force, but there's a reason why that isn't the default.

 -- Brane

 --
 Branko Čibej | Director of Subversion
 WANdisco // Non-Stop Data
 e. br...@wandisco.com


Re: Switching

2013-08-24 Thread Les Mikesell
On Sat, Aug 24, 2013 at 6:51 AM, Ryan Schmidt
subversion-20...@ryandesign.com wrote:

 *This* is the problem we're discussing. *This* is what Subversion should be 
 smart enough to avoid. None of the discussion I've read thus far gives me a 
 convincing explanation for why this should not be possible.

 You're assuming it is correct, in all cases, to silently make a
 directory versioned because the incoming directory happens to have the
 same name.

I'm not sure anyone proposed silently assuming anything.  I'd suggest
adding a way to tell it to overwrite unversioned directories and files
with identical contents without also telling it to overwrite files
with differing content.

 It is not. It may be marginally correct in your case,
 however, Subversion has no way of knowing that the unversioned directory
 it sees is in any way related to whatever is on the switched branch. It
 needs user input; it cannot magically become smart enough.

Don't forget that it was subversion, not the user, that created the
directory and abandoned it in the first place.   Yes, some of the
other tools the user uses may be equally dumb about leaving cruft
behind and confusing svn, but the user may not know the internals of
all the tools.

 If, as you said below, this shouldn't happen generally, then one way to make 
 Subversion smart enough would be to have it remember when it converted a 
 directory from versioned to unversioned due to a switch, so that it can then 
 seamlessly transform it back if the user switches back.


 For example, consider a typical Java module which has build.xml file
 and two directories, src and test. You add such a module called A
 on the branch. Someone else creates a completely different and unrelated
 module in their working copy, incidentally also calling it A. Then
 they switch to the branch. What happens?

 You're proposing that Subversion would say, Oh, this unversioned thing
 I have here is also called A, I'm going to assume its the same as the
 incoming directory, let's make it so. And in the next step: Oh, I have
 an unversioned file called build.xml, I'll just assume it's the same as
 the incoming and merge changes in boom, instant merge conflict.

 Certainly if there are *file* conflicts then Subversion should complain 
 loudly and not do anything automatically, however that was not the scenario 
 the person who opened this thread posed.

I'd propose that file conflicts aren't really conflicts if the
contents are identical.   And that there should be a way to tell
subversion to go ahead and force overwrites of directories and
identical file contents without, at the same time, telling it to
clobber files with differing data.
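A minimal sketch of the "identical contents are not a conflict" rule proposed
above, using plain files in a temp directory to stand in for the unversioned
local file and the incoming versioned one (paths and contents invented):

```shell
# Simulate an unversioned local file and the incoming versioned file.
work=$(mktemp -d)
printf 'same content\n' > "$work/local.txt"     # unversioned file in the WC
printf 'same content\n' > "$work/incoming.txt"  # what the switch would create

# Only clobber the local file when the bytes already match.
if cmp -s "$work/local.txt" "$work/incoming.txt"; then
    echo "identical: safe to replace"
else
    echo "differs: flag a conflict"
fi
rm -rf "$work"
```

A real implementation inside svn would compare checksums against the pristine
copy rather than shelling out to `cmp`, but the decision rule is the same.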

-- 
   Les Mikesell
 lesmikes...@gmail.com


Re: Switching

2013-08-24 Thread Les Mikesell
On Sat, Aug 24, 2013 at 2:04 PM, Stefan Sperling s...@elego.de wrote:
 On Sat, Aug 24, 2013 at 10:22:41AM -0500, Les Mikesell wrote:
 Don't forget that it was subversion, not the user, that created the
 directory and abandoned it in the first place.

 If a previously versioned directory is left behind unversioned, that
 means there are unversioned (aka obstructing) nodes within the
 directory, such as files created during a build. Those files could
 not have been created by svn.

Sure, other tools besides subversion do some unintuitive things...

 I hope that we will eventually extend tree conflict handling to the
 point where it makes these kinds of situations trivial to resolve,
 even for novice users. svn should interactively offer a set of
 reasonable courses of action, such as removing the unversioned nodes,
 or moving them to a special lost+found area, or something else that
 allows incoming versioned nodes to be created in their place.

Yes, but it would be nice if one of the choices were to overwrite all
unversioned directories and files with identical contents (but not
files with different contents) - perhaps optionally listing any
unversioned non-conflicting files that happen to be there.

-- 
   Les Mikesell
 lesmikes...@gmail.com


Re: Switching

2013-08-23 Thread Les Mikesell
On Fri, Aug 23, 2013 at 11:17 AM, Edwin Castro 0ptikgh...@gmx.us wrote:

 I don't buy the argument about different histories: the pre-existing
 directory doesn't have a subversion history, so from svn's point of
 view there is no conflict.  What are the real, practical problems that
 you know of or foresee with svn swich --force?


 When objects do not have history, then subversion is in the position to
 try to decide what to do with content that already exists on the
 filesystem.

I can't, off the top of my head, think of a scenario where it would be
harmful to replace an unversioned directory with a versioned instance,
leaving any unversioned local files that happen to be there alone.
Other than maybe the chance that you'd accidentally commit them later,
but that is no different than if you had put the local files in after
the switch.  Am I missing something?   Is there a way to --force that
without also potentially --force'ing files that conflict to be
clobbered?

-- 
   Les Mikesell
 lesmikes...@gmail.com


Re: Switching

2013-08-23 Thread Les Mikesell
On Fri, Aug 23, 2013 at 1:09 PM, Edwin Castro 0ptikgh...@gmx.us wrote:

 I can't, off the top of my head, think of a scenario where it would be
 harmful to replace an unversioned directory with a versioned instance,
 leaving any unversioned local files that happen to be there alone.

 Leaving unversioned local files alone in a directory is not the problem.

I think it is the problem we've been discussing.  Leaving them means
you have to keep the containing directory, which becomes unversioned
as you switch away from the branch having it, and then a conflict when
you switch back.

 I've personally run into scenarios where the local directory contained
 unversioned local files that obstruct a file that will be added by a
 switch/update/merge/what-have-you.

Don't think that's the case here.  These files are supposed to be
svn-ignored, so they should not have a copy in the repo.

 If the local file's content matches
 the content in the repository, then I think it is safe to simply replace
 the unversioned local file with the versioned file from the repository.

Yes, that would be handy - and harmless - as well.

 Often, the content has not been the same. It all depends on a lot of
 factors such as how it became unversioned, what has happened to the file
 while it was unversioned, etc. Replacing the content of the local file
 with the content in the repository loses local data.

Agreed, there is no reasonable way to handle this case automatically.
But it shouldn't happen as long as the clutter is never committed.

It is probably bad practice to default to letting cruft stay across
switches since your workspace would end up different than a fresh
checkout but it would be handy to have a way to force mostly-harmless
operations without overwriting any differing file data.

-- 
   Les Mikesell
  lesmikes...@gmail.com


Re: Feature Req: shorthand urls for branches/tags in CLI

2013-08-23 Thread Les Mikesell
On Thu, Aug 22, 2013 at 4:15 PM, Laszlo Kishalmi lkisha...@ovi.com wrote:

 I'd propose a -b [--branch] option or extend the meaning of ^ sign for
 those commands which can work with URLs. Extending ^ would mean
 that when used as ^/ it means repository root and using it as ^[branch] then
 it would refer to a branch.

 How would it work:

 Let's imagine the following repository layout:
  /project1/trunk
  /project1/trunk/dir1/dir2/dir3/fileA
  /project1/branches/branchA
  /project1/branches/branchA/dir1/dir2/dir3/fileA
  /project1/branches/branchB
  /project1/branches/branchB/dir1/dir2/dir3/fileA
  /project1/tags/tag1
  /project1/tags/tag2

So how do you see this working where your branches have their own sub-levels:
/project1/branches/release/branchA
/project1/branches/qa/branchA
/project1/branches/dev/branchA

Who gets to use the shorthand?

-- 
  Les Mikesell
lesmikes...@gmail.com


Re: Switching

2013-08-22 Thread Les Mikesell
On Thu, Aug 22, 2013 at 6:30 AM, John Maher jo...@rotair.com wrote:

 @Andrew there is no need for a svn copy.  I do not want to copy a feature in 
 one branch to another; I wish to keep the code isolated.

 And yes I know subversion won't delete unversioned files, I appreciate the 
 info on how subversion works.  Some of it was helpful.  I was hoping to hear 
 how others may have solved the same problem.

Your problem is not so much that svn doesn't delete the unversioned
files, but that it can't delete the directory containing them.

 But it seems the only answer is a tedious and manual process for the simplest 
 of enhancements.

Don't your build tools have commands to remove any spurious files
they've created or some equivalent of 'make clean' that you can run to
remove the clutter in a non-tedious way so that svn switch is free to
work correctly with the versioned content?
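Where the build tools have no clean target, a cleanup step can be driven by
`svn status --no-ignore`, whose first column marks unversioned (`?`) and
ignored (`I`) items. A hedged sketch follows; the status lines are fabricated
and the svn output is simulated with a variable so the filtering itself runs
as-is:

```shell
# Simulated 'svn status --no-ignore' output; a real script would run the command.
status='?       build/output.o
I       ClassLib/settings.sou
M       src/Widget.vb'

# Select only unversioned (?) and ignored (I) paths -- candidates for cleanup.
printf '%s\n' "$status" | awk '$1 == "?" || $1 == "I" { print $2 }'
# A real cleanup would pipe this list to 'xargs rm -rf' after reviewing it.
```

Modified (`M`) files are untouched, so only the clutter that blocks the switch
goes away.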

 I was hoping to find what others seem to praise, but have continually come up 
 empty handed.  I'll check stackoverflow before I give up.

If the big picture is including library components and their
containing directories in some versions and not others, the simple
solution might be to give the components their own location in the
repository (or a different one) and pull them in only as needed with
svn externals.   But, I think you still have to clean up the
unversioned clutter so a switch can remove the directory when it is
not wanted.   A slightly different approach is to version the built
component binaries and access them with externals, but that has its
own issues in multi-platform scenarios.
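For example, an `svn:externals` property on the project root might pin shared
library directories to a known revision (URLs, paths, and revision numbers
here are hypothetical):

```
# svn:externals property value
# (set with: svn propset svn:externals -F externals.txt .)
-r 1500 ^/libs/common  common
^/libs/parsers@1500    parsers
```

Both lines pull `common` and `parsers` into the working copy at revision 1500;
a switch on the containing project then only has to manage the project's own
versioned content.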

-- 
   Les Mikesell
 lesmikes...@gmail.com


Re: Switching

2013-08-22 Thread Les Mikesell
On Thu, Aug 22, 2013 at 11:40 AM, John Maher jo...@rotair.com wrote:
 I don't think you even tried Thorsten,

 I can easily.  There are actually several options.

How about just 'delete the spurious unversioned files yourself'?   The
problem is the versioned directory containing them that is not
supposed to exist after the switch.  And the only way svn can fix it
for you is if you clean it up.  Svn can't decide which of the files
it can't recreate for you should disappear.   Getting that wrong
would be much worse than presenting a conflict on the directory
holding them.

-- 
Les Mikesell
  lesmikes...@gmail.com


Re: Switching

2013-08-22 Thread Les Mikesell
On Thu, Aug 22, 2013 at 12:15 PM, John Maher jo...@rotair.com wrote:
 How about just 'delete the spurious unversioned files yourself'?

 As I said in the previous reply, two of those files are user settings.  They 
 would have to be constantly recreated by the developer.  That increases 
 costs.  One of the reasons I wanted some form of source code control was to 
 reduce costs.

So put them somewhere else or version them - and if the tool can't
deal with multiple users, work out a way to script renaming the
correct copy for the current user.   A clever developer should be able
to find an alternative to forcing subversion to keep a versioned
directory in conflicting place or retyping a file to recreate it when
needed...

Of course if it is too much trouble to clean up the files correctly,
you can just delete the whole workspace and check out again to go
between the branch/trunk versions.
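One hypothetical shape for such a rename script: keep per-user copies of the
settings file under versioned names and copy the right one into place. All
file names are invented for illustration, and a temp directory stands in for
the working copy so the sketch runs on its own:

```shell
# Versioned per-user templates: Project.sou.alice, Project.sou.bob, ...
wc=$(mktemp -d)
printf 'alice settings\n' > "$wc/Project.sou.alice"
printf 'bob settings\n'   > "$wc/Project.sou.bob"

# Pick the current user's copy; in a real script: user=$(id -un)
user=alice
if [ -f "$wc/Project.sou.$user" ]; then
    # Project.sou itself stays unversioned and svn-ignored.
    cp "$wc/Project.sou.$user" "$wc/Project.sou"
fi
cat "$wc/Project.sou"
rm -rf "$wc"
```

The templates are versioned and reproducible; only the trivially regenerated
`Project.sou` is unversioned, so it can be deleted freely before a switch.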

 Svn can't decide which of your files that it can't recreate for you should 
 disappear.

 It could ignore files that it is instructed to ignore.  That would help.

How many people actually know which files subversion is ignoring?
Again, a clever developer could probably come up with his own way to
delete files matching some patterns if he really wants that to happen.

-- 
  Les Mikesell
lesmikes...@gmail.com


Re: Switching

2013-08-22 Thread Les Mikesell
On Thu, Aug 22, 2013 at 12:43 PM, John Maher jo...@rotair.com wrote:

 The clean up script is a good idea but won't work here.  We have mostly all 
 class libraries.  One executable.  This means to test we need to specify an 
 application in the project.  Some developers use the exe while some use a 
 tool made just for testing the classes.  This information is in the *.sou 
 files which are unversioned for this reason.  So we don't want to delete them 
 (as I incorrectly stated somewhere) but ignore them.

You are sort-of asking for trouble if you have any dependency on
unversioned files being in a workspace at all, much less for them to
continue to exist when switching among versions with/without the
containing directories.   I'd advise stepping back from the immediate
problem and thinking of processes that will always work with a fresh
checkout so that in the future you can use build automation tools like
jenkins, relying only on the contents of the repository even when the
build happens on a new host.  It will simplify your life even for manual
operations if you can count on that.

-- 
   Les Mikesell
 lesmikes...@gmail.com


Re: Switching

2013-08-22 Thread Les Mikesell
On Thu, Aug 22, 2013 at 12:52 PM, John Maher jo...@rotair.com wrote:
 I'll try to clarify, everyone has their own copy of the tool.  They also have 
 their own copy of their settings.  The problem arises because the tool stores 
 the settings files in the same folder as some code specific files.  This can 
 not be changed.  So within a single directory we need to version some files 
 and ignore others.

 Sure I could write a pre-processing program to do a multitude of things.  It 
 wouldn't be that hard.  But then my plate has plenty of things that are not 
 that hard.  What will I gain?

Things that don't mysteriously break?  Reproducible builds?   Are
those worth anything?

 A happy working copy with a single command as long as what I write always 
 works perfectly.  Highly unlikely, so then I will make more problems for 
 myself.  Plus I assigned myself the task of learning subversion.  Covering up 
 a symptom does not treat the disease.

If you want to rely on the contents of dirty workspaces, just check
out different copies for each branch and let the cruft live there as
long as you need it.  You can still update/commit independently, etc.
 But you need to understand that the cruft is yours, not subversion's
and whatever you are doing isn't reproducible.

-- 
   Les Mikesell
 lesmikes...@gmail.com


Re: Switching

2013-08-22 Thread Les Mikesell
On Thu, Aug 22, 2013 at 12:58 PM, John Maher jo...@rotair.com wrote:
 You are correct that there will be issues with a fresh checkout.  But I can 
 live with that.

Not caring if you can reproduce a workspace is a bold statement to
make on a version control mail list.  Don't be surprised if everyone
doesn't agree with that choice.

 The code will not be affected, just the way the code is tested.  Once the 
 developer decides on how they wish to test I do not want to A) lose those 
 changes or B) step on the choices others have made by versioning it.

If this is preliminary testing, maybe that's OK.  If it is the whole
story, I'd want it to be reproducible.

 Think config or settings file.

Think build automation where you'd want to be able to reproduce these
on demand, not just rely on what happens to still live in the current
filesystem.  It might take a one-time effort to find the files and
decide how to handle them (renamed versioned copies, templates, moved
local copies, etc.) but then aside from being able to switch among
branches, you can reproduce a usable working copy.

-- 
   Les Mikesell
 lesmikes...@gmail.com


Re: Switching

2013-08-22 Thread Les Mikesell
On Thu, Aug 22, 2013 at 1:34 PM, John Maher jo...@rotair.com wrote:
 Again Les, you misunderstand.  I have no problems with the workspace.  It is 
 exactly the same for everyone, everytime.  Please read carefully before you 
 respond.  It has nothing to do with the build.  It is user settings, a config 
 file, ini file, choose your terminology.  If you don't understand please ask 
 for clarification instead of making incorrect assumptions.

The contents of the file are irrelevant.  The point is that it has to
either be versioned so svn can delete it knowing that you can get it
back, and then delete the containing directory that is really the
issue, or you have to delete it yourself.  Pick one.  If it really is
always the same, I don't see the problem with committing it.  If it
isn't, and you need to reproduce it, you need to work out a way to do
that, or use the multiple-checkout approach to maintain dirty
workspaces, realizing that you can't reproduce them reliably.
Personally, I don't like things that rely on any unversioned state
getting into production releases.  That developer will leave or that
machine will crash and all the magic is gone - and if you can't do a
matching build on a clean machine from a clean checkout, you won't
ever know how much magic was involved.

-- 
   Les Mikesell
 lesmikes...@gmail.com


Re: Switching

2013-08-22 Thread Les Mikesell
On Thu, Aug 22, 2013 at 4:49 PM, Travis Brown trav...@travisbrown.ca wrote:
 On Thu, Aug 22, 2013 at 04:16:49PM -0500, Les Mikesell claimed:
 snip
The contents of the file are irrelevant.  The point is that it has to
either be versioned so svn can delete it knowing that you can get it
back, and then delete the containing directory that is really the
issue, or you have to delete it yourself.  Pick one.  If it really is
 snip

 Why must svn delete the directory in order to create it?

When it creates it, it will create it as a versioned object with
history.  What is it supposed to do with that history when it can't
create it because there is already an unversioned instance there?
Svn doesn't pretend that things that just happen to have the same
names are the same object, they actually have to have the same
history.

 Reading this thread it seems to me that the core of the issue is that svn
 switch is not symmetrical when dealing with directories.

I think it would have the same problem with any unversioned object
with the same name as the versioned one that it needs to create.

 Why can svn not, instead, simply interpret an already existing directory
 as not a conflict? Certainly if a versioned file would overwrite an
 unversioned file of the same name then that is a true conflict because
 the content may differ. A directory has nicely compartmentalized units
 of content which can be handled in a smarter way.

No, look at your logs of directory history.  They aren't just
containers for whatever happens to be there, the history of adds and
deletes are there.   It might be possible to make things work where it
would pretend that it created a directory, keeping the history from the
repo while ignoring extraneous content, but it's not what I'd expect.

-- 
   Les Mikesell
 lesmikes...@gmail.com


Re: How to change paths on an external file without a full update --depth infinity?

2013-08-16 Thread Les Mikesell
On Fri, Aug 16, 2013 at 11:14 AM,  dlel...@rockwellcollins.com wrote:

 In subversion's view a copy is a branch so any distinction is strictly
 your own convention.  Likewise for tags, except that there is a
 generally accepted convention of not committing changes after a tag
 copy.   Do you have additional conventions or tools to help users of
 the pre-fork version know that it has branched?  I don't think there
 is a generic solution for that - subversion tracks copy-from history,
 but not copy-to.

 No.  There is no way to know who is using a fork you may have created or who
 has forked from someplace (short of scanning all projects of course).  I'm
 not sure fork is the best name to give this concept, but we didn't want to
 choose branch or tag for obvious reasons

The reason isn't obvious to me.  It is a branch as far as subversion
is concerned except perhaps for the convention of naming an upper
directory 'branches'.  Or a tag if you don't intend to do additional
changes to that copy.   And if you think in terms of 'release'
branches/tags it even makes sense for your usage.

-- 
   Les Mikesell
lesmikes...@gmail.com


Re: How to change paths on an external file without a full update --depth infinity?

2013-08-15 Thread Les Mikesell
On Thu, Aug 15, 2013 at 10:12 AM,  dlel...@rockwellcollins.com wrote:

  Once you copy, you break the link.  If you were to make a change to the
  copy, no one else would then see it.

 No one else would see it with externals either, except that you
 wrote a custom tool to analyze the externals, see if a newer
 revision of the original exists, and show that to the user. If you
 can do that with externals, you can do that with copies too. (Use
 svn log --stop-on-copy to find out where the copy came from, then
 see if there are newer revisions of that.)

 The challenge I then see on this is one of finding all instances of foo.c.
 If you have foo.c copied/forked fifty times to different projects, each of
 which has branched a couple of times, how do you programmatically find all
 different instances of foo.c (to let a developer choose which may be most
 appropriate?)  If you have good ideas, I'm very open to listening.

There is no difference in that question than finding where the
'future' copies of a pegged external target went.  You can only do
either if you have a convention for a canonical path.
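As a sketch of the `svn log --stop-on-copy` approach quoted above: the copy
source can be scraped from the `(from /path:rev)` annotation that verbose log
output prints for added paths. The log text below is fabricated so the
parsing is runnable as-is:

```shell
# Fabricated 'svn log -v --stop-on-copy' output for a copied file.
log='------------------------------------------------------------------------
r2048 | dev1 | 2013-08-15 | 1 line
Changed paths:
   A /project2/foo.c (from /project1/foo.c:2001)

fork foo.c into project2'

# Extract the copy-from path and revision from the annotation.
printf '%s\n' "$log" | sed -n 's/.*(from \([^)]*\)).*/\1/p'
```

This answers "where did this copy come from"; enumerating every place a file
was copied *to* still requires scanning, as noted above.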

 Also if you have two projects that both want foo.c and both have valid
 changes to make to the file, how does that get managed when they are copies?
 Its a trivial implementation when it is implemented as a file external.

How so?  I assume you also have to handle cases either way: where both
projects want the same change and where both projects need different
changes - where typical svn users would have branches/tags to
distinguish them.   But regardless of how you identify the target
file, there shouldn't be any effective difference between copying a
version into your directory or using a file external as long as you
don't modify it in place and commit it back - something your external
tool could track.

 We also have instances where we purposely want multiple copies of the same
 exact file within the same project.  We can effectively manage this through
 file externals to a structured datastore (AKA a set of folders within a
 repo).  Regardless of where and how a team decides to structure their
 project, all files are neatly organized in this one section of the repo
 (that is considered taboo to directly interact with).  The ability to have a
 specific file having many copies of itself and not care about its position
 within the repository is a powerful feature.  I understand this may diverge
 a bit from SVN's core thoughts on CM, but if SVN can support odd variations
 to its use, it becomes an even more indispensable building block.  Diversity
 in approaches is good.

Again, you get the history in a copy.  You can tell if they are the
same.  Or, on unix-like systems you can use symlinks to a canonical
copy within the project.
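On unix-like systems the symlink alternative looks like this (paths are
hypothetical, and a temp directory stands in for the project tree):

```shell
proj=$(mktemp -d)
mkdir -p "$proj/canonical" "$proj/module_a"
printf 'int x;\n' > "$proj/canonical/foo.c"

# Each consumer links to the single canonical copy instead of duplicating it.
ln -s ../canonical/foo.c "$proj/module_a/foo.c"
cat "$proj/module_a/foo.c"
rm -rf "$proj"
```

Subversion versions the symlink itself as a special file, so every checkout
reproduces the link while only one real copy of foo.c exists in the project.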

 From a feature perspective, externals are a very appropriate method to
 accomplish this (really a CM implementation of symlinks).  If we're saying
 that externals from an implementation standpoint are not quite appropriate
 at this time, I get that argument.  What is the general consensus as to
 where externals are on the roadmap?

I agree that externals are very useful, but most projects would use
them at subdirectory levels for component libraries where they work
nicely, not for thousands of individual file targets.   Is there
really no natural grouping - perhaps even of sets of combinations that
have been tested together that you could usefully group in
release-tagged directories?

 I may not convince the team that externals are really really useful (even if
 abused) in this application, but I'm hoping that the team does appreciate
 the general usefulness of externals and keeps maturing the feature.

Please make the distinction between file externals - which are sort of
an exception with special handling, and normal externals. Subversion
uses directories as a natural sort of project container - which, not
surprisingly fits the model of most things you want to manage on
computers, and some reasonable number of directory-level externals
'just work' without special considerations.  I'm not against better
performance, of course, but it makes sense to me to make pragmatic
design decisions for the same reasons you might avoid throwing
millions of files in one flat directory even in a non-versioned
scenario.   Theoretically, you should be able to do that, but in
practice it isn't going to perform as well as something with better
structure.

-- 
   Les Mikesell
 lesmikes...@gmail.com


Re: How to change paths on an external file without a full update --depth infinity?

2013-08-15 Thread Les Mikesell
On Thu, Aug 15, 2013 at 2:03 PM,  dlel...@rockwellcollins.com wrote:

 With more complexity comes more bugs and process missteps.  We're really
 striving to keep things as simple as possible.  We're fundamentally
 accepting of update times going from 2 seconds to 2 minutes.  Its harder
 when 2 minutes becomes 20 minutes.

Are your build systems on the other side of the world from the
repository?   The quick fix might be to reduce the latency one way or
another (move one of the pieces, use the wandisco distributed repo
approach, etc?) or automate with something like jenkins so you don't
have a person waiting.

-- 
   Les Mikesell
 lesmikes...@gmail.com


Re: How to change paths on an external file without a full update --depth infinity?

2013-08-15 Thread Les Mikesell
On Thu, Aug 15, 2013 at 2:24 PM,  dlel...@rockwellcollins.com wrote:

  With more complexity comes more bugs and process missteps.  We're
  really
  striving to keep things as simple as possible.  We're fundamentally
  accepting of update times going from 2 seconds to 2 minutes.  Its harder
  when 2 minutes becomes 20 minutes.

 Are your build systems on the other side of the world from the
 repository?   The quick fix might be to reduce the latency one way or
 another (move one of the pieces, use the wandisco distributed repo
 approach, etc?) or automate with something like jenkins so you don't
 have a person waiting.

 Yes, and that's a big contributor.  Co-location helps significantly, but
 isn't an option in our case.  I'll take a look at your suggestions.

Depending on your platform and tools, there are times when it works
better to have a remote display to a machine on a network where the
real work happens than to copy everything locally.   And if you can
automate with Jenkins, you don't really even need the display for the
build, although you do have to somehow commit your changes to
subversion.

-- 
   Les Mikesell
  lesmikes...@gmail.com


Re: How to change paths on an external file without a full update --depth infinity?

2013-08-15 Thread Les Mikesell
On Thu, Aug 15, 2013 at 2:03 PM,  dlel...@rockwellcollins.com wrote:

  We do want to modify in place.  Copying back creates an additional step that
  is already managed quite well by SVN with externals.

 I've never done that with a file external - where does the commit go?


 It commits a new revision of what the file external pointed to - pretty
 handy.  If you are pegged, it will not automagically update your pegged
 revision (as I'd expect), so unless you are on the HEAD or update your peg
 to what just committed, an update will revert your WC back to the pegged
 version.

I'd actually expect it to be pretty confusing if you had multiple
people committing changes based from different back-rev pegged
references anywhere near the same time frame.  Does your external
'notify about new versions' tool help with that?   Don't you get
conflicting changes that you'd want branches to help isolate?

-- 
   Les Mikesell
 lesmikes...@gmail.com


Re: How to change paths on an external file without a full update --depth infinity?

2013-08-15 Thread Les Mikesell
On Thu, Aug 15, 2013 at 4:03 PM,  dlel...@rockwellcollins.com wrote:


 I'd actually expect it to be pretty confusing if you had multiple
 people committing changes based from different back-rev pegged
 references anywhere near the same time frame.  Does your external
 'notify about new versions' tool help with that?   Don't you get
 conflicting changes that you'd want branches to help isolate?

 Yes, its obvious to users when they are not on the latest revision of the
 file, but they c/would still go through a merge process if need be.

 It's actually very straightforward as we intentionally focused on targeting
 non-CM-guru folks.

I'm having a little trouble picturing how you would sanely maintain
(say) a conflicting line of code where 2 projects need the difference
across several revisions when all the commits from both go to the head
of the same trunk copy.

-- 
   Les Mikesell
 lesmikes...@gmail.com


Re: How to change paths on an external file without a full update --depth infinity?

2013-08-15 Thread Les Mikesell
On Thu, Aug 15, 2013 at 5:14 PM,  dlel...@rockwellcollins.com wrote:

  I'd actually expect it to be pretty confusing if you had multiple
  people committing changes based from different back-rev pegged
  references anywhere near the same time frame.  Does your external
  'notify about new versions' tool help with that?   Don't you get
  conflicting changes that you'd want branches to help isolate?

 The commit won't complete you'll get an out of date error.

 That's right, isn't it.  It'd be no different than two folks trying to
 commit the same file around the same time, right (one would get an out of
 date error)?

Right, but when working on the trunk explicitly you'd expect to update
to accept others' changes often, or to branch to preserve and isolate
your differences.   I don't see how either quite matches a model where
changes might be made based on multiple differing back-rev pinned
externals.   What happens if two projects don't want to accept each
others' changes and need to commit their own?  In a more typical
scenario they would be working on branch copies that could diverge,
but I think you lose that by forcing a canonical path for development
(as a tradeoff for knowing where the 'new' work is...).

-- 
Les Mikesell
  lesmikes...@gmail.com
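The out-of-date failure mode being discussed is easy to reproduce with two
working copies of a throwaway local repository. This is an illustrative
sketch, assuming the svn and svnadmin command-line tools are installed; all
paths are scratch placeholders:

```shell
#!/bin/sh
# Sketch: provoke svn's out-of-date commit error with two working
# copies of a throwaway local repository (requires svn + svnadmin).
cd "$(mktemp -d)" || exit 1
svnadmin create repo
svn checkout "file://$PWD/repo" wc1
svn checkout "file://$PWD/repo" wc2

echo one > wc1/foo.txt
svn add wc1/foo.txt
svn commit -m "add foo" wc1      # r1
svn update wc2                   # both working copies now at r1

echo two >> wc1/foo.txt
svn commit -m "edit from wc1" wc1   # r2; wc2 still has foo.txt at r1

echo three >> wc2/foo.txt
svn commit -m "edit from wc2" wc2   # fails with an out-of-date error;
                                    # wc2 must 'svn update' (and possibly
                                    # merge) before it can commit
```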


Re: How to change paths on an external file without a full update --depth infinity?

2013-08-15 Thread Les Mikesell
On Thu, Aug 15, 2013 at 6:09 PM,  dlel...@rockwellcollins.com wrote:

 If a project doesn't want to accept a change, they fork to a new
 history.  The tool does this with an svn cp old_pedigree/foo.c
 new_pedigree/foo.c and an update to the svn:externals property.  They do,
 however, lose sight of what the other project commits after that fork.
 Where and how the file is stored on the back end is masked by the tool;
 a pedigree is essentially implemented as a folder.  Developers may know
 that their file is really a file external, but they don't treat it any
 differently from a normal file until it comes time to fork.  I
 try to differentiate forking as a pedigree/history from branching and the
 like.

 This system is essentially an implementation of Rational's CMVC history
 feature.

In subversion's view a copy is a branch so any distinction is strictly
your own convention.  Likewise for tags, except that there is a
generally accepted convention of not committing changes after a tag
copy.   Do you have additional conventions or tools to help users of
the pre-fork version know that it has branched?   I don't think there
is a generic solution for that - subversion tracks copy-from history,
but not copy-to.

-- 
  Les Mikesell
lesmikes...@gmail.com
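The fork-and-repoint sequence described in this exchange, and the copy-from
versus copy-to asymmetry, might look roughly like this. The repository URL,
pedigree layout, and revision numbers are all hypothetical:

```shell
# Hypothetical repository layout; URLs and revisions are placeholders.
REPO=https://svn.example.com/repos/components

# Fork: a cheap server-side copy that records copy-from history.
svn copy -m "Fork foo.c for project B" \
    "$REPO/pedigrees/projectA/foo.c@20" \
    "$REPO/pedigrees/projectB/foo.c"

# Re-point the consuming project's file external at the new pedigree,
# pegged to the revision the copy created (say r57):
svn propset svn:externals \
    "$REPO/pedigrees/projectB/foo.c@57 foo.c" src/
svn commit -m "Switch foo.c external to the project B pedigree" src/

# Copy-from (ancestry) is directly queryable from the fork:
svn log -v --stop-on-copy "$REPO/pedigrees/projectB/foo.c"

# ...but copy-to is not: finding every fork made *from* the original
# means scanning the whole repository's verbose log, e.g.
svn log -v "$REPO" | grep "from /pedigrees/projectA/foo.c"
```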


Re: How to change paths on an external file without a full update --depth infinity?

2013-08-14 Thread Les Mikesell
On Wed, Aug 14, 2013 at 11:48 AM,  dlel...@rockwellcollins.com wrote:

 I believe that if we can improve external performance (speed and integration
 -- like handling externals when depth != infinity), not only would we help
 the current users of SVN that have come to accept this, but we would have a
 huge opportunity to get back on the radar of other users that have
 previously chosen other options.

I'm not sure that current SVN users accept problems with depth !=
infinity as much as they arrange their layout so they don't have to do
that.   What's a common use case for needing some disjoint arrangement
of components that you can't assemble with externals and normal
recursion?

-- 
  Les Mikesell
 lesmikes...@gmail.com


Re: How to change paths on an external file without a full update --depth infinity?

2013-08-14 Thread Les Mikesell
On Wed, Aug 14, 2013 at 2:04 PM,  dlel...@rockwellcollins.com wrote:


 Designing a build
 management system on top of subversion using only externals
 is risky.

 I disagree, but we've had this conversation already.  I'd be very welcome to
 try anything to help our performance.

Externals can be very useful to assemble mixed revisions of
separately-versioned components, but why do they have to be
file-level?  Are there other tools involved in the process that do not
understand subdirectories or symlinks?  Or just no natural groupings?

-- 
   Les Mikesell
 lesmikes...@gmail.com


Re: How to change paths on an external file without a full update --depth infinity?

2013-08-14 Thread Les Mikesell
On Wed, Aug 14, 2013 at 4:23 PM,  dlel...@rockwellcollins.com wrote:

 Now, in our case, we do stuff for aircraft,... wouldn't it be nice to
 maintain living pedigrees with all similar models of aircraft?  Fix an issue
 in one place and advertise it to all the others.  File externals give you
 this.  It fits very well into the embedded safety-critical world of
 "don't touch that code unless you have to" and "let's fix it once".
 Refactoring code in this world just can't happen as often as you'd like
 (it's also a chance to reinject bugs).

 Hope this helps!

So the point is to intentionally pull the HEAD revision of a whole
bunch of files together, where each is located arbitrarily and can
change independently?   I guess that's about the opposite of the way I
think of version control, so I can't suggest anything else.   Are
there enough different 'top level' collections, or a fast enough rate
of change, that you can't simply copy the right files into place
instead of having external references there?

-- 
   Les Mikesell
lesmikes...@gmail.com


Re: How to change paths on an external file without a full update --depth infinity?

2013-08-14 Thread Les Mikesell
On Wed, Aug 14, 2013 at 5:14 PM,  dlel...@rockwellcollins.com wrote:


 So the point is to intentionally pull the HEAD revision of a whole
 bunch of files together, where each is located arbitrarily and can
 change independently?   I guess that's about the opposite of the way I
 think of version control, so I can't suggest anything else.   Are
 there enough different 'top level' collections, or a fast enough rate
 of change, that you can't simply copy the right files into place
 instead of having external references there?

 No HEAD revisions, all files are pegged.

 For example, if everyone is linked to a common file (foo.c, revision 20,
 project A pedigree) and a bug is fixed, each project will see that a fix
 has been made.  Each project has to make a decision: to remain on the
 current revision (no budget or schedule to update), update to the latest (if
 they have budget and schedule), or to fork (if they don't like the fix and
 have to implement a new feature).  This can happen any time, not necessarily
 when the file is committed.

Then what makes the collection of pegged externals easier to manage
than explicitly copying specific revisions into the top level where
the reference makes them appear, and subsequently merging changes or
deleting files and copying in the updated version?  Those operations
are relatively efficient with svn, and copies carry along the revision
history.

I don't follow how each project 'sees' that fixes have been made - you
shouldn't see that through a pegged external.
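For reference, a pegged file external of the kind being discussed is a
single svn:externals line with a peg revision; the URL, pedigree path, and
revision number below are illustrative:

```
https://svn.example.com/repos/components/pedigrees/projectA/foo.c@20 foo.c
```

A working copy carrying that property keeps receiving r20 of foo.c on every
update, regardless of later commits to the file, which is why new fixes are
not visible through the pegged external itself.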

 Each new project inherits the externals from their baseline (they do not
 create new links or go out and search for reuse - that'd be a pain!)  So
 until a project decides to fork a file, they see (but not automatically
 receive) all the changes made to that file.  Nothing happens to your project
 without explicit action.

You'd get roughly the same effect with an 'svn cp' of a baseline - the
equivalent of a branch or tag even if you name it something else.
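An 'svn cp' of a baseline, as suggested, is a single cheap server-side copy;
the repository URL, paths, and revision here are hypothetical:

```shell
# Snapshot an entire baseline as one cheap server-side copy
# (repository URL, paths, and revision are hypothetical):
svn copy -m "Baseline for project C" \
    "https://svn.example.com/repos/proj/trunk@1234" \
    "https://svn.example.com/repos/proj/baselines/projectC-1.0"
```

Whether that copy is then called a branch, a tag, or a baseline is purely a
naming convention; Subversion performs the same operation in every case.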

 Each file lives in a special area of the repository that identifies its
 pedigree.  Code your project writes automatically becomes usable by any
 other project.

None of that would need to change regardless of whether you copy a
revision into a different folder or reference it directly with an
external.   As long as you aren't floating to the HEAD version you
have to do something to bump the revisions - why not just copy it in
the repo where remote revision checks will be fast?

-- 
  Les Mikesell
lesmikes...@gmail.com

