[Sorry this is a bit long, I tried to make each point as concise as
possible without missing the point. Also, I know some of the git-usage
details below are well-known already but I include them for context,
and for whoever reads this and is less familiar with git]

On 23 February 2017 at 08:55, Alexander Burger <a...@software-lab.de> wrote:
> ..[snip]..
> 2. Using a code repository is fine. But I personally will not use any of the
>    existing ones, because my working style deeply depends on file meta
>    informations (most important the modification date), which are destroyed by
>    those repositories. I explained the situation at various occasions. Checking
>    files *into* a repo is all right, but as soon as I check them out they would
>    be unusable for me.

I realise from your previous emails - e.g.
http://www.mail-archive.com/picolisp@software-lab.de/msg05272.html -
that you are not too keen on a git-or-similar-DVCS for actual
development due to the metadata issues (and believing it to be
overkill), and - as you said in
http://www.mail-archive.com/picolisp@software-lab.de/msg05250.html -
"alea iacta est". However - just in case you are not already aware -
since version 2.2.2, git only updates modification times on *changed*
files during checkout, which may be adequate for not messing with the
system you have in place. If that isn't sufficient, then for fitting
git into an unusual system I once wrote shellscript checkout/commit
hooks which saved the metadata (gathered with find/stat) to an
index-file in the repo root dir and reinstated it (with touch) after
checkout, and added a "make clean" to the checkout hook to force a
full build. It was years ago, and I can't find them now that I search
for them, but I remember they were not too difficult to implement (and
I'll keep an eye out for them if you think it is at all
interesting/worth finding). If using git would only be interesting
without needing "make clean" after checkouts, then there is a "long
answer" quoted from Linus Torvalds on SO which provides at least 3
other workarounds (his typically acerbic "short answer" isn't so
helpful, though):
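If it helps, here is a from-memory sketch of the idea behind those
hooks (all names here are illustrative, not the lost originals; GNU
stat/touch assumed):

```shell
#!/bin/sh
# Hypothetical sketch: record each file's mtime in an index-file
# (.mtimes) in the repo root, then reinstate the times with touch.
# Filenames and layout are illustrative, not the original scripts.

save_mtimes() {
    # one "epoch-seconds<TAB>path" line per file, skipping .git itself
    find . -type f ! -path './.git/*' ! -name .mtimes \
        -exec stat --printf='%Y\t%n\n' {} + > .mtimes
}

restore_mtimes() {
    tab="$(printf '\t')"
    while IFS="$tab" read -r ts f; do
        # a file may have been deleted since the index was written
        if [ -e "$f" ]; then touch -d "@$ts" "$f"; fi
    done < .mtimes
}
```

save_mtimes would run from a pre-commit hook and restore_mtimes from a
post-checkout hook (followed by the "make clean").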

> 3. Being present on StackOverflow would be a good thing.

Because trawling through historical emails and copy-pasting Q&As to
StackOverflow is something no-one would enjoy (though it is an easy
way to build "reputation points" for whoever cares about such things),
I suggest this: whenever someone asks a question on the mailing-list
that has already been asked-and-answered (I know I did it once on a
bad google-fu day, sorry), they should be urged to ask it on SO and
post its link back to the mailing list, and then someone can find the
historical answer and copy-paste it as the SO-answer (as a quotation,
with a link to the original). This would mean:

 * The answer appears at least once independently of SO (in case SO
suddenly goes bankrupt, eggs are in more than one basket).

 * The most recent, google-friendliest iterations of a question in the
mailing list at least include links to SO, so that people who
google-find the mailing list answer first can go to SO and up-vote it
there to help the google-juice rise for those questions, and therefore
to reduce the number of repeat-questions on the list.

 * It is a way to reduce duplicate Q&As in the mailing list that is
more diplomatic than having to just say "already answered, use ...
 * SO users see lots of picolisp content pointing back to the
mailing-list, to the wildly impressive fount of knowledge called
regenaxer :-), and can marvel at how nice it is to have the language's
actual architect/author so present and willing to help.

> As for (2), I see no problem with the current situation. Is it so much worse to
> do a "wget http://software-lab.de/picoLisp.tgz" instead of "git clone .."? The
> change history is in "doc/ChangeLog", and is the same as was checked into
> mercury when we were still using google code.

It goes without saying that it is entirely your (regenaxer)
prerogative if you still don't want to onboard such a workflow, but I
am "campaigning" for it a bit (particularly for a publicly accessible
DVCS collaboration-platform) because I have a hunch that it might open
more useful contribution-floodgates than expected, not to mention
boost the public-profile of picolisp more than expected. The reasons
for my hunch are:

Firstly, I think even the best ChangeLog in the world can never
compete with "git log -p" for speed/resolution when finding issues or
points of interest in the code (right next to their commit-message for
context, like a distant cousin of literate-programming), or with "git
bisect" when troubleshooting specific nits - and this is equally true
for users as for the maintainer. Because it changes constantly, a
distinct ChangeLog file is the file most likely to prevent a
conflict-free merge, so many projects deprecate it in favour of only
having the git log, but if a ChangeLog external to git is preferred
then, to avoid duplication of effort, it can be auto-generated from
"git log" during build (or "git export"/"git archive") using logic
like this: https://wiki.gnome.org/Git/ChangeLog ...and tools like
Jenkins even include a build-plugin for it:
Secondly, as mentioned in my previous email, there are very
streamlined git-workflows and tools in place now in several distros,
so keeping packages up-to-date becomes much easier/faster if upstream
is already on git. Beyond that, I think some of the biggest benefits
would be for users/potential-contributors. It is a huge timesaver for
me, when doing low-level experiments and maintaining custom (or
site-specific) patches, to do:

git checkout my-weird-feature-branch
git rebase upstream/master

to keep my customisations up-to-date on latest upstream (often without
even having any merge-conflicts), instead of each time doing manual
steps on top of the latest imported tarball - which usually just means
staying based on old upstream for ages, and consequently not helping
discover and shake out edge-case upstream regressions in a timely
fashion (which is often the main reason for upstream to open-source in
the first place). Another use-case which I find saves me *lots* of
time contributing to projects with public git and mailing-list is:

git clone [the-repo]
cd [dir]
git checkout -b my-proposed-patch upstream/master
# [hack,hack,hack]
git commit .
git format-patch --cover-letter -M upstream/master -o patches/
vi patches/0000-*  # [personalise the cover-letter]
git send-email patches/*  # [send to mailing list, as per git-config]

git am --signoff patches/*  # [upstream merges patch-attachments]

..or on collab-platforms like github/gitlab/gogs it reduces to just:

# [fork the repo in browser or by API call]
git clone [my-fork]
git checkout -b my-proposed-patch upstream/master
# [hack,hack,hack]
git commit .
git push origin my-proposed-patch # [push to my fork]
# [open pull-request in browser or by API call]

# [upstream does "git merge" or "git cherry-pick" as needed]

..a workflow which would take me at least 10 times as long to do
equivalently by hand, and which is far more prone to human-error (in
many cases this unfortunately makes the difference between me being
motivated enough to undertake it at all, or ending up contributing one
of the other ideas/features/patches on my TODO-list to a project which
uses a fast and trivial workflow/turnaround instead).
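(A side-note on the snippets above: "upstream/master" assumes a remote
named "upstream" has been configured once beforehand; a sketch, with a
placeholder URL rather than a real picolisp repo:)

```shell
# One-time setup: add and fetch the canonical repo as "upstream"
# (the URL/path argument is a placeholder, passed in by the caller)
setup_upstream() {
    git remote add upstream "$1"
    git fetch upstream
}
```

After that, "git rebase upstream/master" and "git checkout -b ...
upstream/master" work as shown.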

Rowan Thorpe
"A riot is the language of the unheard." - Dr. Martin Luther King
"There is a great difference between worry and concern. A
worried person sees a problem, and a concerned person solves a
problem." - Harold Stephens
"Ignorance requires no apologies when it presents questions
rather than assertions." - Michael Sierchio (OpenSSL mailing list)
"What we need more than an end to wars is an end to the
beginning of all wars." - Franklin Roosevelt