Re: Confirming PAUSE operating model safe harbor for Alt::* distributions

2017-10-30 Thread Aristotle Pagaltzis
* David Golden <x...@xdg.me> [2017-10-26 14:58]:
> What do we think about this? Do we feel it falls under the 'safe
> harbor' exception?

As far as I can see, the mechanism described by Peter does not permit
scenarios in which a user unwittingly gets their Perl installation
screwed over by DBIx::Class::Boring. As such, it seems to me that the
mechanism follows the spirit of the principles, and therefore should
clearly fall under the safe-harbour clause in question.

> My personal thoughts:
>
>- The reason for the safe harbor clause in the first place was to
>  allow this sort of thing while putting appropriate protections in
>  place for end users -- so I think the intent is clearly protected
>  by the safe harbor and questions should focus only on mechanisms
>  and transparency.

Agreed.

>- Per the "explicit user confirmation", I think an explicit opt-in
>  must be present, not merely checking for overwriting via hashing.

I don’t think so; I think it’s fine not to require it. But you didn’t
state a reason for thinking that, so I don’t know whether I actually
disagree with you.

In particular, requiring an opt-in even in presence of a hash match
interacts badly with the fact that the actual codebase in the fork will
be staying in the DBIx::Class namespace for at least a very long time,
because it means that an opt-in is required not only for switching to
the fork, but also for all upgrades *after* following the fork. Such
a requirement would therefore mean that users who intend to stay on
the DBIx::Class::Boring fork, or who use a downstream project that has
chosen the DBIx::Class::Boring fork, will forever need to shim the
setting of this environment variable into their toolchain or deploy
machinery.

OTOH, I believe upgrading users of a forked module from a pre-fork
version to the post-fork codebase is fine *so long as* the fork has
a sufficiently strong backcompat commitment to pre-fork versions. And
as far as that applies to DBIx::Class::Boring, the most neutral way
I can express myself is that the entire situation came to a head because
Peter’s commitment to backcompat has been perceived as *too* overriding.

These are my reasons to believe that explicit opt-in under pure-upgrade
situations (as opposed to switching an installation over from the other
fork) is neither necessary nor reasonable under the mechanism described
in the proposal you quoted.

>  If prompting during Makefile.PL, I would prefer the default to be
>  "no", but I don't think the safe harbor is violated if the
>  default is "yes" (people who auto-accept prompts get what they
>  deserve).

The proposal you quoted says that the installation will *abort*, without
even a prompt, unless explicit opt-in via environment variable is given.
Therefore this requirement is fulfilled in spades.

>- I would prefer checking for the presence of an environment
>  variable over prompting as that similarly indicates explicit
>  confirmation and is kinder to many poor users who auto-accept
>  prompts -- or whose cpan client does so on their behalf.

(See right above.)

>- I'd be happy to see a convention established checking for
>  a particular environment variable like "PERL_ALLOW_ALT_MODULES=1"
>  that could apply across many Alt-style modules. A Makefile.PL
>  prompt could default to "yes" if the environment variable is
>  true.

Generally, yes. But I have to agree with Chad on this point: an opt-in
should be specific to particular modules, and not limit the user to
expressing a blanket “I accept any Alt:: that might be listed anywhere
in my dep chain”. In fact I raised this exact issue a long time ago:
https://github.com/ingydotnet/alt-pm/issues/3
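
To make that concrete, a module-specific opt-in guard near the top of
a Makefile.PL might look something like this (the variable name and
wording are purely illustrative – no such convention is established):

```perl
# Hypothetical per-module opt-in check; everything here is a sketch.
unless ($ENV{PERL_ALLOW_ALT_DBIX_CLASS_BORING}) {
    warn <<'EOT';
This distribution replaces DBIx::Class on your system.
Set PERL_ALLOW_ALT_DBIX_CLASS_BORING=1 to confirm that you want this.
EOT
    # Exiting 0 without writing a Makefile makes CPAN clients and
    # testers treat the dist as unsupported rather than broken.
    exit 0;
}
```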

It’s fine for there to be a convention for such opt-ins. It’s possibly
even a good idea (if it comes up in enough different cases in practice
to matter).

Peter’s proposal in particular is already sane on that specific front.

>- I have no objection to "DBIx::Class::Boring" as a name. I don't
>  think we should mandate a convention of "Alt::*".

Agree. I believe that conveying intent through the name is imperative
only when the protection of users at install time is too weak – as it
has been in Ingy’s conception of the Alt:: concept.

Regards,
-- 
Aristotle Pagaltzis // <http://plasmasturm.org/>


Re: No . in @INC breaks CPAN

2016-11-15 Thread Aristotle Pagaltzis
* James E Keenan <jk...@verizon.net> [2016-11-15 14:12]:
> Before we polarize ourselves into the camps of "we have to fix all of
> CPAN" and "it's hopeless to try to fix CPAN", it's important to
> realize that we now have the conceptual tools with which to assess the
> scope of the problem.

Speaking of empirical facts: broken Module::Install versions are still
being bundled even now. Why is it polarising to point to factual
evidence of what we can and cannot expect to happen?

> We now know that we can visualize CPAN as a river of dependencies.

That covers the CPAN – under the best of cases. I would hope that Perl
has ambitions to serve actual users rather than just people publishing
libraries for it.

> If we can identify a critical part of the "upstream", we can set up
> a CPANtesters-like apparatus to see how much damage (flooding?)
> default_inc_excludes_dot will actually cause.

Sounds good in theory. It *would* be great if we had that. But are you
building it, or at least planning to – or at the very least working to
convince someone specific to do so? Because experience says that
otherwise it’s not getting built.

And when it comes to results rather than aspirations, it makes no sense
to consider solutions that can be counted on to happen on a par with
ones that cannot.

I’m not arguing against what you said. Those are great things to want.
I am only arguing against it being an argument against what I said. The
things you propose are far less likely than the ones I am interested in,
but ideally of course we would have both.

Personally I wish to prioritise the likely because we cannot afford to
end up with no solution. Considering blue-sky solutions on a par with
ones that are within reach is not very helpful to that end, even if the
blue-sky stuff is necessary to the long term.

> Discussion of courses of action will then be empirically informed.

Broken Module::Install bundles (and the track record of services not
getting built merely by calling for them) are empirically confirmed.
What bar does an empirical fact have to clear before it attains the
power to affect the decision-making process?

> This change is, after all, just a much larger version of the "blead
> breaks CPAN" problem we've been handling for years.

But not well. Something is better than nothing here, but I think we feel
too good about doing a bit and punting on anything more difficult, just
because most other communities barely even try to address this at all.

With this particular change, we’re in a different quantitative breakage
league than usual.

Regards,
-- 
Aristotle Pagaltzis // <http://plasmasturm.org/>


Re: No . in @INC breaks CPAN

2016-11-14 Thread Aristotle Pagaltzis
* Todd Rinaldo <to...@cpanel.net> [2016-11-14 15:12]:
> Long Term
>
> We need to fix the CPAN modules themselves.

I’m afraid that’s a pipe dream. You can fix the most popular part of
CPAN but not even close to everything. There are still distributions
containing broken Module::Install versions, years after the last
bugfixes. Any solution that sets out from “we fix everything on CPAN”
as a starting point is no solution at all.

So the reality is this: if we make a change that renders most of CPAN
non-installable, then a quite significant fraction of it will be left
behind and will never work again.

My first thought for managing this ran in the direction of introducing
a new flag in META for dists to opt in to installation under dot-free
@INC, and then only dists that explicitly ask for it get it.

The downside is that this will withhold the improvement from dists that
already don’t depend on dot in @INC. It will take time till everything
that can be installed under a dot-free @INC actually is.

The upside is, dists which do depend on dot in @INC and are not actively
maintained will continue being installable, indefinitely.

In the longer term, a new version of the META spec could declare the new
flag to be implicitly turned on, so the META in every new dist does not
have to contain this boilerplate bit forever (much as saying `use 5.022`
replaces the need to `use feature qw/mile-long litany to the present/`).
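
As a sketch, such an opt-in could ride on the META spec’s existing `x_`
custom-key convention until a new spec version blesses it (the key name
here is invented):

```json
{
   "name"           : "Some-Dist",
   "version"        : "1.00",
   "x_dot_free_inc" : 1
}
```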

This is a brain dump, mind you; I am not at all confident that there are
no unintended consequences.

Regards,
-- 
Aristotle Pagaltzis // <http://plasmasturm.org/>


Re: Renaming the "QA Hackathon"?

2016-04-18 Thread Aristotle Pagaltzis
* Aristotle Pagaltzis <pagalt...@gmx.de> [2016-04-18 14:31]:
> FWIW, in case it helps (probably not, but eh), the IETF runs hackathons
> that seem to follow the original meaning of the term as we use it; e.g.:
> https://www.ietf.org/blog/2016/04/ietf-hackathon-getting-tls-1-3-working-in-the-browser/

No they don’t. I misread the whole thing, ugh. Sorry for the noise.


Re: Renaming the "QA Hackathon"?

2016-04-18 Thread Aristotle Pagaltzis
* Neil Bowers  [2016-04-09 16:23]:
> There’s a well-established definition for “hackathon” these days, and
> the QAH is not one of those.

FWIW, in case it helps (probably not, but eh), the IETF runs hackathons
that seem to follow the original meaning of the term as we use it; e.g.:
https://www.ietf.org/blog/2016/04/ietf-hackathon-getting-tls-1-3-working-in-the-browser/


Re: Why do we keep using META.json for stuff that has nothing to do with installation

2016-02-27 Thread Aristotle Pagaltzis
* David Golden <x...@xdg.me> [2016-02-27 13:25]:
> The more interesting question is "why are we using META for installation"

Because we can’t go back in time and make historical versions of EUMM/MB
*not* use META for installation. End of line.

> If the problem is with MYMETA, I have no problem having MYMETA strip
> out everything but absolutely essential fields – but then affected
> users have to upgrade their EU::MM/M::B, which is no better than
> having them upgrade their JSON parser, so I don't think it's worth the
> effort to do so.

But can we get to a situation where nobody needs to upgrade anything?

That would be possible if we introduce a new file (e.g. INFO [^1]) for
the things we currently stuff into META in spite of EUMM/MB never having
cared about them. The data EUMM/MB do care about doesn’t really require
Unicode, so if we only leave that data there, then historical versions
of EUMM looking for META.json and parsing it using a broken JSON parser
will work just fine, without the need to upgrade anything. So riba’s
proposal has a benefit: it makes life better for users on old installs.

Introducing a new file for install-time metadata (just because we once
named another file META and would like to have that name continue to
refer to all metadata about the distribution) would require everyone to
upgrade their EUMM/MB to a version that supports this new file. This is
much worse than making them upgrade their JSON parser. So might as well
not do the separate files and just keep stuffing everything into META.
That is, your line of thought here doesn’t seem to add up to any benefit
for anyone compared to the current situation.

It is true that using META for install-time data and then INFO (or some
other name) for “general” metadata is annoying. But that’s the only way
that it makes sense to separate the data into different files.

[^1]: I don’t like `META.meta`. :-)

Regards,
-- 
Aristotle Pagaltzis // <http://plasmasturm.org/>


Re: Should Test2 maintain $! and $@?

2016-01-17 Thread Aristotle Pagaltzis
* Chad Granum <exodi...@gmail.com> [2016-01-12 04:20]:
> That said, it just occurred to me that this can possibly be
> accomplished by having a context store $! and $@ when it is obtained,
> then restore them when it is released, which would avoid needing to
> use local everywhere, and still preserve them for all tools
> automatically...

I actually like the magic Kent is wary about in this instance, as it
makes it easier for test functions to get this right without each of
them having to carry extra boilerplate. But this also means that a test
function which explicitly *wants* to change these variables has to fight
the framework for it. So maybe there ought to be a mechanism to request
that they not be restored on context release.
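
A minimal sketch of that mechanism – not Test2’s actual API, all names
invented here:

```perl
package My::Context;
use strict;
use warnings;

# Snapshot $! and $@ at the moment the context is obtained.
sub new {
    my ($class) = @_;
    return bless { errno => $! + 0, error => $@ }, $class;
}

# Restore them on release, unless a tool asks to keep its changes.
sub release {
    my ($self, %opt) = @_;
    return if $opt{keep_errors};    # the proposed escape hatch
    $! = $self->{errno};
    $@ = $self->{error};
    return;
}

1;
```

A test function would then obtain a context on entry and release it on
exit, getting $!/$@ preservation for free; one that deliberately wants
to change these variables would pass the flag.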

Regards,
-- 
Aristotle Pagaltzis // <http://plasmasturm.org/>


Re: Perl-toolchain-gang -- using devel branches rather than master

2015-05-19 Thread Aristotle Pagaltzis
* David Golden <x...@xdg.me> [2015-05-19 19:26]:
> I've created devel branches for several of the dists I maintain. I'd
> like to leave master for things that ship, so that, for example,
> I can ship a stable release after a trial by shipping master.

I tried this in several projects, especially at $work, and eventually
abandoned the approach, for basically all the reasons that Kent listed.

> Other development work can go live on devel (or topic branches) while
> we wait for TRIAL releases to stabilize. I think this will be a good
> practice for the PTG in general. Thoughts?

At $work I ended up going the other way: a `master`/`release` combo.

For my smaller projects, I ended up not even having another branch next
to master. For releases I just tag commits on master. I think that is
adequate for projects with only a single committer who pushes to master.
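
In full, that minimal workflow amounts to hardly more than this (repo
path and version number are of course illustrative; `git init -b` needs
Git 2.28+):

```shell
set -e
# Single-committer flow: all commits go to master, releases are
# simply annotated tags on it.
git init -q -b master demo
git -C demo -c user.name=me -c user.email=me@example.org \
    commit -q --allow-empty -m 'some work'
git -C demo -c user.name=me -c user.email=me@example.org \
    tag -a v1.23 -m 'release 1.23'
git -C demo tag -l
```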

Multi-committer projects do need some way of keeping people from
stepping on each other’s toes, so they should probably have a workflow
of all development going into branches and landing in an integration
branch. That would be master.

Projects that actually require a RC phase with potential for significant
stabilisation work would need a branch for that, which probably ought to
be something other than master, e.g. “release”. (Just cutting a TRIAL as
a precaution isn’t the same to me, though.)

Mind you, that is the absolute minimum best-practice branch/tag workflow
I think should be suggested; if you want to formalise extra branches for
keeping track of your TRIALs etc, that’s obviously fine.


* Kent Fredric <kentfred...@gmail.com> [2015-05-19 22:50]:
> That is, I think semantically, master and blead are similar, and
> assuming master to mean stable is a bit odd.

Yeah, I’ve settled into that too.

> Though my thinking is obviously at odds with a lot of people's here,
> it's based on the idea that master is universally the default
> branch in Git. The default branch for commits and pushes and etc etc
> etc.

There is a variety of places where Git assumes master is where development
happens, such that I always felt friction when I tried to work against
that. Off the top of my head, the default merge commit message comes to
mind, which will explicitly say

Merge branch 'topic' into devel

for any target branch other than `master`, where it’s a plain

Merge branch 'topic'

That’s hardcoded.

I encountered various corners that grant primacy to the `master` branch
in this way. At some point I realised I was fighting Git on this matter,
and that this was a waste of effort, so I stopped.

> Having to tell everyone using some mechanism that the default is not
> the regular default is just peculiar.

FWIW, in my clone of perl5.git I deleted `blead` and created a `master`
branch whose upstream is set to `origin/blead` – that’s how tired I had
gotten of the friction. (So far the illusion appears perfect so I guess
Git only specially cares about the local `master` branch.)
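
That trick can be re-enacted in miniature like so (paths and names
illustrative; the point is that `git checkout -b` from a remote-tracking
branch sets the upstream automatically):

```shell
set -e
# Stand-in for perl5.git: an upstream repo whose default branch is blead.
git init -q -b blead upstream
git -C upstream -c user.name=me -c user.email=me@example.org \
    commit -q --allow-empty -m 'blead work'
# Clone it, then create a local master that tracks origin/blead.
git clone -q upstream clone
git -C clone checkout -q -b master origin/blead
git -C clone rev-parse --abbrev-ref 'master@{upstream}'
```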

If we were doing the Perforce migration now, I would advocate against
retaining the old nomenclature – despite the need to rewrite docs and
some of the tooling (which would be needing adjustment for Git anyhow).

Even with the migration being long past, I still contemplate advocating
a switch to `master` every now and then.

Regards,
-- 
Aristotle Pagaltzis // http://plasmasturm.org/


Re: On Not Merging new and old behavior ( was Re: Test-Stream + MongoDB scalar refcount issue )

2015-05-02 Thread Aristotle Pagaltzis
* Sawyer X <xsawy...@gmail.com> [2015-05-02 23:05]:
> Effectively what happened/happens is that, while plugins are now able
> to provide two different implementations without worrying about
> backwards compatibility (we originally wanted it to be seamless but
> turned out to be very hard), most plugins had a shared core. This was
> odd to maintain. You either fork it or you put it in a common ::Core
> module, or you ship both in the same distribution.

I was going to suggest shipping both in the same distribution – in fact
if the code is completely identical you could put them in one file and
basically just alias one namespace to the other, which seems desirable
as it reduces the maintenance burden to “essentially free” – but there
is a problem with that: dependencies and testing.

Essentially the only sane thing is to declare dependencies on both of
the ecosystems. Then you can also run your tests against each of them.

But this obviously sucks for users.

Yet the alternatives are worse. One of them is you pick one ecosystem as
the preferred one. That sucks for users on the other side. Or you do not
declare dependencies on *either* of them – which sucks for everyone.

Also, the tests for any ecosystem that is not declared as a dependency
will have to skip. But the user might then install that ecosystem’s core
later, and then your plugin magically is already installed in spite of
its tests against that ecosystem never having run.

(Conversely if the user has both ecosystems installed and the tests for
one of them fail but the tests for the other do not, there is no way to
install the plugin for just one ecosystem but not the other.)

All of this insanity is avoided if there is one plugin for one ecosystem
and another for the other. Then they can each declare their dependency
on the right one, and their tests for it can be run unconditionally, and
if they fail, the other ecosystem’s version of the plugin is unaffected.

But shipping a ::Core plus a wrapper for each ecosystem means you have
to ship (at least) 3 distributions, and every plugin comes in 2 parts.
That’s pretty annoying.

I think the right answer here is “Dist::Zilla plugin”: i.e. you maintain
a single codebase, in a single repo – but at release time, the authoring
tool of your choice bakes you two distinct distributions to upload. The
code in the repo can then be two-faced, and `prove` ought to run both
sets of tests. This makes it natural from the maintainer’s perspective.
And if the versions of the plugin need to diverge, from the perspective
of the CPAN index, that transition to separate codebases is invisible.

Regards,
-- 
Aristotle Pagaltzis // http://plasmasturm.org/


Re: File::Temp/File::Spec problems on Windows under taint mode – breaking change needed?

2015-04-23 Thread Aristotle Pagaltzis
* David Golden <x...@xdg.me> [2015-04-23 17:40]:
> [trying out this channel for toolchain discussions]

Not sure it’s the right venue for this level of discussion. It seems to
me more governance/admin-level. Then again looking back at the archives,
there have been quite specific technical discussions here before, so I’m
not sure quite where it falls.

> Please see https://rt.cpan.org/Ticket/Display.html?id=60340 for context.
>
> I think File::Temp needs to be able to work around File::Spec::Win32
> returning a non-writable directory.
>
> My proposal was to warn and fall back to ".". That's a small breaking
> change, but I think doing something in a different place than
> requested is better than failing entirely.
>
> Alternatively, it needs to validate the Win32 response and throw an error
> early, before attempting to make the directory so that the error message is
> more informative.
>
> Thoughts?

Windows considers the current directory an implied part of %PATH%, has
no concept like the executable bit of Unixoid OSes, nor does it allow
unlinking files while they’re open. So I’d feel uncomfortable about just
unexpectedly dropping junk into the current directory, announced or not.
Therefore I’d tend toward the latter.

But someone with better instincts for Windows may be able to call this
bunk. Mostly I didn’t want to leave the question warnocked.

Regards,
-- 
Aristotle Pagaltzis // http://plasmasturm.org/


Re: http://search.cpan.org/ does not answer

2014-07-17 Thread Aristotle Pagaltzis
http://log.perl.org/2014/07/7182014-scheduled-maintenance-moving-day.html


Re: Lancaster Consensus, deal with PUREPERL_ONLY=0

2014-06-02 Thread Aristotle Pagaltzis
Hi Jens,

* Jens Rehsack <rehs...@gmail.com> [2014-06-02 13:30]:
> * Karen Etheridge <p...@froods.org> [2014-06-02 01:30]:
> > I'm wondering why it isn't always possible to split a dist into two
> > implementations, one PP and the other with XS optimizations. If the
> > dist simply cannot be implemented using pure Perl (Moose, for
> > example), then surely the right thing to do is simply refuse to
> > install on PP systems?
>
> E.g. LMU - you cannot split out LMU::XS for historical reasons.

how come? Is there a technical difficulty with that? What prevents it?

Regards,
-- 
Aristotle Pagaltzis // http://plasmasturm.org/


Re: perlbrew switch perl-5.18.2 opening subshell

2014-03-22 Thread Aristotle Pagaltzis
* James E Keenan <jk...@verizon.net> [2014-03-22 19:25]:
> http://perlbrew.pl/Release-0.29.html appears to suggest that this
> is a recent modification to perlbrew's behavior.

Uhm, for rather relative values of “recent”: 0.29 is 2½ years old as
of this writing, and was released when perlbrew was just 1½ years old.
The current version is 0.67.

-- 
Aristotle Pagaltzis // http://plasmasturm.org/


Re: Preserving git history across repositories

2013-12-29 Thread Aristotle Pagaltzis
* James E Keenan <jk...@verizon.net> [2013-12-28 19:15]:
> At this point, the top-level directory merely contained the
> directories formerly found in ext/Pod-Html/. And 'git log' and 'git
> blame' indicated that the history had been preserved.

Except, at one time it had lived in lib/ and you are completely missing
that history. Yours effectively starts at the point where Nick moved it:
https://github.com/jkeenan/opodhtml/commit/75e62e6c1c6203daf034df38b525a6428d419b19
But you then have a couple of commits before that, at the beginning of
your local history… which are completely empty.

> git filter-branch --subdirectory-filter ext/Pod-Html/ -- --all

So first, make sure there are no useless commits:

git filter-branch --prune-empty \
--subdirectory-filter ext/Pod-Html/ -- --all

Next, you REALLY don’t want --all, which makes some 63 THOUSAND commits
that will take forEVER to process (hours). You want to look at only the
commits that touch relevant paths, which is some 530 total. On an SSD
that can be index-filtered in half a minute. MUCH better.

git filter-branch --prune-empty \
--subdirectory-filter ext/Pod-Html/ -- -- lib/Pod ext/Pod-Html

Note the double `--` – that is not a typo, the first `--` is for telling
git-filter-branch that the rest of the arguments are for git-rev-list,
so the second one gets passed through to git-rev-list, which takes it to
mean that only paths follow.

Next, since unfortunately --subdirectory-filter cannot extract multiple
directories at once, this job will need --index-filter.

git filter-branch --prune-empty --index-filter '
git rm --cached -r -q -- . ;
git reset -q $GIT_COMMIT -- ext/Pod-Html/ lib/Pod/
' -- -- ext/Pod-Html/ lib/Pod/

The first line will clear out the index entirely. The next line restores
the relevant directories from the original commit undergoing rewriting.

Now comes the hard part, because lib/Pod/ alone is both too much (there
have been a number of other modules in there over time) as well as too
little (the relevant files used to be strewn all over the place before
they were consolidated into ext/Pod-Html/). This requires sleuthing.
I started with a full clone of perl5.git and did

git log --name-status --full-diff -- ext/Pod-Html | egrep ^R

to find out all the files that were ever moved into ext/Pod-Html from
elsewhere:

lib/Pod/Html.pm
lib/Pod/t/eol.t
lib/Pod/t/htmlescp.pod
lib/Pod/t/htmlescp.t
lib/Pod/t/htmllink.pod
lib/Pod/t/htmllink.t
lib/Pod/t/htmlview.pod
lib/Pod/t/htmlview.t
lib/Pod/t/pod2html-lib.pl
pod/pod2html.PL

That’s not necessarily sufficient since those files may have chequered
histories of their own that may need tracking. E.g. it turns out that
pod/pod2html.PL had a predecessor called pod/pod2html.SH very early on,
which had been called pod/pod2html before even that. Are they relevant?
Maybe. In this case it turns out the answer is yes: they were not actual
shell scripts, but wrappers that generated a Perl pod2html script, whose
code became the installed pod2html, which in turn became the module plus
stub script… so you don’t want to miss them.

The other files turn out to be boring and obvious. (Thankfully!)

A potential complication is that Pod::Html used to load Pod::Functions.
But it turns out that it never actually used anything from that module…
as far as I can tell. So I’d take the easy way out: simply ignore that.

In the final analysis, you get this:

export L='ext/Pod-Html/ lib/Pod/Html.pm ...' # all the files listed above
git filter-branch --prune-empty --index-filter '
git rm --cached -r -q -- . ;
git reset -q $GIT_COMMIT -- $L
' -- -- $L

This extracts 287 commits from perl5.git, including the far beginning of
history. It leaves everything in the subdirectories it was in while it
was in perl5.git, but I figure that’s better here since the history of
splits and moves among files becomes unintelligible otherwise.

I’ve put up the result:

https://github.com/ap/opodhtml

Feel free to clone that as a basis for the rest of your work.

I figure a commit that moves ext/Pod-Html/* to the root of the repo is
a clean cut to document “today begins the rest of life for this module”.
I have not done this, figuring I’ll leave it to you to do the honours.

(I want to get rid of that repo once it has served its purpose so please
let me know either way. If you do choose to use it, ideally, do not fork
it on GitHub (since that would record it as a fork), just git-clone it,
change the origin URL to that of your existing GH repo, and force-push.)

-- 
Aristotle Pagaltzis // http://plasmasturm.org/


Re: Preserving git history across repositories

2013-12-29 Thread Aristotle Pagaltzis
* James E Keenan <jk...@verizon.net> [2013-12-29 14:05]:
> When I went to that page, I saw a box labelled HTTPS clone URL where
> I most often see SSH clone URL.
 I most often see SSH clone URL.

Uhm yeah, GitHub defaults the clone URL to HTTPS everywhere you do not
have push access.

> I tried the following and did not succeed.
>
> $ git clone https://github.com/ap/opodhtml.git aristotle-opodhtml
> Initialized empty Git repository in
> /Users/jimk/gitwork/aristotle-opodhtml/.git/
> fatal: https://github.com/ap/opodhtml.git/info/refs download error -
> Protocol https not supported or disabled in libcurl
  ^^^
>
> I have available:
> $ git version
> git version 1.6.3.2
>
> Can you identify the problem

I wager it’s exactly what it says the problem is: the libcurl that your
Git is linked against does not support HTTPS.

> or enable clone via SSH?

It’s nothing I did and nothing I can do. There’s a little label saying
“You can clone with HTTPS, SSH, or Subversion” right below the little
box you’re copy-pasting from though. How about trying that?

-- 
Aristotle Pagaltzis // http://plasmasturm.org/


Re: Preserving git history across repositories

2013-12-29 Thread Aristotle Pagaltzis
* James E Keenan <jk...@verizon.net> [2013-12-30 02:25]:
> Can you check that the 'master' branch at
> https://github.com/jkeenan/opodhtml has the correct history?

Looks fine. I guess you want to do something like

git push -f origin aristotle:blead

to do away with the old extracted history on the blead branch
and point it at the tip of the new extraction?

-- 
Aristotle Pagaltzis // http://plasmasturm.org/