Re: perlbrew switch perl-5.18.2 opening subshell

2014-03-22 Thread Kent Fredric
On 23 March 2014 04:31, James E Keenan jk...@verizon.net wrote:

 And I have added: 'source ~/perl5/perlbrew/etc/bashrc' to the end of my
 ~/.bash_profile file.  (AAMOF, it's the only entry yet in that file.)


This might be the cause; I have that stanza in ~/.bashrc instead.

And this seems to make a difference.

~/.bashrc is sourced each time bash is spawned, but ~/.bash_profile is only
sourced by *login* shells, which means you have to run `bash -l` to load
perlbrew into your environment.

And this matters, because without the perlbrew bashrc magic, `perlbrew
switch` runs the external command, while with the perlbrew bashrc magic,
`perlbrew` is a shell function:

    $ type -t perlbrew
    function

-- 
Kent


RFC Consensus on Network testing ENV?

2014-06-11 Thread Kent Fredric
If you grep CPAN you'll find a lot of ad-hoc ways of asking userspace whether
network testing is permitted.

http://grep.cpan.me/?q=%28if|unless%29\s.%2BENV.*NET

It seems widespread enough that we may as well establish a common way of
doing it, document it, and then enhance the available tooling to support it.

That way, instead of a user having to see a notice in their test output and
manually change ENV to suit ( and have to do this many different ways for
many different packages ), they'll just have to set the relevant option in
their installation stack.

This would also be a standard option for those running smoke testers: smoke
testers who felt that network testing is something that *should* work on
their boxes could set the relevant configuration tunable, and we'd get
better CPAN Testers coverage as a result.

Once we decide on an official, blessed ENV key, we can go about getting
support for it added to various testing tools, like:

https://metacpan.org/pod/Test::Is
https://metacpan.org/pod/Test::DescribeMe
and
https://metacpan.org/pod/Test::Settings

That way, test authors only need to do:

    use Test::Is 'networked';


or similar, instead of having to hand-code a bunch of ENV conditions and
skip rules.

The exact syntax those modules use can be decided at a later date; for now
we should simply settle on the ENV name.

Other thoughts:

Some tools have their distribution name prefixed in their ENV key, which
may be because those distributions consider network testing a bad idea even
when it is otherwise indicated as globally OK, and so request an *explicit*
opt-in for themselves. At least, that is why I imagine it is done. It may
just be paranoia that ENV key clutter causes unintended keys to be in
scope, leaving people running tests without meaning to.

Technically we don't really need to solve this as a feature within the ENV
key. I hashed together something far more complicated that would allow
per-distribution granularity without any specific testing/toolchain
infrastructure, but it seems it's not really necessary.

Ultimately if a smoke tester wanted that degree of granularity, the table
of rules could be established in the smoke testing toolchain itself, and
per-package ENV flags could be set up to allow network tests for some dists
and not for other dists using this simple approach.

I also considered using values other than 1 in this ENV flag ( ie:
Dist-Foo:Dist-Bar:Dist-Baz ), but it all smells too much of featureitis and
things we don't need yet, and overcomplicates things.

Essentially, the simplest thing that could work is just

 $ENV{NETWORK_TESTING}

This is informally used in a place or two already, mimics our existing set
of ENV vars, and it's sensible enough that we may as well standardize on it.
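
A minimal sketch of what a .t file honouring that key might look like ( the
skip message and test body are illustrative, not an existing convention
beyond the key name itself ):

```perl
use strict;
use warnings;
use Test::More tests => 1;

# Guard on the single proposed key rather than a per-dist ad-hoc name.
SKIP: {
    skip 'Set NETWORK_TESTING=1 to enable network tests', 1
        unless $ENV{NETWORK_TESTING};
    ok(1, 'placeholder for a test that talks to the network');
}
```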

Competition:

- A few places use NETWORK_TEST (singular)
  ( WWW::Offliberty, LBMA::Statistics, Net::Douban )
- A few places use NETWORK_TESTS (plural)
  ( RDF::RDFa::Template )
- Quite a few places use SOMEPREFIX_NETWORK_TEST or SOMEPREFIX_NETWORK_TESTS
  ( Net::Social::Mapper, WorePAN, RDF::Query, RDF::Trine, Data::Feed )
- RUN_NETWORK_TESTS is seen in at least one place
  ( IO::Pipeley )
- Other Weird things exist:
  - TEST_YAR_NET_CONNECT in Net::YAR
  - AUTHOR_TEST_NET_ROUTE in Net::Route
  - HAS_INTERNET in WWW::OPG
  - TEST_INTERNET in WebService::CIA
  - PUGS_TEST_ALLOW_NETWORK in Perl6::Pugs
  - TEST_NET in MojoX::UserAgent
  - ONLINETESTS in HTTP::LoadGen
  - USE_NETWORK in Arch

Well, you get the picture. It's a complete mess.

I merely got bored with enumerating all the ways network testing could be
enabled, and I doubt anyone running a smoker would care enough to track
them all down like I just started to.

Better to say "If you want network tests to be smoke tested, USE THIS".



-- 
Kent
( ~KENTNL )


Re: RFC Consensus on Network testing ENV?

2014-06-11 Thread Kent Fredric
On 12 June 2014 05:58, Jens Rehsack rehs...@gmail.com wrote:

 You never know whether a test fails because of failure or insufficient
 capabilities. So a restricting envvar isn't worse at all.


I think he was more saying that he'd prefer:

set NO_NETWORK_TESTING=1

over

set NETWORK_TESTING=1

Where network testing would run by default, and users on boxes where it
*couldn't* work ( for whatever reason ) could disable it.

That would be more helpful on an imaginary example environment that was
sandboxed where calling network functions during 'make test' triggers a
SIGKILL or something.

And then, with that proviso agreed upon, have a module that ascertains (
using basic probing within the test itself ) whether the network behaviour
is conducive to making the test pass, and if so, permits the test to run (
guarding the test against actual network problems instead of relying on an
ENV guard, and using the ENV guard only for users who have continued
issues with the heuristic failing to fail properly ).

1. begin test
2. load the test networking module
3. is NO_NETWORK_TESTING set? SKIP!
4. can access specified resources? yes - run tests
                                   no  - SKIP!
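
Steps 3 and 4 above could be factored into a single reusable check; this is
a hypothetical sketch ( neither the function name nor the probe callback is
an existing API ):

```perl
use strict;
use warnings;

# Decide whether network tests should run.
#   $env   - hashref of environment variables ( normally \%ENV )
#   $probe - coderef returning true if the needed resource is reachable
sub should_run_network_tests {
    my ($env, $probe) = @_;
    return 0 if $env->{NO_NETWORK_TESTING};    # step 3: explicit opt-out wins
    return $probe->() ? 1 : 0;                 # step 4: live reachability check
}
```

A .t file would then call should_run_network_tests( \%ENV, sub { ... } )
with a probe that touches the resources it needs, and skip_all when it
returns false.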


-- 
Kent


Re: wrt File::ShareDir

2014-12-09 Thread Kent Fredric
On 10 December 2014 at 20:48, Jens Rehsack rehs...@gmail.com wrote:

 Hi Kent,

 thanks for coming back on perl5-utils/File-ShareDir/pull/3.
 I really forgot that it's still open.

 While I discussed related things on cpan-workers@ and prove on
 File::ConfigDir + File::ConfigDir::Install how slightly improved EU::MM, I
 recognized that FHS compliant File::ShareDir will break
 File::ShareDir::Install and in distributions with more than such an
 Install-Extension it will leave the developer alone with the deep darkness
 of EU::MM extensions ...

 So - yes - the approach is important and needs to be worked in somehow,
 but the big picture needs to be structured before.
 Sorry for put you off again.

 Cheers
 --
 Jens Rehsack
 rehs...@gmail.com


Thanks for getting back, I guess =)

-- 
Kent

*KENTNL* - https://metacpan.org/author/KENTNL


On Not Merging new and old behavior ( was Re: Test-Stream + MongoDB scalar refcount issue )

2015-05-02 Thread Kent Fredric
On 3 May 2015 at 02:32, Aristotle Pagaltzis pagalt...@gmx.de wrote:

 That feels like “this is the point in our programme where we rethink the
 approach” to me; like a strong hint that the journey went down the wrong
 turn somewhere quite a ways back.


I am in agreement, and a few of us are now exploring the idea I believe you
were in favour of earlier, doing it in a separate namespace.

That there are likely equally many heisenbugs like this, ones that only
show up one in a million times, is a prospect I really dislike, especially
given we still have no idea why it's happening.

I would be fine with investing that level of debugging effort if we had
published code that was already in deployment, but having to debug that in
a prospective release? That sounds dangerous to me.

And more, not only is our attempt at perfect backcompat failing us, the
goal of mixing perfect backcompat and radical API changes screws us in
several ways. One significant one being that the new technology has to pay
the technical debt of all the bugs and mistakes in the old technology.

In the last day, my opinion has switched from "It must work together out of
the box" to "It must not work together at all, unless somebody who wrote a
test explicitly declared they wished it to work together."

It's better to explode outright and tell people "No, we can't guarantee
this test will do what you think it will" than to muddle around and *hope*
it just works. Hope is not sufficient at the level we are at.

We have to guarantee we don't break anything, and unlike the usual "perfect
is the enemy of the good", here we have no such quandary.

We *can* get the new stuff with no risk of breaking the old stuff.

The biggest conceptual downside of "force no compatibility unless
requested" is the potential that downstream ecosystem testing tools may, in
turn, make bad choices about future support schemes.

That is, conceptually, it's possible that a misguided author of a
distribution at the same level as, say, Test::Differences thinks it wise to
simply rewrite their existing code in the new framework.

In a "try to work together automatically if possible" environment, that
change would imply that existing users of Test::Differences would get the
new test system without changing their code, and that is unacceptable.

It is admittedly not *our* problem if TD makes a bad choice here and does
the wrong thing, but the right thing to do must be clear and encouraged,
and if authors of other important toolchain modules see fit to
intentionally do the wrong thing and break their dependants, and circumvent
our strictures that serve as a deterrent, then we cannot do anything about
that.

They can arguably do lots of dumb things unrelated to Test::Stream, and the
problem is unrelated to how we must proceed.

It is critical that _we_ must not break things.

If we establish a system not to break things, plus documentation, processes,
and code to encourage other people to do the same, then it is not on our
heads if people go in a different direction.

But we MUST be the gold standard, because literally everything else in Perl
is relying on us.


-- 
Kent

*KENTNL* - https://metacpan.org/author/KENTNL


Re: On Not Merging new and old behavior ( was Re: Test-Stream + MongoDB scalar refcount issue )

2015-05-02 Thread Kent Fredric
On 3 May 2015 at 07:03, Stefan Seifert n...@detonation.org wrote:

 What would that mean from a user perspective? Would one be able to mix
 Test::More and Test::More2 (and higher level modules based on them) in the
 same test file?


I initially thought this was desirable, but am now against that idea, and
am leaning in favour of "they should be mutually exclusive by design".

Part of that is because having a higher level module foolishly change which
system it is based on would be a big no-no (the Dancer's Dilemma), because
that would essentially move the same "new system breaks existing code"
problem to be hidden until some higher level module causes it to transpire
for you, despite you not actually changing any of your code.

Hence I'm now of a mind that unless a .t file explicitly states that it
wants to support both systems ( with the old system being implemented in
terms of the new ), it should not be supported, and the code should fail,
instead of giving a weak promise of "we tried" and hoping the test results
are accurate.

-- 
Kent

*KENTNL* - https://metacpan.org/author/KENTNL


Re: On Not Merging new and old behavior ( was Re: Test-Stream + MongoDB scalar refcount issue )

2015-05-02 Thread Kent Fredric
On 3 May 2015 at 08:24, Chad Granum exodi...@gmail.com wrote:

 I like the Test2 Idea, mind if I upload a module real quick to first-come
 the namespace? or are you going to claim it?



+1 =)


-- 
Kent

*KENTNL* - https://metacpan.org/author/KENTNL


Re: On Not Merging new and old behavior ( was Re: Test-Stream + MongoDB scalar refcount issue )

2015-05-02 Thread Kent Fredric
On 3 May 2015 at 08:37, Chad Granum exodi...@gmail.com wrote:

 Looks like Test2 is already taken, which is odd considering a permission
 check on PAUSE did not show it, but when I tried to upload something it
 failed.



https://pause.perl.org/pause/authenquery?pause99_peek_perms_by=mepause99_peek_perms_query=test2pause99_peek_perms_sub=1

Ugh. Lowercase test2 is registered. Guess that's a non-player then :(

-- 
Kent

*KENTNL* - https://metacpan.org/author/KENTNL


Re: On Not Merging new and old behavior ( was Re: Test-Stream + MongoDB scalar refcount issue )

2015-05-02 Thread Kent Fredric
On 3 May 2015 at 10:49, Chad Granum exodi...@gmail.com wrote:

 Test-Stream and Test-Builder effectively adds 1 dep, that is not
 burdensome.


And before anyone suggests the total test time will be larger: the total
test time will be the same as it is now, because the layout is divided such
that there are two suites in the one dist, if I'm not mistaken.

( You might get a little install-time overhead in `make`, and that's about
it )


-- 
Kent

*KENTNL* - https://metacpan.org/author/KENTNL


Re: On Not Merging new and old behavior ( was Re: Test-Stream + MongoDB scalar refcount issue )

2015-05-02 Thread Kent Fredric
On 3 May 2015 at 07:49, Stefan Seifert n...@detonation.org wrote:

 To me this future sound like an even larger burden for downstream.


The burden of extra namespace maintenance is annoying, sure. But the
namespace itself is basically free.

And a burdensome system for *new* development is much more acceptable than
foisting an even greater danger of breaking *existing* code in *existing*
production deployments.

Because much of the software that is presently in use, and presently
keeping things alive, is itself likely unmaintained.

And that implies that, by our changes, anyone who foolishly upgrades
Test::Simple as-is is taking on a huge burden of maintaining a substantial
amount of code which is now broken[*].

One might ask "Why upgrade Test::Simple if you're not maintaining the software?"

Well, one does not always upgrade just to get features. Sometimes you
upgrade to get bug fixes.

Making people pay the price of an upgrade, with features they didn't ask
for ( which introduce their own bugs ), to get the bug fixes they need is
undesirable.

*: And to clarify, there is no way for us to make any guarantees or
certifications that "we haven't broken anything". We've done tests, which
seems like best effort, but it's only best effort in the context of making
radical changes. The true best effort is not introducing changes that could
even *potentially* break code. And we have seen evidence recently that our
changes are *well* within their ability to break code.

-- 
Kent

*KENTNL* - https://metacpan.org/author/KENTNL


Re: File::ShareDir - Outstanding modernization

2015-05-05 Thread Kent Fredric
On 6 May 2015 at 12:33, breno oainikus...@gmail.com wrote:

 That said, I would rather see it in a new module than have it in
 File::ShareDir itself. Kinda like how nowadays I prefer Path::Tiny over
 File::Spec + Cwd (or even Path::Class).


There is still a need for a simple mechanism for hooking FSD from a 3rd
party library without resorting to crazy hacks, and it's conceivable that
such functionality could in fact be added without any need for radical
modernization.

I have two such modules that rely on said crazy hacks, but they shouldn't
be broken even under moderate FSD changes.

You could in fact implement the low level hook mechanism in such a way that
anyone who wanted Sno's fancy FHS-compliant directories could simply load a
new module, and that new module would hook FSD in the right places to get
the benefits.
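
As a sketch of what such a low level hook mechanism could look like (
entirely hypothetical: neither this package nor these functions exist in
File::ShareDir today ):

```perl
package File::ShareDir::Hooks;    # hypothetical package, for illustration
use strict;
use warnings;

my @hooks;    # callbacks consulted before the stock lookup

# A 3rd-party module ( e.g. an FHS layout provider ) registers a callback.
sub register_hook { push @hooks, shift }

# Wrapper for a dist_dir()-style lookup: the first hook to return a
# defined path wins; otherwise fall back to the stock behaviour.
sub dist_dir {
    my ($dist) = @_;
    for my $hook (@hooks) {
        my $dir = $hook->($dist);
        return $dir if defined $dir;
    }
    return _stock_dist_dir($dist);
}

# Simplified stand-in for File::ShareDir's real search ( illustration only ).
sub _stock_dist_dir { "auto/share/dist/$_[0]" }

1;
```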


But either way, I really want to champion small, logical, obvious, and
entirely uncomplicated steps to improve FSD itself slowly, and any more
radical changes should occur at the user's discretion, not at FSD's.

-- 
Kent

*KENTNL* - https://metacpan.org/author/KENTNL


Re: Documenting best practices and the state of ToolChain guidelines using CPAN and POD

2015-05-06 Thread Kent Fredric
On 6 May 2015 at 19:26, Peter Rabbitson ribasu...@cpan.org wrote:

 Sorry for the sidetrack


I was actually hoping for naming feedback :)

The names suggested seem amenable to me.

The only real problem I still have to resolve is what name to file
non-article-oriented things, like the Lancaster Consensus info, under.


-- 
Kent

*KENTNL* - https://metacpan.org/author/KENTNL


Re: Documenting best practices and the state of ToolChain guidelines using CPAN and POD

2015-05-07 Thread Kent Fredric
On 7 May 2015 at 09:31, Neil Bowers neil.bow...@cogendo.com wrote:

 Please please please, let’s not put this on CPAN. There are enough abuses
 of CPAN already. It’s a comprehensive archive of Perl, not everything in
 any way related to Perl. Plus I wouldn’t want to constrain this sort of
 documentation to pod, and how it’s presented on MetaCPAN / search.cpan.org,
 which is what we’d effectively be talking about.


Agreed. POD is a peculiar syntax. However, markdown in practice is not much
better. In terms of features, the only things it does better, from memory,
are: embedding images is native instead of requiring a =for html section,
and it has native table support, which I may have used once.

IMHO a significant amount of POD's suck is that POD is written to render
the way man pages render, so of course every POD has its first few lines as
the standard =head1 NAME ... =head1 VERSION ... =head1 SYNOPSIS, and that I
believe primes a way of thinking that leads to bad documentation. It's a
standard we adhere to, and I play along with it, but I feel it's wrong
every time I see it. /sidetrack


 If there were a canonical source of information related to toolchain etc,
 then plenty of things on CPAN would link to it in their SEE ALSO sections,
 but it really doesn’t have to be *on* CPAN.

 I’m no great fan of wikis, but I often thought it surprising that there
 isn’t a centralised wiki for Perl knowledge, a Perlipedia, if you will.


The primary reason for this is really straightforward:

1. The wikis available in Perl are not great.
2. We don't want to use PHP.
3. MediaWiki is not that great either, though it's still the best there is.

 It doesn’t have to be a wiki. It could be done via a github repo / github
 pages (yes, I did note your comment about markdown, but markdown is
 preferable to pod rendered via MetaCPAN,


Again, my point was not so much that markdown is evil, but that
browser-based editing is awful, and the tools at our disposal for working
with texts we can only edit in our browser are few.

Similarly, there is a reason you rarely see people editing source code
directly on GitHub, even though there is an option to do just that: it's so
lacklustre versus "edit it with a real editor and push it" that few can be
bothered with it.


 IMHO :-) The advantage of a wiki is that it makes it very easy to
 contribute.

And "easy to contribute" is highly subject to the wiki in question.
Keep in mind any new wiki will need a *new* authorization model, and all
potential editors will have to sign up to said wiki.

For instance, one would not want anyone to be able to edit toolchain
policies, and you'd have a set list.

Additionally, one would not really even want to edit the documents
directly; instead, you'd have drafts formed, and discussion on the diffs,
submissions, and changes. None of those workflows are available on wikis I
know of, but they're available right now via git + "it's done, publish it".

Similarly, I would not imagine a wiki a good way to draft a book amongst
collaborators; it's just not optimised for that usecase. It's optimised for
disorganized contributors changing stuff whenever they feel like it, where
review is an afterthought that happens occasionally, when one contributor
decides another contributor is wrong.

You could arguably have a workflow where all wiki content was drafted in
git first, and then some poor sod copy-pasted the text into the wiki form
and crossed their fingers. But that's a step backwards in terms of a
publishing platform.

Even some git-based workflow that published to GitHub Pages with some
atrocity of Jekyll would be better for this task, IMHO, than a wiki. ( It
is also a moderately low barrier to get working, using existing
contribution systems )


 There’s one domain that’s woefully under-used where this could live:
 perl.com

 This isn’t a fully thought-out response, but I wanted to (a) offer support
 for the concept, and (b) plead that it not be done via CPAN.


At best, I think the argument you channel that I can agree with is "CPAN
might not be the best place for these things". But the alternative places
are simply not in a shape where they can be used reasonably for these
things in any practical sense, and there is a lot of work in simply
building any infrastructure that might be more suitable than CPAN.

And even then, that platform may only be amenable to the P5P/Toolchain
group, not extraneous groups like DBIC/Moose and/or Author oriented
policies like one author has already expressed a desire for.

And building such a platform that covers these concerns as well will take
yet more time and effort, so you've got several orders of complexity
between where we are and having documentation we can refer to usefully.

Whereas replication-via-CPAN is right up there on the list of the simplest
things that could work, because literally every part required to make it
function is already in place.
-- 
Kent

*KENTNL* - 

Re: Documenting best practices and the state of ToolChain guidelines using CPAN and POD

2015-05-06 Thread Kent Fredric
On 7 May 2015 at 02:46, H.Merijn Brand h.m.br...@xs4all.nl wrote:

 How much I admire this effort (+1000 as you say from me as well), I
 think a structured HTML doc that people can download and read or PDF
 with index will reach a wider audience.



Generally, with HTML, if I want to read HTML, I need a browser. Having the
HTML on disk just serves as an annoyance factor while I tell my OS to tell
my browser to open the file. ( To contrast with `perldoc Foo` or `!mcpan
foo` in my search bar, needing to locate an ABSPATH first is a bit
annoying )

Generally that implies I have internet, and so a website ( such as metacpan
) is an ideal Go-To.

I would not be opposed to HTML and PDF renditions of said policies being
available, the PDF is of course more likely to be amenable for people who
just want to read it on their phone/kindle/whatever.

And you could, perhaps, periodically aggregate the Policy:: docs published
to CPAN, format them into a single document, and potentially ship that with
perl, but I find it hard to imagine P5P wants to elevate toolchain + CPAN
policies to "part of perl itself" level.

I find it hard to imagine any of this being integrated with perl itself, in
any way, no matter how it was formatted, because a large amount of the
concerns presented don't pertain to Perl itself, but to the CPAN ecosystem
in general.

-- 
Kent

*KENTNL* - https://metacpan.org/author/KENTNL


Re: Documenting best practices and the state of ToolChain guidelines using CPAN and POD

2015-05-06 Thread Kent Fredric
On 7 May 2015 at 02:28, David Golden x...@xdg.me wrote:


 This is like the xkcd standards problem (https://xkcd.com/927/).


I was literally waiting with bated breath for that to be referenced as I
wrote the original email :D

Before charging off down the path of using CPAN as a CDN because it's
 there, I'd think really hard about these questions:

 * Who is each piece of information for?
 * Where are they likely to look today?
 * Are there existing documents that can be fixed/expanded?


* CPAN authors primarily, but policies and guidelines should be authored in
such a way that darkpan authors may find them useful
* Google, and what they search for presently returns substantial amounts of
CPAN results

https://encrypted.google.com/search?hl=en&q=cpan%20policy # mostly cpan
https://encrypted.google.com/search?hl=en&q=cpan%20standard # mostly cpan
https://encrypted.google.com/search?hl=en&q=cpan%20guide # exception
to mostly cpan :P
https://encrypted.google.com/search?hl=en&q=perl%20policy # 50/50
https://encrypted.google.com/search?hl=en&q=perl%20standard # a mixed bag
with stackoverflow
https://encrypted.google.com/search?hl=en&q=perl%20guide # learning
material
https://encrypted.google.com/search?hl=en&q=perl%20toolchiain%20policy #
most relevant result so far ... 4th result is XDG's personal blog, though
3rd result is the Lancaster Consensus, which is good.
https://encrypted.google.com/search?hl=en&q=perl%20toolchiain%20guide #
similar to above, but more CPAN results
https://encrypted.google.com/search?hl=en&q=perl%20toolchain%20guide #
similar to above
https://encrypted.google.com/search?hl=en&q=cpan%20toolchain%20policy #
Aristotle's blog takes first place and there's a lot of similarity to the
perl toolchain results
https://encrypted.google.com/search?hl=en&q=cpan%20toolchain%20standard #
similar again
https://encrypted.google.com/search?hl=en&q=cpan%20toolchain%20guide #
similar again

The overwhelming trends being:

 - CPAN
 - Personal Blogs
 - Deep in Github.
 - Stackoverflow

* People may also be inclined to expect documentation pertaining to
standards to be on CPAN. I personally was surprised that pumping
"Lancaster" into MetaCPAN didn't give me a lot of relevant results.

* Some existing documentation exists, but it is scattered. For instance,
this article:
http://www.dagolden.com/index.php/369/version-numbers-should-be-boring/
really deserves to be in a more visible place, or something equivalent to
it does.

Not as binding law necessarily, but as guiding wisdom that people can
simply refer to as "we follow this amongst each other voluntarily", similar
to how people presently use Perl::Critic policies, so that all people who
follow the guidelines benefit from the cohesion. ( People are free to make
various policies law within the code they control, of course, and an
external representation of those policies helps )


 The problem with blindly throwing stuff like consensus agreements out
 there is that a huge portion of it is of no use to anyone outside toolchain
 maintainers and people who insist on knowing how tools work.


Right. But presently there is no obvious way to get this knowledge, or, if
you have one piece of said knowledge, an obvious way to read related
knowledge. We presently rely on a system of people making a mistake, us
discovering it, and us telling them, using our tribal knowledge, why it's
wrong, and there's no way for them to browse the other such tribal
knowledge they don't yet know they need.

At best, presently, we can defer to people's blogs on various things, and
we have to have people who can remember which topic is on which blog to
direct people to it. That much is very sub-optimal.



 If people go looking for information and find irrelevant stuff, they'll
 stop looking.  Information needs to be narrowly targeted or it's just more
 noise.

 Project documentation is probably best kept with the project.


Right, but "project" here often spans multiple CPAN modules. As such, the
guidelines pertain to the host of modules directly in control of the people
controlling key pieces of that project, but other people may still find the
policies and guidelines useful to consume voluntarily for non-project
related code.



 Guidelines for new authors should probably build on guidelines we already
 give (eg perlnewmod) -- partly to ensure that there isn't contradictory
 information and partly because putting it in the core makes it canonical by
 default.


Right. This was partly my initial idea, but that has its limitations.
Mostly, that a lot of these concerns are not core related. There are
policies that apply to stuff in core, but it's easy to see there are
guidelines which would not be suitable as a P5P-governed project. For
instance, I'd imagine the versions article would not be accepted as part of
core, because its being there would lend *too much* authority to the
document.

Policies pertaining to the toolchain really are an external concern from
perl itself.

It also would imply policy 

Re: Measuring water quality of the CPAN river

2015-05-10 Thread Kent Fredric
On 11 May 2015 at 12:37, Neil Bowers neil.bow...@cogendo.com wrote:

 These are pretty simple, and have some problems, so I’m hoping someone
 might come up with something more useful.


Random thought while reading: backflow could be a thing to check for.

Some dists have no tests but still have deps, causing a false PASS.

So the quality of a dist could be measured indirectly by the failure rate
of its dependents.

Or as an analogy, we have 2 sampling points in the river 100m apart.

If we sample at point 1, we can't sense the fecal matter because it enters
the river downstream of the sampling point.

But the fecal matter is sampled at point 2, so by conjecture, either point
2 created it, or it entered between points 1 and 2.

Sampling across the river at point 2 helps infer where the probable fecal
matter is.


-- 
Kent

*KENTNL* - https://metacpan.org/author/KENTNL


Re: Measuring water quality of the CPAN river

2015-05-11 Thread Kent Fredric
On 11 May 2015 at 19:20, Neil Bowers neil.bow...@cogendo.com wrote:

 look at 2 or more CPAN Testers fails where the only difference is an
 upriver version number.


My point didn't pertain to upriver versions changing, but to the
observation that upriver modules can have buggy code that is only broken on
certain architectures, and have no tests to expose that fact.

Thus, anything consuming that module, regardless of the version it's at,
will have failure rates on CPAN for that architecture that the author of
the downstream module didn't anticipate.

But the problem is the upriver module, and it's not a symptom exposed by
upriver *changes*, but a fundamental issue in that upriver was *always*
broken.

The most common case of this I've seen is when upriver is only coded to
work with a specific NV size, and on different NV sizes its behaviour is
unpredictable.

Authors of downstream modules who have the magical NV size don't notice the
problem and ship.

And soon after, they get failures pertaining to their NV size, *IF* they
had tests.

This can go on and on: you can get things 3+ deps away exposing an error
that is truly in an upstream module, simply because the intermediate
modules' absence of tests means they don't expose it.

Obviously the accuracy of any such metric gets weaker the further it gets
from the real point of origin. And even at D=2, it's already rather vague.


-- 
Kent

*KENTNL* - https://metacpan.org/author/KENTNL


Re: Documenting best practices and the state of ToolChain guidelines using CPAN and POD

2015-05-09 Thread Kent Fredric
On 9 May 2015 at 09:54, Philippe Bruhat (BooK) philippe.bru...@free.fr
wrote:


 Simple template with basic HTML around.

 For now, only markdown format is supported, but I intend to support
 additional formats as needed.


I think with some of this question I was looking for information as to how
the document hierarchy itself is laid out.

I get the impression that, so far, everything will just be boiled into a
top-level list under an H1 for that origin.

I'm not sure that's ideal, but it works for now.

Some people have voiced that they still want to publish their own stuff to
CPAN, and they see CPAN.io as more of a federation system, like the
Enlightened Perl Ironman deal, instead of a publishing platform.
But to me, the important elements are "get it working as best as possible
with the least amount of work, now", and giving people the ability to
choose whether they want to go the CPAN route or the CPAN.io route. Having
both of them viable is much, much superior to sitting around waiting for
the perfect publishing platform to manifest.

So, my TODO list:

1. Draft a contribution / purpose guideline for display on CPAN.io
2. Rein in my existing Policy:: namespace draft documentation to be more
restrictive and discourage CPAN publishing, and instead prioritize pointing
to the CPAN.io contribution guide, while outlining the clearest path one
should take if one wishes to publish to CPAN anyway. ( The present draft in
pre-CPAN.io mode is here: https://github.com/kentfredric/Policy )


-- 
Kent

*KENTNL* - https://metacpan.org/author/KENTNL


Re: Dependency List Approval / Problems for ExtUtils::Manifest

2015-06-22 Thread Kent Fredric
On 23 June 2015 at 10:24, David Golden x...@xdg.me wrote:

 I think the early toolchain list is in write_buildcustomize.pl:
 http://perl5.git.perl.org/perl.git/blob/HEAD:/write_buildcustomize.pl#l19

 AFAICT, that makes the perl files in those lib directories available to
 miniperl directly in @INC in order to bootstrap the XS modules, and then the
 toolchain modules get build properly afterwards.  But it's been a while
 since I looked really closely at the build order.

 I *think* that if you're talking test-time dependencies, all the XS stuff
 should be built by the time tests run, but someone would have to check that.

Apparently Cwd.pm is safe due to XS being Optional there.

But that leaves me with a query about what XS things File::Temp needs,
which it seems to use a lot of.

Fcntl, POSIX, and Scalar::Util are the ones I can see at the top.

Again, these should all probably be *compiled* and in @INC by the time
the test phase comes around, so it's probably a non-issue. I just don't
know the build cycle well enough to know whether any stage might
preclude this.


-- 
Kent

KENTNL - https://metacpan.org/author/KENTNL


Dependency List Approval / Problems for ExtUtils::Manifest

2015-06-22 Thread Kent Fredric
TL;DR^WABSTRACT:
- Is it safe to bundle Test::TempDir::Tiny or similar in EUMs t/tlib?
- I would really like it and it would make such testing cleanly much
better, wow.

---

I'm just going to scratch this together here since the varied
discussions on IRC probably haven't seen the right people and this is
contributing to the stall.

First and foremost: ExtUtils::Manifest has an ungainly unmaintainable
test suite.

It does a lot of IO, and every test ( of which there are a few hundred
) relies on the exit condition of the one before it.

This makes doing any useful changes problematic in a plethora of ways.

In an ideal situation, tests should be grouped in atomic batches,
state created, the test performed, and the state destroyed.

Naturally, File::Temp and friends are desired.

But here this creates a potential defect when a test fails: the failure
state is destroyed when the temporary directory is cleaned up at scope
exit.

Which is why I seek to perform something similar to
Test::TempDir::Tiny, where failure state is preserved on failure in
the temporary directory.
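The preserve-on-failure idea is small enough to sketch. A hedged
illustration of the Test::TempDir::Tiny approach (not its actual
implementation; the tmp/ layout here is invented):

```shell
perl -w <<'EOF'
use strict;
use File::Path qw(make_path remove_tree);

# Scratch dirs live under tmp/; a dir is removed only when its test
# batch passes, so failure state survives for post-mortem inspection.
my $dir = 'tmp/demo';
make_path($dir);

my $passed = do {
    open my $fh, '>', "$dir/MANIFEST" or die "open: $!";
    print {$fh} "lib/Foo.pm\n";
    close $fh;
    -s "$dir/MANIFEST" ? 1 : 0;    # stand-in for a real test batch
};

remove_tree($dir) if $passed;      # keep the directory on failure
print $passed ? "ok\n" : "not ok - state kept in $dir\n";
EOF
```

Splitting the suite into several files then only requires each file to
pick a distinct directory name to avoid racing its siblings.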

I also seek to split the test suite into multiple test files, each
with batches of atomic logic.

This, naturally, also requires a TempDir of some description, because
simply splitting the current tests across several files creates
guaranteed race conditions.


However, the problem(s) at present with that approach is the dependencies.

TTDT is all core deps, but some of them might require XS ( I'm not sure
what Cwd requires ).

I'm glad to see it no longer uses Time::HiRes though, which greatly pleases me.


-- 
Kent

KENTNL - https://metacpan.org/author/KENTNL


Re: democratizing some of my dists

2015-11-13 Thread Kent Fredric
On 14 November 2015 at 12:50, James E Keenan  wrote:
> doesn't that mean that its ownership is already democratized?


Not fully until he gives somebody else publish bits.

Software::License,RJBS,f
Software::License::AGPL_3,RJBS,f
Software::License::AGPL_3::or_later,VDB,f
Software::License::Apache_1_1,RJBS,f
Software::License::Apache_2_0,RJBS,f
...
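Those lines are in the 06perms.txt format ( module, author, flag: "f" for
first-come, "c" for co-maint, "m" for registered maintainer ). A hedged
sketch of spotting modules with no co-maintainer, using invented sample
data rather than the real file:

```shell
# A module that appears on only one row has a single author with
# permissions, i.e. its ownership is not yet democratized.
cat > perms.csv <<'EOF'
Software::License,RJBS,f
Software::License,SOMECOMAINT,c
Software::License::AGPL_3,RJBS,f
EOF

awk -F, '{ seen[$1]++ } END { for (m in seen) if (seen[m] == 1) print m }' perms.csv
```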

-- 
Kent

KENTNL - https://metacpan.org/author/KENTNL


Re: Should Test2 maintain $! and $@?

2016-01-11 Thread Kent Fredric
On 12 January 2016 at 13:53, Chad Granum  wrote:
> $! and $@ are altered by many many things, adding { local ... } around all
> of them is a pain

As much as I agree, and as much as this is a "thing in all perl, so we
should expect this problem from every module"

If I was to offer a counter example, consider the effort of a `*.t`
file needing to preserve these things instead.

> ok(do_something_scary());
> is($!, 0, "expected $! val");
> is($@, undef, '$@ not changed');

vs

> my ( $error, $exception );
> ok(do {
>   local $@;
>   local $!;
>   my $ret = do_something_scary();
>   ( $error, $exception ) = ( $!, $@ );
>   $ret
> });
> is($error, 0, "expected $! val");
> is($exception, undef, '$@ not changed');

I'm not sure I'd consider the second of these an "improvement".


-- 
Kent

KENTNL - https://metacpan.org/author/KENTNL


Re: Should Test2 maintain $! and $@?

2016-01-11 Thread Kent Fredric
On 12 January 2016 at 16:14, Chad Granum  wrote:
> That said, it just occured to me that this can possibly be accomplished by
> having a context store $! And $@ when it is obtained, then restore them when
> it is released, which would avoid needing to use local everywhere, and still
> preserve them for all tools automatically...


As written, that suggestion scares me slightly, because it wouldn't be
entirely obvious that it behaves that way.

I'd be fine with a system like:

my $error_state = save_errors();
...
restore_errors($error_state);
return $value;

Or

    return preserve_errors(sub {
        # code that can generate errors that shouldn't leak
    });

Just lumping it in with the "Context" object seems too magical.

-- 
Kent

KENTNL - https://metacpan.org/author/KENTNL


Re: Should Test2 maintain $! and $@?

2016-01-12 Thread Kent Fredric
On 13 January 2016 at 10:48, Sawyer X  wrote:
>
> If so, I think you should probably clarify what Test2 *does* do. It
> doesn't provide the functions - alright. What *does* it provide then?


Oh, and a thought: it may help to consider what testing /testing tools/
looks like here, and whether the tools themselves need to trap $! and $@
and test for their changes.

It's probably immaterial, and no different from "handle it at the chrome
layer", but it may have implications in the internals that make things
more difficult depending on whether they're suppressed or not.

-- 
Kent

KENTNL - https://metacpan.org/author/KENTNL


Re: Should Test2 maintain $! and $@?

2016-01-13 Thread Kent Fredric
On 14 January 2016 at 07:39, Chad Granum  wrote:
> Right now the version I have up on cpan just stores them on creation, and
> restores them on final release. Nothing happens for nested calls to
> context()/release(), all my downstream testing shows no breakages (not a
> full smoke, but does include several modules sensitive to $! and $@
> changes).


In the event some code like this dies:

 sub foo {
  my $context = context();
  die "Bar";
 }

What will happen with regards to $@ auto-stacking?

If somebody catches the die in a higher context, what will $@ be?


-- 
Kent

KENTNL - https://metacpan.org/author/KENTNL


Re: Should Test2 maintain $! and $@?

2016-01-17 Thread Kent Fredric
On 18 January 2016 at 18:53, Chad Granum  wrote:
> Then again, if you /really/ want the mechanism in $ctx, I can add
> $ctx->release_preserving (naming is hard, give me a better one) which does
> have the behavior... but at that point, which behavior do you want, preserve
> one, preserve all, preserve what is requested in arguments? Once again,
> seems over-complicated for something done so rarely, and so easy to just do
> without a mechanism.


You could possibly make it a parameter to ->release

->release({ no_restore => 1 }) # don't restore anything
->release({ no_restore => [qw( $@ )] })  # only avoid restoring $@

But then you might be slowing down the code-path of "release" by
having an additional condition.

Though I think, given how infrequently you'll need nuanced control over
the variables, "no_restore => 1" is the only thing you need short term,
as "preserve everything" and "preserve nothing" are both simple enough
to be useful.

Either way, if preserve/restore are to be done by the context without
any user-side control, the simplest way of avoiding the undesired side
effects should be documented, to discourage users from resorting to
cheap tricks that cause future maintenance headaches.



-- 
Kent

KENTNL - https://metacpan.org/author/KENTNL


Re: Last call for review of Test-Builder using Test2 (Formerly Test-Stream)

2016-02-06 Thread Kent Fredric
On 6 February 2016 at 08:14, Chad Granum  wrote:
> If there is anything in these
> distributions (Test2 in particular) that makes you uncomfortable, you
> need to speak now.


Mentioning here for visibility:

As with Test-Stream, where apparent silence led to a premature
conclusion that finalisation was appropriate, I feel that interpreting
the current lull in activity the same way is equally premature.

I've seen a proposal floating around that might raise our ability to
be confident about the feature set of Test2 before requiring its
implementation/feature-freeze.

It's just that the people who implied they were going to present said
proposal haven't had the tuits to do so yet.


-- 
Kent

KENTNL - https://metacpan.org/author/KENTNL


Re: addressing kwalitee and other quality issues at the head of the CPAN River

2016-01-28 Thread Kent Fredric
On 29 January 2016 at 13:01, Neil Bowers  wrote:
>  - use warnings


This one can sometimes be a regression: adding warnings where there are
none is good in novice code.

But people who know what they're doing may omit warnings on purpose.

So this is not so much a "Quality" metric as a "Best practices for most
projects" one.

Proof by example: https://metacpan.org/source/RJBS/if-0.0606/if.pm

-- 
Kent

KENTNL - https://metacpan.org/author/KENTNL


Re: addressing kwalitee and other quality issues at the head of the CPAN River

2016-01-28 Thread Kent Fredric
On 29 January 2016 at 13:01, Neil Bowers  wrote:
> adding min perl version


I'd be particularly careful with that one. People who don't know what
they're doing are predisposed to set the bar in a premature location.

For instance, if somebody has a motto of /attempting/ 5.6 support but
doesn't guarantee it, a dist may /appear/ to have a higher minimum
version until one of its dependencies is likewise made 5.6-friendly.

Minimum requirements should be statements of fact, and the lowest bar
that is viable, not a conjectural "this is probably needed". For
conjectural things, recommends and suggests are better ( but putting
perl itself in either of those fields is kind of nonsense, because no
CPAN toolchain can/will transparently upgrade your perl ).


-- 
Kent

KENTNL - https://metacpan.org/author/KENTNL


Re: Why do we keep using META.json for stuff that has nothing to do with installation

2016-02-27 Thread Kent Fredric
On 28 February 2016 at 00:06, Peter Rabbitson  wrote:
>  perhaps rethinking "Meta for end-user install purposes" and
> "Meta for meta" would solve most of the recent repeated breakages by "oh
> downstream doesn't like this new thingymagic"

+1

I've been frustrated by this myself; the large amount of auxiliary data
just makes decoding the META needlessly complicated.

And it's amplified by Dzil needlessly documenting both install-relevant
and non-install-relevant data in *both* META.yml and META.json ( esp:
x_Dist_Zilla ).

I've even compensated by using a YAML generator filter that excludes
x_* just to reduce my dist-size.

And this seems even more relevant in a static-install future, where all
installs are META-driven.

-- 
Kent

KENTNL - https://metacpan.org/author/KENTNL


Re: Renaming the "QA Hackathon"?

2016-04-09 Thread Kent Fredric
On 10 April 2016 at 03:45, David Golden  wrote:
> Perl Toolchain Summit


Because "Toolchain" is not really a word that necessarily makes sense
outside Perl, you could use "Infrastructure" or even "Critical
Infrastructure" in its stead. ( I personally like "Critical"; it's a
very spicy word, and accurately reflects the propensity of this sphere
of things to either go pear-shaped or steal your SAN cookies )

Also, depending on how many words you want to spend, throwing "Annual"
in there might help give some context to how frequently these things
happen.

The narrative you want to be carrying in your words is:

"Every year, we get all our brightest in one place and discuss and
work on the most pressing and important problems that affect the
broadest number of concerns in the Perl+CPAN ecosystem"

-- 
Kent

KENTNL - https://metacpan.org/author/KENTNL


Re: Renaming the "QA Hackathon"?

2016-04-09 Thread Kent Fredric
On 10 April 2016 at 04:49, Sawyer X  wrote:
> Perl Annual Critical Infrastructure Summit.


Perl Infrastructural Engineering Summit.

PIES.  :D


Summit of Perl Infrastructural Critical Engineering.

SPICE.

Hth.

-- 
Kent

KENTNL - https://metacpan.org/author/KENTNL


Re: Thoughts on Kik and NPM and implications for CPAN

2016-03-25 Thread Kent Fredric
That scenario doesn't seem right. A mere deletion of a .pm file in a
future release ought to be the tripwire for such a warning. An explicit
namespace clearance is much more dire.
On 25/03/2016 11:51, "Neil Bowers"  wrote:

> I wonder what the volume in one case vs the other is. Maybe the attempt
> to distinguish the cases is premature optimisation that can be skipped?
> (Hopefully so, but I don’t know.)
>
>
> I suspect you’re right, and we should start off with everything, and only
> worry if it seems too noisy.
>
> One optimisation we might consider including from the start:
>
> Alert if an indexed release is scheduled for deletion, unless there’s a
> higher version of the same dist already indexed.
>
>
> This would prevent the klaxon going off in the case where Foo-Bar-1.01
> included Foo::Bar::Error, which was changed to Foo::Bar::Exception in 1.02.
> With both releases in your author directory, both Foo-Bar-1.01 and
> Foo-Bar-1.02 will appear in 02packages, with Foo-Bar-1.01 only appearing
> against Foo::Bar::Error.
>
> Thinking about it, it should probably still be alerted on, just on the
> off-chance that the rug *is* getting pulled from under someone else, but
> it could be flagged as this “possible module renaming” case.
>
> Neil
>
>


Re: Open source archives hosting malicious software packages

2017-09-21 Thread Kent Fredric
On 22 September 2017 at 00:11, David Cantrell  wrote:

> But is anyone paying attention? I assume you're talking about
> #cpantesters, which I'm on, but I hardly ever look at it, and when I do
> look I certainly don't look at scrollback, let alone looking at
> scrollback *carefully*.

It does duty on freenode #perl too, and it's not uncommon for people
like me to glance at https://metacpan.org/recent ( usually to see
something and regret looking )



-- 
Kent

KENTNL - https://metacpan.org/author/KENTNL


Re: Open source archives hosting malicious software packages

2017-09-21 Thread Kent Fredric
On 21 September 2017 at 20:24, Neil Bowers  wrote:

> I’ll tweak my script to not worry about packages in the same distribution
> (eg Acme::Flat::GV and Acme::Flat::HV). Then I just need to get a list of
> new packages each day, and I’m just about there :-)

I'd probably want PAUSE trust modelling to play a part too. On the
basis that people are unlikely to typo-squat themselves, and that
recognized, reputable authors are less likely to typo-squat.

(Because reputation is an important thing to maintain in open source:
tarnish your reputation and nobody will use your stuff any more.)

Which, by inversion, means that newer authors are more disposed to
typo-squatting, and that people are more likely to typo-squat things
dissimilar to what they already own.

A long time ago I was discussing with somebody ( I can't remember who )
that we could generalize this problem as a public feed, allowing anyone
to review new module permission assignments and changes.

Having public access to the permissions list is good, but having some
sort of feed that makes it public knowledge every time a new permission
or a permission change occurs would do wonders for this problem ( and
others, like the surprise passing of important but undermaintained
modules into the hands of potentially too-keen maintainers ).

It would even expose attempts at smuggling typo-squatted names into the
back of distros with dissimilar names, similar to cuckoo packages.


-- 
Kent

KENTNL - https://metacpan.org/author/KENTNL


Re: Making www.cpan.org TLS-only

2017-09-01 Thread Kent Fredric
> downloading CPAN content roughly to:
> internet connection to not muck with the code you receive.
>
> Obviously the real fix here is that clients need to request via TLS (since I
> doubt any clients other than regular browsers support HSTS).

I was under the impression that any "code" ( eg: content submitted via
pause ) had an existing, long standing additional cryptographic
security on top of plain text, namely:

- Per author CHECKSUM files
- Which are signed by the PAUSE GPG key

http://cpan.metacpan.org/authors/id/K/KE/KENTNL/CHECKSUMS

And I've been using that feature via my CPAN client for years now. ( I
notice occasionally when the checksum files are broken )
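For anyone unfamiliar with the mechanism, the client-side check amounts
to roughly the following ( an offline sketch; a real client fetches the
author's CHECKSUMS file, verifies its signature against the published
PAUSE key, and compares the tarball's digest with the recorded one ):

```shell
# 1. Verify the clearsigned CHECKSUMS file ( needs the PAUSE key and
#    a real CHECKSUMS file, so it's only shown as a comment here ):
#      gpg --verify CHECKSUMS
# 2. Compare the tarball's digest against the CHECKSUMS entry:
printf 'hello\n' > Foo-Bar-1.0.tar.gz    # stand-in for a real tarball
sha256sum Foo-Bar-1.0.tar.gz
```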

I'm fine with allowing there to be additional security mechanisms; it's
*requiring* users to engage in security mechanisms when there's no
*need* nor *desire* to on the user's behalf that I consider potentially
harmful.

Is there other content coming from the CPAN network that I'm not
considering here?




-- 
Kent

KENTNL - https://metacpan.org/author/KENTNL


Re: Making www.cpan.org TLS-only

2017-09-01 Thread Kent Fredric
On 1 September 2017 at 13:10, Ask Bjørn Hansen  wrote:
> Hi everyone,
>
> We’re considering how/how-much we can make www.cpan.org TLS-only.
> http://log.perl.org/2017/08/tls-only-for-wwwcpanorg.html
>
> I expect that we can’t make the whole site TLS-only without breaking some 
> CPAN clients, so the conservative version is to force TLS for
>
> - any url ending in *.html
> - any url not in matching some variation of
>  (/authors/ | /MIRRORED.BY | ^/modules/[^/]+ )
>
> Does that sound about right? Maybe /src/, too?
>
> (Also - we will support TLS for www.cpan.org permanently now, so please 
> update URLs where possible and appropriate).
>
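( For concreteness, the split quoted above amounts to a small path
classifier; this sketch simplifies the /modules/ pattern, which in the
proposal only matches top-level files: )

```shell
# yes = force TLS for this path, no = leave plain HTTP available.
force_tls() {
  case "$1" in
    *.html) echo yes ;;                                  # any *.html page
    /authors/*|/MIRRORED.BY|/modules/*|/src/*) echo no ;;
    *) echo yes ;;
  esac
}

force_tls /authors/id/K/KE/KENTNL/CHECKSUMS   # no
force_tls /index.html                          # yes
```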

I'm just side-stepping the "what" momentarily to ascertain the "why".

I know plain text is "insecure", but it's not currently clear from this
proposal what content needs securing, and what real vulnerabilities this
aims to solve.

There probably are some, but it needs to be much clearer. Specifically,
so we know the solution we pick fixes the problem we know is there, and
so it's obvious that the downsides of the chosen solution are necessary.

As it stands, it *looks* like the argument is "we're going to do this
because otherwise Google might be more shouty". ( I assume it isn't;
that's just all the context I have to go on, so I'm looking for
additional context )

> Ask



-- 
Kent

KENTNL - https://metacpan.org/author/KENTNL


Re: Confirming PAUSE operating model safe harbor for Alt::* distributions

2017-10-30 Thread Kent Fredric
On 31 October 2017 at 15:54, David Golden  wrote:
> On Mon, Oct 30, 2017 at 3:11 AM, Aristotle Pagaltzis 
> wrote:
>>
>> >- Per the "explicit user confirmation", I think an explicit opt-in
>> >  must be present, not merely checking for overwriting via hashing.
>>
>> I don’t think so, and think it’s fine to not require it. But you didn’t
>> state a reason why you think that so I don’t know whether I disagree
>> with you.
>
>
> Even if Peter's mechanism is in the spirit of the operating model, I would
> prefer the higher standard of "explicit confirmation" as the operating model
> call for.
>
> If you need a rationale -- practically speaking -- consider this scenario:
>
> 1. User without DBIC or DBIC::Boring installs some module Foo that depends
> on DBIC::Boring; DBIC::Boring gets silently installed.
> 2. User installs some module Bar that depends on DBIC; because DBIC doesn't
> check for conflicts with DBIC::Boring, it silently overwrites it.
> 3. Foo is now broken.  User doesn't know why.
>
> Whereas if in #1, the user had to opt into DBIC::Boring, then they would be
> accepting the risk of future breakage from DBIC conflicts.

I would expect, based on the stability goals of DBIC::Boring, that the
same kinds of breakages would be present for anyone who simply upgraded
from an older version of DBIC to a newer one.

And subsequently, module Foo is broken with the new version of DBIC
*anyway*.

Which means there is a logical problem in the ecosystem entirely
independent of the existence of DBIC::Boring.

The only thing DBIC::Boring adds to the table is a situation where Foo
is not broken, other than by "having old DBIC".

Complaining that DBIC::Boring's own sub-ecosystem could become broken
by DBIC::Boring existing seems strange here, because the goal of this
safe-harbour provision is that DBIC::Boring can be permitted to exist
without interfering with DBIC's ecosystem.

And nothing in that example suggests DBIC::Boring interferes with
DBIC's ecosystem.

Only that DBIC's ecosystem may retroactively interfere with
DBIC::Boring's, which is an understood consequence of that design.


-- 
Kent

KENTNL - https://metacpan.org/author/KENTNL