Re: YAML (maybe other modules) might need CPAN smoking

2016-07-05 Thread Sawyer X
I agree.
On Jul 5, 2016 18:43, "Karen Etheridge" <p...@froods.org> wrote:

> There should be -TRIAL releases of anything this far upriver, at the very
> least.
>
> On Tue, Jul 5, 2016 at 5:34 AM, Sawyer X <xsawy...@gmail.com> wrote:
>
>> YAML.pm.
>>
>> It isn't tool chain, but it is relatively upriver, so I thought this
>> group would be interested. Additionally this idea can extend to
>> non-core toolchain tools.
>>
>> On Tue, Jul 5, 2016 at 2:18 PM, David Golden <da...@autopragmatic.com>
>> wrote:
>> > YAML or YAML::Tiny? Only the latter is tool chain.
>> >
>> > On Jul 5, 2016 8:14 AM, "Sawyer X" <xsawy...@gmail.com> wrote:
>> >>
>> >> YAML broke some stuff.
>> >>
>> >> I'm beginning to think that YAML is one of the modules that could
>> >> really use a CPAN smoke of its own. I think the more upriver the
>> >> module, the more this becomes relevant.
>> >>
>> >> Thoughts?
>>
>
>


Re: YAML (maybe other modules) might need CPAN smoking

2016-07-05 Thread Sawyer X
YAML.pm.

It isn't tool chain, but it is relatively upriver, so I thought this
group would be interested. Additionally this idea can extend to
non-core toolchain tools.

On Tue, Jul 5, 2016 at 2:18 PM, David Golden <da...@autopragmatic.com> wrote:
> YAML or YAML::Tiny? Only the latter is tool chain.
>
> On Jul 5, 2016 8:14 AM, "Sawyer X" <xsawy...@gmail.com> wrote:
>>
>> YAML broke some stuff.
>>
>> I'm beginning to think that YAML is one of the modules that could
>> really use a CPAN smoke of its own. I think the more upriver the
>> module, the more this becomes relevant.
>>
>> Thoughts?


YAML (maybe other modules) might need CPAN smoking

2016-07-05 Thread Sawyer X
YAML broke some stuff.

I'm beginning to think that YAML is one of the modules that could
really use a CPAN smoke of its own. I think the more upriver the
module, the more this becomes relevant.

Thoughts?


Re: Renaming the "QA Hackathon"?

2016-04-09 Thread Sawyer X
Merging the suggestions I saw so far:

Perl Annual Critical Infrastructure Summit.

A mouthful, and not a fun acronym.

On Sat, Apr 9, 2016 at 6:16 PM, Karen Etheridge  wrote:
> When I saw this thread title I thought it was going to be discussing the
> "QA" part of it, and I thought "yeah, right on!"... I totally agree with
> Neil's points about what we do not being a "hackathon" though.
>
> ...And I also like the idea of changing the QA bit to Infrastructure.
>
> Perl Infrastructure  gets a vote from me.
>
>
> On Sat, Apr 9, 2016 at 8:50 AM, Kent Fredric  wrote:
>>
>> On 10 April 2016 at 03:45, David Golden  wrote:
>> > Perl Toolchain Summit
>>
>>
>> Because "Toolchain" is not really a word that necessarily makes sense
>> outside Perl, you can use "Infrastructure" or even "Critical
>> Infrastructure" in its stead. (I personally like Critical; it's a
>> very spicy word, and accurately reflects the propensity of this sphere
>> of things to either go pear-shaped or steal your SAN Cookies.)
>>
>> Also, depending on how many words you want to spend, throwing "Annual"
>> in there might help give some context to how frequently these things
>> happen.
>>
>> The narrative you want to be carrying in your words is:
>>
>> "Every year, we get all our brightest in one place and discuss and
>> work on the most pressing and important problems that affect the
>> broadest number of concerns in the Perl+CPAN ecosystem"
>>
>> --
>> Kent
>>
>> KENTNL - https://metacpan.org/author/KENTNL
>
>


Re: Thoughts on Kik and NPM and implications for CPAN

2016-03-23 Thread Sawyer X
Related to this perhaps was the Ion3 debacle:

https://en.wikipedia.org/wiki/Ion_%28window_manager%29#Controversy

Long story short: the Ion3 developer did not want a certain feature.
Debian added a patch for it. He got mad and pulled Ion3 out of Debian.
The same happened with Arch Linux, NetBSD, and FreeBSD.



On Wed, Mar 23, 2016 at 4:25 PM, Stefan Seifert  wrote:
> On Wednesday 23 March 2016 11:07:34 David Golden wrote:
>
>> * I think we have to allow mass deletion, even if that de-indexes stuff.  I
>> think that's an author's right.
>
> I've never gotten that argument. The code in question is usually under a very
> permissive license. Publishing code under such a license is a very conscious
> decision of the author. People trust the author and build on this foundation.
> Among those people are the ones that run CPAN and its mirrors. They too are
> only allowed to distribute the code because the license says so. When people
> download distros from CPAN they do so as sub licensees of whoever runs their
> favorite CPAN mirror.
>
> Now if the original author decides to no longer publish her code, that's
> absolutely fine. I just don't get why CPAN should follow suit and do the
> same. We don't demand this of BackPAN and we don't demand the same from other
> users who trusted the license. Why is CPAN literally the only entity that
> should go beyond the license and do the author's bidding? Considering that
> copyright exists solely to benefit the public, I have to ask: how is the
> public served by this self censorship?
>
> Stefan


Re: Thoughts on Kik and NPM and implications for CPAN

2016-03-23 Thread Sawyer X
Well thought-out. I agree.
(I'd add more but really, there's no need. :)

On Wed, Mar 23, 2016 at 4:07 PM, David Golden  wrote:
> If you don't know what I'm referring to, read
> http://www.theregister.co.uk/2016/03/23/npm_left_pad_chaos/
>
> Leaving aside the IP issue, I think it might be worth considering what would
> currently happen if someone chose a 'mass removal' and whether that's what
> we'd like to have happen.
>
> N.B. this is more extreme than
> http://www.xenoterracide.com/2015/05/abandoning-all-perl-modules.html --
> that dropped perms, but left the tarballs indexed.  What if someone goes
> beyond that...
>
> Consider a scenario for user "Pat":
> * Pat schedules all tarballs for deletion and waits 3 days
> * All tarballs are deleted by PAUSE
> * mldistwatch de-indexes any previously indexed tarballs
> * Pat removes all comaints for all modules
> * Pat drops primary permissions on all modules
> * Pat drops co-maint perms on all modules
>
> At that point, anything depending on Pat's tarballs is broken, as they
> aren't indexed (ignoring for the moment cpanm's use of backpan indexes).
>
> Also, I think the next tarball uploaded with a namespace previously
> controlled by Pat gets "first come" permissions and is indexed (regardless
> of version number).
>
> Have I got that scenario right?
>
> My thoughts:
>
> * I think we have to allow mass deletion, even if that de-indexes stuff.  I
> think that's an author's right.
>
> * I think we should *not* free up namespaces for random takeover
>
> * I think PAUSE admins should consider a reasonable request by a
> responsible-seeming party to take over a namespace (e.g. by forking a
> tarball from BackPAN).
>
> In other words: authors own their tarballs, but PAUSE owns the namespaces
> (and periodically delegates responsibility to a maintainer).
>
> Mechanically, I think that means that when PAUSE is dropping permissions, it
> should instead transfer control to a PAUSE-controlled ID.  (Effectively,
> https://github.com/andk/pause/issues/169 )
>
> Thoughts?
>
> David
>
> --
> David Golden  Twitter/IRC/Github: @xdg


Re: Found rare bug in Pod::Simple

2016-03-06 Thread Sawyer X
On Sat, Mar 5, 2016 at 9:56 PM, Neil Bowers  wrote:
> [...]
> As you can see, it first checks for no extension. Also note that it’s not
> checking for the ‘.plx’ extension, which survey handles. I’ve never come
> across anyone using the .plx extensions, but I guess for a while maybe
> people did:
>
> http://www.perlmonks.org/?node_id=336713

This comment seems to clarify the .plx (from the thread):

Actually .plx is an extension to use ActivePerl and IIS with ISAPI.


Re: Should Test2 maintain $! and $@?

2016-01-13 Thread Sawyer X
[Top-posted]

The extra cost would be:
1. Array storage
2. Push
3. Pop

For the benefit of supporting any level of nesting, I think it's a
negligible cost, but I would profile it.
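
A minimal sketch of the push/pop idea (hypothetical names, not Test2's
actual internals): save $! and $@ on a stack when a context is acquired
and restore them on release, so arbitrary nesting round-trips correctly.

```perl
use strict;
use warnings;

my @saved;  # stack of [$!, $@] frames, one per live context

sub context_acquire { push @saved, [ $! + 0, $@ ] }

sub context_release {
    my $frame = pop @saved;
    ($!, $@) = @$frame;      # restore the caller's error variables
}

# Nested acquire/release pairs restore the outermost values:
$! = 7;                      # caller's errno
context_acquire();           # outer tool
$! = 42;                     # guts clobber errno...
context_acquire();           # ...and a nested tool runs too
$! = 13;
context_release();
context_release();
print $! + 0, "\n";          # prints 7
```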


On Wed, Jan 13, 2016 at 7:50 PM, Kent Fredric  wrote:

> On 14 January 2016 at 07:39, Chad Granum  wrote:
> > Right now the version I have up on cpan just stores them on creation, and
> > restores them on final release. Nothing happens for nested calls to
> > context()/release(), all my downstream testing shows no breakages (not a
> > full smoke, but does include several modules sensitive to $! and $@
> > changes).
>
>
> In the event some code like this dies:
>
>  sub foo {
>   my $context = context();
>   die "Bar";
>  }
>
> What will happen with regards to $@ auto-stacking?
>
> If somebody catches the die in a higher context, what will $@ be?
>
>
> --
> Kent
>
> KENTNL - https://metacpan.org/author/KENTNL
>
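
Kent's question about an in-flight $@ can be made concrete with a sketch
(hypothetical restore-on-destroy semantics, not necessarily what Test2
does): if a context restores a saved $@ from its destructor, the restore
runs while the die is still unwinding.

```perl
use strict;
use warnings;

package Ctx;
sub new     { my ($class) = @_; bless { saved => $@ }, $class }
sub DESTROY { $@ = $_[0]{saved} }   # naive restore-on-destroy

package main;

sub foo {
    my $ctx = Ctx->new;   # saves the current (empty) $@
    die "Bar\n";          # $ctx is destroyed during the unwind
}

eval { foo() };
# On perl >= 5.14, $@ is assigned after destructors run, so the die
# survives the naive restore; on older perls it could be masked.
print "caught: $@";
```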


Re: Should Test2 maintain $! and $@?

2016-01-12 Thread Sawyer X
[Top-posting]

Chad, I think I understand what you mean now. You were referring to
whether the underpinnings (Test2) should take care of it or
whether the chrome (the testing functions) around them should do so. Yes?

If so, I think you should probably clarify what Test2 *does* do. It
doesn't provide the functions - alright. What *does* it provide then?

Also, since the discussion was "Should I - the testing functions' user
- write code around my testing functions to accommodate the testing
framework not preserving `$!` and `$@`?" rather than "Should the
testing functions take care of it instead of the gory details
underneath them?", it might be useful (besides explaining the
difference between the testing functions and the gory details, as
recommended in the previous paragraph) to explain what such an
implementation would look like, or how it would differ, in the
testing functions vs. in the gory details.

I think then it would be simple to understand your intent and to help
you with some useful commentary.

If I didn't understand what you meant, then... this might need
additional clarification.

S. :)


On Tue, Jan 12, 2016 at 8:56 PM, Chad Granum  wrote:
> Thanks for the input.
>
> My question was not very well formed.
>
> What I should have asked is this:
>
> Preserving $! and $@ is a valuable behavior, and one I won't get rid of.
> However, I am questioning the decision to make the test library jump through
> hoops to preserve them, as opposed to having the test functions be
> responsible for doing it.
>
> Now then, having the test library do it means the tools usually do not have
> to worry about it (though it would be easy for them to still damage $! or
> $@). On the other hand, the protection in-library gets very complicated, and
> hard to maintain, and will not solve all cases.
>
> It was not a question of if we should do it, but how we should do it. Test2
> is just the back-end, but asking if Test2 should do it sounded like I was
> asking if we should do it at all, which was not the intent, the front end
> was always gonna do it.
>
> That said, the discussion served as rubber ducking and I was able to find a
> solution that lets the library do the heavy lifting, but in a single spot
> that avoids the complexity and un-maintainability of the way it has done it
> so far. So the discussion is no longer necessary, the library will continue
> to do the protection, and it will do it in a better way. New tools will not
> need to start concerning themselves with this. Old tools never had to worry
> as I was not going to break backcompat.
>
> -Chad
>
> On Tue, Jan 12, 2016 at 11:41 AM, Michael G Schwern 
> wrote:
>>
>> On 1/11/16 4:53 PM, Chad Granum wrote:
>> > Test::More/Test::Builder work VERY hard to ensure nothing inside them
>> > alters $! or $@. This is for
>> > thing like this:
>> >
>> > ok(do_something_scary());
>> > is($!, 0, "expected $! val");
>> > is($@, undef, '$@ not changed');
>> >
>> > Without Test::More/Builder being careful to support this, the second 2
>> > assertions could fail because
>> > something inside ok() modifies $! or $@.
>>
>> If your test functions modify the really common parts of your global
>> environment then it becomes
>> very difficult to test them... and testing error reporting is a very
>> common thing you'd want to do!
>>
>> Kent already pointed out why changing this makes testing $! and $@ very,
>> very awkward.  Let's see
>> that again.
>>
>> my ( $error, $exception );
>> ok(do {
>>   local $@;
>>   local $!;
>>   my $ret = do_something_scary();
>>   ( $error, $exception ) = ($!, $@);
>>   $ret
>> });
>> is($error, 0, "expected $! val");
>> is($exception, undef, '$@ not changed');
>>
>> Gross.  I don't know how many times I've written code like this.  I hate
>> it.  I always encapsulate
>> it somehow.  And when a library doesn't have good global discipline it
>> makes it even harder for me
>> to have good global discipline.
>>
>> We tell the users that $! and $@ are only safe per function call.  Then we
>> encourage them all over
>> the docs and interface to pass function return values directly into test
>> functions.  Then we tell
>> them they should be testing their error cases... but to do so safely requires
>> gross scaffolding.
>> That's not fair to the user.  The result will be that people won't test
>> their error conditions,
>> library quality will drop, and you'll waste a lot of time on a bug that
>> should have been tested.
>>
>> The argument that $! is only reliable per function call, that's a lowest
>> common denominator
>> thinking.  One of the fundamental design principles of Test::Builder was
>> that it had to be the
>> GREATEST common denominator!
>>
>> I don't write libraries to the lowest common denominator.  I write
>> libraries that raise the bar and
>> encourage others to do so as well.  

Re: Should Test2 maintain $! and $@?

2016-01-12 Thread Sawyer X
On Tue, Jan 12, 2016 at 10:55 PM, Kent Fredric <kentfred...@gmail.com> wrote:
> On 13 January 2016 at 10:48, Sawyer X <xsawy...@gmail.com> wrote:
>>
>> If so, I think you should probably clarify what Test2 *does* do. It
>> doesn't provide the functions - alright. What *does* it provide then?
>
>
> Oh, and a thought: it may help to consider what testing /testing tools/
> looks like here, and whether the tools themselves need to trap $! and
> $@ and test for their changes.
>
> It's probably immaterial and no different from the "handle it at the
> chrome layer" approach, but it may have implications in the internals
> that make things more difficult if they're suppressed/not-suppressed.

Good point!


Re: Should Test2 maintain $! and $@?

2016-01-12 Thread Sawyer X
[Top-posted]

Chad, thank you for the detailed response. I think I now understand
the scope of the problem and your solutions.

I think it makes sense to put this in the guts inside the construction
of a new context (or retrieval of current context) and in the release
of that context. Kent, am I right to believe this also addresses the
concerns you raised regarding testing of testing modules?

I'm sorry to say I'm quite ignorant [also] when it comes to the
testing underpinnings, so I'm not sure if there are additional
concerns this does not address, but otherwise it sounds very
reasonable to me.

Thanks again for taking the time to clarify in detail. :)
(It might be useful to start working on a document for the internals
for anyone who wants to hack on it. This should at least be in the
commit messages so it could be tracked down.)

S.



On Tue, Jan 12, 2016 at 11:26 PM, Chad Granum  wrote:
> Yes, your understanding appears correct. And I can make it more clear.
>
> This is a very simple test tool in Test::Builder (the chrome):
>
>> my $TB = Test::Builder->new;  # Call to internals/guts (singleton)
>>
>> sub ok($;$) {
>> my ($bool, $name) = @_;
>> $TB->ok($bool, $name);# Call to internals/guts
>> return $bool
>> }
>
>
> Here it is again using Test2 instead of Test::Builder (the chrome):
>
>> sub ok($;$) {
>> my ($bool, $name) = @_;
>> my $ctx = context();# Call to internals/guts (A)
>> $ctx->ok($bool, $name); # another one (B)
>> $ctx->release;  # another one (C)
>> return $bool;
>> }
>
>
> The lines marked with 'Call to internals/guts' kick off a series of things
> that read/write from filehandles, possibly opens them, evals code/catches
> exceptions, and any number of other things that can squash $! and $@. It
> should be noted that 'ok' is not the only method ever called on either
> Test::Builder or $ctx, this is just a very simple illustration.
>
> Now for starters, Test::Builder uses a singleton, so it can do all its
> initialization at load time, which allows it to leave several things
> unprotected. The singleton was bad, so Test2 does not use one, which means
> it has to be more protective of $! and $@ in more places to accomplish the
> same end goal.
>
> History, what Test::Builder does: It localizes $! and $@ in an eval wrapper
> called _try() that it uses to wrap things it expects could squash $! and $@.
> It also localizes $SIG{__DIE__} for various reasons. In some places where
> $SIG{__DIE__} should not be localized it will instead use local
> independently of _try(). There is also extra logic for subtests to ensure $?
> from the end of the subtest is carried-over into the regular testing outside
> of the subtest. Some places also need to be careful of $? because they run
> in an end block where squashing $? unintentionally is bad. (Yeah, $? is
> involved in all this as well, but much less so)
>
> This results in a lot of places where things are localized, and several
> places that run through an eval. People simply looking at the code may
> overlook these things, and not know that the protection is happening. The
> first time a new-dev will notice it is when tests start breaking because
> they added an open/close/eval/etc in the wrong place. Thankfully there are
> some tests for this, but not enough as I have found downstream things
> (PerlIO::via::Timeout as an example) that break when $! is squashed in a way
> Test::Builder never tests for.
>
> Test::Builder does not localize $! and $@ in all its public methods.
> Realistically it cannot for 2 reasons:
>
> Performance hit
> Can mask real exceptions being thrown that are not caught by Test::Builder
> itself.
>
> In short, this is a significant maintenance burden, with insufficient
> testing, and no catch-all solution.
>
> --
>
> Up until this email thread, Test2 was doing the same thing as Test::Builder.
> The problem is that Test2 does lots of other things differently for good
> reasons, unfortunately it provides a lot more opportunity to squash $! and
> $@. Like Test::Builder it is not reasonable to localize $! and $@ at every
> entry point.
>
> I decided to start this thread after a few downstream breakages were detected
> due to $! being squashed. One because perl changes $! when you clone a
> filehandle, even when no errors happen. Another because of a shared memory
> read that was added in an optional concurrency driver used by
> Test::SharedFork. I realized this would be an eternal whack-a-mole for all
> future maintainers of both projects, and one that is hard to write
> sufficient testing for.
>
> The solution:
> Go back to my second example. Notice there are 3 calls to the guts, marked
> (A), (B), and (C). (A) and (C) are universal to all Test2 based tools, and
> are also universally used in the Test::Builder dev releases when it calls
> out to Test2. No tool will function properly if it does not use both of
> those when it uses Test2. 
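
The _try()-style protection described earlier can be sketched like this
(a simplification; the real Test::Builder internals differ): run a code
ref inside an eval with $!, $@, and $SIG{__DIE__} localized, so IO and
evals inside the library cannot leak out to the caller's error variables.

```perl
use strict;
use warnings;

sub _try {
    my ($code) = @_;
    my ($return, $error);
    {
        local ($!, $@);          # shield the caller's error variables
        local $SIG{__DIE__};     # don't fire user-installed die handlers
        $return = eval { $code->() };
        $error  = $@;
    }
    return wantarray ? ( $return, $error ) : $return;
}

# The caller's $! survives failed IO done inside the wrapped code:
$! = 2;
_try(sub { open my $fh, '<', '/no/such/file' });  # sets $! internally
print $! + 0, "\n";                               # prints 2
```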

Re: CPAN River - water quality metric

2015-12-24 Thread Sawyer X
I have to agree with that, albeit probably less angry about it. :)

On Thu, Dec 24, 2015 at 7:00 PM, Karen Etheridge  wrote:

> > I think “has a META.yml or META.json” is worth keeping in
>
> I'm surprised this one is being discussed at all. IMO, not having a META
> file should disqualify the distribution from being considered at all. At
> Berlin last year we talked about making it mandatory, and held off "for
> now" so the outliers could be fixed. Having META should be non-negotiable
> for a well-formed CPAN distribution.
>
>
> On Thu, Dec 24, 2015 at 1:10 AM, Neil Bowers 
> wrote:
>
>> > CPANdeps (http://deps.cpantesters.org) has been providing useful
>> > information on water quality. It might be enough to make a better or
>> > opinionated presentation of it for the upriver authors. IMHO META
>> > files and min version specification depend more on when a
>> > distribution was released and don't fit well as water quality metrics.
>>
>> I’m not convinced on min version either, but am leaning towards including
>> it, if we can come up with a definition that’s practical and useful.
>>
>> I think “has a META.yml or META.json” is worth keeping in, as there are a
>> number of benefits to having one, and I suspect there’s at least some
>> correlation between dists that don’t have a META file and dists that
>> haven’t listed all prereqs (eg in the Makefile.PL).
>>
>> That said, I’m really just experimenting here, trying to find things that
>> are useful indicators for whether a dist is good to rely on.
>>
>> Neil
>>
>>
>
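
The "has a META file" criterion discussed above is cheap to check
mechanically. A minimal sketch (file names follow the CPAN::Meta
conventions; `has_meta` is a made-up helper, not an existing API):

```perl
use strict;
use warnings;

# Return true if an unpacked distribution directory carries a META file.
sub has_meta {
    my ($dist_dir) = @_;
    return grep { -f "$dist_dir/$_" } qw(META.json META.yml);
}

my $dir = shift @ARGV // '.';
print has_meta($dir)
    ? "$dir: has META\n"
    : "$dir: no META file - would be disqualified\n";
```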


Re: CPAN River - water quality metric

2015-12-24 Thread Sawyer X
[top-posted]

Further context as someone maintaining distributions with long-running
issues. There are many reasons an issue could stay open for a long time:

* It requires much more consideration (and could relate to multiple
branches of reference implementation or different steps along the way)
* It's a reminder of a very low-priority issue.
* It's a reminder to rethink a topic.
* It's a low-hanging fruit kept so early contributors could pick it up.
("Up for grabs" issue tag, for instance.)
* It's kept until another issue is resolved.
* It's kept for a while until the original person who opened it
confirms it was resolved or still exists.
* Someone asked to handle it and they're given their time to do so
(depending on complexity and prioritization).
* Some PRs need - as I describe it - time to ripen. I believe whoever dealt
with that knows what I mean.

It's very hard to judge by issues. Perhaps comments on issues? I believe
issues should at least be commented on (and I'm a terrible offender at
this).



On Thu, Dec 24, 2015 at 12:14 AM, Douglas Bell  wrote:

>
> > On Dec 23, 2015, at 4:49 PM, Neil Bowers 
> wrote:
> >
> >> Number (and age if possible) of open tickets might show if someone's
> paying attention to the dist. Like David said, much like the adoption
> criteria. The issues don't have to be valid, they could even be spam for
> all it matters, as long as someone's taking care of them.
> >
> > This is a tricky issue, as I found when trying to tune the adoption
> criteria. There are plenty of big name dists that have a lot of open
> issues, and always do.
> >
> > My current thought on this is that if no issues are getting dealt with
> in some timeframe, then it fails the metric. Even if a dist has a pile of
> open issues, if at least some issues are getting dealt with, then as you
> show, that indicates some level of maintainer engagement. That still has
> failure modes though: someone might have adopted a dist that they’re really
> not up to maintaining, so they avoid the large / scary / critical issues.
>
> Yes, absolute ticket count is not as good as ticket movement or churn,
> even if a release doesn't necessarily result. A clean river is a
> steady-flowing river.


Re: On Not Merging new and old behavior ( was Re: Test-Stream + MongoDB scalar refcount issue )

2015-05-02 Thread Sawyer X
On Sat, May 2, 2015 at 7:31 PM, David Golden <x...@xdg.me> wrote:

> On Sat, May 2, 2015 at 11:11 AM, Kent Fredric <kentfred...@gmail.com>
> wrote:
>
>> That is, conceptually, it's possible that a misguided author of a
>> distribution at the same level as, say, Test::Differences, thinks it wise to
>> simply rewrite their existing code in the new framework.
>
> We've seen that horror show in Dancer/Dancer2 around plugins.
>
> I think that if we ship a Test::More2, then such a release should
> encourage people to leave Test::Foo alone and fork it to Test::Foo2
> instead. Then a new ecosystem can build up around it without sacrificing
> the existing one.


That has its downsides too, though, as we've seen in the Dancer/Dancer2
issue.

Effectively what happened/happens is that, while plugins are now able to
provide two different implementations without worrying about backwards
compatibility (we originally wanted it to be seamless, but that turned
out to be very hard), most plugins had a shared core. This was odd to
maintain. You either fork it, put it in a common ::Core module, or ship
both in the same distribution.

We're now rewriting the plugin architecture, but a situation with two
classes provides both a clear separation benefit and a headache of its own.