Re: [Boston.pm] a question

2017-01-29 Thread Ben Tilly
Were you surprised by Daylight Saving Time?
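The surprise is the US DST transition: clocks sprang forward on 2017-03-12, so the local day ending at 2017-03-13 00:00 is only 82800 seconds long, and they fell back on 2017-11-05, making the day ending at 2017-11-06 00:00 run 90000 seconds. A minimal sketch of the workaround, assuming uniform 86400-second days are what is wanted: compute in UTC with timegm, where DST does not exist.

```perl
use strict;
use warnings;
use Time::Local qw(timegm);

# timegm is the UTC counterpart of timelocal; in UTC every calendar
# day is exactly 86400 seconds, so day deltas never wobble with DST.
my $mar12 = timegm(0, 0, 0, 12, 2, 2017);   # 2017-03-12 00:00 UTC
my $mar13 = timegm(0, 0, 0, 13, 2, 2017);   # 2017-03-13 00:00 UTC
print $mar13 - $mar12, "\n";                # 86400
```

The same pair of calls through timelocal, run with TZ set to a US zone, yields 82800 -- the 3600-second shortfall the attached script reports.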

On Sun, Jan 29, 2017 at 11:32 AM dan moylan  wrote:

>
> in fiddling with a perl script to calculate the variable
> dates dependent on the date of easter, using Time::Local, i
> got the wrong answers for shrove tuesday 47 and ash
> wednesday 46 days before easter.  this turned out to be what
> appears to me as an anomaly in Time::Local at 20170313 where
> the delta seconds between days was 3600 sec short.  a test
> script then found a corresponding anomaly at 20171106 where
> the delta seconds was 3600 sec long.  fwiw i attach the test
> script.  have i missed something obvious?
>
>
> #! /usr/bin/perl
> # tsttme.pl
> # created 170129 by jdm3
> # revised 170129 by jdm3
>
>   use strict;
>   use Time::Local;
>
>   my ($dlt, $dte, $tme, $tmepre, $mth, $Mth, $day, $year, );
>   my (@dim, @mth, );
>
>   $year = $ARGV[0];
>   if (! $year) {
>   printf ("enter the year, stupid.\n");
>   exit (0);
>   }
>
>   @dim = qw (31 28 31 30 31 30 31 31 30 31 30 31);
>   @mth = (0 .. 11);
>
>   if ($year%4 == 0) { $dim[1] = 29; }
> # printf ("feb: $dim[1]\n");
>
>   $tmepre = Time::Local::timelocal (0,0,0,31,11,$year-1901);
>   foreach $mth (@mth) {
>   foreach $day (1..$dim[$mth]) {
>   $tme = Time::Local::timelocal (0,0,0,$day,$mth,$year-1900);
>   $dlt = $tme - $tmepre;
> # printf ("$year$mth$day -- $dlt\n");
>   $day = sprintf ("%02d", $day);
>   $Mth = sprintf ("%02d", $mth+1);
>   if ($dlt != 86400) { printf ("$year$Mth$day -- $dlt\n"); }
>   $tmepre = $tme;
>   }
>   }
>
> ##
>
>
> tia,
> ole dan
>
> j. daniel moylan
> 84 harvard ave
> brookline, ma 02446-6202
> 617-777-0207 (cel)
> j...@moylan.us
> www.moylan.us
> [no html pls]
>

___
Boston-pm mailing list
Boston-pm@mail.pm.org
http://mail.pm.org/mailman/listinfo/boston-pm


Re: [Boston.pm] perl 5.10 memory leak?

2016-12-14 Thread Ben Tilly
This is nasty.  Does the following fix work for you?

use IO::All;
if ($^V lt v5.16.0) {
  no warnings 'redefine';
  my $orig_destroy = *IO::All::DESTROY{CODE};
  *IO::All::DESTROY = sub {
my $self = shift;
local $^V;
$orig_destroy->($self, @_) if $orig_destroy;
  };
}

This temporarily replaces $^V with the empty string, which does the right
thing with the current version check.  Hopefully your Perl gets upgraded
before IO::All ships a new version check in DESTROY that this hack doesn't
cover.

On Wed, Dec 14, 2016 at 7:44 AM, Duane Bronson <nerdmach...@gmail.com>
wrote:

> Ahh - the bug is in universal.c
> <https://perl5.git.perl.org/perl.git/blobdiff/202e6ee2081e3a898537656cda1148d9aded394d..bcb2959f0:/universal.c>
>  in
> XS_version_boolean.  Can I override that in perl code?  Something like this?
>
> sub version::boolean {
>   ...
> }
>
> Thanks, Matthew!  I'd like to know how you found that.  Google didn't help
> me.
> Duane
>
> On Dec 14, 2016, at 12:58 AM, Matthew Horsfall (alh) <wolfs...@gmail.com>
> wrote:
>
> On Wed, Dec 14, 2016 at 12:55 AM, Matthew Horsfall (alh)
> <wolfs...@gmail.com> wrote:
>
> On Tue, Dec 13, 2016 at 6:41 PM, Ben Tilly <bti...@gmail.com> wrote:
>
> Is the leak in not calling untie, or in looking at $^V?
>
>
> The leak is in looking at $^V. This was broken in 5.10.0 and fixed in
> 5.16.0.
>
>
> In boolean context.
>
>  my $x = $^V;  # fine
>  my $x = !$^V; # bad
>  if ($^V) { }   # bad
>
> -- Matthew Horsfall (alh)
>
>
>
>
> *Duane Bronson*
> nerdmach...@gmail.com
> http://www.nerdlogic.com/
> 5 Goden St.
> Belmont, MA 02478
> 617.515.2909
>
>
>
>
>



Re: [Boston.pm] perl 5.10 memory leak?

2016-12-13 Thread Ben Tilly
Is the leak in not calling untie, or in looking at $^V?

My assumption is that it was from not calling untie, which is why I'm
suggesting doing that unconditionally before running the original code.

If looking at $^V is a memory leak, that would be really bad.
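A crude way to check is to watch the process's resident set while hammering $^V in boolean context. This sketch is Linux-specific (it reads VmRSS from /proc/self/status) and purely illustrative; on an affected 5.10 the "after" number climbs steeply, while on a fixed perl it stays flat:

```perl
use strict;
use warnings;

# Linux-specific helper: report this process's resident set in kB,
# or -1 where /proc is unavailable.
sub rss_kb {
    open my $fh, '<', '/proc/self/status' or return -1;
    while (my $line = <$fh>) {
        return $1 if $line =~ /^VmRSS:\s+(\d+)/;
    }
    return -1;
}

my $before = rss_kb();
for (1 .. 1_000_000) {
    my $x = !$^V;    # boolean context on $^V -- the suspected leak
}
my $after = rss_kb();
printf "RSS before: %s kB, after: %s kB\n", $before, $after;
```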

On Tue, Dec 13, 2016 at 2:16 PM, Duane Bronson <nerdmach...@gmail.com>
wrote:

> By the time we update to a new version of Perl, I will not likely remember
> this kludge, so I'm hoping to future-proof it.
>
> Calling $orig_destroy will still run "unless ($^V ...)" and thus leak
> memory, so I can't future-proof that way.  Now, if I can add a destructor
> to the class 'version', maybe I can future-proof that way, although I've
> tried that and failed.  I think it might be a bug in the comparison logic
> (overloading?), but I can't find that code.  Is that built into perl?
>
> Duane
>
> On Dec 13, 2016, at 4:30 PM, Ben Tilly <bti...@gmail.com> wrote:
>
> Module upgrades may not be likely, but I was responding to the first
> question which was:
>
> "...I suspect there are better methods that won't break with a future
> version of IO::All."
>
> On Tue, Dec 13, 2016 at 1:26 PM, Conor Walsh <c...@adverb.ly> wrote:
>
>> On Tue, Dec 13, 2016 at 4:20 PM, Ben Tilly <bti...@gmail.com> wrote:
>> > The reason to call the original DESTROY is so that if a future version
>> of
>> > IO::All adds logic to the DESTROY, you will still run that new code.
>>
>> This is an excellent point, but it does not sound like surprise module
>> upgrades are a likely problem in Duane's world.
>>
>
>
>
> On Dec 13, 2016, at 4:20 PM, Ben Tilly <bti...@gmail.com> wrote:
>
> The reason to call the original DESTROY is so that if a future version of
> IO::All adds logic to the DESTROY, you will still run that new code.  While
> still running the code that you know you need to run.
>
> In fact I would suggest the following instead:
>
> use IO::All;
> {
>   no warnings 'redefine';
>   my $orig_destroy = *IO::All::DESTROY{CODE};
>   sub IO::All::DESTROY {
> my $self = shift;
> no warnings;
> untie *$self if tied *$self;
> $orig_destroy->($self, @_) if $orig_destroy;
>   }
> }
>
> Now the custom code that you run is limited to what you are afraid that
> their DESTROY gets wrong.
>
> (Note that my original version had a bug.  I said IO::ALL::DESTROY when it
> needed to be IO::All::DESTROY.)
>
>
>
>
>
> *Duane Bronson*
> nerdmach...@gmail.com
> http://www.nerdlogic.com/
> 5 Goden St.
> Belmont, MA 02478
> 617.515.2909
>
>
>
>
>



Re: [Boston.pm] perl 5.10 memory leak?

2016-12-13 Thread Ben Tilly
Module upgrades may not be likely, but I was responding to the first
question which was:

"...I suspect there are better methods that won't break with a future
version of IO::All."

On Tue, Dec 13, 2016 at 1:26 PM, Conor Walsh <c...@adverb.ly> wrote:

> On Tue, Dec 13, 2016 at 4:20 PM, Ben Tilly <bti...@gmail.com> wrote:
> > The reason to call the original DESTROY is so that if a future version of
> > IO::All adds logic to the DESTROY, you will still run that new code.
>
> This is an excellent point, but it does not sound like surprise module
> upgrades are a likely problem in Duane's world.
>



Re: [Boston.pm] perl 5.10 memory leak?

2016-12-13 Thread Ben Tilly
The reason to call the original DESTROY is so that if a future version of
IO::All adds logic to the DESTROY, you will still run that new code.  While
still running the code that you know you need to run.

In fact I would suggest the following instead:

use IO::All;
{
  no warnings 'redefine';
  my $orig_destroy = *IO::All::DESTROY{CODE};
  sub IO::All::DESTROY {
my $self = shift;
no warnings;
untie *$self if tied *$self;
$orig_destroy->($self, @_) if $orig_destroy;
  }
}

Now the custom code that you run is limited to what you are afraid that
their DESTROY gets wrong.

(Note that my original version had a bug.  I said IO::ALL::DESTROY when it
needed to be IO::All::DESTROY.)

On Tue, Dec 13, 2016 at 1:11 PM, Duane Bronson <nerdmach...@gmail.com>
wrote:

> The original DESTROY had the leaky code (see the commented #unless below),
> so I don't think it would help to keep calling it.  My replacement DESTROY
> is a copy/paste of the one in IO/All.pm with the bad lines commented out.
>
> I haven't really been able to find the implementation of the version
> class.  I hoped it was just missing a destructor that I could copy from a
> newer perl and conditionally add it to the package.
>
> Yes, our product is using scientific linux 6.3 which uses perl 5.10.x, and
> our customers and QA don't like hot fixes that replace nearly every RPM.
> In fact, they would probably prefer the memory leak.
>
> Duane
>
> On Dec 13, 2016, at 2:03 PM, Ben Tilly <bti...@gmail.com> wrote:
>
> There is always a risk when working with the internals of another module.
> I would minimize that risk by making sure that the DESTROY that you are
> replacing always runs.  Just in case something gets added to it.  And
> capture the reference to it in a way that will notice if DESTROY is
> eliminated.
>
> Like this:
>
> use IO::All;
> {
>   no warnings 'redefine';
>   my $orig_destroy = *IO::ALL::DESTROY{CODE};
>   sub IO::All::DESTROY {
> my $self = shift;
> $orig_destroy->($self, @_) if $orig_destroy;
> no warnings;
> untie *$self if tied *$self;
> $self->close if $self->is_open;
>   }
> }
>
> On Tue, Dec 13, 2016 at 10:04 AM, Duane Bronson <nerdmach...@gmail.com>
> wrote:
>
>> Mongers,
>>
>> I've been trying to track down a memory leak in a long running perl
>> script that uses IO::All to write a bunch of files every few seconds.  I
>> traced my leak back to IO::All which checks to make sure the version is at
>> least 5.8.0 before calling untie.  Pretty innocuous, imho.
>>
>> Here's an easy way to reproduce it.  In perl 5.10.1, this script causes
>> perl's memory to grow really big.  It appears to be fixed in later versions
>> of perl because there is no memory leak on my mac (perl 5.18.2).
>>
>> perl -E 'for $i (0..100_000_000) { 1 if ($^V) }'
>> and
>> perl -E '$foo = version->new(v1.2.3); for $i (0..100_000_000) { 1 if
>> ($foo) }'
>> and (slower)
>> perl -E 'use IO::All; for $i (0..100_000_000) { "file contents" >
>> io("/tmp/filename") }'
>>
>> Is there a way of fixing my script so I can still use IO::All and not
>> have a memory leak?  Here is one way, but I suspect there are better
>> methods that won't break with a future version of IO::All.
>>
>> use IO::All;
>> {
>>   no warnings 'redefine';
>>   sub IO::All::DESTROY {
>> my $self = shift;
>> no warnings;
>> #unless ( $^V and $^V lt v5.8.0 ) {
>> untie *$self if tied *$self;
>> #}
>> $self->close if $self->is_open;
>>   }
>> }
>>
>> Thanks,
>> Duane
>>
>>
>>
>> Duane Bronson
>> nerdmach...@gmail.com <mailto:nerdmach...@gmail.com>
>> http://www.nerdlogic.com/ <http://www.nerdlogic.com/>
>> 5 Goden St.
>> Belmont, MA 02478
>> 617.515.2909
>>
>>
>>
>>
>>
>
>
>
>
> *Duane Bronson*
> nerdmach...@gmail.com
> http://www.nerdlogic.com/
> 5 Goden St.
> Belmont, MA 02478
> 617.515.2909
>
>
>
>
>



Re: [Boston.pm] perl 5.10 memory leak?

2016-12-13 Thread Ben Tilly
There is always a risk when working with the internals of another module.
I would minimize that risk by making sure that the DESTROY that you are
replacing always runs.  Just in case something gets added to it.  And
capture the reference to it in a way that will notice if DESTROY is
eliminated.

Like this:

use IO::All;
{
  no warnings 'redefine';
  my $orig_destroy = *IO::ALL::DESTROY{CODE};
  sub IO::All::DESTROY {
my $self = shift;
$orig_destroy->($self, @_) if $orig_destroy;
no warnings;
untie *$self if tied *$self;
$self->close if $self->is_open;
  }
}

On Tue, Dec 13, 2016 at 10:04 AM, Duane Bronson 
wrote:

> Mongers,
>
> I've been trying to track down a memory leak in a long running perl script
> that uses IO::All to write a bunch of files every few seconds.  I traced my
> leak back to IO::All which checks to make sure the version is at least
> 5.8.0 before calling untie.  Pretty innocuous, imho.
>
> Here's an easy way to reproduce it.  In perl 5.10.1, this script causes
> perl's memory to grow really big.  It appears to be fixed in later versions
> of perl because there is no memory leak on my mac (perl 5.18.2).
>
> perl -E 'for $i (0..100_000_000) { 1 if ($^V) }'
> and
> perl -E '$foo = version->new(v1.2.3); for $i (0..100_000_000) { 1 if
> ($foo) }'
> and (slower)
> perl -E 'use IO::All; for $i (0..100_000_000) { "file contents" >
> io("/tmp/filename") }'
>
> Is there a way of fixing my script so I can still use IO::All and not have
> a memory leak?  Here is one way, but I suspect there are better methods
> that won't break with a future version of IO::All.
>
> use IO::All;
> {
>   no warnings 'redefine';
>   sub IO::All::DESTROY {
> my $self = shift;
> no warnings;
> #unless ( $^V and $^V lt v5.8.0 ) {
> untie *$self if tied *$self;
> #}
> $self->close if $self->is_open;
>   }
> }
>
> Thanks,
> Duane
>
>
>
> Duane Bronson
> nerdmach...@gmail.com 
> http://www.nerdlogic.com/ 
> 5 Goden St.
> Belmont, MA 02478
> 617.515.2909
>
>
>
>
>
>



Re: [Boston.pm] Search terms help?

2015-09-02 Thread Ben Tilly
I didn't read those articles, but then they wouldn't have been aimed at me.

The only trick is that you have to load those modules and have an import
method that calls their import method.  If you're exporting specific
functions into their namespace, the best way to do so is to import those
functions into your space and then re-export them yourself using Exporter.
(Note, doing that re-exporting is probably a really good way to create a
bad mess...)
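Sketched as code, with an invented module name (My::Boilerplate) and inlined into one file for brevity; pragmas imported from inside your own import still affect the caller, because pragma imports act on whichever scope is being compiled at the time:

```perl
#!/usr/bin/perl
# Hypothetical minimal "loader" module, inlined here as one file for
# demonstration; normally the first package would live in My/Boilerplate.pm.
package My::Boilerplate;

sub import {
    # Pragma imports act on the scope being compiled at the time --
    # i.e. on the code that said "use My::Boilerplate".
    require strict;   strict->import;
    require warnings; warnings->import;
    require feature;  feature->import(':5.10');
}

package main;
BEGIN { My::Boilerplate->import }   # what "use My::Boilerplate;" would do

say 'strict, warnings and the 5.10 feature bundle are now enabled';
```

Re-exporting actual functions (the IO::File part of the example) takes the extra step described above: import them into the loader's own namespace and re-export with Exporter, or call each module's import from within yours.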

On Wed, Sep 2, 2015 at 11:18 AM, Morse, Richard E.,MGH <
remo...@mgh.harvard.edu> wrote:

> Hi! A few years ago, there was a spate of articles online which described
> how to create a “loader” module — the idea was that instead of having to
> add a whole bunch of boilerplate to the start of every file, you could have
> just one module that would import a whole bunch of things.
>
> An example of such a module is Modern::Perl. Instead of doing:
>
> use warnings;
> use strict;
> use IO::File;
> use IO::Handle;
> use feature ':5.10';
>
> you just do
>
> use Modern::Perl;
>
> Unfortunately, I can’t seem to find any of those articles anymore,
> probably because I’m not searching for the right terms.
>
> Does anyone know what I should be searching for?
>
> Thanks,
> Ricky
>
>
> The information in this e-mail is intended only for the person to whom it
> is
> addressed. If you believe this e-mail was sent to you in error and the
> e-mail
> contains patient information, please contact the Partners Compliance
> HelpLine at
> http://www.partners.org/complianceline . If the e-mail was sent to you in
> error
> but does not contain patient information, please contact the sender and
> properly
> dispose of the e-mail.
>


Re: [Boston.pm] OT: recommendation of billing services for consulting work needed

2015-08-31 Thread Ben Tilly
I've used Freshbooks for this in the past.

I am currently using the Hours Keeper app on my phone.

Both work well.

On Mon, Aug 31, 2015 at 8:15 PM, Adam Russell  wrote:

> In addition to my full time job I do some consulting about 10-15 hours a
> week. Up until now I have been using a local staffing agency to handle the
> billing for me. The set up is that I make the arrangements, negotiate my
> rate, and then the client works with the agency to handle the billing
> details. I tell the agency the hours I worked and they collect the money
> and cut me a check. All well and good in hiding the tedious administrative
> details from me. The complication is that the client has revealed to me that
> the agency's charge for being the middleman is a hefty 30% of my rate! I
> had known that there was a surcharge for their services but the amount was
> either miscommunicated to me, or...adjusted upwards without my being
> notified.
> I know that there are likely at least a few people on this list who have
> solved this problem themselves. So, my question is, what is the best
> way (in terms of ease and low overhead) to handle billing for my part-time
> consulting work?
>
>



Re: [Boston.pm] Max perl runtime?

2015-05-20 Thread Ben Tilly
You can run it indefinitely, but do set up a cron job to check that it is
running and restart it as needed.

This will make sure it is still there after a reboot or an unhappy run-in
with the OOM killer.
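Such a cron-driven check can itself be a few lines of Perl. This is a sketch under assumed conventions; the pidfile path and restart command are invented for illustration:

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Hypothetical watchdog meant to run from cron every few minutes, e.g.
#   */5 * * * * /usr/local/bin/check_daemon.pl
# The pidfile path and restart command below are illustrative assumptions.
my $pidfile = '/var/run/mydaemon.pid';
my $restart = '/usr/local/bin/mydaemon --daemonize';

sub daemon_alive {
    my ($file) = @_;
    open my $fh, '<', $file or return 0;    # no pidfile: assume dead
    chomp(my $pid = <$fh> // '');
    return 0 unless $pid =~ /^\d+$/;
    # kill 0 delivers no signal; it only tests that the process exists
    # and that we are allowed to signal it.
    return kill(0, $pid) ? 1 : 0;
}

unless (daemon_alive($pidfile)) {
    print "daemon not running; restarting with: $restart\n";
    # system($restart);    # uncomment for real use
}
```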

On Wednesday, May 20, 2015, ja...@nova-sw.com wrote:


 I'm running a small simple tcp & udp monitoring daemon using INET sockets
 in perl.  I'm wondering how long I can safely expect perl + this app to
 run non-stop?  It's run for ~6 wks. so far on HPUX and RH Linux without
 bad behavior (leaks, crashes, execution errors, etc.).  I'd prefer to
 avoid the added complexity of periodically forking a new daemon if six
 months or so is reasonably achievable.  What do others experience?







Re: [Boston.pm] object composition (has-a) with built-in OO system

2014-11-02 Thread Ben Tilly
On Sun, Nov 2, 2014 at 8:02 AM, Bill Ricker bill.n1...@gmail.com wrote:
 On Sun, Nov 2, 2014 at 10:14 AM, Greg London em...@greglondon.com wrote:

 My experience has been that having a page instance be mangled
 in some way to behave like a book is almost always going to be
 a regrettable coding decision.


 Nice analogy. Agreed.

 As to Ben's comments on Linked lists vs Perl arrays, linked lists tend to
 be more flexible for insertion/deletion in the middle for highly dynamic
 lists. List::Utils etc and Perl6 give us syntax to splice arrays, but
  that's not efficient if doing a lot of that on truly large lists. *  List
  structures are also the starting point for building tree structures (Tree nodes
  have N of what list nodes have 1 of ... likewise the algorithms).

Yes, in theory and for very large data structures, this is true.
However the performance difference between built-in native operations
and code written in Perl is sufficiently large that it is hard to find
practical use cases where this is demonstrable in practice.  This is
even more true when you add Perl's slow method calls on top.
Furthermore even when it is, there is no point in complicating your
code with them unless there is a demonstrated performance problem.
And if there is a demonstrated performance problem, you're probably
getting close enough to Perl's limits that I would recommend writing
that in a different language.

So my experience is, Arrays are a better answer than linked lists in
Perl except when Perl is a bad choice of language.
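A synthetic illustration of that gap, assuming a hash-based singly linked node representation; on typical perls the native array traversal wins by a wide margin:

```perl
use strict;
use warnings;
use Benchmark qw(cmpthese);

# Compare summing 10_000 elements held in a native array vs. in a
# hand-rolled singly linked list of hash nodes.
my @array = (1 .. 10_000);
my $head;
$head = { val => $_, next => $head } for reverse 1 .. 10_000;

cmpthese(-1, {
    native_array => sub {
        my $sum = 0;
        $sum += $_ for @array;
        return $sum;
    },
    linked_list => sub {
        my $sum = 0;
        for (my $node = $head; $node; $node = $node->{next}) {
            $sum += $node->{val};
        }
        return $sum;
    },
});
```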



Re: [Boston.pm] object composition (has-a) with built-in OO system

2014-11-02 Thread Ben Tilly
On Sun, Nov 2, 2014 at 1:00 PM, Bill Ricker bill.n1...@gmail.com wrote:
 On Sun, Nov 2, 2014 at 3:29 PM, Adam Russell ac.russ...@live.com wrote:

 I've been doing OO for years with pure-OO type environments such as Ruby

 Do smalltalkers accept Ruby's claims?  Their native OO is more OO than
 P5's (but we have choices), but is arithmetic really done with messages?

Yes.  Ruby's OO model is an exact copy of Smalltalk's.  Arithmetic
works the same way once you get past the syntactic sugar.



Re: [Boston.pm] object composition (has-a) with built-in OO system

2014-11-02 Thread Ben Tilly
This works in Ruby:

  class Integer
def hello ()
  puts self
end
  end

  5.hello() # outputs 5

For performance reasons there is a slight bit of cheating for some
numerical types.  For example you cannot add a method to the number 5
only.  See http://ruby-doc.org/core-2.1.4/Fixnum.html for details.

On Sun, Nov 2, 2014 at 3:18 PM, Bill Ricker bill.n1...@gmail.com wrote:
 On Sun, Nov 2, 2014 at 5:39 PM, Ben Tilly bti...@gmail.com wrote:
 Do smalltalkers accept Ruby's claims?  Their native OO is more OO than
 P5's (but we have choices), but is arithmetic really done with messages?

 Yes.  Ruby's OO model is an exact copy of Smalltalk's.  Arithmetic
 works the same way once you get past the syntactic sugar.


 So in Ruby, 3 natively handles metaobject methods ?

 --
 Bill Ricker
 bill.n1...@gmail.com
 https://www.linkedin.com/in/n1vux



Re: [Boston.pm] object composition (has-a) with built-in OO system

2014-11-02 Thread Ben Tilly
I read the presentation.  It is hard to know where to begin on
responding.  There are a lot of true facts presented, and it all
sounds very clever.  But the big picture is completely wrong.

Yes, there is overhead with arrays, and yes you over allocate space.
(My memory says 5/4, not by 2, but that's a minor implementation
detail.)  However a linked list has so much overhead that it is always
going to lose.

Yes, a push on an array can cause it to come unshared in forked
processes.  But thanks to the way that Perl uses reference counting,
the simple act of iterating through a linked list will cause all of
THAT to become unshared.

Yes, arrays that take space don't untake them.  But if it matters, it
is easy to keep the array that will grow in an array ref, and then
replace the array ref to save space.

Yes, arrays pose locking issues.  But Perl's story around
multi-threading sucks so much that if you're worrying about that, you're
probably doing something fundamentally wrong.

And so it goes.  It all sounds smart, but he's thinking about it wrong.

On Sat, Nov 1, 2014 at 11:03 PM, Adam Russell ac.russ...@live.com wrote:
 Well, this isn't code being written for any purpose other than to experiment
 with some algorithms and data structures. The fact that it is in Perl
 is just for fun. That said, I believe that there is a good use case for
 linked lists when you want to strictly control memory allocation.
 Last I heard, Perl arrays grew in large chunks (I forget the details)
 whereas with a LL the memory allocated by perl will grow
 more slowly. This is from a talk I heard back in 2009 at YAPC
 (http://www.slideshare.net/lembark/perly-linked-lists) so some of these
 advantages
 may no longer be applicable. I'd be interested in knowing if this was the
 case.

 As to your question, well, I have no strong reason (again, just playing
 around here). My thinking is along the lines of that I would rather have a
 book have pages
 than just have pages. Perhaps too subtle a point that isn't worth trying
 to capture. Anyway, my analogy isn't exact since my book is really just
 a reference to the
 title page. :)


 Date: Sat, 1 Nov 2014 22:44:50 -0700
 Subject: Re: [Boston.pm] object composition (has-a) with built-in OO
 system
 From: bti...@gmail.com
 To: ac.russ...@live.com
 CC: boston-pm@mail.pm.org


 Just so that you know, it is very hard to find a use case for linked
 lists in Perl where a native array is not a better option. That said,
 why draw a distinction between a node in a linked list and a linked
 list? I would just have one class, that is a reference to a node.
 (That itself might have children.) The one exception is if you wanted
 a doubly linked linked list, and wanted to avoid having a memory leak.

 For inheritance, usually you just set up @ISA in some way. Either by
 explicit assignment, or with 'use parent'. (The modern replacement
 for 'use base'. Though I don't know why that needed replacing...)

 On Sat, Nov 1, 2014 at 10:36 PM, Adam Russell ac.russ...@live.com wrote:
  I was experimenting with some code, jogging my memory of linked lists.
 
  The approach I took was to define a package LinkedListNode and then a
  package LinkedList.
  My idea is that my LinkedList package is a wrapper around the head node
  which would also define
  some useful methods such as print_list(), remove_node(), and so forth.
 
  I did this by having the constructor for LinkedList create and bless()
  the head node.
  But then I ran into a problem having this object call LinkedListNode
  methods. This was solved
  by making LinkedList a subclass of LinkedListNode with the line
  unshift @ISA, 'LinkedListNode';
 
  Ok, so I am doing what seems to me to be composition in OO speak. That
  was my design intention anyway.
  The only way I've found that it works is to use a parent class
  relationship. Does Perl have some
  other way of doing this, using the built in OO system?
  I've read
  http://search.cpan.org/~rjbs/perl-5.18.4/pod/perlootut.pod#Composition
  and it seems that the answer is No, what I've done is the way to do
  it. but I thought I'd ask in case
  I'm just not getting something?
 
  I only want to use the Perl built in OO system. I am aware Moose (and
  others) have facilities for this.
 
  My code is at http://pastebin.com/8fnxY0Xy if you'd like to take a look.
  Any other questions or comments on this code would be appreciated as
  well. I've been brushing up on a few things
  and am open to comments.
 
  Thanks!
  Adam
 



Re: [Boston.pm] object composition (has-a) with built-in OO system

2014-11-01 Thread Ben Tilly
Just so that you know, it is very hard to find a use case for linked
lists in Perl where a native array is not a better option.  That said,
why draw a distinction between a node in a linked list and a linked
list?  I would just have one class, that is a reference to a node.
(That itself might have children.)  The one exception is if you wanted
a doubly linked linked list, and wanted to avoid having a memory leak.

For inheritance, usually you just set up @ISA in some way.  Either by
explicit assignment, or with 'use parent'.  (The modern replacement
for 'use base'.  Though I don't know why that needed replacing...)
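A minimal sketch of those alternatives, with invented class names matching Adam's example (the -norequire flag is needed with use parent when the parent class lives in the same file):

```perl
use strict;
use warnings;

package LinkedListNode;
sub new   { my ($class, %args) = @_; return bless {%args}, $class }
sub value { return $_[0]{value} }

package LinkedList;
# Three equivalent ways to declare the is-a relationship:
our @ISA = ('LinkedListNode');               # explicit assignment
# use parent -norequire, 'LinkedListNode';   # modern pragma; -norequire
#                                            # because the parent is in
#                                            # this same file
# use base 'LinkedListNode';                 # older pragma

package main;
my $list = LinkedList->new(value => 42);
print $list->value, "\n";                                  # 42, inherited
print $list->isa('LinkedListNode') ? "ok\n" : "not ok\n";  # ok
```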

On Sat, Nov 1, 2014 at 10:36 PM, Adam Russell ac.russ...@live.com wrote:
 I was experimenting with some code, jogging my memory of linked lists.

 The approach I took was to define a package LinkedListNode and then a
 package LinkedList.
 My idea is that my LinkedList package is a wrapper around the head node
 which would also define
 some useful methods such as print_list(), remove_node(), and so forth.

 I did this by having the constructor for LinkedList create and bless()
 the head node.
 But then I ran into a problem having this object call LinkedListNode
 methods. This was solved
 by making LinkedList a subclass of LinkedListNode with the line
 unshift @ISA, 'LinkedListNode';

 Ok, so I am doing what seems to me to be composition in OO speak. That
 was my design intention anyway.
 The only way I've found that it works is to use a parent class
 relationship. Does Perl have some
 other way of doing this, using the built in OO system?
 I've read
 http://search.cpan.org/~rjbs/perl-5.18.4/pod/perlootut.pod#Composition
 and it seems that the answer is No, what I've done is the way to do
 it. but I thought I'd ask in case
 I'm just not getting something?

 I only want to use the Perl built in OO system. I am aware Moose (and
 others) have facilities for this.

 My code is at http://pastebin.com/8fnxY0Xy if you'd like to take a look.
 Any other questions or comments on this code would be appreciated as
 well. I've been brushing up on a few things
 and am open to comments.

 Thanks!
 Adam




Re: [Boston.pm] Help with symbol table munging...

2014-10-29 Thread Ben Tilly
What our does is bind the package variable to a lexical scope.  So a
package statement after an our doesn't change which variable the name
refers to.  But if you put the our *after* the package statement then
it will bind to the correct package.  So in your eval put the package
statement before our %map and you'll be fine.

Incidentally if you are using eval in this horrible way, I strongly
recommend studying
http://perldoc.perl.org/perlsyn.html#Plain-Old-Comments-(Not!) for how
to make it clear in any error messages where your subroutines were
actually defined.
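Putting those two points together, a hypothetical sketch (package and sub names follow Ricky's example; the %map contents are invented):

```perl
use strict;
use warnings;

# The shared code lives in a single-quoted heredoc, then is compiled
# into each package by a string eval whose FIRST statement is the
# package declaration, so "our %map" binds to that package's %map.
my $shared = <<'CODE';
our %map;
sub handle_ages { return $map{ages} }
CODE

for my $pkg (qw(PKG::en_US PKG::fr)) {
    # The #line directive (see the perlsyn link) makes compile errors
    # report a recognizable "file" name instead of "(eval 1)".
    eval qq{#line 1 "$pkg (generated)"\npackage $pkg;\n$shared\n1;}
        or die $@;
}

no warnings 'once';
%PKG::en_US::map = (ages => 'en ages');
%PKG::fr::map    = (ages => 'fr ages');

print PKG::en_US::handle_ages(), "\n";   # en ages
print PKG::fr::handle_ages(), "\n";      # fr ages
```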

On Wed, Oct 29, 2014 at 11:47 AM, Morse, Richard E.,MGH
remo...@mgh.harvard.edu wrote:
 Hi! I'm running into an odd desire, and I'm hoping someone here can at least 
 tell me where to look, as so far Google and DDG are not telling me what I 
 want.

 I have a bunch of modules which have the same subroutines in each. Mostly, 
 the code for the subroutines is the same, but there is a chance that any 
 particular subroutine might be slightly different in any module (the context 
 here involves internationalization). So, for instance:

 package PKG::en_US;
 our %map;
 sub handle_ages { ; }
 sub handle_dests { ; }

 package PKG::fr;
 our %map;
 sub handle_ages { ; }
 sub handle_dests { ; }

 package PKG::pt_BR;
 our %map;
 sub handle_ages { ; }
 sub handle_dests { ; }

 What I want to do is be able to create a base module, something like

 package PKG::_base;
 sub handle_ages { ; }
 sub handle_dests { ; }

 I could then define the rest of them as

 package PKG::en_US;
 our %map;

 package PKG::fr;
 our %map;
 sub handle_dests { ; }

 package PKG::pt_BR;
 our %map;

 Then use some kind of symbol table mungery to add the undefined functions to 
 each package.

 This I can do.

 However, where I need help, is that I want to be able to have the package 
 variable %map be properly used by the functions added to each package. That 
 is, if I call PKG::en_US::handle_ages, it should use %PKG::en_US::map, not 
 (the nonexistant) %PKG::_base::map.

 I've tried various things, but from what I can understand, even with 
 everything declared our, the sub definition closes over the package that 
 it's in when defined. I've seen references to doing an `eval (package 
 $package; sub handle_ages { ; })`, but this makes maintaining everything 
 much harder, as I now don't have a base module, but rather a bunch of text 
 strings.

 Thanks,
 Ricky


 The information in this e-mail is intended only for the person to whom it is
 addressed. If you believe this e-mail was sent to you in error and the e-mail
 contains patient information, please contact the Partners Compliance HelpLine 
 at
 http://www.partners.org/complianceline . If the e-mail was sent to you in 
 error
 but does not contain patient information, please contact the sender and 
 properly
 dispose of the e-mail.





Re: [Boston.pm] Help with symbol table munging...

2014-10-29 Thread Ben Tilly
In a block you can do:

  no strict 'refs';
  my $mapref = \%{"${package}::map"};

And now you have a lexical reference that will bind to a closure.

Do note that using objects here is more common.
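Fleshed out, that closure approach might look like this (names again follow Ricky's example; the generator table is an invented convention). Each package gets real subs installed through the glob, each closing over its own %map, with no string eval:

```perl
use strict;
use warnings;

# Generators for the shared subs: each takes a reference to a
# package's %map and returns a closure over it.
my %defaults = (
    handle_ages => sub {
        my ($mapref) = @_;
        return sub { return $mapref->{ages} };
    },
);

for my $pkg (qw(PKG::en_US PKG::fr)) {
    no strict 'refs';
    my $mapref = \%{"${pkg}::map"};    # symbolic ref, taken once
    for my $name (keys %defaults) {
        next if defined &{"${pkg}::${name}"};    # keep local overrides
        *{"${pkg}::${name}"} = $defaults{$name}->($mapref);
    }
}

{
    no warnings 'once';
    %PKG::en_US::map = (ages => 'en');
    %PKG::fr::map    = (ages => 'fr');
}
print PKG::en_US::handle_ages(), "\n";   # en
print PKG::fr::handle_ages(), "\n";      # fr
```

The `next if defined` line is what lets an individual package supply its own slightly different version of a sub, which was the original requirement.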

On Wednesday, October 29, 2014, Morse, Richard E.,MGH 
remo...@mgh.harvard.edu wrote:

 Thanks -- I see how to do this with a string eval, but I'm hoping to avoid
 having to keep the functions as text strings.

 If I can't find any other way, it's what I will have to fall back on, but
 I'm hoping there's some way to avoid this.

 Ricky

  On Oct 29, 2014, at 2:56 PM, Ben Tilly bti...@gmail.com wrote:

   What our does is bind the package variable to a lexical scope.  So a
   package statement after an our doesn't change the variable.  But if you have
   the our *after* the package statement, it will bind to the correct package.  So
   in your eval, put the package statement before our %map and you'll be
   fine.
 
  Incidentally if you are using eval in this horrible way, I strongly
  recommend studying
  http://perldoc.perl.org/perlsyn.html#Plain-Old-Comments-(Not!) for how
  to make it clear in any error messages where your subroutines were
  actually defined.
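For completeness, here is a sketch of the string-eval route being discussed, with the package statement placed before our %map and a #line directive so compile errors name the generator instead of "(eval 1)" (package and sub names are the hypothetical ones from the thread):

```perl
#!/usr/bin/perl
use strict;
use warnings;

# The package statement comes *before* the our, so %map binds to the
# target package; #line makes error messages point at this generator.
my $template = <<'END';
package %s;
our %%map;
sub handle_ages { return $map{ages} }
END

for my $pkg (qw(PKG::en_US PKG::fr)) {
    my $code = sprintf qq{#line 1 "generated-for-$pkg"\n} . $template, $pkg;
    eval $code;
    die $@ if $@;
}

# Each package now has its own %map, correctly bound.
no warnings 'once';   # these fully qualified names appear only once here
%PKG::en_US::map = (ages => 'en ages');
%PKG::fr::map    = (ages => 'fr ages');

print PKG::en_US::handle_ages(), "\n";   # en ages
print PKG::fr::handle_ages(), "\n";      # fr ages
```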
 
  On Wed, Oct 29, 2014 at 11:47 AM, Morse, Richard E.,MGH
  remo...@mgh.harvard.edu wrote:
  Hi! I'm running into an odd desire, and I'm hoping someone here can at
 least tell me where to look, as so far Google and DDG are not telling me
 what I want.
 
  I have a bunch of modules which have the same subroutines in each.
 Mostly, the code for the subroutines is the same, but there is a chance
 that any particular subroutine might be slightly different in any module
 (the context here involves internationalization). So, for instance:
 
 package PKG::en_US;
 our %map;
 sub handle_ages { ; }
 sub handle_dests { ; }
 
 package PKG::fr;
 our %map;
 sub handle_ages { ; }
 sub handle_dests { ; }
 
 package PKG::pt_BR;
 our %map;
 sub handle_ages { ; }
 sub handle_dests { ; }
 
  What I want to do is be able to create a base module, something like
 
 package PKG::_base;
 sub handle_ages { ; }
 sub handle_dests { ; }
 
  I could then define the rest of them as
 
 package PKG::en_US;
 our %map;
 
 package PKG::fr;
 our %map;
 sub handle_dests { ; }
 
 package PKG::pt_BR;
 our %map;
 
  Then use some kind of symbol table mungery to add the undefined
 functions to each package.
 
  This I can do.
 
  However, where I need help, is that I want to be able to have the
 package variable %map be properly used by the functions added to each
 package. That is, if I call PKG::en_US::handle_ages, it should use
 %PKG::en_US::map, not (the nonexistent) %PKG::_base::map.
 
  I've tried various things, but from what I can understand, even with
 everything declared our, the sub definition closes over the package that
 it's in when defined. I've seen references to doing an `eval (package
 $package; sub handle_ages { ; })`, but this makes maintaining everything
 much harder, as I now don't have a base module, but rather a bunch of text
 strings.
 
  Thanks,
  Ricky
 
 
 
 





Re: [Boston.pm] $10K programming contest in Boston

2014-09-29 Thread Ben Tilly
I like the line where they say, "You might recognize this problem as
intractable in general."

Yup, the bin packing problem.  A standard example of an NP-complete problem.
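The textbook approximation for 1-D bin packing, first-fit decreasing, is only a few lines of Perl; fitting products into a 3-D box is much harder, but the shape of the heuristic is the same (the capacity and item sizes below are made up for illustration):

```perl
use strict;
use warnings;

# First-fit decreasing: sort items largest-first, then drop each one
# into the first bin that still has room, opening a new bin if none does.
sub first_fit_decreasing {
    my ($capacity, @items) = @_;
    my @bins;   # each bin is { space => remaining, items => [...] }
    for my $item (sort { $b <=> $a } @items) {
        die "item $item exceeds capacity" if $item > $capacity;
        my $placed = 0;
        for my $bin (@bins) {
            if ($bin->{space} >= $item) {
                $bin->{space} -= $item;
                push @{ $bin->{items} }, $item;
                $placed = 1;
                last;
            }
        }
        push @bins, { space => $capacity - $item, items => [$item] }
            unless $placed;
    }
    return @bins;
}

my @bins = first_fit_decreasing(10, 7, 5, 4, 3, 1);
printf "%d bins\n", scalar @bins;   # 2 bins: [7,3] and [5,4,1]
```

First-fit decreasing is guaranteed to use at most roughly 11/9 of the optimal number of bins, which is often good enough when the exact problem is intractable.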

On Sat, Sep 27, 2014 at 1:59 PM, Tom Metro tmetro+boston...@gmail.com
wrote:

 Local company Vistaprint is running a programming contest. The
 objective is to determine the smallest shipping box that can
 contain a given collection of Vistaprint products.

 I'm not sure if this is a made up goal, or a real need (they imply it is
 real). I would think by now there would be commercial solutions to this
 common problem.

 I'm posting this here because Vistaprint was/is a company that used
 Perl. They might have a preference for Perl solutions. (They list a
 bunch of acceptable languages for submissions, which includes Perl.)

 It sounds like less of a programming challenge and more of a mathematics
 or algorithm challenge, so team up with a math major.

  -Tom


  Original Message 
 Subject: [Discuss] $10K programming contest in Boston
 Date: Sat, 27 Sep 2014 12:47:30 -0400
 From: Daniel Barrett dbarr...@blazemonger.com
 To: disc...@blu.org


 My Boston-area employer is running a programming contest with a
 $10,000 prize, in case anyone's interested

   http://www.lifeinvistaprint.com/techchallenge/

 --
 Dan Barrett
 dbarr...@blazemonger.com






Re: [Boston.pm] directed Graph modules in perl and CPAN

2013-06-25 Thread Ben Tilly
For a one-off task like this, I would assume that it would take more
work to find and evaluate a module that solves my problem than it
would take to roll my own solution.

Heck, in this case you can arrange it as a map-reduce.  Your initial
map takes each file, and spits out key/value pairs where the key is
the meaningful identifier of a node, and the value is the meaningful
identifier of a predecessor.  Your reduce takes the key and dedupes
the values to get a node and its unique predecessors.

With Hadoop you can now distribute this calculation across multiple
machines and parallelize the work.
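A single-machine sketch of that map/reduce shape (the input format of one "node predecessor" pair per line is invented for illustration; real input would carry the three-part keys Steve describes):

```perl
use strict;
use warnings;

# Map phase: each line becomes a (node => predecessor) key/value pair.
sub map_phase {
    my (@lines) = @_;
    return map { [ split ' ', $_, 2 ] } @lines;
}

# Reduce phase: group pairs by node and dedupe predecessors via a hash.
sub reduce_phase {
    my (@pairs) = @_;
    my %preds;
    $preds{ $_->[0] }{ $_->[1] } = 1 for @pairs;
    return map { { node => $_, preds => [ sort keys %{ $preds{$_} } ] } }
           sort keys %preds;
}

# Duplicate edges across "files" collapse in the reduce.
my @pairs = map_phase("a b", "a b", "a c", "b c");
for my $n (reduce_phase(@pairs)) {
    print "$n->{node}: @{ $n->{preds} }\n";
}
# a: b c
# b: c
```

Because the reduce only ever sees one node's pairs at a time, the same code parallelizes naturally under Hadoop or any other map-reduce framework.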

On Tue, Jun 25, 2013 at 4:47 PM, Steve Tolkin stevetol...@comcast.net wrote:
 Summary: I went looking for a CPAN graph module so I could merge multiple
 directed graphs.  I found two that looked good, by famous Perl authors.
 Unfortunately both have issues.

 1. Graph::Easy looks good, but it has not changed in years and has a bug
 list
 https://rt.cpan.org/Public/Dist/Display.html?Status=Active;Name=Graph-Easy
 with a bunch of Open and Important bugs.  See also
 http://search.cpan.org/~shlomif/Graph-Easy-0.73/lib/Graph/Easy.pm   and
 http://bloodgate.com/perl/graph/index.html
 This was originally by tels and now maintained by by Shlomi Fish.

 2. Graph now says: "UNSUPPORTED. Unfortunately, as of release 0.95, this
 module is unsupported, and will no more be maintained. Sorry about that."
 Its bug list at https://rt.cpan.org/Public/Dist/Display.html?Name=Graph is
 short but it includes this Important one: "find_a_cycle and has_cycle are
 broken" https://rt.cpan.org/Public/Bug/Display.html?id=78465
 See also http://search.cpan.org/~jhi/Graph-0.96/lib/Graph.pod
 This is by Jarkko Hietaniemi.

 3. Graph::Simple is just v0.03

 Are there other good modules?

 A summary of what I MIGHT want to do:
 Merge separate directed graphs into one, by combining equivalent nodes and
 creating the union of their predecessor sets.

 In more detail: There are several existing directed graphs, each in its own
 file.  Sometimes a node in one file is equivalent to a node in another file.
 Nodes have associated attributes.  The meaningful identifier for a node is
 a three part key.  However, in each file each node is assigned an arbitrary
 integer ID starting with 1, so the same integers appear in many files,
 referring to different nodes.  In each file a node's predecessors are
 identified just by a set of those integers.

 --
 Thanks,
 Steve Tolkin







Re: [Boston.pm] suppressing warnings

2013-05-04 Thread Ben Tilly
If your module has an import method, and that method calls
warnings->unimport('once'), then the unimport should be lexically
scoped to where your package was used.  Which in the normal case is
the whole file, so it works.
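A self-contained sketch of that trick, with Greg's testpackage inlined into one file (the BEGIN-time import call stands in for "use testpackage;", and the string eval is swapped for a symbolic reference, which does the same assignment without eval):

```perl
#!/usr/bin/perl
use strict;
use warnings;

package testpackage;
use Exporter ();
our @EXPORT = qw(somesub);

sub import {
    # Pragmas are lexically scoped to the scope being compiled, so this
    # disables "used only once" warnings for the rest of whatever file
    # says "use testpackage".
    warnings->unimport('once');
    goto &Exporter::import;   # still export somesub, as before
}

sub somesub {
    my ($name) = @_;
    my ($package) = caller;
    no strict 'refs';
    ${"${package}::${name}"} = 42;   # symbolic ref instead of string eval
}

package main;
BEGIN { testpackage->import }   # stands in for "use testpackage;"

somesub('tricky');
print "hello, tricky is '$main::tricky'\n";   # no "used only once" warning
</imports-placeholder>
```

The key point is that warnings->unimport runs while the caller's file is still being compiled, so the change to the lexical warning bits applies to everything after the `use` line.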

On Sat, May 4, 2013 at 2:07 PM, Jordan Adler jordan.m.ad...@gmail.com wrote:
 This pragma usage is lexically scoped, too.

 Sent from my iPhone

 On May 4, 2013, at 5:01 PM, Jordan Adler jordan.m.ad...@gmail.com wrote:

 Design issues aside,

 no warnings 'once';

 Sent from my iPhone

 On May 4, 2013, at 3:47 PM, Greg London em...@greglondon.com wrote:

 is there a way to suppress this warning from inside testpackage.pm somehow?

 I've tried a bunch of things and still haven't found a solution.

 Is this not possible to do in Perl?
 Or is it so obvious I can't see it?

 Greg



 package file testpackage.pm:

 package testpackage;
 use warnings;
 use strict;
 use Data::Dumper;
 use Exporter 'import';
 our @EXPORT = qw ( somesub );
 sub somesub{
   my ($name)=@_;
   my @caller=caller(0);
   my $package=$caller[0];
   my $evalstr = '$'.$package.'::'.$name.' = 42;';
   eval($evalstr);
 }
 1;


 script file testscript.pl:

 #!/usr/bin/env perl
 use warnings;
 use strict;
 use testpackage;
 somesub('tricky');
  print "hello, tricky is '$main::tricky'\n";


 When I run this script, I get the warning:
  Name "main::tricky" used only once: possible typo at ./testscript.pl


 is there a way to suppress this error from inside testpackage.pm somehow?










 --







Re: [Boston.pm] C++ books

2013-04-17 Thread Ben Tilly
On Wed, Apr 17, 2013 at 8:51 AM, Mike Small sma...@panix.com wrote:
 Ben Tilly bti...@gmail.com writes:

 On Wed, Apr 17, 2013 at 7:29 AM, Greg London em...@greglondon.com wrote:
 Why use macros when you can write a function?

 Lisp weenie answer: because the arguments to functions may produce
 side effects, while with macros you can control that.  Of course the
 Lisp answer is inapplicable in other languages because their macros
 are less reliable.

 C/C++ answer: because with a macro you can choose whether to try to
 call functions that might or might not exist on your platform, and
 would make the compiler miserable if it saw it.

 I haven't really gone there myself (the words template metaprogramming
 and SFINAE still make me squirm a bit), but I gather you can do most or
 all of this using templates these days. It's an improvement I think, if
 nothing else because you're not calling what you're making a macro with
 whatever expectations that might give a Lisp programmer. Besides, the
 support for this (e.g. avoiding the monstrous error messages or the
 weirdly hackish nature of the techniques) is something they look to
 improve in newer standards, so maybe someday the code to do it will be
 more natural to people who don't know templates top to bottom.

I have done some template metaprogramming, but I'm much more of a
consumer than a producer.  I do not know the full capabilities.  However
while we're on that topic, I have a respect/hate relationship with the
STL.  I respect it, but hate using it.  As an example of a beef,
consider iterators.

If I have a vector of type Foo, then an iterator over it has type
std::vector<Foo>::iterator.

If I have a map from Foo to Bar, then an iterator over it has type
std::map<Foo, Bar>::iterator.

If I have a set of things of type Foo, then an iterator over it has
type std::set<Foo>::iterator.

What's wrong with this?  As a programmer I don't care what data type
they came from, I care what I am getting out of them.  Therefore I
would prefer the data types std::iterator<Foo>,
std::iterator<std::pair<Foo, Bar>>, and std::iterator<Foo>.  For the use cases
currently supported by the STL, the compiler should be able to trace
through the data, and replace with what you currently have to write,
and then do something efficient.  But it can fall back on a slow
generic implementation.  ("Slow" in quotes because other languages do
not even blink at what is required.)

What would you gain from this?  Well read
http://perl.plover.com/Stream/stream.html and it is easy to translate
those examples into easy abstractions over the types that I wish the
STL provided.  But none of it can be easily done in C++ today.  (Well,
I can write the slow generic implementation and make it work, but not
so easily.)
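In Perl, the container-agnostic iterators being wished for here are just closures, in the style of the Dominus streams article linked above; the consumer never cares what container the values came from (function names below are made up for the sketch):

```perl
use strict;
use warnings;

# Two sources with different underlying containers, same interface:
# a sub that returns the next value, or undef when exhausted.
sub iter_from_array { my @a = @_; return sub { shift @a } }
sub iter_from_hash  {
    my %h = @_;
    my @k = sort keys %h;
    return sub {
        my $k = shift @k;
        return defined $k ? [ $k, $h{$k} ] : undef;
    };
}

# Lazily transform a stream; terminates when the source yields undef.
sub imap {
    my ($f, $it) = @_;
    return sub { my $v = $it->(); defined $v ? $f->($v) : undef };
}

my $it = imap(sub { $_[0] * 10 }, iter_from_array(1, 2, 3));
my @out;
while (defined(my $v = $it->())) { push @out, $v }
print "@out\n";   # 10 20 30
```

This sketch uses undef as the end-of-stream sentinel, so it only suits streams of defined values; a production version would use a separate "exhausted" flag.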



Re: [Boston.pm] C++ books

2013-04-16 Thread Ben Tilly
On Tue, Apr 16, 2013 at 4:02 AM, David Cantrell da...@cantrell.org.uk wrote:
 On 15/04/2013 19:35, Ben Tilly wrote:

 I'm writing some C++ at the moment that fits into the first group
 (performance-critical code).  For unit testing I've been emitting TAP
 protocol and testing it with prove, but are there better approaches?

 I get a test file with a lot of code that looks like this:

printf(
  "%s %d: Some useful description and maybe a number %d\n",
  (expected_value == test_value) ? "ok" : "not ok", ++tests,
  some_useful_debugging_info
);


 How about abusing the pre-processor to build a strangely familiar-looking
 mini-language for testing:

 #define OK(result, text) printf("%s %d %s\n", ((result) ? "ok" : "not ok"), \
 test_number++, text); if(!(result)) { all_gone_buggerup = 1; }
 #define DONE_TESTING() printf("%s\n", all_gone_buggerup ? "FAILED" : \
 "PASSED"); if(all_gone_buggerup) { return 1; } else { return 0; }

 obviously you also need to declare and initialise test_number and
 all_gone_buggerup too.

 You can then write:

 int main(void) {
   int test_number = 0;
   int all_gone_buggerup = 0;
   OK(1 == 1, "it works!");
   OK(2 == 1, "oh no it doesn't");

   DONE_TESTING();
 }

You missed the significant fact that I am passing information into the
description via further parameters.  That's essential for me, not a
nice-to-have.  It allows me to do things like have potentially dubious
values appear directly in my test output.  (I caught a subtle bug due
to seeing that output just last night!)



Re: [Boston.pm] Test code generation (was C++ books)

2013-04-16 Thread Ben Tilly
On Tue, Apr 16, 2013 at 6:51 AM, Gyepi SAM gy...@praxis-sw.com wrote:
 On Mon, Apr 15, 2013 at 11:35:22AM -0700, Ben Tilly wrote:
 On Mon, Apr 15, 2013 at 11:09 AM, Greg London em...@greglondon.com wrote:
 For unit testing I've been emitting TAP
 protocol and testing it with prove, but are there better approaches?

 I get a test file with a lot of code that looks like this:

   printf(
 "%s %d: Some useful description and maybe a number %d\n",
 (expected_value == test_value) ? "ok" : "not ok", ++tests,
 some_useful_debugging_info
   );

 This method is usually tolerable for a while but gets old as soon as
 I realize I'm doing the computer's job much slower and less accurately.
 Instead, I generate the test file from a simpler data file that only
 contains the relevant test data.

 I've successfully used three different approaches for this:

 1. Extract the relevant data into a text file
and use it as input to a program that produces the test file.
This works well when the input has a lot of variety or could change in
unexpected ways, but then you also have a code generator to maintain.

 2. Extract the relevant data into XML and use gsl 
 (https://github.com/imatix/gsl)
to generate the test file. Works well when the test data varies in ways
that can be relatively easily expressed by xml. While this approach works
well, I don't particularly like editing xml so I actually write my data in 
 a
hierarchical text format and use, shameless plug, txt2xml 
 (https://github.com/gyepisam/txt2xml)
to convert it to xml.

 3. I have some similar things with autogen (apt-get install autogen) and it
works pretty well too.

 The various conversions from txt to xml to c and h do produce long pipelines,
 but I use make, so I just define implicit rules and let make resolve the
 dependency chain for me. Then I can kick it off with:

 make test

 It works quite well and the pieces are modular enough that the next guy is 
 likely
 to understand it. However, understanding is not required for effective use,
 which is usually the primary goal.

I think you're visualizing something different than my actual problem.
 There are indeed a lot of lines in my test file resembling the
pattern that I showed.  But I do not primarily have values in/out.
Instead I set up method calls, make them, then go poking around the
innards of data structures and verify that they have what I expect
them to have.  This requires intimate knowledge of the class that I'm
testing, which is not so automatable.

But when I have a different type of problem, a data-driven approach is
very good.  So is throwing random data at stuff and seeing what
breaks.  That just doesn't happen to be the type of problem that I
have right now.



Re: [Boston.pm] C++ books

2013-04-15 Thread Ben Tilly
On Mon, Apr 15, 2013 at 11:09 AM, Greg London em...@greglondon.com wrote:
[...]
 So, I've been doing verilog testbenches for years,
 system verilog test benches for years, and they all
 have their limitations, as not being what I would call
 a real language. So, I'm trying to write a testbench
 in C++, interface it with C, use that to jump the DPI
 barrier to verilog, and tie into the hardware simulation.

 So, I'm limited to c/c++ because we're tied to hardware
 simulators which can only run hardware languages such
 as verilog/vhdl and can only interface to one software
 language, namely c, which can then tie into c++.

 That is, at the moment, my only option for any testbench
 that isn't written in verilog or vhdl. The simulator
 limits me to c and c++.

I'm writing some C++ at the moment that fits into the first group
(performance-critical code).  For unit testing I've been emitting TAP
protocol and testing it with prove, but are there better approaches?

I get a test file with a lot of code that looks like this:

  printf(
    "%s %d: Some useful description and maybe a number %d\n",
    (expected_value == test_value) ? "ok" : "not ok", ++tests,
    some_useful_debugging_info
  );

I find it manageable, but I'm wondering about the next guy.



Re: [Boston.pm] Tech Meeting Tuesday Apr 9th, Embedded Perl with Federico, 7, MIT E51-376

2013-04-10 Thread Ben Tilly
On Wed, Apr 10, 2013 at 1:33 PM, Tom Metro tmetro+boston...@gmail.com wrote:
 Greg London wrote:
[...]
 Perl's bolt-on version of classes can fix this
 about as easily as perl's closure stuff can fix it.

 The closure version doesn't scale. You can't stick it in a library and
 call it from multiple places without stepping on things.

I do not agree with this assertion.  I've seen closure based solutions
and OO versions both scale, and both fail.  They are appropriate for
different problems, and different designs.  But as long as you know
what they are (and aren't) good at, you can choose either.

Speaking personally, I tend to prefer code that is heavy on closures
to code that is heavy on OO.  But that's a matter of taste.

[...]



Re: [Boston.pm] Perl and recursion

2013-04-10 Thread Ben Tilly
On Wed, Apr 10, 2013 at 1:53 PM, Bob Rogers rogers-...@rgrjr.dyndns.org wrote:
From: Gyepi SAM gy...@praxis-sw.com
Date: Wed, 10 Apr 2013 13:54:06 -0400

On Tue, Apr 09, 2013 at 10:24:45PM -0400, Bob Rogers wrote:
[...]
Because your example handles a single result, it is not clear that,
in fact, any return values MUST be collected similarly. Unless, of
course, map is used simply for side effects.

 I would hope (but don't know how to check) that Perl is smart enough to
 notice that the map in my original example is in a void context, and
 won't bother to store any return values from the block.  Is that what
 you mean?

Current versions of Perl are sufficiently smart to not build a return
list.  You can verify by creating objects with destructors inside the
map and seeing when they get destroyed.

 It's hard to think of a case where this would be better than a simple
 loop, which is both smaller and more readable.  However, this technique
 also gives you an early exit when mapping over a tree recursively.

Even then, the need to handle collected values manually would seem to
obviate any advantage in doing so.

 Over the years, I have seen many applications for mapping over trees
 (and also written a few), and most of them just produce side effects,
 either to the tree or externally.  Those that don't tend to be of the
 nature collect (or count) all nodes that satisfy X, for which a
 non-local exit would defeat the purpose.  Those that do require
 non-local exit are of the form find the *first* X or return true if
 *any* X, where the burden of handling a scalar return is minimal and
 the benefit from being able to skip the remainder of the tree is clear.

Many also are of the form, Start where you left off, give me the next
one.  That gets you into internal versus external iterators.

In other words, I believe the collect values cases and the
 nonlocal exit cases are effectively disjoint.  I've never found the
 need for (e.g.) a collect the first N nodes that satisfy X routine.

Returning paginated results in a web page.

This discussion has confirmed, for me, that writing code is, in many
ways, similar to writing prose. Though there may initially, appear to
be many ways to communicate, thoughtfulness, good taste and
circumstance quickly show that there are only a few good ways to do
so.

-Gyepi

 To the extent that there are vastly more ways to write bad (code|prose)
 than good, I heartily agree.  But I submit that that still leaves an
 enormous field of possibilities.  Of course, I always try to write the
 best (code|prose) I can, subject to constraints, of course, and it often
 seems to me that there's only a few acceptable solutions, maybe only
 one.  But I have strong preferences in coding style, so I suspect that's
 just my biases showing.

I absolutely agree.  There is not just one right way.

Sorry to come back at you with so much disagreement; that was not my
 intention.

 -- Bob




Re: [Boston.pm] Perl and recursion

2013-04-06 Thread Ben Tilly
On Fri, Apr 5, 2013 at 8:22 PM, Jerrad Pierce belg4...@pthbb.org wrote:
 at each level of recursion. What seems to be the case though is that when we
 start going back up the stack, that memory doesn't seem to be released at each
 pop. If, say, at max depth 500mb of ram has been allocated, I don't see that
 released at any point except for when perl exits, and then of course it is all
 released at once. Or at least that is what seems to happen.

 Perl doesn't release memory, it keeps it for reallocation.
 This is (was?) one of the issues one had to be aware of with mod_perl.

Normally the memory is not given back, but Perl does make a good faith
effort to try to give it back if it is convenient.
http://www.perlmonks.org/?node_id=746953 has a couple of examples of
where it can happen.

However this is not an issue in practice very often.



Re: [Boston.pm] Passing large complex data structures between process

2013-04-05 Thread Ben Tilly
Pro tip.  I've seen both push based systems and pull based systems at
work.  The push based systems tend to break whenever the thing that
you're pushing to has problems.  Pull-based systems tend to be much
more reliable in my experience.

You have described a push-based system.  I would therefore avoid that
design.  Yes, I know you're afraid of locking a hot spot.  But a
simple database piece that does nothing but distribute known jobs
creates more concurrency than most people realize.  A simple Redis
instance can improve that by a crazy factor.  There are ways to scale
farther, but they won't hit that bottleneck until they have a scaling
problem that is orders of magnitude larger than any that they have.

If you disregard this tip, then learn from experience and give thought
in advance to how you're going to monitor the things that you're
pushing to, notice their problems, and fix them when they break.
(Rather than 2 weeks later when someone wonders why their data stopped
updating.)

On Thu, Apr 4, 2013 at 8:43 PM, John Redford eire...@hotmail.com wrote:

 David Larochelle wrote:

 [...]
 We're using Postgresql 8.4 and running on Ubuntu. Almost all data is
 stored in
 the database. The system contains a list of media sources with associated
 RSS
 feeds. We have a downloads table that has all of the URLs that we want to
 download or have downloaded in the past. This table currently has ~200
 million
 rows. We add downloads for the RSS feed of each source to the downloads
 table a few times a day. When these RSS feeds are downloaded, they are
 analyzed for new stories. We add download entries for each new story that
 we
 encounter.  We have an indexed enum field in the database that lets us
 know if
 a download in the downloads table has already been downloaded, needs to be
 downloaded, is queued in memory, or is currently being downloaded. We have
 ~1.5 million URLs that need to be downloaded at anyone time.

 This does sound like it will not scale. Which is only to say what you have
 said.

 Keeping all that data in one table means you will have to lock it for all
 your operations, preventing parallelism.  Keeping all that data in one table
 means it will always get larger and slower.  Keeping all that data in one
 table means conflicting optimization goals, and thus little optimization.

 I would suggest breaking your data up into a number of tables, such that the
 purpose of each would be more focused and amenable to optimizing for reading
 or writing -- for example, you could have one table of "needs", one of
 "queued", one of "downloading" and one of "downloaded", moving the data
 along from table to table in batches.  Thus, at any given moment, your
 "needs" table would only contain 1.5 million rows, rather than 200 million
 -- it will scale with your current workload rather than having to scale
 with all the work you've ever done.

 One could suggest having separate queue tables set up for each of your
 downloading systems.  Thus your main "find work to do" query, which has the
 logic in it to avoid having too many URLs for a single target, would query
 the 1.5 million row "needs" table and move rows into the queue table
 associated with a downloader -- the downloader would simply need to perform
 a trivial select against those few-hundred/few-thousand rows,
 nigh-instantly getting its fresh load of work.  As data is downloaded, each
 downloader could move rows from its own queue table to the ~200 million row
 "downloaded" table.

 Since every downloader would have its own table, they would not conflict on
 locks there.  The write locks to insert into the downloaded table would not
 conflict with any read locks, as you wouldn't read from that table to find
 work to do.  Indeed, you should avoid having any read-friendly indexes on
 that table -- by having it be non-indexed, inserting data into it will be as
 fast as possible.

 All of these steps could be made further more efficient by using batched
 queries and appropriate units of work -- for example, if a queue table held
 a hundred URLs, the downloading system could refrain from moving them into
 the downloaded table until they were all complete -- thus it would only need
 one insert from/delete to move those hundred records in one transaction
 -- and it would not have to actually send any URLs over the network back to
 the database. A further efficiency could be gained by allotting two queue
 tables to each downloader, such that at any given moment, the downloader was
 working to process one of them, while the queue-master was working to find
 work to fill the other.

 If you already have a significant investment in logic within this database,
 you could leverage that by using procedures  cursors to fill queue tables
 and switch queue tables while processing the flow of results from your find
 work to do logic -- cursors will generally get you access to the results as
 soon as possible, before the query is complete.

 By using separate tables, you could also 

Re: [Boston.pm] Passing large complex data structures between process

2013-04-05 Thread Ben Tilly
On Fri, Apr 5, 2013 at 12:04 PM, John Redford eire...@hotmail.com wrote:
 Ben Tilly emitted:

 Pro tip.  I've seen both push based systems and pull based systems at
 work.  The
 push based systems tend to break whenever the thing that you're pushing to
 has problems.  Pull-based systems tend to be much more reliable in my
 experience.
 [...]

 If you disregard this tip, then learn from experience and give thought in
 advance to how you're going to monitor the things that you're pushing to,
 notice their problems, and fix them when they break.
 (Rather than 2 weeks later when someone wonders why their data stopped
 updating.)

 Your writing is FUD.

Are you reading something into what I wrote that wasn't there?
Because I'm pretty sure that what I wrote isn't FUD.

A pull-based system relies on having the job that does the work ask
for work when it's ready.  A push-based system relies on pushing to a
worker.  If the worker in question is busy on a long job, or has
crashed for some reason, it is easy for work to get delayed or lost
with a push-based system while other workers sit idle.  A recent
well-publicized example of resulting sporadic problems is
http://rapgenius.com/James-somers-herokus-ugly-secret-lyrics.  A
pull-based system avoids that failure mode unless all workers crash at
once.

For an example of an interesting failure case, consider a request that
crashes whatever worker tries to do it.  With a push-based system, a
worker gets it, crashes, might be brought up automatically, tries the
same request, crashes again, and all requests sent to the unlucky
worker are permanently lost.  With a pull-based system, the bad
request will start crashing workers left and right, but progress
continues to be made on everything.

This is not to say that push-based systems are always inappropriate.
http is a push-based system, so often push-based system is simpler to
build and design.  But if you have an even choice, prefer the
pull-based system.  Yes, you will have to poll, but it tends to have
better failure modes.

 Pro tip.  Learn to use a database.  I know that it can be fun to play with
 the latest piece of shiny technofrippery, like Redis, and to imagine that
 because it is new, it somehow is better than anything that came before and
 that it can solve problems that have never been solved before.  It's not.
 There's nothing specifically wrong with it, but it's not a silver bullet and
 parallelism is not a werewolf.

What makes you think that I don't know how to use a database?  (Here
is a hint: a separate table per downloader is not exactly a best
practice.)  If you'll note, my first suggestion was to implement
polling on the database.  That's because I've been there, done that.
It works and the database gets better throughput than most people
realize it can.  In fact it probably gets more than sufficient for
this particular application.

If the queries are properly designed (often means that someone else
did the heavy work putting things into the queue), distributing
hundreds of jobs per second to workers is pretty easy.  (I don't know
the limit, 100/second was sufficient when I needed to do this last,
and MySQL didn't break a sweat on that.)  I'll describe how to do that
in a second.

But this particular use case isn't a great fit for a database's
capabilities.  It is like using army tanks for picking up groceries
from the corner store.  If you've got the tanks, might as well, but
there are more appropriate tools.  Redis will distribute tens of
thousands of jobs per second pretty easily.  Scaling farther than that
requires distributing work in a more sophisticated way, but it sounds
like they have a long way to go before running into that barrier.

(NOTE FOR DAVID: here is a blueprint for something that might be easy
for you to build, to solve your current scaling problem.  It will also
allow you to trivially distribute downloading across multiple machines
for better throughput, without introducing new technologies into your
stack.)

Now if you're curious how to achieve that throughput with a database
and polling, here you go.  This is based on a system that I've built
variations of several times.  Have two tables, let's call them
job_order and job_pickup.  We insert into job_order when we want work
done.  A worker inserts into job_pickup when it's ready to do work.

When a worker wakes up, it checks whether the top id for job_order
exceeds the top id for job_pickup.  If not, sleep.  If it does, then
insert a row into job_pickup.  The id of that row is your job.  Start
polling for that job_order.  When you find it, update the record with
a new status.  Once you're done, mark it done.  If your job_order had
been there right away, assume that there is another, and insert into
job_pickup until the workers have caught up with requests.  Then after
that job, sleep.

When I say sleep, I mean something like usleep(rand(0.2)).  The rand
avoids a thundering herd problem.  When you're polling, put a
smaller

Re: [Boston.pm] Perl and recursion

2013-04-05 Thread Ben Tilly
On Fri, Apr 5, 2013 at 6:03 PM, Conor Walsh c...@adverb.ly wrote:
 On Apr 5, 2013 8:24 PM, Uri Guttman u...@stemsystems.com wrote:
  as for your ram usage, all recursions can be unrolled into plain loops by
  managing your own stack. this is a classic way to save ram and sub call
  overhead. with perl it would be a fairly trivial thing to do. use an array
  for the stack and each element could be a hash ref containing all the
  data/args for that level's call. then you just loop and decide to 'recurse'
  or not. if you recurse you push a new hash ref onto the stack and loop. if
  you don't recurse you pop a value from the stack and maybe modify the
  previous top of the stack (like doing a return early in recursion). i leave
  the details to you. this would save you a ton of ram and cpu for each call
  if you go very deep and complex.

 Uri, you know more perlguts than I do, so maybe this is a dumb question,
 but...  why is this that much faster than actual recursion?  That speaks
 poorly of lowercase-p perl.

Do you like having detailed stack backtraces on program crashes?  It
takes a lot of work to maintain that information, and you're using it
whether or not you crash.

___
Boston-pm mailing list
Boston-pm@mail.pm.org
http://mail.pm.org/mailman/listinfo/boston-pm


Re: [Boston.pm] Perl and recursion

2013-04-05 Thread Ben Tilly
On Fri, Apr 5, 2013 at 6:39 PM, Bill Ricker bill.n1...@gmail.com wrote:
 THEORY

 Every general computer science over-simplification has a BUT that is very
 important.

 Recursion is as efficient as iteration ...
 ... IF AND ONLY IF Tail Recursion Optimization is in effect.

 When Tail Recursion is in effect, you do NOT have all that call stack,
 you're only one level down the entire time (which means no overhead and no
 recursion limit either).

 (Whether you can be thread safe and tail recursive in any modern language i
 haven't heard.)

Erlang is the best example.

 PRACTICE

 When in doubt, benchmark.

I always doubt my benchmarks.



Re: [Boston.pm] CPAN.pm module? h2xs replacement?

2013-01-18 Thread Ben Tilly
On Fri, Jan 18, 2013 at 6:22 AM, Bill Ricker bill.n1...@gmail.com wrote:
 On Fri, Jan 18, 2013 at 4:08 AM, Peter Vereshagin pe...@vereshagin.orgwrote:

 Kind of intrigued: what's new or any changes on the book particularly?


 A couple of the modules that he wrote for the book were best thinking then,
 but were bypassed by progress and never fully evolved.

 There's an errata page at Oreilly, iirc.

Ironically I got one of my best tips by doing what Damian said not to
do.  He said not to use Hash::Util's lock_keys because it was not
safe: it was too easy to get around.  However, that doesn't matter to
me.  I know when an anonymous hash has all of its keys, and locking the
keys lets me find typos fairly quickly.  This is kind of like use
strict for anonymous hashes, and it speeds up my development of
log-processing scripts.

Ben



Re: [Boston.pm] distributed computing in dynamic languages

2012-05-04 Thread Ben Tilly
On Fri, May 4, 2012 at 11:41 AM, Uri Guttman u...@stemsystems.com wrote:
 On 05/04/2012 02:07 PM, Federico Lucifredi wrote:

 Pragmatic Programmers has just announced a book on distributed

 programming in Ruby. Somehow the possibility never occurred to me :)

 I am wondering, is there some obvious reason, like a well-structured
 library or language property, that makes one of the dynamic languages
 a better option that the others for distributed computing? I am not
 thinking HPC, more like remote method invocation.

 it is much easier than most people realize. the key is a simple message
 passing api. that allows for local or remote calls without changing the
 code. then you can do your work in one process or distributed with little
 extra help. this is a library thing and not something needed in the
 language. so the quality of the library matters as much as anything else and
 that means knowing how to design such a beast.

I disagree on the importance of "without changing the code".

However I do agree on the importance of the remote invocation API, and
there are a number of subtle requirements on the API that people
seldom get right.  For instance I guarantee that some day in any
complex system, you will have intermittent performance problems.  When
they happen, you need tools that let you stay on top of them and track
down the location of a performance problem that may occur sporadically,
several remote calls away.

If you don't do this, it is just a question of time until normal minor
mistakes accumulate to take your distributed house of cards down.



Re: [Boston.pm] backticks tee'd to stdout ?

2011-07-18 Thread Ben Tilly
Sounds like you're suffering from buffering:

http://perl.plover.com/FAQs/Buffering.html

The only way to solve your problem is to convince the program that it
should not buffer its output.  Sometimes you'll have a command switch
you can hit to force that (particularly if you wrote those programs).
If you don't, then you'll need to jump through a bunch of hoops to
convince it that it is talking to an interactive terminal, so that it
won't buffer.

Sorry, it is going to be a mess.  There may be a module these days
that makes it easy, but I don't think so.  I would suggest starting
down the path by looking at modules like IO::Pty, IO::Tty, and Expect.

On Mon, Jul 18, 2011 at 2:38 PM, Greg London em...@greglondon.com wrote:
 I have a script that uses backticks to run commands, capture the output and
 append it to a file. Someone requested that the script also output
 immediately to the screen. We are having trouble with some commands
 hanging, and we want to know where the hang is. So if we could see the last
 output, we would know.

 Is there an easy way to tweak backticks so it still captures the output but
 also tees the output to stdout?

 Also, on the command that is hanging, I set an ALRM handler that has 'die' as
 its callback, but I end up with the backtick command running as a zombie
 process. I have no idea why it's doing that, but I don't think I have ever
 used ALRM before either.




Re: [Boston.pm] [Bob Rogers] Please confirm your message

2011-07-12 Thread Ben Tilly
For him to discuss it off list would be pointless because his emails
will bounce until he goes through the confirmation.  At which point he
won't have spam to deal with.

That said, I understand Randal's position.  It makes emails
inaccessible to anyone who doesn't see the bounce message for any
reason.  This can happen for a variety of reasons.  A few important
ones: it looks like spam to a spam filter, someone is busy, or you're
getting an email with important notifications from an address that is
not supposed to receive replies.

Ironically if two people use TMDA, they have no way to ever establish
email contact.  This fact is why the technology never had more than a
niche acceptance.  But if you're willing to be anti-social, I'll grant
that you block a lot of spam, at the cost of potentially being used as
a spam host yourself.  Anyone who wants to can forge email headers,
which can cause you to bounce that spam to anyone else in the world.

On Tue, Jul 12, 2011 at 1:44 PM, Ronald J Kimball r...@tamias.net wrote:
 If you actually consider this to be spam, forwarding it to the mailing list
 is an odd choice.

 Anyway, this seems like an issue that would be better discussed off list.

 thank you,
 Ronald

 (Not sure if I'm still the list moderator, since I'm no longer the
 facilitator... :)


 On Tue, Jul 12, 2011 at 01:36:35PM -0700, Randal L. Schwartz wrote:

 Oh look, another spam message!  I guess I'll have to make sure
 216.146.47.5 gets added to every real-time-blackhole list, because it
 seems to be spewing nonsense.

 Seriously, dude, TURN THIS OFF.




Re: [Boston.pm] Advice on Perl for RPM?

2011-06-18 Thread Ben Tilly
I've just finished 2 weeks of packaging a ton of CPAN modules into
rpms.  Before that I had never touched the format.  So I have some
recent very direct experience with packaging rpms, from the point of
view of a novice.

My first reaction is to ask why you would want this?  To use it you
have to understand how spec files work, and the raw spec format is not
complicated.  Plus to use it you have to abandon the toolchain that a
lot of people have worked on.

This kind of thing just makes you learn a slightly different syntax,
and can keep you from doing what you need to do.

For example the module that you just pointed at does not support
important tags including Requires, BuildRequires and Obsoletes.
Likely because having a hash makes it hard to handle repeated
dependencies.  That makes it very hard to properly manage requirements
and dependencies when you're doing a lot of modules, or if things have
changed their names.

It also provides no way to do #defines.  Which means that if you're
packaging a CPAN module that conditionally requires (say)
Win32::TieRegistry, then rpmbuild will pick that up as a dependency,
and you'll *never* be able to fix it.  (The workaround for this is
kind of a mess, pass the --no-requires flag to cpan2rpm then study its
spec file to see how to do it.)

In short, you're putting yourself in serious handcuffs that you *will*
come to regret, for the sake of not learning a fairly simple syntax.
Really, it is not worth it.

On Sat, Jun 18, 2011 at 6:35 AM, James Eshelman ja...@nova-sw.com wrote:
 I’ve looked over the CPAN modules that allow running RPM builds from Perl.   
 RPM::Make::DWIW  seems to be the best choice there, although too new (2010) 
 to have any reviews.  I’m just curious if anyone on this list has any 
 experience with it (or one of the alternatives) that they’d care to share.

 TIA,
 Jim



Re: [Boston.pm] Advice on Perl for RPM?

2011-06-18 Thread Ben Tilly
I had missed that.

I suspect that for what I was doing it would have saved a lot of work
- except where it wouldn't.

On Sat, Jun 18, 2011 at 9:19 AM, Steve Scaffidi step...@scaffidi.net wrote:
 Have you checked this out?

 http://search.cpan.org/dist/CPANPLUS-Dist-RPM

 --
 -- Steve Scaffidi step...@scaffidi.net




Re: [Boston.pm] Advice on Perl for RPM?

2011-06-18 Thread Ben Tilly
That gets at one piece of advice.

My bitter opinion is that the only reason to use an rpm based system
is that someone else made that decision for you.  All of this is much,
much better on Debian.

On Sat, Jun 18, 2011 at 10:09 AM, Steve Scaffidi step...@scaffidi.net wrote:
 I've never used the RPM one, but the dpkg one is decent. I *have* used
 cpan2dist, which might actually use that module under-the-covers. I
 recall it worked pretty well. on debian we have dh-make-perl which is
 *quite* nice.

 On Sat, Jun 18, 2011 at 12:24 PM, Ben Tilly bti...@gmail.com wrote:
 I had missed that.

 I suspect that for what I was doing it would have saved a lot of work
 - except where it wouldn't.

 On Sat, Jun 18, 2011 at 9:19 AM, Steve Scaffidi step...@scaffidi.net wrote:
 Have you checked this out?

 http://search.cpan.org/dist/CPANPLUS-Dist-RPM

 --
 -- Steve Scaffidi step...@scaffidi.net





 --
 -- Steve Scaffidi step...@scaffidi.net




Re: [Boston.pm] not regular expression

2011-03-23 Thread Ben Tilly
Try ^(?!.*pattern here)

On Wed, Mar 23, 2011 at 1:14 PM, Greg London em...@greglondon.com wrote:
 I'm dealing with a perl gui tool that has a regular expression search tool.
 The tool takes whatever is in the gui window and then does a regular
 expression search through a bunch of fields.

 The thing is the text variable isn't within my control and the gui doesn't
 let me select =~ or !~. It's always doing the functional equivalent of:

 $somehiddenvariable =~ m/my regular expression/

 Is there a way in a regular expression between m// to make it behave as if
 it were

 $somehiddenvariable !~ m/my regular expression/

 Even though the code is doing =~?

 ???

 Greg




Re: [Boston.pm] profiling memory usage

2011-03-08 Thread Ben Tilly
On Tue, Mar 8, 2011 at 7:52 PM, Tom Metro tmetro-boston...@vl.com wrote:
 Some code I'm working on is triggering an out of memory error, and I'd
 like to figure out what specifically is responsible. (It's a complex
 system with dozens of libraries and it runs in parallel across a cluster
 of machines. Running the code in a debugger isn't a practical option.)

 Any recommendation for tools to do this?

Devel::Leak and Devel::LeakTrace are my best suggestion.

 I don't recall if the typical profiling tools record memory usage, but a
 traditional profiler would be overkill for what I need.

 The ideal solution would be something that could hook the OOM exception
 and dump the symbol table along with stats for how much memory each
 symbol is occupying. Another useful possibility would be dumping the
 call stack.

The symbol table is not enough.  It doesn't see data in lexical
variables.  And figuring out how much memory an array or hash may be
taking is easier said than done, because doing it means walking the
array or hash and figuring that out.  But with circular data
structures you have to keep track of where you have been, which
requires somewhere to stick that information, but you're already out
of memory.

 Is it possible to trap the OOM error? I don't think a __DIE__ handler
 catches it. It seems to be an unusual error in that you often see
 multiple of them, as if the first few are warnings, and then eventually
 it is fatal.

Random possibilities: could it be that Perl does not always check
whether it got memory when asked?  Then you don't crash until you ask
for memory somewhere that checks properly.

Or perhaps Perl doesn't exit on asking for memory, but crashes when it
tries to use memory it doesn't have.

Either way I think you are better off inserting things that drop debug
state every so often, and then figure out what is growing, and try to
narrow it down.  Devel::Leak and friends are good for this.  If you
are constantly growing memory usage, this can help figure out why.

Good luck.



Re: [Boston.pm] Lightweight module for web service calls?

2011-02-16 Thread Ben Tilly
On Wed, Feb 16, 2011 at 12:48 PM, Peter Wood pw...@christianbook.com wrote:
 Hi Uri,

 https is just http over an ssl socket with a different port than
 http. you can use IO::Socket::SSL for that. but the problems you will
 run into are wide and varied which is why LWP is so large. if you know
 your http transactions will be very basic and not need help, it is
 easy
 to write a simple module. but it is so easy to get things wrong and
 when
 it gets more complicated, you will want LWP.

 The requests are going to be in the form of POST requests using
 structured URLs with an optional POST body payload, the responses will
 be JSON. I feel like that's simple enough to warrant a barebones module.
 That being said, I should probably write a simple socket module and do
 some tests to compare it to LWP to confirm that LWP has as much overhead
 as I think it does.

Absolutely test first.  The often-heard refrain, "I think the standard
answer has a bunch of overhead that I want to avoid", is NEVER
appropriate until you have demonstrated that you have an actual
performance problem that needs solving.  If you don't, then use the
standard wheel and worry about being productive.

Premature optimization is the root of all evil and all that.

An interesting study that is referenced in _Code Complete_ found that
when different teams of programmers were given the exact same problem
and told to optimize for different things (speed of development,
maintainability, memory use, performance, etc), by and large they came
up with solutions that were best for the particular thing they were
asked to optimize for.  That is not surprising.  What was surprising
is that the team that was asked to optimize for maintainability came
in second on most of the other categories, while the team that was
asked to optimize for performance came in dead last on most of the
other categories.



Re: [Boston.pm] Lightweight module for web service calls?

2011-02-16 Thread Ben Tilly
On Wed, Feb 16, 2011 at 1:19 PM, Conor Walsh c...@adverb.ly wrote:
 On 2/16/2011 1:14 PM, Duane Bronson wrote:

 Peter,
 Interesting that the question "how do I do X" has no answer except
 "don't do X".  Engineers prefer to give flawless answers to flawless
 questions, and when the questions sound flawed, all hell breaks loose.
 So, perhaps it would be helpful if you gave us some rationale behind
 the need to have a
 lightweight LWP with maybe some example code showing how LWP doesn't do
 what
 you would like.

 I suspect that Ben and Uri are channeling Joel Spolsky a little.  I'm not
 convinced they're right, but I'll admit I'm curious about Peter's use case
 too.

 http://www.joelonsoftware.com/articles/fog69.html

I would much prefer to be accused of channeling Steve McConnell than
Joel Spolsky.

http://www.stevemcconnell.com/cctune.htm

Until you know that you have a performance problem, don't try to
blindly improve performance.  Once you know that you have a
performance problem, profile to discover where the performance problem
actually is before starting to try to fix it.  If you're trying to
optimize blindly the only thing I can guarantee is that your code will
be harder to maintain.



Re: [Boston.pm] use Moose ?

2011-02-01 Thread Ben Tilly
You can get a good overview of what Moose does for you on a large
project from Ovid's blog where he discussed Moose as he was learning
it.  Let me grab a few relevant entries:

http://use.perl.org/~Ovid/journal/38649
http://use.perl.org/~Ovid/journal/38662
http://use.perl.org/~Ovid/journal/38705
http://use.perl.org/~Ovid/journal/38880
http://use.perl.org/~Ovid/journal/38785

He then encapsulated a lot of this into one presentation at
http://www.slideshare.net/Ovid/inheritance-versus-roles-176.

On Tue, Feb 1, 2011 at 7:03 AM, James Eshelman ja...@nova-sw.com wrote:
 There've been fairly frequent references to and praises for Moose on this 
 list. After reading some of the doc and discussion of it, I'm still wondering 
 about a couple points:

 - Would it be accurate to say that using Moose will save you coding time (on 
 a large project, after learning it) but cost you significant runtime?  
 Always, sometimes, never?  [Of course the frequent comment over the years on 
 this list is that if RT performance is paramount then don't use O-O perl at 
 all.   Probably still true, but assume O-O perl is a given.]

 - What valuable O-O feature(s) does Moose provide (if any) that couldn't be 
 coded by a skilled programmer in perl?   [There's some C programming/symbol 
 table manipulation under the covers?]

 TIA,

 Jim Eshelman
 www.nepm.net
 Network Monitoring with a Difference



Re: [Boston.pm] should closing a file descriptor also close the accompanying file handle?

2010-11-03 Thread Ben Tilly
On Wed, Nov 3, 2010 at 8:42 AM, Brian Reichert reich...@numachi.com wrote:
 On Tue, Nov 02, 2010 at 09:59:12PM -0400, Uri Guttman wrote:
 why are you concerned about closing the DATA handle? it is internal to
 the program. actually it is the handle the perl binary uses to read the
 source file and it is left at the seek point where the DATA starts.

 Because, I understand it to be best practice to close all file
 handles, as per:

  http://www.webreference.com/perl/tutorial/9/2.html

    Besides closing any filehandles that might have been opened ...

  http://en.wikipedia.org/wiki/Daemon_%28computer_software%29

    Closing all inherited open files at the time of execution that
    are left open by the parent process, ...

  http://www.enderunix.org/docs/eng/daemon.php

    Unneccesarry [sic] descriptors should be closed before fork()
    system call (so that they are not inherited) or close all open
    descriptors as soon as the child process starts running.

 I also have here in the office a copy of Stevens's Unix Network
 Programming, Volume 1, 2nd edition.  Chapter 12 (Daemon Processes
 and inetd SuperServer) describes daemonizing a process;  his example
 code lib/daemon_init.c also pointedly closes all file handles; review
 that source here:

  http://www.kohala.com/start/unpv12e/unpv12e.tar.gz

 I'm maintaining a perl module (internally) that allows for convenient
 daemonizing; I wrote it to be as 'correct' as possible.

What was wrong with the CPAN module Proc::Daemon?

 Until yesterday, I didn't realize that DATA and END were magic file
 descriptors in perl.  As such, it would be easy to alter my module
 to preserve them.  If I can find reference to other magic descriptors,
 then I'll take them into account as well.

 well, perl keeps it open so it can be used by anyone reading from
 DATA.

 If it keeps it open (despite my closing the file descriptor), then
 why does the daemonized process no longer have access to all of
 its contents?

 this list (as most other rightly do) filters out attachments. so you
 should just paste your code in the email instead.

 That does make sense; I'll re-post soonish...

 Thanks for the feedback...

 uri

 --
 Uri Guttman  --  ...@stemsystems.com    http://www.sysarch.com --
 -  Perl Code Review , Architecture, Development, Training, Support --
 -  Gourmet Hot Cocoa Mix    http://bestfriendscocoa.com -

 --
 Brian Reichert                          reich...@numachi.com
 55 Crystal Ave. #286
 Derry NH 03038-1725 USA                 BSD admin/developer at large



Re: [Boston.pm] $#{$Queue}

2010-09-27 Thread Ben Tilly
Check perlvar.  It is the index of the last array element, which is
one less than the size.

@$Queue will give you the size in scalar context.

On Mon, Sep 27, 2010 at 5:28 PM, Greg London em...@greglondon.com wrote:


 what the heck?

  my $Queue = \@somearray;

 if ($#{$Queue} = -1){
  # do something
 }

 I thought $# was size,
 but the code is checking for it to be minus one?

 --





Re: [Boston.pm] my lang is better than your lang!

2010-07-19 Thread Ben Tilly
On Mon, Jul 19, 2010 at 2:01 PM, Uri Guttman u...@stemsystems.com wrote:

 gack, this thread is annoying. so here are some high level philosophical
 questions to think about regarding languages.

 first off, why are there so many languages? and by many, i mean
 thousands and more. how many of you have invented a language (even a
 mini-lang)?

What is a language?  Seriously, what is the difference between some
configuration parameters and a new language?  What if the config file
format is just a data structure in an existing language that has
semantics attached?  (For instance Rake.)  At what point does existing
language + library turn into a new language?  (Before answering, read
_On Lisp_ and consider how large Lisp programs routinely create
mini-languages within Lisp.  Also consider the origins of C++.)

 no one seems to have mentioned turing completeness. this means
 something deep in all the langs mentioned. discuss.

It is very hard to come up with a useful language that is not Turing
complete.  Can you loop and have if statements?  Congratulations,
you're Turing complete!

For a long time my favorite example of a deliberately non-Turing
complete language was SQL.  It was not Turing complete because that
made it possible to comprehensively analyze how queries would execute
so they could be optimized.  Unfortunately the latest versions of the
SQL standard have added enough features to become Turing complete.
(However most people don't use those features most of the time, so the
optimizer is usually going to be able to work well.)

 what about all those langs that were meant to conquer computing
 civilization? PL/I, COBOL, ALGOL and even the dreaded ADA. c actually
 conquered more than all of them. do you consider c a high level
 language?

PL/I in many ways was the Perl of the mainframe world. :-)

COBOL was meant to conquer the business world.  To the best of my
knowledge it still processes more financial transactions than any
other language.

Don't knock ALGOL.  It was meant to be a base from which ideas could
be developed, and C is in the ALGOL family.  ALGOL-68 introduced the
idea of lexical closures, though I think that Pascal actually
implemented them first.

ADA I don't know enough about to find anything good to say.

 should you learn assembler? is there work in it (yes)? what would
 assembler teach you when using a high level lang?

It teaches you what the CPU actually does.  Whether that is worthwhile
depends entirely on your interests.

 what does it mean when you like or dislike a lang? in a non-technical
 way why did you make that decision?

I dislike languages that make me type too much (Java), make it too
hard to find my obvious bugs (JavaScript), or whose implementations
are too buggy (VB).

 have any of you ever read an ANSI standard for a language? or tried to
 implement parts of a standard like that? hell, reading ANSI standards is
 a major skill in its own right!

Read, yes.  Tried to implement, no.

 are languages for people or computers?

Most are for people first, computers second.

Machine language is for computers first, people second.

Some newer ones (eg Java and C#) are for tools (IDEs, etc) first,
people second, and computers third.

 enuff for now. let's see what you all have to say before i drop my $.02
 back in.

 uri

 --
 Uri Guttman  --  ...@stemsystems.com    http://www.sysarch.com --
 -  Perl Code Review , Architecture, Development, Training, Support --
 -  Gourmet Hot Cocoa Mix    http://bestfriendscocoa.com -



Re: [Boston.pm] Languages to learn in addition to Perl

2010-07-15 Thread Ben Tilly
On Thu, Jul 15, 2010 at 8:57 AM, John Redford eire...@hotmail.com wrote:
[...]
 Sadly, I cannot recommend a good book on JavaScript, which is a shame
 because JavaScript is one of the best-designed languages ever.  Perl is
 actually a pretty good background to learn JavaScript, because it has a
 number of similar features (regexps, closures, dynamic typing) and also has
 a object oriented programming style that is built on minimal language
 support.  https://developer.mozilla.org/En/JavaScript -- This is as good as
 it gets.

_JavaScript: The Good Parts_ is commonly recommended.

 Also, I cannot recommend any book on SQL.  I do recommend learning how to
 use BerkeleyDB's various features, which essentially is the
 assembler/forth/C to the Perl/ML/Java of SQL.  The more you know about what
 SQL has to be doing internally, the more you'll be able to use it properly.
 But I've never found it explained well in writing.

I've found Thomas Kyte to be good, if Oracle specific.  For instance
Effective Oracle by Design was good.

I feel that it helped me a lot with understanding other databases as
well.  But you have to wade through a lot of Oracle-specific details
to gain that perspective.



Re: [Boston.pm] tech events in and around Boston

2010-07-07 Thread Ben Tilly
On Wed, Jul 7, 2010 at 7:13 PM, Jerrad Pierce belg4...@pthbb.org wrote:
 http://www.barcampboston.org/

 Just missed #5, #6 is the weekend before tax day.

 Interestingly, O'Reilly was a Media partner for #5. I say interesting
 because I was under the impression Bar was created in response to Foo's
 invitation-onlyness rubbing some hackers the wrong way.

Your impression matches my recollection.

See http://www.wired.com/techbiz/media/news/2005/08/68610 for more.



Re: [Boston.pm] bareword warnings for __ANON__

2010-06-16 Thread Ben Tilly
On Wed, Jun 16, 2010 at 10:32 AM, Tom Metro tmetro-boston...@vl.com wrote:
 I have some code with an anonymous sub that uses __ANON__ to set the sub
 name in logging and error messages (a semi-documented trick) like:

        sub {
            local *__ANON__ = "subname"; # name the anon sub
            [...]

            warn *__ANON__, ": ...\n";

Rather than a semi-documented trick, I'd recommend the clearly
documented Sub::Name module for this.  It does the same thing, with
less confusion for the maintenance programmer.
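For what it's worth, here is a self-contained sketch of the glob trick
under discussion (the name "my_handler" is illustrative); Sub::Name's
subname("my_handler", $code) achieves the same result without the glob
games:

```perl
#!/usr/bin/perl
# Sketch of the anon-sub naming trick from this thread.
use strict;
use warnings;

my $reported;
my $handler = sub {
    # Alias the package's __ANON__ glob to a named glob for the
    # duration of this call; caller() then picks up the new name.
    local *__ANON__ = "my_handler";
    $reported = (caller(0))[3];
};
$handler->();
print "$reported\n";
```

On perls where the trick works, $reported comes back as
main::my_handler instead of main::__ANON__.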

 The problem comes about with the warn call that wants to reference
 __ANON__. If __ANON__ is used by itself, it produces a bareword warning.
 If it is used as a glob (either *__ANON__ or *{__ANON__}), it works, but
 produces output prefixed with *, which seems to suggest it isn't
 really being interpreted as a glob.

 While the real code is going to end up using a Log::Log4perl method
 instead of warn (which internally uses caller()), making this moot, I'm
 curious to know what the correct syntax is here.

I believe you have the right syntax.  The * prefix just indicates that
it is being interpreted as some other glob, but if you look at the
output the glob it appears as should be what you were trying to name
it.



Re: [Boston.pm] filename wildcard on command line and perl module paths

2010-05-22 Thread Ben Tilly
On Sat, May 22, 2010 at 9:21 PM, Greg London em...@greglondon.com wrote:
 Two unrelated perl queries.

 First, I have a perl script that needs to pass in via command options a
 filename that might include wildcards. This filename will be used by the
 script at a later point, from a different directory, so I don't want Unix
 to do wildcard replacement when I run the script. But I'd like it to look
 like a wildcard so people are familiar with it and don't need to be
 explained about it.

 I used '@' as my wildcard and then do a s/@/*/g at some point before using
 it. It feels a bit klugey though. Is there a better way to do it?

That is very kludgey.  I would suggest just making people put single
quotes around the wildcard.  That will be less surprising for
experienced people.
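A minimal sketch of that approach (the directory and file names are
invented for illustration): the user single-quotes the pattern so the
shell passes it through literally, and the script expands it itself
with glob() only when the files are actually needed:

```perl
use strict;
use warnings;
use File::Temp qw(tempdir);

# Simulate: the user ran  myscript.pl 'somedir/*.txt'  with the
# pattern single-quoted, so the shell did not expand it.
my $dir     = tempdir(CLEANUP => 1);
my $pattern = "$dir/*.txt";          # stands in for $ARGV[0]

# A few files to match against (purely illustrative):
for my $name (qw(a.txt b.txt notes.md)) {
    open my $fh, '>', "$dir/$name" or die $!;
    close $fh;
}

# Later -- possibly after a chdir() -- expand the pattern ourselves:
my @files = sort glob($pattern);
print scalar(@files), " matches\n";   # the two .txt files
```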

 Second, is there a built-in way to find the path to a perl module? I wrote
 a subroutine that does a manual search through @INC, PERL5LIB, PERLLIB,
 etc, but, again, it feels kind of klugey, and, again, I can't imagine I'm
 the first guy to need to do this. Is there a built in way?

After you use it, just look in %INC to figure out where it was found.
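Concretely (using File::Spec only as a stand-in for whatever module
you loaded):

```perl
use strict;
use warnings;

# Once a module is loaded, %INC maps its relative file name to the
# path that was actually found -- no manual @INC search required.
require File::Spec;
print "File::Spec was loaded from $INC{'File/Spec.pm'}\n";

# General helper: turn Foo::Bar into the %INC key "Foo/Bar.pm".
sub module_path {
    my ($module) = @_;
    (my $key = $module) =~ s{::}{/}g;
    return $INC{"$key.pm"};
}
```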



Re: [Boston.pm] Newbie question

2010-05-06 Thread Ben Tilly
On Thu, May 6, 2010 at 1:16 PM, Laura Bethard betha...@nber.org wrote:
[...]
 I'm not sure if the 3-day will cover what I need to know, and the 5-day is
 pricey.  I'd prefer a traditional class over an online one, but might
 consider online with a solid recommendation.  Anyone have any advice?
[...]

My advice.  Bookmark
http://perldoc.perl.org/index-functions-by-cat.html.  Don't try to
read it.  Now read through http://www.perl.org/books/beginning-perl/.
Try to follow what it says but don't worry about really mastering it.
Then on a Unix system do

perl -MCPAN -e 'install Catalyst::Manual::Tutorial'

This will lead you through a bunch of questions and will install a
*ton* of stuff.  (If you're not happy with your answers you can
control C out then try again.)  If you run into problems on your
installation (which unfortunately is quite possible) then you can ask
for help either here or at http://www.perlmonks.org/.  (You can also
ask at stackoverflow.  I haven't paid attention, so that might have a
decent Perl community by now.  Or might not.  Perlmonks has a built up
community and history though.)

After that try to work through
http://search.cpan.org/~hkclark/Catalyst-Manual-5.8004/lib/Catalyst/Manual/Tutorial.pod.
 Expect to have to refer back to Beginning Perl plenty of times.  When
that fails Google it.  Then come back and keep going.  Any time you
run across a module that looks important, be sure to bookmark it.

Given your background and how many digressions you will need, plan on
that tutorial taking 2 weeks to go through.  It could be more, it
could be less.  But it will be a substantial hunk of time.  By the
time you are done you should have learned enough to put together a
small database backed website.

At that point you should be ready to take on a simple project.  Follow
the structure of the tutorial fairly closely as you do it.  When
you're done that you'll be ready for bigger challenges, and can learn
more.

If you get that far, lots of people will have lots of things to suggest.

Good luck,
Ben



Re: [Boston.pm] domain theft saga

2010-03-30 Thread Ben Tilly
On Tue, Mar 30, 2010 at 1:31 AM, Tom Metro tmetro-boston...@vl.com wrote:
 For those of you not on the BLU list, you might find this an interesting
 read:

 http://old.nabble.com/Dreamhost-account-hacked-td28062149s24859.html

Thanks.  I thought that more people should hear about it so I put it
on Hacker News.  See http://news.ycombinator.com/item?id=1229247.  (It
has generated some discussion but not a ton.)

Ben



Re: [Boston.pm] Exceptions as control flow

2010-03-19 Thread Ben Tilly
On Fri, Mar 19, 2010 at 11:07 AM, Conor Walsh c...@adverb.ly wrote:
 I _am_ telling you I think exceptions are faster than other control
 structures _In_ _Some_ _Cases_.

 I'm happy to explain & clarify if I am unclear.

 I'm curious.

 Maybe I'm out of my depth, pun not intended, but I was under the impression
 that thrown exceptions have to unwind the stack anyway.  Is there some way
 exceptions are usually implemented that doesn't cost as much as unwinding
 the stack one level at a time?  It seems like 3-or-300 does matter, in that
 you have to check for and possibly call some number of destructors or risk
 leaking all over the place.

The issue is this.  If you don't use exceptions then you have to have
if checks on every return for exceptional conditions.  Those checks
are individually fast, but there are a lot of them.  If there are a
great number of needed checks per exceptional condition, then it is
faster overall to remove those checks and throw an exception when
needed.  Even though it is slower when you throw that exception.
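A toy sketch of the trade-off (the failing value 13 is arbitrary):
with status returns every call site pays for a branch, while with
exceptions the common path is branch-free and the cost is concentrated
in the rare throw:

```perl
use strict;
use warnings;

# Style 1: every call site checks a status return.
sub step_checked {
    my ($n) = @_;
    return undef if $n == 13;    # the rare failure case
    return $n + 1;
}

# Style 2: the rare failure throws; callers don't check.
sub step_throwing {
    my ($n) = @_;
    die "bad input\n" if $n == 13;
    return $n + 1;
}

# With status returns, the check runs on every iteration:
my $sum = 0;
for my $n (1 .. 10) {
    my $r = step_checked($n);
    defined $r or last;          # one branch per call, success or not
    $sum += $r;
}

# With exceptions, the common path carries no branch; the (rare)
# throw pays for the stack unwind instead:
my $sum2 = 0;
my $ok = eval {
    $sum2 += step_throwing($_) for 1 .. 10;
    1;
};
print "checked=$sum thrown=$sum2\n";
```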

In a desktop environment this makes sense.  However in some other
contexts, such as real time embedded programming, it likely doesn't.
And the issue there is the difference between average running time and
worst case running time.

You can see the same issue with, for example, using a hash as a data
structure.  Average access time for a hash is O(1).  Worst case is
O(n).  In many environments hashes make sense.  But if you're looking
to guarantee that you finish in a particular time slice, hashes are a
simply terrible data structure.

 This is a fantastic discussion and I'm happy to see it here.

It is surprisingly involved and detailed.  I've been happy to read through it.

Ben



Re: [Boston.pm] Exceptions as control flow

2010-03-19 Thread Ben Tilly
On Fri, Mar 19, 2010 at 2:31 PM, Greg London em...@greglondon.com wrote:

 In a desktop environment this makes sense.  However in some other
 contexts, such as real time embedded programming, it likely doesn't.
 And the issue there is the difference between average running time and
 worst case running time.

 I'd kiss you right now if I could...

 ;)

I'm suddenly glad I live on the opposite side of the continent. :-P

Ben



Re: [Boston.pm] Can't alarm() while reading an unbuffered endless line?

2009-10-19 Thread Ben Tilly
On Mon, Oct 19, 2009 at 2:17 PM, Bogart Salzberg webmas...@inkfist.com wrote:
 Mongers,

 I recently encountered a puzzling dilemma. You might find it interesting, or
 obvious (probably not both) and it leads to a question about how perl
 handles signals.
[...]

perldoc perlipc

Search for "Deferred Signals (Safe Signals)".

If safe signals are on (the default), then signals are not caught
until the current opcode finishes running.  Unfortunately this can be
forever.  For example you can encounter this if you're blocking on I/O
and no I/O happens.

There are a couple of options available.  The simplest being to set
the PERL_SIGNALS environment variable to "unsafe".  However that is
unsafe, and you can dump core.  I've generally gone with that approach
anyways, though, because I've wanted to do things like force a
database connection to Oracle to timeout.

Other options include trying to use the :perlio layer (with the
default configuration it is the default, but you may have a Perl
compiled without it) or to carefully use unbuffered I/O so that you're
never blocked in an uninterruptible opcode.
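The standard perlipc-style timeout idiom looks like this; with safe
signals it interrupts sleep() fine, but it cannot break out of a
single long-running opcode such as a blocking read inside a C library
(which is where PERL_SIGNALS=unsafe comes in):

```perl
use strict;
use warnings;

# Classic alarm/eval timeout pattern from perlipc.  With safe signals
# (the default) the handler runs between opcodes, so interruptible
# ops like sleep() are cut short -- but one long opcode is not.
my $result = eval {
    local $SIG{ALRM} = sub { die "timeout\n" };   # "\n" avoids file/line noise
    alarm 1;                                      # illustrative 1-second budget
    sleep 5;                                      # stands in for slow work
    alarm 0;
    "finished";
};
alarm 0;                                          # belt and braces
if (!defined $result && $@ eq "timeout\n") {
    print "gave up after timeout\n";
} else {
    print "completed: $result\n";
}
```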

Cheers,
Ben



Re: [Boston.pm] s/// with side effects?

2009-05-22 Thread Ben Tilly
On Fri, May 22, 2009 at 8:22 AM, Uri Guttman u...@stemsystems.com wrote:
 SB == Samuel Baldwin shardz4...@gmail.com writes:

  SB A bit of a side question; when would you ever want to try and match an
  SB empty regex? Wouldn't it be semantically saner to use defined?

 i did mention a common use in split( //, ... ). that explodes a string
 into an array of all the chars which is useful sometimes. and that is
 always a null regex afaik. it has the same effect as m/(.)/s in a list
 context.

You wanted m/./sg.  Without the g it only gives you 1 character.

Personally I find the split version clearer.
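For concreteness, a quick core-Perl comparison of the three forms:

```perl
use strict;
use warnings;

my $s = "abc";

# Explode a string into its characters, three ways:
my @by_split = split //, $s;     # empty pattern: split between every char
my @by_match = $s =~ m/./sg;     # /g in list context: all chars; /s lets "." match \n
my @by_m1    = $s =~ m/(.)/s;    # no /g: only the FIRST char is captured

print "@by_split | @by_match | @by_m1\n";   # a b c | a b c | a
```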

Ben



Re: [Boston.pm] Meta-user group meeting! May 2nd summit about how to coordinate a user group

2009-04-14 Thread Ben Tilly
On Tue, Apr 14, 2009 at 10:19 AM, Uri Guttman u...@stemsystems.com wrote:
 RW == Ricker, William william.ric...@fmr.com writes:

  RW We got this thru the Leaders' lists. That's a busy weekend. Maybe Uri,
  RW Ron and I will draw straws ...

 i would prefer to draw a gallon of blood! not much chance i would step
 foot on any property with redmond's curse on it. i still harbor a
 personal grudge from them intentionally lying to our face when i was in
 a startup way back. i can spew the ugly details if anyone dares to hear
 them.

All they did was intentionally lie to you?  You were lucky.  It could
have been *much* worse.

When my friend Ed Curry called them on their illegally claiming that
NT 4.0 was C2 certified so they could sell it to government
departments that by law required that certification, their response
was to convince his clients to take their business elsewhere, then
after his company folded and he found a new job they called up his new
boss before he started and asked how much they needed to pay to get
him fired before he began.

There is no question that this stress was a significant contributor to
his fatal heart attack on March 24, 1999.  I therefore blame his death
on them.

For the record, since Microsoft did so much to muddy the question,
design decisions in NT 4.0 such as moving the video drivers to ring 0
meant that it would never achieve C2 certification.  In the end after
6 service packs they managed to get a British certification that they
called "C2 equivalent".  But they never earned that certification and
their operating system should have never been sold for use in the US
military.

While I acknowledge that they began making efforts to improve their
corporate image after the EU levied big enough fines to make them sit
up and notice, it will take a looong time before I'm willing to give
up my grudge.  And I know quite a few other people who feel similarly.

Cheers,
Ben



Re: [Boston.pm] Larry's MIT talk

2009-04-06 Thread Ben Tilly
On Mon, Apr 6, 2009 at 4:42 AM, David Cantrell da...@cantrell.org.uk wrote:
 On Thu, Apr 02, 2009 at 10:31:29PM -0400, Federico Lucifredi wrote:

 True, but I have not yet done a single animated slide in my life.
 Bullets, code and occasionally pictures slapped on a background are all
 I need. If I can start writing my slides in text templates, my personal
 satisfaction with not having to ever look at Powerpoint or OO junk would
 be great.

 Am I a terrible heretic for quite liking Powerpoint?  I think it's a
 great tool, that is usually horribly abused.  And yes, I've used
 animation in my slides.  Occasionally.  Carefully.

In that case you may find
http://i.i.com.com/cnwk.1d/i/tr/downloads/home/beyond_bullet_points_ch02.pdf
interesting.

Personally I don't like the way that Powerpoint is used because it
encourages oversimplification.  Also I think that spending great
energy on fancy presentations for internal use is a waste of company
time and money.

Cheers,
Ben



Re: [Boston.pm] Larry's MIT talk

2009-04-06 Thread Ben Tilly
On Mon, Apr 6, 2009 at 7:59 AM, Palit, Nilanjan
nilanjan.pa...@intel.com wrote: From: Ben Tilly Sent: Monday,
April 06, 2009 10:34 AM

 Personally I don't like the way that Powerpoint is used because it
 encourages oversimplification.  Also I think that spending great
 energy on fancy presentations for internal use is a waste of company
 time and money.

 I think it is really naïve to blame a tool for an outcome -- it's
 just as silly as blaming poor driving on cars/roads rather than the
 drivers! What MS did with PPT was that it enabled professional
 looking presentations with relatively low effort. What people did
 with it is another story. PPT does not encourage oversimplification
 -- it's the people who are using PPT, not the tool.

It appears that you didn't read what I wrote, then launched a rant
that would be better aimed at someone else.  I say this because I had
linked to a chapter on how to effectively communicate with PPT, then
complained about the way that it is used.

This should tell you that I'm blaming the technique, not the tool.

 What happened with PPT is this: we got a powerful *tool* that allows
 people to create professional presentations. Except, there was no
 training on how to actually do professional presentations -- i.e.,
 no training on the art of communication, something that was
 previously limited only to people (at least purportedly) trained and  paid 
 to do so. PPT enabled the /masses/ to take that into their own  hands 
 without the training to go with it. What do you think would
 happen if all of a sudden, due to a technological breakthrough,
 everyone could afford a personal jet, but no training was offered on
 how to actually fly them?

If you read the article that I linked to you'd have had this point
reinforced.  There are widespread misconceptions about cognition,
which cause systemic misuse of the tool.

 But it is equally naïve to say that spending great energy on fancy
 presentations for internal use is a waste of company time and
 money. Presentations are about communicating both mundane & complex
 ideas and also about *selling* ideas. There is a lot of selling that
 needs to be done inside companies, as much, if not more so than
 outside (especially for big companies, but I'm sure for small ones
 as well). In some ways, selling internally is a lot harder, since
 you're trying to communicate (or sell) to people who are just as or
 more competent as you, have as much or more significant stakes than  you do 
 in the outcome of the decision and in many cases control your
 paycheck. The art of communication is as much a requirement inside a
 company as outside. What tool you use/abuse to do that is totally
 beside the point!

As an individual within a company, absolutely.  But for the company
it is a different story.  The way that companies often work is that
people spend tremendous effort on these internal sales efforts,
and have to do it because they are competing for attention with
other employees who are putting in a similar effort.  The end result
of all of this effort is decisions of similar quality to what would
have been made without the sales effort.  (Decisions made on the
quality of the sales effort are not noticeably better than ones made
for other reasons...)  However making those decisions takes a lot
more work.

Therefore a company culture that encourages this kind of sales effort
leads to expending a lot of internal energy to no real company
benefit.  Which is exactly why I called it a waste of company time
and money.  Though I would highly encourage any employee of a
company with that kind of culture to do the best presentations that
you can, because it is necessary for personal success.  And there is
*no* question that being able to sell well is valuable when dealing
with other companies.

 Presentations/animations (& thus PPT by extension) are also crucial
 to convey complex ideas/thoughts, especially in fields of science &
 engineering (& I'm sure in many other fields). There are countless
 cases I can list where an animation (rightly done) can convey a
 complex idea otherwise impossible to describe in words. Animations
 can be as simple as builds, but can enable one to build a complex
 idea step-by-step.

As a former mathematician I distrust giving people the illusion of
comprehension without the substance.  While animations can indeed
illuminate, they can just as easily - and more often do - mislead
people into thinking they understand what they don't.

I have also found that the more complex the idea, the more
important it is to work through it interactively rather than
hoping that the right presentation will be received correctly.
A prepackaged presentation does not replace a whiteboard.

 Moral of the story: it's the people, not the tools!

As should be clear, we never disagreed on that.

Ben



Re: [Boston.pm] Larry's MIT talk

2009-04-06 Thread Ben Tilly
On Mon, Apr 6, 2009 at 12:25 PM, Palit, Nilanjan
nilanjan.pa...@intel.com wrote:
 From: Ben Tilly [mailto:bti...@gmail.com]
 Sent: Monday, April 06, 2009 11:59 AM

 It appears that you didn't read what I wrote, then launched a rant
 that would be better aimed at someone else.I say this because I had
 linked to a chapter on how to effectively communicate with PPT, then
 complained about the way that it is used.

 This should tell you that I'm blaming the technique, not the tool.

 Your statement Powerpoint ... encourages
 oversimplification expressly blames the tool for
 the  problem, even though your intent may have been
 otherwise (seems like another poster interpreted your
 comment the same way). Since it seems like we're both
 on the same page about communicating ideas, we can move
 on.

Not to beat a dead horse, but what I actually wrote was:

   Personally I don't like the way that Powerpoint
   is used because it encourages oversimplification.

Which means that the thing that I do not like is the
WAY that Powerpoint IS USED.  So it really IS a
complaint about the technique, not the tool.

 BTW, I do take issue with describing my response as a
 rant, which seems to be often misused on the web:
 Rant Meaning and Definition
   1. (n.) High-sounding language, without importance or
  dignity of thought; boisterous, empty declamation;
  bombast; as, the rant of fanatics.
   2. (v. i.) To rave in violent, high-sounding, or
  extravagant language, without dignity of thought;
  to be noisy, boisterous, and bombastic in talk or
  declamation; as, a ranting preacher.

 I would hardly qualify my last response as without
 importance or dignity of thought; boisterous, empty
 declamation. A lot of thought went into
 specifically addressing your comments. I did read your
 linked article & found it in conflict with your
 statement. A response does not qualify as a rant just
 because it is long and its intended target doesn't like
 its message. But, then again, everyone is entitled to
 their own opinion :-)

You put a lot of thought into specifically addressing
my comments and never noticed that I hadn't said what
you thought I said?  You noticed the conflict between
what I linked to and what you thought I said, and
*still* didn't read it carefully enough to correctly
parse what I said?  Then when I pointed out that I'd
said something different than you thought, you went
back, quoted me very poorly and STILL didn't notice that
I had, in fact, said something quite different than you
thought?

I've made mistakes of this kind in the past.  And every
last time was a case where someone said something that
so outraged me that I felt I had to respond, then
proceeded to do so at length.  To quote your definition,
I was responding without importance or dignity of
thought.  In other words I was ranting, and got so
caught up in my rant that I didn't notice that I was
outraged at a misunderstanding, and not about
something real.

Since we're largely in agreement on the actual content,
I'll not reply to the rest of what you said.

Ben



Re: [Boston.pm] Larry's MIT talk

2009-04-03 Thread Ben Tilly
On Fri, Apr 3, 2009 at 1:18 PM, Jerrad Pierce belg4...@pthbb.org wrote:
 I dunno about platypus versatility so much as contender for
 "designed by committee", but I opted not to proffer it earlier
 because it's the mascot of DarwinOS (OSS OSX core).

  http://www.gnu-darwin.org/

 Definitely cute though.

In many people's minds the Perl logo is a camel.  My understanding is
that this was considered appropriate in part because a camel is a
horse designed by committee.

Another animal designed by committee would therefore not necessarily
be a bad mascot.

Ben



Re: [Boston.pm] Discount for O'Reilly Open Source Convention (OSCON)

2009-04-02 Thread Ben Tilly
Please note that OSCON spent several years in Portland.  Saying that
Portland is part of Boston is like saying that Boston is part of New
York.

Ben

On Thu, Apr 2, 2009 at 6:33 PM, Andrew Langmead
andrew.langm...@verizon.net wrote:
 On Thu, 2009-04-02 at 21:42 -0400, Tom Metro wrote:
 Do you think they'll ever hold one of these in Boston?

 Once upon a time O'Reilly seemed to have more of a presence here. They
 still have offices here (no?), but I had the impression they started out
 here.

 According to
 http://blogs.oreilly.com/cgi-bin/mt/mt-search.cgi?IncludeBlogs=57&search=lovelace
  Tim O'Reilly started the company in 1985, and moved to CA in 1987.

 It seems that for the last decade or so they've done everything in
 California.

 I'm guessing that O'Reilly was a very different company ten years ago.
 In articles like
 http://radar.oreilly.com/2009/02/state-of-the-computer-book-mar-17.html 
 people describe the tech book market collapsing in 2001 (falling 20% a year 
 for three straight years.) I can't find the exact article now, but I've seen 
 them describe that time period as a time for reinvention. Since OSCON has 
 always been in CA I assume it will always stay there (although they keep
 moving it a bit, Monterey, San Diego, etc.)


 ___
 Boston-pm mailing list
 Boston-pm@mail.pm.org
 http://mail.pm.org/mailman/listinfo/boston-pm




Re: [Boston.pm] Discount for O'Reilly Open Source Convention (OSCON)

2009-04-02 Thread Ben Tilly
Gah, I meant that Portland is part of CA is like..etc.  (I should
drink less before replying to email.)

Ben

On Thu, Apr 2, 2009 at 8:12 PM, Ben Tilly bti...@gmail.com wrote:
 Please note that OSCON spent several years in Portland.  Saying that
 Portland is part of Boston is like saying that Boston is part of New
 York.

 Ben

 On Thu, Apr 2, 2009 at 6:33 PM, Andrew Langmead
 andrew.langm...@verizon.net wrote:
 On Thu, 2009-04-02 at 21:42 -0400, Tom Metro wrote:
 Do you think they'll ever hold one of these in Boston?

 Once upon a time O'Reilly seemed to have more of a presence here. They
 still have offices here (no?), but I had the impression they started out
 here.

 According to
 http://blogs.oreilly.com/cgi-bin/mt/mt-search.cgi?IncludeBlogs=57&search=lovelace
  Tim O'Reilly started the company in 1985, and moved to CA in 1987.

 It seems that for the last decade or so they've done everything in
 California.

 I'm guessing that O'Reilly was a very different company ten years ago.
 In articles like
 http://radar.oreilly.com/2009/02/state-of-the-computer-book-mar-17.html 
 people describe the tech book market collapsing in 2001 (falling 20% a year 
 for three straight years.) I can't find the exact article now, but I've seen 
 them describe that time period as a time for reinvention. Since OSCON has 
 always been in CA I assume it will always stay there (although they keep
 moving it a bit, Monterey, San Diego, etc.)


 ___
 Boston-pm mailing list
 Boston-pm@mail.pm.org
 http://mail.pm.org/mailman/listinfo/boston-pm





Re: [Boston.pm] Discount for O'Reilly Open Source Convention (OSCON)

2009-04-02 Thread Ben Tilly
On Thu, Apr 2, 2009 at 7:39 PM, Andrew Langmead
andrew.langm...@verizon.net wrote:
 On Thu, 2009-04-02 at 20:12 -0700, Ben Tilly wrote:
 Please note that OSCON spent several years in Portland.

 I had forgotten that they moved to Portland. I guess its been years
 since I've even considered checking with my employer to see if a
 conference like this was in the budget, even as employers changed. (I
 still find it sort of funny that when I left boston.com a year and a
 half ago, the move from a media company to a financial company looked
 like a move towards greater stability.)

 Just the other day, I was talking to a co-worker about how it would be
 hard to justify the cost of a conference or tutorial. If I went, would
 the knowledge I gained result in a  $1k+ increase in productivity?

If you went to OSCON last year and did my tutorial, you would have
learned how to do A/B testing properly.  If you work for a web-based
company, and used that learning to run one successful A/B test, you
would have likely improved some metric (eg revenue/user) by 10%.

Would the profit margin on 10% of your web-based business pay for a
$1K conference and the time it took for you to set up and evaluate
that A/B test?  Hopefully it would pretty easily.  Which means that
your *second* successful A/B test will be money in the bank.

The slides to that presentation are available at
http://elem.com/~btilly/effective-ab-testing/.
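The slides themselves aren't reproduced here, but the arithmetic
behind evaluating such a test is small.  A minimal two-proportion
z-test sketch (the counts are invented, and this is generic statistics
rather than anything specific to that tutorial):

```perl
use strict;
use warnings;

# Minimal two-proportion z-test for a conversion-style A/B test.
# (Illustrative numbers, not taken from the OSCON tutorial.)
sub ab_z_score {
    my ($conv_a, $n_a, $conv_b, $n_b) = @_;
    my $p_a = $conv_a / $n_a;
    my $p_b = $conv_b / $n_b;
    my $p   = ($conv_a + $conv_b) / ($n_a + $n_b);       # pooled rate
    my $se  = sqrt($p * (1 - $p) * (1/$n_a + 1/$n_b));   # pooled std error
    return ($p_b - $p_a) / $se;
}

# 200/10000 conversions in control vs 260/10000 in the variant:
my $z = ab_z_score(200, 10_000, 260, 10_000);
printf "z = %.2f (|z| > 1.96 ~ significant at 95%%)\n", $z;
```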

Cheers,
Ben



Re: [Boston.pm] software puzzle - extracting longest alphabetical list of phrases from a list of words

2008-10-26 Thread Ben Tilly
On Sun, Oct 26, 2008 at 10:21 AM, Tolkin, Steve [EMAIL PROTECTED] wrote:
 The following is just a problem in computer science.  It is not directly
 related to Perl, or to my work.  I am looking for insights in how to
 think about this.

 The input: a list of words.
 The output: a partitioning of the input list into a longest list of
 phrases, such that the phrases are in alphabetical order.  (Each phrase
 is one or more consecutive words, and a word is a maximum length
 sequence of non-space characters.)
[...]

 I presume this problem is already known to software engineering.  What
 is its name?  (For example, other problems are solved by connected
 components, or topological sort, etc.)
[...]

The algorithm at
http://www.algorithmist.com/index.php/Longest_Increasing_Subsequence
can be adapted to this problem.  You will need small modifications to
take into account the fact that "foo bar" is alphabetically before
"foo baz" even though the element "foo" is the same both times.
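For reference, the unadapted algorithm on that page in a plain O(n^2)
Perl form over single words (the word list is invented; the
phrase-level modifications mentioned above are not included):

```perl
use strict;
use warnings;

# Classic O(n^2) longest non-decreasing subsequence DP over strings,
# the starting point for the phrase-partition adaptation.
sub longest_increasing {
    my @a = @_;
    return () unless @a;
    my @len  = (1) x @a;    # len[i]: best subsequence length ending at i
    my @prev = (-1) x @a;   # prev[i]: predecessor index, for readback
    for my $i (0 .. $#a) {
        for my $j (0 .. $i - 1) {
            if ($a[$j] le $a[$i] && $len[$j] + 1 > $len[$i]) {
                $len[$i]  = $len[$j] + 1;
                $prev[$i] = $j;
            }
        }
    }
    # Find the best endpoint, then walk the chain backwards.
    my ($best) = sort { $len[$b] <=> $len[$a] } 0 .. $#a;
    my @seq;
    for (my $i = $best; $i >= 0; $i = $prev[$i]) {
        unshift @seq, $a[$i];
    }
    return @seq;
}

my @words = qw(cherry apple banana fig date grape);
print join(" ", longest_increasing(@words)), "\n";  # apple banana fig grape
```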

Cheers,
Ben



Re: [Boston.pm] software puzzle - extracting longest alphabetical list of phrases from a list of words

2008-10-26 Thread Ben Tilly
On Sun, Oct 26, 2008 at 5:05 PM, Ben Tilly [EMAIL PROTECTED] wrote:
 On Sun, Oct 26, 2008 at 10:21 AM, Tolkin, Steve [EMAIL PROTECTED] wrote:
 The following is just a problem in computer science.  It is not directly
 related to Perl, or to my work.  I am looking for insights in how to
 think about this.

 The input: a list of words.
 The output: a partitioning of the input list into a longest list of
 phrases, such that the phrases are in alphabetical order.  (Each phrase
 is one or more consecutive words, and a word is a maximum length
 sequence of non-space characters.)
 [...]

 I presume this problem is already known to software engineering.  What
 is its name?  (For example, other problems are solved by connected
 components, or topological sort, etc.)
 [...]

 The algorithm at
 http://www.algorithmist.com/index.php/Longest_Increasing_Subsequence
 can be adapted to this problem.  You will need small modifications to
 take into account the fact that "foo bar" is alphabetically before
 "foo baz" even though the element "foo" is the same both times.

Hrm, returning to this problem I see that I was too fast in foisting
you off to some documentation.  From your point of view the
modification to be able to take the dataset:

foo foo foo foo foo foo foo foo foo foo

and get out the increasing subsequence

foo
foo foo
foo foo foo
foo foo foo foo

is not likely to be obvious.

Also the first element has to start the first phrase, and therefore
has to start the subsequence of interest.  That's an important
restriction.

Now you say you just want insights, so I don't want to give you the
full answer.  So I'll work to avoid that.  The hint is that for each
list element you want to know the length of the longest ascending
sequence of phrases you can get by starting the next phrase at that
list element, and where the previous phrase started.  If you have this
data structure built up, then you just look through it for the longest
string of phrases ending with the final phrase.  Then read off your
answer backwards - you know where the final phrase of your longest
sequence starts, that tells you where the next to last one starts,
then the one before that, and so on.

The nasty complication is that you need to not only consider the list
element starting that phrase but also the minimal length of the phrase
that needs to start at that element for the phrases to be in ascending
order.  (Consider the foo foo foo... example to see how the next
phrase starting at an element may have a minimum length.)  So if list
element 20 can be the next phrase of length 1 or more after a sequence
of 3 phrases, or the next phrase of length 3 or more after a sequence
of 4 phrases, either of those partial sequences could go on to be your
next answer so you have to keep track of both of them.  But often you
don't have to keep track of more than one.  For instance if list
element 25 could be the start of the next phrase of length 1+ after a
sequence of 5 phrases, or the start of the next phrase of length 4+
after a sequence of 5 phrases, or the start of the next phrase of
length 2+ after a sequence of 4 phrases, then you only need to track
the first possible sequence to that point.  (Unless, of course, you
wanted to be able to enumerate all of the sequences of phrases of
maximal length.  In that case you'd want to track the first and second
possibilities.  But the third can be ignored.)

I'll leave it to you to figure out a data structure that can properly
track this information, and the development of an algorithm that can
track it.  But I will tell you that there is an algorithm that is
usually O(n*n) for this problem.  (The foo foo foo... example is a
worst case scenario - it is slightly worse for that.  I'm too lazy to
figure it out properly, but I'd guess it to be no worse than about
O(n*n*n).)

Cheers,
Ben

___
Boston-pm mailing list
Boston-pm@mail.pm.org
http://mail.pm.org/mailman/listinfo/boston-pm


Re: [Boston.pm] Baby boy

2008-09-30 Thread Ben Tilly
On Tue, Sep 30, 2008 at 1:02 PM, Uri Guttman [EMAIL PROTECTED] wrote:
 SS == Steve Scaffidi [EMAIL PROTECTED] writes:
  SS The hardest part is cleaning up after the frequent core-dumps. ;)

 he posted that the initial core dumps were the worst! :)

While the initial was worse, HE wasn't the one doing it.

After 3 months of nighttime wakeups, the frequent ones get old.

Congratulations, BTW.

Ben

___
Boston-pm mailing list
Boston-pm@mail.pm.org
http://mail.pm.org/mailman/listinfo/boston-pm


Re: [Boston.pm] Net::Server::PreFork - How to make dynamic config changes without bouncing a server

2008-07-30 Thread Ben Tilly
On Wed, Jul 30, 2008 at 11:42 AM, Ranga Nathan [EMAIL PROTECTED] wrote:
 Before I go ahead and do something screwy, I thought to ask the public what
 they do in this case. I realize that one of the children would get the
 message indicating the changes. If it updates the data structure in memory
 that would do it only for THAT client, right? In other words, what is the
 best way for all the children to share variables?

If you want to guarantee that your application can never scale, then
you could use shared memory.

If you want your application to be able to run on multiple machines,
then I'd suggest that you use memcached.  See
http://www.danga.com/memcached/, which you can access using the CPAN
module Cache::Memcached.

Several years ago it had a bug where you couldn't have keys over a
certain length.  We got around that by just using Digest::MD5 to make
short keys to use.  I don't know if the limitation is still there or
not.  Other than that I've never seen a problem with it.
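That workaround is tiny; here is a sketch of the idea (the "cfg:"
prefix and the 250-byte figure are assumptions, not from the original
exchange - only the Digest::MD5 hashing trick is from the message
above):

```perl
use strict;
use warnings;
use Digest::MD5 qw(md5_hex);

# memcached builds of that era rejected keys over a certain length
# (commonly 250 bytes); hashing a long key down to a fixed 32-char hex
# digest sidesteps the limit.  The "cfg:" prefix is just namespacing.
sub cache_key {
    my ($long_key) = @_;
    return 'cfg:' . md5_hex($long_key);
}

my $key = cache_key('some/extremely/long/configuration/key/' x 50);
print length($key), "\n";    # always 36: "cfg:" plus 32 hex chars
```

The hashed key is then what you hand to Cache::Memcached's get/set.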

But you'll probably want a plain text file to be written out somewhere
in the background to preserve data across server restarts.

Cheers,
Ben

___
Boston-pm mailing list
Boston-pm@mail.pm.org
http://mail.pm.org/mailman/listinfo/boston-pm


Re: [Boston.pm] Net::Server::PreFork - How to make dynamic config changes without bouncing a server

2008-07-30 Thread Ben Tilly
On Wed, Jul 30, 2008 at 3:33 PM, Tom Metro [EMAIL PROTECTED] wrote:
 Ben Tilly wrote:

 But you'll probably want a plain text file to be written out somewhere
 in the background to preserve data across server restarts.

 I think the OP is referring to a typical scenario where you update a
 configuration file, and then send a signal to the process to provoke a
 re-read of the configuration file.

It sounded more to me like you send a regular request to the server
that causes the server configuration to change.  More of a remote
control version than what you're describing here.

Though, that said, there is little downside to using a different
procedure to configure a server application than you use to access it.

 If you want to guarantee that your application can never scale, then
 you could use shared memory.
 If you want your application to be able to run on multiple machines,
 then I'd suggest that you use memcached.

 Isn't memcached overkill for a handful of config variables that rarely get
 reloaded? (Assuming I understand the scenario correctly.)

Overkill?  Sure.  Significant overhead?  Not if the processing of a
request takes any real work.  If your application will ever live on
more than one server, then using memcached up front is a very good
idea, and using shared memory is a much worse one.

 Ranga Nathan [EMAIL PROTECTED] wrote:

 I realize that one of the children would get the message indicating
 the changes. If it updates the data structure in memory that would do
 it only for THAT client, right? In other words, what is the best way
 for all the children to share variables?

 This concept is certainly common enough in UNIX. Take Apache, for example.
 But I've never had the need to look into exactly how it is implemented. (The
 multi-threaded/multi-process services I've written in Perl haven't had the
 need to reload config files while running.)

It can be implemented in multiple ways.  One is that you can have it
in shared memory (or a shared cache like I suggested) and the child
processes regularly read that cache.  Another is that you could have
each child check in once per request cycle (or once every several) to
see if their information is stale.  If it is then they could either
try to reload information, or else they could just exit and let the
parent process respawn.
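A minimal sketch of that check-once-per-request-cycle approach, using
the config file's mtime as the staleness stamp (the file path and the
reload hook are hypothetical, for illustration only):

```perl
use strict;
use warnings;

# Hypothetical path; in a real server this would be your config file.
my $config_file = '/etc/myapp/server.conf';
my $last_seen   = -1;

sub maybe_reload_config {
    my ($path) = @_;
    my $mtime = (stat $path)[9];
    return 0 unless defined $mtime;    # file missing: keep old config
    return 0 if $mtime == $last_seen;  # unchanged since our last look
    $last_seen = $mtime;
    # ... re-read and re-parse the config here ...
    return 1;                          # signal that a reload happened
}

# In the child's request loop, once per request (or every few):
# maybe_reload_config($config_file);
```

Each child keeps its own $last_seen, so children converge on the new
config without the parent pushing anything at them.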

I suspect that Apache with prefork follows the exit and respawn
approach, but I don't really know.

 If a signal directed at the parent process gets propagated to the children
 (either by the OS or by the parent process), then with a bit of redundant
 inefficiency, you could have a signal handler in each child reload the
 configuration and update their local copies of the variables.

It would have to be done by the parent process, because the OS doesn't
generally do that.  But note that signal handling in Perl is fraught
with difficulties to think about.  Starting with the decision about
whether you'd like to catch signals safely (no dumping core please),
or in a timely fashion (don't wait until a database query ends before
noticing the signal please).  For this you probably want to catch it
safely.  However for a hard shutdown you may want to catch it in a
timely fashion.  Unfortunately Perl makes you choose.  (But at least
you can choose - it used to be that you were stuck with whichever one
Perl supported, and different versions of Perl made that choice
differently.)

 The parent process could kill off and restart the children, though that
 probably doesn't meet your criteria of not interrupting the service.

That is why I suggest having the change be something the children
notice somewhere in their request lifetime rather than being forcibly
pushed to them.

 If you were using Perl's threads, there's a built-in mechanism for declaring
 shared variables. Otherwise, I'd investigate the various shared memory and
 IPC modules on CPAN. There are a few IPC modules in the core distribution.

Before choosing a caching module look at
http://cpan.robm.fastmail.fm/cache_perf.html and see the performance.

Cheers,
Ben

___
Boston-pm mailing list
Boston-pm@mail.pm.org
http://mail.pm.org/mailman/listinfo/boston-pm


Re: [Boston.pm] Force browser rendering of a partial dataset?

2008-07-15 Thread Ben Tilly
On Tue, Jul 15, 2008 at 11:11 AM, Christopher Schmidt
[EMAIL PROTECTED] wrote:
 On Tue, Jul 15, 2008 at 10:52:45AM -0700, Ben Tilly wrote:
 On Tue, Jul 15, 2008 at 9:01 AM, Christopher Schmidt
 [EMAIL PROTECTED] wrote:
  For the record, the problem you're trying to solve is probably something
  like 'force rendering of partially complete HTML' or something along
  those lines. (The problem is not strongly related to HTTP or anything
  else.)

 I'm sorry but this is entirely wrong if the problem is the time to
 generate his document.

 But that's not the problem he described. He said "I want to render
 before the end of the data is received."  That is not an HTTP problem, it
 is an HTML problem.

Actually his stated problem was that the user has to wait and he wants
to cut back on that waiting.  His proposed solution was making the
browser render right away.  That will help, but we should not assume
that it solves his real problem, which is that users have to wait.

Furthermore it is important not to block his thinking away from where
he might have to go.  He may need to go this route to solve his real
problem.  But he's less likely to get there if he believes an expert
who told him not to bother thinking about it.

 I think that it is perfectly possible that there will be problems with
 the webserver delivering output to the client before the client has been
 entirely generated. Some tools make this harder than others. It is
 possible that the tool in use here makes it hard -- or it's possible
 that that problem has already been solved. In any case, the problem
 description was narrow, and I addressed the problem description.

I understand why you did that.  But I think it important that you not
do that, and instead let him know that he may have a combination of
problems to solve to get the result that he wants.

In any case the point is now moot since I let him know that that might
be an issue, and gave him some pointers on what he'll have to look at
if it is.

Cheers,
Ben

___
Boston-pm mailing list
Boston-pm@mail.pm.org
http://mail.pm.org/mailman/listinfo/boston-pm


Re: [Boston.pm] quiz slides and talk: compiler vs. interpreter vs. script

2008-06-13 Thread Ben Tilly
On Thu, Jun 12, 2008 at 7:40 PM, Bob Rogers [EMAIL PROTECTED] wrote:
   From: Ben Tilly [EMAIL PROTECTED]
   Date: Thu, 12 Jun 2008 18:33:36 -0700

   On Thu, Jun 12, 2008 at 4:01 PM, Bob Rogers wrote:
. . . You can even run code at read time, when the program is being
parsed by the compiler (or interpreter).

   Um, you can do that in Perl as well.  See Lingua::Romana::Perligata
   for an amusing demonstration of exactly what you can choose to do to
   code at read time.

 That's source filtering, which is much cruder; it's a single hook that
 happens before lexing/parsing, rather than during.  Lisp has a simple
 way to say "evaluate this and insert it into the parse at this point",
 and a much more elaborate mechanism for hacking the lexer in order to
 introduce new syntax.  New syntax can also be introduced via macros.

I did not know what you meant by "at read time".  And yes, I am fully
aware of the power of Lisp macros.  However it is rare in Lisp to
introduce new syntax.  Other than the ubiquitous '(foo) for (QUOTE
foo), the Lisp I've seen doesn't use much syntactic sugar.  Other
kinds of sugar, sure, but still it is "toenail clippings in oatmeal",
to borrow a phrase from Larry Wall.

   But don't worry; IIUC, Perl 6 will be able to do most if not all of
 this.  ;-}

Well, sort of.  Kind of.  Yes, but sort of in an opposite way.

Perl 6 will have all sorts of things letting you more reliably and
flexibly redefine syntax.  You'll be able to hook directly into the
parsing process, redefine the grammar, and do things like define your
own operators.  However I don't know of anything that will match
Lisp's macro system.

Ben
 
___
Boston-pm mailing list
Boston-pm@mail.pm.org
http://mail.pm.org/mailman/listinfo/boston-pm


Re: [Boston.pm] quiz slides and talk: compiler vs. interpreter vs. script

2008-06-12 Thread Ben Tilly
On Thu, Jun 12, 2008 at 4:01 PM, Bob Rogers [EMAIL PROTECTED] wrote:
[...]
 Lisp goes even farther down the road of blurring the boundary between
 interpreter and compiler than Perl does.  You can even run code at read
 time, when the program is being parsed by the compiler (or interpreter).
 Some people aren't aware that Lisp is primarily a compiled language
 (which I bet is also true for Perl).  Even so, nobody thinks Lisp is a
 scripting language.  Go figure.

Um, you can do that in Perl as well.  See Lingua::Romana::Perligata
for an amusing demonstration of exactly what you can choose to do to
code at read time.

Cheers,
Ben
 
___
Boston-pm mailing list
Boston-pm@mail.pm.org
http://mail.pm.org/mailman/listinfo/boston-pm


Re: [Boston.pm] SOAP? Alternatives?

2008-04-18 Thread Ben Tilly
I feel your pain.  On a project last summer my need to interact with
WSDL caused me to switch from Perl to Java.

But http://use.perl.org/article.pl?sid=08/04/10/0128226 suggests that
XML::Compile may now be able to help.  There are probably bugs to work
out, but at least there is a chance of it working, and the authors are
probably pretty responsive.

Ben
 
___
Boston-pm mailing list
Boston-pm@mail.pm.org
http://mail.pm.org/mailman/listinfo/boston-pm


Re: [Boston.pm] [OT] -Short- OSS survey. Help a student get some data so he can get back to hacking!

2008-03-22 Thread Ben Tilly
On Fri, Mar 21, 2008 at 8:01 PM, Guillermo Roditi [EMAIL PROTECTED] wrote:
   How do you define contribute?  Does it include submitting bug reports
that do not contain source code?

  What I had in mind involved source code, but I did not mean to write
  off the work of bug reporters. I definitely think that a well crafted
  bug report can in many instances be one of the greatest contributions
  a project could receive, but my project deals with source directly.

How about submitting bug reports that *do* contain code?

I've never been paid to do extensive work on a project, but in the
course of my work I've found bugs, produced patches to fix said bugs,
and then submitted said patches back to the project the code came
from.  So I've generated code as part of my work, but not enough to
really consider myself a participant.

Ben
 
___
Boston-pm mailing list
Boston-pm@mail.pm.org
http://mail.pm.org/mailman/listinfo/boston-pm


Re: [Boston.pm] merging lists that are ordered but not sorted

2008-01-29 Thread Ben Tilly
On Jan 29, 2008 10:57 AM, David Golden [EMAIL PROTECTED] wrote:
 On Jan 29, 2008 12:11 PM, Tolkin, Steve [EMAIL PROTECTED] wrote:
  I want to reconstruct the underlying list.  In other words the order of
  the elements agrees in all the lists, but there is no sort condition.
 
  Example:
  List 1: dog, cat, mouse
  List 2: dog, shark, mouse, elephant
 
  There are 2 possible outputs, and I do not care which one I get.
 
  The reason that I have not just coded this up is that it seems it
  require an unbounded amount of look ahead.  Also, when there are more
  than 2 lists, I think I need to read from all of them before making a
  decision about which element can be safely output.

Yes, this is true.  You need an unbounded look ahead.  Because you
might have 5 lists, and the last item of list 1 might be the first
item of list 2, the last of list 2 might be the first of list 3, etc.
Until you've read each list in full you can't know how to solve it.

 What comes to mind is indexing all of the words on the maximum depth
 they occur in any list.  Then you output all the elements of max_depth
 1 (dog), all the elements of max_depth 2 (cat, shark), max_depth 3
 (mouse), max_depth 4 (elephant).

List 1: dog, cat, mouse, rat, chimp
List 2: chimp, gorilla, kangaroo

The algorithm you described would output gorilla before rat.

 You still have to read all the lists at least once, though.

That is logically unavoidable.

Here is some lightly tested code to do what is asked.  It has, though,
no error checking.

sub merge_lists {
  my @list_infos
    = map { {
        list     => $_,
        depth    => {},
        position => 0,
        done     => 0,
      } } @_;

  # Create indexes.
  for my $list_info (@list_infos) {
    my $list  = $list_info->{list};
    my $depth = $list_info->{depth};

    for my $i (0..$#$list) {
      $depth->{$list->[$i]} = $i;
    }
  }

  my @merged_list;
  FIND_ELEMENT: while (@list_infos) {
    LIST: for my $list_info (@list_infos) {
      my $element = $list_info->{list}->[ $list_info->{position} ];

      # Is this element one that can go next?
      for my $other_list_info (@list_infos) {
        my $other_position = $other_list_info->{position};
        my $other_depth = $other_list_info->{depth}->{$element};
        if (defined $other_depth and $other_depth > $other_position) {
          # Oops, we'll have to take an element from another list.
          next LIST;
        }
      }

      # We've found our element, let's do bookkeeping
      push @merged_list, $element;
      for my $other_list_info (@list_infos) {
        my $other_list = $other_list_info->{list};
        my $other_position = $other_list_info->{position};

        if ( $element eq $other_list->[$other_position] ) {
          if ($other_position == $#$other_list) {
            $other_list_info->{done} = 1;
          }
          else {
            $other_list_info->{position}++;
          }
        }
      }

      # And cleanup.
      @list_infos = grep {not $_->{done}} @list_infos;
      next FIND_ELEMENT;
    }
  }

  return @merged_list;
}

And to run it just call:

merge_lists([qw(dog cat mouse)], [qw(dog shark mouse elephant)]);

Cheers,
Ben
 
___
Boston-pm mailing list
Boston-pm@mail.pm.org
http://mail.pm.org/mailman/listinfo/boston-pm


Re: [Boston.pm] Perl Foundation grants

2007-12-22 Thread Ben Tilly
On Dec 22, 2007 5:17 AM, Tom Metro [EMAIL PROTECTED] wrote:
 I recently listened to some (old) Perlcast news segments by Randal
 Schwartz and each time he mentioned the TPF grants - talking about what
 grants were recently awarded and how to apply for them.

 I wasn't aware that anyone could propose projects to be funded by TPF
 and that they funded small projects. I had the impression that they were
 there to fund core Perl development. Probably because that's how it
 started out.

Yes, virtually anyone can apply, and TPF does fund small projects that
are deemed to be useful for the Perl community.  However be warned
that the funding is relatively limited, so the grants awarded tend to
be fairly small.

Historically people couldn't propose projects per se.  Instead we
waited until someone submitted a proposal to do a specific project.
However Alberto Simões just took over as secretary of the grant
committee, and it looks like he's thinking about creating a list of
projects that proposals are desired for.  (People could still propose
anything they wanted to do, that list is just a list of ideas.)

 Anyone on the list tried applying for one?

I haven't.  In fact I'm not allowed because it would be a conflict of
interest. :-)

See http://www.perlfoundation.org/grants_committee to find out who
actually decides which proposals to fund.

Ben
 
___
Boston-pm mailing list
Boston-pm@mail.pm.org
http://mail.pm.org/mailman/listinfo/boston-pm


Re: [Boston.pm] IE7 and JS image object

2007-11-12 Thread Ben Tilly
On 11/12/07, Alex Brelsfoard [EMAIL PROTECTED] wrote:
 Hey all,

 I know this is not so much a JavaScript group, but I figured someone
 might have heard of what I am running into.
 If not, feel free to ignore this message.

 Situation:
 I am using JavaScript to create an image.
 It needs to be loaded, and i need to see its request in the apache logs.

 Problem:
 The code I am using works in every situation except on IE7 on Windows
 (the image loads and exists in the temporary internet files folder,
 but there is no request for the image in the apache logs).
 I have found a way to make it work in IE7, but the solution confounds me.

 Original code:
 var img_src = ..;
 var my_img = new Image(1,1);
 my_img.src = img_src;

 [That's the code that works everywhere but in IE7]

 The Fix:
 var img_src = ..;
 var my_img = new Image(1,1);
 my_img.onload = function() {}
 my_img.src = img_src;

 Here's the REALLY interesting point: I can replace 'onload' with
 'onerror' and it works.
 But if I do not have that onload or onerror, I do not get the image
 request in my logs.

 Does anyone have any idea how/why this might happen?

Obviously IE 7 has a more aggressive caching algorithm, and your extra
function gets in the way of that algorithm.  For further details we'd
have to know the details of said caching algorithm.

(As Tom Metro says, your technique should be regarded as unreliable,
even though it works now.)

Cheers,
Ben
 
___
Boston-pm mailing list
Boston-pm@mail.pm.org
http://mail.pm.org/mailman/listinfo/boston-pm


Re: [Boston.pm] Subroutine definition

2007-09-13 Thread Ben Tilly
On 9/12/07, Bob Rogers [EMAIL PROTECTED] wrote:
On 9/11/07, Bob Rogers [EMAIL PROTECTED] wrote:
From: Ben Tilly [EMAIL PROTECTED]
[...]
That said, know your audience.  Using functional techniques in Perl
should be a deliberate decision.  In many programming groups, I'd
never use them because the others couldn't maintain that code.

 That's pretty sad.  After all, map is in perlfunc -- you don't even
 have to use anything, much less install something from CPAN.  So
 normally I would pounce on any such opportunity to enlighten my
 colleagues.  But I assume you know *your* audience in this case . . .

Overeagerness to enlighten your audience can result in a resentful
audience.  Conversely, willingness to meet them part way can result in
an audience that is more willing to learn.  I've found that my
willingness to be careful in how I push people has resulted in faster
learning on their part.  And less politics.  And less management worry
that nobody else can maintain my code.

And on the occasions where I really need to use what I know, people
are more willing to accept it.  Because they know that I'm not just
showing off for the sake of showing off, but rather because it serves
a purpose.

Cheers,
Ben
 
___
Boston-pm mailing list
Boston-pm@mail.pm.org
http://mail.pm.org/mailman/listinfo/boston-pm


Re: [Boston.pm] Subroutine definition

2007-09-11 Thread Ben Tilly
On 9/11/07, Palit, Nilanjan [EMAIL PROTECTED] wrote:
 So I tried this using the following code, where %format_conv has an
 entry for each type of conversion needed with a list of items:
[...]
 When I run it, the 'defined' part works fine, but I get an error on the
 last line:

 Can't use string ("formatconv_bidir2in") as a subroutine ref while
 "strict refs" in use at bsdl_gen.pl line 238.

I'm very glad you have that check.  Read
http://perl.plover.com/varvarname.html and the two follow-up posts
that it links to to understand why you really don't want to
accidentally use symbolic refs.

 How do I get this sub call to work with the sub name in a variable?

Two options.  One is that you can locally turn off that check with

  no strict 'refs';

just before the subroutine call.  The other is that you could take a
reference to the subroutine and use that:

  my $sub = \&$subname;
  # time passes
  $sub->(@args);

(I've never quite understood why that's allowed by strict 'refs', but it is.)
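A third option, which avoids symbolic refs entirely, is to store code
refs in the hash from the start instead of sub names (the conversion
names below are illustrative, not the original script's):

```perl
use strict;
use warnings;

# A dispatch table of code refs: strict 'refs' never enters into it,
# because no string is ever used as a subroutine name.
my %format_conv = (
    uc_item => sub { return uc $_[0] },
    trim    => sub {
        my ($s) = @_;
        $s =~ s/^\s+|\s+$//g;   # strip leading/trailing whitespace
        return $s;
    },
);

my $converted = $format_conv{uc_item}->('abc');   # 'ABC'
```

Looking up an unknown conversion then gives you undef to check for,
rather than a runtime symbolic-ref error.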

A random incidental note.  It is very bad form to use map as a looping
construct.  If you're going to insist on writing that loop inline, use
an inline for loop.  Like this:

  $sub->($_) for @itemlist;  #Convert all the items

A second random incidental note.  I've found that it is generally a
bad idea to change the format of variables in place.  If for no other
reason than the fact that you're unable to give the variables an
unambiguous name.  Read chapter 11 in Code Complete 2 to learn more
about good variable names.  (Read the rest of the book while you're at
it...)

Cheers,
Ben
 
___
Boston-pm mailing list
Boston-pm@mail.pm.org
http://mail.pm.org/mailman/listinfo/boston-pm


Re: [Boston.pm] Subroutine definition

2007-09-11 Thread Ben Tilly
On 9/11/07, Palit, Nilanjan [EMAIL PROTECTED] wrote:
[...]
  It is very bad form to use map as a looping construct.

 Can you elaborate why it is a bad form: readability, performance, ...?
 Just want to understand the underlying reason. (To me, both the for 
 map inline forms appear to be the same readability  performance wise.)

Readability is the big difference.

The difference between map and for is that map is _supposed_ to
construct a return list.  (In recent versions of Perl it actually
doesn't construct one in void context, but that is a technical
detail.)  Therefore choosing to use map rather than for is a strong
hint that the return list is going to be used somewhere.  When it is
not going to be used, you have just misled experienced programmers.

Using map as a loop is understandable, but it is poor form.  Like someone with
bad English, you can be understood, but the listener also understands
that you don't know English very well.  Furthermore you're demanding
more from your readers than you need to.  Anyone who speaks English
(or who knows a couple of computer languages) can guess what a for
loop does.  But only people who know Perl will understand what map
does.

A more subtle consideration is that good programmers actively try to
avoid side effects.  If you get used to this style, then you
deliberately try to write code which doesn't depend on side effects.
For these programmers, map not only has an implicit promise that the
return list will be used, but also has one that the map block won't
have side effects.  And, of course, you've broken that promise.

These costs may seem to be small, but they are real and there is no
corresponding benefit to using map in the way that you did.  So why
use map?  And as insignificant as this may seem, good programmers try
to accumulate lots of small benefits like that.  While the individual
wins are small, the cumulative effect is quite significant.

For more discussions, with points pro and con, see
http://www.perlmonks.org/?node_id=296795.
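To make the contrast concrete, here is a map whose return list is the
point, next to a for loop used purely for its side effects:

```perl
use strict;
use warnings;

my @items = (1, 2, 3);

# map: the return list is the point, and it is actually used.
my @doubled = map { $_ * 2 } @items;

# for: side effects are the point (here, accumulating output lines).
my @lines;
push @lines, "item: $_" for @items;
```

Writing the second loop with map would run, but would promise a return
list that never gets used.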

Cheers,
Ben
 
___
Boston-pm mailing list
Boston-pm@mail.pm.org
http://mail.pm.org/mailman/listinfo/boston-pm


Re: [Boston.pm] Social Meeting in August

2007-08-14 Thread Ben Tilly
On 8/14/07, Kenneth A Graves [EMAIL PROTECTED] wrote:
 On Tue, 2007-08-14 at 11:07 -0400, john saylor wrote:
  hi
 
  On 8/14/07, Ronald J Kimball [EMAIL PROTECTED] wrote:
   Or would people rather do something
   on the weekend?
 
  i attend so infrequently, i do not expect my preference to carry much
  weight, but this sunday night [19 aug] would be good for me.

 The advantage of Tuesday is that it's easier to fit a large group into a
 bar or restaurant than it would be on a weekend night.

The second advantage of a weekday night is that fewer people will have
other plans.

Ben
 
___
Boston-pm mailing list
Boston-pm@mail.pm.org
http://mail.pm.org/mailman/listinfo/boston-pm


Re: [Boston.pm] Process Mgmt - Recommendations?

2007-06-15 Thread Ben Tilly
On 6/15/07, Charlie Reitzel [EMAIL PROTECTED] wrote:
 Hi All,

 I'm looking for a Perl module to do process management.  We're building a
 test harness and need to fire up a number of client traffic generators and
 wait for them all to finish.

Do you want them all to be running at once?

 A wrinkle is that we need to report if _any_ of the spawned processes exit
 with an error status.  So the normal bash/ksh wait command won't work, as
 it only reports the exit status of the last child to finish.

There are better ways to do this these days, but several years ago I
wrote http://www.perlmonks.org/?node_id=28870 and still go back to
that solution when I need something quick.  (Because it is already
written.  And it does work.)

Get rid of the $job_count logic and it will spawn an unlimited number
of jobs at once.

 Shouldn't matter, per se, but it may be worth mentioning that these
 processes will typically be ssh script execution on a remote box.  At this
 point, I hoping to keep the client requirements down to SSH/Cygwin.

That shouldn't matter.  However I should note that I haven't tried
that solution on any version of Windows more recent than NT, and a
vague memory says that there may have been minor issues on NT.  So try
it, but don't blame me if it doesn't work perfectly.

Cheers,
Ben
 
___
Boston-pm mailing list
Boston-pm@mail.pm.org
http://mail.pm.org/mailman/listinfo/boston-pm


Re: [Boston.pm] Reading a variable CSV file with embeded newlines

2007-06-08 Thread Ben Tilly
On 6/8/07, Gyepi SAM [EMAIL PROTECTED] wrote:
 On Fri, Jun 08, 2007 at 03:26:56PM -0400, Alex Brelsfoard wrote:
  I have a CSV file where each line may NOT have the same number of fields.
  One item per line.

 xSV is line oriented: as long as each line is well formed it should be parsed
 correctly. Making sense of the data may be more difficult though.

Yup.

  But occasionally a field in an item will have one or more newlines in it.
  How can I break this up correctly?

 Embedded newlines are OK as long as the field is quoted. However, not all
 tools will parse the field correctly. Last time I checked, Text::CSV and
 Text::CSV_XS do not. I suspect Text::xSV will be better behaved. There are
 other, non Perl, tools that should work fine as well.

I had thought that Text::CSV did handle that case, but I just checked
and it does not. :-(

Text::xSV does this correctly, though not speedily.  Here is sample code:

  use strict;
  use Text::xSV;

  my $csv = Text::xSV->new(
      filename => "foo.csv",
      row_size_warning => 0, # Stop warnings for variable size rows.
  );
  while (my @row = $csv->get_row()) {
      # Do something here.
  }

Cheers,
Ben
 
___
Boston-pm mailing list
Boston-pm@mail.pm.org
http://mail.pm.org/mailman/listinfo/boston-pm


Re: [Boston.pm] overriding instance methods, dynamic package namespaces

2007-05-20 Thread Ben Tilly
On 5/20/07, Uri Guttman [EMAIL PROTECTED] wrote:
  BT == Ben Tilly [EMAIL PROTECTED] writes:

   BT The purpose of using goto there is in case some code uses caller() and
   BT could get confused about the extra subroutine.  (For instance Carp
   BT would be likely to warn at the enclosing subroutine that you defined.)

 i ran into that problem when doing error handling in file::slurp. i
 wanted an error handler to be called in several places which would carp
 to the correct level (and also return directly to the caller). so magic
 goto was my answer at the time. but i have since learned that carp (or
 some cpan variant) can be used properly several call levels down from
 the original call into a module. i think it backtraces to find a package
 different from the one that did the carping (where you are now). i
 haven't looked at it in detail but it sounds like it would be useful in
 this case too.

Carp has a number of APIs to allow people to control where it carps
from.  The documentation explains them in detail.  The one that
everyone thinks is right but which is almost always a misuse is to use
$Carp::CarpLevel.  If you read the Perl 5.8 documentation you'll find
instructions on how to use them.  Please disregard those instructions,
they are horribly wrong.  (Yes, I've submitted a patch.)

The right way to control where and when you carp is to use @ISA,
@CARP_NOT, %Carp::Internal and %Carp::CarpInternal.  Here is a brief
explanation of how to do that.

First of all Carp will issue its warning on the first "untrusted"
call.  By default, Carp's notion of a trusted call is one where one
package inherits from the other.  It doesn't matter whether the caller
inherits from the callee or the other way around, that call is
trusted.  As you might expect, this trust relationship is recursively
generated by inspecting @ISA.  Normally this works pretty well, but
occasionally you want a different rule.  Therefore you can use
@CARP_NOT to override the implicit direct trust relationship that is
inferred from @ISA.

Of course there are a few calls that we want to not carp no matter
what.  There are two levels of control there, both of which are meant
to be used by the Perl core.  The first is to add your package to
%Carp::Internal.  That's a list of packages that you'll never get a
carp or confess from.  The main purpose of that is to make sure that
complete stack backtraces start from within user code rather than Perl
code.  The other mechanism is to add your package to
%Carp::CarpInternal, which is like %Carp::Internal except that calls
TO those packages will never trigger a carp.  It is this rule that
keeps a carp from being generated on the line where you call carp.
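Here is a small sketch of the @CARP_NOT mechanism just described
(package names are invented for the example):

```perl
use strict;
use warnings;

package My::Validator;
use Carp qw(croak);
# Calls between My::Validator and My::API are trusted, so the error is
# reported at My::API's caller rather than inside My::API.
our @CARP_NOT = ('My::API');

sub check {
    defined $_[0] or croak "undefined value";
    return $_[0];
}

package My::API;
sub fetch { return My::Validator::check($_[0]) }

package main;
my $err = '';
eval { My::API::fetch(undef) } or $err = $@;
print $err;   # blames the main:: call into My::API::fetch
```

Without the @CARP_NOT line, the croak would be reported at the call
from My::API into My::Validator instead.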

In hindsight this API is too complex and not flexible enough.  What
should have been done is to provide an additional level of control,
which is that if a package defines the function CARP_NOT then that
function should be called with the details of a call to figure out
whether or not a carp warning should be generated on that call.  Then
people could set up their own modules with whatever carp rules they
wanted.

Ah well.  That wouldn't be a hard change to make if someone wanted to
make it.  My only excuse for the bad API is that at the time nobody
was interested in discussing it with me, and my only significant use
case when I came up with it is that I wanted Exporter to do a better
job of reporting errors on the right line.  (And I certainly succeeded
in that.)

Cheers,
Ben
 
___
Boston-pm mailing list
Boston-pm@mail.pm.org
http://mail.pm.org/mailman/listinfo/boston-pm


Re: [Boston.pm] overriding instance methods, dynamic package namespaces

2007-05-18 Thread Ben Tilly
On 5/17/07, Tom Metro [EMAIL PROTECTED] wrote:
 Greg London wrote:
  Evals and typeglobs will let you do it.
  If you don't like that sort of thing (I don't),
  you can use a module I wrote called SymbolTable
  which hides all the ugliness for you.

 Thanks Greg for taking the time to ponder this, but I believe using
 SymbolTable will just provide a nicer syntax for doing this:

  *Module::Under::Test::method2 = sub { ... };

 as I previously mentioned. This approach fails to pass this test:

  my $mut1 = Module::Under::Test->new();
  *Module::Under::Test::method2 = sub { ... };
  $mut1->method2(); # modified behavior

  my $mut2 = Module::Under::Test->new();
  $mut2->method2(); # original behavior

 As the call to $mut2->method2() will still invoke the modified behavior,
 because the class, rather than the instance, was modified. (But I think
 you got this, as indicated by your second reply.)

He didn't quite handle that correctly, but there is a solution to that
problem as well.  (All code is untested and may not compile.)

sub replace_sub_for_instance {
    my ($object, $subroutine_name, $new_subroutine) = @_;
    no strict 'refs';
    defined &$subroutine_name
        or die "Subroutine $subroutine_name not found";
    my $old_subroutine = \&$subroutine_name;
    my $object_name = refaddr($object);
    *$subroutine_name = sub {
        my $self = $_[0];
        if (refaddr($self) eq $object_name) {
            goto &$new_subroutine;
        }
        else {
            goto &$old_subroutine;
        }
    };
}

Note that I was careful not to capture the object of interest in the
subroutine because I didn't want to mess up a DESTROY.
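Putting that together as a self-contained sketch (the class and method names are made up, and I've added a defined-ness check and a redefine-warning suppression that the bare helper omits):

```perl
use strict;
use warnings;
use Scalar::Util 'refaddr';

package Counter;
sub new   { bless {}, shift }
sub greet { "original" }

package main;

sub replace_sub_for_instance {
    my ($object, $subroutine_name, $new_subroutine) = @_;
    no strict 'refs';
    no warnings 'redefine';
    defined &$subroutine_name
        or die "Subroutine $subroutine_name not found";
    my $old_subroutine = \&$subroutine_name;
    # Capture the address, NOT the object, so the closure does not
    # keep the object alive and delay its DESTROY.
    my $object_name = refaddr($object);
    *$subroutine_name = sub {
        my $self = $_[0];
        if (refaddr($self) eq $object_name) {
            goto &$new_subroutine;
        }
        else {
            goto &$old_subroutine;
        }
    };
}

my $this_one = Counter->new;
my $that_one = Counter->new;
replace_sub_for_instance($this_one, 'Counter::greet', sub { "patched" });

print $this_one->greet, "\n";    # patched
print $that_one->greet, "\n";    # original
```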

  Hm, not a sub{}.
  You'd have to insert a sub that would check the
  reference of the instance and compare it to the one you
  want to bypass. If they're equal, skip. if not equal
  then call method2. something like.
 
  my $instance_to_skip = $mut;
  my $intercept_method2 = sub {
 my $obj=shift(@_);
 if($obj eq $instance_to_skip) {
return;
} else {
   return ($obj->method2(@_)); # ...
}
  };
 
  then assign symboltable for your class so that method2
  is replaced with $intercept_method2.

Note that the above code does NOT work because after the symbol table
entry is replaced, method2 goes to the wrong place.  You have to
capture the original subroutine first and call through that reference.

 If I wanted to interject my own layer of indirection at the instance
 level, and was willing to modify the class under test, I'd probably opt
 for an AUTOLOAD method that translated all method calls into calls that
 would execute a sub ref from the object's hash under a key of the same
 name as the method. Then the exact same thing as is possible in
 JavaScript could be done:

 $obj->{method2} = sub { ... };

 But this is not a trivial change - either from a code complexity or
 performance perspective - to impose on the module under test. If this is
 the only option, then dealing with a bit of syntax ugliness (creating a
 subclass) in the test class is far preferable.

The main problem that I see with subclasses is what happens if there
is code that checks ref($object) against some pattern.  If you know
the code and know that that doesn't happen, then I'd strongly
recommend subclassing.  But I've seen code where you can't rely on
that.

Also note that AUTOLOAD and inheritance do NOT play well together.
That's another reason to avoid that solution.
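For concreteness, the AUTOLOAD-dispatch scheme Tom describes might be sketched like this (class name invented; note it makes no attempt to cooperate with inheritance, which is exactly the weakness just mentioned):

```perl
use strict;
use warnings;

package Flexible;
our $AUTOLOAD;

sub new { bless {}, shift }

# Route every unknown method through a code ref stored in the object
# under the method's name, JavaScript-style.
sub AUTOLOAD {
    my $self = shift;
    (my $name = $AUTOLOAD) =~ s/.*:://;
    return if $name eq 'DESTROY';    # destruction is not a dispatchable method
    my $code = $self->{$name}
        or die "No method '$name' on this object";
    return $code->($self, @_);
}

package main;

my $obj = Flexible->new;
$obj->{method2} = sub { "per-instance behavior" };
print $obj->method2, "\n";    # per-instance behavior
```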

 Browsing further on CPAN turns up Class::Unique:
 http://search.cpan.org/~friedo/Class-Unique-0.03/lib/Class/Unique.pm

 which accomplishes the equivalent with this constructor:

 sub new {
  my $class = shift;
  my $obj = { };

  my $unique_class = $class . '::' . refaddr $obj;

  {
  no strict 'refs';
  @{ $unique_class . '::ISA' } = ( $class );
  }

  # so we don't have to rely on ref()
  $obj->{$PKG} = $unique_class;
  return bless $obj, $unique_class;
 }

 It interjects the address of the object into the class name, then
 creates a subclass using that modified name, and returns an object
 blessed into the subclass. That way each instance has a unique namespace
 in the symbol table. Of course to pull this off your class needs to be a
 subclass of Class::Unique.

This problem is fixable.

sub give_object_unique_subclass {
my $object = shift;
my $class = ref($object);
my $unique_class = $class . '::' . refaddr($object);
{
no strict 'refs';
@{ $unique_class . '::ISA' } = ( $class );
}
bless($object, $unique_class);
}

The only big drawbacks to this are that code that checks ref can
break, and we might break code that depends on the stringification of
$object.  (Think inside-out objects.)
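A runnable sketch of that fixed version (class and method names invented), showing the per-instance namespace in action:

```perl
use strict;
use warnings;
use Scalar::Util 'refaddr';

package Widget;
sub new  { bless {}, shift }
sub name { "widget" }

package main;

sub give_object_unique_subclass {
    my $object = shift;
    my $class  = ref($object);
    my $unique_class = $class . '::' . refaddr($object);
    {
        no strict 'refs';
        @{ $unique_class . '::ISA' } = ($class);
    }
    bless($object, $unique_class);
}

my $w1 = Widget->new;
my $w2 = Widget->new;
give_object_unique_subclass($w1);

# $w1 now has a namespace of its own; install a method there and
# only this instance sees it.
{
    no strict 'refs';
    *{ ref($w1) . '::name' } = sub { "special widget" };
}

print $w1->name, "\n";    # special widget
print $w2->name, "\n";    # widget
```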

 If your class under test is properly designed to allow subclassing, you
 should be able to something like:

  my $subclass = __PACKAGE__ . '::test_method1::Module::Under::Test';
  my $mut = Module::Under::Test::new($subclass);
  {
  no strict 'refs';
  @{ 

Re: [Boston.pm] overriding instance methods, dynamic package namespaces

2007-05-18 Thread Ben Tilly
On 5/18/07, Tom Metro [EMAIL PROTECTED] wrote:
 Ben Tilly wrote:
[...]
  Also note that AUTOLOAD and inheritance do NOT play well together.
  That's another reason to avoid that solution.

 I had that thought as well. Isn't there a workaround where your AUTOLOAD
 handler can explicitly hand off to the superclass AUTOLOAD?

Nope.  And even if there was, that's insufficient in the face of
multiple inheritance.

At one point I started coding a solution to the problem.  It is on
CPAN as Class::AutoloadCAN.  I meant to flesh it out further (eg
properly implement private, protected and public methods) but lost
interest.  If someone else wants to take it over, be my guest.  I
discovered part way through that I'm the wrong person to solve
problems that I'm not particularly interested in...  (Particularly
since several people have come out with object systems that solve it,
and Perl 6 promises to solve it as well.)

See http://www.perlmonks.org/?node_id=342804 and
http://www.perlmonks.org/?node_id=446700 for more background.

[...]
 That's also a nice solution. Should get this stuff on CPAN.

Feel free to take it and run with it.

  The only big drawbacks to this are that code that checks ref can
  break, and we might break code that depends on the stringification of
  $object.  (Think inside-out objects.)

 Isn't there also a potential problem if something upstream of the
 reblessing saves a reference to the object, or is this true:

use Scalar::Util 'refaddr';
my $obj = {};
my $before = refaddr($obj);
my $newobj = bless($obj, 'foo');
print "true\n" if $before eq refaddr($newobj);

 Perl says it is. Makes sense, as although bless returns an object, it
 shouldn't create a new one.

There is no such problem.  Though the following code may surprise you.

my %hash;
my $ref = \%hash;
my $obj = bless \%hash, "Test";
print "ref: $ref\nobj: $obj\n";

 (The man page for Scalar::Util[1] doesn't say what data type refaddr()
 returns, but the examples show what appears to be a decimal number,
 rather than a hex string, as I would have expected, so that should
 probably be a numeric test above.)

Good point.

 1. http://search.cpan.org/~gbarr/Scalar-List-Utils-1.19/lib/Scalar/Util.pm


   my $subclass = __PACKAGE__ . '::test_method1::Module::Under::Test';
   my $mut = Module::Under::Test::new($subclass);
  [...]
  But if new() is declared in a superclass, that breaks.
 
  New being declared in a superclass isn't a problem.

 Are you sure? I thought these:
 Module::Under::Test->new
new Module::Under::Test
 would cause a search for new() in the class hierarchy, but
Module::Under::Test::new
 will only try to run new() in package Module::Under::Test.

Oops, missed that you were making a function call rather than a
method call.  But you can change that line and you're fine.

 Reblessing is a better solution, anyway.

 Thanks Ben. An informative post, as usual.

You're welcome and *blush*.

Ben
 


Re: [Boston.pm] overriding instance methods, dynamic package namespaces

2007-05-18 Thread Ben Tilly
On 5/18/07, Greg London [EMAIL PROTECTED] wrote:

 sub replace_sub_for_instance {
     my ($object, $subroutine_name, $new_subroutine) = @_;
     no strict 'refs';
     defined &$subroutine_name
         or die "Subroutine $subroutine_name not found";
     my $old_subroutine = \&$subroutine_name;
     my $object_name = refaddr($object);
     *$subroutine_name = sub {
         my $self = $_[0];
         if (refaddr($self) eq $object_name) {
             goto &$new_subroutine;
         }
         else {
             goto &$old_subroutine;
         }
     };
 }
 
 Note that I was careful not to capture the object of interest in the
 subroutine because I didn't want to mess up a DESTROY.

 I'm just a tad confused about that last bit.
 There may be advantages to using goto over recalling the method,
 but I'm not sure how shifting the object off @_ will mess up a DESTROY call.

You're looking at the wrong part of the code.  I'm referring to how I
made sure to capture refaddr before creating the anonymous sub so that
the anonymous sub did not have $object in it anywhere.  That keeps
$object from being in the closure, which means that $object won't be
kept alive by a reference from the subroutine.

There is, admittedly, a small risk that the object will be freed and
another object will wind up at the same address.  But I consider that
better than the probable problem with DESTROY.

The purpose of using goto there is in case some code uses caller() and
could get confused about the extra subroutine.  (For instance Carp
would be likely to warn at the enclosing subroutine that you defined.)

 If the original call was $inst-method2(...);
 then $inst will point to the object during the entire method2 call,
 which means if I keep a copy during the call, it shouldn't be a problem.

During the call I don't care about.  But capturing a reference in the
subroutine that I assign would not be good.

 The reference count can't reach zero until sometime after method2
 returns and $inst gets a new value, and the instance gets garbage
 collected.

 Either that, or I haven't had enough caffeine today.

Additional caffeine generally helps. :-(

Cheers,
Ben
 


Re: [Boston.pm] overriding instance methods, dynamic package namespaces

2007-05-18 Thread Ben Tilly
On 5/18/07, Greg London [EMAIL PROTECTED] wrote:
[...]
 You're looking at the wrong part of the code.  I'm referring to how I
 made sure to capture refaddr before creating the anonymous sub so that
 the anonymous sub did not have $object in it anywhere.  That keeps
 $object from being in the closure, which means that $object won't be
 kept alive by a reference from the subroutine.


 Ah, take my original code:

 my $instance_to_skip = $mut;
 my $intercept = sub {
my $obj=shift(@_);
if($obj eq $instance_to_skip) {
   return;
   } else {
   $obj->method2(@_);
   }
 };

 And change the first line to

 my $instance_to_skip = $mut . '';

 This forces the instance to be stringified,
 and then you're storing the string in your
 closure, not a reference.

Right.  I avoided that because of the possibility of an overload, but
that's the natural implementation.

My code was untested, not un-thought-through. ;-)

Cheers,
Ben
 


Re: [Boston.pm] OT: open source blog, forum, etc., software

2007-02-22 Thread Ben Tilly
On 2/22/07, Bobbi Fox [EMAIL PROTECTED] wrote:
 Those who were at the last boston.pm meeting may recall my asking if there
 was some place where I could find out about various open-source
 not-necessarily-perl-based blog/wiki/forum software.

 People kindly threw out some names of OS stuff they knew about.

 In the meantime, there must be something in the air, because someone on an
 internal mailing list posted a query about OpenSource CMS's today, which
 yielded a link to http://www.opensourcecms.com/index.php  This looks like
 what I was asking for, so I thought I'd pass it on.

I went to the site and couldn't see anything without supplying a
login.  That's annoying.  Googling about them, they seem to be GPLed,
but I couldn't verify that.

If they aren't GPLed then please be aware that there is a lot of
software in this space that claims to be open source but whose claims
are questionable.  In particular they use licenses that have not been
submitted to the OSI for approval, and which would probably fail to
meet the open source definition if they did.

The problem is that they insist on clauses mandating that there must
be a specific logo displayed that links to a specific company.  People
object to this on a number of grounds.  First of all, that it
introduces a requirement for a specific technology in violation of OSD
#10.  (For instance it would be against the license to produce a
command-line tool derived from their software.)  Weaker objections
have also been advanced on several other grounds, notably that putting
this much pressure against commercial reuse is discriminating against
specific fields of endeavour.

For more on this, see http://linuxgazette.net/134/moen.html.

If those limitations do not bother you, then go ahead and use the
software.  But don't call things open source unless they are.

Cheers,
Ben
 


Re: [Boston.pm] glob() bug?

2007-02-22 Thread Ben Tilly
It looks to me like a bug.

Your expectation of the expansion looks correct to me, and on Linux I
get the behaviour that you wanted from /bin/bash, /bin/sh (links to
bash) and /bin/csh (links to /bin/tcsh).

It is remotely possible that there is some real csh that disagrees,
but if so then I'd still prefer the behaviour that you want.  (Tom
Christiansen would, of course, disagree.)

Cheers,
Ben

On 2/22/07, Kripa Sundar [EMAIL PROTECTED] wrote:
 Hi all,

 Is this a glob() bug, or am I overlooking something obvious?
 perldoc -f glob didn't help.  TIA.

 The ".[0-9]*[0-9]" is globbed correctly when it is not
 inside braces.

  % touch a ab abcd a.777
  %
  % perl -le \
'print for glob("a{,b,b*d,.[0-9]*[0-9]}"), "---", glob("a.[0-9]*[0-9]")'
  a
  ab
  abcd
  ---
  a.777
  %
  % perl -v | g v5
  This is perl, v5.8.3 built for x86_64-linux
  %

 peace,  || Ben Cohen tells us about Oreo cookies:
 --{kr.pA}   || http://www.truemajorityaction.org/oreos/
 --
 Ignorance is bliss.  Zen contra-positive: Suffering is knowledge.


 


Re: [Boston.pm] Is there a way to search for referrers?

2007-02-01 Thread Ben Tilly
On 2/1/07, Uri Guttman [EMAIL PROTECTED] wrote:
  js == john saylor [EMAIL PROTECTED] writes:
[...]
   js wtf! it sure ain't ruby. it's still perl that's executing within
   js apache. it is certainly a different context than what you may be used
   js to- but you could say that about windows perl too.

 there are too many differences between a normal perl process and perl
 embedded in apache via modperl. ram issues, process copies, sharing
 stuff you don't want to share, etc.  i am no modperl expert as i stay
 away from it.

The first thing that you've said that is correct is that you are no
modperl expert.

The RAM issues with mod_perl are that if you have many copies of Perl
running on the same machine, they take up a lot of RAM.  With the
number of concurrent requests that typically get served, this becomes
a significant consideration.  The standard fix is to do as the
mod_perl guide says and use a reverse proxy setup in http accelerator
mode.
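For reference, the front-end half of that setup is small. This fragment is purely illustrative (the path, host, and port are placeholders, and it assumes mod_proxy is loaded):

```apache
# Thin front-end httpd: serve static files itself, and forward the
# dynamic URLs to a heavy mod_perl backend on a private port.
ProxyPass        /app http://127.0.0.1:8080/app
ProxyPassReverse /app http://127.0.0.1:8080/app
```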

At this point mod_perl's memory usage scales quite well.  In fact any
alternate platform serving the same load will have similar issues at
similar volume for the same reasons.  The only setup that might scale
better with memory is to go asynchronous.  Which kills your ability to
use multiple CPUs, forces more complex rewrites, denies you the ability
to use DBI, and doesn't even save that much memory.  (Having the data
for 20 people in 1 process takes somewhat less memory than having the
data for 20 people in 20 processes, but the difference isn't that
big.)

Process copies are an architectural decision.  With Apache 2 you can
choose whether or not to do that.  But if you know what you're doing,
you'd be silly to choose anything other than pre-fork for Perl on
*nix.  And the reasons for that have far more to do with Perl than
Apache.

Sharing stuff you don't want to share is a non-issue if you have
properly written code for mod_perl.  Conversely any attempt to take
CGI code and quickly port it into a persistent environment (ANY
persistent environment) will invite that problem.  And again, the
reasons have nothing to do with Apache.

So all of the mod_perl issues that you've pointed to are either
easily fixed, or had nothing to do with Apache.

 sure it has some benefits if you want to get at the
 insides of each request but as a computing platform it leaves too much
 to be desired. it can't be scaled off the box, you have to worry about
 other modules conflicting, you can't run your own event loop or async
 ops inside modperl, etc.

What do you mean that it can't be scaled off the box?  If you mean
that choosing mod_perl means that you can only have one webserver,
then you've demonstrated abysmal ignorance of the state of the art.
It is bog standard to have a mod_perl webserver farm sitting behind a
load balancer.  In fact that has been standard since load balancers
were invented.  Certainly every serious mod_perl shop that I've heard
about in the last decade has used this type of configuration.

What you are probably thinking of is how using *shared memory* can tie
you to one box.  Admittedly many mod_perl sites have made this
mistake.  However it is a coding mistake you make in Perl, and not a
mistake that is tied in any way, shape, or form to your platform, be
it mod_perl or otherwise.  Furthermore it is a trivial mistake to
avoid - just don't rely on local resources like that.

Worrying about other modules conflicting, well if you load any Perl
process with a bunch of modules then you have to worry about possible
conflicts.  (Though not very much if they are properly coded, which
most are.)  Again that concern is not specific to mod_perl.

The only thing that you've said that is completely correct is that you
can't usefully choose to write your own event loop.  And that is
because a basic event loop is exactly what Apache provides you.

  i could be wrong but i prefer to keep apache as
 simple and clean as possible and use perl behind it in stem (or
 fastcgi, speedycgi, etc.). then you have total control over your perl
 process and no worries about strange things that shouldn't ever matter
 but do with apache/modperl.

Your being wrong is not just a hypothetical possibility.  Your list of
strange things is a list of FUD that has nothing to do with Apache.

  IMNSHO apache is a web server and not an
 application platform. merging those two is a major mistake. but of
 course many do and many are successful. i think it is the wrong way to
 do complex web apps.

And you've just demonstrated a mixture of arrogance and ignorance that
makes me glad I don't work with you.  My only reason for writing this
response is that I didn't want anyone to be misled by your apparent
certainty.

Regards,
Ben
 


Re: [Boston.pm] Is there a way to search for referrers?

2007-02-01 Thread Ben Tilly
On 2/1/07, Uri Guttman [EMAIL PROTECTED] wrote:
  BT == Ben Tilly [EMAIL PROTECTED] writes:

   BT At this point mod_perl's memory usage scales quite well.  In fact any
   BT alternate platform serving the same load will have similar issues at
   BT similar volume for the same reasons.  The only setup that might scale
   BT better with memory is to go asynchronous.  Which kills your ability to
   BT use multiple CPUs, causes more complex rewrites, deny you the ability
   BT to use DBI, and doesn't even save that much memory.  (Having the data
   BT for 20 people in 1 process takes somewhat less memory than having the
   BT data for 20 people in 20 processes, but the difference isn't that
   BT big.)

 you can use multiple cpus, async ops, dbi all in a middleware layer
 behind apache. been there, done that with stem and it worked great. and
 it was multiple boxes to boot (threads have slight problems with that
 :). there are many ways to scale and threads is just one narrow solution
 IMO. they have their point but ain't no silver bullet.

WTF do threads have to do with anything?  Threads is not a solution
that I discussed, nor one that I would advocate using in Perl.  Nor
are multiple boxes a difficulty for me.

Also in a straightforward CRUD application, most of your time is spent
in the database.  DBI only offers blocking calls.  Unless you move DBI
off to its own middleware layer, you can't mix DBI and asynchronous
programming in a useful way.

Finally I never said or indicated that alternatives to mod_perl don't
exist and work.  I'm saying that your claims that mod_perl DOESN'T
work are just solid smoke blown out your ass.

   BT Process copies are an architectural decision.  With Apache 2 you
   BT can choose whether or not to do that.  But if you know what you're
   BT doing, you'd be silly to choose anything other than pre-fork for
   BT Perl on *nix.  And the reasons for that have far more to do with
   BT Perl than Apache.

 still limited to no async, no multibox scaling. the whole architecture
 of apache as an app platform is the issue. but it works for many and i
 won't go that route. i like more control over my systems and having
 easier time of debugging and all the rest. also no massive craziness
 with apache config files.

No wonder you are so ignorant on this topic.  Right after you just got
told that multibox scaling is a complete non-issue with mod_perl, you
are repeating your FUD.  When you're that resistant to learning basic
facts, you're going to remain ignorant.

"No async" is not a weakness; it is a basic design choice.  What I mean
by that is that ANY platform you choose will push certain assumptions
on you.  Those assumptions are neither good nor bad, they are just how
that platform works.  And one of the assumptions that Apache pushes is
that it is responsible for the basic event loop, which in turn means
you aren't using an asynchronous programming model.

On the subject of debugging, I've never seen a problem.  Lots of
people don't know how to use Apache::Reload on their development
servers.  And don't provide a development server per programmer.  (My
development server is, of course, also my desktop...)  Those choices
would make life harder.

And Apache configuration files, while sometimes stupid, are not THAT
hard.  Besides, you only have to get the configuration right once.
After that you don't look at it very often.

   BT So all of the mod_perl issues that you've pointed to are either
   BT easily fixed, or had nothing to do with Apache.

 i professionally and politely disagree. but that is my right. apache is

If you're disagreeing as a matter of ignorance, that is one thing.
When you disagree because you are misinformed as to the tradeoffs,
that is another.  As a professional you should be interested in
figuring out whether your disagreement is the former or the latter.
As a fellow professional, I'm informing you that it really looks like
the latter.

 a web/http server and a fine one at that. beyond that it is insane to do
 anything with it. ever heard of isolation? glomming more and more onto a
 system is how redmond develops. want apache to be the new winblows?
 people complain about emacs that way too (and i use emacs). you can't
 have one system try to be the end all. it is not a good idea and that
 has been proven time and time again. apache won't be able to sustain
 this for all applications for all users. so why no just get out of
 apache asap and do the real work behind it. let apache handle http,
 static files, url munging and those sort of things best done directly in
 the web server. asking it to do all the app stuff is just wrong, no
 matter how they implement it or whether it may work or not. the high
 level picture is wrong.

It looks to me like you are shoehorning reality into a simple
religious picture and ignoring the misfit.

What am I reminded of here?

Oh yes.  The great microkernel vs monolithic kernel debate.

The idea of microkernels is to divide

Re: [Boston.pm] '/' =~ m\/\b; (bug?)

2006-11-15 Thread Ben Tilly
On 11/15/06, Carl Eklof [EMAIL PROTECTED] wrote:
 Hi Guys n Gals,

 I have found some seemingly strange behavior that may
 be of interest to this list.

 My assumption was that the \b pattern in a regex would
 always match the beginning and end of a string (as
 documented in the perlre page). However on my build of
 5.8.7 this is not the case if the character being
 matched at the beginning or the end is a
 meta-character, i.e. one that quotemeta would escape.  Also
 note that escaping the character doesn't seem to make a
 difference.

Actually that is NOT as documented in the perlre page.  And thoughts
to the contrary are a misreading of the documentation.

What the perlre page says is that there is an imaginary \W at the
beginning and end of the string.  The result is that if the first
character in the string matches \w, then \b will match at the start,
and if the last character matches \w, then \b will match at the end.

However if the first and/or last characters do *not* match \w, then
that is not a word boundary and \b will not match there.

[examples snipped]

 Maybe this is not a bug, and this is just another
 nuance of regexs' that I have not learned, but it
 looks very fishy.

It is definitely not a bug.  If the string is "...", then there are no
words, hence no word boundaries, therefore \b should not match at all.
 (And it does not.)

Conversely if the string is "hello" then there is a word, and it has
boundaries, and those boundaries should be matched by \b.  (And they
are, thanks to the imaginary characters discussed in the
documentation.)
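The rule is easy to check directly; a small demonstration:

```perl
use strict;
use warnings;

# The "imaginary \W" rule: \b matches at a string's edge only when the
# edge character is itself a \w character.
print '/'     =~ /\b/   ? "match\n" : "no match\n";   # no match: no word characters at all
print 'hello' =~ /\b/   ? "match\n" : "no match\n";   # match: boundary before 'h'
print 'a/'    =~ m{/\b} ? "match\n" : "no match\n";   # no match: '/' then end of string
print 'a/b'   =~ m{/\b} ? "match\n" : "no match\n";   # match: '/' followed by 'b'
```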

 Any thoughts/wisdom?

See above.

Cheers,
Ben
 


Re: [Boston.pm] Python

2006-11-06 Thread Ben Tilly
On 11/5/06, Bill Ricker [EMAIL PROTECTED] wrote:
  I am a firm believer in expressing things in the most straightforward
  way possible.  Most people find loops straightforward, so I'm happy to
  use loops unless I have a good reason not to.

 Most *people* find loops and all other programming constructs weird and 
 opaque.

Well if you get down to brass tacks, most people are functionally
illiterate.  At least in Canada, which is the only country that I know
dares try measuring such stuff.  (Canada's definition of functionally
illiterate is "can't read at a grade 8 level."  Which is supposed to
be the level that, for instance, newspapers aim for.  Based on
personal experience, I'd say that the USA does a worse job of
education than Canada...)

 Most *programmers* first programming language used if-else and some
 do-loop-thingy in the introductory section so *most* programmers find
 loops simpler than recursion or 2nd order operators, since they
 weren't until the advanced chapters of the book, if they were even
 included at all. Is this why Dijkstra said BASIC was a mind-crippling
 affliction?

There are so many reasons why Dijkstra could have said it that there
is no point in guessing which specific items motivated him.

 A mathematician would usually rather use a reduction to a previously
 solved problem than a counting argument.

Sorry, this is BS.  I am speaking here as an almost mathematician. (I
came about a month from finishing my PhD then encountered a need for
money...)

Mathematicians are happy to use any technique they can.  But counting
arguments are often preferred because they tend to be more
straightforward, and they tend to be more informative.  By more
informative I mean that the counting argument often gives you
something which can be used to produce either more precise or else
follow-up results.

Reduction to a previously solved problem is used a lot simply because
it is a more powerful technique.  When that allows more elegant
solutions, that can be a big win.  However not always.  In fact there
is a whole branch of mathematics devoted to nothing else.  (It is
called combinatorics.)

On a side note, I remember being part of an interesting conversation
on why students seem to find it easier to learn induction than
recursion when they're almost identical.  Two big parts of the answer
seem to be that induction is somewhat simpler in form, and the
presentation has a more linear flow.  People seem to have a hangup
when reading code that is going to be executed multiple times.  It is
more natural to say, "Here is 1.  OK, based on 1, here is 2.  Based on
2, here is 3.  And so on."  Also recursion tends to be more
complicated.  For instance virtually no elementary math proofs have
multiple base cases, but this is fairly common in recursive
algorithms.

It may be helpful to point out the obvious here.  There is a widely
known mathematical notation for expressing iterative counting
expressions.  It is the Greek letter Sigma.  There is no corresponding
widely used mathematical notation for expressing recursion.  (There do
exist notations for it, but they are not nearly as widely used or
understood.)  There are a number of reasons for this, but one of the
major ones is that mathematicians find looping and iteration more
straightforward concepts.

Another tangential note.  It is important to distinguish between how
straightforward a set of concepts is and how straightforward it is to
express an idea using those concepts.  For instance goto is
conceptually very straightforward, but ideas expressed with goto tend
to be very obscure.

 The beauty of Perl is that Larry has wrought a language in which you
 can express things according to your simplicity, and those who see an
 inner simplicity in the Lisp-inspired and APL-inspired dialects of
 Perl can also happily use our simplicity.

Agreed.

  There are actually a lot of very prominent programmers who strongly
  dislike exceptions.

 There are some very prominent programmers who strongly like Java too.

 There are many valid reasons to avoid using exceptions in a given program.
 There are also valid reasons to use them.
 TIMTOWDI.

Right.  Depending on what kinds of programming you're doing, they may
or may not be very useful for you.

 Many of the reasons to avoid using exceptions are throw-backs to the
 bad "it's a character string" exception of early Perl, Python, C++
 exceptions. Fully wrought exception objects (which, Guido didn't seem
 to realize, could have a stringify operator to work compatibly with
 non-updated code!) address many of the old issues.

Funny, the things that I've seen good people complain about have to do
with unexpected flow of control which programmers have not thought
through.  This holds whether or not you're using objects or strings.

But that said, I'm not a huge fan of exception objects.  One big
reason is that exceptions are by nature code that is only run when
things go wrong.  Programmers being programmers and human nature 

Re: [Boston.pm] Python

2006-10-30 Thread Ben Tilly
On 10/29/06, Bob Rogers [EMAIL PROTECTED] wrote:
From: Ben Tilly [EMAIL PROTECTED]
Date: Sat, 28 Oct 2006 09:25:32 -0700

[...]
Ummm...you've mixed three unrelated things together.

1. Continuations.
2. Closures.
3. Tail call elimination.

 Unrelated??  I don't think so.

These are all distinct and none implies the others . . .

 True enough; they are distinct and independent.  That does not rule
 out related.

Point.  However it does suggest that they shouldn't be confused with each other.

But I think this may be a red herring.  I suspect your definition of
 continuation makes you unwilling to consider anything in Common Lisp a
 continuation, and hence what I said above does not make sense to you.
 That is understandable, especially since I did not explain that the
 above recipe was not general.  If you really need call/cc, you can't
 rewrite it in Common Lisp.

Exactly true.

 Of course, you're probably better off debugging your problematic code
 under a Scheme implementation that has a trace facility [1].  (If I had
 bothered to search for this earlier, we might have stopped typing at
 each other sooner.  ;-)

Which would have resulted in less confusion/enlightenment/boredom on
the part of our audience. :-)

Taking them in reverse order, here is what they are:

3. Tail call elimination is the process of eliminating call frames
that have nothing left to do except return a later call.  So in other
words if foo ends with return bar(), then one can wipe out foo's call
frame and replace it with bar.

 Mostly true, but you also have to consider dynamic environment.  For
 example, if (in Perl 5) you have

 sub foo {
 local $arg = shift;

 return bar($arg+1);
 }

 the dynamic binding of $arg means that foo has some cleanup to do after
 bar returns, so the call to bar is not a tail call.  If foo used my
 instead of local (which probably makes more sense in any case), then
 the call to bar would indeed be a tail call.

Sorry, but how does this disagree with what I said?  The fact that it
is possible for it to be non-obvious that a call is not a tail call
doesn't change what a tail call is.

 I considered what you said to be incomplete.  You stated only part of
 the requirement for a call to be in tail position, so I filled in the
 rest.  One could argue that this is nitpicking on my part, but one could
 also argue that "if foo ends with return bar()" is too simple to the
 point of error.

Ah.  I had said it correctly then restated it for clarity.  The
restatement was correct for some languages (eg Scheme), but not for
Perl.  You noticed, I didn't.

We're on the same page now.

Incidentally the existence of reliable destruction mechanics in Perl 5
means that even if you'd used my in this example, it is not
necessarily a tail call.  See ReleaseAction on CPAN for a module that
abuses this fact.

 I don't follow.  That reliable destruction happens when all pointers to
 $arg are destroyed; tail-calling bar would also destroy them when bar
 returned, though in the opposite order (bar's reference last instead of
 foo's reference last).  In other words, it doesn't matter whether foo or
 bar destroys the value, as long as it's the last reference.

 It's true that ReleaseAction can call its actions prematurely in the
 event of a tailcall, depending on where the object reference is kept,
 but that's a separate issue.

That's not just a separate issue, that's the whole issue.  Perl has
precise semantics for when DESTROY is called.  Tail call optimization
is supposed to be an optimization that is done only when it will
change nothing about the behaviour of the program.  But in Perl,
trying to do it changes those semantics, and therefore may change what
the program does.  So in Perl, having a lexical variable inside a
function may be enough to make that function not a candidate for tail
call elimination.

Now you could say that the fault lies with my module for abusing
internal details of Perl's implementation.  I'm a bad user.  But one
of Perl's core modules is SelectSaver, which relies on the same thing.
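The destruction-timing point generalizes beyond Perl. Here is a sketch in Python (assuming CPython's reference-counting; `Guard` is an illustrative stand-in for a lexical whose DESTROY matters, like ReleaseAction or SelectSaver):

```python
events = []

class Guard:
    """Stands in for a Perl lexical whose DESTROY must run at scope exit."""
    def __init__(self):
        events.append("acquire")

    def __del__(self):
        events.append("release")

def bar():
    events.append("bar runs")
    return 42

def foo():
    g = Guard()    # scoped resource, like  my $g = ReleaseAction->new(...)
    return bar()   # looks like a tail call, but g must outlive bar()

foo()
# foo's frame (and so g) is only torn down after bar returns:
assert events == ["acquire", "bar runs", "release"]
```

If `foo`'s frame were eliminated before calling `bar`, "release" would come before "bar runs", which is exactly the semantic change the optimization is not allowed to make.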

. . .

2. Closures are anonymous functions that carry their own environment
(ie variables that were in lexical scope when the function was
created).  I love closures.  I'm very glad that Perl 5 has them.

 Agreed!  (Except that they need not be anonymous; that's just an
 unfortunate quirk of Perl 5.)

Oops, they need not be anonymous in Perl 5 either.  The unfortunate
quirk of Perl 5 is that you cannot lexically scope the name of a
function, so many things that one might want to do with named lexical
closures can't be done.  However one can work around this by naming
them yourself: stick them in a lexical variable
Re: [Boston.pm] Python

2006-10-28 Thread Ben Tilly
On 10/27/06, Bob Rogers [EMAIL PROTECTED] wrote:
From: Ben Tilly [EMAIL PROTECTED]
Date: Thu, 26 Oct 2006 19:36:36 -0700

On 10/26/06, Bob Rogers [EMAIL PROTECTED] wrote:
From: Ben Tilly [EMAIL PROTECTED]
Date: Thu, 26 Oct 2006 17:09:35 -0700

[...]

While I agree that continuations are insanely powerful, I likewise
prefer generators and iterators to continuations.  Why?  Well suppose
that you write some complex code using continuations.  Suppose that
something goes wrong.  Now you want to debug it.  So you'd want some
useful debugging information.  Something like, say, a stack backtrace.

You're outta luck.

 That is not true -- or, at least, not necessarily true.  In a full CPS
 language like Scheme, tail-merging (or the equivalent CP transformation)
 is obligatory, so you may in fact be out of luck.  (I've never used Scheme,
 so I don't know how a real Schemer goes about debugging.)  However, a
 Scheme continuation looks just like a closure at the source level.  It
 is possible to write nearly identical code in Common Lisp that uses
 closures, and since CL does not tail-merge by default, you get the same
 functionality plus useful backtraces in the debugger.

Ummm...you've mixed three unrelated things together.

1. Continuations.
2. Closures.
3. Tail call elimination.

 Unrelated??  I don't think so.

These are all distinct and none implies the others.  A language may
implement any one or two of these without implementing the rest.  In
fact I can think of combinations of languages and compiler settings
that implement 6 of the 8 possible combinations of these features.
(Actually Perl 5 implements one and a half of these: tail call
elimination is not done automatically, but you can request it with
goto &foo.)

Taking them in reverse order, here is what they are:

3. Tail call elimination is the process of eliminating call frames
that have nothing left to do except return a later call.  So in other
words if foo ends with return bar(), then one can wipe out foo's call
frame and replace it with bar.

 Mostly true, but you also have to consider dynamic environment.  For
 example, if (in Perl 5) you have

 sub foo {
 local $arg = shift;

 return bar($arg+1);
 }

 the dynamic binding of $arg means that foo has some cleanup to do after
 bar returns, so the call to bar is not a tail call.  If foo used my
 instead of local (which probably makes more sense in any case), then
 the call to bar would indeed be a tail call.

Sorry, but how does this disagree with what I said?  The fact that it
is possible for it to be non-obvious that a call is not a tail call
doesn't change what a tail call is.

Incidentally the existence of reliable destruction mechanics in Perl 5
means that even if you'd used my in this example, it is not
necessarily a tail call.  See ReleaseAction on CPAN for a module that
abuses this fact.

Tail call elimination can make some recursive algorithms as efficient
as looping.  At the expense of losing potentially useful debugging
information.

 In reducing the stack requirement to that of a loop, you also reduce the
 amount of debugging information to that of a loop.  But what useful
 information about the previous iterations of a loop would you want to
 keep?  Are you saying that loops are therefore intrinsically harder to
 debug than (non-optimized) recursions?

If the recursive algorithm is just a rewritten loop, then there is no
difference.  The potential issue comes with a more complex recursive
algorithm where there are multiple functions you're bouncing between
and it isn't obvious to a human that it can be optimized.  Then you
might indeed get into trouble and want to ask, "How did I get here?",
and get confused that in one line you're calling foo, and on the next
you're in bar, with no idea how you got there.

Making this a compiler setting resolves the issue though.
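A sketch of the bouncing-between-functions case, in Python. CPython has no tail call elimination, so a small trampoline stands in for the optimization (all names are illustrative; note the trampoline would also misfire if a function legitimately returned a `(callable, args)` pair):

```python
def call(f, *args):
    # Represent a tail call as data instead of growing the stack.
    return (f, args)

def trampoline(f, *args):
    result = (f, args)
    while isinstance(result, tuple) and len(result) == 2 and callable(result[0]):
        f, args = result
        result = f(*args)   # the previous frame is already gone, as under TCE
    return result

def is_even(n):
    return True if n == 0 else call(is_odd, n - 1)

def is_odd(n):
    return False if n == 0 else call(is_even, n - 1)

# 100001 bounces between two functions at constant stack depth -- but if
# is_odd ever misbehaved, no backtrace could say how control reached it.
assert trampoline(is_even, 100000) is True
```

That constant stack depth is exactly the "How did I get here?" problem: the history of calls simply does not exist any more.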

2. Closures are anonymous functions that carry their own environment
(ie variables that were in lexical scope when the function was
created).  I love closures.  I'm very glad that Perl 5 has them.

 Agreed!  (Except that they need not be anonymous; that's just an
 unfortunate quirk of Perl 5.)

Oops, they need not be anonymous in Perl 5 either.  The unfortunate
quirk of Perl 5 is that you cannot lexically scope the name of a
function, so many things that one might want to do with named lexical
closures can't be done.  However one can work around this by naming
them yourself: stick them in a lexical variable and call that.
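For contrast, here is what a lexically scoped function name looks like in a language that has them; a sketch in Python, where an inner `def` gives exactly the named lexical closure that Perl 5 has to fake with a variable (names illustrative):

```python
def make_fact():
    def fact(n):   # 'fact' is a *named* closure, scoped to make_fact
        return 1 if n < 2 else n * fact(n - 1)
    return fact    # roughly Perl's  my $fact; $fact = sub { ... $fact->(...) };

assert make_fact()(5) == 120
```

The self-recursive call works because the name is visible inside its own lexical scope, which is the part Perl 5's package-global `sub` names cannot give you.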

See http://www.perlmonks.org/?node_id=138442 for a code example
demonstrating why the conflict between global naming and lexical scope
can get you into trouble.  See any of various modules and discussions
about inside-out objects to see a useful use of non-anonymous
closures in Perl 5

Re: [Boston.pm] Python

2006-10-27 Thread Ben Tilly
On 10/27/06, Tolkin, Steve [EMAIL PROTECTED] wrote:
 Dear Ben, Bob et al.,
 Thanks for this thread.  (It has a very high signal to noise
 ratio, compared with many others.)

 Dear Everyone,
 Since this started about Python, on a Perl discussion list, I am
 wondering whether Perl facilitates the kind of experimentation that
 led to Stackless Python (http://www.stackless.com/), an experimental
 implementation that supports continuations, generators, microthreads,
 and coroutines.
 See also
 http://www.onlamp.com/pub/a/python/2000/10/04/stackless-intro.html

Sorta.  There are modules like
http://search.cpan.org/~mlehmann/Coro-2.0/Coro.pm in Perl, but nobody
has built a modified Perl for something like this.

 Perhaps not, because this will be built into Perl 6.

Well it will. :-)

I suspect that out of the box the pugs implementation will be
significantly more capable in this regard than the parrot one.  That's
a pretty safe bet actually.  See
http://www.perlmonks.org/?node_id=580042 for an idea of what the pugs
implementation already does.

 Perhaps not, because the Python community is different than the Perl
 community in some fundamental way, e.g., there is only one version of
 Perl.

 Perhaps not, because Continuations are a Bad Thing.

My dislike notwithstanding, continuations are a core feature of Perl 6.

 I believe some disciplined way of doing concurrency is clearly needed,
 and I do not think any of our current abstractions are good enough.
 (They may work in theory, but not in practice, e.g. they are too hard to
 reason about, or to debug.)

Well I think that continuations are one of the worst possible
abstractions out there.  See the Perlmonks link for a better approach
to concurrency.  (Dunno how well it will work in Perl though.)  See
http://labs.google.com/papers/mapreduce.html for how Google handles
the issue.

 I can think of no better path than for Perl to get this right, and run
 well on the multi-core CPU systems of the future.

No matter how Perl handles this, for most of what I do with Perl, I
can cheerfully ignore it.  I'm not being flippant here.  There are a
number of good ways around the issue, or barriers to using Perl's
solutions.  They include:

- Using Apache with pre-forked processes already takes advantage of
all of the possible concurrency in a multi-core CPU without using any
language support.

- I/O bound jobs don't run any better on multiple CPUs than they do on one.

- Mixing concurrency and communications with an outside process is a
Bad Idea.  So, for instance, you cannot use a database handle from
inside multiple threads without confusing both you and the database
(with disastrous results).

Virtually everything I do is covered by one of those three cases, and
therefore cannot take useful advantage of any further language support
for concurrency that is added on.  I suspect that many other Perl
programmers are in the same boat.

Cheers,
Ben

Cheers,
Ben
 
___
Boston-pm mailing list
Boston-pm@mail.pm.org
http://mail.pm.org/mailman/listinfo/boston-pm


Re: [Boston.pm] Python

2006-10-26 Thread Ben Tilly
On 10/26/06, Bob Rogers [EMAIL PROTECTED] wrote:
From: Ben Tilly [EMAIL PROTECTED]
Date: Thu, 26 Oct 2006 17:09:35 -0700

On 10/26/06, Bob Rogers [EMAIL PROTECTED] wrote:
From: Tom Metro [EMAIL PROTECTED]
[...]
Guido made comparisons to Perl only in two areas - saying he likes
generators and iterators better than continuations . . .

 made me think of a paper [1] I only stumbled on recently (despite it
 being 13 years old!) on the semantic weaknesses of iterators.  I found
 it while researching coroutines; I can think of no more compelling
 demonstration of the power of continuations than the fact that they make
 coroutines trivial to implement [2].
[...]

While I agree that continuations are insanely powerful, I likewise
prefer generators and iterators to continuations.  Why?  Well suppose
that you write some complex code using continuations.  Suppose that
something goes wrong.  Now you want to debug it.  So you'd want some
useful debugging information.  Something like, say, a stack backtrace.

You're outta luck.

 That is not true -- or, at least, not necessarily true.  In a full CPS
 language like Scheme, tail-merging (or the equivalent CP transformation)
 is obligatory, so you may in fact be out of luck.  (I've never used Scheme,
 so I don't know how a real Schemer goes about debugging.)  However, a
 Scheme continuation looks just like a closure at the source level.  It
 is possible to write nearly identical code in Common Lisp that uses
 closures, and since CL does not tail-merge by default, you get the same
 functionality plus useful backtraces in the debugger.

Ummm...you've mixed three unrelated things together.

1. Continuations.
2. Closures.
3. Tail call elimination.

Taking them in reverse order, here is what they are:

3. Tail call elimination is the process of eliminating call frames
that have nothing left to do except return a later call.  So in other
words if foo ends with return bar(), then one can wipe out foo's call
frame and replace it with bar.

Tail call elimination can make some recursive algorithms as efficient
as looping.  At the expense of losing potentially useful debugging
information.

2. Closures are anonymous functions that carry their own environment
(ie variables that were in lexical scope when the function was
created).  I love closures.  I'm very glad that Perl 5 has them.

1. Continuations are a representation of the execution state of a
program at a given time.  You can think of executing a continuation as
a goto that also changes the executing context (various variables,
etc) as you're performing the goto.  Note that, like with tail call
elimination and unlike with closures, after you call a continuation
there is no record left of where you were or what your state was.  If
there is no record left, then there is no useful debugging information
left.

However it is worse than that.

With tail call elimination there is a limited amount of useful
debugging information that one might want to have.  You don't have the
information, but it is easy to know what the information would be.
And one could in principle have 2 run modes, one in which that is
tracked for debugging purposes, and one in which you can't.

With continuations debugging is not so easily done for a more
fundamental reason.  Which is that the flow of control need not
correspond to any directly comprehensible structure.

A traditional program, even one that uses closures, has an execution
shape that looks like you're traversing a tree.  (Go down one level,
another level, back up, down again, etc.)  And at any point in time
one may think about the most direct path back to the root.  (This is
your stack backtrace.)  But the execution shape of continuation based
code need not look remotely like this.

For instance suppose that you used continuations to implement a
cooperative multi-tasking system with multiple threads running in
parallel, and inside each thread you're following a familiar execution
paradigm of function calls that return.  (This is a realistic use, and
it all can be implemented with continuations.) Your execution path can
now be visualized as a series of rapid switches between simultaneous
traversals of several different trees.  This is *massively* more
complex than any execution path that a traditional program can follow.
Furthermore, one can reduce it to this simple picture only by knowing
the intent of the code, something I would not expect any automatic
debugging facility to decode from a trace and present in an
understandable fashion.  And finally, suppose that this
multi-threading code was messing up: you're debugging a situation
where sometimes it calls the wrong continuation, so that a thread
randomly forgets that it had made or returned from some function
calls.  Now you've got a series of executions that has no sane
visualization.
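The well-behaved version of that cooperative multi-tasking shape can be sketched with Python generators, the restricted cousin of continuations that Guido prefers: each `yield` suspends a "thread" with its whole state saved, yet every suspension still has an ordinary, inspectable frame (names illustrative):

```python
def worker(name, steps):
    # Each generator is one cooperative "thread"; yield suspends it,
    # saving its entire execution state, continuation-style.
    for i in range(steps):
        yield "%s:%d" % (name, i)

def round_robin(tasks):
    log = []
    while tasks:
        task = tasks.pop(0)
        try:
            log.append(next(task))   # resume the thread where it left off
            tasks.append(task)       # and put it back in the run queue
        except StopIteration:
            pass                     # thread finished; drop it
    return log

# Execution switches rapidly between the two "trees" of control:
assert round_robin([worker("a", 2), worker("b", 2)]) == ["a:0", "b:0", "a:1", "b:1"]
```

The difference from full continuations is the point of the whole thread: a generator can only be resumed where it stopped, so control can never "forget" a call the way a misused continuation can.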

If that didn't make any sense, let me make it simpler

Re: [Boston.pm] Python

2006-10-24 Thread Ben Tilly
On 10/24/06, Bob Rogers [EMAIL PROTECTED] wrote:
From: Tom Metro [EMAIL PROTECTED]
Date: Mon, 23 Oct 2006 03:01:39 -0400

I recently listened to:

Guido van Rossum: Building an Open Source Project and Community
http://cdn.itconversations.com/ITC.SDF-GuidoVanRossum.1-2005.02.17.mp3
http://cdn.itconversations.com/ITC.SDF-GuidoVanRossum.2-2005.02.17.mp3

and I still don't get what's so compelling about Python.

Guido made comparisons to Perl only in two areas - saying he likes
generators and iterators better than continuations, and essentially
saying he prefers the aesthetics of Python over Perl.

 That's really odd.  I was once told (by a fellow Lisp refugee) that
 Python is the favorite new language of Lisp programmers because it has
 similar semantics.  I wonder if that situation will change when Perl 6
 comes out; it is clear from the perl6-language list that Larry has drunk
 deeply of the functional programming wine.  FWIW.

Ironically Guido has very much NOT drunk of the functional programming
wine.  To be specific, he dislikes anonymous functions and closures,
which is one of the reasons that Python's support for them is not
great.  (And he removed some of the support they have.)

Cheers,
Ben
 


Re: [Boston.pm] Python

2006-10-24 Thread Ben Tilly
On 10/24/06, Bob Rogers [EMAIL PROTECTED] wrote:
From: Ben Tilly [EMAIL PROTECTED]
Date: Tue, 24 Oct 2006 17:12:53 -0700

On 10/24/06, Bob Rogers [EMAIL PROTECTED] wrote:
From: Tom Metro [EMAIL PROTECTED]
Date: Mon, 23 Oct 2006 03:01:39 -0400

. . .

Guido made comparisons to Perl only in two areas - saying he likes
generators and iterators better than continuations, and essentially
saying he prefers the aesthetics of Python over Perl.

 That's really odd.  I was once told (by a fellow Lisp refugee) that
 Python is the favorite new language of Lisp programmers because it has
 similar semantics.  I wonder if that situation will change when Perl 6
 comes out; it is clear from the perl6-language list that Larry has drunk
 deeply of the functional programming wine.  FWIW.

Ironically Guido has very much NOT drunk of the functional programming
wine.  To be specific, he dislikes anonymous functions and closures,
which is one of the reasons that Python's support for them is not
great.  (And he removed some of the support they have.)

Cheers,
Ben

 Thank you for the additional bits.  Now I don't feel so bad about not
 having noshed more on the Python cheese & crackers.

Well I may have left you with an overly bad impression.

While support for them is not great, you can get them to work.  Lots
of people have done so.  And he has lots of other functional bits.
(eg lots of stuff with list comprehensions.)  So depending on which
things you wanted to reach for, you might find Python very friendly.

Cheers,
Ben
 


Re: [Boston.pm] Parser for C-like language?

2006-06-23 Thread Ben Tilly
On 6/23/06, David Cantrell [EMAIL PROTECTED] wrote:
 On Fri, Jun 23, 2006 at 09:23:09AM -0400, Ted Zlatanov wrote:
  On 23 Jun 2006, [EMAIL PROTECTED] wrote:
   Wasn't there a C grammar for Parse::RecDescent?
   Not that worked.  Damian has acknowledged elsewhere that it shouldn't
   have been included.
  It works for simple cases, and may be adequate for the OP's needs.  I
  would recommend P::RD, because its grammar definitions are pretty
  similar to the Perl 6 grammar definitions (it will matter, some day),
  and because it's pretty good in general.  About the only thing that's
  hard about it is parsing the error messages, which takes practice.

 I've tried P::RD.  I didn't like it at all.  It seemed to take an awful
 lot of work to define a very simple language and I was not impressed by
 the documentation.  Next time I need a parser I'll try Parse::Yapp.

The bigger reason to not use P::RD is that it uses a lot of memory and
is very slow.  Unless you're going to be parsing very small files, you
will be in trouble.

This is not really Damian's fault, though: he wrote P::RD before Perl
had the \G anchor, so he had no choice but to make a copy of
everything he had left to parse at every step.  (If a 10K string has
3K tokens, the result is about 7.5 MB of data.  If a 100K string has
30K tokens, change that to 750 MB.)
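The copy-the-remainder cost is easy to reproduce outside P::RD. A Python sketch of the two tokenizing strategies (illustrative code, not P::RD's actual implementation; assumes input with no trailing whitespace):

```python
import re

TOKEN = re.compile(r"\s*(\w+|\S)")

def tokens_by_copy(text):
    # Old style: re-match against a fresh copy of the remainder each step.
    toks = []
    while text:
        m = TOKEN.match(text)
        toks.append(m.group(1))
        text = text[m.end():]   # copies everything left: O(n^2) bytes moved
    return toks

def tokens_in_place(text):
    # \G-style: anchor each match at pos and just advance an index.
    toks, pos = [], 0
    while pos < len(text):
        m = TOKEN.match(text, pos)
        toks.append(m.group(1))
        pos = m.end()           # no copying at all: O(n)
    return toks

src = "my $x = $y + 1;"
assert tokens_by_copy(src) == tokens_in_place(src)
```

Doubling the input quadruples the bytes moved in the first version, which is exactly the 7.5 MB vs 750 MB scaling described above.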

Damian knows how to fix it, and began the project, but ran out of time.

 I recommend the OP looks at it.

I haven't tried Parse::Yapp.  But I understand that you are far less
likely to run into problems with infinite recursion, which is another
Good Thing.

Cheers,
Ben
 

