Re: =head1 SEEN ALSO BY
My 2 cents: I've often wanted to be able to browse the module namespace hierarchy. That would be a great addition. I imagine the UI might be tricky to do well, though. Independently of that, I'd love to see something like a 'mentioned by' page. It would list all other distros that mention (via an L<Foo> link) a module in this distro. It would look like the reverse dependencies page. I imagine the implementation would be reasonably straightforward. No need to mess with the presentation of POD. Bonus points for also recording and showing the header of the POD section that the L<Foo> link was seen in. Tim.
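For what a 'mentioned by' scanner might look like: the core job is just pulling L<...> targets out of POD, remembering the current =head section. Here is a rough regex-based sketch; a real implementation would use a proper POD parser, and the function name is invented for illustration:

```perl
use strict;
use warnings;

# Extract the targets of simple L<Module::Name> links from a POD
# document, recording the =head heading each link appears under.
sub mentions_in_pod {
    my ($pod) = @_;
    my %mentions;       # module name => heading of the section it appeared in
    my $section = '';
    for my $line (split /\n/, $pod) {
        $section = $1 if $line =~ /^=head\d\s+(.+)/;
        while ($line =~ /L<([A-Za-z_][\w:]+)>/g) {
            $mentions{$1} = $section;
        }
    }
    return \%mentions;
}

my $pod = <<'POD';
=head1 SEE ALSO

See L<DBI> and L<Module::Dependency> for related work.
POD

my $m = mentions_in_pod($pod);
print "$_ (in '$m->{$_}')\n" for sort keys %$m;
# DBI (in 'SEE ALSO')
# Module::Dependency (in 'SEE ALSO')
```

Aggregating those per-distro results into a reverse index would then give the 'mentioned by' page.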
Re: Devel::Size broken
On Mon, Jun 16, 2014 at 04:57:49PM -0700, Mark Hedges wrote: The Devel::Size module seems to be broken in 5.20.0. No response from developers. No work for a year or so. What's the process to address a broken module that the developers won't fix? Kafka.pm indirectly depends on Devel::Size via Data::TreeDumper. I guess it isn't really that commonly used as a development tool. Devel::Size, and Devel::SizeMe, are very complex and require intimate knowledge of perl internals and considerable time to maintain and update for each major perl version. I'd be delighted if someone wanted to volunteer to contribute to that effort. Meanwhile, I've no plans to work on Devel::SizeMe in the near future. Sorry. Tim.
Re: Benchmark module with some more statistics
On Sat, Jun 25, 2011 at 12:19:49PM +0200, Lars Dɪᴇᴄᴋᴏᴡ 迪拉斯 wrote: Coordinate your efforts with Steffen Müller. http://blogs.perl.org/users/steffen_mueller/ http://search.cpan.org/~smueller/Dumbbench-0.04/ +1 Tim.
Re: Perl in the Data Warehouse
On Wed, Aug 18, 2010 at 04:11:32PM +0200, Nelson Ferraz wrote: Tim Bunce wrote: I don't think so. And neither is DataWarehouse. Looking at the code it seems to me this is a 'framework' of inter-related modules that share a common set of assumptions. (Which mandates one particular SQL syntax and hand-builds SQL without proper quoting!) As such I think it should be given a 'framework brand name' instead of being 'crowned' with the 'obvious' name (or an abbreviation of it). Thanks for the feedback. Two comments: 1) The code is alpha - expect everything to change. 2) I agree with your comment about obvious vs. brand name. So, I'm planning to remove all the framework-specific code, and create a framework brand name as you suggested. I'll possibly keep DataWarehouse to describe, as generally as possible, Facts, Dimensions, and Aggregates. YourBrandDataWarehouse::* or DataWarehouseYourBrand::* would be fine. Tim.
Re: Module Namespace for External API Wrappers
On Mon, Jul 05, 2010 at 09:28:04AM -0700, Eric Wilhelm wrote: # from Dave Cardwell # on Saturday 03 July 2010 05:09: I’ve written a module that wraps the notifo.com API ... I’m leaning towards WebService::Notifo, but would appreciate your advice if you would suggest otherwise. WWW::Notifo::API or (assuming that it's a REST API) maybe REST::Notifo::API? It's best to name modules by what they do rather than how they do it. WebService::Notifo seems right to me. (The WebService:: namespace was created for this kind of thing.) Tim.
Re: Yet another module naming suggestion query
On Tue, Apr 06, 2010 at 01:07:33AM +0300, Sawyer X wrote: On Mon, Apr 5, 2010 at 11:29 PM, Tim Bunce tim.bu...@pobox.com wrote: Data is fairly meaningless as a name. The Data:: namespace is intended to be used for modules that work with abstract data values: Data::Bind, Data::Bucket, Data::BitMask, Data::COW etc. Thank you for taking the time to comment! Meanwhile, while writing all the docs in order to be able to release properly to CPAN, I've also renamed it to Data::Collector. I think it's a more accurate term than Data::Scanner, since it doesn't really scan anything. It provides a small framework for collecting information. There's a Sys:: namespace that has things like Sys::Info. (Perhaps you could integrate with that.) Even though I will be using this in a system environment (trying to get information on the system - where Sys::Info might be useful), Data::Collector was built with flexibility in mind, and can be used for things not related to a system at all. You could write Data::Collector::Info::Dilbert to have a piece of info that fetches Dilbert comics, for example. You could have a Data::Collector::Info::MyCorpIncCustomerInfo and so on. Pretty much like plugins, so anything homegrown can be used with it without altering anything. I reckon that's why settling down to a specific type of data will not be right. Generally speaking, the more specialized the module, the more words-per-level the name should have. I reckon that is why Sys::Collector is probably not a good bet, since it's not necessarily system information. It gets even trickier since the data can be returned in various forms (XML, JSON, Data::Dumper, YAML, Perl objects - these I had already implemented as Data::Collector core serializing engines along the way). Thanks again, I appreciate the response, it helped me understand it more. Abstract data values seems like what I'm going for, so it looks like a good match.
You could write Data::Collector::Info::Dilbert to have a piece of info that fetches Dilbert comics. That's far from abstract in the sense that Data:: was intended for. I believe you're thinking of what I'd call generic, and you called it a framework yourself. On CPAN, frameworks (especially generic ones with plugins etc.) are encouraged to have brand names. Think Catalyst, Mojo, Smolder, Plack, Dist::Zilla, to name a few off the top of my head. Posting the docs may help. Tim.
Re: Distributing the CPAN
On Fri, Apr 02, 2010 at 04:49:44PM +0200, Aristotle Pagaltzis wrote: * Tim Bunce tim.bu...@pobox.com [2010-04-02 15:55]: So, for a cpan-git-mirror to update itself it only needs to do:

  cd cpan-all
  git pull
  git submodule update

The git pull of the cpan-all repo would be very fast as it's tiny. With 15,000(?) distributions = submodules = directories, it’s not *that* tiny. You don’t want to stuff those all in the top-level directory. Naturally. The cpan-all repo would be focussed on distributions not authors, so I figured a structure (for Foo-Bar and Foo-Bar-Baz distros) something like:

  /Foo
      /Bar
          /Foo-Bar.distro/...
          /Baz
              /Foo-Bar-Baz.distro/...

(Let's not bikeshed that at the moment - the key point is that a hierarchy is needed and that it be focussed on distros.) [...] you still get comparatively much churn for some still rather big directories, because any change to a subdirectory causes the entire chain of objects representing the directory levels above it to also change. I don’t know if that churn is bad enough to require a different solution. I doubt it, but we won't know unless someone tries it :) Hopefully someone with more git foo than me can sanity check it. Assuming I'm not talking nonsense, I think this has great potential. It would take some trickery and thought to do well, but it’s not obviously broken as designed. Great. Tim.
Re: Distributing the CPAN
On Thu, Apr 01, 2010 at 08:03:53PM +0300, Burak Gürsoy wrote: From: Tim Bunce [mailto:tim.bu...@gmail.com] On Behalf Of Tim Bunce Subject: Distributing the CPAN * cpanminus already supports installing from a git repo. * Over time the number of cpan-git-mirror's and cpan-git-server's could grow and the number of traditional CPAN ftp/rsync mirrors could fall. There is a part missing in this scenario. Mirroring gitPAN can be a good idea since it has the actual released distros [...] Yes, I was envisaging something like gitPAN. Though if this took off then moving the tarball-git import logic to the PAUSE server would probably be a good idea. Tim.
Distributing the CPAN
On Thu, Apr 01, 2010 at 12:39:27AM -0400, David Nicol wrote: On Wed, Mar 31, 2010 at 7:43 AM, Ask Bjørn Hansen a...@perl.org wrote: The main point here is that we can't use 20 inodes per distribution. So don't. How much reengineering would be needed to keep CPAN in a database instead of a file system? Random thoughts:

* If you squint a little you can view git as a database with excellent replication support.
* cpanminus already supports installing from a git repo.
* For backwards compatibility a simple perl web server could provide a classic CPAN http mirror 'view' over a git repo like gitpan. This cpan-git-server would create and serve up cached distro tarballs on demand. Someone could whip one up to work over gitpan as a proof of concept.
* The need for widespread mirroring is less significant than it was in years past. (Also, using git as the inter-mirror transport of source files means there'll be much less traffic between mirrors - effectively only the diffs between releases.)
* New approaches to replication, such as git, don't have to be supported by existing mirror providers. A new set of cpan-git-mirror providers could emerge.
* Any cpan-git-mirror provider running a cpan-git-server could be included in the list of mirrors used by existing installers.
* Over time the number of cpan-git-mirrors and cpan-git-servers could grow and the number of traditional CPAN ftp/rsync mirrors could fall.

Tim.
Re: Tidy up your PAUSE directories
On Tue, Mar 30, 2010 at 05:09:53PM +0200, Rene Schickbauer wrote: brian d foy wrote: It's time for Spring cleaning again. If you have ancient versions of modules sitting around in your PAUSE directory, consider letting them retire to BackPAN (http://backpan.cpan.org). They don't disappear from the world, but they don't inflate CPAN either. You don't have to do anything, but many mirror operators might be happy that you did. :) Just my one point nine periodic cents: If you've got ancient modules sitting around that you won't be updating anymore, be good and ask around if someone wants to take over. It would be handy if there was a way for authors to indicate that new maintainers are sought. Perhaps via the META.yaml/(.json) file. Tim. If the module is simple enough (and possibly an Acme module), you could also ask in beginners: maintaining a simple module with an existing user base is, in my opinion, a good learning experience! LG Rene Note: I've taken over three Acme modules, among them Acme-AutoColor. If you lack any non-standard colors (I just added octarine), want more features or something like that, just mail me. The other two, Acme::Innuendo and Acme::Mobile::Therbligs, need updating too (working on it); ideas are welcome as well.
Trimming the CPAN - Automatic Purging
Currently on PAUSE you have to explicitly delete old uploads. How about changing it so you have to explicitly KEEP old uploads that appear to have been superseded? PAUSE already has a mechanism to delete files at some future point in time. That's currently only used as part of a safety/sanity check to delay deletions that were manually invoked. I envisage PAUSE having a set of rules it would apply monthly, say, to automatically select files for purging. The rules might look something like this:

  File does not have a deletion date set, and
  File is older than 3 months, and
  File has a later upload
    - in the same directory
    - with the same major version
    - with a higher minor version
    - which is also more than 3 months old

(Naturally these are just suggestions. Let's not bikeshed the fine details yet. It's the approach we need to discuss first.) Files selected in this way would be scheduled to be deleted in a month and an email would be sent to the authors, just as if they'd selected the files for deletion via PAUSE. All that's needed, in addition to the above script, is a way for authors to indicate that a particular file shouldn't be purged. The database could use a far-future date for that, which the UI could present as a 'do not purge' checkbox against the file. Tim.
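For illustration only, the rule set above can be sketched as a few lines of Perl over a list of upload records. The record fields used here (dir, major, minor, age_months, keep) are invented for this sketch and don't correspond to the actual PAUSE schema:

```perl
use strict;
use warnings;

# Hypothetical sketch of the proposed monthly purge selection.
sub select_for_purging {
    my @files = @_;
    my @purge;
    for my $f (@files) {
        next if $f->{keep};              # author ticked 'do not purge'
        next if $f->{age_months} <= 3;   # not older than 3 months
        # superseded: a later upload in the same directory with the same
        # major version, a higher minor version, also over 3 months old
        my $superseded = grep {
               $_->{dir}        eq $f->{dir}
            && $_->{major}      == $f->{major}
            && $_->{minor}      >  $f->{minor}
            && $_->{age_months} >  3
        } @files;
        push @purge, $f if $superseded;
    }
    return @purge;
}

my @uploads = (
    { dir => 'T/TI/TIMB', major => 1, minor => 40, age_months => 12, keep => 0 },
    { dir => 'T/TI/TIMB', major => 1, minor => 46, age_months => 5,  keep => 0 },
);
my @purge = select_for_purging(@uploads);
print "$_->{major}.$_->{minor}\n" for @purge;   # 1.40
```

Only the 1.40 upload is selected: it has been superseded by 1.46, and both are more than 3 months old.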
Re: Why you don't want to use /dev/random for testing
The next version of NYTProf supports profiling some 'slow' perl opcodes. I've included the rand opcode for exactly this reason. Tim. On Tue, Nov 10, 2009 at 07:01:38PM -0800, cr...@animalhead.com wrote: Many of you know that the random number generator /dev/random is subject to delays when it has not accumulated enough entropy, which is to say randomness. These delays are said to be longer on Linux /dev/random than on some other Unices. They occur particularly after a system is booted, which I hear is a regular occurrence on some smoke-test systems. But I bet many of you will be surprised by the magnitude of the delays that can occur. Recently one perl tester's Linux system tested my module IPC::MMA version 0.58, which used /dev/random to drive testing, to produce report 5888084. It took 22320 wallclock seconds to complete the tests: 6.2 hours. A few days later the same system tested version 0.58001, which differs from 0.58 mainly in using /dev/urandom which is not subject to entropy delays. Report 5889682 shows that it took 5 wallclock seconds. Anyway, I found it interesting, Craig MacKenna
Re: Why you don't want to use /dev/random for testing
On Wed, Nov 11, 2009 at 10:22:23AM +0000, Tim Bunce wrote: The next version of NYTProf supports profiling some 'slow' perl opcodes. I've included the rand opcode for exactly this reason. I meant srand (though rand is also included, just in case). Though having just looked at the Configure code and relevant man pages I realise I was misguided. You can't (easily) configure perl to use a random function that uses /dev/random. Tim.
Re: [ANNOUNCE] Test::More/Builder 0.89_01 now with subtests
On Tue, Jun 23, 2009 at 04:07:55PM -0400, Michael G Schwern wrote: is_passing() As a side effect of this work, there is finally a way to tell if a test is currently passing: Test::Builder->is_passing(). It's really 'have I failed yet', but if you don't think about it too hard is_passing() makes sense. The name is up in the air. failed_yet() is one idea, which returns the number of tests which have failed (or violated the plan). FYI, I'd have found a failed_yet(), and subtests, very useful recently. The NYTProf tests run each test 8 times with various combinations of options (set via the NYTPROF env var). Sometimes it's hard to tell if a failure is related to certain combinations. So I added this quick hack to NYTProf's test library:

  sub count_of_failed_tests {
      my @details = Test::Builder->new->details;
      return scalar grep { not $_->{ok} } @details;
  }

and used it to produce this kind of report:

  # Test failures of test21-streval3 related to settings:
  #   compress: 0 => {fail 1,pass 7}, 1 => {pass 8}
  #      leave: 0 => {pass 8}, 1 => {fail 1,pass 7}
  #    savesrc: 0 => {pass 8}, 1 => {fail 1,pass 7}
  # use_db_sub: 0 => {pass 8}, 1 => {fail 1,pass 7}

(In this case the issue wasn't directly related to the settings but simply a side effect of the virtual machine being very slow http://www.nntp.perl.org/group/perl.cpan.testers/2009/06/msg4227689.html) So count this as a vote for $count_of_failures = $b->failed_yet(); Tim.
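As a self-contained illustration of that kind of hack: Test::Builder's details method returns one result hash per test run so far, so counting failures is a one-line grep. This sketch uses a private builder instance (Test::Builder->create) with its output captured into strings, so the demo doesn't pollute a real test run:

```perl
use strict;
use warnings;
use Test::Builder;

# Count the failures recorded so far by a Test::Builder instance.
sub count_of_failed_tests {
    my ($builder) = @_;
    return scalar grep { not $_->{ok} } $builder->details;
}

# A private builder; its TAP and diagnostics go into strings we ignore.
my $tb = Test::Builder->create;
$tb->output(\my $tap);
$tb->failure_output(\my $diag);
$tb->plan(tests => 2);
$tb->ok(1, 'passes');
$tb->ok(0, 'fails deliberately');

print count_of_failed_tests($tb), "\n";   # 1
```

In a real test file you'd call it against the singleton, Test::Builder->new, as in the quoted hack.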
Re: Distributing C code
You might find Memcached::libmemcached interesting. http://search.cpan.org/dist/Memcached-libmemcached/ It ships with a bundled copy of the libmemcached source. Makefile.PL not only runs the libmemcached configure script, but also make and make install. (The install directory is actually a temp directory in which the rest of the Makefile.PL expects to find the built libs.) Tim. On Mon, Mar 17, 2008 at 05:23:38PM +0000, Alberto Simões wrote: Hi. I have a C package that has a Perl module associated. At the moment it follows this file system structure:

  [/]
   |- configure.ac
   |- Makefile.am
   |- *.c
   |- perl
       |- file.pm
       |- Makefile.PL
       |- MANIFEST
       | ...

At the moment the approach is to run the configure script, which is hacked to produce a Makefile.am from the output of Makefile.PL. Also, the dist rule is hacked to follow the MANIFEST files. What I would like is to rewrite this configure system and base it on Perl. I want to make the whole package installable from CPAN, as all users of the C package use the Perl module interface. My question is: what is the best and easiest approach? Any module I might give a peek to steal some ideas? TIA ambs -- Alberto Simões - Departamento de Informática - Universidade do Minho Campus de Gualtar - 4710-057 Braga - Portugal
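To make the bundling idea concrete, here is a minimal sketch of what such a Makefile.PL can look like. The library name (libfoo), the paths, and the lack of error handling are all illustrative assumptions; real distributions (Memcached::libmemcached among them) do considerably more checking:

```perl
use strict;
use warnings;
use ExtUtils::MakeMaker;
use Cwd qw(abs_path);

my $src    = 'src/libfoo';              # bundled C library source tree
my $prefix = abs_path('built_libfoo');  # private install dir for the built lib

# Build and 'install' the bundled library into the private directory
# before writing the Makefile for the XS code that links against it.
system("cd $src && ./configure --prefix='$prefix' && make && make install") == 0
    or die "building bundled libfoo failed ($?)";

WriteMakefile(
    NAME => 'Lib::libfoo',
    INC  => "-I$prefix/include",
    LIBS => ["-L$prefix/lib -lfoo"],
);
```

The key trick is that the bundled library never touches the system: it is built and linked entirely inside the distribution's own build directory.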
Re: Naming convention for thin wrappers around C libfoo.so ?
On Tue, Dec 11, 2007 at 10:52:56AM -0600, Jonathan Rockway wrote: On Mon, 2007-12-10 at 23:29 +, Tim Bunce wrote: Re the choice of name for the low level library... Lib::Memcached Lib::memcached Lib::libmemcached My preference is for Lib::libmemcached because it emphasises the name of the library that it's a wrapper for. No shortage of opinions in this thread, but I thought I'd throw mine in anyway; Memcached::libmemcached. This name, to me, implies that it's a Memcached interface that uses libmemcached. It doesn't say raw or anything, but the lib part strongly implies that. YMMV JHMO :) You're right on both counts Jonathan... No shortage of opinions in this thread, and Memcached::libmemcached is a great name. Sold! Thanks Jonathan, and thanks to everyone who contributed. Tim.
Re: Naming convention for thin wrappers around C libfoo.so ?
On Sat, Dec 08, 2007 at 06:38:50PM +0100, Sébastien Aperghis-Tramoni wrote: Tim Bunce wrote: If there's a libfoo.so and I want to create a perl module/distribution that's just a very thin wrapper around libfoo, what should I call it?

  LibFoo
  Lib::Foo
  Lib::foo
  Lib::libfoo
  libfoo
  SomeCategory::Libfoo
  ???

Following the Category::Foo scheme: [...] Following the top-level namespace Foo scheme: [...] It's clear the Category::Foo scheme has the greater number of distributions. It's also clear there's no firmly established best practice here. I wanted to discuss the issue in the abstract first because the actual library/module is in a category/namespace with its own set of problems. I'm looking to build a very thin wrapper around libmemcached (http://tangent.org/552/libmemcached.html), a high-performance, feature-rich interface to memcached. The natural category would be Cache::, but that namespace is a bit of a mess. There are two big distributions (Cache and Cache::Cache) that have different APIs. Each ships with a bunch of cache modules (Cache::Memory vs Cache::MemoryCache). Then there are lots of other Cache::* distributions and modules that may or may not conform to one or the other API. Since the extension I want to implement would not itself implement either the Cache or Cache::Cache API, I'm reluctant to put it into the Cache:: namespace. I was thinking in terms of a low-level 'thin' extension called Lib::libmemcached with separate pure-perl modules implementing the Cache and Cache::Cache interfaces. So, here's the point: does anyone have any good objections to my establishing a new precedent by using the Lib:: namespace for this? (Or perhaps CLib:: or SysLib:: or ...) Or should I just add to the general mess in the Cache:: namespace? Tim.
Re: Naming convention for thin wrappers around C libfoo.so ?
On Mon, Dec 10, 2007 at 12:17:16PM +0100, Dominique Quatravaux wrote: Tim Bunce wrote: I was thinking in terms of a low-level 'thin' extension called Lib::libmemcached with separate pure-perl modules implementing the Cache and Cache::Cache interfaces. Surely you found out about Cache::Memcached and friends? Basically what you're proposing is a refactoring of these. Are the current maintainers of same aware of your efforts? If so, perhaps they could hand over some namespace slots to you. It's not as simple as it may seem at first. Cache::Memcached is pure-perl. That's an advantage for some people. Cache::Memcached::XS is compiled but links to the libmemcache library (not the libmemcache*d* library that I'll be using). There is a need for a perl wrapper for the libmemcached library, but that doesn't invalidate the needs of others for other APIs. So, here's the point: does anyone have any good objections to my establishing a new precedent by using the Lib:: namespace for this? +1, imho it makes good sense to have (some future version of) Cache::Memcached depend on Lib::Memcached. I'd rather say it makes good sense for the libmemcached library to be usable via multiple APIs, including an API compatible with Cache::Memcached, an API compatible with Cache, and an API compatible with Cache::Cache. Hence the two-level approach. Re the choice of name for the low level library... Lib::Memcached Lib::memcached Lib::libmemcached My preference is for Lib::libmemcached because it emphasises the name of the library that it's a wrapper for. (Think of libmemcached as a brand name. It's certainly distinct from memcached and even distinct from libmemcache.) I want people searching for libmemcached to also easily find Lib::libmemcached. My goal for this kind of thin wrapper is to be so thin that the documentation just defines a set of principles (type conversions etc) and then refers the user to the documentation for the function in the underlying library.
That's made easier in this case by libmemcached having a fairly well abstracted API. Tim.
Re: Naming convention for thin wrappers around C libfoo.so ?
On Mon, Dec 10, 2007 at 09:00:57PM +0100, Andreas J. Koenig wrote: On Mon, 10 Dec 2007 11:07:38 +, Tim Bunce [EMAIL PROTECTED] said: I'm looking to build a very thin wrapper around libmemcached (http://tangent.org/552/libmemcached.html) a high-performance feature-rich interface to memcached. But there is BRADFITZ/Cache-Memcached-1.24.tar.gz already?? libmemcached is (I'd guess) more than an order of magnitude faster than the pure-perl Cache::Memcached module. Also it's about to gain support for Consistent Hashing, which none of the other memcached APIs available from perl support. Tim.
Naming convention for thin wrappers around C libfoo.so ?
If there's a libfoo.so and I want to create a perl module/distribution that's just a very thin wrapper around libfoo, what should I call it?

  LibFoo
  Lib::Foo
  Lib::foo
  Lib::libfoo
  libfoo
  SomeCategory::Libfoo
  ???

Tim.
Re: [ANNOUNCE] Test::Builder/More/Simple 0.72
On Mon, Sep 24, 2007 at 08:54:12AM +0200, Andreas J. Koenig wrote: On Thu, 20 Sep 2007 03:00:59 -0700, Michael G Schwern [EMAIL PROTECTED] said: Most CPAN smoke testers wouldn't have caught it because even though they often run alphas they usually don't install them. So the interactions with dependencies would be lost. Not true. My smokes would have caught it. And I'm very grateful. Certainly helped with DBI testing. Tim.
Re: Who controls svn.perl.org?
On Mon, Apr 02, 2007 at 10:29:12AM -0400, David Golden wrote: As with most things relating to Perl infrastructure, I'd start by asking Ask: [EMAIL PROTECTED] Or, more generally, [EMAIL PROTECTED] Tim. Regards, David On 4/2/07, Jerry D. Hedden [EMAIL PROTECTED] wrote: Who do I need to contact to get access permission on svn.perl.org so I can add the 'threads' and 'threads::shared' modules to it?
Re: Dependency trees
On Thu, Jul 20, 2006 at 10:24:49PM -0500, Andy Lester wrote: Is there anything out there that will generate a tree of dependencies, probably based on META.yml? I figure I can pass in Mason, Test::WWW::Mechanize and Catalyst and get back a list of dependencies that those require. It would be the entire tree, so like so:

  Test::WWW::Mechanize
      Test::Builder
      WWW::Mechanize
          LWP::UserAgent
              HTTP::Response
          HTML::Form
          HTML::Tree
              Blah::Blah
      Test::LongString
          Test::Builder
          Blah::Blah

If it doesn't exist, I'll write it. I just don't want to reinvent the wheel. That's exactly what Module::Dependency will (now) do, except that the output is upside down since I'm using a parents and children analogy. For example, to show one level of parent dependencies and two levels of child dependencies for ExtUtils::MakeMaker, while only showing each module once, you could say:

  $ pmd_dump.pl -f=key -h -p=1 -c=2 -U ExtUtils::MakeMaker
  Carp Cwd Data::Dumper Exporter ExtUtils::MM ExtUtils::MY ExtUtils::MakeMaker::Config ExtUtils::Manifest File::Path File::Spec VMS::Filespec strict vars ExtUtils::MakeMaker ExtUtils::Installed /bin/cpan/instmodsh ExtUtils::MM_AIX ExtUtils::MM_Any ExtUtils::MM_BeOS ExtUtils::MM_Cygwin ExtUtils::MM_DOS ExtUtils::MM_OS2 ExtUtils::MM_Unix ExtUtils::MM_VMS ExtUtils::MM_Win32 ExtUtils::MM_NW5 ExtUtils::MM_OS2 ExtUtils::MM_Unix ExtUtils::MM_QNX ExtUtils::MM_UWIN ExtUtils::MM_VOS ExtUtils::MM_VMS ExtUtils::MM_Win32 ExtUtils::MM_Win95 Module::Build::Compat Module::Build::Base

Changing the -f=key to -f=filename and removing the -h would make each line look like:

  ExtUtils::MM_AIX filename: path/to/lib/site_perl/ExtUtils/MM_AIX.pm

See http://search.cpan.org/~timb/Module-Dependency/pmd_dump.pl Tim.
Module::Dependency 1.84
I needed some code to trawl through a directory tree, parsing perl modules and scripts to determine their dependencies. The closest existing CPAN code was Module::Dependency, but it fell short of what I needed. The original author (P Kent) has passed maintenance over to me. My latest release is:

  file: $CPAN/authors/id/T/TI/TIMB/Module-Dependency-1.84.tar.gz
  size: 52161 bytes
  md5: 90a83b2aee39f5d25060ebdb6cc3105d

With the changes I've made I've pretty much 'scratched my own itch' for the time being. (Most recently with a completely new query script - docs appended below.) But the core code is still basically as it was when I came to it. I'm posting this here to see if anyone would like to contribute to it. The code is in subversion on perl.org and I'll happily give write access to anyone interested. Some random things I'd like to see done:

- make items be real objects with methods etc
- use overloading to stringify to $obj->{key}
- move some pmd_dump.pl subs into object methods
- abstract the modules and give them proper APIs
- move to using SQLite with a proper schema, for example to handle multiple packages per file, not to mention supporting arbitrary queries
- look at using Graph::Easy to rewrite/replace Module::Dependency::Grapher

Tim.
=head1 NAME

pmd_dump.pl - Query and print Module::Dependency info

=head1 SYNOPSIS

  pmd_dump.pl [options] object-patterns

object-patterns can be:

  f=S   - Select objects where field f equals string S
  f=~R  - Select objects where field f matches regex R
  S$    - Same as filename=~S$ to match by file suffix
  S     - Same as key=S

For example:

  package=Foo::Bar       - that specific package
  package=~^Foo::        - all packages that start with Foo::
  filename=~sub/dir/path - everything with that path in the filename
  filename=~'\.pm$'      - all modules
  restart.pl$            - all files with names ending in restart.pl
  foo                    - same as key=foo

Fields available are:

  filename         - dir/subdir/foo.pl
  package          - strict
  key              - same as package for packages, or filename for other files
  filerootdir      - /abs/path
  depends_on       - Carp strict Foo::Bar
  depended_upon_by - Other::Module dir/subdir/foo.pl dir2/bar.pl Another::Module

Selected objects can be augmented using:

  -P=N  Also pre-select N levels of parent objects
  -C=N  Also pre-select N levels of child objects

Then filtered:

  -F=P  Filter OUT objects matching the object-pattern P
  -S=P  Only SELECT objects matching the object-pattern P

Then merged:

  -M    Merge data for selected objects into a single pseudo-object.
        Removes internally resolved dependencies. Handy to see all
        external dependencies of a group of files. The -P and -C flags
        are typically only useful with -M.

Then modified:

  -D    Delete dependencies on modules which weren't indexed but can
        be found in @INC

Then dumped:

  -f=f1,f2,...  Only dump these fields (otherwise all)

And for each one dumped:

  -p=N  Recurse to show N levels of indented parent objects first
  -c=N  Recurse to show N levels of indented child objects after
  -i=S  Use S as the indent string (default is a tab)
  -u    Unique - only show a child or parent once
  -k    Don't show key in header, just the fieldname
  -h    Don't show header (like grep -h), used with -f=fieldname
  -s    Sort by name
  -r=P  Show the relationship between the item and those matching P

Other options:

  -help  Displays this help
  -t     Displays tracing messages
  -o     The location of the datafile (default is /var/tmp/dependence/unified.dat)
  -r     State the relationship, if any, between item1 and item2 - both
         may be scripts or modules

=head1 EXAMPLE

  pmd_dump.pl -o ./unified.dat Module::Dependency::Info

Select and merge everything in the database (which removes internally resolved dependencies) and list the names of all unresolved packages:

  pmd_dump.pl -f=depends_on -h -M ''

Do the same but feed the results back into pmd_dump.pl to get details of what depends on those unresolved items:

  pmd_dump.pl -f=depended_upon_by `pmd_dump.pl -f=depends_on -h -M ''` | less -S

=cut
Re: Module naming advice
On Fri, May 26, 2006 at 11:26:22PM +0200, David Landgren wrote: Jeff Lavallee wrote: Hi all, before I upload a new module, I thought I'd make sure the namespace I intend to use makes sense. I've been working on a set of modules to make interacting with the next generation of Yahoo's marketing web services easier. The modules insulate the user from a lot of the SOAP::Lite details. Currently, I'm planning on calling it Yahoo::Marketing. Yahoo::Marketing.pm itself would just serve as a place holder (with POD) for the time being, with all the meat under that namespace (for example, Yahoo::Marketing::AccountService, Yahoo::Marketing::Account, etc). The POD-in-progress for Yahoo::Marketing is below. Any thoughts/comments/suggestions about the intended namespace would be greatly appreciated. There is already at least one module in WWW::Yahoo::*. I would suggest slotting your modules in at that level as well. The WWW:: space is overcrowded and confused. The WebService:: namespace was created for modules interfacing with web services. So WebService::Yahoo::* seems like the best home. Tim.
HOW-TO for publishing a perl module? (was: Publishing my DBI subclass)
On Tue, Sep 27, 2005 at 11:20:01AM -0400, Chuck Fox wrote: Tim, I am interested in publishing my subclass as some folks have contacted me concerning it after my reply to your story request. How do I go about publishing something like this? Is there a useful link or set of links that provides guideline information? The module is already in package format. Funnily enough I don't know where the current docs are (certainly perldoc perlmodlib seems rather dated). You could just get a PAUSE id at pause.perl.org, then run make dist in your module directory and upload the resulting .tar file. But I'm sure there's decent docs somewhere with more details of 'best practice', so I'm CC'ing this to module-authors@perl.org (not least because I'll be uploading a new module myself soonish). Tim.
Re: New Author
On Tue, Sep 27, 2005 at 11:28:57AM -0700, Terrence Brannon wrote: On 9/27/05, Chuck Fox [EMAIL PROTECTED] wrote: I have a subclassed DBI module. Subclassing DBI is tough! This is off-topic, but please don't spread this meme. Subclassing any factory-based set of classes just takes a little extra work. Just a little - and the DBI has made it just about as easy as possible. As per the DBI docs, here's an example that subclasses the DBI, including database and statement handles:

  package MySubDBI;

  use strict;
  use DBI;
  use vars qw(@ISA);
  @ISA = qw(DBI);

  package MySubDBI::db;
  use vars qw(@ISA);
  @ISA = qw(DBI::db);

  sub prepare {
      my ($dbh, @args) = @_;
      my $sth = $dbh->SUPER::prepare(@args)
          or return;
      $sth->{private_mysubdbi_info} = { foo => 'bar' };
      return $sth;
  }

  package MySubDBI::st;
  use vars qw(@ISA);
  @ISA = qw(DBI::st);

  sub fetch {
      my ($sth, @args) = @_;
      my $row = $sth->SUPER::fetch(@args)
          or return;
      do_something_magical_with_row_data($row)
          or return $sth->set_err(1234, "The magic failed", undef, "fetch");
      return $row;
  }

So instead of just one class you need two more, one with ::db appended and one with ::st appended. Hardly rocket science. Tim.
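For completeness, a sketch of how such a subclass gets used: calling connect() via the subclass (rather than via DBI directly) makes the DBI bless the returned handles into the subclass packages. This sketch uses DBD::ExampleP, the demo driver bundled with the DBI, so no real database is assumed; the empty subclass here is purely for illustration:

```perl
use strict;
use warnings;
use DBI;

# A do-nothing subclass, just to show how the handles get blessed.
{
    package MySubDBI;     our @ISA = ('DBI');
    package MySubDBI::db; our @ISA = ('DBI::db');
    package MySubDBI::st; our @ISA = ('DBI::st');
}

# Connecting via the subclass tells the DBI to use MySubDBI as the
# root class for the handles it creates. DBD::ExampleP ships with the
# DBI itself and 'selects' from the filesystem, so this needs no setup.
my $dbh = MySubDBI->connect("dbi:ExampleP:", "", "", { RaiseError => 1 });
my $sth = $dbh->prepare("select name from .");

print ref($dbh), "\n";   # MySubDBI::db
print ref($sth), "\n";   # MySubDBI::st
```

A method like the prepare override in the quoted example then runs automatically for every statement handle the application creates.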
Re: Perl6 goes where?
On Thu, Jul 28, 2005 at 05:47:51PM +0000, Smylers wrote: Andy Lester writes: I don't think we need another CPAN at all. There's nothing wrong with putting require 6; at the top of Makefile.PL and keeping everything in one happy CPAN. Some observations:
- CPAN is just an ftp mirror network
- PAUSE is not CPAN, it's just how modules get onto CPAN
- search.cpan.org is not CPAN, it's just one interface to it
There is a problem if it interferes with people trying to use identically named Perl 5 modules. If a Perl 6 DBI module exists, I posit that it would not be a good thing if this was what the CPAN or CPANPLUS modules automatically downloads. People are going to have to get used to being more specific about version numbers and even authors. Hopefully the tools (CPANPLUS.pm etc) will improve to assist them. nor if that's what the Cpan Search website presents as being the most recent version of DBI. In Perl 6 it's perfectly possible to have multiple modules from different authors with the same 'short name'. Even using them at the same time. The 'long name' of 'my' DBI would be something like DBI-1.46-TIMB. (The details of how use DBI; selects which of possibly many DBIs are installed haven't been fully worked out.) CPAN itself is just an ftp mirror network and the existing directory and file naming conventions might suffice. I'm sure changes, possibly quite deep changes, are needed to PAUSE, search.cpan.org, and people's expectations to accommodate this and other aspects of perl6. However both PAUSE and search.cpan.org are (I believe) maintained by single individuals who may not be willing or able to put in the time to make the required changes. CPAN, PAUSE and search.cpan.org grew and evolved together over quite a long period. I suspect we're in for a bumpy ride with perl6, with a conflict between people expecting the old tools to evolve quickly and others getting frustrated and creating alternatives. Tim.
Re: Should DSLIP codes be updated?
On Tue, Mar 29, 2005 at 03:06:33PM -0600, Andy Lester wrote: On Tue, Mar 29, 2005 at 07:16:11PM +0000, Robert Rothenberg ([EMAIL PROTECTED]) wrote: Some food for thought and debate. I'm wondering if the DSLIP codes [1] should be updated, if not revamped altogether. Note the following issues: Or thrown away entirely, along with the rest of the archaic idea of a module list. The Module List is dead. Module Registration is different. Tim.
Re: Should DSLIP codes be updated?
On Tue, Mar 29, 2005 at 04:14:46PM -0600, Andy Lester wrote: On Tue, Mar 29, 2005 at 11:06:37PM +0100, Tim Bunce ([EMAIL PROTECTED]) wrote: Or thrown away entirely, along with the rest of the archaic idea of a module list. The Module List is dead. Module Registration is different. Mea culpa. I'll rephrase. Or thrown away entirely, along with the rest of the archaic idea of module registration. :-) The time has come to recognize that CPAN is an unregulated free-for-all, and that the existing way of trying to wrap our heads around its contents hasn't scaled and needs to go away. The good parts (knowing who is authoritative for a module) need to get pulled out, and put into a new system. I don't mind if the current system gets fixed (which could be done, per my previous emails) or something new gets implemented. Ultimately what matters most is that something gets done by someone. Personally I've done my time, all ten years of it, as a "please give your modules a sensible name" advocate. I'm letting others do that now, to whatever extent they want. Tim.
Re: DBIx::DBH - Perl extension for simplifying database connections
On Fri, Dec 17, 2004 at 03:17:25AM +, Terrence Brannon wrote: Christopher Hicks [EMAIL PROTECTED] writes: Personally I'd like to see a solution based on AppConfig. We have our database configs in AppConfig. The config files look something like: I did that two years ago: http://search.cpan.org/author/TBONE/DBIx-Connect-1.13/lib/DBIx/Connect.pm However I got sick of AppConfig because I found it unwieldy. I then wrote a new module based on Config::ApacheFormat. Now, I decided to write something based on pure Perl data structures so that people could use whatever config module and nest/wrap/merge as they please. Hence DBIx::DBH Funny how some things change over time. I wonder what'll be next... ;-) Tim.
Re: DFA::StateMachine
On Wed, Dec 15, 2004 at 10:08:43AM -, Orton, Yves wrote: Ovid and I were getting fed up with the horrible DFA::Simple module, so I wrote a new module, DFA::StateMachine, to take its place in our work. But I'm no computer scientist, so I'm not even sure whether the name is right or if the module functions the way a DFA state machine is supposed to behave. [...] Maybe: FSA::Rules is better? There's a Computer::Theory::FSA module already: http://search.cpan.org/~frighetti/Computer-Theory-FSA-0.1_05/lib/Computer/Theory/FSA.pm but it doesn't look pleasant to use. FSA::Rules seems okay, but it doesn't express the simple utility of the module. I hate to suggest FSA::Simple but it almost seems appropriate here. Having said that it looks like an interesting module. I'd be curious as to what you use it for though. I'm looking for a simple FSA module to help manage states in a GUI. Some quick observations:
- I'd prefix some of the actions with on_, e.g. on_enter => sub { ... }, on_leave => sub { ... }, because they don't 'perform' the enter/leave, they're just triggered by it.
- The docs aren't very clear about when 'do' actions are run. They talk about "while in the state" and "in the state". Saying "after entering the state" would be clearer.
- Using undef to mean 'always' in the goto rules is confusing. Using 1 (or any true scalar) would seem more natural.
- Hooks for tracing execution would be helpful. Using empty methods and requiring users to sub-class would suffice.
- There's scope for refactoring into finer-grained methods.
- I'd suggest renaming check() to attempt_transition() and have it return the new state or undef (not croak) if it can't transition at the moment.
- Then define check() to be { $self->attempt_transition() || croak ... } But a better name than check() would also be good.
- Looks good! Any idea how soon it might reach CPAN once a name is chosen? Tim.
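To make the attempt_transition()/check() suggestion concrete, here is one possible shape for that pair of methods. This is a hypothetical sketch using an invented Tiny::FSA class, not the real interface of FSA::Rules or DFA::StateMachine:

```perl
use strict;
use warnings;

package Tiny::FSA;    # invented class name, for illustration only
use Carp qw(croak);

sub new {
    my ($class, %rules) = @_;
    # rules: state_name => [ [ $test_coderef, $next_state ], ... ]
    return bless { rules => \%rules, state => 'start' }, $class;
}

sub state { $_[0]{state} }

# Try each rule for the current state in order; move and return the
# new state if one fires, or return undef if none can fire right now.
sub attempt_transition {
    my $self = shift;
    for my $rule (@{ $self->{rules}{ $self->{state} } || [] }) {
        my ($test, $next) = @$rule;
        return $self->{state} = $next if $test->($self);
    }
    return undef;
}

# Strict variant, as suggested: croak instead of returning undef.
sub check {
    my $self = shift;
    return $self->attempt_transition
        || croak "no transition possible from state '$self->{state}'";
}

package main;
my $fsa = Tiny::FSA->new(
    start   => [ [ sub { 1 }, 'running' ] ],  # always fires
    running => [ [ sub { 0 }, 'done'    ] ],  # never fires
);
print $fsa->check, "\n";                                          # running
print defined $fsa->attempt_transition ? "moved" : "stuck", "\n"; # stuck
```

The point of the split is that callers who merely want to poll can use attempt_transition() in a loop, while callers who consider a stuck machine a bug get the croaking check().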
Re: DBIx::DBH - Perl extension for simplifying database connections
On Tue, Dec 07, 2004 at 11:51:41AM -0600, Chris Josephes wrote: Either way, does this traffic need to be replicated on both dbi-users and module-authors? I would think the DBI list would supersede the other. I agree. Can anyone replying to this thread please remove [EMAIL PROTECTED] from the CC list. Tim.
Re: DBIx::DBH - Perl extension for simplifying database connections
On Wed, Dec 01, 2004 at 09:56:01AM -0500, John Siracusa wrote: On Wed, 1 Dec 2004 09:46:24 +0000, Tim Bunce [EMAIL PROTECTED] wrote: Do you generally pass URLs around as a string or broken up into a hash? If they had different formats for different consumers, I would. (And even today, I use my own URI objects when I know I'll have to do any significant amount of manipulation.) I think this module is definitely useful. I already store my DSNs in hashes and assemble the pieces as necessary depending on the driver. Lots of people do, it seems, but I'm not getting much background about why. FWIW, the reason I'm digging here is because I agree there may be some value in the DBI supporting something along these lines, but I need a better understanding of the underlying issues. More real-world examples would help. It'll always come down to the issue of "why not store complete DSNs?" and so far that's not been well covered by the feedback I've got. Tim.
Re: DBIx::DBH - Perl extension for simplifying database connections
On Wed, Dec 01, 2004 at 06:43:51PM -, Orton, Yves wrote: It'll always come down to the issue of "why not store complete DSNs?" and so far that's not been well covered by the feedback I've got. Duplication of data in multiple places is the answer I think. The more DSN strings you have the more needs to be changed later on, and the bigger the chance that those changes include errors. Having a single transparent interface would reduce that error (and the frustration associated with it). Can you modify the example you gave earlier to show how you'd use DBIx::DBH (or whatever it's called :) to do the same thing? Tim.
Re: Module Class::Stringify?
On Sun, Nov 14, 2004 at 09:21:17AM -0500, Robert Rothenberg wrote: Reference: some code for testing if an argument is string-like:

    sub is_string_like {
      return 1,                     # <= why the comma?
        unless (defined $_[0] && ref $_[0]);
      # We don't evaluate whether the . and .= operators are
      # supported, since for many applications that use strings, the
      # comparison operators are the most important.
      eval {
           ($_[0])
        && (($_[0] cmp $_[0]) == 0)
        && ($_[0] eq $_[0])
        && (!($_[0] ne $_[0]))
        && (!($_[0] le $_[0]))
        && (!($_[0] ge $_[0]))
        && (!($_[0] lt $_[0]))
        && (!($_[0] gt $_[0]))
      } and return 1;
      # Testing for behavior related to copy constructors is another issue
      # to be determined

This all seems overkill. Isn't something like this (untested) enough?:

    sub is_string_like {
        return 1 unless ref $_[0];          # returns 1 for undef
        return UNIVERSAL::can($_[0], '(""');
    }

Tim.
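For comparison, here is a runnable version of that simplified check. Stringy is an invented class for the demonstration, and overload::Method (documented in the overload pod) is used as a sturdier way to ask whether an object overloads stringification than peeking at the '(""' symbol directly:

```perl
use strict;
use warnings;
use overload ();                # load overload.pm for overload::Method

package Stringy;                # invented class that overloads ""
use overload '""' => sub { 'i pretend to be a string' }, fallback => 1;
sub new { bless {}, shift }

package main;

sub is_string_like {
    return 1 unless ref $_[0];                      # plain scalars and undef
    return overload::Method($_[0], '""') ? 1 : 0;   # stringify overload?
}

print is_string_like('hello')      ? "yes" : "no", "\n";  # yes
print is_string_like(Stringy->new) ? "yes" : "no", "\n";  # yes
print is_string_like([1, 2, 3])    ? "yes" : "no", "\n";  # no
```

With fallback => 1 the comparison operators the longer version probes for are also synthesized from the overloaded '""', which is why the single overload check covers them.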
Re: MySQL::Backup?
On Tue, Oct 26, 2004 at 07:32:29PM -0400, Christopher Hicks wrote: On Tue, 26 Oct 2004, _brian_d_foy wrote: In article [EMAIL PROTECTED], Smylers [EMAIL PROTECTED] wrote: I think the opposite -- that DBIx:: should be for things that are generally usable with DBI, where the I is independent. Things such as backing up tend not to be database-independent. if we work it right, DBIx::Backup could be independent, while DBIx::Backup::MySQL implements the MySQL bits. :) Exactly. If DBIx::Backup::MySQL has a clean interface it might even inspire a generic DBIx::Backup and become the MySQL implementation of DBIx::Backup and spark a revolution in database administration. :) DBIx isn't for this kind of thing (frameworks of modules working together). Modules should generally be named for what they do, not how they do it. So DBIx in a name is only appropriate when what it does is closely tied to the DBI. If anyone wants to start a database-independent backup project, using 'plug in' modules for different databases, then they ought to use a new top-level namespace like DatabaseBackup::* Tim.
Re: Finding prior art Perl modules (was: new module: Time::Seconds::GroupedBy)
On Wed, Jul 14, 2004 at 06:30:59PM +0100, Fergal Daly wrote: On Wed, Jul 14, 2004 at 06:08:16PM +0100, Leon Brocard wrote: Simon Cozens sent the following bits through the ether: The searching in search.cpan.org is, unfortunately, pretty awful. At some point I plan to sit down and try using Plucene as a search engine for module data. I thought that would be a good idea too, so I tried it. It works *fairly* well. http://search.cpan.org/dist/CPAN-IndexPod/ Does META.yaml have a place for keywords? It would be nice if it did and if search.cpan.org indexed it. That would mean that it would be no longer necessary to name your module along the lines of XML::HTTP::Network::Daemon::TextProcessing::Business::Papersize::GIS so that people can find it. That's what the Description field is for. Tim.
Future of the Module List
On Wed, Jul 14, 2004 at 12:40:03PM -0500, Dave Rolsky wrote: On Wed, 14 Jul 2004, A. Pagaltzis wrote: * Dave Rolsky [EMAIL PROTECTED] [2004-07-14 19:26]: Some of them _are_ registered, but that document you're referring to hasn't been regenerated since 2002/08/27! I wish the CPAN folks would just remove it if it won't be generated regularly. Does anyone else here think that the list should probably just be done away with entirely? The _file_ should go, yes. The concept of registering modules is different. Given the fact that most authors seem to not register their stuff, the [EMAIL PROTECTED] list is slow as heck, and that the web pages never get regenerated, yes. Those are all fixable. Volunteers? The real issues are bigger and deeper. I've appended a couple of emails. Tim. On Mon, Feb 16, 2004 at 10:37:12AM +1300, Sam Vilain wrote: On Mon, 16 Feb 2004 01:32, Tim Bunce wrote: I'd like to see a summary of what those needs of the community are. (Maybe I missed it as I've not been following as closely as I'd have liked. In which case a link to an archived summary would be great.) It's very important to be clear about what the problems actually are. I don't really want to argue this side of things, I think that the problems pretty much speak for themselves. But I hate unspoken consensus, so let me suggest a few from my perspective; this applies to the combined Perl 5 modules list / using search.cpan.org: I'll play devil's advocate here and point out some alternative remedies for the problems. By doing so I'm _not_ trying to detract from your suggestion, which I like, I'm just trying to show how existing mechanisms could be improved incrementally. a) searching for modules for a particular task takes a long time and unless you get your key words right, you might not find it at all. Refer the recent Mail::SendEasy thread. Calls for a richer set of categories and cross-links of some kind. (Editorial content alone is basically just more words to a search engine.) 
b) it is very difficult to find good reviews weighing the pros and cons of similar modules; they exist, but are scattered. CPAN ratings was a nice idea, but has too many First Post! style reviews to be useful in its current form IMHO. Argues for moderation of reviews and a minimum review length. A was this review helpful mechanism would also help to bring better reviews to the top. Also the search.cpan.org should not just show the overall rating, it should show the underlying three individual ratings (docs, interface, ease of use). c) it is nearly impossible to tell which modules are the wisest choices from a community size point of view; using modules that are more likely to fall out of maintenance is easy to do. Argues for more stats. I think useful *relative* download stats could be extracted from a sample of CPAN sites. Also search.cpan.org could provide relative page-*view* stats for modules. d) some great modules are not registered (I am referring of course to such masterpieces as Pixie, Heritable::Types, Maptastic :), Spiffy, Autodia, Want ... and those are just the ones missing in my bag of tricks) Argues for fixing the registration process. Originally the Module List had two goals: 1: to help people find perl modules for a particular task. 2: to provide a second-tier of modules above the 'anarchy' of people uploading half-baked ideas with half-baked names. Honourable goals, which it solved adequately for a period of time, and full credit where it is due. But now let's look at where we are. We've got masses of modules, truckloads of categories and thousands of contributors. This task cannot be left in the hands of a handful of hackers, no matter how much awe they inspire, they probably still have lives and day jobs ;) The registration process can, and should, be automatic for any modules for which no one objects. You'd apply and RT would automatically register if no one commented on the application. 
I will maintain that the current format, or even simply adding some more fields to the database is *not* enough information to give uninformed people looking for a module the information to make an informed decision. It is my gut feeling that only editorial content, managed by people who are experts in the field, will truly perform this task - and that to gain maximum support, that it should be included in the content mirrored along with the rest of cpan.org. I agree that comparative editorial reviews would be very valuable for Goal 1 above. It wouldn't address Goal 2 effectively at all. I think we're mature enough as a community to be able to produce this content without it dissolving into flamewars or being too one-sided. In particular, I really think that as little red tape as possible should be applied to this system. Let's just set up a few
Re: ANNOUNCE: WWW::Map 0.01
On Sat, Jul 10, 2004 at 08:46:31AM -0500, Dave Rolsky wrote: On Sat, 10 Jul 2004, Smylers wrote: How about WebService::Map? Search the [EMAIL PROTECTED] archives for WebService and you'll see that there have been recent attempts to distinguish between modules that help implement generic webby things (in WWW::) from those which are an interface to a service which just happens to be provided by websites (in WebService::). Yeah, taking a look at the existing WebService modules it seems like this might be a better top level namespace. My only reservation is that unlike all the others, this module does not actually interact with the websites, it simply generates a link. Not a major distinction in this case. WebService::* seems fine. As for the module name, how about: WebService::LocationMapLink Tim.
Re: Namespace conventions
On Tue, May 18, 2004 at 07:54:21PM +0100, Orton, Yves wrote: The 'ex::' namespace is intended for experimental modules afaik. ex:: is for experimental *pragmas*. Tim. -Original Message- From: Erik Norgaard [mailto:[EMAIL PROTECTED] Sent: 18 May 2004 20:52 To: [EMAIL PROTECTED] Subject: Namespace conventions Hi, I just browsed through the namespace guidelines and the concern about avoiding the namespace cluttering up. So I was wondering if there are some reserved namespace tags such as for mime-types: X- for experimental, maybe P- for proprietary or private. Of course this would just mean that the X- space would clutter up, but the idea is that if you use X-modules then you asked for it :-) As modules mature they could be adopted into the registered namespace, dropping the X-. As it is right now module registration is an all or none. With the above, new modules could appear and mature, merge with others or disappear without getting into the registered namespace only to become obsolete or unsupported. I also ask to avoid unintentional conflicts when brewing my own modules, such as File::X-Backup instead of File::Backup. Just a thought, cheers, Erik PS: sorry - I'm new on the list :-)
Re: pure perl Zlib
On Sun, Feb 15, 2004 at 09:51:18PM +0000, Nicholas Clark wrote: On Mon, Feb 16, 2004 at 10:43:27AM +1300, Sam Vilain wrote: On Mon, 16 Feb 2004 10:19, Nicholas Clark wrote: Autrijus suggested Compress::Zlib::PurePerl, which I think is reasonable. ...but it doesn't use Zlib! :) Compress::Gzip? But it doesn't compress. Compress::Gunzip? Uncompress::Gzip (Neither really meant as serious suggestions) Problem is that it's an emulation of bits of Compress::Zlib's interface, so I feel that a clue should be in the name. As should the bit that it's pure perl, as otherwise it's like huh, why another front end to some C code? I agree. Compress::Zlib::PurePerl seems okay, but there's really no need for the extra level. Compress::ZlibPP would be fine. (It seems that 'PP' is becoming a convention for 'pure perl'.) Sure, it doesn't compress today, but it might in future. (Meanwhile it could emulate the whole API and just return errors when interfaces it doesn't support are called.) Tim.
Re: Module lists: defining the problem, restating the goals [was Re: OK, so we've decided...]
On Mon, Feb 16, 2004 at 10:37:12AM +1300, Sam Vilain wrote: On Mon, 16 Feb 2004 01:32, Tim Bunce wrote: I'd like to see a summary of what those needs of the community are. (Maybe I missed it as I've not been following as closely as I'd have liked. In which case a link to an archived summary would be great.) It's very important to be clear about what the problems actually are. I don't really want to argue this side of things, I think that the problems pretty much speak for themselves. But I hate unspoken consensus, so let me suggest a few from my perspective; this applies to the combined Perl 5 modules list / using search.cpan.org: I'll play devil's advocate here and point out some alternative remedies for the problems. By doing so I'm _not_ trying to detract from your suggestion, which I like, I'm just trying to show how existing mechanisms could be improved incrementally. a) searching for modules for a particular task takes a long time and unless you get your key words right, you might not find it at all. Refer the recent Mail::SendEasy thread. Calls for a richer set of categories and cross-links of some kind. (Editorial content alone is basically just more words to a search engine.) b) it is very difficult to find good reviews weighing the pros and cons of similar modules; they exist, but are scattered. CPAN ratings was a nice idea, but has too many First Post! style reviews to be useful in its current form IMHO. Argues for moderation of reviews and a minimum review length. A was this review helpful mechanism would also help to bring better reviews to the top. Also the search.cpan.org should not just show the overall rating, it should show the underlying three individual ratings (docs, interface, ease of use). c) it is nearly impossible to tell which modules are the wisest choices from a community size point of view; using modules that are more likely to fall out of maintenance is easy to do. Argues for more stats. 
I think useful *relative* download stats could be extracted from a sample of CPAN sites. Also search.cpan.org could provide relative page-*view* stats for modules. d) some great modules are not registered (I am referring of course to such masterpieces as Pixie, Heritable::Types, Maptastic :), Spiffy, Autodia, Want ... and those are just the ones missing in my bag of tricks) Argues for fixing the registration process. Originally the Module List had two goals: 1: to help people find perl modules for a particular task. 2: to provide a second-tier of modules above the 'anarchy' of people uploading half-baked ideas with half-baked names. Honourable goals, which it solved adequately for a period of time, and full credit where it is due. But now let's look at where we are. We've got masses of modules, truckloads of categories and thousands of contributors. This task cannot be left in the hands of a handful of hackers, no matter how much awe they inspire, they probably still have lives and day jobs ;) The registration process can, and should, be automatic for any modules for which no one objects. You'd apply and RT would automatically register if no one commented on the application. I will maintain that the current format, or even simply adding some more fields to the database is *not* enough information to give uninformed people looking for a module the information to make an informed decision. It is my gut feeling that only editorial content, managed by people who are experts in the field, will truly perform this task - and that to gain maximum support, that it should be included in the content mirrored along with the rest of cpan.org. I agree that comparative editorial reviews would be very valuable for Goal 1 above. It wouldn't address Goal 2 effectively at all. I think we're mature enough as a community to be able to produce this content without it dissolving into flamewars or being too one-sided. 
In particular, I really think that as little red tape as possible should be applied to this system. Let's just set up a few CVS / subversion accounts, to edit content that is auto-published to the www.cpan.org site, with a few disclaimers chucked on the bottom. LARTing the naive and troublesome as appropriate. I agree. This all seems very similar to the DMOZ.org project that maintains reviews of millions of web sites: 6,095,104 sites - 61,277 editors - 551,043 categories That's a mature and very successful model (used by the Google directory etc) that's well worth learning from. The text file is out of date. The underlying database isn't: [...] Please work with the PAUSE system, and Andreas and myself, to enhance what already exists (which includes a UI for module authors to pick which category they want the module in). I'd be honoured to. I think that the plan you propose would be an excellent
Re: OK, so we've decided that the right modules are too hard to find.
On Sun, Feb 15, 2004 at 03:56:39PM +1300, Sam Vilain wrote: ONCE upon a time, the Perl 5 modules list was an excellent resource for those seeking to do anything non-core with Perl. However, it has not kept pace adequately with the needs of the community. I'd like to see a summary of what those needs of the community are. (Maybe I missed it as I've not been following as closely as I'd have liked. In which case a link to an archived summary would be great.) It's very important to be clear about what the problems actually are. I propose what we do about this situation is:
- expand the modules list into a new section of the www.cpan.org site, by:
- deciding if the current categories are good enough 'in this day and age'; using the current list as a starting point, go through each category and decide whether it is a useful grouping any more. This will ideally also involve individuals with experience with other languages trawling over the appropriate CPAN equivalents, ie PEAR, RAA, etc, and providing nice, *brief*, informative reports on their structure. We will then hopefully have a half-decent list of categories. This process should be quick, perhaps reporting back its progress to the list every few days until there is a general consensus. Okay.
- encourage curators to step forward, or groups of curators, for each category; possibly even create mailing lists for people with a general interest in the technology in that category; to field questions about naming for new modules to fit into each category. These curators must have the power to update the contents of the relevant portions of the www.cpan.org site. The idea would be to have each category something like the http://poop.sourceforge.net/ site - but on a standard template to lend it more credibility. Ideally with space for user feedback. I would hesitate to seed the listing of actual modules on the current long Perl 5 modules list. 
Factors such as the usage that the module has seen, whether long standing bugs were ever fixed, whether a better module has come along since and gained widespread acceptance, etc need to be taken into consideration. I disagree. You're mixing different goals that ought to be kept separate. Originally the Module List had two goals: 1: to help people find perl modules for a particular task. 2: to provide a second-tier of modules above the 'anarchy' of people uploading half-baked ideas with half-baked names. You could argue that Goal 1 is now largely addressed by searching search.cpan.org (although that's certainly not without problems). However the limited integration with the Module List's hierarchical categories and other metadata is unfortunate. I think that's partly due to Graham Barr's view (as I remember it, I might be wrong here) that the Module List was too incomplete relative to the whole of CPAN to be useful, and he's right. So let's fix that - see below. Goal 2 is still important - as you can see from archives of [EMAIL PROTECTED] when there have been discussions leading to a better choice of module name. But [EMAIL PROTECTED] has its own set of problems (that I hope will be addressed when it's integrated with RT so requests don't fall between the cracks). Many popular modules are missing from the list altogether. The text file is out of date. The underlying database isn't: http://www.cpan.org/modules/03modlist.data.gz (Though it is incomplete because not enough authors take the steps to get listed, but that's a different set of issues.) Please work with the PAUSE system, and Andreas and myself, to enhance what already exists (which includes a UI for module authors to pick which category they want the module in). Here's what I'd suggest: 0. Don't underestimate how difficult and subtle naming issues can be. 1. 
Split and extend the list of categories aiming for about 2 to 3 times the number to keep it manageable, perhaps grouping the categories into a two-level hierarchy (but the top level is just for human use, the 2nd-level names should be self-descriptive without the 1st-level names). 2. Map modules from the old categories to the new ones. This needs to end up as a sequence of SQL statements that can be run on the PAUSE mysql db to update the category number for each module that has one. 3. Write a script to generate http://www.cpan.org/modules/00modlist.long.html from http://www.cpan.org/modules/03modlist.data.gz or ideally from the underlying mysql database (with simpler formatting and focussing on just the modules) and give it to Andreas for PAUSE to use automatically. 4. http://www.cpan.org/modules/03modlist.data.gz is actually a perl module called CPAN::Modulelist but
Re: New module Mail::SendEasy
On Thu, Jan 29, 2004 at 12:23:51PM -, Orton, Yves wrote: I think MIME::Lite isn't in the Module List so the name wasn't peer-reviewed. The peer-review process offered by [EMAIL PROTECTED] certainly isn't perfect, but I do believe it's very valuable. Unless I read the file incorrectly MIME::Lite is indeed in the module list, at least I see it there. Afaik it's been in the wild since at least '98, if not earlier. (I don't know the full history, I am only the module maintainer) Ah, thanks. I'd missed it. (And I wish search.cpan.org made it easier to tell if a module is registered.) Also, I believe that MIME::Lite quite likely predates the peer review process, it certainly predates these newfangled root level names like Mail:: and such. There's always been a review process for the Module List. But it's always hard to look several years into the future when trying to see how namespaces might evolve. Tim. I would argue that MIME isn't actually that bad a name. MIME is the protocol for the contents of a mail. Not related to how mails are received, stored, searched, or transmitted. The fact that MIME::Lite knows how to talk to modules that know how to transmit is separate from the fact that it intends to manage MIME content mails. Since a mail need not be MIME there is no reason for it to be called Mail:: or whatever. Anyway, if someone wants to argue that I should put MIME::Lite into a different namespace I'll consider it. It wouldn't be too difficult to also have it called Mail::MIME::Lite or whatever. yves
Re: Possible module for 'safer' signal handling....
On Sun, Jan 11, 2004 at 10:15:20PM -0500, Lincoln A. Baxter wrote: On dbi-users... I responded to Tim Bunce's most helpful suggestions as follows... He suggested Sys::SigAction as a name. Since the module is all about wrapping up system (POSIX) sigaction() calls, I like it! On Sun, 2004-01-11 at 15:50, Tim Bunce wrote: [snip] It might also be worth adding some mechanism to integrate with Sys::Signal http://search.cpan.org/src/DOUGM/Sys-Signal/Signal.pm I took a look at this. It is a little bit of perlxs glue which uses perl's internals to set signal handlers, and have them restored when the object returned gets destroyed as it goes out of scope. It does not help us with our problem however, as it just does what perl does. I see no real benefit to this over:

    eval { local $SIG{ALRM} = sub { ... }; }

Perhaps there was a time when the above trick was not well known, and Sys::Signal was implemented to do that. After looking at the truss and strace outputs for the way the above code works, I would say that Sys::Signal is pretty unnecessary. The Sys::Signal docs suggest the key feature is with the added functionality of restoring the underlying signal handler to the previous C function, rather than Perl's. Perhaps perl didn't do that before. Um, or perhaps the only way to do that from perl is to use local() but there are times where local() doesn't provide the right lifespan. Either way your Sys::SigAction is sufficient. The only thing it's lacking that Sys::Signal has is the automatic restoration of the old value when the object is destroyed. It would be trivial to add a variant of sig_set_action() to do that for those that want it. Something along the lines of:

    sub sig_set_action_auto_restore {
        my $class = shift;
        return bless sig_set_action(@_), $class;
    }
    sub DESTROY { shift->sig_set_action(); }

Tim.
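For readers unfamiliar with the local $SIG{ALRM} trick discussed above, here is a self-contained sketch of the usual timeout idiom (the with_timeout name and the 1- and 5-second values are arbitrary choices for illustration):

```perl
use strict;
use warnings;

# Run a code ref with a time limit; returns its result, or undef if
# it timed out. Any other exception from the code is rethrown.
sub with_timeout {
    my ($seconds, $code) = @_;
    my $result = eval {
        local $SIG{ALRM} = sub { die "timeout\n" };  # "\n" stops the " at ..." suffix
        alarm $seconds;
        my $r = $code->();
        alarm 0;             # cancel the pending alarm on success
        $r;
    };
    alarm 0;                 # make sure no alarm outlives the eval
    if (my $err = $@) {
        return undef if $err eq "timeout\n";
        die $err;            # not our timeout: propagate
    }
    return $result;
}

print defined with_timeout(5, sub { 42 })          ? "ok" : "timed out", "\n";  # ok
print defined with_timeout(1, sub { sleep 3; 42 }) ? "ok" : "timed out", "\n";  # timed out
```

Note that perl's deferred 'safe' signals may not interrupt code blocked inside some XS or system calls, which is part of the gap Sys::SigAction's use of POSIX sigaction() is meant to address.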
Re: Simple multi-level tie
On Wed, Dec 17, 2003 at 02:00:23PM -0600, Andrew Sterling Hanenkamp wrote: I would like the ability to store a complicated record inside of a DBM file. I looked in the usual places and perldoc -q DBM gives me: Either stringify the structure yourself (no fun), or else get the MLDBM (which uses Data::Dumper) module from CPAN and layer it on top of either DB_File or GDBM_File. Therefore, I went in search of a solution to automate the stringification. I didn't find anything other than MLDBM for doing something like this and it seems like a little much for my purposes. All I need is something like this: $hash{name} = "value1:value2:value3:..."; I've done some work with Tie::Memoize and really like its interface, so I decided to write something like it for wrapping hashes. Thus, Tie::HashWrapper was born. It may be used like this: tie my %wrappee, 'AnyDBM_File', ...; tie my %wrapper, 'Tie::HashWrapper', \%wrappee, -deflate_value => sub { join ':', @{$_[0]} }, -inflate_value => sub { split /:/, $_[0] }; $wrapper{name} = [ 'value1', 'value2', 'value3' ]; I'd add a -1 to the split and note in the docs that the example won't handle undefs. Does Tie::HashWrapper seem reasonable? Or does anyone have a better name? Have I gone off the deep-end again and rewritten something that already exists and I missed it? I didn't like it at first but the more I try to think of alternatives, and understand the purpose and use, the more I like it. A key point is that although it was created for inflating/deflating values, there's no need to use it for that. It does 'wrap' access to the underlying hash and that wrapping can be used for other purposes, including logging or recording where/how the hash is used. It's similar in some ways to: http://search.cpan.org/~pmqs/BerkeleyDB-0.25/BerkeleyDB.pod#DBM_Filters And I think it would be worth making it more similar.
Consider tie my %wrapper, 'Tie::HashWrapper', \%wrappee, store_key => sub { lc(shift) }, store_value => sub { join ':', @{$_[0]} }, fetch_key => sub { lc(shift) }, fetch_value => sub { split /:/, $_[0], -1 }; Tim. p.s. I trust your tests cover things like FIRSTKEY, NEXTKEY, DELETE etc.
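The wrapper idea discussed here is small enough to sketch in full. Below is a minimal, self-contained tie class in the spirit of the proposed Tie::HashWrapper (the `::Sketch` package name and the `deflate`/`inflate` option names are illustrative, not the module's actual API): every STORE passes the value through a deflate callback before it reaches the underlying hash, and every FETCH passes it back through an inflate callback.

```perl
use strict;
use warnings;

package Tie::HashWrapper::Sketch;    # hypothetical name, after the module discussed

# Delegate all access to an underlying hash, filtering values on the way.
sub TIEHASH {
    my ($class, $wrappee, %opt) = @_;
    return bless { hash => $wrappee, %opt }, $class;
}
sub STORE    { my ($s, $k, $v) = @_; $s->{hash}{$k} = $s->{deflate}->($v) }
sub FETCH    { my ($s, $k)     = @_; $s->{inflate}->($s->{hash}{$k}) }
sub EXISTS   { exists $_[0]->{hash}{$_[1]} }
sub DELETE   { delete $_[0]->{hash}{$_[1]} }
sub FIRSTKEY { my $h = $_[0]->{hash}; keys %$h; each %$h }  # reset, then start iterating
sub NEXTKEY  { each %{ $_[0]->{hash} } }

package main;

my %wrappee;                         # in real use this would itself be tied to a DBM
tie my %wrapper, 'Tie::HashWrapper::Sketch', \%wrappee,
    deflate => sub { join ':', @{ $_[0] } },
    inflate => sub { [ split /:/, $_[0], -1 ] },  # -1 keeps trailing empty fields
;

$wrapper{name} = [ 'value1', 'value2', 'value3' ];
print "$wrappee{name}\n";            # the flattened string lands in the wrappee
print "$wrapper{name}[1]\n";         # reading through the wrapper inflates it again
```

As the thread notes, nothing forces the callbacks to serialize: the same hooks could log or record every access instead.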
Re: BTRIEVE::*
On Thu, Dec 18, 2003 at 03:49:07PM +0100, Steffen Goeldner wrote: I'm still open for namespace suggestions. The following list BTRIEVE::File BTRIEVE::ISAM::File BTRIEVE::ISAMFile BTRIEVE::IsamFile with descending preference comes to mind. Assuming ISAM is implied by BTRIEVE there's no need to include that. A hint that it provides i/o might be good: BTRIEVE::FileIO otherwise BTRIEVE::File seems okay, if a little minimal. Tim.
Re: more on Ivy.pm [was: Ivy.pm: name change]
On Wed, Nov 26, 2003 at 09:10:11AM +0100, Christophe MERTZ wrote: On Tue, 2003-11-25 at 18:04, Tim Bunce wrote: (I'm disappointed the module isn't an interface to the C library.) Don't be disappointed... The gains would be minimal in our view. Not if you're trying to process hundreds or thousands of messages per second. If I ever have the time I might look at doing it myself. Tim.
Re: Ivy.pm: name change? to upload on CPAN
Sadly it turns out to be not quite that trivial because the interface has this kind of style: $obj->foo($bar); Ivy::foo($bar); But even that's not a big deal. If the functions are exported then do things like: use base 'Exporter'; our @EXPORT = @Net::Ivy::EXPORT; if it's not then do something like *$_ = \&{"Net::Ivy::$_"} for (qw(foo bar baz func names)); Or do both. Either way, it's just a bit of plumbing. (If the interface has deeper issues that'll cause problems then I'd be tempted to say it's broken and Ivy.pm should have more hacks for legacy support and Net::Ivy should have a better interface design.) Tim. On Tue, Nov 25, 2003 at 04:07:53PM -0500, Lincoln A. Baxter wrote: As if Tim's opinions don't carry a huge amount of weight already, I will add my $0.02 and agree with him 100%. Put it in the Net:: name space as Net::Ivy, and provide the 3 line rewrapper for your internal use; that is not at all complicated or error prone. On Tue, 2003-11-25 at 12:04, Tim Bunce wrote: It's a single module implementing a class. The wrapper ought to be no more complicated than: package Ivy; use base 'Net::Ivy'; 1; Hardly error-prone. Do you have some suggestions? I'd suggest uploading as Net::Ivy and bundling an Ivy.pm as a wrapper. -- Lincoln A. Baxter [EMAIL PROTECTED]
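Both wrapper techniques described above fit in a few lines. Here is a self-contained sketch (the Net::Ivy package here is a stand-in with made-up subs, since the real module's functions aren't shown): @ISA handles the method-call style, and glob aliasing handles the plain function-call style.

```perl
use strict;
use warnings;

package Net::Ivy;                    # stand-in for the real module, illustrative subs only
sub new { bless {}, shift }
sub foo { my ($self, $arg) = @_; "foo($arg)" }
sub bar { "bar()" }

package Ivy;                         # the thin back-compat shim
our @ISA = ('Net::Ivy');             # $obj->foo(...) now resolves via inheritance
{
    no strict 'refs';                # symbolic refs needed to alias the globs
    *{"Ivy::$_"} = \&{"Net::Ivy::$_"} for qw(foo bar);  # Ivy::foo(...) works too
}

package main;

my $obj = Ivy->new;
print $obj->foo('x'), "\n";          # method style, via @ISA
print Ivy::bar(), "\n";              # function style, via the glob aliases
```

The aliasing loop is the "bit of plumbing" the message refers to: one line per exported function, or a loop over the whole list as shown.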
Re: Submitting a new module? (Linux::ForkControl)
On Thu, Nov 13, 2003 at 11:56:02AM -0500, Brad Lhotsky wrote: but the idea is to extend the module using the /proc filesystem (hence the name space) 2) Is 'Linux::ForkControl' a decent name for this module? Other operating systems have /proc interfaces. (Perhaps not identical to Linux but some are probably supportable). Why not use the Proc:: namespace? Proc::ForkControl But aren't there similar modules already? Why add a new module instead of extending an existing one? Tim.
Re: Tie::Array::Sorted
On Wed, Nov 12, 2003 at 09:42:05PM +, Nicholas Clark wrote: On Wed, Nov 12, 2003 at 10:16:51PM +0100, Paul Johnson wrote: On Wed, Nov 12, 2003 at 01:32:13PM +, Simon Cozens wrote: Hi. I'm about to write a module which presents an array in sorted order; $a[0] will always be the least element by some comparator. Miraculously, there doesn't seem to be such a beast on CPAN already. Is Tie::Array::Sorted a reasonable name for it, or would another one be more obvious? s/Tie::// ? Do I need to be concerned with how the module is implemented? Presumably the documentation will tell me how to use it. Yes, this probably applies to the rest of the Tie:: namespace too. Oh, I was going to say this. Tie::Array::Sorted is a good name because it is consistent with many other modules. But I consider them all to be misnamed. The implementation is not the most important feature of these modules - why is it the most important part of their names? I disagree. The Tie:: doesn't just describe the implementation, it describes the interface. And that's often a key aspect of modules that offer functionality behind a tie interface. Given a choice between Array::Sorted and Tie::Array::Sorted I'd know that Tie::Array::Sorted provides a tie interface and so will, for example, let me pass a ref to the array to code I don't control and still get the behaviour I want. Tim.
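The behaviour being discussed, an array whose `$a[0]` is always the least element, reachable through an ordinary array variable you can pass around by reference, is easy to sketch with core Tie::Array. This is an illustrative toy (the `::Sketch` name is made up; the real Tie::Array::Sorted on CPAN uses a more efficient insertion strategy than a full re-sort):

```perl
use strict;
use warnings;

package Tie::Array::Sorted::Sketch;  # illustrative, not the CPAN module
use base 'Tie::Array';               # core Tie::Array supplies PUSH, POP, etc.

sub TIEARRAY {
    my ($class, $cmp) = @_;
    bless { a => [], cmp => $cmp || sub { $_[0] <=> $_[1] } }, $class;
}
sub FETCH     { $_[0]{a}[ $_[1] ] }
sub FETCHSIZE { scalar @{ $_[0]{a} } }
sub STORESIZE { $#{ $_[0]{a} } = $_[1] - 1 }
sub STORE {                          # naive: insert, then re-sort the whole array
    my ($s, $i, $v) = @_;
    $s->{a}[$i] = $v;
    @{ $s->{a} } = sort { $s->{cmp}->($a, $b) } @{ $s->{a} };
}

package main;

tie my @sorted, 'Tie::Array::Sorted::Sketch';
push @sorted, 5, 1, 3;               # push goes through the tie, so order is maintained
print "@sorted\n";                   # 1 3 5
```

This also illustrates Tim's point: because the interface is a tie, code that receives `\@sorted` and does plain pushes still gets the sorted behaviour, which a plain Array::Sorted class API could not guarantee.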
Re: what to do with dead camels ?
On Tue, Aug 05, 2003 at 11:17:36AM +0100, Nicholas Clark wrote: On Tue, Aug 05, 2003 at 01:27:47AM -0400, Christopher Hicks wrote: Maybe the e-mail should do something informative like list how many years, months and days it's been since a given module has been updated. Some weak souls might be guilted into pushing out bug fixes sooner. If there are no bugs, there is no need for bug fixes. MJD gets very irritated with people asking whether certain of his modules are abandoned, simply because the most recent version is old. The Mature development status (http://search.cpan.org/dlsip) is meant to address this. But it's not well integrated on search.cpan.org, in the sense that viewing a distribution page (http://search.cpan.org/author/MJD/Tie-File) doesn't show the DLSIP flags for the modules it contains. Tim.
Re: UDPM name space finalization
All seems fine to me. Tim. On Tue, Jun 03, 2003 at 09:17:13PM -0400, Kevin C. Krinke wrote: UI::Dialog UI::Dialog::GNOME UI::Dialog::KDE UI::Dialog::Console UI::Dialog::Backend::Zenity UI::Dialog::Backend::GDialog UI::Dialog::Backend::XDialog UI::Dialog::Backend::KDialog UI::Dialog::Backend::CDialog UI::Dialog::Backend::Whiptail UI::Dialog::Backend::ASCII (future native perl extensions) UI::Dialog::PurePerl UI::Dialog::Backend::PurePerl::GTK UI::Dialog::Backend::PurePerl::Wx UI::Dialog::Backend::PurePerl::Tk UI::Dialog::Backend::PurePerl::QT ...and the beat goes on... -- Kevin C. Krinke [EMAIL PROTECTED] Open Door Software Inc.
Re: CPAN Upload: E/EL/ELIZABETH/Thread-Needs-0.01.tar.gz
On Tue, Jul 30, 2002 at 11:38:56AM +0200, Elizabeth Mattijsen wrote: At 01:56 PM 7/29/02 +0200, Arthur Bergman wrote: At 10:44 AM 7/29/02 +0100, Tim Bunce wrote: Thread::Needs isn't a very descriptive name - it's too general. Something like Thread::NeedsModules would be better. I have been thinking maybe it should be called Thread::Modules; use Thread::Modules qw(foo bar baz); #these must be cloned no Thread::Modules qw(don't need this); #these should not be cloned Hmmm... it would be nice if you could mark modules not to be cloned. Unfortunately, at the current state of things, we can only remove module stashes _after_ they have been cloned ;-( Currently the no Thread::Needs removes module names from the hash of module names to be kept. I basically see two ways of dealing with modules in threads currently: an aggressive way (removing _everything_ except the stuff you specify should stay) and a non-aggressive way (just specifying those modules that you know you don't need). The problem with the first is that it may throw away too much. But you will save the most memory that way. The problem with the latter is that you only remove the modules of which you _know_ as an author that they're loaded. But any modules that have been loaded under the hood will remain in memory, even if you don't need them. I think the two approaches should not reside in the same module. I would therefore like to suggest the following: Thread::With and Thread::Without. So: Thread::With - mark modules to remain in thread memory Thread::Without - mark modules to be removed from thread memory Or maybe we should _interfix_ the word Module here? Thread::Module::With - mark modules to remain in thread memory Thread::Module::Without - mark modules to be removed from thread memory Or maybe we should _postfix_ the word Module? 
Thread::With::Module - mark modules to remain in thread memory Thread::Without::Module - mark modules to be removed from thread memory The no calls of these modules would then simply unmark the modules, either for keeping or removing. I think that would be a cleaner interface. Thread::Without would then be for the faint of heart, and Thread::With would be for the more brave and savvy developers... Or generalize it to focus on the 'at clone time hook' nature: Thread::OnClone delete_modules => qw(Foo::Bar Baz); which could then easily be extended to support other 'at clone time' actions. But I'm rather uncomfortable with the global impact this module has (or seems to from the API, I've not looked at the implementation). It seems likely to cause problems with modules that start their own private threads and may need to use a module that some other module has declared its threads don't need. Can't some lexically scoped implementation be found? Tim.
Re: New Module Advice
Maybe FileMetaInfo::Miner::StarOffice FileMetaInfo::Miner::HTML etc Tim. On Wed, Jul 17, 2002 at 05:41:17PM -0500, Midh Mulpuri wrote: You are right. These modules are not general purpose parsers. In fact, I am using HTML::Parser to implement an HTML miner to extract data from HTML/HEAD/META. There is a miner that wraps around stat() to make the data provided by stat() available in the Miner/Store framework. The beauty of the framework is that it provides a uniform way to get Meta data on many sources in the same application and to use several miners to obtain information on the same file. It is also fairly easy to write a Miner that analyzes a particular file and constructs data such as number of words, keywords, version number. Since file formats vary, I believe that a uniform way to obtain this information would be useful. I am at a loss as far as the Namespace is concerned because there don't seem to be related modules in CPAN. There is a Metadata module but it implements interfaces that I believe are not useful for what I am trying to do. This is what the modules do: Process files to obtain Metadata. The best alternative I can come up with is File::Metadata. Is this any better? -Original Message- From: Ade Olonoh [mailto:[EMAIL PROTECTED]] Sent: Tuesday, July 16, 2002 8:27 PM To: Midh Mulpuri Cc: [EMAIL PROTECTED] Subject: Re: New Module Advice What kind of meta data do you mean? It sounds like there is application-specific functionality that the Miner/Store modules provide, rather than being a general Star/Open Office parser or an HTML parser. The type of meta data you're retrieving from the files would probably hint towards a better name, since (IMHO) MetaInfo is too vague to signal what kind of problem could be solved with the module. --Ade. On Mon, 2002-07-15 at 17:36, Midh Mulpuri wrote: I am writing an application that collects and stores meta data from a variety of files (e.g. Star/Open Office files, HTML files) etc. 
The application is written as one set of modules that extract this meta data from a file and another set of modules that store this information. The information is exchanged between a miner and a store in a hash. I believe that the miner modules would be useful to everyone. I would like to release them to CPAN. At the same time the store modules are a nice way to store this meta information but they do not implement anything that is useful separate from the Miner modules. Would it be a good thing to release both the Miner and Store modules? At last count I have four of the former and two of the latter. There is one store Module that writes the Metadata to an XML file and another to a DBI supported database. Another problem would be the Namespace. MetaInfo seems available. I could release the modules as MetaInfo::Miner::- and MetaInfo::Store::-- if I wanted to release both sets. On the other hand I could release just the Miners under the MetaInfo namespace. Any advice and pointers would be appreciated since this would be my first release to CPAN. - Midh Mulpuri
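The "miner returns a hash of metadata, any store consumes it" design described above can be sketched with the stat()-based miner mentioned earlier in the thread. The function name and hash keys below are illustrative, not from the proposed modules:

```perl
#!/usr/bin/perl
use strict;
use warnings;
use POSIX qw(strftime);

# A toy miner: extract filesystem metadata via stat() and return it as a
# plain hashref, the exchange format a store module would then consume.
sub mine_stat {
    my ($path) = @_;
    my @st = stat $path or return;   # undef if the file doesn't exist
    return {
        path  => $path,
        size  => $st[7],                                       # bytes
        mtime => strftime('%Y-%m-%d %H:%M:%S', localtime $st[9]),
        mode  => sprintf('%04o', $st[2] & 07777),              # permission bits
    };
}

my $meta = mine_stat($0);            # mine this script's own metadata
printf "%s: %d bytes, modified %s\n",
    $meta->{path}, $meta->{size}, $meta->{mtime};
```

A store module in this scheme is just anything that accepts such a hashref, whether it writes XML, a DBI database, or something else.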
Re: SQL translator module: DBIx:: or SQL::?
On Fri, Jan 02, 1970 at 06:15:56AM -0500, Terrence Brannon wrote: Specifically, I was thinking SQL::Translator for the package name, with all the rest of my modules (Parsers, Producers, etc) living under there That sounds good but it sounds like it only does schemas, so how about: SQL::Translator::Schema I'd prefer SQL::SchemaTranslator Tim
Re: SQL translator module: DBIx:: or SQL::?
On Thu, Feb 28, 2002 at 03:23:44PM -0500, Ken Y Clark wrote: All, I have the beginnings of something that might actually be CPAN-worthy: a translator for converting one database's create syntax into another's. I had personal need to convert MySQL and Sybase to Oracle, so I've got the basics of those worked out, and I'm trying to find a nice way to produce XML. Currently, I'm using Parse::RecDescent to parse, and some general purpose print code to produce the output. I'm breaking it into smaller modules (Parsers and Producers), with the idea being that any Parser can be used with any Producer in the conversion process. So, if you wanted PostgreSQL-to-Oracle, you could just write the PostgreSQL parser and use the existing Oracle producer, so half the work would be done. Apart from any general input you might like to proffer, I'm eager to fix upon a good namespace so I can fix it before I've gotten in too much further. I thought perhaps the SQL:: namespace, but a friend also suggested DBIx::. The DBIx:: space is overused. It should only really be for things that are very closely _related_ to the DBI and not for things that just happen to _use_ the DBI. [That's mostly an observation directed at my fellow modules list members for their future reference in case I'm not around when the next DBIx request comes in :-] Since what I've got isn't really related to DBI[1], I figure it probably more belongs in SQL::. Also, SQL:: is less crowded, so I'm hoping my module wouldn't get lost in the shuffle. Specifically, I was thinking SQL::Translator for the package name, with all the rest of my modules (Parsers, Producers, etc.) living under there. Sounds good to me. Tim
Re: DBD naming question?
On Wed, Jan 23, 2002 at 04:34:03PM -0800, Schuyler Erle wrote: Hello. I've written a DBD module to wrap other DBD handles and provide intelligent drop-in support for asymmetrically replicated databases (e.g., MySQL v3). First I was going to call it DBD::Switch, but then I noticed that DBI.pm implements a DBD::Switch. So I decided to call it DBD::Multiplex. I wrote a first draft and *then* discovered that DBI ships with a DBD::Multiplex as well. So I can't figure out what the hell to call this module. Suggestions? Is it not possible to implement the functionality you need by using (or extending) DBD::Multiplex? http://www.cpan.org/modules/by-authors/id/T/TK/TKISHEL/Multiplex-1.6.pm It's exactly the kind of application that DBD::Multiplex exists for and if DBD::Multiplex can't support it now then it needs to be fixed/extended so it can. Edwin Pratomo recently contributed some master/slave logic that may already do exactly what you need. Tim. Please find the relevant code attached. I welcome other code-related recommendations, as well. (Please note that I haven't actually tested the code in its current incarnation yet -- I'll write tests and make sure they work before releasing to the CPAN.) TIA for your help. SDE =head1 NAME DBD::Multiplex - Perl extension for intelligently multiplexing DBI database handles =head1 SYNOPSIS use DBI; # Create a single multiplexed DBI handle. # my $dsn = "DBI:Multiplex:driver:dbname:master_host"; my $dbh = DBI->connect( $dsn, $user1, $pass1, { multi_read => [ "DBI:driver:dbname:slave_host", $user2, $pass2, ... ] } ); # Create a multiplexed handle using the same username and password. # my $dbh = DBI->connect( "DBI:Multiplex:driver:dbname:master_host", $user, $pass, { multi_read => "DBI:driver:dbname:slave_host" } ); # Create a multiplexed handle using the same driver, database name, # username *and* password. # my $dbh = DBI->connect( "DBI:Multiplex:driver:dbname:master_host:slave_host", $user, $pass, { ... } ); # Use replicated read-only database. 
# my $sth = $dbh->prepare( "SELECT * FROM foo" ); # Use master read/write database. # my $sth = $dbh->prepare( "UPDATE foo SET bar = ?" ); # Use a callback to provide custom SQL dispatch. # my %attr = ( multi_prepare => \&my_special_prepare, ... custom params here ... ); my $dbh = DBI->connect( "DBI:Multiplex:...", $user, $pass, \%attr ); =head1 DESCRIPTION DBD::Multiplex attempts to address the problem of clustering database servers that only support asymmetric replication. MySQL version 3 is a notable example: Writes made to a master server are instantly replicated by slave servers, but writes to slave servers are essentially ignored. DBD::Multiplex takes a multi_read attribute that points to a slave database. Henceforth, SELECT statements made on the multiplexed handle are always directed to this read-only sub-handle, while all other database traffic is directed to the master database. DBD::Multiplex handles behave in every other respect just as do the original DBD handles being multiplexed. DBD::Multiplex is hence a virtually drop-in solution for porting existing DBI code to an asymmetrically replicated database cluster. Although designed for use with MySQL v3, this module uses no database-specific features, and can theoretically be used to multiplex any DBD driver. Additionally, DBD::Multiplex features pluggable custom multiplexing via callbacks. =head1 RELATED METHODS =over 4 =item DBI->connect( $dsn, $user, $pass, $attr ) Same as your typical DBI call, except that the DSN takes the form "DBI:Multiplex:actual_driver:". DBD::Multiplex also supports a unified DSN of the form "DBI:Multiplex:driver:database:master_host:slave_host", where the same driver, database name, username and password are used to connect to both the master and slave database servers. =item $dbh->prepare( $statement ) Works just like you'd expect. 
Transparently sends SELECT statements to the read-only handle listed in the multi_read database handle attribute, directs all other traffic to the main database handle. =item Other Database Statement Methods Had better work just like they ordinarily would, or I've screwed something up. =back =head1 ATTRIBUTES The following attributes can be passed in the attribute hash in the call to DBI->connect(): =over 4 =item multi_read ( $dsn ) multi_read should be set to the DSN of the read-only slave server, unless you use the unified DSN style described above. =item multi_read_user ( $user ) =item multi_read_pass ( $pass ) =item multi_read_attr ( { ... } ) The username, password, and attribute hash for the read-only slave database. If either or both of these is left unset, the value passed to the master database handle is used instead. =item multi_connect ( CODE ) =item multi_prepare ( CODE ) multi_connect and
Re: Help Name This DBIx:: module...
Given that it's got wide functionality I'd suggest that you give it an abstract name (ala Alabazoo, Tangram etc) rather than try to find a name that describes the functionality specifically. Tim. On Wed, Aug 29, 2001 at 10:08:23PM -0700, Jeremy Zawodny wrote: Some of you may recall my DBIx:: namespace? post from several weeks back. Now that I'm convinced that the module we're (we being Yahoo!) looking to release belongs there, I need a name for it. And I'm not too good at coming up with names, so I'll describe sort of what it does and see if anyone is inspired (or at least more inspired than I am). This module is or does the following: * Provides a wrapper around DBI which makes it really really easy to build applications that use MySQL. It caters (to a degree) to folks that don't deal with databases much at all and don't want to learn DBI's API. But it has some advanced features, too. * It is currently MySQL-specific, but may not be. There was a bit of discussion about Oracle support. * Gives you very short, intuitive names that wrap existing DBI functionality. $db->Hashes($sql) will give you a list of hashrefs, for example. * Provides a simple and consistent error handling mechanism. * Provides mechanisms for dealing with database servers that are temporarily unavailable because of a flaky network or whatever other reason. Can auto-reconnect, sleep, or die immediately. * Can notify (e-mail, page, etc) someone when an application is unable to reach the database server. * Makes it easy to have something like ODBC DSNs on Unix. You can name a set of connection parameters (user, host, password, database) and use that name in many apps. If the parameters change, you make the change in one place. * Allows you to call native DBI methods, as some advanced or more seasoned with DBI folks will want to do. It doesn't sub-class DBI, but knows how to delegate those calls. 
* Provides a way for an application to request a connection to a slave (or replica) database server for read-only queries. * Other stuff that I'm probably forgetting. My goal is to clean the code up enough so that I can release it in time for my presentation at the Open Source Database Summit. http://www.osdn.com/conferences/osdb2/ Would the DBI list be a better place to discuss this, maybe? Name suggestions? Thoughts? Feedback? Questions? Jeremy -- Jeremy D. Zawodny | Perl, Web, MySQL, Linux Magazine, Yahoo! [EMAIL PROTECTED] | http://jeremy.zawodny.com/
Re: RFC: DBIx::Util
On Sun, Feb 25, 2001 at 05:51:18PM +, Leon Brocard wrote: Johan Vromans sent the following bits through the ether: Have you contacted the DBI people to see whether they are interested in adding this to standard DBI? FWIW, I submitted a patch to Tim Bunce in November adding these two to the core DBI API. I've benchmarked selectall_hashref as being ~30-50% faster than a loop over fetchrow_hashref. Tim accepted it, but I'm not sure who's in charge of DBI these days... :-) Those two methods are in the DBI ready for the next release. Thanks. Tim.