I have two images:
A is a photo,
B is a part of A.
How can I find the (x, y) position where image B appears in image A?
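One simple answer is brute-force template matching: slide B over every offset of A and compare pixels. A minimal sketch over plain 2D pixel arrays (any imaging library could supply them; the function and data here are my own illustration, not a CPAN API):

```perl
use strict;
use warnings;

# Brute-force template search: return the (x, y) of the top-left corner
# where sub-image $small first matches inside image $big, or an empty
# list if it never matches.  Images are 2D arrays of pixel values.
sub find_subimage {
    my ($big, $small) = @_;
    my ($bh, $bw) = (scalar @$big,   scalar @{ $big->[0] });
    my ($sh, $sw) = (scalar @$small, scalar @{ $small->[0] });
    for my $y (0 .. $bh - $sh) {
        POS: for my $x (0 .. $bw - $sw) {
            for my $dy (0 .. $sh - 1) {
                for my $dx (0 .. $sw - 1) {
                    # any mismatched pixel rules out this offset
                    next POS if $big->[$y + $dy][$x + $dx] != $small->[$dy][$dx];
                }
            }
            return ($x, $y);    # every pixel matched at this offset
        }
    }
    return;                     # no match anywhere
}

my @A = ([1, 2, 3, 4], [5, 6, 7, 8], [9, 10, 11, 12]);
my @B = ([6, 7], [10, 11]);
my ($x, $y) = find_subimage(\@A, \@B);
print "B is at ($x, $y) in A\n";    # (1, 1)
```

This is O(width * height * sub-width * sub-height); for real photos with noise or compression you would compare with a tolerance (or a normalized cross-correlation) rather than exact equality.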
or AI::Categorize::NaiveBayes or
something).
--- ---
Ken Williams          Last Bastion of Euclidity
[EMAIL PROTECTED]     The Math Forum
Hi,
I've made some progress on this. I've written a couple modules,
AI::Categorize and AI::Categorize::NaiveBayes. I've also written a
paper that describes what they do:
http://forum.swarthmore.edu/~ken/bayes/bayes.pod (same paper,
http://forum.swarthmore.edu/~ken/b
ts of categorization under various conditions.
--- ---
unds of most people.
But the description of this list on http://lists.perl.org/ simply says
"Discussions of Artificial Intelligence in Perl." It doesn't say
anything about newbies being unwelcome.
------
PJ a couple
years ago.
----------
't very
good at large n-dimensional array storage & computation, so they tried
to fix it. I suggest that we either harness modules like that, or
identify some problems that are worth solving and solve them.
------
Ken
Hi perl-ai list,
The slides from my YAPC talk on AI::Categorize are online now, at:
http://mathforum.com/~ken/categorize/
Please take a look if you're interested. The same slides will be
available on www.yapc.org when Kevin has time to put them there.
Several people at the talk expr
et up my mail system to make a call to
>some external program, and it would return a category.
>Is this possible to do in the Outlook client,
>or in the Exchange server?
No idea, but let us know if you find out whether it can be done.
-----
[EMAIL PROTECTED] (Tom Fawcett) wrote:
>Ken Williams <[EMAIL PROTECTED]> wrote:
>> [EMAIL PROTECTED] (Tom Fawcett) wrote:
>> >It would be nice to have a numeric discretization module as well so
>> >these would work with mixed numerics and text, but that
people have subscribed to this list, and can point me either to a good
reference or discuss the ideas themselves. True?
------
[EMAIL PROTECTED] (John Porter) wrote:
>Ken Williams wrote:
>> one suggestion was to use cross-entropy
>> measurements to reduce the number of features (words) considered.
>
>Um, have you tried a web search? Seems to me there's a fair
>amount of info out there...
[EMAIL PROTECTED] (Ken Williams) wrote:
>You're right that there are a lot of resources to be found in a web
>search, but most of it is about very specific applications - perhaps
>introductory material is best found in a textbook.
...speaking of which, is anyone familiar
d. Theoretically, both should go up.
I hope to release an updated version of the modules soon.
BTW, I'm still hoping someone wants to implement other AI::Categorize::
modules!
--- ---
Ken Williams
if for no
other reason than to learn their particularities.
----------
script 'evaluate.pl' to the new 'eg/' directory,
because otherwise 'make install' would install it into site_perl/ .
If you installed previous versions of AI::Categorize, you may want
to remove 'evaluate.pl' from your site_perl/ director
naries. Since Microsoft already had big natural language
parsers (the grammar checker in Word), it was natural to re-use them for
reading dictionaries.
--- ---
t_map() method to AI::Categorize class, which returns
the AI::Categorize::Map object so you can query this information.
-------
-Ken
Hi all,
Recently, I started reading about GAs, and thought that the best way to
learn is to actually write a Perl module, and play around with it.
The module is attached to this email. Please bear in mind that I am new
to this area, so the module is still in early alpha mode. There is a
test
briefly on this list before, but I'll mention it
again: for a good introduction to ML topics/terms, check out Tom
Mitchell's excellent book "Machine Learning".
-Ken
e project. You'd probably have to start with some of the
academic research (which I'm not up-to-date on).
> As I understand it,
> AI::Categorize can categorize items which are similar to items that
> are already categorized.
That's right, it probably wouldn't be much help for collaborative
filtering.
-Ken
Machine
Learning", which will probably form the basis for the tutorial.
-Ken
4 0.220 0.078  990 sec   <- features_kept => 0.2
* 01-kNN: 0.606 0.149 0.223 0.073 1221 sec   <- features_kept => 0.1
*******
-Ken
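The benchmark lines above compare a kNN categorizer at different features_kept settings. For readers new to the method, a minimal cosine-similarity k-nearest-neighbor categorizer can be sketched as follows (the data, names, and voting rule are my own illustration, not the benchmarked code):

```perl
use strict;
use warnings;

# Cosine similarity between two sparse term-frequency vectors
# (hashes of term => count).
sub cosine {
    my ($u, $v) = @_;
    my ($dot, $nu, $nv) = (0, 0, 0);
    $dot += $u->{$_} * ($v->{$_} || 0) for keys %$u;
    $nu  += $_ ** 2 for values %$u;
    $nv  += $_ ** 2 for values %$v;
    return $dot / sqrt($nu * $nv);
}

# Classify $query by majority vote among the $k most similar
# training documents.  @train holds [\%vector, $label] pairs.
sub knn_classify {
    my ($query, $k, @train) = @_;
    my @ranked = sort { $b->[0] <=> $a->[0] }
                 map  { [ cosine($query, $_->[0]), $_->[1] ] } @train;
    my %votes;
    $votes{ $_->[1] }++ for @ranked[0 .. $k - 1];
    my ($best) = sort { $votes{$b} <=> $votes{$a} } keys %votes;
    return $best;
}

my @train = (
    [ { ball  => 3, goal => 2 }, 'sports'  ],
    [ { ball  => 1, team => 2 }, 'sports'  ],
    [ { stock => 3, bank => 1 }, 'finance' ],
);
print knn_classify({ ball => 2, goal => 1 }, 2, @train), "\n";   # sports
```

Feature selection (the features_kept knob above) matters to kNN mostly because every kept term participates in every cosine computation, so fewer features means both less noise and less work.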
casual
browsers of CPAN won't find things there as easily.
2) The AI:: namespace already exists and contains some
interesting things, and it would be a shame to split the
namespace without a good reason.
Comments?
I ask because I'm working on a rewrite of AI::Categorize, and
I'm wondering whether it might fit better under the ML::
namespace.
-Ken
n the Earley
algorithm to me, I'll freak. It just happened again last week!
-Ken
s shielding in an atom smasher,
but a gem nonetheless.
-Ken
dback is welcome. Thanks!
>
> J
> -- PPSN2002 => http://ppsn2002.ugr.es
> Home => http://geneura.ugr.es/~jmerelo
> Tutorial Perl => http://granavenida.com/perl
>
>
-Ken
de it needs them in order to be shown in public. See the
"LIMITATIONS" section of the docs.
Since it's a new module, I thought I'd include the docs here.
-Ken
NAME
AI::DecisionTree - Automatically Learns Decision Trees
SYNOPSIS
use AI::DecisionTree;
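The synopsis is cut off in the archive. As background for what a decision-tree learner optimizes, the quantity used to pick each split is information gain, which can be sketched from scratch (an illustration of the standard ID3-style calculation, not the module's actual code):

```perl
use strict;
use warnings;

# Shannon entropy (in bits) of a list of class labels.
sub entropy {
    my %count;
    $count{$_}++ for @_;
    my $n = @_;
    my $h = 0;
    for my $c (values %count) {
        my $p = $c / $n;
        $h -= $p * log($p) / log(2);
    }
    return $h;
}

# Information gain of splitting on one attribute.  Each example is
# [attribute_value, label]; gain = entropy before - weighted entropy after.
sub info_gain {
    my @examples = @_;
    my %by_value;
    push @{ $by_value{ $_->[0] } }, $_->[1] for @examples;
    my $before = entropy(map $_->[1], @examples);
    my $after  = 0;
    for my $labels (values %by_value) {
        $after += (@$labels / @examples) * entropy(@$labels);
    }
    return $before - $after;
}

# An attribute that perfectly separates the classes gains a full bit:
my @ex = (
    ['sunny', 'no'],  ['sunny', 'no'],
    ['rain',  'yes'], ['rain',  'yes'],
);
printf "gain = %.2f bits\n", info_gain(@ex);   # 1.00
```

The tree builder simply applies this greedily: at each node, split on the attribute with the highest gain, then recurse on each branch.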
On Monday, June 10, 2002, at 02:44 AM, Elias Assmann wrote:
> Hi,
>
> I can't really say anything interesting about your module, but I do
> think I've spotted a mistake in your documentation. Observe:
>
>
> On Sun, 9 Jun 2002, Ken Williams wrote:
>
>
brain-o that I've just fixed. The
second one was only slightly less stupid, fixed as well.
If I make any other more substantial changes in the next few
days I'll upload these fixes with a new version, or maybe just
release a new version anyway.
-Ken
ation. I think there's probably a need for
both. There's also an AI::jNeural, but I haven't been able to
make heads or tails of its documentation. I'm also not sure
about the comparative strengths of libjneural versus other
libraries.
-Ken
s() method.
------
-Ken
tree now contains information about
how many training examples contributed to training this node, and
what the distribution of their classes was.
- Added an as_graphviz() method, which will help visualize trees.
They're not terribly pretty graphviz objects yet, but they're
visual.
-Ken
because it was spending a lot of time in accessor methods for the C
structures it was using. Don't worry, I'm not going C-crazy. I
won't be making many (any?) more of these kinds of changes, but
these ones were probably necessary.
- Removed a bit of debugging code that I left in for 0.03.
-Ken
after
training, using the 'purge' parameter to new() and/or the
do_purge() method.
- Added the set_results() and copy_instances() methods, which let you
re-use training instances from one tree to another.
- Added the instances() and purge() accessor methods.
-Ken
term weighting methods (see Salton & Buckley, "Term Weighting
Approaches in Automatic Text Retrieval", Information Processing &
Management, 1988, no. 5)
-Ken
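One member of the Salton & Buckley family is plain tf*idf: weight a term in a document by its frequency there, discounted by how many documents contain it at all. A minimal sketch (the corpus and names are my own illustration):

```perl
use strict;
use warnings;

# weight(t, d) = tf(t, d) * log(N / df(t)), one flavor of the
# Salton & Buckley term-weighting schemes.
my @docs = (
    { cat => 2, dog  => 1 },
    { cat => 1, fish => 3 },
    { dog => 2, fish => 1 },
);

my %df;   # document frequency: in how many docs each term appears
for my $d (@docs) {
    $df{$_}++ for keys %$d;
}
my $N = @docs;

sub weight {
    my ($term, $doc) = @_;
    my $tf = $docs[$doc]{$term} or return 0;   # term absent => weight 0
    return $tf * log($N / $df{$term});
}

printf "cat  in doc 0: %.3f\n", weight('cat',  0);   # 0.811
printf "fish in doc 1: %.3f\n", weight('fish', 1);   # 1.216
```

A term present in every document gets log(N/N) = 0, which is exactly the intuition: ubiquitous terms carry no discriminating power.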
evant to me, but if
you browse the conference site you might find other ML stuff. The
presentation materials are available on the site (I've asked them to
put my tutorial slides up, but they're not there yet).
-Ken
stand-alone module that can be used for other things besides
text. The attributes and labels in Algorithm::NaiveBayes are
arbitrary, and it doesn't do any text processing. This frees it up for
other machine learning purposes.
-Ken
be making it using Maciej Cegłowski's
Search::ContextGraph, which is a pretty simple new concept and fairly
lightweight.
In order to get enough data for a demo, I'm hoping I can get people to
run the script at http://limnus.com/~ken/list_modules.pl and send me
the output. It will tell
On Thursday, June 12, 2003, at 11:37 AM, Ken Williams wrote:
In order to get enough data for a demo, I'm hoping I can get people to
run the script at http://limnus.com/~ken/list_modules.pl and send me
the output.
Thanks to all who replied. I've now got about 35 examples of installed
My congratulations on creating a message with such consistency of line
length!
-Ken
On Monday, August 25, 2003, at 06:25 AM, Arthur T. Murray wrote:
An effort to create multi-species AI Minds is 'Net-wide underway.
http://mentifex.virtualentity.com/perl.html is Perl AI evolution.
Not on
have an alternative package? Are there any plans to
introduce numerical attributes in the modules?
Algorithm::NaiveBayes uses numerical attributes:
$nb->add_instance(
    attributes => { foo => 1.7, bar => 3.234 },
    label      => 'whatever',
);
Or do I misunderstand your question?
-Ken
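For readers following the thread, the arithmetic behind a call like the one above can be sketched from scratch. This toy multinomial model with add-one smoothing is my own illustration of the idea, not the internals of Algorithm::NaiveBayes:

```perl
use strict;
use warnings;

# Toy multinomial Naive Bayes: accumulate attribute weights per label,
# then score a new instance by summed log-probabilities.
my (%attr_count, %label_count, %vocab);

sub add_instance {
    my (%arg) = @_;
    my ($attrs, $label) = @arg{qw(attributes label)};
    $label_count{$label}++;
    while (my ($attr, $w) = each %$attrs) {
        $attr_count{$label}{$attr} += $w;
        $vocab{$attr} = 1;
    }
}

sub predict {
    my (%arg) = @_;
    my $attrs = $arg{attributes};
    my $total = 0; $total += $_ for values %label_count;
    my %score;
    for my $label (keys %label_count) {
        my $s = log($label_count{$label} / $total);   # class prior
        my $mass = 0; $mass += $_ for values %{ $attr_count{$label} };
        my $v = keys %vocab;
        for my $attr (keys %$attrs) {
            my $c = $attr_count{$label}{$attr} || 0;
            # add-one smoothing so unseen attributes don't zero out a label
            $s += $attrs->{$attr} * log(($c + 1) / ($mass + $v));
        }
        $score{$label} = $s;
    }
    return \%score;
}

add_instance(attributes => { vacation => 2, sunny => 1 }, label => 'fun');
add_instance(attributes => { deadline => 2, memo  => 1 }, label => 'work');
my $scores = predict(attributes => { sunny => 1 });
my ($best) = sort { $scores->{$b} <=> $scores->{$a} } keys %$scores;
print "best label: $best\n";   # fun
```

Note that because the attribute weights only enter through counts and sums, fractional values like the 1.7 and 3.234 above work just as well as integer counts.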
On Tuesday, September 23, 2003, at 09:12 AM, Dominique Vlieghe wrote:
On Tue, 2003-09-23 at 16:00, Ken Williams wrote:
Algorithm::NaiveBayes uses numerical attributes:
$nb->add_instance(
    attributes => { foo => 1.7, bar => 3.234 },
    label      => 'whatever',
);
O
does the failure
look like?
-Ken
On Monday, March 15, 2004, at 10:52 AM, Jack Tanner wrote:
"Ken Williams" <[EMAIL PROTECTED]> wrote in message
news:[EMAIL PROTECTED]
I haven't yet tried it - a patch would be great. What does the
failure
look like?
Actually, it looks like all the errors boil down to SMO
to hard-code that, you can do:
$nb = AI::Categorizer::Learner::NaiveBayes->restore_state('filename');
eval( 'use ' . ref($nb) );
To solve this correctly, I think I'd have to add this to the
restore_state() method in the NaiveBayes learner class.
-Ken
On Aug 31, 2
Lingua::CollinsParser can also do a good job at this, but it's going to
be slower.
-Ken
On Oct 2, 2004, at 6:58 PM, Jon Orwant wrote:
Lingua::LinkParser
On Oct 2, 2004, at 7:17 PM, Emil Perhinschi wrote:
I followed with interest the thread "Just starting out.. newbie help
:)"
queries" are the noisy strings you're trying to clean up. Sometimes
that works pretty well.
Or you could try the Levenshtein edit distance that Samy suggested. Or
you could try something else that you invent. =)
-Ken
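The Levenshtein edit distance mentioned here has a compact dynamic-programming implementation; a self-contained sketch (not tied to any particular CPAN module):

```perl
use strict;
use warnings;
use List::Util qw(min);

# Levenshtein edit distance: the minimum number of insertions,
# deletions, and substitutions needed to turn $s into $t, computed
# row by row over the classic DP table.
sub levenshtein {
    my ($s, $t) = @_;
    my @s = split //, $s;
    my @t = split //, $t;
    my @prev = (0 .. @t);                # distances from the empty prefix
    for my $i (1 .. @s) {
        my @cur = ($i);
        for my $j (1 .. @t) {
            my $cost = $s[$i - 1] eq $t[$j - 1] ? 0 : 1;
            push @cur, min(
                $prev[$j]     + 1,       # deletion
                $cur[$j - 1]  + 1,       # insertion
                $prev[$j - 1] + $cost,   # substitution (or match)
            );
        }
        @prev = @cur;
    }
    return $prev[-1];
}

print levenshtein('kitten', 'sitting'), "\n";   # 3
```

For cleaning up noisy strings against a list of known-good values, you would compute the distance to each candidate and take the closest one, perhaps rejecting matches whose distance exceeds some threshold.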
On Feb 4, 2005, at 4:18 AM, Jason Armstrong wrote:
Perhaps someo
'd be better off
using bsvm instead.
http://www.csie.ntu.edu.tw/~cjlin/
I think there may be other alternatives by now too, but unfortunately I
haven't had any time to look into them, and I think this cause is sort
of "waiting for a champion."
-Ken
    {
        return [ split ' ', $_[1] ];
    }
}

my $c = AI::Categorizer->new(
    document_class => 'My::Documents',
);
...
-Ken
On Feb 5, 2005, at 11:23 AM, Jason Armstrong wrote:
Thanks for all the good feedback, I'll certainly be following up on it.
I did find one reas
print "Number of docs: ", $collection->count_documents, "\n";
while (my $doc = $collection->next) {
    print $doc->name, " => [", join(", ", map $_->name, $doc->categories), "]\n";
}
===
Number of docs: 2
Seahawks => [trucks, cars]
Seattle =>
career web site:
http://jobsearch.monster.com/getjob.asp?JobID=37099311
http://www.thomsoncareercenter.com/search/view_job_xml.asp?src=rs&jobID=154545&loc=Ext
Note that I am not the hiring manager or an HR person, I'm a fellow
Research Scientist.
Eagan is a suburb of Minneapolis.
On Jan 2, 2007, at 8:53 PM, Russell Foltz-Smith wrote:
Does someone have an examples category text file that works with the
demo.pl?
Yup, you can download it from http://campstaff.com/~ken/
reuters-21578.tar.gz .
Also, does anyone know of an online/web service implementation for web
y that will be
as accessible from a perl world, but you might want to look at some
language modeling papers - I really like the LDA papers from Michael
Jordan (no, not that Michael Jordan, this one: http://
citeseer.ist.psu.edu/541352.html), which are by no means
straightforward, but they will indeed let you describe each document
as generated by a mixture of categories.
-Ken
On Jan 8, 2007, at 10:51 AM, Tom Fawcett wrote:
Just to add a note here: Ken is correct -- both NB and SVMs are
known to be rather poor at providing accurate probabilities. Their
scores tend to be too extreme. Producing good probabilities from
these scores is called calibrating the
ated it in our test (t/03-weka.t) of the Weka wrapper
too. [Sebastien Aperghis-Tramoni]
-Ken
Hi Ignacio,
Is this when loading a pre-trained categorizer from a saved file?
This is a known problem, but I haven't settled on a good solution.
A simple workaround is to just put:
use Algorithm::NaiveBayes::Model::Frequency;
in the script that's currently failing.
-Ken