On Sun, Mar 24, 2013 at 3:36 AM, Steve Richfield
<[email protected]> wrote:

>> Do you really think that these companies don't already own hundreds or
>> thousands of patents on the methods they invented and are using? I
>> suppose if your price is low, they will pay it to avoid the hassle. If
>> you ask for too much, they will point out prior art to invalidate your
>> patent.
>
> This sounds like a game I can't lose!!! If the patent won't stand anyway, 
> challenging them would get THEM to do the hard work of finding any prior art, 
> before I spend any more in this direction. If they fail, then I will have 
> vetted my approach and it WILL be worth lots of money. Hence, I should demand 
> a fortune, and let the chips fall where they may. The odds would almost 
> certainly be better than a lottery ticket.
>
> Do you see any flaws in this obviously non-conservative logic?

Yes. A lawsuit will be very expensive if you lose.

Did you really look at all 8 million patents (including expired ones)
in the U.S. to see if your invention has prior art, or even the 1
million or so that relate to computer technology? What about foreign
patents? And remember that something doesn't even need to be patented
to be considered prior art. It just has to be disclosed, as in a
research paper, technical document, or open source code. Are you
absolutely, positively sure that you are the first to describe these
ideas?

> The BIG question is whether it works, and whether others believe it. The 
> thing missing in my (and other) parsing approach(es) is a canonical form to 
> represent syntax and semantics, with plenty of hooks to attach new code to do 
> new things. BNF would require a **LOT** of extension. My present plan is to 
> get people from various areas to work on such a representation. I am now 
> creating a paper for WORLDCOMP about the next steps, the initial submitted 
> draft of which I will post here in a week or two for comment.

WORLDCOMP will publish any paper as long as you pay the conference fees.
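For what it's worth, here is one invented sketch of what "BNF with hooks" could look like: productions paired with semantic actions, driven by a toy recursive-descent parser. All names and structure here are hypothetical illustrations, not an existing formalism and not the representation being proposed:

```python
# Hypothetical sketch: BNF-style productions, each carrying a semantic
# action ("hook") that runs when the production matches. Illustration only.

GRAMMAR = {
    # nonterminal: list of (production, semantic action)
    "expr": [
        (["term", "+", "expr"], lambda t, _, e: t + e),
        (["term"],              lambda t: t),
    ],
    "term": [
        (["NUM"],               lambda n: int(n)),
    ],
}

def parse(nonterminal, tokens, pos=0):
    """Try each production; return (semantic value, new position) or None."""
    for production, action in GRAMMAR[nonterminal]:
        values, p, ok = [], pos, True
        for sym in production:
            if sym in GRAMMAR:                       # nonterminal: recurse
                result = parse(sym, tokens, p)
                if result is None:
                    ok = False
                    break
                value, p = result
                values.append(value)
            elif sym == "NUM":                       # token class
                if p < len(tokens) and tokens[p].isdigit():
                    values.append(tokens[p])
                    p += 1
                else:
                    ok = False
                    break
            else:                                    # literal token
                if p < len(tokens) and tokens[p] == sym:
                    values.append(tokens[p])
                    p += 1
                else:
                    ok = False
                    break
        if ok:
            return action(*values), p                # fire the hook
    return None

print(parse("expr", ["3", "+", "4"]))  # (7, 3)
```

The point of the sketch is just that new behavior attaches by swapping the action, without touching the grammar.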

>> http://en.wikipedia.org/wiki/JPEG#Patent_issues
>> The patent claim in this case was the "invention" of using a single
>> code to represent a run of zeros followed by a non-zero value. Who
>> would have thought it would be worth $105 million? The claim wasn't
>> even valid, due to prior art.
>
> They screwed up. The trick is to just sue medium-sized companies for lunch 
> money, until the patent lapses. THEN sue the big guys for past infringement. 
> Also, look for settlements to avoid testing the patent. Further, make a deal 
> with one of them to fund the suits, in return for a free ride. That way, you 
> collect the money that can be collected now without a court hassle, and risk 
> invalidation only after the patent lapses and you have already collected all 
> that you can collect without a court hassle.

You think Forgent could have made more than $105 million before the
USPTO invalidated the key claims of the patent, 6 months before it was
to expire anyway?

>> You might want to pay attention to what Kurzweil is doing at Google.
>> He has put together a team of several top researchers to tackle
>> exactly the natural language problem. He has access to a model with
>> 300 million concepts, and an awful lot of computing power.
>
> Sounds like he is doing SOMETHING the hard way, which is a standard problem 
> with people who think they have SO much money that they don't have to worry 
> about scaling issues. Our hundred billion neurons are the result of a hundred 
> million years of optimization. People who think they can "throw iron" at 
> AI problems are just wasting everyone's time.

Google doesn't want to build a language model equivalent to a single
human. That would not be very useful. They want to build a model of
all of the billions of people that use the internet.

>> > Cyc will never ever do anything useful.
>>
>> I agree. So why are you proposing a rule based system too?
>
> The trick is to use rules to identify the exceptions and problems, and ignore 
> the rest of the real world that bogs down Cyc. That seems to be what we do in 
> our own brains, as even 100 billion neurons couldn't begin to be able to 
> track all the things that are working just as they should.

Cyc is not bogged down by rule evaluation. Rules are only evaluated as
needed, using forward and backward chaining. This worked quite
efficiently even on 1980s computers. What bogs down Cyc is the manual
process of entering millions of rules. It's not just typing them in,
either. You have to make sure the rules are consistent, and have a way
of debugging them when you discover they aren't, because different
people have different beliefs. What are the chances of entering a
million rules with no errors?
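To illustrate what "evaluated as needed" means, here is a minimal propositional backward chainer: proving a goal only touches rules whose head matches that goal. The rule base here is invented for illustration and is not from Cyc:

```python
# Minimal backward chaining over propositional Horn rules. A query
# recurses from the goal into rule bodies, so unrelated rules never fire.
# Rules and facts below are made up for illustration.

RULES = [
    ("mortal", ["man"]),         # man -> mortal
    ("man", ["human", "male"]),  # human & male -> man
]
FACTS = {"human", "male"}

def prove(goal, rules=RULES, facts=FACTS, seen=None):
    """Return True if goal follows from facts via backward chaining."""
    seen = seen or set()
    if goal in facts:
        return True
    if goal in seen:             # guard against circular rules
        return False
    seen = seen | {goal}
    return any(all(prove(b, rules, facts, seen) for b in body)
               for head, body in rules if head == goal)

print(prove("mortal"))    # True
print(prove("immortal"))  # False: no matching rule or fact
```

Forward chaining is the mirror image: start from the facts and fire rules whose bodies are satisfied until nothing new is derived.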

--
-- Matt Mahoney, [email protected]


-------------------------------------------
AGI
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/21088071-f452e424