Java Language Ontology and .java to RDF parser?

2011-06-26 Thread Aldo Bucchi
Hi,

Does anyone know of a Java language ontology? ( with JavaClass,
JavaMethod, JavaField, etc. classes, for example ).
And a parser for such an ontology? ( one that takes .java sources as input ).

I need to analyze some Java codebases and it would be really useful to
create an in-memory graph of the language constructs, particularly in
RDF, so I could use some of the amazing tools that we all know and
love ;)
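
To make the idea concrete, here is the kind of output I have in mind --
a minimal sketch in Python/rdflib, where the ex: class and property
names are placeholders standing in for the ontology I am looking for,
not an existing vocabulary:

    from rdflib import Graph, Namespace, Literal, RDF, RDFS

    EX = Namespace("http://example.org/java-ontology#")  # placeholder ontology
    SRC = Namespace("http://example.org/codebase/")      # placeholder instance URIs

    g = Graph()
    g.bind("ex", EX)

    # e.g. for "class Foo { void bar() {} }" the parser would emit:
    g.add((SRC["Foo"], RDF.type, EX.JavaClass))
    g.add((SRC["Foo"], RDFS.label, Literal("Foo")))
    g.add((SRC["Foo.bar"], RDF.type, EX.JavaMethod))
    g.add((SRC["Foo"], EX.hasMethod, SRC["Foo.bar"]))

    print(g.serialize(format="turtle"))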

Thanks!
A

-- 
Aldo Bucchi
@aldonline
skype:aldo.bucchi
http://facebook.com/aldo.bucchi ( -- add me * )
http://aldobucchi.com/
* I prefer Facebook as a networking
and communications tool.



Re: Java Language Ontology and .java to RDF parser?

2011-06-26 Thread Aldo Bucchi
Hi Michael,

Thanks for the reference!
I will email Aftab off-list ;)

Regards,
A

On Sun, Jun 26, 2011 at 4:08 AM, Michael Hausenblas
michael.hausenb...@deri.org wrote:

 Aldo,

 Does anyone know of a Java language ontology? ( with a JavaClass,
 JavaMethod, JavaField, etc classes, for example. ).
 And a parser for such an ontology? ( that takes .java sources as input ).

 Yes, see [1]. Ping Aftab (one of my PhD students, in CC) if you need more
 details ...

 Cheers,
        Michael

 [1] Aftab Iqbal, Oana Ureche, Michael Hausenblas, Giovanni Tummarello.
    LD2SD: Linked Data Driven Software Development, 21st International
 Conference on Software Engineering and Knowledge Engineering, 2009.
    http://sw-app.org/pub/seke09-ld2sd.pdf

 --
 Dr. Michael Hausenblas, Research Fellow
 LiDRC - Linked Data Research Centre
 DERI - Digital Enterprise Research Institute
 NUIG - National University of Ireland, Galway
 Ireland, Europe
 Tel. +353 91 495730
 http://linkeddata.deri.ie/
 http://sw-app.org/about.html

 On 26 Jun 2011, at 08:28, Aldo Bucchi wrote:

 Hi,

 Does anyone know of a Java language ontology? ( with a JavaClass,
 JavaMethod, JavaField, etc classes, for example. ).
 And a parser for such an ontology? ( that takes .java sources as input ).

 I need to analyze some Java codebases and it would be really useful to
 create an in memory graph of the language constructs, particularly in
 RDF, so I could use some of the amazing tools that we all know and
 love ;)

 Thanks!
 A

 --
 Aldo Bucchi
 @aldonline
 skype:aldo.bucchi
 http://facebook.com/aldo.bucchi ( -- add me * )
 http://aldobucchi.com/
 * I prefer Facebook as a networking
 and communications tool.






-- 
Aldo Bucchi
@aldonline
skype:aldo.bucchi
http://facebook.com/aldo.bucchi ( -- add me * )
http://aldobucchi.com/
* I prefer Facebook as a networking
and communications tool.



Looking for high quality list of current, real countries

2011-03-22 Thread Aldo Bucchi
Hi guys,
I don't usually do this, but I would like some help in finding a high
quality list of current ( not Sparta ) and real ( not Mordor )
countries with the following props:

* ISO 3166-1-alpha-2 code
* English Name
* Flag
* Local Name ( optional )
* Language ( optional )
* Population ( optional )
* Facebook ID ( would be cool, but I imagine this is harder )

I am finding pieces of this dataset here and there ( DBPedia,
Freebase, CIAWFB, GeoNames ) with varying coverage and quality.
In case you are wondering: Yes, I *AM* trying to save myself some work ;)

Shameless crowdsourcing.
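
For anyone attempting this, here is a sketch of pulling the DBpedia
slice with SPARQLWrapper. Treat dbo:iso31661Code as an assumption about
the ISO-code property name and verify it against the endpoint:

    from SPARQLWrapper import SPARQLWrapper, JSON

    sparql = SPARQLWrapper("http://dbpedia.org/sparql")
    sparql.setQuery("""
        PREFIX dbo: <http://dbpedia.org/ontology/>
        PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
        SELECT ?country ?name ?code ?pop WHERE {
          ?country a dbo:Country ;
                   rdfs:label ?name ;
                   dbo:iso31661Code ?code .   # property name is a guess
          OPTIONAL { ?country dbo:populationTotal ?pop }
          FILTER ( lang(?name) = "en" )
        }
    """)
    sparql.setReturnFormat(JSON)
    for b in sparql.query().convert()["results"]["bindings"]:
        print(b["code"]["value"], b["name"]["value"])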

Thanks!
A

--
Aldo Bucchi
@aldonline
skype:aldo.bucchi
http://facebook.com/aldo.bucchi
http://aldobucchi.com/



Get 40k to start up your startup in Chile

2010-12-23 Thread Aldo Bucchi
Hi Guys,

I am helping the Startup-Chile program.

http://techcrunch.com/2010/12/18/chile%E2%80%99s-grand-innovation-experiment/
http://startupchile.org/

We will give you 40k, no strings attached, if you start up in Chile.

The networking is great and the experience itself is very innovative.
You will benefit from being among the first, which will create unusual
networking opportunities. In SF you're usually just one more guy.

The project is mega-ambitious in itself and its objective is to
bootstrap a Silicon Valley-style innovation pole here in Chile by
importing a human network and letting it blend with the local network.
This will attract talent, generate success stories, etc.

Given that Chile is small, it has a pretty big chance of succeeding.

In contrast with other projects, they are not focusing on
infrastructure, universities, etc., which is an indirect route. They are
just bringing in entrepreneurs.

You need to have a startup with an initial business model, and someone
needs to come here. I can point you to more information if you're
interested.

Think of this as a paid bootstrapping leave in a totally new place
where your mom won't be calling you every 5 minutes. You will meet
other entrepreneurs.

Personally, I think this is a fun opportunity.

Disclaimer: I am not getting any money out of this. Most Chilean
entrepreneurs are simply helping. We love the idea that we can create
a high-tech sub-culture right here. And I love the fact that I am
meeting more people and ideas than I usually do in California in the
same span of time. That's the main reward.

Also, I am available to partner up if someone has a startup that's
within my area of expertise. I can contribute execution, talent, and
my own expertise.

Regards,
A

-- 
Aldo Bucchi
@aldonline
skype:aldo.bucchi
http://aldobucchi.com/



Re: ISIC as Linked Data?

2010-12-02 Thread Aldo Bucchi
Hi Martin,

On Thu, Dec 2, 2010 at 7:28 AM, Martin Hepp
martin.h...@ebusiness-unibw.org wrote:
 Hi Aldo,
 Not directly an answer to your question, but: In GoodRelations, we use ISIC
 as a datatype property,

  http://purl.org/goodrelations/v1#hasISICv4 ,

 because replicating standardized numbering schemes is usually difficult due
 to maintenance and legal issues.

Aha! I am starting to understand the rationale behind your decision ;)

Having said that, internally, we do need to create a dataset because
we need the relations between categories, labels, etc.

We are using it to fill this predicate.
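
( For the archives: filling the predicate is a one-liner. A minimal
rdflib sketch, where the ex: business URI is made up and the exact
datatype the property expects should be checked against the
GoodRelations spec: )

    from rdflib import Graph, Namespace, Literal, RDF

    GR = Namespace("http://purl.org/goodrelations/v1#")
    EX = Namespace("http://example.org/companies/")   # placeholder URIs

    g = Graph()
    g.bind("gr", GR)
    g.add((EX.acme, RDF.type, GR.BusinessEntity))
    # ISIC Rev.4 class 6201 = computer programming activities
    g.add((EX.acme, GR.hasISICv4, Literal(6201)))
    print(g.serialize(format="turtle"))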


 Martin

 On 02.12.2010, at 07:32, Aldo Bucchi wrote:

 Hi,

 Is there any ISIC Linked Data Dataset out there that you know of?
 http://unstats.un.org/unsd/cr/registry/isic-4.asp

 Thanks!
 A

 --
 Aldo Bucchi
 @aldonline
 skype:aldo.bucchi
 http://aldobucchi.com/






-- 
Aldo Bucchi
@aldonline
skype:aldo.bucchi
http://aldobucchi.com/



Re: ISIC as Linked Data?

2010-12-02 Thread Aldo Bucchi
On Thu, Dec 2, 2010 at 1:50 PM, Kingsley Idehen kide...@openlinksw.com wrote:
 On 12/2/10 11:32 AM, Aldo Bucchi wrote:

 Hi Martin,

 On Thu, Dec 2, 2010 at 7:28 AM, Martin Hepp
 martin.h...@ebusiness-unibw.org  wrote:

 Hi Aldo,
 Not directly an answer to your question, but: In GoodRelations, we use
 ISIC
 as a datatype property,

  http://purl.org/goodrelations/v1#hasISICv4 ,

 because replicating standardized numbering schemes is usually difficult
 due
 to maintenance and legal issues.

 Aha! I am starting to understand the rationale behind your decision ;)

 Having said that, internally, we do need to create a dataset because
 we need the relations between categories, labels, etc.

 Aldo,

 Make your own data space (internal or public) by populating your own Named
 Graphs with said data.

 You can take the Microsoft Access dump, re-org the SQL data if need be, zap
 it through the RDF Views Wizard (using an ODBC connection to Access) and
 you're set re. an RDF-based Linked Data Space.

So you noticed there is a MsSQL dump :)
My real problem is not doing this part; I do it often.

The question was aimed at taking advantage of the network effect:
1. If it has already been done, don't do it again.
2. Reuse IDs, which will eventually align my dataset with other
datasets. For example, someone will add images to these categories,
and my apps will benefit from that with a simple import.

I think Virtuoso's RDF Views are best ( in fact, crucial ) when
dealing with a live dataset which is stored in an RDBMS and being
operated on by other apps. Like SugarCRM, for example.
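
( For the archives, the "populate your own named graph" step is simple
in most RDF libraries. A sketch with rdflib; the graph name and input
file are made up: )

    from rdflib import ConjunctiveGraph, URIRef

    store = ConjunctiveGraph()
    # drop the converted ISIC triples into their own named graph
    isic = store.get_context(URIRef("http://example.org/graphs/isic"))
    isic.parse("isic.ttl", format="turtle")  # assumes a local Turtle conversion
    print(len(isic), "triples in the ISIC graph")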


 You get the picture

 Kingsley

 We are using it to fill this predicate.

 Martin

 On 02.12.2010, at 07:32, Aldo Bucchi wrote:

 Hi,

 Is there any ISIC Linked Data Dataset out there that you know of?
 http://unstats.un.org/unsd/cr/registry/isic-4.asp

 Thanks!
 A

 --
 Aldo Bucchi
 @aldonline
 skype:aldo.bucchi
 http://aldobucchi.com/






 --

 Regards,

 Kingsley Idehen
 President & CEO
 OpenLink Software
 Web: http://www.openlinksw.com
 Weblog: http://www.openlinksw.com/blog/~kidehen
 Twitter/Identi.ca: kidehen









-- 
Aldo Bucchi
@aldonline
skype:aldo.bucchi
http://aldobucchi.com/



ISIC as Linked Data?

2010-12-01 Thread Aldo Bucchi
Hi,

Is there any ISIC Linked Data Dataset out there that you know of?
http://unstats.un.org/unsd/cr/registry/isic-4.asp

Thanks!
A

-- 
Aldo Bucchi
@aldonline
skype:aldo.bucchi
http://aldobucchi.com/



Re: FW: Failed to port datastore to RDF, will go Mongo

2010-11-24 Thread Aldo Bucchi
 that would of course be very cool, we could then try to 
 evaluate how far this went. Progress will be at: 
 http://bitbucket.org/pudo/wdmmg-core

My exec summary to you is this:
* Instead of Mongo, use Virtuoso with your own predicates. You will
get a lot of power and you will be able to make your data live
natively as RDF. This means it will be easily importable and meshable
with other datasets from the start.
* If UI is an issue, you can throw your questions to public-lod and
lots of us will answer with patterns, strategies, etc.

Regards,
A


 Friedrich



 ___
 wdmmg-discuss mailing list
 wdmmg-disc...@lists.okfn.org
 http://lists.okfn.org/mailman/listinfo/wdmmg-discuss

 - End forwarded message -

 --
 William Waites
 http://eris.okfn.org/ww/foaf#i
 9C7E F636 52F6 1004 E40A  E565 98E3 BBF3 8320 7664





-- 
Aldo Bucchi
@aldonline
skype:aldo.bucchi
http://aldobucchi.com/



Re: FW: Failed to port datastore to RDF, will go Mongo

2010-11-24 Thread Aldo Bucchi
Sorry, I forgot to add something critical.

Ease of integration ( moving triples ) is just the beginning. Once you
get a hold of the power of ontologies and "inference as views", your
data starts becoming more and more useful.

But the first step is getting your data into RDF, and the return on
that investment is SPARQL and the ease of integration.

I usually end up with several transformation pipelines and accessory
TTL files which all get combined into one dataset. TTLs are easily
editable by hand and collaboratively versioned, while giving you full
expressivity.

TTL files alone are why some developers fall in love with Linked Data.
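
( The combine step itself is trivial, which is part of the charm. A
sketch; the file names are made up: )

    from rdflib import Graph

    g = Graph()
    # pipeline output plus hand-maintained accessory files, merged by parsing
    for f in ["pipeline-output.ttl", "labels-by-hand.ttl", "mappings.ttl"]:
        g.parse(f, format="turtle")
    g.serialize(destination="dataset.ttl", format="turtle")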


On Wed, Nov 24, 2010 at 10:33 AM, Aldo Bucchi aldo.buc...@gmail.com wrote:
 Hi William, Friedrich.

 This is an excellent email. My replies inlined. Hope I can help.

 On Wed, Nov 24, 2010 at 9:47 AM, William Waites w...@styx.org wrote:
 Friedrich, I'm forwarding your message to one of the W3 lists.

 Some of your questions could be easily answered (e.g. for "euro" in your
 context, you don't have a predicate for that, you have an Observation
 with units of a currency and you could take the currency from
 dbpedia; the predicate is "units").

 But I think your concerns are quite valid generally and your
 experience reflects that of most web site developers that encounter
 RDF.

 LOD list, Friedrich is a clueful developer, responsible for
 http://bund.offenerhaushalt.de/ amongst other things. What can we
 learn from this? How do we make this better?

 -w


 - Forwarded message from Friedrich Lindenberg friedr...@pudo.org -

 From: Friedrich Lindenberg friedr...@pudo.org
 Date: Wed, 24 Nov 2010 11:56:20 +0100
 Message-Id: a9089567-6107-4b43-b442-d09dcc0c3...@pudo.org
 To: wdmmg-discuss wdmmg-disc...@lists.okfn.org
 Subject: [wdmmg-discuss] Failed to port datastore to RDF, will go Mongo

 (reposting to list):

 Hi all,

 As an action from OGDCamp, Rufus and I agreed that we should resume porting 
 WDMMG to RDF in order to make the data model more flexible and to allow a 
 merger between WDMMG, OffenerHaushalt and similar other projects.

 After a few days, I'm now over the whole idea of porting WDMMG to RDF. 
 Having written a long technical pro/con email before (that I assume 
 contained nothing you don't already know), I think the net effect of using 
 RDF would be the following:

 * Lots of coolness, sucking up to linked data people.
 * Further research regarding knowledge representation.

 I will quickly outline some points that I think are advantages from a
 developer POV. ( once you tackle the problems you outline below, of
 course ).
 * A highly expressive language ( SPARQL )
 * Ease of creating workflows where data moves from one app to another.
 And this is not just buzz. The self-contained nature of triples and
 IDs makes it so that you can SPARQL select on one side and SPARQL
 insert on another. I do this all the time, creating data pipelines.
 I admit it has taken some time to master, but I can perform "magic"
 from my customer's point of view.
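
 ( To illustrate the pattern: CONSTRUCT on the source, INSERT on the
 target. A sketch -- both endpoints are made up, and it assumes the
 target accepts SPARQL 1.1 updates over HTTP: )

     import requests
     from SPARQLWrapper import SPARQLWrapper

     src = SPARQLWrapper("http://source.example.org/sparql")  # placeholder
     src.setQuery("CONSTRUCT { ?s ?p ?o } WHERE { ?s ?p ?o } LIMIT 1000")
     g = src.query().convert()  # a CONSTRUCT result converts to an rdflib Graph

     # wrap the triples in a SPARQL 1.1 INSERT DATA and post to the target
     update = "INSERT DATA { %s }" % g.serialize(format="nt")
     requests.post("http://target.example.org/sparql", data={"update": update})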


 vs.

 * Unstable and outdated technological base. No triplestore I have seen so 
 far seemed on par with MySQL 4.

 * You definitely need to give Virtuoso a try. It is a mature SQL
 database that grew into RDF. I strongly disagree with this point, as I
 have personally created highly demanding projects for large companies
 using Virtuoso's Quad Store. To give you a real-life case, the recent
 Brazilian election portal by Globo.com (
 http://g1.globo.com/especiais/eleicoes-2010/ ) has Virtuoso under the
 hood and, being a highly important, mission-critical app in a major (
 4th ) media company, it is not a toy application.
 I know of many others, but I participated in this one, so I can tell
 you it is Virtuoso without fear of mistake.

 * No freedom wrt schema, instead modelling overhead. Spent 30 minutes
 trying to find a predicate for "Euro".

 Yes!
 This is a major problem and we as a community need to tackle it.
 I am intrigued to see what ideas come up in this thread. Thanks for
 bringing it up.

 As an alternative, you can initially model everything using a simple
 urn:foo:xxx or http://mydomain.com/id/xxx scheme ( this is what I do )
 and as you move forward you can refactor the model. Or not.

 You can leave it as is and it will still be integratable ( able to
 live alongside other datasets in the same store ).

 Deploying the Linked part of Linked Data ( the dereferencing
 protocols ) later on is another game.

 * Scares off developers. Invested 2 days researching this, which is how long 
 it took me to implement OHs backend the first time around. Project would 
 need to be sustained through linked data grad students.
 * Less flexibility wrt to analytics, querying and aggregation. SPARQL not so 
 hot.

 Did you try Virtuoso? Seriously.
 It provides common aggregates out of the box and is highly extensible.
 You basically have a development platform at your disposal.

 * Good chance of chewing up the UI, much harder to implement editing

Re: Making Linked Data Fun

2010-11-19 Thread Aldo Bucchi
Kingsley,

On Fri, Nov 19, 2010 at 1:07 PM, Kingsley Idehen kide...@openlinksw.com wrote:
 All,

 Here is an example of what can be achieved with Linked Data, for instance
 using BBC Wild Life Finder's data:

 1. http://uriburner.com/c/DI463N -- remote SPARQL queries between two
 instances (URIBurner and LOD Cloud Cache) with results serialized in CXML
 (image processing part of the SPARQL query pipeline).

This is excellent!
Single most powerful demo available. Really looking forward to what's coming next.

Let's see how this shifts gears in terms of Linked Data comprehension.
Even in its current state, this is an absolute game changer.

I know this was not easy. My hat goes off to the team for their focus.

Now, just let me send this link out to some non-believers that have
been holding back my evangelization pipeline ;)

Regards,
A



 Enjoy!

 --

 Regards,

 Kingsley Idehen   
 President & CEO
 OpenLink Software
 Web: http://www.openlinksw.com
 Weblog: http://www.openlinksw.com/blog/~kidehen
 Twitter/Identi.ca: kidehen








-- 
Aldo Bucchi
@aldonline
skype:aldo.bucchi
http://aldobucchi.com/



Re: Making Linked Data Fun

2010-11-19 Thread Aldo Bucchi
You started ;)

On Nov 19, 2010, at 13:39, John Erickson olyerick...@gmail.com wrote:

 This "single most powerful demo available" is an epic fail on Ubuntu
 10.10 + Chrome. The most recent release of Moonlight just doesn't cut
 it (and shouldn't have to).

What you see as a fail I see as a win.



 
 Could we as a community *possibly* work towards a rich data
 visualization/presentation toolkit built on, say, HTML5?

It will happen. But we need to stop investing asymmetrically: all tech, no
marketing collateral. We need big players to see what we see.

This demo makes a major CTO visualize what linked data could do for his 
company in the long run, thus lowering the entry barrier for us today. In order 
for an industry to grow, we need participation and engagement.



 
 On Fri, Nov 19, 2010 at 11:20 AM, Aldo Bucchi aldo.buc...@gmail.com wrote:
 Kingsley,
 
 On Fri, Nov 19, 2010 at 1:07 PM, Kingsley Idehen kide...@openlinksw.com 
 wrote:
 All,
 
 Here is an example of what can be achieved with Linked Data, for instance
 using BBC Wild Life Finder's data:
 
 1. http://uriburner.com/c/DI463N -- remote SPARQL queries between two
 instances (URIBurner and LOD Cloud Cache) with results serialized in CXML
 (image processing part of the SPARQL query pipeline) .
 
 This is excellent!
 Single most powerful demo available. Really looking fwd to what's coming 
 next.
 
 Let's see how this shifts gears in terms of Linked Data comprehension.
 Even in its current state, this is an absolute game changer.
 
 I know this was not easy. My hat goes off to the team for their focus.
 
 Now, just let me send this link out to some non-believers that have
 been holding back my evangelization pipeline ;)
 
 Regards,
 A
 
 
 
 Enjoy!
 
 --
 
 Regards,
 
 Kingsley Idehen
 President & CEO
 OpenLink Software
 Web: http://www.openlinksw.com
 Weblog: http://www.openlinksw.com/blog/~kidehen
 Twitter/Identi.ca: kidehen
 
 
 
 
 
 
 
 
 --
 Aldo Bucchi
 @aldonline
 skype:aldo.bucchi
 http://aldobucchi.com/
 
 
 
 
 
 -- 
 John S. Erickson, Ph.D.
 http://bitwacker.wordpress.com
 olyerick...@gmail.com
 Twitter: @olyerickson
 Skype: @olyerickson
 



Re: [Virtuoso-users] Reification alternative

2010-10-13 Thread Aldo Bucchi
Hi Ivan,

Hehe, I knew you were going to jump in, that's why I CC'd this to
virtuoso-users ;)

Before getting into the content of your response, let me just say this:

I think Mirko's example is actually really common. Every application
that I have built needs to keep track of ( at least ) two other
dimensions beyond the core data model/state:
* Time ( Be it audit trail or just timestamp )
* Author

You provide some really valuable tips in your reply as to how you can
tune your Virtuoso installation to actually accomplish this.

On Wed, Oct 13, 2010 at 3:49 PM, Ivan Mikhailov
imikhai...@openlinksw.com wrote:
 Hello Aldo,

 I'd recommend to keep RDF_QUAD unchanged and use RDF Views to keep n-ary
 things in separate tables. The reason is that the access to RDF_QUAD is
 heavily optimized, we've never polished any other table to such a
 degree (and I hope we will not :), and any changes may result in severe
 penalties in scalability. Triggers should be possible as well, but we
 haven't tried them, because it is relatively cheap to redirect data
 manipulations to other tables. Both the loader of files and SPARUL
 internals are flexible enough so it may be more convenient to change
 different tables depending on parameters: the loader can call arbitrary
 callback functions for each parsed triple and SPARUL manipulations are
 configurable via the "define output:route" pragma at the beginning of the
 query.

Interesting! ;)
From the docs:

output:route: works only for SPARUL operators and tells the SPARQL
compiler to generate procedure names that differ from default. As a
result, the effect of operator will depend on application. That is for
tricks. E.g., consider an application that extracts metadata from DAV
resources stored in the Virtuoso and put them to RDF storage to make
visible from outside. When a web application has permissions and
credentials to execute a SPARUL query, the changed metadata can be
written to the DAV resource (and after that the trigger will update
them in the RDF storage), transparently for all other parts of
application.

Where can I find more docs on this feature?
( I don't actually need this, just asking )


 In this case there will be no need in writing special SQL to triplify
 data from that wide tables because RDF Views will do that
 automatically. Moreover, it's possible to automatically create triggers
 by  RDF Views that will materialize changes in wide tables in RDF_QUAD
 (say, if you need inference). So instead of editing RDF_QUAD and let
 triggers on RDF_QUAD reproduce the changes in wide tables, you may edit
 wide tables and let triggers reproduce the changes in RDF_QUAD. The
 second approach is much more flexible and it promises better performance
 due to much smaller activity in triggers. For cluster, I'd say that the
 second variant is the only possible thing, because fast manipulations
 with RDF_QUAD are _really_ complicated there.

Great to know all this!
Again, I think the possibility to mix and match SPARQL + SQL via RDF
Views, triggers, output:route, etc. is a really good solution for 4-ary
relations.

A built-in Time Dimension is something I am looking forward to implementing
in some of my applications, as it provides enormous business value.

Thanks,
A


 Best Regards,

 Ivan Mikhailov
 OpenLink Software
 http://virtuoso.openlinksw.com


 On Wed, 2010-10-13 at 12:57 -0300, Aldo Bucchi wrote:
 Hi Mirko,

 Here's a tip that is a bit software bound but it may prove useful to
 keep it in mind.

 Virtuoso's Quad Store is implemented atop an RDF_QUAD table with 4
 columns (g, s, p, o). This is very straightforward. It may even seem
 naive at first glance. ( a table!!? ).

 Now, the great part is that the architecture is very open. You can
 actually modify the table via SQL statements directly: insert, delete,
 update, etc. You can even add columns and triggers to it.

 Some ideas:
 * Keep track of n-ary relations in the same table by using accessory
 columns ( time, author, etc ).
 * Add a trigger and log each add/delete to a separate table where you
 also store more data
 * When consuming this data, you can use SQL or you can run a SPARQL
 construct based on a SQL query, so as to triplify the n-tuple as you
 wish.

 The bottom suggestion here is: Take a look at what's possible when you
 escape "SPARQL only" and start working in a hybrid environment ( SQL +
 SPARQL ).
 Also note that the self-contained nature of RDF assertions ( facts,
 statements ) makes it possible to do all sorts of tricks by taking
 them into 3+ tuple structures.

 My coolest experiment so far is a time machine. I log adds and deletes
 and can recreate the state of the system ( Quad Store ) up to any
 point in time.

 Imagine a Queue management system where you can replay the state of
 the system, for example.
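
 ( The replay idea in a nutshell -- a language-agnostic sketch in
 Python, not the actual Virtuoso implementation: keep an append-only
 log of adds and deletes, then fold it up to a cutoff time: )

     from rdflib import Graph

     def state_at(log, t):
         """Rebuild the store as of time t from an append-only log of
         (timestamp, op, (s, p, o)) entries, op in {"add", "del"}."""
         g = Graph()
         for ts, op, triple in sorted(log, key=lambda e: e[0]):
             if ts > t:
                 break
             if op == "add":
                 g.add(triple)
             else:
                 g.remove(triple)
         return g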

 Regards,
 A






-- 
Aldo Bucchi
@aldonline
skype:aldo.bucchi
http://aldobucchi.com/



Re: Metaweb joins Google

2010-07-23 Thread Aldo Bucchi
I see two things here:

1. Google got into the broader Linked Data Web. That's powerful
validation and you should use that to pitch your customers, investors
or the girl you always wanted to talk to but you didn't have the balls
to face, even during that trip where she approached you and...  ( ok,
end joke ;)
2. Freebase has IDs for each entity. We can always proxy their URIs
and they will be spreading these IDs across the web. The data is open
as well, and we can link to it/use it to consolidate other datasets.
They will hardly be able to restrict that, as they need the data to
flow ( it's a paradox ).

They won't kill linked data. Remember, our worst enemy has historically
been people "not getting it"; the tech has been ready for way too
long.

People get it when Facebook talks about a graph thing and Google talks
about a graph thing.

Regards,
A

On Fri, Jul 23, 2010 at 5:37 AM, Daniel O'Connor
daniel.ocon...@gmail.com wrote:
 I disagree with your overall tone here: sure there's a bunch of proprietary
 tech in use; but there are certainly links back to the linked data world.

 "No commitment" I would disagree with. "Not sure how to proceed serving the
 best of both worlds?" might be more accurate.

 On Tue, Jul 20, 2010 at 7:10 PM, Hondros, Constantine
 constantine.hond...@wolterskluwer.com wrote:

 It's big news for the wider Semantic Web community, as it shows that
 Google is determined to extract better semantics from pages it crawls ...
 but it's mediocre news for the LOD community. Freebase is based on
 proprietary database technology, it relies on its own graph data format, is
 queryable by its own query language (MQL, based on JSON), and makes no
 commitment to RDF, OWL and SPARQL beyond supporting a SPARQL end-point (in
 beta).

 The best case is that Google is just buying the entity extraction
 expertise and software deployed by Freebase ... the worst case is that they
 end up leapfrogging the Semantic Web standards in favour of their own ...

 C.

 -Original Message-
 From: public-lod-requ...@w3.org [mailto:public-lod-requ...@w3.org] On
 Behalf Of Nathan
 Sent: Friday, July 16, 2010 9:57 PM
 To: Semantic Web; Linked Data community
 Subject: Metaweb joins Google

 Surprised this one isn't already posted!

 Metaweb (inc. Freebase) has joined Google:


 http://googleblog.blogspot.com/2010/07/deeper-understanding-with-metaweb.html
 http://blog.freebase.com/2010/07/16/metaweb-joins-google/

 Big (huge) news & congrats to all involved,

 Best,

 Nathan



 This email and any attachments may contain confidential or privileged
 information
 and is intended for the addressee only. If you are not the intended
 recipient, please
 immediately notify us by email or telephone and delete the original email
 and attachments
 without using, disseminating or reproducing its contents to anyone other
 than the intended
 recipient. Wolters Kluwer shall not be liable for the incorrect or
 incomplete transmission of
 of this email or any attachments, nor for unauthorized use by its
 employees.

 Wolters Kluwer nv has its registered address in Alphen aan den Rijn, The
 Netherlands, and is registered
 with the Trade Registry of the Dutch Chamber of Commerce under number
 33202517.







-- 
Aldo Bucchi
@aldonline
skype:aldo.bucchi
http://aldobucchi.com/

PRIVILEGED AND CONFIDENTIAL INFORMATION
This message is only for the use of the individual or entity to which it is
addressed and may contain information that is privileged and confidential. If
you are not the intended recipient, please do not distribute or copy this
communication, by e-mail or otherwise. Instead, please notify us immediately by
return e-mail.



Re: Linked Data events in the US between July 12-25

2010-04-21 Thread Aldo Bucchi
On Sun, Apr 18, 2010 at 4:40 PM, Kingsley Idehen kide...@openlinksw.com wrote:
 Aldo Bucchi wrote:

 I'll take that as a no ;)

 On Monday, April 12, 2010, Aldo Bucchi aldo.buc...@gmail.com wrote:


 Hi,

 Is there any interesting Linked Data event in the US anytime between
 July 12 - 25th?

 Thx,
 A

 --
 Aldo Bucchi
 skype:aldo.bucchi
 http://aldobucchi.com/

 PRIVILEGED AND CONFIDENTIAL INFORMATION
 This message is only for the use of the individual or entity to which it
 is
 addressed and may contain information that is privileged and
 confidential. If
 you are not the intended recipient, please do not distribute or copy this
 communication, by e-mail or otherwise. Instead, please notify us
 immediately by
 return e-mail.





 Aldo,

 There are data spaces such as meetup.com, upcoming.com, and Eventful, all
 sponger-friendly.

 Why not take that route :-)

Good idea! Will take a look. Meetups are definitely a possibility.

Also, I should broaden my search to include all data and data
integration events, not just linked data/semantic web.

Any ideas given these broadened criteria?


 See my meetup.com meshup at: http://bit.ly/bReD6T .

 --

 Regards,

 Kingsley Idehen
 President & CEO
 OpenLink Software
 Web: http://www.openlinksw.com
 Weblog: http://www.openlinksw.com/blog/~kidehen
 Twitter/Identi.ca: kidehen








-- 
Aldo Bucchi
skype:aldo.bucchi
http://aldobucchi.com/

PRIVILEGED AND CONFIDENTIAL INFORMATION
This message is only for the use of the individual or entity to which it is
addressed and may contain information that is privileged and confidential. If
you are not the intended recipient, please do not distribute or copy this
communication, by e-mail or otherwise. Instead, please notify us immediately by
return e-mail.



Re: Linked Data events in the US between July 12-25

2010-04-17 Thread Aldo Bucchi
I'll take that as a no ;)

On Monday, April 12, 2010, Aldo Bucchi aldo.buc...@gmail.com wrote:
 Hi,

 Is there any interesting Linked Data event in the US anytime between
 July 12 - 25th?

 Thx,
 A

 --
 Aldo Bucchi
 skype:aldo.bucchi
 http://aldobucchi.com/

 PRIVILEGED AND CONFIDENTIAL INFORMATION
 This message is only for the use of the individual or entity to which it is
 addressed and may contain information that is privileged and confidential. If
 you are not the intended recipient, please do not distribute or copy this
 communication, by e-mail or otherwise. Instead, please notify us immediately 
 by
 return e-mail.


-- 
Aldo Bucchi
skype:aldo.bucchi
http://aldobucchi.com/

PRIVILEGED AND CONFIDENTIAL INFORMATION
This message is only for the use of the individual or entity to which it is
addressed and may contain information that is privileged and confidential. If
you are not the intended recipient, please do not distribute or copy this
communication, by e-mail or otherwise. Instead, please notify us immediately by
return e-mail.



Linked Data events in the US between July 12-25

2010-04-12 Thread Aldo Bucchi
Hi,

Is there any interesting Linked Data event in the US anytime between
July 12 - 25th?

Thx,
A

-- 
Aldo Bucchi
skype:aldo.bucchi
http://aldobucchi.com/

PRIVILEGED AND CONFIDENTIAL INFORMATION
This message is only for the use of the individual or entity to which it is
addressed and may contain information that is privileged and confidential. If
you are not the intended recipient, please do not distribute or copy this
communication, by e-mail or otherwise. Instead, please notify us immediately by
return e-mail.



Re: What would you build with a web of data?

2010-04-09 Thread Aldo Bucchi
Georgi, you are mistaken. Links to resources or even simple things such
as reusing labels are already saving me time and money when building
apps for customers.
In reality I get more than labels: I get maps, relations, etc. which
otherwise would have been costly to attain.
Also, the exploration of datasets by jumping links is a great way to
attain insights and ideas. Intranet-wise, linked data is a killer app
and I have hard, front-line evidence.


On Apr 9, 2010, at 11:13, Georgi Kobilarov georgi.kobila...@gmx.de  
wrote:



Hi Bernard,

well, why did I ask people to write about their ideas for apps?
My observation is that there are zero real apps using linked open
data (i.e. data from the cloud). Not even a single one. Null.
After 3 years of linking open data...

There are applications that re-use identifiers, and there are
applications that use single, hand-picked data sources. But let's be
honest, that's not using the linked data cloud. So, why's that? There
must be a reason. Which part of the ecosystem sucks?

In my opinion we won't get to solve that question if we stick to
"linked data will save the planet, one day". But instead, figure out
which apps people would want to build now, and then see why it's not
possible. If it doesn't work on the small scale of some simple app,
how will linked data ever save our planet?

Cheers,
Georgi


-Original Message-
From: Bernard Vatant [mailto:bernard.vat...@mondeca.com]
Sent: Friday, April 09, 2010 12:25 PM
To: Georgi Kobilarov
Cc: public-lod
Subject: Re: What would you build with a web of data?

Hi Georgi
Copying below the comment I just posted on ReadWriteWeb. Looks like a
rant, but could have been worse ... I could have added that if the
Web of Data is used to find out more cute cats images, well, I wonder
what I do on this boat.
I'm amazed, not to say frightened, by the egocentrism and lack of
imagination of the applications proposed so far. Will the Web of Data
be an effective tool for tackling our planet's critical issues, or
just another toy for spoiled children of the Web?
I would like to see the Web of Data enable people anywhere in the
world to find out smart, sustainable and low-cost solutions to their
local development issues. What are the success (or failure) stories
in e.g., farming, water supply, energy, education, health etc. in
environments similar to mine, anywhere in the world? Something along
the lines of http://www.wiserearth.org (of which data, BTW, would be
great to have in the Linked Data cloud).
Best
Bernard

2010/4/9 Georgi Kobilarov georgi.kobila...@gmx.de
Yesterday I issued a challenge on my blog for ideas for concrete
linked open data applications. Because talking about concrete apps
helps shape the roadmap for the technical questions ahead for the
linked data community.
The real questions, not the theoretical ones...

Richard MacManus of ReadWriteWeb picked up the challenge:
http://www.readwriteweb.com/archives/web_of_data_what_would_you_build.php

Let's be creative about stuff we'd build with the web of data. Assume
the Linked Data Web would be there already: what would you build?

Cheers,
Georgi

--
Georgi Kobilarov
Uberblic Labs Berlin
http://blog.georgikobilarov.com




--
Bernard Vatant
Senior Consultant
Vocabulary  Data Engineering
Tel:   +33 (0) 971 488 459
Mail: bernard.vat...@mondeca.com

Mondeca
3, cité Nollez 75018 Paris France
Web:http://www.mondeca.com
Blog:http://mondeca.wordpress.com








Re: What would you build with a web of data?

2010-04-09 Thread Aldo Bucchi

Hi

On Apr 9, 2010, at 11:45, Georgi Kobilarov georgi.kobila...@gmx.de  
wrote:



Hi Aldo,

Georgi you are mistaken. Links to resources or even simple things  
such as
reusing labels are already saving me time and money when building  
apps for

customers.


well, do we then really need all that sophisticated RDF data? If it
comes down to reusing labels?


1. It is not complex! RDF is very basic, a least common denominator
for a webby graph model.
2. We need RDF or similar to cover diverse verticals. Anything less
does not scale.





in reality I get more than labels, I get maps, relations, etc which
otherwise would have been costly to attain.


Are you describing an application of linked open data, or just open  
data?


A linked data intranet. They used to have a combo box with countries.
They now have a map and detail pages for each country in multiple
languages, integrated into the intranet.

Same for other secondary entities.

All I did was owl:sameAs to DBpedia and the Virtuoso sponger.
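
( Essentially one triple per country -- a sketch, with the intranet
namespace made up: )

    from rdflib import Graph, Namespace, URIRef
    from rdflib.namespace import OWL

    COUNTRY = Namespace("http://intranet.example.org/id/country/")  # made up

    g = Graph()
    g.add((COUNTRY["CL"], OWL.sameAs,
           URIRef("http://dbpedia.org/resource/Chile")))
    # the sponger then pulls labels, maps, population, etc. from DBpedia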




Also, the exploration of datasets by jumping links is a great way to
attain insights and ideas. Intranet-wise, linked data is a killer app
and I have hard, front-line evidence.


Please, share your evidence...


Will blog



Cheers,
Georgi


On Apr 9, 2010, at 11:13, Georgi Kobilarov  
georgi.kobila...@gmx.de

wrote:


Hi Bernard,

well, why did I ask people to write about their ideas for apps?
My observation is that there are zero real apps using linked open  
data

(i.e.
data from the cloud). Not even a single one. Null.
After 3 years of linking open data...

There are applications that re-use identifiers, and there are
applications that use single, hand-picked data sources.  But let's  
be

honest, that's not using the linked data cloud. So, why's that?
There must be a reason. Which part of the ecosystem sucks?

In my opinion we won't get to solve that question if we stick to
linked data will save the planet, one day. But instead, figure out
which apps people would want to build now, and then see why it's not
possible.
If it
doesn't work on the small scale of some simple app, how will linked
data ever save our planet?

Cheers,
Georgi


-Original Message-
From: Bernard Vatant [mailto:bernard.vat...@mondeca.com]
Sent: Friday, April 09, 2010 12:25 PM
To: Georgi Kobilarov
Cc: public-lod
Subject: Re: What would you build with a web of data?

Hi Georgi
Copying below the comment I just posted on ReadWriteWeb. Looks  
like a

rant, but could have been worse ... I could have added that if the
Web of Data is used to find out more cute cats images, well, I  
wonder

what I do

on

this boat.
I'm amazed, not to say frightened, by the egocentrism and lack of
imagination of the applications proposed so far. Will the Web of  
Data

be

an

effective tool for tackling our planet critical issues, or just
another

toy for

spoiled children of the Web?
I would like to see the Web of Data enable people anywhere in the
world to find out smart, sustainable and low-cost solutions to  
their

local

development

issues. What are the success (or failure) stories in e.g., farming,
water

supply,

energy, education, health etc. in environments similar to mine,
anywhere

in
the world? Something along the lines of http://www.wiserearth.org  
(of

which data, BTW would be great to have in the Linked Data cloud).
Best
Bernard

2010/4/9 Georgi Kobilarov georgi.kobila...@gmx.de Yesterday  
issued

a challenge on my blog for ideas for concrete linked open data
applications. Because talking about concrete apps helps shaping the
roadmap for the technical questions for the linked data community
ahead.
The
real questions, not the theoretical ones...

Richard MacManus of ReadWriteWeb picked up the challenge:


http://www.readwriteweb.com/archives/web_of_data_what_would_you_

build.php

Let's be creative about stuff we'd build with the web of data.
Assume the
Linked Data Web would be there already, what would build?

Cheers,
Georgi

--
Georgi Kobilarov
Uberblic Labs Berlin
http://blog.georgikobilarov.com




--
Bernard Vatant
Senior Consultant
Vocabulary  Data Engineering
Tel:   +33 (0) 971 488 459
Mail: bernard.vat...@mondeca.com

Mondeca
3, cité Nollez 75018 Paris France
Web:http://www.mondeca.com
Blog:http://mondeca.wordpress.com










Re: Nice Data Cleansing Tool Demo

2010-03-29 Thread Aldo Bucchi
Hi David,

I love it and I NEED it ;)
Awesome work, really.

I heard it will be open source so I will probably be able to extend it
myself, but here are some ideas for (missing?) features:
* Importing custom Lookups/Dictionaries ( to go from text to IDs or
the other way around ). Maybe this is possible using a different hook
for the reconciliation mechanism.
* Related: Plug in other reconciliation services ( not sure how this
stands up to freebase biz alignment )
* Command line engine. To add a GW project as a step in a traditional
transformation job and execute steps sequentially.
* Expose Gazetteers ( dictionaries ) generated within the tool ( when
equating facets )

I have other ideas but I need to try it first; it looks like you've
covered a lot of ground here.

Amazing, Amazing. Thanks!
A


On Sun, Mar 28, 2010 at 8:06 PM, David Huynh dfhu...@alum.mit.edu wrote:
 On Mar/29/10 12:31 am, Kingsley Idehen wrote:

 All,

 A very nice data cleansing tool from David and Co. at Freebase.

 CSVs are clearly the dominant data format in the structured open data realm.
 This tool deals with ETL very well. Of course, for those who appreciate OWL,
 a lot of what's demonstrated in this demo is also achievable via context
 rules. Bottom line (imho), nice tool that will only aid improving Web of
 Linked Data quality at the data set production stage.

 Links:

 1. http://vimeo.com/10081183 -- Freebase Gridworks

 Thanks, Kingsley. The second screencast, by Stefano Mazzocchi, also
 demonstrates a few other interesting features:

     http://www.vimeo.com/10287824

 David




-- 
Aldo Bucchi
skype:aldo.bucchi
http://www.univrz.com/
http://aldobucchi.com/

PRIVILEGED AND CONFIDENTIAL INFORMATION
This message is only for the use of the individual or entity to which it is
addressed and may contain information that is privileged and confidential. If
you are not the intended recipient, please do not distribute or copy this
communication, by e-mail or otherwise. Instead, please notify us immediately by
return e-mail.



Re: Contd: Nice Data Cleansing Tool Demo

2010-03-29 Thread Aldo Bucchi
Hi,

On Mon, Mar 29, 2010 at 3:22 PM, Nathan nat...@webr3.org wrote:
 Georgi Kobilarov wrote:
 Kingsley,

 So by the time you can
 use Pivot on SW/linked data, you will already have solved all the
 interesting and challenging problems.

 This part is what I call an "innovation slot" since we have hooked it
 into our DBMS hosted faceted engine and successfully used it over very
 large data sets.
 Kingsley, I'm wondering: How did you do that? I tried it myself, and
 it doesn't work.
 Did I indicate that my demo instance was public? How did you come to
 overlook that?

 I wasn't referring to a demo of yours, but to the general task of using
 Pivot as a frontend to a faceted browsing backend engine.


 Pivot can't make use of server-side faceted browsing engines.

 Why do you speculate? You are incorrect and Virtuoso *doing* what you
 claim is impossible will be emphatic proof, nice and simple.

 Pivot consumes data from HTTP-accessible collections (which may be static
 or dynamic [1]). A dynamic collection is comprised of CXML resources
 (basically XML).

 I don't speculate. Which parts of my "does not work" and "can't use" did
 sound like speculation?


 You need to send *all* the data to the Pivot client, and it computes
 the facets and performs any filtering operation client-side.
 You make a collection from a huge corpus of data (what I demonstrate), then
 you "Save As" (which I demonstrate as the generation point re. the CXML
 resource), and then Pivot consumes it. All the data is Virtuoso hosted.

 There are two things you are overlooking:

 1. The dynamic collection is produced at the conclusion of Virtuoso-based
 faceted navigation (the interactions basically describe the Facet
 membership to Virtuoso).
 2. Pivot works with static and dynamic collections.
 *I specifically state, this is about using both products together to solve
 a major problem: #1 Faceted Browsing UX, #2 Faceting over a huge data
 corpus.*

 Virtuoso is an HTTP server, it can serve a myriad of representations of
 data to
 user agents (it has its own DBMS hosted XSLT Processor and XML Schema
 Validator with XQuery/XPath to boot, all very old stuff).

 Yes, you make a collection and save that as CXML, exactly! That is not
 using Pivot as a frontend to Virtuoso. Sure, you can construct a small
 dataset from a huge dataset using SPARQL, or your Virtuoso facet engine, or
 whatever, and then export that resulting dataset to Pivot collection XML and
 load that CXML into Pivot. But that is very different from using Pivot as a
 frontend to a huge data set.


 BTW -- how do you think Peter Haase got his variant working? I am sure he
 will shed identical light on the matter for you.

 Yes, Peter, please do. From what I saw in the Fluidops demo, it works
 exactly as I wrote above: a SPARQL query constructs a small dataset from the
 SPARQL endpoint, converts that via a proxy to CXML and loads it into Pivot.

 I don't say Pivot doesn't make a nice demo, or a useful tool to explore a
 small dataset via faceted filtering. But it's not a frontend that can be put
 on top of a faceted browsing engine like
 http://developer.nytimes.com/docs/article_search_api


 The last thing I want is an argument about this; but surely virtually
 every service in the world, faceted browsing included, works by querying
 a large dataset to get a smaller set of results, transforming it into
 the needed format and then displaying it? Sounds like every system I've
 ever seen, from the simple HTML view of an SQL query right up to the
 mighty Google itself.

 Maybe I'm being naive here; what am I missing?

Nathan,

You're not missing much. From what I see:
Georgi's point is that the level of integration is not ideal. It is
basically a "load"-style integration, not a "connect"-style
integration.
Kingsley's point is that they can be integrated, and he has a demo
to prove it.

Both are right ;)

I can relate to both but I lean towards Kingsley's because he is, as
usual, projecting. He knows that this integration is enough to make a
point, and that the rest will happen.
Show the value! The architecture will follow. ( This is what M$ does
all the time. ) Plus they already have a lock-in on the runtime side
and Seadragon tech, so I think they can afford to open the platform up
some more on the integration side of things.

Regards,
A


 Many Regards,

 Nathan





-- 
Aldo Bucchi
skype:aldo.bucchi
http://www.univrz.com/
http://aldobucchi.com/

PRIVILEGED AND CONFIDENTIAL INFORMATION
This message is only for the use of the individual or entity to which it is
addressed and may contain information that is privileged and confidential. If
you are not the intended recipient, please do not distribute or copy this
communication, by e-mail or otherwise. Instead, please notify us immediately by
return e-mail.



National Identification Number URIs ( NIN URIs )

2010-03-07 Thread Aldo Bucchi
Hi,

All countries have a National Identification Number scheme ( NIN ).
http://en.wikipedia.org/wiki/National_identification_number

Also, all countries have code points in different schemes.

So, can't we combine both to create de-facto URIs for people based on
country IDs?

For example: http://dbpedia.org/nin/cl/14168212
That would be me based on my Chilean NIN.
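
A trivial minting function makes the convention concrete ( the
dbpedia.org base is just the proposal above, not an existing service ):

    def nin_uri(country_code, nin, base="http://dbpedia.org/nin/"):
        """Mint a de-facto person URI from an ISO 3166-1 alpha-2
        country code and a national identification number."""
        return "%s%s/%s" % (base, country_code.lower(), nin)

    print(nin_uri("CL", "14168212"))  # -> http://dbpedia.org/nin/cl/14168212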

Is there some namespace for this already?

When you have real-world problems, like we have now in Chile, it is
simple solutions like these that would make integration easier.

For example, we have assigned a namespace for Chilean IDs. But some of
the missing people here are tourists, and the only IFP we have for
them is their national ID ( not emails ).

Thx,
A


-- 
Aldo Bucchi
skype:aldo.bucchi
http://www.univrz.com/
http://aldobucchi.com/

PRIVILEGED AND CONFIDENTIAL INFORMATION
This message is only for the use of the individual or entity to which it is
addressed and may contain information that is privileged and confidential. If
you are not the intended recipient, please do not distribute or copy this
communication, by e-mail or otherwise. Instead, please notify us immediately by
return e-mail.



Re: National Identification Number URIs ( NIN URIs )

2010-03-07 Thread Aldo Bucchi
Hi,

On Sun, Mar 7, 2010 at 7:10 PM, Deborah MacPherson debm...@gmail.com wrote:
 Sounds like a good idea.
 In the US all of the police and law enforcement organizations have unique
 ID's, which the Nlets network has geo-encoded, which in turn helps build
 more intelligent GIS data.
 I work in the building industry with a focus on life safety and fire
 departments - specifically exchanging open floor plans. But fire departments
 don't have nationwide unique IDs. Let alone world wide.
 The more simple solutions that can be created like you have proposed, the
 easier it will be at some point to create maps between all countries have
 code points in different schemes so urgent messages can be directed where
 they need to go at a moments notice.

Well yeah.
I guess the victims' families would pretty much love that. Today.

Creating simple URI schemes that can be transmitted via word of mouth
is crucial at moments like these. I am working with a large team now,
and asking them to craft URIs for their data would go a long way if
those URIs were globally meaningful.

As you say, they could become some sort of discovery hub... or
messaging platform.

"I found this guy, and just by looking at his passport I can craft a
URI. Then I can access all his personal details." And vice versa:
others will know that I found him.

This NIN scheme should be taken seriously...

 Deborah MacPherson
 On Sun, Mar 7, 2010 at 4:03 PM, Aldo Bucchi aldo.buc...@gmail.com wrote:

 Hi,

 All countries have a National Identification Number scheme ( NIN ).
 http://en.wikipedia.org/wiki/National_identification_number

 Also, all countries have code points in different schemes.

 So, can't we combine both to create a de-facto URI for people based on
 country ids?

 For example: http://dbpedia.org/nin/cl/14168212
 That would be me based on my Chilean NIN.

 Is there some namespace for this already?

 When you have real world problems, like we have now in Chile, it is
 simple solutions like these that would make integration easier.

 For example, we have assigned a NS for chilean IDs. But some of the
 missing people here are tourists and we the only IFP we have is their
 national ID ( not emails ).

 Thx,
 A


 --
 Aldo Bucchi
 skype:aldo.bucchi
 http://www.univrz.com/
 http://aldobucchi.com/

 PRIVILEGED AND CONFIDENTIAL INFORMATION
 This message is only for the use of the individual or entity to which it
 is
 addressed and may contain information that is privileged and confidential.
 If
 you are not the intended recipient, please do not distribute or copy this
 communication, by e-mail or otherwise. Instead, please notify us
 immediately by
 return e-mail.




 --
 

 Deborah L. MacPherson CSI CCS, AIA
 Specifications and Research Cannon Design
 Projects Director, AccuracyAesthetics


 The content of this email may contain private
 and confidential information. Do not forward,
 copy, share, or otherwise distribute without
 explicit written permission from all correspondents.

 




-- 
Aldo Bucchi
skype:aldo.bucchi
http://www.univrz.com/
http://aldobucchi.com/

PRIVILEGED AND CONFIDENTIAL INFORMATION
This message is only for the use of the individual or entity to which it is
addressed and may contain information that is privileged and confidential. If
you are not the intended recipient, please do not distribute or copy this
communication, by e-mail or otherwise. Instead, please notify us immediately by
return e-mail.



Re: Crowdsourcing request 2: Crisis platform data from http://chile.ushahidi.com/

2010-03-05 Thread Aldo Bucchi
Hi Kanzaki,

The transformation looks very good. Thanks!
Sorry for the delay; I am killing mails as fast as I can.

Now, how do you keep it updated?
Do you have scripts we can use?

Thanks!
A

On Thu, Mar 4, 2010 at 1:07 PM, KANZAKI Masahide mkanz...@gmail.com wrote:
 Hello Aldo,

 I tried to generate RDF/Turtle from the CSV dump of 536 records, resulting
 in 5085 triples. All URIs are just minted for this. Please see the file
 at

 http://www.kanzaki.com/works/2010/test/chile.ttl

 best regards,

 2010/3/4 Aldo Bucchi aldo.buc...@gmail.com:
 Hi,

 Here's another RDF conversion task that is in the queue and it falls
 in the public domain.

 Ushahidi is a crisis management platform that has been used in the
 Haiti crisis and others, and is being used in Chile as well via
 http://chile.ushahidi.com/

 It provides a facility to download the data as CSV from:
 http://chile.ushahidi.com/download

 Again, the requirement is to transform this to RDF using pertinent
 ontologies, doing it as richly as possible.
 We then slurp it into a Virtuoso instance where we will try to link
 this with main organizational entities in the country and data from
 other feeds.

 Anyone? :)

 Thanks!
 A

 --
 @prefix : <http://www.kanzaki.com/ns/sig#> .  :from [:name
 "KANZAKI Masahide"; :nick "masaka"; :email "mkanz...@gmail.com"].




-- 
Aldo Bucchi
skype:aldo.bucchi
http://www.univrz.com/
http://aldobucchi.com/

PRIVILEGED AND CONFIDENTIAL INFORMATION
This message is only for the use of the individual or entity to which it is
addressed and may contain information that is privileged and confidential. If
you are not the intended recipient, please do not distribute or copy this
communication, by e-mail or otherwise. Instead, please notify us immediately by
return e-mail.



Re: Crowdsourcing request: Google People Finder Data as RDF

2010-03-05 Thread Aldo Bucchi
Hi Stephane,

On Thu, Mar 4, 2010 at 5:39 PM, Stephane Fellah fella...@gmail.com wrote:
 Hi,
 I am interested in helping with this project. I have about 10 years of
 experience with semantic web technology and it is my dog food every day. I
 had the idea of doing it during the Haiti earthquake. I looked at the People
 Finder Interchange Format (PFIF) that is used by
 Google: http://zesty.ca/pfif/1.2/ . The problem with the XML format is mainly
 its fixed structure and the difficulty of extending it for specific purposes
 (like addresses).

Great link. I had not found the spec anywhere ;)

 I would be interested to work on developing a core ontology that would fix
 the defects of PFIF and then use it as a foundation to develop extensions. There

OK. But at this point we need something that works.
We don't want to extend the Google service; we want to consume it and
integrate it with other services.

 are other ontologies that could be taken into account, such as Sahana:
 http://ontology.nursix.org/sahana-person.owl . I think it is important we
 do it right rather than just doing a straight conversion from the PFIF
 format. It requires some effort but it should pay off in the long term.

"Long term" is the key word here... we don't have much time ;)
We're looking for people and guiding rescue teams.

 I would appreciate if you can tell me where to start (forum, wiki, code base
 ..etc)

Well, all forums are in Spanish. There are some small English hubs, for example:
http://wiki.crisiscommons.org/wiki/Chile/2010_2_27_Earthquake

I think a simple RDF converter should be easy. I just don't have the
time for it. Other teams are using the data as-is. Time is critical;
that's why we're asking for help.

Thanks!


 Best regards
 Stephane Fellah


 On Thu, Mar 4, 2010 at 1:12 PM, Bill Roberts b...@swirrl.com wrote:

 Hi Aldo - I'd like to help, but I see you posted your mail a few hours
 ago.  Do you have updated information on what still needs done?  Do you have
 a wiki or similar to coordinate volunteer programming efforts?

 Regards

 Bill


 On 4 Mar 2010, at 14:06, Aldo Bucchi wrote:

  Hi,
 
  As most of you heard things were a bit shaky down here in Chile. We
  have some requests and hope you guys can help. This is a moment to
  prove what we always boast about: that Linked Data can solve real
  problems.
 
  Google provides a people finder service
  (http://chilepersonfinder.appspot.com/) which is right now
  centralizing some ( but not all ) of the missing people data. This
  service is OK but it lacks some features plus we need to integrate
  with other sources to perform analysis and aid our rescue teams /
  alleviate families.
 
  This is serious matter, but it is indeed taken a bit lightly by
  existing software. ( there is a tradeoff between the amount of
  structure you can impose and ease of use in the front-line ).
 
  What we would love to have is a way to access all feeds from
  http://chilepersonfinder.appspot.com/ as RDF
 
  We already have some databases operating on these feeds, but we're
  still far away a clean solution because of its loose structure ( take
  a look and you'll see what I mean ).
 
  Who wants to take a shot at this?
 
  Requirements.
  - Take all feeds originating from
  http://chilepersonfinder.appspot.com/
  - Generate an initial RDF dump ( big TTL file )
  - Generate Incremental RDF dumps every hour
 
  The transformation should make its best guess at the ideal data
  structure and try not to lose granularity, but shield us a bit from
  this feed-based model.
 
  We then take care of downloading this, integrating with other systems,
  further processing, geocoding, etc.
 
  There's a lot of work to do and the more we can outsource, the better.
 
  On Friday ( tomorrow ) there will be the first nation-wide
  announcement of our search platform and we expect lots of people to
  use our services. So this is something really urgent and really,
  really important for those who need it.
 
  Ah. Volunteers are moving all this data into a Virtuoso instance that
  will also have more stuff. It will be available soon at
  http://opendata.cl/ so stay tuned.
 
  We really wish we had something like DBpedia in place by now; it would
  make all this much easier. But now is the time.
  Guys, the tsunami casualties could have been avoided; it was all about
  misinformation.
  Same goes for relief efforts. They are not optimal and this is all
  about data in the end.
 
  I know you know how valuable data is. But it is now that you can
  really make your point! Triple by Triple.
 
  Thanks!
  A
 
  --
  Aldo Bucchi
  skype:aldo.bucchi
  http://www.univrz.com/
  http://aldobucchi.com/
 
  PRIVILEGED AND CONFIDENTIAL INFORMATION
  This message is only for the use of the individual or entity to which it
  is
  addressed and may contain information that is privileged and
  confidential. If
  you are not the intended recipient, please do not distribute or copy
  this
  communication, by e-mail or otherwise. Instead, please notify

Crowdsourcing request: Google People Finder Data as RDF

2010-03-04 Thread Aldo Bucchi
Hi,

As most of you heard things were a bit shaky down here in Chile. We
have some requests and hope you guys can help. This is a moment to
prove what we always boast about: that Linked Data can solve real
problems.

Google provides a people finder service
(http://chilepersonfinder.appspot.com/) which is right now
centralizing some ( but not all ) of the missing people data. This
service is OK but it lacks some features plus we need to integrate
with other sources to perform analysis and aid our rescue teams /
alleviate families.

This is a serious matter but it is indeed taken a bit lightly by
existing software. ( there is a tradeoff between the amount of
structure you can impose and ease of use in the front-line ).

What we would love to have is a way to access all feeds from
http://chilepersonfinder.appspot.com/ as RDF

We already have some databases operating on these feeds, but we're
still far away from a clean solution because of its loose structure ( take
a look and you'll see what I mean ).

Who wants to take a shot at this?

Requirements.
- Take all feeds originating from http://chilepersonfinder.appspot.com/
- Generate an initial RDF dump ( big TTL file )
- Generate Incremental RDF dumps every hour

The transformation should do its best guess at the ideal data
structure and try not to lose granularity but shield us a bit from
this feed based model.
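
A rough Python sketch of that feed-to-TTL step, in case someone wants a
starting point. The feed path and the PFIF namespace here are my guesses,
so check them against the actual Atom output before trusting anything:

    import feedparser
    from rdflib import Graph, Literal, Namespace, URIRef
    from rdflib.namespace import FOAF, RDF

    # Both of these are assumptions -- verify against the live feed.
    FEED_URL = "http://chilepersonfinder.appspot.com/feeds/person"
    PFIF = Namespace("http://zesty.ca/pfif/1.2/")

    g = Graph()
    g.bind("foaf", FOAF)
    g.bind("pfif", PFIF)

    for entry in feedparser.parse(FEED_URL).entries:
        person = URIRef(entry.id)  # reuse the Atom entry id as the subject URI
        g.add((person, RDF.type, FOAF.Person))
        if "title" in entry:
            g.add((person, FOAF.name, Literal(entry.title)))
        if "updated" in entry:
            g.add((person, PFIF.source_date, Literal(entry.updated)))

    # Re-run hourly ( with a since= parameter, if the feed supports one )
    # to produce the incremental dumps.
    g.serialize("personfinder-dump.ttl", format="turtle")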

We then take care of downloading this, integrating with other systems,
further processing, geocoding, etc.

There's a lot of work to do and the more we can outsource, the better.

On Friday ( tomorrow ) there will be the first nation-wide
announcement of our search platform and we expect lots of people to
use our services. So this is something really urgent and really,
really important for those who need it.

Ah. Volunteers are moving all this data into a Virtuoso instance that
will also have more stuff. It will be available soon at
http://opendata.cl/ so stay tuned.

We really wish we had something like DBpedia in place by now; it would
make all this much easier. But now is the time.
Guys, the tsunami casualties could have been avoided; it was all about
misinformation.
Same goes for relief efforts. They are not optimal and this is all
about data in the end.

I know you know how valuable data is. But it is now that you can
really make your point! Triple by Triple.

Thanks!
A

-- 
Aldo Bucchi
skype:aldo.bucchi
http://www.univrz.com/
http://aldobucchi.com/

PRIVILEGED AND CONFIDENTIAL INFORMATION
This message is only for the use of the individual or entity to which it is
addressed and may contain information that is privileged and confidential. If
you are not the intended recipient, please do not distribute or copy this
communication, by e-mail or otherwise. Instead, please notify us immediately by
return e-mail.



Crowdsourcing request 2: Crisis platform data from http://chile.ushahidi.com/

2010-03-04 Thread Aldo Bucchi
Hi,

Here's another RDF conversion task that is in the queue and it falls
in the public domain.

Ushahidi is a crisis management platform that has been used in the
Haiti crisis and others, and is being used in Chile as well via
http://chile.ushahidi.com/

It provides a facility to download the data as CSV from:
http://chile.ushahidi.com/download

Again, the requirement is to transform this to RDF using pertinent
ontologies, doing it as richly as possible.
We then slurp it into a Virtuoso instance where we will try to link
this with main organizational entities in the country and data from
other feeds.
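
To make the ask concrete, a first-pass Python sketch. The column names
and the report URI scheme are guesses from memory, so adjust them to the
real CSV header:

    import csv
    from rdflib import Graph, Literal, Namespace
    from rdflib.namespace import DCTERMS, RDFS

    GEO = Namespace("http://www.w3.org/2003/01/geo/wgs84_pos#")
    REPORT = Namespace("http://chile.ushahidi.com/report/")  # assumed URI scheme

    g = Graph()
    g.bind("geo", GEO)
    g.bind("dcterms", DCTERMS)

    with open("ushahidi-export.csv", newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):
            # Column names are assumptions -- adjust to the actual header.
            report = REPORT[row["#"]]
            g.add((report, RDFS.label, Literal(row["INCIDENT TITLE"])))
            g.add((report, DCTERMS.date, Literal(row["INCIDENT DATE"])))
            g.add((report, DCTERMS.description, Literal(row["DESCRIPTION"])))
            g.add((report, GEO.lat, Literal(row["LATITUDE"])))
            g.add((report, GEO.long, Literal(row["LONGITUDE"])))

    g.serialize("ushahidi-chile.ttl", format="turtle")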

Anyone? :)

Thanks!
A



-- 
Aldo Bucchi
skype:aldo.bucchi
http://www.univrz.com/
http://aldobucchi.com/

PRIVILEGED AND CONFIDENTIAL INFORMATION
This message is only for the use of the individual or entity to which it is
addressed and may contain information that is privileged and confidential. If
you are not the intended recipient, please do not distribute or copy this
communication, by e-mail or otherwise. Instead, please notify us immediately by
return e-mail.



Earthquake in Chile; Ideas? to help

2010-02-28 Thread Aldo Bucchi
Hi,

As many of you probably know, we just had a mega quake here in Chile.
This next week will be critical in terms of logistics, finding lost
people... and as you probably know it is all about information in the
end.

In a scenario like this, everything is chaotic.

We will soon have a SPARQL endpoint available with all the data we can
find, hoping that people around the world can extract some insights.
In the meantime, I would love to hear any kind of ideas.

They needn't be high tech. Sometimes simple ideas go a long way!

Thanks!
A


-- 
Aldo Bucchi
skype:aldo.bucchi
http://www.univrz.com/
http://aldobucchi.com/

PRIVILEGED AND CONFIDENTIAL INFORMATION
This message is only for the use of the individual or entity to which it is
addressed and may contain information that is privileged and confidential. If
you are not the intended recipient, please do not distribute or copy this
communication, by e-mail or otherwise. Instead, please notify us immediately by
return e-mail.



Re: DBpedia-based entity recognition service / tool?

2010-02-04 Thread Aldo Bucchi

Nathan,

On Feb 4, 2010, at 8:10, Nathan nat...@webr3.org wrote:


Juan Sequeda wrote:

we followed several domain term extraction techniques.


any chance you could name drop / point to a few of the techniques -
very interested in this myself and in all honesty, no idea where to start
(other than a crude string split and check word combinations against a
dictionary - not very practical!)


--- http://gate.ac.uk/



Many Regards,

Nathan





Fresnel: State of the Art?

2010-02-01 Thread Aldo Bucchi
Hi,

I was looking at the current JFresnel codebase and the project seems
to have little movement. I was wondering if this is the state of the
art regarding Declarative Presentation Knowledge for RDF or have
efforts moved elsewhere and I have missed it?

Thanks!
A

-- 
Aldo Bucchi
skype:aldo.bucchi
http://www.univrz.com/
http://aldobucchi.com/

PRIVILEGED AND CONFIDENTIAL INFORMATION
This message is only for the use of the individual or entity to which it is
addressed and may contain information that is privileged and confidential. If
you are not the intended recipient, please do not distribute or copy this
communication, by e-mail or otherwise. Instead, please notify us immediately by
return e-mail.



Enterprise level RDF Scripting ( was: [foaf-protocols] foaf classes for primary Topic )

2010-02-01 Thread Aldo Bucchi
Hi,

On Mon, Feb 1, 2010 at 9:11 AM, Story Henry henry.st...@bblfish.net wrote:

 On 31 Jan 2010, at 17:25, Peter Williams wrote:

 Let's build that linq2rdfa driver! It’s the killer app for the semweb, in
 Microsoft land.

Cool!
This topic is coming up again ;)


 I agreee. From the Java perspective this is very much what I found too.

 When I first learned RDF I was really intrigued about how it related to 
 Object Oriented Programming, which I was familiar with. So I tried to write a 
 mapper for it to Java, which to my astonishment was really a lot easier than 
 I thought. This really helped me understand how OO programming and the 
 semantic web mesh together. Having these tools for run of the mill 
 programmers is really important as Peter points out to get them to overcome 
 the fear of the new: by bringing the semweb back to something they know. This 
 thought is what led me to develop the Sommer library:

   https://sommer.dev.java.net/sommer/index.html

 Doing this made it clear that really the major difference is that we name 
 things and fields with URIs.

 The next difference is that of Graphs, which is a little more difficult to 
 merge correctly into OO languages (or for that matter most traditional 
 programming languages).

 Still, before tools such as Hibernate came out for Java, people worked 
 with SQL directly. So people lived with this pain point for a long 
 time... To start off we need really good examples of the usage of RDF. And I 
 think foaf+ssl is that key driver, because it has the ability to give people 
 access to things they would not otherwise have had access to: eg: going to a 
 party.

Henry:
Here's an idea. Instead of trying to bake this cake by ourselves, can
you tap into Martin Odersky's team and propose some use cases? I could
help with that if you get his attention.
He's the guy behind Scala and they are on a rampage these days.

Now. Even w/o tweaking the compiler or the grammar to support URIs,
the DSL capabilities of Scala make it possible to do beautiful things:
http://code.google.com/p/scardf/

That project is a good start but there are still some issues to address.

I have my own experimental framework but, just as you describe, the
number-one issue is native type integration and we should explore this at a lower
level. If we could just get native URIs / XSD datatypes things would
be awesome.

Then, Number 2 is graph traversal which can be solved by having some
sort of path language baked in, ( which is compiled to a single query
underneath instead of being evaluated step by step via API, which is
the case in scardf ) .

Number 3 is datasource management. Tying up a datasource as an
implicit value or bound to a thread. Nothing new under the Sun here (
pun intended ) ;)
This is where things get interesting as the datasource could actually
be a full blown Linked Data client such as Virtuoso with Spongers
turned on.

Finally. My initial look at the Scala grammar suggests that we should
be able to fit in URIs. There is support for XML literals and URIs are
easily recognizable as they have a strict syntax. Qnames could also
happen.
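
Going back to Number 2 for a second, here is the compile-to-one-query
idea as a toy sketch ( in Python only because it is quicker to write
down; the real thing would be a Scala DSL ):

    # Toy path language: records steps, then compiles them into a single
    # SPARQL query instead of firing one API call per hop.
    class Path:
        def __init__(self, start, steps=()):
            self.start, self.steps = start, tuple(steps)

        def __truediv__(self, prop):  # path / "foaf:knows" adds a hop
            return Path(self.start, self.steps + (prop,))

        def to_sparql(self):
            patterns, subj = [], "<%s>" % self.start
            for i, prop in enumerate(self.steps):
                patterns.append("%s %s ?v%d ." % (subj, prop, i))
                subj = "?v%d" % i
            # PREFIX declarations omitted for brevity.
            return "SELECT %s WHERE { %s }" % (subj, " ".join(patterns))

    p = Path("http://example.org/alice") / "foaf:knows" / "foaf:name"
    print(p.to_sparql())
    # SELECT ?v1 WHERE { <http://example.org/alice> foaf:knows ?v0 .
    #                    ?v0 foaf:name ?v1 . }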

Regards,
A


        Henry










-- 
Aldo Bucchi
skype:aldo.bucchi
http://www.univrz.com/
http://aldobucchi.com/

PRIVILEGED AND CONFIDENTIAL INFORMATION
This message is only for the use of the individual or entity to which it is
addressed and may contain information that is privileged and confidential. If
you are not the intended recipient, please do not distribute or copy this
communication, by e-mail or otherwise. Instead, please notify us immediately by
return e-mail.



Re: Introducing BOLD (Business Of Linked Data) Discussion Space

2009-12-01 Thread Aldo Bucchi
Hi,

On Tue, Dec 1, 2009 at 1:15 PM, Kingsley Idehen kide...@openlinksw.com wrote:
 All,

 A while back there was a request (by Aldo Bucchi, I believe) for a more
 business and marketing oriented discussion space re. Linked Data. The need
 for what Aldo requested has remained, but nothing ever came of his request.
 Thus, I've opened up a Google discussion space for this very purpose, called
 BOLD [1] :-)

Excellent Kingsley!
I think we're all running into this space now. Meeting with customers
and having to come up with pitches that are both
* Attractive enough to compete with current established technologies
* Grounded in practical reality ( it is easy to fall into the
snake-oil side of things when talking about the future )

In my experience this has been very hard but it's getting easier thanks
to the amount of technology and showcases ( in particular, DBpedia
and the rest of the LOD setup as well as the many consultants and
other vendors that are emerging ).

Also, and this is to Kingsley's credit: having Virtuoso has been
invaluable in terms of being able to relate to something with built-in
prestige. It is quite different to say:
- This is the future
Than to say
- This tool here, which has a proven track record and looks like
something you understand ( a database ), allows you to do this and
that. And I can show you.


 In retrospect, this discussion space should have been created much earlier;
 especially, as messaging surrounding Linked Data remains challenging esp.
 with those outside the core Semantic Web community.

 Anyway, if you are more interested in the business and marketing aspects of
 HTTP based Linked Data, you now have a space for open discussion and
 brainstorming.

Let's get it on!

( I have linkeddata.biz BTW and was going to do something along this
line. But, as usual, Kingsley "speedy" Idehen beat me to it ;).

Regards,
A

PS: I copy this email to the BOLD list and if you read carefully I
already stated a few core interesting points out of my own experience
living *and surviving* by selling Linked Data Only projects and I
would love to start a debate on that.


 Enjoy!

 Link:

 1. http://groups.google.com/group/business-of-linked-data-bold

 --


 Regards,

 Kingsley Idehen       Weblog: http://www.openlinksw.com/blog/~kidehen
 President  CEO OpenLink Software     Web: http://www.openlinksw.com









-- 
Aldo Bucchi
skype:aldo.bucchi
http://www.univrz.com/
http://aldobucchi.com/

PRIVILEGED AND CONFIDENTIAL INFORMATION
This message is only for the use of the individual or entity to which it is
addressed and may contain information that is privileged and confidential. If
you are not the intended recipient, please do not distribute or copy this
communication, by e-mail or otherwise. Instead, please notify us immediately by
return e-mail.



Need help mapping two letter country code to URI

2009-11-09 Thread Aldo Bucchi
Hi,

I found a dataset that represents countries as two letter country
codes: DK, FI, NO, SE, UK.
I would like to turn these into URIs of the actual countries they represent.

( I have no idea on whether this follows an ISO standard or is just
some private key in this system ).

Any ideas on a set of candidate URIs? I would like to run a complete
coverage test and take care I don't introduce distortion ( that is
pretty easy by doing some heuristic tests against labels, etc ).

There are some border cases that suggest this isn't ISO 3166-1, but I
am not sure yet. ( and if it were, which widely used URIs are based on
this standard? ).
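
( My rough plan, as a Python sketch: dereference one candidate URI per
code and check labels. The DBpedia candidates below are hypothetical
picks to be validated, not a recommendation: )

    from rdflib import Graph, URIRef
    from rdflib.namespace import RDFS

    # Hypothetical candidate mapping -- the thing to be validated.
    candidates = {
        "DK": "http://dbpedia.org/resource/Denmark",
        "FI": "http://dbpedia.org/resource/Finland",
        "NO": "http://dbpedia.org/resource/Norway",
        "SE": "http://dbpedia.org/resource/Sweden",
        "UK": "http://dbpedia.org/resource/United_Kingdom",
    }
    expected = {"DK": "Denmark", "FI": "Finland", "NO": "Norway",
                "SE": "Sweden", "UK": "United Kingdom"}

    for code, uri in candidates.items():
        g = Graph()
        g.parse(uri)  # follows the Linked Data 303 / conneg dance
        labels = {str(l) for l in g.objects(URIRef(uri), RDFS.label)}
        print(code, "ok" if expected[code] in labels else "CHECK BY HAND", uri)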

Thanks!
A

-- 
Aldo Bucchi
skype:aldo.bucchi
http://www.univrz.com/
http://aldobucchi.com/

PRIVILEGED AND CONFIDENTIAL INFORMATION
This message is only for the use of the individual or entity to which it is
addressed and may contain information that is privileged and confidential. If
you are not the intended recipient, please do not distribute or copy this
communication, by e-mail or otherwise. Instead, please notify us immediately by
return e-mail.



RDF and FSLDM ( Financial Services Logical Data Model )

2009-08-24 Thread Aldo Bucchi
Hi all,

Has anyone seen, thought, or heard about a FSLDM ( Financial Services
Logical Data Model ) OWL ontology/vocab?

Regards,
A

-- 
Aldo Bucchi
skype:aldo.bucchi
http://www.univrz.com/
http://aldobucchi.com/

PRIVILEGED AND CONFIDENTIAL INFORMATION
This message is only for the use of the individual or entity to which it is
addressed and may contain information that is privileged and confidential. If
you are not the intended recipient, please do not distribute or copy this
communication, by e-mail or otherwise. Instead, please notify us immediately by
return e-mail.



Re: [HELP] Can you please update information about your dataset?

2009-08-11 Thread Aldo Bucchi

Hi,

On Aug 11, 2009, at 13:46, Kingsley Idehen kide...@openlinksw.com  
wrote:



Leigh Dodds wrote:

Hi,

I've just added several new datasets to the Statistics page that
weren't previously listed. It's not really a great user experience
editing the wiki markup and manually adding up the figures.

So, thinking out loud, I'm wondering whether it might be more
appropriate to use a Google spreadsheet and one of their submission
forms for the purposes of collecting the data. A little manual
editing to remove duplicates might make managing this data a little
easier. Especially as there are also pages that separately list
the available SPARQL endpoints and RDF dumps.

I'm sure we could create something much better using VoID, etc. but for
now, maybe using a slightly better tool would give us a little more
progress? It'd be a snip to dump out the Google Spreadsheet data
programmatically too, which'd be another improvement on the current
situation.

What does everyone else think?

Nice Idea! Especially as Google Spreadsheet to RDF is just about  
RDFizers for the Google Spreadsheet API :-)


Hehe. I have this in my todo (literally). A website that exposes a
Google spreadsheet as a SPARQL endpoint. Internally we use it as a UI to
quickly create config files et al.

But it will remain in my todo forever... ;)

Kingsley, this could be sponged. The trick is that the spreadsheet  
must have an accompanying page/sheet/book with metadata (the NS or  
explicit URIs for cols).
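
The core of it is tiny. A rough Python sketch; the CSV export URL shape
and the column-metadata convention are exactly the parts that would need
pinning down:

    import csv, io, urllib.request
    from rdflib import Graph, Literal, URIRef

    # Assumed: a published sheet with a CSV export URL of roughly this shape.
    CSV_URL = "https://docs.google.com/spreadsheets/d/SHEET_KEY/export?format=csv"

    raw = urllib.request.urlopen(CSV_URL).read().decode("utf-8")
    rows = list(csv.reader(io.StringIO(raw)))

    # Assumed convention: row 0 holds predicate URIs, column 0 subject URIs.
    predicates = rows[0][1:]
    g = Graph()
    for row in rows[1:]:
        for pred, value in zip(predicates, row[1:]):
            if value:
                g.add((URIRef(row[0]), URIRef(pred), Literal(value)))

    # Any rdflib graph is SPARQL-queryable -- that's the whole trick.
    for s, p, o in g.query("SELECT ?s ?p ?o WHERE { ?s ?p ?o } LIMIT 5"):
        print(s, p, o)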




Kingsley

Cheers,

L.

2009/8/7 Jun Zhao jun.z...@zoo.ox.ac.uk:


Dear all,

We are planning to produce an updated data cloud diagram based on the
dataset information on the esw wiki page:
http://esw.w3.org/topic/TaskForces/CommunityProjects/LinkingOpenData/DataSets/Statistics

If you have not published your dataset there yet and you would like
your dataset to be included, can you please add your dataset there?

If you have an entry there for your dataset already, can you please
update information about your dataset on the wiki?

If you cannot edit the wiki page any more because of the recent update
of the esw wiki editing policy, you can send the information to me or
Anja, who is cc'ed. We can update it for you.

If you know your friends have datasets on the wiki but are not on the
mailing list, can you please kindly forward this email to them? We
would like to get the data cloud as up-to-date as possible.

For this release, we will use the above wiki page as the information
gathering point. We do apologize if you have published information
about your dataset on other web pages and this request would mean
extra work for you.

Many thanks for your contributions!

Kindest regards,

Jun
















--


Regards,

Kingsley Idehen  Weblog: http://www.openlinksw.com/blog/~kidehen
President  CEO OpenLink Software Web: http://www.openlinksw.com









Re: .htaccess a major bottleneck to Semantic Web adoption / Was: Re: RDFa vs RDF/XML and content negotiation

2009-07-02 Thread Aldo Bucchi
Hi,

One solution for this is for someone to create and distribute a simple
to deploy Linked Data server with integrated content negotiation ( CN )
that can cover common personal ( introductory ) use cases and eventually scale to enterprise
demands.

And maybe it could even be opensource and already packaged to be
deployed via Amazon EC2.

Oh wait...!

Regards,
A

PS. And then, someone else could build an alternative, validate the
market, etc. The same old story ;)

-- 
Aldo Bucchi
skype:aldo.bucchi
http://www.univrz.com/
http://aldobucchi.com/

PRIVILEGED AND CONFIDENTIAL INFORMATION
This message is only for the use of the individual or entity to which it is
addressed and may contain information that is privileged and confidential. If
you are not the intended recipient, please do not distribute or copy this
communication, by e-mail or otherwise. Instead, please notify us immediately by
return e-mail.



Commercial Linked Data Initiatives list?

2009-07-02 Thread Aldo Bucchi
Hi,

I was wondering if anyone is keeping track of major commercial Linked
Data initiatives such as BBC, NYT, BestBuy, etc.
A vocabulary for annotating these would be oh so meta-meta ;)
Something like

select distinct ?initiative ?customer ?sector where { ?customer
<initiative> ?initiative . ?customer <industrysector> ?sector }

Would be quite interesting to have.
Oh, and plotting them on a timeline!

Regards,
A

-- 
Aldo Bucchi
skype:aldo.bucchi
http://www.univrz.com/
http://aldobucchi.com/

PRIVILEGED AND CONFIDENTIAL INFORMATION
This message is only for the use of the individual or entity to which it is
addressed and may contain information that is privileged and confidential. If
you are not the intended recipient, please do not distribute or copy this
communication, by e-mail or otherwise. Instead, please notify us immediately by
return e-mail.



Fwd: Commercial Linked Data Initiatives list?

2009-07-02 Thread Aldo Bucchi
Oops. We had gone off the list.


-- Forwarded message --
From: Aldo Bucchi aldo.buc...@gmail.com
Date: Thu, Jul 2, 2009 at 9:39 PM
Subject: Re: Commercial Linked Data Initiatives list?
To: Adrian Walker adriandwal...@gmail.com


Adrian,

On Thu, Jul 2, 2009 at 6:00 PM, Adrian Walker adriandwal...@gmail.com wrote:
 Hi Aldo --

 It's one thing to have linked data.

 It's quite another thing to enable non-programmers to ask questions that
 no-one has thought to pre-program for them.

 Fortunately, projects such as the Tabulator and the system online at the
 site below [1] are starting to point the way towards higher level platforms
 on which non-programmers can put linked data to their own uses.

 Perhaps a useful extension to TIMBL's recent paper [2] might be a list of
 such platforms, and other design approaches?  (Tim does mention the
 Tabulator in passing).

Yes you're right, eventually we will be able to provide casual users
with such tools.
Note that faceted browsing is already perhaps usable for the task I describe.

However, my post was mainly aimed at the data side. A programmer ( one
of us ) could craft a specific visualization for this if the data was
available.

I think it would be great to have such visualization, or provide the
fodder for others to craft it. This could depict the current state of
the Linked Data battle in commercial land ;)

Putting the wins up for others to see is always positive. Putting them
out in Linked Data for real time consumption is interesting in many
ways.

Thanks,
A





    Cheers,  -- Adrian

 [1] Internet Business Logic
 A Wiki and SOA Endpoint for Executable Open Vocabulary English over SQL and
 RDF
 Online at www.reengineeringllc.com    Shared use is free

 [2] http://www.w3.org/DesignIssues/GovData.html

 On Thu, Jul 2, 2009 at 4:54 PM, Aldo Bucchi aldo.buc...@gmail.com wrote:

 Hi,

 I was wondering if anyone is keeping track of major commercial Linked
 Data initiatives such as BBC, NYT, BestBuy, etc.
 A vocabulary for annotating these would be oh so meta-meta ;)
 Something like

 select distinct ?initiative ?customer ?sector where { ?customer
 <initiative> ?initiative . ?customer <industrysector> ?sector }

 Would be quite interesting to have.
 Oh, and plotting them on a timeline!

 Regards,
 A

 --
 Aldo Bucchi
 skype:aldo.bucchi
 http://www.univrz.com/
 http://aldobucchi.com/

 PRIVILEGED AND CONFIDENTIAL INFORMATION
 This message is only for the use of the individual or entity to which it
 is
 addressed and may contain information that is privileged and confidential.
 If
 you are not the intended recipient, please do not distribute or copy this
 communication, by e-mail or otherwise. Instead, please notify us
 immediately by
 return e-mail.






--
Aldo Bucchi
skype:aldo.bucchi
http://www.univrz.com/
http://aldobucchi.com/

PRIVILEGED AND CONFIDENTIAL INFORMATION
This message is only for the use of the individual or entity to which it is
addressed and may contain information that is privileged and confidential. If
you are not the intended recipient, please do not distribute or copy this
communication, by e-mail or otherwise. Instead, please notify us immediately by
return e-mail.



-- 
Aldo Bucchi
skype:aldo.bucchi
http://www.univrz.com/
http://aldobucchi.com/

PRIVILEGED AND CONFIDENTIAL INFORMATION
This message is only for the use of the individual or entity to which it is
addressed and may contain information that is privileged and confidential. If
you are not the intended recipient, please do not distribute or copy this
communication, by e-mail or otherwise. Instead, please notify us immediately by
return e-mail.



Linked Data tech market; Data Web Servers, anyone?

2009-06-25 Thread Aldo Bucchi
Hi guys,

Believe it or not, we will soon have a Data Web Server market ( just
as we once had a Document Web Server market ). Data Web Servers publish
data/rdbms to the Data Web. Just like Document Web Servers publish
your document/file system to the web.

Of course, Data Web Servers will also be Document Web Servers.

In this hypothetical context, what would be a proper way of describing RDFa?
A way of embedding data into Document Web Servers?
Probably, yes.

Anyway, for me RDF is a non-intrusive augmentation of the URI address
space by adding a second dimension ( columns ), just like URIs were a
non-intrusive augmentation of the IP address space by adding more rows
( to reach documents *within* computers ).

It basically answers this question: how do I put my spreadsheet/table
on the web?

I have been selling semweb as middleware for 6 years using such
stories and explanations, and they just make total sense.  I have
accumulated quite a bit of sales material.

So, here's a ( totally raw ) sampler ;)

http://files.getdropbox.com/u/497895/semweb-animation-reel.mov

OK. Now annihilate me ;)
Blasphemy!

But, seriously. Next time you're in front of a customer and he's
showing you the door after 20 mins of SW story, you'll probably
remember this ;)

Regards,
A

PS. Anyway, when the MKT guys kick in, all the SW purity will be left
to the annals of history. It's the same old story ;)

-- 
Aldo Bucchi
skype:aldo.bucchi
http://www.univrz.com/
http://aldobucchi.com/

PRIVILEGED AND CONFIDENTIAL INFORMATION
This message is only for the use of the individual or entity to which it is
addressed and may contain information that is privileged and confidential. If
you are not the intended recipient, please do not distribute or copy this
communication, by e-mail or otherwise. Instead, please notify us immediately by
return e-mail.



Re: vocabularies and data alignment

2009-06-12 Thread Aldo Bucchi
Hi,

 * Francois said, initially:
There has been a couple of discussions already on this list on the need
for a vocabulary to represent correspondences between terms of different
vocabularies

 * Hugh asked:
Are you after the algorithms we use to identify when two instances
are the same?

 * Francois says:
Yes!

So, terms or instances?

Thanks,
A

On Fri, Jun 12, 2009 at 3:57 AM, François
Scharffe francois.schar...@inria.fr wrote:
 Hugh Glaser wrote:

 Hi,
 To put it in simple terms for me :-)
 Are you after the algorithms we use to identify when two instances are the
 same?
 Best
 Hugh

 Yes !

 François


 On 11/06/2009 12:57, François Scharffe francois.schar...@inria.fr
 wrote:

 Dear LODers,

 There has been a couple of discussions already on this list on the need
 for a vocabulary to represent correspondences between terms of different
 vocabularies. We also saw recently various tools (e.g. Silk, ODDlinker)
 allowing to automatically interlink datasets given a specification of
 what should be linked.

 However, there is currently no common way to publish and share this
 information (i.e., not the links but the way to generate them, see [1]
 for precision).

 We are setting up an experiment [1] to see if it is possible to provide
 useful services from this data. But for that purpose we need your help.

 So this is a call for contribution: we are collecting any specification
 of link generator for the LOD graph.

 Of course, do not hesitate to comment on the idea or to tell us if you
 want to be involved.

 We promise a report on this by the end of summer (northern hemisphere :).

 Cheers,
 François

 [1] http://melinda.inrialpes.fr













-- 
Aldo Bucchi
U N I V R Z
Office: +56 2 795 4532
Mobile:+56 9 7623 8653
skype:aldo.bucchi
http://www.univrz.com/
http://aldobucchi.com/

PRIVILEGED AND CONFIDENTIAL INFORMATION
This message is only for the use of the individual or entity to which it is
addressed and may contain information that is privileged and confidential. If
you are not the intended recipient, please do not distribute or copy this
communication, by e-mail or otherwise. Instead, please notify us immediately by
return e-mail.
INFORMACIÓN PRIVILEGIADA Y CONFIDENCIAL
Este mensaje está destinado sólo a la persona u organización al cual está
dirigido y podría contener información privilegiada y confidencial. Si usted no
es el destinatario, por favor no distribuya ni copie esta comunicación, por
email o por otra vía. Por el contrario, por favor notifíquenos inmediatamente
vía e-mail.



Re: vocabularies and data alignment

2009-06-12 Thread Aldo Bucchi
François,

On Fri, Jun 12, 2009 at 9:07 AM, François
Scharffe francois.schar...@inria.fr wrote:
 Kingsley Idehen wrote:

 François Scharffe wrote:

 Hugh Glaser wrote:

 Hi,
 To put it in simple terms for me :-)
 Are you after the algorithms we use to identify when two instances are
 the same?
 Best
 Hugh

 Yes !

 François

 So if the answer is yes, then do you mean things in the ABox and TBox?
 We must be clear here, as being too generic leads to confusion.

 Link generators are working at the instance level (ABox), they generate
 links between instances. They need some input, a specification of what
 should be interlinked. We think this specification can be lifted to an
 alignment between vocabularies (TBoxes). Well we are not 100% sure this will
 work, that's why we would like to get such tools and their linkage
 specifications.
 I can take an example, interlinking persons: one dataset is described with
 FOAF, the other with VCard.
 ?x foaf:name ?name.
 ?y vc:n [
        vc:family-name ?fn;
        vc:given-name ?gn.
        ].
 the linkage specification might be something like:
 if compare(?name, concat(?gn, " ", ?fn)) > threshold
 then output(?x owl:sameAs ?y)

 In fact, this specification says
 foaf:name -> concat(vc:given-name, " ", vc:family-name)
 which is an alignment at the TBox level that can be lifted from the linkage
 specification.

 I hope I was clear enough this time ;)

Yes you did. Hugh got it right but I was a bit lost ;)

My quick take on this issue is usually:
Strategy:
Use subPropertyOf, etc if possible, otherwise resort to SWRL or
SPARQL, otherwise use custom code ( for example, if the IFPs are
embedded in URIs ).

Implementation:
Stick with inference. If not possible, materialize intermediate graph.

Of course the above is not very useful as what you're looking for is
real world examples to mine for patterns, generalize, and try to push
the knowledge up to the TBox.
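
Still, to make the "materialize" route concrete against your FOAF/VCard
example, a rough Python sketch ( the vc: namespace, the input file names
and the 0.9 threshold are my guesses ):

    from difflib import SequenceMatcher
    from rdflib import Graph, Namespace

    OWL = Namespace("http://www.w3.org/2002/07/owl#")

    g = Graph()
    g.parse("foaf-data.ttl", format="turtle")   # hypothetical inputs
    g.parse("vcard-data.ttl", format="turtle")

    pairs = g.query("""
        PREFIX foaf: <http://xmlns.com/foaf/0.1/>
        PREFIX vc:   <http://www.w3.org/2006/vcard/ns#>
        SELECT ?x ?name ?y ?gn ?fn WHERE {
          ?x foaf:name ?name .
          ?y vc:n [ vc:given-name ?gn ; vc:family-name ?fn ] .
        }""")

    links = Graph()
    for x, name, y, gn, fn in pairs:
        if SequenceMatcher(None, str(name), "%s %s" % (gn, fn)).ratio() > 0.9:
            links.add((x, OWL.sameAs, y))  # the materialized intermediate graph

    print(links.serialize(format="turtle"))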

Good luck, it sounds interesting ;)
Thanks,
A



 Cheers,
 François

 sameAs is not the best way to align things in the TBox.

 Kingsley


 On 11/06/2009 12:57, François Scharffe francois.schar...@inria.fr
 wrote:

 Dear LODers,

 There has been a couple of discussions already on this list on the need
 for a vocabulary to represent correspondences between terms of different
 vocabularies. We also saw recently various tools (e.g. Silk, ODDlinker)
 allowing to automatically interlink datasets given a specification of
 what should be linked.

 However, there is currently no common way to publish and share this
 information (i.e., not the links but the way to generate them, see [1]
 for precision).

 We are setting up an experiment [1] to see if it is possible to provide
 useful services from this data. But for that purpose we need your help.

 So this is a call for contribution: we are collecting any specification
 of link generator for the LOD graph.

 Of course, do not hesitate to comment on the idea or to tell us if you
 want to be involved.

 We promise a report on this by the end of summer (northern hemisphere
 :).

 Cheers,
 François

 [1] http://melinda.inrialpes.fr
















-- 
Aldo Bucchi
U N I V R Z
Office: +56 2 795 4532
Mobile:+56 9 7623 8653
skype:aldo.bucchi
http://www.univrz.com/
http://aldobucchi.com/

PRIVILEGED AND CONFIDENTIAL INFORMATION
This message is only for the use of the individual or entity to which it is
addressed and may contain information that is privileged and confidential. If
you are not the intended recipient, please do not distribute or copy this
communication, by e-mail or otherwise. Instead, please notify us immediately by
return e-mail.
INFORMACIÓN PRIVILEGIADA Y CONFIDENCIAL
Este mensaje está destinado sólo a la persona u organización al cual está
dirigido y podría contener información privilegiada y confidencial. Si usted no
es el destinatario, por favor no distribuya ni copie esta comunicación, por
email o por otra vía. Por el contrario, por favor notifíquenos inmediatamente
vía e-mail.



Re: Bestbuy.com goes Semantic Web the GoodRelations Way

2009-06-09 Thread Aldo Bucchi
Guys,

Let me add: Having real world use cases validate the cause in many
levels. In practice, this will generate a second sell, and a third,
etc. Domino effect.

Or rather, the CYA effect: Until someone else has taken the risk, why
put your A on the line?

So, I would call this a nice scapegoat as well. And that's a LOT.
First kiss is most elusive. From then on it's downhill ;)

Congratulations to whoever made it happen!

Regards
A

PS. In some cases, it's a good idea to stop at the first kiss... but I
think that the prospect of permeating the e-commerce dinosaur is sexy
as hell ;)

On Tue, Jun 9, 2009 at 7:41 AM, Kingsley Idehen kide...@openlinksw.com wrote:
 Bill Roberts wrote:

 Yeah, I too think this is a big deal.  Semantic web in the commercial
 world obviously suffers the same chicken and egg problem as elsewhere, but
 if a big company like BestBuy just does it anyway, then services that
 consume and aggregate this kind of data are likely to spring up.   Semantic
 SEO industry starts here?



 Yes, but I would say SDQ in addition to SEO :-)




 Links:
 1. http://tr.im/iv9e -- post about Serendipitous Discovery Quotient (SDQ)

 --


 Regards,

 Kingsley Idehen       Weblog: http://www.openlinksw.com/blog/~kidehen
 President  CEO OpenLink Software     Web: http://www.openlinksw.com









-- 
Aldo Bucchi
U N I V R Z
Office: +56 2 795 4532
Mobile:+56 9 7623 8653
skype:aldo.bucchi
http://www.univrz.com/
http://aldobucchi.com/

PRIVILEGED AND CONFIDENTIAL INFORMATION
This message is only for the use of the individual or entity to which it is
addressed and may contain information that is privileged and confidential. If
you are not the intended recipient, please do not distribute or copy this
communication, by e-mail or otherwise. Instead, please notify us immediately by
return e-mail.
INFORMACIÓN PRIVILEGIADA Y CONFIDENCIAL
Este mensaje está destinado sólo a la persona u organización al cual está
dirigido y podría contener información privilegiada y confidencial. Si usted no
es el destinatario, por favor no distribuya ni copie esta comunicación, por
email o por otra vía. Por el contrario, por favor notifíquenos inmediatamente
vía e-mail.



Re: Segment RDF on BBC Programmes

2009-05-07 Thread Aldo Bucchi
Hi,

On Thu, May 7, 2009 at 6:42 AM, Yves Raimond yves.raim...@gmail.com wrote:
 Hello!

 Wading into this conversation a little late, but feel compelled to comment...

 I'll be honest, I find these kind of RDFa vs RDF/XML vs A.N. Other
 Publishing Setup discussions tedious and counter-productive.
 Different technical approaches will be appropriate in different
 scenarios (*), so whatever our personal preferences let's not make
 blanket statements in favour of one approach over another without
 providing qualifying information for people who may be newer to the
 field and not have in depth appreciation of the subtleties. One of the
 great strengths of the Linked Data community has been its pragmatism,
 and while RDFa may be the pragmatic choice in some situations it won't
 be in others.

 I completely agree with Tom here, and find this RDFa vs RDF/XML debate
 quite tedious. For example, in our programmes pages (e.g.
 http://www.bbc.co.uk/programmes/b00k6mpd)  we don't expose all the
 available versions (signed, shortened, original, etc.) because it is
 not directly relevant for human consumption - we just merge different
 things version-related that are relevant (e.g. on-demand audio/video,
 etc.) to provide a good user experience. So if we were to use only
 RDFa, we would loose that valuable bit of information.

 Some data needs to be prodded and merged to not overload the user with
 information and just present him with bits relevant to human
 consumption. However, in the raw RDF views, we can provide all these
 details, that may be relevant for applications, e.g. getting all
 broadcasts of a signed version of a particular programme.

 So different publishing methodologies are appropriate for different needs :-)

 Cheers,
 y



Agree here. In fact, let me say that my own RDFavoritisms have
morphed over time as I have run into situations where one approach is
better than the other, for any number of reasons. I am doing things
now that I once thought to be heresy.
I guess the trick is not to argue about what's better/best but to make
every possible choice CONSISTENT with the conceptual framework being
built, so that every drop of participation adds up and crystallizes.
Truth is none of us can foresee which one approach will have the most
data 3 years from now. This thing shifts with the winds ( there are
more surprises to come, for sure ).

Now, what would be useful is a decision tree or a list of recipes. Let
newcomers choose but don't overwhelm them with total freedom either.

That's the wonder of Linked Data. The simple recipes... that... work! ;)
80/20

Regards,
A


-- 
Aldo Bucchi
U N I V R Z
Office: +56 2 795 4532
Mobile:+56 9 7623 8653
skype:aldo.bucchi
http://www.univrz.com/
http://aldobucchi.com/

PRIVILEGED AND CONFIDENTIAL INFORMATION
This message is only for the use of the individual or entity to which it is
addressed and may contain information that is privileged and confidential. If
you are not the intended recipient, please do not distribute or copy this
communication, by e-mail or otherwise. Instead, please notify us immediately by
return e-mail.
INFORMACIÓN PRIVILEGIADA Y CONFIDENCIAL
Este mensaje está destinado sólo a la persona u organización al cual está
dirigido y podría contener información privilegiada y confidencial. Si usted no
es el destinatario, por favor no distribuya ni copie esta comunicación, por
email o por otra vía. Por el contrario, por favor notifíquenos inmediatamente
vía e-mail.



Re: Announcing OpenLink Virtuoso, Open-Source Edition, v5.0.11

2009-04-23 Thread Aldo Bucchi
Tim,

That's a LOT of new features. Impressive.
( I guess we have all grown used to this, but there are a couple of
startups' worth of work in every VOS release if you look closely ).

So, let's be honest:  Are you planning to release the time machine
code anytime soon?

I could really use some time traveling these days, my deadlines are
flying by. LOL!

Regards,
A


On Wed, Apr 22, 2009 at 8:38 AM, Tim Haynes thay...@openlinksw.co.uk wrote:
 Hi,

 OpenLink Software is pleased to announce a new release of Virtuoso,
 Open-Source Edition, version 5.0.11.

 This version includes:

  * Database engine
    - Added x.509 Certificate Generation & Management functions
    - Improvements to session-handling (strses) to avoid temp-files and
 improve threading support
    - Added initial support for gzipped stream session
    - Added support for HTTP, socks4 and socks5 proxying with
 authentication options
    - Added support for URIQA methods in http_client()
    - Added support for gunzip in http_client
    - Various fixes for FT optimization, fractions in datetime,
 checkpoint-rollback and the compile/build process.

  * SPARQL and RDF
    - Added compiler extensions for SPARQL graph-level security
    - Added initial implementation of RDF graph-level security metadata
 functions
    - Added initial infrastructure for new SPARQL result serialization
    - Added support for SSG_VALMODE_SHORT_OR_LONG
    - Added support for define sparql-get:proxy for RDF mappers
    - Enhanced N3 syntax support
    - Added support for XML literals in RDF/XML, SPARQL XML resultset and
      JSON outputs
    - Enhanced speed of TTL output
    - Fixed SPARQL/SPARUL security
    - VoiD graph generation for describing Quad Store

  * Sponger Cartridges Related
    - Added U.S. Congress Web service
    - Added Del.icio.us Tag Lookup Meta Cartridge
    - Added GoodRelations and Barters for eCommerce Services
    - Added NYT Articles Lookup Meta Cartridge
    - Added OpenStreetMap Cartridge
    - Added O'Reilly Books Catalog Lookup Meta Cartridge
    - Added PowerPoint (PPTX) Cartridge
    - Added SCOT based Tag Cloud
    - Added Technocrati Lookup Meta Cartridge
    - Misc. fixes
    - Fixed GPF in rare case when using NOT FROM / NOT FROM NAMED
    - Fixed handling of class instance array
    - Fixed i18N issues with freetext search in RDF
    - Fixed i18N serialization of RDF/XML box
    - Fixed incorrect result when Accept is set to text/rdf-n3
    - Fixed passing retvals of variables from OPTION(), like ?SCORE ?x,
      from deeply nested subselects

  * ODS Applications
    - Added FOAF+SSL and FOAF+SSL+OpenID
    - Added Bibliographical ontology usage in ODS Graph
    - Added Calendar API and upstream commands
    - Added One-Click X.509 Certificate, Private Key generation plus
 Browser import, and write to FOAF profile
    - Added Messaging Services
    - Added Relationships Ontology terms to ODS-AddressBook for qualifying
 relationships in Social Network
    - Added Biographical Ontology terms added to Profile Page UI
    - Added Support for MS Live Contacts API
    - Additional Ubiquity commands for relationship qualification in Social
 Network data spaces
    - Added Support for Portable contacts api
    - Fixed OpenID registration/auth in FOAF+SSL+OpenID implementation


 For more details, see the release notes:
 https://sourceforge.net/project/shownotes.php?group_id=161622&release_id=677418

 Other links:

 Virtuoso Open Source Edition:
   * Home Page: http://virtuoso.openlinksw.com/wiki/main/
   * Download Page:
  http://virtuoso.openlinksw.com/wiki/main/Main/VOSDownload

 OpenLink Data Spaces:
   * Home Page: http://virtuoso.openlinksw.com/wiki/main/Main/OdsIndex
   * SPARQL Usage Examples (re. SIOC, FOAF, AtomOWL, SKOS):
  http://virtuoso.openlinksw.com/wiki/main/Main/ODSSIOCRef

 OpenLink AJAX Toolkit (OAT):
   * Project Page: http://sourceforge.net/projects/oat
   * Live Demonstration: http://demo.openlinksw.com/oatdemo
   * Interactive SPARQL Demo: http://demo.openlinksw.com/isparql/

 OpenLink Data Explorer (Firefox extension for RDF browsing):
   * Home Page: http://ode.openlinksw.com/


 Regards,

 ~Tim
 --
 Tim Haynes
 Product Development Consultant
 OpenLink Software
 http://www.openlinksw.com/







-- 
Aldo Bucchi
U N I V R Z
Office: +56 2 795 4532
Mobile:+56 9 7623 8653
skype:aldo.bucchi
http://www.univrz.com/
http://aldobucchi.com/

PRIVILEGED AND CONFIDENTIAL INFORMATION
This message is only for the use of the individual or entity to which it is
addressed and may contain information that is privileged and confidential. If
you are not the intended recipient, please do not distribute or copy this
communication, by e-mail or otherwise. Instead, please notify us immediately by
return e-mail.
INFORMACIÓN PRIVILEGIADA Y CONFIDENCIAL
Este mensaje está destinado sólo a la persona u organización al cual está
dirigido y podría contener información privilegiada y confidencial. Si usted no
es el destinatario, por favor

Re: A Personal Note ( from the human side of me... or what's left of it )

2009-04-18 Thread Aldo Bucchi
Hi Sherman,

On Fri, Apr 17, 2009 at 10:22 PM, Sherman Monroe sdmon...@gmail.com wrote:
 Aldo,

 Thank you so much for your visionary comments, it's guys like yourself who
 fuel humanity's progress. And I believe you've spoken for many of the others
 of us here

Thanks, but...
My post was a thank you note to all the community. Personally, I don't
deserve credit for this.
So I proxy your beautiful reply to those behind the project ;)

I know emotional rants can quickly become cheesy, so I won't say anything else.
Back to work!

Regards,
A


 -sherman

 On Tue, Mar 17, 2009 at 8:51 PM, Aldo Bucchi aldo.buc...@gmail.com wrote:

 Hi,

 I rarely post on this list, mostly because I have nothing to say on
 this level ;)
 And when I do, I usually start a fight. LOL.

 Of course, I know I'm not the only one who is extremely passionate
 about the Semantic Web. Everyone here is working hard in their own
 reality. Weaving dreams, spreading the word and... well, sometimes
 just hitting deadlines.

 I do believe that there is a greater good to this story, and it is
 literally waiting around the corner. As things stand today, we're
 reaching the knee of the curve of the Data Web's crystallization and
 outreach process and, what's probably just as important, as society
 we're being violently reminded that our fate as a global village is
 strongly tied to our ability to collaborate transparently and
 efficiently.

  This is a very, very strange convergence on many fronts:
 technological, economical, cultural and even political.
 This date last year who would have thought that we were going to
 experience such a harsh economic and paradigmatic blaze?
 The world is now literally crying for more transparency and
 structured, granular data integration.

 ( ideas, anyone? ;)

 Perhaps for someone like Tim this is just one more step in what has
 already been a very long journey, but for a young-ish guy like me,
 being immersed in this particular portion of the story is already
 overwhelming.

 So, as a future beneficiary of the Linked Data Web, and before this
 list becomes crowded enough that my words vanish behind the noise, I
 want to express *my sincere gratitude towards all the contributors of
 this project*. I have witnessed first hand the long years of hard work
 (and endless discussions) and I know I am not alone when I think to
 myself:

 We certainly can't yet see the impact this will have on the future but
 I am certain it will make a big, BIG difference.
 This project rocks!

 Keep it up guys! ;)

 Regards,
 A

 --
 Aldo Bucchi
 U N I V R Z
 Office: +56 2 795 4532
 Mobile:+56 9 7623 8653
 skype:aldo.bucchi
 http://www.univrz.com/
 http://aldobucchi.com/

 PRIVILEGED AND CONFIDENTIAL INFORMATION
 This message is only for the use of the individual or entity to which it
 is
 addressed and may contain information that is privileged and confidential.
 If
 you are not the intended recipient, please do not distribute or copy this
 communication, by e-mail or otherwise. Instead, please notify us
 immediately by
 return e-mail.
 INFORMACIÓN PRIVILEGIADA Y CONFIDENCIAL
 Este mensaje está destinado sólo a la persona u organización al cual está
 dirigido y podría contener información privilegiada y confidencial. Si
 usted no
 es el destinatario, por favor no distribuya ni copie esta comunicación,
 por
 email o por otra vía. Por el contrario, por favor notifíquenos
 inmediatamente
 vía e-mail.




 --

 Thanks,
 -sherman

 I pray that you may prosper in all things and be healthy, even as your soul
 prospers
 (3 John 1:2)




-- 
Aldo Bucchi
U N I V R Z
Office: +56 2 795 4532
Mobile:+56 9 7623 8653
skype:aldo.bucchi
http://www.univrz.com/
http://aldobucchi.com/

PRIVILEGED AND CONFIDENTIAL INFORMATION
This message is only for the use of the individual or entity to which it is
addressed and may contain information that is privileged and confidential. If
you are not the intended recipient, please do not distribute or copy this
communication, by e-mail or otherwise. Instead, please notify us immediately by
return e-mail.
INFORMACIÓN PRIVILEGIADA Y CONFIDENCIAL
Este mensaje está destinado sólo a la persona u organización al cual está
dirigido y podría contener información privilegiada y confidencial. Si usted no
es el destinatario, por favor no distribuya ni copie esta comunicación, por
email o por otra vía. Por el contrario, por favor notifíquenos inmediatamente
vía e-mail.



Re: Have you seen this story?

2009-04-10 Thread Aldo Bucchi
Kingsley,

On Thu, Apr 9, 2009 at 3:09 PM, Kingsley Idehen kide...@openlinksw.com wrote:
 Aldo Bucchi wrote:

 Hi guys,

 I didn't find that post even challenging ( and as some of you might
 know I really like to argue ), because it makes a fundamental mistake
 and all drips from there:

 Do the manufacturers of, say, a new form of carbon nanotubes, use it
 as material for their own tools?

 Well, the answer is: not necessarily (and most probably, not at all).
 At least not in its raw form. It needs processing, it might be more
 expensive and the tools probably won't make the job better than the
 old ones.
 But the material is still better than aluminium, but tools are
 complex and require other skills that these developers need not
 necessarily have. It needs to take its place on the low level of a
 complex industry and value will eventually flourish.

 This is not different than Linked Data in this context.
 So, how can someone come to such a blunt conclusion by relating creator
 dogfooding to the ultimate value of the technology?

 One could argue that this is closely related to the semantic curse.

 The answer appears when you try to answer this simple question:
 * How is this material better?

 Which inevitably leads you, at least, to:
 * What do these materials have in common?
 * What specific qualities of value, present in both, are being improved?

 We only recently did that for Linked Data!
 So, the fundamental and shared flaw here has been to attribute a
 magical, one-of-a-kind nature to something instead of characterizing
 it in terms of the previously existing alternatives, which results in
 confusion and... well, what do you expect if we start from there ;)

 He might be right that there were mistakes, but the real flaws were
 related to non-specific communication from the SW community ( there
 was not clear definition of the what is this, what does it compare
 to and why its better ) and then a lack of deep analysis on part of
 the writer, who got stuck in his myopia and is calling carbon nanotube
 developers snake oil salesmen because they don't use the material in
 their labs.

 However, I do believe in dogfooding and I do it mostly for personal
 purposes. But one thing is to support it, another to demand it.

 OTOH. I like to think that these weren't mistakes. I mean, that the
 time this project took to lift off due to poor communicational
 strategies was not in vain.
 It would have been awfully hard and controversial to explain Linked
 Data in terms of distributed database technology back in the days.
 While it would have been certainly understood by a much larger
 audience, in terms of its development it probably would have entered a
 state of entropy and evolved into several JSR-like processes, not to
 mention strategic oppositions from industry leaders and the inevitable
  competition ( which, when it comes to standardization processes, is
 not usually welcome ).


 Aldo,

 No argument re. the above, as you know anyhow :-)

 In more concrete terms. We didn't give M$ a chance to create RDF-MS
 Edition by staying off the radar.
 ( I hope so )


 Hmm. ADO.NET's Entity Frameworks is RDF-MS salvo #1

 Project M is salvo #2.

 They get it right at #3 and by then it will be them playing well with the
 Linked Data Web.

 IE is not doable on the Linked Data Web :-)

Ah good examples.

Now, regarding IE. Of course it is not the same game so their trick
would have been different... But how about Windows Live + Semantic
Office + a centralized registry to provide IDs for things.

I agree that they will now have to play along with Linked Data web.
But they didn't see it coming, if they had... ( well we'll never know
I'm just fantasizing here ).


 Semantic was a great codename, but for the wrong reasons!



 Great codename for a great Thing.
 Stinker of a name for discerning meaning of the Thing :-)

Semantically correct, but communicationally impaired ;)


 Anyway, Semantic Web issues are now water under the bridge in my world
 view, the big MO is behind Linked Data and we should simply carry this into
 related and vital realms such as OWL (the TBox side is very important), but
 do so with pragmatism and coherence. We know this works, based on the Linked
 Data journey experience.

 This community has succeeded where alternative approaches have failed. We
 must never forget that when moving forward.

Yes I agree, this is old stuff. I was just looking back for a second.
There are so many things I never got to say that I sometimes try to
slip them in here.

I have to admit that regarding the *Macro* TBox aspect I am only now
starting to pay attention. My imagination needs some more training to
understand where this is going to take us in the mid term.
( I have grokked the ABox aspect for 5 years now, but the other one
quickly leads me to AI and out of my comfort zone ).

This community is amazing ;)
I would put all my chips on the table for you anytime now.
( wait... I already am! )

A.


 Kingsley

Re: Have you seen this story?

2009-04-09 Thread Aldo Bucchi
Hi guys,

I didn't find that post even challenging ( and as some of you might
know I really like to argue ), because it makes a fundamental mistake
and all drips from there:

Do the manufacturers of, say, a new form of carbon nanotubes, use it
as material for their own tools?

Well, the answer is: not necessarily (and most probably, not at all).
At least not in its raw form. It needs processing, it might be more
expensive and the tools probably won't make the job better than the
old ones.
But the material is still better than aluminium, but tools are
complex and require other skills that these developers need not
necessarily have. It needs to take its place on the low level of a
complex industry and value will eventually flourish.

This is not different than Linked Data in this context.
So, how can someone come to such a blunt conclusion by relating creator
dogfooding to the ultimate value of the technology?

One could argue that this is closely related to the semantic curse.

The answer appears when you try to answer this simple question:
* How is this material better?

Which inevitably leads you, at least, to:
* What do these materials have in common?
* What specific qualities of value, present in both, are being improved?

We only recently did that for Linked Data!
So, the fundamental and shared flaw here has been to attribute a
magical, one-of-a-kind nature to something instead of characterizing
it in terms of the previously existing alternatives, which results in
confusion and... well, what do you expect if we start from there ;)

He might be right that there were mistakes, but the real flaws were
related to non-specific communication from the SW community ( there
was no clear definition of the "what is this", "what does it compare
to" and "why it's better" ) and then a lack of deep analysis on the part of
the writer, who got stuck in his myopia and is calling carbon nanotube
developers snake oil salesmen because they don't use the material in
their labs.

However, I do believe in dogfooding and I do it mostly for personal
purposes. But one thing is to support it, another to demand it.

OTOH. I like to think that these weren't mistakes. I mean, that the
time this project took to lift off due to poor communicational
strategies was not in vain.
It would have been awfully hard and controversial to explain Linked
Data in terms of distributed database technology back in the days.
While it would have been certainly understood by a much larger
audience, in terms of its development it probably would have entered a
state of entropy and evolved into several JSR-like processes, not to
mention strategic oppositions from industry leaders and the inevitable
 "competition" ( which, when it comes to standardization processes, is
not usually welcome ).

In more concrete terms. We didn't give M$ a chance to create RDF-MS
Edition by staying off the radar.
( I hope so )

Semantic was a great codename, but for the wrong reasons!

Regards,
A

On Thu, Apr 9, 2009 at 10:00 AM, Tom Heath tom.he...@talis.com wrote:
 Hi Daniel,

 2009/4/9 Daniel Schwabe dschw...@inf.puc-rio.br:
 Dear all,
 this may be old stuff, but I was surprised to read
 http://www.intelligententerprise.com/blog/archives/2009/02/semantic_web_sn.html...

 Me too!

 He does have some points...

 In 99% of cases with respect to me he doesn't ;)

 As I say in my response on his blog (copied into that post of mine
 that Juan refers to) I agree that we, the Semantic Web community, have
 not always done as much as we could in the dog food department, but
 that has been changing rapidly since 2006 and we should keep
 up/increase the pace.

 I won't comment on that blog post any further here; it's already
 sapped too many hours of my life :)

 Cheers,

 Tom.

 --
 Dr Tom Heath
 Researcher
 Platform Division
 Talis Information Ltd
 T: 0870 400 5000
 W: http://www.talis.com/





-- 
Aldo Bucchi
U N I V R Z
Office: +56 2 795 4532
Mobile:+56 9 7623 8653
skype:aldo.bucchi
http://www.univrz.com/
http://aldobucchi.com/

PRIVILEGED AND CONFIDENTIAL INFORMATION
This message is only for the use of the individual or entity to which it is
addressed and may contain information that is privileged and confidential. If
you are not the intended recipient, please do not distribute or copy this
communication, by e-mail or otherwise. Instead, please notify us immediately by
return e-mail.
INFORMACIÓN PRIVILEGIADA Y CONFIDENCIAL
Este mensaje está destinado sólo a la persona u organización al cual está
dirigido y podría contener información privilegiada y confidencial. Si usted no
es el destinatario, por favor no distribuya ni copie esta comunicación, por
email o por otra vía. Por el contrario, por favor notifíquenos inmediatamente
vía e-mail.



Re: Semantic Web logo. Copyrights, etc

2009-03-23 Thread Aldo Bucchi
Ivan,

On Mon, Mar 23, 2009 at 4:47 AM, Ivan Herman i...@w3.org wrote:
 Aldo,

 Yes. The box

 http://www.w3.org/Icons/SW/sw-cube.{svg,png,giv}

 can be used as you describe. If it is put into a composition but is clearly
 recognizable, I do not see any issue with that either. A good example is
 the logo used by RPI:

 http://tw.rpi.edu/wiki/Main_Page

 (They actually went out of their way by creating an image map, so that
 the box links to the W3C site, whereas other parts of the logo links to
 RPI...)

OK Great ;)


 'Distortion' is not listed on the logo page and, I presume, this might
 be a borderline. I could apply strong distortion that would make the
 logo barely recognizable, and I think W3C might have an issue with that.
 But mild distortion (ie, slight change in scale factors) should be fine.
 If you really want to apply such distortion, you may want to check with
 W3C first.

I can't remember now, but I am sure there are some distorted versions
out there in the wild. In particular I remember one with a twirl.

In a dogfooding manner, is there an RDF vocabulary to describe such
relations? ( derived from ).
It would be useful if the W3C required generated logos to be
RDF-linked to the page, via a wiki for example.
Well, that's too much of a stretch, but I think this is not too crazy
an idea: dogfood, show the benefits (right on the page via a dynamic link)
and save the W3C some work patrolling for logos.

This is also in line with CC and, perhaps, even Open Data licensing, etc.
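
For illustration only, a minimal sketch of what such a statement could
look like in Turtle, assuming dcterms:source as the derivation link and
a made-up URI for the remixed logo:

@prefix dcterms: <http://purl.org/dc/terms/> .

# Hypothetical remixed logo, linked back to the clean box logo it derives from:
<http://example.org/my-remixed-logo.png>
    dcterms:source <http://www.w3.org/Icons/SW/sw-cube.png> .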


 Cheers

 Ivan

Thanks,
A

PS. I said derivatives. The D word... sorry for that ;)


 Aldo Bucchi wrote:
 Hi all,

 Not sure if this is the place for this, but I believe it is a common concern.

 I have seen several projects using tweaks (color, distortion,
 composition) of the Semantic Web box logo. After reading the
 policies I believe that this is allowed as long as there is no W3C
 logo involved.

 http://www.w3.org/2007/10/sw-logos.html

 Is this correct? Can I create such modified derivatives from the
 clean box logo w/o running into any problems?
 Composition is the borderline case: the logo is unmodified and
 recognizable, and therefore legally troublesome.

 My 2 cents is that they should allow anything as long as you strip
 the W3C part.

 Thanks,
 A


 --

 Ivan Herman, W3C Semantic Web Activity Lead
 Home: http://www.w3.org/People/Ivan/
 mobile: +31-641044153
 PGP Key: http://www.ivan-herman.net/pgpkey.html
 FOAF: http://www.ivan-herman.net/foaf.rdf




-- 
Aldo Bucchi
U N I V R Z
Office: +56 2 795 4532
Mobile:+56 9 7623 8653
skype:aldo.bucchi
http://www.univrz.com/
http://aldobucchi.com/

PRIVILEGED AND CONFIDENTIAL INFORMATION
This message is only for the use of the individual or entity to which it is
addressed and may contain information that is privileged and confidential. If
you are not the intended recipient, please do not distribute or copy this
communication, by e-mail or otherwise. Instead, please notify us immediately by
return e-mail.
INFORMACIÓN PRIVILEGIADA Y CONFIDENCIAL
Este mensaje está destinado sólo a la persona u organización al cual está
dirigido y podría contener información privilegiada y confidencial. Si usted no
es el destinatario, por favor no distribuya ni copie esta comunicación, por
email o por otra vía. Por el contrario, por favor notifíquenos inmediatamente
vía e-mail.



Semantic Web logo. Copyrights, etc

2009-03-20 Thread Aldo Bucchi
Hi all,

Not sure if this is the place for this, but I believe it is a common concern.

I have seen several projects using tweaks (color, distortion,
composition) of the Semantic Web box logo. After reading the
policies I believe that this is allowed as long as there is no W3C
logo involved.

http://www.w3.org/2007/10/sw-logos.html

Is this correct? Can I create such modified derivatives from the
clean box logo w/o running into any problems?
Composition is the borderline case: the logo is unmodified and
recognizable, and therefore legally troublesome.

My 2 cents is that they should allow anything as long as you strip
the W3C part.

Thanks,
A

-- 
Aldo Bucchi
U N I V R Z
Office: +56 2 795 4532
Mobile:+56 9 7623 8653
skype:aldo.bucchi
http://www.univrz.com/
http://aldobucchi.com/

PRIVILEGED AND CONFIDENTIAL INFORMATION
This message is only for the use of the individual or entity to which it is
addressed and may contain information that is privileged and confidential. If
you are not the intended recipient, please do not distribute or copy this
communication, by e-mail or otherwise. Instead, please notify us immediately by
return e-mail.
INFORMACIÓN PRIVILEGIADA Y CONFIDENCIAL
Este mensaje está destinado sólo a la persona u organización al cual está
dirigido y podría contener información privilegiada y confidencial. Si usted no
es el destinatario, por favor no distribuya ni copie esta comunicación, por
email o por otra vía. Por el contrario, por favor notifíquenos inmediatamente
vía e-mail.



A Personal Note ( from the human side of me... or what's left of it )

2009-03-17 Thread Aldo Bucchi
Hi,

I rarely post on this list, mostly because I have nothing to say on
this level ;)
And when I do, I usually start a fight. LOL.

Of course, I know I'm not the only one who is extremely passionate
about the Semantic Web. Everyone here is working hard in their own
reality. Weaving dreams, spreading the word and... well, sometimes
just hitting deadlines.

I do believe that there is a greater good to this story, and it is
literally waiting around the corner. As things stand today, we're
reaching the knee of the curve of the Data Web's crystallization and
outreach process and, what's probably just as important, as society
we're being violently reminded that our fate as a global village is
strongly tied to our ability to collaborate transparently and
efficiently.

This is a very, very strange convergence in many fronts:
technological, economical, cultural and even political.
This time last year, who would have thought that we were going to
experience such a harsh economic and paradigmatic blaze?
The world is now literally crying for more transparency and
structured, granular data integration.

( ideas, anyone? ;)

Perhaps for someone like Tim this is just one more step in what has
already been a very long journey, but for a young-ish guy like me,
being immersed in this particular portion of the story is already
overwhelming.

So, as a future beneficiary of the Linked Data Web, and before this
list becomes crowded enough that my words vanish behind the noise, I
want to express *my sincere gratitude towards all the contributors of
this project*. I have witnessed first hand the long years of hard work
(and endless discussions) and I know I am not alone when I think to
myself:

We certainly can't yet see the impact this will have on the future but
I am certain it will make a big, BIG difference.
This project rocks!

Keep it up guys! ;)

Regards,
A

-- 
Aldo Bucchi
U N I V R Z
Office: +56 2 795 4532
Mobile:+56 9 7623 8653
skype:aldo.bucchi
http://www.univrz.com/
http://aldobucchi.com/

PRIVILEGED AND CONFIDENTIAL INFORMATION
This message is only for the use of the individual or entity to which it is
addressed and may contain information that is privileged and confidential. If
you are not the intended recipient, please do not distribute or copy this
communication, by e-mail or otherwise. Instead, please notify us immediately by
return e-mail.
INFORMACIÓN PRIVILEGIADA Y CONFIDENCIAL
Este mensaje está destinado sólo a la persona u organización al cual está
dirigido y podría contener información privilegiada y confidencial. Si usted no
es el destinatario, por favor no distribuya ni copie esta comunicación, por
email o por otra vía. Por el contrario, por favor notifíquenos inmediatamente
vía e-mail.



Re: Pubs data

2009-03-06 Thread Aldo Bucchi
Hi All,

My compliments to John on the Facebook Thread. That's a scary sight!

I think the solution to this problem will probably come with Web of
Trust and Identity.

First off, the thought that I might someday be on the other side,
fighting against an unfounded review, makes me think twice about
what follows.

OTOH, we will always have people trying to silence free speech, as
the line is blurry by nature. I could even write something positive
that, taken out of context, seems negative.

We need both: control and freedom. How can that be achieved?

Let me touch on one possible extremist scenario.

Remember Freenet?
( http://freenetproject.org/ )

They go to the extreme of blurring the identity of the author and
physical location of bits of information. This pretty much prevents
censorship from being applied ( even when it is right to do so ).
Now, I wonder if RDF as is might be used to build something like this
using P2P architectures, or simple mirrors, or bots. It seems that
from a tech POV it would be easy since the only thing moving around is
the bit of data.

Of course this is not a solution either. We do want control to be applicable.

The ideal solution is more along the lines of:
1) Attach identity to the review ( so people take responsibility for
what they say ). Free speech as in freedom, not as in free beer.
2) Provide voting mechanisms that also attach identity.
3) This makes a review more or less discoverable by whoever consumes the service.

Just throwing some elements into the discussion...
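
A minimal sketch of points 1 and 2 in Turtle, assuming the rev:
vocabulary ( as used by Revyu ) and made-up URIs; ex: is a hypothetical
namespace for the voting terms:

@prefix rev:  <http://purl.org/stuff/rev#> .
@prefix foaf: <http://xmlns.com/foaf/0.1/> .
@prefix ex:   <http://example.org/terms#> .

# The review carries the identity of its author:
<http://example.org/reviews/123> a rev:Review ;
    rev:reviewer <http://example.org/people/alice#me> ;
    rev:text "Friendly staff, terrible beer." .
<http://example.org/people/alice#me> a foaf:Person ; foaf:name "Alice" .

# A vote that itself carries identity:
<http://example.org/votes/456> ex:votesFor <http://example.org/reviews/123> ;
    ex:castBy <http://example.org/people/bob#me> .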

Thanks,
A


On Fri, Mar 6, 2009 at 11:38 AM, John Goodwin
john.good...@ordnancesurvey.co.uk wrote:

 Thanks Tom. I think I've managed to calm them down now :)

 I decided to take the pubs linked data because:

 1) The reviews were not actually my own (I'd never visited the pub that
 caused the trouble) and reading some of the other reviews I realised
 that (while amusing) they could easily have been taken the wrong way. I
 figured it was probably not great having my name associated with reviews
 I didn't write. My site was static and unlike revyu there was no way for
 people to write counter reviews.

 2) I mainly did it as an experiment to try the whole linked data thing
 on an amateur level. I think things have moved on since I started it -
 the site was pretty primitive.

 3) Southampton locals with burning torches and pitchforks angry their
 favourite pub was given a bad review is not a pretty site :)

 John

 -Original Message-
 From: t...@talisplatform.com [mailto:t...@talisplatform.com] On
 Behalf Of Tom Heath
 Sent: 06 March 2009 14:26
 To: Kingsley Idehen
 Cc: John Goodwin; public-lod@w3.org
 Subject: Re: Pubs data

 Hi Kingsley, Hi John,

 First off, sympathies John for the roasting on Facebook - not
 a nice experience I'm sure.


 .


 This email is only intended for the person to whom it is addressed and may 
 contain confidential information. If you have received this email in error, 
 please notify the sender and delete this email which must not be copied, 
 distributed or disclosed to any other person.

 Unless stated otherwise, the contents of this email are personal to the 
 writer and do not represent the official view of Ordnance Survey. Nor can any 
 contract be formed on Ordnance Survey's behalf via email. We reserve the 
 right to monitor emails and attachments without prior notice.

 Thank you for your cooperation.

 Ordnance Survey
 Romsey Road
 Southampton SO16 4GU
 Tel: 08456 050505
 http://www.ordnancesurvey.co.uk






-- 
Aldo Bucchi
U N I V R Z
Office: +56 2 795 4532
Mobile:+56 9 7623 8653
skype:aldo.bucchi
http://www.univrz.com/
http://aldobucchi.com/

PRIVILEGED AND CONFIDENTIAL INFORMATION
This message is only for the use of the individual or entity to which it is
addressed and may contain information that is privileged and confidential. If
you are not the intended recipient, please do not distribute or copy this
communication, by e-mail or otherwise. Instead, please notify us immediately by
return e-mail.
INFORMACIÓN PRIVILEGIADA Y CONFIDENCIAL
Este mensaje está destinado sólo a la persona u organización al cual está
dirigido y podría contener información privilegiada y confidencial. Si usted no
es el destinatario, por favor no distribuya ni copie esta comunicación, por
email o por otra vía. Por el contrario, por favor notifíquenos inmediatamente
vía e-mail.



Granular dereferencing ( prop by prop ) using REST + LinkedData; Ideas?

2008-12-29 Thread Aldo Bucchi

Hi All,

I am in the process of LODing a dataset in which certain properties
are generated on the fly ( props derived from aggregate calculations
over the dataset, remote calls, etc ). I would like to let the clients
choose which of these expensive properties they need on demand and on
a granular level.

For example, let's say I am interested in knowing more about resource
http://ex.com/a.
Per LD conventions, dereferencing http://ex.com/a ( via 303 ) returns

http://ex.com/a a ex:Thing ;
  rdfs:label a sample dynamic resource;
  ex:dynamic1 45567 .

The problem is that the value for ex:dynamic1 is very expensive to
compute. Therefore, I would like to partition the document in such a
way that the client can ask for the property in a lazy, deferred
manner ( a second call in the future ).
The same is true for dynamic2, dynamic3, dynamic4, etc. All should be
retrievable independently and on demand.

* I am aware that this can be achieved by extending SPARQL in some
toolkits. But I need LOD.
* I am also aware that most solutions require us to break URI
opacity by stuffing the subject and predicate into the URI for a doc.
* Finally, seeAlso is too broad, as it doesn't convey the information I need.

Anyone came up with a clean pattern for this?
Ideas?

Something as simple as:
GET http://x.com/sp?s={subject}&p={predicate} ---> required RDF

works for me... but...
If possible, I would like to break conventions in a conventional manner ;)
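
To make the partitioning concrete, here is a minimal sketch of one
possible layout, with made-up per-property document URIs and plain
rdfs:seeAlso as the ( admittedly too broad ) pointer:

# First call: GET http://ex.com/a ( via 303 ) returns only the cheap
# triples, plus one pointer per deferred property:
<http://ex.com/a> a ex:Thing ;
    rdfs:label "a sample dynamic resource" ;
    rdfs:seeAlso <http://ex.com/a/dynamic1> .

# Second call, made only if the client wants the expensive value:
# GET http://ex.com/a/dynamic1 returns
<http://ex.com/a> ex:dynamic1 45567 .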

Best,
A

-- 
Aldo Bucchi
U N I V R Z
Office: +56 2 795 4532
Mobile:+56 9 7623 8653
skype:aldo.bucchi
http://www.univrz.com/
http://aldobucchi.com

PRIVILEGED AND CONFIDENTIAL INFORMATION
This message is only for the use of the individual or entity to which it is
addressed and may contain information that is privileged and confidential. If
you are not the intended recipient, please do not distribute or copy this
communication, by e-mail or otherwise. Instead, please notify us immediately by
return e-mail.
INFORMACIÓN PRIVILEGIADA Y CONFIDENCIAL
Este mensaje está destinado sólo a la persona u organización al cual está
dirigido y podría contener información privilegiada y confidencial. Si usted no
es el destinatario, por favor no distribuya ni copie esta comunicación, por
email o por otra vía. Por el contrario, por favor notifíquenos inmediatamente
vía e-mail.



Re: Granular dereferencing ( prop by prop ) using REST + LinkedData; Ideas?

2008-12-29 Thread Aldo Bucchi

btw- Using DESCRIBE and teaching the client to fall back to a SPARQL
call is also an alternative.
I am just looking for a proven pattern here...

On Mon, Dec 29, 2008 at 12:51 PM, Aldo Bucchi aldo.buc...@gmail.com wrote:
 Hi All,

 I am in the process of LODing a dataset in which certain properties
 are generated on the fly ( props derived from aggregate calculations
 over the dataset, remote calls, etc ). I would like to let the clients
 choose which of these expensive properties they need on demand and on
 a granular level.

 For example, lets say I am interested in knowing more about resource
 http://ex.com/a.
 Per LD conventions, dereferencing http://ex.com/a ( via 303 ) returns

 http://ex.com/a a ex:Thing ;
  rdfs:label a sample dynamic resource;
  ex:dynamic1 45567 .

 The problem is that the value for ex:dynamic1 is very expensive to
 compute. Therefore, I would like to partition the document in such a
 way that the client can ask  for the property on a lazy, deferred
 manner ( a second call in the future ).
 The same is true for dynamic2, dynamic3, dynamic4, etc. All should be
 retrievable independently and on demand.

 * I am aware that this can be achieved by extending SPARQL in some
 toolkits. But I need LOD.
 * I am also aware that most solutions require us to break URI
 obscurity by stuffing the subject and predicate in the uri for a doc.
 * Finally, seeAlso is too broad as it doesn't convey the information I need.

 Anyone came up with a clean pattern for this?
 Ideas?

 Something as simple as:
 GET http://x.com/sp?s={subject}&p={predicate} ---> required RDF

 works for me... but... .
 If possible, I would like to break conventions in a conventional manner ;)

 Best,
 A

 --
 Aldo Bucchi
 U N I V R Z
 Office: +56 2 795 4532
 Mobile:+56 9 7623 8653
 skype:aldo.bucchi
 http://www.univrz.com/
 http://aldobucchi.com

 PRIVILEGED AND CONFIDENTIAL INFORMATION
 This message is only for the use of the individual or entity to which it is
 addressed and may contain information that is privileged and confidential. If
 you are not the intended recipient, please do not distribute or copy this
 communication, by e-mail or otherwise. Instead, please notify us immediately 
 by
 return e-mail.
 INFORMACIÓN PRIVILEGIADA Y CONFIDENCIAL
 Este mensaje está destinado sólo a la persona u organización al cual está
 dirigido y podría contener información privilegiada y confidencial. Si usted 
 no
 es el destinatario, por favor no distribuya ni copie esta comunicación, por
 email o por otra vía. Por el contrario, por favor notifíquenos inmediatamente
 vía e-mail.




-- 
Aldo Bucchi
U N I V R Z
Office: +56 2 795 4532
Mobile:+56 9 7623 8653
skype:aldo.bucchi
http://www.univrz.com/
http://aldobucchi.com

PRIVILEGED AND CONFIDENTIAL INFORMATION
This message is only for the use of the individual or entity to which it is
addressed and may contain information that is privileged and confidential. If
you are not the intended recipient, please do not distribute or copy this
communication, by e-mail or otherwise. Instead, please notify us immediately by
return e-mail.
INFORMACIÓN PRIVILEGIADA Y CONFIDENCIAL
Este mensaje está destinado sólo a la persona u organización al cual está
dirigido y podría contener información privilegiada y confidencial. Si usted no
es el destinatario, por favor no distribuya ni copie esta comunicación, por
email o por otra vía. Por el contrario, por favor notifíquenos inmediatamente
vía e-mail.



Re: Granular dereferencing ( prop by prop ) using REST + LinkedData; Ideas?

2008-12-29 Thread Aldo Bucchi

I meant CONSTRUCT...  {s} {p} ?o...

( sorry, I am being kidnapped for lunch break ).
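
Spelled out against the example above, the fallback query would
presumably be ( the ex: namespace is hypothetical ):

PREFIX ex: <http://ex.com/ns#>
CONSTRUCT { <http://ex.com/a> ex:dynamic1 ?o }
WHERE     { <http://ex.com/a> ex:dynamic1 ?o }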

On Mon, Dec 29, 2008 at 12:54 PM, Aldo Bucchi aldo.buc...@gmail.com wrote:
 btw- Using DESCRIBE and teaching the client to fallback to a sparql
 call is also an alternative.
 I am just looking for a proven pattern here...

 On Mon, Dec 29, 2008 at 12:51 PM, Aldo Bucchi aldo.buc...@gmail.com wrote:
 Hi All,

 I am in the process of LODing a dataset in which certain properties
 are generated on the fly ( props derived from aggregate calculations
 over the dataset, remote calls, etc ). I would like to let the clients
 choose which of these expensive properties they need on demand and on
 a granular level.

 For example, lets say I am interested in knowing more about resource
 http://ex.com/a.
 Per LD conventions, dereferencing http://ex.com/a ( via 303 ) returns

 http://ex.com/a a ex:Thing ;
  rdfs:label a sample dynamic resource;
  ex:dynamic1 45567 .

 The problem is that the value for ex:dynamic1 is very expensive to
 compute. Therefore, I would like to partition the document in such a
 way that the client can ask  for the property on a lazy, deferred
 manner ( a second call in the future ).
 The same is true for dynamic2, dynamic3, dynamic4, etc. All should be
 retrievable independently and on demand.

 * I am aware that this can be achieved by extending SPARQL in some
 toolkits. But I need LOD.
 * I am also aware that most solutions require us to break URI
 obscurity by stuffing the subject and predicate in the uri for a doc.
 * Finally, seeAlso is too broad as it doesn't convey the information I need.

 Anyone came up with a clean pattern for this?
 Ideas?

 Something as simple as:
 GET http://x.com/sp?s={subject}&p={predicate} ---> required RDF

 works for me... but... .
 If possible, I would like to break conventions in a conventional manner ;)

 Best,
 A

 --
 Aldo Bucchi
 U N I V R Z
 Office: +56 2 795 4532
 Mobile:+56 9 7623 8653
 skype:aldo.bucchi
 http://www.univrz.com/
 http://aldobucchi.com

 PRIVILEGED AND CONFIDENTIAL INFORMATION
 This message is only for the use of the individual or entity to which it is
 addressed and may contain information that is privileged and confidential. If
 you are not the intended recipient, please do not distribute or copy this
 communication, by e-mail or otherwise. Instead, please notify us immediately 
 by
 return e-mail.
 INFORMACIÓN PRIVILEGIADA Y CONFIDENCIAL
 Este mensaje está destinado sólo a la persona u organización al cual está
 dirigido y podría contener información privilegiada y confidencial. Si usted 
 no
 es el destinatario, por favor no distribuya ni copie esta comunicación, por
 email o por otra vía. Por el contrario, por favor notifíquenos inmediatamente
 vía e-mail.




 --
 Aldo Bucchi
 U N I V R Z
 Office: +56 2 795 4532
 Mobile:+56 9 7623 8653
 skype:aldo.bucchi
 http://www.univrz.com/
 http://aldobucchi.com

 PRIVILEGED AND CONFIDENTIAL INFORMATION
 This message is only for the use of the individual or entity to which it is
 addressed and may contain information that is privileged and confidential. If
 you are not the intended recipient, please do not distribute or copy this
 communication, by e-mail or otherwise. Instead, please notify us immediately 
 by
 return e-mail.
 INFORMACIÓN PRIVILEGIADA Y CONFIDENCIAL
 Este mensaje está destinado sólo a la persona u organización al cual está
 dirigido y podría contener información privilegiada y confidencial. Si usted 
 no
 es el destinatario, por favor no distribuya ni copie esta comunicación, por
 email o por otra vía. Por el contrario, por favor notifíquenos inmediatamente
 vía e-mail.




-- 
Aldo Bucchi
U N I V R Z
Office: +56 2 795 4532
Mobile:+56 9 7623 8653
skype:aldo.bucchi
http://www.univrz.com/
http://aldobucchi.com

PRIVILEGED AND CONFIDENTIAL INFORMATION
This message is only for the use of the individual or entity to which it is
addressed and may contain information that is privileged and confidential. If
you are not the intended recipient, please do not distribute or copy this
communication, by e-mail or otherwise. Instead, please notify us immediately by
return e-mail.
INFORMACIÓN PRIVILEGIADA Y CONFIDENCIAL
Este mensaje está destinado sólo a la persona u organización al cual está
dirigido y podría contener información privilegiada y confidencial. Si usted no
es el destinatario, por favor no distribuya ni copie esta comunicación, por
email o por otra vía. Por el contrario, por favor notifíquenos inmediatamente
vía e-mail.



Re: Granular dereferencing ( prop by prop ) using REST + LinkedData; Ideas?

2008-12-29 Thread Aldo Bucchi

Olaf,

On Mon, Dec 29, 2008 at 3:58 PM, Olaf Hartig
har...@informatik.hu-berlin.de wrote:
 Hey Aldo,

 On Monday 29 December 2008 16:51:05 Aldo Bucchi wrote:
 Hi All,

 I am in the process of LODing a dataset in which certain properties
 are generated on the fly ( props derived from aggregate calculations
 over the dataset, remote calls, etc ). I would like to let the clients
 choose which of these expensive properties they need on demand and on
 a granular level.
 [..]

 What about the Expect header field [1] for HTTP requests?

 According to the spec, a client may express its expectations regarding the
 behaviour of the server with this field.

 May be you could use this field as follows. To request the RDF representation
 of a resource with expensive ex:dynamic1 properties a client may issue a GET
 request

  GET http://ex.com/a

 with

  Expect: expectedproperties = ex:dynamic1

 The server evaluates the expectedproperties parameter and computes the
 properties expected by the client. For requests without the Expect header
 field the server responds with an RDF description that contains only
 inexpensive properties.

Interesting idea. Will give it a try ;)

I wonder...

* what the pundits may think of this.
* The max header size or any other limitations.
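
For the record, the full request under Olaf's proposal would
presumably look like this ( the exact header value syntax is a sketch
and would need to be agreed upon ):

GET /a HTTP/1.1
Host: ex.com
Accept: application/rdf+xml
Expect: expectedproperties = ex:dynamic1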

I am now finding that a simple (and conventional) way of achieving
what I want is to announce a void:sparqlEndpoint as part of the
response and then teach the clients to perform a simple DESCRIBE query
to get the desired props.

However, the domain of void:sparqlEndpoint is restricted to
void:Dataset, so we need a small triangulation:

[ void:sparqlEndpoint http://ex.com/sparql ] void:sampleResource
http://ex.com/a .

The trouble with this is that you need to somehow tap into the SPARQL
processor or intercept certain types of queries. But it is doable.
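
Concretely, on seeing that announcement, the client's follow-up would
be nothing more than a single query against the announced endpoint:

DESCRIBE <http://ex.com/a>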

Anyway, keep em coming ;)


 Greetings,
 Olaf

Thanks,
A


 [1] http://www.w3.org/Protocols/rfc2616/rfc2616-sec14.html#sec14.20




-- 
Aldo Bucchi
U N I V R Z
Office: +56 2 795 4532
Mobile:+56 9 7623 8653
skype:aldo.bucchi
http://www.univrz.com/
http://aldobucchi.com

PRIVILEGED AND CONFIDENTIAL INFORMATION
This message is only for the use of the individual or entity to which it is
addressed and may contain information that is privileged and confidential. If
you are not the intended recipient, please do not distribute or copy this
communication, by e-mail or otherwise. Instead, please notify us immediately by
return e-mail.
INFORMACIÓN PRIVILEGIADA Y CONFIDENCIAL
Este mensaje está destinado sólo a la persona u organización al cual está
dirigido y podría contener información privilegiada y confidencial. Si usted no
es el destinatario, por favor no distribuya ni copie esta comunicación, por
email o por otra vía. Por el contrario, por favor notifíquenos inmediatamente
vía e-mail.



Re: Granular dereferencing ( prop by prop ) using REST + LinkedData; Ideas?

2008-12-29 Thread Aldo Bucchi

Hi Peter,

( reply inlined )

On Mon, Dec 29, 2008 at 7:14 PM, Peter Ansell ansell.pe...@gmail.com wrote:
 2008/12/30 Aldo Bucchi aldo.buc...@gmail.com:

 Hi All,

 I am in the process of LODing a dataset in which certain properties
 are generated on the fly ( props derived from aggregate calculations
 over the dataset, remote calls, etc ). I would like to let the clients
 choose which of these expensive properties they need on demand and on
 a granular level.

 For example, lets say I am interested in knowing more about resource
 http://ex.com/a.
 Per LD conventions, dereferencing http://ex.com/a ( via 303 ) returns

 http://ex.com/a a ex:Thing ;
  rdfs:label a sample dynamic resource;
  ex:dynamic1 45567 .

 If you are expecting the value to always be directly integrated as a
 plain literal you either put it in or you don't. If you are willing to
 say that it doesn't have to be a literal you can make it
 ex:dynamicUri1 http://ex.com/dynamic1/a or some similar way to
 allow them to resolve that URI if they recognise the predicate,
 and ignore it ( and hence the computation ) otherwise.

 LD conventions don't say that properties always have to be directly
 attributed to their elements. Having the URI link to another RDF
 document, even for literals fits all the guidelines as far as I can
 see.

Sorry, I am a bit slow today ;)

How would you make this work for literals?

if you say

http://ex.com/a ex:dynamic1 http://ex.com/dynamic1/a .

then you are stating the value for the dynamic1 predicate. If it is a
literal, I don't see how you can retract the latter statement and
replace it with whatever statement you get when dereferencing the URI.
What do you mean by:

 LD conventions don't say that properties always have to be directly
 attributed to their elements. Having the URI link to another RDF
 document, even for literals fits all the guidelines as far as I can
 see.

Of course this does work for resources ( hmm... it is the basis of
Linked Data ;).

Reading your reply, I somehow remembered I once overhauled seeAlso
so I could do stuff like this, by creating an n-ary relation where
one or more predicates are associated with a document.

http://ex.com/a foo:seeAlso [ foo:doc http://ex.com/dynamic1/a;
foo:predicate ex:dynamic1, ex:dynamic2  ] .

Of course many variations like the latter are possible.
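
Dereferencing the advertised document would then yield the expensive
literal directly ( same hypothetical URIs as above ):

# GET http://ex.com/dynamic1/a returns
<http://ex.com/a> ex:dynamic1 45567 .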


 Cheers,

 Peter


Best,
A

-- 
Aldo Bucchi
U N I V R Z
Office: +56 2 795 4532
Mobile:+56 9 7623 8653
skype:aldo.bucchi
http://www.univrz.com/
http://aldobucchi.com

PRIVILEGED AND CONFIDENTIAL INFORMATION
This message is only for the use of the individual or entity to which it is
addressed and may contain information that is privileged and confidential. If
you are not the intended recipient, please do not distribute or copy this
communication, by e-mail or otherwise. Instead, please notify us immediately by
return e-mail.
INFORMACIÓN PRIVILEGIADA Y CONFIDENCIAL
Este mensaje está destinado sólo a la persona u organización al cual está
dirigido y podría contener información privilegiada y confidencial. Si usted no
es el destinatario, por favor no distribuya ni copie esta comunicación, por
email o por otra vía. Por el contrario, por favor notifíquenos inmediatamente
vía e-mail.



Re: What's the Value Prop. of Linked Data?

2008-12-05 Thread Aldo Bucchi

Juan,

Linked Data is not just a new tool that helps you develop faster ;)
Kingsley is hinting at just that. This is an industry changer.
IT is there to help people and companies do things. In this case,
the how is also an enabler of new whats.

Go back to when the web started: it was a developer toy... but in
retrospect, it enabled new levels of communication and changed the
way business, health, education, government and life happen around
the world.

Developers are only a small segment in the larger value chain that
linked data will soon come to be.

This is a very neat exercise ;)
Would love to see it evolve ( I will make my contributions as I
can. killing bugs now ).

Best,
A

On Fri, Dec 5, 2008 at 8:29 PM, Juan Sequeda [EMAIL PROTECTED] wrote:
 Kingsley,

 These are great questions, but you missed the questions from the web2.0
 developer. That is who I am worried about.

 I sent even more basic questions in the previous thread.

 In conclusion, this proves that we need to have Linked Data Education and
 Outreach

 Juan Sequeda, Ph.D Student
 Dept. of Computer Sciences
 The University of Texas at Austin
 www.juansequeda.com
 www.semanticwebaustin.org


 On Fri, Dec 5, 2008 at 4:41 PM, Kingsley Idehen [EMAIL PROTECTED]
 wrote:

 Juan,

 Enterprise:

 Problem: I am an executive trying to figure out how Linked Data will help
 me. My biggest problem is connecting the dots across disparate data sources
 associated with a myriad of enterprise applications. I thought SOA was the
 solution?

 Web SaaS App. User:

 Problem: I am a Web user, I blog, I share bookmarks, I am on many
 social-networks, I can't find anything.  I am also beginning to realize that
 I don't own my own data.  How does Linked Data help me find things and
 reclaim ownership of my data? Even better, enable me to use the Web
 productively, bearing in mind there are only so many hours in a day.

 Government:

 Problem: I am bailing out every conceivable national behemoth in my local
 economy, how did we get here? Can Linked Data help me?

 --


 Regards,

 Kingsley Idehen   Weblog: http://www.openlinksw.com/blog/~kidehen
 President  CEO OpenLink Software Web: http://www.openlinksw.com










-- 
Aldo Bucchi
U N I V R Z
Office: +56 2 795 4532
Mobile:+56 9 7623 8653
skype:aldo.bucchi
http://www.univrz.com/
http://aldobucchi.com

PRIVILEGED AND CONFIDENTIAL INFORMATION
This message is only for the use of the individual or entity to which it is
addressed and may contain information that is privileged and confidential. If
you are not the intended recipient, please do not distribute or copy this
communication, by e-mail or otherwise. Instead, please notify us immediately by
return e-mail.
INFORMACIÓN PRIVILEGIADA Y CONFIDENCIAL
Este mensaje está destinado sólo a la persona u organización al cual está
dirigido y podría contener información privilegiada y confidencial. Si usted no
es el destinatario, por favor no distribuya ni copie esta comunicación, por
email o por otra vía. Por el contrario, por favor notifíquenos inmediatamente
vía e-mail.



Re: Potential Home for LOD Data Sets

2008-12-04 Thread Aldo Bucchi
,

 Kingsley Idehen   Weblog: http://www.openlinksw.com/blog/~kidehen
 President  CEO OpenLink Software Web: http://www.openlinksw.com











-- 
Aldo Bucchi
U N I V R Z
Office: +56 2 795 4532
Mobile:+56 9 7623 8653
skype:aldo.bucchi
http://www.univrz.com/
http://aldobucchi.com

PRIVILEGED AND CONFIDENTIAL INFORMATION
This message is only for the use of the individual or entity to which it is
addressed and may contain information that is privileged and confidential. If
you are not the intended recipient, please do not distribute or copy this
communication, by e-mail or otherwise. Instead, please notify us immediately by
return e-mail.
INFORMACIÓN PRIVILEGIADA Y CONFIDENCIAL
Este mensaje está destinado sólo a la persona u organización al cual está
dirigido y podría contener información privilegiada y confidencial. Si usted no
es el destinatario, por favor no distribuya ni copie esta comunicación, por
email o por otra vía. Por el contrario, por favor notifíquenos inmediatamente
vía e-mail.


Re: Can we afford to offer SPARQL endpoints when we are successful? (Was linked data hosted somewhere)

2008-11-27 Thread Aldo Bucchi
 a message from [EMAIL PROTECTED] and I can't work out
 or
 find out whether it means the message was rejected or something else, such
 as awaiting moderation.
 So I've done this as a reply.
 ===
 And now a response to the message from Aldo, done here to reduce traffic:

 Very generous of you to write in this way.
 And yes, humour is good.
 And sorry to all for the traffic.

 On 27/11/2008 00:02, Aldo Bucchi [EMAIL PROTECTED] wrote:

 OK Hugh,

 I see what you mean and I understand you being upset. Just re-read the
 conversation word by word because I felt something was not right.
 I did say wacky... is that it?

 In that case, and if this caused the confusion, I am really sorry.

 I was not talking about your software, this was just a joke. Talking in
 general.
 You replied to my joke with an absurd reply.

 My point was simply that, if you want to push things over the edge,
 why not get your own box. We all take care of our infrastructure and
 know its limitations.

 So, I formally apologize.
 I am by no means endorsing one piece of software over another ( save
 for mine, but it doesn't exist yet ;).
 My preferences for virtuoso come from experiential bias.

 I hope this clears things up.
 I apologize for the traffic.

 However, I do make a formal request for some sense of humor.
 This list tends to get into this kind of discussion, and we will
 start getting more and more visits from outsiders who are not used to
 this sort of sharpness.

 Best,
 A







-- 
Aldo Bucchi
U N I V R Z
Office: +56 2 795 4532
Mobile:+56 9 7623 8653
skype:aldo.bucchi
http://www.univrz.com/
http://aldobucchi.com

PRIVILEGED AND CONFIDENTIAL INFORMATION
This message is only for the use of the individual or entity to which it is
addressed and may contain information that is privileged and confidential. If
you are not the intended recipient, please do not distribute or copy this
communication, by e-mail or otherwise. Instead, please notify us immediately by
return e-mail.
INFORMACIÓN PRIVILEGIADA Y CONFIDENCIAL
Este mensaje está destinado sólo a la persona u organización al cual está
dirigido y podría contener información privilegiada y confidencial. Si usted no
es el destinatario, por favor no distribuya ni copie esta comunicación, por
email o por otra vía. Por el contrario, por favor notifíquenos inmediatamente
vía e-mail.



Re: Can we afford to offer SPARQL endpoints when we are successful? (Was linked data hosted somewhere)

2008-11-27 Thread Aldo Bucchi

One clarification,

I am not saying that this discussion is not interesting. We still need
to figure many things out and they will rise through debate.
It's just that this is the public-lod list and lod enthusiasts
arrive here. One read of a discussion like this one and off they go.
Should we segment this more?
lod-dev vs lod-users?

Many times I have hit a wall after a customer of mine read
someone from the SW lists arguing that there is something not right
about SPARQL, or RDF, etc... And he is an authoritative figure.

Take a look at Java conferences or Adobe MAX. How many times do these
guys question their decisions in public?
NONE.

Because they know it is suicide to do so.
The dynamic that follows is more or less this:
If XXX has room for doubts about tech YYY, and he knows so much more
than I do, then... why bother?

Best,
A

On Thu, Nov 27, 2008 at 1:47 PM, Aldo Bucchi [EMAIL PROTECTED] wrote:
 All,

 A simple meta-note on this thread, from my POV.
 I am obviously missing some more thorough analysis.
 What I intend to show is just how these discussions usually tend to be
 too broad and get dispersed.

 I asked a colleague ( he is a PhD, very smart guy ) to tell me what he
 thinks. He's pretty much in line with me.

 * Hugh is worried about *why* anyone would go through the trouble of
 publishing their data, and points out some problems
 * Juan talks about how this benefits the end developers
 * Peter provides his experience as a provider of a large dataset
 * Kingsley provides some tech background to alleviate the concerns and
 cross-refers with a broader background, drawing on his vast
 experience
 * I tried to cut the SQL vs SPARQL link and factor in the complex
 system scenario

 Etc...

 Maybe the problem with the SW group is that it has drawn people who
 are far too smart and given them too much power. We should, at this
 point, be taking some things for granted instead of keeping on
 touching the elephant.

 We might need a big corporate like M$ to tell us that we need to open
 up our datasets, period.

 Not to say that there is no room for free discussion, or that many
 observations aren't fair, but there should be a place where people (
 the rest of the world ) can go and just find a simple, comforting
 answer: ah, I can open my data; it is good.

 ( perhaps we could create a new list for hindsight discussions ).
 The LOD list should be reassuring.

 Just like GMail did for Ajax. Before that authoritative example,
 there was too much room for debate. Heck, I was using Ajax back in
 1999 and I didn't even know it. But it was a mess. Now it is an
 organized industry.

 It's all about critical mass; socially, we act like sheep. We look for
 reassurance.

 Please, if my analysis of this thread was too shallow it is because I
 am having lunch... do your own and notice that we are all just
 throwing in our own comments and not really going anywhere concrete,
 giving the impression of too much academia.

 I have been selling SW projects for many years, so I have some say in this...
 we need to balance the concrete/abstract equation for PR.

 Best,
 A

 On Thu, Nov 27, 2008 at 9:12 AM, Richard Cyganiak [EMAIL PROTECTED] wrote:
 Hugh,

 Here's what I think we will see in the area of RDF publishing in a few
 years:

 - few public SPARQL endpoints over popular datasets (for obvious reasons)

 - linked data sites offer limited query capabilities (e.g. a scientific
 bibliography site could offer search paper by title, search author by
 name, search paper by category and/or date range) (think the advanced
 search form on a website, clad into a REST-style API that returns RDF)

 - those query capabilities are described in RDF and hence can be invoked by
 tools such as SQUIN/SemWebClient to answer certain queries efficiently

 - everyone who wants more advanced query capabilities, will crawl the site
 and run their own local SPARQL store

 At the moment we don't have the technology for describing non-SPARQL query
 interfaces in RDF, and crawling linked data is still a fairly complex
 business. As long as these problems are not solved, we pretty much are stuck
 with SPARQL endpoints.

 Best,
 Richard


 On 27 Nov 2008, at 00:18, Hugh Glaser wrote:


 Prompted by the thread on linked data hosted somewhere I would like to
 ask
 the above question that has been bothering me for a while.

 The only reason anyone can afford to offer a SPARQL endpoint is because it
 doesn't get used too much?

 As abstract components for studying interaction, performance, etc.:
 DB=KB, SQL=SPARQL.
 In fact, I often consider the components themselves interchangeable; that
 is, the first step of the migration to SW technologies for an application
 is
 to take an SQL-based back end and simply replace it with a SPARQL/RDF back
 end and then carry on.

 However.
 No serious DB publisher gives direct SQL access to their DB (I think).
 There are often commercial reasons, of course.
 But even when there are not (the Open

Re: Can we afford to offer SPARQL endpoints when we are successful? (Was linked data hosted somewhere)

2008-11-27 Thread Aldo Bucchi

Ian,

After 10 years of debate the SW is finally giving birth to something
concrete or tangible.
I made reference to the concrete/abstract equation.

If this is public-lod, then let's create a users-lod and a dev-lod,
and let one stabilize.

Please, I am on the sales side and have been for years.
Java is easy to sell because you are talking low-level details of
APIs, not macro business models.
( OK, you can say that standardizing the content repository API has a
business impact for day-to-day software, but it does not reshape the IT
industry ).

I agree on the debate; all I am saying is that there should be some
stability at the edge. This thread originated when an apparent
newcomer asked for a way to use LOD ( which is what we all want the
world to start asking for! ).

And look at what it turned into.
Let's send a customer satisfaction survey and see how he now feels
about this ;) ( joke ).

Just pointing that out.

I was involved in both Java and Flex from the beginning.
In fact, since the very early days of Flex, I have followed the process
with attention, contributing my grain of sand when possible.

I am a tech-savvy salesman. I operate on that front.

Best,
A

On Thu, Nov 27, 2008 at 6:25 PM, Ian Dickinson [EMAIL PROTECTED] wrote:

 Aldo Bucchi wrote:

 Take a look at Java conferences or Adobe MAX. How many times do these
 guys question their decisions in public?
 NONE.

 Um, Java Community Process?

 If you think that there has been no open discussion of the technical design
 decisions or business models or standardisation or control of the direction
 of either Java or Flex/Flash then you must be viewing the web through a very
 small aperture.

 Because they know it is suicide to do so.
 The dynamic that follows is more or less this:
 If XXX has room for doubts about tech YYY, and he knows so much more
 than I do, then... why bother?
 Any technology broad enough to be interesting has a community around it with
 a spectrum of opinion. You might equally well suppose that your customers
 would argue that any tech YYY that doesn't engender any difference of
 opinions is too narrowly supported to be commercially viable. Debate is a
 positive sign, not a negative one.

 Ian


 
 Ian Dickinson   http://www.hpl.hp.com/personal/Ian_Dickinson
 HP Laboratories Bristol  mailto:[EMAIL PROTECTED]
 Hewlett-Packard LimitedRegistered No: 690597 England
 Registered Office:  Cain Road, Bracknell, Berks RG12 1HN





-- 
Aldo Bucchi
U N I V R Z
Office: +56 2 795 4532
Mobile:+56 9 7623 8653
skype:aldo.bucchi
http://www.univrz.com/
http://aldobucchi.com

PRIVILEGED AND CONFIDENTIAL INFORMATION
This message is only for the use of the individual or entity to which it is
addressed and may contain information that is privileged and confidential. If
you are not the intended recipient, please do not distribute or copy this
communication, by e-mail or otherwise. Instead, please notify us immediately by
return e-mail.
INFORMACIÓN PRIVILEGIADA Y CONFIDENCIAL
Este mensaje está destinado sólo a la persona u organización al cual está
dirigido y podría contener información privilegiada y confidencial. Si usted no
es el destinatario, por favor no distribuya ni copie esta comunicación, por
email o por otra vía. Por el contrario, por favor notifíquenos inmediatamente
vía e-mail.



Re: linked data hosted somewhere

2008-11-26 Thread Aldo Bucchi
 to be clear.

If I told you that you had access to the WORLD SQL database,
would you run a SELECT * FROM PERSON on it?

Why take it to an extreme? I was just suggesting that you can get
your own comfy apartment to do whatever you want if you feel like it.
I didn't suggest that you should throw a party and invite three
elephants and Ozzie.
However, if you did, it's up to you to clean the mess ( or live with
the frustration of Ozzie not showing up ).

As people start experimenting though, it will be ultimately beneficial
to allow them to instantiate their own, independent agents and mirrors
of the datasets ( with all due conditions ).
And this is the way to start methinks, through grassroots comments.


 But I think it is sensible to take the question to a new thread...

I'm sorry to contribute my grain of sand to the noise on this list.
Out ;)


 Best
 Hugh

 Kingsley




Best,
A

-- 
Aldo Bucchi
U N I V R Z
Office: +56 2 795 4532
Mobile:+56 9 7623 8653
skype:aldo.bucchi
http://www.univrz.com/
http://aldobucchi.com

PRIVILEGED AND CONFIDENTIAL INFORMATION
This message is only for the use of the individual or entity to which it is
addressed and may contain information that is privileged and confidential. If
you are not the intended recipient, please do not distribute or copy this
communication, by e-mail or otherwise. Instead, please notify us immediately by
return e-mail.
INFORMACIÓN PRIVILEGIADA Y CONFIDENCIAL
Este mensaje está destinado sólo a la persona u organización al cual está
dirigido y podría contener información privilegiada y confidencial. Si usted no
es el destinatario, por favor no distribuya ni copie esta comunicación, por
email o por otra vía. Por el contrario, por favor notifíquenos inmediatamente
vía e-mail.



Re: Can we afford to offer SPARQL endpoints when we are successful? (Was linked data hosted somewhere)

2008-11-26 Thread Aldo Bucchi

Hugh,

Let's just look forward. This is not the same world, not the same game
and definitely not the same problem.
The comparison stops the minute you realize we now have billions of
computers connected, and a globally distributed DNS.

The trick is understanding that we are not exposing SQL endpoints,
standing on the shore and throwing stones at the ocean hoping to fill
it up. We are throwing powder jelly that will create solid land over
which we will be able to walk very soon. What we are doing is
assembling ONE big database, because we have ONE namespace that meshes
everything ( thanks to the URIs ) and one transport mechanism.

And the force that will drive us to open data is economic.

Your database contains facts that complement my records and we both
benefit from the mutual exchange, and this happens more efficiently in
an unplanned manner.

Serendipity and unplanned knowledge generation.

So, if you consider the URI and the WWW, the comparison between SPARQL
and SQL and Databases is not enough. However, I admit it is fair and
sometimes necessary at a micro-level.

The tech details will be solved in a snap. As you can see from
Kingsley's response, this is not a new problem, but rather a new
opportunity.

I guess the question is: why would anyone open up their data before,
when the integration had to be done manually?

Best,
A

On Wed, Nov 26, 2008 at 9:18 PM, Hugh Glaser [EMAIL PROTECTED] wrote:
 Prompted by the thread on linked data hosted somewhere I would like to ask
 the above question that has been bothering me for a while.

 The only reason anyone can afford to offer a SPARQL endpoint is because it
 doesn't get used too much?

 As abstract components for studying interaction, performance, etc.:
 DB=KB, SQL=SPARQL.
 In fact, I often consider the components themselves interchangeable; that
 is, the first step of the migration to SW technologies for an application is
 to take an SQL-based back end and simply replace it with a SPARQL/RDF back
 end and then carry on.

 However.
 No serious DB publisher gives direct SQL access to their DB (I think).
 There are often commercial reasons, of course.
 But even when there are not (the Open in LOD), there are only search options
 and possibly download facilities.
 Even government organisations that have a remit to publish their data don't
 offer SQL access.

 Will we not have to do the same?
 Or perhaps there is a subset of SPARQL that I could offer that will allow me
 to offer a safer service that conforms to others' safer services (so it is
 well-understood)?
 Is this defined, or is anyone working on it?

 And I am not referring to any particular software - it seems to me that this
 is something that LODers need to worry about.
 We aim to take over the world; and if SPARQL endpoints are part of that
 (maybe they aren't - just resolvable URIs?), then we should make damn sure
 that we think they can be delivered.

 My answer to my subject question?
 No, not as it stands. And we need to have a story to replace it.

 Best
 Hugh

 ===
 Sorry if this is a second copy, but the first, sent as a new post, seemed to
 only elicit a message from [EMAIL PROTECTED] and I can't work out or
 find out whether it means the message was rejected or something else, such
 as awaiting moderation.
 So I've done this as a reply.
 ===
 And now a response to the message from Aldo, done here to reduce traffic:

 Very generous of you to write in this way.
 And yes, humour is good.
 And sorry to all for the traffic.

 On 27/11/2008 00:02, Aldo Bucchi [EMAIL PROTECTED] wrote:

 OK Hugh,

 I see what you mean and I understand you being upset. Just re-read the
 conversation word by word because I felt something was not right.
 I did say wacky... is that it?

 In that case, and if this caused the confusion, I am really sorry.

 I was not talking about your software, this was just a joke. Talking in
 general.
 You replied to my joke with an absurd reply.

 My point was simply that, if you want to push things over the edge,
 why not get your own box. We all take care of our infrastructure and
 know its limitations.

 So, I formally apologize.
 I am by no means endorsing one piece of software over another ( save
 for mine, but it does't exist yet ;).
 My preferences for virtuoso come from experiential bias.

 I hope this clears things up.
 I apologize for the traffic.

 However, I do make a formal request for some sense of humor.
 This list tends to get into this kind of discussions, and we will
 start getting more and more visits from outsiders who are not used to
 this sort of sharpness.

 Best,
 A






-- 
Aldo Bucchi
U N I V R Z
Office: +56 2 795 4532
Mobile:+56 9 7623 8653
skype:aldo.bucchi
http://www.univrz.com/
http://aldobucchi.com

PRIVILEGED AND CONFIDENTIAL INFORMATION
This message is only for the use of the individual or entity to which it is
addressed and may contain information that is privileged and confidential. If
you

Oh No. The Global Mind does not know Madonna divorced Guy Ritchie!

2008-10-18 Thread Aldo Bucchi

http://bit.ly/2fmmcL

I know it's not time to worry about this kind of stuff, and this
happens in the real world, etc.
But: is there a way to trace this statement back? At least the date of
extraction from its source?

In other words:
how would you build the mythical Oh Yeah!? button in this case,
manually speaking?

Perhaps efforts such as VoiD could include some metadata to trace
sources back to the originating authority. Some rudimentary protocol, so I
can say:
trust Wikipedia, or trust this date range, etc.
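
A rudimentary sketch of what that could look like at the dataset
level, assuming plain Dublin Core terms and a made-up dataset URI:

@prefix void:    <http://rdfs.org/ns/void#> .
@prefix dcterms: <http://purl.org/dc/terms/> .
@prefix xsd:     <http://www.w3.org/2001/XMLSchema#> .

# Where this slice of data came from, and when it was extracted:
<http://example.org/dataset/celebrity-facts> a void:Dataset ;
    dcterms:source <http://en.wikipedia.org/> ;
    dcterms:date "2008-09-15"^^xsd:date .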

Best,
A


-- 
 Aldo Bucchi 
+56 9 7623 8653
skype:aldo.bucchi
twitter:aldonline
http://aldobucchi.com/
http://univrz.com/



Re: Drilling into the LOD Cloud

2008-09-29 Thread Aldo Bucchi

Hummm, sorry.
When talking about classes it does make sense ;)
Or even skos:Concepts ( which we can declare to belong to a given
vocab, for example ).

( have been working with instance data for too long ).

This might call for a way to make statements about a specific IRI, but
that would require ternary relations or some syntactic convention...
sounds like going the wrong way.

But no worse than the inferencing complexity that would arise from
discarding owl:sameAs as a simple conceptual bridge between
vocabularies.

Sounds like a prob indeed.

Best,
A




On Mon, Sep 29, 2008 at 1:16 AM, Aldo Bucchi [EMAIL PROTECTED] wrote:
 I think you're overlooking something
 If you're using dc:author to state

 http://dbpedia.org/resource/R%C3%B6yksopp dc:author example:me

 ( which translates to: I *created Royksopp*, the music band )

 And then Umbel states that *they created* the music band ( I made this
 up for the example ).

 http://umbel.org/umbel/ne/wikipedia/R%C3%B6yksopp dc:author ex:someoneElse .

 you will indeed run into problems when equating the two IDs.

 http://dbpedia.org/resource/R%C3%B6yksopp owl:sameAs
 http://umbel.org/umbel/ne/wikipedia/R%C3%B6yksopp

 But, AFAIK, this is *incorrect usage of dc:author* and not a design
 flaw re. owl:sameAs.
 Luckily, neither UMBEL nor  DBpedia seem to be using dc:author incorrectly.

 Authorship metadata should not be attached to the ID for the concept,
 but to the vocabulary namespace or through other indirection.
 Which brings up another point: how do you state that a URI belongs to
 a given vocabulary?
 - URI opaqueness plays against us here
 - is this really something we want/need?
 - ...

 If what you intend to equate is a document ( which usually has dc:*
 metadata ) with another doc that has different metadata, stop and
 rethink it. You might be wanting to equate the concepts they
 reference.
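
 A sketch of keeping the two apart, following the resource/data split
 DBpedia already uses ( the dcterms statement is illustrative ):

 # The concept ( the band itself ):
 <http://dbpedia.org/resource/R%C3%B6yksopp> a mo:MusicGroup .

 # The document describing it is what carries the dc:* metadata:
 <http://dbpedia.org/data/R%C3%B6yksopp>
     foaf:primaryTopic <http://dbpedia.org/resource/R%C3%B6yksopp> ;
     dcterms:creator <http://dbpedia.org/> .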

 A

 On Sun, Sep 28, 2008 at 6:30 PM, Damian Steer [EMAIL PROTECTED] wrote:


 On 28 Sep 2008, at 19:01, Kingsley Idehen wrote:


 Dan Brickley wrote:

 Kingsley Idehen wrote:

 Then between UMBEL and OpenCyc:

 1. owl:sameAs
 2. owl:equivalentClass

 If these thingies are owl:sameAs, then presumably they have the same
 IP-related characteristics, owners, creation dates etc?

 Does that mean Cycorp owns UMBEL?

 Dan,

 No, it implies that in the UMBEL data space you have equivalence between
 Classes used to define UMBEL subject concepts (subject matter entities) and
 OpenCyc.

 I think Dan's point is that owl:sameAs is a very strong statement, as
 illustrated by the ownership question. If opencyc:Motorcycle
 owl:equivalentClass umbel:Motorcycle then they have the same extension.
 Informally, any use you make of one as a class can be replaced by the other
 without changing the meaning of the whole. However if they are owl:sameAs
 they name the same thing, so dc:creationDate, dc:creator, cc:license,
 rdfs:isDefinedBy etc etc are the same for each, which strike me as unhelpful
 side effects. owl:equivalentClass is the vocabulary mappers' friend :-)

 Damian






 --
  Aldo Bucchi 
 +56 9 7623 8653
 skype:aldo.bucchi
 twitter:aldonline
 http://aldobucchi.com/
 http://univrz.com/




-- 
 Aldo Bucchi 
+56 9 7623 8653
skype:aldo.bucchi
twitter:aldonline
http://aldobucchi.com/
http://univrz.com/



Re: Drilling into the LOD Cloud

2008-09-29 Thread Aldo Bucchi

On Mon, Sep 29, 2008 at 2:08 PM, Dan Brickley [EMAIL PROTECTED] wrote:

 Damian Steer wrote:


 [sorry, forgot to reply-all]

 Peter Ansell wrote:

 | That is fine for classes, but how do you map individuals without
 metadata side-effects?

 Looking at my previous answer again I think I should take another run at
 this  :-)

 One way of thinking about equivalence is in terms of substitution: if :x
 :equiv :y, when is it safe to substitute :x with :y?

 The default is: it's not safe (obviously).

 :x equivalentClass :y - safe when used as a class (e.g. rdf:type :x)
 :x sameAs :y - always safe (watch out!)
 :x sameConceptAs :y - safe in subject positions

 do you mean subject in the 'topic, dc:subject' sense, rather than the
 subject/predicate/object sense? On first reading I thought you meant the
 latter, which would be problematic since any inverse of a property can flip
 values into the 'object' role. So assuming the former, I quite agree.

 Also thanks for expanding on what I meant. I was indeed picking a strong
 example to emphasise my (under-articulated) case. Using an owl:sameAs claim
 between independently developed classes and properties is almost always
 misleading, confusing and untrue. So yup, I agree that application-specific
 properties such as these are often more useful.
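
 ( A tiny Turtle sketch of the difference, with all ex: names invented
 for illustration:

   ex:A owl:equivalentClass ex:B .
   ex:bike rdf:type ex:A .      # also licenses rdf:type ex:B
   ex:A dc:creator ex:alice .   # implies nothing about ex:B

   ex:C owl:sameAs ex:D .
   ex:C dc:creator ex:alice .   # now ex:D dc:creator ex:alice holds too

 So equivalentClass only licenses substitution in class positions, while
 sameAs licenses it in every position. )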

 Also, btw, folks: there is no such thing as dc:author. It was changed in the
 mid-late 1990s to dc:creator, so as to better cover non-written creations
 such as images, museum artifacts, etc. So best to avoid/fix it in mail
 threads before 'dc:author' ends up getting mentioned in books etc.

Oops.
Made a quick search and I don't use author anywhere in my codebase ( I
use creator ).
But for some reason it is still sticking in my mind ( band, music, author ).
If it shows up in a book I'll feel really guilty, as the material
author of that crime ( or should I say creator )...

This brings up another point ( re. retroactive editing ). Why haven't
we found a replacement for mailing lists?
Dogfooding, anyone?
( rhetorical question, perhaps; don't reply on this thread or it will go
seriously off topic ).

Thx,
A



 cheers,

 Dan

 :x sameWorkAs :y - safe for author, creation date
 :x sameManifestationAs :y - safe for the above, and length (?)

 Does that help?

 Damian

 -BEGIN PGP SIGNATURE-
 Version: GnuPG v1.4.6 (GNU/Linux)
 Comment: Using GnuPG with Mozilla - http://enigmail.mozdev.org

 iD8DBQFI4NdRAyLCB+mTtykRAppqAJ9kzkWgcWt0zeWfWdMfWwqtdhoblwCeNCWH
 54J20+PaNd1xmCmcFLs9DN8=
 =/gW2
 -END PGP SIGNATURE-








-- 
 Aldo Bucchi 
+56 9 7623 8653
skype:aldo.bucchi
twitter:aldonline
http://aldobucchi.com/
http://univrz.com/



Re: Drilling into the LOD Cloud

2008-09-29 Thread Aldo Bucchi

This is definitely too late, but still...

Have you seen http://der-mo.net/ 's visualizations?

On Mon, Sep 22, 2008 at 6:43 AM, Ed Summers [EMAIL PROTECTED] wrote:

 I am doing a presentation (tomorrow) on lcsh.info at DC2008, and of
 course (hat tip to PaulMiller) I'm using the LOD Cloud to illustrate
 how you all are actually putting RDF to work.

 While the LOD Cloud is great for visualizing the different data sets
 that are getting linked together, I was wondering if anyone happens to
 have some visualizations of the type of links that exist between
 resources in the LOD Cloud.

 I guess what I'm thinking of is the usual graph illustration with
 nodes, arcs, etc -- with resource URIs that live in different
 namespaces: geonames, dbpedia, etc ... I'd like to illustrate the
 importance of those linking assertions.

 If you have something handy, I'd love to use it, and spend more time
 drinking beer here in Berlin :-)

 //Ed





-- 
 Aldo Bucchi 
+56 9 7623 8653
skype:aldo.bucchi
twitter:aldonline
http://aldobucchi.com/
http://univrz.com/



Re: Drilling into the LOD Cloud

2008-09-28 Thread Aldo Bucchi

I think you're overlooking something.
If you're using dc:author to state

http://dbpedia.org/resource/R%C3%B6yksopp dc:author example:me

( which translates to: I *created Royksopp*, the music band )

And then Umbel states that *they created* the music band ( I made this
up for the example ).

http://umbel.org/umbel/ne/wikipedia/R%C3%B6yksopp dc:author ex:someoneElse .

you will indeed run into problems when equating the two IDs.

http://dbpedia.org/resource/R%C3%B6yksopp owl:sameAs
http://umbel.org/umbel/ne/wikipedia/R%C3%B6yksopp .
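
( Spelling the clash out: owl:sameAs makes the two names interchangeable
in every triple, so a reasoner may now conclude both of the following,
keeping this message's deliberately wrong dc:author usage:

  http://dbpedia.org/resource/R%C3%B6yksopp dc:author example:me .
  http://dbpedia.org/resource/R%C3%B6yksopp dc:author ex:someoneElse .

i.e. two competing authorship claims about the same resource. )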

But, AFAIK, this is *incorrect usage of dc:author* and not a design
flaw re. owl:sameAs.
Luckily, neither UMBEL nor DBpedia seems to be using dc:author incorrectly.

Authorship metadata should not be attached to the ID for the concept,
but to the vocabulary namespace or through other indirection.
Which brings up another point: how do you state that a URI belongs to
a given vocabulary? ( one partial option sketched below )
- URI opaqueness plays against here
- is this really something we want/need?
- ...
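
( One existing hook, as a partial sketch: rdfs:isDefinedBy points from a
term to whatever defines it, e.g.

  <http://dbpedia.org/resource/R%C3%B6yksopp> rdfs:isDefinedBy <http://dbpedia.org/> .

though that locates the defining vocabulary rather than carrying
authorship metadata. )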

If what you intend to equate is a document ( which usually has dc:*
metadata ) with another doc that has different metadata, stop and
rethink it. You might be wanting to equate the concepts they
reference.

A

On Sun, Sep 28, 2008 at 6:30 PM, Damian Steer [EMAIL PROTECTED] wrote:


 On 28 Sep 2008, at 19:01, Kingsley Idehen wrote:


 Dan Brickley wrote:

 Kingsley Idehen wrote:

 Then between UMBEL and OpenCyc:

 1. owl:sameAs
 2. owl:equivalentClass

 If these thingies are owl:sameAs, then presumably they have the same
 IP-related characteristics, owners, creation dates etc?

 Does that mean Cycorp owns UMBEL?

 Dan,

 No, it implies that in the UMBEL data space you have equivalence between
 Classes used to define UMBEL subject concepts (subject matter entities) and
 OpenCyc.

 I think Dan's point is that owl:sameAs is a very strong statement, as
 illustrated by the ownership question. If opencyc:Motorcycle
 owl:equivalentClass umbel:Motorcycle then they have the same extension.
 Informally, any use you make of one as a class can be replaced by the other
 without changing the meaning of the whole. However if they are owl:sameAs
 they name the same thing, so dc:creationDate, dc:creator, cc:license,
 rdfs:isDefinedBy etc. are the same for each, which strikes me as an unhelpful
 side effect. owl:equivalentClass is the vocabulary mappers' friend :-)

 Damian






-- 
 Aldo Bucchi 
+56 9 7623 8653
skype:aldo.bucchi
twitter:aldonline
http://aldobucchi.com/
http://univrz.com/



Semantinet raises $3.4M for web information discovery

2008-09-05 Thread Aldo Bucchi

From VentureBeat:

Semantinet, an Israeli company working on a browsing product
incorporating semantic technology, has taken a first round of funding
totaling $3.4 million.

There aren't many details yet on what Semantinet's product will look
like. The company has only said that it is working on software
"enabling the discovery of new information as part of the natural
browsing experience."

Their website says they launch in September 2008.
Sounds like Linked Data under the hood(?)

Best,
A

-- 
 Aldo Bucchi 
+56 9 7623 8653
skype:aldo.bucchi
twitter:aldonline
http://aldobucchi.com/
http://univrz.com/



Re: Linked Data Visualization Platforms; Different Approaches

2008-08-22 Thread Aldo Bucchi

Hmm... this silence probably proves my point...
Or maybe my post wasn't clear enough.

Consider tapping into established visualization frameworks and richer
client platforms ;)
If we get RDF there ( by extending their frameworks and evangelizing
), then those particular communities can make their decentralized
contribution.

Piggyback the Global Mind.
These problems are not new to the world.
They just have different names for different people.

Best,
A


On Wed, Aug 20, 2008 at 6:26 AM, Aldo Bucchi [EMAIL PROTECTED] wrote:
 Hi,

 I was trying to hold back... but isolation is killing me. I guess we
 are social in nature.
 Some of you know I spent quite some time working on a visualization
 platform. Part of it was built on a Java / Processing hybrid and
 surfaced through a thin ( Flash ) client using some pretty complex
 streaming tricks ( think VNC ).
 It looks nothing like the UIs you are working on, although
 conceptually we went down similar paths.
 Since we stood on a much more solid platform, with infinite tools at
 hand and infinite local *processing ;)* power, it was easier to
 experiment ( we didn't rely on the client, Ajax, etc ) and we moved
 forward faster.

 You can say I cheated ;)

 Now, with the advent of Flash/Flex/AIR and the other RIA platforms (
 Silverlight, JavaFX and maybe even Gears ), similar power will
 eventually be available on the client side.

 ( forget about having powerful and modular Javascript for a while,
 things are not looking nice[1] )

 I guess my point is: consider breaking FREE OF THE RULES and use all
 the tools to experiment ( even if they break some standards, you can
 then come back ). There are infinite paradigms to experiment with.

 Some pointers:
 A very nice vis newcomer in Flex: SpatialKey [2]
 My incipient RDF framework for the Flash platform [3]

 I am not saying you don't know that there are other ways to do this.
 Just that you should reconsider the benefits of using a better
 framework... even if it is only for research. Back to the Haystack
 days perhaps.
 Imagine how fast faceted browsing can be with a co-located triple
 store feeding a specially crafted federated querying engine.

 Processing on the edge.

 Best,
 A

 [1] https://mail.mozilla.org/pipermail/es-discuss/2008-August/003400.html
 [2] http://www.spatialkey.com/
 [3] http://www.semanticflash.org/

 --
  Aldo Bucchi 
 +56 9 7623 8653
 skype:aldo.bucchi
 http://aldobucchi.com/
 http://univrz.com/




-- 
 Aldo Bucchi 
+56 9 7623 8653
skype:aldo.bucchi
http://aldobucchi.com/



Linked Data Visualization Platforms; Different Approaches

2008-08-20 Thread Aldo Bucchi

Hi,

I was trying to hold back... but isolation is killing me. I guess we
are social in nature.
Some of you know I spent quite some time working on a visualization
platform. Part of it was built on a Java / Processing hybrid and
surfaced through a thin ( Flash ) client using some pretty complex
streaming tricks ( think VNC ).
It looks nothing like the UIs you are working on, although
conceptually we went down similar paths.
Since we stood on a much more solid platform, with infinite tools at
hand and infinite local *processing ;)* power, it was easier to
experiment ( we didn't rely on the client, Ajax, etc ) and we moved
forward faster.

You can say I cheated ;)

Now, with the advent of Flash/Flex/AIR and the other RIA platforms (
Silverlight, JavaFX and maybe even Gears ), similar power will
eventually be available on the client side.

( forget about having powerful and modular Javascript for a while,
things are not looking nice[1] )

I guess my point is: consider breaking FREE OF THE RULES and use all
the tools to experiment ( even if they break some standards, you can
then come back ). There are infinite paradigms to experiment with.

Some pointers:
A very nice vis newcomer in Flex: SpatialKey [2]
My incipient RDF framework for the Flash platform [3]

I am not saying you don't know that there are other ways to do this.
Just that you should reconsider the benefits of using a better
framework... even if it is only for research. Back to the Haystack
days perhaps.
Imagine how fast faceted browsing can be with a co-located triple
store feeding a specially crafted federated querying engine.

Processing on the edge.

Best,
A

[1] https://mail.mozilla.org/pipermail/es-discuss/2008-August/003400.html
[2] http://www.spatialkey.com/
[3] http://www.semanticflash.org/

-- 
 Aldo Bucchi 
+56 9 7623 8653
skype:aldo.bucchi
http://aldobucchi.com/
http://univrz.com/



Re: Modular Browsers (was RE: freebase parallax: user interface for browsing graphs of data)

2008-08-20 Thread Aldo Bucchi

Hello,

( replies inlined )

On Wed, Aug 20, 2008 at 9:19 PM, Kingsley Idehen [EMAIL PROTECTED] wrote:
 Aldo Bucchi wrote:

 HI,

 Scanning the thread on Parallax I see some terms reoccurring:

 Outgoing Connections.
 Lenses.
 Lists.
 Facets.
 Free text search.
 IFP to IRI resolution.
 Find documents that contain IRIs
 etc...

 They are all implemented in different ways but tend to share semantics
 across different browsers and services.
 How far are we from defining a modular framework so we can mix and
 match these as atomic interaction pieces?

 Both services and probably UI parts.

 Of course, RDF and HTTP can be used to describe them and deliver the
 descriptions and, in the case of widgets, some OOTB implementations.
 XForms, Dojo widgets, SWFs?

 I have done something similar but much simpler in a Flex platform ( I
 serve Flex modules, described in RDF and referenced by Fresnel vocabs,
 but only for presentation ).
 And then on a functional side I have several services that do
 different things, and I can hot swap them.
 For example, the free text search service is a (S)WS.
 Faceter service idem.

 I guess we still need to see some more diversity to derive a taxonomy
 and start working on the framework.
 But it is nice to keep this in sight.

 The recurring topics.

 Best,
 A




 Aldo,

 Really nice to see you are looking at things holistically.

I showed up with a narrow interface, I know ;)


 As you can see, we are veering gradually towards recognizing that the Web,
 courtesy of HTTP, gives us a really interesting infrastructure for the
 time-tested MVC pattern (I've been trying to bring attention to this aspect
 of the Web for a while now).

 If you look at ODE (closely) you will notice it's an MVC vessel. We have
 components for Data Access (RDFization Cartridges), components for UI
 (xslt+css templates and fresnel+xslt+css templates), and components for
 actions (*Cartridges not released yet*).

Ah... I remember telling Daniel Lewis something was missing from his
UPnP diagram: a way to modify the Model.
aka: a Controller / Actions.

You are right, technically an agent like ODE ( assuming you can hook
in actions ) is all that you need to allow users to interact with
linked data.

Let's say that this sort of solution can cover 80% of user interaction
cases ( launching simple actions and direct manipulation of resources
), and operates on top of 80% of data ( anything that can be published
as linked data/SPARQL and fits within the expressiveness of RDF's
abstract model ).
Not a bad MVC structure at all!

So, how do you plan on hooking up the actions to the shell? Is this
in the cartridges?
How will they surface. Context menu?


 We've tried to focus on the foundation infrastructure that uses HTTP for the
 messaging across M-V-C so that you get:
 M <--http--> V <--http--> C

 Unfortunately, our focus on the M and C doesn't permeate. Instead, we find all
 focus coming at us from the V part where we've released minimal templates
 with hope that 3rd parties will eventually focus on Display Cartridges (via
 Fresnel, XSLT+SPARQL, xml+xslt+css, etc.).

Well. The M part is the data, isn't it? ( so it is permeating, people
are publishing data ).
Unless you mean building some higher-functionality services ( on top
of SPARQL and RDF ) such as faceters, free text search, IFP
resolution, etc. But in that case it is also moving forward, although
not with a standardized interface.
This could be thought of as higher-level Data Access components.

The C part... that's another story.
As I pointed out before, you need to define the way and an environment
to hook in the actions. What is the shell?

For example, you could provide a JS API for ODE where developers could
hook up methods using Adenine-like signatures ( which, if I remember
correctly, use rdf:type hinting ) and then surface them on the context
menu.
Or perhaps a server-side registry of actions is more suitable.

Many options here. I am curious about the Action Cartridges.
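
A sketch of what one such registry entry could look like in RDF. Every
term in the ex: namespace is invented here for illustration; only
rdfs:label and foaf:Person are real vocabulary:

  ex:sendMail a ex:Action ;
      rdfs:label "Send e-mail" ;
      ex:appliesTo foaf:Person ;   # the rdf:type hint
      ex:endpoint <http://example.org/actions/sendMail> .

The agent could match ex:appliesTo against the rdf:type of the resource
in focus and surface matching actions on the context menu.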

The best solution overall should be agent-independent ( and here we go
down the SWS road once again ).



 btw - David Schwabe [1] also alluded to the architectural modularity that I
 am fundamentally trying to bring to broader attention in this evolving
 conversation re. Linked Data oriented Web applications.

 The ultimate goal is to deliver a set of options that enable Web Users to
 Explore the Web coherently and productively (imho).

And that will eventually be ( dynamically ) assembled to deliver the
functionality that today is present in walled garden applications.


 Humans can only do so much, and likewise machines; put both together and we
 fashion a recipe for real collective intelligence (beyond the buzzword).
 We desperately need to tap into collective intelligence en route to
 solving many of the real problems facing the world today.

 The Web should make seeing and connecting the dots easier, but this is down
 to the MVC combo as opposed to any single component of the pattern  :-)

Aha

SW Marketing: Effective Sales Pitch for VCs and Non-Geeks in General

2008-07-10 Thread Aldo Bucchi

Hi,

I have to apologize. The previous subject of the thread arguably used
the term girlfriend as a pejorative. I was referring directly to my
girlfriend, who has had the patience to listen to my ramblings on the
semantic web over and over again. Of course the generalization is
totally out of line. It might sound sexist.

What this mistake proves is probably quite the opposite, for the Nth
time: I am the thoughtless one in this relationship!

I copy the original thread below:


-- Forwarded message --
From: Hausenblas, Michael [EMAIL PROTECTED]
Date: Thu, Jul 10, 2008 at 1:22 AM
Subject: RE: SW Marketing: Effective Sales Pitch for VCs / Girlfriends / etc?
To: Aldo Bucchi [EMAIL PROTECTED]
Cc: public-lod@w3.org, Semantic Web [EMAIL PROTECTED]



[Note: changed LOD mailing list to the new one; the MIT-list is not used
anymore]

Aldo,

+1 to more or less everything you say.

1) I think Mr Cyganiak's diagram could use some numbers ( how many
records are there in this giant database? )

This is actually one of the driving forces behind voiD [1] - have a look
at the stats examples; pls. note that we're still defining the
vocabulary, so take this as a strawman proposal for now. We are
currently working heavily on it; please expect some solid results by the
end of August at the latest. I've put a simple editor online as well [2];
this tool should be in sync with the voiD vocabulary at all times.
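
( For a flavour, a minimal voiD-style description; since the vocabulary
is still a strawman, take the exact property names as assumptions:

  :DBpedia a void:Dataset ;
      dcterms:title "DBpedia" ;
      void:sparqlEndpoint <http://dbpedia.org/sparql> . )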

2) We could set up a space devoted to marketing. Maybe in some of the
community wikis ( does this exist already? )

There is! I'd recommend using

http://community.linkeddata.org/MediaWiki/

for all linked data related marketing; please feel free to add as
needed there.

Cheers,
   Michael


[1] http://community.linkeddata.org/MediaWiki/index.php?VoiD
[2] http://sw.joanneum.at/ve/


--
 Michael Hausenblas, MSc.
 Institute of Information Systems  Information Management
 JOANNEUM RESEARCH Forschungsgesellschaft mbH

 http://www.joanneum.at/iis/
--


-Original Message-
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] On Behalf Of Aldo Bucchi
Sent: Wednesday, July 09, 2008 10:03 PM
To: Semantic Web; [EMAIL PROTECTED]
Subject: SW Marketing: Effective Sales Pitch for VCs /
Girlfriends / etc?


Hi All,

Hopefully more and more of you are meeting up with VCs or selling
internal projects using semantic web technologies.

I sold my first RDF-based project 5 or 6 years ago and lately I have
had some tough VC/fund-raising presentations. I realized that I have
definitely improved my sales pitch over the years, so I thought I
would share some brief thoughts with you guys and hopefully get some
feedback and novel ideas.

My current approach when selling a project that includes Semantic Web
( specifically Linked Data ) technology is simple. I use only TWO
slides to talk about the Semantic Web per se:

1) I replaced the web layer cake with this slide [1]
2) Then I jump to the LOD cloud diagram by Richard Cyganiak [2]. I
emphasize the size of the datasets and how this is something that
has never happened before.
3) That's it. No more semweb talk. Focus on the value proposition and
answer questions as they appear.

Another diagram I have found useful is Nova Spivack's web
evolution timeline [3] ( very buzz friendly ). The problem with that
particular diagram is that it imposes an arbitrary classification
( 1.0, 2.0, 3.0 ) and you will most probably run into some friction.


So... I am sure lots of you have learned and mastered your
own sales pitches...

What is yours?
Any new diagrams that I don't know of?


And two suggestions:
1) I think Mr Cyganiak's diagram could use some numbers ( how many
records are there in this giant database? )
2) We could set up a space devoted to marketing. Maybe in some of the
community wikis ( does this exist already? )

And don't tell me that marketing is not important. This whole Linked
Data / SW re-branding has really made some noise.
( I know LD is not the SW, but it is a visible part of it ).


Thanks!
A

[1] http://blog.aldobucchi.com/2008/07/how-to-explainthe-semantic-web-in-3.html
[2] http://richard.cyganiak.de/2007/10/lod/
[3] http://novaspivack.typepad.com/RadarNetworksTowardsAWebOS.jpg

--
 Aldo Bucchi 
+56 9 7623 8653
skype:aldo.bucchi
http://aldobucchi.com/





-- 
 Aldo Bucchi 
+56 9 7623 8653
skype:aldo.bucchi
http://aldobucchi.com/


-- 
 Aldo Bucchi 
+56 9 7623 8653
skype:aldo.bucchi
http://aldobucchi.com/



Re: TimBL mentions Linking Open Data on BBC Radio4

2008-07-10 Thread Aldo Bucchi

My 2 cents.

Organizational data after a merger.
If it is published as Linked Data, it is already merged ( one link away ).
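
( Concretely, with invented URIs: one triple is enough to join the two
companies' graphs:

  <http://hr.acme.example/employee/42> owl:sameAs <http://people.globex.example/jdoe> . )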

Some executives SERIOUSLY relate to that problem.

The non-enterprise version of this is mashing up your social networks.

Thanks,
A


On Thu, Jul 10, 2008 at 7:05 AM, Georgi Kobilarov [EMAIL PROTECTED] wrote:

 Hi Ian,

 Which examples does
 anyone else use to get the idea of LOD across in the mainstream?

 One example I've been using recently is around travel:
 find me the cheapest/fastest route from my place in Berlin to Florence,
 combining buses, trains, flights, etc.

 For a human, that's quite a hard problem, and the best route might not
 be easy to find. Machines could do so much better...

 Cheers,
 Georgi

 --
 Georgi Kobilarov
 Freie Universität Berlin
 www.georgikobilarov.com


 -Original Message-
 From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] On
 Behalf Of Dickinson, Ian J. (HP Labs, Bristol, UK)
 Sent: Wednesday, July 09, 2008 11:45 AM
 To: public-lod@w3.org
 Subject: RE: TimBL mentions Linking Open Data on BBC Radio4


 Hi Tom,
 I heard the interview too. It was cool (and slightly weird) to hear
 semantic web discussed on prime-time news, but I thought that Tim
 could have used a more compelling example. The interviewer didn't seem
 overly impressed by Tim's "find me music by people born within 100
 miles of my location". OTOH, it's hard to come up with really
 compelling examples to use with non-specialists. Which examples does
 anyone else use to get the idea of LOD across in the mainstream?

 Ian


  -Original Message-
  From: [EMAIL PROTECTED]
  [mailto:[EMAIL PROTECTED] On Behalf Of Tom Heath
  Sent: 09 July 2008 10:27
  To: public-lod@w3.org
  Subject: TimBL mentions Linking Open Data on BBC Radio4
 
 
  TimBL was on the Today programme on Radio 4 this morning (the
  BBC's prime morning news/current affairs radio programme)
  talking about the Semantic Web, and specifically mentions
  Linking Open Data:
 
  http://news.bbc.co.uk/today/hi/today/newsid_7496000/7496976.stm
 
  Nice :)
 
  --
  Tom Heath


 
 Ian Dickinson   http://www.hpl.hp.com/personal/Ian_Dickinson
 HP Laboratories Bristol  mailto:[EMAIL PROTECTED]
 Hewlett-Packard LimitedRegistered No: 690597 England
 Registered Office:  Cain Road, Bracknell, Berks RG12 1HN





-- 
 Aldo Bucchi 
+56 9 7623 8653
skype:aldo.bucchi
http://aldobucchi.com/



Re: help

2008-06-26 Thread Aldo Bucchi

Call 911.
We haven't developed super-powers yet.


On Thu, Jun 26, 2008 at 6:14 PM, Bob Wyman [EMAIL PROTECTED] wrote:
 help




-- 
 Aldo Bucchi 
+56 9 7623 8653
skype:aldo.bucchi
http://aldobucchi.com/