Re: Semantic Web logo. Copyrights, etc

2009-03-23 Thread Ivan Herman
Aldo,

Yes. The box

http://www.w3.org/Icons/SW/sw-cube.{svg,png,gif}

can be used as you describe. If it is put into a composition but remains
clearly recognizable, I do not see any issue with that either. A good example is
the logo used by RPI:

http://tw.rpi.edu/wiki/Main_Page

(They actually went out of their way by creating an image map, so that
the box links to the W3C site, whereas other parts of the logo link to
RPI...)

'Distortion' is not listed on the logo page and, I presume, this might
be a borderline case. I could apply a strong distortion that would make the
logo barely recognizable, and I think W3C might have an issue with that.
But mild distortion (i.e., a slight change in scale factors) should be fine.
If you really want to apply such a distortion, you may want to check with
W3C first.

Cheers

Ivan

Aldo Bucchi wrote:
 Hi all,
 
 Not sure if this is the place for this, but I believe it is a common concern.
 
 I have seen several projects using tweaks (color, distortion,
 composition) of the Semantic Web box logo. After reading the
 policies I believe that this is allowed as long as there is no W3C
 logo involved.
 
 http://www.w3.org/2007/10/sw-logos.html
 
 Is this correct? Can I create such modified derivatives from the
 clean box logo w/o running into any problems?
 Composition is the borderline case: the logo is unmodified and
 recognizable, and therefore potentially legally troublesome.
 
 My 2 cents is that they should allow anything as long as you strip
 the W3C part.
 
 Thanks,
 A
 

-- 

Ivan Herman, W3C Semantic Web Activity Lead
Home: http://www.w3.org/People/Ivan/
mobile: +31-641044153
PGP Key: http://www.ivan-herman.net/pgpkey.html
FOAF: http://www.ivan-herman.net/foaf.rdf




Re: Announcement: Bio2RDF 0.3 released

2009-03-23 Thread Kei Cheung
As part of the BioRDF query federation task, we are currently exploring
a federation scenario involving the integration of neuroreceptor-related
information. For example, IUPHAR provides information for different
classes of receptors. In the table shown at
http://www.iuphar-db.org/GPCR/ReceptorListForward?class=class%20A,
ligands are provided for receptors, but not InChI codes ...


-Kei

Egon Willighagen wrote:

On Mon, Mar 23, 2009 at 12:09 AM, Peter Ansell ansell.pe...@gmail.com wrote:

2009/3/22 Egon Willighagen egon.willigha...@gmail.com:


On Sun, Mar 22, 2009 at 1:42 AM, Peter Ansell ansell.pe...@gmail.com wrote:

Do you also provide InChIKey resolution?


No. That requires a lookup, so it only works against an existing database.
ChemSpider is doing this, but it is not a general solution. InChIKeys
are not unique, though clashes are rare and none have been observed so far.

I didn't think it required a lookup to derive an InChIKey given an
InChI.



Ah, sorry. InChIKey can be computed, but I thought you meant resolving
what structure has a given InChIKey... going from InChIKey to
structure does require a lookup; generating an InChIKey from a structure
(or InChI) does not.
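
As a concrete sketch of that asymmetry (assuming a Python environment
with RDKit compiled with InChI support; RDKit is just one convenient
toolkit here, not something Bio2RDF prescribes):

    # Sketch: InChI/InChIKey generation is a pure function of the structure,
    # so no database is needed; the reverse direction (InChIKey -> structure)
    # is not computable and needs a lookup service.
    from rdkit import Chem

    mol = Chem.MolFromSmiles('CC(=O)Oc1ccccc1C(=O)O')  # aspirin
    inchi = Chem.MolToInchi(mol)        # long, but exact
    inchikey = Chem.MolToInchiKey(mol)  # fixed-length 27-character hash
    print(inchi)
    print(inchikey)
    # Going from `inchikey` back to a structure would mean querying a
    # database or resolver service; the hash is not invertible.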

  

I realise that clashes are rare but possible; I was just wondering
whether it would be supported. Leaving them out altogether just seems
like missing out on possibly useful extra information.



I'll add them where missing.

  

[1] It is just that InChIs
can get pretty long for complex molecules, which makes it harder for
people to accurately copy and paste them around when needed.


Indeed. However, InChIKey is less precise. RDF allows us to do
things in an exact manner, so I would rather use InChI.

InChIKeys might be better for general use in RDF because they have a
guaranteed identifier length and therefore won't become cumbersome for
complex molecules.


But they can never be used for owl:sameAs-like relations.

Having them as properties could give someone a quick clue as to
whether they are looking at the same molecule. Humans do interact with
RDF (inevitably), and having short hash values can still be valuable.
Given that hashes are usually designed to amplify small changes, comparing
them is easier than reading a 10-line InChI to determine whether there was
a difference.



Agreed.
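
A minimal rdflib sketch of the two roles just agreed on (the compound
URI, the inchi/ URI pattern, and the ex: namespace are invented for
illustration, not Bio2RDF's actual scheme):

    # Sketch: an InChI-bearing URI is exact, so owl:sameAs is justified;
    # the InChIKey is a hash, so it travels as a short literal property.
    from rdflib import Graph, Literal, Namespace, URIRef
    from rdflib.namespace import OWL

    EX = Namespace('http://example.org/chem/')  # hypothetical namespace
    g = Graph()

    ethanol = URIRef('http://example.org/compound/ethanol')
    g.add((ethanol, OWL.sameAs,
           URIRef('http://example.org/inchi/InChI=1S/C2H6O/c1-2-3/h3H,1-2H3')))
    g.add((ethanol, EX.inchikey, Literal('LFQSCWFLJHTTHZ-UHFFFAOYSA-N')))

    print(g.serialize(format='turtle'))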

  

Currently all of the InChIs that I have seen have been literals,
but it would be relatively easy to also provide them as URIs to
provide the link, since you have a resolver for them set up.


That was precisely the reason why I started the service.
  

Good work.



Thanx for the feedback!

Egon

Re: Announcement: Bio2RDF 0.3 released

2009-03-23 Thread Egon Willighagen
Hi Kei,

On Mon, Mar 23, 2009 at 3:37 PM, Kei Cheung kei.che...@yale.edu wrote:
 As part of the BioRDF query federation task, we are currently exploring a
 federation scenario involving the integration of neuroreceptor-related
 information. For example, IUPHAR provides information for different classes
 of receptors. In the table shown at
 http://www.iuphar-db.org/GPCR/ReceptorListForward?class=class%20A, ligands
 are provided for receptors, but not InChI codes ...

That's an interesting table... not Open, it seems... did you ask for
(and get) permission to redistribute it under a free license,
perhaps? The list is not overly long, and InChIs could be added
manually, though one would have to assume the compound names (btw,
some are compound classes!) are unique...

PubChem also has links to MeSH terms, and I also see a MeSH term in
the ChemBox on Wikipedia... that would be open data, and could provide
similar information.

I have been pondering setting up an open-source semantic wiki for
linking data where no open-source option is available for that, but I
have not had time for it yet.

Egon

-- 
Post-doc @ Uppsala University
http://chem-bla-ics.blogspot.com/



Re: Semantic Web logo. Copyrights, etc

2009-03-23 Thread Aldo Bucchi
Ivan,

On Mon, Mar 23, 2009 at 4:47 AM, Ivan Herman i...@w3.org wrote:
 Aldo,

 Yes. The box

 http://www.w3.org/Icons/SW/sw-cube.{svg,png,gif}

 can be used as you describe. If it is put into a composition but remains
 clearly recognizable, I do not see any issue with that either. A good example is
 the logo used by RPI:

 http://tw.rpi.edu/wiki/Main_Page

 (They actually went out of their way by creating an image map, so that
 the box links to the W3C site, whereas other parts of the logo link to
 RPI...)

OK Great ;)


 'Distortion' is not listed on the logo page and, I presume, this might
 be a borderline case. I could apply a strong distortion that would make the
 logo barely recognizable, and I think W3C might have an issue with that.
 But mild distortion (i.e., a slight change in scale factors) should be fine.
 If you really want to apply such a distortion, you may want to check with
 W3C first.

I can't remember now, but I am sure there are some distorted versions
out there in the wild. In particular I remember one with a twirl.

In a dogfood manner, is there an RDF vocabulary to describe such
relations? (derived from)
It would be useful if the W3C required the generated logos to be
RDF-linked to the page, via a wiki for example.
Well, that's too much of a stretch, but I think this is not too crazy
an idea: dogfood, show the benefits (right on the page via a dynamic link),
and save the W3C some work patrolling for logos.

This is also in line with CC and, perhaps, even Open Data licensing, etc.
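
One possible shape for such a link, as a sketch only: Dublin Core's
dcterms:source ("a related resource from which the described resource is
derived") already covers the "derived from" relation. The derived-logo
file name below is illustrative:

    # Sketch: describe a tweaked logo as derived from the W3C SW cube,
    # using Dublin Core terms. The derived URI is hypothetical.
    from rdflib import Graph, Namespace, URIRef

    DCTERMS = Namespace('http://purl.org/dc/terms/')
    g = Graph()

    derived = URIRef('http://example.org/logos/sw-cube-twirl.png')  # hypothetical
    original = URIRef('http://www.w3.org/Icons/SW/sw-cube.png')

    g.add((derived, DCTERMS.source, original))       # "derived from"
    g.add((derived, DCTERMS.isVersionOf, original))  # a looser alternative

    print(g.serialize(format='turtle'))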


 Cheers

 Ivan

Thanks,
A

PS. I said derivatives. The D word... sorry for that ;)


 Aldo Bucchi wrote:
 Hi all,

 Not sure if this is the place for this, but I believe it is a common concern.

 I have seen several projects using tweaks (color, distortion,
 composition) of the Semantic Web box logo. After reading the
 policies I believe that this is allowed as long as there is no W3C
 logo involved.

 http://www.w3.org/2007/10/sw-logos.html

 Is this correct? Can I create such modified derivatives from the
 clean box logo w/o running into any problems?
 Composition is the borderline case: the logo is unmodified and
 recognizable, and therefore potentially legally troublesome.

 My 2 cents is that they should allow anything as long as you strip
 the W3C part.

 Thanks,
 A


 --

 Ivan Herman, W3C Semantic Web Activity Lead
 Home: http://www.w3.org/People/Ivan/
 mobile: +31-641044153
 PGP Key: http://www.ivan-herman.net/pgpkey.html
 FOAF: http://www.ivan-herman.net/foaf.rdf




-- 
Aldo Bucchi
U N I V R Z
Office: +56 2 795 4532
Mobile:+56 9 7623 8653
skype:aldo.bucchi
http://www.univrz.com/
http://aldobucchi.com/

PRIVILEGED AND CONFIDENTIAL INFORMATION
This message is only for the use of the individual or entity to which it is
addressed and may contain information that is privileged and confidential. If
you are not the intended recipient, please do not distribute or copy this
communication, by e-mail or otherwise. Instead, please notify us immediately by
return e-mail.



Re: Potential Home for LOD Data Sets

2009-03-23 Thread Kingsley Idehen

Steve Judkins wrote:

It seems like this has the potential to become a nice collaborative
production pipeline. It would be nice to have a feed for data updates, so we
can fire up our EC2 instance when the data has been processed and packaged
by the providers we are interested in.  For example, if OpenLink wants to
fire up their AMI to process the raw dumps from
http://wiki.dbpedia.org/Downloads32 into this cloud storage, we can wait
until a Virtuoso-ready package has been produced before we update.  As more
agents get involved in processing the data, this will allow for more
automated notifications of updated dumps or SPARQL endpoints.
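
As a sketch of what consuming such a feed might look like (the feed URL,
entry convention, and AMI id are all made up; no such OpenLink feed
exists yet):

    # Sketch: poll a (hypothetical) dump-announcement feed and fire up an
    # EC2 instance once a Virtuoso-ready package is announced.
    # Assumes the boto and feedparser libraries and AWS credentials in the env.
    import boto
    import feedparser

    FEED = 'http://example.org/lod-dumps.atom'   # hypothetical feed URL
    feed = feedparser.parse(FEED)

    ready = any('virtuoso-ready' in e.get('title', '').lower()
                for e in feed.entries)
    if ready:
        conn = boto.connect_ec2()
        conn.run_instances('ami-00000000',        # hypothetical AMI id
                           instance_type='m1.large')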
  

Yes, certainly.

Kingsley

-Steve

-Original Message-
From: public-lod-requ...@w3.org [mailto:public-lod-requ...@w3.org] On Behalf
Of Kingsley Idehen
Sent: Thursday, December 04, 2008 9:20 PM
To: Hugh Glaser
Cc: public-lod@w3.org
Subject: Re: Potential Home for LOD Data Sets


Hugh Glaser wrote:
  

Thanks for the swift response!
I'm still puzzled - sorry to be slow.
http://aws.amazon.com/publicdatasets/#2
Says:
Amazon EC2 customers can access this data by creating their own personal
Amazon EBS volumes, using the public data set snapshots as a starting point.
They can then access, modify and perform computation on these volumes
directly using their Amazon EC2 instances and just pay for the compute and
storage resources that they use.
  
  
Does this not mean it costs me money on my EC2 account? Or is there some
other way of accessing the data? Or am I looking at the wrong bit?
  
  

Okay, I see what I overlooked: the cost of paying for an AMI that mounts 
these EBS volumes, even though Amazon is charging $0.00 for uploading 
these huge amounts of data where it would usually charge.
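
Concretely, the flow Amazon describes looks roughly like this with boto
(snapshot id, zone, size, and instance id are placeholders):

    # Sketch: create a personal EBS volume from a public data set snapshot
    # and attach it to a running instance; the compute and storage used
    # from that point on is what you pay for.
    import boto

    conn = boto.connect_ec2()
    vol = conn.create_volume(size=100,                  # GB, placeholder
                             zone='us-east-1a',
                             snapshot='snap-00000000')  # public snapshot id
    conn.attach_volume(vol.id, 'i-00000000', '/dev/sdf')  # placeholder instance
    # The volume can then be mounted and queried from the instance.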


So to conclude, using the loaded data sets isn't free, but I think we
have to be somewhat appreciative of the value here, right? Amazon is
providing a service that is ultimately pegged to usage (utility model),
and the usage comes down to the value associated with that scarce
resource called time.
  

I.e., can you give me a clue how to get at the data without using my credit
card, please? :-)
  
  

You can't; you will need someone to build an EC2 service for you and eat
the costs on your behalf. Of course such a service isn't impossible in a
Numerati [1] economy, but we aren't quite there yet; we need the Linked
Data Web in place first :-)


Links:

1. http://tinyurl.com/64gsan

Kingsley
  

Best
Hugh

On 05/12/2008 02:28, Kingsley Idehen kide...@openlinksw.com wrote:



Hugh Glaser wrote:
  


Exciting stuff, Kingsley.
I'm not quite sure I have worked out how I might use it though.
The page says that hosting data is clearly free, but I can't see how to
get at it without paying for it as an EC2 customer.
  

Is this right?
Cheers


  

Hugh,

No, shouldn't cost anything if the LOD data sets are hosted in this
particular location :-)


Kingsley
  


Hugh


On 01/12/2008 15:30, Kingsley Idehen kide...@openlinksw.com wrote:



All,

Please see: http://aws.amazon.com/publicdatasets/ ; potentially the
final destination of all published RDF archives from the LOD cloud.

I've already made a request on behalf of LOD, but additional requests
from the community will accelerate the general comprehension and
awareness at Amazon.

Once the data sets are available from Amazon, database construction
costs will be significantly alleviated.

We have DBpedia reconstruction down to 1.5 hrs (or less) based on
Virtuoso's in-built integration with Amazon S3 for backup and
restoration, etc.  We could get the reconstruction of the entire LOD
cloud down to some interesting numbers once all the data is situated in
an Amazon data center.
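
For reference, the backup side of such a pipeline can be scripted. A
rough sketch combining Virtuoso's documented backup_online() function
with boto for the S3 upload (bucket name, credentials, and paths are
placeholders, and this is not necessarily OpenLink's exact pipeline):

    # Sketch: run a Virtuoso online backup via isql, then push the dump
    # files to S3 with boto. Names and paths below are placeholders.
    import glob
    import os
    import subprocess

    import boto
    from boto.s3.key import Key

    # backup_online('prefix_', max_pages_per_file, timeout, vector(dirs))
    subprocess.check_call([
        'isql', 'localhost:1111', 'dba', 'dba',
        "EXEC=backup_online('dbpedia_bk_', 30000, 0, vector('backups'))"])

    bucket = boto.connect_s3().get_bucket('my-lod-backups')  # placeholder
    for path in glob.glob('backups/dbpedia_bk_*'):
        key = Key(bucket, os.path.basename(path))
        key.set_contents_from_filename(path)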


--


Regards,

Kingsley Idehen   Weblog: http://www.openlinksw.com/blog/~kidehen
President & CEO
OpenLink Software Web: http://www.openlinksw.com









  

--


Regards,

Kingsley Idehen   Weblog: http://www.openlinksw.com/blog/~kidehen
President & CEO
OpenLink Software Web: http://www.openlinksw.com







  




  



--


Regards,

Kingsley Idehen   Weblog: http://www.openlinksw.com/blog/~kidehen
President & CEO
OpenLink Software Web: http://www.openlinksw.com








Re: Potential Home for LOD Data Sets

2009-03-23 Thread Kingsley Idehen

Steve Judkins wrote:

Another goal would be to allow this pipeline to extend full circle back to
Wikipedia, so that users and agents can pass corrections and new content back
to Wikipedia for review and inclusion in future releases without editing the
wiki directly (we need to protect our watershed).  Is there another thread
that addresses this somewhere?
  
We are getting closer to DBpedia real-time, i.e. DBpedia staying in close
sync with Wikipedia. Of course, routing changes back to Wikipedia would
be great, but I think there is the whole Wikipedia process to deal with,
etc.


Kingsley

-steve

-Original Message-
From: Steve Judkins [mailto:st...@wisdomnets.com] 
Sent: Monday, March 23, 2009 2:40 PM

To: 'Kingsley Idehen'; 'Hugh Glaser'
Cc: 'public-lod@w3.org'
Subject: RE: Potential Home for LOD Data Sets

It seems like this has the potential to become a nice collaborative
production pipeline. It would be nice to have a feed for data updates, so we
can fire up our EC2 instance when the data has been processed and packaged
by the providers we are interested in.  For example, if OpenLink wants to
fire up their AMI to process the raw dumps from
http://wiki.dbpedia.org/Downloads32 into this cloud storage, we can wait
until a Virtuoso-ready package has been produced before we update.  As more
agents get involved in processing the data, this will allow for more
automated notifications of updated dumps or SPARQL endpoints.

-Steve

-Original Message-
From: public-lod-requ...@w3.org [mailto:public-lod-requ...@w3.org] On Behalf
Of Kingsley Idehen
Sent: Thursday, December 04, 2008 9:20 PM
To: Hugh Glaser
Cc: public-lod@w3.org
Subject: Re: Potential Home for LOD Data Sets


Hugh Glaser wrote:
  

Thanks for the swift response!
I'm still puzzled - sorry to be slow.
http://aws.amazon.com/publicdatasets/#2
Says:
Amazon EC2 customers can access this data by creating their own personal
Amazon EBS volumes, using the public data set snapshots as a starting point.
They can then access, modify and perform computation on these volumes
directly using their Amazon EC2 instances and just pay for the compute and
storage resources that they use.
  
  
Does this not mean it costs me money on my EC2 account? Or is there some
other way of accessing the data? Or am I looking at the wrong bit?
  
  

Okay, I see what I overlooked: the cost of paying for an AMI that mounts 
these EBS volumes, even though Amazon is charging $0.00 for uploading 
these huge amounts of data where it would usually charge.


So to conclude, using the loaded data sets isn't free, but I think we
have to be somewhat appreciative of the value here, right? Amazon is
providing a service that is ultimately pegged to usage (utility model),
and the usage comes down to the value associated with that scarce
resource called time.
  

I.e., can you give me a clue how to get at the data without using my credit
card, please? :-)
  
  

You can't; you will need someone to build an EC2 service for you and eat
the costs on your behalf. Of course such a service isn't impossible in a
Numerati [1] economy, but we aren't quite there yet; we need the Linked
Data Web in place first :-)


Links:

1. http://tinyurl.com/64gsan

Kingsley
  

Best
Hugh

On 05/12/2008 02:28, Kingsley Idehen kide...@openlinksw.com wrote:



Hugh Glaser wrote:
  


Exciting stuff, Kingsley.
I'm not quite sure I have worked out how I might use it though.
The page says that hosting data is clearly free, but I can't see how to
get at it without paying for it as an EC2 customer.
  

Is this right?
Cheers


  

Hugh,

No, shouldn't cost anything if the LOD data sets are hosted in this
particular location :-)


Kingsley
  


Hugh


On 01/12/2008 15:30, Kingsley Idehen kide...@openlinksw.com wrote:



All,

Please see: http://aws.amazon.com/publicdatasets/ ; potentially the
final destination of all published RDF archives from the LOD cloud.

I've already made a request on behalf of LOD, but additional requests
from the community will accelerate the general comprehension and
awareness at Amazon.

Once the data sets are available from Amazon, database construction
costs will be significantly alleviated.

We have DBpedia reconstruction down to 1.5 hrs (or less) based on
Virtuoso's in-built integration with Amazon S3 for backup and
restoration, etc.  We could get the reconstruction of the entire LOD
cloud down to some interesting numbers once all the data is situated in
an Amazon data center.


--


Regards,

Kingsley Idehen   Weblog: http://www.openlinksw.com/blog/~kidehen
President & CEO
OpenLink Software Web: http://www.openlinksw.com









  

--


Regards,

Kingsley Idehen   Weblog: http://www.openlinksw.com/blog/~kidehen
President & CEO
OpenLink Software Web: http://www.openlinksw.com







  




  



--


Regards,

Kingsley Idehen   Weblog: http://www.openlinksw.com/blog/~kidehen
President & CEO
OpenLink 

Re: Potential Home for LOD Data Sets

2009-03-23 Thread Kingsley Idehen

Steve Judkins wrote:

I found Medline to have a pretty nice model for this.  Every so often they
ship a full DB dump in XML as chunked zip files (no more than 1 GB each, if
I remember).  Subscribers just synchronize the FTP directories between the
Medline server and a local server.  After that you can process daily diff
dumps. The downloads were just XML with a stream of record URIs with an
Add/Modify/Delete attribute, and the data fields that changed.  A well-known
graph where you can look for changes to the LOD data sources you care
about, with SIOC markup that describes the Items, Dates, and
Agents/People doing the modifications, would be ideal. This is a great use
case for FOAF+SSL and OAuth, because you may only automatically process
updates from Agents you trust (e.g. Wikipedia might only take changes from
DBpedia).
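
A minimal sketch of what consuming such a diff dump could look like (the
record and attribute names below are invented to mirror the Medline-style
model, not an actual Medline schema):

    # Sketch: apply a Medline-style daily diff of Add/Modify/Delete records.
    # The XML element and attribute names here are hypothetical.
    import xml.etree.ElementTree as ET

    def apply_diff(path, store):
        """store: any dict-like mapping of record URI -> record fields."""
        for _, rec in ET.iterparse(path):
            if rec.tag != 'record':
                continue
            uri, action = rec.get('uri'), rec.get('action')
            if action in ('Add', 'Modify'):
                store[uri] = {f.tag: f.text for f in rec}  # changed fields only
            elif action == 'Delete':
                store.pop(uri, None)
            rec.clear()  # keep memory flat on large dumps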
  

Steve,

You're very much on the ball here; this is very much the kind of thing
foaf+ssl [1] is about :-) I was going to unveil similar capabilities re. the
DBpedia endpoint down the line, i.e. SPARQL endpoint behavior aligned to
trusted identities, etc.


Links:

1. http://esw.w3.org/topic/foaf+ssl - FOAF+SSL
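
In outline, the FOAF+SSL check dereferences the WebID found in the
client certificate and compares the published RSA key with the one
presented. A rough sketch with rdflib (the TLS plumbing that extracts
the certificate values is left out; the namespaces are the cert/rsa
ontologies used by the FOAF+SSL design):

    # Sketch: verify a WebID by comparing the RSA modulus/exponent from the
    # client certificate against those published in the FOAF profile.
    # Values are assumed normalized to matching lexical forms.
    from rdflib import Graph, Namespace, URIRef

    CERT = Namespace('http://www.w3.org/ns/auth/cert#')
    RSA = Namespace('http://www.w3.org/ns/auth/rsa#')

    def webid_ok(webid, cert_modulus, cert_exponent):
        g = Graph()
        g.parse(webid)  # dereference the FOAF profile
        for key in g.subjects(CERT.identity, URIRef(webid)):
            mod = g.value(key, RSA.modulus)
            exp = g.value(key, RSA.public_exponent)
            if (str(mod), str(exp)) == (cert_modulus, cert_exponent):
                return True  # presented key matches the published one
        return False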


Kingsley

-Steve

-Original Message-
From: public-lod-requ...@w3.org [mailto:public-lod-requ...@w3.org] On Behalf
Of Kingsley Idehen
Sent: Monday, March 23, 2009 3:34 PM
To: Steve Judkins
Cc: 'Hugh Glaser'; public-lod@w3.org
Subject: Re: Potential Home for LOD Data Sets

Steve Judkins wrote:
  

It seems like this has the potential to become a nice collaborative
production pipeline. It would be nice to have a feed for data updates, so we
can fire up our EC2 instance when the data has been processed and packaged
by the providers we are interested in.  For example, if OpenLink wants to
fire up their AMI to process the raw dumps from
http://wiki.dbpedia.org/Downloads32 into this cloud storage, we can wait
until a Virtuoso-ready package has been produced before we update.  As more
agents get involved in processing the data, this will allow for more
automated notifications of updated dumps or SPARQL endpoints.


Yes, certainly.

Kingsley
  

-Steve

-Original Message-
From: public-lod-requ...@w3.org [mailto:public-lod-requ...@w3.org] On


Behalf
  

Of Kingsley Idehen
Sent: Thursday, December 04, 2008 9:20 PM
To: Hugh Glaser
Cc: public-lod@w3.org
Subject: Re: Potential Home for LOD Data Sets


Hugh Glaser wrote:
  


Thanks for the swift response!
I'm still puzzled - sorry to be slow.
http://aws.amazon.com/publicdatasets/#2
Says:
Amazon EC2 customers can access this data by creating their own personal
Amazon EBS volumes, using the public data set snapshots as a starting point.
They can then access, modify and perform computation on these volumes
directly using their Amazon EC2 instances and just pay for the compute and
storage resources that they use.
  

  
Does this not mean it costs me money on my EC2 account? Or is there some
other way of accessing the data? Or am I looking at the wrong bit?

  

  
Okay, I see what I overlooked: the cost of paying for an AMI that mounts 
these EBS volumes, even though Amazon is charging $0.00 for uploading 
these huge amounts of data where it would usually charge.


So to conclude, using the loaded data sets isn't free, but I think we
have to be somewhat appreciative of the value here, right? Amazon is
providing a service that is ultimately pegged to usage (utility model),
and the usage comes down to the value associated with that scarce
resource called time.
  


I.e., can you give me a clue how to get at the data without using my credit
card, please? :-)
  

  

  
You can't; you will need someone to build an EC2 service for you and eat
the costs on your behalf. Of course such a service isn't impossible in a
Numerati [1] economy, but we aren't quite there yet; we need the Linked
Data Web in place first :-)


Links:

1. http://tinyurl.com/64gsan

Kingsley
  


Best
Hugh

On 05/12/2008 02:28, Kingsley Idehen kide...@openlinksw.com wrote:



Hugh Glaser wrote:
  

  

Exciting stuff, Kingsley.
I'm not quite sure I have worked out how I might use it though.
The page says that hosting data is clearly free, but I can't see how to
get at it without paying for it as an EC2 customer.
  


Is this right?
Cheers


  


Hugh,

No, shouldn't cost anything if the LOD data sets are hosted in this
particular location :-)


Kingsley
  

  

Hugh


On 01/12/2008 15:30, Kingsley Idehen kide...@openlinksw.com wrote:



All,

Please see: http://aws.amazon.com/publicdatasets/ ; potentially the
final destination of all published RDF archives from the LOD cloud.

I've already made a request on behalf of LOD, but additional requests
from the community will accelerate the general comprehension and
awareness at Amazon.

Once the data sets are available from Amazon, database constructions