package
collections written by the actual users.
Have a look at those projects - your contributions to any of them are
certainly welcome.
Best,
Leo Lahti
--
View this message in context:
http://r.789695.n4.nabble.com/The-Future-of-R-API-to-Public-Databases-tp4293526p4690960.html
Spencer,
I highly appreciate your input. What we need is a standard for
statistics. That may reinvent the way we see data.
The recent crisis is the best proof that we are lost in our own
self-generated information overload. The traditional approach is not
working anymore.
Finding the right members
Date: Sat, 14 Jan 2012 10:21:23 -0500
From: ja...@rampaginggeek.com
To: r-help@r-project.org
Subject: Re: [R] The Future of R | API to Public Databases
Web services are only part of the problem. In essence, there are at
least two facets:
1. downloading the data using some protocol
2. mapping the data to a common model
Having #1 makes the import/download easier, but it really becomes useful
when both are included. I think #2 is the harder.
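The two facets can be sketched in a few lines of base R. Everything here is invented for illustration: the sample data, the column names, and the shape of the "common model" data frame. In practice facet 1 would be `download.file()` or a web-service client rather than an in-memory string.

```r
## Facet 1 (transport) is stubbed out with in-memory text; in practice
## you would fetch from the provider's API or download URL.
raw_a <- "country,yr,gdp\nFI,2010,247\nSE,2010,488"
raw_b <- "Year;Nation;Value\n2010;FI;247\n2010;SE;488"

## Two providers, two layouts: comma-separated vs. semicolon-separated.
src_a <- read.csv(text = raw_a)
src_b <- read.csv2(text = raw_b)

## Facet 2 (mapping): normalise both sources to one common model
## with the columns country / year / value.
common_a <- data.frame(country = src_a$country,
                       year    = src_a$yr,
                       value   = src_a$gdp)
common_b <- data.frame(country = src_b$Nation,
                       year    = src_b$Year,
                       value   = src_b$Value)

## Once both sources share the model, combining them is trivial.
merged <- rbind(common_a, common_b)
```

The hard part, as noted above, is agreeing on that common model; the transport step is mostly mechanical.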
To: r-help@r-project.org
Subject: [R] The Future of R | API to Public Databases
Message-ID:
cany9q8k+zyvrkjjgbjp+jtnyaw15gqkocivyvpgwgyqa9dl...@mail.gmail.com
Content-Type: text/plain; charset=UTF-8
Data of course are quite variable and there is
nothing wrong with giving the provider his choice.
Dear R Users -
R is a wonderful software package. CRAN provides a variety of tools to
work on your data. But R is not apt to utilize all the public
databases in an efficient manner.
I observed that the most tedious part of working with R is searching for
and downloading the data from public databases and putting it
R is Open Source. You're welcome to write tools, and submit your
package to CRAN. I think some part of this has been done, based
on questions to the list asking about those parts.
Personally, I've been using S-Plus and then R for 18 years, and never
required data from any of them. Which doesn't
The WDI package on CRAN already provides access to the World Bank data
through their API; we also have an in-house package for FAOSTAT here at
FAO, but it is not mature enough to be released on CRAN yet.
Not sure about other international organisations but I do agree that it
would be nice if
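As an illustration of the kind of access the WDI package provides, a query might look like the sketch below. It requires network access and an installed WDI package; the country codes are arbitrary, and NY.GDP.MKTP.CD is the World Bank indicator code for GDP in current US$.

```r
## Sketch only: fetches World Bank data over the network via the
## WDI package (install.packages("WDI") first).
library(WDI)

## One call returns a long-format data frame with one row per
## country-year combination for the requested indicator.
gdp <- WDI(country = c("FI", "SE"), indicator = "NY.GDP.MKTP.CD",
           start = 2005, end = 2010)
head(gdp)
```

The appeal of this style is exactly what the thread is asking for: the download protocol and the mapping to a data frame are both hidden behind one function call.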
Hi Benjamin:
What would make this easier is if these sites used standardized web services,
so it would only require writing the code once. data.gov is the worst example;
they spun their own, weak service.
There is a lot of environmental data available through OPeNDAP, and that is
supported in the ncdf4 package.
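A minimal sketch of reading from an OPeNDAP endpoint with ncdf4, assuming the package was built with DAP support. The URL and the variable name "sst" are hypothetical placeholders, not a real endpoint:

```r
## Sketch only: open a remote dataset over OPeNDAP and read one
## variable. Replace the URL with a real DAP endpoint.
library(ncdf4)

nc  <- nc_open("http://example.org/opendap/sst.nc")  # hypothetical URL
sst <- ncvar_get(nc, "sst")   # read a variable by name into an array
nc_close(nc)
```

Because OPeNDAP is a standardized protocol, the same three calls work against any compliant server, which is the point being made about standardized services.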
Sarah,
I agree; I think it would be the exception rather than the rule that one
would access these public data sources, given the range of needs of R users,
who are generally analyzing their own data. Plus, IMO, it just is not very
difficult to reformat the data into a suitable form, if need be,
It's a nice idea, but I wouldn't be optimistic about it happening:
Each of these public databases no doubt has its own more or less unique
API, and the people likely to know the API well enough to write R code to
access any particular database will be specialists in that field. They
likely won't
On 1/13/2012 2:26 PM, MacQueen, Don wrote:
It's a nice idea, but I wouldn't be optimistic about it happening:
Each of these public databases no doubt has its own more or less unique
API, and the people likely to know the API well enough to write R code to
access any particular database will be
The whole issue is related to the mismatch between (1) the publisher of the
data and (2) the user at the rendezvous point.
Neither the publisher nor the user knows anything about the
rendezvous point. Both want to meet but don't meet in reality.
The user wastes time finding the rendezvous point
A traditional way to exit a chaotic situation such as the one you describe is
to try to establish a standards committee, invite participation from
suppliers and users of whatever is involved (data in this case), apply for
registration with the International Organization for Standardization (ISO),
and organize meetings, draft