Hi,
Following:
http://lists.openstreetmap.org/pipermail/dev/2011-December/023945.html
http://lists.openstreetmap.org/pipermail/dev/2011-December/023981.html
I've set up a read load balancer/proxy in front of the main OSM API.
Better than a thousand words, here it is:
http://beta.letuffe.org/api/
It
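The post doesn't say what software the proxy runs on, but as a rough sketch a
read-only pass-through could look like this in nginx (the software choice and
upstream host here are assumptions, not what beta.letuffe.org actually uses):

  # goes inside the http {} section of nginx.conf
  server {
      listen 80;
      location /api/ {
          # read-only: refuse anything that isn't a read request
          limit_except GET { deny all; }
          # forward reads to the main OSM API (upstream host assumed)
          proxy_pass http://api.openstreetmap.org;
      }
  }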
Thanks Shaun/Matt, I didn't get to try it out today but will do so as
soon as I can.
Shaun McDonald wrote:
Hi Brett,
I'm starting to write up the instructions for postgres at
http://wiki.openstreetmap.org/wiki/OSM_Protocol_Version_0.6/postgres
The adapter I'm using is postgresql
I now
Hi All,
From recent posts on this list it looks likely that the 0.6 API will
use a PostgreSQL database. I'll need to update osmosis to support it
sometime between now and April 16th.
How do I go about setting it up? Does it use similar rake db:migrate
commands to the mysql schema? I
Okay, I should have checked the source code first.
I'm guessing I need to modify db/database.yml and set the adapter to
pgsql. And then compile the libmyosm library as explained in db/README.
I'll try it out tonight.
Brett Henderson wrote:
Hi All,
From recent posts on this list it looks
On Tue, Mar 31, 2009 at 10:07 PM, Brett Henderson br...@bretth.com wrote:
Okay, I should have checked the source code first.
I'm guessing I need to modify db/database.yml and set the adapter to
pgsql. And then compile the libmyosm library as explained in db/README.
the adapter is postgresql,
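For reference, the relevant stanza in db/database.yml would look something
like this (database name and credentials are placeholders):

  development:
    adapter: postgresql
    database: openstreetmap
    username: openstreetmap
    password: openstreetmap
    host: localhost
    encoding: utf8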
Hi Brett,
I'm starting to write up the instructions for postgres at
http://wiki.openstreetmap.org/wiki/OSM_Protocol_Version_0.6/postgres
The adapter I'm using is postgresql
I now have the automated builder using the postgres database:
http://cruise.shaunmcdonald.me.uk/builds/api06-postgres
On Fri, Mar 13, 2009 at 03:41:16AM +0100, Stefan de Konink wrote:
Mirroring will not increase performance, because your RAID card does not
know a priori which files you are interested in, only which blocks; in the
worst case it will grab the same data from the same disks.
On Fri, Mar 13, 2009 at 8:34 AM, Florian Lohoff f...@rfc822.org wrote:
So it is sensible to use mirroring, and it might even be beneficial to do
1-to-N mirroring.
Rule of thumb:
more concurrent readers, more spindles.
This is why we have ROMA/TRAPI/etc... they're able to satisfy
Dear all
The API downtime scheduled for the 0.6 API transition has been postponed
due to delays acquiring the new database server.
The re-scheduled API downtime for the 0.6 API upgrade is now the weekend
of the 17-20th April 2009.
Original announcement...
Grant Slater wrote:
The API downtime scheduled for the 0.6 API transition has been postponed
due to delays acquiring the new database server.
So it is impossible to buy a machine for 15k? Only one response: wow!
Stefan
Stefan de Konink wrote:
Grant Slater wrote:
The API downtime scheduled for the 0.6 API transition has been
postponed due to delays acquiring the new database server.
So it is impossible to buy a machine for 15k? Only one response: wow!
Took a while to get all the quotes in and then asked for
Claudomiro Nascimento Junior wrote:
Can you bring joy to our hearts describing the winning specs?
Full spec here:
http://wiki.openstreetmap.org/wiki/Servers/smaug
Summary:
2x Intel Xeon Processor E5420 Quad Core
32GB ECC (max 128GB)
2x 73GB SAS 15k
10x 450GB SAS 15k (expensive, but stupidly low latency)
Stefan de Konink wrote:
Maybe a stupid question; but is your database server able to exploit
the above configuration? Especially related to your processor choice.
Yes, the disks are _currently_ over-specced, but they won't be in six months' time.
Replacing the hardware for the central database server is
On Fri, Mar 13, 2009 at 1:21 AM, Stefan de Konink ste...@konink.de wrote:
Grant Slater wrote:
Summary:
2x Intel Xeon Processor E5420 Quad Core
32GB ECC (max 128GB)
2x 73GB SAS 15k
10x 450GB SAS 15k (expensive, but stupidly low latency)
IPMI + KVM
Maybe a stupid question; but is your
Grant Slater wrote:
Large imports in the pipeline.
Partitioning is a scalable solution to that, not buying new hardware.
Now it is nice you put 32GB (extra expensive) memory in there, but
most likely your hot performance would be far better with more (cheap)
memory than more disks. At the
Matt Amos wrote:
At the time I wrote my paper on OSM in Dec 2008, there was
about 72GB of CSV data. Thus with, let's say, 128GB you will have your
entire database *IN MEMORY*; no fast disks required.
In 8GB kits? That would be *extra* expensive (about £8,680 according
to Froogle).
Some people are
Stefan de Konink wrote:
Wow... (serious wow) I have never seen the database THAT expanded unless
I was using an XML database.
And now I think of it; that is probably because *I* wasn't able to
download the history tables. That makes sense; but does it make sense to
have the history tables at
On Friday, 13 March 2009, Stefan de Konink wrote:
[...] Therefore your seek times will only decrease if you can search on the
individual disk, not as a combined pair.
I actually wonder what the DB performance could be with some of those new
shiny SSD drives...
(And how expensive
Stefan de Konink wrote:
Stefan de Konink wrote:
Wow... (serious wow) I have never seen the database THAT expanded
unless I was using an XML database.
And now I think of it; that is probably because *I* wasn't able to
download the history tables. That makes sense; but does it make sense
Stefan de Konink wrote:
Iván Sánchez Ortega wrote:
On Friday, 13 March 2009, Stefan de Konink wrote:
[...] Therefore your seek times will only decrease if you can search
on the individual disk, not as a combined pair.
I actually wonder what the DB performance could be with some of
Grant Slater wrote:
But as detailed below by Stefan, the internal block fragmentation is a
serious issue, which needs to be fixed first.
I am also still very sceptical about SSD MTBF on DB server load levels.
Write 1 bit = Full SSD block write.
Big community site in NL reported less than a
2009/1/22 Frederik Ramm frede...@remote.org:
Hi,
Shaun McDonald wrote:
It would be best if the bulk_import.py script was updated for 0.6. As
everything needs to be wrapped into a changeset, it makes the bulk
upload more complex than before.
Yes and no... if you're talking uploads that are
Hi all,
I have a homebrew OSM 0.6 test server, and about 4000 JOSM-compatible .osm
files. JOSM is able to upload those files nicely with no hassles.
I would like to automatically upload all those files to my server, but from
the information I've read on the wiki, bulk_import.py is not ready for
Iván Sánchez Ortega wrote:
Any other ideas?
It seems 0.6 supports uploading diffs:
http://wiki.openstreetmap.org/wiki/OSM_Protocol_Version_0.6#Diff_uploads
Stefan
Iván Sánchez Ortega wrote:
Hi all,
I have a homebrew OSM 0.6 test server, and about 4000 JOSM-compatible .osm
files. JOSM is able to upload those files nicely with no hassles.
I would like to automatically upload all those files to my server, but from
the information I've read on the
It seems 0.6 supports uploading diffs:
http://wiki.openstreetmap.org/wiki/OSM_Protocol_Version_0.6#Diff_uploads
Yummy... Transactions!
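For anyone who hasn't seen the format: a diff upload is a single osmChange
document POSTed to /api/0.6/changeset/<id>/upload and applied as one
transaction. A minimal sketch (IDs, coordinates and tags made up; the
negative IDs are placeholders for objects created in the same upload):

  <osmChange version="0.6" generator="bulk-upload-sketch">
    <create>
      <node id="-1" lat="51.5" lon="-0.1" changeset="4321"/>
      <node id="-2" lat="51.6" lon="-0.1" changeset="4321"/>
      <way id="-3" changeset="4321">
        <nd ref="-1"/>
        <nd ref="-2"/>
        <tag k="highway" v="residential"/>
      </way>
    </create>
  </osmChange>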
On 21 Jan 2009, at 23:40, Iván Sánchez Ortega wrote:
Hi all,
I have a homebrew OSM 0.6 test server, and about 4000 JOSM-compatible .osm
files. JOSM is able to upload those files nicely with no hassles.
I would like to automatically upload all those files to my server, but from
the
Shaun McDonald wrote:
It would be best if the bulk_import.py script was updated for 0.6. As
everything needs to be wrapped into a changeset, it makes the bulk
upload more complex than before.
More? How is this possible? It would be: one changeset, put the entire file.
Done.
Stefan
Hi,
Shaun McDonald wrote:
It would be best if the bulk_import.py script was updated for 0.6. As
everything needs to be wrapped into a changeset, it makes the bulk
upload more complex than before.
Yes and no... if you're talking uploads that are small enough to fit
into one diff upload
Frederik Ramm wrote:
BTW: It seems that we're not currently imposing an upper limit for the
number of changes in a diff upload, is that true? If so, we should
perhaps add such a limit, because the transactionality of diff uploads
would otherwise make it too easy for the thoughtless script
Hi,
Stefan de Konink wrote:
Where do we need a limit?
I assume that while doing all the inserts, the Ruby code has to keep
track of all the Ids involved in order to be able to adjust the
references in other objects. This will consume memory, which is a limited
resource. Also, it is my
On Tue, Dec 9, 2008 at 10:41 PM, Maarten Deen wrote:
I'm not? Then how do I create the database so that I can store OSM data with
osmosis?
I mean: have it however you want, but if people can't make the database, then
how are they going to use it?
The concept of the database
In osmosis 0.29.4 there is a PostgreSQL script for creating the tables for
version 0.6, but is there also one for MySQL?
Maarten
For MySQL, you can set up the rails api06 branch, and run rake db:migrate.
http://wiki.openstreetmap.org/wiki/OSM_Protocol_Version_0.6
http://wiki.openstreetmap.org/wiki/Rails (There are some specific 0.6
items on that page).
Shaun
Maarten Deen wrote:
In osmosis 0.29.4 there is a PostgreSQL script
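Concretely, the setup Shaun describes amounts to something like this (the
exact svn URL is from memory, so treat it as a sketch and check the Rails
wiki page):

  # check out the 0.6 branch of the rails port
  svn co http://svn.openstreetmap.org/sites/rails_port_branches/api06 api06
  cd api06
  # point db/database.yml at your mysql database, then build the schema
  rake db:migrate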
Shaun McDonald wrote:
For MySQL, you can set up the rails api06 branch, and run rake db:migrate.
I don't need Rails and really have no inclination whatsoever to install it
just to make the database.
Regards,
Maarten
Shaun McDonald wrote:
For MySQL, you can set up the rails api06 branch, and run rake db:migrate.
And to comment on rails_port_branches/api06/db/migrate/001_create_osm_db.rb in
svn: I thought the tags on nodes were put in a separate table? At least that's
what I understand from the 0.6 wiki page.
Maarten Deen wrote:
Shaun McDonald wrote:
For MySQL, you can set up the rails api06 branch, and run rake db:migrate.
And to comment on rails_port_branches/api06/db/migrate/001_create_osm_db.rb
in svn: I thought the tags on nodes were put in a separate table? At least
that's what I
The migrations are for one version to the next, so when setting up,
rails will start with the initial schema and then migrate all the data to
the next version as it goes.
I guess this is to maintain an easy way to upgrade older databases in
the future.
A full version of the rails schema (in the
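In practice that just means running the standard Rails tasks, e.g. (task
names as in Rails 2.x):

  rake db:migrate            # apply every pending migration, in order
  rake db:version            # report the schema version you ended up at
  rake db:migrate VERSION=17 # step to a specific version (number made up)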
Maarten Deen wrote:
Shaun McDonald wrote:
For MySQL, you can set up the rails api06 branch, and run rake db:migrate.
And to comment on rails_port_branches/api06/db/migrate/001_create_osm_db.rb
in svn: I thought the tags on nodes were put in a separate table? At least
that's what I
Shaun McDonald wrote:
Maarten Deen wrote:
Shaun McDonald wrote:
For MySQL, you can set up the rails api06 branch, and run rake db:migrate.
And to comment on rails_port_branches/api06/db/migrate/001_create_osm_db.rb
in svn: I thought the tags on nodes were put in a separate table? At least
Maarten Deen wrote:
Shaun McDonald wrote:
Maarten Deen wrote:
Shaun McDonald wrote:
For MySQL, you can set up the rails api06 branch, and run rake db:migrate.
And to comment on rails_port_branches/api06/db/migrate/001_create_osm_db.rb
in svn: I thought the tags
Maarten Deen wrote:
And IMHO it is not very considerate to expect people who just want to have
the database to install rails and then get the output of one mangy little
file.
Well I'm very sorry about that. Please accept my humblest apologies for
failing to meet your needs.
Tom Hughes wrote:
To be honest unless you're running rails you probably don't want to use
that schema anyway.
I'm not? Then how do I create the database so that I can store OSM data with
osmosis?
I mean: have it however you want, but if people can't make the database, then
how are they
On Tue, Oct 14, 2008 at 10:29 AM, Tom Hughes wrote:
Shaun McDonald wrote:
I would like to propose the weekend of 9th November 2008, in the
CloudMade offices in London. Is this date suitable for the api and
application devs?
Fine by me.
On 15 Oct 2008, at 23:35, Brett Henderson wrote:
On Wed, Oct 15, 2008 at 12:47 AM, Shaun McDonald wrote:
On 14 Oct 2008, at 10:18, Brett Henderson wrote:
Shaun McDonald wrote:
[..]
Ideally we need to have all the main editors and osm tools ready for
the 0.6 API
Shaun McDonald wrote:
Okay, so it sounds like the short answer is that there's no upgrading
an already populated database, so there's no easy way of building a db with
0.6 data.
The migrations should work, though it is known that they break when
Shaun McDonald wrote:
Am I right in thinking that when you import the planet [extract] into
mysql using osmosis it will populate the history tables for you, and
with each changeset, will add to the history, updating the current
tables? I've not yet had a need to use osmosis.
That's right,
On 16 Oct 2008, at 10:35, Stefan de Konink wrote:
On Thu, 16 Oct 2008, bvh wrote:
On Thu, Oct 16, 2008 at 08:07:07AM +0100, Shaun McDonald wrote:
JOSM is the only option at the moment, as Potlatch and Merkaartor haven't
been upgraded to 0.6 yet.
Merkaartor has had a 'use 0.6 API' switch since
On Thu, 16 Oct 2008, Frederik Ramm wrote:
On 16.10.2008, at 12:01, Stefan de Konink wrote:
Stop talking about platform-specific tests! We need a test that is an
application server *without logic* that sends out requests for a
predefined dataset that should always result in the same output.
On Thu, Oct 16, 2008 at 11:37 AM, Tom Hughes wrote:
Andy Allan wrote:
Actually, we won't. We still desperately need more unit and functional
tests for the code base - I've been writing unit tests for utf-8
handling, and finding truncation errors in parts of the code. I also
Would it (finally) be possible to have a *complete* unit test suite for the
0.6 API, for clients and servers?
Sure fine for Merkaartor.
Besides server infra, it would be very nice to have a defined set of test
cases, too.
This would be reusable for every single app interacting with the API and
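A defined test case could be as simple as a canned request plus a stored
expected response, e.g. (node ID and fixture path made up):

  # fetch a known object and compare it against the stored fixture
  curl -s http://api.openstreetmap.org/api/0.6/node/123 > actual.xml
  diff -u expected/node-123.xml actual.xml && echo PASS || echo FAIL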
On 16 Oct 2008, at 09:31, bvh wrote:
On Thu, Oct 16, 2008 at 08:07:07AM +0100, Shaun McDonald wrote:
JOSM is the only option at the moment, as Potlatch and Merkaartor haven't
been upgraded to 0.6 yet.
Merkaartor has had a 'use 0.6 API' switch since July (0.11). It is hidden
inside our big
Andy Allan wrote:
I'm not sure there's many other options available, and I think we'd
discussed it that way before. The sanest way seems to be to suck it
out of the database with a latin-1 connection (as now) and stuff it
back in through a proper utf-8 connection - unless anyone wants to
Hi,
On 16.10.2008, at 12:01, Stefan de Konink wrote:
Stop talking about platform-specific tests! We need a test that is an
application server *without logic* that sends out requests for a
predefined dataset that should always result in the same output.
Good idea. Write one.
Bye
Frederik
--
On Thu, 16 Oct 2008, bvh wrote:
On Thu, Oct 16, 2008 at 08:07:07AM +0100, Shaun McDonald wrote:
JOSM is the only option at the moment, as Potlatch and Merkaartor haven't
been upgraded to 0.6 yet.
Merkaartor has had a 'use 0.6 API' switch since July (0.11). It is hidden
inside our big preferences
On Thu, 16 Oct 2008, Shaun McDonald wrote:
We're currently working on rails unit and functional tests for API
0.6. There will probably be some integration tests coming later.
You can see the status of the tests at http://cruise.shaunmcdonald.me.uk/
Stop talking about platform-specific tests!
On Tue, Oct 14, 2008 at 5:16 PM, Shaun McDonald wrote:
What you say is correct in terms of the database (the uid field will be
renamed to changeset_id). The API however will still be returning the user
and also the uid and changeset_id from API 0.6, so that it doesn't break
On Wed, Oct 15, 2008 at 12:47 AM, Shaun McDonald wrote:
On 14 Oct 2008, at 10:18, Brett Henderson wrote:
Shaun McDonald wrote:
[..]
Ideally we need to have all the main editors and osm tools ready for the
0.6 API transition by the time the 0.6 API goes live in November.
Shaun McDonald wrote:
Hi Devs,
With the rapid progress on the 0.6 API, I'd like to set the date of
the 0.6 API hack-a-thon in London to complete the transition to the
0.6 API.
I would like to propose the weekend of 9th November 2008, in the
CloudMade offices in London. Is this date
On 14 Oct 2008, at 10:18, Brett Henderson wrote:
Shaun McDonald wrote:
[..]
Ideally we need to have all the main editors and osm tools ready
for the 0.6 API transition by the time the 0.6 API goes live in
November. As things currently stand, you can use JOSM, with a clean
checkout of
Shaun McDonald wrote:
http://svn.openstreetmap.org/applications/utils/planet.osm/perl/planetosm-to-db.pl
will also need to be updated to 0.6; would anyone like to update the script?
I currently have a native C implementation of an osm-to-db import.
Hi Devs,
With the rapid progress on the 0.6 API, I'd like to set the date of
the 0.6 API hack-a-thon in London to complete the transition to the
0.6 API.
I would like to propose the weekend of 9th November 2008, in the
CloudMade offices in London. Is this date suitable for the api and
On 14 Oct 2008, at 17:04, Martijn van Oosterhout wrote:
On Tue, Oct 14, 2008 at 5:44 PM, Shaun McDonald wrote:
It is basically the current format, with each node, way and relation
containing the following 3 additional fields: changeset=nn, version=nn,
uid=nn. uid is the
On Tue, Oct 14, 2008 at 5:44 PM, Shaun McDonald wrote:
It is basically the current format, with each node, way and relation
containing the following 3 additional fields: changeset=nn, version=nn,
uid=nn. uid is the user.id.
I must have missed a step. I thought the user id was
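For concreteness, a 0.6 element would carry the three new attributes
alongside the existing ones, roughly like this (all values made up):

  <node id="123" lat="51.5" lon="-0.1" visible="true"
        timestamp="2008-10-14T10:18:00Z"
        changeset="456" version="2" uid="789" user="example"/>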
On 14 Oct 2008, at 15:05, Stefan de Konink wrote:
Shaun McDonald wrote:
http://svn.openstreetmap.org/applications/utils/planet.osm/perl/planetosm-to-db.pl
will also need to be updated to 0.6; would anyone like to update the script?
I
Shaun McDonald wrote:
I would like to propose the weekend of 9th November 2008, in the
CloudMade offices in London. Is this date suitable for the api and
application devs?
Fine by me.
Tom
--
Tom Hughes
http://www.compton.nu/
Andy Allan wrote:
On Wed, Sep 24, 2008 at 12:30 PM, Brett Henderson wrote:
Perhaps I completely misunderstood you. I was worried that you might do
something like this:
1. way 12345 is currently at version 1.
2. Via potlatch you edit way 12345 adding a new tag and
Seems like a good idea.
Doesn't necessarily need to be in London.
I'll be in touch with NLNet to see if they want to fund another hack-a-thon.
Let's discuss it at SOTM. See you all there!
Martijn
On 9 Jul 2008, at 11:54, Shaun McDonald wrote:
Frederik Ramm wrote:
Hi All,
Just wondering where the API 0.6 changes are up to. I haven't been
paying much attention and might have missed something but haven't
noticed any discussion recently. I've checked out the 0.6 wiki page and
it seems fairly quiet. Is it still soldiering on quietly in the
background or
On Thu, May 15, 2008 at 05:45:22PM +0200, Frederik Ramm wrote:
I was not aware this is also a server
load issue. And frankly, if this is about server load, then there
are better ways to mitigate that, like rewriting the map call as
a C/C++ Apache module...
The map call is not involved.
No,
On Thu, May 15, 2008 at 06:45:01PM +0200, Frederik Ramm wrote:
Out of interest, why's that a bad thing? We have several editors, for
example: each of them works differently and presents information
differently. Why do we have to have One True Rollback?
We have enough trouble as it is
Hi,
Use your imagination then. If the user requests, say, a graphical
representation of the changes effected by changeset X, you will not
want to show the intermediate steps. So you would have to collapse the
changeset: something changed first and deleted later, show it only as
Dair Grant wrote:
If an editor wants to monitor individual edits in order to provide
coaching or feedback to users, that's best done by watching user actions
as they happen and providing feedback based on that (recording that info
in the DB for every action for every user is
On Fri, May 16, 2008 at 10:13 AM, Dair Grant wrote:
Frederik Ramm wrote:
First of all, you might want to do rollback from Z to Y only because
you want to keep Y to X.
Changesets are a grouping of edits that make things easier because one
only has to work with the groups - e.g.
Dave Stubbs wrote:
this whole argument is over the format of a changeset
query where it would be quite possible to implement it both ways, and
have both available at the same time.
That's a very good point.
cheers
Richard
Richard Fairhurst wrote:
Dave Stubbs wrote:
this whole argument is over the format of a changeset
query where it would be quite possible to implement it both ways, and
have both available at the same time.
That's a very good point.
An
On Wed, May 14, 2008 at 11:04 PM, bvh wrote:
But apparently the server code expects version, not old_version.
Personally I slightly prefer version, as it would then become
identical to the code for just saving the file.
Heh. We started from the same principles and ended up in a
On Wed, May 14, 2008 at 08:21:45PM -0400, Christopher Schmidt wrote:
There is one other new-style piece of information which is only
applicable to changeset-uploading: the 'old id', the placeholder ID with
which 'create' objects were uploaded. I think that 'old_id' is
appropriate for this.
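The shape being discussed is a per-object mapping from placeholder ID to
assigned ID; a diff upload response would then look roughly like this (IDs
made up):

  <diffResult version="0.6">
    <node old_id="-1" new_id="1001" new_version="1"/>
    <way old_id="-2" new_id="2001" new_version="1"/>
  </diffResult>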
On Thu, May 15, 2008 at 08:30:08AM +0200, Martijn van Oosterhout wrote:
So, we specced out what you would need for a changeset download,
invented old_version and new_version and then used old_version
everywhere we needed it. If we do it your way then we would get
old_version and version for
On Thu, May 15, 2008 at 7:28 AM, bvh wrote:
On Thu, May 15, 2008 at 08:30:08AM +0200, Martijn van Oosterhout wrote:
So, we specced out what you would need for a changeset download,
invented old_version and new_version and then used old_version
everywhere we needed it. If we
On Thu, May 15, 2008 at 09:10:42AM +0200, Martijn van Oosterhout wrote:
I'm not talking about diff upload responses, I'm talking about
changeset downloads. Say I note that object 123 was modified in
changeset 456 six weeks ago. And I go to the API and say: show me
everything done in changeset
bvh wrote:
On Thu, May 15, 2008 at 09:10:42AM +0200, Martijn van Oosterhout wrote:
I'm not talking about diff upload responses, I'm talking about
changeset downloads. Say I note that object 123 was modified in
changeset 456 six weeks ago.
On Thu, May 15, 2008 at 09:02:04AM +0100, Tom Hughes wrote:
But isn't old_version+1 always equal to version?
Under the current plan, yes. We didn't think it was reasonable to
push that assumption down to the clients, though.
Why would that be unreasonable? In what (futuristic) scenario would
Hi,
But isn't old_version+1 always equal to version?
Under the current plan, yes. We didn't think it was reasonable to
push that assumption down to the clients, though.
Why would that be unreasonable? In what (futuristic) scenario would
version numbers not increment monotonically one by
On Thu, May 15, 2008 at 12:50:39PM +0200, Frederik Ramm wrote:
An online editor like Potlatch will open a changeset and then let the
user do whatever he wants, including changing an object again and
again and again; since edits are not buffered by the editor but
rather uploaded whenever
On Thu, May 15, 2008 at 07:28:29AM +0200, bvh wrote:
On Thu, May 15, 2008 at 08:30:08AM +0200, Martijn van Oosterhout wrote:
So, we specced out what you would need for a changeset download,
invented old_version and new_version and then used old_version
everywhere we needed it. If we do it
On Thu, May 15, 2008 at 08:30:08AM +0200, Martijn van Oosterhout wrote:
On Wed, May 14, 2008 at 11:04 PM, bvh wrote:
But apparently the server code expects version, not old_version.
Personally I slightly prefer version, as it would then become
identical to the code for just
On Thu, May 15, 2008 at 11:46 AM, Christopher Schmidt wrote:
On Thu, May 15, 2008 at 11:31:13AM +0200, Frederik Ramm wrote:
It is also possible to change the same object multiple times within the
same changeset, so one single changeset might catapult the object
version from 1
Hi,
It is also possible to change the same object multiple times within the
same changeset, so one single changeset might catapult the object
version from 1 to 15.
Is that a design goal? That behavior seems unexpected to me.
An online editor like Potlatch will open a changeset and then
On Thu, May 15, 2008 at 06:40:41AM -0400, Christopher Schmidt wrote:
How so? Why would you save the changeset of an individual object to a
file?
(read entire thread, then respond, Chris: Responding in order is silly.)
--
Christopher Schmidt
MetaCarta
On Thu, May 15, 2008 at 11:31:13AM +0200, Frederik Ramm wrote:
Under the current plan, yes. We didn't think it was reasonable to
push that assumption down to the clients, though.
Why would that be unreasonable? In what (futuristic) scenario would
version numbers not increment monotonically
On Thu, May 15, 2008 at 12:02:39PM +0100, Dave Stubbs wrote:
On Thu, May 15, 2008 at 11:31:13AM +0200, Frederik Ramm wrote:
It is also possible to change the same object multiple times within the
same changeset, so one single changeset might catapult the object
version from 1 to 15.
Is
On Thu, May 15, 2008 at 11:42:03AM +0200, bvh wrote:
On Thu, May 15, 2008 at 11:31:13AM +0200, Frederik Ramm wrote:
Under the current plan, yes. We didn't think it was reasonable to
push that assumption down to the clients, though.
Why would that be unreasonable? In what (futuristic)
On Thu, May 15, 2008 at 07:18:30AM -0400, Christopher Schmidt wrote:
But one of them will be accepted first and the other will later
be judged as sufficiently different, right? So the actual history
in the database will have two transitions, one from v1 to v2 and
the other from v2 to v3.
Hi,
A single changeset can change an object from version 1 to 15, but in that
case, each of the changes within that changeset should still be laid out
in the changeset response, right? So one would be the change from 1-2,
2-3, ... 14-15? Or not?
Don't know.
The most usable type of
Hi,
I think it is the client who should filter what to present to the
user. The response of the database should be as complete as possible,
including sending intermediate states.
I disagree.
If you wanted raw access, then you can have it - just download the
history of every object. You can
On Thu, May 15, 2008 at 01:26:56PM +0200, Frederik Ramm wrote:
The most usable type of response for the user would certainly be:
As a result of this changeset, Object X was changed from state A to
state B.
As a user, I am not interested in the 318 intermediate editing steps;
I want to
On 15 May 2008, at 11:30, bvh wrote:
[...]
I think it is the client who should filter what to present to the
user. The response of the database should be as complete as possible,
including sending intermediate states.
There may be cases, such as wanting to visualise the data changes, or
On Thu, May 15, 2008 at 02:20:42PM +0200, Frederik Ramm wrote:
If you wanted raw access, then you can have it - just download the
history of every object. You can do that even now.
Changesets are introduced to lessen the complexity. We want one big
edit, ideally associated with a comment
Frederik Ramm wrote:
We could do rollback even now without any grouping
(We do!)
[...]
This collapsing would have to be implemented in every piece of software
that deals with changesets, and my hunch is that everybody would
implement it slightly differently.
Out of interest, why's that a