Re: [Talk-ca] Building Import update

2019-01-26 Thread Pierre Béland via Talk-ca
Nate, I have just published the results for Kingston: a ratio of 66% of
polygons with irregular shapes. It remains to be seen whether simplification
would eliminate the nodes that create these irregular shapes.

I have not yet looked closely at the results. However, my experience in the
Democratic Republic of the Congo since last year, in Kisenso and more recently
Butembo, has shown that starting from these diagnostics, validating and, where
necessary, correcting the polygons brings the ratios down sharply, to below 3%
of buildings.
I also think we should take the time to correct data that is otherwise unlikely
to be edited again later.


 
Pierre 
 

On Saturday, January 26, 2019, 9:06:39 p.m. EST, Nate Wessel wrote:
 
  
James, 
 
 
It does seem that someone will need to properly simplify the data since you 
don't seem willing to do the necessary work. I've already offered to help, but 
I can't do it today, or tomorrow for that matter. My suggestion, again, is that 
we slow down and take the time to do this right. Rushing ahead can only lead to 
hurt feelings, angry emails, and extra work for everyone. Given how much 
editing goes on in the areas I know, many of these imported buildings might not 
be touched again for another decade - can't we make them right the first time?
 
 
I think Pierre is on the right track here with his thoughtful analysis of the 
buildings that have been imported so far - this is the kind of stuff that I'm 
talking about when I say we need some validation. Some questions that I'd like 
to see answered (Pierre, when you have some more time!): just how many 
buildings imported so far are not orthogonal, but seem like they should be? 
What percentage of buildings would benefit from simplification, and is the 
problem worse/better in some areas compared to others?
 
I actually don't think the problem is technically difficult to solve - we just 
have to understand the nature and extent of the problem before we rush to 
solutions. That's the point of validation - understanding what the problems are.
 
 
Best,
 
 Nate Wessel
 Jack of all trades, Master of Geography, PhD candidate in Urban Planning
 NateWessel.com 
 
 


Re: [Talk-ca] Building Import update

2019-01-26 Thread Pierre Béland via Talk-ca
Here is my analysis of the building geometry for Kingston. Starting from the
uids of the contributors who took part in the import, I downloaded for Kingston
5,261 buildings created or modified by them since December 24. The result file
contains 5,253 buildings; a few polygons with errors were removed.
Overpass query: http://overpass-turbo.eu/s/FzI
OSM file and analysis results:
https://www.dropbox.com/s/1dn76c7gmk996ql/on_kingston.import_2018_12_24.osm.zip?dl=0
The geometry analysis of these buildings shows that 66% of them
(3,475 / 5,261) have irregular shapes. This ratio of irregular geometries is
very high, well beyond what we should normally expect.


Methodology
Note that the quality analysis in JOSM detects many problems, including
duplicates and overlaps, but it performs no analysis of the geometry itself.

It is nevertheless possible to analyse the geometries and develop indicators
that raise a "look more closely" flag above a certain level. I identify regular
shapes as described below and allow a tolerance of plus or minus 2.2% before
flagging a shape as irregular.

Polygons with regular shapes:
- shapes with right angles (90 degrees, 270 degrees)
- shapes with constant angles (hexagon, octagon, etc.)

This is an area where a simple formula cannot establish which geometries are
valid even when their shapes are irregular. Still, any ratio above 5% deserves,
in my opinion, a closer look to explain the discrepancies.

My quality script takes every node into account, except those at an angle of
180 degrees, when evaluating the geometry. In the analysis file you can compare
the individual diagnostics for each polygon, along with the corresponding
angles for each.

Below are examples of results from the analysis of the 3,475 buildings with
irregular shapes.

id            nb_angles  shape       angle types                          angles
"56709982"    "5"        "FB_irreg"  "{o,ir,ir,o,o}"                      "{90,174.9,94.6,90.1,90.3}"
"56997713"    "14"       "FB_oo"     "{oo,oo,o,o,o,o,o,o,o,o,o,o,o,o}"    "{92.8,92.2,89.9,90.3,89.9,90.4,90,89.7,89.5,89.5,89.3,88.9,89.2,90.1}"

The first id above has a fifth angle that is almost 180 degrees; zoom in to see
it. The second id has 14 angles, all very close to 90 degrees, but imprecise.

Note that the script is still under development. If you find any
inconsistencies, please let me know.

Legend:
o and r: regular shapes
FB_oo and FB_rr: almost regular shapes
FB_irreg: irregular shapes
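
A minimal Python sketch of the kind of angle test described above (this is not
the actual script used for these results; it assumes the 2.2% tolerance is a
fraction of the target angle and does not distinguish the "almost regular"
categories):

import math

TOLERANCE = 0.022  # +/- 2.2%, read here as a fraction of the target angle (assumption)

def angle_at(prev_pt, pt, next_pt):
    # Angle at pt between the edges pt->prev_pt and pt->next_pt, in degrees (0-360).
    a1 = math.atan2(prev_pt[1] - pt[1], prev_pt[0] - pt[0])
    a2 = math.atan2(next_pt[1] - pt[1], next_pt[0] - pt[0])
    return math.degrees(a2 - a1) % 360

def classify(ring):
    # ring: list of (x, y) vertices of a closed way, without the repeated last point.
    n = len(ring)
    angles = [angle_at(ring[i - 1], ring[i], ring[(i + 1) % n]) for i in range(n)]
    # Ignore near-straight (about 180 degree) vertices, as described above.
    angles = [a for a in angles if abs(a - 180) > 180 * TOLERANCE]
    if len(angles) < 3:
        return "regular"
    # Constant interior angle of a regular polygon with this many corners,
    # plus its reflex counterpart so the result does not depend on ring orientation.
    regular = 180 * (len(angles) - 2) / len(angles)
    targets = (90, 270, regular, 360 - regular)
    def ok(a):
        return any(abs(a - t) <= t * TOLERANCE for t in targets)
    return "regular" if all(ok(a) for a in angles) else "irregular"

For example, classify([(0, 0), (10, 0), (10, 8), (0, 8)]) returns "regular",
while a footprint with a 94.6 degree corner (as in the first example above)
falls outside the roughly 2 degree tolerance around 90 and is flagged
"irregular".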

Pierre 



Re: [Talk-ca] Building Import update

2019-01-26 Thread Nate Wessel

James,

It does seem that someone will need to properly simplify the data since 
you don't seem willing to do the necessary work. I've already offered to 
help, but I can't do it today, or tomorrow for that matter. My 
suggestion, again, is that we slow down and take the time to do this 
right. Rushing ahead can only lead to hurt feelings, angry emails, and 
extra work for everyone. Given how much editing goes on in the areas I 
know, many of these imported buildings might not be touched again for 
another decade - can't we make them right the first time?


I think Pierre is on the right track here with his thoughtful analysis 
of the buildings that have been imported so far - this is the kind of 
stuff that I'm talking about when I say we need some validation. Some 
questions that I'd like to see answered (Pierre, when you have some more 
time!): just how many buildings imported so far are not orthogonal, but 
seem like they should be? What percentage of buildings would benefit 
from simplification, and is the problem worse/better in some areas 
compared to others?


I actually don't think the problem is technically difficult to solve - 
we just have to understand the nature and extent of the problem before 
we rush to solutions. That's the point of validation - understanding 
what the problems are.


Best,

Nate Wessel
Jack of all trades, Master of Geography, PhD candidate in Urban Planning
NateWessel.com 

On 1/26/19 2:10 PM, James wrote:
I'm not installing PostgreSQL for you to accept simplification, that 
YOU said was required because there were 2x as many points (which was 
proved wrong via the simplification). If you want to have fun with the 
file, go ahead.


On Sat., Jan. 26, 2019, 2:00 p.m. Nate Wessel  wrote:


Building count doesn't really have anything to do with preserving
topology, and I'm not sure a visual inspection would cut it - Can
you look at the documentation for this tool and verify that it
preserves the topology of polygon layers?

This is a good illustration of the (potential) problem:
https://trac.osgeo.org/postgis/wiki/UsersWikiSimplifyPreserveTopology

Nate Wessel
Jack of all trades, Master of Geography, PhD candidate in Urban
Planning
NateWessel.com 

On 1/26/19 12:31 PM, James wrote:

It does. If you look at my analysis, the building (polygon) count remains
the same; I also visually inspected a few and they were preserved.

On Sat., Jan. 26, 2019, 11:43 a.m., Nate Wessel <bike...@gmail.com> wrote:

Does that preserve topology between buildings that share nodes?

Nate Wessel
Jack of all trades, Master of Geography, PhD candidate in
Urban Planning
NateWessel.com 

On 1/26/19 11:31 AM, James wrote:

No need for scripts; QGIS does this fine via the Vector
menu -> Geometry tools -> Simplify Geometries utility. I
simplified it to 20 cm, but I think 40 cm is too
aggressive.

I already have scripts to compile it into the data format
needed to be served.
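
For reference, the same Simplify Geometries tool can also be driven from the
QGIS Python console. A minimal sketch (the paths are placeholders; the
tolerance is in layer units, so 0.2 means 20 cm only in a metric CRS):

# Sketch: run QGIS's "Simplify Geometries" (distance / Douglas-Peucker method)
# from the QGIS Python console. Paths and the 0.2 tolerance are illustrative.
import processing  # available inside the QGIS Python environment

result = processing.run(
    "native:simplifygeometries",
    {
        "INPUT": "/path/to/buildings.shp",    # placeholder input layer
        "METHOD": 0,                          # 0 = distance (Douglas-Peucker)
        "TOLERANCE": 0.2,                     # layer units; 20 cm in a metric CRS
        "OUTPUT": "/path/to/simplified.shp",  # placeholder output
    },
)
print(result["OUTPUT"])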

On Sat., Jan. 26, 2019, 11:16 a.m., Nate Wessel <bike...@gmail.com> wrote:

Hi all,

The wiki page is indeed looking a whole lot better right
now - my thanks and congrats to everyone who
contributed! There is still a ways to go, but we seem
to be getting there quickly.

I'll echo John in saying that I would appreciate hearing
from some of the other people who chimed in to express
their doubts about the import. For my part, I'm not
satisfied yet - no surprise, I'm sure ;-). I'm thrilled
that we're talking and working together in the open, and
that addresses the biggest concern I had with the import.

These are the big issues I see remaining:

1. *Validation*: Ideally I'd like to see a good chunk
(more than half) of the data that has been imported
already validated by another user before we proceed with
importing more data. Validation is part of the import
plan, so the import isn't done until validation is done
anyway. My hope is that this will flag any issues that
we can fix before moving forward, and give people time
to chime in on the import plan who maybe haven't
already. I don't want to see everything imported and
only then do we start systematically checking the
quality of our work, if ever. If no one wants to do it
now, no one is going to want to do it later either, and
that doesn't bode well.

2. *Simplification*: James' analysis showed that
simplification could save several hundred megabytes (and
probably more) in 

Re: [Talk-ca] Talk-ca Digest, Vol 131, Issue 48

2019-01-26 Thread James
There are also four states to a task: clear (no action), yellow (completed),
and green (validated)! (There's also "unvalidated," to flag a tile as not being
done again / not being validated.) You can leave comments as well!

On Sat., Jan. 26, 2019, 7:53 p.m., Nate Wessel wrote:
> I'm all for this, so long as it really is just for validation. I believe
> we can leave notes on tasks via the tasking manager (?), which might be a
> good way to catalogue any localized issues we see.
> Nate Wessel
> Jack of all trades, Master of Geography, PhD candidate in Urban Planning
> NateWessel.com 
>
> On 1/26/19 2:16 PM, john whelan wrote:
>
> Perhaps a way forward at the moment would be to open the task manager up
> so the tiles imported so far can be validated.
>
> Having lived with computers for many years I'm in total agreement, they
> work very quickly but have no common sense whatsoever.
>
> Cheerio John
>
> On Sat, Jan 26, 2019, 1:56 PM, Nate Wessel wrote:
>> Getting a clear idea of what needs to be fixed is what validation is all
>> about. Having a second set of eyes look through everyone's imported data in
>> a systematic way will give us ideas for what we need to fix moving forward.
>> It can't be just a matter of looking at a bunch of automated validation
>> script outputs and issuing a checkmark. Machines can do that - us humans
>> can do better, and that's a big part of the beauty of OSM: the human
>> element.
>>
>> If I may be permitted a tangent, I was fairly troubled at the last State
>> of the Map US conference that the focus of attention seemed to have turned
>> to a surprising degree toward "what cool things can machines do with data"
>> from the focus I saw in earlier years, which was much more "how can we get
>> more people engaged?". Machines don't make quality data - only consistent
>> errors. I'm glad the big tech companies were buying us all beers (there was
>> so much free beer...) but we shouldn't adopt their narrow focus on labor
>> efficiency and automation. I don't think efficiency is why we are all here.
>>
>> ...
>>
>> I was going to address some of your other points, but I think my little
>> digression actually highlighted some of the differences in the way we seem
>> to be approaching all of these issues. I'm not a fan of automation and
>> efficiency at the cost of quality (in this context), while that is a
>> compromise you and others seem willing to make. We may not be able to talk
>> our way out of that difference of opinion; the root of the issue is likely
>> just a different vision of OSM and why we each care about it.
>> Nate Wessel
>> Jack of all trades, Master of Geography, PhD candidate in Urban Planning
>> NateWessel.com 
>>
>> On 1/26/19 12:48 PM, Danny McDonald wrote:
>>
>> 1. In terms of validation, it would be helpful to have a clear idea of
>> what sorts of problems need to be fixed.  I have re-validated almost all of
>> the areas I imported (and all of them in Central Toronto), and fixed all of
>> the building-related errors/warnings I found (with a few exceptions); there
>> weren't many errors, and many pre-dated the import.  The only JOSM warning
>> I didn't fix is "Crossing building/residential area".  Yaro's and John's
>> areas don't seem to have many errors either, although there are a few isolated
>> "Crossing building/highway" warnings (and some "building duplicated nodes"
>> errors).  I have also split big retail buildings in dense areas.
>> 2. I'm fine with simplification, I think we should just do it.  In terms
>> of orthogonalization, I don't understand why non-orthogonal buildings are a
>> problem.  If they are, JOSM allows them to be auto-fixed.
>> 3. I agree that the task manager squares are too big in central Toronto.
>> A separate task can be created for central Toronto only, with smaller
>> squares.  I think the square size is fine outside of Toronto, as long as
>> the squares are split appropriately.
>> 4. In terms of conflation, I agree that deleting and re-adding buildings
>> is not desirable, but I don't agree that that means it should never be
>> done, no matter the time cost.  The ideal solution here is some sort of
>> script/plugin that auto-merges new and recently added buildings -
>> basically, an iterated "replace geometry".
>> DannyMcD
>>


Re: [Talk-ca] Talk-ca Digest, Vol 131, Issue 48

2019-01-26 Thread Nate Wessel
I'm all for this, so long as it really is just for validation. I believe 
we can leave notes on tasks via the tasking manager (?), which might be 
a good way to catalogue any localized issues we see.


Nate Wessel
Jack of all trades, Master of Geography, PhD candidate in Urban Planning
NateWessel.com 

On 1/26/19 2:16 PM, john whelan wrote:
Perhaps a way forward at the moment would be to open the task manager 
up so the tiles imported so far can be validated.


Having lived with computers for many years I'm in total agreement, 
they work very quickly but have no common sense whatsoever.


Cheerio John

On Sat, Jan 26, 2019, 1:56 PM Nate Wessel  wrote:


Getting a clear idea of what needs to be fixed is what validation
is all about. Having a second set of eyes look through everyone's
imported data in a systematic way will give us ideas for what we
need to fix moving forward. It can't be just a matter of looking
at a bunch of automated validation script outputs and issuing a
checkmark. Machines can do that - us humans can do better, and
that's a big part of the beauty of OSM: the human element.

If I may be permitted a tangent, I was fairly troubled at the last
State of the Map US conference that the focus of attention seemed
to have turned to a surprising degree toward "what cool things can
machines do with data" from the focus I saw in earlier years,
which was much more "how can we get more people engaged?".
Machines don't make quality data - only consistent errors. I'm
glad the big tech companies were buying us all beers (there was
so much free beer...) but we shouldn't adopt their narrow focus
on labor efficiency and automation. I don't think efficiency is
why we are all here.

...

I was going to address some of your other points, but I think my
little digression actually highlighted some of the differences in
the way we seem to be approaching all of these issues. I'm not a
fan of automation and efficiency at the cost of quality (in this
context), while that is a compromise you and others seem willing
to make. We may not be able to talk our way out of that difference
of opinion; the root of the issue is likely just a different
vision of OSM and why we each care about it.

Nate Wessel
Jack of all trades, Master of Geography, PhD candidate in Urban
Planning
NateWessel.com 

On 1/26/19 12:48 PM, Danny McDonald wrote:

1. In terms of validation, it would be helpful to have a clear
idea of what sorts of problems need to be fixed.  I have
re-validated almost all of the areas I imported (and all of them
in Central Toronto), and fixed all of the building-related
errors/warnings I found (with a few exceptions); there weren't
many errors, and many pre-dated the import. The only JOSM warning
I didn't fix is "Crossing building/residential area".  Yaro's and
John's areas don't seem to have many errors either, although
there are a few isolated "Crossing building/highway" warnings (and
some "building duplicated nodes" errors).  I have also split big
retail buildings in dense areas.
2. I'm fine with simplification, I think we should just do it. 
In terms of orthogonalization, I don't understand why
non-orthogonal buildings are a problem.  If they are, JOSM allows
them to be auto-fixed.
3. I agree that the task manager squares are too big in central
Toronto.  A separate task can be created for central Toronto
only, with smaller squares.  I think the square size is fine
outside of Toronto, as long as the squares are split appropriately.
4. In terms of conflation, I agree that deleting and re-adding
buildings is not desirable, but I don't agree that that means it
should never be done, no matter the time cost.  The ideal
solution here is some sort of script/plugin that auto-merges new
and recently added buildings - basically, an iterated "replace
geometry".
DannyMcD





Re: [Talk-ca] OSM Canada building import

2019-01-26 Thread OSM Volunteer stevea
On Jan 26, 2019, at 12:37 PM, john whelan  wrote:
A history of building data released by Stats Can and how these were entered 
into OSM via an Ottawa pilot project, with some success and some lessons 
learned.  Good for OSM!

> The other complicating factor here is a lot of people are very interested in 
> using the data one way or another.

No doubt it's a factor, though I fail to see how it complicates — unless there 
is a rush to enter the data before they are fully vetted.  (That won't work).  
Should the data be used "from OSM," they must enter OSM with our community 
consensus, standards and practices.  That is beginning to be achieved:  
https://wiki.osm.org/wiki/Canada_Building_Import improves, related Task Manager 
tasks appear to get closer to being "opened again" as Nate's four steps of what 
might be accomplished reach wider consensus and are implemented (or not, or 
something else happens...) and discussion continues right here on talk-ca.  
Yes, "broad philosophical discussions" are included:  they are helpful, likely 
to some more than others.

> The take away, have fun if you can.

"Have fun" is "OSM tenet #2" (#1 is "Don't copy from other maps.")  As we're 
not violating #1 here, I enthusiastically agree with John's "take away:"  
please do have fun (yes, you can, yes, many do).  Yet, as we are a data 
project, we must also be a community who cares deeply about data quality, that 
our data "pass muster" as they enter (especially when via a nationwide Import). 
 I speak for myself only, but others in this project concur.  Canada gets 
there, I'm delighted to see.  Yes, it's taking a bit of thrashing to do so, but 
that's all for the greater good, as the ends justify the (polite, patient, 
correct) means by which we get there.  We're many steps into this 10,000-kilometer 
journey, let's keep going, as the goal is worthy.

Now, who is rolling up their sleeves and addressing Nate's four steps?  Those 
discussions can take place here, though I think the Discussion tab ("Talk 
page") of the link above seems more appropriate.  (And, I'd prefer to "get out 
of the way" here, if anybody were to go so far as to feel "get lost, already, 
SteveA," I wouldn't be offended, though I remain watching this Import for that 
greater good).

SteveA


Re: [Talk-ca] OSM Canada building import

2019-01-26 Thread john whelan
Bringing building outline Open Data into OSM has taken some years.  The
first problem to overcome was knowledge of OpenStreetMap by various levels
of government.  An early contact with the City of Ottawa was made by two
students aged around twelve who used OpenStreetMap to build an Open Data
App for a competition.  It didn't win first prize but it was the only one
that could be used off-line.  One is now a qualified schoolteacher by the
way so you can tell how long ago it was.

The head of OC Transpo was heard to say: don't worry about the license, we
want you to have the bus stops.

So the next step was the Open Data license.  Somehow or other I was invited
to some sort of round table with Treasury Board about Open Data and raised
the issue of license compatibility with OpenStreetMap.

It took them five years of consultations etc before the license was changed
to the 2.0 licence which both they and I thought was compatible.

It took about another year or so to have the City of Ottawa formally adopt
the new license.  Stats Canada played a role in persuading the City of
Ottawa to adopt the new license and to make a file of the building outlines
available to OSM.  Initially they weren't sure if they had one that could
be made available; the file for property taxes is held by a separate agency
in Ontario that is very jealous of its data.

The license was questioned and eventually made its way to the OSM Legal
Working Group where it was confirmed to be acceptable.  Going this route
can add considerable time to an import by the way so if you can use a
license that has already been approved, it's a lot faster.

For Ottawa we had the data from a single source, we had a local group of
mappers who knew the area and thoroughly discussed the import over coffee
for some months before deciding to do the import.  We were exceptionally
lucky in the skill set of the local mappers and their ability to work as a
team.

What was really interesting was what happened next and that was the
building outlines were enriched both with the type of building and
additional tags with address information, quite a few commercial buildings
had websites etc. added.  The added tags were exactly what Stats had been
after.  Many of those tagging were mapping for the first time in response
to a Stats Can Web page.  So OSM gained some mappers.

Stats had funding for the pilot until March 31st.  Anything done after that
needed more funding or would be done in spare time if there was any.  Hence
the 2020 project.  The idea was that mapathons would accurately map
buildings across Canada.  It did generate a lot of interest from University
GIS departments and schools however and a number of government departments
and agencies expressed an interest in the data. A comment from Treasury
Board was that about 60% of the government Open Data consumed was used by
other government departments, which surprised them.  This use of Open Data
consumption by other government departments is worth mentioning to
governments by the way.

" Yea, we had pretty good success having highschool students add in
attributes for buildings using walking maps!"

Unfortunately the accuracy of the buildings mapped in iD left something to
be desired.

So licensing is big.  Stats released some building outlines under the
Federal Government's 2.0 licence and that's what this import is all about.
How should it be handled?

Basically the problem becomes one of who are the local mappers since these
are the ones who say if an import should go ahead or not.  Canada is big.
Ottawa was small enough that issues could be talked through face to face.
We had a short discussion on talk-ca before starting the import and a
suggestion was made that a single import plan was the way to go.  The other
complicating factor here is a lot of people are very interested in using
the data one way or another.

The take away, have fun if you can.

Cheerio John

On Sat, 26 Jan 2019 at 13:55, Javier Carranza 
wrote:

> Hi there to all,
>
> Really interested in this thread as we are precisely a community in
> contact with National Statistics Offices (NSOs) like Stat Can and we see a
> growing interest in OSM's geodatabase.
>
> I can tell the interest will remain in the coming years and we need to be
> prepared. As NSOs are planning their censuses for 2020 (and onwards) like
> in the case of Uganda: https://opendri.org/uganda-open-mapping-for-resilience/
> , the geo open data concept will prevail.
>
> Other insights, anyone?
>
>
>


Re: [Talk-ca] OSM Canada building import

2019-01-26 Thread Danny McDonald
As I said before, I'd like to hear about specific problems that need to be
fixed.  For instance, the issues Nate raised before about large retail
buildings and buildings in buildings were helpful to know about, and I
believe I have fixed those issues in the areas I imported.  I have also
done "human" verification, and corrected a few imported buildings with
wonky footprints, as well as fixing churches and garages that were
improperly not tagged as buildings.

I don't think broad philosophical discussions are helpful to this
discussion - there has been all too much of that on this list (which is why
I have tried to not comment, until now)

P.S. James, could you upload the simplified data to data.osmcanada.ca,
and point the tasking manager there?  That makes it easier to examine
snippets, and will need to be done anyway if we're using the simplified
data going forward.

DannyMcD


Re: [Talk-ca] Talk-ca Digest, Vol 131, Issue 48

2019-01-26 Thread john whelan
Perhaps a way forward at the moment would be to open the task manager up so
the tiles imported so far can be validated.

Having lived with computers for many years I'm in total agreement, they
work very quickly but have no common sense whatsoever.

Cheerio John

On Sat, Jan 26, 2019, 1:56 PM, Nate Wessel wrote:
> Getting a clear idea of what needs to be fixed is what validation is all
> about. Having a second set of eyes look through everyone's imported data in
> a systematic way will give us ideas for what we need to fix moving forward.
> It can't be just a matter of looking at a bunch of automated validation
> script outputs and issuing a checkmark. Machines can do that - us humans
> can do better, and that's a big part of the beauty of OSM: the human
> element.
>
> If I may be permitted a tangent, I was fairly troubled at the last State
> of the Map US conference that the focus of attention seemed to have turned
> to a surprising degree toward "what cool things can machines do with data"
> from the focus I saw in earlier years, which was much more "how can we get
> more people engaged?". Machines don't make quality data - only consistent
> errors. I'm glad the big tech companies were buying us all beers (there was
> so much free beer...) but we shouldn't adopt their narrow focus on labor
> efficiency and automation. I don't think efficiency is why we are all here.
>
> ...
>
> I was going to address some of your other points, but I think my little
> digression actually highlighted some of the differences in the way we seem
> to be approaching all of these issues. I'm not a fan of automation and
> efficiency at the cost of quality (in this context), while that is a
> compromise you and others seem willing to make. We may not be able to talk
> our way out of that difference of opinion; the root of the issue is likely
> just a different vision of OSM and why we each care about it.
> Nate Wessel
> Jack of all trades, Master of Geography, PhD candidate in Urban Planning
> NateWessel.com 
>
> On 1/26/19 12:48 PM, Danny McDonald wrote:
>
> 1. In terms of validation, it would be helpful to have a clear idea of
> what sorts of problems need to be fixed.  I have re-validated almost all of
> the areas I imported (and all of them in Central Toronto), and fixed all of
> the building-related errors/warnings I found (with a few exceptions); there
> weren't many errors, and many pre-dated the import.  The only JOSM warning
> I didn't fix is "Crossing building/residential area".  Yaro's and John's
> areas don't seem to have many errors either, although there are a few isolated
> "Crossing building/highway" warnings (and some "building duplicated nodes"
> errors).  I have also split big retail buildings in dense areas.
> 2. I'm fine with simplification, I think we should just do it.  In terms
> of orthogonalization, I don't understand why non-orthogonal buildings are a
> problem.  If they are, JOSM allows them to be auto-fixed.
> 3. I agree that the task manager squares are too big in central Toronto.
> A separate task can be created for central Toronto only, with smaller
> squares.  I think the square size is fine outside of Toronto, as long as
> the squares are split appropriately.
> 4. In terms of conflation, I agree that deleting and re-adding buildings
> is not desirable, but I don't agree that that means it should never be
> done, no matter the time cost.  The ideal solution here is some sort of
> script/plugin that auto-merges new and recently added buildings -
> basically, an iterated "replace geometry".
> DannyMcD
>


Re: [Talk-ca] Building Import update

2019-01-26 Thread James
I'm not installing PostgreSQL for you to accept simplification, that YOU
said was required because there were 2x as many points (which was proved
wrong via the simplification). If you want to have fun with the file, go
ahead.

On Sat., Jan. 26, 2019, 2:00 p.m., Nate Wessel wrote:
> Building count doesn't really have anything to do with preserving
> topology, and I'm not sure a visual inspection would cut it - Can you look
> at the documentation for this tool and verify that it preserves the
> topology of polygon layers?
>
> This is a good illustration of the (potential) problem:
> https://trac.osgeo.org/postgis/wiki/UsersWikiSimplifyPreserveTopology
> Nate Wessel
> Jack of all trades, Master of Geography, PhD candidate in Urban Planning
> NateWessel.com 
>
> On 1/26/19 12:31 PM, James wrote:
>
> It does. If you look at my analysis, the building (polygon) count remains the
> same; I also visually inspected a few and they were preserved.
>
On Sat., Jan. 26, 2019, 11:43 a.m., Nate Wessel wrote:
>> Does that preserve topology between buildings that share nodes?
>> Nate Wessel
>> Jack of all trades, Master of Geography, PhD candidate in Urban Planning
>> NateWessel.com 
>>
>> On 1/26/19 11:31 AM, James wrote:
>>
>> No need for scripts; QGIS does this fine via the Vector menu -> Geometry
>> tools -> Simplify Geometries utility. I simplified it to 20 cm, but I think
>> 40 cm is too aggressive.
>>
>> I already have scripts to compile it into the data format needed to be
>> served.
>>
On Sat., Jan. 26, 2019, 11:16 a.m., Nate Wessel wrote:
>>> Hi all,
>>>
>>> The wiki page is indeed looking a whole lot better right now - my thanks
>>> and congrats to everyone who contributed! There is still a ways to go,
>>> but we seem to be getting there quickly.
>>>
>>> I'll echo John in saying that I would appreciate hearing from some of
>>> the other people who chimed in to express their doubts about the import.
>>> For my part, I'm not satisfied yet - no surprise, I'm sure ;-). I'm
>>> thrilled that we're talking and working together in the open, and that
>>> addresses the biggest concern I had with the import.
>>>
>>> These are the big issues I see remaining:
>>>
>>> 1. *Validation*: Ideally I'd like to see a good chunk (more than half)
>>> of the data that has been imported already validated by another user before
>>> we proceed with importing more data. Validation is part of the import plan,
>>> so the import isn't done until validation is done anyway. My hope is that
>>> this will flag any issues that we can fix before moving forward, and give
>>> people time to chime in on the import plan who maybe haven't already. I
>>> don't want to see everything imported and only then do we start
>>> systematically checking the quality of our work, if ever. If no one wants
>>> to do it now, no one is going to want to do it later either, and that
>>> doesn't bode well.
>>>
>>> 2. *Simplification*: James' analysis showed that simplification could
>>> save several hundred megabytes (and probably more) in Ontario alone. This
>>> is totally worth doing, but we have to document the process and be very
>>> careful not to lose valuable data. I believe there was also a concern
>>> raised about orthogonal buildings being not quite orthogonal - this is
>>> something that we should handle at the same time, again, very carefully. We
>>> certainly don't want to coerce every building into right angles. With
>>> respect to James, I'm not sure this is something that can be done with a
>>> few clicks in QGIS. I would be willing to develop a script to handle this,
>>> but it would take me about a week or two to find the time to do this
>>> properly. We would need to simultaneously A) simplify straight lines B)
>>> orthogonalize where possible and C) preserve topology between connected
>>> buildings. This is not impossible, it just takes time and care to do
>>> correctly.
>>>
>>> 3. *Speed and Size*: To John's point, it seems like people certainly
>>> are not sticking to the areas they know, unless they get around a whole
>>> hell of a lot more than I do, and yes this is a problem. The whole Toronto
>>> region was basically imported by two people: DannyMcD seems to have done
>>> the entire west side of the region (hundreds of square kilometers) while
>>> zzptichka imported the entire east side of the region (again a truly
>>> massive area), both in the matter of a week or two. They only stopped in
>>> the middle where there were more buildings already and things got a bit
>>> more difficult. The middle is where I live, and when I saw that wave of
>>> buildings coming, I sounded the alarms.
>>> This is way too fast - no one person should be able to import the GTA in
>>> a couple weeks. A big part of the problem, IMO is that the task squares are
>>> much too large, and allow/require a user to import huge areas at once. At
>>> the least, some of the task squares in central Toronto are impossibly
>>> large, including 

Re: [Talk-ca] Building Import update

2019-01-26 Thread Nate Wessel
Building count doesn't really have anything to do with preserving 
topology, and I'm not sure a visual inspection would cut it - Can you 
look at the documentation for this tool and verify that it preserves the 
topology of polygon layers?


This is a good illustration of the (potential) problem:
https://trac.osgeo.org/postgis/wiki/UsersWikiSimplifyPreserveTopology
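
A minimal sketch of the kind of check being asked for here, using
geopandas/shapely (the file name, CRS and 0.2 m tolerance are illustrative
assumptions, and this is only a spot test, not a definitive one): find building
pairs that touched before simplification, simplify each footprint
independently, and report pairs that end up overlapping or separated by a gap.

import geopandas as gpd

buildings = gpd.read_file("kingston_buildings.geojson").to_crs(epsg=32618)
simplified = buildings.geometry.simplify(0.2, preserve_topology=True)

sindex = buildings.sindex
broken = 0
for i, geom in enumerate(buildings.geometry):
    for j in sindex.query(geom):
        if j <= i or not geom.touches(buildings.geometry.iloc[j]):
            continue  # only consider pairs that shared a boundary originally
        a, b = simplified.iloc[i], simplified.iloc[j]
        if a.overlaps(b) or a.disjoint(b):  # an overlap or a gap was introduced
            broken += 1
print(f"{broken} previously adjacent building pairs lost their shared boundary")

Note that preserve_topology=True only keeps each individual polygon valid; it
does not, by itself, keep the shared wall between two separately simplified
footprints identical, which is exactly the problem the page above illustrates.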

Nate Wessel
Jack of all trades, Master of Geography, PhD candidate in Urban Planning
NateWessel.com 

On 1/26/19 12:31 PM, James wrote:
It does. If you look at my analysis, the building (polygon) count remains the 
same; I also visually inspected a few and they were preserved.


On Sat., Jan. 26, 2019, 11:43 a.m. Nate Wessel  wrote:


Does that preserve topology between buildings that share nodes?

Nate Wessel
Jack of all trades, Master of Geography, PhD candidate in Urban
Planning
NateWessel.com 

On 1/26/19 11:31 AM, James wrote:

No need for scripts; QGIS does this fine via the Vector menu ->
Geometry tools -> Simplify Geometries utility. I simplified it to
20 cm, but I think 40 cm is too aggressive.

I already have scripts to compile it into the data format needed
to be served.

On Sat., Jan. 26, 2019, 11:16 a.m., Nate Wessel <bike...@gmail.com> wrote:

Hi all,

The wiki page is indeed looking a whole lot better right now
- my thanks and congrats to everyone who contributed! There
is still a ways to go, but we seem to be getting there
quickly.

I'll echo John in saying that I would appreciate hearing from
some of the other people who chimed in to express their
doubts about the import. For my part, I'm not satisfied yet -
no surprise, I'm sure ;-). I'm thrilled that we're talking
and working together in the open, and that addresses the
biggest concern I had with the import.

These are the big issues I see remaining:

1. *Validation*: Ideally I'd like to see a good chunk (more
than half) of the data that has been imported already
validated by another user before we proceed with importing
more data. Validation is part of the import plan, so the
import isn't done until validation is done anyway. My hope is
that this will flag any issues that we can fix before moving
forward, and give people time to chime in on the import plan
who maybe haven't already. I don't want to see everything
imported and only then do we start systematically checking
the quality of our work, if ever. If no one wants to do it
now, no one is going to want to do it later either, and that
doesn't bode well.

2. *Simplification*: James' analysis showed that
simplification could save several hundred megabytes (and
probably more) in Ontario alone. This is totally worth doing,
but we have to document the process and be very careful not
to lose valuable data. I believe there was also a concern
raised about orthogonal buildings being not quite orthogonal
- this is something that we should handle at the same time,
again, very carefully. We certainly don't want to coerce
every building into right angles. With respect to James, I'm
not sure this is something that can be done with a few clicks
in QGIS. I would be willing to develop a script to handle
this, but it would take me about a week or two to find the
time to do this properly. We would need to simultaneously A)
simplify straight lines B) orthogonalize where possible and
C) preserve topology between connected buildings. This is not
impossible, it just takes time and care to do correctly.

3. *Speed and Size*: To John's point, it seems like people
certainly are not sticking to the areas they know, unless
they get around a whole hell of a lot more than I do, and yes
this is a problem. The whole Toronto region was basically
imported by two people: DannyMcD seems to have done the
entire west side of the region (hundreds of square
kilometers) while zzptichka imported the entire east side of
the region (again a truly massive area), both in the matter
of a week or two. They only stopped in the middle where there
were more buildings already and things got a bit more
difficult. The middle is where I live, and when I saw that
wave of buildings coming, I sounded the alarms.
This is way too fast - no one person should be able to import
the GTA in a couple weeks. A big part of the problem, IMO is
that the task squares are much too large, and allow/require a
user to import huge areas at once. At the least, some of the
task squares in central Toronto are 

Re: [Talk-ca] OSM Canada building import

2019-01-26 Thread Javier Carranza
Hi there to all,

Really interested in this thread as we are precisely a community in contact
with National Statistics Offices (NSOs) like Stat Can and we see a growing
interest in OSM's geodatabase.

I can tell the interest will remain in the coming years and we need to be
prepared. As NSOs are planning their censuses for 2020 (and onwards) like
in the case of Uganda: https://opendri.org/uganda-open-mapping-for-resilience/
, the geo open data concept will prevail.

Other insights, anyone?

Regards,

Javier Carranza Tresoldi, CEO, GeoCensos
Lic. en Economía; MSc. in Geoinformation, Twente University; M.A. in Economics, Georgetown University
@geocensos | Skype: javiercarranza
https://www.youtube.com/watch?v=B7TKVurhKoU
https://www.youtube.com/watch?v=9cfdYdQHZVY
https://www.youtube.com/watch?v=gJQzhM52Zp0
https://www.youtube.com/watch?v=HrGIA5Zzpc0
Colombia: (57) 1 4595159
Mexico: (52) 1 55 35436613
Panama Mobile: (507) 688 - 04892
www.geocensos.com
Let's map together a better world


"La información aquí contenida es para uso exclusivo de la persona o
entidad de destino. Está estrictamente prohibida su utilización, copia,
descarga, distribución, modificación y/o reproducción total o parcial, sin
el permiso expreso del representante legal de Fundación Geocensos, pues su
contenido puede ser de carácter confidencial y/o contener material
privilegiado. Si usted recibió esta información por error, por favor
contacte en forma inmediata a quien la envió y borre este material de su
computador. La Fundación GeoCensos no es responsable por la información
contenida en esta comunicación, el directo responsable es quien la firma o
el autor de la misma."


On Sat, Jan 26, 2019 at 11:49 AM OSM Volunteer stevea <
stevea...@softworkers.com> wrote:

> I'm changing the Subject to delete "Stats Can" as this is an import into
> OSM, not a Stats Can import.  True, they published the data, so "thanks for
> the data," but Stats Can isn't a part of this conversation, they merely
> published the data.  I say it like this to emphasize that OSM is quite
> aware of a good analogy:  the US Census Bureau, which published the TIGER
> data from which massive road and rail data were imported into the USA (roughly,
> many agree), had nothing to do with the import, nothing to say about it and
> don't to this day:  they merely published the data into the public domain
> (as the federal US government do all their/our data, except when it is
> "classified") and OSM chose to import the data.  OSM wishes in retrospect
> we had done a better job of it, as we improve it to this day (and will for
> years/decades, likely) and OSM has learned from this.  Please, Canada, see
> this import as the opportunity it truly is:  do NOT be in a rush to import
> lower-quality, not fully community-vetted data, or you will be quite sorry
> at the mess you'll have to clean up later.  Doing that would be much more
> work than the dialog we are having now to prevent this.  It is worth it to
> have these dialogs and achieve the consensus that the data are as we wish
> them to be.  Are they yet?  It sounds like they are not (Nate's four
> points).
>
> On Jan 26, 2019, at 7:49 AM, John Whelan  wrote:
> > Currently we seem to be at the point where some on the mailing list feel
> there wasn't enough discussion on talk-ca before the import.
>
> MANY agree there wasn't enough discussion.  But that was before.  Rather
> than looking back (though there is nothing wrong with learning from
> missteps), we are in a "now" where that is changing.  So, we continue to
> discuss.  That's fine.  That's actually excellent.
>
> > Quebec I think we should put on one side until the Quebec mappers feel
> more comfortable.
>
> OK, so we await Québécois suggestions / improvements to the process to
> their satisfaction, their input that they are (widely amongst themselves)
> comfortable with where it has finally evolved (but I haven't heard
> that yet), or both.  Usually in that order, but certainly not "well, enough
> time has elapsed, yet we haven't heard much, so let's proceed anyway."
> That doesn't work, that won't work.
>
> > Nate I feel has been involved in a smaller import before and in that
> case there was benefit by simplifying the outlines.  In this case verifying
> nothing gets screwed up adds to the cost.
>
> Nate has done a lot of things in OSM, including a very
> positively-recognized (award-winning?) Ohio bicycle map that included very
> wide coordination with other OSM volunteers, the academic world and local
> community.  (It is absolutely delicious; take a look at it).  Another
> example of what a single person in OSM who reaches out to the community
> with a vision and a plan can achieve — 

Re: [Talk-ca] Talk-ca Digest, Vol 131, Issue 48

2019-01-26 Thread Nate Wessel
Getting a clear idea of what needs to be fixed is what validation is all 
about. Having a second set of eyes look through everyone's imported data 
in a systematic way will give us ideas for what we need to fix moving 
forward. It can't be just a matter of looking at a bunch of automated 
validation script outputs and issuing a checkmark. Machines can do that 
- us humans can do better, and that's a big part of the beauty of OSM: 
the human element.


If I may be permitted a tangent, I was fairly troubled at the last State 
of the Map US conference that the focus of attention seemed to have 
turned to a surprising degree toward "what cool things can machines do 
with data" from the focus I saw in earlier years, which was much more 
"how can we get more people engaged?". Machines don't make quality data 
- only consistent errors. I'm glad the big tech companies were buying us 
all beers (there was so much free beer...) but we shouldn't adopt 
their narrow focus on labor efficiency and automation. I don't think 
efficiency is why we are all here.


...

I was going to address some of your other points, but I think my little 
digression actually highlighted some of the differences in the way we 
seem to be approaching all of these issues. I'm not a fan of automation 
and efficiency at the cost of quality (in this context), while that is a 
compromise you and others seem willing to make. We may not be able to 
talk our way out of that difference of opinion; the root of the issue is 
likely just a different vision of OSM and why we each care about it.


Nate Wessel
Jack of all trades, Master of Geography, PhD candidate in Urban Planning
NateWessel.com 

On 1/26/19 12:48 PM, Danny McDonald wrote:
1. In terms of validation, it would be helpful to have a clear idea of 
what sorts of problems need to be fixed. I have re-validated almost 
all of the areas I imported (and all of them in Central Toronto), and 
fixed all of the building-related errors/warnings I found (with a few 
exceptions); there weren't many errors, and many pre-dated the import.  
The only JOSM warning I didn't fix is "Crossing building/residential 
area".  Yaro's and John's areas don't seem to have many errors either, 
although there are a few isolated "Crossing building/highway" warnings 
(and some "building duplicated nodes" errors).  I have also split big 
retail buildings in dense areas.
2. I'm fine with simplification, I think we should just do it.  In 
terms of orthogonalization, I don't understand why non-orthogonal 
buildings are a problem.  If they are, JOSM allows them to be auto-fixed.
3. I agree that the task manager squares are too big in central 
Toronto.  A separate task can be created for central Toronto only, 
with smaller squares.  I think the square size is fine outside of 
Toronto, as long as the squares are split appropriately.
4. In terms of conflation, I agree that deleting and re-adding 
buildings is not desirable, but I don't agree that that means it 
should never be done, no matter the time cost.  The ideal solution 
here is some sort of script/plugin that auto-merges new and recently 
added buildings - basically, an iterated "replace geometry".

DannyMcD





Re: [Talk-ca] Building Import update

2019-01-26 Thread OSM Volunteer stevea
On Jan 26, 2019, at 8:42 AM, Nate Wessel  wrote:
Four absolutely OUTSTANDING aspects of this project which can (seemingly must) 
be addressed before the Task Manager releases these (or improved/simplified) 
data.

A salute to you, Nate, for these thoughtful words and their potential to very 
positively drive forward this import.

SteveA


Re: [Talk-ca] OSM Canada building import

2019-01-26 Thread OSM Volunteer stevea
I'm changing the Subject to delete "Stats Can" as this is an import into OSM, 
not a Stats Can import.  True, they published the data, so "thanks for the 
data," but Stats Can isn't a part of this conversation, they merely published 
the data.  I say it like this to emphasize that OSM is quite aware of a good 
analogy:  the US Census Bureau, which published the TIGER data from which 
massive road and rail data were imported into the USA (roughly, many agree), had nothing to 
do with the import, nothing to say about it and don't to this day:  they merely 
published the data into the public domain (as the federal US government do all 
their/our data, except when it is "classified") and OSM chose to import the 
data.  OSM wishes in retrospect we had done a better job of it, as we improve 
it to this day (and will for years/decades, likely) and OSM has learned from 
this.  Please, Canada, see this import as the opportunity it truly is:  do NOT 
be in a rush to import lower-quality, not fully community-vetted data, or you 
will be quite sorry at the mess you'll have to clean up later.  Doing that 
would be much more work than the dialog we are having now to prevent this.  It 
is worth it to have these dialogs and achieve the consensus that the data are 
as we wish them to be.  Are they yet?  It sounds like they are not (Nate's four 
points).

On Jan 26, 2019, at 7:49 AM, John Whelan  wrote:
> Currently we seem to be at the point where some on the mailing list feel 
> there wasn't enough discussion on talk-ca before the import.

MANY agree there wasn't enough discussion.  But that was before.  Rather than 
looking back (though there is nothing wrong with learning from missteps), we 
are in a "now" where that is changing.  So, we continue to discuss.  That's 
fine.  That's actually excellent.

> Quebec I think we should put on one side until the Quebec mappers feel more 
> comfortable.

OK, so we await Québécois suggestions / improvements to the process to their 
satisfaction, their input that they are (widely amongst themselves) 
comfortable with where it has finally evolved (but I haven't heard that yet), 
or both.  Usually in that order, but certainly not "well, enough time has 
elapsed, yet we haven't heard much, so let's proceed anyway."  That doesn't 
work, that won't work.

> Nate I feel has been involved in a smaller import before and in that case 
> there was benefit by simplifying the outlines.  In this case verifying 
> nothing gets screwed up adds to the cost.

Nate has done a lot of things in OSM, including a very positively-recognized 
(award-winning?) Ohio bicycle map that included very wide coordination with 
other OSM volunteers, the academic world and local community.  (It is 
absolutely delicious; take a look at it).  Another example of what a single 
person in OSM who reaches out to the community with a vision and a plan can 
achieve — given planning, the time it really takes and wide consensus.

> Buildings not absolutely square, yes but different GIS systems use different 
> accuracy so if the incoming data has a few more decimal places then rounding 
> will occur which can lead to minor inaccuracies. I feel the simplest is just 
> to leave them.

Others seem to feel that these inaccuracies are too rough (data quality too 
poor) to enter OSM.  And "different GIS systems" only matter in a historical 
context (as in, for example "these data came from QGIS" or "these data were run 
through GDAL and turned into a shapefile" or many other workflows).  The only 
"GIS system" that matters is OSM.  Each individual contributor who enters data 
into OSM is responsible for entering high-quality data, or risks having those 
data redacted by the community (though that means the process was broken to 
begin with) — that's simply how OSM works.  Again, this is an OSM project:  a 
data import, which follows rules and community standards judging its quality as 
much as the individual entering the data itself.  If the data should and can be 
improved before they enter (especially with a "data-wide" application of some 
algorithm), like "this squares buildings and we want to do this" or "this turns 
a true rectangle into four nodes instead of eleven and we should do this to 
reduce the amount of data and simplify future edits" then we should.

This isn't being "anti-import."  It IS about "data which ARE imported must be 
high-quality," so let's discuss what we mean by that in the case of these data. 
 That's what we're doing now.

> Selecting everything and squaring is really a mechanical edit and you can get 
> some odd results which again would need to be carefully compared and adds to 
> the workload.

Sometimes "mechanical edits" are OK, sometimes they are not.  It seems John is 
saying "these are not."  Whether this adds to the workload or not is moot, the 
workload will be what it takes for high-quality data to enter, and "what that 
means" is achieved by the discussions we have had, have now and what 

Re: [Talk-ca] Talk-ca Digest, Vol 131, Issue 48

2019-01-26 Thread Danny McDonald
1. In terms of validation, it would be helpful to have a clear idea of what
sorts of problems need to be fixed.  I have re-validated almost all of the
areas I imported (and all of them in Central Toronto), and fixed all of the
building-related errors/warnings I found (with a few exceptions); there
weren't many errors, and many pre-dated the import.  The only JOSM warning
I didn't fix is "Crossing building/residential area".  Yaro's and John's
areas don't seem to have many errors either, although there are a few isolated
"Crossing building/highway" warnings (and some "building duplicated nodes"
errors).  I have also split big retail buildings in dense areas.
2. I'm fine with simplification, I think we should just do it.  In terms of
orthogonalization, I don't understand why non-orthogonal buildings are a
problem.  If they are, JOSM allows them to be auto-fixed.
3. I agree that the task manager squares are too big in central Toronto.  A
separate task can be created for central Toronto only, with smaller
squares.  I think the square size is fine outside of Toronto, as long as
the squares are split appropriately.
4. In terms of conflation, I agree that deleting and re-adding buildings is
not desirable, but I don't agree that that means it should never be done,
no matter the time cost.  The ideal solution here is some sort of
script/plugin that auto-merges new and recently added buildings -
basically, an iterated "replace geometry".
DannyMcD
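
A minimal sketch of the matching step such an auto-merge script would need:
pair each new footprint with the existing OSM building it overlaps most, using
intersection-over-union (geopandas/shapely; the file names and the 0.5
threshold are illustrative assumptions, and this is not an existing plugin).

import geopandas as gpd

new = gpd.read_file("statcan_buildings.geojson").to_crs(epsg=32617)
osm = gpd.read_file("existing_osm_buildings.geojson").to_crs(epsg=32617)

def iou(a, b):
    # Intersection-over-union of two footprints; 1.0 means identical shapes.
    inter = a.intersection(b).area
    return inter / (a.area + b.area - inter) if inter else 0.0

matches = []  # (new_idx, osm_idx) pairs that a "replace geometry" step would merge
for i, geom in enumerate(new.geometry):
    candidates = osm.sindex.query(geom)
    best = max(candidates, key=lambda j: iou(geom, osm.geometry.iloc[j]), default=None)
    if best is not None and iou(geom, osm.geometry.iloc[best]) > 0.5:
        matches.append((i, int(best)))

print(f"{len(matches)} new footprints match an existing OSM building")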

>
>


Re: [Talk-ca] Building Import update

2019-01-26 Thread James
It does. If you look at my analysis, the building (polygon) count remains the same;
I also visually inspected a few and they were preserved.

On Sat., Jan. 26, 2019, 11:43 a.m., Nate Wessel wrote:
> Does that preserve topology between buildings that share nodes?
> Nate Wessel
> Jack of all trades, Master of Geography, PhD candidate in Urban Planning
> NateWessel.com 
>
> On 1/26/19 11:31 AM, James wrote:
>
> No need for scripts; QGIS does this fine via the Vector menu -> Geometry
> tools -> Simplify Geometries utility. I simplified it to 20 cm, but I think
> 40 cm is too aggressive.
>
> I already have scripts to compile it into the data format needed to be
> served.
>
On Sat., Jan. 26, 2019, 11:16 a.m., Nate Wessel wrote:
>> Hi all,
>>
>> The wiki page is indeed looking a whole lot better right now - my thanks
>> and congrats to everyone who contributed! There is still a ways to go,
>> but we seem to be getting there quickly.
>>
>> I'll echo John in saying that I would appreciate hearing from some of the
>> other people who chimed in to express their doubts about the import. For my
>> part, I'm not satisfied yet - no surprise, I'm sure ;-). I'm thrilled that
>> we're talking and working together in the open, and that addresses the
>> biggest concern I had with the import.
>>
>> These are the big issues I see remaining:
>>
>> 1. *Validation*: Ideally I'd like to see a good chunk (more than half)
>> of the data that has been imported already validated by another user before
>> we proceed with importing more data. Validation is part of the import plan,
>> so the import isn't done until validation is done anyway. My hope is that
>> this will flag any issues that we can fix before moving forward, and give
>> people time to chime in on the import plan who maybe haven't already. I
>> don't want to see everything imported and only then do we start
>> systematically checking the quality of our work, if ever. If no one wants
>> to do it now, no one is going to want to do it later either, and that
>> doesn't bode well.
>>
>> 2. *Simplification*: James' analysis showed that simplification could
>> save several hundred megabytes (and probably more) in Ontario alone. This
>> is totally worth doing, but we have to document the process and be very
>> careful not to lose valuable data. I believe there was also a concern
>> raised about orthogonal buildings being not quite orthogonal - this is
>> something that we should handle at the same time, again, very carefully. We
>> certainly don't want to coerce every building into right angles. With
>> respect to James, I'm not sure this is something that can be done with a
>> few clicks in QGIS. I would be willing to develop a script to handle this,
>> but it would take me about a week or two to find the time to do this
>> properly. We would need to simultaneously A) simplify straight lines B)
>> orthogonalize where possible and C) preserve topology between connected
>> buildings. This is not impossible, it just takes time and care to do
>> correctly.
>>
>> 3. *Speed and Size*: To John's point, it seems like people certainly are
>> not sticking to the areas they know, unless they get around a whole hell of
>> a lot more than I do, and yes this is a problem. The whole Toronto region
>> was basically imported by two people: DannyMcD seems to have done the
>> entire west side of the region (hundreds of square kilometers) while
>> zzptichka imported the entire east side of the region (again a truly
>> massive area), both in the matter of a week or two. They only stopped in
>> the middle where there were more buildings already and things got a bit
>> more difficult. The middle is where I live, and when I saw that wave of
>> buildings coming, I sounded the alarms.
>> This is way too fast - no one person should be able to import the GTA in
>> a couple weeks. A big part of the problem, IMO is that the task squares are
>> much too large, and allow/require a user to import huge areas at once. At
>> the least, some of the task squares in central Toronto are impossibly
>> large, including hundreds or thousands of buildings already mapped in OSM.
>> Conflation on these, if done properly would take the better part of a day,
>> and people are likely to get sloppy.
>> I would like to see the task squares dramatically reduced in size as a
>> way of slowing people down, helping them stick to areas they know well, and
>> keeping them focused on data quality over quantity. This would also make
>> the process much more accessible to local mappers who don't already have
>> tons of experience importing.
>>
>> 4. *Conflation*: I don't think the current conflation plan is
>> adequate(ly documented). In practice, what people are actually doing may be
>> fine, but I really want to see some better thought on how to handle
>> existing buildings. Right now the wiki says for example "*Before merging
>> buildings data switch to OSM layer and see if there are any clusters of
>> buildings 

Re: [Talk-ca] Talk-ca Digest, Vol 131, Issue 46

2019-01-26 Thread Pierre Béland via Talk-ca
Hello James,
I am replying quickly and will come back later with a quantitative analysis 
for Kingston, where I visually observe many irregular shapes. For a well-argued 
analysis, the small snags need to be sorted out first. My computer crashes when 
reading the large geojson file in QGIS or JOSM, and I do not have a script to 
split it into smaller files. So I made OSM extracts of the recently imported 
data.

I am therefore analysing the data from OSM to see the irregular shapes and 
those that could be corrected.  I made a few extracts in different Ontario 
cities but have not had time to compute the proportion of irregular shapes.

As for the angles, I am not talking here about a few decimal places that only 
computers can detect!

We obviously have to ignore buildings whose imagery shows genuinely irregular 
architecture.  The analysis is more about identifying badly traced buildings, 
whose corners are almost right angles but were drawn with slightly different 
angles.

The Esri high-resolution imagery has a certain offset versus Bing, but it 
allows zooming in further and comparing shapes better. In this area of 
Kingston I see several buildings that are almost at right angles. In such 
cases it is easy to correct them to 90 degrees. 
https://www.openstreetmap.org/#map=21/44.2369514/-76.4905765
Regular shapes whose nearly right angles should be corrected, for example:
way(id: 657792023, 657792735, 55652473, 657791586, 61429073, 657793764); out meta; >; out meta;

For 180-degree angles, it is possible to remove the nodes, provided they are 
not shared with an adjacent building. On the other hand, that is probably not 
easy to program.
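
A rough sketch of how that removal could be programmed, in Python; `ways` 
(way id -> list of node ids, closed rings) and `coords` (node id -> (lon, lat) 
tuple) are assumed to have been parsed from the .osm extract already, and the 
1-degree tolerance is only an illustration:

    # Sketch: find nodes that sit on a (nearly) straight line and are not
    # shared with any other way, i.e. the only ones that are safe to drop.
    import math
    from collections import Counter

    def angle_at(prev_pt, pt, next_pt):
        """Interior angle at pt, in degrees, in plain lon/lat space."""
        a1 = math.atan2(prev_pt[1] - pt[1], prev_pt[0] - pt[0])
        a2 = math.atan2(next_pt[1] - pt[1], next_pt[0] - pt[0])
        ang = abs(math.degrees(a1 - a2)) % 360.0
        return 360.0 - ang if ang > 180.0 else ang

    def removable_nodes(ways, coords, tolerance_deg=1.0):
        """Node ids with a ~180 degree angle that belong to only one way."""
        use_count = Counter(n for ring in ways.values() for n in set(ring[:-1]))
        removable = set()
        for ring in ways.values():
            pts = ring[:-1]                     # drop the duplicated closing node
            for i, node in enumerate(pts):
                if use_count[node] > 1:
                    continue                    # shared with an adjacent building
                ang = angle_at(coords[pts[i - 1]], coords[node],
                               coords[pts[(i + 1) % len(pts)]])
                if abs(ang - 180.0) <= tolerance_deg:
                    removable.add(node)
        return removable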

I also see cases where a building appears rectangular but, near one corner, a 
node with a pronounced angle seems unnecessary and creates an irregular shape. 
Sometimes it is a matter of interpretation, particularly when the images are 
not taken vertically. A roof, as here on the DigitalGlobe Premium imagery, 
suggests there is an angle; it is nevertheless well traced in OSM.
http://openstreetmap.org/way/657793764
And the architecture from one city to another, from one era to another, 
creates shapes with very different geometries. However, my analysis of 
building geometry over the past few months leads me to say that if the ratio 
(irregular (non-orthogonal) buildings / total buildings) is greater than 5%, 
it is a signal that the data need to be analysed closely to explain why so 
many buildings have irregular shapes.
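
As an illustration of the kind of check behind such a ratio, a sketch in 
Python; `buildings` is assumed to be a list of rings given as (lon, lat) 
tuples, and the 2- and 10-degree tolerances are illustrative, not the values 
used in the analysis above:

    # Sketch: flag buildings that look like they were meant to be orthogonal
    # but were traced slightly off, then compute their share of the total.
    import math

    def corner_angles(pts):
        """Interior angles (degrees) of a closed ring of (lon, lat) tuples."""
        pts = pts[:-1] if pts[0] == pts[-1] else pts
        angles = []
        for i, p in enumerate(pts):
            a, b = pts[i - 1], pts[(i + 1) % len(pts)]
            ang = abs(math.degrees(math.atan2(a[1] - p[1], a[0] - p[0]) -
                                   math.atan2(b[1] - p[1], b[0] - p[0]))) % 360.0
            angles.append(360.0 - ang if ang > 180.0 else ang)
        return angles

    def nearly_square_but_off(pts, ok=2.0, band=10.0):
        """True if every corner is within `band` degrees of 90 (ignoring
        ~180 degree pass-through nodes) but at least one is off by more
        than `ok` degrees."""
        corners = [a for a in corner_angles(pts) if abs(a - 180.0) > ok]
        devs = [abs(a - 90.0) for a in corners]
        return bool(devs) and all(d <= band for d in devs) and any(d > ok for d in devs)

    def irregular_ratio(buildings):
        """Share of buildings flagged as nearly-but-not-quite orthogonal."""
        flagged = sum(1 for pts in buildings if nearly_square_but_off(pts))
        return flagged / len(buildings) if buildings else 0.0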

Examples where I see possible corrections:
way(id: 657794001, 55652494, 657790090, 657794217); out meta; >; out meta;


 
Pierre 
 

On Saturday, January 26, 2019, 10:01:25 a.m. EST, James wrote:
 
 ___
Talk-ca mailing list
Talk-ca@openstreetmap.org
https://lists.openstreetmap.org/listinfo/talk-ca
  I haven't simplified because no one gave me feedback on the data... Not 
going to process a bunch of data files for someone to turn around and say the 
simplification broke something.
On Sat., Jan. 26, 2019, 9:57 a.m. Danny McDonald wrote:

___
Talk-ca mailing list
Talk-ca@openstreetmap.org
https://lists.openstreetmap.org/listinfo/talk-ca


Re: [Talk-ca] Building Import update

2019-01-26 Thread Nate Wessel

Does that preserve topology between buildings that share nodes?

Nate Wessel
Jack of all trades, Master of Geography, PhD candidate in Urban Planning
NateWessel.com 

On 1/26/19 11:31 AM, James wrote:
no need for scripts, qgis does this fine via the Vector menu -> 
Geometry tools -> Simplify Geometries utility. I simplified it to 20cm 
with the , but I think 40cm is too aggressive.


I already have scripts to compile it into the dataformat needed to be 
served.


On Sat., Jan. 26, 2019, 11:16 a.m. Nate Wessel  wrote:


Hi all,

The wiki page is indeed looking a whole lot better right now - my
thanks and congrats to everyone who contributed! There is a still
a ways to go, but we seem to be getting there quickly.

I'll echo John in saying that I would appreciate hearing from some
of the other people who chimed in to express their doubts about
the import. For my part, I'm not satisfied yet - no surprise, I'm
sure ;-). I'm thrilled that we're talking and working together in
the open, and that addresses the biggest concern I had with the
import.

These are the big issues I see remaining:

1. *Validation*: Ideally I'd like to see a good chunk (more than
half) of the data that has been imported already validated by
another user before we proceed with importing more data.
Validation is part of the import plan, so the import isn't done
until validation is done anyway. My hope is that this will flag
any issues that we can fix before moving forward, and give people
time to chime in on the import plan who maybe haven't already. I
don't want to see everything imported and only then do we start
systematically checking the quality of our work, if ever. If no
one wants to do it now, no one is going to want to do it later
either, and that doesn't bode well.

2. *Simplification*: James' analysis showed that simplification
could save several hundred megabytes (and probably more) in
Ontario alone. This is totally worth doing, but we have to
document the process and be very careful not to lose valuable
data. I believe there was also a concern raised about orthogonal
buildings being not quite orthogonal - this is something that we
should handle at the same time, again, very carefully. We
certainly don't want to coerce every building into right angles.
With respect to James, I'm not sure this is something that can be
done with a few clicks in QGIS. I would be willing to develop a
script to handle this, but it would take me about a week or two to
find the time to do this properly. We would need to simultaneously
A) simplify straight lines B) orthogonalize where possible and C)
preserve topology between connected buildings. This is not
impossible, it just takes time and care to do correctly.

3. *Speed and Size*: To John's point, it seems like people
certainly are not sticking to the areas they know, unless they get
around a whole hell of a lot more than I do, and yes this is a
problem. The whole Toronto region was basically imported by two
people: DannyMcD seems to have done the entire west side of the
region (hundreds of square kilometers) while zzptichka imported
the entire east side of the region (again a truly massive area),
both in the matter of a week or two. They only stopped in the
middle where there were more buildings already and things got a
bit more difficult. The middle is where I live, and when I saw
that wave of buildings coming, I sounded the alarms.
This is way too fast - no one person should be able to import the
GTA in a couple weeks. A big part of the problem, IMO is that the
task squares are much too large, and allow/require a user to
import huge areas at once. At the least, some of the task squares
in central Toronto are impossibly large, including hundreds or
thousands of buildings already mapped in OSM. Conflation on these,
if done properly would take the better part of a day, and people
are likely to get sloppy.
I would like to see the task squares dramatically reduced in size
as a way of slowing people down, helping them stick to areas they
know well, and keeping them focused on data quality over quantity.
This would also make the process much more accessible to local
mappers who don't already have tons of experience importing.

4. *Conflation*: I don't think the current conflation plan is
adequate(ly documented). In practice, what people are actually
doing may be fine, but I really want to see some better thought on
how to handle existing buildings. Right now the wiki says for
example "/Before merging buildings data switch to OSM layer and
see if there are any clusters of buildings without any meaningful
tags you can delete to save time when merging/."
With respect to whoever wrote 

Re: [Talk-ca] Building Import update

2019-01-26 Thread James
No need for scripts, QGIS does this fine via the Vector menu -> Geometry
tools -> Simplify Geometries utility. I simplified it to 20cm with the ,
but I think 40cm is too aggressive.

I already have scripts to compile it into the data format needed to be
served.
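
For anyone who prefers a scriptable equivalent of that Simplify Geometries
step, a minimal sketch with shapely; the metre-to-degree conversion is a rough
approximation, and because it works one polygon at a time it does not, by
itself, keep nodes shared between adjacent buildings glued together:

    # Sketch: per-polygon Douglas-Peucker simplification with a tolerance
    # given in metres. OSM coordinates are degrees, so 20 cm is on the order
    # of 2e-6 degrees of latitude (rough equatorial conversion).
    from shapely.geometry import Polygon

    METERS_PER_DEGREE = 111_320.0          # approximate; varies with latitude

    def simplify_footprint(poly, tolerance_m=0.2):
        """Simplify one building footprint, tolerance in metres."""
        return poly.simplify(tolerance_m / METERS_PER_DEGREE,
                             preserve_topology=True)

    # A rectangle traced with redundant mid-side nodes collapses to its corners.
    ring = [(0, 0), (0.00001, 0), (0.00002, 0), (0.00002, 0.00001),
            (0.00002, 0.00002), (0.00001, 0.00002), (0, 0.00002),
            (0, 0.00001), (0, 0)]
    print(len(simplify_footprint(Polygon(ring)).exterior.coords))  # 5: 4 corners + closing point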

On Sat., Jan. 26, 2019, 11:16 a.m. Nate Wessel wrote:
> Hi all,
>
> The wiki page is indeed looking a whole lot better right now - my thanks
> and congrats to everyone who contributed! There is a still a ways to go,
> but we seem to be getting there quickly.
>
> I'll echo John in saying that I would appreciate hearing from some of the
> other people who chimed in to express their doubts about the import. For my
> part, I'm not satisfied yet - no surprise, I'm sure ;-). I'm thrilled that
> we're talking and working together in the open, and that addresses the
> biggest concern I had with the import.
>
> These are the big issues I see remaining:
>
> 1. *Validation*: Ideally I'd like to see a good chunk (more than half) of
> the data that has been imported already validated by another user before we
> proceed with importing more data. Validation is part of the import plan, so
> the import isn't done until validation is done anyway. My hope is that this
> will flag any issues that we can fix before moving forward, and give people
> time to chime in on the import plan who maybe haven't already. I don't want
> to see everything imported and only then do we start systematically
> checking the quality of our work, if ever. If no one wants to do it now, no
> one is going to want to do it later either, and that doesn't bode well.
>
> 2. *Simplification*: James' analysis showed that simplification could
> save several hundred megabytes (and probably more) in Ontario alone. This
> is totally worth doing, but we have to document the process and be very
> careful not to lose valuable data. I believe there was also a concern
> raised about orthogonal buildings being not quite orthogonal - this is
> something that we should handle at the same time, again, very carefully. We
> certainly don't want to coerce every building into right angles. With
> respect to James, I'm not sure this is something that can be done with a
> few clicks in QGIS. I would be willing to develop a script to handle this,
> but it would take me about a week or two to find the time to do this
> properly. We would need to simultaneously A) simplify straight lines B)
> orthogonalize where possible and C) preserve topology between connected
> buildings. This is not impossible, it just takes time and care to do
> correctly.
>
> 3. *Speed and Size*: To John's point, it seems like people certainly are
> not sticking to the areas they know, unless they get around a whole hell of
> a lot more than I do, and yes this is a problem. The whole Toronto region
> was basically imported by two people: DannyMcD seems to have done the
> entire west side of the region (hundreds of square kilometers) while
> zzptichka imported the entire east side of the region (again a truly
> massive area), both in the matter of a week or two. They only stopped in
> the middle where there were more buildings already and things got a bit
> more difficult. The middle is where I live, and when I saw that wave of
> buildings coming, I sounded the alarms.
> This is way too fast - no one person should be able to import the GTA in a
> couple weeks. A big part of the problem, IMO is that the task squares are
> much too large, and allow/require a user to import huge areas at once. At
> the least, some of the task squares in central Toronto are impossibly
> large, including hundreds or thousands of buildings already mapped in OSM.
> Conflation on these, if done properly would take the better part of a day,
> and people are likely to get sloppy.
> I would like to see the task squares dramatically reduced in size as a way
> of slowing people down, helping them stick to areas they know well, and
> keeping them focused on data quality over quantity. This would also make
> the process much more accessible to local mappers who don't already have
> tons of experience importing.
>
> 4. *Conflation*: I don't think the current conflation plan is adequate(ly
> documented). In practice, what people are actually doing may be fine, but I
> really want to see some better thought on how to handle existing buildings.
> Right now the wiki says for example "*Before merging buildings data
> switch to OSM layer and see if there are any clusters of buildings without
> any meaningful tags you can delete to save time when merging*."
> With respect to whoever wrote this, this approach seems to value time over
> data integrity and I just don't think that's how OSM should operate. We
> need to be more careful with the existing data, and we need to show that
> care with clear and acceptable guidelines for handling the data that
> countless people have already spent their time contributing. We don't do
> OSM any favours by carelessly deleting and replacing data. Help 

Re: [Talk-ca] Building Import update

2019-01-26 Thread Nate Wessel

Hi all,

The wiki page is indeed looking a whole lot better right now - my thanks 
and congrats to everyone who contributed! There is still a ways to go, 
but we seem to be getting there quickly.


I'll echo John in saying that I would appreciate hearing from some of 
the other people who chimed in to express their doubts about the import. 
For my part, I'm not satisfied yet - no surprise, I'm sure ;-). I'm 
thrilled that we're talking and working together in the open, and that 
addresses the biggest concern I had with the import.


These are the big issues I see remaining:

1. *Validation*: Ideally I'd like to see a good chunk (more than half) 
of the data that has been imported already validated by another user 
before we proceed with importing more data. Validation is part of the 
import plan, so the import isn't done until validation is done anyway. 
My hope is that this will flag any issues that we can fix before moving 
forward, and give people time to chime in on the import plan who maybe 
haven't already. I don't want to see everything imported and only then 
do we start systematically checking the quality of our work, if ever. If 
no one wants to do it now, no one is going to want to do it later 
either, and that doesn't bode well.


2. *Simplification*: James' analysis showed that simplification could 
save several hundred megabytes (and probably more) in Ontario alone. 
This is totally worth doing, but we have to document the process and be 
very careful not to lose valuable data. I believe there was also a 
concern raised about orthogonal buildings being not quite orthogonal - 
this is something that we should handle at the same time, again, very 
carefully. We certainly don't want to coerce every building into right 
angles. With respect to James, I'm not sure this is something that can 
be done with a few clicks in QGIS. I would be willing to develop a 
script to handle this, but it would take me about a week or two to find 
the time to do this properly. We would need to simultaneously A) 
simplify straight lines B) orthogonalize where possible and C) preserve 
topology between connected buildings. This is not impossible, it just 
takes time and care to do correctly (a rough sketch of the topology-preserving 
piece follows after point 4 below).


3. *Speed and Size*: To John's point, it seems like people certainly are 
not sticking to the areas they know, unless they get around a whole hell 
of a lot more than I do, and yes this is a problem. The whole Toronto 
region was basically imported by two people: DannyMcD seems to have done 
the entire west side of the region (hundreds of square kilometers) while 
zzptichka imported the entire east side of the region (again a truly 
massive area), both in the matter of a week or two. They only stopped in 
the middle where there were more buildings already and things got a bit 
more difficult. The middle is where I live, and when I saw that wave of 
buildings coming, I sounded the alarms.
This is way too fast - no one person should be able to import the GTA in 
a couple weeks. A big part of the problem, IMO is that the task squares 
are much too large, and allow/require a user to import huge areas at 
once. At the least, some of the task squares in central Toronto are 
impossibly large, including hundreds or thousands of buildings already 
mapped in OSM. Conflation on these, if done properly would take the 
better part of a day, and people are likely to get sloppy.
I would like to see the task squares dramatically reduced in size as a 
way of slowing people down, helping them stick to areas they know well, 
and keeping them focused on data quality over quantity. This would also 
make the process much more accessible to local mappers who don't already 
have tons of experience importing.


4. *Conflation*: I don't think the current conflation plan is 
adequate(ly documented). In practice, what people are actually doing may 
be fine, but I really want to see some better thought on how to handle 
existing buildings. Right now the wiki says for example "/Before merging 
buildings data switch to OSM layer and see if there are any clusters of 
buildings without any meaningful tags you can delete to save time when 
merging/."
With respect to whoever wrote this, this approach seems to value time 
over data integrity and I just don't think that's how OSM should 
operate. We need to be more careful with the existing data, and we need 
to show that care with clear and acceptable guidelines for handling the 
data that countless people have already spent their time contributing. 
We don't do OSM any favours by carelessly deleting and replacing data. 
Help convince me that this isn't what's happening.
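
For what it's worth, a minimal sketch of the topology-preserving piece 
(requirement C above), in Python with shapely: pin every node shared between 
two or more ways, then simplify each stretch of the ring between pinned nodes, 
so adjacent buildings stay glued together. The `ways` and `coords` 
dictionaries are assumed to have been parsed from the import file already, and 
orthogonalization (B) would still be a separate pass on top of this; it is an 
illustration of the approach, not the script described above.

    # Sketch: Douglas-Peucker simplification that never drops a node shared
    # between ways, so shared building walls keep their common nodes.
    from collections import Counter
    from shapely.geometry import LineString

    def simplify_ways(ways, coords, tolerance=2e-6):
        """Return {way_id: simplified node-id ring}, keeping shared nodes."""
        use_count = Counter(n for ring in ways.values() for n in set(ring[:-1]))
        result = {}
        for way_id, ring in ways.items():
            nodes = ring[:-1]                        # drop duplicated closing node
            pinned = [i for i, n in enumerate(nodes) if use_count[n] > 1] or [0]
            kept = []
            for k, start in enumerate(pinned):
                end = pinned[(k + 1) % len(pinned)]
                span = (end - start) % len(nodes) or len(nodes)
                run = [nodes[(start + j) % len(nodes)] for j in range(span + 1)]
                line = LineString([coords[n] for n in run]).simplify(tolerance)
                survivors = set(line.coords)         # DP keeps a subset of the original points
                kept.extend(n for n in run[:-1] if coords[n] in survivors)
            result[way_id] = kept + kept[:1]         # re-close the ring
        return result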


Until some effort has been made to address these concerns, I will 
continue to oppose this import moving forward. And to be clear, I don't 
want to oppose this import - I have too much else I should be focusing 
on. I just don't want to see another shoddy import in Toronto (or 
elsewhere).


Best,

Nate Wessel
Jack of all trades, Master of 

[Talk-ca] Stats Can building import

2019-01-26 Thread John Whelan
Currently we seem to be at the point where some on the mailing list feel 
there wasn't enough discussion on talk-ca before the import.


Quebec, I think, we should put to one side until the Quebec mappers feel 
more comfortable.


Nate, I feel, has been involved in a smaller import before, and in that 
case there was benefit in simplifying the outlines.  In this case, verifying 
that nothing gets screwed up adds to the cost.


Buildings not absolutely square: yes, but different GIS systems use 
different accuracy, so if the incoming data has a few more decimal places 
then rounding will occur, which can lead to minor inaccuracies.  I feel 
the simplest is just to leave them.  Selecting everything and squaring 
is really a mechanical edit, and you can get some odd results which again 
would need to be carefully compared, and that adds to the workload.
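
A sketch in Python of how small that rounding effect actually is; OSM stores 
coordinates to 7 decimal places (roughly a centimetre), and the specific 
points below are an illustrative construction:

    # Sketch: take a perfect right angle with ~10 m legs, rotated off the
    # axes, and see how much rounding to 7 decimal places bends it.
    import math

    def corner_angle(a, b, c):
        """Angle at b, in degrees, in plain lon/lat degree space."""
        ang = abs(math.degrees(math.atan2(a[1] - b[1], a[0] - b[0]) -
                               math.atan2(c[1] - b[1], c[0] - b[0]))) % 360.0
        return 360.0 - ang if ang > 180.0 else ang

    def snap(p):
        """Round a (lon, lat) pair to OSM's 7-decimal storage precision."""
        return (round(p[0], 7), round(p[1], 7))

    b = (-76.4900000123, 44.2370000456)      # vertex, deliberately off-grid
    t = math.radians(33.0)                   # wall direction, off the axes
    a = (b[0] + 9.7e-5 * math.cos(t), b[1] + 9.7e-5 * math.sin(t))
    c = (b[0] - 9.7e-5 * math.sin(t), b[1] + 9.7e-5 * math.cos(t))

    print(corner_angle(a, b, c))                    # essentially 90, by construction
    print(corner_angle(snap(a), snap(b), snap(c)))  # still ~90; the drift is a
                                                    # tiny fraction of a degree

That is in line with Pierre's remark above that the issue being flagged is not 
a few decimal places that only computers can detect.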


California Steve has put forward some proposals in the 2020 page of the 
wiki which to me amount to minor variations on what we were doing.  The 
intent always was to involve local mappers but locating them is not 
always easy.


The 2020 project is about not only adding building outlines but also 
about enriching the tags on them and that to me is more important.


I'm not hearing specific concerns which can be addressed and I'd like to 
hear them.


So question to Daniel Begin, Andrew Lester and Pierre what can we do to 
improve the project?


Is there anything else people would like to discuss about either the 
2020 project or the building outline import?


Thanks John



Danny McDonald wrote on 2019-01-26 9:55 AM:
Personally, I'm eager to re-start importing, but I'd like to hear what 
Nate has to offer.  Nate, are you OK with the wiki import process as 
written?  If not, are there specific things you want changed?  The 
current process is the one Yaro followed, although John and I 
basically did the same thing (I didn't always replace existing 
building footprints unless the geometry was really bad).  It doesn't 
seem that the building data has been simplified, although this should 
be an easy fix.


DannyMcD
___
Talk-ca mailing list
Talk-ca@openstreetmap.org
https://lists.openstreetmap.org/listinfo/talk-ca


--
Sent from Postbox 

___
Talk-ca mailing list
Talk-ca@openstreetmap.org
https://lists.openstreetmap.org/listinfo/talk-ca


Re: [Talk-ca] Talk-ca Digest, Vol 131, Issue 46

2019-01-26 Thread James
I haven't simplified because no one gave me feedback on the data... Not
going to process a bunch of data files for someone to turn around and say
the simplification broke something.

On Sat., Jan. 26, 2019, 9:57 a.m. Danny McDonald wrote:
> Personally, I'm eager to re-start importing, but I'd like to hear what
> Nate has to offer.  Nate, are you OK with the wiki import process as
> written?  If not, are there specific things you want changed?  The current
> process is the one Yaro followed, although John and I basically did the
> same thing (I didn't always replace existing building footprints unless the
> geometry was really bad).  It doesn't seem that the building data has been
> simplified, although this should be an easy fix.
>
> DannyMcD
> ___
> Talk-ca mailing list
> Talk-ca@openstreetmap.org
> https://lists.openstreetmap.org/listinfo/talk-ca
>
___
Talk-ca mailing list
Talk-ca@openstreetmap.org
https://lists.openstreetmap.org/listinfo/talk-ca


Re: [Talk-ca] Talk-ca Digest, Vol 131, Issue 46

2019-01-26 Thread Danny McDonald
Personally, I'm eager to re-start importing, but I'd like to hear what Nate
has to offer.  Nate, are you OK with the wiki import process as written?
If not, are there specific things you want changed?  The current process is
the one Yaro followed, although John and I basically did the same thing (I
didn't always replace existing building footprints unless the geometry was
really bad).  It doesn't seem that the building data has been simplified,
although this should be an easy fix.

DannyMcD
___
Talk-ca mailing list
Talk-ca@openstreetmap.org
https://lists.openstreetmap.org/listinfo/talk-ca


Re: [Talk-ca] Building Import update

2019-01-26 Thread john whelan
I'm not certain how this addresses the concerns raised by Andrew Lester and
Pierre Béland, and I seem to recall one other person who expressed concerns.

I think it is important that their concerns are addressed.

Perhaps they would be kind enough to comment on whether or not this
approach addresses their concerns.

Do we have a concern that some mappers have been importing buildings
further than say twenty kilometers from where they live?


Have you found volunteer local mappers in
Alberta
British Columbia
Manitoba
New Brunswick
Newfoundland and Labrador
Northwest Territories
Nova Scotia
Nunavut
Ontario
Prince Edward Island
Quebec
Saskatchewan
Yukon

Who will be willing to oversee the import in each province?

Does this mean the smaller provinces may not see any data?

How will you handle cities of, say, 80,000 population in a smaller province 
that have an interest in seeing their buildings available but have no idea 
how to contact the provincial group?



If we go back to earlier times, the suggestion on talk-ca was that we use 
the single-import approach, and it was mentioned at the time that there didn't 
seem to be a list of local mapper groups in Canada.

I'm not saying the approach of a single import (as far as the import list 
and talk-ca are concerned), followed by a procedure of locally organised 
mappers bringing in the data, is wrong; I'm just trying to ensure the project 
moves forward and we are in agreement.

Thanks

Cheerio John

On Sat, 26 Jan 2019 at 00:17, OSM Volunteer stevea <
stevea...@softworkers.com> wrote:

> Thanks to some good old-fashioned OSM collaboration, both the
> https://wiki.osm.org/wiki/Canada_Building_Import and
> https://wiki.osm.org/wiki/WikiProject_Canada/Building_Canada_2020#NEWS.2C_January_2019
> have been updated.  (The latter points to the former).
>
> In short, it says there are now step-by-steps to begin an import for a
> particular province, and that as the steps get fine-tuned (they look good,
> but might get minor improvements), building a community of at least one or
> two mappers in each of the provinces with data available, the Tasking
> Manager can and will lift the "On Hold" or "Stopped" status.
>
> Nice going, Canada!
>
> See you later,
>
> SteveA
> California
> ___
> Talk-ca mailing list
> Talk-ca@openstreetmap.org
> https://lists.openstreetmap.org/listinfo/talk-ca
>
___
Talk-ca mailing list
Talk-ca@openstreetmap.org
https://lists.openstreetmap.org/listinfo/talk-ca