Re: [OSGeo-Discuss] Conference selection transparency (Was Announcement: Call for Location global FOSS4G 2023)

2022-01-13 Thread Jeroen Ticheler via Discuss
My philosophy is, and hopefully will always be, that we trust the 
committee members who do the voting. They all put in their time and, more 
importantly, their heart. Whatever method you come up with, bias and personal 
preferences come into play. I trust the people/members plus the guidelines as 
we already have them. Sometimes the result is unfavourable for me; most of 
the time it fits what the majority of the community (of like-minded people 
working on FOSS) is happy with. Which one is more important?!
Merit is what a lot of trust within the FOSS community is based on. It is a 
core value of OSGeo and FOSS4G from my point of view. 
Cheers, Jeroen

> On 13 Jan 2022 at 15:32, Jonathan Moules via Discuss 
>  wrote:
> 
> > And cognitive bias suddenly does not play a role anymore when you score a 
> > good friend vs a hated enemy against a "list of requirements"? It might 
> > look transparent but is not the tiniest bit more fair.
> 
> Sure the biases will still be there, but the justification for the score is 
> written down for all to see. Hence: Transparent. It'll be available for the 
> entire community to then read; if it's a rationalisation it'll be there for 
> all to see (and call out). 
> 
> Suggestions for even more fairness are welcome.
> 
___
Discuss mailing list
Discuss@lists.osgeo.org
https://lists.osgeo.org/mailman/listinfo/discuss


Re: [OSGeo-Discuss] Conference selection transparency (Was Announcement: Call for Location global FOSS4G 2023)

2022-01-13 Thread María Arias de Reyna via Discuss
On Thu, Jan 13, 2022 at 1:13 PM Jonathan Moules via Discuss
 wrote:
> I don't think there's any need to reinvent the wheel here; a number of 
> open-source initiatives seem to use scoring for evaluating proposals. Chances 
> are something from one of them can be borrowed.
>
> Apache use it for scoring mentee proposals for GSOC: 
> https://community.apache.org/mentee-ranking-process.html
>
> Linux Foundation scores their conference proposals for example: 
> https://events.linuxfoundation.org/kubecon-cloudnativecon-europe/program/scoring-guidelines/

Am I understanding it wrong, or is this for accepting talk proposals,
not conference proposals?

Scoring a contractor for a well-defined project (as you point out public
administrations do), choosing the right person for a specified job, or
deciding whether a talk deserves a slot in a schedule is more or less "easy"
compared to deciding who will host a conference.

If you want to propose a draft of scoring requirements for FOSS4G, I
think it would be interesting to go through them and try to come up
with something. Even if the scoring is not binding, it may help future
proposals see what the path looks like.

My only "but" with this system (which I use almost always when I have
to review anything, and which I intended to use for this FOSS4G
voting) is that it is hard to come up with an objective system that
accounts for all the variables. And if the score does not match the
final decision, that may be difficult to process.

I have been a GSoC mentor with the ASF and, true, we have a
ranking process, but it mostly helped us order the candidates and
reject those that deviated too much. The final decision was not a
purely numeric one. When the difference is small, you do have to
consider other things. And from what I have seen these past few years
at FOSS4G, either there is one candidate that obviously outshines the
rest, or the difference between candidates is really small and it
comes down to things that may not even be defined in the RFP.

And there are things you have to consider that a generic scoring
system can't help you with. We used this system in FOSS4G 2021 to
decide which talks to accept at the conference, where the community
voting had a strong weight but was not binding. And we had to make
some exceptions for good talks that were experimental but didn't get a
good score; on the numbers alone they would have been rejected. We
also had to reject some duplicated talks that had a high score but
couldn't both be accepted. Which one to reject? Usually the one whose
speaker had more talks. But what if neither speaker has any other
talks? That's something you have to check case by case.
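
That duplicate tie-break is simple enough to write down. A minimal sketch
(the function, speaker names, and counts are invented for illustration, not
the actual FOSS4G 2021 tooling):

```python
# Illustrative sketch of the duplicate tie-break rule: when two
# overlapping talks both score well, prefer rejecting the one whose
# speaker already has more talks in the schedule.

def pick_duplicate_to_reject(talk_a, talk_b, accepted_counts):
    """Return the talk to reject, or None if the rule cannot decide."""
    a = accepted_counts.get(talk_a["speaker"], 0)
    b = accepted_counts.get(talk_b["speaker"], 0)
    if a > b:
        return talk_a
    if b > a:
        return talk_b
    return None  # equal load: this is the case-by-case situation

# Hypothetical duplicated talks and speaker workloads.
accepted_counts = {"alice": 2, "bob": 0}
dup1 = {"title": "Intro to GeoServer", "speaker": "alice"}
dup2 = {"title": "GeoServer basics", "speaker": "bob"}

rejected = pick_duplicate_to_reject(dup1, dup2, accepted_counts)
# alice already has two other talks, so her duplicate is the one dropped
```

The `None` branch is the whole point: the rule settles the easy cases and
makes explicit which ones need human, case-by-case judgment.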

Which brings us to the point that, with scoring, there is less room for
experimentation, because candidates will focus on getting high scores
on specific questions rather than on offering their best. For example,
the proposal we made for FOSS4G Sevilla 2019, in a pirate amusement
park to celebrate Magallanes... no score could have predicted that.

So I may agree on scoring, not on binding scoring.

But first we need some draft to work on to score proposals :)


Re: [OSGeo-Discuss] Conference selection transparency (Was Announcement: Call for Location global FOSS4G 2023)

2022-01-13 Thread Kobben, Barend (UT-ITC) via Discuss
Quoting " To work around this, with public sector contracts in the western 
world you have a list of requirements and then all the bids are scored against 
those requirements. The one with the highest score wins the contract. *That* is 
transparent. "

Really...? And cognitive bias suddenly does not play a role anymore when you 
score a good friend vs a hated enemy against a "list of requirements"? It 
might look transparent but is not the tiniest bit more fair.

--
Barend Köbben


From: Discuss  on behalf of Jonathan Moules 
via Discuss 
Organisation: LightPear
Reply to: "jonathan-li...@lightpear.com" 
Date: Thursday, 13 January 2022 at 13:13
To: Bruce Bannerman 
Cc: "discuss@lists.osgeo.org" 
Subject: Re: [OSGeo-Discuss] Conference selection transparency (Was 
Announcement: Call for Location global FOSS4G 2023)


Excellent question Bruce!

I don't think there's any need to reinvent the wheel here; a number of 
open-source initiatives seem to use scoring for evaluating proposals. Chances 
are something from one of them can be borrowed.



Apache use it for scoring mentee proposals for GSOC: 
https://community.apache.org/mentee-ranking-process.html

Linux Foundation scores their conference proposals for example: 
https://events.linuxfoundation.org/kubecon-cloudnativecon-europe/program/scoring-guidelines/


A comprehensive web-page with tons of suggestions and guidance for how to do 
it: https://rfp360.com/rfp-weighted-scoring/

Best,

Jonathan
On 2022-01-13 11:43, Bruce Bannerman wrote:
Jonathan,

Do you have a suggestion as to how the process can be improved?

Kind regards,

Bruce

Disclosure:

I was a member of the LOC for FOSS4G-2009.

I personally don’t have a problem with the process as is, but it may be 
possible to improve things. That is, provided that we don’t make the job of our 
volunteers more difficult than it needs to be.

In the end the people who have stepped up to do the work will need to make the 
call. We may not like the outcome, but we need to trust that they are acting in 
OSGeo’s best interest and respect their decision.


On 13 Jan 2022, at 20:58, Jonathan Moules via Discuss 
<mailto:discuss@lists.osgeo.org> wrote:

> Anyone can ask questions to the candidates.

Yes, they can (and yes, I have asked questions), but here's the thing: The only 
people who actually matter are the people who vote. And we have no idea what 
they vote (for the valid reason stated) or what their criteria are for their 
vote (which is a problem). If the committee don't read and/or care about the 
questions asked/answered then said questions/answers are meaningless.

> The only two things that are not public are:

I disagree, the third thing that's not public, and by far the most important, 
is the actual scoring criteria. Each committee member is a black-box in this 
regard. Not only do we not find out *what* they voted (fine), we also never 
know *why* they voted a specific way.

Did Buenos Aires win because:

* it had the shiniest brochure?

* it was cheapest?

* that's where the committee members wanted to go on holiday?

* nepotism?

* the region seemed like it'd benefit the most?

* they were feeling grumpy at the chair of the other RfP that day?

* they had the "best" bid?

... etc



Disclosure: I am definitely *NOT* stating those are the reasons it was 
chosen!!! I'm highlighting them because the lack of transparency means we can't 
know what the actual reasons were. Frankly, given the absolutely huge list of 
cognitive biases that exist, there's a reasonable chance that the voters aren't 
voting why they think they're voting either. That's just the human condition; 
we're great at deceiving ourselves and rationalisations (me included).

To work around this, with public sector contracts in the western world you have 
a list of requirements and then all the bids are scored against those 
requirements. The one with the highest score wins the contract. *That* is 
transparent.



TL;DR: We don't know why the voters vote as they do. The public sector solves 
this by requiring scoring of bids against a list of pre-published requirements.

I hope that clears things up. I'm not in any way suggesting impropriety, I'm 
highlighting we have no way of knowing there's no impropriety. Hence my claim 
as to a lack of transparency; the votes are opaque.

Cheers,

Jonathan


On 2022-01-13 07:35, María Arias de Reyna wrote:

On Wed, Jan 12, 2022 at 10:50 PM Jonathan Moules via Discuss

<mailto:discuss@lists.osgeo.org> wrote:

On the surface, this is a good idea, but unfortunately it has a fundamental 
problem:

There are no "criteria for selection" of the conference beyond "the committee 
members voted for this proposal". There's zero transparency in the process.

I can't let this serious accusation go unanswered.



All of the process is done via public mailing lists. All the criteria are
published in the Request For Proposals.

Re: [OSGeo-Discuss] Conference selection transparency (Was Announcement: Call for Location global FOSS4G 2023)

2022-01-13 Thread Jonathan Moules via Discuss
> And cognitive bias suddenly does not play a role anymore when you 
score a good friend vs a hated enemy against a "list of 
requirements"? It might look transparent but is not the tiniest bit 
more fair.


Sure the biases will still be there, but the justification for the score 
is written down for all to see. Hence: Transparent. It'll be available 
for the entire community to then read; if it's a rationalisation it'll 
be there for all to see (and call out).


Suggestions for even more fairness are welcome.


On 2022-01-13 14:25, Kobben, Barend (UT-ITC) wrote:


Quoting "To work around this, with public sector contracts in the 
western world you have a list of requirements and then all the bids 
are scored against those requirements. The one with the highest score 
wins the contract. *That* is transparent. "


Really...? And cognitive bias suddenly does not play a role anymore 
when you score a good friend vs a hated enemy against a "list of 
requirements"? It might look transparent but is not the tiniest 
bit more fair.


--

Barend Köbben

From: Discuss  on behalf of Jonathan Moules via Discuss 

Organisation: LightPear
Reply to: "jonathan-li...@lightpear.com" 
Date: Thursday, 13 January 2022 at 13:13
To: Bruce Bannerman 
Cc: "discuss@lists.osgeo.org" 
Subject: Re: [OSGeo-Discuss] Conference selection transparency (Was 
Announcement: Call for Location global FOSS4G 2023)


Excellent question Bruce!

I don't think there's any need to reinvent the wheel here; a number of 
open-source initiatives seem to use scoring for evaluating proposals. 
Chances are something from one of them can be borrowed.


Apache use it for scoring mentee proposals for GSOC: 
https://community.apache.org/mentee-ranking-process.html


Linux Foundation scores their conference proposals for example: 
https://events.linuxfoundation.org/kubecon-cloudnativecon-europe/program/scoring-guidelines/


A comprehensive web-page with tons of suggestions and guidance for how 
to do it: https://rfp360.com/rfp-weighted-scoring/


Best,

Jonathan

On 2022-01-13 11:43, Bruce Bannerman wrote:

Jonathan,

Do you have a suggestion as to how the process can be improved?

Kind regards,

Bruce

Disclosure:

I was a member of the LOC for FOSS4G-2009.

I personally don’t have a problem with the process as is, but it
may be possible to improve things. That is, provided that we don’t
make the job of our volunteers more difficult than it needs to be.

In the end the people who have stepped up to do the work will need
to make the call. We may not like the outcome, but we need to
trust that they are acting in OSGeo’s best interest and respect
their decision.



On 13 Jan 2022, at 20:58, Jonathan Moules via Discuss
 <mailto:discuss@lists.osgeo.org> wrote:

> Anyone can ask questions to the candidates.

Yes, they can (and yes, I have asked questions), but here's
the thing: The only people who actually matter are the people
who vote. And we have no idea what they vote (for the valid
reason stated) or what their criteria are for their vote
(which is a problem). If the committee don't read and/or care
about the questions asked/answered then said questions/answers
are meaningless.

> The only two things that are not public are:

I disagree, the third thing that's not public, and by far the
most important, is the actual scoring criteria. Each committee
member is a black-box in this regard. Not only do we not find
out *what* they voted (fine), we also never know *why* they
voted a specific way.

Did Buenos Aires win because:

* it had the shiniest brochure?

* it was cheapest?

* that's where the committee members wanted to go on holiday?

* nepotism?

* the region seemed like it'd benefit the most?

* they were feeling grumpy at the chair of the other RfP that day?

* they had the "best" bid?

... etc

Disclosure: I am definitely **NOT** stating those are the
reasons it was chosen!!! I'm highlighting them because the
lack of transparency means we can't know what the actual
reasons were. Frankly, given the absolutely huge list of
cognitive biases that exist, there's a reasonable chance that
the voters aren't voting why they think they're voting either.
That's just the human condition; we're great at deceiving
ourselves and rationalisations (me included).

To work around this, with public sector contracts in the
western world you have a list of requirements and then all the
bids are scored against those requirements. The one with the
highest score wins the contract. *That* is transparent.

TL;DR: We don't know why the voters vote as they do. The public 
sector solves this by requiring scoring of bids against a list of 
pre-published requirements.

Re: [OSGeo-Discuss] Conference selection transparency (Was Announcement: Call for Location global FOSS4G 2023)

2022-01-13 Thread Jonathan Moules via Discuss

Excellent question Bruce!

I don't think there's any need to reinvent the wheel here; a number of 
open-source initiatives seem to use scoring for evaluating proposals. 
Chances are something from one of them can be borrowed.



Apache use it for scoring mentee proposals for GSOC: 
https://community.apache.org/mentee-ranking-process.html


Linux Foundation scores their conference proposals for example: 
https://events.linuxfoundation.org/kubecon-cloudnativecon-europe/program/scoring-guidelines/



A comprehensive web-page with tons of suggestions and guidance for how 
to do it: https://rfp360.com/rfp-weighted-scoring/


Best,

Jonathan

On 2022-01-13 11:43, Bruce Bannerman wrote:

Jonathan,

Do you have a suggestion as to how the process can be improved?

Kind regards,

Bruce

Disclosure:

I was a member of the LOC for FOSS4G-2009.

I personally don’t have a problem with the process as is, but it may 
be possible to improve things. That is, provided that we don’t make 
the job of our volunteers more difficult than it needs to be.


In the end the people who have stepped up to do the work will need to 
make the call. We may not like the outcome, but we need to trust that 
they are acting in OSGeo’s best interest and respect their decision.


On 13 Jan 2022, at 20:58, Jonathan Moules via Discuss 
 wrote:




> Anyone can ask questions to the candidates.

Yes, they can (and yes, I have asked questions), but here's the 
thing: The only people who actually matter are the people who vote. 
And we have no idea what they vote (for the valid reason stated) or 
what their criteria are for their vote (which is a problem). If the 
committee don't read and/or care about the questions asked/answered 
then said questions/answers are meaningless.


> The only two things that are not public are:

I disagree, the third thing that's not public, and by far the most 
important, is the actual scoring criteria. Each committee member is a 
black-box in this regard. Not only do we not find out *what* they 
voted (fine), we also never know *why* they voted a specific way.


Did Buenos Aires win because:

* it had the shiniest brochure?

* it was cheapest?

* that's where the committee members wanted to go on holiday?

* nepotism?

* the region seemed like it'd benefit the most?

* they were feeling grumpy at the chair of the other RfP that day?

* they had the "best" bid?

... etc


Disclosure: I am definitely **NOT** stating those are the reasons it 
was chosen!!! I'm highlighting them because the lack of transparency 
means we can't know what the actual reasons were. Frankly, given the 
absolutely huge list of cognitive biases that exist, there's a 
reasonable chance that the voters aren't voting why they think 
they're voting either. That's just the human condition; we're great 
at deceiving ourselves and rationalisations (me included).


To work around this, with public sector contracts in the western 
world you have a list of requirements and then all the bids are 
scored against those requirements. The one with the highest score 
wins the contract. *That* is transparent.



TL;DR: We don't know why the voters vote as they do. The public 
sector solves this by requiring scoring of bids against a list of 
pre-published requirements.


I hope that clears things up. I'm not in any way suggesting 
impropriety, I'm highlighting we have no way of knowing there's no 
impropriety. Hence my claim as to a lack of transparency; the votes 
are opaque.


Cheers,

Jonathan


On 2022-01-13 07:35, María Arias de Reyna wrote:

On Wed, Jan 12, 2022 at 10:50 PM Jonathan Moules via Discuss
  wrote:

On the surface, this is a good idea, but unfortunately it has a fundamental 
problem:
There are no "criteria for selection" of the conference beyond "the committee 
members voted for this proposal". There's zero transparency in the process.

I can't let this serious accusation go unanswered.

All of the process is done via public mailing lists. All the criteria are
published in the Request For Proposals. Anyone in the community can
review the RFP and propose changes to it. Anyone in the community can
read the proposals and interact with the candidatures.

The only two things that are not public are:
  * Confidentiality issues with the proposals. For example, sometimes
providers give you huge discounts in exchange for not making that
discount public. So you can't show the budget publicly unless you are
willing to forgo the discount.
  * Who each member of the committee votes for. This is to ensure they
can vote freely without fearing consequences.

Which are two very reasonable exceptions.

Anyone can ask questions to the candidates. If I am right, you
yourself have been very active on this process for the past years.
Were you not the one that asked what a GeoChica is or am I confusing
you with some other Jonathan? If I am confusing you with some other
Jonathan, my mistake. Maybe you are not aware of the transparency of
the process.

The process is transparent and public except for those two exceptions, 
which guarantee that the process is safe.

Re: [OSGeo-Discuss] Conference selection transparency (Was Announcement: Call for Location global FOSS4G 2023)

2022-01-13 Thread Bruce Bannerman via Discuss
Jonathan,

Do you have a suggestion as to how the process can be improved?

Kind regards,

Bruce

Disclosure:

I was a member of the LOC for FOSS4G-2009.

I personally don’t have a problem with the process as is, but it may be 
possible to improve things. That is, provided that we don’t make the job of our 
volunteers more difficult than it needs to be.

In the end the people who have stepped up to do the work will need to make the 
call. We may not like the outcome, but we need to trust that they are acting in 
OSGeo’s best interest and respect their decision.

> On 13 Jan 2022, at 20:58, Jonathan Moules via Discuss 
>  wrote:
> 
> 
> > Anyone can ask questions to the candidates.
> 
> Yes, they can (and yes, I have asked questions), but here's the thing: The 
> only people who actually matter are the people who vote. And we have no idea 
> what they vote (for the valid reason stated) or what their criteria are for 
> their vote (which is a problem). If the committee don't read and/or care 
> about the questions asked/answered then said questions/answers are 
> meaningless.
> 
> > The only two things that are not public are:
> 
> I disagree, the third thing that's not public, and by far the most important, 
> is the actual scoring criteria. Each committee member is a black-box in this 
> regard. Not only do we not find out *what* they voted (fine), we also never 
> know *why* they voted a specific way.
> 
> Did Buenos Aires win because:
> 
> * it had the shiniest brochure?
> 
> * it was cheapest?
> 
> * that's where the committee members wanted to go on holiday?
> 
> * nepotism?
> 
> * the region seemed like it'd benefit the most?
> 
> * they were feeling grumpy at the chair of the other RfP that day?
> 
> * they had the "best" bid?
> 
> ... etc
> 
> 
> 
> Disclosure: I am definitely *NOT* stating those are the reasons it was 
> chosen!!! I'm highlighting them because the lack of transparency means we 
> can't know what the actual reasons were. Frankly, given the absolutely huge 
> list of cognitive biases that exist, there's a reasonable chance that the 
> voters aren't voting why they think they're voting either. That's just the 
> human condition; we're great at deceiving ourselves and rationalisations (me 
> included).
> 
> To work around this, with public sector contracts in the western world you 
> have a list of requirements and then all the bids are scored against those 
> requirements. The one with the highest score wins the contract. *That* is 
> transparent.
> 
> 
> 
> TL;DR: We don't know why the voters vote as they do. The public sector solves 
> this by requiring scoring of bids against a list of pre-published 
> requirements.
> 
> I hope that clears things up. I'm not in any way suggesting impropriety, I'm 
> highlighting we have no way of knowing there's no impropriety. Hence my claim 
> as to a lack of transparency; the votes are opaque.
> 
> Cheers,
> 
> Jonathan
> 
> 
> 
> On 2022-01-13 07:35, María Arias de Reyna wrote:
>> On Wed, Jan 12, 2022 at 10:50 PM Jonathan Moules via Discuss
>>  wrote:
>>> On the surface, this is a good idea, but unfortunately it has a fundamental 
>>> problem:
>>> There are no "criteria for selection" of the conference beyond "the 
>>> committee members voted for this proposal". There's zero transparency in 
>>> the process.
>> I can't let this serious accusation go unanswered.
>> 
>> All of the process is done via public mailing lists. All the criteria are
>> published in the Request For Proposals. Anyone in the community can
>> review the RFP and propose changes to it. Anyone in the community can
>> read the proposals and interact with the candidatures.
>> 
>> The only two things that are not public are:
>>  * Confidentiality issues with the proposals. For example, sometimes
>> providers give you huge discounts in exchange for not making that
>> discount public. So you can't show the budget publicly unless you are
>> willing to forgo the discount.
>>  * Who each member of the committee votes for. This is to ensure they
>> can vote freely without fearing consequences.
>> 
>> Which are two very reasonable exceptions.
>> 
>> Anyone can ask questions to the candidates. If I am right, you
>> yourself have been very active on this process for the past years.
>> Were you not the one that asked what a GeoChica is or am I confusing
>> you with some other Jonathan? If I am confusing you with some other
>> Jonathan, my mistake. Maybe you are not aware of the transparency of
>> the process.
>> 
>> The process is transparent and public except for those two
>> exceptions, which guarantee that the process is safe.


[OSGeo-Discuss] Conference selection transparency (Was Announcement: Call for Location global FOSS4G 2023)

2022-01-13 Thread Jonathan Moules via Discuss

> Anyone can ask questions to the candidates.

Yes, they can (and yes, I have asked questions), but here's the thing: 
The only people who actually matter are the people who vote. And we have 
no idea what they vote (for the valid reason stated) or what their 
criteria are for their vote (which is a problem). If the committee don't 
read and/or care about the questions asked/answered then said 
questions/answers are meaningless.


> The only two things that are not public are:

I disagree, the third thing that's not public, and by far the most 
important, is the actual scoring criteria. Each committee member is a 
black-box in this regard. Not only do we not find out *what* they voted 
(fine), we also never know *why* they voted a specific way.


Did Buenos Aires win because:

* it had the shiniest brochure?

* it was cheapest?

* that's where the committee members wanted to go on holiday?

* nepotism?

* the region seemed like it'd benefit the most?

* they were feeling grumpy at the chair of the other RfP that day?

* they had the "best" bid?

... etc


Disclosure: I am definitely **NOT** stating those are the reasons it was 
chosen!!! I'm highlighting them because the lack of transparency means 
we can't know what the actual reasons were. Frankly, given the 
absolutely huge list of cognitive biases that exist, there's a 
reasonable chance that the voters aren't voting why they think they're 
voting either. That's just the human condition; we're great at deceiving 
ourselves and rationalisations (me included).


To work around this, with public sector contracts in the western world 
you have a list of requirements and then all the bids are scored against 
those requirements. The one with the highest score wins the contract. 
*That* is transparent.



TL;DR: We don't know why the voters vote as they do. The public sector 
solves this by requiring scoring of bids against a list of pre-published 
requirements.
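
To make the mechanism concrete, a minimal sketch of weighted scoring against
pre-published requirements (the criteria, weights, and numbers here are
invented for illustration, not taken from any real RfP):

```python
# Minimal sketch of public-sector-style bid scoring: every bid gets a
# 0-10 score per requirement, each requirement has a pre-published
# weight, and the highest weighted total wins.

WEIGHTS = {"venue": 0.3, "budget": 0.4, "community_benefit": 0.3}

def weighted_score(bid_scores, weights=WEIGHTS):
    """Combine per-requirement scores into one weighted total."""
    return sum(weights[c] * bid_scores[c] for c in weights)

# Hypothetical bids with per-requirement scores.
bids = {
    "city_a": {"venue": 8, "budget": 7, "community_benefit": 9},
    "city_b": {"venue": 7, "budget": 9, "community_benefit": 6},
}

totals = {name: weighted_score(scores) for name, scores in bids.items()}
winner = max(totals, key=totals.get)
# The per-criterion scores, the weights, and the totals can all be
# published, so anyone can check why the winner won.
```

The audit trail, per-criterion scores plus published weights, is what makes
the outcome explainable, which is the transparency being argued for here.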


I hope that clears things up. I'm not in any way suggesting impropriety, 
I'm highlighting we have no way of knowing there's no impropriety. Hence 
my claim as to a lack of transparency; the votes are opaque.


Cheers,

Jonathan


On 2022-01-13 07:35, María Arias de Reyna wrote:

On Wed, Jan 12, 2022 at 10:50 PM Jonathan Moules via Discuss
  wrote:

On the surface, this is a good idea, but unfortunately it has a fundamental 
problem:
There are no "criteria for selection" of the conference beyond "the committee 
members voted for this proposal". There's zero transparency in the process.

I can't let this serious accusation go unanswered.

All of the process is done via public mailing lists. All the criteria are
published in the Request For Proposals. Anyone in the community can
review the RFP and propose changes to it. Anyone in the community can
read the proposals and interact with the candidatures.

The only two things that are not public are:
  * Confidentiality issues with the proposals. For example, sometimes
providers give you huge discounts in exchange for not making that
discount public. So you can't show the budget publicly unless you are
willing to forgo the discount.
  * Who each member of the committee votes for. This is to ensure they
can vote freely without fearing consequences.

Which are two very reasonable exceptions.

Anyone can ask questions to the candidates. If I am right, you
yourself have been very active on this process for the past years.
Were you not the one that asked what a GeoChica is or am I confusing
you with some other Jonathan? If I am confusing you with some other
Jonathan, my mistake. Maybe you are not aware of the transparency of
the process.

The process is transparent and public except for those two exceptions,
which guarantee that the process is safe.