On Thu, Sep 21, 2017 at 10:47 AM, Andy Turner <[email protected]> wrote:
> Hi,
>
> It is possible to come up with a set of tasks and tests used to confirm and 
> classify what software is capable of. Working out what is included and how 
> it is included is non-trivial, and I think this is in the domain of the Open 
> Geospatial Consortium and the standards-defining organisations generally. 
> Sorry, I've not been engaging there of late, but when I did, interoperability 
> was the primary goal, and standardisation of data and services, and of how to 
> use those services, was key. Anyway, there can be other descriptors of 
> software/services too, like the nature of the user interfaces (whether there 
> is an optional GUI or command line, whether things operate via web protocols, 
> and indeed whether it is more a single desktop application or something with 
> more of a client/server architecture), whether it is modular, whether there 
> is an API (and what its nature is), what language(s) it is written in, and 
> possibly loads of other things.
>
> Sorry, I digress, let me try to get to the point...
>
> If there were a breakdown of what functions there are and how the software 
> works, this might help in identifying not only similarities between one 
> FOSS offering and proprietary ones, but also between FOSS offerings. This 
> could be useful in a number of ways, one of which might be identifying 
> whether there is a single FOSS offering that does everything a user 
> currently wants to do (and may already do using other software).
>
> Migrating from one set of software to another to perform the same tasks can 
> be quite a job for any organisation. It might require a significant amount 
> of research, the development of educational resources, and training.
>
> It would be great if there were a set of educational resources showing how 
> to perform tasks in different software (and indeed using different 
> programming languages). Whatever the platform, metrics can be developed for 
> the complexity, the level of automation, and the computational efficiency. 
> With a set of metrics it would be easier to measure the similarities and 
> differences between software.
>
> Sorry, having rambled on, I realise that I have gone a bit off topic. I 
> expect this has already been suggested and is being worked on; I haven't 
> dared to read the entire thread before posting, and I have very little time 
> to help get this in place! Also, I have not replied to the very last post on 
> this thread but to one a bit further back, as the others have spun off in 
> other important directions.
>
> Anyway, you have my moral support, thanks for all your efforts developing the 
> OSGeo website, educational resources, services and software.
>
> Best wishes,
>
> Andy
> http://www.geog.leeds.ac.uk/people/a.turner/index.html
>
Hi Andy,

We are already including this to some extent, although it's true that we
didn't distinguish between client and server, which could be quite confusing
(something to improve!). For example, for GeoNetwork, see the attached image.
