Hi All,

I am also a new OTB user, trying to perform OBIA on a WorldView-2 image.

I was able to get a segmentation with guidance from this tutorial:

http://wiki.awf.forst.uni-goettingen.de/wiki/index.php/Object-based_classification_(Tutorial)

However, I have not been able to classify the segments as described in the 
last part of the tutorial. After reading this thread and several others, it 
sounds like classification might not be a straightforward task with the 
current tools (at least for inexperienced users like myself). I have the 
segmented polygons and the training polygons, but I cannot figure out how to 
get a classification using any combination of tools. Any suggestions? Thank you.

Mark 


On Saturday, May 20, 2017 at 11:15:15 PM UTC+8, Stephen Woodbridge wrote:
>
> Jordi,
>
> Regarding your question: "On the other hand, maybe we should give up on 
> OBIA and just use deep learning with CNNs, as most people have started 
> doing?" 
>
> This sent me off reading about CNNs and, with just enough info to be 
> dangerous :), I like the idea, but it is not without some problems that 
> would need to be solved, for example: 
>
>    - CNNs seem to be good at recognizing features, but not at feature 
>    extraction, so some additional infrastructure would be needed for that 
>    - given a large area, one would have to feed it to the CNN in pieces to 
>    have it recognize features, maybe using a variable-sized sliding window 
>
> I'm sure there are other issues, but these two jumped out at me. Where 
> there might be a useful connection is after segmentation: taking the image 
> area around each segment and passing it to the CNN might be a good way to 
> classify the segment object. 
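For illustration, the crop-a-patch-around-each-segment idea can be sketched in plain Python. The patch size, the zero-padding policy, and the toy image below are my own assumptions for the sketch, not anything OTB provides:

```python
# Sketch: crop a fixed-size patch centred on a segment's bounding box, as
# input for a CNN classifier. A nested list stands in for real raster I/O.

def segment_patch(image, bbox, size):
    """Return a size x size patch centred on bbox = (rmin, rmax, cmin, cmax).

    Pixels falling outside the image are zero-padded (assumed policy)."""
    rmin, rmax, cmin, cmax = bbox
    rc = (rmin + rmax) // 2          # centre row of the segment
    cc = (cmin + cmax) // 2          # centre column
    half = size // 2
    rows, cols = len(image), len(image[0])
    patch = []
    for r in range(rc - half, rc - half + size):
        row = []
        for c in range(cc - half, cc - half + size):
            if 0 <= r < rows and 0 <= c < cols:
                row.append(image[r][c])
            else:
                row.append(0)        # zero padding outside the scene
        patch.append(row)
    return patch

# toy 4x4 single-band "image"
img = [[1, 2, 3, 4],
       [5, 6, 7, 8],
       [9, 10, 11, 12],
       [13, 14, 15, 16]]
p = segment_patch(img, (1, 2, 1, 2), 3)
```

A variable-sized sliding window would call this with a size derived from the segment's extent rather than a constant.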
>
> Anyway, a really thought-provoking question! 
>
> -Steve
>
> On Thursday, May 18, 2017 at 10:49:49 AM UTC-4, jordi.inglada wrote:
>>
>> Hi Stephen, 
>>
>> Thanks for this feedback. I think the OTB community could (should?) start 
>> working on the design of a generic OBIA framework for land-cover mapping. 
>> Two releases ago, OTB introduced a new sampling framework for the 
>> classification pipeline [1], which makes dealing with huge amounts of data 
>> feasible (and makes it possible to avoid using a database). 
>>
>> At CESBIO, we are using it for land-cover mapping at the country scale 
>> [2,3], and it has given us a 20x speed-up as well as enormous savings in 
>> disk storage (since applications can be connected in memory). 
>>
>> Although we are still doing pixel-based classification, I think this new 
>> sampling framework could work just as well with "objects", since samples 
>> are geometries in a vector representation (points for now, but they could 
>> be polygons). 
>>
>> Since OTB has the best large-scale segmentation ;), I think the missing 
>> step is feature extraction for polygons. There are some interesting 
>> things in OTB for this (ObjectsRadiometricStatistics from the Sertit 
>> module, for instance), but we have nothing for shapes, let alone context 
>> (what you do with FRAGSTATS). 
>>
>> On the other hand, some say that the classical OBIA workflow may be 
>> sub-optimal, since all segmentation errors (and they exist!) will 
>> undermine the following steps of the processing chain. Superpixels seem 
>> to be an alternative (we may be contributing a large-scale SLIC 
>> implementation soon). I am not aware of any framework generic enough to 
>> integrate all these alternatives (pixel-based + regularisation, classical 
>> OBIA, superpixels + ??), but if some OTB users feel like it, we could try 
>> to sketch something that allows building a scalable pipeline (parallel, 
>> with reduced IO) which generalises the current pixel-based framework. 
>>
>> From my point of view, having a coherent solution in OTB (not only for 
>> classification, but for any remote sensing pipeline) has huge advantages 
>> over using heterogeneous tools (OTB + postgres + FRAGSTATS + 
>> scikit-learn): when you have finished prototyping and want to scale up, 
>> being able to connect applications in memory to minimise IO, use MPI to 
>> distribute on a cluster (or just keep multi-threading), etc., is the key. 
>> The issue nowadays is that even for prototyping we are faced with TB of 
>> data, and in that case I'd rather be efficient from the get-go. 
>>
>> On the other hand, maybe we should give up on OBIA and just use deep 
>> learning with CNNs, as most people have started doing? 
>>
>> Any thoughts on this? 
>>
>> Jordi 
>>
>> [1] https://www.orfeo-toolbox.org/CookBook/recipes/pbclassif.html 
>> [2] http://dx.doi.org/10.3390/rs9010095 
>> [3] http://tully.ups-tlse.fr/jordi/iota2 
>>
>> On Wed 17-May-2017 at 23:27:40 +0200, Stephen Woodbridge <
>> [email protected]> wrote: 
>> > I'm new to OTB, but have the same need to do OBIA. I just finished a 
>> > project where we used a proprietary segmentation module and then 
>> > integrated the results with scikit-learn using Python. I'm rewriting 
>> > that to work with the OTB LSMS segmentation, which works much better. 
>> > So my current workflow is something like this: 
>> > 
>> > step 1: run LSMSSegmentation to get a polygon shapefile with 
>> > attributes like: 
>> > 
>> > Num Name Type Len Decimal 
>> > 1. LABEL N 9 0 
>> > 2. NBPIXELS N 9 0 
>> > 3. MEANB0 N 24 15 
>> > 4. MEANB1 N 24 15 
>> > 5. MEANB2 N 24 15 
>> > 6. MEANB3 N 24 15 
>> > 7. MEANB4 N 24 15 
>> > 8. MEANB5 N 24 15 
>> > 9. VARB0 N 24 15 
>> > 10. VARB1 N 24 15 
>> > 11. VARB2 N 24 15 
>> > 12. VARB3 N 24 15 
>> > 13. VARB4 N 24 15 
>> > 14. VARB5 N 24 15 
>> > 
>> > step 2: then to that I add polygon stats based roughly on FRAGSTATS, 
>> > like: 
>> > 
>> > 15. AREA N 24 15 
>> > 16. PERIM N 24 15 
>> > 17. PARA N 24 15 
>> > 18. COMPACT N 24 15 
>> > 19. COMPACT2 N 24 15 
>> > 20. SMOOTH N 24 15 
>> > 21. SHAPE N 24 15 
>> > 22. FRAC N 24 15 
>> > 23. CIRCLE N 24 15 
>> > 
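For what it's worth, the step-2 shape attributes can be computed from the polygon vertices alone. Below is a minimal pure-Python sketch; note these are simplified vector-geometry analogues of a few of the metrics (AREA, PERIM, PARA, COMPACT, SHAPE), not the exact FRAGSTATS raster formulas:

```python
import math

def shoelace_area(pts):
    """Polygon area via the shoelace formula; pts = [(x, y), ...],
    ring closed implicitly."""
    s = 0.0
    for (x1, y1), (x2, y2) in zip(pts, pts[1:] + pts[:1]):
        s += x1 * y2 - x2 * y1
    return abs(s) / 2.0

def perimeter(pts):
    """Sum of edge lengths around the (implicitly closed) ring."""
    return sum(math.dist(p, q) for p, q in zip(pts, pts[1:] + pts[:1]))

def shape_metrics(pts):
    a = shoelace_area(pts)
    p = perimeter(pts)
    return {
        "AREA": a,
        "PERIM": p,
        "PARA": p / a,                              # perimeter-area ratio
        "COMPACT": 4 * math.pi * a / p ** 2,        # 1.0 for a circle
        "SHAPE": p / (2 * math.sqrt(math.pi * a)),  # 1.0 for a circle
    }

square = [(0, 0), (10, 0), (10, 10), (0, 10)]
m = shape_metrics(square)
```

In a real run these values would be appended to the shapefile's attribute table alongside the MEANB*/VARB* fields.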
>> > step 3: in my case I load these into a postgres database, where we 
>> > have training polygons, and we select the polygon segments that match 
>> > our exemplars 
>> > step 4: then feed them into scikit-learn to build a trained classifier 
>> > step 5: we segment a search area and add the polygon stats as above 
>> > step 6: we run those segments through the trained classifier to get 
>> > the probability that they belong to a specific class. 
>> > 
>> > It sounds like OTB has modules for the classifiers that I might need 
>> > to look into, but I like using the database just because there is a 
>> > lot of data to manage. Gluing everything together with Python means I 
>> > can automate the workflow and the data management of various runs 
>> > against different areas. 
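Steps 3-6 boil down to fitting a classifier on labelled segment attributes and applying it to unlabelled segments. Steve does this with scikit-learn; the toy sketch below substitutes a nearest-centroid rule so it stays dependency-free, and the class names, feature fields, and sample values are purely illustrative:

```python
import math

# Toy stand-in for steps 3-6: "train" on labelled segment attributes,
# then classify new segments by nearest class centroid.
# Feature vectors here are (MEANB0, AREA) -- illustrative fields only.

training = {
    "water":  [(0.10, 500.0), (0.12, 450.0)],
    "forest": [(0.55, 120.0), (0.60, 150.0)],
}

def centroid(samples):
    """Component-wise mean of a list of equal-length feature tuples."""
    n = len(samples)
    return tuple(sum(v[i] for v in samples) / n
                 for i in range(len(samples[0])))

# "training" step: one centroid per class
centroids = {cls: centroid(s) for cls, s in training.items()}

def classify(features):
    """Assign the class whose centroid is closest (Euclidean distance)."""
    return min(centroids, key=lambda c: math.dist(centroids[c], features))

label = classify((0.11, 480.0))
```

A real pipeline would replace `classify` with something like a random forest's `predict_proba` to get the per-class probabilities mentioned in step 6 (and would normalise the features first, since AREA dwarfs MEANB0 in raw units).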
>> > 
>> > -- 
>>
>

-- 
Check the OTB FAQ at
http://www.orfeo-toolbox.org/FAQ.html

You received this message because you are subscribed to the Google
Groups "otb-users" group.
To post to this group, send email to [email protected]
To unsubscribe from this group, send email to
[email protected]
For more options, visit this group at
http://groups.google.com/group/otb-users?hl=en