All super helpful. Thanks for the details. I'll try to tackle it soon.

Best!

Jay

> On May 15, 2015, at 7:04 AM, Christian Dietz 
> <christian.di...@uni-konstanz.de> wrote:
> 
> Hi Jay,
> 
> See my answers below. I hope this helps.
> 
> Christian
> 
> 
> 
> On 14.05.2015 20:44, Jay Warrick wrote:
>> Hi Christian,
>> 
>> Thanks so much! This helps a lot. I probably wouldn't have discovered the 
>> tools for labeling immediately without your suggestion. Although I've gone 
>> through the tutorials, I'm still new to ImgLib2, so I wonder if I could try 
>> to summarize what I understand the general workflow to be with an example, 
>> and maybe you can correct me / direct me as I go? As a place to start, I'm 
>> imagining a scenario where I have a 1-plane, 1-channel nuclear stained image 
>> of cells and its associated thresholded image to identify nuclear regions 
>> for quantifying feature sets. At a high-level, what I imagine trying to do 
>> is...
> 
> In KNIME we are still using the deprecated versions of the Labeling. Tobias 
> Pietzsch rewrote parts of this framework (that's why NativeImgLabeling, for 
> example, is deprecated). You can find the new implementations here: 
> https://github.com/imglib/imglib2-roi/tree/master/src/main/java/net/imglib2/roi. 
> We are going to switch to these implementations in KNIME soon.
> 
>> 
>> 1) Generate a labeling object. For example, using the following...
>> 
>> final Labeling< T > labeling = new NativeImgLabeling< T, IntType >( Img< T > );
> 
> In the new ROI implementations you would use ImgLabeling 
> (https://github.com/imglib/imglib2-roi/blob/master/src/main/java/net/imglib2/roi/labeling/ImgLabeling.java) 
> instead of NativeImgLabeling.
> 
>> 
>> I assume this simplified constructor takes what could be a "black and white" 
>> image and associates Long numbers with each separate region of white pixels. 
>> Right? How might I reassign labels or construct them if, for example, I have 
>> specific IDs that I would like to associate with each white region of the 
>> mask image I would like to label?
> 
> Actually, the constructor takes an Img of arbitrary integer type, which 
> serves as an index image. Each integer value maps to a Set of labels. If you 
> want to derive a Labeling from a "Black and White" image, you have to apply 
> an algorithm, for example a Connected Component Analysis (this is one way to 
> do it in KNIME, for example after a simple thresholding), to identify the 
> individual objects in your "Black and White" image.
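> 
> As a rough sketch with the new classes (assuming imglib2-algorithm is 
> available; "mask" and "indexImg" are just placeholder variables, not part 
> of any existing example):
> 
> // "mask" is the thresholded image; "indexImg" is an Img< IntType > of the
> // same dimensions that backs the labeling's index image.
> final ImgLabeling< Integer, IntType > labeling =
>         new ImgLabeling< Integer, IntType >( indexImg );
> 
> // Assign one Integer label per connected component of foreground pixels.
> ConnectedComponents.labelAllConnectedComponents( mask, labeling,
>         new Iterator< Integer >()
>         {
>             private int next = 1;
> 
>             @Override
>             public boolean hasNext() { return true; }
> 
>             @Override
>             public Integer next() { return next++; }
>         },
>         ConnectedComponents.StructuringElement.FOUR_CONNECTED );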
> 
>> 
>> 2) Once I have a labeling, it seems like the next step is to obtain an 
>> IterableInterval over an ROI that can return "x-y locations" of pixels with 
>> a particular label. I assume to get this I would use something like...
>> 
>> labeling.getIterableRegionOfInterest(i).getIterableIntervalOverROI(src)
> 
> Right! In the new implementations you would do something like:  
> 
> ImgLabeling<String, IntType> labeling = new ImgLabeling<String, IntType>(...);
> LabelRegions<String> regions = new LabelRegions<String>(labeling); 
> Iterator<LabelRegion<String>> labelIter = regions.iterator(); 
> 
> and then use Regions.sample(...) to overlay the region on some arbitrary 
> image.
>> 
>> I can see where sometimes I only care about the shape of the ROI (e.g., with 
>> Zernike shape features) and other times I want pixel intensities at those 
>> locations (e.g., texture features). For the latter case, I assume "src" 
>> could be an image whose pixel values I'm interested in at these locations, 
>> essentially applying an ROI to a particular image. Right? For other 
>> scenarios where only shape matters, would I use 
>> "labeling.getRegionOfInterest()" and that would be sufficient? Does the 
>> Zernike feature set prefer ROIs that define the boundaries of the shape or 
>> the whole region of the shape and it figures out the boundaries itself?
> 
> Correct. In some cases (e.g. Shape) the input is just the LabelRegion (new 
> wording from Tobias's new implementation); in other cases you want to sample 
> the pixels of another image in this region (Regions.sample()).
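> 
> A small sketch of that pattern ("img" stands for the intensity image you 
> want to measure in; the names are illustrative):
> 
> final LabelRegions< String > regions = new LabelRegions< String >( labeling );
> for ( final LabelRegion< String > region : regions )
> {
>     // Pixels of "img" inside this region, e.g. for texture features.
>     final IterableInterval< T > samples = Regions.sample( region, img );
> 
>     // For pure shape features, the LabelRegion itself would be the input.
> }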
> 
>> 
>> 3) Then I suppose I loop through the labels in the "labeling" object to get 
>> these ROIs and pass them to the FeatureSet ops accessing the data as you do 
>> in the provided example and storing the data however I like.
>> 
>> Is that about right?
> 
> Sounds good.
> 
>> 
>> Lastly, related to Step 1, it looks like a labeling can hold multiple labels 
>> for each pixel location. Is the idea there to allow regions to overlap?
> 
> Absolutely. Overlap or more meta-information about objects (TrackID, 
> Classification Result, ...).
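> 
> For example (a sketch; the positions and label values are made up):
> 
> // A pixel of an ImgLabeling holds a Set of labels, so one location can
> // carry an object ID plus extra annotations at the same time.
> final RandomAccess< LabelingType< String > > ra = labeling.randomAccess();
> ra.setPosition( new long[] { 10, 20 } );
> ra.get().add( "cell-17" );
> ra.get().add( "track-3" );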
> 
>> If I'm generally interested in keeping pixels associated with a maximum of 
>> one cell, would I use this feature for anything else and if so, what sort of 
>> workflow is envisioned for creating these different labelings and 
>> essentially ending up with the merged information in a single labeling 
>> object?
> 
> Speaking about workflows: On 
> https://tech.knime.org/community/image-processing there is an example 
> workflow called High-Content-Screening where we make use of a labeling. Maybe 
> this helps you, too.
> 
>> 
>> Thanks!!!
>> 
>> Jay
>> 
>> 
>> 
>> 
>>> On May 13, 2015, at 8:22 AM, Christian Dietz 
>>> <christian.di...@uni-konstanz.de> wrote:
>>> 
>>> Hi Jay,
>>> 
>>> you are right, we have been working on this project for a while. The goal 
>>> of this work is to have efficient implementations of most of the well-known 
>>> features that can be derived from images or ROIs. The main work is done on 
>>> the following branch:
>>> 
>>> https://github.com/imagej/imagej-ops/tree/outputop-service
>>> 
>>> and there is an open issue documenting the process:
>>> 
>>> https://github.com/imagej/imagej-ops/issues/67
>>> 
>>> A very small example of how to use the features is given at:
>>> 
>>> https://github.com/imagej/imagej-ops/blob/outputop-service/src/main/java/net/imagej/ops/features/Example.java
>>> 
>>> Concerning your question about ROIs: The feature implementations do not 
>>> really care whether they get an Img or a ROI. The only thing they expect is 
>>> some IterableInterval, Iterable, or RandomAccessibleInterval, etc. 
>>> (depending on the type of feature). In KNIME we use Labelings 
>>> (https://github.com/imglib/imglib2-roi/tree/master/src) to describe our 
>>> segmentations and derive our ROIs.
>>> 
>>> Does this help you? It would be great if you wanted to contribute to the 
>>> outputop-service and maybe implement some of the missing features.
>>> 
>>> Best,
>>> 
>>> Christian
>>> 
>>> 
>>> 
>>> 
>>> On 13.05.2015 06:27, Jay Warrick wrote:
>>>> Hi All,
>>>> 
>>>> I was hoping to find some info on the 'feature service' or 'haralick' 
>>>> branch of imagej-ops (at least those look like the two most developed 
>>>> branches for feature extraction). The creation of feature set ops is a 
>>>> really great idea and thanks to everyone who is working on it. Likewise, I 
>>>> would certainly be willing to try and help fill out some features if it 
>>>> seems appropriate, especially when I get more familiar with the ops 
>>>> framework. Also, please let me know if there are any concerns with me 
>>>> using any of these tools prior to the authors publishing on/with these 
>>>> implementations themselves. My work is still preliminary, but just wanted 
>>>> to ask to be safe.
>>>> 
>>>> I realize the 'feature service' and 'haralick' branches are somewhat WIPs, 
>>>> but it seems there are many rich feature sets that are nearly or completely 
>>>> implemented, and I was hoping to try to use them if possible... 
>>>> Towards this goal, I was able to use the FirstOrderStatFeatureSet and 
>>>> ZernikeFeatureSet classes to get information from an Img / ImgPlus / 
>>>> SCIFIOImgPlus using the example provided in the branch. However, it is 
>>>> unclear to me how the classes should be used to do this for each cell in 
>>>> an image. Is it assumed that we are feeding in small cropped and masked 
>>>> regions to the feature set ops? If so, suggestions on an efficient way to 
>>>> do so (or links to examples in other projects... Knime?) would be amazing. 
>>>> I'm generally able to identify cells and create ROIs and mask images etc 
>>>> programmatically in Java with ImageJ classes, but haven't done so with 
>>>> Img-related image objects yet. With a hint or two, I can try and take it 
>>>> from there. Maybe do the cropping etc. with old ImageJ classes and wrap 
>>>> the resultant cropped regions in Img objects? 
>>>> Maybe I'm way off base and I'm supposed to be using ROIs somewhere in the 
>>>> mix with the ops. Hopefully someone can set me straight :-)
>>>> 
>>>> Thanks a bunch in advance.
>>>> 
>>>> Best,
>>>> 
>>>> Jay
>>>> _______________________________________________
>>>> ImageJ-devel mailing list
>>>> ImageJ-devel@imagej.net
>>>> http://imagej.net/mailman/listinfo/imagej-devel
>>> 
>>> 
>>> 
>> 
> 
