On 2011-11-21 19:23, Nantel Andre wrote:
> That issue goes beyond the ImaGene format. When doing contact
> printing of microarrays it is still fairly common to spot the same
> probe 2 or more times as shown in the example here
> It is also very common for commercial arrays (Agilent for example) to
> have multiple copies of their control spots.
I wouldn't call those duplicate spots. They are different spots at
different coordinates that just happen to have the same reporter/gene.
The importer should import them as separate entries; it is then up to
downstream analysis whether they should be merged into a single value
and what kind of averaging method to use for the merge.
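To make the downstream-merge idea concrete, here is a minimal sketch. It assumes the raw data has already been exported as rows of (reporter id, log2 value); the mean-based merge is just one possible averaging method, and the names are illustrative, not part of BASE:

```python
# Sketch: merging replicate spots *after* import, by reporter id.
# Assumes rows of (reporter_id, log2_value); mean() is one choice of
# averaging method -- median etc. would work the same way.
from collections import defaultdict
from statistics import mean

def merge_replicates(rows):
    """Average the log2 values of spots sharing a reporter id."""
    by_reporter = defaultdict(list)
    for reporter_id, log2_value in rows:
        by_reporter[reporter_id].append(log2_value)
    return {rid: mean(vals) for rid, vals in by_reporter.items()}

spots = [("GENE1", 1.2), ("GENE1", 1.4), ("GENE2", -0.6)]
print(merge_replicates(spots))  # GENE1 averaged over its two spots
```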
> We'll take a look at the Illumina plug-ins but I'm sure that there is
> a more elegant solution.
Does the ImaGene data file have spot coordinates in the data files or
not? If not, then the Illumina case might be useful, but not otherwise.
> What we are trying to do is,
> theoretically, simple since our data is already
> background-subtracted and normalized (ImaGene does a perfectly
> acceptable job of that). We have two columns with ch1 and ch2
> normalized intensities in Log2 and a Flag column showing spots that
> have to be filtered out. We were hoping to use BASE to help us
> organize our experiments before sending them off to MeV.
> In the Base2 demo server, the demoHyb1 bioassay is similar to our
> situation. When I open that item and click on the "Raw data" tab, it
> appears that the spot coordinates were imported and then used to
> produce columns entitled [Rep] Name and [Rep] ID that clearly come
> from duplicate spots. That's what we are trying to replicate.
The demo data set is a GenePix data set, and for each spot the file
contains the coordinates, the reporter id, and a lot of measured
intensities and other values. So if your data is similar to this, I
think you don't need any special new importer. Use the generic "Raw
data flat file importer" and make sure to map the spot coordinate,
reporter id and intensity columns that are in your data files.
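For illustration, here is roughly what such a tab-delimited raw data file might look like and how the column mapping works. The header names (Block, Row, Column, ID, ch1, ch2, Flag) are hypothetical stand-ins, not the actual ImaGene or GenePix layout:

```python
# Sketch of the column mapping a generic flat-file importer performs.
# Header names below are assumptions for illustration only.
import csv
import io

sample = (
    "Block\tRow\tColumn\tID\tch1\tch2\tFlag\n"
    "1\t1\t1\tGENE1\t1.20\t0.95\t0\n"
    "1\t1\t2\tGENE1\t1.35\t1.05\t0\n"
    "1\t2\t1\tGENE2\t-0.60\t-0.40\t1\n"
)

reader = csv.DictReader(io.StringIO(sample), delimiter="\t")
rows = [r for r in reader if r["Flag"] == "0"]  # drop flagged spots
for r in rows:
    # Each spot keeps its own coordinates even when the reporter repeats.
    print((r["Block"], r["Row"], r["Column"]), r["ID"], r["ch1"], r["ch2"])
```

Note how the two GENE1 rows stay as two separate entries with distinct coordinates, which is exactly what the importer should produce before any merging.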
Maybe you can post a few lines from the files you are working with so I
don't have to guess...
The BASE general discussion mailing list