Hello,

Our lab is trying to automate the process of creating links that our users can 
use to view multiple custom wiggle tracks, stored on our server, in the Genome 
Browser.

We've written a program that builds a text file consisting of track lines and 
URLs pointing to gzipped wiggle track files on our server.

This text file is specified in the URL after the browser address as 
"hgct_customText=https://...", e.g.:
http://genome.ucsc.edu/cgi-bin/hgTracks?db=hg19&position=chr8:128451376-128834335&hgct_customText=https://.../myTest.txt
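For context, here is roughly how we assemble these links (a minimal Python sketch; the host, path, and function name are illustrative placeholders, not our actual code):

```python
# Build an hgTracks link that points the browser at a remote
# custom-track text file via hgct_customText.
from urllib.parse import urlencode

HGTRACKS = "http://genome.ucsc.edu/cgi-bin/hgTracks"

def make_link(db, position, custom_text_url):
    """Return a clickable hgTracks URL for a remote custom-track file."""
    params = {
        "db": db,
        "position": position,
        "hgct_customText": custom_text_url,
    }
    # urlencode() percent-encodes the value, so the embedded
    # https:// URL survives inside the query string.
    return HGTRACKS + "?" + urlencode(params)

link = make_link("hg19", "chr8:128451376-128834335",
                 "https://ourserver.edu/tracks/myTest.txt")
print(link)
```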

For larger files (which appear to make the browser time out), the 
aforementioned text file instead references another text file containing URLs 
to each part of the wiggle track, split into separate gzipped pieces.
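Here is a rough sketch of the splitting step (Python; the file names, URLs, and chunk size are illustrative placeholders). We rotate to a new piece only at a variableStep/fixedStep declaration line, on the assumption that each piece must begin with its own declaration to parse standalone:

```python
# Split a plain-text wiggle file into gzipped pieces and write an
# "indirection" text file listing one piece URL per line.  Names and
# sizes here are placeholders, not our production values.
import gzip
import os

def split_wiggle(wig_path, out_dir, base_url, lines_per_piece=1_000_000):
    """Split wig_path into gzipped chunks; return the list of piece URLs.

    A new piece is started only at a variableStep/fixedStep declaration
    line, so every piece begins with its own declaration.
    """
    os.makedirs(out_dir, exist_ok=True)
    urls, out, count, piece = [], None, 0, 0
    with open(wig_path) as src:
        for line in src:
            rotate = out is None or (
                count >= lines_per_piece
                and line.startswith(("variableStep", "fixedStep"))
            )
            if rotate:
                if out:
                    out.close()
                piece += 1
                name = f"part{piece:03d}.wig.gz"
                out = gzip.open(os.path.join(out_dir, name), "wt")
                urls.append(f"{base_url}/{name}")
                count = 0
            out.write(line)
            count += 1
    if out:
        out.close()
    return urls

def write_index(urls, index_path):
    """Write the indirection file that the browser includes recursively."""
    with open(index_path, "w") as f:
        for url in urls:
            f.write(url + "\n")
```

The top-level text file then carries the track line followed by the URL of this index file.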

We have encountered two major issues:
    1. For every file that did not appear to time out the browser, this 
error was shown: 
    "Expecting two words for variableStep data at line 11848720, found 1"
    This is confusing because every variableStep line has exactly two 
words on it.

    2. All other files/data tested appeared to time out the Genome Browser: 
no error message is displayed, and the browser reaches 100% loaded but shows 
a blank page.
    Is a time-out what is happening here?
    

What method (indirection, splitting?) do you recommend for loading large (100+ 
MB gzipped) wiggle tracks from a remote server?
Is there any reliable way to sidestep what looks like a timeout?

-Thanks


On 06/09/11, Galt Barber wrote:
> 
> bigDataUrl is only for these types: bigWig, bigBed, bam, vcfData
> These are binary formats that usually incorporate compression
> internally for further savings and in some cases have been
> optimized for efficient random access and may have views
> at multiple zoom levels.
> 
> All other custom track types are just text.
> 
> If it sees a line beginning with a URL that starts with
> ftp://
> http://
> https://
> then it will read that URL, inserting its contents
> inline.  This works just like an #include <file>
> in C and other languages.  It is recursive, i.e.
> if the text contains another line starting with a URL,
> it will include those lines too, and so on.
> 
> If the URL extension is .gz, .zip, or .bz2, it will
> automatically decompress the incoming lines of text.
> 
> Note that the protocol file:// is not supported and wouldn't work anyway.
> 
> A trackline must be present for each track.
> 
> For bigDataUrl type tracks, one line defines the entire track.
> 
> For other text types, the track line may either precede a URL
> that is to be included, or because of how it works, there
> may be a track line in the included file itself.
> Whenever the system sees an expanded text line starting with "track "
> it knows that a new custom track definition is beginning.
> 
> For all recursive included url lines (not bigDataUrl),
> the resulting text is just as if all lines were expanded into memory
> and then processed.
> 
> Because of this, you can create hierarchies of URLs.
> You can create a set by making a text file that just includes
> the several tracks' urls or data.
> 
> Then you can create another text file that includes the
> urls of the sets you made.  Providing the url to that
> file will cause the system to recursively load an entire
> tree of custom tracks.
> 
> The contents of the custom track are passed as you know in
> hgct_customText, and must be url-encoded.
> 
> Because both the browser and our library can only
> handle a url that is of limited length (around 1500 bytes),
> you would not want to try to create massive input data
> passed directly in the hgct_customText.  Instead,
> put your text into a file and pass its url to hgct_customText.
> 
> -Galt
> 
> 6/9/2011 7:39 AM, Rebecca Hudson:
> >Our lab is trying to streamline the process of creating links that can be 
> >clicked directly rather than loaded in the Custom Track tool. (Links of the 
> >form:
> >http://genome.ucsc.edu/cgi-bin/hgTracks?db=hg18&position=chr21:33038447-33041505&hgct_customText=track%20type=bigBed%20name=myBigBedTrack%20description=%22a%20bigBed%20track%22%20visibility=full%20bigDataUrl=http://genome.ucsc.edu/goldenPath/help/examples/bigBedExample.bb)
> >
> >We've had success creating links for viewing bigWig tracks; but most
> >of our files are in .wig.gz format.
> >
> >When we attempt to visit this link (names removed):
> >http://genome.ucsc.edu/cgi-bin/hgTracks?db=hg19&position=chr8:128451376-128834335&hgct_customText=track%20type=wiggle_0%20name=%22trackName%22%20description=%22trackName%201,25%22%20color=233,43,75%20visibility=full%20horizGrid=on%20yLineOnOff=off%20autoScale=on%20alwaysZero=on%20graphType=points%20maxHeightPixels=22:33:44%20windowingFunction=mean+whiskers%20smoothingWindow=off%20priority=12%20viewLimits=0:50%20bigDataUrl=https://ourdomain.edu/wiggleData.wig.gz
> >
> >
> >This error is displayed at the top of the window.
> >     * track load error (track name='ct_trackName_5944'):
> >       ERROR: wigEncode: empty input file: 'stdin'
> >
> >We've had success loading the same file manually as a custom track; it's not 
> >empty.
> >
> >Is it possible to load a wig.gz file as a custom track using the bigDataUrl? 
> >What could we be missing?
> >
> >thanks!
> >
> >_______________________________________________
> >Genome maillist  -  [email protected]
> >https://lists.soe.ucsc.edu/mailman/listinfo/genome
> 
