Hello Jin,

100 million records is significantly larger than our largest table, so
you are probably running out of memory: hgLoadBed reads all of the BED
input into memory (even with -noSort). If there is any way you can
partition the data into a few subtracks of a composite track and load
them separately, that would reduce the resource requirements.
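
For example, here is a rough sketch of one way to do that, splitting the
bedGraph file by chromosome (column 1) and loading each piece into its
own subtrack table (the "mytrack.*" file and table names below are only
placeholders):

  # write each line to a per-chromosome file, e.g. mytrack.chr1.bedGraph
  awk '{ print > ("mytrack." $1 ".bedGraph") }' mytrack.wig

  # then load each file into its own subtrack table of a composite track
  hgLoadBed -bedGraph=4 hg18 mytrackChr1 mytrack.chr1.bedGraph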

Best regards,

Pauline Fujita

UCSC Genome Bioinformatics Group
http://genome.ucsc.edu



Ma, Jin wrote:
> Dear browser colleagues,
>
> I was trying to load a wiggle track of more than 100 million records
> with the following command, and the process either got killed or ran
> into out-of-memory errors. Did I use the right command, or was that just
> a resource limitation of my user profile on our system? Is there a
> better way of loading it? Thanks.
>
> hgLoadBed -bedGraph=4 hg18 mytrack mytrack.wig
>
> Jin
>

_______________________________________________
Genome maillist  -  [email protected]
http://www.soe.ucsc.edu/mailman/listinfo/genome
