If the data frames are outcomes of experiments (wet lab or computational), use ExperimentHub; if they are annotation resources, use AnnotationHub. Submission instructions for these Hub contributions are available in the ExperimentHub and AnnotationHub package vignettes.
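As a minimal sketch of what this looks like from the package side (the package name "myPackage" and the hub id "EH0000" below are placeholders, not real resources):

```r
## Hypothetical sketch: retrieving a precomputed data frame from
## ExperimentHub instead of shipping it in inst/extdata.
library(ExperimentHub)

eh <- ExperimentHub()

## Find resources contributed by the (hypothetical) package
res <- query(eh, "myPackage")

## Download (and locally cache) one resource by its hub id;
## subsequent calls reuse the cached copy.
df <- eh[["EH0000"]]
```

The first access downloads the data; after that the hub serves it from the local cache, so the source package itself stays small.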
I have a package I would like to submit to Bioconductor. Some functions of the package use precalculated data frames currently stored in the inst/extdata directory. These data frames push the source package past the 5 MB limit (it is ~25 MB). Are there any best practices or recommendations for handling data of this size?
Perhaps this helps explain things?

> library(rentrez)
> library(org.Hs.eg.db)
> z <- entrez_search("gtr", "muscle_weakness")$ids
> any(z %in% keys(org.Hs.eg.db))
[1] FALSE
> zz <- entrez_summary("gene", z)
> table(do.call(c, sapply(extract_from_esummary(zz, "organism"),
+     function(x) x$scientificname)))
As announced earlier, I wanted to follow up with instructions on how to join the challenge introductions, for developers interested in joining the sessions:
1. BioPlex challenge (Wed, Aug 4, 2:30-3:30 PM EST):
- log in to Airmeet
- take a seat at the "BioPlex" table in the "Lounge" area
- details: