[Rd] R-intro.texi patch

2016-11-17 Thread Evelyn Mitchell
Note that the R-intro.R file is already correct; only R-intro.texi needs this fix.

svn diff R-intro.texi
Index: R-intro.texi
===================================================================
--- R-intro.texi	(revision 71664)
+++ R-intro.texi	(working copy)
@@ -1525,7 +1525,7 @@
After this assignment, the standard errors are calculated by

@example
-> incster <- tapply(incomes, statef, stderr)
+> incster <- tapply(incomes, statef, stdError)
@end example

@noindent
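
For context, a minimal sketch of the R-intro example this hunk belongs to;
the sample values below are illustrative placeholders, and the stdError
definition is the one given earlier in that section of the manual:

  stdError <- function(x) sqrt(var(x)/length(x))  # as defined in R-intro
  incomes  <- c(60, 49, 40, 61, 64, 60, 59, 54)   # placeholder values
  statef   <- factor(c("tas", "sa", "qld", "nsw", "nsw", "tas", "sa", "qld"))
  incster  <- tapply(incomes, statef, stdError)   # standard error per state
  incster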



Evelyn Mitchell



Re: [Rd] problem with normalizePath()

2016-11-17 Thread Evan Cortens
I wonder if this could be related to the issue that I submitted to bugzilla
about two months ago? (
https://bugs.r-project.org/bugzilla3/show_bug.cgi?id=17159)

That is to say, could it be that it's treating the first path component
after the leading backslashes as an actual directory, rather than as the
name of the share?
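
If it helps narrow this down, here is a small check one could run on an
affected machine; the server and share names below are placeholders, not a
real location:

  # Compare how normalizePath() treats forward-slash vs. backslash UNC forms.
  p_fwd <- "//server/share/folder/file.xls"
  p_bwd <- "\\\\server\\share\\folder\\file.xls"
  normalizePath(p_fwd, winslash = "\\", mustWork = FALSE)
  normalizePath(p_bwd, winslash = "\\", mustWork = FALSE)
  file.exists(c(p_fwd, p_bwd))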

-- 
Evan Cortens, PhD
Institutional Analyst - Office of Institutional Analysis
Mount Royal University
403-440-6529

On Thu, Nov 17, 2016 at 2:28 PM, Laviolette, Michael <
michael.laviole...@dhhs.nh.gov> wrote:

> The packages "readxl" and "haven" (and possibly others) no longer access
> files on shared network drives. The problem appears to be in the
> normalizePath() function. The file can be read from a local drive or by
> functions that don't call normalizePath(). The error thrown is
>
> Error: path[1]="\\Hzndhhsvf2/data/OCPH/EPI/BHSDM/Group/17.xls": The
> system cannot find the file specified
>
> Here's my session:
>
> library(readxl)
> library(XLConnect)
>
> # attempting to read file from network drive
> df1 <- read_excel("//Hzndhhsvf2/data/OCPH/EPI/BHSDM/Group/17.xls")
> # pathname is fully qualified, but error thrown as above
>
> cat(normalizePath("//Hzndhhsvf2/data/OCPH/EPI/BHSDM/Group/17.xls"))
> # throws same error
>
> # reading same file with different function
> df2 <- readWorksheetFromFile("//Hzndhhsvf2/data/OCPH/EPI/BHSDM/Group/17.xls",
> 1)
> # completes successfully
>
> # reading same file from local drive
> df3 <- read_excel("C:/17.xls")
> # completes successfully
>
> sessionInfo()
> R version 3.3.2 (2016-10-31)
> Platform: x86_64-w64-mingw32/x64 (64-bit)
> Running under: Windows 7 x64 (build 7601) Service Pack 1
>
> locale:
> [1] LC_COLLATE=English_United States.1252  LC_CTYPE=English_United
> States.1252
> [3] LC_MONETARY=English_United States.1252 LC_NUMERIC=C
> [5] LC_TIME=English_United States.1252
>
> attached base packages:
> [1] stats graphics  grDevices utils datasets  methods   base
>
> other attached packages:
> [1] readxl_0.1.1         dplyr_0.5.0          XLConnect_0.2-12
> [4] XLConnectJars_0.2-12 ROracle_1.2-1        DBI_0.5-1
>
> loaded via a namespace (and not attached):
> [1] magrittr_1.5   R6_2.2.0       assertthat_0.1 tools_3.3.2    haven_1.0.0
> [6] tibble_1.2     Rcpp_0.12.7    rJava_0.9-8
>
> Please advise.
> Thanks,
>
> Michael Laviolette PhD MPH
> Public Health Statistician
> Bureau of Public Health Statistics and Informatics
> New Hampshire Division of Public Health Services
> 29 Hazen Drive
> Concord, NH 03301-6504
> Phone: 603-271-5688
> Fax: 603-271-7623
> Email: michael.laviole...@dhhs.nh.gov


[Bioc-devel] SRAdb::sraConvert returns results in arbitrary order

2016-11-17 Thread Ryan C. Thompson

Hello,

I was recently bitten by an unexpected behavior in the sraConvert
function from the SRAdb package. I wanted to fetch the other SRA IDs
associated with the SRX numbers of 32 samples, and I used sraConvert
to do so. However, I did not realize that sraConvert returns the
results in arbitrary order rather than in the same order as the input,
so I just used cbind to add these IDs to my existing sample table. This
effectively shuffled my samples, and I did not notice until far
downstream, when I started making PCA plots and the clustering made no
sense.
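
For anyone hitting the same thing, here is the defensive pattern that would
have avoided the shuffle. It is only a sketch: srx_ids, sra_con and
sample_table stand in for objects from my session, and the "experiment"
column name is an assumption about sraConvert's output that should be
checked against the actual result.

  # Re-align sraConvert() output to the input order before cbind()-ing.
  conv <- sraConvert(srx_ids, sra_con = sra_con)
  conv <- conv[match(srx_ids, conv$experiment), ]  # "experiment" = SRX column
  stopifnot(identical(conv$experiment, srx_ids))   # fail loudly on mismatch
  sample_table <- cbind(sample_table, conv)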


I leave it up to the developers of the SRAdb package to decide whether 
or not this is a bug, but I think it should at least be documented that 
the sort order of the output of sraConvert is arbitrary and will not 
necessarily match the input.


-Ryan



Re: [Rd] new function to tools/utils package: dependencies based on DESCRIPTION file

2016-11-17 Thread Michael Lawrence
Hi Jan,

Thanks for volunteering. You, me, Duncan Murdoch (if interested), and
anyone else who is interested should set up an informal chat. We need
to ensure that the API is right and that it fits in well with other
ongoing efforts.

Michael

On Thu, Nov 17, 2016 at 1:40 PM, Jan Gorecki  wrote:
> Hi Michael,
> Are you willing to accept a patch for this? I've been using this and a
> few related functions for a while, and it works well. I could wrap it
> as a patch to utils or tools?
> Best,
> Jan
>
> On 16 June 2016 at 14:00, Michael Lawrence  wrote:
>> I agree that the utils package needs some improvements related to
>> this, and hope to make them eventually. This type of feedback is very
>> helpful.
>>
>> Thanks,
>> Michael
>>
>>
>>
>> On Thu, Jun 16, 2016 at 1:42 AM, Jan Górecki  wrote:
>>> Dear Joris,
>>>
>>> So it does look like the proposed function makes a lot of sense then,
>>> doesn't it?
>>>
>>> Cheers,
>>> Jan
>>>
>>> On 16 June 2016 at 08:37, Joris Meys  wrote:
 Dear Jan,

 It is unavoidable for devtools to have OS and R dependencies. The building
 process for packages is both OS- and R-dependent, so, as I understand it,
 devtools has to be too.

 Cheers
 Joris

 On 14 Jun 2016 18:56, "Jan Górecki"  wrote:

 Hi Thierry,

 I'm perfectly aware of it. Any idea when devtools will be shipped as a
 base R package, or at least a recommended package? That would actually
 answer the problem described in my email.
 I have a range of useful functions available in the tools/utils packages,
 which are shipped together with R. They don't require any OS or R
 dependencies, unlike devtools, which requires both. Installing unnecessary
 OS and R dependencies just for such a simple wrapper doesn't seem an
 elegant way to address this, hence my proposal to include this simple
 function in the tools or utils package.

 Regards,
 Jan Gorecki

 On 14 June 2016 at 16:17, Thierry Onkelinx  
 wrote:
> Dear Jan,
>
> Similar functionality is available in devtools::dev_package_deps()
>
> Best regards,
>
> ir. Thierry Onkelinx
> Instituut voor natuur- en bosonderzoek / Research Institute for Nature and
> Forest
> team Biometrie & Kwaliteitszorg / team Biometrics & Quality Assurance
> Kliniekstraat 25
> 1070 Anderlecht
> Belgium
>
> To call in the statistician after the experiment is done may be no more
> than
> asking him to perform a post-mortem examination: he may be able to say
> what
> the experiment died of. ~ Sir Ronald Aylmer Fisher
> The plural of anecdote is not data. ~ Roger Brinner
> The combination of some data and an aching desire for an answer does not
> ensure that a reasonable answer can be extracted from a given body of
> data.
> ~ John Tukey
>
> 2016-06-14 16:54 GMT+02:00 Jan Górecki :
>>
>> Hi all,
>>
>> Packages tools and utils have a lot of useful stuff for R developers.
>> I find one task still not as straightforward as it could be: simply
>> extracting the dependencies of a package from its DESCRIPTION file
>> (before it is even installed to a library). This would be valuable for
>> automating CI setup in a more metadata-driven way.
>> The function below is short and simple, but having to define it in
>> each CI workflow is a pain; it could already be available in the tools
>> or utils namespace.
>>
>> package.dependencies.dcf <- function(file = "DESCRIPTION", which =
>> c("Depends","Imports","LinkingTo")) {
>> stopifnot(file.exists(file), is.character(which))
>> which_all <- c("Depends", "Imports", "LinkingTo", "Suggests",
>> "Enhances")
>> if (identical(which, "all"))
>> which <- which_all
>> else if (identical(which, "most"))
>> which <- c("Depends", "Imports", "LinkingTo", "Suggests")
>> stopifnot(which %in% which_all)
>> dcf <- read.dcf(file, which)
>> # parse fields
>> raw.deps <- unlist(strsplit(dcf[!is.na(dcf)], ",", fixed = TRUE))
>> # strip stated dependency version
>> deps <- trimws(sapply(strsplit(trimws(raw.deps), "(", fixed =
>> TRUE), `[[`, 1L))
>> # exclude base R pkgs
>> base.pkgs <- c("R", rownames(installed.packages(priority = "base")))
>> setdiff(deps, base.pkgs)
>> }
>>
>> This makes it easy to install all package dependencies based only on
>> the DESCRIPTION file, simplifying custom CI workflows to:
>>
>> if (length(pkgs <- package.dependencies.dcf(which = "all")))
>>     install.packages(pkgs)
>>
>> And it would not require installing custom packages or shell scripts.
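
For reference, a quick usage sketch of the proposed helper against a made-up
DESCRIPTION; the file contents and the expected output are illustrative only:

  dir <- tempfile(); dir.create(dir)
  writeLines(c("Package: example",
               "Depends: R (>= 3.2.0), methods",
               "Imports: data.table (>= 1.9.6), jsonlite",
               "Suggests: testthat"),
             file.path(dir, "DESCRIPTION"))
  package.dependencies.dcf(file.path(dir, "DESCRIPTION"), which = "all")
  # Expected (roughly): "data.table" "jsonlite" "testthat"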

Re: [Rd] new function to tools/utils package: dependencies based on DESCRIPTION file

2016-11-17 Thread Jan Gorecki
Hi Michael,
Are you willing to accept a patch for this? I've been using this and a
few related functions for a while, and it works well. I could wrap it
as a patch to utils or tools?
Best,
Jan

On 16 June 2016 at 14:00, Michael Lawrence  wrote:
> I agree that the utils package needs some improvements related to
> this, and hope to make them eventually. This type of feedback is very
> helpful.
>
> Thanks,
> Michael
>
>
>
> On Thu, Jun 16, 2016 at 1:42 AM, Jan Górecki  wrote:
>> Dear Joris,
>>
>> So it does look like the proposed function makes a lot of sense then, doesn't it?
>>
>> Cheers,
>> Jan
>>
>> On 16 June 2016 at 08:37, Joris Meys  wrote:
>>> Dear Jan,
>>>
>>> It is unavoidable for devtools to have OS and R dependencies. The building
>>> process for packages is both OS- and R-dependent, so, as I understand it,
>>> devtools has to be too.
>>>
>>> Cheers
>>> Joris
>>>
>>> On 14 Jun 2016 18:56, "Jan Górecki"  wrote:
>>>
>>> Hi Thierry,
>>>
>>> I'm perfectly aware of it. Any idea when devtools will be shipped as a
>>> base R package, or at least a recommended package? That would actually
>>> answer the problem described in my email.
>>> I have a range of useful functions available in the tools/utils packages,
>>> which are shipped together with R. They don't require any OS or R
>>> dependencies, unlike devtools, which requires both. Installing unnecessary
>>> OS and R dependencies just for such a simple wrapper doesn't seem an
>>> elegant way to address this, hence my proposal to include this simple
>>> function in the tools or utils package.
>>>
>>> Regards,
>>> Jan Gorecki
>>>
>>> On 14 June 2016 at 16:17, Thierry Onkelinx  wrote:
 Dear Jan,

 Similar functionality is available in devtools::dev_package_deps()

 Best regards,

 ir. Thierry Onkelinx
 Instituut voor natuur- en bosonderzoek / Research Institute for Nature and
 Forest
 team Biometrie & Kwaliteitszorg / team Biometrics & Quality Assurance
 Kliniekstraat 25
 1070 Anderlecht
 Belgium

 To call in the statistician after the experiment is done may be no more
 than
 asking him to perform a post-mortem examination: he may be able to say
 what
 the experiment died of. ~ Sir Ronald Aylmer Fisher
 The plural of anecdote is not data. ~ Roger Brinner
 The combination of some data and an aching desire for an answer does not
 ensure that a reasonable answer can be extracted from a given body of
 data.
 ~ John Tukey

 2016-06-14 16:54 GMT+02:00 Jan Górecki :
>
> Hi all,
>
> Packages tools and utils have a lot of useful stuff for R developers.
> I find one task still not as straightforward as it could be: simply
> extracting the dependencies of a package from its DESCRIPTION file
> (before it is even installed to a library). This would be valuable for
> automating CI setup in a more metadata-driven way.
> The function below is short and simple, but having to define it in
> each CI workflow is a pain; it could already be available in the tools
> or utils namespace.
>
> package.dependencies.dcf <- function(file = "DESCRIPTION", which =
> c("Depends","Imports","LinkingTo")) {
> stopifnot(file.exists(file), is.character(which))
> which_all <- c("Depends", "Imports", "LinkingTo", "Suggests",
> "Enhances")
> if (identical(which, "all"))
> which <- which_all
> else if (identical(which, "most"))
> which <- c("Depends", "Imports", "LinkingTo", "Suggests")
> stopifnot(which %in% which_all)
> dcf <- read.dcf(file, which)
> # parse fields
> raw.deps <- unlist(strsplit(dcf[!is.na(dcf)], ",", fixed = TRUE))
> # strip stated dependency version
> deps <- trimws(sapply(strsplit(trimws(raw.deps), "(", fixed =
> TRUE), `[[`, 1L))
> # exclude base R pkgs
> base.pkgs <- c("R", rownames(installed.packages(priority = "base")))
> setdiff(deps, base.pkgs)
> }
>
> This makes it easy to install all package dependencies based only on
> the DESCRIPTION file, simplifying custom CI workflows to:
>
> if (length(pkgs <- package.dependencies.dcf(which = "all")))
>     install.packages(pkgs)
>
> And it would not require installing custom packages or shell scripts.
>
> Regards,
> Jan Gorecki
>

[Rd] problem with normalizePath()

2016-11-17 Thread Laviolette, Michael
The packages "readxl" and "haven" (and possibly others) no longer access files 
on shared network drives. The problem appears to be in the normalizePath() 
function. The file can be read from a local drive or by functions that don't 
call normalizePath(). The error thrown is

Error: path[1]="\\Hzndhhsvf2/data/OCPH/EPI/BHSDM/Group/17.xls": The system 
cannot find the file specified

Here's my session:

library(readxl)
library(XLConnect)

# attempting to read file from network drive
df1 <- read_excel("//Hzndhhsvf2/data/OCPH/EPI/BHSDM/Group/17.xls")
# pathname is fully qualified, but error thrown as above

cat(normalizePath("//Hzndhhsvf2/data/OCPH/EPI/BHSDM/Group/17.xls"))
# throws same error

# reading same file with different function
df2 <- readWorksheetFromFile("//Hzndhhsvf2/data/OCPH/EPI/BHSDM/Group/17.xls", 1)
# completes successfully

# reading same file from local drive
df3 <- read_excel("C:/17.xls")
# completes successfully

sessionInfo()
R version 3.3.2 (2016-10-31)
Platform: x86_64-w64-mingw32/x64 (64-bit)
Running under: Windows 7 x64 (build 7601) Service Pack 1

locale:
[1] LC_COLLATE=English_United States.1252  LC_CTYPE=English_United States.1252
[3] LC_MONETARY=English_United States.1252 LC_NUMERIC=C
[5] LC_TIME=English_United States.1252

attached base packages:
[1] stats graphics  grDevices utils datasets  methods   base

other attached packages:
[1] readxl_0.1.1         dplyr_0.5.0          XLConnect_0.2-12
[4] XLConnectJars_0.2-12 ROracle_1.2-1        DBI_0.5-1

loaded via a namespace (and not attached):
[1] magrittr_1.5   R6_2.2.0       assertthat_0.1 tools_3.3.2    haven_1.0.0
[6] tibble_1.2     Rcpp_0.12.7    rJava_0.9-8

Please advise.
Thanks,

Michael Laviolette PhD MPH
Public Health Statistician
Bureau of Public Health Statistics and Informatics
New Hampshire Division of Public Health Services
29 Hazen Drive
Concord, NH 03301-6504
Phone: 603-271-5688
Fax: 603-271-7623
Email: michael.laviole...@dhhs.nh.gov





Re: [Bioc-devel] adding new features to current release branch

2016-11-17 Thread Kasper Daniel Hansen
No, you should not add new functions to the release branch. The
documentation is pretty clear. The only things you should change are bug
fixes (and perhaps documentation improvements, which can be thought of as
bug fixes).

Kasper

On Thu, Nov 17, 2016 at 4:13 AM, Anand MT  wrote:

> Hi Mods,
>
>
> My package maftools is now in the release branch. Since the release, I
> haven't made any bug fixes or changes of any sort. Now I have a few bug
> fixes and new functions that I want to add, as they are helpful to users.
> Since it's mentioned that only bug fixes should be ported to release, my
> question is whether to add these new functions to devel, or whether it is
> still acceptable to port them to the current release branch?
>
>
> Thanks,
>
> -Anand.


Re: [Bioc-devel] adding new features to current release branch

2016-11-17 Thread Martin Morgan

On 11/17/2016 04:13 AM, Anand MT wrote:

Hi Mods,


My package maftools is now in the release branch. Since the release, I
haven't made any bug fixes or changes of any sort. Now I have a few bug
fixes and new functions that I want to add, as they are helpful to users.
Since it's mentioned that only bug fixes should be ported to release, my
question is whether to add these new functions to devel, or whether it is
still acceptable to port them to the current release branch?


Add the bug fixes and new functions to devel. Make sure that these are
OK in the nightly builds. Port the bug fixes to release. Be careful not
to let new features or 'API' changes creep into release -- release users
get the current functions and a constant interface, so that they can
carry on using your package in a stable environment.


Ideally, the commits to devel are separated so that it is easy to port 
the bug fixes, e.g., using the svn merge command.


Martin




Thanks,

-Anand.



Re: [Bioc-devel] vmatchPattern Returns Out of Bounds Indices

2016-11-17 Thread Hervé Pagès

Hi,

These questions really belong to the support site.

On 11/16/2016 04:00 PM, Dario Strbenac wrote:

Hello,

If using vmatchPattern to find a sequence in another sequence, the resulting 
end index can be beyond the length of the subject XStringSet. For example:

forwardPrimer <- "TCTTGTGGAAAGGACGAAACACCG"

range(width(reads))

[1] 75 75
primerEnds <- vmatchPattern(forwardPrimer, reads, max.mismatch = 1)

range(unlist(endIndex(primerEnds)))

[1] 23 76

This causes problems if using extractAt to obtain the sequences within each 
read.


That is the expected behavior for matchPattern() and family when
max.mismatch != 0. This was a conscious choice. If you don't
allow indels, the matches will always have the length of the pattern.
This allows straightforward one-to-one mapping between the letters
in the pattern and their positions on the subject, which is a nice
property. For example, if the pattern is ACGT but the reported match
were the range 1-3, you would have no easy way of knowing how the 4
nucleotides in the pattern map onto those 3 positions in the subject.

Truncating the match when it goes out of bounds actually means
clipping the pattern. You'll get that behavior by allowing indels:

  > matchPattern("AAACCTT", "AAACCT")
Views on a 14-letter BString subject
  subject: AAACCT
  views: NONE

  > matchPattern("AAACCTT", "AAACCT", max.mismatch=1)
Views on a 14-letter BString subject
  subject: AAACCT
  views:
      start end width
  [1]     9  15     7 [AAACCT ]

  > matchPattern("AAACCTT", "AAACCT", max.mismatch=1, 
with.indels=TRUE)

Views on a 14-letter BString subject
  subject: AAACCT
  views:
      start end width
  [1]     9  14     6 [AAACCT]

But then of course you lose the one-to-one mapping between the
letters in the pattern and their positions on the subject.

The two other alternatives are (1) to get rid of the out-of-bounds
matches or (2) to trim them. For example, to trim them:

  subjects <- DNAStringSet(c("AACCTT", "AAACCT"))
  m <- vmatchPattern("AAACCTT", subjects, max.mismatch=1)
  m
  # MIndex object of length 2
  # [[1]]
  # IRanges object with 1 range and 0 metadata columns:
  #           start       end     width
  #       <integer> <integer> <integer>
  #   [1]         0         6         7
  #
  # [[2]]
  # IRanges object with 1 range and 0 metadata columns:
  #           start       end     width
  #       <integer> <integer> <integer>
  #   [1]         9        15         7

  m2 <- IRangesList(start=pmax(start(m), 1),
                    end=pmin(end(m), width(subjects)))
  m2
  # IRangesList of length 2
  # [[1]]
  # IRanges object with 1 range and 0 metadata columns:
  #           start       end     width
  #       <integer> <integer> <integer>
  #   [1]         1         6         6
  #
  # [[2]]
  # IRanges object with 1 range and 0 metadata columns:
  #           start       end     width
  #       <integer> <integer> <integer>
  #   [1]         9        14         6

Then:

  extractAt(subjects, m2)
  # DNAStringSetList of length 2
  # [[1]] AACCTT
  # [[2]] AAACCT
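
And, for completeness, a sketch of alternative (1) above, i.e. dropping the
out-of-bounds matches instead of trimming them (same 'm' and 'subjects' as
above; the Map()-based reshaping is just one way to do it):

  keep <- Map(function(ir, w) start(ir) >= 1L & end(ir) <= w,
              as.list(m), width(subjects))
  m1 <- do.call(IRangesList, Map(`[`, as.list(m), keep))
  extractAt(subjects, m1)
  # Here both matches run off the ends of their subjects, so both list
  # elements come back empty -- which is why trimming is shown above.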

H.


For example:

sequences = extractAt(reads, locations)
Error in .normarg_at2(at, x) :
  some ranges in 'at' are off-limits with respect to their corresponding 
sequence
  in 'x'

It's rare, but still a problem nonetheless.


table(unlist(endIndex(primerLocations)) > 75)

 FALSE   TRUE
366225      2

This happens with Biostrings 2.42.0.

--
Dario Strbenac
University of Sydney
Camperdown NSW 2050
Australia




--
Hervé Pagès

Program in Computational Biology
Division of Public Health Sciences
Fred Hutchinson Cancer Research Center
1100 Fairview Ave. N, M1-B514
P.O. Box 19024
Seattle, WA 98109-1024

E-mail: hpa...@fredhutch.org
Phone:  (206) 667-5791
Fax:(206) 667-1319



Re: [Bioc-devel] OrganismDb package for Drosophila.melanogaster

2016-11-17 Thread Pariksheet Nanda
On Tue, Nov 15, 2016 at 7:34 PM, Martin Morgan wrote:
> On 11/15/2016 09:52 AM, Obenchain, Valerie wrote:
>> On 11/15/2016 03:32 AM, Pariksheet Nanda wrote:
>>>
>>> It would be great to have an OrganismDb package for
>>> Drosophila.melanogaster, similar to Homo.sapiens, Mus.musculus and
>>> Rattus.norvegicus.
--snip--
>>> In other words, like Rattus.norvegicus, it might be good to add a UCSC
>>> "refGene" TxDb package for dm6, as "ensGene" doesn't appear to be as good
>>> a candidate (at least without some ugliness)? I was looking at creating a
>>> dm6 UCSC "refGene" TxDb.
>>
>> You can use GenomicFeatures::makeTxDbFromUCSC() to create the TxDb. The
>> man page, ?makeTxDbFromUCSC, also has helper functions that display
>> available genomes, tables and tracks.
>
> I'm not completely sure of the result, but
>
> library(OrganismDb)
> odb = makeOrganismDbFromUCSC("dm6", tableName="refGene")
>
> might be most of the way there?

Thanks Valerie and Martin for pointing out the make*() functions!

As my lab uses the same UCSC tables frequently, I used the
make*Package() functions (namely,
GenomicFeatures::makeTxDbPackageFromUCSC and
OrganismDbi::makeOrganismPackage).

For others who run OrganismDbi::makeOrganismPackage, don't forget
to edit the generated package's DESCRIPTION file and add your new TxDb
package to "Depends".


>> Valerie

> Martin

Pariksheet
