On Sep 28, 2012, at 12:44 PM, Jonathan Greenberg wrote:
> Rui:
>
> Quick follow-up -- it looks like seek does do what I want (I see Simon
> suggested it some time ago) -- what do you mean by "trash your disk"?
I can't speak for Rui, but the difference between seeking and explicit write is
that t…
Hello,
I've written a function to try to answer your original request, but I've
run into a problem; see the end.
In the meantime, replies are inline.
On 28-09-2012 17:44, Jonathan Greenberg wrote:
> Rui:
> Quick follow-up -- it looks like seek does do what I want (I see Simon
> suggested it some time ago)…
Rui:
Quick follow-up -- it looks like seek does do what I want (I see Simon
suggested it some time ago) -- what do you mean by "trash your disk"? What I'm
trying to accomplish is getting parallel, asynchronous writes to a large
binary image (just a binary file) working. Each node writes to a different…
Jonathan,
ff has a utility function file.resize() which lets you set a new file size
in bytes, given as a double.
See ?file.resize
Regards
Jens Oehlschlägel
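A minimal sketch of that approach (file name and target size are placeholders,
assuming the signature file.resize(path, size)):

library(ff)
fname <- "big.bin"            # hypothetical file name
file.create(fname)            # start from an empty file
file.resize(fname, 8e9)       # request ~8 GB; the size is passed as a double
file.info(fname)$size         # reports 8e9 even if no blocks were written yet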
Sent: Thursday, 27 September 2012 at 21:17
From: "Jonathan Greenberg"
To: r-help , r-sig-...@r-project.org
Hello,
If you really need to trash your disk, why not use seek()?
> fl <- file("Test.txt", open = "wb")
> seek(fl, where = 1024, origin = "start", rw = "write")
[1] 0
> writeChar(character(1), fl, nchars = 1, useBytes = TRUE)
Warning message:
In writeChar(character(1), fl, nchars = 1, useBytes = TRUE) :
  writeChar: more characters requested than are in the string - will zero-pad
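A variant of the same idea that sidesteps the zero-pad warning is to write a
single raw NUL byte instead (file name and offset are arbitrary):

fl <- file("Test.txt", open = "wb")
seek(fl, where = 1024 - 1, origin = "start", rw = "write")  # leave a 1023-byte hole
writeBin(raw(1), fl)             # one zero byte, so the file ends at 1024 bytes
close(fl)
file.info("Test.txt")$size       # 1024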
Folks:
I asked this question some time ago and found what appeared (at first) to be
the best solution, but I'm now running into a new problem. First off, it seemed
like ff, as Jens suggested, worked:
# outdata_ncells = the number of rows * number of columns * number of bands
# in an image:
out <- ff(vmode = "…
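The call was cut off above; a hedged reconstruction, with invented dimensions
and file name, might look like:

library(ff)
nrows <- 1e4; ncols <- 1e4; nbands <- 3   # hypothetical image dimensions
outdata_ncells <- nrows * ncols * nbands
out <- ff(vmode = "double", length = outdata_ncells,
          filename = "out.bin")           # creates and reserves the backing file
close(out)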
Thanks, all! I'll try these out. I'm trying to work up something that is
platform-independent (if possible) for use with mmap. I'll do some tests
on these suggestions and see which works best. I'll try to report back in a
few days. Cheers!
--j
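For the mmap side, a rough sketch assuming the CRAN 'mmap' package and an
already-sized file named zeros.bin (file name and indices are placeholders):

library(mmap)
m <- mmap("zeros.bin", mode = real64())  # map the file as a vector of doubles
m[1:10] <- pi                            # writes go straight to the file
m[1:10]                                  # read back
munmap(m)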
2012/5/3 "Jens Oehlschlägel"
> Jonathan,
Jonathan,
On some filesystems (e.g. NTFS, see below) it is possible to create 'sparse'
memory-mapped files, i.e. reserving the space without the cost of actually
writing initial values.
Package 'ff' does this automatically and also allows the file to be accessed
in parallel. Check t…
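As a sketch of that parallel pattern (file name, length, block size, and
worker index are all made up), each node can attach the same backing file and
write only its own slice:

library(ff)
# every worker opens the same backing file; length and vmode must match
# (see ?ff for the overwrite/readonly arguments)
out <- ff(vmode = "double", length = 1e8, filename = "image.bin")
block <- 1e6
i <- 7                                        # this worker's block index
out[((i - 1) * block + 1):(i * block)] <- 0   # touch only this worker's slice
close(out)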
Jonathan,
10,000 numbers is pretty small, so I don't think time will be a
big problem; you could write this using writeBin with no trouble. For
larger files, why not just use a loop? The writing is pretty fast, so I
don't think you'll hit any bottleneck.
On my machine:
> ptm <- proc.time()
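For instance, a chunked loop of that sort, timed the way the snippet above
presumably was (chunk size and file name are arbitrary):

ptm <- proc.time()
con <- file("zeros.bin", open = "wb")
chunk <- raw(1e6)                  # 1 MB of zero bytes
for (i in seq_len(1000)) {         # ~1 GB in total
  writeBin(chunk, con)
}
close(con)
proc.time() - ptm                  # elapsed time for the whole write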
On most UNIX systems this will leave a large unallocated virtual "hole" in the
file. If you are not bothered by spreading the allocation task out over the
program execution interval, this won't matter and will probably give the best
performance. However, if you wanted to benchmark your algorithm…
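One way to see such a hole (on a *nix system; the file name refers back to the
earlier examples) is to compare the reported length with the blocks actually
allocated:

file.info("big.bin")$size    # logical length the filesystem reports
system("ls -l big.bin")      # shows the same logical length
system("du -k big.bin")      # kilobytes actually allocated; far smaller if sparse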
On May 2, 2012, at 6:23 PM, Jonathan Greenberg wrote:
> R-helpers:
>
> What would be the absolute fastest way to make a large "empty" file (e.g.
> filled with all zeroes) on disk, given a byte size and a given number of
> empty values? I know I can use writeBin, but the "object" in
> thi…
Something like:
http://markus.revti.com/2007/06/creating-empty-file-with-specified-size/
is one way I know of.
Jeff
Jeffrey Ryan|Founder|jeffrey.r...@lemnica.com
www.lemnica.com
On May 2, 2012, at 5:23 PM, Jonathan Greenberg wrote:
> R-helpers:
>
> What would be the absolute…
Look at the man page for dd (assuming you are on *nix).
A quick Google search will get you a command to try. I'm not at my desk or I
would look one up as well.
Jeff
Jeffrey Ryan|Founder|jeffrey.r...@lemnica.com
www.lemnica.com
On May 2, 2012, at 5:23 PM, Jonathan Greenberg wrote:
> R-helpers:
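For reference, a common dd incantation (GNU dd assumed; file name and size are
placeholders), wrapped in system() so it can be run from R:

# seek past the end without writing data: sparse on most *nix filesystems
system("dd if=/dev/zero of=big.bin bs=1 count=0 seek=1G")
# or actually write the zeros (slower, but the blocks are really allocated)
system("dd if=/dev/zero of=big.bin bs=1M count=1024")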