Feh.. I've gotta lay off the crackpipe.. let me put up a disclaimer for
anybody ever thinking of listening to my ramblings in the future..
apparently it's all pretty much bullshit. :)
The filesystem I'm thinking of is not 'extent-based'. I'm done with
this thread now... lucky for everybody else, before I try to convince
them that there's a bomb which will explode all the photons in the sun
and destroy the universe... (I think that's a 'Plan Nine' reference... but
at this point, who knows...)
:)
-Brian
Brian Chrisman wrote:
James Washer wrote:
whoops, my first response went direct to Brian, so I'll copy it here:
debugfs? How's that going to help if, after chopping off the first 300
lines, the data is not block-aligned? You can debugfs all night long
and you'll not get around that problem. IMPOSSIBLE, I say!
Heh.. annoyingly enough yer right... for ext* at least... :)
Damn hard-coded block sizes... an extent-based filesystem would manage
this much better though. It always boggles me how ancient our most
popular Linux fs really is underneath. :-) Can't argue with its
stability and general performance though.. :-)
To fix this in ext*, you'd really have to write fs code...
basically adding a call to store an 'offset' and have the file pointer
positioned there as 'zero' whenever the file is opened in the future. What a mess.
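As it happens, that's roughly the call extent-based filesystems eventually grew: ext4 and XFS expose a 'collapse range' operation through fallocate, though the range still has to be block-aligned, so Jim's alignment objection stands. A minimal sketch, assuming a util-linux fallocate(1) new enough to have --collapse-range and a 4096-byte filesystem block size (filename borrowed from Anna's example below):
~$ BYTES=$(head -300 nameofbigfile.txt | wc -c)
# collapse-range only accepts block-aligned offsets/lengths,
# so round down to a 4 KiB multiple first
~$ ALIGNED=$(( BYTES / 4096 * 4096 ))
~$ fallocate --collapse-range --offset 0 --length $ALIGNED nameofbigfile.txt
# the leftover (BYTES - ALIGNED) bytes at the front still have to be
# shifted out by hand, which is exactly the alignment problem above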
Kind of reminds me of a certain filesystem vendor I used to work
for... we had a customer who ended up corrupting their filesystem by
taking an Oracle dump on a Solaris box. Turned out the dump was >
2 GB, and the Solaris core-dump routine used a 'signed int' for the
file offset. So the core dump started at offset zero, went up to 2 GB,
then wrapped around to -2 GB and kept writing the damn dump all
over... well... vital structural stuff that wasn't there anymore
afterwards... :-)
feh... glad those horrible days of OS-level coding are over... it's
like programming with your hands tied behind your back... :)
- jim
On Wed, 28 Jun 2006 18:41:44 -0700
Brian Chrisman <[EMAIL PROTECTED]> wrote:
Hey.. not 'impossible'.. just *really annoying*... i.e., firing up
debugfs or whatever they're using these days to manipulate
filesystem internals. :)
Sorry, but 'IMPOSSIBLE' in all caps is just begging for
technicalities.. :-)
-Brian
James Washer wrote:
If he wanted to lop off the first xxx bytes without "touching" the
remainder of the file... then it's IMPOSSIBLE under Linux/Unix.
You mention flat-file database... if that's the case, I hope he
can stop updates while removing the first 300 lines... else the
problem is just a bit harder (in fact, it can become impossible if
his database engine does not use some form of file locking).
- jim
On Wed, 28 Jun 2006 17:50:14 -0700 (PDT)
Sebastian Smith <[EMAIL PROTECTED]> wrote:
I think what Grant meant by "in place" was that he didn't want to
read the entire file, just lop off the first 300 lines -- perhaps
he can correct me. I'm guessing storage space isn't an issue.
Regardless, I don't know of a way to do that.
You really need to get away from those flat-file databases, Grant ;)
On Wed, 28 Jun 2006, James Washer wrote:
It's the "working in place" that makes this difficult; else there
are countless ways to do this, including the simple perl:
perl -ne 'next unless $. > 300; print'
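Though for genuinely in-place (same file, no second 2.3 GB copy), something like this ought to work with plain dd plus perl, assuming nothing else is writing the file meanwhile (a sketch, untested on anything that big):
~$ BYTES=$(head -300 nameofbigfile.txt | wc -c)
# copy the tail of the file down over its own head; dd reads each
# block before the write pointer reaches it, so same-file is safe here
~$ dd if=nameofbigfile.txt of=nameofbigfile.txt bs=$BYTES skip=1 conv=notrunc
# then chop the now-duplicated last $BYTES bytes off the end
~$ perl -e 'truncate($ARGV[0], (-s $ARGV[0]) - $ARGV[1])' nameofbigfile.txt $BYTES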
On Wed, 28 Jun 2006 17:26:47 -0700
"Brandon Mitchell" <[EMAIL PROTECTED]> wrote:
This is an interesting problem, and seems to be revealing a little
bit about what type of user/admin is in each of us.
So where's Nick with an Emacs Lisp macro for this task? :P
On 6/28/06, Anna <[EMAIL PROTECTED]> wrote:
find out which byte terminates the first 300 lines. maybe...
~$ BYTES=$(head -300 nameofbigfile.txt | wc -c)
then use that info with dd to skip the first part of the input file...
~$ dd if=nameofbigfile.txt of=truncatedversion.pl ibs=$BYTES skip=1
one of many ways, I'm sure. I think this way should be pretty fast
because it works on a line-by-line basis for just a small part of the
file. The rest, with dd, is done in larger pieces.
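Another of those ways, for the record, with the same caveat that it writes a new copy rather than working in place: plain tail, starting output at line 301.
~$ tail -n +301 nameofbigfile.txt > truncatedversion.pl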
- Anna
On Wed, Jun 28, 2006 at 01:01:03PM -0700, Grant Kelly wrote:
Alright unix fans, who can answer this the best?
I have a text file, it's about 2.3 GB. I need to delete the first 300
lines, and I don't want to have to load the entire thing into an
editor.
I'm trying `sed '1,300d' inputfile > outputfile` but it's taking a
long time (and space) to output everything to the new file.
There has got to be a better way, a way that can do this in-place...
Grant
--
If UNIX doesn't have the solution, you have the wrong problem.
UNIX is simple, but it takes a genius to understand its simplicity.
_______________________________________________
RLUG mailing list
RLUG@rlug.org
http://lists.rlug.org/mailman/listinfo/rlug