----- Original Message ----- 
From: "Bob"

Hi Rob,
Thanks very much for your help.
I'm going to have to work through it at a later date.
I'm just managing to check my email daily, and that's it at the moment.

I transfer replies to my colour-coded editor, then print them out for 
reference.
I have noticed from your past posts that you use some interesting techniques 
too, which I have saved.
Thanks again.
Regards, Bob E.

-----------------------------------
Hi Bob,

Most of what I post is hand written code, often bugs and all!

But that last script for PHP 4.x.x was straight off the php.net website.

I really don't spend any time with PHP 4.x.x any more. I looked for one of 
my servers that still had it available, but no luck. And I didn't want to 
install an older version on a WAMP stack.

Someone mentioned the use of $d = implode($d); and said it would use extra 
memory. I don't see why it would, although I agree that something like 
$new_var = implode($d); would. Perhaps they can enlighten me?
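For what it's worth, a minimal sketch of the two forms being discussed; the variable names follow the thread, but the glue string is my own addition:

```php
<?php
// Sketch of the two forms from the discussion; the glue string is my own.
$d = array('alpha', 'beta', 'gamma');
$d = implode(',', $d);              // reassign: the array is replaced by the string

$e = array('alpha', 'beta', 'gamma');
$new_var = implode(',', $e);        // new variable: array and string coexist
```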

In any case I only write strings to files. For objects I use serialize() 
and then write the resulting string.
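A minimal sketch of that approach; the file and variable names are illustrative, and file_get_contents() needs PHP 4.3+:

```php
<?php
// Sketch: turn a structure into a string with serialize(), write the
// string, and restore it later. File name is illustrative;
// file_get_contents() needs PHP 4.3+.
$data = array('user' => 'bob', 'hits' => 42);

$fp = fopen('store.txt', 'wb');
fwrite($fp, serialize($data));      // write the resulting string only
fclose($fp);

$restored = unserialize(file_get_contents('store.txt'));
```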

The code I sent is a bit of a dog with the iteration over an array. PHP 
5.x.x has far superior array manipulation functions. You may be able to 
achieve the same in PHP 4.x.x if you push the new value and the old array 
into an empty stack.
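One way that stack idea might look, using functions that already exist in PHP 4.x.x (array_merge() and array_slice()); the names and the cap of 5 entries are illustrative:

```php
<?php
// Sketch of the stack idea with functions already in PHP 4.x.x;
// the names and the cap of 5 entries are illustrative.
$max = 5;
$log = array('old1', 'old2', 'old3', 'old4', 'old5');

$log = array_merge(array('new item'), $log); // new value on top of old array
$log = array_slice($log, 0, $max);           // drop anything past the cap
```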

If I get time I will see what I can do in php 4.x.x but in reality I think 
it is time for you to look for new hosting!

Someone mentioned using fseek. The basic principle is to feed straight from 
one stream to another. This vastly improves speed, and the memory usage is 
absolutely minimal compared to other methods. Because it is file stream to 
file stream, the large data file never has to sit in memory. The script I 
sent was only intended for about 100 data items or fewer.

The problem with fseek is that you need to determine where to seek to. To 
use it efficiently you need to pad your data items out to a fixed record 
length; then you can calculate the seek position easily without loading the 
stream into memory. Remember that a NULL byte is stored like any other 
character and uses the same amount of storage.
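A minimal sketch of the fixed-record idea: once every item is padded to the same length, the byte offset of record N is simply N * record length. The record length and file name here are illustrative:

```php
<?php
// Sketch: pad every item to a fixed record length so the byte offset of
// record N is simply N * $reclen. Length and file name are illustrative.
$reclen = 32;   // fixed record length, including the trailing newline

$fp = fopen('records.dat', 'wb');
for ($i = 0; $i < 10; $i++) {
    fwrite($fp, str_pad("record $i", $reclen - 1) . "\n");
}
fclose($fp);

// Jump straight to record 7 without loading the file into memory.
$fp = fopen('records.dat', 'rb');
fseek($fp, 7 * $reclen);
$rec = rtrim(fread($fp, $reclen));
fclose($fp);
```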

If you don't want to use fixed-length records then you need a delimiter 
that only exists between records. Then you read character by character and 
count delimiters.
Alternatively you can focus on a file size rather than a number of records. 
This way you can seek to a position, say 20KB from the end of the file, and 
then go character by character until immediately past the first delimiter 
found.
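A sketch of that tail-seek variant with a newline as the delimiter; the tiny file and the 15-byte offset stand in for a real log and the 20KB figure:

```php
<?php
// Sketch: seek to a spot near the end of a delimited file, then step
// character by character until immediately past the first delimiter
// ("\n" here). The tiny file and 15-byte offset are illustrative.
$fp = fopen('log.txt', 'wb');
fwrite($fp, "first line\nsecond line\nthird line\n");
fclose($fp);

$fp   = fopen('log.txt', 'rb');
$size = filesize('log.txt');
fseek($fp, max(0, $size - 15));      // e.g. 20KB back on a real log

while (($c = fgetc($fp)) !== false && $c !== "\n") {
    // discard the partial record we landed inside
}
$tail = fread($fp, $size);           // everything after that delimiter
fclose($fp);
```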

Basic principle for fixed records:

get new data item
pad it to the fixed record length
fopen input file
flock input file
get size of input file
fseek to: size of input file - (desired number of records - 1) * record 
length + any needed common adjustment
fopen output file
pass remaining data through from input file to output file
write new record to output file
fclose file streams
unlink input file
rename output file to input file
release flocks
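The steps above might be sketched like this; all names, sizes and the lack of error handling are illustrative, not a production implementation:

```php
<?php
// Sketch of the fixed-record steps above. The record length, file names
// and record counts are illustrative; error handling is omitted.
$reclen = 32;   // fixed record length, including the trailing newline
$keep   = 5;    // desired number of records to retain

function pad_record($s, $reclen) {
    // pad (or truncate) an item to the fixed record length
    return str_pad(substr($s, 0, $reclen - 1), $reclen - 1) . "\n";
}

// Seed an input file with 8 records so there is something to trim.
$fp = fopen('data.log', 'wb');
for ($i = 1; $i <= 8; $i++) {
    fwrite($fp, pad_record("record $i", $reclen));
}
fclose($fp);

$new = pad_record('record 9', $reclen);       // get new data item, padded

$in = fopen('data.log', 'rb');
flock($in, LOCK_SH);                          // flock input file
$size = filesize('data.log');                 // get size of input file

// fseek to: size - (desired number of records - 1) * record length
fseek($in, max(0, $size - ($keep - 1) * $reclen));

$out = fopen('data.log.tmp', 'wb');
flock($out, LOCK_EX);

while (!feof($in)) {                          // pass through remaining data
    $chunk = fread($in, 8192);
    if ($chunk === '' || $chunk === false) {
        break;
    }
    fwrite($out, $chunk);
}
fwrite($out, $new);                           // write new record

flock($in, LOCK_UN);                          // release flocks
flock($out, LOCK_UN);
fclose($in);                                  // fclose file streams
fclose($out);

unlink('data.log');                           // unlink input file
rename('data.log.tmp', 'data.log');           // rename output to input
```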

In the past I have done this and there can be server issues from time to 
time, so it is best to have a backup file:
new file becomes > log file
log file becomes > backup file
backup file is unlinked
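That rotation might look like this; the file names are illustrative, and file_put_contents() (PHP 5+) merely seeds the example files:

```php
<?php
// Sketch of the rotation above; file names are illustrative and
// file_put_contents() (PHP 5+) only seeds the example files.
file_put_contents('log.new', "newest\n");   // freshly written file
file_put_contents('log.txt', "current\n");  // current log
file_put_contents('log.bak', "oldest\n");   // previous backup

unlink('log.bak');                // backup file is unlinked
rename('log.txt', 'log.bak');     // log file becomes backup file
rename('log.new', 'log.txt');     // new file becomes log file
```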

Hope this helps 


------------------------------------

Community email addresses:
  Post message: php-list@yahoogroups.com
  Subscribe:    [EMAIL PROTECTED]
  Unsubscribe:  [EMAIL PROTECTED]
  List owner:   [EMAIL PROTECTED]

Shortcut URL to this page:
  http://groups.yahoo.com/group/php-list
