Occasionally, I need to read the entire contents of a file (which
can be of any length) just so I can write it out again. For example, I
may want to create a copy of a file, or simply read a file and dump its
contents to stdout. The following program simply reads a
file called tester.txt and prints it to stdout:
#include <stdio.h>
#include <fcntl.h>
#include <unistd.h>

#define BUFSIZE 1024

void print_file(const char *pathname) {
    int fd;
    char buf[BUFSIZE];
    ssize_t bytes_read;

    fd = open(pathname, O_RDONLY);
    if (fd == -1) {
        perror("error opening");
        return;
    }
    do {
        bytes_read = read(fd, buf, BUFSIZE);
        if (bytes_read > 0)
            if (write(1, buf, bytes_read) != bytes_read) {
                perror("error writing");
                close(fd);
                return;
            }
    } while (bytes_read > 0);
    if (bytes_read == -1)
        perror("error reading");
    if (close(fd) == -1)
        perror("error closing");
}

int main(void) {
    print_file("tester.txt");
    return 0;
}
The technique is to read and write the data in BUFSIZE-byte chunks.
First, is this a reasonable approach?
Second, is there an exceptionally good number to use for BUFSIZE on a
particular installation of Linux? I've noticed that the struct stat
structure, which is used by the stat() system call, has a member called
st_blksize. For files on my system, st_blksize is 4096 (4K). Is this
the 'ideal' size for BUFSIZE (because maybe the OS reads data in chunks
of that size)?
Is st_blksize similar in concept to the "cluster size" on MS-DOS disks?
Where does the 'inode' size fit into the picture? When I prepped my
hard disk during the Slackware install, I set the inode size to 1K.
Anyone have any experience in this area?
Thanks,
Steve Narmontas