OK,
Here is the source of the problem: the cache file generated by
webazolver. From the webalizer documentation:
Cached DNS addresses have a TTL (time to live) of 3 days. This may be
changed at compile time by editing dns_resolv.h.
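If the webazolver cache file is the culprit, the quickest check is to compare its apparent size with the blocks it actually occupies. A minimal sketch of the technique, using a throwaway file under /tmp since the real cache path varies per install:

```shell
#!/bin/sh
# Create a 10MB sparse file: seek past byte 10485759 and write one byte.
# Only one fragment is actually allocated; the rest are holes.
dd if=/dev/zero of=/tmp/dns_cache.db bs=1 count=1 seek=10485759 2>/dev/null

# Apparent size in bytes (what ls, and a naive copy, see):
ls -l /tmp/dns_cache.db | awk '{print $5}'    # 10485760

# Kilobytes actually allocated on disk (what du reports):
du -k /tmp/dns_cache.db | awk '{print $1}'    # a few KB at most

rm -f /tmp/dns_cache.db
```

A large gap between the two numbers is the signature of a sparse file.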
On Tue, 17 Jan 2006, Joachim Schipper wrote:
On Tue, Jan 17, 2006 at 02:15:57PM +0100, Otto Moerbeek wrote:
You are wrong in thinking sparse files are a problem. Having sparse
files is quite a nifty feature, I would say.
Are we talking about webazolver or OpenBSD?
I'd argue that relying on the OS handling sparse files this way instead
of handling your own log data in an efficient way *is* a problem,
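For context, a hole in a sparse file is created by writing after a seek past the current end of file; reads of the hole return NUL bytes even though no disk blocks back them. A small sketch (the /tmp path is just an example):

```shell
#!/bin/sh
# Make a 4KB file whose first 4095 bytes are a hole.
f=/tmp/sparse_demo
dd if=/dev/zero of="$f" bs=1 count=1 seek=4095 2>/dev/null

# The hole region reads back as zeros, indistinguishable from real data:
od -An -tx1 -N4 "$f"    # 00 00 00 00

rm -f "$f"
```

This is why an application like webazolver can use a file offset as a cheap hash index: the unused ranges cost nothing on disk, as long as every tool that touches the file understands holes.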
On Tue, Jan 17, 2006 at 05:49:24PM +0100, Otto Moerbeek wrote:
On Tue, Jan 17, 2006 at 02:36:44PM -0500, Daniel Ouellet wrote:
[...] But having a
file that is, let's say, 1MB of valid data grow very quickly to 4 and
6GB, take time to rsync between servers, and in one instance
fill the file system and create other problems (: I wouldn't
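The blow-up described above is the receiving side writing the holes out as real zero-filled blocks: tools that are not hole-aware expand a sparse file to its full apparent size. rsync can recreate holes on the receiver if given -S (--sparse). The sketch below demonstrates the expansion with a plain cat, which behaves like any hole-unaware copy:

```shell
#!/bin/sh
# A sparse 10MB file occupies almost nothing...
dd if=/dev/zero of=/tmp/sparse bs=1 count=1 seek=10485759 2>/dev/null

# ...but a hole-unaware copy writes every zero byte for real.
# rsync without -S/--sparse does the same on the destination.
cat /tmp/sparse > /tmp/expanded

du -k /tmp/sparse /tmp/expanded
# /tmp/sparse   : a few KB
# /tmp/expanded : ~10240 KB, fully allocated

rm -f /tmp/sparse /tmp/expanded
```

So an rsync invocation along the lines of `rsync -avS src/ host:dst/` would keep the copies sparse, at the cost of rsync scanning for NUL runs.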
Hi all,
First let me start with my apology to some of you for having wasted
your time!
As much as this was/is interesting and puzzling to me, and as I am
obviously trying to get my hands around this issue and the usage of
sparse files, the big picture of it is obviously something missing in
Otto Moerbeek wrote:
On Sun, 15 Jan 2006, Daniel Ouellet wrote:
Since the bsize and fsize differ, it is expected that the used kbytes of the
file systems differ. Also, the inode table size will not be the same.
Not sure that I would agree
Ted Unangst wrote:
run du on both filesystems and compare the results.
OK, just because I am more curious than I think there is a problem, and
because I am still puzzled by what Otto and Ted said, here is what I
did, and the answers to Otto's questions as well.
- Both systems run 3.8. (www1
Otto Moerbeek wrote:
Now I agree that the difference you are seeing is larger than I would
expect. I would run ls -laR or du -k on the filesystems and diff the
results to see if the contents are really the same. My bet is that
you'll discover some files that are not on the system with a smaller
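That suggestion can be done mechanically: run du from the root of each copy, sort, and diff. Any path whose allocated size differs (a sparse original versus an expanded copy, or a file missing on one side) shows up immediately. A sketch, with /mnt/www1 and /mnt/www2 as stand-in mount points:

```shell
#!/bin/sh
# Record per-file disk usage of two supposedly identical trees,
# sorted by path so the outputs line up.
(cd /mnt/www1 && du -ak . | sort -k2) > /tmp/du.www1
(cd /mnt/www2 && du -ak . | sort -k2) > /tmp/du.www2

# Any differing line is a file whose allocated size differs
# between the two systems, or a file present on only one side.
diff /tmp/du.www1 /tmp/du.www2
```

Note that du compares allocated blocks, not byte-for-byte content; identical trees on filesystems with different fsize/bsize will still show small uniform differences.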
Just a bit more information on this.
As I couldn't understand if this was an AMD64 issue, as illogical as
that might be, I decided to put it to the test. So I pulled out another
AMD64 server; it's running 3.8, same fsize and bsize, one drive, etc.
I used rsync to mirror the content and the
On Mon, 16 Jan 2006, Daniel Ouellet wrote:
Here is something I can't quite get my hands around, and I don't really
understand why that is, other than maybe the fsize of each mount point
not being processed properly on AMD64, but that's just an idea. See
below for why I think it might be the case. In any case, I would
welcome a logical
On Sun, 15 Jan 2006, Daniel Ouellet wrote:
[snip lots of talk by a confused person]
16 partitions:
#        size   offset  fstype [fsize bsize  cpg]
  a:   524097       63  4.2BSD   2048 16384   328 # Cyl     0*-   519
  b:  8388576   524160    swap
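For reference, disklabel sizes and offsets are in 512-byte sectors, so the entries above can be sanity-checked with shell arithmetic. Reading the swap entry as 8388576 sectors, and taking the a partition's fsize/bsize of 2048/16384:

```shell
#!/bin/sh
# Swap partition b: 8388576 sectors of 512 bytes each, in MB:
echo $((8388576 * 512 / 1048576))   # 4095 -> roughly 4 GB of swap

# Partition a's FFS geometry: 16384-byte blocks split into
# 2048-byte fragments, i.e. 8 fragments per block. The fragment
# is the allocation granularity that du reports in.
echo $((16384 / 2048))              # 8
```

The fragment size matters here: a sparse file's real usage is counted in 2048-byte fragments, which is why du can report a few KB for a file whose apparent size is gigabytes.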