On Sun, 16 Jan 2005, Dvir Volk wrote:
> I'm using large files with
> -D_FILE_OFFSET_BITS=64 as a compiler flag, and that's it.
> i'm using open without any special flag, and without all the lseek64
> calls, etc, even the sizeof off_t is 64bit automatically.
that's the second way to do it - but i
On Sat, 15 Jan 2005, Shachar Shemesh wrote:
> I'm trying to make sure a program I'm writing works with files greater
> than 4GB in length. In order to do that, I'm trying to create a sparse
> file of this size. The program is this:
> #define _LARGEFILE64_SOURCE
> #include
> #include
> #include
Shachar Shemesh wrote:
File system is reiserfs 3. I have tried running it also on a partition
that has more than 4GB of free space. Does reiser not support large
files?
Thanks,
Shachar
Just to add one piece of info - creating a non-sparse file of the same
size using "dd" or "echo >>" does
executing the "write":
File size limit exceeded
HOWEVER, when doing "ulimit -a":
core file size        (blocks, -c) 0
data seg size         (kbytes, -d) unlimited
file size             (blocks, -f) unlimited
max locked memory     (kbytes, -l) unlimited
max memory size       (kbytes, -m
On Wed, Jun 25, 2003 at 03:23:39PM +0300, Muli Ben-Yehuda wrote:
>
> . For virtual memory, I've never heard of a 2 GB
> limitation (4, definitely, 3 or 3.5, or even 2.9, fine[0]). References?
>
There is also an option to make the kernel use 2G out of the available
4G. I remember seeing it
> On 2003-06-25 Honen, Oren wrote:
> I have a 5.4G file on an ext2 filesystem, no special block size was
> needed.
If I recall correctly (this was stated in the PostgreSQL docs somewhere), such
large files are possible, but will also result in degraded
performance (sounds reasonable, considering 64-bi
See the below article for the relations between block size and maximum
file size.
http://www.linuxhq.com/lnxlists/linux-kernel/lk_9906_01/msg1.html
Oren.
-Original Message-
From: Christoph Bugel [mailto:[EMAIL PROTECTED]
Sent: Wednesday, June 25, 2003 4:05 PM
To: Honen, Oren
Cc
On 2003-06-25 Honen, Oren wrote:
> Hi,
>
> I'm using a graphical application on 32bit RH Linux. The user saved
> files are getting larger and larger. Currently the largest file is at
> about 2GB . The vendor of the application says that larger files will
> require the 64bit version which he doesn
On Wed, Jun 25, 2003 at 03:09:47PM +0300, Honen, Oren wrote:
> 1. Do 32bit systems force files <= 2GB ?
No. See http://themes.freshmeat.net/articles/view/709/ for a
discussion, which also links to http://www.suse.de/~aj/linux_lfs.html
> 2. Does NFS have that limit ?
I highly doubt it, but I do
Hi,
I’m using a graphical application on 32bit RH Linux.
The user saved files are getting larger and larger. Currently the largest file
is at about 2GB . The vendor of the application says that larger files will
require the 64bit version which he doesn’t have on Linux. The files are
saved
hi, jenya.
> I would like to get a total file size report for a multi-level
> directory structure, like right-click -> Properties in Windows.
>
> Note, I don't need the disk usage ( du -sk ), just the total file size.
try:
[else@where]# du -smL
it will give you the size in m
I would like to get a total file size report for a multi-level directory
structure, like right-click -> Properties in Windows.
Note, I don't need the disk usage ( du -sk ), just the total file size.
wc -c gives me the total for files, but doesn't recursively descend into directories.
Is it correct t
In the ext2fs filesystem, the limit is 2GB. So if you need to manipulate
bigger files, check out a more recent filesystem.
--- Omer
WARNING TO SPAMMERS: at http://www.zak.co.il/spamwarning.html
On Mon, 2 Apr 2001, Moshe Ashkenazi wrote:
> Hi, List -
Hi, List -
I'm new in this list so forgive me if my question is silly.
Does anyone know if there is a limit on how big a file can be on Linux?
Thanks to all for their answers (I forgot about the obvious fstat() ...) .
Regarding what Ariel Biener wrote - I have to find the file size from within
the application code, on run time - so indeed it has to be a system call.
Ariel Biener wrote:
> On Tue, 12 Dec 2000, Ben-Nes Michael wr
On Tue, 12 Dec 2000, Ben-Nes Michael wrote:
system call Michael, not command.
--Ariel
> Hi
>
> du -hs /root/stf.phtml
> 4.0k /root/stf.phtml
>
> Edward Gold wrote:
>
> > Is there a system call that will give a file size (given its name &
> > pat
stat, or fstat
look at :
man 2 stat
Oded
..
"It's God. No, not Richard Stallman, or Linus Torvalds, but God."
(By Matt Welsh)
- Original Message -
From: "Edward Gold" <[EMAIL PROTECTED]>
To: <[EMAIL PROTECTED]>
Sent: Tuesday, December 12, 2000 15:
Hi
du -hs /root/stf.phtml
4.0k    /root/stf.phtml
Edward Gold wrote:
> Is there a system call that will give a file size (given its name &
> path) ?
> I know that a possible answer can be the use of the system() call, and
> using ls -l | sort -bn +4 from the shell it open
Is there a system call that will give a file size (given its name &
path) ?
I know that a possible answer can be the use of the system() call, and
using ls -l | sort -bn +4 from the shell it open- but I'm looking for
something more s
I came across an email which talks about NetBSD filesize limits. They have
code for up to 256TB files.
That's 8.5 years of MP3 compressed sound.
--
Itamar - [EMAIL PROTECTED]
-o-o
Whole Pop Magazine Online| The only good mor