Thanks a lot for the deep explanation, Andrew. I tested with 5 GB and it
worked very well.

Thx,

GopiKrishna Komanduri
Software engineer
[email protected]

--- On Thu, 31/12/09, andrew clarke <[email protected]> wrote:

From: andrew clarke <[email protected]>
Subject: Re: [c-prog] query about writing a large file using c++
To: [email protected]
Date: Thursday, 31 December, 2009, 5:01 AM

On Tue 2009-12-29 22:17:32 UTC-0700, Thomas Hruska (thru...@cubiclesoft.com) wrote:

> Gopi Krishna Komanduri wrote:
> > Hi,
> >
> > I need to write a huge amount of data to a file (the size can be
> > more than 100 GB, sometimes even more than 1000 GB).  To my
> > knowledge, using a FILE pointer I can write a file of at most 4 GB,
> > as the size of the FILE pointer is 32 bits.
> >
> > Can I use fstream to write a large amount of data?  Please suggest.
> >
> > BTW, though I am developing on Windows, it should be platform
> > independent as I need to port the code to various OSs.
> >
> > Thx,
> > --Gopi
> > 9705355488
>
> The size of FILE * doesn't matter.  It is an abstract data structure.
> What is contained _within_ the data structure is what matters.  Most
> modern compilers these days _supposedly_ deal with large files just
> fine as long as you aren't trying to get or set the current file
> location.

Expanding on Thomas's answer a little...

Modern Windows compilers should have no problem with seek/tell on
files bigger than 4 GB.  Sometimes you need to use non-standard API
calls, e.g. fseek64().

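The exact spelling of those calls varies by toolchain: Microsoft's CRT
spells them _fseeki64() / _ftelli64(), and POSIX systems give you
fseeko() / ftello() with a 64-bit off_t if you compile with
-D_FILE_OFFSET_BITS=64.  A minimal shim to hide the difference might
look something like this (just a sketch, not tested everywhere):

  #include <cstdio>
  #include <cstdint>

  // Seek/tell with 64-bit offsets, hiding the per-platform names.
  #ifdef _MSC_VER
  inline int seek64(std::FILE *f, std::int64_t off, int whence)
  { return _fseeki64(f, off, whence); }
  inline std::int64_t tell64(std::FILE *f)
  { return _ftelli64(f); }
  #else
  inline int seek64(std::FILE *f, std::int64_t off, int whence)
  { return fseeko(f, off, whence); }
  inline std::int64_t tell64(std::FILE *f)
  { return ftello(f); }
  #endif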

> You should try creating a really large file to see if you can read
> and write just fine.
>
> If not, check your compiler to see if fopen64() and friends are
> available.

As Thomas says, the quickest way to find out is to create a large
(e.g. 5 GB) file.  Write a predictable pattern into it, seek to random
locations and verify the data is what you expect.  If this works for a
5 GB file it should work the same for a 1 TB file, as the compiler
will be using 64-bit integers for the offset.

An easy way would be to make the file one huge array of 64-bit ints
written sequentially, each int holding its own byte offset within the
file.  You just need an incrementing counter for this - no need to
call ftell().  In code it might look like the sketch below.

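Here's roughly what that test could look like (a sketch only - it
reuses the hypothetical seek64() shim from above, and the file name,
the 5 GB size and the 1000 probes are all arbitrary):

  #include <cstdio>
  #include <cstdint>
  #include <cstdlib>

  int main()
  {
      const std::int64_t size = 5LL * 1024 * 1024 * 1024;  // 5 GB
      const std::int64_t n = size / 8;                     // 64-bit ints

      // Pass 1: write each int's own byte offset, using nothing but
      // an incrementing counter - no ftell() involved.
      std::FILE *f = std::fopen("big.dat", "wb");
      if (!f) return 1;
      for (std::int64_t i = 0; i < n; i++) {
          std::int64_t off = i * 8;
          if (std::fwrite(&off, sizeof off, 1, f) != 1) return 1;
      }
      std::fclose(f);

      // Pass 2: seek to random records; each value read back must
      // equal the offset it sits at, or something is broken.
      f = std::fopen("big.dat", "rb");
      if (!f) return 1;
      for (int t = 0; t < 1000; t++) {
          std::int64_t i = (((std::int64_t)std::rand() << 16)
                            ^ std::rand()) % n;
          if (seek64(f, i * 8, SEEK_SET) != 0) return 1;
          std::int64_t v;
          if (std::fread(&v, sizeof v, 1, f) != 1) return 1;
          if (v != i * 8) return 1;
      }
      std::fclose(f);
      return 0;  // all probes matched
  }

If the program returns 0, both the large write and 64-bit seeking work
on that toolchain and filesystem.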

If in doubt, check the compiler documentation, or look for hints in
the include files.  You really shouldn't need to resort to calling
the Windows API for this, especially not if you're using a compiler
released in the last five years.

If you get stuck, post your code.  I primarily use Linux and FreeBSD
these days, where some of the function names mentioned above differ
from their Windows counterparts, but the principles are the same.

Video editing software had to deal with this sort of thing early on:
MJPEG and MPEG-2 video can grow beyond 4 GB in a very short time.

Also, forget about creating large files on FAT32.  Its maximum file
size is 4 GB minus one byte, so you're going to hit that boundary
early on.  I suppose you could work around it by splitting your data
file into multiple pieces (and writing wrapper functions to handle
that - sketched below), but using NTFS is really the way to go.
Although how well it handles 1 TB files is another thing!

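If you did take the splitting route, the wrappers mostly amount to
mapping a logical 64-bit offset onto a (piece number, offset within
piece) pair - something like this hypothetical helper (the names and
the 1 GB piece size are made up):

  #include <cstdio>
  #include <cstdint>
  #include <string>

  const std::int64_t PIECE_SIZE = 1LL << 30;  // 1 GB per piece

  // Which piece holds a given logical offset, and where inside it.
  void locate(std::int64_t logical, int &piece, std::int64_t &local)
  {
      piece = (int)(logical / PIECE_SIZE);
      local = logical % PIECE_SIZE;
  }

  // Name of the n-th piece: data.000, data.001, ...
  std::string piece_name(int piece)
  {
      char buf[32];
      std::snprintf(buf, sizeof buf, "data.%03d", piece);
      return buf;
  }

The fiddly part is splitting reads and writes that straddle a piece
boundary, which is most of the reason NTFS is the easier answer.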

Regards

Andrew