Bec,

Read this article and download the demo. It goes through several different options for "find and replace" in large text strings.
http://www.codeproject.com/KB/string/fastestcscaseinsstringrep.aspx

Grant

On Mon, Feb 7, 2011 at 9:43 AM, Bec Carter <[email protected]> wrote:
> On Mon, Feb 7, 2011 at 10:35 AM, Noon Silk <[email protected]> wrote:
> > On Mon, Feb 7, 2011 at 10:29 AM, Bec Carter <[email protected]> wrote:
> >> On Mon, Feb 7, 2011 at 10:25 AM, Noon Silk <[email protected]> wrote:
> >>> On Mon, Feb 7, 2011 at 10:21 AM, Bec Carter <[email protected]> wrote:
> >>>> Good mornin' all!
> >>>>
> >>>> I've a requirement to put certain values (after computing them) in
> >>>> various spots in a very large text file. So basically the starting
> >>>> text file can have placeholders where these computed values will end
> >>>> up - like a template. Then my code will compute some values based on
> >>>> user input and I need to fill in the placeholders.
> >>>>
> >>>> Is there a better way to do this besides a simple string replace?
> >>>
> >>> Well, no. You'll need to read the file in and find your tokens and
> >>> replace them. Depending on how large the file is, you might need to do
> >>> this line by line, or chunk by chunk, writing out as you read in, but
> >>> inevitably it comes down to looking for a sequence and replacing it
> >>> with another.
> >>>
> >>> How large is "very large"? Megs? Gigs?
> >>>
> >>
> >> Yup, reading it all into a string right now and replacing. File is
> >> around 750 megs.
> >
> > Mm, in that case I would definitely think reading in chunk by chunk
> > would be better.
> >
> > So, you read in chunks of chars into a char[], and then you must look
> > for the start of your token, taking care to note that you could end
> > up in the middle of your token, something like:
> >
> > // Pseudocode
> > char[] data = { "$te" }
> > char[] nextData = { "st$" }
> >
> > Where the token is "$test$".
> >
> > Depending on your data, it might be that reading lines is enough, and
> > you can do the replace on that basis. Hopefully this is somewhat clear.
> > I had a quick search and couldn't find a nice example of doing this,
> > but it should be easy enough using StreamReader or friends. If it's
> > not clear I can show an example later on.
>
> That's fine, thanks. Line by line would be OK as the data will never be
> broken up and flow onto the next line.
>
> I was kinda hoping there'd be something specifically built for this -
> seems like I'm creating mail merge all over again :-)
>
> > --
> > Noon Silk
> >
> > http://dnoondt.wordpress.com/ (Noon Silk) | http://www.mirios.com.au:8081
> >
> > Fancy a quantum lunch?
> > http://www.mirios.com.au:8081/index.php?title=Quantum_Lunch
> >
> > "Every morning when I wake up, I experience an exquisite joy — the joy
> > of being this signature."
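
For the line-by-line approach Bec settled on, a minimal sketch in C# might look like the following. The file names, placeholder tokens, and computed values here are invented for illustration and are not from the thread; the real code would substitute whatever the application computes from user input.

using System.Collections.Generic;
using System.IO;

class TemplateFiller
{
    static void Main()
    {
        // Placeholder -> computed value pairs; names are hypothetical,
        // standing in for values computed from user input.
        var values = new Dictionary<string, string>
        {
            { "$name$", "Bec" },
            { "$total$", "42.50" }
        };

        // Hypothetical input/output file names. Streaming line by line
        // keeps memory flat instead of loading the whole file into one string.
        using (var reader = new StreamReader("template.txt"))
        using (var writer = new StreamWriter("output.txt"))
        {
            string line;
            while ((line = reader.ReadLine()) != null)
            {
                // Replace every placeholder that appears on this line.
                foreach (var pair in values)
                    line = line.Replace(pair.Key, pair.Value);

                writer.WriteLine(line);
            }
        }
    }
}

Because each line is read, replaced, and written out immediately, the 750 meg file never has to sit in memory all at once, which is the point of Noon's suggestion; it only works because the placeholders are known not to span line breaks.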
