> I'm not sure exactly what you're trying to test, but your repeat loop seems
> a little off. Perhaps you intended something like this (assuming tSource is
> a variable containing the text of a field):
>
>   repeat with N = number of lines of tSource down to 1
>     if item 2 of line N of tSource = "0" then delete line N of tSource
>   end repeat
>
> Another option (assemble the text in a new variable):
>
>   repeat for each line thisLine in tSource
>     if item 2 of thisLine <> "0" then put thisLine & cr after tFilteredText
>   end repeat
>   delete last char of tFilteredText
>   -- do something with tFilteredText
>
> Each of the above should result in removal of any lines in the source text
> that contain 0 in item 2 of the line. Another thing to check is that you're
> using the appropriate itemDelimiter (comma is the default, but you can set
> the itemDelimiter to another character).
>
> If the above is not what you're after, you might need to explain what is
> actually stored in tSource.
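For large datasets, the second ("repeat for each") form is generally the faster choice, since deleting lines in place forces the engine to rescan the variable on every pass. A minimal sketch combining it with an explicit itemDelimiter — tSource, tDelim, and tFilteredText are assumed names for the raw text, its detected delimiter, and the output:

  -- tDelim is assumed to hold the source's delimiter (comma, tab, etc.)
  set the itemDelimiter to tDelim
  put empty into tFilteredText
  repeat for each line thisLine in tSource
    -- keep only records whose second item is not the "discontinued" code
    if item 2 of thisLine is not "0" then
      put thisLine & cr after tFilteredText
    end if
  end repeat
  -- trim the trailing return added by the last "put ... after"
  delete the last char of tFilteredText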
Hello Scott,

I'm sorry, I just didn't think to mention that I'm moving the data from the text field into a temporary variable before processing. Perhaps I should add some details about what I'm trying to accomplish as well.

I'm working with large datasets (sometimes over 80,000 records) that originate from a variety of sources, often in different formats (CSV, tab-delimited, etc.), each with just a few columns of data that are common to all of them and of interest to me. I've added enough "smarts" to the app that I can preselect certain options based on where the file originated (the delimiters and the positions of the data being analyzed), as well as how the data must be handled accordingly. In this case, the zero represents a "code" indicating that the item has been discontinued and is no longer available, so I just want to remove those items from the source entirely. This is just a tiny fraction of the overall processing that must be done. Basically, I'm trying to automate the same general processes I would otherwise go through in Excel, for example.

This is my first real attempt at heavy data processing with Rev (and at chunking, to a large degree), so most of what I'm doing is simple trial and error. I really appreciate everyone's input and will start trying some of the suggestions to see what works best.

David

_______________________________________________
use-revolution mailing list
[email protected]
Please visit this url to subscribe, unsubscribe and manage your subscription preferences:
http://lists.runrev.com/mailman/listinfo/use-revolution
