Creating a bunch of temporary databases that you're going to delete doesn't
sound like the most efficient way to process this data. But it's hard to tell
what alternative to recommend without more information about your desired end
result.

Is this something that you're going to do once, or repeatedly for different
strings?

Taking your description literally, I would use 'grep -l' to generate a list of
the files that contain the specific string, and then either feed that list
into 'cat' or use it to build a database from that subset of files for further
investigation.
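For example, something along these lines (an untested sketch; the paths, the
staging folder, and 'TARGET STRING' are placeholders to replace):

    # grep -l prints only the names of files that contain the string
    grep -rl --include='*.xml' 'TARGET STRING' /path/to/xml > matching-files.txt

    # Option A: just concatenate the matching documents for inspection
    xargs cat < matching-files.txt > matches.txt

    # Option B: copy the matches into a staging folder and load only that
    # subset into BaseX (CREATE DB accepts a directory as its input)
    mkdir -p /path/to/staging
    xargs -I{} cp {} /path/to/staging < matching-files.txt
    basex -c "CREATE DB subset /path/to/staging"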

There are also other ways to filter the data through a stream to select a 
subset if that is what you want to do. 
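One such option (again just a sketch, and slow for millions of files because
it starts a new Java process per document) is to query each file directly
without creating any database:

    # Stream over the files one at a time; nothing is loaded into a database.
    # Matching documents are written out and can be concatenated later.
    find /path/to/xml -name '*.xml' -print0 |
      xargs -0 -I{} basex -q \
        "if (contains(doc('{}'), 'TARGET STRING')) then doc('{}') else ()" \
      > matches-concatenated.xml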

But if you're going to do this repeatedly for different subsets, then it might
make more sense to parse and index everything into the database once. If the
collection really is too large for a single database even after adjusting the
Java memory parameters in the BaseX scripts, you could try sharding the data
into several databases, repeating the search on each one, and concatenating
the results.
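A very rough sketch of the sharding idea (the shard size, paths, and the query
are placeholders; check the commands against your BaseX version before relying
on them):

    # Split the file list into chunks of 1000 and build one database per chunk
    find /path/to/xml -name '*.xml' > all-files.txt
    split -l 1000 all-files.txt chunk-

    n=0
    for chunk in chunk-*; do
      n=$((n + 1))
      mkdir -p /tmp/shard-$n
      xargs -I{} cp {} /tmp/shard-$n < "$chunk"
      basex -c "CREATE DB shard$n /tmp/shard-$n"
    done

    # Run the same query against every shard and concatenate the hits
    for i in $(seq 1 $n); do
      basex -q "db:open('shard$i')[contains(., 'TARGET STRING')]"
    done > all-matches.xml

Copying the files into staging folders doubles the disk usage; adding them to
each database with BaseX's ADD command instead would avoid that, at the cost
of a slightly longer script.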




> On Aug 5, 2019, at 2:41 AM, Greg Kawchuk <greg.kawc...@ualberta.ca> wrote:
> 
> Hi everyone,
> I'm wondering if someone could provide what I think is a brief script for a 
> scientific project to do the following. 
> The goal is to identify XML documents from a very large collection that would 
> be too big to load into a database all at once.
> 
> Here is how I see the functions provided by the code. 
> 1. In the script, the user could enter the path of the target folder (with 
> millions of XML documents).
> 2. In the script, the user would enter the number of documents to load into a 
> database at a given time (e.g. i = 1,000) depending on memory limitations.
> 3. The code would then create a temporary database from the first (i) xml 
> files in the target folder.
> 4. The code would then search the 1000 xml documents in the database for a 
> pre-defined text string.
> 5. If hits exist for the text string, the code would write those documents to 
> a unique XML file.
> 6. Clear the database.
> 7. Read in the next 1000 files (or remaining files in the folder).
> 8. Return to #4.
> 
> There would be no need to append XML files in step 5. The resulting XML files 
> could be concatenated afterwards. 
> Thank you in advance. If you have any questions, please feel free to email me 
> here. 
> Greg
> 
> ***************************************************
> Greg Kawchuk BSC, DC, MSc, PhD.
> Professor, Faculty of Rehabilitation Medicine
> University of Alberta
> greg.kawc...@ualberta.ca <mailto:greg.kawc...@ualberta.ca>
> 780-492-6891
