Thank you, Irene. I’ll try 16/32 GB RAM. Also, the resulting file will be loaded
into a triple store; no need to open it in TB
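For anyone trying the RAM route: TBC is Eclipse-based, so the heap ceiling is normally raised through the JVM `-Xmx` argument in the `.ini` file that sits next to the TBC executable. A sketch only; the exact file name, default values, and whether 16g is enough for this file vary by version and platform:

```
-vmargs
-Xms2g
-Xmx16g
```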
> On Feb 2, 2018, at 7:18 PM, Irene Polikoff wrote:
>
> When TBC works with files, it loads them into memory in order to display
> them. You will not have enough RAM on a laptop or a desktop to load a file
> of this size.
When TBC works with files, it loads them into memory in order to display them.
You will not have enough RAM on a laptop or a desktop to load a file of this
size. The resulting RDF graph will have 81M triples. You need to convert it
using some process that does not involve TBC UI.
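Irene's suggestion to convert outside the TBC UI can be done with a streaming pass that never holds the graph in memory: read the tab-delimited file row by row and append N-Triples lines to an output file, which a triple store can then bulk-load. (Her 81M figure is consistent with one triple per column plus one rdf:type triple per row: 9M × 9.) A minimal sketch; the namespace, class name, and column names below are hypothetical placeholders, not anything from the actual file:

```python
# Stream a tab-delimited file to N-Triples so the whole 81M-triple graph
# never has to sit in memory. Namespace/class/column names are made up.
import csv

EX = "http://example.org/"  # hypothetical namespace


def tsv_to_ntriples(tsv_path, nt_path, columns):
    """Emit one rdf:type triple plus one triple per column for each row."""
    with open(tsv_path, newline="") as src, open(nt_path, "w") as out:
        reader = csv.reader(src, delimiter="\t")
        for i, row in enumerate(reader):
            subj = f"<{EX}row/{i}>"
            out.write(
                f"{subj} <http://www.w3.org/1999/02/22-rdf-syntax-ns#type> "
                f"<{EX}Record> .\n"
            )
            for col, value in zip(columns, row):
                # escape backslashes and quotes for N-Triples literals
                lit = value.replace("\\", "\\\\").replace('"', '\\"')
                out.write(f'{subj} <{EX}{col}> "{lit}" .\n')
```

Memory use stays flat regardless of file size, since only one row is in memory at a time.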
Eight columns & thanks for looking into this
On Friday, February 2, 2018 at 1:55:43 AM UTC-5, Irene Polikoff wrote:
>
> How many columns?
>
> Sent from my iPhone
>
> On Feb 1, 2018, at 3:35 PM, Sina wrote:
>
> Hi,
>
> Do you have a recommended setting/memory/etc. for importing an 800Mb tab
> delimited text file (9 million rows) into an ontology through the TB import
> spreadsheet function?
How many columns?
Sent from my iPhone
> On Feb 1, 2018, at 3:35 PM, Sina wrote:
>
> Hi,
>
> Do you have a recommended setting/memory/etc. for importing an 800Mb tab
> delimited text file (9 million rows) into an ontology through the TB import
> spreadsheet function?
>
> Thanks!
>
> Sina
Hi Sina,
That's a very large file! I'd recommend allocating as much memory as possible
and shutting down every other unnecessary process. If that doesn't work,
you'll probably have to split the file into two separate files.
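If splitting is the way you go, it can be done in one streaming pass; a minimal sketch (the chunk size is a placeholder to tune to whatever RAM you can spare):

```python
# Split a large tab-delimited file into fixed-size chunks so each piece
# fits in TBC's memory. One pass, one line in memory at a time.
def split_file(path, rows_per_chunk):
    """Write path.part0, path.part1, ... each with rows_per_chunk lines."""
    part, out, written = 0, None, 0
    with open(path) as src:
        for line in src:
            if out is None or written == rows_per_chunk:
                if out:
                    out.close()
                out = open(f"{path}.part{part}", "w")
                part, written = part + 1, 0
            out.write(line)
            written += 1
    if out:
        out.close()
    return part  # number of chunks written
```

Note this splits on raw line count; if the file had a header row you would want to copy it into every chunk, which this sketch does not do.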
Let me know if this works!
Aaron
On Thu, Feb 1, 2018 at 3:35 PM, Sina wrote: