The file head is already being scanned, but right now the code stops
at the first @prefix declaration. We could in theory check for a
default namespace, but that is not always present and would only be a
heuristic for finding the base URI.
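
For what it's worth, the head-scanning idea could look roughly like the
sketch below (Python, purely illustrative — this is not TBC's actual code;
the "# baseURI:" comment format and the fallback to the default-namespace
@prefix declaration are assumptions based on this thread):

```python
import re

def guess_base_uri(path, max_lines=20):
    """Scan only the head of an N3/Turtle file for a base URI hint.

    Prefers a '# baseURI: <uri>' comment; falls back to the default-
    namespace '@prefix : <uri>' declaration as a weaker heuristic.
    Returns None if neither is found in the first max_lines lines,
    in which case the caller must decide whether to load the whole file.
    """
    base_comment = re.compile(r'^\s*#\s*baseURI:\s*(\S+)')
    default_prefix = re.compile(r'^\s*@prefix\s+:\s+<([^>]*)>')
    fallback = None
    with open(path, encoding='utf-8') as f:
        for i, line in enumerate(f):
            if i >= max_lines:
                break
            m = base_comment.match(line)
            if m:
                return m.group(1)  # explicit hint wins immediately
            m = default_prefix.match(line)
            if m and fallback is None:
                # strip trailing '#' or '/' from the namespace (heuristic)
                fallback = m.group(1).rstrip('#/')
    return fallback
```

Because only the first few lines are read, this stays fast even on very
large generated files — which is the whole point of the heuristic.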

Holger
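
On the generation side, the advice below — add the #baseURI comment when
the files are created — could be sketched like this (again a hypothetical
illustration in Python; the exact header layout is an assumption modelled
on the comments Composer itself writes):

```python
def write_with_base_uri_comment(path, base_uri, turtle_text, imports=()):
    """Write a Turtle/N3 file with TBC-style header comments so the
    base URI can be discovered without parsing the whole file.

    The '# baseURI:' / '# imports:' comment layout here is an assumption,
    not a documented format.
    """
    with open(path, 'w', encoding='utf-8') as f:
        f.write("# baseURI: %s\n" % base_uri)
        for imp in imports:
            f.write("# imports: %s\n" % imp)
        f.write("\n")
        f.write(turtle_text)
```

Generators that emit their files this way would let any head-scanner find
the base URI from the first line alone.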


On Mar 19, 2009, at 7:36 PM, Arthur Keen wrote:

>
> They are N3 files. I like your idea of scanning the file head. Would
> this be as simple as scanning for #baseURI or "PREFIX :", and, if
> neither exists, going through the rest of the file?
>
> A
>
> Sent from my iPhone
>
> On Mar 19, 2009, at 7:31 PM, Holger Knublauch <[email protected]>
> wrote:
>
>>
>> Hi Arthur,
>>
>> yes this is a problem. TBC scans the whole workspace at start-up to
>> discover the base URIs of each file so that other files that import
>> that base URI get redirected to the local file instead of going to  
>> the
>> web.
>>
>> With files created (or at least saved) with TBC, this is normally no
>> problem, because they will contain the #baseURI comment in the
>> beginning and scanning can then proceed without loading the whole
>> file.
>>
>> Files created with other tools may not have this convention (or
>> contain @base which Jena currently does not support; see other thread
>> recently). Then, TBC needs to decide whether it should try to load it
>> to learn about its base URI (which is normally an instance of
>> owl:Ontology). We currently do this, and this is the problem you are
>> seeing with large files. I could add an option to switch this behavior
>> off, but then those files would be known under a different (more or
>> less meaningless) file:/// baseURI, and it would not be clear to the
>> user what is happening. Another solution might be to load only the
>> first few lines of the file. I need to think about better solutions.
>>
>> In the meantime, yes please add the #baseURI comment to your files
>> when you generate them.
>>
>> Are these NT or N3 files?
>>
>> Thanks
>> Holger
>>
>>
>>
>> On Mar 19, 2009, at 4:10 PM, Arthur wrote:
>>
>>>
>>> As I generate larger model files (using Jena to parse proprietary
>>> reservoir model files) and save them in Composer's namespace, it
>>> takes Composer longer and longer to scan for the baseURI, often
>>> several minutes, before I can open them in Composer.  Using a text
>>> editor, I can see that Composer alters the freshly generated model
>>> files when it scans them and puts in metadata comments (#Saved
>>> by...#baseURI...#imports...).  So Composer is reading and writing
>>> the model files to add these comments to the header when it scans.
>>>
>>> Since acquiring these large files, Composer also takes over 10
>>> minutes to launch.  I assume it is scanning for base URIs in all
>>> the files in the workspace.
>>>
>>> What can I do to avoid this, beyond the obvious reduction in file
>>> size?  Would it help to insert the (#Saved
>>> by...#baseURI...#imports...) comments when the models are created,
>>> or is there a statement I can add to a model so that Composer can
>>> quickly locate the baseURI?
>>>
>>> Thanks
>>>
>>> Arthur


--~--~---------~--~----~------------~-------~--~----~
You received this message because you are subscribed to the Google Groups 
"TopBraid Composer Users" group.
To post to this group, send email to [email protected]
To unsubscribe from this group, send email to 
[email protected]
For more options, visit this group at 
http://groups.google.com/group/topbraid-composer-users?hl=en
-~----------~----~----~----~------~----~------~--~---
