Re: Missing solution in SPARQL select result, however this solution exists in the dataset

2017-09-28 Thread Laurent Rucquoy
Hello,

I tested tdbdump with Jena 3 instead of Jena 2.
It seemed to finish successfully (and I was able to load this dump into a
TDB).
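For reference, the dump-and-reload recovery described above looks roughly like this (a sketch only; the store locations and file name are placeholders):

```shell
# Sketch only: store locations are placeholders.
# Dump the suspect TDB store to N-Quads with the Jena 3 command-line tools...
tdbdump --loc=/path/to/old-db > dump.nq
# ...then load the dump into a fresh, empty TDB location.
tdbloader --loc=/path/to/new-db dump.nq
```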

Thank you for your help.

Laurent


On 28 September 2017 at 14:04, Andy Seaborne  wrote:

>
>
> On 28/09/17 09:33, Laurent Rucquoy wrote:
> ...
>
>>
>> Note that the wrong behavior discussed here is strange because the given
>> SPARQL query does not return any data and when I remove the triple pattern
>> concerning the "annotationDimension" linked resource object (i.e. not a
>> literal object) the query returns the expected data (as if the linked
>> resource object did not exist... but this object exists)
>>
>>
>> I've run the tdbdump on the concerned dataset but the process ended
>> earlier
>> than expected with the following stacktrace:
>>
>> com.hp.hpl.jena.tdb.TDBException: Unrecognized node id type: 10
>>  at com.hp.hpl.jena.tdb.store.NodeId.extract(NodeId.java:346)
>>  at com.hp.hpl.jena.tdb.nodetable.NodeTableInline.getNodeForNode
>> Id(
>> NodeTableInline.java:64)
>>
>
> It looks like the database files are damaged in some way - there isn't a
> "type: 10" NodeId.  It's been a long time but I don't remember any mention
> of this before for any version of TDB. (It's not the same as the "Invalid
> NodeId" errors.)
>
> All I can think is that at some time in the past, maybe a very long time
> ago, there was a non-transaction update that didn't get flushed.
>
> Or, maybe, have you run a Jena3 TDB on the database before trying to back
> it up?  I don't see why it would cause that particular message but it is a
> possibility to consider.
>
> Andy
>
>
>  at com.hp.hpl.jena.tdb.lib.TupleLib.triple(TupleLib.java:126)
>>  at com.hp.hpl.jena.tdb.lib.TupleLib.triple(TupleLib.java:114)
>>  at com.hp.hpl.jena.tdb.lib.TupleLib.access$000(TupleLib.java:45)
>>  at com.hp.hpl.jena.tdb.lib.TupleLib$3.convert(TupleLib.java:76)
>>  at com.hp.hpl.jena.tdb.lib.TupleLib$3.convert(TupleLib.java:72)
>>  at org.apache.jena.atlas.iterator.Iter$4.next(Iter.java:299)
>>  at org.apache.jena.atlas.iterator.Iter$4.next(Iter.java:299)
>>  at org.apache.jena.atlas.iterator.Iter.next(Iter.java:909)
>>  at org.apache.jena.atlas.iterator.IteratorCons.next(
>> IteratorCons.java:92)
>>  at org.apache.jena.riot.system.StreamRDFLib.quadsToStream(
>> StreamRDFLib.java:69)
>>  at org.apache.jena.riot.writer.NQuadsWriter.write(
>> NQuadsWriter.java:40)
>>  at org.apache.jena.riot.writer.NQuadsWriter.write(
>> NQuadsWriter.java:67)
>>  at org.apache.jena.riot.RDFDataMgr.write$(RDFDataMgr.java:1133)
>>  at org.apache.jena.riot.RDFDataMgr.write(RDFDataMgr.java:1007)
>>  at org.apache.jena.riot.RDFDataMgr.write(RDFDataMgr.java:997)
>>  at tdb.tdbdump.exec(tdbdump.java:50)
>>  at arq.cmdline.CmdMain.mainMethod(CmdMain.java:101)
>>  at arq.cmdline.CmdMain.mainRun(CmdMain.java:63)
>>  at arq.cmdline.CmdMain.mainRun(CmdMain.java:50)
>>  at tdb.tdbdump.main(tdbdump.java:32)
>>
>>
>> Thank you again for your help.
>> Sincerely,
>> Laurent
>>
>>
>>
>> On 27 September 2017 at 13:27, Lorenz Buehmann wrote:
>>
>>> Query works for me on the sample data.
>>>
>>> Btw, there is an error in the first URI in the OPTIONAL clause. I'd
>>> suggest using SPARQL 1.1 VALUES to avoid redundantly declaring the
>>> same URI.
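The rewrite Lorenz suggests looks roughly like this (a sketch with placeholder example.org URIs, not the thread's actual query): VALUES binds the repeated URI once, and every triple pattern then reuses the variable.

```sparql
# Sketch only: placeholder URIs, not the original query.
# The annotation URI is declared in a single place via VALUES;
# all patterns, including the OPTIONAL, reuse ?s.
SELECT ?s ?name ?dim
WHERE {
  VALUES ?s { <http://example.org/annotation/000b3231-msr> }
  ?s <http://example.org/name> ?name .
  OPTIONAL { ?s <http://example.org/annotationDimension> ?dim }
}
```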
>>>
>>>
>>> On 27.09.2017 11:35, Andy Seaborne wrote:
>>>
 That's a lot of data and it's broken by email.  A small extract to
 illustrate the problem is all that is needed together with a stripped
 down query that shows the effect in question.  Something runnable.

 The query is different to the original as well - some of it is matching
 strings so you will need to reload the data.

  Andy


 On 26/09/17 20:32, Laurent Rucquoy wrote:

> - I will test to reload the data
> - The last source code is not what I sent before: I removed some
> specific parts when I transcribed it because I thought they were not
> relevant to this case, but I may be mistaken...
>
> - Here is a data sample:
>
>  <
> http://www.w3.org/1999/02/22-rdf-syntax-ns#type> <
> http://www.w3.org/2000/01/rdf-schema#Class> .
>  <
> http://thewebsemantic.com/javaclass>
> "com.telemis.core.aim.base.CalculationCollection" .
> <
> http://www.telemis.com/ImagingObservation/654f1e39-a3ce-
>
 483d-9a49-06cd7763d53c>
>>>

>  "654f1e39-a3ce-483d-9a49-06cd7763d53c" .
> <
> http://www.telemis.com/ImagingObservation/654f1e39-a3ce-
>
 483d-9a49-06cd7763d53c>
>>>

>  <
> 

Re: Memory only fuseki dying in a big heap

2017-09-28 Thread Andy Seaborne

Hi Kieron,

On 28/09/17 10:39, Kieron Taylor wrote:

Hi everyone,

I'm trying to use Fuseki as a temporary memory-only server, i.e. load RDF into 
memory, run queries, dispose of server.

My testing was going really well until I tried to take it from development on 
laptop to a compute farm.

JVM 1.8.0_112-b15
Fuseki version 3.4.0
Redhat enterprise 7 via LSF

Server invocation: java --Xmx24GB -Xms24GB -jar fuseki-server.jar --update 
--port 3355 --mem /test

 ^^
It's -Xmx, not --Xmx

I thought that caused an error - but if it doesn't, the max heap isn't 
set, which explains why -Xms is needed.
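A corrected invocation would look something like the following (a sketch: single-dash memory flags, and the JVM also expects a g/m/k size suffix such as 24g rather than "GB"):

```shell
# Sketch of a corrected invocation: -Xmx/-Xms with a single dash and a
# g suffix. Port and service name are as in the original post.
java -Xmx24g -Xms24g -jar fuseki-server.jar --update --port 3355 --mem /test
```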


I'd have thought 24G is plenty for 23 million triples unless there are 
many very large literals.




I load my data (totalling 23 million triples across tens of files) using the 
s-put utility into two graphs, and with time and progress depending on how much 
heap I have allocated (14 GB up to 40 GB), it loads for a while and then 
explodes. See below for a sample of the whole error.

The perplexing part is that I cannot see any sign of an error to trigger the 
dump or predict when it will die. If I do not set -Xms to the same as the -Xmx 
parameter, it dies within ten seconds of starting to load (where loading should 
take 30 minutes or more). If I give it loads of heap, the crash seems to occur 
around it receiving its first SPARQL query after the data is loaded. The client 
(calling s-post) sees generic_request.rb:206:in `copy_stream': Broken pipe - 
sendfile (Errno::EPIPE), which I infer to mean that the server has gone away 
mid-request.


Data is added transactionally so even if a bad update happens the rest 
of the data should be safe.




I have tried the following so far:

1. Add heap
2. Change JVM to another Java 8 release
3. Turn up Fuseki logging - No debug messages obviously indicate an error prior 
to the crash

Can anybody recommend a course of action to diagnose and fix my issue?


I hate to say it but it smells a bit like a hardware fault (or JVM 
fault?), especially the unpredictability. Anything software is usually 
reasonably predictable.
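One way to get more signal at the next crash is to enable GC logging and an on-OOM heap dump; these are standard HotSpot flags on Java 8, though the invocation itself is only a sketch:

```shell
# Standard Java 8 HotSpot diagnostics added to the Fuseki invocation (sketch):
# gc.log records every collection; a heap dump is written on OutOfMemoryError.
java -Xmx24g -Xms24g \
     -Xloggc:gc.log -XX:+PrintGCDetails -XX:+PrintGCDateStamps \
     -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/tmp/fuseki.hprof \
     -jar fuseki-server.jar --update --port 3355 --mem /test
```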


Andy




Regards,

Kieron

----- thread dump -----
Full thread dump Java HotSpot(TM) 64-Bit Server VM (25.112-b15 mixed mode):

"qtp596910004-172" #172 prio=5 os_prio=0 tid=0x2b9c54002000 nid=0xcc67 
waiting on condition [0x2b9bd3a7f000]
   java.lang.Thread.State: WAITING (parking)


This is the webserver (Jetty) waiting for something.


at sun.misc.Unsafe.park(Native Method)
- parking to wait for  <0x0004659a2800> (a 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject)
at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
at 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039)
at 
org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.run(EatWhatYouKill.java:173)
at 
org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:672)
at 
org.eclipse.jetty.util.thread.QueuedThreadPool$2.run(QueuedThreadPool.java:590)
at java.lang.Thread.run(Thread.java:745)

"qtp596910004-41" #41 prio=5 os_prio=0 tid=0x2b9c64001000 nid=0xb393 
runnable [0x2b9bd3c2d000]
   java.lang.Thread.State: RUNNABLE
. lots more threads

"VM Thread" os_prio=0 tid=0x2b9b3c3c5800 nid=0xd1da runnable

"GC task thread#0 (ParallelGC)" os_prio=0 tid=0x2b9b3c01f000 nid=0xd1c0 
runnable
 more GC

"VM Periodic Task Thread" os_prio=0 tid=0x2b9b3c432800 nid=0xd1ff waiting 
on condition

JNI global references: 499

Heap
PSYoungGen  total 3185664K, used 1577280K [0x00069c58, 
0x0007c000, 0x0007c000)
  eden space 1592832K, 99% used 
[0x00069c58,0x0006fc9d0308,0x0006fd90)
  from space 1592832K, 0% used 
[0x0006fd90,0x0006fd90,0x00075ec8)
  to   space 1592832K, 0% used 
[0x00075ec8,0x00075ec8,0x0007c000)
ParOldGen   total 9557504K, used 9557170K [0x00045500, 
0x00069c58, 0x00069c58)
  object space 9557504K, 99% used 
[0x00045500,0x00069c52c950,0x00069c58)
Metaspace   used 27217K, capacity 27644K, committed 28032K, reserved 
1073152K
  class spaceused 3508K, capacity 3636K, committed 3712K, reserved 1048576K


I don't see more than 10G being used here.



Kieron Taylor PhD.
Ensembl Developer

EMBL, European Bioinformatics Institute



Re: Missing solution in SPARQL select result, however this solution exists in the dataset

2017-09-28 Thread Andy Seaborne



On 28/09/17 09:33, Laurent Rucquoy wrote:
...


Note that the wrong behavior discussed here is strange because the given
SPARQL query does not return any data and when I remove the triple pattern
concerning the "annotationDimension" linked resource object (i.e. not a
literal object) the query returns the expected data (as if the linked
resource object did not exist... but this object exists)


I've run the tdbdump on the concerned dataset but the process ended earlier
than expected with the following stacktrace:

com.hp.hpl.jena.tdb.TDBException: Unrecognized node id type: 10
 at com.hp.hpl.jena.tdb.store.NodeId.extract(NodeId.java:346)
 at com.hp.hpl.jena.tdb.nodetable.NodeTableInline.getNodeForNodeId(
NodeTableInline.java:64)


It looks like the database files are damaged in some way - there isn't a 
"type: 10" NodeId.  It's been a long time but I don't remember any 
mention of this before for any version of TDB. (It's not the same as the 
"Invalid NodeId" errors.)


All I can think is that at some time in the past, maybe a very long time 
ago, there was a non-transaction update that didn't get flushed.


Or, maybe, have you run a Jena3 TDB on the database before trying to 
back it up?  I don't see why it would cause that particular message but 
it is a possibility to consider.


Andy



 at com.hp.hpl.jena.tdb.lib.TupleLib.triple(TupleLib.java:126)
 at com.hp.hpl.jena.tdb.lib.TupleLib.triple(TupleLib.java:114)
 at com.hp.hpl.jena.tdb.lib.TupleLib.access$000(TupleLib.java:45)
 at com.hp.hpl.jena.tdb.lib.TupleLib$3.convert(TupleLib.java:76)
 at com.hp.hpl.jena.tdb.lib.TupleLib$3.convert(TupleLib.java:72)
 at org.apache.jena.atlas.iterator.Iter$4.next(Iter.java:299)
 at org.apache.jena.atlas.iterator.Iter$4.next(Iter.java:299)
 at org.apache.jena.atlas.iterator.Iter.next(Iter.java:909)
 at org.apache.jena.atlas.iterator.IteratorCons.next(
IteratorCons.java:92)
 at org.apache.jena.riot.system.StreamRDFLib.quadsToStream(
StreamRDFLib.java:69)
 at org.apache.jena.riot.writer.NQuadsWriter.write(
NQuadsWriter.java:40)
 at org.apache.jena.riot.writer.NQuadsWriter.write(
NQuadsWriter.java:67)
 at org.apache.jena.riot.RDFDataMgr.write$(RDFDataMgr.java:1133)
 at org.apache.jena.riot.RDFDataMgr.write(RDFDataMgr.java:1007)
 at org.apache.jena.riot.RDFDataMgr.write(RDFDataMgr.java:997)
 at tdb.tdbdump.exec(tdbdump.java:50)
 at arq.cmdline.CmdMain.mainMethod(CmdMain.java:101)
 at arq.cmdline.CmdMain.mainRun(CmdMain.java:63)
 at arq.cmdline.CmdMain.mainRun(CmdMain.java:50)
 at tdb.tdbdump.main(tdbdump.java:32)


Thank you again for your help.
Sincerely,
Laurent



On 27 September 2017 at 13:27, Lorenz Buehmann  wrote:


Query works for me on the sample data.

Btw, there is an error in the first URI in the OPTIONAL clause. I'd
suggest using SPARQL 1.1 VALUES to avoid redundantly declaring the
same URI.


On 27.09.2017 11:35, Andy Seaborne wrote:

That's a lot of data and it's broken by email.  A small extract to
illustrate the problem is all that is needed together with a stripped
down query that shows the effect in question.  Something runnable.

The query is different to the original as well - some of it is matching
strings so you will need to reload the data.

 Andy


On 26/09/17 20:32, Laurent Rucquoy wrote:

- I will test to reload the data
- The last source code is not what I sent before: I removed some
specific parts when I transcribed it because I thought they were not
relevant to this case, but I may be mistaken...

- Here is a data sample:

 <
http://www.w3.org/1999/02/22-rdf-syntax-ns#type> <
http://www.w3.org/2000/01/rdf-schema#Class> .
 <
http://thewebsemantic.com/javaclass>
"com.telemis.core.aim.base.CalculationCollection" .
<
http://www.telemis.com/ImagingObservation/654f1e39-a3ce-

483d-9a49-06cd7763d53c>


 "654f1e39-a3ce-483d-9a49-06cd7763d53c" .
<
http://www.telemis.com/ImagingObservation/654f1e39-a3ce-

483d-9a49-06cd7763d53c>


 <
http://www.telemis.com/ImagingObservation> .
<
http://www.telemis.com/ImagingObservation/654f1e39-a3ce-

483d-9a49-06cd7763d53c>


 <
http://www.telemis.com/ImagingObservationCharacteristicColle

ction/f5cbfd6b-062b-4ca8-9ded-3e3b83170975>


.
<
http://www.telemis.com/ImagingObservation/654f1e39-a3ce-

483d-9a49-06cd7763d53c>



_:B5d507a5X3A159288c607fX3A71f1 .
<
http://www.telemis.com/ImagingObservation/654f1e39-a3ce-

483d-9a49-06cd7763d53c>


 "TELEMIS" .
<



Re: Missing solution in SPARQL select result, however this solution exists in the dataset

2017-09-28 Thread Laurent Rucquoy
Hello,

Here is a data sample subset:

<
http://www.telemis.com/ImageAnnotation/000b3231-a9c3-42b1-bb71-2d416f729db8-msr>
 <
http://www.telemis.com/ImageAnnotation> .
<
http://www.telemis.com/ImageAnnotation/000b3231-a9c3-42b1-bb71-2d416f729db8-msr>
 "ROI Circle measure" .
<
http://www.telemis.com/ImageAnnotation/000b3231-a9c3-42b1-bb71-2d416f729db8-msr>
 "MSR-ROI002" .
<
http://www.telemis.com/ImageAnnotation/000b3231-a9c3-42b1-bb71-2d416f729db8-msr>
 <
http://www.telemis.com/AnnotationDimension/dim-4ViewAsymR3> .
 <
http://www.telemis.com/numberOfDimension> "3"^^<
http://www.w3.org/2001/XMLSchema#integer> .
 <
http://www.telemis.com/mprLayout> "4ViewAsymR" .
 <
http://www.w3.org/1999/02/22-rdf-syntax-ns#type> <
http://www.telemis.com/AnnotationDimension> .

Note that the wrong behavior discussed here is strange because the given
SPARQL query does not return any data and when I remove the triple pattern
concerning the "annotationDimension" linked resource object (i.e. not a
literal object) the query returns the expected data (as if the linked
resource object did not exist... but this object exists)


I've run the tdbdump on the concerned dataset but the process ended earlier
than expected with the following stacktrace:

com.hp.hpl.jena.tdb.TDBException: Unrecognized node id type: 10
at com.hp.hpl.jena.tdb.store.NodeId.extract(NodeId.java:346)
at com.hp.hpl.jena.tdb.nodetable.NodeTableInline.getNodeForNodeId(
NodeTableInline.java:64)
at com.hp.hpl.jena.tdb.lib.TupleLib.triple(TupleLib.java:126)
at com.hp.hpl.jena.tdb.lib.TupleLib.triple(TupleLib.java:114)
at com.hp.hpl.jena.tdb.lib.TupleLib.access$000(TupleLib.java:45)
at com.hp.hpl.jena.tdb.lib.TupleLib$3.convert(TupleLib.java:76)
at com.hp.hpl.jena.tdb.lib.TupleLib$3.convert(TupleLib.java:72)
at org.apache.jena.atlas.iterator.Iter$4.next(Iter.java:299)
at org.apache.jena.atlas.iterator.Iter$4.next(Iter.java:299)
at org.apache.jena.atlas.iterator.Iter.next(Iter.java:909)
at org.apache.jena.atlas.iterator.IteratorCons.next(
IteratorCons.java:92)
at org.apache.jena.riot.system.StreamRDFLib.quadsToStream(
StreamRDFLib.java:69)
at org.apache.jena.riot.writer.NQuadsWriter.write(
NQuadsWriter.java:40)
at org.apache.jena.riot.writer.NQuadsWriter.write(
NQuadsWriter.java:67)
at org.apache.jena.riot.RDFDataMgr.write$(RDFDataMgr.java:1133)
at org.apache.jena.riot.RDFDataMgr.write(RDFDataMgr.java:1007)
at org.apache.jena.riot.RDFDataMgr.write(RDFDataMgr.java:997)
at tdb.tdbdump.exec(tdbdump.java:50)
at arq.cmdline.CmdMain.mainMethod(CmdMain.java:101)
at arq.cmdline.CmdMain.mainRun(CmdMain.java:63)
at arq.cmdline.CmdMain.mainRun(CmdMain.java:50)
at tdb.tdbdump.main(tdbdump.java:32)


Thank you again for your help.
Sincerely,
Laurent



On 27 September 2017 at 13:27, Lorenz Buehmann  wrote:

> Query works for me on the sample data.
>
> Btw, there is an error in the first URI in the OPTIONAL clause. I'd
> suggest using SPARQL 1.1 VALUES to avoid redundantly declaring the
> same URI.
>
>
> On 27.09.2017 11:35, Andy Seaborne wrote:
> > That's a lot of data and it's broken by email.  A small extract to
> > illustrate the problem is all that is needed together with a stripped
> > down query that shows the effect in question.  Something runnable.
> >
> > The query is different to the original as well - some of it is matching
> > strings so you will need to reload the data.
> >
> > Andy
> >
> >
> > On 26/09/17 20:32, Laurent Rucquoy wrote:
> >> - I will test to reload the data
> >> - The last source code is not what I sent before: I removed some
> >> specific parts when I transcribed it because I thought they were not
> >> relevant to this case, but I may be mistaken...
> >>
> >> - Here is a data sample:
> >>
> >>  <
> >> http://www.w3.org/1999/02/22-rdf-syntax-ns#type> <
> >> http://www.w3.org/2000/01/rdf-schema#Class> .
> >>  <
> >> http://thewebsemantic.com/javaclass>
> >> "com.telemis.core.aim.base.CalculationCollection" .
> >> <
> >> http://www.telemis.com/ImagingObservation/654f1e39-a3ce-
> 483d-9a49-06cd7763d53c>
> >>
> >>  "654f1e39-a3ce-483d-9a49-06cd7763d53c" .
> >> <
> >> http://www.telemis.com/ImagingObservation/654f1e39-a3ce-
> 483d-9a49-06cd7763d53c>
> >>
> >>  <
> >> http://www.telemis.com/ImagingObservation> .
> >> <
> >>