On export/import the RIDs can change, but the import process takes care to
remap all the RIDs in all the relationships.
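
One way to picture it: while re-inserting each record the importer remembers
"exported RID -> new RID", and afterwards it rewrites every link through that
map. A minimal sketch of the idea in plain Java (hypothetical names, not the
actual importer code):

    import java.util.HashMap;
    import java.util.List;
    import java.util.Map;

    public class RidRemapSketch {
        // Filled during pass 1: RID from the export file -> RID assigned on insert.
        static final Map<String, String> ridMap = new HashMap<>();

        // Pass 1: re-insert the record and remember the new RID it receives.
        static void importRecord(String exportedRid, String content) {
            String newRid = insertIntoStorage(content); // storage assigns the RID
            ridMap.put(exportedRid, newRid);
        }

        // Pass 2: rewrite every link field of every record through the map.
        static void remapLinks(List<String> linkFields) {
            linkFields.replaceAll(old -> ridMap.getOrDefault(old, old));
        }

        // Placeholder: a real storage would return something like "#12:345".
        static String insertIntoStorage(String content) {
            return "#12:" + Math.abs(content.hashCode() % 1000);
        }
    }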

Lvc@


On 23 February 2014 02:57, Ted Smith <[email protected]> wrote:

> So for PLOCAL, the RID can be considered (or made) "permanent" across
> backup/restore or export/import, since it uses an extra pointer to the
> actual physical position, which could change during the process.
> Is the above statement correct?
>
>
>
>
> On Sat, Feb 22, 2014 at 7:16 PM, Luca Garulli <[email protected]> wrote:
>
>> Hi,
>> how records are managed is storage dependent. So LOCAL maps the RID to an
>> offset and then stores a pointer to the real position on disk. Using a
>> middle structure allows records to be moved during defrag without changing
>> the RID.
>>
>> This is not an index, but an indirect lookup (2 lookups).
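>>
>> A rough sketch of that indirection in plain Java (illustrative only, not
>> the real plocal code):
>>
>>     import java.util.HashMap;
>>     import java.util.Map;
>>
>>     // The RID's cluster position indexes a map entry; the entry holds
>>     // the record's current physical offset on disk.
>>     class ClusterPositionMap {
>>         private final Map<Long, Long> positionToOffset = new HashMap<>();
>>
>>         // Lookup 1: resolve the cluster position to the current disk
>>         // offset. (Lookup 2 is reading the record at that offset.)
>>         long physicalOffset(long clusterPosition) {
>>             return positionToOffset.get(clusterPosition);
>>         }
>>
>>         // Defrag moves the record on disk and updates only this entry,
>>         // so the RID itself never changes.
>>         void onRecordMoved(long clusterPosition, long newOffset) {
>>             positionToOffset.put(clusterPosition, newOffset);
>>         }
>>     }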
>>
>> Lvc@
>>
>>
>>
>> On 23 February 2014 00:16, Steve <[email protected]> wrote:
>>
>>> Thanks Luca,
>>>
>>> So if it is a physical offset, what happens if the record has to be
>>> physically moved? E.g. say we have record n at offset 100 and record n+1
>>> at offset 200. If we update record n and add a property that is 101 bytes
>>> long, don't we have to move the record, since it won't fit without
>>> overwriting part of record n+1? And during a defrag operation I would
>>> assume many records get moved around?
>>>
>>> Perhaps I am not making a distinction between a record and an object.
>>> Is it that a logical object can be represented by many records over its
>>> lifecycle?
>>>
>>> If a move does happen, wouldn't this mean the entire database has to be
>>> scanned to find any references to that physical cluster position? Or is
>>> there a table of back-references stored somewhere?
>>>
>>>
>>> On 23/02/14 09:07, Luca Garulli wrote:
>>>
>>> It's the offset inside a cluster, so it can never change during the
>>> record's lifecycle, up until it is deleted. And with plocal the RID is
>>> not recycled.
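>>>
>>> For reference, a RID like #12:345 is just the pair (cluster id, position
>>> inside that cluster); a toy parse in plain Java, purely illustrative:
>>>
>>>     String rid = "#12:345";
>>>     String[] parts = rid.substring(1).split(":");
>>>     int clusterId = Integer.parseInt(parts[0]);  // 12  -> which cluster
>>>     long clusterPos = Long.parseLong(parts[1]);  // 345 -> position inside it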
>>>
>>>  Lvc@
>>>
>>>
>>>
>>> On 22 February 2014 02:26, Steve <[email protected]> wrote:
>>>
>>>> Is it the actual offset of the record in the cluster, or is it an index
>>>> into a lookup table? If it is the literal offset, wouldn't that mean
>>>> that if a record has to be moved (perhaps due to growing in size), all
>>>> references to it have to be found and updated?
>>>>
>>
>

