It seems like there might be some confusion about objects and ref objects.
In Nim, a raw object is like a C struct. It is passed everywhere by value, and
all of its memory is stored on the stack. That means that the compiler has to
know the total size of the object wherever it is used. For example in a seq,
objects are stored back-to-back in memory. If you looked at the contents of a
seq holding 3 Bs, you could read it as a series of ints like: B1.x B1.y
B2.x B2.y B3.x B3.y
Whereas a seq of 3 As would look like: A1.x A2.x A3.x
Notice how the seq of Bs is bigger than the seq of As, because each element
is larger. You wouldn't be able to fit a B into the seq of As because it would
take up too much space and throw off the indexing.
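A minimal sketch of the size difference (plain value objects; the field names here are assumptions matching the discussion above):

```nim
type
  A = object
    x: int
  B = object
    x: int
    y: int

# Each element of a seq[B] occupies sizeof(B) bytes, back-to-back,
# so the compiler must know the element size at compile time.
echo sizeof(A)  # one int wide
echo sizeof(B)  # two ints wide: too big to fit in a seq[A] slot
```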
However, with `ref` objects, things are different.
If A and B were declared as `ref object`s, with B inheriting from A, then you
would be able to have a seq with both:
type
  A = ref object of RootObj
    x: int
  B = ref object of A
    y: int
  X = ref object
    values: seq[A]

var myX = X(values: @[])
myX.values.add B(x: 0, y: 5)
This is because `ref object`s are more like objects in fully managed languages
such as Java. Whenever you have a variable that is a `ref object`, it's
actually a pointer to some space on the heap where the object's full contents
are stored. The pointers are all the same size, but they can point at objects
of different sizes. So it's just fine to have a seq of these references:
ptr[A1] ptr[B2] ptr[A3]
If B were not an `object of A`, you could still have a heterogeneous seq of
them, but you wouldn't be able to do anything with it because you wouldn't know
anything about the types. The type system lets you do this as long as they have
the same ancestor (RootObj, usually).
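To actually do something useful with such a heterogeneous seq, you can branch on the runtime type with the `of` operator. A small sketch, reusing the A and B declarations from the snippet above:

```nim
type
  A = ref object of RootObj
    x: int
  B = ref object of A
    y: int

# A seq[A] can hold both A and B references, since B inherits from A.
var values: seq[A] = @[A(x: 1), B(x: 2, y: 5)]

for v in values:
  if v of B:
    # Checked downcast: safe here because we just tested the type.
    echo "B with y = ", B(v).y
  else:
    echo "A with x = ", v.x
```

For larger programs, `method` (dynamic dispatch) is usually cleaner than chains of `of` checks, but the idea is the same: the static element type is A, and the concrete type is only known at runtime.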