Now, as it turns out, we're actually mostly agreeing (I stripped the quote, it
was simply too much). The one point I don't understand:
On Wednesday 31 October 2007 12:44, Geoffroy Vallee wrote:
> > I tried with the gforge online repository and with the local ones. Under RPM
> > based distros this was fast enough. But I agree that this is a potential
> > performance problem. But it is unclear to me when we'd want to synchronize
> > with the online repositories.
>
> I disagree with that, it is clear that with yume/rapt, we will never
> scale to few hundred nodes, especially based on the few tests i did with
> yum.
You say "yume/rapt, we will never scale to few hundred nodes". For which
operation, exactly? My choices:
1) query opkgs which _need_ to be installed on 1000 nodes ==
query the image for these nodes == 1 query, no matter how many nodes
you have. This is information you don't have in the Packages table anyway!
2) query which opkgs are really installed on _each_ node:
2.1) if this is done by the node itself: no scaling problem at all. Each
     node uses only its own RPM database and doesn't disturb the others.
     If the node needed to access the central database, THAT would
     spoil scalability.
2.2) if the master node needs to query each node: why would I ever want to
     do this via yume/rapt? I am not insane. This information lives in
     Packages_Nodes_Status, and I NEVER talked about getting rid of that.
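To make 2.1) concrete, here is a rough sketch (all names and package lists are
made up for illustration, and this is not the actual OSCAR code): each node
compares the expected package list for its image against what its local RPM
database reports, entirely node-locally, so only the small diff ever travels
back to the master.

```python
def missing_packages(expected, installed):
    """Packages the node still needs, computed entirely node-locally.

    No query against the central database is involved, which is why this
    scheme scales with the number of nodes.
    """
    return sorted(set(expected) - set(installed))

# On a real node, `installed` would come from the local RPM database
# (e.g. the output of `rpm -qa --qf '%{NAME}\n'`); here we fake both lists.
expected = ["openmpi", "pvm", "torque", "c3"]
installed = ["openmpi", "c3"]
print(missing_packages(expected, installed))  # -> ['pvm', 'torque']
```

The master then only has to collect these diffs and record them in
Packages_Nodes_Status, which is one small message per node rather than one
database query per node.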
I thought you implemented something along the lines of 2.1). Maybe I'm wrong.
Regards,
Erich
_______________________________________________
Oscar-devel mailing list
[email protected]
https://lists.sourceforge.net/lists/listinfo/oscar-devel