It still helps a lot, because you can have many reporting programs, each 
talking to a different process on the server, and those processes can get 
the transactions done very quickly, with multiple write daemons and 
journaling daemons doing the actual I/O from the shared buffer pool. (And 
yes, this has been used precisely for collecting metrics, on a very large 
scale, across all of Germany in one case, IIRC.)
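To make the pattern concrete, here is a minimal thread-based sketch in Python of what's described above: many writers commit records into a shared in-memory buffer, and a single journaling daemon does the actual disk I/O in the background. All names here are invented for illustration; this is not Caché's actual API, and real databases use shared memory across OS processes rather than threads.

```python
import os
import queue
import tempfile
import threading

class SharedBufferPool:
    """Writers enqueue dirty records; a daemon thread journals them to disk."""
    def __init__(self, journal_path):
        self.journal_path = journal_path
        self.dirty = queue.Queue()          # stand-in for the shared buffer pool
        self.closed = threading.Event()
        self.daemon = threading.Thread(target=self._journal_daemon)
        self.daemon.start()

    def write(self, record):
        # A writer's "transaction" is just an in-memory enqueue; it returns
        # quickly because no disk I/O happens on the writer's path.
        self.dirty.put(record)

    def _journal_daemon(self):
        # The daemon drains dirty records out to the journal file.
        with open(self.journal_path, "a") as journal:
            while not (self.closed.is_set() and self.dirty.empty()):
                try:
                    record = self.dirty.get(timeout=0.05)
                except queue.Empty:
                    continue
                journal.write(record + "\n")
                journal.flush()             # stand-in for fsync'ing the journal

    def close(self):
        self.closed.set()                   # daemon drains the queue, then exits
        self.daemon.join()

# Usage: several writer threads stand in for the reporting processes.
path = os.path.join(tempfile.mkdtemp(), "journal.log")
pool = SharedBufferPool(path)
writers = [threading.Thread(target=lambda i=i: pool.write(f"metric-{i}"))
           for i in range(10)]
for w in writers:
    w.start()
for w in writers:
    w.join()
pool.close()
```

The point of the design is that commit latency on the writer side is decoupled from disk latency; durability is the daemon's job.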

On Thursday, May 21, 2015 at 2:28:20 AM UTC-4, Andrei Zh wrote:
>
> But it also means that this trick won't work with separate machines for 
> database (metric server) and reporting program, which is one of the goals. 
>
> On Thu, May 21, 2015 at 12:19 AM, Scott Jones <[email protected]> wrote:
>
>>
>>
>> On Wednesday, May 20, 2015 at 4:26:40 PM UTC-4, Andrei Zh wrote:
>>>
>>> Well, if they don't use any tricks like passing data through shared 
>>> memory or heavy batching, then it's pretty impressive. But, as you 
>>> mentioned, in this particular case Caché is not an option.
>>>
>>
>> I would say that *any* decent database does "tricks" like using shared 
>> memory... Aerospike does, I don't know about Redis... Caché has a large 
>> shared buffer pool... all processes can read or write B+ tree blocks via 
>> that buffer pool, and there are daemons that take care of making sure the 
>> journal is sync'ed to disk, that the blocks get out to disk every so 
>> often, etc. 
>>
>
>
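The shared-buffer-pool idea in the quoted message can be sketched as a block cache: processes read and write fixed-size blocks through an in-memory cache, and a write daemon pushes dirty blocks out to disk every so often. This is a hypothetical illustration (a dict stands in for the block device), not Caché's actual implementation.

```python
import threading
import time

class BufferPool:
    """In-memory block cache with a background write daemon."""
    def __init__(self, disk):
        self.disk = disk                    # stand-in for the block device
        self.cache = {}                     # block_id -> block contents
        self.dirty = set()                  # block ids not yet flushed to disk
        self.lock = threading.Lock()
        self.stop = threading.Event()
        self.write_daemon = threading.Thread(target=self._flush_loop)
        self.write_daemon.start()

    def read_block(self, block_id):
        with self.lock:
            if block_id not in self.cache:  # cache miss: fetch from "disk"
                self.cache[block_id] = self.disk.get(block_id, b"")
            return self.cache[block_id]

    def write_block(self, block_id, data):
        with self.lock:                     # writers only touch memory
            self.cache[block_id] = data
            self.dirty.add(block_id)

    def _flush_loop(self):
        while not self.stop.is_set():
            self._flush()
            time.sleep(0.01)                # "every so often"
        self._flush()                       # final flush on shutdown

    def _flush(self):
        with self.lock:
            for block_id in self.dirty:
                self.disk[block_id] = self.cache[block_id]
            self.dirty.clear()

    def close(self):
        self.stop.set()
        self.write_daemon.join()

# Usage: a writer updates a block in memory; the daemon persists it.
disk = {}
pool = BufferPool(disk)
pool.write_block(1, b"root block")
data = pool.read_block(1)
pool.close()                                # final flush happens here
```

Reads and writes go through shared memory and never wait on the device; the write daemon is the only path that touches disk.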
