On Sun, Aug 15, 2010 at 12:07 AM, Tim Starling wrote:
> He might have shot holes in it if he hadn't suggested it 5 days ago:
>
> my armchair architect idea was writing to some cyclical shared buffer
> and allowing other threads to pick stuff from it
>
> The tricky thing would be efficiently sy
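The "cyclical shared buffer" idea above can be sketched as a fixed-size single-producer/single-consumer ring: the receiving thread writes log lines into slots, and a consumer thread picks them up without a shared mutex. This is a hypothetical illustration only (slot sizes, names, and the SPSC restriction are my assumptions, not udp2log's actual design):

```c
/* Hypothetical sketch of a "cyclical shared buffer": a single-producer,
 * single-consumer ring. RING_SLOTS and SLOT_BYTES are illustrative. */
#include <assert.h>
#include <stdatomic.h>
#include <string.h>

#define RING_SLOTS 1024          /* power of two, so we can mask instead of mod */
#define SLOT_BYTES 64

struct ring {
    char slots[RING_SLOTS][SLOT_BYTES];
    atomic_size_t head;          /* next slot the producer will write */
    atomic_size_t tail;          /* next slot the consumer will read  */
};

/* Producer (packet-receiving thread): returns 0 if the ring is full,
 * in which case the caller drops the line rather than blocking. */
static int ring_push(struct ring *r, const char *line)
{
    size_t head = atomic_load_explicit(&r->head, memory_order_relaxed);
    size_t tail = atomic_load_explicit(&r->tail, memory_order_acquire);
    if (head - tail == RING_SLOTS)
        return 0;                              /* full */
    size_t i = head & (RING_SLOTS - 1);
    strncpy(r->slots[i], line, SLOT_BYTES - 1);
    r->slots[i][SLOT_BYTES - 1] = '\0';
    atomic_store_explicit(&r->head, head + 1, memory_order_release);
    return 1;
}

/* Consumer (a log-processor thread): returns 0 if the ring is empty. */
static int ring_pop(struct ring *r, char *out)
{
    size_t tail = atomic_load_explicit(&r->tail, memory_order_relaxed);
    size_t head = atomic_load_explicit(&r->head, memory_order_acquire);
    if (tail == head)
        return 0;                              /* empty */
    memcpy(out, r->slots[tail & (RING_SLOTS - 1)], SLOT_BYTES);
    atomic_store_explicit(&r->tail, tail + 1, memory_order_release);
    return 1;
}
```

The "tricky thing" Tim alludes to is exactly the synchronization shown here: with one producer and one consumer, acquire/release ordering on head and tail is enough; with several consumers picking from the same ring, you would need something stronger.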
On 14/08/10 03:56, Aryeh Gregor wrote:
> While we're having fun speculating on possible designs without
> actually volunteering to write the code ;): wouldn't a circular buffer
> make more sense than a linked list?
[...]
> Now I want Domas to shoot holes in my idea too! :)
He might have shot holes in it if he hadn't suggested it 5 days ago:
On Fri, Aug 13, 2010 at 8:55 AM, Magnus Manske
wrote:
> Disk dump thread:
> * Get mutex for list start pointer
> * Copy list start pointer
> * Reset list start pointer = NULL
> * Release mutex
> * Write list to disk
> * Release memory
>
> If you allocate memory per list item, the freed ones should
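Magnus's disk-dump scheme above can be sketched as follows: producers prepend to a mutex-protected linked list, and the dump thread swaps the head pointer out under the mutex, then does all the slow work (writing, freeing) outside it. The names and the 64-byte line limit are my own; this is an illustration of the steps, not udp2log's code:

```c
/* Sketch of the pointer-swap flush: the mutex is held only for the
 * pointer copy/reset, never during disk I/O. Illustrative only. */
#include <assert.h>
#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

struct entry {
    char line[64];
    struct entry *next;
};

static struct entry *list_head;                      /* shared list start pointer */
static pthread_mutex_t list_mutex = PTHREAD_MUTEX_INITIALIZER;

/* Producer side: O(1) prepend, mutex held only briefly. */
static void log_line(const char *line)
{
    struct entry *e = malloc(sizeof *e);
    snprintf(e->line, sizeof e->line, "%s", line);
    pthread_mutex_lock(&list_mutex);
    e->next = list_head;
    list_head = e;
    pthread_mutex_unlock(&list_mutex);
}

/* Dump thread: the six steps from the mail. Returns entries written. */
static size_t flush_to_disk(FILE *out)
{
    pthread_mutex_lock(&list_mutex);                 /* get mutex             */
    struct entry *batch = list_head;                 /* copy start pointer    */
    list_head = NULL;                                /* reset pointer = NULL  */
    pthread_mutex_unlock(&list_mutex);               /* release mutex         */

    size_t n = 0;
    while (batch) {                                  /* write list to disk    */
        struct entry *next = batch->next;
        fprintf(out, "%s\n", batch->line);
        free(batch);                                 /* release memory        */
        batch = next;
        n++;
    }
    return n;
}
```

One consequence of the prepend: entries come out in reverse arrival order, so real code would either reverse the batch before writing or timestamp each entry.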
On Fri, Aug 13, 2010 at 6:16 AM, Domas Mituzas wrote:
>> Without having looked at any code, can't the threads just add data to
>> a semaphore linked list (fast), and a single separate thread writes
>> the stuff to disk occasionally?
>
> Isn't that the usual error that threaded software developers make:
> Without having looked at any code, can't the threads just add data to
> a semaphore linked list (fast), and a single separate thread writes
> the stuff to disk occasionally?
Isn't that the usual error that threaded software developers make:
1. make all threads depend on a single mutex
2. watch them
On Thu, Aug 12, 2010 at 10:54 AM, Domas Mituzas wrote:
> There are no context switches, as it's running fully on one core.
> "plenty of CPU" is 100% core use; most of the time is spent in write(), and
> apparently syscalls aren't free.
Without having looked at any code, can't the threads just add data to
a semaphore linked list (fast), and a single separate thread writes
the stuff to disk occasionally?
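Domas's point that "syscalls aren't free" suggests a complementary fix regardless of threading: batch many log lines into one userspace buffer and hand them to the kernel with a single write(), instead of paying one syscall per line. A minimal sketch (buffer size and names are my assumptions; error handling is elided):

```c
/* Illustrative write-batching sketch, not udp2log code: amortize the
 * cost of write() over many log lines. */
#include <assert.h>
#include <string.h>
#include <unistd.h>

#define BATCH_BYTES 65536

struct batcher {
    int fd;
    size_t used;
    char buf[BATCH_BYTES];
};

/* Flush whatever is buffered with (usually) one write() call. */
static void batch_flush(struct batcher *b)
{
    size_t off = 0;
    while (off < b->used) {
        ssize_t n = write(b->fd, b->buf + off, b->used - off);
        if (n <= 0)
            break;               /* real code would handle EINTR and errors */
        off += (size_t)n;
    }
    b->used = 0;
}

/* Append one line; only touch the kernel when the buffer fills up. */
static void batch_append(struct batcher *b, const char *line, size_t len)
{
    if (b->used + len > BATCH_BYTES)
        batch_flush(b);
    memcpy(b->buf + b->used, line, len);
    b->used += len;
}
```

With a 64 KB buffer and ~100-byte lines, this turns hundreds of write() calls into one, which is exactly where the profile above says the CPU is going.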
Hi!
> Sure. Make each thread call accept and let the kernel give incoming
> sockets to one of them. There you have the listener done :)
> Solaris used to need explicit locking, but it is now fixed there, too.
Heh, I had somewhat overlooked this approach - yeah, it would work just fine - one can do
per-fi
Domas Mituzas wrote:
> Hi!
>
>> Going multithread is really easy for a socket listener.
>
> Really? :)
Sure. Make each thread call accept and let the kernel give incoming
sockets to one of them. There you have the listener done :)
Solaris used to need explicit locking, but it is now fixed there, too.
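The design described above really is that short: every worker blocks in accept() on the same listening socket, and the kernel wakes exactly one per connection. A hypothetical sketch (thread count, loopback address, and the trivial handler are my choices for illustration):

```c
/* Sketch of "make each thread call accept": N threads share one
 * listening fd; no userspace lock is needed on modern Linux. */
#include <arpa/inet.h>
#include <assert.h>
#include <netinet/in.h>
#include <pthread.h>
#include <sys/socket.h>
#include <unistd.h>

#define NTHREADS 3

static void *acceptor(void *arg)
{
    int listen_fd = *(int *)arg;
    /* All workers block here; the kernel hands each incoming
     * connection to exactly one of them. */
    int conn = accept(listen_fd, NULL, NULL);
    if (conn >= 0)
        close(conn);             /* real code would run a worker loop here */
    return NULL;
}

/* Bind 127.0.0.1 on an ephemeral port; report the address chosen. */
static int make_listener(struct sockaddr_in *addr_out)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    assert(fd >= 0);
    struct sockaddr_in addr = {0};
    addr.sin_family = AF_INET;
    addr.sin_addr.s_addr = htonl(INADDR_LOOPBACK);
    addr.sin_port = 0;           /* let the kernel pick a free port */
    assert(bind(fd, (struct sockaddr *)&addr, sizeof addr) == 0);
    assert(listen(fd, 16) == 0);
    socklen_t len = sizeof *addr_out;
    assert(getsockname(fd, (struct sockaddr *)addr_out, &len) == 0);
    return fd;
}
```

The Solaris caveat in the mail refers to older kernels where concurrent accept() on one socket needed a lock around the call; on Linux it has long been safe as-is.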
Hi!
> Going multithread is really easy for a socket listener.
Really? :)
> However, not so
> much in the LogProcessors. If they are shared across threads, you may
> end up with all threads blocked in fwrite, and if they aren't shared,
> the files may easily corrupt (depends on what you are
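One middle ground between the two bad options above is a per-destination lock: each LogProcessor owns its file and its own mutex, so threads only contend when they target the same destination, and no file ever sees interleaved partial lines. A sketch under those assumptions (names are hypothetical, not udp2log's):

```c
/* Sketch: per-LogProcessor mutex instead of one global lock, so a slow
 * fwrite on one destination doesn't block writers to the others. */
#include <assert.h>
#include <pthread.h>
#include <stdio.h>
#include <string.h>

struct log_processor {
    FILE *out;
    pthread_mutex_t mutex;       /* serializes writes to this file only */
};

static void processor_write(struct log_processor *p, const char *line)
{
    pthread_mutex_lock(&p->mutex);
    fputs(line, p->out);         /* no interleaving within one file */
    pthread_mutex_unlock(&p->mutex);
}
```

Threads writing to different processors never touch each other's mutexes; only a popular destination becomes a serialization point.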
Rob Lanphier wrote:
> Since it's single threaded, it's handling each of the
> configured logging destinations before reading the next packet. We're
> not CPU-bound at this point. The existing solution seems to start
> flaking out at 40% CPU with a complicated configuration, and is
> humming along
Hi Mark,
Thanks for the helpful reply. Comments inline:
On Tue, Aug 10, 2010 at 2:54 AM, Mark Bergsma wrote:
> As already stated elsewhere, we didn't really saturate any NICs, just
> some socket buffers. Because of the large number of configured log
> pipes, the software (udp2log) could not emp
Hi!
> multiple collectors with distinct log pipes setup. E.g. one machine for
> the sampled logging, and another, independent machine to do all the
> special purpose log streams. I do like more efficient software solutions
> rather than throwing more iron at the problem, though. :)
Frankly, we cou
On 10-08-10 07:16, Rob Lanphier wrote:
> At any rate, there are a couple of problems with the way that it works:
> 1. Once we saturate the NIC on the logging machine, the quality of
> our sampling degrades pretty rapidly. We've generally had a problem
> with that over the past few months.
>
A
Robert Rohde schrieb:
> Rob,
>
> I'm not completely sure whether or not you are talking about the same
> logging infrastructure that leads to our traffic stats at
> stats.grok.se [1]. However, having worked with those stats and the
> raw files provided by Domas [2], I am pretty sure that those squid
Rob,
I'm not completely sure whether or not you are talking about the same
logging infrastructure that leads to our traffic stats at
stats.grok.se [1]. However, having worked with those stats and the
raw files provided by Domas [2], I am pretty sure that those squid
traffic stats are intended to
On Mon, Aug 9, 2010 at 11:17 PM, Tim Starling wrote:
> On 10/08/10 15:16, Rob Lanphier wrote:
>> We have a single collection point for all of our logging, which is
>> actually just a sampling of the overall traffic (designed to be
>> roughly one out of every 1000 hits). The process is described h
On 10/08/10 15:16, Rob Lanphier wrote:
> We have a single collection point for all of our logging, which is
> actually just a sampling of the overall traffic (designed to be
> roughly one out of every 1000 hits). The process is described here:
> http://wikitech.wikimedia.org/view/Squid_logging
>
Hi everyone,
We're in the process of figuring out how to fix some of the issues in
our logging infrastructure. I'm sending this email out both to get
the more knowledgeable folks to chime in on where I've got the
details wrong, and to invite general comment on how we're doing our logging.
We may ne