On Sunday, 6 March 2016 at 13:21:41 UTC, Shachar Shemesh wrote:
You have an iret whether you switched or not. Going into the
kernel and coming back costs that, whether we switched or not. In
particular, since you need a system call in order to call
"read", "write" and "accept", those costs are there
For switching between threads, this seems wrong.
On 04/03/16 20:29, deadalnix wrote:
The minimal cost of a context switch is one TLB miss (~300), one cache
miss (~300)
Why do you claim the first two? Again, assuming threads of the same
process, where the address space (minus the TLS) is the
On 05/03/16 20:50, Sean Kelly wrote:
On Friday, 4 March 2016 at 06:55:29 UTC, Shachar Shemesh wrote:
On 03/03/16 19:31, Andrei Alexandrescu wrote:
https://www.mailinator.com/tymaPaulMultithreaded.pdf
On a completely different note, a colleague and I started a proof of
concept to disprove
On Friday, 4 March 2016 at 06:55:29 UTC, Shachar Shemesh wrote:
On 03/03/16 19:31, Andrei Alexandrescu wrote:
https://www.mailinator.com/tymaPaulMultithreaded.pdf
On a completely different note, a colleague and I started a
proof of concept to disprove the claim that blocking+threads is
Am 03.03.2016 um 18:31 schrieb Andrei Alexandrescu:
https://www.mailinator.com/tymaPaulMultithreaded.pdf
Andrei
A few points that come to mind:
- Comparing random different high-level libraries is bound to give
results that measure abstraction overhead/non-optimal system API use.
Comparing
On Friday, 4 March 2016 at 23:10:28 UTC, deadalnix wrote:
Not if it is hyper-threaded, as pairs of threads are sharing
resources.
Irrelevant.
I knew you would say that, but you are wrong.
Async does not make any sense on a DB. You want async when you
are bound by a 3rd party system.
Err...
On Friday, 4 March 2016 at 23:26:02 UTC, Chris Wright wrote:
You can get that control by interacting with the scheduler.
Thread schedulers tend to be in the kernel and fiber schedulers
tend to be in userspace, so as a practical matter, it should be
easier to get that control with fibers.
But
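Chris's point about userspace schedulers can be sketched concretely. A fiber scheduler is just ordinary application code, so handing control between tasks needs no kernel involvement at all. This is a minimal illustrative sketch using Python generators as stand-in fibers (not D's core.thread.Fiber API):

```python
from collections import deque

def scheduler(fibers):
    """Round-robin userspace scheduler: resumes each fiber in turn
    until all have finished. No kernel calls are involved."""
    ready = deque(fibers)
    trace = []
    while ready:
        fib = ready.popleft()
        try:
            trace.append(next(fib))  # resume the fiber until its next yield
            ready.append(fib)        # still alive: reschedule it
        except StopIteration:
            pass                     # fiber finished
    return trace

def worker(name, steps):
    for i in range(steps):
        yield f"{name}:{i}"          # cooperative yield back to the scheduler

# Two fibers interleaved entirely in userspace.
print(scheduler([worker("a", 2), worker("b", 2)]))
# → ['a:0', 'b:0', 'a:1', 'b:1']
```

Because the scheduling policy lives in your own code, "interacting with the scheduler" is just calling a function; a kernel thread scheduler offers no such hook.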
On Fri, 04 Mar 2016 22:22:48 +0000, Ola Fosheim Grøstad wrote:
> On Friday, 4 March 2016 at 03:14:01 UTC, Ali Çehreli wrote:
>> And that's exactly one of the benefits of fibers: two workers ping pong
>> back and forth, without much risk of losing their cached data.
>>
>> Is my assumption
On Friday, 4 March 2016 at 22:22:48 UTC, Ola Fosheim Grøstad
wrote:
On Friday, 4 March 2016 at 03:14:01 UTC, Ali Çehreli wrote:
And that's exactly one of the benefits of fibers: two workers
ping pong back and forth, without much risk of losing their
cached data.
Is my assumption correct?
On Friday, 4 March 2016 at 03:14:01 UTC, Ali Çehreli wrote:
And that's exactly one of the benefits of fibers: two workers
ping pong back and forth, without much risk of losing their
cached data.
Is my assumption correct?
Not if it is hyper-threaded, as pairs of threads are sharing
On Friday, 4 March 2016 at 03:14:01 UTC, Ali Çehreli wrote:
I imagine that lost cache is one of the biggest costs in thread
switching. It would be great if a thread could select a thread
with something like "I'm done, now please switch to my reader".
And that's exactly one of the benefits of
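The direct hand-off Ali describes ("I'm done, now please switch to my reader") can be sketched with generators, where the writer resumes its designated reader explicitly instead of going through a scheduler. Purely illustrative; the names here are made up, not any real fiber API:

```python
def reader(log):
    while True:
        item = yield                # suspend until the writer hands us a value
        log.append(f"read {item}")

def writer(items, rdr, log):
    for item in items:
        log.append(f"wrote {item}")
        rdr.send(item)              # "switch to my reader": control passes
                                    # straight to the reader, data still hot

log = []
rdr = reader(log)
next(rdr)                           # prime the reader to its first yield
writer([1, 2], rdr, log)
print(log)                          # → ['wrote 1', 'read 1', 'wrote 2', 'read 2']
```

The point being illustrated: each value is consumed immediately after it is produced, so it is plausibly still in cache when the reader touches it.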
On 03/03/16 19:31, Andrei Alexandrescu wrote:
https://www.mailinator.com/tymaPaulMultithreaded.pdf
Andrei
You just stepped on a pet peeve of mine. NIO isn't async IO. It's
non-blocking IO. Many people (including Microsoft's MSDN) confuse the
two, but they are completely and utterly
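The distinction Shachar is drawing can be shown concretely: a non-blocking call returns immediately with "not ready" and you retry when readiness is signalled, whereas true async I/O submits an operation and is notified on completion. A small sketch of the non-blocking half, using a local socket pair:

```python
import socket

a, b = socket.socketpair()
a.setblocking(False)            # non-blocking mode: reads never wait
try:
    a.recv(16)                  # nothing buffered yet
    outcome = "data"
except BlockingIOError:
    outcome = "would block"     # the call did NOT wait - this is non-blocking
                                # IO, not async IO: no completion is delivered
print(outcome)                  # → would block

b.send(b"hi")
# After readiness (normally signalled by select/epoll), the retry succeeds:
print(a.recv(16))               # → b'hi'
a.close(); b.close()
```

With async I/O there would be no retry loop at all: the read would be handed off whole and the application notified once the data had already been copied in.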
On 03/03/2016 06:01 PM, deadalnix wrote:
> 2x64 KB L1 cache, which probably reduces the cache thrashing due
> to context switches
(I am speaking without measuring anything.)
I imagine that lost cache is one of the biggest costs in thread
switching. It would be great if a thread could select a
On Thursday, 3 March 2016 at 20:31:51 UTC, Vladimir Panteleev
wrote:
3. The first benchmark essentially measures the overhead of
fiber context switching and nothing else
Ah yes, forgot that. Many JVMs use fibers instead of threads
internally.
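Vladimir's observation, that the first benchmark mostly measures switch overhead and nothing else, is easy to reproduce in miniature. The sketch below times nothing but the suspend/resume pair itself, with Python generators standing in for fibers; the absolute number will differ wildly from D fibers or Java threads, but the shape of the measurement is the same:

```python
import timeit

def fiber():
    while True:
        yield                       # a context "switch" and nothing else

f = fiber()
next(f)                             # prime the generator to its first yield
# Each call to next(f) is one suspend+resume pair - pure switch overhead,
# with no actual work attached, exactly like the slide deck's first benchmark.
n = 100_000
per_switch = timeit.timeit(lambda: next(f), number=n) / n
print(f"{per_switch * 1e9:.0f} ns per switch")
```

Any workload doing real work per switch would bury this number, which is why a benchmark at 0 work per switch says little about server throughput.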
On Thursday, 3 March 2016 at 17:31:59 UTC, Andrei Alexandrescu
wrote:
https://www.mailinator.com/tymaPaulMultithreaded.pdf
Andrei
A lot of the data presented is kind of skewed. For instance, the
synchronization costs across cores are measured at 0% writes. It
comes as no surprise that
On Thursday, 3 March 2016 at 17:31:59 UTC, Andrei Alexandrescu
wrote:
https://www.mailinator.com/tymaPaulMultithreaded.pdf
Andrei
Not an expert on the subject, but FWIW:
1. This is from 2008
2. Seems to be highly specific to Java
3. The first benchmark essentially measures the overhead of
On 03/03/2016 09:31 AM, Andrei Alexandrescu wrote:
https://www.mailinator.com/tymaPaulMultithreaded.pdf
Andrei
Another interesting architecture is LMAX Disruptor:
https://lmax-exchange.github.io/disruptor/
Ali
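For readers unfamiliar with it: the Disruptor's core idea is a pre-allocated ring buffer coordinated by monotonically increasing sequence counters rather than locks. A toy single-producer/single-consumer sketch of that idea (not the actual Disruptor API, which is Java, and with none of its memory-barrier machinery):

```python
class RingBuffer:
    """Toy SPSC ring buffer in the Disruptor style: a fixed, pre-allocated
    array plus publish/consume sequence numbers instead of locks."""
    def __init__(self, size):
        assert size & (size - 1) == 0, "size must be a power of two"
        self.slots = [None] * size
        self.mask = size - 1
        self.published = 0   # next sequence the producer will write
        self.consumed = 0    # next sequence the consumer will read

    def publish(self, item):
        if self.published - self.consumed == len(self.slots):
            return False                       # ring full: producer must wait
        self.slots[self.published & self.mask] = item
        self.published += 1                    # publish = advance the sequence
        return True

    def consume(self):
        if self.consumed == self.published:
            return None                        # nothing published yet
        item = self.slots[self.consumed & self.mask]
        self.consumed += 1
        return item

rb = RingBuffer(4)
for i in range(3):
    rb.publish(i)
print([rb.consume() for _ in range(3)])        # → [0, 1, 2]
```

The pre-allocation and power-of-two masking mean the hot path allocates nothing and branches little, which is much of where the real Disruptor's throughput comes from.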
On Thursday, 3 March 2016 at 17:31:59 UTC, Andrei Alexandrescu
wrote:
https://www.mailinator.com/tymaPaulMultithreaded.pdf
Andrei
Related to this, I watched a talk about user-level threads by
another Google engineer a while back (also named Paul but not the
same person).
On Thursday, 3 March 2016 at 17:31:59 UTC, Andrei Alexandrescu
wrote:
https://www.mailinator.com/tymaPaulMultithreaded.pdf
Slide decks are so unbelievably bad at conveying information.
But, looking through it, I think I agree. There's a reason why I
stick with my cgi.d - using a simple
https://www.mailinator.com/tymaPaulMultithreaded.pdf
Andrei