On Fri, Jun 28, 2013 at 12:26:44AM +0300, Alexander Motin wrote:
Hi.
While doing some profiling of GEOM/CAM IOPS scalability, on some test
patterns I've noticed serious congestion, with spinning on the global
pbuf_mtx mutex inside getpbuf() and relpbuf(). Since that code is
already very
On Thu, Jun 27, 2013 at 03:23:36PM -0600, Chris Torek wrote:
OK, I wasted :-) way too much time, but here's a text file that
can be comment-ified or stored somewhere alongside the code or
whatever...
I think it would be a nice addition to the VM article in the doc
collection. The content is
On 28.06.2013 09:57, Konstantin Belousov wrote:
On Fri, Jun 28, 2013 at 12:26:44AM +0300, Alexander Motin wrote:
While doing some profiles of GEOM/CAM IOPS scalability, on some test
patterns I've noticed serious congestion with spinning on global
pbuf_mtx mutex inside getpbuf() and relpbuf().
On 6/27/13, Brian Kim briansa...@gmail.com wrote:
howdy all,
As a junior computer engineering major who dreams of developing an
operating system more ubiquitous than MS Windows, I have come to appreciate
the complexity and elegance of both the FreeBSD OS and its community. While
achieving
On 28.06.2013 08:15, Sreenivasa Honnur wrote:
if (rv != 0) {
        printf("sock create ipv6 %s failed %d.\n",
            tbuf, rv);
        return (NULL);
}
Can you try inserting here?
.. i'd rather you narrow down _why_ it's performing better before committing it.
Otherwise it may just creep up again after someone does another change
in an unrelated part of the kernel.
You're using instructions-retired; how about using L1/L2 cache loads,
stores, etc.? There's a lot more CPU
On 28.06.2013 18:14, Adrian Chadd wrote:
.. i'd rather you narrow down _why_ it's performing better before committing it.
If you have good guesses -- they are welcome. All those functions are so
small that it is hard to imagine how congestion could happen there at
all. I have a strong feeling
On 28 June 2013 08:37, Alexander Motin m...@freebsd.org wrote:
Otherwise it may just creep up again after someone does another change
in an unrelated part of the kernel.
Big win or small, TAILQ is still heavier than STAILQ, while it is not needed
there at all.
You can't make that assumption.
On Fri, Jun 28, 2013 at 08:14:42AM -0700, Adrian Chadd wrote:
.. i'd rather you narrow down _why_ it's performing better before committing
it.
Otherwise it may just creep up again after someone does another change
in an unrelated part of the kernel.
Or penalize some other set of machines
On Fri, Jun 28, 2013 at 8:56 AM, Adrian Chadd adr...@freebsd.org wrote:
On 28 June 2013 08:37, Alexander Motin m...@freebsd.org wrote:
Otherwise it may just creep up again after someone does another change
in an unrelated part of the kernel.
Big win or small, TAILQ is still heavier than
On 28.06.2013 18:56, Adrian Chadd wrote:
On 28 June 2013 08:37, Alexander Motin m...@freebsd.org wrote:
Otherwise it may just creep up again after someone does another change
in an unrelated part of the kernel.
Big win or small, TAILQ is still heavier than STAILQ, while it is not needed
there
On Wed, Jun 26, 2013 at 10:11:44AM -0600, Chris Torek wrote:
diff --git a/conf/options.amd64 b/conf/options.amd64
index 90348b7..f3ce505 100644
--- a/conf/options.amd64
+++ b/conf/options.amd64
@@ -1,6 +1,7 @@
# $FreeBSD$
# Options specific to AMD64 platform kernels
+AMD64_HUGE
On 28 June 2013 09:18, m...@freebsd.org wrote:
You can't make that assumption. I bet that if both pointers are in the
_same_ cache line, the overhead of maintaining a doubly linked list is
trivial.
No, it's not. A singly-linked SLIST only needs to modify the head of the
list and the
[combining two messages and adding kib and alc to cc per Oliver Pinter]
the CPU's CR4 on entry to the kernel.
It is %cr3.
Oops, well, easily fixed. :-)
(If we used bcopy() to copy the kernel pmap's NKPML4E and NDMPML4E
entries into the new pmap, the L3 pages would not have to be
physically
On 28.06.2013 09:57, Konstantin Belousov wrote:
On Fri, Jun 28, 2013 at 12:26:44AM +0300, Alexander Motin wrote:
While doing some profiles of GEOM/CAM IOPS scalability, on some test
patterns I've noticed serious congestion with spinning on global
pbuf_mtx mutex inside getpbuf() and relpbuf().
On 28 June 2013 15:15, Alexander Motin m...@freebsd.org wrote:
I think it indeed may be cache thrashing. I've done some profiling of
getpbuf()/relpbuf() and found interesting results. With the patched kernel using
SLIST, profiling shows mostly one point of RESOURCE_STALLS.ANY in relpbuf()
--
On Sat, Jun 29, 2013 at 01:15:19AM +0300, Alexander Motin wrote:
On 28.06.2013 09:57, Konstantin Belousov wrote:
On Fri, Jun 28, 2013 at 12:26:44AM +0300, Alexander Motin wrote:
While doing some profiles of GEOM/CAM IOPS scalability, on some test
patterns I've noticed serious congestion