Re: dtrace/cyclic deadlock

2010-11-23 Thread Andriy Gapon
on 23/11/2010 08:33 Andriy Gapon said the following:
 I think that this is quite similar to what we do for per-CPU caches in UMA and
 so the same approach should work here.
 That is, as in (Open)Solaris, the data should be accessed only from the owning
 CPU and spinlock_enter()/spinlock_exit() should be used to prevent races
 between non-interrupt code and nested interrupt code.
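
To make the idea more concrete, here is a minimal sketch of that guarding
pattern; the data structure and function names below are hypothetical and
are not code from the patch:

/*
 * Minimal sketch of the per-CPU guarding pattern described above.
 * my_pcpu_state and my_expire_local() are illustrative names only.
 */
#include <sys/param.h>
#include <sys/systm.h>
#include <sys/pcpu.h>
#include <sys/proc.h>

struct my_pcpu_state {
	uint64_t	expirations;	/* touched only by the owning CPU */
};

static struct my_pcpu_state my_state[MAXCPU];

static void
my_expire_local(void)
{

	/*
	 * Must already be running on the CPU that owns the state.
	 * spinlock_enter() blocks interrupts on this CPU, so nested
	 * interrupt code cannot race with us; spinlock_exit() undoes it.
	 */
	spinlock_enter();
	my_state[curcpu].expirations++;
	spinlock_exit();
}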

Here's a patch that makes our version of cyclic.c a little bit closer to the
upstream version whilst implementing the above idea:
http://people.freebsd.org/~avg/cyclic-deadlock.diff

All accesses to per-CPU cyclic data are performed strictly from the
corresponding CPUs in an interrupt or interrupt-like context.  Upcalls occur
in the event timer's interrupt filter and all down calls are performed via
smp_rendezvous_cpus().
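
As a rough illustration of the down-call side, something along these lines
delivers an action to a specific CPU via smp_rendezvous_cpus().  The names
are hypothetical, the cpumask_t-based signature is the current one, and the
NULL setup/teardown functions are simply skipped by the rendezvous code:

/*
 * Hedged sketch of a down call via smp_rendezvous_cpus(): my_action()
 * runs on the target CPU only, so per-CPU data is never touched from
 * a foreign CPU.  Names are illustrative, not taken from the patch.
 */
#include <sys/param.h>
#include <sys/systm.h>
#include <sys/smp.h>

static void
my_action(void *arg)
{
	/* Executes on the target CPU in an interrupt-like context. */
}

static void
my_call_on_cpu(int cpu, void *arg)
{

	smp_rendezvous_cpus((cpumask_t)1 << cpu,
	    NULL,		/* no setup function */
	    my_action,		/* action, run on the target CPU */
	    NULL,		/* no teardown function */
	    arg);
}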

I would appreciate reviews and testing.
Thanks!
-- 
Andriy Gapon


Re: Best way to determine if an IRQ is present

2010-11-23 Thread Andriy Gapon
on 22/11/2010 16:24 John Baldwin said the following:
 Well, the real solution is actually larger than described in the PR.  What
 you really want to do is take the logical CPUs offline when they are halted.
 Taking a CPU offline should trigger an EVENTHANDLER that various bits of code
 could invoke.  In the case of platforms that support binding interrupts to
 CPUs (x86 and sparc64 at least), they would install an event handler that
 searches the MD interrupt tables (e.g. the interrupt_sources[] array on x86)
 and moves bound interrupts to other CPUs.  However, I think all the interrupt
 bits will be MD, not MI.

That's a good idea and a comprehensive approach.
One minor technical detail - should an offlined CPU be removed from the
all_cpus mask/set?
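
For concreteness, the hook you describe might be wired up roughly like this.
The cpu_offline event name and the handler are purely hypothetical (nothing
like this exists in the tree yet); only the registration pattern is real:

/*
 * Hypothetical sketch of the suggested CPU-offline EVENTHANDLER.
 * The event name "cpu_offline" and the handler body are made up for
 * illustration.
 */
#include <sys/param.h>
#include <sys/systm.h>
#include <sys/kernel.h>
#include <sys/eventhandler.h>

typedef void (*cpu_offline_fn)(void *arg, int cpuid);
EVENTHANDLER_DECLARE(cpu_offline, cpu_offline_fn);

static void
intr_cpu_offline(void *arg, int cpuid)
{
	/* Walk the MD interrupt tables and rebind interrupts away from cpuid. */
}

static void
intr_cpu_offline_init(void *dummy)
{

	EVENTHANDLER_REGISTER(cpu_offline, intr_cpu_offline, NULL,
	    EVENTHANDLER_PRI_ANY);
}
SYSINIT(intr_cpu_offline, SI_SUB_INTR, SI_ORDER_ANY, intr_cpu_offline_init,
    NULL);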

-- 
Andriy Gapon


UFS Snapshots and iowait

2010-11-23 Thread Chris St Denis
I have started using mount -u -o snapshot as part of my backup process
in order to have a week's worth of local differential backups to allow
quick and easy recovery of lost/overwritten/etc files.


The snapshot of the partition (~250 GB, 2.3 million inodes used, ~10 GB of
data changed per day) takes around 10 minutes to complete.  During the first
5 minutes everything seems to be fine, but during the second 5 minutes the
Apache processes that are logging to this drive start building up in L
(logging) state until they hit MaxClients.


Is this just due to the very high io bandwidth usage associated with 
making a snapshot, or does the creation of this snapshot completely 
block IO writes for around 5 minutes?


Any suggested workarounds? I already bumped up the number of Apache 
slots to 166% but it looks like I would have to increase the number much 
more to use that as a primary solution.



Re: UFS Snapshots and iowait

2010-11-23 Thread Kurt Lidl
On Tue, Nov 23, 2010 at 10:38:31AM -0800, Chris St Denis wrote:
 Is this just due to the very high io bandwidth usage associated with 
 making a snapshot, or does the creation of this snapshot completely 
 block IO writes for around 5 minutes?

It blocks updates to the filesystem during part of the
snapshot process.

See the comments in /usr/src/sys/ufs/ffs/ffs_snapshot.c

I found using UFS snapshots on a production fileserver untenable
during normal working hours.  I have a backup fileserver that I
rsync the files to, and then use the UFS snapshots there.

 Any suggested workarounds? I already bumped up the number of Apache 
 slots to 166% but it looks like I would have to increase the number much 
 more to use that as a primary solution.

Use ZFS.  The way snapshots work there, they are nearly instantaneous
to create, and you are not limited to 20 snapshots per filesystem.

-Kurt


Re: [call for testing] userland debug symbols

2010-11-23 Thread Mark Johnston
On Tue, Nov 16, 2010 at 03:57:45PM -0500, Mark Johnston wrote:
 Hello all,
 
 I've been sitting on my changes for a while, but I think they're ready
 for testing at this point. They are described here:
 
 http://lists.freebsd.org/pipermail/freebsd-hackers/2010-November/033474.html
 
 Some minor changes from my last patch:
 
 - Changed gdb's default debug-file-directory to /usr/lib/debug.
   I have no problem changing this again, but this seems like a good place.
 - Removed hard-coded paths to strip(1) and objcopy(1) from stripbin.sh.
   I explicitly added /usr/bin/ to PATH.
 
 The patch is available here:
 
 www.student.cs.uwaterloo.ca/~m6johnst/patch/symbdir.patch
 
 Would anybody be willing to test this? Of particular interest are
 non-i386/amd64 architectures and cross-compiles.
 
 Thanks,
 -Mark

If there are no objections to these changes, would someone be able to
commit them?

Thanks,
-Mark


Re: UFS Snapshots and iowait

2010-11-23 Thread krad
On 23 November 2010 19:49, Kurt Lidl l...@pix.net wrote:

 On Tue, Nov 23, 2010 at 10:38:31AM -0800, Chris St Denis wrote:
  Is this just due to the very high io bandwidth usage associated with
  making a snapshot, or does the creation of this snapshot completely
  block IO writes for around 5 minutes?

 It blocks updates to the filesystem during part of the
 snapshot process.

 See the comments in /usr/src/sys/ufs/ffs/ffs_snapshot.c

 I found using UFS snapshots on a production fileserver untenable
 during normal working hours.  I have a backup fileserver that I
 rsync the files to, and then use the UFS snapshots there.

  Any suggested workarounds? I already bumped up the number of Apache
  slots to 166% but it looks like I would have to increase the number much
  more to use that as a primary solution.

 Use ZFS.  The way snapshots work there, they are nearly instantaneous
 to create, and you are not limited to 20 snapshots per filesystem.

 -Kurt


I can testify that ZFS snapshots are very usable, as we use them to back up
our MySQL and Oracle databases.  Issue a write lock, flush, snap, remove the
lock, back up the snapshot.  All of it takes a few seconds and is fairly
seamless.
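
Roughly, the sequence looks like this.  This is a hedged sketch rather than
our actual script: the connection credentials and the dataset name are
placeholders, and it uses the MySQL C API just for illustration:

/*
 * Sketch of the lock/flush/snapshot/unlock sequence described above.
 * Credentials and the "tank/mysql" dataset are placeholders.
 */
#include <stdio.h>
#include <stdlib.h>
#include <mysql/mysql.h>

int
main(void)
{
	MYSQL *conn = mysql_init(NULL);

	if (mysql_real_connect(conn, "localhost", "backup", "secret",
	    NULL, 0, NULL, 0) == NULL) {
		fprintf(stderr, "connect: %s\n", mysql_error(conn));
		return (1);
	}
	/* Quiesce writes and flush tables to disk. */
	mysql_query(conn, "FLUSH TABLES WITH READ LOCK");
	/* Take the (nearly instantaneous) ZFS snapshot. */
	system("zfs snapshot tank/mysql@backup");
	/* Release the lock; the snapshot can be backed up at leisure. */
	mysql_query(conn, "UNLOCK TABLES");
	mysql_close(conn);
	return (0);
}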


Re: PostgreSQL performance scaling

2010-11-23 Thread Luis Neves

On 11/22/2010 12:21 AM, Ivan Voras wrote:


The semwait part is from PostgreSQL - probably shared buffer locking,
but there's a large number of processes regularly in sbwait - maybe
something can be optimized here?


I think this paper was mentioned before; did you read it? "An
Analysis of Linux Scalability to Many Cores":

http://pdos.csail.mit.edu/papers/linux:osdi10.pdf


ABSTRACT.
This paper analyzes the scalability of seven system applications (Exim,
memcached, Apache, PostgreSQL, gmake, Psearchy, and MapReduce) running
on Linux on a 48-core computer.


The paper is about Linux, but it also focuses on some changes that can be
made to PostgreSQL to achieve better concurrency.



--
Luis Neves
