Re: Extending your zfs pool with multiple devices

2010-09-03 Thread Don Lewis
On  2 Sep, Jeremy Chadwick wrote:
 On Thu, Sep 02, 2010 at 04:56:04PM -0400, Zaphod Beeblebrox wrote:
 [regarding getting more disks in a machine]

 An inexpensive option is SATA port replicators.  Think SATA switch or
 hub.  1:4 units are common and cheap.
 
 I have a motherboard with intel ICH10 chipset.  It commonly provides 6
 ports.  This chipset is happy to configure port replicators.  Meaning
 you can put 24 drives on this motherboard.

 ...

 With 1.5T disks, I find that the 4 to 1 multipliers have a small
 effect on speed.  The 4 drives I have on the multiplier sit saturated
 at 100% a little more often than the drives that are directly connected.
 Essentially you have 3 gigabit for 4 drives instead of 3 gigabit for 1
 drive.
 
 1:4 SATA replicators impose a bottleneck on the overall bandwidth
 available between the replicator and the disks attached, as you stated.
 Diagram:
 
 ICH10
   |||___ (SATA300) Port 0, Disk 0
   ||____ (SATA300) Port 1, Disk 1
   |_____ (SATA300) Port 2, eSATA Replicator
                             ||||___ (SATA300) Port 0, Disk 2
                             |||____ (SATA300) Port 1, Disk 3
                             ||_____ (SATA300) Port 2, Disk 4
                             |______ (SATA300) Port 3, Disk 5
 
 If Disks 2 through 5 are decent disks (pushing 100MB/sec), essentially
 you have 100*4 = 400MB/sec worth of bandwidth being shoved across a
 300MB/sec link.  That's making the assumption the disks attached are
 magnetic and not SSD, and not taking into consideration protocol
 overhead.
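The arithmetic above can be sanity-checked with a quick sketch (the figures are the rough estimates from the text, not measurements):

```python
# Back-of-the-envelope check of the 1:4 port-replicator bottleneck described above.
SATA300_LINK_MBPS = 300        # SATA300 host link, MB/sec (ignoring protocol overhead)
PER_DISK_MBPS = 100            # rough sequential throughput of one decent magnetic disk
disks_behind_replicator = 4

aggregate = PER_DISK_MBPS * disks_behind_replicator            # demand shoved across the link
oversubscription = aggregate / SATA300_LINK_MBPS               # how oversubscribed the link is
per_disk_effective = SATA300_LINK_MBPS / disks_behind_replicator  # best case per disk when all stream

print(aggregate, round(oversubscription, 2), per_disk_effective)
```

So four streaming disks can at best see about 75 MB/sec each through the shared SATA300 link, versus ~100 MB/sec directly attached.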
 
 Given the evolutionary rate of hard disks and SSDs, replicators are (in
 my opinion) not a viable solution mid or long-term.

 A better choice is a SATA multilane HBA, which is usually PCIe-based,
 with a single connector on the back of the HBA that splits out to
 multiple disks (usually 4, but sometimes more).
 
 An ideal choice is one of the Areca ARC-1300 series SAS-based PCIe x4
 multilane adapters, which provide SATA300 to each individual disk and use
 PCIe x4 (which can handle about 1GByte/sec in each direction, so
 2GByte/sec total)...
 
 http://www.areca.com.tw/products/sasnoneraid.htm
 
 ...but there doesn't appear to be FreeBSD driver support for this
 series of controller (arcmsr(4) doesn't mention the ARC-1300 series).  I
 also don't know what Areca means on their site when they say
 "BSD/FreeBSD (will be available with 6Gb/s Host Adapter)", given that
 none of the ARC-1300 series cards are SATA600.
 
 If people are more focused on total number of devices (disks) that are
 available, then they should probably be looking at dropping a pretty
 penny on a low-end filer.  Otherwise, consider replacing the actual hard
 disks themselves with drives of a higher capacity.

[raises hand]

Here's what I've got on my mythtv box (running Fedora ... sorry):

Filesystem    Size  
/dev/sda4 439G  
/dev/sdb1 1.9T  
/dev/sdc1 1.9T  
/dev/sdd1 1.9T  
/dev/sde1 1.9T  
/dev/sdf1 1.4T  
/dev/sdg1 1.4T  
/dev/sdh1 932G  
/dev/sdi1 932G  
/dev/sdj1 1.4T  
/dev/sdk1 1.9T  
/dev/sdl1 932G  
/dev/sdm1 1.9T  
/dev/sdn1 932G  
/dev/sdo1 699G  
/dev/sdp1 1.4T  

I'm currently upgrading the older drives as I run out of space, and I'm
really hoping that >2TB drives arrive soon.  The motherboard is
full-size ATX with six onboard SATA ports, all of which are in use.  The
only x16 PCIe slot is occupied by a graphics card, and all but one of
the x1 PCIe slots are in use.  One of the x1 PCIe slots has a Silicon
Image two-port eSATA controller, which connects to two external
enclosures with 1:4 and 1:5 port replicators.  At the moment there are
also three external USB drives.  This weekend's project is to install a
new 2TB drive and do some consolidation.

Fortunately the bandwidth requirements aren't too high ...

___
freebsd-stable@freebsd.org mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-stable
To unsubscribe, send any mail to freebsd-stable-unsubscr...@freebsd.org


Re: Extending your zfs pool with multiple devices

2010-09-03 Thread Michal
What is really odd is that I see your replies but not my original post; how 
very strange.


Thank you for all of your assistance. I would like to move toward building 
a cheap SAN-like storage area for a DB. I don't know how well it would 
work, but I'd like to try it anyway, since things like HP MSAs are hugely 
expensive.


I like these suggestions of filling a second box and connecting it to 
the first box using these expanders and port replicators. I don't really 
need things as fast as I can get them, as this is not a high-use DB 
backend or a many-user file server. A few users here and there, but 
nothing that worries me about the bottleneck caused by these replicators. 
This is a lot better than my scheme of trying to export iSCSI disks or 
something like that. This way I can create a second box and run a cable 
into an expander or replicator on the first box, and a third box could 
then be added to the expander/replicator at a later date. There is a 
limit on how far this could realistically go, but I like this approach. 
I could go further by adding SSDs for the L2ARC and ZIL if I wanted to. 
I found zfsbuild.com to be quite a nice site/blog.


Thanks for all your help


Re: Extending your zfs pool with multiple devices

2010-09-03 Thread jhell
On 09/03/2010 04:25, Michal wrote:
 What is really odd is I see your replies but not my original post, how
 very strange??
 
 Thank you for all of your assistance. I would like to move to being able
 to build a cheap san-like storage area for a DB, I don't know how well
 it would work but I'd like to try it anyway since things like HP MSA's
 are hugely expensive.
 
 I like these suggestions of filling a second box and connecting this to
 the 1st box using these expanders and port replicators. I don't really
 need as fast  as I can get as this is not a high-use DB backend or many
 user file server. A few users here and there but nothing that worries me
 about the bottleneck caused by these replicators. This way is ALOT
 better then my system of trying to export iscsi disks or something like
 that. This way I can add create a second box then have a cable into an
 expander or replicator on the 1st box, a 3rd box could then be added to
 the expander/replicator at a later date. There is a limit on how far
 this could go realistically, but I like this way. I could go further by
 adding SSD's for the L2ARC and ZIL if I wanted to. I found zfsbuild.com
 to be a quite nice site/blog
 

Thanks for the link: zfsbuild.com I'm going to check that out.

Anyway... not that this is a great solution, but if it is Windows clients
connecting to this that you're worried about, and you would like to
split storage off to separate machines, you can use DFS with
Samba. Imagine building two more machines and having them be completely
transparent to the clients that connect to the main server.

Using a Samba DFS server would allow you to distribute the filesystems
out to different shares on different machines without the client ever
having to know that the directory actually lives on another machine,
and it allows you to easily migrate new servers into the network
without the client ever seeing the change.
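As a rough sketch of the Samba side of this (the share names, paths, and server names below are hypothetical examples, not from the thread), an MSDFS root looks something like:

```shell
# smb.conf fragment on the main server (hypothetical names):
#
#   [global]
#       host msdfs = yes
#
#   [pool]
#       path = /tank/dfsroot
#       msdfs root = yes
#
# Inside the DFS root, special msdfs symlinks redirect clients to shares
# on other machines without the clients ever noticing:
ln -s 'msdfs:storage2\archive' /tank/dfsroot/archive
ln -s 'msdfs:storage3\media'   /tank/dfsroot/media
```

Clients just browse \\mainserver\pool and get referred to storage2 or storage3 transparently, so boxes can be added or migrated later.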

Implement iSCSI, ZFS and HAST into this mix and you have yourself one hell
of a network.

Just an idea, Regards,

-- 

 jhell,v


Re: Tuning the scheduler? Desktop with a CPU-intensive task becomes rapidly unusable.

2010-09-03 Thread Jim Bryant
i just noticed this too...  had a build going of qt-creator, and then 
started a /usr/src make clean, and had to abort the qt-creator build to 
get the make clean to finish.  it was taking forever to even paint the 
xterm in the make clean window.


-stable built as of last week, amd64 kernel, core2 duo e8200, 4G ram 
(3.9G usable), intel dq45ek motherboard, kde4, compositing turned off.


Luca Pizzamiglio wrote:

Hello,

My machine shows similar behavior. For instance, during an intensive 
workload (portupgrade), everything is quite unresponsive.


I made an alias for portupgrade, "nice -n 5 portupgrade", which solves the 
problem in that particular case.
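For reference, the alias described above might look like this in the shell's rc file (csh syntax shown as a sketch; adjust for sh/bash):

```shell
# ~/.cshrc -- run portupgrade at reduced priority so the desktop stays responsive
alias portupgrade 'nice -n 5 portupgrade'
```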


My system is AMD Athlon(tm) 64 Processor 3000+ (1809.28-MHz 686-class 
CPU) with openGL effects disabled, KDE as desktop environment.


There is an interesting sysctl MIB, kern.sched.interact, but I don't know 
what it means (my value is 30).


Cheers,
Luca

On 09/02/2010 12:46, jan.gr...@bristol.ac.uk wrote:

On Thu, 2 Sep 2010, Andriy Gapon wrote:


on 02/09/2010 12:08 jan.gr...@bristol.ac.uk said the following:

On Wed, 1 Sep 2010, Ivan Voras wrote:


On 09/01/10 15:08, jan.gr...@bristol.ac.uk wrote:

I'm running -STABLE with a kde-derived desktop. This setup (which is
pretty standard) is providing abysmal interactive performance on an
eight-core machine whenever I try to do anything CPU-intensive (such as
building a port).

Basically, trying to build anything from ports rapidly renders everything
else so non-interactive in the eyes of the scheduler that, for instance,
switching between virtual desktops (I have six of them in reasonably
frequent use) takes about a minute of painful waiting on redraws to
complete.

Are you sure this is about the scheduler or maybe bad X11 drivers?

Not 100%, but mostly convinced; I've just started looking at this. It's my
first stab at what might be going on. X11 performance is usually pretty
snappy. There's no paging pressure at all.

From my experience:
1. system with Athlon II X2 250 CPU and onboard AMD graphics - no issues with
interaction between buildworld and GUI with all KDE4 effects enabled (OpenGL).
2. system with comparable Core2 Duo CPU and onboard Intel graphics (G33) -
enabling OpenGL desktop effects in KDE4 leads to the consequences like what you
describe.  With all GUI bells and whistles disabled the system behaves quite
like the AMD system.

All desktop effects are disabled. The graphics are from an nVidia GeForce
8500 GT (G86) with the X.org driver. (It's not _just_ desktop behaviour
that's affected, though: the box runs a number of small headless
[interactive] server processes which also appear to get rapidly starved of
CPU time.)

The behaviour isn't visible with the 4bsd scheduler; stuff generally
remains snappy and responsive.

I'll keep poking around and see if I can get to the bottom of it.








what's up with cvsup?

2010-09-03 Thread Jim Bryant

is it just me, or are all the cvsup servers down?

i've tried this on several machines, i can ping some of them (not all), 
and the ones that can be pinged all timeout when doing a make update in 
/usr/src.




Re: what's up with cvsup?

2010-09-03 Thread Jeremy Chadwick
On Fri, Sep 03, 2010 at 03:00:04PM -0500, Jim Bryant wrote:
 is it just me, or are all the cvsup servers down?
 
 i've tried this on several machines, i can ping some of them (not
 all), and the ones that can be pinged all timeout when doing a make
 update in /usr/src.

It's just you.  Timestamp below is in PDT (UTC-0700).

(13:27:42 j...@icarus) ~ $ all_csup
--
 Running /usr/bin/csup
--
Parsing supfile /usr/share/examples/cvsup/ports-supfile
Connecting to cvsup10.freebsd.org
Connected to 69.147.83.48
Server software version: SNAP_16_1h
Negotiating file attribute support
Exchanging collection information
Establishing multiplexed-mode data connection
Running
Updating collection ports-all/cvs
^CCleaning up ...
Interrupted

If you received any sort of error or informational message from any of
the servers (such as indication that the server was unreachable and the
client would retry in 10 (?) minutes), then all of the cvsup servers you
tried were, at that moment in time, syncing from cvsup-master.

-- 
| Jeremy Chadwick   j...@parodius.com |
| Parodius Networking   http://www.parodius.com/ |
| UNIX Systems Administrator  Mountain View, CA, USA |
| Making life hard for others since 1977.  PGP: 4BD6C0CB |



Re: Tuning the scheduler? Desktop with a CPU-intensive task becomes rapidly unusable.

2010-09-03 Thread Michal Varga
On Fri, 2010-09-03 at 14:03 -0500, Jim Bryant wrote:
 i just noticed this too...  had a build going of qt-creator, and then 
 started a /usr/src make clean, and had to abort the qt-creator build to 
 get the make clean to finish.  it was taking forever to even paint the 
 xterm in the make clean window.

This has been the state of -stable for at least a year; I have yet to
see a 7-stable machine that doesn't exhibit this behavior. This wasn't
the case with 7 in the very beginning; it only started slowly building
up over time, particularly around the time of one specific xorg import
(which one was it? 7.4, I guess). Every bit of performance went down the
drain by then and it has stayed at that level ever since (it's rather
easy to get used to while working on a 7-stable desktop, but it would be
nice to have the old pre-ULE performance levels back sometime).

On the other hand, at least from some of my observations, the terrible
desktop performance isn't strictly CPU-bound; I/O definitely has some
say in this. You can see this (your mileage may vary) by trying
to extract a few-GB archive in the background. While clearly no more
than a single CPU is ever occupied by that process (and there are a few
others happily idling), you can wait up to a few minutes just to
get a new application launched (or even just a running one redrawn,
in case part of it was swapped out at the moment).

But as I said, for 7-stable, this has been the case for a very long
time.

m.

-- 
Michal Varga,
Stonehenge (Gmail account)




Re: Tuning the scheduler? Desktop with a CPU-intensive task becomes rapidly unusable.

2010-09-03 Thread Jeremy Chadwick
On Fri, Sep 03, 2010 at 10:08:38PM +0200, Michal Varga wrote:
 On Fri, 2010-09-03 at 14:03 -0500, Jim Bryant wrote:
  i just noticed this too...  had a build going of qt-creator, and then 
  started a /usr/src make clean, and had to abort the qt-creator build to 
  get the make clean to finish.  it was taking forever to even paint the 
  xterm in the make clean window.

 ... 
 
 On the other hand, at least from some of my observations, the terrible
 desktop performance isn't strictly CPU-bound, I/O definitely has some
 say in this. You can (you should, mileage may vary) see this by trying
 to extract a few-GB archive in the background. While clearly no more
 than a single CPU is ever occupied by that process (and there's few
 other happily idling), you can spend waiting up to a few minutes just to
 get a new application launched (or even just a running one getting
 redrawn, in case part of it was swapped out at the moment).

Could this be caused by the lack of a disk I/O scheduler on FreeBSD, at
least with regard to launching a new application?  Can you try making use
of gsched(8) and see if things improve in this regard?

Just be aware of this problem[1] when using it.  (I've been working on a
proper fix, not a hack, for the problem for about a week now.
Stress level is very high given the ambiguous nature of many aspects of
GEOM, and libgeom is lacking in numerous areas.  So far I've managed to
figure out how to parse the results from geom_gettree() in an attempt to
replace kern.geom.conftxt...)

[1]: http://lists.freebsd.org/pipermail/freebsd-current/2010-April/016883.html
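For anyone wanting to try this, enabling gsched(8) on one disk looks roughly like the following (the device name and algorithm are examples; this is a sketch, so check gsched(8) and the problem report above before using it on a production machine):

```shell
kldload geom_sched           # load the GEOM scheduling class
gsched insert -a rr ada0     # insert a round-robin I/O scheduler above ada0
gsched status                # verify the new sched provider appeared
# To load the class automatically at boot:
echo 'geom_sched_load="YES"' >> /boot/loader.conf
```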

-- 
| Jeremy Chadwick   j...@parodius.com |
| Parodius Networking   http://www.parodius.com/ |
| UNIX Systems Administrator  Mountain View, CA, USA |
| Making life hard for others since 1977.  PGP: 4BD6C0CB |



Re: what's up with cvsup?

2010-09-03 Thread Jim Bryant

my bad.  it was a router firewall setting.

Jeremy Chadwick wrote:

On Fri, Sep 03, 2010 at 03:00:04PM -0500, Jim Bryant wrote:

is it just me, or are all the cvsup servers down?

i've tried this on several machines, i can ping some of them (not
all), and the ones that can be pinged all timeout when doing a make
update in /usr/src.

It's just you.  Timestamp below is in PDT (UTC-0700).

(13:27:42 j...@icarus) ~ $ all_csup
--
 Running /usr/bin/csup
--
Parsing supfile /usr/share/examples/cvsup/ports-supfile
Connecting to cvsup10.freebsd.org
Connected to 69.147.83.48
Server software version: SNAP_16_1h
Negotiating file attribute support
Exchanging collection information
Establishing multiplexed-mode data connection
Running
Updating collection ports-all/cvs
^CCleaning up ...
Interrupted

If you received any sort of error or informational message from any of
the servers (such as indication that the server was unreachable and the
client would retry in 10 (?) minutes), then all of the cvsup servers you
tried were, at that moment in time, syncing from cvsup-master.



apcupsd, USB and FreeBSD 8.1 aren't getting along

2010-09-03 Thread Ben Schumacher
All-

It seems that something about the combination of FreeBSD 8.1 and
apcupsd is broken when connecting to an APC Back-UPS RS 1500.

Here's what I've got:
1. FreeBSD 8.1 (source compiled up to RELENG_8_1 for security fixes)
2. apcupsd 3.14.8 compiled from FreeBSD Ports
3. APC Back-UPS RS 1500

This was working fine on FreeBSD 8.0, but it appears to be broken with
FreeBSD 8.1. I've done a little debugging to try to figure out what's
going on, and as best I can tell it's not able to communicate at all
with the UPS.

Here's what I've come up with so far:

# usbconfig list
ugen0.1: UHCI root HUB Intel at usbus0, cfg=255 md=HOST spd=FULL
(12Mbps) pwr=ON
ugen1.1: UHCI root HUB Intel at usbus1, cfg=0 md=HOST spd=FULL (12Mbps) pwr=ON
ugen2.1: UHCI root HUB Intel at usbus2, cfg=0 md=HOST spd=FULL (12Mbps) pwr=ON
ugen3.1: EHCI root HUB Intel at usbus3, cfg=0 md=HOST spd=HIGH
(480Mbps) pwr=ON
ugen4.1: UHCI root HUB Intel at usbus4, cfg=0 md=HOST spd=FULL (12Mbps) pwr=ON
ugen5.1: UHCI root HUB Intel at usbus5, cfg=0 md=HOST spd=FULL (12Mbps) pwr=ON
ugen6.1: UHCI root HUB Intel at usbus6, cfg=0 md=HOST spd=FULL (12Mbps) pwr=ON
ugen7.1: EHCI root HUB Intel at usbus7, cfg=0 md=HOST spd=HIGH
(480Mbps) pwr=ON
ugen3.2: Mass Storage Device Prolific Technology Inc. at usbus3,
cfg=0 md=HOST spd=HIGH (480Mbps) pwr=ON
ugen1.2: Back-UPS RS 1500 FW:8.g9 .D USB FW:g9 American Power
Conversion at usbus1, cfg=0 md=HOST spd=LOW (1.5Mbps) pwr=ON
# usbconfig -d 1.2 dump_info
ugen1.2: Back-UPS RS 1500 FW:8.g9 .D USB FW:g9 American Power
Conversion at usbus1, cfg=0 md=HOST spd=LOW (1.5Mbps) pwr=ON

# usbconfig -d 1.2 dump_device_desc
ugen1.2: Back-UPS RS 1500 FW:8.g9 .D USB FW:g9 American Power
Conversion at usbus1, cfg=0 md=HOST spd=LOW (1.5Mbps) pwr=ON

  bLength = 0x0012
  bDescriptorType = 0x0001
  bcdUSB = 0x0110
  bDeviceClass = 0x
  bDeviceSubClass = 0x
  bDeviceProtocol = 0x
  bMaxPacketSize0 = 0x0008
  idVendor = 0x051d
  idProduct = 0x0002
  bcdDevice = 0x0106
  iManufacturer = 0x0003  American Power Conversion
  iProduct = 0x0001  Back-UPS RS 1500 FW:8.g9 .D USB FW:g9 
  iSerialNumber = 0x0002  JB0704015634  
  bNumConfigurations = 0x0001

# truss -o apctest.truss apctest -d 200


2010-09-03 14:23:00 apctest 3.14.8 (16 January 2010) freebsd
Checking configuration ...
0.000 apcupsd: apcconfig.c:799 After config scriptdir: /usr/local/etc/apcupsd
0.000 apcupsd: apcconfig.c:800 After config pwrfailpath: /var/run
0.000 apcupsd: apcconfig.c:801 After config nologinpath: /var/run
0.000 apcupsd: newups.c:102 write_lock at drivers.c:208
0.000 apcupsd: drivers.c:210 Looking for driver: usb
0.000 apcupsd: drivers.c:214 Driver apcsmart is configured.
0.000 apcupsd: drivers.c:214 Driver net is configured.
0.000 apcupsd: drivers.c:214 Driver usb is configured.
0.000 apcupsd: drivers.c:217 Driver usb found and attached.
0.000 apcupsd: newups.c:108 write_unlock at drivers.c:234
0.000 apcupsd: drivers.c:236 Driver ptr=0x8064e60
Attached to driver: usb
sharenet.type = DISABLE
cable.type = USB_CABLE

You are using a USB cable type, so I'm entering USB test mode
mode.type = USB_UPS
Setting up the port ...
usb_set_debug: Setting debugging level to 2 (on)
0.000 apcupsd: newups.c:102 write_lock at generic-usb.c:614
0.000 apcupsd: generic-usb.c:398 Initializing libusb
0.001 apcupsd: generic-usb.c:403 Found 0 USB busses
0.002 apcupsd: generic-usb.c:405 Found 0 USB devices
0.002 apcupsd: newups.c:108 write_unlock at generic-usb.c:633
apctest FATAL ERROR in generic-usb.c at line 636
Cannot find UPS device --
For a link to detailed USB trouble shooting information,
please see http://www.apcupsd.com/support.html.
0.002 apcupsd: newups.c:102 write_lock at generic-usb.c:656
0.002 apcupsd: newups.c:108 write_unlock at generic-usb.c:663
apctest error termination completed

This is what I think the issue is:
# grep '/dev' apctest.truss
open(/dev/usb0,O_RDWR,00)  ERR#2 'No such file or directory'
open(/dev/usb1,O_RDWR,00)  ERR#2 'No such file or directory'
open(/dev/usb2,O_RDWR,00)  ERR#2 'No such file or directory'
open(/dev/usb3,O_RDWR,00)  ERR#2 'No such file or directory'
open(/dev/usb4,O_RDWR,00)  ERR#2 'No such file or directory'
open(/dev/usb5,O_RDWR,00)  ERR#2 'No such file or directory'
open(/dev/usb6,O_RDWR,00)  ERR#2 'No such file or directory'
open(/dev/usb7,O_RDWR,00)  ERR#2 'No such file or directory'
open(/dev/usb8,O_RDWR,00)  ERR#2 'No such file or directory'
open(/dev/usb9,O_RDWR,00)  ERR#2 'No such file or directory'

FreeBSD's USB stack no longer appears to generate the '/dev/usb#'
entries. I tried symlinking the appropriate 'ugen' to 'usb0', but this
didn't help either.

# ln -s /dev/ugen1.2 /dev/usb0
# apctest -d 200


2010-09-03 14:29:04 apctest 3.14.8 (16 January 2010) freebsd
Checking configuration ...
0.000 apcupsd: apcconfig.c:799 After 

Re: Tuning the scheduler? Desktop with a CPU-intensive task becomes rapidly unusable.

2010-09-03 Thread Bruce Cran
On Fri, 3 Sep 2010 13:50:10 -0700
Jeremy Chadwick free...@jdc.parodius.com wrote:

 Just be aware of this problem[1] when using it.  (I've been working
 on a proper fix -- not a hack -- for the problem for about a week now.
 Stress level is very high given the ambiguous nature of many aspects
 of GEOM and libgeom lacking in numerous areas.  So far I've managed to
 figure out how to parse the results from geom_gettree() in attempt to
 replace kern.geom.conftxt...)

I'm hoping to replace most of the geom code in sysinstall for 9.0 - it
needs to parse the output of geom_gettree, use gpart to create
partitions etc. So far I've got code that can parse the existing
partition layout but not much more. Take a look at
user/ae/usr.sbin/sade in svn to see how to interact with geom - ae@ has
been working on adding support for gpt, zfs etc. to sade.

-- 
Bruce Cran


Re: Tuning the scheduler? Desktop with a CPU-intensive task becomes rapidly unusable.

2010-09-03 Thread David Xu

jan.gr...@bristol.ac.uk wrote:

On Thu, 2 Sep 2010, Andriy Gapon wrote:

on 02/09/2010 12:08 jan.gr...@bristol.ac.uk said the following:

On Wed, 1 Sep 2010, Ivan Voras wrote:

On 09/01/10 15:08, jan.gr...@bristol.ac.uk wrote:

I'm running -STABLE with a kde-derived desktop. This setup (which is
pretty standard) is providing abysmal interactive performance on an
eight-core machine whenever I try to do anything CPU-intensive (such as
building a port).

Basically, trying to build anything from ports rapidly renders everything
else so non-interactive in the eyes of the scheduler that, for instance,
switching between virtual desktops (I have six of them in reasonably
frequent use) takes about a minute of painful waiting on redraws to
complete.

Are you sure this is about the scheduler or maybe bad X11 drivers?

Not 100%, but mostly convinced; I've just started looking at this. It's my 
first stab at what might be going on. X11 performance is usually pretty 
snappy. There's no paging pressure at all.

From my experience:
1. system with Athlon II X2 250 CPU and onboard AMD graphics - no issues with
interaction between buildworld and GUI with all KDE4 effects enabled (OpenGL).
2. system with comparable Core2 Duo CPU and onboard Intel graphics (G33) -
enabling OpenGL desktop effects in KDE4 leads to the consequences like what you
describe.  With all GUI bells and whistles disabled the system behaves quite
like the AMD system.

All desktop effects are disabled. The graphics are from an nVidia GeForce 
8500 GT (G86) with the X.org driver. (It's not _just_ desktop behaviour 
that's affected, though: the box runs a number of small headless 
[interactive] server processes which also appear to get rapidly starved of 
CPU time.)

The behaviour isn't visible with the 4bsd scheduler; stuff generally 
remains snappy and responsive.

I'll keep poking around and see if I can get to the bottom of it.

I think the sysctl kern.sched.preempt_thresh is too low; the default is
only 64. I always tune it up to 200 on my desktop machine, which runs
GNOME and other GUI applications. For a heavy GUI desktop, I would tune
it up to 224 to get a better result.
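Concretely, trying David's suggestion looks something like this (using the values from the post):

```shell
# Check the current value (the default cited above is 64):
sysctl kern.sched.preempt_thresh
# Raise it for an interactive desktop:
sysctl kern.sched.preempt_thresh=224
# Make the setting persistent across reboots:
echo 'kern.sched.preempt_thresh=224' >> /etc/sysctl.conf
```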

Regards,
David Xu



Re: Why is NFSv4 so slow?

2010-09-03 Thread Rick C. Petty
On Mon, Aug 30, 2010 at 09:59:38PM -0400, Rick Macklem wrote:
 
 I don't tune anything with sysctl, I just use what I get from an
 install from CD onto i386 hardware. (I don't even bother to increase
 kern.ipc.maxsockbuf although I suggest that in the mount message.)

Sure.  But maybe you don't have server mount points with 34k+ files in
them?  I notice that when I increase maxsockbuf, the problem of
disappearing files mostly goes away.  Often a "find /mnt" fixes the
problem temporarily, until I unmount and mount again.
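For reference, bumping the socket buffer ceiling mentioned above is a one-liner (the size here is an arbitrary example, not a value recommended in the thread):

```shell
sysctl kern.ipc.maxsockbuf=4194304                        # raise the ceiling (bytes)
echo 'kern.ipc.maxsockbuf=4194304' >> /etc/sysctl.conf    # persist across reboots
```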

 The only thing I can suggest is trying:
 # mount -t newnfs -o nfsv3 server:/path /mnt
 and seeing if that performs like the regular NFSv3 or has
 the perf. issue you see for NFSv4?

Yes, that has the exact same problem.  However, if I use:
mount -t nfs server:/path /mnt
the problem does indeed go away!  But it means I have to mount all the
subdirectories independently, which I'm trying to avoid and is the
reason I went to NFSv4.

 If this does have the perf. issue, then the exp. client
 is most likely the cause and may get better in a few months
 when I bring it up-to-date.

Then that settles it: the newnfs client seems to be the problem.  Just
to recap, these two are *terribly* slow (e.g. a VBR mp3 averaging
192kbps cannot be played without skips):
mount -t newnfs -o nfsv4 server:/path /mnt
mount -t newnfs -o nfsv3 server:/path /mnt
But this one works just fine (H.264 1080p video does not skip):
mount -t nfs server:/path /mnt

I guess I will have to wait for you to bring the v4 client up to date.
Thanks again for all of your contributions and for porting NFSv4 to
FreeBSD!

-- Rick C. Petty


Re: Why is NFSv4 so slow?

2010-09-03 Thread Rick C. Petty
On Wed, Sep 01, 2010 at 11:46:30AM -0400, Rick Macklem wrote:
  
  I am experiencing similar issues with newnfs:
  
  1) I have two clients that each get around 0.5MiB/s to 2.6MiB/s
  reading
  from the NFS4-share on Gbit-Lan
  
  2) Mounting with -t newnfs -o nfsv3 results in no performance gain
  whatsoever.
  
  3) Mounting with -t nfs results in 58MiB/s ! (Netcat has similar
  performance) ??? not a hardware/driver issue from my pov
 
 Ok, so it does sound like an issue in the experimental client and
 not NFSv4. For the most part, the read code is the same as
 the regular client, but it hasn't been brought up-to-date
 with recent changes.

Do you (or will you soon) have some patches I/we could test?  I'm
willing to try anything to avoid mounting ten or so subdirectories in
each of my mount points.

 One thing you could try is building a kernel without SMP enabled
 and see if that helps? (I only have single core hardware, so I won't
 see any SMP races.) If that helps, I can compare the regular vs
 experimental client for smp locking in the read stuff.

I can try disabling SMP too.  Should that really matter, if you're not
even pegging one CPU?  The locks shouldn't have *that* much overhead...

-- Rick C. Petty