Re: IPv6 VM

2012-01-27 Thread Robert Boyer
IPv6 fully operational - named/bind9 is resolving all DNS and works fine for 
IPv6-only hosts…. ipcloud.ws is IPv6-only to the external Internet and works fine 
via www, ssh, smtp mail, etc., as long as you are on another IPv6-capable host. 
Pretty nice. I am glad you brought this up. If you need a database I will stick 
one on there for you, or choose your own.

Now moving on to local DHCP serving up IPv6-only stuff - I like how you can 
delegate DHCP services amongst various dhcpd instances in v6; very cool.
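For reference, the v6 delegation being described looks roughly like this in ISC dhcpd's IPv6 mode (the 2001:db8:: addresses are documentation placeholders, not the real block; dhcpd must be started with -6):

```conf
# dhcpd6.conf sketch -- placeholder prefixes from the documentation range
subnet6 2001:db8:0:1::/64 {
    # addresses handed out to ordinary clients
    range6 2001:db8:0:1::1000 2001:db8:0:1::2000;

    # /64 prefixes delegated to downstream routers running their own dhcpd
    prefix6 2001:db8:0:100:: 2001:db8:0:1ff:: /64;
}
```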

RB


PS: anyone else who wants to mess around is welcome to grab a shell account - 
just hit me via email or this list…


On Jan 26, 2012, at 4:03 PM, Robert Boyer wrote:

 I can probably arrange for a tunneled v6 address - should be the same thing 
 at the end of the day…. how much time/mem you need?
 
 RB
 
 On Jan 26, 2012, at 2:10 PM, Steve Bertrand wrote:
 
 Hi all!
 
 I've been away for some time, but I'm now getting back into the full swing 
 of things.
 
 I'm wondering if there is anyone out there who can let me temporarily borrow 
 a CLI-only clean install FBSD virtual machine with a publicly facing IPv4 
 and native IPv6 address. It will be extremely low bandwidth (almost none at 
 all) for testing some v6 DNS software and other v6 statistical programs I'm 
 writing.
 
 Please contact off list.
 
 Thanks!
 
 Steve
 ___
 freebsd-questions@freebsd.org mailing list
 http://lists.freebsd.org/mailman/listinfo/freebsd-questions
 To unsubscribe, send any mail to freebsd-questions-unsubscr...@freebsd.org
 



Re: IPv6 VM

2012-01-27 Thread Robert Boyer
Oh, also: if you would like to relay SMTP mail, give me a shout. Right now it's 
restricted to the IPv6 /64 block that the machine manages - heck, if you want an 
IPv6 address I could give you one, and it SHOULD work anywhere you are connected 
as long as your ISP can deal with IPv6.

RB
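The relay restriction described above boils down to a membership test against the managed /64. A minimal sketch in Python (the prefix 2001:db8:1234:5678::/64 is a documentation placeholder, not the VM's real block):

```python
import ipaddress

# Placeholder for the /64 the machine actually manages (not given in the thread).
RELAY_NET = ipaddress.ip_network("2001:db8:1234:5678::/64")

def may_relay(client_ip: str) -> bool:
    """Allow SMTP relaying only for IPv6 clients inside the managed /64."""
    addr = ipaddress.ip_address(client_ip)
    # Short-circuit on version so an IPv4 client never hits the v6 containment test.
    return addr.version == 6 and addr in RELAY_NET

print(may_relay("2001:db8:1234:5678::25"))  # True: inside the /64
print(may_relay("2001:db8:ffff::1"))        # False: outside the /64
```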

On Jan 27, 2012, at 10:45 AM, Steve Bertrand wrote:

 On 2012.01.26 23:12, Robert Boyer wrote:
 just an FYI - that VM that you logged into tonight now has verified access 
 via IPv6 from anywhere, is serving up the /64 block to my local devices via 
 router adverts, has route6d running and appears to work locally (resolves 
 IPv6 name servers from other local machines via dig), nginx is now listening 
 and serving pages via IPv6, and it should also have a working IPv6 email 
 server (not tested yet). Shouldn't be a big deal bringing up named and dhcp6 
 if you want to do that.
 
 Thanks Robert!
 
 Is there any chance that I could get some sudo access to be able to install 
 things globally, and if necessary, make certain global config changes?
 
 I'll be happy to set you up a v6 email server if you wish. Nice to see others 
 interested and knowledgeable about v6. I have about five years' experience. I 
 was the 17th entity in Canada to have a v6 prefix advertised into the global 
 IPv6 routing table, and the 1132nd globally :)
 
 Steve



Re: IPv6 VM

2012-01-26 Thread Robert Boyer
I can probably arrange for a tunneled v6 address - should be the same thing at 
the end of the day…. how much time/mem you need?

RB

On Jan 26, 2012, at 2:10 PM, Steve Bertrand wrote:

 Hi all!
 
 I've been away for some time, but I'm now getting back into the full swing of 
 things.
 
 I'm wondering if there is anyone out there who can let me temporarily borrow 
 a CLI-only clean install FBSD virtual machine with a publicly facing IPv4 and 
 native IPv6 address. It will be extremely low bandwidth (almost none at all) 
 for testing some v6 DNS software and other v6 statistical programs I'm 
 writing.
 
 Please contact off list.
 
 Thanks!
 
 Steve



Re: How to destroy a zombie zpool

2012-01-16 Thread Robert Boyer
Most likely a zfs label on the disk from before that you need to get rid of 
before doing the install.

RB

On Jan 16, 2012, at 11:37 AM, Daniel Staal wrote:

 
 I've got a weird problem...  I was working on installing 9.0 w/zfs on my 
 laptop, messed up, rebooted, *formatted the drives* and restarted.  Got much 
 further the next time, however...
 
 There is a zombie copy of the old zpool sitting around interfering with 
 things.  'zpool import' lists it, but it can't import it because the disks 
 don't actually exist.  'zpool destroy' can't delete it, because it's not 
 imported.  ('No such pool')  Any ideas on how to get rid of it?
 
 Daniel T. Staal
 
 ---
 This email copyright the author.  Unless otherwise noted, you
 are expressly allowed to retransmit, quote, or otherwise use
 the contents for non-commercial purposes.  This copyright will
 expire 5 years after the author's death, or in 30 years,
 whichever is longer, unless such a period is in excess of
 local copyright law.
 ---



Re: How to destroy a zombie zpool

2012-01-16 Thread Robert Boyer
I agree that if you move drives to a particular zfs system that has not seen them 
before - and there is a zfs label at the end - things can go pear-shaped - however…

if you blast just the end of the drive it should be fine.
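A sketch of the "blast both ends" approach. ZFS keeps two copies of its label at the front of the device and two at the back, so zeroing about 1 MiB at each end removes all four without dd'ing the whole drive. DISK here is a scratch file standing in for a real device; on a live system it would be something like /dev/da0 (an assumption - triple-check the device name before pointing dd at real hardware).

```shell
DISK=./scratch.img
# Build a 16 MiB fake "disk" full of non-zero bytes.
dd if=/dev/urandom of="$DISK" bs=1048576 count=16 2>/dev/null

SZ=$(wc -c < "$DISK")                       # device size in bytes
# Zero the first MiB (front labels)...
dd if=/dev/zero of="$DISK" bs=1048576 count=1 conv=notrunc 2>/dev/null
# ...and the last MiB (back labels), leaving the middle untouched.
dd if=/dev/zero of="$DISK" bs=1048576 count=1 conv=notrunc \
   seek=$(( SZ / 1048576 - 1 )) 2>/dev/null
```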


RB

PS: Maybe I'll title a book "Fun with ZFS and glabel", or "Cheap Thrills with ZFS, 
glabel and GPT UUIDs - how to screw up MacOS/Darwin the easy way"…




On Jan 16, 2012, at 11:19 PM, Fritz Wuehler wrote:

 I've got a weird problem...  I was working on installing 9.0 w/zfs on my 
 laptop, messed up, rebooted, *formatted the drives* and restarted.  Got 
 much further the next time, however...
 
 There is a zombie copy of the old zpool sitting around interfering with 
 things.  'zpool import' lists it, but it can't import it because the disks 
 don't actually exist.  'zpool destroy' can't delete it, because it's not 
 imported.  ('No such pool')  Any ideas on how to get rid of it?
 
 zfs is famous for fucking itself like this. the only totally safe way is to
 dd the drive since nailing the label doesn't clear out stuff at the far end
 of the filesystem that can really ruin your day. don't ask me how i know..
 
 it will take a few hours dd'ing from /dev/zero to your devices but it is
 well worth it when you do any major surgery on drives that had zfs at one
 point and you want to use them over again with zfs
 



Re: freebsd server limits question

2012-01-02 Thread Robert Boyer
To deal with this kind of traffic you will most likely need to set up a MongoDB 
cluster of more than a few instances… much better. There should be A LOT of 
info on how to scale Mongo to the level you are looking for, but most likely you 
will find it on Ruby forums, NOT on *NIX boards….

The OS boards/focus will help you with fine tuning but all the fine tuning in 
the world will not solve an app architecture issue…

I have set up MASSIVE mongo/ruby installs for testing that can do this sort of 
volume with ease… the stack looks something like this….

Nginx
Unicorn
Sinatra
MongoMapper
MongoDB

Only one Nginx instance can feed an almost arbitrary number of 
Unicorn/Sinatra/MongoMapper instances, which can in turn feed a properly 
configured MongoDB cluster with pre-allocated key distribution so that the 
incoming inserts are spread evenly across the cluster instances…

Even if you do not use ruby that community will have scads of info on scaling 
MongoDB.

One more comment related to L's advice - it's true that you DO NOT want more 
transactions queued up if your back-end resources cannot handle the TPS - this 
will just make the issue harder to isolate and potentially make the recovery more 
difficult. Better to reject the connection at the front end than take it and 
blow up the app/system.

The beauty of the Nginx/Unicorn solution (Unicorn is Ruby-specific) is that 
there is no queue that is fed to the workers; when there are no workers, the 
request is rejected. The Unicorn worker model can be reproduced for any other 
implementation environment (PHP/Perl/C/etc.) outside of Ruby in about 30 
minutes. It's simple, and Nginx is very well suited to low-overhead reverse 
proxying to this kind of setup.
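The nginx side of that pattern is short; a sketch (socket path and upstream name are illustrative, and Unicorn itself would be configured separately to listen on that socket):

```nginx
# One nginx proxy in front of N Unicorn workers on a Unix socket.
upstream app {
    # fail_timeout=0 makes nginx retry the socket immediately rather
    # than marking the backend down; when no worker can accept, the
    # request fails fast instead of piling up in a queue.
    server unix:/tmp/unicorn.sock fail_timeout=0;
}

server {
    listen 80;
    location / {
        proxy_pass http://app;
    }
}
```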

Wishing you the best - if i can be of more help let me know…

RB

On Jan 2, 2012, at 3:38 PM, Eduardo Morras wrote:

 At 20:12 02/01/2012, Muhammet S. AYDIN wrote:
 Hello everyone.
 
 My first post here and I'd like to thank everyone who's involved within the
 FreeBSD project. We are using FreeBSD on our web servers and we are very
 happy with it.
 
 We have an online messaging application that is using mongodb. Our members
 send messages to the voice show's (turkish version) contestants. Our two
 mongodb instances ended up in two centos6 servers. We have failed. So hard.
 There were announcements and calls made live on tv. We had +30K/sec
 visitors to the app.
 
 When I looked at the mongodb errors, I had thousands of these:
 http://pastie.org/private/nd681sndos0bednzjea0g. You may be wondering why
 I'm telling you about CentOS. Well, we are making the switch from CentOS to
 FreeBSD. I would like to know: what are our limits? How can we set
 it up so our FreeBSD servers can handle a minimum of 20K connections (mongodb's
 connection limit)?
 
 Our two servers have 24 core CPUs and 32 GBs of RAM. We are also very open
 to suggestions. Please help me out here so we don't fail deadly, again.
 
 ps. this question was asked in the forums as well however as someone
 suggested in the forums, i am posting it here too.
 
 Is your app limited by CPU or by I/O? What do vmstat/iostat say about your 
 hd usage? Perhaps mongodb fails to read/write fast enough, and making the 
 process thread pool bigger will only make the problem worse; there will be 
 more threads trying to read/write.
 
 Have you already tuned mongodb?
 
 Post more info please, several lines (not the first one) of iostat and vmstat 
 may be a start. Your hd configuration, raid, etc... too.
 
 L 
 
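On the FreeBSD side of the original question, the knobs that usually matter for tens of thousands of simultaneous connections are the file-descriptor and listen-queue limits. A starting sketch (values are illustrative, not tuned recommendations - benchmark before trusting them):

```conf
# /etc/sysctl.conf -- example values only
kern.maxfiles=200000                # system-wide open file descriptors
kern.maxfilesperproc=100000         # per-process fd limit (mongod, nginx)
kern.ipc.somaxconn=4096             # listen(2) backlog ceiling
net.inet.ip.portrange.first=10240   # widen the ephemeral port range
```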


Re: freebsd server limits question

2012-01-02 Thread Robert Boyer
Sorry one more thought and a clarification….


I have found that it is best to run a mongos with each app server instance; most 
of the Mongo interface libraries aren't intelligent about the way they 
distribute requests to available mongos processes. mongos processes are also 
relatively lightweight and need no coordination or synchronization with each 
other - this simplifies things a lot and makes any potential bugs/complexity in 
the app server/MongoDB connection logic just go away.

It's pretty important when configuring shards to take on the write volume that 
you do your best to pre-allocate chunks and avoid chunk migrations during your 
traffic floods - not hard to do at all. There are also about a million 
different ways to deal with atomicity (if that is a word) and a very 
Mongo-specific way of ensuring writes actually made it to disk somewhere. From 
your brief description of the app in question, it does not sound like it is too 
critical to ensure that every single solitary piece of data persists no matter 
what, as I am assuming most of it is irrelevant and becomes completely 
irrelevant after the show, or some time thereafter. Most of the programming and 
config examples make the opposite assumption, in that they assume each 
transaction MUST be completely durable - if you forgo that you can get 
screaming TPS out of a Mongo shard.

Also, if you do not find what you are looking for via a Ruby support group, the 
JS and Node.js communities may also be of assistance, but they tend to have a 
very narrow view of the world…. ;-)

RB
On Jan 2, 2012, at 4:21 PM, Robert Boyer wrote:


Re: freebsd server limits question

2012-01-02 Thread Robert Boyer
Just realized that the MongoDB site now has some recipes up for what you really 
need to do to make sure you can handle a lot of incoming new documents 
concurrently….

Boy, you had to figure this stuff out yourself just last year - I guess the 
Mongo community has come a very long way….

Splitting Shard Chunks - MongoDB
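The pre-splitting mentioned earlier looks something like this in the mongo shell; shown as a non-runnable sketch - "app.messages", the userId shard key, and the split points are all made up for illustration:

```javascript
// mongo shell sketch -- namespace and key values are hypothetical
sh.enableSharding("app")
sh.shardCollection("app.messages", { userId: 1 })

// Pre-split into chunks before the traffic flood so no chunk
// migrations happen while writes are pouring in.
for (var i = 1; i < 10; i++) {
    sh.splitAt("app.messages", { userId: i * 1000 })
}
```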


enjoy….

RB

On Jan 2, 2012, at 5:38 PM, Robert Boyer wrote:


Re: named/bind problems....

2011-01-19 Thread Robert Boyer
Sorry to see you are still having issues. I thought you were set when we fixed 
your resolv.conf last night.

Okay - let's start from scratch here

Are you sure you need a named? Are you actually serving DNS for your own IP 
addresses, or are you using it as a caching server? Getting a new named 
working/installed is not an issue; config files are usually the issue. If you 
can explain your network topology and what you are trying to make work, I can 
probably point you in the right direction.


We did get your local resolution issue solved didn't we?

RB

On Jan 19, 2011, at 6:03 PM, Gary Kline wrote:

 Yesterday noon my time I rebooted my server.  Things seemed to be slow.
 Several streams were hanging or stopping, and because ethic.thought.org had
 been up for 61 days I figured it wouldn't hurt to reinitialize stuff.
 
 Well, nutshell, disaster.  For hours it wasn't clear whether the server would
 survive, but eventually i got a portupgrade -avOPk going and now I am close to
 having every port rebuilt.  
 
 Now host kuow.org gives the IP address of the U/Washington.  Etc.  Last
 night, for unknown reasons, even this failed.  I remembered that late last fall
 I was warned that bind9 was nearing its end of life.  I okayed the portupgrade
 to remove bind9 and install whatever its follow-up would be.  
 Since then, my kill9named script[s] and my restartnamed script[s] have failed.
 Can anyone save me from hours of tracking down whatever I have to to put
 things right?   
 
 Every time I get in trouble with this bind stuff, it occurs to me how
 significant an achievement it is to have a service that automagically maps
 quad/dotted-decimals to actual words.
 
 Sorry if this sounds disjoint; it is past time for a lollipop and a blanket
 and a *nap*
 
 gary
 
 
 
 -- 
 Gary Kline  kl...@thought.org  http://www.thought.org  Public Service Unix
The 7.97a release of Jottings: http://jottings.thought.org/index.php
   http://journey.thought.org
 ethic 



Re: named/bind problems....

2011-01-19 Thread Robert Boyer
Okay,

let's start from the beginning here...

1) Do you have your own IP address and IP address block that you are hosting 
DNS for, or is it local only?

2) from talking with you last night I want to make sure you are aware of two 
things...

        A) resolv.conf is used for name resolution on EVERY system; it tells ALL 
of the software where to get name services from. We fixed this last night for one 
of your systems by pointing it at a name server that works (the one you had did 
not work).
        B) named provides name services (as well as forwarding to other DNS 
services) and can be pointed to by resolv.conf on your local systems - if it 
is not working AND your local resolv.conf files are pointing there, your name 
resolution will not work.
        C) you can get Internet name services working temporarily by using some 
of the servers I gave you, 8.8.8.8 and 8.8.4.4, in all of your resolv.conf files 
- you don't need named to work for this. You can also use /etc/hosts for your 
couple of local name/address translations as a workaround until you get named 
working again.

3) dig is your friend for debugging named - you can use dig @local-dns-address 
lookup-name to debug your named while still using external name servers in your 
resolv.conf and local naming in /etc/hosts, until you are ACTUALLY sure your 
local named is working.

4) The only thing you really, really need a local named for is if you have a 
real IP block that you are responsible for providing name services on the 
Internet for - rarely the case, and even if you do, you can temporarily jam the 
names you care about into another 
DNS server somewhere out there, like ZoneEdit or Free DNS.
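A concrete version of the stopgap in points B and C (the 192.0.2.10 address and host names are placeholders; only 8.8.8.8/8.8.4.4 come from the thread):

```conf
# /etc/resolv.conf - use public resolvers until the local named works
nameserver 8.8.8.8
nameserver 8.8.4.4

# /etc/hosts - local name/address translations as a stopgap
# (placeholder address and names)
192.0.2.10   ethic.thought.org ethic
```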

Get your stuff working then debug your named.

RB
On Jan 19, 2011, at 6:55 PM, Gary Kline wrote:

 On Wed, Jan 19, 2011 at 06:11:23PM -0500, Robert Boyer wrote:
 Sorry to see you are still having issues. I thought you were set when we 
 fixed your resolv last night.
 
 Okay - let's start from scratch here
 
 Are you sure you need a named? Are you actually serving dns for your own IP 
 addresses or are you using it as a caching server. Getting a new named 
 working/installed is not an issue. Config files are usually and issue. If 
 you can explain your network topology and what you are trying to make work I 
 can probably point you in the right direction.
 
 
 
   Last night I was on the right track; then suddenly things broke and I
   have no idea why.  From the modem/router, the wire goes through my 
   firewall that runs pfSense.  Then output from the firewall plugs
   into my switch.  
 
   My DNS/Mail/web server is a separate box that plugs into the
   hub/switch as well.  [I think; it is hard for me to get down 
   and crawl around under the desk.]  The server has been running named
   since April, '01.  I read DNS AND BIND to get things going; then in
   late '07 serious network troubles and help from someone in the Dallas
   Ft. Worth area reconfigured my network.  This fellow mostly edited
   the /etc/namedb/named.conf and related files.  I also host a friend's
   site, gratis.  He is a builder; we have been friends for nearly
   twenty years.  His site is a very small part of the picture; I 
   mention it only to emphasize that my setup is not entirely trivial.
 
   Would it help to shar or tarball up my namedb files?
 
   FWIW, I am logged into ethic on a console.  Usually I work in X11
   and have xset r off set to prevent key bounces.
 
 
 
 We did get your local resolution issue solved didn't we?
 
 
   I think in KVM'ing from tao to ethic and back, the configuration we 
   set up last night broke.  At least, in watching portupgrade draw in
   more and more files [on ethic], when I KVM back to my desktop, the
   mutt settings get lost.

   -gary
 
 
 RB
 
 On Jan 19, 2011, at 6:03 PM, Gary Kline wrote:
 

Need some device help

2011-01-13 Thread Robert Boyer
I am in the process of moving all of my NAS from OpenSolaris to FreeBSD (I 
hope?) and have run into a few speed bumps along the way. Maybe I am doing 
something way, way wrong, but I cannot seem to find any info at all on some of 
my issues. I hope this is the right list to ask - if not, please steer me in the 
right direction. 

I am not new to BSDs, but then again I am not intimately familiar with a lot of 
the inner workings, and things have changed a bunch since my kernel-hacking days 
of NET2 on a VAX, so this is probably just a parameter somewhere, but I cannot 
seem to find it. Here goes...

Question 1?

I am trying to get FreeBSD running ZFS to work as both a server and a consumer 
of iSCSI targets, and can't seem to attach to more than 8 devices. In fact it 
seems to break after a TOTAL of 8 SCSI devices on the client (initiator) end; 
after attaching 8 SCSI devices, da0 through da7, target discovery even stops 
working, with this message when running ANY iscontrol command on the initiator:

ISCSISETSES: Device not configured 

I believe this is on the initiator end and has to do with the total number of 
attached SCSI devices, but am not completely sure - I am using the built-in 
iSCSI initiator and kernel module (NOT open-iscsi), and istgt on the server. Any 
ideas? Is this on the server end somehow? PS: There are absolutely no messages 
in any log, server or client, after the 8th device; when iSCSI stops working, 
the above message comes from the iscontrol command with the -v option.

Question 2?
In order to make device names persistent I am using glabel on raw, 
non-partitioned devices to simulate Solaris persistent device names. I also 
understand that I can use GPT disks and that glabel somehow integrates with the 
GUID? Is this the case, or is using GPT a completely independent alternative to 
using glabel? In any case, what is the best/most common practice for achieving 
the end result (reliably and dependably) in the FreeBSD world?
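For context, the generic-label approach described above usually looks something like this; shown as a sketch of FreeBSD commands, not run here, and the device and label names are made up:

```sh
# Label the raw disk once; GEOM stores the label in the device's last sector.
glabel label disk01 /dev/da0

# From then on, refer to the stable name instead of the da* number,
# which can change between boots.
zpool create tank /dev/label/disk01
```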

Thanks a lot

RB




USB and 8.1

2011-01-09 Thread Robert Boyer
I am running release 8.1 under VMware Fusion and it works great - except 

No USB connections on any USB bus work at all - the kernel sees the connect, but 
then encounters an error and disables the device immediately.

I searched around a bit but didn't find anything definitive. Seems like this 
would be a fairly common thing? Any ideas on how to make FreeBSD USB work under 
VMware?

RB


Help with nanobsd.sh??

2011-01-08 Thread Robert Boyer
I am trying nanobsd for the first time under 8.1 and have two fairly basic 
questions before I go about solving a few issues in my usual brute-force and 
wrong way.

1) Using a box-stock system with a fresh install and the default nanobsd.sh with 
the default configuration, everything looks like it builds fine right up until:

02:11:50 ## build diskimage
02:11:50 ### log: /usr/obj/nanobsd.full//_.di

/usr/obj/nanobsd.full/_.mnt: write failed, filesystem is full


Of course my working file systems are not full - far from it. I think it's 
talking about some disk images that the script is creating - what the heck? How 
can this be with the default config?


2) Is there an option to run nanobsd.sh without cleaning the obj directories? I 
really don't want to rebuild world and kernel from scratch for a couple of 
different packages in custom configs - let alone do it for solving build issues.
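Both questions are commonly handled through a nanobsd config file and the script's flags. A sketch under those assumptions (the media size value is an example, not a recommendation, and flag behavior should be checked against your version of nanobsd.sh):

```conf
# mybox.conf -- passed to the script with:  sh nanobsd.sh -c mybox.conf
NANO_NAME=mybox
NANO_MEDIASIZE=4000000     # image size in 512-byte sectors (example value);
                           # too small a value produces "filesystem is full"

# For iterative work, skip the full rebuild with the script's flags, e.g.:
#   sh nanobsd.sh -c mybox.conf -b     # suppress buildworld and buildkernel
#   sh nanobsd.sh -c mybox.conf -n     # add -DNO_CLEAN to the builds
```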


Thanks

RB


SAS HBA card for freebsd?

2010-12-23 Thread Robert Boyer
I have been running FreeBSD for about a year and tracking the ZFS 
implementation for almost as long. I am reasonably happy with the current 
stable 8.1 ZFS configs that I have been running, with a few TB of storage all 
managed with an integrated SATA controller on my test machine. I am about to 
invest a little bit of money in a production machine targeted at a bunch of 
cheapo storage attachment, and plan to implement it on FreeBSD/ZFS. I have 
searched around on this topic and most info seems to be a bit out of date or 
contradictory, so here is the question, at the risk of being redundant.

I need a SAS controller that preferably has 8 ports (two four-channel 
connections) per card. I don't mind buying a decent RAID card, but I really, 
really want it to be configurable in HBA mode vs. RAID, or JBOD with RAID 
signatures. There are plenty of HBA-only cards that would be suitable, but I can 
find none that seem to fit the bill in terms of FreeBSD. I have seen a couple of 
cheap RAID cards recommended, but cannot seem to get a definitive answer on 
whether they are actually configurable as plain old disks (HBA mode) vs. JBOD 
with a RAID signature.

Anybody using a reasonably priced card that fits the bill?

Thanks
RB

