[OT] Bonded web site developer?

2009-12-18 Thread Dan Jenkins
I was talking to a client the other day who needs a web site developer 
for a non-profit of his. He was told to make sure he only used bonded 
web site developers. I have never heard of, nor been able to find, any 
reference to bonded web site developers. Does anyone know about this?

I do know about E&O insurance, bonding, and such, but not their 
relevance specifically to web site development.

Thanks.
--
Dan Jenkins, Rastech Inc., 1-603-206-9951

___
gnhlug-discuss mailing list
gnhlug-discuss@mail.gnhlug.org
http://mail.gnhlug.org/mailman/listinfo/gnhlug-discuss/


Re: [OT] Bonded web site developer?

2009-12-18 Thread Alan Johnson
On Fri, Dec 18, 2009 at 3:07 AM, Dan Jenkins d...@rastech.com wrote:

 I do know about E&O insurance, bonding, and such, but not their relevance
 specifically to web site development.


Bonded Web developers have studied an extensive 20-minute training cartoon
where they learn such all-too-uncommon skills as data tube lubrication and
Nigerian royalty investment.
___
gnhlug-discuss mailing list
gnhlug-discuss@mail.gnhlug.org
http://mail.gnhlug.org/mailman/listinfo/gnhlug-discuss/


The Quest for the Perfect Cloud Storage

2009-12-18 Thread Alan Johnson
So, I'm trying to build clouds these days, and I'm sold on Citrix XenServer
for all the VM management, but it doesn't provide much in the way of
storage.  It will let you use many kinds of nice third-party options out
there for your storage, but it can only provide local storage to VMs itself,
and as such will not do live migrations unless you provide some kind of
network storage for it to run VMs on.  Some other features are impeded
without network storage as well, but there's no need to digress into such
specifics here.
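
As a concrete (if hypothetical) illustration, here is roughly what attaching
a shared NFS SR to a XenServer pool looks like through the xe CLI.  The NFS
server address, export path, SR name, and the exact device-config parameter
names are my best recollection of the tooling, so treat this as a sketch,
not gospel:

    # Sketch only: create a shared NFS SR so the pool has the network storage
    # that live migration needs.  Server, path, and SR name are placeholders.
    import subprocess

    def create_nfs_sr(server, path, name):
        """Ask the XenServer 'xe' CLI to create a pool-wide NFS storage repository."""
        subprocess.run(
            [
                "xe", "sr-create",
                f"name-label={name}",
                "type=nfs",
                "shared=true",
                f"device-config:server={server}",
                f"device-config:serverpath={path}",
            ],
            check=True,  # raise CalledProcessError if xe reports a failure
        )

    if __name__ == "__main__":
        create_nfs_sr("10.0.0.10", "/export/vms", "shared-nfs-sr")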

Anyway, the typical solution is to pay ridiculous $/GB for some proprietary
hardware SAN solution to provide node-redundant network storage.  You pay
even more to get multilevel storage.  Money aside, I don't like this because
it introduces new potential bottlenecks that are not present in the
alternative I am about to describe, and because it is a big fat waste of
hardware resources.

My desired solution revolves around the question of why storage can't be
treated like the rest of the resources in a cloud.  You've already got all
this redundant hardware providing processor and memory resources to the
cloud.  Why can't you pool the storage resources of the same physical
hardware the same way?  So, my idea of the Perfect Cloud Storage would meet
the following requirements:

   1. Multi-node network storage (SAN?)
   2. Ideally, n+2 redundant (like RAID6), but n+1 and mirroring are worth
   considering.  In fact, mirroring would be a nice option to have for some
   write-heavy VMs.  Even striping would be useful in some instances.  (A
   back-of-the-envelope capacity sketch follows this list.)
   3. The node software would run on the dom0 of the XenServer physical
   nodes, giving it direct access to the block devices within.  From what I
   can tell, XenServer is a custom Linux distro with RPM/YUM package
   management.
   4. Multilevel storage support within the nodes, so that, for example, a
   set of 256GB SSD, 300GB 10K, 500GB 7.2K, and 750GB 5.4K drives would all
   be used intelligently without need for human interaction after setup.
   5. Multilevel storage across nodes would be a neat concept, but some
   intelligence about load balancing across identical nodes is certainly
   desired.
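
To put rough numbers on requirement 2, here is a back-of-the-envelope
capacity comparison of striping, n+1, n+2, and mirroring across identical
nodes.  It is pure arithmetic, nothing vendor-specific; the node count and
per-node capacity are made up:

    # Back-of-the-envelope usable capacity for N identical storage nodes.
    # 'parity' is the number of node failures the layout tolerates:
    #   0 = plain striping, 1 = RAID5-style (n+1), 2 = RAID6-style (n+2).
    def usable_tb(nodes, tb_per_node, parity):
        if parity >= nodes:
            raise ValueError("need more nodes than parity units")
        return (nodes - parity) * tb_per_node

    def mirrored_tb(nodes, tb_per_node, copies=2):
        # Mirroring keeps 'copies' full replicas, so usable space is total / copies.
        return nodes * tb_per_node / copies

    if __name__ == "__main__":
        nodes, per_node = 6, 2.0  # e.g. 6 nodes with 2 TB of local disk each
        print("striping:", usable_tb(nodes, per_node, 0), "TB")   # 12.0
        print("n+1     :", usable_tb(nodes, per_node, 1), "TB")   # 10.0
        print("n+2     :", usable_tb(nodes, per_node, 2), "TB")   #  8.0
        print("mirror  :", mirrored_tb(nodes, per_node), "TB")    #  6.0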


The closest I can come up with so far is to run one FreeBSD VM on each
physical node and expose the block devices directly to that VM so it can put
them in a ZFS pool for multilevel functionality (and maybe some local
redundancy).  This gives me a file server on each node that can provide
iSCSI targets to the VMs, which can then mount them in whatever software
RAID configuration makes sense for their needs, mostly RAID6, so that if I
take a physical node down for maintenance, the network storage persists with
n+1 redundancy.
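
To sketch what the setup inside each node's FreeBSD storage VM might look
like: the device names, pool name, and zvol size below are hypothetical; the
zpool/zfs commands are the standard ones, and the iSCSI export itself is
left to whatever target daemon the VM ends up running (e.g. istgt):

    # Sketch of the per-node setup inside the FreeBSD storage VM: pool the raw
    # disks with ZFS, then carve out one zvol per guest to hand to the iSCSI
    # target daemon.  Device names and sizes are hypothetical placeholders.
    import subprocess

    def run(*cmd):
        subprocess.run(cmd, check=True)

    def build_pool(pool, disks):
        # raidz gives single-parity redundancy inside the node; a plain stripe
        # is another option if all redundancy is handled across nodes instead.
        run("zpool", "create", pool, "raidz", *disks)

    def carve_zvol(pool, name, size):
        # A zvol is a block device carved from the pool; this is what gets
        # exported as an iSCSI LUN.
        run("zfs", "create", "-V", size, f"{pool}/{name}")
        return f"/dev/zvol/{pool}/{name}"

    if __name__ == "__main__":
        build_pool("tank", ["da1", "da2", "da3", "da4"])
        dev = carve_zvol("tank", "vm01-disk0", "100G")
        print("export", dev, "via the iSCSI target daemon (e.g. istgt)")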

This is not terribly elegant, not as easy to manage as I would like, and
does not meet all the requirements above, but it does cover the major ones.
The biggest problem is that I don't have any way of testing this wacky idea
until I order and receive a hardware configuration that depends on it
working!

I'm happy to take ideas from the crowd, but I'd be happier to find some
vendor or consultant with experience and/or access to testing resources who
can vet this or some other solution and then stand by it.  Feel free to
contact me off-list if you think you might fill this role.  Dave Clifton,
I'm thinking of you because Mike Diehn suggested you have a good amount of
SAN experience.  Bill McGonigle, I'm thinking of you because of your ZFS
experience.  Ben Scott, I'm thinking of you because you're the man. =)

Thanks in advance for all the insight I've come to expect from this
wonderful community!

___
Alan Johnson
p: a...@datdec.com
c: 603-252-8451
w: alan.john...@rightthinginc.com
w: 603-442-5330 x1524
___
gnhlug-discuss mailing list
gnhlug-discuss@mail.gnhlug.org
http://mail.gnhlug.org/mailman/listinfo/gnhlug-discuss/


Re: Gaming... for three-year-olds...

2009-12-18 Thread Joshua Judson Rosen
Stephen Ryan step...@sryanfamily.info writes:

 On Tue, 2009-12-15 at 23:31 -0500, Ken D'Ambrosio wrote: 
  Okay.  Kenette 2.0 is approx. 3.5 years in age.  She's currently getting
  into games on her laptop, a Fisher Price doohickey that even has a
  mouse.  Anyway, suggestions on games that might run on a somewhat more
  open architecture like, say... Linux?
 
 World of Goo, from 2dboy.  It's commercial, but reasonably priced ($20)
 and no DRM

http://www.dwheeler.com/essays/commercial-floss.html

-- 
Don't be afraid to ask (λf.((λx.xx) (λr.f(rr)))).

___
gnhlug-discuss mailing list
gnhlug-discuss@mail.gnhlug.org
http://mail.gnhlug.org/mailman/listinfo/gnhlug-discuss/


Re: The Quest for the Perfect Cloud Storage

2009-12-18 Thread Tom Buskey
On Fri, Dec 18, 2009 at 1:53 PM, Alan Johnson a...@datdec.com wrote:

 So, I'm trying to build clouds these days, and I'm sold on Citrix XenServer
 for all the VM management, but it doesn't provide much in the way of
 storage.  It will let you use many kinds of nice third-party options out
 there for your storage, but it can only provide local storage to VMs itself,
 and as such will not do live migrations unless you provide some kind of
 network storage for it to run VMs on.  Some other features are impeded
 without network storage as well, but there's no need to digress into such
 specifics here.

 Anyway, the typical solution is to pay ridiculous $/GB for some proprietary
 hardware SAN solution to provide node-redundant network storage.  You pay
 even more to get multilevel storage.  Money aside, I don't like this because
 it introduces new potential bottlenecks that are not present in the
 alternative I am about to describe, and because it is a big fat waste of
 hardware resources.


I've been doing headless VirtualBox on a Linux host with the VM images on a
Solaris NFS server.

I had run the VMs on an ESXi server with the same NFS server.  From what
I've read, for gigabit Ethernet, NFS vs. iSCSI speed is a wash.  VMware ESXi
will happily use either.

For multisystem access, NFS works.  A SAN like iSCSI/FibreChannel/etc.
needs a clustering filesystem, which is much harder to set up.

I've run Sun QFS over 2Gb FibreChannel.  It's faster than NFS over gigabit,
of course.  For the application we run, we're putting lock files in an NFS
directory.

As you say, a SAN is usually lots of $/GB.  If you're using a SAN as a
backend for a NAS head, it's probably cheaper to just have a NAS.

If you have a database, a NAS won't work, and a SAN or local disk is
probably better.

These are general rules; circumstances may override them, of course.

I've often wondered why some sysadmins use an iSCSI backend with a NAS front
end.
___
gnhlug-discuss mailing list
gnhlug-discuss@mail.gnhlug.org
http://mail.gnhlug.org/mailman/listinfo/gnhlug-discuss/


Re: Gaming... for three-year-olds...

2009-12-18 Thread Ben Scott
On Fri, Dec 18, 2009 at 2:32 PM, Joshua Judson Rosen
roz...@geekspace.com wrote:
 World of Goo, from 2dboy.  It's commercial, but reasonably priced ($20)
 and no DRM

 http://www.dwheeler.com/essays/commercial-floss.html

http://www.softpanorama.org/Bulletin/Humor/hired_interviews_gsg_founder.shtml

-- Ben

___
gnhlug-discuss mailing list
gnhlug-discuss@mail.gnhlug.org
http://mail.gnhlug.org/mailman/listinfo/gnhlug-discuss/


Re: The Quest for the Perfect Cloud Storage

2009-12-18 Thread Alan Johnson
On Fri, Dec 18, 2009 at 2:59 PM, Tom Buskey t...@buskey.name wrote:

 I had run the VMs on an ESXi server with the same NFS server.  From what
 I've read, for gigabit Ethernet, NFS vs. iSCSI speed is a wash.  VMware ESXi
 will happily use either.


I was thinking about paying for VMware ESXi until I found that XenServer is
mostly free now, and better IMHO, at least on paper (or pixels?).  Plus, I'm
already fairly well versed in Xen.


 For multisystem access, NFS works.  A SAN like iSCSI/FibreChannel/etc.
 needs a clustering filesystem, which is much harder to set up.


If it is simpler to deal with, once set up, than the ZFS/FreeBSD/VM
solution I described, I'm interested, especially since I can probably get
money to pay someone to help me set it up.


 I've run Sun QFS over 2Gb FibreChannel.  It's faster than NFS over gigabit,
 of course.


FWIW, I'm speccing a 10 GigE networking plane mostly dedicated to storage.
I might get two for redundancy and extra capacity, but talk about $$: 10
GigE switches still ain't cheap.  The 10 GigE is mostly to support putting
some of our lighter-load database servers in the cloud, so I might even fall
back to putting them on dedicated boxes and dropping the cloud storage
network to 1 Gig.


 I've often wondered why some sysadmins use an iSCSI backend with a NAS
 front end.


I don't understand that line, but I'm not sure I need to. =)
___
gnhlug-discuss mailing list
gnhlug-discuss@mail.gnhlug.org
http://mail.gnhlug.org/mailman/listinfo/gnhlug-discuss/


Re: The Quest for the Perfect Cloud Storage

2009-12-18 Thread Dana Nowell
I'm currently using XenServer and iSCSI.  The iSCSI is set up on a Debian
Linux box running iscsitarget (the box is called SAN1) with several NICs.
All drives on SAN1 are configured via RAID and added to an LVM2 pool.
The pool is carved up and exported via iSCSI to form various SRs for Xen
as necessary.  If the Xen servers are in a resource pool and the SR is
attached to the pool (not the host), then live migration and similar
features seem to work fine.
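
For the curious, the "carved up and exported" step looks roughly like the
sketch below.  The VG/LV names, size, and IQN are made-up placeholders, and
the ietd.conf stanza is the usual iscsitarget layout, but verify it against
the docs before trusting it:

    # Rough sketch: carve one SR-sized logical volume out of the LVM2 pool on
    # SAN1 and note the iscsitarget (IET) stanza that would export it.
    import subprocess

    def run(*cmd):
        subprocess.run(cmd, check=True)

    def carve_sr(vg, lv, size):
        # One logical volume per XenServer SR.
        run("lvcreate", "-L", size, "-n", lv, vg)
        return f"/dev/{vg}/{lv}"

    if __name__ == "__main__":
        device = carve_sr("vg_san1", "xen_sr_01", "200G")
        # Typical /etc/ietd.conf entry for exporting that LV as a target;
        # blockio avoids double caching for VM disks.
        print(f"Target iqn.2009-12.org.example:san1.xen-sr-01\n"
              f"    Lun 0 Path={device},Type=blockio")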

I'm currently running three Xen pools, several SRs, and some storage on
SAN1 exported as NFS for common utilities and scripts (mounted on the Xen
servers directly).  I also have several CIFS mounts to another file server
that hosts various ISO images.  All seems to work fine, including migration.
I have about 45 VMs configured right now, used for test beds, low-volume
internal web servers, print/file servers, network monitoring, development
build machines, remote desktop boxes, and various other tasks.  About 35 of
those are currently running (the other test beds and monitors are down right
now).

The upside is, it works.  I make sure I have N+1 Xen servers for N load so
I can migrate a server clear to reboot, install hardware, etc.  Live
migration works great, the update/upgrade happens, the server goes back
online, and we live-migrate the VMs back.

The downside is that the iSCSI box is a potential single point of failure.
The current plan is that RAID storage in an external cabinet + cold spare +
UPS + backups will save my butt, with some downtime, on power
supply/motherboard/etc. issues.  The medium-term plan is to investigate DRBD
and a cluster for the SAN box.  If that pans out, I'd be migrating the cold
spare to a hot spare at the cost of additional disk space, with a gain of a
potential reduction in downtime.  I could also run them off two different
power panels/UPSes/breakers (but alas a single feed from the pole) if I
want.  (BTW, anyone using DRBD?)

Additional issues to watch out for include disk I/O volume (those
write-heavy VMs you mention).  I'm using 1-gig NICs for the SRs, but that
obviously limits me to 1 Gbps of throughput per SR.  As a result, I tend to
avoid I/O-intensive VMs in this configuration (no high-volume DB servers).
I guess you could go to fiber or 10GbE to help, but I'm not there yet (small
company, small budget).  Unfortunately, the DRBD plan doesn't make the I/O
bandwidth look any better.  It is a trade-off between I/O bandwidth and
hardware failure modes.  In my case, I THINK I may be OK; that's why I said
'investigate' DRBD.  I may stick with the cold spare, as my VMs are internal
use only and mostly test beds.  If you have write-heavy VMs, you may
want/need better than 1-gig NIC connectivity.  Several 2-3 Gbps
SCSI/SAS/SATA drives sucking data through a couple of 1 Gbps NIC straws can
get old fast under heavy load.
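
To put numbers on those straws, a quick sanity check of aggregate drive
bandwidth versus NIC bandwidth (rough streaming figures, ignoring protocol
overhead):

    # Rough sanity check: aggregate drive bandwidth vs. NIC bandwidth.
    GBPS_PER_NIC = 1.0            # one gigabit NIC
    MB_PER_SEC_PER_DRIVE = 100.0  # ballpark streaming rate of a 7200rpm SATA drive

    def nic_mb_per_sec(nics):
        return nics * GBPS_PER_NIC * 1000 / 8   # Gbps -> MB/s

    def drive_mb_per_sec(drives):
        return drives * MB_PER_SEC_PER_DRIVE

    if __name__ == "__main__":
        print("2 NICs  :", nic_mb_per_sec(2), "MB/s")    # 250.0
        print("6 drives:", drive_mb_per_sec(6), "MB/s")  # 600.0
        # Even a handful of drives can saturate a couple of bonded gigabit links.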

Another choice is to do the same thing but use NFS instead of iSCSI
(allowing ZFS underneath).  I did some tests, and iSCSI outperformed current
NFS solutions (on Debian at least).  The NFS network chatter/latency costs
on 1 Gbps NICs outweighed any potential ZFS disk advantages for me.
Consequently, I opted for iSCSI in my environment; your mileage or your NICs
may vary.

Supposedly, a lot of this gets simpler with those ridiculous-$/GB
solutions, especially if they have lots of built-in high-speed NICs, slots
for several hundred drives, built-in management and monitoring software,
and redundant hardware.  Alas, they DO exceed my capital budget.  :)


On 12/18/2009 13:53, Alan Johnson wrote:
 So, I'm trying to build clouds these days, and I'm sold on Citrix XenServer
 for all the VM management, but it doesn't provide much in the way of
 storage.  It will let you use many kinds of nice third-party options out
 there for your storage, but it can only provide local storage to VMs itself,
 and as such will not do live migrations unless you provide some kind of
 network storage for it to run VMs on.  Some other features are impeded
 without network storage as well, but there's no need to digress into such
 specifics here.

 Anyway, the typical solution is to pay ridiculous $/GB for some proprietary
 hardware SAN solution to provide node-redundant network storage.  You pay
 even more to get multilevel storage.  Money aside, I don't like this because
 it introduces new potential bottlenecks that are not present in the
 alternative I am about to describe, and because it is a big fat waste of
 hardware resources.

 My desired solution revolves around the question of why storage can't be
 treated like the rest of the resources in a cloud.  You've already got all
 this redundant hardware providing processor and memory resources to the
 cloud.  Why can't you pool the storage resources of the same physical
 hardware the same way?  So, my idea of the Perfect Cloud Storage would meet
 the following requirements:

    1. Multi-node network storage (SAN?)
    2. Ideally, n+2 redundant (like RAID6), but n+1 and mirroring are worth
    considering.  In fact, mirroring would be a nice option to 

on good software (was: Gaming... for three-year-olds...)

2009-12-18 Thread Joshua Judson Rosen
Ben Scott dragonh...@gmail.com writes:

 On Fri, Dec 18, 2009 at 2:32 PM, Joshua Judson Rosen
 roz...@geekspace.com wrote:

 http://www.softpanorama.org/Bulletin/Humor/hired_interviews_gsg_founder.shtml

Not to be confused with:

http://codeoffsets.com/

-- 
Don't be afraid to ask (λf.((λx.xx) (λr.f(rr)))).

___
gnhlug-discuss mailing list
gnhlug-discuss@mail.gnhlug.org
http://mail.gnhlug.org/mailman/listinfo/gnhlug-discuss/


Re: The Quest for the Perfect Cloud Storage

2009-12-18 Thread Alan Johnson
On Fri, Dec 18, 2009 at 4:45 PM, Dana Nowell 
dananow...@cornerstonesoftware.com wrote:

 Another choice is to do the same thing but use NFS instead of iSCSI
 (allowing ZFS underneath).  I did some tests, and iSCSI outperformed
 current NFS solutions (on Debian at least).  The NFS network
 chatter/latency costs on 1 Gbps NICs outweighed any potential ZFS disk
 advantages for me.  Consequently, I opted for iSCSI in my environment; your
 mileage or your NICs may vary.


Based on previous related discussions on this list, my understanding is that
ZFS will also export as iSCSI.  In fact, some kind soul posted the commands
to do so, IIRC.  I'm a bit pressed to find the thread right now.
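
For what it's worth, the OpenSolaris-era recipe I remember was a zvol plus
the shareiscsi property, roughly as below.  This is from memory; the pool
and volume names are placeholders, and newer builds moved to COMSTAR and a
different toolchain, so verify before relying on it:

    # From memory: pre-COMSTAR OpenSolaris way to publish a ZFS volume as an
    # iSCSI target.  Pool/volume names are placeholders; verify before use.
    import subprocess

    def run(*cmd):
        subprocess.run(cmd, check=True)

    if __name__ == "__main__":
        run("zfs", "create", "-V", "100G", "tank/xen-sr-01")  # create a zvol
        run("zfs", "set", "shareiscsi=on", "tank/xen-sr-01")  # export it as an iSCSI target
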
___
gnhlug-discuss mailing list
gnhlug-discuss@mail.gnhlug.org
http://mail.gnhlug.org/mailman/listinfo/gnhlug-discuss/