>>>>> "bh" == Brandon High <bh...@freaks.com> writes:
>>>>> "ok" == Orvar Korvar <knatte_fnatte_tja...@yahoo.com> writes:
>>>>> "mp" == matthew patton <patto...@yahoo.com> writes:

    bh> This one holds "only" 24 drives:
    bh> http://www.supermicro.com/products/chassis/4U/846/SC846TQ-R900.cfm
    bh> ($950)

This one holds only 20 drives.  It includes a fan but not a power
supply.  The airflow management seems shit but basically works okay:

 
http://www.servercase.com/Merchant2/merchant.mvc?Screen=PROD&Product_Code=CK4020&Category_Code=4UBKBLN
 ($375.38 shipping included)
 (pornographer's-grade sleds)

I have one of these and am happy with it.  It's 20 plain SATA ports,
wired passively.  Competitors' cases include SAS switches on the
backplane, and since I don't know how to buy the switches apart from
the cases, you may need to get a competing case if you want a SAS
switch.

I found it cheaper to use an ATI 790FX board that has four PCIe slots
(x16 physically, but 8 data lanes active per slot when all four are
in use) and then use LSI 1068E cards with breakout cables, so
basically the 790FX chip does the multiplexing.  Obviously this does
not scale as far as a SAS fabric: the four PCIe slots will handle 16
drives, an x8 NIC of some kind, and an nVidia card.
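
A quick back-of-envelope on why that layout holds up (the per-lane,
per-link, and per-disk numbers below are generic approximations I'm
assuming, not measurements from my box):

    # Rough arithmetic for the 790FX + 1068E layout above.  Assumptions:
    # ~250 MB/s usable per PCIe 1.x lane, ~300 MB/s per 3 Gbit/s SATA link,
    # and ~100 MB/s sustained from a spinning disk.
    pcie_lanes_per_slot = 8      # electrical lanes when all four slots are populated
    mb_per_pcie_lane    = 250    # approx. usable MB/s per PCIe 1.x lane, each direction
    drives_per_hba      = 8      # one LSI 1068E fans out to 8 drives via breakout cables

    slot_bw   = pcie_lanes_per_slot * mb_per_pcie_lane   # ~2000 MB/s into one HBA slot
    sata_peak = drives_per_hba * 300                      # ~2400 MB/s if every link saturated
    disk_real = drives_per_hba * 100                      # ~800 MB/s of realistic spindle throughput

    print(slot_bw, sata_peak, disk_real)   # 2000 2400 800

So two such HBAs cover the 16 drives with PCIe headroom to spare for
real disks, leaving the other two slots for the x8 NIC and the
graphics card.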

    ok> Is it possible to have large chassi with lots of
    ok> drives, and the opensolaris in another chassi, how do you
    ok> connect them both?

SAS.  The cabling and the switching chips are eerily reminiscent of
InfiniBand, but while IB has open-source drivers and stacks and
relatively generic proprietary firmware, more of the brains of a SAS
fabric seem to be controlled by proprietary software running as
``firmware'' on all the LSI chips involved.  I suspect that landscape
makes it harder to do things like smartctl, using a CD burner, or
offering a SAS target into the fabric through COMSTAR (AFAIK none of
these work now---I guess we'll see what evolves).  But I guess no one
built cheap single chips to tunnel SATA inside IB, so here we are
with broken promises, compromises, and overpriced silliness like
FCoE.

In the model numbers of the old 3Gbit/s LSI cards, the second digit
is the number of external ports, and the third digit the number of
internal ports.  For example, LSI SAS3801E-R is a mega_sas-driven
(RAID-on-a-card) part with 8 external ports, while LSI SAS3081E-R has
8 internal ports.  But if you want a cheaper card with IT firmware
for the mpt driver, without RAID-on-a-card, you may have to hunt some
more.  The external ports are offered on one or two connectors with
four ``ports'' per connector---each of the four can be broken out and
connected to an individual disk using a passive four-legged-octopus
cable, or the four can be bonded together to form a single faster
logical link to a SAS switch chip.  Beyond that I don't really know
how it all works.  I'm probably telling you stuff you already know,
but at least hopefully now everyone's caught up.
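
To make that naming rule concrete, here's a toy Python sketch of the
heuristic plus the wide-port arithmetic for bonding one connector's
four 3 Gbit/s lanes; it only encodes what I just described, so treat
it as a mnemonic, not an official LSI part-number decoder:

    # Toy decoder for the naming heuristic above (old 3 Gbit/s LSI cards only).
    def decode_lsi_3g(model):
        digits = model.upper().split("SAS3", 1)[1][:2]   # "SAS3801E-R" -> "80"
        return {"external_ports": int(digits[0]), "internal_ports": int(digits[1])}

    print(decode_lsi_3g("LSI SAS3801E-R"))  # {'external_ports': 8, 'internal_ports': 0}
    print(decode_lsi_3g("LSI SAS3081E-R"))  # {'external_ports': 0, 'internal_ports': 8}

    # Wide-port math: the four lanes behind one connector bonded together.
    lanes, gbit_per_lane = 4, 3
    print(lanes * gbit_per_lane, "Gbit/s logical link to a SAS switch")   # 12 Gbit/s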

    mp> I buy a Ferrari for the engine and bodywork and chassis
    mp> engineering.

Didn't they go through bankruptcy repeatedly and then get bought by
Fiat?  Whatever this folded-sheetmetal crap thing from so-called
``servercase.com'' is, it's probably backed secretly by the Chinese
government, and I bet it outlasts your fancy J4500.  This seems to me
like a bad situation, but I'm not sure what to do about it.

There are many ways to slice the market vertically.  For example, you
could also get your integration done by renting whitebox crap from a
server-rental company that rents you dedicated storage or compute
nodes 10 or 100 at a time, pre-connected by network equipment they
won't discuss with you (probably cheaper than the Cisco stuff you'd
definitely buy if it were your own ass on the line).  Part of Sun's
function is prequalifying hardware, but another part is to reach
inside their customers' organizations, extract experience, and then
share it among all customers discreetly without any one customer
feeling violated.  A hardware rental company can do the same thing,
and I bet they can do it at similar scale with a lot less political
bullshit.  I think there's a big messy underground market of these
shady rental companies in parallel to the above-ground, overblogged,
overpriced, flaky EC2-like stuff.  My hope is that the IB stack, in
which Sun's also apparently deeply invested with both Solaris and
IB-included blades and switches and backplanes and onboard MACs, will
start taking a chunk out of Cisco's pie.  Meanwhile the box-renting
company extracts money from you by performing an ass-covering
function: they can buy cheap risky things you can't, and then you can
tell your minders, ``other people buy from them too.  It was not
reckless to become their customer.  I've saved you money and kept us
agile enough to dodge the problems we've had without writing off a
lot of purchases,'' when really what you are outsourcing is your own
expensive CYA tendencies.

But back to the potential pie-slice for Sun to steal: the IB
switching chips themselves are far simpler and cheaper than Ethernet
ones, because the gigantic output buffers in, e.g., Arista gear are
simply missing---the network itself is fundamentally cheaper and less
capable than Ethernet, which has output buffers to eliminate
blocking, but stingily eliminating those buffers and accepting a
cheap HOL-blocking fabric is actually *better* for storage.  For
iSCSI you have to buy Ethernet switches with large RED-managed output
buffers, while all the storage-optimized networks like FC and SAS are
blocking, bufferless fabrics, like IB and Myrinet.  With FCoE, you
pay for the buffers and then *disable* them in certain CoS's!
Idiotic.

Sun looks to be finally finishing off the IB software and maybe
delivering some of the ancient unkept promises of a converged
network, with RDMA bindings for NFS and COMSTAR (iSER), and
cluster-friendly QFS-like schemes such as Lustre and pNFS which are
also RDMA-ified.  If the competition is FCoE, then the competition
will have to put much more complicated, newer ASICs and huge buffers
in its switching mesh, while Sun can simply resurrect some Mellanox
reference designs from half a decade ago and have a faster finished
system, so there is some room to extract ``market forces'' premiums
from this clue gap.  Have a look at some of Cisco's old failed
competitors like Extreme: compare the linecards of the 64Gbit/s
Extreme Alpine platform, which have something like 3 chips on them
and the rest empty green solder mask, with the Cisco 6500/Sup32
32Gbit/s platform (of similar if not newer vintage), which is
legacy-heavy and packed with multiple layered boards of
expensive-looking chips under messy heatsinks.  Extreme failed for
reasons---the 6500 has fundamental features the Alpine doesn't, and
flow-forwarding can't really be used safely outside a LAN---but just
looking at the Alpine boards next to the 6500 boards and realizing
that, for some nontrivial applications, these two platforms are
actually equivalent gives one the idea there's money to be squeezed
from this cow.  And FCoE is just dumb if you have IB, honestly.
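
To put one very rough number on the buffer-cost claim (everything
here is an illustrative assumption, not a figure from any vendor's
datasheet):

    # Sketch: buffer an output-queued Ethernet port needs if you size it for
    # one bandwidth-delay product, vs. the few credits' worth of staging a
    # credit-flow-controlled IB-style link keeps.  All inputs are assumptions.
    link_gbps = 10       # assumed port speed
    rtt_us    = 100      # assumed round-trip time across the fabric
    mtu_bytes = 2048     # assumed IB MTU
    credits   = 8        # assumed credits per virtual lane

    bdp_bytes  = link_gbps * 1e9 / 8 * rtt_us * 1e-6   # bandwidth-delay product
    ib_staging = credits * mtu_bytes                    # per-VL staging on the IB link

    print(round(bdp_bytes / 1024), "KiB of output buffer per Ethernet port")   # ~122 KiB
    print(round(ib_staging / 1024), "KiB of credit staging per IB link")       #  16 KiB

Deep-buffer Ethernet boxes carry far more than one BDP per port,
which is exactly the silicon the blocking fabrics get to skip.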

This is all just dumb blogger ranting though.  We'll see what happens.

All I'm saying is, there's more in the world to integrate than the
way a hard drive fits inside a chassis, and there are some people on
the cost-conscious, non-banking side of the spectrum whose businesses
are held back by the lack of said integration.  The future shouldn't
be a choice between $1000 sleds and rudderless, stagnant peecee crap.
