--
Eugen* Leitl <a href="http://leitl.org">leitl</a> http://leitl.org
__
ICBM: 48.07100, 11.36820 http://www.ativel.com http://postbiota.org
8B29F6BE: 099D 78BA 2FD3 B014 B08A 7779 75B0 2443 8B29 F6BE
On Tue, Feb 26, 2013 at 06:01:39PM +0100, Sašo Kiselkov wrote:
On 02/26/2013 05:57 PM, Eugen Leitl wrote:
On Tue, Feb 26, 2013 at 06:51:08AM -0800, Gary Driggs wrote:
On Feb 26, 2013, at 12:44 AM, Sašo Kiselkov wrote:
I'd also recommend that you go and subscribe to z...@lists.illumos.org
On Thu, Jan 03, 2013 at 03:21:33PM -0600, Phillip Wagstrom wrote:
Eugen,
Thanks Phillip and others, most illuminating (pun intended).
Be aware that p0 corresponds to the entire disk, regardless of how it
is partitioned with fdisk. The fdisk partitions are 1 - 4. By using p0 for
On Fri, Jan 04, 2013 at 06:57:44PM -, Robert Milkowski wrote:
Personally, I'd recommend putting a standard Solaris fdisk
partition on the drive and creating the two slices under that.
Why? In most cases giving zfs an entire disk is the best option.
I wouldn't bother with any
On Sun, Dec 30, 2012 at 06:02:40PM +0100, Eugen Leitl wrote:
Happy $holidays,
I have a pool of 8x ST31000340AS on an LSI 8-port adapter as
Just a little update on the home NAS project.
I've set the pool sync to disabled, and added a couple
of
8. c4t1d0 ATA-INTELSSDSA2M080-02G9 cyl
On Thu, Jan 03, 2013 at 12:44:26PM -0800, Richard Elling wrote:
On Jan 3, 2013, at 12:33 PM, Eugen Leitl eu...@leitl.org wrote:
On Sun, Dec 30, 2012 at 06:02:40PM +0100, Eugen Leitl wrote:
Happy $holidays,
I have a pool of 8x ST31000340AS on an LSI 8-port adapter as
Just
On Thu, Jan 03, 2013 at 03:21:33PM -0600, Phillip Wagstrom wrote:
Eugen,
Be aware that p0 corresponds to the entire disk, regardless of how it
is partitioned with fdisk. The fdisk partitions are 1 - 4. By using p0 for
log and p1 for cache, you could very well be writing to same
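The overlap risk follows from Solaris device naming: p0 addresses the entire disk, while p1 through p4 are the fdisk partitions inside it, so p1 always lies within p0. A sketch of the unsafe versus safe layout (device names are illustrative, not from the original thread):

```shell
# UNSAFE: p0 is the whole disk, p1 is an fdisk partition inside it --
# the log and cache vdevs would share LBAs and corrupt each other.
zpool add tank log   /dev/dsk/c3d1p0
zpool add tank cache /dev/dsk/c3d1p1

# SAFE: carve two non-overlapping slices with format(1M) and use those.
zpool add tank log   /dev/dsk/c3d1s0
zpool add tank cache /dev/dsk/c3d1s1
```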
On Thu, Jan 03, 2013 at 03:44:54PM -0600, Phillip Wagstrom wrote:
On Jan 3, 2013, at 3:33 PM, Eugen Leitl wrote:
On Thu, Jan 03, 2013 at 03:21:33PM -0600, Phillip Wagstrom wrote:
Eugen,
Be aware that p0 corresponds to the entire disk, regardless of how it
is partitioned
On Sun, Dec 30, 2012 at 10:40:39AM -0800, Richard Elling wrote:
On Dec 30, 2012, at 9:02 AM, Eugen Leitl eu...@leitl.org wrote:
The system is a MSI E350DM-E33 with 8 GByte PC1333 DDR3
memory, no ECC. All the systems have Intel NICs with mtu 9000
enabled, including all switches in the path
Happy $holidays,
I have a pool of 8x ST31000340AS on an LSI 8-port adapter as
a raidz3 (no compression nor dedup) with reasonable bonnie++
1.03 values, e.g. 145 MByte/s Seq-Write @ 48% CPU and 291 MByte/s
Seq-Read @ 53% CPU. It scrubs with 230+ MByte/s with reasonable
system load. No hybrid
On Mon, Dec 03, 2012 at 06:28:17PM -0500, Peter Tripp wrote:
Hi Eugen,
Whether it's compatible entirely depends on the chipset of the SATA
controller.
This is what I was trying to find out. I guess I just have to
test it empirically.
Basically that card is just a dual port 6gbps PCIe
On Tue, Dec 04, 2012 at 03:38:07AM -0800, Gary Driggs wrote:
On Dec 4, 2012, Eugen Leitl wrote:
Either way I'll know the hardware support situation soon
enough.
Have you tried contacting Sonnet?
No, but I did some digging. It *might* be a Marvell 88SX7042,
which would be then supported
On Tue, Dec 04, 2012 at 11:07:17AM +0100, Eugen Leitl wrote:
On Mon, Dec 03, 2012 at 06:28:17PM -0500, Peter Tripp wrote:
Hi Eugen,
Whether it's compatible entirely depends on the chipset of the SATA
controller.
This is what I was trying to find out. I guess I just have to
test
On Tue, Nov 27, 2012 at 12:12:43PM +, Edward Ned Harvey
(opensolarisisdeadlongliveopensolaris) wrote:
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Eugen Leitl
can I make e.g. LSI SAS3442E
directly do SSD caching (it says
Dear internets,
I've got an old SunFire X2100M2 with 6-8 GBytes ECC RAM, which
I wanted to put into use with Linux, using the Linux
VServer patch (an analogon to zones), and 2x 2 TByte
nearline (WD RE4) drives. It occurred to me that the
1U case had enough space to add some SSDs (e.g.
2-4 80
Hi,
after a flaky 8-drive Linux RAID10 just shredded about 2 TByte worth
of my data at home (conveniently just before I could make
a backup) I've decided to both go full redundancy as well as
all zfs at home.
A couple questions: is there a way to make WD20EFRX (2 TByte, 4k
sectors) and WD200FYPS
On Wed, Nov 21, 2012 at 08:31:23AM -0700, Jan Owoc wrote:
Hi Eugen,
On Wed, Nov 21, 2012 at 3:45 AM, Eugen Leitl eu...@leitl.org wrote:
Secondly, has anyone managed to run OpenIndiana on an AMD E-350
(MSI E350DM-E33)? If it doesn't work, my only options would
be all-in-one with ESXi
On Thu, Nov 08, 2012 at 04:57:21AM +, Edward Ned Harvey
(opensolarisisdeadlongliveopensolaris) wrote:
Yes you can, with the help of Dell, install OMSA to get the web interface
to manage the PERC. But it's a pain, and there is no equivalent option for
most HBAs. Specifically, on my
On Wed, Nov 07, 2012 at 12:58:04PM +0100, Sašo Kiselkov wrote:
On 11/07/2012 12:39 PM, Tiernan OToole wrote:
Morning all...
I have a Dedicated server in a data center in Germany, and it has 2 3TB
drives, but only software RAID. I have got them to install VMWare ESXi and
so far
On Wed, Nov 07, 2012 at 01:33:41PM +0100, Sašo Kiselkov wrote:
On 11/07/2012 01:16 PM, Eugen Leitl wrote:
I'm very interested, as I'm currently working on an all-in-one with
ESXi (using N40L for prototype and zfs send target, and a Supermicro
ESXi box for production with guests, all booted
___
Freenas-announce mailing list
freenas-annou...@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/freenas-announce
- End forwarded message -
Hi,
I would like to give a short talk at my organisation in order
to sell them on zfs in general, and on zfs-all-in-one and
zfs as remote backup (zfs send).
Does anyone have a short set of presentation slides or maybe
a short video I could pillage for that purpose? Thanks.
-- Eugen
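For the zfs-send-as-backup part of the pitch, the whole mechanism fits on one slide: snapshots piped over ssh, full once, incremental thereafter. A minimal sketch (pool, dataset, and host names below are made up):

```shell
# Initial full replication to the backup box:
zfs snapshot tank/data@sun
zfs send tank/data@sun | ssh backuphost zfs receive backup/data

# Subsequent incrementals between successive snapshots:
zfs snapshot tank/data@mon
zfs send -i tank/data@sun tank/data@mon | ssh backuphost zfs receive backup/data
```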
you go for
2x mirror for L2ARC and 2x mirror for ZIL? Some other configuration?
Thanks!
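On the configuration question itself: mirroring makes sense for the ZIL but is not possible for L2ARC, since cache devices are always striped and their loss costs only warm cache, never data. A sketch with illustrative device names:

```shell
# slog: mirror it -- an unmirrored slog that dies can lose the last
# few seconds of synchronous writes.
zpool add tank log mirror c4t0d0 c4t1d0

# L2ARC: no mirroring supported; add both devices and ZFS stripes
# reads across them.
zpool add tank cache c4t2d0 c4t3d0
```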
I'm currently thinking about rolling a variant of
http://www.napp-it.org/napp-it/all-in-one/index_en.html
with remote backup (via snapshot and send) to 2-3
other (HP N40L-based) zfs boxes for production in
our organisation. The systems themselves would
be either Dell or Supermicro (latter with
On Fri, Aug 03, 2012 at 08:39:55PM -0500, Bob Friesenhahn wrote:
For the slog, you should look for a SLC technology SSD which saves
unwritten data on power failure. In Intel-speak, this is called
Enhanced Power Loss Data Protection. I am not running across any
Intel SSDs which claim
As a napp-it user who needs to upgrade from NexentaCore, I recently saw
preferred for OpenIndiana live but running under Illumian, NexentaCore and
Solaris 11 (Express)
as a system recommendation for napp-it.
I wonder about the future of OpenIndiana and Illumian, which
fork is likely to
by storage capacity
Nexenta Enterprise platform charge U $$ for raw capacity
I'm currently using NexentaCore which is EOL. It seems
my choices are either OpenIndiana or Illumian, which seem
to be very closely related.
On 7/11/2012 7:51 AM, Eugen Leitl wrote:
As a napp-it user who recently needs
Sorry for an off-topic question, but anyone knows how to make
network configuration (done with ifconfig/route add) sticky in
nexenta core/napp-it?
After reboot system reverts to 0.0.0.0 and doesn't listen
to /etc/defaultrouter
Thanks.
___
zfs-discuss
killed the Solaris star?
3.1).
On Sat, Aug 06, 2011 at 07:19:56PM +0200, Eugen Leitl wrote:
Upgrading to hacked N36L BIOS seems to have done the trick:
eugen@nexenta:~$ zpool status tank
pool: tank
state: ONLINE
scan: none requested
config:
NAME        STATE     READ WRITE CKSUM
tank        ONLINE       0     0     0
errors: No known data errors
Anecdotally, the drive noise and system load have gone
down as well. It seems even with small SSDs hybrid pools
are definitely worthwhile.
On Fri, Aug 05, 2011 at 10:43:02AM +0200, Eugen Leitl wrote:
I think I've found the source of my problem: I need
On Sat, 2011-08-06 at 21:40 +0200, Eugen Leitl wrote:
I've recently figured out how to make low-end hardware (e.g. HP N36L)
work well as zfs hybrid pools. The system (Nexenta Core + napp-it)
exports the zfs pools as CIFS, NFS or iSCSI
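The export side is mostly dataset properties; a sketch of what that setup might look like (dataset names are made up, and the shareiscsi shorthand shown was superseded by COMSTAR in later releases):

```shell
# NFS and CIFS/SMB exports are per-dataset properties:
zfs set sharenfs=on tank/export
zfs set sharesmb=on tank/export

# iSCSI: create a zvol and export it as a LUN:
zfs create -V 100G tank/vols/vm0
zfs set shareiscsi=on tank/vols/vm0
```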
On 08/06/2011 09:30 PM, John A. Sullivan III wrote:
On Sat, 2011-08-06 at 21:40 +0200, Eugen Leitl wrote:
I've recently figured out how to make low-end hardware (e.g
On Thu, Aug 04, 2011 at 11:58:47PM +0200, Eugen Leitl wrote:
On Thu, Aug 04, 2011 at 02:43:30PM -0700, Larry Liu wrote:
root@nexenta:/export/home/eugen# zpool add tank log /dev/dsk/c3d1p0
You should use c3d1s0 here.
Th
root@nexenta:/export/home/eugen# zpool add tank cache /dev/dsk
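With Larry's correction applied, the two commands would presumably become (same illustrative disk, s0 and s1 being two non-overlapping slices created with format):

```shell
zpool add tank log   /dev/dsk/c3d1s0
zpool add tank cache /dev/dsk/c3d1s1
zpool status tank    # confirm both vdevs show up ONLINE
```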
results. The main one being no emails from
Nexenta informing you that the syspool has moved to a degraded state when it
actually hasn’t :)
...
On Fri, Aug 05, 2011 at 09:05:07AM +0200, Eugen Leitl wrote:
On Thu, Aug 04, 2011 at 11:58:47PM +0200, Eugen Leitl wrote:
On Thu, Aug 04, 2011 at 02:43
lrwxrwxrwx 1 root root 51 Aug 5 00:45 /dev/dsk/c3d1s8 -> ../../devices/pci@0,0/pci-ide@14,1/ide@0/cmdk@1,0:i
lrwxrwxrwx 1 root root 51 Aug 5 00:45 /dev/dsk/c3d1s9 -> ../../devices/pci@0,0/pci-ide@14,1/ide@0/cmdk@1,0:j
It's probably something blindingly obvious to a seasoned
Solaris user, but I'm stumped. Any ideas?
: No known data errors
On Sat, Jul 30, 2011 at 12:56:38PM +0200, Eugen Leitl wrote:
apt-get update
apt-clone upgrade
Any first impressions?
I finally came around installing NexentaCore 3.1 along with
napp-it and AMP on a HP N36L with 8 GBytes RAM. I'm testing
it with 4x 1 and 1.5 TByte consumer SATA drives
the internal
USB and external eSATA ports are good for.
apt-get update
apt-clone upgrade
Any first impressions?
http://blog.backblaze.com/2011/07/20/petabytes-on-a-budget-v2-0revealing-more-secrets/
Seem to be real 512 Byte sectors, too.
500gb drives
striped) but writes top out at about 10 and drop a lot lower... If I were
to add a couple USB keys for ZIL, would it make a difference?
Speaking of which, is there a point in using an eSATA flash stick?
If yes, which?
Anyone running a Crucial CT064M4SSD2? Any good, or should
I try getting a RealSSD C300, as long as these are still
available?
raidz2 or mirrored
pools will be.
This is purely my speculation, but now that I thought about it, can't get
rid of the idea ;) ...
, that'd probably suck on raidz2, too?
Thanks.
--
___
cryptography mailing list
cryptogra...@randombit.net
http://lists.randombit.net/mailman/listinfo/cryptography
this idea.
Regards,
Zooko
not bother with compression?
The data set is a lot of scanned documents, already compressed (TIF and PDF).
I presume the incidence of identical blocks will be very low under such
circumstances.
Oh, and with 4x 3 TByte SATA mirrored pool is pretty much without
alternative, right?
).
Known problem? Should I go to stable, or try NexentaStor instead?
(I'd rather keep options open with Nexenta Core and napp-it).
I realize this is off-topic, but Oracle has completely
screwed up the support site from Sun. I figured someone
here would know how to obtain
Sun Fire X2100 M2 Server Software 1.8.0 Image contents:
* BIOS is version 3A21
* SP is updated to version 3.24 (ELOM)
* Chipset driver is
drive. Consider it a
fancy anti-shock packaging.
What about Hitachi HDS723030ALA640 (aka Deskstar 7K3000, claimed
24/7)?
with an SSD.
That's all I run in my laptops now.
, to use ZFS storage on ESX(i)?
Why do you think 10 GBit Ethernet is expensive? An Intel NIC is 200 EUR,
and a crossover cable is enough. No need for a 10 GBit switch.
). I have a problem that
people want to have scalable in ~30 TByte increments solution,
and I'd rather avoid adding SAS expander boxes but add
identical boxes in a cluster, and not just as individual
NFS mounts.
list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
On Sat, Oct 30, 2010 at 02:10:49PM -0700, zfs user wrote:
1 Mangy-Cours CPU
^
Dunno whether deliberate, or malapropism, but I love it.
close to its competitors.
Yeah, no more Sun hardware for us, either. Mostly Supermicro,
Dell, HP.
as well.
Somebody has already bought this microserver? :)
Not yet, though I'm thinking about putting those new 4x 3 TByte SATA
disks into it. Resilver times in raidz3 will be a nightmare,
though.
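The resilver worry can be ballparked: a resilver has to rebuild roughly one full drive's worth of data, bounded by sustained disk throughput. A back-of-envelope sketch (the 100 MB/s rate is an assumption, not a measurement; real resilvers on busy raidz vdevs run far slower):

```shell
# One full 3 TB drive divided by an assumed ~100 MB/s sustained rate:
drive_bytes=$((3 * 10**12))
rate_bytes_per_s=$((100 * 10**6))
echo "$((drive_bytes / rate_bytes_per_s / 3600)) hours (best case)"
# prints: 8 hours (best case)
```

Double that or worse under real load, and the "nightmare" comment above looks justified for wide raidz3 vdevs of 3 TB disks.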
of decades. To everyone who taught me,
worked with me, learned from me, and supported my efforts in countless ways
large and small: Thank you.
Anyone had any luck getting either OpenSolaris or FreeBSD with
zfs working on
http://h10010.www1.hp.com/wwpc/uk/en/sm/WF06b/15351-15351-4237916-4237917-4237917-4248009-4248034.html
?
The Neo has a lot more oomph than the Atoms, and the box can
handle up to 8 GByte ECC memory.
How badly would a dual-core 1.6 GHz Atom with 4 GBytes RAM
be underpowered for serving 4-6 SATA drives? What kind of
transfer speed (GBit Ethernet, Intel NICs) can I expect with
raidz2 or raidz3?
Thanks.
do you
expect?
Thanks for the thumbs-up. Just local GBit LAN. If it does
~20-40 MByte/s it should be quite enough.
, but it's also more expensive.
Not in my neck of the woods, Sun have always been most competitive.
You find Sun to be a better deal than Supermicro? Especially,
when you're sticking a very large number of disks into it, and
can't source the diskless caddies elsewhere?
series of anecdotes I've done quite well ditching
Dells and Suns and HPs for Supermicro, and sourcing disks (and sometimes
memory) from the likes of TechData and IngramMicro. No doubt, others have
very different stories to tell.
of a sample size of maybe 20.
http://www.natecarlson.com/2010/05/07/review-supermicros-sc847a-4u-chassis-with-36-drive-bays/
Review: SuperMicro’s SC847 (SC847A) 4U chassis with 36 drive bays
May 7, 2010
[Or my quest for
quite leery. Maybe the SAS Seagates are better.
of proof is quite onerous, and quite in their court. Words
are not nearly enough.
It seems the technology is finished, unless a credible fork is
forthcoming.
.
recommend you to test both systems. Maybe you will find
some comparative benchmarks in Phoronix, but those don’t guarantee your
application will work better on one or the other.
Hope that helps.
Phobos (Email) - 29 March '10 - 13:12
am I a developer), such crucial features are nontrivial to backport because
FreeBSD doesn't practice layer separation. Inasmuch this is still true
for the future we'll see once the Oracle/Sun dust settles.
of the PCI solutions
that have either battery backup or external power.
But this seems like a good solution if someone has the space.
It doesn't do ECC memory though, which is a real pity.
? How much disks (assuming 100 MByte/s throughput for each)
would be considered pushing it for a current single-socket quadcore?
Are compression, sha256 checksums, or deduplication enabled for the
filesystem you are using?
case
scenario, what would be the best candidate for a fork? Nexenta?
Debian already included FreeBSD as a kernel flavor into its
fold, it seems Nexenta could be also a good candidate.
Maybe anyone in the know could provide a short blurb on what
the state is, and what the options are.
priced (~230 EUR) and stacked with 4GB or 8GB DDR2-ECC should
be more than sufficient.
Wouldn't it be better investing these 300-350 EUR into 16 GByte or more of
system memory, and a cheap UPS?
http://www.hyperossystems.co.uk/07042003/hardware.htm
?Item=N82E16835203002
to be able
to use one for a hybrid zfs iSCSI target for VMWare, probably with
10 GBit Ethernet.
There's no way I would use something like this for most installs, but
there is definitely some use. Now that opensolaris supports sata pmp,
you could use a similar chassis for a zfs pool.
FreeBSD 8.0 which is just out and claims zfs ready for
production will do better.
because Quixplorer does not respect system user permissions (BR 2798934).
* Disk temperature not detected correct for SCSI devices (BR 2801565).
* Fix JPCERT/CC JVN#89791790 (Cross-site scripting vulnerability).
On Wed, Oct 28, 2009 at 01:40:12PM +0800, C. Bergström wrote:
So use Nexenta?
Got data you care about?
Verify extensively before you jump to that ship.. :)
So you're saying Nexenta have been known to drop bits on
the floor, unprovoked? Inquiring minds...
scaling up its
FreeNAS 0.7 final with zfs will be out Any Day Now. It may lag behind
OpenSolaris, but it is usable.
support for embedded platforms (MIPS, ARM, PowerPC) recently. Not sure of
the porting progress of OpenSolaris off-hand.
keeping the
pool active. At this point, the only way I can get it to work is to
offline (ie export) the whole pool, and then pray that nothing
interrupts the expansion process.
Anyone knows how Drobo does it?
big tower,
and is now dwelling in a cheap Sharkoon case. The fan is a bit noisy,
but then, the server is behind a couple of doors, and serves the
house LAN. It's currently running Linux, but has already a FreeNAS
on an IDE DOM preinstalled.
Somewhat hairy, but interesting. FYI.
https://sourceforge.net/apps/phpbb/freenas/viewtopic.php?f=97t=1902
.
to deal with that is watercooling, or operating the thing
out of your earshot (cellar, etc).
There are large enclosures with large, slow-moving fans which are
suitable for the living room, but I doubt you can miss 16-24 drives
in action.
configuration to use.
Any takers?
I'm willing to contribute (zfs on Opensolaris, mostly Supermicro
boxes and FreeNAS (FreeBSD 7.2, next 8.x probably)). Is there a
wiki for that somewhere?
raidz3 into the number crunch?
Thanks!
layout examples for
odd number of disks (understandable, since Sun has to sell the
Thumper), and is pretty mum on raidz3.
Thank you. This list is fun, and helpful.
/X8DAi.cfm
in above box.
anything that's been out for a while should work fine. I presume you're going
for an Intel Xeon solution -- the peripherals on those boards are a bit better
supported than the AMD stuff, but even the AMD boards work well.
Yes, dual-socket quadcore Xeon.
you suggest?
Thanks.