[zfs-discuss] Lightning SSD with 180,000 IOPs, 320MB/s writes

2009-09-15 Thread Neal Pollack

http://www.dailytech.com/Startup+Drops+Bombshell+Lightning+SSD+With+180k+IOPS+500320+MBs+ReadWrites/article16249.htm

Pliant Technologies
just released two Lightning high-performance enterprise SSDs that threaten to blow 
away the competition.  The drives use proprietary ASICs to deliver incredible 
input/output operations per second (IOPS) figures that nearly double those of the 
fastest competitors.  The Enterprise Flash Drive (EFD) LS offers 180,000 IOPS in a 
3.5-inch form factor, while the 2.5-inch EFD LB claims 140,000 IOPS.


If that's not enough to sate the appetite of even the most die-hard flash drive 
enthusiast, this will be: the drives also offer 500 MB/sec read and 320 MB/sec write 
rates for the 3.5-inch model, and 420 MB/sec read and 220 MB/sec write rates for the 2.5-inch model.

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Using consumer drives in a zraid2

2009-08-26 Thread Neal Pollack

On 08/25/09 10:46 PM, Tim Cook wrote:
On Wed, Aug 26, 2009 at 12:22 AM, thomas tjohnso...@gmail.com wrote:


  I'll admit, I was cheap at first and my
  fileserver right now is consumer drives.  You
  can bet all my future purchases will be of the enterprise grade.
  And guess what... none of the drives in my array are less than 5
  years old, so even
  if they did die, and I had bought the enterprise versions, they'd be
  covered.

Anything particular happen that made you change your mind? I started
with
enterprise grade because of similar information discussed in this
thread.. but I've
also been wondering how zfs holds up with consumer level drives and
if I could save
money by using them in the future. I guess I'm looking for horror
stories that can be
attributed to them? ;)



When it comes to my ZFS project, I am currently lacking horror stories.  
When it comes to "what the hell, this drive literally failed a week 
after the warranty was up," I unfortunately PERSONALLY have 3 examples.  
I'm guessing (hoping) it's just bad luck.



Luck, or design/usage?
Let me explain: I've also had many drives fail over the last 25
years of working on computers, I.T., engineering, manufacturing,
and building my own PCs.

Drive life can be directly affected by heat.  Many home tower designs,
until the last year or two, had no cooling fans or air flow where
the drives mount.  I'd say over 80% of average desktop PCs do
not have any cooling or air flow for the drive.
(I've replaced many, many for friends.)
[HP small form factor desktops are the worst offenders
 in what I jokingly call "zero cooling" design :-)
Just look at the quantity of refurbished ones offered for sale.]

Once I started adding cooling fans for my drives in the
workstations I build myself, the rate of drive failures went
down by a lot.  The drive life went up by a lot.

You can still have random failures for a dozen reasons, but
heat is one of the big killers.  I did some experiments over
the last 5 years and found that ANY amount of air flow makes
a big difference.  If you run a 12 volt fan at 7 volts by
connecting its little red and black wires across the outside
of a disk drive connector (red and orange wires, 12 and 5 volt, difference
is 7), then the fan is silent, moves a small flow of air, and drops
the disk drive temperature by a lot.
[Translation:  It can be as quiet as a Dell, but twice as good
since you built it :-) ]

That said, there are some garbage disk drive designs on the market.
But if a lot of yours fail early, close to the end of the warranty, they might
be getting abused or run near the max design temperature?

Neal


Perhaps the luck wasn't SO
bad though, as I had backups of all of those (proof that you should never 
rely on a single drive to last up to, or beyond, its warranty period).


--Tim




___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] snv_110 - snv_121 produces checksum errors on Raid-Z pool

2009-08-25 Thread Neal Pollack

On 08/25/09 05:29 AM, Gary Gendel wrote:

I have a RAID-Z pool of five 500 GB disks that has been producing checksum errors right 
after upgrading SXCE to build 121.  They seem to be randomly occurring on all 5 
disks, so it doesn't look like a disk failure situation.

Repeatedly running a scrub on the pool repairs somewhere between 20 and a few 
hundred checksum errors each time.
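
(For anyone following along, the cycle being described is just the standard scrub/status 
loop; the pool name tank below is purely illustrative:

# zpool scrub tank
# zpool status -v tank
# fmdump -e | grep -i checksum

zpool status shows the per-device CKSUM counts and scrub progress, and fmdump lists 
any underlying checksum ereports that FMA logged.)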

Since I hadn't physically touched the machine, it seems a very strong 
coincidence that it started right after I upgraded to 121.

This machine is a SunFire v20z with a Marvell SATA 8-port controller (the same 
one as in the original thumper).  I've seen this kind of problem way back 
around build 40-50 ish, but haven't seen it after that until now.

Anyone else experiencing this problem or knows how to isolate the problem 
definitively?

Thanks,
Gary



My group also upgraded a small server with 6 disks to build 121 and almost 
immediately
all 6 disks were showing between dozens and hundreds of checksum errors.

Neal

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Finding SATA cards for ZFS; was Lundman home NAS

2009-08-03 Thread Neal Pollack

On 07/31/09 06:12 PM, Jorgen Lundman wrote:


Finding a SATA card that would work with Solaris, and be hot-swap, and 
more than 4 ports, sure took a while. Oh and be reasonably priced ;)


Let's take this first point: "card that works with Solaris".

I might try to find some engineers to write device drivers to
improve this situation.
Would this alias be interested in teaching me which 3 or 4 cards they would
put at the top of the wish list for Solaris support? 


I assume the current feature gap is defined as needing driver support
for PCI Express add-in cards that have 4 to 8 ports, are inexpensive
JBOD (not expensive HW RAID), and can handle hot-swap while the OS is running.
Would this be correct?

Neal


Double the price of the dual core Atom did not seem right.

The SATA card was a close fit to the jumper where the power-switch 
cable attaches, as you can see in one of the photos. This is because 
the MV8 card is quite long, and has the big plastic SATA sockets. It 
does fit, but it was the tightest spot.


I also picked the 5-in-3 drive cage that had the shortest depth 
listed, 190mm. For example the Supermicro M35T is 245mm, another 5cm. 
Not sure that would fit.


Lund


Nathan Fiedler wrote:

Yes, please write more about this. The photos are terrific and I
appreciate the many useful observations you've made. For my home NAS I
chose the Chenbro ES34069 and the biggest problem was finding a
SATA/PCI card that would work with OpenSolaris and fit in the case
(technically impossible without a ribbon cable PCI adapter). After
seeing this, I may reconsider my choice.

For the SATA card, you mentioned that it was a close fit with the case
power switch. Would removing the backplane on the card have helped?

Thanks

n


On Fri, Jul 31, 2009 at 5:22 AM, Jorgen Lundman lund...@gmo.jp wrote:
I have assembled my home RAID finally, and I think it looks rather 
good.


http://www.lundman.net/gallery/v/lraid5/p1150547.jpg.html

Feedback is welcome.

I have yet to do proper speed tests, I will do so in the coming week 
should

people be interested.

Even though I have tried to use only existing, and cheap, parts the 
end sum
became higher than I expected. Final price is somewhere in the 
47,000 yen

range. (Without hard disks)

If I were to make and sell these, they would be 57,000 or so, so I 
do not
really know if anyone would be interested. Especially since SOHO NAS 
devices

seem to start around 80,000.

Anyway, sure has been fun.

Lund

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss





___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Motherboard for home zfs/solaris file server

2009-07-23 Thread Neal Pollack

On 07/23/09 09:19 AM, Richard Elling wrote:

On Jul 23, 2009, at 5:42 AM, F. Wessels wrote:


Hi,

I'm using Asus M3A78 boards (with the SB700) for OpenSolaris and M2A* 
boards (with the SB600) for Linux, some of them with 4*1GB and others 
with 4*2GB ECC memory. ECC faults will be detected and reported. I 
tested it with a small tungsten light. By moving the light source 
slowly towards the memory banks you'll heat them up in a controlled 
way, and at a certain point bit flips will occur.


I am impressed!  I don't know very many people interested in inducing
errors in their garage.  This is an excellent way to demonstrate random
DRAM errors. Well done!


I recommend you go for an M4A board since they support up to 16 GB.
I don't know if you can run OpenSolaris without a video card after 
installation; I think you can disable the "halt on no video card" option in 
the BIOS. But Simon Breden had some trouble with it, see his 
home-server blog. But you can go for one of the three M4A boards with 
a 780G onboard. Those will give you 2 PCIe x16 connectors. I don't 
think the onboard NIC is supported. 



What is the specific model of the onboard NIC chip?
We may be working on it right now.

Neal


I always put an Intel NIC (e1000) in, just to prevent any trouble. I 
don't have any trouble with the SB700 in AHCI mode. Hotplugging works 
like a charm. Transferring a couple of GBs over eSATA takes 
considerably less time than via USB.
I have a PATA to dual-CF adapter and two industrial 16 GB CF cards as 
a mirrored root pool. It takes forever to install Nevada, at least 14 
hours; I suspect the CF cards lack caches. But I don't update that 
regularly, still on snv_104.  And I have 2 mirrors and a hot spare. The 
sixth port is an eSATA port I use to transfer large amounts of data. 
This system consumes about 73 watts idle and 82 under I/O load. 
(5 disks, a separate NIC, 8 GB RAM and a BE-2400, all using just 73 
watts!!!)


How much power does the tungsten light burn? :-)
 -- richard

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] SSDs get faster and less expensive

2009-07-21 Thread Neal Pollack

On 07/21/09 03:00 PM, Nicolas Williams wrote:

On Tue, Jul 21, 2009 at 02:45:57PM -0700, Richard Elling wrote:
  

But to put this in perspective, you would have to *delete* 20 GBytes



Or overwrite (since the overwrites turn into COW writes of new blocks
and the old blocks are released if not referred to from a snapshot).

  

of data a day on a ZFS file system for 5 years (according to Intel) to
reach the expected endurance.  I don't know many people who delete
that much data continuously (I suspect that the satellite data vendors
might in their staging servers... not exactly a market for SSDs)



Don't forget atime updates.  If you just read, you're still writing.

Of course, the writes from atime updates will generally be less than the
number of data blocks read, so you might have to read many times the amount
you quoted in order to get the same effect.

(Speaking of atime updates, I run my root datasets with atime updates
disabled.  I don't have hard data, but it stands to reason that things
can go faster that way.  I also mount filesystems in VMs with atime
disabled.)
  


You might find this useful;
http://www.sun.com/bigadmin/features/articles/nvm_boot.jsp

It's from a year ago.

In general though, regardless of how you set things in the article,
I was involved in some destructive testing on nand flash memory,
both SLC and MLC in 2007.  Our team found that when used as
a boot disk, the amount of writes, with current wear-leveling
techniques, were such that we estimated the device would not
fail during the anticipated service life of the motherboard (5 to 7 years).

Using an SSD as a data drive or storage cache drive is an entirely different
situation.  Solaris had been optimized to reduce writes to the boot disk
long before SSD, in an attempt to maximize performance and reliability.

So, for example, in using a CF card as a boot disk with unmodified Solaris,
the writes were so low per 24 hours that Mike and Krister's team calculated
a best-case device life of 779 years and a worst case under abuse of approximately
68,250 hours.  The calculations change with device size, wear-leveling algorithm,
etc.  Current SSDs are better.  But the above calculations did not take into
account random electronics failures (MTBF), just the failure mode of
exhausting the maximum write count.
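
(As a rough back-of-the-envelope illustration, with made-up but plausible numbers:
a 16 GB SLC device rated for 100,000 program/erase cycles with ideal wear leveling
can absorb about 16 GB x 100,000 = 1.6 PB of writes; at 5 GB of writes per day that
is 320,000 days, or roughly 875 years.  The figures above came out of actual
measurement, but the arithmetic has the same shape.)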

So I really sleep fine at night if the SSD or CF is a boot disk, 
especially with
atime disabled.   If it's for a cache, well, that might require some 
additional
testing/modeling/calculation.  If it were a write-cache for critical 
data, I would

calculate, and then simply replace it periodically *before* it fails.
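
(For reference, atime is an ordinary per-dataset ZFS property; disabling it on a root
pool is something like:

# zfs set atime=off rpool
# zfs get atime rpool

Child datasets inherit the setting unless it is overridden.)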

Neal


Yes, I'm picking nits; sorry.

Nico
  


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS, power failures, and UPSes

2009-06-30 Thread Neal Pollack

On 06/30/09 03:00 AM, Andre van Eyssen wrote:

On Tue, 30 Jun 2009, Monish Shah wrote:

The evil tuning guide says "The ZIL is an essential part of ZFS and 
should never be disabled."  However, if you have a UPS, what can go 
wrong that really requires ZIL?


Without addressing a single ZFS-specific issue:

* panics
* crashes
* hardware failures
- dead RAM
- dead CPU
- dead systemboard
- dead something else
* natural disasters
* UPS failure
* UPS failure (must be said twice)
* Human error ("what does this button do?")
* Cabling problems ("say, where did my disks go?")
* Malicious actions ("Fired? Let me turn their power off!")

That's just a warm-up; I'm sure people can add both the ZFS-specific 
reasons and also the fallacy that a UPS does anything more than 
mitigate one particular single point of failure.


Actually, they do quite a bit more than that.
They create jobs, generate revenue for battery manufacturers and for the techs 
that change
batteries and do PM maintenance on the large units.  Let's not forget 
that they add
significant revenue to the transportation industry, given their weight 
for shipping.


In the last 28 years of doing this stuff, I've found a few times that 
the UPS has actually
worked and lasted as long as the outage.  Many other times, the unit has 
failed (circuits),
or the batteries are beyond their service life.  But really, something 
approaching 40%
of the time they actually work out OK.

So they also create repair and recycling jobs. :-)




Don't forget to buy two UPSes and split your machine across both. And 
don't forget to actually maintain the UPS. And check the batteries. 
And schedule a load test.


The single best way to learn about the joys of UPS behaviour is to sit 
down and have a drink with a facilities manager who has been doing the 
job for at least ten years. At least you'll hear some funny stories 
about the day a loose screw on one floor took out a house UPS and 100+ 
hosts and NEs with it.


Andre.




___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] zfs on 32 bit?

2009-06-16 Thread Neal Pollack

On 06/16/09 02:39 PM, roland wrote:

so, we have a 128bit fs, but only support for 1tb on 32bit?

i`d call that a bug, isn`t it ?  is there a bugid for this? ;)
  


Well, opinion is welcome.
I'd call it an RFE.

With 64 bit versions of the CPU chips so inexpensive these days,
how much money do you want me to invest in moving modern features
and support to old versions of the OS?

I mean, Microsoft could, on a technical level, backport all new features 
from
Vista and Windows Seven to Windows 95.  But if they did that, their 
current offering

would lag, since all the engineers would be working on the older stuff.

Heck, you can buy a 64 bit CPU motherboard very very cheap.  The staff 
that we do have
are working on modern features for the 64bit version, rather than 
spending all their time

in the rear-view mirror.   Live life forward.  Upgrade.
Changing all the data structures in the 32 bit OS to handle super-large 
disks is, well, sorta
like trying to get a Pentium II to handle HD video.  I'm sure, with 
enough time and money,
you might find a way.  But is it worth it?  Or is it cheaper to buy a 
new pump?


Neal



___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] zfs on 32 bit?

2009-06-16 Thread Neal Pollack

On 06/16/09 03:22 PM, Ray Van Dolson wrote:

On Tue, Jun 16, 2009 at 03:16:09PM -0700, milosz wrote:
  

yeah i pretty much agree with you on this.  the fact that no one has
brought this up before is a pretty good indication of the demand.
there are about 1000 things i'd rather see fixed/improved than max
disk size on a 32bit platform.



I'd say a lot of folks out there have plenty of enterprise-class 32-bit
hardware still in production in their datacenters.  I know I do.
Several IBM BladeCenters with 32-bit blades and attached storage... 


It would be nice to be able to do ZFS on these platforms (1TB that
is), but I understand if it's not a priority.  But there's certainly a
lot of life left in 32-bit hardware, and not all of it is cheap to
replace.
  


Not sure I understand all this concern.  A 32-bit system can use 1.0 TB disks as 
data drives.
ZFS can use more than 1 disk.  So if you hook up 48 of the 1.0 TB disks 
using ZFS on a 32-bit system, where is the problem?
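
(Purely as an illustration, with invented device names: a pool like

# zpool create bigpool raidz2 c1t0d0 c1t1d0 c1t2d0 c1t3d0 c1t4d0 c1t5d0

is several TB of raw capacity even though no single device exceeds the
32-bit 1 TB limit.)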

If someone running a 32-bit system is angry because they can't waste a 1.5 TB
Seagate disk as the boot drive, then I'll admit I don't understand something
in their requirements.  What is the specific complaint, please?

Neal


Ray

  

On Tue, Jun 16, 2009 at 5:55 PM, Neal Pollack neal.poll...@sun.com wrote:


On 06/16/09 02:39 PM, roland wrote:
  

so, we have a 128bit fs, but only support for 1tb on 32bit?

i`d call that a bug, isn`t it ?  is there a bugid for this? ;)



Well, opinion is welcome.
I'd call it an RFE.

With 64 bit versions of the CPU chips so inexpensive these days,
how much money do you want me to invest in moving modern features
and support to old versions of the OS?

I mean, Microsoft could, on a technical level, backport all new features
from
Vista and Windows Seven to Windows 95.  But if they did that, their current
offering
would lag, since all the engineers would be working on the older stuff.

Heck, you can buy a 64 bit CPU motherboard very very cheap.  The staff that
we do have
are working on modern features for the 64bit version, rather than spending
all their time
in the rear-view mirror.   Live life forward.  Upgrade.
Changing all the data structures in the 32 bit OS to handle super-large
disks is, well, sorta
like trying to get a Pentium II to handle HD video.  I'm sure, with enough
time and money,
you might find a way.  But is it worth it?  Or is it cheaper to buy a new
pump?

Neal
  

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
  


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] X4500 Thumper, config for boot disks?

2009-03-19 Thread Neal Pollack

Hi:

What is the most common practice for allocating (choosing) the two disks 
used for

the boot drives, in a zfs root install, for the mirrored rpool?

The docs for Thumper, and many blogs, always point at cfgadm slots 0 and 1,
which are sata3/0 and sata3/4, which most often map to c5t0d0 and c5t4d0.
But those are on the same controller (yes, I've read all that before).
And these seem to be the ones that BIOS agrees to boot from.

However, the doc below, in section;
http://www.solarisinternals.com/wiki/index.php/ZFS_Configuration_Guide#ZFS_Configuration_Example_.28x4500_with_raidz2.29

mentions using two boot disks for the zfs root on a different controller;
zpool create mpool mirror c5t0d0s0 c4t0d0s0

I'll assume that they meant rpool instead of mpool.  I had thought 
that BIOS
will only agree to boot from the slot 0 and slot 1 disks which are on 
the same
controller. 


Does anyone know which doc is correct, and what two disk devices
are typically being used for the zfs root these days?

If I stick with the x4500 docs and use c5t0d0 and c5t4d0, they both
can be booted from BIOS, but it makes laying out the remaining raidz2 data pool
a little trickier: with 7 sets of 6-disk raidz2, I can't get every vdev spread
across different controller numbers.

But if I use the example from the SolarisInternals.com guide above, with
the two zfs root pool disks on different controllers, it makes it easier
to allocate the remaining vdevs for the 7 sets of 6-disk raidz2, but I can't see
how BIOS could select both of those boot devices?
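
(For concreteness, the first two data vdevs of the layout I have in mind would look
roughly like this; the device names are illustrative, and the remaining five 6-disk
groups would follow the same pattern:

# zpool create tank \
    raidz2 c0t1d0 c1t1d0 c2t1d0 c3t1d0 c4t1d0 c5t1d0 \
    raidz2 c0t2d0 c1t2d0 c2t2d0 c3t2d0 c4t2d0 c5t2d0

)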

Sincere Thanks,

Neal
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] How do I mirror zfs rpool, x4500?

2009-03-18 Thread Neal Pollack

On 03/18/09 10:43 AM, Tim wrote:
On Wed, Mar 18, 2009 at 12:14 PM, Richard Elling 
richard.ell...@gmail.com wrote:


Tim wrote:


Just an observation, but it sort of defeats the purpose of
buying Sun hardware with Sun software if you can't even get a
"this is how your drives will map" out of the deal...


Sun could fix that, but would you really want a replacement for BIOS?
-- richard


Yes, I really would.  I also have a hard time believing BIOS is the 
issue.  I have a 7110 sitting directly below an x4240 in one of my 
racks... the 7110 has no issues reporting disks properly.


BIOS is indeed an issue.  In many x86/x64 PC architecture designs, and 
the current enumeration design of Solaris,
if you add controller cards, or move a controller card, after a previous 
OS installation, then the controller numbers
and ordering change on all the devices.  ZFS apparently does not care, 
but UFS would, since BIOS designates a specific
disk to boot from, and the OS would have a specific boot path including 
a controller number, such as
/dev/dsk/c3t4d0s0, that could change and hence no longer boot.
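
(On x86 Solaris, the boot path recorded at install time can usually be inspected with
something like the following; shown only as an illustration:

# eeprom bootpath
# grep bootpath /boot/solaris/bootenv.rc

If a card shuffle renumbers the controllers, that recorded path no longer matches the
device it was meant to point at.)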

Getting to EFI firmware, dumping BIOS, and redesigning the Solaris 
device enumeration framework would

make things a little more flexible in that type of scenario.




--Tim


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
  


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] How do I mirror zfs rpool, x4500?

2009-03-18 Thread Neal Pollack

On 03/18/09 11:09 AM, Tim wrote:



On Wed, Mar 18, 2009 at 12:49 PM, Neal Pollack neal.poll...@sun.com wrote:


On 03/18/09 10:43 AM, Tim wrote:

On Wed, Mar 18, 2009 at 12:14 PM, Richard Elling
richard.ell...@gmail.com wrote:

Tim wrote:


Just an observation, but it sort of defeats the purpose
of buying Sun hardware with Sun software if you can't
even get a "this is how your drives will map" out of the
deal...


Sun could fix that, but would you really want a replacement
for BIOS?
-- richard


Yes, I really would.  I also have a hard time believing BIOS is
the issue.  I have a 7110 sitting directly below an x4240 in one
of my racks... the 7110 has no issues reporting disks properly.


BIOS is indeed an issue.  In many x86/x64 PC architecture designs,
and the current enumeration design of Solaris,
if you add controller cards, or move a controller card, after a
previous OS installation, then the controller numbers
and ordering change on all the devices.  ZFS apparently does not
care, but UFS would, since BIOS designates a specific
disk to boot from, and the OS would have a specific boot path
including a controller number, such as
/dev/dsk/c3t4d0s0, that could change and hence no longer boot.

Getting to EFI firmware, dumping BIOS, and redesigning the Solaris
device enumeration framework would
make things a little more flexible in that type of scenario.



How does any of that affect an x4500 with onboard controllers that 
can't ever be moved?


Stick a fiber channel controller card into your x4500 PCI slot, then go 
back and look at your
controller numbering, even for the built-in disk controller chips.  Here 
is the cfgadm output

for an X4500 that I set up yesterday.
Notice that the first two controller numbers are for the fibre channel 
devices, and
then notice that the disk controller numbers no longer match your 
documentation,

or your blogs about suggested configuration;

$ cat zcube1.txt
Ap_Id                          Type         Receptacle   Occupant     Condition
c6                             fc           connected    unconfigured unknown
c7                             fc           connected    unconfigured unknown

sata0/0::dsk/c0t0d0            disk         connected    configured   ok
sata0/1::dsk/c0t1d0            disk         connected    configured   ok
sata0/2::dsk/c0t2d0            disk         connected    configured   ok
sata0/3::dsk/c0t3d0            disk         connected    configured   ok
sata0/4::dsk/c0t4d0            disk         connected    configured   ok
sata0/5::dsk/c0t5d0            disk         connected    configured   ok
sata0/6::dsk/c0t6d0            disk         connected    configured   ok
sata0/7::dsk/c0t7d0            disk         connected    configured   ok
sata1/0::dsk/c1t0d0            disk         connected    configured   ok
sata1/1::dsk/c1t1d0            disk         connected    configured   ok
sata1/2::dsk/c1t2d0            disk         connected    configured   ok
sata1/3::dsk/c1t3d0            disk         connected    configured   ok
sata1/4::dsk/c1t4d0            disk         connected    configured   ok
sata1/5::dsk/c1t5d0            disk         connected    configured   ok
sata1/6::dsk/c1t6d0            disk         connected    configured   ok
sata1/7::dsk/c1t7d0            disk         connected    configured   ok
sata2/0::dsk/c2t0d0            disk         connected    configured   ok
sata2/1::dsk/c2t1d0            disk         connected    configured   ok
sata2/2::dsk/c2t2d0            disk         connected    configured   ok
sata2/3::dsk/c2t3d0            disk         connected    configured   ok
sata2/4::dsk/c2t4d0            disk         connected    configured   ok
sata2/5::dsk/c2t5d0            disk         connected    configured   ok
sata2/6::dsk/c2t6d0            disk         connected    configured   ok
sata2/7::dsk/c2t7d0            disk         connected    configured   ok
sata3/0::dsk/c3t0d0            disk         connected    configured   ok   -- Boot disk, slot 0
sata3/1::dsk/c3t1d0            disk         connected    configured   ok
sata3/2::dsk/c3t2d0            disk         connected    configured   ok
sata3/3::dsk/c3t3d0            disk         connected    configured   ok
sata3/4::dsk/c3t4d0            disk         connected    configured   ok   -- Boot disk, slot 1
sata3/5::dsk/c3t5d0            disk         connected    configured   ok
sata3/6::dsk/c3t6d0            disk         connected    configured   ok
sata3/7::dsk/c3t7d0            disk         connected    configured   ok
sata4/0::dsk/c4t0d0            disk         connected    configured   ok
sata4/1::dsk/c4t1d0            disk         connected    configured   ok
sata4/2::dsk/c4t2d0            disk         connected    configured   ok
sata4/3::dsk

[zfs-discuss] How do I mirror zfs rpool, x4500?

2009-03-17 Thread Neal Pollack

I'm setting up a new X4500 Thumper, and noticed suggestions/blogs
for setting up two boot disks as a zfs rpool mirror during installation.
But I can't seem to find instructions/examples for how to do this using
google, the blogs, or the Sun docs for X4500.

Can anyone share some instructions for setting up the rpool mirror
of the boot disks during the Solaris Nevada (SXCE) install?

Thanks,

Neal

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] How do I mirror zfs rpool, x4500?

2009-03-17 Thread Neal Pollack

On 03/17/09 12:32 PM, cindy.swearin...@sun.com wrote:

Neal,

You'll need to use the text-based initial install option.
The steps for configuring a ZFS root pool during an initial
install are covered here:

http://opensolaris.org/os/community/zfs/docs/

Page 114:

Example 4–1 Initial Installation of a Bootable ZFS Root File System

Step 3, you'll be presented with the disks to be selected as in 
previous releases. So, for example, to select the boot disks on the 
Thumper,

select both of them:

[x] c5t0d0
[x] c4t0d0



Why have the controller numbers/mappings changed between Solaris 10 and
Solaris Nevada?   I just installed Solaris Nevada 110 to see what it 
would do.
Thank you, and I now understand that to find the disk name, like above 
c5t0d0

for physical slot 0 on X4500, I can use  cfgadm | grep sata3/0

I also now understand that in the installer screens, I can select 2 
disks and they

will become a mirrored root zpool.

What I do not understand is that on Solaris Nevada 110, the x4500 
Thumper physical
disk slots 0 and 1 are labeled as controller 3 and not controller 5. 
For example:


# cfgadm | grep sata3/0
sata3/0::dsk/c3t0d0            disk         connected    configured   ok
# cfgadm | grep sata3/4
sata3/4::dsk/c3t4d0            disk         connected    configured   ok
# uname -a
SunOS zcube-1 5.11 snv_110 i86pc i386 i86pc
#


Of course, that means I should stay away from all the X4500 and ZFS docs if
I run Solaris Nevada on an X4500?

Any ideas why the mapping is not matching s10 or the docs?

Cheers,

Neal


.
.
.


On our lab Thumper, they are c5t0 and c4t0.

Cindy

Neal Pollack wrote:

I'm setting up a new X4500 Thumper, and noticed suggestions/blogs
for setting up two boot disks as a zfs rpool mirror during installation.
But I can't seem to find instructions/examples for how to do this using
google, the blogs, or the Sun docs for X4500.

Can anyone share some instructions for setting up the rpool mirror
of the boot disks during the Solaris Nevada (SXCE) install?

Thanks,

Neal

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Motherboard for home zfs/solaris file server

2009-02-24 Thread Neal Pollack

On 02/23/09 20:24, Ilya Tatar wrote:

Hello,
I am building a home file server and am looking for an ATX mother 
board that will be supported well with OpenSolaris (onboard SATA 
controller, network, graphics if any, audio, etc). I decided to go for 
Intel based boards (socket LGA 775) since it seems like power 
management is better supported with Intel processors and power 
efficiency is an important factor. After reading several posts about 
ZFS it looks like I want ECC memory as well.


Does anyone have any recommendations?


Any motherboard for the Core 2 or Core i7 Intel processors with the ICH 
southbridge (desktop boards) or
ESB2 southbridge (server boards) will be well supported.  I recommend an 
actual Intel
board since they also always use the Intel network chip (well supported 
and tuned).  Many of the third-party
boards from MSI, Gigabyte, Asus, DFI, ECS, and others also work, 
but for some (penny-pinching)
reason, they tend to use network chips like Marvell that are not yet 
supported, or Realtek,
for which some of the models are supported.

So using an actual board from Intel Corp will be best supported right 
out of the box.
For that matter, because of the work we do with Intel, almost any of 
their boards will
be supported using the ICH 6, 7, 8, 9, or ICH10 SATA ports in either 
legacy or AHCI
mode.   Again, almost any version of the Intel network (NIC) chips are 
supported across
all their boards.  If you are able to find one that is not, I'd love to 
hear about it and

add it to our work queue.

In the most recent builds of Solaris Nevada (SXCE), the integrated Intel 
graphics
found on many of the boards is well supported.  On other boards, use a 
low end

VGA card.
Again, if you find an Intel board where the graphics is not supported or 
not working,

please let us know the specifics and we'll fix it.

Cheers,

Neal



Here are a few that I found. Any comments about those?

Supermicro C2SBX+
http://www.supermicro.com/products/motherboard/Core2Duo/X48/C2SBX+.cfm

Gigabyte GA-X48-DS4
gigabyte: 
http://www.gigabyte.com.tw/Products/Motherboard/Products_Overview.aspx?ProductID=2810 



Intel S3200SHV
http://www.intel.com/Products/Server/Motherboards/Entry-S3200SH/Entry-S3200SH-overview.htm 



Thanks for any help,
-Ilya



___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Is ZFS already the default file System for Solaris 10?

2008-11-07 Thread Neal Pollack

On 11/07/08 11:24, Kumar, Amit H. wrote:

Is ZFS already the default file System for Solaris 10?
If yes has anyone tested it on Thumper ??


Yes.   Formal Sun support is for Thumper running s10.  For the latest
ZFS bug fixes, it is important to run the most recent s10 update release.
Right now, that should be s10u6 any day now, if it's not already released
for download.

There are many customers on this list running Thumpers with s10u5 plus 
patches.


Cheers,

Neal


Thank you,
Amit
 
 
 



___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
  


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] questions on zfs send,receive,backups

2008-11-03 Thread Neal Pollack

On 11/03/08 13:18, Philip Brown wrote:

Ok, I think I understand.  You're going to be told
that ZFS send isn't a backup (and for these purposes
I definitely agree),  ...



Hmph. well, even for 'replication' type purposes, what I'm talking about is 
quite useful.
Picture two remote systems, which happen to have mostly identical data. 
Perhaps they were manually synced at one time with tar, or something.
Now the company wants to bring them both into full sync... but first analyze 
the small differences that may be present.
  


um, /usr/bin/rsync ?
but agreed, not for huge amounts of data...

In that scenario, it would then be very useful to be able to do the following:

hostA# zfs snapshot /zfs/prod@A
hostA# zfs send /zfs/prod@A | ssh hostB zfs receive /zfs/prod@A

hostB# diff -r /zfs/prod /zfs/prod/.zfs/snapshots/A > /tmp/prod.diffs


One could otherwise find files that are different with rsync -avn. But doing it with 
zfs in this way adds value, by allowing you to locally compare old and new files on 
the same machine, without having to do some ghastly manual copy of each different file to a new 
place and doing the compare there.
  


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS boot vs Linux fuse

2008-10-22 Thread Neal Pollack

On 10/22/08 09:02 AM, Andrew Gallatin wrote:

Johan Hartzenberg wrote:
  

Reboot to the grub menu
Move to the failsafe kernel entry



Ugh.  This is OpenSolaris (Indiana), and there *is* no failsafe
as far as I can tell.  There is one grub entry for Solaris:
#-- ADDED BY BOOTADM - DO NOT EDIT --
title OpenSolaris 2008.05 snv_86_rc2a X86
bootfs rpool/ROOT/opensolaris
kernel$ /platform/i86pc/kernel/$ISADIR/unix -B $ZFS-BOOTFS
module$ /platform/i86pc/$ISADIR/boot_archive
#-END BOOTADM


I'm sitting in grub now, trying to figure out what to do.
  


Simple: the equivalent of failsafe for OpenSolaris is to boot the live CD,
then manually mount your disk drive.
Sort of like using Knoppix to repair a Linux install,
or WinPE to repair the mistake of installing Windows...
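
(A rough sketch of that recovery path from the live CD, assuming the default pool
name rpool and an alternate root of /a:

# zpool import -f -R /a rpool
# zfs mount rpool/ROOT/opensolaris

If the root dataset uses a legacy mountpoint, the second step is instead
"mount -F zfs rpool/ROOT/opensolaris /a".  Either way, /a then holds the
installed system for inspection or repair.)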


From an S10u5 box I have, it looks like there should
be a file /boot/x86.miniroot-safe, but that file does
not exist on the OpenSolaris box.

  

Did the person who imported the pool under Linux use the old (circa Feb
2008) zfs-fuse, or the new one (Sept 2008)?



I think I'm safe on this front.  zpool upgrade says its running version
10, which is what the identical (working) machine also says.

I think I'm just going to give up and re-install..  Sigh.

Drew
--
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
  


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Resilver hanging?

2008-10-08 Thread Neal Pollack
Tom Servo wrote:
 How can I diagnose why a resilver appears to be hanging at a certain
 percentage, seemingly doing nothing for quite a while, even though the
 HDD LED is lit up permanently (no apparent head seeking)?

 The drives in the pool are WD Raid Editions, thus have TLER and should
 time out on errors in just seconds. ZFS nor the syslog however were
 reporting any IO errors, so it weren't the disks.

 Check the FMA logs:
   fmadm faulty
   fmdump -e[vV]
 
 Nothing noteworthy in there. fmadm shows nothing, fmdump just 
 ereport.io.ddi.fm-capability repeatedly, which comes from oss_cmi8788 
 (some OpenSound driver).


ouch.  OSS OpenSound code is riddled with improper use of Solaris DDI,
memory leaks, interrupt storms, and other problems.  It would be interesting
to remove that variable, and see what issues remain.


 
 Stopping the scrub didn't work, the zfs command didn't return. It took a
 hard reset to make it stop.

 scrub is not a zfs subcommand, perhaps you meant zpool?
 Depending on the failure, zpool commands may hang, fixed in b100.
 
 Yeah, sorry, zpool doesn't return.
 
 Regards,
 -mg
 ___
 zfs-discuss mailing list
 zfs-discuss@opensolaris.org
 http://mail.opensolaris.org/mailman/listinfo/zfs-discuss

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] zpool file corruption

2008-09-25 Thread Neal Pollack

On 09/24/08 10:57 PM, Jeff Bonwick wrote:

It's almost certainly the SIL3114 controller.
Google "SIL3114 data corruption" -- it's nasty.
  


I've also in the past had the misfortune of experiencing
Silicon Image.  My corruption was with other file types
and not even ZFS.   Silicon Image is something I do not
wish on my enemies.  No attempt to acknowledge or recall
defective silicon.  No interest in customer data loss.
Well, this customer has no further interest in Silicon
Image.  I refuse to acknowledge that they exist.


Jeff

On Thu, Sep 25, 2008 at 07:50:01AM +0200, Mikael Karlsson wrote:
  
I have a strange problem involving changes in large files on a mirrored 
zpool in
OpenSolaris snv_96.
We use it as storage in a VMware ESXi lab environment. All virtual disk 
files get
corrupted when changes are made within the files (when running the 
machine, that is).


The sad thing is that I've created about 200 GB of random data in 
large files and
even modified those files without any problem (using dd with skip and 
conv=notrunc options).
I've copied the files within the pool and over the network on all 
network interfaces

on the machine - without problems.

It's just those .vmdk files that get corrupted.

The hardware is an Opteron desktop machine with a SIL3114 SATA 
interface. Personally I have exactly
the same interface at home with the same setup without problems. Only the 
other hardware differs (disks and so on).

The disks are WD7500AACS, the ones with variable rotation speed 
(5400-7200 rpm). Could it
be the disks? Could it be the disk controller or the rest of the 
hardware? I should mention that the
controller has been flashed with a non-RAID BIOS.

I could provide more information if needed! Is there anyone that have 
any ideas or suggestions?



Some output:

bash-3.00# zpool status -vx
  pool: testing
 state: ONLINE
status: One or more devices has experienced an error resulting in data
corruption.  Applications may be affected.
action: Restore the file in question if possible.  Otherwise restore the
entire pool from backup.
   see: http://www.sun.com/msg/ZFS-8000-8A
 scrub: scrub completed with 1 errors on Wed Sep 24 16:59:13 2008
config:

        NAME        STATE     READ WRITE CKSUM
        testing     ONLINE       0     0    16
          mirror    ONLINE       0     0    16
            c0d1    ONLINE       0     0    51
            c1d1    ONLINE       0     0    54

errors: Permanent errors have been detected in the following files:

/testing/ZFS-problem/ZFS-problem-flat.vmdk


Regards

Mikael
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
  


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Which is better for root ZFS: mlc or slc SSD?

2008-09-24 Thread Neal Pollack
Erik Trimble wrote:
 I was under the impression that MLC is the preferred type of SSD, but I
 want to prevent myself from having a think-o.


 I'm looking to get (2) SSD to use as my boot drive. It looks like I can
 get 32GB SSDs composed of either SLC or MLC for roughly equal pricing.
 Which would be the better technology?  (I'll worry about rated access
 times/etc of the drives, I'm just wondering about general tech for an OS
 boot drive usage...)



   

SLC is faster and typically more expensive.


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Which is better for root ZFS: mlc or slc SSD?

2008-09-24 Thread Neal Pollack

Tim wrote:



On Wed, Sep 24, 2008 at 1:41 PM, Erik Trimble [EMAIL PROTECTED] 
mailto:[EMAIL PROTECTED] wrote:


I was under the impression that MLC is the preferred type of SSD,
but I
want to prevent myself from having a think-o.


I'm looking to get (2) SSD to use as my boot drive. It looks like
I can
get 32GB SSDs composed of either SLC or MLC for roughly equal pricing.
Which would be the better technology?  (I'll worry about rated access
times/etc of the drives, I'm just wondering about general tech for
an OS
boot drive usage...)


Depends on the MFG.  The new Intel MLCs have proven to be as fast if 
not faster than the SLCs,


That is not comparing apples to apples.   The new Intel MLCs take the 
slower, lower cost MLC chips,
and put them in parallel channels connected to an internal controller 
chip (think of RAID striping).

That way, they get large aggregate speeds for less total cost.
Other vendors will start to follow this idea.

But if you just take a raw chip in one channel, SLC is faster.
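
(Rough, invented numbers just to illustrate the point: if one MLC channel sustains
on the order of 25 MB/s, ten channels striped behind the controller give roughly
10 x 25 = 250 MB/s aggregate, which is how a drive built from slower chips can beat
a single-channel SLC part.)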

And, in the end, yes, the new intel SSDs are very nice.

but they also cost just as much.  If they brought the price down, I'd 
say MLC all the way.  All other things being equal though, SLC.



--Tim


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
  


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] resilver keeps starting over? snv_95

2008-09-17 Thread Neal Pollack
Running Nevada build 95 on an ultra 40.
Had to replace a drive.
Resilver in progress, but it looks like each
time I do a zpool status, the resilver starts over.
Is this a known issue?

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] resilver keeps starting over? snv_95

2008-09-17 Thread Neal Pollack

On 09/17/08 02:29 PM, [EMAIL PROTECTED] wrote:
Are you doing snaps? 


No, no snapshots ever.
Logged in as root to do;
zpool replace poolname deaddisk
and then did a few zpool status
as root.  It restarted each time.



 If so, unless you have the new bits to handle the
issue, each snap restarts a scrub or resilver.


Thanks!
Wade Stuart

we are fallon
P: 612.758.2660
C: 612.877.0385

** Fallon has moved.  Effective May 19, 2008 our address is 901 Marquette
Ave, Suite 2400, Minneapolis, MN 55402.

[EMAIL PROTECTED] wrote on 09/17/2008 01:07:53 PM:

  

Running Nevada build 95 on an ultra 40.
Had to replace a drive.
Resilver in progress, but it looks like each
time I do a zpool status, the resilver starts over.
Is this a known issue?

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss



  


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] CF to SATA adapters for boot device

2008-08-20 Thread Neal Pollack
Ian Collins wrote:
 Brian Hechinger wrote:
 On Wed, Aug 20, 2008 at 05:17:45PM +1200, Ian Collins wrote:
   
 Has anyone here had any luck using a CF to SATA adapter?

 I've just tried an Addonics ADSACFW CF to SATA adaptor with an 8GB card 
 that I wanted to use for a boot pool and even though the BIOS reports the 
 disk, Solaris B95 (or the installer) doesn't see it.
 
 I tried this a while back with an IDE to CF adapter.  Real nice looking one 
 too.

 It would constantly cause OpenBSD to panic.

 I would recommend against using this, unless you get real lucky.  If you want
 flash to boot from, buy one of the ones that is specifically made for it (not
 CF, but industrial grade flash meant to be a HDD).  Those things work a LOT
 better.  I can look up the details of the ones my friend uses if you'd like.

   
 I was looking to run some tests with a CF boot drive before we get an
 X4540, which has a CF slot. The installer did see the attached USB sticks...

My team does some of the testing inside Sun for the CF boot devices.
We've used a number of IDE-attached CF adapters, such as:
http://www.addonics.com/products/flash_memory_reader/ad44midecf.asp
and also some random models from www.frys.com.
We also test the CF boot feature on various Sun rack servers and blades
that use a CF socket.

I have not tested the SATA adapters but would not expect issues.
I'd like to know if you find issues.


The IDE attached devices use the legacy ATA/IDE device driver software,
which had some bugs fixed for DMA and misc CF specific issues.
It would be interesting to see if a SATA adapter for CF, set in bios to
use AHCI instead of Legacy/IDE mode, would have any issues with
the AHCI device driver software.  I've had no reason to test this yet, since
the Sun HW models build the CF socket right onto the motherboard/bus.
I can't find a reason to worry about hot-plug, since removing the boot
drive while Solaris is running would be, um, somewhat interesting :-)

True, the enterprise grade devices are higher quality and will last longer.
But do not underestimate the current (2008) device wear leveling firmware
that controls the CF memory usage, and hence life span.  Our in house
destructive life span testing shows that the commercial grade CF device
will last longer than the motherboard will.  The consumer grade devices
that you find in the store or on mail order, may or may not be current
generation, so your device lifespan will vary.  It should still be rather
good for a boot device,  because Solaris does very little writing to the
boot disk.  You can review configuration ideas to maximize the life
of your CF device in this Solaris white paper for non-volatile memory;
http://www.sun.com/bigadmin/features/articles/nvm_boot.jsp

I hope this helps.

Cheers,

Neal Pollack

 
 Any further information welcome.
 
 Ian
 ___
 zfs-discuss mailing list
 zfs-discuss@opensolaris.org
 http://mail.opensolaris.org/mailman/listinfo/zfs-discuss

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] SSD update

2008-08-20 Thread Neal Pollack
Bob Friesenhahn wrote:


 SSDs + ZFS - a marriage made in (computer) heaven!
 
 Where's the beef?
 
 I sense a lot of smoke and mirrors here, similar to Intel's recent CPU 
 announcements which don't even reveal the number of cores.  No 
 prices and funny numbers that the writers of technical articles can't 
 seem to get straight.
 
 Obviously these are a significant improvement for laptop drives but 
 how many laptop users have a need for 11,000 IOPs and 170MB/s?  It 
 seems to me that most laptops suffer from insufficent RAM and 
 low-power components which don't deliver much performance.  The CPUs 
 which come in laptops are not going to be able to process 170MB/s.

I guess you have not used current day laptops.
I've used several brands that come standard with dual-core
processors, 4 gig RAM, and 250 GB disks.
Later this year, they are showing off mobile quad core
laptops.

The limiting factor on boot time and data movement is always
the darn HDD, spinning at a fixed 7200 rpm.  Using parallel-channel
flash SSDs will indeed improve performance significantly, and when
I can get my hands on one, I'd be happy to show you numbers
and price data.

I've been installing and testing OSes on various SSDs and CF
devices that are single channel (300x speed equiv for CF marketing),
and I can't wait to test the new parallel channel devices.

But insofar as a ZFS server storage array with heavy write
operations goes?  Yeah, we'd have to talk write data volumes over time
vs. device life span.  But that is also set to change, as the vendors
are working on newer flash tech that can last much longer.

I still see many applications where an SSD or Flash can improve
storage system performance in the enterprise.  Just stay tuned.
Products/solutions are in progress.


 
 What about the dual-ported SAS models for enterprise use?
 
 Bob
 ==
 Bob Friesenhahn
 [EMAIL PROTECTED], http://www.simplesystems.org/users/bfriesen/
 GraphicsMagick Maintainer,http://www.GraphicsMagick.org/
 
 ___
 zfs-discuss mailing list
 zfs-discuss@opensolaris.org
 http://mail.opensolaris.org/mailman/listinfo/zfs-discuss

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] help me....

2008-08-03 Thread Neal Pollack
Rahul wrote:
 hi 
 can you give some disadvantages of the ZFS file system??
   

Yes, it's too easy to administer.
This makes it rough to charge a lot as a sysadmin.
All the problems, manual decisions during fsck and data recovery,
headaches after a power failure or getting disk drives mixed up
after replacing a controller: not any more.  Not with ZFS.
It's just not fair.
It's really hard to charge a lot to take care of a zfs system.


 plzz its urgent...

 help me.
  
  
 This message posted from opensolaris.org
 ___
 zfs-discuss mailing list
 zfs-discuss@opensolaris.org
 http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
   

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ButterFS

2008-08-01 Thread Neal Pollack
dick hoogendijk wrote:
 I read this just now in the Unix Guardian:

 quote
 BTRFS, pronounced ButterFS:
 BTRFS was launched in June 2007, and is a POSIX-compliant file system
 that will support very large files and volumes (16 exabytes) and a
 ridiculous number of files (two to the power of 64 files, to be
 precise). The file system has object-level mirroring and striping,
 checksums on data and metadata, online file system check, incremental
 backup and file system mirroring, subvolumes with their own file system
 roots, writable snapshots, and index and file packing to conserve
 space, among many other features. BTRFS is not anywhere near primetime,
 and Garbee figures it will take at least three years to get it out the
 door.
 /quote

 I thought that ZFS was/is the way to the future, but reading this it
 seems there are compatitors out there ;-)
   

Not yet :-)  Wait three years, if they are on time.
For today, this hour, you can actually use ZFS.

Also, no problem, choice is good.  It keeps up the
motivation for ongoing innovation.



___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] x4500 performance tuning.

2008-07-24 Thread Neal Pollack

Lida Horn wrote:

Richard Elling wrote:
  

There are known issues with the Marvell drivers in X4500s.  You will
want to pay attention to the release notes, SRDBs, InfoDocs, and SunAlerts
for the platform.
http://sunsolve.sun.com/handbook_pub/validateUser.do?target=Systems/SunFireX4500/SunFireX4500

You will want to especially pay attention to SunAlert 201289
http://sunsolve.sun.com/search/document.do?assetkey=1-66-201289-1

If you run into these or other problems which are not already described
in the above documents, please log a service call which will get you
into the folks who track the platform problems specifically and know
about patches in the pipeline.
 -- richard

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
  

Although I am not in the SATA group any longer, I have in the past 
tested hot plugging and failures
of SATA disks with x4500s, Marvell plug in cards and SuperMicro plug in 
cards.  It has worked
in the past on all of these platforms.  Having said that, there are 
things that you might be hitting

or might try.

1) The default behavior when a disk is removed and then re-inserted is 
to leave the disk unconfigured.
The operator must issue a cfgadm -c configure satax/y to bring 
the newly plugged in disk on-line.
There was some work being done to make this automatic, but I am not 
currently aware of the state of

that work.
  


As of build 94, it does not automatically bring the disk online.
I replaced a failed disk on an x4500 today running Nevada build 94, and 
still

had to manually issue

# cfgadm -c configure sata1/3
# zpool replace tank cxt2d0

then wait 7 hours for resilver.
But the above is correct and expected.  They simply have not automated 
that yet.  Apparently.


Neal


2) There were bugs related to disk drive errors that have been addressed 
(several months ago).  If you have old

 code you could be hitting one or more of those issues.

3) I think there was a change in the sata generic module with respect to 
when it declares a failed disk as off-line.

You might want to check if you are hitting a problem with that.

4) There are a significant number of bugs in ZFS that can cause hangs.  
Most have been addressed with recent patches.

Make sure you have all the patches.

If you use the raw disk (i.e. no ZFS involvement) doing something like 
dd bs=128k if=/dev/rdsk/cxtyd0p0 of=/dev/null
and then try pulling out the disk.  The dd should return with an I/O 
error virtually immediately.  If it doesn't then
ZFS is probably not the issue.  You can also issue the command cfgadm 
and see what it lists as the state(s) of the

various disks.

Hope that helps,
Lida Horn
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
  


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] USB hard to ZFS

2008-06-16 Thread Neal Pollack
Andrius wrote:
 dick hoogendijk wrote:
   
 On Mon, 16 Jun 2008 18:10:14 +0100
 Andrius [EMAIL PROTECTED] wrote:

 
 zpool does not want to create a pool on a USB disk (formatted in FAT32).
   
 It's already been formatted.
 Try zpool create -f alpha c5t0d0p0

 

 The same story

 # /usr/sbin/zpool create -f alpha c5t0d0p0
 cannot open '/dev/dsk/c5t0d0p0': Device busy
   

When you insert a USB stick into a running Solaris system, and it is 
FAT32 formatted,
it may be automatically mounted as a filesystem, read/write.

The command above fails since it is already mounted and busy.
You may wish to use the df command to verify this.
If it is mounted, try unmounting it first, and then using the command;

# /usr/sbin/zpool create -f alpha c5t0d0p0



 Regards,
 Andrius
 ___
 zfs-discuss mailing list
 zfs-discuss@opensolaris.org
 http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
   

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] USB hard to ZFS

2008-06-16 Thread Neal Pollack
Andrius wrote:
 Neal Pollack wrote:
 Andrius wrote:
 dick hoogendijk wrote:
  
 On Mon, 16 Jun 2008 18:10:14 +0100
 Andrius [EMAIL PROTECTED] wrote:

   
 zpool does not to create a pool on USB disk (formatted in FAT32).
   
 It's already been formatted.
 Try zpool create -f alpha c5t0d0p0

 

 The same story

 # /usr/sbin/zpool create -f alpha c5t0d0p0
 cannot open '/dev/dsk/c5t0d0p0': Device busy
   

 When you insert a USB stick into a running Solaris system, and it is 
 FAT32 formatted,
 it may be automatically mounted as a filesystem, read/write.

 The command above fails since it is already mounted and busy.
 You may wish to use the df command to verify this.
 If it is mounted, try unmounting it  fist, and then using the command;

 That is true, the disk is detected automatically. But

 # umount /dev/rdsk/c5t0d0p0
 umount: warning: /dev/rdsk/c5t0d0p0 not in mnttab
 umount: /dev/rdsk/c5t0d0p0 not mounted

The umount command works best with a filesystem (mount point) name.
The mount command will show what filesystems are mounted.
For example, if I stick in a USB thumb-drive:

#mount
...
/media/LEXAR MEDIA on /dev/dsk/c9t0d0p0:1
  read/write/nosetuid/nodevices/hidden/nofoldcase/clamptime/noatime/timezone=28800/dev=e01050
  on Mon Jun 16 11:01:37 2008

#df -hl
/dev/dsk/c9t0d0p0:1    991M   923M    68M    94%    /media/LEXAR MEDIA

#umount /media/LEXAR MEDIA
#

And then it no longer shows up in the df or the mount command.
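
At that point the device is no longer busy, so the zpool create from
earlier in the thread should go through.  For example (same device name
as before; remember that -f will destroy the FAT32 contents):

# /usr/sbin/zpool create -f alpha c5t0d0p0
# zpool status alpha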

Neal









___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] pool hangs for 1 full minute?

2008-03-27 Thread Neal Pollack

For the last few builds of Nevada, if I come back to my workstation after
long idle periods such as overnight, and try any command that would touch
the zfs filesystem, it hangs for approximately a full 60 seconds.

This would include ls,  zpool status, etc.

Does anyone have a hint as to how I would diagnose this?
Or is it time for extreme measures such as zfs send to another server,
destroy, and rebuild a new zpool?

Config and stat:


Running Nevada build 85 and given;
# zpool status
  pool: tank
 state: ONLINE
 scrub: none requested
config:

NAMESTATE READ WRITE CKSUM
tankONLINE   0 0 0
  raidz2ONLINE   0 0 0
c2d0ONLINE   0 0 0
c3d0ONLINE   0 0 0
c4d0ONLINE   0 0 0
c5d0ONLINE   0 0 0
c6d0ONLINE   0 0 0
c7d0ONLINE   0 0 0
c8d0ONLINE   0 0 0

errors: No known data errors


Also given:  I have been doing live upgrade every other build since
approx Nevada build 46.  I am running on a Sun Ultra 40 modified
to include 8 disks.  (second backplane and SATA quad cable)

It appears that the zfs filesystems are running version 1 and Nevada 
build 85
is running version 3.

zbit:~# zfs upgrade
This system is currently running ZFS filesystem version 3.

The following filesystems are out of date, and can be upgraded.  After being
upgraded, these filesystems (and any 'zfs send' streams generated from
subsequent snapshots) will no longer be accessible by older software 
versions.

VER  FILESYSTEM
---  
 1   tank
 1   tank/arc



Any hints at how to isolate and fix this would be appreciated.

Thanks,

Neal



___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] pool hangs for 1 full minute?

2008-03-27 Thread Neal Pollack
Tomas Ögren wrote:
 On 27 March, 2008 - Neal Pollack sent me these 1,9K bytes:

   
 Also given:  I have been doing live upgrade every other build since
 approx Nevada build 46.  I am running on a Sun Ultra 40 modified
 to include 8 disks.  (second backplane and SATA quad cable)

 It appears that the zfs filesystems are running version 1 and Nevada 
 build 85
 is running version 3.

 zbit:~# zfs upgrade
 This system is currently running ZFS filesystem version 3.
 

 Umm. nevada 78 is at version 10.. so I don't think you've managed to
 upgrade stuff 100% ;)

 This system is currently running ZFS pool version 10.
   

ZFS filesystem version is at 3.
My zpool is at version 10.

zbit:~# zpool upgrade
This system is currently running ZFS pool version 10.

All pools are formatted using this version.


 The following versions are supported:

 VER  DESCRIPTION
 ---  
  1   Initial ZFS version
  2   Ditto blocks (replicated metadata)
  3   Hot spares and double parity RAID-Z
  4   zpool history
  5   Compression using the gzip algorithm
  6   bootfs pool property
  7   Separate intent log devices
  8   Delegated administration
  9   refquota and refreservation properties
  10  Cache devices
 For more information on a particular version, including supported
 releases, see:

 http://www.opensolaris.org/os/community/zfs/version/N


 /Tomas
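
For completeness:  if you do decide to bring the pool and the filesystems
up to the versions the running build supports, each upgrade is a one-liner
(keeping in mind the earlier warning that older software will no longer be
able to read them afterwards):

# zpool upgrade -a
# zfs upgrade -a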
   

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] 30 second hang, ls command....

2008-01-30 Thread Neal Pollack
I'm running Nevada build 81 on x86 on an Ultra 40.
# uname -a
SunOS zbit 5.11 snv_81 i86pc i386 i86pc
Memory size: 8191 Megabytes

I started with this zfs pool many dozens of builds ago, approx a year ago.
I do live upgrade and zfs upgrade every few builds.

When I have not accessed the zfs file systems for a long time,
if I cd there and do an ls command, nothing happens for approx 30 seconds.

Any clues how I would find out what is wrong?

--

# zpool status -v
  pool: tank
 state: ONLINE
 scrub: none requested
config:

NAMESTATE READ WRITE CKSUM
tankONLINE   0 0 0
  raidz2ONLINE   0 0 0
c2d0ONLINE   0 0 0
c3d0ONLINE   0 0 0
c4d0ONLINE   0 0 0
c5d0ONLINE   0 0 0
c6d0ONLINE   0 0 0
c7d0ONLINE   0 0 0
c8d0ONLINE   0 0 0

errors: No known data errors


# zfs list
NAME   USED  AVAIL  REFER  MOUNTPOINT
tank   172G  2.04T  52.3K  /tank
tank/arc   172G  2.04T   172G  /zfs/arc

# zpool list
NAME   SIZE   USED  AVAILCAP  HEALTH  ALTROOT
tank  3.16T   242G  2.92T 7%  ONLINE  -
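
One crude way to see where those 30 seconds go (a rough sketch, not a
definitive recipe) is to timestamp the system calls of the hanging
command while watching disk activity from a second terminal:

# truss -d -o /tmp/ls.truss ls /zfs/arc
# iostat -xn 5

The -d flag puts a time offset on every line of the truss output, so
whichever call stalls for the 30 seconds should stand out, and the
iostat output shows whether the disks are actually being touched during
that window.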



___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] zfs corruption w/ sil3114 sata controllers

2007-10-29 Thread Neal Pollack
Ed Saipetch wrote:
 Hello,

 I'm experiencing major checksum errors when using a syba silicon image 3114 
 based pci sata controller w/ nonraid firmware.  I've tested by copying data 
 via sftp and smb.  With everything I've swapped out, I can't fathom this 
 being a hardware problem.  

I can.  But I suppose it could also be in some unknown way a driver issue.
Even before ZFS, I've had numerous situations where various si3112 and 
3114 chips
would corrupt data on UFS and PCFS, with very simple  copy and checksum
test scripts, doing large bulk transfers.
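
A copy-and-checksum test of that kind does not need to be fancy.  A rough
sketch, with made-up paths (/testmnt standing in for a filesystem that
lives on the suspect controller):

# dd if=/dev/urandom of=/var/tmp/big bs=1024k count=1024
# digest -a md5 /var/tmp/big
# cp /var/tmp/big /testmnt/big
# digest -a md5 /testmnt/big

Run the copy and compare in a loop; if the two sums ever differ, the data
is being mangled somewhere below the filesystem.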

Si chips are best used to clean coffee grinders.  Go buy a real SATA 
controller.

Neal

 There have been quite a few blog posts out there with people having a similar 
 config and not having any problems.

 Here's what I've done so far:
 1. Changed solaris releases from S10 U3 to NV 75a
 2. Switched out motherboards and cpus from AMD sempron to a Celeron D
 3. Switched out memory to use completely different dimms
 4. Switched out sata drives (2-3 250gb hitachi's and seagates in RAIDZ, 
 3x400GB seagates RAIDZ and 1x250GB hitachi with no raid)

 Here's output of a scrub and the status (ignore the date and time, I haven't 
 reset it on this new motherboard) and please point me in the right direction 
 if I'm barking up the wrong tree.

 # zpool scrub tank
 # zpool status
   pool: tank
  state: ONLINE
 status: One or more devices has experienced an error resulting in data
 corruption.  Applications may be affected.
 action: Restore the file in question if possible.  Otherwise restore the
 entire pool from backup.
see: http://www.sun.com/msg/ZFS-8000-8A
  scrub: scrub completed with 140 errors on Sat Sep 15 02:07:35 2007
 config:

 NAMESTATE READ WRITE CKSUM
 tankONLINE   0 0   293
   c0d1  ONLINE   0 0   293

 errors: 140 data errors, use '-v' for a list
  
  
 This message posted from opensolaris.org

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] zfs corruption w/ sil3114 sata controllers

2007-10-29 Thread Neal Pollack
Edward Saipetch wrote:
 Neal Pollack wrote:
 Ed Saipetch wrote:
 Hello,

 I'm experiencing major checksum errors when using a syba silicon 
 image 3114 based pci sata controller w/ nonraid firmware.  I've 
 tested by copying data via sftp and smb.  With everything I've 
 swapped out, I can't fathom this being a hardware problem.  

 I can.  But I suppose it could also be in some unknown way a driver 
 issue.
 Even before ZFS, I've had numerous situations where various si3112 
 and 3114 chips
 would corrupt data on UFS and PCFS, with very simple  copy and checksum
 test scripts, doing large bulk transfers.

 Si chips are best used to clean coffee grinders.  Go buy a real SATA 
 controller.

 Neal
 I have no problem ponying up money for a better SATA controller.  I 
 saw a bunch of blog posts that people were successful using the card 
 so I thought maybe I had a bad card with corrupt firmware nvram.  Is 
 it worth trying to trace down the bug?

Of course it is.  File a bug so someone on the SATA team can study it.

 If this type of corruption exists, nobody should be using this card.  
 As a side note, what SATA cards are people having luck with?

A lot of people are happy with the 8 port PCI SATA card made by 
SuperMicro that has the Marvell chip on it.
Don't buy other marvell cards on ebay, because Marvell dumped a ton of 
cards that ended up with an earlier
rev of the silicon that can corrupt data.  But all the cards made by 
SuperMicro and sold by them have the c rev
or later silicon and work great.

That said, I wish someone would investigate the Silicon Image issues, 
but there are only so many engineers,
with so little time.

 There have been quite a few blog posts out there with people having 
 a similar config and not having any problems.

 Here's what I've done so far:
 1. Changed solaris releases from S10 U3 to NV 75a
 2. Switched out motherboards and cpus from AMD sempron to a Celeron D
 3. Switched out memory to use completely different dimms
 4. Switched out sata drives (2-3 250gb hitachi's and seagates in 
 RAIDZ, 3x400GB seagates RAIDZ and 1x250GB hitachi with no raid)

 Here's output of a scrub and the status (ignore the date and time, I 
 haven't reset it on this new motherboard) and please point me in the 
 right direction if I'm barking up the wrong tree.

 # zpool scrub tank
 # zpool status
   pool: tank
  state: ONLINE
 status: One or more devices has experienced an error resulting in data
 corruption.  Applications may be affected.
 action: Restore the file in question if possible.  Otherwise restore 
 the
 entire pool from backup.
see: http://www.sun.com/msg/ZFS-8000-8A
  scrub: scrub completed with 140 errors on Sat Sep 15 02:07:35 2007
 config:

 NAMESTATE READ WRITE CKSUM
 tankONLINE   0 0   293
   c0d1  ONLINE   0 0   293

 errors: 140 data errors, use '-v' for a list
  
  
 This message posted from opensolaris.org



___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Is there _any_ suitable motherboard?

2007-08-24 Thread Neal Pollack
Ian Collins wrote:
 [EMAIL PROTECTED] wrote:
   
 If power consumption and heat is a consideration, the newer Intel CPUs
 have an advantage in that Solaris supports native power management on
 those CPUs.

   
 
 Are P35 chipset boards supported?
   

The P35 chipset works fine with Solaris.
Whether or not the motherboard works with Solaris is decided by
the vendor's choice of additional chips/drivers for things like
SATA, Network, and other ports.
The Intel network core in the P35 chipset (ICH-9 southbridge) works with 
Nevada.
The Intel SATA ports in the ICH-9 southbridge work.
Some of the third party boards add two additional SATA ports on an
unsupported third party chip.  Beware.
Some boards add a second ethernet port using a Marvell or other 
unsupported controller.

Neal

 Ian

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Is there _any_ suitable motherboard?

2007-08-14 Thread Neal Pollack
Anon wrote:
 Have the ICH-8 and ICH-9 been physically tested with Solaris?  The page for 
 the AHCI driver still only lists through ICH-6 as having support?  What is 
 the Solaris support for the rest of the ICH-9 chipset such as USB, etc.?
  
  
 This message posted from opensolaris.org

ICH-8 and ICH-9 are indeed supported and running Solaris Nevada.
Here is the current status of ICH-9 support as of build 70, Solaris 
Express edition:

- USB works
- AHCI SATA disks work fine.
- ATAPI over SATA for DVD drives is still being tested prior to integration.
- NIC (network) core changed register sets, recode is complete and 
tested, integration pending.
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Is there _any_ suitable motherboard?

2007-08-10 Thread Neal Pollack
Alec Muffett wrote:
 Does anyone on this list have experience with a recent board with 6 or more
 SATA ports that they know is supported?
 


 Well so far I have only populated 5 of the ports I have available,
 but my writeup with my 9-port SATA ASUS mobo is at:

   http://www.crypticide.com/dropsafe/article/2091 

 ...and I hope to run a few more tests this weekend, time permitting.

 But you know this, since you've already commented there.  :-)

   - alec

   

In fact, any of the recent Intel chipset motherboards;

Server class:  Chipset ESB-2 southbridge
Desktop class:  Chipset ICH-8 and ICH-9 
 Motherboards known as i965 chipset
 and Intel P35 chipsets

The above all support AHCI and have typically up
to 6 SATA connectors, which the southbridge supports.
On ICH-9, all six are in the southbridge and supported/tested
running Solaris.   On some i965 class motherboards, there may
only be 4 SATA ports that Solaris supports, since the other 2
may or may not be present and may be a third party SATA
controller chip with no device driver software for Solaris.

So if you want all six SATA ports for Solaris in AHCI mode,
go with a desktop board using the P35 chipset (ICH-9 southbridge)
or a server board based on ESB-2 southbridge (Intel Series S-5000
server boards).

Hope this helps.

Neal

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Re: Karma Re: Re: Best use of 4 drives?

2007-06-15 Thread Neal Pollack

Tom Kimes wrote:

Here's a start for a suggested equipment list:

Lian Li case with 17 drive bays (12 x 3.5-inch, 5 x 5.25-inch)   
http://www.newegg.com/Product/Product.aspx?Item=N82E1682064
  


So it only has room for one power supply.  How many disk drives will you 
be installing?
It's not the steady state current that matters, as much as it is the 
ability to handle the surge current
of starting to spin 17 disks from zero rpm.   That initial surge can 
stall a lot of lesser power supplies.

Will be interesting to see what happens here.

Asus M2N32-WS motherboard has PCI-X and PCI-E slots. I'm using Nevada b64 for iSCSI targets: 
http://www.newegg.com/Product/Product.aspx?Item=N82E16813131026


Your choice of CPU and memory.

I'm using an Opteron 1212 
http://www.newegg.com/Product/Product.aspx?Item=N82E16819105016


and DDR2-800 memory 
http://www.newegg.com/Product/Product.aspx?Item=N82E16820145034.


BTW, I'm not spamming for Newegg, it's just who I used and had the links handy ;^] 


TK
 
 
This message posted from opensolaris.org



___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] Export ZFS over NFS ?

2007-01-30 Thread Neal Pollack

I've got my first server deployment with ZFS.
Consolidating a pair of other file servers that used to have
a dozen or so NFS exports in /etc/dfs/dfstab similar to;

/export/solaris/images
/export/tools
/export/ws
... and so on

For the new server, I have one large zfs pool;
-bash-3.00# df -hl
bigpool         16T   1.5T    15T    10%    /export

that I am starting to populate.   Should I simply share /export,
or should I separately share the individual dirs in /export
like the old dfstab did?

I am assuming that one single command;
# zfs set sharenfs=ro bigpool
would share /export as a read-only NFS point?

Opinions/comments/tutoring?

Thanks,

Neal

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Export ZFS over NFS ?

2007-01-30 Thread Neal Pollack

Neal Pollack wrote:

I've got my first server deployment with ZFS.
Consolidating a pair of other file servers that used to have
a dozen or so NFS exports in /etc/dfs/dfstab similar to;

/export/solaris/images
/export/tools
/export/ws
. and so on

For the new server, I have one large zfs pool;
-bash-3.00# df -hl
bigpool         16T   1.5T    15T    10%    /export

that I am starting to populate.   Should I simply share /export,
or should I separately share the individual dirs in /export
like the old dfstab did?

I am assuming that one single command;
# zfs set sharenfs=ro bigpool
would share /export as a read-only NFS point?

Opinions/comments/tutoring?


The only thing I found in docs was page 99 of the admin guide.
So it says I should do;

zfs set sharenfs=on bigpool

to get all sub dirs shared rw via NFS, and then do

zfs set sharenfs=ro bigpool/dirname   for those I want to protect read-only.

Is that the current best practice?
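
One caveat worth noting before settling on that scheme:  sharenfs is a
per-dataset property, so zfs set sharenfs=ro bigpool/dirname only works
if dirname is itself a ZFS filesystem, not just a plain directory inside
bigpool.  A rough sketch (tools is only an example name):

# zfs create bigpool/tools
# zfs set sharenfs=on bigpool
# zfs set sharenfs=ro bigpool/tools
# zfs get -r sharenfs bigpool

The zfs get -r output shows which datasets inherit the parent's setting
and which have a local override.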

Thanks



Thanks,

Neal



___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] Can you turn on zfs compression when the fs is already populated?

2007-01-24 Thread Neal Pollack

I have an 800GB raidz2 zfs filesystem.  It already has approx 142GB of data.
Can I simply turn on compression at this point, or do I need to start with
compression at creation time?  If I turn on compression now, what happens
to the existing data?
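
For what it is worth, compression is safe to enable on a live, populated
filesystem:  the property only affects blocks written after it is set, so
the existing 142GB stays uncompressed until it happens to be rewritten.
A minimal sketch (the dataset name is made up):

# zfs set compression=on tank/data
# zfs get compression,compressratio tank/data

The compressratio property reports the overall ratio achieved, so it will
creep up as new, compressed data is written.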


Thanks,

Neal

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] need advice: ZFS config ideas for X4500 Thumper?

2007-01-23 Thread Neal Pollack

Hi:   (Warning, new zfs user question)

I am setting up an X4500 for our small engineering site file server.
It's mostly for builds, images, doc archives, certain workspace archives,
misc data.

I'd like a trade-off between space and safety of data.  I have not set up
a large ZFS system before, and have only played with a simple raidz2 of
7 disks.  After reading
http://blogs.sun.com/relling/entry/raid_recommendations_space_vs_mttdl
I am leaning toward a RAID-Z2 config with spares, for approx 15 terabytes,
but I do not yet understand the nomenclature and exact config details.
For example, the graph/chart shows that 7+2 RAID-Z2 with spares would be
a good balance of capacity and data safety, but I do not know what to do
with that number, or how it maps to an actual setup.  Does that type of
config also provide a balance between performance and data safety?

Can someone provide an actual example of how the config should look?
If I save two disks for the boot, how do the other 46 disks get configured
between spares and zfs groups? 
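
For concreteness, one way that 7+2-with-spares layout could be written as
a command (a rough sketch with placeholder device names; the real c#t#d#
names on the box will differ, and the two boot disks are left out):

# zpool create tank \
    raidz2 c0t0d0 c0t1d0 c0t2d0 c0t3d0 c0t4d0 c0t5d0 c0t6d0 c0t7d0 c1t0d0 \
    raidz2 c1t1d0 c1t2d0 c1t3d0 c1t4d0 c1t5d0 c1t6d0 c1t7d0 c2t0d0 c2t1d0 \
    spare  c2t2d0

Each raidz2 group above is one 7+2 set (9 disks, 2 of them parity).
Repeating that pattern five times uses 45 of the 46 non-boot disks and
leaves one disk as the hot spare.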


Thanks,

Neal

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Thumper Origins Q

2007-01-23 Thread Neal Pollack

Jason J. W. Williams wrote:

Hi All,

This is a bit off-topic...but since the Thumper is the poster child
for ZFS I hope its not too off-topic.

What are the actual origins of the Thumper? I've heard varying stories
in word and print. It appears that the Thumper was the original server
Bechtolsheim designed at Kealia as a massive video server. However,
when we were first told about it a year ago through Sun contacts
Thumper was described as a part of a scalable iSCSI storage system,
where Thumpers would be connected to a head (which looked a lot like a
pair of X4200s) via iSCSI that would then present the storage over
iSCSI and NFS. Recently, other sources mentioned they were told about
the same time that Thumper was part of the Honeycomb project.

So I was curious if anyone had any insights into the history/origins
of the Thumper...or just wanted to throw more rumors on the fire. ;-)



Thumper was created to hold the entire electronic transcript of the
Bill Clinton impeachment proceedings...




Thanks in advance for your indulgence.

Best Regards,
Jason


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss