[zfs-discuss] timeslider autodelete -vs.- my retention goals

2011-06-15 Thread Jacob Ritorto

 Hi,
I'm trying to do a simple data retention hack wherein I keep 
hourly, daily, weekly and monthly zfs auto snapshots.


To save space,
I want the dailies to go away when the weekly is taken.
I want the weeklies to go away when the monthly is taken.


From what I've gathered, it seems time slider just deletes the eldest 
snapshot when there's space contention.  Is there a simple way to alter 
its logic to achieve my retention goals, or must I just write this logic 
myself?
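
If I do end up writing it myself, what I'm picturing is a small cron job
that prunes a tier whenever a newer snapshot of the next tier up exists.  A
rough, untested sketch -- it assumes the stock time-slider naming (something
like tank/data@zfs-auto-snap:daily-...) and a made-up filesystem tank/data,
so FS and PREFIX would need to match whatever the snapshots actually look
like:

#!/bin/ksh
# Destroy lower-tier snapshots older than the newest snapshot of the
# next tier up (dailies older than the newest weekly, weeklies older
# than the newest monthly).
FS=tank/data                 # hypothetical filesystem
PREFIX=zfs-auto-snap         # time-slider snapshot prefix (may differ)

prune_older_than_newest() {
    lower=$1; upper=$2
    newest_upper=$(zfs list -H -o name -s creation -t snapshot -r $FS \
        | grep "@${PREFIX}.${upper}" | tail -1)
    [ -z "$newest_upper" ] && return
    cutoff=$(zfs get -H -p -o value creation "$newest_upper")
    zfs list -H -o name -s creation -t snapshot -r $FS \
        | grep "@${PREFIX}.${lower}" \
        | while read snap; do
            [ $(zfs get -H -p -o value creation "$snap") -lt "$cutoff" ] \
                && zfs destroy "$snap"
          done
}

prune_older_than_newest daily  weekly
prune_older_than_newest weekly monthly

I'd run that from cron right after the weekly and monthly snapshots fire,
but if time slider can already be coaxed into doing this I'd rather not
reinvent it.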


thx
jake
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Any use for extra drives?

2011-03-25 Thread Jacob Ritorto
Right, put some small (30GB or something trivial) disks in for root and 
then make a nice fast multi-spindle pool for your data.  If your 320s 
are around the same performance as your 500s, you could stripe and 
mirror them all into a big pool.  ZFS will waste the extra 180GB on the 
bigger disks, but that's fine because your pool will be bigger and faster 
anyway.
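
For example (device names invented -- substitute your own), two 320s and
two 500s striped across mirrored pairs would be something like:

  # two mirrored pairs striped together; each mixed 320/500 pair only
  # contributes 320GB usable, hence the wasted 180GB mentioned above
  zpool create tank mirror c0t2d0 c0t4d0 mirror c0t3d0 c0t5d0

zpool status will show the layout before you trust it with data.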


On 03/25/11 09:18, Mark Sandrock wrote:


On Mar 24, 2011, at 7:23 AM, Anonymous wrote:


Generally, you choose your data pool config based on data size,
redundancy, and performance requirements.  If those are all satisfied with
your single mirror, the only thing left for you to do is think about
splitting your data off onto a separate pool due to better performance
etc.  (Because there are things you can't do with the root pool, such as
striping and raidz)

That's all there is to it.  To split, or not to split.


Thanks for the update. I guess there's not much to do for this box since
it's a development machine and doesn't have much need for extra redundancy
although if I would have had some extra 500s I would have liked to stripe
the root pool. I see from your answer that's not possible anyway. Cheers.


If you plan to generate a lot of data, why use the root pool? You can put the 
/home
and /proj filesystems (/export/...) on a separate pool, thus off-loading the 
root pool.

My two cents,
Mark




___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS ... open source moving forward?

2010-12-10 Thread Jacob Ritorto

On 12/10/10 09:54, Bob Friesenhahn wrote:

On Fri, 10 Dec 2010, Edward Ned Harvey wrote:



It's been a while since I last heard anybody say anything about this.
What's the latest version of publicly
released ZFS?  Has oracle made it closed-source moving forward?


Nice troll.

Bob


Totally!  But is this really happening?
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Bursty writes - why?

2010-10-12 Thread Jacob Ritorto
Thanks for posting your findings.  What was incorrect about the client's
config?

On Oct 7, 2010 4:15 PM, Eff Norwood sm...@jsvp.com wrote:

Figured it out - it was the NFS client. I used snoop and then some dtrace
magic to prove that the client (which was using O_SYNC) was sending very
bursty requests to the system. I tried a number of other NFS clients with
O_SYNC as well and got excellent performance when they were configured
correctly. Just for fun I disabled the DDRdrive X1 (pair of them) that I use
for the ZIL and performance tanked across the board when using O_SYNC. I
can't recommend the DDRdrive X1 enough as a ZIL! Here is a great article on
this behavior here: http://blogs.sun.com/brendan/entry/slog_screenshots
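
(If anyone wants to watch the same thing on their own box, a rough sketch --
nothing official, just the stock fbt provider -- is to count zil_commit()
calls per second:

  dtrace -n 'fbt::zil_commit:entry { @ = count(); } tick-1s { printa(@); trunc(@); }'

Bursty O_SYNC clients should show up as spikes in that count that line up
with the write bursts.)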

Thanks for the help all!

-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] 1068E mpt driver issue

2010-07-07 Thread Jacob Ritorto
Thank goodness!  Where, specifically, does one obtain this firmware for 
SPARC?


On 07/07/10 17:04, Daniel Bakken wrote:

Upgrade the HBA firmware to version 1.30. We had the same problem, but
upgrading solved it for us.

Daniel Bakken


On Wed, Jul 7, 2010 at 1:57 PM, Joeri Vanthienen
m...@joerivanthienen.be wrote:

Hi,

We're using the following components with snv134:
- 1068E HBA (supermicro)
- 3U SAS / SATA Expander Backplane with dual LSI SASX28 Expander
Chips (supermicro)
- WD RE3 disks

We've got the following error messages:

Jul  7 10:09:12 sanwv01 scsi: [ID 107833 kern.warning] WARNING:
/p...@0,0/pci8086,3...@7/pci15d9,a...@0/s...@b,0 (sd2):

Jul  7 10:09:12 sanwv01 incomplete read- retrying

Jul  7 10:09:17 sanwv01 scsi: [ID 243001 kern.warning] WARNING:
/p...@0,0/pci8086,3...@7/pci15d9,a...@0 (mpt0):

Jul  7 10:09:17 sanwv01 mpt_handle_event_sync:
IOCStatus=0x8000, IOCLogInfo=0x31123000

Jul  7 10:09:17 sanwv01 scsi: [ID 243001 kern.warning] WARNING:
/p...@0,0/pci8086,3...@7/pci15d9,a...@0 (mpt0):

Jul  7 10:09:17 sanwv01 mpt_handle_event: IOCStatus=0x8000,
IOCLogInfo=0x31123000

Jul  7 10:09:19 sanwv01 scsi: [ID 365881 kern.info]
/p...@0,0/pci8086,3...@7/pci15d9,a...@0 (mpt0):

Jul  7 10:09:19 sanwv01 Log info 0x31123000 received for
target 21.

Jul  7 10:09:19 sanwv01 scsi_status=0x0, ioc_status=0x804b,
scsi_state=0xc

Jul  7 10:09:19 sanwv01 scsi: [ID 365881 kern.info]
/p...@0,0/pci8086,3...@7/pci15d9,a...@0 (mpt0):

Jul  7 10:09:19 sanwv01 Log info 0x31123000 received for
target 21.

Jul  7 10:09:19 sanwv01 scsi_status=0x0, ioc_status=0x804b,
scsi_state=0xc

Jul  7 10:09:19 sanwv01 scsi: [ID 365881 kern.info]
/p...@0,0/pci8086,3...@7/pci15d9,a...@0 (mpt0):

Jul  7 10:09:19 sanwv01 Log info 0x31123000 received for
target 21.

Jul  7 10:09:19 sanwv01 scsi_status=0x0, ioc_status=0x804b,
scsi_state=0xc

Jul  7 10:09:19 sanwv01 scsi: [ID 365881 kern.info]
/p...@0,0/pci8086,3...@7/pci15d9,a...@0 (mpt0):

Jul  7 10:09:19 sanwv01 Log info 0x31123000 received for
target 21.

Jul  7 10:09:19 sanwv01 scsi_status=0x0, ioc_status=0x804b,
scsi_state=0xc

Jul  7 10:09:21 sanwv01 smbsrv: [ID 138215 kern.notice] NOTICE:
smbd[WINDVISION\franz]: testshare share not found

Jul  7 10:09:26 sanwv01 scsi: [ID 243001 kern.warning] WARNING:
/p...@0,0/pci8086,3...@7/pci15d9,a...@0 (mpt0):

Jul  7 10:09:26 sanwv01 SAS Discovery Error on port 0.
DiscoveryStatus is DiscoveryStatus is |SMP timeout|
Jul  7 10:10:20 sanwv01 scsi: [ID 107833 kern.warning] WARNING:
/p...@0,0/pci8086,3...@7/pci15d9,a...@0 (mpt0):

Jul  7 10:10:20 sanwv01 Disconnected command timeout for
Target 21

After this message, the pool is unavailable over iscsi. We can't run
the format command or zpool status command anymore. We have to
reboot the server. This happens frequently for different Targets.
The stock firmware on the HBA is 1.26.01
--
This message posted from opensolaris.org


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] 1068E mpt driver issue

2010-07-07 Thread Jacob Ritorto

Well, OK, but where do I find it?

I'd still expect some problems with FCODE - vs. - BIOS issues if it's 
not SPARC firmware.


thx
jake


On 07/07/10 17:46, Garrett D'Amore wrote:

On Wed, 2010-07-07 at 17:33 -0400, Jacob Ritorto wrote:

Thank goodness!  Where, specifically, does one obtain this firmware for
SPARC?


Firmware is firmware -- it should not be host-cpu specific.  (At least
one *hopes* not, although I *suppose* it is possible to have endian
specific interfaces in the device... ugh that would be a very bad idea.)

- Garrett



On 07/07/10 17:04, Daniel Bakken wrote:

Upgrade the HBA firmware to version 1.30. We had the same problem, but
upgrading solved it for us.

Daniel Bakken


On Wed, Jul 7, 2010 at 1:57 PM, Joeri Vanthienen
m...@joerivanthienen.be wrote:

 Hi,

 We're using the following components with snv134:
 - 1068E HBA (supermicro)
 - 3U SAS / SATA Expander Backplane with dual LSI SASX28 Expander
 Chips (supermicro)
 - WD RE3 disks

 We've got the following error messages:

 Jul  7 10:09:12 sanwv01 scsi: [ID 107833 kern.warning] WARNING:
 /p...@0,0/pci8086,3...@7/pci15d9,a...@0/s...@b,0 (sd2):

 Jul  7 10:09:12 sanwv01 incomplete read- retrying

 Jul  7 10:09:17 sanwv01 scsi: [ID 243001 kern.warning] WARNING:
 /p...@0,0/pci8086,3...@7/pci15d9,a...@0 (mpt0):

 Jul  7 10:09:17 sanwv01 mpt_handle_event_sync:
 IOCStatus=0x8000, IOCLogInfo=0x31123000

 Jul  7 10:09:17 sanwv01 scsi: [ID 243001 kern.warning] WARNING:
 /p...@0,0/pci8086,3...@7/pci15d9,a...@0 (mpt0):

 Jul  7 10:09:17 sanwv01 mpt_handle_event: IOCStatus=0x8000,
 IOCLogInfo=0x31123000

 Jul  7 10:09:19 sanwv01 scsi: [ID 365881 kern.info]
 /p...@0,0/pci8086,3...@7/pci15d9,a...@0 (mpt0):

 Jul  7 10:09:19 sanwv01 Log info 0x31123000 received for
 target 21.

 Jul  7 10:09:19 sanwv01 scsi_status=0x0, ioc_status=0x804b,
 scsi_state=0xc

 Jul  7 10:09:19 sanwv01 scsi: [ID 365881 kern.info]
 /p...@0,0/pci8086,3...@7/pci15d9,a...@0 (mpt0):

 Jul  7 10:09:19 sanwv01 Log info 0x31123000 received for
 target 21.

 Jul  7 10:09:19 sanwv01 scsi_status=0x0, ioc_status=0x804b,
 scsi_state=0xc

 Jul  7 10:09:19 sanwv01 scsi: [ID 365881 kern.info]
 /p...@0,0/pci8086,3...@7/pci15d9,a...@0 (mpt0):

 Jul  7 10:09:19 sanwv01 Log info 0x31123000 received for
 target 21.

 Jul  7 10:09:19 sanwv01 scsi_status=0x0, ioc_status=0x804b,
 scsi_state=0xc

 Jul  7 10:09:19 sanwv01 scsi: [ID 365881 kern.info]
 /p...@0,0/pci8086,3...@7/pci15d9,a...@0 (mpt0):

 Jul  7 10:09:19 sanwv01 Log info 0x31123000 received for
 target 21.

 Jul  7 10:09:19 sanwv01 scsi_status=0x0, ioc_status=0x804b,
 scsi_state=0xc

 Jul  7 10:09:21 sanwv01 smbsrv: [ID 138215 kern.notice] NOTICE:
 smbd[WINDVISION\franz]: testshare share not found

 Jul  7 10:09:26 sanwv01 scsi: [ID 243001 kern.warning] WARNING:
 /p...@0,0/pci8086,3...@7/pci15d9,a...@0 (mpt0):

 Jul  7 10:09:26 sanwv01 SAS Discovery Error on port 0.
 DiscoveryStatus is DiscoveryStatus is |SMP timeout|
 Jul  7 10:10:20 sanwv01 scsi: [ID 107833 kern.warning] WARNING:
 /p...@0,0/pci8086,3...@7/pci15d9,a...@0 (mpt0):

 Jul  7 10:10:20 sanwv01 Disconnected command timeout for
 Target 21

 After this message, the pool is unavailable over iscsi. We can't run
 the format command or zpool status command anymore. We have to
 reboot the server. This happens frequently for different Targets.
 The stock firmware on the HBA is 1.26.01
 --
 This message posted from opensolaris.org






___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] [indiana-discuss] future of OpenSolaris

2010-03-23 Thread Jacob Ritorto
Sorry to beat the dead horse, but I've just found perhaps the only
written proof that OpenSolaris is supportable.  For those of you who
deny that this is an issue, its existence as a supported OS has been
recently erased from every other place I've seen on the Oracle sites.
Everyone please grab a copy of this before they silently delete it and
claim that it never existed.  I'm buying a contract right now.  I may
just take back every mean thing I ever said about Oracle.

http://www.sun.com/servicelist/ss/lgscaledcsupprt-us-eng-20091001.pdf


On Mon, Mar 1, 2010 at 10:23 PM, Erik Trimble erik.trim...@sun.com wrote:
 On Mon, 2010-03-01 at 20:52 -0500, Thomas Burgess wrote:
 There may be some things we choose not to open source going forward,
 similar to how MySQL manages certain value-add[s] at the top of the
 stack, Roberts said. It's important to understand the plan now is to
 deliver value again out of our IP investment, while at the same time
 measuring that with continuing to deliver OpenSolaris in the open.

         This will be a balancing act, one that we'll get right
         sometimes, but may not always.

         -
         From the feedback data I've seen customers dislike this type
         of licensing model most.  Dan may or may not be reading this,
         but I'd strongly discourage this approach.  Without knowing
         more I don't know what alternative I could recommend though..
         (Too bad I missed that irc meeting..)

         ./C



 I may be wrong, but isn't this already what they do?  I mean, there is
 a bunch of proprietary stuff in solaris that didn't make it into
 opensolaris.  I thought this was how they did things anyways, or am i
 misunderstanding something.


 Not quite. The stuff that didn't make it from Solaris Nevada into
 OpenSolaris was pretty much everything that /couldn't/ be open-sourced,
 or was being EOL'd in any case. We didn't really hold anything back
 there.

 The better analogy is what Tim Cook pointed out, which is the version of
 OpenSolaris that runs on the 7000-series storage devices. There's some
 stuff on there that isn't going to be putback into the OpenSolaris
 repos.


 I don't know, and I certainly can't speak for the project, but I suspect
 the type of enhancements which won't make it out into the OpenSolaris
 repos are indeed ones like we ship with the 7000-series hardware. That
 is, I doubt that you will be able to get an OpenSolaris with Oracle
 Improvements software distro/package - the proprietary stuff will only
 be used as part of a package bundle, since Oracle is big on
 one-stop-integrated-solution things.


 --
 Erik Trimble
 Java System Support
 Mailstop:  usca22-123
 Phone:  x17195
 Santa Clara, CA
 Timezone: US/Pacific (GMT-0800)


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] [indiana-discuss] future of OpenSolaris

2010-03-23 Thread Jacob Ritorto
Wow, they actually did the right thing in the end.  This is fantastic.
 I'm all too happy to eat as much crow as you have to offer.  I wonder
when (if?) they'll bring back the ability to purchase OpenSolaris
subscriptions online..

I'm actually so happy right now that I even appreciate Tim's clueless
would-be cynicisms :)



On Tue, Mar 23, 2010 at 9:48 AM, Tim Cook t...@cook.ms wrote:


 On Tue, Mar 23, 2010 at 7:11 AM, Jacob Ritorto jacob.rito...@gmail.com
 wrote:

 Sorry to beat the dead horse, but I've just found perhaps the only
 written proof that OpenSolaris is supportable.  For those of you who
 deny that this is an issue, its existence as a supported OS has been
 recently erased from every other place I've seen on the Oracle sites.
 Everyone please grab a copy of this before they silently delete it and
 claim that it never existed.  I'm buying a contract right now.  I may
 just take back every mean thing I ever said about Oracle.

 http://www.sun.com/servicelist/ss/lgscaledcsupprt-us-eng-20091001.pdf



 Erased from every site?   Assuming when I pointed out several links the
 first go round wasn't enough, how bout directly on the opensolaris page
 itself?
 http://www.opensolaris.com/learn/features/availability/
 • Highly available open source based solutions ready to deploy on
 OpenSolaris with full production support from Sun.
 OpenSolaris enables developers to develop, debug, and globally deploy
 applications faster, with built-in innovative features and with full
 production support from Sun.

 Full production level support

 Both Standard and Premium support offerings are available for deployment of
 Open HA Cluster 2009.06 with OpenSolaris 2009.06 with following
 configurations:

 etc. etc. etc.
  So do you get paid directly by IBM then, or is it more of a consultant
 type role?
 --Tim


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] [indiana-discuss] future of OpenSolaris

2010-02-25 Thread Jacob Ritorto
It's a kind gesture to say it'll continue to exist and all, but
without commercial support from the manufacturer, it's relegated to
hobbyist curiosity status for us.  If I even mentioned using an
unsupported operating system to the higherups here, it'd be considered
absurd.  I like free stuff to fool around with in my copious spare
time as much as the next guy, don't get me wrong, but that's not the
issue.  For my company, no support contract equals 'Death of
OpenSolaris.'

On Thu, Feb 25, 2010 at 4:29 AM, Peter Tribble peter.trib...@gmail.com wrote:
 On Thu, Feb 25, 2010 at 8:56 AM, Michael Schuster
 michael.schus...@sun.com wrote:
 perhaps this helps:

 http://www.eweek.com/c/a/Linux-and-Open-Source/Oracle-Explains-Unclear-Message-About-OpenSolaris-444787/

 Not really. It doesn't explain that the page in question was an
 explanation of how the
 OpenSolaris support model has worked for the past 18 months. The fact
 that people
 interpreted an unchanged 18-month old support policy (defined well
 before the acquisition
 was even mooted) as the death of the OpenSolaris project shows how
 crazy the world
 can get.

 I notice that the support page seems to have changed, though. In that
 it now says the
 GA period is until the next release, rather than the originally
 defined arbitrary
 6-month timer. (You can still see the 6-month timer in the support
 periods for 2008.05
 and 2008.11, though - notice that both of those left the GA phase
 before the next
 release happened.)

 Whether Oracle make changes in the future remains to be seen. I would expect
 them to (you can't turn around a loss-making acquisition into a
 profitable subsidiary
 without making changes).

 In terms of OpenSolaris, the word is that a position statement is due shortly.

 --
 -Peter Tribble
 http://www.petertribble.co.uk/ - http://ptribble.blogspot.com/

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] future of OpenSolaris

2010-02-22 Thread Jacob Ritorto

On 02/22/10 09:19, Henrik Johansen wrote:

On 02/22/10 02:33 PM, Jacob Ritorto wrote:

On 02/22/10 06:12, Henrik Johansen wrote:

Well - one thing that makes me feel a bit uncomfortable is the fact
that you no longer can buy OpenSolaris Support subscriptions.

Almost every trace of it has vanished from the Sun/Oracle website and a
quick call to our local Sun office confirmed that they apparently no
longer sell them.


I was actually very startled to see that since we're using it in
production here. After digging through the web for hours, I found that
OpenSolaris support is now included in Solaris support. This is a win
for us because we never know if a particular box, especially a dev box,
is going to remain Solaris or OpenSolaris for the duration of a support
purchase and now we're free to mix and mingle. If you refer to the
Solaris support web page (png attached if the mailing list allows),
you'll see that OpenSolaris is now officially part of the deal and is no
longer being treated as a second class support offering.


That would be *very* nice indeed. I have checked the URL in your
screenshot but I am getting a different result (png attached).

Ohwell - I'll just have to wait and see.


Confirmed your finding Henrik.  This is a showstopper for us as the 
higherups are already quite leery of Sun/Oracle and the future of 
Solaris.  I'm calling Oracle to see if I can get some answers.  The SUSE 
folks recently took a big chunk of our UNIX business here and 
OpenSolaris was my main tool in battling that.  For us, the loss of 
OpenSolaris and its support likely indicates the end of Solaris altogether.

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] future of OpenSolaris

2010-02-22 Thread Jacob Ritorto
On Mon, Feb 22, 2010 at 10:04 AM, Henrik Johansen hen...@scannet.dk wrote:
 On 02/22/10 03:35 PM, Jacob Ritorto wrote:

 On 02/22/10 09:19, Henrik Johansen wrote:

 On 02/22/10 02:33 PM, Jacob Ritorto wrote:

 On 02/22/10 06:12, Henrik Johansen wrote:

 Well - one thing that makes me feel a bit uncomfortable is the fact
 that you no longer can buy OpenSolaris Support subscriptions.

 Almost every trace of it has vanished from the Sun/Oracle website and a
 quick call to our local Sun office confirmed that they apparently no
 longer sell them.

 I was actually very startled to see that since we're using it in
 production here. After digging through the web for hours, I found that
 OpenSolaris support is now included in Solaris support. This is a win
 for us because we never know if a particular box, especially a dev box,
 is going to remain Solaris or OpenSolaris for the duration of a support
 purchase and now we're free to mix and mingle. If you refer to the
 Solaris support web page (png attached if the mailing list allows),
 you'll see that OpenSolaris is now officially part of the deal and is no
 longer being treated as a second class support offering.

 That would be *very* nice indeed. I have checked the URL in your
 screenshot but I am getting a different result (png attached).

 Ohwell - I'll just have to wait and see.

 Confirmed your finding Henrik.  This is a showstopper for us as the
 higherups are already quite leery of Sun/Oracle and the future of
 Solaris.  I'm calling Oracle to see if I can get some answers.  The SUSE
 folks recently took a big chunk of our UNIX business here and
 OpenSolaris was my main tool in battling that.  For us, the loss of
 OpenSolaris and its support likely indicates the end of Solaris
 altogether.

 Well - I too am reluctant to put more OpenSolaris boxes into production
 until this matter has been resolved.


Look at http://www.sun.com/service/eosl/eosl_opensolaris.html

This page is stating that OpenSolaris is supported for up to 5 years.

- --
Al Slater


Since we're OT here, I've started a new thread in Indiana-Discuss
called OpenSolaris EOSL:
http://mail.opensolaris.org/pipermail/indiana-discuss/2010-February/017593.html

FWIW, I suspect that this situation does not warrant a Wait and See
response.  We're being badly mistreated here and it's probably too
late to do anything about it.  Probably the only chance to quell this
poor stewardship is to get big and loud right away.  Then we can see
if Oracle actually respects the notion of community.
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] [indiana-discuss] future of OpenSolaris

2010-02-22 Thread Jacob Ritorto
2010/2/22 Matthias Pfützner matth...@pfuetzner.de:
 You (Jacob Ritorto) wrote:
 FWIW, I suspect that this situation does not warrant a Wait and See
 response.  We're being badly mistreated here and it's probably too
 late to do anything about it.  Probably the only chance to quell this
 poor stewardship is to get big and loud right away.  Then we can see
 if Oracle actually respects the notion of community.

 Badly mistreated here?

 Bad words, you're using, please change them! And, if you have a problem,
 escalate with your Sales-Rep!


Of course I will escalate with my sales rep, but sorry, Matthias, I won't
condone having the carpet yanked out from under me and my business
while putting on a happy face in the forums.  This has to be addressed
in public.  If the word "we" here offends you, please take it to mean "us,
as a consumer group."
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] What would happen with a zpool if you 'mirrored' a disk...

2010-02-04 Thread Jacob Ritorto
Seems your controller is actually doing only harm here, or am I missing
something?

On Feb 4, 2010 8:46 AM, Karl Pielorz kpielorz_...@tdx.co.uk wrote:


--On 04 February 2010 11:31 + Karl Pielorz kpielorz_...@tdx.co.uk
wrote:

 What would happen...
A reply to my own post... I tried this out: when you bring 'ad2' online
again, ZFS immediately logs a 'vdev corrupt' failure and marks 'ad2' (which
at this point is a byte-for-byte copy of 'ad1' as it was being written to in
background) as 'FAULTED' with 'corrupted data'.

You can't replace it with itself at that point, but a detach on ad2, and
then attaching ad2 back to ad1 results in a resilver, and recovery.

So to answer my own question - from my tests it looks like you can do this,
and get away with it. It's probably not ideal, but it does work.

A safer bet would be to detach the drive from the pool, and then re-attach
it (at which point ZFS assumes it's a new drive and probably ignores the
'mirror image' data that's on it).

-Karl

(The reason for testing this is because of a weird RAID setup I have where
if 'ad2' fails, and gets replaced - the RAID controller is going to mirror
'ad1' over to 'ad2' - and cannot be stopped. However, once the re-mirroring
is complete the RAID controller steps out the way, and allows raw access to
each disk in the mirror. Strange, a long story - but true).


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Is LSI SAS3081E-R suitable for a ZFS NAS ?

2010-01-31 Thread Jacob Ritorto
Hey Mark,
I spent *so* many hours looking for that firmware.  Would you please
post the link?  Did the firmware dl you found come with fcode? Running blade
2000 here (SPARC).

Thx
Jake

On Jan 26, 2010 11:52 AM, Mark Nipper ni...@bitgnome.net wrote:

 It may depend on the firmware you're running. We've
 got a SAS1068E based
 card in Dell R710 at...
Well, I may have misspoke.  I just spent a good portion of yesterday
upgrading to the latest firmware myself (downloaded from SuperMicro's FTP,
version 1.26.00 also; after I figured out I had to pass the -o option to
mptutil to force the flash since it was complaining about a mismatched card
or some such) and I thought that the machine had locked up again later in
the day yesterday because I couldn't ssh into the machine.

To my surprise though, I was able to log into the machine just fine this
morning directly on the console from the command line.  It seems the snv_125
bug with /dev/ptmx bit me (the error: /dev/ptmx: Permission denied problem
that required me tracking down the release notes for snv_125 to figure out
the problem) and the server was happy otherwise.  More importantly, the
zpool activity had all finished and I have three clean spares again!
 Normally this amount of I/O would have totally killed the machine!

So somewhere between upgrading the firmware to the latest version and
upgrading to snv_131, it looks like the problem may have actually been
addressed.  I'm guardedly optimistic at this point, given the previous
problems I've had so far with this on-board controller.

Interesting to hear someone else with the same chip but on an expansion card
has no problems (but was with the on-board chip).

-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] zones and other filesystems

2010-01-21 Thread Jacob Ritorto

Thomas,
	If you're trying to make user home directories on your local machine in 
/home, you have to watch out because the initial Solaris config assumes 
that you're in an enterprise environment and the convention is to have a 
filer somewhere that serves everyone's home directories which, with the 
default automount config, get mounted onto your machine's /home. 
Personally, when setting up a standalone box, I don't put home 
directories in /home just to avoid clobbering enterprise unix 
conventions.  Gaëtan gave you the quick solution of just shutting off 
the automounter, which allows you to avoid addressing the problem this 
time around.
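
If you do want local home directories without shutting the automounter off
entirely, the usual middle ground (assuming the stock /etc/auto_home) is a
wildcard loopback entry so that /home/<user> maps back to the local
/export/home/<user>:

  # /etc/auto_home -- add below any existing entries
  *   localhost:/export/home/&

  svcadm restart autofs
  # hypothetical user: the directory lives in /export/home but is
  # reachable as /home/someuser through the automounter
  useradd -m -d /export/home/someuser someuser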


--jake


Thomas Burgess wrote:
hrm...that seemed to work...i'm so new to solaris...it's SO 
different...what exactly did i just disable?


Does that mount nfs shares or something?
why should that prevent me from creating home directories?
thanks


2010/1/21 Gaëtan Lehmann gaetan.lehm...@jouy.inra.fr



On 21 Jan 2010, at 14:14, Thomas Burgess wrote:


now i'm stuck again...sorry to clog the tubes with my nubishness.

i can't seem to create users inside the zone...i'm sure it's
due to zfs privileges somewhere but i'm not exactly sure how to
fix it...i don't mind if i need to manage the zfs filesystem
outside of the zone, i'm just not sure WHERE i'm supposed to do
it


when i try to create a home dir i get this:

mkdir: Failed to make directory wonslung; Operation not applicable


when i try to do it via adduser i get this:

UX: useradd: ERROR: Unable to create the home directory:
Operation not applicable.


and when i try to enter the zone home dir from the global zone i
get this, even as root:

bash: cd: home: Not owner


have i seriously screwed up or did i again miss something vital.



Maybe it's because of the automounter.
If you don't need that feature, try to disable it in your zone with

 svcadm disable autofs


Gaëtan

-- 
Gaëtan Lehmann

Biologie du Développement et de la Reproduction
INRA de Jouy-en-Josas (France)
tel: +33 1 34 65 29 66    fax: 01 34 65 29 09
http://voxel.jouy.inra.fr  http://www.itk.org
http://www.mandriva.org  http://www.bepo.fr







___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] zones and other filesystems

2010-01-21 Thread Jacob Ritorto

Thomas Burgess wrote:
I'm not used to the whole /home vs /export/home difference and when you 
add zones to the mix it's quite confusing.


I'm just playing around with this zone...to learn but in the next REAL 
zone i'll probably:


mount the home directories from the base system (this machine itself IS 
a file server, and the zone i intend to config will be a ftp server and 
possible a bit torrent client)


or create a couple stand-alone users which AREN'T in /home

This makes a lot more sense now...I also forgot to set a default router 
in my zone so i can't even connect to the internet right now..


When i edit it with zonecfg can i just do:

add net
set defrouter=192.168.1.1
end


OK, so if you're the filer too, the automount system still works for you 
the same as it does for all other machines using automount - it'll nfs 
mount to itself, etc.  Check out and follow the convention if you're so 
inclined.  Then of course, it helps to become a nis or ldap expert too, 
which is a bit much to chew on if you're just here to check out zones, 
so your simplification above is fine, as is Gaëtan's original 
recommendation... At least until your network grows to the point that 
you start to notice the home dir chaos and can't hit nfs shares at 
will..  Then you have to go back and undo all your automount breakage.


And yes, your zonecfg tweak should do the trick.  But you don't have to 
take my word for it -- the experts hang out in zones-discuss ;)

http://mail.opensolaris.org/mailman/listinfo/zones-discuss
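
Spelled out, the whole transaction is something like the following (zone
name, NIC and addresses are made up, and if the zone already has a net
resource you'd select it rather than add a second one):

  zonecfg -z myzone
  zonecfg:myzone> add net
  zonecfg:myzone:net> set physical=e1000g0
  zonecfg:myzone:net> set address=192.168.1.50/24
  zonecfg:myzone:net> set defrouter=192.168.1.1
  zonecfg:myzone:net> end
  zonecfg:myzone> commit
  zonecfg:myzone> exit

then reboot the zone for the routing change to take effect.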



ttyl
jake
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] ssd for zil and boot?

2009-12-16 Thread Jacob Ritorto

Hi all,
	Is it sound to put rpool and ZIL on a pair of SSDs (with rpool 
mirrored)?  I have (16) 500GB SATA disks for the data pools and they're 
doing lots of database work, so I'd been hoping to cut down the seeks a 
bit this way.  Is this a sane, safe, practical thing to do and if so, 
how much space do I need for the ZIL(s)?  OS is OpenSolaris 2009.06. 
Data on pools is being shared via old iSCSI, not COMSTAR.
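
What I'm picturing, roughly (device and pool names invented, and assuming
the OS ends up on slice 0 of the first SSD with slice 1 left over on each):

  # mirror the root pool onto the second SSD's s0
  zpool attach rpool c2t0d0s0 c2t1d0s0
  # mirrored log device for the data pool on the leftover s1 slices
  zpool add tank log mirror c2t0d0s1 c2t1d0s1

From what I've read the slog only ever holds a few seconds' worth of
uncommitted sync writes, so a few GB ought to be plenty, but please correct
me if that's wrong.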


tia
jake
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] hung pool on iscsi

2009-11-20 Thread Jacob Ritorto

Hi,
	Can anyone identify whether this is a known issue (perhaps 6667208) and 
if the fix is going to be pushed out to Solaris 10 anytime soon?  I'm 
getting badly beaten up over this weekly, essentially anytime we drop a 
packet between our twenty-odd iscsi-backed zones and the filer.


Chris was kind enough to provide his synopsis here (thanks Chris): 
http://utcc.utoronto.ca/~cks/space/blog/solaris/ZFSFailmodeProblem


	Also, I really need a workaround for the meantime.  Is someone out 
there handy enough with the undocumented stuff to recommend a zdb 
command or something that will pound the delinquent pool into submission 
without crashing everything?  Surely there's a pool hard-reset command 
somewhere for the QA guys, right?


thx
jake


Chris Siebenmann wrote:

You write:
| Now I'd asked about this some months ago, but didn't get an answer so 
| forgive me for asking again: What's the difference between wait and 
| continue in my scenario?  Will this allow the one faulted pool to fully 
| fail and accept that it's broken, thereby allowing me to frob the iscsi 
| initiator, re-import the pool and restart the zone? [...]


 Our experience here in a similar iscsi-based environment is that
neither 'wait' nor 'continue' will enable the pool to recover, and that
frequently the entire system will eventually hang in a state where no
ZFS pools can be used and the system can't even be rebooted cleanly.

 My primary testing has been on Solaris 10 update 6, and I wrote
up the results here:
   http://utcc.utoronto.ca/~cks/space/blog/solaris/ZFSFailmodeProblem

 I have recently been able to do preliminary testing on Solaris 10
update 8, and it appears to behave more or less the same.

 I wish I had better news for you.

- cks


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS GUI - where is it?

2009-11-19 Thread Jacob Ritorto

You need Solaris for the zfs webconsole, not OpenSolaris.
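
(On Solaris 10 proper, if the console isn't already listening on port 6789,
the Java Web Console gets turned on with something along the lines of:

  /usr/sbin/smcwebserver enable
  /usr/sbin/smcwebserver start

but as far as I know the ZFS administration GUI was never delivered in the
OpenSolaris/IPS repositories, so on 2009.06 there's nothing to enable.)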

Paul wrote:

Hi there, my first post (yay).

I have done much googling and everywhere I look I see people saying just browse to 
https://localhost:6789 and it is there. Well it's not; I am running 2009.06 
(snv_111b), the current latest stable release I do believe?

This is my first major foray into the world of Opensolaris (previous FreeBSD 
admin and then Debian/Ubuntu).

I understand the pkg is called SUNWzfsg ? But searching for this yields no results, there 
is no service called webconsole installed or running or online or offline.

Personally I am more than happy to use the CLI tools to manage ZFS storage, 
however I will need to be able to let others use the system who are more used to 
GUIs to manage such things, and my hope is that they are far less likely to 
break things.

I have seen some mention of the SXCE version but apparently support for this 
finished last month?

Any help would be greatly appreciated! Thanks.


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] hung pool on iscsi

2009-11-18 Thread Jacob Ritorto

Hi all,
	Not sure if you missed my last response or what, but yes, the pool is 
set to wait because it's one of many pools on this prod server and we 
can't just panic everything because one pool goes away.


I just need a way to reset one pool that's stuck.

	If the architecture of zfs can't handle this scenario, I understand and 
can rework the layout.


Just let me know one way or the other, please.

thx
jake

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] X45xx storage vs 7xxx Unified storage

2009-11-18 Thread Jacob Ritorto
	I don't wish to hijack, but along these same comparing lines, is there 
anyone able to compare the 7200 to the HP LeftHand series?   I'll start 
another thread if this goes too far astray.


thx
jake


Darren J Moffat wrote:

Len Zaifman wrote:

We are looking at adding to our storage. We would like ~20TB-30 TB.

we have ~200 nodes (1100 cores) to feed data to using nfs, and we 
are looking for high reliability, good performance (up to at least 350 
MBytes/second over a 10 GigE connection) and large capacity.


For the X45xx (aka thumper) capacity and performance seem to be 
there (we have 3 now).
However, for system upgrades, maintenance and failures, we have an 
availability problem.


For the 7xxx in a cluster configuration, we seem to be able to solve 
the availability issue, and perhaps get performance benefits the SSD.


however, the costs constrain the capacity we could afford.

If anyone has experience with both systems, or with the 7xxx system in 
a cluster configuration, we would be interested in hearing:


1) Does the 7xxx perform as well or better than thumpers?


Depends on which 7xxx you pick.

2) Does the 7xxx failover work as expected (in test and real life)?


Depends what your expectations are! The time to failover depends on how 
you configure the cluster and how many filesystems you have and how many 
disks etc etc.


Have a read over this blog entry:

http://blogs.sun.com/wesolows/entry/7000_series_takeover_and_failback


3) Does the SSD really help?


For NFS, yes, the WriteZilla (slog) really helps because of how the NFS 
protocol works.  For ReadZilla (l2arc) it depends on your workload.


4) Do the analytics help prevent and solve real problems, or are they 
frivolous pretty pictures?


Yes they do, at a level of detail no other storage vendor can currently 
provide.



5) is the 7xxx really a black box to be managed only by the GUI?


GUI or CLI but the CLI is *NOT* a Solaris shell it is a CLI version of 
the GUI.  The 7xxx is a true appliance, it happens to be built from 
OpenSolaris code but it is not a Solaris/OpenSolaris install.  So you 
can't run your own applications on it.  Backups are via NDMP for example.



I highly recommend downloading the simulator and trying it in 
VirtualBox/VMware:


http://www.sun.com/storage/disk_systems/unified_storage/



___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] hung pool on iscsi

2009-11-18 Thread Jacob Ritorto

Tim Cook wrote:

 Also, I never said anything about setting it to panic.  I'm not sure why
 you can't set it to continue while alerting you that a vdev has failed?


Ah, right, thanks for the reminder Tim!

Now I'd asked about this some months ago, but didn't get an answer so 
forgive me for asking again: What's the difference between wait and 
continue in my scenario?  Will this allow the one faulted pool to fully 
fail and accept that it's broken, thereby allowing me to frob the iscsi 
initiator, re-import the pool and restart the zone?  That'd be exactly 
what I need.
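
For the archives, the knobs in question are just (pool name made up):

  zpool get failmode zonepool01
  zpool set failmode=continue zonepool01
  # and once the iscsi session comes back, this is what's supposed to
  # let a suspended pool resume:
  zpool clear zonepool01

Whether that clear actually un-wedges things in this scenario is exactly
what I'm trying to find out.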


thx
jake
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] hung pool on iscsi

2009-11-16 Thread Jacob Ritorto
zpool for zone of customer-facing production appserver hung due to iscsi 
transport errors. How can I {forcibly} reset this pool?  zfs commands 
are hanging and iscsiadm remove refuses.


r...@raadiku~[8]8:48#iscsiadm remove static-config 
iqn.1986-03.com.sun:02:aef78e-955a-4072-c7f6-afe087723466

iscsiadm: logical unit in use
iscsiadm: Unable to complete operation

r...@raadiku~[6]8:45#dmesg
[...]
Nov 16 00:03:30 Raadiku scsi: [ID 243001 kern.warning] WARNING: 
/scsi_vhci (scsi_vhci0):
Nov 16 00:03:30 Raadiku 
/scsi_vhci/s...@g013048c514da2a0049ae9806 (ssd3): Command Timeout 
on path /iscsi (iscsi0)
Nov 16 00:03:30 Raadiku scsi: [ID 107833 kern.warning] WARNING: 
/scsi_vhci/s...@g013048c514da2a0049ae9806 (ssd3):
Nov 16 00:03:30 Raadiku SCSI transport failed: reason 'timeout': 
retrying command
Nov 16 08:40:10 Raadiku su: [ID 810491 auth.crit] 'su root' failed for 
jritorto on /dev/pts/1
Nov 16 08:47:05 Raadiku iscsi: [ID 213721 kern.notice] NOTICE: iscsi 
session(9) - session logout failed (1)


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] hung pool on iscsi

2009-11-16 Thread Jacob Ritorto
On Mon, Nov 16, 2009 at 4:49 PM, Tim Cook t...@cook.ms wrote:
 Is your failmode set to wait?

Yes.  This box has like ten prod zones and ten corresponding zpools
that initiate to iscsi targets on the filers.  We can't panic the
whole box just because one {zone/zpool/iscsi target} fails.  Are there
undocumented commands to reset a specific zpool or something?

thx
jake
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] zfs-discuss gone from web?

2009-10-28 Thread Jacob Ritorto
	With the web redesign, how does one get to zfs-discuss via the 
opensolaris.org website?


	Sorry for the ot question, but I'm becoming desperate after clicking 
circular links for the better part of the last hour :(


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] The iSCSI-backed zpool for my zone hangs.

2009-10-21 Thread Jacob Ritorto
My goal is to have a big, fast, HA filer that holds nearly everything for a 
bunch of development services, each running in its own Solaris zone.  So when I 
need a new service, test box, etc., I provision a new zone and hand it to the 
dev requesters and they load their stuff on it and go.

Each zone has zonepath on its own zpool, which is an iSCSI-backed device 
pointing to a unique sparse zvol on the filer.
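
Concretely, provisioning one of these looks roughly like the following
(names, sizes and the device path are invented, and this is the old
shareiscsi target, not COMSTAR):

  # on the filer: a sparse zvol, exported as an iscsi target
  zfs create -s -V 100g tank/zvols/zone1
  zfs set shareiscsi=on tank/zvols/zone1

  # on the 1U head: point the initiator at the filer, build the pool,
  # and drop the zone on it
  iscsiadm add discovery-address 192.168.10.5:3260
  iscsiadm modify discovery --sendtargets enable
  zpool create zone1pool c4t600144F04A7B3C5Ed0   # device name will differ
  zonecfg -z zone1 'create; set zonepath=/zone1pool/zone1'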

If things slow down, we buy more 1U boxes with lots of CPU and RAM, don't 
care about the disk, and simply provision more LUNs on the filer.  Works great. 
 Cheap, good performance, nice and scalable.  They smiled on me for a while.

Until the filer dropped a few packets.

I know it shouldn't happen and I'm addressing that, but the failure mode 
for this eventuality is too drastic.  If the filer isn't responding nicely to 
the zone's i/o request, the zone pretty much completely hangs, responding to 
pings perhaps, but not allowing any real connections. Kind of, not 
surprisingly, like a machine whose root disk got yanked during normal 
operations.

To make it worse, the whole global zone seems unable to do anything about 
the issue.  I can't down the affected zone; zoneadm commands just put the zone 
in a shutting_down state forever.  zpool commands just hang.  Only thing I've 
found to recover (from far away in the middle of the night) is to uadmin 1 1 
the global zone.  Even reboot didn't work. So all the zones on the box get 
hard-reset and that makes all the dev guys pretty unhappy.

I thought about setting failmode to continue on these individual zone pools 
because it's set to wait right now.  How do you folks predict that action will 
change play?

thx
jake
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] The iSCSI-backed zpool for my zone hangs.

2009-10-19 Thread Jacob Ritorto
	My goal is to have a big, fast, HA filer that holds nearly everything 
for a bunch of development services, each running in its own Solaris 
zone.  So when I need a new service, test box, etc., I provision a new 
zone and hand it to the dev requesters and they load their stuff on it 
and go.


	Each zone has zonepath on its own zpool, which is an iSCSI-backed 
device pointing to a unique sparse zvol on the filer.


	If things slow down, we buy more 1U boxes with lots of CPU and RAM, 
don't care about the disk, and simply provision more LUNs on the filer. 
 Works great.  Cheap, good performance, nice and scalable.  They smiled 
on me for a while.


Until the filer dropped a few packets.

	I know it shouldn't happen and I'm addressing that, but the failure 
mode for this eventuality is too drastic.  If the filer isn't responding 
nicely to the zone's i/o request, the zone pretty much completely hangs, 
responding to pings perhaps, but not allowing any real connections. 
Kind of, not surprisingly, like a machine whose root disk got yanked 
during normal operations.


	To make it worse, the whole global zone seems unable to do anything 
about the issue.  I can't down the affected zone; zoneadm commands just 
put the zone in a shutting_down state forever.  zpool commands just 
hang.  Only thing I've found to recover (from far away in the middle of 
the night) is to uadmin 1 1 the global zone.  Even reboot didn't work. 
So all the zones on the box get hard-reset and that makes all the dev 
guys pretty unhappy.


	I thought about setting failmode to continue on these individual zone 
pools because it's set to wait right now.  How do you folks predict that 
action will change play?


thx
jake
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] nuke lots of snapshots

2009-09-03 Thread Jacob Ritorto
Sorry if this is a faq, but I just got a time-sensitive dictum from the 
higherups to disable and remove all remnants of rolling snapshots on our 
DR filer.  Is there a way for me to nuke all snapshots with a single 
command, or do I have to manually destroy all 600+ snapshots with zfs 
destroy?


osol 2008.11


thx
jake
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] nuke lots of snapshots

2009-09-03 Thread Jacob Ritorto

Gaëtan Lehmann wrote:


  zfs list -r -t snapshot -o name -H pool | xargs -tl zfs destroy

should destroy all the snapshots in a pool



Thanks Gaëtan.  I added 'grep auto' to filter on just the rolling snaps 
and found that xargs wouldn't let me put both flags on the same dash, so:


zfs list -r -t snapshot -o name -H poolName | grep auto | xargs -t -l 
zfs destroy



worked for me.
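
A cheap sanity check before pulling the trigger is to stick an echo in front
of the destroy, which turns the whole thing into a dry run that just prints
what would be destroyed:

zfs list -r -t snapshot -o name -H poolName | grep auto | xargs -tl echo zfs destroy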

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Petabytes on a budget - blog

2009-09-02 Thread Jacob Ritorto

Torrey McMahon wrote:

3) Performance isn't going to be that great with their design but...they 
might not need it.



Would you be able to qualify this assertion?  Thinking through it a bit, 
even if the disks are better than average and can achieve 1000Mb/s each, 
each uplink from the multiplier to the controller will still have 
1000Gb/s to spare in the slowest SATA mode out there.  With (5) disks 
per multiplier * (2) multipliers * 1000Mb/s each, that's 10Gb/s at 
the PCI-e interface, which approximately coincides with a meager 4x 
PCI-e slot.
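
Spelling the arithmetic out, taking the per-disk figure above at face value
and assuming PCIe 1.x lane rates:

  5 disks/multiplier x 2 multipliers x 1000 Mb/s  = 10,000 Mb/s = 10 Gb/s
  PCIe 1.x x4 slot: 4 lanes x 2.5 Gb/s raw        = 10 Gb/s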

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Shrinking a zpool?

2009-08-05 Thread Jacob Ritorto
Is this implemented in OpenSolaris 2008.11?  I'm moving my filer's rpool 
to an ssd mirror to free up bigdisk slots currently used by the os and need to 
shrink rpool from 40GB to 15GB. (only using 2.7GB for the install).

thx
jake
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Shrinking a zpool?

2009-08-05 Thread Jacob Ritorto
+1

Thanks for putting this in a real world perspective, Martin.  I'm faced with 
this exact circumstance right now (see my post to the list from earlier today). 
 Our ZFS filers are highly utilised, highly trusted components at the core of 
our enterprise and serve out OS images, mail storage, customer facing NFS 
mounts, CIFS mounts, etc. for nearly all of our critical services.  Downtime 
is, essentially, a catastrophe and won't get approval without weeks of 
painstaking social engineering..

jake
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Shrinking a zpool?

2009-08-05 Thread Jacob Ritorto
Interesting, this is the same procedure I invented (with the exception 
that the zfs send came from the net) and used to hack OpenSolaris 
2009.06 onto my home SunBlade 2000 since it couldn't do AI due to low 
OBP rev..


I'll have to rework it this way, then, which will unfortunately cause 
downtime for a multitude of dependent services, affect the entire 
universe here and make my department look inept.  As much as it stings, 
I accept that this is the price I pay for adopting a new technology. 
Acknowledge and move on.  Quite simply, if this happens too often, we 
know we've made the wrong decision on vendor/platform.


Anyway, looking forward to shrink.  Thanks for the tips.


Kyle McDonald wrote:

Kyle McDonald wrote:

Jacob Ritorto wrote:
Is this implemented in OpenSolaris 2008.11?  I'm moving my 
filer's rpool to an ssd mirror to free up bigdisk slots currently 
used by the os and need to shrink rpool from 40GB to 15GB. (only 
using 2.7GB for the install).


  
Your best bet would be to install the new ssd drives, create a new 
pool, snapshot the exisitng pool and use ZFS send/recv to migrate the 
data to the new pool. There are docs around about how install grub and 
the boot blocks on the new devices also. After that remove (export!, 
don't destroy yet!)

the old drives, and reboot to see how it works.

If you have no problems, (and I don't think there's anything technical 
that would keep this from working,) then you're good. Otherwise put 
the old pool back in. :)


This thread dicusses basically this same thing - he had a problem along 
the way, but Cindy answered it.



Hi Nawir,

I haven't tested these steps myself, but the error message
means that you need to set this property:

# zpool set bootfs=rpool/ROOT/BE-name rpool

Cindy

On 08/05/09 03:14, nawir wrote:
Hi,

I have sol10u7 OS with 73GB HD in c1t0d0.
I want to clone it to 36GB HD

These steps below is what come in my mind
STEPS TAKEN
# zpool create -f altrpool c1t1d0s0
# zpool set listsnapshots=on rpool
# SNAPNAME=`date +%Y%m%d`
# zfs snapshot -r rpool/r...@$snapname
# zfs list -t snapshot
# zfs send -R rp...@$snapname | zfs recv -vFd altrpool
# installboot -F zfs /usr/platform/`uname -i`/lib/fs/zfs/bootblk 
/dev/rdsk/c1t1d0s0

for x86 do
# installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c1t1d0s0
# zpool export altrpool
# init 5
remove source disk (c1t0d0s0) and move target disk (c1t1d0s0) to slot0
-insert solaris10 dvd
ok boot cdrom -s
# zpool import altrpool rpool
# init 0
ok boot disk1

ERROR:
Rebooting with command: boot disk1
Boot device: /p...@1c,60/s...@2/d...@1,0  File and args:
no pool_props
Evaluating:
The file just loaded does not appear to be executable.
ok

QUESTIONS:
1. what's wrong what my steps
2. any better idea

thanks 

-Kyle




 -Kyle


thx
jake
  




___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] SPARC SATA, please.

2009-06-24 Thread Jacob Ritorto
	I think this is the board that shipped in the original T2000 machines 
before they began putting the sas/sata onboard:  LSISAS3080X-R


Can anyone verify this?



Justin Stringfellow wrote:

Richard Elling wrote:

Miles Nordin wrote:

ave == Andre van Eyssen an...@purplecow.org writes:
et == Erik Trimble erik.trim...@sun.com writes:
ea == Erik Ableson eable...@mac.com writes:
edm == Eric D. Mudama edmud...@bounceswoosh.org writes:



   ave The LSI SAS controllers with SATA ports work nicely with
   ave SPARC.

I think what you mean is ``some LSI SAS controllers work nicely with
SPARC''.  It would help if you tell exactly which one you're using.

I thought the LSI 1068 do not work with SPARC (mfi driver, x86 only).
  


Sun has been using the LSI 1068[E] and its cousin, 1064[E] in
SPARC machines for many years.  In fact, I can't think of a
SPARC machine in the current product line that does not use
either 1068 or 1064 (I'm sure someone will correct me, though ;-)
-- richard


Might be worth having a look at the T1000 to see what's in there. We 
used to ship those with SATA drives in.


cheers,
--justin


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] SPARC SATA, please.

2009-06-22 Thread Jacob Ritorto
Is there a card for OpenSolaris 2009.06 SPARC that will do SATA correctly yet?  
Need it for a super cheapie, low expectations, SunBlade 100 filer, so I think 
it has to be notched for 5v PCI slot, iirc. I'm OK with slow -- main goals here 
are power saving (sleep all 4 disks) and 1TB+ space.  Oh, and I hate to be an 
old head, but I don't want a peecee.  They still scare me :)  Thinking root 
pool on 16GB ssd, perhaps, so the thing can spin down the main pool and idle 
*really* cheaply..

thx
jake
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] rpool mirroring

2009-06-05 Thread Jacob Ritorto
I've been dealing with this at an unusually high frequency these days. 
It's even dodgier on SPARC.  My recipe has been to run format -e and 
first try to label as SMI.  Solaris PCs sometimes complain that the disk 
needs fdisk partitioning and I always delete *all* partitions, exit 
fdisk, enter fdisk again, at which time it prompts me to make it a 100% 
Solaris partition and I agree.  I think that may have been my way of 
accomplishing the part where you bailed and did your Linux partition 
hacking, Frank.  Anyway, after that, remember that you currently still 
have to use SMI labels for the root pool, so run label and choose that. 
 format(1M) sometimes momentarily hangs and crashes the there, iirc -- 
if this happens to you, run it again and it's fine the second time 
through.  Maybe related to fdisk changes plus RAM disklabel vs on-disk 
disklabel, etc.  Anyway, make slice 0 identical to slice 2, which is 
congruent with the OpenSolaris install procedure; however to eliminate 
human error, a


prtvtoc /dev/dsk/cXtXd0s0 | fmthard -s- /dev/rdsk/cXtXd0s0

should work for you, at this point, to duplicate the label from the 
first disk to the second.  Read and understand that command before 
issuing it as it does very little second guessing and can easily nuke 
stuff if you run it wrong.
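
Once the labels match, the mirror itself is just an attach plus boot blocks
on the new half -- device names invented here, and this is the SPARC flavour
(use installgrub on x86):

  zpool attach rpool c0t0d0s0 c0t1d0s0
  installboot -F zfs /usr/platform/`uname -i`/lib/fs/zfs/bootblk /dev/rdsk/c0t1d0s0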



Note: on SPARC, if you're using generic disks that happen to be the same 
part numbers as real Sun disks, but have generic labels instead of the 
nice SUN146G and the like, you can run into this issue of ZFS refusing 
to mirror to a smaller disk again.  My solution to that was to run 
format = type and select the type matching the other disk, then 
continue such that all reasonably matching disks have the same type. 
This seems to help with geometry issues and also completely changed 
(drastically for the better) the performance characteristics of some 
disks in my SunFire v440.


*Note to others who may end up here later or for tangential reasons: 
Realise that pretty much all of these gross manipulations of your disk 
configuration are sufficiently heavy-handed that you're going to lose 
everything stored on the disk in question when you run them.


hth
jake


Frank Middleton wrote:

On 06/04/09 06:44 PM, cindy.swearin...@sun.com wrote:

Hi Noz,

This problem was reported recently and this bug was filed:

6844090 zfs should be able to mirror to a smaller disk


Is this filed on bugs or defects? I had the exact same problem,
and it turned out to be a rounding error in Solaris format/fdisk.
The only way I could fix it was to use Linux (well, Fedora) sfdisk
to make both partitions exactly the same number of bytes. The
alternates partition seems to be hard wired on older disks  and
AFAIK there's no way to use that space. sfdisk is on the Fedora
live CD if you don't have a handy Linux system to get it from.
BTW the disks were nominally the same size but had different
geometries.

Since I can't find 6844090, I have no idea what it says, but this
really seems to be a bug in fdisk, not ZFS, although I would think
ZFS should be able to mirror to a disk that is only a tiny bit
smaller...

-- Frank
 

I believe slice 9 (alternates) is an older method for providing
alternate disk blocks on x86 systems. Apparently, it can be removed by
using the format -e command. I haven't tried this though.

I don't think removing slice 9 will help though if these two disks
are not identical, hence the bug.

You can workaround this problem by attaching a slightly larger disk.

Cindy


noz wrote:

I've been playing around with zfs root pool mirroring and came across
some problems.

I have no problems mirroring the root pool if I have both disks
attached during OpenSolaris installation (installer sees 2 disks).

The problem occurs when I only have one disk attached to the system
during install. After OpenSolaris installation completes, I attach the
second disk and try to create a mirror but I cannot.

Here are the steps I go through:
1) install OpenSolaris onto 16GB disk
2) after successful install, shutdown, and attach second disk (also 
16GB)

3) fdisk -B
4) partition
5) zfs attach

Step 5 fails, giving a disk too small error.

What I noticed about the second disk is that it has a 9th partition
called alternates that takes up about 15MBs. This partition doesn't
exist in the first disk and I believe is what's causing the problem. I
can't figure out how to delete this partition and I don't know why
it's there. How do I mirror the root pool if I don't have both disks
attached during OpenSolaris installation? I realize I can just use a
disk larger than 16GBs, but that would be a waste.

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss



Re: [zfs-discuss] Comstar production-ready?

2009-03-04 Thread Jacob Ritorto
Caution:  I built a system like this and spent several weeks trying to
get iscsi share working under Solaris 10 u6 and older.  It would work
fine for the first few hours but then performance would start to
degrade, eventually becoming so poor as to actually cause panics on
the iscsi initiator boxes.  Couldn't find resolution through the
various Solaris knowledge bases.  Closest I got was to find out that
there's a problem only in the *Solaris 10* iscsi target code that
incorrectly frobs some counter when it shouldn't, violating the iscsi
target specifications.  The problem is fixed in Nevada/OpenSolaris.

Long story short, I tried OpenSolaris 2008.11 and the iscsi crashes
ceased and things ran smoothly.  Not the solution I was hoping for,
since this was to eventually be a prod box, but then Sun announced
that I could purchase OpenSolaris support, so I was covered.  On OS,
my two big filers have been running really nicely for months and
months now.

Don't try to use Solaris 10 as a filer OS unless you can identify and
resolve the iscsi target issue.



On Wed, Mar 4, 2009 at 2:47 AM, Scott Lawson scott.law...@manukau.ac.nz wrote:


 Stephen Nelson-Smith wrote:

 Hi,

 I recommended a ZFS-based archive solution to a client needing to have
 a network-based archive of 15TB of data in a remote datacentre.  I
 based this on an X2200 + J4400, Solaris 10 + rsync.

 This was enthusiastically received, to the extent that the client is
 now requesting that their live system (15TB data on cheap SAN and
 Linux LVM) be replaced with a ZFS-based system.

 The catch is that they're not ready to move their production systems
 off Linux - so web, db and app layer will all still be on RHEL 5.


 At some point I am sure you will convince them to see the light! ;)

 As I see it, if they want to benefit from ZFS at the storage layer,
 the obvious solution would be a NAS system, such as a 7210, or
 something buillt from a JBOD and a head node that does something
 similar.  The 7210 is out of budget - and I'm not quite sure how it
 presents its storage - is it NFS/CIFS?

 The 7000 series devices can present NFS, CIFS and iSCSI. Looks very nice if
 you need a nice GUI / don't know the command line / need the nice analytics.
 I had a play with one the other day and am hoping to get my mitts on one
 shortly for testing. I would like to give it a really good crack with VMware
 for VDI VMs.

  If so, presumably it would be
 relatively easy to build something equivalent, but without the
 (awesome) interface.


 For sure the above gear would be fine for that. If you use standard Solaris
 10 10/08 you have
 NFS and iSCSI ability directly in the OS and also available to be supported
 via a support contract
 if needed. Best bet would probably be NFS for the Linux machines, but you
 would need
 to test in *their* environment with *their* workload.

 The interesting alternative is to set up Comstar on SXCE, create
 zpools and volumes, and make these available either over a fibre
 infrastructure, or iSCSI.  I'm quite excited by this as a solution,
 but I'm not sure if it's really production ready.


 If you want a fibre channel target then you will need to use OpenSolaris or
 SXDE I believe. It's not available in mainstream Solaris yet. I am personally
 waiting until it has been *well* tested in the bleeding-edge community. I
 have too much data to take big risks with it.

 What other options are there, and what advice/experience can you share?


 I do very similar stuff here with J4500's and T2K's for compliance archives,
 NFS and iSCSI targets
 for Windows machines. Works fine for me. Biggest system is 48TB on J4500 for
 Veritas Netbackup
 DDT staging volumes. Very good throughput indeed. Perfect in fact, based on
 the large files that
 are created in this environment. One of these J4500's can keep 4 LTO4 drives
 in a SL500  saturated with
 data on a T5220. (4 streams at ~160 MB/sec)

 I think you have pretty much the right idea though. Certainly if you use Sun
 kit you will be able to deliver
 a commercially supported solution for them.
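
 If you do go the Comstar route on SXCE, the moving parts are only a handful of
 commands. A rough sketch from memory -- the zvol name is made up and you should
 check the exact syntax on whatever build you land on:

    svcadm enable stmf
    zfs create -V 100g tank/lun0
    sbdadm create-lu /dev/zvol/rdsk/tank/lun0    # register the zvol as a SCSI logical unit
    stmfadm list-lu -v                           # note the GUID it was assigned
    stmfadm add-view <guid from list-lu>         # export it to all initiators
    itadm create-target                          # stand up an iSCSI target
    svcadm enable -r svc:/network/iscsi/target:default

 That covers iSCSI; the fibre channel side uses the qlt target-mode driver and
 stmfadm views rather than itadm.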

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] destroy means destroy, right?

2009-01-29 Thread Jacob Ritorto
I like that, although it's a bit of an intelligence insulter.  Reminds
me of the old pdp11 install (
http://charles.the-haleys.org/papers/setting_up_unix_V7.pdf ) --

This step makes an empty file system.
6.The next thing to do is to restore the data onto the new empty
file system. To do this you respond
  to the ':' printed in the last step with
(bring in the program restor)
: tm(0,4)  ('ht(0,4)' for TU16/TE16)
tape? tm(0,5)  (use 'ht(0,5)' for TU16/TE16)
disk? rp(0,0)(use 'hp(0,0)' for RP04/5/6)
Last chance before scribbling on disk. (you type return)
(the tape moves, perhaps 5-10 minutes pass)
end of tape
Boot
:
  You now have a UNIX root file system.




On Thu, Jan 29, 2009 at 3:42 PM, Orvar Korvar
knatte_fnatte_tja...@yahoo.com wrote:
 Maybe add a timer or something? When doing a destroy, ZFS will keep 
 everything for 1 minute or so, before overwriting. This way the disk won't 
 get as fragmented. And if you had fat fingers and typed wrong, you have up to 
 one minute to undo. That will catch 80% of the mistakes?
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] destroy means destroy, right?

2009-01-28 Thread Jacob Ritorto
Hi,
I just said zfs destroy pool/fs, but meant to say zfs destroy
pool/junk.  Is 'fs' really gone?

thx
jake
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] Observation of Device Layout vs Performance

2009-01-06 Thread Jacob Ritorto
My OpenSolaris 2008.11 PC seems to attain better throughput with one big 
sixteen-device RAIDZ2 than with four stripes of 4-device RAIDZ.  I know it's by 
no means an exhaustive test, but catting /dev/zero to a file in the pool now 
frequently exceeds 600 megabytes per second, whereas before, with the striped 
RAIDZ, I was only occasionally peaking around 400MB/s.  The kit is a SuperMicro 
Intel 64-bit box, 2 sockets by 4 threads at 3 GHz, with two AOC MV8 boards and an 
800 MHz (iirc) FSB connecting 16 GB of RAM that runs at the same speed as the FSB.  
Cheap 7200 RPM Seagate SATA half-TB disks with 32MB cache.
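
For concreteness, the two layouts amount to (device names made up):

   zpool create tank raidz2 c2t0d0 c2t1d0 c2t2d0 c2t3d0 c2t4d0 c2t5d0 c2t6d0 c2t7d0 \
                            c3t0d0 c3t1d0 c3t2d0 c3t3d0 c3t4d0 c3t5d0 c3t6d0 c3t7d0

versus

   zpool create tank raidz c2t0d0 c2t1d0 c2t2d0 c2t3d0 \
                     raidz c2t4d0 c2t5d0 c2t6d0 c2t7d0 \
                     raidz c3t0d0 c3t1d0 c3t2d0 c3t3d0 \
                     raidz c3t4d0 c3t5d0 c3t6d0 c3t7d0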

Is this increase explicable / expected?  The throughput calculator sheet output 
I saw seemed to forecast better iops with the striped raidz vdevs and I'd read 
that, generally, throughput is augmented by keeping the number of vdevs in the 
single digits.  Is my superlative result perhaps related to the large cpu and 
memory bandwidth?

Just throwing this out for sake of discussion/sanity check..

thx
jake
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Observation of Device Layout vs Performance

2009-01-06 Thread Jacob Ritorto
Is urandom nonblocking?



On Tue, Jan 6, 2009 at 1:12 PM, Bob Friesenhahn
bfrie...@simple.dallas.tx.us wrote:
 On Tue, 6 Jan 2009, Keith Bierman wrote:

 Do you get the same sort of results from /dev/random?

 /dev/random is very slow and should not be used for benchmarking.

 Bob
 ==
 Bob Friesenhahn
 bfrie...@simple.dallas.tx.us, http://www.simplesystems.org/users/bfriesen/
 GraphicsMagick Maintainer,http://www.GraphicsMagick.org/


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Observation of Device Layout vs Performance

2009-01-06 Thread Jacob Ritorto
OK, so use a real io test program or at least pre-generate files large
enough to exceed RAM caching?
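
(i.e., something like

   dd if=/dev/urandom of=/tank/bigfile bs=1024k count=32768   # ~32GB, about 2x RAM, generated once up front
   dd if=/tank/bigfile of=/dev/null bs=1024k                  # then time the read pass

so the ARC can't just serve the whole thing back out of memory?)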



On Tue, Jan 6, 2009 at 1:19 PM, Bob Friesenhahn
bfrie...@simple.dallas.tx.us wrote:
 On Tue, 6 Jan 2009, Jacob Ritorto wrote:

 Is urandom nonblocking?

 The OS-provided random devices need to be secure, so they depend on
 collecting entropy from the system so that the random values are truly random.
 They also execute complex code to produce the random numbers. As a result,
 both of the random device interfaces are much slower than a disk drive.

 Bob
 ==
 Bob Friesenhahn
 bfrie...@simple.dallas.tx.us, http://www.simplesystems.org/users/bfriesen/
 GraphicsMagick Maintainer,http://www.GraphicsMagick.org/


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Observation of Device Layout vs Performance

2009-01-06 Thread Jacob Ritorto
I have that iozone program loaded, but its results were rather cryptic
for me.  Is it adequate if I learn how to decipher the results?  Can
it thread out and use all of my CPUs?



 Do you have tools to do random I/O exercises?

 --
 Darren
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Observation of Device Layout vs Performance

2009-01-06 Thread Jacob Ritorto
 Yes, iozone does support threading.  Here is a test with a record size of
 8KB, eight threads, synchronous writes, and a 2GB test file:

Multi_buffer. Work area 16777216 bytes
OPS Mode. Output is in operations per second.
Record Size 8 KB
SYNC Mode.
File size set to 2097152 KB
Command line used: iozone -m -t 8 -T -O -r 8k -o -s 2G
Time Resolution = 0.01 seconds.
Processor cache size set to 1024 Kbytes.
Processor cache line size set to 32 bytes.
File stride size set to 17 * record size.
Throughput test with 8 threads
Each thread writes a 2097152 Kbyte file in 8 Kbyte records

 When testing with iozone, you will want to make sure that the test file is
 larger than available RAM, such as 2X the size.

 Bob


OK, I ran it as suggested (using a 17GB file pre-generated from
urandom) and I'm getting what appear to be sane iozone results now.
Do we have a place to compare performance notes?

thx
jake
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] How to diagnose zfs - iscsi - nfs hang

2008-12-03 Thread Jacob Ritorto
Update:  It would appear that the bug I was complaining about nearly a 
year ago is still at play here:  
http://opensolaris.org/jive/thread.jspa?threadID=49372&tstart=0

Unfortunate Solution:  Ditch Solaris 10 and run Nevada.  The nice folks 
in the OpenSolaris project fixed the problem a long time ago.

This means that I can't have Sun support until Nevada becomes a 
real product, but it's better than having a silent failure every time 
6GB crosses the wire.  My big question is why won't they fix it in 
Solaris 10?  Sun's depriving themselves of my support revenue stream and 
I'm stuck with an unsupportable box as my core filer.  Bad situation on 
so many levels..  If it weren't for the stellar quality of the Nevada 
builds (b91 uptime=132 days now with no problems), I'd not be sleeping 
much at night..  Imagine my embarrassment had I taken the high road and 
spent the $$$ for a Thumper for this purpose..








___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] How to diagnose zfs - iscsi - nfs hang

2008-11-10 Thread Jacob Ritorto
Thanks for the reply and corroboration, Brent.  I just liveupgraded the machine 
from Solaris 10 u5 to Solaris 10 u6, which purports to have fixed all known 
issues with the Marvell device, and am still experiencing the hang.  So I guess 
this set of facts would imply one of:

1) they missed one, or
2) it's not a Marvell related problem.


Not sure where else to look for information about this.  Without further info, 
I guess I'm essentially forced to ditch production Solaris and stick with 
Nevada.  But that'd be a very blind, dismissive action on my part and I'd 
really rather find out what's at play here.

A little more background/ tangent:  The other filer we're running with this 
exact same feature set (simultaneous iSCSI and NFS sharing out of the same 
zpool), in production, is at Nevada b91 and it has never exhibited this flaw.  
My intention was to install an officially supported Solaris release on the new 
filer and zfs send everything from the old Nevada box to the new Solaris box to 
get to a position where I could purchase Sun support.   But now I'm obviously 
thinking that I can't do it.  We have like $12000 worth of Sun contracts here 
but haven't added the PC filers in yet because they're on Nevada and thus, I 
assumed, unsupportable.  Is that correct?  Or can I put a Nevada PC on Sun 
support?  (Yes, it's on the HCL.) (Sorry for the seemingly ot question here, 
but I do need to find out how to get Sun support on my zfs box, so it's at 
least *arguably* on-topic :)


One last thing I noticed was that the zfs version in Solaris 10 u6 is higher 
than that in u5.  Any chance that an upgrade of my zpool would enable the new 
features that would address this issue?
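
(Checking that part is cheap, at least -- something like:

   zpool upgrade -v      # list the pool versions this kernel supports
   zpool upgrade         # show which pools are below that
   zpool upgrade tank    # one-way: older kernels can't import the pool afterwards

-- though I'd be surprised if the on-disk pool version had anything to do with 
a sharing hang.)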

thx
jake
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] How to diagnose zfs - iscsi - nfs hang

2008-11-10 Thread Jacob Ritorto
It's a 64 bit dual processor 4 core Xeon kit.  16GB RAM.  Supermicro-Marvell 
SATA boards featuring the same S-ATA chips as the Sun x4500.
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] How to diagnose zfs - iscsi - nfs hang

2008-11-10 Thread Jacob Ritorto
FWIW: 

[EMAIL PROTECTED]:01#kstat vmem::heap
module: vmem                            instance: 1
name:   heap                            class:    vmem
alloc   25055
contains0
contains_search 0
crtime  0
fail0
free7187
lookup  356
mem_import  0
mem_inuse   1966219264
mem_total   1627733884928
populate_fail   0
populate_wait   0
search  14135
snaptime11240.434350896
vmem_source 0
wait0
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] How to diagnose zfs - iscsi - nfs hang

2008-11-07 Thread Jacob Ritorto
I have a PC server running Solaris 10 5/08 which seems to frequently become 
unable to share zfs filesystems via the shareiscsi and sharenfs options.  It 
appears, from the outside, to be hung -- all clients just freeze, and while 
they're able to ping the host, they're not able to transfer nfs or iSCSI data.  
They're in the same subnet and I've found no network problems thus far.  

After hearing so much about the Marvell problems I'm beginning to wonder if 
they're the culprit, though they're supposed to be fixed in 127128-11, which is 
the kernel I'm running.  

I have an exact hardware duplicate of this machine running Nevada b91 (iirc) 
that doesn't exhibit this problem.

There's nothing in /var/adm/messages and I'm not sure where else to begin.  
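
What I'm planning to try next time it wedges -- generic stuff, nothing specific 
to this bug:

   iostat -xnz 5          # are the disks still doing any I/O?
   svcs -xv               # did any services drop into maintenance?
   echo "::threadlist -v" | mdb -k > /var/tmp/threads.out

on the theory that the thread stacks will show whether the iscsi target daemon 
or the nfs server threads are stuck waiting on zfs.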

Would someone please help me in diagnosing this failure?  

thx
jake
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] resilver being killed by 'zpool status' when root

2008-10-21 Thread Jacob Ritorto
Pls pardon the off-topic question, but is there a Solaris backport of the fix?


On Tue, Oct 21, 2008 at 2:15 PM, Victor Latushkin
[EMAIL PROTECTED] wrote:
 Blake Irvin wrote:
 Looks like there is a closed bug for this:

 http://bugs.opensolaris.org/view_bug.do?bug_id=6655927

 It's been closed as 'not reproducible', but I can reproduce consistently on 
 Sol 10 5/08.  How can I re-open this bug?

 Have you tried to reproduce it with Nevada build 94 or later? Bug
 6655927 is closed as not reproducible because that part of the code was
 rewritten as part of fixing 6343667, and problem described in 6655927
 was not reproducible any longer.

 If you can reproduce it with build 94 or later, then the bug 6655927
 probably worth revisiting.

 victor
 ___
 zfs-discuss mailing list
 zfs-discuss@opensolaris.org
 http://mail.opensolaris.org/mailman/listinfo/zfs-discuss

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] mv iSCSI store to another zpool w/o losing original target name

2008-09-09 Thread Jacob Ritorto
Hi,
I made a zvol and set it up as a target like this:
[EMAIL PROTECTED]:19#zfs create -V20g Allika/joberg
[EMAIL PROTECTED]:19#zfs set shareiscsi=on Allika/joberg
[EMAIL PROTECTED]:19#iscsitadm list target
Target: Allika/joberg
iSCSI Name: iqn.1986-03.com.sun:02:085ec10a-16f7-e09d-968a-fc9101751a08
Connections: 0

I then told my initiator(s) its name and connected successfully,
copied some stuff in.  All was well.

But now I wish to consolidate pools on the iSCSI target host and need
to move the joberg zvol to the new pool.  I tried to send|receive it
over, but the properties apparently didn't come along.  Undaunted, I
set the shareiscsi property again, same way as before, but of course
got a different target name.  Then I couldn't find a way to change it
back.  I needed to have the same target name to avoid having to chase
down and rework the settings on other box(en), bounce iscsi stacks,
remount volumes, reboot, reinstall, etc etc etc. (yes I'm being
dramatic here, but they are crabby, mean old Novell boxes that are
less angry the less they're touched :) .  Is there a way to accomplish
this with a zfs command (keep old target name after moving zvol)?  I
RTFMed but didn't see any mention of a way to do this.
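
What I tried was essentially (new pool name made up):

   zfs snapshot Allika/joberg@move
   zfs send Allika/joberg@move | zfs receive newpool/joberg

A plain send like that doesn't carry properties; a replication send (zfs send -R 
on builds that have it) is supposed to bring locally set properties along, but I 
don't know whether the iscsitgt target name survives that, which is really the 
question.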

thx
jake
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Best layout for 15 disks?

2008-08-22 Thread Jacob Ritorto
While on the subject, in a home scenario where one actually notices
the electric bill personally, is it more economical to purchase one big,
expensive 1TB disk and save on electricity running it for five years, or to
purchase two cheap 1/2TB disks and spend double on electricity for them
for five years?  Has anyone calculated this?
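
A rough back-of-envelope, with every number an assumption: a 7200 RPM 3.5" SATA 
drive draws something like 8W spinning idle, so

   8 W x 8760 h/yr = ~70 kWh/yr = ~$7/yr at $0.10/kWh = ~$35 over five years

per extra spindle -- which is in the same ballpark as the current price gap 
between one 1TB drive and two 500GB drives, so it looks like roughly a wash 
unless your power is a lot more expensive than that.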

If this is too big a turn for this thread, let's start a new one
and/or perhaps find an appropriate forum.

thx
jake

On Fri, Aug 22, 2008 at 1:14 PM, Chris Cosby [EMAIL PROTECTED] wrote:


 On Fri, Aug 22, 2008 at 1:08 PM, mike [EMAIL PROTECTED] wrote:

 It looks like this will be the way I do it:

 initially:
 zpool create mypool raidz2 disk0 disk1 disk2 disk3 disk4 disk5 disk6 disk7

 when I need more space and buy 8 more disks:
 zpool add mypool raidz2 disk8 disk9 disk10 disk11 disk12 disk13 disk14
 disk15

 Correct?


  Enable compression, and set up multiple raidz2 groups.  Depending on
  what you're storing, you may get back more than you lose to parity.

 It's DVD backups and media files. Probably everything has already been
 compressed pretty well by the time it hits ZFS.

  That's a lot of spindles for a home fileserver.   I'd be inclined to go
  with a smaller number of larger disks in mirror pairs, allowing me to
  buy larger disks in pairs as they come on the market to increase
  capacity.

 Or do smaller groupings of raidz1's (like 3 disks) so I can remove
 them and put 1.5TB disks in when they come out for instance?

 Somebody correct me if I'm wrong. ZFS (early versions) did not support
 removing vdevs from a pool. It was a future feature. Is it done yet?


 ___
 zfs-discuss mailing list
 zfs-discuss@opensolaris.org
 http://mail.opensolaris.org/mailman/listinfo/zfs-discuss



 --
 chris -at- microcozm -dot- net
 === Si Hoc Legere Scis Nimium Eruditionis Habes

 ___
 zfs-discuss mailing list
 zfs-discuss@opensolaris.org
 http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] 24-port SATA controller options?

2008-06-30 Thread Jacob Ritorto
I bought similar kit from them, but when I received the machine,
uninstalled, I looked at the install manual for the Areca card and
found that it's a manual driver add that is documented to
_occasionally hang_ and you have to _kill it off manually_ if it does.
 I'm really not having that in a production server, so as soon as I
saw this, I asked them (SiMech) for an alternative, production worthy
solution, but they've not yet responded at all (after three
emailings).  The pre-sale communication, however, was superb.  Perhaps
this sort of driver munging behaviour is perfectly acceptable in the
ms/linux world and they therefore think I'm being too fussy?  Hmm.

Anyway, to get this Areca card as far away from me as possible, I plan
to either go with a small zfs CompactFlash array on the mobo ide
channels (or) just use one channel of each of my two
supermicro/marvell boards.

Any opinions out there on that plan, btw?  This is slated to be a
first tier production file/iscsi server (!)..

Sorry if this is drifting too far off topic, but I really would have
appreciated this sort of info when searching for a zfs hw solution
smaller than thumper.

 That said, why, oh why, does Sun not offer a Niagara board on a 16
disk chassis for $1 for our little corner of the market?  Dealing
with the concept of using PCs in production has been absolutely
horrifying thus far.

thx
jake


On Fri, Jun 27, 2008 at 3:59 PM, Blake Irvin [EMAIL PROTECTED] wrote:
 We are currently using the 2-port Areca card SilMech offers for boot, and 2 
 of the Supermicro/Marvell cards for our array.  Silicon Mechanics gave us 
 great support and burn-in testing for Solaris 10.  Talk to a sales rep there 
 and I don't think you will be disappointed.

 cheers,
 Blake


 This message posted from opensolaris.org
 ___
 zfs-discuss mailing list
 zfs-discuss@opensolaris.org
 http://mail.opensolaris.org/mailman/listinfo/zfs-discuss

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] 24-port SATA controller options?

2008-04-15 Thread Jacob Ritorto
Right, a nice depiction of the failure modes involved and their
probabilities based on typical published mtbf of components and other
arguments/caveats, please?  Does anyone have the cycles to actually
illustrate this or have urls to such studies?
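
Even a crude version would help.  Something along these lines, with every figure 
assumed purely for illustration: take 22 x 1TB consumer SATA drives with a 
published unrecoverable-read-error rate of 1 in 10^14 bits (about one bad sector 
per ~12.5 TB read).  Rebuilding after losing two drives in a raid-6 set means 
reading the ~20 TB on the surviving drives, so you'd expect on the order of 1-2 
UREs during the rebuild -- better-than-even odds of hitting one at the exact 
moment there's no redundancy left to cover it.  That's the shape of the 
argument; what I'm after is the same thing done carefully with real MTBF and 
rebuild-time numbers.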

On Tue, Apr 15, 2008 at 1:03 PM, Keith Bierman [EMAIL PROTECTED] wrote:


 On Apr 15, 2008, at 10:58 AM, Tim wrote:



 On Tue, Apr 15, 2008 at 10:09 AM, Maurice Volaski [EMAIL PROTECTED]
 wrote:
  I have 16 disks in RAID 5 and I'm not worried.
 
 
  I'm sure you're already aware, but if not, 22 drives in a raid-6 is
  absolutely SUICIDE when using SATA disks.  12 disks is the upper end of
 what
  you want even with raid-6.  The odds of you losing data in a 22 disk
 raid-6
  is far too great to be worth it if you care about your data.  /rant
 


 You could also be driving your car down the freeway at 100mph drunk, high,
 and without a seatbelt on and not be worried.  The odds will still be
 horribly against you.


 Perhaps providing the computations rather than the conclusions would be more
 persuasive  on a technical list ;

 --
 Keith H. Bierman   [EMAIL PROTECTED]  | AIM kbiermank
 5430 Nassau Circle East  |
 Cherry Hills Village, CO 80113   | 303-997-2749
 speaking for myself* Copyright 2008





 ___
  zfs-discuss mailing list
  zfs-discuss@opensolaris.org
  http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Nice chassis for ZFS server

2008-03-17 Thread Jacob Ritorto
Hi all,
Did anyone ever confirm whether this ssr212 box, without hardware raid 
option, works reliably under OpenSolaris without fooling around with external 
drivers, etc.?  I need a box like this, but can't find a vendor that will give 
me a try & buy.  (Yes, I'm spoiled by Sun).

thx
jake
 
 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss