[OmniOS-discuss] Moving a zfs pool from one server to another with less memory.

2013-10-30 Thread Svavar Örn Eysteinsson

Hi.

Is there any special configuration or setup that I need to take into
consideration?

I'm currently moving a 14TB ZFS pool from an outdated OpenIndiana server with
16GB of RAM.
The new server will only have 8GB of RAM, and will be running OmniOS.
8GB is also that machine's maximum, so extending the RAM is not an option.

I also plan to extend the pool to at least 20TB.
It only serves as an archive server; not much client activity, only rsync and
archival work.
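
The move itself I assume is just the usual export/import, e.g. (pool
name below is only a placeholder):

# zpool export tank    (on the old OpenIndiana box)
# zpool import tank    (on the new OmniOS box)

...unless there is a better way?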


Any help and/or information is much appreciated.

Thanks a lot.

Best regards,

Svavar
Reykjavik - Iceland




Re: [OmniOS-discuss] Moving a zfs pool from one server to another with less memory.

2013-10-30 Thread Jim Klimov

On 2013-10-30 16:44, Svavar Örn Eysteinsson wrote:


 Hi.

 Is there any special configuration or setup that I need to take into
 consideration?

 I'm currently moving a 14TB ZFS pool from an outdated OpenIndiana server
 with 16GB of RAM.
 The new server will only have 8GB of RAM, and will be running OmniOS.
 8GB is also that machine's maximum, so extending the RAM is not an option.

 I also plan to extend the pool to at least 20TB.
 It only serves as an archive server; not much client activity, only rsync
 and archival work.


First of all, make sure that your RAM is ECC (my limited NAS at home
was based on desktop hardware with non-ECC memory, which may have been
the cause of some of my problems - with no good way to check). Also,
this amount of memory is far too small for dedup, for example.

Otherwise, especially for an archival system with little random IO
(scrubbing aside), it might not matter much... Basically, it all
revolves around the working-set (hot data) size: you want the data
or metadata you use most often to be speedily available in RAM.
There are rough estimates like 1GB of RAM per TB of storage, but the
exact numbers depend on the system's usage and data layout (i.e. the
average number of blocks per TB).
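
A few quick sanity checks (a sketch - "tank" below is just a
placeholder for your pool name):

# zfs get dedup tank
  (make sure dedup stays off on an 8GB box)
# kstat -p zfs:0:arcstats:size
  (current ARC size, in bytes)

And if you ever need to cap the ARC so rsync and the rest of the OS
have headroom, the usual /etc/system tunable applies, e.g.:

set zfs:zfs_arc_max = 0x100000000
  (a 4GB cap; takes effect after reboot - pick what fits your box)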

HTH,
//Jim



Re: [OmniOS-discuss] Re-installing OmniOS after Crash, Errors with pkg [Subject Edited]

2013-10-30 Thread Chris Nehren
On Wed, Oct 30, 2013 at 18:16:40 +0530, Sam M wrote:
 Hi Eric,
 
 The issue I'm having is that I had Bloody up and running and then my HD
 crashed. While on bloody, I upgraded my 8 TB data zpool. Now, the same
 zpool is not accessible when I'm running the release version of OmniOS. The
 issue I mentioned in my original email seems minor but it's causing other
 issues.
 
 I understand bloody is not supposed to be stable, but I can access the data
 zpool in readonly mode.
 
 So I was hoping for a way to fix the pkg issue, or I guess I'll wait for
 the updates to finish...

If you install pkg@0.5.11,5.11-0.151007:20131027T234759Z (an
unsigned version that supports signing), you should be able to
use that as an interstitial version from which to do the rest of
the upgrade. Let me know if this doesn't work for you.
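
In other words, roughly (a sketch; the FMRI is the one above, followed
by a normal image update):

# pkg install pkg@0.5.11,5.11-0.151007:20131027T234759Z
# pkg update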

-- 
Chris Nehren
Site Reliability Engineer, OmniTI


[OmniOS-discuss] multipath problem when replacing a failed SAS drive

2013-10-30 Thread Kevin Swab
Hello,

I'm running OmniOS r151006p on the following system:

- Supermicro X8DT6 board, Xeon E5606 CPU, 48GB RAM
- Supermicro SC847 chassis, 36 drive bays, SAS expanders, LSI 9211-8i
controller
- 34 x Toshiba 3T SAS drives MG03SCA300 in one pool w/ 16 mirrored sets
+ 2 hot spares

'mpathadm list lu' showed all drives as having two paths to the controller.

Yesterday, one of the drives failed and was replaced.  The new drive is
only showing one path in mpathadm, and errors have started showing up
periodically in /var/adm/messages:



# mpathadm list lu /dev/rdsk/c1t539478CA7150d0
mpath-support:  libmpscsi_vhci.so
/dev/rdsk/c1t539478CA7150d0s2
Total Path Count: 1
Operational Path Count: 1

Oct 30 09:30:22 hagler scsi: [ID 243001 kern.warning] WARNING:
/pci@0,0/pci8086,3410@9/pci1000,3020@0 (mpt_sas0):
Oct 30 09:30:22 hagler  mptsas_handle_event_sync: IOCStatus=0x8000,
IOCLogInfo=0x31120101
Oct 30 09:30:22 hagler scsi: [ID 243001 kern.warning] WARNING:
/pci@0,0/pci8086,3410@9/pci1000,3020@0 (mpt_sas0):
Oct 30 09:30:22 hagler  mptsas_handle_event: IOCStatus=0x8000,
IOCLogInfo=0x31120101
Oct 30 09:30:22 hagler scsi: [ID 365881 kern.info]
/pci@0,0/pci8086,3410@9/pci1000,3020@0 (mpt_sas0):
Oct 30 09:30:22 hagler  Log info 0x31120101 received for target 89.
Oct 30 09:30:22 hagler  scsi_status=0x0, ioc_status=0x804b, scsi_state=0xc
[ ... the same three messages repeat several times ... ]



The error messages refer to target 89, which (using lsiutil) I can
confirm corresponds to the missing path for my replacement drive:



# lsiutil -p 1 16

LSI Logic MPT Configuration Utility, Version 1.63, June 4, 2009

1 MPT Port found

 Port Name Chip Vendor/Type/RevMPT Rev  Firmware Rev  IOC
 1.  mpt_sas0  LSI Logic SAS2008 03  200  0d000100 0

SAS2008's links are 6.0 G, 6.0 G, 6.0 G, 6.0 G, 6.0 G, 6.0 G, 6.0 G, 6.0 G

 B___T SASAddress PhyNum  Handle  Parent  Type
[ ... cut ... ]
 0  89  539478ca7152    17    0059    0032   SAS Target
 0  90  539478ca7153    17    005a    000a   SAS Target
[ ... cut ... ]



When I ask lsiutil to rescan the bus, I see the following error when
it gets to target 89:



# lsiutil -p 1 8

LSI Logic MPT Configuration Utility, Version 1.63, June 4, 2009

1 MPT Port found

 Port Name Chip Vendor/Type/RevMPT Rev  Firmware Rev  IOC
 1.  mpt_sas0  LSI Logic SAS2008 03  200  0d000100 0

SAS2008's links are 6.0 G, 6.0 G, 6.0 G, 6.0 G, 6.0 G, 6.0 G, 6.0 G, 6.0 G

 B___T___L  Type   Vendor   Product  Rev
[ ... cut ... ]
ScsiIo to Bus 0 Target 89 failed, IOCStatus = 004b (IOC Terminated)
 0  90   0  Disk   TOSHIBA  MG03SCA300   0108  539478ca7153    17
[ ... cut ... ]



This problem has happened to me once before on a similar system.  At
that time I tried reseating the drive and tried several different
replacement drives; all had the same issue.  I even tried rebooting the
system, and that didn't help.

Does anyone know how I can clear this issue up?  I'd be happy to provide
any additional information.
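
FWIW, the only non-destructive knobs I know of beyond another reboot
would be something like (not yet tried here, so treat this as a sketch):

# devfsadm -Cv       (clean up stale /dev links and rebuild them)
# cfgadm -al         (list attachment points and their states)

but I'm not sure they help with a path that the HBA firmware itself
reports as dead.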

[OmniOS-discuss] hw recommendation for JBOD box

2013-10-30 Thread Tobias Oetiker
We are looking at buying a system consisting of a 1U server and 1
or more 3U JBOD boxes to run an OmniOS ZFS server. Any HW
recommendations for the JBOD box, controller, or disks? What are you
using?

cheers
tobi


-- 
Tobi Oetiker, OETIKER+PARTNER AG, Aarweg 15 CH-4600 Olten, Switzerland
http://it.oetiker.ch t...@oetiker.ch ++41 62 775 9902 / sb: -9900


Re: [OmniOS-discuss] hw recommendation for JBOD box

2013-10-30 Thread Bryan Horstmann-Allen
+--
| On 2013-10-30 14:29:06, Eric Sproul wrote:
| 
| Tobi,
| I'd recommend reviewing the parts lists that Joyent publishes:
| https://github.com/joyent/manufacturing
| 
| See the parts_database.ods file.  It's mostly Supermicro kit, and what
| I find very helpful is the lists of BIOS and firmware revisions within
| those parts.

I've also had very good experience with Fujitsu RX200 and RX300s (using
IT-flashed non-Fujitsu non-RAID LSI HBAs) and their JX40 JBODs.

(We have ~40 or so of those systems, and half a dozen JX40s.)

iRMC Enterprise is actually comparatively nice to use, too.

Cheers.
-- 
bdha
cyberpunk is dead. long live cyberpunk.


Re: [OmniOS-discuss] Physical slot based disk names for LSI SAS on OmniOS?

2013-10-30 Thread Eric Sproul
On Wed, Oct 30, 2013 at 4:25 PM, Chris Siebenmann c...@cs.toronto.edu wrote:
  This is a long shot and I suspect that the answer is no, but: in
 OmniOS, is it possible somehow to have disk device names for disks
 behind LSI SAS controllers that are based on the physical slot ('phy')
that the disk is found in, instead of the disk's reported WWN or serial
 number?

Chris,
I'm not 100% sure, but I think the integration of
https://www.illumos.org/issues/4018 will give you what you want.  This
is currently available in r151007 but will be part of the upcoming
r151008 release.

Issues 4016-4019 are all related, in fact, and are all in r151007 at this time.

Eric


Re: [OmniOS-discuss] hw recommendation for JBOD box

2013-10-30 Thread Michael Rasmussen
On Wed, 30 Oct 2013 23:05:11 +0100 (CET)
Tobias Oetiker t...@oetiker.ch wrote:

 
 http://www.supermicro.com/products/system/4u/6047/ssg-6047r-e1r36l.cfm
 
 filled with UltraStar 7k3000 and a ZeusRam 8GB as Zil
 
 and 256 GB ram

Nice rig ;-)

 
 nice :-) no jbod though as far as I can see ... so maybe jbod is
 not such a good idea ...
 
Point 6:  Onboard LSI 2308 in IT mode. For LSI HBAs, IT mode means JBOD
mode - and JBOD mode is the only good mode for ZFS! The non-IT-mode
HBAs, where you create a RAID0 for each disk, are simply horrible.
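
If in doubt about which firmware a given LSI HBA is running, the flash
utility that ships with LSI's firmware bundles can tell you (a sketch -
output details vary by version):

# sas2flash -listall
  (the firmware product id / version line shows IT vs. IR)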

-- 
Hilsen/Regards
Michael Rasmussen

Get my public GnuPG keys:
michael at rasmussen dot cc
http://pgp.mit.edu:11371/pks/lookup?op=get&search=0xD3C9A00E
mir at datanom dot net
http://pgp.mit.edu:11371/pks/lookup?op=get&search=0xE501F51C
mir at miras dot org
http://pgp.mit.edu:11371/pks/lookup?op=get&search=0xE3E80917
--
I've got a very bad feeling about this.
-- Han Solo




Re: [OmniOS-discuss] hw recommendation for JBOD box

2013-10-30 Thread Johan Kragsterman
Tobi!

On the discussion about JBODs and expanders: for me, JBOD is the best
solution, at least for spinning disks. Where you place your cache is another
question. But for the JBOD, you need to ask yourself whether you want
expanders or not. That of course depends on your environment, the performance
needed, your workload, etc.

JBOD, for me, is the best solution because of the flexibility. It's easy to
move it to another storage head.

There was a discussion on this list a couple of months ago where someone
mentioned expanderless JBODs, I believe from Supermicro. I don't recall who
mentioned them, perhaps it was Richard Elling? With these you can get all the
bandwidth you like, so it's like having internal disks. I would like someone
with experience of these to comment.

Otherwise you normally wouldn't need direct links to all of your drives,
especially if you have a good cache implementation; rough numbers below.
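
To put rough numbers on the expander question (back-of-envelope,
assuming SAS-2 links and ~150 MB/s streaming per spinning disk):

  1 SAS-2 lane:    6 Gb/s, ~600 MB/s after 8b/10b encoding
  1 x4 wide port:  4 x 600 MB/s = 2400 MB/s, shared by every drive
                   behind the expander
  36 drives:       36 x 150 MB/s = 5400 MB/s aggregate streaming

So a single x4 uplink through an expander is oversubscribed more than
2:1 for pure streaming, while an expanderless (direct-attach) backplane
gives each drive its own lane. For a mostly idle box that rarely
matters; during scrubs and resilvers it shows.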
If I were in your situation, I would also consider a 2U storage head server
instead of a 1U, for two reasons: 1: the cache - you want to be able to
expand it. 2: expansion in general - you want good expansion possibilities in
the chassis and on the motherboard (slots), to be able to connect more
JBODs. In a 1U chassis your expansion possibilities are very limited.

Rgrds Johan




-OmniOS-discuss omnios-discuss-boun...@lists.omniti.com wrote: -
To: Eric Sproul espr...@omniti.com
From: Tobias Oetiker 
Sent by: OmniOS-discuss 
Date: 2013.10.30 23:06
Cc: omnios-discuss omnios-discuss@lists.omniti.com
Subject: Re: [OmniOS-discuss] hw recommendation for JBOD box

Hi Eric,

Today Eric Sproul wrote:

 Tobi,
 I'd recommend reviewing the parts lists that Joyent publishes:
 https://github.com/joyent/manufacturing

so that would be this

http://www.supermicro.com/products/system/4u/6047/ssg-6047r-e1r36l.cfm

filled with UltraStar 7k3000 and a ZeusRam 8GB as Zil

and 256 GB ram

nice :-) no jbod though as far as I can see ... so maybe jbod is
not such a good idea ...

cheers
tobi


-- 
Tobi Oetiker, OETIKER+PARTNER AG, Aarweg 15 CH-4600 Olten, Switzerland
http://it.oetiker.ch t...@oetiker.ch ++41 62 775 9902 / sb: -9900





Re: [OmniOS-discuss] hw recommendation for JBOD box

2013-10-30 Thread Tobias Oetiker
Yesterday Michael Rasmussen wrote:

 On Wed, 30 Oct 2013 23:05:11 +0100 (CET)
 Tobias Oetiker t...@oetiker.ch wrote:

 
  http://www.supermicro.com/products/system/4u/6047/ssg-6047r-e1r36l.cfm
 
  filled with UltraStar 7k3000 and a ZeusRam 8GB as Zil
 
  and 256 GB ram

 Nice rig ;-)

 
  nice :-) no jbod though as far as I can see ... so maybe jbod is
  not such a good idea ...
 
 Point 6:  Onboard LSI 2308 in IT mode. For LSI HBAs, IT mode means JBOD
 mode - and JBOD mode is the only good mode for ZFS! The non-IT-mode
 HBAs, where you create a RAID0 for each disk, are simply horrible.

sure, jbod mode, I meant having an external jbod box attached

cheers
tobi



-- 
Tobi Oetiker, OETIKER+PARTNER AG, Aarweg 15 CH-4600 Olten, Switzerland
http://it.oetiker.ch t...@oetiker.ch ++41 62 775 9902 / sb: -9900