On 28.05.12 00:35, Richard Elling wrote:
On May 27, 2012, at 12:52 PM, Stephan Budach wrote:
Hi,
today I issued a scrub on one of my zpools, and after some time I
noticed that one of the vdevs had become degraded due to a drive
having cksum errors. The spare kicked in and the drive got
On 05/ 7/12 04:08 PM, Ian Collins wrote:
On 05/ 7/12 03:42 PM, Greg Mason wrote:
I am currently trying to get two of these things running Illumian. I don't have
any particular performance requirements, so I'm thinking of using some sort of
supported hypervisor (either RHEL and KVM or VMware
On 05/28/2012 10:48 AM, Ian Collins wrote:
To follow up, the H310 appears to be useless in non-raid mode.
The drives do show up in Solaris 11 format, but they show up as
unknown, unformatted drives. One oddity is the box has two SATA
SSDs which also show up in the card's BIOS, but present OK
On 05/28/12 08:55 PM, Sašo Kiselkov wrote:
On 05/28/2012 10:48 AM, Ian Collins wrote:
To follow up, the H310 appears to be useless in non-raid mode.
The drives do show up in Solaris 11 format, but they show up as
unknown, unformatted drives. One oddity is the box has two SATA
SSDs which also
On 05/28/2012 11:48 AM, Ian Collins wrote:
On 05/28/12 08:55 PM, Sašo Kiselkov wrote:
On 05/28/2012 10:48 AM, Ian Collins wrote:
To follow up, the H310 appears to be useless in non-raid mode.
The drives do show up in Solaris 11 format, but they show up as
unknown, unformatted drives. One
On 05/28/12 10:53 PM, Sašo Kiselkov wrote:
On 05/28/2012 11:48 AM, Ian Collins wrote:
On 05/28/12 08:55 PM, Sašo Kiselkov wrote:
On 05/28/2012 10:48 AM, Ian Collins wrote:
To follow up, the H310 appears to be useless in non-raid mode.
The drives do show up in Solaris 11 format, but they show
On 05/28/2012 12:59 PM, Ian Collins wrote:
On 05/28/12 10:53 PM, Sašo Kiselkov wrote:
On 05/28/2012 11:48 AM, Ian Collins wrote:
On 05/28/12 08:55 PM, Sašo Kiselkov wrote:
On 05/28/2012 10:48 AM, Ian Collins wrote:
To follow up, the H310 appears to be useless in non-raid mode.
The drives do
On 05/28/12 11:01 PM, Sašo Kiselkov wrote:
On 05/28/2012 12:59 PM, Ian Collins wrote:
On 05/28/12 10:53 PM, Sašo Kiselkov wrote:
On 05/28/2012 11:48 AM, Ian Collins wrote:
On 05/28/12 08:55 PM, Sašo Kiselkov wrote:
On 05/28/2012 10:48 AM, Ian Collins wrote:
To follow up, the H310 appears to
On 05/28/2012 01:12 PM, Ian Collins wrote:
On 05/28/12 11:01 PM, Sašo Kiselkov wrote:
On 05/28/2012 12:59 PM, Ian Collins wrote:
On 05/28/12 10:53 PM, Sašo Kiselkov wrote:
On 05/28/2012 11:48 AM, Ian Collins wrote:
On 05/28/12 08:55 PM, Sašo Kiselkov wrote:
On 05/28/2012 10:48 AM, Ian
Hi folks,
Looking to get some larger drives for one of my boxes. It runs
ZFS exclusively and has been using Seagate 2TB units up until now (which
have 512-byte sectors).
Can anyone offer suggestions for either 3 or, preferably, 4TB drives that
actually work well with ZFS out of the box? (And not
On Mon, May 28, 2012 at 6:48 AM, Nathan Kroenert nat...@tuneunix.com wrote:
Can anyone offer suggestions for either 3 or, preferably, 4TB drives that
actually work well with ZFS out of the box? (And not perform like
rubbish)...
With our NCP 3 boxes the WD drives seem to be working okay (this is
Hi All
I would like to create a sample application for ZFS using C++/C and libzfs.
I am very new to ZFS and would like some information about the ZFS
API.
Even some sample code would be useful.
Looking for help and constructive suggestions.
Thanks and regards,
Dhiraj Bhandare
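A minimal starting point for the above: libzfs is a private, unstable
interface on Solaris/illumos and can change between releases, so treat the
following as a sketch rather than a supported API. It assumes an illumos-era
libzfs.h and simply lists the pools imported on the system.

/*
 * Minimal libzfs example: open a library handle and list imported pools.
 * Build (illumos / Solaris 11 era): cc -o listpools listpools.c -lzfs -lnvpair
 * NOTE: libzfs is a private, unstable API; this is illustration only.
 */
#include <stdio.h>
#include <libzfs.h>

/* Callback for zpool_iter(): print the pool name, return 0 to keep iterating. */
static int
print_pool(zpool_handle_t *zhp, void *data)
{
	(void) data;
	(void) printf("pool: %s\n", zpool_get_name(zhp));
	zpool_close(zhp);	/* the callback owns the handle */
	return (0);
}

int
main(void)
{
	libzfs_handle_t *g_zfs = libzfs_init();

	if (g_zfs == NULL) {
		(void) fprintf(stderr, "failed to initialize libzfs\n");
		return (1);
	}
	libzfs_print_on_error(g_zfs, B_TRUE);

	if (zpool_iter(g_zfs, print_pool, NULL) != 0)
		(void) fprintf(stderr, "zpool_iter failed: %s\n",
		    libzfs_error_description(g_zfs));

	libzfs_fini(g_zfs);
	return (0);
}

The same init/iterate/close pattern is what the zpool and zfs commands
themselves use, so usr/src/cmd/zpool and usr/src/cmd/zfs in the illumos tree
are probably the best source of further examples.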
I'm getting sub-optimal performance with an mmap-based database
(mongodb) which is running on ZFS on Solaris 10u9.
System is Sun-Fire X4270-M2 with 2xX5680 and 72GB (6 * 8GB + 6 * 4GB)
RAM (installed so it runs at 1333MHz) and 2 * 300GB 15K RPM disks
- a few mongodb instances are running
On 05/28/12 20:06, Iwan Aucamp wrote:
I'm getting sub-optimal performance with an mmap-based database
(mongodb) which is running on ZFS on Solaris 10u9.
System is Sun-Fire X4270-M2 with 2xX5680 and 72GB (6 * 8GB + 6 * 4GB)
RAM (installed so it runs at 1333MHz) and 2 * 300GB 15K RPM disks
-
On Mon, May 28, 2012 at 9:06 PM, Iwan Aucamp aucam...@gmail.com wrote:
I'm getting sub-optimal performance with an mmap-based database (mongodb)
which is running on ZFS on Solaris 10u9.
System is Sun-Fire X4270-M2 with 2xX5680 and 72GB (6 * 8GB + 6 * 4GB) RAM
(installed so it runs at 1333MHz)
On May 28, 2012, at 12:46 PM, Lionel Cons wrote:
On Mon, May 28, 2012 at 9:06 PM, Iwan Aucamp aucam...@gmail.com wrote:
I'm getting sub-optimal performance with an mmap-based database (mongodb)
which is running on ZFS on Solaris 10u9.
System is Sun-Fire X4270-M2 with 2xX5680 and 72GB (6 *
On 05/28/2012 10:12 PM, Andrew Gabriel wrote:
On 05/28/12 20:06, Iwan Aucamp wrote:
I'm thinking of doing the following:
- relocating mmapped (mongo) data to a ZFS filesystem with only
metadata cache
- reducing the ZFS ARC cache to 16 GB
Are there any other recommendations - and is above
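The usual way to do the first item is simply "zfs set primarycache=metadata
<dataset>" from the shell. Tying it to the libzfs question elsewhere in this
batch, the same thing through the library looks roughly like the sketch below;
the dataset name tank/mongo is made up, and libzfs remains a private, unstable
interface.

/*
 * Sketch: set primarycache=metadata on one dataset through libzfs.
 * Equivalent to: zfs set primarycache=metadata tank/mongo
 * "tank/mongo" is a hypothetical name; adjust to the real dataset.
 * Build: cc -o setcache setcache.c -lzfs -lnvpair
 */
#include <stdio.h>
#include <libzfs.h>

int
main(void)
{
	libzfs_handle_t *g_zfs = libzfs_init();
	zfs_handle_t *zhp;
	int ret = 1;

	if (g_zfs == NULL) {
		(void) fprintf(stderr, "failed to initialize libzfs\n");
		return (1);
	}

	/* Open the filesystem to tune (hypothetical dataset name). */
	zhp = zfs_open(g_zfs, "tank/mongo", ZFS_TYPE_FILESYSTEM);
	if (zhp == NULL) {
		(void) fprintf(stderr, "zfs_open: %s\n",
		    libzfs_error_description(g_zfs));
	} else {
		/* Keep only metadata for this dataset in the ARC. */
		if (zfs_prop_set(zhp, "primarycache", "metadata") == 0)
			ret = 0;
		else
			(void) fprintf(stderr, "zfs_prop_set: %s\n",
			    libzfs_error_description(g_zfs));
		zfs_close(zhp);
	}

	libzfs_fini(g_zfs);
	return (ret);
}

Capping the ARC (the second item) is a different knob: on Solaris 10 that is,
as far as I know, a zfs_arc_max line in /etc/system plus a reboot, not
something libzfs exposes.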
Hi Dhiraj,
On May 27, 2012, at 11:28 PM, Dhiraj Bhandare wrote:
Hi All
I would like to create a sample application for ZFS using C++/C and libzfs.
I am very new to ZFS and would like some information about the ZFS API.
Even some sample code would be useful.
Looking for help and
question below...
On May 28, 2012, at 1:25 PM, Iwan Aucamp wrote:
On 05/28/2012 10:12 PM, Andrew Gabriel wrote:
On 05/28/12 20:06, Iwan Aucamp wrote:
I'm thinking of doing the following:
- relocating mmapped (mongo) data to a ZFS filesystem with only
metadata cache
- reducing the ZFS ARC
On May 28, 2012, at 5:48 AM, Nathan Kroenert wrote:
Hi folks,
Looking to get some larger drives for one of my boxes. It runs
ZFS exclusively and has been using Seagate 2TB units up until now (which
have 512-byte sectors).
Can anyone offer suggestions for either 3 or, preferably, 4TB drives
On May 28, 2012, at 2:48 AM, Ian Collins wrote:
On 05/28/12 08:55 PM, Sašo Kiselkov wrote:
On 05/28/2012 10:48 AM, Ian Collins wrote:
To follow up, the H310 appears to be useless in non-raid mode.
The drives do show up in Solaris 11 format, but they show up as
unknown, unformatted drives.
On 28 May 2012 22:10, Richard Elling richard.ell...@gmail.com wrote:
The only recommendation which will lead to results is to use a
different OS or filesystem. Your choices are:
- FreeBSD with ZFS
- Linux with BTRFS
- Solaris with QFS
- Solaris with UFS
- Solaris with NFSv4, use ZFS on
[Apologies to the list: this has expanded past ZFS; if someone complains, we can
move the thread to another illumos dev list.]
On May 28, 2012, at 2:18 PM, Lionel Cons wrote:
On 28 May 2012 22:10, Richard Elling richard.ell...@gmail.com wrote:
The only recommendation which will lead to results
On 05/29/12 08:32 AM, Richard Elling wrote:
Hi Dhiraj,
On May 27, 2012, at 11:28 PM, Dhiraj Bhandare wrote:
Hi All
I would like to create a sample application for ZFS using C++/C and
libzfs.
I am very new to ZFS and would like some information about
the ZFS API.
Even some sample
2012-05-29 0:34, Richard Elling wrote:
I'd be interested in the results of such tests. You can change the
primarycache
parameter on the fly, so you could test it in less time than it takes
for me to type
this email :-)
I believe it would also take some time for memory distribution
to settle,
On Mon, May 28, 2012 at 09:23:25AM -0600, Nigel W wrote:
After a snafu
last week at $work where a 512-byte-sector pool would not resilver with a 4K
drive plugged in, it appears that (keep in mind that these are
consumer drives) Seagate no longer manufactures the 7200.12 series
drives which has a
On Mon, May 28, 2012 at 01:34:18PM -0700, Richard Elling wrote:
I'd be interested in the results of such tests.
Me too, especially for databases like postgresql where there's a
complementary cache size tunable within the db that often needs to be
turned up, since they implicitly rely on some
Hi all,
just to wrap this issue up: as FMA didn't report any error other than
the one which led to the degradation of that one mirror, I detached the
original drive from the zpool, which flagged the mirror vdev as ONLINE
(although there was still a cksum error count of 23 on the spare drive).
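For completeness, and again tying in the libzfs question from earlier in this
batch: the detach that "zpool detach" performs maps onto a single library
call, roughly as sketched below. The pool and device names are made up, and,
as before, libzfs is a private, unstable interface.

/*
 * Sketch: detach a device from a mirror through libzfs.
 * Equivalent to: zpool detach tank c0t3d0
 * Pool and device names are hypothetical; libzfs is a private, unstable API.
 * Build: cc -o detach detach.c -lzfs -lnvpair
 */
#include <stdio.h>
#include <libzfs.h>

int
main(void)
{
	libzfs_handle_t *g_zfs = libzfs_init();
	zpool_handle_t *zhp;
	int ret = 1;

	if (g_zfs == NULL) {
		(void) fprintf(stderr, "failed to initialize libzfs\n");
		return (1);
	}
	libzfs_print_on_error(g_zfs, B_TRUE);

	/* Open the pool that holds the degraded mirror (hypothetical name). */
	if ((zhp = zpool_open(g_zfs, "tank")) != NULL) {
		/* Detach the old drive; the active spare remains as the replacement. */
		if (zpool_vdev_detach(zhp, "c0t3d0") == 0)
			ret = 0;
		else
			(void) fprintf(stderr, "zpool_vdev_detach: %s\n",
			    libzfs_error_description(g_zfs));
		zpool_close(zhp);
	}

	libzfs_fini(g_zfs);
	return (ret);
}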
On May 28, 2012, at 9:21 PM, Stephan Budach wrote:
Hi all,
just to wrap this issue up: as FMA didn't report any error other than the one
which led to the degradation of that one mirror, I detached the original drive
from the zpool, which flagged the mirror vdev as ONLINE (although there
On Mon, May 28, 2012 at 6:13 PM, Daniel Carosone d...@geek.com.au wrote:
On Mon, May 28, 2012 at 09:23:25AM -0600, Nigel W wrote:
After a snafu
last week at $work where a 512-byte-sector pool would not resilver with a 4K
drive plugged in, it appears that (keep in mind that these are
consumer drives)
On 05/28/12 17:13, Daniel Carosone wrote:
There are two problems with using ZFS on drives with 4K sectors:
1) if the drive lies and presents 512-byte sectors, and you don't
manually force ashift=12, then the emulation can be slow (and
possibly error-prone). There is essentially an