looking for maximum performance with availability, so that
narrows it down to a mirrored pool, unless your PostgreSQL workload is so
specific that raidz would be a fit; but beware of the performance hit.
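A minimal sketch of what that could look like (device names are hypothetical, and the 8k recordsize is just the commonly suggested match for PostgreSQL's page size, not something you mentioned):

  zpool create pgpool mirror c1t0d0 c1t1d0
  zfs create -o recordsize=8k pgpool/pgdata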
Regards,
--
Giovanni Tirloni
/datasets from each one
independently.
http://download.oracle.com/docs/cd/E19963-01/html/821-1448/index.html
http://download.oracle.com/docs/cd/E18752_01/html/819-5461/index.html
--
Giovanni Tirloni
shouldn't panic
just because it can't import a pool.
Try booting with the kernel debugger on (add -kv to the grub kernel
line). Take a look at dumpadm.
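Roughly (a sketch; the menu.lst path assumes a ZFS root, and the exact kernel$ line differs per install):

  # in /rpool/boot/grub/menu.lst, append -kv to the kernel$ line you boot, e.g.
  kernel$ /platform/i86pc/kernel/$ISADIR/unix -B $ZFS-BOOTFS -kv
  # dumpadm with no arguments shows the dump device and savecore directory
  dumpadm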
--
Giovanni Tirloni
the "that is not a problem" comments. With the 7000-series appliance
I've heard that the 900-hour estimated resilver time was normal and
everything is working as expected. Can't help but think there is some
walled-garden syndrome floating around.
--
Giovanni Tirloni
On Wed, May 4, 2011 at 9:04 PM, Brandon High bh...@freaks.com wrote:
On Wed, May 4, 2011 at 2:25 PM, Giovanni Tirloni gtirl...@sysdroid.com
wrote:
The problem we've started seeing is that a zfs send -i is taking hours
to send a very small amount of data (e.g. 20GB in 6 hours) while a zfs
--
Giovanni Tirloni
Anyone else with over 600 hours of resilver time? :-)
Thank you,
Giovanni Tirloni (gtirl...@sysdroid.com)
for this type of situation?
http://bugs.opensolaris.org/bugdatabase/view_bug.do?bug_id=6899970
--
Giovanni Tirloni
gtirl...@sysdroid.com
the LSI 2004/2008 HBAs connected to the backplane (both 3Gb/s
and 6Gb/s).
The MegaRAID ELP, when connected to the same backplane, doesn't exhibit
that behavior.
--
Giovanni Tirloni
gtirl...@sysdroid.com
Solaris and refuses to investigate it.
--
Giovanni Tirloni
gtirl...@sysdroid.com
basis. It's
not a question of whether they'll happen but how often.
--
Giovanni Tirloni
gtirl...@sysdroid.com
builds.
--
Giovanni Tirloni
gtirl...@sysdroid.com
in a
mirror. I've always wondered what exactly it was doing since it was supposed
to be 30 seconds worth of data. It also generates lots of checksum errors.
--
Giovanni Tirloni
gtirl...@sysdroid.com
. Was the autoreplace code
supposed to replace the faulty disk and release the spare when
resilver is done ?
Thank you,
--
Giovanni Tirloni
gtirl...@sysdroid.com
On Wed, Aug 11, 2010 at 4:06 PM, Cindy Swearingen
cindy.swearin...@oracle.com wrote:
Hi Giovanni,
The spare behavior and the autoreplace property behavior are separate
but they should work pretty well in recent builds.
You should not need to perform a zpool replace operation
have reported works OK.
If you want to try something in between b111 and b134, see the
following instructions:
http://blogs.sun.com/observatory/entry/updating_to_a_specific_build
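For reference, a quick way to check and enable the property (pool name is hypothetical):

  zpool get autoreplace tank
  zpool set autoreplace=on tank
  # with autoreplace=on, a new disk found in the same physical location as the
  # faulted one is resilvered automatically and the hot spare is released when
  # the resilver completes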
--
Giovanni Tirloni
gtirl...@sysdroid.com
a vdev is degraded).
Thank you,
--
Giovanni Tirloni
gtirl...@sysdroid.com
On Fri, Jul 23, 2010 at 11:59 AM, Richard Elling rich...@nexenta.com wrote:
On Jul 23, 2010, at 2:31 AM, Giovanni Tirloni wrote:
Hello,
We've seen some resilvers on idle servers that are taking ages. Is it
possible to speed up resilver operations somehow?
Eg. iostat shows 5MB/s writes
On Fri, Jul 23, 2010 at 12:50 PM, Bill Sommerfeld
bill.sommerf...@oracle.com wrote:
On 07/23/10 02:31, Giovanni Tirloni wrote:
We've seen some resilvers on idle servers that are taking ages. Is it
possible to speed up resilver operations somehow?
Eg. iostat shows 5MB/s writes
these issues.
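One thing that sometimes helps, assuming your build has the newer scan code and these kernel variables (check that they exist before poking them), is relaxing the resilver throttle with mdb:

  # drop the delay injected between resilver I/Os (default is a couple of ticks)
  echo "zfs_resilver_delay/W0t0" | mdb -kw
  # let the resilver run longer per txg (milliseconds, default 3000)
  echo "zfs_resilver_min_time_ms/W0t5000" | mdb -kw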
--
Giovanni Tirloni
gtirl...@sysdroid.com
On Mon, Jul 19, 2010 at 7:12 AM, Joerg Schilling
joerg.schill...@fokus.fraunhofer.de wrote:
Giovanni Tirloni gtirl...@sysdroid.com wrote:
On Sun, Jul 18, 2010 at 10:19 PM, Miles Nordin car...@ivy.net wrote:
IMHO it's important we don't get stuck running Nexenta in the same
spot we're now
version 134.
Have you enabled compression or deduplication ?
Check the disks with `iostat -XCn 1` (look for high asvc_t times) and
`iostat -En` (hard and soft errors).
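To check (dataset name is hypothetical; the dedup property only exists on builds 128 and later):

  zfs get compression,dedup tank
  zpool get dedupratio tank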
--
Giovanni Tirloni
gtirl...@sysdroid.com
see/change these thresholds ?
--
Giovanni Tirloni
gtirl...@sysdroid.com
I can't
get zpool status to show my pool.
vdev_path = /dev/dsk/c9t0d0s0
vdev_devid = id1,s...@ahitachi_hds7225scsun250g_0719bn9e3k=vfa100r1dn9e3k/a
parent_guid = 0xb89f3c5a72a22939
Does format(1M) show the devices where they once were ?
--
Giovanni Tirloni
gtirl...@sysdroid.com
/messages
Perhaps with the additional information someone here can help you
better. I don't have any experience with Windows 7 to guarantee that
it hasn't messed with the disk contents.
--
Giovanni Tirloni
gtirl...@sysdroid.com
the opposite.
Some companies are successfully doing the opposite of what you describe: they are
using standard parts and competent staff who know how to create
solutions out of them, without having to pay for GUI-powered systems
and a 4-hour on-site part-swapping service.
--
Giovanni Tirloni
gtirl...@sysdroid.com
and packages, so
I can't envision that.
Would anyone have any ideas what may cause this?
It could be a disk failing and dragging I/O down with it.
Try checking for high asvc_t with `iostat -XCn 1` and for errors in `iostat -En`.
Any timeouts or retries in /var/adm/messages ?
--
Giovanni Tirloni
gtirl...@sysdroid.com
release.
--
Giovanni Tirloni
gtirl...@sysdroid.com
http://www.nexenta.com/corp/documentation/nexentastor-changelog
Is there a bug tracker where one can objectively list all the bugs
(with details) that went into a release ?
"Many bug fixes" is a bit too general.
--
Giovanni Tirloni
gtirl...@sysdroid.com
driver by default uses scsi_vhci, and scsi_vhci by
default does load-balance round-robin. Have you tried setting
load-balance=none in scsi_vhci.conf?
That didn't help.
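For anyone following along, the setting referred to above lives in scsi_vhci.conf (path assumed from a standard install) and needs a reboot to take effect:

  # /kernel/drv/scsi_vhci.conf
  load-balance="none";     # default is "round-robin"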
--
Giovanni Tirloni
gtirl...@sysdroid.com
anymore so I'm
guessing it's something related to the mpt_sas driver.
I submitted bug #6963321 a few minutes ago (not available yet).
--
Giovanni Tirloni
gtirl...@sysdroid.com
Thanks guys - I will take a look at those clustered file systems.
My goal is not to stick with Windows - I would like to have a storage pool for
XenServer (free) so that I can have guests, but using a storage server
(OpenSolaris - ZFS) as the iSCSI storage pool.
Any suggestions for the added
Hi guys,
I wanted to ask how I could set up an iSCSI device to be shared by 2 computers
concurrently; by that I mean sharing files as if it were an NFS share, but using
iSCSI instead.
I tried and set up iSCSI on both computers and was able to see my files (I had
formatted it NTFS before), from my
--
Giovanni
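On the iSCSI question above: exporting a ZFS volume over iSCSI with COMSTAR is straightforward (a sketch, with hypothetical pool/volume names), but note that two initiators writing to the same NTFS LUN at the same time will corrupt it; true concurrent access needs a cluster-aware filesystem or an NFS/CIFS share instead.

  svcadm enable stmf
  zfs create -V 100g tank/iscsivol
  sbdadm create-lu /dev/zvol/rdsk/tank/iscsivol   # note the GUID it prints
  stmfadm add-view <GUID>
  svcadm enable -r svc:/network/iscsi/target:default
  itadm create-target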
Hi Guys,
I am having trouble installing OpenSolaris 2009.06 on my Biostar TPower I45
motherboard, approved on the BigAdmin HCL here:
http://www.sun.com/bigadmin/hcl/data/systems/details/26409.html -- why is it
not working?
My setup:
3x 1TB SATA hard drives
1x 500GB hard drive (I have only left
the differences between PCIe 1.0 and 2.0. My fault.
If Giovanni had put the MegaRAID in this slot, he would have seen
an even lower throughput, around 600MB/s:
This slot is provided by the ICH10R which, as you can see at:
http://www.supermicro.com/manuals/motherboard/5500/MNL-1062.pdf
for a
MegaRAID ELP.
I'm guessing the backplane and cable are the bottleneck here.
Any comments ?
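Back-of-the-envelope numbers, assuming a single 4-lane SFF-8087 link into the expander: a 3Gb/s SAS lane carries roughly 300MB/s of payload after 8b/10b encoding, so four lanes top out around 1.2GB/s, and a PCIe 1.0 x4 slot is about 4 x 250MB/s = 1GB/s. The ICH10R's lanes additionally share its DMI uplink (roughly 1GB/s each way) with everything else on the southbridge, which would explain landing well below the slot's nominal rate.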
--
Giovanni
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
On Wed, May 26, 2010 at 9:22 PM, Brandon High bh...@freaks.com wrote:
On Wed, May 26, 2010 at 4:27 PM, Giovanni Tirloni gtirl...@sysdroid.com
wrote:
SuperMicro X8DTi motherboard
SuperMicro SC846E1 chassis (3Gb/s backplane)
LSI 9211-4i (PCIe x4) connected to the backplane with an SFF-8087 cable
them back as configured; any help is appreciated. Thanks
On Fri, May 7, 2010 at 9:45 PM, Ian Collins i...@ianshome.com wrote:
On 05/ 8/10 04:38 PM, Giovanni wrote:
Hi guys,
I have a quick question, I am playing around with ZFS and here's what I
did.
I created a storage pool with several
Hi,
Were you ever able to solve this problem on your AOC-SAT2-MV8 card? I am in
need of purchasing it to add more drives to my server.
Thanks
Giovanni
of 24 would show that).
If you look in the archives, people have sent a few averaged I/O performance
numbers that you could compare to your workload.
--
Giovanni
Hi guys,
I have a quick question, I am playing around with ZFS and here's what I did.
I created a storage pool with several drives. I unplugged 3 out of 5 drives
from the array, currently:
  NAME     STATE      READ WRITE CKSUM
  gpool    UNAVAIL       0     0     0
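For context on the output above (assuming gpool was a single raidz1 vdev, which can only survive one missing disk): with three of five drives unplugged there is not enough redundancy to reconstruct anything, so the pool stays UNAVAIL. Reconnecting the drives and running `zpool clear gpool` (or an export/import) should bring it back, since the on-disk data was never touched.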
to version 19).
Right ?
This happened on OpenSolaris 2009.06.
--
Giovanni
Hi guys,
I am new to the OpenSolaris and ZFS world. I have 6x 2TB SATA HDDs on my system; I
picked a single 2TB disk and installed OpenSolaris (therefore the zpool was created
by the installer).
I went ahead and created a new pool, gpool, with raidz (the kind of redundancy
I want). Here's the output:
Thank you. I was not aware that root pools could not be moved.
But here's the kicker: what if I have a single drive for the root pool, and it's
failing... I connect a new HDD to replace the boot drive that's dying; does ZFS have
no way of migrating to a new drive?
Thanks
in fmdump.
But check 'fmdump -eV' and see the actual errors that got created. You could
find something there.
--
Giovanni
only need to make sure your system
will boot from the correct disk.
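For the root-pool question above, the usual approach (a sketch; device names are hypothetical, and the new disk needs an SMI label with slice 0 covering it) is to attach the new disk as a mirror, wait for the resilver, install the boot blocks, and then detach the dying disk:

  zpool attach rpool c0t0d0s0 c0t1d0s0
  # wait for 'zpool status rpool' to show the resilver has completed, then:
  installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c0t1d0s0
  zpool detach rpool c0t0d0s0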
--
Giovanni
could hack zfs.c to automatically handle editing the crontab but I
don't know if it's worth the effort.
Are you worried that cron will fail or is it just an aesthetic requirement ?
--
Giovanni
idle time falls under a certain percentage.
What build of OpenSolaris are you using ?
Is it nearly freezing during the whole process or just at the end ?
There was another thread where a similar issue was discussed a week ago.
--
Giovanni
drives and advertise a
real 1TB disk using power-of-2 and show how people are being misled by
other vendors that use power-of-10. Highly unlikely but would sure get some
respect from the storage community.
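The gap is easy to quantify: 1TB as sold is 10^12 bytes, while 1TiB is 2^40 = 1,099,511,627,776 bytes, so a "1TB" drive shows up as about 10^12 / 2^30 = 931GiB in tools that count in powers of two, roughly 9% short of a full TiB.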
--
Giovanni
the filesystem
reservation. I'm not sure it's doable.
--
Giovanni
of 10
Main memory is the odd one out.
My bad on generalizing that information.
Perhaps the software stack dealing with disks should be changed to use
power-of-10. Unlikely too.
--
Giovanni
the relationship between
that and the defect database ? I'm still trying to understand the flow of
information here, since both databases seem to be used exclusively for
OpenSolaris but one is less open.
--
Giovanni
opened for that but it doesn't seem to be implemented yet.
http://bugs.opensolaris.org/bugdatabase/view_bug.do?bug_id=6662467
--
Giovanni
. You can try to import the pool
again and see if `fmdump -e` lists any errors afterwards.
You use the spare with `zpool replace`.
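Roughly (pool and device names are hypothetical):

  zpool replace tank c1t5d0 c1t9d0   # resilver onto the hot spare
  # after physically swapping the bad disk in the same slot:
  zpool replace tank c1t5d0          # resilver onto the new disk
  zpool detach tank c1t9d0           # return the spare if it is not released automatically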
--
Giovanni Tirloni
sysdroid.com
of
encouragement or wisdom?
What does `iostat -En` say ?
My suggestion is to replace the cable that's connecting the c3t3d0 disk.
IMHO, the cable is much more likely to be faulty than a single port on the
disk controller.
--
Giovanni Tirloni
sysdroid.com
files being created on the
SSD disk ?
You can check device usage with `zpool iostat -v hdd`. Please also send the
output of `zpool status hdd`.
Thank you,
--
Giovanni Tirloni
sysdroid.com
there is nothing to read back from later when a read()
misses the ARC cache and checks L2ARC.
I don't know what your OLTP benchmark does but my advice is to check if it's
really writing files in the 'hdd' zpool mount point.
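One way to see whether the cache device is being fed at all (assuming these arcstats counters exist on your build; names have shifted over time):

  kstat -p zfs:0:arcstats:l2_size
  kstat -p zfs:0:arcstats:l2_hits zfs:0:arcstats:l2_misses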
--
Giovanni Tirloni
sysdroid.com
of
3.7TByte.
Please check the ZFS FAQ:
http://hub.opensolaris.org/bin/view/Community+Group+zfs/faq
There is a question regarding the difference between du, df and zfs list.
--
Giovanni Tirloni
sysdroid.com
is incredible in terms of resilience and
performance, no doubt. Which makes me think the pretty interface becomes an
annoyance sometimes. Let's wait for 2010.Q1 :)
--
Giovanni Tirloni
sysdroid.com
. For small
datasets/snapshots that doesn't happen or is harder to notice.
Does ZFS have to do something special when it's done releasing the data
blocks at the end of the destroy operation ?
--
Giovanni Tirloni
sysdroid.com
together.
AFAIK, RAID0+1 is not supported since a vdev can only be of type disk,
mirror or raidz. And all vdevs are striped together. Someone more
experienced in ZFS can probably confirm/deny this.
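For what it's worth, the stripe-of-mirrors (RAID 1+0) layout falls out of simply listing several mirror vdevs, e.g. with hypothetical devices:

  zpool create tank mirror c1t0d0 c1t1d0 mirror c1t2d0 c1t3d0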
--
Giovanni Tirloni
sysdroid.com
                      0     0     0
    c7t1d0     ONLINE       0     0     0
    c7t2d0     ONLINE       0     0     0
  cache
    c7t22d0    ONLINE       0     0     0
  spares
    c7t3d0     AVAIL
Any ideas?
Thank you,
--
Giovanni Tirloni
sysdroid.com
noted, it doesn't seem possible.
You could create a new zpool with this larger LUN and use zfs send/receive
to migrate your data.
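A sketch of that migration (pool names are hypothetical; -R carries all descendant datasets, snapshots and properties):

  zfs snapshot -r oldpool@migrate
  zfs send -R oldpool@migrate | zfs receive -F -d newpool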
--
Giovanni Tirloni
sysdroid.com
, and that
makes us all very nervous.
Any ideas?
Is it possible that your users are now deleting everything before starting
to write the backup data ?
--
Giovanni Tirloni
sysdroid.com
analogy, then by
all means let's hear it.
Dell joins the party:
http://lists.us.dell.com/pipermail/linux-poweredge/2010-February/041335.html
--
Giovanni
fixed in other drives that Seagate/WD sells ?
For me it's just hard to objectively point out the differences between
Seagate's enterprise drives and the ones provided by Sun except that they
were tested more.
1 - http://en.wikipedia.org/wiki/Freebie_marketing
--
Giovanni
. That is, for disk and
storage vendors. Not customers.
--
Giovanni
' firmware is programmed
(timeouts, retries, etc) but so far nothing in the official docs. In
this case the disk's retry timeout seems way too high for our needs and
I believe a timeout limit imposed by the OS would help.
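The closest thing to an OS-imposed limit I know of (assuming the disks sit behind the sd driver; value in seconds, default 60) is the sd command timeout in /etc/system, which takes effect after a reboot:

  * /etc/system
  set sd:sd_io_time = 10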
--
Giovanni P. Tirloni
On Mon, Jan 4, 2010 at 3:51 PM, Joerg Schilling
joerg.schill...@fokus.fraunhofer.de wrote:
Giovanni Tirloni tirl...@gmail.com wrote:
We use Seagate Barracuda ES.2 1TB disks and every time the OS starts
to bang on a region of the disk with bad blocks (which essentially
degrades the performance