On 20/01/2010 15:45, David Dyer-Bennet wrote:
On Wed, January 20, 2010 09:23, Robert Milkowski wrote:
Now you rsync all the data from your clients to a dedicated filesystem
per client, then create a snapshot.
Is there an rsync out there that can reliably replicate all file
On 20/01/2010 19:20, Ian Collins wrote:
Julian Regel wrote:
It is actually not that easy.
Compare the cost of 2x x4540 with 1TB disks to an equivalent solution on
LTO.
Each x4540 could be configured as: 4x 11 disks in raidz-2 + 2x hot
spare
+ 2x OS disks.
The four raidz2 groups form a single
Robert Milkowski wrote:
On 20/01/2010 19:20, Ian Collins wrote:
Julian Regel wrote:
It is actually not that easy.
Compare the cost of 2x x4540 with 1TB disks to an equivalent solution on
LTO.
Each x4540 could be configured as: 4x 11 disks in raidz-2 + 2x hot
spare
+ 2x OS disks.
The four
zpool create -f testpool mirror c0t0d0 c1t0d0 mirror c4t0d0 c6t0d0
mirror c0t1d0 c1t1d0 mirror c4t1d0 c5t1d0 mirror c6t1d0 c7t1d0
mirror c0t2d0 c1t2d0
mirror c4t2d0 c5t2d0 mirror c6t2d0 c7t2d0 mirror c0t3d0 c1t3d0
mirror c4t3d0 c5t3d0
mirror c6t3d0 c7t3d0 mirror
ZFS does not strictly support RAID 1+0. However, your sample command
will create a pool based on mirror vdevs which is written to in a
load-shared fashion (not striped). This type of pool is ideal for
Although it's not technically striped according to the RAID definition of
striping, it does
On Thursday 21 January 2010 10:29:16 Edward Ned Harvey wrote:
zpool create -f testpool mirror c0t0d0 c1t0d0 mirror c4t0d0 c6t0d0
mirror c0t1d0 c1t1d0 mirror c4t1d0 c5t1d0 mirror c6t1d0 c7t1d0
mirror c0t2d0 c1t2d0
mirror c4t2d0 c5t2d0 mirror c6t2d0 c7t2d0 mirror c0t3d0
zpool create testpool disk1 disk2 disk3
In the traditional sense of RAID, this would create a concatenated data set.
The size of the data set is the size of disk1 + disk2 + disk3. However,
since this is ZFS, it's not constrained to linearly assigning virtual disk
blocks to physical disk blocks
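A hedged sketch of that distinction (device names are hypothetical, not from the post): three single-disk top-level vdevs give a pool whose capacity is the sum of the disks, but ZFS spreads new writes dynamically across all of them instead of filling disks in sequence the way a traditional concatenation would.

```shell
# Pool capacity = roughly the sum of the three disks; new writes are
# dynamically striped across all top-level vdevs.
zpool create testpool c0t0d0 c0t1d0 c0t2d0
zpool list testpool          # SIZE is about the three disks combined
zpool iostat -v testpool 5   # watch writes land on every vdev
```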
Robert Milkowski wrote:
I think one should actually compare whole solutions - including servers,
fc infrastructure, tape drives, robots, software costs, rack space, ...
Servers like x4540 are ideal for zfs+rsync backup solution - very
compact, good $/GB ratio, enough CPU power for its
Can ASM match ZFS for checksum and self healing? The reason I ask is
that the x45x0 uses inexpensive (less reliable) SATA drives. Even the
J4xxx paper you cite uses SAS for production data (only using SATA for
Oracle Flash, although I have my concerns about that too).
The thing is, ZFS and
On 21/01/2010 09:07, Ian Collins wrote:
Robert Milkowski wrote:
On 20/01/2010 19:20, Ian Collins wrote:
Julian Regel wrote:
It is actually not that easy.
Compare the cost of 2x x4540 with 1TB disks to an equivalent solution
on LTO.
Each x4540 could be configured as: 4x 11 disks in raidz-2 + 2x
I'm pretty new to opensolaris. I come from FreeBSD.
Naturally, after using FreeBSD for a while i've been big on the use of
FreeBSD jails so i just had to try zones. I've figured out how to get zones
running but now i'm stuck and need help. Is there anything like nullfs in
opensolaris...
or
Until you try to pick one up and put it in a fire safe!
Then you backup to tape from x4540 whatever data you need.
In case of enterprise products you save on licensing here, as you need one
client license per x4540 but in fact can back up data from many clients which
are there.
Which brings
On 21 Jan 2010 at 12:33, Thomas Burgess wrote:
I'm pretty new to opensolaris. I come from FreeBSD.
Naturally, after using FreeBSD for a while i've been big on the use
of FreeBSD jails so i just had to try zones. I've figured out how
to get zones running but now i'm stuck and need
the path of the root of your zone is not important for that feature.
Ok, cool
with zonecfg, you can add a configuration like this one to your zone:
add fs
set dir=/some/path/Video
set special=/tank/nas/Video
set type=lofs
end
add fs
set dir=/some/path/JeffB
set
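The zonecfg fragment above is cut off by the archive; a complete hedged version of the same lofs idea would look like the following (zone name and the second mount are placeholders, not the poster's actual ones):

```shell
# Loop-back mount a global-zone directory into the zone; lofs is the
# Solaris analogue of FreeBSD's nullfs.
zonecfg -z myzone <<'EOF'
add fs
set dir=/some/path/Video
set special=/tank/nas/Video
set type=lofs
end
EOF
# Reboot the zone (or mount by hand) for the new fs entry to appear.
```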
No. But, that's where the hybrid solution comes in. ASM would be used for the
database files and ZFS for the redo/archive logs and undo. Corrupt blocks in
the datafiles would be repaired with data from redo during a recovery, and ZFS
should give you assurance that the redo didn't get corrupted.
now i'm stuck again. sorry to clog the tubes with my newbishness.
i can't seem to create users inside the zone. i'm sure it's due to zfs
privileges somewhere but i'm not exactly sure how to fix it. i don't mind
if i need to manage the zfs filesystem outside of the zone, i'm just not
sure
On 21 Jan 2010 at 14:14, Thomas Burgess wrote:
now i'm stuck again. sorry to clog the tubes with my newbishness.
i can't seem to create users inside the zone. i'm sure it's due to
zfs privileges somewhere but i'm not exactly sure how to fix it. i
don't mind if i need to manage the
hrm... that seemed to work... i'm so new to solaris. it's SO
different... what exactly did i just disable?
Does that mount nfs shares or something?
why should that prevent me from creating home directories?
thanks
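For context, a hedged guess at what was disabled upthread (the actual command isn't preserved in these excerpts): the Solaris automounter claims /home, so useradd -m fails there unless autofs is out of the way or home directories live under /export/home.

```shell
# Option 1: stop autofs from managing /home (likely what was disabled).
svcadm disable svc:/system/filesystem/autofs:default
# Option 2: leave autofs alone and create homes under /export/home.
useradd -d /export/home/alice -m alice   # 'alice' is a hypothetical user
```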
2010/1/21 Gaëtan Lehmann gaetan.lehm...@jouy.inra.fr
On 21 Jan 2010 at 14:14,
Thomas,
If you're trying to make user home directories on your local machine in
/home, you have to watch out because the initial Solaris config assumes
that you're in an enterprise environment and the convention is to have a
filer somewhere that serves everyone's home directories which, with
ahh,
On Thu, Jan 21, 2010 at 8:55 AM, Jacob Ritorto jacob.rito...@gmail.com wrote:
Thomas,
If you're trying to make user home directories on your local machine
in /home, you have to watch out because the initial Solaris config assumes
that you're in an enterprise environment and the
add net
set defrouter=192.168.1.1
end
Thanks again
I must be doing something wrong...i can access the zone on my network but i
can't for the life of me get the zone to access the internet
I'm googling like crazy but maybe someone here knows what i'm doing wrong.
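For what it's worth, a hedged checklist for a shared-IP zone that can reach the LAN but not the internet (interface name and addresses below are assumptions): the net resource needs an address, a physical interface, and the default router, plus DNS configured inside the zone.

```shell
# If a net resource already exists, select and modify it instead of adding.
zonecfg -z myzone <<'EOF'
add net
set address=192.168.1.50/24
set physical=e1000g0
set defrouter=192.168.1.1
end
EOF
# Then verify from inside the zone:
#   zlogin myzone netstat -rn   # is the default route present?
#   zlogin myzone ping 8.8.8.8  # reachability past the router
# Name resolution also needs /etc/resolv.conf inside the zone.
```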
Does anyone know if the current OpenSolaris mpt driver supports the recent LSI
SAS2008 controller?
This controller/ASIC is used in the next generation SAS-2 6Gbps PCIe cards from
LSI and SuperMicro etc, e.g.:
1. SuperMicro AOC-USAS2-L8e and the AOC-USAS2-L8i
2. LSI SAS 9211-8i
Cheers,
Simon
Thomas Burgess wrote:
I'm not used to the whole /home vs /export/home difference and when you
add zones to the mix it's quite confusing.
I'm just playing around with this zone to learn but in the next REAL
zone i'll probably:
mount the home directories from the base system (this machine
Hi list,
I have a serious issue with my zpool.
My zpool consists of 4 vdevs assembled into 2 mirrors.
One of these mirrors got degraded because of too many errors on each vdev
of the mirror.
Yes, both vdevs of the mirror got degraded.
According to Murphy's law I don't have a backup as
On Thu, 21 Jan 2010, Edward Ned Harvey wrote:
Although it's not technically striped according to the RAID definition of
striping, it does achieve the same performance result (actually better) so
people will generally refer to this as striping anyway.
People will say a lot of things, but that
Hi all,
I'm going to be trying out some tests using b130 for dedup on a server with
about 1.7TB of usable storage (14x146 in two raidz vdevs of 7 disks). What
I'm trying to get a handle on is how to estimate the memory overhead required
for dedup on that amount of storage. From what I
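For a rough back-of-the-envelope number, the in-core dedup table cost can be sketched as unique blocks times bytes per DDT entry. The ~320 bytes/entry and the 128K average recordsize below are assumed ballpark figures, not numbers from this thread:

```shell
# Worst-case DDT RAM sketch: every block unique, 128K average records,
# ~320 bytes of RAM per DDT entry (an assumption).
pool_bytes=$((1700 * 1024 * 1024 * 1024))   # ~1.7TB usable
recordsize=$((128 * 1024))
ddt_entry=320
blocks=$((pool_bytes / recordsize))
echo "unique blocks: $blocks"                        # prints 13926400
echo "DDT RAM: $((blocks * ddt_entry / 1024 / 1024)) MiB"   # prints 4250 MiB
```

Smaller average block sizes (lots of small files) push the entry count, and hence the RAM estimate, up proportionally.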
That looks promising.
As the main thing here is that OpenSolaris supports the LSI SAS2008 controller,
I have created a new post to ask for confirmation of driver support -- see here:
http://opensolaris.org/jive/thread.jspa?threadID=122156&tstart=0
Cheers,
Simon
Hi,
I found this script for replicating zfs data:
http://www.infrageeks.com/groups/infrageeks/wiki/8fb35/zfs_autoreplicate_script.html
- I am testing it out in the lab with b129.
It errored out the first run with some syntax error about the send component
(recursive needed?)
But I have not
On Jan 21, 2010, at 3:55 AM, Julian Regel wrote:
Until you try to pick one up and put it in a fire safe!
Then you backup to tape from x4540 whatever data you need.
In case of enterprise products you save on licensing here, as you need one
client license per x4540 but in fact can back up
On Jan 20, 2010, at 4:17 PM, Daniel Carosone wrote:
On Wed, Jan 20, 2010 at 03:20:20PM -0800, Richard Elling wrote:
Though the ARC case, PSARC/2007/618 is unpublished, I gather from
googling and the source that L2ARC devices are considered auxiliary,
in the same category as spares. If so,
On Jan 21, 2010, at 8:04 AM, erik.ableson wrote:
Hi all,
I'm going to be trying out some tests using b130 for dedup on a server with
about 1.7TB of usable storage (14x146 in two raidz vdevs of 7 disks). What
I'm trying to get a handle on is how to estimate the memory overhead required
Julian Regel wrote:
Until you try to pick one up and put it in a fire safe!
Then you backup to tape from x4540 whatever data you need.
In case of enterprise products you save on licensing here as you need
one client license per x4540 but in fact can back up data from many
clients which are
On 22/01/10 12:28 AM, Simon Breden wrote:
Does anyone know if the current OpenSolaris mpt driver supports the recent LSI
SAS2008 controller?
This controller/ASIC is used in the next generation SAS-2 6Gbps PCIe cards from
LSI and SuperMicro etc, e.g.:
1. SuperMicro AOC-USAS2-L8e and the
Hi Folks,
Situation: 64-bit OpenSolaris on AMD, 2009-6 111b. I can't successfully
update the OS.
I've got three external 1.5 Tb drives in a raidz pool connected via USB.
Hooked on to an IDE channel is a 750gig hard drive that I'm copying the data
off. It is an ext3 drive from an Ubuntu
CC'ed to ext3-disc...@opensolaris.org because this is an ext3 on Solaris
issue. ZFS has no problem with large files, but the older ext3 did.
See also the ext3 project page and documentation, especially
http://hub.opensolaris.org/bin/view/Project+ext3/Project_status
-- richard
On Jan 21, 2010,
Thanks a lot for the info James.
For the benefit of myself and others then:
1. mpt_sas driver is used for the SuperMicro AOC-USAS2-L8e
2. mr_sas driver is used for the SuperMicro AOC-USAS2-L8i and LSI SAS 9211-8i
And how does the maturity/robustness of the mpt_sas and mr_sas drivers compare to
the
On Thu, Jan 21, 2010 at 10:00 PM, Richard Elling
richard.ell...@gmail.com wrote:
On Jan 21, 2010, at 8:04 AM, erik.ableson wrote:
Hi all,
I'm going to be trying out some tests using b130 for dedup on a server with
about 1.7TB of usable storage (14x146 in two raidz vdevs of 7 disks). What
Apologies for not explaining myself correctly, I'm copying from ext3 onto ZFS -
it appears to my amateur eyes, that it is ZFS that is having the problem.
--
This message posted from opensolaris.org
___
zfs-discuss mailing list
On 22/01/10 06:14 AM, Simon Breden wrote:
Thanks a lot for the info James.
For the benefit of myself and others then:
1. mpt_sas driver is used for the SuperMicro AOC-USAS2-L8e
2. mr_sas driver is used for the SuperMicro AOC-USAS2-L8i and LSI SAS
9211-8i
Correct. I only know the internal
On Thu, Jan 21, 2010 at 09:36:06AM -0800, Richard Elling wrote:
On Jan 20, 2010, at 4:17 PM, Daniel Carosone wrote:
On Wed, Jan 20, 2010 at 03:20:20PM -0800, Richard Elling wrote:
Though the ARC case, PSARC/2007/618 is unpublished, I gather from
googling and the source that L2ARC devices
Fair enough.
So where do you think my problem lies?
Do you think it could be a limitation of the driver I loaded to read the ext3
partition?
Correct. I only know the internal chip code names, not what the actual
shipping products are called :|
Now 'knew' ;-)
It's reassuring to hear your points a thru d regarding the development/test
cycle.
I could always use the 'try before you buy' approach: others try it, and if it
works, I
Michelle Knight wrote:
Fair enough.
So where do you think my problem lies?
Do you think it could be a limitation of the driver I loaded to read the ext3
partition?
Without knowing exactly what commands you typed and exactly what error
messages they produced, and which directories/files are
On Thu, Jan 21, 2010 at 05:04:51PM +0100, erik.ableson wrote:
What I'm trying to get a handle on is how to estimate the memory
overhead required for dedup on that amount of storage.
We'd all appreciate better visibility of this. This requires:
- time and observation and experience, and
-
The error messages are in the original post. They are...
/mirror2/applications/Microsoft/Operating Systems/Virtual PC/vm/XP-SP2/XP-SP2
Hard Disk.vhd: File too large
/mirror2/applications/virtualboximages/xp/xp.tar.bz2: File too large
The system installed to read the EXT3 system is here -
On Thu, Jan 21, 2010 at 01:55:53PM -0800, Michelle Knight wrote:
The error messages are in the original post. They are...
/mirror2/applications/Microsoft/Operating Systems/Virtual PC/vm/XP-SP2/XP-SP2
Hard Disk.vhd: File too large
/mirror2/applications/virtualboximages/xp/xp.tar.bz2: File too
We tried the new LSI controllers in our configuration trying to replace Areca
1680 controllers. The tests were done on 2009.06
Unlike the mpt drivers which were rock solid (but obviously do not support the
new chips), the mr_sas was a complete disaster. (We got ours from LSI website).
Timeouts,
On Thu, Jan 21, 2010 at 01:55:53PM -0800, Michelle Knight wrote:
Anything else I can get that would help this?
split(1)? :-)
--
bda
cyberpunk is dead. long live cyberpunk.
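The split(1) suggestion above is a real workaround for ext3's old large-file limit when staging data: chop the file into pieces under the limit, then reassemble on the ZFS side. A tiny hedged demo of the round trip (chunk size shrunk just for illustration; in practice something like -b 1024m):

```shell
# Split a file into fixed-size pieces, reassemble, and verify the copy.
printf 'abcdefghijkl' > bigfile      # stand-in for the oversized .vhd
split -b 4 bigfile part.             # produces part.aa part.ab part.ac
cat part.* > reassembled             # glob order restores the original
cmp -s bigfile reassembled && echo "files match"   # prints: files match
```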
PS: For data that you want to mostly archive, consider using Amazon
Web Services (AWS) S3 service. Right now there is no charge to push
data into the cloud and it's $0.15/gigabyte to keep it there. Do a
quick (back of the napkin) calculation on what storage you can get for
$30/month and factor in
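As a quick sanity check of that napkin math at the quoted $0.15/GB-month rate:

```shell
# $30/month at 15 cents per GB-month buys 200 GB of S3 storage,
# before request and transfer-out charges.
rate_cents=15
budget_cents=3000
echo "GB for \$30/month: $((budget_cents / rate_cents))"   # prints 200
```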
Hello,
Anyone else noticed that zpool is kind of negative when reporting back from
some error conditions?
Like:
cannot import 'zpool01': I/O error
Destroy and re-create the pool from
a backup source.
or even worse:
cannot import 'rpool': pool already exists
Destroy and
On Fri, Jan 22, 2010 at 08:55:16AM +1100, Daniel Carosone wrote:
For performance (rather than space) issues, I look at dedup as simply
increasing the size of the working set, with a goal of reducing the
amount of IO (avoided duplicate writes) in return.
I should add and avoided future
On Thu, Jan 21, 2010 at 02:11:31PM -0800, Moshe Vainer wrote:
PS: For data that you want to mostly archive, consider using Amazon
Web Services (AWS) S3 service. Right now there is no charge to push
data into the cloud and it's $0.15/gigabyte to keep it there. Do a
quick (back of the napkin)
+1
I agree 100%
I have a website whose ZFS Home File Server articles are read around 1 million
times a year, and so far I have recommended Western Digital drives
wholeheartedly, as I have found them to work flawlessly within my RAID system
using ZFS.
With this recent action by Western
[Richard makes a hobby of confusing Dan :-)]
more below..
On Jan 21, 2010, at 1:13 PM, Daniel Carosone wrote:
On Thu, Jan 21, 2010 at 09:36:06AM -0800, Richard Elling wrote:
On Jan 20, 2010, at 4:17 PM, Daniel Carosone wrote:
On Wed, Jan 20, 2010 at 03:20:20PM -0800, Richard Elling wrote:
On Thu, Jan 21, 2010 at 02:54:21PM -0800, Richard Elling wrote:
+ support file systems larger than 2GiB, include 32-bit UIDs and GIDs
file systems, but what about individual files within?
--
Dan.
Ouch. Was that on the original 2009.06 vanilla install, or a later updated
build? Hopefully a lot of the original bugs have been fixed by now, or soon
will be.
Has anyone got any 'from the trenches' experience of using the mpt_sas driver?
Any comments?
Cheers,
Simon
On Thu, Jan 21, 2010 at 11:14:33PM +0100, Henrik Johansson wrote:
I think this could scare or even make new users do terrible things,
even if the errors could be fixed. I think I'll file a bug, agree?
Yes, very much so.
--
Dan.
On Jan 21, 2010, at 6:47 PM, Daniel Carosone d...@geek.com.au wrote:
On Thu, Jan 21, 2010 at 02:54:21PM -0800, Richard Elling wrote:
+ support file systems larger than 2GiB, include 32-bit UIDs and GIDs
file systems, but what about individual files within?
I think the original author meant
On Thu, Jan 21, 2010 at 03:33:28PM -0800, Richard Elling wrote:
[Richard makes a hobby of confusing Dan :-)]
Heh.
Lutz, is the pool autoreplace property on? If so, god help us all
is no longer quite so necessary.
I think this is a different issue.
I agree. For me, it was the main
And I agree as well. WD was about to get upwards of $500-$700 of my money,
and is now getting zero; this issue alone is moving me to look harder for
other drives.
I'm sure a WD rep would tell us about how there are extra unseen goodies in the
RE line. Maybe.
Thanks!
Yep, I was about to buy six or so WD15EADS or WD15EARS drives, but it looks
like I will not be ordering them now.
The bad news is that after looking at the Samsungs, it seems they too have
no way of changing the error reporting time in the 'desktop' drives. I hope I'm
wrong
Vanilla 2009.06, mr_sas drivers from LSI website.
To answer your other question - the mpt driver is very solid on 2009.06
On Thu, Jan 21, 2010 at 7:37 PM, Moshe Vainer mvai...@doyenz.com wrote:
Vanilla 2009.06, mr_sas drivers from LSI website.
To answer your other question - the mpt driver is very solid on 2009.06
Are you sure those are the open source drivers he's referring to? LSI has a
habit of releasing
On Jan 21, 2010, at 4:32 PM, Daniel Carosone wrote:
I propose a best practice of adding the cache device to rpool and be
happy.
It is *still* not that simple. Forget my slow disks caching an even
slower pool (which is still fast enough for my needs, thanks to the
cache and zil).
On Thu, Jan 21, 2010 at 8:05 PM, Moshe Vainer mvai...@doyenz.com wrote:
http://lsi.com/storage_home/products_home/internal_raid/megaraid_sas/6gb_s_value_line/sas9260-8i/index.html
2009.06 didn't have the drivers integrated, so those aren't the open source
ones. As i said, it is possible
On Thu, Jan 21, 2010 at 05:52:57PM -0800, Richard Elling wrote:
I agree with this, except for the fact that the most common installers
(LiveCD, Nexenta, etc.) use the whole disk for rpool[1].
Er, no. You certainly get the option of whole disk or make
partitions, at least with the opensolaris
Hello all,
I have a small issue with zfs.
I created a 1TB volume.
# zfs get all tank/test01
NAME         PROPERTY  VALUE   SOURCE
tank/test01  type      volume  -
On Thu, Jan 21, 2010 at 07:33:47PM -0800, Younes wrote:
Hello all,
I have a small issue with zfs.
I created a 1TB volume.
# zfs get all tank/test01
NAME         PROPERTY  VALUE
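The post is cut off here, but a common surprise with a freshly created 1TB zvol is that it carries a full-size refreservation by default, eating the pool's free space immediately. A hedged sketch (pool and volume names from the post; the issue itself is a guess, since the question is truncated):

```shell
# Check whether the volume reserved its full size up front.
zfs get volsize,refreservation tank/test01
# Create a sparse volume instead: -s skips the reservation, so space
# is only consumed as blocks are actually written.
zfs create -s -V 1T tank/test02
```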
I installed b130 on my server, and i'm being hit by this bug:
http://defect.opensolaris.org/bz/show_bug.cgi?id=13540
Where i can't log into gnome. I've been trying to deal with it hoping that
a workaround would show up...
if there IS a workaround, i'd love to have it...if not, i'm wondering:
is
I have just installed EON .599 on a machine with a 6 disk raidz2 configuration.
I run updimg after creating a zpool. When I reboot, and attempt to run 'zpool
list' it returns 'no pools configured'.
I've checked /etc/zfs/zpool.cache, and it appears to have configuration
information about
On Thu, Jan 21, 2010 at 2:51 PM, Andrey Kuzmin
andrey.v.kuz...@gmail.com wrote:
Looking at dedupe code, I noticed that on-disk DDT entries are
compressed less efficiently than possible: key is not compressed at
all (I'd expect roughly 2:1 compression ratio with sha256 data),
A cryptographic
Wrong list. But anyhow I was able to install b128 and then upgrade to
b130. I had to relink some OpenGL files to get Compiz to work but apart
from that it looks OK.
/peter
On 2010-01-22 11.03, Thomas Burgess wrote:
I installed b130 on my server, and i'm being hit by this bug:
Did you buy the SSDs directly from Sun? I've heard there could possibly be
firmware that's vendor specific for the X25-E.
Hi
On Friday 22 January 2010 07:04:06 Brad wrote:
Did you buy the SSDs directly from Sun? I've heard there could possibly be
firmware that's vendor specific for the X25-E.
No.
So far I've heard that they are not readily available as certification
procedures are still underway (apart from