Hi guys, I am about to reshape my data pool and am wondering what performance
difference I can expect from the new config vs. the old.
The old config is a pool with a single vdev of 8 disks in raidz2.
The new config is 2 vdevs of 7-disk raidz2 in a single pool.
I understand it should be better
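For what it's worth, random-read IOPS scale roughly with the number of top-level vdevs, so the two-vdev layout should roughly double small random I/O while sequential throughput stays about the same. A minimal sketch of the new layout (pool and device names are placeholders):
# zpool create tank \
    raidz2 c0t0d0 c0t1d0 c0t2d0 c0t3d0 c0t4d0 c0t5d0 c0t6d0 \
    raidz2 c1t0d0 c1t1d0 c1t2d0 c1t3d0 c1t4d0 c1t5d0 c1t6d0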
On Mon, 2010-07-19 at 01:28 -0700, tomwaters wrote:
Hi guys, I am about to reshape my data pool and am wondering what
performance difference I can expect from the new config vs. the old.
The old config is a pool with a single vdev of 8 disks in raidz2.
The new pool config is 2 vdevs of 7-disk
On Sat, Jul 17, 2010 at 12:57:40AM +0200, Richard Elling wrote:
Because of BTRFS for Linux, Linux's popularity itself, and also thanks
to Oracle's help.
BTRFS does not matter until it is a primary file system for a dominant
distribution.
From what I can tell, the dominant Linux
Thanks, seems simple.
Giovanni Tirloni gtirl...@sysdroid.com wrote:
On Sun, Jul 18, 2010 at 10:19 PM, Miles Nordin car...@ivy.net wrote:
IMHO it's important we don't get stuck running Nexenta in the same
spot we're now stuck with OpenSolaris: with a bunch of CDDL-protected
source that few people know how to use
On Mon, Jul 19, 2010 at 3:31 PM, Pasi Kärkkäinen pa...@iki.fi wrote:
Upcoming Ubuntu 10.10 will use BTRFS as a default.
Though there was some discussion around this, I don't think the above
is a given. The Ubuntu devs would look at the status of the project
and decide closer to the release.
On Mon, Jul 19, 2010 at 7:12 AM, Joerg Schilling
joerg.schill...@fokus.fraunhofer.de wrote:
Giovanni Tirloni gtirl...@sysdroid.com wrote:
On Sun, Jul 18, 2010 at 10:19 PM, Miles Nordin car...@ivy.net wrote:
IMHO it's important we don't get stuck running Nexenta in the same
spot we're now
Ubuntu always likes to be on the edge even if btrfs is far from being
'stable'. I would not want to run a release that does this. Servers need
stability and reliability. Btrfs is far from this.
Well, it seems to me that this is a well-known and very popular "circular
argument":
A: XYZ is far
On 12/07/2010 16:32, Erik Trimble wrote:
ZFS is NOT automatically ACID. There is no guarantee of commits for
async write operations. You would have to use synchronous writes to
guarantee commits. And, furthermore, I think that there is a strong
# zfs set sync=always pool
will force all I/O
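As a quick sketch of how that is applied and checked (pool and dataset names are placeholders), the property can also be relaxed per dataset:
# zfs set sync=always tank             # commit every write to stable storage
# zfs get sync tank                    # verify the setting
# zfs set sync=standard tank/scratch   # default behaviour for data that can tolerate loss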
On 07/19/10 07:26, Andrej Podzimek wrote:
I run ArchLinux with Btrfs and OpenSolaris with ZFS. I haven't had a
serious issue with any of them so far.
Moblin/Meego ships with btrfs by default. COW file system on a
cell phone :-). Unsurprisingly for a read-mostly file system it
seems pretty
Hello,
I think this is the second time this has happened to me. A couple of years ago, I
deleted a big (500G) zvol and then the machine started to hang some 20 minutes
later (out of memory); even rebooting didn't help. But with the great support
from Victor Latushkin, who on a weekend helped me debug
Hi--
I don't know what's up with iostat -En but I think I remember a problem
where iostat does not correctly report drives running in legacy IDE mode.
You might use the format utility to identify these devices.
Thanks,
Cindy
On 07/18/10 14:15, Alxen4 wrote:
This is a situation:
I've got an
On Fri, Jul 16 at 18:32, Jordan McQuown wrote:
I'm curious to know what other people are running for HDs in white box
systems. I'm currently looking at Seagate Barracudas and Hitachi
Deskstars. I'm looking at the 1TB models. These will be attached to an LSI
expander in a sc847e2
Hi--
A Google search of ST3500320AS turns up Seagate Barracuda drives.
All 7 drives in the pool tank are ST3500320AS. The other two c1t0d0
and c3d0 are unknown, but are not part of this pool.
You can also use fmdump -eV to see how long c2t3d0 has had problems.
Thanks,
Cindy
On 07/19/10
I've tried ssh with blowfish and scp with arcfour. Both are CPU limited long before the
10G link is.
I've also tried mbuffer, but I get broken pipe errors part way through the
transfer.
I'm open to ideas for faster ways to either zfs send directly or to send
through a compressed file of the zfs send output.
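For what it's worth, the usual mbuffer pattern is to run it on both ends so the raw TCP socket, not ssh, carries the stream; a rough sketch (hostnames, port, pool and snapshot names are placeholders, the buffer sizes are only starting points, and the receiver must be started first):
On the receiving host:
# mbuffer -s 128k -m 1G -I 9090 | zfs receive tank/fs
On the sending host:
# zfs send tank/fs@snap | mbuffer -s 128k -m 1G -O recvhost:9090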
Is it possible in ZFS to do the following?
I have an 800GB LUN as a single device in a pool and I want to migrate
that to 8 100GB LUNs. Is it possible to create an 800GB concat out of
the 8 devices, and mirror that to the original device, then detach the
original device? It is possible to do this
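If the concat/mirror approach turns out not to be possible directly in ZFS, one fallback sketch (pool, device and snapshot names are placeholders) is to build a new pool on the eight LUNs and copy the data with send/receive, at the cost of a short cutover window for the final pass:
# zpool create newpool c2t0d0 c2t1d0 c2t2d0 c2t3d0 c2t4d0 c2t5d0 c2t6d0 c2t7d0
# zfs snapshot -r oldpool@migrate
# zfs send -R oldpool@migrate | zfs receive -F -d newpool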
I think you are saying that even though format shows 9 devices (0-8) on
this system, there's really only 7 and the pool tank has only 5 (?).
I'm not sure why some devices would show up as duplicates.
Any recent changes to this system?
You might try exporting this pool and make sure that all
On Mon, 19 Jul 2010, Joerg Schilling wrote:
The missing requirement to provide build scripts is a drawback of the CDDL.
...But believe me that the GPL would not help you here, as the GPL cannot
force the original author (in this case Sun/Oracle or whoever) to supply the
scripts in question.
Richard Jahnel wrote:
I've tried ssh with blowfish and scp with arcfour. Both are CPU limited long before the
10G link is.
I've also tried mbuffer, but I get broken pipe errors part way through the
transfer.
Any idea why? Does the zfs send or zfs receive bomb out part way through?
Might be worth
On 07/18/10 17:39, Packet Boy wrote:
What I can not find is how to take an existing Fedora image and copy
its contents into a ZFS volume so that I can migrate this image
from my existing Fedora iSCSI target to a Solaris iSCSI target (and
of course get the advantages of having that disk
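Assuming the Fedora image is (or can be exported as) a raw disk image, one straightforward sketch is to create a zvol of at least the same size and copy the image onto it block for block; the names and size here are placeholders:
# zfs create -V 20g tank/fedora-vol
# dd if=/path/to/fedora.img of=/dev/zvol/rdsk/tank/fedora-vol bs=1M
The zvol can then be exported over iSCSI from the Solaris side.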
On Mon, 19 Jul 2010, Garrett D'Amore wrote:
With those same 14 drives, you can get 7x the performance instead of 2x
the performance by using mirrors instead of raidz2.
This is of course constrained by the limits of the I/O channel.
Sometimes the limits of PCI-E or interface cards become the
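To make the comparison concrete, the mirrored layout Garrett describes would look roughly like this (pool and device names are placeholders), giving seven top-level vdevs that can service random I/O independently:
# zpool create tank \
    mirror c0t0d0 c1t0d0  mirror c0t1d0 c1t1d0  mirror c0t2d0 c1t2d0 \
    mirror c0t3d0 c1t3d0  mirror c0t4d0 c1t4d0  mirror c0t5d0 c1t5d0 \
    mirror c0t6d0 c1t6d0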
A few things:
1.) did you move your drives around or change which controller each one
was connected to sometime after installing and setting up OpenSolaris?
If so, a pool export and re-import may be in order.
2.) are you sure the drive is failing? Does the problem only affect
this drive
If these files are deduped, and there is not a lot of RAM on the machine, it
can take a long, long time to work through the dedupe portion. I don't know
enough to know if that is what you are experiencing, but it could be the
problem.
How much RAM do you have?
Scott
If the format utility is not displaying the WD drives correctly,
then ZFS won't see them correctly either. You need to find out why.
I would export this pool and recheck all of your device connections.
cs
On 07/19/10 10:37, Yuri Homchuk wrote:
No, the pool tank consists of 7 physical
This is now CR 6970210.
I've been experimenting with a two system setup in snv_134 where
each system exports a zvol via COMSTAR iSCSI. One system imports
both its own zvol and the one from the other system and puts them
together in a ZFS mirror.
I manually faulted the zvol on one system by
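For anyone trying to reproduce a similar setup, the COMSTAR side of exporting a zvol looks roughly like this (names and sizes are placeholders; the GUID passed to stmfadm comes from the sbdadm output):
# zfs create -V 50g tank/mirrorleg
# sbdadm create-lu /dev/zvol/rdsk/tank/mirrorleg
# stmfadm add-view <GUID-from-sbdadm-output>
# itadm create-target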
If this is across a trusted link, have a look at the HPN patches to
SSH. There are three main benefits to these patches:
- increased (and dynamic) buffers internal to SSH
- adds a multi-threaded AES cipher
- adds the NONE cipher for non-encrypted data transfers
(authentication is still encrypted)
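A rough sketch of what the send then looks like with the NONE cipher (host and dataset names are placeholders; the option names assume HPN-patched ssh on both ends):
# zfs send tank/fs@snap | ssh -o NoneEnabled=yes -o NoneSwitch=yes otherhost 'zfs receive tank/fs'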
Hi,
If you can share those scripts that make use of mbuffer, please feel
free to do so ;)
Bruno
On 19-7-2010 20:02, Brent Jones wrote:
On Mon, Jul 19, 2010 at 9:06 AM, Richard Jahnel rich...@ellipseinc.com
wrote:
I've tried ssh with blowfish and scp with arcfour. Both are CPU limited long before
Richard,
On 19 Jul 2010, at 18:49, Richard Jahnel wrote:
I heard of some folks using netcat.
I haven't figured out where to get netcat nor the syntax for using
it yet.
I also did a bit of research into using netcat and found this...
On Mon, Jul 19, 2010 at 11:14 AM, Bruno Sousa bso...@epinfante.com wrote:
Hi,
If you can share those scripts that make use of mbuffer, please feel
free to do so ;)
Bruno
On 19-7-2010 20:02, Brent Jones wrote:
On Mon, Jul 19, 2010 at 9:06 AM, Richard Jahnel rich...@ellipseinc.com
wrote:
On 19-7-2010 20:36, Brent Jones wrote:
On Mon, Jul 19, 2010 at 11:14 AM, Bruno Sousa bso...@epinfante.com wrote:
Hi,
If you can share those scripts that make use of mbuffer, please feel
free to do so ;)
Bruno
On 19-7-2010 20:02, Brent Jones wrote:
On Mon, Jul 19, 2010 at 9:06
ap 2) there are still bugs that *must* be fixed before Btrfs can
ap be seriously considered:
ap http://www.mail-archive.com/linux-bt...@vger.kernel.org/msg05130.html
I really don't think that's a show-stopper. He filled the disk with
2KB files. HE FILLED THE DISK WITH 2KB
On 16/07/2010 23:57, Richard Elling wrote:
On Jul 15, 2010, at 4:48 AM, BM wrote:
2. No community = stale outdated code.
But there is a community. What is lacking is that Oracle, in their infinite
wisdom, has stopped producing OpenSolaris developer binary releases.
Not to be
On Jul 19, 2010, at 10:49 AM, Richard Jahnel wrote:
Any idea why? Does the zfs send or zfs receive bomb out part way through?
I have no idea why mbuffer fails. Changing the -s from 128 to 1536 made it
take longer to occur and slowed it down by about 20%, but didn't resolve the
issue. It
I've used mbuffer to transfer hundreds of TB without a problem in mbuffer
itself. You will get disconnected if the send or receive prematurely ends,
though.
mbuffer itself very specifically ends with a broken pipe error: very quickly
with -s set to 128, or after some time with -s set over 1024.
My
Hi,
some information is missing...
How large is your ARC / your main memory?
Probably too small to hold all metadata (1/1000 of the data amount).
=> metadata has to be read again and again.
A recordsize smaller than 128k increases the problem.
It's a data volume, perhaps raidz or raidz2, and
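For example, the current and maximum ARC sizes can be read straight from the arcstats kstat (a quick sketch):
# kstat -p zfs:0:arcstats:size
# kstat -p zfs:0:arcstats:c_max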
Thanks Cindy,
But format shows exactly the same thing:
All of them appear as Seagate, no WD at all...
How could that be?
# format
Searching for disks...done
AVAILABLE DISK SELECTIONS:
0. c1t0d0 <DEFAULT cyl 60798 alt 2 hd 255 sec 63>
/p...@0,0/pci15d9,a...@5/d...@0,0
1.
Hi, thanks for answering,
How large is your ARC / your main memory?
Probably too small to hold all metadata (1/1000 of the data amount).
=> metadata has to be read again and again
Main memory is 8GB. ARC (according to arcstat.pl) usually stays at 5-7GB
A recordsize smaller than 128k
I know that the ST3500320AS is a Seagate Barracuda.
That is exactly why I am confused.
I looked physically at the drives and I confirm again that 5 drives are Seagate and
2 drives are Western Digital.
But Solaris tells me that all 7 drives are Seagate Barracudas, which is
definitely not correct.
This is
No, the pool tank consists of 7 physical drives (5 Seagate and 2 Western
Digital).
See output below
# zpool status tank
  pool: tank
 state: ONLINE
 scrub: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        tank        ONLINE       0     0     0
1.) did you move your drives around or change which controller each one was
connected to sometime after installing and setting up OpenSolaris?
If so, a pool export and re-import may be in order.
No, I didn't. It was the original setup.
2.) are you sure the drive is failing? Does the problem only
Using SunOS X 5.11 snv_133 i86pc i386 i86pc. So the network thing that
was fixed in 129 shouldn't be the issue.
-Original Message-
From: Brent Jones [mailto:br...@servuhome.net]
Sent: Monday, July 19, 2010 1:02 PM
To: Richard Jahnel
Cc: zfs-discuss@opensolaris.org
Subject: Re:
FWIW I found netcat over at CSW.
http://www.opencsw.org/packages/CSWnetcat/
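The basic pattern would be something like the following (host, port and dataset names are placeholders; start the listener first, and note the stream is neither authenticated nor encrypted, so use it only on a trusted network). Depending on the netcat variant, the listen syntax is either 'nc -l -p 9090' or 'nc -l 9090':
On the receiving host:
# nc -l -p 9090 | zfs receive tank/fs
On the sending host:
# zfs send tank/fs@snap | nc recvhost 9090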
I was looking for a way to do this without downtime... It seems that
this kind of basic relayout operation should be easy to do.
On Mon, Jul 19, 2010 at 12:44 PM, Freddie Cash fjwc...@gmail.com wrote:
On Mon, Jul 19, 2010 at 9:06 AM, Max Levine max...@gmail.com wrote:
Is it possible in ZFS to
I'm currently running a Sun Fire V880 with snv_134, but would like to
upgrade the machine to a self-built snv_144. Unfortunately, boot
environment creation fails:
# beadm create snv_134-svr4
Unable to create snv_134-svr4.
Mount failed.
In truss output, I find
2514: mount(rpool, /rpool,
Hello,
I'm working on building an iSCSI storage server to use as the backend for
virtual servers. I am far more familiar with FreeBSD and Linux, but want to use
OpenSolaris for this project because of COMSTAR and ZFS. My plan was to have 24
2TB Hitachi SATA drives connected via SAS expanders to
On Mon, 2010-07-19 at 17:19 -0400, Max Levine wrote:
I was looking for a way to do this without downtime... It seems that
this kind of basic relayout operation should be easy to do.
On Mon, Jul 19, 2010 at 12:44 PM, Freddie Cash fjwc...@gmail.com wrote:
On Mon, Jul 19, 2010 at 9:06 AM, Max
On Jul 19, 2010, at 2:38 PM, Horace Demmink wrote:
Hello,
I'm working on building an iSCSI storage server to use as the backend for
virtual servers. I am far more familiar with FreeBSD and Linux, but want to
use OpenSolaris for this project because of COMSTAR and ZFS. My plan was to
have a
3.) on some systems I've found another version of the iostat command to be more
useful, particularly when iostat -En leaves the serial number field empty or
otherwise doesn't read the serial number correctly. Try
this:
' iostat -Eni ' indeed outputs Device ID on some of the
I've found plenty of documentation on how to create a
ZFS volume, share it over iSCSI, and then do a fresh
install of Fedora or Windows on the volume.
Really? I have found just the opposite: how to move your functioning
Windows/Linux install to iSCSI.
I am fumbling through this process for
On Mon, Jul 19, 2010 at 3:11 PM, Haudy Kazemi kaze0...@umn.edu wrote:
' iostat -Eni ' indeed outputs a Device ID on some of the drives, but I still
can't understand how it helps me to identify the model of a specific drive.
Curious:
[r...@nas01 ~]# zpool status -x
pool: tank
state: DEGRADED
status:
On Mon, Jul 19, 2010 at 1:42 PM, Wolfraider wolfrai...@nightwalkers.org wrote:
Our server locked up hard yesterday and we had to hard power it off and back
on. The server locked up again on reading ZFS config (I left it trying to
read the zfs config for 24 hours). I went through and removed
' iostat -Eni ' indeed outputs a Device ID on some of
the drives, but I still
can't understand how it helps me to identify the model
of a specific drive.
Get and install smartmontools. Period. I resisted it for a few weeks but it
has been an amazing tool. It will tell you more than you ever
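For instance, something like the following prints the model, serial and firmware for a drive; treat it as a sketch, since the device path is a placeholder and, depending on the controller, a -d option such as '-d sat,12' may be needed:
# smartctl -i /dev/rdsk/c1t0d0s0
# smartctl -a /dev/rdsk/c1t0d0s0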
On Mon, Jul 19, 2010 at 4:16 PM, Marty Scholes martyscho...@yahoo.com wrote:
Start a scrub or do an obscure find, e.g. find /tank_mountpoint -name core
and watch the drive activity lights. The drive in the pool which isn't
blinking like crazy is a faulted/offlined drive.
Ugly and
On Mon, Jul 19, 2010 at 4:16 PM, Marty Scholes martyscho...@yahoo.com wrote:
Start a scrub or do an obscure find, e.g. find /tank_mountpoint -name core
and watch the drive activity lights. The drive in the pool which isn't
blinking like crazy is a faulted/offlined drive.
Actually I guess
On Jul 19, 2010, at 4:21 PM, Michael Shadle wrote:
On Mon, Jul 19, 2010 at 4:16 PM, Marty Scholes martyscho...@yahoo.com wrote:
Start a scrub or do an obscure find, e.g. find /tank_mountpoint -name core
and watch the drive activity lights. The drive in the pool which isn't
blinking like
On Mon, Jul 19, 2010 at 4:26 PM, Richard Elling rich...@nexenta.com wrote:
Aren't you assuming the I/O error comes from the drive?
fmdump -eV
Okay, I guess I am. Is this just telling me "hey stupid, a checksum
failed"? In which case, why did this never resolve itself and the
specific device get
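For what it's worth, a quick way to scan the error telemetry without wading through the full verbose dump (the device name is a placeholder):
# fmdump -e                     # one line per error report, with timestamps
# fmdump -eV | grep -i c7t2d0   # full detail, filtered to one device
# fmadm faulty                  # any faults FMA has actually diagnosed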
Marty Scholes wrote:
' iostat -Eni ' indeed outputs a Device ID on some of
the drives, but I still
can't understand how it helps me to identify the model
of a specific drive.
Get and install smartmontools. Period. I resisted it for a few weeks but it
has been an amazing tool. It will tell
On Jul 19, 2010, at 4:30 PM, Michael Shadle wrote:
On Mon, Jul 19, 2010 at 4:26 PM, Richard Elling rich...@nexenta.com wrote:
Aren't you assuming the I/O error comes from the drive?
fmdump -eV
Okay, I guess I am. Is this just telling me "hey stupid, a checksum
failed"? In which case why
On 07/20/10 08:20 AM, Richard Jahnel wrote:
I've used mbuffer to transfer hundreds of TB without a problem in mbuffer
itself. You will get disconnected if the send or receive prematurely ends,
though.
mbuffer itself very specifically ends with a broken pipe error: very quickly
with -s
On Mon, Jul 19, 2010 at 4:35 PM, Richard Elling rich...@nexenta.com wrote:
It depends on whether the problem was fixed or not. What says
zpool status -xv
-- richard
[r...@nas01 ~]# zpool status -xv
pool: tank
state: DEGRADED
status: One or more devices has experienced an unrecoverable
On Wed, Jul 14 at 23:51, Tim Cook wrote:
Out of the Fortune 500, I'd be willing to bet there are exactly zero
companies that use whitebox systems, and for a reason.
--Tim
Sure, some core SAP system or HR data warehouse runs on name-brand
gear, and maybe they have massive SANs with various
On Mon, 2010-07-19 at 17:54 -0600, Eric D. Mudama wrote:
On Wed, Jul 14 at 23:51, Tim Cook wrote:
Out of the Fortune 500, I'd be willing to bet there are exactly zero
companies that use whitebox systems, and for a reason.
--Tim
Sure, some core SAP system or HR data warehouse runs on
FYI, everyone, I have some more info here. In short, Rich Lowe's 142 works
correctly (fast) on my hardware, while both my compilations (snv 143, snv 144)
and also the Nexenta 3 RC2 kernel (134 with backports) are horribly slow.
I finally got around to trying Rich Lowe's snv 142 compilation in
Yuri Homchuk wrote:
Well, this REALLY is a 300-user production server with 12 VMs
running on it, so I definitely won't play with the firmware :-)
I can easily identify which drive is what by physically looking at it.
It's just sad to realize that I cannot trust Solaris anymore.
I never
more below...
On Jul 19, 2010, at 4:42 PM, Michael Shadle wrote:
On Mon, Jul 19, 2010 at 4:35 PM, Richard Elling rich...@nexenta.com wrote:
It depends on whether the problem was fixed or not. What says
zpool status -xv
-- richard
[r...@nas01 ~]# zpool status -xv
pool: tank
On 20/07/10 10:40 AM, Chad Cantwell wrote:
FYI, everyone, I have some more info here. In short, Rich Lowe's 142 works
correctly (fast) on my hardware, while both my compilations (snv 143, snv 144)
and also the Nexenta 3 RC2 kernel (134 with backports) are horribly slow.
I finally got around to
On Mon, 2010-07-19 at 17:40 -0700, Chad Cantwell wrote:
FYI, everyone, I have some more info here. In short, Rich Lowe's 142 works
correctly (fast) on my hardware, while both my compilations (snv 143, snv 144)
and also the Nexenta 3 RC2 kernel (134 with backports) are horribly slow.
The idea
Erik's experiences echo mine. I've never seen a white-box in a medium to
large company that I've visited. Always a name brand.
His comments on sysadmin staffing are dead on.
Jim Litchfield
Oracle Consulting
On 7/19/2010 5:35 PM, Erik Trimble wrote:
On Mon, 2010-07-19
On Tue, Jul 20, 2010 at 10:54:44AM +1000, James C. McPherson wrote:
On 20/07/10 10:40 AM, Chad Cantwell wrote:
FYI, everyone, I have some more info here. In short, Rich Lowe's 142 works
correctly (fast) on my hardware, while both my compilations (snv 143, snv 144)
and also the Nexenta 3 RC2
Rodrigo E. De León Plicet wrote:
On Fri, Jun 25, 2010 at 9:08 PM, Erik Trimble erik.trim...@oracle.com wrote:
(2) Ubuntu is a desktop distribution. Don't be fooled by their server
version. It's not - it has too many idiosyncrasies and bad design choices to
be a stable server OS. Use
On Mon, Jul 19, 2010 at 06:00:04PM -0700, Brent Jones wrote:
On Mon, Jul 19, 2010 at 5:40 PM, Chad Cantwell c...@iomail.org wrote:
FYI, everyone, I have some more info here. In short, Rich Lowe's 142 works
correctly (fast) on my hardware, while both my compilations (snv 143, snv
144)
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Richard L. Hamilton
I would imagine that if it's read-mostly, it's a win, but
otherwise it costs more than it saves. Even more conventional
compression tends to be more resource intensive
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Pasi Kärkkäinen
Red Hat Fedora 13 includes BTRFS, but it's not used as the default (yet).
RHEL6 beta also includes BTRFS support (tech preview), but again,
Upcoming Ubuntu 10.10 will use
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Richard Jahnel
I've also tried mbuffer, but I get broken pipe errors part way through
the transfer.
The standard answer is mbuffer. I think you should ask yourself what's
going wrong with
On Mon, Jul 19, 2010 at 11:06 PM, Richard Jahnel rich...@ellipseinc.com wrote:
I've tried ssh with blowfish and scp with arcfour. Both are CPU limited long before the
10G link is.
I've also tried mbuffer, but I get broken pipe errors part way through the
transfer.
I'm open to ideas for faster ways
On Sat, Jun 26, 2010 at 12:20 AM, Ben Miles
merloc...@hotmail.com wrote:
What supporting applications are there on Ubuntu
for RAIDZ?
None. Ubuntu doesn't officially support ZFS.
You can kind of make it work using the ZFS-FUSE
project. But it's not
stable, nor recommended.
I have