On 10/16/10 12:29 PM, Marty Scholes wrote:
On Fri, Oct 15, 2010 at 3:16 PM, Marty Scholes
martyscho...@yahoo.com wrote:
My home server's main storage is a 22 (19 + 3) disk
RAIDZ3 pool backed up hourly to a 14 (11+3) RAIDZ3
backup pool.
How long does it take to resilver a disk
On 10/11/10 05:40 AM, Günther wrote:
on my raidz3 pool one drive failed. On resilvering, the hot spare seems to have failed as well. This ended in
an insufficient replicas error, with the hot spare's state showing too many errors.
i could bring the hot spare back by exporting/importing the pool (hot spare is
On 10/11/10 05:13 PM, Harry Putnam wrote:
Osol b134
I'm experiencing a forced shutdown of a machine.
It is preceded by a number of beeps in a steady pattern like
beep beep beep beep beep beep
And onward. 2 beeps pause 2 beeps pause... etc.
The beeps are the same
On 10/ 7/10 06:22 PM, Stephan Budach wrote:
Hi Edward,
these are interesting points. I have considered a couple of them, when I
started playing around with ZFS.
I am not sure whether I disagree with all of your points, but I conducted a
couple of tests, where I configured my raids as jbods
On 10/ 8/10 10:54 AM, Roy Sigurd Karlsbakk wrote:
Hi all
I'm setting up a couple of 110TB servers and I just want some feedback in case
I have forgotten something.
The servers (two of them) will, as of current plans, be using 11 VDEVs with 7
2TB WD Blacks each, with a couple of Crucial
On 10/ 8/10 11:06 AM, Roy Sigurd Karlsbakk wrote:
- Original Message -
On 10/ 8/10 10:54 AM, Roy Sigurd Karlsbakk wrote:
Hi all
I'm setting up a couple of 110TB servers and I just want some
feedback in case I have forgotten something.
The servers (two of them) will, as of
On 10/ 8/10 11:22 AM, Scott Meilicke wrote:
Those must be pretty busy drives. I had a recent failure of a 1.5T disk in a 7
disk raidz2 vdev that took about 16 hours to resilver. There was very little IO
on the array, and it had maybe 3.5T of data to resilver.
On Oct 7, 2010, at 3:17 PM, Ian
On 10/ 6/10 09:52 PM, Stephan Budach wrote:
Hi,
I recently discovered some - or at least one - corrupted file on one of my ZFS
datasets, which caused an I/O error when trying to send a ZFS snapshot to
another host:
zpool status -v obelixData
pool: obelixData
state: ONLINE
status: One or
On 09/29/10 09:38 AM, Nicolas Williams wrote:
I've researched this enough (mainly by reading most of the ~240 or so
relevant zfs-discuss posts and several bug reports) to conclude the
following:
- ACLs derived from POSIX mode_t and/or POSIX Draft ACLs that result in
DENY ACEs are
On 09/25/10 02:54 AM, Erik Trimble wrote:
Honestly, I've said it before, and I'll say it (yet) again: unless
you have very stringent power requirements (or some other unusual
requirement, like very, very low noise), used (or even new-in-box,
previous-generation excess inventory) OEM stuff
On 09/26/10 07:25 AM, Erik Trimble wrote:
On 9/25/2010 1:57 AM, Ian Collins wrote:
On 09/25/10 02:54 AM, Erik Trimble wrote:
Honestly, I've said it before, and I'll say it (yet) again: unless
you have very stringent power requirements (or some other unusual
requirement, like very, very low
On 09/23/10 06:33 PM, Alexander Skwar wrote:
Hi.
2010/9/19 R.G. Keen <k...@geofex.com>:
and last-generation hardware is very, very cheap.
Yes, of course, it is. But, actually, is that a true statement? I've read
that it's *NOT* advisable to run ZFS on systems which do NOT have ECC
RAM.
On 09/23/10 05:00 PM, Carl Brewer wrote:
G'day,
My OpenSolaris (b134) box is low on space and has a ZFS mirror for root:
uname -a
SunOS wattage 5.11 snv_134 i86pc i386 i86pc
rpool   696G   639G   56.7G   91%   1.09x   ONLINE   -
It's currently a pair of 750GB drives. In my bag I have a
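A minimal sketch of the usual in-place upgrade, with placeholder device names
(c0t0d0s0 = one existing 750GB disk, c0t1d0s0 = a new, larger disk):

  zpool attach rpool c0t0d0s0 c0t1d0s0                                 # mirror onto the new disk
  installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c0t1d0s0   # make it bootable (x86)
  zpool status rpool                                                   # wait for the resilver to complete
  zpool detach rpool c0t0d0s0                                          # then retire the old disk

Repeat for the second half of the mirror.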
On 09/21/10 06:52 AM, sridhar surampudi wrote:
Thank you for your quick reply.
When I run the command below, it shows:
bash-3.00# zpool upgrade
This system is currently running ZFS pool version 15.
All pools are formatted using this version.
How can I upgrade to new zpool and zfs versions so
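A minimal sketch of the upgrade itself; note that the available versions are
capped by the installed OS, and older boot environments may no longer be able
to import an upgraded pool:

  zpool upgrade -a   # upgrade all pools to the highest version the running bits support
  zfs upgrade -a     # then upgrade all filesystems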
On 09/18/10 06:47 PM, Carsten Aulbert wrote:
Hi all
one of our systems just developed something remotely similar:
s06:~# zpool status
pool: atlashome
state: DEGRADED
status: One or more devices is currently being resilvered. The pool will
continue to function, possibly in a
On 09/18/10 08:58 PM, Carsten Aulbert wrote:
Hi
On Saturday 18 September 2010 10:02:42 Ian Collins wrote:
I see this all the time on a troublesome Thumper. I believe this
happens because the data in the pool is continuously changing.
Ah ok, that may be, there is one particular
On 09/19/10 12:01 AM, Tom Bird wrote:
On 18/09/10 09:02, Ian Collins wrote:
On 09/18/10 06:47 PM, Carsten Aulbert wrote:
Has someone an idea how it is possible to resilver 678G of data on a
500G drive?
I see this all the time on a troublesome Thumper. I believe this happens
because
On 09/19/10 08:11 AM, Stephan Ferraro wrote:
This is new for me:
$ zpool status
pool: rpool
state: ONLINE
status: One or more devices has experienced an unrecoverable error. An
attempt was made to correct the error. Applications are unaffected.
action: Determine if the device
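Where the errors were transient and have been corrected, the usual follow-up is
to clear the counters and verify with a scrub, e.g.:

  zpool clear rpool
  zpool scrub rpool
  zpool status -x    # reports 'all pools are healthy' once the scrub passes clean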
On 09/18/10 04:28 AM, Tom Bird wrote:
Bob Friesenhahn wrote:
On Fri, 17 Sep 2010, Tom Bird wrote:
Morning,
c7t5000CCA221F4EC54d0 is a 2T disk, how can it resilver 5.63T of it?
This is actually an old capture of the status output, it got to
nearly 10T before deciding that there was an error
On 09/18/10 04:46 PM, Neil Perrin wrote:
On 09/17/10 18:32, Edward Ned Harvey wrote:
From: Neil Perrin [mailto:neil.per...@oracle.com]
you lose information. Not your whole pool. You lose up to
30 sec of writes
The default is now 5 seconds (zfs_txg_timeout).
When did
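For reference, a sketch of how this tunable is typically inspected and set on
Solaris (an unstable tunable; the name and default have varied across releases):

  echo zfs_txg_timeout/D | mdb -k    # read the current value from the live kernel
  # or persistently, via /etc/system:
  set zfs:zfs_txg_timeout = 10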
On 09/16/10 09:18 AM, Edward Ned Harvey wrote:
From: Richard Elling [mailto:rich...@nexenta.com]
Suppose you want to ensure at least 99% efficiency of the drive. At
most 1% of the time wasted by seeking.
This is practically impossible on a HDD. If you need this, use
On 09/15/10 12:56 PM, Peter Jeremy wrote:
I am looking at backing up my fileserver by replicating the
filesystems onto an external disk using send/recv with something
similar to:
zfs send ... myp...@snapshot | zfs recv -d backup
but have run into a bit of a gotcha with the mountpoint
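One common workaround, sketched ("mypool" stands in for the elided pool name):
receive with -u so the replicated filesystems are not mounted and their
mountpoint properties cannot clobber the live ones:

  zfs send mypool@snapshot | zfs recv -d -u backup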
On 09/ 9/10 11:37 AM, Rather not say wrote:
Hello -
After waiting an hour or so for opensolaris, I had forgotten what username I put
in, so I booted into Windows to see if I could find it; no luck.
How can I figure it out?
Not by asking here! The opensolaris-help list is more appropriate.
Boot
On 09/ 9/10 01:14 PM, Fei Xu wrote:
Hi all:
I'm a new guy who has only been using ZFS for half a year. We are using
Nexenta in a corporate pilot environment. These days, when I was trying to move
around 4TB of data from an old pool (4*2TB raidz) to a new pool (11*2TB raidz2), it
seems it will never
On 09/ 9/10 02:42 PM, Fei Xu wrote:
now it gets extremely slow at around 400G sent.
first iostat result is captured when the send operation starts.
              capacity     operations    bandwidth
pool        alloc   free   read  write   read  write
----------  -----  -----  -----  -----  -----  -----
On 08/28/10 11:39 PM, LaoTsao 老曹 wrote:
hi all
Trying to learn how the UFS-root to ZFS-root Live Upgrade (liveUG) works.
I downloaded the vbox image of s10u8; it comes up as UFS root.
add a new disk (16GB)
create zpool rpool
run lucreate -n zfsroot -p rpool
run luactivate zfsroot
run lustatus: it does show zfsroot will
On 08/28/10 11:13 AM, Robert Milkowski wrote:
Hi,
When I set readonly=on on a dataset then no new files are allowed to
be created.
However writes to already opened files are allowed.
This is rather counter-intuitive - if I set a filesystem as read-only
I would expect it not to allow any
On 08/28/10 12:05 PM, Ian Collins wrote:
On 08/28/10 11:13 AM, Robert Milkowski wrote:
Hi,
When I set readonly=on on a dataset then no new files are allowed to
be created.
However writes to already opened files are allowed.
This is rather counter-intuitive - if I set a filesystem as read
On 08/28/10 12:45 PM, Edward Ned Harvey wrote:
Another specific example ...
Suppose you zfs send from a primary server to a backup server. You want
the filesystems to be readonly on the backup fileserver, in order to receive
incrementals. If you make a mistake, and start writing to the backup
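A sketch of that arrangement (dataset names are placeholders): readonly=on
blocks local writes but not zfs recv, and -F discards any stray local changes
before the incremental is applied:

  zfs set readonly=on backup/fs
  zfs send -i fs@monday fs@tuesday | ssh backupserver zfs recv -F backup/fs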
On 08/23/10 10:38 AM, Richard Elling wrote:
On Aug 21, 2010, at 9:22 PM, devsk wrote:
If dedup is ON and the pool develops a corruption in a file, I can never fix it
because when I try to copy the correct file on top of the corrupt file,
the block hash will match with the existing blocks
On 08/21/10 07:03 PM, Martin Mundschenk wrote:
After about 62 hours and 90%, the resilvering process got stuck. Nothing
has happened for 12 hours now. Thus, I cannot detach the spare
device. Is there a way to get the resilvering process running again?
Are you sure it's stuck? They can take
On 08/21/10 08:50 PM, Simone Caldana wrote:
On 21 Aug 2010, at 10:10, Ian Collins wrote:
On 08/21/10 07:03 PM, Martin Mundschenk wrote:
After about 62 hours and 90%, the resilvering process got stuck. Nothing has
happened for 12 hours now. Thus, I cannot detach
On 08/21/10 12:53 PM, devsk wrote:
I have a USB flash drive which boots up my opensolaris install. What happens is
that whenever I move to a different machine,
the root pool is lost because the devids don't match with what's in
/etc/zfs/zpool.cache and the system just can't find the rpool.
On 08/19/10 08:51 PM, Joerg Schilling wrote:
Ian Collins <i...@ianshome.com> wrote:
A quick test with a C++ application I'm working with which does a lot of
string and container manipulation shows it
runs about 10% slower in 64 bit mode on AMD64 and about the same in 32
or 64 bit on a core
On 08/20/10 07:13 AM, C. Bergström wrote:
(Why is this being discussed on zfs-discuss)
As a distraction from the endless circular licensing arguments?
--
Ian.
On 08/20/10 07:48 AM, Garrett D'Amore wrote:
On Thu, 2010-08-19 at 20:14 +0100, Daniel Taylor wrote:
On 19 Aug 2010, at 19:42, Garrett D'Amore wrote:
Out of interest, what language do you recommend?
Depends on the job -- I'm a huge fan of choosing the right tool for the
job. I just
On 08/20/10 08:35 AM, Garrett D'Amore wrote:
On Fri, 2010-08-20 at 03:26 +0700, C. Bergström wrote:
Ian Collins wrote:
On 08/20/10 07:48 AM, Garrett D'Amore wrote:
On Thu, 2010-08-19 at 20:14 +0100, Daniel Taylor wrote:
On 19 Aug 2010, at 19:42, Garrett D'Amore
On 08/20/10 08:30 AM, Garrett D'Amore wrote:
On Fri, 2010-08-20 at 07:58 +1200, Ian Collins wrote:
On 08/20/10 07:48 AM, Garrett D'Amore wrote:
On Thu, 2010-08-19 at 20:14 +0100, Daniel Taylor wrote:
On 19 Aug 2010, at 19:42, Garrett D'Amore wrote:
Out of interest, what
On 08/20/10 09:26 AM, Garrett D'Amore wrote:
On Fri, 2010-08-20 at 09:23 +1200, Ian Collins wrote:
There is no common C++ ABI. So you get into compatibility concerns
between code built with different compilers (like Studio vs. g++).
Fail.
Which is why we have extern "C". Just about
On 08/20/10 09:33 AM, Nicolas Williams wrote:
On Fri, Aug 20, 2010 at 09:23:56AM +1200, Ian Collins wrote:
On 08/20/10 08:30 AM, Garrett D'Amore wrote:
There is no common C++ ABI. So you get into compatibility concerns
between code built with different compilers (like Studio vs. g
On 08/20/10 09:48 AM, Nicolas Williams wrote:
On Fri, Aug 20, 2010 at 09:38:51AM +1200, Ian Collins wrote:
On 08/20/10 09:33 AM, Nicolas Williams wrote:
Any driver C++ code would still need a C++ run-time. Either you must
statically link it in, or you'll have a problem with multiple
On 08/19/10 04:56 AM, seth keith wrote:
I had a perfectly working 7 drive raidz pool using some on-board SATA
connectors and some on PCI SATA controller cards. My pool was using 500GB
drives. I had the stupid idea to replace my 500GB drives with 2TB (Mitsubishi)
drives. This process
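For reference, the usual one-at-a-time procedure looks like this (placeholder
device name; autoexpand requires a reasonably recent pool version):

  zpool replace pool c2t0d0       # repeat per drive, letting each resilver finish
  zpool set autoexpand=on pool    # grow the pool once all drives are the new size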
On 08/18/10 08:40 AM, Joerg Schilling wrote:
Ian Collins <i...@ianshome.com> wrote:
Some application benefit from the extended register set and function
call ABI, others suffer due to increased sizes impacting the cache.
Well, please verify your claims as they do not meet my
On 08/17/10 09:43 PM, Joerg Schilling wrote:
Garrett D'Amore <garr...@nexenta.com> wrote:
It can be as simple as impact on the cache. 64-bit programs tend to be
bigger, and so they have a worse effect on the i-cache.
Unless your program does something that can inherently benefit from
On 08/18/10 12:05 AM, Joerg Schilling wrote:
Ian Collins <i...@ianshome.com> wrote:
If you have an orthogonal architecture like sparc, a typical 64 bit program is
indeed a bit slower than the same program in 32 bit.
On Amd64, you have twice as many registers in 64 bit mode and this is the
On 08/18/10 08:40 AM, Joerg Schilling wrote:
Ian Collins <i...@ianshome.com> wrote:
On 08/18/10 12:05 AM, Joerg Schilling wrote:
Ian Collins <i...@ianshome.com> wrote:
If you have an orthogonal architecture like sparc, a typical 64 bit program is
indeed a bit slower than the
I look after an x4500 for a client and we keep getting drives marked as
degraded with just over 20 checksum errors.
Most of these errors appear to be driver or hardware related and their
frequency increases during a resilver, which can lead to a death
spiral. The increase in errors within a
On 08/16/10 12:37 PM, Richard Elling wrote:
On Aug 15, 2010, at 4:59 PM, Ian Collins wrote:
I look after an x4500 for a client and we keep getting drives marked as
degraded with just over 20 checksum errors.
Most of these errors appear to be driver or hardware related and their
On 08/10/10 06:21 PM, Terry Hull wrote:
I am wanting to build a server with 16 1TB drives with two 8-drive
RAID-Z2 arrays striped together. However, I would like the capability
of adding additional stripes of 2TB drives in the future. Will this be
a problem? I thought I read it is best to
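Adding another stripe later is a single command; a sketch with placeholder
device names for the eight 2TB drives:

  zpool add tank raidz2 c3t0d0 c3t1d0 c3t2d0 c3t3d0 c3t4d0 c3t5d0 c3t6d0 c3t7d0

Mixed vdev sizes work, but ZFS balances writes by free space, so the new,
emptier vdev will absorb most new writes at first.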
On 08/10/10 09:12 PM, Andrew Gabriel wrote:
Phil Harman wrote:
On 10 Aug 2010, at 08:49, Ian Collins i...@ianshome.com wrote:
On 08/10/10 06:21 PM, Terry Hull wrote:
I am wanting to build a server with 16 1TB drives with two 8-drive
RAID-Z2 arrays striped together. However, I would like
On 08/10/10 10:09 PM, Phil Harman wrote:
On 10 Aug 2010, at 10:22, Ian Collins <i...@ianshome.com> wrote:
On 08/10/10 09:12 PM, Andrew Gabriel wrote:
Another option - use the new 2TB drives to swap out the existing 1TB drives.
If you can find another use for the swapped out drives, this
On 08/11/10 05:16 AM, Terry Hull wrote:
So do I understand correctly that really the right thing to do is to build
a pool not only with a consistent stripe width, but also to build it with
drives of only one size? It also sounds like, from a practical point of
view, that building the pool
On 08/11/10 03:45 PM, David Dyer-Bennet wrote:
On 10-Aug-10 13:46, David Dyer-Bennet wrote:
It's possible that a snapshot was *deleted* on the sending pool
during the
send operation, however. Also that snapshots were created (however, a
newly created one would be after the one specified in
On 07/29/10 07:41 AM, sol wrote:
Hi
Having just done a scrub of a mirror I've lost a file and I'm curious how this
can happen in a mirror. Doesn't it require the almost impossible scenario
of exactly the same sector being trashed on both disks? However the
zpool status shows checksum errors
On 07/23/10 04:38 PM, Chris wrote:
Apologies if this question has been answered before, and sorry if this is in the wrong
forum (I couldn't find communities zfs discuss in the list) but I haven't
been able to find the answer despite extensive searching.
I have a zpool consisting of 3 x 1TB
On 07/21/10 03:12 AM, Richard Jahnel wrote:
On the receiver
/opt/csw/bin/mbuffer -m 1G -I Ostor-1:8000 | zfs recv -F e...@sunday
in @ 0.0 kB/s, out @ 0.0 kB/s, 43.7 GB total, buffer 100% full
cannot receive new filesystem stream: invalid backup stream
mbuffer: error: outputThread: error
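For reference, a sketch of the matching sender side of that pipeline ("receiver"
is a placeholder for the receiving host, which listens on port 8000 via -I):

  zfs send pool/fs@sunday | mbuffer -s 128k -m 1G -O receiver:8000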
On 07/20/10 08:20 AM, Richard Jahnel wrote:
I've used mbuffer to transfer hundreds of TB without a problem in mbuffer
itself. You will get disconnected if the send or receive prematurely ends,
though.
mbuffer itself very specifically ends with a broken pipe error. Very quickly
with s
On 07/18/10 11:19 AM, marco wrote:
I'm seeing weird differences between 2 raidz pools, 1 created on a recent
FreeBSD 9.0-CURRENT amd64 box containing the zfs v15 bits, the other on an old
osol build.
The raidz pool on the fbsd box is created from 3 2TB SATA drives.
The raidz pool on the osol box
On 07/14/10 07:10 PM, Peter Taps wrote:
Folks,
This is probably a very naive question.
Is it possible to set zfs for bi-directional synchronization of data across two
locations? I am thinking this is almost impossible. Consider two files A and B
at two different sites. There are three
On 07/14/10 03:55 AM, David Dyer-Bennet wrote:
On Fri, July 9, 2010 16:49, BJ Quinn wrote:
I have a couple of systems running 2009.06 that hang on relatively large
zfs send/recv jobs. With the -v option, I see the snapshots coming
across, and at some point the process just pauses, IO and
On 07/14/10 04:20 PM, Edward Ned Harvey wrote:
Here's a really simple way to get some pricing information:
Go to Dell.com. Servers. Servers. Rack. Enhanced. PowerEdge R710
(Customize.)
You could pick any server that supports solaris. I just chose the R710
because I know it does.
On 07/13/10 06:48 AM, BJ Quinn wrote:
Yeah, it's just that I don't think I'll be allowed to put up a dev version, but
I would probably get away with putting up 2008.11 if it doesn't have the same
problems with zfs send/recv. Does anyone know?
That would be a silly thing to do. Your
On 07/13/10 11:10 AM, Kris Kasner wrote:
Hi Folks..
I have a system that was inadvertently left unmirrored for root. We
were able to add a mirror disk, resilver, and fix the corrupted files
(nothing very interesting was corrupt, whew), but zpool status -v
still shows errors..
Will this
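The error list in zpool status -v persists until the pool is cleared and a
subsequent scrub completes without rediscovering the damage; a sketch (rpool as
a placeholder pool name):

  zpool clear rpool
  zpool scrub rpool
  zpool status -v rpool   # the file list should be gone after a clean scrub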
On 07/13/10 12:26 PM, Gary Leong wrote:
I'm looking to use ZFS to export iSCSI volumes to a Windows/Linux client.
Essentially, I'm looking to create two storage ZFS machines that I will export
iSCSI targets from. Then from the client side, I will enable mirroring. The
two ZFS machines
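On builds of that vintage, the quick way to export a zvol was the shareiscsi
property (since superseded by COMSTAR); a sketch with placeholder names:

  zfs create -V 100G tank/winvol      # create the backing volume
  zfs set shareiscsi=on tank/winvol   # export it via the legacy iSCSI target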
On 07/10/10 08:10 AM, zfsnoob4 wrote:
I'm not trying to fix anything in particular, I'm just curious. In case I
roll back a filesystem and then realize I wanted a file from the original file
system (before the rollback).
I read the section on clones here:
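For a single file there is often no need to roll back at all: every snapshot is
browsable under the hidden .zfs directory at the filesystem root, e.g. (with
placeholder names):

  cp /tank/fs/.zfs/snapshot/yesterday/some/file /tank/fs/some/file

A clone of the snapshot also works if a writable copy of the old tree is needed.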
On 07/10/10 09:49 AM, BJ Quinn wrote:
I have a couple of systems running 2009.06 that hang on relatively large zfs
send/recv jobs. With the -v option, I see the snapshots coming across, and at
some point the process just pauses, IO and CPU usage go to zero, and it takes a
hard reboot to get
On 07/ 9/10 09:21 AM, Edward Ned Harvey wrote:
Suppose I have a fileserver, which may be zpool 10, 14, or 15. No
compression, no dedup.
Suppose I have a backupserver. I want to zfs send from the fileserver
to the backupserver, and I want the backupserver to receive and store
compressed
On 07/ 9/10 10:59 AM, Brandon High wrote:
Personally, I've started organizing datasets in a hierarchy, setting
the properties that I want for descendant datasets at a level where they
will apply to everything that I want them to cover. So if you have your
source at tank/export/foo and your
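A sketch of that hierarchy idea: set a property once on the parent and every
descendant inherits it (compression chosen only as an example):

  zfs create -o compression=on tank/export
  zfs create tank/export/foo            # inherits compression=on
  zfs get -r compression tank/export    # SOURCE column shows 'inherited from tank/export'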
On 07/ 9/10 01:29 PM, zfsnoob4 wrote:
Hi,
I have a question about snapshots. If I restore a file system based on some
snapshot I took in the past, is it possible to revert back to before I
restored? ie:
zfs snapshot t...@yesterday
mkdir /test/newfolder
zfs rollback t...@yesterday
so now
On 07/ 6/10 02:21 AM, Francois wrote:
Hi list,
Here's my case :
pool: mypool
state: DEGRADED
status: One or more devices is currently being resilvered. The pool will
continue to function, possibly in a degraded state.
action: Wait for the resilver to complete.
scrub: resilver in
On 07/ 4/10 02:54 PM, zfsnoob4 wrote:
Hello,
I'm using opensolaris b134 and I'm trying to mount an NTFS partition. I followed
the instructions located here:
http://sun.drydog.com/faq/9.html
You have posted to the wrong list; opensolaris-help would be more
appropriate, so I've copied that
On 07/ 2/10 04:12 PM, Peter Taps wrote:
Folks,
While going through a quick tutorial on zfs, I came across a way to create a zfs
filesystem within a filesystem. For example:
# zfs create mytest/peter
where mytest is a zpool filesystem.
When done this way, the new filesystem has the mount point
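By default the new filesystem's mountpoint is derived from its parent's, so the
example above lands at /mytest/peter:

  zfs create mytest/peter
  zfs get mountpoint mytest/peter   # /mytest/peter, unless overridden with -o mountpoint=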
On 07/ 1/10 01:36 AM, Tony MacDoodle wrote:
Hello,
Has anyone encountered the following error message, running Solaris 10
u8 in an LDom?
bash-3.00# devfsadm
devfsadm: write failed for /dev/.devfsadm_dev.lock: Bad exchange
descriptor
Not specifically. But it is clear from what follows
On 06/28/10 08:15 PM, Gabriele Bulfon wrote:
I found this today:
http://blog.lastinfirstout.net/2010/06/sunoracle-finally-announces-zfs-data.html?utm_source=feedburnerutm_medium=feedutm_campaign=Feed%3A+LastInFirstOut+%28Last+In%2C+First+Out%29utm_content=FriendFeed+Bot
How can I be sure my
I've noticed (at least on Solaris 10) that the resilver rate appears to
slow down considerably as it nears completion.
On an eight 500G raidz2 vdev, after 28 hours zpool status reported:
spare             DEGRADED     0     0    63
  c1t6d0          DEGRADED     0     0    11  too many
On 06/21/10 03:55 AM, Roy Sigurd Karlsbakk wrote:
Hi all
We're working on replacing our current fileserver with something based on
either Solaris or NexentaStor. We have about 200 users with variable needs.
There will also be a few common areas for each department and perhaps a backup
area.
On 06/18/10 09:21 PM, artiepen wrote:
This is a test system. I'm wondering, now, if I should just reconfigure with
maybe 7 disks and add another spare. Seems to be the general consensus that
bigger raid pools = worse performance. I thought the opposite was true...
No, wider vdevs give
On 06/ 1/10 07:16 AM, Sandon Van Ness wrote:
Here is zpool status for my 'data' pool:
r...@opensolaris: 11:43 AM :~# zpool status data
pool: data
state: ONLINE
scrub: none requested
config:
NAME STATE READ WRITE CKSUM
data
On 05/27/10 09:16 PM, Per Jorgensen wrote:
thanks for the quick responses, and yes, the history shows just what you said :(
is there a way i can get c9t8d0 out of the pool, or how do i get the pool back
to optimal redundancy?
No, you will have to destroy the pool and start over. Or if
On 05/23/10 08:52 AM, Thomas Burgess wrote:
If you install Opensolaris with the AHCI settings off, then switch
them on, it will fail to boot
I had to reinstall with the settings correct.
Well you probably didn't have to. Booting from the live CD and
importing the pool would have put things
On 05/23/10 08:43 AM, Brian wrote:
Is there a way within opensolaris to detect if AHCI is being used by various
controllers?
I suspect you may be accurate and AHCI is not turned on. The BIOS for this particular
motherboard is fairly confusing on the AHCI settings. The only setting I have is
On 05/23/10 11:31 AM, Brian wrote:
Sometimes when it hangs on boot hitting space bar or any key won't bring it
back to the command line. That is why I was wondering if there was a way to
not show the splashscreen at all, and rather show what it was trying to load
when it hangs.
From my
On 05/23/10 01:18 PM, Thomas Burgess wrote:
this worked fine, next today, i wanted to send what has changed
i did
zfs snapshot tank/nas/d...@second
now, here's where i'm confused...from reading the man page i thought
this command would work:
pfexec zfs send -i tank/nas/d...@first
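For reference, the incremental form needs both snapshots (DATASET stands in for
the elided dataset name; the -i source may be given bare, as @first):

  pfexec zfs send -i @first tank/nas/DATASET@second | zfs recv -d backup   # or pipe over ssh to the receiving host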
On 05/23/10 03:56 PM, Thomas Burgess wrote:
let me ask a question though.
Lets say i have a filesystem
tank/something
i make the snapshot
tank/someth...@one
i send/recv it
then i do something (add a file...remove something, whatever) on the
send side, then i do a send/recv and force it of
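The forced receive in that scenario is just -F, which rolls the receiving copy
back to the last common snapshot before applying the stream; a sketch:

  zfs send -i tank/something@one tank/something@two | zfs recv -F backup/something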
On 05/22/10 12:31 PM, Don wrote:
I just spoke with a co-worker about doing something about it.
He says he can design a small in-line UPS that will deliver 20-30
seconds of 3.3V, 5V, and 12V to the SATA power connector for about $50
in parts. It would be even less if only one voltage was needed.
On 05/22/10 12:54 PM, Thomas Burgess wrote:
Something i've been meaning to ask
I'm transferring some data from my older server to my newer one. The
older server has a socket 775 Intel Q9550, 8 GB DDR2-800, and 20 1TB drives
in raidz2 (3 vdevs, 2 with 7 drives, one with 6) connected to 3
On 05/22/10 04:44 PM, Thomas Burgess wrote:
I can't tell you for sure
For some reason the server lost power and it's taking forever to come
back up.
(i'm really not sure what happened)
anyways, this leads me to my next couple questions:
Is there any way to resume a zfs send/recv
On 05/22/10 05:22 PM, Thomas Burgess wrote:
yah, it seems that rsync is faster for what i need anyways...at least
right now...
ZFS send/receive should run at wire speed for a Gig-E link.
Ian.
On 05/20/10 08:39 PM, roi shidlovsky wrote:
hi.
i am trying to attach a mirror disk to my root pool. if the two disks are the same size,
it all works fine, but if the two disks are different sizes (8GB and 7.5GB) i get an
I/O error on the attach command.
can anybody tell me what am i doing
On 05/19/10 09:34 PM, Philippe wrote:
Hi !
It is strange because I've checked the SMART data of the 4 disks, and
everything seems really OK! (on another hardware/controller, because I needed
Windows to check it). Maybe it's a problem with the SAS/SATA controller?!
One question: if I halt
On 05/17/10 12:08 PM, Thomas Burgess wrote:
well, i haven't had a lot of time to work with this...but i'm having
trouble getting the onboard sata to work in anything but NATIVE IDE mode.
I'm not sure exactly what the problem is...i'm wondering if i bought
the wrong cable (i have a norco
On 05/15/10 09:43 PM, Jason Barr wrote:
Hello,
I want to slice these 3 disks into 2 partitions each and configure 1 Raid0 and
1 Raidz1 on these 3.
Let's get the obvious question out of the way first: why?
If you intend one two-way mirror and one raidz, you will either have to
waste one
On 05/16/10 06:52 AM, John Balestrini wrote:
Howdy All,
I've a bit of a strange problem here. I have a filesystem with one snapshot
that simply refuses to be destroyed. The snapshots just prior to it and just
after it were destroyed without problem. While running the zfs destroy command
on
On 05/16/10 12:40 PM, John Balestrini wrote:
Yep. Dedup is on. A zpool list shows a 1.50x dedup ratio. I was imagining that
the large ratio was tied to that particular snapshot.
basie@/root# zpool list pool1
NAME    SIZE   ALLOC  FREE   CAP  DEDUP  HEALTH  ALTROOT
pool1   2.72T  1.55T  1.17T
I just tried moving a dump volume from rpool into another pool, so I used
zfs send/receive to copy the volume (to keep some older dumps), then ran
dumpadm -d to use the new location. This caused a panic. Nothing ended
up in messages and, needless to say, there isn't a dump!
Creating a new
On 05/13/10 03:27 AM, Lori Alt wrote:
On 05/12/10 04:29 AM, Ian Collins wrote:
I just tried moving a dump volume from rpool into another pool so I
used zfs send/receive to copy the volume (to keep some older dumps)
then ran dumpadm -d to use the new location. This caused a panic.
Nothing
On 05/13/10 08:55 AM, Jens Elkner wrote:
On Wed, May 12, 2010 at 09:34:28AM -0700, Doug wrote:
We have a 2006 Sun X4500 with Hitachi 500G disk drives. It's been running for over
four years and just now fmadm/zpool reports a disk has failed. No data was
lost (RAIDZ2 + hot spares worked
On 05/13/10 12:46 PM, Erik Trimble wrote:
I've gotten a couple of the newest prototype AMD systems, with the C34
and G34 sockets. All have run various flavors of OpenSolaris quite
well, with the exception of a couple of flaky network problems, which
we've tracked down to pre-production NIC
On 05/12/10 02:10 PM, Terence Tan wrote:
I was having quite a bit of trouble getting the rpool mirroring to work as
expected.
This appears to be a known issue; see the thread "b134 - Mirrored rpool
won't boot unless both mirrors are present" and
On 05/ 9/10 06:54 AM, Giovanni Mazzeo wrote:
giova...@server:~# cfgadm
Ap_Id                 Type  Receptacle  Occupant      Condition
sata1/0               disk  connected   unconfigured  unknown
sata1/1::dsk/c8t1d0   disk  connected