device
and that iSCSI is disabled.
On Solaris 11.1, how would I determine what's busying it?
John
groenv...@acm.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
-lu -v
Bingo!
Deleted the LU and destroyed the volume.
John
groenv...@acm.org
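For anyone hitting the same busy-device symptom, the sequence that worked here can be sketched roughly as follows (pool/volume name and LU GUID are hypothetical):

```shell
# List COMSTAR logical units with their backing stores to find the one
# holding the zvol open (volume name is hypothetical):
stmfadm list-lu -v
# From the listing, note the LU GUID whose "Data File" points at the zvol,
# then delete the LU and destroy the volume:
stmfadm delete-lu 600144F0XXXXXXXXXXXXXXXXXXXXXXXX   # hypothetical GUID
zfs destroy tank/iscsivol
```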
installation media, import the
mirror drive.
If it imports, you will be able to run installgrub(1M).
By the way, whatever the error message is when booting, it disappears so
quickly I can't read it, so I am only guessing that this is the reason.
Boot with kernel debugger so you can see the panic.
John
groenv
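One way to do this is to pass the kmdb flags on the kernel line so the panic stops in the debugger instead of scrolling past; a sketch (GRUB entry paths may differ on your system):

```shell
# x86: at the GRUB menu, edit the kernel$ line and append -kv, e.g.:
#   kernel$ /platform/i86pc/kernel/amd64/unix -B $ZFS-BOOTFS -kv
# SPARC: from the OBP prompt:
#   ok boot -kv
# -k loads kmdb so the panic drops into the debugger; -v is verbose boot.
```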
I seem to have managed to end up with a pool that is confused about its child
disks. The pool is faulted with corrupt metadata:
pool: d
state: FAULTED
status: The pool metadata is corrupted and the pool cannot be opened.
action: Destroy and re-create the pool from
a backup source.
Gregg Wonderly gregg...@gmail.com wrote:
Have you tried importing the pool with that drive completely unplugged?
Thanks for your reply. I just tried that. zpool import now says:
pool: d
id: 13178956075737687211
state: FAULTED
status: The pool metadata is corrupted.
action: The
# pstack core
John
groenv...@acm.org
After searching for dm-crypt and ZFS on Linux and finding too little
information, I shall ask here. Please keep in mind this in the context of
running this in a production environment.
We need to encrypt our data, approximately 30 TB on three ZFS
volumes under Solaris 10. The volumes
Replacing the SANs is cost prohibitive.
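Solaris 10 has no native ZFS encryption (that arrived with ZFS crypto in Solaris 11), so one commonly discussed workaround is layering an encrypted lofi device on a zvol. A rough sketch, with hypothetical names and no claim about production-worthiness at 30 TB:

```shell
# Create a zvol, wrap it in an AES-encrypted lofi device, and put a
# filesystem on top (all names hypothetical):
zfs create -V 100g tank/securevol
lofiadm -a /dev/zvol/dsk/tank/securevol -c aes-256-cbc
# lofiadm prints the new device, e.g. /dev/lofi/1; build UFS or a pool on it:
zpool create securepool /dev/lofi/1
```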
On Fri, Nov 23, 2012 at 10:24 AM, Tim Cook t...@cook.ms wrote:
On Fri, Nov 23, 2012 at 9:49 AM, John Baxter johnleebax...@gmail.comwrote:
We need to encrypt our data, approximately 30 TB on three ZFS
volumes under Solaris 10. The volumes
Hello everybody,
I just wanted to share my experience with a (partially) broken SSD that was in
use in a ZIL mirror.
We experienced a dramatic performance problem with one of our zpools, serving
home directories. Mainly NFS clients were affected. Our SunRay infrastructure
came to a complete
Hello everybody,
my time-slider service on a Sol11 machine died. I already uninstalled/reinstalled
the time-slider package, restarted the manifest-import service, etc., but no
success.
/var/svc/log/application-time-slider:default.log:
--snip--
[ Sep 11 12:40:04 Enabled. ]
[ Sep 11 12:40:04
-Original message-
To: zfs-discuss@opensolaris.org;
From: Carsten John cj...@mpi-bremen.de
Sent: Tue 11-09-2012 13:08
Subject:[zfs-discuss] Sol11 time-slider / snapshot not starting [again]
Hello everybody,
my time-slider service on a Sol11 machine died. I already
On 07/29/12 14:52, Bob Friesenhahn wrote:
My opinion is that complete hard drive failure and block-level media
failure are two totally different things.
That would depend on the recovery behavior of the drive for
block-level media failure. A drive whose firmware does excessive
(reports of up
On 07/19/12 19:27, Jim Klimov wrote:
However, if the test file was written in 128K blocks and then
is rewritten with 64K blocks, then Bob's answer is probably
valid - the block would have to be re-read once for the first
rewrite of its half; it might be taken from cache for the
second half's
# pkg info entire| grep Summary
Summary: entire incorporation including Support Repository Update
(Oracle Solaris 11 11/11 SRU 8.5).
John
groenv...@acm.org
On 07/10/12 19:56, Sašo Kiselkov wrote:
Hi guys,
I'm contemplating implementing a new fast hash algorithm in Illumos' ZFS
implementation to supplant the currently utilized sha256. On modern
64-bit CPUs SHA-256 is actually much slower than SHA-512 and indeed much
slower than many of the SHA-3
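The relative speed claim is easy to check on your own hardware; a quick sketch using openssl (assumed installed), first sanity-checking the digests with the standard "abc" test vector, then benchmarking:

```shell
# Known-answer check that both digests are available and correct:
printf 'abc' | openssl dgst -sha256 -r | awk '{print $1}'
# -> ba7816bf8f01cfea414140de5dae2223b00361a396177a9cb410ff61f20015ad
printf 'abc' | openssl dgst -sha512 -r | awk '{print $1}'
# For a throughput comparison on your CPU, run: openssl speed sha256 sha512
# On 64-bit CPUs sha512 typically wins at large block sizes.
```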
-Original message-
To: Carsten John cj...@mpi-bremen.de;
CC: zfs-discuss@opensolaris.org;
From: Ian Collins i...@ianshome.com
Sent: Thu 05-07-2012 21:40
Subject:Re: [zfs-discuss] Sol11 missing snapshot facility
On 07/ 5/12 11:32 PM, Carsten John wrote
Hello everybody,
for some reason I can not find the zfs-autosnapshot service facility any more.
I have already reinstalled time-slider, but it refuses to start:
RuntimeError: Error reading SMF schedule instances
Details:
['/usr/bin/svcs', '-H', '-o', 'state',
-Original message-
To: Carsten John cj...@mpi-bremen.de;
CC: zfs-discuss@opensolaris.org;
From: Ian Collins i...@ianshome.com
Sent: Thu 05-07-2012 09:59
Subject:Re: [zfs-discuss] Sol11 missing snapshot facility
On 07/ 5/12 06:52 PM, Carsten John wrote:
Hello
-Original message-
To: Carsten John cj...@mpi-bremen.de;
CC: zfs-discuss@opensolaris.org;
From: Ian Collins i...@ianshome.com
Sent: Thu 05-07-2012 11:35
Subject:Re: [zfs-discuss] Sol11 missing snapshot facility
On 07/ 5/12 09:25 PM, Carsten John wrote:
Hi Ian
On 07/04/12 16:47, Nico Williams wrote:
I don't see that the munmap definition assures that anything is written to
disk. The system is free to buffer the data in RAM as long as it likes
without writing anything at all.
Oddly enough the manpages at the Open Group don't make this clear. So
I
Hello everybody,
I recently migrated a file server (NFS/Samba) from OpenSolaris (build 111) to
Sol11. Since the move we have been facing random (or random-looking) outages of our
Samba service. As we have moved several folders (like Desktop and ApplicationData) out
of the usual profile to a folder inside
On 06/16/12 12:23, Richard Elling wrote:
On Jun 15, 2012, at 7:37 AM, Hung-Sheng Tsao Ph.D. wrote:
by the way
when you format, start with cylinder 1, do not use 0
There is no requirement for skipping cylinder 0 for root on Solaris, and there
never has been.
Maybe not for core Solaris, but it
On 06/15/12 15:52, Cindy Swearingen wrote:
Its important to identify your OS release to determine if
booting from a 4k disk is supported.
In addition, whether the drive is really 4096p or 512e/4096p.
In message 201206141413.q5eedvzq017...@mklab.ph.rhul.ac.uk, tpc...@mklab.ph.rhul.ac.uk writes:
Memory: 2048M phys mem, 32M free mem, 16G total swap, 16G free swap
My WAG is that your zpool history is hanging due to lack of
RAM.
John
groenv...@acm.org
In message 008c01cd4812$7399c180$5acd4480$@net, David Combs writes:
Actual newsgroup for zfs-discuss?
Did you try Gmane's interface?
URL:http://groups.google.com/groups?selm=jo43q0%24no50%241%40tr22n12.aset.psu.edu
John
groenv...@acm.org
On 05/28/12 08:48, Nathan Kroenert wrote:
Looking to get some larger drives for one of my boxes. It runs
exclusively ZFS and has been using Seagate 2TB units up until now (which
are 512 byte sector).
Anyone offer up suggestions of either 3 or preferably 4TB drives that
actually work well with
On 05/29/12 08:35, Nathan Kroenert wrote:
Hi John,
Actually, last time I tried the whole AF (4k) thing, its performance
was worse than woeful.
But admittedly, that was a little while ago.
The drives were the seagate green barracuda IIRC, and performance for
just about everything was 20MB/s
On 05/29/12 07:26, bofh wrote:
ashift:9 is that standard?
Depends on what the drive reports as physical sector size.
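A quick way to see the sector size Solaris believes the disk has (device name hypothetical):

```shell
# prtvtoc reports the sector size the driver is using for the disk:
prtvtoc /dev/rdsk/c5t0d0s2 | grep 'bytes/sector'
# Note: 512e drives report 512 here even though the physical sectors are 4096.
```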
info entire
John
groenv...@acm.org
Hello everybody,
just to let you know what happened in the meantime:
I was able to open a Service Request at Oracle.
The issue is a known bug (Bug 6742788: assertion panic at zfs:zap_deref_leaf).
The bug has been fixed (according to Oracle support) since build 164, but there
is no fix for
::NO:RP,6:P6_LPI:27242443094470222098916
John
groenv...@acm.org
of transferring the core
file though. I will ask around and see if I can help you here.
How to Upload Data to Oracle Such as Explorer and Core Files [ID 1020199.1]
John
groenv...@acm.org
-Original message-
To: zfs-discuss@opensolaris.org;
From: John D Groenveld jdg...@elvis.arl.psu.edu
Sent: Fri 30-03-2012 21:47
Subject:Re: [zfs-discuss] kernel panic during zfs import [ORACLE should
notice this]
In message 4f735451.2020...@oracle.com, Deepak Honnalli
-Original message-
To: zfs-discuss@opensolaris.org;
From: Borja Marcos bor...@sarenet.es
Sent: Thu 29-03-2012 11:49
Subject:[zfs-discuss] Puzzling problem with zfs receive exit status
Hello,
I hope someone has an idea.
I have a replication program that copies a
-Original message-
To: ZFS Discussions zfs-discuss@opensolaris.org;
From: Paul Kraus p...@kraus-haus.org
Sent: Tue 27-03-2012 15:05
Subject:Re: [zfs-discuss] kernel panic during zfs import
On Tue, Mar 27, 2012 at 3:14 AM, Carsten John cj...@mpi-bremen.de wrote:
Hello
-Original message-
To: zfs-discuss@opensolaris.org;
From: Deepak Honnalli deepak.honna...@oracle.com
Sent: Wed 28-03-2012 09:12
Subject:Re: [zfs-discuss] kernel panic during zfs import
Hi Carsten,
This was supposed to be fixed in build 164 of Nevada (6742788). If
paid for keeping systems running and not clicking through Flash-overloaded
support portals searching for CSIs, I'm giving the relevant information
to the list now.
If the Flash interface is broken, try the non-Flash MOS site:
URL:http://SupportHTML.Oracle.COM/
John
groenv...@acm.org
will support
Solaris running on third-party hardware.
URL:http://www.oracle.com/webfolder/technetwork/hcl/hcts/index.html
John
groenv...@acm.org
laotsu said:
well check this link
https://shop.oracle.com/pls/ostore/product?p1=SunFireX4270M2server&p2=&p3=&p4=&sc=ocom_x86_SunFireX4270M2server&tz=-4:00
you may not like the price
Hahahah! Thanks for the laugh. The dual 10GbE PCI card breaks my budget. I'm
not going to try to
Bob Friesenhahn bfrie...@simple.dallas.tx.us wrote:
On Thu, 22 Mar 2012, The Honorable Senator and Mrs. John Blutarsky wrote:
This will be a do-everything machine. I will use it for development, hosting
various apps in zones (web, file server, mail server etc.) and running other
systems
On Fri Mar 23 at 10:06:12 2012 laot...@gmail.com wrote:
well
use component of x4170m2 as example you will be ok
intel cpu
lsi sas controller non raid
sas 7200rpm hdd
my 2c
That sounds too vague to be useful unless I could afford an X4170M2. I
can't build a custom box and I don't have the
Ladies and Gentlemen,
I'm thinking about spending around 1,250 USD for a tower format (desk side)
server with RAM but without disks. I'd like to have 16G ECC RAM as a
minimum and ideally 2 or 3 times that amount and I'd like for the case to
have room for at least 6 drives, more would be better
?
# MegaCli -AdpSetProp -EnableJBOD -1 -aALL
# MegaCli -PDMakeJBOD -PhysDrv[E0:S0,E1:S1,...] -aALL
John
groenv...@acm.org
Hi everybody,
are there any problems to expect if we try to export/import a zfs pool from
opensolaris (intel) (zpool version 14) to solaris 10 (sparc) (zpool version 19)?
thanks
Carsten
Hello everybody,
I set up a script to replicate all zfs filesystems (some 300 user home
directories in this case) within a given pool to a mirror machine. The basic
idea is to send the snapshots incremental if the corresponding snapshot exists
on the remote side or send a complete snapshot if
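The incremental-or-full decision described above can be sketched in shell for a single filesystem (host, pool, and dataset names are hypothetical; error handling omitted):

```shell
#!/bin/sh
# Replicate one filesystem: incremental if the mirror already has a
# snapshot of it, full otherwise. All names are hypothetical.
FS=tank/home/user1
TODAY="$FS@$(date +%Y-%m-%d)"
zfs snapshot "$TODAY"
# Most recent snapshot of this filesystem already on the mirror, if any:
LAST=$(ssh mirror zfs list -H -t snapshot -o name -S creation -d 1 "$FS" \
       2>/dev/null | head -1)
if [ -n "$LAST" ]; then
    # Remote side has a snapshot: send only the increment since it.
    zfs send -i "@${LAST##*@}" "$TODAY" | ssh mirror zfs receive -F "$FS"
else
    # Nothing on the remote side yet: send a full stream.
    zfs send "$TODAY" | ssh mirror zfs receive "$FS"
fi
```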
In message 4f435ca9.8010...@tuneunix.com, nathank writes:
Is there actually a fix to allow manual setting of ashift now that I
No.
URL:http://docs.oracle.com/cd/E23824_01/html/821-1462/zpool-1m.html
John
groenv...@acm.org
a new pool with ashift=12 out of the box or will
Yes.
John
groenv...@acm.org
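If in doubt, you can confirm what ashift a pool actually got (pool name hypothetical):

```shell
# zdb dumps the cached pool configuration, including the vdev ashift:
zdb -C tank | grep ashift
# ashift: 12 means 4096-byte allocation units; ashift: 9 means 512-byte.
```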
On 01/25/12 09:08, Edward Ned Harvey wrote:
Assuming the failure rate of drives is not linear, but skewed toward higher
failure rate after some period of time (say, 3 yrs) ...
See section 3.1 of the Google study:
http://research.google.com/archive/disk_failures.pdf
although section 4.2
On 01/24/12 17:06, Gregg Wonderly wrote:
What I've noticed, is that when I have my drives in a situation of small
airflow, and hence hotter operating temperatures, my disks will drop
quite quickly.
While I *believe* the same thing and thus have over provisioned
airflow in my cases (for both
On 01/16/12 11:08, David Magda wrote:
The conclusions are hardly unreasonable:
While the reliability mechanisms in ZFS are able to provide reasonable
robustness against disk corruptions, memory corruptions still remain a
serious problem to data integrity.
I've heard the same thing said (use
On 01/08/12 20:10, Jim Klimov wrote:
Is it true or false that: ZFS might skip the cache and
go to disks for streaming reads?
I don't believe this was ever suggested. Instead, if
data is not already in the file system cache and a
large read is made from disk should the file system
put this
On 01/08/12 10:15, John Martin wrote:
I believe Joerg Moellenkamp published a discussion
several years ago on how L1ARC attempt to deal with the pollution
of the cache by large streaming reads, but I don't have
a bookmark handy (nor the knowledge of whether the
behavior is still accurate
On 01/08/12 09:30, Edward Ned Harvey wrote:
In the case of your MP3 collection... Probably the only thing you can do is
to write a script which will simply go read all the files you predict will
be read soon. The key here is the prediction - There's no way ZFS or
solaris, or any other OS in
On 01/08/12 11:30, Jim Klimov wrote:
However for smaller servers, such as home NASes which have
about one user overall, pre-reading and caching files even
for a single use might be an objective per se - just to let
the hard-disks spin down. Say, if I sit down to watch a
movie from my NAS, it is
the answer. Thank you.
URL:http://docs.oracle.com/cd/E23823_01/html/819-5461/ggset.html#gkdep
| How to Create a Mirrored ZFS Root Pool (Postinstallation)
John
groenv...@acm.org
/11 still spews the "I/O request is not aligned with
4096 disk sector size" warnings, but zpool(1M) create's label
persists and I can export and import between systems.
John
groenv...@acm.org
recent kernel build.
Thanks,
John
groenv...@acm.org
In message 201110150202.p9f22w2n000...@elvis.arl.psu.edu, John D Groenveld
writes:
I'm baffled why zpool import is unable to find the pool on the
drive, but the drive is definitely functional.
Per Richard Elling, it looks like ZFS is unable to find
the requisite labels for importing.
John
i386
# zpool destroy foobar
# newfs /dev/rdsk/c1t0d0s0
newfs: construct a new file system /dev/rdsk/c1t0d0s0: (y/n)? y
The device sector size 4096 is not supported by ufs!
John
groenv...@acm.org
this on more recent bits, like the EA release,
which I think is b 171.
Doubtful I'll find time to install EA before S11 FCS's
November launch.
I'll still file the CR.
Thank you.
John
groenv...@acm.org
In message 4e9db04b.80...@oracle.com, Cindy Swearingen writes:
This is CR 7102272.
Anyone out there have Western Digital's competing 3TB Passport
drive handy to duplicate this bug?
John
groenv...@acm.org
baffled why zpool import is unable to find the pool on the
drive, but the drive is definitely functional.
John
groenv...@acm.org
it.
John
groenv...@acm.org
In message 4e970387.3040...@oracle.com, Cindy Swearingen writes:
Any USB-related messages in /var/adm/messages for this device?
Negative.
cfgadm(1M) shows the drive and format-fdisk-analyze-read
runs merrily.
John
groenv...@acm.org
In message 4e95cb2a.30...@oracle.com, Cindy Swearingen writes:
What is the error when you attempt to import this pool?
cannot import 'foo': no such pool available
John
groenv...@acm.org
# format -e
Searching for disks...done
AVAILABLE DISK SELECTIONS:
0. c1t0d0 Seagate-External-SG11 cyl
slices presumably hunting for pools.
John
groenv...@acm.org
import it.
I thought weird USB connectivity issue, but I can run
format - analyze - read merrily.
Anyone seen this bug?
John
groenv...@acm.org
On 09/12/11 10:33, Jens Elkner wrote:
Hmmm, at least if S11x, a ZFS mirror, ICH10 and the cmdk (IDE) driver are involved,
I'm 99.9% confident that "a while" turns out to be some days or weeks only
- no matter what Platinum-Enterprise-HDDs you use ;-)
On Solaris 11 Express with a dual drive mirror,
http://wdc.custhelp.com/app/answers/detail/a_id/1397/~/difference-between-desktop-edition-and-raid-%28enterprise%29-edition-drives
?
John
groenv...@acm.org
.
John
groenv...@acm.org
Is there a list of zpool versions for development builds?
I found:
http://blogs.oracle.com/stw/entry/zfs_zpool_and_file_system
where it says Solaris 11 Express is zpool version 31, but my
system has BEs back to build 139 and I have not done a zpool upgrade
since installing this system but it
Hello everybody,
is there any known way to configure the point-in-time *when* the time-slider
will snapshot/rotate?
With hundreds of zfs filesystems, the daily snapshot rotation slows down a big
file server significantly, so it would be better to have the snapshots rotated
outside the usual
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1
On 06/28/11 02:55, Edward Ned Harvey wrote:
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Carsten John
Now I'm wondering about the best option to replace the HDD with the SSD:
What version
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1
Hello everybody,
some time ago a SSD within a ZIL mirror died. As I had no SSD available
to replace it, I dropped in a normal SAS harddisk to rebuild the mirror.
In the meantime I got the warranty replacement SSD.
Now I'm wondering about the best
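The swap itself is a single zpool replace on the log mirror; a sketch with hypothetical device names:

```shell
# c4t1d0 = temporary SAS disk currently in the log mirror,
# c4t5d0 = the warranty-replacement SSD:
zpool replace tank c4t1d0 c4t5d0
zpool status tank        # watch the resilver complete
```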
of that fdisk partition, use beadm(1M) to copy
your BE back to your new rpool, and then restore any other ZFS
from those snapshots.
John
groenv...@acm.org
.
Ask Keith Block and company's sales critter about Hardware from Oracle
- Pricing for Education (HOPE):
URL:http://www.oracle.com/ocom/groups/public/@ocom/documents/webcontent/364419.pdf
John
groenv...@acm.org
following are some thoughts if it's not too late:
1 SuperMicro 847E1-R1400LPB
I guess you meant the 847E16-R1400LPB; the SAS1 version makes no sense
1 SuperMicro H8DG6-F
not the best choice, see below why
171 Hitachi 7K3000 3TB
I'd go for the more environmentally
-Original Message-
From: Frank Lahm [mailto:frankl...@googlemail.com]
Sent: 25 January 2011 14:50
To: Ryan John
Cc: zfs-discuss@opensolaris.org
Subject: Re: [zfs-discuss] Changed ACL behavior in snv_151 ?
John,
welcome onboard!
2011/1/25 Ryan John john.r...@bsse.ethz.ch:
I’m
any ideas?
On a snv_134 system, the ACLs are retained.
Regards
John
I’m using /usr/bin/chmod
From: phil.har...@gmail.com [mailto:phil.har...@gmail.com]
Sent: 25 January 2011 14:50
To: Ryan John; zfs-discuss@opensolaris.org
Subject: Re: [zfs-discuss] Changed ACL behavior in snv_151 ?
Which chmod are you using? (check your PATH)
- Reply message -
From
I'm trying to rollback from a bad patch install on Solaris 10. From the
failsafe BE I tried to rollback, but zfs is asking me to provide allow rollback
permissions. It's hard for me to tell exactly because the messages are
scrolling off the screen before I can read them. Any help would be
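If the complaint really is about delegated permissions, a sketch of granting them and then rolling back from the failsafe BE (dataset and snapshot names hypothetical):

```shell
# Delegate the permissions the error mentions, then roll back
# (names are hypothetical; adjust to your BE dataset and snapshot):
zfs allow -u root mount,rollback rpool/ROOT/s10_be
zfs rollback -r rpool/ROOT/s10_be@prepatch
```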
, but I can't figure out
how to access it.
Use beadm(1M) to duplicate your BE to a USB disk, then boot it,
then format/fdisk your workstation disk, then use beadm(1M) to
duplicate your BE back to your workstation disk.
John
groenv...@acm.org
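A sketch of the round trip described above (pool, BE, and device names hypothetical):

```shell
# 1. Create a pool on the USB disk and clone the active BE onto it:
zpool create usbpool c2t0d0s0
beadm create -p usbpool be-backup
# 2. Boot from the USB disk, repartition the workstation disk, recreate
#    rpool, then clone the BE back and activate it:
beadm create -p rpool be-restored
beadm activate be-restored
```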
In message 201008112022.o7bkmc2j028...@elvis.arl.psu.edu, John D Groenveld writes:
I'm stumbling over BugID 6961707 on build 134.
I see the bug has been stomped in build 150. Awesome!
URL:http://bugs.opensolaris.org/bugdatabase/view_bug.do?bug_id=6961707
In which build did it first arrive
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1
Jeff Bacon wrote:
I have a bunch of sol10U8 boxes with ZFS pools, most all raidz2 8-disk
stripe. They're all supermicro-based with retail LSI cards.
I've noticed a tendency for things to go a little bonkers during the
weekly scrub (they all
Wouldn't it be possible to saturate the SSD ZIL with enough backlogged sync
writes?
What I mean is, doesn't the ZIL eventually need to make it to the pool, and if
the pool as a whole (spinning disks) can't keep up with 30+ vm's of write
requests, couldn't you fill up the ZIL that way?
--
service order requests to /dev/null, but someone manually entered it after
I submitted web feedback.
John
groenv...@acm.org
hit this and gotten it
resolved?
Is the pool corrupted on disk?
John
groenv...@acm.org
. JBOD, RAID zvols on both controllers.
--
John
Hello all. I am new, very new, to OpenSolaris and I am having an issue and have
no idea what is going wrong. I have 5 drives in my machine, all 500GB. I
installed OpenSolaris on the first drive and rebooted. Now what I want to do
is add a second drive so they are mirrored. How does one do
Could you import it back on the original server with
zpool import -f newpool rpool?
Jay
-Original Message-
From: Brandon High [mailto:bh...@freaks.com]
Sent: Wednesday, June 16, 2010 2:19 PM
To: Seaman, John
Cc: zfs-discuss@opensolaris.org
Subject: Re: [zfs-discuss] mount zfs boot disk
the filesystem, is there another way to
de-dedup the pool?
Thanks,
John
On Jun 13, 2010, at 10:17 PM, Erik Trimble wrote:
Hernan F wrote:
Hello, I tried enabling dedup on a filesystem, and moved files into it to
take advantage of it. I had about 700GB of files and left it for some hours.
When I
Can I make a pool not mount on boot? I seem to recall reading
somewhere how to do it, but can't seem to find it now.
--
John
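Two ways this is commonly done (pool name hypothetical):

```shell
# Keep the pool imported but stop its datasets from mounting at boot:
zfs set canmount=noauto datapool
# Or keep the pool out of the boot-time import entirely by dropping it
# from the cachefile, then importing manually when needed:
zpool set cachefile=none datapool
zpool export datapool
zpool import datapool      # when you actually want it
```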
From: zfs-discuss-boun...@opensolaris.org
[mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of John
Hoogerdijk
I'm building a campus cluster with identical
storage in two locations
with ZFS mirrors spanning both storage frames. Data
will be mirrored
using zfs. I'm looking
On Tue, May 18, 2010 20:45, Edward Ned Harvey wrote:
The whole point of a log device is to accelerate
sync writes, by providing
nonvolatile storage which is faster than the
primary storage. You're not
going to get this if any part of the log device is
at the other side of a
WAN. So
-0 ONLINE 0 0 0
c3t3d0 ONLINE 0 0 0
c3t2d0 ONLINE 0 0 0
errors: Permanent errors have been detected in the following files:
vol2/v...@snap-daily-1-2010-05-06-:/as5/as5-flat.vmdk
--
John
Not to my knowledge. How would I go about getting one? (CC'ing discuss)
On Wed, May 19, 2010 at 8:46 AM, Mark J Musante mark.musa...@oracle.com wrote:
Do you have a coredump? Or a stack trace of the panic?
On Wed, 19 May 2010, John Andrunas wrote:
Running ZFS on a Nexenta box, I had
genunix:taskq_thread+248 ()
ff001f45eb60 unix:thread_start+8 ()
syncing file systems... done
skipping system dump - no dump device configured
rebooting...
On Wed, May 19, 2010 at 8:55 AM, Michael Schuster
michael.schus...@oracle.com wrote:
On 19.05.10 17:53, John Andrunas wrote:
Not to my knowledge