On Tue, 27 Mar 2007, Łukasz wrote:
zfs send then would:
1. create replicate snapshot if it does not exist
2. send data
3. wait 10 seconds
4. rename snapshot to replicate_previous (destroy previous if it exists)
5. goto 1.
All snapshot operations are done in kernel - it works
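[A rough userland sketch of the same loop, for comparison; the dataset and
host names are placeholders, not from the original post:]
#!/bin/sh
# Hypothetical userland equivalent of the in-kernel loop above.
while true; do
    zfs snapshot tank/data@replicate
    if zfs list tank/data@replicate_previous >/dev/null 2>&1; then
        # Incremental send from the previous replication snapshot.
        zfs send -i @replicate_previous tank/data@replicate | \
            ssh remotehost zfs recv -F tank/data
        zfs destroy tank/data@replicate_previous
    else
        # First run: full send.
        zfs send tank/data@replicate | ssh remotehost zfs recv tank/data
    fi
    sleep 10
    zfs rename tank/data@replicate tank/data@replicate_previous
done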
On Tue, 27 Mar 2007, Łukasz wrote:
Out of curiosity, what is the timing difference between a userland script
and performing the operations in the kernel?
The userland operation takes 15-20 seconds.
In the kernel it takes (times in ms):
[between 2.5 and 14.5 seconds]
Very nice improvement.
On Wed, 28 Mar 2007, prasad wrote:
We create iso images of our product in the following way (high-level):
# mkfile 3g /isoimages/myiso
# lofiadm -a /isoimages/myiso
/dev/lofi/1
# newfs /dev/rlofi/1
# mount /dev/lofi/1 /mnt
# cd /mnt; zcat /product/myproduct.tar.Z | tar xf -
How big does
On Tue, 10 Apr 2007, Martin Girard wrote:
Is it possible to make my zpool redundant by adding a new disk in the pool
and making it a mirror with the initial disk?
Sure, by using zpool attach:
# mkfile 64m /tmp/foo /tmp/bar
# zpool create tank /tmp/foo
# zpool status
pool: tank
state:
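[Truncated; presumably the example continues by attaching the second file
to form the mirror, along these lines:]
# zpool attach tank /tmp/foo /tmp/bar
# zpool status tank
Once the resilver completes, the pool is a two-way mirror.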
On Tue, 10 Apr 2007, Constantin Gonzalez wrote:
Has anybody tried it yet with a striped mirror? What if the pool is
composed out of two mirrors? Can I attach devices to both mirrors, let
them resilver, then detach them and import the pool from those?
You'd want to export them, not detach
On Tue, 10 Apr 2007, Rich Teer wrote:
I have a pool called tank/home/foo and I want to rename it to
tank/home/bar. What's the best way to do this (the zfs and zpool man
pages don't have a rename option)?
In fact, there is a rename option for zfs:
# zfs create tank/home
# zfs create
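[Truncated; the rename itself, using the names from the question, would be:]
# zfs rename tank/home/foo tank/home/bar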
On Thu, 12 Apr 2007, Simon wrote:
I'm installing Oracle 9i on Solaris 10 11/06 (Update 3). I created some
ZFS volumes which will be used by Oracle data files, as:
Have you tried using SVM volumes? I ask, because SVM does the same thing:
soft-link to /devices
If it works for SVM and not for ZFS,
On Thu, 19 Apr 2007, Mario Goebbels wrote:
Is it possible to gracefully and permanently remove a vdev from a pool
without data loss?
Is this what you're looking for?
http://bugs.opensolaris.org/view_bug.do?bug_id=4852783
If so, the answer is 'not yet'.
Regards,
markm
On Tue, 24 Apr 2007, Darren J Moffat wrote:
There are obvious other places that would really benefit but I think
having them as separate datasets really depends on what the machine is
doing. For example /var/apache if you really are a webserver, but then
why not go one better and split out
On Thu, 26 Apr 2007, Ben Miller wrote:
I just rebooted this host this morning and the same thing happened again. I
have the core file from zfs.
[ Apr 26 07:47:01 Executing start method (/lib/svc/method/nfs-server start)
]
Assertion failed: pclose(fp) == 0, file ../common/libzfs_mount.c,
On 8 May, 2007, at 22.51, Cyril Plisko wrote:
So I quickly hacked together a script which defines the necessary
complete clauses (yes I am a tcsh user). After playing with it
for a while I decided to share it with the community in the hope that
it may be improved/extended and be a useful tool in
On Wed, 9 May 2007, Anantha N. Srirama wrote:
However, the poor performance of the destroy is still valid. It is quite
possible that we might create another clone for reasons beyond my
original reason.
There are a few open bugs against destroy. It sounds like you may be
running into 6509628
On Thu, 10 May 2007, Bruce Shaw wrote:
I don't have enough disk to do clones and I haven't figured out how to
mount snapshots directly.
Maybe I'm misunderstanding what you're saying, but 'zfs clone' is exactly
the way to mount a snapshot. Creating a clone uses up a negligible amount
of disk
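[A minimal sketch of using a clone to reach snapshot contents; the dataset
names are placeholders:]
# zfs snapshot tank/fs@mysnap
# zfs clone tank/fs@mysnap tank/fs_clone
The clone shares its blocks with the snapshot, so it consumes almost no
space until it is modified.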
On Fri, 11 May 2007, Jason J. W. Williams wrote:
Is it possible (or even technically feasible) for zfs to have a 'destroy
up to' feature? Basically, destroy any snapshot older than a certain date?
Sorta-kinda. You can use 'zfs get' to get the creation time of a
snapshot. If you give it -p, it'll
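[Truncated; a sketch of how such a script might look, with a placeholder
dataset name and a cutoff expressed in epoch seconds:]
#!/bin/sh
# Destroy snapshots of tank/fs created before the cutoff.
cutoff=1210000000
for snap in `zfs list -H -t snapshot -o name | grep '^tank/fs@'`; do
    created=`zfs get -Hp -o value creation $snap`
    [ $created -lt $cutoff ] && zfs destroy $snap
done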
On Mon, 14 May 2007, Alec Muffett wrote:
I suspect the proper thing to do would be to build the six new large
disks into a new RAID-Z vdev, add it as a mirror of the older,
smaller-disk RAID-Z vdev, rezilver to zynchronize them, and then break
the mirror.
The 'zpool replace' command is a
On Tue, 15 May 2007, Trevor Watson wrote:
I don't suppose that it has anything to do with the flag being wm
instead of wu on your second drive does it? Maybe if the driver thinks
slice 2 is writeable, it treats it as a valid slice?
If the slice doesn't take up the *entire* disk, then it
On Fri, 25 May 2007, Ben Rockwood wrote:
May 25 23:32:59 summer unix: [ID 836849 kern.notice]
May 25 23:32:59 summer panic[cpu1]/thread=1bf2e740:
May 25 23:32:59 summer genunix: [ID 335743 kern.notice] BAD TRAP: type=e (#pf
Page fault) rp=ff00232c3a80 addr=490 occurred in
On Fri, 1 Jun 2007, Krzys wrote:
bash-3.00# zpool replace mypool c1t2d0 emcpower0a
bash-3.00# zpool status
pool: mypool
state: ONLINE
status: One or more devices is currently being resilvered. The pool will
continue to function, possibly in a degraded state.
action: Wait for
On Fri, 1 Jun 2007, John Plocher wrote:
This seems especially true when there is closure on actions - the set of
zfs snapshot foo/[EMAIL PROTECTED]
zfs destroy foo/[EMAIL PROTECTED]
commands is (except for debugging zfs itself) a noop
Note that if you use the recursive
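[Truncated; presumably this refers to the -r flags, along these lines:]
# zfs snapshot -r pool@snap
# zfs destroy -r pool@snap
which create and destroy the snapshot across all descendant datasets.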
On Fri, 1 Jun 2007, Ben Bressler wrote:
When I do the zfs send | ssh zfs recv part, the file system (folder) is
getting created, but none of the data that I have in my snapshot is
sent. I can browse on the source machine to view the snapshot data
pool/.zfs/snapshot/snap-name and I see the
On Mon, 11 Jun 2007, Rick Mann wrote:
ZFS Readonly implemntation is loaded!
Is that a copy-n-paste error, or is that a typo in the actual output?
Regards,
markm
On Tue, 12 Jun 2007, Tim Cook wrote:
This pool should have 7 drives total, which it does, but for some reason
c4d0 is displayed twice. Once as online (which it is), and once as
unavail (which it is not).
What's the name of the 7th drive? Did you take all the drives from the
old system and
On Tue, 19 Jun 2007, John Brewer wrote:
bash-3.00# zpool import
pool: zones
id: 4567711835620380868
state: ONLINE
status: The pool is formatted using an older on-disk version.
action: The pool can be imported using its name or numeric identifier, though
some features will
On Wed, 27 Jun 2007, Jürgen Keil wrote:
Yep, I just tried it, and it refuses to zpool import the newer pool,
telling me about the incompatible version. So I guess the pool format
isn't the correct explanation for Dick Davies' (number9) problem.
Have you tried creating the pool on
On Fri, 13 Jul 2007, Kwang-Hyun Baek wrote:
NAME      STATE     READ WRITE CKSUM
pool      UNKNOWN      0     0     0
  c0d0s5  UNKNOWN      0     0     0
  c0d0s6  UNKNOWN      0     0     0
  c0d0s4  UNKNOWN      0     0     0
On Fri, 13 Jul 2007, Kwang-Hyun Baek wrote:
zpool import pool (my pool is named 'pool') returns
cannot import 'pool': no such pool available
What does 'zpool import' by itself show you? It should give you a list of
available pools to import.
Regards,
markm
On Fri, 13 Jul 2007, Kwang-Hyun Baek wrote:
zpool list
it shows my pool with health UNKNOWN
That means it's already imported. What's the output of 'zpool status'?
Regards,
markm
On Mon, 16 Jul 2007, Kwang-Hyun Baek wrote:
Is there any way to fix this? I actually tried to destroy the pool and
try to create a new one, but it doesn't let me. Whenever I try, I get
the following error:
[EMAIL PROTECTED]:/var/crash# zpool create -f pool c0d0s5
internal error: No such
On Tue, 17 Jul 2007, Kwang-Hyun Baek wrote:
# uname -a
SunOS solaris-devx 5.11 opensol-20070713 i86pc i386 i86pc
===
What's more interesting is that the ZFS version shows that it's 8. Does it
even exist?
Yes, 8 was created to support
On Mon, 17 Sep 2007, Robert Milkowski wrote:
If you do 'zpool create -f test A B C spare D E' and D or E contains a UFS
filesystem, then despite -f the zpool command will complain that there is a
UFS file system on D.
This was fixed recently in build 73. See CR 6573276.
Regards,
markm
On Mon, 24 Sep 2007, Michael Schuster wrote:
I recently started seeing zfs chattiness at boot time: reading zfs config
and something like mounting zfs filesystems (n/n).
This was added recently because ZFS can take a while to mount large
configs. Consoles would appear to freeze after the
On Mon, 24 Sep 2007, Michael Schuster wrote:
I'm also quite prepared to see a running tally(?) after an initial timeout
(your minute) has gone by and we haven't finished ... but I guess we'd also
have to make sure that the output generated isn't messed up by other output
to the console that's
On Mon, 8 Oct 2007, Kugutsumen wrote:
I just tried..
mount -o rw,remount /
zpool import -f tank
mount -F zfs tank/rootfs /a
zpool status
ls -l /dev/dsk/c1t0d0s0
# /[EMAIL PROTECTED],0/pci1000,[EMAIL PROTECTED]/[EMAIL PROTECTED],0:a
csh
setenv TERM vt100
vi /a/boot/solaris/bootenv.rc
#
On Tue, 23 Oct 2007, A Darren Dunham wrote:
On Tue, Oct 23, 2007 at 09:55:58AM -0700, Scott Laird wrote:
I'm writing a couple scripts to automate backups and snapshots, and I'm
finding myself cringing every time I call 'zfs destroy' to get rid of a
snapshot, because a small typo could take
On Mon, 29 Oct 2007, Krzys wrote:
everything is great but I've made a mistake and I would like to remove
emcpower2a from my pool and I cannot do that...
Well, the mistake I made is that I did not format my device
correctly, so instead of adding 125GB I added 128MB.
You can't remove it
On Thu, 15 Nov 2007, Manoj Nayak wrote:
I am getting the following error message when I run any zfs command. I have
attached the script I use to create the ramdisk image for Thumper.
# zfs volinit
internal error: Bad file number
Abort - core dumped
This sounds as if you may have somehow lost the
On Thu, 15 Nov 2007, Brian Lionberger wrote:
The question is, should I create one zpool or two to hold /export/home
and /export/backup?
Currently I have one pool for /export/home and one pool for /export/backup.
Should it be one pool for both? Would this be better, and why?
One thing to
I can think of two things to check:
First, is there a 'bootfs' line in your grub entry? I didn't see it
in the original email; not sure if it was left out or it simply isn't
present. If it's not present, ensure the 'bootfs' property is set on
your pool.
Secondly, ensure that there's a
On Wed, 19 Dec 2007, Ross wrote:
The title says it all really, we'll be creating one big zpool here, with
many sub filesystems for various systems. Am I right in thinking that
we can use snapshots of the root filesystem to take a complete backup of
everything?
I believe what you're
On Mon, 7 Jan 2008, Andre Lue wrote:
I usually have to do a zpool import -f pool to get it back.
What do you mean by 'usually'?
After the import, what's the output of 'zpool status'?
During reboot, are there any relevant messages in the console?
Regards,
markm
On Fri, 11 Jan 2008, Wyllys Ingersoll wrote:
I want to remove c0d0p4:
# zpool remove bigpool c0d0p4
cannot remove c0d0p4: only inactive hot spares or cache devices can be removed
Use replace, not remove.
Regards,
markm
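[A sketch of the suggested command; the new device name is a placeholder:]
# zpool replace bigpool c0d0p4 <new-device>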
On Mon, 14 Jan 2008, Wyllys Ingersoll wrote:
That doesn't work either.
The zpool replace command didn't work? You wouldn't happen to have a copy
of the errors you received, would you? I'd like to see that.
Regards,
markm
On Fri, 29 Feb 2008, Justin Vassallo wrote:
# zpool status
pool: external
state: FAULTED
status: One or more devices could not be opened. There are insufficient
replicas for the pool to continue functioning.
action: Attach the missing device and online it using 'zpool online'.
On Fri, 7 Mar 2008, Paul Raines wrote:
zfs create -o quota=131G -o reserv=131G -o recsize=8K zpool1/itgroup_001
and this is still running now. truss on the process shows nothing. I
don't know how to debug it beyond that. I thought I would ask for any
info from this list before I just
Drew Schatt wrote:
Can anyone explain how the following came about, and/or how to get rid
of it?
What does zdb show? Also, what do the partitions look like for c5t0d0? Did
something get overlapped?
Regards,
markm
On Sun, 23 Mar 2008, msl wrote:
I have some zfs filesystems with two // at the beginning, like
//dir1/dir2/dir3, and some other filesystems correct with just one
/ (/dir1/dir2/). The question is: can I set the mountpoint correctly?
You can set the mountpoint at any time with 'zfs set
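[Truncated; for example, with a placeholder dataset name:]
# zfs set mountpoint=/dir1/dir2/dir3 <dataset>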
On Wed, 21 May 2008, Justin Vassallo wrote:
zpool add -f external c12t0d0p0
zpool add -f external c13t0d0p0 (it wouldn't work without -f, and I believe
that's because the fs was online)
No, it had nothing to do with the pool being online. It was because a
single disk was being added to a
On Wed, 21 May 2008, Claus Guttesen wrote:
Isn't one supposed to be able to add more disks to an existing raidz(2)
pool and have the data spread across all disks in the pool automagically?
Alas, that is not yet possible. See Adam's blog for details:
On Tue, 3 Jun 2008, Gordon Ross wrote:
I'd really like to know: What are the conditions under which the
installer will offer ZFS root?
Only the text-based installer will offer it - not the GUI.
Regards,
markm
On Jun 5, 2008, at 4:43 PM, Bill Sommerfeld wrote:
after install, I'd think you could play games with zfs send | zfs
receive on an inactive BE to rewrite everything with the desired
attributes (more important for copies than compression).
I blogged about something similar about a year ago:
On Thu, 5 Jun 2008, Albert Lee wrote:
It doesn't seem to change the bootfs property on zpl or GRUB's menu.lst
on the zpool, so we do it manually:
It *does* actually update bootfs in menu.lst, but not until after the init
6 is run.
Regards,
markm
On Tue, 24 Jun 2008, Justin Vassallo wrote:
# zfs list
NAME              USED  AVAIL  REFER  MOUNTPOINT
external          449G   427G  27.4K  /external
external/backup   447G   427G   374G  /external/backup
# zoneadm -z anzan boot
could not verify fs /backup: could not access
On Jun 26, 2008, at 8:11 PM, Sumit Gupta wrote:
[EMAIL PROTECTED] is the snapshot of the original installation on
snv_92. snv_92.backup is the clone. You can see that the / is
mounted on snv_92.backup but in zfs list output it still shows that
'/' is mounted on snv_92.
It's showing
On Fri, 27 Jun 2008, wan_jm wrote:
the procedure is as follows:
1. mkdir /tank
2. touch /tank/a
3. zpool create tank c0d0p3
this command give the following error message:
cannot mount '/tank': directory is not empty;
4. reboot.
Then the OS can only be logged in to from the console. Is this a bug?
On Thu, 10 Jul 2008, Mark Phalan wrote:
I find this annoying as well. Another way that would help (but is fairly
orthogonal to your suggestion) would be to write a completion module for
zsh/bash/whatever that could tab-complete options to the z* commands
including zfs filesystems.
You
On Thu, 10 Jul 2008, Tim Foster wrote:
Mark Musante (famous for recently beating the crap out of lu)
Heh. Although at this point it's hard to tell who's the beat-er and who's
the beat-ee...
Regards,
markm
On Tue, 22 Jul 2008, Rainer Orth wrote:
I just wanted to attach a second mirror to a ZFS root pool on an Ultra
1/170E running snv_93.
I've followed the workarounds for CR 6680633 and 6680633 from the ZFS
Admin Guide, but booting from the newly attached mirror fails like so:
I think you're
On Wed, 23 Jul 2008, [EMAIL PROTECTED] wrote:
Rainer,
Sorry for your trouble.
I'm updating the installboot example in the ZFS Admin Guide with the
-F zfs syntax now. We'll fix the installboot man page as well.
Mark, I don't have an x86 system to test right now, can you send me the
On Fri, 25 Jul 2008, Alan Burlison wrote:
Enda O'Connor wrote:
probably
6722767 lucreate did not add new BE to menu.lst ( or grub )
Yeah, I found that bug, added a CR, and bumped the priority.
Unfortunately there's no analysis or workaround in the bug, so I've no
idea what the real problem
On Thu, 28 Aug 2008, Paul Floyd wrote:
Does anyone have a pointer to a howto for doing a liveupgrade such that
I can convert the SXCE 94 UFS BE to ZFS (and liveupgrade to SXCE 96
while I'm at it) if this is possible? Searching with google shows a lot
of blogs that describe the early
On Mon, 1 Sep 2008, Gavin Maltby wrote:
I'd like to be able to utter cmdlines such as
$ zfs set readonly=on .
$ zfs snapshot [EMAIL PROTECTED]
with '.' interpreted to mean the dataset corresponding to the current
working directory.
Sounds like it would be a useful RFE.
This would
On 3 Sep 2008, at 05:20, F. Wessels [EMAIL PROTECTED] wrote:
Hi,
can anybody describe the correct procedure to replace a disk (in a
working OK state) with a another disk without degrading my pool?
This command ought to do the trick:
zpool replace pool old-disk new-disk
The type of pool
On Mon, 8 Sep 2008, jan damborsky wrote:
Is there any way to release the ZFS dump volume after it has been
activated by the dumpadm(1M) command?
Try 'dumpadm -d swap' to point the dump to the swap device.
Regards,
markm
On 13 Sep 2008, at 08:33, Guido [EMAIL PROTECTED] wrote:
Hi all,
after installing OpenSolaris 2008.05 in VirtualBox I've created a
ZFS root mirror by:
zpool attach rpool Disk B
and it works like a charm. Now I tried to restore the rpool from the
worst-case scenario: the disk the
Hi Glenn,
Where is it hanging? Could you provide a stack trace? It's possible
that it's just a bug and not a configuration issue.
On 18 Sep, 2008, at 16.12, Glenn Lagasse wrote:
I had a disk that contained a zpool. For reasons that we won't go in
to, that disk had zeros written all
On Sat, 27 Sep 2008, Marcin Woźniak wrote:
After a successful upgrade from snv_95 to snv_98 (ufs boot to zfs boot)
and luactivate of the new ZFS BE, I am not able to ludelete the old BE with
ufs. The problem, I think, is that the zfs boot is /rpool/boot/grub.
This is due to a bug in the /usr/lib/lu/lulib
On Tue, 30 Sep 2008, Ian Collins wrote:
Mark J Musante wrote:
On Sat, 27 Sep 2008, Marcin Woźniak wrote:
After a successful upgrade from snv_95 to snv_98 (ufs boot to zfs
boot) and luactivate of the new ZFS BE, I am not able to ludelete the old
BE with ufs. The problem, I think, is that the zfs boot
On Tue, 30 Sep 2008, Ram Sharma wrote:
Hi,
can anyone please tell me the maximum number of files that can
be in one folder in Solaris with the ZFS file system.
By folder, I assume you mean directory and not, say, pool. In any case,
the 'limit' is 2^48, but that's effectively no
So this is where I stand. I'd like to ask zfs-discuss if they've seen any
ZIL/Replay style bugs associated with u3/u5 x86? Again, I'm confident in my
hardware, and /var/adm/messages is showing no warnings/errors.
Are you absolutely sure the hardware is OK? Is there another disk you can
On Thu, 6 Nov 2008, Chris Ridd wrote:
I probably need to downgrade a machine from 10u5 to 10u3. The zpool on
u5 is a v4 pool, and AIUI 10u3 only supports up to v3 pools.
The only difference between a v4 pool and a v3 pool is that v4 added
history ('zpool history pool'). I would expect a v3
Hi Michael,
Did you try doing an export/import of tank?
On Thu, 6 Nov 2008, Michael Schuster wrote:
all,
I've gotten myself into a fix I don't know how to resolve (and I can't
reboot the machine, it's a build server we share):
$ zfs list -r tank/schuster
NAME
Just to try this out, I created a 9g zpool and a 5g volume in that zpool.
Then I used dd to write to every block of the volume.
Taking a snapshot of the volume at that point attempts to reserve an
additional 5g, which fails.
With 1g volumes we see it in action:
bash-3.00# zpool create tank
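[Truncated; presumably the 1g demonstration continues along these lines,
with a placeholder device name:]
# zpool create tank c0t1d0
# zfs create -V 1g tank/vol
# zfs snapshot tank/vol@snap
The snapshot attempts to reserve an additional 1g, and fails if the pool
lacks the space.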
On Mon, 17 Nov 2008, Vincent Boisard wrote:
#zpool create pool1 c1d1s0
invalid vdev specification
use '-f' to override the following errors:
/dev/dsk/c1d1s0 overlaps with /dev/dsk/c1d1s2
That's CR 6419310.
Regards,
markm
On Tue, 9 Dec 2008, Tim Haley wrote:
ludelete doesn't handle this any better than beadm destroy does, it
fails for the same reasons. lucreate does not promote the clone it
creates when a new BE is spawned, either.
Live upgrade's luactivate command is meant to promote the BE during init 6
On Tue, 9 Dec 2008, elaine ashton wrote:
Thanks! That'd be great as I have an snv_79 system that doesn't exhibit
this behaviour, so I'll assume that this was added sometime
between that release and 101a?
According to the CR, the putback went into build 66.
external link:
On Tue, 9 Dec 2008, Elaine Ashton wrote:
If I fdisk 2 disks to have EFI partitions and label them with the
appropriate partition beginning at sector 34 and then give them to ZFS
for a pool, ZFS would appear to change the beginning sector to 256.
Right. This is done deliberately so that we
The best you can do right now is mirroring. During the install, choose
more than one hard drive and zfs will create a mirror configuration.
Support for raidz and/or striping is for a future project.
On Fri, 19 Dec 2008, iman habibi wrote:
Hello All
I'm new to the Solaris 10 ZFS structure. My
Hi Amy,
This is a known problem with ZFS and live upgrade. I believe the docs for
s10u6 discourage the config you show here. A patch should be ready some
time next month with a fix for this.
On Fri, 16 Jan 2009, amy.r...@tufts.edu wrote:
I've installed an s10u6 machine with no UFS
On Fri, 16 Jan 2009, amy.r...@tufts.edu wrote:
mmusante This is a known problem with ZFS and live upgrade. I believe the
mmusante docs for s10u6 discourage the config you show here. A patch should
mmusante be ready some time next month with a fix for this.
Do you happen to have a bugid
On Thu, 22 Jan 2009, Al Slater wrote:
Mounting root on rpool/ROOT/Sol11_b105 with filesystem type zfs is not
supported
This line is coming from svm, which leads me to believe that the zfs
boot blocks were not properly installed by live upgrade.
You can try doing this by hand, with the
On Wed, 28 Jan 2009, Richard Elling wrote:
Orvar Korvar wrote:
I have 5 terabyte discs in a raidz1. Could I add one SSD drive in a
similar vein? Would it be easy to do?
Yes.
To be specific, you use the 'cache' argument to zpool, as in:
zpool create pool ... cache cache-device
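A cache device can also be added to an existing pool with the same keyword
(the device name is a placeholder):
# zpool add pool cache c4t0d0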
On Fri, 30 Jan 2009, Frank Cusack wrote:
so, is there a way to tell zfs not to perform the mounts for data2? or
another way i can replicate the pool on the same host, without exporting
the original pool?
There is not a way to do that currently, but I know it's coming down the
road.
Hi Pål,
CR 6420274 covers the -p part of your question. As far as kstats go, we only
have them in the arc and the vdev read-ahead cache.
Regards,
markm
On Fri, 30 Jan 2009, Ed Kaczmarek wrote:
And/or step me thru the required mdb/kdb/whatever it's called stack
trace dump command sequence after booting with -kd
Dan Mick's got a good guide on his blog:
http://blogs.sun.com/dmick/entry/diagnosing_kernel_hangs_panics_with
Regards,
markm
To set the mountpoint back to default, use 'zfs inherit mountpoint dataset'
Handojo wrote:
hando...@opensolaris:~# zpool add rpool c4d0
Two problems: first, the command needed is 'zpool attach', because
you're making a mirror. 'zpool add' is for extending stripes, and
currently stripes are not supported as root pools.
The second problem is that when the drive is
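[A sketch of the attach form; the slice names are placeholders, and the new
disk needs a slice covering the space to be mirrored:]
# zpool attach rpool c3d0s0 c4d0s0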
On Fri, 13 Feb 2009, Tony Marshall wrote:
How would I obtain the current setting for the vdev_cache from a
production system? We are looking at trying to tune ZFS for better
performance with respect to oracle databases, however before we start
changing settings via the /etc/system file we
On Thu, 5 Mar 2009, Blake wrote:
I had a 2008.11 machine crash while moving a 700gb file from one machine
to another using cp. I looked for an existing bug for this, but found
nothing.
Has anyone else seen behavior like this? I wanted to check before
filing a bug.
Have you got a copy of
Hi Steven,
Try doing 'zfs list -t all'. This is a change that went in late last year
to list only datasets unless snapshots were explicitly requested.
On Fri, 6 Mar 2009, Steven Sim wrote:
Gurus;
I am using OpenSolaris 2008.11 snv_101b_rc2 X86
Prior to this I was using SXCE build 91
On Fri, 6 Mar 2009, Blake wrote:
I have savecore enabled, but it doesn't look like the machine is dumping
core as it should - that is, I don't think it's a panic - I suspect
interrupt handling.
Then when you say you had a machine crash, what did you mean?
Did you look in /var/crash/* to see
On Fri, 6 Mar 2009, Blake wrote:
I have savecore enabled, but nothing in /var/crash:
r...@filer:~# savecore -v
savecore: dump already processed
r...@filer:~# ls /var/crash/filer/
r...@filer:~#
OK, just to ask the dumb questions: is dumpadm configured for
/var/crash/filer? Is the dump zvol
On Tue, 17 Mar 2009, Neal Pollack wrote:
Can anyone share some instructions for setting up the rpool mirror of
the boot disks during the Solaris Nevada (SXCE) install?
You'll need to use the text-based installer, and in there you choose
the two bootable disks instead of just one.
On 17 Mar, 2009, at 16.21, Bryan Allen wrote:
Then mirror the VTOC from the first (zfsroot) disk to the second:
# prtvtoc /dev/rdsk/c1t0d0s2 | fmthard -s - /dev/rdsk/c1t1d0s2
# zpool attach -f rpool c1t0d0s0 c1t1d0s0
# zpool status -v
And then you'll still need to run installgrub to put
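[Truncated; on x86 that step would presumably be:]
# installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c1t1d0s0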
On Fri, 27 Mar 2009, Alec Muffett wrote:
The inability to create more than 1 clone at a time (ie: in separate
TXGs) is something which has hampered me (and several projects on which
I have worked) for some years, now.
Hi Alec,
Does CR 6475257 cover what you're looking for?
Regards,
markm
On Thu, 9 Apr 2009, shyamali.chakrava...@sun.com wrote:
Hi All,
I have a corefile where we see a NULL pointer dereference panic, as we have
sent (deliberately) a NULL pointer for the return value.
vdev_disk_io_start()
error = ldi_ioctl(dvd->vd_lh, zio->io_cmd,
On Fri, 10 Apr 2009, Patrick Skerrett wrote:
degradation) when these write bursts come in, and if I could buffer them
even for 60 seconds, it would make everything much smoother.
ZFS already batches up writes into a transaction group, which currently
happens every 30 seconds. Have you
On Fri, 17 Apr 2009, Mark J Musante wrote:
The dependency is based on the names.
I should clarify what I mean by that. There are actually two dependencies
here: one is based on dataset names, and one is based on snapshots and
clones.
If there are two datasets, pool/foo and pool/foo/bar
On Thu, 7 May 2009, Mike Gerdts wrote:
Perhaps you have change the configuration of the array since the last
reconfiguration boot. If you run devfsadm then run format, does it
see more disks?
Another thing to check is to see if the controller has a jbod mode as
opposed to passthrough.
On Thu, 21 May 2009, Ian Collins wrote:
I'm trying to use zfs send/receive to replicate the root pool of a system and
I can't think of a way to stop the received copy attempting to mount the
filesystem over the root of the destination pool.
If you're using build 107 or later, there's a
On Thu, 21 May 2009, Nandini Mocherla wrote:
Then I booted into failsafe mode of 101a and then tried to run the
following command as given in luactivate output.
Yeah, that's a known bug in the luactivate output. CR 6722845
# mount -F zfs /dev/dsk/c1t2d0s0 /mnt
cannot open
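[A sketch of the working alternative: mount the root dataset rather than
the device node; the BE name is a placeholder:]
# mount -F zfs rpool/ROOT/<BE-name> /mnt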