Hello everybody! Please, help me!
I have a Solaris 10 x86_64 server with five 40 GB HDDs.
One HDD, holding the root and /usr partitions (and others, UFS filesystem), crashed.
It's dead.
The other four HDDs (ZFS) each held their own pool (zpool create disk1
c0t1d0, and so on).
I reinstalled Solaris 10 x86_64
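If the four data disks survived, a fresh install can usually rediscover the pools from the labels ZFS keeps on the disks themselves. A sketch, assuming the pool names disk1, disk2, etc. from the description above:

```shell
# Scan attached disks for importable pools (reads the on-disk ZFS labels)
zpool import

# Import each pool by name; -f may be needed because the pools were
# last in use by the old, now-destroyed OS install
zpool import -f disk1
zpool import -f disk2
```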
Hi list,
I have a question about setting up ZFS send/receive functionality (between
remote machines) as a non-root user.
server1 is the server where zfs send will be executed.
server2 is the server where zfs receive will be executed.
I am using the following zfs structure:
[server1]$ zfs list -t
space to store an intermediate .zfs file.
Of course, I can write to a remote tape over ssh using the command below, but I'd
like to see some kind of meaningful names on the tape:
# zfs send tank/[EMAIL PROTECTED] | ssh remote_server "cat > /dev/rmt/0bn"
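On the non-root question: newer builds have ZFS delegated administration (`zfs allow`), though availability depends on your release, so treat this as a sketch. The user name "backup" and the dataset names are made up:

```shell
# On server1: allow the backup user to snapshot and send the dataset
zfs allow backup snapshot,send tank/data

# On server2: allow the same user to receive into a target dataset
zfs allow backup receive,create,mount tank/backups

# Then, as that user (no root needed on either end):
zfs send tank/data@snap1 | ssh backup@server2 zfs receive tank/backups/data
```

Note that the ssh login on server2 must be the delegated user, not root.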
Thanks,
Sergey
This message posted from opensolaris.org
I am running Solaris U4 x86_64.
It seems that something has changed regarding mdb:
# mdb -k
Loading modules: [ unix krtld genunix specfs dtrace cpu.AuthenticAMD.15 uppc
pcplusmp ufs ip hook neti sctp arp usba fctl nca lofs zfs random nfs sppp
crypto ptm ]
> arc::print -a c_max
mdb: failed to
This used to work on a Solaris box. Now I
cannot do it.
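As a workaround that does not depend on mdb symbol names, the ARC counters are also exported through kstat (the arcstats kstat should be present on any ZFS-enabled Solaris; the exact statistic names are an assumption for your build):

```shell
# Current ARC target maximum and actual size, in bytes
kstat -p zfs:0:arcstats:c_max
kstat -p zfs:0:arcstats:size
```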
Thanks,
Sergey
On August 1, 2007 08:15 am, [EMAIL PROTECTED] wrote:
On 01/08/2007, at 7:50 PM, Joerg Schilling wrote:
Boyd Adamson [EMAIL PROTECTED] wrote:
Or alternatively, are you comparing ZFS(Fuse) on Linux with XFS on
Linux? That doesn't seem
--
Sergey Chechelnitskiy ([EMAIL PROTECTED])
WestGrid/SFU
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
The setup below works fine for me.
macmini:~ jimb$ mount | grep jimb
ride:/xraid2/home/jimb on /private/var/automount/home/jimb (nosuid, automounted)
macmini:~ jimb$ nidump fstab / | grep jimb
ride:/xraid2/home/jimb /home/jimb nfs rw,nosuid,tcp 0 0
NFS server: Solaris 10 11/06 x86_64 + patches,
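For reference, the matching server-side export might look like this; the dataset name xraid2/home is an assumption based on the client mount, and on a non-ZFS path the traditional share(1M) form applies instead:

```shell
# If /xraid2/home is a ZFS dataset, share it via the sharenfs property
zfs set sharenfs='rw,nosuid' xraid2/home

# Otherwise, share the path directly (plus an entry in /etc/dfs/dfstab
# to make it persistent across reboots)
share -F nfs -o rw,nosuid /xraid2/home/jimb
```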
After BFUing from b37 to current, the zpool won't come up; it fails with this error:
wis-2 ~ # zpool status -x
pool: zstore
state: FAULTED
status: One or more devices could not be opened. There are insufficient
replicas for the pool to continue functioning.
action: Attach the missing device and online it using 'zpool online'.
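When a device cannot be opened, the usual first step is to confirm the OS still sees the disk at all, then try to bring it back online; a sketch, with c0t1d0 as a placeholder for whatever device zpool status reports:

```shell
# Does the OS enumerate the disk at all?
format </dev/null

# If the device is visible again, try onlining it
zpool online zstore c0t1d0

# And re-check the pool
zpool status -v zstore
```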
A little addition to the original question:
Imagine that you have a RAID array attached to a Solaris server, with ZFS on the RAID.
Then one day you lose the server completely (fried motherboard, physical crash,
...). Is there any way to connect the RAID to another server and restore the
ZFS layout?
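In general, yes: the pool configuration is stored in labels on the pool's own devices, not only on the dead server, so a replacement host can discover and import it. A sketch, with tank as a placeholder pool name:

```shell
# On the replacement server, with the RAID LUNs attached:
zpool import          # lists pools found on the attached devices
zpool import -f tank  # -f because the dead host never exported the pool
```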
Please read also http://docs.info.apple.com/article.html?artnum=303503.
I had the same problem. Read the following article:
http://docs.info.apple.com/article.html?artnum=302780
Most likely you have "Allow Host Cache Flushing" checked. Uncheck it and try
again.