Ghost should work just fine. It's not just a Windows program; it's best used
from a bootable CD or floppy (or network boot for the adventurous), and it'll
back up any hard drive, not just Windows.
If you're using Ghost it's always best to have a bootable CD or similar, since
you can recover the
On Thu, 2008-06-05 at 11:53 -0700, Brandon High wrote:
On Thu, Jun 5, 2008 at 12:44 AM, Aubrey Li [EMAIL PROTECTED] wrote:
For Windows we use Ghost for system backup and recovery.
Can we do a similar thing for Solaris with ZFS?
You could probably use a Ghost bootdisk to create an image of a
I don't presently have any working x86 hardware, nor do I routinely work with
x86 hardware configurations.
But it's not hard to find previous discussion on the subject:
http://www.opensolaris.org/jive/thread.jspa?messageID=96790
for example...
Also, remember that SAS controllers can usually also
Hi Erik,
Thanks for your instructions, but let me dig into the details.
On Thu, Jun 5, 2008 at 10:04 PM, Erik Trimble [EMAIL PROTECTED] wrote:
Thus, you could do this:
(1) Install system A
No problem, :-)
(2) hook USB drive to A, and mount it at /mnt
I created a ZFS pool and mounted it at
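The command was cut off above; a minimal sketch of the idea, assuming the
USB drive shows up as c5t0d0 (check with `format` or `rmformat`) and using
an illustrative pool name:

  # create a pool on the USB disk and mount it at /mnt
  zpool create -m /mnt usbpool c5t0d0
  zfs list usbpool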
On Thu, Jun 05, 2008 at 09:13:24PM -0600, Keith Bierman wrote:
On Jun 5, 2008, at 8:58 PM, Brad Diggs wrote:
Hi Keith,
Sure, you can truncate some files, but that effectively corrupts the
files in our case and would cause more harm than good. The only files
in our volume
Hi Tobias,
I did this for a large lab we had last month; I have it set up
something like this:
zfs snapshot [EMAIL PROTECTED]
zfs send -i [EMAIL PROTECTED] [EMAIL PROTECTED] | ssh server2 zfs recv rep_pool
ssh zfs destroy [EMAIL PROTECTED]
ssh zfs rename [EMAIL PROTECTED] [EMAIL PROTECTED]
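The archive's address scrubber has mangled the snapshot names above
(anything containing an @ became [EMAIL PROTECTED]), so here is a guess at
the pattern with hypothetical pool and snapshot names; the last two
commands presumably ran against server2, and the sending side would need a
matching destroy/rename so the next incremental has a consistent base:

  # take a new snapshot, send only the delta since the last one
  zfs snapshot datapool/fs@new
  zfs send -i datapool/fs@base datapool/fs@new | ssh server2 zfs recv rep_pool
  # on the receiver, drop the old base and roll the name forward
  ssh server2 zfs destroy rep_pool@base
  ssh server2 zfs rename rep_pool@new rep_pool@base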
Or you could use Tim Foster's ZFS snapshot service:
http://blogs.sun.com/timf/entry/zfs_automatic_snapshots_now_with
/peter
On Jun 6, 2008, at 14:07, Tobias Exner wrote:
Hi,
I'm thinking about the following situation and I know there are some
things I have to understand:
I want to use two
Buy a 2-port SATA II PCI-E x1 SiI3132 controller ($20). The Solaris driver is
very stable.
Or, a solution I would personally prefer, don't use a 7th disk. Partition
each of your 6 disks with a small ~7-GB slice at the beginning and the rest of
the disk for ZFS. Install the OS in one of the
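A sketch of that layout, assuming the big ZFS slice ends up as s7 on each
disk (device and slice names are illustrative and will depend on your
controller):

  # raidz across the large slice of each of the six disks
  zpool create tank raidz c1t0d0s7 c1t1d0s7 c1t2d0s7 \
                          c1t3d0s7 c1t4d0s7 c1t5d0s7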
If I read the man page right, you might only have to keep a minimum of two
on each side (maybe even just one on the receiving side), although I might be
tempted to keep an extra just in case; say near current, 24 hours old, and a
week old (space permitting for the larger interval of the last one).
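One way to check what you have before pruning, with hypothetical names; the
key constraint is that a snapshot is only safe to destroy once a newer
common snapshot exists on both sides, or the next `zfs send -i` has no
base:

  # list snapshots oldest-first, with creation times
  zfs list -r -t snapshot -o name,creation -s creation datapool/fs
  # then drop one that both sides no longer need
  zfs destroy datapool/fs@old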
Richard L. Hamilton rlhamil at smart.net writes:
But I suspect to some extent you get what you pay for; the throughput on the
higher-end boards may well be a good bit higher.
Not really. Nowadays, even the cheapest controllers, processors, and mobos are
EASILY capable of handling the platter-speed
On Thu, Jun 05, 2008 at 12:02:42PM -0400, Chris Siebenmann wrote:
- as separate filesystems, they have to be separately NFS mounted
I think this is the one that gets under my skin. If there were a
way to merge a filesystem into a parent filesystem for the purposes
of NFS, that would be
I encountered an issue that people using OS X systems as NFS clients
need to be aware of. While not strictly a ZFS issue, it may be
encountered most often by ZFS users since ZFS makes it easy to support
and export per-user filesystems. The problem I encountered was when
using ZFS to
[...]
That's not to say that there might not be other problems with scaling to
thousands of filesystems. But you're certainly not the first one to test it.
For cases where a single filesystem must contain files owned by
multiple users (/var/mail being one example), old-fashioned
Brian Hechinger wrote:
On Thu, Jun 05, 2008 at 12:02:42PM -0400, Chris Siebenmann wrote:
- as separate filesystems, they have to be separately NFS mounted
I think this is the one that gets under my skin. If there were a
way to merge a filesystem into a parent filesystem for the purposes
On Fri, Jun 6, 2008 at 12:23 AM, Aubrey Li [EMAIL PROTECTED] wrote:
Here, zfs send tank/root /mnt/root doesn't work, zfs send can't accept
a directory as an output. So I use zfs send and zfs receive:
Really? zfs send just gives you a byte stream, and the shell redirects
it to the file root in
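In other words (a sketch with hypothetical names), a full stream can go
straight to a flat file on the mounted drive and come back later:

  # dump the snapshot to a file...
  zfs send tank/root@backup > /mnt/root.zfs
  # ...and restore it into a new filesystem
  zfs receive tank/root_restored < /mnt/root.zfs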
On Thu, Jun 5, 2008 at 11:37 PM, Albert Lee
[EMAIL PROTECTED] wrote:
Raw disk images are, uh, nice and all, but I don't think that was what
Aubrey had in mind when asking zfs-discuss about a backup solution. This
is 2008, not 1960.
But retro is in!
The point that I didn't really make is that
That was it!
hpux-is-old.com -> nearline.host NFS C GETATTR3 FH=F6B3
nearline.host -> hpux-is-old.com NFS R GETATTR3 OK
hpux-is-old.com -> nearline.host NFS C SETATTR3 FH=F6B3
nearline.host -> hpux-is-old.com NFS R SETATTR3 Update synch mismatch
hpux-is-old.com -> nearline.host NFS C GETATTR3 FH=F6B3
On Fri, Jun 6, 2008 at 10:41 PM, Brandon High [EMAIL PROTECTED] wrote:
On Fri, Jun 6, 2008 at 12:23 AM, Aubrey Li [EMAIL PROTECTED] wrote:
Here, zfs send tank/root /mnt/root doesn't work, zfs send can't accept
a directory as an output. So I use zfs send and zfs receive:
Really? zfs send just
My organization is considering an RFP for MAID storage and we're
wondering about potential conflicts between MAID and ZFS.
We want MAID's power management benefits but are concerned
that what we understand to be ZFS's use of dynamic striping across
devices with filesystem metadata replication and
I think most MAID is sold as a (misguided IMHO) replacement for
Tape, not as a Tier 1 kind of storage. YMMV.
-- mark
John Kunze wrote:
My organization is considering an RFP for MAID storage and we're
wondering about potential conflicts between MAID and ZFS.
We want MAID's power management
Folks,
I am running into an issue with a quota-enabled ZFS filesystem. I tried to
check out the ZFS properties but could not figure out a workaround.
I have a file system /data/project/software which has 250G quota set. There
are no snapshots enabled for this system. When the quota is reached on
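The truncation approach mentioned earlier in the thread looks something
like this (paths are illustrative); it frees blocks without allocating new
ones, though as noted it destroys the file's contents, and it only helps
if no snapshot still holds the blocks:

  # truncate an expendable file to zero length, then remove it
  cp /dev/null /data/project/software/scratch.log
  rm /data/project/software/scratch.log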
On Fri, Jun 6, 2008 at 9:29 AM, John Kunze [EMAIL PROTECTED] wrote:
My organization is considering an RFP for MAID storage and we're
wondering about potential conflicts between MAID and ZFS.
I had to look up MAID; the first link Google gave me was
http://www.closetmaid.com/ which doesn't seem right
Richard L. Hamilton wrote:
A single /var/mail doesn't work well for 10,000 users either. When you
start getting into that scale of service provisioning, you might look at
how the big boys do it... Apple, Verizon, Google, Amazon, etc. You
should also look at e-mail systems designed to
2008/6/6 Richard Elling [EMAIL PROTECTED]:
Richard L. Hamilton wrote:
A single /var/mail doesn't work well for 10,000 users either. When you
start getting into that scale of service provisioning, you might look at
how the big boys do it... Apple, Verizon, Google, Amazon, etc. You
should
PCI or PCI-X. Yes, you might see *SOME* loss in speed from a PCI
interface, but let's be honest, there aren't a whole lot of users on
this list who have the infrastructure to use greater than 100MB/sec and
are asking this sort of question. A PCI bus should have no issues
pushing that.
Hi Ricardo,
I'll try that.
Thanks (Obrigado)
Paulo Soeiro
On 6/5/08, Ricardo M. Correia [EMAIL PROTECTED] wrote:
On Tue, 2008-06-03 at 23:33 +0100, Paulo Soeiro wrote:
6) Removed and re-attached the USB sticks:
zpool status
pool: myPool
state: UNAVAIL
status: One or more devices could not
On Thu, Jun 5, 2008 at 2:11 PM, Erik Trimble [EMAIL PROTECTED] wrote:
Quotas are great when, for administrative purposes, you want a large
number of users on a single filesystem, but to restrict the amount of
space for each. The primary place I can think of this being useful is
/var/mail
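The ZFS idiom for the per-user case is the inverse: give each user a
filesystem with its own cap (names here are hypothetical):

  zfs create -o quota=500m tank/home/alice
  zfs create -o quota=500m tank/home/bob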
On Fri, Jun 6, 2008 at 16:23, Tom Buskey [EMAIL PROTECTED] wrote:
I have an AMD 939 MB w/ Nvidea on the motherboard and 4 500GB SATA II drives
in a RAIDZ.
...
I get 550 MB/s
I doubt this number a lot. That's almost 200 MB/s per disk
(550/(N-1) = 550/3 ≈ 183), and drives I've seen are usually more in
On Fri, Jun 06, 2008 at 10:42:45AM -0500, Bob Friesenhahn wrote:
On Fri, 6 Jun 2008, Brian Hechinger wrote:
On Thu, Jun 05, 2008 at 12:02:42PM -0400, Chris Siebenmann wrote:
- as separate filesystems, they have to be separately NFS mounted
I think this is the one that gets under my
On Fri, Jun 06, 2008 at 07:37:18AM -0400, Brian Hechinger wrote:
On Thu, Jun 05, 2008 at 12:02:42PM -0400, Chris Siebenmann wrote:
- as separate filesystems, they have to be separately NFS mounted
I think this is the one that gets under my skin. If there were a
way to merge a
On Fri, Jun 06, 2008 at 08:51:13PM +0200, Mattias Pantzare wrote:
2008/6/6 Richard Elling [EMAIL PROTECTED]:
I was going to post some history of scaling mail, but I blogged it instead.
http://blogs.sun.com/relling/entry/on_var_mail_and_quotas
The problem with that argument is that 10,000
On Jun 6, 2008, at 2:50 PM, Nicolas Williams wrote:
On Fri, Jun 06, 2008 at 10:42:45AM -0500, Bob Friesenhahn wrote:
On Fri, 6 Jun 2008, Brian Hechinger wrote:
On Thu, Jun 05, 2008 at 12:02:42PM -0400, Chris Siebenmann wrote:
- as separate filesystems, they have to be separately NFS
On Fri, Jun 06, 2008 at 02:58:09PM -0700, eric kustarz wrote:
I expect that mirror mounts will be coming Linux's way too.
They should already have them:
http://blogs.sun.com/erickustarz/en_US/entry/linux_support_for_mirror_mounts
Even better.
On Fri, Jun 06, 2008 at 02:58:09PM -0700, eric kustarz wrote:
clients do not. Without per-filesystem mounts, 'df' on the client
will not report correct data though.
I expect that mirror mounts will be coming Linux's way too.
They should already have them:
On Fri, Jun 06, 2008 at 04:52:45PM -0500, Nicolas Williams wrote:
Mirror mounts take care of the NFS problem (with NFSv4).
NFSv3 automounters could be made more responsive to server-side changes
in share lists, but hey, NFSv4 is the future.
So basically it's just a waiting game at this
On Jun 6, 2008, at 3:27 PM, Brian Hechinger wrote:
On Fri, Jun 06, 2008 at 02:58:09PM -0700, eric kustarz wrote:
clients do not. Without per-filesystem mounts, 'df' on the client
will not report correct data though.
I expect that mirror mounts will be coming Linux's way too.
They should
On Fri, Jun 06, 2008 at 06:27:01PM -0400, Brian Hechinger wrote:
On Fri, Jun 06, 2008 at 02:58:09PM -0700, eric kustarz wrote:
clients do not. Without per-filesystem mounts, 'df' on the client
will not report correct data though.
I expect that mirror mounts will be coming Linux's
Mattias Pantzare wrote:
2008/6/6 Richard Elling [EMAIL PROTECTED]:
Richard L. Hamilton wrote:
A single /var/mail doesn't work well for 10,000 users either. When you
start getting into that scale of service provisioning, you might look at
how the big boys do it... Apple, Verizon,
Hello,
I have two disks, each with a partition mounted as swap and some space
unallocated. I would like to format the disks to create a partition from that
unallocated space.
This should be safe given I've done it several times on disks with UFS, but
I'm not too sure with ZFS. Is there
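Before touching the labels, it seems prudent to confirm exactly which
slices ZFS and swap are actually using, so the new partition comes only
from the unallocated space:

  # which devices does the pool hold?
  zpool status -v
  # which devices are swap?
  swap -l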
On Fri, Jun 06, 2008 at 03:43:29PM -0700, eric kustarz wrote:
On Jun 6, 2008, at 3:27 PM, Brian Hechinger wrote:
On Fri, Jun 06, 2008 at 02:58:09PM -0700, eric kustarz wrote:
clients do not. Without per-filesystem mounts, 'df' on the client
will not report correct data though.
I
On Thu, Jun 5, 2008 at 9:26 PM, Tim [EMAIL PROTECTED] wrote:
On Thu, Jun 5, 2008 at 11:12 PM, Joe Little [EMAIL PROTECTED] wrote:
On Thu, Jun 5, 2008 at 8:16 PM, Tim [EMAIL PROTECTED] wrote:
On Thu, Jun 5, 2008 at 9:17 PM, Peeyush Singh [EMAIL PROTECTED]
wrote:
Hey guys, please