Jeff A. Earickson wrote:
Hi,
I was looking for the zfs system calls to check zfs quotas from
within C code, analogous to the quotactl(7I) interface for UFS,
and realized that there was nothing similar. Is anything like this
planned? Why no public API for ZFS?
Do I start making calls to
I have a Sun x4200 with 4x gigabit ethernet NICs. I have several of them configured with distinct IP addresses on an internal (10.0.0.0) network.
[off topic]
Why are you using distinct IP addresses instead of IPMP?
[/off]
This message posted from opensolaris.org
Mike Gerdts wrote:
On 9/11/06, Matthew Ahrens [EMAIL PROTECTED] wrote:
B. DESCRIPTION
A new property will be added, 'copies', which specifies how many copies
of the given filesystem will be stored. Its value must be 1, 2, or 3.
Like other properties (e.g. checksum, compression), it only
On 12/09/06, Matthew Ahrens [EMAIL PROTECTED] wrote:
Here is a proposal for a new 'copies' property which would allow
different levels of replication for different filesystems.
Your comments are appreciated!
Flexibility is always nice, but this seems to greatly complicate things,
both
Hi Matt,
Interesting proposal. Has there been any consideration of whether the free space reported for a ZFS filesystem would take the copies setting into account?
Example:
zfs create mypool/nonredundant_data
zfs create mypool/redundant_data
df -h /mypool/nonredundant_data
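Under the proposal, the example would presumably continue along these lines (pool and filesystem names as above; the copies values and the question in the comment are taken from the thread, not from a real system):

```shell
# Keep one copy for scratch data, two for important data (proposed syntax):
zfs set copies=1 mypool/nonredundant_data
zfs set copies=2 mypool/redundant_data

# Open question from the thread: should AVAIL below be halved for the
# copies=2 filesystem, or stay pool-wide with USED growing twice as fast?
zfs list -o name,used,avail,refer mypool/nonredundant_data mypool/redundant_data
```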
Thank you all for your advice.
In the end, I chose to write two scripts (client and server) using port forwarding via SSH, for security reasons.
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
On 12/09/2006, at 1:28 AM, Nicolas Williams wrote:
On Mon, Sep 11, 2006 at 06:39:28AM -0700, Bui Minh Truong wrote:
Does ssh -v tell you any more?
I don't think the problem is ZFS send/recv. I think it takes a long time to connect over SSH.
I tried to access SSH by typing: ssh remote_machine.
On Tue, 12 Sep 2006, Darren J Moffat wrote:
Date: Tue, 12 Sep 2006 10:30:33 +0100
From: Darren J Moffat [EMAIL PROTECTED]
To: Jeff A. Earickson [EMAIL PROTECTED]
Cc: zfs-discuss@opensolaris.org
Subject: Re: [zfs-discuss] ZFS API (again!), need quotactl(7I)
Jeff A. Earickson wrote:
Hi,
I was
Hello Mark,
Monday, September 11, 2006, 4:25:40 PM, you wrote:
MM Jeremy Teo wrote:
Hello,
how are writes distributed as the free space within a pool reaches a
very small percentage?
I understand that when free space is available, ZFS will batch writes
and then issue them in sequential
On 12/09/06, Darren J Moffat [EMAIL PROTECTED] wrote:
Dick Davies wrote:
The only real use I'd see would be for redundant copies
on a single disk, but then why wouldn't I just add a disk?
Some systems have physical space for only a single drive - think most
laptops!
True - I'm a laptop
I'm experiencing a bizarre write performance problem while using a ZFS
filesystem. Here are the relevant facts:
[b]# zpool list[/b]
NAME   SIZE   USED   AVAIL   CAP   HEALTH   ALTROOT
mtdc   3.27T  502G   2.78T   14%   ONLINE   -
zfspool
This proposal would benefit greatly by a problem statement. As it stands, it
feels like a solution looking for a problem.
The Introduction mentions a different problem and solution, but then pretends that
there is value to this solution. The Description section mentions some benefits
of
On Tue, Sep 12, 2006 at 05:57:33PM +1000, Boyd Adamson wrote:
On 12/09/2006, at 1:28 AM, Nicolas Williams wrote:
Now you have a persistent SSH connection to remote-host that forwards
connections to localhost:12345 to port 56789 on remote-host.
So now you can use your Perl scripts more
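For reference, the tunnel described above can be set up with a single command (ports 12345 and 56789 as in the thread; -f, -N, and -L are standard OpenSSH options):

```shell
# Background after authentication (-f), run no remote command (-N), and
# forward local port 12345 to port 56789 on remote-host. The connection
# persists, so the scripts pay the SSH handshake cost once, not per transfer.
ssh -fN -L 12345:localhost:56789 remote-host
```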
The biggest problem I see with this is one of observability: if not all of the data is encrypted yet, what should the encryption property say? If it says encryption is on then the admin might think the data is safe, but if it says it is off that isn't the truth either, because some of it may be
Anton B. Rang wrote:
The biggest problem I see with this is one of observability: if not all of the data is encrypted yet, what should the encryption property say? If it says encryption is on then the admin might think the data is safe, but if it says it is off that isn't the truth either
True - I'm a laptop user myself. But as I said, I'd assume the whole disk
would fail (it does in my experience).
That's usually the case, but single-block failures can occur as well. They're
rare (check the uncorrectable bit error rate specifications) but if they
happen to hit a critical file,
And if we are still writing to the file systems at that time?
New writes should be done according to the new state (if encryption is being
enabled, all new writes are encrypted), since the goal is that eventually the
whole disk will be in the new state.
The completion percentage should
On 9/11/06, Matthew Ahrens [EMAIL PROTECTED] wrote:
Here is a proposal for a new 'copies' property which would allow
different levels of replication for different filesystems.
Your comments are appreciated!
I've read the proposal, and followed the discussion so far. I have to
say that I
I had a strange ZFS problem this morning. The entire system would hang when mounting the ZFS filesystems. After trial and error I determined that the problem was with one of the 2500 ZFS filesystems. When mounting that user's home the system would hang and need to be rebooted. After I
Darren J Moffat wrote:
While encryption of existing data is not in scope for the first ZFS
crypto phase I am being careful in the design to ensure that it can be
done later if such a ZFS framework becomes available.
The biggest problem I see with this is one of observability, if not all
of
Neil A. Wilson wrote:
Darren J Moffat wrote:
While encryption of existing data is not in scope for the first ZFS
crypto phase I am being careful in the design to ensure that it can be
done later if such a ZFS framework becomes available.
The biggest problem I see with this is one of
On Tue, Sep 12, 2006 at 07:23:00AM -0400, Jeff A. Earickson wrote:
Modify the dovecot IMAP server so that it can get zfs quota information
to be able to implement the QUOTA feature of the IMAP protocol (RFC 2087).
In this case pull the zfs quota numbers for quoted home directory/zfs
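A sketch of how dovecot (or a wrapper script) might pull those numbers. The dataset name is hypothetical, and the `zfs get` output is stubbed in as a variable so the parsing can be shown without a live pool; on a real system the stub would be replaced by `zfs get -Hp -o name,property,value quota,used tank/home/jeff`:

```shell
# Stub of `zfs get -Hp` output (tab-separated, raw byte values):
sample='tank/home/jeff	quota	1073741824
tank/home/jeff	used	536870912'

# Extract the two numbers and compute percent used, as RFC 2087 needs:
quota=$(printf '%s\n' "$sample" | awk '$2 == "quota" { print $3 }')
used=$(printf '%s\n' "$sample" | awk '$2 == "used"  { print $3 }')
pct=$(( used * 100 / quota ))
echo "quota=$quota used=$used pct=$pct"
# → quota=1073741824 used=536870912 pct=50
```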
On Tue, Sep 12, 2006 at 10:36:30AM +0100, Darren J Moffat wrote:
Mike Gerdts wrote:
Is there anything in the works to compress (or encrypt) existing data
after the fact? For example, a special option to scrub that causes
the data to be re-written with the new properties could potentially do
Hello,
I'm trying to set ZFS to work with RBAC so that I could manage all ZFS
stuff w/out root. However, in my setup there is sys_mount privilege
needed:
- without sys_mount:
vk199839:tessier:~$ zpool list
NAME   SIZE   USED   AVAIL   CAP   HEALTH   ALTROOT
local
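One way past this (reusing the username from the prompt above) is to grant the missing privilege directly, or to assign the stock ZFS RBAC profiles:

```shell
# Add just the missing privilege to the user's default privilege set:
usermod -K defaultpriv=basic,sys_mount vk199839

# Or assign the predefined ZFS execution profiles instead:
usermod -P "ZFS Storage Management,ZFS File System Management" vk199839
# After re-login, run the commands via the profile mechanism, e.g.:
pfexec zpool list
```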
Robert Milkowski wrote:
Hello Mark,
Monday, September 11, 2006, 4:25:40 PM, you wrote:
MM Jeremy Teo wrote:
Hello,
how are writes distributed as the free space within a pool reaches a
very small percentage?
I understand that when free space is available, ZFS will batch writes
and then issue
This is simply not true. ZFS would protect against the same type of errors seen on an individual drive as it would on a pool made of HW RAID LUN(s). It might be overkill to layer ZFS on top of a LUN that is already protected in some way by the device's internal RAID code but it does
There is also the speed enhancement provided by a HW RAID array, and usually RAS too, compared to a native disk drive, but the numbers on that are still coming in and being analyzed. (See previous threads.)
Speed enhancements? What is the baseline of comparison? Hardware RAIDs can be
On September 12, 2006 11:35:54 AM -0700 UNIX admin [EMAIL PROTECTED]
wrote:
There is also the speed enhancement provided by a HW RAID array, and usually RAS too, compared to a native disk drive, but the numbers on that are still coming in and being analyzed. (See previous threads.)
It would
Vladimir Kotal wrote:
Hello,
I'm trying to set ZFS to work with RBAC so that I could manage all ZFS
stuff w/out root. However, in my setup there is sys_mount privilege
needed:
- without sys_mount:
Currently, anything in zfs that changes dataset configurations, such as
file systems and
Take this for what it is: the opinion of someone who knows less about zfs than probably anyone else on this thread, but...
I would like to add my support for this proposal.
As I understand it, the reason for using ditto blocks on metadata is that maintaining their integrity is vital for the
On 12/09/06, Celso [EMAIL PROTECTED] wrote:
One of the great things about zfs is that it protects not just against mechanical failure, but against silent data corruption. Having this available to laptop owners seems to me to be important to making zfs even more attractive.
I'm not arguing
UNIX admin wrote:
This is simply not true. ZFS would protect against the same type of errors seen on an individual drive as it would on a pool made of HW RAID LUN(s). It might be overkill to layer ZFS on top of a LUN that is already protected in some way by the device's internal RAID code but
On 12/09/06, Celso [EMAIL PROTECTED] wrote:
One of the great things about zfs is that it protects not just against mechanical failure, but against silent data corruption. Having this available to laptop owners seems to me to be important to making zfs even more attractive.
I'm not
So, people here recommended the Marvell cards, and one even provided a
link to acquire them for SATA jbod support. Well, this is what the
latest bits (B47) say:
Sep 12 13:51:54 vram marvell88sx: [ID 679681 kern.warning] WARNING:
marvell88sx0: Could not attach, unsupported chip stepping or unable
Thomas Burns wrote:
Hi,
We have been using zfs for a couple of months now, and, overall, really like it. However, we have run into a major problem -- zfs's memory requirements crowd out our primary application. Ultimately, we have to reboot the machine so there is enough free memory to
On Tue, 12 Sep 2006, Mark Maybee wrote:
Thomas Burns wrote:
Hi,
We have been using zfs for a couple of months now, and, overall, really like it. However, we have run into a major problem -- zfs's memory requirements crowd out our primary application. Ultimately, we have to reboot
Joe Little wrote:
So, people here recommended the Marvell cards, and one even provided a
link to acquire them for SATA jbod support. Well, this is what the
latest bits (B47) say:
Sep 12 13:51:54 vram marvell88sx: [ID 679681 kern.warning] WARNING:
marvell88sx0: Could not attach, unsupported chip
On Sep 12, 2006, at 2:04 PM, Mark Maybee wrote:
Thomas Burns wrote:
Hi,
We have been using zfs for a couple of months now, and, overall, really like it. However, we have run into a major problem -- zfs's memory requirements crowd out our primary application. Ultimately, we have to
I currently have a system which has two ZFS storage pools. One of the pools is coming from a faulty piece of hardware. I would like to bring up our server mounting the storage pool which is okay and NOT mounting the one from the hardware with problems. Is there a simple way to NOT
zpool export
On September 12, 2006 2:41:27 PM -0700 David Smith [EMAIL PROTECTED] wrote:
I currently have a system which has two ZFS storage pools. One of the pools is
coming from a
faulty piece of hardware. I would like to bring up our server mounting the
storage pool which is
okay and NOT
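Assuming the two pools can be told apart (the names here are hypothetical), the suggestion amounts to:

```shell
# Export the pool on the faulty hardware; exported pools are not
# imported -- and their filesystems not mounted -- at the next boot.
zpool export badpool

# The healthy pool stays imported and mounts normally. Once the
# hardware is repaired, bring the pool back with:
zpool import badpool
```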
Thomas Burns wrote:
On Sep 12, 2006, at 2:04 PM, Mark Maybee wrote:
Thomas Burns wrote:
Hi,
We have been using zfs for a couple of months now, and, overall, really like it. However, we have run into a major problem -- zfs's memory requirements crowd out our primary application.
1) You should be able to limit your cache max size by setting arc.c_max. It's currently initialized to phys-mem-size - 1GB.
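At the time, arc.c_max was not a documented tunable; the usual way to change it was through mdb on the live kernel (the 0x20000000 = 512MB cap here is an arbitrary example value):

```shell
# /Z writes an 8-byte value; -k targets the live kernel, -w enables writes.
# This does not survive a reboot.
echo "arc.c_max/Z 0x20000000" | mdb -kw
```

Later builds reportedly added a zfs_arc_max tunable settable from /etc/system, which does persist across reboots.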
Mark's assertion that this is not a best practice is something of an
understatement. ZFS was designed so that users/administrators wouldn't have to
configure
On 12/09/06, Celso [EMAIL PROTECTED] wrote:
...you split one disk in two. You then have effectively two partitions which you can then create a new mirrored zpool with. Then everything is mirrored. Correct?
Everything in the filesystems in the pool, yes.
With ditto blocks, you can
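Concretely, the split-disk mirror would look something like this (slice names hypothetical, carved from the one disk with format(1M)):

```shell
# Mirror two slices of the same physical disk: every block is stored
# twice, guarding against isolated bad sectors but not whole-disk death.
zpool create lpool mirror c0t0d0s3 c0t0d0s4
```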
Also, where do I set arc.c_max? In /etc/system? Out of curiosity, why isn't limiting arc.c_max considered best practice (I just want to make sure I am not missing something about the effect limiting it will have)? My guess is that in our case (lots of small groups -- 50 people or less
On 12/09/06, Celso [EMAIL PROTECTED] wrote:
...you split one disk in two. You then have effectively two partitions which you can then create a new mirrored zpool with. Then everything is mirrored. Correct?
Everything in the filesystems in the pool, yes.
With ditto blocks, you can
Celso wrote:
Hopefully we can agree that you lose nothing by adding this feature, even if
you personally don't see a need for it.
If I read correctly, user tools will show more space in use when adding copies, quotas are impacted, etc. One could argue the added confusion outweighs the
Matthew Ahrens wrote:
Here is a proposal for a new 'copies' property which would allow
different levels of replication for different filesystems.
Thanks everyone for your input.
The problem that this feature attempts to address is when you have some
data that is more important (and thus
Matthew Ahrens wrote:
Here is a proposal for a new 'copies' property which would allow different levels of replication for different filesystems.
Thanks everyone for your input.
The problem that this feature attempts to address is when you have some data that is more important
On 9/12/06, Matthew Ahrens [EMAIL PROTECTED] wrote:
Matthew Ahrens wrote:
Here is a proposal for a new 'copies' property which would allow
different levels of replication for different filesystems.
Thanks everyone for your input.
The problem that this feature attempts to address is when you
On 12/09/06, Celso [EMAIL PROTECTED] wrote:
I think it has already been said that in many people's experience, when a disk fails, it completely fails. Especially on laptops. Of course ditto blocks wouldn't help you in this situation either!
Exactly.
I still think that silent data
Matthew Ahrens wrote:
Matthew Ahrens wrote:
Here is a proposal for a new 'copies' property which would allow
different levels of replication for different filesystems.
Thanks everyone for your input.
The problem that this feature attempts to address is when you have some
data that is more
Dick Davies wrote:
For the sake of argument, let's assume:
1. disk is expensive
2. someone is keeping valuable files on a non-redundant zpool
3. they can't scrape enough vdevs to make a redundant zpool
(remembering you can build vdevs out of *flat files*)
Given those assumptions, I think
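For assumption 3, the flat-file trick looks like this (paths and sizes are illustrative):

```shell
# Back a mirrored pool with two ordinary files on an existing filesystem.
mkfile 256m /export/vdev-a /export/vdev-b
zpool create filepool mirror /export/vdev-a /export/vdev-b
```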
On 12/09/06, Celso [EMAIL PROTECTED] wrote:
I think it has already been said that in many people's experience, when a disk fails, it completely fails. Especially on laptops. Of course ditto blocks wouldn't help you in this situation either!
Exactly.
I still think that silent data
On Sep 12, 2006, at 4:39 PM, Celso wrote:
On 12/09/06, Celso [EMAIL PROTECTED] wrote:
I think it has already been said that in many people's experience, when a disk fails, it completely fails. Especially on laptops. Of course ditto blocks wouldn't help you in this situation either!
Exactly.
Matthew Ahrens wrote:
Matthew Ahrens wrote:
Here is a proposal for a new 'copies' property which would allow
different levels of replication for different filesystems.
Thanks everyone for your input.
The problem that this feature attempts to address is when you have
some data that is
Chad Lewis wrote:
On Sep 12, 2006, at 4:39 PM, Celso wrote:
the proposed solution differs in one important aspect: it automatically
detects data corruption.
Detecting data corruption is a function of the ZFS checksumming feature. The
proposed solution has _nothing_ to do with detecting
Here's the information you requested.
Script started on Tue Sep 12 16:46:46 2006
# uname -a
SunOS umt1a-bio-srv2 5.10 Generic_118833-18 sun4u sparc SUNW,Netra-T12
# prtdiag
System Configuration: Sun Microsystems sun4u Sun Fire E2900
System clock frequency: 150 MHZ
Memory size: 96GB
On 9/12/06, eric kustarz [EMAIL PROTECTED] wrote:
So it seems to me that having this feature per-file is really useful.
Say i have a presentation to give in Pleasanton, and the presentation
lives on my single-disk laptop - I want all the meta-data and the actual
presentation to be replicated.
On 9/12/06, Celso [EMAIL PROTECTED] wrote:
Whether it's hard to understand is debatable, but this feature integrates very smoothly with the existing infrastructure and wouldn't cause any trouble when extending or porting ZFS.
OK, given this statement...
Just for the record, these
David Dyer-Bennet wrote:
While I'm not a big fan of this feature, if the work is that well understood and that small, I have no objection to it. (Boy that sounds snotty; apologies, not what I intend here. Those of you reading this know how much you care about my opinion, that's up to you.)