Just tried and didn't help :-(.
Regards,
--
Saso
Brent Jones wrote:
On Wed, Jan 6, 2010 at 2:40 PM, Saso Kiselkov skisel...@gmail.com wrote:
Buffering the writes in the OS would work for me as
With reference to the earlier Kingston part numbers:
Desktop Bundle - SNV125-S2BD/40GB
Bare drive - SNV125-S2/40GB
It looks like Intel has begun to ship a similar (same ??) product as
part number:
SSDSA2MP040G2R5
model name: Intel X25-V (V = value)
see:
Thanks Will,
I thought it might be an i2c interface port to the psu, but obviously much
simpler.
I'll probably use a small picaxe micro, since I have a few here have used
them before.
I used them to 'translate' the replacement fans clock pulse to what the
monitoring circuit needed in a few
The OpenSolaris Just enough OS (JeOS) project has been working on making
stripped down images available for virtual machines as well as automated
installer profiles.
See: http://hub.opensolaris.org/bin/view/Project+jeos/WebHome
for the project home page.
Also, a frequently updated blog on the
I'm in the process of standing up a couple of t5440's, of which the
config will eventually end up in another data center 6k miles from
the original config, and I'm supposed to send disks to the data
center and we'll start from there (yes, I know how to flar and jumpstart.
When the boss says do
Make sure you have the latest LU patches installed. There were a lot of fixes
put back in that area within the last six months or so.
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
On Jan 6, 2010, at 11:09 PM, Wilkinson, Alex wrote:
On Wed, Jan 06, 2010 at 11:00:49PM -0800, Richard Elling wrote:
On Jan 6, 2010, at 10:39 PM, Wilkinson, Alex wrote:
On Wed, Jan 06, 2010 at 02:22:19PM -0800, Richard Elling wrote:
Rather, ZFS works very nicely with hardware RAID
I have posted my ZFS Tutorial slides from USENIX LISA09 on
slideshare.net.
You will notice that there is no real material on dedup. The reason
is that
dedup was not yet released when the materials were created. Everything
in the slides is publicly known information and, perhaps by chance,
After spending some time reading up on this whole deal with SSDs with caches
and how they are prone to data loss during power failures, I need some
clarifications...
When you guys say write cache, do you just really mean the on board cache
(for both read AND writes)? Or is there a separate
Also...
There is talk about using those cheap disks for rpool. Isn't rpool also prone
to a lot of writes, specifically when /tmp is on an SSD?
What's the real reason for making those cheap SSDs a rpool rather than an L2ARC?
Basically is everyone saying that SSD without
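For comparison, attaching such an SSD as L2ARC is a one-line, reversible operation, whereas rpool sits on it permanently. A minimal sketch, with hypothetical pool and device names:

```shell
# Hypothetical names: pool "tank", SSD at c2t0d0. Adjust for your system.
# Add the cheap SSD as an L2ARC read cache:
zpool add tank cache c2t0d0

# Cache devices can be removed later if the SSD fails or wears out,
# which is not an option once the SSD holds rpool:
zpool remove tank c2t0d0
```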
Hi,
I got this:
gue...@opensolaris:~$ pfexec zpool status -v
pool: rpool
state: ONLINE
status: One or more devices has experienced an error resulting in data
corruption. Applications may be affected.
action: Restore the file in question if possible. Otherwise restore the
I was wondering what the future holds for web-based administration of ZFS? In
SXCE and prior versions of Solaris, the Sun Management Console or WebConsole
is/was used to administer ZFS. However, the Webconsole packages are not part
of Opensolaris, and SXCE is supposed to be discontinued at
On Thu, 7 Jan 2010, Anil wrote:
After spending some time reading up on this whole deal with SSDs with caches
and how they are prone to data loss during power failures, I need some clarifications...
When you guys say write cache, do you just really mean the on
board cache (for both read AND
On Thu, 2010-01-07 at 11:07 -0800, Anil wrote:
There is talk about using those cheap disks for rpool. Isn't rpool
also prone to a lot of writes, specifically when /tmp is on an SSD?
Huh? By default, solaris uses tmpfs for /tmp, /var/run,
and /etc/svc/volatile; writes to those filesystems
I *am* talking about situations where physical RAM is used up. So definitely
the SSD could be touched quite a bit when used as a rpool - for pages in/out.
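A quick way to see whether that paging scenario actually applies is to check tmpfs usage and swap pressure directly (standard Solaris commands; output fields vary by release):

```shell
# How much of tmpfs-backed /tmp is currently in use:
df -h /tmp

# Overall anonymous-memory pressure: allocated, reserved, and available swap.
# If "available" stays large, /tmp writes never reach the SSD-backed rpool.
swap -s

# Swap devices/files that would absorb pages pushed out of RAM:
swap -l
```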
--
This message posted from opensolaris.org
On Jan 7, 2010, at 12:02 PM, Anil wrote:
I *am* talking about situations where physical RAM is used up. So
definitely the SSD could be touched quite a bit when used as a rpool
- for pages in/out.
In the cases where rpool does not serve user data (eg. home directories
and databases are not
Hi Gunther,
Are these external USB disks?
You could determine what disk problems caused the errors by using the
fmdump -eV output.
From your output, the scrub is still in progress so maybe these errors
will clear up. Or, the objects no longer exist and a subsequent scrub
will remove these
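The workflow Cindy describes can be sketched as follows, assuming the pool name rpool from the earlier output:

```shell
# Inspect the fault manager's error telemetry to see which disk I/O
# failures produced the checksum errors:
fmdump -eV | less

# Once the underlying problem is addressed, re-run a scrub and wait
# for it to complete, then reset the error counters:
zpool scrub rpool
zpool status -v rpool   # repeat until "scrub completed" appears
zpool clear rpool
```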
On Jan 7, 2010, at 23:47, Cindy Swearingen cindy.swearin...@sun.com
wrote:
Hi Gunther,
Are these external USB disks?
You could determine what disk problems caused the errors by using the
fmdump -eV output.
From your output, the scrub is still in progress so maybe these errors
will clear
Marty Scholes wrote:
I did something similar, but with a SCSI drive. I keep a large external USB drive as a
last ditch recovery pool which is synchronized hourly from the main pool.
Kind of like a poor man's tape backup.
When I enabled dedup=verify on the USB pool, the sync performance
You could use NexentaStor (www.nexenta.com), a commercial storage appliance;
it is based on OpenSolaris, but it is not just a package to install.
Also there is EON (www.genunix.org).
[removed zones-discuss after sending heads-up that the conversation
will continue at zfs-discuss]
On Mon, Jan 4, 2010 at 5:16 PM, Cindy Swearingen
cindy.swearin...@sun.com wrote:
Hi Mike,
It is difficult to comment on the root cause of this failure since
the several interactions of these
Hi Mike,
I can't really speak for how virtualization products are using
files for pools, but we don't recommend creating pools on files,
much less NFS-mounted files and then building zones on top.
File-based pool configurations might be used for limited internal
testing of some features, but
hello
you can try webadmin application napp-it.
it's a free and end-user configurable perl-cgi script to manage your nexenta
(core), eon or opensolaris server via browser. (not only zfs, also user,
network, iscsi..).
it will support newest features of snv 129 like dedup, zfs3..
see
Hello,
Is there a way to upgrade my current ZFS version? I show the version could
be as high as 22.
I tried the command below. It seems that you can only upgrade by upgrading
the OS release.
[ilmcoso0vs056:root] / # zpool upgrade -V 16 tank
invalid version '16'
usage:
upgrade
Hi John,
On 08/01/2010, at 7:19 AM, john_dil...@blm.gov wrote:
Is there a way to upgrade my current ZFS version? I show the version could
be as high as 22.
The version of Solaris you are running only supports ZFS versions up to version
15, as demonstrated by your zfs upgrade -v output. You
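Once on a release that supports the newer version, the check-and-upgrade sequence is short; a sketch using the pool name tank from the earlier failed command:

```shell
# List the pool versions this OS release supports (explains why
# "zpool upgrade -V 16 tank" failed with "invalid version '16'"):
zpool upgrade -v

# Upgrade the pool to the highest version the running OS supports.
# -V can only request a version the OS actually provides.
zpool upgrade tank
```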
http://www.opensolaris.org/os/community/zfs/version/
No longer exists. Is there a bug for this yet?
--
Ian.
Ian wrote:
Why did you set dedup=verify on the USB pool?
Because that is my last-ditch copy of the data and MUST be correct. At the
same time, I want to cram as much data as possible into the pool.
If I ever go to the USB pool, something has already gone horribly wrong and I
am desperate. I
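For reference, the setting under discussion is an ordinary per-dataset property; a minimal sketch with a hypothetical pool name:

```shell
# Hypothetical pool name "usbpool". Require byte-for-byte comparison
# before treating a block as a duplicate -- slower writes, but immune
# to hash collisions, which matters for a last-ditch copy:
zfs set dedup=verify usbpool

# Confirm the property took effect:
zfs get dedup usbpool
```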
James Lever wrote:
Is there a way to upgrade my current ZFS version? I show the version could
be as high as 22.
The version of Solaris you are running only supports ZFS versions up to version
15, as demonstrated by your zfs upgrade -v output. You probably need a newer
version of Solaris,
hey mike/cindy,
i've gone ahead and filed a zfs rfe on this functionality:
6915127 need full support for zfs pools on files
implementing this rfe is a requirement for supporting encapsulated
zones on shared storage.
ed
On Thu, Jan 07, 2010 at 03:26:17PM -0700, Cindy Swearingen wrote:
Hi Ian,
The link from the old version page to the new version page
should be working. I'll check.
The CR is 6898657.
In the meantime, the version information can be reached from
the right column from this page:
http://hub.opensolaris.org/bin/view/Community+Group+zfs/
Thanks,
Cindy
Sorry to hijack the thread, but can you explain your setup? Sounds
interesting, but need more info...
Thanks!
--Tiernan
On Jan 7, 2010 11:56 PM, Marty Scholes martyscho...@yahoo.com wrote:
Ian wrote: Why did you set dedup=verify on the USB pool?
Because that is my last-ditch copy of the data
Lutz Schumann presa...@storageconcepts.de writes:
When importing a pool with many snapshots (which happens during
reboot also) the import may take a long time (example: 1
snapshots ~ 1-2 days).
I've not tested the new release of Solaris (snv_125++) which fixes
this issue.
On Thu, Jan 07, 2010 at 10:49:50AM -0800, Richard Elling wrote:
I have posted my ZFS Tutorial slides from USENIX LISA09 on
slideshare.net.
http://richardelling.blogspot.com/2010/01/zfs-tutorial-at-usenix-lisa09-slides.html
Is there a PDF available of this ?
-Alex
On 7 Jan 2010, at 23:52, Ian Collins wrote:
http://www.opensolaris.org/os/community/zfs/version/
No longer exists. Is there a bug for this yet?
I don't think so. But
http://hub.opensolaris.org/bin/view/Community+Group+zfs/VERSION/ is where
they've moved to.
Cheers,
Chris