On 05/14/2013 02:51 PM, Ram Chander wrote:
Hi,
I have an MD1200 of three trays daisy-chained and connected to a Dell box. The
Dell box has an LSI 9207-8e card which sees the drives as JBOD.
But it's not able to detect all the drives. A few drives give the error below:
failed to power up.
The disks
On 04/17/2013 02:08 AM, Edward Ned Harvey (openindiana) wrote:
From: Sašo Kiselkov [mailto:skiselkov...@gmail.com]
If you are IOPS constrained, then yes, raid-zn will be slower, simply
because any read needs to hit all data drives in the stripe.
Saso, I would expect you to know the answer
On 04/16/2013 10:57 PM, Bob Friesenhahn wrote:
On Tue, 16 Apr 2013, Jay Heyl wrote:
It's actually not all that difficult to saturate a 6Gb/s pathway with ZFS
when there are multiple storage devices on the other end of that path. No
single HDD today is going to come close to needing that full
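The saturation claim is easy to sanity-check with arithmetic. A quick sketch (the line-coding overhead is real for 6 Gb/s SATA/SAS, but the per-drive throughput figure is an assumption, not from the thread):

```python
# Back-of-envelope: how many HDDs saturate one 6 Gb/s SATA/SAS lane?
line_rate_mbit = 6000
usable_mb_s = line_rate_mbit / 10   # 8b/10b coding: 10 line bits per data byte
hdd_mb_s = 150                      # assumed sequential rate of a 2013-era HDD

drives_to_saturate = usable_mb_s / hdd_mb_s
print(drives_to_saturate)  # → 4.0
```

So on these assumptions roughly four streaming drives behind one lane are enough, which is why no single HDD matters but a tray of them does.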
On 04/16/2013 11:25 PM, Timothy Coalson wrote:
On Tue, Apr 16, 2013 at 3:48 PM, Jay Heyl j...@frelled.us wrote:
My question about the rationale behind the suggestion of mirrored SSD
arrays was really meant to be more in relation to the question from the OP.
I don't see how mirrored arrays of
On 04/16/2013 11:37 PM, Timothy Coalson wrote:
On Tue, Apr 16, 2013 at 4:29 PM, Sašo Kiselkov skiselkov...@gmail.comwrote:
If you are IOPS constrained, then yes, raid-zn will be slower, simply
because any read needs to hit all data drives in the stripe. This is
even worse on writes
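The IOPS argument above can be put in numbers. A sketch (the per-disk IOPS figure and the two layouts are illustrative assumptions): a raid-z vdev delivers roughly one disk's worth of random-read IOPS because every data disk in the stripe services each read, while each side of a mirror can serve reads independently.

```python
# Random-read IOPS: raid-z vdevs vs mirrors, same 12 disks (assumed figures).
disk_iops = 100              # assumption: typical 7200 rpm HDD
n_disks = 12

raidz2_vdevs = n_disks // 6  # two 6-disk raid-z2 vdevs -> one stream each
mirrors = n_disks // 2       # six 2-way mirrors -> both sides can read

print(raidz2_vdevs * disk_iops)  # → 200
print(mirrors * 2 * disk_iops)   # → 1200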
On 04/17/2013 12:08 AM, Richard Elling wrote:
clarification below...
On Apr 16, 2013, at 2:44 PM, Sašo Kiselkov skiselkov...@gmail.com wrote:
On 04/16/2013 11:37 PM, Timothy Coalson wrote:
On Tue, Apr 16, 2013 at 4:29 PM, Sašo Kiselkov
skiselkov...@gmail.comwrote:
If you are IOPS
On 04/15/2013 03:30 PM, John Doe wrote:
From: Günther Alka a...@hfg-gmuend.de
I would think about the following:
- yes, I would build that from SSDs
- build the pool from multiple 10-disk raid-z2 vdevs,
Slightly out of topic but, what is the status of the TRIM command and zfs...?
ATM:
On 04/14/2013 05:15 PM, Wim van den Berge wrote:
Hello,
We have been running OpenIndiana (and its various predecessors) as storage
servers in production for the last couple of years. Over that time the
majority of our storage infrastructure has been moved to OpenIndiana to the
point where
On 03/27/2013 10:51 AM, Eric 姚宗澧 wrote:
Hi all,
Does anyone have experience accessing PCI configuration space in
OpenIndiana?
I need to modify a register on a specific NIC to make it stop in physical.
Any ideas will be appreciated.
Hi Eric,
You should try and ask around on
On 03/21/2013 02:13 PM, Schweiss, Chip wrote:
I have four 480GB SSDs I'm trying to re-add to my pool as L2ARC. They
were all connected internally on the server, but now I have a dedicated SAS
JBOD for the 2.5" SSDs in the system. The JBOD is a SuperMicro 2U with an
LSI SAS2X36 expander.
On 03/21/2013 10:50 PM, Ian Collins wrote:
{sorry about top-posting, crap webmail client!}
I'm running Solaris on a X9DRH-7TF, which looks to be basically the
same electronics except for the 10G NICs.
Everything works well.
Thanks for the info, I guess I shouldn't have any issues then
On 03/21/2013 11:18 PM, Udo Grabowski (IMK) wrote:
On 03/21/13 11:00 PM, Sašo Kiselkov wrote:
On 03/21/2013 10:50 PM, Ian Collins wrote:
Thanks for the info, I guess I shouldn't have any issues then either
(even though this is a revision C602J chipset, not C602 as in your
case). I've also
On 03/20/2013 12:32 PM, Edward Ned Harvey (openindiana) wrote:
It would only bring a tear to my eye, because of how foolishly
irresponsible that is. 3737 days of uptime means 10 years of
never applying security patches and bugfixes. Whenever people
are proud of a really long uptime, it's a
On 03/17/2013 04:06 PM, Hans J. Albertsson wrote:
Your counter-question baffles me a bit... but:
Can you point at any part of that wiki page that actually deals with how
to produce a 4kblocksize pool on a SATA, not a SCSI, drive that is
actually 4k physical blocksize but reports having 512
On 03/12/2013 09:53 PM, Hans J. Albertsson wrote:
I sort of expected
echo {Z..Ö}
to generate
Z Å Ä Ö
when LC_ALL was set to sv_SE.UTF-8
But it doesn't. Seems like a bug, or what??
Found while hacking some scripts for backup and indexing stuff. Major
showstopper, actually.
I'm
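For what it's worth, the reason `{Z..Ö}` can't honor sv_SE collation is visible from the code points alone: bash brace ranges step through code points ordinally, not through the locale's alphabet. A Python illustration:

```python
# Swedish alphabetical order ends ... Z Å Ä Ö, but the code points
# do not run in that order, so an ordinal range can't reproduce it.
print(ord('Z'), ord('Å'), ord('Ä'), ord('Ö'))  # → 90 197 196 214
# Code-point order is Ä (196) < Å (197) < Ö (214); an ordinal range from
# Z (90) to Ö (214) would emit every intervening character instead of
# the four Swedish letters.
```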
On 03/12/2013 10:10 PM, Marcel Telka wrote:
On Tue, Mar 12, 2013 at 10:02:27PM +0100, Sašo Kiselkov wrote:
I'm pretty sure nobody in bash development actually considers
locale-specific letter ordering rules. Language-specific idiosyncrasies
are a never ending stream of hurt and implementation
The root file system of a zone is not kept in the zonepath but in
zonepath/ROOT/zbe - look for that filesystem to snapshot/rollback, not
the zonepath filesystem itself (which is just an empty container). Your
snapshot listing below shows that.
--
Saso
On 03/08/2013 11:58 PM,
On 02/22/2013 02:16 AM, Timothy Coalson wrote:
The first parity uses straight XOR on uint64_t, while the second parity
performs the LFSR on all bytes in a uint64_t with some bitwise math (search
for VDEV_RAIDZ_64MUL_2) that adds up to 8 operators by my count, followed
by xor - using the LFSR
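The trick behind VDEV_RAIDZ_64MUL_2 can be modeled compactly: multiply eight GF(2^8) bytes by 2 at once, packed into a single 64-bit word, using the 0x11d reduction polynomial. This is a Python sketch of the idea, not the illumos C code itself:

```python
# Packed GF(2^8) multiply-by-2 over eight bytes in one 64-bit word.
M64 = 0xFFFFFFFFFFFFFFFF

def gf_mul2_packed(x):
    # Bytes whose high bit is set need the 0x1d reduction after shifting.
    mask = x & 0x8080808080808080
    mask = ((mask << 1) - (mask >> 7)) & M64   # turns each 0x80 into 0xff
    return ((x << 1) & 0xFEFEFEFEFEFEFEFE) ^ (mask & 0x1D1D1D1D1D1D1D1D)

def gf_mul2_byte(b):
    # Reference: one LFSR step on a single byte in GF(2^8).
    return ((b << 1) & 0xFF) ^ (0x1D if b & 0x80 else 0)

# Cross-check the packed version against the per-byte reference.
data = bytes([0x00, 0x01, 0x53, 0x80, 0xCA, 0xFE, 0x0F, 0xF0])
packed = gf_mul2_packed(int.from_bytes(data, 'little'))
assert packed.to_bytes(8, 'little') == bytes(gf_mul2_byte(b) for b in data)
```

Counting the mask build, shifts, and xors in `gf_mul2_packed` gives the handful of operators per 64-bit word that the message tallies.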
On 02/20/2013 08:05 PM, Reginald Beardsley wrote:
On an N40L running oi_151a7 w/ four ST2000DM001 drives I'm seeing
a large drop in performance for RAIDZ2 vs RAIDZ1 which surprises me.
The discussions google found were not entirely enlightening and not
OI based. How much CPU does a small OI
On 02/21/2013 07:27 PM, Timothy Coalson wrote:
I think last time this was asked, the consensus was that the implementation
was based on linear feedback shift registers and xor, which happens to be a
reed-solomon code (not as clear on this part, but what matters is what it
is, not what it
On 02/21/2013 08:06 PM, Sašo Kiselkov wrote:
On 02/21/2013 07:27 PM, Timothy Coalson wrote:
I think last time this was asked, the consensus was that the implementation
was based on linear feedback shift registers and xor, which happens to be a
reed-solomon code (not as clear on this part
On 02/21/2013 07:27 PM, Timothy Coalson wrote:
I think last time this was asked, the consensus was that the implementation
was based on linear feedback shift registers and xor, which happens to be a
reed-solomon code (not as clear on this part, but what matters is what it
is, not what it
On 02/19/2013 12:41 PM, Weiergräber, Oliver H. wrote:
Moreover, providing security fixes has been a defined goal of OpenIndiana
right from the beginning.
See the FAQ:
Q: Will OpenIndiana provide security and bug fixes to their stable releases?
A: Yes, absolutely. We view this as one of
On 02/19/2013 02:23 PM, Jim Klimov wrote:
I believe RedHat and its spin-offs (Fedora as a bleeding edge
experiment, and CentOS as a rebadged clone) have set a nice
example here, especially the latter. All the source is open as
GPL requires, and AFAIK CentOS is a rebuild of the same code in
On 02/19/2013 03:12 PM, Jim Klimov wrote:
On 2013-02-19 14:38, Sašo Kiselkov wrote:
You don't get access to RedHat's repos without paying. There are some
portions of the code that CentOS doesn't ship (such as the policy
enforcement libraries). In this respect, RHEL is closer to what Solaris
On 02/18/2013 05:18 AM, Grant Albitz wrote:
I would like to discuss one more item:
Based on the writes below vs. the reads, it seems like I am able to get more
data out of a write per second as opposed to a read per second. I may just be
misunderstanding the results, but the disks themselves are rated for
On 02/18/2013 01:42 PM, Grant Albitz wrote:
The results below were done locally and are in gigabytes per second. I will
try going back to the H310.
Then you should definitely expect higher numbers. You can test to see if
it is ZFS that's slowing you down by doing raw-device writes/reads, like
so (don't
On 02/18/2013 02:47 PM, Martin Edgar wrote:
Where does the graphical installer write the GRUB bootloader?
If I choose to install OpenIndiana to an entire external (USB) HDD, will
the bootloader be written to this same HDD
Yes.
Cheers,
--
Saso
On 02/18/2013 10:08 PM, Ian Collins wrote:
I agree with Saso's comments on the 710.
If you have driver problems with a 310, try Solaris 11 as an experiment
to verify performance, I have a number of Dells with 310 in JBOD mode
running Solaris and they work well.
That might actually not be
On 02/17/2013 04:38 AM, Grant Albitz wrote:
I just wanted to add that when I create a pool it is using ashift 9. Aren't
all SSDs 4k drives at this point?
In file-based datasets, ZFS uses a variable block size which
automatically adjusts to the size of the write operation (from 512-bytes
to
Hi Grant,
On 02/16/2013 05:14 PM, Grant Albitz wrote:
Hi I am trying to track down a performance issue with my setup.
Always be sure to do your performance testing on the machine itself
first, before going on to test through more layers of the stack (i.e.
iSCSI). What does iostat -xn 1 report
On 02/16/2013 11:37 PM, Ian Collins wrote:
Have you tried connecting to the volume from another I/O host to
eliminate VMware as the cause?
Second that.
The 9k MTU might also be hitting some NIC driver bugs - non-standard
settings can, at times. Since the difference is only in the storage -
On 02/16/2013 11:58 PM, Sašo Kiselkov wrote:
On 02/16/2013 11:37 PM, Ian Collins wrote:
Have you tried connecting to the volume from another I/O host to
eliminate VMware as the cause?
Second that.
The 9k MTU might also be hitting some NIC driver bugs - non-standard
settings can
On 02/17/2013 12:39 AM, Grant Albitz wrote:
I am not sure that I can disable flow control since no switch is present. Is
it turned on in the host by default?
Flow control is a feature of the NIC and any two NICs can negotiate to
have it turned on, you don't need a switch in between. See
On 02/17/2013 12:52 AM, Grant Albitz wrote:
Yes Jim, I actually used something similar to enable the 9000 MTU; that's why
I wasn't familiar with the config file method.
dladm set-linkprop -p mtu=9000 InterfaceName
Flowcontrol is currently off on the zfs host but enabled by default on esxi,
On 02/12/2013 01:31 AM, Rainer Heilke wrote:
This is essentially what I argued for years ago, and I was pretty much
shot down. The response was basically a suggestion that I was being lazy
by only wanting to type one SMF command instead of two or three.
Sorry to hear that. If many people use a
On 02/11/2013 09:54 PM, Robbie Crash wrote:
If you weren't having any issues with speed and they've progressively
gotten worse, I'd look at dedup. If you're using dedup, you better make
sure you've got 2.5GB RAM for every TB of unique data you have, otherwise
you'll be swapping your dedup
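That rule of thumb can be reconstructed from the usual dedup-table estimate. A sketch (the ~320-byte in-core DDT entry and 128K average block size are assumptions commonly cited for ZFS dedup, not figures from this message):

```python
# Rough version of the "2.5 GB RAM per TB of unique data" rule of thumb.
def ddt_ram_bytes(unique_bytes, avg_block=128 * 1024, per_entry=320):
    # One DDT entry per unique block; all of them should fit in ARC.
    return (unique_bytes // avg_block) * per_entry

one_tb = 1 << 40
print(ddt_ram_bytes(one_tb) / (1 << 30))  # → 2.5 (GiB per TB of unique data)
```

Smaller average block sizes push the requirement up fast, which is why dedup on a busy pool can quietly outgrow RAM.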
On 02/11/2013 11:51 PM, Marion Hakanson wrote:
richard.ell...@richardelling.com said:
Soon, many, if not all, HDDs will be shipped as self-encrypting. AFAIK, there
is no OI project for managing the keys, however. I'm interested to know what
the demand for these tools might be.
-- richard
On 02/12/2013 12:12 AM, Jim Klimov wrote:
Hello all,
While setting up systems there are occasions when an SMF service
needs to be restarted - i.e. due to reconfiguration or its failure.
The svcadm restart action only takes place for online services.
I wonder if it is a reasonable RFE
On 02/10/2013 10:33 PM, Ray Arachelian wrote:
So, how would I go about installing OpenSXCE on a T1000? Is there some
sort of jumpstart available for it?
In the past, I've attempted to hook up a SATA DVD drive to my T1000, but
I couldn't get it booting, even after messing with the
On 02/09/2013 02:52 AM, Jim Klimov wrote:
Hello all,
I found it not very convenient to have root's home directory
as a part of rootfs - if I switch between different BEs, the
homedir changes back and forth. It also consumes space in the
snapshots, if a BE is cloned from a variant which
On 02/09/2013 08:55 PM, Roel_D wrote:
Just a question out of interest:
Let's say you put root's directory on another ZFS dataset.
This dataset has been backed up to a USB stick.
Hang on, you don't encrypt your back ups? Seriously? No offense dude,
but if you did that at my place, you'd find
On 02/09/2013 10:59 PM, Roel_D wrote:
It was hypothetical.
I never backup ;-)
You always end up with copies of old software ;-)
I know this one, it's called the Torvalds method :-P
Only wimps use tape backup: real men just upload their important stuff
on ftp, and let the rest of the world
You have an issue with connectivity to your drives on the Coraid HBA
card. I suggest querying your HBA via its management tools to make sure
you can discover all the drives on your network. Chances are, they're
not all visible, which is why your pool is having trouble.
--
Saso
On 02/07/2013 01:49
On 02/07/2013 02:17 PM, Martin Bochnig wrote:
Dear community,
I'm posting this from an Internet Cafe.
Hi Martin, great to hear from you again!
Did everything work for you?
I'm currently getting the SPARC build farm for Illumos up and running.
So far, your distro is running like a champ, no
On 02/07/2013 11:28 PM, CJ Keist wrote:
Here are actual numbers:
# du -sh *
11G home
# zfs list data/students/GRAD/ECE/vwb
NAME                         USED  AVAIL  REFER  MOUNTPOINT
data/students/GRAD/ECE/vwb  13.6G  6.37G  13.6G  /data/students/GRAD/ECE/vwb
As you can see there
On 02/06/2013 07:55 AM, Ram Chander wrote:
Hi,
I had a zpool that was exported on another system, and when I try to import
it, it fails. Any idea how to recover?
format shows all the disks.
root@host:~# zpool import -FfX pool1
cannot import 'pool1': one or more devices is currently
How did you determine that you are running in 32-bit? What is the output
of isainfo -kv? If it prints something like:
# isainfo -kv
64-bit amd64 kernel modules
Then you *are* running 64-bit.
Anyways, should you need to enforce 64-bit for whatever reason, it can
be easily done by instructing the
Sorry, I noticed that my post is irrelevant, I only skimmed your e-mail.
If the ISO or USB installer only include the 32-bit kernel, then you
cannot by definition boot into 64-bit mode (since there's no 64-bit
kernel to load).
Also, why would you want to boot the installer in 64-bit mode? The
On 02/06/2013 02:30 PM, Jim Klimov wrote:
Hello all,
I am currently helping evacuate data/OS from a legacy system
that ran Solaris 10u3 (11/06) in a VM until recently - hypervisor
host died - with tasks stuffed into a number of local zones, in
whole roots over dedicated UFS SVM
On 02/06/2013 09:48 PM, Ian Collins wrote:
Jim Klimov wrote:
On 2013-02-06 21:20, Roel_D wrote:
If the old software/services running on the old solaris didn't rely
on /usr or /etc installed software (like apache/mysql/java-based
software) i would suggest to only copy the software directories
On 02/04/2013 07:31 PM, dormitionsk...@hotmail.com wrote:
Oh, and a little off topic here, but some of you all didn't appreciate my old
habit of weekly reboots just to clear out the RAM, etc. a while back.
Please don't think that anybody here was dissing you or anything. It's
just that you had
On 02/04/2013 08:02 PM, dormitionsk...@hotmail.com wrote:
On Feb 4, 2013, at 11:45 AM, Sašo Kiselkov wrote:
On 02/04/2013 07:31 PM, dormitionsk...@hotmail.com wrote:
Oh, and a little off topic here, but some of you all didn't appreciate my
old habit of weekly reboots just to clear out
On 02/01/2013 08:01 AM, Andrej Javoršek wrote:
Thank you for your answer!
Would driver from Solaris 10 work and if yes how to get it from installed
system (or installation DVD)?
It should, Solaris' DDI is pretty stable. The other most likely
possibility is that it either won't load or will
On 02/01/2013 05:23 PM, Harry Putnam wrote:
Do server platforms generally not have graphics or is it just inserted
into a slot somewhere? or what?
"Sadly" nowadays they do. I put "sadly" in quotes, because for some OSes
having a graphics card in the machine is actually required to even be
On 01/31/2013 07:59 PM, Stefan Müller-Wilken wrote:
Hi there,
They are two different animals. Below is what he posted to this
list on December 17. I'm trying to save him some trouble here,
since he's not living in very good conditions.
I hadn't originally noticed this extensive list,
On 01/30/2013 11:48 PM, dormitionsk...@hotmail.com wrote:
They are two different animals. Below is what he posted to this list on
December 17. I'm trying to save him some trouble here, since he's not living
in very good conditions.
I hadn't originally noticed this extensive list, though I
On 01/28/2013 02:31 PM, Shvayakov A. wrote:
Hi,
I created a ZFS Storage Pool with Cache and ZIL Devices on SSD.
During the performance test I removed the Cache Device to simulate failure.
Now I can't get any output from the commands zpool status, format, etc.
These commands hang.
Is this
On 01/28/2013 02:36 PM, Sašo Kiselkov wrote:
On 01/28/2013 02:31 PM, Shvayakov A. wrote:
Hi,
I created a ZFS Storage Pool with Cache and ZIL Devices on SSD.
During the performance test I removed the Cache Device to simulate failure.
Now I can't get an output from the commands zpool status
On 01/27/2013 01:04 AM, Reginald Beardsley wrote:
Because I can't boot from 3 TB drives, I'm trying to sort out plan B for
configuring my N40L.
If I stick the 250 GB disk that came w/ the system in the ODD slot and use
that for the root pool w/ a 4x3 TB RAIDZ, what happens if my root pool
On 01/26/2013 08:15 PM, Roel_D wrote:
I installed OI yesterday on VMware Server. After that I did some alterations
and installed Postgres and so on. It all worked fine until I ran out of
memory, so I did shutdown -y. But this led to single-user mode, so I did an
init 5 from the console. This
On 01/26/2013 09:21 PM, Roel_D wrote:
Well, no one told me that it changed (the init part), and I still have some
Solaris 10 and 11 servers, so it's a habit to use the init commands.
I will check these new commands asap.
Check your Solaris 10 manpages for init - that one already includes
On 01/26/2013 09:54 PM, Reginald Beardsley wrote:
Having got the 3 TB disk working in the Solaris 10 system where it belongs, I
think I'll pass on doing battle with booting from disks over 2 TB.
I've got a zpool on a 3 TB USB disk actually functioning w/ 151_a7 which is a
big improvement, but it
On 01/25/2013 01:24 AM, Bentley, Dain wrote:
Sure, where do I start?
Not sure, I'm not familiar with how OI is built and how to contribute
package manifests to the project for building. That's why I said go talk
to the guys at oi-dev.
--
Saso
On 01/25/2013 03:11 PM, Edward Ned Harvey (openindiana) wrote:
From: Christopher Chan [mailto:christopher.c...@bradbury.edu.hk]
:-D I'm here to entertain since I have not been able to spring for a ssd
for use as a slog. :-D
LOL, you mean you have a HDD slog device? :-D
It's actually very
On 01/25/2013 03:56 PM, Bentley, Dain wrote:
Well, to take advantage of php-fpm it appears I need PHP 5.3, and PHP 5.2 is
what's in the repos listed.
I tried to build it from source but the autoconf it needs is too new.
Sorry to hear that. Hope you can build PHP 5.3 from source, and perhaps
the OI maintainers
On 01/24/2013 02:39 PM, Edward Ned Harvey (openindiana) wrote:
Based on my extensive work benchmarking zfs systems, I can say this: The
default write performance on a plain old sas/sata card (without SSD) is
horrible by comparison to the following alternatives:
You get a huge increase
On 01/24/2013 03:57 PM, Sebastian Gabler wrote:
Hello,
I am using a share via nfs as esxi datastore for more than a year. 3
hosts have root access. I have added a vcenter server appliance to
manage the esxi hosts, and added the vcenter server's IP address to the
allowed hosts using zfs set
On 01/24/2013 06:38 PM, Dimitri Alexandris wrote:
I will agree with the driver problem.
My OI has 2 1G Intel ethernet ports bonded, and it crashes at random times.
There are also 2 10G ports connected and working fine.
Symptom: OI crashes when there is a lot of traffic on the bond (5 - 40 minutes
after
On 01/24/2013 10:13 PM, Reginald Beardsley wrote:
I just unpacked an N40L and had a look around. It came w/ a single 2 GB DIMM.
Would 4 GB be sufficient for a generally light load single user environment?
Absolutely.
What sort of disk throughput should I expect w/ that?
That largely
On 01/24/2013 11:30 PM, Brogyányi József wrote:
Hi
I'd like to boot my system from a USB port. I'm thinking about a USB stick or
a USB HDD. What do you think of my idea?
Can I install OI on a USB HDD or USB stick?
Yep, you can. In fact, for Joyent's SmartOS (another Illumos distro),
this is the only way
On 01/24/2013 11:36 PM, Bentley, Dain wrote:
Thanks for the help.
Is there a way to request php-fpm be added to the pkg list? The PHP-FPM with
nginx beats apache mod-php hands down.
I suggest you drop a line to the OI devs over at oi-...@openindiana.org
(though some may read this list as
On 01/25/2013 12:01 AM, Sebastian Gabler wrote:
Dear all,
I found the problem after some desperate changes by accident, using Saso's
recommendation #2. Vcsa apparently uses another interface to access data
stores than for management console. After giving the management interface rw
access
On 01/22/2013 06:26 PM, Len Zaifman wrote:
We have just had a major system meltdown and it took several days to fix.
What we would have liked is 2 things we had on thumpers (Old SUN ZFS systems)
1) A tool to show the mapping of a solaris device name to a physical location
2) A tool to turn
On 01/20/2013 02:10 PM, Sašo Kiselkov wrote:
On 01/20/2013 01:57 PM, Ulrich Hagen wrote:
Hello everyone,
I have recently added an Intel SASUC8I controller to my home file
server, and hooked up eight Western Digital Red 3000GB disks to it.
Only after I started filling up the new pool I
On 01/20/2013 03:47 PM, Jim Klimov wrote:
On 2013-01-20 13:57, Ulrich Hagen wrote:
Hello everyone,
I have recently added an Intel SASUC8I controller to my home file
server, and hooked up eight Western Digital Red 3000GB disks to it.
Only after I started filling up the new pool I noticed
, started OI and created the pool
using entire disks.
And, to reply to Sašo Kiselkov:
ashift is 9; these disks lie about their native sector size. So my pool
will never be as fast as it could be.
Nope, they don't. What you're hitting is a bug in ZFS which incorrectly
handles Advanced Format drives. I
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1
On 01/20/2013 05:52 PM, Ulrich Hagen wrote:
Sašo Kiselkov wrote:
It indeed appears to be a hard limit of the LSI SAS 1068e chip,
no newer firmware appears to fix this issue (which is bizarre,
but I suppose LSI also knows how to force customers
On 01/19/2013 05:37 PM, Jan Owoc wrote:
Hi,
I have a home NAS that I'm running on an Asus E35M1-I (AMD E-350)
motherboard. When I do a shutdown using init 5, and then physically
walk over to power it on (without ever unplugging the power),
sometimes the system powers up with the date set to
Your dump device contains a crash dump from a kernel panic that your
machine previously encountered. See
http://wiki.illumos.org/display/illumos/How+To+Report+Problems for a
guide on how to extract useful information from the crash dump and post
it here. In particular, you'll want to do savecore
On 01/19/2013 01:53 AM, dormitionsk...@hotmail.com wrote:
From 1992 to 1998, I worked at the Denver Museum of Natural
History -- now the Denver Museum of Nature and Science. We had two or three
DEC Vaxes and an AIX machine there. It was their policy that once a week we
had
On 01/18/2013 03:20 AM, David Scharbach wrote:
I ran memtest86 for 3 passes, everything was ok there.
Computer froze again today after only 1 day of uptime. I now have a dump
file but I am confused as to what to do with it. Sorry to be a n00b but
could you point me in the right
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1
On 01/16/2013 04:08 PM, Robert W Johnson Jr wrote:
Unfortunately I'm using a few extra H310's in non dell servers and
need to flash them otherwise I can't boot the server.
Could you please chime in with a guide to reflash these controllers?
My
On 01/15/2013 08:48 PM, Ian Collins wrote:
Robert W Johnson Jr wrote:
We have a few Dell PERC H310 adapters which seem to be rebranded LSI 9211
8i cards. Can anyone who has successfully flashed the firmware from
the LSI
9211 (SAS2008 chipset) IT/IR mode please post the steps taken to do so?
On 11/18/2012 08:32 PM, Richard Elling wrote:
more below...
On Nov 18, 2012, at 3:13 AM, Sašo Kiselkov skiselkov...@gmail.com wrote:
On 11/17/2012 03:03 AM, Richard Elling wrote:
On Nov 15, 2012, at 5:39 AM, Sašo Kiselkov skiselkov...@gmail.com wrote:
I've been lately looking around
On 11/17/2012 03:03 AM, Richard Elling wrote:
On Nov 15, 2012, at 5:39 AM, Sašo Kiselkov skiselkov...@gmail.com wrote:
I've been lately looking around the net for high-availability and sync
replication solutions for ZFS and came up pretty dry - seems like all
the jazz is going around
On 11/15/2012 12:38 PM, Florian wrote:
Hello,
has someone experience with Linux software-raid with two Comstar iSCSI
volumes from two OI servers?
I tested this on a virtual machine, but it would be great, if I can get
some experience with such a combination!
Will this work without
On 11/15/2012 12:48 PM, Dan Swartzendruber wrote:
How sophisticated does it need to be? I do 5-min dataset-based replication
to a remote pool using zrep, but that's all I use it for - a backup...
Well, it's more of a question of mapping out the landscape of available
tools. Async replication
On 11/13/2012 10:05 AM, Ilya Arhipkin wrote:
I've got a question: why is it not included with the compiler? Can it only be
because of violating the GNU GPL license?
The rest of GCC is also covered by the GPL and that is distributed just
fine, so it's not because of the license. As Peter Tribble noted, it's
On 11/12/2012 09:34 PM, Peter Tribble wrote:
Just as a question: does anyone actually use gcj?
I don't think I ever have, at least.
One reason for the questions is that a gcc4.7.2 build is ~1G in size,
half that is java support. So if I could just forget about that completely
that saves
On 11/08/2012 03:27 PM, Boris Epstein wrote:
Hello listmates,
I believe there is a repository containing Postgres 9 for OpenIndiana
somewhere - at least I remember hearing about it - but I can't find it for
some reason. If you can point me to it, that would be appreciated.
I'm using
On 11/08/2012 07:58 PM, Boris Epstein wrote:
Andrej,
Thanks!
I guess instructions ought never to be followed blindly, but rather
creatively interpreted :)
Sorry, I messed up, I wrote that down from memory and forgot about the
flag. In any case, you are right, always read up on what the
I'm trying to update a few zones on my oi_151a4 box and the update
requests a new boot environment to be created - for the zone.
Predictably, this fails, and specifying --deny-new-be in the zone makes
pkg refuse to do the update.
So here's the kicker: how does one update the image version inside
On 11/02/2012 05:52 PM, Jeppe Toustrup wrote:
On Fri, Nov 2, 2012 at 5:40 PM, Sašo Kiselkov skiselkov...@gmail.comwrote:
I'm trying to update a few zones on my oi_151a4 box and the update
requests a new boot environment to be created - for the zone.
Predictably, this fails, and specifying
Try disabling CPU C-states in the BIOS. It appears your machine is
having trouble throttling CPUs into power-saving modes.
--
Saso
On 10/24/2012 11:22 AM, Ram Chander wrote:
Below is the dmesg when it crashes .
Oct 24 14:47:42 myhost unix: [ID 950921 kern.info] cpu20: x86
(chipid 0x0
On 10/12/2012 04:18 PM, Rich wrote:
All 4 are listed in there as being SAS2008-based, which makes sense
[AFAIK, Dell has no cards based on the LSI 22xx/23xx lines of chips
yet].
Yeah, but it all often comes down to PCI IDs and firmwares. The LSI SAS
2008 can run various firmware versions,
On 10/12/2012 04:26 PM, Neddy, NH. Nam wrote:
Hi,
I'm sorry for the OT, but how can I tell which controller chip model is
on a controller card, e.g. on HP, IBM, etc.?
Look at the manufacturer's datasheets, the chip is typically listed.
Here's also a pre-compiled list of SAS2008-based cards:
On 10/11/2012 10:25 AM, Udo Grabowski (IMK) wrote:
Hello,
vendor wants us to buy a Dell Precision T3600 with a Xeon E5-1650 processor
and 16 GB ECC (C600 series chipset, H310 PCIe RAID card, Intel 82579 GbE
controller). Does anybody know if that works with OI151a7? Don't want
to return 3 large
On 10/11/2012 11:23 AM, Udo Grabowski (IMK) wrote:
On 11/10/2012 10:35, Sašo Kiselkov wrote:
On 10/11/2012 10:25 AM, Udo Grabowski (IMK) wrote:
Hello,
vendor wants us to buy a Dell Precision T3600 with a Xeon E5-1650 processor
and 16 GB ECC (C600 series chipset, H310 PCIe RAID card, Intel 82579 GbE
On 09/28/2012 01:06 PM, Rainer Heilke wrote:
Greetings.
I've connected a Seagate 3000GB HDD to my OpenIndiana oi_151.1.6 X86
server. But when I try to use it, only 746GB show. Printing the
partition table from format shows:
partition> pri
Current partition table (original):
Total disk