Re: [OpenIndiana-discuss] test images of actual Hipster for download

2021-04-06 Thread david allan finch

On 04/06/21 02:32 PM, Andreas Wacknitz wrote:

A new release is due in about a month.


Wicked. Thanks



___
openindiana-discuss mailing list
openindiana-discuss@openindiana.org
https://openindiana.org/mailman/listinfo/openindiana-discuss


Re: [OpenIndiana-discuss] test images of actual Hipster for download

2021-04-06 Thread Andreas Wacknitz
A new release is due in about a month.


> On 06.04.2021 at 10:08, david allan finch wrote:
> 
> On 04/05/21 10:36 PM, Andreas Wacknitz wrote:
>> Everybody should be aware that these are not official releases.
>> I will keep them there for some days. 
> 
> Will there be a new release soon? I am thinking of building a new server.
> 
> Regards
> 
> 
> 
> 


Re: [OpenIndiana-discuss] test images of actual Hipster for download

2021-04-06 Thread david allan finch

On 04/05/21 10:36 PM, Andreas Wacknitz wrote:

Everybody should be aware that these are not official releases.
I will keep them there for some days. 


Will there be a new release soon? I am thinking of building a new server.

Regards






Re: [OpenIndiana-discuss] test images of actual Hipster for download

2021-04-05 Thread Andreas Wacknitz

On 05.04.21 at 23:55, Reginald Beardsley via openindiana-discuss wrote:


Andreas,

Thanks! I've started the gui iso download. I'll follow with the other isos. I 
should have a report on results by Wednesday. I shall attempt installs to both 
MBR and GPT/EFI label disks for all 3 ISOs.

After that I'll try the USB images. I installed 2019.04 from USB on a friend's Dell, but 
was unable to get it to install on the Z400. IIRC I got a "no system image" 
error from the Z400 BIOS.

Have Fun!
Reg


Don't expect too much of it. It just represents the state of
illumos-gate and oi-userland as of this morning.
As long as nobody analyses and fixes the problems of your old machine,
you might get the same results as before.

Regards,
Andreas



Re: [OpenIndiana-discuss] test images of actual Hipster for download

2021-04-05 Thread Reginald Beardsley via openindiana-discuss
 
Andreas,

Thanks! I've started the gui iso download. I'll follow with the other isos. I 
should have a report on results by Wednesday. I shall attempt installs to both 
MBR and GPT/EFI label disks for all 3 ISOs.

After that I'll try the USB images. I installed 2019.04 from USB on a friend's 
Dell, but was unable to get it to install on the Z400. IIRC I got a "no system 
image" error from the Z400 BIOS.

Have Fun!
Reg


On Monday, April 5, 2021, 04:37:19 PM CDT, Andreas Wacknitz wrote:
 
 Hi,

I have created new OI Hipster test images and uploaded them to our
download server.
You can find them at http://dlc.openindiana.org/isos/hipster/test/
Everybody should be aware that these are not official releases.
I will keep them there for some days.

Best regards,
Andreas



[OpenIndiana-discuss] test images of actual Hipster for download

2021-04-05 Thread Andreas Wacknitz

Hi,

I have created new OI Hipster test images and uploaded them to our
download server.
You can find them at http://dlc.openindiana.org/isos/hipster/test/
Everybody should be aware that these are not official releases.
I will keep them there for some days.

Best regards,
Andreas



Re: [OpenIndiana-discuss] TEST-pkg VirtualBox 5.16

2016-09-20 Thread Мартин Бохниг via openindiana-discuss
Hi,

ok, thanks for the info, and sorry for my late-night first remark ;)
Now, what error message did VirtualBox's GUI itself give you every time you
attempted to fire up a VM from there?
This info will be the most revealing, more than anything else. From your memory:
what was it, and in which direction was it hinting?


Tnx, regards,
%martin


>I am running OpenIndiana Hipster and previously updated my boot
>environment to the latest packages on Saturday 17th September.
>Unfortunately, I reverted to my previous boot environment to get a
>working VirtualBox, activated it, and then deleted the later boot environment.
>
>I will try again this weekend and see if I can get more information.
>
>Regards
>
>Russell
>


Re: [OpenIndiana-discuss] TEST-pkg VirtualBox 5.16

2016-09-20 Thread russell

Hi

I am running OpenIndiana Hipster and previously updated my boot
environment to the latest packages on Saturday 17th September.
Unfortunately, I reverted to my previous boot environment to get a
working VirtualBox, activated it, and then deleted the later boot environment.


I will try again this weekend and see if I can get more information.

Regards

Russell




Re: [OpenIndiana-discuss] TEST-pkg VirtualBox 5.16

2016-09-19 Thread Мартин Бохниг via openindiana-discuss
First let me thank Predrag, Aurelien, Apostolos and you for having taken the 
time to test and report.

As for now it is 4 (including myself: works) versus 1 (you).
Given that this was installed not via IPS, with its strict, 100% accurate
dependency handling, this is already a good result.

Now let us investigate what your host might be missing: to find out, it would
be helpful to get some details about what you are using.

Could you please post the output of uname -a?
Which packages are installed or perhaps missing?

Then, please open a shell window and start /opt/VirtualBox/VirtualBox from there.
The "normal" output would be this:


$ /opt/VirtualBox/VirtualBox
Qt WARNING: Object::connect: No such signal QApplication::screenAdded(QScreen *)
Qt WARNING: Object::connect:  (sender name:   'VirtualBox')
Qt WARNING: Object::connect: No such signal QApplication::screenRemoved(QScreen *)
Qt WARNING: Object::connect:  (sender name:   'VirtualBox')
Qt WARNING: Object::disconnect: No such signal QApplication::screenAdded(QScreen *)
Qt WARNING: Object::disconnect:  (sender name:   'VirtualBox')
Qt WARNING: Object::disconnect: No such signal QApplication::screenRemoved(QScreen *)
Qt WARNING: Object::disconnect:  (sender name:   'VirtualBox')

Then perhaps let me know the details which the VirtualBox GUI reports after
the crash.
There are various possible reasons:

* hardening related (wrong permissions for your /opt prevent you from starting 
a VM)

* related to the splash screen I modified (long topic, but it should not be the
reason, because it is not a "works here versus doesn't work there" case:
either it recognizes the bmp or it doesn't)

* are you sure your drivers attached and got loaded? Did you update the
boot_archive and reboot?

* Do you have qemu-kvm installed? If yes, the kvm module needs to be
unloaded; it is mutually exclusive with VirtualBox (because the CPU's
virtualization extensions can only be owned by exactly one kernel module)


While I do thank everybody for every positive or negative report, I must wonder
why and how Solaris users report bugs like Win10 Home Edition first-timers
("ohh, doesn't work") without providing the slightest detail.
If you want free help with a VBox package that was created for free, running on
an OS that was created for free, from a community that is willing to assist you
for free, the least one can expect is a reasonable problem description. Or am I
still expecting too much, even after I significantly lowered all my previous
hopes and expectations year over year during the most recent 11 years?



>Monday, 19 September 2016, 18:48 UTC, from russell:
>
>Hi,
>
>I downloaded and installed the TEST-pkg of VirtualBox 5.1.6;
>unfortunately, every VBox instance I started instantly aborted. I tried to
>install VirtualBox 5.0.26, but it detected an open source version of
>Solaris and the installation would not complete, even using Jim's
>vboxconfig.sh. I had performed a pkg update on Saturday.
>It was easier just to revert to a previous boot environment to get
>a working version of VirtualBox.
>
>
>Regards
>
>Russell
>
>


Re: [OpenIndiana-discuss] TEST-pkg VirtualBox 5.16

2016-09-19 Thread Aurélien Larcher
On Mon, Sep 19, 2016 at 8:48 PM, russell  wrote:
> Hi,
>
> I downloaded and installed the TEST-pkg of VirtualBox 5.1.6; unfortunately,
> every VBox instance I started instantly aborted. I tried to install VirtualBox
> 5.0.26, but it detected an open source version of Solaris and the
> installation would not complete, even using Jim's vboxconfig.sh. I had
> performed a pkg update on Saturday.
> It was easier just to revert to a previous boot environment to get a
> working version of VirtualBox.

Could it be just a missing dependency?
>
>
> Regards
>
> Russell
>
>



-- 
---
Praise the Caffeine embeddings



Re: [OpenIndiana-discuss] TEST-pkg VirtualBox 5.16

2016-09-19 Thread Apostolos Syropoulos via openindiana-discuss
> I downloaded and installed the TEST-pkg of VirtualBox 5.1.6;
> unfortunately, every VBox instance I started instantly aborted. I tried to
> install VirtualBox 5.0.26, but it detected an open source version of
> Solaris and the installation would not complete, even using Jim's
> vboxconfig.sh. I had performed a pkg update on Saturday.
> It was easier just to revert to a previous boot environment to get
> a working version of VirtualBox.



What version of the OS are you running? I installed the package and everything
worked with no problem! 


A.S.
--
Apostolos Syropoulos
Xanthi, Greece



[OpenIndiana-discuss] TEST-pkg VirtualBox 5.16

2016-09-19 Thread russell

Hi,

I downloaded and installed the TEST-pkg of VirtualBox 5.1.6;
unfortunately, every VBox instance I started instantly aborted. I tried to
install VirtualBox 5.0.26, but it detected an open source version of
Solaris and the installation would not complete, even using Jim's
vboxconfig.sh. I had performed a pkg update on Saturday.
It was easier just to revert to a previous boot environment to get
a working version of VirtualBox.



Regards

Russell




Re: [OpenIndiana-discuss] test

2015-05-22 Thread Hans J Albertsson
Define "alive"

Hans J. Albertsson
From my Nexus 5
On 21 May 2015 20:46, "Volker A. Brandt" wrote:

> > Mailing list alive ?
>
> Seems so... :-)
>
>
> Regards -- Volker
> --
> 
> Volker A. Brandt   Consulting and Support for Oracle Solaris
> Brandt & Brandt Computer GmbH   WWW: http://www.bb-c.de/
> Am Wiesenpfad 6, 53340 Meckenheim, GERMANY    Email: v...@bb-c.de
> Commercial register: Amtsgericht Bonn, HRB 10513   Shoe size: 46
> Managing directors: Rainer J.H. Brandt and Volker A. Brandt
>
> "When logic and proportion have fallen sloppy dead"
>


Re: [OpenIndiana-discuss] test

2015-05-21 Thread Volker A. Brandt
> Mailing list alive ?

Seems so... :-)


Regards -- Volker
-- 

Volker A. Brandt   Consulting and Support for Oracle Solaris
Brandt & Brandt Computer GmbH   WWW: http://www.bb-c.de/
Am Wiesenpfad 6, 53340 Meckenheim, GERMANY    Email: v...@bb-c.de
Commercial register: Amtsgericht Bonn, HRB 10513   Shoe size: 46
Managing directors: Rainer J.H. Brandt and Volker A. Brandt

"When logic and proportion have fallen sloppy dead"



[OpenIndiana-discuss] test

2015-05-21 Thread Udo Grabowski (IMK)

Mailing list alive ?
--
Dr.Udo Grabowski   Inst.f.Meteorology & Climate Research IMK-ASF-SAT
http://www.imk-asf.kit.edu/english/sat.php
KIT - Karlsruhe Institute of Technology   http://www.kit.edu
Postfach 3640,76021 Karlsruhe,Germany T:(+49)721 608-26026 F:-926026



Re: [OpenIndiana-discuss] Test on Sun X4170M2 between STEC Mach16 SLC/MLC and Intel DC S3700

2015-02-23 Thread Albert Chin
On Mon, Feb 23, 2015 at 07:47:31AM -0600, Schweiss, Chip wrote:
> This may be a better topic for the Illumos ZFS mailing.

Ok.

> There are a couple things affecting your results here.
> 
> The SSDs  perform best when multiple threads are filling their
> queue.   In your test you have a single thread.
> 
> It is my understanding the ZIL is also one thread per ZFS file
> system. The latency of the SAS bus and SSD will stack up against you
> here.  You should get better results across the board if you execute
> against several ZFS file systems on the pool.

Thanks. I ran the dd command on five separate filesystems:
  ATA-STECMACH16 M-0289-186.31GB (MLC)
$ /usr/gnu/bin/dd if=/dev/zero of=file bs=1 oflag=sync
$ iostat -cnx 1
   r/s      w/s  kr/s      kw/s  wait  actv  wsvc_t  asvc_t  %w   %b
   0.0  26788.2   0.0  107152.9   0.1   3.6     0.0     0.1   6  100
   0.0  27470.8   0.0  109883.2   0.1   3.0     0.0     0.1   5  100
   0.0  27980.1   0.0  111924.3   0.1   2.8     0.0     0.1   5  100
   0.0  27100.3   0.0  108401.3   0.1   2.7     0.0     0.1   5   97
   0.0  25801.0   0.0  103204.2   0.0   2.6     0.0     0.1   5   99
   0.0  26955.1   0.0  107820.2   0.1   3.2     0.0     0.1   6  100
   0.0  26374.3   0.0  105497.3   0.1   3.6     0.0     0.1   6  100

  ATA-INTEL SSDSC2BA20-0270-186.31GB (MLC)
$ /usr/gnu/bin/dd if=/dev/zero of=file bs=1 oflag=sync
$ iostat -cnx 1
   r/s      w/s  kr/s      kw/s  wait  actv  wsvc_t  asvc_t  %w   %b
   0.0  16462.9   0.0   32927.8   0.0   1.8     0.0     0.1   3   94
   0.0  16441.1   0.0   32888.2   0.0   1.8     0.0     0.1   3   94
   0.0  16483.9   0.0   32963.8   0.0   1.8     0.0     0.1   2   93
   0.0  16468.7   0.0   32935.4   0.0   1.8     0.0     0.1   3   94
   0.0  16479.8   0.0   32965.7   0.0   1.8     0.0     0.1   3   94
   0.0  16472.2   0.0   32942.4   0.0   1.8     0.0     0.1   2   94

Performance of the Intel is so sad in comparison to the older STEC.
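A quick arithmetic check on the samples above (an editorial addition, not part of the original report): dividing kw/s by w/s gives the average payload per write, which shows the two devices were not committing the same amount of data per operation:

```shell
# Average write size implied by the first iostat sample of each device
# (kw/s divided by w/s, values copied from the tables above).
awk 'BEGIN {
    printf "STEC  : %.1f KB/write\n", 107152.9 / 26788.2
    printf "Intel : %.1f KB/write\n",  32927.8 / 16462.9
}'
# → STEC  : 4.0 KB/write
# → Intel : 2.0 KB/write
```

So in throughput terms the gap (roughly 33 MB/s versus 107 MB/s) is even larger than the raw w/s numbers alone suggest.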

> -Chip
> 
> On Sun, Feb 22, 2015 at 9:28 PM, Albert Chin <
> openindiana-disc...@mlists.thewrittenword.com> wrote:
> 
> > I've tested three SSDs in a Sun X4170M2. This server has 8 internal
> > SSD 2.5" drive bays with a Sun Storage 6 Gb SAS PCIe RAID HBA. I
> > believe the chipset on the HBA is a LSI SAS2108 (according to
> > http://tinyurl.com/koc6kdn).
> >
> > I tested by adding each SSD as a ZIL for a pool and then running the
> > following command on one of the file systems in the pool:
> >   $ /usr/gnu/bin/dd if=/dev/zero of=file bs=1 oflag=sync
> >
> > Iostat numbers below are given by:
> >   $ iostat -cnx 1
> >
> > $ cat /kernel/drv/sd.conf
> > ...
> > sd-config-list="*MACH16*","disksort:false, cache-nonvolatile:true",
> >"*INTELSSD*","disksort:false, cache-nonvolatile:true,
> > physical-block-size:8192";
> >
> > The 8k block size for the Intel S3700 comes from:
> >
> > http://wiki.illumos.org/display/illumos/List+of+sd-config-list+entries+for+Advanced-Format+drives
> >
> > Product part numbers:
> >   1. STEC Mach16 SLC 100GB - M16CSD2-100UIU
> >   2. STEC Mach16 MLC 200GB - M16ISD2-200UCV
> >   3. Intel DC S3700 - SSDSC2BA200G301
> >
> > ATA-STECMACH16 M-0300-93.16GB (SLC)
> >   $ /usr/gnu/bin/dd if=/dev/zero of=file bs=1 oflag=sync
> >   $ iostat -cnx 1
> >      r/s     w/s  kr/s    kw/s  wait  actv  wsvc_t  asvc_t  %w  %b
> >      0.0  1439.2   0.0  5756.9   0.0   0.8     0.0     0.6   0  84
> >      0.0  1478.7   0.0  5915.0   0.0   0.9     0.0     0.6   1  86
> >      0.0  1491.1   0.0  5964.2   0.0   0.9     0.0     0.6   1  88
> >      0.0  1506.0   0.0  6023.8   0.0   0.9     0.0     0.6   0  89
> >
> > ATA-STECMACH16 M-0289-186.31GB (MLC)
> >   $ /usr/gnu/bin/dd if=/dev/zero of=file bs=1 oflag=sync
> >   $ iostat -cnx 1
> >      r/s     w/s  kr/s     kw/s  wait  actv  wsvc_t  asvc_t  %w  %b
> >      0.0  8079.7   0.0  32318.9   0.0   0.5     0.0     0.1   2  52
> >      0.0  8229.1   0.0  32916.5   0.0   0.5     0.0     0.1   2  53
> >      0.0  7368.0   0.0  29471.9   0.0   0.5     0.0     0.1   2  47
> >      0.0  7318.0   0.0  29272.2   0.0   0.5     0.0     0.1   2  47
> >
> > ATA-INTEL SSDSC2BA20-0270-186.31GB (MLC)
> >   $ /usr/gnu/bin/dd if=/dev/zero of=file bs=1 oflag=sync
> >   $ iostat -cnx 1
> >      r/s     w/s  kr/s     kw/s  wait  actv  wsvc_t  asvc_t  %w  %b
> >      0.0  9196.0   0.0  18392.0   0.0   0.2     0.0     0.0   1  19
> >      0.0  9144.3   0.0  18288.6   0.0   0.2     0.0     0.0   1  19
> >      0.0  9288.7   0.0  18575.5   0.0   0.2     0.0     0.0   1  19
> >      0.0  8352.0   0.0  16704.0   0.0   0.2     0.0     0.0   1  17
> >
> > The STEC Mach16s are 3.0 Gbps devices. The Intel SSD DC S3700 is a
> > 6.0 Gbps device. Just two questions:
> >   1. Why don't I see double the IOPS performance from the
> >      6.0 Gbps device compared to the 3.0 Gbps devices?
> >   2. Why does the STEC Mach16 100GB SLC suck so badly in comparison
> >      to its 200GB MLC cousin? I know that the 100GB drives won't
> >      perform as well as the 200GB models, but I did not expect this
> >      much of a difference.
> >
> > Even using bs=4096 on the Intel S3700, I was hoping to see >10K IOPS,
> > possibly matching the numbers 

Re: [OpenIndiana-discuss] Test on Sun X4170M2 between STEC Mach16 SLC/MLC and Intel DC S3700

2015-02-23 Thread Schweiss, Chip
This may be a better topic for the Illumos ZFS mailing.

There are a couple things affecting your results here.

The SSDs perform best when multiple threads are filling their queue. In
your test you have a single thread.

It is my understanding the ZIL is also one thread per ZFS file system.
The latency of the SAS bus and SSD will stack up against you here.  You
should get better results across the board if you execute against several
ZFS file systems on the pool.
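The suggestion above can be sketched as one sync writer per filesystem, all running in parallel. The pool and filesystem names (tank/fs1..fs4) are made up for illustration, and RUN=echo keeps this a dry run that only prints the commands; drop that line on the real host to actually start the writers:

```shell
# One dd per ZFS filesystem, so each filesystem's ZIL thread works in parallel.
RUN=echo    # dry run: print the commands instead of executing them
for fs in fs1 fs2 fs3 fs4; do
  $RUN /usr/gnu/bin/dd if=/dev/zero of=/tank/$fs/file bs=4096 oflag=sync &
done
wait        # meanwhile, watch "iostat -cnx 1" in another terminal
```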

-Chip

On Sun, Feb 22, 2015 at 9:28 PM, Albert Chin <
openindiana-disc...@mlists.thewrittenword.com> wrote:

> I've tested three SSDs in a Sun X4170M2. This server has 8 internal
> SSD 2.5" drive bays with a Sun Storage 6 Gb SAS PCIe RAID HBA. I
> believe the chipset on the HBA is a LSI SAS2108 (according to
> http://tinyurl.com/koc6kdn).
>
> I tested by adding each SSD as a ZIL for a pool and then running the
> following command on one of the file systems in the pool:
>   $ /usr/gnu/bin/dd if=/dev/zero of=file bs=1 oflag=sync
>
> Iostat numbers below are given by:
>   $ iostat -cnx 1
>
> $ cat /kernel/drv/sd.conf
> ...
> sd-config-list="*MACH16*","disksort:false, cache-nonvolatile:true",
>"*INTELSSD*","disksort:false, cache-nonvolatile:true,
> physical-block-size:8192";
>
> The 8k block size for the Intel S3700 comes from:
>
> http://wiki.illumos.org/display/illumos/List+of+sd-config-list+entries+for+Advanced-Format+drives
>
> Product part numbers:
>   1. STEC Mach16 SLC 100GB - M16CSD2-100UIU
>   2. STEC Mach16 MLC 200GB - M16ISD2-200UCV
>   3. Intel DC S3700 - SSDSC2BA200G301
>
> ATA-STECMACH16 M-0300-93.16GB (SLC)
>   $ /usr/gnu/bin/dd if=/dev/zero of=file bs=1 oflag=sync
>   $ iostat -cnx 1
>      r/s     w/s  kr/s    kw/s  wait  actv  wsvc_t  asvc_t  %w  %b
>      0.0  1439.2   0.0  5756.9   0.0   0.8     0.0     0.6   0  84
>      0.0  1478.7   0.0  5915.0   0.0   0.9     0.0     0.6   1  86
>      0.0  1491.1   0.0  5964.2   0.0   0.9     0.0     0.6   1  88
>      0.0  1506.0   0.0  6023.8   0.0   0.9     0.0     0.6   0  89
>
> ATA-STECMACH16 M-0289-186.31GB (MLC)
>   $ /usr/gnu/bin/dd if=/dev/zero of=file bs=1 oflag=sync
>   $ iostat -cnx 1
>      r/s     w/s  kr/s     kw/s  wait  actv  wsvc_t  asvc_t  %w  %b
>      0.0  8079.7   0.0  32318.9   0.0   0.5     0.0     0.1   2  52
>      0.0  8229.1   0.0  32916.5   0.0   0.5     0.0     0.1   2  53
>      0.0  7368.0   0.0  29471.9   0.0   0.5     0.0     0.1   2  47
>      0.0  7318.0   0.0  29272.2   0.0   0.5     0.0     0.1   2  47
>
> ATA-INTEL SSDSC2BA20-0270-186.31GB (MLC)
>   $ /usr/gnu/bin/dd if=/dev/zero of=file bs=1 oflag=sync
>   $ iostat -cnx 1
>      r/s     w/s  kr/s     kw/s  wait  actv  wsvc_t  asvc_t  %w  %b
>      0.0  9196.0   0.0  18392.0   0.0   0.2     0.0     0.0   1  19
>      0.0  9144.3   0.0  18288.6   0.0   0.2     0.0     0.0   1  19
>      0.0  9288.7   0.0  18575.5   0.0   0.2     0.0     0.0   1  19
>      0.0  8352.0   0.0  16704.0   0.0   0.2     0.0     0.0   1  17
>
> The STEC Mach16s are 3.0 Gbps devices. The Intel SSD DC S3700 is a
> 6.0 Gbps device. Just two questions:
>   1. Why don't I see double the IOPS performance from the
>      6.0 Gbps device compared to the 3.0 Gbps devices?
>   2. Why does the STEC Mach16 100GB SLC suck so badly in comparison
>      to its 200GB MLC cousin? I know that the 100GB drives won't
>      perform as well as the 200GB models, but I did not expect this
>      much of a difference.
>
> Even using bs=4096 on the Intel S3700, I was hoping to see >10K IOPS,
> possibly matching the numbers from the anandtech review:
>   http://www.anandtech.com/show/6433/intel-ssd-dc-s3700-200gb-review/3
> ATA-INTEL SSDSC2BA20-0270-186.31GB (MLC)
>   $ /usr/gnu/bin/dd if=/dev/zero of=file bs=4096 oflag=sync
>   $ iostat -cnx 1
>      r/s     w/s  kr/s     kw/s  wait  actv  wsvc_t  asvc_t  %w  %b
>      0.0  8714.5   0.0  34858.0   0.0   0.2     0.0     0.0   1  22
>      0.0  8410.9   0.0  33639.6   0.0   0.2     0.0     0.0   1  21
>      0.0  8431.1   0.0  33728.2   0.0   0.2     0.0     0.0   1  21
>      0.0  8295.0   0.0  33224.0   0.0   0.2     0.0     0.0   1  21
>      0.0  8970.1   0.0  35740.4   0.0   0.2     0.0     0.0   1  23
>
> Am I getting the best possible performance out of the Intel S3700?
>
> --
> albert chin (ch...@thewrittenword.com)
>


[OpenIndiana-discuss] Test on Sun X4170M2 between STEC Mach16 SLC/MLC and Intel DC S3700

2015-02-22 Thread Albert Chin
I've tested three SSDs in a Sun X4170M2. This server has 8 internal
SSD 2.5" drive bays with a Sun Storage 6 Gb SAS PCIe RAID HBA. I
believe the chipset on the HBA is a LSI SAS2108 (according to
http://tinyurl.com/koc6kdn).

I tested by adding each SSD as a ZIL for a pool and then running the
following command on one of the file systems in the pool:
  $ /usr/gnu/bin/dd if=/dev/zero of=file bs=1 oflag=sync

Iostat numbers below are given by:
  $ iostat -cnx 1

$ cat /kernel/drv/sd.conf
...
sd-config-list="*MACH16*","disksort:false, cache-nonvolatile:true",
               "*INTELSSD*","disksort:false, cache-nonvolatile:true,
               physical-block-size:8192";
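A practical aside (an editorial addition, not from the original message): sd.conf is only read when the sd driver processes its configuration, so after editing it the file has to be re-read; update_drv can usually do that without a full reboot, with a reboot as the safe fallback. RUN=echo keeps this a dry run; drop it, and run as root, on the actual host:

```shell
# Ask the sd driver to re-read /kernel/drv/sd.conf after editing it.
RUN=echo                  # dry run: print instead of execute
$RUN update_drv -vf sd    # -f forces a re-read of the driver.conf
```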

The 8k block size for the Intel S3700 comes from:
  
http://wiki.illumos.org/display/illumos/List+of+sd-config-list+entries+for+Advanced-Format+drives

Product part numbers:
  1. STEC Mach16 SLC 100GB - M16CSD2-100UIU
  2. STEC Mach16 MLC 200GB - M16ISD2-200UCV
  3. Intel DC S3700 - SSDSC2BA200G301

ATA-STECMACH16 M-0300-93.16GB (SLC)
  $ /usr/gnu/bin/dd if=/dev/zero of=file bs=1 oflag=sync
  $ iostat -cnx 1
   r/s     w/s  kr/s    kw/s  wait  actv  wsvc_t  asvc_t  %w  %b
   0.0  1439.2   0.0  5756.9   0.0   0.8     0.0     0.6   0  84
   0.0  1478.7   0.0  5915.0   0.0   0.9     0.0     0.6   1  86
   0.0  1491.1   0.0  5964.2   0.0   0.9     0.0     0.6   1  88
   0.0  1506.0   0.0  6023.8   0.0   0.9     0.0     0.6   0  89

ATA-STECMACH16 M-0289-186.31GB (MLC)
  $ /usr/gnu/bin/dd if=/dev/zero of=file bs=1 oflag=sync
  $ iostat -cnx 1
   r/s     w/s  kr/s     kw/s  wait  actv  wsvc_t  asvc_t  %w  %b
   0.0  8079.7   0.0  32318.9   0.0   0.5     0.0     0.1   2  52
   0.0  8229.1   0.0  32916.5   0.0   0.5     0.0     0.1   2  53
   0.0  7368.0   0.0  29471.9   0.0   0.5     0.0     0.1   2  47
   0.0  7318.0   0.0  29272.2   0.0   0.5     0.0     0.1   2  47

ATA-INTEL SSDSC2BA20-0270-186.31GB (MLC)
  $ /usr/gnu/bin/dd if=/dev/zero of=file bs=1 oflag=sync
  $ iostat -cnx 1
   r/s     w/s  kr/s     kw/s  wait  actv  wsvc_t  asvc_t  %w  %b
   0.0  9196.0   0.0  18392.0   0.0   0.2     0.0     0.0   1  19
   0.0  9144.3   0.0  18288.6   0.0   0.2     0.0     0.0   1  19
   0.0  9288.7   0.0  18575.5   0.0   0.2     0.0     0.0   1  19
   0.0  8352.0   0.0  16704.0   0.0   0.2     0.0     0.0   1  17

The STEC Mach16s are 3.0 Gbps devices. The Intel SSD DC S3700 is a
6.0 Gbps device. Just two questions:
  1. Why don't I see double the IOPS performance from the
     6.0 Gbps device compared to the 3.0 Gbps devices?
  2. Why does the STEC Mach16 100GB SLC suck so badly in comparison
     to its 200GB MLC cousin? I know that the 100GB drives won't
     perform as well as the 200GB models, but I did not expect this
     much of a difference.

Even using bs=4096 on the Intel S3700, I was hoping to see >10K IOPS,
possibly matching the numbers from the anandtech review:
  http://www.anandtech.com/show/6433/intel-ssd-dc-s3700-200gb-review/3
ATA-INTEL SSDSC2BA20-0270-186.31GB (MLC)
  $ /usr/gnu/bin/dd if=/dev/zero of=file bs=4096 oflag=sync
  $ iostat -cnx 1
   r/s     w/s  kr/s     kw/s  wait  actv  wsvc_t  asvc_t  %w  %b
   0.0  8714.5   0.0  34858.0   0.0   0.2     0.0     0.0   1  22
   0.0  8410.9   0.0  33639.6   0.0   0.2     0.0     0.0   1  21
   0.0  8431.1   0.0  33728.2   0.0   0.2     0.0     0.0   1  21
   0.0  8295.0   0.0  33224.0   0.0   0.2     0.0     0.0   1  21
   0.0  8970.1   0.0  35740.4   0.0   0.2     0.0     0.0   1  23

Am I getting the best possible performance out of the Intel S3700?

-- 
albert chin (ch...@thewrittenword.com)



[OpenIndiana-discuss] test Mailman - ignore

2014-05-31 Thread howard eisenberger
Testing Mailman. Sorry for the interruption.

Regards,

Howard E.




[OpenIndiana-discuss] test from yahoo

2014-05-23 Thread Edward Ned Harvey via openindiana-discuss
asdf



[OpenIndiana-discuss] test

2012-08-12 Thread openbabel




Re: [OpenIndiana-discuss] test

2012-01-18 Thread Open Indiana
-ikel

-Original Message-
From: Heiko Wuest [mailto:he...@wuest.de]
Sent: Wednesday, 18 January 2012 9:53
To: openindiana-discuss@openindiana.org
Subject: [OpenIndiana-discuss] test





[OpenIndiana-discuss] test

2012-01-18 Thread Heiko Wuest





Re: [OpenIndiana-discuss] Test

2010-10-08 Thread Jonathan Adams
well I got your message fine.

On 8 October 2010 16:08, Paul Johnston wrote:

> Sorry about this but I tried to send a message and got:
>
>
>
> Your message did not reach some or all of the intended recipients.
>
>
>
>  Subject:Procedure for Release of Upgrades
>
>  Sent: 08/10/2010 16:06
>
>
>
> The following recipient(s) cannot be reached:
>
>
>
>  Discussion list for OpenIndina on 08/10/2010 16:06
>
>You do not have permission to send to this recipient.  For
> assistance, contact your system administrator.
>
>MSEXCH:MSExchangeIS:/DC=uk/DC=ac/DC=blah!!!
>
>
>
>
>
>
>
> Paul Johnston
>
> Humanities ICT (Infrastructure)
>
> Samuel Alexander Building
>
> Room W1.19
>
> Tel Internal 63327
>
>
>
> e-mail paul.johns...@manchester.ac.uk
> 
>
> web http://servalan.humanities.manchester.ac.uk/users/mcasspj/
> 
>
>
>
> Were the traps made from grenades, high-grade explosives, or
> something else?
>
>
>
>
>
>
>


[OpenIndiana-discuss] Test

2010-10-08 Thread Paul Johnston
Sorry about this but I tried to send a message and got:

 

Your message did not reach some or all of the intended recipients.

 

  Subject:Procedure for Release of Upgrades

  Sent: 08/10/2010 16:06

 

The following recipient(s) cannot be reached:

 

  Discussion list for OpenIndina on 08/10/2010 16:06

You do not have permission to send to this recipient.  For
assistance, contact your system administrator.

MSEXCH:MSExchangeIS:/DC=uk/DC=ac/DC=blah!!!

 

 

 

Paul Johnston

Humanities ICT (Infrastructure)

Samuel Alexander Building

Room W1.19

Tel Internal 63327

 

e-mail paul.johns...@manchester.ac.uk
 

web http://servalan.humanities.manchester.ac.uk/users/mcasspj/
 

 

Were the traps made from grenades, high-grade explosives, or
something else?

 

 

 
