Hi Keith,
On Tue, Mar 18, 2014 at 08:58:01PM +0200, Len Weincier wrote:
We are looking at some new kit and we saw these from SuperMicro :
http://www.supermicro.com/products/system/1U/1027/SYS-1027R-WC1RT.cfm
They seem to be a decent option and by the looks of things all the pieces
I saw the SAS-3108 [Invader] in the list from mr_sas; is that a different
device?
Oh, so it's HW RAID then. Yuck. But if it's in the list, then it's
probably supported.
Right, and that looks like a mission to get IT mode flashed (if possible), so we're
going to avoid that.
I have had a
Hi
We have a bunch of older machines with nice CPUs and memory but no disks to
speak of (e.g. only 2 bays).
Is there any way we can have the zones pool come from an iSCSI share, or any
other option?
Thanks
Len
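For what it's worth, an iSCSI-backed zones pool can at least be sketched with the stock illumos initiator tools. This is a hypothetical outline, not a supported SmartOS setup: the target address and the resulting disk name are placeholders, and SmartOS normally creates the zones pool itself at setup time.

```shell
# Hypothetical sketch: point the initiator at an iSCSI target, then
# build the zones pool on the LUN that appears. The target IP and the
# disk name (c2t...d0) are placeholders for illustration only.
iscsiadm modify discovery --sendtargets enable
iscsiadm add discovery-address 10.0.0.50
devfsadm -i iscsi                      # make the new LUN visible
zpool create -f zones c2t600144F0AAAA0001d0
```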
---
smartos-discuss
Archives:
Hi
We have some new hosts with 1.5T of memory, and this causes a zones/swap of
1.5T to be created.
Is this necessary in the GZ? Each VM gets its own swap, so I'm just not sure
what would be running in the GZ that would end up using that swap.
Thanks
Len
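zones/swap is an ordinary ZFS zvol, so if the GZ really doesn't need 1.5T of swap it can in principle be resized in place. A hedged sketch, assuming nothing is actively paging to the device; the 32G figure is an arbitrary example, not a recommendation:

```shell
# Inspect the current swap zvol size, then shrink it. The device must
# be removed from the swap list before the volsize change and re-added
# afterwards.
zfs get volsize zones/swap
swap -d /dev/zvol/dsk/zones/swap       # remove the swap device first
zfs set volsize=32G zones/swap         # 32G is an arbitrary example
swap -a /dev/zvol/dsk/zones/swap       # re-add at the new size
swap -l                                # confirm
```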
---
Hi
This has now happened to 3 hosts in the last 3 days. Any idea what we can
look at?
It seems to happen under high load on those systems, all older E5 based
hosts.
We just had another reboot and this is in /var/adm/messages
2016-09-19T11:49:39.065057+00:00 c1a unix: [ID 836849 kern.notice]
panic occur.
>
>
> --
> Brian Bennett
> Systems Engineer, Cloud Operations
> Joyent, Inc. | www.joyent.com
>
> On Sep 19, 2016, at 5:09 AM, Len Weincier <l...@cloudafrica.net> wrote:
>
> Hi
>
> This has now happened to 3 hosts in the last 3 days. Any idea what we can
>
> Brian Bennett
> Systems Engineer, Cloud Operations
> Joyent, Inc. | www.joyent.com
>
> On Sep 20, 2016, at 11:22 AM, Len Weincier <l...@cloudafrica.net> wrote:
>
> Hi Brian
>
> How do we get the dump to you ?
>
> Thanks
> Len
>
>
> On Tue, 20 Sep 2016 at 20:21
Hi
Thanks for the responses. Of course the LX machines will need zones/swap,
which I had not realized.
cheers
Len
On Sun, 2 Oct 2016 at 23:42 Nigel W <nige...@nosun.ca> wrote:
> On Sep 29, 2016 2:03 PM, "Len Weincier" <l...@cloudafrica.net> wrote:
> >
> >
0 0 0
errors: No known data errors
On 16 November 2016 at 10:34, Ian Collins <ian.iansh...@gmail.com> wrote:
> On 11/16/16 09:24 PM, Len Weincier wrote:
>
> Hi
>
> We have a node with the following :
>
> # zpool list
> NAME  SIZE  ALLOC  FREE  EXPANDS
Hi
We have a node with the following :
# zpool list
NAME    SIZE   ALLOC   FREE  EXPANDSZ  FRAG   CAP  DEDUP  HEALTH  ALTROOT
zones  10,4T   4,07T  6,37T         -   25%   39%  1.00x  ONLINE        -
And :
# zfs list | head
NAME USED AVAIL REFER
Just to add to this.
The issue is happening in an LX zone.
SmartOS image is 20160915T211220Z
Thanks
Len
On 1 December 2016 at 18:17, Len Weincier <l...@cloudafrica.net> wrote:
> Hi all
>
> Are there any issues with these CPUs and SmartOS - "Intel(R) Xeon(R) CPU
> E7-4850
Hi all
Are there any issues with these CPUs and SmartOS - "Intel(R) Xeon(R) CPU
E7-4850 v4 @ 2.10GHz"?
We have a new host with 4 of those CPUs and 1.5TB of RAM, and moved some
machines to the host, which are now thrashing the CPU - the load average in a
test zone with cpu_cap=800 is over 180 and just
of time, but with the
E7-x CPUs that have high core counts the ratio is higher, so this will
become a more serious problem, especially for multithreaded apps?
Thanks
Len
On 2 December 2016 at 09:44, Len Weincier <l...@cloudafrica.net> wrote:
> This is the classic symptom of a machine with a cpu_cap less than its CPU
> (core) count.
>
So the VM sees all the cores and dispatches threads/processes for that
many cores, but the actual usage is limited by the cpu_cap, so the threads
are stalling waiting for the OS to give them a slot to
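Under that theory, the fix is to raise cpu_cap to at least 100x the number of cores the zone is expected to drive. A sketch with vmadm; the UUID is a placeholder, and cpu_cap is expressed as a percentage of a single CPU, so 1600 means 16 full cores:

```shell
# Inspect the current cap, then raise it so the cap (in percent of one
# CPU) is no longer below the visible core count.
vmadm get $UUID | json cpu_cap
vmadm update $UUID cpu_cap=1600
```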
On 2 December 2016 at 16:43, Nahum Shalman wrote:
> I like to think about CPU as a resource either under contention (more
> processes need CPU time than CPU cores to go around) or not:
>
> CPU caps limit how much you can use when there's no contention. In the
> Joyent Cloud
On 2 December 2016 at 18:30, Robert Mustacchi <r...@joyent.com> wrote:
> On 12/2/16 8:17 , Len Weincier wrote:
> > On 2 December 2016 at 18:04, Robert Mustacchi <r...@joyent.com> wrote:
> >
> >> On 12/1/16 23:44 , Len Weincier wrote:
> >>>
On 2 December 2016 at 18:04, Robert Mustacchi <r...@joyent.com> wrote:
> On 12/1/16 23:44 , Len Weincier wrote:
> >>
> >> This is the classic symptom of a machine with a cpu_cap less than its
> CPU
> >> (core) count.
> >>
> >
> > So
Hi
Throughout the day we have had a host that “pauses” for a bit in the GZ;
commands like ps and prstat in particular are very slow (up to 100 seconds).
Digging around, I managed to find that the /proc filesystem was unresponsive when
this happens. truss says that opening /proc/PID/… files takes up to
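One way to quantify that with stock truss options: -d prints a timestamp delta for each call and -t open restricts the trace to open(2), so the slow /proc opens stand out.

```shell
# Trace only open(2) calls made by ps, with per-call timestamps,
# discarding ps's own output. Slow /proc/<pid>/* opens show up as
# large gaps between timestamps on successive lines.
truss -d -t open ps -ef > /dev/null
```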
> On 26 Oct 2017, at 15:53, Robert Mustacchi <r...@joyent.com> wrote:
>
> On 10/26/17 6:51 , Len Weincier wrote:
>> Hi All
>>
>> We are looking to get a bunch of new compute nodes and looking at 2 things
>>
>> - the latest scal
Hi All
We are looking to get a bunch of new compute nodes and are looking at 2 things:
- the latest scalable Intel CPUs
- all-NVMe based storage
Are there any issues with the above that anyone knows of?
Thanks
Len
Len
> On 25 Jul 2018, at 16:58, Len Weincier wrote:
>
> Hi
>
> We have a very strange situation trying to upgrade to a newer SmartOS image,
> where the disk I/O is *very* slow.
>
> I have been working through the released images and the last one that works
> 1
> On 25 Jul 2018, at 22:42, Michal Nowak wrote:
>
> On 07/25/18 08:49 PM, Len Weincier wrote:
>> Hi
>> I see this commit and the hosts where we see issues have 2 NVMe devs used as
>> slogs if that helps
>> https://github
On Wed, 2018-07-25 at 16:58 +0200, Len Weincier wrote:
> Hi
> We have a very strange situation trying to upgrade to a newer SmartOS
> image, where the disk I/O is *very* slow.
> I have been working through the released images and the last one that
> works 100% is 20180329
On Thu, 2018-07-26 at 12:07 +1200, Ian Collins wrote:
> On Thu, Jul 26, 2018 at 2:58 AM, Len Weincier
> wrote:
> > Hi
> >
> > We have a very strange situation trying to upgrade to a newer SmartOS
> > image, where the disk I/O is *very* slow.
> >
> > I hav
Hi
We have some custom stuff we want to add to a platform image, but need
to base it off the 20180329 release.
I have done a git checkout of the relevant branch but don't see how to
reference the correct versions of illumos etc.
What's the process of using that older version as a base for my
On Thu, 2018-07-26 at 07:17 +, Daniel Plominski wrote:
> Hi,
>
> 1. checkout your specific smartos-live git branch:
> git clone https://github.com/joyent/smartos-live --branch release-
> 20180104
> 2. merge your own commits
>
> 3. modify the configure.smartos:
> cd smartos-live
> cp -v
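Putting those steps together, a consolidated sketch; the exact release branch name for 20180329 is an assumption (check `git branch -r` in the clone), and the build targets follow the standard smartos-live workflow:

```shell
# Build a platform image from an older release branch, then layer local
# changes on top. The branch name assumes the usual release-YYYYMMDD
# convention used by the smartos-live repository.
git clone https://github.com/joyent/smartos-live --branch release-20180329
cd smartos-live
# merge/cherry-pick your own commits and adjust configure.smartos here
./configure
gmake world
gmake live
```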
Hi
We have a very strange situation trying to upgrade to a newer SmartOS image
where the disk I/O is *very* slow.
I have been working through the released images and the last one that
works 100% is 20180329T002644Z
From 20180412T003259Z onwards, the releases with the new ZFS features
like spacemaps
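To confirm which side of that boundary a pool is on, the ZFS feature flags can be listed directly; this is a generic check rather than anything specific to the regression, and the feature names come from the OpenZFS feature set:

```shell
# List every feature flag and its state (disabled/enabled/active) on
# the zones pool; spacemap-related features appear here once the pool
# has been created or upgraded on a new enough platform image.
zpool get all zones | grep 'feature@'
```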
> On 26 Jul 2018, at 20:43, Mike Gerdts wrote:
>
> On Thu, Jul 26, 2018 at 5:02 AM, Len Weincier <mailto:l...@cloudafrica.net>> wrote:
> On Wed, 2018-07-25 at 16:58 +0200, Len Weincier wrote:
>> Hi
>>
>> We have a very strange situation trying to
I see from the manufacturing docs that you have a single 50GB device
(270-022 usually). From reading around, it seems to be recommended that
the slog be mirrored, since it is taking the writes, so that if one device fails
there is a spare. Do you find having only the one slog device is OK?
Yes. The slog is
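If mirroring the slog later turns out to be desirable, a single log device can be converted in place; a sketch with placeholder device names:

```shell
# Attach a second device to the existing log device; zpool turns the
# single log vdev into a mirror. c1t0d0/c1t1d0 are placeholders.
zpool attach zones c1t0d0 c1t1d0
zpool status zones        # the log section should now show mirror-N
```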
On 2 December 2014 at 15:25, Johan Claesson jo...@jobetech.se wrote:
2014-12-02 13:18 skrev Len Weincier via smartos-discuss:
Hi,
I had a similar problem with a failing disk.
I started to notice errors and warnings for a disk in my raidz setup.
The system randomly rebooted.
That random
Hi
We are looking at SDC again after last seeing it at version 6.x - so far it's
looking good.
I was wondering if the customer portal is part of the open source release?
For a private cloud deployment at a customer, would they use the operator
portal?
Thanks
Len