Re: [smartos-discuss] How to install SmartOS when memory is bigger than hard disk?

2016-11-29 Thread Si-Qi Liu
I am still trying to find a better way, so I read the source of
/smartdc/lib/smartos_prompt_config.sh and found the following comment before
it creates the swap volume:

We cannot allow the swap size to be less than the size of DRAM, lest we run 
into the availrmem double accounting issue for locked anonymous memory that is 
backed by in-memory swap (which will severely and artificially limit VM 
tenancy).  We will therefore not create a swap device smaller than DRAM -- but 
we still allow for the configuration variable to account for actual consumed 
space by using it to set the refreservation on the swap volume if/when the 
specified size is smaller than DRAM.

What is "availrmem double accounting issue"? Does it mean that it is not a good 
idea to install smartos on large memory small disk machine? Or I can just 
ignore it, and set my favorite size of swap?
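
For reference, here is a rough sketch of what that comment seems to allow,
assuming the standard zones/swap zvol (untested; the sizes are placeholders
for this machine):

# Keep volsize >= DRAM to avoid the availrmem issue, but only reserve a
# small amount of real disk space via refreservation.
zfs set refreservation=none zones/swap
zfs set volsize=256G zones/swap
zfs set refreservation=2G zones/swap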

Best, Si-Qi

> On 26 Nov 2016, at 20:16, Si-Qi Liu  wrote:
> 
> Dear Ian,
> 
> Thank you for your advice. It works! Though I really don't like this literal 
> get-your-hands-dirty workaround...
> 
> I pulled most of the memory to make the machine "normal", then installed
> SmartOS with no problem. Then I resized the swap volume to 2G and powered
> off, inserted the memory back, and booted again. A new problem arose here:
> the svc:/system/filesystem/smartdc:default service fails. I checked the log;
> the reason is that the dump volume is too small. It seems that the default
> size of this volume is 1/100 of the whole disk, so it was set to 1.43G. But
> when the machine has 256G of memory, the dump volume should be at least
> 5.xxG (I don't know where this number comes from). So I resized the dump
> volume to meet the requirement and rebooted again, and there was no problem
> this time.
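> 
> For anyone hitting the same thing, this is roughly what I did (a sketch,
> assuming the standard zones/dump zvol; 8G is just a value comfortably above
> the reported requirement):
> 
> # Grow the dump zvol, then re-register it as the dump device.
> zfs set volsize=8G zones/dump
> dumpadm -d /dev/zvol/dsk/zones/dump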
> 
> Best wishes, Si-Qi
> 
> 
>> On 25 Nov 2016, at 17:29, Ian Collins  wrote:
>> 
>> On 11/25/16 10:05 PM, Si-Qi Liu wrote:
>>> Hi,
>>> 
>>> I am trying to install SmartOS on my HP Z820 workstation, which has 256G of
>>> memory and a 160G SSD.
>>> 
>>> There is no problem when I set up the network, the zpool, the root password,
>>> and so on. Finally, when I press 'y' and 'Enter' to start the installation,
>>> it fails. The reason is quite simple: when the install script creates the
>>> swap volume, there is not enough disk space, so it just stops there and
>>> gives me a prompt. I tried to create a smaller swap volume myself, but I
>>> didn't know what to do next. Simply rebooting doesn't work.
>> 
>> Have you tried pulling some of the memory just for the install?
>> 
>>> BTW, for a machine with 256G of memory, is it necessary to create a swap
>>> volume at all? If so, how big should it be?
>> 
>> Once you have installed, you can resize the swap volume.  I'd say yes, and 
>> for this machine, keep it small.
>> 
>> Cheers.
>> 
>> --
>> Ian.
>> 
> 
> 



Re: [smartos-discuss] Wrong disksize calculation

2016-11-29 Thread Kilian Ries
@Gjermund Gusland Thorsen


Sure, I stopped the VM first.



@Jorge


Both pools have an HDD mirror with an SSD mirror as a log device.




But that isn't really the problem. I think the problem is how SmartOS
converts the size of the ZVOL into its own MiB size inside the VM JSON
definition...


I was able to move the KVM to the other host after I changed the volsize.
For example:


###

zfs set volsize=29G zones/uuid-disk0


gave me a


"size": 29696

in the KVM's JSON
###

After that, vmadm sees an integer instead of a float/double, and I was able
to move the KVM.
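
The arithmetic is consistent with vmadm reporting the zvol's volsize in MiB
(my reading of the behaviour, not confirmed from the source):

# 29G volsize   -> 29 * 1024        = 29696 MiB      (integer: accepted)
# 117187.5 MiB  -> 117187.5 / 1024  = 114.44... GiB  (zfs displays "114G")
# 29296.875 MiB -> 29296.875 / 1024 = 28.61... GiB   (zfs displays "28.6G")
# Any volsize that is not a whole number of MiB shows up as a float in the
# JSON and fails vmadm's integer check on disks.*.size.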


From: Jorge Schrauwen 
Sent: Tuesday, 29 November 2016 10:34
To: smartos-discuss@lists.smartos.org
Cc: Gjermund Gusland Thorsen
Subject: Re: [smartos-discuss] Wrong disksize calculation

I vaguely remember having this when the blocksize did not match: 4k on the
HDD pool vs. 16k on the SSD pool.

Regards

Jorge



On 2016-11-29 10:26, Gjermund Gusland Thorsen wrote:
> Did you stop the VM first?
>
> vmadm stop 557ba0c0-b09c-678f-b1bc-a6bf2cc7b439; vmadm send
> 557ba0c0-b09c-678f-b1bc-a6bf2cc7b439 | ssh r...@192.168.xx.xx vmadm
> receive
>
> G
>
> On 29 Nov, 2016, at 10:08, Kilian Ries  wrote:
>
>> Hello,
>>
>> yesterday I encountered the following problem:
>>
>> I wanted to move a KVM from one host to another via "vmadm send"
>> command, but that ended up in the following error:
>>
>> ###
>> vmadm send 557ba0c0-b09c-678f-b1bc-a6bf2cc7b439 | ssh
>> r...@192.168.xx.xx vmadm receive
>> Invalid value(s) for: disks.*.size
>> This socket is closed.
>> ###
>>
>>
>> Reading the vmadm manual, I figured out that:
>>
>>  * disks.*.size MUST be an integer
>>  * if you have a ZVOL, disks.*.size depends on the size of the ZVOL and
>> cannot be updated via vmadm
>>
>> So I think SmartOS somewhere calculates wrong disk sizes, because these
>> should always be integers, but that isn't the case in reality. For
>> example:
>>
>> -> first, output of vmadm get UUID
>> -> second, output of zfs get all zones/UUID-disk0
>>
>> ###
>>
>> Example 1:
>>
>> vmadm get UUID
>>
>>   "disks": [
>> {
>>   ...
>>   "size": 117187.5,
>>   ...
>> }
>>   ]
>>
>>
>> volsize  114G
>>
>>
>>
>> Example 2:
>>
>>   "disks": [
>> {
>>   ...
>>   "size": 29296,875
>>   ...
>> }
>>   ]
>>
>>
>> volsize  28.6G
>>
>> ###
>>
>>
>> Why does SmartOS calculate wrong numbers here? These should all be
>> integers ...
>>
>> Thanks
>> Greets
>> Kilian
>>
>>





Re: [smartos-discuss] Wrong disksize calculation

2016-11-29 Thread Jorge Schrauwen
I vaguely remember having this when the blocksize did not match: 4k on the
HDD pool vs. 16k on the SSD pool.
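
One way to check whether that is the case here (a guess; run it on both
hosts, and the dataset name is a placeholder for the disk's actual zvol):

zfs get volblocksize zones/557ba0c0-b09c-678f-b1bc-a6bf2cc7b439-disk0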


Regards

Jorge











Re: [smartos-discuss] Wrong disksize calculation

2016-11-29 Thread Gjermund Gusland Thorsen
Did you stop the VM first?

vmadm stop 557ba0c0-b09c-678f-b1bc-a6bf2cc7b439; vmadm send 
557ba0c0-b09c-678f-b1bc-a6bf2cc7b439 | ssh r...@192.168.xx.xx vmadm receive

G










Re: [smartos-discuss] Power Management on Modern CPUs

2016-11-29 Thread Adam Richmond-Gordon
Still fighting on with this. I have now tried several configurations with the
SSDs (mirrored cache, striped cache, mirrored logs with striped cache,
individual cache and mirror); all eventually end with the same host reboot,
with nothing logged and nothing in the crash dump directory.

One thing I have noticed is that when any log device is configured, the
system reboots more often. This happens on two boxes with identical
configurations.

I am beginning to wonder whether:
- support for the onboard SATA controller isn't great, or
- we are generating too much I/O for the SATA controller or the SSDs to keep
up with.

Does anybody else have systems running on a C612-chipset box with SATA
drives?
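
For reference, the sort of checks that could help narrow down a controller or
device problem (standard illumos tools; device names and output will vary per
box):

iostat -En          # per-device error counters (soft/hard/transport errors)
fmdump -eV | tail   # recent FMA error telemetry, if any was recorded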

> On 6 Oct 2016, at 12:17, Adam Richmond-Gordon  
> wrote:
> 
>> If the device dies while the system is running the log will fall back to
>> being on the pool and any uncommitted entries are still in memory and
>> won't be lost.
> 
> This is pretty much what I’d had in mind. That said, this is a production box 
> and data loss isn’t something I really want to be dealing with. Removing the 
> cache device hasn’t changed performance at all, so I guess there’s enough 
> free RAM on the box to deal with the majority of the ARC.
> 
> It’ll be interesting (now that the drives are mirrored) to see if the 
> crashing still occurs, or if the drive that appears to be timing out just 
> gets dropped from the pool.
> 
>> On 6 Oct 2016, at 04:13, Paul B. Henson  wrote:
>> 
>> On Thu, Oct 06, 2016 at 03:42:36AM +0100, Adam Richmond-Gordon wrote:
>> 
>>> Thank you for pointing that out - I had it in my head that a single
>>> log device would be safe.
>> 
>> There's only one failure mode (AFAIK) where a single log device will
>> cause data loss; if your box crashes or has an unclean shutdown (say due
>> to a power failure) while there are uncommitted entries on the log
>> device, and the device fails before the system comes back online to
>> process them.
>> 
>> If the device dies while the system is running the log will fall back to
>> being on the pool and any uncommitted entries are still in memory and
>> won't be lost. If the device dies after a clean shutdown or poweroff
>> there won't be any uncommitted entries on it and when the system comes
>> up it will fail the device and again just fall back to an on-pool log.
>> 
>> So while it's true that a single log device is non-redundant, it's a lot
>> less "not safe" than say a non-redundant pool. You'd have to be pretty
>> unlucky to actually have data loss from losing a non-redundant log
>> device. Of course, depending on the importance of your data, that might
>> not be a risk you want to take. But for a budget sensitive system, a
>> single high-cost SSD for a log isn't an insane configuration if you
>> think the odds of your system crashing/powering off dirty at the exact
>> same time your log device dies are pretty low.
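>> 
>> For completeness, a sketch of turning an existing single log device into a
>> mirrored one (the device names here are placeholders):
>> 
>> # Attach c1t5d0 as a mirror of the existing log device c1t4d0.
>> zpool attach zones c1t4d0 c1t5d0
>> zpool status zones   # the log should now show up as a mirror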
>> 
> 
> 

