Re: Local storage

2018-12-07 Thread McClune, James
Hi Adam,

My apologies, what version of CloudStack are you running?

Like Ivan said, you would need to enable local storage for the zone.

libvirt should create a local storage pool, as you stated. The UUID and
path are recorded in /etc/cloudstack/agent.properties, and virsh pool-list
will verify that the pool exists.
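
To illustrate (a sketch only — the UUID below is invented and the file is
written under /tmp rather than the real /etc path), this is roughly what to
look for on the KVM host:

```shell
# Sketch: what the local-storage entries in agent.properties look like.
# The UUID here is made up; on a real host, read /etc/cloudstack/agent.properties.
mkdir -p /tmp/acs-demo
cat > /tmp/acs-demo/agent.properties <<'EOF'
guest.cpu.mode=host-model
local.storage.uuid=0f2a5f9e-6a4b-4e2a-9d7c-1b3e5d7f9a1c
local.storage.path=/var/lib/libvirt/images/
EOF
# The two keys the agent uses to register the local pool:
grep '^local\.storage' /tmp/acs-demo/agent.properties
# On the real host, confirm libvirt created the matching pool with:
#   virsh pool-list --all
```

On a working host, the pool name reported by virsh typically matches
local.storage.uuid.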

Best,
James

On Fri, Dec 7, 2018 at 10:59 AM Adam Witwicki 
wrote:

> I thought it related to this in the agent.props
>
>
> local.storage.uuid=
> local.storage.path=/var/lib/libvirt/images/
>
>
> and enabling local storage in the zone??
>
> The link you sent shows none of that?
>
>
> Thanks
>
> Adam
>
> -----Original Message-----
> From: McClune, James 
> Sent: 07 December 2018 15:55
> To: users@cloudstack.apache.org
> Subject: Re: Local storage
>
>
> Hi Adam,
>
> This should do it:
>
> http://docs.cloudstack.apache.org/en/latest/adminguide/storage.html#using-local-storage-for-data-volumes
>
> Best,
> James
>
> On Fri, Dec 7, 2018 at 10:49 AM Adam Witwicki 
> wrote:
>
> > Hello
> >
> > Is there a guide on how to set up local storage using KVM in cloudstack?
> >
> > Thanks
> >
> > Adam
> >
> >
> >
> > Disclaimer Notice:
> > This email has been sent by Oakford Technology Limited, while we have
> > checked this e-mail and any attachments for viruses, we can not
> > guarantee that they are virus-free. You must therefore take full
> > responsibility for virus checking.
> > This message and any attachments are confidential and should only be
> > read by those to whom they are addressed. If you are not the intended
> > recipient, please contact us, delete the message from your computer
> > and destroy any copies. Any distribution or copying without our prior
> > permission is prohibited.
> > Internet communications are not always secure and therefore Oakford
> > Technology Limited does not accept legal responsibility for this message.
> > The recipient is responsible for verifying its authenticity before
> > acting on the contents. Any views or opinions presented are solely
> > those of the author and do not necessarily represent those of Oakford
> Technology Limited.
> > Registered address: Oakford Technology Limited, 10 Prince Maurice
> > Court, Devizes, Wiltshire. SN10 2RT.
> > Registered in England and Wales No. 5971519
> >
>


Re: Local storage

2018-12-07 Thread McClune, James
Hi Adam,

This should do it:
http://docs.cloudstack.apache.org/en/latest/adminguide/storage.html#using-local-storage-for-data-volumes

Best,
James

On Fri, Dec 7, 2018 at 10:49 AM Adam Witwicki 
wrote:

> Hello
>
> Is there a guide on how to set up local storage using KVM in cloudstack?
>
> Thanks
>
> Adam
>
>
>
>


Re: Upgrading from 4.9.3 to 4.11

2018-11-05 Thread McClune, James
Hi Sergey,

Thank you for the feedback! :)

I managed to update the schema to 4.10. I'm trying to get to 4.11. However,
I'm running into these errors now:

2018-11-05 10:23:19,896 INFO  [c.c.u.d.T.Transaction] (main:null) (logid:)
Is Data Base High Availiability enabled? Ans : false
2018-11-05 10:23:20,233 DEBUG [c.c.u.d.DriverLoader] (main:null) (logid:)
Successfully loaded DB driver com.mysql.jdbc.Driver
2018-11-05 10:23:20,239 DEBUG [c.c.u.d.DriverLoader] (main:null) (logid:)
DB driver com.mysql.jdbc.Driver was already loaded.
2018-11-05 10:23:20,240 DEBUG [c.c.u.d.DriverLoader] (main:null) (logid:)
DB driver com.mysql.jdbc.Driver was already loaded.
2018-11-05 10:23:20,567 DEBUG [c.c.u.d.ConnectionConcierge] (main:null)
(logid:) Registering a database connection for LockMaster1
2018-11-05 10:23:20,567 INFO  [c.c.u.d.Merovingian2] (main:null) (logid:)
Cleaning up locks for 90520739700260
2018-11-05 10:23:20,580 INFO  [c.c.u.d.Merovingian2] (main:null) (logid:)
Released 0 locks for 90520739700260
2018-11-05 10:23:20,951 INFO  [o.a.c.s.l.CloudStackExtendedLifeCycle]
(main:null) (logid:) Running system integrity checker
com.cloud.upgrade.DatabaseUpgradeChecker@1e6b9a95
2018-11-05 10:23:20,952 INFO  [c.c.u.DatabaseUpgradeChecker] (main:null)
(logid:) Grabbing lock to check for database upgrade.
2018-11-05 10:23:21,042 DEBUG [c.c.u.d.VersionDaoImpl] (main:null) (logid:)
Checking to see if the database is at a version before it was the version
table is created
2018-11-05 10:23:21,078 INFO  [c.c.u.DatabaseUpgradeChecker] (main:null)
(logid:) DB version = 4.10.0.0 Code Version = 4.11.0.0
2018-11-05 10:23:21,079 INFO  [c.c.u.DatabaseUpgradeChecker] (main:null)
(logid:) Database upgrade must be performed from 4.10.0.0 to 4.11.0.0
2018-11-05 10:23:21,081 DEBUG [c.c.u.DatabaseUpgradeChecker] (main:null)
(logid:) Running upgrade Upgrade41000to41100 to upgrade from
4.10.0.0-4.11.0.0 to 4.11.0.0
2018-11-05 10:23:21,086 DEBUG [c.c.u.d.ScriptRunner] (main:null) (logid:)
-- Licensed to the Apache Software Foundation (ASF) under one
2018-11-05 10:23:21,086 DEBUG [c.c.u.d.ScriptRunner] (main:null) (logid:)
-- or more contributor license agreements.  See the NOTICE file
2018-11-05 10:23:21,086 DEBUG [c.c.u.d.ScriptRunner] (main:null) (logid:)
-- distributed with this work for additional information
2018-11-05 10:23:21,086 DEBUG [c.c.u.d.ScriptRunner] (main:null) (logid:)
-- regarding copyright ownership.  The ASF licenses this file
2018-11-05 10:23:21,087 DEBUG [c.c.u.d.ScriptRunner] (main:null) (logid:)
-- to you under the Apache License, Version 2.0 (the
2018-11-05 10:23:21,087 DEBUG [c.c.u.d.ScriptRunner] (main:null) (logid:)
-- "License"); you may not use this file except in compliance
2018-11-05 10:23:21,087 DEBUG [c.c.u.d.ScriptRunner] (main:null) (logid:)
-- with the License.  You may obtain a copy of the License at
2018-11-05 10:23:21,087 DEBUG [c.c.u.d.ScriptRunner] (main:null) (logid:) --
2018-11-05 10:23:21,087 DEBUG [c.c.u.d.ScriptRunner] (main:null) (logid:)
--   http://www.apache.org/licenses/LICENSE-2.0
2018-11-05 10:23:21,087 DEBUG [c.c.u.d.ScriptRunner] (main:null) (logid:) --
2018-11-05 10:23:21,087 DEBUG [c.c.u.d.ScriptRunner] (main:null) (logid:)
-- Unless required by applicable law or agreed to in writing,
2018-11-05 10:23:21,087 DEBUG [c.c.u.d.ScriptRunner] (main:null) (logid:)
-- software distributed under the License is distributed on an
2018-11-05 10:23:21,088 DEBUG [c.c.u.d.ScriptRunner] (main:null) (logid:)
-- "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
2018-11-05 10:23:21,088 DEBUG [c.c.u.d.ScriptRunner] (main:null) (logid:)
-- KIND, either express or implied.  See the License for the
2018-11-05 10:23:21,088 DEBUG [c.c.u.d.ScriptRunner] (main:null) (logid:)
-- specific language governing permissions and limitations
2018-11-05 10:23:21,088 DEBUG [c.c.u.d.ScriptRunner] (main:null) (logid:)
-- under the License.
2018-11-05 10:23:21,088 DEBUG [c.c.u.d.ScriptRunner] (main:null) (logid:)
--;
2018-11-05 10:23:21,088 DEBUG [c.c.u.d.ScriptRunner] (main:null) (logid:)
-- Schema upgrade from 4.10.0.0 to 4.11.0.0
2018-11-05 10:23:21,088 DEBUG [c.c.u.d.ScriptRunner] (main:null) (logid:)
--;
2018-11-05 10:23:21,088 DEBUG [c.c.u.d.ScriptRunner] (main:null) (logid:)
-- Add For VPC flag
2018-11-05 10:23:21,088 DEBUG [c.c.u.d.ScriptRunner] (main:null) (logid:)
ALTER TABLE cloud.network_offerings ADD COLUMN for_vpc INT(1) NOT NULL
DEFAULT 0
2018-11-05 10:23:21,092 ERROR [c.c.u.d.ScriptRunner] (main:null) (logid:)
Error executing: ALTER TABLE cloud.network_offerings ADD COLUMN for_vpc
INT(1) NOT NULL DEFAULT 0
2018-11-05 10:23:21,093 ERROR [c.c.u.d.ScriptRunner] (main:null) (logid:)
com.mysql.jdbc.exceptions.jdbc4.MySQLSyntaxErrorException: Duplicate column
name 'for_vpc'
2018-11-05 10:23:21,094 ERROR [c.c.u.DatabaseUpgradeChecker] (main:null)
(logid:) Unable to execute upgrade script
com.mysql.jdbc.exceptions.jdbc4.MySQLSyntaxErrorException: Duplicate column
name 'for_vpc'
at 
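
The "Duplicate column" failure above usually means an earlier, partially
applied upgrade run already added for_vpc. One hedged recovery — take a
database backup first, and verify the column really is a leftover — is to
drop it so the upgrade script can re-add it. A sketch (the SQL is written to
a file for review, not executed here):

```shell
# Sketch only: SQL to clear a leftover for_vpc column before re-running the
# 4.10 -> 4.11 upgrade. Back up the cloud database before applying anything.
cat > /tmp/fix_for_vpc.sql <<'EOF'
-- Confirm the column exists (i.e. it was left behind by a partial run):
SELECT COLUMN_NAME FROM information_schema.COLUMNS
 WHERE TABLE_SCHEMA = 'cloud'
   AND TABLE_NAME   = 'network_offerings'
   AND COLUMN_NAME  = 'for_vpc';
-- If it shows up, drop it so Upgrade41000to41100 can add it cleanly:
ALTER TABLE cloud.network_offerings DROP COLUMN for_vpc;
EOF
# Review the file, then apply with something like:
#   mysql -u cloud -p < /tmp/fix_for_vpc.sql
grep 'for_vpc' /tmp/fix_for_vpc.sql
```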

Upgrading from 4.9.3 to 4.11

2018-11-02 Thread McClune, James
Hello CloudStack Community,

I was wondering if you could help me a little further. I referenced the
documentation Rohit sent:

http://docs.cloudstack.apache.org/en/4.11.1.0/upgrading/upgrade/upgrade-4.9.html

I backed up my cloud database and followed the instructions verbatim. My
CloudStack management server is running on Ubuntu 14.04. When I upgrade my
packages to 4.11, the management server doesn't start correctly (404 error
on /client). I was getting errors along the lines of "Cannot connect to
database". I checked my db.properties file and made sure the
cluster.node.ip was set correctly, as well as database authentication (e.g.
MySQL username/password). I ran the Java command to encrypt my database
password, as described here:

http://docs.cloudstack.apache.org/projects/cloudstack-administration/en/4.11/management.html#changing-the-database-password

I added the password to db.properties, suspecting that my authentication was
somehow screwed up. I still got the errors. I didn't capture those errors,
but I'll retry and send them to you.
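
For reference, the encryption step from those docs looks roughly like the
sketch below. The jasypt jar name/version and the db.properties values are
assumptions (they vary by release), so treat this as illustrative only:

```shell
# Sketch: encrypting the DB password for db.properties. The jar path/version
# is an assumption; check /usr/share/cloudstack-common/lib on your install.
#   java -classpath /usr/share/cloudstack-common/lib/jasypt-1.9.2.jar \
#     org.jasypt.intf.cli.JasyptPBEStringEncryptionCLI encrypt.sh \
#     input=dbpassword password=management_server_key verbose=false
# The resulting string goes into db.properties wrapped in ENC(...):
cat > /tmp/db.properties.demo <<'EOF'
db.cloud.username=cloud
db.cloud.password=ENC(output-of-the-jasypt-command)
db.cloud.encryption.type=file
EOF
grep 'ENC(' /tmp/db.properties.demo
```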

I also tried doing a manual upgrade and re-importing the database schema.
However, the 4.9.3 schema is different from 4.11 (as expected), so I
started to get errors:

root@cloudstack:~# tail -f
/var/log/cloudstack/management/management-server.log
2018-11-01 21:04:50,909 INFO  [o.e.j.s.h.ContextHandler] (main:null)
(logid:) Started o.e.j.s.h.MovedContextHandler@5cdd8682{/,null,AVAILABLE}
2018-11-02 08:28:23,152 INFO  [o.a.c.ServerDaemon] (main:null) (logid:)
Server configuration file found:
/etc/cloudstack/management/server.properties
2018-11-02 08:28:23,160 INFO  [o.a.c.ServerDaemon] (main:null) (logid:)
Initializing server daemon on :::8080, with https.enabled=false,
https.port=8443, context.path=/client
2018-11-02 08:28:23,172 INFO  [o.e.j.u.log] (main:null) (logid:) Logging
initialized @853ms to org.eclipse.jetty.util.log.Slf4jLog
2018-11-02 08:28:23,406 INFO  [o.e.j.s.Server] (main:null) (logid:)
jetty-9.4.z-SNAPSHOT, build timestamp: 2017-11-21T16:27:37-05:00, git hash:
82b8fb23f757335bb3329d540ce37a2a2615f0a8
2018-11-02 08:28:23,454 INFO  [o.e.j.s.AbstractNCSARequestLog] (main:null)
(logid:) Opened /var/log/cloudstack/management/access.log
2018-11-02 08:28:23,586 INFO  [o.e.j.w.StandardDescriptorProcessor]
(main:null) (logid:) NO JSP Support for /client, did not find
org.eclipse.jetty.jsp.JettyJspServlet
2018-11-02 08:28:23,619 INFO  [o.e.j.s.session] (main:null) (logid:)
DefaultSessionIdManager workerName=node0
2018-11-02 08:28:23,619 INFO  [o.e.j.s.session] (main:null) (logid:) No
SessionScavenger set, using defaults
2018-11-02 08:28:23,621 INFO  [o.e.j.s.session] (main:null) (logid:)
Scavenging every 66ms
2018-11-02 08:28:45,916 INFO  [o.a.c.s.m.m.i.DefaultModuleDefinitionSet]
(main:null) (logid:) Loading module context [bootstrap] from URL
[jar:file:/usr/share/cloudstack-management/lib/cloudstack-4.11.0.0.jar!/META-INF/cloudstack/bootstrap/spring-bootstrap-context.xml]
2018-11-02 08:28:45,917 INFO  [o.a.c.s.m.m.i.DefaultModuleDefinitionSet]
(main:null) (logid:) Loading module context [bootstrap] from URL
[jar:file:/usr/share/cloudstack-management/lib/cloudstack-4.11.0.0.jar!/META-INF/cloudstack/bootstrap/spring-bootstrap-context-inheritable.xml]
2018-11-02 08:28:46,132 DEBUG [c.c.u.c.EncryptionSecretKeyChecker]
(main:null) (logid:) Encryption Type: file
2018-11-02 08:28:46,186 INFO  [c.c.u.LogUtils] (main:null) (logid:) log4j
configuration found at /etc/cloudstack/management/log4j-cloud.xml
2018-11-02 08:28:46,233 INFO  [o.a.c.s.m.m.i.DefaultModuleDefinitionSet]
(main:null) (logid:) Loaded module context [bootstrap] in 317 ms
2018-11-02 08:28:46,236 INFO  [o.a.c.s.m.m.i.DefaultModuleDefinitionSet]
(main:null) (logid:) Module Hierarchy: bootstrap
2018-11-02 08:28:46,236 INFO  [o.a.c.s.m.m.i.DefaultModuleDefinitionSet]
(main:null) (logid:) Module Hierarchy:   system
2018-11-02 08:28:46,237 INFO  [o.a.c.s.m.m.i.DefaultModuleDefinitionSet]
(main:null) (logid:) Module Hierarchy: core
2018-11-02 08:28:46,237 INFO  [o.a.c.s.m.m.i.DefaultModuleDefinitionSet]
(main:null) (logid:) Module Hierarchy:   allocator
2018-11-02 08:28:46,237 INFO  [o.a.c.s.m.m.i.DefaultModuleDefinitionSet]
(main:null) (logid:) Module Hierarchy: host-allocator-random
2018-11-02 08:28:46,237 INFO  [o.a.c.s.m.m.i.DefaultModuleDefinitionSet]
(main:null) (logid:) Module Hierarchy: planner
2018-11-02 08:28:46,238 INFO  [o.a.c.s.m.m.i.DefaultModuleDefinitionSet]
(main:null) (logid:) Module Hierarchy:   api-planner
2018-11-02 08:28:46,238 INFO  [o.a.c.s.m.m.i.DefaultModuleDefinitionSet]
(main:null) (logid:) Module Hierarchy:   baremetal-planner
2018-11-02 08:28:46,238 INFO  [o.a.c.s.m.m.i.DefaultModuleDefinitionSet]
(main:null) (logid:) Module Hierarchy:   explicit-dedication
2018-11-02 08:28:46,238 INFO  [o.a.c.s.m.m.i.DefaultModuleDefinitionSet]
(main:null) (logid:) Module Hierarchy:   host-anti-affinity
2018-11-02 08:28:46,239 INFO  

Upgrading from 4.9.3 to 4.11

2018-10-31 Thread McClune, James
Hello CloudStack Community,

I work for a school district in Ohio and we've been running ACS 4.9.3 for
over a year now. Great piece of software! We're using Ceph (Luminous) for
our storage backend and KVM/libvirt for virtualization.

We're starting to expand our private cloud. We'd like to rely on external
systems for IP services (e.g. DHCP, DNS, routing, etc.). Right now, we're
running a basic network zone (with ACS managing IP services). I'm building
a roadmap for upgrading to ACS 4.11. I'm looking to implement the L2
functionality in 4.11, as described here:

https://www.shapeblue.com/layer-2-networks-in-cloudstack/

I was wondering if anyone has advice on upgrading to 4.11 from 4.9.3 (i.e.
important things to watch for, problems encountered, etc.). I'm referencing
the documentation here:

http://docs.cloudstack.apache.org/projects/cloudstack-release-notes/en/4.11.0.0/upgrade/upgrade-4.9.html

I appreciate all input! :)

Thanks,

--

James McClune

Technical Support Specialist

Norwalk City Schools

Phone: 419-660-6590

mcclu...@norwalktruckers.net


Re: CEPH / CloudStack features

2018-07-30 Thread McClune, James
Hi Dag,

I updated the documentation on the Ceph site a couple months ago (
https://github.com/ceph/ceph/pull/21050). I updated the Primary Storage
section as well as outdated Disk Offering links (These should reflect
4.11/master specs).

We use KVM/CloudStack with Ceph storage. We're still running ACS 4.9.3 with
minimal issues (some of our idiosyncrasies are related to Ceph). I can say,
from my experiences, CloudStack/KVM interfaces fairly well with Ceph (we
use Luminous).

Best,
James

On Fri, Jul 27, 2018 at 6:18 AM, Dag Sonstebo 
wrote:

> Hi all,
>
> I’m trying to find out more about CEPH compatibility with CloudStack / KVM
> – i.e. trying to put together a feature matrix of what works  and what
> doesn’t compared to NFS (or other block storage platforms).
> There’s not a lot of up to date information on this – the configuration
> guide on [1] is all I’ve located so far apart from a couple of one-liners
> in the official documentation.
>
> Could I get some feedback from the Ceph users in the community?
>
> Regards,
> Dag Sonstebo
>
> [1] http://docs.ceph.com/docs/master/rbd/rbd-cloudstack/
>
> dag.sonst...@shapeblue.com
> www.shapeblue.com
> 53 Chandos Place, Covent Garden, London  WC2N 4HSUK
> @shapeblue
>
>
>
>




Re: SSVM's not starting, timeout for libvirt python script in agent.log

2018-06-12 Thread McClune, James
Hi Christoffer,

Did you try setting the max.template.iso.size to a higher value (e.g. 50GB)?

I think the default is set pretty low.
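
For what it's worth, max.template.iso.size is a global setting expressed in
GB; a hedged sketch of raising it (the cloudmonkey syntax below is from
memory and may differ by version):

```shell
# Hedged sketch: raise the global template/ISO size limit; the setting may
# need a management-server restart to take effect. CLI syntax varies:
#   cloudmonkey update configuration name=max.template.iso.size value=50
#   systemctl restart cloudstack-management
# The byte threshold a 50 GB limit implies:
echo $((50 * 1024 * 1024 * 1024))   # prints 53687091200
```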

Best,
James

On Tue, Jun 12, 2018 at 11:10 AM, Christoffer Pedersen  wrote:

> Just want to update that I still get the ISO error:
>
> 2018-06-12 17:09:42,194 INFO  [c.c.t.HypervisorTemplateAdapter]
> (qtp1096283470-12:ctx-962ec49c ctx-2c82c0e2) (logid:0c7df1d7) Image store
> doesn't have enough capacity. Skip downloading template to this image store
> 1
>
> Thanks!
>
> On Tue, Jun 12, 2018 at 5:07 PM, Christoffer Pedersen 
> wrote:
>
>> Hi Dag,
>>
>> Yes, I altered the IP addresses as I do not fancy throwing them out on
>> the public net. If you think they have value to the troubleshooting, I can
>> send you the original logdata directly. I configured the networking with
>> OVS and as follows:
>>
>> cloudbr0 - MGMT0 (management interface, VLAN1000), this also hosts the
>> system VMs I guess. In the Zone setup, I labelled the Management network as
>> MGMT0, I guess that's OK? It's an internal network where other servers are
>> also connected. I can ping the VMs here, also over a S2S connection
>> cloudbr1 - Public (VLAN 4000) and Guest network (VLAN 500-999). I use
>> real public IP-addresses from a scope and I am able to ping the addresses
>> of both the CPVM and SSVM from my home.
>>
>> I have to update the situation though...
>>
>> Somehow at 13:02, I had the last error from the agent. Now, I do not have
>> any other errors and both VMs are now showing as "running" as well as agent
>> being "up" in the UI... I had these errors since yesterday and did not
>> change anything after 9:00 this morning. Maybe I'm impatient but 4 hours
>> seems a bit long for them to get to work. :)
>>
>> Console VM works though at least:
>>
>>
>>
>> My impression is that CloudStack needs a while to get hold of things...
>> or am I just experiencing unusual things? I have a question though: When
>> adding an ISO this morning, I got an error saying there was no space
>> left (though the storage is 20TB). Was this because the SSVM was not
>> running at the time?
>>
>> Thank you!
>> Chris
>>
>> On Tue, Jun 12, 2018 at 4:39 PM, Dag Sonstebo > > wrote:
>>
>>> Chris,
>>>
>>> Going off in a slightly different direction to previous answers. I
>>> suspect your problem is with networking - how have you configured this?
>>> When you say you can ping the SSVM on the private interface, which IP
>>> address do you use and where do you successfully ping from?
>>>
>>> /usr/share/cloudstack-common/scripts/vm/hypervisor/kvm/patchviasocket.py -n v-1-VM -p
>>> %template=domP%type=consoleproxy%host=1.1.1.1%port=8250%name=v-1-VM%zone=1%pod=1%guid=Proxy.1%proxy_vm=1%disable_rp_filter=true%eth2ip=9.9.9.9%eth2mask=255.255.255.0%gateway=9.9.9.1%eth0ip=169.254.3.159%eth0mask=255.255.0.0%eth1ip=6.6.6.6%eth1mask=255.255.255.0%mgmtcidr=9.9.9.0/24%localgw=1.2.3.4%internaldns1=1.2.3.4%dns1=8.8.8.8%dns2=8.8.4.4
>>>
>>> It could be you have edited the above IP addresses to mask your real
>>> addresses – if so ignore this.
>>>
>>> If not then the above points to:
>>> - Management host is on 1.1.1.1
>>> - Eth2 which for a console proxy is public traffic is on 9.9.9.9/24
>>> - Eth0 which is the link local management interface is on
>>> 169.254.3.159/16 (system generated)
>>> - Eth1 is the main management interface on 6.6.6.6/24
>>> - You have a gateway address of 1.2.3.4
>>>
>>> So in this case – the CPVM can not check in to the management host on
>>> 1.1.1.1 -  It’s got no interface on that subnet and it also has a gateway
>>> it’s not able to reach.
>>>
>>> Regards,
>>> Dag Sonstebo
>>> Cloud Architect
>>> ShapeBlue
>>>
>>> On 12/06/2018, 13:12, "Nicolas Bouige"  wrote:
>>>
>>> Hi Ivan,
>>>
>>>
>>> Are you talking about this global parameters :
>>>
>>> router.aggregation.command.each.timeout
>>>
>>>
>>>
>>> Best regards,
>>>
>>> Nicolas Bouige
>>> DIMSI
>>> cloud.dimsi.fr
>>> 4, avenue Laurent Cely
>>> 
>>> Tour d’Asnière – 92600 Asnière sur Seine
>>> T/ +33 (0)6 28 98 53 40
>>>
>>>
>>> 
>>> De : Ivan Kudryavtsev 
>>> Envoyé : mardi 12 juin 2018 13:59:39
>>> À : users
>>> Objet : Re: SSVM's not starting, timeout for libvirt python script
>>> in agent.log
>>>
>>> Increasing command timeouts in global parameters can work here. At
>>> least I
>>> met similar behaviour with VR.
>>>
>>> Tue, 12 Jun 2018 at 14:39, Christoffer Pedersen :
>>>
>>> > Hi Nicolas,
>>> >
>>> > I did a apt show qemu and it gave me this version:
>>> >
>>> > Version: 1:2.5+dfsg-5ubuntu10.29
>>> >
>>> > So I guess tha would be version 2.5?
>>> >
>>>

Re: Problems with KVM HA & STONITH

2018-04-05 Thread McClune, James
Hi Victor,

If I may interject, I read your email and understand you're running KVM
with Ceph storage. As far as I know, ACS only supports HA on NFS or iSCSI
primary storage.

http://docs.cloudstack.apache.org/projects/cloudstack-administration/en/4.11/reliability.html

However, if you wanted to use Ceph, you could create an RBD block device
and export it over NFS. Here is an article I referenced in the past:

https://www.sebastien-han.fr/blog/2012/07/06/nfs-over-rbd/

You could then add that NFS storage into ACS and utilize HA. I hope I'm
understanding you correctly.
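
A rough sketch of what the linked article does (pool, image, and device
names are illustrative; the rbd/mkfs/exportfs lines are shown as comments,
and the runnable part only writes a demo exports file):

```shell
# Sketch of RBD-over-NFS, after the linked article. Nothing here touches a
# real Ceph cluster; names and sizes are placeholders.
#   rbd create rbd/acs-primary --size 1048576     # 1 TiB image
#   rbd map rbd/acs-primary                       # maps to e.g. /dev/rbd0
#   mkfs.ext4 /dev/rbd0
#   mount /dev/rbd0 /mnt/acs-primary
cat > /tmp/exports.demo <<'EOF'
/mnt/acs-primary 10.10.13.0/24(rw,async,no_root_squash,no_subtree_check)
EOF
#   exportfs -ra   # then add the NFS share as primary storage in ACS
cat /tmp/exports.demo
```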

Best Regards,
James

On Thu, Apr 5, 2018 at 12:53 PM, victor <vic...@ihnetworks.com> wrote:

> Hello Boris,
>
> I am able to create a VM with NFS+HA and NFS without HA. The issue is with
> creating a VM with Ceph storage.
>
> Regards
> Victor
>
>
>
> On 04/05/2018 01:18 PM, Boris Stoyanov wrote:
>
>> Hi Victor,
>> Host HA is working only with KVM + NFS. Ceph is not supported at this
>> stage. Obviously RAW volumes are not supported on your pool, but I’m not
>> sure if that’s because of Ceph or HA in general. Are you able to deploy a
>> non-ha VM?
>>
>> Boris Stoyanov
>>
>>
>> boris.stoya...@shapeblue.com
>>
>>
>>> On 5 Apr 2018, at 4:19, victor <vic...@ihnetworks.com> wrote:
>>>
>>> Hello Rohit,
>>>
>>> Has the Host HA provider started working with Ceph? The reason I am asking
>>> is that I am not able to create a VM with Ceph storage on a KVM host
>>> with HA enabled, and I am getting the following error while creating the VM.
>>>
>>> 
>>> .cloud.exception.StorageUnavailableException: Resource [StoragePool:2]
>>> is unreachable: Unable to create Vol[9|vm=6|DATADISK]:com.cloud
>>> .utils.exception.CloudRuntimeException: org.libvirt.LibvirtException:
>>> unsupported configuration: only RAW volumes are supported by this storage
>>> pool
>>> 
>>>
>>> Regards
>>> Victor
>>>
>>> On 11/04/2017 09:53 PM, Rohit Yadav wrote:
>>>
>>>> Hi James, (/cc Simon and others),
>>>>
>>>>
>>>> A new feature exists in upcoming ACS 4.11, Host HA:
>>>>
>>>> https://cwiki.apache.org/confluence/display/CLOUDSTACK/Host+HA
>>>>
>>>> You can read more about it here as well: http://www.shapeblue.com/host-
>>>> ha-for-kvm-hosts-in-cloudstack/
>>>>
>>>> This feature can use a custom HA provider, with default HA provider
>>>> implemented for KVM and NFS, and uses ipmi based fencing (STONITH) of the
>>>> host. The current HA mechanism provides no such method of fencing (powering
>>>> off) a host, and it depends on the circumstances under which VM HA is failing
>>>> (environment issues, ACS version etc).
>>>>
>>>> As Simon mentioned, we will have a (host) HA provider that works with Ceph
>>>> in the near future.
>>>>
>>>> Regards.
>>>>
>>>> 
>>>> From: Simon Weller <swel...@ena.com.INVALID>
>>>> Sent: Thursday, November 2, 2017 7:27:22 PM
>>>> To: users@cloudstack.apache.org
>>>> Subject: Re: Problems with KVM HA & STONITH
>>>>
>>>> James,
>>>>
>>>>
>>>> Ceph is a great solution and we run all of our ACS storage on Ceph.
>>>> Note that it adds another layer of complexity to your installation, so
>>>> you're going to need to develop some expertise with that platform to get
>>>> comfortable with how it works. Typically you don't want to mix Ceph with
>>>> your ACS hosts. We in fact deploy 3 separate Ceph Monitors, and then scale
>>>> OSDs as required on a per cluster basis in order to add additional
>>>> resiliency (so every KVM ACS cluster has its own Ceph "POD"). We also use
>>>> Ceph for S3 storage (on completely separate Ceph clusters) for some other
>>>> services.
>>>>
>>>>
>>>> NFS is much simpler to maintain for smaller installations in my
>>>> opinion. If the IO load you're looking at isn't going to be insanely high,
>>>> you could look at building a 2-node NFS cluster using Pacemaker and DRBD
>>>> for data replication between nodes. That would reduce your storage
>>>> requirement to 2 fairly low power servers (NFS is not very cpu intensive).
>>>> 

Re: Failed to add data store: DB Exception on: com.mysql.jdbc.JDBC42PreparedStatement

2018-03-12 Thread McClune, James
I'm using the UI to register storage.

- James

On Mon, Mar 12, 2018 at 3:50 PM, Rafael Weingärtner <
rafaelweingart...@gmail.com> wrote:

> So, you are getting the error while adding the storage in ACS. Right?
> I copied and pasted the SQL you have in your logs, and I got this:
> Error: Column 'host_address' cannot be null
> SQLState:  23000
>
>
> This is the SQL I executed, which is in one of your emails:
>
> > INSERT INTO storage_pool
> > (storage_pool.id, storage_pool.name, storage_pool.uuid,
> > storage_pool.pool_type, storage_pool.created, storage_pool.update_time,
> > storage_pool.data_center_id, storage_pool.pod_id, storage_pool.used_bytes,
> > storage_pool.capacity_bytes, storage_pool.status,
> > storage_pool.storage_provider_name, storage_pool.host_address,
> > storage_pool.path, storage_pool.port, storage_pool.user_info,
> > storage_pool.cluster_id, storage_pool.scope, storage_pool.managed,
> > storage_pool.capacity_iops, storage_pool.hypervisor)
> > VALUES
> > (0, _binary'PrimaryStorage1', _binary'9c279e74-15a5-3c8a-b0d1-14349b59710a',
> > 'RBD', '2018-03-12 14:55:18', null, 1, 1, 0, 0, 'Initialized',
> > _binary'DefaultPrimary', null, _binary'6A==@10.10.13.141/rbd', 6789,
> > null, 1, null, 0, null, null)
> >
>
> Are you using the API or UI to register this storage?
>
>
> On Mon, Mar 12, 2018 at 4:47 PM, McClune, James <
> mcclu...@norwalktruckers.net> wrote:
>
> > I started fresh with 4.11.0. The Ceph storage was already setup, however,
> > I'm just now adding it to ACS.
> >
> > I get this error anytime I add primary storage. Secondary storage can be
> > added with no problems.
> >
> > - James
> >
> > On Mon, Mar 12, 2018 at 3:39 PM, Rafael Weingärtner <
> > rafaelweingart...@gmail.com> wrote:
> >
> > > When are you receiving this error? I mean, are you receiving it when
> you
> > > add the storage? Or, when you do something else? Was the storage
> already
> > > configured?
> > >
> > >
> > > On Mon, Mar 12, 2018 at 4:35 PM, McClune, James <
> > > mcclu...@norwalktruckers.net> wrote:
> > >
> > > > I'm using Oracle MySQL.
> > > >
> > > > - James
> > > >
> > > > On Mon, Mar 12, 2018 at 3:33 PM, Rafael Weingärtner <
> > > > rafaelweingart...@gmail.com> wrote:
> > > >
> > > > > Well, you can delete the entry you just inserted. The idea of running
> > > > > the SQL was to find the problem.
> > > > > It is odd, I was expecting it to be a missing column problem. What is
> > > > > the DB that you are using? MariaDB or Oracle MySQL?
> > > > >
> > > > > On Mon, Mar 12, 2018 at 4:29 PM, McClune, James <
> > > > > mcclu...@norwalktruckers.net> wrote:
> > > > >
> > > > > > Hi Rafael,
> > > > > >
> > > > > > I ran the SQL query and got these errors:
> > > > > >
> > > > > >
> > > > > > When I deleted the *_binary* and *storage_pool.* from the query
> > > > entries,
> > > > > > it seemed to work. However, I'm still experiencing oddities:
> > > > > >
> > > > > >
> > > > > > Some of my entries were 'null', when they shouldn't be.
> > > > > >
> > > > > > Any help is much appreciated.
> > > > > >
> > > > > > Thanks,
> > > > > > James
> > > > > >
> > > > > > On Mon, Mar 12, 2018 at 3:06 PM, Rafael Weingärtner <
> > > > > > rafaelweingart...@gmail.com> wrote:
> > > > > >
> > > > > >> Did you try running the insert that is causing the error
> manually?
> > > > > >>
> > > > > >> On Mon, Mar 12, 2018 at 4:04 PM, McClune, James <
> > > > > >> mcclu...@norwalktruckers.net> wrote:
> > > > > >>
> > > > > >> > Hello CloudStack Community,
> > > > > >> >
> > > > > >> > I just upgraded ACS from 4.9.3 to 4.11.0. I'm revamping my
> > primary
> > > > > >> storage
> > > > > >> > (Ceph RBD). The pr

Re: Failed to add data store: DB Exception on: com.mysql.jdbc.JDBC42PreparedStatement

2018-03-12 Thread McClune, James
Yes, I got the "Error: Column 'host_address' cannot be null" message as well.
I manually added my RADOS monitor IP for the host_address.

Is this right?
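
For what it's worth, the failing INSERT quoted in this thread suggests the
monitor address landed inside storage_pool.path while host_address stayed
null. A hedged sketch of how the RBD fields appear to map (values taken from
the quoted log, not from a verified working system):

```shell
# Illustrative mapping only, inferred from the INSERT quoted in this thread.
cat > /tmp/rbd-field-map.txt <<'EOF'
RADOS monitor   -> storage_pool.host_address  (NOT NULL, e.g. 10.10.13.141)
RADOS port      -> storage_pool.port          (6789)
RADOS pool name -> last element of storage_pool.path
RADOS user/key  -> encoded prefix of storage_pool.path
EOF
cat /tmp/rbd-field-map.txt
```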

- James

On Mon, Mar 12, 2018 at 3:51 PM, McClune, James <
mcclu...@norwalktruckers.net> wrote:

> Sorry if I confused you on upgrading from 4.9.3 to 4.11.0. I decided to
> start from scratch and re-installed with 4.11.0.
>
> I did not use any old SQL backups from 4.9.3.
>
> - James
>
> On Mon, Mar 12, 2018 at 3:39 PM, Rafael Weingärtner <
> rafaelweingart...@gmail.com> wrote:
>
>> When are you receiving this error? I mean, are you receiving it when you
>> add the storage? Or, when you do something else? Was the storage already
>> configured?
>>
>>
>> On Mon, Mar 12, 2018 at 4:35 PM, McClune, James <
>> mcclu...@norwalktruckers.net> wrote:
>>
>> > I'm using Oracle MySQL.
>> >
>> > - James
>> >
>> > On Mon, Mar 12, 2018 at 3:33 PM, Rafael Weingärtner <
>> > rafaelweingart...@gmail.com> wrote:
>> >
>> > > Well, you can delete the entry you just inserted. The idea of running
>> > > the SQL was to find the problem.
>> > > It is odd, I was expecting it to be a missing column problem. What is
>> > > the DB that you are using? MariaDB or Oracle MySQL?
>> > >
>> > > On Mon, Mar 12, 2018 at 4:29 PM, McClune, James <
>> > > mcclu...@norwalktruckers.net> wrote:
>> > >
>> > > > Hi Rafael,
>> > > >
>> > > > I ran the SQL query and got these errors:
>> > > >
>> > > >
>> > > > When I deleted the *_binary* and *storage_pool.* from the query
>> > entries,
>> > > > it seemed to work. However, I'm still experiencing oddities:
>> > > >
>> > > >
>> > > > Some of my entries were 'null', when they shouldn't be.
>> > > >
>> > > > Any help is much appreciated.
>> > > >
>> > > > Thanks,
>> > > > James
>> > > >
>> > > > On Mon, Mar 12, 2018 at 3:06 PM, Rafael Weingärtner <
>> > > > rafaelweingart...@gmail.com> wrote:
>> > > >
>> > > >> Did you try running the insert that is causing the error manually?
>> > > >>
>> > > >> On Mon, Mar 12, 2018 at 4:04 PM, McClune, James <
>> > > >> mcclu...@norwalktruckers.net> wrote:
>> > > >>
>> > > >> > Hello CloudStack Community,
>> > > >> >
>> > > >> > I just upgraded ACS from 4.9.3 to 4.11.0. I'm revamping my
>> primary
>> > > >> storage
>> > > >> > (Ceph RBD). The problem I'm experiencing is every time I try to
>> > re-add
>> > > >> my
>> > > >> > Ceph storage pool, I get this error:
>> > > >> >
>> > > >> >
>> > > >> >- Something went wrong; please correct the following:
>> > > >> >Failed to add data store: DB Exception on:
>> > > >> >com.mysql.jdbc.JDBC42PreparedStatement@71996b4a: INSERT INTO
>> > > >> >storage_pool (storage_pool.id, storage_pool.name,
>> > > storage_pool.uuid,
>> > > >> >storage_pool.pool_type, storage_pool.created,
>> > > >> storage_pool.update_time,
>> > > >> >storage_pool.data_center_id, storage_pool.pod_id,
>> > > >> > storage_pool.used_bytes,
>> > > >> >storage_pool.capacity_bytes, storage_pool.status,
>> > > >> >storage_pool.storage_provider_name,
>> storage_pool.host_address,
>> > > >> >storage_pool.path, storage_pool.port, storage_pool.user_info,
>> > > >> >storage_pool.cluster_id, storage_pool.scope,
>> > storage_pool.managed,
>> > > >> >storage_pool.capacity_iops, storage_pool.hypervisor) VALUES
>> (0,
>> > > >> >_binary'PrimaryStorage1', _binary'9c279e74-15a5-3c8a-
>> > > >> > b0d1-14349b59710a',
>> > > >> >'RBD', '2018-03-12 14:55:18', null, 1, 1, 0, 0, 'Initialized',
>> > > >> >_binary'DefaultPrimary', null, _binary'6A==@10.10.13.141/rbd'
>> ,
>> > > 6789,
>> > > >> >null, 1, null, 0, null, null)
>> > > >> >
>> > > >> > Here is th

Re: Failed to add data store: DB Exception on: com.mysql.jdbc.JDBC42PreparedStatement

2018-03-12 Thread McClune, James
Sorry if I confused you on upgrading from 4.9.3 to 4.11.0. I decided to
start from scratch and re-installed with 4.11.0.

I did not use any old SQL backups from 4.9.3.

- James

On Mon, Mar 12, 2018 at 3:39 PM, Rafael Weingärtner <
rafaelweingart...@gmail.com> wrote:

> When are you receiving this error? I mean, are you receiving it when you
> add the storage? Or, when you do something else? Was the storage already
> configured?
>
>
> On Mon, Mar 12, 2018 at 4:35 PM, McClune, James <
> mcclu...@norwalktruckers.net> wrote:
>
> > I'm using Oracle MySQL.
> >
> > - James
> >
> > On Mon, Mar 12, 2018 at 3:33 PM, Rafael Weingärtner <
> > rafaelweingart...@gmail.com> wrote:
> >
> > > Well, you can delete the entry you just inserted. The idea of running
> > > the SQL was to find the problem.
> > > It is odd; I was expecting a missing-column problem. What is the DB
> > > that you are using? MariaDB or Oracle MySQL?
> > >
> > > On Mon, Mar 12, 2018 at 4:29 PM, McClune, James <
> > > mcclu...@norwalktruckers.net> wrote:
> > >
> > > > Hi Rafael,
> > > >
> > > > I ran the SQL query and got these errors:
> > > >
> > > >
> > > > When I deleted the *_binary* and *storage_pool.* from the query
> > entries,
> > > > it seemed to work. However, I'm still experiencing oddities:
> > > >
> > > >
> > > > Some of my entries were 'null', when they shouldn't be.
> > > >
> > > > Any help is much appreciated.
> > > >
> > > > Thanks,
> > > > James
> > > >
> > > > On Mon, Mar 12, 2018 at 3:06 PM, Rafael Weingärtner <
> > > > rafaelweingart...@gmail.com> wrote:
> > > >
> > > >> Did you try running the insert that is causing the error manually?
> > > >>
> > > >> On Mon, Mar 12, 2018 at 4:04 PM, McClune, James <
> > > >> mcclu...@norwalktruckers.net> wrote:
> > > >>
> > > >> > Hello CloudStack Community,
> > > >> >
> > > >> > I just upgraded ACS from 4.9.3 to 4.11.0. I'm revamping my primary
> > > >> storage
> > > >> > (Ceph RBD). The problem I'm experiencing is every time I try to
> > re-add
> > > >> my
> > > >> > Ceph storage pool, I get this error:
> > > >> >
> > > >> >
> > > >> >- Something went wrong; please correct the following:
> > > >> >Failed to add data store: DB Exception on:
> > > >> >com.mysql.jdbc.JDBC42PreparedStatement@71996b4a: INSERT INTO
> > > >> >storage_pool (storage_pool.id, storage_pool.name,
> > > storage_pool.uuid,
> > > >> >storage_pool.pool_type, storage_pool.created,
> > > >> storage_pool.update_time,
> > > >> >storage_pool.data_center_id, storage_pool.pod_id,
> > > >> > storage_pool.used_bytes,
> > > >> >storage_pool.capacity_bytes, storage_pool.status,
> > > >> >storage_pool.storage_provider_name, storage_pool.host_address,
> > > >> >storage_pool.path, storage_pool.port, storage_pool.user_info,
> > > >> >storage_pool.cluster_id, storage_pool.scope,
> > storage_pool.managed,
> > > >> >storage_pool.capacity_iops, storage_pool.hypervisor) VALUES (0,
> > > >> >_binary'PrimaryStorage1', _binary'9c279e74-15a5-3c8a-
> > > >> > b0d1-14349b59710a',
> > > >> >'RBD', '2018-03-12 14:55:18', null, 1, 1, 0, 0, 'Initialized',
> > > >> >_binary'DefaultPrimary', null, _binary'6A==@10.10.13.141/rbd',
> > > 6789,
> > > >> >null, 1, null, 0, null, null)
> > > >> >
> > > >> > Here is the error from the management-server.log file:
> > > >> >
> > > >> > 2018-03-12 11:01:32,786 DEBUG [c.c.u.d.T.Transaction]
> > > >> > (qtp66233253-19:ctx-04c43d3a ctx-329be767) (logid:c19a8879)
> Rolling
> > > back
> > > >> > the transaction: Time = 1 Name =  qtp66233253-19; called by
> > > >> > -TransactionLegacy.rollback:889-TransactionLegacy.removeUpTo:832-
> > > >> > TransactionLegacy.close:656-TransactionContextInterceptor.
> > invoke:36-
> > > >> > ReflectiveMethodInvocation.proceed:174-
> ExposeInvocationInt

Re: Failed to add data store: DB Exception on: com.mysql.jdbc.JDBC42PreparedStatement

2018-03-12 Thread McClune, James
I started fresh with 4.11.0. The Ceph storage was already set up; however,
I'm only now adding it to ACS.

I get this error anytime I add primary storage. Secondary storage can be
added with no problems.

- James

On Mon, Mar 12, 2018 at 3:39 PM, Rafael Weingärtner <
rafaelweingart...@gmail.com> wrote:

> When are you receiving this error? I mean, are you receiving it when you
> add the storage? Or, when you do something else? Was the storage already
> configured?
>
>
> On Mon, Mar 12, 2018 at 4:35 PM, McClune, James <
> mcclu...@norwalktruckers.net> wrote:
>
> > I'm using Oracle MySQL.
> >
> > - James
> >
> > On Mon, Mar 12, 2018 at 3:33 PM, Rafael Weingärtner <
> > rafaelweingart...@gmail.com> wrote:
> >
> > > Well, you can delete the entry you just inserted. The idea of running
> > > the SQL was to find the problem.
> > > It is odd; I was expecting a missing-column problem. What is the DB
> > > that you are using? MariaDB or Oracle MySQL?
> > >
> > > On Mon, Mar 12, 2018 at 4:29 PM, McClune, James <
> > > mcclu...@norwalktruckers.net> wrote:
> > >
> > > > Hi Rafael,
> > > >
> > > > I ran the SQL query and got these errors:
> > > >
> > > >
> > > > When I deleted the *_binary* and *storage_pool.* from the query
> > entries,
> > > > it seemed to work. However, I'm still experiencing oddities:
> > > >
> > > >
> > > > Some of my entries were 'null', when they shouldn't be.
> > > >
> > > > Any help is much appreciated.
> > > >
> > > > Thanks,
> > > > James
> > > >
> > > > On Mon, Mar 12, 2018 at 3:06 PM, Rafael Weingärtner <
> > > > rafaelweingart...@gmail.com> wrote:
> > > >
> > > >> Did you try running the insert that is causing the error manually?
> > > >>
> > > >> On Mon, Mar 12, 2018 at 4:04 PM, McClune, James <
> > > >> mcclu...@norwalktruckers.net> wrote:
> > > >>
> > > >> > Hello CloudStack Community,
> > > >> >
> > > >> > I just upgraded ACS from 4.9.3 to 4.11.0. I'm revamping my primary
> > > >> storage
> > > >> > (Ceph RBD). The problem I'm experiencing is every time I try to
> > re-add
> > > >> my
> > > >> > Ceph storage pool, I get this error:
> > > >> >
> > > >> >
> > > >> >- Something went wrong; please correct the following:
> > > >> >Failed to add data store: DB Exception on:
> > > >> >com.mysql.jdbc.JDBC42PreparedStatement@71996b4a: INSERT INTO
> > > >> >storage_pool (storage_pool.id, storage_pool.name,
> > > storage_pool.uuid,
> > > >> >storage_pool.pool_type, storage_pool.created,
> > > >> storage_pool.update_time,
> > > >> >storage_pool.data_center_id, storage_pool.pod_id,
> > > >> > storage_pool.used_bytes,
> > > >> >storage_pool.capacity_bytes, storage_pool.status,
> > > >> >storage_pool.storage_provider_name, storage_pool.host_address,
> > > >> >storage_pool.path, storage_pool.port, storage_pool.user_info,
> > > >> >storage_pool.cluster_id, storage_pool.scope,
> > storage_pool.managed,
> > > >> >storage_pool.capacity_iops, storage_pool.hypervisor) VALUES (0,
> > > >> >_binary'PrimaryStorage1', _binary'9c279e74-15a5-3c8a-
> > > >> > b0d1-14349b59710a',
> > > >> >'RBD', '2018-03-12 14:55:18', null, 1, 1, 0, 0, 'Initialized',
> > > >> >_binary'DefaultPrimary', null, _binary'6A==@10.10.13.141/rbd',
> > > 6789,
> > > >> >null, 1, null, 0, null, null)
> > > >> >
> > > >> > Here is the error from the management-server.log file:
> > > >> >
> > > >> > 2018-03-12 11:01:32,786 DEBUG [c.c.u.d.T.Transaction]
> > > >> > (qtp66233253-19:ctx-04c43d3a ctx-329be767) (logid:c19a8879)
> Rolling
> > > back
> > > >> > the transaction: Time = 1 Name =  qtp66233253-19; called by
> > > >> > -TransactionLegacy.rollback:889-TransactionLegacy.removeUpTo:832-
> > > >> > TransactionLegacy.close:656-TransactionContextInterceptor.
> > invoke:36-
> > > >> > ReflectiveMethodInvocatio

Re: Failed to add data store: DB Exception on: com.mysql.jdbc.JDBC42PreparedStatement

2018-03-12 Thread McClune, James
I'm using Oracle MySQL.

- James

On Mon, Mar 12, 2018 at 3:33 PM, Rafael Weingärtner <
rafaelweingart...@gmail.com> wrote:

> Well, you can delete the entry you just inserted. The idea of running the
> SQL was to find the problem.
> It is odd; I was expecting a missing-column problem. What is the DB
> that you are using? MariaDB or Oracle MySQL?
>
> On Mon, Mar 12, 2018 at 4:29 PM, McClune, James <
> mcclu...@norwalktruckers.net> wrote:
>
> > Hi Rafael,
> >
> > I ran the SQL query and got these errors:
> >
> >
> > When I deleted the *_binary* and *storage_pool.* from the query entries,
> > it seemed to work. However, I'm still experiencing oddities:
> >
> >
> > Some of my entries were 'null', when they shouldn't be.
> >
> > Any help is much appreciated.
> >
> > Thanks,
> > James
> >
> > On Mon, Mar 12, 2018 at 3:06 PM, Rafael Weingärtner <
> > rafaelweingart...@gmail.com> wrote:
> >
> >> Did you try running the insert that is causing the error manually?
> >>
> >> On Mon, Mar 12, 2018 at 4:04 PM, McClune, James <
> >> mcclu...@norwalktruckers.net> wrote:
> >>
> >> > Hello CloudStack Community,
> >> >
> >> > I just upgraded ACS from 4.9.3 to 4.11.0. I'm revamping my primary
> >> storage
> >> > (Ceph RBD). The problem I'm experiencing is every time I try to re-add
> >> my
> >> > Ceph storage pool, I get this error:
> >> >
> >> >
> >> >- Something went wrong; please correct the following:
> >> >Failed to add data store: DB Exception on:
> >> >com.mysql.jdbc.JDBC42PreparedStatement@71996b4a: INSERT INTO
> >> >storage_pool (storage_pool.id, storage_pool.name,
> storage_pool.uuid,
> >> >storage_pool.pool_type, storage_pool.created,
> >> storage_pool.update_time,
> >> >storage_pool.data_center_id, storage_pool.pod_id,
> >> > storage_pool.used_bytes,
> >> >storage_pool.capacity_bytes, storage_pool.status,
> >> >storage_pool.storage_provider_name, storage_pool.host_address,
> >> >storage_pool.path, storage_pool.port, storage_pool.user_info,
> >> >storage_pool.cluster_id, storage_pool.scope, storage_pool.managed,
> >> >storage_pool.capacity_iops, storage_pool.hypervisor) VALUES (0,
> >> >_binary'PrimaryStorage1', _binary'9c279e74-15a5-3c8a-
> >> > b0d1-14349b59710a',
> >> >'RBD', '2018-03-12 14:55:18', null, 1, 1, 0, 0, 'Initialized',
> >> >_binary'DefaultPrimary', null, _binary'6A==@10.10.13.141/rbd',
> 6789,
> >> >null, 1, null, 0, null, null)
> >> >
> >> > Here is the error from the management-server.log file:
> >> >
> >> > 2018-03-12 11:01:32,786 DEBUG [c.c.u.d.T.Transaction]
> >> > (qtp66233253-19:ctx-04c43d3a ctx-329be767) (logid:c19a8879) Rolling
> back
> >> > the transaction: Time = 1 Name =  qtp66233253-19; called by
> >> > -TransactionLegacy.rollback:889-TransactionLegacy.removeUpTo:832-
> >> > TransactionLegacy.close:656-TransactionContextInterceptor.invoke:36-
> >> > ReflectiveMethodInvocation.proceed:174-ExposeInvocationInterceptor.
> >> > invoke:92-ReflectiveMethodInvocation.proceed:185-
> >> > JdkDynamicAopProxy.invoke:212-$Proxy91.persist:-1-PrimaryDat
> >> aStoreHelper.
> >> > createPrimaryDataStore:135-CloudStackPrimaryDataStoreLife
> >> > CycleImpl.initialize:353-StorageManagerImpl.createPool:710
> >> > 2018-03-12 11:01:32,788 DEBUG [c.c.s.StorageManagerImpl]
> >> > (qtp66233253-19:ctx-04c43d3a ctx-329be767) (logid:c19a8879) Failed to
> >> add
> >> > data store: DB Exception on: com.mysql.jdbc.JDBC42PreparedStatement@
> >> > 4901e335:
> >> > INSERT INTO storage_pool (storage_pool.id, storage_pool.name,
> >> > storage_pool.uuid, storage_pool.pool_type, storage_pool.created,
> >> > storage_pool.update_time, storage_pool.data_center_id,
> >> storage_pool.pod_id,
> >> > storage_pool.used_bytes, storage_pool.capacity_bytes,
> >> storage_pool.status,
> >> > storage_pool.storage_provider_name, storage_pool.host_address,
> >> > storage_pool.path, storage_pool.port, storage_pool.user_info,
> >> > storage_pool.cluster_id, storage_pool.scope, storage_pool.managed,
> >> > storage_pool.capacity_iops, storage_pool.hypervisor) VALUES (0,
> >> > _binary'PrimaryStorage1', _binar

Re: Failed to add data store: DB Exception on: com.mysql.jdbc.JDBC42PreparedStatement

2018-03-12 Thread McClune, James
Hi Rafael,

I ran the SQL query and got these errors:


When I deleted the *_binary* and *storage_pool.* from the query entries, it
seemed to work. However, I'm still experiencing oddities:


Some of my entries were 'null', when they shouldn't be.

Any help is much appreciated.

Thanks,
James
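The clean-up described above (dropping the _binary markers and the storage_pool. column prefixes) can be scripted rather than done by hand. A hedged sketch, assuming the statement text is exactly as logged in this thread:

```python
import re

# Hedged sketch of the manual clean-up described above: strip the JDBC
# '_binary' introducers and the 'storage_pool.' column prefixes from the
# logged INSERT so it can be replayed by hand in the mysql client.
def make_replayable(sql):
    sql = sql.replace("_binary'", "'")            # _binary'x' -> 'x'
    sql = re.sub(r"\bstorage_pool\.", "", sql)    # storage_pool.id -> id
    return sql

fragment = ("INSERT INTO storage_pool (storage_pool.id, storage_pool.name) "
            "VALUES (0, _binary'PrimaryStorage1')")
print(make_replayable(fragment))
# INSERT INTO storage_pool (id, name) VALUES (0, 'PrimaryStorage1')
```

Note the word-boundary regex leaves the table name itself (`storage_pool (`) untouched, since only the dotted column references need rewriting.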

On Mon, Mar 12, 2018 at 3:06 PM, Rafael Weingärtner <
rafaelweingart...@gmail.com> wrote:

> Did you try running the insert that is causing the error manually?
>
> On Mon, Mar 12, 2018 at 4:04 PM, McClune, James <
> mcclu...@norwalktruckers.net> wrote:
>
> > Hello CloudStack Community,
> >
> > I just upgraded ACS from 4.9.3 to 4.11.0. I'm revamping my primary
> storage
> > (Ceph RBD). The problem I'm experiencing is every time I try to re-add my
> > Ceph storage pool, I get this error:
> >
> >
> >- Something went wrong; please correct the following:
> >Failed to add data store: DB Exception on:
> >com.mysql.jdbc.JDBC42PreparedStatement@71996b4a: INSERT INTO
> >storage_pool (storage_pool.id, storage_pool.name, storage_pool.uuid,
> >storage_pool.pool_type, storage_pool.created,
> storage_pool.update_time,
> >storage_pool.data_center_id, storage_pool.pod_id,
> > storage_pool.used_bytes,
> >storage_pool.capacity_bytes, storage_pool.status,
> >storage_pool.storage_provider_name, storage_pool.host_address,
> >storage_pool.path, storage_pool.port, storage_pool.user_info,
> >storage_pool.cluster_id, storage_pool.scope, storage_pool.managed,
> >storage_pool.capacity_iops, storage_pool.hypervisor) VALUES (0,
> >_binary'PrimaryStorage1', _binary'9c279e74-15a5-3c8a-
> > b0d1-14349b59710a',
> >'RBD', '2018-03-12 14:55:18', null, 1, 1, 0, 0, 'Initialized',
> >_binary'DefaultPrimary', null, _binary'6A==@10.10.13.141/rbd', 6789,
> >null, 1, null, 0, null, null)
> >
> > Here is the error from the management-server.log file:
> >
> > 2018-03-12 11:01:32,786 DEBUG [c.c.u.d.T.Transaction]
> > (qtp66233253-19:ctx-04c43d3a ctx-329be767) (logid:c19a8879) Rolling back
> > the transaction: Time = 1 Name =  qtp66233253-19; called by
> > -TransactionLegacy.rollback:889-TransactionLegacy.removeUpTo:832-
> > TransactionLegacy.close:656-TransactionContextInterceptor.invoke:36-
> > ReflectiveMethodInvocation.proceed:174-ExposeInvocationInterceptor.
> > invoke:92-ReflectiveMethodInvocation.proceed:185-
> > JdkDynamicAopProxy.invoke:212-$Proxy91.persist:-1-
> PrimaryDataStoreHelper.
> > createPrimaryDataStore:135-CloudStackPrimaryDataStoreLife
> > CycleImpl.initialize:353-StorageManagerImpl.createPool:710
> > 2018-03-12 11:01:32,788 DEBUG [c.c.s.StorageManagerImpl]
> > (qtp66233253-19:ctx-04c43d3a ctx-329be767) (logid:c19a8879) Failed to add
> > data store: DB Exception on: com.mysql.jdbc.JDBC42PreparedStatement@
> > 4901e335:
> > INSERT INTO storage_pool (storage_pool.id, storage_pool.name,
> > storage_pool.uuid, storage_pool.pool_type, storage_pool.created,
> > storage_pool.update_time, storage_pool.data_center_id,
> storage_pool.pod_id,
> > storage_pool.used_bytes, storage_pool.capacity_bytes,
> storage_pool.status,
> > storage_pool.storage_provider_name, storage_pool.host_address,
> > storage_pool.path, storage_pool.port, storage_pool.user_info,
> > storage_pool.cluster_id, storage_pool.scope, storage_pool.managed,
> > storage_pool.capacity_iops, storage_pool.hypervisor) VALUES (0,
> > _binary'PrimaryStorage1', _binary'9c279e74-15a5-3c8a-b0d1-14349b59710a',
> > 'RBD', '2018-03-12 15:01:32', null, 1, 1, 0, 0, 'Initialized',
> > _binary'DefaultPrimary', null, _binary'6A==@10.10.13.141/rbd', 6789,
> null,
> > 1, null, 0, null, null)
> > com.cloud.utils.exception.CloudRuntimeException: DB Exception on:
> > com.mysql.jdbc.JDBC42PreparedStatement@4901e335: INSERT INTO
> storage_pool
> > (
> > storage_pool.id, storage_pool.name, storage_pool.uuid,
> > storage_pool.pool_type, storage_pool.created, storage_pool.update_time,
> > storage_pool.data_center_id, storage_pool.pod_id,
> storage_pool.used_bytes,
> > storage_pool.capacity_bytes, storage_pool.status,
> > storage_pool.storage_provider_name, storage_pool.host_address,
> > storage_pool.path, storage_pool.port, storage_pool.user_info,
> > storage_pool.cluster_id, storage_pool.scope, storage_pool.managed,
> > storage_pool.capacity_iops, storage_pool.hypervisor) VALUES (0,
> > _binary'PrimaryStorage1', _binary'9c279e74-15a5-3c8a-b0d1-14349b59710a',
> > 'RBD', '2018-03-12 15:01:32', null, 1, 1, 0, 0, 'Initialized',
> > _binary'DefaultPrimary', null, _binary'6A==@10.10.13.141/rbd', 6789,
> 

Re: Failed to add data store: DB Exception on: com.mysql.jdbc.JDBC42PreparedStatement

2018-03-12 Thread McClune, James
No, I'll try that right now.

Thanks for the prompt reply Rafael.

- James

On Mon, Mar 12, 2018 at 3:06 PM, Rafael Weingärtner <
rafaelweingart...@gmail.com> wrote:

> Did you try running the insert that is causing the error manually?
>
> On Mon, Mar 12, 2018 at 4:04 PM, McClune, James <
> mcclu...@norwalktruckers.net> wrote:
>
> > Hello CloudStack Community,
> >
> > I just upgraded ACS from 4.9.3 to 4.11.0. I'm revamping my primary
> storage
> > (Ceph RBD). The problem I'm experiencing is every time I try to re-add my
> > Ceph storage pool, I get this error:
> >
> >
> >- Something went wrong; please correct the following:
> >Failed to add data store: DB Exception on:
> >com.mysql.jdbc.JDBC42PreparedStatement@71996b4a: INSERT INTO
> >storage_pool (storage_pool.id, storage_pool.name, storage_pool.uuid,
> >storage_pool.pool_type, storage_pool.created,
> storage_pool.update_time,
> >storage_pool.data_center_id, storage_pool.pod_id,
> > storage_pool.used_bytes,
> >storage_pool.capacity_bytes, storage_pool.status,
> >storage_pool.storage_provider_name, storage_pool.host_address,
> >storage_pool.path, storage_pool.port, storage_pool.user_info,
> >storage_pool.cluster_id, storage_pool.scope, storage_pool.managed,
> >storage_pool.capacity_iops, storage_pool.hypervisor) VALUES (0,
> >_binary'PrimaryStorage1', _binary'9c279e74-15a5-3c8a-
> > b0d1-14349b59710a',
> >'RBD', '2018-03-12 14:55:18', null, 1, 1, 0, 0, 'Initialized',
> >_binary'DefaultPrimary', null, _binary'6A==@10.10.13.141/rbd', 6789,
> >null, 1, null, 0, null, null)
> >
> > Here is the error from the management-server.log file:
> >
> > 2018-03-12 11:01:32,786 DEBUG [c.c.u.d.T.Transaction]
> > (qtp66233253-19:ctx-04c43d3a ctx-329be767) (logid:c19a8879) Rolling back
> > the transaction: Time = 1 Name =  qtp66233253-19; called by
> > -TransactionLegacy.rollback:889-TransactionLegacy.removeUpTo:832-
> > TransactionLegacy.close:656-TransactionContextInterceptor.invoke:36-
> > ReflectiveMethodInvocation.proceed:174-ExposeInvocationInterceptor.
> > invoke:92-ReflectiveMethodInvocation.proceed:185-
> > JdkDynamicAopProxy.invoke:212-$Proxy91.persist:-1-
> PrimaryDataStoreHelper.
> > createPrimaryDataStore:135-CloudStackPrimaryDataStoreLife
> > CycleImpl.initialize:353-StorageManagerImpl.createPool:710
> > 2018-03-12 11:01:32,788 DEBUG [c.c.s.StorageManagerImpl]
> > (qtp66233253-19:ctx-04c43d3a ctx-329be767) (logid:c19a8879) Failed to add
> > data store: DB Exception on: com.mysql.jdbc.JDBC42PreparedStatement@
> > 4901e335:
> > INSERT INTO storage_pool (storage_pool.id, storage_pool.name,
> > storage_pool.uuid, storage_pool.pool_type, storage_pool.created,
> > storage_pool.update_time, storage_pool.data_center_id,
> storage_pool.pod_id,
> > storage_pool.used_bytes, storage_pool.capacity_bytes,
> storage_pool.status,
> > storage_pool.storage_provider_name, storage_pool.host_address,
> > storage_pool.path, storage_pool.port, storage_pool.user_info,
> > storage_pool.cluster_id, storage_pool.scope, storage_pool.managed,
> > storage_pool.capacity_iops, storage_pool.hypervisor) VALUES (0,
> > _binary'PrimaryStorage1', _binary'9c279e74-15a5-3c8a-b0d1-14349b59710a',
> > 'RBD', '2018-03-12 15:01:32', null, 1, 1, 0, 0, 'Initialized',
> > _binary'DefaultPrimary', null, _binary'6A==@10.10.13.141/rbd', 6789,
> null,
> > 1, null, 0, null, null)
> > com.cloud.utils.exception.CloudRuntimeException: DB Exception on:
> > com.mysql.jdbc.JDBC42PreparedStatement@4901e335: INSERT INTO
> storage_pool
> > (
> > storage_pool.id, storage_pool.name, storage_pool.uuid,
> > storage_pool.pool_type, storage_pool.created, storage_pool.update_time,
> > storage_pool.data_center_id, storage_pool.pod_id,
> storage_pool.used_bytes,
> > storage_pool.capacity_bytes, storage_pool.status,
> > storage_pool.storage_provider_name, storage_pool.host_address,
> > storage_pool.path, storage_pool.port, storage_pool.user_info,
> > storage_pool.cluster_id, storage_pool.scope, storage_pool.managed,
> > storage_pool.capacity_iops, storage_pool.hypervisor) VALUES (0,
> > _binary'PrimaryStorage1', _binary'9c279e74-15a5-3c8a-b0d1-14349b59710a',
> > 'RBD', '2018-03-12 15:01:32', null, 1, 1, 0, 0, 'Initialized',
> > _binary'DefaultPrimary', null, _binary'6A==@10.10.13.141/rbd', 6789,
> null,
> > 1, null, 0, null, null)
> > at com.cloud.utils.db.GenericDaoBase.persist(GenericDaoBase.java:1436)
> > at
> > org.apache.cloudstack.storage.datastore.db.PrimaryDataStoreDa

Failed to add data store: DB Exception on: com.mysql.jdbc.JDBC42PreparedStatement

2018-03-12 Thread McClune, James
Hello CloudStack Community,

I just upgraded ACS from 4.9.3 to 4.11.0. I'm revamping my primary storage
(Ceph RBD). The problem I'm experiencing is every time I try to re-add my
Ceph storage pool, I get this error:


   - Something went wrong; please correct the following:
   Failed to add data store: DB Exception on:
   com.mysql.jdbc.JDBC42PreparedStatement@71996b4a: INSERT INTO
   storage_pool (storage_pool.id, storage_pool.name, storage_pool.uuid,
   storage_pool.pool_type, storage_pool.created, storage_pool.update_time,
   storage_pool.data_center_id, storage_pool.pod_id, storage_pool.used_bytes,
   storage_pool.capacity_bytes, storage_pool.status,
   storage_pool.storage_provider_name, storage_pool.host_address,
   storage_pool.path, storage_pool.port, storage_pool.user_info,
   storage_pool.cluster_id, storage_pool.scope, storage_pool.managed,
   storage_pool.capacity_iops, storage_pool.hypervisor) VALUES (0,
   _binary'PrimaryStorage1', _binary'9c279e74-15a5-3c8a-b0d1-14349b59710a',
   'RBD', '2018-03-12 14:55:18', null, 1, 1, 0, 0, 'Initialized',
   _binary'DefaultPrimary', null, _binary'6A==@10.10.13.141/rbd', 6789,
   null, 1, null, 0, null, null)

Here is the error from the management-server.log file:

2018-03-12 11:01:32,786 DEBUG [c.c.u.d.T.Transaction]
(qtp66233253-19:ctx-04c43d3a ctx-329be767) (logid:c19a8879) Rolling back
the transaction: Time = 1 Name =  qtp66233253-19; called by
-TransactionLegacy.rollback:889-TransactionLegacy.removeUpTo:832-TransactionLegacy.close:656-TransactionContextInterceptor.invoke:36-ReflectiveMethodInvocation.proceed:174-ExposeInvocationInterceptor.invoke:92-ReflectiveMethodInvocation.proceed:185-JdkDynamicAopProxy.invoke:212-$Proxy91.persist:-1-PrimaryDataStoreHelper.createPrimaryDataStore:135-CloudStackPrimaryDataStoreLifeCycleImpl.initialize:353-StorageManagerImpl.createPool:710
2018-03-12 11:01:32,788 DEBUG [c.c.s.StorageManagerImpl]
(qtp66233253-19:ctx-04c43d3a ctx-329be767) (logid:c19a8879) Failed to add
data store: DB Exception on: com.mysql.jdbc.JDBC42PreparedStatement@4901e335:
INSERT INTO storage_pool (storage_pool.id, storage_pool.name,
storage_pool.uuid, storage_pool.pool_type, storage_pool.created,
storage_pool.update_time, storage_pool.data_center_id, storage_pool.pod_id,
storage_pool.used_bytes, storage_pool.capacity_bytes, storage_pool.status,
storage_pool.storage_provider_name, storage_pool.host_address,
storage_pool.path, storage_pool.port, storage_pool.user_info,
storage_pool.cluster_id, storage_pool.scope, storage_pool.managed,
storage_pool.capacity_iops, storage_pool.hypervisor) VALUES (0,
_binary'PrimaryStorage1', _binary'9c279e74-15a5-3c8a-b0d1-14349b59710a',
'RBD', '2018-03-12 15:01:32', null, 1, 1, 0, 0, 'Initialized',
_binary'DefaultPrimary', null, _binary'6A==@10.10.13.141/rbd', 6789, null,
1, null, 0, null, null)
com.cloud.utils.exception.CloudRuntimeException: DB Exception on:
com.mysql.jdbc.JDBC42PreparedStatement@4901e335: INSERT INTO storage_pool (
storage_pool.id, storage_pool.name, storage_pool.uuid,
storage_pool.pool_type, storage_pool.created, storage_pool.update_time,
storage_pool.data_center_id, storage_pool.pod_id, storage_pool.used_bytes,
storage_pool.capacity_bytes, storage_pool.status,
storage_pool.storage_provider_name, storage_pool.host_address,
storage_pool.path, storage_pool.port, storage_pool.user_info,
storage_pool.cluster_id, storage_pool.scope, storage_pool.managed,
storage_pool.capacity_iops, storage_pool.hypervisor) VALUES (0,
_binary'PrimaryStorage1', _binary'9c279e74-15a5-3c8a-b0d1-14349b59710a',
'RBD', '2018-03-12 15:01:32', null, 1, 1, 0, 0, 'Initialized',
_binary'DefaultPrimary', null, _binary'6A==@10.10.13.141/rbd', 6789, null,
1, null, 0, null, null)
at com.cloud.utils.db.GenericDaoBase.persist(GenericDaoBase.java:1436)
at
org.apache.cloudstack.storage.datastore.db.PrimaryDataStoreDaoImpl.persist(PrimaryDataStoreDaoImpl.java:274)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at
org.springframework.aop.support.AopUtils.invokeJoinpointUsingReflection(AopUtils.java:338)
at
org.springframework.aop.framework.ReflectiveMethodInvocation.invokeJoinpoint(ReflectiveMethodInvocation.java:197)
at
org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:163)
at
com.cloud.utils.db.TransactionContextInterceptor.invoke(TransactionContextInterceptor.java:34)
at
org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:174)
at
org.springframework.aop.interceptor.ExposeInvocationInterceptor.invoke(ExposeInvocationInterceptor.java:92)
at
org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:185)
at

Re: Problems with KVM HA & STONITH

2017-11-02 Thread McClune, James
Hi Simon,

Thanks for getting back to me. I created a single NFS share and added it
as primary storage. I think I now better understand how storage works with
ACS.

I was able to get HA working with one NFS storage, which is good. However,
is there a way to incorporate multiple NFS storage pools and still have the
HA functionality? I think something like GlusterFS or Ceph (like Ivan and
Dag described) will work better.

Thank you Simon, Ivan, and Dag for your assistance!
James

On Wed, Nov 1, 2017 at 10:10 AM, Simon Weller <swel...@ena.com.invalid>
wrote:

> James,
>
>
> Try just configuring a single NFS server and see if your setup works. If
> you have 3 NFS shares, across all 3 hosts, I'm wondering whether ACS is
> picking the one you rebooted as the storage for your VMs and when that
> storage goes away (when you bounce the host), all storage for your VMs
> vanishes and ACS tries to reboot your other hosts.
>
>
> Normally in a simple ACS setup, you would have a separate storage server
> that can serve up NFS to all hosts. If a host dies, then a VM would be
> brought up on a spare host, since all hosts have access to the same storage.
>
> Your other option is to use local storage, but that won't provide HA.
>
>
> - Si
>
>
> 
> From: McClune, James <mcclu...@norwalktruckers.net>
> Sent: Monday, October 30, 2017 2:26 PM
> To: users@cloudstack.apache.org
> Subject: Re: Problems with KVM HA & STONITH
>
> Hi Dag,
>
> Thank you for responding back. I am currently running ACS 4.9 on an Ubuntu
> 14.04 VM. I have the three nodes, each having about 1TB of primary storage
> (NFS) and 1TB of secondary storage (NFS). I added each NFS share into ACS.
> All nodes are in a cluster.
>
> Maybe I'm not understanding the setup, or I misconfigured something. I'm
> trying to set up an HA environment where, if a node running an HA-marked
> VM goes down, the VM will start on another host. When I simulate a network
> disconnect or reboot of a host, all of the nodes go down (STONITH?).
>
> I am unsure how to set up an HA environment if all the nodes in the
> cluster go down. Any help is much appreciated!
>
> Thanks,
> James
>
> On Mon, Oct 30, 2017 at 3:49 AM, Dag Sonstebo <dag.sonst...@shapeblue.com>
> wrote:
>
> > Hi James,
> >
> > I think you possibly have over-configured your KVM hosts. If you use NFS
> > (and no clustered file system like CLVM) then there should be no need to
> > configure STONITH. CloudStack takes care of your HA, so this is not
> > something you offload to the KVM host.
> >
> > (As mentioned the only time I have played with STONITH and CloudStack was
> > for CLVM – and I eventually found it not fit for purpose, too unstable
> and
> > causing too many issues like you describe. Note this was for block
> storage
> > though – not NFS).
> >
> > Regards,
> > Dag Sonstebo
> > Cloud Architect
> > ShapeBlue
> >
> > On 28/10/2017, 03:40, "Ivan Kudryavtsev" <kudryavtsev...@bw-sw.com>
> wrote:
> >
> > Hi. If the node loses its NFS host, it reboots (ACS agent behaviour).
> > If you really have 3 storages, you'll get a cluster-wide reboot every
> > time your host is down.
> >
> > On 28 Oct 2017 at 3:02, "Simon Weller"
> > <swel...@ena.com.invalid>
> > wrote:
> >
> > > Hi James,
> > >
> > >
> > > Can you elaborate a bit further on the storage? You say you're
> > running NFS
> > > on all 3 nodes, can you explain how it is setup?
> > >
> > > Also, what version of ACS are you running?
> > >
> > >
> > > - Si
> > >
> > >
> > >
> > >
> > > 
> > > From: McClune, James <mcclu...@norwalktruckers.net>
> > > Sent: Friday, October 27, 2017 2:21 PM
> > > To: users@cloudstack.apache.org
> > > Subject: Problems with KVM HA & STONITH
> > >
> > > Hello Apache CloudStack Community,
> > >
> > > My setup consists of the following:
> > >
> > > - Three nodes (NODE1, NODE2, and NODE3)
> > > NODE1 is running Ubuntu 16.04.3, NODE2 is running Ubuntu 16.04.3,
> > and NODE3
> > > is running Ubuntu 14.04.5.
> > > - Management Server (running on separate VM, not in cluster)
> > >
> > > The three nodes use KVM as the hypervisor. I also configured
> primary
> > and
> >

Re: Problems with KVM HA & STONITH

2017-10-30 Thread McClune, James
Hi Dag,

Thank you for getting back to me. I am currently running ACS 4.9 on an Ubuntu
14.04 VM. I have three nodes, each with about 1TB of primary storage
(NFS) and 1TB of secondary storage (NFS). I added each NFS share into ACS.
All nodes are in a cluster.
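
For anyone reproducing this setup, a quick way to sanity-check what each KVM
host actually sees is to list the libvirt storage pools and the NFS mounts
behind them (not from this thread; CloudStack registers NFS primary storage
as libvirt pools named by UUID, so output varies per host):

```shell
#!/bin/sh
# Diagnostic sketch: show the storage pools libvirt knows about on this
# host, falling back gracefully if virsh is not installed.
command -v virsh >/dev/null && virsh pool-list --all || echo "virsh not installed"
# Show the NFS mounts backing those pools (empty if none are mounted).
mount -t nfs || true
```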

Maybe I'm misunderstanding the setup or have misconfigured something. I'm
trying to set up an HA environment where, if a node running an HA-marked VM
goes down, the VM will start on another host. When I simulate a network
disconnect or reboot of a host, all of the nodes go down (STONITH?).

I am unsure how to set up an HA environment if all the nodes in the
cluster go down. Any help is much appreciated!

Thanks,
James

On Mon, Oct 30, 2017 at 3:49 AM, Dag Sonstebo <dag.sonst...@shapeblue.com>
wrote:

> Hi James,
>
> I think you possibly have over-configured your KVM hosts. If you use NFS
> (and no clustered file system like CLVM) then there should be no need to
> configure STONITH. CloudStack takes care of your HA, so this is not
> something you offload to the KVM host.
>
> (As mentioned the only time I have played with STONITH and CloudStack was
> for CLVM – and I eventually found it not fit for purpose, too unstable and
> causing too many issues like you describe. Note this was for block storage
> though – not NFS).
>
> Regards,
> Dag Sonstebo
> Cloud Architect
> ShapeBlue
>
> On 28/10/2017, 03:40, "Ivan Kudryavtsev" <kudryavtsev...@bw-sw.com> wrote:
>
> Hi. If the node loses its NFS host, it reboots (ACS agent behaviour). If
> you really have 3 storages, you'll get a cluster-wide reboot every time
> your host is down.
>
> On 28 Oct 2017 at 03:02, "Simon Weller" <swel...@ena.com.invalid>
> wrote:
>
> > Hi James,
> >
> >
> > Can you elaborate a bit further on the storage? You say you're
> running NFS
> > on all 3 nodes, can you explain how it is setup?
> >
> > Also, what version of ACS are you running?
> >
> >
> > - Si
> >
> >
> >
> >
> > ________________________________
> > From: McClune, James <mcclu...@norwalktruckers.net>
> > Sent: Friday, October 27, 2017 2:21 PM
> > To: users@cloudstack.apache.org
> > Subject: Problems with KVM HA & STONITH
> >
> > Hello Apache CloudStack Community,
> >
> > My setup consists of the following:
> >
> > - Three nodes (NODE1, NODE2, and NODE3)
> > NODE1 is running Ubuntu 16.04.3, NODE2 is running Ubuntu 16.04.3,
> and NODE3
> > is running Ubuntu 14.04.5.
> > - Management Server (running on separate VM, not in cluster)
> >
> > The three nodes use KVM as the hypervisor. I also configured primary
> and
> > secondary storage on all three of the nodes. I'm using NFS for the
> primary
> > & secondary storage. VM operations work great. Live migration works
> great.
> >
> > However, when a host goes down, the HA functionality does not work
> at all.
> > Instead of spinning up the VM on another available host, the down
> host
> > seems to trigger STONITH. When STONITH happens, all hosts in the
> cluster go
> > down. This not only causes no HA, but also downs perfectly good
> VM's. I
> > have read countless articles and documentation related to this
> issue. I
> > still cannot find a viable solution for this issue. I really want to
> use
> > Apache CloudStack, but cannot implement this in production when
> STONITH
> > happens.
> >
> > I think I have something misconfigured. I thought I would reach out
> to the
> > CloudStack community and ask for some friendly assistance.
> >
> > If there is anything (system-wise) you request in order to further
> > troubleshoot this issue, please let me know and I'll send. I
> appreciate any
> > help in this issue!
> >
> > --
> >
> > Thanks,
> >
> > James
> >
>
>
>
> dag.sonst...@shapeblue.com
> www.shapeblue.com
> 53 Chandos Place, Covent Garden, London  WC2N 4HSUK
> @shapeblue
>
>
>
>


-- 



James McClune

Technical Support Specialist

Norwalk City Schools

Phone: 419-660-6590

mcclu...@norwalktruckers.net


Re: Problems with KVM HA & STONITH

2017-10-30 Thread McClune, James
Hi Simon,

Thank you for getting back to me. I am currently running ACS 4.9 on an Ubuntu
14.04 VM. I have three nodes, each with about 1TB of primary storage
(NFS) and 1TB of secondary storage (NFS). I added each NFS share into ACS.
All nodes are in a cluster.

Maybe I'm misunderstanding the setup or have misconfigured something. I'm
trying to set up an HA environment where, if a node running an HA-marked VM
goes down, the VM will start on another host. When I simulate a network
disconnect or reboot of a host, all of the nodes go down.

If you need more information, please let me know. Again, any help is
greatly appreciated!

Thanks,
James

On Fri, Oct 27, 2017 at 4:02 PM, Simon Weller <swel...@ena.com.invalid>
wrote:

> Hi James,
>
>
> Can you elaborate a bit further on the storage? You say you're running NFS
> on all 3 nodes, can you explain how it is setup?
>
> Also, what version of ACS are you running?
>
>
> - Si
>
>
>
>
> ________________________________
> From: McClune, James <mcclu...@norwalktruckers.net>
> Sent: Friday, October 27, 2017 2:21 PM
> To: users@cloudstack.apache.org
> Subject: Problems with KVM HA & STONITH
>
> Hello Apache CloudStack Community,
>
> My setup consists of the following:
>
> - Three nodes (NODE1, NODE2, and NODE3)
> NODE1 is running Ubuntu 16.04.3, NODE2 is running Ubuntu 16.04.3, and NODE3
> is running Ubuntu 14.04.5.
> - Management Server (running on separate VM, not in cluster)
>
> The three nodes use KVM as the hypervisor. I also configured primary and
> secondary storage on all three of the nodes. I'm using NFS for the primary
> & secondary storage. VM operations work great. Live migration works great.
>
> However, when a host goes down, the HA functionality does not work at all.
> Instead of spinning up the VM on another available host, the down host
> seems to trigger STONITH. When STONITH happens, all hosts in the cluster go
> down. This not only breaks HA, but also takes down perfectly good VMs. I
> have read countless articles and documentation related to this issue and
> still cannot find a viable solution. I really want to use Apache
> CloudStack, but cannot implement this in production when STONITH happens.
>
> I think I have something misconfigured. I thought I would reach out to the
> CloudStack community and ask for some friendly assistance.
>
> If there is anything (system-wise) you need in order to further
> troubleshoot this issue, please let me know and I'll send it. I appreciate
> any help with this issue!
>
> --
>
> Thanks,
>
> James
>



-- 



James McClune

Technical Support Specialist

Norwalk City Schools

Phone: 419-660-6590

mcclu...@norwalktruckers.net


Problems with KVM HA & STONITH

2017-10-27 Thread McClune, James
Hello Apache CloudStack Community,

My setup consists of the following:

- Three nodes (NODE1, NODE2, and NODE3)
NODE1 is running Ubuntu 16.04.3, NODE2 is running Ubuntu 16.04.3, and NODE3
is running Ubuntu 14.04.5.
- Management Server (running on separate VM, not in cluster)

The three nodes use KVM as the hypervisor. I also configured primary and
secondary storage on all three of the nodes. I'm using NFS for the primary
& secondary storage. VM operations work great. Live migration works great.

However, when a host goes down, the HA functionality does not work at all.
Instead of spinning up the VM on another available host, the down host
seems to trigger STONITH. When STONITH happens, all hosts in the cluster go
down. This not only breaks HA, but also takes down perfectly good VMs. I
have read countless articles and documentation related to this issue and
still cannot find a viable solution. I really want to use Apache
CloudStack, but cannot implement this in production when STONITH happens.
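
For context (not from this thread): the cluster-wide reboot described here
comes from the KVM agent's NFS heartbeat, not from Pacemaker-style STONITH.
Roughly, each host periodically writes a timestamp to a heartbeat file on
the NFS primary storage; if the writes or checks fail, the host fences
itself by rebooting. A minimal sketch of that logic (this is NOT the real
CloudStack agent script; the directory, file name, and threshold below are
illustrative):

```shell
#!/bin/sh
# Illustrative NFS-heartbeat fencing sketch. HB_DIR stands in for the
# NFS primary storage mount; MAX_AGE is a made-up staleness threshold.
HB_DIR="${HB_DIR:-/tmp/hb-demo}"
HB_FILE="$HB_DIR/hb-$(hostname)"
MAX_AGE=60   # seconds before this host's heartbeat is considered dead

mkdir -p "$HB_DIR"

write_heartbeat() {
    # Each host periodically stamps its own heartbeat file on shared storage.
    date +%s > "$HB_FILE"
}

check_heartbeat() {
    # Return 0 if the heartbeat is fresh, 1 if missing or stale.
    [ -f "$HB_FILE" ] || return 1
    now=$(date +%s)
    last=$(cat "$HB_FILE")
    [ $((now - last)) -le "$MAX_AGE" ]
}

write_heartbeat
if check_heartbeat; then
    echo "heartbeat OK"
else
    echo "heartbeat stale - a real agent would fence (reboot) the host here"
fi
```

The point of the sketch: if the NFS server itself goes away, every host's
heartbeat write fails at once, which is why all nodes reboot together.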

I think I have something misconfigured. I thought I would reach out to the
CloudStack community and ask for some friendly assistance.

If there is anything (system-wise) you need in order to further
troubleshoot this issue, please let me know and I'll send it. I appreciate
any help with this issue!

-- 

Thanks,

James