RE: Private Icons for a charm.

2017-10-05 Thread Michael Van Der Beek
Hi Francesco,

Yup, looks like the bug.

Just to give you some details of my test OpenStack setup, for more test
scenarios:
OpenStack is set up with packstack on a single 128 GB RAM server running
CentOS 7.
Juju (CLI) runs on an Ubuntu instance.
Juju then creates the controller on this OpenStack cloud.
I run simplestreams to load CentOS 7 images, and I'm writing CentOS charms.

The reason is that I am porting my company's products (which run on CentOS) to
cloud images.

juju version
2.2.4-xenial-amd64
OS=ubuntu 16.04

Thanks for your pointer.

Regards,

Michael



-Original Message-
From: Francesco Banconi [mailto:francesco.banc...@canonical.com] 
Sent: Thursday, October 5, 2017 7:55 PM
To: Merlijn Sebrechts <merlijn.sebrec...@gmail.com>
Cc: Michael Van Der Beek <michael@antlabs.com>; juju@lists.ubuntu.com
Subject: Re: Private Icons for a charm.

> 
> On 5 Oct 2017, at 10:24, Merlijn Sebrechts <merlijn.sebrec...@gmail.com> 
> wrote:
> 
> If I use this URL in a browser, it works and brings up the icon. But in the 
> Juju web GUI, a generic icon shows up instead.

Thanks for reporting this issue! This seems to be an instance of 
https://github.com/juju/juju-gui/issues/3067
It is possible to confirm that by trying to load the GUI with Firefox, for 
instance, as only Chrome is affected by the problem.
We are currently investigating possible solutions.

--
Francesco



Private Icons for a charm.

2017-10-04 Thread Michael Van Der Beek
Hi All,

I've created custom icon.svg files for my charms, but they don't show up in the
Juju GUI.
The URL for the icon, when I inspect the web page, looks like this:
https://user-admin@local:password@10.60.1.20:17070/model/f7872e93-5cbf-46dd-814b-2c63031686cd/charms?url=local:centos7/radius-27=1

If I use this URL in a browser, it works and brings up the icon. But in the 
Juju web GUI, a generic icon shows up instead.
Is there a fix for this?

I am currently deploying the charm like this:
juju deploy /home/ubuntu/charms/radius --constraints mem=4G

Regards,

Michael


RE: Rejection of peer join.

2017-09-27 Thread Michael Van Der Beek
Hi Stuart,

Thanks for your insight. Greatly appreciate it.

One last thing: when you said "I'm generally deploying charms to bare metal and 
using local disk", does that mean you manually install a machine with Ubuntu, 
then run juju add-machine ssh:user@, and then juju deploy --to ?

Or are you using juju deploy --to host.maas, in which case Landscape is already 
aware of the machine?

I guess that is probably a dumb question; they are generally the same, except 
that one does it via bootp and the other is a manual install of Ubuntu that 
Juju then provisions.

Regards,

Michael
-Original Message-
From: stu...@stuartbishop.net [mailto:stu...@stuartbishop.net] On Behalf Of 
Stuart Bishop
Sent: Wednesday, September 27, 2017 8:47 PM
To: Michael Van Der Beek <michael@antlabs.com>
Cc: juju@lists.ubuntu.com
Subject: Re: Rejection of peer join.

On 27 September 2017 at 14:50, Michael Van Der Beek <michael@antlabs.com> 
wrote:
> Hi Stuart,
>
> I think you misinterpreted what I was asking.
>
> Assume a pair of instances is already in a relation.
> Let's say we have DRBD running between these two instances; call them A and B.
> If Juju starts a third instance, C, I want to make sure it cannot join the
> pair, as DRBD is not supposed to have a third node (although in theory you
> can create a stacked node for a third or further backups).
>
> So when ha-relation-joined is triggered on either A or B, how do I tell C
> that it is rejected from the join so that it doesn't screw up Juju? I suppose
> A/B could do a "relation-set" of something to tell C it is rejected.

I would have C use relation-list to get a list of all the peers. If there are 
two or more units with a lower unit number, C knows it should not join, sets 
its status to 'blocked', and exits 0. There is no need to involve A or B in the 
conversation; Unit C can make the decision itself. Unit C could happily sit 
there in blocked state until either A or B departs, at which point C would see 
only one unit with a lower unit number and know it should join. You could even 
argue that the blocked state is incorrect with this design, and that unit C 
should actually set its status to active or maintenance, with a message stating 
it is a hot spare.
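
For illustration, here is a minimal Python sketch of that check as it could
look in a peer relation hook, assuming charm-helpers is available on the unit
and a peer relation named "ha" (taken from the ha-relation-joined hook
mentioned earlier; adjust the name and messages to whatever your charm uses):

    #!/usr/bin/env python3
    # Rough sketch of the "count peers with a lower unit number" check.
    # Assumes charm-helpers is installed and the peer relation is named "ha".
    from charmhelpers.core import hookenv

    MAX_ACTIVE = 2  # drbd pair

    def unit_number(name):
        # "radius/3" -> 3
        return int(name.split('/')[-1])

    def lower_numbered_peers():
        me = unit_number(hookenv.local_unit())
        count = 0
        for rid in hookenv.relation_ids('ha'):
            for peer in hookenv.related_units(rid):
                if unit_number(peer) < me:
                    count += 1
        return count

    if __name__ == '__main__':
        if lower_numbered_peers() >= MAX_ACTIVE:
            # Stand aside but exit 0, so Juju sees a clean hook run.
            hookenv.status_set('blocked', 'hot spare: pair already formed')
        else:
            hookenv.status_set('active', 'member of the drbd pair')

When A or B later departs, the same check run again would see fewer
lower-numbered peers and let C promote itself, as described above.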

The trick is that if someone does 'juju deploy -n3 thecharm', there is no 
guarantee of the order in which the units join the peer relation. So Unit C may 
think it should be active, and then a short while later see other units join 
and need to become inactive. If you need to avoid this situation, you will have 
to use Juju leadership to handle these sorts of race conditions. There is only 
one leader at any point in time, so it can make decisions like which of the 
several units should be active without worrying about race conditions. But it 
sounds overly complex for your use case.
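
If you did go that route, a rough sketch of how the leader could publish the
decision with the leadership tools (leader-set/leader-get) might look like
this, again assuming charm-helpers and the same hypothetical "ha" peer
relation name:

    # Rough leadership sketch: the leader picks the two lowest-numbered units
    # and publishes the choice; every unit reads it back to decide its role.
    from charmhelpers.core import hookenv

    def publish_active_pair():
        if not hookenv.is_leader():
            return  # only the leader may call leader-set
        units = {hookenv.local_unit()}
        for rid in hookenv.relation_ids('ha'):
            units.update(hookenv.related_units(rid))
        pair = sorted(units, key=lambda u: int(u.split('/')[-1]))[:2]
        hookenv.leader_set(active_units=','.join(pair))

    def i_am_active():
        active = (hookenv.leader_get('active_units') or '').split(',')
        return hookenv.local_unit() in active

Because only the leader ever writes active_units, there is a single writer and
the ordering race described above goes away.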

> The issue for me is how to scale when you have specific data tied to a set of
> nodes. You could use Ceph, DRBD, or some other cluster: Ceph will require
> three nodes, DRBD two, and a Galera cluster maybe three.
>
> My idea is that there is already a load balancer to scale. Each time you want
> to scale, you would add one or more pairs (assuming DRBD) to an already
> existing set of pairs. The load balancer just redirects traffic to specific
> pairs based on some logic (like a modulus of the last octet of the customer
> IP, which can give you 256 pairs). This is how we are doing it on physical
> machines; we haven't had a customer yet that requires more than 10,000 tps
> for RADIUS or 5 million concurrent sessions. Note I use "pairs" loosely here,
> as a pair running a Galera cluster is three nodes instead of two.
>
> I'm currently trying to figure out how to do this on OpenStack. Do you have
> any recommendations for me to read or watch on how people deal with scaling
> for very high write IO to disk? Currently, for RADIUS, we are looking at
> nearly 95% writes and 5% reads. Nobody reads the data unless someone wants to
> know whether user X is currently logged in. If the R/W IO requirements were
> the other way around, it would be much easier to scale.

I'm generally deploying charms to bare metal and using local disk if there are 
any sort of non-trivial IO requirements. I believe the newer Juju storage 
features should allow you to mount whatever sort of volumes you want from 
OpenStack (or any cloud provider), but I'm not familiar with the OpenStack 
specifics.



--
Stuart Bishop <stuart.bis...@canonical.com>


RE: Rejection of peer join.

2017-09-27 Thread Michael Van Der Beek
Hi Stuart,

I think you misinterpreted what I was asking.

Assume a pair of instances is already in a relation.
Let's say we have DRBD running between these two instances; call them A and B.
If Juju starts a third instance, C, I want to make sure it cannot join the
pair, as DRBD is not supposed to have a third node (although in theory you can
create a stacked node for a third or further backups).

So when ha-relation-joined is triggered on either A or B, how do I tell C that
it is rejected from the join so that it doesn't screw up Juju? I suppose A/B
could do a "relation-set" of something to tell C it is rejected.
Then A/B can exit 0, and C can set blocked and exit 0. Would that create a
satisfactory state in Juju, with no relationship set but all hooks exiting
okay?

Now, as to your other suggestion of starting C and letting data sync to C
before dropping one node, say A: that is possible as well; it just means you'll
apply the restriction at a higher unit count.

The issue for me is how to scale when you have specific data tied to a set of
nodes. You could use Ceph, DRBD, or some other cluster: Ceph will require three
nodes, DRBD two, and a Galera cluster maybe three.

My idea is that there is already a load balancer to scale. Each time you want
to scale, you would add one or more pairs (assuming DRBD) to an already
existing set of pairs. The load balancer just redirects traffic to specific
pairs based on some logic (like a modulus of the last octet of the customer IP,
which can give you 256 pairs). This is how we are doing it on physical
machines; we haven't had a customer yet that requires more than 10,000 tps for
RADIUS or 5 million concurrent sessions. Note I use "pairs" loosely here, as a
pair running a Galera cluster is three nodes instead of two.
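
As a toy illustration of that modulus rule (the pair count of 4 and the client
address below are just example values):

    # Toy illustration of modulus-based pair selection from the client IP.
    def pair_for(client_ip, num_pairs=4):
        last_octet = int(client_ip.split('.')[-1])
        return last_octet % num_pairs

    # pair_for('10.60.1.77') -> 1, so that client's sessions stay on pair 1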

I'm currently trying to figure out how to do this on OpenStack. Do you have any
recommendations for me to read or watch on how people deal with scaling for
very high write IO to disk? Currently, for RADIUS, we are looking at nearly 95%
writes and 5% reads. Nobody reads the data unless someone wants to know whether
user X is currently logged in. If the R/W IO requirements were the other way
around, it would be much easier to scale.

Regards,

Michael
-Original Message-
From: stu...@stuartbishop.net [mailto:stu...@stuartbishop.net] On Behalf Of 
Stuart Bishop
Sent: Wednesday, September 27, 2017 2:38 PM
To: Michael Van Der Beek <michael@antlabs.com>
Cc: juju@lists.ubuntu.com
Subject: Re: Rejection of peer join.

On 27 September 2017 at 10:02, Michael Van Der Beek <michael@antlabs.com> 
wrote:
> Hi All,
>
> I was thinking about a charm whose peer relation supports only two instances
> forming a peer.
>
> What would be the correct way to reject a third instance trying to join,
> gracefully, so that Juju doesn't complain about a failed setup?
>
> Exit 1 will surely cause problems; exit 0, but then Juju doesn't know that
> the relationship is not set.

Set the workload status of the unit that should not join to 'blocked'
with a message explaining the problem, and exit 0 (using the 'status-set' hook 
environment tool, or hookenv.status_set() in charm-helpers). The live peers 
should just ignore the third unit.
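
In a Python hook using charm-helpers, that could look roughly like this minimal
sketch (the status message is just an example):

    # Minimal sketch: mark this unit as blocked and finish the hook cleanly.
    import sys
    from charmhelpers.core import hookenv

    hookenv.status_set('blocked', 'only two peers supported; standing by')
    sys.exit(0)  # a zero exit keeps Juju from treating the hook as failed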

If you do it right, it would be possible to add a third unit, then remove one 
of the live pair, and have everything migrate across to the new unit with only 
a small period of time without redundancy.


> The idea is that I want to do failover within a pair of instances.
>
> I have a RADIUS load balancer that will split packets based on a modulus of
> the same framed IP (the client's IP), or MAC address, or something similar.
>
> Each pair of RADIUS servers would only hold sessions related to that modulus
> result, so if I do modulus 4, I'll have 4 pairs of instances.
>
> I'm not sure whether running each pair (active-standby) at 1000 tps is viable
> up to a million sessions.

You may not be sure, but maybe the person who is using your charm is?
Or wants to test it? If it is no more effort, I'd suggest supporting an 
arbitrary number of units rather than just one or two. It could also provide a 
mechanism to migrate the Juju application to new units without ever losing 
redundancy by allowing the operator to bring a third unit online, wait for it 
to get in sync, and then drop one of the original units.

--
Stuart Bishop <stuart.bis...@canonical.com>


Rejection of peer join.

2017-09-26 Thread Michael Van Der Beek
Hi All,

I was thinking about a charm whose peer relation supports only two instances
forming a peer.
What would be the correct way to reject a third instance trying to join,
gracefully, so that Juju doesn't complain about a failed setup?
Exit 1 will surely cause problems; exit 0, but then Juju doesn't know that the
relationship is not set.
Or is there some setting for the peer relation in metadata.yaml that limits it
to two instances at a time?
Or is this not possible, and must it be handled manually?

The idea is that I want to do failover within a pair of instances.

I have a RADIUS load balancer that will split packets based on a modulus of the
same framed IP (the client's IP), or MAC address, or something similar.
Each pair of RADIUS servers would only hold sessions related to that modulus
result, so if I do modulus 4, I'll have 4 pairs of instances.

I'm not sure whether running each pair (active-standby) at 1000 tps is viable
up to a million sessions.
I can easily do that on a hardware platform; I'm not sure whether, in an
OpenStack environment with all its virtualization overheads, that is still
possible in a live environment.

Thanks for your help.


Regards,

Michael



SSH keys not installed during bootstrap

2017-08-28 Thread Michael Van Der Beek
Hi All,

I've set up an Ubuntu VM with Juju installed.
I've managed to link it to a packstack OpenStack setup.

So I've configured MainStack to use the Keystone API on the packstack server
10.60.1.4.

juju bootstrap MainStack --config network=38d1b49e-4ddb-47ef-92f2-23ffa5d2931f

So it proceeds to create the Xenial VM without problems; it starts up, and the
logs show that it gets a valid routable IP.

Now the weird part: eventually it reaches a state where it needs to SSH into
the controller VM to do its work.
When it could not verify the host, I did the recommended step 'ssh-keygen -f 
"/tmp/juju-known-hosts953850952" -R 10.60.1.31', which fixed that part of the
problem.
But I don't know what I missed such that no SSH RSA key was created on the
controller VM.

Can anyone give me pointers as to what to do?
I think once this is resolved, it should be a working VM.
I'm running single-node packstack because I don't have that much hardware to
play with and I need to do KVM testing, so I can't do the Ubuntu single-server
LXD deployment.

Regards,

Michael

15:44:59 DEBUG juju.provider.common bootstrap.go:497 connection attempt for 
10.60.1.31 failed: ssh: connect to host 10.60.1.31 port 22: Connection refused
15:45:04 DEBUG juju.provider.common bootstrap.go:497 connection attempt for 
10.60.1.31 failed: ssh: connect to host 10.60.1.31 port 22: Connection refused
15:45:09 DEBUG juju.provider.common bootstrap.go:497 connection attempt for 
10.60.1.31 failed: ssh: connect to host 10.60.1.31 port 22: Connection refused
15:45:14 DEBUG juju.provider.common bootstrap.go:497 connection attempt for 
10.60.1.31 failed: ssh: connect to host 10.60.1.31 port 22: Connection refused
15:45:19 DEBUG juju.provider.common bootstrap.go:497 connection attempt for 
10.60.1.31 failed: ssh: connect to host 10.60.1.31 port 22: Connection refused
15:45:24 DEBUG juju.provider.common bootstrap.go:497 connection attempt for 
10.60.1.31 failed: ssh: connect to host 10.60.1.31 port 22: Connection refused
15:45:29 DEBUG juju.provider.common bootstrap.go:497 connection attempt for 
10.60.1.31 failed: ssh: connect to host 10.60.1.31 port 22: Connection refused
15:45:34 DEBUG juju.provider.common bootstrap.go:497 connection attempt for 
10.60.1.31 failed: @@@
@WARNING: REMOTE HOST IDENTIFICATION HAS CHANGED! @
@@@
IT IS POSSIBLE THAT SOMEONE IS DOING SOMETHING NASTY!
Someone could be eavesdropping on you right now (man-in-the-middle attack)!
It is also possible that a host key has just been changed.
The fingerprint for the RSA key sent by the remote host is
SHA256:QPszhwseg3wKPRu9hkPILvsRVpP8FdqR5MHr9+556ac.
Please contact your system administrator.
Add correct host key in /tmp/juju-known-hosts953850952 to get rid of this 
message.
Offending RSA key in /tmp/juju-known-hosts953850952:1
  remove with:
  ssh-keygen -f "/tmp/juju-known-hosts953850952" -R 10.60.1.31
RSA host key for 10.60.1.31 has changed and you have requested strict checking.
Host key verification failed.
15:45:39 DEBUG juju.provider.common bootstrap.go:497 connection attempt for 
10.60.1.31 failed: @@@
@WARNING: REMOTE HOST IDENTIFICATION HAS CHANGED! @
@@@
IT IS POSSIBLE THAT SOMEONE IS DOING SOMETHING NASTY!
Someone could be eavesdropping on you right now (man-in-the-middle attack)!
It is also possible that a host key has just been changed.
The fingerprint for the RSA key sent by the remote host is
SHA256:QPszhwseg3wKPRu9hkPILvsRVpP8FdqR5MHr9+556ac.
Please contact your system administrator.
Add correct host key in /tmp/juju-known-hosts953850952 to get rid of this 
message.
Offending RSA key in /tmp/juju-known-hosts953850952:1
  remove with:
  ssh-keygen -f "/tmp/juju-known-hosts953850952" -R 10.60.1.31
RSA host key for 10.60.1.31 has changed and you have requested strict checking.
Host key verification failed.
15:45:44 DEBUG juju.provider.common bootstrap.go:497 connection attempt for 
10.60.1.31 failed: No RSA host key is known for 10.60.1.31 and you have 
requested strict checking.
Host key verification failed.
15:45:49 DEBUG juju.provider.common bootstrap.go:497 connection attempt for 
10.60.1.31 failed: No RSA host key is known for 10.60.1.31 and you have 
requested strict checking.
Host key verification failed.
15:45:54 DEBUG juju.provider.common bootstrap.go:497 connection attempt for 
10.60.1.31 failed: No RSA host key is known for 10.60.1.31 and you have 
requested strict checking.
Host key verification failed.
15:45:59 DEBUG juju.provider.common bootstrap.go:497 connection attempt for 
10.60.1.31 failed: No RSA host key is known for 10.60.1.31 and you have 
requested strict checking.
Host key verification failed.
15:46:04 DEBUG juju.provider.common bootstrap.go:497 connection attempt for 
10.60.1.31 failed: No RSA host key