[jira] [Commented] (MESOS-3370) Deprecate the external containerizer

2015-12-23 Thread Bernd Mathiske (JIRA)

[ 
https://issues.apache.org/jira/browse/MESOS-3370?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15069437#comment-15069437
 ] 

Bernd Mathiske commented on MESOS-3370:
---

commit 43420dd0a27cd4adf1b2c929262f96e86d647acf
Author: Joerg Schad 
Date:   Wed Dec 23 10:41:38 2015 +0100

Added links to individual containerizers in containerizer-internal.md.

Review: https://reviews.apache.org/r/41683/

> Deprecate the external containerizer
> 
>
> Key: MESOS-3370
> URL: https://issues.apache.org/jira/browse/MESOS-3370
> Project: Mesos
>  Issue Type: Task
>Reporter: Niklas Quarfot Nielsen
>
> To our knowledge, no one is using the external containerizer, so we could 
> clean up code paths in the slave and containerizer interface (the dual 
> launch() signatures).
> In a deprecation cycle, we can move this code into a module (dependent on 
> containerizer modules landing) and from there, move it into its own repo.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (MESOS-3370) Deprecate the external containerizer

2015-12-23 Thread Bernd Mathiske (JIRA)

[ 
https://issues.apache.org/jira/browse/MESOS-3370?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15069440#comment-15069440
 ] 

Bernd Mathiske commented on MESOS-3370:
---

commit 3c40d2d27d792c4baa927271414c4541f59069bd
Author: Joerg Schad 
Date:   Wed Dec 23 10:43:33 2015 +0100

Reflected deprecation of external containerizer in documentation.

Review: https://reviews.apache.org/r/41682/

> Deprecate the external containerizer
> 
>
> Key: MESOS-3370
> URL: https://issues.apache.org/jira/browse/MESOS-3370
> Project: Mesos
>  Issue Type: Task
>Reporter: Niklas Quarfot Nielsen
>
> To our knowledge, no one is using the external containerizer, so we could 
> clean up code paths in the slave and containerizer interface (the dual 
> launch() signatures).
> In a deprecation cycle, we can move this code into a module (dependent on 
> containerizer modules landing) and from there, move it into its own repo.





[jira] [Commented] (MESOS-4113) Docker Executor should not set container IP during bridged mode

2015-12-23 Thread Bernd Mathiske (JIRA)

[ 
https://issues.apache.org/jira/browse/MESOS-4113?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15069446#comment-15069446
 ] 

Bernd Mathiske commented on MESOS-4113:
---

@scalp42, thanks for being persistent! I don't see either how MESOS-4064 can 
be viewed as a duplicate of this issue. I suspect it was closed on the 
assumption that the "duplicate" link is correct. A further indication is that, 
AFAICT, none of the code in the reviews posted for MESOS-4064 addresses 
MESOS-4113. @hartem, can you confirm this view?

Reopening this ticket.

@scalp42, it would be great if you could check whether the output from the 
current master is the same as from 0.26.0. I suspect it is.


> Docker Executor should not set container IP during bridged mode
> ---
>
> Key: MESOS-4113
> URL: https://issues.apache.org/jira/browse/MESOS-4113
> Project: Mesos
>  Issue Type: Bug
>  Components: docker
>Affects Versions: 0.25.0, 0.26.0
>Reporter: Sargun Dhillon
>Assignee: Artem Harutyunyan
>  Labels: mesosphere
>
> The docker executor currently sets the IP address of the container in 
> ContainerStatus.NetworkInfo.IPAddresses. This isn't a good thing, because 
> during bridged-mode execution it makes that IP address useless, since it's 
> behind the Docker NAT. I would like a flag that disables filling in the IP 
> address and allows it to fall back to the agent IP.





[jira] [Commented] (MESOS-4113) Docker Executor should not set container IP during bridged mode

2015-12-23 Thread Sargun Dhillon (JIRA)

[ 
https://issues.apache.org/jira/browse/MESOS-4113?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15069471#comment-15069471
 ] 

Sargun Dhillon commented on MESOS-4113:
---

The information that's exposed by MESOS-4064 allows an external program to

> Docker Executor should not set container IP during bridged mode
> ---
>
> Key: MESOS-4113
> URL: https://issues.apache.org/jira/browse/MESOS-4113
> Project: Mesos
>  Issue Type: Bug
>  Components: docker
>Affects Versions: 0.25.0, 0.26.0
>Reporter: Sargun Dhillon
>Assignee: Artem Harutyunyan
>  Labels: mesosphere
>
> The docker executor currently sets the IP address of the container in 
> ContainerStatus.NetworkInfo.IPAddresses. This isn't a good thing, because 
> during bridged-mode execution it makes that IP address useless, since it's 
> behind the Docker NAT. I would like a flag that disables filling in the IP 
> address and allows it to fall back to the agent IP.







[jira] [Comment Edited] (MESOS-4113) Docker Executor should not set container IP during bridged mode

2015-12-23 Thread Sargun Dhillon (JIRA)

[ 
https://issues.apache.org/jira/browse/MESOS-4113?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15069471#comment-15069471
 ] 

Sargun Dhillon edited comment on MESOS-4113 at 12/23/15 10:24 AM:
--

The information exposed by MESOS-4064 allows an external program to analyze 
state.json and determine which IP to use. Specifically, it parses the output 
to see whether the task/executor has a Docker container in bridged mode. If 
so, it uses the slaveID field to look up the relevant slave and then parses 
the PID. Currently, Minuteman and Mesos-DNS both do this.

I believe we should have another NetworkInfos field that definitively lists 
the IPs external users can contact in order to connect to the task, because 
NetworkInfos as they are today are effectively useless, due to the behaviour 
with Docker containers.

CC: [~jieyu] [~avin...@mesosphere.io]
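The lookup Sargun describes (parse state.json, detect a bridged Docker 
container, resolve the agent via slaveID, then parse its PID) could be 
sketched roughly as follows. The field names here are assumptions based on 
the 0.26-era state.json layout, not an authoritative schema:

```python
def task_ip(state, task_id):
    """Best-effort task IP lookup from a parsed state.json snapshot.

    Field names are assumptions based on the 0.26-era layout, not an
    authoritative schema.
    """
    for framework in state.get("frameworks", []):
        for task in framework.get("tasks", []):
            if task["id"] != task_id:
                continue
            docker = task.get("container", {}).get("docker", {})
            if docker.get("network") == "BRIDGE":
                # Bridged mode: the container IP sits behind the Docker
                # NAT, so resolve the agent via the task's slave_id.
                slave = next(s for s in state["slaves"]
                             if s["id"] == task["slave_id"])
                # An agent "pid" looks like "slave(1)@10.0.7.60:5051".
                return slave["pid"].split("@")[1].split(":")[0]
            # Otherwise trust the NetworkInfo-reported address.
            statuses = task.get("statuses", [])
            if not statuses:
                return None
            ips = (statuses[-1].get("container_status", {})
                   .get("network_infos", [{}])[0]
                   .get("ip_addresses", [{}]))
            return ips[0].get("ip_address")
    return None
```

This is the kind of workaround external tools currently carry; the comment's 
point is that Mesos could expose a definitive field instead.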



> Docker Executor should not set container IP during bridged mode
> ---
>
> Key: MESOS-4113
> URL: https://issues.apache.org/jira/browse/MESOS-4113
> Project: Mesos
>  Issue Type: Bug
>  Components: docker
>Affects Versions: 0.25.0, 0.26.0
>Reporter: Sargun Dhillon
>Assignee: Artem Harutyunyan
>  Labels: mesosphere
>
> The docker executor currently sets the IP address of the container in 
> ContainerStatus.NetworkInfo.IPAddresses. This isn't a good thing, because 
> during bridged-mode execution it makes that IP address useless, since it's 
> behind the Docker NAT. I would like a flag that disables filling in the IP 
> address and allows it to fall back to the agent IP.





[jira] [Commented] (MESOS-4181) Change port range logging to different logging level.

2015-12-23 Thread Joerg Schad (JIRA)

[ 
https://issues.apache.org/jira/browse/MESOS-4181?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15069622#comment-15069622
 ] 

Joerg Schad commented on MESOS-4181:


Trivial change in logging level: LOG(INFO) -> VLOG(1). I checked for other 
places where we log 'resources' via LOG(INFO); the only other one, 
src/slave/containerizer/mesos/isolators/posix/disk.cpp, doesn't seem as 
problematic.

https://reviews.apache.org/r/41680/

> Change port range logging to different logging level.
> -
>
> Key: MESOS-4181
> URL: https://issues.apache.org/jira/browse/MESOS-4181
> Project: Mesos
>  Issue Type: Bug
>  Components: master
>Affects Versions: 0.25.0
>Reporter: Cody Maloney
>Assignee: Joerg Schad
>  Labels: mesosphere, newbie
>
> Transforming Mesos' internal port range representation to text is 
> non-linear in the number of bytes output. We end up with a massive amount of 
> log data like the following:
> {noformat}
> Dec 15 23:54:08 ip-10-0-7-60.us-west-2.compute.internal mesos-master[15919]: 
> I1215 23:51:58.891165 15925 hierarchical.hpp:1103] Recovered cpus(*):1e-05; 
> mem(*):10; ports(*):[5565-5565] (total: ports(*):[1025-2180, 2182-3887, 
> 3889-5049, 5052-8079, 8082-8180, 8182-32000]; cpus(*):4; mem(*):14019; 
> disk(*):32541, allocated: cpus(*):0.01815; ports(*):[1050-1050, 1092-1092, 
> 1094-1094, 1129-1129, 1132-1132, 1140-1140, 1177-1178, 1180-1180, 1192-1192, 
> 1205-1205, 1221-1221, 1308-1308, 1311-1311, 1323-1323, 1326-1326, 1335-1335, 
> 1365-1365, 1404-1404, 1412-1412, 1436-1436, 1455-1455, 1459-1459, 1472-1472, 
> 1477-1477, 1482-1482, 1491-1491, 1510-1510, 1551-1551, 1553-1553, 1559-1559, 
> 1573-1573, 1590-1590, 1592-1592, 1619-1619, 1635-1636, 1678-1678, 1738-1738, 
> 1742-1742, 1752-1752, 1770-1770, 1780-1782, 1790-1790, 1792-1792, 1799-1799, 
> 1804-1804, 1844-1844, 1852-1852, 1867-1867, 1899-1899, 1936-1936, 1945-1945, 
> 1954-1954, 2046-2046, 2055-2055, 2063-2063, 2070-2070, 2089-2089, 2104-2104, 
> 2117-2117, 2132-2132, 2173-2173, 2178-2178, 2188-2188, 2200-2200, 2218-2218, 
> 2223-2223, 2244-2244, 2248-2248, 2250-2250, 2270-2270, 2286-2286, 2302-2302, 
> 2332-2332, 2377-2377, 2397-2397, 2423-2423, 2435-2435, 2442-2442, 2448-2448, 
> 2477-2477, 2482-2482, 2522-2522, 2586-2586, 2594-2594, 2600-2600, 2602-2602, 
> 2643-2643, 2648-2648, 2659-2659, 2691-2691, 2716-2716, 2739-2739, 2794-2794, 
> 2802-2802, 2823-2823, 2831-2831, 2840-2840, 2848-2848, 2876-2876, 2894-2895, 
> 2900-2900, 2904-2904, 2912-2912, 2983-2983, 2991-2991, 2999-2999, 3011-3011, 
> 3025-3025, 3036-3036, 3041-3041, 3051-3051, 3074-3074, 3097-3097, 3107-3107, 
> 3121-3121, 3171-3171, 3176-3176, 3195-3195, 3197-3197, 3210-3210, 3221-3221, 
> 3234-3234, 3245-3245, 3250-3251, 3255-3255, 3270-3270, 3293-3293, 3298-3298, 
> 3312-3312, 3318-3318, 3325-3325, 3368-3368, 3379-3379, 3391-3391, 3412-3412, 
> 3414-3414, 3420-3420, 3492-3492, 3501-3501, 3538-3538, 3579-3579, 3631-3631, 
> 3680-3680, 3684-3684, 3695-3695, 3699-3699, 3738-3738, 3758-3758, 3793-3793, 
> 3808-3808, 3817-3817, 3854-3854, 3856-3856, 3900-3900, 3906-3906, 3909-3909, 
> 3912-3912, 3946-3946, 3956-3956, 3959-3959, 3963-3963, 3974-
> Dec 15 23:54:09 ip-10-0-7-60.us-west-2.compute.internal mesos-master[15919]: 
> 3974, 3981-3981, 3985-3985, 4134-4134, 4178-4178, 4206-4206, 4223-4223, 
> 4239-4239, 4245-4245, 4251-4251, 4262-4263, 4271-4271, 4308-4308, 4323-4323, 
> 4329-4329, 4368-4368, 4385-4385, 4404-4404, 4419-4419, 4430-4430, 4448-4448, 
> 4464-4464, 4481-4481, 4494-4494, 4499-4499, 4510-4510, 4534-4534, 4543-4543, 
> 4555-4555, 4561-4562, 4577-4577, 4601-4601, 4675-4675, 4722-4722, 4739-4739, 
> 4748-4748, 4752-4752, 4764-4764, 4771-4771, 4787-4787, 4827-4827, 4830-4830, 
> 4837-4837, 4848-4848, 4853-4853, 4879-4879, 4883-4883, 4897-4897, 4902-4902, 
> 4911-4911, 4940-4940, 4946-4946, 4957-4957, 4994-4994, 4996-4996, 5008-5008, 
> 5019-5019, 5043-5043, 5059-5059, 5109-5109, 5134-5135, 5157-5157, 5172-5172, 
> 5192-5192, 5211-5211, 5215-5215, 5234-5234, 5237-5237, 5246-5246, 5255-5255, 
> 5268-5268, 5311-5311, 5314-5314, 5316-5316, 5348-5348, 5391-5391, 5407-5407, 
> 5433-5433, 5446-5447, 5454-5454, 5456-5456, 5482-5482, 5514-5515, 5517-5517, 
> 5525-5525, 5542-5542, 5554-5554, 5581-5581, 5624-5624, 5647-5647, 5695-5695, 
> 5700-5700, 5703-5703, 5743-5743, 5747-5747, 5793-5793, 5850-5850, 5856-5856, 
> 5858-5858, 5899-5899, 5901-5901, 5940-5940, 5958-5958, 5962-5962, 5974-5974, 
> 5995-5995, 6000-6001, 6037-6037, 6053-6053, 6066-6066, 6078-6078, 6129-6129, 
> 6139-6139, 6160-6160, 6174-6174, 6193-6193, 6234-6234, 6263-6263, 6276-6276, 
> 6287-6287, 6292-6292, 6294-6294, 6296-6296, 6306-6307, 6333-6333, 6343-6343, 
> 6349-6349, 6377-6377, 6418-6418, 6454-6454, 6484-6484, 6496-6496, 6504-6504, 
> 6518-6518, 6589-6589, 6592-6592, 6606-6606, 

[jira] [Commented] (MESOS-3560) JSON-based credential files do not work correctly

2015-12-23 Thread Sean Owen (JIRA)

[ 
https://issues.apache.org/jira/browse/MESOS-3560?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15069974#comment-15069974
 ] 

Sean Owen commented on MESOS-3560:
--

This ends up causing a problem in Spark later, when Mesos gets updated in 
Spark: https://issues.apache.org/jira/browse/SPARK-12501. Ideally the old 
field would also be supported, but hey.

> JSON-based credential files do not work correctly
> -
>
> Key: MESOS-3560
> URL: https://issues.apache.org/jira/browse/MESOS-3560
> Project: Mesos
>  Issue Type: Bug
>  Components: master
>Reporter: Michael Park
>Assignee: Isabel Jimenez
>  Labels: mesosphere
> Fix For: 0.26.0
>
>
> Specifying the following credentials file:
> {code}
> {
>   "credentials": [
>     {
>       "principal": "user",
>       "secret": "password"
>     }
>   ]
> }
> {code}
> Then hitting a master endpoint with:
> {code}
> curl -i -u "user:password" ...
> {code}
> Does not work. This is contrary to the text-based credentials file, which 
> works:
> {code}
> user password
> {code}
> Currently, the password in a JSON-based credentials file needs to be 
> base64-encoded in order for it to work:
> {code}
> {
>   "credentials": [
>     {
>       "principal": "user",
>       "secret": "cGFzc3dvcmQ="
>     }
>   ]
> }
> {code}
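As a quick sanity check of the base64 workaround described above: the value 
"cGFzc3dvcmQ=" is simply the base64 encoding of "password", e.g. in Python:

```python
import base64

# The JSON credentials file currently requires the secret base64-encoded,
# unlike the plain-text "user password" format.
encoded = base64.b64encode(b"password").decode("ascii")
print(encoded)  # cGFzc3dvcmQ=
```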







[jira] [Comment Edited] (MESOS-4113) Docker Executor should not set container IP during bridged mode

2015-12-23 Thread Anthony Scalisi (JIRA)

[ 
https://issues.apache.org/jira/browse/MESOS-4113?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15070045#comment-15070045
 ] 

Anthony Scalisi edited comment on MESOS-4113 at 12/23/15 7:05 PM:
--

Sorry for the delay here; I'm in the PST timezone.

[~jieyu] unfortunately, I'm not familiar with how Marathon does health checks 
(I don't speak Java at all). I opened an issue on GitHub: 
https://github.com/mesosphere/marathon/issues/2870#issuecomment-166848308
hoping to get some leads over there.

The only thing that's for sure is that 0.26.0 changed which task IP is 
reported, and Marathon 0.13.0 (or the 0.14 RC, for that matter) now grabs the 
reported IP (defaulting to the internal Docker one), which makes all the 
health checks fail.

I saw Mesos-DNS mentioned, and I'd like to add that even though Marathon 
health checks stopped working, our other discovery mechanisms didn't.

In the case of Mesos-consul, for example 
(https://github.com/CiscoCloud/mesos-consul), you can see here 
(https://github.com/CiscoCloud/mesos-consul/blob/aac6c2828a46c3a54efe1fbc41003dbcd69a6a40/main.go#L120-L123)
that it has a flag to specify how the task IP is registered in Consul. We had 
it set to "host,mesos,docker,netinfo", effectively registering the task IP as 
the Mesos slave IP, so everything worked fine on that end on v0.26.0.

I'd like to mention that Mesos-DNS has the same kind of flag: 
https://github.com/mesosphere/mesos-dns/blob/9a8aa106a05339c79fb189d435c68f64e876414c/records/config.go#L49
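The source-ordering flags in mesos-consul and Mesos-DNS mentioned above boil 
down to trying IP sources in a configured preference order. A minimal sketch 
(names illustrative, not either tool's actual API):

```python
def pick_task_ip(sources, order="host,mesos,docker,netinfo"):
    # Try each IP source label in preference order and return the first
    # one that yields an address, mirroring the "host,mesos,docker,netinfo"
    # setting described above. `sources` maps label -> IP (or None).
    for label in order.split(","):
        ip = sources.get(label)
        if ip:
            return ip
    return None
```

With "host" first, a bridged Docker task resolves to the agent address even 
when a (NAT-ed) container IP is present.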

I also see multiple relevant issues:

- https://github.com/mesosphere/mesos-dns/issues/334#issuecomment-155657196 
- https://github.com/mesosphere/mesos-dns/issues/334#issuecomment-155662858
- https://github.com/mesosphere/mesos-dns/issues/332
- https://github.com/mesosphere/mesos-dns/issues/369

The strange part is that they mention the Docker IP issue was introduced in 
0.25.0, but we're currently running 0.25.0 after yesterday's botched upgrade 
and everything's working as intended (the JSON is not populated with the 
Docker-internal IP on the Marathon side).

Unfortunately, I don't know enough about the internals of Mesos itself, so I'm 
just coming from an Ops point of view (and a long night of nuked containers, 
unfortunately).

I'd also like to mention that I don't have enough bandwidth to support Mesos 
compiled from source, as I rely on the stable and testing packages from the 
Mesosphere repositories.

If it is any help, I'm available on IRC in the #mesos and #marathon channels 
and more than happy to help debug things.

And lastly, I think we should strongly reconsider the "resolved" flag (this 
upgrade to 0.26.0 was fatal for us).




[jira] [Commented] (MESOS-3157) only perform batch resource allocations

2015-12-23 Thread James Peach (JIRA)

[ 
https://issues.apache.org/jira/browse/MESOS-3157?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15069858#comment-15069858
 ] 

James Peach commented on MESOS-3157:


{quote}
allocator becomes unresponsive due to its long event queue
{quote}

It is not the length of the queue, it is the number of long-running events in 
it. For example, if an allocation pass takes 3sec and we queue one every 1sec, 
the queue will grow without bound. The rate of allocation arrival is 
proportional to the amount of churn in the cluster.

{quote}
is it because there are too many slaves in your Mesos cluster? Or too many 
frameworks?
{quote}

Yes, we run fairly large clusters with numerous frameworks (hundreds). See the 
{{HierarchicalAllocator_BENCHMARK_Test.DeclineOffers}} test for a synthetic 
example.

{quote}
means even when a reviveOffers is handled by allocator after a long time, it 
will not take effect immediately (i.e., trigger an allocation so that framework 
can get offers immediately)
{quote}

Yes, that is correct. In the scenario where a number of frameworks revive at 
once, we only want to do a single allocation pass across all the slaves, not 
multiple passes. This necessarily entails some sort of batching or delay, 
though that is bounded by the allocation interval.

As I pointed out earlier in this ticket I haven't been able to create a 
benchmark to demonstrate the original problem. I'm working on deploying an 
un-patched Mesos to one of our test clusters to better understand the 
triggering conditions.
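The unbounded-growth argument above is easy to see with a back-of-the-envelope 
model (the 3 sec pass / 1 sec arrival numbers come from the comment; the 
function itself is just illustrative arithmetic, not allocator code):

```python
def allocator_backlog(pass_seconds, arrival_seconds, horizon_seconds):
    # Events arrive every `arrival_seconds`; the allocator completes one
    # pass every `pass_seconds`. When passes are slower than arrivals,
    # the backlog grows linearly with time.
    arrivals = horizon_seconds // arrival_seconds
    completed = min(arrivals, horizon_seconds // pass_seconds)
    return arrivals - completed
```

After one minute of 3-second passes arriving every second, 40 events are 
still queued; batching instead caps the work at one pass per allocation 
interval.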

> only perform batch resource allocations
> ---
>
> Key: MESOS-3157
> URL: https://issues.apache.org/jira/browse/MESOS-3157
> Project: Mesos
>  Issue Type: Bug
>  Components: allocation
>Reporter: James Peach
>Assignee: James Peach
>
> Our deployment environments have a lot of churn, with many short-lived 
> frameworks that often revive offers. Running the allocator takes a long time 
> (from seconds up to minutes).
> In this situation, event-triggered allocation causes the event queue in the 
> allocator process to get very long, and the allocator effectively becomes 
> unresponsive (eg. a revive offers message takes too long to come to the head 
> of the queue).
> We have been running a patch to remove all the event-triggered allocations 
> and only allocate from the batch task 
> {{HierarchicalAllocatorProcess::batch}}. This works great and really improves 
> responsiveness.





[jira] [Commented] (MESOS-4113) Docker Executor should not set container IP during bridged mode

2015-12-23 Thread Jie Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/MESOS-4113?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15069968#comment-15069968
 ] 

Jie Yu commented on MESOS-4113:
---

To add some context here: I think it will be hard for Mesos to determine 
which IP is routable and accessible from other hosts without the operator's 
knowledge. For instance, what if the Docker container uses a custom network 
plugin? Also, in NAT mode, using the agent IP is not sufficient for accessing 
a given endpoint in the container; the mapped port needs to be known as well. 
That means you need some other information anyway in order to access the 
endpoint (relying on the IP exposed in NetworkInfo alone is not sufficient). 
The best Mesos could do is make a best-effort guess about the externally 
accessible IP for the container.
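Jie Yu's point that the agent IP alone is not enough in NAT mode can be made 
concrete with a tiny sketch (shapes and names are illustrative, not a Mesos 
API):

```python
def bridged_endpoint(agent_ip, port_mappings, container_port):
    # Reaching a container endpoint behind Docker NAT needs both the agent
    # IP and the host port that the container port was mapped to; the IP
    # by itself identifies the host, not the service.
    host_port = port_mappings[container_port]
    return (agent_ip, host_port)
```

So even a "fall back to the agent IP" flag only helps consumers that also 
track the port mapping.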

> Docker Executor should not set container IP during bridged mode
> ---
>
> Key: MESOS-4113
> URL: https://issues.apache.org/jira/browse/MESOS-4113
> Project: Mesos
>  Issue Type: Bug
>  Components: docker
>Affects Versions: 0.25.0, 0.26.0
>Reporter: Sargun Dhillon
>Assignee: Artem Harutyunyan
>  Labels: mesosphere
>
> The docker executor currently sets the IP address of the container in 
> ContainerStatus.NetworkInfo.IPAddresses. This isn't a good thing, because 
> during bridged-mode execution it makes that IP address useless, since it's 
> behind the Docker NAT. I would like a flag that disables filling in the IP 
> address and allows it to fall back to the agent IP.





[jira] [Comment Edited] (MESOS-4113) Docker Executor should not set container IP during bridged mode

2015-12-23 Thread Anthony Scalisi (JIRA)

[ 
https://issues.apache.org/jira/browse/MESOS-4113?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15070045#comment-15070045
 ] 

Anthony Scalisi edited comment on MESOS-4113 at 12/23/15 7:03 PM:
--

Sorry for the delay here, on PST timezone.

[~jieyu] unfortunately, I'm not familiar with how Marathon does health checks 
(I don't speak Java at all). I opened an issue on Github: 
https://github.com/mesosphere/marathon/issues/2870#issuecomment-166848308
so hoping to get some leads over there.

The only thing that's certain is that 0.26.0 changed which task IP is 
reported, and Marathon 0.13.0 (or the 0.14 RC, for that matter) now grabs the 
reported IP (defaulting to the internal Docker one), which makes all the 
health checks fail.

I saw mention of Mesos-DNS and I would like to add that even though Marathon 
health checks stopped working, our other discovery mechanisms didn't:

In the case of Mesos-consul, for example 
(https://github.com/CiscoCloud/mesos-consul), you can see here 
(https://github.com/CiscoCloud/mesos-consul/blob/aac6c2828a46c3a54efe1fbc41003dbcd69a6a40/main.go#L120-L123)
 that it has a flag to specify how the task IP is registered in Consul. We had 
it set to "host,mesos,docker,netinfo", essentially registering the task IP as 
the Mesos slave IP, so everything worked fine on that end on v0.26.0.
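
The "host,mesos,docker,netinfo" setting is an ordered preference list: the 
registrator walks the sources in order and registers the first IP it finds. A 
rough sketch of that first-match logic, with illustrative field names rather 
than mesos-consul's real ones:

```python
def pick_task_ip(task, order="host,mesos,docker,netinfo"):
    # Walk the configured IP sources in order; the first non-empty one wins.
    sources = {
        "host":    task.get("slave_ip"),    # agent (slave) host IP
        "mesos":   task.get("mesos_ip"),
        "docker":  task.get("docker_ip"),   # internal Docker bridge IP
        "netinfo": task.get("netinfo_ip"),  # NetworkInfo.IPAddresses
    }
    for name in order.split(","):
        ip = sources.get(name.strip())
        if ip:
            return ip
    return None

task = {"slave_ip": "10.0.7.60", "docker_ip": "172.17.0.2"}
print(pick_task_ip(task))                 # default order: agent IP wins
print(pick_task_ip(task, "docker,host"))  # container IP when preferred
```

With "host" first, the registered IP stays the routable slave IP even when a 
Docker-internal IP is also reported, which matches the behavior described 
above.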

I'd like to mention that Mesos-DNS has the same kind of flag: 
https://github.com/mesosphere/mesos-dns/blob/9a8aa106a05339c79fb189d435c68f64e876414c/records/config.go#L49

I also see multiple relevant issues:

- https://github.com/mesosphere/mesos-dns/issues/334#issuecomment-155657196 
- https://github.com/mesosphere/mesos-dns/issues/334#issuecomment-155662858
- https://github.com/mesosphere/mesos-dns/issues/332
- https://github.com/mesosphere/mesos-dns/issues/369

The strange part is that they mention the Docker IP issue was introduced in 
0.25.0, but we're currently running 0.25.0 after the botched upgrade yesterday 
and everything's working as intended (the JSON is not populated with the 
internal Docker IP on the Marathon side).

Unfortunately, I don't know enough about the internals of Mesos itself, so I'm 
just coming at this from an Ops point of view (and a long night of nuked 
containers, unfortunately).

I'd also like to mention that I don't have enough bandwidth to support Mesos 
compiled from source, as I rely on the stable and testing packages from the 
Mesosphere repositories.

If it is any help, I'm available on IRC in the #mesos and #marathon channels 
and more than happy to help debug things.






[jira] [Commented] (MESOS-4113) Docker Executor should not set container IP during bridged mode

2015-12-23 Thread Craig W (JIRA)

[ 
https://issues.apache.org/jira/browse/MESOS-4113?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15070018#comment-15070018
 ] 

Craig W commented on MESOS-4113:


[~sargun] what is Minuteman? Do you have a link?



[jira] [Commented] (MESOS-4113) Docker Executor should not set container IP during bridged mode

2015-12-23 Thread Sargun Dhillon (JIRA)

[ 
https://issues.apache.org/jira/browse/MESOS-4113?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15070050#comment-15070050
 ] 

Sargun Dhillon commented on MESOS-4113:
---

Could Mesos just use the libprocess IP by default? Re: the ports, these could 
be ascertained using DiscoveryInfo or by looking at the ports resource.
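
Reading the allocated host ports out of a ports resource's textual form can 
be illustrated with a small parser. The range syntax matches what Mesos logs 
(e.g. "ports(*):[31000-31001]"), but treat this parser as a sketch, not 
actual Mesos code:

```python
import re

def parse_ports_resource(text):
    # Mesos renders a ports resource as e.g. "[31000-31001, 31005-31005]".
    # Expand each inclusive lo-hi range into the individual port numbers.
    ports = []
    for lo, hi in re.findall(r"(\d+)-(\d+)", text):
        ports.extend(range(int(lo), int(hi) + 1))
    return ports

print(parse_ports_resource("[31000-31001, 31005-31005]"))
```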



[jira] [Commented] (MESOS-4113) Docker Executor should not set container IP during bridged mode

2015-12-23 Thread Jie Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/MESOS-4113?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15069986#comment-15069986
 ] 

Jie Yu commented on MESOS-4113:
---

[~scalp42], I am not familiar with how Marathon does health checks, especially 
how Marathon determines the IP:PORT of the endpoint to check. Can you give us 
more context here?



[jira] [Commented] (MESOS-4113) Docker Executor should not set container IP during bridged mode

2015-12-23 Thread Anthony Scalisi (JIRA)

[ 
https://issues.apache.org/jira/browse/MESOS-4113?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15070045#comment-15070045
 ] 

Anthony Scalisi commented on MESOS-4113:


Sorry for the delay here, on PST timezone.

[~jieyu] unfortunately, I'm not familiar with how Marathon does health checks 
(I don't speak Java at all). I opened an issue on Github: 
https://github.com/mesosphere/marathon/issues/2870#issuecomment-166848308
so hoping to get some leads over there.

The only thing that's certain is that 0.26.0 changed which task IP is 
reported, and Marathon 0.13.0 (or the 0.14 RC, for that matter) now grabs the 
reported IP (defaulting to the internal Docker one), which makes all the 
health checks fail.

I saw mention of Mesos-DNS and I would like to add that even though Marathon 
health checks stopped working, our other discovery mechanisms didn't:

In the case of Mesos-consul, for example 
(https://github.com/CiscoCloud/mesos-consul), you can see here 
(https://github.com/CiscoCloud/mesos-consul/blob/aac6c2828a46c3a54efe1fbc41003dbcd69a6a40/main.go#L120-L123)
 that it has a flag to specify how the task IP is registered in Consul. We had 
it set to "host,mesos,docker,netinfo", essentially registering the task IP as 
the Mesos slave IP, so everything worked fine on that end on v0.26.0.

I'd like to mention that Mesos-DNS has the same kind of flag: 
https://github.com/mesosphere/mesos-dns/blob/9a8aa106a05339c79fb189d435c68f64e876414c/records/config.go#L49

I also see multiple relevant issues:

- https://github.com/mesosphere/mesos-dns/issues/332
- https://github.com/mesosphere/mesos-dns/issues/369

Unfortunately, I don't know enough about the internals of Mesos itself, so I'm 
just coming at this from an Ops point of view (and a long night of nuked 
containers, unfortunately).

I'd also like to mention that I don't have enough bandwidth to support Mesos 
compiled from source, as I rely on the stable and testing packages from the 
Mesosphere repositories.

If it is any help, I'm available on IRC in the #mesos and #marathon channels 
and more than happy to help debug things.




[jira] [Comment Edited] (MESOS-4113) Docker Executor should not set container IP during bridged mode

2015-12-23 Thread Anthony Scalisi (JIRA)

[ 
https://issues.apache.org/jira/browse/MESOS-4113?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15070045#comment-15070045
 ] 

Anthony Scalisi edited comment on MESOS-4113 at 12/23/15 9:54 PM:
--

Sorry for the delay here, on PST timezone.

[~jieyu] unfortunately, I'm not familiar with how Marathon does health checks 
(I don't speak Java at all). I opened an issue on Github: 
https://github.com/mesosphere/marathon/issues/2870#issuecomment-166848308
so hoping to get some leads over there.

The only thing that's certain is that 0.26.0 changed which task IP is 
reported, and Marathon 0.13.0 (or the 0.14 RC, for that matter) now grabs the 
reported IP (defaulting to the internal Docker one), which makes all the 
health checks fail.

I saw mention of Mesos-DNS and I would like to add that even though Marathon 
health checks stopped working, our other discovery mechanisms didn't:

In the case of Mesos-consul, for example 
(https://github.com/CiscoCloud/mesos-consul), you can see here 
(https://github.com/CiscoCloud/mesos-consul/blob/aac6c2828a46c3a54efe1fbc41003dbcd69a6a40/main.go#L120-L123)
 that it has a flag to specify how the task IP is registered in Consul. We had 
it set to "host,mesos,docker,netinfo", essentially registering the task IP as 
the Mesos slave IP, so everything worked fine on that end on v0.26.0.

I'd like to mention that Mesos-DNS has the same kind of flag: 
https://github.com/mesosphere/mesos-dns/blob/9a8aa106a05339c79fb189d435c68f64e876414c/records/config.go#L49

I also see multiple relevant issues:

- https://github.com/mesosphere/mesos-dns/issues/334#issuecomment-155657196 
- https://github.com/mesosphere/mesos-dns/issues/334#issuecomment-155662858
- https://github.com/mesosphere/mesos-dns/issues/332
- https://github.com/mesosphere/mesos-dns/issues/369

The strange part is that they mention the Docker IP issue was introduced in 
0.25.0, but we're currently running 0.25.0 after the botched upgrade yesterday 
and everything's working as intended (the JSON is not populated with the 
internal Docker IP on the Marathon side).

Unfortunately, I don't know enough about the internals of Mesos itself, so I'm 
just coming at this from an Ops point of view (and a long night of nuked 
containers, unfortunately).

I'd also like to mention that I don't have enough bandwidth to support Mesos 
compiled from source, as I rely on the stable and testing packages from the 
Mesosphere repositories.

If it is any help, I'm available on IRC in the #mesos and #marathon channels 
and more than happy to help debug things.

Lastly, I think we should strongly reconsider the "resolved" flag (this 
upgrade to 0.26.0 was fatal for us).

EDIT: thanks a lot for looking into the issue during holidays




[jira] [Created] (MESOS-4245) Add `dist` target to CMake solution

2015-12-23 Thread Alex Clemmer (JIRA)
Alex Clemmer created MESOS-4245:
---

 Summary: Add `dist` target to CMake solution
 Key: MESOS-4245
 URL: https://issues.apache.org/jira/browse/MESOS-4245
 Project: Mesos
  Issue Type: Bug
  Components: cmake
Reporter: Alex Clemmer
Assignee: Diana Arroyo








[jira] [Assigned] (MESOS-4181) Change port range logging to different logging level.

2015-12-23 Thread Joerg Schad (JIRA)

 [ 
https://issues.apache.org/jira/browse/MESOS-4181?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joerg Schad reassigned MESOS-4181:
--

Assignee: Joerg Schad

> Change port range logging to different logging level.
> -
>
> Key: MESOS-4181
> URL: https://issues.apache.org/jira/browse/MESOS-4181
> Project: Mesos
>  Issue Type: Bug
>  Components: master
>Affects Versions: 0.25.0
>Reporter: Cody Maloney
>Assignee: Joerg Schad
>  Labels: mesosphere, newbie
>
> Transforming mesos' internal port-range representation to text is 
> non-linear in the number of bytes output. We end up with a massive amount of 
> log data like the following:
> {noformat}
> Dec 15 23:54:08 ip-10-0-7-60.us-west-2.compute.internal mesos-master[15919]: 
> I1215 23:51:58.891165 15925 hierarchical.hpp:1103] Recovered cpus(*):1e-05; 
> mem(*):10; ports(*):[5565-5565] (total: ports(*):[1025-2180, 2182-3887, 
> 3889-5049, 5052-8079, 8082-8180, 8182-32000]; cpus(*):4; mem(*):14019; 
> disk(*):32541, allocated: cpus(*):0.01815; ports(*):[1050-1050, 1092-1092, 
> 1094-1094, 1129-1129, 1132-1132, 1140-1140, 1177-1178, 1180-1180, 1192-1192, 
> 1205-1205, 1221-1221, 1308-1308, 1311-1311, 1323-1323, 1326-1326, 1335-1335, 
> 1365-1365, 1404-1404, 1412-1412, 1436-1436, 1455-1455, 1459-1459, 1472-1472, 
> 1477-1477, 1482-1482, 1491-1491, 1510-1510, 1551-1551, 1553-1553, 1559-1559, 
> 1573-1573, 1590-1590, 1592-1592, 1619-1619, 1635-1636, 1678-1678, 1738-1738, 
> 1742-1742, 1752-1752, 1770-1770, 1780-1782, 1790-1790, 1792-1792, 1799-1799, 
> 1804-1804, 1844-1844, 1852-1852, 1867-1867, 1899-1899, 1936-1936, 1945-1945, 
> 1954-1954, 2046-2046, 2055-2055, 2063-2063, 2070-2070, 2089-2089, 2104-2104, 
> 2117-2117, 2132-2132, 2173-2173, 2178-2178, 2188-2188, 2200-2200, 2218-2218, 
> 2223-2223, 2244-2244, 2248-2248, 2250-2250, 2270-2270, 2286-2286, 2302-2302, 
> 2332-2332, 2377-2377, 2397-2397, 2423-2423, 2435-2435, 2442-2442, 2448-2448, 
> 2477-2477, 2482-2482, 2522-2522, 2586-2586, 2594-2594, 2600-2600, 2602-2602, 
> 2643-2643, 2648-2648, 2659-2659, 2691-2691, 2716-2716, 2739-2739, 2794-2794, 
> 2802-2802, 2823-2823, 2831-2831, 2840-2840, 2848-2848, 2876-2876, 2894-2895, 
> 2900-2900, 2904-2904, 2912-2912, 2983-2983, 2991-2991, 2999-2999, 3011-3011, 
> 3025-3025, 3036-3036, 3041-3041, 3051-3051, 3074-3074, 3097-3097, 3107-3107, 
> 3121-3121, 3171-3171, 3176-3176, 3195-3195, 3197-3197, 3210-3210, 3221-3221, 
> 3234-3234, 3245-3245, 3250-3251, 3255-3255, 3270-3270, 3293-3293, 3298-3298, 
> 3312-3312, 3318-3318, 3325-3325, 3368-3368, 3379-3379, 3391-3391, 3412-3412, 
> 3414-3414, 3420-3420, 3492-3492, 3501-3501, 3538-3538, 3579-3579, 3631-3631, 
> 3680-3680, 3684-3684, 3695-3695, 3699-3699, 3738-3738, 3758-3758, 3793-3793, 
> 3808-3808, 3817-3817, 3854-3854, 3856-3856, 3900-3900, 3906-3906, 3909-3909, 
> 3912-3912, 3946-3946, 3956-3956, 3959-3959, 3963-3963, 3974-
> Dec 15 23:54:09 ip-10-0-7-60.us-west-2.compute.internal mesos-master[15919]: 
> 3974, 3981-3981, 3985-3985, 4134-4134, 4178-4178, 4206-4206, 4223-4223, 
> 4239-4239, 4245-4245, 4251-4251, 4262-4263, 4271-4271, 4308-4308, 4323-4323, 
> 4329-4329, 4368-4368, 4385-4385, 4404-4404, 4419-4419, 4430-4430, 4448-4448, 
> 4464-4464, 4481-4481, 4494-4494, 4499-4499, 4510-4510, 4534-4534, 4543-4543, 
> 4555-4555, 4561-4562, 4577-4577, 4601-4601, 4675-4675, 4722-4722, 4739-4739, 
> 4748-4748, 4752-4752, 4764-4764, 4771-4771, 4787-4787, 4827-4827, 4830-4830, 
> 4837-4837, 4848-4848, 4853-4853, 4879-4879, 4883-4883, 4897-4897, 4902-4902, 
> 4911-4911, 4940-4940, 4946-4946, 4957-4957, 4994-4994, 4996-4996, 5008-5008, 
> 5019-5019, 5043-5043, 5059-5059, 5109-5109, 5134-5135, 5157-5157, 5172-5172, 
> 5192-5192, 5211-5211, 5215-5215, 5234-5234, 5237-5237, 5246-5246, 5255-5255, 
> 5268-5268, 5311-5311, 5314-5314, 5316-5316, 5348-5348, 5391-5391, 5407-5407, 
> 5433-5433, 5446-5447, 5454-5454, 5456-5456, 5482-5482, 5514-5515, 5517-5517, 
> 5525-5525, 5542-5542, 5554-5554, 5581-5581, 5624-5624, 5647-5647, 5695-5695, 
> 5700-5700, 5703-5703, 5743-5743, 5747-5747, 5793-5793, 5850-5850, 5856-5856, 
> 5858-5858, 5899-5899, 5901-5901, 5940-5940, 5958-5958, 5962-5962, 5974-5974, 
> 5995-5995, 6000-6001, 6037-6037, 6053-6053, 6066-6066, 6078-6078, 6129-6129, 
> 6139-6139, 6160-6160, 6174-6174, 6193-6193, 6234-6234, 6263-6263, 6276-6276, 
> 6287-6287, 6292-6292, 6294-6294, 6296-6296, 6306-6307, 6333-6333, 6343-6343, 
> 6349-6349, 6377-6377, 6418-6418, 6454-6454, 6484-6484, 6496-6496, 6504-6504, 
> 6518-6518, 6589-6589, 6592-6592, 6606-6606, 6640-6640, 6713-6713, 6717-6717, 
> 6738-6738, 6757-6757, 6765-6765, 6778-6778, 6792-6792, 6798-6798, 6811-6811, 
> 6815-6815, 6828-6828, 6838-6839, 6856-6856, 6868-6868, 6877-6877, 6892-6892, 
> 6903-6903, 6908-6908, 6943-6943, 6973-6973, 6977-6977, 7003-7003, 7019-7019, 
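
The blow-up shown in that (clipped) log excerpt comes from enumerating every 
range on each allocation log line. One possible mitigation, sketched below 
(this is an illustration, not the actual Mesos change), is to log a 
fixed-size summary instead of the full enumeration:

```python
def summarize_ranges(ranges, max_show=3):
    # Instead of printing every [lo-hi] pair (output grows with the number
    # of ranges), emit a fixed-size summary plus a small sample.
    total = sum(hi - lo + 1 for lo, hi in ranges)
    sample = ", ".join("%d-%d" % r for r in ranges[:max_show])
    return "ports: %d ranges, %d ports (e.g. %s, ...)" % (len(ranges), total, sample)

print(summarize_ranges([(1025, 2180), (2182, 3887), (5052, 8079), (8082, 8180)]))
```

A lower logging level for the full dump (the fix this ticket proposes) keeps 
the detail available when explicitly requested.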

[jira] [Commented] (MESOS-4214) Introduce HTTP endpoint /weights for updating weight

2015-12-23 Thread Yongqiao Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/MESOS-4214?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15069425#comment-15069425
 ] 

Yongqiao Wang commented on MESOS-4214:
--

Append RR: https://reviews.apache.org/r/41681/

> Introduce HTTP endpoint /weights for updating weight
> 
>
> Key: MESOS-4214
> URL: https://issues.apache.org/jira/browse/MESOS-4214
> Project: Mesos
>  Issue Type: Task
>Reporter: Yongqiao Wang
>Assignee: Yongqiao Wang
>



