doption.
hth,
James
*Joris Van Remoortere*
Mesosphere
On Tue, Jun 14, 2016 at 3:02 PM, Du, Fan <fan...@intel.com> wrote:
that we may (as
a project) not want to support this information as the current string
attributes.
Well understood, thanks for the explanation!
Any comments about #3 and #4?
—
*Joris Van Remoortere*
Mesosphere
On Tue, Jun 14, 2016 at 3:02 PM, Du, Fan <fan...@intel.com> wrote:
On 2016/6/14
/browse/MESOS-3059
--
Aaron Carey
Production Engineer - Cloud Pipeline
Industrial Light & Magic
London
020 3751 9150
________
From: Du, Fan [fan...@intel.com]
Sent: 14 June 2016 07:24
To: user@mesos.apache.org
. Moreover, the time to
randomly shuffle the agents also grows.
How about arranging the agents on a per-rack basis? A minor change to
the way resources are allocated would fix this.
I might not see the whole picture here, so comments are welcome!
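For reference, the closest existing mechanism is to tag each agent with its
rack via string attributes at startup; a minimal sketch (master address and
rack/row values are illustrative):

    mesos-slave --master=zk://master.example.com:2181/mesos \
        --attributes='rack:r1;row:3' \
        --work_dir=/var/lib/mesos

Frameworks can then read these attributes from offers, though each framework
has to implement its own rack-aware placement on top of them.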
On 2016/6/6 17:17, Du, Fan wrote:
Hi, Mesos folks
On 2016/6/8 0:58, james wrote:
Do I have access to the JIRA system by default by joining this list,
or do I have to request permission somewhere? (Sorry, JIRA is new to me,
so pointers to a document with recommendations on using JIRA for Mesos
would be appreciated.)
You need a JIRA account; sign up for one here:
t very much.
hth,
James
On 06/06/2016 05:06 AM, Stephen Gran wrote:
Hi,
This looks potentially interesting. How does it work in a public cloud
deployment scenario? I assume you would just have to disable this
feature, or not enable it?
Cheers,
On 06/06/16 10:17, Du, Fan wrote:
Hi, Mesos folks
I've been thinking about Mesos rack awareness support for a while;
it's a common interest for lots of data center applications to provide
data locality, fault tolerance, and better task placement. I created
MESOS-5545 to track the story, and here is the initial design doc [1] to
Mesos has its own containerizer implementation, or it can employ Docker to do that.
A framework can specify which containerizer to use via:
containerInfo.set_type(ContainerInfo::DOCKER);
or containerInfo.set_type(ContainerInfo::MESOS);
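For the DOCKER type to actually launch tasks, the agent must also be started
with the Docker containerizer enabled; a minimal sketch (master address and
work_dir are illustrative):

    mesos-slave --master=zk://master.example.com:2181/mesos \
        --containerizers=docker,mesos \
        --work_dir=/var/lib/mesos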
See the example below if it helps clear up your puzzle.
How do I make slave_a use the first half of the CPU/memory and slave_b
use the rest?
On 2016/1/12 20:54, haosdent wrote:
Yes, you need to use different work_dirs and ports.
On Tue, Jan 12, 2016 at 8:42 PM, Du, Fan <fan...@intel.com> wrote:
Just my 2 cents.
On 2016/1/12 21:14, haosdent wrote:
What I mean is: how do we prevent two slaves from using the same CPU at the
same time?
I think this is handled by the container; different containers would try to
use different resources.
I will dig into the code so we can get a definitive answer.
otherwise I will try to
Just my 2 cents.
I guess the spew is caused by using the same work_dir.
Even with two different work_dirs, how are CPU/memory resources
partitioned between the two slave instances?
I'm not aware of the current resource-parsing logic supporting this (probably
it doesn't).
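A sketch of how the two agents could split the machine explicitly with the
--resources flag, instead of each auto-detecting the full machine (assuming
an 8-CPU, 16 GB box; ports and paths are illustrative):

    mesos-slave --master=zk://master.example.com:2181/mesos \
        --port=5051 --work_dir=/var/lib/mesos/slave_a \
        --resources='cpus:4;mem:8192'
    mesos-slave --master=zk://master.example.com:2181/mesos \
        --port=5052 --work_dir=/var/lib/mesos/slave_b \
        --resources='cpus:4;mem:8192'

Note that this only partitions what each agent advertises in its offers;
actually pinning tasks to CPUs still depends on the isolation mechanism
(e.g. cgroups) in use.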
but why not use slave docker image to do the
-
Da (Klaus), Ma (马达) | PMP® | Advisory Software Engineer
Platform Symphony/DCOS Development & Support, STG, IBM GCG
+86-10-8245 4084 | klaus1982...@gmail.com | http://k82.me
On Tue, Jan 12, 2016 at 9:00 PM, Du, Fan <fan...@intel.com> wrote:
in this, I can add you to that work group to move this forward.
Please add me to the work group so I can contribute to it.
Thanks a lot!
Thanks,
Guangya
On Thu, Dec 31, 2015 at 5:00 PM, Du, Fan <fan...@intel.com> wrote:
Hi
Happy new year!
Curre
On 2015/11/24 9:47, Chengwei Yang wrote:
Hi all,
We're using Mesos in production on CentOS 6 and plan to upgrade CentOS to 7.1
without affecting any tasks running on Mesos. We're about to replace all
mesos-masters on the fly.
The procedure is listed below:
0. 3 mesos-masters running on CentOS 6
1.
https://github.com/apache/mesos/blob/3539b7a0e15b594148308319bf052d28b1429b98/src/zookeeper/contender.cpp#L147
On Tue, Nov 24, 2015 at 9:40 PM, Du, Fan <fan...@intel.com> wrote:
On 2015/11/24 9:47, Chengwei Yang wrote:
Hi all,
We're using Mesos in production on CentOS 6 and
move on to adding the next host.
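For what it's worth, a rough sketch of a rolling replacement, assuming
ZooKeeper-based HA with three masters and --quorum=2 (host names are
illustrative):

    # On a new CentOS 7.1 host, start a master against the same
    # ZooKeeper ensemble:
    mesos-master --zk=zk://zk1:2181,zk2:2181,zk3:2181/mesos \
        --quorum=2 --work_dir=/var/lib/mesos
    # Then stop one CentOS 6 master, wait for the cluster to settle,
    # and repeat for the remaining masters.

The key constraint is that the number of live masters must never drop below
the quorum.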
Hi Mesos experts
There are server and client snapshot metrics in JSON format provided by
Mesos itself.
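Those snapshots are plain HTTP endpoints; a quick sketch of pulling them
(default ports, host names are illustrative):

    curl http://master.example.com:5050/metrics/snapshot
    curl http://agent.example.com:5051/metrics/snapshot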
But more often we want to extend the metrics a bit beyond that.
I had been looking into this for a couple of days when
https://collectd.org/ caught my eye; it also has a mesos
on-2.6.0.jar
[root@tylersburg spark-1.5.1-bin-hadoop2.6]# ls -hl
/opt/hadoop-2.6.0/bin/hadoop
-rwxr-xr-x. 1 root root 5.4K Nov 3 08:36 /opt/hadoop-2.6.0/bin/hadoop
On Wed, Nov 4, 2015 at 4:56 PM, Du, Fan <fan...@intel.com> wrote:
Hi Mesos experts
I set up a small Mesos cluster with 1 master and 6 slaves,
and deployed HDFS on the same cluster topology, both under the root user role.
#cat spark-1.5.1-bin-hadoop2.6/conf/spark-env.sh
export MESOS_NATIVE_JAVA_LIBRARY=/usr/local/lib/libmesos.so
export
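For comparison, a minimal spark-env.sh sketch for Spark on Mesos; the
truncated export above may differ, and the HDFS URI is illustrative:

    export MESOS_NATIVE_JAVA_LIBRARY=/usr/local/lib/libmesos.so
    export SPARK_EXECUTOR_URI=hdfs://master.example.com:9000/spark/spark-1.5.1-bin-hadoop2.6.tgz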
folder cannot exceed 800G. I suggest using LVM.
On Mon, Oct 26, 2015 at 4:39 PM, haosdent <haosd...@gmail.com> wrote:
Do you use LVM? LVM can combine multiple disks into one volume. This is
not related to Mesos.
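A rough sketch of combining two disks into a single volume for the agent
work_dir (device names and mount point are illustrative):

    pvcreate /dev/sdb /dev/sdc
    vgcreate vg_mesos /dev/sdb /dev/sdc
    lvcreate -l 100%FREE -n lv_mesos vg_mesos
    mkfs.ext4 /dev/vg_mesos/lv_mesos
    mount /dev/vg_mesos/lv_mesos /var/lib/mesos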
On Mon, Oct 26, 2015 at 4:36 PM,