Hi all,
I just installed cygwin on my Windows 8.1 laptop and I found that
the result of ls -l is like this:
-rw-rw-r-- 1 Theodore None 0 Nov 8 22:44 a
And I found that I am in several groups:
$ groups Theodore
Theodore : None root Performance Log Users
This raises my curiosity because when I use
Shouldn't I be in the group with the same name of my username, like in
Linux?
On 11/8/2014 11:29 PM, Theodore Si wrote:
Hi all,
I just installed cygwin on my Windows 8.1 laptop and I found that
the result of ls -l is like this:
-rw-rw-r-- 1 Theodore None 0 Nov 8 22:44 a
And I found that I am
on Windows 8/8.1 ?
On 11/9/2014 11:05 AM, Larry Hall (Cygwin) wrote:
On 11/08/2014 11:17 AM, Theodore Si wrote:
Shouldn't I be in the group with the same name of my username, like
in Linux?
No. Windows isn't Linux. Of course, if you want to make a group with
your user name and add your user
on Windows 8/8.1 ?
I changed my GID in /etc/passwd from 513 to 544 to make my primary group
Administrators. Now the group owner of the files is shown as ? .
Is it OK to do this?
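The `?` usually means the new GID has no matching entry in /etc/group. A sketch of the usual fix, using Cygwin's own database generators (run from a Cygwin shell, and back up both files first):

```shell
# Regenerate the group database from the local machine's accounts,
# so GID 544 maps back to the Administrators name.
mkgroup -l > /etc/group
# Optionally refresh the user database the same way.
mkpasswd -l > /etc/passwd
```

This only rebuilds the local (-l) entries; domain accounts need the corresponding domain flags.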
On 11/9/2014 11:33 AM, Theodore Si wrote:
Thank you for your replies.
The permissions of files under ~/.ssh can't
From: Theodore Si sjyz...@gmail.com
Date: Sun, 9 Nov 2014 11:39:43 +0800
Subject: Re: Should the group of my user
Hi all,
I have two network interface cards on one node: one is an Ethernet card,
the other an InfiniBand HCA.
The master has two IP addresses, let's say 1.2.3.4 (for the Ethernet card)
and 2.3.4.5 (for the HCA).
I can start the master by
export SPARK_MASTER_IP='1.2.3.4';sbin/start-master.sh
to let master
http://spark.apache.org/docs/latest/spark-standalone.html#starting-a-cluster-manually
while starting the worker, like:
spark-1.0.1/bin/spark-class org.apache.spark.deploy.worker.Worker --ip
1.2.3.4 spark://1.2.3.4:7077
Thanks
Best Regards
On Fri, Oct 24, 2014 at 12:34 PM, Theodore
Hi all,
Workers will exchange data with each other, right?
What classes are in charge of these actions?
-
To unsubscribe, e-mail: dev-unsubscr...@spark.apache.org
For additional commands, e-mail: dev-h...@spark.apache.org
Can anyone help me, please?
On 10/14/2014 9:58 PM, Theodore Si wrote:
Hi all,
I have two nodes, one as master (*host1*) and the other as
worker (*host2*). I am using the standalone mode.
After starting the master on host1, I run
$ export MASTER=spark://host1:7077
$ bin/run-example SparkPi 10
on host2, but I get this:
14/10/14 21:54:23 WARN TaskSchedulerImpl:
Hi all,
I want to use two nodes for a test, one as master, the other as worker.
Can I submit the example application included in Spark source code
tarball on master to let it run on the worker?
What should I do?
BR,
Theo
to the master (cluster manager), and the workers
will execute it.
Thanks
Best Regards
On Fri, Oct 10, 2014 at 2:47 PM, Theodore Si sjyz...@gmail.com wrote:
Hi all,
I want to use two nodes for a test, one as master, the other as worker.
Can I submit the example application included in Spark source
Should I pack the example into a jar file and submit it on master?
On Fri, Oct 10, 2014 at 9:32 PM, Theodore Si sjyz...@gmail.com wrote:
But I cannot do this by using
./bin/run-example SparkPi 10
right?
On Fri, Oct 10, 2014 at 6:04 PM, Akhil Das ak...@sigmoidanalytics.com wrote:
Hi,
Let's say that I managed to port Spark from TCP/IP to RDMA.
What tool or benchmark can I use to test the performance improvement?
BR,
Theo
-
To unsubscribe, e-mail: user-unsubscr...@spark.apache.org
For additional commands, e-mail: user-h...@spark.apache.org
Hi all,
What tools should I use to benchmark Spark applications?
BR,
Theo
How can I get figures like those in the Evaluation part of the following
paper?
http://www.cs.berkeley.edu/~matei/papers/2011/tr_spark.pdf
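For simple scaling or speedup figures like the ones in that evaluation, spark-perf (mentioned below in the thread) is one option; for a quick first pass, a small timing harness that reports mean and spread over repeated runs is often enough. A minimal sketch — the workload here is a stand-in, not an actual Spark job:

```python
import statistics
import time

def benchmark(fn, repeats=5):
    """Time fn() several times; return (mean, stdev) in seconds."""
    samples = []
    for _ in range(repeats):
        start = time.perf_counter()
        fn()
        samples.append(time.perf_counter() - start)
    return statistics.mean(samples), statistics.stdev(samples)

# Illustrative workload standing in for a Spark job submission.
mean, stdev = benchmark(lambda: sum(i * i for i in range(100_000)))
print(f"{mean:.4f}s ± {stdev:.4f}s")
```

Collecting (mean, stdev) pairs for several cluster sizes gives exactly the data points needed for an error-bar plot in the style of the paper.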
On 10/10/2014 10:35 AM, Theodore Si wrote:
Hi all,
What tools should I use to benchmark Spark applications?
BR,
Theo
What can I get from it?
Can you show me some results please?
On 10/10/2014 10:46 AM, 牛兆捷 wrote:
You can try https://github.com/databricks/spark-perf
That's a breakdown of the timings that CUDA cannot always measure.
What does that mean?
We ran the same test case on the same machine. This GPU timings part didn't
show up before, but now it does.
On 9/25/2014 6:11 PM, Mark Abraham wrote:
On Thu, Sep 25, 2014 at 11:57 AM,
Hi,
Please help me with that.
BR,
Theodore Si
Hi all,
On page 34 of the manual:
The Verlet cut-off scheme is implemented in a very efficient fashion
based on clusters of particles.
The simplest example is a cluster size of 4 particles. The pair list is
then constructed based on
cluster pairs.
I want to know under what conditions 4
Hi all,
I run GROMACS 4.6 on 5 nodes (each has 16 CPU cores and 2 Nvidia K20m GPUs)
and on 4 nodes, in the following ways:
5 nodes:
1. Each node has 8 MPI processes, and one node is used as a PME-dedicated node
2. Each node has 8 MPI processes, and two nodes are used as PME-dedicated nodes
3. Each node has 4 MPI
I mapped 2 GPUs to multiple MPI ranks by using -gpu_id
On 8/26/2014 1:12 AM, Xingcheng Lin wrote:
Theodore Si sjyzhxw@... writes:
Hi,
https://onedrive.live.com/redir?
resid=990FCE59E48164A4!2572&authkey=!AP82sTNxS6MHgUk&ithint=file%2clog
https://onedrive.live.com/redir?
resid
, and this is
the origin of the note (above the table) that you might want to balance
things better.
Mark
On 8/23/2014 9:30 PM, Mark Abraham wrote:
On Sat, Aug 23, 2014 at 1:47 PM, Theodore Si sjyz...@gmail.com wrote:
Hi,
When we used 2 GPU nodes (each has 2 CPUs and 2 GPUs) to do an mdrun (with
no PME
is the time
spent on PP nodes, therefore time spent on PME is covered.
On 8/23/2014 9:30 PM, Mark Abraham wrote:
On Sat, Aug 23, 2014 at 1:47 PM, Theodore Si sjyz...@gmail.com wrote:
Hi,
When we used 2 GPU nodes (each has 2 CPUs and 2 GPUs) to do an mdrun (with
no PME-dedicated node), we noticed
the last approach, because of its complexity.
Clearly there are design decisions to improve. Work is underway.
Cheers,
Mark
On Fri, Aug 22, 2014 at 10:11 AM, Theodore Si sjyz...@gmail.com wrote:
Hi Mark,
Could you tell me why, when we use GPU-CPU nodes as PME-dedicated
nodes, the GPU
Hi,
I wonder why we are using the CPU instead of the GPU to solve FFTs. Is it
possible to use a GPU FFT library, say cuFFT, to make the FFT used in PME
faster?
BR,
Theo
--
Gromacs Users mailing list
* Please search the archive at
http://www.gromacs.org/Support/Mailing_Lists/GMX-Users_List before
not), then arranging
your MPI environment to place PME ranks on CPU-only nodes is probably
worthwhile. For example, all your PP ranks first, mapped to GPU nodes, then
all your PME ranks, mapped to CPU-only nodes, and then use mdrun -ddorder
pp_pme.
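The placement described above might look like this in practice (a sketch only; the rank counts, host file, and -deffnm name are illustrative, not from the original thread):

```shell
# PP ranks are listed first in the host file (GPU nodes), then PME
# ranks (CPU-only nodes); -ddorder pp_pme makes mdrun honor that order.
mpirun -np 12 -hostfile hosts_pp_then_pme \
    mdrun_mpi -npme 4 -ddorder pp_pme -deffnm topol
```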
Mark
On Mon, Aug 11, 2014 at 2:45 AM, Theodore Si sjyz
6.5 supports icc 14.0 (only).
Mark
On Wed, Aug 20, 2014 at 8:26 AM, Theodore Si sjyz...@gmail.com wrote:
Hi,
I am using CUDA 5.5 and Intel ICC 14.0.1 to compile GROMACS and this
happened:
[ 0%] Building NVCC (Device) object src/gromacs/gmxlib/gpu_utils/
CMakeFiles/gpu_utils.dir
Hi,
I am using CUDA 5.5 and Intel ICC 14.0.1 to compile GROMACS and this
happened:
[ 0%] Building NVCC (Device) object
src/gromacs/gmxlib/gpu_utils/CMakeFiles/gpu_utils.dir//./gpu_utils_generated_gpu_utils.cu.o
In file included from /usr/local/cuda-5.5/include/cuda_runtime.h(59),
, 2014 at 2:45 AM, Theodore Si sjyz...@gmail.com wrote:
Hi Mark,
This is information about our cluster; could you give us some advice
regarding our cluster so that we can make GMX run faster on our system?
Each CPU node has 2 CPUs and each GPU node has 2 CPUs and 2 Nvidia K20M
Device Name
using GPUs, but if
separate PME ranks are used, any GPUs on nodes that only have PME ranks are
left idle. The most effective approach depends critically on the hardware
and simulation setup, and whether you pay money for your hardware.
Mark
On Sat, Aug 9, 2014 at 2:56 AM, Theodore Si sjyz
and the GPU-based PP offload
do not combine very well.
Mark
On Fri, Aug 8, 2014 at 7:24 AM, Theodore Si sjyz...@gmail.com wrote:
Hi,
Can we set the number manually with -npme when using GPU acceleration?
Hi,
I found this in the installation instructions for GMX 5.0:
Helping CMake find the right libraries/headers/programs
http://www.gromacs.org/Documentation/Installation_Instructions#TOC
If libraries are installed in non-default locations their location can
be specified using the
Hi,
I would like to know what kind of profiling tools can be used with GMX?
Which is the most commonly used one?
This is extracted from a log file of an mdrun with 512 OpenMP threads
without GPU acceleration. Since the first line and third line both have
NxN VdW [F], does the former include the latter?
As we can see, in the log file of an mdrun with 8 OpenMP threads without
GPU acceleration, there is no
Elec. + VdW [F]
NxN Ewald Elec. + VdW [VF]
Does NxN Ewald Elec. + VdW [F] mean NxN Ewald Elec. and NxN VdW [F]?
If it is the case, why 512.log has both NxN Ewald Elec. + VdW [F] and
NxN VdW [F]?
On 8/5/2014 10:11 PM, Mark Abraham wrote:
On Tue, Aug 5, 2014 at 4:00 AM, Theodore Si sjyz
Hi all,
Does anyone know the instrumentation tool VampirTrace? I am using it,
and I want to instrument the GROMACS code.
So my cmake options are:
cmake .. -DCMAKE_BUILD_OWN_FFTW=ON -DCMAKE_C_COMPILER=vtcc
-DCMAKE_CXX_COMPILER=vtcxx -DGMX_MPI=on
-DCMAKE_INSTALL_PREFIX=/home/theo/gmx
I am using CentOS 6.5 and VMware 10.0.3.
It's working now. I have no idea why, since I didn't change anything.
On August 1, 2014, 19:06:03, Guy Harrison wrote:
On Wednesday 30 July 2014 15:40:56 Theodore Si wrote:
But how? Could you be more specific?
I am using NAT to connect my virtual machine
It's so weird... It works now, either with #include <bsd/stdlib.h> or
without.
On July 31, 2014, 8:40:05, Jonathan Billings wrote:
On Thu, Jul 31, 2014 at 08:09:52AM +0800, Theodore Si wrote:
I built it myself, not using the rpm, since that doesn't work. My OS is 32-bit.
It works for me with libbsd and libbsd-devel
Hi all,
I installed CentOS 6.5 in VMware and installed the vmware-tools.
However, when I start up, it gives me the message that
mounting hgfs shares [failed]
How do I solve this, any thoughts?
___
CentOS mailing list
CentOS@centos.org
But how? Could you be more specific?
I am using NAT to connect my virtual machine to the Internet.
On July 30, 2014, 22:39:04, Devin Reade wrote:
On Jul 29, 2014, at 23:47, Gopu Krishnan gopukrishnan...@gmail.com wrote:
try adding google dns
8.8.8.8
in resolv.conf
His IP is in a private address
I tried to wget the Google homepage, and it worked... So confused.
On Jul 31, 2014 2:02 AM, John R Pierce pie...@hogranch.com wrote:
On 7/29/2014 10:11 PM, Theodore Si wrote:
nameserver 192.168.80.2
is that a valid DNS server that knows how to look up the address you're
trying to wget from
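One way to separate /etc/hosts lookups from real DNS queries through the server in resolv.conf — a minimal Python sketch (the host names are just examples):

```python
import socket

def can_resolve(host: str) -> bool:
    """Return True if the host name resolves via /etc/hosts or DNS."""
    try:
        socket.gethostbyname(host)
        return True
    except socket.gaierror:
        return False

# "localhost" comes from /etc/hosts, so it resolves even when the DNS
# server in resolv.conf is broken; an external name will then fail,
# which matches wget's "unable to resolve host address" while
# raw-IP traffic (and cached yum mirrors) still works.
print(can_resolve("localhost"))
```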
the file on CentOS 6 with the EPEL libbsd package installed.
Are you sure you installed the right package? 32-bit vs. 64-bit?
On July 29, 2014 11:54:01 PM EDT, Theodore Si sjyz...@gmail.com wrote:
I wrote a .c file with #include <bsd/stdlib.h> and a call to heapsort, and I get
this:
Apparently
Thanks a lot!
On July 29, 2014, 19:53:19, Justin Lemkul wrote:
On 7/28/14, 10:11 PM, Theodore Si wrote:
For example, a table that explains the meanings of all the items in
the log file.
I found this page
(http://www.gromacs.org/Documentation/Tutorials/GROMACS_USA_Workshop_and_Conference_2013
Hi all,
I want to compile the source code of Advanced Programming in the Unix
Environment (APUE), 3rd edition, and I encountered some difficulties.
After executing make, I got this message:
gcc -ansi -I../include -Wall -DLINUX -D_GNU_SOURCE barrier.c -o barrier
-L../lib -lapue -pthread -lrt -lbsd
So all Fedora packages can also be used on CentOS?
On July 29, 2014, 23:58:48, Jonathan Billings wrote:
On Tue, Jul 29, 2014 at 10:00:53PM +0800, Theodore Si wrote:
Hi all,
I want to compile the source code of Advanced Programming in the Unix
Environment(APUE) 3rd edition, and I encountered
, Theodore Si wrote:
Hi all,
I want to compile the source code of Advanced Programming in the Unix
Environment(APUE) 3rd edition, and I encountered some difficulties.
After executing make, I got this message:
[...]
How to install libbsd to solve this problem on CentOS (this works on
Ubuntu
Billings wrote:
On Tue, Jul 29, 2014 at 10:00:53PM +0800, Theodore Si wrote:
Hi all,
I want to compile the source code of Advanced Programming in the Unix
Environment(APUE) 3rd edition, and I encountered some difficulties.
After executing make, I got this message:
[...]
How to install libbsd
I built libbsd from source code that I downloaded from here:
http://libbsd.freedesktop.org/wiki/
I think I got all things needed, right?
On July 30, 2014, 10:35:39, Theodore Si wrote:
I tried to install libbsd before; it didn't work. (I can execute man
heapsort.)
Today, I installed epel-release-6
I wrote a .c file with #include <bsd/stdlib.h> and a call to heapsort, and I get this:
Apparently, heapsort can be called.
On July 30, 2014, 11:46:55, Theodore Si wrote:
I built libbsd from source code that I downloaded from here:
http://libbsd.freedesktop.org/wiki/
I think I got all things needed, right
Hi all,
I find that in my CentOS, which is installed in VMware, I can use yum to
install software from the Internet, and I can also ping websites, but I
cannot download anything using wget.
I receive the error message "unable to resolve host address 'x'". The IP
address is 192.168.80.128, and this is the
some
kinds of comparisons.
Mark
On Mon, Jul 28, 2014 at 5:35 AM, Theodore Si sjyz...@gmail.com wrote:
Thanks a lot!
But I am still confused about other things.
For instance, what do "Count", "Wall t (s)", and "G-Cycles" mean? It seems
that the last column is the percentage of G-Cycles.
I really hope
:
Your run took nearly a minute, and did so at a rate that would take 0.345
hours to do a simulated nanosecond
Mark
On Mon, Jul 28, 2014 at 9:05 AM, Theodore Si sjyz...@gmail.com wrote:
               Core t (s)   Wall t (s)      (%)
       Time:     2345.800       49.744   4715.7
, then in one day you will simulate 1/0.345*24~69.5 ns
Guillaume
On 07/28/2014 09:39 AM, Theodore Si wrote:
I thought that 69.479 ns/day means I can simulate 69.479 ns per day.
But if, as you said, I need 0.345 hours to get a simulated nanosecond,
then I can only get 0.345 * 24 = 8.28 simulated
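The confusion above is a rate inversion: 0.345 hours *per* simulated nanosecond means a day of wall time yields 24 / 0.345 nanoseconds, not 0.345 × 24. A one-line check:

```python
# mdrun reports a *rate*: 0.345 wall-clock hours per simulated ns.
hours_per_ns = 0.345
ns_per_day = 24 / hours_per_ns       # invert the rate, don't multiply
print(round(ns_per_day, 1))          # ~69.6, matching the log's 69.479 ns/day
```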
For example, in the following table, what does "Wait + Comm. F" mean? Is
there a webpage that explains the tables in the log file?
R E A L C Y C L E A N D T I M E A C C O U N T I N G
Computing: Nodes Th. Count Wall t (s) G-Cycles %
Abraham wrote:
On Jul 28, 2014 4:53 AM, Theodore Si sjyz...@gmail.com wrote:
For example, in the following table, what does "Wait + Comm. F" mean? Is
there a webpage that explains the tables in the log file?
Unfortunately not (yet), but they correspond in a more-or-less clear way to
the segments in manual figure
Hi all,
In the log file, what do "Count", "Wall t (s)", and "G-Cycles" mean? It
seems that the last column is the percentage of G-Cycles.
I really hope there is a place where I can find all relevant information
about the log file.
Thanks in advance.
57 matches