[Beowulf] mixing of Infiniband HCAs in cluster

2006-05-24 Thread Mikhail Kuzminsky
But is it possible to have both HCA kinds (DDR and plain IB 4x HCAs) connected to the same switch? If the answer is yes, is it necessary to set up the DDR-HCA drivers manually for work w/4x speed, or will all the necessary things be done via link negotiation? Yours Mikhail Kuzminsky

Re: [Beowulf] mixing of Infiniband HCAs in cluster

2006-05-26 Thread Mikhail Kuzminsky
for parallelization ... Yours Mikhail Kuzminsky Computer Assistance to Chemical Research Center, Zelinsky Institute of Organic Chemistry Moscow ___ Beowulf mailing list, Beowulf@beowulf.org To change your subscription (digest mode or unsubscribe) visit http://www.beowulf.org/mailman/listinfo/beowulf

Re: [Beowulf] Slection from processor choices; Requesting Giudence

2006-06-15 Thread Mikhail Kuzminsky
single-core Opterons having independent memory channels) is in some cases better than any sharing of memory bus(es). Mikhail Kuzminsky Zelinsky Institute of Organic Chemistry Moscow Please guide me on how much parallel programming will differ for the above four choices of processing nodes

[Beowulf] Opteron nodes w/PCI-E (for 4x IB)

2006-07-05 Thread Mikhail Kuzminsky
support is Supermicro H8DCE-B (again nVidia; formally for workstations; but I may try to find a 2nd-hand PCI-32 graphics card for it :-)) Thanks for your help in the future! Mikhail Kuzminsky Zelinsky Institute of Organic Chemistry Moscow

[Beowulf] power supply/chassis for Opteron-based node

2006-07-12 Thread Mikhail Kuzminsky
- AIC/T-Win? What else? Yours Mikhail Kuzminsky Zelinsky Institute of Organic Chemistry Moscow

Re: [Beowulf] cluster softwares supporting parallel CFD computing

2006-09-04 Thread Mikhail Kuzminsky
computational work along with cluster-handling? Taking into account that your cluster will be small, I believe the answer should be yes: your front-end host has 25% of your total performance. Yours Mikhail Kuzminsky Zelinsky Institute of Organic Chemistry Moscow Could any of you please suggest me full

Re: [Beowulf] Any Gaussian users out there?

2007-01-08 Thread Mikhail Kuzminsky
In message from Joe Landman [EMAIL PROTECTED] (Sun, 07 Jan 2007 22:49:55 -0500): I found a neat ... feature ... of Linux while getting g03 running in SMP on cluster nodes. Long story, but the folks I am doing this for don't have/want to use Linda. They asked us to help them get g03

Re: [Beowulf] SGI to offer Windows on clusters

2007-01-17 Thread Mikhail Kuzminsky
case it's bad news :-( SGI has a solid reputation in the HPC and university world, and maybe somebody will be tempted. But it's interesting at which prices SGI will sell these clusters? Hope the price will be much higher than for SGI Linux clusters ;-) Mikhail Kuzminsky Zelinsky Institute

[Beowulf] was: Intel Quad-Core or AMD Opteron

2007-08-24 Thread Mikhail Kuzminsky
) What is known about RDTSC synchronization between CPU cores in modern Linux kernels (I'm especially interested in OpenSuSE :-))? I heard some time ago that at least some Fedora kernels performed such synchronization. Yours Mikhail Kuzminsky Zelinsky Institute of Organic Chemistry Moscow
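As a quick check related to this question: whether a given kernel treats the TSC as usable across cores can be inspected from userspace (a sketch; the flag names and sysfs path are the standard ones Linux exposes on x86):

```shell
# constant_tsc: TSC ticks at a constant rate regardless of frequency scaling;
# nonstop_tsc: TSC keeps running in deep C-states. Together they let the
# kernel treat the TSC as a synchronized time source across cores.
grep -m1 -o 'constant_tsc' /proc/cpuinfo
grep -m1 -o 'nonstop_tsc' /proc/cpuinfo

# Which clocksource the kernel actually selected (tsc, hpet, acpi_pm, ...):
cat /sys/devices/system/clocksource/clocksource0/current_clocksource 2>/dev/null
```

If the kernel falls back to hpet or acpi_pm, it decided the TSCs were not trustworthy on that box.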

Re: [Beowulf] Reading raw binary files in Fortran (Intel compiler)?

2007-09-01 Thread Mikhail Kuzminsky
In message from Mark Hahn [EMAIL PROTECTED] (Fri, 31 Aug 2007 13:29:56 -0400 (EDT)): guessing that my raw binary read trick does not work on Intel Fortran? Is there another option I need to pass (e.g. perhaps form='binary')? I haven't looked closely, but have heard that different compilers

[Beowulf] quad-core SPECfp2006: where are 4 FPresults/cycle ?

2007-10-12 Thread Mikhail Kuzminsky
about 7% performance increase :-( The question is: should we expect better results from upcoming optimizing compiler versions? Or is it the reality that 2 additional FP results per cycle give (on average) a relatively small performance increase? Mikhail Kuzminsky Zelinsky Institute

Re: [Beowulf] quad-core SPECfp2006: where are 4 FPresults/cycle ?

2007-10-13 Thread Mikhail Kuzminsky
In message from Mark Hahn [EMAIL PROTECTED] (Fri, 12 Oct 2007 16:09:05 -0400 (EDT)): This means that 2 additional FP results per cycle in microarchitecture gives only about 7% of performance increase :-( the 4 flops/cycle is really for linpack-like code: it assumes you are executing packed

Re: [Beowulf] quad-core SPECfp2006: where are 4 FPresults/cycle ?

2007-10-13 Thread Mikhail Kuzminsky
In message from [EMAIL PROTECTED] (Fri, 12 Oct 2007 20:50:08 +): Mikhail, I am not sure I fully understand what you are presenting here, but I might say that yes, at the FPU unit level the series AMD Opteron/Barcelona and the Intel Core2/Clovertown (and also Harpertown at 45 nm) are

Re: [Beowulf] NEC SX-9

2007-10-29 Thread Mikhail Kuzminsky
In message from Peter St. John [EMAIL PROTECTED] (Mon, 29 Oct 2007 10:31:49 -0500): According to http://www.geekzone.co.nz/content.asp?contentid=7458, NEC has announced an 800+ TFLOPS machine, SX-9; it does 100-odd GFLOPS per core (with a new vector processor). Peter P.S. gosh, wouldn't it be

[Beowulf] Opteron 235X: mobos coolers

2008-04-18 Thread Mikhail Kuzminsky
? Mikhail Kuzminsky Computer Assistance to Chemical Research Center Zelinsky Inst. of Organic Chemistry Moscow

Re: [Beowulf] Improving access to a Linux beowulf cluster for Windows users

2008-04-19 Thread Mikhail Kuzminsky
1000 6.54 149.29 2048 1000 8.05 242.76 4096 100010.93 357.42 8192 100016.72 467.14 Also better :-) Mikhail Kuzminsky Computer Assistance to Chemical Research Center Zelinsky Institute

[Beowulf] IB DDR: mvapich2 vs mvapich performance

2008-04-22 Thread Mikhail Kuzminsky
at SC'07) is not significant - in the sense that it is simply due to some measurement errors (inaccuracies)? Mikhail Kuzminsky Computer Assistance to Chemical Research Center Zelinsky Institute of Organic Chemistry Moscow

Re: [Beowulf] IB DDR: mvapich2 vs mvapich performance

2008-04-23 Thread Mikhail Kuzminsky
In message from Greg Lindahl [EMAIL PROTECTED] (Wed, 23 Apr 2008 00:36:44 -0700): On Wed, Apr 23, 2008 at 07:04:51AM +0400, Mikhail Kuzminsky wrote: Is this throughput difference the result of MPI-2 vs MPI implementation, or should I believe that this difference (about 4% for my mvapich vs

Re: [Beowulf] IB DDR: mvapich2 vs mvapich performance

2008-04-24 Thread Mikhail Kuzminsky
In message from Eric Thibodeau [EMAIL PROTECTED] (Wed, 23 Apr 2008 16:48:04 -0400): Mikhail Kuzminsky wrote: In message from Greg Lindahl [EMAIL PROTECTED] (Wed, 23 Apr 2008 00:36:44 -0700): On Wed, Apr 23, 2008 at 07:04:51AM +0400, Mikhail Kuzminsky wrote: Is this throughput difference

Re: [Beowulf] IB DDR: mvapich2 vs mvapich performance

2008-04-24 Thread Mikhail Kuzminsky
! I thought that on my older HCA hardware (Infinihost III Lx PCI-e x8 MHGS18-XTC), older CPU/mobo/... (Opteron 246/2 GHz/...), older Linux, OFED and mvapich/mvapich2 versions I must obtain a lower throughput value ... Mikhail -Tom Mikhail Kuzminsky Computer Assistance

Re: [Beowulf] Nvidia, cuda, tesla and... where's my double floating point?

2008-05-04 Thread Mikhail Kuzminsky
In message from Ricardo Reis [EMAIL PROTECTED] (Fri, 2 May 2008 14:05:25 +0100 (WEST)): Does anyone know if/when there will be double-precision floating point on those little toys from nvidia? Next-generation Tesla, but I don't know when. Or use AMD FireStream 9170 instead :-) Mikhail Kuzminsky

[Beowulf] Barcelona hardware error: how to detect

2008-06-05 Thread Mikhail Kuzminsky
How is it possible to detect whether a particular AMD Barcelona CPU has - or doesn't have - the known hardware error problem? To be more exact: is Rev. B2 of Opteron 2350 the CPU stepping w/error or w/o error? Mikhail Kuzminsky Computer Assistance to Chemical Research Center Zelinsky Inst
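One way to answer this kind of question is to read the family/model/stepping from /proc/cpuinfo and look them up in AMD's revision guide (document 41322, whose URL appears later in this thread); a sketch:

```shell
# Print family/model/stepping of the CPUs; AMD's revision guide maps these
# values to silicon revisions such as Barcelona B2 vs B3.
grep -E '^(cpu family|model|stepping)[[:space:]]*:' /proc/cpuinfo | sort -u
```

The same values are also reported per-package by `lscpu` on newer systems.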

Re: [Beowulf] Barcelona hardware error: how to detect

2008-06-05 Thread Mikhail Kuzminsky
In message from Mark Hahn [EMAIL PROTECTED] (Thu, 5 Jun 2008 11:57:28 -0400 (EDT)): To be more exact, Rev. B2 of Opteron 2350 - is it for CPU stepping w/error or w/o error ? AMD, like Intel, does a reasonable job of disclosing such info:

Re: [Beowulf] Barcelona hardware error: how to detect

2008-06-05 Thread Mikhail Kuzminsky
In message from Mark Hahn [EMAIL PROTECTED] (Thu, 5 Jun 2008 13:30:57 -0400 (EDT)): http://www.amd.com/us-en/assets/content_type/white_papers_and_tech_docs/41322.PDF the well-known problem is erratum 298, I think, and fixed in B3. Yes, this AMD errata document says that in the B3 revision the

Re: [Beowulf] Barcelona hardware error: how to detect

2008-06-05 Thread Mikhail Kuzminsky
In message from Mark Hahn [EMAIL PROTECTED] (Thu, 5 Jun 2008 13:55:01 -0400 (EDT)): I believe the absence of 'x' in the B3 column of the table on p 15 means that it _is_ fixed in B3. I have just received some preliminary data about Gaussian-03 run problems w/B2 and about the absence of this

Re: [Beowulf] Barcelona hardware error: how to detect

2008-06-05 Thread Mikhail Kuzminsky
In message from Jason Clinton [EMAIL PROTECTED] (Thu, 5 Jun 2008 13:16:33 -0500): On Thu, Jun 5, 2008 at 1:09 PM, Mikhail Kuzminsky [EMAIL PROTECTED] wrote: In message from Mark Hahn [EMAIL PROTECTED] (Thu, 5 Jun 2008 13:55:01 -0400 (EDT)): I'm mystified by this: B2 was broken, so using

[Beowulf] size of swap partition

2008-06-09 Thread Mikhail Kuzminsky
A long time ago a simple rule was formulated for swap partition size (equal to main memory size). Currently we all have relatively large RAM on the nodes (typically, I believe, 2 or more GB per core; we have 16 GB per dual-socket quad-core Opteron node). What is a typical modern swap

Re: [Beowulf] size of swap partition

2008-06-10 Thread Mikhail Kuzminsky
In message from Mark Hahn [EMAIL PROTECTED] (Tue, 10 Jun 2008 00:58:12 -0400 (EDT)): ... for instance, you can always avoid OOM with the vm.overcommit_memory=2 sysctl (you'll need to tune vm.overcommit_ratio and the amount of swap to get the desired limits.) in this mode, the kernel tracks how
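The sysctl settings Mark mentions can be inspected and applied like this (a sketch; the ratio value is only an example, not a recommendation):

```shell
# Current policy: 0 = heuristic overcommit, 1 = always allow,
# 2 = strict accounting (commit limit = swap + overcommit_ratio% of RAM).
cat /proc/sys/vm/overcommit_memory
cat /proc/sys/vm/overcommit_ratio

# As root, switch to strict accounting so the OOM killer is avoided
# (allocations fail with ENOMEM instead of triggering OOM kills):
# sysctl -w vm.overcommit_memory=2
# sysctl -w vm.overcommit_ratio=80   # example: commit limit = swap + 80% of RAM
```

With mode 2, the effective limit can be read back from /proc/meminfo (CommitLimit).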

[Beowulf] Powersave on Beowulf nodes

2008-06-14 Thread Mikhail Kuzminsky
frequency) is not a danger for us :-) So I'm thinking about simply stopping all the corresponding daemons. Mikhail Kuzminsky Computer Assistance to Chemical Research Center Zelinsky Institute of Organic Chemistry Moscow

[Beowulf] Tyan S2932 and lm_sensors

2008-06-18 Thread Mikhail Kuzminsky
Sorry, does somebody have a correct sensors.conf file for the Tyan S2932 motherboard? There is no lm_sensors configuration file for this mobo on the Tyan site :-( Yours Mikhail Kuzminsky Computer Assistance to Chemical Research Center Zelinsky Institute of Organic Chemistry Moscow

Re: [Beowulf] Tyan S2932 and lm_sensors

2008-06-18 Thread Mikhail Kuzminsky
-Original Message- From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] On Behalf Of Mikhail Kuzminsky Sent: Wednesday, June 18, 2008 9:26 AM To: beowulf@beowulf.org Subject: [Beowulf] Tyan S2932 and lm_sensors

Re: [Beowulf] Tyan S2932 and lm_sensors

2008-06-18 Thread Mikhail Kuzminsky
-Original Message- From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] On Behalf Of Mikhail Kuzminsky Sent

Re: [Beowulf] SuperMicro and lm_sensors

2008-06-19 Thread Mikhail Kuzminsky
In message from Bernard Li [EMAIL PROTECTED] (Thu, 19 Jun 2008 11:28:08 -0700): Hi David: On Thu, Jun 19, 2008 at 6:50 AM, Lombard, David N [EMAIL PROTECTED] wrote: Did you look for /proc/acpi/thermal_zone/*/temperature The glob is for your BIOS-defined ID. If it does exist, that's the

[Beowulf] Again about NUMA (numactl and taskset)

2008-06-23 Thread Mikhail Kuzminsky
and the program_file creates some new processes, will all these processes run only on the same CPUs defined in the taskset command? Mikhail Kuzminsky Computer Assistance to Chemical Research Center, Zelinsky Institute of Organic Chemistry Moscow
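On the question about spawned processes: the affinity mask set by taskset is inherited across fork/exec, so child processes keep it unless they change it themselves. A small sketch (program_file stands for the user's own binary, as in the question):

```shell
# Pin a shell to core 0; the child shell it spawns inherits the mask,
# which taskset -p reports as hex mask 1 (i.e. core 0 only).
taskset -c 0 sh -c 'sh -c "taskset -p \$\$"'

# numactl can additionally bind memory allocation to a NUMA node
# (hypothetical invocation for the program in the question):
# numactl --cpunodebind=0 --membind=0 ./program_file
```

Note the difference: taskset restricts only CPU placement; numactl --membind also restricts which node's RAM is used.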

Re: [Beowulf] Again about NUMA (numactl and taskset)

2008-06-23 Thread Mikhail Kuzminsky
, Mikhail Kuzminsky wrote: I'm testing my 1st dual-socket quad-core Opteron 2350-based server. Let me assume that the RAM used by kernel and system processes is zero, there is no physical RAM fragmentation, and the affinity of processes to CPU cores is maintained. I assume also that both the nodes

[Beowulf] Timers and TSC behaviour on SMP/x86

2008-06-24 Thread Mikhail Kuzminsky
processors have one common clock source in the NorthBridge (BTW, is it in this case integrated into the CPU chip - i.e. includes the integrated memory controller and support of HT links? - M.K.) - for all the TSCs of the CPUs (cores? - M.K.). The synchronization accuracy should be a few tens of cycles. Mikhail Kuzminsky

Re: [Beowulf] Again about NUMA (numactl and taskset)

2008-06-25 Thread Mikhail Kuzminsky
(and therefore more memory channels) will work simultaneously. Is it right that the cheaper server will have higher performance in such cases?? Mikhail Kuzminsky Computer Assistance to Chemical Research Center Zelinsky Institute of Organic Chemistry Moscow

Re: [Beowulf] Again about NUMA (numactl and taskset)

2008-06-26 Thread Mikhail Kuzminsky
free RAM). This contradicts my expectation of contiguous RAM allocation from the RAM of one node! Mikhail Kuzminsky, Computer Assistance to Chemical Research Zelinsky Institute of Organic Chemistry Moscow At 18:34 25.06.2008, Mikhail Kuzminsky wrote: Let me assume now the following

[Beowulf] Strange Opteron 2350 performance: Gaussian-03

2008-06-28 Thread Mikhail Kuzminsky
-cpus Opteron 246 parallel test). Yes, AFAIK the DFT method is cache-friendly, and the slower L3 cache in Opteron 2350 may give worse performance. But 1.8 times worse?? Any comments of yours are welcome. Mikhail Kuzminsky Computer Assistance to Chemical Research Center Zelinsky Institute of Organic

Re: [Beowulf] Strange Opteron 2350 performance: Gaussian-03

2008-06-28 Thread Mikhail Kuzminsky
-03 and for DFT in particular ? Did you compile it on K10 using target=barcelona (i.e. optimized for barcelona) ? Yours Mikhail Regards, Li, Bo - Original Message - From: Mikhail Kuzminsky [EMAIL PROTECTED] To: beowulf@beowulf.org Sent: Saturday, June 28, 2008 11:48 PM Subject

Re: [Beowulf] Strange Opteron 2350 performance: Gaussian-03

2008-06-28 Thread Mikhail Kuzminsky
? Linux? Which release? X86_64? Regards, Li, Bo - Original Message - From: Mikhail Kuzminsky [EMAIL PROTECTED] To: Li, Bo [EMAIL PROTECTED] Cc: beowulf@beowulf.org Sent: Sunday, June 29, 2008 12:23 AM Subject: Re: [Beowulf] Strange Opteron 2350 performance: Gaussian-03 In message from Li

Re: [Beowulf] Strange Opteron 2350 performance: Gaussian-03

2008-06-28 Thread Mikhail Kuzminsky
In message from Joe Landman [EMAIL PROTECTED] (Sat, 28 Jun 2008 14:48:02 -0400): This is possible, depending upon the compiler used. Though I have to admit that I find it odd that it would be the case within the Opteron family and not between Opteron and Xeon. Intel compilers used to

Re: [Beowulf] Strange Opteron 2350 performance: Gaussian-03

2008-06-30 Thread Mikhail Kuzminsky
In message from Bernd Schubert [EMAIL PROTECTED] (Sat, 28 Jun 2008 19:04:50 +0200): On Saturday 28 June 2008, Li, Bo wrote: Hello, Sorry, I don't have the same applications as you. Did you compile them with gcc? If gcc, then -O3 can do some optimization. -march=k8 is enough I think. As

[Beowulf] MPI: over OFED and over IBGD

2008-07-03 Thread Mikhail Kuzminsky
Is there some MPI implementation/version which may be installed on some nodes - to work over the Mellanox IBGD 1.8.0 (Gold Distribution) IB stack - and on other nodes for work w/OFED-1.2? Mikhail Kuzminsky Computer Assistance to Chemical Research Center Zelinsky Institute of Organic Chemistry

Re: [Beowulf] MPI: over OFED and over IBGD

2008-07-03 Thread Mikhail Kuzminsky
In message from Gilad Shainer [EMAIL PROTECTED] (Thu, 3 Jul 2008 09:41:01 -0700): Mikhail Kuzminsky wrote: Is there some MPI realization/versions which may be installed one some nodes - to work over Mellanox IBGD 1.8.0 (Gold Distribution) IB stack and on other nodes - for work w/OFED-1.2

Re: [Beowulf] Re: Building new cluster - estimate (Ivan Oleynik)

2008-08-01 Thread Mikhail Kuzminsky
more relevant than the temp of the PDU... using lm_sensors is a poor substitute for IPMI. IMHO the only disadvantage of lm_sensors is the problem of building the right sensors.conf file. Mikhail Kuzminsky Computer Assistance to Chemical Research Center Zelinsky Institute of Organic Chemistry

Re: [Beowulf] Building new cluster - estimate

2008-08-05 Thread Mikhail Kuzminsky
In message from Joshua Baker-LePain [EMAIL PROTECTED] (Tue, 5 Aug 2008 14:10:33 -0400 (EDT)): On Tue, 5 Aug 2008 at 8:34pm, Mikhail Kuzminsky wrote xfs has a rich set of utilities, but AFAIK no defragmentation tools (I don't know what will be after xfsdump/xfsrestore). But which modern linux

Re: [Beowulf] Building new cluster - estimate

2008-08-06 Thread Mikhail Kuzminsky
(for increasing lifetime) try not to really erase data - if it's physically possible. But if I use practically the whole HDD partition for scratch files (and therefore the whole SSD) - IMHO it'll be impossible not to erase the flash RAM. What will the SSD lifetime be in that case? Mikhail Kuzminsky

[Beowulf] new flash SSDs

2008-08-19 Thread Mikhail Kuzminsky
is expected in Q1'2009. I hope this will lead to a decrease of SSD market prices. Unfortunately I have no information about prices and about lifetime. But I'm not too enthusiastic about prices: even a Samsung PATA 2.5/32 GB SSD costs about $300, IBM SATA ones are much more expensive. Mikhail Kuzminsky

Re: [Beowulf] hang-up of HPC Challenge

2008-08-19 Thread Mikhail Kuzminsky
immediately after output of these strings. Mikhail In message from Mikhail Kuzminsky [EMAIL PROTECTED] (Mon, 18 Aug 2008 22:20:16 +0400): I ran a set of HPC Challenge benchmarks on ONE dual-socket quad-core Opteron 2350 (Rev. B3) based server (8 logical CPUs). RAM size is 16 Gbytes

Re: [Beowulf] hang-up of HPC Challenge

2008-08-20 Thread Mikhail Kuzminsky
In message from Greg Lindahl [EMAIL PROTECTED] (Tue, 19 Aug 2008 19:39:38 -0700): On Wed, Aug 20, 2008 at 03:45:43AM +0400, Mikhail Kuzminsky wrote: For some localization of possible problem reason, I ran pure HPL test instead of HPCC. HPL performs direct output to screen instead of writing

Re: [Beowulf] hang-up of HPC Challenge

2008-08-20 Thread Mikhail Kuzminsky
In message from Chris Samuel [EMAIL PROTECTED] (Wed, 20 Aug 2008 11:12:52 +1000 (EST)): - Mikhail Kuzminsky [EMAIL PROTECTED] wrote: What else may be the reason of hangup ? Depends what you mean by hangup really.. Does the code crash, does it just stop idle, does it busy loop, does

Re: [Beowulf] gpgpu

2008-08-26 Thread Mikhail Kuzminsky
about 150 W (although I'm not absolutely sure that it's TDP) - it's as for some Intel Xeon quad-core chips w/names beginning with X. Mikhail On Aug 23, 2008, at 10:31 PM, Mikhail Kuzminsky wrote: BTW, why are GPGPUs considered as vector systems? Taking into account that GPGPUs contain many

Re: [Beowulf] gpgpu

2008-08-28 Thread Mikhail Kuzminsky
- From: Vincent Diepeveen [EMAIL PROTECTED] To: Li, Bo [EMAIL PROTECTED] Cc: Mikhail Kuzminsky [EMAIL PROTECTED]; Beowulf beowulf@beowulf.org Sent: Thursday, August 28, 2008 12:22 AM Subject: Re: [Beowulf] gpgpu Hi Bo, Thanks for your message. What library do i call to find primes? Currently

[Beowulf] Re: Beowulf Digest, Vol 55, Issue 2

2008-09-04 Thread Mikhail Kuzminsky
In message from Li, Bo [EMAIL PROTECTED] (Thu, 4 Sep 2008 14:34:00 +0800): Hello, Is it too expensive for the platform? The easy solution is: An X48-level motherboard with CF support, about $150 Q6600 Processor, about $170 Two 4870X2 $1,100 Does somebody know whether ACML routines are parallelized

Re: [Beowulf] Nehalem Xeons

2008-10-14 Thread Mikhail Kuzminsky
- unfortunately, I don't know more exactly :-( Mikhail Kuzminsky Computer Assistance to Chemical Research Center, Zelinsky Institute of Organic Chemistry Moscow Håkon At 01:57 14.10.2008, Ivan Oleynik wrote: I am still in the process of purchasing a new cluster and considering whether it is worth waiting

Re: [Beowulf] Shanghai vs Barcelona, Shanghai vs Nehalem

2008-10-22 Thread Mikhail Kuzminsky
In message from Ivan Oleynik [EMAIL PROTECTED] (Tue, 21 Oct 2008 18:15:49 -0400): I have heard that AMD Shanghai will be available in Nov 2008. Does someone know the pricing and performance info and how is it compared with Barcelona? Are there some informal comparisons of Shanghai vs Nehalem?

Re: [Beowulf] Shanghai vs Barcelona, Shanghai vs Nehalem

2008-10-22 Thread Mikhail Kuzminsky
In message from Mark Hahn [EMAIL PROTECTED] (Wed, 22 Oct 2008 13:23:08 -0400 (EDT)): Are there some informal comparisons of Shanghai vs Nehalem? I believe that the Shanghai performance increase in comparison w/Barcelona will be practically defined only by possibly higher Shanghai frequencies.

Re: Re[2]: [Beowulf] Shanghai vs Barcelona, Shanghai vs Nehalem

2008-10-23 Thread Mikhail Kuzminsky
In message from Jan Heichler [EMAIL PROTECTED] (Wed, 22 Oct 2008 20:27:40 +0200): Hello Mikhail, on Wednesday, 22 October 2008, you wrote: MK In message from Ivan Oleynik [EMAIL PROTECTED] (Tue, 21 Oct 2008 MK 18:15:49 -0400): I have heard that AMD Shanghai will be available in Nov 2008. Does

[Beowulf] Clos network vs fat tree

2008-11-13 Thread Mikhail Kuzminsky
Sorry, is it correct to say that fat tree topology is equal to a *NON-BLOCKING* Clos network w/addition of uplinks? I.e. any non-blocking Clos network w/the corresponding addition of uplinks gives a fat tree? I read somewhere that an exact proof of the non-blocking property was given for Clos networks with
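For reference, the classical results usually cited here: for a symmetric three-stage Clos network with $n$ inputs per ingress switch and $m$ middle-stage switches,

```latex
% Clos (1953): strict-sense non-blocking condition
m \ge 2n - 1
% Slepian--Duguid: rearrangeably non-blocking condition
m \ge n
```

A fat tree is the folded form of such a network; with equal uplink and downlink capacity at every level it is rearrangeably non-blocking (full bisection bandwidth) rather than strictly non-blocking.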

Re: [Beowulf] Parallel software for chemists

2008-12-12 Thread Mikhail Kuzminsky
if it is linux. In short, practically all the modern software for molecular modelling calculations can run in parallel on Linux clusters. Mikhail Kuzminsky Computer Assistance to Chemical Research Center Zelinsky Institute of Organic Chemistry RAS Moscow

Re: [Beowulf] Hadoop

2008-12-29 Thread Mikhail Kuzminsky
of religious language war :-) Mikhail Kuzminsky Computer Assistance to Chemical Research Center Zelinsky Institute of Organic Chemistry Moscow gerry Jeff Layton wrote: I hate to tangent (hijack?) this subject, but I'm curious about your class poll. Did the people who were interested in Matlab consider

Re: [Beowulf] How many double precision floating point operations per clock cycle for AMD Barcelona?

2009-02-12 Thread Mikhail Kuzminsky
In message from Prakashan Korambath p...@ats.ucla.edu (Tue, 10 Feb 2009 08:23:05 -0800): Could someone confirm the number of double precision floating point operations (FLOPS) for AMD Barcelona chips? The URL below seems to indicate 4 FLOPS per cycle. I just want to confirm it. Thanks. 4

Re: [Beowulf] RE:small distro for PXE boot, autostarts sshd?

2009-02-27 Thread Mikhail Kuzminsky
In message from Greg Keller g...@keller.net (Fri, 27 Feb 2009 10:20:50 -0600): Have you ever considered Perceus (Caos has it baked in) from Infiscale? ... http://www.infiscale.com/ It looks like there is only one way to understand in a bit more detail what Perceus does - to download it :-)

Re: [Beowulf] Grid scheduler for Windows XP

2009-03-05 Thread Mikhail Kuzminsky
this grid are native to Windows XP. The GRAM component of the Globus Toolkit (http://www.globus.org/) gives you some capabilities of a batch queue system, and there are SGE interfaces to Globus. Mikhail Kuzminsky Computer Assistance to Chemical Research Center Zelinsky Institute of Organic Chemistry RAS Moscow

[Beowulf] Sun X4600 STREAM results

2009-03-16 Thread Mikhail Kuzminsky
Sorry, does somebody have X4600 M2 STREAM results (or the corresponding URLs) for DDR2/667 - as a function of the number of processor cores? Mikhail Kuzminsky Computer Assistance to Chemical Research Center Zelinsky Institute of Organic Chemistry RAS Moscow

Re: [Beowulf] Lowered latency with multi-rail IB?

2009-03-27 Thread Mikhail Kuzminsky
for a set of calculation methods, and these messages are middle-to-large in size. NWChem is the only quantum-chemical program I know which requires high interconnect performance. I don't know about Jaguar. Mikhail Kuzminsky Computer Assistance to Chemical Research Center Zelinsky Institute

Re: [Beowulf] X5500

2009-03-31 Thread Mikhail Kuzminsky
. Is there some price information available? Mikhail Kuzminsky Computer Assistance to Chemical Research Center Zelinsky Institute of Organic Chemistry RAS Moscow

Re: [Beowulf] FPU performance of Intel CPUs

2009-04-06 Thread Mikhail Kuzminsky
that allows you to disable this? ;) Concerning Nehalems, of course. I read up about this. You can always disable it using ACPI. If you use a well-parallelized program w/high CPU utilization, I believe you SHOULD disable turbo-boost mode :-) Mikhail Kuzminsky Computer Assistance to Chemical Research

[Beowulf] Tyan S7002 for Nehalem-based nodes

2009-05-05 Thread Mikhail Kuzminsky
Are there any contraindications for using the Tyan S7002 AG2NR w/Xeon 5520 for cluster nodes? Maybe somebody has some experience w/the S7002? Mikhail Kuzminsky Computer Assistance to Chemical Research Center Zelinsky Institute of Organic Chemistry RAS Moscow

Re: [Beowulf] numactl SuSE11.1

2009-08-10 Thread Mikhail Kuzminsky
Mikhail Kuzminsky Computer Assistance to Chemical Research Center Zelinsky Institute of Organic Chemistry RAS Moscow

Re: [Beowulf] performance tweaks and optimum memory configs for a Nehalem

2009-08-11 Thread Mikhail Kuzminsky
for 8 cores. Mikhail Kuzminsky Computer Assistance to Chemical Research Center Zelinsky Institute of Organic Chemistry RAS Moscow

Re: [Beowulf] numactl SuSE11.1

2009-08-11 Thread Mikhail Kuzminsky
). The situation for Opterons is the opposite: NUMA mode gives higher throughput. In message from Mikhail Kuzminsky k...@free.net (Mon, 10 Aug 2009 21:43:56 +0400): I'm sorry for my mistake: the problem is on Nehalem Xeon under SuSE-11.1, but w/kernel 2.6.27.7-9 (w/Supermicro X8DT mobo). For Opteron

Re: [Beowulf] bizarre scaling behavior on a Nehalem

2009-08-14 Thread Mikhail Kuzminsky
In message from Bill Broadley b...@cse.ucdavis.edu (Thu, 13 Aug 2009 17:09:24 -0700): Tom Elken wrote: To add some details to what Christian says, the HPC Challenge version of STREAM uses dynamic arrays and is hard to optimize. I don't know what's best with current compiler versions, but you

Re: [Beowulf] bizarre scaling behavior on a Nehalem

2009-08-14 Thread Mikhail Kuzminsky
In message from Bill Broadley b...@cse.ucdavis.edu (Thu, 13 Aug 2009 17:09:24 -0700): Do I understand correctly that these results are for 4 cores / 4 OpenMP threads? And what is the DDR3 RAM: DDR3/1066? Mikhail I tried open64-4.2.2 with those flags and on a nehalem single socket: $ opencc

Re: [Beowulf] bizarre scaling behavior on a Nehalem

2009-08-14 Thread Mikhail Kuzminsky
In message from Tom Elken tom.el...@qlogic.com (Fri, 14 Aug 2009 13:57:53 -0700): On Behalf Of Bill Broadley I put DDR3-1333 in the machine, but the bios seems to want to run them at 1066, How many dimms per memory channel do you have? My understanding (which may be a few months old) is

Re: [Beowulf] bizarre scaling behavior on a Nehalem

2009-08-14 Thread Mikhail Kuzminsky
In message from Bill Broadley b...@cse.ucdavis.edu (Fri, 14 Aug 2009 16:13:21 -0700): Mikhail Kuzminsky wrote: Your results look excellent, so I wouldn't be surprised if they are running at 1333. I have 12-18 GB/s on 4 threads of stream/ifort w/DDR3-1066 on dual E5520 server. But it works
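For comparison with the measured 12-18 GB/s, the theoretical peak can be worked out from the DIMM speed: a DDR3 channel moves 8 bytes per transfer, and Nehalem-EP has 3 channels per socket. A sketch of the arithmetic:

```shell
# Peak bandwidth per socket = channels * bus width (bytes) * transfer rate (MT/s)
channels=3
bytes_per_transfer=8
for mts in 1066 1333; do
  echo "DDR3-$mts: $((channels * bytes_per_transfer * mts)) MB/s per socket"
done
# DDR3-1066 works out to ~25.6 GB/s per socket, DDR3-1333 to ~32 GB/s,
# so 12-18 GB/s measured on 4 threads is a plausible fraction of peak.
```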

[Beowulf] moving of Linux HDD to other node: udev problem at boot

2009-08-19 Thread Mikhail Kuzminsky
udev.sh */ and then the proposal to try again. After this script finishes I don't see any HDDs in /dev. The BIOS setting for this SATA device is enhanced; compatible mode gives the same result. What may be the source of the problem? Maybe the HDD driver used by initrd? Mikhail Kuzminsky

Re: [Beowulf] moving of Linux HDD to other node: udev problem at boot

2009-08-20 Thread Mikhail Kuzminsky
? May be HDD driver used by initrd ? Mikhail Kuzminsky Computer Assistance to Chemical Research Center Zelinsky Institute of Organic Chemistry RAS Moscow PS. If I see (after finish of udev.sh script) the content of /sys - it's right in NUMA sense, i.e. /sys/devices/system/node contains

[Beowulf] Re: moving of Linux HDD to other node: udev problem at boot

2009-08-20 Thread Mikhail Kuzminsky
In message from David Mathog mat...@caltech.edu (Thu, 20 Aug 2009 11:29:17 -0700): Mikhail Kuzminsky k...@free.net wrote: I moved Western Digital SATA HDD w/SuSE 10.3 installed (on dual Barcelona server) to dual Nehalem server (master HDD on Nehalem server) with Supermicro X8DTi mobo. Which

Re: [Beowulf] moving of Linux HDD to other node: udev problem at boot

2009-08-20 Thread Mikhail Kuzminsky
In message from Greg Lindahl lind...@pbm.com (Thu, 20 Aug 2009 11:23:25 -0700): On Thu, Aug 20, 2009 at 08:06:07PM +0200, Reuti wrote: AFAIK, initrd (as the kernel itself) is universal for EM64T/x86-64, The problem is not the type of CPU, but the chipset (i.e. the necessary kernel module)

[Beowulf] nearly future of Larrabee

2009-08-21 Thread Mikhail Kuzminsky
/additional comments). Q5. How much may Larrabee-based hardware cost in 2010? I hope it'll be lower than $1. Any more exact predictions? Mikhail Kuzminsky Computer Assistance to Chemical Research Center Zelinsky Institute of Organic Chemistry RAS Moscow

Re: [Beowulf] nearly future of Larrabee

2009-08-24 Thread Mikhail Kuzminsky
In message from Bogdan Costescu bcoste...@gmail.com (Sun, 23 Aug 2009 03:17:08 +0200): 2009/8/21 Mikhail Kuzminsky k...@free.net: Q3. Does it means that Larrabee will give essential speedup also on relative short vectors ? I don't quite understand your question... For example, will DAXPY

Re: [Beowulf] Fortran Array size question

2009-11-03 Thread Mikhail Kuzminsky
hitting a Fortran limit, but I need to prove it. I haven't been able to find anything using The Google. It is not a Fortran restriction; it may be some compiler restriction. 64-bit ifort for EM64T allows you to use, for example, 400 million elements. Mikhail Kuzminsky Computer Assistance

Re: [Beowulf] Q: IB message rate large core counts (per node) ?

2010-02-25 Thread Mikhail Kuzminsky
BTW, is Cray SeaStar2+ better than IB - for nodes w/many cores? And I didn't see a latency comparison for SeaStar vs IB. Mikhail

Re: [Beowulf] Quantum Chemistry scalability for large number of processors (cores)

2012-09-28 Thread Mikhail Kuzminsky
Thu, 27 Sep 2012 11:11:24 +1000 от Christopher Samuel sam...@unimelb.edu.au: -BEGIN PGP SIGNED MESSAGE- On 27/09/12 03:52, Andrew Holway wrote: Let the benchmarks begin!!! Assuming the license agreement allows you to publish them.. :-) For example: Gaussian-09/03/... licenses

[Beowulf] nVidia Kepler GK110 GPU is incompatible w/Intel x86 hardware in PCI-E 3.0 mode ?

2013-04-18 Thread Mikhail Kuzminsky
? Mikhail Kuzminsky Computer Assistance to Chemical Research Center Zelinsky Institute of Organic Chemistry RAS Moscow

[Beowulf] cpupower, acpid cpufreq

2013-06-05 Thread Mikhail Kuzminsky
acpid and/or cpupower RPM packages in cluster nodes ? If yes, why are they interesting ? Mikhail Kuzminsky Computer Assistance to Chemical Research Center RAS, Zelinsky Institute of Organic Chemistry Moscow

[Beowulf] Prevention of cpu frequency changes in cluster nodes (Was : cpupower, acpid cpufreq)

2013-06-09 Thread Mikhail Kuzminsky
? Mikhail Kuzminsky Computer Assistance to Chemical Research Center RAS Zelinsky Institute of Organic Chemistry Moscow
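The fix discussed in this thread — preventing the clock from ramping up and down under load — comes down to pinning the cpufreq governor. A sketch of the usual admin commands (an assumption on my part, not quoted from the thread; paths are the standard Linux cpufreq sysfs interface, and `cpupower` is the tool named in the subject):

```shell
# Pin every core to the "performance" governor so frequency scaling
# does not perturb timing on cluster nodes (requires root).
for g in /sys/devices/system/cpu/cpu*/cpufreq/scaling_governor; do
  [ -w "$g" ] && echo performance > "$g"
done

# Equivalent with the cpupower utility from the thread title:
#   cpupower frequency-set -g performance
# Verify with:
#   cpupower frequency-info
```

This is a config/admin fragment for real nodes, so it is shown without an automated test; on systems using the intel_pstate driver the available governors differ, which is worth checking first.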

[Beowulf] Strange resume statements generated for GRUB2

2013-06-09 Thread Mikhail Kuzminsky
and why are they right ? Mikhail Kuzminsky Computer Assistance to Chemical Research Center RAS Zelinsky Institute of Organic Chemistry Moscow

Re: [Beowulf] Strange resume statements generated for GRUB2

2013-06-10 Thread Mikhail Kuzminsky
the natural possibility (enough knowledge) to work w/OpenSUSE, Fedora etc. So I prefer to change the GRUB2 configuration files :-) Mikhail On 06/09/2013 11:37 AM, Mikhail Kuzminsky wrote: I have swap in sda1 and / in sda2 partitions of the HDD. At installation of OpenSUSE 12.3 (where YaST2 is used) on my
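For the approach taken here — editing the GRUB2 configuration by hand rather than through YaST2 — a minimal sketch of the relevant fragment, using the partition layout stated in the thread (swap on /dev/sda1); the extra kernel parameters shown are placeholders, not quoted from the original message:

```shell
# /etc/default/grub (config fragment): make the resume= kernel
# parameter point at the real swap partition from the thread.
GRUB_CMDLINE_LINUX_DEFAULT="resume=/dev/sda1 quiet"

# Regenerate the generated config afterwards (openSUSE path):
#   grub2-mkconfig -o /boot/grub2/grub.cfg
```

Using `resume=UUID=...` instead of the device name is more robust against disks being reordered, at the cost of looking up the UUID with `blkid` first.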

[Beowulf] Nvidia K20 + Supermicro mobo

2013-07-16 Thread Mikhail Kuzminsky
] [size=16M] Memory at unassigned (64-bit, prefetchable) [disabled] Memory at unassigned (64-bit, prefetchable) [disabled] Do the kernel messages above mean that I have hardware/BIOS problems, or may it be an NVIDIA driver problem ? Mikhail Kuzminsky Computer Assistance

Re: [Beowulf] Nvidia K20 + Supermicro mobo

2013-07-17 Thread Mikhail Kuzminsky
. Cheers, Adam On Tue, Jul 16, 2013 at 10:29 AM, Mikhail Kuzminsky mikk...@mail.ru wrote: I want to test an NVIDIA GPU (PNY Tesla K20c) w/our own application for future use in our cluster. But I found problems w/NVIDIA driver (v.319.32) installation (OpenSUSE 12.3, kernel 3.7.10-1.1

[Beowulf] PCI configuration space errors ? (was Nvidia K20 + Supermicro mobo)

2013-07-22 Thread Mikhail Kuzminsky
Gen.2 mode forced (instead of Gen.3) or not. Mikhail Kuzminsky Computer Assistance to Chemical Research Center Zelinsky Institute of Organic Chemistry Moscow
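Whether the link was in fact forced to Gen.2 can be read from the PCIe capability registers. A sketch (the device address 02:00.0 is a placeholder — take the real one from `lspci | grep -i nvidia`):

```shell
# LnkCap shows what the device/slot supports, LnkSta what was
# actually negotiated: 2.5 GT/s = Gen1, 5 GT/s = Gen2, 8 GT/s = Gen3.
lspci -vv -s 02:00.0 | grep -E 'LnkCap:|LnkSta:'
```

If LnkSta reports 5 GT/s while LnkCap reports 8 GT/s, the link really is running in Gen.2 mode, whether forced by BIOS setting or by a training failure. Hardware-dependent, so no automated test is attached.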

Re: [Beowulf] Supermicro BIOS error (was Nvidia K20 + Supermicro mobo)

2013-08-15 Thread Mikhail Kuzminsky
situation is absent on the Supermicro X8-series board, and on the ASUS board the driver was installed successfully on OpenSUSE 12.3 (and 11.4 also); the nvidia-smi utility works normally. Mikhail Kuzminsky Computer Assistance to Chemical Research Center Zelinsky Institute of Organic Chemistry Moscow

[Beowulf] sorry

2013-09-30 Thread Mikhail Kuzminsky
I apologize again for the erroneous setting of the date field in the mailer I used some years ago. Mikhail Kuzminsky

Re: [Beowulf] Haswell as supercomputer microprocessors

2015-08-04 Thread Mikhail Kuzminsky
In my opinion, PowerPC A2 should more precisely be used as the name of the *core*, not of the IBM BlueGene/Q *processor chip*. The Power BQC name is used in TOP500, GREEN500, in a lot of Internet data, and in the IBM journal - see: Sugavanam K. et al. Design for low power and power management in IBM Blue Gene/Q

[Beowulf] Haswell as supercomputer microprocessors

2015-08-03 Thread Mikhail Kuzminsky
Haswell E5 v3 may also have 18 = 2**4 + 2 cores. Is there some sense in trying the POWER BQC or SPARC64 XIfx idea (not exactly) and using only 16 Haswell cores for parallel computations ? If the answer is yes, then how can this be done under Linux ? Mikhail Kuzminsky, Zelinsky Institute of Organic
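To the "how under Linux" question: the usual mechanism is CPU affinity, pinning the parallel job to 16 of the 18 cores and leaving the rest for OS noise. A sketch (the binary name `./app` and the core numbering 0-15 are placeholders; real core numbering on a dual-socket node should be checked with `lscpu`):

```shell
# Run the job on cores 0-15 only, leaving two cores free per socket's
# worth of OS daemons and interrupts.
taskset -c 0-15 ./app

# Or, for an OpenMP code, via affinity environment variables:
#   OMP_NUM_THREADS=16 OMP_PLACES="{0}:16:1" OMP_PROC_BIND=close ./app
```

A batch system can enforce the same thing cluster-wide through cgroup/cpuset integration, which is cleaner than per-job taskset calls. Shown as an admin fragment without an automated test, since it targets real 18-core nodes.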

[Beowulf] modern batch management systems

2015-11-10 Thread Mikhail Kuzminsky
(and potentially free in a few years) batch systems would you recommend ? Mikhail Kuzminsky Zelinsky Institute of Organic Chemistry RAS Moscow

Re: [Beowulf] Thoughts on IB EDR and Intel OmniPath

2016-05-01 Thread Mikhail Kuzminsky
d iWARP in the dim and distant past, and it was much better than plain old gigabit on the same systems (with Ammasso cards). -- BTW, this raises the question of the choice between RoCE vs iWARP. Does your "Even with RoCE2" mean that iWARP is worse than RoCE ? Mikhail K
