Techucation

A Blog by Malcolm Yoke Hean Low

Archive for the HPC category

Chinese Computer is the world's fastest - and without using US chips

Posted on Saturday, September 10, 2016 at 11:17 AM by Malcolm

A Chinese supercomputer built using domestic chip technology has been declared the world's fastest. The news highlights China's recent advances in the creation of such systems, as well as the country's waning reliance on US semiconductor technology. Read the rest of the article here.

Edited on: Saturday, September 10, 2016 12:01 PM

Posted in HPC (RSS), Research (RSS), Tech (RSS)

Improving Futures and Callbacks in C++ To Avoid Synching by Waiting

Posted on Saturday, July 28, 2012 at 9:35 AM by Malcolm

The C++11 standard provides several long-requested concurrency features, such as std::thread and std::future. While these are a welcome addition to the language, in this article the author shows that they are sufficient only for the most basic concurrency needs. He further argues that the C++11 primitives are particularly ill-suited for modern applications that must deal with the concurrency imposed by I/O operations while exploiting multicore hardware at the same time. Microsoft's Parallel Patterns Library (PPL) provides a solution based on tasks. Read the rest of the article here.
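
To make the contrast concrete, here is a minimal sketch of my own (not code from the article) that pits a blocking std::future against a PPL task continuation. It assumes Visual C++ with the <ppltasks.h> header, and ComputeAnswer is a made-up stand-in for real work.

#include <future>
#include <iostream>
#include <ppltasks.h>   // Microsoft's PPL tasks (Visual C++)

// Made-up stand-in for some long-running computation.
int ComputeAnswer() { return 42; }

int main() {
    // C++11 style: the caller eventually blocks inside get(), occupying a
    // thread that does nothing but wait for the result.
    std::future<int> f = std::async(std::launch::async, ComputeAnswer);
    std::cout << "blocking result: " << f.get() << "\n";

    // PPL style: attach a continuation with then(); the follow-up work is
    // scheduled only when the task completes, so no thread sits waiting.
    auto done = concurrency::create_task([] { return ComputeAnswer(); })
        .then([](int result) {
            std::cout << "continuation result: " << result << "\n";
        });
    done.wait();  // only so this small demo doesn't exit before the print
    return 0;
}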

Posted in HPC (RSS)

Parallel Microsoft-Style

Posted on Monday, August 22, 2011 at 10:25 PM by Malcolm

"Actors avoid the dangers of the shared-memory model because they touch only the data that's sent to them in messages. If they need some external data, they request it from other actors (by sending the request in the form of a message). In addition, the data actors receive is immutable. They don't change the data, rather they copy it and transform the copy. In this way, no two threads are ever contending for access to the same data item, nor are their internal operations interfering with one another. As a result, the nightmares I described earlier disappear almost completely."

This article from Dr. Dobb's Journal discusses how the actor model of concurrency is gaining favor in Java but remains largely ignored by Microsoft.
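
To make the quote concrete, here is a minimal actor sketch in C++ (my own illustration, not code from the article): the actor owns its state and a mailbox, and the only way other threads interact with it is by sending messages by value.

#include <condition_variable>
#include <iostream>
#include <mutex>
#include <queue>
#include <string>
#include <thread>

// A toy actor: it owns its state, and the only way to reach that state
// is to post a message into its mailbox.
class PrinterActor {
public:
    PrinterActor() : worker_(&PrinterActor::Run, this) {}

    ~PrinterActor() {
        Send("quit");       // poison-pill message asks the actor to stop
        worker_.join();
    }

    // Messages are passed by value: the actor gets its own copy to work on.
    void Send(std::string message) {
        {
            std::lock_guard<std::mutex> lock(mutex_);
            mailbox_.push(std::move(message));
        }
        ready_.notify_one();
    }

private:
    void Run() {
        for (;;) {
            std::string message;
            {
                std::unique_lock<std::mutex> lock(mutex_);
                ready_.wait(lock, [this] { return !mailbox_.empty(); });
                message = std::move(mailbox_.front());
                mailbox_.pop();
            }
            if (message == "quit") return;
            // All mutation happens on the actor's private state, on its own
            // thread, so no other thread ever contends for it.
            ++handled_;
            std::cout << "message " << handled_ << ": " << message << "\n";
        }
    }

    std::queue<std::string> mailbox_;
    std::mutex mutex_;
    std::condition_variable ready_;
    int handled_ = 0;
    std::thread worker_;   // declared last so it starts after the state above
};

int main() {
    PrinterActor actor;
    actor.Send("hello");
    actor.Send("world");
    return 0;   // the destructor sends "quit" and joins the worker thread
}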

Edited on: Tuesday, August 23, 2011 10:31 AM

Posted in HPC (RSS)

Compilers and More: OpenCL Promises and Potential

Posted on Saturday, September 12, 2009 at 12:49 PM by Malcolm

In this article, Michael Wolfe from PGI discusses different aspects of OpenCL. Given all the hype, what can we expect from OpenCL? Is it really simple? Is it portable? Will it replace other parallel programming models?

Posted in HPC (RSS)

Ebook: HPC for Dummies

Posted on Thursday, September 10, 2009 at 12:37 AM by Malcolm

This special edition eBook from Sun and AMD shares details on real-world uses of HPC, explains the different types of HPC, guides you on how to choose between different suppliers, and provides benchmarks and guidelines you can use to get your system up and running. Get it here.

Edited on: Thursday, September 10, 2009 12:57 AM

Posted in General (RSS), HPC (RSS)

An Introduction to Parallel Programming - Module 1: Performance Tuning

Posted on Monday, July 06, 2009 at 1:57 PM by Malcolm

A seven-part series from Sun introducing parallel programming. Part 1 covers performance tuning.

Edited on: Monday, July 06, 2009 3:52 PM

Posted in HPC (RSS)

parallel_invoke() - Running Multiple Functions in Parallel using Intel Threading Building Blocks

Posted on Monday, April 06, 2009 at 4:46 PM by Malcolm

As shown in this article, running multiple functions in parallel with Intel Threading Building Blocks (TBB) is as simple as the following:

#include "tbb/parallel_invoke.h"

void Function1();
void Function2();
void Function3();

void RunFunctions() {
    // Runs the three functions in parallel; returns when all three have finished.
    tbb::parallel_invoke(Function1, Function2, Function3);
}
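
With a C++11 compiler, parallel_invoke also accepts lambdas, so short pieces of work need not be wrapped in named functions; a small sketch of my own along those lines:

#include "tbb/parallel_invoke.h"
#include <vector>

void DoubleAndIncrement(std::vector<int>& a, std::vector<int>& b) {
    // Each lambda touches only its own vector, so the two can safely run in parallel.
    tbb::parallel_invoke(
        [&a] { for (int& x : a) x *= 2; },
        [&b] { for (int& x : b) x += 1; });
}
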
Edited on: Monday, April 06, 2009 4:49 PM

Posted in HPC (RSS)

Cloud Computing for Dummies

Posted on Wednesday, February 04, 2009 at 10:53 PM by Malcolm

This video on YouTube gives an easy-to-understand explanation of what cloud computing is.

Edited on: Monday, February 09, 2009 9:00 AM

Posted in HPC (RSS)

Four Paths to HPC using Java

Posted on Friday, December 19, 2008 at 1:19 AM by Malcolm

This article from JDJ gives a high-level description of four approaches to writing parallel applications in Java: the fork/join framework, Pervasive DataRush, Terracotta, and Hadoop.

Edited on: Tuesday, December 23, 2008 7:18 PM

Posted in HPC (RSS)

Parallel Programming: Three Things You Must Teach

Posted on Friday, December 19, 2008 at 12:53 AM by Malcolm

From Intel Software College, this series of three lectures provides an introduction to parallel programming.

Module 1. Recognizing Potential Parallelism

Module 2. Shared Memory and Threads

Part 1

Part 2

Module 3. Programming with OpenMP

Part 1

Part 2

Edited on: Monday, March 30, 2009 9:37 AM

Posted in HPC (RSS)

Google Code University - Introduction to Parallel Programming and MapReduce

Posted on Friday, December 19, 2008 at 12:40 AM by Malcolm

This tutorial from the Google Code University covers the basics of parallel programming and the MapReduce programming model. The prerequisites are significant programming experience with a language such as C++ or Java, and familiarity with data structures and algorithms.
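
As a rough, sequential illustration of the MapReduce model the tutorial covers (my own sketch, not code from the tutorial): a map step emits key/value pairs, and a reduce step combines the values for each key.

#include <iostream>
#include <map>
#include <sstream>
#include <string>
#include <vector>

// Map step: emit a (word, 1) pair for every word in one document.
std::vector<std::pair<std::string, int>> MapWords(const std::string& doc) {
    std::vector<std::pair<std::string, int>> pairs;
    std::istringstream in(doc);
    std::string word;
    while (in >> word) pairs.emplace_back(word, 1);
    return pairs;
}

// Reduce step: sum the values emitted for each key (word).
std::map<std::string, int> ReduceCounts(
        const std::vector<std::pair<std::string, int>>& pairs) {
    std::map<std::string, int> counts;
    for (const auto& p : pairs) counts[p.first] += p.second;
    return counts;
}

int main() {
    // In a real MapReduce run the map calls are spread across many machines
    // and the framework groups the pairs by key before reducing.
    auto pairs = MapWords("the quick brown fox jumps over the lazy dog the end");
    for (const auto& kv : ReduceCounts(pairs))
        std::cout << kv.first << ": " << kv.second << "\n";
    return 0;
}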



Posted in General (RSS), HPC (RSS), Research (RSS)

Why Lazy Functional Programming Languages are Good for Multicore

Posted on Saturday, September 20, 2008 at 11:25 AM by Malcolm

In this article, Simon Peyton Jones describes his interest in lazy functional programming languages, and chats about their increasing relevance in a world with rapidly increasing numbers of multi-core CPUs and clusters. "I think Haskell is increasingly well placed for this multi-core stuff, as I think people are increasingly going to look to languages like Haskell and say 'oh, that's where we can get some good ideas at least', whether or not it's the actual language or concrete syntax that they adopt."



Edited on: Saturday, September 20, 2008 11:27 AM

Posted in General (RSS), HPC (RSS)

CUDA, Supercomputing for the Masses

Posted on Saturday, September 20, 2008 at 11:22 AM by Malcolm

This series of articles introduces the power of CUDA through working code, and walks through the thought process that helps programmers map applications onto multi-threaded hardware (such as GPUs) to get big performance increases. Of course, not all problems can be mapped efficiently onto multi-threaded hardware, so part of the thought process is to distinguish what will and what won't work, plus provide a common-sense idea of what might work "well enough".



Edited on: Saturday, September 20, 2008 11:27 AM

Posted in HPC (RSS)

9 Reusable Parallel Data Structures and Algorithms

Posted on Wednesday, September 17, 2008 at 9:36 AM by Malcolm

This article looks at nine reusable data structures and algorithms that are common to many parallel programs. Each example is accompanied by fully working code, though it is not completely hardened, tested, and tuned. The list is by no means exhaustive, but it represents some of the more common patterns. Many of the examples build on each other.
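
To give a flavor of the kind of reusable building block the article covers, here is a minimal countdown-latch sketch in C++, my own illustration of such a pattern rather than code from the article.

#include <condition_variable>
#include <iostream>
#include <mutex>
#include <thread>
#include <vector>

// A countdown latch: one thread blocks in Wait() until N worker threads
// have each called Signal(). A reusable building block for fork/join phases.
class CountdownLatch {
public:
    explicit CountdownLatch(int count) : count_(count) {}

    void Signal() {
        std::lock_guard<std::mutex> lock(mutex_);
        if (--count_ == 0) done_.notify_all();
    }

    void Wait() {
        std::unique_lock<std::mutex> lock(mutex_);
        done_.wait(lock, [this] { return count_ == 0; });
    }

private:
    int count_;
    std::mutex mutex_;
    std::condition_variable done_;
};

int main() {
    const int workers = 4;
    CountdownLatch latch(workers);
    std::vector<std::thread> pool;
    for (int i = 0; i < workers; ++i)
        pool.emplace_back([&latch] {
            // ... do one slice of the work here ...
            latch.Signal();
        });
    latch.Wait();   // blocks until all four workers have signaled
    std::cout << "all workers finished\n";
    for (auto& t : pool) t.join();
    return 0;
}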



Edited on: Thursday, September 18, 2008 12:08 AM

Posted in HPC (RSS)

Parallel Programming Made Easy

Posted on Saturday, September 06, 2008 at 12:34 PM by Malcolm

Michael Wolfe from The Portland Group looks at the current research projects aimed at making parallel programming easy. He has this to say: "Every time I see someone claiming they've come up with a method to make parallel programming easy, I can't take them seriously. First, making parallel programming easy must be harder than making programming easy, and I don't think we've reached that first milestone yet."

Edited on: Saturday, September 06, 2008 12:36 PM

Posted in HPC (RSS)

Online course on multi-core performance from NCSA

Posted on Thursday, September 04, 2008 at 9:30 AM by Malcolm

The National Center for Supercomputing Applications (NCSA) is offering a new Web-based course, "Introduction to Multi-core Performance." This tutorial helps current and prospective users of multi-core systems understand and use the technology to accelerate their research. Multi-core processors, which hold the promise of enhanced performance and more efficient parallel processing, are a key stepping stone on the path to petascale computation. Applications that run on multi-core systems must be optimized to take full advantage of the improved performance offered by multi-core technology. To browse the course catalog, go to ci-tutor.ncsa.uiuc.edu/browse.php. To create a login and take a course, go to ci-tutor.ncsa.uiuc.edu/.

Edited on: Thursday, September 04, 2008 10:42 AM

Posted in General (RSS), HPC (RSS)

Five Multicore Chip Startups to Watch

Posted on Sunday, August 31, 2008 at 6:23 PM by Malcolm

As semiconductor firms get around the limits of making individual processors faster by putting more cores onto a single chip, the mindset of today's software developers and engineers needs to adapt. To really take advantage of multiple cores, a programmer needs to look at ways to make her code parallel, splitting jobs into different parts rather than relying on the step-by-step instructions delivered to single-core machines. There are also energy and communications issues that can constrain how far multicore can grow. This article gives a list of startups that have the potential to stretch multicore processors to their very limit.

Edited on: Saturday, September 06, 2008 4:49 PM

Posted in HPC (RSS)

GPUs Help Spread Parallel Computing

Posted on Tuesday, August 26, 2008 at 1:09 PM by Malcolm

Graphics processing units (GPUs) are evolving to provide a diverse range of image-processing functions flexibly and at high speed; as a result, they are morphing into architectures appropriate for general-purpose calculating engines. The term "GPU computing" has been applied to the idea of using their capabilities for high-speed processing of applications such as medical image processing.

Posted in HPC (RSS)

New educational section launches on CUDA Zone

Posted on Saturday, August 16, 2008 at 7:12 PM by Malcolm

NVIDIA Corp. has launched "CUDA U," a new section on CUDA Zone (www.nvidia.com/cuda) that provides students, instructors, and developers with educational resources for its CUDA programming environment. There is now one place to go for CUDA instructional material, syllabuses and curricula, and information on schools and programs that offer CUDA instruction. CUDA U can be found at www.nvidia.com/object/cuda_education.

Posted in HPC (RSS)

Intel Lifts the Curtain on Larrabee

Posted on Thursday, August 07, 2008 at 5:00 PM by Malcolm

Intel representatives revealed some of the architectural details of the company's much talked-about Larrabee processor. The new design is the chipmaker's first manycore x86 platform and represents what could be described as a general-purpose, x86 vector processor, combining features from both GPUs and CPUs. The architecture is the culmination of more than three years of R&D accomplished under Intel's terascale research program.

Edited on: Thursday, August 07, 2008 5:02 PM

Posted in HPC (RSS)