
Posts Tagged ‘Parallel programming’

Low Level Virtual Machine (LLVM)

October 6th, 2010

The Low Level Virtual Machine (LLVM) is a compiler infrastructure, written in C++, designed for compile-time, link-time, run-time, and “idle-time” optimization of programs written in arbitrary programming languages. Although originally implemented for C/C++, LLVM’s language-independent design (and its success) has since spawned a wide variety of front ends, including Objective-C, Fortran, Ada, Haskell, Java bytecode, Python, Ruby, ActionScript, GLSL, and others.

LLVM can provide the middle layers of a complete compiler system, taking intermediate form (IF) code from a compiler and emitting an optimized IF that can then be converted and linked into machine-dependent assembly code for a target platform. LLVM can accept the IF from the GCC toolchain, allowing it to be used with the wide array of existing compilers written for that project.

LLVM can also generate relocatable machine code at compile-time or link-time, or even binary machine code at run-time.

LLVM supports a language-independent instruction set and type system. Each instruction is in static single assignment (SSA) form, meaning that each variable (called a typed register) is assigned exactly once and never modified afterwards. This simplifies the analysis of dependencies among variables. LLVM allows code to be compiled statically, as it is under the traditional GCC system, or deferred for late compilation from the IF to machine code by a just-in-time (JIT) compiler, in a fashion similar to Java. The type system consists of basic types such as integers or floats and five derived types: pointers, arrays, vectors, structures, and functions. A type construct in a concrete language can be represented by combining these basic types in LLVM. For example, a class in C++ can be represented by a combination of structures, functions, and arrays of function pointers.
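As a concrete illustration of these ideas, here is a minimal sketch that builds an SSA-form function through LLVM’s C++ API (IRBuilder). The header paths follow the modern llvm/IR layout and have moved between LLVM versions, so treat the exact names as assumptions rather than a fixed recipe:

```cpp
#include "llvm/IR/BasicBlock.h"
#include "llvm/IR/IRBuilder.h"
#include "llvm/IR/LLVMContext.h"
#include "llvm/IR/Module.h"
#include "llvm/IR/Verifier.h"
#include "llvm/Support/raw_ostream.h"

int main() {
    llvm::LLVMContext context;
    llvm::Module module("demo", context);
    llvm::IRBuilder<> builder(context);

    // Build the function type "i32 (i32, i32)" from LLVM's derived types.
    llvm::Type *i32 = builder.getInt32Ty();
    llvm::FunctionType *fnType =
        llvm::FunctionType::get(i32, {i32, i32}, /*isVarArg=*/false);
    llvm::Function *fn = llvm::Function::Create(
        fnType, llvm::Function::ExternalLinkage, "add", &module);

    llvm::BasicBlock *entry = llvm::BasicBlock::Create(context, "entry", fn);
    builder.SetInsertPoint(entry);

    // Every value below is a typed SSA register: assigned once, never mutated.
    auto args = fn->arg_begin();
    llvm::Value *a = &*args++;
    llvm::Value *b = &*args;
    llvm::Value *sum = builder.CreateAdd(a, b, "sum");
    builder.CreateRet(sum);

    llvm::verifyFunction(*fn);            // sanity-check the generated IR
    module.print(llvm::outs(), nullptr);  // dump the textual IR
    return 0;
}
```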


C++ Standard (C++0x) Thread Library

March 28th, 2010

The upcoming C++ standard (C++0x) will support multithreading and concurrency both as an inherent part of the memory model and as part of the C++ Standard Library; a good starting point is Anthony Williams’s “Multithreading and Concurrency”.
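As a minimal sketch of those library facilities, assuming a C++0x-conforming compiler and standard library (the sum function is illustrative):

```cpp
#include <future>
#include <iostream>
#include <thread>

int sum(int a, int b) { return a + b; }

int main() {
    // Launch a task asynchronously; the future carries the result
    // (or any exception) back to the caller.
    std::future<int> result = std::async(sum, 20, 22);

    // An explicit thread for fire-and-forget style work.
    std::thread worker([] { std::cout << "worker running\n"; });
    worker.join();

    std::cout << "sum = " << result.get() << '\n';
    return 0;
}
```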

Herb Sutter is also a leading authority on software development. He is the best-selling author of several books, including Exceptional C++ and C++ Coding Standards, as well as hundreds of technical papers and articles, including “The Free Lunch Is Over,” which coined the term “concurrency revolution.”

Effective Concurrency: Prefer Futures to Baked-In “Async APIs”

Parallel Programming

October 22nd, 2009

Intel Threading Building Blocks (TBB)

October 6th, 2009

Intel® Threading Building Blocks (Intel® TBB) is an award-winning C++ template library that abstracts low-level threads into tasks, making it possible to create reliable, portable, and scalable parallel applications. Just as the C++ Standard Template Library (STL) extends the core language, Intel TBB offers C++ users a higher-level abstraction for parallelism. To use Intel TBB, developers write familiar C++ templates in their usual coding style, leaving the low-level threading details to the library. It is also portable across architectures and operating systems. With Intel TBB, developers get the benefits of faster programming, scalable performance, and code that is easier to maintain.
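For illustration, a minimal sketch of TBB’s loop parallelism, assuming TBB 2.2 or later (where the scheduler initializes automatically) and a compiler with lambda support; a function-object body works on older compilers:

```cpp
#include <tbb/blocked_range.h>
#include <tbb/parallel_for.h>
#include <vector>

int main() {
    std::vector<float> data(1000000, 1.0f);

    // TBB splits the range into chunks and schedules them as tasks;
    // the library, not the programmer, manages the worker threads.
    tbb::parallel_for(
        tbb::blocked_range<size_t>(0, data.size()),
        [&](const tbb::blocked_range<size_t>& r) {
            for (size_t i = r.begin(); i != r.end(); ++i)
                data[i] *= 2.0f;
        });
    return 0;
}
```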

Intel homepage

Intel® Threading Building Blocks 2.2 for Open Source

Parallel Pattern Library

October 5th, 2009

The new Parallel Pattern Library (PPL) enables you to express parallelism in your code, and its asynchronous messaging APIs can be used to isolate shared state and increase your application’s resilience and robustness.
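As a rough sketch, assuming Visual Studio 2010 or later (whose <ppl.h> provides these names), parallelizing a loop with the PPL looks like this:

```cpp
#include <ppl.h>
#include <array>

int main() {
    std::array<int, 100> squares;

    // parallel_for divides the index range among the Concurrency
    // Runtime's worker threads.
    Concurrency::parallel_for(0, 100, [&](int i) {
        squares[i] = i * i;  // iterations may run on different threads
    });
    return 0;
}
```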

1. Four Ways to Use the Concurrency Runtime in Your C++ Projects

2. Concurrency Runtime

The Concurrency Runtime is a concurrent programming framework for C++. The Concurrency Runtime simplifies parallel programming and helps you write robust, scalable, and responsive parallel applications.

The features that the Concurrency Runtime provides are unified by a common work scheduler. This work scheduler implements a work-stealing algorithm that enables your application to scale as the number of available processors increases; a short task_group sketch follows this list.

3. Parallel Programming in Native Code blog
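Referring back to item 2, here is a minimal task_group sketch against the Concurrency Runtime’s <ppl.h> interface. Tasks queued this way are distributed by the work-stealing scheduler, so an idle worker thread can “steal” queued work from a busy one; fib and the workload sizes are illustrative only:

```cpp
#include <ppl.h>
#include <iostream>

// Deliberately naive recursive workload, purely for demonstration.
int fib(int n) { return n < 2 ? n : fib(n - 1) + fib(n - 2); }

int main() {
    int a = 0, b = 0;

    Concurrency::task_group tasks;
    tasks.run([&] { a = fib(30); });  // may execute on another thread
    tasks.run([&] { b = fib(28); });
    tasks.wait();                     // block until both tasks finish

    std::cout << a + b << '\n';
    return 0;
}
```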

OpenMP

October 4th, 2009

The OpenMP Application Program Interface (API) supports multi-platform shared-memory parallel programming in C/C++ and Fortran on all architectures, including Unix platforms and Windows NT platforms. Jointly defined by a group of major computer hardware and software vendors, OpenMP is a portable, scalable model that gives shared-memory parallel programmers a simple and flexible interface for developing parallel applications for platforms ranging from the desktop to the supercomputer.
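By way of example, a minimal OpenMP sketch in C++; compiler flags such as -fopenmp for GCC or /openmp for Visual C++ are assumed, and the loop body is illustrative:

```cpp
#include <omp.h>
#include <cstdio>

int main() {
    const int n = 1000000;
    double sum = 0.0;

    // One pragma turns the sequential loop into a parallel one. The
    // reduction clause gives each thread a private partial sum and
    // combines them when the threads join.
    #pragma omp parallel for reduction(+:sum)
    for (int i = 0; i < n; ++i)
        sum += 1.0 / (i + 1);

    std::printf("harmonic(%d) = %f (max threads: %d)\n",
                n, sum, omp_get_max_threads());
    return 0;
}
```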

OpenMP homepage
