Modern-language examination - Industrial Computer Science


"When will parallel processing arrive in mainstream computing?" It's one of those infuriating questions that always seems to require the answer, "Next year."

We continually need more and more computing power to run applications such as 3-D graphics, MPEG video, and huge SQL queries. Using multiple processors seems an obvious way to supply that power. The problem is software.

Although operating systems such as Windows NT support multiple processors, desktop PC applications have yet to fully exploit this capability through internal multithreading. Even with more sophisticated enterprise-level software, a portability problem exists: until recently, parallel programming techniques have been so hardware-dependent that a program that ran on one parallel architecture had to be rewritten to run on a different one.

But that profile is changing. Today, many commercial applications are demanding the power that parallel processing offers. At the same time, three main technical developments - new hardware designs, clustering, and advances in program-code portability - are allowing parallel processing to break through into wider markets. Change is coming at a critical time.

Switching to Success

The first of the three important trends - hardware innovations - sees designers moving to high-speed switched interconnects. These interconnects can make distributed-memory massively parallel processing (MPP) machines appear to programmers like shared-memory symmetric multiprocessing (SMP) machines, which enormously simplifies programming them.

The key to success in designing a parallel computer is to get the right balance between the processing power of the CPUs and the communication bandwidth between them; any imbalance here will mean that some of the CPUs will be starved of data and the advantage of parallelism lost.
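A toy throughput model (my own back-of-the-envelope sketch, not from the article) makes this balance concrete: a node's useful rate is capped by the slower of its CPU and its interconnect link, so CPU power beyond the link bandwidth is simply wasted.

```python
# Back-of-the-envelope sketch (not from the article): model each node's
# useful data rate as the minimum of its CPU's processing rate and the
# bandwidth of its interconnect link, both in bytes per second.

def effective_throughput(n_cpus, compute_rate, link_bandwidth):
    """Aggregate throughput for n_cpus identical nodes, capped per node
    by whichever of compute or communication is scarcer."""
    return n_cpus * min(compute_rate, link_bandwidth)

# A balanced machine uses all of its CPU power...
balanced = effective_throughput(8, 1e9, 1e9)   # 8.0e9 bytes/s

# ...while CPUs four times faster than their links gain nothing: they
# sit starved of data, exactly the imbalance the text warns about.
starved = effective_throughput(8, 4e9, 1e9)    # still 8.0e9 bytes/s
```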

Such switched-interconnect fabrics make it possible to allocate a single large virtual address space to all the separate memories of a physically distributed system. Thus, the machine appears to programs as a shared-memory machine because when two nodes are actually connected, they alone have access to that piece of interconnect. Every time you add more processing power, you are also adding more communications bandwidth. The same technique applies equally to I/O, so disk drives may also be connected via crossbar switches, allowing you to momentarily attach any disk to any CPU node.
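As a purely illustrative sketch (the names and figures below are mine, not the article's), the translation such a fabric performs can be pictured as striping one flat address space across the nodes' private memories:

```python
# Illustrative only: a single flat "global" address space striped across
# the private memories of a distributed machine. The switch fabric would
# do this translation in hardware; a plain function shows the idea.

NODE_MEM = 1024  # bytes of local memory per node (toy figure)

def locate(global_addr, node_mem=NODE_MEM):
    """Translate a global address into (node id, offset within that node)."""
    return global_addr // node_mem, global_addr % node_mem

# Global address 2500 lives on node 2, at local offset 452.
```

Adding a node extends the global space by another `NODE_MEM` bytes, mirroring the text's point that more processing power also brings more memory and bandwidth.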

Clustering for Comfort

[The trend toward clustering, where groups of workstations or PCs employ a middleware layer to make them behave like a single parallel computer, means that companies can boost their existing hardware investment by using the LAN as a "supercomputer" during off-peak periods. Clustering treats a network of separate computers as if it were a single computer. This approach has been used for many years in the minicomputer sector for high-availability, fault-tolerant servers.]

You can implement clustering using software alone, a concept made popular by PVM (Parallel Virtual Machine), a message-passing environment. There are implementations of PVM for many flavors of Unix and now for Windows PCs. This approach created the "supercomputer" - actually a network of 117 Sun workstations - used to render frames for the movie Toy Story.

The Message Passing Interface (MPI), with language bindings for C++ and Fortran, lets you build portable parallel applications to run on clusters of workstations. The second version, MPI 2, has just been released and adds advanced features like dynamic process management, parallel I/O, and real-time extensions.
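Running real MPI code needs an MPI library and launcher, so as a stand-in this sketch (mine, not the article's) mimics the point-to-point send/receive style that PVM and MPI standardized, using Python's standard multiprocessing module: one process plays rank 0 and hands work to another playing rank 1.

```python
# Stand-in for an MPI send/recv pair: two OS processes with separate
# address spaces exchange messages over a pipe, the way two ranks would
# talk over a cluster interconnect. Real MPI would use MPI_Send/MPI_Recv.
from multiprocessing import Pipe, Process

def worker(conn):
    """'Rank 1': receive a value, send back its square, and exit."""
    value = conn.recv()
    conn.send(value * value)
    conn.close()

def run_pair(value):
    """'Rank 0': spawn rank 1, send it `value`, collect the result."""
    rank0_end, rank1_end = Pipe()
    p = Process(target=worker, args=(rank1_end,))
    p.start()
    rank0_end.send(value)
    result = rank0_end.recv()
    p.join()
    return result
```

Because the two sides share nothing but messages, adding more "ranks" means adding processes, not rewriting the exchange - the portability property that message-passing libraries bring to clusters.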

Basically, each node in a cluster becomes an SMP computer in its own right, with a smart interconnect designed to make the whole cluster look to software like a single SMP machine. Thus, there's no need to change any application software when you add more nodes.


The lack of portability of program code between different parallel architectures remains a major stumbling block for new commercial customers, companies that place great importance on after-sales support. Parallel computing is caught in a vicious circle: the lack of commercial software hinders parallel hardware vendors from selling machines, while software vendors will not spend money porting their code to parallel machines because the market is too small.

However, newly invented software layers now disguise the underlying machine's topology and allow programs to be more easily ported between machines.

Grand Strategy

Don't think that the traditional supercomputer market has gone away completely; supercomputers are a strategic resource for the defense industry, so no government would let that happen.

Perhaps asking when parallel computing will hit the mainstream isn't the right question. Rather, we should ask whether one declining and one growing industry sector - supercomputing and PC-based client/server computing, respectively - can combine to make parallel computing viable for general business.

Thanks to the three important technical innovations we're seeing today, the answer appears to be "Yes, they can."

BYTE Magazine, May 1997 (adapted)

MPP: Massively Parallel Processing - SMP: Symmetric Multi-Processing - LAN: Local Area Network - PVM: Parallel Virtual Machine - MPI: Message Passing Interface - SQL: Structured Query Language


1/ Write a summary of the text in French, in 250 words (plus or minus 10%). (10 points)

2/ Translate into French the passage from The trend toward clustering to fault-tolerant servers. (5 points)

3/ Personal expression in English: Why, in your opinion, do we always need more powerful PCs? (about 80 words). (5 points)

Duration: 2 hours - The use of a bilingual dictionary is permitted



Christian Lassure - English For Techies