Cell software development

Software development for the Cell microprocessor combines conventional development practices for the PowerPC-compatible PPU core with novel challenges posed by the functionally reduced SPU coprocessors.

Linux on Cell

An open-source, software-based strategy was adopted to accelerate the development of a Cell BE ecosystem and to provide an environment for developing Cell applications, including a GCC-based Cell compiler, binutils, and a port of the Linux operating system.[1]

Octopiler

Octopiler is IBM's prototype compiler to allow software developers to write code for Cell processors.[2][3][4]

Software portability

Differences between VMX and SPU

The VMX (Vector Multimedia Extensions) technology is conceptually similar to the vector model provided by the SPU processors, but there are many significant differences.

VMX to SPU comparison

Feature                 VMX                       SPU
Word size               32 bits                   32 bits
Number of registers     32                        128
Register width          128-bit quadword          128-bit quadword
Integer formats         8, 16, 32                 8, 16, 32, 64
Saturation support      yes                       no
Byte ordering           big (default), little     big endian
Floating-point modes    Java, non-Java            single precision, IEEE double
Memory alignment        quadword only             quadword only

The VMX Java mode conforms to the Java Language Specification 1 subset of the default IEEE Standard, extended to include IEEE and C9X compliance where the Java standard is silent. In a typical implementation, non-Java mode flushes denormal values to zero, while Java mode traps into an emulator when the processor encounters such a value.

The IBM PPE Vector/SIMD manual does not define operations for double-precision floating point, though IBM has published material implying certain double-precision performance numbers associated with the Cell PPE VMX technology.

Intrinsics

Compilers for Cell provide intrinsics to expose useful SPU instructions in C and C++. Instructions that differ only in the type of operand (such as a, ai, ah, ahi, fa, and dfa for addition) are typically represented by a single C/C++ intrinsic which selects the proper instruction based on the type of the operand.
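
As a hedged illustration, the following sketch assumes the SPU C/C++ Language Extensions header spu_intrinsics.h and its generic spu_add and spu_splats intrinsics; the instruction chosen for each call depends on the operand types, so the instruction names in the comments are indicative rather than guaranteed.

    #include <spu_intrinsics.h>

    /* The same generic intrinsic selects a different SPU instruction
       depending on the operand type. */
    void add_examples(void)
    {
        vec_int4   a = spu_splats(1),    b = spu_splats(2);
        vec_float4 x = spu_splats(1.0f), y = spu_splats(2.0f);

        vec_int4   i_sum = spu_add(a, b);  /* word add            -> "a"  */
        vec_int4   c_sum = spu_add(a, 5);  /* immediate word add  -> "ai" */
        vec_float4 f_sum = spu_add(x, y);  /* float add           -> "fa" */

        (void)i_sum; (void)c_sum; (void)f_sum;
    }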

Porting VMX code for SPU

A great body of code developed for other IBM Power processors could potentially be adapted and recompiled to run on the SPU. This code base includes VMX code that runs under the PowerPC version of Apple's Mac OS X, where it is better known as AltiVec. Depending on how many VMX-specific features are involved, the adaptation can range from straightforward, to onerous, to completely impractical. The most important workloads for the SPU generally map quite well.

In some cases it is possible to port existing VMX code directly. If the VMX code is highly generic (it makes few assumptions about the execution environment), the translation can be relatively straightforward. The two processors specify different binary code formats, so recompilation is required at a minimum. Even where instructions with the same behavior exist, they do not share the same names, so these must be mapped as well. IBM provides compiler intrinsics that take care of this mapping transparently as part of the development toolkit.
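
A minimal sketch of how the same operation might be written against each instruction set, assuming the usual AltiVec header altivec.h on VMX targets and spu_intrinsics.h on the SPU; the __SPU__ macro test and the cited instruction names reflect typical toolchains and should be treated as assumptions.

    #ifdef __SPU__
    #include <spu_intrinsics.h>
    /* On the SPU, the add is spelled spu_add and maps to the "fa" instruction. */
    static vector float vadd(vector float a, vector float b)
    {
        return spu_add(a, b);
    }
    #else
    #include <altivec.h>
    /* On VMX/AltiVec, the same operation is vec_add, mapping to "vaddfp". */
    static vector float vadd(vector float a, vector float b)
    {
        return vec_add(a, b);
    }
    #endif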

In many cases, however, a directly equivalent instruction does not exist, and the workaround may or may not be obvious. For example, if saturation behavior is required on the SPU, it can be synthesized with additional SPU instructions, at some cost in efficiency. At the other extreme, if Java floating-point semantics are required, they are almost impossible to achieve on the SPU processor; obtaining the same computation on the SPU might require an entirely different algorithm to be written from scratch.
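
As an example of the first case, a saturating unsigned add has a direct VMX instruction but must be synthesized on the SPU. The following sketch, using the spu_add, spu_cmpgt and spu_or intrinsics from spu_intrinsics.h, is one plausible way to do it, not a canonical recipe.

    #include <spu_intrinsics.h>

    /* Saturating unsigned 32-bit add: if a lane wrapped around, the
       comparison mask for that lane is all ones, and ORing it in clamps
       the result to 0xFFFFFFFF. */
    static vec_uint4 add_saturate_u32(vec_uint4 a, vec_uint4 b)
    {
        vec_uint4 sum      = spu_add(a, b);
        vec_uint4 overflow = spu_cmpgt(a, sum);  /* wrapped iff sum < a */
        return spu_or(sum, overflow);
    }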

The most important conceptual similarity between VMX and the SPU architecture is that they support the same vectorization model. For this reason, most algorithms already adapted to AltiVec will usually adapt successfully to the SPU architecture as well.

Local store exploitation

Transferring data between the local stores of different SPUs can have a large performance cost. The local stores of individual SPUs can be exploited using a variety of strategies.

Applications with high locality, such as dense matrix computations, represent an ideal workload class for the local stores in Cell BE.[5]

Streaming computations can be accommodated efficiently by software pipelining memory block transfers with a multi-buffering strategy.[1]
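
A minimal double-buffering sketch, assuming the MFC DMA intrinsics from spu_mfcio.h (mfc_get, mfc_write_tag_mask, mfc_read_tag_status_all); the chunk size, the process() kernel, and the assumption that nbytes is a multiple of CHUNK are placeholders for illustration only.

    #include <spu_mfcio.h>

    #define CHUNK 4096

    /* Two local-store buffers so the DMA for block i+1 overlaps the
       processing of block i. */
    static char buf[2][CHUNK] __attribute__((aligned(128)));

    extern void process(char *block, unsigned size);    /* hypothetical kernel */

    void stream(unsigned long long ea, unsigned long long nbytes)
    {
        unsigned cur = 0;
        mfc_get(buf[cur], ea, CHUNK, cur, 0, 0);         /* prime the first block */
        for (unsigned long long off = 0; off < nbytes; off += CHUNK) {
            if (off + CHUNK < nbytes)                    /* start fetching the next block */
                mfc_get(buf[cur ^ 1], ea + off + CHUNK, CHUNK, cur ^ 1, 0, 0);
            mfc_write_tag_mask(1 << cur);                /* wait only for the current block */
            mfc_read_tag_status_all();
            process(buf[cur], CHUNK);
            cur ^= 1;                                    /* swap buffers */
        }
    }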

A software cache offers a solution for random access patterns.[6]
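
Purely as an illustration of the idea (the SDK's software-managed cache library has its own interface), a direct-mapped, read-only software cache might look roughly like this, again using the mfc_* DMA intrinsics; all names and sizes here are invented for the sketch.

    #include <spu_mfcio.h>

    #define LINE   128      /* one DMA-friendly cache line            */
    #define NLINES 64       /* 8 KB of local store devoted to caching */

    static char               lines[NLINES][LINE] __attribute__((aligned(128)));
    static unsigned long long tags[NLINES];
    static int                valid[NLINES];

    /* Return a local-store pointer for an arbitrary effective address,
       fetching the containing line by DMA on a miss. */
    static void *sw_cache_lookup(unsigned long long ea)
    {
        unsigned long long line_ea = ea & ~(unsigned long long)(LINE - 1);
        unsigned idx = (unsigned)(line_ea / LINE) % NLINES;

        if (!valid[idx] || tags[idx] != line_ea) {       /* miss: DMA the line in */
            mfc_get(lines[idx], line_ea, LINE, 0, 0, 0);
            mfc_write_tag_mask(1 << 0);
            mfc_read_tag_status_all();
            tags[idx]  = line_ea;
            valid[idx] = 1;
        }
        return lines[idx] + (ea - line_ea);              /* offset within the line */
    }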

More sophisticated applications can use multiple strategies for different data types.[7]

References
