Standard Performance Evaluation Corporation
The Standard Performance Evaluation Corporation (SPEC) is an American non-profit corporation that aims to "produce, establish, maintain and endorse a standardized set" of performance benchmarks for computers.[1]
Formation | 1988
---|---
Type | Non-profit corporation
Headquarters | Gainesville, Virginia
Membership | Hardware and software vendors, universities, research centers
Staff | 5
Website | www.spec.org
SPEC was founded in 1988.[2][3] SPEC benchmarks are widely used to evaluate the performance of computer systems; the test results are published on the SPEC website.
SPEC evolved into an umbrella organization encompassing four groups: the Graphics and Workstation Performance Group (GWPG), the High Performance Group (HPG), the Open Systems Group (OSG), and the newest, the Research Group (RG).
Structure
Membership
Membership in SPEC is open to any interested company or entity that is willing to commit to SPEC's standards. It allows:
- Participation in benchmark development
- Participation in review of results
- Complimentary software based on group participation
The list of members is available on SPEC's membership page.
Membership Levels
- Sustaining members pay full dues; they are typically hardware and software companies.
- SPEC Associates pay a reduced fee available to non-profit organizations; they are typically universities and research centers.
SPEC Benchmark Suites
The benchmarks aim to test "real-life" situations. There are several benchmarks testing Java scenarios, from simple computation (SPECjbb) to a full system with Java EE, database, disk, and network (SPECjEnterprise).
The SPEC CPU suites test CPU performance by measuring the run time of several programs such as the compiler GCC, the chemistry program gamess, and the weather program WRF. The various tasks are equally weighted; no attempt is made to weight them based on their perceived importance. An overall score is based on a geometric mean.
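A sketch of how such a score is derived: each program's measured run time is compared against its run time on a fixed reference machine, giving a ratio per program, and the overall score is the geometric mean of those ratios (the notation below is generic, not SPEC's official formula):

$$r_i = \frac{t_{\mathrm{ref},i}}{t_{\mathrm{run},i}}, \qquad \text{score} = \left(\prod_{i=1}^{n} r_i\right)^{1/n}$$

For example, per-program ratios of 2, 4, and 8 give a score of $(2 \cdot 4 \cdot 8)^{1/3} = 4$; using a geometric rather than an arithmetic mean keeps one unusually fast program from dominating the result.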
Cloud
Measuring and comparing the provisioning, compute, storage, and network resources of IaaS cloud platforms.
- SPEC Cloud IaaS 2018
- SPEC Cloud IaaS 2016
CPU
Latest | Will be retired | Have been retired
---|---|---
SPEC CPU 2017 | SPEC CPU2006 | SPEC CPU92, SPEC CPU95, SPEC CPU2000
Measuring and comparing combined performance of CPU, memory and compiler.
- SPEC CPU2006 contains two suites:
- CINT2006 ("SPECint") - testing integer arithmetic, with programs such as compilers, interpreters, word processors, chess programs etc.
- CFP2006 ("SPECfp") - testing floating point performance, with physical simulations, 3D graphics, image processing, computational chemistry etc.
- The SPEC CPU 2017 package contains four suites:
- The SPECspeed 2017 Integer and SPECspeed 2017 Floating Point suites compare the time for a computer to complete single tasks.
- The SPECrate 2017 Integer and SPECrate 2017 Floating Point suites measure throughput, that is, work per unit of time (see the sketch after this list).
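As a rough numerical illustration of the two metric families, the C sketch below computes a speed-style ratio (elapsed time for one copy versus a reference) and a rate-style ratio (throughput of several concurrent copies). All run times, the copy count, and variable names are hypothetical; this is not SPEC's published tooling.

```c
/* Hypothetical illustration of speed-style vs. rate-style metrics.
 * All figures are invented; SPEC's real suites measure actual workloads. */
#include <stdio.h>

int main(void) {
    double ref_seconds = 1000.0; /* assumed reference-machine run time      */
    double one_copy    = 250.0;  /* assumed measured time for a single copy */
    int    copies      = 8;      /* concurrent copies in the throughput run */
    double all_copies  = 400.0;  /* assumed elapsed time for all copies     */

    /* Speed-style metric: how much faster one task completes than on the
     * reference machine. */
    double speed_ratio = ref_seconds / one_copy;

    /* Rate-style metric: work per unit time, credited for every copy that
     * ran concurrently. */
    double rate_ratio = copies * ref_seconds / all_copies;

    printf("speed-style ratio: %.2f\n", speed_ratio); /* prints 4.00  */
    printf("rate-style ratio:  %.2f\n", rate_ratio);  /* prints 20.00 */
    return 0;
}
```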
Graphics and Workstation Performance
Measuring the performance of an OpenGL 3D graphics system with various rendering tasks drawn from several popular 3D-intensive real applications.
Benchmark Suite | Current Release
---|---
SPECviewperf | SPECviewperf 2020
SPECwpc | SPECwpc v2.1
SPECapc℠ for 3ds Max™ | SPECapc℠ for 3ds Max™ 2015
SPECapc℠ for Maya | SPECapc℠ for Maya 2017
SPECapc℠ for PTC Creo | SPECapc℠ for PTC Creo 3.0
SPECapc℠ for Siemens NX | SPECapc℠ for Siemens NX 9.0 and 10.0
SPECapc℠ for SolidWorks | SPECapc℠ for SolidWorks 2017
High Performance Computing, OpenMP, MPI, OpenACC, OpenCL
Benchmark Suite | Current Supported | Have been retired
---|---|---
HPC | (none) | SPEC HPC96, SPEC HPC2002
OMP | SPEC OMP2012 | SPEC OMP2001
MPI | SPEC MPI2007 | (none)
ACCEL | SPEC ACCEL | (none)
OMP
SPEC OMP was the first suite for evaluating performance with OpenMP applications; it measures the performance of shared-memory multiprocessor (SMP, i.e. UMA) systems.
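For illustration only, the fragment below shows the kind of shared-memory OpenMP parallelism such a suite exercises (a parallel loop with a reduction); it is a minimal sketch, not code from SPEC OMP itself.

```c
/* Minimal OpenMP sketch (not from SPEC OMP): threads in one shared
 * address space split a loop and combine partial sums. Build with an
 * OpenMP-enabled compiler, e.g. gcc -fopenmp. */
#include <stdio.h>
#include <omp.h>

#define N 1000000

int main(void) {
    static double a[N];
    double sum = 0.0;

    /* Iterations are divided among threads; reduction(+:sum) gives each
     * thread a private partial sum, combined when the loop ends. */
    #pragma omp parallel for reduction(+:sum)
    for (int i = 0; i < N; i++) {
        a[i] = 0.5 * (double)i;
        sum += a[i];
    }

    printf("max threads: %d, sum: %.0f\n", omp_get_max_threads(), sum);
    return 0;
}
```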
Java Client/Server
Benchmark Suite | Latest | Will be retired | Have been retired
---|---|---|---
jAppServer | (none) | (none) | SPECjAppServer2001, SPECjAppServer2002, SPECjAppServer2004
JBB | SPECjbb2015 | (none) | SPECjbb2000, SPECjbb2005
jEnterprise | SPECjEnterprise2010 | (none) | (none)
JMS | SPECjms2007 | (none) | (none)
JVM | SPECjvm2008 | (none) | SPECjvm98
JBB
SPECjbb evaluates the performance of server-side Java by emulating a three-tier client/server system (with emphasis on the middle tier).
jEnterprise
A multi-tier benchmark for measuring the performance of Java 2 Enterprise Edition (J2EE) technology-based application servers.
Mail Servers
Current Supported | Have been retired
---|---
(none) | SPECmail2001, SPECmail2008, SPECmail2009
Storage
Current Supported | Have been retired
---|---
SPEC SFS2014 | SFS93 (LADDIS), SFS97, SFS97_R1, SFS2008
SPEC SFS measures file server throughput and response time, supporting both NFS and SMB protocol access.
Power
Current Supported | Have been retired
---|---
SPECpower_ssj2008 | (none)
Virtualization
Current Supported | Have been retired
---|---
SPECvirt_sc2013 | SPECvirt_sc2010
Web Servers
Current Supported | Have been retired
---|---
(none) | SPECweb96, SPECweb99, SPECweb99_SSL, SPECweb2005, SPECweb2009
SPEC Tools
- Server Efficiency Rating Tool (SERT). Intended to measure server efficiency, initially as part of the second generation of the US Environmental Protection Agency (EPA) ENERGY STAR for Computer Servers program.
- SPEC Chauffeur WDK Tool. Designed to simplify the development of workloads for measuring both energy efficiency and performance.
- PTDaemon. The SPEC PTDaemon software is used to control power analyzers in benchmarks which contain a power measurement component.
Benchmark Search Program
- SPEC CPUv6: the CPU Search Program encouraged people outside of SPEC to help locate applications that could be used in the next CPU-intensive benchmark suite, at the time designated SPEC CPUv6. The program is now obsolete.
Retired Benchmarks (No Successor)
- SPEC SDM91
- SPECsip_infrastructure2011 - the benchmark is still available for purchase, but no additional result submissions are being accepted and support is no longer offered.
Retired Benchmarks (No Longer Documented)
- SPECapc℠ for Lightwave 3D 9.6, performance evaluation software for systems running NewTek LightWave 3D v9.6 software.
- SPEC 2001
- SPEC CPU89
Portability
SPEC benchmarks are written in a portable programming language (usually C, C++, Java, or Fortran), and interested parties may compile the code using whatever compiler they prefer for their platform, but may not change the code. Manufacturers have been known to optimize their compilers to improve performance of the various SPEC benchmarks; SPEC has rules that attempt to limit such optimizations.
Licensing
In order to use a benchmark, a license has to be purchased from SPEC; the costs vary from test to test with a typical range from several hundred to several thousand dollars. This pay-for-license model might seem to be in violation of the GPL as the benchmarks include software such as GCC that is licensed by the GPL. However, the GPL does not require software to be distributed for free, only that recipients be allowed to redistribute any GPLed software that they receive; the license agreement for SPEC specifically exempts items that are under "licenses that require free distribution", and the files themselves are placed in a separate part of the overall software package.
Culture
SPEC attempts to create an environment where arguments are settled by appeal to notions of technical credibility, representativeness, or the "level playing field". SPEC representatives are typically engineers with expertise in the areas being benchmarked. Benchmarks include "run rules", which describe the conditions of measurement and documentation requirements. Results that are published on SPEC's website undergo a peer review by members' performance engineers.
References
- "SPEC Frequently Asked Questions". Retrieved 15 March 2010.
- "The SPEC Organization". Retrieved 15 March 2010.
- "SPEC Membership". Retrieved 15 March 2010.
- Kant, Krishna (1992). Introduction to Computer System Performance Evaluation. New York: McGraw-Hill Inc. pp. 16–17. ISBN 0-07-033586-9.