Collective Knowledge (software)
The Collective Knowledge (CK) project is an open-source framework and repository to enable collaborative, reproducible and sustainable research and development of complex computational systems.[1][2] CK is a small, portable, customizable and decentralized infrastructure helping researchers and practitioners:
- share their code, data and models as reusable Python components and automation actions[3] with a unified JSON API, JSON meta-information, and unique IDs (UIDs) following the FAIR principles[1]
- assemble portable workflows from shared components (for example, for multi-objective autotuning and design space exploration[4])
- automate, crowdsource and reproduce benchmarking of complex computational systems[5]
- unify predictive analytics (e.g., scikit-learn, R, deep neural networks)
- enable reproducible and interactive papers[6]
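The shared components described above follow a common dict-in/dict-out convention: each automation action receives a JSON-compatible dictionary and returns one, so components can be chained into workflows regardless of where they come from. The following is a minimal illustrative sketch of that pattern, not the actual CK implementation; all function and field names here are hypothetical.

```python
import json
import uuid

# Illustrative sketch (not real CK code) of CK's dict-in/dict-out convention:
# every automation action takes a JSON-compatible dict and returns a dict
# whose 'return' key is 0 on success, mirroring a unified JSON API.

COMPONENTS = {}  # registry: UID -> JSON meta-information

def add_component(i):
    """Register a component with JSON meta-information and a generated UID."""
    uid = uuid.uuid4().hex[:16]
    COMPONENTS[uid] = {'name': i['name'], 'tags': i.get('tags', [])}
    return {'return': 0, 'uid': uid}

def search_components(i):
    """Find components whose meta-information contains all requested tags."""
    tags = set(i.get('tags', []))
    hits = [uid for uid, meta in COMPONENTS.items()
            if tags <= set(meta['tags'])]
    return {'return': 0, 'lst': hits}

r = add_component({'name': 'image-classification', 'tags': ['ml', 'demo']})
assert r['return'] == 0
r = search_components({'tags': ['ml']})
print(json.dumps({'found': len(r['lst'])}))
```

Because every action exchanges plain JSON, the same components can be driven from the command line, from Python, or over the web without per-component glue code.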
| Developer(s) | Grigori Fursin and the cTuning foundation |
| --- | --- |
| Initial release | 2014 |
| Stable release | 1.55.0 / November 17, 2020 |
| Written in | Python |
| Operating system | Linux, Mac OS X, Microsoft Windows, Android |
| Type | Knowledge management, data management, artifact evaluation, package management, scientific workflow system, DevOps, continuous integration, reproducibility |
| License | 3-clause BSD License |
| Website | GitHub |
Notable usages
- ARM uses CK to accelerate computer engineering[7][2][8]
- The Association for Computing Machinery has evaluated CK for possible integration with the ACM Digital Library, in an effort sponsored by the Sloan Foundation[9]
- Several ACM-sponsored conferences use CK for the Artifact Evaluation process[10]
- Imperial College London uses CK to automate and crowdsource compiler bug detection[11]
- Researchers from the University of Cambridge used CK to help the community reproduce results of their publication in the International Symposium on Code Generation and Optimization (CGO'17) during Artifact Evaluation[12]
- General Motors (USA) uses CK to crowd-benchmark convolutional neural network optimizations[13][14]
- The Raspberry Pi Foundation and the cTuning foundation released a CK workflow with a reproducible "live" paper to enable collaborative research into multi-objective autotuning and machine learning techniques[4]
- IBM uses CK to reproduce quantum computing results published in Nature[15]
- CK is used to automate the MLPerf benchmark[16]
Portable package manager for portable workflows
CK includes an integrated cross-platform package manager that uses Python scripts, a JSON API and JSON meta-descriptions to automatically rebuild the software environment required to run a given research workflow on a user's machine.[17]
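The core idea of such a package manager is that each package carries a JSON meta-description of its dependencies and the environment it provides, so the full environment for a workflow can be resolved automatically. The sketch below illustrates this style of dependency resolution under assumed, simplified meta-descriptions; the package names and fields are hypothetical, not CK's actual schema.

```python
import json

# Hypothetical JSON meta-descriptions: each package lists its dependencies
# and the environment variables it contributes once installed.
PACKAGES = {
    'compiler-gcc':   {'deps': [], 'env': {'CC': 'gcc'}},
    'lib-blas':       {'deps': ['compiler-gcc'], 'env': {'BLAS': '/opt/blas'}},
    'workflow-bench': {'deps': ['lib-blas'], 'env': {}},
}

def resolve(name, seen=None):
    """Recursively resolve dependencies and merge their environment settings.

    Dependencies are resolved before the package itself, so a package can
    override variables set by its dependencies; 'seen' guards against cycles.
    """
    if seen is None:
        seen = set()
    if name in seen:
        return {}
    seen.add(name)
    env = {}
    for dep in PACKAGES[name]['deps']:
        env.update(resolve(dep, seen))
    env.update(PACKAGES[name]['env'])
    return env

print(json.dumps(resolve('workflow-bench'), sort_keys=True))
```

Because the resolution works only from declarative meta-descriptions, the same workflow can be rebuilt on a different machine by re-running the resolver against that machine's installed packages.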
Reproducibility of experiments
CK enables reproducibility of experimental results through community involvement, similar to the community-driven validation found in Wikipedia and experimental physics. Whenever a new workflow with all its components is shared via GitHub, anyone can try it on a different machine, with a different environment and with slightly different choices (compilers, libraries, data sets). Whenever unexpected or wrong behavior is encountered, the community explains it, fixes the components and shares them back, as described in [4].
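This community validation loop can be thought of as comparing each reproduced result against a shared reference and flagging deviations for investigation. The sketch below shows one simple way to express that check; the workload, tolerance and field names are illustrative assumptions, not part of CK.

```python
# Illustrative sketch of cross-checking a reproduced experiment against a
# shared reference result; a large deviation signals unexpected behavior
# that the community would then investigate. All names are hypothetical.
REFERENCE = {'workload': 'matmul', 'time_s': 1.00}

def check_reproduction(result, tolerance=0.25):
    """Flag a reproduced result that deviates beyond a relative tolerance."""
    ref = REFERENCE['time_s']
    deviation = abs(result['time_s'] - ref) / ref
    return {'ok': deviation <= tolerance, 'deviation': round(deviation, 2)}

runs = [
    {'compiler': 'gcc',   'time_s': 1.10},  # within tolerance: reproduced
    {'compiler': 'clang', 'time_s': 2.00},  # unexpected: needs explanation
]
for run in runs:
    print(run['compiler'], check_reproduction(run))
```

In practice the "reference" is the shared workflow's published result, and each reproduction attempt contributes a new data point with its own environment description attached.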
References
- Fursin, Grigori (October 2020). Collective Knowledge: organizing research projects as a database of reusable components and portable workflows with common APIs (PDF). Philosophical Transactions of the Royal Society. Retrieved 22 October 2020.
- Fursin, Grigori; Anton Lokhmotov; Ed Plowman (January 2016). Collective Knowledge: Towards R&D Sustainability. Proceedings of the 2016 Design, Automation & Test in Europe Conference & Exhibition (DATE). Retrieved 14 September 2016.
- reusable CK components and actions to automate common research tasks
- Grigori Fursin, Anton Lokhmotov, Dmitry Savenko, Eben Upton. A Collective Knowledge workflow for collaborative research into multi-objective autotuning and machine learning techniques, arXiv:1801.08024, January 2018 (arXiv link, interactive report with reproducible experiments)
- Online repository with reproduced results
- Index of reproduced papers
- HiPEAC info (page 17) (PDF), January 2016
- Ed Plowman; Grigori Fursin, ARM TechCon'16 presentation "Know Your Workloads: Design more efficient systems!"
- Reproducibility of Results in the ACM Digital Library
- Artifact Evaluation for systems and machine learning conferences
- EU TETRACOM project to combine CK and CLSmith (PDF), archived from the original (PDF) on 2017-03-05, retrieved 2016-09-15
- Artifact Evaluation Reproduction for "Software Prefetching for Indirect Memory Accesses", CGO 2017, using CK
- GitHub development website for CK-powered Caffe
- Open-source Android application to let the community participate in collaborative benchmarking and optimization of various DNN libraries and models
- Reproducing Quantum results from Nature – how hard could it be?
- MLPerf crowd-benchmarking
- List of shared CK packages
External links
- Development site:
- Documentation:
- Public repository with crowdsourced experiments:
- International Workshop on Adaptive Self-tuning Computing Systems (ADAPT) uses CK to enable public reviewing of publications and artifacts via Reddit: