Serverless computing
Serverless computing is a cloud computing execution model in which the cloud provider allocates machine resources on demand, managing the servers on behalf of its customers. Serverless computing does not hold resources in volatile memory; computing is instead done in short bursts with the results persisted to storage. When an app is not in use, no computing resources are allocated to it. Pricing is based on the actual amount of resources consumed by an application.[1] It can be a form of utility computing. "Serverless" is a misnomer in the sense that servers are still used by cloud service providers to execute code for developers. However, developers of serverless applications are not concerned with capacity planning, configuration, management, maintenance, operation, or scaling of containers, VMs, or physical servers.
Serverless computing can simplify the process of deploying code into production. Serverless code can be used in conjunction with code deployed in traditional styles, such as microservices or monoliths. Alternatively, applications can be written to be purely serverless and use no provisioned servers at all.[2] This should not be confused with computing or networking models that do not require an actual server to function, such as peer-to-peer (P2P).
Serverless runtimes
Serverless vendors offer compute runtimes, also known as function as a service (FaaS) platforms, which execute application logic but do not store data. The first pay-as-you-go code execution platform was Zimki, released in 2006, but it was not commercially successful.[3] In 2008, Google released Google App Engine, which featured metered billing for applications that used a custom Python framework, but could not execute arbitrary code.[4] PiCloud, released in 2010, offered FaaS support for Python.[5]
Kubeless and Fission are two open-source FaaS platforms that run on Kubernetes.
Google App Engine, introduced in 2008, was the first abstract serverless computing offering.[6] App Engine included HTTP functions with a 60-second timeout, and a blob store and data store with their own timeouts. No in-memory persistence was allowed. All operations had to be executed within these limits, but this allowed apps built on App Engine to scale near-infinitely; the platform supported early customers including Snapchat, as well as many external and internal Google apps. Language support was limited to Python, using native Python modules as well as a limited selection of Python modules written in C that were chosen by Google. Like later serverless platforms, App Engine also used pay-for-what-you-use billing.[7]
AWS Lambda, introduced by Amazon in 2014,[8] popularized the abstract serverless computing model. It is supported by a number of additional AWS serverless tools such as AWS Serverless Application Model (AWS SAM), Amazon CloudWatch, and others.
Google Cloud Platform created a second serverless offering, Google Cloud Functions, in 2016.[9]
IBM has offered IBM Cloud Functions in the public IBM Cloud since 2016.[10]
Microsoft Azure offers Azure Functions, available both in the Azure public cloud and on-premises via Azure Stack.[11]
Cloudflare has offered Cloudflare Workers since 2017.[12]
Serverless databases
Several serverless databases have emerged. These systems extend the serverless execution model to the relational database management system (RDBMS), eliminating the need to provision or scale virtualized or physical database hardware.
Nutanix offers a solution named Era, which turns an existing RDBMS such as Oracle, MariaDB, PostgreSQL, or Microsoft SQL Server into a serverless service.[13]
Amazon Aurora offers a serverless version of its databases, based on MySQL and PostgreSQL, providing on-demand, auto-scaling configurations.[14]
Azure Data Lake is a highly scalable data storage and analytics service hosted in Azure, Microsoft's public cloud. Azure Data Lake Analytics provides a distributed infrastructure that can dynamically allocate or de-allocate resources so that customers pay only for the services they use.
Firebase, owned by Google,[15] includes a hierarchical database and is available via fixed and pay-as-you-go plans.[16]
Advantages
Cost
Serverless can be more cost-effective than renting or purchasing a fixed quantity of servers,[17] which generally involves significant periods of underutilization or idle time.[1] It can even be more cost-efficient than provisioning an autoscaling group, due to more efficient bin-packing of the underlying machine resources.
This can be described as pay-as-you-go computing[17] or "bare-code" computing,[17] as charges are based solely on the time and memory allocated to run the code, without associated fees for idle time.[17]
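To illustrate the billing model, the sketch below compares a per-invocation charge based on allocated memory and execution time (GB-seconds) with an always-on virtual machine billed for the whole month. All prices and workload figures are hypothetical, chosen only to show the shape of the calculation, and are not quoted vendor rates.

```python
# Illustrative cost comparison between per-invocation (FaaS) billing and an
# always-on server. All figures below are hypothetical.

SECONDS_PER_MONTH = 30 * 24 * 3600

# Hypothetical FaaS pricing: charged per GB-second of allocated memory.
price_per_gb_second = 0.0000166   # assumed rate, not a quoted vendor price
memory_gb = 0.128                 # memory allocated to the function
avg_duration_s = 0.2              # average execution time per invocation
invocations_per_month = 1_000_000

faas_cost = invocations_per_month * avg_duration_s * memory_gb * price_per_gb_second

# Hypothetical always-on virtual machine billed for every hour of the month,
# regardless of how much of that time it spends idle.
vm_price_per_hour = 0.05          # assumed rate
vm_cost = vm_price_per_hour * (SECONDS_PER_MONTH / 3600)

print(f"FaaS (pay per use): ${faas_cost:.2f}/month")
print(f"Always-on VM:       ${vm_cost:.2f}/month")
```

With these assumed numbers the function-based workload costs well under a dollar per month, while the always-on machine costs the same whether it is busy or idle; the gap narrows as the workload approaches continuous utilization.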
Immediate cost benefits stem from the absence of operating-system costs, including licences, installation, dependencies, maintenance, support, and patching.[17]
Elasticity versus scalability
In addition, a serverless architecture means that developers and operators do not need to spend time setting up and tuning autoscaling policies or systems; the cloud provider is responsible for scaling the capacity to the demand.[1][11][17] As Google puts it: "from prototype to production to planet-scale."[17]
Because cloud-native systems inherently scale down as well as up, they are described as elastic rather than merely scalable.
Small teams of developers can run code themselves without depending on teams of infrastructure and support engineers; more developers are gaining DevOps skills, and the distinction between software developer and hardware engineer is blurring.[17]
Productivity
With function as a service, the units of code exposed to the outside world are simple, event-driven functions. This means that, typically, the programmer does not have to worry about multithreading or directly handling HTTP requests in their code, simplifying the task of back-end software development.
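A minimal sketch of such a function is shown below. The handler(event, context) signature follows the convention popularized by AWS Lambda, but the exact parameter names and the event structure (here, an HTTP-style request with a JSON body) vary between providers and are assumptions made for illustration.

```python
import json

def handler(event, context):
    """Entry point invoked by the FaaS platform for each event.

    The platform handles networking, concurrency, and scaling; this function
    only transforms an input event into a response. The event fields used
    here are illustrative and differ between providers and trigger types.
    """
    body = json.loads(event.get("body") or "{}")
    name = body.get("name", "world")

    # No explicit threading, connection handling, or server setup is needed:
    # the provider invokes this function once per request/event.
    return {
        "statusCode": 200,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps({"message": f"Hello, {name}!"}),
    }
```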
Disadvantages
Performance
Infrequently used serverless code may suffer from greater response latency than code that is continuously running on a dedicated server, virtual machine, or container. This is because, unlike with autoscaling, the cloud provider typically "spins down" the serverless code completely when it is not in use. This means that if the runtime (for example, the Java runtime) requires a significant amount of time to start up, that start-up time adds latency.
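Much of this "cold start" latency comes from work that happens before the first invocation: launching the language runtime, loading dependencies, and running module-level initialization. The hypothetical sketch below, written in the Lambda-style handler convention used above, separates that one-time setup from per-invocation work; on a "warm" instance only the handler body runs.

```python
import time

# Module-level code runs once per execution environment, when the platform
# "cold starts" a new instance. On real platforms this is where heavyweight
# imports, SDK clients, and connection pools are usually created so that
# later ("warm") invocations can reuse them.
_STARTED_AT = time.time()
_IS_COLD = True  # flips to False after the first invocation of this instance

def handler(event, context):
    """Per-invocation work; on a warm instance only this body runs."""
    global _IS_COLD
    was_cold = _IS_COLD
    _IS_COLD = False
    return {
        "cold_start": was_cold,
        "instance_uptime_seconds": round(time.time() - _STARTED_AT, 3),
    }
```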
Resource limits
Serverless computing is not suited to some computing workloads, such as high-performance computing, because of the resource limits imposed by cloud providers, and also because it would likely be cheaper to bulk-provision the number of servers believed to be required at any given point in time.
Monitoring and debugging
Diagnosing performance or excessive resource usage problems with serverless code may be more difficult than with traditional server code, because although entire functions can be timed,[2] there is typically no ability to dig into more detail by attaching profilers, debuggers or APM tools. Furthermore, the environment in which the code runs is typically not open source, so its performance characteristics cannot be precisely replicated in a local environment.
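Because external profilers and debuggers usually cannot be attached, observability is often added inside the function itself, for example by timing each invocation and emitting the result as a structured log line that the provider's logging service can collect. The decorator below is a minimal, hypothetical sketch of that approach, not a feature of any particular platform.

```python
import functools
import json
import logging
import time

logging.basicConfig(level=logging.INFO)
log = logging.getLogger(__name__)

def timed(fn):
    """Log the wall-clock duration of each invocation as structured JSON."""
    @functools.wraps(fn)
    def wrapper(event, context):
        start = time.perf_counter()
        try:
            return fn(event, context)
        finally:
            elapsed_ms = (time.perf_counter() - start) * 1000
            log.info(json.dumps({"function": fn.__name__,
                                 "duration_ms": round(elapsed_ms, 2)}))
    return wrapper

@timed
def handler(event, context):
    # Placeholder business logic.
    return {"statusCode": 200, "body": "ok"}
```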
Security
Serverless is sometimes mistakenly considered more secure than traditional architectures. While this is true to some extent, because OS vulnerabilities are handled by the cloud provider, the total attack surface is significantly larger: the application has many more components than a traditional architecture, and each component is an entry point to the serverless application. Moreover, the security solutions customers previously used to protect their cloud workloads become irrelevant, as customers cannot control or install anything at the endpoint and network level, such as an intrusion detection/prevention system (IDS/IPS).[18]
This is intensified by the monoculture properties of the entire server network: a single flaw can be exploited globally. According to Protego, the "solution to secure serverless apps is close partnership between developers, DevOps, and AppSec, also known as DevSecOps. Find the balance where developers don't own security, but they aren't absolved from responsibility either. Take steps to make it everyone's problem. Create cross-functional teams and work towards tight integration between security specialists and development teams. Collaborate so your organization can resolve security risks at the speed of serverless."[19]
Privacy
Many serverless function environments are based on proprietary public cloud environments. Here, privacy implications such as shared resources and access by external employees have to be considered. However, serverless computing can also be done in a private cloud environment or even on-premises, using, for example, the Kubernetes platform. This gives companies full control over privacy mechanisms, just as with hosting in traditional server setups.
Standards
Serverless computing is covered by the International Data Center Authority (IDCA) in its Framework AE360. However, portability can be an issue when moving business logic from one public cloud to another, the problem for which the Docker solution was created. The Cloud Native Computing Foundation (CNCF) is also working with Oracle to develop a specification.[20]
References
- Miller, Ron (24 Nov 2015). "AWS Lambda Makes Serverless Applications A Reality". TechCrunch. Retrieved 10 July 2016.
- MSV, Janakiram (16 July 2015). "PaaS Vendors, Watch Out! Amazon Is All Set To Disrupt the Market". Retrieved 10 July 2016.
- Williams, Christopher. "Fotango to smother Zimki on Christmas Eve". Retrieved 2017-06-11.
- "Python Runtime Environment | App Engine standard environment for Python | Google Cloud Platform". Google Cloud Platform. Retrieved 2017-06-11.
- "PiCloud Launches Serverless Computing Platform To The Public". TechCrunch. Retrieved 2018-12-17.
- Evans, Jon. TechCrunch. https://techcrunch.com/2015/04/11/whatever-happened-to-paas/. Retrieved 17 December 2020.
- Kincaid, Jason. "Google App Engine Offers Pricing Plan Beyond Quotas; Grab A Free I/O Ticket To Celebrate". TechCrunch. Retrieved 17 December 2020.
- Miller, Ron (13 Nov 2014). "Amazon Launches Lambda, An Event-Driven Compute Service". TechCrunch. Retrieved 10 July 2016.
- Novet, Jordan (9 February 2016). "Google has quietly launched its answer to AWS Lambda". VentureBeat. Retrieved 10 July 2016.
- Zimmerman, Mike (23 February 2016). "IBM Unveils Fast, Open Alternative to Event-Driven Programming".
- Miller, Ron (31 March 2016). "Microsoft answers AWS Lambda's event-triggered serverless apps with Azure Functions". TechCrunch. Retrieved 10 July 2016.
- Varda, Kenton (29 September 2017). "Introducing Cloudflare Workers: Run JavaScript Service Workers at the Edge". Cloudflare.
- "Era". Nutanix. https://www.nutanix.com/products/era/.
- "Amazon Aurora Serverless - On-demand, Auto-scaling Relational Database - AWS". Amazon Web Services, Inc. Retrieved 2019-08-08.
- Lardinois, Frederic. "Google Acquires Firebase To Help Developers Build Better Real-Time Apps". TechCrunch. Retrieved 2017-06-11.
- Darrow, Barb (2013-06-20). "Firebase gets $5.6M to launch its paid product and fire up its base". gigaom.com. Retrieved 2017-06-11.
- Jamieson, Frazer (4 September 2017). "Losing the server? Everybody is talking about serverless architecture".
- https://www.puresec.io/serverless-security-top-12-csa-puresec
- Solow, Hillel (2019-02-05). "Serverless Computing Security Risks & Challenges". protego.io. Retrieved 2019-03-20.
- "CNCF, Oracle Boost Serverless Standardization Efforts". SDxCentral. Retrieved 2018-11-24.
- Bashir, Faizan (2018-05-28). "What is Serverless Architecture? What are its Pros and Cons?". Hacker Noon. Retrieved 2019-04-03.
- "What Is Serverless? Here's a Plain Answer!". Squadex. 2019-01-17. Retrieved 2019-04-03.
Further reading
- Roberts, Mike (25 July 2016). "Serverless Architectures". MartinFowler.com. Retrieved 30 July 2016.
- Jamieson, Frazer (4 September 2017). "Losing the server? Everybody is talking about serverless architecture". BCS, the Chartered Institute for IT. Retrieved 7 November 2017.