Workflow engines for clouds offer a compelling solution for many corporations. A workflow presents a simplified view of the otherwise complex execution and management of applications. Processing and managing big data requires distributed server farms and data centers, and the emergence of virtualization technologies, together with growing cloud adoption, has driven a shift to a new paradigm in distributed computing, one that achieves scalability by drawing on existing pooled resources.
Cloud services have opened new possibilities for vendors and corporations. Infrastructure as a Service (IaaS) virtualization allows vendors to offer virtual hardware for compute-intensive workflow applications. Platform as a Service (PaaS) clouds expose a high-level development and runtime environment for building and deploying applications on top of that infrastructure. Finally, Software as a Service (SaaS) solutions give corporations the flexibility to integrate hosted applications into their existing workflows.
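As a rough, hands-on illustration of the IaaS layer, the Python sketch below requests a single virtual machine to act as a workflow worker using the boto3 library. The choice of boto3, the image ID, the instance type, the region, and the tag are placeholders of my own, not anything prescribed by the text.

    # Illustrative only: provision one virtual machine on an IaaS cloud (AWS EC2 via boto3).
    # The image ID, instance type, and region are hypothetical placeholders.
    import boto3

    ec2 = boto3.client("ec2", region_name="us-east-1")

    response = ec2.run_instances(
        ImageId="ami-0123456789abcdef0",   # placeholder machine image
        InstanceType="c5.xlarge",          # compute-oriented node for workflow tasks
        MinCount=1,
        MaxCount=1,
        TagSpecifications=[{
            "ResourceType": "instance",
            "Tags": [{"Key": "purpose", "Value": "workflow-worker"}],
        }],
    )

    instance_id = response["Instances"][0]["InstanceId"]
    print("Requested workflow worker:", instance_id)

A PaaS or SaaS offering would hide even this level of detail behind a deployment or subscription interface, which is exactly the trade-off the three service models represent.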
Historically, scientific workflows have run on infrastructures such as the Open Science Grid and dedicated clusters, and existing workflow engines typically gain access to these shared grids through some form of research agreement. The Cloudbus Toolkit workflow engine is an example of scaling workflow applications onto clouds using market-oriented computing.
The primary benefit of moving to clouds is application scalability. The elastic nature of clouds allows the quantity and characteristics of resources to be adjusted while the application is running: resources scale up when demand is high and back down when demand falls. With this capability, services can readily meet Quality of Service (QoS) requirements for applications, something that was never truly available with traditional computing methods.
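The elasticity argument can be made concrete with a toy scaling policy. The sketch below is not from the text; it assumes a hypothetical monitor that reports the number of queued tasks and a hypothetical provisioner hook that can add or release workers, and it sizes the worker pool so that each worker carries roughly a target backlog.

    # Minimal sketch of a demand-driven scaling policy (illustrative, not from the source).
    # get_queued_tasks() and set_worker_count() are hypothetical hooks into a
    # monitoring service and an IaaS provisioner.
    import math

    TARGET_TASKS_PER_WORKER = 10   # QoS target: keep the per-worker backlog bounded
    MIN_WORKERS, MAX_WORKERS = 1, 50

    def desired_workers(queued_tasks: int) -> int:
        """Scale up under heavy demand, back down when the queue drains."""
        needed = math.ceil(queued_tasks / TARGET_TASKS_PER_WORKER)
        return max(MIN_WORKERS, min(MAX_WORKERS, needed))

    def autoscale(get_queued_tasks, set_worker_count):
        backlog = get_queued_tasks()
        set_worker_count(desired_workers(backlog))

Running a policy like this on a schedule is, in miniature, what lets a cloud-hosted workflow hold its QoS targets while paying only for the capacity the current demand requires.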
With the change in dynamics from traditional computing to the cloud, Service Level Agreements (SLAs) have become a primary focus for both service providers and corporate consumers. Competition among providers has driven SLAs to be drafted with great care, each offering specific niches and advantages to entice corporate customers. Cloud service providers also exploit economies of scale, providing computing, storage, and bandwidth at substantially lower cost.
The workflow system comprises the workflow engine, a resource broker, and plug-ins for communicating with various platforms. To illustrate this architectural style, consider scientific applications, which consist of tasks, data elements, control sequences, and data dependencies. The components in the core engine manage the execution of workflows, the plug-ins support workflow execution in different environments and platforms, and the resource brokers at the bottom layer interface with grids, clusters, and clouds.
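To make that layering concrete, here is a minimal sketch of my own (the class names and interfaces are illustrative, not the book's design): a workflow is a set of tasks with data dependencies, a plug-in adapts to a target platform, and a broker dispatches tasks to the plug-in once their dependencies are satisfied.

    # Minimal sketch of the engine / broker / plug-in layering described above.
    # The class names and interfaces are illustrative, not the book's API.
    from dataclasses import dataclass, field

    @dataclass
    class Task:
        name: str
        depends_on: set = field(default_factory=set)  # data dependencies

    class PlatformPlugin:
        """Adapter for a concrete execution platform (grid, cluster, or cloud)."""
        def submit(self, task: Task) -> None:
            print(f"submitting {task.name} to the platform")

    class ResourceBroker:
        """Dispatches tasks whose dependencies are satisfied to a platform plug-in."""
        def __init__(self, plugin: PlatformPlugin):
            self.plugin = plugin

        def run(self, tasks: list) -> None:
            done, pending = set(), list(tasks)
            while pending:
                ready = [t for t in pending if t.depends_on <= done]
                if not ready:
                    raise RuntimeError("unresolvable dependency or cycle")
                for t in ready:
                    self.plugin.submit(t)
                    done.add(t.name)
                    pending.remove(t)

    # Example: a three-task workflow with a simple dependency chain.
    broker = ResourceBroker(PlatformPlugin())
    broker.run([Task("extract"), Task("transform", {"extract"}), Task("load", {"transform"})])

Swapping the plug-in for a grid, cluster, or cloud back end is what lets the same engine and broker run the workflow on different infrastructures.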
The workflow system has great potential for future growth. Gartner placed the paradigm near the peak of its hype cycle in 2010, and while challenges remain, the potential for growth is enormous. Giants in the field such as Microsoft, Amazon, and Google own massive data centers to provide these services to corporate consumers. Eventually, consumers may form hybrid models, selecting pieces of different vendors' services to assemble their own workflow cloud. The current workflow model is just the first step toward offering customers a complete solution.
Reference
Buyya, R., Broberg, J., & Goscinski, A. (Eds.). (2011). Cloud computing: Principles and paradigms. Hoboken, NJ: John Wiley & Sons.