EPAM Cloud Orchestrator was created in 2012 as a cloud solution for the
EPAM Systems community. It appeared when redesigning the growing
infrastructure of more than 500 projects became a critical issue. After
a thorough analysis of the existing solutions, EPAM experts had to
admit that none of them fit the company's needs.
However, that was not a problem but a challenge: since no perfect
solution existed, EPAM created its own from scratch.
It took into account the specific requirements of EPAM development
teams, including the most popular platform services used in project
development.
Now we are bringing the Orchestrator to a new level: we are introducing
a new framework that allows users, on the one hand, to select the
Orchestrator components they need and, on the other hand, to contribute
to its development by adding their own code. Yes, indeed, the
Orchestrator goes open-source. And not only that, it goes beyond EPAM.
The new Orchestrator, to be released under the name Maestro 3, will
utilize Amazon resources in all its components. Here we are taking
advantage of our long-term integration with Amazon, during which we
have had ample opportunity to test what Amazon has to offer and to find
ways to adapt Amazon services to the Orchestrator's requirements. At
the same time, infrastructure control will be performed by the native
Orchestrator tools, as usual.
All Maestro 3 components are designed as standalone units which can be
used separately, including for purposes outside the EPAM Cloud.
Authentication is done via Active Directory. The user can access AWS
services directly using their domain credentials. In this case, the
user is authenticated via the SAML protocol and granted access within a
pre-configured IAM role. This type of authentication requires no
interaction with Maestro 3 whatsoever. Access to Maestro 3 is based on
the user's domain credentials as well; however, in this case the
OAuth 2.0 protocol is used.
Maestro 3 includes a mechanism allowing authentication for any
enterprise with an Active Directory.
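To make the direct AWS access path more tangible, below is a minimal
sketch of SAML-based federation using boto3. The role ARN, the identity
provider ARN and the way the SAML assertion is obtained from Active
Directory are placeholders for illustration, not actual Maestro 3
configuration.

    # Minimal sketch: exchange an AD/ADFS SAML assertion for temporary AWS
    # credentials scoped to a pre-configured IAM role. All ARNs are placeholders.
    import boto3

    def session_from_saml(saml_assertion_b64):
        sts = boto3.client("sts")
        response = sts.assume_role_with_saml(
            RoleArn="arn:aws:iam::123456789012:role/PreconfiguredUserRole",
            PrincipalArn="arn:aws:iam::123456789012:saml-provider/CorporateAD",
            SAMLAssertion=saml_assertion_b64,  # obtained from the AD/ADFS endpoint
        )
        creds = response["Credentials"]
        # The returned credentials are temporary and limited to the IAM role above
        return boto3.session.Session(
            aws_access_key_id=creds["AccessKeyId"],
            aws_secret_access_key=creds["SecretAccessKey"],
            aws_session_token=creds["SessionToken"],
        )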
Deployment Framework
Remember, we said "open-source"? Open-source means that the product
code is publicly shared and that anyone can contribute to it. The same
approach will be applied in Maestro 3,
and for that we have developed a special framework.
The new framework, called the Deployment Framework, will be fully based
on Amazon, using native AWS tools and services.
In implementing its Deployment Framework, EPAM Cloud uses the following
AWS services:
- AWS Lambda - a serverless AWS service where custom code can be
uploaded and executed on the Amazon infrastructure. Code is uploaded as
so-called Lambda functions, which are triggered by certain events.
Lambda functions are associated with AWS resources, for example, an
Amazon DynamoDB table or an Amazon S3 bucket, and changes to the
associated resource trigger the Lambda function (a minimal example of
such a function is given after this list). The required computing
resources are allocated and provided automatically.
- Amazon DynamoDB - a NoSQL database service where you can create
tables to serve as the source of triggers for Lambda functions. With
DynamoDB, you can create tables of any size and then easily scale them
up or down without affecting performance, as capacity is provisioned
automatically by internal AWS tools.
- Amazon S3 - the native Amazon cloud storage. Like any storage, S3
buckets are used to store uploaded objects and, being AWS resources,
can trigger Lambda functions when their content changes.
- Amazon API Gateway - a native Amazon service where each user can
easily and quickly create and manage their own RESTful APIs. API
Gateway can also trigger a Lambda function that refers to the
corresponding back-end service, such as Amazon DynamoDB, Amazon S3,
etc.
- Amazon CloudWatch - an Amazon monitoring service that collects
metrics from AWS resources and logs changes occurring to them.
CloudWatch enables automatic responses to changes in AWS resources, as
well as scheduled actions.
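To make this trigger mechanism more concrete, here is a minimal sketch
of a Python Lambda function reacting to changes in an associated
DynamoDB table (delivered via its stream). The bucket name and the idea
of copying changed items to S3 are illustrative assumptions, not actual
Maestro 3 code.

    # Minimal sketch of a Lambda handler triggered by a DynamoDB stream.
    # The destination bucket is hypothetical.
    import json
    import boto3

    s3 = boto3.client("s3")

    def handler(event, context):
        # Each record describes an INSERT/MODIFY/REMOVE on the source table
        records = event.get("Records", [])
        for record in records:
            if record.get("eventName") in ("INSERT", "MODIFY"):
                new_image = record["dynamodb"].get("NewImage", {})
                # Persist the changed item to S3 as a JSON object
                s3.put_object(
                    Bucket="example-change-log-bucket",
                    Key="changes/{}.json".format(record["eventID"]),
                    Body=json.dumps(new_image),
                )
        return {"processed": len(records)}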
The Deployment Framework will be able to detect new code uploaded by
contributors in the form of Lambda functions and create the
corresponding AWS resources. For example, if a Lambda function is
associated with a DynamoDB table, the Deployment Framework will create
such a table, which will then trigger the Lambda function.
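As a rough illustration of what such automatic resource creation could
look like, assuming the framework calls AWS APIs directly via boto3
(the real implementation may differ), the sketch below creates a
DynamoDB table with a stream and wires it to a contributed Lambda
function. All names and capacity values are hypothetical.

    # Rough sketch: create the table a contributed function is associated with
    # and connect its stream to that function. Names are hypothetical.
    import boto3

    dynamodb = boto3.client("dynamodb")
    lambda_client = boto3.client("lambda")

    TABLE = "contributed-function-data"

    # Create the table with a stream enabled so it can trigger the function
    dynamodb.create_table(
        TableName=TABLE,
        AttributeDefinitions=[{"AttributeName": "id", "AttributeType": "S"}],
        KeySchema=[{"AttributeName": "id", "KeyType": "HASH"}],
        ProvisionedThroughput={"ReadCapacityUnits": 5, "WriteCapacityUnits": 5},
        StreamSpecification={"StreamEnabled": True, "StreamViewType": "NEW_IMAGE"},
    )
    dynamodb.get_waiter("table_exists").wait(TableName=TABLE)

    # Point the table's stream at the contributed Lambda function
    stream_arn = dynamodb.describe_table(TableName=TABLE)["Table"]["LatestStreamArn"]
    lambda_client.create_event_source_mapping(
        EventSourceArn=stream_arn,
        FunctionName="contributed-function",
        StartingPosition="LATEST",
    )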
By default, the Deployment Framework accepts code written in Java or
Python; however, any other programming language may also be acceptable
if the contributor keeps to a certain set of rules, such as structure
and file naming conventions, so that a language-agnostic .json
configuration file can be generated.
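For illustration only, generating such a language-agnostic .json
configuration file from Python could look like the sketch below. The
field names and layout are our assumptions, not the real Deployment
Framework schema.

    # Hypothetical deployment descriptor; the schema shown here is invented
    # purely to illustrate the idea of a language-agnostic .json config.
    import json

    descriptor = {
        "function_name": "contributed-function",
        "runtime": "python",
        "handler": "handler.handler",
        "resources": [
            {"type": "dynamodb_table", "name": "contributed-function-data"},
            {"type": "s3_bucket", "name": "contributed-function-artifacts"},
        ],
    }

    with open("deployment.json", "w") as f:
        json.dump(descriptor, f, indent=2)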
Code contributors can choose to integrate their code with the existing
Maestro 3 functionality only for their purposes or to make it generally
available.
The Deployment Framework processes uploaded code according to these
rules, and if the contributed code does not match them, it is ignored.
Front-End
The front-end is written in Angular 2.0, an open-source
JavaScript-based framework for web application development. Angular 2.0
was chosen as a flexible framework supporting a component-based
approach: if one of the components fails, this does not affect the rest
of the system.
As the previous versions of EPAM Orchestrator were written in Angular
1.0, it was only logical to continue with Angular 2.0: this approach
could speed up application development by reusing already developed
elements with certain adaptations. In addition, Angular 2.0 allows
creating dynamic items, for example, by supporting the drag-and-drop
functionality which will be featured in the Maestro 3 UI.
The mobile Maestro 3 application, written in Angular 2.0, can be used
on Android, iOS and Windows without any special adaptations. It is also
worth mentioning that about 70% of single-page applications currently
on the market are written in Angular 1.0. Angular 2.0 supports backward
compatibility; therefore, Maestro 3 components can be easily integrated
with other web applications.
Finally, let's look briefly "under the hood" of the Maestro 3 user
interface. In Maestro 3, the front-end involves Amazon CloudFront, a
content delivery network which receives the request from the browser
and delivers the HTML stored in a dedicated S3 bucket. This HTML is a
single-page application written in Angular 2.0 which represents the
Orchestrator UI. When another request is made from the UI, only the
affected segment of the page is updated, which improves response time
and efficiency.
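As a small illustration of this setup, the sketch below publishes the
single-page application's entry point to the S3 bucket that serves as
the CloudFront origin. The bucket and file names are hypothetical.

    # Sketch: upload the SPA entry point to the (hypothetical) CloudFront
    # origin bucket with the correct content type.
    import boto3

    s3 = boto3.client("s3")
    s3.upload_file(
        "dist/index.html",
        "maestro3-ui-origin",              # hypothetical origin bucket
        "index.html",
        ExtraArgs={"ContentType": "text/html", "CacheControl": "no-cache"},
    )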
Billing
Maestro 3 supports AWS Consolidated Billing, in which you get a single
bill for all your accounts. The billing data is obtained directly from
AWS in the form of .csv files.
The billing data is updated on an hourly basis; therefore, you can see
the current cost of your infrastructure at any time.
Similarly to the EPAM Cloud, Maestro 3 receives bills from Amazon and
sends them to the customers "as is", without adding or removing
anything. However, there is an option of making
adjustments to the bills, for example, extending credits to particular
accounts for certain amounts.
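As a simplified sketch, processing such a billing .csv and applying
per-account credits could look like this. The column names and the
credits table are assumptions made for illustration, not the actual
Maestro 3 billing schema.

    # Simplified sketch: aggregate per-account costs from a billing .csv and
    # apply optional credits. Column names and credit values are assumptions.
    import csv
    from collections import defaultdict

    CREDITS = {"111111111111": 50.0}   # hypothetical per-account credits, USD

    def summarize_bill(csv_path):
        totals = defaultdict(float)
        with open(csv_path, newline="") as f:
            for row in csv.DictReader(f):
                account = row.get("LinkedAccountId", "")
                totals[account] += float(row.get("UnBlendedCost") or 0.0)
        # Pass the Amazon numbers through "as is", then apply optional credits
        return {acc: total - CREDITS.get(acc, 0.0) for acc, total in totals.items()}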
The billing engine is based on the same concepts and mechanisms as were
applied in EPAM Orchestrator. Virtual resources are billed according to
timelines - periods of time during which the resource is in a certain
state. Timeline beginnings and endings are marked with audit events,
which in Maestro 3 are supplied in the CADF (Cloud Auditing Data
Federation) format. A virtual machine start generates an audit event,
and its stop generates another. The period between these events is a
timeline, which is billed in the same way for its entire duration.
The same principle applies to all other virtual resources, such as
checkpoints or volumes.
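The timeline idea can be illustrated with a short sketch that pairs
start and stop audit events and bills the whole interval at a single
rate. The event layout and the hourly rate are assumptions made for
illustration.

    # Illustration: a timeline is the interval between a "start" and a "stop"
    # audit event, billed at one rate for its entire duration.
    from datetime import datetime

    HOURLY_RATE = 0.05  # hypothetical price per hour in a given state

    def bill_timelines(events):
        """events: list of (iso_timestamp, action) tuples sorted by time."""
        total, started_at = 0.0, None
        for ts, action in events:
            moment = datetime.fromisoformat(ts)
            if action == "start":
                started_at = moment
            elif action == "stop" and started_at is not None:
                hours = (moment - started_at).total_seconds() / 3600.0
                total += hours * HOURLY_RATE   # same rate for the whole timeline
                started_at = None
        return total

    # A virtual machine running for two hours is billed for exactly two hours
    print(bill_timelines([("2017-06-01T10:00:00", "start"),
                          ("2017-06-01T12:00:00", "stop")]))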
Billing is designed as a serverless component, with different tasks performed by dedicated Lambdas.
Auto-Configuration
The Auto-Configuration functionality, which is a great advantage of
EPAM Orchestrator, has also been recreated in Maestro 3. As with other
components, auto-configuration was designed on the basis of AWS
services to ensure smooth integration. However, the implementation
proved to be challenging and tricky.
In EPAM Orchestrator, auto-configuration is based on the Chef service
which, obviously, means that it requires a Chef server. This did not fit
into the serverless architecture concept
applied in Maestro 3. Therefore, the Chef server functions (cookbook
storage and search) were divided between Amazon S3 and DynamoDB.
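As a rough sketch of this split, assuming a boto3-based implementation
with hypothetical bucket, table and attribute names, cookbook
publication and search could look like this:

    # Rough sketch: cookbook archives live in S3, searchable metadata in
    # DynamoDB, replacing the Chef server's storage and search functions.
    # Bucket, table and attribute names are hypothetical.
    import boto3
    from boto3.dynamodb.conditions import Key

    s3 = boto3.client("s3")
    index = boto3.resource("dynamodb").Table("cookbook-index")

    def publish_cookbook(name, version, archive_path):
        key = "cookbooks/{}/{}.tar.gz".format(name, version)
        # Store the packaged cookbook itself in S3
        s3.upload_file(archive_path, "maestro3-cookbooks", key)
        # Store searchable metadata in DynamoDB
        index.put_item(Item={"name": name, "version": version, "s3_key": key})

    def find_cookbook(name):
        return index.query(KeyConditionExpression=Key("name").eq(name))["Items"]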
The framework for cookbook development is based on the Dynamic
Discovering principle, which allows discovering existing services and
using their components rather than creating them from scratch.
Cookbooks can be created using the Chef Solo client and need no special
adaptation. The completed cookbooks are then uploaded to an S3 bucket
from where they can be retrieved.
As its database, the auto-configuration module uses DynamoDB, the
native Amazon non-relational database, which has shown optimal
performance, quick response times and convenient search capabilities.
For automated infrastructure creation, Maestro 3 uses the AWS
CloudFormation service.
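For illustration, launching infrastructure through CloudFormation with
boto3 could look like the sketch below; the template describes a single
illustrative S3 bucket and is not a real Maestro 3 template.

    # Sketch: create a CloudFormation stack from an inline template.
    # Stack name and template contents are illustrative only.
    import json
    import boto3

    template = {
        "AWSTemplateFormatVersion": "2010-09-09",
        "Resources": {
            "ArtifactBucket": {"Type": "AWS::S3::Bucket"},
        },
    }

    boto3.client("cloudformation").create_stack(
        StackName="maestro3-example-stack",
        TemplateBody=json.dumps(template),
    )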
The Deployment Framework has been released as an internal beta for
users within EPAM Systems. The plans include releasing a public beta of
the Deployment Framework and of the Orchestrator itself.


