Thank you, Sunil, and thank you, everyone, for giving me this opportunity to talk to you
for a few minutes.
Mithi Software is one of our trusted partners in the Amazon partner ecosystem, what
we also refer to as the AWS Partner Network. I have been asked to give you a little
information about why, if you choose to build solutions on the AWS infrastructure
and services, you can take advantage of the scale, the elasticity, the reliability,
the availability and, most importantly when it comes to storage, the durability,
which is what Mithi has chosen to do.
So essentially, Mithi can focus on solving the key business problems for its customers,
such as the Karaikal Port Trust, and leave the heavy lifting of ensuring these qualities
of service to the AWS service teams.
We try to make sure that we are able to deliver these for a large number of customers on the
platform.
Just before we come to the elasticity, I'll talk a little bit about the storage, which
is the key service that the Vaultastic product leverages.
We have a portfolio of storage services, and it is not just the platforms and solutions;
we also have ways for you to do infrastructure migration.
Sunil mentioned that briefly, for example using Snowball to physically ship large
amounts of data from one of your existing locations into the AWS infrastructure.
We also have a storage ecosystem, which includes not just the AWS services but our
partners and ISV products as well. For example, if you are using a backup product
from an ISV such as Veritas or NetApp, these products are in turn integrated so that
they can use the Amazon storage services as the storage destination for whatever
workloads they are fulfilling.
Similarly, you can have hybrid cloud storage, where some storage sits on premises
outside of AWS and other parts of the storage extend into AWS.
When you look at the storage platform and solutions, these are all of the offerings
that we have.
I am not going to spend a lot of time on these, because each of these services in turn is
a scalable and capable service.
Some of these services are versatile in nature, while others are more focused on
solving a particular type of problem. For example, take the Amazon EFS service,
the Elastic File System.
The EFS service provides a managed shared file system. You may be familiar with using
NFS for shared file systems that are mounted by multiple compute instances; EFS provides
the equivalent shared file system that you can use with NFS mounts. You have already
heard of S3.
S3 is an object store; you can put any number of objects into S3 and get and put these
objects using API operations.
Glacier is used for archival.
For cold data, or data that you need to retain for a longer duration but may not access
frequently, you can use Glacier.
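For example, a minimal sketch of these put and get calls, and of moving older data to Glacier through a lifecycle rule, using the AWS SDK for Python (boto3), might look like this; the bucket name, keys and prefix are placeholders:

```python
import boto3

s3 = boto3.client("s3")

# Store an object; the call succeeds only once S3 has durably stored it.
s3.put_object(Bucket="example-archive-bucket",
              Key="mail/2017/message-001.eml",
              Body=b"...message contents...")

# Retrieve the same object later with a matching GET.
obj = s3.get_object(Bucket="example-archive-bucket",
                    Key="mail/2017/message-001.eml")
data = obj["Body"].read()

# Move cold data to Glacier automatically after 90 days.
s3.put_bucket_lifecycle_configuration(
    Bucket="example-archive-bucket",
    LifecycleConfiguration={
        "Rules": [{
            "ID": "archive-old-mail",
            "Filter": {"Prefix": "mail/"},
            "Status": "Enabled",
            "Transitions": [{"Days": 90, "StorageClass": "GLACIER"}],
        }]
    },
)
```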
So without spending more time on particular services, I will move to the next slide.
This slide simply calls out, as we said, that we have solutions for infrastructure
migration and an ecosystem around the storage solutions and services.
It also calls out some of the many options available to customers and partners, such
as using Direct Connect, using Snowball, using ISV connectors in the software that you
use, and so on.
I want to talk a little bit about why S3 is able to offer extremely high durability,
where we proudly claim the figure of 11 nines of durability.
Now if you think a little bit about it, the reason we are able to deliver this durability
is that the S3 service is engineered in such a way that we can claim we will never lose
an object once you put it in S3.
For example, whenever you put an object in S3, we only acknowledge success once multiple
copies of the object have been stored in separate facilities.
These could be different Availability Zones and multiple data centers within the region.
The S3 service itself, as you may be aware, has been in operation for more than a decade.
Within the service and the infrastructure, we are constantly monitoring, upgrading and
dealing with our own refresh cycles, and the reason we are able to keep the service up
and running is that we have our own self-healing mechanisms.
For example, if some part of our infrastructure experiences a failure, which can be a
common occurrence when you are operating large amounts of infrastructure, the self-healing
capability means that we restore good copies of data from the multiple copies we have,
and we continue to retain a number of such copies so that we can survive any local
instances of failure.
So this is a little insight into why the service is designed in such a way that we can
offer you this very high durability.
Just a little bit more information around the extent to which customers are using S3.
We have customers that are storing billions of objects.
It is a very durable and reliable platform and because all operations are API operations,
the S3 service automatically scales.
So this is one aspect of the elasticity that I will come to.
When you store or retrieve data in S3, these requests in turn translate into API operations
against a service endpoint. So in addition to maintaining the data in a secure and safe
manner, we also operate all of the infrastructure that responds to these API calls,
retrieves or stores the data, and satisfies the customer's requirements for using the data.
Similarly, we have API operations for easy and flexible data transfer.
We are independently audited by third-party auditors on a routine and recurring basis
to ensure that we have secured all of this infrastructure and the services. As part of
the service itself, we give customers a number of security controls they can use
themselves, so that for the objects they store in S3 they can decide who can store
these objects, who can retrieve or read them, and so on, including capabilities for
server-side encryption.
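As a small sketch of two such controls in boto3, server-side encryption and an object-level read grant; the bucket, key and canonical user ID are placeholders:

```python
import boto3

s3 = boto3.client("s3")

# Ask S3 to encrypt the object at rest with S3-managed keys (SSE-S3).
s3.put_object(
    Bucket="example-archive-bucket",
    Key="mail/2017/message-002.eml",
    Body=b"...message contents...",
    ServerSideEncryption="AES256",
)

# Grant read access on the object to one specific account only.
s3.put_object_acl(
    Bucket="example-archive-bucket",
    Key="mail/2017/message-002.eml",
    GrantRead='id="EXAMPLE-CANONICAL-USER-ID"',
)
```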
We have also recently launched services such as Amazon Athena.
This is a service where you can have your data in S3 and you need not load it into a
database or a big data cluster in order to run SQL queries on it.
So these are some of the ways in which we are bringing customers the ability to derive
value from the data they are putting into S3.
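A minimal Athena sketch in boto3 might look like the following; the database, table and result bucket are placeholders and are assumed to already be defined over the S3 data:

```python
import boto3

athena = boto3.client("athena")

# Run a SQL query directly against data sitting in S3; nothing has to be
# loaded into a database or cluster first.
response = athena.start_query_execution(
    QueryString="SELECT sender, COUNT(*) FROM mail_archive GROUP BY sender",
    QueryExecutionContext={"Database": "vault_demo"},
    ResultConfiguration={"OutputLocation": "s3://example-athena-results/"},
)
print(response["QueryExecutionId"])
```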
So now I am going to talk a little bit about elasticity. There are essentially two or
three different aspects to it.
Firstly, whatever set of resources you are operating, whether storage, compute,
networking, databases, or any kind of advanced, higher-level services, the basic
characteristic is that you want the freedom to scale up and scale down with little or
no lead time, as and when you need it.
And there should not be any penalties for this; you should not have to plan ahead.
If you can plan and anticipate changes in capacity, that is also good, but you should
not be forced to plan for changes in capacity, even when they happen in an unplanned
fashion.
One of the key reasons the AWS platform and services are elastic is that all the
operations you perform, whether you want to store more data, run more virtual machines,
scale out networking and so on, are simply API operations. That means you can perform
them in software, through the console, through utilities, or through the SDKs, and all
of these operations complete in seconds or minutes.
So for example, if you are starting new instances, they are online within minutes.
Similarly, when you start up databases, they are available within minutes. The same
applies when you reconfigure or resize them. If you believe in vertical scaling, and
you chose to run an application on a certain instance type and then discover that it
does not have enough memory or CPU capacity, you can resize it, again using API
operations, in just minutes.
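A rough sketch of such an API-driven launch and vertical resize in boto3 follows; the AMI ID and instance types are placeholders:

```python
import boto3

ec2 = boto3.client("ec2")

# Launch a new virtual machine; it is typically running within minutes.
run = ec2.run_instances(ImageId="ami-0123456789abcdef0",
                        InstanceType="t2.medium",
                        MinCount=1, MaxCount=1)
instance_id = run["Instances"][0]["InstanceId"]

# Vertical scaling: stop the instance, change its type, start it again.
ec2.stop_instances(InstanceIds=[instance_id])
ec2.get_waiter("instance_stopped").wait(InstanceIds=[instance_id])
ec2.modify_instance_attribute(InstanceId=instance_id,
                              InstanceType={"Value": "m4.large"})
ec2.start_instances(InstanceIds=[instance_id])
```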
So there is no need for human intervention, or for processes that slow down the usual
requirements around scaling up or scaling down.
Secondly, like I said, there are no penalties: there is no upfront provisioning required
and there are no minimums or commitments when you use these services. And lastly, we are
continuously adding capacity, every single day.
There are teams at AWS dedicated simply to continuously adding capacity and refreshing
the underlying platform: the actual hardware, the software, the configuration and all of
the operational processes around delivering these services to our customers.
I want to give you some examples around this elasticity.
So for example, if you just take compute, the Vaultastic product itself operates using
certain virtual machines on the EC2 service.
With the EC2 service, you can provision any number of EC2 instances.
You can see here that I am talking about certain limits.
These limits are nothing but simple mechanisms for protection.
Because all of these operations are API operations that can also be called from software,
it might happen that, due to a bug, an accident or human error, customers accidentally
spin up a large number of EC2 instances or VMs. For example, you wanted to start maybe
9 instances, but due to a typo you ended up creating 99 or 900 instances.
So on every AWS account, for each of the services, we have something called limits.
These limits give you a small number that you can routinely provision, and when you
know your requirements exceed these numbers, you simply communicate a change request
beforehand, which lets us know that you need to provision more capacity.
This is a simple protection mechanism, not just for your own account but also for the
other customers using the platform.
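As a small illustration, one way to check the current default EC2 instance limit on an account with boto3 is sketched below; raising the limit is then done ahead of time through a limit-increase request rather than through this call:

```python
import boto3

ec2 = boto3.client("ec2")

# Read the account's default EC2 instance limit.
attrs = ec2.describe_account_attributes(AttributeNames=["max-instances"])
limit = attrs["AccountAttributes"][0]["AttributeValues"][0]["AttributeValue"]
print("Current EC2 instance limit:", limit)
```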
Similarly, you can take out what we call a reservation, or an RI.
With this you can ensure that whenever you require capacity, that capacity will be
available to you.
This is not necessary, you don't need to do this, but it gives you a lower price over
a longer term.
We also have a lot of spare capacity, which we make available through what is called
the Spot Instance market, where you can bid for capacity and get it at much lower prices.
In this way you can take advantage of our spare capacity.
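A minimal Spot request sketch in boto3 is shown below; the bid price, AMI ID and instance type are placeholders:

```python
import boto3

ec2 = boto3.client("ec2")

# Bid for spare capacity on the Spot market.
ec2.request_spot_instances(
    SpotPrice="0.05",
    InstanceCount=2,
    LaunchSpecification={
        "ImageId": "ami-0123456789abcdef0",
        "InstanceType": "m4.large",
    },
)
```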
We also have features like auto scaling groups and Elastic Load Balancing; I will come
to those in a bit.
This is an illustration of using auto scaling and Elastic Load Balancing.
The Elastic Load Balancing service, as the name says, is elastic in nature. Unlike
conventional load balancing, where you must manage the infrastructure used for load
balancing itself, the Elastic Load Balancing service is designed to automatically grow
and shrink to make sure that any amount of load can be sent to the back-end applications
that are serving the traffic.
Similarly, your back-end applications themselves can be part of what is called an auto scaling
group.
The auto scaling group is a mechanism where the size of your fleet can grow or shrink
on demand, based on the load that is currently hitting your application.
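A sketch of putting a back-end fleet into an auto scaling group registered with a (classic) load balancer follows; the group, launch configuration, load balancer names and Availability Zones are placeholders and are assumed to already exist:

```python
import boto3

autoscaling = boto3.client("autoscaling")

# The fleet can grow between MinSize and MaxSize based on demand.
autoscaling.create_auto_scaling_group(
    AutoScalingGroupName="vault-web-asg",
    LaunchConfigurationName="vault-web-lc",
    MinSize=2,
    MaxSize=10,
    DesiredCapacity=2,
    LoadBalancerNames=["vault-web-elb"],
    AvailabilityZones=["ap-south-1a", "ap-south-1b"],
)
```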
This is also a quick slide that shows you the basic auto scaling life cycle.
You can find the public documentation on this, but the auto scaling group works by
itself to perform both scale-out and scale-in events. Scale-out events add capacity
when needed, based either on increases in traffic or on scheduled events. For example,
you may know that certain applications are busy only at certain times of the day or on
certain days of the week, so you can perform scale-out actions during or ahead of those
times, and then scale-in actions happen automatically, because you don't want to keep
running a larger fleet.
The scale-in actions will automatically retire instances when you no longer need a
large fleet of instances.
There are other services that have elasticity built into them, and one of these is what
we call serverless computing, using Lambda.
With Lambda, you simply write some code to execute as a function, and we take care of
all the infrastructure management and the execution of the code.
We scale it to the number of invocations, which could be thousands of invocations every
second.
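A Lambda function really is just code; a minimal Python handler sketch is shown below, with the event shape purely an assumption for illustration:

```python
# Runs once per invocation; AWS manages the servers and scales to however
# many concurrent invocations arrive.
def handler(event, context):
    message_id = event.get("message_id", "unknown")
    print(f"Processing message {message_id}")
    return {"status": "processed", "message_id": message_id}
```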
Similarly, if you look at the elasticity of storage on AWS, unlike conventional storage,
you never need to tell us or tell a service beforehand how much storage you need.
A single object can be as large as 5 terabytes, and you can have an unlimited number of
objects per bucket.
The service itself, the S3 APIs, routinely scales every day as we handle a large volume
of requests from our customers in terms of the number of operations per second.
Just one more example I wanted to give you, about a higher-level service, Amazon DynamoDB.
In this case also, the elasticity is built in. DynamoDB tables are the unit of usage,
and there is no limit to the number of items or to the total amount of data that you can
have in a DynamoDB table.
We routinely have customers storing billions of items and petabytes of data in tables
provided by this service.
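A small sketch of writing an item to a DynamoDB table with boto3 is below; the table name and attributes are placeholders:

```python
import boto3

dynamodb = boto3.resource("dynamodb")
table = dynamodb.Table("vault-mail-index")

# Items can be written continuously; the table has no fixed limit on item
# count or total data volume.
table.put_item(Item={
    "mailbox": "user@example.com",
    "message_id": "2017-06-01-000123",
    "subject": "Quarterly report",
    "size_bytes": 48213,
})
```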
Just to recap, you might be considering storage for some of these different categories
of workloads: it could be primary storage, or it could be related to migration, bursting
or tiering requirements.
You can take advantage of our storage services as well as the elasticity to meet these
different kinds of requirements.