Real World Cost Example for Google and AWS

In the wake of Google’s Next ’17 event, and a slew of recent Reserved Instance changes by Amazon Web Services (AWS), it seemed like a good time to compare public cloud VM pricing to see if anything has changed, or, perhaps more importantly, who is ultimately cheaper.

First things first: at Next ’17, Google Cloud Platform (GCP) announced the ability to reserve capacity in the form of Committed Use Discounts.  One significant difference from AWS Reserved Instances is that GCP does not require any upfront payment for the best discount.  Whereas AWS requires an upfront fee and a 3 year commitment to receive up to 60% off on demand pricing, GCP committed use gets you up to 57% off with no upfront fee.  The GCP discount most closely aligns with AWS’ new Convertible Reserved Instances: in both cases they are regional, not tied to any specific zone within a region.  The AWS discounts can be modified for instance type, family, OS, and tenancy; the GCP discount simply applies to your aggregate core count for the region.  What makes GCP’s discount better, and in keeping with their Sustained Use Discounts, is that you don’t have to modify anything to get it.  It applies to whatever you have running; there’s no need to make the discount fit the infrastructure.
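
To feel the difference, put the two models side by side.  A minimal sketch; the $0.20/hr list price is a placeholder, and the discount rates are the headline “up to” numbers, not a quote for any particular instance:

```python
HOURS_PER_MONTH = 730  # average month

def monthly_cost(list_hourly, discount):
    """Effective monthly cost for one VM after the committed/reserved discount."""
    return list_hourly * (1 - discount) * HOURS_PER_MONTH

# AWS-style: up to 60% off, but tied to an upfront fee and a 3 year commitment.
aws_monthly = monthly_cost(0.20, 0.60)
aws_upfront = aws_monthly * 36          # an All Upfront RI collects this on day one

# GCP-style: up to 57% off, billed monthly, nothing upfront.
gcp_monthly = monthly_cost(0.20, 0.57)

print(f"AWS-style: ${aws_monthly:.2f}/mo (${aws_upfront:,.2f} paid up front)")
print(f"GCP-style: ${gcp_monthly:.2f}/mo, nothing up front")
```

The gap is only a few points of discount, and GCP keeps the cash in your pocket.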

AWS has made a few recent changes to Reserved Instances to make them more useful to customers.  These are welcome changes that offer increased flexibility for customers to tailor their discounts.  A few of the ways your AWS RIs can now be put to work: Scheduled Reserved Instances let customers get the RI discount and capacity reservation for periodic workloads.  Customers who are willing to waive the capacity reservation in exchange for more RI discount flexibility can now opt for regional RIs.  Lastly, as mentioned previously, AWS customers can now modify an RI to meet changing needs through instance size flexibility.

What about those people who just want to use On Demand?  GCP has a clear advantage with Sustained Use Discounts, provided your instances are running for more than 25% of the month (not an uncommon occurrence).  AWS does not provide a comparable feature.  Google’s sustained use discount chart breaks the month into usage tiers, with each successive tier billed at a lower rate.
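
A minimal sketch of that tier schedule, using the rates Google published for n1 machine types at the time (quoted from memory, so verify against current pricing):

```python
# GCP sustained use: each successive slice of the month is billed at a lower
# fraction of the base rate.  Tier boundaries and rates as published for n1
# machine types at the time of writing (treat as indicative).
TIERS = [
    (0.00, 0.25, 1.00),  # first quarter of the month at 100% of base rate
    (0.25, 0.50, 0.80),  # second quarter at 80%
    (0.50, 0.75, 0.60),  # third quarter at 60%
    (0.75, 1.00, 0.40),  # final quarter at 40%
]

def sustained_use_multiplier(usage_fraction):
    """Blended fraction of the base rate paid by a VM that runs
    `usage_fraction` (0.0 to 1.0) of the month."""
    billed = 0.0
    for low, high, rate in TIERS:
        if usage_fraction > low:
            billed += (min(usage_fraction, high) - low) * rate
    return billed / usage_fraction if usage_fraction else 0.0

for fraction in (0.25, 0.50, 0.75, 1.00):
    multiplier = sustained_use_multiplier(fraction)
    print(f"run {fraction:.0%} of the month -> pay {multiplier:.0%} of base "
          f"rate ({1 - multiplier:.0%} discount)")
```

A VM that runs the full month lands at a 30% discount, which is the figure used in the tables below.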

So let’s take some actual compute-based pricing scenarios to see how all these options play out.  To keep things as easy as possible to compare, we will assume these instances are running 24 hours a day for the entire month, so all costs are monthly.  All prices and discounts are as of March 20th, 2017:

All five scenarios run the same workload: 10 instances, each with 4 vCPU, 15 GB memory, and 10 GB SSD storage (AWS m4.xlarge vs. GCP n1-standard-4).

| AWS Discount | GCP Discount | AWS Approx Monthly | GCP Approx Monthly |
|---|---|---|---|
| None (On Demand) | Sustained Use 30% | $1,585 | $988 |
| 1 year No Upfront RI | Sustained Use 30% | $1,086 | $988 |
| 3 year No Upfront Convertible RI | Sustained Use 30% | $974 | $988 |
| 3 year No Upfront Convertible RI | 1 year Committed Use | $974 | $874 |
| 3 year No Upfront Convertible RI | 3 year Committed Use | $974 | $624 |
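
As a sanity check, the GCP figure in the first row can be rebuilt from list prices.  A minimal sketch; the $0.19/hr n1-standard-4 rate and $0.17/GB-month SSD persistent disk rate are my recollection of the posted us-central1 prices, so verify them before relying on this:

```python
# Rebuild the GCP on-demand row: 10x n1-standard-4, running the full month.
HOURS = 730               # GCP prices against a 730-hour average month
LIST_HOURLY = 0.19        # n1-standard-4 list price (assumed us-central1 rate)
SUSTAINED_USE = 0.30      # full-month sustained use discount from above
SSD_GB_MONTH = 0.17       # SSD persistent disk per GB-month (assumed rate)
QTY, SSD_GB = 10, 10

compute = LIST_HOURLY * (1 - SUSTAINED_USE) * HOURS * QTY
storage = SSD_GB * SSD_GB_MONTH * QTY
print(f"~${compute + storage:,.0f}/month")  # ~$988, matching the table
```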

All that said, AWS does offer something GCP does not: upfront payment for a deeper Reserved Instance discount.  To continue the above workload, let’s add one more pricing scenario.  In this case, we will use AWS’ 3 year All Upfront Convertible RI discount.

The same workload again, this time compared on total cost over the term:

| Discount | 3 Year Total | Monthly Amortized |
|---|---|---|
| AWS: 3 year All Upfront Convertible RI | $29,472 | $819 |
| GCP: 3 year Committed Use | $22,464 | $624 |

Speaking of flexibility, one critical difference between GCP and AWS compute resources is the ability to use a custom machine type on GCP.  This allows the customer to select a blend of compute and memory that better suits their needs.  Furthermore, this custom VM is in many cases cheaper than the closest match on AWS.  The following chart shows some examples of how this plays out with on-demand pricing:

Both workloads run 100 instances, each with 10 GB SSD storage, 12 hours a day / 5 days a week, sized for maximum cores and minimum memory:

| Workload | Instance | Cores | Memory | Approx Monthly Total |
|---|---|---|---|---|
| Option 1 | AWS c4.8xlarge | 36 | 60 GB | $41,148 |
| Option 1 | GCP custom | 36 | 32 GB | $32,798 |
| Option 2 | AWS m4.16xlarge | 64 | 256 GB | $89,033 |
| Option 2 | GCP custom | 64 | 58 GB | $58,478 |
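
Custom machine types are priced per vCPU-hour plus per GB-hour of memory, which makes the shapes above easy to price out.  A minimal sketch; the per-unit rates are roughly what Google published at the time and should be treated as placeholders:

```python
# Approximate on-demand cost of a GCP custom machine type.
VCPU_HOURLY = 0.033174   # $ per vCPU-hour (assumed us-central1 rate)
MEM_HOURLY = 0.004446    # $ per GB of memory per hour (assumed rate)

def custom_hourly(vcpus, memory_gb):
    """List-price hourly rate for a custom vCPU/memory blend."""
    return vcpus * VCPU_HOURLY + memory_gb * MEM_HOURLY

# Workload Option 1: 36 cores / 32 GB, 12 hrs a day, 5 days a week, 100 VMs.
hours_per_month = 12 * 5 * 52 / 12   # ~260 hours per VM
monthly = custom_hourly(36, 32) * hours_per_month * 100
print(f"~${monthly:,.0f}/month at list price")
```

Sustained use discounts then shave that down (these VMs run roughly 36% of the month), which is how you land near the $32,798 in the chart.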

Perhaps most notable in pricing for elastic workloads: on GCP you pay per minute, not per hour.
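
To make that concrete, compare how a short-lived fleet bills out under each model.  A minimal sketch of the billing rules as they stood at the time (AWS rounded each instance up to a full hour; GCP billed per minute with a 10 minute minimum):

```python
import math

def aws_billed_hours(minutes, count):
    """Per-hour billing: every started hour is charged as a full hour."""
    return math.ceil(minutes / 60) * count

def gcp_billed_hours(minutes, count):
    """Per-minute billing with a 10 minute minimum charge."""
    return max(minutes, 10) / 60 * count

# 200 autoscaled workers that each live for 61 minutes:
print(aws_billed_hours(61, 200))   # 400 instance-hours billed
print(gcp_billed_hours(61, 200))   # ~203 instance-hours billed
```
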
One closing thought around core compute performance: Linux boot times on GCP are faster than on AWS.  This foundational difference has a lot of follow-on benefits, faster auto-scaling being one example.  In the case of PaaS, it even enables on-request instances for non-production environments (no traffic, no compute running).  More on scaling, performance and PaaS in my next post.

Posted in General

New APIs expand the uses for AWS Tags

Amazon recently announced some new features around tagging permissions that make tags considerably more useful.  Although this just came out, I already see a few areas where we can simplify automation scripts.  More importantly, since we can limit access to tags by key, we can reserve certain keys for central functions like cost allocation and monitoring, while individual teams still leverage tags for their own purposes without the risk of production-required tags being modified.
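
For example, a deny policy keyed on the aws:TagKeys condition can fence off the reserved keys while leaving every other tag open to the team.  A minimal boto3 sketch; the policy name and reserved key names are hypothetical:

```python
import json
import boto3

# Deny modification of tag keys reserved for central functions (cost
# allocation, monitoring), while leaving all other tag keys open.
policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Deny",
        "Action": ["ec2:CreateTags", "ec2:DeleteTags"],
        "Resource": "*",
        "Condition": {
            # Matches any request that touches one of the reserved keys.
            "ForAnyValue:StringEquals": {
                "aws:TagKeys": ["CostCenter", "MonitoringTier"]  # hypothetical
            }
        },
    }],
}

iam = boto3.client("iam")
iam.create_policy(
    PolicyName="DenyReservedTagKeys",  # hypothetical name
    PolicyDocument=json.dumps(policy),
)
```

Attach that to the developer roles and the production-required tags stay safe, no matter how enthusiastically the teams tag everything else.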

Resource-level permissions are still not at the point where complete isolation of resources for different teams can be implemented in a single account, but this is a huge step forward in giving developers the access they need without the risk of breaking production automation.

How will you use these new features? Reply below!

Posted in General

HIPAA on AWS the Easy Way

Many of our customers are running workloads that are subject to HIPAA regulations.  Running these on AWS is definitely doable, but there are some catches.  Foghorn has made it super easy for our customers to run HIPAA compliant workloads on AWS.  Here’s how…

What is a BAA?

If you are not familiar with HIPAA, here is the short version: the regulations require a Business Associate Agreement to be executed with each of your partners who may have access to Protected Health Information.  From the Health Information Privacy page on BAAs:

‘A “business associate” is a person or entity, other than a member of the workforce of a covered entity, who performs functions or activities on behalf of, or provides certain services to, a covered entity that involve access by the business associate to protected health information.  A “business associate” also is a subcontractor that creates, receives, maintains, or transmits protected health information on behalf of another business associate.  The HIPAA Rules generally require that covered entities and business associates enter into contracts with their business associates to ensure that the business associates will appropriately safeguard protected health information.’

AWS HIPAA Rules and Regs

If you are handling PHI today, you already know that any vendor that you share PHI with is required to sign a BAA.  Amazon has made this process pretty straightforward, in that they offer a BAA that they will happily sign for all customers storing and processing PHI on AWS.  But the devil is in the details.  You can read more at the AWS HIPAA compliance page here.  The important quote:

“Customers may use any AWS service in an account designated as a HIPAA account, but they should only process, store and transmit PHI in the HIPAA-eligible services defined in the BAA.”

So are you protected by the BAA?

The BAA that Amazon signs covers only a few of the AWS services, and requires that you use those services in specific architectural configurations.  If you break those conventions, the BAA is nullified.  Worse, it is nullified for your entire account, not just for data handled by the non-compliant components.

An easy example would be if your team had a compliant architecture for production, but a non-compliant infrastructure for staging.  This may have been your configuration to save on costs, and in order to maintain compliance you scrub staging data of PHI before uploading.  Let’s say that an engineer mistakenly uploaded non-scrubbed data to the non-compliant environment.  You just invalidated your BAA, even for your production environment!

In addition, any of the technical consultants, subcontractors, and managed services companies that you use also need to sign a BAA. This process can be time consuming and costly from a legal perspective.

The Easy Way

Foghorn is both a cloud services and a cloud engineering provider.  Because you get all of your AWS as well as your engineering and managed services from us, you can sign a single BAA with Foghorn.  All of the AWS gotchas still apply, but Foghorn is deeply experienced in architecting and managing HIPAA compliant environments.  When you partner with Foghorn, we make sure your PHI is safe and your company is protected from accidentally invalidating your AWS BAA.  There are a few ways we accomplish this:

  1. All Foghorn employees undergo HIPAA training.  We make sure our employees understand the what, the how and the why of HIPAA to avoid any simple errors.
  2. All Foghorn customer HIPAA accounts are tagged.  We know which accounts are HIPAA, and which aren’t, without a doubt.  That makes tracking and auditing easier.
  3. We segregate your PHI workloads from non PHI workloads when possible, to make sure we can focus the restrictive HIPAA based policies only where required. This saves cost and maintains agility on the rest of your workloads.
  4. We design the HIPAA infrastructure with belt and suspenders.  We make sure your architecture is compliant with Amazon’s BAA conditions, and add multiple layers of assurance.
  5. We advise and guide on the responsibilities that AWS does not take care of.  This includes scanning, penetration testing, change processes, incident response, etc.
  6. We set up realtime audit monitoring for key controls, so that if someone changes something in your account that may lead to compliance issues, your team is notified immediately (a minimal sketch of one such check follows below).
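
As a taste of what one of those checks can look like, here’s a minimal sketch that sweeps a region for HIPAA-tagged EC2 instances running without dedicated tenancy, one of the conditions attached to Amazon’s BAA.  In practice we wire this kind of check to real-time events rather than a polling script, and the Compliance tag key here is hypothetical:

```python
import boto3

# Flag EC2 instances in HIPAA-tagged workloads that are not running with
# dedicated tenancy (a condition of Amazon's BAA for instances handling PHI).
ec2 = boto3.client("ec2", region_name="us-east-1")

pages = ec2.get_paginator("describe_instances").paginate(
    Filters=[{"Name": "tag:Compliance", "Values": ["HIPAA"]}]  # hypothetical tag
)
for page in pages:
    for reservation in page["Reservations"]:
        for instance in reservation["Instances"]:
            if instance["Placement"]["Tenancy"] != "dedicated":
                # In production this notifies the on-call team immediately.
                print(f"ALERT: {instance['InstanceId']} is not dedicated tenancy")
```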

Call us today for more info on how we can help you meet HIPAA compliance while retaining your agility.

Posted in General

Take our DevOps Code… Please!

At Foghorn, we’ve been writing modular DevOps code for many years.  Over time, we have developed a set of modules that represent about 80% of our customers’ needs, and leveraged those modules to accelerate the timeline of DevOps projects.  This includes infrastructure as code (network, servers, IAM rules, security rules, etc.), deployment code, and operations code.  Generally we charge a fixed fee for projects which leverage FogOps IP in place of our standard hourly rates.  This has been a great model to help get new customers and new initiatives up and running quickly at a relatively low cost.  Once delivered, our customers either ask us to evolve and iterate on their code, or they take complete ownership.

Recently we’ve spent a considerable amount of time modularizing our code and offering it to our customers directly.  During this process, we felt that defining a pricing model was holding us back from releasing more value more quickly to our customers.  We wanted the ability to churn out modules fast, without coming up with pricing.  We also wanted the ability to maintain and update our modules.  We came to the conclusion that the best way we could serve our customers is to give this code away, for free, to the community of customers who have selected Foghorn as their Public Cloud and DevOps provider.  We are excited to help our customers grow quickly, and hope to see great uptake in the use of our code.

We discussed open-sourcing the modules completely, but ran into a few snags.  If our goal were mass adoption of our project, we would simply open source it.  But our goal is not to create another public repository of community recipes.  The value our modules bring is that they represent Foghorn’s prescriptive opinion on best practice design and implementation.  The contributors for this project will be limited to Foghorn engineers and our clients’ engineers who share the same philosophies around building and running infrastructure and applications as code.

Our clients who choose to use our modules can benefit in a few ways:

  1. Our clients chose Foghorn because they appreciate and align with Foghorn’s principles for running mission critical workloads.  Leveraging our modules is an easy way to adopt those principles.
  2. Even if our clients know all of our principles by heart, and have the staff capabilities to implement as code, they can save a great deal of time and energy by using our modules as starting points for their custom DevOps code.
  3. Many of our customers simply don’t have the bandwidth to do this, and so they pay us to do it for them.  By leveraging our modules, we finish faster, and they pay less.
  4. Faster is not only cheaper.  Speed is its own reward!  Our customers can accelerate their mission critical initiatives, making themselves more competitive, and positively impacting the top line of their business.
  5. We hate getting woken up at night.  We always err on the side of stability and reliability with our designs.  Our clients’ engineering teams usually appreciate this.

So how do you sign up?  Simple.  Give us a ring.

 

Posted in General

DevOps Muscle for the CrossFit Games

 

As we approach the CrossFit Games once again, I thought I’d share some detail on how Foghorn helped CrossFit update their DevOps processes for the 2016 games.  CrossFit has been growing rapidly, and had challenges in 2015 scaling their leaderboards.  We worked together to prep for the 2016 games, and had some great success.  As usual, we leveraged HashiCorp tools to get the job done.  If you are interested in reading more, we’ve dropped a case study on our main site.

Posted in General

Business Risk? Or Assurance?

With the announcement that Snapchat has gone ‘all in’ on Google Cloud Platform, Snap has incorporated this plan into their financial filings as an additional ‘business risk’.

“Any disruption of or interference with our use of the Google Cloud operation would negatively affect our operations and seriously harm our business…”

My immediate reaction was to question whether this is a business risk or a business assurance.  Certainly Snap is now dependent on Google’s ability to scale and manage a massive infrastructure, so the disclosure is appropriate.  But as a prospective investor, I’d feel that a great potential risk to the business, loss of availability of the Snapchat service, has been greatly reduced.

If there were a bookmaker taking odds on the likelihood of various companies making technical and/or operational missteps that cause an outage, Google would not be the company I’d bet on.  Quite the opposite: I think they’ve proven over the last 15 years or so that they’re pretty good at running large infrastructure.

As this reality begins to sink in with investors, partners, customers, and business leaders, cloud adoption will accelerate beyond current predictions.

 

Posted in GCP, General, Public Cloud

Terraform beats CloudFormation to the Punch with Inspector Support


Cloud Neutral DevOps

HashiCorp makes some of our favorite DevOps tools.  Along with being feature-rich, stable, and well designed, they are cloud-neutral.  This allows DevOps teams to become experts with a single tool without getting locked into a single cloud vendor.  Some cloud-neutral tools try to completely abstract the cloud provider and the services available.  This forces the user to only use the ‘lowest common denominator’ of services available from all supported providers.  With Terraform, HashiCorp has not fallen into this trap.  They embrace the rich set of services available from each provider, with different services supported for different clouds.  This allows us to put the right workload in the right cloud, without the need to leverage multiple tools, or build multiple deployment pipelines.

Terraform Supports AWS Inspector

You would expect, however, that Terraform would trail the cloud providers’ proprietary tools in supporting new cloud products and features.  But HashiCorp is amazingly quick to add support.  A great example is v0.8.5, which added support for AWS’ Inspector service.  As of the publishing of this post, AWS’ own CloudFormation tool still does not support Inspector.  Pretty amazing for a small company offering an open source product!

Posted in Amazon Web Services, AWS, Cloud, Public Cloud

Who’s Managing your Cloud?

After designing, building and managing hundreds of environments, sometimes we get a little too deep in the weeds with our blogs.  So I thought I’d share this article, which gives a high level perspective on how companies benefit from working with a cloud managed services provider.

The article covers performance, scalability, security and compliance.  I’d add some additional benefits, like:

Cost Optimization:  Your provider knows where additional cloud spend will help, versus where you’d just be flushing money for little benefit.

Agility: Sure, the cloud enables agility, but it doesn’t guarantee it.  Your provider should be full of DevOps ninjas, who can put in place the pieces that don’t come ‘out of the box’ with IaaS.

Manageability: It’s so easy to string together IaaS components that give you the functionality you need. It’s also easy to do so in a manner which creates a management nightmare.  Especially if you’ve never done it before.  Your provider should lead you down the path to an infrastructure that can scale easily without additional management overhead.

And the winner is…

There are lots of great choices out there, although I’m admittedly biased toward FogOps, where our motto is “Live by Code”.  The meaning?  Everything we do to manage your site is done with code, leaving you with a self-healing, auto-scaling environment that leverages continuous deployment to make your life easy.

Oh yeah, and it works on AWS, Azure, and Google Cloud.

Posted in AWS, Azure, Cloud, GCP, Public Cloud

Disney goes Hybrid; Shares Challenges


Ian Murphy recently wrote a great article on Disney’s journey to the Hybrid Cloud.  The lightning talk, given by Blake White, highlighted the issues that many enterprise companies face when adopting some of the latest technologies, like Kubernetes and AWS, and integrating them with their existing on-prem infrastructure.  Although these technologies are well suited for integration, often the heavy lifting has to be done by the enterprise.  Many open source projects are very robust, but their focus is not on enabling integration with existing infrastructure.

A perfect example comes when Blake explains that, in order to get the integration Disney required, they had to build their own bespoke Kubernetes cluster provisioning tool.

Despite these challenges, Disney is forging ahead – a good sign that the value they are receiving makes overcoming the challenges a worthy endeavor. Lesson to learn?  Things worth doing are hard. Don’t let that stop you!

 

Posted in General

Crunching HIPAA data just got cheaper

With a recent announcement from AWS, customers can now leverage spot instances to crunch their HIPAA big data workloads.  This can decrease the compute costs of these jobs by up to 90%, making EC2 a cost-effective option for crunching large amounts of data that include Protected Health Information.

Amazon’s BAA with its HIPAA compliant customers requires that all EC2 instances that process PHI must run in dedicated tenancy mode.  Until now, spot instances were not available in dedicated tenancy mode, leaving this cost effective option unavailable for processing PHI.

Spot instance pricing is Amazon’s method of selling excess capacity that can be pre-empted if needed.  Spot pricing is market based, and often falls well below even the steepest discounts afforded with long term commitments.  Since the nodes can be pre-empted, spot instances are not suitable for many types of workloads, but most cluster compute technology is designed to tolerate node losses, making spot instances a great way to save money on short lived tasks that require high compute power.
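
Requesting a dedicated spot instance looks just like a normal spot request with the tenancy set in the placement.  A minimal boto3 sketch; the AMI ID, bid price, and instance type are placeholders:

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Bid for a spot instance with dedicated tenancy, as Amazon's BAA requires
# for EC2 instances that process PHI.
response = ec2.request_spot_instances(
    SpotPrice="0.50",                        # placeholder bid
    InstanceCount=1,
    LaunchSpecification={
        "ImageId": "ami-12345678",           # placeholder AMI
        "InstanceType": "c4.8xlarge",
        "Placement": {"Tenancy": "dedicated"},
    },
)
print(response["SpotInstanceRequests"][0]["SpotInstanceRequestId"])
```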

I took a quick peek in the AWS interface, and didn’t see any option to leverage dedicated spot instances in AWS’ managed Hadoop framework, EMR.  Hopefully we will see that soon!

Posted in Amazon Web Services, AWS, Cloud, Public Cloud