Thoroughly Impressed By DreamHost: How We Launched, Scaled, & Tested an Infrastructure Using DreamCompute

Testing DreamCompute

Every developer and DevOps master dreams of having simple, fine-grained control of their environment: the ability to scale up and down at will, and a defined state from which to launch their infrastructure. At some point, we’ve all struggled to handle capacity while demanding the highest possible performance from our servers.

What if it were possible to access incredibly cheap, high-performance virtual machines (running the open-source OpenStack API and controlled via a simple YML text file) that could scale to handle almost anything within minutes? “Liar!” you say? Well, read on my friends. You may be pleasantly surprised at some of our findings regarding DreamHost’s new DreamCompute platform.

For those of you who want to cut right to the meat of the article: the conclusion comes first, just below.

For the rest of you: sit back, relax, and enjoy the show.

Conclusion #1: Yeah, I Am Starting With the Conclusion

This is a lengthy piece. I want to ensure you get the gist of our results before you get lazy on me:

  1. DreamCompute lets you define your infrastructure in YML: a single Ansible playbook both launches the instances and defines what runs on them.
  2. We were able to launch an HAProxy server with 1-to-n NodeJS servers in round robin.
  3. We were able to scale up, adding each new server in a matter of minutes.
  4. Our infrastructure’s response times hint that DreamCompute’s hardware is top-notch: the machines boot in seconds, and most of the setup minutes were spent downloading npm packages. For more on this, check out Jonathan LaCour’s talk on DreamCompute at OpenStack Summit 2016.
  5. Our ability to scale up easily allowed us to push our testing to the limit: handling 7,000 hits per 30 seconds or 603M hits per month.
  6. The pricing seems to be close to market-leading and excels when performance and its integration with the OpenStack API are considered.

I think you’re going to be as blown away with DreamCompute as we were, once you see what we were able to do and how easily we were able to do it.

The Intro

When we reached out to DreamHost regarding their team culture, company, and technology, we knew that they were something special. We had heard about their founder’s work on Ceph and Astara, along with the company’s involvement in the OpenStack community.

What we didn’t know is that their launch of DreamCompute would dramatically alter the course of our testing and turn into our development team going nutso like children playing with their new He-Man Castle Grayskull set.

Castle Grayskull from He-Man

Before testing out DreamCompute, I spoke with Stefano Maffulli, DreamHost’s Director of Cloud Marketing and Community, who brought up DreamCompute’s integration with the OpenStack API. He mentioned that since Ansible 2 supports OpenStack natively, it is possible to launch DreamCompute instances without creating the virtual servers first, AND with all of the immutable goodness of Ansible. He gave me a challenge.

“Ansible 2 supports OpenStack (and DreamCompute) natively: You can create a new server and assign it a role right from the playbook, without the need to create the virtual server first. It’s pretty neat.”

– Stefano Maffulli, DreamHost Director of Cloud Marketing and Community

Alright Stef, I see what you are saying, but I think I’ll test this theory out for myself.

Our DreamCompute Testing Project

I decided I would create an architecture with an HAProxy load balancer (based on a role) and two backend NodeJS servers running a simple Express app. Creating such an architecture usually takes a decent amount of work and it’s kind of a pain to maintain.


The Project Repo: https://github.com/digital-brands/dreamcompute-ansible


Our DreamCompute Testing Architecture

That seems simple enough, but those of us who have done this “by hand” know it can be a challenge to actually implement.

A Background Intro to Ansible

For those of you not familiar with Ansible, it’s a piece of automation software that assists with:

  • Provisioning hardware in the Cloud
  • Automating configuration, using goals to describe the state you want to achieve instead of having scripts that are just a rat’s nest of pieces

By creating playbooks in Ansible, you can create repeatable, immutable, and exact app deployments, without installing any agent on your servers.
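To make that concrete, here is a tiny, hypothetical play (not from our project) written in that goal-oriented style. Rather than scripting installation steps, it declares that nginx should be installed and running, and Ansible figures out what, if anything, needs to change on each host:

- hosts: web
  become: true
  tasks:
    - name: nginx is installed
      apt:
        name: nginx
        state: present
    - name: nginx is running and starts at boot
      service:
        name: nginx
        state: started
        enabled: true

Run it twice and the second run changes nothing, which is exactly the repeatability we’re after.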

It’s more and more common these days to deploy applications across multiple servers. Orchestrating these tasks can become very complicated very quickly, but Ansible makes the process pretty straightforward.

As you’ll find in the following example, using Ansible makes launching an app’s infrastructure not only easy, but fun for a developer.

Our Project’s Ansible Playbook

To create our test infrastructure, we are going to write a YAML file (.yml) that describes the hardware and software we need.

1. Generate Some Login Files via DreamCompute

To allow Ansible to create and log into the instances we launch, we need to have permission to use the API, plus a .pem file to allow it to SSH into our servers. DreamHost makes this easy.

Just sign into your Dashboard and create the necessary files:

  • *-openrc.sh – Creates permissions to use the DreamCompute API
  • *.pem – Used as a key for Ansible to SSH into our servers

The great news here is that these files are created automatically — i.e., magic to the end-user.
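One practical note: the openrc script gets sourced into your shell (you’ll see it in the launch step later), while the .pem file is handed to Ansible as its SSH private key. The variable names below are standard Ansible; the filename and remote user are assumptions about a typical setup:

# group_vars/all.yml (illustrative)
ansible_ssh_private_key_file: ./dreamcompute-key.pem
ansible_user: ubuntu    # assumed default login user on the Ubuntu image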

2. The Fun Part – Using Ansible

I am not an Ansible genius. I am not an Ansible junior developer.

I am an Ansible newb.

That said, I can honestly say that you can get started with Ansible very quickly, and that it is straightforward and pleasant to use with the DreamCompute API.

For simplicity, I wanted to create a single deploy.yml file that would create my entire architecture and launch my application. Let’s walk through the major parts in the next sections.

The 2 Main Sections of Our deploy.yml File

There are basically two main sections of our deployment playbook. The first describes the hardware we need to provision and the second describes what our software state will look like on our provisioned hardware.
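Structurally, that means deploy.yml is laid out something like the sketch below (the host group names are my own placeholders; the repo’s real file is more detailed):

# deploy.yml, in outline
- name: provision the hardware on DreamCompute
  hosts: localhost
  connection: local
  tasks: []          # section 1: the os_server tasks go here

- name: configure the load balancer
  hosts: loadbalancer
  roles: []          # section 2: the info.haproxy role goes here

- name: configure the NodeJS app servers
  hosts: web
  tasks: []          # section 2: the apt/npm/forever tasks go here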

1. The Hardware Bit – Instances that will run NodeJS and HAProxy

- name: launch instance 1
  os_server:
    name: api-ins-1
    state: present
    image: Ubuntu-14.04
    flavor_ram: 512
    ...

Pretty straightforward, eh?

Basically, I am telling DreamCompute to create a 512MB server running Ubuntu 14.04.

AND IT DOES IT!

The Instances We Launched on DreamCompute

Crazy. Since it was so easy to create one, let’s go ahead and create three.
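Since each instance is just another task, launching several is one loop away. Here is a hedged sketch using with_sequence (the key pair name and the count are illustrative):

- name: launch the NodeJS instances
  os_server:
    name: "api-ins-{{ item }}"
    state: present
    image: Ubuntu-14.04
    flavor_ram: 512
    key_name: dreamcompute-key    # hypothetical key pair name
    wait: yes
  with_sequence: start=1 end=3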

2. The Software Bit – HAProxy and Our NodeJS App

HAProxy:

For funzies, let’s use an Ansible Galaxy role to create our HAProxy load balancer.

This section mostly holds information on how we want our load balancer set up. Let’s set the client, connect, and server timeouts to five seconds, and tell HAProxy which port our frontend listens on (port 80) and which servers make up our backend (on port 3000).

roles:
  - role: info.haproxy
    haproxy_defaults:
      mode: http
      ...
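Fleshed out per the description above, the configuration looks roughly like this. A word of caution: the exact variable schema belongs to the info.haproxy role, so treat the keys below as an illustrative sketch rather than gospel:

roles:
  - role: info.haproxy
    haproxy_defaults:
      mode: http
      timeout:
        client: 5s
        connect: 5s
        server: 5s
    haproxy_frontends:
      - name: http-in
        bind: "*:80"
        default_backend: nodejs
    haproxy_backends:
      - name: nodejs
        balance: roundrobin
        servers:
          - name: api-ins-1
            address: "10.10.10.11:3000"    # private IPs are hypothetical
          - name: api-ins-2
            address: "10.10.10.12:3000"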

NodeJS:

We’ll be using APT to install stuff for our NodeJS servers.

To make this even more interesting, let’s do the following (sketched as tasks just after this list):

  1. Install Git, build-essential, and Curl via apt-get
  2. Install NodeJS from nodesource.com at the command line
  3. Install some Global NPM packages such as forever, gulp, and gulp-nodemon
  4. Set up our app by copying the one in the repo to our servers
  5. Install the app’s dependencies from package.json (npm install)
  6. Start forever on both servers
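In playbook form, those six steps come out to roughly the tasks below (package names are from the list above; the paths and the NodeSource setup version are my assumptions, so check the repo for the real thing):

tasks:
  - name: install git, build-essential, and curl
    apt:
      name: "{{ item }}"
      state: present
      update_cache: yes
    with_items:
      - git
      - build-essential
      - curl

  - name: install NodeJS from nodesource.com
    shell: curl -sL https://deb.nodesource.com/setup_4.x | bash - && apt-get install -y nodejs
    args:
      creates: /usr/bin/node    # skip if node is already present

  - name: install global npm packages
    npm:
      name: "{{ item }}"
      global: yes
    with_items:
      - forever
      - gulp
      - gulp-nodemon

  - name: copy the app from the repo to the server
    copy:
      src: app/
      dest: /srv/app/           # destination path is an assumption

  - name: install the app's dependencies from package.json
    npm:
      path: /srv/app

  - name: start the app with forever
    command: forever start bin/www    # entry point is an assumption
    args:
      chdir: /srv/app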

I know what you are thinking…

“Wait, are you saying that all of this is done in an automated fashion to BOTH web servers in less than 50 lines of code?”

Yes — yes, I am. It’s awesome power. So. Much. Power.

By putting both web servers in a single Ansible group (web), we can configure them exactly the same.
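How do the freshly launched instances end up in that web group? In the provisioning play, you can register each os_server result and feed its public IP to add_host, along these lines (instance1 is a hypothetical registered variable):

- name: launch instance 1
  os_server:
    name: api-ins-1
    state: present
    image: Ubuntu-14.04
    flavor_ram: 512
  register: instance1

- name: add the new instance to the web group
  add_host:
    name: "{{ instance1.server.public_v4 }}"
    groups: web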

Let’s Launch This!

To run this playbook, we hit the command line and run:

$ source dhc2182418-openrc.sh && ansible-playbook deploy.yml

Waiting… Waiting… Waiting… DONE.

A Snapshot of the NodeJS Project We Made for Testing DreamCompute

Amazing. After less than 10 minutes of runtime, our infrastructure is set up, running our Node app, and sitting behind an HAProxy server that is live to the world.

Performance Testing Our Infrastructure

Setting up the infrastructure is one part of the equation; how the architecture performs is another. Clearly, we aren’t running apples-to-apples benchmarks, but I think it is interesting to look at response times and load performance to get a feel for what DreamCompute can do.

In fact, why don’t we load test this setup until it breaks; then let’s try to scale up our solution and see how quickly we can respond. Sound fun?!

There are a couple of caveats for my testing below:

  1. There is no database server or data storage read/write in our simple NodeJS app.
  2. There is no caching involved (and NodeJS is not running in production mode). This includes any page caching or partial caching, other than what is built into Node. I’m sorry Varnish! =(
  3. We’re using an IP to test our site, so there is no DNS lookup (adds ~200ms to response).

Response Times

For me, it is always interesting to look at response times to get a feel for the performance of a system. Without caching solutions, I have seen some crazy wait times (>10 seconds!) for the first byte. Granted, we are just running a simple NodeJS App that returns a mostly static page, but when you think about it, we are actually doing more than that:

  1. Connect to the IP (Connect Time)
  2. HAProxy round robins to the next NodeJS server (Wait Time)
  3. NodeJS renders the Jade page (Wait Time)
  4. NodeJS returns the HTML via HAProxy (Wait Time)
  5. The data is sent over the wire (Receive)

Therefore, I would consider anything under 100 milliseconds to be very fast.

Response Time Testing on DreamCompute

As you can see, the DreamCompute servers performed extremely well, in my opinion. I wasn’t trying to benchmark CPU performance or strain the system in depth, but for most applications this hardware seems modern and high quality.

Load Testing

Now to break some things! I am going to use Siege to ramp up concurrent requests against our infrastructure and see how many it can handle in parallel for 30 seconds at a time. (In the commands below, -c sets the number of concurrent users, -b runs in benchmark mode with no delay between requests, and -t30s caps each run at 30 seconds.)

I split this into 10 tests, with my goal being to get Siege up to 1,000 concurrent connections.

Tests 1-5: Ramping Up to 500 Concurrent Connections

The following suite of tests was run against the HAProxy server with two of our NodeJS servers behind it in round robin. Let’s see where we break down.

Test 1.
siege -c 5 -b -t30s 'http://208.113.133.112/'
Lifting the server siege...      done.
Transactions:                   2042 hits
Availability:                 100.00 %
Elapsed time:                  29.79 secs
Data transferred:               6.61 MB
Response time:                  0.07 secs
Transaction rate:              68.55 trans/sec
Throughput:                     0.22 MB/sec
Concurrency:                    4.98
Successful transactions:        2042
Failed transactions:               0
Longest transaction:            0.16
Shortest transaction:           0.06
Test 2.
siege -c 20 -b -t30s 'http://208.113.133.112/'
Transactions:                   2949 hits
Availability:                 100.00 %
Elapsed time:                  29.93 secs
Data transferred:               9.55 MB
Response time:                  0.20 secs
Transaction rate:              98.53 trans/sec
Throughput:                     0.32 MB/sec
Concurrency:                   19.87
Successful transactions:        2950
Failed transactions:               0
Longest transaction:            0.44
Shortest transaction:           0.06
Test 3.
siege -c 100 -b -t30s 'http://208.113.133.112/'
Transactions:                   2985 hits
Availability:                 100.00 %
Elapsed time:                  29.83 secs
Data transferred:               9.66 MB
Response time:                  0.96 secs
Transaction rate:             100.07 trans/sec
Throughput:                     0.32 MB/sec
Concurrency:                   96.39
Successful transactions:        2985
Failed transactions:               0
Longest transaction:            2.07
Shortest transaction:           0.06
Test 4.
siege -c 250 -b -t30s 'http://208.113.133.112/'
Transactions:                   3026 hits
Availability:                 100.00 %
Elapsed time:                  29.88 secs
Data transferred:               9.79 MB
Response time:                  2.32 secs
Transaction rate:             101.27 trans/sec
Throughput:                     0.33 MB/sec
Concurrency:                  234.89
Successful transactions:        3026
Failed transactions:               0
Longest transaction:            4.52
Shortest transaction:           0.10
Test 5.
siege -c 500 -b -t30s 'http://208.113.133.112/'
Transactions:                   2957 hits
Availability:                  98.14 %
Elapsed time:                  29.30 secs
Data transferred:               9.58 MB
Response time:                  4.33 secs
Transaction rate:             100.92 trans/sec
Throughput:                     0.33 MB/sec
Concurrency:                  436.80
Successful transactions:        2957
Failed transactions:              56
Longest transaction:           19.09
Shortest transaction:           0.10

Ah, we broke it at around 500 concurrent connections. As you can see, at 250 concurrent connections we handled about 3,000 hits in 30 seconds, or roughly 260 million hits per month. Of course, with traffic peaking at around 500 concurrent connections, we would start having problems.

Since we know that our demo site is going to blow up on Hacker News, let’s scale this up…

Tests 6-7: Scale to 3 NodeJS Servers

By making some minor changes to our YML file, we were able to add an additional server to our infrastructure in just a few minutes. Let me make that clear:

With a minor change to a text file (deploy.yml), we were able to scale our infrastructure!

It blew my mind how easy this was to do and how quickly it happened. Remember, we are running the smallest instances we possibly can.
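For the curious, the “minor change” amounts to one more instance plus one more entry in the load balancer’s server list, roughly like this (the name and address are illustrative):

# deploy.yml: one more NodeJS instance...
- name: launch instance 3
  os_server:
    name: api-ins-3
    state: present
    image: Ubuntu-14.04
    flavor_ram: 512

# ...and one more server in the HAProxy backend
- name: api-ins-3
  address: "10.10.10.13:3000"

Re-run the playbook, and the new box is provisioned, configured, and added to the rotation.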

Test 6.
Transactions:                   4890 hits
Availability:                 100.00 %
Elapsed time:                  30.28 secs
Data transferred:              15.83 MB
Response time:                  2.75 secs
Transaction rate:             161.49 trans/sec
Throughput:                     0.52 MB/sec
Concurrency:                  444.57
Successful transactions:        4891
Failed transactions:               0
Longest transaction:           10.98
Shortest transaction:           0.07
Test 7.
siege -c 750 -b -t30s 'http://208.113.133.112/'
Transactions:                   4822 hits
Availability:                  99.96 %
Elapsed time:                  29.54 secs
Data transferred:              15.61 MB
Response time:                  3.78 secs
Transaction rate:             163.24 trans/sec
Throughput:                     0.53 MB/sec
Concurrency:                  616.25
Successful transactions:        4822
Failed transactions:               2
Longest transaction:           17.49
Shortest transaction:           0.11

Now we are breaking at around 750 concurrent connections and 5,000 requests every 30 seconds.

Well, let’s go for one more round!

Tests 8-10: Adding a 4th NodeJS Server to Our Infrastructure

Since it is so incredibly easy to scale our infrastructure, why not do it one more time?!

Here are the mind-blowing results:

Test 8.
Transactions:                   6920 hits
Availability:                 100.00 %
Elapsed time:                  30.01 secs
Data transferred:              22.40 MB
Response time:                  2.74 secs
Transaction rate:             230.59 trans/sec
Throughput:                     0.75 MB/sec
Concurrency:                  631.53
Successful transactions:        6920
Failed transactions:               0
Longest transaction:           14.53
Shortest transaction:           0.62
Test 9.
Transactions:                   6768 hits
Availability:                 100.00 %
Elapsed time:                  29.22 secs
Data transferred:              21.91 MB
Response time:                  2.98 secs
Transaction rate:             231.62 trans/sec
Throughput:                     0.75 MB/sec
Concurrency:                  689.33
Successful transactions:        6768
Failed transactions:               0
Longest transaction:           16.58
Shortest transaction:           0.06
Test 10.
siege -c 1000 -b -t30s 'http://208.113.133.112/'
Transactions:                   7068 hits
Availability:                 100.00 %
Elapsed time:                  31.82 secs
Data transferred:              22.88 MB
Response time:                  3.08 secs
Transaction rate:             222.12 trans/sec
Throughput:                     0.72 MB/sec
Concurrency:                  683.71
Successful transactions:        7068
Failed transactions:               0
Longest transaction:           15.41
Shortest transaction:           0.07

And we’ve done it!

I tapped out around 683 concurrent connections, because that’s all my laptop could handle.

As you can see, we scaled up to serving 7,000 hits in 30 seconds, which equates to a laughable:

603 million hits per month

I want to reiterate that this is a really simple app… but that is some shocking performance given that there is no caching involved, given how easy it was to set up our infrastructure, and given what a snap it was to scale it.

The Pricing

Okay, so we know that this system is pretty awesome, can scale well, and is simple to use, but what does it cost? As it turns out, DreamCompute’s pricing is close to, if not lower than, the lowest in the market. You pay by the hour, and only for what you use. Better yet, billing is capped at 600 hours per month: an instance left running 24/7 for a 30-day month (720 hours) is billed as if it ran 600 hours, i.e., about 25 days.

Pricing Table for DreamCompute Instances

Again, I think that DreamCompute has exceeded my expectations on every front.

Support

I feel my DreamCompute spiel is getting a little long-winded, so I’ll keep this short. I tried out the support in a number of ways while using DreamCompute and was again pleasantly surprised.

In fact, they give you access to an IRC channel where you can speak directly to the developers and engineers that developed DreamCompute. It’s pretty hard to beat that level of support. In addition, I tried the live chat and was helped in less than a minute by someone who could answer basic questions along with addressing more technical, escalated issues.

Conclusion #2

I can honestly say that the combination of DreamCompute, Ansible, and the OpenStack API exceeded my expectations spectacularly.

It’s rare for me to be completely blown away by something that a web host has released. This time though, I can say that what DreamHost has released in DreamCompute had me spending hours in sheer bliss. I was amazed at what they were providing in such a straightforward way.

Dropping the Mic

Can’t get enough of DreamHost? Check out Part 1 of this article, featuring the crazy-about-open-source culture of the team behind DreamCompute, DreamObjects, and all DreamHost solutions.
