250 GB/day of logs with Graylog: The good, the bad and the ugly


Graylog Architecture
  • Load Balancer: distributes log input (syslog, Kafka, GELF, …)
  • Graylog: Logs receiver and processor + Web interface
  • ElasticSearch: Logs storage
  • MongoDB: Configuration, user accounts and sessions storage

Costs Planning

Hardware requirements

  • Graylog: 4 cores, 8 GB memory (4 GB heap)
  • ElasticSearch: 8 cores, 60 GB memory (30 GB heap)
  • MongoDB: 1 core, 2 GB memory (whatever comes cheap)

AWS bill

 + $ 1656 elasticsearch instances (r3.2xlarge)
 + $  108   EBS optimized option
 + $ 1320   12TB SSD EBS log storage
 + $  171 graylog instances (c4.xlarge)
 + $  100 mongodb instances (t2.small :D)
 = $ 3355
 x    1.1 premium support
 = $ 3690 per month on AWS

GCE bill

 + $  760 elasticsearch instances (n1-highmem-8)
 + $ 2040 12 TB SSD EBS log storage
 + $  201 graylog instances (n1-standard-4)
 + $   68 mongodb (g1-small :D)
 = $ 3069 per month on GCE

GCE is 9% cheaper in total (comparing the raw bills, before AWS premium support; 17% cheaper once support is included). Admire how the bare elasticsearch instances are 55% cheaper on GCE (ignoring the EBS flag and support options).

The gap is narrowed by SSD volumes being more expensive on GCE than on AWS ($0.17/GB vs $0.11/GB). This setup is a huge consumer of disk space, and the higher disk pricing eats part of the savings on instances.

Note: The GCE volume may deliver 3 times the IOPS and throughput of its AWS counterpart. You get what you pay for.

Capacity Planning

Performance (approximate)

  • 1600 log/s average, over the day
  • 5000 log/s sustained, during active hours
  • 20000 log/s burst rate

Storage (as measured in production)

  • 138 906 326 logs per day (averaged over the last 7 days)
  • 2200 GB used, for 9 days of data
  • 1800 bytes/log on average

Our current logs require 250 GB of space per day. 12 TB will allow for 36 days of log history (at 75% disk usage).

We want 30 days of searchable logs. Job done!
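The capacity math above can be double-checked in a few lines (numbers taken from the measurements above):

```python
# Back-of-the-envelope check of the capacity numbers above.
logs_per_day = 138_906_326
bytes_per_log = 1800

daily_gb = logs_per_day * bytes_per_log / 1e9   # ~250 GB/day
avg_rate = logs_per_day / 86_400                # ~1600 log/s average

disk_tb = 12
usable_gb = disk_tb * 1000 * 0.75               # keep 25% headroom on disk
retention_days = usable_gb / daily_gb           # ~36 days

print(f"{daily_gb:.0f} GB/day, {avg_rate:.0f} log/s, {retention_days:.0f} days")
```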



Dunno, never seen it, never used it. Probably a lot of the same.

Splunk Licensing

The Splunk licence is based on the volume ingested in GB/day. Experience has taught us that we usually get what we pay for, therefore we love to pay for great expensive tools (note: ain’t saying Splunk is awesome, don’t know, never used it). In the case of Splunk vs ELK vs Graylog, though, it’s hard to justify the enormous cost against two free tools which are seemingly okay.

We experienced a DoS an afternoon, a few weeks after our initial small setup: 8000 log/s for a few hours while we were planning for 800 log/s.

A few weeks later, the volume suddenly went up from 800 log/s to 4000 log/s again. This time because debug logs and PostgreSQL performance logs were both turned on in production. One team was tracking a Heisenbug while another team felt like doing some performance analysis. They didn’t bother to synchronise.

These unexpected events made two things clear. First, Graylog proved to be reliable and scalable during trial by fire. Second, log volumes are unpredictable and highly variable. A volume-based licensing is a highway to hell, we are so glad to not have had to put up with it.

Judging by the information on Splunk website, the license for our current setup would be in the order of $160k a year. OMFG!

How about the cloud solutions?

One word: No.
Two words: Strong no.

The amount of sensitive information and private user data available in logs makes them the ultimate candidate for not being outsourced, at all, ever.

No amount of marketing from SumoLogic is gonna change that.

Note: We may be legally forbidden to send our log data to a third party, though it would take a lawyer to confirm or deny that for sure.

Log management explained

Feel free to read “Graylog” as “<other solution>”. They’re all very similar with most of the same pros and cons.

What Graylog is good at

  1. debugging & postmortem
  2. security and activity analysis
  3. regulations

Good: debugging & postmortem

Logs let us dive into what happened millisecond by millisecond. They are the first and last resort when it comes to debugging issues in production.

That’s the main reason logs are critical in production. We NEED the logs to debug issues and keep the site running.

Good: activity analysis

Logs give an overview of activity and traffic. For instance, where are most frontend requests coming from? Who connected to ssh recently?

Good: regulations

When we gotta have searchable logs and it’s not negotiable, we gotta have searchable logs and it’s not negotiable. #auditing

What Graylog is bad at

  1. (non trivial) analytics
  2. graphing and dashboards
  3. metrics (à la Graphite)
  4. alerting

Bad: (non trivial) Analytics


1) ElasticSearch can do neither joins nor processing (à la MapReduce)
2) Log fields have weak typing
3) [Many] applications send erroneous or shitty data (e.g. nginx)

Everyone knows that an HTTP status code is an integer. Well, not for nginx: it can log an upstream_status of ‘200‘, or ‘‘ (empty), or ‘503, 503, 503‘. Searching nginx logs is tricky, and statistics fail with NaN (Not a Number) errors.

Elasticsearch itself has weak typing. It tries to detect field types automatically, with variable success (i.e. systematic failure when receiving ambiguous data, defaulting to the string type).

The only workaround is to write field pre/post processors to sanitize inputs, but that is cumbersome when there are unlimited applications and fields, each requiring a unique correction.
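As a sketch of what such a processor has to do (a hypothetical pre-processing step, not Graylog’s actual pipeline API): coerce nginx’s upstream_status into a usable integer, keeping the last status on retries and dropping empty values.

```python
def sanitize_upstream_status(raw: str):
    """Return the last upstream status as an int, or None if absent/garbage.

    nginx may log '200', '' (empty), or '503, 503, 503' (upstream retries,
    joined with ', ').
    """
    if not raw:
        return None
    last = raw.split(",")[-1].strip()   # keep the final status on retries
    try:
        return int(last)
    except ValueError:
        return None                     # '-' or other non-numeric garbage
```

For example, `sanitize_upstream_status("503, 503, 503")` yields `503`, and the empty string yields `None` instead of breaking numeric statistics.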

In the end, poor input data can break simple searches, and the inability to do joins prevents running complex queries at all.

It would be possible to do analytics by sanitizing log data daily and saving the result to BigQuery/RedShift, but it’s too much effort. Better to go for a dedicated analytics solution, with a good data pipeline (i.e. NOT syslog).

Lesson learnt: Graylog doesn’t replace a full-fledged analytics service.

Bad: Graphing and dashboards

Graylog doesn’t support many kinds of graphs. It’s either “how many logs per minute” or “most common values of that field” over the past X minutes. (There will be more graphs as the product matures, hopefully.) We could make dashboards, but we’re lacking interesting graphs to put into them.

edit: graylog v2 is out, it adds automatic geolocation of IP addresses and a map visualization widget.

Bad: Metrics and alerting

Graylog is not meant to handle metrics. It doesn’t gather metrics, and the graph and dashboard capabilities are too limited to make anything useful even if metrics were present. The alerting capability is [almost] non-existent.

Lesson learnt: Graylog is NOT a substitute for a monitoring system. It does not compete with Datadog or StatsD.

Special configuration

ElasticSearch field data

indices.fielddata.cache.size: 20%

By design, field data are loaded in memory when needed and never evicted. They fill the memory until an OutOfMemory exception is thrown. It’s not a bug, it’s a feature.

It’s critical to configure a cache limit to stop that “feature“.

Reference: https://www.elastic.co/guide/en/elasticsearch/reference/current/circuit-breaker.html

ElasticSearch shards are overrated

elasticsearch_shards = 1
elasticsearch_replicas = 1

Sharding splits an index logically [a shard is equivalent to a virtual index]. Operations on an index are transparently distributed and aggregated across its shards. This architecture allows horizontal scaling by distributing shards across nodes.

Sharding makes sense when a system is designed to use a single [big] index. For instance, a 50 GB index for http://www.apopularforum.com can be split into 5 shards of 10 GB each and run on a 5-node cluster. (Note that a shard MUST fit in the Java heap for good performance.)

Graylog (and ELK) has a special mode of operation (inherent to log handling) where new indices are created periodically. There is thus no need to shard each individual index: the architecture is already sharded at a higher level (across indices).

Log retention MUST be based on size

Retention = retention criteria * maximum number of indexes in the cluster.

e.g. 1GB per index * 1000 indices =  1TB of logs are retained

The retention criteria can be a maximum time period [per index], a maximum size [per index], or a maximum document count [per index].

The ONLY viable retention criteria is to limit by maximum index size.

The other strategies are unpredictable and unreliable. Imagine a “fixed rotation every 1 hour” setting, the storage and memory usage of the index will vary widely at 2-3am, at daily peak time, and during a DDoS.
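In Graylog 1.x this is configured in server.conf (newer versions expose the same settings in the web interface; exact option names may differ between versions). A sketch matching the 1 GB × 1000 indices example above:

```
# server.conf (Graylog 1.x style; illustrative values)
rotation_strategy = size
elasticsearch_max_size_per_index = 1073741824    # 1 GB per index
elasticsearch_max_number_of_indices = 1000       # ~1 TB retained in total
retention_strategy = delete                      # drop the oldest index
```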

mongodb and small files

smallfiles: true

mongodb is used for storing settings, user accounts and tokens. It’s a small load that can be accommodated by small instances.

By default, mongodb is preallocating journals and database files. Running an empty database takes 5GB on disk (and indirectly memory for file caching and mmap).

The configuration to use smaller files (e.g. 128MB journal instead of 1024MB) is critical to run on small instances with little memory and little disk space.

elasticsearch is awesome

elasticsearch is the easiest database to set up and run in a cluster.

It rebalances automatically, it shards, it scales, and nodes can be added or removed at any time. It’s awesome.

Elasticsearch drops consistency in favour of uptime. It will continue to operate in most circumstances (in ‘yellow’ or ‘red’ state, depending on whether replicas are available for recovering data) and try to self-heal. In the meantime, it ignores the damage and works with a partial view.

As a consequence, elasticsearch is unsuitable for high-consistency use cases (e.g. managing money), which must stop on failure and provide transactional rollback. It’s awesome for everything else.

mongodb is the worst database in the universe

There is extensive documentation about mongodb fucking up, being unreliable and destroying all data.

We came to a definitive conclusion after spending (and wasting) a lot of time with mongodb, in a clustered setup, in production. All the shit said about mongodb is true.

We stopped counting the bugs, the configuration issues, and the number of times the cluster got deadlocked or corrupted (sometimes both).

Integrating with Graylog

The ugly unspoken truth of log management is that having a solution in place is only 20% of the work; most of the work is then integrating applications and systems into it. Sadly, it has to be done one at a time.

JSON logs

The way to go is JSON logs. The JSON format is clean, simple and well defined.

Reconfigure application libraries to send JSON messages. Reconfigure middleware to log JSON messages.
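For application code, a minimal sketch with Python’s standard logging module (the field names and logger setup here are illustrative, not a prescribed schema):

```python
import json
import logging

class JsonFormatter(logging.Formatter):
    """Render each log record as a single JSON line."""
    def format(self, record):
        return json.dumps({
            "timestamp": self.formatTime(record),
            "level": record.levelname,
            "logger": record.name,
            "message": record.getMessage(),
        })

handler = logging.StreamHandler()
handler.setFormatter(JsonFormatter())
logger = logging.getLogger("app")
logger.addHandler(handler)
logger.setLevel(logging.INFO)

logger.info("user logged in")
# emits one JSON line, e.g.:
# {"timestamp": "...", "level": "INFO", "logger": "app", "message": "user logged in"}
```

The same idea applies to middleware: the nginx configuration below builds an equivalent JSON line field by field.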


log_format json_logs '{ '
 '"time_iso": "$time_iso8601",'

 '"server_host": "$host",'
 '"server_port": "$server_port",'
 '"server_pid": "$pid",'

 '"client_addr": "$remote_addr",'
 '"client_port": "$remote_port",'
 '"client_user": "$remote_user",'

 '"http_request_method": "$request_method",'
 '"http_request_uri": "$request_uri",'
 '"http_request_uri_normalized": "$uri",'
 '"http_request_args": "$args",'
 '"http_request_protocol": "$server_protocol",'
 '"http_request_length": "$request_length",'
 '"http_request_time": "$request_time",'

 '"ssl_protocol": "$ssl_protocol",'
 '"ssl_session_reused": "$ssl_session_reused",'

 '"http_header_cf_ip": "$http_cf_connecting_ip",'
 '"http_header_cf_country": "$http_cf_ipcountry",'
 '"http_header_cf_ray": "$http_cf_ray",'

 '"http_response_size": "$bytes_sent",'
 '"http_response_body_size": "$body_bytes_sent",'

 '"http_content_length": "$content_length",'
 '"http_content_type": "$content_type",'

 '"upstream_server": "$upstream_addr",'
 '"upstream_connect_time": "$upstream_connect_time",'
 '"upstream_header_time": "$upstream_header_time",'
 '"upstream_response_time": "$upstream_response_time",'
 '"upstream_response_length": "$upstream_response_length",'
 '"upstream_status": "$upstream_status",'

 '"http_status": "$status",'
 '"http_referer": "$http_referer",'
 '"http_user_agent": "$http_user_agent"'
 ' }';
access_log syslog:server=,severity=notice json_logs;
 error_log syslog:server= warn;


We use syslog-ng to deliver system logs to Graylog.

options {
  # log with microsecond precision
  # detect dead TCP connections
  # DNS failover
};

destination d_graylog {
  # DNS balancing
  syslog("graylog-server.internal.brainshare.com" transport("tcp") port(1514));
};


It is perfectly normal to spend 10-20% of infrastructure costs on monitoring.

Graylog is good. Elasticsearch is awesome. mongodb sucks. Splunk costs an arm (or two). Nothing new in the universe.

From now on, applications should log messages in JSON format. That’s the best way to extract meaningful information from them.

HackerRank Testing: A glimpse at the company side

HackerRank is an online coding platform. It provides coding tests and questions for companies to screen candidates.

We remember the first time we had to take a test (before joining the company), unsure what the expectations were. Later, we were designing new tests (after joining the company), unsure what to expect from candidates.

We decided to release some insights from our experience, full disclosure. How well are people doing? How is the test evaluated?

Hopefully, that will give everyone a better understanding of what is going on.


hr funnel
Last month – 79 candidates

Do or not do, there is no try

We invited 79 people to do the test in the last month… 29% of them never tried.

On the bright side, the more candidates who kick themselves out, the more time we can dedicate to the remaining ones.

You can be a top 71% performer by simply trying! =D


We inaugurated a new test last week and 5 candidates did it over the weekend. They happen to be a representative sample:

  1. Didn’t attempt any of the coding exercises
  2. Answered all coding exercises with “return true” or equivalent algorithm.
  3. Answered exercises not with code but with comments about the train’s Wi-Fi being terrible, especially after the train started moving
  4. Had trouble solving the SSH-to-our-server exercise without sudo, until he hacked the webserver with a fresh 0-day to elevate his privileges.
  5. Answered all simple questions with simple algorithms, didn’t finish the hard one.

Three failed and two passed. It’s self-evident who is who.

Highest bang for the buck

There is no other form of screening that can scale as well as HackerRank. It is also the fairest interview process since it never discriminates on age, race, years of experience, school or anything.

Designing a test takes a few days.

We pay $5 per invitation and the correction takes 5-15 minutes.

Hall of Shame

Internet is required to complete the test

One candidate tried to do the test on a laptop, in a moving train, over the train’s Wi-Fi. It didn’t go well, and he sent us a long email to complain right after the test.

On the bright side, he wrote long comments in English. On the dark side, he didn’t code any of the simple things (which required neither internet nor documentation), and all that writing proves the connection was not that bad.

We considered giving him a second chance, then just dropped the case after much confusion and more emails.

Did he think internet access was unnecessary to reach http://www.hackerrank.com? Is connectivity usually good on trains? Does he do the same for Skype interviews? We don’t know and we never will. We are still puzzled to this day.

We’ve added a note to our introductory email to clarify: “Internet access is required, for the whole duration of the test“.

“return true” is NOT the ultimate answer to everything

We see a lot of stupid answers, probably just to grab some points.

class Solution {
    // str : firstname|lastname|phonenumber|address|zipcode|country
    boolean filter(String str) {
        return true;
    }

    int max(int[] array, int size) {
        return array[0];
    }
}
Booleans are a 50-50 shot by the law of probability; integers can get lucky with 0 or -1, arrays with the first or last element.

Passing 50% of the tests is good value for the time invested, but it won’t survive a code review. (Not to mention that 80% of the points could be on the harder test cases.)

Tip and tricks for candidates


As a candidate, you cannot see the unit tests content, the edge cases or the complexity expected.

The question gives bounds on the input size. The title and tags give a hint about the expected solution (e.g. dynamic programming). Read them wisely.

64 bits integer

Many questions require 64-bit integers but it’s NEVER mentioned. Default to 64-bit integers whenever there is an array with thousands of integers and some additions (e.g. all trading-like and number-crunching questions).
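To see why, a quick sanity check (sketched in Python, whose integers don’t overflow, which makes the arithmetic easy to show): summing a plausible trading-question input already blows past the 32-bit range.

```python
# 100,000 prices around 1,000,000 each: an ordinary-looking question input.
n, price = 100_000, 1_000_000
total = n * price                # 100,000,000,000

int32_max = 2**31 - 1            #  2,147,483,647
int64_max = 2**63 - 1

print(total > int32_max)   # True: a 32-bit accumulator silently overflows
print(total < int64_max)   # True: a 64-bit accumulator is fine
```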

Unit Tests

The unit tests are NOT ordered in ascending difficulty and they may have limited variety.

For instance, if there are 8 tests (excluding examples), that could be one test with a single number, a few tests with 50 MB of input data, and the rest expecting 64-bit results.

A slight difference in complexity or an unhandled edge case can flip many tests at once.


A test case has between 1 and 5 seconds to run (depending on the language). A “timeout error” on a test means it didn’t finish in the given time and was terminated. Gotta write faster code.

All your code is reviewed

On the recruiter interface, we can see the code that was submitted, we have the input and the output of all test cases. Including errors and partial output.

We review everything, we evaluate algorithms, we evaluate complexity, we read comments, we consider special hacks/tricks, we check edge cases.


HackerRank gives points per question and per successful unit test. We get a general sense of completion when we open the review window (“x/300 points”), but ultimately the decision comes down to the code review.

Time Spent

We have an overview of the time spent on the test.

hr test time report
1-4: multiple choice questions, 5-8: coding exercises, total: 60 minutes

HackerRank is simple

Whatever a test contains, the candidate will usually advance to the next round if he can answer some of the coding exercises.

A developer should be able to code some solutions to some [simple] problems. That’s exactly what HackerRank is testing.

HackerRank is good for everyone

Once in a while there is a company with a crazy impossible test that rejects everyone. The company would do the same face-to-face. You just avoided an awkward 4-hour on-site interview.

Sample Test

There is only one important thing to do before attempting a test: try the sample test to familiarize yourself with the platform and ensure everything is working.


Recruiting takes a huge amount of effort from everyone involved. HackerRank’s purpose is to save a lot of time and effort by weeding out people earlier [especially utterly unqualified people]. Most of them would fail the same way in a phone or face-to-face interview.

It’s good and it’s extremely effective. It can replace the initial phone screen.


Cracking the HackerRank Test: 100% score made easy


It’s well known that most programmer wannabes can’t code their way out of a paper bag. Thus the tech industry is pushing for longer, harder and ever more extreme screening.

The whiteboard interview has been the standard for a while, followed by puzzles [now abandoned], then FizzBuzz.

The latest fad is HackerRank. It introduces automated programming tests to be done by the candidate before he’s allowed to talk to anyone in the company.

A lot of very good companies are using HackerRank as a pre-screening tool. If we can’t avoid it, we gotta embrace it.

What to find in a HackerRank test?

There are 3 types of questions to be encountered in a test:

  • Multiple Choice Questions: “What is the time complexity to find an element in a red-black tree?” -A- -B- -C- -D-
  • Coding Exercise: “Long description of a problem to be solved, input data format, output data format.” Start coding a solution.
  • SudoRank Exercise: “Your ssh credentials are tester:QWERTUIOP@ <long description of a task to be accomplished>.” SSH to the server and start fixing.

Any number of questions of any type can be combined in any order to make a complete test. A company should give some indication of what to expect in its test.

HackerRank provides hundreds of ready-to-use questions and exercises. It’s also possible (and recommended) for the company to write its own.

Defeating Multiple Choice Questions

The majority of multiple choice questions can be solved with an appropriate Google search, usually on the title, sometimes on a few select words from the text.

hr question dropping privileges
Select Text => Right Click => Quick Search


hr google dropping privileges
Google has spoken! => all in favour of setuid()

Defeating Coding Exercises

The HackerRank website blocks copy/paste, and searching for a 10-line paragraph is not exactly an option.

The workaround is to search for the title of the exercise. A title uniquely identifies a question on HackerRank. It will be mentioned in related solutions and blog posts. Perfect for being indexed by Google.

hr question lonely integer
Select Text => Right Click => Quick Search


hr google lonely integer.png

The first result is the question, the second result is the solution. Well, that was easy.

Bonus: That google solution is actually wrong… yet it gives all the points.

// [boilerplate omitted]
int main() {
    int N;
    cin >> N;
    int tmp, result = 0;
    for (int i = 0; i < N; i++) {
        cin >> tmp;
        result ^= tmp;
    }
    cout << result;
    return 0;
}
This solution only works if duplicated numbers come in pairs. All the HackerRank unit tests happen to fit that criterion by pure coincidence.
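A quick demonstration of the flaw (a Python sketch, since XOR cancels values appearing an even number of times, any value appearing three times breaks it):

```python
from collections import Counter
from functools import reduce
from operator import xor

def lonely_integer_xor(nums):
    # The googled solution: XOR everything together.
    return reduce(xor, nums)

def lonely_integer_counted(nums):
    # Robust version: the value that appears exactly once.
    return next(v for v, c in Counter(nums).items() if c == 1)

pairs = [4, 9, 4, 6, 9]     # duplicates come in pairs: XOR works
triple = [5, 5, 5, 7]       # 5 appears three times: XOR fails

print(lonely_integer_xor(pairs))       # 6  (correct)
print(lonely_integer_xor(triple))      # 2  (wrong: 5 ^ 7)
print(lonely_integer_counted(triple))  # 7  (correct)
```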

Originally, we put this simple question at the beginning of a test for warm-up. We received that answer from a candidate soon thereafter. It is unlikely that anyone would come up with an algorithm that convoluted given only the text of the question. A quick investigation revealed the source.

Update: The “Lonely Integer” question is worded slightly differently on the public HackerRank site and in the private HackerRank library, but the input, output and unit tests are the same. HackerRank is obviously copying questions from the community into its private library. That’s another copycat spotted.

Recruiter Insights: Cheating brought to the next level

We have a lot of candidates coming from recruiters. How are they comparing to candidates from other sources?

Let’s see the statistics on a hard question [i.e. dynamic programming trading algorithm].

hr insights stock maximize distribution
Distribution over all attempts, by all companies: 1234 zero scores vs 303 full scores (log scale)

Most candidates get 0 points: they ran out of time, couldn’t answer, used the wrong algorithm, or submitted an incomplete/partial solution (i.e. a good start but not enough to pass any unit test yet).

Note: We wanted to show the same distribution over our pool of candidates but HackerRank doesn’t provide that graph anymore. It used to. 😦

Anyway, we remember approximate numbers. Our distribution is about 50/50 between the two extremes. That’s far better than the 80/20 of the general sample. We can correlate that with time spent on the question and with the code review as well.

Truth is: candidates coming from recruiters perform better, especially on hard exercises. In fact, it is unbelievable how much better they perform!

The conclusion is simple. Our recruiters give away the test to the candidates.

Lesson learnt:

  • For candidates: Remember to ask the recruiter for support before the test.
  • For recruiters: Remember to coach the candidate for the test and instruct him to write down changes (if any).
  • For companies: Beware of high-scoring candidates coming from recruiters! In particular, don’t calibrate scoring on extreme scores from a few cheaters.

Challenge: How long does it take you to solve a trading challenge? [dynamic programming, medium difficulty]

Custom HackerRank Tests

Companies can write custom exercises and they should. It’s hard and it requires particular skills but it is definitely worthwhile.

It is the only effective defence against Google, if done carefully. (It’s actually surprisingly difficult to design exercises that are both simple and not easily found in a thousand tutorials and coding forums.)

Sadly, it won’t help against recruiters (excluding the first batch of candidates, whom they will sacrifice as scouts).

Conclusion: Did we just ruin HackerRank pre-screening?

Of course not! There is a never ending supply of bozos unable to tell the difference between Internet and Internet Explorer.

We could write a book teaching the answers to 90% of programming interview problems, yet 99% of job seekers would never read it. Hell, it’s been written for a while, and it has had no impact whatsoever.

Only the handful of devs who follow blogs/news or search for “What is HackerRank?” will come better prepared.

If anything, this article makes HackerRank better and more relevant. Now a test is about looking for help on Google and fixing subtly broken snippets of unindented code written in the wrong language.

HackerRank is finally screening for capabilities relevant to the job! =D

A typical cost comparison between GCE and AWS


To complete our article about why AWS is a total rip-off and GCE is better in every aspect, let’s do a basic cost comparison between the two.

Common Usage

NoSql Database

Let’s take a NoSQL database, part of a bigger cluster. It needs high memory, multiple CPUs and some disk space. It’s intended to scale horizontally and can tolerate a dead or slow node at times. No need for anything too fancy.

AWS r3.xlarge
– 4 CPU
– 26 GB memory
– 1000 GB of EBS GP2 volume (remote SSD)
– 3000 IOPS advertised out of the box
– (a bigger drive is mandatory or performance will be abysmal)

GCE n1-highmem-4
– 4 CPU
– 30 GB memory
– 500 GB of Google SSD persistent volume (remote SSD)
– 15000 IOPS advertised out of the box
– (it’s really 5 times more IOPS, ain’t a typo)

   AWS r3.xlarge                       GCE n1-highmem-4
=======================             =======================
+ $267 instance                     + $200 instance
+ $110 disk                         + $ 85 disk
=======================             =======================
* 1.1 premium support               - $ 60 usage discount
=======================             =======================
= $415 /month                       = $225 /month

GCE is 46% cheaper than AWS.

SQL Database, scaling vertically

Let’s take the main PostgreSQL database. It needs high memory, multiple CPUs, lots of space and high IO. It can only scale vertically and IO is absolutely critical. We want at least 80 GB of memory and 1 TB of high-performance SSD.

AWS i2.4xlarge
– 16 CPU
– 122 GB memory
– 4* 800GB local SSD (raid 10)
– (Only the i2 instance family has large local SSD)
– (i2 instance prices include the local SSDs)

GCE n1-highmem-16
– 16 CPU
– 104 GB memory
– 6* 375 GB local SSD (raid 10)
– (Attach as many 375GB local SSD as you want to any kind of instances)

    AWS i2.4xlarge                      GCE n1-highmem-16
=======================             =======================
+ $2700 instance                    + $ 800 instance
+ $   0 local SSD included          + $ 490 local SSD
=======================             =======================
*   1.1 premium support             - $ 242 usage discount
=======================             =======================
= $2970 per month                   = $1048 per month

GCE is 65% cheaper than AWS!!!

Cost Conclusion

EVERYTHING is cheaper on GCE, the difference is especially dramatic on the bigger hosts.

A typical company runs a variety of production systems. Without knowing the exact load, you can still apply an 80/20 rule: the 20% biggest instances account for 80% of the bill.

These two examples are the kind of discount to expect on 80% of your bill by using GCE instead of AWS.

Frequently Asked Questions

Question: You’re not using reserved instances.
Answer: That is correct. We cannot guarantee that the same instances will still be needed 8 months from now (it takes ~8 months to break even on reserved instances). It is a high-risk investment promising only a limited discount.
Personal Tip: Be mindful of the “reserved instances” marketing hype. Practice has taught us repeatedly (and painfully) that it is extremely difficult to guess future capacity right when managing more than 10 instances (let alone 100). We recommend never counting on more than 50% reservations in a cost analysis.
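A sketch of the break-even arithmetic with hypothetical prices (the exact figures vary per instance type, region and reservation term):

```python
# Hypothetical prices for one instance type (illustrative only).
on_demand_monthly = 200.0   # pay-as-you-go cost per month
upfront = 1200.0            # 1-year reservation, paid upfront
reserved_monthly = 50.0     # discounted running cost per month

# Month m at which the reservation becomes cheaper than on-demand:
#   upfront + reserved_monthly * m < on_demand_monthly * m
break_even = upfront / (on_demand_monthly - reserved_monthly)
print(break_even)  # 8.0: retire the instance before month 8 and you lose money
```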

Question: Why pay for support?
Answer: The support is required for many issues and edge cases (a few listed here (HN)). As a business running our entire operations in the cloud, we encounter them frequently and thus are forced to pay for the premium support.
Personal Tip: If you have limited experience with AWS billing and you’re planning to run more than 10 instances alongside a few managed services (ELB, RDS), we highly recommend planning for premium support in the budget.

Question: The AWS side has more disks. This is unfair.
Answer: For EBS volumes, the disk sizes (and sometimes also the instance types) need to be over provisioned to get a comparable latency and throughput.
For local SSDs, only one instance family provides them on AWS. It is simply not possible to fit needs tightly with only 4 options available.

Question: What if I need less CPU, less memory or less disk?
Answer: AWS doesn’t have the granularity to change any single parameter, whereas GCE does. That makes the GCE bill cheaper, not the AWS one.

Question: What if my load fits EXACTLY one of the predefined AWS instance types AND I reserve it for 1 entire year in advance, paid full-upfront, AND I don’t need any support nor dedicated..
Answer: Google is still 5% and 14% cheaper, respectively (for the two examples above).


GCE vs AWS in 2016: Why you should NEVER use Amazon!


This story relates my experience at a typical web startup. We are running hundreds of instances on AWS, and we’ve been doing so for some time, growing at a sustained pace.

Our full operation is in the cloud: webservers, databases, micro-services, git, wiki, BI tools, monitoring… That includes everything a typical tech company needs to operate.

We have a few switches and a router left in the office to provide internet access and that’s all, no servers on-site.

The following highlights many issues encountered day to day on AWS, so that [hopefully] you don’t make the same mistakes we did by picking AWS.

What does the cloud provide?

There are a lot of clouds: GCE, AWS, Azure, Digital Ocean, RackSpace, SoftLayer, OVH, GoDaddy… Check out our article Choosing a Cloud Provider: AWS vs GCE vs SoftLayer vs DigitalOcean vs …

We’ll focus only on GCE and AWS in this article. They are the two major, fully featured, shared-infrastructure IaaS offerings.

They both provide everything needed in a typical datacenter.

Infrastructure and Hardware:

  • Get servers with various hardware specifications
  • In multiple datacenters across the planet
  • Remote and local storage
  • Networking (VPC, subnets, firewalls)
  • Start, stop, delete anything in a few clicks
  • Pay as you go

Additional Managed Services (optional):

  • SQL Database (RDS, Cloud SQL)
  • NoSQL Database (DynamoDB, Big Table)
  • CDN (CloudFront, Google CDN)
  • Load balancer (ELB, Google Load Balancer)
  • Long term storage (S3, Google Storage)

Things you must know about Amazon

GCE vs AWS pricing: Good vs Evil

Real costs on the AWS side:

  • Base instance plus storage cost
  • Add provisioned IOPS for databases (normal EBS IO are not reliable enough)
  • Add local SSD ($675 per 800 GB + 4 CPU + 30 GB memory, ALWAYS ALL together)
  • Add 10% on top of everything for Premium Support (mandatory)
  • Add 10% for dedicated instances or dedicated hosts (if subject to regulations)

Real costs on the GCE side:

  • Base instance plus storage cost
  • Enjoy dependable IO out-of-the-box with Google remote SSD volumes
  • Optionally add local SSD ($82 per 375 GB, attachable to any existing instance)
  • Automatic discount for sustained usage (up to 30% for instances running 24/7)

AWS IO are expensive and inconsistent

EBS SSD volumes: IOPS, and P-IOPS

We are forced to pay for Provisioned-IOPS whenever we need dependable IO.

The P-IOPS are NOT really faster. They are slightly faster, but most importantly they have lower variance (i.e. more consistent 90th-99.9th percentile latency). This is critical for some workloads (e.g. databases) because normal IOPS are too inconsistent.

Overall, P-IOPS can get very expensive, and they are pathetic compared to what any drive can do nowadays ($720/month for 10k P-IOPS, on top of $0.14 per GB).
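To put numbers on it, here is a back-of-the-envelope sketch using the rates quoted above (this article's figures, not an official price list):

```python
# Estimate the monthly cost of an EBS Provisioned-IOPS volume,
# using the approximate rates quoted in this article.
PIOPS_RATE = 720 / 10_000   # $ per provisioned IOPS per month
GB_RATE = 0.14              # $ per GB per month

def piops_monthly_cost(size_gb: float, iops: int) -> float:
    """Monthly cost in dollars for a P-IOPS volume of `size_gb` GB."""
    return size_gb * GB_RATE + iops * PIOPS_RATE

# A 1 TB database volume with 10k provisioned IOPS:
print(round(piops_monthly_cost(1000, 10_000), 2))  # 860.0
```

Close to $900/month for a single database volume, before the instance itself is even counted.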

Local SSD storage

Local SSD storage is only available via the i2 instance family, the most expensive instances on AWS (and across all clouds).

There is no granularity possible: CPU, memory and SSD storage all DOUBLE between the few i2.xxx instance types available. Each step adds another bundle of 4 CPUs + 30 GB memory + 800 GB SSD, at roughly $765/month per bundle.

These limitations make local SSD storage expensive to use and awkward to manage.
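The doubling is easy to write down. A sketch of the i2 ladder, using this article's approximate figures (4 CPUs + ~30 GB memory + 800 GB SSD and ~$765/month at the baseline):

```python
# Sketch of the i2 family ladder as described above: every size class
# doubles CPU, memory and SSD together (approximate figures, not a price list).
def i2_ladder(steps: int = 4):
    cpu, mem_gb, ssd_gb, price = 4, 30, 800, 765  # i2.xlarge-ish baseline
    ladder = []
    for _ in range(steps):
        ladder.append((cpu, mem_gb, ssd_gb, price))
        # no intermediate sizes exist: everything doubles at once
        cpu, mem_gb, ssd_gb, price = cpu * 2, mem_gb * 2, ssd_gb * 2, price * 2
    return ladder

for cpu, mem, ssd, price in i2_ladder():
    print(f"{cpu:3d} vCPU  {mem:4d} GB RAM  {ssd:5d} GB SSD  ~${price}/month")
```

Notice there is nothing between two rungs: if you need 5 CPUs or 1 TB of SSD, you pay for the next full doubling.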

AWS Premium Support is mandatory

The premium support is +10% on top of the total AWS bill (i.e. EC2 instances + EBS volumes + S3 storage + traffic fees + everything).

Handling spikes in traffic

ELB and S3 cannot handle sudden spikes in traffic. They need to be scaled manually by support beforehand.

An unplanned event is a guaranteed 5 minutes of unreachable site with 503 errors.

Handling limits

All resources are artificially limited by a hardcoded quota, which is very low by default. Limits can only be increased manually, one by one, by sending a ticket to the support.

I cannot fully express the frustration of trying to spawn two c4.large instances (we already had 15) only to fail because “limit exhaustion: 15 c4.large in eu-central region“. We message support and wait through a day of back-and-forth emails. Then we try again and fail again because “limit exhaustion: 5TB of EBS GP2 in eu-central region“.

This circus goes on every few weeks, sometimes hitting 3 limits in a row. There are limits for all resources, by region, by availability zone, by resource types and by resource specifics criteria.

Paying guarantees a 24h SLA for a reply to a limit ticket. Free-tier users might have to wait a week (maybe more), unable to work in the meantime. It is an absurd yet very real reason to pay for premium support.

Handling failures on the AWS side

There is NO log and NO indication of what's going on in the infrastructure. Support has to be called whenever something goes wrong.

For example, an ELB started dropping requests erratically. After we contacted support, they acknowledged that they had no idea what was going on and took action: “Thank you for your request. One of the ELB was acting weird, we stopped it and replaced it with a new one“.

The issue was fixed. Sadly, they don't provide any insight or meaningful information. This is a strong pain point for debugging and for planning against future failures.

Note: We are barring further managed services from being introduced in our stack. At first they were adopted because they were easy to set up (read: limited human time and a bit of curiosity). They soon proved to cause periodic issues while being impossible to debug and troubleshoot.

ELB are unsuitable for many workloads

[updated paragraph after comments on HN]

ELB are only accessible with a hostname. The underlying IPs have a TTL of 60s and can change at any minute.

This makes ELB unsuitable for any service requiring a fixed IP and any service that resolves the IP only once at startup.
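The failure mode is easy to sketch. Below is a toy simulation (made-up hostname and IPs, no real network calls) contrasting a resolve-once client with one that honours the TTL:

```python
# Toy simulation of why resolve-once clients break behind an ELB.
# The hostname, DNS table and IPs are made up for illustration.
import time

DNS = {"myapp-elb.example.com": "10.0.0.1"}  # mutable, like real ELB records

class ResolveOnceClient:
    def __init__(self, host):
        self.ip = DNS[host]        # resolved once at startup, never again
    def target(self, host):
        return self.ip             # keeps hitting a possibly dead IP

class TTLClient:
    def __init__(self, ttl=60):
        self.ttl = ttl
        self.expires = 0.0
        self.ip = None
    def target(self, host):
        now = time.monotonic()
        if self.ip is None or now >= self.expires:
            self.ip = DNS[host]    # re-resolve after the TTL expires
            self.expires = now + self.ttl
        return self.ip

host = "myapp-elb.example.com"
once = ResolveOnceClient(host)
fresh = TTLClient(ttl=0)           # ttl=0 forces a lookup every call, for the demo
DNS[host] = "10.0.0.99"            # AWS swaps the ELB node behind the hostname
print(once.target(host), fresh.target(host))  # stale IP vs current IP
```

The resolve-once client is still sending traffic to the retired node; the TTL-respecting client follows the rotation.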

ELB are impossible to debug when they fail (and they do fail), they can't handle sudden spikes, and the CloudWatch graphs are terrible. (Truth be told, we are paying Datadog $18/month per node to entirely replace CloudWatch.)

Load balancing is a core aspect of high-availability and scalable design. Redundant load balancing is the next one. ELB are not up to the task.

The alternative to ELB is to deploy our own HAProxy in pairs with VRRP/keepalived. It takes multiple weeks to set up properly and deploy in production.

By comparison, we can achieve that with Google load balancers in a few hours. A Google load balancer can have a single fixed IP. That IP can go from 1k to 10k requests per second instantly without losing traffic. It just works.

Note: Today, we've seen one service in production go from 500/s to 15000/s in less than 3 seconds. We don't trust an ELB to be in the middle of that.

Dedicated Instances

Dedicated instances are Amazon EC2 instances that run in a virtual private cloud (VPC) on hardware that’s dedicated to a single customer. Your Dedicated instances are physically isolated at the host hardware level from your instances that aren’t Dedicated instances and from instances that belong to other AWS accounts.

Dedicated instances/hosts may be mandatory for some services because of legal compliance, regulatory requirements and not-having-neighbours.

We have to comply with a few regulations, so we have a few dedicated options here and there. It's 10% on top of the instance price (plus a $1500 fixed monthly fee per region).

Note: Amazon doesn't explain in great detail what “dedicated“ entails and doesn't commit to anything clear. Strangely, no regulator has pointed that out so far.

Answer to HN comments: Google doesn't provide “GCE dedicated instances“. There is no need for them. The trick is that regulators and engineers don't complain about lacking an option that was never offered.

Reserved Instances are bullshit

A reservation is attached to a specific region, availability zone, instance type, tenancy, and more. In theory the reservation can be edited; in practice it depends on what you change. Some combinations of parameters are editable, others are not. Plan carefully; better to get it right on the first try.

Every hour of a reservation is paid for across the whole year, whether the instance is running or not.

The discount is small. A single misplaced reservation among many can easily cancel all the savings made so far. Be prepared to spend long days reviewing the entire infrastructure beforehand, then more days moving services around to correct mistakes afterwards.

Keep in mind that reserved instances will NOT benefit from the regular price drop happening every 6-12 months.

What GCE does by comparison is a PURELY AWESOME MONTHLY AUTOMATIC DISCOUNT. Instance hours are counted at the end of every month and the discount is applied automatically (e.g. 30% for instances running 24/7). The algorithm also combines multiple started/stopped/renewed instances in a non-trivial way that is STRONGLY in your favour.
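A sketch of how the sustained-use discount was computed at the time (my reading of the published scheme, not Google's code): usage within a month is billed in quartiles at decreasing rates, which nets out to a 30% discount for an instance running 24/7:

```python
# Sketch of GCE's 2016 sustained-use discount: usage within a month is
# billed in quartiles at decreasing rates (100%, 80%, 60%, 40% of list price).
QUARTILE_RATES = [1.0, 0.8, 0.6, 0.4]

def sustained_use_multiplier(fraction_of_month: float) -> float:
    """Effective price multiplier for an instance used this fraction of the month."""
    billed = 0.0
    for i, rate in enumerate(QUARTILE_RATES):
        lo, hi = i * 0.25, (i + 1) * 0.25
        # bill only the slice of usage falling inside this quartile
        billed += rate * max(0.0, min(fraction_of_month, hi) - lo)
    return billed / fraction_of_month  # average rate actually paid

print(round(sustained_use_multiplier(1.0), 2))   # 0.7  -> 30% off for 24/7
print(round(sustained_use_multiplier(0.25), 2))  # 1.0  -> no discount yet
```

No tickets, no upfront payment, no planning: the discount just shows up on the bill.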

AWS Networking is sub-par

Network bandwidth allowance is correlated with the instance size.

The 1-2 core instances peak around 100-200 Mbps. That is very little in an ever more connected world where so many things rely on the network.

Typical things experiencing slow down because of the rate limited networking:

  • Instance provisioning, OS install and upgrade
  • Docker/Vagrant image deployment
  • rsync/sftp/ftp file copying
  • Backups and snapshots
  • Load balancers and gateways
  • General disk read/writes (EBS is network storage)

Our most important backup takes 97 seconds to copy from the production host to another site. Half the time is spent saturating the network (130 Mbps bandwidth cap), the other half saturating the EBS volume on the receiving host (the file is buffered in memory during the initial transfer, then it's 100% iowait against the EBS bandwidth cap).

The same backup operation would only take 10-20 seconds on GCE with the same hardware.
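The arithmetic behind those numbers, as an idealised sketch (decimal units, single stream, protocol overhead ignored; the 1.5 GB file size is an assumed figure for illustration):

```python
# Rough time to move a file through a bandwidth cap (decimal units,
# single stream, no protocol overhead -- an idealised lower bound).
def transfer_seconds(size_gb: float, cap_mbps: float) -> float:
    return size_gb * 8_000 / cap_mbps  # GB -> megabits, divided by Mbps

# The same copy through AWS's ~130 Mbps cap vs a ~1 Gbps link.
# The 1.5 GB backup size is an assumed figure for illustration.
aws = transfer_seconds(1.5, 130)
gce = transfer_seconds(1.5, 1000)
print(f"AWS ~{aws:.0f}s, GCE ~{gce:.0f}s")
```

Under those assumptions the network leg alone accounts for most of the observed 97 seconds, and a 1 Gbps link collapses it to roughly a tenth.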

Cost Comparison

This post wouldn't be complete without an instance-to-instance price comparison: A typical cost comparison between GCE and AWS

Hidden fees everywhere + unreliable capabilities = human time wasted in workarounds

Capacity planning and day to day operations

Capacity planning is unnecessarily hard with non-scalable resources, unreliable performance, insufficient granularity, and hidden constraints everywhere. Cost planning is a nightmare.

Every time we have to add an instance, we have to read the instances page, the pricing page and the EBS page all over again. There are far too many choices, some of which are hard to change later. Printed out, the documentation could cover a 4x7-foot table. By comparison, it takes a single double-sided page to pick an appropriate instance from Google.

Optimizing usage is doomed to fail

The time spent optimizing reserved instances costs about as much as the savings it produces.

Between CPU count, memory size, EBS volume size, IOPS and P-IOPS, everything ends up over-provisioned on AWS. Partly because there are too many knobs for a human being to track and optimize, partly as a workaround for the inconsistent capabilities, partly because some mistakes are hard to fix once instances are live in production.

All these issues are directly related to the underlying AWS platform itself, which cannot scale cleanly: not in hardware options, not in hardware capabilities, not money-wise.

Every time we think about changing something to reduce costs, it is usually more expensive than NOT doing anything (when accounting for engineering time).


AWS has a lot of hidden costs and limitations. System capabilities are unsatisfying and cannot scale consistently. Choosing AWS was a mistake. GCE is always a better choice.

GCE is systematically 20% to 50% cheaper for the equivalent infrastructure, without any thinking or optimization required. Last but not least, it is also faster, more reliable and easier to use day-to-day.

The future of our company

Unfortunately, our infrastructure on AWS is working and migrating is a serious undertaking.

I learned recently that we are a profitable company, more so than I thought. Ranked by revenue per employee, we would fit among the top 10. We are stuck with AWS for the near future, and the issues will have to be worked around with lots of money. The company can cover the expenses, and cost optimisation ain't a top priority at the moment.

There’s a saying “throwing money at a problem“. We shall say “throwing houses at the problem” from now on as it better represents the status quo.

If we get to keep growing at the current pace, we’ll have to scale vertically, and by that we mean “throwing buildings at Amazon”😀

[Image: burning money]
The official AWS answer to all their issues: “Get bigger instances“

Choosing the right cloud provider: Amazon AWS vs Google Compute Engine vs Microsoft Azure vs IBM SoftLayer vs Linode vs DigitalOcean vs OVH vs Hetzner

No worries, it’s a lot simpler than it seems. Each cloud is oriented toward a different type of customer and usage.

The different types of cloud provider:

  • General Purpose (aka. shared infrastructure, fully virtualised)
  • Cheap
  • Dedicated (aka. bare-metal)
  • Housing & colocation (aka. NOT cloud)
  • Make your own datacenter

General Purpose Cloud: Amazon AWS vs Google Compute Engine vs Microsoft Azure

When to use: To run anything and everything. This is the go-to solution for more than 10 servers, running various type of applications. It’s good at replacing a full rack of servers, it’s good at replacing a full datacenter.

Its versatility makes it ideal for running an entire operation in the cloud. It provides the usual infrastructure plus some advanced bits that would be very hard to come by otherwise.

Classic infrastructure made simple:

  • Various sizes and types of hardware, in infinite combinations
  • Design your own networking and firewalls (same as in a real datacenter)
  • Group and isolate instances from each other and from the internet
  • Easily go multi-sites, worldwide
  • Edit, change, redesign, ANYTHING in 60 seconds (while staying put on your chair)

Fully featured ecosystem with advanced services:

  • SAN-like disks (EBS, Google Disks)
  • Scalable Storage and backups (S3, Google Storage, Snapshots)
  • Load balancers (ELB, Google Load Balancer)

Which to use: GCE is vastly superior to its competitors. If you go cloud, go GCE.

AWS is 25-100% more expensive to run the same infrastructure, in addition to being slower and having less capabilities.

We don't know about Microsoft Azure, having never used it. The little feedback we've heard is scary, though.

Cheap Cloud: Digital Ocean vs Linode

When to use: To run a few servers on the public internet. This is the go-to solution for running less than 10 servers, assuming no special requirements (except good bang for the buck). It is ideal to get a real server on the internet, with proper hardware and good internet connectivity.

Maybe you want 1-2 servers to experiment and play with? Maybe you operate a few simple services with low or moderate traffic? Maybe you're an agency in need of simple hosting to host and deliver the project back to the client?

Simple infrastructure finally made simple (and cheap):

  • Real servers (server-grade hardware, good internet connectivity)
  • Simple, easy to use and convenient
  • Predictable costs, well-defined capabilities, no bullshit
  • Add or remove a server in 60 seconds

Which to use: The next-generation cheap clouds are DigitalOcean and Linode. Can’t go wrong with any of the two.

Challengers: There is a truckload of historical and minor players (OVH, GoDaddy, Hetzner, …). They have offerings similar to the next-gen players, but hidden somewhere in a poor UI trying to accommodate and sell 10 unrelated products and services. They may or may not be worth digging into.

Dedicated Cloud: IBM SoftLayer

When to use: To run BIG SERVERS. This is the go-to solution for special tasks requiring exotic hardware, especially vertical scaling. It is ideal for getting beefy servers only, preferably similar to each other.

As a rule of thumb, general purpose clouds allow up to 100GB memory and 10TB storage per instance. Gotta go dedicated to get more.

IBM SoftLayer:

  • Choose the hardware, tailored to the intended workload
  • Ultimate performance (bare-metal, no virtualisation)
  • Quad sockets, 96 vcpus available
  • 1 TB memory, f*** yeah!
  • 24 HDD or SSD drives in a single box, whatever

Which to pick: IBM SoftLayer is the only one to offer the next generation of dedicated cloud. Getting servers works the same way as buying servers from the Dell website (select a server enclosure and tick parts on the checklist) except it’s rented and the price is per month.

SoftLayer takes care of the hardware transparently: shipment, delivery, installation, parts, repair, maintenance. It’s like having our own racks and servers… without the hassle of having them. (Common configurations are available immediately, specialized hardware may need ordering and take a few days).

Challengers: There are a few historical big players (OVH, Hetzner, …). They are running on an antiquated model, providing only a predefined set of boxes with limited click-to-scale-whatever-whenever. They can compare to SoftLayer (read: cheaper and not harder to manage/use) when running a couple servers with nothing too exotic.

Housing & Colocation

When to use: Never. It’s always a bad decision.

There are 3 kinds of people who do housing on purpose:

  • People who genuinely think it’s cheaper (it is NOT)
  • People who genuinely got their maths wrong (hence thinking it was cheaper =D)
  • Students, amateurs, hobbyists, single server usage and not-for-profit

Let's ignore the student: he's got an old server sitting in the garage, and he might as well put it in a datacenter with 24h electricity and good internet to tinker with. That's how he'll learn. This is the only valid use case for housing.

What's wrong with housing & colocation:

  • Unproductive time to go back and forth to the datacenters, repeatedly
  • Lost time and health moving tons of hardware (a 2U server is 20-40 kg)
  • Be forced to deal with hardware suppliers (DELL, HP, …) again and again
  • Burn out, burst in rage and eventually attempt to strangle one colleague after having dealt with supplier bullshit for most of the afternoon (based on a real story)
  • Wait for at least 3 weeks between ordering anything and receiving it
  • Cry when something breaks and there are no spare parts
  • Cry more when realizing the parts went end-of-life and can’t be ordered anymore
  • Suffer 100 times more than initially expected because of the network and the storage (the most expensive and most difficult parts of an infrastructure to get right)
  • Renew the hardware after 3-5 years, hit all the aforementioned issues in a row
  • Be unable to have multiple sites, never go worldwide

These are major pain points you will encounter. Nonetheless, it is easy to find cloud vs colocation comparisons that ignore them and pretend to save $500k per month by buying your own hardware.

Abandoning hardware management has been an awesome life changing experience. We are never going back to lifting tons of burden in miserable journeys to the mighty datacenter.

Make Your Own Datacenter

When to use: This is the go-to solution for hosting companies and older internet giants.

The internet giants (Google, Amazon, Microsoft) started at a time when there was no provider available for their needs, let alone at a reasonable cost. They had to craft their own infrastructure to be able to sustain their activity.

Nowadays, they have opened their infrastructure and are offering it to the world. Top-notch web-scale infrastructure is a commodity. A tech company doesn't need its own datacenters, no matter how big it grows.

Cheat Sheet

Get more than 10 servers, migrate all operations to the cloud, for general purpose, or as the default choice: Google Compute Engine

Get a few servers on the cheap: DigitalOcean or Linode

Get beefy servers ( > 100GB RAM), or special hardware requirements: IBM Softlayer

Get a few beefy servers on the cheap: OVH (or local equivalent on your continent)
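For fun, the cheat sheet codified as a toy decision function (the thresholds are this article's rules of thumb, nothing more):

```python
# Toy codification of the cheat sheet above; the thresholds are the
# article's rules of thumb, not hard limits of any provider.
def pick_provider(servers: int, max_ram_gb: int = 8, cheap: bool = False) -> str:
    if max_ram_gb > 100:
        # Beefy/exotic hardware: general purpose clouds top out around here.
        return "OVH (or local equivalent)" if cheap else "IBM SoftLayer"
    if servers <= 10:
        return "DigitalOcean or Linode"
    # Default choice for full operations in the cloud.
    return "Google Compute Engine"

print(pick_provider(servers=200))                 # full cloud migration
print(pick_provider(servers=3, cheap=True))       # a few cheap boxes
print(pick_provider(servers=5, max_ram_gb=512))   # special hardware
```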


The cloud is awesome. No matter what we want, where and when we want it, there is always a computer ready at the click of a button (and the typing of our credit card details).

The most surprising thing we encounter daily on these services is how new everything is. A recurrent “available since XXX“ written in a corner of the page, stating the feature has only been there for 1-2 years.

These notes tell a story. The cloud has had enough time to mature and is ready to go mainstream. Owning servers belongs to an era of the past.

Stack Overflow Survey Results: Money does buy happiness!

[Chart: Developer Happiness By Salary]


The Stack Overflow Survey asks developers around the world about their current situation.

The answers provided by 46,122 respondents this year finally prove that money can buy happiness. In fact, the more money you get, the more happiness you get!


[Image: money shower]
Enjoyable, isn't it?

Source: Stack Overflow Developer Survey

System Design: Combining HAProxy, nginx, Varnish and more into the big picture

This comes from a question posted on stack overflow: Ordering: 1. nginx 2. varnish 3. haproxy 4. webserver?

I’ve seen people recommend combining all of these in a flow, but they seem to have lots of overlapping features so I’d like to dig in to why you might want to pass through 3 different programs before hitting your actual web server.

My answer explains what these applications are for, how they fit into the big picture, and when each one shines. [Original answer on ServerFault]


As of 2016, things are evolving: all servers are getting better, they all support SSL, and the web is more amazing than ever.

Unless stated, the following is targeted toward professionals in business and start-ups, supporting thousands to millions of users.

These tools and architectures require a lot of users/hardware/money. You can try them in a home lab or to run a blog, but that doesn't make much sense.

As a general rule, remember that you want to keep it simple. Every middleware appended is another critical piece of middleware to maintain. Perfection is not achieved when there is nothing to add but when there is nothing left to remove.

Some Common and Interesting Deployments

HAProxy (balancing) + nginx (php application + caching)

The webserver is nginx running php. Since nginx is already there, it might as well handle the caching and the redirections.

HAProxy —> nginx-php
A —> nginx-php
P —> nginx-php
r —> nginx-php
o —> nginx-php
x —> nginx-php
y —> nginx-php

HAProxy (balancing) + Varnish (caching) + Tomcat (Java application)

HAProxy can redirect to Varnish based on the request URI (*.jpg *.css *.js).

HAProxy ---> tomcat
A       ---> tomcat
P       ---> tomcat
r
o ---> varnish ---> tomcat
x ---> varnish ---> tomcat
y

HAProxy (balancing) + nginx (SSL termination + caching) + webserver (application)

HAProxy ---> nginx:443 -> webserver:8080
A       ---> nginx:443 -> webserver:8080
P       ---> nginx:443 -> webserver:8080
r       ---> nginx:443 -> webserver:8080
o       ---> nginx:443 -> webserver:8080
x       ---> nginx:443 -> webserver:8080
y       ---> nginx:443 -> webserver:8080


HAProxy: THE load balancer

Main Features:

  • Load balancing (TCP, HTTP, HTTPS)
  • Multiple algorithms (round robin, source ip, headers)
  • Session persistence
  • SSL termination

Similar Alternatives: nginx (multi-purpose web-server configurable as a load balancer)

Different Alternatives: Cloud (Amazon ELB, Google Load Balancer), Hardware (F5, Fortinet, Citrix NetScaler), Other & worldwide (DNS, anycast, CloudFlare)

What does HAProxy do and when do you HAVE TO use it?

Whenever you need load balancing, HAProxy is the go-to solution.

Except when you want very cheap OR quick & dirty OR you don’t have the skills available, then you may use an ELB😀

Except when you're in banking/government/similar, required to use your own datacenter with hard requirements (dedicated infrastructure, dependable failover, 2 layers of firewall, auditing, an SLA paying x% per minute of downtime, all in one), then you may put 2 F5s on top of the rack containing your 30 application servers.

Except when you want to go past 100k HTTP(S) requests per second [and multi-site], then you MUST have multiple HAProxy instances with a layer of [global] load balancing in front of them (CloudFlare, DNS, anycast). Theoretically, the global balancer could talk straight to the webservers, allowing you to ditch HAProxy. Usually, however, you SHOULD keep HAProxy(s) as the public entry point(s) to your datacenter and tune advanced options to balance fairly across hosts and minimize variance.

Personal Opinion: A small, contained, open source project, entirely dedicated to ONE TRUE PURPOSE. Among the easiest to configure (ONE file), most useful and most reliable open source software I have come across in my life.

Nginx: Apache that doesn’t suck

Main Features:

  • WebServer HTTP or HTTPS
  • Run applications in CGI/PHP/some other
  • URL redirection/rewriting
  • Access control
  • HTTP Headers manipulation
  • Caching
  • Reverse Proxy

Similar Alternatives: Apache, Lighttpd, Tomcat, Gunicorn…

Apache was the de-facto web server, also known as a giant clusterfuck of dozens of modules and thousands of lines of httpd.conf on top of a broken request processing architecture. nginx redid all of that with fewer modules, (slightly) simpler configuration and a better core architecture.

What does nginx do and when do you HAVE TO use it?

A webserver is intended to run applications. When your application is developed to run on nginx, you already have nginx and you may as well use all its features.

Except when your application is not intended to run on nginx and nginx is nowhere to be found in your stack (Java shop anyone?), then there is little point in adding nginx. The webserver features likely already exist in your current webserver, and the other tasks are better handled by the appropriate dedicated tool (HAProxy/Varnish/CDN).

Except when your webserver/application is lacking features, is hard to configure, and/or you'd rather change jobs than look at it (Gunicorn anyone?), then you may put an nginx in front (i.e. locally on each node) to perform URL rewriting, send 301 redirections, enforce access control, provide SSL encryption, and edit HTTP headers on the fly. [These are the features expected from a webserver.]

Varnish: THE caching server

Main Features:

  • Caching
  • Advanced Caching
  • Fine Grained Caching
  • Caching

Similar Alternatives: nginx (multi-purpose web-server configurable as a caching server)

Different Alternatives: CDN (Akamai, Amazon CloudFront, CloudFlare), Hardware (F5, Fortinet, Citrix NetScaler)

What does Varnish do and when do you HAVE TO use it?

It does caching, only caching. It's usually not worth the effort and a waste of time. Try a CDN instead. Be aware that caching is the last thing you should worry about when running a website.

Except when you’re running a website exclusively about pictures or videos then you should look into CDN thoroughly and think about caching seriously.

Except when you’re forced to use your own hardware in your own datacenter (CDN ain’t an option) and your webservers are terrible at delivering static files (adding more webservers ain’t helping) then Varnish is the last resort.

Except when you have a site with mostly-static-yet-complex-dynamically-generated-content (see the following paragraphs) then Varnish can save a lot of processing power on your webservers.

Static caching is overrated in 2016

Caching is almost configuration-free, money-free, and time-free. Just subscribe to CloudFlare, CloudFront, Akamai or MaxCDN. The time it takes me to write this line is longer than the time it takes to set up caching, AND the beer I am holding in my hand is more expensive than the median CloudFlare subscription.

All these services work out of the box for static *.css *.js *.png and more. In fact, they mostly honour the Cache-Control directive in the HTTP headers. The first step of caching is to configure your webservers to send proper cache directives. It doesn't matter what CDN, what Varnish, what browser is in the middle.
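As a sketch of that first step, a webserver-side helper might pick the Cache-Control header by file type (the extension list and max-age values are illustrative policy choices, not a standard):

```python
# Illustrative choice of Cache-Control headers per asset type.
# The extension list and max-age values are example policy, not a standard.
STATIC_EXT = {".css", ".js", ".png", ".jpg", ".woff2"}

def cache_control_for(path: str) -> str:
    ext = path[path.rfind("."):] if "." in path else ""
    if ext in STATIC_EXT:
        # Static assets: let any CDN or browser cache them for a day.
        return "public, max-age=86400"
    # Dynamic pages and APIs: let nothing cache them by default.
    return "no-store"

print(cache_control_for("/assets/app.9f3c.js"))     # public, max-age=86400
print(cache_control_for("/api/article/1843/info"))  # no-store
```

Once the origin sends sane directives like these, every layer downstream (CDN, Varnish, browser) behaves.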

Performance Considerations

Varnish was created at a time when the average web server was choking on serving a cat picture on a blog. Nowadays a single instance of the average modern multi-threaded asynchronous buzzword-driven webserver can reliably deliver kittens to an entire country. Courtesy of sendfile().

I did some quick performance testing on the last project I worked on. A single tomcat instance could serve 21,000 to 33,000 static files per second over HTTP (testing files from 20 B to 12 kB with varying HTTP/client connection counts). The sustained outbound traffic is beyond 2.4 Gb/s. Production will only have 1 Gb/s interfaces. You can't do better than the hardware, so there's no point in even trying Varnish.
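The back-of-the-envelope math behind that conclusion (decimal units, payload only):

```python
# Outbound bandwidth implied by a request rate and response size
# (decimal units, payload only -- headers and TCP overhead ignored).
def outbound_gbps(requests_per_s: int, size_kb: float) -> float:
    return requests_per_s * size_kb * 1000 * 8 / 1e9

# ~21k-33k req/s of 12 kB files, per the measurements above:
print(round(outbound_gbps(21_000, 12), 2))  # 2.02
print(round(outbound_gbps(33_000, 12), 2))  # 3.17 -> far beyond a 1 Gb/s NIC
```

Even the low end of the measured range already doubles a 1 Gb/s interface; a cache in front of tomcat changes nothing about that bottleneck.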

Caching Complex Changing Dynamic Content

CDN and caching servers usually ignore URLs with parameters like ?article=1843, they ignore any request with session cookies or authenticated users, and they ignore most MIME types, including the application/json from /api/article/1843/info. Configuration options exist but are usually coarse, rather “all or nothing“.

Varnish can have custom complex rules (see VCL) defining what is cachable and what is not. These rules can cache specific content by URI, headers, current user session cookie, MIME type and content ALL TOGETHER. That can save a lot of processing power on the webservers for some very specific load patterns. That's when Varnish is handy and AWESOME.


It took me a while to understand all these pieces, when to use them and how they fit together. I hope this helps you.

Configuring timeouts in HAProxy

This comes from a question posted on stack overflow: By what criteria do you tune timeouts in HA Proxy config?

When configuring HA Proxy, how do you decide what values to assign to the timeouts? I’ve read a half dozen samples in various blogs, and everyone uses different timeouts and no one discusses why.

My original answer was posted on ServerFault.


I've been tuning HAProxy for a while and have done a lot of performance testing on it, from 100 HTTP requests/s to 50,000 HTTP requests/s.

The first advice is to enable the statistics page on HAProxy. You NEED monitoring, no exception. You will also need fine tuning if you intend to go past 10 000 requests/s.

Timeouts are a confusing beast because they have a huge range of possible values, most of them making no observable difference. I have yet to see something fail because of a value 5% lower or 5% higher. 10,000 vs 11,000 milliseconds, who cares? Probably not your system.


I cannot in good conscience give a couple of numbers as ‘best timeouts ever for everyone’.

What I can tell you instead is the MOST aggressive timeouts that are always acceptable for HTTP(S) load balancing. If you find yourself needing lower values than these, it's time to reconfigure your load balancer.

timeout connect 5000
timeout check 5000
timeout client 30000
timeout server 30000

timeout client:

The inactivity timeout applies when the client is expected to acknowledge or
send data. In HTTP mode, this timeout is particularly important to consider
during the first phase, when the client sends the request, and during the
response while it is reading data sent by the server.

Read: This is the maximum time to receive HTTP request headers from the client.

3G/4G/56k/satellite can be slow at times. Still, they should be able to send HTTP headers in a few seconds, NOT 30.

If someone has a connection so bad that it needs more than 30 s to request a page (then more than 10x30 s to request the 10 embedded images/CSS/JS), I believe it is acceptable to reject them.

timeout server:

The inactivity timeout applies when the server is expected to acknowledge or
send data. In HTTP mode, this timeout is particularly important to consider
during the first phase of the server’s response, when it has to send the
headers, as it directly represents the server’s processing time for the
request. To find out what value to put there, it’s often good to start with
what would be considered as unacceptable response times, then check the logs
to observe the response time distribution, and adjust the value accordingly.

Read: This is the maximum time to receive HTTP response headers from the server (after it has received the full client request). Basically, this is the processing time of your servers before they start sending the response.

If your server is so slow that it requires more than 30s to start giving an answer, then I believe it is acceptable to consider it dead.

Special Case: Some RARE services doing very heavy processing might take a full minute or more to give an answer. This timeout may need to be increased a lot for this specific usage. (Note: This is likely to be a case of bad design, use an async style communication or don’t use HTTP at all.)

timeout connect

Set the maximum time to wait for a connection attempt to a server to succeed.

Read: The maximum time a server has to accept a TCP connection.

Servers are in the same LAN as HAProxy so it should be fast. Give it at least 5 seconds because that’s how long it may take when anything unexpected happens (a lost TCP packet to retransmit, a server forking a new process to take the new requests, spike in traffic).

Special Case: When servers are in a different LAN or over an unreliable link. This timeout may need to be increased a lot. (Note: This is likely to be a case of bad architecture.)

timeout check

Set additional check timeout, but only after a connection has been already
established.
If set, haproxy uses min(“timeout connect“, “inter“) as a connect timeout
for check and “timeout check“ as an additional read timeout. The “min“ is
used so that people running with very long “timeout connect“ (eg. those
who needed this due to the queue or tarpit) do not slow down their checks.
(Please also note that there is no valid reason to have such long connect
timeouts, because “timeout queue“ and “timeout tarpit“ can always be used
to avoid that).

Read: When performing a healthcheck, the server has timeout connect to accept the connection then timeout check to give the response.

All servers MUST have an HTTP(S) health check configured. That’s the only way for the load balancer to know whether a server is available. The healthcheck is a simple /isalive page always answering OK.

Give this timeout at least 5 seconds because that’s how long it may take when anything unexpected happens (a lost TCP packet to retransmit, a server forking a new process to take the new requests, spike in traffic).

War Story: A lot of people wrongly believe that the server can always answer this simple page in 3 ms. They set an aggressive timeout (< 2000ms) with aggressive failover (2 failed checks = server dead). I have seen entire websites going down because of that. Typically there is a slight spike in traffic, backend servers get slower, the healthchecks are delayed… until suddenly they all timeout together, HAProxy thinks ALL servers died at once and the entire site goes down.
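As a sketch, here is a gentle healthcheck configuration following the advice above: a simple `/isalive` page, generous timeouts, and failover that requires several consecutive failures before declaring a server dead. The backend and server names are illustrative.

```
backend web_servers
    # Healthcheck: a simple page always answering OK
    option httpchk GET /isalive
    # The server has "timeout connect" to accept the connection,
    # then "timeout check" to answer. Keep both generous (>= 5s).
    timeout connect 5s
    timeout check 5s
    # Check every 10s; 5 consecutive failures to mark a server down,
    # 2 successes to bring it back. Avoids mass false-positive deaths
    # during a traffic spike.
    default-server inter 10s fall 5 rise 2
    server web1 10.0.0.1:8080 check
    server web2 10.0.0.2:8080 check
```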


Hope you understand timeouts better now.

Lessons Learned #0: The HAProxy statistics page is your best friend for monitoring connections, timeouts and everything.

Lessons Learned #1: Timeouts aren’t that important.

Lessons Learned #2: Be gentle on timeout configuration (especially timeout check and timeout connect). There has never been any issue because of “slightly too long timeout” but there are regular cases of “too short timeout” that put entire websites down.

Monitoring in the Cloud: Datadog vs Server Density vs SignalFX vs StackDriver vs BMC Boundary vs Wavefront vs NewRelic

We’re a tech company and we have more than 100 AWS instances to run our services. It is critical that we have good monitoring, metrics collections, graphs and alerting.

Current Setup

We have an in-house monitoring solution built from more than 9 tools, including but not limited to:

  • statsd
  • collectd
  • graphite
  • grafana
  • nagios
  • cacti
  • riemann
  • icinga

All are open-source solutions (as in build-it and maintain-it yourself). Most are tools coming straight from the 90’s, with an old UI; they are hard to use and hard to maintain. None of them can scale or run on more than a single node.

That’s a total of 8 independent points of failure, put under constant pressure by many hosts and metrics, and unable to cope with AWS hosts going up and down regularly. So far, the palm of worst-in-class belongs to riemann. Its configuration is a 1000-line file written in Clojure, with up to 12 levels of indentation.

We’ve been babysitting this setup again and again, every time it breaks, and it’s been a major pain in the ass. We’ve reached a desperate point where we just want to throw everything away and stop the pain.

What if we don’t want to send our data to a 3rd party?

Neither do we.

We thought about it and came to the conclusion that CPU percentage and memory usage are not critical information to be kept private at all costs. They don’t give away any user data and they don’t give away critical business information.

If there is someone out there who finds that data worth looking at, so be it.

Actually, it’s a false dilemma. We’ve tried the “build and maintain it ourselves” route already and it’s been a major failure. Let’s not burn more time and people going down that wrong route.

What to expect from a monitoring solution

The MUST have:

  • Short interval between metrics (our current collectd is about 15s-20s)
  • Graph by min, average AND max
  • Easy deployment
  • Cute graphs (colors, zoom, legend, easily readable)
  • Responsive site
  • Monitor the basics (memory, disk, I/O, …)
  • Custom dashboards
  • Custom alerting

The SHOULD have:

  • Compare graphs (arrange in grid, superimpose, align axes…)
  • Advanced alerting (moving time windows, multiple metrics, outlier detection)
  • Integrate with middleware (PostgreSQL metrics, nginx metrics, …)
  • Easily add/remove hosts (AWS environment is constantly evolving)


The contenders:
  • Collectd + Graphite + Grafana + Icinga + Riemann (the on-site crowd)
  • Server Density
  • Datadog (cloud)
  • BMC truesightpulse (ex. Boundary)
  • [Google] StackDriver
  • SignalFX
  • WaveFront
  • NewRelic

Trial by trialing

collectd + graphite + grafana + icinga + Riemann (on-site)

The standard on-site solution that everyone knows. Not worth presenting since we’re trying to run away from it.

Server Density

A London company (close to us :D) that raised some money in 2010, 2011 and 2015. We had received positive feedback about Server Density before. Let’s go for the trial.

Agent Installation

The agent was painful to install.

Each host has to be registered individually with the service; it gets unique keys and a unique configuration. Automating the deployment was a pain in the ass: it takes multiple REST API calls to their service to register the host and to fetch pieces of configuration, depending on the current state of the host in their service.

Web Interface

  • Metrics interval is 1 minute at best. An ENTIRE minute
  • No filtering by min, average, max
  • No legends on graphs. No clue what the lines are showing
  • No integration with any middleware or application
  • The website fails to load way too often

The site fails to load every few pages. After a few hours of surfing during the trial, we were genuinely thinking that our office internet connection was broken. Thankfully, it was not our internet but the Server Density site that is extremely buggy.


Removed that s**** after 48 hours, cleaned the agents, and killed all the hosts where there ever was an agent.

Between the site failing randomly, the terrible UI and all the basic features missing, this is one of the worst products we have ever come across. We cannot comprehend how it ever managed to get positive reviews or raise money 3 times.


Datadog

An American company founded somewhere around 2008. Raised 15 M$ in 2014, then 31 M$ in 2015 and finally 97 M$ in 2016.

Long story short: it’s very good and it does everything we want. (We’ll publish an article dedicated to Datadog later.)

Once in a lifetime, you get the opportunity to look at two companies of the same age in the same market, one of which (Datadog) just happens to have raised 50 times more money than the other (Server Density). It turns out to be a definitive indicator of how good the products are relative to each other.

[Google] StackDriver

An American company founded around 2012. Raised 5 M$ in 2012, acquired by Google in 2014.

The main site http://www.stackdriver.com/ is still online. The screenshots are nice and we want to try that thing.

There is an issue though. We try to try it and we can’t, because there is no way to try it. Parts of the site are inaccessible, parts redirect to Google, and some sections are missing.

Google bought it in May 2014; it is now May 2016. The product should be available and the site should be up (possibly under a different name and logo), but it’s not.

It looks like the service was killed as a result of the Google acquisition. This could have been a good monitoring tool but we’ll never know. If anyone had the opportunity to try it and has experience with it, please comment.

June update: There are references to Google StackDriver suddenly appearing all over the GCE documentation. A closed beta is available on request for premium customers.

July update: It’s now clear that StackDriver is being integrated into Google. It will become part of their cloud offering and it will be available as a standalone product. Expecting a release within 1-2 years.

BMC truesightpulse (ex. Boundary)

An American company founded around 2010. Bought by BMC for 15 M$ in 2012 and became truesightpulse.

We had heard of Boundary multiple times but couldn’t find it. We had already settled for Datadog (and were satisfied) by the time we understood that Boundary had been acquired and renamed by BMC.

Judging by what we can see on the website: the screenshots are good, it can get metrics from all the common databases/webservers, and it integrates with AWS/GCE. The pricing is a bit cheaper than Datadog ($12/month per host).

It’s the historic direct competitor to Datadog. They’re mostly copycats of each other.


SignalFX

[July 2016 update: added SignalFX]

Yet another monitoring company that raised millions. A latecomer to the market.

Basically, it’s a direct copycat of Datadog and BMC. The UI is nice and the graphs are cute (same as the competitors). It’s lagging behind in terms of advanced features and integrations though; not sure it can catch up with the leader.

The pricing is per metric stream per month, which may make it cheaper than Datadog while remaining roughly equivalent for simple basic monitoring.

If you have to trial only two services, the first pick is Datadog and the second pick is SignalFX. (BMC is a fair second pick as well; note that we’re biased against bigger companies with more products and less focus.)


Wavefront

[July 2016 update: added Wavefront]

We received a link to Wavefront during our holidays, right after we closed the evaluation. It’s another latecomer and a perfect copycat (we’re crossing a line here: some icons and UI elements are identical, pixel for pixel).

We open the link on our laptop in battery-saving mode and… Firefox freezes for a minute. Who thought that a full-screen HD video of a dude surfing was a good thing to put on the main page?

Well, we will have to wait for the end of the holidays to see the website, when we have access to our work computers again (i7 8 cores, 32 GB memory, SSD).

Once we get back to work and check the website, it turns out that Wavefront doesn’t display any pricing publicly and gives no trial either. Can’t do anything without talking to their sales guys first.

At this point, we’ve already done weeks of trial and we’ve got 3 strong competitors who have better products and are more accessible. For the sake of it, we’ll just pretend that Wavefront doesn’t exist.


NewRelic

No need to introduce NewRelic. Maybe the most advertised company of 2015, one of the highest valuations ever for monitoring-related tools, the world’s best in class for Application Performance Monitoring (APM).

We already used NewRelic APM to monitor our applications and we love it. It gives very deep performance information about the application (detailed profiler, call stack, debugging). If they have a server monitoring thing, we could expand our deployment.

NewRelic doesn’t do monitoring

It turns out that NewRelic doesn’t have any product to do server monitoring.

We still considered NewRelic to monitor the databases/webservers, because it would be nice to have performance indicators, query timings and things like that. It turns out that they don’t support PostgreSQL at all. In fact, they don’t support ANY database. NewRelic APM is only available to monitor applications written in Java, Python, C# and a few other languages. That’s it. Nothing more.

We checked out the NewRelic plugins. There are 3 plugins for PostgreSQL, all written pre-2014, all abandoned GitHub projects by random dudes. They can barely get 5-10 metrics and provide no profiling whatsoever. Not to mention that the reviews averaging 2/5 stars are scary.

As a conclusion, NewRelic cannot do server monitoring. (They’re really awesome in the application performance market though).


#MonitoringSucks is over. We’ve got a pack of great monitoring tools all invented at once.

The world’s best in class is Datadog (we’ll write a dedicated article later). It’s older and more mature. It has the most features and integrations. When you have to pick a monitoring tool for the future of your tech company, that’s the horse you want to put your money on.

The challengers are SignalFX and BMC truesightpulse.