How Superfeedr built Analytics using MongoDB

We weren’t quite sure how to build these analytics, so we slowly established a set of requirements and constraints (see the sketch after this list):

  • Zero performance impact
  • Fully decoupled from the current infrastructure
  • Results at most hourly
  • Data is more important than graphs
  • Easily extensible, in case we want to measure more things
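
The post doesn't include code at this point, but to make the constraints more concrete, here is a minimal sketch of what hourly counters in MongoDB could look like, written in Python with pymongo. The collection name, metric names, and one-document-per-(metric, hour) layout are assumptions for illustration, not Superfeedr's actual schema.

```python
# Hypothetical sketch: one document per (metric, hour), bumped with an upsert.
# Collection and field names are illustrative, not Superfeedr's actual schema.
from datetime import datetime, timezone

from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")
stats = client["analytics"]["hourly_stats"]

def record(metric: str, count: int = 1) -> None:
    """Increment a counter for the current hour; upsert creates the doc on first hit."""
    hour = datetime.now(timezone.utc).replace(minute=0, second=0, microsecond=0)
    stats.update_one(
        {"metric": metric, "hour": hour},
        {"$inc": {"count": count}},
        upsert=True,
    )

# A new metric is just a new name; no schema change needed, which keeps things extensible.
record("notifications_sent")
record("feeds_parsed", 3)
```

Keying documents by hour keeps the raw numbers around independently of any graphing layer, which fits the "data is more important than graphs" and "easily extensible" constraints; the writes themselves would still need to happen off the critical path to stay decoupled from the main infrastructure.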

This is a really interesting tech read by our friend Julien from Superfeedr.

We've been experimenting with MongoDB in-house as well. It'll be interesting to see how things shake out over the next year with respect to NoSQL implementations... We're also using Redis and have found it to be much faster and more reliable, though simpler in some respects.

3 responses
Interesting. Please keep posting your findings. Would be much appreciated.
Redis gives you just enough building blocks to let you build really efficient abstractions on top of it. I love how simple the protocol is, too. I think it'll gain huge adoption once virtual memory is implemented (I think that's coming in 1.6). As it is now, it can be expensive to use for fast-growing data.
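
To make the "building blocks" point concrete, here is a hypothetical sketch of hourly counters built from plain Redis primitives (INCRBY plus EXPIRE), using Python with redis-py; the key layout and names are my own illustration, not something from the comment or the post.

```python
# Hypothetical sketch: hourly counters from plain Redis primitives.
# Key naming scheme is illustrative only.
from datetime import datetime, timezone

import redis

r = redis.Redis(host="localhost", port=6379)

def record(metric: str, count: int = 1) -> None:
    """INCRBY on one key per (metric, hour); EXPIRE bounds memory growth."""
    hour = datetime.now(timezone.utc).strftime("%Y%m%d%H")
    key = f"stats:{metric}:{hour}"
    r.incrby(key, count)
    r.expire(key, 60 * 60 * 24 * 30)  # keep roughly 30 days of counters

record("notifications_sent")
```

Everything here lives in RAM, which is the "expensive for fast-growing data" caveat: until something like virtual memory is available you pay memory for every counter, so expiring or rolling up old keys matters.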
The first goal isn't feasible. See the "observer effect" or the Heisenberg uncertainty principle: you can't observe a system without implicitly affecting it. It's still worthwhile to work toward minimizing the impact, but that's a different goal.