This presentation was planned for an older Wurbe
event, but since that never quite happened over the last four months, I am
publishing it now, before it becomes totally obsolete.
My original contribution here is a comparison between the original
memcached server from Danga and the Tugela cache from the
MediaWiki programmers. I’ve also tried memcachedb,
but the pre-1.0 version (from
Google Code) in November 2007 was quite unstable and unpredictable.
In a nutshell, these memcached variants use Berkeley DB storage
instead of the in-memory slab allocator. There are two direct consequences:
- when the memory is large enough for the whole cache, the database-backed
servers will be slower (my tests showed a 10-15% penalty, which may or
may not be tolerable for your app)
- when you’ve got lots of data to cache and your server’s memory is
low, relying on bdb is significantly better than letting the swap
mechanism do its job (in my benchmarks, the difference can reach
10x, especially under very high concurrency)
Tugela should prove especially useful when running on virtualized
servers with very little memory.
My tests were performed with the “Tummy” Python client, and with Stackless for
the multithreaded version. In the coming weeks I’ll update the
benchmarks for memcachedb 1.0.x – and I promise never ever to wait four
months for a presentation again …
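For the curious, the shape of the benchmark is simple: since memcached, Tugela, and memcachedb all speak the same protocol, one timing loop works against any of them. The sketch below is a minimal, hypothetical reconstruction, not my actual harness: `FakeClient` is an in-memory stand-in so the snippet runs without a server, and in a real run you would replace it with the “Tummy” client (`memcache.Client(["127.0.0.1:11211"])`) pointed at each server in turn.

```python
import time

class FakeClient:
    """In-memory stand-in for a memcached client (hypothetical, for
    illustration only). A real benchmark would use the python-memcached
    client against memcached, Tugela, or memcachedb instead."""
    def __init__(self):
        self._store = {}

    def set(self, key, value):
        self._store[key] = value
        return True

    def get(self, key):
        return self._store.get(key)

def benchmark(client, n=10000, value=b"x" * 100):
    """Time n set/get round-trips and return operations per second."""
    start = time.perf_counter()
    for i in range(n):
        key = "bench:%d" % i
        client.set(key, value)
        assert client.get(key) == value
    elapsed = time.perf_counter() - start
    return (2 * n) / elapsed  # two operations (set + get) per iteration

if __name__ == "__main__":
    print("%.0f ops/s" % benchmark(FakeClient()))
```

Running the same loop once with a cache smaller than RAM and once with a dataset that overflows it is what exposes the 10-15% penalty in the first case and the large BDB advantage over swapping in the second.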