Membase Co-founder James Phillips - Hadoop World 2010 - theCUBE
In an interview with Wikibon's Dave Vellante and SiliconAngle's John Furrier, Membase co-founder James Phillips sits down for a short conversation at Hadoop World 2010. Membase is the database behind FarmVille, which has a quarter-billion players, making Membase a leader in handling scalability. There is something special about a company that can support online games with players numbering in the millions. Zynga, the company behind games such as FarmVille, CityVille, FarmVille 2, Mafia Wars, and Cafe World, was in 2010 the largest San Francisco-area employer in terms of occupied office space.

Membase is a distributed database management system, smearing data across a multitude of servers. It is designed to spread that data automatically across a series of low-cost servers, instead of requiring ever larger and more expensive servers the way most other databases do. This results in a low-latency, highly optimized environment, which is required for optimal performance of the games offered by Zynga. Phillips gives a good example: if you're playing FarmVille and you want to buy a sheep, you don't want to wait five minutes to buy that sheep; you want it right then, immediately available to you. A system that optimizes that immediate satisfaction also benefits Zynga, who want their revenue for that sheep immediately as well, so both the customer and the business have their needs fulfilled in the immediacy of the desired transaction.

Membase is 100% software, with a built-in memcached layer; the synergy between memcached and Membase is fully integrated, with both projects developed closely and each intended to help the other. As you put data into Membase, it sits in RAM, then bubbles its way down to the SSD layer and back up, the entire process being very fluid.
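The idea of automatically spreading data across a series of low-cost servers can be sketched in a few lines. This is an illustrative, simplified hash-based sharding scheme, not Membase's actual distribution mechanism (Membase used a vBucket mapping layer); the server names and key format here are made up for the example.

```python
import hashlib

# Hypothetical pool of low-cost servers; Membase's real topology differs.
SERVERS = ["server-a", "server-b", "server-c", "server-d"]

def server_for_key(key: str) -> str:
    """Deterministically map a key to one server by hashing it.

    Every client computes the same mapping, so no central lookup
    is needed and reads/writes go straight to the right node.
    """
    digest = hashlib.md5(key.encode("utf-8")).hexdigest()
    return SERVERS[int(digest, 16) % len(SERVERS)]

# A player's data always lands on the same server:
print(server_for_key("player:12345:farm"))
```

Adding capacity in this model means adding more cheap servers to the pool rather than buying a bigger machine, which is the scale-out property the interview describes.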
Memcached uses an LRU (least recently used) mechanism, where the least recently used item gets ejected so the newest, freshest data can move in. With Membase's tiering, ejected data moves down to the next storage level so new data can take its place, and the process continues on an ongoing basis.
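The LRU mechanism described above can be sketched with a small cache class. This is a minimal illustration of least-recently-used eviction, not memcached's or Membase's actual implementation; a real tiered system would write the ejected entry to the next storage level instead of discarding it.

```python
from collections import OrderedDict

class LRUCache:
    """Minimal LRU cache sketch: when capacity is exceeded, the least
    recently used entry is ejected to make room for fresh data."""

    def __init__(self, capacity: int):
        self.capacity = capacity
        self.data = OrderedDict()  # insertion order tracks recency

    def get(self, key):
        if key not in self.data:
            return None
        self.data.move_to_end(key)  # mark as most recently used
        return self.data[key]

    def put(self, key, value):
        if key in self.data:
            self.data.move_to_end(key)
        self.data[key] = value
        if len(self.data) > self.capacity:
            # Eject the least recently used entry; in a tiered store,
            # this is where it would bubble down to the next level.
            self.data.popitem(last=False)

cache = LRUCache(2)
cache.put("a", 1)
cache.put("b", 2)
cache.get("a")         # touching "a" makes it the freshest entry
cache.put("c", 3)      # "b" is now least recently used, so it is ejected
print(cache.get("b"))  # → None
```

The `get` on "a" is what keeps it alive: recency, not insertion order alone, decides which entry is ejected.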