Posts Tagged ‘hadoop’

Some early Avro benchmarks

May 12, 2009

Avro is my current project. It’s a slightly different take on data serialization.

Most data serialization systems, like Thrift and Protocol Buffers, rely on code generation, which can be awkward with dynamic languages and datasets. For example, many folks write MapReduce programs in languages like Pig and Python, and generate datasets whose schema is determined by the script that generates them. One of the goals for Avro is to permit such applications to achieve high performance without forcing them to run external compilers.
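To make that concrete, here is a minimal sketch of writing a record with Avro's Java generic API, where the schema is parsed from JSON at runtime and no classes are generated. This uses today's Avro API, which may differ in detail from the version benchmarked below, and the record name and fields are made up for illustration:

    import java.io.ByteArrayOutputStream;
    import java.io.IOException;

    import org.apache.avro.Schema;
    import org.apache.avro.generic.GenericData;
    import org.apache.avro.generic.GenericDatumWriter;
    import org.apache.avro.generic.GenericRecord;
    import org.apache.avro.io.BinaryEncoder;
    import org.apache.avro.io.EncoderFactory;

    public class GenericAvroSketch {
      public static void main(String[] args) throws IOException {
        // The schema is ordinary JSON, parsed at runtime -- no compiler, no generated classes.
        Schema schema = new Schema.Parser().parse(
            "{\"type\":\"record\",\"name\":\"PageView\",\"fields\":["
            + "{\"name\":\"url\",\"type\":\"string\"},"
            + "{\"name\":\"hits\",\"type\":\"long\"}]}");

        // Build a record generically, setting fields by name.
        GenericRecord record = new GenericData.Record(schema);
        record.put("url", "http://example.com/");
        record.put("hits", 42L);

        // Serialize to Avro's compact binary format.
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        BinaryEncoder encoder = EncoderFactory.get().binaryEncoder(out, null);
        new GenericDatumWriter<GenericRecord>(schema).write(record, encoder);
        encoder.flush();

        System.out.println("serialized " + out.size() + " bytes");
      }
    }

A script that generates its output schema on the fly can build such a schema object directly and start writing records, which is the point of the generic path.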

A few early Avro benchmarks are now in. A month ago, Johan Oskarsson (of Last.fm) ran his serialization size benchmark using Avro. And today, Sharad Agarwal (my Avro collaborator) ran an existing Java serialization benchmark using Avro, and the initial results look decent. Curiously, Avro’s generic (no code generation) and specific (generated classes) APIs perform quite differently, despite sharing much of their implementation, which suggests that both might still be easily improved.

Cloud: commodity or proprietary?

April 9, 2008

A few days ago Google announced its App Engine, which lets folks build applications that run in Google’s cloud. Amazon has for a while had a number of services to let folks run applications in Amazon’s cloud. But in both of these cases, one must use their proprietary APIs.

For example, Google provides a datastore API that applications must use to persist state, while Amazon similarly provides its SimpleDB API. Amazon’s services are generally lower-level and easier to adopt à la carte, while Google provides one-stop shopping. Either way, one’s application code becomes dependent on a particular vendor. This is in contrast to most web applications today, where, with things like the LAMP stack, folks can build vendor-neutral applications from free (as in beer) parts and select from a competitive, commodity hosting market.

As we shift applications to the cloud, do we want our code to remain vendor-neutral? Or would we rather work in silos, where some folks build things to run in the Google cloud, some for the Amazon cloud, and others for the Microsoft cloud? Once an application becomes sufficiently complex, moving it from one cloud to another becomes difficult, placing folks at the mercy of their cloud provider.

I think most would prefer not to be locked in, and would rather that cloud providers sold commodity services. But how can we ensure that?

If we develop standard, non-proprietary cloud APIs with open-source implementations, then cloud providers can deploy these and compete on price, availability, performance, etc., giving developers usable alternatives. But such APIs won’t be developed by the cloud providers. They have every incentive to develop proprietary APIs in order to lock folks into their services. Good open-source implementations will only come about if the community makes them a priority and builds them.

Hadoop is a big initial step in this direction. Its current focus is on batch computing, but several of its components are also key to cloud hosting. HDFS provides a scalable, distributed filesystem. It doesn’t yet meet the high-availability requirements of cloud hosting, but once the folks who need that pitch in to build it, it will. HBase provides a database comparable to Amazon’s SimpleDB and Google’s Datastore API. It’s still young, but, if folks want it to, it could become a solid competitor to these.
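For a flavor of what that looks like from application code, here is a minimal sketch of storing and reading an attribute with the current HBase Java client, whose API has changed a great deal since this was written; the table, column family, and row key are made up:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.client.Get;
    import org.apache.hadoop.hbase.client.Put;
    import org.apache.hadoop.hbase.client.Result;
    import org.apache.hadoop.hbase.client.Table;
    import org.apache.hadoop.hbase.util.Bytes;

    public class HBaseSketch {
      public static void main(String[] args) throws Exception {
        // Picks up the cluster location from hbase-site.xml on the classpath.
        Configuration conf = HBaseConfiguration.create();
        try (Connection conn = ConnectionFactory.createConnection(conf);
             Table table = conn.getTable(TableName.valueOf("users"))) {

          // Store an attribute under a row key, much as one would an item in SimpleDB.
          Put put = new Put(Bytes.toBytes("user-123"));
          put.addColumn(Bytes.toBytes("info"), Bytes.toBytes("name"), Bytes.toBytes("alice"));
          table.put(put);

          // Read it back.
          Result result = table.get(new Get(Bytes.toBytes("user-123")));
          System.out.println(Bytes.toString(
              result.getValue(Bytes.toBytes("info"), Bytes.toBytes("name"))));
        }
      }
    }

The important point is that nothing here ties the application to a particular hosting vendor: the same client code runs against any cluster that deploys the open-source stack.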

Moral: if you want commodity cloud hosting, pitch in now.