How To Get Your Ph.D. Project Included In The Linux Kernel
The Linux kernel is the world’s largest collaborative development project. Almost 3,000 individual contributors work together to create and maintain an operating system kernel that works on everything from wristwatches and mobile phones to mainframes, along with all the peripherals imaginable for each platform. Linux creator Linus Torvalds sits at the top of a loose hierarchy of kernel maintainers and acts as final arbiter for what does or does not get included.
So how does one go about contributing a substantially new technology to the kernel?
Sage Weil was working on a distributed file system for Linux as part of his Ph.D. research at the University of California, Santa Cruz. This was before the advent of the buzzword "big data", and therefore before things like Hadoop or Amazon's S3. His research into distributed fault tolerance led him to the conclusion that the best way to manage a clustered file system was at the kernel layer, rather than higher up in userspace. He called his filesystem "Ceph", a shortened form of "cephalopod", as a nod to the "highly parallel behavior of an octopus."
Weil was no stranger to open source or the Linux community. In 1996 he was one of the founders of the web hosting company Dreamhost. As his research progressed, he knew he’d need to get his kernel components integrated upstream if they were to have any real chance of practical application: no one was likely to compile a custom kernel just for a clustered file system.
So Weil did what any good hacker would do: he joined a couple of kernel-related mailing lists and started watching how things worked. He participated in small ways, slowly establishing his name and reputation. He attended a couple of kernel workshops and met kernel hackers in real life.
When the fruits of his research were ready for the public kernel, he followed the same patch submission process that every other kernel hacker did. His patches were rejected the first couple of times, as seasoned kernel maintainers looked over the work. Weil reviewed their remarks, made the suggested improvements, and submitted again.
While Weil was improving his clustered file system, Amazon rolled out its S3 storage offering, and the Hadoop project released HDFS, its distributed file system. Other players came and went, mostly working in isolation and in userspace, above the kernel.
Finally, in March of 2010, Linus Torvalds merged the Ceph filesystem into the mainline kernel, making it almost immediately available to any Linux user who wanted to explore distributed file systems.
Weil hasn't been sitting idle since that merge. He has gathered a group of Ceph developers around the project, while still working full-time at Dreamhost, and spun off a new company called Inktank to provide long-term support and consultation services to Ceph users. I spoke with Weil and a few other Inktank employees about their short- and long-term plans.
Bryan Bogensberger, President and COO, and Ross Turk, VP of Community, both expressed excitement about what was to come. The Inktank developers are well aware that the Ceph technology is a stepping stone to other interesting things, many of which they can't yet foresee, and they are deliberate about nurturing those future developments through Ceph.
How can a distributed file system spur new, unexpected developments, you might ask? For one, the Ceph filesystem can be used as a drop-in replacement for HDFS in Hadoop clusters, which can, in the right circumstances, let people concentrate less on the storage layer of a cluster and more on actually using it. Ceph storage nodes are also fairly intelligent: they can run code against their locally stored objects independent of any master control process. For example, thumbnails for image files stored on a Ceph node can be generated by the node itself, rather than by a central batch processor, which means those thumbnails can be created automatically at the moment an object lands on the storage node. That's a subtly powerful change to the standard storage workflow.
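To give a feel for the client side of that workflow, here is a minimal sketch using Ceph's Python librados bindings. The pool name, object names, and the idea that a node-side hook produces a "thumbnail" companion object are assumptions for illustration only; the node-side logic itself would live in a Ceph object class and is not shown here.

```python
import rados  # Python bindings for librados, Ceph's object store library

# Assumed cluster configuration path and pool name -- adjust for a real cluster.
cluster = rados.Rados(conffile="/etc/ceph/ceph.conf")
cluster.connect()
ioctx = cluster.open_ioctx("images")  # hypothetical pool holding image objects

try:
    # Store the original image as an object. In the workflow described above,
    # a (hypothetical) object class running on the storage node could react to
    # this write and generate a thumbnail locally, without a central batch job.
    with open("photo.jpg", "rb") as f:
        ioctx.write_full("photo.jpg", f.read())

    # Later, a client could simply read back the node-generated companion
    # object -- here assumed to be named "photo.jpg.thumb".
    thumb = ioctx.read("photo.jpg.thumb", length=1024 * 1024)
    print("thumbnail is %d bytes" % len(thumb))
finally:
    ioctx.close()
    cluster.shutdown()
```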
Ceph also provides Amazon S3 API compatibility. Apps written for Amazon S3 should work against a Ceph cluster simply by updating the storage endpoint URL. Whether you're looking to compete with Amazon S3, avoid the Amazon cloud hegemony, or push for open cloud standards, Ceph can help you out.
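As a rough sketch of what "just change the endpoint" looks like in practice, here is an ordinary S3 client pointed at a Ceph gateway instead of AWS. The endpoint URL and credentials are placeholders, and boto3 is just one common S3 client; any S3-compatible library should behave the same way.

```python
import boto3  # a widely used S3 client library; any S3-compatible client works

# Placeholder endpoint and credentials for a hypothetical Ceph S3 gateway.
s3 = boto3.client(
    "s3",
    endpoint_url="http://ceph-gateway.example.com",
    aws_access_key_id="CEPH_ACCESS_KEY",
    aws_secret_access_key="CEPH_SECRET_KEY",
)

# The calls below are ordinary S3 API calls; only the endpoint differs from AWS.
s3.create_bucket(Bucket="demo-bucket")
s3.put_object(Bucket="demo-bucket", Key="hello.txt", Body=b"stored on Ceph, not on AWS")
obj = s3.get_object(Bucket="demo-bucket", Key="hello.txt")
print(obj["Body"].read().decode())
```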
In many ways the story of Ceph is similar to the story of Linux: one guy scratching his own itch, mostly for academic purposes — and because he clearly saw a better way to do it. Neither was the direct result of corporate R&D investment, and both gained traction by embracing a community of like-minded developers based on the merits of the work itself. Ceph has a lot of potential to shake up the storage world, just as Linux shook up computing.
Inktank is currently hiring. If you’re interested in disrupting the traditional storage market in the way that Linux itself disrupted traditional UNIX computing, check them out.