personal web platform

Battelle’s Web 2.0 Conference is next week. The theme is “the web as platform”: the trend of applications moving from stand-alone, with local data, to networked, with shared, remote data. As this happens, the details of one’s local operating system become less relevant. All you’ll need is a web browser, and perhaps a select few other web-based applications. With less required of local applications, this poses a threat to Microsoft’s desktop dominance. My concern is whether another company might replace Microsoft’s desktop monopoly with a “web OS” monopoly.

In this web-based world, I’d like to keep all my personal data remotely, so that I can access it equally well from a Linux workstation, an Apple laptop, a Palm phone and a Windows-based internet-access terminal. Still, I’d like to leverage my local resources. For example, my laptop and handheld should be able to access my data while offline, and my workstation should be able to search it quickly using a local database.

Another big advantage of storing data remotely is that, if my laptop hard drive fails, or I get a new workstation, I don’t have to worry about copying all my stuff. I just log into my personal web and, voila, all my stuff is right there.

As implied above, I should be able to search all my data. Apple’s Spotlight will search my local private data on a Mac, as will Gnome’s Beagle on Linux and Longhorn on Windows. But synchronizing my data across these platforms is still a pain, so these don’t really solve the problem.

What’s needed to make this possible? We already have standards for accessing most remote data. Email can be accessed with IMAP, Address books with LDAP, files with WebDAV, chat with Jabber, etc. There are even providers for most of these services, and each has clients for most platforms. So what’s missing? A little glue, I think.

We need a standard way to store bookmarks, history and other personal metadata (like the names of your mail, instant messaging, and WebDAV accounts) and a standard way to intercept personal data so that it can be indexed and searched, either locally or remotely. But who will build the glue?

I think this has to be a cross-platform open-source project. It has too much potential as a chokepoint to be entrusted to a commercial party. Inspired by Rohit Khare and Adam Rifkin’s ideas around Fisher, I think much of this could be achieved with a proxy server. It could proxy HTTP, IMAP, POP, LDAP, Jabber, etc., transparently indexing and caching things. One could connect to it at http://localhost/ to search and configure things. Applications could contact it through HTTP to get their configuration data. It could expose web services APIs for all this too, so that native applications could be built for search, etc. If we could, e.g., get Mozilla and other desktop applications to look for the daemon on install, and, when it’s present, configure themselves through it by default, then all you should need to do on a new machine is tell the daemon where on the web your personal configuration lives, and you’re good to go. With one step, your files, address book, bookmarks, cookies, logins, email configuration, etc. would all be there.
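To make the configuration side of this concrete, here is a toy sketch of what the daemon’s localhost HTTP face might look like. Everything here is invented for illustration: the hostnames, the account details, and the `/config/<service>` path layout are assumptions, not a proposed standard.

```python
import json
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer

# Hypothetical personal configuration; in the scheme described above, the
# daemon would fetch this from a user-supplied location on the web once,
# then serve it to local applications.
PERSONAL_CONFIG = {
    "imap": {"host": "mail.example.com", "port": 993},
    "webdav": {"url": "https://dav.example.com/home/"},
    "bookmarks": "https://dav.example.com/home/bookmarks.xml",
}

class ConfigHandler(BaseHTTPRequestHandler):
    """Serves configuration to local applications over plain HTTP."""

    def do_GET(self):
        # e.g. GET /config/imap returns just the IMAP section;
        # any other path returns the whole configuration.
        key = self.path.strip("/").split("/")[-1]
        data = PERSONAL_CONFIG.get(key, PERSONAL_CONFIG)
        body = json.dumps(data).encode("utf-8")
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):
        pass  # keep the sketch quiet

def start_daemon(port=0):
    """Start the config daemon on localhost; port=0 picks a free port."""
    server = HTTPServer(("127.0.0.1", port), ConfigHandler)
    threading.Thread(target=server.serve_forever, daemon=True).start()
    return server
```

A mail client on the same machine would then ask `http://localhost:<port>/config/imap` for its account settings instead of prompting the user, which is the “one step on a new machine” idea in miniature.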

The daemon would mostly be a framework for plugins. For example, search needn’t be hard-wired into it; it should be a plugin. Different vendors might provide different personal search applications. Similarly, a spam detector could easily be plugged into the email processing pipeline, etc.
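The plugin idea above can be sketched in a few lines: the daemon keeps a pipeline per protocol, and plugins (an indexer, a spam detector, and so on) register as callables that see each message as it passes through. The names and the toy heuristics here are illustrative, not a real API.

```python
class Pipeline:
    """A per-protocol processing pipeline the daemon would own."""

    def __init__(self):
        self.plugins = []

    def register(self, plugin):
        self.plugins.append(plugin)

    def process(self, message):
        # Each plugin sees, and may annotate, every message.
        for plugin in self.plugins:
            message = plugin(message)
        return message

def naive_spam_detector(message):
    # Toy heuristic standing in for a real spam plugin.
    if "viagra" in message["body"].lower():
        message["spam"] = True
    return message

def search_indexer(index):
    # Builds a trivial inverted index: word -> list of message ids.
    def plugin(message):
        for word in message["body"].split():
            index.setdefault(word.lower(), []).append(message["id"])
        return message
    return plugin

# Wiring up a hypothetical email pipeline:
mail_pipeline = Pipeline()
index = {}
mail_pipeline.register(naive_spam_detector)
mail_pipeline.register(search_indexer(index))

msg = mail_pipeline.process({"id": 1, "body": "Cheap viagra here"})
```

The point is that neither search nor spam detection lives in the daemon itself; each is just another registered callable, so vendors could swap in their own.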

Does this sound plausible? Should we build it?


6 Responses to “personal web platform”

  1. Anonymous Says:

    Here’s an interesting twist on the proxy idea:
    background here:

    It’s basically automated injection of related hyperlinks into proxied HTML content.

  2. Anonymous Says:

    My main concern here is this would require a strong, cross-platform user authentication/authorization mechanism. If this is personal data, then I need to be able to restrict access to only myself, while if it is group-work related data I need to be able to establish the identities/roles that are permitted to take actions on the data.

  3. Brian J. Bartlett Says:

    This is precisely what I have been researching and working on for the last couple of years, ever since numerous fora went to the web and we were told repeatedly that we could not use offline readers (OLRs) anymore. The basic idea is to crawl the content, save it via conversion to XML into a database, and then do a merge replication to any device you happen to be connected to at the moment. Posting would go from local to remote, to the site in question. Since cookies, forms, and the like would all be in the database as well, and the application would be emulating a user performing the same actions, it should work.

    Since then the initial idea has mushroomed to add concepts such as mail collection, monitoring of Amazon, eBay, and Google (your web page ranking anyone?), and the like.

    Since the entire thing is dependent on a remote hosting provider that you tap yourself, is plug-in based, and will have strong authentication (passphrase, perhaps IP-address based as well if desired), I think it has possibilities. I don’t know about other ISPs, but mine comes equipped with durn near everything (.NET/SQL Server/etc. to PHP/MySQL) for about $3.50 per month with 40 GB of traffic, so it wouldn’t even be that expensive.

    There are limiting factors (cross-platform support, especially database feature/code compatibility; firewall restrictions; etc.), but the actual code is mostly developed in various open-source applications extant today.

    Just food for thought as we are thinking along the same lines for what I’m now calling a personal media (or resource?) aggregator.

    Brian J. Bartlett

  4. Mike Says:

    Without the cool technologies (IMAP/LDAP/etc.), this is what Danger’s Sidekick has done. It has made me hunger for the open, elegant solution that you describe.

    – Michael Weiksner

  5. Hans Gerwitz Says:

    Zoë is a good start for email.

    I’d love to buy a network appliance that combined this sort of aggregation with .mac-style sync and publishing services.

  6. Rob Napier Says:

    I hope you don’t mind, but I’d like to start with a small digression. I found your blog while trying to see if anyone could use an application of Lucene that my firm has developed for our Web 2.0 project. We built a nice search function to locate entries in our context-sensitive help system. Since we have benefited from the Lucene project and a host of other open-source projects, I am looking for ways that we can give something back to the community. So far, I’ve been unable to learn whether a packaged search module for placing on websites would be useful to anyone, and if it is, how we would make it available.

    Now to the subject of your post: Web 2.0/RIAs/6As, or whatever else you want to tag it, seems to be in its infancy. Our once:radix environment is the only serious example of a complete development environment that we have seen so far.

    But over the three and a half years since we started work on this project, the advances have been extraordinary. I believe that your ideas on the subject of searching are well ahead of the game. It opens up all sorts of issues regarding the maintenance of data security. But if you can overcome that, I would be delighted to see it.

    Congratulations to everyone on the Lucene project. It is fast and effective. A great addition to our work.
