Chrome OS: Is it really that shiny?

A couple of weeks ago, the intertrons, the weboscape, the tweetsphere, whatever you want to call it, were on fire with the breaking news of Google's latest press-release product: Chrome OS. I can remember the initial emails at Knetwit about this being a strike back at Microsoft, a brand new way of looking at desktop computing. Ok, so the initial announcement email was sent roughly two hours after I read the original press release. I won't get into the long, nasty, boring mailing list argument that followed, but I basically flipped out and called it what it was: a web browser and nothing more. And I stand by that belief.

I haven't had much time to mull over why I don't think much of Chrome OS. In fact, that was the primary reason I chose not to speak about Chrome OS at BarCampChatt. So this blog article will have to suffice, now that I've further collected my thoughts.

Let me just get some disclaimers out of the way: I have nothing at all against Google Chrome or Google whatsoever. Ever since its release, Chrome has been my primary web browser, and I have gone as far as suggesting it to immediate friends and family. And I still use Google even though I claimed I 'binged' it. But my love for Google can only go so far. Also, I'm sure I boned some of the historical parts of the article below; I am not a computer historian, just an observer. Don't bother correcting me in the places where I already know I'm being a gigantic idiot.

Overvaluing or Undervaluing the “Personal Computer”

Let's go back in time to a period when Sarah Palin wasn't running (or running from) anything, computers were sold out of a farm in Pennsylvania, and AT&T wasn't trolling 4chan but trolling the entire country: the late 60s and early 70s. See, back then, if you wanted an "Operating System," it came on big iron, and one of the best-known systems of the era was UNIX, one of the many Bell Labs products from AT&T (though, to be pedantic, Unix actually grew up on minicomputers like the PDP-11 rather than the giant mainframes). You didn't access Unix directly at the machine; you typically accessed it through a dumb terminal or a teletype where commands were executed. In the world of the mainframe, you were more or less at the mercy of the system administrator, who could view all the files stored in your /home directory. And god help you if someone else found a way to fuck up the system: you could lose all your user rights in one fell swoop. There wasn't any privacy per se, just blind trust. History will never reveal whether AT&T ever planned on extending the wonders of Unix to every home; it'll always be rampant speculation. The point remains: the time-sharing model had a severe design flaw. Nobody owned their content. And it extended all the way back to the terminals used to connect to the mainframe.

Enter the Personal Computer: a fantastic device that was cheaper than a mainframe, could run separate from other computers, and ran an operating system aimed at satisfying a single user, not many. The paradigms of computing had been changed once over, and with the advent of computer-to-computer networking, then hardware-agnostic networking through Ethernet, and eventually global networking through the Internet, the PC quickly became the cornerstone of the computing world, relegating mainframes to the hushed caskets of failing banks.
(Well, ok, some companies still run their computers in a mainframe-esque way through thin clients, servers, and virtualization, but you get my point.)

And now we've entered the next age of computing, where we can store data "in the cloud," access it across multiple PCs, edit it, and share it with others. While it feels like an extension of the mainframe ideology, service providers make it very clear that, in many cases, you own the content you create. While it's a step up, it still leads me to fear that someday we will once again evolve to having "dumb terminals" for all of our data, and that instead of a computer belonging to one person, the content of a computer, aside from its OS, will change based on who's logged into it. While that has a lot of practical uses (this is coming from a Dropbox user, I might add), it feels like a compromise of privacy.

Lack of entertainment apps

I'll be honest: I play a lot of computer games. They're fun and they help take the edge off a long and hectic work day. While some neat stuff is being done to bring native-caliber applications inside the web browser, and there is a greater focus on bringing good 3D rendering to the browser (hint: it's not VRML), it's still no World of Warcraft. It's still no Maya. It's still no Adobe Premiere. It's not effective or fast enough to replace a great native desktop app. Granted, Bespin is sorta neat, but it's still no NetBeans. What I guess I should be saying is: HTML5 has done a lot to make the web browser a sandbox for building something akin to real applications, but a web app still isn't a real application. We had this same discussion when the iPhone came out, and in the end we got the ability to make native apps for the iPhone. We'll probably have it again when Chrome OS comes out.

But I will end on a positive note

Chrome OS will change the paradigms of how we use computers, just like how Xerox's invention of the GUI set the stage for its capitalization by Apple and Microsoft. It blurs the fine line between who owns data and who accesses it, but it does promise a single pipeline to access said data: the web browser. Chrome OS could be the next big thing, but I just think that calling it the "Microsoft Killer" or the "Next Big Thing" before we've seen what it can do is sorta ridiculous. Let's just wait it out.
