[personal profile] mdlbear
Web 2.0 Conference celebrates Web app vision | InfoWorld | News | 2006-11-13 | By David L. Margulius
But one theme stood out: Web-based apps and services have become serious business, and everyone’s scrambling to provide platforms to deliver them.

“This is a fundamental architectural shift,” said Google CEO Eric Schmidt of the massive server farms necessitated by maturing Web development and delivery stacks. “The network is always going to be around; … the [local] disk will be optional.” He asserted that packaged applications can’t possibly compete against Web-based apps long-term because “the datacenter is running 7-by-24, it has to be better. It can’t break.”
I disagree completely with this idea. The network is not ubiquitous -- it would cost me an extra $200/month to allow everyone in my family unlimited net access through their cell phones. No way. Even with a T1 connection here at work, it takes three days to mirror a new architecture from the Debian repository.
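
A quick back-of-envelope check supports this. Assuming the full T1 line rate of 1.544 Mbit/s and a mirror on the order of 50 GB for one architecture (my rough figure; pick your own):

    # Back-of-envelope: time to pull a ~50 GB mirror over a T1.
    # Both numbers are assumptions: full line rate, no protocol overhead.
    t1_bits_per_sec = 1.544e6      # T1 line rate
    mirror_bytes = 50e9            # rough size of one Debian architecture
    seconds = mirror_bytes * 8 / t1_bits_per_sec
    print(seconds / 86400)         # -> roughly 3 days

That comes out to about three days even with the line fully saturated.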

And one of my coworkers carries Wikipedia around on an SD card in his Treo. All of it. A 2GB SD card is $40 at Fry's today.

Google's data center may be running 24*7, but LJ's certainly isn't, and neither is my net connection.

Opinions?

Date: 2006-11-16 01:56 am (UTC)
From: [identity profile] phillip2637.livejournal.com
I'm on your side. I was involved in Java from very early on and saw some of the strange proposals for JVM-in-hardware thin clients. (Something like X terminals, but with AWT -- shudder!) The flaws seem so obvious that one has to question the motives of those promoting putting all the application intelligence on the other end of a very fallible wire.

Date: 2006-11-16 02:22 am (UTC)
From: [identity profile] randwolf.livejournal.com
It depends what you're doing. I could never realistically use a web-based CAD or photofinishing application--there is just too much data. On the other hand, for day-to-day household computing, delivered via broadband, I don't see why not--the administrative problems of common PCs make that a very attractive option.

Date: 2006-11-16 03:10 am (UTC)
From: [identity profile] technoshaman.livejournal.com
Yeah, I think SD cards and such are where it's at... and Blackberries for little stuff, but.... there are a hell of a lot of places where the net ain't.

Date: 2006-11-16 07:58 am (UTC)
From: [identity profile] technoshaman.livejournal.com
What, sort of like a Mac Mini writ small, or a Blackberry on steroids?

Date: 2006-11-16 07:14 am (UTC)
From: [identity profile] trogula.livejournal.com
As someone who designs and builds web services (and the massive infrastructure that supports them) for exactly this purpose, I can tell you it can be done, but it's not an easy task. It is only cost-effective at large scale - you need very high-speed switching and redundant routing to do this correctly, hosting redundant services in multiple data centers around the country (or world, depending on your target audience).
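
To make that concrete, here's the client-side half in miniature - a sketch only, with made-up hostnames, not any real deployment: try each datacenter's replica in turn and fall through to the next on failure.

    # Sketch of client-side failover across redundant datacenters.
    # Hostnames are hypothetical; real deployments usually push this
    # into DNS, anycast, or load balancers rather than application code.
    import urllib.request

    REPLICAS = [
        "https://dc-east.example.com/service",
        "https://dc-west.example.com/service",
        "https://dc-eu.example.com/service",
    ]

    def fetch_with_failover(urls=REPLICAS, timeout=2):
        last_err = None
        for url in urls:
            try:
                with urllib.request.urlopen(url, timeout=timeout) as resp:
                    return resp.read()
            except OSError as err:   # timeout, refused, DNS failure...
                last_err = err       # note it and try the next datacenter
        raise last_err

The server side is where the real cost lives: keeping those replicas consistent and independently powered is exactly what a T1 and a home rack can't give you.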

My smallest client's infrastructure services 9 million unique users a day, with a projected growth of close to 100 million by the end of next year (and we're well on target for that goal). This is massive stuff.

A T1 isn't ever going to cut it. Colo housing of services is the only way to go. You simply can't get the network redundancy and power requirements necessary for this at home.

The power outage that LJ suffered simply shows how inadequate their operations staff is. This is a known, easily solved problem, and it was totally avoidable. If I were managing LJ, I would have immediately sent their head of ops packing for his/her incompetence. There is really no excuse.

In short, this will be done for services with a large enough audience that the operational costs can be justified. The barrier to entry here is one of scale.

Date: 2006-11-16 08:43 pm (UTC)
From: [personal profile] mneme
It's an interesting question.

Clearly, some stuff needs to be local or close-to-local -- when you get down to it, light-speed lag alone proves that. And in the foreseeable future, that's not going away.

That said, for desktop, end-user machines, there are a huge number of hurdles with the traditional "download, configure, install" approach which are solved or partially solved by a move to network computing or partial network computing -- not merely the user difficulty of installation, but machine/OS mobility, data redundancy, machine transparency (wouldn't it be nice to access the same data on your phone/palm/desktop as on your laptop, -without- having to do extra magic?), and so on. Those problems aren't going away, and they affect power users as well as non-technical end-users.

I'm guessing that over time, we'll see applications and data migrating into the network -- but in a fashion that allows local caching, storage, and backup. Open OOOword on your own box, and it will come up instantly, running from the local cache -- open it up with your credentials on a buddy's box, and it will still open, with your usual UI preferences and settings, but it might take a few minutes to download your settings and the application.
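
As a rough sketch of that cache-aside behavior (all the names here are placeholders, not any real app's API):

    # Sketch: prefer the local cache for instant startup; on a miss
    # (say, a buddy's box), fetch over the network and cache for next time.
    # CACHE_DIR and the URL scheme are illustrative placeholders.
    import os
    import urllib.request

    CACHE_DIR = os.path.expanduser("~/.netapp-cache")

    def load(name, remote_url):
        local = os.path.join(CACHE_DIR, name)
        if os.path.exists(local):                 # your own box: instant
            with open(local, "rb") as f:
                return f.read()
        data = urllib.request.urlopen(remote_url).read()  # slow first open
        os.makedirs(CACHE_DIR, exist_ok=True)
        with open(local, "wb") as f:              # cached for next time
            f.write(data)
        return data

Settings and documents would want a sync step on top of this, but the slow-first-open, fast-ever-after behavior falls out of the cache.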
