Last night was the Feb 04 rendition of the DfwPPer peoples, which was kicked off with the typically thought-provoking PragDave venturing this point:
The browser is dead.
Since I'm typing this _in_ my browser, I'm not sure about that, and neither were many others, but Dave shaped his statement by comparing the browser to landline phones. Sure, everyone still has one, but it's dead, it's the wave of the past.
From then on the conversation drifted in and out of many an observation, some of them even interesting. There were mentions of other products and their interface evolution. The microwave used to just be a dial and a button to open the door. Now it looks more like a calculator with some bookmark buttons (mmm, popcorn). But perhaps future microwaves will super-revert to no buttons -- the frozen food you insert will have enough metadata on it to tell the microwave how to cook it.
AdamKeys mentioned this was like TiVo compared to VCRs -- TiVo is basically doing the same thing, but its big addition was metadata, so you have much less work to do to record content.
Another thread looked at car interfaces, and how Saab tried to be cute with a 1990(?) model that moved the ignition key to the ‘floor’ of the car just to the right of the driver's seat. Their motivation was to use this to lock the transmission rather than the steering column, but they succeeded mainly in preventing most people from figuring out how to get the car started.
The ten-key number pad got jostled around quite a bit. A few referenced the story I'd seen on Slashdot sometime in the last year about how many teens in Japan are very dexterous with just their thumb, because they've been IM-ing on their mobile phones all the live long day. Dave (I think) interjected a point about riding with a cab driver who IMed the entire drive using just his thumb.
Are two-handed keyboards the ideal input? Wouldn't just a one-thumb input be good enough, because then your hand can grasp something at the same time it's typing? But how do we compare what's doable to what's Best(TM) for the user?
Someone pushed off into a tangent by mentioning the semantic web, which was batted around a bit. Some thought no one would be interested in putting enough metadata into content to make the semantic web work, and when we've got Google inferring metadata on a good-enough basis, what more do we need?
This led PragDave into some of his points I've heard him make before: that our current programming jobs of transforming requirements to code will eventually be overtaken by the machines (the Matrix is nigh upon us...). He stated that we unreasonably expect software to be very precise, when the rest of the world works with tools that are imprecise. Software that can be agile on the fly, that can be ‘taught’ by repeated trial and error, will take the place of people doing this by writing the code by hand ... or something like that; I'm sure if Dave reads this he'll be quick to point out that I'm missing the whole point -- one reason I like to hang around the guy when I can.
Someone countered that life-and-death software couldn't be done in this way. Another counter-countered that the teaching process wouldn't necessarily happen in a production environment. Those apps needing higher degrees of reliability (air traffic control) would be trained extensively before going live ... but many apps can start being useful to people before they're very reliable. People don't plan ahead very much because they're able to be very resilient on the fly. By comparison, software is not resilient at all -- when a new case is thrown its way, currently the whole thing has to stop to handle it, and until then, the users work around the program. How nice if the program could adapt itself to new cases on the fly as well.
I countered with one of my favorite Prag stories (I think it's /\ndy's), where they discovered a paradox in a law that the software they were writing was supposed to conform to. At some point, software is always going to be emulating human-constructed entities: laws and corporations and new business practices to turn a buck in a new way. And what's complex about all this is the fact that what humans build is not logical and consistent, so there will always be a rub there and the need for a human to intervene on the computer's behalf. And maybe Dave doesn't disagree with this point; it's just that he thinks there will be a lot fewer people needed to intervene for the computers.
And on ... I eventually threw out the question to try and tie us back and down, “If the browser is dead, and I'm sitting down to build a new app, what should I do? What is my obligation to the user and how do I best ServeTheUser in light of a dead browser?”
I don't think anyone answered the question. I did note a nice comment in the right direction: “In many cases, the best interface is no interface.” Dave thought my question was good, but countered only with the question, “It depends on how you determine what is Best?” ... or somesuch. And I agree. And this is where it gets thick and out of the comfortable world of clean technology into the messy world of Values and Worldviews and Beliefs. What is best? What is right? What is healthy?
tags: ComputersAndTechnology