37signals are developing a calendar application. Watch the demo and you’ll see appointments are entered as natural
language (for example “3pm Dentist”). Compared to Yahoo’s calendar it looks pretty simple.
Think about it a bit more and you’ll realise the
complexity is still there, just hidden behind a different
interface. The GUI represents all the options graphically.
The text box hides the options in the murky workings of
the parser. 37signals’ example never shows what happens if
you enter text the application doesn’t understand. For
example, what happens if I write “Appointment with Dentist
at 3pm”? Done badly it will be like those early Sierra
games where half the challenge was discovering the words
the program understood. Not a lot of fun, at least when
you’re trying to enter your dentist appointment rather
than save a princess.
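To make that concrete, here is a rough sketch (in TypeScript, and nothing to do with 37signals’ actual code) of a parser for a restricted time-then-description grammar; the time formats it accepts and what it does on a failed parse are my own guesses.

```typescript
interface Appointment {
  hour: number;      // 24-hour clock
  minute: number;
  description: string;
}

// Restricted grammar: "<time> <description>", e.g. "3pm Dentist" or "3:30pm Dentist".
const TIME_FIRST = /^(\d{1,2})(?::(\d{2}))?\s*(am|pm)?\s+(.+)$/i;

function parseEntry(entry: string): Appointment | null {
  const match = entry.trim().match(TIME_FIRST);
  if (!match) {
    // The input doesn't fit the grammar; the UI has to do something
    // sensible here rather than fail silently.
    return null;
  }
  const [, hourText, minuteText, meridiem, description] = match;
  let hour = parseInt(hourText, 10);
  if (meridiem?.toLowerCase() === "pm" && hour < 12) hour += 12;
  if (meridiem?.toLowerCase() === "am" && hour === 12) hour = 0;
  return {
    hour,
    minute: minuteText ? parseInt(minuteText, 10) : 0,
    description,
  };
}

console.log(parseEntry("3pm Dentist"));
// { hour: 15, minute: 0, description: "Dentist" }
console.log(parseEntry("Appointment with Dentist at 3pm"));
// null: rejected until the grammar grows to cover that phrasing
```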
Now if the grammar is quite restricted it should be relatively easy to code up a bit of JavaScript to prompt the user with the correct words, as most IDEs do for programmers. Get this to work well and I think it will be a very nice interface. GUIs have a shallow learning curve but are slow to use. Textual interfaces are the reverse: they favour the expert over the beginner by being fast to use but difficult to learn. Add prompting
to the textual interface and perhaps the end result will
be the best of both worlds.
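To sketch what that prompting might look like, here is a toy completion function over the same restricted grammar; the token lists and the time-then-description ordering are invented for the example.

```typescript
// Token lists are invented for illustration; a real calendar might draw
// descriptions from the user's history and times from the grammar itself.
const TIME_TOKENS = ["9am", "10am", "12pm", "3pm", "3:30pm", "5pm"];
const DESCRIPTION_TOKENS = ["Dentist", "Doctor", "Lunch", "Meeting"];

function suggestCompletions(input: string): string[] {
  const words = input.trimStart().split(/\s+/);
  const current = words[words.length - 1] ?? "";
  // The first word should be a time; anything after that is the description.
  const candidates = words.length <= 1 ? TIME_TOKENS : DESCRIPTION_TOKENS;
  return candidates.filter(token =>
    token.toLowerCase().startsWith(current.toLowerCase())
  );
}

console.log(suggestCompletions("3"));     // ["3pm", "3:30pm"]
console.log(suggestCompletions("3pm D")); // ["Dentist", "Doctor"]
```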
Note that there are other ways to solve this problem. Circle menus are a relatively unknown GUI device that allows faster
input than traditional pull-down menus. I’m sure there are
other innovative ideas out there. It is possible to create interfaces for complex tasks that suit beginner and expert alike.