Enabling the disabled

“High-Tech Tools Lower Barriers for Disabled,” says an article in the October issue of HR Magazine. It makes the point that technology continues to improve the job possibilities for the disabled. As assistive technologies such as voice recognition and alternative input devices go mainstream, the cost of these products has dropped dramatically. The article explains:

“The more other companies move toward [the paperless office], the more it opens the workplace up to people with mobility impairments,” says Glenn Higgins, an insurance company vice president and medical director, who cannot manually operate a keyboard or a mouse. He uses a speech recognition system on his office PC as well as a breath-activated device to control his electric wheelchair.

“There are many wonderful assistive-technology gizmos now available that ease access and increase productivity,” Higgins adds, “but the first step is to be flexible and open enough to consider using these tools to expose the workplace to talent that has heretofore been untapped….”

“The disabled workforce represents the greatest opportunity for employers,” says Sears Recruitment Director Bill Donahue. “A large percentage of people with disabilities are unemployed, but no one will give them a chance.” That’s a mistake, because disabled workers are “loyal and committed to being there every day,” he says.

The article goes on to describe various assistive technologies and assess how much it really costs to employ workers with disabilities. This was particularly fascinating to me because a co-worker was injured in a bicycle accident that left him with a serious, but thankfully temporary, disability. He was able to continue working effectively, even while recovering, with the help of various assistive technologies. He was thankful that the building and his work area had already been designed to accommodate those in a wheelchair. We should be careful to help those with disabilities; we may be one of them someday.

Saints and Feasts of the day

Basil asked me for some ideas about how to write a bookmarklet to find today’s saints and feasts on the Orthodox Church in America website. I gave him a few suggestions and then wrote a bookmarklet so he could “check his work”. I simplified and improved the Saints of a Day bookmarklet after he posted about it in his blog. The major change was to make it more robust. It does some minimal input checking and makes sure that the numbers the user entered are zero-padded if necessary.
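
For the curious, here’s a rough reconstruction of the idea, not the actual bookmarklet: the oca.org URL pattern below is just a placeholder, and a real bookmarklet would be squeezed onto a single javascript: line.

```javascript
// Sketch of a "Saints of a Day" bookmarklet: prompt for a date, check it,
// zero-pad it, and jump to the page for that day.
// The URL pattern is a placeholder; the real oca.org parameters may differ.
(function () {
  var input = prompt('Saints for which day? (MM/DD, blank for today)', '');
  if (input === null) return;                       // user cancelled
  var month, day;
  if (input === '') {                               // default to today
    var now = new Date();
    month = now.getMonth() + 1;
    day = now.getDate();
  } else {
    var parts = input.split('/');
    month = parseInt(parts[0], 10);
    day = parseInt(parts[1], 10);
  }
  // minimal input checking
  if (isNaN(month) || isNaN(day) || month < 1 || month > 12 || day < 1 || day > 31) {
    alert('Please enter the date as MM/DD.');
    return;
  }
  // zero-pad single-digit values
  var mm = (month < 10 ? '0' : '') + month;
  var dd = (day < 10 ? '0' : '') + day;
  // placeholder URL -- substitute the real oca.org page for the day
  location.href = 'http://www.oca.org/saintsoftheday?month=' + mm + '&day=' + dd;
})();
```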

I also created what may be a more helpful bookmarklet: it gets Today’s Readings (also from the OCA website). This gives you the scripture readings and hymns for the day and conveniently provides a link to the synaxarion for the day.

The joys of Unicode, UTF-8, and form internationalization

I’ve been working on a web app that uses UTF-8 encoding and have been surprised at how little information is available about how to do internationalization that works with all browsers. (A.J. Flavell’s “FORM submission and i18n” article and related charset issues site were quite helpful.) Consider this my small contribution. Here’s the scenario for my app: users can enter a search string and it will search a database for matching entries. The search form includes a few other controls, so there are a number of variations and potentially 20 named fields. I only want to show the relevant name/value pairs on the URL if possible. If I submit the form with the GET method, all fields are shown on the URL, even if their values are empty, which is fairly ugly. I could use a POST, but then the URL couldn’t be sent to others to generate the same result. The search is also an idempotent transaction (it just retrieves data and has no side effects), so I’d prefer to use GET.
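
To make the idea concrete, here’s a rough sketch (the form and field names are hypothetical): walk the form’s elements, skip the empty ones, and build the query string by hand. How to encode non-ASCII values is the question that follows.

```javascript
// A rough sketch: build a GET URL containing only the fields
// the user actually filled in. Form and field names are made up.
function buildSearchURL(form) {
  var pairs = [];
  for (var i = 0; i < form.elements.length; i++) {
    var el = form.elements[i];
    if (!el.name || el.value === '') continue;   // omit empty fields
    // how to URL-encode non-ASCII values here is the open question below
    pairs.push(el.name + '=' + el.value);
  }
  return form.action + '?' + pairs.join('&');
}

// Instead of submitting the form directly, navigate to the cleaner URL.
// "searchForm" is an assumed form name.
location.href = buildSearchURL(document.forms['searchForm']);
```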

When I submit the form, the search field is properly encoded: it is converted to UTF-8 (per RFC 2279, which obsoletes RFC 2044) and then URL-escaped, so non-US-ASCII characters end up in a %nn format, where n is a hexadecimal digit. For example, α would be converted into %CE%B1. Since I want control of the resulting URL, I thought I’d use JavaScript’s location.href to set the URL explicitly. I then ran into the problem of how to properly URL-encode the strings. I’d used the JavaScript escape() function in the past to fix up ASCII characters that are not URL safe, but escape() does not handle Unicode characters well. In IE, Unicode characters are supported, but the function generates a %unnnn format which is not well understood by servers. It would give %u03B1 for the previous example. What to do?
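
To make the mismatch concrete:

```javascript
// What escape() produces for "α" (U+03B1) versus what the server expects
// for UTF-8 form data.
escape('α');   // "%u03B1" in IE -- a non-standard form most servers won't decode
               // the form submission itself sends "%CE%B1"
```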

I found the encodeURI() and encodeURIComponent() functions that are new to IE5.5, Netscape 6+, and Mozilla. Thankfully, they do exactly what I want. Now I just need to figure out what to do with older browsers such as IE5 and Netscape 4 (forgetting them is not yet an option). I wonder if anyone has written JavaScript code that does this encoding. I suppose I could submit the form and just live with the long URL.
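
In case it’s useful, here’s a sketch of what such a fallback might look like. I haven’t tested it against the older browsers in question, and it ignores characters outside the Basic Multilingual Plane, so consider it a starting point rather than a solution:

```javascript
// Sketch of a fallback encoder for browsers without encodeURIComponent()
// (e.g. IE5, Netscape 4). It UTF-8 encodes BMP characters and percent-escapes
// the bytes; surrogate pairs are not handled in this minimal version.
function utf8Escape(str) {
  if (typeof encodeURIComponent === 'function') {
    return encodeURIComponent(str);              // newer browsers
  }
  var safe = 'ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz' +
             '0123456789-_.!~*\'()';
  var hex = '0123456789ABCDEF';
  var out = '';
  for (var i = 0; i < str.length; i++) {
    var ch = str.charAt(i);
    var code = str.charCodeAt(i);
    if (safe.indexOf(ch) !== -1) { out += ch; continue; }
    // UTF-8 encode the character as one, two, or three bytes
    var bytes;
    if (code < 0x80) {
      bytes = [code];
    } else if (code < 0x800) {
      bytes = [0xC0 | (code >> 6), 0x80 | (code & 0x3F)];
    } else {
      bytes = [0xE0 | (code >> 12), 0x80 | ((code >> 6) & 0x3F), 0x80 | (code & 0x3F)];
    }
    // percent-escape each byte
    for (var j = 0; j < bytes.length; j++) {
      out += '%' + hex.charAt(bytes[j] >> 4) + hex.charAt(bytes[j] & 0x0F);
    }
  }
  return out;
}

// utf8Escape('α') gives "%CE%B1", matching what the form submission produces.
```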

I just happened to think that all my Mozilla and IE5.5 bookmarklets should probably be converted to use encodeURIComponent() instead of escape(). That would allow searching for non-ASCII characters.
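
As a trivial example of the change (the search URL is just for illustration):

```javascript
// Before: non-ASCII search terms get mangled into %unnnn form
javascript:location.href='http://www.google.com/search?q='+escape(prompt('Search for:',''))

// After: non-ASCII terms arrive as proper UTF-8 percent-escapes
javascript:location.href='http://www.google.com/search?q='+encodeURIComponent(prompt('Search for:',''))
```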

Washing on the web

This sounds like a great April Fools’ Day joke created by people making fun of SOAP: IBM and USA Technologies announced Friday that they will Web-enable 9,000 washing machines and dryers at U.S. colleges and universities. Called e-Suds, the system will allow students to check for machine availability on a web site. They can pay by swiping an ID or credit card or by calling from a cell phone. Students can choose to have the machine add soap and fabric softener. When their wash is done, they can be notified by e-mail. Laundromat owners can also use the web interface to monitor machine status, check water temperature and filters, and watch usage patterns. Cashless vending should also help reduce the $500 million annual losses attributed to vandalism.

A Reuters story about the machines says “A company that owns laundry machines in colleges in Ohio, Indiana, Michigan and Kentucky will install the machines during the autumn term.” I wonder if Asbury will be getting them.

Extremists, meet the blog

Jon Udell: “What mainly fascinates me about this moment in history is the role of the blog. We’ve turned a corner, I think, in terms of pluralism. Authentic voices on all sides of all debates are heard directly. The world is profoundly more transparent. Given the irreducible and growing complexity of everything, this is a necessary and wonderful thing. I feel lucky to be a part of it!”

Can we talk?

I recently discovered that Ray Ozzie is experimenting with blogging. In case you haven’t heard of him, he was the creator and developer of Lotus Notes. He founded Groove Networks in 1997 to take groupware in a new, more secure, and decentralized direction. Because of his years of experience, he’s got terrific insights into how users behave in collaborative environments, particularly with regard to security.

He predicts: “If we continue on the current trajectory, e-mail will become the place where you receive stuff from people you don’t know, and Groove and other collaborative environments will be where you work with people you know.” This is already true for me. I do a great deal of productive work interaction through instant messaging and IRC. E-mail has become a tool for archiving information, exchanging less-pressing thoughts, and spam.

The architecture of our collaborative environments matters a great deal to our productivity and to the quality of the conversation, he says. Blogs improve the signal-to-noise ratio by creating distributed conversation threads that naturally omit the spammers and flamers because nobody links to them. If you have a blog, you can participate in the conversation, and the conversation can be guided as blogs link to one another. Civilized public discourse can return: blogs give everyone the power of their own press.

Speaking of blog architecture, a number of folks are working on the BlogMD Initiative. The name made me think of medical blogs, but in actuality they are talking about ways to improve the metadata (MD, you see) exposed about blogs. There are other similar projects: BlogChalking wants bloggers to add geographic and demographic information to their blogs. They’re off to a good start with thousands of people adding blogchalk meta tags, but the data isn’t completely reliable due to formatting issues (some people use a postal abbreviation, such as TX, instead of the full state name, Texas). BlogMD seems to be focused on data that is typically available on blog web pages, such as last update time and URL. Having a way to access this consistently and programmatically for all blogs would be helpful. And I’m sure there is other metadata that would be useful.

I’ve got to run to vespers at church, but I also want to mention there is a privacy concern with some metadata, particularly the demographic data available when blogchalking. Many times we read stories in the newspaper and have no idea about the demographics of the author or editor. This hasn’t particularly harmed newspapers. How much metadata should be available?

Doing what’s best for customers

Microsoft is no longer offering free downloads of their TrueType core fonts for the web. The folks at Typographica asked them why they were removed and got the Microsoft spin, I mean explanation. These font downloads had been offered for five years and were a great resource for older systems. I will particularly miss the very nice descriptions of the development of the fonts, although it appears some of the information is still available. Those descriptions were part of what sparked my interest in typography.

Since almost every recent Windows and Macintosh system has Internet Explorer and these fonts installed, this will not have a great impact on the majority of platforms. Could it be that Microsoft removed the fonts because many Linux users needed them in order to have readable screen fonts? Hmmm.

Update: Apparently Slashdot ran this font story on Sunday. Reader comments indicate that the fonts are available on SourceForge.

Getting the message

Doc Searls suggests that AOL should open the protocol to AOL Instant Messenger or adopt the Jabber protocol. He writes:

AOL Instant Messenger is a client-only lock-in that will be undermined totally once the Jabber protocol (or some other IM protocol) ubiquitizes into the same grade of Internet infrastructure as SMTP and POP3 provide for mail service and HTTP provides for Web service.

I agree that it is inevitable that instant messaging protocols will eventually be opened. It would be best for AOL to open up all their protocols, as doing so would give them an immediate advantage and help them become the standard.

Opening just the AOL client protocol misses the point (and it has already been mostly reverse-engineered as well as licensed twice). Much more interesting and useful would be opening the protocol that AOL uses to communicate between servers. Being able to bridge instant messaging systems at the server level opens up a whole world of possibilities. It would allow variation in clients while still supporting interoperability. It would allow differences in protocol that might not be included in the standard, such as different encryption formats or additional features. An extensible protocol could allow some of this, although adding new data encryption techniques after the fact while maintaining interoperability would likely be difficult or impossible. Opening the server protocol would also allow more secure, inside-the-firewall servers that have their own unique features and configurations, such as message logging (a legal requirement in some industries).

Blatant plug for the company: Lotus Sametime already meets many of these “fit for business” requirements and offers AOL Instant Messenger compatibility.

Freedom of the press and open source

Via a circuitous route I stumbled across Doc Searls’s commentary “Cheap Talk: Why Open Source and silence don’t mix.” It summarizes the wisdom of The Cluetrain Manifesto:

  1. Markets are conversations
  2. Talk is cheap
  3. Silence is fatal

Open source implicitly trusts and relies on the conversations that comprise its markets. This is what makes open source fundamentally different than closed source. Not only can you do more with it (and to it) because everything about it is exposed, but it trusts you enough to disclose all of itself to you….

Open source [is] burning down Development as Usual. Why? Is it just because open source has more Goodness than closed source? No…. Open source has no secrets. It is inherently disclosing. And disclosures start conversations – and then do nothing to stop them.

So here’s the clue we’re talking about here: Outside the secret-keepers themselves, there is no demand for secrecy. No market for it. And since markets are conversations, you can’t use secrecy to make a market. Only to prevent one.

I’ve been thinking about this for days since I first read it and had to wade through my browser cache to find it again. Open source is about freedom and relies on rights similar to freedom of the press. Software patents and threats of software patents are dangerous. Having worked with the Mozilla project for years now, I still find it refreshing that they have nothing to hide. The project is developed in the open. Let’s keep the conversation going.