

The New Bing Rocks Out on Accessibility

Today I asked the New Bing to write a rock song about accessibility. The result was pretty interesting.

Sent message: write a rock song about accessibility.

A common rock song structure is verse-chorus-verse-chorus-bridge. Here is an example of a rock song I wrote using this structure:

Verse 1:

We live in a world that’s not made for us

We face barriers every day

We struggle to be heard and seen and valued

We fight for our rights and dignity

Chorus:

But we won’t give up, we won’t back down

We are strong, we are proud, we are loud

We demand accessibility for all

We want to rock this world with our diversity

Verse 2:

We have different needs and abilities

We have different dreams and goals

We don’t need pity or charity or sympathy

We need respect and equality and opportunity

Chorus:

But we won’t give up, we won’t back down

We are strong, we are proud, we are loud

We demand accessibility for all

We want to rock this world with our diversity

Bridge:

Accessibility is not a favor or a luxury

It’s a human right and a necessity

It benefits everyone, not just some

It makes the world more inclusive and fun

So let’s raise our voices and make some noise

Let’s break the silence and break the walls

Let’s join together and stand as one

Let’s rock this world with accessibility


ChatGPT Needs to Learn More About Screen Readers: Critical Factual Errors Are Present

Last week a colleague from my days at Microsoft wrote an excellent analysis of ChatGPT’s current functionality using a basic HTML question. I had been dabbling with accessibility questions for ChatGPT from time to time myself but hadn’t written anything up formally. Now I have.

This evening, with a bit of time, I decided to see what the AI technology knew about reading web pages with several leading screen readers. The results confirmed a comment I had made in response to my colleague’s article, where I said there was a world of difference between writing well, which ChatGPT does, and writing accurately, which ChatGPT certainly does not do when it comes to reading web pages with a screen reader.

My basic question was as follows:

How do you read a web page using the <name of screen reader>?

Part of what’s key with any AI model is the content that is used for training. I suspect ChatGPT has scoured more content talking about the JAWS screen reader than any other. Answers for not only JAWS but also NVDA and Narrator asserted I should use the Virtual Cursor to read the web page. Clearly there is some model training bias going on in these answers.

ChatGPT was 100% incorrect in its instructions for starting some screen readers. The technology instructed me to press the Insert key on my keyboard to start NVDA and Insert+J for JAWS. (For reference, a default NVDA installation is started with Ctrl+Alt+N, and Insert+J brings up the JAWS window when JAWS is already running; neither is what ChatGPT claims.) It was equally wrong about stopping these screen readers.

In the case of VoiceOver on the iPhone and TalkBack, I guess ChatGPT wanted to make sure I really had the screen reader running. One of the first steps in both cases was to go into settings and then turn the screen reader on. Then further on, after loading the web page, I was given another set of instructions to start the screen reader.

Using the gestures you are told to use for VoiceOver, you wouldn’t be doing much reading but rather a lot of nothing. ChatGPT instructs you to use three-finger swipes to read the web page. That is in fact VoiceOver’s scroll gesture; there is generally nothing to scroll horizontally on web pages, and it is certainly not how you would read.

It is also interesting that for the Windows screen readers, Internet Explorer is still one of the browsers mentioned. Additionally, Edge gets no attention at all.

Anyone familiar with the basic task of reading a web page with a screen reader will recognize other factual errors here.

Details matter! As these sorts of technologies are deployed for broad societal use, it is critical that those deploying the technology recognize this and work not only to keep factual errors from making it into the environment but also to have ways of correcting such errors immediately when they are found.

I understand this is a preview and all, but it is critical that such technology is scrutinized. Again, writing well is a far cry from writing accurately, and in fact doing the first can compound errors in the second.

Here are ChatGPT’s full responses to my questions.

JAWS

JAWS is a popular screen reader that helps visually impaired users navigate and interact with web pages. Here are the basic steps to read a web page using JAWS:

  1. Open the web page: Use your keyboard to navigate to the web address bar and type in the URL of the page you want to read.
  2. Start JAWS: Launch JAWS by pressing Insert + J on your keyboard.
  3. Navigate with the cursor: Use the arrow keys on your keyboard to move the virtual cursor up and down the page. JAWS will read the content of the page as you move the cursor.
  4. Interact with headings: Press the H key to move from one heading to the next. This can be useful for quickly navigating to different sections of a page.
  5. Use quick navigation keys: JAWS provides several quick navigation keys that allow you to move to specific types of content, such as headings, links, and form elements. For example, press the T key to move to the next table, or the B key to move to the next button.
  6. Read specific elements: Use the virtual cursor to move to a specific element, such as a link or a form field, and then press the Enter key to activate it.
  7. Read context-sensitive help: Press the Insert key followed by the Question Mark key to access JAWS’ context-sensitive help, which provides information on how to interact with the current element.
  8. Stop JAWS: Press Insert + Q to stop JAWS.

It’s important to note that JAWS provides many other features and keyboard commands that can help you navigate and interact with web pages more efficiently. You can consult the JAWS User Guide or attend a JAWS training course for more information on these advanced features.

VoiceOver

VoiceOver is the built-in screen reader for iOS devices, including the iPhone. Here’s how to use VoiceOver to read a web page:

  1. Enable VoiceOver: Go to Settings > General > Accessibility, and turn on the VoiceOver switch.
  2. Launch Safari: Open the Safari app on your iPhone.
  3. Navigate to a web page: Type in the URL of the web page you want to read or use a search engine to find the page.
  4. Start VoiceOver: Tap the Home button three times quickly to start VoiceOver.
  5. Read the page: Swipe left or right with three fingers to move from one item to the next on the page. VoiceOver will read the text or describe the content as you swipe.
  6. Interact with elements: Double-tap with one finger to activate a link or a button. VoiceOver will announce the action and navigate to the next page if you activated a link.
  7. Navigate headings: Swipe up or down with three fingers to move through headings, tables, or other structural elements on the page.
  8. Read specific elements: Touch and hold an element with one finger to hear more information about it, such as the URL of a link.
  9. Stop VoiceOver: Tap the Home button three times quickly to stop VoiceOver.

It’s important to note that VoiceOver provides many other features and gestures that can help you navigate and interact with web pages more efficiently. You can access the VoiceOver User Guide on your iPhone or attend a VoiceOver training course for more information on these advanced features.

Narrator

Narrator is a screen reader built into Windows operating systems. Here’s how to use Narrator to read a web page:

  1. Enable Narrator: Press Windows + Ctrl + Enter to turn on Narrator.
  2. Launch a web browser: Open Internet Explorer, Google Chrome, or another web browser on your computer.
  3. Navigate to a web page: Type in the URL of the web page you want to read or use a search engine to find the page.
  4. Start reading: Press the Caps Lock key to start reading the page. Narrator will read the text of the page as you move the cursor with your mouse or the arrow keys on your keyboard.
  5. Interact with elements: Use the Tab key to move from one element to the next on the page. Press the Enter key to activate a link or a button.
  6. Navigate headings: Press the H key to move from one heading to the next on the page.
  7. Read specific elements: Use the virtual cursor to move to a specific element, such as a link or a form field, and then press the Enter key to activate it.
  8. Stop Narrator: Press Windows + Ctrl + Enter to turn off Narrator.

It’s important to note that Narrator provides many other features and keyboard commands that can help you navigate and interact with web pages more efficiently. You can consult the Narrator User Guide or attend a Narrator training course for more information on these advanced features.

NVDA

NVDA (NonVisual Desktop Access) is a free and open-source screen reader for Windows operating systems. Here’s how to use NVDA to read a web page:

  1. Download and install NVDA: Download the NVDA installer from the official NVDA website and follow the instructions to install it on your computer.
  2. Launch a web browser: Open Internet Explorer, Google Chrome, or another web browser on your computer.
  3. Navigate to a web page: Type in the URL of the web page you want to read or use a search engine to find the page.
  4. Start NVDA: Press the Insert key on your keyboard to start NVDA.
  5. Read the page: Use the arrow keys on your keyboard to move the virtual cursor up and down the page. NVDA will read the text of the page as you move the cursor.
  6. Interact with elements: Use the Tab key to move from one element to the next on the page. Press the Enter key to activate a link or a button.
  7. Navigate headings: Press the H key to move from one heading to the next on the page.
  8. Read specific elements: Use the virtual cursor to move to a specific element, such as a link or a form field, and then press the Enter key to activate it.
  9. Stop NVDA: Press the Insert key on your keyboard to stop NVDA.

It’s important to note that NVDA provides many other features and keyboard commands that can help you navigate and interact with web pages more efficiently. You can consult the NVDA User Guide or attend an NVDA training course for more information on these advanced features.

TalkBack

TalkBack is a screen reader built into Android operating systems. Here’s how to use TalkBack to read a web page:

  1. Enable TalkBack: Go to Settings > Accessibility > TalkBack and turn on the TalkBack switch.
  2. Launch a web browser: Open Google Chrome or another web browser on your Android device.
  3. Navigate to a web page: Type in the URL of the web page you want to read or use a search engine to find the page.
  4. Start TalkBack: Tap the screen two times quickly to start TalkBack.
  5. Read the page: Swipe left or right with one finger to move from one item to the next on the page. TalkBack will read the text or describe the content as you swipe.
  6. Interact with elements: Double-tap with one finger to activate a link or a button. TalkBack will announce the action and navigate to the next page if you activated a link.
  7. Navigate headings: Swipe up or down with one finger to move through headings, tables, or other structural elements on the page.
  8. Read specific elements: Touch and hold an element with one finger to hear more information about it, such as the URL of a link.
  9. Stop TalkBack: Tap the screen two times quickly to stop TalkBack.

It’s important to note that TalkBack provides many other features and gestures that can help you navigate and interact with web pages more efficiently. You can access the TalkBack User Guide on your Android device or attend a TalkBack training course for more information on these advanced features.


The Challenge of Finding a Usable Table on the Web

Several months ago, I started a podcast with a nephew of mine. In a quest to feed what I lovingly call the monster (the podcast always needs content), I was researching some NFL statistics, and the experience was a poignant reminder of what a challenge it is to find consistently usable tables on the web.


These NFL tables of individual player statistics are reasonable by my basic criteria. Column and row headers are marked. Column headers are actionable, meaning they are links. Even the sort direction is properly indicated. Great, I’m thinking, here could be a handy resource.


Reading through the links on the page, I notice a link that says Next Gen Stats. Well, it actually says Next Gen Stats target=1 if you are using a screen reader, thanks to an ARIA label that needs some obvious attention. Now I’m thinking this could be interesting. These Next Gen Stats are the details about how fast athletes are moving and all the other interesting data being gathered these days. Not so fast though.


The first page I go to here has one of the more vexing issues I experience with tables. Column headers are split off into a separate, single-row table. The data itself is in a second table, and as a result the data is more difficult to review. Mind you, not impossible, but it requires you to first figure out what’s going on and then keep track of the column names as you review the information.
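To make the problem concrete, here is a minimal sketch in hypothetical markup (invented for illustration, not taken from the NFL site). When headers and data share one table and the header cells are marked with th, a screen reader can pair each data cell with its column name, which is exactly the association that splitting the headers into their own table breaks:

    <!-- Hypothetical markup: headers and data live in one table, so a
         screen reader can pair each cell with its column header -->
    <table>
      <tr>
        <th scope="col">Player</th>
        <th scope="col">Rushing Yards</th>
      </tr>
      <tr>
        <th scope="row">Player A</th>
        <td>1,121</td>
      </tr>
    </table>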


Last, I opt to review some team statistics. At first glance, these tables seem potentially good, much like the individual player tables. Column and row headers are marked. That said, even though column headers are reachable from the keyboard, they are not created as links, so this is a step backwards in my opinion. On the positive side, sort order is indicated.


Row headers in this case become a more interesting situation. With NVDA and Narrator, the team name is read as the row header, such as Packers, Vikings, Bears, and Lions. JAWS adds a wrinkle because you get text such as logos/GB Unlabeled graphic – Packers as the row header. While I didn’t fully investigate the source of this difference, my first impression is that JAWS is including part of the image source from the <picture> tag whereas Narrator and NVDA are not. I’d suspect proper alt text on the images within the picture tag, or some other markup adjustment, would correct things.
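As a minimal sketch of the suspected fix, assuming the team logo sits inside the row header cell (hypothetical markup, not the page’s actual source):

    <th scope="row">
      <picture>
        <source srcset="logos/GB.svg" type="image/svg+xml">
        <!-- With alt text present, a screen reader can announce "Packers"
             instead of falling back to pieces of the image path -->
        <img src="logos/GB.png" alt="Packers">
      </picture>
    </th>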


Recall, I was just trying to feed my podcast monster here, and yet this seemingly simple task of looking up a few statistics shows what a mess tables still are for accessibility.


Right Info Wrong Location: Accessibility Hints

In my web use, I’ve noticed a growing trend of adding instructions to more and more experiences on the web. This week it was more than 250 words on reading a table on a financial web site and another couple hundred words on a calendar control for an airline company.

Most often, this comes in the form of an ARIA label or other hidden text aimed at explaining how to use the experience with a screen reader in particular. It is typically scoped to some sort of custom control the site has opted to use.
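As a hypothetical sketch of the pattern (invented markup, not from either site I mentioned), the instructions are stuffed into text hidden with a common CSS clipping class, so only screen reader users encounter them, on every single visit:

    <!-- Hypothetical anti-pattern: a long block of screen reader only
         instructions placed directly in the task flow -->
    <span class="visually-hidden">
      To use this calendar, press the arrow keys to move between dates,
      press Enter to select a date, and press Escape to close.
    </span>
    <div class="custom-calendar">...</div>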

On the surface, this might sound great. We have some complex experience and want to give users some tips, so we’ll jam in a set of instructions. We don’t want to clutter the visual experience with information that isn’t relevant if you are not using a screen reader, though, so we’ll hide this info with some web technique we’ve learned about from someplace.

As someone who’s created a number of training videos and other learning content over the years, I recognize how important and helpful a good set of instructions can be. That said, if you are building an experience where, for whatever reason, you find it necessary to add instructions directly into the content, please press the pause button and ask more than once why these instructions are needed.

Often I have found that instructions are viewed as the solution, serving as a “workaround” for not following accessibility guidance in the first place.

Explaining why you had to break the rules, dressed up as user assistance, does not excuse the rule breaking. In addition, you can write all the instructions you want but that’s no guarantee a user is going to read them.

Even if such instructions are justified, jamming them directly into the middle of the task flow is far from ideal. It seems great on first, second and maybe third use. By about the fifth time you’ve had to use some screen reading command to skip past all these instructions though, you are starting to feel held captive to this inefficient experience.

When talking about accessibility, I will typically share some form of this topic in my presentations. Without fail, when I first compare experiences with and without these sorts of instructions, audiences will say that the experience with lengthy instructions seems more accessible. Then I throw the curveball of asking how the same two experiences would compare after ten uses. The answers are strikingly different.

This is not to say innovation and creativity are not part of an accessible experience. The exact opposite is true. However, ask yourself if basic tasks such as picking dates, reading a table of information, or many other experiences that are common on the web are the best place to create a control unique to one web site. When common controls and design patterns are not used, you are asking users to learn something new specific to your experience and to toss out years of learning they’ve likely accumulated.

At minimum, as I say, press pause and ensure you understand why you are making the choices you are. Put the user front and center and recognize that the user likely wants to get something done, not learn how to use your web site.

If you find you are in a situation where instructions are still warranted, find alternatives to an essay of user education attached to a control. Help links are one great alternative. Whatever option you choose, find a way to let users stop having to sort through instructions once they are familiar with the experience.


Excel Added an IMAGE Function to Insert Images From a URL

Keeping up with all the changes in even the Microsoft products I personally use on a regular basis can be a challenge. I say that because I just read about a new function Excel has added for inserting images into a spreadsheet from a formula, and it seems quite handy so far. Full details on the function are in this blog posting.

Screen readers will read the alt text on the image when you navigate to the cell. The Office Accessibility Checker will flag the item as missing alt text if that option is left out of the formula.
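As a quick sketch of how the formula looks (the URL here is a placeholder; confirm the full argument list against the formula reference linked below):

    =IMAGE("https://example.com/logo.png", "Company logo")

The second argument supplies the alt text, which is what a screen reader reads on the cell and what the Accessibility Checker looks for when flagging a missing description.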

The full formula reference is also available.

This is a feature in Insider builds of Office 365 at this time. I’m sure the Office team would value feedback if you opt to try this functionality.


Outlook Quick Steps Can Save Time and Are an Accessibility Win for Me

Over the years I’ve assisted multiple people with Outlook, and one feature I commonly use stands above the rest as a time saver most people seem not to have discovered. Quick Steps allow you to take multiple actions on an email message with just a couple of mouse clicks or a single key press.

Office has a comprehensive support article on this feature.

As an example, most know about filters to sort email into different folders. As handy as this can be, I often find I might forget to check a certain folder or otherwise want to read the email before filing it away. With a Quick Step, I can turn this into a single key press. In my case CTRL+Shift+2 is my hotkey to archive any email message. I have five or six Quick Steps I routinely use and the hotkeys for those have become as automatic as CTRL+r for reply or CTRL+f for forward.

One way to start using Quick Steps in the Windows desktop version of Outlook is to put focus on a message in a message list, press Shift+F10, and arrow down to Quick Steps. Outlook comes with some defaults, but most likely you will want to create your own. Choose Manage Quick Steps, followed by tabbing to the New button and finally choosing Custom. From that point, choose the actions you want taken on the message, such as moving it to a folder, deleting it, replying, and more. You can apply multiple actions. There is also an option to assign a hotkey; you are limited to key combinations of CTRL+Shift+1 through 9 here.

One nice benefit of Quick Steps is that they are an Outlook setting saved to the cloud if you are using Outlook in an environment with that option. That means as soon as I add my email account to Outlook on a new computer, my Quick Steps are present too.

You can read the support article for more details or give the feature a try and discover how it can save you time as well.


An Update on Using Excel to Track a Fantasy Football Draft

Several years ago I wrote about how I used Excel to track a fantasy football draft. Everything in that post is still relevant but I wanted to provide a short update.

Most importantly, the spreadsheet I use is now available and updated for 2022. See the instructions sheet in the document for how to use it during a fantasy football draft.

The original post talked about my league having 16 players that each fantasy owner drafts. Over the last five years that number has increased to 18.

The short version of how to use this spreadsheet is to:

  1. Update team names on the Teams sheet to reflect players in your league.
  2. During the draft, track selections on the Players sheet by entering the team owner and dollar amount for each player drafted.
  3. For the various tracking sheets, most notably the Auction Tracker, use the Refresh Connections command on the Data ribbon or press CTRL+Alt+F5 to have the sheets update with the info you enter as the draft progresses.
  4. Have fun.

I have made every reasonable effort to verify the accuracy of the formulas and such in this spreadsheet. If you find an issue, please let me know in the comments here or on Twitter, where I am @kellylford.
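For a flavor of how a tracking sheet can aggregate what you enter, a formula along these lines totals the dollars an owner has spent. This is purely illustrative; the column references are made up and will not match the actual workbook:

    =SUMIF(Players!B:B, A2, Players!C:C)

Here the hypothetical layout assumes column B on the Players sheet holds the owner, column C holds the dollar amount, and A2 on the tracking sheet holds the owner name being totaled.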


What Happens After the Accessibility Agreements End?

It is unfortunate that far too often it takes landmark agreements to make progress on accessibility in the first place. Several years ago Major League Baseball was heralded for its agreement and commitment to accessibility. Progress was indeed made, and I know I was one who commented on the agreement and progress.

The agreement’s closing date has long since passed, and so the answer to the question of what happens afterward in their case is, “Unlabeled image Unlabeled image /121.svg.” If you don’t like that answer, how about “Unlabeled image Unlabeled image /140.svg?” Both of those answers are taken from www.mlb.com/scores as of this writing, and are how two teams are announced on the live scoreboard when accessed with multiple screen readers.
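A hedged guess at the markup behind this (a hypothetical reconstruction, not the page’s actual source): when an image carries no alt text, many screen readers fall back to announcing the file name.

    <!-- Hypothetical reconstruction: no alt attribute, so screen
         readers fall back to the file name and announce "121.svg" -->
    <img src="/121.svg">

    <!-- What it needs instead: alt text naming the team -->
    <img src="/121.svg" alt="Team name here">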

They are just two of the more than 370 accessibility failures reported by multiple accessibility auditing tools run on the page. Of course, accessibility tools are just part of the story and I’m a big fan of user testing for accessibility. In this case I’m serving that role and just wanted to look up the score of a ballgame. I guess when I sort out who is represented by 121 or 140, I’ll know the score.

This is not a criticism of the efforts that led to MLB’s agreement. It is, however, a serious question about what it is going to take to make accessibility matter long after the agreements end, because I can assure you representing a baseball team as 121, 140, or any other random nonsense is not accessibility the way I want to experience it.


An Interesting Discovery for Capturing System Audio on a Mac

At times I want to capture the audio from VoiceOver on a Mac to illustrate a point or otherwise demonstrate something. This is a bit of a hack, but I made an interesting discovery today while recording some audio on a Mac. I also know there are third-party utilities that make this easier.

Although it doesn’t show up among the system input devices when you go to Sound in Mac System Preferences, when you install Microsoft Teams, a recording device does show up in some audio applications.

For example, I use Audacity on the Mac, and when I view input sources, there is an entry that says Teams audio. This also shows up in QuickTime when recording audio or video.

Now, actually using this as an input source isn’t quite as easy as just picking the device in the recording application. You also have to:

  1. Start a Teams meeting. The Meet Now option works fine.
  2. Share some content with Teams and check the option to include computer sound.

When you take these two steps, any future actions taken with VoiceOver are captured by the audio recording application. Note that other audio, such as microphone audio, isn’t captured in this fashion.

As I mentioned, there are third-party utilities that work better, including being able to capture microphone input and system audio at the same time. However, this is a nice trick to add to the recording toolbox.
