
The Idea Place Posts

A Couple Notes on VMWare For Virtual Machine Use

When I first wrote about using virtual machines with a screen reader, I mentioned that the machine management of VMWare’s Workstation was challenging. VMWare Workstation 16 seems to have corrected the majority of these issues.

The list of virtual machines is now something you can reach with the tab key. Up and down arrow move through the different machines, and a context menu, available on each machine with Shift+F10 or the keyboard's Applications key, brings up all the options you'd expect for power, snapshots, and more.

In addition, the menu bar in the program now reads correctly with multiple screen readers. You can press Alt and hear that the top menu has gained focus, and keyboard navigation with a screen reader then behaves as expected.

There is still at least one quirk I’ve encountered when using any menus in the program. When you are using the down arrow to move through a list of items on a menu, up arrow does not seem to move in reverse. For example, in the context menu for a virtual machine, there is a list of devices you can make available to the virtual machine. If you move down past one of these devices, you have to arrow down through all the menu choices to get back to what you skipped.

Overall, in a couple of days of using VMWare Workstation 16, I've had success. As I mentioned in my original post here, this is not a free option, but with these changes it is one I'm going to be putting back into my virtual machine toolbox.

On the Mac side, VMWare has released Fusion 12. This is a must if you are going to run Apple’s newest update for the Mac OS.

I believe this is also new: there is now an option for a free personal license to use Fusion on the Mac. The license supports the typical non-commercial uses, such as education and personal use. Take note, though, that signing up for the license requires completing a captcha challenge that has no audio option. So far VMWare has not responded to a tweet asking about this.

If you do try Fusion on the Mac, I posted about how I resolved the Caps Lock conflict between VoiceOver and Windows screen readers. My work and personal interests require me to use both Windows and Mac daily, so I find this a very viable option.


Improved Sports Descriptions

I enjoy sports but often find myself wanting more details than even the best play-by-play announcer can provide. I find myself wondering if technology could help in this situation.

I'll start with football as an example. As most fans of the sport will know, there are 22 players on the field at any given time. It would be simply impossible for any announcer to describe the location of all 22 players, let alone what each of them is doing during a play.

Imagine though if you divided a football field up into a complete grid where you could express the location of any spot by a set of coordinates. Then imagine that you can describe the location of players based on this grid. So at the start of the play you could indicate where all 22 players are lined up. Then as the players move during a play, you could have technology that would communicate how any player is moving throughout the grid.

Or imagine if you could combine this technology with the accessibility improvements that have come in various touch devices so you could slide your finger around in the application window and tell the location of any player at any time. Obviously doing any of this in real time would likely still be tough but imagine if you could do this on demand for certain plays just to get a sense of what’s really going on.
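As a rough illustration of the grid idea above, here is a toy Python sketch. The field dimensions, player names, and coordinates are all invented for the example; real tracking data would obviously be far richer.

```python
# Toy sketch of the grid idea: express each player's spot as coordinates
# on an assumed 100-yard by 53-yard grid, then turn coordinates into words.
# All names and positions here are made up for illustration.

def describe(player, x, y):
    """Describe a player's location on the assumed grid."""
    return f"{player} at the {x} yard line, {y} yards from the left sideline"

# A snapshot at the start of a play. Updated snapshots during the play
# could be read out, or queried on demand by sliding a finger across
# a touch-screen representation of the grid.
positions = {"QB": (25, 26), "WR1": (25, 5)}

for name, (x, y) in positions.items():
    print(describe(name, x, y))
```

The same structure could hold all 22 players; the interesting research question is how to summarize the movement of that many points in a way that is useful in, or near, real time.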

I have no idea how possible this is with today's technology. It seems like it could be an interesting research project for someone in the right environment. I'm tossing this out there because it's something I've thought about lately. Anyone want to take up the challenge?


Using Windows Quick Assist with a Screen Reader

When you want to assist someone else with a computer problem on Windows, there are a range of connection options. Some, such as an actual phone call, are easy to establish but difficult to use for this sort of situation. Others, like Quick Assist, offer the best of both worlds in most cases: establishing the connection is easy, and the access you have on the remote computer works quite well. However, Quick Assist does not currently support redirection of audio as far as I know, so if you are using a screen reader, it can obviously be difficult to use.

In the screen reader case, especially if the individual giving the assistance is using a screen reader, there's not a great way to easily know what's happening on the remote computer. That said, combining a couple of pieces of technology, something I'm sure is no surprise to pretty much anyone who uses a screen reader, is one potential workaround if you want to use this connection option.

The Scenario

You, the screen reading user, want to aid someone remotely. That person may or may not be a screen reading user. With the old-school phone call method, even if you use newer voice-over-IP technology such as Skype, Teams, Zoom, or the never-ending stream of similar products, you obviously have no direct control over the remote computer. Asking the individual you are trying to assist to type commands and read the output back only goes so far.

The Quick Assist Solution

Establishing a Quick Assist connection is straightforward. Both individuals launch the Quick Assist app, which comes as part of pretty much all Windows 10 installations. The person helping requests a six-digit code that they share with the person receiving help. The person providing help also indicates how much control they want to request; for our example this would be full control. The person receiving help enters the six-digit code and approves the full control request. The code request, code entry, control request and control approval all appear as web content, so they work quite well with screen readers.

The Wrinkle

As mentioned before, Quick Assist does not support audio redirection. To work around that situation, again you can combine a bit of technology.

  1. On either the computer providing help or the computer receiving help, create a meeting or other connection with your program of choice that allows for screen sharing. Be sure to use a program that supports sharing of system audio as well. As I'm sure many blog readers know, I am a Microsoft employee, although this is my personal blog, so I tend to use Microsoft Teams, but Zoom and other programs will work equally well here.
  2. Both individuals involved in the assistance should then join the meeting or other connection.
  3. The person wanting help next starts a screen share from within the call or meeting. It is key to include the system audio. This creates an environment where the person offering assistance, who in our scenario is a screen reading user, can hear the remote audio.
  4. Now establish the Quick Assist connection, including the person to provide assistance requesting full control and the person wanting assistance granting such control.

The Additional Wrinkles

Any time you are establishing a remote connection between two computers, the issue of how and when keyboard commands, touch gestures or mouse clicks go to the remote or local computer requires attention. In my trials of Quick Assist thus far, when I first established the full connection, keyboard commands stayed with the local computer. Using various screen reader commands and techniques, I had to find part of the Quick Assist window on my computer, labeled something like Remote Input, and activate that area. I was able to get to this area with the JAWS cursor, NVDA object navigation, or the Narrator previous and next commands. Activating the input area with the appropriate command in each screen reader allowed sending keyboard commands to the remote computer. At that point I could use all the standard commands on the remote computer, such as Ctrl+Win+Enter to launch Narrator. Again, our scenario is that a screen reading user is providing help and the person receiving help is not, and likely doesn't even have anything beyond Narrator installed.

One other item of note once you are using the remote computer: so far in my use of Quick Assist, I could not find a way to get keyboard commands to go back to the local computer without having the person receiving assistance end the connection. I did try some of the standard commands I use when working with a standard Remote Desktop connection, and they did not seem to work. This may be a knowledge gap on my part or a limitation of Quick Assist. As I learn more, I'll update with additional details.

Using The Remote Computer

The process of creating this connection sounds more complicated than it is once you do it a couple of times. In my experience, getting the remote computer audio from the screen sharing in Teams or another app, and using Quick Assist to control the remote computer worked well. There was a slight, very slight, lag from the time I used my keyboard commands until I heard the result but it was far better than trying to ask the other person to type a range of commands. I found the environment quite usable to help solve problems.

Alternatives and More

This is not the only solution to this problem, of course. If both parties in the connection are using the same screen reader, solutions such as JAWS Tandem, or equivalents where they exist, are better alternatives. Also, you could forgo the screen sharing part of this process and just listen to the audio from the remote computer over the voice connection you are using. Likely there are other options I've not thought of, but I offer this as one more possibility when you want to help others out of a computer jam remotely, especially when you are using a screen reader and the person needing help is not.


A Short Update on Virtual Machines to Include an Apple Mac

Since writing about using virtual machines with a screen reader, I’ve expanded my personal computing environment to include more Macintosh use. As a result, I wanted to offer a few comments on using VMWare Fusion on the Mac.

Bill Holton’s excellent article on running Windows on a Mac was an invaluable start for me here. From that article my biggest issue to resolve was how to address use of the Caps Lock key.

I took a different approach and opted to solve, directly through VMWare Fusion, both the conflict between multiple screen readers over the Caps Lock key and the fact that the key apparently doesn't get passed on to Windows in a virtual machine. This was easier than I expected at the start.

In Fusion’s preferences, there is a keyboard option. Within that area are options for remapping keys. I chose to remap a different key on my keyboard to the Caps Lock key. While I was at it, I assigned a keyboard combination to what Fusion calls Menu. This is the equivalent of pressing the Context or Application key for right click menus on a keyboard.

These key assignments work just fine. You can press and hold the key you’ve reassigned to be the Caps Lock key and add the necessary Windows screen reader key commands, such as t for window title.

I’m successfully using JAWS, NVDA and Narrator on Windows running under VMWare Fusion. Since I’m running on a MacBook Pro, I configure the screen readers to use their laptop keyboard layouts. I also use the option known as Fusion in VMWare. This causes the open windows from both the Mac and Windows to appear as separate windows within the Mac operating system. I find it works great with the Windows screen readers kicking in just fine when I switch to a Windows app.

Bill’s article also talked about using Boot Camp. I have tried that, and it is also an excellent option. However, I’ve found one consistent problem with Boot Camp that stops me from using it. There is a problem with the audio drivers used in Boot Camp that causes the first syllable or so of synthesized speech to be cut off unless other audio is playing. This has happened on occasion with other audio drivers on Windows. I’ve reported this to Apple, and I hope they address this sooner than later.


New Name, Same Content

Despite installing, as I understand it, pretty much all the WordPress security you can, my blog has been hacked multiple times. It reached the point that Google was claiming my site was a security threat. As a result, I've opted for now to try a new name with the same old content, and hopefully more frequently added new content.

I also like the new name better anyway. I like the idea of putting thoughts and ideas out and finding what resonates.


Another Day, Another Example of Missing Alt Text

As much as I’m sure anyone familiar with web accessibility doesn’t need yet another example of why alt text matters, as a consumer of web content I certainly am impacted when it is missing.

For anyone exploring cutting the cord, the U.S. Federal Communications Commission (FCC) has a handy resource to show you what digital television stations you can receive in your area with an antenna. Navigate to https://www.fcc.gov/media/engineering/dtvmaps and enter an address, city and state or zip code to get this information. Results are in a table that has headers and such. This is good.

Unfortunately, one of the key pieces of information in these results, the signal strength, is a graphic. As you can expect from the title of this post, there is no alt text on these graphics.

Section 508 has been around for quite some time as have the Web Content Accessibility Guidelines. Proper alt text, again as I’m sure pretty much anyone working in the web environment knows, is a requirement. One can only wonder why something this basic was missed.


A Request to Librarians: Please Ask OverDrive About Libby Accessibility

I'm a big fan of public libraries and the wide range of resources they make available. As a child, stops at my local bookmobile and summer afternoons spent at "Story Time Tree" hearing a fun adventure were two of my favorite activities. As an adult, I make frequent use of the eBook services, databases and other resources libraries make available.

OverDrive is, as far as I know, the largest player in making eBooks available to libraries. In many ways they provide a quality service, but I'd encourage every librarian to fully understand the bargain you are making when you use OverDrive.

Would your library tolerate an author or other speaker coming to give a talk in your facility while secretly whispering to some visitors that they should not attend? I think not, yet when you invite OverDrive into your facility, that is close to what you are doing.

OverDrive heavily promotes their Libby app as a part of the eBook services they offer. What I suspect most librarians do not know is that for users who rely on screen reading technology, the following is what greets their patrons when the Libby app is launched:

Welcome to Libby! This is a secret message for screen readers. We are working to improve your experience with this app. In the meantime, our OverDrive app is more accessible. You can find it in the app store. We thank you for your patience.

Libby is hardly a new app at this point, and it should have been accessible from the start. This message has been present, to the best of my knowledge, for close to two years now. My own requests to OverDrive asking for any updates have gone without any meaningful response on multiple occasions.

Accessibility requirements, too, are nothing new. Nor are the technical details of making an app accessible a mystery; Apple, where this message appears in the iOS Libby app, has a wealth of resources. OverDrive itself, by directing users to its older app and claiming it is more accessible, demonstrates it understands accessibility to some degree.

I'd encourage librarians to ask OverDrive when this app will be accessible. Ask why the message indicating the app has accessibility issues is "secret." It is beyond time for these sorts of challenges to stop being hidden away. It is time for them to be fixed, and most definitely not hidden.


MLB Strikes Out on Accessibility

Although it took a structured negotiation and settlement, at one point MLB seemed like it was turning the corner on improving and sustaining accessibility. Sadly that time seems to have passed, much like a favorite star of yesterday.

Most recently, MLB added a simple game to their At Bat app with a bracket challenge to pick winners in the upcoming Home Run Derby. The problem is that MLB used "Player Name" as the accessible name for every player. Obviously this means that if you use a screen reader such as VoiceOver on the iPhone, you have no idea which names you'd be selecting. Capping the problem off, when I called MLB's dedicated accessibility support phone number to report the issue, the support representative asked me, "What's VoiceOver?" I spent more of the call teaching the person how to use VoiceOver than anything else.

One can only wonder if MLB did any accessibility testing of this game. It took me less than two minutes to notice the issue. For an organization that seems to keep statistics on almost every conceivable aspect of a game, it doesn’t seem like it would be too difficult to ensure accessibility testing is a part of all processes.

After trying the game in the At Bat app, I did try the web site as a part of writing this blog post. That does work a bit better but still has issues. Further, why should users of screen readers be forced to jump through extra hoops and try multiple versions just to have a bit of recreation? MLB has demonstrated the ability to be a leader in accessibility previously. Here’s hoping the organization finds a way to return to those levels. In this case, it is a strikeout on accessibility, leaving this fan disappointed and more.


Inaccessibility Simply Gets Old

There are times when the continued lack of accessibility in basic tasks gets old. Today I wanted to see if my local Staples store carried a product, so I conducted a product search. Between all the instances of "icon" and "product tile large" on the resulting page, I have absolutely no idea if the product I was searching for is actually available.

I could write all about web accessibility standards or why it is important to make your web site accessible. I could also write a detailed analysis of what is wrong on the web site. Instead I'll just say, it is disappointing that near the end of 2018, a product search on a major retail company's web site still leads to every product reading "product tile large" with multiple screen readers.

Staples does have an accessibility policy of sorts.


Improving the Usability of an Online Scrabble Game

A friend recently invited me to play a Scrabble-like online game called Jamble on a service called ItsYourTurn.com (IYT). The service is far from a model of accessibility, and I'm hopeful the company will respond to my contact about improving the basic accessibility of the site. Until that happens, I wanted to share some tips and a dictionary file I've created for the JAWS screen reader, because in my opinion it makes the game quite playable.

The fundamental challenge for IYT, with screen readers in particular, is that the site makes extensive use of graphics that are all missing alternative text. JAWS, like some other screen readers, copes with this situation by communicating some form of the address of the graphic file. In JAWS, if the graphic is part of a link, that address is communicated by default. If it is just a graphic, a setting allows JAWS to still communicate the address; you will want to change the JAWS setting for graphics from show tagged to show all.

In my first game, the game board for Jamble read something like the following to me:

15 jb/jb-bl-2l jb/jb-bl jb/jb-bl jb/jb-bl-3w jb/jb-bl jb/jb-bl jb/jb-bl jb/jb-bl-2l

My seven letter tiles read as:

jb/jb-qu jb/jb-C jb/jb-D jb/jb-E jb/jb-L jb/jb-U jb/jb-V  

In fact, about the only positive was that the game board used an HTML table, so screen reader table commands could be used to easily move around.

Turning the mishmash of letters here into something meaningful was quite easy though. A common feature of screen readers is to have a speech dictionary that allows a user to enter alternative spellings of words to improve on the pronunciation when speech synthesis doesn’t get things quite right.

Fixing all of the missing alt text and getting the various game squares and letter tiles to read something meaningful involved entering a series of speech dictionary definitions. For example, the text “jb/jb-bl-2l” was replaced with “Double Letter” since that square on the board was a double letter square. This means that JAWS now says “double letter” instead of the jb-gibberish.

One challenge I encountered initially when playing the game was finding the new word played each time by my opponent. The letters just played have a slightly different graphic filename for the first turn after they are played: a "-z" is appended to the filename. It was just a matter of updating speech dictionary rules to indicate not just the letter but also the fact that it was new. That helped with ensuring the letter was read correctly, but I had no interest in hunting around a 15x15 grid to find new words each turn.

While the speech dictionary in JAWS does replace how words are communicated, it doesn’t alter the basic text. So, I was able to identify that the filename for the new letters always ends in -z. Thus, to find the new word just played by my opponent each time it is my turn, I can just do a search for -z and JAWS will position me on the first new letter on the board. I can then use table reading commands to explore around that letter to find the rest of the newly created word.
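To illustrate the "-z" trick in code, here is a small Python sketch that scans a made-up board for cells whose graphic names end in -z. The file names such as "jb/jb-A-z" are my guess at the pattern based on the description above, not taken from the site itself.

```python
# Toy sketch of finding freshly played letters: the site appends "-z" to a
# tile's graphic filename for the first turn after it is played, so the new
# word can be located by searching for that suffix. Board contents are
# invented for illustration.

board = [
    ["jb/jb-bl",    "jb/jb-C",   "jb/jb-bl-2l"],
    ["jb/jb-bl",    "jb/jb-A-z", "jb/jb-T-z"],
    ["jb/jb-bl-3w", "jb/jb-bl",  "jb/jb-bl"],
]

def new_letters(board):
    """Return (row, column, letter) for every freshly played tile."""
    found = []
    for r, row in enumerate(board):
        for c, cell in enumerate(row):
            if cell.endswith("-z"):
                # Strip the assumed "jb/jb-" prefix and "-z" suffix.
                letter = cell[len("jb/jb-"):-len("-z")]
                found.append((r, c, letter))
    return found

print(new_letters(board))
# -> [(1, 1, 'A'), (1, 2, 'T')]
```

This mirrors what the JAWS find command does for me interactively: jump to the first -z cell, then explore the neighboring cells with table commands to read the rest of the word.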

I've published a speech dictionary for this game in two forms. First is a JAWS settings package you can use to import the dictionary into JAWS. This is for the Chrome web browser. In JAWS, use the Import/Export option on the Utilities menu and choose Import. In the dialog that appears, provide the location where you saved the settings file. For more information on importing settings files, please see the topic Importing and Exporting User Settings in the JAWS help.

For those comfortable with JAWS file locations and such, I've also published the chrome.jdf file directly. You can download this file and copy it manually to the appropriate location on your computer. With the most recent version of JAWS, this would be %appdata%\Freedom Scientific\JAWS\2019\Settings\enu. You can also rename it for other web browsers or copy its text into your own existing dictionary file.

I am also experimenting with the ability in JAWS to use sounds in these dictionary files. That will allow game squares, such as double and triple word or letter squares, to play unique sounds. Another dictionary update will be available when this sound option is complete.

Some Notes on Game Play and Other Randomness

To the extent my interest, time and ability to make updates allow, I intend this blog post to be a bit of a living document. What follows are brief notes about game play, screen reading techniques and other information that might be helpful if you opt to try this or other games from the IYT web site.

  1. The basic table for games in Jamble is a 16×16 table. The left-most column contains numbers for each row in the table. The numbers start at 15 in the upper left corner of the table. The bottom cell in this column is blank.
  2. The bottom row, or row 16, of the table contains the letters a through o to give letters to each column in the table.
  3. This leaves a 15 x 15 grid with the upper left corner reported as row 1, column 2 by a screen reader. Again, the left-most column in the full grid, or column 1, is used for numbering.
  4. When you are playing a game of Jamble, the top part of the page contains links and information for navigating the IYT web site. It is likely fastest to hit t to get to the table for the game when you load a game. During game play, you and your opponent can post messages as a part of taking your turn. These will appear just before the table.
  5. Your letters appear as links just after the game table. The text "your letters" appears just before the letters themselves. Unfortunately, this is not a heading, so you will likely need to use the search function in a screen reader to get to this area.
  6. Immediately after your seven letter tiles are another seven links. These appear to be small graphic files that either serve no purpose or duplicate functionality of the letter tiles. The speech dictionary files define these links not to speak anything as a result. JAWS will still call these “graphic” because they are links.
  7. After your letter tiles, information about the game score, tiles remaining, and previous moves is shown. For previous moves, I have had the best success activating the link to show all moves. This then shows a table listing the starting coordinates for each word played and the point value awarded. The table has three columns: the turn number, then a column each for your move and your opponent's move.
  8. After the score and move information, there are several graphics showing the points awarded for different letters. Unfortunately, again these are missing alt text and so far I’ve not found a way to indicate the point values of letters. Just defining speech definitions in this case is not enough because you need to associate the color of the point graphic with the color of the tile played and the graphic files for the letters indicate nothing about color. As of now speech definitions for these graphics have not been entered so they will read something like “jb/jb-cir2”. Again, this is an image showing a color and telling players that letters of that color are worth two points in this example.
  9. The IYT Game Status page has three tables of interest. The first shows how many games you have in different categories. The second shows games where it is your turn; activate the links containing opponent names to take your turn in each game. The final table shows games where it is your opponent's turn.
  10. Taking a turn in Jamble involves a few steps:
    1. Locate the square where you want to place the first letter of a word you want to make and press enter on that square.
    2. In the edit box that results, enter the letters you will use to make your word. Do not include letters that are already on the board. If you are using a blank character, type the letter you want the blank to represent.
    3. Select across, the default, or down, to indicate the direction of your word from a combo box just after the edit box where you entered your letters.
    4. Use the submit button to enter your letters.
    5. You will have a second submit button on the resulting page along with an edit box where you can enter a message for your opponent if you want to do so. Complete your turn by using the submit button from this page.
      1. If the resulting page reports it has 59 or so links, it means your word was not successful. There are brief instructions about what might be wrong. The page instructions tell you to use the back button in your browser to resubmit but I have found it faster to just return to the Game Status page and get back to the game from that page.
      2. If the resulting page has 86 or so links, your turn is done and your word was successfully played.
  11. If a blank tile is played, both the fact that it was originally a blank and the letter it represents are indicated on the board. I have not found a way to label this entirely so in these cases JAWS will announce a w and then the letter represented by the blank.
  12. The JAWS speech dictionary does not impact the braille representation of text and I have not found an equivalent method to adjust what’s communicated in this game for braille as of yet. As a result, in braille this game is still going to show all the untagged graphic file names.
  13. As mentioned earlier, the JAWS speech dictionary does not alter text when reading by character so if you read by character, you will still experience all the text such as “jb/jb-“.
  14. If you want to find a specific letter on the board, use the JAWS find text feature with ctrl+f and enter a dash followed immediately by the letter you are searching for. As a reminder, the game appends a -z to letters the first time they appear in new words so there is a chance when searching for a z on the board you will also hit this situation.
  15. JAWS has another feature known as Custom Labels that could possibly also be used to rename the untagged graphics. However, in trying that, I have found that the coordinates in a table seem to be saved along with the label, making this feature an impractical solution here.
  16. IYT has a number of games. Again, I hope the company will respond to my inquiry about improving accessibility but until they do, the techniques I’ve outlined here can be used for many of their other games.
  17. IYT is free and also offers subscriptions. Free accounts are limited to 15 game turns a day.
  18. As a reminder, these speech dictionaries are redefining the way the strings of text are pronounced by JAWS. If these same strings were to appear on another web site used in the same browser, these definitions would also be applied.

The Dictionary Entries

Following are the entries I’ve used in the dictionary files you can download.

;Double Letter

.-2l.Double Letter.*.*.*.0.0.

;Double Word

.-2w.Double Word.*.*.*.0.0.

;Triple Letter

.-3l.Triple Letter.*.*.*.0.0.

;Triple Word

.-3w.Triple Word.*.*.*.0.0.

;New letter played on board

.-z.New Letter.*.*.*.0.0.

;Squares on board with a tile played. This silences all but the letter indication.

.jb/jb-. .*.*.*.0.0.

;Empty squares on board, no letter has been played.

.jb/jb-bl.Blank.*.*.*.0.0.

;graphics in letter tile area that have no purpose

.jb/wtdot. .*.*.*.0.0.

;A blank letter in player letter tile area

.jb/jb-qu.Blank Tile.*.*.*.0.0.
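For anyone curious how these entries behave in practice, here is a toy Python sketch that applies them to a line of board text the way I intend them to work. The field layout, a leading delimiter character, the actual text, the replacement, and then wildcard and flag fields, is my informal reading of the entries above, not an official description of the JDF format, and the matching logic is a simplification of whatever JAWS actually does.

```python
# Rough sketch (not JAWS itself) of applying the dictionary entries above to
# Jamble board text. Rules are listed most-specific first so "jb/jb-bl-2l"
# becomes "Double Letter" before the generic "jb/jb-" silence rule applies.

RULES_TEXT = """\
;Board squares (most specific patterns listed first)
.-2l.Double Letter.*.*.*.0.0.
.-2w.Double Word.*.*.*.0.0.
.-3l.Triple Letter.*.*.*.0.0.
.-3w.Triple Word.*.*.*.0.0.
.-z.New Letter.*.*.*.0.0.
.jb/jb-qu.Blank Tile.*.*.*.0.0.
.jb/jb-bl.Blank.*.*.*.0.0.
.jb/wtdot. .*.*.*.0.0.
.jb/jb-. .*.*.*.0.0.
"""

def parse_rules(text):
    """Parse lines of the form <delim>actual<delim>replacement<delim>... ."""
    rules = []
    for line in text.splitlines():
        if not line or line.startswith(";"):
            continue  # ';' lines are comments
        delim = line[0]  # the first character sets the field delimiter
        fields = line[1:].split(delim)
        rules.append((fields[0], fields[1]))
    return rules

def speak(board_text, rules):
    """Replace each board token using the first matching rule."""
    spoken = []
    for token in board_text.split():
        for actual, replacement in rules:
            if actual in token:
                if replacement.strip():
                    token = replacement  # e.g. "Double Letter", "Blank"
                else:
                    # Silence rule: drop the gibberish, keep the letter.
                    token = token.replace(actual, "")
                break
        spoken.append(token)
    return " ".join(spoken)

rules = parse_rules(RULES_TEXT)
print(speak("15 jb/jb-bl-2l jb/jb-bl jb/jb-E", rules))
# -> 15 Double Letter Blank E
```

Run against the sample board line from earlier in the post, the gibberish collapses into "Double Letter", "Blank", and the plain letters, which is essentially what I hear once the dictionary is loaded into JAWS.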
