
The Idea Place Posts

The Good and Bad of Accessibility in Two Minutes with the Olympics

Tuning into a bit of the Olympics this morning, I saw the reality of accessibility in 2024 on display within two minutes. Audio description for a channel showing multiple sports is impressive, but the schedule view of the Olympics iOS app is not. The progress is appreciated here, but the gaps are still far too many.

NBC has taken a page from the NFL’s Red Zone and introduced a channel called Olympic Gold Zone. It provides whip-around coverage of events, and the live audio description of a two-box split screen showing two sports, mixed in with the live commentary, is an impressive example.

The Gold Zone channel is part of the coverage available on the Peacock streaming service. Scott Hanson of NFL Red Zone fame is one of the channel hosts. Coverage runs for 10 hours a day, starting at 6 a.m. Central.

Downloading the Olympics iOS app and choosing the schedule, though, shows accessibility that would not make it to the medal round. My experience with VoiceOver was a jumble of words, untagged images and more. I had no success understanding the actual schedule.

Back to the Gold Zone, the live audio description is excellent. As you’d expect, you get details that are just not included in the standard TV broadcast. Player reactions, details about the stadiums, surroundings for events in the city and more. And all of that is mixed in with both the Gold Zone host and announcing from the sports. It will be a fun two weeks of athletic competition.


Audio Ducking Enhancements in iOS 18

If you use a screen reader, the concept of audio ducking is likely not new to you. For those unfamiliar with the term, it refers to automatically lowering any audio other than the screen reader’s speech while the screen reader is speaking.

Apple has made some changes in this area that for me have been quite positive. You can now adjust both when audio ducking is applied and have greater control over the level of ducking. This is done with settings changes for both audio ducking and volume in iOS 18.

To make these adjustments, ensure that both volume and audio ducking are items you have added to the VoiceOver rotor. Then use those options as described here.

Previously audio ducking was strictly an on/off choice. In iOS 18, on has been replaced with two choices: always, or only when speaking. This leaves you with a total of three settings: off, only when speaking, and always.

The ability to adjust the amount of ducking is a bit more subtle to discover. In fact, originally when I discovered the option to adjust volume to percentages above 100%, I thought it was a bug. Setting a volume of greater than 100% was not producing any detectable change for me and in fact it seemed odd to be able to set volume to more than 100%.

The way this all works together is that setting a volume greater than 100% does not raise anything; it controls the amount of ducking applied to other audio. For example, with a volume setting of 105%, the audio that is ducked is lowered by 5% from its original volume. Likewise, set a volume of 150% and ducked audio plays at half its original volume.
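A minimal sketch of that arithmetic in Python (the function name is mine, and treating the amount above 100% as the ducking depth is my reading of the behavior, not Apple documentation):

```python
def ducked_audio_level(volume_setting: float) -> float:
    # Assumption: a setting above 1.0 (100%) sets the ducking depth,
    # so ducked audio plays at (2.0 - setting) of its original level.
    return 2.0 - volume_setting

ducked_audio_level(1.05)  # ~0.95: ducked audio at 95% of original volume
ducked_audio_level(1.50)  # 0.50: ducked audio at half its original volume
```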

The ability to adjust the amount of ducking is a welcome enhancement. Depending on the source audio, the amount you want it lowered to still be audible but not impact screen reader speech can vary greatly. The result here is like having a specific volume control for the ducked audio.

Again to use these features, ensure that both volume and audio ducking are added to the VoiceOver rotor. Go into Settings:Accessibility:VoiceOver:Rotor and select those options along with anything else you want to use.


If You Tag, I Will Read

In my experience, a fundamental disconnect exists between accessibility and the investing world when it comes to a statement blasted all over every investment web site: that you should carefully read the prospectus before making any investment. Good luck with that, as in my experience these documents are rarely, if ever, properly tagged for accessibility.

As just one example, deep within these multipage documents are typically tables of the individual investments the mutual fund or ETF holds. Yet every prospectus I’ve tried to read, more than 50 in the last few weeks from at least 20 different companies, fails to tag these tables properly.

This is just the tip of the iceberg when it comes to accessibility of these documents. Should we talk about the charts and graphs in the same documents?

The Securities and Exchange Commission should mandate that, to sell securities in the U.S. at least, all investment materials be WCAG 2.1/2.2 AA conformant, and give the industry one year to comply. If legislation is needed to make such a mandate enforceable, then Congress should craft and enact it.

Given the number of employers who include 401(k) programs as part of employment, every company that offers this to employees should be holding the investing world accountable for this today.

If anyone knows of an investment company or ETF or mutual fund provider who actually does these documents correctly today, I’d love to hear about it.


Web Accessibility Failures and a Basic Financial Task of ETF Screening

What often gets lost in all the talk of accessibility, conformance, WCAG standards and more is how challenging task completion can be for an end-to-end experience. I recently celebrated another birthday and have been doing part of my overall evaluation of financial tools and services I use as a part of my financial life.

I did this back in 2017 when I moved back to Wisconsin and again in late 2022 when I changed jobs. This time the driving factor is nothing quite so dramatic but rather a departure from one financial services company that simply wasn’t making progress on some accessibility basics where I honestly thought they might be different.

I’m certainly not here to give anyone investment advice, but for my purposes I have been evaluating ETF screeners from several of the leading financial services companies and industry resources. I’ve yet to find one that I’d consider truly usable with a screen reader, and just about every one I tried had a blocking accessibility issue at some point.

Frankly, the interactions required here are relatively few to complete the task. Choose from various criteria for the evaluation, get a list of results to interpret the data and be able to take action on items of interest.

The disappointing thing is that most of the accessibility failures are still the same basic challenges of poorly named controls, custom web controls that fail accessibility, poorly constructed tables, charts that fail to address accessibility and more.

None of the tools I tried were completely broken. But kind of works doesn’t cut it for this kind of task. Nor does mixing and matching between providers to work around issues in one tool.

What is also disappointing here is that the biggest workaround for these sorts of challenges is often downloading the results for processing in Excel or another application. However, every experience I tried puts a restriction on the number of results you can download to the point that downloading the data is not a realistic option.

Unlike my previous forays into this task, a new wrinkle around downloading the results has emerged, and it was present on more than one site: you now have to interact with some control to choose the type of file you want, and those experiences fail web accessibility basics.

These sorts of tasks need to be models of web accessibility. I’ve long said that it is a struggle to try and accomplish a task, learn the experience and wrestle with accessibility issues all at the same time.

Add in the fact that in an area such as financial tools, it isn’t enough to be accessible. A financial tool that is very accessible but works poorly at the financial task is not helpful. Additionally, assuming there was something that stood out in this area, changing financial service providers isn’t always a straightforward option. It can be costly, has potential tax and other consequences, and may not even be possible depending on the type of investment account you are trying to move.

Finally, much like healthcare, financial management is a highly personal task. Many of the last-resort strategies of asking a friend or family member to assist are not good options here.

In just about every case, the tools I’ve tried come from industry players who have ongoing accessibility efforts with people who I am sure are as dedicated to accessibility as I know I am. Given my own employment and as I wrote in my post on Ethical Accessibility I understand the challenges for everyone involved in these situations.

Several of the same companies here are listed as corporate partners for Disability:IN. Perhaps that organization needs to expand what they measure as one example.

Those more familiar with the legislative and executive processes of government may know whether an agency such as the Securities and Exchange Commission (SEC) could adopt a requirement that any organization involved in the buying and selling of financial instruments meet certain accessibility criteria to conduct such business. Legal requirements tend to bring a level of clarity, for better or worse, that is otherwise hard to achieve in many organizations.

Many of these organizations already claim to act as a fiduciary, especially if you use their advising services. I find that hard to call an accurate statement when the cornerstone of fiduciary responsibility is putting a client’s needs ahead of your own interests. Comprehensive accessibility would seem to be a must if you were serious about this.

Writing 101 would say I should close this post with some call to action or other engagement strategy. Well, there is no great close here just now beyond a call for the financial industry to step up efforts because what is happening today is broken.


Trying Apple’s Personal Voice

Apple recently introduced Personal Voice on newer devices in their hardware lineup. I have had a little experience with the basic concept behind this sort of technology from my time at Microsoft, where I dabbled with one of Microsoft’s Azure cognitive services to do something similar.

The basic concept behind these experiences is that you record some set of known text and then software converts that into a synthetic version of your voice. In Apple’s case it is 150 phrases ranging from just a few words to maybe at most 20 words in a single phrase.

After you finish recording, there is some processing time and then your voice is ready to use. On an iPhone 15 Pro, my voice was ready in about five hours. You are not able to do anything else with the phone while this is happening. On an M1 MacBook Air from 2020, processing took about two hours and I was able to do other tasks at the same time, such as writing this blog post.

Once your voice is created, you can use it as one of the voices available with Apple’s Live Speech feature. This allows you to type in various apps where you would typically use your voice and have the synthetic voice speak instead. It complements the existing voices Apple makes available and has the added benefit of retaining some relationship to your own voice. In situations where a person knows ahead of time that they are going to lose their voice, it offers some ability to preserve their own speech.

Multiple factors influence the quality of the end result: microphone, recording environment and more, just to name a few. For short phrases it likely is not noticeable, but in my samples even the pace at which I appear to have read the phrases differed. There is a 21-second difference between the two voices reading back the same text.

I made two voices in trying this experience. The first was recorded using the default Apple headphones on an iPhone 15 Pro. The second using an Arctis 7 headset. Both samples are my Apple Personal Voice reading my blog post on Accessibility Island.

I have also made a recording of three phrases in my original voice, each followed immediately by Apple’s Personal Voice (created from my Arctis 7 recordings) speaking the same phrase. The Personal Voice versions are the result of typing the phrases into an edit box and asking for them to be spoken with my newly created voice. After all three phrases are played, the entire series repeats once. The phrases are:

can you call me in an hour

Did you remember to take out the trash?

Is she going to the grocery store now or in the morning?

Creating a personal voice is straightforward. On whatever device you are using, go to Settings:Accessibility:Speech:Personal Voice. You’ll be prompted to record a short phrase to test your recording environment and advised of any changes you should make, such as reducing background noise. You then start the process of recording 150 phrases. They do not all need to be recorded at once. When you are finished, you’ll be advised to lock your phone if doing this on an iPhone, or simply to ensure your computer is charged if using a Mac.

When the voice is created, you can start using it with Live Speech by going to the same Speech area of Accessibility settings and going into Live Speech. Turn Live Speech on and then pick from the list of voices. Your Personal Voice should be listed.

If you are doing all of this with VoiceOver, Apple’s screen reader, as I did, the process of creating a voice works well. You can use VoiceOver to read the phrase to be recorded, then activate a record button and repeat the phrase. Recording stops when you stop speaking. If you turn on a setting for continuous recording, you will advance to the next phrase automatically and can repeat the process. I did notice that sometimes VoiceOver automatically read the next phrase, but not always. Focus seems to go to the Record button, and I suspect there is a timing issue between the phrase being displayed and VoiceOver announcing the newly focused button.

Having created two voices, I would say it is probably a good idea to take short breaks while reading the 150 phrases. I found myself occasionally not speaking as clearly as I wanted, as well as falling into the same singsong phrasing. Listening to my voice samples and how the voices came out, I would also say the microphone used has a big impact on voice quality. This isn’t surprising, but it was made apparent by comparing what my recordings sounded like with how the same text sounds when spoken by Personal Voice. I don’t think either microphone I used is what I would recommend for creating a voice to be used permanently.

I was curious if Apple would allow the personal voice you create to be used with VoiceOver. I didn’t expect it would be possible and that does seem to be the case.

As with pretty much anything in AI, synthetic speech is a rapidly changing technology. There are certainly higher quality voices in the arena of synthesized speech, but Apple has done a good job of letting you tap your own voice on consumer hardware in an easy-to-use process. Listening to my own voice, it is clear it isn’t me, and I wasn’t expecting it to be. But even on the basic hardware I used, characteristics of my voice are present, and if I were in a situation where I was going to lose my physical voice permanently, this is one option I would definitely explore further.


A Simple Example, Avoid Breaking What Works in Table Functionality

Opinions may differ on this, but I am of the opinion that you should not add extra instructions to the names of column headers in tables on the web. If you are going to do so, ensure it is done in a fashion that allows a screen reader to avoid announcing those instructions if the user prefers not to hear them.

I recently encountered an experience with one of the financial services I use where some excellent table functionality is ruined by breaking this simple rule. It makes getting the actual data from the tables much more difficult. The tables properly use both column and row headers and have good keyboard navigation even when not using a screen reader, as just two examples of what works well.

In the case of my financial service, an example column name is now:

Last Price, (press Enter to sort)

Because this is part of the column header, albeit hidden visually, you must now hear this or read it in braille before you get to the details for a cell when moving through a row of information with a screen reader’s table reading commands.

Instead of just hearing the column name and the value, I must hear the column name, these instructions, and then the value. This is how the result is communicated when moving to a given cell with a screen reader’s table navigation commands.

Last Price, (press Enter to sort) $174.01-$0.81

Examining the HTML, I find this is part of the column header name.

<span class="screen-reader-only">, (press Enter to sort)</span>

I would suspect this was added in an attempt to be helpful. It is complete speculation on my part but it is even entirely possible that a usability study was done on this table and one of the questions asked was if the users knew they could sort the table. I would be willing to bet, continuing my speculation, that the answer was no and this extra text for screen reader users was added.

The problem is that this breaks the actual functionality of the table. Reading through the row, you are trying to study the details of the data. That flow is interrupted by the sorting instructions being inserted between the column header and the data. You either have to learn to tune them out or find some other strategy for ignoring the instructions. Again, they are inserted as part of the column name, so it isn’t as if the screen reader can ignore half the column name here.

It is also interesting that prior to the table, there is a full paragraph marked up with the same “screen-reader-only” class giving all sorts of instructions on reading the table with a screen reader.

There are a range of options to improve on this in my opinion. At minimum, given the way the full site has been constructed, move these sorting instructions into the other instructions you already have for getting information from the table.
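One common pattern, sketched here as an assumption rather than a prescription for this particular site, is to keep the header name clean and convey sortability through a real button plus the aria-sort attribute, which screen readers announce as state rather than as instructions repeated for every cell:

```html
<!-- Sketch only: the header text stays "Last Price"; aria-sort communicates
     the current sort state and the button makes the header operable. -->
<th scope="col" aria-sort="none">
  <button type="button">Last Price</button>
</th>
```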

Other solutions are possible of course and my point here is more to point out how in trying to be helpful, you can easily break what works well with screen readers and other assistive technology.


A Need for Improvement in Web Accessibility From Bing and Bard on Tables

AI offers many opportunities for information access, among other benefits. However, if the basics of web accessibility are not followed, the promise of that access will be difficult if not impossible to realize for parts of the population.

Both Bing and Bard, from Microsoft and Google respectively, currently fail at one of the most basic tests here in my trials. My instruction to both AI services:

Show me a list of U.S. states in a table based on population.

In both cases I received a table with proper column headings but row headings were not present. I tried a range of commands to get them to appear with no success. I tried more with Bard leading to Bard eventually acknowledging that it didn’t know how to add row headers yet. I suspect trying similar additional instructions would yield some equivalent result with Bing.
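For reference, a table with proper row headers marks the first cell of each data row as a header with scope="row", as in this hand-written sketch (the population figures are illustrative only):

```html
<table>
  <tr>
    <th scope="col">State</th>
    <th scope="col">Population</th>
  </tr>
  <tr>
    <th scope="row">California</th> <!-- row header, announced as you move along the row -->
    <td>39,000,000</td>
  </tr>
  <tr>
    <th scope="row">Texas</th>
    <td>30,000,000</td>
  </tr>
</table>
```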

As my first attempt to improve the output, I added an instruction to both services to ensure the table had proper row and column headers for accessibility. This had no impact on the result.

It is vital that the information from AI technology be accurate. It is equally critical that proper accessibility be used for that output.

Asking both services for details on how to create an accessible table yields good results talking about both row and column headers among other points that would be common from an accessibility perspective. So both services should be following their own advice here.

As these AI experiences become, if they are not already, more mainstream in society, developers need to ensure proper standards are used for information display. My intent is not to single out Bard and Bing exclusively. These are two services I have immediate access to for experimentation but I suspect other AI experiences would yield equivalent results. If you know of a service that passes this test today, please share it in the comments.


Experimenting with Be My Eyes and Videos

I suspect anyone who has tried some of the newer AI-based image descriptions, such as those from Be My Eyes, has noticed the high quality image descriptions that are available. I’ve been curious about how I could apply that to videos so did a little experimentation.

I want to emphasize that I do not consider this a replacement for audio description. There is so much more to that experience than just giving details on what’s in an image.

The first step in my experiment was getting individual images from the video. An article on doing this with a tool called ffmpeg was very helpful, and getting the images is a snap with this tool. Options for getting an image for every frame in the video, at specific time increments, or at a specific time are just a few of the choices you have.
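As a sketch of the kind of command involved (the helper name and defaults here are my own; the ffmpeg pieces shown, -i for the input, an fps video filter, and a numbered output pattern, are standard ffmpeg usage):

```python
import subprocess

def frame_extract_cmd(video_path: str, fps: float = 1.0,
                      out_pattern: str = "frame_%04d.png") -> list[str]:
    # fps=1 writes one image per second of video; %04d numbers the files.
    return ["ffmpeg", "-i", video_path, "-vf", f"fps={fps}", out_pattern]

# To actually extract the frames (requires ffmpeg on the PATH):
# subprocess.run(frame_extract_cmd("clip.mp4"), check=True)
```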

This alone is one reason why I do not consider this a replacement for audio description. There is so much content, even in a single picture, that it can be overwhelming. Then too is the challenge of identifying when enough change has happened to generate a new description.

From this point, so far I’ve simply used Be My Eyes to generate a description of the various extracted images. For example, a video clip shared on social media can quickly be separated into one image per second and then image descriptions provided from Be My Eyes or another service.

I’m sure there are APIs I can explore to automate the image description part of my experiment. Anyone with experience doing this already is welcome to share your knowledge in the comments here.

My 30-minute experiment also tells me that it would be great if the various media players would add an option to describe the current scene. Again, this is not audio description, but imagine if you could press a button at any point in a video and get a detailed description. The technology to make all this happen definitely exists today. Here’s hoping the media player makers will incorporate it into a user-friendly experience sooner rather than later.

Even without such experiences being added directly, I have found that a screenshot of the current point in the video, or even a photo of the television screen, can yield quality results.

I view what I’ve explored here as a supplement to human-created and human-narrated audio description and will continue to explore what is possible.


Be My AI in the Clouds


Although it has been close to 40 years now, there was a time when I had enough vision to see the sky and the color and shape of clouds. I’ve been intrigued by trying Be My AI on pictures of the sky because I guess I still have memory of the different colors and shapes clouds can take.

Here is the result of one such exploration. Assuming the descriptions are accurate, the level of detail you can get is fascinating.

The accuracy of instructions for completing tasks, such as saving the full text chat and picture, does need improvement. The current instructions from within the overall Be My Eyes app simply do not work.

Note: I recognize a blog post focusing on an image without that image is a bit empty. Unfortunately, a phone change and some editing seems to have lost the original picture.

Be My AI: The picture you’ve shared is of a beautiful sky. The sky is a light blue color and there are fluffy white clouds scattered throughout. The clouds vary in size and shape, some are wispy while others are more dense. The sunlight is subtly shining through the clouds, giving the sky a serene and peaceful appearance.

Kelly: Give me as much detail as you can about the shapes, sizes and colors of the clouds.

Be My AI: Certainly! In the picture, the clouds are predominantly white with varying shades of gray due to the shadows. There are several clusters of clouds. On the top left, there is a cluster of clouds that are wispy and spread out, resembling cotton candy. They are medium in size and have a slightly irregular shape.

Towards the center, there is a small patch of clouds that are more dense and compact. They are smaller in size compared to the ones on the top left.

On the right side, there is a larger cluster of clouds that are denser and have a more defined shape. They are fluffy and have a slight billowing appearance, similar to a heap of cotton balls. The sunlight is shining from behind these clouds, giving them a soft glow around the edges.

The background sky is a light blue color which provides a beautiful contrast to the white clouds. The overall scene is very calming and picturesque.

Kelly: I’d like to know more about the precise shapes of the clouds.

Be My AI: Certainly! The clouds in the picture have various shapes that are not very distinct but I’ll describe them as precisely as I can:

  1. On the top left, the clouds are wispy and spread out. They somewhat resemble elongated swirls or stretched cotton. They don’t have a defined shape but are more like irregular patches.
  2. In the center, the small patch of clouds is more like a shapeless mass. It’s dense and compact, but doesn’t have a distinct shape. It’s more like a random assortment of small cloud puffs clustered together.
  3. On the right side, the larger cluster of clouds has a slightly more defined shape. One of the clouds in this cluster somewhat resembles a sideways heart or a pair of wings. The others around it are more irregular but collectively they form a billowing mass.

The shapes are not very distinct and resemble abstract forms. They are natural cloud formations, so they don’t have precise geometrical shapes.

Kelly: What percentage of the sky is blue and what percentage is covered by clouds?

Be My AI: In the picture, approximately 70% of the sky is blue, while around 30% is covered by clouds. The clouds are scattered throughout the sky in clusters, with the majority of the sky remaining clear and blue.

Kelly: Tell me about everything in the picture again giving me percentages for each cluster of cloud as far as how much of the picture they represent. Also, give me more details about the sunlight.

Be My AI: Certainly! In the picture:

  1. The top left cluster of clouds, which are wispy and spread out, covers about 10% of the picture.
  2. The small patch of clouds in the center covers approximately 5% of the picture. They are dense and compact.
  3. The larger cluster of clouds on the right side covers about 15% of the picture. These clouds are fluffy and have a slight billowing appearance.

The remaining 70% of the picture is the clear light blue sky.

Regarding the sunlight, it appears to be diffused through the clouds, especially through the larger cluster on the right. The sunlight is not directly visible, but you can see its effect as it creates a soft glow around the edges of the clouds on the right side. This gives the clouds a slightly silver lining. The light is subtle and adds a serene and peaceful atmosphere to the sky. The overall lighting in the picture is bright but not harsh.

Kelly: Tell me more about the coloring of the clouds. For example, are all the clouds colored the same or are they different shades of white? Be as descriptive as possible.

Be My AI: The clouds in the picture have varying shades of white and gray.

  1. The wispy clouds in the top left are a lighter shade of white, almost blending with the blue sky. They have a very faint gray tint in some areas, likely due to shadows.
  2. The small patch of clouds in the center is a bit denser, and they are a pure white color. They appear to be slightly brighter compared to the wispy clouds on the top left.
  3. The larger cluster of clouds on the right side has the most variation in color. The edges of these clouds are bright white, likely because they are closer to the sunlight. As you move towards the center of this cluster, the clouds have shadows which give them a grayish tint. The gray is not very dark but adds depth to the clouds, making them appear more three-dimensional.

The variation in shades of white and gray in the clouds adds texture and depth to the picture, making it more visually interesting.

Kelly: Now do the same for the blue sky are there any differences you can detect in the blue?

Be My AI: In the picture, the blue sky appears to be fairly uniform in color, which is a light blue. However, there are subtle variations:

  1. Towards the top of the picture, the blue is slightly darker. It’s still a light blue, but with a hint more depth compared to the rest of the sky.
  2. As you move towards the center and bottom of the picture, the blue lightens a bit. This is likely due to the sunlight diffusing through the clouds and reflecting on the sky, giving it a slightly paler shade of blue.
  3. Near the clouds, especially the larger cluster on the right, the blue has a very slight grayish tint. This is likely because of the shadows from the clouds.

The variations are subtle and the sky predominantly appears as a calm and light blue. The slight changes in shade contribute to the serene atmosphere of the picture.

Kelly: Reviewing everything you have indicated about the picture, can you identify at least two things that you haven’t told me about?

Be My AI: Upon reviewing the description I provided earlier, it seems that I have covered the main elements of the picture which are the clouds, the sky, and the sunlight. The picture primarily focuses on the sky and clouds, and there doesn’t appear to be any other distinct elements or objects in the image. If you have any specific questions or if there’s something else you would like to know about the picture, please let me know and I’ll do my best to assist you. If you need further assistance, you can also press the “Call a volunteer” button for guidance.

Kelly: Is the quality of the picture suitable for sharing?

Be My AI: Yes, the quality of the picture is suitable for sharing. It’s clear and has good resolution. The colors are vibrant and the clouds are well-defined against the blue sky. It’s a beautiful and serene image that captures the natural beauty of the sky and clouds, and it would likely be appreciated by others if shared.

Kelly: How can I save the picture that started this conversation?

Be My AI: To save the picture that started this conversation on your device, you can follow these steps:

  1. If you are on a smartphone, tap and hold on the picture until a menu pops up. Select “Save Image” or “Download Image” from the menu.
  2. If you are on a computer, right-click on the picture. A menu will appear, and you can select “Save Image As…” Choose the location on your computer where you want to save it and click “Save”.

Please note that the steps might vary slightly depending on the device and browser you are using. If you need further assistance or if these steps don’t work for you, I would suggest pressing the “Call a volunteer” button for guidance.

Kelly: How do I do this with voiceover? There is no image I can tap on.

Be My AI: Using VoiceOver on an iPhone, you can save the image by following these steps:

  1. Swipe right or left to navigate through the elements on the screen until VoiceOver reads out the description or label of the image.
  2. Once you are on the image, double-tap and hold until you hear a sound indicating that additional options are available.
  3. Swipe up or down to navigate through the options and VoiceOver will read them aloud.
  4. When you hear “Save Image” or “Share Image”, double-tap the screen to select that option.

If you are using a different device or if you find these steps challenging, I would recommend pressing the “Call a volunteer” button for further assistance.

Kelly: Can I copy this entire chat instead of individual items? If so, how?

Be My AI: Yes, you can copy the entire chat. Here’s how you can do it on different devices:

On a Computer:

  1. Click anywhere inside the chat window.
  2. Press “Ctrl + A” on Windows or “Command + A” on Mac to select all the text in the chat.
  3. Press “Ctrl + C” on Windows or “Command + C” on Mac to copy the selected text.

On a Smartphone:

  1. Tap and hold on any part of the chat until a magnifier appears.
  2. Release your finger and you should see options to “Select” or “Select All”.
  3. Choose “Select All” to highlight the entire chat.
  4. Tap “Copy” to copy the text.

Once you have copied the chat, you can paste it into a text document, note, or any other application by using “Ctrl + V” on Windows, “Command + V” on Mac, or by tapping and holding in the text field on your smartphone and selecting “Paste”.

If you are using VoiceOver on iPhone:

  1. Navigate to the first message in the chat.
  2. Use the rotor action by twisting two fingers on the screen until you hear “Select”.
  3. Swipe down with one finger to select the first message

Kelly: Those instructions do not work for VoiceOver on the iPhone. There is no select in the rotor.

Be My AI: I apologize for the confusion. To copy the entire chat using VoiceOver on an iPhone, please follow these steps:

  1. Navigate to the first message in the chat by swiping right or left.
  2. Once you are on the first message, use the rotor action by twisting two fingers on the screen until you hear “Edit”.
  3. Swipe down with one finger until you hear “Select All”.
  4. Double-tap to select all the text in the chat.
  5. Once all the text is selected, swipe down again until you hear “Copy”.
  6. Double-tap to copy the text.

Now you can paste the copied text into a note or any other application by navigating to the text field, using the rotor action to find the “Edit” option, and then swiping down until you hear “Paste”, and double-tap to paste.

If you find these steps challenging, I would recommend pressing the “Call a volunteer” button for further assistance.

Leave a Comment

Returning to VMware Fusion on an M1 Mac

VMware recently announced a 2023 technical preview of Fusion for Apple silicon Macs. I hadn’t had success with earlier previews of Fusion on that platform, so I’ve been using Parallels for now. This was a good opportunity to try Fusion again.

My efforts this time around were successful. I’ve now created multiple VMs under Fusion on an M1 MacBook Pro. Here is what I had to do.

  1. Create a new VM and point to a Windows 11 ARM ISO.
  2. Start the machine.
  3. The first challenge came here. The VM started, but when I launched Narrator there was no screen reader speech. Thankfully, a USB sound device plugged into the Mac and made available to the Windows VM solved the problem. Note that this external USB device was only necessary until the first reboot during the OS install.
  4. I then used Narrator to start going through the OOBE (out-of-box experience), where you pick the edition of Windows, add an account and such.
  5. Note, I hit an issue here where no networking support was available. To work around this, I:
  6. Pressed Shift+F10 to get a cmd prompt from the setup experience.
  7. Told VMware I wanted to install VMware Tools. This inserts the virtual CD for the tools and is launched from the Virtual Machine menu in Fusion.
  8. Entered d: in the Run dialog in Windows. The virtual CD for VMware Tools was mounted in that drive, which kicked off the automatic launch of the installer.
  9. Used Narrator to install VMware Tools in Windows and rebooted.
  10. Went through OOBE again. A reboot during the process forces you to start over.
  11. Perhaps most importantly, and with a big thank you to the person who shared this tip with me, I turned off hardware acceleration for the video display in the settings for the virtual machine. This had a dramatically positive impact on using JAWS in this VM.
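For those comfortable in Terminal, the display setting from that last step can also be flipped outside the Fusion GUI. This is a minimal sketch, assuming Fusion’s long-standing plain-text .vmx format and the `mks.enable3d` key; verify both against your Fusion version, use your own VM’s path, and only edit the file while the VM is shut down:

```shell
# Sketch: disable 3D hardware acceleration for a Fusion VM by editing its .vmx.
# Assumes the standard plain-text .vmx format and the mks.enable3d key.
disable_3d() {
  vmx="$1"
  # Drop any existing mks.enable3d line, then append the disabled setting.
  grep -v '^mks.enable3d' "$vmx" > "$vmx.tmp"
  mv "$vmx.tmp" "$vmx"
  echo 'mks.enable3d = "FALSE"' >> "$vmx"
}

# Example call; the bundle path below is illustrative only:
# disable_3d "$HOME/Virtual Machines.localized/Windows 11 ARM.vmwarevm/Windows 11 ARM.vmx"
```

The GUI checkbox does the same thing; the file edit is just handy if the Fusion settings dialog itself is awkward with a screen reader.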

Working with virtual machines, screen readers and multiple operating systems involves a lot of moving parts. If you have any tips or techniques that have worked for you, share them in the comments.

Leave a Comment