#screenreader


Sometimes when I'm arrowing through the cells of an Excel spreadsheet with #NVDA, the #screenReader will stop reading the content and just say "selected" once before falling silent. It happens regardless of the scroll lock key state, and I have to move away from the Excel window and then back to continue navigating.

Does anyone have suggestions on what might be causing this and how to avoid it? #accessibility

I thought TikTok might be kind of accessible with a #screenreader, but I really think I'll stick to YouTube. Two things going through my hearing aids at the same time makes things tricky, and I don't see how to stop videos from auto-playing while scrolling through TikTok. Live and learn though, that's what I always say. Nothing ventured, nothing gained.

Question regarding alt text for images and good interpretation by screen readers: how should one emphasize a certain word or part of a word in plain text? Would all-caps work, or would something else be better?

I was wondering about this for transcribing protest signs where some parts are written in a different color or font for emphasis.
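Purely as a sketch of how those options might look in an alt attribute (the sign wording below is invented, not from the post): all-caps on the emphasized word, or simply describing the emphasis in words.

```html
<!-- Option 1: all-caps on the emphasized word; screen readers may or may not convey the casing -->
<img src="sign.jpg" alt="Protest sign: NO justice, no peace">

<!-- Option 2: spell out the emphasis in the description -->
<img src="sign.jpg" alt='Protest sign reading "no justice, no peace", with the first "no" painted in red for emphasis'>
```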

Do you use a screen reader and read Arabic content with it? Have you ever wondered why Arabic TTS literally always sucks, being either super unresponsive or getting most things wrong all the time? I've been wanting to rant about this for ages!
Imagine if English dropped most vowels: "Th ct st n th mt" for "The cat sat on the mat" and expected you to just KNOW which vowels go where. That's basically what Arabic does all day every day! Arabic uses an abjad, not an alphabet. Basically, we mostly write consonants, and the vowels are just... assumed? Like, they are very important in speech but we don't really write them down except in very rare and special cases (children's books, religious texts, etc). No one writes them at all otherwise and that is very acceptable because the language is designed that way.
A proper Arabic TTS needs to analyze the entire sentence, maybe even the whole paragraph, because the exact same word can have different unwritten vowels depending on where it sits, which actually changes its form and meaning! But for screen readers, you want your TTS to be fast and responsive, and you get that by skipping all of that semantic processing. Instead it's literally just half-assed guesswork that's wrong almost all the time, so we end up hearing everything the wrong way and just coping with it.
It gets worse. What if we give the TTS a single word to read (which is pretty common when you're analyzing something more closely)? Let's apply that logic to English. Imagine you are the TTS engine. You get presented with just 'st', with no surrounding context, and have to figure out the vowels. Is it sit? Soot? Set? Maybe even stay? You literally can't know, yet each of those might be valid, even though the meanings are wildly different.
It's EXACTLY like that in Arabic, but much worse, because it happens all the time. You highlight a word like 'كتب' (ktb) on its own. What does the TTS say? Does it guess 'kataba' (he wrote)? 'Kutiba' (it was written)? 'Kutub' (books, a freaking NOUN!)? Or maybe even 'kutubi' (my books)? The TTS literally just takes a stab in the dark, and usually defaults to the most basic verb form, 'kataba', even if the context screams 'books'!
So yeah. We're stuck with tools that make us work twice as hard just to understand our own language. You get used to it over time, but it adds a whole extra layer of cognitive load that speakers of, say, English just don't have to deal with when using their screen readers.
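As a rough illustration of that ambiguity in markup (the vocalized spellings below are added for illustration, not from the post): the bare consonant skeleton is what Arabic text almost always contains, and only the rare fully vocalized forms pin a single reading down for a TTS.

```html
<!-- The same consonant skeleton with and without vowel marks (tashkeel) -->
<p lang="ar">كتب</p>    <!-- bare form: could be kataba, kutiba, kutub, ... the TTS has to guess -->
<p lang="ar">كَتَبَ</p>  <!-- fully vocalized: unambiguously "kataba" (he wrote) -->
<p lang="ar">كُتُب</p>   <!-- fully vocalized: unambiguously "kutub" (books) -->
```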

Question for folks who use screen readers: what is your preferred way for someone to write out individual letters? For example, spelling out the letters on a sign that don't add up to a real word.

I did alt text for a picture of signs that were badly made. When describing the text layout, I went with "Capital Letter space Capital Letter". How well does that work versus maybe "Capital Letter dash Capital Letter"? Or is there a better option?
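A sketch of what the two candidate formats look like in an alt attribute (the letters here are invented; which one reads better will depend on the screen reader and its verbosity settings):

```html
<!-- Letters separated by spaces -->
<img src="signs.jpg" alt="Badly made sign showing the capital letters K R Z">

<!-- Letters separated by dashes -->
<img src="signs.jpg" alt="Badly made sign showing the capital letters K-R-Z">
```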


[1/3] Friday kicks off with "fairy-tale hour with Frank [Mittelbach]". It starts with the admission that it took a real moment of insight to recognize what we have "done", with the documents of the last few decades, to people who depend on a #ScreenReader.
He gives everyone who hasn't engaged with accessibility before a brief insight into what that actually means.

I wanted to try out Iceshrimp. Even before I could register on iceshrimp.de, I was already noticing a few unlabeled buttons. Well, it's open source and can be changed, so on to the registration page.
Iceshrimp.de uses HCaptcha. OK, then no Iceshrimp for me. Those accessibility cookies almost never work for me, and I'd have to give my email address to a service I don't trust, which is yet another point against HCaptcha. There's no audio captcha either. When will we finally get captcha alternatives? #DisabledAlltag #Blind #A11Y #ScreenReader

Is it normal that tabbing through a webpage only reads the clickable links with the #screenReader, or should I also be able to get to text and images? The page gets read in full when I don't do anything, but once I start using Tab I only get to the links. (I'm remaking our webpage and used the Windows screen reader for the first time, please be patient 🙈 )
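For context, Tab only moves between focusable elements (links, buttons, form fields); plain text and images are normally read with the browse-mode/arrow-key commands instead, not with Tab. A minimal sketch, with made-up content:

```html
<!-- Reached by Tab: interactive, focusable elements -->
<a href="/about">About us</a>
<button>Subscribe</button>
<input type="text" aria-label="Search">

<!-- Not reached by Tab by default: plain text and images -->
<p>This paragraph is read with browse-mode/arrow navigation, not Tab.</p>
<img src="team.jpg" alt="Our team at the office">
```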

A story of Chloe, Bruce Rockwell, and a shopping center made out of a community space on Willow Street.

Some lines may be typo'd in the #caption on Dramatize Me's channel. How they should read:

shelter as shoulder
5:36 teer=tear down

bean=been
watt=what

5:57 cat fun=have fun

6:15 ditch=teach

07:03 denomination=demolition

mooed up=moved up

tsk=it's
happy=happen

9:28 you're in=ruin

youtube.com/watch?v=vVjiT_YGCA4

Recent datepicker experience:
1. Control is presented as three separate spin controls, supporting the Up/Down Arrow keys to increment and decrement the value as well as manual typing. But because they're not text inputs, I can't use the Left/Right Arrow keys to review what each separate one contains, only to move between day, month, and year.
2. I tab to year.
3. I press Down Arrow, and the value is set to 2075. I'm unclear how many use cases require the year to be frequently set to 2075, but I can't imagine it's many so this seems like a fairly ridiculous starting point.
4. I press Up Arrow, and the value gets set to 0001. The number of applications for which 0001 is a valid year is likewise vanishingly small.
5. I delete the 0001, at which point my #screenReader reports that the current value is "0". Also not a valid year.
6. Out of curiosity, I inspect the element to see which third-party component is being used to create this mess... only to find that it's a native `<input>` with `type="date"` and this is just how Google Chrome presents it.

A good reminder that #HTML is not always the most #accessible or user-friendly.
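For reference, the control in question is just the native element below; the min and max attributes in the second line are an addition for illustration, not something the post mentions, and while they constrain which dates the field will accept, the spin-button behaviour described above is still up to the browser.

```html
<!-- Native date input, as described in the post -->
<input type="date" name="start">

<!-- Hypothetical tweak: min/max restrict the accepted range -->
<input type="date" name="start" min="2020-01-01" max="2030-12-31">
```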

So, an update on Guide: I've exchanged some long emails with Andrew, the lead developer. He's open to dialogue and to moving the project in the right direction: well-scoped single tasks, more granular controls and permissions, etc. He doesn't strike me as an #AI maximalist, "AI can and should do everything all the time" kind of guy. He's also investigating deeper screen reader interaction, to let the AI do just the things it's best at that we can't do ourselves. I stand by my view that the project isn't yet ready for prime time. But as someone else in the thread said, I don't think it should be written off entirely as yet another "AI will save us from inaccessibility" hype train. There is, in fact, something here if it gets polished and scoped a bit more. #blind #screenreader #a11y