Explore best practices for creating inclusive apps that cater to users with diverse abilities


Posts under General subtopic

Post

Replies

Boosts

Views

Activity

Seeking API Support for Marking Substrings as Headings in NSTextView for VoiceOver
I'm developing a document editor for macOS using AppKit, which supports structured content such as titles and multiple heading levels, similar to what you see in the Pages app. I'm looking for a way to programmatically mark a specific substring within an NSTextView as a heading, so that VoiceOver can recognize it and announce it appropriately (e.g., by saying “heading” before reading the text). This would be similar in spirit to how NSAccessibilityLinkTextAttribute works for links. Is there an existing accessibility text attribute or recommended approach to achieve this behavior for headings? If not, I’d appreciate any guidance or suggestions on how best to implement this in a VoiceOver-friendly way (a rough sketch of the pattern I have in mind is after this entry). Thank you in advance for your help! Best regards,
0
0
107
May ’25
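One possible direction for the heading question above, sketched in Swift: override accessibilityAttributedString(for:) on the text view and decorate the heading ranges, the same way NSAccessibilityLinkTextAttribute decorates links. Note that the heading attribute key below is hypothetical; AppKit does not document a heading text attribute, so this only illustrates where such an attribute would go, not a confirmed API.

import AppKit

final class DocumentTextView: NSTextView {
    // Ranges the document model treats as headings (maintained elsewhere).
    var headingRanges: [NSRange] = []

    override func accessibilityAttributedString(for range: NSRange) -> NSAttributedString? {
        guard let base = super.accessibilityAttributedString(for: range) else { return nil }
        let result = NSMutableAttributedString(attributedString: base)
        for heading in headingRanges {
            guard let overlap = heading.intersection(range) else { continue }
            // Convert to the coordinate space of the requested range.
            let local = NSRange(location: overlap.location - range.location, length: overlap.length)
            // Hypothetical key: there is no documented macOS counterpart to
            // NSAccessibilityLinkTextAttribute for headings; replace with the real
            // attribute if Apple provides one.
            result.addAttribute(NSAttributedString.Key("AXHeadingLevel"), value: 2, range: local)
        }
        return result
    }
}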
Thai input glitch after Command + Delete in macOS beta
Issue: When using the shortcut Command + Delete to clear a line of text, the next character I type in Thai unexpectedly appears as an English character, even though the input source is still set to Thai. After that, subsequent characters return to Thai as expected.
Details:
• Affected apps: Notes, Messages, and some other native apps
• Not affected: Browser text fields (Safari, Chrome, etc.)
• Does not occur when using Option + Delete or just Delete
• macOS: [insert beta version + build number]
• Mac model: [insert model]
• Input sources: Thai – Kedmanee, English – U.S.
Steps to reproduce:
1. Open Notes (or Messages).
2. Switch to Thai input.
3. Type a few Thai words.
4. Press Command + Delete.
5. Type again — the first character shows up in English.
Expected: First character should remain in Thai, consistent with the active input source.
Actual: First character shows as English, then input switches back to Thai.
0
0
800
Aug ’25
When using UIScrollView and UITextView together, inserting a link causes the following text to disappear.
Since UITextView does not support zooming on its own, I add the UITextView as a subview of a UIScrollView and use the scroll view's zoom function. However, when I insert a link, the text that follows it disappears. Ex) https://appstoreconnect.apple.com/login\nApple Developer Login -> The text “Apple Developer Login” does not appear. If anyone has experienced the same problem or knows a solution, please leave a comment. Note) It works normally on iOS 16, but the text after the link disappears on iOS 18. The text is not visible, yet you can still copy and paste it to retrieve the missing content. (A minimal sketch of the setup is after this entry.)
0
0
245
Feb ’25
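For anyone trying to reproduce the disappearing-text issue above, here is a minimal sketch of the described setup: a non-scrolling UITextView added as a subview of a UIScrollView to get pinch-to-zoom, with a link followed by plain text. Class names, sizes, and zoom limits are illustrative assumptions, not taken from the poster's project.

import UIKit

final class ZoomableTextViewController: UIViewController, UIScrollViewDelegate {
    private let scrollView = UIScrollView()
    private let textView = UITextView()

    override func viewDidLoad() {
        super.viewDidLoad()

        scrollView.frame = view.bounds
        scrollView.minimumZoomScale = 1.0
        scrollView.maximumZoomScale = 4.0
        scrollView.delegate = self
        view.addSubview(scrollView)

        // A link followed by plain text; on iOS 18 the poster reports the trailing
        // text ("Apple Developer Login") no longer renders, while iOS 16 shows it.
        let urlString = "https://appstoreconnect.apple.com/login"
        let text = NSMutableAttributedString(
            string: urlString + "\nApple Developer Login",
            attributes: [.font: UIFont.preferredFont(forTextStyle: .body)]
        )
        text.addAttribute(.link,
                          value: URL(string: urlString)!,
                          range: NSRange(location: 0, length: (urlString as NSString).length))

        textView.isScrollEnabled = false
        textView.isEditable = false
        textView.attributedText = text
        textView.frame = CGRect(x: 0, y: 0, width: scrollView.bounds.width, height: 200)
        scrollView.addSubview(textView)
        scrollView.contentSize = textView.frame.size
    }

    // Let the scroll view zoom the embedded text view.
    func viewForZooming(in scrollView: UIScrollView) -> UIView? { textView }
}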
tvOS: GCController does not send button press events for "Button A" and "Button Center" when VoiceOver is On
When turning VoiceOver ON, GCController does not send button press events for "Button A" and "Button Center". This happens when using the Siri Remote (2nd generation), which has dedicated arrow buttons on the ring around the center button, and also when using the iOS remote. I didn't test it on the old Siri Remote (1st generation) with the touchpad and no arrow buttons. Example:

gameController.microGamepad?.allButtons.forEach { button in
    button.valueChangedHandler = { [weak self] _, _, _ in
        self?.buttonHandler(gameController: gameController, button: button)
    }
}

private func buttonHandler(gameController: GCController, button: GCControllerButtonInput) {
    print("BUTTON: Pressed \(button.description) isPressed=\(button.isPressed) isTouched=\(button.isTouched)")
}

VoiceOver ON (incorrect behavior):
BUTTON: Pressed Direction Pad Left (value: 0.030, pressed: 1) isPressed=true isTouched=true
BUTTON: Pressed Direction Pad Down (value: 0.079, pressed: 1) isPressed=true isTouched=true
BUTTON: Pressed Direction Pad Left (value: 0.000, pressed: 0) isPressed=false isTouched=false
BUTTON: Pressed Direction Pad Down (value: 0.000, pressed: 0) isPressed=false isTouched=false

VoiceOver OFF (correct behavior):
BUTTON: Pressed Direction Pad Left (value: 0.137, pressed: 1) isPressed=true isTouched=true
BUTTON: Pressed Direction Pad Up (value: 0.078, pressed: 1) isPressed=true isTouched=true
BUTTON: Pressed Button A (value: 1.000, pressed: 1) isPressed=true isTouched=true
BUTTON: Pressed Button Center (value: 1.000, pressed: 1) isPressed=true isTouched=true
BUTTON: Pressed Button A (value: 0.000, pressed: 0) isPressed=false isTouched=false
BUTTON: Pressed Button Center (value: 0.000, pressed: 0) isPressed=false isTouched=false
BUTTON: Pressed Direction Pad Left (value: 0.000, pressed: 0) isPressed=false isTouched=false
BUTTON: Pressed Direction Pad Up (value: 0.000, pressed: 0) isPressed=false isTouched=false

I could detect Direction Pad Left/Right/Up/Down values between -0.7 and +0.7 and treat that as a center-button press; I already do something similar on the old Siri Remote, where I need to distinguish the center button from the arrows (Up/Down switches TV channels, Left/Right skips forward/back). For the new Siri Remote, though, that would be an unnecessary workaround. Does anybody know why the center/select button is not detected when VoiceOver is on? Is there another way of detecting it using GCController? I don't want to use SwiftUI onTapGesture for this one particular case. Is this an unexpected bug in the tvOS APIs, or is there a specific reason why the center button is not delivered by GCController when VoiceOver is on? Thanks.
0
0
647
Jan ’25
Accessibility VoiceOver is not treating navigation bar left button as first focused element
VoiceOver is not treating the navigation bar's left button as the first focused element. If we navigate from A -> B, the focus goes to the first element inside the B view, not to the back button or B view's navigation title. If we post an accessibility notification in onAppear of B, focus does not shift; VoiceOver reads the back button first and then reads the B view's content items, but it does not actually focus the back button in SwiftUI. What should I do if I want to focus the navigation bar's back button or navigation title? My understanding is that the system prioritizes the first focusable element in the view hierarchy, but the navigation bar (including the close button and title) is managed separately by the system; it is not part of the main view hierarchy, so it does not automatically receive focus unless explicitly set. If my thoughts are right, it seems a little strange. Why was it designed this way? Can you share the reasoning? Thanks. (A sketch of the notification approach I tried is after this entry.)
0
0
379
Sep ’25
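For reference on the focus question above, a minimal SwiftUI sketch of the notification approach the post describes, together with the AccessibilityFocusState API. View names are hypothetical, and note that AccessibilityFocusState can only target elements owned by the view, not the system-managed back button or navigation title, which matches the behavior described.

import SwiftUI
import UIKit

struct DetailView: View {   // the "B" screen in the post above
    @AccessibilityFocusState private var contentFocused: Bool

    var body: some View {
        List {
            Text("First row")
            Text("Second row")
        }
        .navigationTitle("Detail")
        .accessibilityFocused($contentFocused)
        .onAppear {
            // Attempt from the post: announce a screen change so VoiceOver re-evaluates focus.
            UIAccessibility.post(notification: .screenChanged, argument: nil)
            // Moves VoiceOver focus, but only to this in-view element; the system
            // navigation bar cannot be targeted this way.
            contentFocused = true
        }
    }
}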
ARKit Eye Tracking Calibration Issues - Word-Level Reading Tracking Feasibility
Hi Apple Developer Community, I'm developing an eye-tracking application using ARKit's ARFaceTrackingConfiguration and ARFaceAnchor.blendShapes for gaze detection in Xcode. I'm experiencing several calibration and accuracy issues and would appreciate insights from the community. (A minimal sketch of the blend-shape readout is after this entry.)
Current implementation:
• Using ARFaceAnchor.blendShapes (.eyeLookUpLeft, .eyeLookDownLeft, .eyeLookInLeft, .eyeLookOutLeft, etc.)
• Implementing custom sensitivity curves and smoothing algorithms
• Applying baseline correction and coordinate mapping
• Using quadratic regression for calibration-point mapping
Issues I'm facing:
1. Calibration mismatch: the red dot doesn't align with where I'm actually looking; there is a significant offset between the intended gaze point and the actual cursor position; calibration seems to drift or become inaccurate over time.
2. Extreme eye-movement requirements: I need to make exaggerated eye movements to reach screen edges and corners; natural eye movements don't translate into proportional cursor movement; some screen regions are hard to reach even after calibration.
3. Sensitivity and stability issues: the cursor jitters or jumps around when I look at the center; there is too much sensitivity to micro-movements; behavior is inconsistent between calibration and normal operation.
4. Tracking on the calibration screen and on the reading screen works better, as expected, when there is head movement, but I don't want much head movement; I want tracking with normal eye movement while reading an ebook.
Primary question: word-level eye-tracking feasibility. Is word-level eye tracking (tracking gaze as users read through individual words in an ebook) technically feasible with current iPhone/iPad hardware? I understand that Apple's built-in eye tracking is primarily an accessibility feature for UI navigation. However, I'm wondering whether the TrueDepth camera and ARKit's eye-tracking capabilities are sufficient for: tracking natural reading patterns (left-to-right, line-by-line progression); detecting which specific words a user is looking at; maintaining accuracy for sustained reading sessions (15-30 minutes); and working reliably across different users and lighting conditions.
Questions for the community:
• Hardware limitations: are iPhone/iPad TrueDepth cameras capable of the precision needed for word-level tracking, or is this beyond current hardware capabilities?
• Calibration best practices: what calibration strategies have worked best for accurate gaze mapping? How many calibration points are typically needed?
• Reading-specific challenges: are there particular challenges when tracking reading behavior vs. general gaze tracking?
• Alternative approaches: are there better approaches than ARKit blend shapes for this use case?
Current setup: iPhone 14 Pro, iOS 18.3, latest available ARKit.
Any insights, experiences, or technical guidance would be greatly appreciated. I'm particularly interested in hearing from developers who have worked on similar eye-tracking applications or have experience with the limitations and capabilities of ARKit's eye-tracking features. Thank you for your time and expertise!
0
0
696
Oct ’25
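A minimal sketch of the blend-shape readout mentioned in the eye-tracking post above: reading the eye-look coefficients from ARFaceAnchor in an ARSessionDelegate and combining them into a rough horizontal/vertical gaze estimate. The averaging and the sign conventions are illustrative assumptions; the calibration and screen mapping, which the post identifies as the hard part, are deliberately left out.

import ARKit

final class GazeReader: NSObject, ARSessionDelegate {
    private let session = ARSession()

    func start() {
        session.delegate = self
        session.run(ARFaceTrackingConfiguration())
    }

    func session(_ session: ARSession, didUpdate anchors: [ARAnchor]) {
        guard let face = anchors.compactMap({ $0 as? ARFaceAnchor }).first else { return }
        let shapes = face.blendShapes

        func value(_ key: ARFaceAnchor.BlendShapeLocation) -> Float {
            shapes[key]?.floatValue ?? 0
        }

        // Rough, uncalibrated estimate: positive x means looking toward the user's right,
        // positive y means looking up. Averaging both eyes this way is an assumption,
        // not an Apple-recommended mapping.
        let x = ((value(.eyeLookOutRight) - value(.eyeLookInRight)) +
                 (value(.eyeLookInLeft) - value(.eyeLookOutLeft))) / 2
        let y = ((value(.eyeLookUpLeft) + value(.eyeLookUpRight)) -
                 (value(.eyeLookDownLeft) + value(.eyeLookDownRight))) / 2

        print("gaze estimate x=\(x) y=\(y)")
    }
}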
Apple greets Global Accessibility Awareness Day with severe accessibility violations on macOS
I'm reposting here my FB17602742, submitted yesterday: The strong wording of this message comes from years of Apple ignoring the needs of users who can't tolerate UI animations and convulsions. At this point, it's clear that Apple is either intentionally harming users like me or simply doesn't care about meeting even the most basic accessibility standards on macOS. Yes, many UI animations and convulsions can, fortunately, be disabled - but not through straightforward UI controls. Instead, users are forced to look for obscure Terminal commands found scattered across the Internet. The "Reduce motion" checkbox in System Settings is simply a fake control that doesn't do anything - instead of actually disabling all UI animations and convulsions. What's worse, two of the most offensive UI animations cannot be disabled at all. Apple has consistently dismissed requests to let users disable the following UI animations:
1. Scroll bar rollover highlight effect (introduced on macOS 10.7.3). Every time the cursor passes over a scroll bar, it gets highlighted. This draws the user's attention to random scroll bars for no reason - just because the cursor happened to pass over them. It results in HUNDREDS of unnecessary, annoying events of distraction daily!
2. Expand/collapse animation of NSOutlineView (e.g., when opening/closing folders in the list view in the Finder, or any other app using outline views). This animation is extremely distracting, irritating, and time-wasting.
Global Accessibility Awareness Day is approaching. Dear Apple, please adhere to the most basic accessibility standards. Stop the needless suffering of countless users like me. Let us disable the two aforementioned UI convulsions. Thank you for your attention to the issue.
0
0
153
May ’25
Autonomous Single App Mode (ASAM) in macOS
Hello, I tried implementing ASAM for macOS per Apple's guidelines with the configuration profile mentioned here, but didn't have any success. Apple then suggested using requestGuidedAccessSession on macOS, but that is only supported in Mac Catalyst, and it also didn't work even with valid configuration profiles. Has anyone had success with ASAM mode without the assessment entitlement? (A short sketch of the call is after this entry.)
0
0
255
Jul ’25
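For reference, a minimal sketch of the requestGuidedAccessSession call mentioned in the ASAM post above, as it looks in a Mac Catalyst (UIKit) target. Whether the completion reports success still depends on the device being managed with an appropriate ASAM configuration profile (or the app holding the assessment entitlement), which is exactly what the post is asking about.

import UIKit

func enterAutonomousSingleAppMode() {
    // Expected to report false unless the ASAM profile / entitlement is in place.
    UIAccessibility.requestGuidedAccessSession(enabled: true) { success in
        print("ASAM session started: \(success)")
    }
}

func exitAutonomousSingleAppMode() {
    UIAccessibility.requestGuidedAccessSession(enabled: false) { success in
        print("ASAM session ended: \(success)")
    }
}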
IOHIDCheckAccess(kIOHIDRequestTypeListenEvent) does not work
I have an app that needs Input Monitoring permission to receive keyboard events in the background. I've attempted to use both IOHIDCheckAccess(kIOHIDRequestTypeListenEvent) and IOHIDRequestAccess(kIOHIDRequestTypeListenEvent), but they always return denied, even though I have granted the app Input Monitoring permission in System Settings. Is there something I need to put in my Info.plist for this permission to work? (A short sketch of the calls is after this entry.)
0
0
461
3w
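A short sketch of the two calls described in the Input Monitoring post above, spelled with the C constant names the post uses (depending on the SDK's Swift import these may surface as enum cases such as IOHIDRequestType.listenEvent instead). One thing worth checking, stated here as an assumption: TCC attributes the Input Monitoring permission to the responsible process, so running the same code from a helper process or from Terminal can report denied even when the app itself is listed in System Settings.

import IOKit.hid

func checkInputMonitoring() {
    // kIOHIDRequestTypeListenEvent corresponds to the Input Monitoring permission.
    let status = IOHIDCheckAccess(kIOHIDRequestTypeListenEvent)

    if status == kIOHIDAccessTypeGranted {
        print("Input Monitoring granted")
    } else {
        // Prompts the user (or points them to System Settings) and returns whether
        // access is granted for this process.
        let granted = IOHIDRequestAccess(kIOHIDRequestTypeListenEvent)
        print("IOHIDRequestAccess returned \(granted)")
    }
}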
VoiceOver and limited vision approaches for custom stepper control
I have a product for designing particle emitters, which I suspect may be of limited interest to people with limited vision. I'd still like to ensure I'm doing a good job with VoiceOver mode. There's a related, simplified sample online, if you want to look at the code. As you can see from the picture below, a large part of the interface mimics Xcode's particle editor, with many value-entry controls that combine up/down buttons with a tappable label. Tapping the label goes into edit mode. Apart from changing how labels are stepped through in VoiceOver in my app, how should I handle these stepper buttons? Is this a good place to use a Custom Rotor? (A sketch of an adjustable-element alternative is after this entry.)
0
0
81
Jun ’25
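Not a direct answer to the Custom Rotor question above, but for comparison, a minimal sketch of the usual alternative for controls like these: collapsing the up/down buttons and the tappable label into a single adjustable VoiceOver element with accessibilityAdjustableAction, so users swipe up or down to change the value. Names and the step size are illustrative.

import SwiftUI

struct EmitterValueControl: View {
    let title: String            // e.g. "Birth Rate"
    @Binding var value: Double
    var step: Double = 1

    var body: some View {
        HStack {
            Button("-") { value -= step }
            Text("\(title): \(value, specifier: "%.1f")")
            Button("+") { value += step }
        }
        // Present the whole control as one element to VoiceOver...
        .accessibilityElement(children: .ignore)
        .accessibilityLabel(title)
        .accessibilityValue(String(format: "%.1f", value))
        // ...and let swipe up/down adjust it, like a built-in stepper or slider.
        .accessibilityAdjustableAction { direction in
            switch direction {
            case .increment: value += step
            case .decrement: value -= step
            @unknown default: break
            }
        }
    }
}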
Custom prediction panel not working in Google Docs
I'm working on a macOS Accessibility setup for a French-speaking user and I've hit a wall. (I'm not a developer and I'm trying to help my kid with dyslexia.) I successfully built a custom word prediction panel using the Panel Editor (Keyboard) in macOS Accessibility > Keyboard > Accessibility Keyboard. Here's what I have so far:
• The prediction panel works system-wide: I can use it to type in Finder, Safari, Notes, TextEdit, and even browser search bars.
• The panel appears above all applications and suggestions show up correctly.
• However, it does not work inside Google Docs (tested in Chrome, Safari, and Firefox). Selecting a word from the panel does nothing in the Docs editor.
I suspect this is because:
• Google Docs does not use a standard macOS text input field.
• Docs is a web app that relies on custom JavaScript editors, contentEditable elements, and canvas rendering, so macOS Accessibility APIs (AXTextField, AXInsertText, etc.) don't register or inject text events.
• Accessibility tools like the Accessibility Keyboard rely on native macOS text input methods, which don't hook into Google Docs' custom editor.
Important: I'm not a programmer. I'd like to know if there is an easy fix or option in macOS, Google Chrome, or Google Docs that would make my custom prediction panel work, before going into custom development.
Technical setup:
• MacBook Air (M2, 2022)
• RAM: 8 GB
• macOS: Sequoia 15.3.1
• Language: French (system and keyboard)
• Accessibility Keyboard: Enabled via Settings > Accessibility > Keyboard
• Custom panel: Built using Panel Editor (Keyboard), named “Philemon Prédiction”
• Browsers tested: Chrome, Safari, Firefox (same issue)
• Behavior: Panel is visible, suggestions appear, but inserting text does nothing in Google Docs
Has anyone worked around this limitation? Is there a simple setting, workaround, or accessibility option to bridge macOS Accessibility input with Google Docs' editor? Thanks a lot!
0
1
934
Aug ’25
Significant image-quality differences when switching between multiple cameras
After switching, the two cameras differ noticeably in brightness, color saturation, contrast, and other image-quality parameters, producing a jarring visual experience that breaks the continuity of the livestream and hurts viewer immersion. Requests:
· Benchmark against the image quality of conventional DSLR livestream cameras and improve Vision Pro's picture brightness and color reproduction.
· Provide custom image-quality adjustments (brightness, contrast, color temperature, etc.) on the device or in companion software, so the picture can be calibrated manually before a livestream to match the DSLR's look.
0
0
91
1w
Separate bluetooth keyboard and bluetooth scanner inputs
Hi, I have an iOS app where a Bluetooth scanner and a Bluetooth keyboard both need to work simultaneously. I use the pressesBegan method to fetch the characters coming from the Bluetooth keyboard, and it captures all keyboard input correctly. However, the method is also called when characters come from the Bluetooth scanner. Inside pressesBegan I need to separate input coming from the keyboard from input coming from the scanner, since each has a different use in the app. I already tried distinguishing them by the speed at which characters arrive, but that fails with fast typing, so timing alone won't work in our case. Is there any way to separate the inputs based on some other factor, or any other information in pressesBegan that indicates whether the input came from the scanner or the keyboard? Any suggestion would be helpful. Thanks in advance. (A sketch of the handler I'm describing is after this entry.)
0
0
404
Jan ’25
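A minimal sketch of the pressesBegan override described in the post above. It logs what UIPress and UIKey expose per key event; as far as I know UIKit does not surface which HID device generated a press here, which is why the post is looking for another distinguishing factor. The timestamp-gap heuristic in the comments is the timing approach the post says is unreliable.

import UIKit

final class ScannerAwareViewController: UIViewController {
    private var lastPressTimestamp: TimeInterval = 0

    override func pressesBegan(_ presses: Set<UIPress>, with event: UIPressesEvent?) {
        for press in presses {
            guard let key = press.key else { continue }
            // UIKey only describes the key itself (characters, modifiers, key code);
            // it carries no identifier for the sending Bluetooth device.
            let gap = press.timestamp - lastPressTimestamp
            lastPressTimestamp = press.timestamp
            print("chars=\(key.characters) keyCode=\(key.keyCode.rawValue) gap=\(gap)s")
            // Heuristic from the post: very small gaps suggest a scanner burst,
            // but fast human typing defeats this.
        }
        super.pressesBegan(presses, with: event)
    }
}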
iOS 26 regression: `DeviceActivityEvent`: `eventDidReachThreshold` called immediately (instead of waiting till threshold is reached)
Hello Albert! I am experiencing some strange bugs around DeviceActivityEvents (part of the DeviceActivity framework) on iOS 26 / iOS 26.1 / iOS 26.2 beta. When creating a DeviceActivityEvent we can assign a threshold and applicationTokens; the idea is that after the user has spent said threshold on said apps, eventDidReachThreshold() is called. The property includesPastActivity is set to false. On iOS 26, however, it happens quite often (quite reliably after updating to a new beta seed) that eventDidReachThreshold() is called immediately (after a couple of seconds) instead of waiting for the threshold to be met. Is anyone else seeing similar issues on iOS 26 / iOS 26.1 / iOS 26.2 beta? The only workaround I have found is to ask users to revoke and re-grant Screen Time permissions. This only holds for about two weeks, or at most until the next iOS 26 beta update is installed, so it is unfortunately not a permanent solution. Feedback (incl. sysdiagnoses and a sample project) is filed under FB18061981 and FB18927456. One of our users has filed their own feedback request as well: FB20817853. Thanks a lot for any help on this! (A sketch of the setup is after this entry.)
0
0
314
Nov ’25
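For context, a minimal sketch of the setup the DeviceActivityEvent post above describes: registering an event with a threshold and application tokens, and a monitor extension whose eventDidReachThreshold should only fire once the threshold is met. The names, the 30-minute threshold, and the all-day schedule are illustrative assumptions; the tokens would come from a FamilyActivityPicker selection.

import DeviceActivity
import ManagedSettings
import Foundation

extension DeviceActivityName { static let daily = Self("daily") }
extension DeviceActivityEvent.Name { static let reachedLimit = Self("reachedLimit") }

// In the main app: start monitoring with a 30-minute threshold on the selected apps.
func startMonitoring(tokens: Set<ApplicationToken>) throws {
    let schedule = DeviceActivitySchedule(
        intervalStart: DateComponents(hour: 0, minute: 0),
        intervalEnd: DateComponents(hour: 23, minute: 59),
        repeats: true
    )
    let event = DeviceActivityEvent(
        applications: tokens,
        threshold: DateComponents(minute: 30),
        includesPastActivity: false   // per the post, past usage should not count
    )
    try DeviceActivityCenter().startMonitoring(.daily, during: schedule, events: [.reachedLimit: event])
}

// In the DeviceActivityMonitor extension: this should fire only after 30 minutes of
// usage, but on iOS 26 betas the post reports it firing within seconds.
final class Monitor: DeviceActivityMonitor {
    override func eventDidReachThreshold(_ event: DeviceActivityEvent.Name,
                                         activity: DeviceActivityName) {
        super.eventDidReachThreshold(event, activity: activity)
        // Apply shields or notify the user here.
    }
}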