Explore best practices for creating inclusive apps for users of Apple accessibility features and users from diverse backgrounds.

All subtopics
Posts under Accessibility & Inclusion topic

Post

Replies

Boosts

Views

Activity

Make Accessibility Focus move to UIPickerView when tapping on UITextField (Full Keyboard Access)
I have a UITextField in my application for entering a state. Tapping it pops up a UIPickerView that lets the user select a state (though they can still type). The issue relates to Full Keyboard Access: if the UITextField is selected with an external keyboard, the UIPickerView appears, but to reach it the user has to tab through the whole view controller. What would be nice is to either a) move focus directly to the UIPickerView (highlighted in blue and immediately scrollable with the keyboard) or b) make the UIPickerView the next view reached when tabbing or using the arrow keys. I've tried:
- UIAccessibility notifications (both .screenChanged and .layoutChanged, with and without a delay). This only announced the view and didn't help with Full Keyboard Access.
- Making the UIPickerView first responder when it appears.
- Changing the accessibilityElements order (with so many views nested within views, this isn't really viable).
Pressing Tab plus the right-arrow key quickly takes the user to the end of the chain of accessibility elements, i.e., to the UIPickerView, but there has to be a cleaner way of automatically setting focus to the UIPickerView or making it the next element reached with an arrow key (see the sketch after this entry).
0
0
397
Mar ’25
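For what it's worth, a minimal sketch of one approach, assuming iOS 15+ and hypothetical stateField/statePicker names: ask the focus system to move Full Keyboard Access focus to the picker once it is on screen, or steer the next focus update by overriding preferredFocusEnvironments.

```swift
import UIKit

class StateEntryViewController: UIViewController {
    let stateField = UITextField()    // hypothetical, for illustration
    let statePicker = UIPickerView()

    // Call after the picker has been added to the hierarchy and laid out.
    func moveKeyboardFocusToPicker() {
        if #available(iOS 15.0, *) {
            // Ask the focus system that owns the picker to move Full
            // Keyboard Access focus straight to it.
            UIFocusSystem.focusSystem(for: statePicker)?
                .requestFocusUpdate(to: statePicker)
        }
    }

    // Alternative: prefer the picker while it is visible, then call
    // setNeedsFocusUpdate() so the next focus pass lands on it.
    override var preferredFocusEnvironments: [UIFocusEnvironment] {
        if !statePicker.isHidden {
            return [statePicker]
        }
        return super.preferredFocusEnvironments
    }
}
```

Whether UIPickerView accepts keyboard focus directly in this configuration is something to verify on device; this is a sketch, not a confirmed fix.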
ARKit Eye Tracking Calibration Issues - Word-Level Reading Tracking Feasibility
Hi Apple Developer Community, I'm developing an eye-tracking application using ARKit's ARFaceTrackingConfiguration and ARFaceAnchor.blendShapes for gaze detection in Xcode. I'm experiencing several calibration and accuracy issues and would appreciate insights from the community.
Current Implementation
- Using ARFaceAnchor.blendShapes (.eyeLookUpLeft, .eyeLookDownLeft, .eyeLookInLeft, .eyeLookOutLeft, etc.)
- Implementing custom sensitivity curves and smoothing algorithms (see the sketch after this entry)
- Applying baseline correction and coordinate mapping
- Using quadratic regression for calibration point mapping
Issues I'm Facing
1. Calibration mismatch: the red dot doesn't align with where I'm actually looking, there is a significant offset between the intended gaze point and the cursor position, and calibration drifts or becomes inaccurate over time.
2. Extreme eye movement requirements: I need exaggerated eye movements to reach screen edges and corners; natural eye movements don't translate to proportional cursor movement, and some screen regions are hard to reach even after calibration.
3. Sensitivity and stability: the cursor jitters or jumps around when looking at the center, is too sensitive to micro-movements, and behaves inconsistently between calibration and normal operation.
4. Tracking on both the calibration screen and the reading screen works better when there is head movement, but I don't want much head movement; I want tracking with normal eye movement while reading an ebook.
Primary Question: Word-Level Eye Tracking Feasibility
Is word-level eye tracking (tracking gaze as users read individual words in an ebook) technically feasible with current iPhone/iPad hardware? I understand that Apple's built-in eye tracking is primarily an accessibility feature for UI navigation. However, I'm wondering whether the TrueDepth camera and ARKit's eye-tracking capabilities are sufficient for:
- Tracking natural reading patterns (left-to-right, line-by-line progression)
- Detecting which specific word a user is looking at
- Maintaining accuracy for sustained reading sessions (15-30 minutes)
- Working reliably across different users and lighting conditions
Questions for the Community
- Hardware limitations: Are iPhone/iPad TrueDepth cameras capable of the precision needed for word-level tracking, or is this beyond current hardware capabilities?
- Calibration best practices: What calibration strategies have worked best for accurate gaze mapping? How many calibration points are typically needed?
- Reading-specific challenges: Are there particular challenges when tracking reading behavior vs. general gaze tracking?
- Alternative approaches: Are there better approaches than ARKit blend shapes for this use case?
Current Setup
Device: iPhone 14 Pro; iOS version: iOS 18.3; ARKit version: latest available.
Any insights, experiences, or technical guidance would be greatly appreciated. I'm particularly interested in hearing from developers who have worked on similar eye-tracking applications or have experience with the limitations and capabilities of ARKit's eye tracking. Thank you for your time and expertise!
0
0
706
Oct ’25
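On the jitter and sensitivity points, a common mitigation (independent of ARKit specifics) is a dead zone plus an exponential moving average over the gaze point. A minimal sketch, with all names and constants hypothetical:

```swift
import simd

/// Smooths a normalized gaze point (0...1 in both axes) derived from
/// ARFaceAnchor.blendShapes, trading a little latency for stability.
struct GazeSmoother {
    private var filtered = SIMD2<Double>(0.5, 0.5)
    var smoothing = 0.85   // 0 = raw and jittery; closer to 1 = steadier
    var deadZone = 0.01    // radius below which micro-movements are ignored

    mutating func update(with raw: SIMD2<Double>) -> SIMD2<Double> {
        // Suppress micro-saccades and sensor noise near the current point.
        if simd_distance(raw, filtered) < deadZone { return filtered }
        // Exponential moving average of the incoming samples.
        filtered = smoothing * filtered + (1 - smoothing) * raw
        return filtered
    }
}
```

Heavier smoothing steadies the center-gaze cursor but worsens the edge-reach problem, so the two parameters usually have to be tuned together.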
ApplePay Merchant Session - Could not create SSL/TLS secure channel issue
As part of our Apple Pay implementation, we are trying to create a merchant session by connecting to the Apple endpoint https://apple-pay-gateway-cert.apple.com/paymentservices/startSession. We get the error "An error occurred while sending the request. The request was aborted: Could not create SSL/TLS secure channel." I call the validation URL from a C# .NET Framework 4.8 Web API. The API sets up an HttpClient with the Merchant Identity Validation certificate from my Apple account and calls the validation URL, passing in the required JSON validation object. When I call PostAsync() I get an exception with the above message. The code works on my local machine but fails when deployed to our Dev/Model environments for testing. We deploy to an Azure App Service with TLS 1.2 enabled. We are using the Merchant Identity certificate that was issued and have checked with our networking and infrastructure teams to make sure the problem isn't on our side. Does anyone have any other idea what could be causing this error? (See the notes and sketch after this entry.) Thank you, Supriya
0
0
112
Jun ’25
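The stack here is C#/.NET, where this error is classically caused by ServicePointManager.SecurityProtocol not including TLS 1.2, or by the App Service identity lacking access to the certificate's private key. For comparison, a minimal Swift sketch of the same mutual-TLS call (helper names hypothetical; the gateway requires the Merchant Identity certificate as a client certificate):

```swift
import Foundation

// Hypothetical helper: load the Merchant Identity from .p12 data.
func loadMerchantIdentity(p12Data: Data, password: String) -> SecIdentity? {
    var items: CFArray?
    let options = [kSecImportExportPassphrase as String: password] as CFDictionary
    guard SecPKCS12Import(p12Data as CFData, options, &items) == errSecSuccess,
          let dict = (items as? [[String: Any]])?.first,
          let identityRef = dict[kSecImportItemIdentity as String]
    else { return nil }
    return (identityRef as! SecIdentity)
}

final class MerchantSessionDelegate: NSObject, URLSessionDelegate {
    let identity: SecIdentity
    init(identity: SecIdentity) { self.identity = identity }

    func urlSession(_ session: URLSession,
                    didReceive challenge: URLAuthenticationChallenge,
                    completionHandler: @escaping (URLSession.AuthChallengeDisposition, URLCredential?) -> Void) {
        // The gateway requests a client certificate during the handshake;
        // failing to present one aborts the TLS channel, as described above.
        guard challenge.protectionSpace.authenticationMethod ==
                NSURLAuthenticationMethodClientCertificate else {
            completionHandler(.performDefaultHandling, nil)
            return
        }
        let credential = URLCredential(identity: identity,
                                       certificates: nil,
                                       persistence: .forSession)
        completionHandler(.useCredential, credential)
    }
}
```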
Add custom emoji tags
Allow the user to add their own tags to the default emoji tags. For instance, this emoji, for me, is nonna: 🤌🏻. My efficiency would improve immensely if I could search for it as the “Nonna” emoji, rather than searching for nonna, remembering it doesn’t exist, trying the search for other things it might be called, realising I don’t know what it is, then having to scroll through all the hand emojis twice to find it. 🤌🏻🤞🏼👌
1
0
521
Jan ’25
VoiceOver mode issues
I added a view controller in the storyboard, added a table view to it, and added a cell under the table. When I run the app, navigate to that page, and use VoiceOver, a three-finger swipe up or down announces a sentence in English. Without changing the accessibility of the cell, how can I get that three-finger-swipe announcement spoken in Chinese? (See the sketch after this entry.)
1
0
75
Jun ’25
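If the English sentence is the scroll status ("Page X of Y"), UIKit lets the scroll view's delegate replace it, and the attributed variant can also pin the speech language. A sketch, assuming the controller is already the table view's delegate (class name hypothetical):

```swift
import UIKit

final class PageListViewController: UITableViewController {}

extension PageListViewController: UIScrollViewAccessibilityDelegate {
    func accessibilityAttributedScrollStatus(for scrollView: UIScrollView) -> NSAttributedString? {
        let pageHeight = max(scrollView.bounds.height, 1)
        let page = Int(scrollView.contentOffset.y / pageHeight) + 1
        let total = max(Int(ceil(scrollView.contentSize.height / pageHeight)), 1)
        // Returned instead of the default English announcement after a
        // three-finger scroll; .accessibilitySpeechLanguage nudges VoiceOver
        // to read this string with a Chinese voice.
        return NSAttributedString(
            string: "第\(page)页,共\(total)页",
            attributes: [.accessibilitySpeechLanguage: "zh-CN"])
    }
}
```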
PHPickerViewController No Auto Focus
The issue: I cannot automatically acquire Bluetooth keyboard focus in PHPickerViewController after enabling Full Keyboard Access on my iPhone 14 running iOS 18.3.1. The keyboard focus in PHPickerViewController does show, but only after I tap on blank space in the picker. How can I make the focus appear in the first place? I'm using a UINavigationController and calling setNavigationBarHidden(true, animated: false), then presenting the PHPickerViewController from that controller with the configuration below.

```swift
self.configuration = PHPickerConfiguration()
configuration.filter = .any(of: filters)
configuration.selectionLimit = selectionLimit
if #available(iOS 15.0, *), allowOrdering {
    configuration.selection = .ordered
}
configuration.preferredAssetRepresentationMode = .current
```

Finally, I set the delegate on the PHPickerViewController and call UINavigationController.present(PHPickerViewController, animated: true) to show it. I also noticed an animation that shows at first and then disappears (visible in the first video).
2
0
297
Mar ’25
Accessibility Traits for Children of a Tab Bar
Hi! I'm working on an application where I'd like VoiceOver to give each element of a tab bar the "Tab" trait. I'm testing with the Accessibility Inspector. Essentially, I'd like to replicate how Safari identifies each of its tabs as a "Tab" (photo attached below). How exactly is this accomplished? I've tried using the .isTabBar trait to designate the child objects as tabs, but this doesn't seem to work, and I've struggled to find documentation about it. For additional context, the child items are Buttons, and I would essentially like the .isButton trait replaced by something like an .isTab trait. Not sure if this is actually possible, but I'm curious how the Accessibility Inspector recognizes this in Safari (see the sketch after this entry).
0
0
171
Jun ’25
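For what it's worth, UIKit exposes a .tabBar accessibility trait intended for tab-bar-like containers; a hedged sketch of applying it to a custom container (names hypothetical, and not verified to reproduce Safari's exact announcement):

```swift
import UIKit

func configureTabStrip(_ container: UIView, buttons: [UIButton], selectedIndex: Int) {
    // Advertise the container as a tab bar so VoiceOver can treat its
    // children as tabs rather than plain buttons.
    container.accessibilityTraits.insert(.tabBar)
    for (index, button) in buttons.enumerated() {
        // VoiceOver conveys the active tab through the .selected trait.
        if index == selectedIndex {
            button.accessibilityTraits.insert(.selected)
        } else {
            button.accessibilityTraits.remove(.selected)
        }
    }
}
```

Safari itself may rely on WebKit-level attributes, so treat this as a starting point rather than a confirmed match.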
How to force VoiceOver to read decimal point even when there are 6 or more decimal digits?
When VoiceOver reads decimal numbers with six or more digits after the decimal separator, it stops announcing the separator and also adds pauses between each digit.

```swift
Text("0.12345")  // VoiceOver: "zero point one two three four five"
Text("0.123456") // VoiceOver: "zero one, two, three, four, five, six"
```

How can I force VoiceOver to announce the decimal separator ("point") and not insert pauses, regardless of the number of decimal digits? (One workaround is sketched after this entry.)
1
0
268
Jun ’25
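One workaround is to sidestep VoiceOver's number heuristics entirely by supplying an explicit spoken form as the accessibility label. A sketch, with the helper name hypothetical:

```swift
import SwiftUI

// Spell a numeric string digit by digit so "point" is always spoken.
func spokenForm(of number: String) -> String {
    let words: [Character: String] = [
        "0": "zero", "1": "one", "2": "two", "3": "three", "4": "four",
        "5": "five", "6": "six", "7": "seven", "8": "eight", "9": "nine",
        ".": "point"
    ]
    return number.map { words[$0] ?? String($0) }.joined(separator: " ")
}

struct DecimalText: View {
    let value: String   // e.g. "0.123456"
    var body: some View {
        Text(value)
            // "zero point one two three four five six", no per-digit pauses
            .accessibilityLabel(spokenForm(of: value))
    }
}
```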
Apple Developer Program Purchase Not Saving Progress
I’m trying to enroll in the Apple Developer Program as an individual. I’ve gone through the steps on the website and started the purchase process. However, after a couple of days when I return to the site, it doesn’t remember my progress — I have to start the enrollment from scratch every time. Is this expected behavior? Am I missing a step to save my progress or complete the enrollment properly? Any help or guidance would be appreciated. Thank you!
0
0
161
Jun ’25
My Enrollment is being processed for long time
Hello, I’m reaching out regarding an issue with our organization’s Apple Developer Program enrollment. We’ve successfully created a developer account and our organization is verified through the D-U-N-S system. The D-U-N-S ID is correctly displayed in our Apple Developer account. However, the enrollment status still shows: “Your enrollment is being processed.” It’s been 3 months, and we haven’t received any further communication or updates. Has anyone experienced a similar delay? Is there anything else we should do to expedite the process? Any guidance or insight would be greatly appreciated. Thanks,
1
0
876
Oct ’25
When using UIScrollView and UITextView together, inserting a link causes the following text to disappear.
Since UITextView does not support zooming, I add the UITextView as a subview of a UIScrollView and let the scroll view provide the zoom. However, when I insert a link, the text after it goes missing. For example, with https://appstoreconnect.apple.com/login\nApple Developer Login, the text "Apple Developer Login" does not appear. If anyone has experienced the same problem or knows a solution, please leave a comment. Note: it works normally on iOS 16, but the text after the link disappears on iOS 18. The text is not visible, yet you can copy and paste it to retrieve the missing text. (A reduced reproduction sketch follows this entry.)
0
0
245
Feb ’25
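A reduced sketch of the setup described, useful as a reproduction case when filing feedback (delegate wiring for viewForZooming is omitted):

```swift
import UIKit

let textView = UITextView()
let source = "https://appstoreconnect.apple.com/login\nApple Developer Login"
let text = NSMutableAttributedString(string: source)
// Link only the URL portion; on iOS 18 the text after the link reportedly
// stops rendering, although it is still present (it survives copy/paste).
text.addAttribute(.link,
                  value: URL(string: "https://appstoreconnect.apple.com/login")!,
                  range: NSRange(location: 0, length: 39))
textView.attributedText = text

// Zoom is provided by the enclosing scroll view, since UITextView
// does not zoom by itself.
let scrollView = UIScrollView()
scrollView.maximumZoomScale = 3
scrollView.addSubview(textView)
```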
How to Enable Group Navigation Behavior for Custom Views in VoiceOver?
In VoiceOver, when using the Group Navigation style, the cursor first focuses on the semantic group; to navigate inside the group, you use a two-finger swipe (left or right). This works for default containers like the navigation bar, tab bar, and toolbar. How can I achieve the same behavior for a custom view? I tried setting accessibilityContainerType = .semanticGroup, but it only works on Mac Catalyst. Is there an equivalent approach for iOS? (The setup I tried is sketched after this entry.)
0
0
416
Mar ’25
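For reference, a sketch of the setup described in the post; under Mac Catalyst this produces two-finger-swipe group navigation, but on iOS itself it does not appear to group, which is exactly the open question:

```swift
import UIKit

final class CardGroupView: UIView {
    func configureAccessibility() {
        isAccessibilityElement = false
        // Marks this view as one semantic group for assistive technologies.
        accessibilityContainerType = .semanticGroup
        accessibilityLabel = "Card"   // announced when the group gains focus
        accessibilityElements = subviews.filter { $0.isAccessibilityElement }
    }
}
```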
Accessibility Personal Voice Issue with iOS & iPadOS Beta 1-6
Please refer to feedback report FB19701007. A Personal Voice file created in English changes to either Spanish or Chinese and no longer works properly. This has been happening since beta 1 of iOS/iPadOS 26. I have been unable to pinpoint the cause; possibly downloading foreign voices to a device, or using different voices via AVSpeechSynthesizer. I run an app in Xcode on my device that prints info about the installed voices to the console (see the sketch after this entry). Initially, after creation, the output is:
Voice Identifier: com.apple.speech.personalvoice.16173F8D-DFB0-4024-98CC-69D965FD96A4
Language: en-US
Then I hear a Spanish accent and find this:
Voice Identifier: com.apple.speech.personalvoice.16173F8D-DFB0-4024-98CC-69D965FD96A4
Language: es-MX
Currently it isn't working, and the output is:
Voice Identifier: com.apple.speech.personalvoice.16173F8D-DFB0-4024-98CC-69D965FD96A4
Language: zh-CN
Note that the voice file is the same in all three cases. No matter what the user does, the created voice file should never be able to change languages. I reset all my test devices by erasing all content and settings and creating a new English Personal Voice, and the issue persists. A side issue: the "Share Across Devices" toggle doesn't stay off when turned off. I tried disabling sharing to see whether that could be the cause, but the toggle turns itself back on automatically.
0
0
860
Aug ’25
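A sketch of the kind of diagnostic loop described, for anyone wanting to reproduce the language flip (iOS 17+ APIs):

```swift
import AVFoundation

func dumpPersonalVoices() {
    // Personal Voices only appear in speechVoices() once authorized.
    AVSpeechSynthesizer.requestPersonalVoiceAuthorization { status in
        guard status == .authorized else { return }
        for voice in AVSpeechSynthesisVoice.speechVoices()
            where voice.voiceTraits.contains(.isPersonalVoice) {
            print("Voice Identifier: \(voice.identifier)")
            print("Language: \(voice.language)")  // expected en-US; the bug shows es-MX or zh-CN
        }
    }
}
```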
Multi-device coordination is cumbersome
During live streaming we have to operate the Vision Pro (capture), a Mac (streaming), and a control console (scene switching) at the same time, with no unified control entry point. Operations such as adjusting 3D models and calibrating image quality require switching between devices, which is error-prone and inefficient. Request: for live-streaming scenarios, provide dedicated desktop control software that supports one-stop management of the Vision Pro's capture parameters, 3D model switching, and mixed-reality blending effects, making multi-device coordination visual and convenient.
0
0
245
1w
iOS 26 regression: `DeviceActivityEvent`: `eventDidReachThreshold` called immediately (instead of waiting till threshold is reached)
Hello Albert! I am experiencing some strange bugs around DeviceActivityEvents (part of the DeviceActivity framework) on iOS 26 / iOS 26.1 / iOS 26.2 beta. When creating a DeviceActivityEvent, we can assign a threshold and applicationTokens; the idea is that after the user has spent the threshold amount of time in those apps, eventDidReachThreshold() is called. The includesPastActivity property is set to false. On iOS 26, however, it quite often happens (fairly reliably after updating to a new beta seed) that eventDidReachThreshold() is called immediately (after a couple of seconds) instead of waiting for the threshold to be met. (A sketch of the setup follows this entry.) Is anyone else seeing similar issues on iOS 26 / iOS 26.1 / iOS 26.2 beta? The only workaround I have found is to ask users to revoke and re-grant Screen Time permissions. This only holds for about two weeks, though, or at most until the next iOS 26 beta update is installed, so it is unfortunately not a permanent solution. Feedback (incl. sysdiagnoses and a sample project) is filed under FB18061981 and FB18927456. One of our users has filed their own feedback as well: FB20817853. Thanks a lot for any help with this!
0
0
315
Nov ’25
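For context, a minimal sketch of the kind of setup described (names hypothetical; assumes FamilyControls authorization is already granted and the tokens come from a FamilyActivitySelection):

```swift
import DeviceActivity
import ManagedSettings

func startThresholdMonitoring(tokens: Set<ApplicationToken>) throws {
    let schedule = DeviceActivitySchedule(
        intervalStart: DateComponents(hour: 0, minute: 0),
        intervalEnd: DateComponents(hour: 23, minute: 59),
        repeats: true)
    // eventDidReachThreshold should fire only after one hour of usage on
    // these apps; the bug reported above is that it fires within seconds.
    let event = DeviceActivityEvent(
        applications: tokens,
        threshold: DateComponents(hour: 1))
    try DeviceActivityCenter().startMonitoring(
        DeviceActivityName("daily"),
        during: schedule,
        events: [DeviceActivityEvent.Name("oneHourOnApps"): event])
}
```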
Autonomous Single App Mode(ASAM) in macOS
Hello, I tried implementing ASAM (Autonomous Single App Mode) for macOS per Apple's guidelines, using the configuration profile mentioned there, but didn't have any success. Apple then suggested using requestGuidedAccessSession on macOS, but that is only supported through Mac Catalyst, and it didn't work either, even with valid configuration profiles. Has anyone had success with ASAM mode without the assessment entitlement?
0
0
255
Jul ’25
Verifying braille output in an iOS app without a physical braille device?
I'm developing a calculator app and working to ensure a great experience for both VoiceOver and Braille display users. For expressions like (2+3)×5, I need two different accessibility outputs:
- VoiceOver (spoken): a descriptive string like "left paren two plus three right paren times five," provided via .accessibilityValue. I'm using a custom spellOut function since VoiceOver doesn't announce parentheses, which are kind of important when doing math!
- Braille (symbolic): the literal math string (2+3)×5, provided using .accessibilityCustomContent("", ...), with an empty label so it's not spoken aloud.
The issue: I don't have access to a Braille display device, and Xcode's Accessibility Inspector doesn't seem to show the custom content. Is there any way to confirm that custom Braille content is being set correctly in the Simulator or with other tools? Or is there a "math mode" in VoiceOver that forces it to announce parentheses? Any advice or workarounds would be much appreciated (a sketch of the setup follows this entry)! Thanks, Uhl
8
0
323
Jul ’25
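A sketch of the two-channel approach described (SwiftUI, iOS 15+); whether a Braille display actually prefers the custom content over the spoken value is exactly what needs hardware verification:

```swift
import SwiftUI
import Accessibility

struct ExpressionView: View {
    let expression = "(2+3)×5"
    var body: some View {
        Text(expression)
            // Spoken by VoiceOver; parentheses are made explicit.
            .accessibilityValue("left paren two plus three right paren times five")
            // Symbolic form with an empty label so it is not spoken.
            .accessibilityCustomContent("", Text(expression), importance: .high)
    }
}
```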
Size of stylus mesh tip
Hello community, we're designing an app that can optionally be controlled by a stylus with a mesh tip; the tip we're using is 5 mm in diameter. Mesh-tip contact detection seems to be unstable at this size, although it works better with a larger diameter. Is it possible to access a setting in iOS that defines the minimum contact area needed to detect a touch on the screen? This would let us use the 5 mm stylus. Best regards, Edwin
1
0
382
Feb ’25