I would like to ask what the new trackpad gestures are in the latest iPadOS when using a trackpad. The three-finger swipe left and right seems to have been removed. Thank you.
Explore best practices for creating inclusive apps for users of Apple accessibility features and users from diverse backgrounds.
I am seeing a strange issue where NSObject's accessibilityRespondsToUserInteraction returns true in the Simulator but false on a device.
Checking the same object in the Simulator with Accessibility Inspector, I see its traits are image, so why would it return true in that case?
Is there any other way to check whether an item responds to user interaction or is clickable, besides that property and its traits?
(Or is it just another bug?)
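For context, here is a minimal sketch of the kind of fallback check I mean, assuming it is acceptable to combine accessibilityRespondsToUserInteraction with a hand-picked set of interactive traits (the helper name and trait list are my own, not an official API):

import UIKit

// Treat an element as "clickable" if either accessibilityRespondsToUserInteraction
// is true or its accessibility traits contain one of a few interactive traits.
func isLikelyInteractive(_ element: NSObject) -> Bool {
    if element.accessibilityRespondsToUserInteraction {
        return true
    }
    let interactiveTraits: UIAccessibilityTraits = [.button, .link, .adjustable]
    return !element.accessibilityTraits.isDisjoint(with: interactiveTraits)
}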
Hi! I have noticed a few glitches as well as some overall unfortunate cons with the assistive access mode.
Alarms, timers, the stopwatch, etc. do not sound or alert. However, I have an infant monitor app and I do get its sound alerts, so I know it is possible. Do I need to download a separate alarm app for this to work?
Cannot make FaceTime calls with favorite contacts.
Find My iPhone cannot jump to the maps app.
Camera cannot zoom in or out.
Photos cannot be deleted, edited, or shared in a shared album in the photos app.
Photos/videos cannot be sent in messages.
Spotify cannot be accessed from the lock screen.
Apps do not stay open if you lock the phone screen or leave it untouched for too long (auto-lock).
There is no flashlight option. I downloaded an app to add this feature, but if the screen is not touched it auto-locks, which shuts off the flashlight in the app until I unlock the phone again.
In an iOS app I am using NEAppPushSession and, when NEAppPushDelegate receives a notification, displaying the CallKit incoming-call screen. I am running into the following problem.
8/13
The following error appeared repeatedly in the logs.
I ran the notification-receiving test about 120 times, and this error kept appearing the whole time.
エラー 2025-08-14 11:27:06.793073 +0900 nesessionmanager NESMAppPushSession[SimplePushDefaultConfiguration:7B7218F3-94B5-4AE5-9B9E-94E176694D02] failed to report incoming call to CallKit, error: Error Domain=NSCocoaErrorDomain Code=4099 "The connection to service named com.apple.callkit.networkextension.messagecontrollerhost was invalidated from this process." UserInfo={NSDebugDescription=The connection to service named com.apple.callkit.networkextension.messagecontrollerhost was invalidated from this process.}
After this error log appeared repeatedly, the CallKit incoming-call screen stopped being displayed.
However, notification monitoring still appears to have started.
15 hours later, 8/14
Perhaps because some time had passed, notifications began arriving again.
But when I ran the notification-receiving test again, the same error log appeared.
After running the notification test roughly 120 more times, the error log appeared repeatedly and the CallKit incoming-call screen stopped being displayed again.
However, notification monitoring again appears to have started.
15 hours later, 8/15
Today I could not receive notifications even after waiting 15 hours.
But, as before, notification monitoring appears to have started.
Restarting the iPhone, cleaning the Xcode build, and uninstalling and reinstalling the app did not bring the notifications back.
Strangely, devices other than the one where the issue occurred can no longer receive these notifications either.
Other notifications are received fine; only our own notifications delivered through NEAppPushManager cannot be received.
Questions:
What should I do to get the notifications delivered again?
Is this a failure caused by producing too many 4099 errors?
What is causing the 4099 errors?
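For reference, here is a minimal sketch (not our production code) of the flow described above, where the NEAppPushDelegate callback reports an incoming call to CallKit; the userInfo key "callerName" and the provider configuration are assumptions for illustration only:

import NetworkExtension
import CallKit

final class PushToCallHandler: NSObject, NEAppPushDelegate {
    private let provider: CXProvider = {
        let config = CXProviderConfiguration()
        config.supportsVideo = false
        return CXProvider(configuration: config)
    }()

    func appPushManager(_ manager: NEAppPushManager,
                        didReceiveIncomingCallWithUserInfo userInfo: [AnyHashable: Any]) {
        let update = CXCallUpdate()
        update.remoteHandle = CXHandle(type: .generic,
                                       value: userInfo["callerName"] as? String ?? "Unknown")
        provider.reportNewIncomingCall(with: UUID(), update: update) { error in
            if let error = error {
                // This is where errors such as the 4099 XPC invalidation would surface.
                print("Failed to report incoming call: \(error)")
            }
        }
    }
}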
Topic: Accessibility & Inclusion
SubTopic: General
Tags: User Notifications, PushKit, CallKit, Push To Talk
When using iOS VoiceOver to navigate a webpage, selecting a <button> element correctly activates its :focus-visible state. However, when VoiceOver moves to a non-button element (such as a <div> or an <img>), the previously focused button retains its :focus-visible state. The focus indicator only updates when VoiceOver moves to another <button>.
This behavior can be confusing for screen reader users, as it creates the appearance of multiple elements being focused simultaneously. It also differs from expected keyboard navigation behavior, where focus styles typically update as soon as the user moves to a new interactive element.
Is this an intentional VoiceOver behavior, or could this be a bug? If intentional, is there a recommended workaround to ensure correct focus indication when moving between different types of elements?
Steps to Reproduce:
Enable VoiceOver on an iOS device.
Navigate using swipe gestures or explore-by-touch to focus on a <button>.
Observe that the button correctly receives the :focus-visible styling.
Move to a non-button element (e.g., a <div> with tabindex="0" or an <img>).
Notice that the button still retains its :focus-visible state, even though VoiceOver has moved to a new element.
Expected Behavior:
The previously focused <button> should lose its :focus-visible state when VoiceOver moves to a different interactive element, just as it does when using keyboard navigation.
Actual Behavior:
The :focus-visible state remains on the previously focused button unless VoiceOver moves to another <button>. This can create confusion by displaying multiple focus indicators at once.
Tested On:
iOS 17.7, 18.3.1
iOS Safari
iPhone 11 Pro, iPhone 14 Pro Max
Hi everyone,
I enrolled in the Apple Developer Program on the evening of December 26, 2025, and the membership fee has already been successfully charged to my bank account. However, my account is still showing a “Pending” status with the message “Subscribe your membership.”
At this point, some time has passed and I haven’t received a confirmation email or any follow-up requesting additional information.
Topic: Accessibility & Inclusion
SubTopic: General
I have a UIImageView as the background of a custom UIView subclass. The image itself does not contain any text. On top of this image view, I have added two UILabels.
To improve accessibility, I converted the entire view into a single accessibility element and set a proper accessibilityLabel. Additionally, I disabled accessibility for the UIImageView and the labels by setting isAccessibilityElement = false.
However, when VoiceOver's Text Recognition feature (part of VoiceOver Recognition) is enabled, VoiceOver still detects the text inside the UILabels and announces it at the end, after reading my custom accessibility properties. This text should not be announced.
It seems that VoiceOver treats the UILabel content as part of the UIImageView. Additionally, when using the Explore Image rotor action, the entire subview is recognized as a single image.
Is this the expected behavior? If so, is there a way to disable VoiceOver’s text recognition for this view while keeping custom accessibility intact?
import UIKit

class BackgroundLabelView: UIView {
    private let backgroundImageView = UIImageView()
    private let backgroundImageView2 = UIImageView()
    private let titleLabel = UILabel()
    private let subtitleLabel = UILabel()

    override init(frame: CGRect) {
        super.init(frame: frame)
        setupView()
        configureAccessibility()
    }

    required init?(coder: NSCoder) {
        super.init(coder: coder)
        setupView()
        configureAccessibility()
    }

    private func configureAccessibility() {
        // Hide the subviews from VoiceOver and expose the whole view as a single element.
        backgroundImageView.isAccessibilityElement = false
        backgroundImageView2.isAccessibilityElement = false
        titleLabel.isAccessibilityElement = false
        subtitleLabel.isAccessibilityElement = false
        isAccessibilityElement = true
        accessibilityTraits = .button
    }

    func configure(backgroundImage: UIImage?, title: String, subtitle: String) {
        backgroundImageView.image = backgroundImage
        titleLabel.text = title
        subtitleLabel.text = subtitle
        accessibilityLabel = "Holiday Offer, \(title), \(subtitle)"
    }

    private func setupView() {
        backgroundImageView2.contentMode = .scaleAspectFill
        backgroundImageView2.clipsToBounds = true
        backgroundImageView2.translatesAutoresizingMaskIntoConstraints = false
        backgroundImageView2.image = UIImage(resource: .bannerfestival)
        addSubview(backgroundImageView2)

        backgroundImageView.contentMode = .scaleAspectFit
        backgroundImageView.clipsToBounds = true
        backgroundImageView.translatesAutoresizingMaskIntoConstraints = false
        addSubview(backgroundImageView)

        titleLabel.font = UIFont.systemFont(ofSize: 18, weight: .bold)
        titleLabel.textColor = .white
        titleLabel.translatesAutoresizingMaskIntoConstraints = false
        titleLabel.numberOfLines = 0
        addSubview(titleLabel)

        subtitleLabel.font = UIFont.systemFont(ofSize: 14, weight: .regular)
        subtitleLabel.textColor = .white.withAlphaComponent(0.8)
        subtitleLabel.translatesAutoresizingMaskIntoConstraints = false
        subtitleLabel.numberOfLines = 0
        addSubview(subtitleLabel)

        NSLayoutConstraint.activate([
            backgroundImageView2.leadingAnchor.constraint(equalTo: leadingAnchor),
            backgroundImageView2.trailingAnchor.constraint(equalTo: trailingAnchor),
            backgroundImageView2.heightAnchor.constraint(equalToConstant: 200),

            backgroundImageView.centerYAnchor.constraint(equalTo: centerYAnchor),
            backgroundImageView.topAnchor.constraint(equalTo: topAnchor),
            backgroundImageView.leadingAnchor.constraint(greaterThanOrEqualTo: leadingAnchor),
            backgroundImageView.trailingAnchor.constraint(equalTo: trailingAnchor),
            backgroundImageView.bottomAnchor.constraint(equalTo: bottomAnchor),

            titleLabel.leadingAnchor.constraint(equalTo: leadingAnchor, constant: 16),
            titleLabel.trailingAnchor.constraint(lessThanOrEqualTo: centerXAnchor),
            titleLabel.bottomAnchor.constraint(equalTo: centerYAnchor, constant: -4),

            subtitleLabel.leadingAnchor.constraint(equalTo: leadingAnchor, constant: 16),
            subtitleLabel.trailingAnchor.constraint(lessThanOrEqualTo: centerXAnchor),
            subtitleLabel.topAnchor.constraint(equalTo: centerYAnchor, constant: 4)
        ])
    }

    override func layoutSubviews() {
        super.layoutSubviews()
        backgroundImageView.layer.cornerRadius = layer.cornerRadius
    }
}
Allow the user to add their own tags to the default emoji tags.
For instance, this emoji, for me, is nonna: 🤌🏻. My efficiency would improve immensely if I could search for it as the “Nonna” emoji, rather than searching for nonna, remembering it doesn’t exist, trying the search for other things it might be called, realising I don’t know what it is, then having to scroll through all the hand emojis twice to find it.
🤌🏻🤞🏼👌
I have some questions about how VoiceOver handles focus when the screen updates.
When a new UIViewController is pushed onto a UINavigationController or presented modally, how does VoiceOver decide which element to focus on? Is there a way to control or customize this behavior?
In a UISplitViewController, when an item is selected in the primary view controller, the focus should shift to the relevant content in the secondary view controller. How can we ensure that VoiceOver correctly moves focus to the right element in the secondary panel?
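For illustration, here is a minimal sketch of the kind of control I am asking about, assuming that posting a .screenChanged notification with a target element is the intended mechanism (the view controller and headerLabel below are placeholders of my own):

import UIKit

final class DetailViewController: UIViewController {
    // Hypothetical element that should receive VoiceOver focus.
    @IBOutlet private var headerLabel: UILabel!

    override func viewDidAppear(_ animated: Bool) {
        super.viewDidAppear(animated)
        // After the pushed screen (or the secondary pane of a split view) appears,
        // ask VoiceOver to move focus to a specific element.
        UIAccessibility.post(notification: .screenChanged, argument: headerLabel)
    }
}

For updates within the same screen, posting .layoutChanged with an argument appears to behave similarly.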
The screen brightness fluctuates dynamically and irregularly (bright one moment, dim the next), and there is no manual control, so product colors are rendered inaccurately and the host's face is exposed incorrectly (overexposed or underexposed), which seriously degrades the live-stream presentation.
Requests:
Optimize the auto-exposure algorithm for live streaming to improve brightness stability in complex lighting environments.
Add a dedicated brightness-lock feature for a live-streaming mode that lets the user set and lock brightness manually, keeping image quality controllable during live streams.
I’m developing an ARKit application where I aim to attach procedurally generated audio to detected planes in the environment. While using a static audio file with SCNAudioSource and SCNAudioPlayer works as expected, integrating procedural audio via AVAudioSourceNode does not produce any sound, nor does it generate any error messages (see my Stack Overflow post).
Working Implementation with Static Audio File:
let audioPlayer = SCNAudioPlayer(source: audioSource)
node.addAudioPlayer(audioPlayer)
Attempted Implementation with Procedural Audio:
let audioNode = AVAudioSourceNode { _, _, frameCount, audioBufferList in
    // Audio generation code
    return noErr
}
let audioPlayer = SCNAudioPlayer(avAudioNode: audioNode)
node.addAudioPlayer(audioPlayer)
In this setup, the AVAudioSourceNode successfully generates audio when connected directly to an AVAudioEngine. However, when used with SCNAudioPlayer and attached to an SCNNode, it fails to produce sound. What doesn’t work is creating some procedural audio with an AVAudioNode, as documented here:
Apple docs
Additionally, I explored the WWDC18 AR game project, SwiftShot, which utilizes SCNAudioPlayer(avAudioNode:). After updating it for the latest Xcode, the graphics function correctly, but the audio does not play. I also noted that the Apple documentation mentions an audioPlayerWithAVAudioNode: method, stating:
Using this initializer is typically not necessary. Instead, call the audioPlayerWithAVAudioNode: method, which returns a cached audio player object if one for the specified AVAudioNode object has already been created and is available for use.
However, this method does not appear to be available in Swift. Any insights or guidance on this matter would be greatly appreciated.
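One diagnostic experiment I am considering (a sketch under my own assumptions, not a confirmed fix): attach the source node to the SCNView's own audio engine before wrapping it in SCNAudioPlayer, to verify that the node renders inside the engine SceneKit actually uses. Here sceneView and audioNode stand for objects that already exist elsewhere in the project:

import SceneKit
import AVFoundation

func attachProceduralAudio(to node: SCNNode, in sceneView: SCNView, using audioNode: AVAudioSourceNode) {
    // SCNSceneRenderer exposes the AVAudioEngine SceneKit uses for positional audio.
    let engine = sceneView.audioEngine
    engine.attach(audioNode)
    engine.connect(audioNode, to: engine.mainMixerNode, format: nil)

    let player = SCNAudioPlayer(avAudioNode: audioNode)
    node.addAudioPlayer(player)
}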
I am trying to enroll in the Apple Developer Program, but just before the payment page it asks me to log in again, and then the login takes forever and I cannot proceed. This is happening in a web browser.
Thanks
Milind
I need to understand the different layers that are there in the iPhone X and later OLED screens as I am designing a hardware attachment. They seem to be projecting letters and images from a different layer than the subpixel layer. Is this proprietary information, or is there a resource that explores them?
Topic: Accessibility & Inclusion
SubTopic: General
I'm encountering an issue related to BLE device discovery on iOS.
I have a BLE peripheral device that I initially connected to using an iOS device. After this connection, the BLE device's advertised name was programmatically changed by the peripheral. Now, when I try to scan for this device using other iOS devices, it does not appear in the scan results in most apps — including nRF Connect and our own custom BLE app that uses CoreBluetooth.
A few observations:
The device is definitely powered on and advertising (confirmed via Android).
The name change is reflected correctly on Android and on the iOS device that originally connected to it.
Other iOS devices no longer see the device in their scan list.
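For reference, here is a minimal sketch of the scan logger approach we are using to investigate this, assuming no service filter (scanning for everything); it logs both the possibly cached peripheral.name and the local name taken from the advertisement data:

import CoreBluetooth

final class ScanLogger: NSObject, CBCentralManagerDelegate {
    private var central: CBCentralManager!

    override init() {
        super.init()
        central = CBCentralManager(delegate: self, queue: nil)
    }

    func centralManagerDidUpdateState(_ central: CBCentralManager) {
        guard central.state == .poweredOn else { return }
        central.scanForPeripherals(withServices: nil, options: nil)
    }

    func centralManager(_ central: CBCentralManager,
                        didDiscover peripheral: CBPeripheral,
                        advertisementData: [String: Any],
                        rssi RSSI: NSNumber) {
        // peripheral.name can come from iOS's cache of the old GAP name,
        // while the advertised local name reflects what is currently broadcast.
        let advertisedName = advertisementData[CBAdvertisementDataLocalNameKey] as? String
        print("discovered: cached name = \(peripheral.name ?? "nil"), advertised name = \(advertisedName ?? "nil")")
    }
}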
Is the accessibility feature for recording voice commands available on the Apple Vision Pro? It does not start on my device.
The Apple Vision Pro is on 26.1.
Regular single voice commands work on the Apple Vision Pro.
Recording commands worked on other devices. (iPad and iPhone)
Hi everyone,
I’ve been analyzing the current state of Sign Language accessibility tools, and I noticed a significant gap in learning tools: we lack real-time feedback for students (e.g., "Is my hand position correct?").
Most current solutions rely on 2D video processing, which struggles with depth perception and occlusion (hand-over-hand or hand-over-face gestures), which are critical in Sign Language grammar.
I'd like to propose/discuss an architecture leveraging the current LiDAR + Neural Engine capabilities found in iPhone devices to solve this.
The Concept: Skeleton-based Normalization
Instead of training ML models on raw video frames (which introduces noise from lighting, skin tone, and clothing), we could use ARKit's Body Tracking to abstract the input.
Capture: Use ARKit/LiDAR to track the user's upper body and hand joints in 3D space.
Data Normalization: Extract only the vector coordinates (X, Y, Z of joints). This creates a "clean" dataset, effectively normalizing the user regardless of physical appearance.
Comparison: Feed these vectors into a CoreML model trained on "Reference Skeletons" (recorded by native signers).
Feedback Loop: The app calculates the geometric distance between the user's pose and the reference pose to provide specific correction (e.g., "Raise your elbow 10 degrees").
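As a rough illustration of the comparison and feedback steps, here is a sketch under simple assumptions: only a handful of upper-body joints are compared, and the reference pose is stored as plain 3D points in the body anchor's coordinate space (the reference store itself is hypothetical):

import ARKit
import simd

func poseDistance(between anchor: ARBodyAnchor,
                  and referenceJoints: [ARSkeleton.JointName: SIMD3<Float>]) -> Float {
    var total: Float = 0
    for (jointName, referencePosition) in referenceJoints {
        guard let transform = anchor.skeleton.modelTransform(for: jointName) else { continue }
        // Joint position in the body anchor's coordinate space.
        let position = SIMD3<Float>(transform.columns.3.x,
                                    transform.columns.3.y,
                                    transform.columns.3.z)
        total += simd_distance(position, referencePosition)
    }
    return total
}

An ARSession running ARBodyTrackingConfiguration would supply the ARBodyAnchor updates this function consumes; aligning whole gesture sequences over time would still be needed on top of this per-frame distance.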
Why this approach?
Solves Occlusion: LiDAR handles depth much better than standard RGB cameras when hands cross the body.
Privacy: We are processing coordinates, not video streams.
Efficiency: Comparing vector sequences is computationally cheaper than video analysis, preserving battery life.
Has anyone experimented with using ARKit Body Anchors specifically for comparing complex gesture sequences against a stored "correct" database? I believe this "Skeleton First" approach is the key to scalable Sign Language education apps.
Looking forward to hearing your thoughts.
Hi. I wanted to open a .sheet with .presentationDetents([.height(320)]) containing a couple of pickers and buttons.
The problem is that I can't Tab with the keyboard between the elements inside the sheet if I use .presentationDetents([.height(320)]) on the root VStack inside the .sheet.
It works perfectly if I remove it, but then the sheet becomes fullscreen, which I don't want.
Is this a bug or am I using it incorrectly?
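For reference, a minimal repro sketch of what I mean (view and state names are placeholders):

import SwiftUI

struct DetentFocusDemo: View {
    @State private var showSheet = false
    @State private var choice = 0

    var body: some View {
        Button("Show sheet") { showSheet = true }
            .sheet(isPresented: $showSheet) {
                VStack(spacing: 16) {
                    Picker("Choice", selection: $choice) {
                        Text("One").tag(0)
                        Text("Two").tag(1)
                    }
                    Button("Confirm") { showSheet = false }
                    Button("Cancel") { showSheet = false }
                }
                .padding()
                // Removing this line restores keyboard Tab focus, but makes the sheet full height.
                .presentationDetents([.height(320)])
            }
    }
}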
Topic: Accessibility & Inclusion
SubTopic: General
When my USB interface is recording, synchronization does not work well. I found that in_io_buffer_frame_size is the same for every I/O cycle and is not synchronized with UpdateCurrentZeroTimestamp. So does the AudioDriverKit read operation not work the same way as write? What is the correct way to synchronize with the USB input data?
If I only play audio, using UpdateCurrentZeroTimestamp, it works fine.
Thanks!
Hello,
I'm currently unable to access App Store Connect. When I try to open https://appstoreconnect.apple.com, I receive the following error message:
“appstoreconnect.apple.com is currently unable to handle this request.”
I’ve tried the following steps, but the issue persists:
Cleared browser cache and cookies
Tried different browsers (Safari, Chrome)
Attempted from multiple devices and networks
Is this a known issue or is there any workaround available?
Would appreciate any help or update on the current status.
Thank you,
Topic: Accessibility & Inclusion
SubTopic: General
Hello,
I had submitted a question to clarify which components have accessibility APIs that trigger haptics for VoiceOver users https://developer.apple.com/forums/thread/773182.
The question stems from a more direct question about specific components: do tab lists and disclosures natively include haptics, screen reader hints, or other states or properties that indicate to screen reader users where the component begins or ends?
In some web experiences there is screen reader hint text stating "end of..." or "entering" as a way to define the boundaries of these inline dialogs.
I asked about haptics in the prior thread because I do not recall a natively implemented version of this, apart from some haptic cues that I have not experienced consistently, so I am not sure whether those are an intended native Swift implementation or something custom.
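To make the web pattern concrete in Swift terms, here is a sketch of the kind of custom (non-native) hint I am describing; the wording below is my own rather than anything the system provides:

import SwiftUI

struct DisclosureBoundaryDemo: View {
    @State private var isExpanded = false

    var body: some View {
        DisclosureGroup("Shipping details", isExpanded: $isExpanded) {
            Text("Arrives in 3 to 5 business days.")
                // Custom boundary wording, mirroring the "end of..." web pattern.
                .accessibilityHint("End of shipping details")
        }
        .accessibilityHint(isExpanded ? "Entering shipping details" : "Collapsed")
    }
}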
Topic: Accessibility & Inclusion
SubTopic: General
Tags: iOS, Accessibility, Sound and Haptics, Core Haptics