Since updating to iOS 26.0 (and confirmed on 26.1), ARBodyTrackingConfiguration no longer detects a valid ARBodyAnchor on devices with LiDAR (e.g., iPhone 15 Pro, iPhone 17 Pro Max).
This issue reproduces in custom projects and Apple’s official sample “Capturing Body Motion in 3D”.
The AR session runs normally, but the delegate call:
func session(_ session: ARSession, didUpdate anchors: [ARAnchor])
never yields an ARBodyAnchor with valid joint transforms.
All joints return nil when calling:
body.skeleton.modelTransform(for: jointName)
resulting in 0 valid joints per frame.
Environment
• Device: iPhone 17 Pro Max (LiDAR)
• iOS: 26.0 / 26.1
• Xcode: 16.0 (stable)
• Framework: ARKit + RealityKit
• Configuration used:
let config = ARBodyTrackingConfiguration()
config.worldAlignment = .gravityAndHeading
config.isAutoFocusEnabled = true
config.environmentTexturing = .none
session.run(config)
Also tested: with and without frameSemantics = .bodyDetection
Expected Behavior
ARBodyAnchor should be detected and body.skeleton should contain ~89 valid joints with continuous updates.
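A minimal sketch of the delegate check described above (assuming this object is the session delegate of a running ARBodyTrackingConfiguration session):
import ARKit

// Minimal repro sketch: count joints with a non-nil model transform per body anchor update.
func session(_ session: ARSession, didUpdate anchors: [ARAnchor]) {
    for case let body as ARBodyAnchor in anchors {
        let jointNames = ARSkeletonDefinition.defaultBody3D.jointNames
        let validJoints = jointNames.filter { name in
            body.skeleton.modelTransform(for: ARSkeleton.JointName(rawValue: name)) != nil
        }
        print("Valid joints: \(validJoints.count) / \(jointNames.count)") // 0 per frame on iOS 26.x, per this report
    }
}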
I am building a 360 photo viewer in visionOS 26. It lets the user choose a 2:1 JPEG and then renders it onto a sphere mesh entity. I use: TextureResource(contentsOf: url, options: options).
I noticed two different behaviors depending on the mipmaps option.
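For reference, a minimal sketch of how the texture is loaded with each option (the helper name is illustrative; mipmapsMode is the only parameter being varied):
import RealityKit

// Illustrative helper: load an equirectangular 2:1 image as a color texture
// with a given mipmaps mode (.none vs .allocateAndGenerateAll).
func loadPanoramaTexture(from url: URL,
                         mode: TextureResource.MipmapsMode) async throws -> TextureResource {
    var options = TextureResource.CreateOptions(semantic: .color)
    options.mipmapsMode = mode
    return try await TextureResource(contentsOf: url, options: options)
}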
When setting "mipmapsMode: .none":
• The graphic quality within the "gaze area" looks sharp and clear
• The two poles (top and bottom) are perfectly rendered
• Massive shimmer around the "gaze area"
When setting "mipmapsMode: .allocateAndGenerateAll":
• The graphic looks slightly blurrier than in ".none" within the "gaze area"
• The two poles are very blurry and the texture is hard to recognize
• Much less shimmer around the "gaze area"
My question would be: Is there a way to have the perfect graphic quality in ".none" without the massive shimmer?
Thank you!
Screenshots:
mipmapsMode: .none
mipmapsMode: .allocateAndGenerateAll
I have an open Feedback conversation with Apple on this topic, but I am curious if others have run into this, or want to try out my sample code in their set up.
There are two APIs for reading controller buttons, axes, and D-pads: GCPhysicalInputProfile and GCControllerLiveInput. There are inconsistencies in behaviour between the two of them. Apple recommends we use GCControllerLiveInput; however, some capabilities of these controllers are only accessible through GCPhysicalInputProfile, as I'll discuss below.
• PSVR2 R2/L2 buttons, a.k.a. triggers, have force-input analogue values. These can only be accessed through GCPhysicalInputProfile.
• PSVR2 thumbstick direction values are read through "axes" on GCPhysicalInputProfile, but only through "dpads" on GCControllerLiveInput.
• On both GCPhysicalInputProfile and GCControllerLiveInput, pressed events for all buttons fire properly using the generic aliases (Trigger, Grip, Menu, Right Thumbstick, Left Thumbstick, Right Button A & B (Circle & Cross), Left Button A & B (Triangle & Square)). Apple reserves the system button as the equivalent of a home button for the OS.
• On GCPhysicalInputProfile, touch events fire when the button is also pressed, but not for touches alone.
• On GCControllerLiveInput, touch events only work for the following buttons: Left Thumbstick, Right Thumbstick, Right Button A (Circle), and Right Button B (Cross). However, the Right Button B touch event isn't labelled correctly; it fires as the Right Button A event.
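For anyone who wants to reproduce the behaviour quickly, here is a small polling sketch against GCPhysicalInputProfile (separate from the ALVR and TrackingAccessories code linked below; element names vary by controller):
import GameController

// Polling sketch: dump every pressed/touched button and every active axis
// from the physical input profile of a connected controller.
func pollController(_ controller: GCController) {
    let profile = controller.physicalInputProfile
    for (name, button) in profile.buttons where button.isPressed || button.isTouched {
        print("\(name): value=\(button.value) pressed=\(button.isPressed) touched=\(button.isTouched)")
    }
    // On this controller, thumbstick directions show up under `axes` here.
    for (name, axis) in profile.axes where abs(axis.value) > 0.01 {
        print("\(name): \(axis.value)")
    }
}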
I observed this inside ALVR which uses a polling based approach to event processing:
https://github.com/alvr-org/alvr-visionos/blob/17b5968f9d894944b53e97134b39dfce0993302a/ALVRClient/WorldTracker.swift#L301
To simplify to see this on a very simple app, I used the Apple example TrackingAccessories application:
https://developer.apple.com/documentation/ARKit/tracking-accessories-in-volumetric-windows
I’ve attached the code that replaces the AccessoryTrackingModel class. I added code that prints out what is touched/pressed, see the trackAllConnectedSpatialControllers method:
https://github.com/svrc/TrackingAccessories
Hi,
we've developed an app for Vision Pro that utilises the GroupActivities SDK to provide shared experiences for our users.
Remote Participation works great, but we can't get nearby sharing to work.
The behaviour we're observing:
• User 1 engages the share sheet from the Volume; the second Vision Pro is visible.
• User 1 starts nearby sharing.
• Session initialisation runs for approx. 30 seconds, then fails.
• Sometimes, the nearby participant doesn't show up at all after the initialisation has failed once.
As stated in the Configure your visionOS app for sharing with people nearby article, we didn't make any changes to our implementation to support nearby sharing.
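For context, the standard activation path we rely on looks roughly like this (the activity type and names here are illustrative, not our actual code):
import GroupActivities

// Illustrative GroupActivity definition.
struct SharedVolumeActivity: GroupActivity {
    var metadata: GroupActivityMetadata {
        var meta = GroupActivityMetadata()
        meta.title = "Shared Volume"
        meta.type = .generic
        return meta
    }
}

// Activation path that brings up the share sheet (and, in theory, the nearby sharing flow).
func startSharing() async {
    let activity = SharedVolumeActivity()
    switch await activity.prepareForActivation() {
    case .activationPreferred:
        _ = try? await activity.activate()
    default:
        break
    }
}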
Any help would be greatly appreciated.
Kind regards,
David
Hi, I have a hand model in FBX that I'm exporting to USD from Blender. I get a skinned mesh, and while I can track the whole hand, how do I track each joint, assign it, and animate the skinned mesh itself? All my attempts suggest this is not possible in RealityKit as of now. True?
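For reference, ModelEntity does expose a skeletal rig via jointNames and jointTransforms; a rough sketch of inspecting it is below (whether this is enough to drive a skinned USD hand from hand tracking is exactly the open question):
import RealityKit

// Sketch: inspect the joints of a skinned ModelEntity and nudge one transform.
func inspectAndNudgeJoints(of hand: ModelEntity) {
    for (index, name) in hand.jointNames.enumerated() {
        print(index, name)
    }
    if !hand.jointTransforms.isEmpty {
        var joint = hand.jointTransforms[0]
        joint.rotation = simd_quatf(angle: .pi / 8, axis: [0, 0, 1]) // purely illustrative
        hand.jointTransforms[0] = joint
    }
}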
Hi All,
We're a studio building an app, and as part of a scene we have a 3D asset with a smoke particle emitter and a curved mesh that plays video. I notice that when the video plays alone, or the particle effect runs alone, the scene works fine, but the frame rate drops drastically when both are turned on.
How do I solve this? It's an important storytelling feature.
Hello, I've pre-ordered the Logitech Muse with hopes of developing with it, but have yet to find any documentation about the capabilities it will have or any APIs that will be available to take advantage of the Muse. Is anyone aware of what might become available?
Thank you in advance.
When scanning multiple rooms (10+) in a single structure using ARWorldMap for coordinate space consistency, RoomCaptureSession throws CaptureError.exceedSceneSizeLimit. The instructions here (https://developer.apple.com/documentation/roomplan/scanning-the-rooms-of-a-single-structure) describe exactly what I am doing: keeping the underlying ARSession alive (by calling captureSession.stop(pause: false)) and saving the results before a user moves to the next room. Scanning 11 or so rooms will cause the user to hit the exceedSceneSizeLimit error. The ARWorldMap is about 58 MB and is always around this size when hitting this issue. No anchors are present and all the data seems to be from tracking data.
On iPad devices (where I do not see this issue) the ARWorldMap grows in size at a significantly slower rate.
I save the ARWorldMap after each room is scanned and confirmed by the user. If I use the ARWorldMap to initialize the ARSession (as described in the docs), the session will immediately error with "exceedSceneSizeLimit" once captureSession.run() is executed. Occasionally it will allow me/the user to scan again, but it either breaks mid-scan or fails with the same error.
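For clarity, here is a hedged sketch of the flow just described (class and method names are mine, not the production code):
import ARKit
import RoomPlan

// Sketch of the multi-room flow: keep the ARSession alive between rooms,
// save the world map after each confirmed room, and relocalize against it later.
final class MultiRoomScanner {
    let captureSession = RoomCaptureSession()

    func startRoom() {
        captureSession.run(configuration: RoomCaptureSession.Configuration())
    }

    // End the current room but keep the underlying ARSession (and its map) alive.
    func finishCurrentRoom() {
        captureSession.stop(pauseARSession: false)
    }

    // Save the accumulated world map after the user confirms a room.
    func saveWorldMap(completion: @escaping (ARWorldMap?) -> Void) {
        captureSession.arSession.getCurrentWorldMap { map, error in
            if let error { print("World map error: \(error)") }
            completion(map)
        }
    }

    // Later, relocalize a fresh session against a previously saved map.
    func resumeSession(with worldMap: ARWorldMap) {
        let config = ARWorldTrackingConfiguration()
        config.initialWorldMap = worldMap
        captureSession.arSession.run(config, options: [.resetTracking, .removeExistingAnchors])
    }
}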
This has been working fine for the past 2 years and users have been able to scan dozens of rooms without issue. It seems only lately that it has been a problem.
I would expect the ARWorldMap to be allowed for much bigger sizes. At this point I can just about scan more area of my house with a single scan than I can when I use different captureSessions.
A few observations:
• This happens on my iPhone 15 Pro Max and my iPhone 17 Pro, but not my iPad M4 (maybe memory related?). It is possible that scanning many more rooms would trigger it on the iPad too.
• I have tried resetting the ARConfiguration on the underlying ARSession to reset some of its state, but this doesn't work.
• I have tried creating a new ARWorldMap and moving the origin to the older map to clear out tracking data. This almost works, but causes a mess of issues as soon as the user moves, due to the unshared coordinate space.
• I believe there are three active issues regarding this: FB14454922, FB15035788, FB20642944
Could we get an update for this issue? It is a production issue and severely limits my user experience in my production application.
I’m using ARKit + SceneKit (Swift) with ARWorldTrackingConfiguration and detectionImages to place a 3D object (USDZ via SCNScene(named:)) when a reference image is detected. While the image is tracked, the object stays correctly aligned.
Goal: When the tracked image is no longer visible, I want the placed node to remain visible and fixed at its last known pose (no drifting) as I move the camera.
What works so far: detect image → add node → track updates. When the image disappears → keep showing the node at its last pose.
Problem: After the image is no longer tracked, the node drifts as I move the device/camera. It looks like it’s still influenced by the (now unreliable) image anchor or accumulating small world-tracking errors.
Question: What’s the correct way in ARKit to “freeze” the node at its last known world transform once ARImageAnchor stops tracking, so it doesn’t drift?
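One commonly suggested approach, sketched below under the assumption that sceneView is your ARSCNView and the placed node is the anchor node's first child: when isTracked flips to false, re-parent the node to the scene root at its last world transform so it stops following the image anchor.
import ARKit
import SceneKit

// Sketch: freeze the placed node at its last known world pose once the image anchor
// stops tracking, by detaching it from the anchor node and re-parenting it to the root.
func renderer(_ renderer: SCNSceneRenderer, didUpdate node: SCNNode, for anchor: ARAnchor) {
    guard let imageAnchor = anchor as? ARImageAnchor,
          let placedNode = node.childNodes.first else { return }

    if !imageAnchor.isTracked, placedNode.parent === node {
        let frozenTransform = placedNode.worldTransform // capture before detaching
        placedNode.removeFromParentNode()
        sceneView.scene.rootNode.addChildNode(placedNode)
        placedNode.transform = frozenTransform // parent is the root, so world == local
    }
}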
I have a question about Apple’s preinstalled visionOS app “Encounter Dinosaurs.”
In this app, the dinosaurs are displayed over the real-world background, but the PhysicallyBasedMaterial (PBM) in RealityKit doesn’t appear to respond to the actual brightness of the environment.
Even when I change the lighting in the room, the dinosaurs’ brightness and shading remain almost the same.
If this behavior is intentional — for example, if the app disables real-world lighting influence or uses a fixed lighting setup — could someone explain how and why it’s implemented that way?
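If it is a fixed lighting setup, one way an app can achieve that in RealityKit is to attach its own image-based light so materials stop responding to the real environment. A rough sketch follows (purely illustrative; we don't know whether Encounter Dinosaurs actually does this, and environmentName refers to an assumed EnvironmentResource in the app bundle):
import RealityKit

// Sketch: pin an entity hierarchy to a fixed image-based light so its
// PhysicallyBasedMaterial shading no longer follows the room lighting.
func applyFixedLighting(to entity: Entity, environmentName: String) async throws {
    let environment = try await EnvironmentResource(named: environmentName)
    var ibl = ImageBasedLightComponent(source: .single(environment), intensityExponent: 1.0)
    ibl.inheritsRotation = true
    entity.components.set(ibl)
    entity.components.set(ImageBasedLightReceiverComponent(imageBasedLight: entity))
}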
I'm trying to add a feature to my app to allow a user to import items from other apps, like Safari, via the share sheet.
I've done this many times on iOS/iPadOS easily with a Share Extension. From what I can tell, Xcode tells me share extensions are not available on visionOS - though my experience on device tells me differently (It seems Reminders, Notes & more implement them somehow.) I was finally able to get it working on device only...but I can now no longer test in the simulator, and have not found a way to distribute this app.
When attempting to run on the simulator, I get this issue:
Please try again later. Appex bundle at /Users/jason/Library/Developer/CoreSimulator/Devices/09A70160-4F4F-4F5E-B679-F6F7D876D7EF/data/Library/Caches/com.apple.mobile.installd.staging/temp.6OAEZp/extracted/LaunchBar.app/PlugIns/LaunchBarShareExtension.appex with id co.swiftfox.LaunchBar.ShareExtension specifies a value (com.apple.share-services) for the NSExtensionPointIdentifier key in the NSExtension dictionary in its Info.plist that does not correspond to a known extension point.
When trying to archive an upload to test flight, I get this similar error:
Invalid Info.plist value. The value for the key 'DTPlatformName' in bundle LaunchBar.app/PlugIns/LaunchBarShareExtension.appex is invalid. (ID: 207610c7-b7e1-48be-959b-22a43cd32d16)
The app is for visionOS only - which I'm thinking might be the problem? The share extension is "Designed For iPhone" and requires me to include iPhone as a run destination. In the worst case I can build an iPhone UI for the app but I'd rather not, as it is very specific to visionOS.
Has anyone successfully launched a share extension on a visionOS only app? I have an iPad app with a share extension that shows up fine on visionOS, but the issue seems to be specifically with visionOS only apps.
Hi, I'm developing an app for Vision Pro using Xcode, and after installing the latest update, things that worked in my app suddenly stopped working.
In my app flow I'm tapping spheres to get their positions; for some reason I get an offset between where I tap and where a marker at that position shows up.
Here's the part of the code that does that, and the part responsible for the alignment that happens afterwards:
func loadMainScene(at position: SIMD3<Float>) async {
    guard let content = self.content else { return }
    do {
        let rootEntity = try await Entity(named: "surgery 16.09", in: realityKitContentBundle)
        rootEntity.scale = SIMD3<Float>(repeating: 0.5)
        rootEntity.generateCollisionShapes(recursive: true)
        self.modelRootEntity = rootEntity
        let bounds = rootEntity.visualBounds(relativeTo: nil)
        print("📏 Model bounds: center=\(bounds.center), extents=\(bounds.extents)")
        let pivotEntity = Entity()
        pivotEntity.addChild(rootEntity)
        self.pivotEntity = pivotEntity
        let modelAnchor = AnchorEntity(world: [1, 1.3, -0.8])
        modelAnchor.addChild(pivotEntity)
        content.add(modelAnchor)
        updateModelOpacity(0.5)
        self.modelAnchor = modelAnchor
        rootEntity.visit { entity in
            print("👀 Entity in model: \(entity.name)")
            if entity.name.lowercased().hasPrefix("focus") {
                entity.generateCollisionShapes(recursive: true)
                entity.components.set(InputTargetComponent())
                print("🎯 Made tappable: \(entity.name)")
            }
        }
        print("✅ Model loaded with collisions")
        guard let sphere = placementSphere else { return }
        let sphereWorldXform = sphere.transformMatrix(relativeTo: nil)
        var newXform = sphereWorldXform
        newXform.columns.3.y += 0.1 // move up by 10 cm
        let gridAnchor = AnchorEntity(world: newXform)
        self.gridAnchor = gridAnchor
        content.add(gridAnchor)
        let baseScene = try await Entity(named: "Scene", in: realityKitContentBundle)
        let gridSizeX = 18
        let gridSizeY = 10
        let gridSizeZ = 10
        let spacing: Float = 0.05
        let startX: Float = -Float(gridSizeX - 1) * spacing * 0.5 + 0.3
        let startY: Float = -Float(gridSizeY - 1) * spacing * 0.5 - 0.1
        let startZ: Float = -Float(gridSizeZ - 1) * spacing * 0.5
        for i in 0..<gridSizeX {
            for j in 0..<gridSizeY {
                for k in 0..<gridSizeZ {
                    if j < 2 || j > gridSizeY - 5 { continue } // remove 2 bottom, 4 top
                    let cell = baseScene.clone(recursive: true)
                    cell.name = "Sphere"
                    cell.scale = .one * 0.02
                    cell.position = [
                        startX + Float(i) * spacing,
                        startY + Float(j) * spacing,
                        startZ + Float(k) * spacing
                    ]
                    cell.generateCollisionShapes(recursive: true)
                    gridCells.append(cell)
                    gridAnchor.addChild(cell)
                }
            }
        }
        content.add(gridAnchor)
        print("✅ Grid added")
    } catch {
        print("❌ Failed to load: \(error)")
    }
}
private func handleModelOrGridTap(_ tappedEntity: Entity) {
    guard let modelRootEntity = modelRootEntity else { return }
    let localPosition = tappedEntity.position(relativeTo: modelRootEntity)
    let worldPosition = tappedEntity.position(relativeTo: nil)
    switch tapStep {
    case 0:
        modelPointA = localPosition
        modelAnchor?.addChild(createMarker(at: worldPosition, color: [1, 0, 0]))
        print("📍 Model point A: \(localPosition)")
        tapStep += 1
    case 1:
        modelPointB = localPosition
        modelAnchor?.addChild(createMarker(at: worldPosition, color: [1, 0.5, 0]))
        print("📍 Model point B: \(localPosition)")
        tapStep += 1
    case 2:
        targetPointA = worldPosition
        targetMarkerA = createMarker(at: worldPosition, color: [0, 1, 0])
        modelAnchor?.addChild(targetMarkerA!)
        print("✅ Target point A: \(worldPosition)")
        tapStep += 1
    case 3:
        targetPointB = worldPosition
        targetMarkerB = createMarker(at: worldPosition, color: [0, 0, 1])
        modelAnchor?.addChild(targetMarkerB!)
        print("✅ Target point B: \(worldPosition)")
        alignmentReady = true
        tapStep += 1
    default:
        print("⚠️ Unexpected tap on model helper at step \(tapStep)")
    }
}
func alignModel2Points() {
    guard let modelPointA = modelPointA,
          let modelPointB = modelPointB,
          let targetPointA = targetPointA,
          let targetPointB = targetPointB,
          let modelRootEntity = modelRootEntity,
          let pivotEntity = pivotEntity,
          let modelAnchor = modelAnchor else {
        print("❌ Missing data for alignment")
        return
    }
    let modelVec = modelPointB - modelPointA
    let targetVec = targetPointB - targetPointA
    let modelLength = length(modelVec)
    let targetLength = length(targetVec)
    let scale = targetLength / modelLength
    let modelDir = normalize(modelVec)
    let targetDir = normalize(targetVec)
    var axis = cross(modelDir, targetDir)
    let axisLength = length(axis)
    var rotation = simd_quatf()
    if axisLength < 1e-6 {
        if dot(modelDir, targetDir) > 0 {
            rotation = simd_quatf(angle: 0, axis: [0, 1, 0])
        } else {
            let up: SIMD3<Float> = [0, 1, 0]
            axis = cross(modelDir, up)
            if length(axis) < 1e-6 {
                axis = cross(modelDir, [1, 0, 0])
            }
            rotation = simd_quatf(angle: .pi, axis: normalize(axis))
        }
    } else {
        let dotProduct = dot(modelDir, targetDir)
        let clampedDot = max(-1.0, min(dotProduct, 1.0))
        let angle = acos(clampedDot)
        rotation = simd_quatf(angle: angle, axis: normalize(axis))
    }
    modelRootEntity.scale = .one * scale
    modelRootEntity.orientation = rotation
    let transformedPointA = rotation.act(modelPointA * scale)
    pivotEntity.position = -transformedPointA
    modelAnchor.position = targetPointA
    alignedModelPosition = modelAnchor.position
    print("✅ Aligned with scale \(scale), rotation \(rotation)")
}
Hello,
I am in the process of implementing SharePlay support in my visionOS app. Everything runs fine when I test locally, but when my app is distributed via TestFlight, calling try await activity.activate() shows the SharePlay dialog as usual, but then when I start a new FaceTime call, my ImmersiveSpace gets dismissed.
This is only happening when the app is distributed via TestFlight, when I run it locally the ImmersiveSpace stays active as expected.
Looking at the console on my Mac I found this log:
Invalid initial client settings class: UIApplicationSceneClientSettings; expected class: MRUISharedApplicationSceneClientSettings; bundle ID: com.apple.facetime; scene ID: com.apple.facetime:SFBSystemService-DDA8C751-C0C4-487E-AD85-59EF4E6C6050
Does anyone have an idea how I can fix this? It's driving me nuts and I wasted over a day looking for a workaround but so far been unsuccessful.
Thanks!
Hi everyone,
We’re developing a Unity project for Apple Vision Pro that connects PSVR2 Sense controllers for advanced interaction and input.
We’ve encountered a major limitation:
when the controller is not held close to the designated hand (e.g., resting on a table or held by the non-designated hand), the Sense controller enters a low-power or reduced-update mode. This results in noticeably reduced tracking update frequency and responsiveness until the controller is held again.
For certain use cases, this behavior is undesirable. In our case, it prevents continuous real-time tracking of the controller even when it’s stationary or being tracked externally.
Request:
Please consider exposing an API flag or developer option in ARKit to disable, or optionally delay, the low-power mode when the app requires full-rate updates regardless of proximity or hand-pose detection.
If I trigger the Apple rating modal in an Immersive Space, it appears on the ground at (0,0,0). I need it to appear in front of the user, the way the push notification permission prompt and other permission requests do.
Hello,
If you add a ManipulationComponent to a RealityKit entity and then continue to add instructions, sooner or later you will encounter a crash with the following error message:
Attempting to move entity “%s” (%p) under “%s” (%p), but the new parent entity is currently being removed. Changing the parent/child entities of an entity in an event handler while that entity is already being reassigned is not supported.
CoreSimulator 1048 – Device: Apple Vision Pro 4K (B87DD32A-E862-4791-8B71-92E50CE6EC06) – Runtime: visionOS 26.0 (23M336) – Device Type: Apple Vision Pro
The problem occurs precisely with this code:
ManipulationComponent.configureEntity(object)
I adapted Apple's ObjectPlacementExample and made the changes available via GitHub.
The desired behavior is that I can add entities to ManipulationComponent and RealityKit then runs stably and does not crash randomly.
GitHub Repo
Thanks
Andre
Hi, I am trying to load files from the Apple Vision Pro's storage into a Unity app (using the Apple visionOS XR Plugin, not the PolySpatial package). So far, I've tried using UnitySimpleFileBrowser and UnityStandaloneFileBrowser (neither is made for the Vision Pro, and they don't work there), and then implemented my own naive file browser that at least allows me to view directories (those I can see from the App Sandbox). This is of course very limited:
Gray folders can't be accessed, and the only three available ones don't contain anything where a user would put files through the "Files" app.
I know that an app can request access to these "Files & Folders":
So my question is: Is there a way to request this access for a Unity-built app at the moment? If yes, what do I need to do? I've looked into the generated Xcode project's "Capabilities", but did not find anything related to file access. Any help is appreciated!
I use ARKit for motion tracking. I get the skeleton joint coordinates and use them for animation. I didn't make any changes to the code, but I updated the iOS version from 18 to 26, and modelTransform now always returns nil.
https://developer.apple.com/documentation/arkit/arskeleton3d/modeltransform(for:)
For example
bodyAnchor.skeleton.modelTransform(for: .init(rawValue: "head_joint"))
bodyAnchor is ARBodyAnchor.
I see the default skeleton on the screen, but now I can't get the coordinates out of it.
I'm using an example from Apple's WWDC presentation.
https://developer.apple.com/documentation/arkit/capturing-body-motion-in-3d
Are there any changes in the API, or is this just a bug?
After writing the code, when debugging on the Vision Pro, the program blocks when running from Xcode to the Vision Pro, and it takes a long time for execution information to appear in the Xcode console.
We’re trying to build a custom player for Unity. For this, we’re using AVPlayer with AVPlayerItemVideoOutput to get textures. However, we noticed that playback is not smooth and the stream often freezes.
For testing, we used this 8K video:
https://deovr.com/nwfnq1
The video was played using the following code:
@objc public func playVideo(urlString: String)
{
    guard let url = URL(string: urlString) else { return }
    let pItem = AVPlayerItem(url: url)
    playerItem = pItem
    pItem.preferredForwardBufferDuration = 10.0

    let pixelBufferAttributes: [String: Any] = [
        kCVPixelBufferPixelFormatTypeKey as String: kCVPixelFormatType_420YpCbCr8BiPlanarVideoRange,
        kCVPixelBufferMetalCompatibilityKey as String: true,
    ]
    let output = AVPlayerItemVideoOutput(pixelBufferAttributes: pixelBufferAttributes)
    pItem.add(output)

    playerItemObserver = pItem.observe(\.status)
    {
        [weak self] pItem, _ in
        guard pItem.status == .readyToPlay else { return }
        self?.playerItemObserver = nil
        self?.player.play()
    }

    player = AVPlayer(playerItem: pItem)
    player.currentItem?.preferredPeakBitRate = 35_000_000
}
When AVPlayerItemVideoOutput is attached, the video stutters and the log looks like this:
🟢 Playback likely to keep up
🟡 Buffer ahead: 4.08s | buffer: 4.08s
🟡 Buffer ahead: 4.08s | buffer: 4.08s
🟡 Buffer ahead: -0.07s | buffer: 0.00s
🟡 Buffer ahead: 2.94s | buffer: 3.49s
🟡 Buffer ahead: 2.50s | buffer: 4.06s
🟡 Buffer ahead: 1.74s | buffer: 4.30s
🟡 Buffer ahead: 0.74s | buffer: 4.30s
🟠 Playback may stall
🛑 Buffer empty
🟡 Buffer ahead: 0.09s | buffer: 4.30s
🟠 Playback may stall
🟠 Playback may stall
🛑 Buffer empty
🟠 Playback may stall
🟣 Buffer full
🟡 Buffer ahead: 1.41s | buffer: 1.43s
🟡 Buffer ahead: 1.41s | buffer: 1.43s
🟡 Buffer ahead: 1.07s | buffer: 1.43s
🟣 Buffer full
🟡 Buffer ahead: 0.47s | buffer: 1.65s
🟠 Playback may stall
🛑 Buffer empty
🟡 Buffer ahead: 0.10s | buffer: 1.65s
🟠 Playback may stall
🟡 Buffer ahead: 1.99s | buffer: 2.03s
🟡 Buffer ahead: 1.99s | buffer: 2.03s
🟣 Buffer full
🟣 Buffer full
🟡 Buffer ahead: 1.41s | buffer: 2.00s
🟡 Buffer ahead: 0.68s | buffer: 2.27s
🟡 Buffer ahead: 0.09s | buffer: 2.27s
🟠 Playback may stall
🛑 Buffer empty
🟠 Playback may stall
When we remove AVPlayerItemVideoOutput from the player, the video plays smoothly, and the output looks like this:
🟢 Playback likely to keep up
🟡 Buffer ahead: 1.94s | buffer: 1.94s
🟡 Buffer ahead: 1.94s | buffer: 1.94s
🟡 Buffer ahead: 1.22s | buffer: 2.22s
🟡 Buffer ahead: 1.05s | buffer: 3.05s
🟡 Buffer ahead: 1.12s | buffer: 4.12s
🟡 Buffer ahead: 1.18s | buffer: 5.18s
🟡 Buffer ahead: 0.72s | buffer: 5.72s
🟡 Buffer ahead: 1.27s | buffer: 7.28s
🟡 Buffer ahead: 2.09s | buffer: 3.03s
🟡 Buffer ahead: 4.16s | buffer: 6.10s
🟡 Buffer ahead: 6.66s | buffer: 7.09s
🟡 Buffer ahead: 5.66s | buffer: 7.09s
🟡 Buffer ahead: 4.66s | buffer: 7.09s
🟡 Buffer ahead: 4.02s | buffer: 7.45s
🟡 Buffer ahead: 3.62s | buffer: 8.05s
🟡 Buffer ahead: 2.62s | buffer: 8.05s
🟡 Buffer ahead: 2.49s | buffer: 3.53s
🟡 Buffer ahead: 2.43s | buffer: 3.38s
🟡 Buffer ahead: 1.90s | buffer: 3.85s
We’ve tried different attribute settings for AVPlayerItemVideoOutput. We also removed all logic related to reading frame data, but the choppy playback still remained.
Can you advise whether this is a player issue or if we’re doing something wrong?