The issue is reproducible with an empty project. When you run it and tap "Open immersive space", it takes a couple of minutes to respond. The issue is only reproducible on a real device with the debugger attached. Other developers can reproduce it too (it's not specific to my environment). The issue doesn't exist in Xcode 16.
After the initial long delay, subsequent opens work fine.
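For reference, the trigger is just the stock template action; a minimal sketch of what the "Open immersive space" button does in the empty project (assuming the template's default "ImmersiveSpace" id):
import SwiftUI

struct ContentView: View {
    // Standard environment action from the visionOS app template
    @Environment(\.openImmersiveSpace) private var openImmersiveSpace

    var body: some View {
        Button("Open immersive space") {
            Task {
                // This single call is what stalls for minutes with the debugger attached
                await openImmersiveSpace(id: "ImmersiveSpace")
            }
        }
    }
}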
Console logs:
nw_socket_copy_info [C1:2] getsockopt TCP_INFO failed [102: Operation not supported on socket]
nw_socket_copy_info getsockopt TCP_INFO failed [102: Operation not supported on socket]
Failed to set dependencies on asset 9303749952624825765 because NetworkAssetManager does not have an asset entity for that id.
void * _Nullable NSMapGet(NSMapTable * _Nonnull, const void * _Nullable): map table argument is NULL
PSO compilation completed for driver shader copyFromBufferToTexture so=0 sbpr=256 sbpi=16384 ss=(64, 64, 1) p=70 sc=1 ds=0 dl=0 do=(0, 0, 0) in 1997
XPC connection interrupted
<<<< FigAudioSession(AV) >>>> audioSessionAVAudioSession_CopyMXSessionProperty signalled err=-19224 (kFigAudioSessionError_UnsupportedOperation) (getMXSessionProperty unsupported) at FigAudioSession_AVAudioSession.m:606
Failed to load item AXCodeItem<0x14706f250> [Rank:6000] SpringBoardUIServices [AXBundle name:/System/Library/AccessibilityBundles/SpringBoardUIServices.axbundle/SpringBoardUIServices] [Platforms and Targets:{ iOS = SpringBoardUIServices; } Framework] [Excluded: (null)]. error: Error Domain=AXLoading Code=0 "URL does not exist: file:///System/Library/AccessibilityBundles/SpringBoardUIServices.axbundle" UserInfo={NSLocalizedDescription=URL does not exist: file:///System/Library/AccessibilityBundles/SpringBoardUIServices.axbundle}
Failed to load item AXCodeItem<0x14706f250> [Rank:6000] SpringBoardUIServices [AXBundle name:/System/Library/AccessibilityBundles/SpringBoardUIServices.axbundle/SpringBoardUIServices] [Platforms and Targets:{ iOS = SpringBoardUIServices; } Framework] [Excluded: (null)]. error: Error Domain=AXLoading Code=0 "URL does not exist: file:///System/Library/AccessibilityBundles/SpringBoardUIServices.axbundle" UserInfo={NSLocalizedDescription=URL does not exist: file:///System/Library/AccessibilityBundles/SpringBoardUIServices.axbundle}
[b30780-MRUIFeedbackTypeButtonWithBackgroundTouchDown] Playback timed out before completion (after 3111 ms)
Failed to set dependencies on asset 7089614247973236977 because NetworkAssetManager does not have an asset entity for that id.
The new Mac virtual display feature on visionOS 2 offers a curved/panoramic window. I was wondering if this is simply a property that can be applied to a window, or if it involves an immersive mode or SceneKit/RealityKit?
Hi,
I was wondering if the Enterprise API for visionOS 2 includes access to the raw Lidar data from the Apple Vision Pro, or any intermediate data representation (like the depthMap as shown in this post)? Or if there would be any way to get access to this data?
Thanks in advance!
We're developing an iOS application that integrates RoomCaptureSession with ARSCNView for room scanning. Our implementation differs from the standard RoomCaptureView because we need custom UI guidance with 3D dots placed in the scanning environment to guide users through the capture process.
Bug Description:
The application crashes when users attempt to scan multiple rooms or apartments in sequence. The crash specifically occurs with the following pattern:
User successfully scans first room with multiple hotspots (working correctly)
User stops scanning, moves to a new room
In the new room, first 1-2 hotspots work correctly
Application crashes when attempting to scan additional hotspots
Technical Details:
Error: SLAM Anchor assertion failure in SlamAnchor.cpp:37 : HasValidPose()
Crash occurs in Thread 27 with CAPIDetectionOutputFwdNode
Error suggests invalid positioning when placing AR anchors
Steps to Reproduce:
Start room scan
Complete multiple hotspot captures in first room
Stop scanning
Start new room scan
Capture 1-2 hotspots successfully
Attempt additional hotspot captures -> crashes
Attempted Solutions:
Implemented anchor cleanup between sessions
Added position validation before anchor placement
Implemented ARSession error handling
Added proper thread management for AR operations
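For reference, the anchor cleanup between sessions looks roughly like this (a simplified sketch rather than our full implementation; the hotspot bookkeeping names are illustrative):
import ARKit
import RoomPlan
import SceneKit

// Simplified sketch of the scanner described above. hotspotNodes/hotspotAnchors
// are our own bookkeeping for the custom 3D guidance dots (names illustrative).
final class RoomScanner {
    let captureSession = RoomCaptureSession()
    let sceneView = ARSCNView(frame: .zero)
    var hotspotNodes: [SCNNode] = []
    var hotspotAnchors: [ARAnchor] = []

    func startNewRoom() {
        // RoomCaptureSession exposes its underlying ARSession (iOS 17+);
        // we hand the same session to the ARSCNView that renders the dots.
        sceneView.session = captureSession.arSession
        captureSession.run(configuration: RoomCaptureSession.Configuration())
    }

    func finishCurrentRoom() {
        captureSession.stop()
        // Remove our guidance dots and their anchors before the next room.
        hotspotNodes.forEach { $0.removeFromParentNode() }
        hotspotNodes.removeAll()
        hotspotAnchors.forEach { captureSession.arSession.remove(anchor: $0) }
        hotspotAnchors.removeAll()
    }
}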
Environment:
Device: iPhone 14 Pro (LiDAR equipped)
iOS Version: 18.1.1 (22B91)
Testing through TestFlight
Crash Log Details:
Exception Type: EXC_CRASH (SIGABRT)
Exception Codes: 0x0000000000000000, 0x0000000000000000
Exception Note: EXC_CORPSE_NOTIFY
Triggered by Thread: 27
Thread 27 Crashed:
0 libsystem_kernel.dylib 0x00000001f0cc91d4 __pthread_kill + 8
1 libsystem_pthread.dylib 0x0000000228e12ef8 pthread_kill + 268
2 libsystem_c.dylib 0x00000001a86bbad8 abort + 128
3 AppleCV3D 0x0000000234d71a28 cv3d::vio::capi::SlamAnchor::SlamAnchor
Question:
Is there a recommended approach for handling multiple room captures with custom ARSCNView integration? The standard RoomCaptureView implementation doesn't show this behavior, but we need the custom guidance functionality that ARSCNView provides.
Crash Log
Code and full crash logs can be provided if needed.
Hello Community,
I am currently developing an experimental VisionOS app, to investigate the social effects of the new Spatial Persona feature, for my bachelor thesis. My setup includes a simple board game for the participants, in which they can engage with their persona avatars.
I tried to use TabletopKit for this setup, but ran into issues when starting the SharePlay session. When I tested my app, I couldn't see the other spatial persona anymore, despite the green SharePlay button indicating the session had started. The other person can see my actions on the board in their version of the app, but cannot interact with anything. Also, we are both placed at the default seat.
I tried removing the environment I added, because it doesn't seem to sync with the other player. When I tried the FaceTime feature in the simulator without the environment, I could then see the test robot avatar, but in a totally wrong place. It seems like it isn't just my environment occluding the seats, but a flaw in the seating process as well.
When I tried the FaceTime feature in the simulator on the official test scene (TabletopKit Sample), I got the same incorrect placement and the warning "role(for:inSeatNumber:): The provided role identifier does not match a role in the current template."
So my questions are:
What needs to be changed so the TabletopKit can handle seating correctly?
How can I correctly use immersive scenes in combination with the TabletopKit?
I tried to stay as close as possible to the implementation of the TabletopKit example, so I think it will be enough to look into that codebase for now.
I debugged the position of seats and they are placed correctly in front of their equipment. The personas are just not placed on them.
For the M2 Apple Vision Pro, there's "a general guideline, we recommend no more than 500 thousand triangles for an immersive scene, with 250 thousand for applications in the shared space." --https://developer.apple.com/videos/play/wwdc2024/10186/?time=147
Is there a revised recommendation for the M5 Apple Vision Pro?
How do you force quit apps on the Vision Pro simulator?
I've read online that you can double-press the Digital Crown on a real Vision Pro device, but there is no such button in the simulator. So far I've tried:
Pressing the home button twice (⇧⌘H)
Pressing the Siri button twice (⌥⇧⌘H)
Neither of them works.
I'm on Xcode 15.0 beta 5 (15A5209g) & visionOS 1.0 beta 2 (21N5207e)
This is related to the WWDC presentation "What's new in Metal rendering for immersive apps".
Specifically, the macOS spatial streaming to visionOS feature. For reference: the page in the docs.
The presentation demonstrates it using a full immersive space and Metal rendering using compositor services.
I'd like clarity on a few things:
Is the remote connection wireless, or must the visionOS device be connected via a wired connection?
Is there a limit to the number of remote devices, and if not, could macOS render different things per remote device simultaneously?
Can I also use mixed mode with passthrough enabled, instead of just a fully-immersive mode?
Can I use RealityKit instead of Metal? If so, could someone point me to an example?
Game Controller Input Limitations in visionOS Volumetric Windows
Hello Apple Developer Community,
I'm developing a game for visionOS and have encountered significant limitations with game controller input when using volumetric windows (WindowGroup with .volumetric style). I'd appreciate clarification on whether this is expected behavior and any guidance on best practices.
🧩 Issue Summary
When using a DualSense controller with a volumetric window in visionOS, only a subset of controller inputs is available to the app. The remaining inputs appear to be reserved by the system for UI navigation.
✅ Working Inputs (Volumetric Window)
D-Pad (all directions)
L3 (left thumbstick button click)
R3 (right thumbstick button click)
Menu button
Options button
❌ Not Working Inputs (Volumetric Window)
Left thumbstick analog movement (used for UI scrolling instead)
Right thumbstick analog movement (used for UI scrolling instead)
Face buttons (Cross, Circle, Square, Triangle / A, B, X, Y)
Shoulder buttons (L1, R1)
Triggers (L2, R2)
Key observation: When moving the left thumbstick in a volumetric window, the window's UI scrolls vertically instead of sending input to my app's GameController handlers. Similarly, face buttons seem to be reserved for system UI interactions.
⚙️ Implementation Details
I'm using the standard GameController framework (see the sketch after this list):
Connect to controller via GCController.controllers()
Access extendedGamepad profile
Set up valueChangedHandler and pressedChangedHandler for all inputs
Handlers confirmed registered via logging
Working inputs (D-Pad, L3, R3) trigger immediately and consistently
Non-working inputs (thumbsticks, face buttons) never trigger
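For reference, here is a condensed sketch of that registration (the print statements stand in for my logging):
import GameController

// Condensed sketch of the handler registration described above.
func registerHandlers(for controller: GCController) {
    guard let gamepad = controller.extendedGamepad else { return }

    // Analog sticks: never fire inside a volumetric window (the window scrolls instead).
    gamepad.leftThumbstick.valueChangedHandler = { _, x, y in
        print("left stick: \(x), \(y)")
    }

    // Face buttons: also never fire inside a volumetric window.
    gamepad.buttonA.pressedChangedHandler = { _, _, pressed in
        print("A pressed: \(pressed)")
    }

    // D-Pad and stick clicks: these do fire in a volumetric window.
    gamepad.dpad.valueChangedHandler = { _, x, y in
        print("dpad: \(x), \(y)")
    }
    gamepad.leftThumbstickButton?.pressedChangedHandler = { _, _, pressed in
        print("L3 pressed: \(pressed)")
    }
}

// Hook up every connected controller, e.g. on launch or on GCControllerDidConnect.
func registerAllControllers() {
    GCController.controllers().forEach { registerHandlers(for: $0) }
}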
🧠 Critical Finding: ImmersiveSpace Works Perfectly
When testing the exact same code in an ImmersiveSpace (.mixed immersion style), all controller inputs work perfectly:
✅ Both thumbsticks provide full analog input
✅ All face buttons trigger their handlers
✅ All shoulder buttons and triggers work correctly
✅ 100% success rate with no intermittent issues
This suggests the issue isn't with my code, but rather how visionOS handles controller input differently between Volumetric Windows and ImmersiveSpace.
🧪 Test Environment
I created a minimal test project (Controller-Playground) to isolate the issue:
A simple ControllerTester class that registers all GameController handlers
A visual UI showing real-time input state
No game logic, RealityKit physics, or other complexity
Results
In volumetric window: Only D-Pad, L3, R3, Menu, Options work
In ImmersiveSpace: All inputs work perfectly
This confirms the limitation exists at the visionOS platform level, not in app code.
🧰 Attempted Workarounds
I tried the following without success:
Setting GCSupportsControllerUserInteraction = false in Info.plist
Setting UIRequiresFullScreen = true
Changing window styles (.plain, .volumetric)
Polling vs. handler-based input approaches
Various threading models (MainActor, separate thread)
Result: The only way to enable full controller support is to switch to ImmersiveSpace.
❓ Questions for Apple
Is this input reservation behavior in volumetric windows intended and documented?
Are game controllers expected to have limited functionality in volumetric windows while full functionality is reserved for ImmersiveSpace?
Is there a way to request full controller input access in a volumetric window, or is ImmersiveSpace the only option for complete controller support?
Where can I find official documentation about controller input differences between window types?
Are there any APIs or configuration options to disable system controller shortcuts in volumetric windows?
🎯 Impact
This limitation has a significant effect on game design and architecture:
Volumetric windows offer a multitasking-friendly, less immersive experience
ImmersiveSpace provides full controller support but may be more immersive than some games require
Games that only need basic D-Pad and button input can work fine in volumetric windows
Games requiring analog sticks or face buttons must currently use ImmersiveSpace
It would be very helpful if Apple could clarify or reference existing documentation regarding controller input handling in different visionOS window types. If such documentation doesn't exist yet, it might be valuable to include this information in future developer guides or best-practice documents.
🕹 Current Workaround
For now, I'm using:
D-Pad for character movement (digital 8-direction)
R3 (right stick click) as a substitute for the "X" button
This setup allows the game to function within a volumetric window, though full controller support still requires ImmersiveSpace.
📄 Request
If this is expected behavior, I may have simply missed the relevant documentation — could you please point me to any existing resources that explain this design?
If there isn't one yet, it would be great if future visionOS documentation could:
Clearly outline controller input behavior across window types
Provide guidance on when to use Volumetric Windows vs. ImmersiveSpace for games
Consider adding an API option to request full controller access when appropriate
If this is not expected behavior, I'm happy to file a detailed bug report with sample code.
💻 System Information
visionOS: Latest Simulator
Xcode: Latest version
Controller: Sony DualSense
Framework: GameController (standard extendedGamepad profile)
Test project: Minimal reproducible example available
Thank you for any clarification or guidance you can provide. This information would be valuable for many developers working on visionOS games.
Apple's new Spatial Personas use Gaussian Splatting, but I have not found any APIs for visionOS to display a Gaussian Splat like a PLY file.
Am I just missing the Apple documentation? If not, are there common practices developers are using for displaying Gaussian Splats in visionOS?
I am getting the error "Initializing hosting entity without a context" in the console when I build and run my game in Xcode 16.0 beta, targeting visionOS 2.0 (22N5252n).
Not sure where the error is originating.
https://developer.apple.com/documentation/realitykit/videomaterial
The documentation: "Video materials support transparency if the source video’s file format also supports transparency."
I have a transparency video (Hand.mov, HEVC with alpha). I can show the video with a transparent background correctly in the Vision Pro Simulator, but on a physical device the video has a black background. I'm sure the video format is fine, because I can get the texture from the video and display it on an UnlitMaterial.
How can I show the transparent video correctly with RealityKit's VideoMaterial?
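In case it helps, this is roughly how I attach the video (a minimal sketch assuming Hand.mov is bundled with the app; it shows the transparent background in the simulator but a black background on device):
import AVFoundation
import RealityKit

// Sketch: attach the HEVC-with-alpha video to a plane via VideoMaterial.
func makeVideoPlane() -> ModelEntity? {
    guard let url = Bundle.main.url(forResource: "Hand", withExtension: "mov") else { return nil }
    let player = AVPlayer(url: url)
    let material = VideoMaterial(avPlayer: player)
    let plane = ModelEntity(mesh: .generatePlane(width: 0.4, height: 0.3),
                            materials: [material])
    player.play()
    return plane
}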
Spatial widgets are a new feature of visionOS 26. I notice the system Photos app can show a spatial image in its widget. I wonder whether third-party apps can use spatial images or any other 3D content in their widgets? I tried to use RealityView in a widget and it crashed.
So are spatial images in widgets only supported by the system Photos app, and not available to developers right now?
Hi, I have a monitoring app that takes input video from UVC, processes it using Metal, and eventually produces an MTLTexture.
The problem I'm facing is that I have to convert the MTLTexture to a CGImage and then call TextureResource.replace, which is super slow. The Metal processing keeps up with the input frame rate (50 fps), but the MTLTexture -> CGImage -> TextureResource path only reaches about 7 fps...
Is there any way I can make it faster?
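For reference, the per-frame conversion I'm describing looks roughly like this (a simplified sketch; frameTexture and textureResource are assumed to exist already):
import CoreImage
import Metal
import RealityKit

let ciContext = CIContext()

// Sketch of the per-frame path that is too slow (MTLTexture -> CGImage -> TextureResource).
func updateTexture(from frameTexture: MTLTexture, into textureResource: TextureResource) throws {
    guard let ciImage = CIImage(mtlTexture: frameTexture, options: nil) else { return }
    guard let cgImage = ciContext.createCGImage(ciImage, from: ciImage.extent) else { return }
    // This CPU round trip is what limits the update rate to ~7 fps.
    try textureResource.replace(withImage: cgImage, options: .init(semantic: .color))
}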
Hi, I am trying to load files from the Apple Vision Pro's storage into a Unity App (using Apple visionOS XR Plugin and not PolySpatial package). So far, I've tried using UnitySimpleFileBrowser and UnityStandaloneFileBrowser (both aren't made for the Vision Pro and don't work there), and then implemented my own naive file browser that at least allows me to view directories (that I can see from the App Sandbox). This is of course very limited:
Gray folders can't be accessed, and the only three accessible ones don't contain anything a user would put files into through the "Files" app.
I know that an app can request access to these "Files & Folders":
So my question is: Is there a way to request this access for a Unity-built app at the moment? If yes, what do I need to do? I've looked into the generated Xcode project's "Capabilities", but did not find anything related to file access. Any help is appreciated!
I'm trying to run a PhotogrammetrySession based on photos taken in an AVCaptureSession and stored as .heic files.
When I load the files, I always see the error "Sample 0 missing LiDAR point cloud!" for each individual sample.
Debugging shows that sample.depthDataMap is populated; the .heic also contains depth data, which can be extracted using e.g. heif-convert on my Mac.
Comparing the .heic I created to one of the ObjectCaptureSession which doesn't show the LiDAR warning, I noticed the only difference being the HEIC information here:
So my questions are:
Is this the information missing from my manual capture that causes the warning?
Can I somehow add this information in an AVCaptureSession?
Does this information lead to better photogrammetry results?
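For context, this is roughly how depth delivery is enabled in my AVCaptureSession (a condensed sketch; device and session setup are omitted):
import AVFoundation

// Sketch: enable depth delivery on the photo output and embed it in the HEIC.
func configureDepthCapture(photoOutput: AVCapturePhotoOutput) -> AVCapturePhotoSettings {
    photoOutput.isDepthDataDeliveryEnabled = photoOutput.isDepthDataDeliverySupported

    let settings = AVCapturePhotoSettings(format: [AVVideoCodecKey: AVVideoCodecType.hevc])
    settings.isDepthDataDeliveryEnabled = photoOutput.isDepthDataDeliveryEnabled
    settings.embedsDepthDataInPhoto = true  // the depth map ends up in the .heic
    return settings
}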
Hi Apple Team,
We noticed the following exciting changelog in the latest macOS 26 beta:
A new algorithm significantly improves PhotogrammetrySession reconstruction quality of low-texture objects not captured with the ObjectCaptureSession front end. It will be downloaded and cached once in the background when the PhotogrammetrySession is used at runtime. If network isn’t available at that time, the old low quality model will be used until the new one can be downloaded. There is no code change needed to get this improved model. (145220451)
However, after trying this on the latest beta and running some tests, we do not see any differences on objects with low texture, such as single-coloured surfaces. Is there anything we are missing? The machine is definitely connected to the internet, but we have no way of knowing from the logs whether the new model is being used.
Thanks
In visionOS, I'm trying to create an immersive environment featuring several spheres inside which immersive movies are visible. I'm starting from sample code that creates a sphere, sets an immersive movie as its material, and opens it as an immersive environment. This works fine.
But if I create a sphere in an open immersive environment using Reality Composer Pro and set its material to an immersive movie, I can see the movie on the sphere while I move around outside of it, but if I try to get inside the sphere, it disappears. What would be the right way of doing this?
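For reference, the programmatic sample I'm starting from does roughly this (a minimal sketch; "movie.mov" is a placeholder name, and the negative X scale flips the sphere so the material is visible from the inside):
import AVFoundation
import RealityKit

// Sketch of the programmatic approach: a sphere whose interior shows the movie.
func makeMovieSphere() -> ModelEntity? {
    guard let url = Bundle.main.url(forResource: "movie", withExtension: "mov") else { return nil }
    let player = AVPlayer(url: url)
    let sphere = ModelEntity(mesh: .generateSphere(radius: 10),
                             materials: [VideoMaterial(avPlayer: player)])
    // Flip the sphere inside out so the video is rendered on the interior surface.
    sphere.scale = SIMD3<Float>(-1, 1, 1)
    player.play()
    return sphere
}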
For visionOS 2.0+, it has been announced the object tracking feature. Is there any support for PolySpatial in Unity or is it only available in Swift and Xcode?
I'm developing a VisionOS app with bouncing ball physics and struggling to achieve natural bouncing behavior using RealityKit's physics system. Despite following Apple's recommended parameters, the ball loses significant energy on each bounce and doesn't behave like a real basketball, tennis ball, or football would.
With identical physics parameters (restitution = 1.0), RealityKit shows significant energy loss. I've had to implement a custom physics system to compensate, but I want to use native RealityKit physics. It's impossible to make it work by applying custom impulses.
Ball Physics Setup (Following Apple Forum Recommendations)
// From PhysicsManager.swift
private func createBallEntityRealityKit() -> Entity {
    let ballRadius: Float = 0.05
    let ballEntity = Entity()
    ballEntity.name = "bouncingBall"

    // Mesh and material
    let mesh = MeshResource.generateSphere(radius: ballRadius)
    var material = PhysicallyBasedMaterial()
    material.baseColor = .init(tint: .cyan)
    material.roughness = .float(0.3)
    material.metallic = .float(0.8)
    ballEntity.components.set(ModelComponent(mesh: mesh, materials: [material]))

    // Physics setup from Apple Developer Forums
    let physics = PhysicsBodyComponent(
        massProperties: .init(mass: 0.624), // Seems too heavy for 5cm ball
        material: PhysicsMaterialResource.generate(
            staticFriction: 0.8,
            dynamicFriction: 0.6,
            restitution: 1.0 // Perfect elasticity, yet still loses energy
        ),
        mode: .dynamic
    )
    ballEntity.components.set(physics)
    ballEntity.components.set(PhysicsMotionComponent())

    // Collision setup
    let collisionShape = ShapeResource.generateSphere(radius: ballRadius)
    ballEntity.components.set(CollisionComponent(shapes: [collisionShape]))

    return ballEntity
}
Ground Plane Physics
// From GroundPlaneView.swift
let groundPhysics = PhysicsBodyComponent(
    massProperties: .init(mass: 1000),
    material: PhysicsMaterialResource.generate(
        staticFriction: 0.7,
        dynamicFriction: 0.6,
        restitution: 1.0 // Perfect bounce
    ),
    mode: .static
)
entity.components.set(groundPhysics)
Wall Physics
// From WalledBoxManager.swift
let wallPhysics = PhysicsBodyComponent(
    massProperties: .init(mass: 1000),
    material: PhysicsMaterialResource.generate(
        staticFriction: 0.7,
        dynamicFriction: 0.6,
        restitution: 0.85 // Slightly less than ground
    ),
    mode: .static
)
wall.components.set(wallPhysics)
Collision Detection
// From GroundPlaneView.swift
content.subscribe(to: CollisionEvents.Began.self) { event in
    guard physicsMode == .realityKit else { return }

    let currentTime = Date().timeIntervalSince1970
    guard currentTime - lastCollisionTime > 0.1 else { return }

    if event.entityA.name == "bouncingBall" || event.entityB.name == "bouncingBall" {
        let normal = event.collision.normal

        // Distinguish between wall and ground collisions
        if abs(normal.y) < 0.3 { // Wall bounce
            print("Wall collision detected")
        } else if normal.y > 0.7 { // Ground bounce
            print("Ground collision detected")
        }

        lastCollisionTime = currentTime
    }
}
Issues Observed
Energy Loss: Despite restitution = 1.0 (perfect elasticity), the ball loses ~20-30% energy per bounce
Wall Sliding: Ball tends to slide down walls instead of bouncing naturally
No Damping Control: Comments mention damping values but they don't seem to affect the physics
Changing the mass also doesn't do much.
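To be explicit about what I mean by damping: PhysicsBodyComponent exposes linearDamping and angularDamping, and this is the kind of adjustment I'm referring to (a sketch only; ballEntity and the material parameters are from the setup above):
// Sketch: zero out damping on the ball's physics body (values are illustrative).
var ballPhysics = PhysicsBodyComponent(
    massProperties: .init(mass: 0.624),
    material: PhysicsMaterialResource.generate(staticFriction: 0.8,
                                               dynamicFriction: 0.6,
                                               restitution: 1.0),
    mode: .dynamic
)
ballPhysics.linearDamping = 0.0   // no velocity damping
ballPhysics.angularDamping = 0.0  // no spin damping
ballEntity.components.set(ballPhysics)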
Custom Physics System (Workaround)
I've implemented a custom physics system that manually calculates velocities and applies more realistic restitution values:
// From BouncingBallComponent.swift
struct BouncingBallComponent: Component {
    var velocity: SIMD3<Float> = .zero
    var angularVelocity: SIMD3<Float> = .zero
    var bounceState: BounceState = .idle
    var lastBounceTime: TimeInterval = 0
    var bounceCount: Int = 0
    var peakHeight: Float = 0
    var totalFallDistance: Float = 0

    enum BounceState {
        case idle
        case falling
        case justBounced
        case bouncing
        case settled
    }
}
Is this energy loss expected behavior in RealityKit, even with perfect restitution (1.0)?
Are there additional physics parameters (damping, solver iterations, etc.) that could improve bounce behavior?
Would switching to Unity be necessary for more realistic ball physics, or am I missing something in RealityKit?
Even in the last video here: https://stepinto.vision/example-code/collisions-physics-physics-material/ the bounce of the ball is very unnatural - it stops after 3-4 bounces. I apply custom impulses, but then if I have walls around the ball, it's almost impossible to make it look natural. I also saw this post https://developer.apple.com/forums/thread/759422 and the ball is still not bouncing naturally.