Discuss spatial computing on Apple platforms and how to design and build an entirely new universe of apps and games for Apple Vision Pro.

Posts under Spatial Computing topic

Adding a reference image fails in visionOS
I am trying out ARKit image tracking on Vision Pro, but there seems to be a problem when adding a reference image. Here is my code:

```swift
let images = ReferenceImage.loadReferenceImages(inGroupNamed: "photos")
print("Images: \(images)")
try await appState!.arkitSession.run([imageTracking])
```

It successfully prints the images, but sometimes it also prints an error message like this:

ARImageTrackingRemoteService: Adding reference image <ARReferenceImage: 0x3032399e0 name="chair" physicalSize=(0.070, 0.093)> failed.

When this error is printed, the corresponding image cannot be tracked. I do not understand why this happens, because sometimes the image is added successfully and other times it is not, even for the same image, which makes my app unstable. There are also some other error messages, and I do not know whether they are related:

ARPredictorRemoteService <0x1042154a0>: Query queue is not running.
Execution of the command buffer was aborted due to an error during execution. Insufficient Permission (to submit GPU work from background) (00000006:kIOGPUCommandBufferCallbackErrorBackgroundExecutionNotPermitted)
Replies: 1 · Boosts: 0 · Views: 356 · Activity: Mar ’25
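A minimal sketch of the image-tracking setup the post above describes, assuming a visionOS app that already has an ARKitSession and an AR resource group named "photos" in its asset catalog; the logging and error handling are illustrative, not from the original post:

```swift
import ARKit

/// Runs an ImageTrackingProvider on an existing ARKitSession and logs anchor updates.
func runImageTracking(on session: ARKitSession) async {
    // Build the provider from the reference-image group in the asset catalog.
    let imageTracking = ImageTrackingProvider(
        referenceImages: ReferenceImage.loadReferenceImages(inGroupNamed: "photos")
    )

    do {
        try await session.run([imageTracking])
    } catch {
        print("Failed to run image tracking: \(error)")
        return
    }

    // Observe updates for each tracked image anchor.
    for await update in imageTracking.anchorUpdates {
        print("Image \(update.anchor.referenceImage.name ?? "unnamed"): \(update.event)")
    }
}
```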
How to trigger actions with OnCollision in the Behaviors component
This is all about using notifications to trigger actions from Reality Composer Pro's new Timeline system. After watching "Compose interactive 3D content in Reality Composer Pro", I am starting to wonder why Entity.applyTapForBehaviors needs to be called in code at all to trigger content in a Behaviors component. After all, in the Behaviors component we chose OnTap so that a "Tap Notification" triggers our action on a selected target object. By the same logic, after selecting the OnCollision trigger I would expect to write something like CollisionEvent.entityA.applyCollisionForBehaviors, which does not exist. And of course the collision on my object does not trigger the action, because I only set things up in Reality Composer Pro, not in code.

Setting aside for now that this post has pointed out we could use the Behaviors component's OnNotification trigger, I found that I can still use the OnTap trigger and simply call Entity.applyTapForBehaviors from the handler of my subscribed collision begin event. That actually works better than OnCollision.

So what are the design principles here? And how can I send a collision notification so that my Behaviors component's OnCollision trigger actually works?
Replies: 1 · Boosts: 2 · Views: 613 · Activity: Mar ’25
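A minimal sketch of the workaround described in the post above: subscribing to RealityKit collision events in code and calling applyTapForBehaviors() on the hit entity so that an OnTap trigger in the Behaviors component fires. The entity, its setup, and the RealityView structure are illustrative assumptions, not from the original post:

```swift
import RealityKit
import SwiftUI

struct CollisionTriggerView: View {
    // Keep the subscription alive for the lifetime of the view.
    @State private var collisionSubscription: EventSubscription?

    var body: some View {
        RealityView { content in
            // Hypothetical entity standing in for one loaded from a Reality
            // Composer Pro scene whose Behaviors component uses an OnTap trigger.
            let target = ModelEntity(mesh: .generateBox(size: 0.1))
            target.generateCollisionShapes(recursive: true)
            content.add(target)

            // When a collision begins on the target (another colliding entity
            // is assumed to exist in the scene), forward it to the Behaviors
            // component by "tapping" the entity in code.
            collisionSubscription = content.subscribe(
                to: CollisionEvents.Began.self,
                on: target,
                componentType: nil
            ) { event in
                event.entityA.applyTapForBehaviors()
            }
        }
    }
}
```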
Reality Composer Pro "Create new scene in the project" doesn't work more than once
In Reality Composer Pro 2.0 (448.120.2), the "Create new scene in the project" button doesn't work more than once. The second time I use it, Reality Composer Pro displays a new, empty USDA file in the project browser with the wrong icon (a yellow document icon instead of a 3D box icon). It also doesn't create the new scene's USDA file on disk, display the scene in the entity hierarchy browser, or create a new tab. Consequently, whenever I want to add more than one new scene to my project, I have to repeatedly quit and restart Reality Composer Pro. Just now, I had to quit and restart RCP six times to create seven new scenes. Is this a known issue?

Repro steps:
1. In a Reality Composer Pro project, create a new folder using the "add folder" icon button in the project browser.
2. Inside the new folder, click the "Create new scene in the project" icon button.
3. Click the "Create new scene in the project" icon button a second time.

Expected behavior: A new USDA file is created on disk. The new USDA file's root entity appears in the entity hierarchy browser and a corresponding new tab is created.

Observed behavior: The USDA file for this scene is not created on disk, and it does not appear in the entity hierarchy browser or in a new tab. In the project browser, a yellow document icon appears that does not seem to correspond to an actual USDA file.

Thank you for any insight you can provide about this issue.
Replies: 1 · Boosts: 0 · Views: 139 · Activity: Jun ’25
How to Disable Default Background Audio in RealityKit’s ObjectCaptureSession?
I am using RealityKit's ObjectCaptureSession API to capture objects, presenting the process with ObjectCaptureView. During the capture session, default background audio plays automatically. I noticed the same audio behavior in Apple's official Composer app, which appears to use the same API. I'd like to disable this audio in my app, but I have not been able to find any API or configuration option to do so. Is there an official method or workaround to disable the default audio in the ObjectCaptureSession API? Any guidance would be appreciated. Thank you!
Replies: 1 · Boosts: 0 · Views: 693 · Activity: Jan ’25
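For reference, a minimal sketch of the ObjectCaptureSession and ObjectCaptureView setup the post above describes; the view structure and directory paths are illustrative assumptions, and, as the post notes, no audio-related option appears in this configuration:

```swift
import RealityKit
import SwiftUI

struct CaptureView: View {
    @State private var session = ObjectCaptureSession()

    var body: some View {
        // ObjectCaptureView drives the capture UI, including the built-in
        // feedback (and, per the post above, the default background audio).
        ObjectCaptureView(session: session)
            .task {
                // Illustrative directories; a real app would create and manage these.
                let imagesDir = URL.documentsDirectory.appending(path: "Images/")
                let checkpointDir = URL.documentsDirectory.appending(path: "Checkpoints/")

                var configuration = ObjectCaptureSession.Configuration()
                configuration.checkpointDirectory = checkpointDir

                // No obvious audio-related setting is exposed here.
                session.start(imagesDirectory: imagesDir, configuration: configuration)
            }
    }
}
```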
ARMeshAnchors are very unreliable on iPad Pro (4th gen)
Hello, we are developing an AR app that requires LiDAR meshes. Unfortunately, the ARMeshAnchors that allow us to retrieve the mesh data are very unreliable. It happens very often that the ARSession removes all ARMeshAnchors, and it takes anywhere from 5 s to 30 s for them to reappear. Plane detection (ARPlaneAnchor) still works fine and camera tracking also behaves normally. I tried a basic ARKit sample app and got the same behavior as in our own app. Is this a known issue? Is there anything we can do to mitigate it? Thank you.
Replies: 1 · Boosts: 0 · Views: 301 · Activity: Mar ’25
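A minimal sketch of the kind of scene-reconstruction setup the post above describes, useful for observing when mesh anchors are added and removed; the logging is illustrative, not from the original post:

```swift
import ARKit

final class MeshTracker: NSObject, ARSessionDelegate {
    let session = ARSession()

    func start() {
        let configuration = ARWorldTrackingConfiguration()
        // Mesh reconstruction requires a LiDAR-equipped device.
        if ARWorldTrackingConfiguration.supportsSceneReconstruction(.mesh) {
            configuration.sceneReconstruction = .mesh
        }
        configuration.planeDetection = [.horizontal, .vertical]
        session.delegate = self
        session.run(configuration)
    }

    // Log when mesh anchors appear and disappear to observe the
    // drop-out behavior described in the post above.
    func session(_ session: ARSession, didAdd anchors: [ARAnchor]) {
        let meshes = anchors.compactMap { $0 as? ARMeshAnchor }
        if !meshes.isEmpty { print("Added \(meshes.count) mesh anchors") }
    }

    func session(_ session: ARSession, didRemove anchors: [ARAnchor]) {
        let meshes = anchors.compactMap { $0 as? ARMeshAnchor }
        if !meshes.isEmpty { print("Removed \(meshes.count) mesh anchors") }
    }
}
```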
VNDetectHumanBodyPose3DRequest failing on Vision Pro
Hi! I attempted to run the sample project for detecting human body poses in 3D with the Vision framework, which can be found here: https://developer.apple.com/documentation/vision/detecting-human-body-poses-in-3d-with-vision. It works perfectly on my MacBook Pro M1, but fails on Apple Vision Pro. After selecting a photo, an endless loading screen is displayed and the following messages appear in the console:

Failed to initialize 2D Detection Algorithm.
Failed to initialize 2D Pose Estimation Algorithm.
Failed to initialize algorithm modules
Network path is nil: (null)
Failed to initialize 2D Detection Algorithm.
Failed to initialize 2D Pose Estimation Algorithm.
Failed to initialize algorithm modules
Unable to perform the request: Error Domain=com.apple.Vision Code=9 "Async status object reported as failed but without an error" UserInfo={NSLocalizedDescription=Async status object reported as failed but without an error}.
de-activating session 70138 after timeout

Is human body pose detection expected to work on visionOS? Is there any special configuration required that I might be missing?
Replies: 1 · Boosts: 1 · Views: 127 · Activity: Mar ’25
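For context, a minimal sketch of the kind of request the sample performs; the function name and image URL parameter are illustrative assumptions:

```swift
import Vision

// Runs a 3D human body pose request on an image file and prints a summary,
// mirroring what the linked sample project does with a selected photo.
func detectBodyPose3D(in imageURL: URL) throws {
    let request = VNDetectHumanBodyPose3DRequest()
    let handler = VNImageRequestHandler(url: imageURL)
    try handler.perform([request])

    guard let observation = request.results?.first else {
        print("No body detected")
        return
    }
    let joints = try observation.recognizedPoints(.all)
    print("Detected \(joints.count) joints; estimated body height: \(observation.bodyHeight)")
}
```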
CustomMaterial disable unlit tone mapping
Hi, since iOS 18, UnlitMaterial and ShaderGraphMaterial have an option to disable tone mapping, e.g. via https://developer.apple.com/documentation/realitykit/unlitmaterial/init(applypostprocesstonemap:). Is it possible to do the same for CustomMaterial? I tried initializing a CustomMaterial from an UnlitMaterial with tone mapping disabled, like so:

```swift
let unlitMat = UnlitMaterial(applyPostProcessToneMap: false)
let customMaterial = try CustomMaterial(
    from: unlitMat,
    surfaceShader: surfaceShader,
    geometryModifier: geometryModifier
)
```

but that does not seem to work. The colors of my texture still look altered compared to a plain UnlitMaterial, or to a ShaderGraphMaterial with tone mapping disabled. Any hints? Thank you!
Replies: 1 · Boosts: 0 · Views: 123 · Activity: Jun ’25
The folding and unfolding effect of the NBA sand table
I saw this magical sand table, and its unfolding and folding effects, similar to spreading out cards, are very interesting. I don't know how to achieve it. Are there any ways to achieve this effect, and could you share some ideas? Can this effect be achieved with the existing APIs?
Replies: 1 · Boosts: 0 · Views: 63 · Activity: May ’25
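Not an answer from the thread, but a minimal sketch of one way a card-like spread could be approximated with existing RealityKit APIs, by animating child entity transforms; the container, spacing, and timing are illustrative assumptions:

```swift
import RealityKit

// Spreads the children of a container entity outward like a fan of cards,
// animating each child's transform over one second.
func spreadOut(container: Entity, spacing: Float = 0.15) {
    for (index, child) in container.children.enumerated() {
        var target = child.transform
        target.translation.x = Float(index) * spacing
        child.move(to: target, relativeTo: container, duration: 1.0, timingFunction: .easeInOut)
    }
}

// Collapses the children back to the container's origin (the "folded" state).
func foldUp(container: Entity) {
    for child in container.children {
        var target = child.transform
        target.translation = .zero
        child.move(to: target, relativeTo: container, duration: 1.0, timingFunction: .easeInOut)
    }
}
```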
ARDepthData.confidenceMap only returns low confidence on certain devices
A few users have recently reported no longer being able to capture point clouds using our app, specifically on iPhone 15 Pro devices. We recently found an in-house device that exhibits this behavior and found that the confidenceMap contains only low confidence values, regardless of the environment being captured. Our app uses a higher confidence threshold; setting the threshold to a lower value produces noisy results as expected, so that is a non-viable option.

Other LiDAR-based apps have been tested with this device and the results are the same: no points, or noisy point clouds in apps that allow a lower confidence threshold setting. On devices that exhibit this behavior, the "Displaying a point cloud using scene depth" Apple sample app can be used to visualize the issue. First reports of this new behavior occurred as early as iOS 18.4.

Looking for recommendations on which team(s) at Apple to reach out to with these findings, since the behavior manifests on only a small sample of devices.
Replies: 1 · Boosts: 0 · Views: 230 · Activity: Jun ’25
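A minimal sketch of how the confidence values described above can be inspected from an ARFrame's scene depth, assuming a session configured with the .sceneDepth frame semantic; the histogram logic is illustrative, not from the original post:

```swift
import ARKit

// Counts how many pixels of the current frame's depth confidence map fall
// into each ARConfidenceLevel, to check for the "all low confidence" behavior.
func confidenceHistogram(for frame: ARFrame) -> [ARConfidenceLevel: Int] {
    guard let confidenceMap = frame.sceneDepth?.confidenceMap else { return [:] }

    CVPixelBufferLockBaseAddress(confidenceMap, .readOnly)
    defer { CVPixelBufferUnlockBaseAddress(confidenceMap, .readOnly) }

    let width = CVPixelBufferGetWidth(confidenceMap)
    let height = CVPixelBufferGetHeight(confidenceMap)
    let bytesPerRow = CVPixelBufferGetBytesPerRow(confidenceMap)
    guard let base = CVPixelBufferGetBaseAddress(confidenceMap) else { return [:] }

    var counts: [ARConfidenceLevel: Int] = [:]
    for y in 0..<height {
        // The confidence map is a one-component 8-bit buffer of ARConfidenceLevel raw values.
        let row = base.advanced(by: y * bytesPerRow).assumingMemoryBound(to: UInt8.self)
        for x in 0..<width {
            if let level = ARConfidenceLevel(rawValue: Int(row[x])) {
                counts[level, default: 0] += 1
            }
        }
    }
    return counts
}
```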
Unable to Retain Main App Window State When Transitioning to Immersive Space
In my visionOS app, I have two types of windows:

Main App Window: the default window that launches when the app starts. It displays the video listings and other primary content.
Immersive Space: opens only when a user starts streaming or playing a video.

Issue: When entering the immersive space, the main app window remains visible in front of it unless it is manually closed. To avoid this, I currently close the main window when transitioning to the immersive space and reopen it when exiting. However, this causes the app to restart instead of resuming from its previous state.

Desired behavior: I want the main app window to retain its state and seamlessly resume from where it was before entering immersive mode, rather than restarting.

Attempts and challenges: I tried managing opacity and visibility, but neither worked as expected. I couldn't find a way to push the main window to the background while bringing the immersive space to the foreground.

I am looking for a solution that keeps the main window's state intact while transitioning between immersive and normal modes.
Replies: 1 · Boosts: 0 · Views: 80 · Activity: Mar ’25
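A minimal sketch of the transition pattern the post above describes, using the SwiftUI environment actions for opening an immersive space and dismissing/reopening a window; the scene IDs, view name, and button labels are illustrative assumptions:

```swift
import SwiftUI

struct PlayerControls: View {
    @Environment(\.openImmersiveSpace) private var openImmersiveSpace
    @Environment(\.dismissImmersiveSpace) private var dismissImmersiveSpace
    @Environment(\.openWindow) private var openWindow
    @Environment(\.dismissWindow) private var dismissWindow

    var body: some View {
        VStack {
            Button("Start Streaming") {
                Task {
                    // Open the immersive space, then hide the main window.
                    // Dismissing the window is what loses its state, as
                    // described in the post above.
                    _ = await openImmersiveSpace(id: "Player")
                    dismissWindow(id: "Main")
                }
            }
            Button("Stop Streaming") {
                Task {
                    await dismissImmersiveSpace()
                    openWindow(id: "Main")
                }
            }
        }
    }
}
```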
How to trigger and timeline-control particle emitters in Reality Composer Pro (without Xcode or code)?
Hi, I'm currently using Reality Composer Pro on macOS, and I'm trying to work with particle emitters using only the built-in tools, with no Xcode and no custom Swift code. Here's what I'm trying to achieve:

1. Is it possible to trigger a particle emitter with a tap gesture, directly inside Reality Composer Pro?
2. Can I add a particle emitter to a timeline, so that it emits at a specific time during an animation or behavior sequence?
3. Most importantly, is there a way to precisely control emission behavior, such as when particles start or stop, using visual and intuitive controls, without scripting?

My goal is to set up particle effects that are interactive and timeline-controlled, but managed entirely through Reality Composer Pro's GUI, not through code. Any suggestions, tips, or examples would be very helpful. Thanks in advance!
Replies: 1 · Boosts: 0 · Views: 112 · Activity: Jun ’25