My ARViewContainer code is not working. I don't know how to debug the issue, and I can't see where my results are going. Please help me debug. See the code below:
I'm seeing this error while attempting to compile my visionOS app under Xcode 26. My existing code looks like:
let (naturalSize, formatDescriptions, mediaCharacteristics) = try? await videoTrack.load(.naturalSize, .formatDescriptions, .mediaCharacteristics)
This is now giving a compiler error: Type of expression is ambiguous without a type annotation
I don't see anything that was changed or deprecated in the latest version. Also, loading the properties individually seems to work fine, i.e.:
let naturalSize = try? await videoTrack.load(.naturalSize)
let formatDescriptions = try? await videoTrack.load(.formatDescriptions)
let mediaCharacteristics = try? await videoTrack.load(.mediaCharacteristics)
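In the meantime, the workaround I've settled on is a sketch like the one below, loading each key concurrently with async let instead of the tuple overload:

// Workaround sketch: load each property in its own child task with async let,
// sidestepping the tuple overload the compiler can no longer resolve.
async let sizeTask = videoTrack.load(.naturalSize)
async let formatsTask = videoTrack.load(.formatDescriptions)
async let characteristicsTask = videoTrack.load(.mediaCharacteristics)

let naturalSize = try? await sizeTask
let formatDescriptions = try? await formatsTask
let mediaCharacteristics = try? await characteristicsTask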
How do I convert a blend shape/morphed 3D lip-synced model into a usdz that will play in AR on an iPhone?
Has anyone had success with MeshInstancesComponent? I tried to follow the sample code from What's New in RealityKit, but it wouldn't compile. I was able to use one of the init overloads to get it to compile, but using it crashes both my device and the simulator, even with a single instance.
If I resize the Vision Pro window, the text field and button do not update to the new size.
It works on some screens but not on others.
Please refer to the screenshot below for reference.
Hi all,
I'm working on an ARKit-based iOS app where I need to accurately determine the direction the device is facing to localize objects in the real world. I'm using:
let config = ARWorldTrackingConfiguration()
config.worldAlignment = .gravityAndHeading
Thus, I would expect the world alignment to behave as described on the gravityAndHeading documentation page.
The AR session is started after verifying that CLLocationManager.headingAccuracy <= 20, and the compass appears to be calibrated.
However, I'm seeing a major inconsistency:
When the rear camera is physically pointed toward true North, I would expect:
cameraTransform.columns.2.z ≈ -1 // (i.e. ARKit's -Z pointing North)
But instead, I'm consistently seeing:
cameraTransform.columns.2.z ≈ +0.97 // Implies camera is facing South
Meanwhile, the translation vector behaves as expected:
As I physically move North, cameraTransform.columns.3.z becomes more negative, matching the world’s +Z = South assumption.
For example, let's say I have the device in landscapeRight (or landscapeLeft for UIDeviceOrientation). Let's say the device rear camera is pointing towards True North, and I start moving towards True North. I get something like this:
Camera Transform = simd_float4x4(
[
[0.98446155, -0.030119859, 0.172998, 0.0],
[0.023979114, 0.9990097, 0.037477385, 0.0],
[-0.17395553, -0.032746706, 0.98420894, 0.0],
[0.024039675, -0.037087332, -0.22780673, 0.99999994]
])
As you can see, cameraTransform.columns.2.z is positive despite the rear camera pointing towards True North, while cameraTransform.columns.3.z is correctly negative as the device moves towards True North.
So here is my question:
Why is cameraTransform.columns.2.z positive when the rear camera is physically facing North?
Any clarity would be deeply appreciated. I've read the documentation and tested with different heading accuracies and AR session resets, but I keep running into this orientation mismatch.
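For reference, this is the minimal sketch I use to derive the facing direction from the camera transform (the helper name is mine):

import ARKit
import simd

// Helper (the name is mine): the camera's world-space view direction.
// columns.2 of the transform is the camera's local +Z axis expressed in world
// coordinates, and the camera looks along its local -Z, so the view direction
// is the negation of that column.
func viewDirection(of camera: ARCamera) -> simd_float3 {
    let t = camera.transform
    let forward = -simd_float3(t.columns.2.x, t.columns.2.y, t.columns.2.z)
    return simd_normalize(forward)
}

// If that assumption is right, then with .gravityAndHeading (world -Z = North)
// facing North should give viewDirection(of: frame.camera).z ≈ -1 even while
// columns.2.z itself reads ≈ +1, which may be exactly what I'm observing.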
Thanks in advance!
When I enter the mixed space, a model appears, but there is a WindowGroup behind the model that cannot be fully viewed. If I want to tap to enter the mixed space, I need a way to move the WindowGroup to another position in code rather than moving it manually.

The visionOS 26 Enterprise API has a new feature, Shared Coordinate Space: participants exchange their coordinate data via SharedCoordinateSpaceProvider over their own network, and once a shared coordinate space is established with nearby participants, the connectedParticipantIdentifiers(participants: [UUID]) event is received.
However, at that point Event.participantIdentifier still holds an invalid default value (00000000-0000-0000-FFFF-FFFFFFFF). When or how can I get a valid event.participantIdentifier, or is there some other way to get the local participantIdentifier?
Or, if this is a bug, please fix it in a later beta release. Thank you.
Hello,
We've been working for months now on an App for the Vision Pro.
(it's been great btw!)
We already have an App in the App Store for iOS, and have been migrating our platform from the Microsoft HoloLens 2 to the AVP:
https://apps.microsoft.com/detail/9NPPP031VHD1
We require Main Camera access and have already obtained the Enterprise.license for development purposes.
Unfortunately, we cannot publish our Business App (which uses an Enterprise API) under the same Name/Bundle ID as our iOS App because it would conflict with our current Distribution Method.
We arrived at the conclusion that we need a new Enterprise.license under a different Bundle ID to create a new App for the Business Store.
Has anyone been in the same boat as us, and tried to publish to the Business Store while already having an App in the Public App Store under the same name?
We applied to get another license for distribution under another name (with "Pro" at the end), but it's been stuck in limbo for over a month now (probably because the new bundle ID doesn't have any track record).
Anyhow, thanks for any help, we're open to suggestions as to how to proceed!
Hi,
since iOS 18, UnlitMaterial and ShaderGraphMaterial have an option to disable tone mapping, e.g. via https://developer.apple.com/documentation/realitykit/unlitmaterial/init(applypostprocesstonemap:)
Is it possible to do the same for CustomMaterial? I tried initializing a CustomMaterial based on an UnlitMaterial where tone mapping is disabled, like so:
let unlitMat = UnlitMaterial(applyPostProcessToneMap: false)
let customMaterial = try CustomMaterial(
from: unlitMat,
surfaceShader: surfaceShader,
geometryModifier: geometryModifier
)
but that does not seem to work. The colors of my texture still look altered compared to a plain UnlitMaterial or a ShaderGraphMaterial where it's disabled.
Any hints? Thank you!
When building a multiplayer Tabletop game, the documentation includes how to attach a custom TabletopNetworkSessionCoordinator, which could be used in addition to TabletopGame.MultiplayerDelegate. But so far, we have been unable to create these types of custom coordinators or have a delegate that works.
Our current setup with our generic GroupActivity works by sending the session to TabletopGame's coordinateWithSession method (like in the current sample project), but we didn't find a way to access and control, for example, the arbiter, seats, player events, among other features mentioned on https://developer.apple.com/documentation/tabletopkit/tabletopnetworksession.
Is it correct to expect access to the participants, messenger, or journal without having to maintain a parallel coordinator?
Possibly we are missing something here; any suggestions?
I am trying to loop my videoMaterial. I have researched AVQueuePlayer and AVPlayerLooper and tried to implement them in my code.
Please see attached.
There are no errors showing up but the videoMaterial is no longer working.
Please see the attached for the working code that plays the videoMaterial.
I am stumped; can anyone help me solve this?
Thank you.
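For context, the looping setup I'm attempting looks roughly like the sketch below (the attachment isn't reproduced here; videoURL and modelEntity are placeholders):

import AVFoundation
import RealityKit

// Sketch of the looping attempt. The player and looper need to be kept alive,
// e.g. stored as properties; if the AVPlayerLooper is released, looping stops.
let player = AVQueuePlayer()
let item = AVPlayerItem(url: videoURL)            // videoURL is a placeholder
let looper = AVPlayerLooper(player: player, templateItem: item)

let videoMaterial = VideoMaterial(avPlayer: player)
modelEntity.model?.materials = [videoMaterial]    // modelEntity is a placeholder
player.play()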
Hi, I am a new developer. I want to add articulated objects and deformable objects to my AR game, and I'd like to be able to interact with them. I haven't found any tutorials on this. Please let me know if this is available in visionOS.
A few users have recently reported no longer being able to capture point clouds using our app, specifically on iPhone 15 Pro devices. We recently found an in-house device that exhibits this behavior and found that the confidenceMap contains only low confidence values, regardless of the environment being captured. Our app uses a higher confidence threshold; setting the threshold to a lower value produces noisy results as expected, so that is a non-viable option.
Other LiDAR based apps have been tested with this device and the results are the same. No points, or noisy point clouds in apps that allow a lower confidence threshold setting. On devices that exhibit this behavior the "Displaying a point cloud using scene depth" Apple sample app can be used to visualize the issue.
First reports of this new behavior occurred as early as iOS 18.4.
Looking for recommendations on which team(s) at Apple to reach out to with these findings since the behavior manifests on only a small sample of devices.
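For anyone trying to reproduce this, the check we run on affected devices looks roughly like the sketch below; the confidence threshold we then apply to this fraction is our own and not part of the API:

import ARKit

// Diagnostic sketch: the fraction of pixels in the confidence map that report
// ARConfidenceLevel.high. Requires a session configured with the .sceneDepth
// frame semantic.
func highConfidenceFraction(of frame: ARFrame) -> Float? {
    guard let confidenceMap = frame.sceneDepth?.confidenceMap else { return nil }

    CVPixelBufferLockBaseAddress(confidenceMap, .readOnly)
    defer { CVPixelBufferUnlockBaseAddress(confidenceMap, .readOnly) }

    guard let base = CVPixelBufferGetBaseAddress(confidenceMap) else { return nil }
    let width = CVPixelBufferGetWidth(confidenceMap)
    let height = CVPixelBufferGetHeight(confidenceMap)
    let bytesPerRow = CVPixelBufferGetBytesPerRow(confidenceMap)
    let highValue = UInt8(ARConfidenceLevel.high.rawValue)

    var high = 0
    for y in 0..<height {
        // The confidence map is one byte per pixel (kCVPixelFormatType_OneComponent8).
        let row = base.advanced(by: y * bytesPerRow).assumingMemoryBound(to: UInt8.self)
        for x in 0..<width where row[x] == highValue {
            high += 1
        }
    }
    return Float(high) / Float(width * height)
}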
Hi, I am trying to create a simple effect to create feather edges on the image using Reality Composer Pro. Something like this:
As you can see, it has softer edges on all sides that dissolve into transparency against the background.
This is what I have been able to achieve on my own.
I want to use the "feather" input node value (float) from 0.0 to 1.0 to increase or decrease the strength of the feather edges.
Can I apply .scrollInputBehavior(.enabled, for: .look) to a WebView (wrapped UIViewRepresentable) in a visionOS 26 app?
I tried it myself, but I couldn't get it to work, so I would like to know whether there is any way to do this.
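This is roughly what I tried, as a minimal sketch (the URL is a placeholder):

import SwiftUI
import WebKit

// Minimal UIViewRepresentable wrapper around WKWebView.
struct WebView: UIViewRepresentable {
    let url: URL

    func makeUIView(context: Context) -> WKWebView {
        let webView = WKWebView()
        webView.load(URLRequest(url: url))
        return webView
    }

    func updateUIView(_ uiView: WKWebView, context: Context) {}
}

struct ContentView: View {
    var body: some View {
        WebView(url: URL(string: "https://www.example.com")!)
            // Applied as in the question; whether the wrapped web view's internal
            // scroll view honors it is exactly what I can't get working.
            .scrollInputBehavior(.enabled, for: .look)
    }
}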
Best regards.
Currently I am using a mixed-style immersive space to place both my WindowView (plain style) and ImmersiveView content together. The issue is that depth testing during rendering can always let the virtual content block my normal WindowView. Is it possible to force the windowed view to always display in front of my virtual content in mixed-style immersion? (I know about modelSortGroup, but it doesn't quite fit here.)
Alternatively, can I dynamically change the .progressive value while the immersive space is open (does setting the value to zero effectively mean .mixed)?
Is there a way to use contentCaptureProtected with Quick Look on visionOS 26? Or is there a way to view a spatial photo with Quick Look without the sharing options?
It's much easier to use CustomMaterial to bridge a Metal shader onto a RealityKit model than to achieve the same thing with LowLevelTexture.
Why doesn't visionOS support this material?
Hello everyone, I'm a new developer and I'm still learning the foundations of Swift and SwiftUI while building my first app. Today I wanted to ask you how to implement AR Quick Look views inside my app. I want to be able to dynamically preview AR objects in a dedicated view; however, I don't seem to have understood where and how to locate AR objects inside my project.
I tried including them in the Assets folder of the project, in the Resources folder, and in the main folder of my project alongside the MyAppApp.swift file. None of these approaches worked: the objects were never located. I made sure to specify the path to the files every time, but somehow the location isn't recognized. I also tried giving no path so that the app would search for the files in their default location (which I apparently haven't grasped yet), but that attempt failed as well.
I don't have the code sample on me at the moment, but I will write a follow-up comment on this post to show you what I wrote, in case anyone is interested in debugging my code. Meanwhile, if anyone would be so kind as to point me to the relevant support article, or to comment below with the sample code they used in their app, I would very much appreciate it, so that I can start debugging. Thank you for reading this, I appreciate you.
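For anyone willing to take a look, the pattern I've been trying to get working is roughly the sketch below (Model.usdz is a placeholder name, and the file has to be added to the app target's bundle resources, not only to the asset catalog):

import SwiftUI
import QuickLook
import ARKit

// Sketch: wrap QLPreviewController so a .usdz bundled with the app can be
// previewed in AR Quick Look from SwiftUI.
struct ARQuickLookView: UIViewControllerRepresentable {
    func makeUIViewController(context: Context) -> QLPreviewController {
        let controller = QLPreviewController()
        controller.dataSource = context.coordinator
        return controller
    }

    func updateUIViewController(_ uiViewController: QLPreviewController, context: Context) {}

    func makeCoordinator() -> Coordinator { Coordinator() }

    class Coordinator: NSObject, QLPreviewControllerDataSource {
        func numberOfPreviewItems(in controller: QLPreviewController) -> Int { 1 }

        func previewController(_ controller: QLPreviewController,
                               previewItemAt index: Int) -> QLPreviewItem {
            // Bundle.main only finds the file if it is in "Copy Bundle Resources",
            // which is where I suspect my setup goes wrong.
            guard let url = Bundle.main.url(forResource: "Model", withExtension: "usdz") else {
                fatalError("Model.usdz not found in the app bundle")
            }
            return ARQuickLookPreviewItem(fileAt: url)
        }
    }
}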