Discuss spatial computing on Apple platforms and how to design and build an entirely new universe of apps and games for Apple Vision Pro.
Hi. I am mixing content destined for Vision Pro, locked to video. I have the AAX installer, but the ASAF video player demonstrated in the QuickTime videos is not included in the Pro Tools install package. Would it be possible to post a link?
Hi everyone,
I am wondering which settings the camera(s) were using at the time they were calibrated.
For instance, one aspect that is easy to find is the reference resolution of the images taken when calibrating the intrinsics, which is retrieved via intrinsicMatrixReferenceDimensions. This makes sure the principal point is referenced to the resolution that was in use while the calibration was performed.
However, I recently saw that there are focusing modes that can physically displace the lens.
Settings like:
AutoFocusRangeRestriction: none, near, far
setFocusModeLocked: locks the lens position at the specified value and sets the focus mode to a locked state.
My concern lies in the impact these focusing lens displacements can have on the intrinsic matrix parameters: once the lens position has changed, those parameters may no longer describe the camera.
In simple words: what focus mode/range were the cameras set to when they were calibrated for intrinsics?
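For illustration, a minimal AVFoundation sketch of locking the lens before capturing calibration frames; the lensPosition value of 0.5 is an arbitrary example, not a recommendation:

import AVFoundation

// Lock the lens at a fixed position so the intrinsics remain valid for that
// position while calibration frames are captured (a sketch, not production code).
func lockFocusForCalibration(on device: AVCaptureDevice) throws {
    try device.lockForConfiguration()
    defer { device.unlockForConfiguration() }
    if device.isLockingFocusWithCustomLensPositionSupported {
        device.setFocusModeLocked(lensPosition: 0.5) { _ in
            // The lens has settled; start capturing calibration frames from here.
        }
    }
}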
Hi, I have a hand model in FBX that I'm exporting to USD in Blender. I get a skinned mesh, and while I can track the whole hand, how do I track each individual joint, assign it, and animate the skinned mesh itself? All my attempts suggest this is not possible in RealityKit as of now. True?
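For what it's worth, the per-joint data itself is reachable through ARKit's hand tracking on visionOS; a minimal sketch, assuming a visionOS app with hand-tracking authorization (driving your own skinned rig from these transforms is manual work, consistent with what your attempts suggest):

import ARKit

let session = ARKitSession()
let handTracking = HandTrackingProvider()

func trackHandJoints() async throws {
    try await session.run([handTracking])
    for await update in handTracking.anchorUpdates {
        let anchor = update.anchor
        guard anchor.isTracked,
              let wrist = anchor.handSkeleton?.joint(.wrist) else { continue }
        // World-space joint transform = anchor transform * joint's anchor-relative transform.
        let worldTransform = anchor.originFromAnchorTransform * wrist.anchorFromJointTransform
        print("wrist:", worldTransform)
    }
}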
Hi Apple Team,
I’m working on a human portrait scanning application using PhotogrammetrySession, and I’ve been very impressed by the results. Thank you for building such a powerful and accessible photogrammetry solution into macOS!
I do, however, have a question regarding mesh detail limitations on different Mac hardware configurations.
When using PhotogrammetrySession.Request.Detail.custom and trying to set maximumPolygonCount = 1000000, I see the following log message:
Clamped max poly count: 1000000 to device limit. 250000 is used.
This is on an M1 Max with 32 GB RAM.
I’m aware that PhotogrammetrySession.limits can report values like maximumInputImageDimension and maximumNumberOfInputImages, but I haven’t found documentation on how the maximumPolygonCount is determined, and what hardware specs influence it.
Is it tied more to:
• GPU performance (e.g. neural/graphics cores)?
• CPU architecture?
• Memory size or bandwidth?
• Or is it fixed per SoC generation?
I’d love to understand what kind of hardware upgrades (e.g. moving to M4 Pro or increasing RAM) could allow me to increase mesh complexity and generate more detailed models.
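For reference, the kind of setup that triggers the clamp; a minimal sketch, assuming the macOS PhotogrammetrySession API with a custom detail specification (paths are placeholders):

import RealityKit

func reconstruct() throws {
    var configuration = PhotogrammetrySession.Configuration()
    // Requested budget; the session clamps this to the device limit (250,000 on my M1 Max).
    configuration.customDetailSpecification.maximumPolygonCount = 1_000_000

    let session = try PhotogrammetrySession(
        input: URL(fileURLWithPath: "/path/to/images", isDirectory: true),
        configuration: configuration)
    try session.process(requests: [
        .modelFile(url: URL(fileURLWithPath: "/path/to/output.usdz"), detail: .custom)
    ])
}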
Any insights would be greatly appreciated—and if this is covered in upcoming WWDC sessions or documentation, I’d be happy to tune in.
Thanks in advance!
KitCheng
I'm developing an AR application for the iPad Pro whose primary purpose is to overlay 3D design data on top of production parts. For alignment, we are using Vuforia (model targets), which works really well locally. But the further the device moves from the point of original alignment, the more overlay error (drift?) we see.
My primary questions are:
Are there any best practices to stabilize frame-to-frame tracking when using model targets? We notice drift as soon as the device starts moving (the drift appears to occur specifically in the direction the device is moving). After about 15 feet of movement, we observe about 3-6" of overlay error.
These use cases can be over 100 feet long. In order to reset drift, we understand we'll need multiple alignment points (model targets) along the way. Is there a standard/best practice for this? Ex: have a new alignment point every x-feet?
We are using plane anchors to set our alignment. Typically we attach the content to the nearest plane; however, the anchor point can be very far away (the origin of the model, which often is not near where the virtual content is). Could this be the issue? The anchor is far from the plane that we attach it to. Would moving the anchor closer to the plane we attach it to improve stability? After a few steps, the plane we originally attached to will be out of FoV anyway.
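One thing we may try, sketched here under the assumption of an ARView-based setup: create an ARAnchor at (or near) the content's own world transform instead of at the distant model origin, so ARKit's local corrections apply where the content actually sits:

import ARKit
import RealityKit

func reanchor(_ content: Entity, in arView: ARView, at worldTransform: simd_float4x4) {
    // Anchor placed at the content's location, not at the faraway model origin.
    let anchor = ARAnchor(name: "contentAnchor", transform: worldTransform)
    arView.session.add(anchor: anchor)
    let anchorEntity = AnchorEntity(anchor: anchor)
    anchorEntity.addChild(content)
    arView.scene.addAnchor(anchorEntity)
}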
Thanks in advance!
Topic: Spatial Computing
SubTopic: ARKit
The AR-based app I am working on is experiencing an issue. Sometimes, the AR session fails with a call to my ARSessionObserver's session(_ session: ARSession, didFailWithError error: Error) with the following error:
Error Domain=com.apple.arkit.error Code=102 "Required sensor failed."
NSLocalizedFailureReason="A sensor failed to deliver the required input."
NSLocalizedRecoverySuggestion="Make sure that the application has the required privacy settings."
The underlying error seems to point to the CoreMotion framework:
Domain=CMErrorDomain Code=102 "(null)"
Some people seem to have experienced this issue and solved it by making sure that the Compass Calibration switch is ON in Settings > Privacy > Location Services > System Services. The thing is, it is already ON when I experience this issue.
For context, ARWorldTrackingConfiguration.worldAlignment is set to .gravity.
I also noticed that this issue happens way more often on the iPhone 16e than on any other device.
Has anyone had similar experiences? I am looking for a way to prevent this error from happening (ideally) or to handle it in a way that does not affect the user. Any help is appreciated.
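For the handling side, a minimal sketch, assuming an ARSessionObserver where the session can simply be restarted with the same configuration (whether a restart actually recovers from the sensor failure is the open question):

import ARKit

func session(_ session: ARSession, didFailWithError error: Error) {
    let nsError = error as NSError
    // ARError.Code.sensorFailed is code 102 in com.apple.arkit.error.
    guard nsError.domain == ARError.errorDomain,
          nsError.code == ARError.Code.sensorFailed.rawValue else { return }
    // Attempt recovery by restarting with a fresh configuration.
    let configuration = ARWorldTrackingConfiguration()
    configuration.worldAlignment = .gravity
    session.run(configuration, options: [.resetTracking, .removeExistingAnchors])
}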
Specific symptoms: the materials display correctly in the Unity editor, but after deploying to a physical Vision Pro, some materials are lost or shader effects are wrong (for example, the transparency channel fails or lighting calculations are incorrect). This problem is blocking development progress, and I hope to get help from technical support.
Topic: Spatial Computing
SubTopic: Reality Composer Pro
Hello Community,
I’m currently working with the sample code “CapturingDepthUsingTheLiDARCamera” and using it to capture the depth map of an image taken with the iPhone 14 Pro.
From this depth map, I generate a point cloud using the intrinsic camera parameters.
I've noticed that objects not facing the camera directly appear distorted in the resulting point cloud.
For example: An object with surfaces that are perpendicular to each other appears with a sharper angle in the point cloud — around 60° instead of 90°.
My question is:
Is this due to the general accuracy limitations of the LiDAR sensor? Or could it be related to the sample code?
To obtain the depth map, I’m using:
AVCapturePhoto.depthData.converting(toDepthDataType: kCVPixelFormatType_DepthFloat32)
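In case it helps with debugging, a minimal back-projection sketch, assuming a pinhole model and that the intrinsics are rescaled from intrinsicMatrixReferenceDimensions to the depth map's resolution; a scaling mismatch here is a common cause of exactly this kind of angular distortion:

import AVFoundation
import simd

func pointCloud(from depth: CVPixelBuffer,
                calibration: AVCameraCalibrationData) -> [SIMD3<Float>] {
    let width = CVPixelBufferGetWidth(depth)
    let height = CVPixelBufferGetHeight(depth)
    // Scale intrinsics from the reference dimensions to the depth map size.
    let scale = Float(width) / Float(calibration.intrinsicMatrixReferenceDimensions.width)
    let K = calibration.intrinsicMatrix // columns: (fx,0,0), (0,fy,0), (cx,cy,1)
    let fx = K[0][0] * scale, fy = K[1][1] * scale
    let cx = K[2][0] * scale, cy = K[2][1] * scale

    CVPixelBufferLockBaseAddress(depth, .readOnly)
    defer { CVPixelBufferUnlockBaseAddress(depth, .readOnly) }
    let rowBytes = CVPixelBufferGetBytesPerRow(depth)
    guard let base = CVPixelBufferGetBaseAddress(depth) else { return [] }

    var points: [SIMD3<Float>] = []
    for v in 0..<height {
        let row = base.advanced(by: v * rowBytes).assumingMemoryBound(to: Float32.self)
        for u in 0..<width {
            let z = row[u]
            guard z.isFinite, z > 0 else { continue }
            // Pinhole back-projection of pixel (u, v) at depth z.
            points.append(SIMD3<Float>((Float(u) - cx) * z / fx,
                                       (Float(v) - cy) * z / fy,
                                       z))
        }
    }
    return points
}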
Thanks in advance for your help!
I am working on an app that will allow a user to load and share their model files (usdz, usda, usdc). I'm looking at security options to prevent bad actors. Are there security or validation methods built into ARKit/RealityKit/CloudKit when loading models or saving them on the cloud? I want to ensure no one can inject any sort of exploit through these file types.
I'd like to compose an APNs message (using FCM).
What should I do?
Topic: Spatial Computing
SubTopic: ARKit
Platform: iOS 18
Tech: RealityView
Hi! I was wondering if RealityView now provides a way for its session to persist anchor data in a world, such that the anchor locations saved in one session can be loaded in another session, preserving the exact same anchor positions.
I know that ARWorldMap in ARKit does that, but I was not able to find a way to use it with RealityView. I think it's because RealityView has ARKit under its hood but does not expose the ARKit session info publicly to the client code.
So I was wondering if there's a SwiftUI + RealityView approach that can help me to achieve a similar goal: Come back to the same location and see the object in exactly the same place.
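For comparison, the classic route that does expose the session is ARView; a minimal sketch of saving an ARWorldMap there (loading is the mirror image via the configuration's initialWorldMap):

import ARKit
import RealityKit

func saveWorldMap(from arView: ARView, to url: URL) {
    arView.session.getCurrentWorldMap { worldMap, error in
        guard let worldMap else { return }
        // Archive the map; reload it later and assign it to
        // ARWorldTrackingConfiguration.initialWorldMap before running the session.
        if let data = try? NSKeyedArchiver.archivedData(withRootObject: worldMap,
                                                        requiringSecureCoding: true) {
            try? data.write(to: url)
        }
    }
}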
Thanks!
I am considering adding finger pad haptics (data for haptic feedback would flow from the AVP to the fingers, not vice versa): simple piezos wired to a wrist connection holding the driver/battery.
But I'm concerned it will impact hand tracking. Any guidance regarding gloves and/or the size of any peripherals attached to fingers?
Or, if anyone has another (inexpensive) low-profile option on the market, please LMK. Thanks!
Topic: Spatial Computing
SubTopic: General
Can I apply .scrollInputBehavior(.enabled, for: .look) to a WebView (wrapped UIViewRepresentable) in a visionOS 26 app?
I tried it myself, but couldn't get it to work, so I would like to know whether there is any way to do this.
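For context, a sketch of the kind of setup in question; WebView here is a hypothetical UIViewRepresentable wrapping WKWebView, and the open question is whether the .look scroll input ever reaches the wrapped UIScrollView:

import SwiftUI
import WebKit

struct WebView: UIViewRepresentable {
    let url: URL
    func makeUIView(context: Context) -> WKWebView {
        let webView = WKWebView()
        webView.load(URLRequest(url: url))
        return webView
    }
    func updateUIView(_ uiView: WKWebView, context: Context) {}
}

struct ContentView: View {
    var body: some View {
        WebView(url: URL(string: "https://www.example.com")!)
            // Enables look-to-scroll for native SwiftUI scroll views on visionOS 26;
            // it appears to have no effect on the UIKit scroll view inside WKWebView.
            .scrollInputBehavior(.enabled, for: .look)
    }
}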
Best regards.
Problem Description:
I am developing an application that runs in the Shared Space on Apple Vision Pro using Unity. When using the UI ScrollView (Scroll View) component, I found that the Mask / RectMask2D does not function in the Shared Space.
Scrolling content is not masked or cropped; it extends beyond the view boundary and is displayed directly.
The same UI works correctly on platforms such as the Unity Editor, iOS, and macOS; the issue occurs only in the Shared Space on Vision Pro.
Reproduction steps:
Create a ScrollView in Unity.
Add a Mask or RectMask2D to the viewport.
Deploy the application to Apple Vision Pro and run it in Shared Space mode.
Scrolled content is not clipped by the mask; the masking is entirely ineffective.
Expected behavior:
The content of ScrollView should be properly clipped by Mask / RectMask2D and should not render outside the mask boundary.
Actual results:
In the shared space of Vision Pro, the mask is ineffective, causing scrolling content to extend beyond the designated area and resulting in severe UI distortion.
Environmental Information:
Device: Apple Vision Pro
Mode: Shared Space
Unity Version: 6000.0.40f1
visionOS version: visionOS 26.0
Unity PolySpatial Version: 2.0.4
Impact
This issue causes Unity UI to fail to display correctly on Vision Pro, preventing ScrollView from properly clipping content, which impacts the UI experience and interaction effects in practical applications.
Expected Result: When running a Unity app in the shared space of visionOS, the Mask / RectMask2D of ScrollView functions correctly
Hello everyone, I'm a new developer and I'm still learning the foundations of Swift and SwiftUI while building my first app. Today I wanted to ask you how to implement AR Quick Look views inside my app. I want to be able to dynamically preview AR objects in a dedicated view; however, I don't seem to have understood where and how to locate AR objects inside my project. I tried including them in the Assets folder of the project, in the Resources folder, and within the main folder of my project alongside the MyAppApp.swift file. None of these methods worked: none of the objects was ever located. I made sure to specify the path to the files every time, but somehow the location isn't recognized. I also tried giving no path so that the app would search for the files in their default location (which I apparently haven't grasped yet), but that attempt failed too. I don't have the code sample on me at the moment, but I will write a follow-up comment on this post to show you what I wrote in case anyone is interested in debugging my code. Meanwhile, if anyone would be so kind as to point me to the relevant support article or to comment below with the sample code they used in their app, I would very much appreciate it, so that I can start debugging. Thank you for reading this, I appreciate you.
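In case it helps, a minimal sketch of the usual pattern, assuming a usdz named "Robot.usdz" (a hypothetical name) added to the app target as a plain bundle resource; note that files placed in Assets.xcassets are not findable via Bundle.main, which matches the symptom you describe:

import ARKit
import QuickLook
import SwiftUI

struct ARQuickLookView: UIViewControllerRepresentable {
    func makeUIViewController(context: Context) -> QLPreviewController {
        let controller = QLPreviewController()
        controller.dataSource = context.coordinator
        return controller
    }
    func updateUIViewController(_ controller: QLPreviewController, context: Context) {}
    func makeCoordinator() -> Coordinator { Coordinator() }

    final class Coordinator: NSObject, QLPreviewControllerDataSource {
        func numberOfPreviewItems(in controller: QLPreviewController) -> Int { 1 }
        func previewController(_ controller: QLPreviewController,
                               previewItemAt index: Int) -> QLPreviewItem {
            // The usdz must be a bundle resource (member of the app target),
            // not an asset-catalog entry, for this lookup to succeed.
            guard let url = Bundle.main.url(forResource: "Robot", withExtension: "usdz") else {
                fatalError("Robot.usdz not found in the app bundle.")
            }
            return ARQuickLookPreviewItem(fileAt: url)
        }
    }
}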
I hope to achieve stable transmission.
Also, the colors are different: the colors seen in the headset are not consistent with the colors projected on the screen.
Is there a way to use contentCaptureProtected with Quick Look on visionOS 26? Or is there a way to view a spatial photo with Quick Look without the sharing options?
In my Reality Composer Pro workflow for Vision Pro development, I’m using xcrun realitytool image to pre-compress textures into .ktx format, typically using ASTC block compression. These textures are used for cubemaps and environment assets.
I’ve noticed that regardless of the image content, whether it’s a highly detailed photo or a completely black image, once compressed with the same ASTC block size (e.g., ASTC_8x8) the resulting .ktx file size is nearly identical. There appears to be no content-aware logic that adapts the compression ratio to the actual texture complexity. (As far as I understand, ASTC is a fixed-rate block codec, so identical sizes are expected for the raw encode.)
In contrast, Unreal Engine behaves differently: even when all cubemap faces are imported at the same resolution as DDS textures, the engine performs content-aware compression during packaging:
Low-complexity images are compressed more aggressively
The final packaged file size varies based on content complexity
Since Reality Composer Pro requires textures to be pre-compressed as .ktx, there’s no opportunity for runtime optimization or per-image compression adjustment.
Just wondering: is there any recommended way to implement content-aware compression for .ktx textures in Reality Composer Pro?
Or any best practices to optimize .ktx sizes based on image complexity?
Thanks!
Hi,
after upgrading to visionOS 2.4.1 (from 1.0), my Vision Pro is stuck on the "Retrieving configuration" screen. The Apple Store didn't take my case since the device was sold in the USA and the product isn't yet available on the Italian market. I don't have a developer strap; how can I manage the issue?
Thank you
Topic: Spatial Computing
SubTopic: General
After implementing the method for obtaining video streams discussed at WWDC, I found that the stream I get does not include the digital models in the space or app content such as the UI. I would like to ask: how do I obtain a video stream or frame that contains only the physical world?
let formats = CameraVideoFormat.supportedVideoFormats(for: .main, cameraPositions:[.left])
let cameraFrameProvider = CameraFrameProvider()
var arKitSession = ARKitSession()
var pixelBuffer: CVPixelBuffer?
var cameraAccessStatus = ARKitSession.AuthorizationStatus.notDetermined
let worldTracking = WorldTrackingProvider()
func requestWorldSensingCameraAccess() async {
let authorizationResult = await arKitSession.requestAuthorization(for: [.cameraAccess])
cameraAccessStatus = authorizationResult[.cameraAccess]!
}
func queryAuthorizationCameraAccess() async{
let authorizationResult = await arKitSession.queryAuthorization(for: [.cameraAccess])
cameraAccessStatus = authorizationResult[.cameraAccess]!
}
func monitorSessionEvents() async {
for await event in arKitSession.events {
switch event {
case .dataProviderStateChanged(_, let newState, let error):
switch newState {
case .initialized:
break
case .running:
break
case .paused:
break
case .stopped:
if let error {
print("An error occurred: \(error)")
}
@unknown default:
break
}
case .authorizationChanged(let type, let status):
print("Authorization type \(type) changed to \(status)")
default:
print("An unknown event occured \(event)")
}
}
}
@MainActor
func processWorldAnchorUpdates() async {
for await anchorUpdate in worldTracking.anchorUpdates {
switch anchorUpdate.event {
case .added:
// Check whether a persisted object is attached to this added anchor -
// it may be a world anchor from a previous run of this app.
// ARKit surfaces all world anchors associated with this app
// when the world-tracking provider starts.
fallthrough
case .updated:
// Keep the placed object's position in sync with its corresponding
// world anchor, and hide the object if the anchor isn't tracked.
break
case .removed:
// If the corresponding world anchor was removed, remove the placed object.
break
}
}
}
func arkitRun() async{
do {
try await arKitSession.run([cameraFrameProvider,worldTracking])
} catch {
return
}
}
@MainActor
func processDeviceAnchorUpdates() async {
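// Note: run(function:withFrequency:) is presumably a helper defined elsewhere in
// this project (it repeatedly awaits `function` at the given frequency in Hz);
// it is not part of ARKit itself.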
await run(function: self.cameraFrameUpdatesBuffer, withFrequency: 90)
}
@MainActor
func cameraFrameUpdatesBuffer() async {
    guard let cameraFrameUpdates =
            cameraFrameProvider.cameraFrameUpdates(for: formats[0]),
          let cameraFrameUpdates1 =
            cameraFrameProvider.cameraFrameUpdates(for: formats[1]) else {
        return
    }
    // Consume both streams concurrently: these streams never finish, so a
    // second sequential `for await` loop would never be reached.
    await withTaskGroup(of: Void.self) { group in
        group.addTask { @MainActor in
            for await cameraFrame in cameraFrameUpdates {
                guard let mainCameraSample = cameraFrame.sample(for: .left) else {
                    continue
                }
                self.pixelBuffer = mainCameraSample.pixelBuffer
            }
        }
        group.addTask { @MainActor in
            for await cameraFrame in cameraFrameUpdates1 {
                guard let mainCameraSample = cameraFrame.sample(for: .left) else {
                    continue
                }
                if let buffer = self.pixelBuffer {
                    // mergeTwoFrames(frame1:frame2:outputSize:) is a helper defined elsewhere.
                    self.pixelBuffer = self.mergeTwoFrames(frame1: buffer,
                                                           frame2: mainCameraSample.pixelBuffer,
                                                           outputSize: CGSize(width: 1920, height: 1080))
                }
            }
        }
    }
}