Happy new year to all!
I have created an iOS app that also runs on Apple Vision Pro.
On iOS, when you present the fileImporter modal, you can swipe it down to dismiss.
In visionOS, however, the same modal CANNOT be swiped down to cancel/dismiss: if you are drilled deep into a file hierarchy, you have to navigate back to the top level and tap the X to dismiss.
Is there a way to add swipe down to the visionOS implementation of fileImporter, or any other workaround so the user doesn't have to navigate back to the top to dismiss?
Again, this is not a native visionOS app but an iOS app running in compatibility mode on Apple Vision Pro.
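One workaround I've been considering (just a sketch, not verified on visionOS) is to skip fileImporter entirely and wrap UIDocumentPickerViewController in a sheet whose toolbar always shows a Cancel button, so dismissal never depends on navigating back to the top. All type names below are placeholders:

import SwiftUI
import UIKit
import UniformTypeIdentifiers

// Hypothetical wrapper around UIDocumentPickerViewController (instead of fileImporter).
struct DocumentPickerView: UIViewControllerRepresentable {
    var onPick: ([URL]) -> Void

    func makeUIViewController(context: Context) -> UIDocumentPickerViewController {
        let picker = UIDocumentPickerViewController(forOpeningContentTypes: [.item])
        picker.delegate = context.coordinator
        return picker
    }

    func updateUIViewController(_ picker: UIDocumentPickerViewController, context: Context) {}

    func makeCoordinator() -> Coordinator { Coordinator(onPick: onPick) }

    final class Coordinator: NSObject, UIDocumentPickerDelegate {
        let onPick: ([URL]) -> Void
        init(onPick: @escaping ([URL]) -> Void) { self.onPick = onPick }

        func documentPicker(_ controller: UIDocumentPickerViewController,
                            didPickDocumentsAt urls: [URL]) {
            onPick(urls)
        }
    }
}

// Presented via .sheet; the Cancel button stays reachable no matter how deep
// the user has navigated in the file hierarchy.
struct PickerSheet: View {
    @Environment(\.dismiss) private var dismiss
    var onPick: ([URL]) -> Void

    var body: some View {
        NavigationStack {
            DocumentPickerView { urls in
                onPick(urls)
                dismiss()
            }
            .toolbar {
                ToolbarItem(placement: .cancellationAction) {
                    Button("Cancel") { dismiss() }
                }
            }
        }
    }
}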
Thanks!
Hello,
I am currently considering developing a Full Space app that enables a shared visionOS experience with nearby users.
Intended Features
A Mixed Full Space app in which dozens of 3D models are placed in the space.
These 3D models may play embedded animations when tapped, be programmatically moved or rotated, or be controlled via Reality Composer Pro timelines.
The app also includes audio, spatial audio, videos with audio, and videos without audio, which are rendered as VideoTextures on planes and played back in the space.
Some media elements play automatically, while others are triggered by user interaction.
However, it is unclear whether AVPlaybackCoordinator supports shared playback across multiple types of media, such as:
audio only
spatial audio
video without audio
video with audio
I am also unsure whether there are alternative or recommended approaches for synchronizing playback in this scenario.
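For what it's worth, the way I imagine attaching the coordination is the standard pattern below. This is only a sketch that assumes a GroupSession has already been joined elsewhere, and I don't know whether it behaves the same across all of the media types listed above:

import AVFoundation
import GroupActivities

// Sketch: tie an AVPlayer's playback coordinator to an existing group session.
func coordinate<Activity: GroupActivity>(_ player: AVPlayer,
                                         with session: GroupSession<Activity>) {
    player.playbackCoordinator.coordinateWithSession(session)
}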
Questions
Is it technically possible to implement the experience described above using visionOS?
Are there any important implementation considerations or limitations that should be taken into account?
For example, when two participants experience the app simultaneously, how is the content positioned for each participant?
Is the spatial placement of content shared across participants, or is it positioned relative to each participant’s viewpoint?
For nearby participants, is it necessary to register a spatial Persona? My understanding is that spatial Personas are not visible for nearby users during the experience; is this correct?
When experiencing SharePlay with nearby users, is it possible to share the experience without registering the other participant’s contact information?
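Regarding the positioning questions above, my current (unverified) understanding is that shared placement is driven by the session's system coordinator, configured roughly as below; MyActivity is a placeholder GroupActivity type:

import GroupActivities

struct MyActivity: GroupActivity {
    var metadata: GroupActivityMetadata {
        var meta = GroupActivityMetadata()
        meta.title = "Shared Room"
        meta.type = .generic
        return meta
    }
}

func observeSessions() async {
    for await session in MyActivity.sessions() {
        // The system coordinator controls how shared content is arranged
        // relative to participants (spatial template, group immersive space).
        if let coordinator = await session.systemCoordinator {
            var config = SystemCoordinator.Configuration()
            config.supportsGroupImmersiveSpace = true
            config.spatialTemplatePreference = .sideBySide
            coordinator.configuration = config
        }
        session.join()
    }
}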
I have watched the following session, but I was unable to fully understand the feasibility of the above use case or the concrete implementation details:
https://developer.apple.com/videos/play/wwdc2025/318/
Thank you.
We're trying to switch from main camera access in ARKit to screen capture with passthrough, but we're facing some issues, and it's proving a bit complicated to debug.
We have set up a broadcast extension and added logging in the SampleHandler, but nothing appears in the console and the recording never starts. We also set up the picker, and our extension shows up in Control Center as one of the choices, but tapping Start results in it stopping less than a second later.
The only messages we see in Console.app are rather contradictory:
[INFO] -[RPRecordingManager getSystemBroadcastExtensionInfo:]_block_invoke:1333 Extension has passthrough license
and right after it:
[INFO] -[RPRecordingManager getSystemBroadcastExtensionInfo:]_block_invoke:1336 Extension does not have passthrough license
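For reference, our SampleHandler is essentially the stock ReplayKit template plus logging, roughly like this (the subsystem string is a placeholder), and none of these log lines ever show up:

import ReplayKit
import CoreMedia
import OSLog

class SampleHandler: RPBroadcastSampleHandler {
    private let log = Logger(subsystem: "com.example.broadcast", category: "SampleHandler")

    override func broadcastStarted(withSetupInfo setupInfo: [String: NSObject]?) {
        log.info("broadcastStarted")
    }

    override func broadcastFinished() {
        log.info("broadcastFinished")
    }

    override func processSampleBuffer(_ sampleBuffer: CMSampleBuffer,
                                      with sampleBufferType: RPSampleBufferType) {
        if sampleBufferType == .video {
            log.debug("received a video sample buffer")
        }
    }
}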
Has anyone tried to make an ILPD-based AIME file?
When I try, the resulting AIME switches to a USDZ mesh instead of saving the ILPD data.
The samples shown in a volumetric window work great, but after moving to an immersive experience the pen's physical buttons don't work while you're focusing on an entity that has a collision.
During a live stream we have to operate the Vision Pro (capture), a Mac (stream output), and a control console (scene switching) at the same time. There is no unified control entry point, and operations such as adjusting 3D models or calibrating image quality require switching between multiple devices, which is error-prone and inefficient.
Expectation
For live-streaming scenarios, provide dedicated desktop control software that manages the Vision Pro's capture parameters, 3D model switching, and mixed-reality compositing in one place, making multi-device coordination visual and convenient.
The image brightness fluctuates irregularly (alternating brighter and darker) with no manual control, which distorts product color reproduction and causes abnormal exposure of the presenter's face (over- or under-exposed), seriously degrading the live presentation.
Expectation
· Improve the auto-exposure algorithm for live streaming to increase brightness stability under complex lighting.
· Add a dedicated "live mode" brightness lock that allows brightness to be set manually and locked, so image quality stays controllable during a stream.
When switching between the two sources, brightness, color saturation, contrast, and other image parameters differ noticeably, producing a jarring visual break that disrupts the continuity of the stream and hurts viewer immersion.
Expectation
· Benchmark against the image quality of a conventional DSLR used for live streaming and improve the Vision Pro's brightness and color reproduction.
· Provide custom image-quality adjustment (brightness, contrast, color temperature, etc.) on the device or in companion software, so the picture can be calibrated manually before the stream to match the DSLR's look.
The live stream's resolution drops significantly after transmission; detail and sharpness are lost, so key information about 3D furniture products, such as texture and dimensions, cannot be shown accurately, which affects buyers' judgment of the products.
Expectation
Optimize the resolution-compression strategy used during stream transmission to reduce quality loss and improve the clarity of the stream received on the Mac, matching the precision needed for 3D product presentation.
When the wearer's head moves naturally, the captured footage shakes visibly, which makes viewers feel dizzy and seriously hurts the immersion of the stream and the efficiency of purchase decisions.
Request
Improve the built-in stabilization algorithm to reduce the impact of normal head movement on image stability and make the live footage smoother.
Hello,
I want to understand the current state of developing for Apple Vision Pro. I want to stream video from a remote server in real time; it is a live stream, so I can't download it.
I want to serve both a low-quality stream and a high-resolution stream, where the server only sends the high-resolution "box" the user is looking at. Is there any API to track where the user is looking within the experience?
Thanks,
Send to Device in Reality Composer Pro no longer works for me; the progress spinner just keeps spinning...
This happens with both Wi-Fi and the Developer Strap (first gen).
Are there any known workarounds or ways to recover?
I posted https://developer.apple.com/forums/thread/809481 yesterday about an issue I discovered with pushWindow in visionOS 26.2 RC, but today I discovered a second problem with pushWindow.
If window A calls pushWindow to present window B, and the user pins window B to a wall, the following unexpected behaviors are observed:
Window B spontaneously disappears.
If the user re-launches the (still running) app from the visionOS home view, both window A and window B appear simultaneously. I assume only window B should be visible at this point, since window A pushed window B.
If the user closes window B, it's now impossible to present window B again. Calls to pushWindow appear to be ignored.
If the user force-quits the app and relaunches it, and pushWindow is called again, window B appears, but window A remains visible.
I also noticed this surprising behavior:
Once pushWindow enters this broken state, it affects every app on the system that subsequently calls pushWindow, not just the app whose pushed window was pinned.
A workaround is to reboot the device, and then the system will behave as expected until the next time the user pins a pushed window.
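For context, the setup is essentially this minimal arrangement (window IDs are placeholders):

import SwiftUI

@main
struct PushWindowDemoApp: App {
    var body: some Scene {
        WindowGroup(id: "A") {
            WindowAView()
        }
        WindowGroup(id: "B") {
            // Pinning this window to a wall triggers the behaviors described above.
            Text("Window B")
        }
    }
}

struct WindowAView: View {
    @Environment(\.pushWindow) private var pushWindow

    var body: some View {
        Button("Push Window B") {
            pushWindow(id: "B")
        }
    }
}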
I recently added pushWindow to my app, and I discovered that in visionOS 26.2 RC (23N301), pushWindow followed by dismissWindow no longer works as expected.
Specifically, if the user moves the pushed window, then when the pushed window is later dismissed, the parent window's position isn't aligned with the pushed window's new position. Its original position is restored instead.
Curiously, the bug only happens when an app is launched from the visionOS home view, and not when an app is launched from Xcode. It also doesn't happen in the visionOS 26.2 simulator.
Another interesting detail is that while the parent window is hidden, if the user long-presses the Digital Crown and then dismisses the pushed window, the parent window's position seems to be immune from the Digital Crown scene reorientation. It's restored to its original real world position.
Demo video: https://youtu.be/zR3t2ON3Wz0
I've submitted feedback as FB21287011 with a sample app and detailed repro steps.
Has anyone else encountered this issue already and figured out a workaround? It would be nice if I could get pushWindow to work correctly in my app.
Thanks everybody! 😀
Hello,
I am in the process of implementing SharePlay support in my visionOS app. Everything runs fine when I test locally, but when the app is distributed via TestFlight, calling try await activity.activate() shows the SharePlay dialog as usual; then, when I start a new FaceTime call, my ImmersiveSpace gets dismissed.
This is only happening when the app is distributed via TestFlight, when I run it locally the ImmersiveSpace stays active as expected.
Looking at the console on my Mac I found this log:
Invalid initial client settings class: UIApplicationSceneClientSettings; expected class: MRUISharedApplicationSceneClientSettings; bundle ID: com.apple.facetime; scene ID: com.apple.facetime:SFBSystemService-DDA8C751-C0C4-487E-AD85-59EF4E6C6050
Does anyone have an idea how I can fix this? It's driving me nuts and I wasted over a day looking for a workaround but so far been unsuccessful.
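For reference, the activation code is just the standard pattern, roughly like this (MyActivity stands in for my real activity type):

import GroupActivities

struct MyActivity: GroupActivity {
    var metadata: GroupActivityMetadata {
        var meta = GroupActivityMetadata()
        meta.title = "Shared Experience"
        meta.type = .generic
        return meta
    }
}

func startSharePlay() {
    Task {
        do {
            // On TestFlight builds this succeeds and shows the SharePlay dialog,
            // but the ImmersiveSpace is dismissed once the FaceTime call starts.
            _ = try await MyActivity().activate()
        } catch {
            print("Activation failed: \(error)")
        }
    }
}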
Thanks!
Hi everyone,
I’m encountering a memory overflow issue in my visionOS app and I’d like to confirm if this is expected behavior or if I’m missing something in cleanup.
App Context
The app showcases apartments in real scale using AR.
Apartments are heavy USDZ models (hundreds of thousands of triangles, high-resolution textures).
Users can walk inside the apartments, and performance is good even close to hardware limits.
Flow
The app starts in a full immersive space (RealityView) for selecting the apartment.
When an apartment is selected, a new ImmersiveSpace opens and the apartment scene loads.
The scene includes multiple USDZ models, EnvironmentResources, and dynamic textures for skyboxes.
When the user dismisses the experience, we attempt cleanup:
Nulling out all entity references.
Removing ModelComponents.
Clearing cached textures and skyboxes.
Forcing dictionaries/collections to empty.
Despite this cleanup, memory usage remains very high.
Problem
After dismissing the ImmersiveSpace, memory does not return to baseline.
Check the attached screenshot of the profiling made using Instruments:
Initial state: ~30MB (main menu).
After loading models sequentially: ~3.3GB.
Skybox textures bring it near ~4GB.
After dismissing the experience (at ~01:00 mark): memory only drops slightly (to ~2.66GB).
When loading the second apartment, memory continues to increase until ~5GB, at which point the app crashes due to memory pressure.
The issue is consistently visible under VM: IOSurface in Instruments. No leaks are detected.
So it looks like RealityKit (or lower-level frameworks) keeps caching meshes and textures, and does not free them when RealityView is ended. But for my use case, these resources should be fully released once the ImmersiveSpace is dismissed, since new apartments will load entirely different models and textures.
Cleanup Code Example
Here’s a simplified version of the cleanup I’m doing:
func clearAllRoomEntities() {
    for (entityName, entity) in entityFromMarker {
        entity.removeFromParent()
        if let modelEntity = entity as? ModelEntity {
            modelEntity.components.removeAll()
            modelEntity.children.forEach { $0.removeFromParent() }
            modelEntity.clearTexturesAndMaterials()
        }
        entityFromMarker[entityName] = nil
        removeSkyboxPortals(from: entityName)
    }
    entityFromMarker.removeAll()
}

extension ModelEntity {
    func clearTexturesAndMaterials() {
        guard var modelComponent = self.model else { return }
        for index in modelComponent.materials.indices {
            removeTextures(from: &modelComponent.materials[index])
        }
        modelComponent.materials.removeAll()
        self.model = modelComponent
        self.model = nil
    }

    private func removeTextures(from material: inout any Material) {
        if var pbr = material as? PhysicallyBasedMaterial {
            pbr.baseColor.texture = nil
            pbr.emissiveColor.texture = nil
            pbr.metallic.texture = nil
            pbr.roughness.texture = nil
            pbr.normal.texture = nil
            pbr.ambientOcclusion.texture = nil
            pbr.clearcoat.texture = nil
            material = pbr
        } else if var simple = material as? SimpleMaterial {
            simple.color.texture = nil
            material = simple
        }
    }
}
Questions
Is this expected RealityKit behavior (textures/meshes cached internally)?
Is there a way to force RealityKit to release GPU resources tied to USDZ models when they’re no longer used?
Should dismissing the ImmersiveSpace automatically free those IOSurfaces, or do I need to handle this differently?
Any guidance, best practices, or confirmation would be hugely appreciated.
Thanks in advance!
I have developed a fun living diorama world using Reality Composer Pro and Xcode. Everything is as it should be, and it looks and works great ... until it doesn't.
If I make any change to any of the 10 timelines I am using (all in the same scene, no nested scenes), running the app in the Simulator, on device, or via TestFlight throws errors around compiled timelines, leading to a black screen. Every time I clean and run, the timelines in question may change. It's very frustrating and nearly impossible to track down.
Here are some examples.
AssetLoadRequest failed because asset failed to load '/ (3661553931319769725 Timeline (RealityFileAsset)URL/file:///var/containers/Bundle/Application/F4408256-6014-4264-9E4B-F74AEF0EDE53/SantasVillage.app/RealityKitContent_RealityKitContent.bundle/RealityKitContent.reality/Timeline_779.compiledtimeline)' (failed to register asset)
Asset / (13631856135570808851 AnimationLibraryAsset (RealityFileAsset)URL/file:///var/containers/Bundle/Application/F4408256-6014-4264-9E4B-F74AEF0EDE53/SantasVillage.app/RealityKitContent_RealityKitContent.bundle/RealityKitContent.reality/AnimationLibraryAsset_1.compiledanimationlibraryasset) failure: failed to register asset
Asset 10430065658338454790 AnimationScene (RealityFileAsset)URL/file:///var/containers/Bundle/Application/F4408256-6014-4264-9E4B-F74AEF0EDE53/SantasVillage.app/RealityKitContent_RealityKitContent.bundle/RealityKitContent.reality/AnimationScene_14.compiledanimationscene failure: failed to register asset
I went through the recommended fixes: close RCP > Clean Build Folder > Delete Derived Data (multiple ways) > re-open Xcode > Reset Package Cache > re-open RCP via Xcode > make a change > save > Clean Build Folder again > run.
Sometimes it works. Most of the time it does not.
I then found my own little workaround, but it doesn't always work and has cost me days of wasted time: I DISABLE all timelines, do the clean steps above, and re-run with no timelines, which resolves the errors. Then I turn the timelines back on ONE BY ONE and run until I get another error, and rebuild only that timeline.
This is not sustainable. Is there a better way to do this, or am I doing something wrong? Please help if you can.
I'd like to have a bit of control over the transparency of a VideoMaterial. Is there any way to prepare an unlit ShaderGraph shader and use it with the VideoMaterial?
https://developer.apple.com/documentation/realitykit/videomaterial
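The closest I've gotten is per-entity fading with OpacityComponent rather than a shader, roughly like the sketch below, but that isn't per-pixel control, which is why I'm asking about ShaderGraph:

import AVFoundation
import RealityKit

// Sketch: fade a video plane as a whole via OpacityComponent.
func makeVideoPlane(player: AVPlayer, opacity: Float) -> ModelEntity {
    let material = VideoMaterial(avPlayer: player)
    let plane = ModelEntity(mesh: .generatePlane(width: 1.6, height: 0.9),
                            materials: [material])
    plane.components.set(OpacityComponent(opacity: opacity))
    return plane
}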
The documentation says: "Video materials support transparency if the source video's file format also supports transparency."
I have a video with transparency (Hand.mov, HEVC with alpha). I can show it with a transparent background correctly in the Vision Pro Simulator, but on the physical device the video has a black background. I'm sure the video format is fine, because I can get the texture from the video and display it on an UnlitMaterial.
How can I show the transparent video correctly with RealityKit's VideoMaterial?
After adding TextComponents to my entities on visionOS, I have observed that visualBounds ignores the TextComponents.
The documentation states that a TextComponent should render a rounded rectangle mesh. These meshes are visible on the device, but not in the debugger ("Capture Entity Hierarchy"), and they are ignored by visualBounds.
Am I missing something?
static func makeDirection(_ direction: Direction) -> Entity {
    let text = Entity()
    text.name = direction.rawValue
    text.setScale(SIMD3(repeating: 5), relativeTo: nil)
    text.transform.rotation = direction.rotation
    text.components.set(direction.textComponent)
    return text
}
My workaround is to add a disabled ModelEntity and take its bounds 😬
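Concretely, the workaround looks roughly like this (the string and font values are placeholders):

import RealityKit
import UIKit

// Generate a real text mesh purely for measurement, since visualBounds
// ignores TextComponent, then keep the proxy disabled so it never renders.
func measureTextBounds(on text: Entity, for string: String) -> BoundingBox {
    let mesh = MeshResource.generateText(string,
                                         extrusionDepth: 0.001,
                                         font: .systemFont(ofSize: 0.1))
    let proxy = ModelEntity(mesh: mesh, materials: [UnlitMaterial()])
    text.addChild(proxy)
    let bounds = proxy.visualBounds(relativeTo: text)
    proxy.isEnabled = false
    return bounds
}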