Hello,
I am currently considering developing a Full Space app that enables a shared visionOS experience with nearby users.
Intended Features
A Mixed Full Space app in which dozens of 3D models are placed in the space.
These 3D models may play embedded animations when tapped, be programmatically moved or rotated, or be controlled via Reality Composer Pro timelines.
The app also includes audio, spatial audio, videos with audio, and videos without audio, which are rendered as VideoTextures on planes and played back in the space.
Some media elements play automatically, while others are triggered by user interaction.
However, it is unclear whether AVPlaybackCoordinator supports shared playback across multiple types of media, such as:
audio only
spatial audio
video without audio
video with audio
I am also unsure whether there are alternative or recommended approaches for synchronizing playback in this scenario.
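For concreteness, the sharing setup I have in mind is a custom GroupActivity whose GroupSession is handed to each AVPlayer's playback coordinator. A minimal sketch (the activity name is a placeholder, and I am assuming this is the intended route):
import AVFoundation
import GroupActivities

// Hypothetical activity representing the shared gallery experience.
struct SharedGalleryActivity: GroupActivity {
    var metadata: GroupActivityMetadata {
        var meta = GroupActivityMetadata()
        meta.title = "Shared Gallery"
        meta.type = .generic
        return meta
    }
}

// For each AVPlayer-backed item (video with or without audio), playback
// coordination would presumably be joined like this:
func join(session: GroupSession<SharedGalleryActivity>, player: AVPlayer) {
    player.playbackCoordinator.coordinateWithSession(session)
}
What I don't know is whether this coordination extends to the audio-only and spatial-audio cases above.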
Questions
Is it technically possible to implement the experience described above using visionOS?
Are there any important implementation considerations or limitations that should be taken into account?
For example, when two participants experience the app simultaneously, how is the content positioned for each participant?
Is the spatial placement of content shared across participants, or is it positioned relative to each participant’s viewpoint?
For nearby participants, is it necessary to register a spatial Persona? My understanding is that spatial Personas are not visible for nearby users during the experience; is this correct?
When experiencing SharePlay with nearby users, is it possible to share the experience without registering the other participant’s contact information?
I have watched the following session, but I was unable to fully understand the feasibility of the above use case or the concrete implementation details:
https://developer.apple.com/videos/play/wwdc2025/318/
Thank you.
Hi. I am mixing content destined for Vision Pro, locked to video. I have the AAX installer, but the ASAF video player demonstrated in the QuickTime videos is not included in the install package for Pro Tools. Would it be possible to post a link?
Hey all,
I'm working on a visionOS app that captures live frames from the left and right cameras of Apple Vision Pro using cameraFrame.sample(for: .left/.right).
Apple provides documentation on encoding side-by-side frames into MV-HEVC spatial video using CMTaggedBuffer:
Converting Side-by-Side 3D Video to MV-HEVC
My question:
Is there any way to render tagged frames (e.g. CMTaggedBuffer with .stereoView(.leftEye/.rightEye)) live, directly to a surface in RealityKit or Metal, without saving them to a file?
I’d like to create a true stereoscopic (spatial) live video preview, not just render two images side-by-side.
Any advice or insights would be greatly appreciated!
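For reference, the tagged-buffer construction from the linked article looks roughly like this (a sketch; leftBuffer and rightBuffer stand for the CVPixelBuffers sampled from the left and right cameras):
import CoreMedia

// Per the MV-HEVC conversion article: wrap each eye's pixel buffer in a
// CMTaggedBuffer carrying layer and stereo-view tags.
let taggedBuffers: [CMTaggedBuffer] = [
    .init(tags: [.videoLayerID(0), .stereoView(.leftEye)], pixelBuffer: leftBuffer),
    .init(tags: [.videoLayerID(1), .stereoView(.rightEye)], pixelBuffer: rightBuffer)
]

// The open question is how to hand such tagged frames to RealityKit or a
// CompositorServices drawable directly, rather than appending them to an
// AVAssetWriter for file output.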
Hi,
I'm struggling with visionOS window management and need help with closing child windows programmatically.
App Structure
My app has a Main-Sub window hierarchy:
AWindow (Home/Main)
BWindow (Main feature window)
CWindow (Tool window - child of BWindow)
Navigation flow:
AWindow → BWindow (switch, 1 window on screen)
BWindow → CWindow (opens child, 2 windows on screen)
I want BWindow and CWindow to be separate movable windows (not sheet/popover) so users can position them independently in space.
The Problem
CWindow doesn't close when BWindow is closed by tapping the X button below the app (next to the window bar)
User clicks X on BWindow → BWindow closes but CWindow remains
CWindow becomes orphaned on screen
I can only close CWindow programmatically when switching from BWindow back to AWindow
App launch issue
After closing both windows, CWindow is remembered as last window
Reopening app shows only CWindow instead of BWindow
User gets stuck in CWindow with no way back to BWindow
I've tried the Environment dismissWindow action in cleanup, but it's not working.
// In BWindow.swift
.onDisappear {
    if windowManager.isWindowOpen("cWindow") {
        dismissWindow(id: "cWindow")
    }
}
Current App Structure Code
// in MyNameApp.swift
@main
struct MyNameApp: App {
    var body: some Scene {
        WindowGroup(id: "aWindow") {
            AWindow()
        }
        WindowGroup(id: "bWindow") {
            BWindow()
        }
        WindowGroup(id: "cWindow") {
            CWindow()
        }
    }
}
// WindowStateManager.swift
class WindowStateManager: ObservableObject {
    static let shared = WindowStateManager()
    @Published private var openWindows: Set<String> = []
    @Published private var windowDependencies: [String: String] = [:]

    private init() {}

    func markWindowAsOpen(_ id: String) {
        markWindowAsOpen(id, parent: nil)
    }

    func markWindowAsClosed(_ id: String) {
        openWindows.remove(id)
        windowDependencies[id] = nil
    }

    func isWindowOpen(_ id: String) -> Bool {
        openWindows.contains(id)
    }

    func markWindowAsOpen(_ id: String, parent: String? = nil) {
        openWindows.insert(id)
        if let parentId = parent {
            windowDependencies[id] = parentId
        }
    }

    func getParentWindow(of childId: String) -> String? {
        windowDependencies[childId]
    }

    func getChildWindows(of parentId: String) -> [String] {
        windowDependencies.compactMap { key, value in
            value == parentId ? key : nil
        }
    }

    func setNextWindowParent(_ parentId: String) {
        UserDefaults.standard.set(parentId, forKey: "nextWindowParent")
    }

    func getAndClearNextWindowParent() -> String? {
        let parent = UserDefaults.standard.string(forKey: "nextWindowParent")
        UserDefaults.standard.removeObject(forKey: "nextWindowParent")
        return parent
    }

    func forceCloseChildWindows(of parentId: String) {
        let children = getChildWindows(of: parentId)
        for child in children {
            markWindowAsClosed(child)
            NotificationCenter.default.post(
                name: Notification.Name("ForceCloseWindow"),
                object: nil,
                userInfo: ["windowId": child]
            )
            forceCloseChildWindows(of: child)
        }
    }

    func hasMainWindowOpen() -> Bool {
        let mainWindows = ["main", "bWindow"]
        return mainWindows.contains { isWindowOpen($0) }
    }

    func cleanupOrphanWindows() {
        for (child, parent) in windowDependencies {
            if isWindowOpen(child) && !isWindowOpen(parent) {
                NotificationCenter.default.post(
                    name: Notification.Name("ForceCloseWindow"),
                    object: nil,
                    userInfo: ["windowId": child]
                )
                markWindowAsClosed(child)
            }
        }
    }
}
// BWindow.swift
struct BWindow: View {
    @Environment(\.dismissWindow) private var dismissWindow
    @Environment(\.openWindow) private var openWindow    // was missing
    @Environment(\.scenePhase) private var scenePhase    // was missing
    @ObservedObject private var windowManager = WindowStateManager.shared

    var body: some View {
        VStack {
            Button("Open C Window") {
                windowManager.setNextWindowParent("bWindow")
                openWindow(id: "cWindow")
            }
        }
        .onAppear {
            windowManager.markWindowAsOpen("bWindow")
        }
        .onDisappear {
            windowManager.markWindowAsClosed("bWindow")
            windowManager.forceCloseChildWindows(of: "bWindow")
        }
        .onChange(of: scenePhase) { oldValue, newValue in
            if newValue == .background || newValue == .inactive {
                windowManager.forceCloseChildWindows(of: "bWindow")
            }
        }
    }
}
// CWindow.swift
import SwiftUI

struct CWindow: View {  // renamed from cWindow to match the CWindow() usage above
    @Environment(\.dismissWindow) private var dismissWindow  // was missing
    @ObservedObject private var windowManager = WindowStateManager.shared
    @State private var shouldClose = false
    @State private var observerToken: NSObjectProtocol?

    var body: some View {
        VStack {
            // Content
        }
        .onAppear {
            let parent = windowManager.getAndClearNextWindowParent()
            windowManager.markWindowAsOpen("cWindow", parent: parent)
            // Keep the token: block-based observers are not removed by
            // removeObserver(self, ...).
            observerToken = NotificationCenter.default.addObserver(
                forName: Notification.Name("ForceCloseWindow"),
                object: nil, queue: .main
            ) { notification in
                if let windowId = notification.userInfo?["windowId"] as? String,
                   windowId == "cWindow" {
                    shouldClose = true
                }
            }
        }
        .onDisappear {
            windowManager.markWindowAsClosed("cWindow")
            if let token = observerToken {
                NotificationCenter.default.removeObserver(token)
            }
        }
        .onChange(of: shouldClose) { _, newValue in
            if newValue {
                dismissWindow()
            }
        }
    }
}
The logs show everything executes correctly, but CWindow remains visible on screen.
Questions
Why doesn't dismissWindow(id:) work in cleanup scenarios?
Is there a proper way to create window relationships, like parent-child relationships, in visionOS?
How can I ensure main windows open on app launch instead of tool windows?
What's the recommended pattern for dependent windows in visionOS?
Environment: Xcode 16.2, visionOS 2.0, SwiftUI
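One direction I'm considering for the launch issue (a sketch, assuming the visionOS 2 scene restoration modifiers work as documented): opt the tool window out of restoration so it is never brought back as the launch window.
// In MyNameApp.swift: keep cWindow out of state restoration and launch.
WindowGroup(id: "cWindow") {
    CWindow()
}
.restorationBehavior(.disabled)       // don't remember cWindow across launches
.defaultLaunchBehavior(.suppressed)   // never use cWindow as the launch window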
I am working on an app that will allow a user to load and share their model files (usdz, usda, usdc). I'm looking at security options to prevent bad actors. Are there security or validation methods built into ARKit/RealityKit/CloudKit when loading models or saving them in the cloud? I want to ensure no one can inject any sort of exploit through these file types.
I want to let users place 2D/3D “artworks” on detected walls and have them reappear in exactly the same real-world spot after quitting and relaunching the app (like widgets do, but for my own entities).
Environment:
Xcode 26, visionOS 2.0, RealityKit + ARKitSession/WorldTrackingProvider
Entities are parented to a holder that’s aligned to a wall via plane/mesh raycasts
What I’ve tried:
Create a WorldAnchor at placement, save UUID + full 4×4 transform
On next launch, re-create the WorldAnchor (or set the saved transform) and attach the entity
Gate restore on relocalization/mesh updates and disable all raycast/search after restore
Issue:
After relaunch, placement still resolves relative to current device pose, not the same wall position.
Questions:
Is there a public API in visionOS 2.0 to persist app‑managed world anchors across sessions (room‑fixed), e.g., AnchorStore or equivalent?
If not, what’s the recommended pattern to reliably restore wall‑anchored content?
Are persistence features mentioned for widgets/windows available to third‑party RealityKit entities?
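For reference, here is a sketch of the variant I am trying next, based on my reading that the system itself persists WorldAnchors across sessions (the UserDefaults key is a placeholder):
import ARKit
import RealityKit

let session = ARKitSession()
let worldTracking = WorldTrackingProvider()
// Run once, e.g. at launch: try await session.run([worldTracking])

// At placement: add a WorldAnchor and persist only its identifier.
func place(_ entity: Entity, at transform: simd_float4x4) async throws {
    let anchor = WorldAnchor(originFromAnchorTransform: transform)
    try await worldTracking.addAnchor(anchor)
    UserDefaults.standard.set(anchor.id.uuidString, forKey: "artworkAnchorID")
}

// On relaunch: wait for the system to re-deliver the persisted anchor after
// relocalization and match on the saved UUID. Re-creating an anchor from the
// stored transform can't work, because that transform is relative to the
// previous session's origin, which would explain placement resolving against
// the current device pose.
func restore(_ entity: Entity) async {
    let savedID = UserDefaults.standard.string(forKey: "artworkAnchorID")
    for await update in worldTracking.anchorUpdates {
        guard update.anchor.id.uuidString == savedID else { continue }
        entity.setTransformMatrix(update.anchor.originFromAnchorTransform, relativeTo: nil)
    }
}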
In my visionOS app, I have two types of windows:
Main App Window – This is the default window that launches when the app starts. It displays the video listings and other primary content.
Immersive Space Window – This opens only when a user starts streaming or playing a video.
Issue:
When entering the immersive space, the main app window remains visible in front of it unless manually closed. To avoid this, I currently close the main window when transitioning to immersive space and reopen it when exiting from immersive space. However, this causes the app to restart instead of resuming from its previous state.
Desired Behavior:
I want the main app window to retain its state and seamlessly resume from where it was before entering immersive mode, rather than restarting.
Attempts & Challenges:
Tried managing opacity and visibility, but neither worked as expected.
Couldn’t find a way to push the main window to the background while bringing the immersive space to the foreground.
Looking for a solution to keep the main window’s state intact while transitioning between immersive and normal modes.
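One direction I'm exploring (a sketch; AppModel and the view names are hypothetical): keep all of the main window's state in an app-scoped model object, so that closing the window on entry to the immersive space and reopening it on exit restores the previous state instead of restarting.
import SwiftUI

@MainActor
final class AppModel: ObservableObject {
    // Everything the main window needs to resume, e.g. the selected video.
    @Published var selectedVideoID: String?
}

@main
struct VideoApp: App {
    @StateObject private var appModel = AppModel()  // outlives any one window

    var body: some Scene {
        WindowGroup(id: "main") {
            MainListView()
                .environmentObject(appModel)  // views only read/write appModel
        }
        ImmersiveSpace(id: "player") {
            PlayerView()
                .environmentObject(appModel)
        }
    }
}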
I'm trying to add a feature to my app to allow a user to import items from other apps, like Safari, via the share sheet.
I've done this many times on iOS/iPadOS easily with a Share Extension. From what I can tell, Xcode says share extensions are not available on visionOS, though my experience on device tells me differently (Reminders, Notes, and more seem to implement them somehow). I was finally able to get it working on device only, but I can now no longer test in the simulator and have not found a way to distribute this app.
When attempting to run on the simulator, I get this issue:
Please try again later. Appex bundle at /Users/jason/Library/Developer/CoreSimulator/Devices/09A70160-4F4F-4F5E-B679-F6F7D876D7EF/data/Library/Caches/com.apple.mobile.installd.staging/temp.6OAEZp/extracted/LaunchBar.app/PlugIns/LaunchBarShareExtension.appex with id co.swiftfox.LaunchBar.ShareExtension specifies a value (com.apple.share-services) for the NSExtensionPointIdentifier key in the NSExtension dictionary in its Info.plist that does not correspond to a known extension point.
When trying to archive and upload to TestFlight, I get this similar error:
Invalid Info.plist value. The value for the key 'DTPlatformName' in bundle LaunchBar.app/PlugIns/LaunchBarShareExtension.appex is invalid. (ID: 207610c7-b7e1-48be-959b-22a43cd32d16)
The app is for visionOS only - which I'm thinking might be the problem? The share extension is "Designed For iPhone" and requires me to include iPhone as a run destination. In the worst case I can build an iPhone UI for the app but I'd rather not, as it is very specific to visionOS.
Has anyone successfully launched a share extension on a visionOS only app? I have an iPad app with a share extension that shows up fine on visionOS, but the issue seems to be specifically with visionOS only apps.
Hello,
If you add a ManipulationComponent to a RealityKit entity and then continue adding entities, sooner or later you will encounter a crash with the following error message:
Attempting to move entity “%s” (%p) under “%s” (%p), but the new parent entity is currently being removed. Changing the parent/child entities of an entity in an event handler while that entity is already being reassigned is not supported.
CoreSimulator 1048 – Device: Apple Vision Pro 4K (B87DD32A-E862-4791-8B71-92E50CE6EC06) – Runtime: visionOS 26.0 (23M336) – Device Type: Apple Vision Pro
The problem occurs precisely with this code:
ManipulationComponent.configureEntity(object)
I adapted Apple's ObjectPlacementExample and made the changes available via GitHub.
The desired behavior is that I can add entities to the ManipulationComponent and RealityKit then runs stably, without random crashes.
GitHub Repo
Thanks
Andre
Hi, I'm currently implementing 180°/360° playback for immersive video in my app.
I was able to implement 360° easily by just applying a VideoMaterial to a flipped sphere.
However, I'm a bit stuck on 180°. I want to implement it by setting the VideoMaterial on a hemisphere mesh, but RealityKit doesn't provide a built-in function such as MeshResource.generateHemisphere yet. As a fallback, I considered making only the front half of the sphere show the VideoMaterial and leaving the back half transparent, which I thought would make the sphere look like a hemisphere.
But I can't find a way to implement this. I would appreciate any advice / idea / information that might help.
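In case it's useful to others, here's my sketch of generating a hemisphere directly with MeshDescriptor (my own attempt, not an official API; the winding and UV layout may need adjusting for a particular video):
import Foundation
import RealityKit
import simd

// Sweep spherical coordinates over half the longitude range (180 degrees)
// and point the normals inward, since the video is viewed from inside.
func generateHemisphere(radius: Float, slices: Int = 64, stacks: Int = 32) throws -> MeshResource {
    var positions: [SIMD3<Float>] = []
    var normals: [SIMD3<Float>] = []
    var uvs: [SIMD2<Float>] = []
    var indices: [UInt32] = []

    for stack in 0...stacks {
        let phi = Float(stack) / Float(stacks) * .pi        // latitude: 0...π
        for slice in 0...slices {
            let theta = Float(slice) / Float(slices) * .pi  // longitude: front half only
            let p = SIMD3<Float>(radius * sin(phi) * cos(theta),
                                 radius * cos(phi),
                                 radius * sin(phi) * sin(theta))
            positions.append(p)
            normals.append(-normalize(p))                   // inward-facing
            uvs.append([Float(slice) / Float(slices), 1 - Float(stack) / Float(stacks)])
        }
    }
    for stack in 0..<stacks {
        for slice in 0..<slices {
            let a = UInt32(stack * (slices + 1) + slice)
            let b = a + UInt32(slices + 1)
            // Wind triangles so the inside faces are front-facing.
            indices.append(contentsOf: [a, b, a + 1, a + 1, b, b + 1])
        }
    }

    var descriptor = MeshDescriptor(name: "hemisphere")
    descriptor.positions = MeshBuffers.Positions(positions)
    descriptor.normals = MeshBuffers.Normals(normals)
    descriptor.textureCoordinates = MeshBuffers.TextureCoordinates(uvs)
    descriptor.primitives = .triangles(indices)
    return try MeshResource.generate(from: [descriptor])
}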
Currently I am using a mixed-style immersive space to place both my WindowView (plain style) and my ImmersiveView content together. The issue is that depth testing during rendering always lets the virtual content occlude my normal WindowView. Is it possible to force the windowed view to always display in front of my virtual content in mixed-style immersion? (I know about modelSortGroup, but it doesn't quite fit here.)
Or can I dynamically change the .progressive value while the immersive space is open? (Setting the value to zero effectively means .mixed, right?)
How do I configure a Unity project for a fully immersive VR app on Apple Vision Pro using Metal Rendering, and add a simple pinch-to-teleport-where-looking feature? I've tried the available samples and docs, but they don't cover this clearly (to me).
So far, I've reviewed Unity XR docs, Apple dev guides, and tutorials, but most emphasize spatial apps. Metal examples exist but don't include teleportation. Specifically:
visionOS sample "XRI_SimpleRig" – Deploys to device/simulator, but no full immersion or teleport.
XRI Toolkit sample "XR Origin Hands (XR Rig)" – Pinch gestures detect, but not linked to movement.
visionOS "XR Plugin" sample "Metal Sample URP" – Metal setup works, but static scene without locomotion.
I'm new to Unity XR development and would appreciate a simple, standalone scene or document focused only on the essentials for "teleport to gaze on pinch" in VR mode, with no extra features. I do have some experience with Unreal, WorldToolKit, Cosmo, etc. from the '90s, and I'm OK with code.
Please include steps for:
Setting up immersive VR (disabling spatial defaults if needed).
Integrating pinch detection with ray-based teleport.
Any config changes or basic scripts.
Project Configuration:
Unity Editor Version: 6000.2.5f1.2588.7373 (Revision: 6000.2/staging 43d04cd1df69)
Installed Packages:
Apple visionOS XR Plugin: 2.3.1
AR Foundation: 6.2.0
PolySpatial XR: 2.3.1
XR Core Utilities: 2.5.3
XR Hands: 1.6.1
XR Interaction Toolkit: 3.2.1
XR Legacy Input Helpers: 2.1.12
XR Plugin Management: 4.5.1
Imported Samples:
Apple visionOS XR Plugin 2.3.1: Metal Sample - URP
XR Hands 1.6.1
XR Interaction Toolkit 3.2.1: Hands Interaction Demo, Starter Assets, visionOS
Build Platform Settings:
Target: Apple visionOS
App Mode: Metal Rendering with Compositor Services
Selected Validation Profiles: visionOS Metal
Documentation: Enabled
Xcode Version: 26.01
visionOS SDK: 26
Mac Hardware: Apple M1 Max
Target visionOS Version: 20 or 26
Test Environment: Model: Apple Vision Pro, visionOS 26.0.1 (23M341), Apple M1 Max
No errors in builds so far; just missing the desired functionality.
Thanks for a complete response with actionable steps.
Hi,
I'm looking to build something similar to the header blur in the App Store and Apple TV app settings. Does anyone know the best way to achieve this, so that when there is nothing behind the header it looks the same as the rest of the view background, but when content goes underneath it gets a blur effect? I've seen .scrollEdgeEffect on iOS 26; is there something similar for visionOS?
Thanks!
This is no longer highlighting my entity when looking at it:
RealityView { content in
    let hoverComponent = HoverEffectComponent(.spotlight(
        HoverEffectComponent.SpotlightHoverEffectStyle(
            color: .white, strength: 2.0
        )
    ))
    entity.components.set(hoverComponent)
}
The entity is in a window. The same code works in an immersive view.
Collision Component and Input type are set in RCP.
It's also stopped working on my published app (built under visionOS 2.x) using my visionOS 26 device.
If I use a 2.x simulator, it works.
Is this a bug or is there something I'm missing?
Thanks.
I have been trying to implement this look where a component looks "pushed in" but I could not find any resources regarding this effect. The closest I got was a combination of a RoundedRectangle and .glassBackgroundEffect(), but this makes the view look pushed out, instead of pushed in.
I was wondering whether this is achievable at the SwiftUI level, or even at the UIKit level.
Hi, I am a new developer. I want to add articulated and deformable objects to my AR game, and I hope to interact with these objects. I haven't found any tutorials on this. Please let me know if this is available in visionOS.
I work on a game where I use timeline animations in Reality Composer Pro.
The game runs in an immersive space, but can be paused where I then move the whole level root entity from the immersive space to another RealityView in a Window Group. When the player continues I do it exactly the other way around to move the level root from the window group back to my immersive space RealityView.
And it seems like all animations get automatically stopped and restarted when the scene changes. The problem is that playback does not resume where it stopped; the animation restarts in full from the position where it was interrupted, and therefore ends up with, for example, a wrong y offset, as visible in the picture.
For example in the picture, the yellow sphere loops the following animation:
0 to 100
100 to -100
-100 to 0
If I now pause the game (and basically switch scenes), the previous animation gets stopped and restarted at position y = 100. So now it loops:
100 to 200
200 to 0
0 to 100
I already tried all kinds of setups, like:
Setting the animations relative to root, parent, local
Using behaviors (on Added to Scene, on Notification)
And finally even by accessing the availableAnimations directly and saving the playback controller of the animation
There I saw, if I manually trigger the following code before switching the scene, everything works as expected:
Button("Reset") {
    animationPlaybackController.time = 0
    animationPlaybackController.pause()
    animationPlaybackController.stop(blendOutDuration: 0.00001)
}
But if I use time = 0 with .stop() directly, the time = 0 seems to be ignored and I get the same behavior as before that it stops in a wrong y offset, hence my assumption that animations get stopped and invalidated once they change the scene.
I tried to call the code manually on ImmersiveSpace.onDisappear, WindowGroup.onAppear and different kind of SceneEvents subscriptions, but unfortunately nothing worked.
So am I doing something wrong in general or is there a way to fix this?
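For what it's worth, the workaround I'm experimenting with next (a sketch; it assumes the entity exposes a single looping animation via availableAnimations) is to save the controller's playhead before the scene switch and seek back after restarting the animation in the new scene:
import Foundation
import RealityKit

var savedTime: TimeInterval = 0

// Before moving the level root out of the current scene.
func pauseForSceneSwitch(controller: AnimationPlaybackController) {
    savedTime = controller.time
    controller.stop()
}

// After the level root has been added to the new scene's RealityView.
func resumeAfterSceneSwitch(on entity: Entity) {
    guard let animation = entity.availableAnimations.first else { return }
    let controller = entity.playAnimation(animation.repeat())
    controller.time = savedTime  // seek back to the saved playhead
}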
Can you help me write code that picks up an element a bit far from me, brings it near to me, lets me flick it a bit, and then sends it back to its original position when I release it?
Thanks a lot,
Christophe
I'm seeing this error while attempting to compile my visionOS app under Xcode 26. My existing code looks like:
let (naturalSize, formatDescriptions, mediaCharacteristics) = try? await videoTrack.load(.naturalSize, .formatDescriptions, .mediaCharacteristics)
This is now giving a compiler error: Type of expression is ambiguous without a type annotation
I don't see anything that was changed or deprecated in the latest version. Also, loading the properties individually seems to work fine, i.e.:
let naturalSize = try? await videoTrack.load(.naturalSize)
let formatDescriptions = try? await videoTrack.load(.formatDescriptions)
let mediaCharacteristics = try? await videoTrack.load(.mediaCharacteristics)
Hi there,
I was looking to add a particle emitter to an augmented reality app I'm developing using RealityKit, targeting iOS. I noticed in the documentation for ParticleEmitterComponent that iOS 18.0+ appears to be supported, but when I try to use ParticleEmitterComponent in my code in Xcode, I get an error that it isn't found. Furthermore, this StackOverflow post seems to indicate that particle systems are not available for iOS. Would it be possible to get clarification on this?