Hi!
I'm currently trying to render another XR scene in front of a RealityKit one.
Specifically, I'm anchoring a plane to the head with a shader that displays side-by-side images for the left/right eyes. By default, the camera has a near plane, so I can't draw directly at z=0.
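For context, the setup is roughly the following, as a minimal sketch (the material name, the scene file, and the RealityKitContent bundle are placeholders, not my actual assets):

```swift
import RealityKit
import RealityKitContent

// Sketch: a head-anchored plane carrying a side-by-side stereo shader.
func makeHeadLockedStereoPlane() async throws -> AnchorEntity {
    let headAnchor = AnchorEntity(.head)
    // Hypothetical Shader Graph material that samples the left or right half
    // of a side-by-side texture depending on which eye is being rendered.
    let material = try await ShaderGraphMaterial(
        named: "/Root/SideBySideMaterial",
        from: "Scene.usda",
        in: realityKitContentBundle
    )
    let plane = ModelEntity(
        mesh: .generatePlane(width: 1, height: 1),
        materials: [material]
    )
    // Pushed slightly forward, since the near plane clips content at z = 0.
    plane.position = [0, 0, -0.3]
    headAnchor.addChild(plane)
    return headAnchor
}
```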
Is there a way to change the camera near plane? Or maybe there is a better solution to overlay image/texture for left/right eyes?
Ideally, I would layer some kind of CompositorLayer on RealityKit, but that's sadly not possible from what I know.
Thanks in advance and have a good day!
I saw this magical sand-table effect where content unfolds and folds up like cards being spread out, which is very interesting, but I don't know how to achieve it. Can this effect be achieved with the existing APIs, and if so, could you share some ideas on how to approach it?
I want to open Control Center in the Vision Pro simulator in Xcode. Is that possible? If so, please tell me how. Thank you.
Hi,
I am in the process of implementing SharePlay into our app. The shared experience opens an Immersive Space and we set systemCoordinator.configuration.supportsGroupImmersiveSpace = true
Now visionOS establishes a shared coordinate space for the immersive space.
From the docs:
To achieve consistent positioning of RealityKit entities across multiple devices in an immersive space during a SharePlay session
There are cases where we want to position content in front of the user (independent of the shared session, and for each user individually). Normally we do that with the transform retrieved via worldTrackingProvider.queryDeviceAnchor.originFromAnchorTransform, plus some Z offset and smooth interpolation.
This works fine in non-SharePlay instances and the device transform is where I would expect it to be but during the FaceTime call deviceAnchor.originFromAnchorTransform seems to use the shared origin of the immersive space and then I end up with a transform that might be offset.
Here is a video of the issue in action: https://streamable.com/205r2p
The blue rect is placed using AnchorEntity(.head, trackingMode: .continuous). This works regardless of the call, and the entity is always placed based on the head position.
The green rect is adjusted on every frame using the transform I get from worldTrackingProvider.queryDeviceAnchor. As you can see it's offset.
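For reference, here is roughly how the green rect is updated each frame, as a minimal sketch (the parameter names are placeholders and the 1 m Z offset is just an example):

```swift
import ARKit
import QuartzCore
import RealityKit
import simd

// Sketch: place `contentEntity` a fixed distance in front of the device,
// using the device anchor's origin-from-anchor transform.
func updateGreenRect(worldTrackingProvider: WorldTrackingProvider, contentEntity: Entity) {
    guard let deviceAnchor = worldTrackingProvider.queryDeviceAnchor(
        atTimestamp: CACurrentMediaTime()
    ) else { return }

    // During SharePlay this transform appears to be expressed relative to the
    // shared immersive-space origin rather than the local device origin.
    let deviceTransform = deviceAnchor.originFromAnchorTransform

    // Push the content 1 m along the device's forward (-Z) axis.
    var offset = matrix_identity_float4x4
    offset.columns.3.z = -1.0
    contentEntity.setTransformMatrix(deviceTransform * offset, relativeTo: nil)
}
```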
Is there any way I can query this transform locally for the user during a FaceTime call?
Also, is it possible to disable this automatic entity transform syncing behavior?
Setting entity.synchronization = nil results in the entity not showing up at all.
https://developer.apple.com/documentation/realitykit/synchronizationcomponent
Is SynchronizationComponent only relevant for the legacy MultiPeerConnectivity approach?
Thank you!
Similar to the visionOS Spatial Gallery app, I'm developing a visionOS app that will show spatial photos and videos. Is it possible to re-create the horizontal (or a vertical) scrolling functionality that shows spatial photos and spatial video previews? Does the Spatial Gallery app use private APIs to create this functionality? I've been looking at the Quick Look documentation and have been able to use the PreviewApplication to show a single preview, but do not see anything for a collection of files as the Spatial Gallery app presents in the scrolling view. Any insights or direction on how this may be done is greatly appreciated.
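For reference, the single preview I have working is roughly the following sketch (the exact PreviewApplication.open signature is from memory, so treat it as an assumption):

```swift
import QuickLook

// Sketch: open one spatial photo in a Quick Look preview window.
// I have not found an equivalent that presents a scrolling collection.
func showSpatialPhoto(at url: URL) {
    _ = PreviewApplication.open(urls: [url], selectedURL: url)
}
```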
Hello,
I've been tinkering a bit with TextComponent.
Based on the docs it seems like this component should always render sharp and nice text, no matter how close the user gets:
RealityKit dynamically adjusts the backing size to a value that results in high-fidelity text at its current location.
And it does on visionOS, but on iOS and macOS the text gets pixelated when I get close to it, as if it's just rendered once as a plain image texture.
Can anyone tell me if this is expected behavior or a bug?
Here are two screenshots for comparison (iPhone and Vision Pro):
Thanks!
let component = GestureComponent(DragGesture())
iOS: ☑️
visionOS: ❌
This bug has persisted from beta to the public release; please fix it.
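A minimal repro sketch (the box entity, its size, and the collision shape are illustrative assumptions, not taken from the report above):

```swift
import RealityKit
import SwiftUI

// Sketch: attach the gesture component to a simple entity.
// Gestures require an input target and a collision shape to be hit-tested.
func makeDraggableBox() -> ModelEntity {
    let box = ModelEntity(mesh: .generateBox(size: 0.2))
    box.components.set(InputTargetComponent())
    box.components.set(CollisionComponent(shapes: [.generateBox(size: [0.2, 0.2, 0.2])]))
    // Works when added on iOS, but appears to be ignored on visionOS per the report above.
    box.components.set(GestureComponent(DragGesture()))
    return box
}
```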
Hi,
I'm struggling with visionOS window management and need help with closing child windows programmatically.
App Structure
My app has a Main-Sub window hierarchy:
AWindow (Home/Main)
BWindow (Main feature window)
CWindow (Tool window - child of BWindow)
Navigation flow:
AWindow → BWindow (switch, 1 window on screen)
BWindow → CWindow (opens child, 2 windows on screen)
I want BWindow and CWindow to be separate movable windows (not sheet/popover) so users can position them independently in space.
The Problem
CWindow doesn't close when BWindow is closed by tapping the X button below the window (next to the window bar)
User taps X on BWindow → BWindow closes but CWindow remains
CWindow becomes orphaned on screen
I can close CWindow programmatically when switching from BWindow back to AWindow
App launch issue
After closing both windows, CWindow is remembered as the last open window
Reopening the app shows only CWindow instead of BWindow
User gets stuck in CWindow with no way back to BWindow
I've tried the Environment dismissWindow action in cleanup, but it's not working.
// In BWindow.swift
.onDisappear {
    if windowManager.isWindowOpen("cWindow") {
        dismissWindow(id: "cWindow")
    }
}
My Current App Structure Code
// in MyNameApp.swift
@main
struct MyNameApp: App {
    var body: some Scene {
        WindowGroup(id: "aWindow") {
            AWindow()
        }
        WindowGroup(id: "bWindow") {
            BWindow()
        }
        WindowGroup(id: "cWindow") {
            CWindow()
        }
    }
}
// WindowStateManager.swift
class WindowStateManager: ObservableObject {
    static let shared = WindowStateManager()
    @Published private var openWindows: Set<String> = []
    @Published private var windowDependencies: [String: String] = [:]

    private init() {}

    func markWindowAsOpen(_ id: String) {
        markWindowAsOpen(id, parent: nil)
    }

    func markWindowAsClosed(_ id: String) {
        openWindows.remove(id)
        windowDependencies[id] = nil
    }

    func isWindowOpen(_ id: String) -> Bool {
        let isOpen = openWindows.contains(id)
        return isOpen
    }

    func markWindowAsOpen(_ id: String, parent: String? = nil) {
        openWindows.insert(id)
        if let parentId = parent {
            windowDependencies[id] = parentId
        }
    }

    func getParentWindow(of childId: String) -> String? {
        let parent = windowDependencies[childId]
        return parent
    }

    func getChildWindows(of parentId: String) -> [String] {
        let children = windowDependencies.compactMap { key, value in
            value == parentId ? key : nil
        }
        return children
    }

    func setNextWindowParent(_ parentId: String) {
        UserDefaults.standard.set(parentId, forKey: "nextWindowParent")
    }

    func getAndClearNextWindowParent() -> String? {
        let parent = UserDefaults.standard.string(forKey: "nextWindowParent")
        UserDefaults.standard.removeObject(forKey: "nextWindowParent")
        return parent
    }

    func forceCloseChildWindows(of parentId: String) {
        let children = getChildWindows(of: parentId)
        for child in children {
            markWindowAsClosed(child)
            NotificationCenter.default.post(
                name: Notification.Name("ForceCloseWindow"),
                object: nil,
                userInfo: ["windowId": child]
            )
            forceCloseChildWindows(of: child)
        }
    }

    func hasMainWindowOpen() -> Bool {
        let mainWindows = ["main", "bWindow"]
        return mainWindows.contains { isWindowOpen($0) }
    }

    func cleanupOrphanWindows() {
        for (child, parent) in windowDependencies {
            if isWindowOpen(child) && !isWindowOpen(parent) {
                NotificationCenter.default.post(
                    name: Notification.Name("ForceCloseWindow"),
                    object: nil,
                    userInfo: ["windowId": child]
                )
                markWindowAsClosed(child)
            }
        }
    }
}
// BWindow.swift
import SwiftUI

struct BWindow: View {
    @Environment(\.openWindow) private var openWindow
    @Environment(\.dismissWindow) private var dismissWindow
    @Environment(\.scenePhase) private var scenePhase
    @ObservedObject private var windowManager = WindowStateManager.shared

    var body: some View {
        VStack {
            Button("Open C Window") {
                windowManager.setNextWindowParent("bWindow")
                openWindow(id: "cWindow")
            }
        }
        .onAppear {
            windowManager.markWindowAsOpen("bWindow")
        }
        .onDisappear {
            windowManager.markWindowAsClosed("bWindow")
            windowManager.forceCloseChildWindows(of: "bWindow")
        }
        .onChange(of: scenePhase) { oldValue, newValue in
            if newValue == .background || newValue == .inactive {
                windowManager.forceCloseChildWindows(of: "bWindow")
            }
        }
    }
}
// CWindow.swift
import SwiftUI

struct CWindow: View {
    @Environment(\.dismissWindow) private var dismissWindow
    @Environment(\.scenePhase) private var scenePhase
    @ObservedObject private var windowManager = WindowStateManager.shared
    @State private var shouldClose = false

    var body: some View {
        VStack {
            // Content
        }
        .onDisappear {
            windowManager.markWindowAsClosed("cWindow")
            NotificationCenter.default.removeObserver(
                self,
                name: Notification.Name("ForceCloseWindow"),
                object: nil
            )
        }
        .onChange(of: scenePhase) { oldValue, newValue in
            if newValue == .background {
            }
        }
        .onAppear {
            let parent = windowManager.getAndClearNextWindowParent()
            windowManager.markWindowAsOpen("cWindow", parent: parent)
            NotificationCenter.default.addObserver(
                forName: Notification.Name("ForceCloseWindow"),
                object: nil, queue: .main
            ) { notification in
                if let windowId = notification.userInfo?["windowId"] as? String,
                   windowId == "cWindow" {
                    shouldClose = true
                }
            }
        }
        .onChange(of: shouldClose) { _, newValue in
            if newValue {
                dismissWindow()
            }
        }
    }
}
The logs show everything executes correctly, but CWindow remains visible on screen.
Questions
Why doesn't dismissWindow(id:) work in cleanup scenarios?
Is there a proper way to create window relationships, like parent-child relationships, in visionOS?
How can I ensure main windows open on app launch instead of tool windows?
What's the recommended pattern for dependent windows in visionOS?
Environment: Xcode 16.2, visionOS 2.0, SwiftUI
I am trying to launch a fully immersive game from Unity on a SwiftUI view. The game is using Metal Rendering with Compositor Services.
I added the Unity Xcode project to the workspace and added the necessary bridge code. When I tap the button that calls ufw?.showUnityWindow(), it does not start, and I get the following in the console:
AR session failed to start after 5 seconds. Is the app configured to use an immersive space?
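For context, the error points at the scene configuration; this is a rough sketch of what Compositor Services expects (the scene type and ID names are assumptions, not Unity's generated code):

```swift
import SwiftUI
import CompositorServices

// Sketch: Metal rendering via Compositor Services needs an ImmersiveSpace
// scene containing a CompositorLayer, and that space must actually be opened
// (e.g. via openImmersiveSpace) before the render/AR session can start.
@main
struct MetalImmersiveApp: App {
    var body: some Scene {
        ImmersiveSpace(id: "UnityImmersiveSpace") {
            CompositorLayer { layerRenderer in
                // Hand the LayerRenderer to the engine's render loop here.
            }
        }
        .immersionStyle(selection: .constant(.full), in: .full)
    }
}
```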
With Xcode 26, loading resources with RealityKit is extremely slow.
Here my project takes almost 50 seconds to load.
I also get multiple "Hang detected" messages in the console:
When I uncheck "Debug executable" in the scheme, the same project loads in 2 seconds.
I'm using RealityKit asynchronous loading:
private static func loadFromRealityComposerPro(
    named entityName: String,
    fromSceneNamed sceneName: String
) async -> Entity? {
    var entity: Entity?
    do {
        let scene = try await Entity(
            named: sceneName,
            in: visionPetsContentBundle
        )
        entity = scene.findEntity(named: entityName)
    } catch {
        print(
            "Error loading \(entityName) from scene \(sceneName): \(error.localizedDescription)"
        )
    }
    return entity
}
Anyone having the same problem?
Has RoomPlan been abandoned? Two years have gone by without comments from Apple on improvements. Are the improvements happening behind the scenes? Are there going to be any major updates?
If I long-press on an element, the sidebar disappears and a Done button appears on the screen, but nothing else changes. So what are the Environments in Vision Pro's simulator actually for?
.glassEffect(.regular, in: .rect(cornerRadius: 24))
error: 'glassEffect(_:in:isEnabled:)' is unavailable in visionOS
This is not surprising, since visionOS already has the native glass interface that served as a model for the other OSes, but this error creates additional overhead for developers building multi-platform apps that include visionOS.
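One possible workaround, just a sketch of conditional compilation (the helper name is made up, and it assumes deployment targets where each modifier is available on its platform):

```swift
import SwiftUI

extension View {
    // Hypothetical helper: Liquid Glass where available,
    // visionOS's native glass background material otherwise.
    @ViewBuilder
    func adaptiveGlass(cornerRadius: CGFloat = 24) -> some View {
        #if os(visionOS)
        self.glassBackgroundEffect(in: RoundedRectangle(cornerRadius: cornerRadius))
        #else
        self.glassEffect(.regular, in: .rect(cornerRadius: cornerRadius))
        #endif
    }
}
```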
At the moment the MapKit APIs only support non-volumetric maps (i.e. in a window or in a volume, but on a 2D surface).
Is support for 3D volumetric maps on visionOS in the works? And if so, when can we expect it to be available?
Hey all,
I'm working on a visionOS app that captures live frames from the left and right cameras of Apple Vision Pro using cameraFrame.sample(for: .left/.right).
Apple provides documentation on encoding side-by-side frames into MV-HEVC spatial video using CMTaggedBuffer:
Converting Side-by-Side 3D Video to MV-HEVC
My question:
Is there any way to render tagged frames (e.g. CMTaggedBuffer with .stereoView(.leftEye/.rightEye)) live, directly to a surface in RealityKit or Metal, without saving them to a file?
I’d like to create a true stereoscopic (spatial) live video preview, not just render two images side-by-side.
Any advice or insights would be greatly appreciated!
In visionOS, once an immersive space is opened, the background color is solid black, is it possible to make this background transparent?
FYI, the immersive space on visionOS uses Compositor Services for drawing the 3D content.
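The direction I've been looking at, as a rough sketch only (it assumes visionOS 2 or later and a Metal renderer that writes transparent pixels where the background should show through):

```swift
import SwiftUI
import CompositorServices

// Sketch (assumption): with the .mixed immersion style, pixels the Metal
// layer leaves transparent show passthrough instead of the solid black
// backdrop of the .full style.
struct PassthroughImmersiveSpace: Scene {
    var body: some Scene {
        ImmersiveSpace(id: "Immersive") {
            CompositorLayer { layerRenderer in
                // Render with zero alpha wherever the background should be see-through.
            }
        }
        .immersionStyle(selection: .constant(.mixed), in: .mixed)
    }
}
```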
Hello,
Let me ask you a question about Apple Immersive Video.
https://www.apple.com/newsroom/2024/07/new-apple-immersive-video-series-and-films-premiere-on-vision-pro/
I am currently considering implementing a feature to play Apple Immersive Video as a background scene in the app I developed, using 3DCG-created content converted into Apple Immersive Video format.
First, I would like to know if it is possible to integrate Apple Immersive Video into an app.
Could you provide information about the required software and the integration process for incorporating Apple Immersive Video into an app?
It would be great if you could also share any helpful website resources.
I am considering creating Apple Immersive Video content and would like to know about the necessary equipment and software for producing both live-action footage and 3DCG animation videos.
As I mentioned earlier, I’m planning to play Apple Immersive Video as a background in the app. In doing so, I would also like to place some 3D models as RealityKit entities and spatial audio elements.
I’m also planning to develop the visionOS app as a Full Space Mixed experience. Is it possible to have an immersive viewing experience with Apple Immersive Video in Full Space Mixed mode? Does Apple Immersive Video support Full Space Mixed?
I’ve asked several questions, and that’s all for now. Thank you in advance!
Hi,
We are trying to port our Unity app from other XR devices to Vision Pro. Thus it's way easier for us to use the Metal rendering layer, fully immersive. And to stay true to the platform, we want to keep the gaze/pinch interaction system.
But we just noticed that, unlike PolySpatial XR apps, visionOS XR with Metal does not provide gaze info unless the user is actively pinching, which prevents any attempt to give visual feedback on what they are looking at (buttons, etc.).
Is this on Apple's roadmap?
Thanks
Here is my code in visionOS 2.3
NavigationSplitView {
    List {
    }
    .navigationTitle("Passwords")
} detail: {
    Text("Hello")
        .navigationTitle("All")
}
The font sizes of "Passwords" and "All" are smaller than the ones in the Passwords app.
With the new ImagePresentationComponent in visionOS 26, how can text/overlays be shown on top of the image as seen in the Spatial Gallery app?