With iOS 26 unveiled, has anyone noticed or found any changes related to RoomPlan?
I can't find anything myself, which is disappointing.
Has anyone found any improvements or changes?
I noticed that when I drag the menu window in an Immersive View, the entities behind it become semi-transparent, and the boundary between virtual and real-world objects is very pronounced.
How does visionOS implement this effect? Is there any API or technique I can use in my own code to enable the same semi-transparent overlay, even when I am not dragging the menu window?
I have an entity that was created using Mixamo, and it has an animation.
After the animation completes, the mesh of the robot is not where the entity is positioned.
I want to do something like this: when the animation finishes, set the root entity's transform to the mesh's transform. There are no transformations applied to any of the children of this root of the model, which means the transformations are applied to the skeleton by the playing of the animation.
Is there a way to apply the final position of the root of the skeleton to the root entity, so that the entity is positioned where the animation ended just before the next animation plays?
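Not a definitive answer, but a minimal sketch of one way to do this in RealityKit, assuming the model is a ModelEntity and the skeleton root is joint 0 (both are assumptions; check jointNames for your Mixamo rig). Keep the returned subscription alive for as long as you need the behavior:

import RealityKit

// A sketch: after playback completes, fold the skeleton root's final offset
// into the entity's own transform, then reset the joint so the next clip
// starts from the new origin. Scale is ignored for simplicity.
func bakeFinalPoseOnCompletion(of model: ModelEntity, in scene: RealityKit.Scene) -> EventSubscription {
    scene.subscribe(to: AnimationEvents.PlaybackCompleted.self, on: model) { _ in
        guard let rootJoint = model.jointTransforms.first else { return }
        // Move the entity to where the skeleton root ended up...
        model.transform.translation += model.transform.rotation.act(rootJoint.translation)
        // ...then reset the joint so the next animation starts from the new origin.
        model.jointTransforms[0] = Transform()
    }
}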
I am developing a Unity application for the Apple Vision Pro using PolySpatial and RealityKit integration.
The goal is to create a graspable object (for example, a handheld cube) that includes a secondary camera. When the user grabs and moves the object, the secondary camera should render its view to a RenderTexture, which is displayed on a quad attached to the object, simulating a live camera screen.
In the Unity Editor, this setup works correctly. The RenderTexture updates in real time, and the quad displays the camera’s view as expected.
However, when building and running the application on the Vision Pro, the quad only displays the clear background color of the secondary camera. No scene content appears. The graspable interaction itself works fine: the object can be grabbed and moved as intended.
Steps I have taken:
Created a new layer (CameraFeed) and assigned the relevant objects to it.
Set the secondary camera’s culling mask to render only the CameraFeed layer.
Assigned the RenderTexture as the camera’s target texture.
Applied the RenderTexture to an Unlit/Texture material on a quad.
Confirmed the camera is active and correctly positioned relative to the object.
From my research, it appears that once objects are managed by RealityKit through PolySpatial (for example, made graspable), they are no longer rendered through Unity's normal camera pipeline. Only the main XR camera (managed by RealityKit) seems able to see these objects. Secondary Unity cameras cannot render RealityKit-synced content to a RenderTexture. If this is correct, it seems there is currently no way to implement a true live secondary camera feed showing graspable objects on Vision Pro using Unity PolySpatial.
My questions are:
Is there any official way to enable multiple camera rendering of RealityKit-managed objects through PolySpatial?
Are there known workarounds to simulate a live camera feed that still allows objects to be grabbed?
Has anyone found alternative design patterns or methods for this kind of interaction?
Environment: Unity 6.0, PolySpatial 2.2.4, Apple visionOS XR 2.2.4
Any insight or suggestions would be greatly appreciated.
Thank you.
I believe I have created a VideoMaterial and assigned it to a mesh with code I found in the developer documentation, but I'm getting this error:
"Trailing closure passed to parameter of type 'String' that does not accept a closure"
I have attached a photo of the code and where the error happens.
Any help will be greatly appreciated.
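Since the screenshot isn't included here, for reference this is a minimal sketch of how a VideoMaterial is typically created and applied to a mesh (the video URL and plane size are placeholders):

import AVFoundation
import RealityKit

// Create a player-backed material and apply it to a simple plane mesh.
// `videoURL` is a placeholder for your own asset URL.
let videoURL = URL(fileURLWithPath: "/path/to/video.mp4")
let player = AVPlayer(url: videoURL)
let videoMaterial = VideoMaterial(avPlayer: player)
let screen = ModelEntity(
    mesh: .generatePlane(width: 1.6, height: 0.9),
    materials: [videoMaterial]
)
player.play()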
I've submitted my first AR app for iPhone and iPad to iTunes Connect. After sending a binary to iTunes Connect, I've received the following warning message.
The app contains the following UIRequiredDeviceCapabilities values, which aren’t supported in visionOS: [arkit].
No. 1, my app doesn't support visionOS. No. 2, I don't have the UIRequiredDeviceCapabilities dictionary in Info.plist. Why am I receiving this warning? One article related to this issue that I've read suggests removing the UIRequiredDeviceCapabilities dictionary, but I don't have it in my plist. What can I do about this warning message? Thanks.
Topic: Spatial Computing / SubTopic: ARKit
I am experiencing a problem with three iPhone 13 Pros.
They are reporting the lowest quality for all points in the depth map from the LiDAR sensor.
The readings I get are unusable.
If it were just one phone I would consider it a faulty sensor, but in this case three phones give the same result.
I have other iPhone 13 Pros that work as expected.
Has anyone else experienced similar behavior?
I am using iOS 18.4.1.
https://developer.apple.com/documentation/avfoundation/avdepthdata/depthdataquality
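For reference, a minimal sketch of reading the per-buffer quality flag documented on the page above, assuming you already receive an AVDepthData instance from your capture output (this does not cover ARKit's per-point confidence map, which is a separate API):

import AVFoundation

// Log the quality flag on an AVDepthData instance.
func logDepthQuality(_ depthData: AVDepthData) {
    switch depthData.depthDataQuality {
    case .high:
        print("Depth data quality: high")
    case .low:
        print("Depth data quality: low")
    @unknown default:
        print("Depth data quality: unknown")
    }
}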
Topic: Spatial Computing / SubTopic: ARKit
I have an arguably massive project and am not sure if the issue is with the assets or my approach in the code.
The error says: Tool terminated due to error "SIGNAL 6: Abort trap: 6"
Basically I have around 15-20 assets (.usda files built out of .usdz files). In the code I load a scene with all the .usda files and then have functions to enable and disable a particular asset when needed (as sketched below).
This was working as intended when I was using dummy assets (with fewer polygons and textures).
But when I placed the actual assets, the error appears and persists. Is loading all the scenes at once a bad approach?
Previously I used an approach that loads scenes only when needed, which involved some lag before rendering the assets. My current approach (with the dummies) works smoothly, showing and hiding assets in real time with no lag.
Kindly suggest any workarounds.
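For reference, a minimal sketch of the load-once / toggle-on-demand approach described above; the scene and asset names, and the RealityKitContent bundle, are assumptions based on a typical visionOS Reality Composer Pro setup:

import RealityKit
import RealityKitContent

// Load the combined scene once; "MasterScene" is a hypothetical name.
func loadScene(into content: RealityViewContent) async {
    if let scene = try? await Entity(named: "MasterScene", in: realityKitContentBundle) {
        content.add(scene)
    }
}

// Show or hide an individual asset without reloading anything.
// "Asset_01" would be one of the ~15-20 usda assets.
func setAsset(named name: String, visible: Bool, in scene: Entity) {
    scene.findEntity(named: name)?.isEnabled = visible
}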
Topic: Spatial Computing / SubTopic: Reality Composer Pro
Tags: Reality Composer, AR/VR, RealityKit, Reality Composer Pro
I have two RealityViews: ParentView and ChildView. When I click the button in ParentView, ChildView is shown as a full screen cover, but the camera feed in ChildView is not shown, only a black screen.
If I show ChildView directly, it works with the camera feed.
Please help me with this issue. Thanks.
import RealityKit
import SwiftUI

struct ParentView: View {
    @State private var showIt = false

    var body: some View {
        ZStack {
            RealityView { content in
                // Virtual camera: renders only RealityKit content, no passthrough.
                content.camera = .virtual
                let box = ModelEntity(
                    mesh: MeshResource.generateSphere(radius: 0.2),
                    materials: [createSimpleMaterial(color: .red)]
                )
                content.add(box)
            }
            Button("Click here") {
                showIt = true
            }
        }
        .fullScreenCover(isPresented: $showIt) {
            ChildView()
                .overlay(
                    Button("Close") {
                        showIt = false
                    }.padding(20),
                    alignment: .bottomLeading
                )
        }
        .ignoresSafeArea(.all)
    }
}

import ARKit
import RealityKit
import SwiftUI

struct ChildView: View {
    var body: some View {
        RealityView { content in
            // Spatial tracking camera: should show the device camera feed.
            content.camera = .spatialTracking
        }
    }
}
Hi,
I'm looking to build something similar to the header blur in the App Store and Apple TV app settings. Does anyone know the best way to achieve this, so that when there is nothing behind the header it looks the same as the rest of the view background, but when content goes underneath it gets a blur effect? I've seen .scrollEdgeEffect on iOS 26; is there something similar for visionOS?
Thanks!
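One common approach (not necessarily what the App Store app itself uses) is to overlay a header with a material background on the scroll view, so the blur only becomes visible once content scrolls underneath it; a minimal sketch:

import SwiftUI

struct BlurHeaderDemo: View {
    var body: some View {
        ScrollView {
            LazyVStack {
                ForEach(0..<50) { i in
                    Text("Row \(i)")
                        .frame(maxWidth: .infinity)
                        .padding()
                }
            }
            // Leave room so content starts below the header.
            .padding(.top, 60)
        }
        .overlay(alignment: .top) {
            Text("Header")
                .frame(maxWidth: .infinity)
                .frame(height: 60)
                // The material blurs whatever scrolls underneath it.
                .background(.ultraThinMaterial)
        }
    }
}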
I am using ARKit to detect images in visionOS on Vision Pro. However, I have run into an issue when adding the reference images.
Some of my images cannot be added correctly sometimes. (As you can see in the picture above, the 'orange' cannot be added correctly, but the 'cup' can.) However, sometimes they are added without any problem. I do not know why this happens, and I want them all to be added reliably.
I'm getting the following error message when compiling the Apple-provided sample, the Spaceship game for Apple Vision Pro. I've already tried deleting the derived data, resetting the package cache, and restarting Xcode, but I'm still getting the following error: [xrsimulator] Exception thrown during compile: Cannot get rkassets content for path /Users/myoungkang/Downloads/CreatingASpaceshipGame/Packages/Studio/Sources/Studio/Studio.rkassets because 'The file “Studio.rkassets” couldn’t be opened because you don’t have permission to view it.'
error: Tool exited with code 1
Topic: Spatial Computing / SubTopic: General
I am allowing users to go through and capture different rooms and add a custom label to each room. Is there a way to store data about this in the captured room so that it persists into the final merge? As it is now, my users mark all their scans with custom labels, but after merging there is no way to tell which room is which, so they have to go through and manually add the labels back. For larger floor plans this is not ideal.
I need help to wrap my head around this...
If I import the Reality Composer Pro package and load it into an ARView, I will see 1.3gb of memory usage and about 180-220% cpu usage. The frames will start at around 60fps, and then eventually drop to around 30fps.
If I export the usdz from Reality Composer Pro and load that into the same ARView, I will see about 1gb of memory usage and around 150% cpu usage; fps holds longer at 60 but eventually drops.
If I load that same usdz into a QuickLook view, I will see about 55mb of memory usage, 9-11% cpu, and the frames stay locked at 116fps. The only thing I notice is the button I have is slightly less responsive, but it all still works fine.
I don't understand. How can I make the ARView work as efficiently as QuickLook?
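One thing that may be worth trying, as a sketch rather than a guaranteed fix: ARView enables several render passes that QuickLook's leaner preview pipeline does not, and RealityKit lets you opt out of them. Whether this closes the gap in your scene is something to profile:

import RealityKit
import UIKit

// Turn off optional render passes to reduce per-frame cost.
func makeLeanARView() -> ARView {
    let arView = ARView(frame: .zero)
    arView.renderOptions = [
        .disableMotionBlur,
        .disableDepthOfField,
        .disableCameraGrain,
        .disableHDR,
        .disableGroundingShadows
    ]
    return arView
}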
Could anyone share ideas or a node setup to implement a Gaussian blur in a Shader Graph material, with a blur size parameter? Thanks!
I'm working on an iOS app using ARKit and RealityKit where I scan QR codes and want to place 3D models at the exact position of the QR code in the real world.
Is it possible to accurately place a 3D model at the exact position of a QR code in AR using ARKit and RealityKit? Specifically, I want the model to appear at the precise location where the QR code is detected, rather than just somewhere in the AR space.
If this is possible, could you point me in the right direction or recommend the best approach to achieve this?
Thank you for your help!
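One approach that may point you in the right direction, sketched under the assumption that you detect the QR code with Vision on the current ARFrame and then raycast through its center to anchor the model; the view-point conversion shown is simplified:

import ARKit
import RealityKit
import Vision

// Detect a QR code in the current frame and anchor a model where the
// ray through the code's center hits real-world geometry.
// `arView` and `modelEntity` are assumed to exist elsewhere in your app.
func placeModel(onQRCodeIn arView: ARView, model modelEntity: Entity) {
    guard let frame = arView.session.currentFrame else { return }

    let request = VNDetectBarcodesRequest { request, _ in
        guard let qr = (request.results as? [VNBarcodeObservation])?
            .first(where: { $0.symbology == .qr }) else { return }

        DispatchQueue.main.async {
            // Convert the observation's normalized center into a view point.
            // This simple conversion assumes portrait orientation; a robust
            // version should use frame.displayTransform(for:viewportSize:).
            let center = CGPoint(x: qr.boundingBox.midX, y: 1 - qr.boundingBox.midY)
            let viewPoint = CGPoint(x: center.x * arView.bounds.width,
                                    y: center.y * arView.bounds.height)

            if let result = arView.raycast(from: viewPoint,
                                           allowing: .estimatedPlane,
                                           alignment: .any).first {
                let anchor = AnchorEntity(raycastResult: result)
                anchor.addChild(modelEntity)
                arView.scene.addAnchor(anchor)
            }
        }
    }

    let handler = VNImageRequestHandler(cvPixelBuffer: frame.capturedImage, orientation: .right)
    try? handler.perform([request])
}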
I am still not finding resources on how to replace hands in a fully immersive Space. I reached the goal by creating an ARKit session that detects the USDZ hand mesh joints and connects them to the hand-tracked joints, but I feel that's not the best solution. I really want to use RealityKit's potential to track and replace hands (with skinned USDZ ones) in an immersive environment, but the only resources I found are from November 2023.
Can someone help me?
Topic: Spatial Computing / SubTopic: Reality Composer Pro
I have tested the MagnifyGesture code below on multiple devices:
Vision Pro - working
iPhone - working
iPad - working
macOS - not working
In Reality Composer Pro, I have also added the below components to the test model entity:
Input Target
Collision
For macOS, I tried the trackpad pinch gesture and the mouse scroll wheel, but neither approach works. How can I resolve this issue? Thank you.
import SwiftUI
import RealityKit
import RealityKitContent

struct ContentView: View {
    var body: some View {
        RealityView { content in
            // Add the initial RealityKit content
            if let immersiveContentEntity = try? await Entity(named: "Immersive", in: realityKitContentBundle) {
                content.add(immersiveContentEntity)
            }
        }
        .gesture(MagnifyGesture()
            .targetedToAnyEntity()
            .onChanged(onMagnifyChanged)
            .onEnded(onMagnifyEnded))
    }

    func onMagnifyChanged(_ value: EntityTargetValue<MagnifyGesture.Value>) {
        print("onMagnifyChanged")
    }

    func onMagnifyEnded(_ value: EntityTargetValue<MagnifyGesture.Value>) {
        print("onMagnifyEnded")
    }
}
Topic: Spatial Computing / SubTopic: General
Randomly, the app does not work after small changes in Reality Composer, such as scaling an object a tiny bit.
To fix the error, I have to change another element in Reality Composer and hope for the best. If this does not help, I change (transform) something else, or deactivate/activate something to get the project working again. I can't see a pattern for why the Reality Composer project sometimes gets into a state where it no longer compiles.
Topic: Spatial Computing / SubTopic: Reality Composer Pro
Environment
Xcode: 16.2
visionOS SDK 2.4
Swift 6.1
Targets: Apple Vision Pro (immersive space)
Frameworks: ARKit, RealityKit, SwiftUI
What I’m Trying to Do
I have a view-model class PlacementManager that holds two AR providers:
private var worldTracking: WorldTrackingProvider
private var planeDetection: PlaneDetectionProvider
I want to dynamically replace these providers in a setEnvironment(_:) method (so I can save/clear a JSON scene and restart ARKit).
What’s Happening
If I declare them as:
private let worldTracking = WorldTrackingProvider()
private let planeDetection = PlaneDetectionProvider()
I get compile errors when I later do:
self.worldTracking = newWorldTracking // Cannot assign to property: 'worldTracking' is a 'let' constant
If I change them to uninitialized vars:
private var worldTracking: WorldTrackingProvider
private var planeDetection: PlaneDetectionProvider
then in my init() I get:
self used in property access 'worldTracking' before all stored properties are initialized
Code snippet
@Observable
final class PlacementManager: ObservableObject {
    private var worldTracking: WorldTrackingProvider
    private var planeDetection: PlaneDetectionProvider
    // … other props …

    @MainActor
    init() {
        // error: self.worldTracking used before init…
        planeAnchorHandler = PlaneAnchorHandler(rootEntity: root)
        persistenceManager = PersistenceManager(
            worldTracking: worldTracking,
            rootEntity: root
        )
        // …
    }

    @MainActor
    func setEnvironment(env: Environnement) async {
        let newWorldTracking = WorldTrackingProvider()
        let newPlaneDetection = PlaneDetectionProvider()
        try await appState!.arkitSession.run(
            [newWorldTracking, newPlaneDetection]
        )
        self.worldTracking = newWorldTracking
        self.planeDetection = newPlaneDetection
        // …
    }
}
What I’ve Tried
Giving them default values at declaration (= WorldTrackingProvider())
Initializing them at the top of init() before any use
Passing the new providers into arkitSession.run(...)
My Question
What is the recommended Swift-style pattern to declare and reassign these ARKit provider properties so that:
They’re fully initialized before use in init(), and
I can swap them out later in setEnvironment(...) without compiler errors?
Any pointers (or links to forum threads / docs) would be greatly appreciated!
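Not an authoritative answer, but one pattern that satisfies both constraints is to give the providers default values at declaration and simply reassign them when the environment changes, while making sure anything that captured the old provider (for example a PersistenceManager) is also handed the new instance, which may be why the default-value attempt still misbehaved. A minimal sketch with hypothetical names:

import ARKit
import Observation

@Observable
@MainActor
final class PlacementManagerSketch {
    // Default values make the properties usable inside init().
    private var worldTracking = WorldTrackingProvider()
    private var planeDetection = PlaneDetectionProvider()
    private var session = ARKitSession()

    func restartTracking() async throws {
        // Stop the old session and create fresh providers. Depending on how
        // ARKitSession behaves after stop(), a brand-new session is safest.
        session.stop()
        session = ARKitSession()
        worldTracking = WorldTrackingProvider()
        planeDetection = PlaneDetectionProvider()
        try await session.run([worldTracking, planeDetection])
        // Hand the new worldTracking to any object that held the old one here.
    }
}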