I posted https://developer.apple.com/forums/thread/809481 yesterday about an issue I discovered with pushWindow in visionOS 26.2 RC, but today I discovered a second problem with pushWindow.
If window A calls pushWindow to present window B, and the user pins window B to a wall, the following unexpected behaviors are observed:
Window B spontaneously disappears.
If the user re-launches the (still running) app from the visionOS home view, both window A and window B appear simultaneously. I assume only window B should be visible at this point, since window A pushed window B.
If the user closes window B, it's now impossible to present window B again. Calls to pushWindow appear to be ignored.
If the user force-quits the app and relaunches it, and pushWindow is called again, window B appears, but window A remains visible.
I also noticed this surprising behavior:
Once pushWindow enters this broken state, it affects every other app on the system that subsequently calls pushWindow, not just the app whose pushed window was pinned.
A workaround is to reboot the device; the system then behaves as expected until the next time the user pins a pushed window.
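For reference, the push/dismiss flow in question is the standard SwiftUI one; a minimal sketch (the window IDs and view names here are placeholders, not my actual app) looks like this:
import SwiftUI
@main
struct PushWindowDemoApp: App {
    var body: some Scene {
        WindowGroup(id: "windowA") { WindowAView() }
        WindowGroup(id: "windowB") { WindowBView() }
    }
}
struct WindowAView: View {
    // pushWindow hides this window and presents the pushed window in its place
    @Environment(\.pushWindow) private var pushWindow
    var body: some View {
        Button("Push window B") { pushWindow(id: "windowB") }
    }
}
struct WindowBView: View {
    // Dismissing window B is expected to bring window A back
    @Environment(\.dismissWindow) private var dismissWindow
    var body: some View {
        Button("Dismiss") { dismissWindow() }
    }
}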
I recently added pushWindow to my app, and I discovered that in visionOS 26.2 RC (23N301), pushWindow followed by dismissWindow no longer works as expected.
Specifically, if the user moves the pushed window, then when the pushed window is later dismissed, the parent window's position isn't aligned with the pushed window's new position. Its original position is restored instead.
Curiously, the bug only happens when an app is launched from the visionOS home view, and not when an app is launched from Xcode. It also doesn't happen in the visionOS 26.2 simulator.
Another interesting detail is that while the parent window is hidden, if the user long-presses the Digital Crown and then dismisses the pushed window, the parent window's position seems to be immune to the Digital Crown scene reorientation; it's restored to its original real-world position.
Demo video: https://youtu.be/zR3t2ON3Wz0
I've submitted feedback as FB21287011 with a sample app and detailed repro steps.
Has anyone else encountered this issue already and figured out a workaround? It would be nice if I could get pushWindow to work correctly in my app.
Thanks everybody! 😀
We're trying to switch from using main camera access in ARKit to screen capture with passthrough, but we're facing some issues and it's proving difficult to debug.
We have set up a broadcast extension and added logging to the sample handler, but nothing appears in the console and the recording never starts. We also set up the picker, and our extension shows up in Control Center as one of the choices, but tapping Start results in the recording stopping less than a second later.
The only (rather contradictory) messages we see in Console.app are the following:
[INFO] -[RPRecordingManager getSystemBroadcastExtensionInfo:]_block_invoke:1333 Extension has passthrough license
and immediately after:
[INFO] -[RPRecordingManager getSystemBroadcastExtensionInfo:]_block_invoke:1336 Extension does not have passthrough license
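For reference, the broadcast upload extension handler we're testing with is essentially the minimal one below (the class name and log subsystem are placeholders); none of its callbacks ever seem to log anything:
import ReplayKit
import CoreMedia
import OSLog
class SampleHandler: RPBroadcastSampleHandler {
    // Hypothetical subsystem/category, used only to find our messages in Console.app
    private let log = Logger(subsystem: "com.example.broadcast", category: "SampleHandler")
    override func broadcastStarted(withSetupInfo setupInfo: [String: NSObject]?) {
        log.info("broadcastStarted")
    }
    override func processSampleBuffer(_ sampleBuffer: CMSampleBuffer, with sampleBufferType: RPSampleBufferType) {
        if sampleBufferType == .video {
            log.debug("received video sample buffer")
        }
    }
    override func broadcastFinished() {
        log.info("broadcastFinished")
    }
}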
My app gets video from a UVC device, and I want to display it in an Immersive Space. But when I open the Immersive Space, the UVC capture just stops. AI suggested it's due to a conflict in the camera pipeline, but I don't really understand: I don't need any on-device camera, so why does it conflict with my UVC capture?
For the M2 Apple Vision Pro, there's "a general guideline, we recommend no more than 500 thousand triangles for an immersive scene, with 250 thousand for applications in the shared space." --https://developer.apple.com/videos/play/wwdc2024/10186/?time=147
Is there a revised recommendation for the M5 Apple Vision Pro?
Hi, I know it's possible to play equirectangular VR180 video, either SBS or MV-HEVC. For fisheye video, the only way I know of is to convert it into an AIVU for playback.
Is there any way to directly play fisheye video using AVPlayer? Thanks a lot!
We're developing an iOS application that integrates RoomCaptureSession with ARSCNView for room scanning. Our implementation differs from the standard RoomCaptureView because we need custom UI guidance with 3D dots placed in the scanning environment to guide users through the capture process.
Bug Description:
The application crashes when users attempt to scan multiple rooms or apartments in sequence. The crash specifically occurs with the following pattern:
User successfully scans first room with multiple hotspots (working correctly)
User stops scanning, moves to a new room
In the new room, first 1-2 hotspots work correctly
Application crashes when attempting to scan additional hotspots
Technical Details:
Error: SLAM Anchor assertion failure in SlamAnchor.cpp:37 : HasValidPose()
Crash occurs in Thread 27 with CAPIDetectionOutputFwdNode
Error suggests invalid positioning when placing AR anchors
Steps to Reproduce:
Start room scan
Complete multiple hotspot captures in first room
Stop scanning
Start new room scan
Capture 1-2 hotspots successfully
Attempt additional hotspot captures -> crashes
Attempted Solutions:
Implemented anchor cleanup between sessions
Added position validation before anchor placement
Implemented ARSession error handling
Added proper thread management for AR operations
Environment:
Device: iPhone 14 Pro (LiDAR equipped)
iOS Version: 18.1.1 (22B91)
Testing through TestFlight
Crash Log Details:
Exception Type: EXC_CRASH (SIGABRT)
Exception Codes: 0x0000000000000000, 0x0000000000000000
Exception Note: EXC_CORPSE_NOTIFY
Triggered by Thread: 27
Thread 27 Crashed:
0 libsystem_kernel.dylib 0x00000001f0cc91d4 __pthread_kill + 8
1 libsystem_pthread.dylib 0x0000000228e12ef8 pthread_kill + 268
2 libsystem_c.dylib 0x00000001a86bbad8 abort + 128
3 AppleCV3D 0x0000000234d71a28 cv3d::vio::capi::SlamAnchor::SlamAnchor
Question:
Is there a recommended approach for handling multiple room captures with custom ARSCNView integration? The standard RoomCaptureView implementation doesn't show this behavior, but we need the custom guidance functionality that ARSCNView provides.
Crash Log
Code and full crash logs can be provided if needed.
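For reference, the "anchor cleanup between sessions" mentioned above is along the lines of the sketch below (roomCaptureSession and arSession are placeholders for the objects our real implementation owns, arSession being the ARSession backing our ARSCNView):
import ARKit
import RoomPlan
// Called when the user stops scanning one room and moves on to the next.
func restartForNextRoom(roomCaptureSession: RoomCaptureSession, arSession: ARSession) {
    // Stop the RoomPlan capture for the finished room
    roomCaptureSession.stop()
    // Remove our custom hotspot anchors so none of them carry over
    for anchor in arSession.currentFrame?.anchors ?? [] {
        arSession.remove(anchor: anchor)
    }
    // Begin a fresh capture for the next room
    roomCaptureSession.run(configuration: RoomCaptureSession.Configuration())
}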
Sorry for the cross-post but it's now two days in and this isn't fixed.
If you try to use Xcode 16.3b3 with visionOS, it won't download the visionOS SDK; it gives a 'network error', so you can't use the latest beta for Apple Vision Pro.
FB16927025
FB16917874
FB16910449
Hi, I'm developing an app for Vision Pro using Xcode. After installing the latest update, things that worked in my app suddenly stopped working.
In my app flow I'm tapping spheres to get their positions, and for some reason there's an offset between where I tap and where the marker for that position shows up.
Here's the part of the code that does that, and the part responsible for the alignment that happens afterwards:
func loadMainScene(at position: SIMD3<Float>) async {
guard let content = self.content else { return }
do {
let rootEntity = try await Entity(named: "surgery 16.09", in: realityKitContentBundle)
rootEntity.scale = SIMD3<Float>(repeating: 0.5)
rootEntity.generateCollisionShapes(recursive: true)
self.modelRootEntity = rootEntity
let bounds = rootEntity.visualBounds(relativeTo: nil)
print("📏 Model bounds: center=\(bounds.center), extents=\(bounds.extents)")
let pivotEntity = Entity()
pivotEntity.addChild(rootEntity)
self.pivotEntity = pivotEntity
let modelAnchor = AnchorEntity(world: [1, 1.3, -0.8])
modelAnchor.addChild(pivotEntity)
content.add(modelAnchor)
updateModelOpacity(0.5)
self.modelAnchor = modelAnchor
rootEntity.visit { entity in
print("👀 Entity in model: \(entity.name)")
if entity.name.lowercased().hasPrefix("focus") {
entity.generateCollisionShapes(recursive: true)
entity.components.set(InputTargetComponent())
print("🎯 Made tappable: \(entity.name)")
}
}
print("✅ Model loaded with collisions")
guard let sphere = placementSphere else { return }
let sphereWorldXform = sphere.transformMatrix(relativeTo: nil)
var newXform = sphereWorldXform
newXform.columns.3.y += 0.1 // move up by 10 cm
let gridAnchor = AnchorEntity(world: newXform)
self.gridAnchor = gridAnchor
content.add(gridAnchor)
let baseScene = try await Entity(named: "Scene", in: realityKitContentBundle)
let gridSizeX = 18
let gridSizeY = 10
let gridSizeZ = 10
let spacing: Float = 0.05
let startX: Float = -Float(gridSizeX - 1) * spacing * 0.5 + 0.3
let startY: Float = -Float(gridSizeY - 1) * spacing * 0.5 - 0.1
let startZ: Float = -Float(gridSizeZ - 1) * spacing * 0.5
for i in 0..<gridSizeX {
for j in 0..<gridSizeY {
for k in 0..<gridSizeZ {
if j < 2 || j > gridSizeY - 5 { continue } // remove 2 bottom, 4 top
let cell = baseScene.clone(recursive: true)
cell.name = "Sphere"
cell.scale = .one * 0.02
cell.position = [
startX + Float(i) * spacing,
startY + Float(j) * spacing,
startZ + Float(k) * spacing
]
cell.generateCollisionShapes(recursive: true)
gridCells.append(cell)
gridAnchor.addChild(cell)
}
}
}
content.add(gridAnchor)
print("✅ Grid added")
} catch {
print("❌ Failed to load: \(error)")
}
}
private func handleModelOrGridTap(_ tappedEntity: Entity) {
guard let modelRootEntity = modelRootEntity else { return }
let localPosition = tappedEntity.position(relativeTo: modelRootEntity)
let worldPosition = tappedEntity.position(relativeTo: nil)
switch tapStep {
case 0:
modelPointA = localPosition
modelAnchor?.addChild(createMarker(at: worldPosition, color: [1, 0, 0]))
print("📍 Model point A: \(localPosition)")
tapStep += 1
case 1:
modelPointB = localPosition
modelAnchor?.addChild(createMarker(at: worldPosition, color: [1, 0.5, 0]))
print("📍 Model point B: \(localPosition)")
tapStep += 1
case 2:
targetPointA = worldPosition
targetMarkerA = createMarker(at: worldPosition,color: [0, 1, 0])
modelAnchor?.addChild(targetMarkerA!)
print("✅ Target point A: \(worldPosition)")
tapStep += 1
case 3:
targetPointB = worldPosition
targetMarkerB = createMarker(at: worldPosition,color: [0, 0, 1])
modelAnchor?.addChild(targetMarkerB!)
print("✅ Target point B: \(worldPosition)")
alignmentReady = true
tapStep += 1
default:
print("⚠️ Unexpected tap on model helper at step \(tapStep)")
}
}
func alignModel2Points() {
guard let modelPointA = modelPointA,
let modelPointB = modelPointB,
let targetPointA = targetPointA,
let targetPointB = targetPointB,
let modelRootEntity = modelRootEntity,
let pivotEntity = pivotEntity,
let modelAnchor = modelAnchor else {
print("❌ Missing data for alignment")
return
}
let modelVec = modelPointB - modelPointA
let targetVec = targetPointB - targetPointA
let modelLength = length(modelVec)
let targetLength = length(targetVec)
let scale = targetLength / modelLength
let modelDir = normalize(modelVec)
let targetDir = normalize(targetVec)
var axis = cross(modelDir, targetDir)
let axisLength = length(axis)
var rotation = simd_quatf()
if axisLength < 1e-6 {
if dot(modelDir, targetDir) > 0 {
rotation = simd_quatf(angle: 0, axis: [0,1,0])
} else {
let up: SIMD3<Float> = [0,1,0]
axis = cross(modelDir, up)
if length(axis) < 1e-6 {
axis = cross(modelDir, [1,0,0])
}
rotation = simd_quatf(angle: .pi, axis: normalize(axis))
}
} else {
let dotProduct = dot(modelDir, targetDir)
let clampedDot = max(-1.0, min(dotProduct, 1.0))
let angle = acos(clampedDot)
rotation = simd_quatf(angle: angle, axis: normalize(axis))
}
modelRootEntity.scale = .one * scale
modelRootEntity.orientation = rotation
let transformedPointA = rotation.act(modelPointA * scale)
pivotEntity.position = -transformedPointA
modelAnchor.position = targetPointA
alignedModelPosition = modelAnchor.position
print("✅ Aligned with scale \(scale), rotation \(rotation)")
Hi All,
We're a studio building an app, and as part of one scene we have a 3D asset with a smoke particle emitter and a curved mesh that plays video. When the video plays alone or the particle effect runs alone, the scene works fine, but the frame rate drops drastically when both are enabled.
How do I solve this? It's an important storytelling feature.
I am trying to apply an ImpulseAction to an entity, but every time entity.playAnimation(impulseAnimation) is executed, the log says Cannot find a BindPoint for any bind path: "". I can't figure out what is wrong. Could someone please help me with this?
import SwiftUI
import RealityKit
import RealityKitContent
struct ImmersiveView: View {
var body: some View {
RealityView { content in
// Add the initial RealityKit content
if let immersiveContentEntity = try? await Entity(named: "Immersive", in: realityKitContentBundle), var sphere = immersiveContentEntity.findEntity(named: "Sphere") {
sphere.components.set(CollisionComponent(shapes: [ShapeResource.generateSphere(radius: 0.1)]))
sphere.components.set(PhysicsBodyComponent(shapes: [ShapeResource.generateSphere(radius: 0.1)], mass: 1000))
sphere.components[PhysicsBodyComponent.self]?.isAffectedByGravity = false
sphere.position = [0, 1, -1]
content.add(immersiveContentEntity)
// Create an action to apply an impulse, forcing the object to move upwards.
let impulseAction = ImpulseAction(linearImpulse: [0, 1, 0])
// Create a small positive duration value.
let duration: TimeInterval = 1 / 30.0
// Create an animation for the action, which will start playing
// after five seconds.
do {
let impulseAnimation = try AnimationResource
.makeActionAnimation(for: impulseAction,
duration: duration,
delay: 5.0)
// Play the sequence animation that will play the actions.
sphere.playAnimation(impulseAnimation)
} catch {
print("Error: \(error)")
}
}
}
}
}
All the logs:
Could not locate file 'default-binaryarchive.metallib' in bundle.
Error creating the CFMessagePort needed to communicate with PPT.
AddInstanceForFactory: No factory registered for id <CFUUID 0x6000029a5b80> F8BB1C28-BAE8-11D6-9C31-00039315CD46
cannot add handler to 0 from 1 - dropping
nw_socket_copy_info [C1:2] getsockopt TCP_INFO failed [102: Operation not supported on socket]
nw_socket_copy_info getsockopt TCP_INFO failed [102: Operation not supported on socket]
Registering library (/Library/Developer/CoreSimulator/Volumes/xrOS_22N840/Library/Developer/CoreSimulator/Profiles/Runtimes/xrOS 2.2.simruntime/Contents/Resources/RuntimeRoot/System/Library/PrivateFrameworks/CoreRE.framework/default.metallib) that already exists in shader manager. Library will be overwritten.
cannot add handler to 0 from 1 - dropping
Cannot find a BindPoint for any bind path: "", ""
Sync object without snapshot while removing view (id: 2816861686082450363, type: 6373420419761316588[SelectableSceneContentIdentifierComponent]).
But I think only Cannot find a BindPoint for any bind path: "", "" is relevant.
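In case it's useful while this is unresolved, a workaround sketch that avoids the action system entirely would be to drive the sphere through its physics components instead of an ImpulseAction. This is an assumption on my part, not a documented fix; it would replace the do/catch block inside the RealityView closure above:
// Hypothetical workaround: after the same 5-second delay, give the sphere an
// initial upward velocity via PhysicsMotionComponent instead of an ImpulseAction.
DispatchQueue.main.asyncAfter(deadline: .now() + 5.0) {
    var motion = sphere.components[PhysicsMotionComponent.self] ?? PhysicsMotionComponent()
    motion.linearVelocity = [0, 1, 0] // metres per second, straight up
    sphere.components.set(motion)
}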
I am currently developing an app for visionOS and have encountered an issue involving a component and system that moves an entity up and down within a specific Y-axis range. The system works as expected until I introduce sound playback using AVAudioPlayer.
Whenever I use AVAudioPlayer to play sound, the entity exhibits unexpected behaviors, such as freezing or becoming unresponsive. The freezing of the entity's movement is particularly noticeable when playing the audio for the first time. After that, it becomes less noticeable, but you can still feel it, especially when the audio is played in quick succession.
Also, the issue is more noticeable on a real device than in the simulator.
//
// IssueApp.swift
// Issue
//
// Created by Zhendong Chen on 2/1/25.
//
import SwiftUI
@main
struct IssueApp: App {
var body: some Scene {
WindowGroup {
ContentView()
}
.windowStyle(.volumetric)
}
}
//
// ContentView.swift
// Issue
//
// Created by Zhendong Chen on 2/1/25.
//
import SwiftUI
import RealityKit
import RealityKitContent
struct ContentView: View {
@State var enlarge = false
var body: some View {
RealityView { content, attachments in
// Add the initial RealityKit content
if let scene = try? await Entity(named: "Scene", in: realityKitContentBundle) {
if let sphere = scene.findEntity(named: "Sphere") {
sphere.components.set(UpAndDownComponent(speed: 0.03, minY: -0.05, maxY: 0.05))
}
if let button = attachments.entity(for: "Button") {
button.position.y -= 0.3
scene.addChild(button)
}
content.add(scene)
}
} attachments: {
Attachment(id: "Button") {
VStack {
Button {
SoundManager.instance.playSound(filePath: "apple_en")
} label: {
Text("Play audio")
}
.animation(.none, value: 0)
.fontWeight(.semibold)
}
.padding()
.glassBackgroundEffect()
}
}
.onAppear {
UpAndDownSystem.registerSystem()
}
}
}
//
// SoundManager.swift
// LinguaBubble
//
// Created by Zhendong Chen on 1/14/25.
//
import Foundation
import AVFoundation
class SoundManager {
static let instance = SoundManager()
private var audioPlayer: AVAudioPlayer?
func playSound(filePath: String) {
guard let url = Bundle.main.url(forResource: filePath, withExtension: ".mp3") else { return }
do {
audioPlayer = try AVAudioPlayer(contentsOf: url)
audioPlayer?.play()
} catch let error {
print("Error playing sound. \(error.localizedDescription)")
}
}
}
//
// UpAndDownComponent+System.swift
// Issue
//
// Created by Zhendong Chen on 2/1/25.
//
import RealityKit
struct UpAndDownComponent: Component {
var speed: Float
var axis: SIMD3<Float>
var minY: Float
var maxY: Float
var direction: Float = 1.0 // 1 for up, -1 for down
var initialY: Float?
init(speed: Float = 1.0, axis: SIMD3<Float> = [0, 1, 0], minY: Float = 0.0, maxY: Float = 1.0) {
self.speed = speed
self.axis = axis
self.minY = minY
self.maxY = maxY
}
}
struct UpAndDownSystem: System {
static let query = EntityQuery(where: .has(UpAndDownComponent.self))
init(scene: RealityKit.Scene) {}
func update(context: SceneUpdateContext) {
let deltaTime = Float(context.deltaTime) // Time between frames
for entity in context.entities(matching: Self.query, updatingSystemWhen: .rendering) {
guard var component: UpAndDownComponent = entity.components[UpAndDownComponent.self] else { continue }
// Ensure we have the initial Y value set
if component.initialY == nil {
component.initialY = entity.transform.translation.y
}
// Calculate the current position
let currentY = entity.transform.translation.y
// Move the entity up or down
let newY = currentY + (component.speed * component.direction * deltaTime)
// If the entity moves out of the allowed range, reverse the direction
if newY >= component.initialY! + component.maxY {
component.direction = -1.0 // Move down
} else if newY <= component.initialY! + component.minY {
component.direction = 1.0 // Move up
}
// Apply the new position
entity.transform.translation = SIMD3<Float>(entity.transform.translation.x, newY, entity.transform.translation.z)
// Update the component with the new direction
entity.components[UpAndDownComponent.self] = component
}
}
}
Could someone help me with this?
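One workaround sketch, assuming (not confirmed) that the hitch comes from AVAudioPlayer being created and decoded on the main thread at play time: load and prepare the player off the main thread ahead of playback, so play() itself stays cheap. The class below is a hypothetical variant of the SoundManager above:
import Foundation
import AVFoundation
final class PreloadingSoundManager {
    static let instance = PreloadingSoundManager()
    private var audioPlayer: AVAudioPlayer?
    private let loadingQueue = DispatchQueue(label: "sound-loading")
    // Call this early (e.g. when the view appears), before the button is tapped
    func preloadSound(filePath: String) {
        loadingQueue.async { [weak self] in
            guard let url = Bundle.main.url(forResource: filePath, withExtension: "mp3") else { return }
            do {
                let player = try AVAudioPlayer(contentsOf: url)
                player.prepareToPlay() // decode buffers now, not at play time
                DispatchQueue.main.async { self?.audioPlayer = player }
            } catch {
                print("Error loading sound. \(error.localizedDescription)")
            }
        }
    }
    func playPreloadedSound() {
        audioPlayer?.play()
    }
}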
Hi,
we've developed an app for Vision Pro that utilises the GroupActivities SDK to provide shared experiences for our users.
Remote Participation works great, but we can't get nearby sharing to work.
The behaviour we're observing:
User 1 engages share sheet from Volume, 2nd Vision Pro is visible.
User 1 starts nearby sharing
Session initialisation runs for approx. 30 seconds, then fails
Sometimes, the nearby participant doesn't show up at all after the initialisation has failed once.
As stated in the Configure your visionOS app for sharing with people nearby article, we didn't make any changes to our implementation to support nearby sharing.
Any help would be greatly appreciated.
Kind regards,
David
Hi there.
Thanks to amazing help from you guys, I've managed to code a 360 image carousel, where the user can browse 360 images located inside the project package.
Is there a way to access the filesystem on AVP outside the app?
I know about FileManager, and I can get access to the .documentsDirectory, but how do I access the Documents folder from the "Files" app on the AVP?
My goal is to read images from a hardcoded folder location on the AVP, so that the user never has to select the images themselves.
I know this may not be the "right" way to do this. The app is supposed to be "foolproof" with a minimum of user interaction.
The only way to change the images should be to change the contents of the hardcoded image folder.
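To make the question concrete, my current understanding (which may well be wrong) is that exposing the app's Documents folder in the Files app takes two Info.plist keys, and that reading the hardcoded folder would then look something like the sketch below (the "Panoramas" folder name is just an example):
// Info.plist keys (so Documents shows up in Files under "On My Apple Vision Pro"):
//   UIFileSharingEnabled = YES               ("Application supports iTunes file sharing")
//   LSSupportsOpeningDocumentsInPlace = YES  ("Supports opening documents in place")
import Foundation
func loadPanoramaURLs() -> [URL] {
    // Documents/Panoramas is the hardcoded folder the user drops images into via Files
    let folder = URL.documentsDirectory.appendingPathComponent("Panoramas", isDirectory: true)
    let contents = (try? FileManager.default.contentsOfDirectory(
        at: folder,
        includingPropertiesForKeys: nil
    )) ?? []
    // Keep only common image extensions
    return contents.filter { ["jpg", "jpeg", "png", "heic"].contains($0.pathExtension.lowercased()) }
}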
I hope this makes sense =)
Thanks in advance!
Regards,
Kim
Hi, I have a monitoring app that takes input video from a UVC device, processes it using Metal, and eventually produces an MTLTexture.
The problem I'm facing is that I have to convert the MTLTexture to a CGImage and then call TextureResource.replace, which is super slow. The Metal processing keeps up with the input frame rate (50 fps), but the MTLTexture -> CGImage -> TextureResource path only reaches about 7 fps...
Is there any way I can make it faster?
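One direction I'm considering (unverified, so treat it as a sketch rather than a known-good answer) is TextureResource.DrawableQueue, which should let each processed MTLTexture be blitted straight into the texture RealityKit samples, skipping CGImage entirely:
import RealityKit
import Metal
// Sketch: back the material's existing TextureResource with a DrawableQueue and
// copy each processed MTLTexture into it with a blit, instead of going through CGImage.
final class LiveTextureUpdater {
    private let drawableQueue: TextureResource.DrawableQueue
    private let commandQueue: MTLCommandQueue
    // `textureResource` is the texture already assigned to the material
    init(textureResource: TextureResource, device: MTLDevice, width: Int, height: Int) throws {
        let descriptor = TextureResource.DrawableQueue.Descriptor(
            pixelFormat: .bgra8Unorm,
            width: width,
            height: height,
            usage: [.shaderRead, .renderTarget],
            mipmapsMode: .none
        )
        drawableQueue = try TextureResource.DrawableQueue(descriptor)
        textureResource.replace(withDrawables: drawableQueue)
        commandQueue = device.makeCommandQueue()!
    }
    // Call once per processed frame with the MTLTexture the Metal pipeline produced
    func presentFrame(_ sourceTexture: MTLTexture) {
        guard let drawable = try? drawableQueue.nextDrawable(),
              let commandBuffer = commandQueue.makeCommandBuffer(),
              let blit = commandBuffer.makeBlitCommandEncoder() else { return }
        blit.copy(from: sourceTexture, to: drawable.texture)
        blit.endEncoding()
        commandBuffer.commit()
        drawable.present()
    }
}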
Hello experts, and question seekers,
I have been trying to get Gaussian splats working with RealityKit, but it doesn't seem to be working out for me.
The library I use for Gaussian splatting: https://github.com/scier/MetalSplatter
My idea was to use the renderers provided by RealityKit (aka RealityRenderer) https://developer.apple.com/documentation/realitykit/realityrenderer and the renderer provided by MetalSplatter (aka. SplatRenderer) https://github.com/scier/MetalSplatter/blob/main/MetalSplatter/Sources/SplatRenderer.swift
Then with a custom render pipeline, I would be able to compose the outputs of the renderers, enabling the possibility, for example to build immersive scenery with realistic environment scans, as Gaussian splats, and RealityKit to provide the necessary features to build extra scenery around Gaussian splats, eg. dynamic 3D models inside Gaussian splats.
However the problem is, as of now I am not able to do that with the current implementation of RealityRenderer.
First, it seems that RealityRenderer is an API intended only to render colour information onto a texture, which at first glance might be useful, but it misses important information such as depth and stencil.
The second issue is that, even with that in mind, I am currently not able to execute RealityRenderer.updateAndRender, due to the following error messages:
Could not resolve material name 'engine:BuiltinRenderGraphResources/Common/realityRendererBackground.rematerial' in bundle at '/Users//Library/Developer/CoreSimulator/Devices//data/Containers/Bundle/Application//.app'. Loading via asset path.
exiting spatial tracking service update thread because wait returned 37
I was able to build a custom Metal view with UIViewRepresentable, MTKView, and MTKViewDelegate, enabling me to build a custom rendering pipeline, by utilising some of the Metal developer workflows.
Reference: https://developer.apple.com/documentation/xcode/metal-developer-workflows/
Inside draw(in:), in a class conforming to MTKViewDelegate:
guard let currentDrawable = view.currentDrawable else {
return
}
let realityRenderer = try! RealityRenderer()
try! realityRenderer.updateAndRender(deltaTime: 0.0, cameraOutput: .init(.singleProjection(colorTexture: currentDrawable.texture)), whenScheduled: { realityRenderer in
print("Rendering scheduled")
}, onComplete: { RealityRenderer in
print("Rendering completed")
})
Can you please tell me, what I am doing wrong?
Is there any solution, that enables me to use RealityKit with for example Gaussian splats?
Any help is greatly appreciated.
All the best,
Ethem Kurt
let component = GestureComponent(DragGesture())
iOS: ☑️
visionOS: ❌
This bug has persisted from beta through the public release; please fix it.
HoverEffectComponent works fine on macOS 15 and iOS 18 when using RealityView, but seems to be ignored when ARView is used (even when wrapped in a SwiftUI UIViewRepresentable).
Feedback ID: FB15080805
I am using HelloPhotogrammetry in Xcode
I can make one model with something like HelloPhotogrammetry.main([path_to_folder_of_images, path_to_output/model.usdz, "-d", "medium", "-o", "unordered", "-f", "high"])
But how would I request several models simultaneously? I only want to vary the detail.
[ ("/Users/you/Desktop/model_medium.usdz", detail: .medium), ("/Users/you/Desktop/model_full.usdz", detail: .full), ("/Users/you/Desktop/model_raw.usdz", detail: .raw ]
Hi,
I am in the process of implementing SharePlay into our app. The shared experience opens an Immersive Space and we set systemCoordinator.configuration.supportsGroupImmersiveSpace = true
Now visionOS establishes a shared coordinate space for the immersive space.
From the docs:
To achieve consistent positioning of RealityKit entities across multiple devices in an immersive space during a SharePlay session
There are cases where we want to position content in front of the user (independent of the shared session, and for each user individually). Normally to do that we use the transform retrieved via worldTrackingProvider.queryDeviceAnchor.originFromAnchorTransform
to position content in front of the user (plus some Z Offset and smooth interpolation).
This works fine in non-SharePlay instances, and the device transform is where I would expect it to be, but during the FaceTime call deviceAnchor.originFromAnchorTransform seems to use the shared origin of the immersive space, and I end up with a transform that can be offset.
Here is a video of the issue in action: https://streamable.com/205r2p
The blue rect is place using AnchorEntity(.head, trackingMode: .continuous). This works regardless of the call and the entity is always placed based on the head position.
The green rect is adjusted on every frame using the transform I get from worldTrackingProvider.queryDeviceAnchor. As you can see it's offset.
Is there any way I can query this transform locally for the user during a FaceTime call?
Also I would like to know if it's possible to disable this automatic entity transform syncing behavior?
Setting entity.synchronization = nil results in the entity not showing up at all.
https://developer.apple.com/documentation/realitykit/synchronizationcomponent
Is SynchronizationComponent only relevant for the legacy MultiPeerConnectivity approach?
Thank you!
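For reference, the head-anchored placement that keeps working during the call (the blue rect in the video) is essentially the sketch below; the panel size and offset are placeholders:
import RealityKit
// Parent the content to a head anchor instead of repositioning it every frame
// from queryDeviceAnchor; this stays correct regardless of the FaceTime call.
func makeHeadLockedPanel() -> AnchorEntity {
    let headAnchor = AnchorEntity(.head, trackingMode: .continuous)
    let panel = ModelEntity(
        mesh: .generatePlane(width: 0.4, height: 0.25),
        materials: [SimpleMaterial(color: .blue, isMetallic: false)]
    )
    panel.position = [0, 0, -0.8] // 0.8 m in front of the head
    headAnchor.addChild(panel)
    return headAnchor
}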