When assigning a ManipulationComponent to an Entity, SceneEvents.WillRemoveEntity is called for that Entity.
Expected behavior: the Entity is not removed from the Scene (even temporarily), and no SceneEvents are triggered as a result of assigning a ManipulationComponent.
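A minimal sketch of the repro, assuming a RealityView content instance ("content") and an entity already added to it (both placeholders):

import RealityKit

// Watch for entities about to be removed from the scene.
let subscription = content.subscribe(to: SceneEvents.WillRemoveEntity.self) { event in
    print("WillRemoveEntity fired for \(event.entity.name)")
}

// Merely adding the component triggers the event above, even though the entity stays in the scene.
entity.components.set(ManipulationComponent())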
FB20872220
I have been digging through the docs and the developer videos, and I have noticed a mention of RealityView having some potential limitations with anchors and world tracking. However, I haven't been able to find an answer.
Does anyone know (or can point me to) whether RealityView supports everything ARView does, and if not, what the differences are?
I was fooling around with RealityView today with a simple plane anchor, and the stability of that anchor didn't seem to be as steady as I recall ARView being in the past on iPhone.
I'm trying to determine whether I should roll over to RealityView or stay with ARView for this little educational project. I would imagine the answer is to go with RealityView, but I want to make sure I'm not setting myself up for failure based on any current limitations for anchors and world data.
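For reference, this is roughly the kind of setup I was testing (a minimal iOS sketch; sizes and materials are placeholders):

import SwiftUI
import RealityKit

struct PlaneAnchorView: View {
    var body: some View {
        RealityView { content in
            // Use the device camera and world tracking (iOS 18+ RealityView).
            content.camera = .spatialTracking

            // Anchor a simple box to the first horizontal plane found.
            let anchor = AnchorEntity(.plane(.horizontal,
                                             classification: .any,
                                             minimumBounds: [0.2, 0.2]))
            let box = ModelEntity(mesh: .generateBox(size: 0.1),
                                  materials: [SimpleMaterial(color: .blue, isMetallic: false)])
            anchor.addChild(box)
            content.add(anchor)
        }
    }
}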
Topic: Spatial Computing, SubTopic: ARKit
Hello Community,
I am currently developing an experimental visionOS app to investigate the social effects of the new Spatial Persona feature for my bachelor thesis. My setup includes a simple board game in which the participants can engage with their persona avatars.
I tried to use TabletopKit for this setup, but ran into issues when starting the SharePlay session. When I tested my app, I couldn't see the other spatial persona anymore, despite the green SharePlay button indicating the session had started. The other person can see my actions on the board in their version of the app, but cannot interact with anything. Also, we are both seated in the default seat.
I tried removing the environment I added, because it doesn't seem to sync with the other player. When I tried the FaceTime feature in the simulator without the environment, I could see the test robot avatar, but in a totally wrong place. It seems like it isn't just my environment occluding the seats, but a flaw in the seating process as well.
When I tried the FaceTime feature in the simulator with the official test scene (the TabletopKit sample), I got the same incorrect placement and the warning "role(for:inSeatNumber:): The provided role identifier does not match a role in the current template."
So my questions are:
What needs to be changed so the TabletopKit can handle seating correctly?
How can I correctly use immersive scenes in combination with the TabletopKit?
I tried to keep my implementation as close as possible to the TabletopKit example, so I think it should be enough to look into that codebase for now.
I debugged the positions of the seats, and they are placed correctly in front of their equipment. The personas are just not placed on them.
hi guys,
I'm working in the VFX industry, and I have a question: is it possible to create immersive video directly from a virtual scene created in DCC software like Maya, render it into footage, encode it as immersive video, and finally play it on Vision Pro?
thanks.
Topic: Spatial Computing, SubTopic: General
Platform: visionOS 2.6
Frameworks: RealityKit, SwiftUI
Component: ImagePresentationComponent
I’m working with the new ImagePresentationComponent from visionOS 26 and hitting a rendering limitation when switching to .spatialStereoImmersive viewing mode within a WindowGroup context.
This is what I’m seeing:
Pure immersive space: ImagePresentationComponent with .spatialStereoImmersive mode works perfectly in a standalone ImmersiveSpace
Mode switching API: All mode transitions work correctly (logs confirm the component updates)
Spatial content: .spatialStereo mode renders correctly in both window and immersive contexts.
This is where it’s breaking for me:
Window context: When the same RealityView + ImagePresentationComponent is placed inside a WindowGroup (even when that window is floating in a mixed immersive space), switching to .spatialStereoImmersive mode shows no visual change
The API calls succeed, state updates correctly, but the immersive content doesn’t render.
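For concreteness, the window-hosted setup is essentially this (a simplified sketch; the file name, entity, and toggle are placeholders rather than my actual code):

import SwiftUI
import RealityKit

struct SpatialPhotoWindowView: View {
    @State private var isImmersive = false
    private let photoEntity = Entity()

    var body: some View {
        VStack {
            RealityView { content in
                // Load a spatial photo into an ImagePresentationComponent (visionOS 26).
                if let url = Bundle.main.url(forResource: "spatialPhoto", withExtension: "heic"),
                   var component = try? await ImagePresentationComponent(contentsOf: url) {
                    component.desiredViewingMode = .spatialStereo
                    photoEntity.components.set(component)
                    content.add(photoEntity)
                }
            } update: { _ in
                // The mode switch succeeds, but no immersive rendering appears
                // while the RealityView is hosted in a WindowGroup.
                if var component = photoEntity.components[ImagePresentationComponent.self] {
                    component.desiredViewingMode = isImmersive ? .spatialStereoImmersive : .spatialStereo
                    photoEntity.components.set(component)
                }
            }
            Toggle("Immersive", isOn: $isImmersive)
                .toggleStyle(.button)
        }
    }
}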
Apple’s Spatial Gallery demonstrates exactly what I’m trying to achieve:
Spatial photos displayed in a window, in what feels like a horizontal scroll view, with the system window control bar, etc.
Tapping a spatial photo smoothly transitions it to immersive mode in-place.
The immersive content appears to “grow” from the original window position by just changing IPC viewing modes.
This proves the functionality should be possible, but I can’t determine the correct configuration.
So, my questions are:
Is there a specific RealityView or WindowGroup configuration required to enable immersive content rendering from window contexts that you know of?
Are there bounds/clipping settings that need to be configured to allow immersive content to “break out” of window constraints?
Does .spatialStereoImmersive require a specific rendering context that’s not available in windowed RealityView instances?
How do you think Apple's Spatial Gallery app achieves this functionality?
For a little more context:
All viewing modes are available: [.mono, .spatialStereo, .spatialStereoImmersive]
The spatial photos are valid and work correctly in pure immersive space
Mixed immersive space is active when testing window context
No errors or warnings in console beyond the successful mode switching logs I’m getting
Any insights into the proper configuration for window-hosted immersive content would be appreciated.
Hello,
There are odd artifacts (one looks like an image placeholder) appearing when dismissing an immersive space that is displaying an ImagePresentationComponent. Both artifacts look like widgets.
See below for our simple code displaying the ImagePresentationComponent, and images of the odd artifacts that appear briefly when dismissing the immersive space.
import OSLog
import RealityKit
import SwiftUI

struct ImmersiveImageView: View {
    let logger = Logger(subsystem: AppConstant.SUBSYSTEM, category: "ImmersiveImageView")

    @Environment(AppModel.self) private var appModel

    var body: some View {
        RealityView { content in
            if let currentMedia = appModel.currentMedia,
               var imagePresentationComponent = currentMedia.imagePresentationComponent {
                let imagePresentationComponentEntity = Entity()

                switch currentMedia.type {
                case .iphoneSpatialMovie:
                    logger.info("\(#function) \(#line) spatial3DImmersive display for \(String(describing: currentMedia))")
                    imagePresentationComponent.desiredViewingMode = .spatial3DImmersive

                case .twoD:
                    logger.info("\(#function) \(#line) spatial3DImmersive display for \(String(describing: currentMedia))")
                    imagePresentationComponent.desiredViewingMode = .spatial3DImmersive

                case .visionProConvertedSpatialPhoto:
                    logger.info("\(#function) \(#line) spatialStereoImmersive display for \(String(describing: currentMedia))")
                    imagePresentationComponent.desiredViewingMode = .spatialStereoImmersive

                default:
                    logger.error("\(#function) \(#line) Unsupported media type \(currentMedia.type)")
                    assertionFailure("Unsupported media type \(currentMedia.type)")
                }

                imagePresentationComponentEntity.components.set(imagePresentationComponent)
                imagePresentationComponentEntity.position = AppConstant.Position.spacialImagePosition
                content.add(imagePresentationComponentEntity)
            }

            let toggleViewAttachmentComponent = ViewAttachmentComponent(rootView: ToggleImmersiveSpaceButton())
            let toggleViewAttachmentComponentEntity = Entity(components: toggleViewAttachmentComponent)
            toggleViewAttachmentComponentEntity.position = SIMD3<Float>(
                AppConstant.Position.spacialImagePosition.x + 1,
                AppConstant.Position.spacialImagePosition.y,
                AppConstant.Position.spacialImagePosition.z
            )
            toggleViewAttachmentComponentEntity.scale = AppConstant.Scale.attachments
            content.add(toggleViewAttachmentComponentEntity)
        }
    }
}
I would like to present information from a three.js-based web app as a 3D model in a volumetric window. Is it possible to do this in a similar manner to loading a web page in a WKWebView?
I still don't understand why no one has clarified this Apple video: https://developer.apple.com/videos/play/wwdc2023/10111
At the end of this video, there's an incomplete tutorial about connecting a USDZ with a mesh and skeleton structure to the hand tracking system. No example project is linked, and no one has given the community any clarification. Please can you help us understand how to proceed?
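For context, here is a rough sketch of the part that is documented, reading hand joint transforms with ARKit's HandTrackingProvider; what's missing is how to retarget these transforms onto the USDZ mesh's skeleton:

import ARKit

let session = ARKitSession()
let handTracking = HandTrackingProvider()

func trackHands() async throws {
    try await session.run([handTracking])
    for await update in handTracking.anchorUpdates {
        let anchor = update.anchor
        guard anchor.isTracked, let skeleton = anchor.handSkeleton else { continue }

        // Example: world-space transform of the index fingertip joint.
        let indexTip = skeleton.joint(.indexFingerTip)
        let worldTransform = anchor.originFromAnchorTransform * indexTip.anchorFromJointTransform

        // The open question: driving the corresponding joint of the rigged USDZ model from here.
        _ = worldTransform
    }
}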
I started a new project using RealityKit and RealityView, intended as an AR app on iPhone and iPad, but eventually visionOS as well. I'm challenged because I find that much of the recent documentation, WWDC videos, etc., covers features that are visionOS-only.
Right now, I would simply like to create some gesture functionality similar to the AR Quick Look defaults, meaning drag to reposition and two fingers to rotate or zoom. In the past, this would be implemented with something like:
arView.installGestures([.all], for: entity)
However, with RealityView I don't know how (or whether it's possible) to access an ARView.
In RealityKit, I have found this doc: https://developer.apple.com/documentation/realitykit/transforming-realitykit-entities-with-gestures
However, many of the features in that posting are visionOS-only, and I've found no good documentation on the topic that is specific to, or at least compatible with, iOS.
I know reverting to ARView is an option, but I want to use RealityView if at all possible, as I see it as more forward-looking.
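Is something like the following the supported way on iOS? A rough sketch of what I'm imagining, based on the SwiftUI gesture APIs in that doc (the box and names are placeholders):

import SwiftUI
import RealityKit

struct DraggableModelView: View {
    var body: some View {
        RealityView { content in
            // The entity needs collision shapes and an InputTargetComponent to be hittable.
            let box = ModelEntity(mesh: .generateBox(size: 0.1),
                                  materials: [SimpleMaterial(color: .orange, isMetallic: false)])
            box.generateCollisionShapes(recursive: true)
            box.components.set(InputTargetComponent())
            content.add(box)
        }
        .gesture(
            DragGesture()
                .targetedToAnyEntity()
                .onChanged { value in
                    // Convert the drag location into the entity's parent space and reposition.
                    guard let parent = value.entity.parent else { return }
                    value.entity.position = value.convert(value.location3D, from: .local, to: parent)
                }
        )
    }
}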
Topic: Spatial Computing, SubTopic: General
Is it possible to have a skydome which influences lighting in the scene but is otherwise invisible? In a raytracer, that would be visible to secondary rays but invisible to primary rays.
Cheers, thanks.
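Would an image-based light be the way to do this in RealityKit code (lighting from an environment image without rendering any visible sky geometry)? A sketch of what I mean, assuming an EnvironmentResource named "EnvironmentMap" in the bundle:

import RealityKit

func applyInvisibleEnvironmentLight(to root: Entity) async throws {
    // Light the scene from the environment image without drawing a skydome.
    let environment = try await EnvironmentResource(named: "EnvironmentMap")
    root.components.set(ImageBasedLightComponent(source: .single(environment)))
    // Entities that should receive this light reference the entity carrying the IBL.
    root.components.set(ImageBasedLightReceiverComponent(imageBasedLight: root))
}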
Topic: Spatial Computing, SubTopic: Reality Composer Pro
I have a gRPC server running inside a Task. When the user takes the headset off, the gRPC server will no longer work when they put the headset back on.
I would like to detect this so that I can cancel the task (which will effectively close the gRPC server).
I am also using a visual indicator to let the user know whether the server is running, but it will not accurately reflect the state of the server after removing and putting back on the headset.
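Would watching the SwiftUI scene phase be the right way to detect this? A sketch of what I have in mind, assuming the scene phase leaves .active when the headset is taken off:

import SwiftUI

struct ServerLifecycleView: View {
    @Environment(\.scenePhase) private var scenePhase
    @State private var serverTask: Task<Void, Never>?

    var body: some View {
        Text(serverTask == nil ? "Server stopped" : "Server running")
            .onChange(of: scenePhase) { _, newPhase in
                if newPhase != .active {
                    // Cancel the task hosting the gRPC server.
                    serverTask?.cancel()
                    serverTask = nil
                }
            }
    }
}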
Topic: Spatial Computing, SubTopic: General
Hello! I’m excited to see that Look to Scroll has been included in visionOS 26 Beta. I’m aiming to achieve a feature where the user’s gaze at a specific edge automatically scrolls to that position. However, I’ve experimented with ScrollView and haven’t been able to trigger this functionality. Could you advise if additional API modifiers are necessary? Thank you!
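For reference, this is roughly the direction I've been experimenting in; I'm not sure whether scrollInputBehavior is the right (or a sufficient) modifier here:

import SwiftUI

struct LookToScrollList: View {
    var body: some View {
        ScrollView {
            LazyVStack(alignment: .leading) {
                ForEach(0..<100, id: \.self) { index in
                    Text("Row \(index)")
                }
            }
        }
        // My assumption of the opt-in for gaze-driven scrolling in the visionOS 26 beta.
        .scrollInputBehavior(.enabled, for: .look)
    }
}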
I am trying to get the new PresentationComponent working in visionOS 26, as seen in this WWDC video:
https://developer.apple.com/videos/play/wwdc2025/274/?time=962 (18:29 minutes into video)
Here is some other example code, but it doesn't work either: https://stepinto.vision/devlogs/project-graveyard-devlog-002/
My simple Text view (that I am adding as a PresentationComponent) does not appear in my RealityView, even though the entity is found. Here is a simple example built from Xcode's default immersive view project:
struct ImmersiveView: View {
    @Environment(AppModel.self) var appModel

    var body: some View {
        RealityView { content in
            // Add the initial RealityKit content
            if let immersiveContentEntity = try? await Entity(named: "Immersive", in: realityKitContentBundle) {
                content.add(immersiveContentEntity)

                if let materializedImmersiveContentEntity = try? await Entity(named: "Test", in: realityKitContentBundle) {
                    content.add(materializedImmersiveContentEntity)

                    var presentation = PresentationComponent(
                        configuration: .popover(arrowEdge: .bottom),
                        content: Text("Hello, World!")
                            .foregroundColor(.red)
                    )
                    presentation.isPresented = true
                    materializedImmersiveContentEntity.components.set(presentation)
                }
            }
        }
    }
}
Here is the Apple reference: https://developer.apple.com/documentation/realitykit/presentationcomponent
Hi there.
Thanks to amazing help from you guys, I've managed to code a 360 image carousel, where the user can browse 360 images located inside the project package.
Is there a way to access the filesystem on AVP outside the app?
I know about FileManager, and I can get access to the .documentsDirectory, but how do I access the documents folder from the "Files" app on the AVP?
My goal is to read images from a hardcoded folder location on the AVP, so that the user never has to select the images themselves.
I know this may not be the "right" way to do this. The app is supposed to be "foolproof", with a minimum of user interaction.
The only way to change the images should be to change the contents of the hardcoded image folder.
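Would exposing the app's own Documents folder to the Files app be an acceptable route? A sketch of what I have in mind, assuming the iOS behavior of the UIFileSharingEnabled and LSSupportsOpeningDocumentsInPlace Info.plist keys carries over to visionOS ("Panoramas" is a placeholder folder name):

import Foundation

// With both Info.plist keys set to YES, the app's Documents directory shows up in the
// Files app, and the app can read a hardcoded subfolder with no user interaction.
func loadImageURLs(fromSubfolder name: String = "Panoramas") throws -> [URL] {
    let folder = URL.documentsDirectory.appendingPathComponent(name, isDirectory: true)
    let contents = try FileManager.default.contentsOfDirectory(at: folder, includingPropertiesForKeys: nil)
    return contents.filter { ["jpg", "jpeg", "png", "heic"].contains($0.pathExtension.lowercased()) }
}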
I hope this makes sense =)
Thanks in advance!
Regards,
Kim
Topic: Spatial Computing, SubTopic: General
I am wondering, is it possible to somehow configure a 3D object to respond to the gaze of a person, like changing the colors of some parts of the 3D model where the person is looking, i.e. where the person's gaze lands on the surface of the 3D model?
For example, imagine there is a 3D model of a cool dragon 🐉 in the physical space of a person, seen through the mixed reality view of a Vision Pro. It would be really cool to change the color, or make some specific parts of the dragon's skin shimmer, but only in the areas where the person is looking. Is this possible? Is it doable with the eye tracking of the Vision Pro?
Any advice would be appreciated. 🤝 🤓 I am new to iOS and visionOS development.
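Or, if raw gaze data isn't exposed to apps, would per-part hover effects be the closest supported option? A sketch of what I mean, assuming the dragon's reactive parts are separate child entities that already have collision shapes:

import RealityKit

func addGazeHighlight(to dragon: Entity) {
    for part in dragon.children {
        // Each part must be hit-testable: input target plus collision shapes.
        part.components.set(InputTargetComponent())
        // System-rendered highlight while the user looks at this part
        // (visionOS 2 adds configurable highlight/spotlight/shader styles).
        part.components.set(HoverEffectComponent())
    }
}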
Topic: Spatial Computing, SubTopic: General
I have a huge sphere where the camera stays inside the sphere, and I turn on front-face culling on the ShaderGraphMaterial applied to that sphere, so that I can place other 3D stuff inside. However, when it comes to attachments, object occlusion never works as I am expecting. Specifically, my attachments are occluded by my sphere (some are not, so the behavior is not deterministic).
I then suspected it was an issue with depth testing, so I started using ModelSortGroup to reorder the rendering sequence. However, it doesn't work. As I was searching through the internet, the comments on this post show that ModelSortGroup simply doesn't work on attachments.
So I wonder how I should tackle this issue now, to let my attachments appear inside my sphere.
OS/Sys: visionOS 2.3 / Xcode 16.3
Hello,
we have a RealityKit app that also runs on macOS via Catalyst.
For specific USD assets containing particle systems we have observed a reproducible crash.
Steps to reproduce:
Open Reality Composer Pro
Create new file
Create simple particle system (default one is fine)
Export as USDZ
Create project in Xcode
Call Entity.load(… and pass in your USD
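A minimal sketch of that last step, assuming the exported file is bundled as "Particles.usdz":

import RealityKit

// Loading the particle-system USDZ is enough to trigger the crash below.
let particleEntity = try Entity.load(named: "Particles", in: Bundle.main)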
Running this on an Intel iMac with macOS Sequoia 15.3 will lead to a crash with the following console log:
validateWithDevice:4704: failed assertion `Render Pipeline Descriptor Validation
depthAttachmentPixelFormat (MTLPixelFormatDepth32Float) and stencilAttachmentPixelFormat (MTLPixelFormatStencil8) must match.
'
(The same assertion is printed several times, interleaved, in the console.)
Xcode version: 16.2.0
iMac 2020 3,8GHz Intel Core i7
macOS Sequoia 15.3
FB16477373
It would be great if this could be fixed quickly or a workaround provided, since it affects our production app. Thank you!
I can generate a ShapeResource from a RealityKit entity's extents. Can I apply some scaling to the generated shape? Is there a way to do that?
// model is a ModelResource and bounds is a BoundingBox
var shape = ShapeResource.generateConvex(from: model.mesh)
shape = shape.offsetBy(translation: bounds.center)
// How can I scale the shape to fit within the bounds?
The following API only provides rotation and translation support, and I cannot find scale support.
offsetBy(rotation: simd_quatf = simd_quatf(ix: 0, iy: 0, iz: 0, r: 1), translation: SIMD3<Float> = SIMD3<Float>())
I can put the ShapeResource on an entity and scale the entity, but I would like to know if it is possible to scale the ShapeResource itself without attaching it to an entity.
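If scaling the shape directly isn't possible, would baking the scale into a box approximation built from the bounds be a reasonable fallback? A sketch of what I mean (the scale factor is hypothetical):

import RealityKit

let scale: Float = 0.5 // hypothetical scale factor
let scaledShape = ShapeResource
    .generateBox(size: bounds.extents * scale) // bounds is the BoundingBox from above
    .offsetBy(translation: bounds.center)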
One of the most common ways to provide a window size in visionOS is to use the defaultSize scene modifier.
WindowGroup(id: "someID") {
    SomeView()
}
.defaultSize(CGSize(width: 600, height: 600))
Starting in visionOS 26, using this has a side effect. visionOS 26 will restore windows that have been locked in place or snapped to surfaces. If a user has manually adjusted the size of a locked/snapped window, the user's size is only restored in some cases.
Manual resize respected
Leaving a room and returning later
Taking the headset off and putting it back on later
Manual resize NOT respected
Device restart. In this case, the window is reopened where it was locked, but the size is set back to the values passed to defaultSize. The manual resizing adjustments the user has made are lost. This is counter to how all other windows and widgets work.
I reported this last month (FB18429638), but haven't heard back if this is a bug or intended behavior.
Questions
What is the best way to provide a default window size that will only be used when opening new windows–and not used during scene restoration?
Should we try to keep track of window size after users adjust them and save that somewhere?
If this is intended behavior, can someone please update the docs accordingly?
Hi, I'm developing a virtual camera system using ReplayKit to capture scene video by directly accessing raw video buffers. The capture mechanism works flawlessly when repeatedly starting and stopping video capture within a continuous immersive environment. However, a critical issue arises when interrupting the immersive space:
Step 1: Enter the immersive environment and start and stop video capture (multiple times with no issues)
Step 2: Press the crown button to exit the immersive environment
Step 3: Return to the immersive space subsequently
Step 4: Attempt to start the video capture
At this point, the startCapture method throws an unexpected error, disrupting the video capture workflow.
This is the Xcode error that I see " [ERROR] -[RPScreenRecorder startCaptureWithHandler:completionHandler:]_block_invoke_2:500 failed to start due to error: Error Domain=com.apple.ReplayKit.RPRecordingErrorDomain Code=-5803 "Recording failed to start" UserInfo={NSLocalizedDescription=Recording failed to start}"
I have tried all possible ways to call stopCapture, including onDisappear and other methods, and nothing seems to solve this.
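For reference, the start/stop calls are essentially the stock RPScreenRecorder pattern (simplified sketch):

import ReplayKit

let recorder = RPScreenRecorder.shared()

func startCapture() {
    recorder.startCapture { sampleBuffer, bufferType, error in
        // Raw video/audio buffers arrive here while capture is running.
    } completionHandler: { error in
        if let error {
            // After exiting and re-entering the immersive space, this reports
            // RPRecordingErrorDomain code -5803 ("Recording failed to start").
            print("startCapture failed: \(error)")
        }
    }
}

func stopCapture() {
    recorder.stopCapture { error in
        if let error { print("stopCapture failed: \(error)") }
    }
}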