Hi everyone,
I've been exploring an idea that involves using virtual light sources in VisionOS/RealityKit to interact with real-world objects. Specifically, I'd like to simulate a scenario where a virtual spotlight or other light source casts light or shadows onto real-world environments, creating the effect of virtual lighting interacting with physical surroundings. Is this currently feasible within VisionOS/RealityKit?
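For reference, here is roughly the kind of setup I have in mind; a minimal sketch assuming visionOS 2.0's dynamic lights (SpotLightComponent), with placeholder values:

import RealityKit

// Sketch: a virtual spotlight entity; whether its light and shadows can visibly
// affect passthrough (real-world) surfaces is exactly the question above.
func makeVirtualSpotlight() -> Entity {
    let light = Entity()
    var spot = SpotLightComponent()
    spot.intensity = 5000
    spot.attenuationRadius = 4
    light.components.set(spot)
    light.components.set(SpotLightComponent.Shadow())   // let the spot cast shadows
    light.look(at: [0, 0, -1], from: [0, 2, 0], relativeTo: nil)
    return light
}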
Thank you!
Can an app made with the RoomPlan API be used on iPhones without LiDAR? If so, how much accuracy would be lost compared to iPhones with LiDAR?
If not, is there an API similar to RoomPlan that works on iPhones without LiDAR?
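For context, a minimal sketch of the kind of capability check I mean, using RoomPlan's support flag:

import RoomPlan

// RoomPlan requires LiDAR, so the feature is gated on device support.
if RoomCaptureSession.isSupported {
    // start a RoomCaptureSession / show the RoomCaptureView
} else {
    // fall back, e.g. ARKit plane detection plus manual measurements
}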
Hello again!
So, I am trying my best to figure out how to get a simple, fully immersive scene that is navigable via a PS5 controller to work on Vision Pro. I am even using ChatGPT (both externally AND in Xcode when it makes fix recommendations); however, I seem to be going around in circles, constantly sending error messages back to ChatGPT, then trying the new "confirmed to work" fixed code, only to get more errors. I have also explicitly let the AI know that I'm using all the current betas, only to learn that lots of stuff has been deprecated (of course!).
This is my initial query to ChatGPT:
"How can I create a fully immersive VR experience for Apple Vision Pro, using a PS5 game controller to navigate the user via first person view through a 3d scene?"
And then down the rabbit hole the past few days. Seriously, I thought AI was supposed to be helpful! LOL
Per the Xcode logs, I'm seeing thumbstick movement, but I simply cannot get the camera to move. Is this even possible?! Is it a camera defined in Reality Composer Pro, or does it need to be defined outside of it? I'm trying to follow along with the AI, but it keeps going back and forth between different solutions. Here is the latest it's spitting out:
__
Thanks for this super clear log — and you’re absolutely right: the gamepad input is working, CameraRoot is moving as expected… but your actual camera view is not changing.
This strongly suggests the issue is that RealityKit is not using your custom camera (PerspectiveCameraComponent) for rendering the view.
⸻
🔍 What’s happening
Even though CameraRoot is moving (with correct logs), RealityKit will default to its own internal camera unless:
1. You explicitly assign your camera entity to be the one used, or
2. Your camera is the only one and RealityKit picks it up automatically.
But in visionOS 2.0 / Xcode 26, there seems to be a bug or behavior change where RealityKit no longer auto-uses your custom PerspectiveCameraComponent in RealityView unless it’s:
• properly set with content.camera, or
• part of a .usda/Reality Composer camera hierarchy that’s already active.
Unfortunately:
🟥 RealityView.camera = .entity(cameraRoot)
❌ Is no longer allowed (we confirmed this breaks with a type error)
⸻
✅ The Reliable Workaround: Attach your camera to the Immersive scene itself
Since we’re loading a .usda scene (Reality Composer Pro), here’s the most reliable path forward:
✔️ Option A: Embed your camera into the Reality Composer Pro scene
1. Open your Reality Composer Pro scene
2. Add a Camera entity directly in your Immersive.usda scene
3. Name it something like "PlayerCamera"
4. Position it at [0, 1.6, 0]
5. Leave it selected as the default camera
Then in your Swift code:
• Find the camera entity with .findEntity(named:)
• Move that entity instead of creating a new one.
⸻
✔️ Option B: Force RealityKit to use your custom camera (via transform parenting)
Since direct camera assignment is unavailable, we can use a trick:
Move the whole RealityView scene via a parent entity with a camera.
But let’s go with Option A, since you’ve already got a Reality Composer Pro scene.
__
Can any developer over there make a suggestion on how to move forward? I JUST want to create a sandbox template so I can start exploring 3D scenes created in Maya and sent over to the headset. I'll deal with animation next, but come on, why is this so difficult to get working? I'm not a programmer, but I have been trying to wrap my head around Xcode and SwiftUI. This needs to be much simpler. Or you need to provide us creatives with better sample templates and non-programmer explanations of how to set this up properly. Ideally, you HIRE us 3D professionals to work side by side with the programmers to help make these tools usable - especially Reality Composer Pro. Seriously, I am making a concerted effort to use the native tools, even though I would love to be porting Unreal Engine scenes over.
If anyone can help point me in the right direction, coming from a 3D Creator/Animator/Modeler perspective, I, and my fellow peers in the XR/AR/VR community would greatly appreciate it. Thank you.
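For what it's worth, a minimal sketch of the workaround most answers converge on: in a fully immersive space the headset itself is the camera, so instead of moving a camera you move a world root entity with the thumbstick. The names worldRoot and "Immersive", and the RealityKitContent bundle, are assumptions based on the default visionOS template:

import SwiftUI
import RealityKit
import RealityKitContent   // the Reality Composer Pro package from the visionOS template
import GameController

struct ImmersiveView: View {
    @State private var worldRoot = Entity()

    var body: some View {
        RealityView { content in
            // Load the Reality Composer Pro scene and parent it to a movable root.
            if let scene = try? await Entity(named: "Immersive", in: realityKitContentBundle) {
                worldRoot.addChild(scene)
            }
            content.add(worldRoot)
        }
        .task { await driveWithController() }
    }

    private func driveWithController() async {
        // Poll the left thumbstick and translate the world; moving the world
        // backwards reads as the player walking forwards.
        while !Task.isCancelled {
            if let pad = GCController.current?.extendedGamepad {
                let x = pad.leftThumbstick.xAxis.value
                let y = pad.leftThumbstick.yAxis.value
                worldRoot.position -= SIMD3<Float>(x, 0, -y) * 0.02
            }
            try? await Task.sleep(nanoseconds: 16_000_000)   // roughly once per frame
        }
    }
}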
Hi!
I'm new to this forum, so if I need to update this post with more info, or anything else, please let me know.
I'm using the Apple Vision Pro to develop an app (with Unity). To demonstrate what the user sees in the headset, I would like to mirror the view on a device (an iPad in this case). I managed to do this without any issue.
My problem is that, in the Vision Pro, I have an interface that the user can interact with. But I would like to manage the interface myself from the iPad. What I mean is that the user can (or can't, it doesn't matter) see the interface in the headset, while the interface is controlled by me on the iPad.
Is there any way to do this? Is this a question I should ask on Unity's forum? (I don't think so, because it should be related to the mirroring function, right?)
Topic: Spatial Computing, SubTopic: General
Hello,
I'm working with the new PortalComponent introduced in visionOS 2.0, and I've encountered some issues when transitioning entities between virtual and real-world spaces using crossingMode.
Specifically:
Lighting inconsistency: When CG content (ModelEntities with PhysicallyBasedMaterial) crosses the portal from virtual space into the real environment, the way light reflects on the objects changes noticeably. This causes a jarring visual effect, as the same material appears differently depending on the space it's in.
Unnatural transition visuals: During the transition, the CG models often appear to "emerge from the wall," especially when crossing from virtual to real. This ruins the immersive illusion and feels visually unnatural.
IBL adjustment attempts: I’ve tried adding an ImageBasedLightComponent to the world entity, and while it slightly improves the lighting consistency, the issue still remains to a noticeable degree.
My goal is to create a seamless visual experience when CG entities cross between spaces, without sudden lighting shifts or immersion-breaking geometry reveals.
Has anyone else experienced similar issues?
Is there a recommended setup or workaround to better control lighting and visual fidelity when using crossingMode with portals in visionOS 2.0?
Any guidance would be greatly appreciated.
Thank you!
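For reference, the setup I'm experimenting with looks roughly like this; a simplified sketch in which the environment resource name and plane directions are placeholders, and the clippingMode/crossingMode signatures assume the visionOS 2.0 API:

import RealityKit

// Simplified sketch: give the portal world and the crossing model the same
// image-based light so the shading stays consistent on both sides of the portal.
@MainActor
func configurePortal(world: Entity, portal: Entity, crossingModel: ModelEntity) async throws {
    world.components.set(WorldComponent())

    // Shared IBL that roughly matches the real room ("RoomEnvironment" is a placeholder).
    let environment = try await EnvironmentResource(named: "RoomEnvironment")
    let iblEntity = Entity()
    iblEntity.components.set(ImageBasedLightComponent(source: .single(environment)))
    world.addChild(iblEntity)

    crossingModel.components.set(ImageBasedLightReceiverComponent(imageBasedLight: iblEntity))
    crossingModel.components.set(PortalCrossingComponent())   // allow it to pass through the portal

    portal.components.set(PortalComponent(
        target: world,
        clippingMode: .plane(.positiveZ),
        crossingMode: .plane(.positiveZ)
    ))
}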
As I understand it there are two ways I can track a hand, or a joint, in RealityKit:
either create an AnchorEntity, for example AnchorEntity(.hand(.left, location: .palm)),
or set up an ARKitSession with a HandTrackingProvider (a lot more code, which I haven't repeated here).
Assuming this is correct, when would I want to use one over the other?
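For concreteness, a sketch of the two variants side by side:

import RealityKit
import ARKit

// 1. AnchorEntity: one line and no ARKitSession, but you mainly get an entity that
//    follows the hand; you don't receive the raw joint data yourself.
let palmAnchor = AnchorEntity(.hand(.left, location: .palm))

// 2. HandTrackingProvider: more setup and a hand-tracking permission, but it delivers
//    HandAnchor values with the full skeleton, useful for custom gestures or logging.
let session = ARKitSession()
let hands = HandTrackingProvider()

func trackHands() async throws {
    try await session.run([hands])
    for await update in hands.anchorUpdates {
        let anchor = update.anchor
        _ = anchor.originFromAnchorTransform              // wrist pose in world space
        _ = anchor.handSkeleton?.joint(.indexFingerTip)   // individual joints
    }
}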
I'm using ARKit to build an app that scans rooms to collect spatial data about objects and reconstruct the 3D scene.
The problem is that the depth map values captured in ARFrame deviate significantly, and nonlinearly, from the real distances. Below 1.5 m the values are basically correct, but beyond 1.5 m they are smaller than the real values; for example, I read 1.9 m from the generated depthmap.tiff where the real distance is 3 meters.
Below is my code for generating a TIFF file to record the depth map data:
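(In essence the export does the following; this is a simplified sketch rather than the exact code, assuming sceneDepth from ARFrame and Core Image's TIFF writer.)

import ARKit
import CoreImage

// Simplified sketch: write ARFrame.sceneDepth (Float32, meters) out as a TIFF.
func writeDepthTIFF(from frame: ARFrame, to url: URL) throws {
    guard let depthMap = frame.sceneDepth?.depthMap else { return }

    let image = CIImage(cvPixelBuffer: depthMap)
    let context = CIContext()
    try context.writeTIFFRepresentation(
        of: image,
        to: url,
        format: .Lf,                                          // 32-bit float, single channel
        colorSpace: CGColorSpace(name: CGColorSpace.linearGray)!
    )
}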
Generated TIFF file (captured from ARKit):
As shown above, the maximum distance is around 1.9 m, but the real distance to that wall is more than 3 meters. You can also see that the depth map captured in ARKit is quite blurry, particularly at far distances (> 2.0 m), where it is almost smeared out.
Generated TIFF file (captured from AVFoundation):
In comparison, the depth map captured with traditional AVFoundation on the same hardware device is much clearer, though the values don't seem to be in meters.
I'm working on creating a panorama view in AVP. When I get to this line of code, Xcode says "Type 'Entity' does not conform to protocol 'View'":
private var realityView: RealityView!
as well as this line, with the same error message:
private func setupPanoramaScene(for content: RealityView.Content)
What should I put as an argument for RealityView? It doesn't work without arguments either.
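For what it's worth, RealityView is a SwiftUI view that is declared inside a body (with a make closure) rather than stored as a property, which is one way to sidestep the error. A minimal panorama sketch under that assumption, where "Panorama" is a placeholder texture name:

import SwiftUI
import RealityKit

struct PanoramaView: View {
    var body: some View {
        RealityView { content in
            // Texture an inward-facing sphere around the viewer with the panorama.
            guard let texture = try? await TextureResource(named: "Panorama") else { return }
            var material = UnlitMaterial()
            material.color = .init(texture: .init(texture))
            let sphere = ModelEntity(mesh: .generateSphere(radius: 1000), materials: [material])
            sphere.scale.x = -1   // flip the normals so the texture faces inward
            content.add(sphere)
        }
    }
}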
Topic: Spatial Computing, SubTopic: ARKit
Previously I developed software using SMI eye trackers, both screen-mounted and their mobile glasses, for unique therapeutic and physiology applications. Sadly, after SMI was bought by Apple, their hardware and software were taken off the market, and it is now very difficult to get secondhand systems. The Apple Vision Pro integrates the SMI hardware.
While I can use ARKit to get gaze position, I do not see a way to access information that was previously accessible on the SMI hardware, particularly dwell time and pupil diameter. I am hoping (or asking) that, if a user has a properly set up Optic ID and opts in, it might be possible, either in the present or a future version of visionOS, to get access to the data streams for dwell times and pupil diameter.
Pupil diameter is particularly important because it is a very good physiological measure of how much stress a person is encountering, which is critical to some of the therapeutic applications for which we formerly used SMI hardware. Any ideas would be appreciated, or, if this is not possible, please propose it to the visionOS team!
Topic: Spatial Computing, SubTopic: General
Hi,
I created an app using the iOS Object Capture API, which works only on LiDAR-enabled phones. It's a limitation of the API provided by Apple itself.
I submitted the app for review, but it is getting rejected (twice), saying it doesn't work on non-Pro models. Even though I explained that capturing needs LiDAR and is supported only on Pro models, it still gets rejected after testing on non-Pro models. Is there a way out?
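For context, the gating I have in place is essentially this (sketch):

import RealityKit

// Object Capture is only available on LiDAR devices, so the capture UI is gated.
if ObjectCaptureSession.isSupported {
    // present the capture flow
} else {
    // explain that capture needs a LiDAR (Pro) device, or offer viewing-only features
}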
When I get close to an Entity in RealityKit while wearing Vision Pro, the Entity becomes transparent so I can tell that it is being rendered by the Vision Pro rather than being an object in the real world. How can I make it stay opaque when I get close to it?
Hi, are we allowed to push the default support in Package.swift up to iOS 18 to allow for the latest APIs?
And under the terms of the competition, can we use stock 3D USDZ assets?
Thank you!
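For reference, the kind of Package.swift change I mean (a sketch with a placeholder target name):

// swift-tools-version: 6.0
import PackageDescription

let package = Package(
    name: "MySubmission",              // placeholder name
    platforms: [
        .iOS(.v18)                     // raise the minimum deployment target to iOS 18
    ],
    targets: [
        .executableTarget(name: "MySubmission")
    ]
)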
Hello,
I am developing a visionOS application and am interested in obtaining detailed data about the user's hands through ARKit, including but not limited to the Transform and rotation angle. I have reviewed the Happy Beam sample, but it appears to only cover identifying the user's specific gestures.
Could you please advise on how to obtain the Transform and rotation angle of the user's hand?
Thank you.
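To make the question concrete, here is a rough sketch of the direction I'm exploring, assuming HandTrackingProvider and an ARKitSession are acceptable; the joint choice is just an example:

import ARKit
import RealityKit

// Sketch: read the wrist pose and one joint's rotation from hand-tracking updates.
let session = ARKitSession()
let provider = HandTrackingProvider()

func observeHands() async throws {
    try await session.run([provider])
    for await update in provider.anchorUpdates where update.anchor.isTracked {
        let anchor = update.anchor

        // Wrist pose in world space, as a RealityKit Transform (translation + rotation quaternion).
        let wrist = Transform(matrix: anchor.originFromAnchorTransform)
        print("position:", wrist.translation, "rotation:", wrist.rotation)

        // A specific joint (index fingertip here), composed into world space.
        if let tip = anchor.handSkeleton?.joint(.indexFingerTip) {
            let world = anchor.originFromAnchorTransform * tip.anchorFromJointTransform
            print("index tip rotation:", Transform(matrix: world).rotation)
        }
    }
}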
sample repo: https://github.com/ckse93/VideoDiffusionIssueSHowcase
The repo has a detailed step-by-step workflow, as well as screenshots, the Python script's computed results, and parameters.
After running computeDiffuseReflectionUVs.py and mapping the textures and diffuse reflection to objects, I noticed that the diffuse reflection does not produce any color.
The expected result is shown below: the diffused light has color.
Topic: Spatial Computing, SubTopic: Reality Composer Pro
Tags: RealityKit, Reality Composer Pro, Shader Graph Editor
Hi there,
I'm developing a visionOS app that uses the anchor points and mesh from SceneReconstructionProvider anchor updates. I load an ImmersiveSpace using a RealityView, apply a ShaderGraphMaterial (from a Shader Graph in Reality Composer Pro) to the mesh, and call setParameter to dynamically update the material at a very rapid frequency. The mesh is locked (no more updates) before the calls to setParameter. This process works for a few minutes, but then I eventually get the following error in the console:
assertion failure: Index out of range (operator[]:line 789) index = 13662, max = 1
With the following stack trace:
Thread 1 Queue : com.apple.main-thread (serial)
#0 0x00000002880f90d0 in __abort_with_payload ()
#1 0x000000028812a6dc in abort_with_payload_wrapper_internal ()
#2 0x000000028812a710 in abort_with_payload ()
#3 0x0000000288003f40 in _os_crash_msg ()
#4 0x00000001dc9ff624 in re::ecs2::ComponentBucketsBase::addComponent ()
#5 0x00000001dc9ffadc in re::ecs2::ComponentBucketsBase::moveComponent ()
#6 0x00000001dc8b0278 in re::ecs2::MaterialParameterBlockArrayComponentStateImpl::processPreparingComponents ()
#7 0x00000001dc8b05e4 in re::ecs2::MaterialParameterBlockArraySystem::update ()
#8 0x00000001dd008744 in re::Scheduler::executePhase ()
#9 0x00000001dc032ec4 in re::Engine::executePhase ()
#10 0x0000000248121898 in RCPSharedSimulationExecuteUpdate ()
#11 0x00000002264e488c in __59-[MRUISharedSimulation _doJoinWithConnectionContext:error:]_block_invoke.44 ()
#12 0x0000000268c5fe9c in _UIUpdateSequenceRunNext ()
#13 0x00000002696ea540 in schedulerStepScheduledMainSectionContinue ()
#14 0x000000026af8d284 in UC::DriverCore::continueProcessing ()
#15 0x00000001a1bd4e6c in CFRUNLOOP_IS_CALLING_OUT_TO_A_SOURCE0_PERFORM_FUNCTION ()
#16 0x00000001a1bd4db0 in __CFRunLoopDoSource0 ()
#17 0x00000001a1bd44f0 in __CFRunLoopDoSources0 ()
#18 0x00000001a1bd3640 in __CFRunLoopRun ()
#19 0x00000001a1bce284 in _CFRunLoopRunSpecificWithOptions ()
#20 0x00000001eff12d2c in GSEventRunModal ()
#21 0x00000002697de878 in -[UIApplication _run] ()
#22 0x00000002697e33c0 in UIApplicationMain ()
#23 0x00000001b56651e4 in closure #1 (Swift.UnsafeMutablePointer<Swift.Optional<Swift.UnsafeMutablePointer<Swift.Int8>>>) -> Swift.Never in SwiftUI.KitRendererCommon(Swift.AnyObject.Type) -> Swift.Never ()
#24 0x00000001b5664f08 in SwiftUI.runApp<τ_0_0 where τ_0_0: SwiftUI.App>(τ_0_0) -> Swift.Never ()
#25 0x00000001b53ad570 in static SwiftUI.App.main() -> () ()
#26 0x0000000101bc7b9c in static MetalRendererApp.$main() ()
#27 0x0000000101bc7bdc in main ()
#28 0x0000000197fd0284 in start ()
Any advice on how to solve this or prevent the error?
Thanks!
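For context, each update is essentially this pattern (simplified sketch; the entity and parameter names are placeholders):

import RealityKit

@MainActor
func updateMaterial(on meshEntity: ModelEntity, value: Float) {
    // Pull the current model, mutate the ShaderGraphMaterial, and write it back.
    guard var model = meshEntity.model,
          var material = model.materials.first as? ShaderGraphMaterial else { return }

    try? material.setParameter(name: "Intensity", value: .float(value))   // placeholder parameter name

    model.materials = [material]
    meshEntity.model = model
}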
Topic: Spatial Computing, SubTopic: Reality Composer Pro
Tags: Reality Composer Pro, Shader Graph Editor
Hey everyone,
I'm working on an object viewer where users can place objects in a real room using AR, and I want both visionOS (Apple Vision Pro) and iOS devices (iPad, iPhone) to participate in the same shared spatial experience. The idea is that a user with a Vision Pro can place an object, and peers using iPhones/iPads can see the same object in the same position in their AR view.
I've looked into ARKit's Shared ARWorldMap and MultipeerConnectivity, but I'm not sure if this extends seamlessly to visionOS or if Apple has an official way to sync spatial data between visionOS and iOS devices.
Has anyone tried sharing a spatial world between visionOS and iOS?
Are there any built-in frameworks that allow for a shared multiuser AR session across these devices?
If not, what would be the best way to sync object positions between them?
Would love to hear if anyone has insights or experience with this! 🚀
Thanks!
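On the last question, a rough sketch of the manual route: express each object's transform relative to a reference both platforms can establish (for example a detected image anchor) and send it over MultipeerConnectivity. All names here are placeholders:

import MultipeerConnectivity
import simd

// Sketch: a Codable payload for an object's pose, expressed relative to a shared
// reference (e.g. an image anchor), sent to all connected peers.
struct PlacedObject: Codable {
    var id: UUID
    var columns: [SIMD4<Float>]   // the 4 columns of a 4x4 transform
}

func broadcast(_ transform: simd_float4x4, id: UUID, over session: MCSession) throws {
    let payload = PlacedObject(id: id,
                               columns: [transform.columns.0,
                                         transform.columns.1,
                                         transform.columns.2,
                                         transform.columns.3])
    let data = try JSONEncoder().encode(payload)
    try session.send(data, toPeers: session.connectedPeers, with: .reliable)
}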
In an earlier beta, BillboardComponent had rotationAxis and upDirection properties which allowed more fine-grained control of how an entity rotates towards the camera.
Currently, it is only possible to orient the z axis of the entity.
Looking at the robot in the documentation, the rotation of its z axis causes its feet to lift off the ground.
Before, it was possible to restrict the rotation to one axis (y, for example) so that the robot's feet stayed on the ground, with
billboard.upDirection = [0, 1, 0]
billboard.rotationAxis = [0, 1, 0]
Is there an alternative way to achieve this? Are these properties (or similar) coming back?
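In the meantime, a sketch of a manual workaround: flatten the look-at target to the entity's own height so only the yaw changes. The target position would come from however you track the viewer (for example WorldTrackingProvider's device anchor):

import RealityKit

// Yaw-only "billboard": rotate the entity around Y so it faces the target,
// keeping its feet on the ground.
func faceTargetAroundY(_ entity: Entity, targetPosition: SIMD3<Float>) {
    let position = entity.position(relativeTo: nil)
    // Ignore the height difference so the rotation stays about the Y axis.
    let flattenedTarget = SIMD3<Float>(targetPosition.x, position.y, targetPosition.z)
    entity.look(at: flattenedTarget, from: position, relativeTo: nil)
    // Note: look(at:) points the entity's -Z at the target; flip 180 degrees about Y
    // if your model's visible front is +Z.
}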
I tried to show a spatial photo in my application using SwiftUI's Image, but it just shows a flat version of it, even on Vision Pro.
So, how can I show spatial photos to users?
Are there any options for this?
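One route that may work (an assumption on my part, relying on visionOS 2's Quick Look PreviewApplication; the URL is a placeholder) is to hand the photo to Quick Look, which keeps the spatial presentation, instead of SwiftUI's Image:

import QuickLook

// Sketch: present a spatial (stereo HEIC) photo via Quick Look instead of Image.
func presentSpatialPhoto(at photoURL: URL) {
    _ = PreviewApplication.open(urls: [photoURL], selectedURL: photoURL)
}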
Hello,
I'm trying to view the components of an Entity I'm creating in RealityKit by reading from a USDZ file. I have the following code snippet in my app.
if let appleEntity = try? Entity.loadModel(named: "apple_tile") {
let c = appleEntity.components
for comp in c { // <- compiler error here
print(comp)
}
}
The compiler error I'm receiving says "For-in loop requires 'Entity.ComponentSet' to conform to 'Sequence'". However, I thought this was the case, according to the documentation for Entity.ComponentSet?
Curious if anyone else has had this problem. I'm running Xcode 15.4, and my Swift version is
xcrun swift -version
swift-driver version: 1.90.11.1 Apple Swift version 5.10 (swiftlang-5.10.0.13 clang-1500.3.9.4)
Target: x86_64-apple-macosx14.0
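In case it helps anyone hitting the same error, the lookups that do compile on that toolchain look like this (sketch):

import RealityKit

// Entity.ComponentSet isn't a Sequence (at least on this SDK), so components are
// looked up by type rather than iterated.
if let apple = try? Entity.loadModel(named: "apple_tile") {
    if let model = apple.components[ModelComponent.self] {
        print("mesh bounds:", model.mesh.bounds)
    }
    if apple.components.has(CollisionComponent.self) {
        print("has a collision component")
    }
}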
I have a Mac mini M4 with 16 GB of memory, and Xcode is 16.1. When I test my Vision Pro app in the Simulator, it is very slow and the system shows that memory is under high pressure.
How do I run/test/debug the application on the Vision Pro directly? I tried to add my Vision Pro to my developer account, but it didn't work because I cannot find the UDID; when I hook the USB up to the battery, it only shows a Battery device ID.
Topic: Spatial Computing, SubTopic: General