Discuss Spatial Computing on Apple Platforms.

Posts under General subtopic

Post

Replies

Boosts

Views

Activity

Imitating a grip on an object
I'm playing around with the hand-tracking system in RealityKit on Vision Pro. I thought it would be interesting to attach a virtual object to the hand while it is gripping (the idea was to attach a basic cylinder to mimic a wand from Harry Potter). I can detect when the user is gripping, but I'm having trouble placing an object so that it sits within the hand.

The simplest version of this uses an AnchorEntity pointing at the user's palm, which kind of works, but the illusion quickly breaks when you rotate the wrist or hand. It seems I will have to roll my own anchoring using the various joints of the user's hand. I thought calculating a midpoint between the thumb and little-finger tips would be a good start, but it has proven a little difficult because I need both rotation and position. I'm already out of my depth with RealityKit and matrices, and (thanks to ChatGPT) I have some code, but as soon as I apply the position manually (as opposed to using a hand anchor entity) the object fails to render on the user's hand. It feels like something someone must have looked into already; any ideas on what might be the issue here?

Note: HandTrackingSystem.handTracking is a HandTrackingProvider().

```swift
guard let anchors = HandTrackingSystem.handTracking.latestAnchors.leftHand else { return }

if let thumb = anchors.handSkeleton?.joint(.thumbTip),
   let little = anchors.handSkeleton?.joint(.littleFingerTip) {
    let thumbPos = simd_make_float3(thumb.anchorFromJointTransform.columns.3)
    let littlePos = simd_make_float3(little.anchorFromJointTransform.columns.3)
    let midPos = (thumbPos + littlePos) / 2
    let direction = normalize(littlePos - thumbPos)
    let rotation = simd_quatf(from: [0, 1, 0], to: direction)

    wandEntity.transform.translation = midPos
    wandEntity.transform.rotation = rotation
    content.add(wandEntity)
}
```
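A hedged sketch of one likely fix, assuming the wand entity is added at the scene root rather than parented to a hand AnchorEntity: the joint transforms are expressed in the hand anchor's space, so they need to be composed with the anchor's originFromAnchorTransform before being used as world-space values. Names like wandEntity follow the snippet above.

```swift
// Sketch: convert joint positions from hand-anchor space to world space
// before assigning them to an entity that lives at the scene root.
// Assumes `anchors` is the HandAnchor from the snippet above.
let originFromAnchor = anchors.originFromAnchorTransform

func worldPosition(of joint: HandSkeleton.Joint) -> SIMD3<Float> {
    let originFromJoint = originFromAnchor * joint.anchorFromJointTransform
    return simd_make_float3(originFromJoint.columns.3)
}

if let thumb = anchors.handSkeleton?.joint(.thumbTip),
   let little = anchors.handSkeleton?.joint(.littleFingerTip) {
    let thumbPos = worldPosition(of: thumb)
    let littlePos = worldPosition(of: little)

    let midPos = (thumbPos + littlePos) / 2
    let direction = normalize(littlePos - thumbPos)
    let rotation = simd_quatf(from: [0, 1, 0], to: direction)

    // relativeTo: nil means "in world space".
    wandEntity.setPosition(midPos, relativeTo: nil)
    wandEntity.setOrientation(rotation, relativeTo: nil)
}
```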
1
0
562
Jul ’25
Dynamically assigning texture resource to ShaderGraphMaterial on VisionOS
I implemented a ShaderGraphMaterial and tried to load it from my .usda scene via ShaderGraphMaterial.init(name: in: bundle). I want to dynamically set a TextureResource on that material, so I wanted to expose the texture as a uniform input of the ShaderGraphMaterial. But Reality Composer Pro's Shader Graph doesn't appear to support a texture input as a parameter, as the attached image shows. And at the code level, ShaderGraphMaterial doesn't expose a way to set TextureResources either; its parameterNames is an empty array if I haven't set any custom input parameters. The texture comes from my backend, so it really cannot be saved to a file and loaded again (that would be too weird). Is there something I am missing?
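For what it's worth, a sketch of the in-memory path I would try, assuming the graph can expose an image input that shows up in parameterNames (the parameter name "dynamicTexture" and the CGImage source are assumptions, not confirmed behavior for this graph):

```swift
import RealityKit
import CoreGraphics

// Sketch: build a TextureResource directly from an in-memory CGImage
// (no file round-trip) and hand it to the material, assuming the shader
// graph exposes an input named "dynamicTexture" (hypothetical name).
func applyBackendTexture(_ cgImage: CGImage,
                         to material: inout ShaderGraphMaterial) async throws {
    let texture = try await TextureResource(image: cgImage,
                                            options: .init(semantic: .color))
    try material.setParameter(name: "dynamicTexture",
                              value: .textureResource(texture))
}
```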
1
0
555
Jan ’25
Gaze, Eye Tracking responsive Stuff
I am wondering: is it possible to configure a 3D object to respond to a person's gaze, e.g. change the colors of the parts of the 3D model the person is looking at, i.e. where their gaze lands on the surface of the model? For example, imagine a 3D model of a cool dragon 🐉 in a person's physical space, seen through the mixed-reality view of a Vision Pro. It would be really cool to change the color, or make specific parts of the dragon's skin shimmer, but only in the areas the person is looking at. Is this possible? Is it doable with the Vision Pro's eye tracking? Any advice would be appreciated. 🤝 🤓 I am new to iOS and visionOS development.
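Not an authoritative answer, but for context: visionOS does not expose raw gaze data to apps, and the system-provided mechanism for gaze-reactive visuals is HoverEffectComponent, which highlights an entity while the user looks at it. A minimal sketch (the entity and its shape are illustrative, not part of the original question):

```swift
import RealityKit

// Sketch: make an entity react visually to gaze without the app ever
// seeing the gaze data itself. Requires input target + collision shapes.
let dragonPart = ModelEntity(mesh: .generateSphere(radius: 0.1),
                             materials: [SimpleMaterial(color: .green, isMetallic: false)])
dragonPart.components.set(InputTargetComponent())
dragonPart.components.set(CollisionComponent(shapes: [.generateSphere(radius: 0.1)]))
dragonPart.components.set(HoverEffectComponent()) // system highlight while gazed at
```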
1
0
231
Mar ’25
RealityKit Entity ComponentSet does not conform to Sequence?
Hello, I'm trying to view the components of an Entity I'm creating in RealityKit by reading from a USDZ file. I have the following code snippet in my app:

```swift
if let appleEntity = try? Entity.loadModel(named: "apple_tile") {
    let c = appleEntity.components
    for comp in c { // <- compiler error here
        print(comp)
    }
}
```

The compiler error I'm receiving says "For-in loop requires 'Entity.ComponentSet' to conform to 'Sequence'". However, I thought this was the case, according to the documentation for Entity.ComponentSet? Curious if anyone else has had this problem.

Running Xcode 15.4, and my Swift version is:

```
xcrun swift -version
swift-driver version: 1.90.11.1 Apple Swift version 5.10 (swiftlang-5.10.0.13 clang-1500.3.9.4)
Target: x86_64-apple-macosx14.0
```
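A hedged workaround sketch while for-in iteration isn't available in that toolchain: query the ComponentSet for the specific component types you care about via its typed subscript (the types listed here are just examples):

```swift
// Sketch: inspect known component types individually instead of iterating.
if let appleEntity = try? Entity.loadModel(named: "apple_tile") {
    if let model = appleEntity.components[ModelComponent.self] {
        print("ModelComponent:", model)
    }
    if let collision = appleEntity.components[CollisionComponent.self] {
        print("CollisionComponent:", collision)
    }
    print("Transform:", appleEntity.transform)
}
```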
3
0
415
Mar ’25
Missing Properties in BillboardComponent
In an earlier beta, BillboardComponent had rotationAxis and upDirection properties, which allowed more fine-grained control over how an entity rotates toward the camera. Currently, it is only possible to orient the entity's z-axis. Looking at the robot in the documentation, the rotation of its z-axis causes its feet to lift off the ground. Before, it was possible to constrain the rotation to one axis (y, for example) so that the robot's feet stayed on the ground:

```swift
billboard.upDirection = [0, 1, 0]
billboard.rotationAxis = [0, 1, 0]
```

Is there an alternative way to achieve this? Are these properties (or similar) coming back?
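One possible alternative is to do yaw-only billboarding by hand; a sketch, assuming you already have the viewer position each frame (for example from a WorldTrackingProvider's device anchor), which is an assumption not covered by the original post:

```swift
import RealityKit
import simd

// Sketch: rotate an entity around its Y axis only, so it faces the viewer
// while its feet stay on the ground. `cameraPosition` comes from elsewhere.
func faceViewerAroundY(_ entity: Entity, cameraPosition: SIMD3<Float>) {
    let entityPosition = entity.position(relativeTo: nil)
    var toCamera = cameraPosition - entityPosition
    toCamera.y = 0 // ignore vertical offset -> yaw-only rotation
    guard simd_length(toCamera) > 1e-5 else { return }

    // Yaw that points the entity's +Z axis at the viewer.
    let yaw = atan2(toCamera.x, toCamera.z)
    entity.setOrientation(simd_quatf(angle: yaw, axis: [0, 1, 0]), relativeTo: nil)
}
```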
1
0
297
Mar ’25
Portal crossing causes inconsistent lighting and visual artifacts between virtual and real spaces (visionOS 2.0)
Hello, I'm working with the new PortalComponent introduced in visionOS 2.0, and I've encountered some issues when transitioning entities between virtual and real-world spaces using crossingMode. Specifically:

- Lighting inconsistency: when CG content (ModelEntities with PhysicallyBasedMaterial) crosses the portal from virtual space into the real environment, the way light reflects off the objects changes noticeably. This causes a jarring visual effect, as the same material appears different depending on the space it's in.
- Unnatural transition visuals: during the transition, the CG models often appear to "emerge from the wall," especially when crossing from virtual to real. This ruins the immersive illusion and feels visually unnatural.
- IBL adjustment attempts: I've tried adding an ImageBasedLightComponent to the world entity, and while it slightly improves the lighting consistency, the issue still remains to a noticeable degree.

My goal is to create a seamless visual experience when CG entities cross between spaces, without sudden lighting shifts or immersion-breaking geometry reveals. Has anyone else experienced similar issues? Is there a recommended setup or workaround to better control lighting and visual fidelity when using crossingMode with portals in visionOS 2.0? Any guidance would be greatly appreciated. Thank you!
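For reference, a minimal sketch of the IBL setup mentioned above, i.e. driving both the portal world and the crossing entities from the same environment resource so their lighting response matches; the asset name "PortalLighting" and the intensity are assumptions, and this is the approach the post says only partially helps:

```swift
import RealityKit

// Sketch: share one image-based light between the portal world and the
// entities that cross it, so the material response stays consistent.
func applySharedIBL(worldRoot: Entity, crossingEntity: Entity) async throws {
    let environment = try await EnvironmentResource(named: "PortalLighting") // hypothetical asset
    worldRoot.components.set(
        ImageBasedLightComponent(source: .single(environment), intensityExponent: 0)
    )
    // Both the world content and the crossing model receive light from the same source.
    worldRoot.components.set(ImageBasedLightReceiverComponent(imageBasedLight: worldRoot))
    crossingEntity.components.set(ImageBasedLightReceiverComponent(imageBasedLight: worldRoot))
}
```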
5
0
263
Jul ’25
VisionOS Beta 3 spatial sculpting apple sample crash
Hello, since updating to beta 3 the spatial sculpting sample app doesn't work; it crashes on launch. It seems to be something in AnchorEntity or AccessoryAnchoringSource:

```
Referenced from: <00B81486-1A74-30A0-B75B-4B39E3AF57DF> /private/var/containers/Bundle/Application/3D2EBF59-19F0-4BF4-8567-6962AA36A2C6/delete.app/delete.debug.dylib
Expected in: <BAA9B221-78A1-3B99-AA2F-B8DFCD179FC7> /System/Library/Frameworks/RealityFoundation.framework/RealityFoundation
```
1
0
329
Jul ’25
How to use defaultSize with visionOS window restoration?
One of the most common ways to provide a window size in visionOS is the defaultSize scene modifier:

```swift
WindowGroup(id: "someID") {
    SomeView()
}
.defaultSize(CGSize(width: 600, height: 600))
```

Starting in visionOS 26, using this has a side effect. visionOS 26 will restore windows that have been locked in place or snapped to surfaces. If a user has manually adjusted the size of a locked/snapped window, the user's size is only restored in some cases.

Manual resize respected:
- Leaving a room and returning later
- Taking the headset off and putting it back on later

Manual resize NOT respected:
- Device restart. In this case, the window is reopened where it was locked, but the size is set back to the values passed to defaultSize. The manual resizing adjustments the user has made are lost.

This is counter to how all other windows and widgets work. I reported this last month (FB18429638), but haven't heard back on whether this is a bug or intended behavior.

Questions:
1. What is the best way to provide a default window size that will only be used when opening new windows, and not used during scene restoration?
2. Should we try to keep track of window size after users adjust it and save that somewhere (see the sketch below)?
3. If this is intended behavior, can someone please update the docs accordingly?
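A hedged sketch of the "track it ourselves" idea from question 2: persist the last observed window size and feed the persisted values back into defaultSize the next time the scene is built. Whether visionOS consults defaultSize again when restoring a locked/snapped window is exactly the open question, so treat this as an experiment, not a confirmed fix; the storage keys and view names are made up.

```swift
import SwiftUI

// Sketch: remember the user's last window size and use it as the default
// the next time this WindowGroup is created.
struct SizeTrackingApp: App {
    @AppStorage("mainWindowWidth") private var storedWidth: Double = 600
    @AppStorage("mainWindowHeight") private var storedHeight: Double = 600

    var body: some Scene {
        WindowGroup(id: "someID") {
            GeometryReader { proxy in
                SomeView()
                    .onChange(of: proxy.size) { _, newSize in
                        // Persist whatever size the user resized the window to.
                        storedWidth = newSize.width
                        storedHeight = newSize.height
                    }
            }
        }
        .defaultSize(CGSize(width: storedWidth, height: storedHeight))
    }
}
```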
1
0
448
Jul ’25
RealityKit / visionOS – Memory not released after dismissing ImmersiveSpace with USDZ models
Hi everyone, I'm encountering a memory overflow issue in my visionOS app and I'd like to confirm whether this is expected behavior or I'm missing something in cleanup.

App Context
- The app showcases apartments in real scale using AR.
- Apartments are heavy USDZ models (hundreds of thousands of triangles, high-resolution textures).
- Users can walk inside the apartments, and performance is good even close to hardware limits.

Flow
1. The app starts in a full immersive space (RealityView) for selecting the apartment.
2. When an apartment is selected, a new ImmersiveSpace opens and the apartment scene loads. The scene includes multiple USDZ models, EnvironmentResources, and dynamic textures for skyboxes.
3. When the user dismisses the experience, we attempt cleanup: nulling out all entity references, removing ModelComponents, clearing cached textures and skyboxes, and forcing dictionaries/collections to empty.

Despite this cleanup, memory usage remains very high.

Problem
After dismissing the ImmersiveSpace, memory does not return to baseline. See the attached screenshot of the profiling done with Instruments:
- Initial state: ~30 MB (main menu).
- After loading models sequentially: ~3.3 GB.
- Skybox textures bring it near ~4 GB.
- After dismissing the experience (at the ~01:00 mark): memory only drops slightly (to ~2.66 GB).
- When loading the second apartment, memory continues to increase until ~5 GB, at which point the app crashes due to memory pressure.

The issue is consistently visible under VM: IOSurface in Instruments. No leaks are detected. So it looks like RealityKit (or lower-level frameworks) keeps caching meshes and textures and does not free them when the RealityView ends. But for my use case, these resources should be fully released once the ImmersiveSpace is dismissed, since new apartments will load entirely different models and textures.

Cleanup Code Example
Here's a simplified version of the cleanup I'm doing:

```swift
func clearAllRoomEntities() {
    for (entityName, entity) in entityFromMarker {
        entity.removeFromParent()

        if let modelEntity = entity as? ModelEntity {
            modelEntity.components.removeAll()
            modelEntity.children.forEach { $0.removeFromParent() }
            modelEntity.clearTexturesAndMaterials()
        }

        entityFromMarker[entityName] = nil
        removeSkyboxPortals(from: entityName)
    }
    entityFromMarker.removeAll()
}

extension ModelEntity {
    func clearTexturesAndMaterials() {
        guard var modelComponent = self.model else { return }

        for index in modelComponent.materials.indices {
            removeTextures(from: &modelComponent.materials[index])
        }

        modelComponent.materials.removeAll()
        self.model = modelComponent
        self.model = nil
    }

    private func removeTextures(from material: inout any Material) {
        if var pbr = material as? PhysicallyBasedMaterial {
            pbr.baseColor.texture = nil
            pbr.emissiveColor.texture = nil
            pbr.metallic.texture = nil
            pbr.roughness.texture = nil
            pbr.normal.texture = nil
            pbr.ambientOcclusion.texture = nil
            pbr.clearcoat.texture = nil
            material = pbr
        } else if var simple = material as? SimpleMaterial {
            simple.color.texture = nil
            material = simple
        }
    }
}
```

Questions
1. Is this expected RealityKit behavior (textures/meshes cached internally)?
2. Is there a way to force RealityKit to release GPU resources tied to USDZ models when they're no longer used?
3. Should dismissing the ImmersiveSpace automatically free those IOSurfaces, or do I need to handle this differently?

Any guidance, best practices, or confirmation would be hugely appreciated. Thanks in advance!
8
0
1.9k
2w
Is there a way to scale a RealityKit ShapeResource?
I can generate a ShapeResource from a RealityKit entity's extents. Can I apply some scaling to the generated shape? Is there a way to do that?

```swift
// model is a ModelResource and bounds is a BoundingBox
var shape = try await ShapeResource.generateConvex(from: model.mesh)
shape = shape.offsetBy(translation: bounds.center)
// How can I scale the shape to fit within the bounds?
```

The following API only provides rotation and translation support, and I cannot find scale support:

```swift
offsetBy(rotation: simd_quatf = simd_quatf(ix: 0, iy: 0, iz: 0, r: 1),
         translation: SIMD3<Float> = SIMD3<Float>())
```

I can put the ShapeResource on an entity and scale the entity, but I would like to know whether it is possible to scale the ShapeResource itself without attaching it to an entity.
3
0
578
Feb ’25
App Window Closure Sequence Impacts Main Interface Reload Behavior
My visionOS app (Travel Immersive) has two interface windows: a main 2D interface window and a 3D Earth window. If the user first closes the main interface window and then the Earth window, tapping the app icon again will only launch the Earth window and fails to display the main interface window. However, if the user closes the Earth window first and then the main interface window, the app restarts normally. Below is the code:

```swift
import SwiftUI

@main
struct Travel_ImmersiveApp: App {
    @StateObject private var appModel = AppModel()

    var body: some Scene {
        WindowGroup(id: "MainWindow") {
            ContentView()
                .environmentObject(appModel)
                .onDisappear {
                    appModel.closeEarthWindow = true
                }
        }
        .windowStyle(.automatic)
        .defaultSize(width: 1280, height: 825)

        WindowGroup(id: "Earth") {
            if !appModel.closeEarthWindow {
                Globe3DView()
                    .environmentObject(appModel)
                    .onDisappear {
                        appModel.isGlobeWindowOpen = false
                    }
            } else {
                EmptyView() // Render an empty view when the window is closed
            }
        }
        .windowStyle(.volumetric)
        .defaultSize(width: 0.8, height: 0.8, depth: 0.8, in: .meters)

        ImmersiveSpace(id: "ImmersiveView") {
            ImmersiveView()
                .environmentObject(appModel)
        }
    }
}
```
6
0
275
Apr ’25
Debug Vision Pro application directly on physical device instead of the simulator
I have a Mac mini M4 with 16 GB of memory and Xcode 16.1. When I test my Vision Pro app in the Simulator, it is very slow and the system shows that memory is under high pressure. How do I run/test/debug the application directly on a Vision Pro? I tried to add my Vision Pro to my developer account, but it didn't work because I couldn't find the UDID; when I hook up USB to the battery, it only shows a Battery device ID.
3
0
494
Jan ’25
generateConvex(from mesh: MeshResource) crashes instead of throwing a Swift error
I have a MeshResource and I would like to create a collision component from it.

```swift
let childBounds = child.visualBounds(relativeTo: self)
var childShape: ShapeResource
do {
    // Crashes on the following line instead of throwing a Swift error
    childShape = try await ShapeResource.generateConvex(from: childModel.mesh)
} catch {
    childShape = ShapeResource.generateBox(size: childBounds.extents)
    childShape = childShape.offsetBy(translation: childBounds.center)
}
```

Based on this documentation: https://developer.apple.com/documentation/realitykit/shaperesource/generateconvex(from:)-6upj9

"Will throw an error if mesh does not define a nonempty convex volume. For example, will fail if all the vertices in mesh are coplanar."

But the method crashes the app instead of throwing a Swift error:

```
Incident Identifier: 35CD58F8-FFE3-48EA-85D3-6D241D8B0B4C
CrashReporter Key:   FE6790CA-6481-BEFD-CB26-F4E27652BEAE
Hardware Model:      Mac15,11
...
Version:             1.0 (1)
Code Type:           ARM-64 (Native)
Role:                Foreground
Parent Process:      launchd_sim [2057]
Coalition:           com.apple.CoreSimulator.SimDevice.85A2B8FA-689F-4237-B4E8-DDB93460F7F6 [1496]
Responsible Process: SimulatorTrampoline [910]

Date/Time:           2025-01-26 16:13:17.5053 +0800
Launch Time:         2025-01-26 16:13:09.5755 +0800
OS Version:          macOS 15.2 (24C101)
Release Type:        User
Report Version:      104

Exception Type:  EXC_BREAKPOINT (SIGTRAP)
Exception Codes: 0x0000000000000001, 0x00000001abf841d0
Termination Reason: SIGNAL 5 Trace/BPT trap: 5
Terminating Process: exc handler [17316]

Triggered by Thread:  0

Thread 0 Crashed:
0   CoreRE             0x1abf841d0 REAssetManagerCollisionShapeAssetCreateConvexPolyhedron + 232
1   CoreRE             0x1abf845f0 REAssetManagerCollisionShapeAssetCreateConvexPolyhedronFromMesh + 868
2   RealityFoundation  0x1d25613bc static ShapeResource.generateConvex(from:) + 148
```

Here is the message on the app console from Xcode:

```
/Library/Caches/com.apple.xbs/Sources/REKit_Sim/ThirdParty/PhysX/physx/source/physxcooking/src/convex/QuickHullConvexHullLib.cpp (935) : internal error : QuickHullConvexHullLib::findSimplex: Simplex input points appers to be coplanar.
Failed to cook convex mesh (0x3)
assertion failure: 'convexPolyhedronShape != nullptr' (REAssetManagerCollisionShapeAssetCreateConvexPolyhedron:line 356)
Bad parameters passed for convex mesh creation.
```

The above crash happened on a visionOS simulator (visionOS 2.2 (22N840)).
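A hedged defensive sketch: since the coplanar case is what the documentation calls out, one workaround is to pre-check the mesh bounds and skip generateConvex when the geometry is (nearly) flat, falling back to the box shape. This only catches axis-aligned flatness, and the epsilon is an arbitrary assumption.

```swift
// Sketch: avoid calling generateConvex on a mesh with (almost) no volume.
let extents = childModel.mesh.bounds.extents
let epsilon: Float = 1e-5

if min(extents.x, extents.y, extents.z) < epsilon {
    // Degenerate (flat) mesh: use a box around the visual bounds instead.
    childShape = ShapeResource.generateBox(size: childBounds.extents)
        .offsetBy(translation: childBounds.center)
} else {
    childShape = try await ShapeResource.generateConvex(from: childModel.mesh)
}
```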
1
0
438
Jan ’25
FromToByAnimation triggers availableAnimations not the single bone animation
So, I was trying to animate a single bone using FromToByAnimation, but when I start the animation, the model instead plays the full-body animation stored in availableAnimations.

If I don't run testAnimation, nothing happens. If I run testAnimation, I see the same animation as if I had called entity.playAnimation(entity.availableAnimations[0], ...).

Here's the full code I use to animate a single bone:

```swift
func testAnimation() {
    guard let jawAnim = jawAnimation(mouthOpen: 0.4) else {
        print("Failed to create jawAnim")
        return
    }
    guard let creature,
          let animResource = try? AnimationResource.generate(with: jawAnim) else { return }

    let controller = creature.playAnimation(animResource, transitionDuration: 0.02, startsPaused: false)
    print("controller: \(controller)")
}

func jawAnimation(mouthOpen: Float) -> FromToByAnimation<JointTransforms>? {
    guard let basePose else { return nil }
    guard let index = basePose.jointNames.firstIndex(of: jawBoneName) else {
        print("Target joint \(self.jawBoneName) not found in default pose joint names")
        return nil
    }

    let fromTransforms = basePose.jointTransforms
    let baseJawTransform = fromTransforms[index]

    let maxAngle: Float = 40
    let angle: Float = maxAngle * mouthOpen * (.pi / 180)
    let extraRot = simd_quatf(angle: angle, axis: simd_float3(x: 0, y: 0, z: 1))

    var toTransforms = basePose.jointTransforms
    toTransforms[index] = Transform(
        scale: baseJawTransform.scale * 2,
        rotation: baseJawTransform.rotation * extraRot,
        translation: baseJawTransform.translation
    )

    let fromToBy = FromToByAnimation<JointTransforms>(
        jointNames: basePose.jointNames,
        name: "jaw-anim",
        from: fromTransforms,
        to: toTransforms,
        duration: 0.1,
        bindTarget: .jointTransforms,
        repeatMode: .none
    )
    return fromToBy
}
```

PS: I can confirm that I can set this bone to a specific position if I use:

```swift
guard let index = newPose.jointNames.firstIndex(of: boneName) ...

let baseTransform = basePose.jointTransforms[index]
newPose.jointTransforms[index] = Transform(
    scale: baseTransform.scale,
    rotation: baseTransform.rotation * extraRot,
    translation: baseTransform.translation
)
skeletalComponent.poses.default = newPose
creatureMeshEntity.components.set(skeletalComponent)
```

This works for manually setting the bone position, so the jawBoneName and the joint transformation can't be that wrong.
1
0
266
Aug ’25
Rendering Order with ModelSortGroup
I have a huge sphere with the camera inside it, and I turn on front-face culling in the ShaderGraphMaterial applied to that sphere so that I can place other 3D content inside. However, when it comes to attachments, object occlusion never works as I expect: my attachments are occluded by my sphere (some are not, so the behavior is not deterministic). I suspected a depth-testing issue, so I started using ModelSortGroup to reorder the rendering sequence, but it doesn't work. Searching the internet, the comments on this post indicate that ModelSortGroup simply doesn't work on attachments. So how should I tackle this issue now, to let my attachments appear inside my sphere? OS/Xcode: visionOS 2.3 / Xcode 16.3
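For reference, a minimal sketch of the usual ModelSortGroup setup for ordinary entities (the entity names are illustrative); the open question above is that this reportedly has no effect when one of the participants is an attachment:

```swift
import RealityKit

// Sketch: force the inner content to draw after the surrounding sphere.
let sortGroup = ModelSortGroup(depthPass: .postPass)
sphereEntity.components.set(ModelSortGroupComponent(group: sortGroup, order: 0))
innerContentEntity.components.set(ModelSortGroupComponent(group: sortGroup, order: 1))
```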
1
0
505
Feb ’25