Discuss Spatial Computing on Apple Platforms.

Posts under General subtopic

Post · Replies · Boosts · Views · Activity

Metal (Compositor Services) or RealityKit on visionOS
I am developing a visionOS app. I am now very interested in Metal and Compositor Services, but I have not explored them in depth. I know that Metal offers a much greater degree of control. I am wondering whether using Compositor Services gives up AR features that RealityKit provides (such as scene reconstruction and understanding, hover effects, etc.).
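For reference, scene reconstruction and understanding on visionOS are delivered by ARKit data providers rather than by RealityKit itself, so a Metal/Compositor Services app can still consume them; system hover effects, by contrast, are tied to RealityKit and SwiftUI content. A minimal sketch of the ARKit side, assuming an open immersive space and granted world-sensing authorization:

```swift
import ARKit

// Scene reconstruction comes from ARKit data providers, independent of
// whether you render with RealityKit or your own Metal pipeline.
// Assumes an open immersive space and .worldSensing authorization.
func runSceneReconstruction() async throws {
    let session = ARKitSession()
    let provider = SceneReconstructionProvider()
    try await session.run([provider])

    for await update in provider.anchorUpdates {
        // Each MeshAnchor carries reconstructed geometry that a
        // Compositor Services app can feed to its own renderer.
        print("Mesh anchor \(update.anchor.id), event: \(update.event)")
    }
}
```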
4 · 0 · 225 · Jun ’25
Ornaments in Presentations
We can add ornaments to popovers shown by PresentationComponent, but I’m not sure if we should. While working on the editor for entities in a Volume-based app, I had the idea to add ornaments to the presented views. The entire app exists inside a volume. A user can tap an item to present a popover UI to edit it. This is displayed using the new PresentationComponent in visionOS 26.

Ornaments have a new attachment anchor option this year: .parent().

```swift
.ornament(attachmentAnchor: .parent(.top), ornament: {...})
```

This works well in the Simulator. We can add ornaments around this popover view just like we would with a window. Unfortunately, when I run this on device I get a different experience. Any part of the ornament that overlaps the popover content isn’t rendered correctly. Sometimes it disappears entirely; other times it becomes partially transparent.

We could use content alignment to try to keep the ornament from overlapping the popover content:

```swift
.ornament(attachmentAnchor: .parent(.top), contentAlignment: .bottom, ornament: {...})
```

This works sometimes, but not all the time. It’s not clear whether this is a bug, because I’m not sure we are even supposed to be able to use ornaments this way. Here is my hierarchy:

- An app opens as a Volume
- The Volume presents a RealityView, with its own ornament using the .scene() anchor
- Multiple entities with a PresentationComponent show an edit view
- The view uses the .parent() anchor to add ornaments

What makes me unsure is that other methods for drawing UI in a RealityView don’t seem to work with ornaments. For example, if I add an attachment to show a view with the ornament, even when I use the .parent() anchor, the ornament is anchored to the volume, not the attachment view.

So what do we think? Is this a rendering bug? Are ornaments intended to work with attachments and presentations?
2 · 0 · 355 · Aug ’25
ImpulseAction giving strange error
I am trying to apply an ImpulseAction to an entity, but every time entity.playAnimation(impulseAnimation) executes, the log says Cannot find a BindPoint for any bind path: "". I can't figure out what is wrong. Could someone please help me with this?

```swift
import SwiftUI
import RealityKit
import RealityKitContent

struct ImmersiveView: View {
    var body: some View {
        RealityView { content in
            // Add the initial RealityKit content
            if let immersiveContentEntity = try? await Entity(named: "Immersive", in: realityKitContentBundle),
               let sphere = immersiveContentEntity.findEntity(named: "Sphere") {
                sphere.components.set(CollisionComponent(shapes: [ShapeResource.generateSphere(radius: 0.1)]))
                sphere.components.set(PhysicsBodyComponent(shapes: [ShapeResource.generateSphere(radius: 0.1)], mass: 1000))
                sphere.components[PhysicsBodyComponent.self]?.isAffectedByGravity = false
                sphere.position = [0, 1, -1]
                content.add(immersiveContentEntity)

                // Create an action to apply an impulse, forcing the object to move upwards.
                let impulseAction = ImpulseAction(linearImpulse: [0, 1, 0])

                // Create a small positive duration value.
                let duration: TimeInterval = 1 / 30.0

                // Create an animation for the action, which will start playing
                // after five seconds.
                do {
                    let impulseAnimation = try AnimationResource
                        .makeActionAnimation(for: impulseAction, duration: duration, delay: 5.0)
                    // Play the animation that runs the action.
                    sphere.playAnimation(impulseAnimation)
                } catch {
                    print("Error: \(error)")
                }
            }
        }
    }
}
```

All the logs:

```
Could not locate file 'default-binaryarchive.metallib' in bundle.
Error creating the CFMessagePort needed to communicate with PPT.
AddInstanceForFactory: No factory registered for id <CFUUID 0x6000029a5b80> F8BB1C28-BAE8-11D6-9C31-00039315CD46
cannot add handler to 0 from 1 - dropping
nw_socket_copy_info [C1:2] getsockopt TCP_INFO failed [102: Operation not supported on socket]
nw_socket_copy_info getsockopt TCP_INFO failed [102: Operation not supported on socket]
Registering library (/Library/Developer/CoreSimulator/Volumes/xrOS_22N840/Library/Developer/CoreSimulator/Profiles/Runtimes/xrOS 2.2.simruntime/Contents/Resources/RuntimeRoot/System/Library/PrivateFrameworks/CoreRE.framework/default.metallib) that already exists in shader manager. Library will be overwritten.
cannot add handler to 0 from 1 - dropping
Cannot find a BindPoint for any bind path: "", ""
Sync object without snapshot while removing view (id: 2816861686082450363, type: 6373420419761316588[SelectableSceneContentIdentifierComponent]).
```

But I think only Cannot find a BindPoint for any bind path: "", "" is relevant.
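A hedged alternative while debugging the warning, sketched below: skip the action animation and apply the impulse straight to the physics simulation. applyLinearImpulse is part of the RealityKit physics forces API (visionOS 2 and later; availability is worth verifying) and requires a dynamic PhysicsBodyComponent, as in the post.

```swift
import RealityKit

// Apply the impulse directly rather than via an ImpulseAction animation.
// Assumes the entity already has a dynamic PhysicsBodyComponent.
func nudgeUpward(_ sphere: Entity) {
    Task {
        try? await Task.sleep(for: .seconds(5)) // mirror the action's 5 s delay
        sphere.applyLinearImpulse([0, 1, 0], relativeTo: nil)
    }
}
```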
2 · 0 · 652 · Jan ’25
Unexpected Behavior in Entity Movement System When Using AVAudioPlayer in visionOS Development
I am currently developing an app for visionOS and have encountered an issue involving a component and system that move an entity up and down within a specific Y-axis range. The system works as expected until I introduce sound playback using AVAudioPlayer.

Whenever I use AVAudioPlayer to play sound, the entity exhibits unexpected behaviors, such as freezing or becoming unresponsive. The freeze in the entity's movement is particularly noticeable the first time the audio plays. After that it becomes less noticeable, but you can still feel it, especially when the audio is played in quick succession. The issue is also more noticeable on a real device than in the simulator.

```swift
// IssueApp.swift
import SwiftUI

@main
struct IssueApp: App {
    var body: some Scene {
        WindowGroup {
            ContentView()
        }
        .windowStyle(.volumetric)
    }
}
```

```swift
// ContentView.swift
import SwiftUI
import RealityKit
import RealityKitContent

struct ContentView: View {
    @State var enlarge = false

    var body: some View {
        RealityView { content, attachments in
            // Add the initial RealityKit content
            if let scene = try? await Entity(named: "Scene", in: realityKitContentBundle) {
                if let sphere = scene.findEntity(named: "Sphere") {
                    sphere.components.set(UpAndDownComponent(speed: 0.03, minY: -0.05, maxY: 0.05))
                }
                if let button = attachments.entity(for: "Button") {
                    button.position.y -= 0.3
                    scene.addChild(button)
                }
                content.add(scene)
            }
        } attachments: {
            Attachment(id: "Button") {
                VStack {
                    Button {
                        SoundManager.instance.playSound(filePath: "apple_en")
                    } label: {
                        Text("Play audio")
                    }
                    .animation(.none, value: 0)
                    .fontWeight(.semibold)
                }
                .padding()
                .glassBackgroundEffect()
            }
        }
        .onAppear {
            UpAndDownSystem.registerSystem()
        }
    }
}
```

```swift
// SoundManager.swift
import Foundation
import AVFoundation

class SoundManager {
    static let instance = SoundManager()
    private var audioPlayer: AVAudioPlayer?

    func playSound(filePath: String) {
        guard let url = Bundle.main.url(forResource: filePath, withExtension: ".mp3") else { return }
        do {
            audioPlayer = try AVAudioPlayer(contentsOf: url)
            audioPlayer?.play()
        } catch let error {
            print("Error playing sound. \(error.localizedDescription)")
        }
    }
}
```

```swift
// UpAndDownComponent+System.swift
import RealityKit

struct UpAndDownComponent: Component {
    var speed: Float
    var axis: SIMD3<Float>
    var minY: Float
    var maxY: Float
    var direction: Float = 1.0 // 1 for up, -1 for down
    var initialY: Float?

    init(speed: Float = 1.0, axis: SIMD3<Float> = [0, 1, 0], minY: Float = 0.0, maxY: Float = 1.0) {
        self.speed = speed
        self.axis = axis
        self.minY = minY
        self.maxY = maxY
    }
}

struct UpAndDownSystem: System {
    static let query = EntityQuery(where: .has(UpAndDownComponent.self))

    init(scene: RealityKit.Scene) {}

    func update(context: SceneUpdateContext) {
        let deltaTime = Float(context.deltaTime) // Time between frames
        for entity in context.entities(matching: Self.query, updatingSystemWhen: .rendering) {
            guard var component: UpAndDownComponent = entity.components[UpAndDownComponent.self] else { continue }

            // Ensure we have the initial Y value set
            if component.initialY == nil {
                component.initialY = entity.transform.translation.y
            }

            // Calculate the current position
            let currentY = entity.transform.translation.y

            // Move the entity up or down
            let newY = currentY + (component.speed * component.direction * deltaTime)

            // If the entity moves out of the allowed range, reverse the direction
            if newY >= component.initialY! + component.maxY {
                component.direction = -1.0 // Move down
            } else if newY <= component.initialY! + component.minY {
                component.direction = 1.0 // Move up
            }

            // Apply the new position
            entity.transform.translation = SIMD3<Float>(entity.transform.translation.x, newY, entity.transform.translation.z)

            // Update the component with the new direction
            entity.components[UpAndDownComponent.self] = component
        }
    }
}
```

Could someone help me with this?
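A guess at the cause, not a confirmed diagnosis: AVAudioPlayer(contentsOf:) does synchronous file I/O and decoding on the calling thread, here the main thread that also drives the UI, which would explain the first-play hitch. Preloading and preparing the players once (for example in onAppear) is one mitigation to try; a sketch follows, with the class name made up for illustration:

```swift
import AVFoundation

// A variation on the post's SoundManager: create and prepare each
// AVAudioPlayer once, up front, so the button tap only starts playback
// instead of re-opening and re-decoding the MP3 every time.
final class PreparedSoundManager {
    static let instance = PreparedSoundManager()
    private var players: [String: AVAudioPlayer] = [:]

    func preload(filePath: String) {
        guard players[filePath] == nil,
              let url = Bundle.main.url(forResource: filePath, withExtension: "mp3")
        else { return }
        if let player = try? AVAudioPlayer(contentsOf: url) {
            player.prepareToPlay() // decode buffers before first use
            players[filePath] = player
        }
    }

    func playSound(filePath: String) {
        players[filePath]?.currentTime = 0
        players[filePath]?.play()
    }
}
```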
2 · 0 · 358 · Feb ’25
Reading image files outside of app content
Hi there. Thanks to amazing help from you guys, I've managed to code a 360° image carousel where the user can browse 360° images located inside the project package. Is there a way to access the filesystem on AVP outside the app? I know about FileManager, and I can get access to .documentsDirectory, but how do I access the documents folder from the "Files" app on the AVP? My goal is to read images from a hardcoded folder location on the AVP, so that the user never has to select the images themselves. I know this may not be the "right" way to do this. The app is supposed to be "foolproof" with a minimum of user interaction. The only way to change the images should be to change the contents of the hardcoded image folder. I hope this makes sense =) Thanks in advance! Regards, Kim
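One common pattern, assuming the goal is zero in-app picking: add the UIFileSharingEnabled and LSSupportsOpeningDocumentsInPlace keys to Info.plist so the app's own Documents folder appears in the Files app ("On My Apple Vision Pro"), then read whatever images are in it at launch. A sketch:

```swift
import Foundation

// Reads every image in the app's Documents directory. With
// UIFileSharingEnabled and LSSupportsOpeningDocumentsInPlace set in
// Info.plist, this folder is visible in the Files app on device, so
// the image set can be swapped without any in-app selection UI.
func loadCarouselImageURLs() -> [URL] {
    let documents = URL.documentsDirectory
    let contents = (try? FileManager.default.contentsOfDirectory(
        at: documents,
        includingPropertiesForKeys: nil
    )) ?? []
    return contents
        .filter { ["jpg", "jpeg", "png"].contains($0.pathExtension.lowercased()) }
        .sorted { $0.lastPathComponent < $1.lastPathComponent }
}
```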
2 · 0 · 282 · Feb ’25
openImmersiveSpace works as expected but never returns Result
As in the title: openImmersiveSpace works as expected. The ImmersiveSpace with the given ID opens normally, but the function never resolves to a Result. Here is a workaround I used to make sure it was on the UI thread and scenePhase was active:

```swift
@MainActor
func openSpaceWithStateCheck() async {
    if scenePhase == .active {
        Task {
            switch await openImmersiveSpace(id: "RoomCaptureInteraction") {
            case .opened:
                isCapturingImagery = true
            case .error:
                print("!! An error occurred when trying to open the immersive space captureRoomImagery")
            case .userCancelled:
                print("!! The user declined opening immersive space captureRoomImagery")
            @unknown default:
                print("!! unknown default result of opening space")
            }
        }
    } else {
        print("Scene not active, deferring immersive space opening")
    }
}
```

I'm on visionOS 2.4 and SDK 2.2. I have tried uninstalling the app and rebuilding, and tried simply opening an empty ImmersiveSpace. The consistency of the ImmersiveSpace opening at least means I can work around it. Even dismissImmersiveSpace works normally and closes the immersive space. But a workaround seems ham-fisted.
1 · 0 · 78 · Mar ’25
App crashes after requesting PhotoLibrary limited access
My visionOS app requires access to the user's photo library. The trigger mechanism: when the user first opens a FooView, a task attached to that FooView calls let status = PHPhotoLibrary.authorizationStatus(for: .readWrite); if the status is .notDetermined, it then calls PHPhotoLibrary.requestAuthorization(for: .readWrite, handler: authCompletionHandler) so visionOS pops up a window requesting photo access.

However, the app crashes every time the user selects Limited Access and the system tries to present the photo library picker. (By the way, I have set Prevent limited photos access alert to Yes, but I don't think it should affect the behavior here.)

The debugger message:

```
*** Terminating app due to uncaught exception 'NSInternalInconsistencyException', reason: 'Presentations are not permitted within volumetric window scenes.'
```

However, the window this view belongs to uses a .plain window style (though 3D objects appear in another view of the same WindowGroup).

Here is my code snippet, if it helps. checkAndUpdatePhotoAuthorization is just a wrapper around PHPhotoLibrary.authorizationStatus(for: .readWrite):

```swift
private func checkAndUpdatePhotoAuthorization() -> PHAuthorizationStatus {
    let currentStatus = PHPhotoLibrary.authorizationStatus(for: .readWrite)
    switch currentStatus {
    case .authorized:
        print("Photo library access authorized.")
        isPhotoGalleryAuthorized = true
        isPhotoGalleryLimited = false
        isPhotoGalleryAccessRestricted = false
        isPhotoGalleryDetermined = true
    case .limited:
        print("Photo library access limited.")
        isPhotoGalleryLimited = true
        isPhotoGalleryAuthorized = false
        isPhotoGalleryAccessRestricted = false
        isPhotoGalleryDetermined = true
    case .notDetermined:
        isPhotoGalleryDetermined = false
        print("Photo library access not determined.")
    case .denied:
        print("Photo library access denied.")
        isPhotoGalleryAuthorized = false
        isPhotoGalleryLimited = false
        isPhotoGalleryAccessRestricted = false
        showSettingsAlert = true
        isPhotoGalleryDetermined = true
    case .restricted:
        print("Photo library access restricted.")
        isPhotoGalleryAuthorized = false
        isPhotoGalleryLimited = false
        isPhotoGalleryAccessRestricted = true
        showPhotoAuthExplainationAlert = true
        isPhotoGalleryDetermined = true
    @unknown default:
        print("Photo library Unknown authorization status.")
        isPhotoGalleryAuthorized = false
        isPhotoGalleryLimited = false
        isPhotoGalleryAccessRestricted = false
        isPhotoGalleryDetermined = true
    }
    return currentStatus
}
```

FooView then attaches a task that fires checkAndUpdatePhotoAuthorization():

```swift
var body: some View {
    EmptyView()
        .task {
            try? await Task.sleep(for: .seconds(1.0))
            let status = self.checkAndUpdatePhotoAuthorization()
            if status == .notDetermined {
                DispatchQueue.main.async {
                    PHPhotoLibrary.requestAuthorization(for: .readWrite, handler: authCompletionHandler)
                }
            }
        }
}
```

Another thing worth mentioning: SOMETIMES it won't crash in a debug build, but it always crashes when it comes through TestFlight. Any other ideas? Big thanks in advance.

Xcode version: 16.2 beta 3. visionOS version: 2.2
1 · 0 · 681 · Jan ’25
Object Capture app getting rejected for not supporting non-Pro models
Hi, I created an app using the iOS Object Capture API, which works only on LiDAR-enabled phones. That is a limitation of the API provided by Apple itself. I submitted the app for review, but it is getting rejected (twice) saying it doesn't work on non-Pro models. Even though I explained that capturing needs LiDAR and is supported only on Pro models, it still gets rejected after testing on non-Pro models. Is there a way out?
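Not a guaranteed way past App Review, but the usual defensive pattern is to gate the capture flow at runtime with the support check RealityKit provides, and point reviewers at the fallback UI in the review notes. A sketch, with a hypothetical view name:

```swift
import SwiftUI
import RealityKit

// ObjectCaptureSession.isSupported (iOS 17+) reports whether this device
// can run Object Capture (LiDAR required), so non-Pro devices get a
// graceful fallback instead of a dead feature.
struct CaptureGateView: View {
    var body: some View {
        if ObjectCaptureSession.isSupported {
            Text("Start Capture") // present the real capture flow here
        } else {
            Text("3D capture requires a LiDAR-equipped iPhone or iPad.")
        }
    }
}
```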
1 · 0 · 353 · Feb ’25
How to request several models simultaneously
I am using HelloPhotogrammetry in Xcode. I can make one model with something like:

```swift
HelloPhotogrammetry.main([path_to_folder_of_images, path_to_output/model.usdz, "-d", "medium", "-o", "unordered", "-f", "high"])
```

But how would I request several models simultaneously? I only want to vary the detail:

```
("/Users/you/Desktop/model_medium.usdz", detail: .medium),
("/Users/you/Desktop/model_full.usdz", detail: .full),
("/Users/you/Desktop/model_raw.usdz", detail: .raw)
```
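For reference, HelloPhotogrammetry is a thin wrapper over PhotogrammetrySession, and a single session accepts several requests in one process(requests:) call, so one pass over the images can produce all three detail levels. A sketch using the paths from the post:

```swift
import RealityKit

// Request three detail levels from one PhotogrammetrySession, the API
// the HelloPhotogrammetry sample wraps. Paths mirror the post.
func makeAllDetailLevels(images: URL) async throws {
    let session = try PhotogrammetrySession(input: images)

    // Kick off all three requests in a single pass over the images.
    try session.process(requests: [
        .modelFile(url: URL(fileURLWithPath: "/Users/you/Desktop/model_medium.usdz"), detail: .medium),
        .modelFile(url: URL(fileURLWithPath: "/Users/you/Desktop/model_full.usdz"), detail: .full),
        .modelFile(url: URL(fileURLWithPath: "/Users/you/Desktop/model_raw.usdz"), detail: .raw),
    ])

    // Wait for completion; progress and errors arrive on session.outputs.
    for try await output in session.outputs {
        if case .processingComplete = output { break }
    }
}
```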
2 · 0 · 97 · Apr ’25
spatial-backdrop feature available yet?
In the WWDC25 session What’s new for the spatial web, the presenter showed how to create an immersive environment for a web page by adding to the page's HEAD section:

```html
<link rel="spatial-backdrop" href="office.usdz" environmentmap="lighting.hdr">
```

My first attempt failed, and I am trying to track down why. Before I search all the potential failure paths, I wanted to ask the community: is this feature available in the latest visionOS 26 beta? I haven't seen anyone talk about their use of the feature yet.
2 · 0 · 232 · Aug ’25
Run to Device (on WLAN)?
Is it possible to use a local Wi-Fi router to connect Vision Pro and Mac for development? I tried from Unity and Xcode. From Unity, the host app wouldn't open without Wi-Fi (internet connection). From Xcode, I can see the Vision Pro paired, but when I try to run, there's no device listed. Any suggestions? Thanks a lot, /Ruiying
2 · 0 · 283 · Jan ’25
Converting a Stop Motion Animation to usdz
Hello everyone, I've been trying for a few weeks now to convert a sequential series of meshes into a stop-motion animation in USDZ format. In Unreal Engine, I've already figured out how to turn the sequence of individual meshes into a smooth animation using the node system and arrays. Unfortunately, that node-based animation logic cannot be exported as USDZ from either Unreal or Blender. Because of this, I have tried several other methods to incorporate the animation logic. Here's what I've tried so far:

- I attempted to create the animation in Blender with render/viewport setups and map it to keyframes. However, in my experience, viewports are not supported in the conversion.
- I tried aligning the vertices of individual objects and merging the frames using the Shrinkwrap modifier in Blender, then setting up a morph animation with keyframes. However, because the individual meshes are too different, this results in artifacts, and manually editing each mesh is too difficult for me to handle.
- I placed all individual meshes at the same position and animated them sequentially by scaling them from 0 to 100 in keyframes (frame 1 is visible for 10 frames, then scales down at frame 11, while frame 2 becomes visible at frame 11, and so on). I also set the keyframe interpolation to "constant" rather than the default Bezier or linear interpolation. I then converted this animation to .abc, and the result initially looked good. However, some information is lost when converting it with OpenUSD: the animation does not maintain its intended jump-like behavior in USDZ format, and instead the scaling of the individual meshes is visible in the animation.
- I tried using a Blender add-on (StepMotion), which allows the animation to be exported as .abc, but it can only be read in Blender or Unreal. Even in the preview the animation is not displayed correctly, so converting the animation logic does not work either.

Unfortunately, I have no alternative way to create the animation, as the individual frames have been provided to me as meshes. So far, I haven't found a way to implement this successfully. I would be very grateful for any tips or ideas, as I am running out of options on how to make this work. Thanks in advance!
2 · 0 · 162 · Apr ’25
Unity on visionOS development - best practices for structuring a project
Hello, I am experimenting with Unity to develop a mixed reality (MR) application for visionOS. I would like to understand the best approach for structuring my project:

- Should I build the entire experience in Unity (both Windows and Volumes)?
- Or is it better to create only certain elements (e.g., Volumes) in Unity while managing Windows separately in Xcode?
- Also, how well do interactions (e.g., pinch, grab…) created in Unity integrate with Xcode?
- If I use the PolySpatial plugin, does that allow me to manage all interactions entirely within Unity, or would I still need to handle/integrate part of it in Xcode?

What's worked best for you? Please let me know if you have any recommendations. Thanks!
3 · 0 · 151 · Apr ’25
Unable to Create a Fully Immersive Experience That Hides Other Windows in visionOS App
Description: I'm developing a travel/panorama viewing app for visionOS that allows users to view 360° panoramic images in an immersive space. When users enter panorama viewing mode, I want to provide a fully immersive experience where the main interface window and the Earth 3D globe window are hidden. I've implemented the app following Apple's documentation on Creating Fully Immersive Experiences, but when users enter the immersive space, both the main window and the Earth 3D window remain visible, diminishing the immersive experience.

Implementation Details: My app has three main components:

- A main content window showing panorama thumbnails
- A 3D globe window (volumetric) showing locations
- An immersive space for viewing 360° panoramas

I'm using .immersionStyle(selection: $panoImageView, in: .full) to create a fully immersive experience, but other windows remain visible.

Relevant Code:

```swift
@main
struct Travel_ImmersiveApp: App {
    @StateObject private var appModel = AppModel()
    @State private var panoImageView: ImmersionStyle = .full

    var body: some Scene {
        WindowGroup {
            ContentView()
                .environmentObject(appModel)
        }
        .windowStyle(.automatic)
        .defaultSize(width: 1280, height: 825)

        WindowGroup(id: "Earth") {
            Globe3DView()
                .environmentObject(appModel)
                .onAppear {
                    appModel.isGlobeWindowOpen = true
                    appModel.globeWindowOpen = true
                }
                .onDisappear {
                    if !appModel.shouldCloseApp {
                        appModel.handleGlobeWindowClose()
                    }
                }
        }
        .windowStyle(.volumetric)
        .defaultSize(width: 0.8, height: 0.8, depth: 0.8, in: .meters)
        .windowResizability(.contentSize)

        ImmersiveSpace(id: "ImmersiveView") {
            ImmersiveView()
                .environmentObject(appModel)
        }
        .immersionStyle(selection: $panoImageView, in: .full)
    }
}
```

Opening the Immersive Space:

```swift
func getPanoImageAndOpenImmersiveSpace() async {
    appModel.clearMemoryCache()
    do {
        let canView = appModel.canViewImage(image)
        if canView {
            let downloadedImage = try await appModel.getPanoramaImage(for: image) { progress in
                Task { @MainActor in
                    cardState = .loading(progress: progress)
                }
            }
            await MainActor.run {
                appModel.updateCurrentImage(image, panoramaImage: downloadedImage)
            }
            if !appModel.immersiveSpaceOpened {
                try await openImmersiveSpace(id: "ImmersiveView")
                await MainActor.run {
                    appModel.immersiveSpaceOpened = true
                    cardState = .normal
                }
            } else {
                await MainActor.run {
                    appModel.updateImmersiveView = true
                    cardState = .normal
                }
            }
        } else {
            await MainActor.run {
                appModel.errorMessage = "You do not have permission to view this image."
                cardState = .normal
            }
        }
    } catch {
        // Error handling
    }
}
```

Immersive View Implementation:

```swift
struct ImmersiveView: View {
    @EnvironmentObject var appModel: AppModel

    var body: some View {
        RealityView { content in
            let rootEntity = Entity()
            content.add(rootEntity)
            Task {
                if let selectedImage = appModel.selectedImage,
                   appModel.canViewImage(selectedImage) {
                    await loadPanorama(for: rootEntity)
                }
            }
        } update: { content in
            if appModel.updateImmersiveView,
               let selectedImage = appModel.selectedImage,
               appModel.canViewImage(selectedImage),
               let rootEntity = content.entities.first {
                Task {
                    await loadPanorama(for: rootEntity)
                    appModel.updateImmersiveView = false
                }
            }
        }
        .onAppear {
            print("ImmersiveView appeared")
        }
        .onDisappear {
            appModel.resetImmersiveState()
        }
    }

    // loadPanorama implementation...
}
```

What I've Tried:

- Set immersionStyle to .full as recommended in the documentation
- Confirmed that the immersive space is properly opened and displaying panoramas
- Verified that the state management for the immersive space is working correctly

Questions:

1. How can I ensure that when the user enters the immersive panorama viewing experience, all other windows (main interface and Earth 3D globe) are automatically hidden?
2. Is there a specific API or approach I'm missing to properly implement a fully immersive experience that hides all other windows?
3. Do I need to manually dismiss the windows when opening the immersive space, and if so, what's the best approach for doing this?

Any guidance or sample code would be greatly appreciated. Thank you!
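One approach worth trying, offered as a sketch rather than a confirmed fix: a .full immersion style hides other apps' content, but your own app's windows stay open until you dismiss them, so close them explicitly right after the space opens. The "Main" window id below is hypothetical; the post's main WindowGroup would need an explicit id for this to work.

```swift
import SwiftUI

// Open the immersive space, then dismiss the app's own windows.
struct PanoramaLauncher: View {
    @Environment(\.openImmersiveSpace) private var openImmersiveSpace
    @Environment(\.dismissWindow) private var dismissWindow

    var body: some View {
        Button("View Panorama") {
            Task {
                if case .opened = await openImmersiveSpace(id: "ImmersiveView") {
                    dismissWindow(id: "Earth") // the globe volume
                    // Dismissing the main window requires giving it an id,
                    // e.g. WindowGroup(id: "Main") { ContentView() }.
                    dismissWindow(id: "Main")
                }
            }
        }
    }
}
```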
3 · 0 · 149 · Apr ’25
Run to Device (on WLAN)?
I was trying to use a local Wi-Fi router to connect Vision Pro and Mac for development. From Unity, the host on Vision Pro wouldn't open; from Xcode, I can see the Vision Pro paired, but by the Run button there's no device listed... Thanks for any ideas! /Ruiying
1 · 0 · 328 · Jan ’25