Discuss Spatial Computing on Apple Platforms.

Posts under General subtopic


Can't find a DLL in a visionOS app with Unity
Dear all, I'm using Unity 6.2 beta and Xcode 16.2. I'm creating a simple framework, written in Swift, to use the text-to-speech functionality of visionOS from Unity. I created an Objective-C wrapper with the following declarations:

    ...
    void _initTTS(int);
    ...

I build the framework, import it into Unity, and call the functions from a C# wrapper class. The code is as follows:

    public static class TTSPluginManager
    {
        [DllImport("TTS_Vision")]
        private static extern void _initTTS(int val);

        ...

        public static void Initialize()
        {
    #if UNITY_VISIONOS
            _initTTS(0);
    #else
            Debug.LogWarning("NativeTTS.Initialize called on a non-iOS platform. Ignoring.");
    #endif
        }
    }

I have managed to compile and run the program on the Apple Vision Pro, but I keep getting the following error:

    DllNotFoundException: TTS_Vision assembly: type: member:(null)
      TTSPluginManager.Initialize () (at Assets/Plugins/TTSPluginManager.cs:33)
      LecturePortalManager.OnCreateStory (Ink.Runtime.Story story) (at Assets/AVRLecture/LecturePortalManager.cs:17)
      InkLoader.StartStory () (at Assets/AVRLecture/InkLoader.cs:24)
      InkLoader.Start () (at Assets/AVRLecture/InkLoader.cs:18)

If I run the generated project from Xcode, I can see the app on the AVP, but I keep getting a loading error:

    DllNotFoundException: Unable to load DLL 'TTS_Vision'. Tried the load the following dynamic libraries: Unable to load dynamic library '/TTS_Vision' because of 'Failed to open the requested dynamic library (0x06000000) dlerror() = dlopen(/TTS_Vision, 0x0005): tried: '/TTS_Vision' (no such file)
      at TTSPluginManager.Initialize () [0x00000] in <00000000000000000000000000000000>:0
      at LecturePortalManager.OnCreateStory (Ink.Runtime.Story story) [0x00000] in <00000000000000000000000000000000>:0

I can see in the generated project that the framework (TTS_Vision) is there, but the path seems wrong. I've tried adding more options to the search paths, with no success. Any hints or suggestions are much appreciated. (A sketch of the Swift side of such a plugin follows this post.)
1 reply · 0 boosts · 286 views · Sep ’25
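For reference, a minimal sketch of what the Swift side of such a plugin can look like, assuming an AVSpeechSynthesizer-based implementation. Only the _initTTS entry point comes from the post; _speakTTS and everything else are illustrative:

    import AVFoundation

    // Kept alive for the lifetime of the app so speech is not cut off.
    private let synthesizer = AVSpeechSynthesizer()

    // Exposed under the exact C symbol the C# wrapper imports.
    @_cdecl("_initTTS")
    public func initTTS(_ voiceIndex: Int32) {
        // The post only shows the declaration; a real implementation might
        // select a voice or configure the audio session here.
        _ = voiceIndex
    }

    // Hypothetical companion entry point for speaking a string.
    @_cdecl("_speakTTS")
    public func speakTTS(_ text: UnsafePointer<CChar>) {
        let utterance = AVSpeechUtterance(string: String(cString: text))
        synthesizer.speak(utterance)
    }

One thing worth checking: on device builds, Unity typically links native plugins statically, in which case the C# declaration uses [DllImport("__Internal")] rather than the framework name; a mismatch there is a common cause of DllNotFoundException.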
[Vision, visionOS] Is it possible to use the Vision framework on visionOS for body tracking?
Hello, I checked the following documentation:

- Vision | Apple Developer Documentation
- Discover Swift enhancements in the Vision framework - WWDC24 - Videos - Apple Developer

I saw that the Vision framework is available on visionOS, so I would like to know whether it's possible to use it on visionOS for tracking human and animal body poses, or whether there are limits to its use there.
1 reply · 0 boosts · 506 views · Jan ’25
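For context, a minimal sketch of the kind of request the post is asking about, using the long-standing VNDetectHumanBodyPoseRequest API (a corresponding VNDetectAnimalBodyPoseRequest exists for animal poses). Whether and how well this runs on visionOS is exactly the open question:

    import CoreGraphics
    import Vision

    // Runs Vision's human body pose detector on a single image and returns
    // the observations; each one holds normalized joint locations.
    func detectBodyPose(in image: CGImage) throws -> [VNHumanBodyPoseObservation] {
        let request = VNDetectHumanBodyPoseRequest()
        let handler = VNImageRequestHandler(cgImage: image, options: [:])
        try handler.perform([request])
        return request.results ?? []
    }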
Overlaying SwiftUI content with transparency in front of RealityView
Following up on my previous question here: https://developer.apple.com/forums/thread/774262

Having solved the clipping problem, I am now trying to overlay some content in front of the RealityView. However, it looks like any content with transparency does not render in front of the RealityView, while opaque views seem to work; placing content with transparency, like glassBackgroundEffect(), behind the RealityView in a ZStack causes the entire window to flicker. Additionally, my SwiftUI attachment placed in front of the stereoscopic image plane is invisible if the user looks at it straight on at 90 degrees; if the user looks at it from increasing angles to the sides, the attachment gradually becomes visible again.

Are these behaviors expected? What is a recommended approach to overlaying content in front of a RealityView? (A sketch of the arrangement follows this post.) Thanks!
1 reply · 0 boosts · 391 views · Feb ’25
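For reference, a minimal sketch of the overlay arrangement the post describes (all names are illustrative). This shows the configuration reported as misbehaving, not a fix:

    import RealityKit
    import SwiftUI

    struct OverlayedRealityView: View {
        var body: some View {
            ZStack {
                RealityView { content in
                    // Scene content (e.g., the stereoscopic image plane) goes here.
                }
                // Translucent overlay layered in front of the RealityView;
                // the post reports that views like this fail to render while
                // opaque views work.
                Text("Overlay")
                    .padding()
                    .glassBackgroundEffect()
            }
        }
    }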
CustomMaterial disable unlit tone mapping
Hi, since iOS 18, UnlitMaterial and ShaderGraphMaterial have an option to disable tone mapping, e.g. via https://developer.apple.com/documentation/realitykit/unlitmaterial/init(applypostprocesstonemap:)

Is it possible to do the same for CustomMaterial? I tried initializing a CustomMaterial from an UnlitMaterial with tone mapping disabled, like so:

    let unlitMat = UnlitMaterial(applyPostProcessToneMap: false)
    let customMaterial = try CustomMaterial(
        from: unlitMat,
        surfaceShader: surfaceShader,
        geometryModifier: geometryModifier
    )

but that does not seem to work. The colors of my texture still look altered in comparison to a plain UnlitMaterial or a ShaderGraphMaterial where it's disabled. Any hints? Thank you!
1 reply · 0 boosts · 123 views · Jun ’25
Eye tracking data access for researchers in the medical field
Hello, esteemed tech developers. I am using the Apple Vision Pro to create an AR assist system for the da Vinci surgical robot in a medical surgical suite, and I would like to capture eye movement data that is consistent across testers. Although the Apple Vision Pro has a superb infrared sensor for monitoring eye movement, Apple does not seem to offer official access to it. (I'm aware of many existing discussions about this, but I was still wondering if there might be an option, particularly for research labs.) Here's my Feedback number: FB16603687
1 reply · 0 boosts · 627 views · Feb ’25
Can developers use spatial images in a third-party app's spatial widget?
Spatial widgets are a new feature of visionOS 26. I noticed that the system Photos app can add a spatial image to its widget, and I wonder if third-party apps can use spatial images or any 3D content in their widgets. I tried using RealityView in a widget and it crashed at runtime. So are spatial images in widgets only supported by the system Photos app, and not available to developers right now?
1 reply · 1 boost · 731 views · Jul ’25
Degraded RoomPlan performance
We have been using RoomPlan in our app for 2+ years. Through a combination of in-app and manual coaching on scanning best practices, most users are able to achieve high-quality scans on a consistent basis.

In recent weeks, however, we have observed an increase in reports of degraded scanning performance, even from veteran users who had not previously encountered issues. The RoomCaptureView overlay is jittery and crooked, and the resulting scan file has significant issues, even for simple, well-lit rectangular rooms. It is difficult to troubleshoot these issues given the number of variables at play, and the overall volume of reports is still relatively low, but we'd appreciate any guidance on known issues or workarounds that could help unblock our users who are being affected by this.

I noticed that this post includes an acknowledgement of FB14454922 and FB15035788. Our issues seem slightly different, as the scans are simply inaccurate and jittery without failing outright. I haven't found any other threads on similar issues.
1 reply · 1 boost · 910 views · 2w
I want to understand the update behavior of `RealityView`.
Here are two code snippets.

    struct RealityViewTestView: View {
        @State private var texts: [String] = []

        var body: some View {
            RealityView { content, attachments in
            } update: { content, attachments in
                for text in texts {
                    if let textEntity = attachments.entity(for: text) {
                        textEntity.position.x = Float.random(in: -0.1...0.1)
                        content.add(textEntity)
                    }
                }
            } attachments: {
                ForEach(texts, id: \.self) { text in
                    Attachment(id: text) {
                        Text(text)
                            .padding()
                            .glassBackgroundEffect()
                    }
                }
            }
            .toolbar {
                ToolbarItem {
                    Button("Add") {
                        texts.append(String(UUID().uuidString.prefix(6)))
                    }
                }
                ToolbarItem {
                    Button("Remove") {
                        texts.remove(at: Int.random(in: 0..<texts.count))
                    }
                }
            }
        }
    }

    struct RealityViewTestView: View {
        @State private var texts: [String] = []
        @State private var entities: [Entity] = []

        var body: some View {
            RealityView { content, attachments in
            } update: { content, attachments in
                // for text in texts {
                //     if let textEntity = attachments.entity(for: text) {
                //         textEntity.position.x = Float.random(in: -0.1...0.1)
                //         content.add(textEntity)
                //     }
                // }
                for entity in entities {
                    content.add(entity)
                }
            } attachments: {
                ForEach(texts, id: \.self) { text in
                    Attachment(id: text) {
                        Text(text)
                            .padding()
                            .glassBackgroundEffect()
                    }
                }
            }
            .toolbar {
                ToolbarItem {
                    Button("Add") {
                        // texts.append(String(UUID().uuidString.prefix(6)))
                        let m = ModelEntity(
                            mesh: .generateSphere(radius: 0.1),
                            materials: [SimpleMaterial(color: .white, isMetallic: false)]
                        )
                        m.position.x = Float.random(in: -0.2...0.2)
                        entities.append(m)
                    }
                }
                ToolbarItem {
                    Button("Remove") {
                        // texts.remove(at: Int.random(in: 0..<texts.count))
                        entities.removeLast()
                    }
                }
            }
        }
    }

Regarding the first snippet: when I remove an element from texts, why is the corresponding entity removed from content automatically? And regarding the second snippet: why does content not automatically remove the corresponding entity? I am very curious. (A sketch of explicit removal follows this post.)
1 reply · 0 boosts · 442 views · Jan ’25
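One plausible reading, offered as an assumption rather than documented behavior: attachment entities are owned by SwiftUI state, while manually created entities are not, so the latter have to be removed explicitly. A minimal sketch of doing that in the update closure (the diffing logic is illustrative):

    import RealityKit
    import SwiftUI

    struct ManualSyncView: View {
        @State private var entities: [Entity] = []

        var body: some View {
            RealityView { content in
                // Initial scene setup goes here.
            } update: { content in
                // Remove entities that are no longer in the data source...
                let stale = content.entities.filter { shown in
                    !entities.contains { $0 === shown }
                }
                for entity in stale {
                    content.remove(entity)
                }
                // ...then add any entities not yet in the scene.
                for entity in entities where entity.parent == nil {
                    content.add(entity)
                }
            }
        }
    }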
Independent gestures for multiple entities in RealityView
Hi, I'm developing an app for the Apple Vision Pro. Inside the app, the user should be able to load objects from the web into the scene and then move them around (dragging and rotating) via gestures.

My question: I'm working with RealityKit and RealityView. I have no issues loading one object and making it interactive by adding a gesture to the entire RealityView via the .gesture() modifier, and I have succeeded in loading multiple objects into the scene. My problem is that I can't figure out how to add gestures to multiple objects independently. I can't use Reality Composer since I'm loading the objects dynamically into the scene. A single gesture on the RealityView doesn't work for multiple objects, since every gesture needs to be targeted to a specific entity and I have multiple entities. I also tried defining a GestureComponent and adding it to every newly loaded entity, but nothing happens, even though my gesture is targeted to every entity that has my GestureComponent. The only solutions I found, like installGestures, are not usable on visionOS / RealityView. (A sketch of a per-entity targeting approach follows this post.)

I also tried following this guide: https://developer.apple.com/documentation/realitykit/transforming-realitykit-entities-with-gestures but it feels incomplete and contradictory. An extension of RealityView is mentioned but never shown, and the guide states that we don't need to store translation values for every entity, yet creates an EntityGestureState.swift file to store those values; that file is then never used, and a GestureStateComponent that was never introduced is used instead, contradicting what was just said about not storing values per entity, since a new instance of it is created for every entity. In any case, following the guide didn't lead me to a working solution.
1 reply · 0 boosts · 338 views · Feb ’25
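For reference, a minimal sketch of one common approach on visionOS: a single drag gesture targeted to any entity, where value.entity identifies whichever entity was hit, so the same modifier serves any number of dynamically loaded objects. The sphere setup is illustrative; the key assumption is that each interactive entity carries an InputTargetComponent and collision shapes:

    import RealityKit
    import SwiftUI

    struct InteractiveSceneView: View {
        var body: some View {
            RealityView { content in
                // Each dynamically loaded entity needs these components
                // to receive gestures.
                let sphere = ModelEntity(mesh: .generateSphere(radius: 0.1))
                sphere.components.set(InputTargetComponent())
                sphere.generateCollisionShapes(recursive: true)
                content.add(sphere)
            }
            .gesture(
                DragGesture()
                    .targetedToAnyEntity()
                    .onChanged { value in
                        // value.entity is the specific entity being dragged,
                        // so each object moves independently.
                        value.entity.position = value.convert(
                            value.location3D, from: .local, to: value.entity.parent!
                        )
                    }
            )
        }
    }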
Please provide access to face tracking blend shapes on visionOS
Apple, please provide access to face tracking blend shapes on visionOS, just like you do on iOS. You have the best eye and face tracking implementation on the market; please let us use it. There is a sizable audience who will buy the headset just for it. I personally know multiple people who are not buying the headset simply because you locked those features out. No raw camera access is needed, just abstracted blend shape values. You will make the headset so much more useful if you do this simple thing.
1 reply · 1 boost · 132 views · Jun ’25
How to use defaultSize with visionOS window restoration?
One of the most common ways to provide a window size in visionOS is the defaultSize scene modifier:

    WindowGroup(id: "someID") {
        SomeView()
    }
    .defaultSize(CGSize(width: 600, height: 600))

Starting in visionOS 26, using this has a side effect. visionOS 26 restores windows that have been locked in place or snapped to surfaces, but if a user has manually adjusted the size of a locked/snapped window, the user's size is only restored in some cases.

Manual resize respected:
- Leaving a room and returning later
- Taking the headset off and putting it back on later

Manual resize NOT respected:
- Device restart: the window is reopened where it was locked, but the size is set back to the values passed to defaultSize, and the manual resizing adjustments the user has made are lost. This is counter to how all other windows and widgets work.

I reported this last month (FB18429638) but haven't heard back on whether this is a bug or intended behavior.

Questions:
- What is the best way to provide a default window size that will only be used when opening new windows, and not during scene restoration?
- Should we try to keep track of window sizes after users adjust them and save that somewhere? (A sketch of that idea follows this post.)
- If this is intended behavior, can someone please update the docs accordingly?
1 reply · 0 boosts · 448 views · Jul ’25
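A minimal sketch of the workaround contemplated in the second question: persist the user-adjusted content size and feed it back into defaultSize on the next launch. The UserDefaults keys and the onGeometryChange-based tracking are illustrative assumptions, not a documented pattern, and the saved size only takes effect at the next app launch:

    import SwiftUI

    // Stands in for the post's window content.
    struct SomeView: View {
        var body: some View { Text("Content") }
    }

    @main
    struct RestoreSizeApp: App {
        var body: some Scene {
            WindowGroup(id: "someID") {
                SomeView()
                    // Track the content size as the user resizes the window
                    // and persist it (illustrative keys).
                    .onGeometryChange(for: CGSize.self) { proxy in
                        proxy.size
                    } action: { newSize in
                        UserDefaults.standard.set(newSize.width, forKey: "savedWindowWidth")
                        UserDefaults.standard.set(newSize.height, forKey: "savedWindowHeight")
                    }
            }
            // Read once at scene creation; applies to newly opened windows.
            .defaultSize(Self.savedSize)
        }

        // Fall back to 600x600 when nothing has been saved yet.
        static var savedSize: CGSize {
            let defaults = UserDefaults.standard
            let w = defaults.double(forKey: "savedWindowWidth")
            let h = defaults.double(forKey: "savedWindowHeight")
            return (w > 0 && h > 0) ? CGSize(width: w, height: h)
                                    : CGSize(width: 600, height: 600)
        }
    }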
WebXR and PSVR2
We got very excited when we saw support for the PSVR2 at WWDC! WebXR is particularly interesting to us, so we got the controllers to give it a try. Unfortunately, they only register as gamepads in the navigator, not as XRInputDevices. We went through the experimental flags and didn't find anything directly related. Is there a flag we missed? If not, when is PSVR2 support planned for WebXR?
1 reply · 5 boosts · 258 views · Jun ’25