In visionOS 2.5, this modifier works perfectly with a LazyVGrid inside a stack in a ScrollView:
.hoverEffect { effect, isActive, _ in
    effect.scaleEffect(isActive ? 1.1 : 1.0)
}
But in visionOS 26 beta 1 the grid does not scroll unless the scaleEffect line is commented out.
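For context, a minimal reproduction of the setup described above might look roughly like this (a sketch; the grid items and sizes are placeholders, not the original code):
import SwiftUI

struct GridHoverTestView: View {
    var body: some View {
        ScrollView {
            LazyVGrid(columns: [GridItem(.adaptive(minimum: 120))]) {
                ForEach(0..<50, id: \.self) { index in
                    Text("Item \(index)")
                        .frame(width: 120, height: 120)
                        .background(.regularMaterial)
                        .hoverEffect { effect, isActive, _ in
                            // Scrolling reportedly stops working in visionOS 26 beta 1
                            // once this scale is applied.
                            effect.scaleEffect(isActive ? 1.1 : 1.0)
                        }
                }
            }
        }
    }
}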
FB17941468
Using Quick Look exits you from both your app and the immersive space. Is there a way to view immersive images within an immersive space?
My iPhone 12 doesn't have enough memory. What should I do?
When I am viewing an immersive space and open a spatial photo in Quick Look, the entire app interface is hidden to show the photo. Is there a memory limit? If the immersive space is not active, the application keeps its interface.
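For reference, a rough sketch of the Quick Look flow being described, assuming the spatial photo is available as a local file URL and using the PreviewApplication entry point from the QuickLook framework:
import QuickLook
import SwiftUI

struct SpatialPhotoButton: View {
    let photoURL: URL  // assumed: local file URL of the spatial photo

    var body: some View {
        Button("View Spatial Photo") {
            // Presents the photo in Quick Look; on visionOS this currently
            // takes over the presentation and hides the app's own UI.
            PreviewApplication.open(urls: [photoURL], selectedURL: photoURL)
        }
    }
}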
Dear all,
I'm using Unity 6.2 beta and Xcode 16.2. I'm creating a simple framework to use the text-to-speech functionality in visionOS from Unity. The framework is written in Swift, and I created an Objective-C wrapper with the following declarations:
...
void _initTTS(int);
...
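For illustration, the Swift side of such a wrapper might expose the C symbol roughly like this (a sketch; the @_cdecl bridging is the point here, while the AVSpeechSynthesizer setup and the function body are assumptions, not the original framework code):
import AVFoundation

// Keeps the synthesizer alive for the lifetime of the framework.
private let synthesizer = AVSpeechSynthesizer()

// Exposes a C-callable symbol matching the Objective-C declaration above,
// so Unity's DllImport can resolve it.
@_cdecl("_initTTS")
public func _initTTS(_ mode: Int32) {
    // Placeholder body; the real initialization logic is not shown in the post.
    _ = synthesizer
    print("TTS initialized with mode \(mode)")
}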
I create the framework, import it in Unity and call the functions in a c# wrapper class. The code is as follows:
public static class TTSPluginManager
{
    [DllImport("TTS_Vision")]
    private static extern void _initTTS(int val);
    ...
    public static void Initialize()
    {
#if UNITY_VISIONOS
        _initTTS(0);
#else
        Debug.LogWarning("NativeTTS.Initialize called on a non-iOS platform. Ignoring.");
#endif
    }
}
I have managed to compile and run the program on the Apple Vision Pro, but I keep getting the following error:
DllNotFoundException: TTS_Vision assembly: type: member:(null)
TTSPluginManager.Initialize () (at Assets/Plugins/TTSPluginManager.cs:33)
LecturePortalManager.OnCreateStory (Ink.Runtime.Story story) (at Assets/AVRLecture/LecturePortalManager.cs:17)
InkLoader.StartStory () (at Assets/AVRLecture/InkLoader.cs:24)
InkLoader.Start () (at Assets/AVRLecture/InkLoader.cs:18)
If I run the generated code from Xcode, I can see the app in the AVP, but I keep getting a loading error:
DllNotFoundException: Unable to load DLL 'TTS_Vision'. Tried the load the following dynamic libraries: Unable to load dynamic library '/TTS_Vision' because of 'Failed to open the requested dynamic library (0x06000000) dlerror() = dlopen(/TTS_Vision, 0x0005): tried: '/TTS_Vision' (no such file)
at TTSPluginManager.Initialize () [0x00000] in <00000000000000000000000000000000>:0
at LecturePortalManager.OnCreateStory (Ink.Runtime.Story story) [0x00000] in <00000000000000000000000000000000>:0
I can see in the generated project that the framework (TTS_Vision) is there, but the path seems wrong. I've tried adding more options to the search paths, with no success...
Any hints or suggestions are much appreciated.
Hello,
I checked the following documentation:
Vision | Apple Developer Documentation
Discover Swift enhancements in the Vision framework - WWDC24 - Videos - Apple Developer
I saw that the Vision framework is available on visionOS.
So I want to know whether it's possible to use the Vision framework on visionOS to track human and animal body poses, or whether there are limits to using it on visionOS.
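For reference, a minimal sketch of the kind of request being asked about, using the long-standing VNDetectHumanBodyPoseRequest API (whether and how well this runs on visionOS is exactly the open question):
import Vision

// Runs a human body pose request on a CGImage and reports the joints found.
func detectBodyPose(in image: CGImage) throws {
    let request = VNDetectHumanBodyPoseRequest()
    let handler = VNImageRequestHandler(cgImage: image, options: [:])
    try handler.perform([request])

    for observation in request.results ?? [] {
        let joints = try observation.recognizedPoints(.all)
        print("Detected \(joints.count) body joints")
    }
    // An analogous VNDetectAnimalBodyPoseRequest exists for animal poses.
}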
Following up on my previous question here: https://developer.apple.com/forums/thread/774262
Having solved the clipping problem, I am now trying to overlay some content in front of the RealityView. However, it looks like any content with transparency does not render in front of the RealityView, while opaque views seem to work; placing content with transparency like glassBackgroundEffect() behind the RealityView in a ZStack causes the entire window to flicker.
Additionally, my SwiftUI attachments placed in front of the stereoscopic image plane are invisible if the user looks at them head-on at 90 degrees. However, if the user looks at them from increasingly oblique angles, the attachments gradually become visible again.
Are these behaviors expected? What is a recommended approach to overlay content in front of a RealityView? Thanks!
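For context, a minimal sketch of the layering described above (assuming a ZStack inside a regular window; the overlay text and the RealityView contents are placeholders):
import SwiftUI
import RealityKit

struct StereoOverlayView: View {
    var body: some View {
        ZStack {
            RealityView { content in
                // The stereoscopic image plane entity would be added here.
            }
            // Transparent overlay layered in front of the RealityView;
            // this is the content that reportedly fails to render.
            Text("Overlay")
                .padding()
                .glassBackgroundEffect()
        }
    }
}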
Hi,
since iOS 18, UnlitMaterial and ShaderGraphMaterial have had the option to disable tone mapping, e.g. via https://developer.apple.com/documentation/realitykit/unlitmaterial/init(applypostprocesstonemap:)
Is it possible to do the same for CustomMaterial? I tried initializing a CustomMaterial based on an UnlitMaterial where tone mapping is disabled, like so:
let unlitMat = UnlitMaterial(applyPostProcessToneMap: false)
let customMaterial = try CustomMaterial(
    from: unlitMat,
    surfaceShader: surfaceShader,
    geometryModifier: geometryModifier
)
but that does not seem to work. The colors of my texture still look altered compared to a plain UnlitMaterial, or to a ShaderGraphMaterial where tone mapping is disabled.
Any hints? Thank you!
I have a visionOS 2 project created in Xcode 16. Since updating to Xcode 26 beta 5, I can't build it any more; every time, the build gets stuck in progress, as shown in the attached screenshot.
I've already tried many ways to fix this, such as clearing the build folder, but none of them work.
MacBook Air M2 / macOS 26 beta 5 / Xcode 26 beta 5
Hello, esteemed tech developer.
I am using the Apple Vision Pro to create an AR assist system for the da Vinci surgical robot in a surgical suite, and would like to capture eye-movement data consistently across testers. Although the Apple Vision Pro has a superb infrared sensor for monitoring eye movement, Apple does not seem to offer official access to it. (I'm aware of many existing discussions about this, but I was still wondering if there might be an option, particularly for research labs.) Here's my FB number: FB16603687
I am trying to run widgets on visionOS 26. Specifically, I am trying to pin them to the simulator room's walls; however, I am unable to do so.
Is this a limitation with the visionOS simulator right now, or am I missing a trick here?
Spatial widgets are a new feature of visionOS 26. I notice the system Photos app can add a spatial image to a widget. I wonder whether third-party apps can use spatial images, or any 3D content, in their widgets. I tried to use RealityView in a widget and it crashed.
So are spatial images in widgets only supported by the system Photos app, and not available to developers right now?
We have been using RoomPlan in our app for 2+ years. Through a combination of in-app and manual coaching on scanning best practices, most users are able to achieve high-quality scans on a consistent basis.
In recent weeks, however, we have observed an increase in reports of degraded scanning performance, even from veteran users who had not previously encountered issues. The RoomCaptureView overlay is jittery and crooked, and the resulting scan file has significant issues, even for simple, well-lit rectangular rooms.
It is difficult to troubleshoot these issues given the number of variables at play, and the overall volume of reports is still relatively low, but we'd appreciate any guidance on known issues or workarounds that could help unblock our users who are being affected by this. I noticed that this post includes an acknowledgement of FB14454922 and FB15035788. Our issues seem slightly different as the scans are simply inaccurate and jittery without failing outright. I haven't found any other threads on similar issues.
Here are the code snippets.
struct RealityViewTestView: View {
    @State private var texts: [String] = []
    var body: some View {
        RealityView { content, attachments in
        } update: { content, attachments in
            for text in texts {
                if let textEntity = attachments.entity(for: text) {
                    textEntity.position.x = Float.random(in: -0.1...0.1)
                    content.add(textEntity)
                }
            }
        } attachments: {
            ForEach(texts, id: \.self) { text in
                Attachment(id: text) {
                    Text(text)
                        .padding()
                        .glassBackgroundEffect()
                }
            }
        }
        .toolbar {
            ToolbarItem {
                Button("Add") {
                    texts.append(String(UUID().uuidString.prefix(6)))
                }
            }
            ToolbarItem {
                Button("Remove") {
                    texts.remove(at: Int.random(in: 0..<texts.count))
                }
            }
        }
    }
}
struct RealityViewTestView: View {
    @State private var texts: [String] = []
    @State private var entities: [Entity] = []
    var body: some View {
        RealityView { content, attachments in
        } update: { content, attachments in
            // for text in texts {
            //     if let textEntity = attachments.entity(for: text) {
            //         textEntity.position.x = Float.random(in: -0.1...0.1)
            //         content.add(textEntity)
            //     }
            // }
            for entity in entities {
                content.add(entity)
            }
        } attachments: {
            ForEach(texts, id: \.self) { text in
                Attachment(id: text) {
                    Text(text)
                        .padding()
                        .glassBackgroundEffect()
                }
            }
        }
        .toolbar {
            ToolbarItem {
                Button("Add") {
                    //texts.append(String(UUID().uuidString.prefix(6)))
                    let m = ModelEntity(mesh: .generateSphere(radius: 0.1), materials: [SimpleMaterial(color: .white, isMetallic: false)])
                    m.position.x = Float.random(in: -0.2...0.2)
                    entities.append(m)
                }
            }
            ToolbarItem {
                Button("Remove") {
                    //texts.remove(at: Int.random(in: 0..<texts.count))
                    entities.removeLast()
                }
            }
        }
    }
}
Regarding the first code snippet: when I remove an element from texts, why does content automatically remove the corresponding entity? And in the second code snippet, content does not automatically remove the corresponding entity. I am very curious.
Hi,
I'm developing an app for the Apple Vision Pro.
Inside the app the user should be able to load objects from the web into the scene and then be able to move them around (dragging and rotating) via gestures.
My question:
I'm working with RealityKit and use RealityView.
I have no issues loading in one object and making it interactive by adding gestures to the entire RealityView via the .gestures() function.
Also I succeeded in loading multiple objects into the scene.
My problem is that I can't figure out how to add my gestures to multiple objects independently.
I can't use Reality composer since I'm loading the objects dynamically into the scene.
Using .gestures() doesn't work for multiple objects since every gesture needs to be targeted to a specific entity but I have multiple entities.
I also tried defining a GestureComponent and adding it to every newly loaded entity, but that doesn't seem to work: nothing happens, even though my gesture is targeted to every entity that has my GestureComponent.
I only found solutions that are not usable with visionOS / RealityView, like installGestures.
Also I tried following this guide: https://developer.apple.com/documentation/realitykit/transforming-realitykit-entities-with-gestures
But I feel like there are things missing or contradictory. An extension of RealityView is mentioned but never shown. The guide states that we don't need to store the translation values for every entity, yet it creates an EntityGestureState.swift file to store those values, which is then never used; instead, a GestureStateComponent is used that was never introduced, and that contradicts what was just said about not storing values per entity, because a new instance of it is created for every entity. In any case, following the guide didn't lead me to a working solution.
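For what it's worth, here is a rough sketch of one way to target a single drag gesture at whichever entity the user touches, rather than at one fixed entity (the DraggableComponent name is made up for illustration; each loaded entity is assumed to carry collision and input-target components so it can receive input):
import SwiftUI
import RealityKit

// Marker component attached to every dynamically loaded entity
// that should respond to gestures (hypothetical name).
struct DraggableComponent: Component {}

struct DynamicObjectsView: View {
    var body: some View {
        RealityView { content in
            // Dynamically loaded entities are added here, each with
            // DraggableComponent, CollisionComponent, and InputTargetComponent.
        }
        .gesture(
            DragGesture()
                .targetedToAnyEntity()
                .onChanged { value in
                    let entity = value.entity
                    guard entity.components.has(DraggableComponent.self),
                          let parent = entity.parent else { return }
                    // Follow the drag by converting the gesture location
                    // into the entity's parent coordinate space.
                    entity.position = value.convert(value.location3D,
                                                    from: .local,
                                                    to: parent)
                }
        )
    }
}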
Apple, please provide access to face tracking blend shapes on visionOS, just like you do on iOS.
You have the best eye and face tracking implementation on the market, please let us use it. There is a sizable audience who will buy the headset just for it.
I personally know multiple people who are not buying the headset simply because you locked those features out.
No raw camera access is needed, just abstracted blend shape values. You will make the headset so much more useful if you do this simple thing.
One of the most common ways to provide a window size in visionOS is to use the defaultSize scene modifier.
WindowGroup(id: "someID") {
    SomeView()
}
.defaultSize(CGSize(width: 600, height: 600))
Starting in visionOS 26, using this has a side effect. visionOS 26 will restore windows that have been locked in place or snapped to surfaces. If a user has manually adjusted the size of a locked/snapped window, the user's size is only restored in some cases.
Manual resize respected:
Leaving a room and returning later
Taking the headset off and putting it back on later
Manual resize NOT respected:
Device restart. In this case, the window is reopened where it was locked, but the size is set back to the values passed to defaultSize. The manual resizing adjustments the user has made are lost. This is counter to how all other windows and widgets work.
I reported this last month (FB18429638), but haven't heard back if this is a bug or intended behavior.
Questions
What is the best way to provide a default window size that will only be used when opening new windows, and not used during scene restoration?
Should we try to keep track of window sizes after users adjust them and save them somewhere ourselves? (A rough sketch of this idea follows below.)
If this is intended behavior, can someone please update the docs accordingly?
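On the second question, a possible workaround might look roughly like this (a sketch only, not verified against visionOS 26 scene restoration; the AppStorage keys and the GeometryReader-based tracking are assumptions):
import SwiftUI

struct PersistentSizeWindow: Scene {
    // Last size the user resized the window to (hypothetical storage keys).
    @AppStorage("someID.width") private var savedWidth: Double = 600
    @AppStorage("someID.height") private var savedHeight: Double = 600

    var body: some Scene {
        WindowGroup(id: "someID") {
            GeometryReader { proxy in
                SomeView()
                    .onChange(of: proxy.size) { _, newSize in
                        // Remember the user's manual resize.
                        savedWidth = newSize.width
                        savedHeight = newSize.height
                    }
            }
        }
        // Seed newly opened windows with the remembered size.
        .defaultSize(CGSize(width: savedWidth, height: savedHeight))
    }
}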
The object tracking feature has been announced for visionOS 2.0+. Is there any support for it in Unity PolySpatial, or is it only available via Swift and Xcode?
Entity.animate() makes entity animation much easier, but in many cases I want to interrupt the animation in progress in response to a gesture. I couldn't find any way to do this; I even tried entity.stopAllAnimations(), but I still have to wait until Entity.animate() completes.
iOS 26 / visionOS 26
We got very excited when we saw support for the PSVR2 controllers at WWDC!
Particularly interesting is WebXR to us, so we got the controllers to give it a try.
Unfortunately, they only register as gamepads on the navigator object, but not as XRInputDevices.
We went through the experimental feature flags and didn't find anything directly related. Is there a flag we missed? If not, when is PSVR2 support planned for WebXR?