On Xcode 26 and visionOS 26, Apple provides observable properties for Entity, so we can easily interact with an Entity between a RealityKit scene and SwiftUI. But there is an issue:
It's fine to observe an Entity's position and scale properties from a Slider, but the orientation property can't be observed in a Slider.
MacBook Air M2 / Xcode 26 beta6
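To make the setup concrete, here is a minimal sketch of binding a Slider through an entity's transform; the binding through position works, and an equivalent binding through a component of orientation is what fails to update:

import SwiftUI
import RealityKit

// A minimal sketch: a Slider bound through an Entity's transform.
// Binding through position (as here) updates fine; an equivalent
// binding through orientation is what reportedly breaks.
struct EntityPositionSlider: View {
    let entity: Entity

    var body: some View {
        Slider(
            value: Binding(
                get: { entity.position.x },
                set: { entity.position.x = $0 }
            ),
            in: -1...1
        )
    }
}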
Hi,
I created an app using the iOS Object Capture API, which works only on LiDAR-enabled phones. It's a limitation of the API provided by Apple itself.
I submitted the app for review, but it is getting rejected (twice) saying it doesn't work on non-Pro models. Even though I explained that capturing needs LiDAR and is supported only on Pro models, it still gets rejected after testing on non-Pro models. Is there a way out?
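One mitigation that may help with review, as a hedged sketch: gate the capture flow at runtime on the documented support check and show a graceful fallback on unsupported devices, so the app still "works" everywhere:

import SwiftUI
import RealityKit

// A minimal sketch: gate the capture flow at runtime so the app still
// presents a meaningful fallback on non-LiDAR devices during review.
struct CaptureGateView: View {
    var body: some View {
        if ObjectCaptureSession.isSupported {
            Text("Start capture") // entry point to the real capture flow
        } else {
            Text("Object capture requires a Pro device with LiDAR.")
        }
    }
}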
It's much easier to use a custom material to bridge a Metal shader onto a RealityKit model than to do the same thing with LowLevelTexture.
Why doesn't visionOS support this material?
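For reference, a minimal sketch of the iOS/macOS CustomMaterial path the post is comparing against (not available on visionOS); "mySurfaceShader" is a hypothetical Metal function name in the app's default library:

import Metal
import RealityKit

// A minimal sketch of bridging a Metal shader via CustomMaterial on
// iOS/macOS. "mySurfaceShader" is a hypothetical function name.
func makeBridgedMaterial() throws -> CustomMaterial {
    let device = MTLCreateSystemDefaultDevice()!
    let library = device.makeDefaultLibrary()!
    let surfaceShader = CustomMaterial.SurfaceShader(named: "mySurfaceShader",
                                                     in: library)
    return try CustomMaterial(surfaceShader: surfaceShader,
                              lightingModel: .lit)
}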
Greetings. I am having this issue with a Unity PolySpatial visionOS app.
We have our main Bounded Volume for our app.
We have other Native UI windows that appear when we interact with objects in our Bounded Volume.
If a user closes our main Bounded Volume, sometimes it quits the app and sometimes it doesn't.
If we go back to the Home Screen and reopen the app, our main Bounded Volume doesn't always appear; just the native UI windows we left open are visible. But we can sometimes still hear sounds that are playing in our Bounded Volume.
What solutions are there to make sure our Bounded Volume always appears when the app is open?
Spatial widgets are a new feature of visionOS 26. I noticed the system Photos app can add a spatial image to its widget. I wonder if third-party apps can use spatial images or any 3D content in their widgets? I tried to use RealityView in a widget and it crashed.
So are spatial images in widgets only supported by the system Photos app, and not available to developers for now?
The initial startup of visionOS 26 after install is glacially slow.
Hi team, I don't know why this error has popped up. Any suggestions, please?
Thanks
Zippy Games
I'm developing an app in which I need to render pictures and display some models in a RealityView. I want to set up a camera, capture the virtual content through that camera, and save it as an image.
When I was developing a widget on the visionOS 26 beta, I found that it could not work normally when running on a real visionOS device; an error would appear:
Please adopt container background api
It is worth mentioning that this problem does not occur in the visionOS simulator.
Does anyone know the reason and solution, or whether this is a visionOS bug that needs a Feedback report? Thank you!
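For reference, this is what adopting the API the error names looks like; a minimal sketch where the widget content and background are placeholders:

import SwiftUI
import WidgetKit

// A minimal sketch of adopting the container background API that the
// error message asks for; the view content here is a placeholder.
struct MyWidgetEntryView: View {
    var body: some View {
        Text("Hello, widget")
            .containerBackground(for: .widget) {
                Color.clear
            }
    }
}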
Hi, we would like to create something where you can open multiple volumetric windows and place them in a room. Our biggest issue is that we want these windows to be persistent, so that when I close and reopen the app the windows are in the same positions. We can't use immersive spaces because we also want to keep access to the Shared Space.
Is it possible to do that with the current features and capabilities? If yes, do you have any advice on how we can achieve it?
The alternative: is it possible to open the virtual display in an immersive space, or could we implement our own virtual display?
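For the multiple-volumes part, a minimal sketch of a data-driven volumetric WindowGroup (ModelView and its contents are hypothetical; note that window placement is restored by the system rather than set from app code):

import SwiftUI

// A minimal sketch: a data-driven volumetric WindowGroup so several
// volumes can be opened side by side. `ModelView` is a hypothetical
// placeholder; the system, not the app, restores window placement.
@main
struct MultiVolumeApp: App {
    var body: some Scene {
        WindowGroup(id: "volume", for: Int.self) { $modelID in
            ModelView(modelID: modelID ?? 0)
        }
        .windowStyle(.volumetric)
        .defaultSize(width: 0.4, height: 0.4, depth: 0.4, in: .meters)
    }
}

struct ModelView: View {
    let modelID: Int
    var body: some View {
        Text("Volume \(modelID)") // placeholder 3D content
    }
}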
Hi,
I have a question.
In visionOS, when a user looks at a button and performs a pinch gesture with their index finger and thumb, the button responds. By default, this works with both the left and right hands. However, I want to disable the pinch gesture when performed with the left hand while keeping it functional with the right hand.
I understand that the system settings allow users to configure input for both hands, the left hand only, or the right hand only. However, I would like to control this behavior within the app itself.
Is this possible?
Best regards.
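One direction that may be worth trying, as a hedged sketch: SpatialEventGesture exposes a per-event chirality, which could be used to act only on right-hand pinches for a specific view (this replaces the default tap handling rather than filtering it system-wide):

import SwiftUI

// A minimal sketch, assuming SpatialEventGesture's per-event
// `chirality` is populated for pinches: act only on right-hand events.
struct RightHandOnlyButton: View {
    var body: some View {
        Text("Pinch me (right hand only)")
            .padding()
            .glassBackgroundEffect()
            .gesture(
                SpatialEventGesture()
                    .onEnded { events in
                        for event in events where event.chirality == .right {
                            print("Right-hand pinch")
                        }
                    }
            )
    }
}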
Hello, I've pre-ordered the Logitech Muse in hopes of developing with it, but I have yet to find any documentation on the capabilities it will have or any APIs that will be available to take advantage of the Muse. Is anyone aware of what might become available?
Thank you in advance.
Are there any changes to RotationSystem: System and RotationComponent: Component that I should be aware of, so I can tell whether I need to update my use of them in my visionOS app?
I’m working on a Vision Pro app using Metal and need to implement multi-pass rendering. Specifically, I want to render intermediate results to a texture, then use that texture in a second pass for post-processing before presenting the final output.
What’s the best approach in visionOS? Should I use multiple render passes in a single command buffer or separate command buffers? Any insights on efficiently handling this in RealityKit or Metal?
Thanks!
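To make the single-command-buffer option concrete, here is a minimal sketch; the pass descriptors, pipeline states, intermediate texture, and draw calls are all assumed to be set up elsewhere:

import Metal

// A minimal sketch of two passes sharing one command buffer: pass 1
// renders the scene into an offscreen texture, pass 2 samples that
// texture for post-processing. All parameters are created elsewhere.
func renderFrame(queue: MTLCommandQueue,
                 scenePass: MTLRenderPassDescriptor,  // targets the intermediate texture
                 postPass: MTLRenderPassDescriptor,   // targets the final output
                 intermediate: MTLTexture,
                 scenePipeline: MTLRenderPipelineState,
                 postPipeline: MTLRenderPipelineState) {
    guard let commandBuffer = queue.makeCommandBuffer() else { return }

    // Pass 1: render the scene into the intermediate texture.
    if let encoder = commandBuffer.makeRenderCommandEncoder(descriptor: scenePass) {
        encoder.setRenderPipelineState(scenePipeline)
        // ... encode scene draw calls ...
        encoder.endEncoding()
    }

    // Pass 2: post-process, sampling the intermediate texture.
    if let encoder = commandBuffer.makeRenderCommandEncoder(descriptor: postPass) {
        encoder.setRenderPipelineState(postPipeline)
        encoder.setFragmentTexture(intermediate, index: 0)
        // ... encode a full-screen quad draw ...
        encoder.endEncoding()
    }

    commandBuffer.commit()
}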
So, I was trying to animate a single bone using FromToByAnimation, but when I start the animation, the model instead plays the full-body animation stored in availableAnimations.
If I don't run testAnimation, nothing happens.
If I run testAnimation, I see the same animation as if I had called
entity.playAnimation(entity.availableAnimations[0], ...)
Here's the full code I use to animate a single bone:
func testAnimation() {
    guard let jawAnim = jawAnimation(mouthOpen: 0.4) else {
        print("Failed to create jawAnim")
        return
    }
    guard let creature, let animResource = try? AnimationResource.generate(with: jawAnim) else { return }
    let controller = creature.playAnimation(animResource, transitionDuration: 0.02, startsPaused: false)
    print("controller: \(controller)")
}

func jawAnimation(mouthOpen: Float) -> FromToByAnimation<JointTransforms>? {
    guard let basePose else { return nil }
    guard let index = basePose.jointNames.firstIndex(of: jawBoneName) else {
        print("Target joint \(self.jawBoneName) not found in default pose joint names")
        return nil
    }

    // Start from the default pose and rotate only the jaw joint.
    let fromTransforms = basePose.jointTransforms
    let baseJawTransform = fromTransforms[index]
    let maxAngle: Float = 40
    let angle: Float = maxAngle * mouthOpen * (.pi / 180)
    let extraRot = simd_quatf(angle: angle, axis: simd_float3(x: 0, y: 0, z: 1))

    var toTransforms = basePose.jointTransforms
    toTransforms[index] = Transform(
        scale: baseJawTransform.scale * 2,
        rotation: baseJawTransform.rotation * extraRot,
        translation: baseJawTransform.translation
    )

    let fromToBy = FromToByAnimation<JointTransforms>(
        jointNames: basePose.jointNames,
        name: "jaw-anim",
        from: fromTransforms,
        to: toTransforms,
        duration: 0.1,
        bindTarget: .jointTransforms,
        repeatMode: .none
    )
    return fromToBy
}
PS: I can confirm that I can set this bone to a specific position if I use
guard let index = newPose.jointNames.firstIndex(of: boneName) ...
let baseTransform = basePose.jointTransforms[index]
newPose.jointTransforms[index] = Transform(
    scale: baseTransform.scale,
    rotation: baseTransform.rotation * extraRot,
    translation: baseTransform.translation
)
skeletalComponent.poses.default = newPose
creatureMeshEntity.components.set(skeletalComponent)
This works for manually setting the bone position, so jawBoneName and the joint transformation can't be that wrong.
If I trigger the Apple rating modal in an Immersive Space, it appears on the ground at (0, 0, 0). I need it to appear in front of the user, like the push notification permission prompt and other permission requests do.
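One workaround worth trying, as a hedged sketch: request the review from a regular window scene instead of from inside the immersive space, on the assumption that the prompt anchors to the scene that requests it:

import SwiftUI
import StoreKit

// A minimal sketch: trigger the rating prompt from a normal window,
// assuming the prompt anchors to the requesting scene.
struct ReviewButton: View {
    @Environment(\.requestReview) private var requestReview

    var body: some View {
        Button("Rate this app") {
            requestReview()
        }
    }
}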
I would like to present info from a three.js-based web app as a 3D model in a volumetric window. Is it possible to do this in a similar manner to loading a web page in a WKWebView?
I'm trying to add a feature to my app to allow a user to import items from other apps, like Safari, via the share sheet.
I've done this many times on iOS/iPadOS easily with a Share Extension. From what I can tell, Xcode tells me share extensions are not available on visionOS, though my experience on device tells me differently (it seems Reminders, Notes & more implement them somehow). I was finally able to get it working on device only... but I can now no longer test in the simulator, and I have not found a way to distribute this app.
When attempting to run on the simulator, I get this issue:
Please try again later. Appex bundle at /Users/jason/Library/Developer/CoreSimulator/Devices/09A70160-4F4F-4F5E-B679-F6F7D876D7EF/data/Library/Caches/com.apple.mobile.installd.staging/temp.6OAEZp/extracted/LaunchBar.app/PlugIns/LaunchBarShareExtension.appex with id co.swiftfox.LaunchBar.ShareExtension specifies a value (com.apple.share-services) for the NSExtensionPointIdentifier key in the NSExtension dictionary in its Info.plist that does not correspond to a known extension point.
When trying to archive and upload to TestFlight, I get this similar error:
Invalid Info.plist value. The value for the key 'DTPlatformName' in bundle LaunchBar.app/PlugIns/LaunchBarShareExtension.appex is invalid. (ID: 207610c7-b7e1-48be-959b-22a43cd32d16)
The app is for visionOS only, which I'm thinking might be the problem? The share extension is "Designed for iPhone" and requires me to include iPhone as a run destination. In the worst case I can build an iPhone UI for the app, but I'd rather not, as it is very specific to visionOS.
Has anyone successfully launched a share extension on a visionOS only app? I have an iPad app with a share extension that shows up fine on visionOS, but the issue seems to be specifically with visionOS only apps.
I want to step into a portal world. I know PortalCrossingComponent can make an entity cross a portal, but how can I make the device (the wearer) cross into the portal world?
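For reference, a minimal sketch of the entity-level crossing setup the post refers to, assuming the visionOS 2 portal API with plane-based clipping and crossing modes; how to carry the wearer across is the open question:

import RealityKit

// A minimal sketch, assuming the visionOS 2 portal-crossing API:
// a world entity shown through a portal plane, crossing enabled,
// and PortalCrossingComponent on the entity that may pass through.
func makePortalSetup() -> (portal: Entity, world: Entity) {
    let world = Entity()
    world.components.set(WorldComponent())

    let portal = Entity()
    portal.components.set(ModelComponent(
        mesh: .generatePlane(width: 1, height: 1),
        materials: [PortalMaterial()]
    ))
    portal.components.set(PortalComponent(
        target: world,
        clippingMode: .plane(.positiveZ),
        crossingMode: .plane(.positiveZ)
    ))

    let crosser = ModelEntity(mesh: .generateSphere(radius: 0.1))
    crosser.components.set(PortalCrossingComponent())
    world.addChild(crosser)

    return (portal, world)
}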
I was trying to use a local Wi-Fi router to connect Vision Pro and Mac for development. From Unity, the host on Vision Pro wouldn't open; from Xcode I can see the Vision Pro paired, but there's no device listed by the Run button... Thanks for any ideas! /Ruiying