Hi team, I'm not sure why this error has popped up. Any suggestions, please?
Thanks
Zippy Games
Discuss spatial computing on Apple platforms and how to design and build an entirely new universe of apps and games for Apple Vision Pro.
I have recently started testing ARKit on an iPhone 16 Pro and I have noticed that AutoFocus on this device reacts much more slowly than on other devices. For example, if I point the camera at a close object, AutoFocus takes 4-5 seconds to stabilize; the focal length is adjusted very slowly. In some cases (although this is rare) AutoFocus seems almost stuck and requires a bit of device movement to trigger.
This is quite problematic when using some ARKit features like Image and Object detection as the detection algorithms struggle with out-of-focus images.
This problem is limited to ARKit. AutoFocus is significantly more responsive when the standard AVFoundation Camera API is used.
This behavior is easy to reproduce with any of the ARKit samples like https://developer.apple.com/documentation/arkit/arkit_in_ios/content_anchors/tracking_and_visualizing_planes
Is anybody else experiencing this problem?
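For reference, a minimal sketch of the kind of session setup this shows up with (arView and referenceImages are placeholders, not from the original post):

// Standard world-tracking configuration; isAutoFocusEnabled defaults to true.
let configuration = ARWorldTrackingConfiguration()
configuration.isAutoFocusEnabled = true
// Image detection is the feature that suffers most when frames stay out of focus.
configuration.detectionImages = referenceImages
arView.session.run(configuration)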
Looking for help on getting "On Tap" to work inside RCP for my AVP project. I can get it to work when using "on added to scene", but if I switch to "on tap", the audio will not play when attaching the audio to an entity in my scene. I'm using the same entity for the tap gesture that the audio is using for the emitter. Here is my workflow for the "on added to scene" setup that works correctly, to help troubleshoot my non-working "on tap".
Behaviors: "on added to scene". action - timeline
Input target: check mark enabled, allowed all
Collision set to default
Audio library: source mp3 file
Channel Audio: resource mp3 file above
Timeline: Play Audio with mp3 file added
This setup in RCP allows my AVP project to launch correctly with audio "on added to scene". But when switching behaviors to "on tap", the audio will no longer play and I cannot figure out why. I've tried several different options and nothing works. Please help!
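For comparison, a hedged sketch of the SwiftUI side that usually accompanies an RCP "Tap" trigger: the scene's tap gesture is forwarded to the entity's behaviors with applyTapForBehaviors(). The scene name "Scene" is a placeholder, and the tapped entity still needs the Input Target and Collision components described above.

RealityView { content in
    if let scene = try? await Entity(named: "Scene", in: realityKitContentBundle) {
        content.add(scene)
    }
}
.gesture(
    TapGesture()
        .targetedToAnyEntity()
        .onEnded { value in
            // Forward the tap to the entity's RCP behaviors (the "on tap" trigger).
            _ = value.entity.applyTapForBehaviors()
        }
)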
Topic: Spatial Computing
SubTopic: Reality Composer Pro
Tags: Audio, USDZ, Reality Composer Pro, visionOS
Hi Apple engineers,
I'm currently working on an app that uses the incoming microphone audio and gives visual feedback to the user about the incoming audio.
I would like to use Reality Composer Pro's Developer Capture to get a high quality recording of the app and its use cases for the App Store — but any time I have an in-progress capture, my app stops receiving the incoming audio. It almost seems as if the microphone audio is getting 'hijacked' during the screen capture, which prevents me from demonstrating the app's core features.
Could you please advise on how to proceed?
Topic: Spatial Computing
SubTopic: Reality Composer Pro
I want to create a screenshot (a static image) of the current view on the Apple Vision Pro from code in visionOS. Unfortunately, I currently can't find a way to achieve this. The only option I've found so far is through Reality Composer Pro. However, since I want to accomplish this directly in code, that approach is not an option for me.
Hi,
I'm trying to make a 360 stereo viewer, and I have made a ShaderGraphMaterial in Reality Composer Pro.
I'm trying to use that material on an inverted sphere which is generated in Swift.
When I try to attach the material I get this error: "Type of expression is ambiguous without a type annotation".
Here is the code (sorry, I'm a noob =) ):
import SwiftUI
import RealityKit
import RealityKitContent
import PhotosUI
struct ImmersiveView: View {
    @Environment(AppModel.self) var appModel

    var body: some View {
        RealityView { content in
            // Add the initial RealityKit content
            guard let skyBoxEntity = await createSkybox() else {
                return
            }
            content.add(skyBoxEntity)
        }
    }
}

private func createSkybox() async -> Entity? {
    var matX = try? await ShaderGraphMaterial(named: "/Root/Mat_Stereo360",
                                               from: "360Stereo.usda",
                                               in: realityKitContentBundle)
    let sphere = await MeshResource.generateSphere(radius: 1000)
    let entity = await Entity()
    entity.components.set(ModelComponent(mesh: sphere, materials: [matX])) // ERROR HERE: Type of expression is ambiguous without a type annotation
    //entity.scale *= .init(x: -1, y: 1, z: 1)
    return entity
}
I hope someone can help me =)
Best regards,
Kim
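A likely cause of the ambiguity is that try? makes matX an optional ShaderGraphMaterial, so [matX] does not match the non-optional materials array. A minimal sketch of one way to unwrap it, keeping the same material path and bundle:

private func createSkybox() async -> Entity? {
    // Unwrap the material first so the materials array has a non-optional element type.
    guard let material = try? await ShaderGraphMaterial(named: "/Root/Mat_Stereo360",
                                                        from: "360Stereo.usda",
                                                        in: realityKitContentBundle) else {
        return nil
    }
    let sphere = await MeshResource.generateSphere(radius: 1000)
    let entity = await Entity()
    entity.components.set(ModelComponent(mesh: sphere, materials: [material]))
    // Invert the sphere so the material is visible from the inside.
    entity.scale *= .init(x: -1, y: 1, z: 1)
    return entity
}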
In visionOS, once an immersive space is opened, the background color is solid black. Is it possible to make this background transparent?
FYI, the immersive spaces in visionOS use Compositor Services for drawing 3D content.
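One direction to explore, sketched under the assumption that the immersive space is declared in SwiftUI: selecting the .mixed immersion style keeps passthrough visible around the rendered content instead of a black backdrop. For a Compositor Services layer the renderer must also leave uncovered pixels transparent. The space id and content view below are placeholders:

ImmersiveSpace(id: "Immersive") {
    // Placeholder content view; for Metal rendering this would wrap a CompositorLayer.
    ImmersiveContentView()
}
.immersionStyle(selection: .constant(.mixed), in: .mixed)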
Hi,
We are trying to port our Unity app from other XR devices to Vision Pro. Thus it's way easier for us to use the Metal rendering layer, fully immersive. And to stay true to the platform, we want to keep the gaze/pinch interaction system.
But we just noticed that, unlike PolySpatial XR apps, visionOS XR in Metal does not provide gaze info unless the user is actively pinching, which prevents any attempt to give visual feedback on what they are looking at (buttons, etc.).
Is this planned on Apple's roadmap?
Thanks
If I resize the window on Vision Pro, the text field and button do not update to match the new size.
It works on some screens but not on others.
Please refer to the screenshot below for reference.
I’m working on an iOS app that needs to measure the area of planes or surfaces, like the length and width of objects, just like the Apple Measure app does. I’ve been exploring ARKit, but I’m curious if there are any APIs or techniques that can help automate the process of detecting and measuring planes.
Specifically, I'm looking for a way to automatically detect and measure planes (e.g., from a top-down view), for example measuring a box's width and length. I have attached a screenshot and a video of the Apple Measure app doing it.
Does Apple provide any tools or APIs for this, or are there any best practices I should know about? I’d love to hear from anyone who’s tackled something similar.
Video: https://drive.google.com/file/d/1BxM7fIbFxsCsYwY7w8ZxIeq_4WTGkkwA/view?usp=drive_link
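A minimal sketch of the plane-detection side in plain ARKit, assuming a world-tracking session: detected planes report their size through ARPlaneAnchor.planeExtent. The class and method names below are placeholders, not an Apple sample:

import ARKit

final class PlaneMeasurer: NSObject, ARSessionDelegate {
    let session = ARSession()

    func start() {
        let configuration = ARWorldTrackingConfiguration()
        configuration.planeDetection = [.horizontal, .vertical]
        session.delegate = self
        session.run(configuration)
    }

    func session(_ session: ARSession, didUpdate anchors: [ARAnchor]) {
        for case let plane as ARPlaneAnchor in anchors {
            // Width and height of the detected plane, in meters.
            print("Plane \(plane.identifier): \(plane.planeExtent.width) x \(plane.planeExtent.height)")
        }
    }
}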
We use Unity 6 + visionOS 2.2 to develop an MR application.
App Mode: RealityKit with PolySpatial.
In actual testing, we found that when we move more than 80-100 meters from the application's starting position, the current position is reset to Vector.zero, which makes the application experience very poor. Is anyone else experiencing the same problem? Is there a solution?
Topic: Spatial Computing
SubTopic: General
Hi,
On visionOS, we can rely on RotateGesture3D to manage entity rotation. With the constrainedToAxis parameter we can even authorize rotation only on the x, y, or z axis, or on combinations of them.
What I want to know is whether it is possible to constrain the rotation to an axis automatically.
Let me explain: the functionality I would like to implement is to constrain the rotation to a single axis only once the user has started their gesture. The user's initial gesture should tell us which axis they want to rotate on.
This would be equivalent to activating a constraint automatically on one of the axes, as if we were defining the gesture on one of the axes.
RotateGesture3D(constrainedToAxis: .x)
RotateGesture3D(constrainedToAxis: .y)
RotateGesture3D(constrainedToAxis: .z)
Is it possible to do this?
If so, what would be the best way to do it?
A code example would be greatly appreciated.
Regards
Tof
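I'm not aware of a built-in way, but here is a hedged sketch of one approach: start with an unconstrained RotateGesture3D, latch onto the dominant axis of the initial motion, then apply only that axis to the gestured entity. The 5-degree threshold and the modifier name are arbitrary choices:

import SwiftUI
import RealityKit
import Spatial

struct AxisLockedRotation: ViewModifier {
    // The axis chosen from the user's initial motion; nil until the gesture commits.
    @State private var lockedAxis: RotationAxis3D?

    func body(content: Content) -> some View {
        content.gesture(
            RotateGesture3D()
                .targetedToAnyEntity()
                .onChanged { value in
                    let rotation = value.rotation
                    // Latch the dominant axis once the gesture is clearly under way.
                    if lockedAxis == nil, abs(rotation.angle.degrees) > 5 {
                        let axis = rotation.axis
                        let (x, y, z) = (abs(axis.x), abs(axis.y), abs(axis.z))
                        if x >= y && x >= z {
                            lockedAxis = .x
                        } else if y >= z {
                            lockedAxis = .y
                        } else {
                            lockedAxis = .z
                        }
                    }
                    guard let locked = lockedAxis else { return }
                    // Apply only the rotation angle, constrained to the locked axis.
                    let angle = Float(rotation.angle.radians)
                    let axisVector = SIMD3<Float>(Float(locked.x), Float(locked.y), Float(locked.z))
                    value.entity.orientation = simd_quatf(angle: angle, axis: axisVector)
                }
                .onEnded { _ in lockedAxis = nil }
        )
    }
}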
PLATFORM AND VERSION
visionOS
Development environment: Xcode 16.2, macOS 15.2
Run-time configuration: visionOS 2.3 (on a real device, not the simulator)
Please, someone confirm I'm not crazy and that this issue is actually out of my control.
I spent hours trying to fix my app and running profiles because I thought it was a performance issue in my app. I finally considered the chance that it was an issue with the API itself and made a sample app to isolate the problem, and it still existed there. The issue: when a model entity moves around in a full space that was launched while the system environment immersion was turned up, the entity looks very choppy as it moves.
If you take off the headset while still in the space and put it back on, this fixes it and the entity then moves smoothly as it should. Alternatively, you can leave the space and turn the system environment immersion all the way down before launching the full space again; this also makes the entity move smoothly. If you launch a space with the mixed immersion style instead of the full style, the issue never arises. The issue only arises if you launch the space with either the full or progressive style while the system immersion level is turned up.
STEPS TO REPRODUCE
https://github.com/nathan-707/ChoppyEntitySample
Open my test project; it's a small, modified visionOS project template that shows the issue clearly.
Otherwise:
create an immersive space with either the full or progressive immersion style.
set up an entity in kinematic mode and apply a velocity to it so that it passes over your head when the space appears (see the sketch after these steps).
if you opened the space while the Apple Vision Pro's system environment immersion was turned up, the entity will look choppy.
if you take the headset off while in the same space and put it back on, the issue is fixed and the motion looks smooth.
alternatively, if you open the space with the system immersion environment all the way down, you will also not run into the issue. Again, the issue also does not happen if the space launched is in the mixed style.
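For context, a hedged sketch of the kind of entity setup described in the steps above; this would go inside the RealityView make closure, and the sizes and velocity are arbitrary:

// Kinematic entity that flies toward and over the viewer when the space appears.
let entity = ModelEntity(mesh: .generateSphere(radius: 0.2),
                         materials: [SimpleMaterial(color: .white, isMetallic: false)])
entity.position = [0, 1.0, -3.0]                 // start a few meters in front of the viewer
entity.generateCollisionShapes(recursive: false) // physics bodies need a collision shape
entity.components.set(PhysicsBodyComponent(mode: .kinematic))
entity.components.set(PhysicsMotionComponent(linearVelocity: [0, 0.6, 2.0]))
content.add(entity)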
Hello, I am currently developing a Vision Pro VR application with Unreal Engine 5.5. Is it possible to interact with objects (grabbing, clicking on buttons)? I cannot find any information on this. Thank you.
Topic: Spatial Computing
SubTopic: General
I implemented a ShaderGraphMaterial and tried to load it from my usda scene with ShaderGraphMaterial.init(name: in: bundle). I want to dynamically set a TextureResource on that material, so I wanted to expose the texture as a uniform input of the ShaderGraphMaterial. But RCP's Shader Graph doesn't appear to support a texture input as a parameter, as the image shows:
And at the code level, ShaderGraphMaterial doesn't expose a way to set TextureResources either. Its parameterNames shows an empty array if I didn't set any custom input parameters. The texture I get is from my backend, so it really cannot be saved to a file and loaded again (that would be too weird).
Is there something I am missing?
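If the graph does expose a promoted image input, a hedged sketch of setting it from an in-memory image could look like the following; the material path, the parameter name "BaseTexture", and cgImage are placeholders, not something the original post confirms:

var material = try await ShaderGraphMaterial(named: "/Root/MyMaterial",
                                             from: "Scene.usda",
                                             in: realityKitContentBundle)
// Build a TextureResource directly from the in-memory image (no file on disk needed).
let texture = try TextureResource.generate(from: cgImage, options: .init(semantic: .color))
// Bind it to the promoted input; the name must match the exposed parameter in the graph.
try material.setParameter(name: "BaseTexture", value: .textureResource(texture))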
In visionOS, there are existing modifiers that can completely conceal the hands. However, I am interested in learning how to achieve the effect of only one hand disappearing while the other hand remains visible.
.upperLimbVisibility(.hidden)
We’re using the enterprise API for spatial barcode/QR code scanning in the Vision Pro app, but we often get invalid values for the barcode anchor from the API, leading to jittery barcode positions in the UI. The code we’re using is attached below.
import SwiftUI
import RealityKit
import ARKit
import Combine

struct ImmersiveView: View {
    @State private var arkitSession = ARKitSession()
    @State private var root = Entity()
    @State private var fadeCompleteSubscriptions: Set<AnyCancellable> = []

    var body: some View {
        RealityView { content in
            content.add(root)
        }
        .task {
            // Check if barcode detection is supported; otherwise handle this case.
            guard BarcodeDetectionProvider.isSupported else { return }
            // Specify the symbologies you want to detect.
            let barcodeDetection = BarcodeDetectionProvider(symbologies: [.code128, .qr, .upce, .ean13, .ean8])
            do {
                try await arkitSession.requestAuthorization(for: [.worldSensing])
                try await arkitSession.run([barcodeDetection])
                print("Barcode scanning started")
                for await update in barcodeDetection.anchorUpdates where update.event == .added {
                    let anchor = update.anchor
                    // Play an animation to indicate the system detected a barcode.
                    playAnimation(for: anchor)
                    // Use the anchor's decoded contents and symbology to take action.
                    print(
                        """
                        Payload: \(anchor.payloadString ?? "")
                        Symbology: \(anchor.symbology)
                        """)
                }
            } catch {
                // Handle the error.
                print(error)
            }
        }
    }

    // Define this function in ImmersiveView.
    func playAnimation(for anchor: BarcodeAnchor) {
        guard let scene = root.scene else { return }
        // Create a plane sized to match the barcode.
        let extent = anchor.extent
        let entity = ModelEntity(mesh: .generatePlane(width: extent.x, depth: extent.z), materials: [UnlitMaterial(color: .green)])
        entity.components.set(OpacityComponent(opacity: 0))
        // Position the plane over the barcode.
        entity.transform = Transform(matrix: anchor.originFromAnchorTransform)
        root.addChild(entity)
        // Fade the plane in and out.
        do {
            let duration = 0.5
            let fadeIn = try AnimationResource.generate(with: FromToByAnimation<Float>(
                from: 0,
                to: 1.0,
                duration: duration,
                isAdditive: true,
                bindTarget: .opacity)
            )
            let fadeOut = try AnimationResource.generate(with: FromToByAnimation<Float>(
                from: 1.0,
                to: 0,
                duration: duration,
                isAdditive: true,
                bindTarget: .opacity))
            let fadeAnimation = try AnimationResource.sequence(with: [fadeIn, fadeOut])
            _ = scene.subscribe(to: AnimationEvents.PlaybackCompleted.self, on: entity, { _ in
                // Remove the plane after the animation completes.
                entity.removeFromParent()
            }).store(in: &fadeCompleteSubscriptions)
            entity.playAnimation(fadeAnimation)
        } catch {
            print("Error")
        }
    }
}
I have a MeshResource and I would like to create a collision component from it.
let childBounds = child.visualBounds(relativeTo: self)
var childShape: ShapeResource
do {
    // Crashes at the following line instead of throwing a Swift Error
    childShape = try await ShapeResource.generateConvex(from: childModel.mesh)
} catch {
    childShape = ShapeResource.generateBox(size: childBounds.extents)
    childShape = childShape.offsetBy(translation: childBounds.center)
}
Based on this document: https://developer.apple.com/documentation/realitykit/shaperesource/generateconvex(from:)-6upj9
Will throw an error if mesh does not define a nonempty convex volume. For example, will fail if all the vertices in mesh are coplanar.
But the method crashes the app instead of throwing a Swift error. Here is the crash report:
Incident Identifier: 35CD58F8-FFE3-48EA-85D3-6D241D8B0B4C
CrashReporter Key: FE6790CA-6481-BEFD-CB26-F4E27652BEAE
Hardware Model: Mac15,11
...
Version: 1.0 (1)
Code Type: ARM-64 (Native)
Role: Foreground
Parent Process: launchd_sim [2057]
Coalition: com.apple.CoreSimulator.SimDevice.85A2B8FA-689F-4237-B4E8-DDB93460F7F6 [1496]
Responsible Process: SimulatorTrampoline [910]
Date/Time: 2025-01-26 16:13:17.5053 +0800
Launch Time: 2025-01-26 16:13:09.5755 +0800
OS Version: macOS 15.2 (24C101)
Release Type: User
Report Version: 104
Exception Type: EXC_BREAKPOINT (SIGTRAP)
Exception Codes: 0x0000000000000001, 0x00000001abf841d0
Termination Reason: SIGNAL 5 Trace/BPT trap: 5
Terminating Process: exc handler [17316]
Triggered by Thread: 0
Thread 0 Crashed:
0 CoreRE 0x1abf841d0 REAssetManagerCollisionShapeAssetCreateConvexPolyhedron + 232
1 CoreRE 0x1abf845f0 REAssetManagerCollisionShapeAssetCreateConvexPolyhedronFromMesh + 868
2 RealityFoundation 0x1d25613bc static ShapeResource.generateConvex(from:) + 148
Here is the message on the app console from Xcode
/Library/Caches/com.apple.xbs/Sources/REKit_Sim/ThirdParty/PhysX/physx/source/physxcooking/src/convex/QuickHullConvexHullLib.cpp (935) : internal error : QuickHullConvexHullLib::findSimplex: Simplex input points appers to be coplanar.
Failed to cook convex mesh (0x3)
assertion failure: 'convexPolyhedronShape != nullptr' (REAssetManagerCollisionShapeAssetCreateConvexPolyhedron:line 356) Bad parameters passed for convex mesh creation.
Message from debug
The above crash happened on a visionOS simulator (visionOS 2.2 (22N840)).
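A hedged workaround sketch until the call throws instead of trapping: check for a degenerate (near-flat) mesh up front and fall back to the box shape, reusing the names from the snippet above. This only catches axis-aligned flat meshes, not every coplanar case:

let bounds = childModel.mesh.bounds
let minThickness: Float = 1e-4
var childShape: ShapeResource
if bounds.extents.x > minThickness,
   bounds.extents.y > minThickness,
   bounds.extents.z > minThickness {
    // The mesh has volume along every axis, so convex generation should avoid the coplanar path.
    childShape = try await ShapeResource.generateConvex(from: childModel.mesh)
} else {
    childShape = ShapeResource.generateBox(size: bounds.extents)
    childShape = childShape.offsetBy(translation: bounds.center)
}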
I can generate a ShapeResource from a RealityKit entity's extents. Could I apply some scaling to the generated shape? Is there a way to do that?
// model is a ModelResource and bounds is a BoundingBox
var shape = ShapeResource.generateConvex(from: model.mesh);
shape = shape.offsetBy(translation: bounds.center)
// How can I scale the shape to fit within the bounds?
The following API only provides rotation and translation support, and I cannot find scale support.
offsetBy(rotation: simd_quatf = simd_quatf(ix: 0, iy: 0, iz: 0, r: 1), translation: SIMD3<Float> = SIMD3<Float>())
I can put the ShapeResource on an entity and scale the entity. But, I would like to know if it is possible to scale the ShapeResource itself without attaching it to an entity.
Hi there!
I'm trying to make a 360 image carousel in RealityView/SwiftUI with very large textures. I've managed to load one 12K 360 image and show it on an inverted sphere with a ShaderGraphMaterial made in Reality Composer Pro. When I try to load the next image I get an out-of-memory error. The carousel works fine with smaller textures.
My question is: how do I release the memory from the current texture before loading the next?
In theory the garbage collector should erase it eventually?
Hope someone can help =)
Thanks in advance!
Best regards,
Kim
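RealityKit textures are freed by ARC rather than a garbage collector, so the memory comes back only once nothing references the old TextureResource. A hedged sketch of swapping the texture on the existing sphere; the parameter name "ImageInput" and the texture file name are placeholders:

func show(textureNamed name: String, on sphere: ModelEntity) throws {
    guard var material = sphere.model?.materials.first as? ShaderGraphMaterial else { return }
    // Load the next 360 image.
    let next = try TextureResource.load(named: name)
    // Rebinding the parameter and writing the material back drops the last strong
    // reference to the previous texture, so ARC can release it.
    try material.setParameter(name: "ImageInput", value: .textureResource(next))
    sphere.model?.materials = [material]
}

If peak memory is still an issue, clearing the material's texture (or removing the old entity) before loading the next image keeps only one 12K texture alive at a time.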