I am looking for a material that functions the same way Occlusion Material does, except that it only partially occludes whatever is behind it. One approach I tried was changing the opacity of the entity covered in Occlusion Material, but this did not change anything. Please let me know if this is possible.
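For reference, here is a minimal sketch of the approach I tried (it assumes RealityKit's OcclusionMaterial and OpacityComponent; as noted, changing the opacity did not appear to soften the occlusion):

let occluder = ModelEntity(mesh: .generateSphere(radius: 0.2), materials: [OcclusionMaterial()])
occluder.components.set(OpacityComponent(opacity: 0.5)) // no visible effect on occlusion strength
content.add(occluder) // "content" is the enclosing RealityView content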
Discuss spatial computing on Apple platforms and how to design and build an entirely new universe of apps and games for Apple Vision Pro.
I have recently started testing ARKit on an iPhone 16 Pro and I have noticed that the AutoFocus response on this device is much slower than on other devices. For example, if I point the camera at a close object, AutoFocus takes 4-5 seconds to stabilize; the focal length is adjusted very slowly. In some cases (although this is rare) AutoFocus seems almost stuck and requires a bit of device movement to trigger.
This is quite problematic when using some ARKit features like Image and Object detection as the detection algorithms struggle with out-of-focus images.
This problem is limited to ARKit. AutoFocus is significantly more responsive when the standard AVFoundation Camera API is used.
This behavior is easy to reproduce with any of the ARKit samples like https://developer.apple.com/documentation/arkit/arkit_in_ios/content_anchors/tracking_and_visualizing_planes
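For completeness, here is a minimal sketch of the sort of configuration where the slow focus hurts (a sketch only; "arView" and "referenceImages" are placeholders, and isAutoFocusEnabled already defaults to true):

let configuration = ARWorldTrackingConfiguration()
configuration.isAutoFocusEnabled = true            // the default, set explicitly for clarity
configuration.detectionImages = referenceImages    // assumed pre-loaded set of ARReferenceImage
arView.session.run(configuration)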
Is anybody else experiencing this problem?
I'm getting the following error message when compiling the Apple-provided sample Spaceship game for the Apple Vision Pro. I've already tried deleting derived data, resetting the package cache, and restarting Xcode, but I still get the following error: [xrsimulator] Exception thrown during compile: Cannot get rkassets content for path /Users/myoungkang/Downloads/CreatingASpaceshipGame/Packages/Studio/Sources/Studio/Studio.rkassets because 'The file “Studio.rkassets” couldn’t be opened because you don’t have permission to view it.'
error: Tool exited with code 1
Topic: Spatial Computing / SubTopic: General
With iOS 26 unveiled, has anyone noticed or found any changes related to RoomPlan?
I can't find anything myself, which is disappointing.
Has anyone found any improvements or changes?
let component = GestureComponent(DragGesture())
iOS: ☑️
visionOS: ❌
This bug has persisted from the beta through the public release; please fix it.
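For context, a minimal sketch of how the component is being used (it assumes the entity also carries InputTargetComponent and CollisionComponent, which RealityKit gestures require):

let entity = ModelEntity(mesh: .generateBox(size: 0.1), materials: [SimpleMaterial()])
entity.components.set(InputTargetComponent())
entity.components.set(CollisionComponent(shapes: [.generateBox(size: [0.1, 0.1, 0.1])]))
entity.components.set(GestureComponent(DragGesture())) // drags register on iOS, not on visionOS here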
The WWDC25 video and notes titled “Learn About Apple Immersive Video Technologies” introduced the Apple Spatial Audio Format (ASAF) and codec (APAC). However, despite references to immersive video throughout, there is scant information on ASAF/APAC (no code examples and no framework references), and months on I have found no documentation about their implementation and use in Apple's APIs/frameworks.
I want to leverage ambisonic audio in my app, and I don't want to write a custom AU if APAC will be opened up to developers. Reading the notes below along with the iPhone 17 advertising (“Video is captured with Spatial Audio for immersive listening”), it sounds like this is very much a live feature in iOS 26.
Does anyone know the state of play? I'm familiar with how the PHASE engine works, but that is unrelated to what I'm asking about here.
Original quote from video referenced above: “ASAF enables truly externalized audio experiences by ensuring acoustic cues are used to render the audio. It’s composed of new metadata coupled with linear PCM, and a powerful new spatial renderer that’s built into Apple platforms. It produces high resolution Spatial Audio through numerous point sources and high resolution sound scenes, or higher order ambisonics.”
“ASAF is carried inside of broadcast Wave files with linear PCM signals and metadata. You typically use ASAF in production, and to stream ASAF audio, you will need to encode that audio as an mp4 APAC file.”
“APAC efficiently distributes ASAF, and APAC is required for any Apple immersive video experience. APAC playback is available on all Apple platforms except watchOS, and supports Channels, Objects, Higher Order Ambisonics, Dialogue, Binaural audio, interactive elements, as well as provisioning for extendable metadata.”
Topic: Spatial Computing / SubTopic: General
A ShaderGraphMaterial with an Occlusion Surface Output generated with RealityComposer 2 fails to load on iOS 18 and macOS 15 with the following error:
RealityFoundation.ShaderGraphMaterial.LoadError.invalidTypeFound (https://developer.apple.com/documentation/realitykit/shadergraphmaterial/loaderror/invalidtypefound)
This happens with both https://developer.apple.com/documentation/shadergraph/realitykit/occlusion-surface-(realitykit) and https://developer.apple.com/documentation/shadergraph/realitykit/shadow-receiving-occlusion-surface-(realitykit)
RealityView { content in
    do {
        let bgEntity = ModelEntity(mesh: .generateCone(height: 0.5, radius: 0.1), materials: [SimpleMaterial(color: .red, isMetallic: true)])
        bgEntity.position.z = -0.2
        content.add(bgEntity)

        let occlusionMaterial = try await ShaderGraphMaterial(named: "/Root/OcclusionMaterial", from: "OcclusionMaterial")
        let testEntity = ModelEntity(mesh: .generateSphere(radius: 0.4), materials: [occlusionMaterial])
        content.add(testEntity)
        content.cameraTarget = testEntity
    } catch {
        print("Shader Graph Load Error:")
        dump(error)
    }
}
.realityViewCameraControls(.orbit)
.edgesIgnoringSafeArea(.all)
Feedback ID: FB15081296
Topic: Spatial Computing / SubTopic: Reality Composer Pro
Tags: RealityKit, Reality Composer Pro, Shader Graph Editor
Hi,
I was wondering whether the Enterprise API for visionOS 2 includes access to the raw LiDAR data from the Apple Vision Pro, or to any intermediate data representation (like the depthMap shown in this post), or whether there is any other way to get access to this data?
Thanks in advance!
Dear all,
I'm using Unity 6.2 beta and Xcode 16.2. I'm creating a simple framework to use the text-to-speech functionality in visionOS from Unity. The framework is written in Swift, and I created an Objective-C wrapper with the following declarations:
...
void _initTTS(int);
...
I build the framework, import it into Unity, and call the functions from a C# wrapper class. The code is as follows:
public static class TTSPluginManager
{
    [DllImport("TTS_Vision")]
    private static extern void _initTTS(int val);
    ...

    public static void Initialize()
    {
#if UNITY_VISIONOS
        _initTTS(0);
#else
        Debug.LogWarning("NativeTTS.Initialize called on a non-iOS platform. Ignoring.");
#endif
    }
}
I have managed to compile and run the program in the Apple Vision Pro, but I keep on getting the following error:
DllNotFoundException: TTS_Vision assembly: type: member:(null)
TTSPluginManager.Initialize () (at Assets/Plugins/TTSPluginManager.cs:33)
LecturePortalManager.OnCreateStory (Ink.Runtime.Story story) (at Assets/AVRLecture/LecturePortalManager.cs:17)
InkLoader.StartStory () (at Assets/AVRLecture/InkLoader.cs:24)
InkLoader.Start () (at Assets/AVRLecture/InkLoader.cs:18)
If I run the generated code from Xcode, I can see the app in the AVP, but I keep getting a loading error:
DllNotFoundException: Unable to load DLL 'TTS_Vision'. Tried the load the following dynamic libraries: Unable to load dynamic library '/TTS_Vision' because of 'Failed to open the requested dynamic library (0x06000000) dlerror() = dlopen(/TTS_Vision, 0x0005): tried: '/TTS_Vision' (no such file)
at TTSPluginManager.Initialize () [0x00000] in <00000000000000000000000000000000>:0
at LecturePortalManager.OnCreateStory (Ink.Runtime.Story story) [0x00000] in <00000000000000000000000000000000>:0
I can see in the generated code that the framework (TTS_Vision) is there, but the path seems wrong. I've tried adding more options to the search paths, with no success.
Any hints or suggestions are much appreciated.
Hi guys,
I noticed that Apple created a really engaging visual effect for browsing spatial videos in the app. The video appears embedded in a glass panel with glowing edges and even shows a parallax effect as you move around. When I tried to display a stereo video using RealityView, however, the video entity always floats above the panel.
How does visionOS implement this effect? Is there any approach to achieving it, or example code I can use in my own project?
Thanks!
https://developer.apple.com/documentation/realitykit/videomaterial
The documentation: "Video materials support transparency if the source video’s file format also supports transparency."
I have a transparent video (Hand.mov, HEVC with alpha). I can show the video with a transparent background correctly in the Vision Pro simulator, but on a physical device the video has a black background. I'm sure the video format is fine because I can get the texture from the video and display it on an UnlitMaterial.
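Here is a minimal sketch of the setup in question (it assumes "Hand.mov" is bundled with the app and that the entity is added to a RealityKit scene elsewhere):

import AVFoundation
import RealityKit

let url = Bundle.main.url(forResource: "Hand", withExtension: "mov")!
let player = AVPlayer(url: url)
let videoMaterial = VideoMaterial(avPlayer: player)
let plane = ModelEntity(mesh: .generatePlane(width: 0.4, height: 0.3), materials: [videoMaterial])
player.play()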
How can I show the transparent video correctly with RealityKit/VideoMaterial?
I'm developing a VisionOS app with bouncing ball physics and struggling to achieve natural bouncing behavior using RealityKit's physics system. Despite following Apple's recommended parameters, the ball loses significant energy on each bounce and doesn't behave like a real basketball, tennis ball, or football would.
With identical physics parameters (restitution = 1.0), RealityKit shows significant energy loss. I've had to implement a custom physics system to compensate, but I want to use native RealityKit physics. It's impossible to make it work by applying custom impulses.
Ball Physics Setup (Following Apple Forum Recommendations)
// From PhysicsManager.swift
private func createBallEntityRealityKit() -> Entity {
    let ballRadius: Float = 0.05
    let ballEntity = Entity()
    ballEntity.name = "bouncingBall"

    // Mesh and material
    let mesh = MeshResource.generateSphere(radius: ballRadius)
    var material = PhysicallyBasedMaterial()
    material.baseColor = .init(tint: .cyan)
    material.roughness = .float(0.3)
    material.metallic = .float(0.8)
    ballEntity.components.set(ModelComponent(mesh: mesh, materials: [material]))

    // Physics setup from Apple Developer Forums
    let physics = PhysicsBodyComponent(
        massProperties: .init(mass: 0.624), // Seems too heavy for 5cm ball
        material: PhysicsMaterialResource.generate(
            staticFriction: 0.8,
            dynamicFriction: 0.6,
            restitution: 1.0 // Perfect elasticity, yet still loses energy
        ),
        mode: .dynamic
    )
    ballEntity.components.set(physics)
    ballEntity.components.set(PhysicsMotionComponent())

    // Collision setup
    let collisionShape = ShapeResource.generateSphere(radius: ballRadius)
    ballEntity.components.set(CollisionComponent(shapes: [collisionShape]))

    return ballEntity
}
Ground Plane Physics
// From GroundPlaneView.swift
let groundPhysics = PhysicsBodyComponent(
    massProperties: .init(mass: 1000),
    material: PhysicsMaterialResource.generate(
        staticFriction: 0.7,
        dynamicFriction: 0.6,
        restitution: 1.0 // Perfect bounce
    ),
    mode: .static
)
entity.components.set(groundPhysics)
Wall Physics
// From WalledBoxManager.swift
let wallPhysics = PhysicsBodyComponent(
    massProperties: .init(mass: 1000),
    material: PhysicsMaterialResource.generate(
        staticFriction: 0.7,
        dynamicFriction: 0.6,
        restitution: 0.85 // Slightly less than ground
    ),
    mode: .static
)
wall.components.set(wallPhysics)
Collision Detection
// From GroundPlaneView.swift
content.subscribe(to: CollisionEvents.Began.self) { event in
    guard physicsMode == .realityKit else { return }
    let currentTime = Date().timeIntervalSince1970
    guard currentTime - lastCollisionTime > 0.1 else { return }

    if event.entityA.name == "bouncingBall" || event.entityB.name == "bouncingBall" {
        let normal = event.collision.normal

        // Distinguish between wall and ground collisions
        if abs(normal.y) < 0.3 { // Wall bounce
            print("Wall collision detected")
        } else if normal.y > 0.7 { // Ground bounce
            print("Ground collision detected")
        }

        lastCollisionTime = currentTime
    }
}
Issues Observed
Energy Loss: Despite restitution = 1.0 (perfect elasticity), the ball loses ~20-30% energy per bounce
Wall Sliding: Ball tends to slide down walls instead of bouncing naturally
No Damping Control: Comments mention damping values, but they don't seem to affect the physics (see the sketch after this list)
Changing the mass also doesn't make much difference.
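For reference, a minimal sketch of where damping would be tuned (it assumes PhysicsBodyComponent exposes linearDamping and angularDamping on the targeted OS version; the values are illustrative only):

var ballPhysics = PhysicsBodyComponent(
    massProperties: .init(mass: 0.1),
    material: .generate(staticFriction: 0.8, dynamicFriction: 0.6, restitution: 1.0),
    mode: .dynamic
)
ballPhysics.linearDamping = 0.0  // remove air-resistance-style velocity decay
ballPhysics.angularDamping = 0.0 // remove spin decay
ballEntity.components.set(ballPhysics)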
Custom Physics System (Workaround)
I've implemented a custom physics system that manually calculates velocities and applies more realistic restitution values:
// From BouncingBallComponent.swift
struct BouncingBallComponent: Component {
    var velocity: SIMD3<Float> = .zero
    var angularVelocity: SIMD3<Float> = .zero
    var bounceState: BounceState = .idle
    var lastBounceTime: TimeInterval = 0
    var bounceCount: Int = 0
    var peakHeight: Float = 0
    var totalFallDistance: Float = 0

    enum BounceState {
        case idle
        case falling
        case justBounced
        case bouncing
        case settled
    }
}
Is this energy loss expected behavior in RealityKit, even with perfect restitution (1.0)?
Are there additional physics parameters (damping, solver iterations, etc.) that could improve bounce behavior?
Would switching to Unity be necessary for more realistic ball physics, or am I missing something in RealityKit?
Even in the last video here: https://stepinto.vision/example-code/collisions-physics-physics-material/ the bounce of the ball is very unnatural; it stops after 3-4 bounces. I apply custom impulses, but then if I have walls around the ball it's almost impossible to make it look natural. I also saw this post https://developer.apple.com/forums/thread/759422 and the ball is still not bouncing naturally.
Is there any interest in this forum from those developing for the spatial web and Safari? I can't seem to find any relevant posts here.
I am using AccessoryTrackingProvider from ARKit to get the transform of the PSVR2 controller via originFromAnchorTransform of the AccessoryAnchor. I am also trying to use an AnchorEntity on the controller via RealityKit.
However, none of the three options for Accessory.LocationName, which should be used to define the AnchorEntity target, seem to match the position on the controller which is being sent from ARKit.
The attached picture shows two transforms:
RealityKit - using .gripSurface to define the AnchoringComponent.Target.accessory location.
ARKit - using originFromAnchorTransform from the AccessoryTrackingProvider.
They are not aligned at the same point.
As for the other options of Accessory.LocationName, .aim is located at the tip of the controller and .grip is in the same position as .gripSurface but with a different orientation.
I am wondering why there is no option for Accessory.LocationName that actually matches the transform reported by ARKit.
Has the per-app memory limit on Vision Pro been expanded? That is, has the memory limit been relaxed?
Hello Community,
I'm encountering an issue with the latest iOS 17 update, specifically related to RoomPlan version 2. In iOS 16, when using RoomPlan version 1, we were able to display stairs in our app. However, after upgrading to iOS 17 and implementing RoomPlan version 2, the stairs are no longer visible.
Despite thorough investigation, I couldn't find any option within the code to show or hide stairs, or any other objects for that matter. It seems like a specific issue with the update rather than a coding error on our part.
Has anyone else encountered a similar problem? If so, I would greatly appreciate any insights or solutions you might have. It's crucial for our app functionality to have stairs displayed accurately, and we're currently at a loss on how to address this issue.
Thank you in advance for any assistance you can provide.
Best regards
Object tracking has been announced for visionOS 2.0+. Is there any support for it in Unity via PolySpatial, or is it only available in Swift and Xcode?
I am working with MeshAnchors, and I am having trouble getting at the classification of the triangles/faces.
This post references MeshAnchor.Geometry, and that struct does have a property named "classifications", but it is of type GeometrySource. I cannot find any classification information in GeometrySource. Am I missing something there?
I think I am looking for something of type MeshAnchor.MeshClassification, but I cannot find any struct with this as a property.
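For reference, this is the kind of accessor I expected to be able to write. It is only a sketch built on assumptions: that classifications stores one UInt8 value per face in its buffer, and that MeshAnchor.MeshClassification is Int-raw-representable.

import ARKit

func classification(ofFace faceIndex: Int, in geometry: MeshAnchor.Geometry) -> MeshAnchor.MeshClassification? {
    guard let source = geometry.classifications, faceIndex < source.count else { return nil }
    // Step through the buffer using the source's offset and stride to reach this face's entry.
    let pointer = source.buffer.contents().advanced(by: source.offset + source.stride * faceIndex)
    let rawValue = Int(pointer.assumingMemoryBound(to: UInt8.self).pointee)
    return MeshAnchor.MeshClassification(rawValue: rawValue)
}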
The AR-based app I am working on right now is experiencing an issue. Sometimes, the AR session fails with a call to my ARSessionObserver's session(_ session: ARSession, didFailWithError error: Error)
with the following error:
Error Domain=com.apple.arkit.error
Code=102 "Required sensor failed."
NSLocalizedFailureReason="A sensor failed to deliver the required input.,"
NSLocalizedRecoverySuggestion="Make sure that the application has the required privacy settings."
The underlying error seems to point to the CoreMotion framework:
Domain=CMErrorDomain
Code=102 "(null)
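For context, this is roughly where the failure surfaces and how I currently try to recover (re-running the session is an assumption on my part, not a confirmed fix):

func session(_ session: ARSession, didFailWithError error: Error) {
    guard (error as? ARError)?.code == .sensorFailed else { return }
    // Attempt to recover by restarting the session with the same configuration.
    let configuration = ARWorldTrackingConfiguration()
    configuration.worldAlignment = .gravity
    session.run(configuration, options: [.resetTracking, .removeExistingAnchors])
}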
Some people seem to have experienced this issue and solved it by making sure that the Compass Calibration switch is ON in Settings > Privacy > Location Services > System Services.
For context, the ARWorldTrackingConfiguration.worldAlignment is set to .gravity
The thing is, it is already ON when I experience this issue.
I also noticed that this issue happens much more often on the iPhone 16e than on any other device.
Has anyone had similar experiences? I am looking for a way to prevent this error from happening (ideally) or to handle it in a way that does not affect the user. Any help is appreciated.
Hello Apple Team,
I am working on a RealityKit project for iOS, where I need to place a 3D asset far away from the camera (approximately 15 to 30 meters).
When enabling people occlusion, the 3D asset gets clipped when moved far away.
Is it possible to enable people occlusion for assets at close range (less than 10 meters) while disabling it for assets farther away to prevent clipping?
I understand that it is possible to switch configurations at runtime. However, I would like to place assets both close to and far from the camera simultaneously.
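For reference, here is a minimal sketch of the current setup (standard people-occlusion frame semantics; "arView" is assumed to be the hosting ARView, and the per-distance opt-out I am asking about is not part of this configuration):

let configuration = ARWorldTrackingConfiguration()
if ARWorldTrackingConfiguration.supportsFrameSemantics(.personSegmentationWithDepth) {
    configuration.frameSemantics.insert(.personSegmentationWithDepth)
}
arView.session.run(configuration)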
Thank you for your help!
Kind regards