The following documentation tells me that CameraFrame.Sample.Parameters.extrinsics is of type simd_float4x4, great!
https://developer.apple.com/documentation/arkit/cameraframe/sample/parameters/4443449-extrinsics
I have read in the answer to another post that these extrinsics represent the pose of the physical camera relative to the device anchor.
Did I understand correctly that the device anchor is where the scene is rendered from onto the user's display?
In which coordinate system is this offset defined: which axis points left, which up, and which forward?
The last column of the extrinsics seems to define a translation of approximately 2 cm along the x axis, -2 cm along the y axis, and -5 cm along the z axis. I measured the physical distance between the main left and right cameras to find out whether it is about 2 cm or 5 cm from the "middle"; it looks more like 5 cm, so I assume the z axis points towards the right (from the user's perspective). Is that so? For x and y, I assume the physical camera is roughly 2 cm in front of the user and 2 cm below; which of x and y is horizontal, and which is vertical?
How is the camera image indexed, is it row-major and is the origin on the top left?
I am looking forward to learning all the details about these extrinsics in order to make use of them.
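For reference, here is how I currently pull the matrix apart, a minimal sketch that assumes only the usual column-major simd layout (translation in the last column); the axis semantics are exactly what I am asking about:
import simd

// Minimal sketch: split the extrinsics matrix into its translation and rotation parts.
// Assumes only the standard simd_float4x4 column-major layout; the meaning of the
// axes (left/up/forward) is the open question above.
func inspectExtrinsics(_ extrinsics: simd_float4x4) {
    // Translation is stored in the last column (x, y, z, 1).
    let translation = SIMD3<Float>(extrinsics.columns.3.x,
                                   extrinsics.columns.3.y,
                                   extrinsics.columns.3.z)
    print("offset in metres:", translation)

    // The upper-left 3x3 block is the rotation part.
    let rotation = simd_float3x3(
        SIMD3<Float>(extrinsics.columns.0.x, extrinsics.columns.0.y, extrinsics.columns.0.z),
        SIMD3<Float>(extrinsics.columns.1.x, extrinsics.columns.1.y, extrinsics.columns.1.z),
        SIMD3<Float>(extrinsics.columns.2.x, extrinsics.columns.2.y, extrinsics.columns.2.z)
    )
    print("rotation:", simd_quatf(rotation))
}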
I'm a novice in RealityKit and ARKit. I'm using ARKit in SwiftUI to show a cube with a number as shown below.
import SwiftUI
import RealityKit
import ARKit

struct ContentView: View {
    var body: some View {
        return ARViewContainer()
    }
}

#Preview {
    ContentView()
}

struct ARViewContainer: UIViewRepresentable {
    typealias UIViewType = ARView

    func makeUIView(context: UIViewRepresentableContext<ARViewContainer>) -> ARView {
        let arView = ARView(frame: .zero, cameraMode: .ar, automaticallyConfigureSession: true)
        arView.enableTapGesture()
        return arView
    }

    func updateUIView(_ uiView: ARView, context: UIViewRepresentableContext<ARViewContainer>) {
    }
}

extension ARView {
    func enableTapGesture() {
        let tapGestureRecognizer = UITapGestureRecognizer(target: self, action: #selector(handleTap(recognizer:)))
        self.addGestureRecognizer(tapGestureRecognizer)
    }

    @objc func handleTap(recognizer: UITapGestureRecognizer) {
        let tapLocation = recognizer.location(in: self) // print("Tap location: \(tapLocation)")
        guard let rayResult = self.ray(through: tapLocation) else { return }
        let results = self.raycast(from: tapLocation, allowing: .estimatedPlane, alignment: .any)
        if let firstResult = results.first {
            let position = simd_make_float3(firstResult.worldTransform.columns.3)
            placeObject(at: position)
        }
    }

    func placeObject(at position: SIMD3<Float>) {
        let mesh = MeshResource.generateBox(size: 0.3)
        let material = SimpleMaterial(color: UIColor.systemRed, roughness: 0.3, isMetallic: true)
        let modelEntity = ModelEntity(mesh: mesh, materials: [material])
        var unlitMaterial = UnlitMaterial()
        if let textureResource = generateTextResource(text: "1", textColor: UIColor.white) {
            unlitMaterial.color = .init(tint: .white, texture: .init(textureResource))
            modelEntity.model?.materials = [unlitMaterial]
            let id = UUID().uuidString
            modelEntity.name = id
            modelEntity.transform.scale = [0.3, 0.1, 0.3]
            modelEntity.generateCollisionShapes(recursive: true)
            let anchorEntity = AnchorEntity(world: position)
            anchorEntity.addChild(modelEntity)
            self.scene.addAnchor(anchorEntity)
        }
    }

    func generateTextResource(text: String, textColor: UIColor) -> TextureResource? {
        if let image = text.image(withAttributes: [NSAttributedString.Key.foregroundColor: textColor], size: CGSize(width: 18, height: 18)), let cgImage = image.cgImage {
            let textureResource = try? TextureResource(image: cgImage, options: TextureResource.CreateOptions.init(semantic: nil))
            return textureResource
        }
        return nil
    }
}
I tap the floor and get a cube with '1' as shown below.
The background color of the cube is black, I guess. Where does this color come from and how can I change it into, say, red? Thanks.
Reality Composer is no longer available in the Xcode 15 release. Is this intended, or will it be available in later releases? Do I need to revert to Xcode 15 beta 8 to get Reality Composer?
What will be the pros and cons if Unreal Engine is used on Apple Vision Pro?
What issues will Unreal face in the future on Apple Vision Pro?
What specifications will be required for Unreal to work on Vision Pro?
I am developing a visionOS app. I am now very interested in Metal and Compositor Services, but I have not explored them in depth. I know that Metal offers a higher degree of control. I am wondering whether using Compositor Services provides fewer AR features than RealityKit (such as scene reconstruction and understanding, the hover effect, etc.).
Since updating to iOS 26.0 (and confirmed on 26.1), ARBodyTrackingConfiguration no longer detects a valid ARBodyAnchor on devices with LiDAR (e.g., iPhone 15 Pro, iPhone 17 Pro Max).
This issue reproduces in custom projects and Apple’s official sample “Capturing Body Motion in 3D”.
The AR session runs normally, but the delegate call:
func session(_ session: ARSession, didUpdate anchors: [ARAnchor])
never yields an ARBodyAnchor with valid joint transforms.
All joints return nil when calling:
body.skeleton.modelTransform(for: jointName)
resulting in 0 valid joints per frame.
Environment
• Device: iPhone 17 Pro Max (LiDAR)
• iOS: 26.0 / 26.1
• Xcode: 16.0 (stable)
• Framework: ARKit + RealityKit
• Configuration used:
config.worldAlignment = .gravityAndHeading
config.isAutoFocusEnabled = true
config.environmentTexturing = .none
session.run(config)
Also tested: with and without frameSemantics = .bodyDetection
Expected Behavior
ARBodyAnchor should be detected and body.skeleton should contain ~89 valid joints with continuous updates.
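For reference, the per-frame check that produces the 0-joint count looks roughly like this (a minimal sketch; session setup and delegate assignment are omitted, and the joint names come from ARSkeletonDefinition.defaultBody3D):
import ARKit

// Minimal sketch of the delegate check; running ARBodyTrackingConfiguration happens elsewhere.
final class BodyTrackingChecker: NSObject, ARSessionDelegate {
    func session(_ session: ARSession, didUpdate anchors: [ARAnchor]) {
        for case let body as ARBodyAnchor in anchors {
            let jointNames = ARSkeletonDefinition.defaultBody3D.jointNames
            let validJoints = jointNames.compactMap { name in
                body.skeleton.modelTransform(for: ARSkeleton.JointName(rawValue: name))
            }
            // Roughly 89 valid joints are expected; on iOS 26.0/26.1 this prints 0.
            print("valid joints:", validJoints.count)
        }
    }
}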
Hello,
After watching the Work with Reality Composer Pro content in Xcode, I had created the following custom component.
public struct TestComponent: Component, Codable {
    public var text: String = "helloWorld"
    public init() {}
}
I registered the custom component, as suggested, in the App.init function:
init() {
    RealityKitContent.TestComponent.registerComponent()
}
The custom component is decoded and the RealityView shows the sphere when I load the "Scene" from the RealityKitContent bundle.
But if I export the scene to a separate file named "test_scene.usdz" on disk, share it to the simulator, and then try to load it in a RealityView, it causes
EXC_BAD_ACCESS #0 0x0000000194c8d508 in Swift._StringObject.getSharedUTF8Start() -> Swift.UnsafePointer<Swift.UInt8> ()
Printing the loaded entity shows the custom component, but trying to show it in the RealityView crashes the app immediately. Is there a way to fix this?
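For completeness, the load path that triggers the crash looks roughly like this (a minimal sketch; it assumes test_scene.usdz ends up in the app bundle and that TestComponent was already registered in App.init):
import Foundation
import RealityKit

// Minimal sketch of the failing load path. Assumes the exported test_scene.usdz
// is in the app bundle and TestComponent.registerComponent() has already run.
func loadExportedScene() async throws -> Entity {
    guard let url = Bundle.main.url(forResource: "test_scene", withExtension: "usdz") else {
        throw CocoaError(.fileNoSuchFile)
    }
    let entity = try await Entity(contentsOf: url)
    print(entity.components) // the custom component is visible here
    return entity            // showing this in the RealityView is what crashes
}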
We're developing a visionOS application in which we would like to do product recognition (of items like food).
We have enterprise entitlements and therefore also main camera access for visionOS. We send these live camera frames to a trained Core ML model, from which we receive 2D coordinates for the detection predictions.
Now we would like to create a 3D anchor on the detected items so it is visible to the user. The 3D anchor will display the class name of the detected item.
How do we transform this 2D coordinate from the model prediction to a 3D anchor?
What do you call the effect where the edges around the central image gradually become transparent? This effect can also be seen when viewing spatial photos in immersive mode on Vision Pro. How can I achieve this effect using SwiftUI or ShaderGraph? I want to use it when displaying images in my app.
I downloaded the official sample project “Accessing the Main Camera”, but I found that it’s not able to retrieve the camera feed on visionOS 26.1. After checking the debug logs, it seems the issue is caused by the system being unable to find the expected format.
I tested on a device running visionOS 2, and the camera feed worked correctly — but only when using the sample code from the visionOS 2 version, not the current one. I also noticed that some of the APIs have changed between versions.
Has anyone managed to successfully access the camera feed on visionOS 26.1?
Hey there,
since SceneView has been marked as "deprecated" for SwiftUI, I'm wondering which alternatives should be considered for the following situation:
I have a SwiftUI app (for iOS and iPadOS) where users can view (with rotate, scale, move gestures) 3D models (USDZ) in a scene. The models will be downloaded from web backend and called via local URL paths.
What I tested:
I've tried ARView in .nonAR mode and RealityView; however, I didn't get the expected result, namely that the user can rotate, scale, and move the 3D models in a virtual space.
ARView in .nonAR mode still shows the object as in normal AR mode, just without the camera stream.
I tried to add gestures to the RealityView on iOS: loading USDZ 3D models worked, but the gestures didn't.
Model3D is only available for visionOS (it would be amazing to have it on iOS).
I also checked Quick Look preview; however, it works rather strangely via a file picker, etc., which is not how users should load the 3D models in my app.
Maybe I missed something, but I couldn't find anything that helps me. I'm pretty much stuck adopting the latest and greatest frameworks/APIs in my app and taking the next steps to port my app to visionOS.
Long story short 😃:
Does someone have an idea what is the alternative to SceneView for USDZ 3D models?
I appreciate your support!!
Thanks in advance!
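In case it helps to show what I was aiming for, here is roughly the ARView-in-.nonAR-mode setup I expected to work (a minimal sketch; the model name and camera placement are placeholders):
import RealityKit
import UIKit

// Minimal sketch: ARView without a camera feed plus RealityKit's built-in gestures.
// "model" is a placeholder for a USDZ downloaded to a local URL.
func makeNonARView() -> ARView {
    let arView = ARView(frame: .zero, cameraMode: .nonAR, automaticallyConfigureSession: false)
    arView.environment.background = .color(.systemBackground)

    // A simple virtual camera looking at the origin.
    let cameraAnchor = AnchorEntity(world: [0, 0, 1])
    cameraAnchor.addChild(PerspectiveCamera())
    arView.scene.addAnchor(cameraAnchor)

    if let model = try? Entity.loadModel(named: "model") {
        model.generateCollisionShapes(recursive: true) // gestures require collision shapes
        arView.installGestures([.rotation, .scale, .translation], for: model)

        let modelAnchor = AnchorEntity(world: .zero)
        modelAnchor.addChild(model)
        arView.scene.addAnchor(modelAnchor)
    }
    return arView
}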
Most models are only available as glb or fbx, so I usually re-export them to USDZ using Blender.
When I import them into Reality Composer Pro, Mesh, Textures etc look great, but in the Animation Library subsection all I can see is one default subtree animation.
In Blender I can see all available animations and play them individually. The default subtree animation just plays the default idle animation.
In fact when I open the nonlinear animation view in Blender and select a different animation as the default animation, the exported usdz shows the newly selected animation as default subtree animation.
I can see in the Apple sample apps models can have multiple animations in their Animation Library.
I'm using the latest Blender 4.5, and the USDZ exporter should be working properly, shouldn't it?
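One way to check what the exported file actually contains is to load it in code and list the animations (a minimal sketch; "exported_model" is a placeholder for the Blender export placed in the app bundle):
import RealityKit

// Minimal sketch: list the animations a USDZ actually carries after export.
// "exported_model" is a placeholder resource name.
func listAnimations() async {
    guard let entity = try? await Entity(named: "exported_model") else { return }
    print("animation count:", entity.availableAnimations.count)
    for animation in entity.availableAnimations {
        print(animation) // inspect each AnimationResource
    }
}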
Hello,
If you add a ManipulationComponent to a RealityKit entity and then continue to add instructions, sooner or later you will encounter a crash with the following error message:
Attempting to move entity “%s” (%p) under “%s” (%p), but the new parent entity is currently being removed. Changing the parent/child entities of an entity in an event handler while that entity is already being reassigned is not supported.
CoreSimulator 1048 – Device: Apple Vision Pro 4K (B87DD32A-E862-4791-8B71-92E50CE6EC06) – Runtime: visionOS 26.0 (23M336) – Device Type: Apple Vision Pro
The problem occurs precisely with this code:
ManipulationComponent.configureEntity(object)
I adapted Apple's ObjectPlacementExample and made the changes available via GitHub.
The desired behavior is that I add entities to the ManipulationComponent and RealityKit then runs stably and does not crash randomly.
GitHub Repo
Thanks
Andre
I'm working on creating a panorama view in AVP. When I get to this line of code, Xcode says "Type 'Entity' does not conform to protocol 'View'":
private var realityView: RealityView!
as well as this line, with the same error message:
private func setupPanoramaScene(for content: RealityView.Content)
What should I put as an argument for RealityView? It doesn't work without arguments either.
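For reference, the shape I understand RealityView to expect is a SwiftUI view built with a make closure rather than a stored property (a minimal sketch; the inverted sphere is just a stand-in for the panorama geometry):
import SwiftUI
import RealityKit

// Minimal sketch: RealityView is used directly in a view body with a make closure.
struct PanoramaView: View {
    var body: some View {
        RealityView { content in
            let sphere = ModelEntity(mesh: .generateSphere(radius: 10),
                                     materials: [UnlitMaterial(color: .white)])
            sphere.scale *= SIMD3<Float>(-1, 1, 1) // flip so the inside faces the viewer
            content.add(sphere)
        }
    }
}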
We were having an issue where the system rotate and scale gestures (two-handed gestures: RotateGesture3D and MagnifyGesture) were extremely difficult to register (make work) in the visionOS simulator.
The solution we found was to:
Launch your app in the simulator
Move the pointer on top of the 3D object for which you are testing rotation and scaling gestures.
Press and hold the Option key to display touch points (ie: the two-handed gesture points).
While keeping the Option key pressed, release the pointer and re-engage it. I am using a trackpad with tap-to-click enabled and three-finger drag enabled in Accessibility, so "release the pointer and re-engage it" simply means lifting the three fingers and placing them on the trackpad again.
If you have kept the Option key pressed, you should now be able to rotate and scale the 3D object.
Context if you are interested:
Our issue was also occurring in Apple's own sample project relating to gestures "Transforming RealityKit entities using gestures", at below link.
On Apple's article "Interacting with your app in the visionOS simulator" at the below link, for two-handed gestures it states "Press and hold the Option key to display touch points. Move the pointer while pressing the Option key to change the distance between the touch points. Move the pointer and hold the Shift and Option keys to reposition the touch points."
This simply did not work anymore for rotation and scaling gestures.
These gestures used to be a lot more responsive in Sonoma. Either the article should be updated to what I described above, or there is an issue. Our colleague who is using macOS Sonoma 14.6.1 with the latest release of Xcode is not having these issues.
Here is the list of configurations (troubleshooting we tried!) where it is difficult to achieve rotation and scaling gestures in the visionOS simulator:
macOS Sequoia 16.1 Beta, Xcode 16.1 RC w visionOS 2.1
macOS Sequoia 16.1 Beta, Xcode 16.1 RC w visionOS 2.0
macOS Sequoia 16.1 Beta, Xcode 16.2 Beta 1 w visionOS 2.1
macOS Sequoia 16.1 Beta, Xcode 16.2 Beta 1 w visionOS 2.0
macOS Sequoia 16.1 Beta, remove all Xcodes and installed the build from AppStore (Xcode 16.1)
macOS Sequoia 16.1 Beta, Xcode 16.0 w visionOS 2.0
completely wiped and reset the entire development machine, re-installed the latest releases of Sequoia (15.1) and Xcode (15.1)
Throughout this troubleshooting I often:
restarted both xcode and sim
erased all derived data
erased all contents and settings from sims
performed fresh git clones
None of the above worked; only the workaround described above works at the moment. As you can probably deduce, it was very time-consuming to find the workaround, and we also wasted some development effort thinking our gesture code was at fault.
Hopefully this will help other devs.
Article Link:
https://developer.apple.com/documentation/xcode/interacting-with-your-app-in-the-visionos-simulator
Gesture sample project link:
https://developer.apple.com/documentation/realitykit/transforming-realitykit-entities-with-gestures
I can generate a ShapeResource from a RealityKit entity's extents. Can I apply some scaling to the generated shape? Is there a way to do that?
// model is a ModelResource and bounds is a BoundingBox
var shape = ShapeResource.generateConvex(from: model.mesh)
shape = shape.offsetBy(translation: bounds.center)
// How can I scale the shape to fit within the bounds?
The following API only provides rotation and translation support, and I cannot find any scale support.
offsetBy(rotation: simd_quatf = simd_quatf(ix: 0, iy: 0, iz: 0, r: 1), translation: SIMD3<Float> = SIMD3<Float>())
I can put the ShapeResource on an entity and scale the entity. But, I would like to know if it is possible to scale the ShapeResource itself without attaching it to an entity.
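The closest workaround I can think of is to regenerate an approximate shape from scaled bounds rather than scaling the convex hull itself (a minimal sketch; the box is only an approximation of the original convex shape and scale is a placeholder factor):
import RealityKit

// Minimal sketch: approximate a scaled collision shape by rebuilding a box from the
// scaled bounding box, since offsetBy only supports rotation and translation.
func scaledBoxShape(bounds: BoundingBox, scale: SIMD3<Float>) -> ShapeResource {
    ShapeResource
        .generateBox(size: bounds.extents * scale)
        .offsetBy(translation: bounds.center * scale)
}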
We are working on a world scale AR app that leverages the device location and heading to place objects in the streets, so that they are correctly and stably anchored to certain locations.
Since the geo-tracking imagery is only available in certain cities and areas, we are trying to figure out how to fall back when geo-tracking becomes unavailable as the device moves away, so that we still retain good AR camera accuracy. We might need to come up with some algorithm using the device GPS to line up the ARCamera with our objects.
Question: Does geo-tracking always provide greater than or equal to the accuracy of world tracking, for a GPS outdoor AR experience?
If so, we can simply use the ARGeoTrackingConfiguration for the entire time, and rely on the ARView keeping itself aligned. Otherwise, we need to switch between it and ARWorldTrackingConfiguration when geo-tracking is not available and/or its accuracy is low, then roll our own algorithm to keep the camera aligned.
Thanks.
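If switching turns out to be necessary, the decision would presumably hinge on an availability check along these lines (a minimal sketch; the GPS-based re-alignment itself is the part we would still need to write):
import ARKit

// Minimal sketch: pick geo tracking when it is supported and available at the
// current location, otherwise fall back to plain world tracking.
func chooseConfiguration(for session: ARSession) {
    guard ARGeoTrackingConfiguration.isSupported else {
        session.run(ARWorldTrackingConfiguration())
        return
    }
    ARGeoTrackingConfiguration.checkAvailability { available, _ in
        DispatchQueue.main.async {
            if available {
                session.run(ARGeoTrackingConfiguration())
            } else {
                // Fall back to world tracking plus our own GPS-based alignment.
                session.run(ARWorldTrackingConfiguration())
            }
        }
    }
}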
I have a Mac mini M4 with 16 GB of memory and Xcode 16.1. When I test my Vision Pro app in the Simulator, it is very slow and the system shows that memory is under high pressure.
How do I run/test/debug the application on the Vision Pro directly? I tried to add my Vision Pro to my developer account, but it didn't work because I could not find the UDID; when I hook the USB up to the battery, it only shows a Battery device ID.
Hello,
I'm trying to view the components of an Entity I'm creating in RealityKit by reading from a USDZ file. I have the following code snippet in my app.
if let appleEntity = try? Entity.loadModel(named: "apple_tile") {
    let c = appleEntity.components
    for comp in c { // <- compiler error here
        print(comp)
    }
}
The compiler error I'm receiving says "For-in loop requires 'Entity.ComponentSet' to conform to 'Sequence'". However, I thought this was the case, according to the documentation for Entity.ComponentSet?
Curious if anyone else has had this problem. Running Xcode 15.4, and my Swift version is:
xcrun swift -version
swift-driver version: 1.90.11.1 Apple Swift version 5.10 (swiftlang-5.10.0.13 clang-1500.3.9.4)
Target: x86_64-apple-macosx14.0
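In the meantime, the typed subscript works as a stopgap for pulling out individual components (a minimal sketch; the component types listed are just examples):
import RealityKit

// Minimal sketch: look up specific component types via the subscript instead of
// iterating the ComponentSet. The listed types are only examples.
func inspectComponents(of entity: Entity) {
    if let model = entity.components[ModelComponent.self] {
        print("model:", model)
    }
    if let transform = entity.components[Transform.self] {
        print("transform:", transform)
    }
    if let collision = entity.components[CollisionComponent.self] {
        print("collision:", collision)
    }
}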