Hi all,
I'm running into an issue with an app that previously worked fine on device using visionOS 2.0. After updating to visionOS 26, the same code runs fine in the simulator but crashes on the device with the following error:
-[MTLDebugComputeCommandEncoder _validateThreadsPerThreadgroup:]:1330:
failed assertion `(threadsPerThreadgroup.width(32) * threadsPerThreadgroup.height(32) * threadsPerThreadgroup.depth(1))(1024) must be <= 832. (kernel threadgroup size limit)`
Is there any documented way to check or increase the allowed threadsPerThreadgroup size on Apple Vision Pro? Or any recommended workaround for this regression?
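For reference, the direction I'm exploring is to stop hardcoding a 32x32 threadgroup and instead size it from the pipeline's own limit. A minimal sketch, where pipelineState, encoder, and the grid size are placeholders from my setup:

// Query the pipeline's actual limit instead of assuming 1024 threads per threadgroup,
// then clamp the threadgroup size before dispatching.
let maxTotal = pipelineState.maxTotalThreadsPerThreadgroup   // e.g. 832 on device
let width = pipelineState.threadExecutionWidth               // e.g. 32
let height = max(1, maxTotal / width)
let threadsPerThreadgroup = MTLSize(width: width, height: height, depth: 1)

encoder.setComputePipelineState(pipelineState)
encoder.dispatchThreads(MTLSize(width: gridWidth, height: gridHeight, depth: 1),
                        threadsPerThreadgroup: threadsPerThreadgroup)

I'd still like to know whether the 832-thread limit on device is documented anywhere.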
Thanks in advance!
Description:
I'm developing a travel/panorama viewing app for visionOS that allows users to view 360° panoramic images in an immersive space. When users enter panorama viewing mode, I want to provide a fully immersive experience where the main interface window and Earth 3D globe window are hidden.
I've implemented the app following Apple's documentation on Creating Fully Immersive Experiences, but when users enter the immersive space, both the main window and the Earth 3D window remain visible, diminishing the immersive experience.
Implementation Details:
My app has three main components:
A main content window showing panorama thumbnails
A 3D globe window (volumetric) showing locations
An immersive space for viewing 360° panoramas
I'm using .immersionStyle(selection: $panoImageView, in: .full) to create a fully immersive experience, but other windows remain visible.
Relevant Code:
@main
struct Travel_ImmersiveApp: App {
    @StateObject private var appModel = AppModel()
    @State private var panoImageView: ImmersionStyle = .full

    var body: some Scene {
        WindowGroup {
            ContentView()
                .environmentObject(appModel)
        }
        .windowStyle(.automatic)
        .defaultSize(width: 1280, height: 825)

        WindowGroup(id: "Earth") {
            Globe3DView()
                .environmentObject(appModel)
                .onAppear {
                    appModel.isGlobeWindowOpen = true
                    appModel.globeWindowOpen = true
                }
                .onDisappear {
                    if !appModel.shouldCloseApp {
                        appModel.handleGlobeWindowClose()
                    }
                }
        }
        .windowStyle(.volumetric)
        .defaultSize(width: 0.8, height: 0.8, depth: 0.8, in: .meters)
        .windowResizability(.contentSize)

        ImmersiveSpace(id: "ImmersiveView") {
            ImmersiveView()
                .environmentObject(appModel)
        }
        .immersionStyle(selection: $panoImageView, in: .full)
    }
}
Opening the Immersive Space:
func getPanoImageAndOpenImmersiveSpace() async {
    appModel.clearMemoryCache()
    do {
        let canView = appModel.canViewImage(image)
        if canView {
            let downloadedImage = try await appModel.getPanoramaImage(for: image) { progress in
                Task { @MainActor in
                    cardState = .loading(progress: progress)
                }
            }
            await MainActor.run {
                appModel.updateCurrentImage(image, panoramaImage: downloadedImage)
            }
            if !appModel.immersiveSpaceOpened {
                try await openImmersiveSpace(id: "ImmersiveView")
                await MainActor.run {
                    appModel.immersiveSpaceOpened = true
                    cardState = .normal
                }
            } else {
                await MainActor.run {
                    appModel.updateImmersiveView = true
                    cardState = .normal
                }
            }
        } else {
            await MainActor.run {
                appModel.errorMessage = "You do not have permission to view this image."
                cardState = .normal
            }
        }
    } catch {
        // Error handling
    }
}
Immersive View Implementation:
struct ImmersiveView: View {
    @EnvironmentObject var appModel: AppModel

    var body: some View {
        RealityView { content in
            let rootEntity = Entity()
            content.add(rootEntity)
            Task {
                if let selectedImage = appModel.selectedImage,
                   appModel.canViewImage(selectedImage) {
                    await loadPanorama(for: rootEntity)
                }
            }
        } update: { content in
            if appModel.updateImmersiveView,
               let selectedImage = appModel.selectedImage,
               appModel.canViewImage(selectedImage),
               let rootEntity = content.entities.first {
                Task {
                    await loadPanorama(for: rootEntity)
                    appModel.updateImmersiveView = false
                }
            }
        }
        .onAppear {
            print("ImmersiveView appeared")
        }
        .onDisappear {
            appModel.resetImmersiveState()
        }
    }

    // loadPanorama implementation...
}
What I've Tried
Set immersionStyle to .full as recommended in the documentation
Confirmed that the immersive space is properly opened and displaying panoramas
Verified that the state management for the immersive space is working correctly
Questions
How can I ensure that when the user enters the immersive panorama viewing experience, all other windows (main interface and Earth 3D globe) are automatically hidden?
Is there a specific API or approach I'm missing to properly implement a fully immersive experience that hides all other windows?
Do I need to manually dismiss the windows when opening the immersive space, and if so, what's the best approach for doing this?
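For question 3, the approach I'm currently leaning toward is dismissing the other windows manually right after the immersive space opens. A sketch of that idea; it assumes I give the main WindowGroup an explicit id such as "Main", which my current code doesn't have:

// In the view that triggers panorama viewing.
@Environment(\.openImmersiveSpace) private var openImmersiveSpace
@Environment(\.dismissWindow) private var dismissWindow

func enterImmersivePanorama() async {
    if await openImmersiveSpace(id: "ImmersiveView") == .opened {
        dismissWindow(id: "Earth")   // the volumetric globe window
        dismissWindow(id: "Main")    // hypothetical id for the main content window
    }
}

I'm not sure this is the intended pattern for a .full immersion style, which is part of why I'm asking.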
Any guidance or sample code would be greatly appreciated. Thank you!
Hi, I've encountered a thread where an Apple engineer points out that there are two possible places to read scenePhase, either from the App or from a View: https://developer.apple.com/forums/thread/757429
This thread also links to documentation which states
If you read the phase from within a custom Scene instance, the value similarly reflects an aggregation of all the scenes that make up the custom scene:
This doesn't seem to be the case on visionOS 2. I tried the following code, starting from the empty app template:
import SwiftUI

@main
struct SceneTestApp: App {
    var body: some Scene {
        MyScene()

        WindowGroup(id: "extra") {
            Text("Extra window")
        }
    }
}

struct MyScene: Scene {
    @Environment(\.scenePhase) private var scenePhase
    @Environment(\.openWindow) private var openWindow

    var body: some Scene {
        WindowGroup {
            ContentView()
                .onAppear {
                    openWindow(id: "extra")
                }
        }
        .onChange(of: scenePhase) { oldValue, newValue in
            print("scenePhase changed")
        }
    }
}
The result was that I didn't get the onChange callback if I only closed the extra window; the callback only came after I closed both windows and the whole app was suspended. Is this the expected behavior?
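For comparison, this is the View-level alternative from the linked thread that I plan to test next; a minimal sketch:

// Reading scenePhase from a view inside one WindowGroup: the value reflects
// that window's scene, so each window gets its own callback.
struct ContentView: View {
    @Environment(\.scenePhase) private var scenePhase

    var body: some View {
        Text("Main window")
            .onChange(of: scenePhase) { oldValue, newValue in
                print("Main window scenePhase: \(oldValue) -> \(newValue)")
            }
    }
}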
Hi, I am trying to update which entities are visible in my RealityView. After the SwiftData set is updated, I have to restart the app for the change to appear in the RealityView.
Also, the RealityView does not close when I move to a different tab. It keeps everything on and tracking, leaving the model in the same location I left it.
import SwiftUI
import RealityKit
import MountainLake
import SwiftData

struct RealityLakeView: View {
    @Environment(\.modelContext) private var context
    @Query private var items: [Item]

    var body: some View {
        RealityView { content in
            print("View Loaded")
            let lakeScene = try? await Entity(named: "Lake", in: mountainLakeBundle)
            let anchor = AnchorEntity(.plane(.horizontal, classification: .any, minimumBounds: SIMD2<Float>(0.2, 0.2)))

            @MainActor func addEntity(name: String) {
                if let lakeEntity = lakeScene?.findEntity(named: name) {
                    // Add the Cube_1 entity to the RealityView
                    anchor.addChild(lakeEntity)
                } else {
                    print(name + "entity not found in the Lake scene.")
                }
            }

            addEntity(name: "Island")
            for item in items {
                if(item.enabled) {
                    addEntity(name: item.value)
                }
            }

            // Add the horizontal plane anchor to the scene
            content.add(anchor)
            content.camera = .spatialTracking
        } placeholder: {
            ProgressView()
        }
        .edgesIgnoringSafeArea(.all)
    }
}

#Preview {
    RealityLakeView()
}
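A direction I'm experimenting with, in case it helps frame the question: moving the per-item entity toggling into RealityView's update closure so the view reacts when the @Query results change. A sketch, assuming lakeScene is kept in @State so the update closure can reach it (I haven't confirmed this is the intended fix):

RealityView { content in
    // ...existing one-time setup: load the Lake scene, add the anchor, add "Island"...
} update: { content in
    // Re-evaluate which entities should be attached whenever observed state (including `items`) changes.
    guard let anchor = content.entities.first else { return }
    for item in items {
        let existing = anchor.findEntity(named: item.value)
        if item.enabled, existing == nil,
           let entity = lakeScene?.findEntity(named: item.value) {
            anchor.addChild(entity)
        } else if !item.enabled {
            existing?.removeFromParent()
        }
    }
}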
My ARViewContainer code is not working. I don't know how to debug the issue, and I can't see where my results are going. I need help resolving this, please help me debug. See code below:
Hi,
Since RealityKit 4 now supports blend shapes, I was wondering if there are any workflow or tooling recommendations for baking/exporting them into a USDZ.
Are Blender or Cinema 4D capable of doing that out of the box, or should we look into NVIDIA Omniverse (https://docs.omniverse.nvidia.com/connect/latest/blender/manual.htm)?
So far this topic seems very sparsely documented and I would appreciate any hints. Thank you!
I'm trying to implement a prototype that renders virtual objects, in a mixed immersive space, on the camera frames captured by CameraFrameProvider.
Here is what I have done:
Get the camera's intrinsics from frame.primarySample.parameters.intrinsics
Get the camera's extrinsics from frame.primarySample.parameters.extrinsics
Get the device anchor via worldTrackingProvider.queryDeviceAnchor(atTimestamp: CACurrentMediaTime())
Set up a RealityKit.RealityRenderer to render virtual objects on the captured camera frames
let realityRenderer = try RealityKit.RealityRenderer()
realityRenderer.cameraSettings.colorBackground = .outputTexture()
let cameraEntity = PerspectiveCamera()
// see https://developer.apple.com/forums/thread/770235
let cameraTransform = deviceAnchor.originFromAnchorTransform * extrinsics.inverse
cameraEntity.setTransformMatrix(cameraTransform, relativeTo: nil)
cameraEntity.camera.near = 0.01
cameraEntity.camera.far = 100
cameraEntity.camera.fieldOfViewOrientation = .horizontal
// manually calculated based on camera intrinsics
cameraEntity.camera.fieldOfViewInDegrees = 105
realityRenderer.entities.append(cameraEntity)
realityRenderer.activeCamera = cameraEntity
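For context, the "manually calculated" field of view above comes from the pinhole model, roughly like this (a sketch; imageWidth is the camera frame width in pixels, which I read from the sample's format):

// Horizontal FOV from the intrinsics: fov = 2 * atan(imageWidth / (2 * fx)).
let fx = intrinsics.columns.0.x                       // focal length in pixels
let horizontalFOV = 2 * atan(imageWidth / (2 * fx)) * 180 / Float.pi
cameraEntity.camera.fieldOfViewInDegrees = horizontalFOV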
With this setup, virtual objects that should be visible in the camera frames are clipped out by the camera transform.
If I use deviceAnchor.originFromAnchorTransform as the camera transform, virtual objects are rendered on the camera frames, but at the wrong positions (I think this is because the camera extrinsics aren't being used to move the camera to the correct pose).
My question is: how should the camera extrinsic matrix be used for this purpose?
Do the camera extrinsics describe an orientation close to the device anchor's, with some small rotation and position offset? Here are the extrinsics from one camera frame. It looks like the directions of the Y-axis and Z-axis are flipped by the extrinsics, so the camera ends up pointing in the wrong direction.
simd_float4x4([[0.9914258, 0.012555369, -0.13006608, 0.0], // X-axis
[-0.0009778949, -0.9946325, -0.10346654, 0.0], // Y-axis
[-0.13066702, 0.10270659, -0.98609203, 0.0], // Z-axis
[0.024519, -0.019568002, -0.058280986, 1.0]]) // translation
Hello
RemoteDeviceIdentifier returns nil, which crashes the HoverEffect sample project.
I have visionOS 26 beta 2 on both devices.
What is the correct way to run this code sample?
Hi Nathaniel,
I spoke with you yesterday in the WWDC lab. Thanks for chatting with me! Is it possible to get a link to a doc that lists the key metrics I'd find in a RealityKit trace, so I can tell whether a metric is exceeding limits and probably causing a problem? Right now I just see numbers and have no idea if a metric is high or low :). This is specifically for a visionOS app.
Thanks,
Bob
After re-launching the immersive space in my app 5-10 times, the WorldTrackingProvider stops working. Only restarting the app will allow it to start working again.
Only on device, not the simulator.
I get these errors when it happens:
The device_anchor can only be queried when the world tracking provider is running.
ARPredictorRemoteService <0x107cbb5e0>: Service configured with error: Error Domain=com.apple.arkit.error Code=501 "(null)"
Remote Service was invalidated: <ARPredictorRemoteService: 0x107cbb5e0>, will stop all data_providers.
ARRemoteService: remote object proxy failed with error: Error Domain=NSCocoaErrorDomain Code=4099 "The connection to service with pid 81 named com.apple.arkit.service.session was invalidated from this process." UserInfo={NSDebugDescription=The connection to service with pid 81 named com.apple.arkit.service.session was invalidated from this process.}
ARRemoteService: weak self released before invalidation
@Observable class VisionPro {
    let session = ARKitSession()
    let worldTracking = WorldTrackingProvider()

    func transformMatrix() async -> simd_float4x4 {
        guard let deviceAnchor = worldTracking.queryDeviceAnchor(atTimestamp: CACurrentMediaTime())
        else { return .init() }
        return deviceAnchor.originFromAnchorTransform
    }

    func runArkitSession() async {
        Task {
            try? await session.run([worldTracking])
        }
    }
}
which I call from my RealityView:
.task {
await visionPro.runArkitSession()
}
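A mitigation I'm testing, though I haven't confirmed it addresses the underlying issue: stop the session when the immersive space goes away so each relaunch starts from a clean run, and guard on the provider state before querying the device anchor.

// In the immersive space's root view.
.onDisappear {
    visionPro.session.stop()
}

// In transformMatrix(), before querying the device anchor:
guard worldTracking.state == .running else { return .init() }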
If I understand correctly, a new Enterprise API has been introduced in visionOS 26 that allows fixing windows to the user's frame of reference, implementing something like a "head-up display" where the window tracks the user's movements.
Is this API only available to enterprise applications, and if so, is there a plan to make it available for every kind of app?
Since only the user can take a screenshot using the Apple Vision Pro's top buttons, the only workaround available to an immersive app that needs a screenshot to document the user's creative interior design choices is:
ask the user to take a screenshot
wait until the user taps a button indicating the screenshot has been taken
have the app ask the user to select the screenshot when it opens the PhotoPicker
when the user presses Done, the screenshot is handed off to the app.
One wonders why there is no Apple API for doing this in a simple, privacy-protective way, such as:
When called, the API captures the screenshot in Apple-secured memory
The API displays the screenshot to the user with appropriate privacy warnings and asks whether the user wants to:
a. share this screenshot with the app,
b. cancel, or
c. retake the screenshot
If the user approves, the app receives the screenshot.
Hi guys,
I noticed that Apple created a really engaging visual effect for browsing spatial videos in the app. The video appears embedded in a glass panel with glowing edges and even shows a parallax effect as you move around. When I tried to display the stereo video using RealityView, however, the video entity always floats above the panel.
May I ask how visionOS implements this effect? Is there any approach to achieving it, or example code I can use in my own project?
Thanks!
Hi, I am trying to implement something simple: letting people share their Spatial Photos with others (just like this post). I encountered the same issue as that poster, but the answer there doesn't help me.
Briefly, I am using CGImageSource to extract the paired leftImage and rightImage from one fetched spatial photo:
let photos = PHAsset.fetchAssets(with: .image, options: nil)
// enumerating photos ....
if asset.mediaSubtypes.contains(PHAssetMediaSubtype.spatialMedia) {
spatialAsset = asset
}
// other code show below
I can fetch left and right images from a native Spatial Photo (taken by an Apple Vision Pro or iPhone 15+), but it doesn't work on a generated spatial photo (the 2D -> 3D feature in Photos).
// imageCount is 1 when it comes to generated spatial photo
let imageCount = CGImageSourceGetCount(source)
I searched the net and someone says the generated version contains a depth image instead of a left/right pair. But I still cannot extract any depth image from the image source.
The full code is below; the image-pair extraction stops at "no groups found":
func extractPairedImage(phAsset: PHAsset, completion: @escaping (StereoImagePair?) -> Void) {
    let options = PHImageRequestOptions()
    options.isNetworkAccessAllowed = true
    options.deliveryMode = .highQualityFormat
    options.resizeMode = .none
    options.version = .original

    PHImageManager.default().requestImageDataAndOrientation(for: phAsset, options: options) {
        imageData, _, _, _ in
        guard let imageData,
              let imageSource = CGImageSourceCreateWithData(imageData as CFData, nil)
        else {
            completion(nil)
            return
        }
        let stereoImagePair = stereoImagePair(from: imageSource)
        completion(stereoImagePair)
    }
}

func stereoImagePair(from source: CGImageSource) -> StereoImagePair? {
    guard let properties = CGImageSourceCopyProperties(source, nil) as? [CFString: Any] else {
        return nil
    }
    let imageCount = CGImageSourceGetCount(source)
    print(String(format: "%d images found", imageCount))

    guard let groups = properties[kCGImagePropertyGroups] as? [[CFString: Any]] else {
        /// function returns here
        print("no groups found")
        return nil
    }
    guard
        let stereoGroup = groups.first(where: {
            let groupType = $0[kCGImagePropertyGroupType] as! CFString
            return groupType == kCGImagePropertyGroupTypeStereoPair
        })
    else {
        return nil
    }
    guard let leftIndex = stereoGroup[kCGImagePropertyGroupImageIndexLeft] as? Int,
          let rightIndex = stereoGroup[kCGImagePropertyGroupImageIndexRight] as? Int,
          let leftImage = CGImageSourceCreateImageAtIndex(source, leftIndex, nil),
          let rightImage = CGImageSourceCreateImageAtIndex(source, rightIndex, nil),
          let leftProperties = CGImageSourceCopyPropertiesAtIndex(source, leftIndex, nil),
          let rightProperties = CGImageSourceCopyPropertiesAtIndex(source, rightIndex, nil)
    else {
        return nil
    }
    return (leftImage, rightImage, self.identifier)
}
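One thing I also tried, to check the depth theory (a sketch using the ImageIO auxiliary-data APIs; I may be probing the wrong index or type):

import ImageIO

// Probe the generated spatial photo for auxiliary depth/disparity payloads
// instead of a stereo-pair group.
func hasDepthData(in source: CGImageSource) -> Bool {
    let depth = CGImageSourceCopyAuxiliaryDataInfoAtIndex(source, 0, kCGImageAuxiliaryDataTypeDepth)
    let disparity = CGImageSourceCopyAuxiliaryDataInfoAtIndex(source, 0, kCGImageAuxiliaryDataTypeDisparity)
    return depth != nil || disparity != nil
}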
Any suggestion? Thanks
visionOS 2.4
Hi everyone,
I'm creating an educational App that allows doing computational design in an immersive environment with the Vision Pro. The App is free and can be found here:
https://apps.apple.com/us/app/arcade-topology/id6742103633
The problem I have is that the voxel mesh I currently create uses ModelEntity, and I recently read that this is terrible for scalability. I already start to see issues when I try to use thousands of voxels. I also read somewhere that I should take advantage of the GPU and use Metal to that end. I was wondering if someone could point me to a tutorial or article that discusses this. In essence, I need to create a 3D voxel mesh, and those voxels have to update their opacity within an iterative loop.
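While I look into Metal, the intermediate step I'm trying (assuming each voxel is currently its own ModelEntity with its own mesh) is to share a single MeshResource and material across all voxels and drive opacity per voxel with OpacityComponent:

// Share one mesh and one material across every voxel instead of generating new ones.
let sharedMesh = MeshResource.generateBox(size: 0.01)
let sharedMaterial = UnlitMaterial(color: .white)

let voxelRoot = Entity()
for position in voxelPositions {                      // [SIMD3<Float>], placeholder data
    let voxel = ModelEntity(mesh: sharedMesh, materials: [sharedMaterial])
    voxel.position = position
    voxelRoot.addChild(voxel)
}

// Inside the iterative loop, update opacity per voxel:
for case let voxel as ModelEntity in voxelRoot.children {
    voxel.components.set(OpacityComponent(opacity: 0.4))   // value computed per iteration
}

I don't know how far this scales, which is why I'd appreciate pointers to a Metal-based approach.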
Thanks!
—Alejandro
In several of our visionOS apps, we readjust our scenes to the user's eye level (their head). But we have encountered issues whereby the WorldTrackingProvider returns bad/incorrect positions for the first few frames.
See the code below, which you can copy-paste into any immersive space. Relaunch the space and observe that the numberOfBadWorldInfos value is inconsistent.
a. what is the most reliable way to get the devices's position?
b. is this indeed a bug?
c. are we using worldInfo improperly?
d. as a workaround, in our apps we let 10 frames pass before using worldInfo; should we set our threshold differently? (A variation we're considering is sketched after the code below.)
import ARKit
import Combine
import OSLog
import SwiftUI
import RealityKit
import RealityKitContent

let SUBSYSTEM = Bundle.main.bundleIdentifier!

struct ImmersiveView: View {
    let logger = Logger(subsystem: SUBSYSTEM, category: "ImmersiveView")
    let session = ARKitSession()
    let worldInfo = WorldTrackingProvider()

    @State var sceneUpdateSubscription: EventSubscription? = nil
    @State var deviceTransform: simd_float4x4? = nil
    @State var numberOfBadWorldInfos = 0
    @State var isBadWorldInfoLoged = false

    var body: some View {
        RealityView { content in
            try? await session.run([worldInfo])
            sceneUpdateSubscription = content.subscribe(to: SceneEvents.Update.self) { event in
                guard let pose = worldInfo.queryDeviceAnchor(atTimestamp: CACurrentMediaTime()) else {
                    return
                }
                // `worldInfo` does not return correct values for the first few frames (exact number of frames is unknown)
                // - known SO: https://stackoverflow.com/questions/78396187/how-to-determine-the-first-reliable-position-of-the-apple-vision-pro-device
                deviceTransform = pose.originFromAnchorTransform
                if deviceTransform!.columns.3.y < 1.6 {
                    numberOfBadWorldInfos += 1
                    logger.warning("\(#function) \(#line) deviceTransform.columns.3.y \(deviceTransform!.columns.3.y), numberOfBadWorldInfos \(numberOfBadWorldInfos)")
                } else {
                    if !isBadWorldInfoLoged {
                        logger.info("\(#function) \(#line) deviceTransform.columns.3.y \(deviceTransform!.columns.3.y), numberOfBadWorldInfos \(numberOfBadWorldInfos)")
                    }
                    isBadWorldInfoLoged = true // stop logging.
                }
            }
        }
    }
}
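The variation of workaround (d) we're considering, for reference, gates on the provider state plus a plausibility check instead of a fixed frame count. A sketch; the 0.5 m threshold is our own assumption:

// Only trust the pose once the provider reports .running and the reported
// head height looks plausible, instead of skipping a fixed number of frames.
guard worldInfo.state == .running,
      let pose = worldInfo.queryDeviceAnchor(atTimestamp: CACurrentMediaTime()) else {
    return
}
let transform = pose.originFromAnchorTransform
guard transform.columns.3.y > 0.5 else {
    return   // still inside the unreliable startup window
}
deviceTransform = transform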
My development team admin requested the Enterprise API for camera access on the Vision Pro. It was granted, we got a license for usage, and we received instructions with next steps for integrating it.
We did the following, but with no luck:
Even when I download and run the sample project for "Accessing the Main Camera" and follow the exact instructions mentioned here: https://developer.apple.com/documentation/visionos/accessing-the-main-camera
I am unable to receive camera frames.
I added the capability, created a new provisioning profile with this access, added the entitlement to the Info.plist and entitlements file, replaced the dummy license file with the one we were sent, and I have a matching bundle identifier and development certificate, but it is still not showing camera access for some reason.
"Main Camera Access" shows up in our Signing & Capabilities tab, and we also added NSMainCameraDescription to the Info.plist and allow access when opening the app. None of this works, not in my app and not in the sample app that I downloaded and ran on the Vision Pro after replacing the dummy license file.
We've recently discovered that our app crashes on startup on the latest visionOS 2.0 beta 5 (22N5297g) build. In fact, the entire field of view would dim down and visionOS would then restart, showing the Apple logo. Interestingly, no app crash is reported by Xcode during debug.
After investigation, we have isolated the issue to a specific USDZ asset in our app. Loading it in a sample, blank project also causes visionOS to reliably crash, or become extremely unresponsive with rendering artifacts everywhere.
This looks like a potentially serious issue. Even if the asset is problematic, loading it should not crash the entire OS. We have filed feedback FB14756285, along with a demo project. Hopefully someone can take a look. Thanks!
I am working with MeshAnchors and I am having trouble getting to the classification of the triangles/faces.
This post references MeshAnchor.Geometry, and that struct does have a property named "classifications", but it is of type GeometrySource. I cannot find any classification information in GeometrySource. Am I missing something there?
I think I am looking for something of type MeshAnchor.MeshClassification, but I cannot find any struct with that as a property.
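For reference, this is how I've been trying to read it, assuming the classifications GeometrySource stores one UInt8 per face and that MeshAnchor.MeshClassification can be built from that raw value (both of which I'm unsure about):

// Read the classification for one face out of the GeometrySource's Metal buffer.
func classification(at faceIndex: Int, in geometry: MeshAnchor.Geometry) -> MeshAnchor.MeshClassification {
    guard let source = geometry.classifications, faceIndex < source.count else { return .none }
    let pointer = source.buffer.contents()
        .advanced(by: source.offset + faceIndex * source.stride)
    let value = pointer.assumingMemoryBound(to: UInt8.self).pointee
    return MeshAnchor.MeshClassification(rawValue: Int(value)) ?? .none
}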
We’re using the enterprise API for spatial barcode/QR code scanning in the Vision Pro app, but we often get invalid values for the barcode anchor from the API, leading to jittery barcode positions in the UI. The code we’re using is attached below.
import SwiftUI
import RealityKit
import ARKit
import Combine

struct ImmersiveView: View {
    @State private var arkitSession = ARKitSession()
    @State private var root = Entity()
    @State private var fadeCompleteSubscriptions: Set = []

    var body: some View {
        RealityView { content in
            content.add(root)
        }
        .task {
            // Check if barcode detection is supported; otherwise handle this case.
            guard BarcodeDetectionProvider.isSupported else { return }

            // Specify the symbologies you want to detect.
            let barcodeDetection = BarcodeDetectionProvider(symbologies: [.code128, .qr, .upce, .ean13, .ean8])
            do {
                try await arkitSession.requestAuthorization(for: [.worldSensing])
                try await arkitSession.run([barcodeDetection])
                print("Barcode scanning started")

                for await update in barcodeDetection.anchorUpdates where update.event == .added {
                    let anchor = update.anchor
                    // Play an animation to indicate the system detected a barcode.
                    playAnimation(for: anchor)
                    // Use the anchor's decoded contents and symbology to take action.
                    print(
                        """
                        Payload: \(anchor.payloadString ?? "")
                        Symbology: \(anchor.symbology)
                        """)
                }
            } catch {
                // Handle the error.
                print(error)
            }
        }
    }

    // Define this function in ImmersiveView.
    func playAnimation(for anchor: BarcodeAnchor) {
        guard let scene = root.scene else { return }

        // Create a plane sized to match the barcode.
        let extent = anchor.extent
        let entity = ModelEntity(mesh: .generatePlane(width: extent.x, depth: extent.z), materials: [UnlitMaterial(color: .green)])
        entity.components.set(OpacityComponent(opacity: 0))

        // Position the plane over the barcode.
        entity.transform = Transform(matrix: anchor.originFromAnchorTransform)
        root.addChild(entity)

        // Fade the plane in and out.
        do {
            let duration = 0.5
            let fadeIn = try AnimationResource.generate(with: FromToByAnimation<Float>(
                from: 0,
                to: 1.0,
                duration: duration,
                isAdditive: true,
                bindTarget: .opacity)
            )
            let fadeOut = try AnimationResource.generate(with: FromToByAnimation<Float>(
                from: 1.0,
                to: 0,
                duration: duration,
                isAdditive: true,
                bindTarget: .opacity))
            let fadeAnimation = try AnimationResource.sequence(with: [fadeIn, fadeOut])

            _ = scene.subscribe(to: AnimationEvents.PlaybackCompleted.self, on: entity, { _ in
                // Remove the plane after the animation completes.
                entity.removeFromParent()
            }).store(in: &fadeCompleteSubscriptions)

            entity.playAnimation(fadeAnimation)
        } catch {
            print("Error")
        }
    }
}
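One mitigation we're considering, though it doesn't address why the API returns invalid values (a sketch): also consume .updated anchor events and low-pass filter the position before moving the entity, so jitter is hidden in the UI.

// Simple exponential smoothing of the barcode position, keyed by anchor id.
// Keep the dictionary in whatever state object owns the entities.
var smoothedPositions: [UUID: SIMD3<Float>] = [:]
let alpha: Float = 0.2   // smaller = smoother but laggier

func smoothedPosition(for anchor: BarcodeAnchor) -> SIMD3<Float> {
    let column = anchor.originFromAnchorTransform.columns.3
    let current = SIMD3<Float>(column.x, column.y, column.z)
    let previous = smoothedPositions[anchor.id] ?? current
    let blended = previous + alpha * (current - previous)
    smoothedPositions[anchor.id] = blended
    return blended
}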
}