I've encountered an unexpected crash with RoomPlan on iOS 16 devices. The odd part is that the code is protected by an availability check, since I'm using newer RoomPlan features.
Xcode error
dyld[40588]: Symbol not found: _$s8RoomPlan08CapturedA0V16USDExportOptionsV5modelAEvgZ
I can repro using the Apple sample code.
https://developer.apple.com/documentation/roomplan/create-a-3d-model-of-an-interior-room-by-guiding-the-user-through-an-ar-experience
Modify RoomCaptureViewController.swift as follows.
Remove
try finalResults?.export(to: destinationURL, exportOptions: .parametric)
Add
if #available(iOS 17.0, *) {
    try finalResults?.export(to: destinationURL, exportOptions: .model)
} else {
    try finalResults?.export(to: destinationURL, exportOptions: .parametric)
}
I would have expected this code to at least compile and run on older devices.
When the app was targeting iOS 15, the availability checks worked as expected and the app was able to launch properly.
iOS currently restricts background Bluetooth advertising and scanning in order to preserve battery life and protect user privacy. While these restrictions serve important purposes, they also limit legitimate use cases where users have explicitly opted in to proximity-based experiences.
The core challenge is that modern social applications need a way to detect when users are physically present at the same location or event without requiring every participant to keep their app in the foreground. Under the current system, background BLE advertising is heavily throttled and can only transmit a limited payload, background scanning intervals are sparse and unpredictable, peer-to-peer proximity detection cannot be maintained reliably when apps are in the background, and Background App Refresh is non-deterministic, making any kind of time-based proximity validation impossible.
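For reference, this is roughly what opt-in advertising looks like with CoreBluetooth today (a minimal sketch; the service UUID and class name are placeholders), with comments marking what iOS changes once the app moves to the background:
import CoreBluetooth

final class ProximityAdvertiser: NSObject, CBPeripheralManagerDelegate {
    // Placeholder service UUID, used only for illustration.
    private let serviceUUID = CBUUID(string: "8E7C1A2B-0000-1000-8000-00805F9B34FB")
    private var peripheralManager: CBPeripheralManager?

    func start() {
        peripheralManager = CBPeripheralManager(delegate: self, queue: nil)
    }

    func peripheralManagerDidUpdateState(_ peripheral: CBPeripheralManager) {
        guard peripheral.state == .poweredOn else { return }
        // In the foreground this advertises the service UUID directly.
        // Once the app is backgrounded, iOS drops the local name and moves the
        // service UUID into an overflow area that only other iOS apps explicitly
        // scanning for this UUID can discover; this is one source of the
        // throttling described above.
        peripheral.startAdvertising([
            CBAdvertisementDataServiceUUIDsKey: [serviceUUID],
            CBAdvertisementDataLocalNameKey: "ProximitySession" // ignored in background
        ])
    }
}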
A proposed enhancement would be to introduce an “Enhanced Proximity Permission.” This would allow developers to enable reliable background BLE advertising and scanning for declared time windows, such as a maximum of eight hours. It would also allow devices running the same app to detect each other’s proximity using ephemeral, rotating identifiers that preserve privacy, with clear user consent and prominent indicators whenever the feature is active.
Unlocking this capability would open up new categories of applications. Live events could offer automatic attendance tracking at concerts, conferences, or sports venues. Retail environments could support opt-in foot traffic analysis and dwell-time insights. Social apps could allow users to find friends at festivals, campuses, or other large venues. Safety applications could extend to crowd density monitoring and contact tracing beyond COVID-era needs. Gaming could offer real-world multiplayer experiences based on physical proximity, and transportation providers could verify rideshare pickups or measure public transit flows automatically.
Privacy safeguards would remain central. Permissions would be time-boxed and expire after an event or session. A mandatory visual indicator would be displayed whenever proximity tracking is active. A user-facing dashboard would show all apps granted enhanced proximity access. Permissions would automatically be revoked after a period of non-use, and only ephemeral tokens, not permanent identifiers, would be broadcast.
The industry impact would be significant. With this enhancement, iOS could power the next generation of location-aware social platforms while maintaining Apple’s leadership in privacy through explicit user control and transparency. Current alternatives, such as requiring users to keep apps in the foreground or deploying dedicated hardware beacons, produce poor user experiences and constrain innovation in spatial computing and social applications.
Can anyone from Apple consider this change? Having to buy iBeacons is brutal and means slower adoption. Please reconsider this for users who opt in.
I am developing a Unity application for the Apple Vision Pro using PolySpatial and RealityKit integration.
The goal is to create a graspable object (for example, a handheld cube) that includes a secondary camera. When the user grabs and moves the object, the secondary camera should render its view to a RenderTexture, which is displayed on a quad attached to the object, simulating a live camera screen.
In the Unity Editor, this setup works correctly. The RenderTexture updates in real time, and the quad displays the camera’s view as expected.
However, when building and running the application on the Vision Pro, the quad only displays the clear background color of the secondary camera. No scene content appears. The graspable interaction itself works fine: the object can be grabbed and moved as intended.
Steps I have taken:
Created a new layer (CameraFeed) and assigned the relevant objects to it.
Set the secondary camera’s culling mask to render only the CameraFeed layer.
Assigned the RenderTexture as the camera’s target texture.
Applied the RenderTexture to an Unlit/Texture material on a quad.
Confirmed the camera is active and correctly positioned relative to the object.
From my research, it appears that once objects are managed by RealityKit through PolySpatial (for example, made graspable), they are no longer rendered through Unity's normal camera pipeline. Only the main XR camera (managed by RealityKit) seems able to see these objects. Secondary Unity cameras cannot render RealityKit-synced content to a RenderTexture. If this is correct, it seems there is currently no way to implement a true live secondary camera feed showing graspable objects on Vision Pro using Unity PolySpatial.
My questions are:
Is there any official way to enable multiple camera rendering of RealityKit-managed objects through PolySpatial?
Are there known workarounds to simulate a live camera feed that still allows objects to be grabbed?
Has anyone found alternative design patterns or methods for this kind of interaction?
Environment: Unity 6.0, PolySpatial 2.2.4, Apple visionOS XR 2.2.4
Any insight or suggestions would be greatly appreciated.
Thank you.
The initial startup of visionOS 26 after install is glacially slow.
Hi,
After upgrading to 2.4.1 (from 1.0), my Vision Pro gets stuck on the "Retrieving configuration" screen. The Apple Store didn't support my case since the device was sold in the USA and the product isn't yet available on the Italian market. I don't have a Developer Strap; how can I resolve the issue?
Thank you
I have an entity that was created using Mixamo, and it has an animation.
After the animation completes, the robot's mesh is not where the entity is positioned.
I want to do something like this: when the animation finishes, set the root entity's transform to the mesh's transform. There are no transformations applied to any of the children of this root of the model, which means that the transformations are applied to the skeleton by the playing of animations.
Is there a way to apply the final position of the skeleton's root to the root entity, so that the entity is positioned where the animation ended just before the next animation plays?
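For context, the kind of fixup I have in mind looks roughly like this (a hedged sketch: the skinned ModelEntity name "robot" and the "Hips" root joint name are assumptions about my Mixamo rig, and the space conversion is simplified):
import SwiftUI
import RealityKit

func installRootMotionFixup(for rootEntity: Entity, in content: RealityViewContent) {
    _ = content.subscribe(to: AnimationEvents.PlaybackCompleted.self) { _ in
        guard let model = rootEntity.findEntity(named: "robot") as? ModelEntity,
              let hips = model.jointNames.firstIndex(where: { $0.contains("Hips") }),
              let parent = rootEntity.parent
        else { return }

        // The root joint's transform is expressed in the model's space.
        let offset = model.jointTransforms[hips].translation

        // Shift the root entity to where the skeleton ended up, then clear the
        // joint offset so the next clip starts from the new origin.
        rootEntity.position += parent.convert(direction: offset, from: model)
        model.jointTransforms[hips].translation = .zero
    }
}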
I am following this example to create a stereoscopic image: https://developer.apple.com/documentation/visionos/creating-stereoscopic-image-in-visionos
I would also like to add corner radius to the stereoscopic RealityView. With ordinary SwiftUI views, we typically just use .clipShape(RoundedRectangle(cornerRadius: 32)):
struct StereoImage: View {
    var body: some View {
        let spacing: CGFloat = 10.0
        let padding: CGFloat = 40.0

        VStack(spacing: spacing) {
            Text("Stereoscopic Image Example")
                .font(.largeTitle)

            RealityView { content in
                let creator = StereoImageCreator()
                guard let entity = await creator.createImageEntity() else {
                    print("Failed to create the stereoscopic image entity.")
                    return
                }
                content.add(entity)
            }
            .frame(depth: .zero)
        }
        .padding(padding)
        .clipShape(RoundedRectangle(cornerRadius: 32)) // <= HERE!
    }
}
This doesn't seem to actually clip the RealityView shown in the sample above. I am guessing this is because the box in the RealityView has a non-zero z scale, which means it isn't on the same "layer" as its SwiftUI containers, and thus isn't clipped by the modifiers applied to the containers.
How can I properly apply a clipshape to RealityViews like this? Thanks!
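One direction I'm considering instead (a hedged sketch, assuming I can change the mesh that StereoImageCreator builds, and that stereoMaterial stands in for its stereoscopic material) is to do the rounding in RealityKit rather than SwiftUI, by generating the image plane with a corner radius:
import RealityKit

func makeRoundedImageEntity(stereoMaterial: any Material) -> ModelEntity {
    // Dimensions in meters; the corner radius is baked into the mesh itself,
    // so no SwiftUI clipping is needed.
    let mesh = MeshResource.generatePlane(width: 1.0,
                                          height: 0.5,
                                          cornerRadius: 0.05)
    return ModelEntity(mesh: mesh, materials: [stereoMaterial])
}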
Hello,
I'm currently trying to make a collaborative app, but it only works with RealityView; when I tried to use a CompositorLayer as shown below, the Personas disappeared.
ImmersiveSpace(id: "ImmersiveSpace-Metal") {
    CompositorLayer(configuration: MetalLayerConfiguration()) { layerRenderer in
        SpatialRenderer_InitAndRun(layerRenderer)
    }
}
Is there any potential solution to see Personas in a Metal view?
Thanks in advance!
I noticed that when I drag the menu window in an immersive view, the entities behind it become semi-transparent, and the boundary between virtual and real-world objects is very pronounced.
May I ask how visionOS implements this effect? Is there any API or technique I can use in my own code to enable the same semi-transparent overlay, even when I am not dragging the menu window?
Hi all,
I am currently developing a game in Unity for VisionOS and I'd prefer to use the PSVR2 controllers as a source of the raycast for menu selection instead of the default VisionOS gaze for my specific use case. Is there a way to access the IMU of PSVR2 controllers to do this instead of just using eyegaze + controller click for selection? Is there a specific configuration for GCController from within Unity maybe?
Thank you!
We’re trying to build a custom player for Unity. For this, we’re using AVPlayer with AVPlayerItemVideoOutput to get textures. However, we noticed that playback is not smooth and the stream often freezes.
For testing, we used this 8K video:
https://deovr.com/nwfnq1
The video was played using the following code:
@objc public func playVideo(urlString: String) {
    guard let url = URL(string: urlString) else { return }

    let pItem = AVPlayerItem(url: url)
    playerItem = pItem
    pItem.preferredForwardBufferDuration = 10.0

    let pixelBufferAttributes: [String: Any] = [
        kCVPixelBufferPixelFormatTypeKey as String: kCVPixelFormatType_420YpCbCr8BiPlanarVideoRange,
        kCVPixelBufferMetalCompatibilityKey as String: true,
    ]
    let output = AVPlayerItemVideoOutput(pixelBufferAttributes: pixelBufferAttributes)
    pItem.add(output)

    playerItemObserver = pItem.observe(\.status) { [weak self] pItem, _ in
        guard pItem.status == .readyToPlay else { return }
        self?.playerItemObserver = nil
        self?.player.play()
    }

    player = AVPlayer(playerItem: pItem)
    player.currentItem?.preferredPeakBitRate = 35_000_000
}
When AVPlayerItemVideoOutput is attached, the video stutters and the log looks like this:
🟢 Playback likely to keep up
🟡 Buffer ahead: 4.08s | buffer: 4.08s
🟡 Buffer ahead: 4.08s | buffer: 4.08s
🟡 Buffer ahead: -0.07s | buffer: 0.00s
🟡 Buffer ahead: 2.94s | buffer: 3.49s
🟡 Buffer ahead: 2.50s | buffer: 4.06s
🟡 Buffer ahead: 1.74s | buffer: 4.30s
🟡 Buffer ahead: 0.74s | buffer: 4.30s
🟠 Playback may stall
🛑 Buffer empty
🟡 Buffer ahead: 0.09s | buffer: 4.30s
🟠 Playback may stall
🟠 Playback may stall
🛑 Buffer empty
🟠 Playback may stall
🟣 Buffer full
🟡 Buffer ahead: 1.41s | buffer: 1.43s
🟡 Buffer ahead: 1.41s | buffer: 1.43s
🟡 Buffer ahead: 1.07s | buffer: 1.43s
🟣 Buffer full
🟡 Buffer ahead: 0.47s | buffer: 1.65s
🟠 Playback may stall
🛑 Buffer empty
🟡 Buffer ahead: 0.10s | buffer: 1.65s
🟠 Playback may stall
🟡 Buffer ahead: 1.99s | buffer: 2.03s
🟡 Buffer ahead: 1.99s | buffer: 2.03s
🟣 Buffer full
🟣 Buffer full
🟡 Buffer ahead: 1.41s | buffer: 2.00s
🟡 Buffer ahead: 0.68s | buffer: 2.27s
🟡 Buffer ahead: 0.09s | buffer: 2.27s
🟠 Playback may stall
🛑 Buffer empty
🟠 Playback may stall
When we remove AVPlayerItemVideoOutput from the player, the video plays smoothly, and the output looks like this:
🟢 Playback likely to keep up
🟡 Buffer ahead: 1.94s | buffer: 1.94s
🟡 Buffer ahead: 1.94s | buffer: 1.94s
🟡 Buffer ahead: 1.22s | buffer: 2.22s
🟡 Buffer ahead: 1.05s | buffer: 3.05s
🟡 Buffer ahead: 1.12s | buffer: 4.12s
🟡 Buffer ahead: 1.18s | buffer: 5.18s
🟡 Buffer ahead: 0.72s | buffer: 5.72s
🟡 Buffer ahead: 1.27s | buffer: 7.28s
🟡 Buffer ahead: 2.09s | buffer: 3.03s
🟡 Buffer ahead: 4.16s | buffer: 6.10s
🟡 Buffer ahead: 6.66s | buffer: 7.09s
🟡 Buffer ahead: 5.66s | buffer: 7.09s
🟡 Buffer ahead: 4.66s | buffer: 7.09s
🟡 Buffer ahead: 4.02s | buffer: 7.45s
🟡 Buffer ahead: 3.62s | buffer: 8.05s
🟡 Buffer ahead: 2.62s | buffer: 8.05s
🟡 Buffer ahead: 2.49s | buffer: 3.53s
🟡 Buffer ahead: 2.43s | buffer: 3.38s
🟡 Buffer ahead: 1.90s | buffer: 3.85s
We’ve tried different attribute settings for AVPlayerItemVideoOutput. We also removed all logic related to reading frame data, but the choppy playback still remained.
Can you advise whether this is a player issue or if we’re doing something wrong?
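For completeness, the frame-reading pattern we had in mind is the usual display-link pull, roughly like this (a hedged sketch; class and method names are placeholders, and a buffer is only copied when the output reports a new one):
import AVFoundation
import QuartzCore

final class FramePuller: NSObject {
    private let output: AVPlayerItemVideoOutput
    private var displayLink: CADisplayLink?

    init(output: AVPlayerItemVideoOutput) {
        self.output = output
        super.init()
    }

    func start() {
        let link = CADisplayLink(target: self, selector: #selector(tick(_:)))
        link.add(to: .main, forMode: .common)
        displayLink = link
    }

    func stop() {
        displayLink?.invalidate()
        displayLink = nil
    }

    @objc private func tick(_ link: CADisplayLink) {
        let itemTime = output.itemTime(forHostTime: link.targetTimestamp)
        guard output.hasNewPixelBuffer(forItemTime: itemTime),
              let pixelBuffer = output.copyPixelBuffer(forItemTime: itemTime,
                                                       itemTimeForDisplay: nil)
        else { return }
        // Hand `pixelBuffer` to the Metal/Unity texture upload here.
        _ = pixelBuffer
    }
}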
It looks like the acceptance of a nearby Apple Vision Pro device expires about one week after it is granted.
Since we provide our clients with a long-running app for walking through architecture, it's a shame that non-technical staff have to reconnect five devices every week to keep them working together.
Is there any workaround for this issue, or does it go straight to the wishlist?
Thanks for the support!
I have a visionOS 2 project created in Xcode 16. After updating to Xcode 26 beta 5, I can't build it any more; every build gets stuck mid-process, as the picture below shows:
I've already tried many methods to fix this, such as cleaning the build folder, but none of them work.
MacBook Air M2 / MacOS 26 beta5 / Xcode 26 beta5
I would like to integrate the Object Capture API with an ML model for analysis, so I will need to get the current frame as a CGImage for further processing.
Thanks in advance!
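For the conversion step itself, I assume the standard CoreImage route would work once I have a CVPixelBuffer for a frame (a hedged sketch; actually obtaining that buffer from the Object Capture session is the part I'm unsure about):
import CoreImage
import CoreVideo

// Reuse one context; creating a CIContext per frame is expensive.
let ciContext = CIContext()

func makeCGImage(from pixelBuffer: CVPixelBuffer) -> CGImage? {
    let ciImage = CIImage(cvPixelBuffer: pixelBuffer)
    return ciContext.createCGImage(ciImage, from: ciImage.extent)
}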
While using Screen Mirroring in developer mode within my immersive space, I noticed an alignment issue with the computer cursor (the transparent circle). When I move it toward an attachment view, the cursor remains horizontal instead of aligning with the surface of the attachment view. It shows correctly on a 2D window; it is only wrong on an attachment view.
Is this behavior a bug, or could it be caused by a missing or incorrect configuration on the attachment view?
Any help would be appreciated, thanks.
While developing a widget for the visionOS 26 beta, I found that it does not work normally when running on a real visionOS device, and an error appears:
Please adopt container background api
It is worth mentioning that this problem does not occur on the visionOS virtual machine.
Does anyone know what the reason and solution are, or whether this is a visionOS error that needs Feedback? Thank you!
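For reference, my understanding is that this error normally goes away once the widget's root view adopts the container background modifier, roughly like this (a hedged sketch; the view and entry types are placeholders):
import SwiftUI
import WidgetKit

struct SimpleEntry: TimelineEntry {
    let date: Date
}

struct MyWidgetEntryView: View {
    var entry: SimpleEntry

    var body: some View {
        Text(entry.date, style: .time)
            // Satisfies the "adopt container background" requirement.
            .containerBackground(.fill.tertiary, for: .widget)
    }
}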
Hi,
Since RealityKit 4 now supports blend shapes, I was wondering if there are any workflow or tooling recommendations for baking/exporting them into a USDZ.
Are Blender or Cinema 4D capable of doing that out of the box? Or should we look into NVIDIA Omniverse (https://docs.omniverse.nvidia.com/connect/latest/blender/manual.htm)?
So far this topic seems very sparsely documented and I would appreciate any hints. Thank you!
Hi,
I have used the template code for Plane Detection and placing models on them from here https://developer.apple.com/documentation/visionos/placing-content-on-detected-planes
This sample code did not copy the animations from the preview model to the placed model, so I modified it to do a manual copy of animations and textures. There is a function called materialize() that does this, and I was able to modify it so that the placed models are now animating. The issue appears when I apply gestures to them, like drag or rotate: for the models that go through this logic, I'm unable to add gestures even though I'm making sure that a CollisionComponent and an InputTargetComponent are set on the placed models. Has anyone been able to get this working, or is it even possible?
My materialize function
func materialize() -> PlacedObject {
    let shapes = previewEntity.components[CollisionComponent.self]!.shapes

    // Clone render content first as we need its materials
    let clonedRenderContent = renderContent.clone(recursive: true)
    print("To be finding main model: \(descriptor.displayName)")

    // Find the main model in preview hierarchy
    func findMainModel(_ entity: Entity) -> Entity? {
        if entity.name == descriptor.displayName.replacingOccurrences(of: " ", with: "_") {
            print("Found main model: \(entity.name)")
            return entity
        }
        for child in entity.children {
            if child.name == descriptor.displayName.replacingOccurrences(of: " ", with: "_") {
                print("Found main model in children: \(child.name)")
                return child
            }
        }
        return nil
    }

    // Clone hierarchy preserving structure, names, and materials
    func cloneHierarchy(_ entity: Entity) -> Entity {
        print("Cloning: \(entity.name)")
        let cloned: Entity
        if let model = entity as? ModelEntity {
            // Clone with recursive false to handle children manually
            cloned = model.clone(recursive: false)
            if let clonedModel = cloned as? ModelEntity,
               let originalMaterials = model.model?.materials {
                // Preserve the original model's materials
                clonedModel.model?.materials = originalMaterials
            }
        } else {
            cloned = Entity()
        }
        // Preserve name and transform
        cloned.name = entity.name
        cloned.transform = entity.transform
        // Clone children
        for child in entity.children {
            let clonedChild = cloneHierarchy(child)
            cloned.addChild(clonedChild)
        }
        return cloned
    }

    print("=== Cloning Preview Structure ===")
    // Clone the preview hierarchy with proper structure
    let clonedStructure = cloneHierarchy(previewEntity)

    // Find and use the main model
    if let mainModel = findMainModel(clonedStructure) {
        print("Using main model for PlacedObject")
        let modelEntity: ModelEntity
        if let asModel = mainModel as? ModelEntity {
            print("Using asModel ")
            modelEntity = asModel
        } else {
            modelEntity = ModelEntity()
            modelEntity.name = mainModel.name
            // Copy children and transforms
            for child in mainModel.children {
                modelEntity.addChild(child)
            }
            modelEntity.transform = mainModel.transform
        }
        // Add collision component here
        let collisionComponent = CollisionComponent(shapes: shapes, isStatic: false,
                                                    filter: CollisionFilter(group: PlacedObject.collisionGroup, mask: .all))
        modelEntity.components.set(collisionComponent)
        // Create the placed object
        let placedObject = PlacedObject(descriptor: descriptor, renderContentToClone: modelEntity, shapes: shapes)
        // Set input target on the placed object itself
        placedObject.components.set(InputTargetComponent(allowedInputTypes: [.direct, .indirect]))
        return placedObject
    } else {
        print("Fallback to original render content")
        let placedObject = PlacedObject(descriptor: descriptor, renderContentToClone: clonedRenderContent, shapes: shapes)
        placedObject.components.set(InputTargetComponent(allowedInputTypes: [.direct, .indirect]))
        return placedObject
    }
}
My PlacedObject class, where the init has the recursive cloning removed because it is handled in materialize():
class PlacedObject: Entity {
    let fileName: String

    // The 3D model displayed for this object.
    private let renderContent: ModelEntity

    static let collisionGroup = CollisionGroup(rawValue: 1 << 29)

    // The origin of the UI attached to this object.
    // The UI is gravity aligned and oriented towards the user.
    let uiOrigin = Entity()

    var affectedByPhysics = false {
        didSet {
            guard affectedByPhysics != oldValue else { return }
            if affectedByPhysics {
                components[PhysicsBodyComponent.self]!.mode = .static
            } else {
                components[PhysicsBodyComponent.self]!.mode = .static
            }
        }
    }

    var isBeingDragged = false {
        didSet {
            affectedByPhysics = !isBeingDragged
        }
    }

    var positionAtLastReanchoringCheck: SIMD3<Float>?

    var atRest = false

    init(descriptor: ModelDescriptor, renderContentToClone: ModelEntity, shapes: [ShapeResource]) {
        fileName = descriptor.fileName
        // renderContent = renderContentToClone.clone(recursive: true)
        renderContent = renderContentToClone
        super.init()

        name = renderContent.name

        // Apply the rendered content’s scale to this parent entity to ensure
        // that the scale of the collision shape and physics body are correct.
        scale = renderContent.scale
        renderContent.scale = .one

        // Make the object respond to gravity.
        let physicsMaterial = PhysicsMaterialResource.generate(restitution: 0.0)
        let physicsBodyComponent = PhysicsBodyComponent(shapes: shapes, mass: 1.0, material: physicsMaterial, mode: .static)
        components.set(physicsBodyComponent)
        components.set(CollisionComponent(shapes: shapes, isStatic: false,
                                          filter: CollisionFilter(group: PlacedObject.collisionGroup, mask: .all)))

        addChild(renderContent)
        addChild(uiOrigin)
        uiOrigin.position.y = extents.y / 2 // Position the UI origin in the object’s center.

        // Allow direct and indirect manipulation of placed objects.
        components.set(InputTargetComponent(allowedInputTypes: [.direct, .indirect]))

        // Add a grounding shadow to placed objects.
        renderContent.components.set(GroundingShadowComponent(castsShadow: true))
    }

    required init() {
        fatalError("`init` is unimplemented.")
    }
}
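For reference, the gesture attachment I would expect to work looks roughly like this (a trimmed, hedged sketch; the drag only resolves to entities that have both a CollisionComponent and an InputTargetComponent along the hit-tested hierarchy, so it at least confirms whether the placed models are hit-testable at all):
import SwiftUI
import RealityKit

struct PlacementGestureView: View {
    var body: some View {
        RealityView { content in
            // Placed objects get added to `content` elsewhere.
        }
        .gesture(
            DragGesture()
                .targetedToAnyEntity()
                .onChanged { value in
                    // Move the hit entity in its parent's space.
                    guard let parent = value.entity.parent else { return }
                    value.entity.position = value.convert(value.location3D,
                                                          from: .local,
                                                          to: parent)
                }
        )
    }
}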
Thanks
Do you retain a reference to the event subscriptions on your content (RealityViewContent)? For example, the ManipulationEvents docs from Apple use _ to discard the result. In theory the subscription should keep working while the content is alive.
_ = content.subscribe(to: ManipulationEvents.WillBegin.self) { event in
    event.entity.components[ModelComponent.self]?.materials[0] = SimpleMaterial(color: .blue, isMetallic: false)
}
_ = content.subscribe(to: ManipulationEvents.WillEnd.self) { event in
    event.entity.components[ModelComponent.self]?.materials[0] = SimpleMaterial(color: .red, isMetallic: false)
}
We could store these events in state. I've seen this in a few samples and apps.
@State var beginSubscription: EventSubscription?
...
beginSubscription = content.subscribe(to: ManipulationEvents.WillBegin.self) { event in
    event.entity.components[ModelComponent.self]?.materials[0] = SimpleMaterial(color: .blue, isMetallic: false)
}
The main advantage I see is that we can be more explicit about when we cancel the subscription. Are there other reasons to keep a reference to these subscriptions?
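One other concrete reason for holding the reference (hedged, but EventSubscription does expose cancel()): it lets you tear the subscription down explicitly, for example when the entity it colors goes away or the view's state changes:
// Stop receiving manipulation events once they no longer apply.
beginSubscription?.cancel()
beginSubscription = nil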
Hi, are we allowed to push the minimum supported platform in Package.swift up to iOS 18 to take advantage of the latest APIs?
And under the terms of the competition, can we use stock 3D USDZ assets?
Thank you!
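For context, raising the minimum platform in Package.swift looks roughly like this (a hedged sketch of a plain manifest; an App Playground's generated manifest has more fields, so treat it as illustrative only, and whether the rules allow it is the actual question):
// swift-tools-version: 6.0
import PackageDescription

let package = Package(
    name: "MySubmission",
    platforms: [
        .iOS(.v18) // raises the minimum deployment target to iOS 18
    ],
    targets: [
        .executableTarget(name: "MySubmission")
    ]
)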