Hi, I've come across a thread where an Apple engineer points out that there are two possible places to read scenePhase from, either the App or a View: https://developer.apple.com/forums/thread/757429
That thread also links to documentation which states:
"If you read the phase from within a custom Scene instance, the value similarly reflects an aggregation of all the scenes that make up the custom scene."
This doesn't seem to be the case on visionOS 2. I tried the following code, starting from the empty app template:
import SwiftUI

@main
struct SceneTestApp: App {
    var body: some Scene {
        MyScene()

        WindowGroup(id: "extra") {
            Text("Extra window")
        }
    }
}

struct MyScene: Scene {
    @Environment(\.scenePhase) private var scenePhase
    @Environment(\.openWindow) private var openWindow

    var body: some Scene {
        WindowGroup {
            ContentView()
                .onAppear {
                    openWindow(id: "extra")
                }
        }
        .onChange(of: scenePhase) { oldValue, newValue in
            print("scenePhase changed")
        }
    }
}
The result was that I didn't get an onChange callback if I only closed the extra window; the callback only came after I closed both windows and the whole app was suspended. Is this expected behavior?
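For comparison, here's a minimal sketch of the other place the linked thread mentions for reading scenePhase: at the App level (documented to aggregate all scenes) versus inside a View (that window's scene only). This is a separate toy app I put together, not the test code above:

import SwiftUI

@main
struct PhaseLoggingApp: App {
    // Read at the App level: documented to aggregate the phases of all scenes.
    @Environment(\.scenePhase) private var appPhase

    var body: some Scene {
        WindowGroup {
            PhaseLoggingView()
        }
        .onChange(of: appPhase) { _, newValue in
            print("App-level scenePhase:", newValue)
        }
    }
}

struct PhaseLoggingView: View {
    // Read inside a View: reflects only the scene that contains this view.
    @Environment(\.scenePhase) private var viewPhase

    var body: some View {
        Text("Hello")
            .onChange(of: viewPhase) { _, newValue in
                print("View-level scenePhase:", newValue)
            }
    }
}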
Hello,
I am currently working on a Unity project for the Apple Vision Pro. I would like people passing in front of virtual objects to occlude the virtual objects behind them, similar to this: https://developer.apple.com/documentation/arkit/occluding-virtual-content-with-people
Unfortunately, I could not find any documentation about this. Is it possible to implement body segmentation or occlusion on the Apple Vision Pro? If it's not currently supported, are there plans to add it? Any ideas on how to achieve this with existing tools?
Thanks!
Mehdi
Is there a suitable UTType that lets UIDocumentPickerViewController pick only spatial videos?
I already know that PHPickerFilter in PHPickerViewController can do this, but I haven't found an equivalent for UIDocumentPickerViewController.
Our app needs to support both ways of picking spatial videos.
Is there anything I can try in UIDocumentPickerViewController to achieve this kind of filtering?
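For context, the closest workaround I can think of (my own sketch, not a documented picker filter) is to accept plain movies in UIDocumentPickerViewController and then check each picked file for stereo multiview video; the AVMediaCharacteristic check is an assumption on my part:

import UIKit
import UniformTypeIdentifiers
import AVFoundation

final class SpatialVideoPicker: NSObject, UIDocumentPickerDelegate {
    // There doesn't appear to be a UTType that matches only spatial videos,
    // so accept any movie and filter after the fact.
    func makePicker() -> UIDocumentPickerViewController {
        let picker = UIDocumentPickerViewController(forOpeningContentTypes: [.movie])
        picker.delegate = self
        return picker
    }

    func documentPicker(_ controller: UIDocumentPickerViewController,
                        didPickDocumentsAt urls: [URL]) {
        Task {
            for url in urls {
                let asset = AVURLAsset(url: url)
                let videoTracks = try await asset.loadTracks(withMediaType: .video)
                // Assumption: spatial videos expose the stereo multiview characteristic.
                var isSpatial = false
                for track in videoTracks {
                    let characteristics = try await track.load(.mediaCharacteristics)
                    if characteristics.contains(.containsStereoMultiviewVideo) {
                        isSpatial = true
                        break
                    }
                }
                print(url.lastPathComponent, isSpatial ? "is spatial" : "is not spatial")
            }
        }
    }
}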
Hi,
I have used the sample code for detecting planes and placing models on them from https://developer.apple.com/documentation/visionos/placing-content-on-detected-planes
This sample did not copy the animations from the preview model to the placed model, so I modified it to manually copy the animations and textures. The materialize() function below does this, and the placed models now animate correctly. The issue appears when I apply gestures to them, like drag or rotate: for models that go through this logic I'm unable to use gestures, even though I'm making sure that a CollisionComponent and an InputTargetComponent are set on the placed models. Has anyone been able to get this working, or is it even possible?
My materialize function:
func materialize() -> PlacedObject {
    let shapes = previewEntity.components[CollisionComponent.self]!.shapes

    // Clone render content first as we need its materials
    let clonedRenderContent = renderContent.clone(recursive: true)

    print("To be finding main model: \(descriptor.displayName)")

    // Find the main model in preview hierarchy
    func findMainModel(_ entity: Entity) -> Entity? {
        if entity.name == descriptor.displayName.replacingOccurrences(of: " ", with: "_") {
            print("Found main model: \(entity.name)")
            return entity
        }
        for child in entity.children {
            if child.name == descriptor.displayName.replacingOccurrences(of: " ", with: "_") {
                print("Found main model in children: \(child.name)")
                return child
            }
        }
        return nil
    }

    // Clone hierarchy preserving structure, names, and materials
    func cloneHierarchy(_ entity: Entity) -> Entity {
        print("Cloning: \(entity.name)")

        let cloned: Entity
        if let model = entity as? ModelEntity {
            // Clone with recursive false to handle children manually
            cloned = model.clone(recursive: false)
            if let clonedModel = cloned as? ModelEntity,
               let originalMaterials = model.model?.materials {
                // Preserve the original model's materials
                clonedModel.model?.materials = originalMaterials
            }
        } else {
            cloned = Entity()
        }

        // Preserve name and transform
        cloned.name = entity.name
        cloned.transform = entity.transform

        // Clone children
        for child in entity.children {
            let clonedChild = cloneHierarchy(child)
            cloned.addChild(clonedChild)
        }

        return cloned
    }

    print("=== Cloning Preview Structure ===")

    // Clone the preview hierarchy with proper structure
    let clonedStructure = cloneHierarchy(previewEntity)

    // Find and use the main model
    if let mainModel = findMainModel(clonedStructure) {
        print("Using main model for PlacedObject")

        let modelEntity: ModelEntity
        if let asModel = mainModel as? ModelEntity {
            print("Using asModel ")
            modelEntity = asModel
        } else {
            modelEntity = ModelEntity()
            modelEntity.name = mainModel.name
            // Copy children and transforms
            for child in mainModel.children {
                modelEntity.addChild(child)
            }
            modelEntity.transform = mainModel.transform
        }

        // Add collision component here
        let collisionComponent = CollisionComponent(shapes: shapes, isStatic: false,
                                                    filter: CollisionFilter(group: PlacedObject.collisionGroup, mask: .all))
        modelEntity.components.set(collisionComponent)

        // Create the placed object
        let placedObject = PlacedObject(descriptor: descriptor, renderContentToClone: modelEntity, shapes: shapes)

        // Set input target on the placed object itself
        placedObject.components.set(InputTargetComponent(allowedInputTypes: [.direct, .indirect]))

        return placedObject
    } else {
        print("Fallback to original render content")
        let placedObject = PlacedObject(descriptor: descriptor, renderContentToClone: clonedRenderContent, shapes: shapes)
        placedObject.components.set(InputTargetComponent(allowedInputTypes: [.direct, .indirect]))
        return placedObject
    }
}
My PlacedObject class, where the init has the recursive cloning removed because it is handled in materialize():
class PlacedObject: Entity {
    let fileName: String

    // The 3D model displayed for this object.
    private let renderContent: ModelEntity

    static let collisionGroup = CollisionGroup(rawValue: 1 << 29)

    // The origin of the UI attached to this object.
    // The UI is gravity aligned and oriented towards the user.
    let uiOrigin = Entity()

    var affectedByPhysics = false {
        didSet {
            guard affectedByPhysics != oldValue else { return }
            if affectedByPhysics {
                components[PhysicsBodyComponent.self]!.mode = .static
            } else {
                components[PhysicsBodyComponent.self]!.mode = .static
            }
        }
    }

    var isBeingDragged = false {
        didSet {
            affectedByPhysics = !isBeingDragged
        }
    }

    var positionAtLastReanchoringCheck: SIMD3<Float>?

    var atRest = false

    init(descriptor: ModelDescriptor, renderContentToClone: ModelEntity, shapes: [ShapeResource]) {
        fileName = descriptor.fileName
        // renderContent = renderContentToClone.clone(recursive: true)
        renderContent = renderContentToClone
        super.init()
        name = renderContent.name

        // Apply the rendered content’s scale to this parent entity to ensure
        // that the scale of the collision shape and physics body are correct.
        scale = renderContent.scale
        renderContent.scale = .one

        // Make the object respond to gravity.
        let physicsMaterial = PhysicsMaterialResource.generate(restitution: 0.0)
        let physicsBodyComponent = PhysicsBodyComponent(shapes: shapes, mass: 1.0, material: physicsMaterial, mode: .static)
        components.set(physicsBodyComponent)
        components.set(CollisionComponent(shapes: shapes, isStatic: false,
                                          filter: CollisionFilter(group: PlacedObject.collisionGroup, mask: .all)))
        addChild(renderContent)
        addChild(uiOrigin)
        uiOrigin.position.y = extents.y / 2 // Position the UI origin in the object’s center.

        // Allow direct and indirect manipulation of placed objects.
        components.set(InputTargetComponent(allowedInputTypes: [.direct, .indirect]))

        // Add a grounding shadow to placed objects.
        renderContent.components.set(GroundingShadowComponent(castsShadow: true))
    }

    required init() {
        fatalError("`init` is unimplemented.")
    }
}
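In case it's useful, here is a small debug helper (my own sketch, not part of the Apple sample) that walks a placed hierarchy and prints which entities actually carry the components a gesture needs; I use it on the result of materialize() to confirm that collision and input-target end up on the entity the gesture actually targets:

func dumpGestureComponents(_ entity: Entity, indent: String = "") {
    // Print whether this entity carries the components gesture targeting relies on.
    let hasCollision = entity.components.has(CollisionComponent.self)
    let hasInputTarget = entity.components.has(InputTargetComponent.self)
    print("\(indent)\(entity.name): collision=\(hasCollision), inputTarget=\(hasInputTarget)")
    for child in entity.children {
        dumpGestureComponents(child, indent: indent + "  ")
    }
}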
Thanks
Is it possible to create a button in my app that will turn on spatial Personas for the user? Currently, the only way I know of to turn on spatial Personas is by selecting the cube icon in the FaceTime window, which is quite clunky for people unfamiliar with the Vision Pro's UI. Any help would be appreciated.
We add analytics event tracking when the view is about to appear. When tapping the button with the three horizontal lines in the background and then tapping the button again to close it, the view's will-appear method is triggered again, so the method effectively executes twice and our analytics event counts end up incorrect.
I am new to spatial computing. I am learning how to use ARKit to capture the environment texture and apply it to a RealityKit ModelEntity on Vision Pro, but I cannot find a demo of how to use EnvironmentLightEstimationProvider.
After checking the documentation, I also have some questions:
EnvironmentProbeAnchor.environmentTexture is an MTLTexture, but EnvironmentResource needs a CGImage. How do I convert an MTLTexture to a CGImage? (Forgive me, I don't know much about Metal or other related frameworks, so it would be great to have code I can copy and paste directly; a rough sketch of what I mean follows below.)
It seems that EnvironmentProbeAnchor can only get the light information around the device. But what should I do if I want to get the light information around the ModelEntity so that I can apply the environment texture to it?
It would also be great if you could provide a code demo of how to use the new API.
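For the first question, this is the kind of conversion I have in mind (a rough sketch of my own, going through Core Image; I assume a cube-map texture would first need each face copied into a 2D texture):

import CoreImage
import Metal
import CoreGraphics

// Rough sketch: convert a 2D MTLTexture into a CGImage via Core Image.
// Assumption: the texture is a 2D color texture; a cube map would need its
// faces copied into 2D textures before this works.
func makeCGImage(from texture: MTLTexture) -> CGImage? {
    guard let ciImage = CIImage(mtlTexture: texture, options: nil) else { return nil }
    // Metal textures have a flipped vertical axis compared to Core Image.
    let flipped = ciImage.oriented(.downMirrored)
    let context = CIContext()
    return context.createCGImage(flipped, from: flipped.extent)
}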
Thank you!
Apple published a set of examples for using system gestures to interact with RealityKit entities. I've been using DragGesture a lot in my apps and noticed an issue when using it in an immersive space.
When dragging an entity, if I turn my body to face another direction, the dragged entity does not stay relative to my hand. This can lead to situations where the entity is pulled very close to me, pushed far away, or even ends up behind me.
In the examples linked above, there are two versions of how they use drag.
handleFixedDrag: This is similar to what I'm doing now. It uses the value from value.gestureValue.translation3D as the basis for the drag
handlePivotDrag: This version aims to solve the problem I described above by using value.inputDevicePose3D as the basis of the gesture.
I've tried the example from handlePivotDrag, but it has one limitation. Using this version, I can move the entity around me as if it were on the inside of an arc or sphere. However, I can no longer move the entity further or closer. It stays within a similar (though not exact) distance relative to me while I drag.
Is there a way to combine these concepts? Ideally, I would like to use a gesture that behaves the same way that visionOS windows do. When we drag windows, I can move them around relative to myself, pull them closer, push them further, all while avoiding the issues described above.
Example from handleFixedDrag
mutating private func handleFixedDrag(value: EntityTargetValue<DragGesture.Value>) {
    let state = EntityGestureState.shared
    guard let entity = state.targetedEntity else { fatalError("Gesture contained no entity") }

    if !state.isDragging {
        state.isDragging = true
        state.dragStartPosition = entity.scenePosition
    }

    let translation3D = value.convert(value.gestureValue.translation3D, from: .local, to: .scene)

    let offset = SIMD3<Float>(x: Float(translation3D.x),
                              y: Float(translation3D.y),
                              z: Float(translation3D.z))

    entity.scenePosition = state.dragStartPosition + offset

    if let initialOrientation = state.initialOrientation {
        state.targetedEntity?.setOrientation(initialOrientation, relativeTo: nil)
    }
}
Example from handlePivotDrag
mutating private func handlePivotDrag(value: EntityTargetValue<DragGesture.Value>) {
    let state = EntityGestureState.shared
    guard let entity = state.targetedEntity else { fatalError("Gesture contained no entity") }

    // The transform that the pivot will be moved to.
    var targetPivotTransform = Transform()

    // Set the target pivot transform depending on the input source.
    if let inputDevicePose = value.inputDevicePose3D {
        // If there is an input device pose, use it for positioning and rotating the pivot.
        targetPivotTransform.scale = .one
        targetPivotTransform.translation = value.convert(inputDevicePose.position, from: .local, to: .scene)
        targetPivotTransform.rotation = value.convert(AffineTransform3D(rotation: inputDevicePose.rotation), from: .local, to: .scene).rotation
    } else {
        // If there is not an input device pose, use the location of the drag for positioning the pivot.
        targetPivotTransform.translation = value.convert(value.location3D, from: .local, to: .scene)
    }

    if !state.isDragging {
        // If this drag just started, create the pivot entity.
        let pivotEntity = Entity()

        guard let parent = entity.parent else { fatalError("Non-root entity is missing a parent.") }

        // Add the pivot entity into the scene.
        parent.addChild(pivotEntity)

        // Move the pivot entity to the target transform.
        pivotEntity.move(to: targetPivotTransform, relativeTo: nil)

        // Add the targeted entity as a child of the pivot without changing the targeted entity's world transform.
        pivotEntity.addChild(entity, preservingWorldTransform: true)

        // Store the pivot entity.
        state.pivotEntity = pivotEntity

        // Indicate that a drag has started.
        state.isDragging = true
    } else {
        // If this drag is ongoing, move the pivot entity to the target transform.
        // The animation duration smooths the noise in the target transform across frames.
        state.pivotEntity?.move(to: targetPivotTransform, relativeTo: nil, duration: 0.2)
    }

    if preserveOrientationOnPivotDrag, let initialOrientation = state.initialOrientation {
        state.targetedEntity?.setOrientation(initialOrientation, relativeTo: nil)
    }
}
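One direction I've been sketching (not from the Apple sample): keep handlePivotDrag's pivot, but in the ongoing-drag branch also measure how far the hand moved toward or away from the entity and stretch the entity's offset from the pivot accordingly. lastPivotPosition is a hypothetical extra property on EntityGestureState that would be cleared when a drag starts and ends:

// Sketch: slots in as the `else` (ongoing drag) branch of handlePivotDrag.
// `state.lastPivotPosition: SIMD3<Float>?` is a hypothetical addition to
// EntityGestureState and would need to be reset when a drag starts and ends.
state.pivotEntity?.move(to: targetPivotTransform, relativeTo: nil, duration: 0.2)

let newPivotPosition = targetPivotTransform.translation
if let lastPivotPosition = state.lastPivotPosition {
    // Direction from the hand (pivot) toward the entity, in world space.
    let toEntity = entity.position(relativeTo: nil) - newPivotPosition
    let worldDistance = length(toEntity)
    if worldDistance > 0.001 {
        // Component of this frame's hand motion along that direction.
        let depthDelta = dot(newPivotPosition - lastPivotPosition, toEntity / worldDistance)

        // Stretch or shrink the entity's local offset from the pivot, amplified
        // so small hand motions cover useful distances.
        let amplification: Float = 3
        let localOffset = entity.position        // relative to the pivot entity
        let localDistance = length(localOffset)
        if localDistance > 0.001 {
            let newDistance = max(0.1, localDistance + depthDelta * amplification)
            entity.position = localOffset * (newDistance / localDistance)
        }
    }
}
state.lastPivotPosition = newPivotPosition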
I am new to spatial computing, and I am using ARKit and RealityKit to develop a Vision Pro app.
My goal is this: if the user's hand touches an object (an entity in a RealityView) on the table, the app opens a window. But I do not know how to handle the "the user's hand touches the object" event. Should I use the hand tracking feature and do the computation myself, or is there an API I can use directly?
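The direction I'm imagining (a rough sketch of my own; I'm not sure it's the intended approach) is to attach a small collision shape to a hand anchor and subscribe to collision events against the target entity. The window identifier and the placeholder box are hypothetical, and I don't know whether collisions between an entity under a hand anchor and a world-placed entity fire events out of the box:

import SwiftUI
import RealityKit

struct TouchableObjectView: View {
    @Environment(\.openWindow) private var openWindow

    var body: some View {
        RealityView { content in
            // The object on the table (placeholder; in my app it's loaded from a USDZ file).
            let target = ModelEntity(mesh: .generateBox(size: 0.1))
            target.position = [0, 1.0, -0.5]
            target.generateCollisionShapes(recursive: true)
            content.add(target)

            // A small sphere that follows the right index fingertip.
            // Hand anchors require an immersive space.
            let fingertip = AnchorEntity(.hand(.right, location: .indexFingerTip))
            let tip = ModelEntity(mesh: .generateSphere(radius: 0.01))
            tip.generateCollisionShapes(recursive: true)
            fingertip.addChild(tip)
            content.add(fingertip)

            // Open a window when the fingertip's collision shape touches the target.
            // Note: collisions across anchors may need extra physics configuration.
            _ = content.subscribe(to: CollisionEvents.Began.self, on: target) { _ in
                openWindow(id: "infoWindow")   // hypothetical window identifier
            }
        }
    }
}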
Thank you!
Hi, we would like to create something where you can open multiple volumetric windows and place them in a room. Our biggest issue is that we want these windows to be persistent, so that when I close and reopen the app, the windows are in the same positions. We can't use immersive spaces because we also want to keep access to the Shared Space.
Is this possible with the current features and capabilities? If so, do you have any advice on how we can achieve it?
Alternatively, is it possible to open the virtual display in an immersive space, or could we implement our own virtual display?
Hi there,
I'm building a workplace experience that requires using the virtual desktop. Is there a way to launch it from my code, so the user doesn't have to do it manually?
Thanks in advance!
I am encountering an issue while using the multiview video demo provided at this link: https://developer.apple.com/documentation/avkit/creating-a-multiview-video-playback-experience-in-visionos/. Specifically, when running on versions of visionOS prior to 2.2, navigating back results in a blank screen. Has anyone else experienced this problem and found a solution? Any advice or workaround would be greatly appreciated.
Hi, I downloaded a few files from the Apple developer website, and they are in the .reality file format. I wanted to see how they are constructed, but I can't find a way to open them and look at the content inside (3D models, shaders, animations, etc.). Is there a way to inspect the contents of a .reality file?
I am developing an ARKit-based application that requires plane detection of the tabletop at which the user is seated. Early testing was with an iPhone 8 and iPhone 8 Plus. With those devices, ARKit rapidly detected the plane of the tabletop when it was only 8 to 10 inches away. Using an iPhone 15 with the same code, it seems I have to move the phone more like 15 to 16 inches away before it detects the plane of the table. This is an awkward motion for a user seated at a table. To confirm that this isn't specific to my code, I verified that the same behavior occurs with Apple's sample AR Interaction application. Has anyone else experienced this, and if so, do you have suggestions to improve the situation?
Following up on my previous question here: https://developer.apple.com/forums/thread/774262
Having solved the clipping problem, I am now trying to overlay some content in front of the RealityView. However, it looks like any content with transparency does not render in front of the RealityView, while opaque views seem to work; placing content with transparency like glassBackgroundEffect() behind the RealityView in a ZStack causes the entire window to flicker.
Additionally, a SwiftUI attachment placed in front of the stereoscopic image plane is invisible if the user looks at it straight on at 90 degrees. However, if the user looks at it from increasing angles to the side, the attachment gradually becomes visible again.
Are these behaviors expected? What is a recommended approach to overlay content in front of a RealityView? Thanks!
I am developing a visionOS app. I load a .usdz file as a RealityKit Entity (such as a cabbage), and I want the following effect:
When I turn on a desk lamp in the real world near the entity, the surface of the entity should correctly respond to the real-world light.
I want an effect like this:
https://www.reddit.com/r/virtualreality/comments/1as01mm/shiny_disco_ball_reflecting_my_room/
I have looked at APIs such as ImageBasedLightComponent and VirtualEnvironmentProbeComponent in RealityKit, and EnvironmentLightEstimationProvider in ARKit, but I do not know how to put them together.
It would also be great if the shadows responded to the light correctly.
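For reference, the part I do understand is how to wire a static image-based light onto an entity; a minimal sketch (assuming an EnvironmentResource named "Sunlight" in the app bundle) looks like the code below, and my question is really about building that resource from the real room's lighting instead:

import RealityKit

// Minimal sketch: light an entity with a bundled environment texture.
// "Sunlight" is an assumed EnvironmentResource name; the open question is how
// to build this resource from the real room's lighting instead.
func applyImageBasedLight(to entity: Entity) async throws {
    let environment = try await EnvironmentResource(named: "Sunlight")
    entity.components.set(ImageBasedLightComponent(source: .single(environment),
                                                   intensityExponent: 1))
    // The same entity (or a descendant) must also opt in to receiving the light.
    entity.components.set(ImageBasedLightReceiverComponent(imageBasedLight: entity))
}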
Hello, esteemed tech developer.
I am using Apple Vision Pro to create an AR assistance system for the da Vinci Surgical Robot in a surgical suite, and I would like to capture eye-movement data consistently across testers. Although Apple Vision Pro has a superb infrared sensor for monitoring eye movement, Apple does not seem to offer official access to it. (I'm aware of the many existing discussions about this, but I was still wondering if there might be an option, particularly for research labs.) Here's my feedback number: FB16603687
Hi,
I am creating an ECS (entity component system). With this ECS I will need to register several DragGestures.
Question: Is it possible to define DragGestures in an ECS? If yes, how do we do that? If not, what is the best approach?
Question: Is there a "gesture" method that takes an array of gestures as a parameter?
I am interested in any information that can help me, if possible with an example of code.
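The closest pattern I've found so far (a rough sketch; DraggableComponent is a hypothetical marker component of mine) is a single view-level DragGesture routed to entities by component, rather than gestures registered inside the System itself:

import SwiftUI
import RealityKit

// Hypothetical marker component: my System adds it to entities that should be draggable.
struct DraggableComponent: Component {}

struct ECSGestureView: View {
    var body: some View {
        RealityView { content in
            DraggableComponent.registerComponent()
            // ... entities are created elsewhere; draggable ones get DraggableComponent ...
        }
        // A single DragGesture at the view level; the ECS only decides which
        // entities carry the marker component.
        .gesture(
            DragGesture()
                .targetedToAnyEntity()
                .onChanged { value in
                    guard value.entity.components[DraggableComponent.self] != nil else { return }
                    // Follow the drag in scene space.
                    let scenePosition = value.convert(value.location3D, from: .local, to: .scene)
                    value.entity.setPosition(scenePosition, relativeTo: nil)
                }
        )
    }
}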
Regards
Tof
Hello,
I have downloaded and run the sample object tracking app for visionOS.
Now I'm working on tracking my own objects. I have made a model using Create ML with images of my object.
However, I cannot see how to convert the Create ML output file (xxx.mlmodel) into a reference object like the files in the sample project.
Is there a tool for converting them?
TIA
I am working with MeshAnchors, and I am having trouble getting to the classification of the triangles/faces.
This post references the MeshAnchor.Geometry, and that struct does have a property named "classifications", but it is of type GeometrySource. I cannot find any classification information in GeometrySource. Am I missing something there?
I think I am looking for something of type MeshAnchor.MeshClassification, but I cannot find any structs with this as a property.
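For reference, this is the kind of read I'm attempting (a rough sketch pieced together from forum posts; I'm not certain the per-face layout or the raw-value mapping to MeshAnchor.MeshClassification is correct):

import ARKit

// Rough sketch: read a per-face classification out of the GeometrySource.
// Assumptions: one UInt8 per face, convertible to MeshAnchor.MeshClassification
// via its raw value; I haven't confirmed this layout in the documentation.
func classification(of faceIndex: Int, in anchor: MeshAnchor) -> MeshAnchor.MeshClassification? {
    guard let classifications = anchor.geometry.classifications else { return nil }
    let pointer = classifications.buffer.contents()
        .advanced(by: classifications.offset + classifications.stride * faceIndex)
    let rawValue = pointer.assumingMemoryBound(to: UInt8.self).pointee
    return MeshAnchor.MeshClassification(rawValue: Int(rawValue))
}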