Here are the code snippets.
import SwiftUI
import RealityKit

struct RealityViewTestView: View {
    @State private var texts: [String] = []

    var body: some View {
        RealityView { content, attachments in
        } update: { content, attachments in
            for text in texts {
                if let textEntity = attachments.entity(for: text) {
                    textEntity.position.x = Float.random(in: -0.1...0.1)
                    content.add(textEntity)
                }
            }
        } attachments: {
            ForEach(texts, id: \.self) { text in
                Attachment(id: text) {
                    Text(text)
                        .padding()
                        .glassBackgroundEffect()
                }
            }
        }
        .toolbar {
            ToolbarItem {
                Button("Add") {
                    texts.append(String(UUID().uuidString.prefix(6)))
                }
            }
            ToolbarItem {
                Button("Remove") {
                    texts.remove(at: Int.random(in: 0..<texts.count))
                }
            }
        }
    }
}
struct RealityViewTestView: View {
    @State private var texts: [String] = []
    @State private var entities: [Entity] = []

    var body: some View {
        RealityView { content, attachments in
        } update: { content, attachments in
            // for text in texts {
            //     if let textEntity = attachments.entity(for: text) {
            //         textEntity.position.x = Float.random(in: -0.1...0.1)
            //         content.add(textEntity)
            //     }
            // }
            for entity in entities {
                content.add(entity)
            }
        } attachments: {
            ForEach(texts, id: \.self) { text in
                Attachment(id: text) {
                    Text(text)
                        .padding()
                        .glassBackgroundEffect()
                }
            }
        }
        .toolbar {
            ToolbarItem {
                Button("Add") {
                    // texts.append(String(UUID().uuidString.prefix(6)))
                    let m = ModelEntity(mesh: .generateSphere(radius: 0.1),
                                        materials: [SimpleMaterial(color: .white, isMetallic: false)])
                    m.position.x = Float.random(in: -0.2...0.2)
                    entities.append(m)
                }
            }
            ToolbarItem {
                Button("Remove") {
                    // texts.remove(at: Int.random(in: 0..<texts.count))
                    entities.removeLast()
                }
            }
        }
    }
}
Regarding the first code snippet: when I remove an element from texts, why does content automatically remove the corresponding entity? In the second code snippet, however, content does not automatically remove the corresponding entity. I am very curious.
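For what it's worth, here is a minimal sketch (my own approach, not an official answer) of making the second snippet remove entities as well. Attachment entities appear to be managed by RealityView, so they are cleaned up when their Attachment leaves the attachments builder, while entities created manually stay in the scene until the code removes them, e.g. by pruning against the state array in the update closure. All names below are mine.

import SwiftUI
import RealityKit

struct ManualEntityCleanupView: View {
    @State private var entities: [Entity] = []
    @State private var rootEntity = Entity()

    var body: some View {
        RealityView { content in
            content.add(rootEntity)
        } update: { _ in
            // Prune children whose backing entity was removed from state.
            let stale = rootEntity.children.filter { child in
                !entities.contains(where: { $0 === child })
            }
            for child in stale { child.removeFromParent() }

            // Add entities from state that are not in the scene yet.
            for entity in entities where entity.parent == nil {
                rootEntity.addChild(entity)
            }
        }
        .toolbar {
            ToolbarItem {
                Button("Add") {
                    let m = ModelEntity(mesh: .generateSphere(radius: 0.1),
                                        materials: [SimpleMaterial(color: .white, isMetallic: false)])
                    m.position.x = Float.random(in: -0.2...0.2)
                    entities.append(m)
                }
            }
            ToolbarItem {
                Button("Remove") {
                    if !entities.isEmpty { entities.removeLast() }
                }
            }
        }
    }
}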
Hello,
I checked the following documentation:
Vision | Apple Developer Documentation
Discover Swift enhancements in the Vision framework - WWDC24 - Videos - Apple Developer
I saw that the Vision framework is available on visionOS.
So I want to know whether it's possible to use the Vision framework on visionOS for tracking human and animal body poses, or whether there are limits to using it on visionOS.
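Here is a minimal sketch of the classic request-based Vision API I would try first (assuming it behaves on visionOS as it does on iOS); the helper function name is mine.

import Vision
import CoreGraphics

// Hypothetical helper: run a human body pose request on a CGImage.
// VNDetectAnimalBodyPoseRequest can be substituted for animal poses.
func detectHumanBodyPoses(in image: CGImage) throws -> [VNHumanBodyPoseObservation] {
    let request = VNDetectHumanBodyPoseRequest()
    let handler = VNImageRequestHandler(cgImage: image, options: [:])
    try handler.perform([request])
    return request.results ?? []
}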
I want a custom hover effect on a sentence, not on a button.
I want a hover effect that appears when you look at one sentence out of many sentences.
So I searched for reference videos https://youtu.be/DftRTx1oX6E , https://developer.apple.com/videos/play/wwdc2023/10110/ on Apple's YouTube channel and in the visionOS documentation.
But I haven't gotten anywhere near the feature I want yet.
I respectfully request someone to help me. :)
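Not sure this is exactly what you are after, but here is a minimal sketch of the direction I would try (all names are mine); the tap gesture is there because hover effects only seem to render on views the system treats as interactive.

import SwiftUI

struct SentenceHoverView: View {
    let sentences = ["First sentence.", "Second sentence.", "Third sentence."]

    var body: some View {
        VStack(alignment: .leading, spacing: 8) {
            ForEach(sentences, id: \.self) { sentence in
                Text(sentence)
                    .padding(6)
                    // Give the hover effect a rounded shape around just this sentence.
                    .contentShape(.hoverEffect, .rect(cornerRadius: 8))
                    .hoverEffect(.highlight)
                    .onTapGesture { /* handle sentence selection */ }
            }
        }
    }
}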
I'm just trying to set up some kind of visual on my iPad for testing and play purposes... I tried adding iPad as a destination to both Apple visual apps.
I get this error for both:
Building for 'iphoneos', but '12.0' must be >= '18.0'
Does anyone have any idea if Apple plans to add UDIM support for its 3D development? It is a real bummer not to have this feature, and it makes an otherwise clean USD pipeline kinda suck.
Hi everyone,
I am wondering which settings the camera(s) were set to at the time they were calibrated.
For instance, one aspect that is easy to find is the reference resolution of the images taken when calibrating the intrinsics; this is retrieved via
intrinsicMatrixReferenceDimensions,
making sure that the principal point is referenced to the resolution used when the calibration was done.
However, recently I saw that there are focusing modes that potentially displace the lens' physical position.
Settings like:
AutoFocusRangeRestriction: none, near, far
setFocusModeLocked: Locks the lens position at the specified value, and sets the focus mode to a locked state.
My concern lies in the impact these focusing lens displacements can have on the intrinsic matrix parameters, i.e. that these parameters no longer describe the camera once the lens position has changed.
In simple words, what focus mode/range were the cameras set to when they were calibrated for intrinsics?
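Not an answer to the focus question, but for reference, here is a sketch of how intrinsicMatrixReferenceDimensions is typically used to rescale the intrinsics to the resolution actually being processed (the helper name is mine):

import AVFoundation
import simd

func scaledIntrinsics(from calibration: AVCameraCalibrationData,
                      to outputSize: CGSize) -> simd_float3x3 {
    var K = calibration.intrinsicMatrix
    let ref = calibration.intrinsicMatrixReferenceDimensions
    let sx = Float(outputSize.width / ref.width)
    let sy = Float(outputSize.height / ref.height)
    // Column-major layout: fx and fy on the diagonal, cx and cy in the last column.
    K[0][0] *= sx
    K[1][1] *= sy
    K[2][0] *= sx
    K[2][1] *= sy
    return K
}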
The ShaderGraph node Blurred Background (RealityKit) – https://developer.apple.com/documentation/shadergraph/realitykit/blurred-background-(realitykit) works fine within the Reality Composer Pro 2 editor but isn't working on iOS 18 or macOS 15. Instead of the blurred content, it just renders as opaque in a single color (Screenshot 2).
Interestingly, it also fails to render within Reality Composer Pro when no other entities are in the scene, e.g. when only a background skybox is set.
Expected Behavior: It would be great if this node worked the same way as it does on visionOS since this would allow for really interesting and nice effects for scenes.
Feedback ID: FB15081190
Platform: iOS 18
Tech: RealityView
Hi! I was wondering if RealityView now provides a way for its session to persist anchor data in a world, such that the anchor locations saved in one session can be loaded in another session at exactly the same positions.
I know that ARWorldMap in ARKit does that, but I was not able to find a way to use it with RealityView. I think it's because RealityView has ARKit under its hood but does not expose the ARKit session info publicly to the client code.
So I was wondering if there's a SwiftUI + RealityView approach that can help me to achieve a similar goal: Come back to the same location and see the object in exactly the same place.
Thanks!
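In case it is useful, here is a minimal sketch of the route I am aware of on visionOS: it is not RealityView exposing its ARKit session, but running your own ARKitSession with a WorldTrackingProvider (inside an ImmersiveSpace) alongside RealityView. WorldAnchors added this way are persisted by the system, so the same anchor id comes back in later sessions at the same place. The helper is mine.

import ARKit
import RealityKit

let session = ARKitSession()
let worldTracking = WorldTrackingProvider()

// Place a persistent anchor at a world-space transform; store anchor.id and
// re-attach your entity when the anchor reappears in worldTracking.anchorUpdates.
func persistAnchor(at transform: simd_float4x4) async throws -> WorldAnchor {
    try await session.run([worldTracking])
    let anchor = WorldAnchor(originFromAnchorTransform: transform)
    try await worldTracking.addAnchor(anchor)
    return anchor
}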
HoverEffectComponent on macOS 15 and iOS 18 works fine using RealityView, but seems to be ignored when ARView (even with a SwiftUI UIViewRepresentable) is used.
Feedback ID: FB15080805
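For reference, a sketch of the RealityView-side entity setup involved (the helper name is mine); as far as I can tell, the entity needs collision and input-target components alongside HoverEffectComponent for the effect to appear.

import RealityKit

func makeHoverableSphere() -> ModelEntity {
    let sphere = ModelEntity(mesh: .generateSphere(radius: 0.1),
                             materials: [SimpleMaterial(color: .blue, isMetallic: false)])
    // Hit-testing and input targeting are required for the hover effect to render.
    sphere.components.set(CollisionComponent(shapes: [.generateSphere(radius: 0.1)]))
    sphere.components.set(InputTargetComponent())
    sphere.components.set(HoverEffectComponent())
    return sphere
}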
Hey,
I'm building an interior design app in visionOS 2.0. I'm fetching the planes detected by ARKit and I then proceed to add them with an "OcclusionMaterial" to make sure my objects are occluded accordingly. However, I'm facing two problems with this:
The ground shadows are completely disabled as soon as an occlusion material is added, even if I inset the planes doing the occlusion. I've looked into this: https://developer.apple.com/documentation/shadergraph/realitykit/shadow-receiving-occlusion-surface-(realitykit) but when I tried to use it, it behaved exactly as "OcclusionMaterial".
The planes are also occluding all windows (mine and the system ones), which is a behavior I'd like to avoid. I only want to occlude the entities I added. Is there a way to achieve this?
Thanks in advance
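For context, this is roughly the occluder setup described above, as a minimal sketch (the helper name is mine; the plane size would come from the detected PlaneAnchor's extent):

import RealityKit

// Build an invisible occluder plane from a detected plane's dimensions.
func makeOccluder(width: Float, depth: Float) -> ModelEntity {
    ModelEntity(mesh: .generatePlane(width: width, depth: depth),
                materials: [OcclusionMaterial()])
}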
Hey Everyone, Happy New Year!
I wanted to see if you have seen this before. I have added an attachment to the RealityView as a child of an entity that has a Billboard component set on it. I wanted to create the effect that the attachment is offset by 0.5 meters from center and follows the device as you move around it. It works great until you try to click a button.
The attachment moves with the billboard, but the collision box around the attachment is not following it. If I position myself perfectly, it works.
Video Example: https://youtu.be/4d9Vx7K8MmU
//
//  ImmersiveView.swift
//  Billboard Attachment
//
//  Created by Justin Leger on 1/3/25.
//

import SwiftUI
import RealityKit
import RealityKitContent

struct ImmersiveView: View {
    var rootEntity = Entity()

    var body: some View {
        RealityView { content, attachments in
            // Add the initial RealityKit content
            let sphereEntity = ModelEntity(mesh: .generateSphere(radius: 0.1),
                                           materials: [SimpleMaterial(color: .red, roughness: 1, isMetallic: false)])
            sphereEntity.position = [0.0, 1.0, -2.0]

            let controlsPivotEntity = Entity()
            controlsPivotEntity.components[BillboardComponent.self] = .init()

            // Extract the attachment entity and offset it before it's used.
            if let controlsViewAttachmentEntity = attachments.entity(for: PlacedThingControls.attachmentId) {
                controlsViewAttachmentEntity.position.z = 0.5
                controlsPivotEntity.addChild(controlsViewAttachmentEntity)
                sphereEntity.addChild(controlsPivotEntity)
            }

            content.add(sphereEntity)
        } attachments: {
            Attachment(id: PlacedThingControls.attachmentId) {
                PlacedThingControls()
            }
        }
    }
}

#Preview(immersionStyle: .mixed) {
    ImmersiveView()
        .environment(AppModel())
}

struct PlacedThingControls: View {
    static let attachmentId = "placed-thing-3D-controls"

    var body: some View {
        VStack {
            HStack(spacing: 0) {
                Button {
                    print("🗺️🗺️🗺️ Map selected pieces")
                } label: {
                    Text("\(Image(systemName: "plus.square.dashed")) Manage Mesh Maps")
                        .fontWeight(.semibold)
                        .frame(maxWidth: .infinity)
                }
                .padding(.leading, 20)

                Spacer()

                Button(role: .destructive) {
                    print("🗑️🗑️🗑️ Delete selected pieces")
                } label: {
                    Label {
                        Text("Delete")
                    } icon: {
                        Image(systemName: "trash")
                    }
                    .labelStyle(.iconOnly)
                }
                .padding(.trailing, 20)
            }
            .padding(.vertical)
            .frame(minWidth: 320, maxWidth: 480)
        }
        .glassBackgroundEffect()
    }
}
The following documentation tells me that CameraFrame.Sample.Parameters.extrinsics is of type simd_float4x4, great!
https://developer.apple.com/documentation/arkit/cameraframe/sample/parameters/4443449-extrinsics
I have read in the answer to another post that these extrinsics represent the pose of the physical camera relative to the device anchor.
Did I understand correctly that the device anchor is where the scene is rendered from onto the user's display?
What is the coordinate system in which this offset is defined, which axis is left, which one is up, which one is forward?
The last column of the extrinsics seems to define a translation of approximately 2 cm along the x axis, -2 cm along the y axis and -5 cm along the z axis. I tried to measure the physical distance between the main left and right cameras in order to find out whether it's 2 cm or 5 cm from the "middle"; it looks more like 5, so I assume that the z axis points towards the right (from the user's perspective). Is that so? For x and y, I assume that the physical camera is approximately 2 cm in front of the user and 2 cm towards the bottom; which of x and y is horizontal, and which one is vertical?
How is the camera image indexed, is it row-major and is the origin on the top left?
I am looking forward to learning about all the details on these extrinsics in order to make use of it.
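While waiting for an authoritative answer, here is a small sketch I use to inspect such a matrix; it only assumes the usual column-major simd_float4x4 layout with the translation in the last column, and deliberately does not claim in which order the transform composes with the device anchor.

import simd

// Split the extrinsics into its rotation (upper-left 3x3) and translation (last column).
func decompose(_ extrinsics: simd_float4x4) -> (rotation: simd_float3x3, translation: SIMD3<Float>) {
    let c0 = extrinsics.columns.0, c1 = extrinsics.columns.1, c2 = extrinsics.columns.2
    let rotation = simd_float3x3(columns: (
        SIMD3(c0.x, c0.y, c0.z),
        SIMD3(c1.x, c1.y, c1.z),
        SIMD3(c2.x, c2.y, c2.z)))
    let t = extrinsics.columns.3
    return (rotation, SIMD3(t.x, t.y, t.z))
}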
Hi
I'm trying to create a water shader using the shader graph in Reality Composer Pro, but quite a few of the features you would need for realistic water rendering appear to be missing.
One big issue is the lack of a way to create refraction. We can easily control the transparency of the water by changing the opacity, but how can we distort what we see through the water? I can't find any obvious solution for that.
In Unity, they provide a node called HD Scene Color which is basically the scene rendered to an offscreen buffer which you can apply to the water and then distort to get a refraction effect. I guess the Background Blur node could be used for something like this if we could turn off the blur and distort it, but there's no control for the blur and no control for the texture coordinates.
Am I missing something? Any ideas are welcome :)
I’m currently using the RealityKit/ObjectCaptureSession API to develop my app, and I’ve noticed that Apple’s official Reality Composer app also uses the same API. However, both my app and the Reality Composer app crash if the device doesn’t have enough storage space (approximately 4 GB free). Here is the debug log I’m seeing:
Insufficient storage: required 4000000000
Switch to error state. Got error = insufficientStorage(requiredBytes: 4000000000)
fromState == toState so punting transition! from=disabled toState=disabled
Punting transition since states match: disabled
Got error starting session! insufficientStorage(requiredBytes: 4000000000)
I would like to request:
A fix for the crash in the official Reality Composer app.
Guidance on how to properly handle this crash or error when using the ObjectCaptureSession API in my own app.
Thank you!
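For the second point, here is a sketch of the pre-flight check I would do before starting the session (the helper name is mine; the 4 GB threshold simply mirrors the requiredBytes value in the log):

import Foundation

// Returns true if the volume holding the app's documents has at least `minimumBytes` free.
func hasEnoughFreeSpace(minimumBytes: Int64 = 4_000_000_000) -> Bool {
    let url = FileManager.default.urls(for: .documentDirectory, in: .userDomainMask)[0]
    guard let values = try? url.resourceValues(forKeys: [.volumeAvailableCapacityForImportantUsageKey]),
          let capacity = values.volumeAvailableCapacityForImportantUsage else {
        return false
    }
    return capacity >= minimumBytes
}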
I am trying to apply an impulseAction to an entity, but every time entity.playAnimation(impulseAnimation) is executed, the log says "Cannot find a BindPoint for any bind path: """. I can't figure out what is wrong. Could someone please help me with this?
import SwiftUI
import RealityKit
import RealityKitContent

struct ImmersiveView: View {
    var body: some View {
        RealityView { content in
            // Add the initial RealityKit content
            if let immersiveContentEntity = try? await Entity(named: "Immersive", in: realityKitContentBundle),
               let sphere = immersiveContentEntity.findEntity(named: "Sphere") {
                sphere.components.set(CollisionComponent(shapes: [ShapeResource.generateSphere(radius: 0.1)]))
                sphere.components.set(PhysicsBodyComponent(shapes: [ShapeResource.generateSphere(radius: 0.1)], mass: 1000))
                sphere.components[PhysicsBodyComponent.self]?.isAffectedByGravity = false
                sphere.position = [0, 1, -1]
                content.add(immersiveContentEntity)

                // Create an action to apply an impulse, forcing the object to move upwards.
                let impulseAction = ImpulseAction(linearImpulse: [0, 1, 0])

                // Create a small positive duration value.
                let duration: TimeInterval = 1 / 30.0

                // Create an animation for the action, which will start playing
                // after five seconds.
                do {
                    let impulseAnimation = try AnimationResource
                        .makeActionAnimation(for: impulseAction,
                                             duration: duration,
                                             delay: 5.0)
                    // Play the sequence animation that will play the actions.
                    sphere.playAnimation(impulseAnimation)
                } catch {
                    print("Error: \(error)")
                }
            }
        }
    }
}
All the logs:
Could not locate file 'default-binaryarchive.metallib' in bundle.
Error creating the CFMessagePort needed to communicate with PPT.
AddInstanceForFactory: No factory registered for id <CFUUID 0x6000029a5b80> F8BB1C28-BAE8-11D6-9C31-00039315CD46
cannot add handler to 0 from 1 - dropping
nw_socket_copy_info [C1:2] getsockopt TCP_INFO failed [102: Operation not supported on socket]
nw_socket_copy_info getsockopt TCP_INFO failed [102: Operation not supported on socket]
Registering library (/Library/Developer/CoreSimulator/Volumes/xrOS_22N840/Library/Developer/CoreSimulator/Profiles/Runtimes/xrOS 2.2.simruntime/Contents/Resources/RuntimeRoot/System/Library/PrivateFrameworks/CoreRE.framework/default.metallib) that already exists in shader manager. Library will be overwritten.
cannot add handler to 0 from 1 - dropping
Cannot find a BindPoint for any bind path: "", ""
Sync object without snapshot while removing view (id: 2816861686082450363, type: 6373420419761316588[SelectableSceneContentIdentifierComponent]).
But I think only the "Cannot find a BindPoint for any bind path" message is relevant.
[xrsimulator] Component Compatibility: EnvironmentLightingConfiguration not available for 'xros 1.0', please update 'platforms' array in Package.swift
[xrsimulator] Exception thrown during compile: compileFailedBecause(reason: "compatibility faults")
Tool exited with code 1
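In case it helps anyone hitting the same compile failure, here is a sketch of the Package.swift change the error message points at, assuming the component requires visionOS 2.0; the package and target names mirror the default Reality Composer Pro content package and are assumptions about the project.

// swift-tools-version: 5.9
import PackageDescription

let package = Package(
    name: "RealityKitContent",
    platforms: [
        // The error says the component is not available for 'xros 1.0',
        // so raise the minimum visionOS version here.
        .visionOS("2.0")
    ],
    products: [
        .library(name: "RealityKitContent", targets: ["RealityKitContent"])
    ],
    targets: [
        .target(name: "RealityKitContent")
    ]
)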
Sample repo: https://github.com/ckse93/VideoDiffusionIssueSHowcase
The repo has a detailed step-by-step workflow, as well as screenshots, the Python script's computed results, and parameters.
After running computeDiffuseReflectionUVs.py and mapping the textures and reflection diffuse to objects, I noticed that the reflection diffuse does not produce any color.
The expected result is shown below; the diffused light has color.
Is it possible to use a local Wi-Fi router to connect Vision Pro and Mac for development? I tried from Unity and Xcode.
From Unity, the host app wouldn't open without Wi-Fi (an internet connection).
From Xcode, I can see the Vision Pro paired, but when I try to run, there's no device listed.
Any suggestions? Thanks a lot, /Ruiying
I was trying to use a local Wi-Fi router to connect Vision Pro and Mac for development. From Unity, the host on Vision Pro wouldn't open; from Xcode I can see the Vision Pro paired, but by the Run button there's no device listed... Thanks for any ideas! /Ruiying
I am using RealityKit's ObjectCaptureSession API to capture objects, presenting the process with ObjectCaptureView. During the object capture session, there is default background audio that plays automatically.
I noticed this same audio behavior in Apple's official Composer app, which seems to use the same API. I'd like to disable this audio in my app, but I have not been able to find any API or configuration option to do so.
The audio persists, and I cannot find a way to turn it off. Is there an official method or workaround to disable this default audio in the ObjectCaptureSession API?
Any guidance would be appreciated. Thank you!