Hi,
On visionOS we can manage entity rotation with RotateGesture3D. Using the constrainedToAxis parameter, we can even allow rotation only around the x, y, or z axis, or around combinations of them.
What I want to know is whether it is possible to apply this axis constraint automatically.
Let me explain: the feature I would like to implement constrains the rotation to a single axis only once the user has started their gesture. The initial motion of the gesture should tell us which axis they want to rotate around.
This would be equivalent to automatically activating a constraint on one of the axes, as if we had defined the gesture on that axis:
RotateGesture3D(constrainedToAxis: .x)
RotateGesture3D(constrainedToAxis: .y)
RotateGesture3D(constrainedToAxis: .z)
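To make the idea concrete, here is a rough, untested sketch of the behavior I have in mind (the entity setup and the 0.05 rad threshold are placeholders I made up): start with an unconstrained gesture, detect the dominant component of the rotation axis once the gesture is under way, then apply only that axis for the rest of the gesture.

import SwiftUI
import RealityKit
import Spatial

struct AxisLockedRotationView: View {
    @State private var model = ModelEntity(
        mesh: .generateBox(size: 0.2),
        materials: [SimpleMaterial(color: .blue, isMetallic: false)]
    )
    @State private var lockedAxis: RotationAxis3D?
    @State private var baseOrientation = simd_quatf(ix: 0, iy: 0, iz: 0, r: 1)

    var body: some View {
        RealityView { content in
            model.components.set(InputTargetComponent())
            model.generateCollisionShapes(recursive: true)
            content.add(model)
        }
        .gesture(
            RotateGesture3D() // no constrainedToAxis here; the lock is applied manually
                .targetedToEntity(model)
                .onChanged { value in
                    let rotation = value.gestureValue.rotation
                    // Once the rotation is large enough (0.05 rad is an arbitrary threshold),
                    // decide which axis the user meant and keep it for the rest of the gesture.
                    if lockedAxis == nil, abs(rotation.angle.radians) > 0.05 {
                        let axis = rotation.axis
                        let candidates: [(RotationAxis3D, Double)] = [
                            (.x, abs(axis.x)), (.y, abs(axis.y)), (.z, abs(axis.z))
                        ]
                        lockedAxis = candidates.max { $0.1 < $1.1 }?.0
                    }
                    guard let axis = lockedAxis else { return }
                    // Re-express the gesture angle as a rotation about the locked axis only.
                    let constrained = Rotation3D(angle: rotation.angle, axis: axis)
                    let q = constrained.quaternion
                    model.orientation = baseOrientation * simd_quatf(
                        ix: Float(q.imag.x), iy: Float(q.imag.y), iz: Float(q.imag.z), r: Float(q.real)
                    )
                }
                .onEnded { _ in
                    baseOrientation = model.orientation
                    lockedAxis = nil
                }
        )
    }
}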
Is it possible to do this?
If so, what would be the best way to do it?
A code example would be greatly appreciated.
Regards
Tof
I want to display a model's image in the WindowGroup window. This image is not unique. To display the model's image, how should I convert the model into an image?
As in the title: openImmersiveSpace works as expected. The ImmersiveSpace with the given ID opens normally, but the function never resolves to a Result. Here is a workaround I used to make sure it was on the UI thread and scenePhase was active:
@MainActor func openSpaceWithStateCheck() async {
    if scenePhase == .active {
        Task {
            switch await openImmersiveSpace(id: "RoomCaptureInteraction") {
            case .opened:
                isCapturingImagery = true
                break
            case .error:
                print("!! An error occurred when trying to open the immersive space captureRoomImagery")
            case .userCancelled:
                print("!! The user declined opening immersive space captureRoomImagery")
            @unknown default:
                print("!! unknown default result of opening space")
                break
            }
        }
    } else {
        print("Scene not active, deferring immersive space opening")
    }
}
I'm on visionOS 2.4 and SDK 2.2.
I have tried uninstalling the app and rebuilding, and I tried simply opening an empty ImmersiveSpace.
At least the ImmersiveSpace opens consistently, so I can work around it. Even dismissImmersiveSpace works normally and closes the immersive space. But a workaround seems ham-fisted.
Hello,
I'm working on a visionOS project that uses Reality Composer Pro, and we are managing our project files with Git.
We've noticed that simply opening and closing the Reality Composer Pro application consistently generates changes in the following files, even when no explicit modifications have been made by the developer:
{ProjectName}/Packages/RealityKitContent/Package.realitycomposerpro/PluginData/*******/ShaderGraphEditorPluginID/ShaderGraphEditorPluginID
{ProjectName}/Packages/RealityKitContent/Package.realitycomposerpro/WorkspaceData/SceneMetadataList.json
Could you please clarify the purpose of these files? Why do they appear as modified when no direct changes are made from our end?
More importantly, is it safe to add these files to our .gitignore to prevent them from being tracked by Git? We are concerned that ignoring these files might lead to unexpected issues or inconsistencies when other team members pull the latest changes, especially if these files contain critical project metadata or state that needs to be synchronized.
Any insights or recommended best practices for managing Reality Composer Pro projects with Git would be greatly appreciated.
Thank you for your time and assistance.
Is there any way to extend the video recording time in Reality Composer Pro beyond 3:00, such as by editing preferences in Terminal or some other workaround?
Is there any way to use the strap and a USB-C cable as a live video stream input source that would mirror to Quicktime or some other video capture tool?
I am assuming there is no online documentation or user manual for the strap, but please correct me if I'm wrong. Thank you.
Hi, iPadOS 26 beta folks,
I have apps using ARKit.
In iPadOS 26 beta, ARKit stops working after switching to other apps.
How to reproduce:
1. Enable Window Mode in iPadOS 26.
2. Launch my app and start an ARSession.
3. Switch to another app (the Settings app, etc.).
4. Switch back to my app.
Result: the AR camera feed stops updating.
I added debug prints to my ARSessionDelegate and found that after sessionWasInterrupted was called, sessionInterruptionEnded was never called.
sessionInterruptionEnded is called when Window Mode is disabled.
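For reference, this is roughly the delegate logging I used, simplified (the class name is mine):

import ARKit

// Simplified version of the logging I used; only the interruption callbacks matter here.
final class SessionLogger: NSObject, ARSessionDelegate {
    func sessionWasInterrupted(_ session: ARSession) {
        print("sessionWasInterrupted") // fires when I switch to another app
    }

    func sessionInterruptionEnded(_ session: ARSession) {
        print("sessionInterruptionEnded") // never fires while Window Mode is enabled
    }

    func session(_ session: ARSession, cameraDidChangeTrackingState camera: ARCamera) {
        print("tracking state:", camera.trackingState)
    }
}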
Is this just a bug in the 26 beta?
I suspect there is a similar problem with the non-AR camera.
Any idea?
Hi Nathaniel,
I spoke with you yesterday in the WWDC lab. Thanks for chatting with me! Is it possible to get a link to a doc that lists the key metrics I'd find in a RealityKit trace, so I know whether a metric is exceeding limits and probably causing a problem? Right now I just see numbers and have no idea if a metric is high or low :). This is specifically for a visionOS app.
Thanks,
Bob
Hello,
I was looking back into downloading the Tracking geographic locations in AR sample app from https://developer.apple.com/documentation/arkit/tracking-geographic-locations-in-ar
Unfortunately, the Download button links to the .zip of the DisplayingAPointCloudUsingSceneDepth sample project instead.
The exact same issue occurs when trying to download the sample code from https://developer.apple.com/documentation/ARKit/creating-a-fog-effect-using-scene-depth
I'm wondering whether those links are deliberately broken because of possible deprecations.
Thanks to any Apple Engineer willing to look into that.
I am trying to run widgets on visionOS 26. Specifically, I am trying to pin them to the simulator room's walls; however, I am unable to do so.
Is this a limitation with the visionOS simulator right now, or am I missing a trick here?
I'm trying to develop an immersive visionOS app in which you can move an Entity that has a PerspectiveCamera as its child around the immersive space, and render the camera's view in a 2D window.
According to this thread, this seems achievable using RealityRenderer. But when I added the scene entity loaded from realityKitContentBundle to realityRenderer.entities, I needed to clone all entities of the scene; otherwise, all entities in the immersive space disappear.
@Observable
@MainActor
final class OffscreenRenderModel {
    private let renderer: RealityRenderer
    private let colorTexture: MTLTexture

    init(scene: Entity) throws {
        renderer = try RealityRenderer()

        // If the scene's entities are not cloned, all entities in the immersive space will disappear.
        renderer.entities.append(scene.clone(recursive: true))

        let camera = PerspectiveCamera()
        renderer.activeCamera = camera
        renderer.entities.append(camera)
        ...
    }
}
Is this the expected behavior? Or is there any other way to do this (move a camera around the immersive space and render its output in a 2D window)?
Here is my sample code:
https://github.com/TAATHub/RealityKitPerspectiveCamera
Hi, I’m currently using Reality Composer Pro on macOS, and I’m trying to work with particle emitters using only the built-in tools — no Xcode, no custom Swift code.
Here’s what I’m trying to achieve:
1. Is it possible to trigger a particle emitter with a tap gesture, directly inside Reality Composer Pro?
2. Can I add a particle emitter to a timeline, so that it emits at a specific time during an animation or behavior sequence?
3. Most importantly, is there a way to precisely control emission behavior — such as when particles start or stop — using visual and intuitive controls, without scripting?
My goal is to set up particle effects that are interactive and timeline-controlled, but managed entirely through Reality Composer Pro’s GUI, not through code.
Any suggestions, tips, or examples would be very helpful. Thanks in advance!
In Reality Composer Pro 2.0 (448.120.2), if I use the "Create new scene in the project" button more than once, this feature doesn't seem to work.
The second time I use "Create new scene in the project", Reality Composer Pro displays a new empty USDA file in the project browser with the wrong icon (a yellow document icon instead of a 3D box icon).
Also, it doesn't create the new scene's USDA file on disk or display the scene in entity hierarchy browser or create a new tab.
Consequently, whenever I want to add more than one new scene to my project, I have to repeatedly quit and restart Reality Composer Pro.
In my use of RCP just now, I had to quit and restart RCP six times to create seven new scenes.
Is this a known issue?
Repro steps:
In a Reality Composer Pro project, create a new folder using the "add folder" icon button in the project browser.
Inside the new folder, click the "Create new scene in the project" icon button.
Click the "Create new scene in the project" icon button a second time.
Expected behavior:
A new USDA file is created on disk. The new USDA file's root entity appears in the entity hierarchy browser and a corresponding new tab is created.
Observed behavior:
The USDA file for this scene is not created on disk and it does not appear in the entity hierarchy browser in a new tab. In the project browser view, a yellow document icon appears and it does not appear to correspond to an actual USDA file.
Thank you for any insight you can provide about this issue.
I am using HelloPhotogrammetry in Xcode
I can make one model with something like HelloPhotogrammetry.main([path_to_folder_of_images, path_to_output/model.usdz, "-d", "medium", "-o", "unordered", "-f", "high"])
But how would I request several models simultaneously? I only want to vary the detail.
[ ("/Users/you/Desktop/model_medium.usdz", detail: .medium), ("/Users/you/Desktop/model_full.usdz", detail: .full), ("/Users/you/Desktop/model_raw.usdz", detail: .raw) ]
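One direction I was wondering about (untested on my side, and it bypasses HelloPhotogrammetry.main entirely) is driving PhotogrammetrySession directly, submitting one .modelFile request per detail level against the same image folder; the output paths below are just placeholders:

import Foundation
import RealityKit

// Untested sketch: one PhotogrammetrySession over the image folder,
// with one .modelFile request per detail level.
func makeModels(from imagesFolder: URL) async throws {
    let session = try PhotogrammetrySession(input: imagesFolder)

    try session.process(requests: [
        .modelFile(url: URL(fileURLWithPath: "/Users/you/Desktop/model_medium.usdz"), detail: .medium),
        .modelFile(url: URL(fileURLWithPath: "/Users/you/Desktop/model_full.usdz"), detail: .full),
        .modelFile(url: URL(fileURLWithPath: "/Users/you/Desktop/model_raw.usdz"), detail: .raw)
    ])

    // Keep the session alive and watch the requests finish.
    for try await output in session.outputs {
        switch output {
        case .requestComplete(let request, _):
            print("Finished:", request)
        case .requestError(let request, let error):
            print("Failed:", request, error)
        case .processingComplete:
            return
        default:
            break
        }
    }
}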
Hello everyone,
I've been trying for a few weeks now to convert a sequential series of meshes into a stop-motion animation in USDZ format.
In Unreal Engine, I’ve already figured out how to transform the sequential series of individual meshes into a smooth animation using the node system and arrays.
Unfortunately, that node-based animation logic cannot be exported to USDZ from either Unreal or Blender.
Because of this, I have tried several other methods to incorporate the animation logic. Here’s what I’ve tried so far:
I attempted to create the animation in Blender with Render-/Viewports, mapping it to keyframes. However, in my experience, Viewports are not supported in the conversion.
I tried aligning the vertices of individual objects and merging the frames using the Shrinkwrap modifier in Blender, then setting up a morph animation with keyframes. However, because the individual meshes are too different, this results in artifacts, and manually editing each mesh is too difficult for me to handle.
I placed all individual meshes at the same position and animated them sequentially by scaling them from 0 to 100 in keyframes (Frame 1 is visible for 10 frames, then scales down at frame 11, while Frame 2 becomes visible at frame 11, and so on). I also adjusted the keyframes so that the scaling happens in a "constant" manner rather than the default Bezier or linear interpolation. I then converted this animation to .abc, and the result initially looked good. However, some information is lost when converting it with OpenUSD. The animation does not maintain its intended jump-like behavior in USDZ format, and instead, the scaling of individual files is visible in the animation.
I tried using a Blender add-on (StepMotion), which allows the animation to be exported as .abc, but it can only be read in Blender or Unreal. Even in the preview, the animation is not displayed correctly, so converting the animation logic does not work either.
Unfortunately, I have no alternative way to create the animation, as the individual frames have been provided to me as meshes. So far, I haven’t found a way to implement this successfully.
I would be very grateful for any tips or ideas, as I am running out of options on how to make this work.
Thanks in advance!
When I am viewing an immersive space and open a spatial photo in Quick Look, the entire app interface is hidden to show the photo. Is there a memory limit? If the immersive space is not active, the application keeps its interface.
Hey Everyone, Happy New Year!
I wanted to see if you have seen this before. I have added an attachment to the RealityView as a child of an entity that has a Billboard component set on it. I wanted to create the effect that the attachment is offset by 0.5 meters from center and follows the device as you move around it. It works great until you try to click a button.
The attachment moves with the billboard, but the collision box around the attachment is not following it. If I position myself perfectly it works.
Video Example: https://youtu.be/4d9Vx7K8MmU
//
// ImmersiveView.swift
// Billboard Attachment
//
// Created by Justin Leger on 1/3/25.
//

import SwiftUI
import RealityKit
import RealityKitContent

struct ImmersiveView: View {
    var rootEntity = Entity()

    var body: some View {
        RealityView { content, attachments in
            // Add the initial RealityKit content
            let sphereEntity = ModelEntity(mesh: .generateSphere(radius: 0.1), materials: [SimpleMaterial(color: .red, roughness: 1, isMetallic: false)])
            sphereEntity.position = [0.0, 1.0, -2.0]

            let controlsPivotEntity = Entity()
            controlsPivotEntity.components[BillboardComponent.self] = .init()

            // Extract the attachment entity and disable it before it's used.
            if let controlsViewAttachmentEntity = attachments.entity(for: PlacedThingControls.attachmentId) {
                controlsViewAttachmentEntity.position.z = 0.5
                controlsPivotEntity.addChild(controlsViewAttachmentEntity)
                sphereEntity.addChild(controlsPivotEntity)
            }

            content.add(sphereEntity)
        }
        attachments: {
            Attachment(id: PlacedThingControls.attachmentId) {
                PlacedThingControls()
            }
        }
    }
}

#Preview(immersionStyle: .mixed) {
    ImmersiveView()
        .environment(AppModel())
}

struct PlacedThingControls: View {
    static let attachmentId = "placed-thing-3D-controls"

    var body: some View {
        VStack {
            HStack(spacing: 0) {
                Button {
                    print("🗺️🗺️🗺️ Map selected pieces")
                } label: {
                    Text("\(Image(systemName: "plus.square.dashed")) Manage Mesh Maps")
                        .fontWeight(.semibold)
                        .frame(maxWidth: .infinity)
                }
                .padding(.leading, 20)

                Spacer()

                Button(role: .destructive) {
                    print("🗑️🗑️🗑️ Delete selected pieces")
                } label: {
                    Label {
                        Text("Delete")
                    } icon: {
                        Image(systemName: "trash")
                    }
                    .labelStyle(.iconOnly)
                }
                .padding(.trailing, 20)
            }
            .padding(.vertical)
            .frame(minWidth: 320, maxWidth: 480)
        }
        .glassBackgroundEffect()
    }
}
Hello, since updating to beta 3 the sculpting sample app doesn't work; it crashes on launch.
It seems to be something in AnchorEntity or AccessoryAnchoringSource:
Referenced from: <00B81486-1A74-30A0-B75B-4B39E3AF57DF> /private/var/containers/Bundle/Application/3D2EBF59-19F0-4BF4-8567-6962AA36A2C6/delete.app/delete.debug.dylib
Expected in: <BAA9B221-78A1-3B99-AA2F-B8DFCD179FC7> /System/Library/Frameworks/RealityFoundation.framework/RealityFoundation
With the new ImagePresentationComponent in visionOS 26, how can text/overlays be shown on top of the image as seen in the Spatial Gallery app?
I made an animation in Blender using geometry nodes that I exported to a USDC file (then I used Reality Converter to convert it to USDZ). I can see the animation when viewing the file in the Finder, but it does not play after importing into RCP. Any idea how I can play the animation? Or can the animation be accessed through Xcode? A rough sketch of what I mean by that is below.
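This is the kind of thing I was imagining (untested on my side; "MyBlenderExport" is just a placeholder name for the converted USDZ in my RealityKitContent package):

import RealityKit
import RealityKitContent

// Untested guess: load the converted USDZ and play whatever animation
// resources RealityKit finds on it. "MyBlenderExport" is a placeholder name.
@MainActor
func addAnimatedEntity(to content: RealityViewContent) async {
    do {
        let entity = try await Entity(named: "MyBlenderExport", in: realityKitContentBundle)
        content.add(entity)
        if let animation = entity.availableAnimations.first {
            entity.playAnimation(animation.repeat())
        } else {
            print("RealityKit found no animation resources on this entity")
        }
    } catch {
        print("Failed to load entity:", error)
    }
}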
Thanks!