I am following this example to create a stereoscopic image: https://developer.apple.com/documentation/visionos/creating-stereoscopic-image-in-visionos
I would also like to add a corner radius to the stereoscopic RealityView. With ordinary SwiftUI views, we typically just use .clipShape(RoundedRectangle(cornerRadius: 32)):
struct StereoImage: View {
    var body: some View {
        let spacing: CGFloat = 10.0
        let padding: CGFloat = 40.0
        VStack(spacing: spacing) {
            Text("Stereoscopic Image Example")
                .font(.largeTitle)
            RealityView { content in
                let creator = StereoImageCreator()
                guard let entity = await creator.createImageEntity() else {
                    print("Failed to create the stereoscopic image entity.")
                    return
                }
                content.add(entity)
            }
            .frame(depth: .zero)
        }
        .padding(padding)
        .clipShape(RoundedRectangle(cornerRadius: 32)) // <= HERE!
    }
}
This doesn't seem to actually clip the RealityView shown in the sample above. I am guessing this is because the box in the RealityView has a non-zero z scale, which means it isn't on the same "layer" as its SwiftUI containers, and thus isn't clipped by the modifiers applied to the containers.
How can I properly apply a clipShape to RealityViews like this? Thanks!
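One workaround to consider, purely as an assumption on my part since the sample doesn't cover it: if the guess about the non-zero depth is right, you could bake the rounded corners into the source images themselves before StereoImageCreator turns them into textures, so the radius travels with the 3D content. A minimal Core Graphics sketch, where roundedImage is a hypothetical helper:

import UIKit

// Hypothetical helper: returns a copy of the image with its corners rounded,
// so the rounding is baked into the texture rather than relying on clipShape.
func roundedImage(_ image: UIImage, cornerRadius: CGFloat) -> UIImage {
    let renderer = UIGraphicsImageRenderer(size: image.size)
    return renderer.image { _ in
        let rect = CGRect(origin: .zero, size: image.size)
        UIBezierPath(roundedRect: rect, cornerRadius: cornerRadius).addClip()
        image.draw(in: rect)
    }
}

You would then feed the rounded left/right images into whatever creates the stereo textures; how well this matches the sample's pipeline is untested.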
Hi, I'm trying to create a virtual movie theater, but after running computeDiffuseReflectionUVs.py and applying the attenuation map, I noticed the light falloff effect just covers the objects. I used the Apple-provided attenuation map (I did not specify an attenuation map name in the Python script) with a sample size of 6000. I thought the Python script would calculate vertices and create shadows for, say, the backs of the chairs. Am I understanding this wrong?
Hi, I am trying to update which entities are visible in my RealityView. After the SwiftData set is updated, I have to restart the app for the change to appear in the RealityView.
Also, the RealityView does not close when I move to a different tab. It keeps everything running and tracking, leaving the model in the same location where I left it.
import SwiftUI
import RealityKit
import MountainLake
import SwiftData

struct RealityLakeView: View {
    @Environment(\.modelContext) private var context
    @Query private var items: [Item]

    var body: some View {
        RealityView { content in
            print("View Loaded")
            let lakeScene = try? await Entity(named: "Lake", in: mountainLakeBundle)
            let anchor = AnchorEntity(.plane(.horizontal, classification: .any, minimumBounds: SIMD2<Float>(0.2, 0.2)))

            @MainActor func addEntity(name: String) {
                if let lakeEntity = lakeScene?.findEntity(named: name) {
                    // Add the named entity to the anchor
                    anchor.addChild(lakeEntity)
                } else {
                    print(name + " entity not found in the Lake scene.")
                }
            }

            addEntity(name: "Island")
            for item in items {
                if item.enabled {
                    addEntity(name: item.value)
                }
            }

            // Add the horizontal plane anchor to the scene
            content.add(anchor)
            content.camera = .spatialTracking
        } placeholder: {
            ProgressView()
        }
        .edgesIgnoringSafeArea(.all)
    }
}

#Preview {
    RealityLakeView()
}
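A minimal sketch of one approach, assuming the same Item model, MountainLake package, and entity naming as above: move the SwiftData-driven work into RealityView's update closure, which re-runs when the @Query results change, so the scene can follow the data without restarting the app.

import SwiftUI
import RealityKit
import MountainLake
import SwiftData

struct RealityLakeViewSketch: View {
    @Query private var items: [Item]
    @State private var lakeScene: Entity?

    var body: some View {
        RealityView { content in
            // One-time setup: load the scene and add the plane anchor.
            lakeScene = try? await Entity(named: "Lake", in: mountainLakeBundle)
            let anchor = AnchorEntity(.plane(.horizontal, classification: .any,
                                             minimumBounds: SIMD2<Float>(0.2, 0.2)))
            content.add(anchor)
            content.camera = .spatialTracking
        } update: { content in
            // Re-runs whenever observed state (including the @Query results) changes.
            guard let anchor = content.entities.first as? AnchorEntity,
                  let lakeScene else { return }
            for item in items {
                let existing = anchor.findEntity(named: item.value)
                if item.enabled, existing == nil,
                   let entity = lakeScene.findEntity(named: item.value) {
                    anchor.addChild(entity)
                } else if !item.enabled {
                    existing?.removeFromParent()
                }
            }
        } placeholder: {
            ProgressView()
        }
    }
}

This doesn't address the tab-switching behavior, which is a separate question about how the containing view is torn down.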
Topic: Spatial Computing
SubTopic: General
Tags: Swift Packages, RealityKit, Reality Composer Pro, SwiftData
If I trigger the Apple rating modal in an immersive space, it appears on the ground at (0, 0, 0). I need it to appear in front of the user, the way the push notification permission prompt and other permission requests do.
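A hedged workaround sketch, assuming the prompt comes from StoreKit's standard review request (I haven't verified this fixes the (0, 0, 0) placement): trigger the request from a regular window scene rather than from inside the immersive space, so the system has a window to anchor the prompt to.

import SwiftUI
import StoreKit

struct RatingPromptView: View {
    // StoreKit's standard review-request action.
    @Environment(\.requestReview) private var requestReview

    var body: some View {
        Button("Rate this app") {
            requestReview()
        }
    }
}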
We have a project which is currently being built as an XCFramework.
The framework contains a custom component to be used with entities in Reality Composer Pro.
I have tried to set the RCP Package.swift file to reference the framework package in its dependencies.
Nothing that I do with the folder path to reference the code is working.
Do I need to change the project to use Swift source code instead of an XCFramework?
The component needs to be in the framework because there is a class in the framework that works directly with the custom component.
I am able to reference the XCFramework as a Swift Package from other projects.
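For reference, a sketch of how a binary framework can be declared in a package manifest (names and paths are placeholders; whether Reality Composer Pro's generated package tolerates this is exactly the open question here):

// swift-tools-version: 5.9
import PackageDescription

let package = Package(
    name: "RealityKitContent",
    platforms: [.visionOS(.v1)],
    products: [
        .library(name: "RealityKitContent", targets: ["RealityKitContent"])
    ],
    targets: [
        // Placeholder path: the XCFramework checked in next to the RCP package.
        .binaryTarget(name: "MyFramework", path: "../MyFramework.xcframework"),
        .target(name: "RealityKitContent", dependencies: ["MyFramework"])
    ]
)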
Topic: Spatial Computing
SubTopic: Reality Composer Pro
We use Unity 6 + visionOS 2.2 to develop an MR application.
App mode: RealityKit with PolySpatial.
In actual testing, we found that when I move more than roughly 80-100 meters away from the position where the application started, my current position is reset to Vector.zero, which makes the experience very bad. Is anyone else seeing this problem? Is there a solution?
Topic: Spatial Computing
SubTopic: General
I’ve submitted the following feedback:
FB13820942 (List Outline View Not Using Accent Color on Disclosure Caret for visionOS)
I’d appreciate help on this, to see whether I’m doing something wrong or whether this is simply how visionOS currently works, in which case the feedback stands as a suggestion.
We are building an AR experience for deployment on iPhones. We are using Unity, but it looks as if Reality Composer Pro has better features for spatial audio. I am not sure whether Reality Composer Pro can only be used for Vision Pro, or whether it can also be used for deployment on iPhone or iPad.
Greetings. I am having this issue with a Unity PolySpatial visionOS app.
We have our main Bounded Volume for our app.
We have other Native UI windows that appear when we interact with objects in our Bounded Volume.
If a user closes our main Bounded Volume...sometimes it quits the app. Sometimes it doesn't.
If we go back to the home screen and reopen the app, our main Bounded Volume doesn't always appear, and just the Native UI windows we left open are visible. But, we can sometimes still hear sounds that are playing in our Bounded Volume.
What solutions are there to make sure our Bounded Volume always appears when the app is open?
Hi everyone,
I'm creating an educational App that allows doing computational design in an immersive environment with the Vision Pro. The App is free and can be found here:
https://apps.apple.com/us/app/arcade-topology/id6742103633
The problem I have is that the mesh of voxels I currently create uses ModelEntity, and I recently read that this is horrible for scalability. I already start to see issues when I try to use thousands of voxels. I also read somewhere that I should take advantage of the GPU and use Metal to that end. I was wondering if someone could point me to a tutorial or article that discusses this. In essence, I need to create a 3D voxel mesh, and those voxels have to update their opacity within an iterative loop.
Thanks!
—Alejandro
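A minimal sketch of one common mitigation, assuming nothing about the app's actual data model: merge an entire chunk of voxels into a single MeshResource, so a chunk renders as one ModelEntity instead of thousands of separate entities.

import RealityKit

// Sketch only: builds one mesh containing a cube for every voxel center.
func makeChunkMesh(voxelCenters: [SIMD3<Float>], size: Float) throws -> MeshResource {
    var positions: [SIMD3<Float>] = []
    var normals: [SIMD3<Float>] = []
    var indices: [UInt32] = []

    // Each cube face: outward normal and its four corners (counter-clockwise).
    let faces: [(n: SIMD3<Float>, corners: [SIMD3<Float>])] = [
        (.init(0, 0, 1),  [.init(-1, -1, 1),  .init(1, -1, 1),   .init(1, 1, 1),   .init(-1, 1, 1)]),
        (.init(0, 0, -1), [.init(1, -1, -1),  .init(-1, -1, -1), .init(-1, 1, -1), .init(1, 1, -1)]),
        (.init(1, 0, 0),  [.init(1, -1, 1),   .init(1, -1, -1),  .init(1, 1, -1),  .init(1, 1, 1)]),
        (.init(-1, 0, 0), [.init(-1, -1, -1), .init(-1, -1, 1),  .init(-1, 1, 1),  .init(-1, 1, -1)]),
        (.init(0, 1, 0),  [.init(-1, 1, 1),   .init(1, 1, 1),    .init(1, 1, -1),  .init(-1, 1, -1)]),
        (.init(0, -1, 0), [.init(-1, -1, -1), .init(1, -1, -1),  .init(1, -1, 1),  .init(-1, -1, 1)])
    ]

    let half = size / 2
    for center in voxelCenters {
        for face in faces {
            let base = UInt32(positions.count)
            for corner in face.corners {
                positions.append(center + corner * half)
                normals.append(face.n)
            }
            // Two triangles per face.
            indices.append(contentsOf: [base, base + 1, base + 2, base, base + 2, base + 3])
        }
    }

    var descriptor = MeshDescriptor(name: "voxelChunk")
    descriptor.positions = MeshBuffer(positions)
    descriptor.normals = MeshBuffer(normals)
    descriptor.primitives = .triangles(indices)
    return try MeshResource.generate(from: [descriptor])
}

The trade-off is that per-voxel opacity no longer comes for free from a per-entity OpacityComponent; it has to come from a custom material (e.g. per-vertex data) or from regenerating the chunk mesh, which is where Metal and lower-level mesh APIs usually get suggested.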
I want to step into a portal world. I know PortalCrossingComponent can make an entity cross the portal, but how do I make the device (the viewer) cross into the portal world?
I need to loop my videoMaterial and I don't know how to make it happen in my code.
I have included an image of my videoMaterial code.
Any help making this happen will be greatly appreciated.
Thank you,
Christopher
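Since the code image isn't visible here, a generic sketch of the usual looping approach (the URL and names are placeholders): drive the VideoMaterial with an AVQueuePlayer plus AVPlayerLooper.

import AVFoundation
import RealityKit

// Placeholder URL; use whatever asset the material currently plays.
let videoURL = URL(fileURLWithPath: "/path/to/video.mp4")

let playerItem = AVPlayerItem(url: videoURL)
let queuePlayer = AVQueuePlayer()
// Keep a strong reference to the looper (e.g. in a stored property),
// otherwise looping stops when it is deallocated.
let looper = AVPlayerLooper(player: queuePlayer, templateItem: playerItem)

// VideoMaterial accepts the queue player because AVQueuePlayer is an AVPlayer.
let videoMaterial = VideoMaterial(avPlayer: queuePlayer)
queuePlayer.play()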
Topic: Spatial Computing
SubTopic: ARKit
When scanning multiple rooms (10+) in a single structure using ARWorldMap for coordinate-space consistency, RoomCaptureSession throws CaptureError.exceedSceneSizeLimit. I am doing exactly what the instructions here (https://developer.apple.com/documentation/roomplan/scanning-the-rooms-of-a-single-structure) describe: keeping the underlying ARSession alive (by calling captureSession.stop(pause: false)) and saving the results before the user moves to the next room. Scanning 11 or so rooms causes the user to hit the exceedSceneSizeLimit error. The ARWorldMap is about 58 MB and is always around this size when the issue occurs. No anchors are present, and all the data seems to be tracking data.
On iPad devices (where I do not see this issue) the ARWorldMap grows in size at a significantly slower rate.
I save the ARWorldMap after each room is scanned and confirmed by the user. If I use the ARWorldMap to initialize the ARSession (as described in the docs), the session immediately errors with "exceedSceneSizeLimit" once captureSession.run() is executed. Occasionally it will allow me/the user to scan again, but it either breaks mid-scan or on the following scan.
This has been working fine for the past 2 years and users have been able to scan dozens of rooms without issue. It seems to have become a problem only lately.
I would expect the ARWorldMap to be allowed to grow much bigger. At this point I can scan more of my house in a single scan than I can when using separate capture sessions.
A few observations:
This happens on my iPhone 15 Pro Max and my iPhone 17 Pro, but not my iPad M4 (maybe memory related?). It is possible that it would happen on the iPad too if I scanned many more rooms.
I have tried things such as resetting the ARConfiguration on the underlying ARSession to reset some state, but this doesn't work.
I have tried creating a new ARWorldMap and moving its origin to match the older map, to clear out the tracking data. This almost works, but causes a mess of issues when the user moves at all, due to the unshared coordinate space.
I believe there are three active issues regarding this: FB14454922, FB15035788, FB20642944
Could we get an update on this issue? It is a production issue and severely limits the user experience in my production application.
I'm trying to add a feature to my app to allow a user to import items from other apps, like Safari, via the share sheet.
I've done this many times on iOS/iPadOS easily with a Share Extension. From what I can tell, Xcode tells me share extensions are not available on visionOS - though my experience on device tells me differently (It seems Reminders, Notes & more implement them somehow.) I was finally able to get it working on device only...but I can now no longer test in the simulator, and have not found a way to distribute this app.
When attempting to run on the simulator, I get this issue:
Please try again later. Appex bundle at /Users/jason/Library/Developer/CoreSimulator/Devices/09A70160-4F4F-4F5E-B679-F6F7D876D7EF/data/Library/Caches/com.apple.mobile.installd.staging/temp.6OAEZp/extracted/LaunchBar.app/PlugIns/LaunchBarShareExtension.appex with id co.swiftfox.LaunchBar.ShareExtension specifies a value (com.apple.share-services) for the NSExtensionPointIdentifier key in the NSExtension dictionary in its Info.plist that does not correspond to a known extension point.
When trying to archive an upload to test flight, I get this similar error:
Invalid Info.plist value. The value for the key 'DTPlatformName' in bundle LaunchBar.app/PlugIns/LaunchBarShareExtension.appex is invalid. (ID: 207610c7-b7e1-48be-959b-22a43cd32d16)
The app is for visionOS only - which I'm thinking might be the problem? The share extension is "Designed For iPhone" and requires me to include iPhone as a run destination. In the worst case I can build an iPhone UI for the app but I'd rather not, as it is very specific to visionOS.
Has anyone successfully launched a share extension on a visionOS only app? I have an iPad app with a share extension that shows up fine on visionOS, but the issue seems to be specifically with visionOS only apps.
Topic: Spatial Computing
SubTopic: General
Can we constrain or clamp translation with the new ManipulationComponent? For example, allow free movement within certain bounds.
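I'm not aware of a built-in translation constraint on ManipulationComponent, so as a rough sketch only: clamp the entity's position on every scene update, whatever moved it. Bounds and names are placeholders.

import RealityKit
import simd

@MainActor
func clampTranslation(of entity: Entity, in content: RealityViewContent) -> EventSubscription {
    // Placeholder bounds, expressed in the entity's parent space.
    let bounds = BoundingBox(min: [-0.3, 0.0, -0.3], max: [0.3, 0.5, 0.3])
    return content.subscribe(to: SceneEvents.Update.self) { _ in
        entity.position = simd_clamp(entity.position, bounds.min, bounds.max)
    }
}

Store the returned EventSubscription for as long as the clamp should stay active.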
In my Reality Composer Pro workflow for Vision Pro development, I’m using xcrun realitytool image to pre-compress textures into .ktx format, typically using ASTC block compression. These textures are used for cubemaps and environment assets.
I’ve noticed that regardless of the image content—whether it’s a highly detailed photo or a completely black image—once compressed with the same ASTC block size (e.g., ASTC_8x8), the resulting .ktx file size is nearly identical. There appears to be no content-aware logic that adapts the compression ratio to the actual texture complexity.
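For what it's worth, that matches how ASTC works: it is a fixed-rate format, so every block occupies 128 bits regardless of content. At ASTC_8x8, a 2048 x 2048 face is always (2048 / 8) x (2048 / 8) x 16 bytes = 1,048,576 bytes, about 1 MiB, before any outer file-level compression; image complexity affects quality, not size.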
In contrast, Unreal Engine behaves differently: even when all cubemap faces are imported at the same resolution as DDS textures, the engine performs content-aware compression during packaging:
Low-complexity images are compressed more aggressively
The final packaged file size varies based on content complexity
Since Reality Composer Pro requires textures to be pre-compressed as .ktx, there’s no opportunity for runtime optimization or per-image compression adjustment.
Just wondering: is there any recommended way to implement content-aware compression for .ktx textures in Reality Composer Pro?
Or any best practices to optimize .ktx sizes based on image complexity?
Thanks!
Seeing this magical sand table, the unfolding and folding effects are similar to spreading out cards, which is very interesting, but I don't know how to achieve it. I want to know whether there are ways to achieve this effect, and I'd appreciate some ideas. Can this effect be achieved with the existing APIs?
Hi,
I'm encountering an issue in our app that uses RoomPlan and ARSession for scanning.
After prolonged use, especially under heavy load from both the scanning process and other unrelated app operations, the iPhone becomes very hot, and the following warning begins to appear more frequently:
"ARSession <0x107559680>: The delegate of ARSession is retaining 11 ARFrames. The camera will stop delivering camera images if the delegate keeps holding on to too many ARFrames. This could be a threading or memory management issue in the delegate and should be fixed."
I was able to reproduce this behavior using Apple’s RoomPlanExampleApp, with only one change: I introduced a CPU-intensive workload at the end of the startSession() function:
DispatchQueue.global().asyncAfter(deadline: .now() + 5) {
    for i in 0..<4 {
        var value = 10_000
        DispatchQueue.global().async {
            while true {
                value *= 10_000
                value /= 10_000
                value ^= 10_000
                value = 10_000
            }
        }
    }
}
I suspect this is a RoomPlan API problem, which is why I filed a feedback: 17441091
Can you help me write code that can pick up an element a bit far from me, bring it near to me, flick it a bit, and then send it back to its original position when I release it?
Thanks a lot,
Christophe
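Not a full answer, but a rough starting point under some assumptions (the entity is input-targetable and collidable, and the "flick" physics is left out): drag the entity with a gesture, remember its original transform, and animate it back when the gesture ends.

import SwiftUI
import RealityKit

struct DraggableModelSketch: View {
    @State private var originalTransform: Transform?

    var body: some View {
        RealityView { content in
            // Placeholder entity; any input-targetable entity works the same way.
            let entity = ModelEntity(mesh: .generateSphere(radius: 0.05),
                                     materials: [SimpleMaterial()])
            entity.components.set(InputTargetComponent())
            entity.generateCollisionShapes(recursive: false)
            entity.position = [0, 0, -1] // "a bit far from me"
            content.add(entity)
        }
        .gesture(
            DragGesture()
                .targetedToAnyEntity()
                .onChanged { value in
                    if originalTransform == nil {
                        originalTransform = value.entity.transform
                    }
                    // Follow the hand/pointer while dragging.
                    value.entity.position = value.convert(value.location3D,
                                                          from: .local,
                                                          to: value.entity.parent!)
                }
                .onEnded { value in
                    // Animate back to where the entity started.
                    if let originalTransform {
                        value.entity.move(to: originalTransform,
                                          relativeTo: value.entity.parent,
                                          duration: 0.4,
                                          timingFunction: .easeInOut)
                    }
                    originalTransform = nil
                }
        )
    }
}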
If I place the .usdz file in the project directory alongside other .swift files, ModelEntity loads it perfectly. However, if I try to load the same file from Reality Composer Pro under RealityKitContent.rkassets, I get the error: resourceNotFound("heart").
Could someone help me with this? Thank you so much
Code:
//
//  TestttttttApp.swift
//  Testtttttt
//
//  Created by Zhendong Chen on 2/17/25.
//

import SwiftUI

@main
struct TestttttttApp: App {
    var body: some Scene {
        WindowGroup {
            ContentView()
        }
        .windowStyle(.volumetric)
    }
}

//
//  ContentView.swift
//  Testtttttt
//
//  Created by Zhendong Chen on 2/17/25.
//

import SwiftUI
import RealityKit
import RealityKitContent

struct ContentView: View {
    @State private var enlarge = false

    var body: some View {
        RealityView { content in
            do {
                // MARK: Works
                let scene = try await ModelEntity(named: "heart")
                content.add(scene)

                // MARK: Doesn't work
                // let scene = try await ModelEntity(named: "heart", in: realityKitContentBundle)
                // content.add(scene)
            } catch {
                print(error)
            }
        }
    }
}

#Preview(windowStyle: .volumetric) {
    ContentView()
}
Topic: Spatial Computing
SubTopic: Reality Composer Pro
Tags: Reality Composer, RealityKit, Reality Composer Pro, visionOS