Hello — I shipped an App Store build that signs in to Game Center using the Apple Unity Plugins (GameKit). The login banner appears, but my app still doesn't show up in Game Center's "All activity" ("You started playing XXX 2d ago").
What I’ve done
Call await GKLocalPlayer.Authenticate();
“Game Center” is enabled for the current version in App Store Connect
Confirmed: other App Store games do appear under “All Activity” on the same device/account
Timeline: This is the first version that enables Game Center (not the app’s first release), and it has been about 2 hours since this build went live.
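For reference, the Unity call above corresponds to GameKit's native authentication handler, roughly like this minimal Swift sketch (presenting from a UIViewController is an assumption about the host app, since the real project goes through the Unity plugin):
import GameKit
import UIKit

func authenticateLocalPlayer(presentingFrom viewController: UIViewController) {
    GKLocalPlayer.local.authenticateHandler = { authViewController, error in
        if let authViewController {
            // GameKit wants to show its own sign-in UI.
            viewController.present(authViewController, animated: true)
        } else if GKLocalPlayer.local.isAuthenticated {
            print("Authenticated as \(GKLocalPlayer.local.displayName)")
        } else if let error {
            print("Game Center authentication failed: \(error)")
        }
    }
}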
Questions
Is authentication alone sufficient for “Recently Played,” or is at least one Game Center component (leaderboards, achievements, activities, multiplayer) required?
Is there a typical propagation delay before “Recently Played” starts showing a newly enabled app/version?
Is there anything else I should configure in App Store Connect or entitlements to make “Recently Played” visible?
Thanks for any help.
So I get JPEG data in my app. Previously I was using the higher level NSBitmapImageRep API and just feeding the JPEG data to it.
But now I've noticed on Sonoma that if I get a JPEG in the CMYK color space, NSBitmapImageRep renders mostly black and is corrupted. So I'm trying to drop down to the lower-level APIs. Specifically, I grab a CGImageRef and am trying to use the Accelerate API to convert it to another format (to hopefully work around the issue)...
CGImageRef sourceCGImage = CGImageCreateWithJPEGDataProvider(jpegDataProvider,
                                                             NULL,
                                                             shouldInterpolate,
                                                             kCGRenderingIntentDefault);
Now I use vImageConverter_CreateWithCGImageFormat... with the following values for source and destination formats:
Source format: (derived from sourceCGImage)
bitsPerComponent = 8
bitsPerPixel = 32
colorSpace = (kCGColorSpaceICCBased; kCGColorSpaceModelCMYK; Generic CMYK Profile)
bitmapInfo = kCGBitmapByteOrderDefault
version = 0
decode = 0x000060000147f780
renderingIntent = kCGRenderingIntentDefault
Destination format:
bitsPerComponent = 8
bitsPerPixel = 24
colorSpace = (DeviceRGB)
bitmapInfo = 8197
version = 0
decode = 0x0000000000000000
renderingIntent = kCGRenderingIntentDefault
But vImageConverter_CreateWithCGImageFormat fails with kvImageInvalidImageFormat. Now, if I change the destination format to use 32 bitsPerPixel and use alpha in the bitmapInfo, vImageConverter_CreateWithCGImageFormat does not return an error, but I get a black image, just like with NSBitmapImageRep.
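As a cross-check, here is a minimal Swift sketch of the same conversion using the Accelerate overlay with a 32-bitsPerPixel RGBA destination; the sRGB color space and premultiplied-last alpha are assumptions, not necessarily what the final pipeline needs:
import Accelerate
import CoreGraphics

// Sketch: convert a CMYK-encoded CGImage to 8-bit RGBA with vImage.
func convertToRGBA(_ source: CGImage) -> CGImage? {
    // Describe the source exactly as the CGImage reports itself (ICC-based CMYK, 32 bpp).
    guard let sourceFormat = vImage_CGImageFormat(cgImage: source),
          let srgb = CGColorSpace(name: CGColorSpace.sRGB),
          let destFormat = vImage_CGImageFormat(
              bitsPerComponent: 8,
              bitsPerPixel: 32, // 4 channels x 8 bits; must match the alpha setting below
              colorSpace: srgb,
              bitmapInfo: CGBitmapInfo(rawValue: CGImageAlphaInfo.premultipliedLast.rawValue))
    else { return nil }

    do {
        let converter = try vImageConverter.make(sourceFormat: sourceFormat,
                                                 destinationFormat: destFormat)
        let sourceBuffer = try vImage_Buffer(cgImage: source, format: sourceFormat)
        defer { sourceBuffer.free() }

        var destBuffer = try vImage_Buffer(width: Int(sourceBuffer.width),
                                           height: Int(sourceBuffer.height),
                                           bitsPerPixel: destFormat.bitsPerPixel)
        defer { destBuffer.free() }

        try converter.convert(source: sourceBuffer, destination: &destBuffer)
        return try destBuffer.createCGImage(format: destFormat)
    } catch {
        return nil
    }
}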
Apple, please bring back SceneKit.
Hi,
I'm rewriting my game from SceneKit to RealityKit, and I'm having trouble implementing the following scenario:
I tap on the iPhone screen to select an Entity that I want to drag.
If an Entity was tapped, it should then be possible to drag it left, right, etc.
SceneKit solution:
func CGPointToSCNVector3(_ view: SCNView, depth: Float, point: CGPoint) -> SCNVector3 {
    let projectedOrigin = view.projectPoint(SCNVector3Make(0, 0, Float(depth)))
    let locationWithz = SCNVector3Make(Float(point.x), Float(point.y), Float(projectedOrigin.z))
    return view.unprojectPoint(locationWithz)
}
and then I was calling:
SCNView().hitTest(location, options: [SCNHitTestOption.firstFoundOnly:true])
the code was called inside of the UIPanGestureRecognizer in my UIViewController.
Could I reuse that code, or should I go with the SwiftUI approach - something like this:
var body: some View {
    RealityView {
        // ...
    }
    .gesture(TapGesture().onEnded {
    })
}
?
I already have this code:
@State private var location: CGPoint?
.onTapGesture { location in
self.location = location
}
I'm trying to identify the entity that was tapped within the RealityView like this:
RealityView { content in
    let box: ModelEntity = createBox() // for now there is only one box, however there will be many boxes
    content.add(box)

    let anchor = AnchorEntity(world: [0, 0, 0])
    content.add(anchor)

    _ = content.subscribe(to: SceneEvents.Update.self) { event in
        // TODO: find the tapped entity, so that it can be dragged inside the DragGesture()
    }
}
Any help would be appreciated.
I also noticed that if I create a TapGesture like this:
TapGesture(count: 1)
.targetedToAnyEntity()
and add it to my view using .gesture() then it is not triggered.
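One possible direction, sketched below under the assumption that createBox() returns a ModelEntity as above: targeted gestures only fire for entities that carry both an InputTargetComponent and collision shapes, and the gesture value then hands you the tapped entity directly, so no manual unprojection is needed.
import SwiftUI
import RealityKit

struct TapToSelectView: View {
    @State private var selectedEntity: Entity?

    var body: some View {
        RealityView { content in
            let box = createBox()                        // helper from above (assumed ModelEntity)
            box.components.set(InputTargetComponent())   // make the entity a gesture target
            box.generateCollisionShapes(recursive: true) // gestures hit-test against collision shapes
            content.add(box)
        }
        .gesture(
            SpatialTapGesture()
                .targetedToAnyEntity()
                .onEnded { value in
                    // value.entity is the tapped entity; remember it so a DragGesture can move it.
                    selectedEntity = value.entity
                }
        )
    }
}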
Hi,
I am a multimedia and graphics researcher, and I am wondering whether the OpenGL API and drivers will be removed after the OS 26 releases:
macOS 26
iOS 26
iPadOS 26
visionOS 26
I am asking because most of the libraries I use depend on OpenGL, such as CGAL, libigl, immediate-mode UI libraries, NanoVG, NanoGUI, and Bullet Physics. Transitioning to Vulkan or Metal while using and learning those libraries is just not viable.
I am the sole developer, and I just wanted to ask.
Regards.
Hi there,
Is it possible to customize the Metal Performance HUD on Apple TV, similar to how it can be done on iPhone & iPad?
I'd like to see things like compiled shaders for my apps on tvOS.
I'm running into an issue with collisions between two entities that have a character controller component. In the collision handler for moveCharacter(), the collision has both hitEntity and characterEntity set to the same object: the entity that was moved with moveCharacter().
The below example configures 3 objects.
stationary sphere with character controller
falling sphere with character controller
a stationary cube with a collision component
If the falling sphere hits the stationary sphere, the collision handler reports both hitEntity and characterEntity to be the falling sphere. I would expect hitEntity to be the stationary sphere and characterEntity to be the falling sphere.
If the falling sphere hits the cube with a collision component, the hitEntity is the cube and the characterEntity is the falling sphere, as expected.
Is this the expected behavior? The entities behave as expected visually; however, if I want the spheres to react differently depending on which character they collided with, I am not getting the expected results. For example: if a player-controlled character collides with an NPC, exchange resources with the NPC; if the player collides with an enemy, take damage (a sketch of this kind of dispatch follows the example below).
import SwiftUI
import RealityKit

struct ContentView: View {
    @State var root: Entity = Entity()
    @State var stationary: Entity = createCharacter(named: "stationary", radius: 0.05, color: .blue)
    @State var falling: Entity = createCharacter(named: "falling", radius: 0.05, color: .red)
    @State var collisionCube: Entity = createCollisionCube(named: "cube", size: 0.1, color: .green)

    //relative to root
    @State var fallFrom: SIMD3<Float> = [0, 0.5, 0]

    var body: some View {
        RealityView { content in
            content.add(root)
            root.position = [0, -0.5, 0.0]

            root.addChild(stationary)
            stationary.position = [0, 0.05, 0]

            root.addChild(falling)
            falling.position = fallFrom

            root.addChild(collisionCube)
            collisionCube.position = [0.2, 0, 0]
            collisionCube.components.set(InputTargetComponent())
        }
        .gesture(SpatialTapGesture().targetedToAnyEntity().onEnded { tap in
            let tapPosition = tap.entity.position(relativeTo: root)
            falling.components.remove(FallComponent.self)
            falling.teleportCharacter(to: tapPosition + fallFrom, relativeTo: root)
        })
        .toolbar {
            ToolbarItemGroup(placement: .bottomOrnament) {
                HStack {
                    Button("Drop") {
                        falling.components.set(FallComponent(speed: 0.4))
                    }
                    Button("Reset") {
                        falling.components.remove(FallComponent.self)
                        falling.teleportCharacter(to: fallFrom, relativeTo: root)
                    }
                }
            }
        }
    }
}

@MainActor
func createCharacter(named name: String, radius: Float, color: UIColor) -> Entity {
    let character = ModelEntity(mesh: .generateSphere(radius: radius), materials: [SimpleMaterial(color: color, isMetallic: false)])
    character.name = name
    character.components.set(CharacterControllerComponent(radius: radius, height: radius))
    return character
}

@MainActor
func createCollisionCube(named name: String, size: Float, color: UIColor) -> Entity {
    let cube = ModelEntity(mesh: .generateBox(size: size), materials: [SimpleMaterial(color: color, isMetallic: false)])
    cube.name = name
    cube.generateCollisionShapes(recursive: true)
    return cube
}

struct FallComponent: Component {
    let speed: Float
}

struct FallSystem: System {
    static let predicate: QueryPredicate<Entity> = .has(FallComponent.self) && .has(CharacterControllerComponent.self)
    static let query: EntityQuery = .init(where: predicate)

    let down: SIMD3<Float> = [0, -1, 0]

    init(scene: RealityKit.Scene) {
    }

    func update(context: SceneUpdateContext) {
        let deltaTime = Float(context.deltaTime)
        for entity in context.entities(matching: Self.query, updatingSystemWhen: .rendering) {
            let speed = entity.components[FallComponent.self]?.speed ?? 0.5
            entity.moveCharacter(by: down * speed * deltaTime, deltaTime: deltaTime, relativeTo: nil) { collision in
                if collision.hitEntity == collision.characterEntity {
                    print("hit entity has collided with itself")
                }
                print("\(collision.characterEntity.name) collided with \(collision.hitEntity.name) ")
            }
        }
    }
}

#Preview(windowStyle: .volumetric) {
    ContentView()
}
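For illustration, this is the kind of dispatch I'd like to write in the collision handler (the "npc" and "enemy" names are hypothetical), which only works if hitEntity identifies the other character rather than the moving one:
import RealityKit

func handleCharacterCollision(_ collision: CharacterControllerComponent.Collision) {
    // React based on what the moving character ran into.
    switch collision.hitEntity.name {
    case "npc":
        print("exchange resources with \(collision.hitEntity.name)")
    case "enemy":
        print("take damage from \(collision.hitEntity.name)")
    default:
        break
    }
}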
I am developing a macOS terminal app, running on an M4 Pro, and using Metal.
I am not able to use float8 or float16; both report "Variable has incomplete type 'float16' (aka '__Reserved_Name__Do_not_use_float16')".
Based on the system, I should be able to use these. Either it is because the project is also compiling for Intel, where they are not allowed, or something else; either way, I have not been able to figure out how to get past this.
Is there a compiler setting I need to set to make this work? If so, which one, and what value does it need? I only want to run this on Apple silicon (M-series) processors on the latest OS version, so I am not interested in an Intel version or backward compatibility.
If I compile a compute kernel with a call to texture.read(), it fails with the following error: "Error Domain=AGXMetalG13X Code=3 "Encountered unlowered function call to air.get_read_sampler" UserInfo={NSLocalizedDescription=Encountered unlowered function call to air.get_read_sampler}."
This error occurs on both macOS and iOS 26 Beta 5, but not when running in a simulator or in a playground. It does not occur on a macOS Sequoia VM. It occurs whether I use the old Metal 3 or the new Metal 4 compilation method.
A workaround would be to use a sampler, but according to the feature tables, all platforms support reading from textures of all formats.
Below is a minimal example which produces the error:
let device = MTLCreateSystemDefaultDevice()!
let library = device.makeDefaultLibrary()!
let computeFunction = library.makeFunction(name: "compute_test")!

do {
    let pipeline = try device.makeComputePipelineState(function: computeFunction)
    debugPrint(pipeline)
} catch {
    debugPrint("Metal 3 failed with error:\n\(error)")
}
#import <metal_stdlib>
using namespace metal;

kernel void compute_test(uint2 gid [[thread_position_in_grid]],
                         texture2d<float, access::read> in [[texture(0)]],
                         texture2d<float, access::write> out [[texture(1)]]) {
    out.write(in.read(gid), gid);
}
I filed feedback FB19530049.
Hello, we are working on an iOS game project, and as it progresses, the project grows larger and larger. Because we use other game dependencies and libraries, "larger and larger" refers to the whole project; the source files we integrate and compile with Xcode are not that many. Now we seem to have hit a bottleneck: when I add new files, or add functions to existing files to implement a new feature, the Xcode build gets stuck at "Indexing | Initializing datastore" forever and never produces a final build.
macOS 15.1, Xcode 16.2
Can you suggest any solutions to this problem?
Also submitted Feedback ID #FB18432749
RealityKit spatial audio crackles and pops on iOS 26.0 beta 5.
It works correctly on iOS 18.6 and visionOS 26.0 beta 5.
The APIs used are AudioPlaybackController, Entity.prepareAudio, Entity.play
Videos of the expected and observed behavior are attached to the feedback FB19423059.
The audio should be a consistent, repeating sound, but it seems oddly abbreviated and the volume varies unexpectedly.
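For reference, a minimal sketch of the playback path in use (the file name and looping configuration are placeholders):
import RealityKit

func playLoopingSpatialAudio(on entity: Entity) throws {
    // Give the entity a spatial audio component so the sound is spatialized at its position.
    entity.components.set(SpatialAudioComponent())
    let resource = try AudioFileResource.load(named: "loop.wav",
                                              configuration: .init(shouldLoop: true))
    let controller: AudioPlaybackController = entity.prepareAudio(resource)
    controller.play()
}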
Thank you for investigating this issue.
Now that SceneKit has been marked as soft deprecated, is there a planned date or timeframe when it will be completely removed from iOS? I’m concerned about how long my existing SceneKit-based game will continue to work, especially as an indie developer without the resources for a quick rewrite to RealityKit.
For an app of mine I use CGSetDisplayTransferByTable to adjust the gamma table of the device. Since macOS Tahoe, these modifications are silently ignored. The display's actual gamma curve remains unchanged despite the API reporting successful completion.
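Here is a minimal sketch of the kind of call that is now silently ignored (the 0.8 scale is just an illustrative tweak):
import CoreGraphics

let display = CGMainDisplayID()
let tableSize: UInt32 = 256
// A simple linear table scaled down slightly, applied to all three channels.
let table: [CGGammaValue] = (0..<tableSize).map { CGGammaValue($0) / CGGammaValue(tableSize - 1) * 0.8 }

let result = CGSetDisplayTransferByTable(display, tableSize, table, table, table)
// On Tahoe this reports success, yet the display's gamma curve stays unchanged.
print("CGSetDisplayTransferByTable: \(result == .success ? "success" : "error \(result.rawValue)")")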
I filed an FB for it a few weeks ago, and would love to figure out what could be causing this.
FB18559786
I'm experiencing a specific issue: when using any of the macOS 26 Tahoe betas with Low Power Mode enabled and Vsync in fullscreen, my application's framerate gets limited to a hard 30 fps. I have not experienced this on any older OS. For example, Low Power Mode on 13.6 Ventura with Vsync in fullscreen lets my application run at a full 60 fps without issues.
Is this a bug or a change in behavior of Low Power Mode on Tahoe?
My application is 3D, runs at 60 fps, and is sensitive to tearing, so I need Vsync, and it is mostly used in fullscreen. Low Power Mode is the default on many Macs, so the default experience on Tahoe is currently a halved 30 fps. There also seem to be inconsistencies in which machines this happens on, but older OSes are always fine.
Hi all,
I’m running into an issue when trying to reconstruct a 3D model using PhotogrammetrySession on macOS from a set of images captured via the iOS Object Capture sample app, specifically in Area mode.
When I attempt to create the model from these images (using the raw Images/ folder exported directly from the capture session), I get the following errors:
ERROR cv3dapi.pg: Internal error codes (2): 4011 4012
WARN cv3dapi.pg: Internal warning codes (1): 4507
Output error with code = -15
requestError: CoreOC.PhotogrammetrySession.Error.processError
I use the "Images" directory directly exported from Object Capture with my iphone 12 pro max (has lidar) set to "area mode" in the object capture app
Here is example HEIC image metadata from the sequence:
heif-info Images/00044.869568833.HEIC
MIME type: image/heic
main brand: heic
compatible brands: mif1, MiHE, MiPr, miaf, MiHB, heic
image: 3024x4032 (id=49), primary
tiles: 6x8, tile size: 512x512
colorspace: YCbCr, 4:2:0
bit depth: 8
thumbnail: 240x320
color profile: nclx
alpha channel: no
depth channel: yes
size: 192x256
bits per pixel: 8
z-near: 1.173828
z-far: 2.552734
d-min: undefined
d-max: undefined
representation: uniform Z
metadata:
Exif: 960 bytes
uri /tag:apple.com,2023:ObjectCapture#CameraTrackingState: 4 bytes
uri /tag:apple.com,2023:ObjectCapture#CameraCalibrationData: 1015 bytes
uri /tag:apple.com,2023:ObjectCapture#ObjectTransform: 48 bytes
uri /tag:apple.com,2023:ObjectCapture#ObjectBoundingBox: 48 bytes
uri /tag:apple.com,2023:ObjectCapture#RawFeaturePoints: 832 bytes
uri /tag:apple.com,2023:ObjectCapture#PointCloudData: 23984 bytes
uri /tag:apple.com,2023:ObjectCapture#BundleVersion: 5 bytes
uri /tag:apple.com,2023:ObjectCapture#SegmentID: 4 bytes
uri /tag:apple.com,2024:ObjectCapture#SessionUUID: 36 bytes
uri /tag:apple.com,2024:ObjectCapture#CaptureMode: 4 bytes
uri /tag:apple.com,2023:ObjectCapture#Feedback: 4 bytes
uri /tag:apple.com,2023:ObjectCapture#WideToDepthCameraTransform: 48 bytes
uri /tag:apple.com,2023:ObjectCapture#TemporalDepthPointClouds: 864026 bytes
transformations:
angle (ccw): 270
region annotations:
none
properties:
camera intrinsic matrix:
focal length: 2813.695557; 2813.695557
principal point: 1522.338502; 2002.843018
skew: 0.000000
camera extrinsic matrix:
rotation matrix:
-0.695 0.344 -0.632
0.007 -0.875 -0.483
-0.719 -0.340 0.606
Questions:
• What do internal error codes 4011 and 4012 refer to?
• Is there something specific about Area mode captures that require preprocessing before they’re compatible with PhotogrammetrySession?
• Has anyone successfully reconstructed a model from an Area mode session using the stock Apple tools?
NOTE: I can provide the folder of images for debugging if that would help!
Hi, developers,
I maintain a shipped app that uses string concatenation to construct Metal shaders and compiles them on-device. Beta 4 seems to have disabled the __asm keyword, resulting in compilation failures.
The error is:
v2/GEMMKernel.cpp:229: error: program_source:23:9: error: illegal string literal in 'asm'
__asm("air.simdgroup_async_copy_1d.p3i8.p1i8");
The relevant code is available at https://github.com/liuliu/ccv/blob/unstable/lib/nnc/mfa/v2/GEMMHeaders.cpp#L30 although any __asm will trip this.
Please give us guidance on whether this is a regression or something that will be enforced in the 26 release. Personally, I would consider this a bug, given that it doesn't affect shaders that are already compiled.
Thanks for your patience reading this!
I'm running into a persistent visual issue while deploying a floral corridor scene to Apple Vision Pro using Unity 6.0 with URP and Metal. The issue only appears on the Vision Pro device — everything looks fine in the Unity Editor.
Issue Description
When the frame rate drops to around 60–70 FPS, noticeable distortion artifacts appear around the edges of foliage models. It seems like the background meshes (behind the plants) get warped and leak through the edges of the foliage. Although this is most visible around the leaves, even solid objects like standard URP wall or box models show distorted edges when the issue occurs.
All the foliage uses Opaque or Alpha Clipping materials.
Things I've Tried
Changing the foliage materials to Transparent mode: the distortion around the edges disappears, but using Transparent for a large number of foliage assets is not ideal for performance or sorting complexity.
Reducing the number of foliage objects — with only a few plants in the scene and the frame rate staying around 100 FPS, the distortion disappears. However, this isn’t a practical solution for a full environment.
Possible Cause?
I came across this note in the Unity documentation:
"Ensure depth-buffer for each pixel is non-zero - on visionOS, the depth buffer is used for reprojection. To ensure visual effects like skyboxes and shaders are displayed beautifully, ensure that some value is written to the depth for each pixel."
Could this be related to the issue? Is it possible that Alpha Clipping with low pixel coverage leads to some pixels not writing to the depth buffer, which then causes problems during Vision Pro’s reprojection or foveated rendering? However, even when I disable Alpha Clipping entirely, the distortion issue still persists, so it may not be solely caused by clipping itself.
Project Setup
Unity 6.0 (URP)
Depth Texture: Enable
Using Metal as the graphics backend
Running on real Vision Pro hardware (not simulator)
Any advice on how to avoid these distortion issues on Vision Pro would be greatly appreciated.
Thanks!
In SceneKit, when creating an .scn file from a rigged model, the framework created an SCNNode for each bone/joint, so you could add and remove child nodes directly to and from joints, and like any other SCNNode, you could access world position and world orientation for each joint. The analog would be for joints to be accessible as child entities of a ModelEntity in RealityKit. I am unable to proceed with migrating my project from SceneKit because of this, as there does not seem to be a way to even access the true world position of a joint with the current jointNames/jointTransforms paradigm.
The translation information from the given transforms is insufficient to determine the location of a joint at any given time, and other approaches, like creating a GeometricPin for the given joint name and attaching it to another entity, do not seem to work. Something as convenient as attaching an item to the hand of a rigged model was trivial in SceneKit and now feels impossible in RealityKit.
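For illustration, this is the kind of thing I am attempting; the sketch below assumes jointNames are slash-separated hierarchy paths and that each entry of jointTransforms is expressed relative to its parent joint (both are assumptions on my part, not documented guarantees):
import RealityKit
import simd

// Accumulate local joint transforms along the path to approximate a joint's
// transform in model space; combine with the model's world matrix afterwards.
func jointModelTransform(path jointPath: String, in model: ModelEntity) -> float4x4? {
    var matrix = matrix_identity_float4x4
    var prefix = ""
    for component in jointPath.split(separator: "/") {
        prefix = prefix.isEmpty ? String(component) : prefix + "/" + String(component)
        guard let index = model.jointNames.firstIndex(of: prefix) else { return nil }
        matrix *= model.jointTransforms[index].matrix
    }
    return matrix // model.transformMatrix(relativeTo: nil) * matrix would give a world transform
}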
I am not the first person to notice this, and I am feeling demoralized about proceeding with RealityKit with such a critical piece of functionality blocked:
https://stackoverflow.com/questions/76726241/how-do-i-attach-an-entity-to-a-skeletons-joint-in-realitykit
Will this be addressed in some way?
Hi everyone,
I’m running into an issue with RealityKit when trying to animate BlendShapes (ShapeKeys) while a skeletal animation is playing. The model is a rigged character in .usdz format with both predefined skeletal animations and BlendShapes (exported from Blender).
The problem: when I play any animation using entity.playAnimation(...), the BlendShapes stop responding. Calling setBlendShapes(...) still logs that weights are being updated, but the visual changes are not visible.
The exact same blend shape animation works perfectly when no animation is playing.
In SceneKit the same model works as expected: shape keys are animated during animation playback. But not in RealityKit.
Still, as soon as an animation starts, the shape keys don’t animate anymore.
Here’s the test project on GitHub that demonstrates the issue clearly:
https://github.com/IAMTHEBURT/RealityKitWitnBlendShapesSample
The goal is to play facial expressions (like blinking or talking) while a body animation (like waving) is playing.
Is this a known limitation in RealityKit? Or is there a recommended way to combine skeletal animations with real-time BlendShape updates?
Thanks in advance for any insights.