I am trying to load some PNG data with MTKTextureLoader's newTextureWithData, but the result looks wrong in the alpha areas.
Here is the code. I have an image URL; after the download completes, I try to use the raw data or UIImagePNGRepresentation(image), and both produce the wrong result.
// The download wrapper is reconstructed here; imageURL stands for the PNG URL mentioned above.
[[[NSURLSession sharedSession] dataTaskWithURL:imageURL completionHandler:^(NSData *data, NSURLResponse *response, NSError *error) {
    UIImage *tempImg = [UIImage imageWithData:data];
    CGImageRef cgRef = tempImg.CGImage;
    MTKTextureLoader *loader = [[MTKTextureLoader alloc] initWithDevice:device];
    // Loading the raw downloaded PNG data.
    id<MTLTexture> temp1 = [loader newTextureWithData:data options:@{MTKTextureLoaderOptionSRGB: @(NO), MTKTextureLoaderOptionTextureUsage: @(MTLTextureUsageShaderRead), MTKTextureLoaderOptionTextureCPUCacheMode: @(MTLCPUCacheModeWriteCombined)} error:nil];
    // Loading a re-encoded PNG of the same image.
    NSData *tempData = UIImagePNGRepresentation(tempImg);
    id<MTLTexture> temp2 = [loader newTextureWithData:tempData options:@{MTKTextureLoaderOptionSRGB: @(NO), MTKTextureLoaderOptionTextureUsage: @(MTLTextureUsageShaderRead), MTKTextureLoaderOptionTextureCPUCacheMode: @(MTLCPUCacheModeWriteCombined)} error:nil];
    // Loading directly from the CGImage.
    id<MTLTexture> temp3 = [loader newTextureWithCGImage:cgRef options:@{MTKTextureLoaderOptionSRGB: @(NO), MTKTextureLoaderOptionTextureUsage: @(MTLTextureUsageShaderRead), MTKTextureLoaderOptionTextureCPUCacheMode: @(MTLCPUCacheModeWriteCombined)} error:nil];
    // All three textures show the same wrong alpha.
}] resume];
Is there any standard way of efficiently showing a MTLTexture on a RealityKit Entity?
I can't find anything proper on how to, for example, generate a LowLevelTexture out of an MTLTexture. The closest match was this two-year-old thread.
In the old SceneKit app, we would just do
guard let material = someNode.geometry?.materials.first else { return }
material.diffuse.contents = mtlTexture
Our flow is as follows (for visualizing the currently detected object):
Camera-Stream -> CoreML Segmentation -> Send the relevant part of the MLShapedArray-Tensor to a MTLComputeShader that returns a MTLTexture -> Show the resulting texture on a 3D object to the user
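In case it helps clarify what we're after, this is the kind of bridge I've been sketching with the LowLevelTexture API. It is untested, and the initializer parameters are approximations from memory of the docs, so treat the exact signatures as assumptions:

import RealityKit
import Metal

// Create a LowLevelTexture once and expose it to RealityKit as a TextureResource.
func makeBridgedTexture(width: Int, height: Int) throws -> (LowLevelTexture, TextureResource) {
    let descriptor = LowLevelTexture.Descriptor(
        pixelFormat: .bgra8Unorm,
        width: width,
        height: height,
        textureUsage: [.shaderRead, .shaderWrite])
    let lowLevel = try LowLevelTexture(descriptor: descriptor)
    let resource = try TextureResource(from: lowLevel)
    return (lowLevel, resource)
}

// Per frame, after the compute shader has produced `sourceTexture`,
// copy it into the Metal texture backing the resource.
func copyIntoRealityKit(_ sourceTexture: MTLTexture, lowLevel: LowLevelTexture, commandBuffer: MTLCommandBuffer) {
    let destination = lowLevel.replace(using: commandBuffer)
    if let blit = commandBuffer.makeBlitCommandEncoder() {
        blit.copy(from: sourceTexture, to: destination)
        blit.endEncoding()
    }
}

The TextureResource would then be assigned once to the entity's material (for example an UnlitMaterial color texture), which is roughly what material.diffuse.contents = mtlTexture did for us in SceneKit. If there is a more standard path than this, that is exactly what I'm asking for.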
I just upgraded macOS, Xcode, and the Simulator to the newest beta, version 26.
I then found two issues when building my app with Xcode 26 and running it on the version 26 simulator.
The Game Center access point no longer shows up in the app. This is how it has been configured, and it still works on the 18.4 simulator:
func authenticatePlayer() {
    GKAccessPoint.shared.location = .topTrailing
    self.localPlayer.authenticateHandler = { viewController, error in
        if let viewController = viewController {
            // can present Game Center login screen
        } else if self.localPlayer.isAuthenticated {
            // game can be started
        } else {
            // user didn't log in, continue the game without game center
        }
    }
}
After the game ends, the leaderboard won't load. This is how it has been implemented, and it still works on the 18.4 simulator:
struct GameCenterView: UIViewControllerRepresentable {
    @Environment(\.presentationMode) var presentationMode
    ...

    func makeUIViewController(context: Context) -> GKGameCenterViewController {
        let viewController = GKGameCenterViewController(
            leaderboardID: getLeaderBoardID(with: leaderBoardGameMode),
            playerScope: .global,
            timeScope: .allTime
        )
        viewController.gameCenterDelegate = context.coordinator
        return viewController
    }

    func updateUIViewController(_ uiViewController: GKGameCenterViewController, context: Context) {}

    func makeCoordinator() -> Coordinator {
        Coordinator(self)
    }

    class Coordinator: NSObject, GKGameCenterControllerDelegate {
        let parent: GameCenterView

        init(_ parent: GameCenterView) {
            self.parent = parent
        }

        func gameCenterViewControllerDidFinish(_ gameCenterViewController: GKGameCenterViewController) {
            parent.presentationMode.wrappedValue.dismiss()
        }
    }
}
In SceneKit, when creating an .scn file from a rigged model, the framework created an SCNNode for each bone/joint, so you could add and remove child nodes directly on joints, and, like any other SCNNode, you could access each joint's world position and world orientation. The analog would be for joints to be accessible as child entities of a ModelEntity in RealityKit. I am unable to proceed with migrating my project from SceneKit because of this, as there does not seem to be a way to even access the true world position of a joint with the current jointNames/jointTransforms paradigm.
The translation information in the given transforms is insufficient to determine the location of a joint at any given time, and other approaches, such as creating a GeometricPin for the given joint name and attaching it to another entity, do not seem to work. Something as convenient as attaching an item to the hand of a rigged model was trivial in SceneKit and now feels impossible in RealityKit.
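For illustration, the kind of manual workaround this seems to require is composing jointTransforms along each joint's path yourself. The sketch below is untested and assumes jointNames are slash-separated paths, so every ancestor of a joint appears as a prefix of its path:

import RealityKit
import simd

// Rough sketch: recover a joint's transform in the model's space by composing
// the local jointTransforms of the joint and all of its ancestors, root first.
func jointModelTransform(named jointPath: String, in model: ModelEntity) -> float4x4? {
    guard model.jointNames.contains(jointPath) else { return nil }
    let parts = jointPath.split(separator: "/").map(String.init)
    var result = matrix_identity_float4x4
    for depth in 1...parts.count {
        let ancestorPath = parts[0..<depth].joined(separator: "/")
        if let index = model.jointNames.firstIndex(of: ancestorPath) {
            result = result * model.jointTransforms[index].matrix
        }
    }
    return result
}

// A world transform would then be roughly:
// model.transformMatrix(relativeTo: nil) * jointModelTransform(named: "root/hips/.../hand", in: model)!

Even if something like this works, it is a poor substitute for joints simply being accessible as entities.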
I am not the first person to notice this, and I am feeling demoralized about proceeding with RealityKit with such a critical piece of functionality blocked:
https://stackoverflow.com/questions/76726241/how-do-i-attach-an-entity-to-a-skeletons-joint-in-realitykit
Will this be addressed in some way?
I have a Mac Studio 2023 M2 Max
Running Sonoma 14.6.1
Developing in Xcode 16.1
It seems that the NSScreen frame values may be incorrect: the frames returned by NSScreen.screens don't match the desktop arrangement in System Settings.
Apologies in advance for this long post!
// `i` is a screen counter defined outside this excerpt.
for screen in NSScreen.screens {
    let name = screen.localizedName
    Globals.logger.debug("Globals initializeScreens - screen \(i) '\(name, privacy: .public)'")
    Globals.logger.debug("Globals initializeScreens - '\(screen.debugDescription, privacy: .public)'")
}
This is what I receive in the log:
Globals initializeScreens - '<NSScreen: 0x600000ef4240;
name="PHL 346E2C";
backingScaleFactor=1.000000;
frame={{0, 0}, {3440, 1440}};
visibleFrame={{0, 0}, {3440, 1415}}>'
Globals initializeScreens - screen 2 'Blackmagic (1)'
Globals initializeScreens - '<NSScreen: 0x600000ef42a0;
name="Blackmagic (1)";
backingScaleFactor=1.000000;
frame={{-3840, 0}, {1920, 1080}};
visibleFrame={{-3840, 0}, {1920, 1055}}>'
Globals initializeScreens - screen 3 'Blackmagic (4)'
Globals initializeScreens - '<NSScreen: 0x600000ef4360;
name="Blackmagic (4)";
backingScaleFactor=1.000000;
frame={{-1920, 0}, {1920, 1080}};
visibleFrame={{-1920, 0}, {1920, 1055}}>'
Globals initializeScreens - screen 4 'Blackmagic (2)'
Globals initializeScreens - '<NSScreen: 0x600000ef43c0;
name="Blackmagic (2)";
backingScaleFactor=1.000000;
frame={{5360, 0}, {1920, 1080}};
visibleFrame={{5360, 0}, {1920, 1055}}>'
Globals initializeScreens - screen 5 'Blackmagic (3)'
Globals initializeScreens - '<NSScreen: 0x600000ef4420;
name="Blackmagic (3)";
backingScaleFactor=1.000000;
frame={{3440, 0}, {1920, 1080}};
visibleFrame={{3440, 0}, {1920, 1055}}>'
It looks like the frame settings for Blackmagic (2) and Blackmagic (4) are switched.
The setup has five monitors. Four use USB-C Digital AV Multiport Adapters; their output is streamed into a rack of A/V equipment using Blackmagic Design mini converters and monitors.
My Swift application allows users to open four movies, one for each of the AV Adapters. The movies can then be played back in sync for later processing by the A/V equipment.
Here are some screen captures that show my display settings.
Blackmagic (1) and Blackmagic (2) are to the left of the main screen.
Blackmagic (3) and Blackmagic(4) are to the right of the main screen.
The desktop is hard to see but is correct.
The wallpaper settings are all correct.
The wallpaper is correctly ordered when displayed on the monitors.
After opening the movies and using the NSScreen frame settings, the displays are incorrectly ordered. Test B and Test D are switched, which is what I would expect given the NSScreen frame values.
Any ideas? I've tried re-arranging the desktops, rebooting, etc. but no luck.
The code that changes the screen location is similar to this post on Stack Overflow
public func setDisplay(screen: NSScreen) {
    Globals.logger.log("MovieWindowController - setDisplay = \(screen.localizedName, privacy: .public)")
    Globals.logger.debug("MovieWindowController - setDisplay - '\(screen.debugDescription, privacy: .public)'")

    let dx = CGFloat(Constants.midX)
    let dy = CGFloat(Constants.midY)
    var pos = NSPoint()
    pos.x = screen.visibleFrame.midX - dx
    pos.y = screen.visibleFrame.midY - dy

    Globals.logger.debug("MovieWindowController - setDisplay - x = '\(pos.x, privacy: .public)', y = '\(pos.y, privacy: .public)'")
    window?.setFrameOrigin(pos)
}
The log shows just what I would expect given the incorrect frame values.
MovieWindowController - setDisplay = Blackmagic (1)
MovieWindowController - setDisplay - '<NSScreen: 0x6000018e8420; name="Blackmagic (1)"; backingScaleFactor=1.000000; frame={{-3840, 0}, {1920, 1080}}; visibleFrame={{-3840, 0}, {1920, 1055}}>'
MovieWindowController - setDisplay - x = '-3840.000000', y = '-12.500000'
MovieWindowController - setDisplay = Blackmagic (2)
MovieWindowController - setDisplay - '<NSScreen: 0x6000018a10e0; name="Blackmagic (2)"; backingScaleFactor=1.000000; frame={{5360, 0}, {1920, 1080}}; visibleFrame={{5360, 0}, {1920, 1055}}>'
MovieWindowController - setDisplay - x = '5360.000000', y = '-12.500000'
MovieWindowController - setDisplay = Blackmagic (3)
MovieWindowController - setDisplay - '<NSScreen: 0x6000018cc8a0; name="Blackmagic (3)"; backingScaleFactor=1.000000; frame={{3440, 0}, {1920, 1080}}; visibleFrame={{3440, 0}, {1920, 1055}}>'
MovieWindowController - setDisplay - x = '3440.000000', y = '-12.500000'
MovieWindowController - setDisplay = Blackmagic (4)
MovieWindowController - setDisplay - '<NSScreen: 0x6000018c9ce0; name="Blackmagic (4)"; backingScaleFactor=1.000000; frame={{-1920, 0}, {1920, 1080}}; visibleFrame={{-1920, 0}, {1920, 1055}}>'
MovieWindowController - setDisplay - x = '-1920.000000', y = '-12.500000'
Am I correct? I think this is driving me crazy!
Thanks in advance!
Edit: The mouse behavior is correct in moving across the displays!
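One cross-check I plan to try is mapping each NSScreen to its CGDirectDisplayID and comparing against the window server's bounds. A minimal sketch (note that CGDisplayBounds uses a top-left origin while NSScreen frames use a bottom-left origin):

import AppKit

func dumpScreenBounds() {
    for screen in NSScreen.screens {
        let key = NSDeviceDescriptionKey("NSScreenNumber")
        guard let number = screen.deviceDescription[key] as? NSNumber else { continue }
        let displayID = CGDirectDisplayID(number.uint32Value)
        // Compare what AppKit reports with what the window server reports.
        print("\(screen.localizedName): NSScreen.frame=\(screen.frame) CGDisplayBounds=\(CGDisplayBounds(displayID))")
    }
}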
Hi, developers,
I maintain a shipped app that uses string concatenation to construct Metal shaders and compile them on-device. Beta 4 seems to have disabled the __asm keyword, resulting in compilation failures.
The error is:
v2/GEMMKernel.cpp:229: error: program_source:23:9: error: illegal string literal in 'asm'
__asm("air.simdgroup_async_copy_1d.p3i8.p1i8");
The relevant code is available at https://github.com/liuliu/ccv/blob/unstable/lib/nnc/mfa/v2/GEMMHeaders.cpp#L30 although any __asm will trip this.
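For context, the on-device compilation itself is nothing exotic; the project does it from C++, but the equivalent Swift sketch is roughly this (the source string here is a placeholder for the generated GEMM kernel):

import Metal

// Runtime-compile generated MSL source; this is the path that now fails when
// the source string contains __asm.
func compileGeneratedShader(device: MTLDevice, source: String) throws -> MTLLibrary {
    // Compile options (language version, fast math, etc.) can be set here if needed.
    let options = MTLCompileOptions()
    return try device.makeLibrary(source: source, options: options)
}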
Please give us guidance on whether this is a regression or something that will be enforced in the 26 release. Personally, I would consider this a bug, given that it doesn't affect already-compiled shaders.
Thanks for your patience reading this!
Hi! I watched the WWDC25 session "Bring your SceneKit project to RealityKit" which seemed like a great resource for those of us transitioning from the now-deprecated SceneKit framework. The session mentioned that the full sample code for the project would be available to download, but I haven't been able to find it in the Code section of the video page or in the Sample Code Library.
Has the sample code been released yet? Having the project code would make it much easier to follow along with the RealityKit changes shown in the video. Thanks again for the great session.
I'm building a game with a client-server architecture. GKMatch.chooseBestHostingPlayer(_:) rarely works: when I started testing it today, it worked once at the very beginning, and since then it always succeeds on one client and returns nil on the other. I'm testing with a Mac and an iPhone. Sometimes it fails on the Mac, sometimes on the iPhone. On the device where it succeeds, the provided host can be either the device itself or the other one.
I created FB9583628 in August 2021, but after the Feedback Assistant team replied that they are not able to reproduce it, the feedback never went forward.
import SceneKit
import GameKit

#if os(macOS)
typealias ViewController = NSViewController
#else
typealias ViewController = UIViewController
#endif

class GameViewController: ViewController, GKMatchmakerViewControllerDelegate, GKMatchDelegate {
    var match: GKMatch?
    var matchStarted = false

    override func viewDidLoad() {
        super.viewDidLoad()
        GKLocalPlayer.local.authenticateHandler = authenticate
    }

    private func authenticate(_ viewController: ViewController?, _ error: Error?) {
        #if os(macOS)
        if let viewController = viewController {
            presentAsSheet(viewController)
        } else if let error = error {
            print(error)
        } else {
            print("authenticated as \(GKLocalPlayer.local.gamePlayerID)")
            let viewController = GKMatchmakerViewController(matchRequest: defaultMatchRequest())!
            viewController.matchmakerDelegate = self
            GKDialogController.shared().present(viewController)
        }
        #else
        if let viewController = viewController {
            present(viewController, animated: true)
        } else if let error = error {
            print(error)
        } else {
            print("authenticated as \(GKLocalPlayer.local.gamePlayerID)")
            let viewController = GKMatchmakerViewController(matchRequest: defaultMatchRequest())!
            viewController.matchmakerDelegate = self
            present(viewController, animated: true)
        }
        #endif
    }

    private func defaultMatchRequest() -> GKMatchRequest {
        let request = GKMatchRequest()
        request.minPlayers = 2
        request.maxPlayers = 2
        request.defaultNumberOfPlayers = 2
        request.inviteMessage = "Ciao!"
        return request
    }

    func matchmakerViewControllerWasCancelled(_ viewController: GKMatchmakerViewController) {
        print("cancelled")
    }

    func matchmakerViewController(_ viewController: GKMatchmakerViewController, didFailWithError error: Error) {
        print(error)
    }

    func matchmakerViewController(_ viewController: GKMatchmakerViewController, didFind match: GKMatch) {
        self.match = match
        match.delegate = self
        startMatch()
    }

    func match(_ match: GKMatch, player: GKPlayer, didChange state: GKPlayerConnectionState) {
        print("\(player.gamePlayerID) changed state to \(String(describing: state))")
        startMatch()
    }

    func startMatch() {
        let match = match!
        if matchStarted || match.expectedPlayerCount > 0 {
            return
        }
        print("starting match with local player \(GKLocalPlayer.local.gamePlayerID) and remote players \(match.players.map({ $0.gamePlayerID }))")
        match.chooseBestHostingPlayer { host in
            print("host is \(String(describing: host?.gamePlayerID))")
        }
    }
}
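For what it's worth, the fallback I'm considering if chooseBestHostingPlayer keeps returning nil is to pick a host deterministically on every peer, for example the smallest gamePlayerID. A sketch (not what I'd prefer, since it ignores connection quality):

// Every peer computes the same host without any negotiation.
func fallbackHost(for match: GKMatch) -> GKPlayer {
    let everyone: [GKPlayer] = match.players + [GKLocalPlayer.local]
    return everyone.min { $0.gamePlayerID < $1.gamePlayerID }!
}

I'd still much rather understand why the API itself is unreliable.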
I have really enjoyed looking through the code and videos related to Metal 4. Currently, my interest is in updating a ReSTIR project to take advantage of more robust ways to refit acceleration structures and more powerful ways to access resources.
I am working in Swift and have encountered a couple of puzzles:
What is the 'accepted' way to create a MTL4BufferRange to store indices and vertices?
How do I properly rewrite Swift code to build and compact an Acceleration Structure?
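For reference, the pre-Metal 4 pattern I'm trying to translate looks roughly like this (synchronous and simplified for clarity; `descriptor` is an already-configured MTLAccelerationStructureDescriptor):

import Metal

func buildCompacted(device: MTLDevice,
                    queue: MTLCommandQueue,
                    descriptor: MTLAccelerationStructureDescriptor) -> MTLAccelerationStructure? {
    let sizes = device.accelerationStructureSizes(descriptor: descriptor)
    guard let accel = device.makeAccelerationStructure(size: sizes.accelerationStructureSize),
          let scratch = device.makeBuffer(length: sizes.buildScratchBufferSize, options: .storageModePrivate),
          let sizeBuffer = device.makeBuffer(length: MemoryLayout<UInt32>.stride, options: .storageModeShared),
          let buildCommands = queue.makeCommandBuffer(),
          let buildEncoder = buildCommands.makeAccelerationStructureCommandEncoder() else { return nil }

    // Build, and ask the GPU to report the compacted size.
    buildEncoder.build(accelerationStructure: accel, descriptor: descriptor,
                       scratchBuffer: scratch, scratchBufferOffset: 0)
    buildEncoder.writeCompactedSize(accelerationStructure: accel, buffer: sizeBuffer, offset: 0)
    buildEncoder.endEncoding()
    buildCommands.commit()
    buildCommands.waitUntilCompleted()

    // Allocate the smaller structure and copy-compact into it.
    let compactedSize = Int(sizeBuffer.contents().load(as: UInt32.self))
    guard let compacted = device.makeAccelerationStructure(size: compactedSize),
          let copyCommands = queue.makeCommandBuffer(),
          let copyEncoder = copyCommands.makeAccelerationStructureCommandEncoder() else { return nil }
    copyEncoder.copyAndCompact(sourceAccelerationStructure: accel,
                               destinationAccelerationStructure: compacted)
    copyEncoder.endEncoding()
    copyCommands.commit()
    copyCommands.waitUntilCompleted()
    return compacted
}

What I'm unsure about is how the MTL4 command-buffer/encoder model and MTL4BufferRange change this flow.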
I do realize that this is all in beta and will happily look through code samples this fall. If other guidance is available earlier, that would be fabulous!
Thank you
I am creating a RealityKit scene that will contain over 12,000 duplicate cubes arranged in a circle (see image below). This is for some high-energy physical simulations I am doing. I build the scene by creating a single cube and cloning it many times, so there is a single MeshResource and Material even though there are a lot of entities. I have confirmed this by checking with Swift's === operator. Even with this, the program is unworkably slow.
Any suggestions or tricks that could help with this type of scene?
Using a single geometry was the trick to getting SceneKit to work fast with geometries like this. I've been updating my software to RealityKit because I far prefer the structure of RealityKit over SceneKit.
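In case it helps frame the question, the RealityKit analog of the SceneKit single-geometry trick that I've been considering is merging all the cubes into one MeshResource via MeshDescriptor, so the ring becomes a single model (sketch below, untested; positions and indices are illustrative):

import RealityKit
import simd

// Merge translated copies of a cube into one mesh so the whole ring is one draw.
func mergedCubes(at positions: [SIMD3<Float>], size: Float) throws -> MeshResource {
    let h = size / 2
    let corners: [SIMD3<Float>] = [
        SIMD3(-h, -h, -h), SIMD3( h, -h, -h), SIMD3( h,  h, -h), SIMD3(-h,  h, -h),
        SIMD3(-h, -h,  h), SIMD3( h, -h,  h), SIMD3( h,  h,  h), SIMD3(-h,  h,  h)
    ]
    let cubeIndices: [UInt32] = [
        0,1,2, 0,2,3,  4,6,5, 4,7,6,  0,4,5, 0,5,1,
        1,5,6, 1,6,2,  2,6,7, 2,7,3,  3,7,4, 3,4,0
    ]
    var vertices: [SIMD3<Float>] = []
    var indices: [UInt32] = []
    for (i, p) in positions.enumerated() {
        vertices.append(contentsOf: corners.map { $0 + p })
        indices.append(contentsOf: cubeIndices.map { $0 + UInt32(i * corners.count) })
    }
    var descriptor = MeshDescriptor(name: "cubeRing")
    descriptor.positions = MeshBuffer(vertices)
    descriptor.primitives = .triangles(indices)
    return try MeshResource.generate(from: [descriptor])
}

A single ModelEntity built from that resource and one material should collapse the scene to one draw call, at the cost of losing per-cube entities. Is that really the intended approach, or is there a better way to keep the entities?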
Hello
XQuartz is an open-source effort to develop a version of the X.Org X Window System (https://www.xquartz.org/), widely used to bring graphical support to applications running in remote servers (usually via SSH).
Since macOS Tahoe, XQuartz fails to refresh properly on window resize (more info here https://github.com/XQuartz/XQuartz/issues/438#issuecomment-3371409500), leading to severe usability issues.
The XQuartz developers are already aware of the issue, but I’m wondering if there’s anything we can do at the OS level to resolve it and restore the usual behavior from before macOS Tahoe.
Thanks,
KiM
I am developing a macOS terminal app, running on an M4 Pro, and using Metal.
I am not able to use float8 or float16; both report: Variable has incomplete type 'float16' (aka '__Reserved_Name__Do_not_use_float16').
Based on the system, I should be able to use these. Either it is because the target also compiles for Intel, where they are not allowed, or something else; either way, I have not been able to figure out how to get past this.
Is there a compiler setting I need to set to make this work? If so, which one, and what value does it need? I only want to run this on Apple silicon (M-series) processors on the latest OS, so I am not interested in an Intel version or backward compatibility.
I would love to use Background GPU Access to do some video processing in the background.
However the documentation of BGContinuedProcessingTaskRequest.Resources.gpu clearly states:
Not all devices support background GPU use. For more information, see Performing long-running tasks on iOS and iPadOS.
Is there a list available of currently released devices that do (or don't) support GPU background usage? That would help to understand what part of our user base can use this feature. (And what hardware we need to test this on as developers.)
For example, it seems that it isn't supported on an iPad Pro M1 with the current iOS 26 beta. The simulators also seem not to support the background GPU resource. So it would be great to understand what hardware is capable of using this feature!
I'm a newbie at Vulkan and Xcode.
I have my project on github https://github.com/flocela/OrangeSpider/
Whenever I run, two windows open instead of only one.
I added testing, which means I have an OrangeSpider.xctestplan in the OrangeSpider/TestsOrangeSpider/ folder.
This is my first time adding testing to an Xcode project, so I think this may be where the problem is.
I also get this error message:
ViewBridge to RemoteViewService Terminated: Error Domain=com.apple.ViewBridge Code=18 "(null)" UserInfo={com.apple.ViewBridge.error.hint=this process disconnected remote view controller -- benign unless unexpected, com.apple.ViewBridge.error.description=NSViewBridgeErrorCanceled}
Hello!
I'm currently building an app where I feed images into a Photogrammetry session to create a USDZ. Pretty straightforward, works great. We've recently started some testing on older devices and have discovered that photogrammetry requires devices that have LiDAR (we've seen console logs referencing LiDAR if we stumble through a photogrammetry run without checking isSupported first).
Judging from @swredcam's posting about ReefScan from November 24 (https://developer.apple.com/forums/thread/769221) it looks like Photogrammetry did work on those non-LIDAR devices. In my own testing on an iPhone 12 mini with iOS 17, PhotogrammetrySession says it's not supported.
Since we're only feeding in a sequence of photos that have never had depth data, and they process fine on Pro/Max devices, we're curious why this would require a LiDAR sensor, when it seems it did work without LiDAR in the past. Or is there some other limitation of non-Pro devices that causes photogrammetry to be unsupported (especially on today's really powerful hardware)?
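For reference, the gate we use is just the standard availability check, roughly like this (the folder URL is illustrative):

import RealityKit

// Returns nil on devices where PhotogrammetrySession reports no support,
// which is what we now see on the iPhone 12 mini running iOS 17.
func makeSessionIfSupported(imagesAt folder: URL) throws -> PhotogrammetrySession? {
    guard PhotogrammetrySession.isSupported else { return nil }
    return try PhotogrammetrySession(input: folder, configuration: PhotogrammetrySession.Configuration())
}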
Thanks!
++md
Can I use them in SK and do the animations work?
Thanks, Patrick
Hello,
I've been trying to leverage instanced rendering in RealityKit on visionOS but have not had success.
RealityKit states this is supported:
https://developer.apple.com/documentation/realitykit/validating-usd-files
https://developer.apple.com/videos/play/wwdc2021/10075/?time=1373
https://developer.apple.com/videos/play/wwdc2023/10099/?time=772
RealityKit Trace metrics
Validating instancing is working:
To test, I made a basic visionOS app with an immersive space and replaced the entity with my test usdz file. I've been using the RealityKit Trace profiling template in Xcode Instruments, in the immersive space with the volume closed. This gets consistent draw call results.
If I have a single sphere mesh with one material I get one draw call, but the number of draw calls grows linearly with mesh count no matter how my entity is configured.
What I've tried
Create a test scene in blender, export with instancing enabled
Create a test scene in Reality Composer Pro using references
Author usda files by hand based on the OpenUSD spec
Programmatically create a MeshResource with Contents at runtime
References
https://openusd.org/release/api/_usd__page__scenegraph_instancing.html
https://developer.apple.com/documentation/realitykit/meshresource
https://developer.apple.com/documentation/realitykit/meshresource/instance
Thank you
Hello,
I recently watched the WWDC2025 session titled “Combine Metal 4 machine learning and graphics” (https://developer.apple.com/videos/play/wwdc2025/262/ ), and I’m very excited about the new Metal 4 features that integrate machine learning with graphics—such as neural ambient occlusion, shader-based ML inference, and the use of MTLTensor and MTL4MachineLearningCommandEncoder.
While the session includes helpful code snippets and a compelling debug demo (e.g., the neural ambient occlusion example), the implementation details are not fully shown, and I haven’t been able to find a complete, runnable sample project that demonstrates end-to-end integration of ML and rendering in Metal 4.
Would Apple be able to provide a full, working example—such as an Xcode project—that shows how to:
Export a model to an .mlpackage,
Convert it to an .mtlpackage,
Use MTL4MachineLearningCommandEncoder alongside render passes,
Or embed small neural networks directly in shaders using Shader ML?
Having such a sample would greatly help developers like me adopt these powerful new capabilities correctly and efficiently.
Thank you very much for your time and support!
Best regards,
Breaking Through PolySpatial's ~8k Object Limit – Seeking Alternative Approaches for Large-Scale Digital Twins
Confirmed: PolySpatial Doubles the MeshFilter Count – Hard Limit at ~8k Active Objects (15.9k Total)
Project Context & Research Goals
I’m developing an industrial digital twin application for Apple Vision Pro using Unity’s PolySpatial framework (RealityKit rendering in Unbounded_Volume mode). The scene contains complex factory environments with:
Production line equipment (many fragmented grid objects that need to be merged)
Dynamic product racks (state-switchable assets)
Animated worker avatars
To optimize performance, I’m systematically testing visionOS’s rendering capacity limits. Through controlled stress tests, I’ve identified a critical threshold:
Key Finding
When the total MeshFilter count reaches 15,970 (system baseline + 7,985 user-created objects × 2 due to PolySpatial cloning), the application crashes consistently. This suggests:
PolySpatial’s mirroring mechanism effectively doubles GameObject overhead
An apparent hard limit exists around ~8k active mesh objects in practice
Objectives for This Discussion
Verify if others have encountered similar limits with PolySpatial/RealityKit
Understand whether this is a:
Memory constraint (per-app allocation)
Render pipeline limit (Metal draw calls)
Unity-specific PolySpatial behavior
Explore optimization strategies beyond brute-force object reduction
Why This Matters
Industrial metaverse applications require rendering thousands of interactive objects. Confirming these limits will help our team:
Design safer content guidelines
Prioritize GPU instancing/LOD investments
Potentially contribute back to PolySpatial’s optimization
I’d appreciate insights from engineers who’ve:
Pushed similar large-scale scenes in visionOS
Worked around PolySpatial’s cloning overhead
Discovered alternative capacity limits (vertices/draw calls)
Hi All!
I'm being asked to migrate an app which utilizes iCloud KVS (Key Value Storage). This ability is a new-ish feature, and the documentation about this is sparse [1]. Honestly, the entire documentation about the new iCloud transfer functionality seems to be missing. Same with Game Center / GameKit. While the docs say that it should work, I'd like to understand the process in more detail.
Has anyone migrated an iCloud KVS app? What happens after the transfer goes through, but before the first release? Do I need to do anything special? I see that the Entitlements file has the TeamID in the Key Value store - is that fine?
<key>com.apple.developer.ubiquity-kvstore-identifier</key>
<string>$(TeamIdentifierPrefix)$(CFBundleIdentifier)</string>
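For context, the app's use of KVS behind that identifier is just the standard NSUbiquitousKeyValueStore pattern, roughly like this simplified sketch (key names are illustrative):

import Foundation

final class CloudSettings {
    private let store = NSUbiquitousKeyValueStore.default
    private var observer: NSObjectProtocol?

    init() {
        // Merge remote changes into local state when iCloud pushes an update.
        observer = NotificationCenter.default.addObserver(
            forName: NSUbiquitousKeyValueStore.didChangeExternallyNotification,
            object: store,
            queue: .main) { _ in
            // handle changed keys here
        }
        store.synchronize()
    }

    var highScore: Int64 {
        get { store.longLong(forKey: "highScore") }
        set { store.set(newValue, forKey: "highScore") }
    }
}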
Can someone please share their experience?
Thank you!
[1] https://developer.apple.com/help/app-store-connect/transfer-an-app/overview-of-app-transfer