Render advanced 3D graphics and perform data-parallel computations on graphics processors with Metal.

Metal Documentation

Posts under Metal subtopic

Post

Replies

Boosts

Views

Activity

Float64 (Double Precision) Support on MPS with PyTorch on Apple Silicon?
Hi everyone, This project uses PyTorch on an Apple Silicon Mac (M1/M2/etc.), and the goal is to use the MPS backend for GPU acceleration. However, the workflow depends on Float64 (double-precision) floating-point numbers for certain computations. The error "Cannot convert a MPS Tensor to float64 dtype as the MPS framework doesn't support float64. Please use float32 instead" has been encountered. It seems that the MPS backend doesn't currently support Float64 for direct GPU computation. Questions for the community: Are there any known workarounds or best practices for handling Float64-dependent operations when using the MPS backend with PyTorch? For those working with high-precision tasks on Apple Silicon, what strategies are being used to balance performance with the need for Float64? Offloading to the CPU is an option; are there any specific techniques or libraries within the Apple ecosystem that could streamline this while keeping performance acceptable? Any insights, tips, or experiences would be appreciated. Thanks in advance, Jonaid MacBook Pro M3 Max
2
1
483
Oct ’25
iPhone limited to 60hz frame rate
Just wondering if anyone knows what it will take to hit greater than 60Hz when targeting iPhone. If I set the preferredFramesPerSecond of an MTKView to 120, it works on the iPad, but on iPhone it never goes over 60Hz, even with a simple hello-triangle sample app... Is this a limitation of targeting iPhone?
2
0
223
Sep ’25
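Regarding the 60Hz question above: as far as I recall, iPhones with ProMotion cap apps at 60Hz unless the Info.plist key CADisableMinimumFrameDurationOnPhone is set to YES. With that key in place, a minimal sketch of requesting the display's maximum rate from an MTKView might look like this (the actual rate can still vary with thermal and power state):

```swift
import MetalKit
import UIKit

// Sketch: requesting the highest refresh rate on a ProMotion iPhone.
// Assumes the app's Info.plist contains:
//   <key>CADisableMinimumFrameDurationOnPhone</key> <true/>
// Without that key, MTKView/CADisplayLink callbacks stay capped at 60Hz on iPhone.
func configureHighRefresh(_ view: MTKView) {
    // Ask for the display's maximum rate rather than hard-coding 120.
    let maxFPS = UIScreen.main.maximumFramesPerSecond
    view.preferredFramesPerSecond = maxFPS
    print("Requested \(maxFPS) fps (delivered rate may still vary)")
}
```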
Metal HUD Display Value Range
Can't seem to get the Metal HUD to display value ranges (pre macOS 26 Tahoe). The documented environment variable MTL_HUD_SHOW_VALUE_RANGE doesn't seem to work. https://developer.apple.com/documentation/xcode/monitoring-your-metal-apps-graphics-performance#Display-the-value-range-of-metrics Anyone having any luck?
2
0
348
Sep ’25
Pink screen on MTLCommandBuffer.presentDrawable.
I rewrote my graphics pipeline to make better use of load/store actions for the clear and don't-care cases. All my tests pass, and in the Metal debugger all the draw calls succeed. But when I present drawables (before [commandBuffer commit]) I only get a pink screen. I've tried everything I can think of: making sure the pixel formats are the same for the back buffer and my render targets, etc. But it's still pink. Could you point me in the right direction so I can fix this, or help describe why it's pink? That would be really helpful. Thank you, Brian Hapgood
2
0
413
Sep ’25
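Not a confirmed answer to the pink-screen post above, but since it involves reworked load/store actions, one thing worth ruling out is a .dontCare load on the final on-screen pass, which leaves the drawable's contents undefined and can show up as a solid pink/magenta fill on some devices. A minimal sketch of configuring the actions explicitly (makeFrameDescriptor is an illustrative helper name):

```swift
import Metal
import QuartzCore

// Sketch: explicitly set load/store actions for the pass that renders into the drawable.
func makeFrameDescriptor(for drawable: CAMetalDrawable) -> MTLRenderPassDescriptor {
    let rpd = MTLRenderPassDescriptor()
    rpd.colorAttachments[0].texture = drawable.texture
    rpd.colorAttachments[0].loadAction = .clear   // or .load if previous contents are needed
    rpd.colorAttachments[0].clearColor = MTLClearColor(red: 0, green: 0, blue: 0, alpha: 1)
    rpd.colorAttachments[0].storeAction = .store  // the drawable must be stored to be visible
    return rpd
}
```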
Xcode 26.x Metal 4 classes do not compile
Hi, I am using Xcode 26.x, but my Metal 4 classes are not compiling. I downloaded the sample code from Apple's website - https://developer.apple.com/documentation/Metal/processing-a-texture-in-a-compute-function. For example, I am getting errors like "Cannot find protocol declaration for 'MTL4CommandQueue'". I have hit a deadline, so any recommendations are very welcome. I have downloaded the Metal Toolchain. When I run the following commands on the terminal - xcodebuild -showComponent metalToolchain ; xcrun -f metal ; xcrun metal --version - I get the following response - Asset Path: /System/Library/AssetsV2/com_apple_MobileAsset_MetalToolchain/86fbaf7b114a899754307896c0bfd52ffbf4fded.asset/AssetData Build Version: 17A321 Status: installed Toolchain Identifier: com.apple.dt.toolchain.Metal.32023 Toolchain Search Path: /Users/private/Library/Developer/DVTDownloads/MetalToolchain/mounts/86fbaf7b114a899754307896c0bfd52ffbf4fded /Users/private/Library/Developer/DVTDownloads/MetalToolchain/mounts/86fbaf7b114a899754307896c0bfd52ffbf4fded/Metal.xctoolchain/usr/bin/metal Apple metal version 32023.830 (metalfe-32023.830.2) Target: air64-apple-darwin24.6.0 Thread model: posix InstalledDir: /Users/private/Library/Developer/DVTDownloads/MetalToolchain/mounts/86fbaf7b114a899754307896c0bfd52ffbf4fded/Metal.xctoolchain/usr/metal/current/bin
2
0
1.2k
Jan ’26
Metal 4: When is it ok to dealloc a MTLBuffer's memory
I have something like this drawing in an MTKView (see at bottom). I am finding it difficult to figure out when the Swift-side resources used to create the MTLBuffer(s) can be released. Below, for example, is it ok if args goes out of scope (or is otherwise deallocated) at point 1, 2, or 3? Or perhaps even earlier, as soon as argsBuffer has been created? I have been reading through various articles such as Setting resource storage modes, Choosing a resource storage mode for Apple GPUs, and Copying data to a private resource, but it's a lot to absorb and I haven't really been able to find an authoritative description of the required lifetime of the resources in CPU land. I should mention that this is Metal 4 code. In previous versions of Metal, the MTLCommandBuffer had the ability to add a completion handler to be called by the GPU after it has finished running the commands in the buffer, but in Metal 4 there is no such thing (if it were even needed for the purpose I am interested in). Any advice and/or pointers to the definitive literature will be appreciated. guard let argsBuffer = device.makeBuffer(bytes: &args,... argumentTable.setAddress(argsBuffer.gpuAddress, ... encoder.setArgumentTable(argumentTable, stages: .vertex) // encode drawing renderEncoder.draw... ... encoder.endEncoding() // 1 commandBuffer.endCommandBuffer() // 2 commandQueue.waitForDrawable(drawable) commandQueue.commit([commandBuffer]) // 3 commandQueue.signalDrawable(drawable) drawable.present()
2
0
238
Jan ’26
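One point relevant to the buffer-lifetime question above that can be stated with reasonable confidence: makeBuffer(bytes:length:options:) copies the source bytes into the new MTLBuffer at call time, so the Swift-side value can go out of scope as soon as the call returns; it is the MTLBuffer object itself that must stay alive until the GPU finishes reading it. (APIs that explicitly avoid copying, such as makeBuffer(bytesNoCopy:...), follow different rules.) A sketch, with a hypothetical Args struct:

```swift
import Metal

// Sketch of the lifetime rule for makeBuffer(bytes:).
struct Args { var scale: Float; var offset: SIMD2<Float> }

func makeArgsBuffer(device: MTLDevice) -> MTLBuffer? {
    var args = Args(scale: 1.0, offset: .zero)
    // makeBuffer(bytes:...) copies `args` into GPU-accessible storage here,
    // so `args` can be deallocated as soon as this call returns.
    let buffer = device.makeBuffer(bytes: &args,
                                   length: MemoryLayout<Args>.stride,
                                   options: .storageModeShared)
    // The returned MTLBuffer, by contrast, must be kept alive (e.g. stored on the
    // renderer) until the GPU has finished executing the commands that read it.
    return buffer
}
```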
Xcode Metal Trace
The code is downloaded from Apple's official Metal 4 sample [https://developer.apple.com/documentation/metal/drawing-a-triangle-with-metal-4?language=objc]. Enable Metal GPU trace in the macOS scheme and trace a frame in Xcode. Xcode may show a segmentation fault in the app from some 'GTTrace' function when clicking the trace button. When replaying a .gputrace file, Xcode may crash, throw an internal error, or report an XPC error. The example code using the old metal-renderer can be traced without any problem and everything works fine. Test Environment: Xcode Version 26.2 (17C52) macOS 26.2 (25C56) M1 Pro 16GB A2442
2
0
515
Jan ’26
BGContinuedProcessingTask GPU access — no iPhone support?
We are developing a video processing app that applies CIFilter chains to video frames. To not force the user to keep the app foregrounded, we were happy to see the introduction of BGContinuedProcessingTask to continue processing when backgrounded. With iOS 26, I was excited to see the com.apple.developer.background-tasks.continued-processing.gpu entitlement, which should allow GPU access in the background. Even the article in the documentation provides "exporting video in a film-editing app" or "applying visual filters (HDR, etc) or compressing images for social media posts" as use cases. However, when I check BGTaskScheduler.shared.supportedResources.contains(.gpu) at runtime, it returns false on every iPhone I've tested (including iPhone 15 Pro and iPhone 16 Pro). From forum responses I've seen, it sounds like background GPU access is currently limited to iPad only. If that's the case, I have a few questions: Is this an intentional, permanent limitation — or is iPhone support planned for a future iOS release? What is the recommended approach for GPU-dependent background work on iPhone? My custom CIKernels are written in Metal (as Apple recommends since CIKL is deprecated), but Metal CIKernels cannot fall back to CPU rendering. This creates a situation where Apple's own deprecation guidance (migrate to Metal) conflicts with background processing realities (no GPU on iPhone). Should developers maintain deprecated CIKL kernel versions alongside Metal kernels purely as a CPU fallback for background execution? That feels like it defeats the purpose of the migration. It seems like a gap in the platform: the API exists, the entitlement exists, but the hardware support isn't there for the most common device category. Any clarity on Apple's direction here would be very helpful.
2
0
207
Feb ’26
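Nothing authoritative to add on the roadmap question above, but as a sketch of the runtime capability check the post describes, branching on supportedResources before choosing a GPU or CPU path might look like this (the availability version and the two render functions are illustrative assumptions):

```swift
import BackgroundTasks

// Sketch: pick a GPU-backed or CPU-only processing path depending on whether
// background GPU access is available on this device.
// renderWithMetalKernels() and renderWithCPUFallback() are hypothetical placeholders.
func startExport() {
    if #available(iOS 26.0, *),
       BGTaskScheduler.shared.supportedResources.contains(.gpu) {
        // Background GPU access is available (currently iPad-class devices, per the post).
        renderWithMetalKernels()
    } else {
        // No background GPU: keep the work foreground-only or fall back to a CPU path.
        renderWithCPUFallback()
    }
}

func renderWithMetalKernels() { /* placeholder */ }
func renderWithCPUFallback() { /* placeholder */ }
```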
How to load and draw texture with opacity in Metal
The background I'm finally working to convert my very old Mac kaleidoscope application, ScopeWorks, which was written in OpenGL and Objective-C, to a Multiplatform app in SwiftUI and Metal. I'm using the MetalKit MTKView class, wrapped for SwiftUI as an NSViewRepresentable or UIViewRepresentable. I then provide an MTKViewDelegate that provides a draw method. The draw method fetches the current render pass descriptor, creates a command buffer, sets up a render pipeline, and does its drawing. My renderer's makePipeline method looks like this: func makePipeline() { let library = device.makeDefaultLibrary() let pipelineDesc = MTLRenderPipelineDescriptor() pipelineDesc.vertexFunction = library?.makeFunction(name: "vertex_main") pipelineDesc.fragmentFunction = library?.makeFunction(name: "fragment_main") pipelineDesc.colorAttachments[0].pixelFormat = .bgra8Unorm pipeline = try! device.makeRenderPipelineState(descriptor: pipelineDesc) } And my shaders look like this: struct VertexOut { float4 position [[position]]; float2 texCoord; }; vertex VertexOut vertex_main(const device float2* position [[buffer(0)]], uint vid [[vertex_id]]) { VertexOut out; float2 pos = position[vid]; out.position = float4(pos, 0, 1); out.texCoord = pos * 0.5 + 0.5; // basic mapping return out; } fragment float4 fragment_main(VertexOut in [[stage_in]], texture2d<float> tex [[texture(0)]], constant float4& color [[buffer(1)]]) { constexpr sampler s(address::repeat, filter::linear); // float4 texColor = tex.sample(s, in.texCoord); // return texColor * color; float4 textureColor = {1, 2, 3, 4}; if (all(color == textureColor)) { return tex.sample(s, in.texCoord); } else { return color; } // Sample the texture directly — no color tint applied return tex.sample(s, in.texCoord); } The first part of my MTKViewDelegate's draw method looks like this: func draw(in view: MTKView) { guard let drawable = view.currentDrawable, let descriptor = view.currentRenderPassDescriptor, let pipeline = pipeline, let texture = texture else { return } let commandBuffer = commandQueue.makeCommandBuffer()! let encoder = commandBuffer.makeRenderCommandEncoder(descriptor: descriptor)! encoder.setRenderPipelineState(pipeline) encoder.setFragmentTexture(texture, index: 0) descriptor.colorAttachments[0].clearColor = MTLClearColor(red: 0.0, green: 0, blue: 0, alpha: 1.0) // Draw six equilateral triangles forming the hexagon let radius: Float = 0.6 for i in 0..<6 { let angle = Float(i) * (.pi / 3) let cosA = cos(angle) let sinA = sin(angle) let nextA = Float(i+1) * (.pi / 3) let cosB = cos(nextA) let sinB = sin(nextA) let verts: [simd_float2] = [ simd_float2(0, 0), simd_float2(radius * cosA, radius * sinA), simd_float2(radius * cosB, radius * sinB) ] encoder.setVertexBytes(verts, length: MemoryLayout<simd_float2>.stride * 3, index: 0) // Tell the fragment shader to use the texture color. var textureColor: simd_float4 = simd_float4(1, 2, 3, 4) encoder.setFragmentBytes(&textureColor, length: MemoryLayout<SIMD4<Float>>.stride, index: 1) encoder.drawPrimitives(type: .triangle, vertexStart: 0, vertexCount: 3) One of the things the existing app does is load PNG or TIFF images with an alpha channel, and then overlay parts of the image on top of themselves flipped, so you get interesting Moiré patterns in the lines in the resulting kaleidoscope. For now I'm working on a single sample image, loading it into a texture in Metal, and just rendering it as a hexagon and drawing lines for the triangles that make up the hexagon. 
(For now I'm using the vertex coordinates as the texture coordinates, so I get a hexagonal part of my texture rather than a single triangular part tessellated into a hexagon. I'll fix that later.) In both iOS and macOS I set the clear color to black at the beginning of the draw function. The issue: The source image is mostly transparent, but with a lot of partly transparent pixels. Here's what it looks like in Photoshop, where you can see the transparent parts as a checkerboard pattern: (I tried to crop the original image to show the approximate part that I'm rendering in a hexagon, but it's not exact. Look for the same shapes in the different images to compare them.) When I render my hexagon in the Metal view in the iOS version of the app, it looks like it's forcing each pixel to fully opaque or fully transparent: And in the macOS version of the app, it seems to force ALL the pixels to opaque: I haven't shown all the setup code, because it's a lot. Is there some rendering mode setup I'm missing in order to get it to draw the pixels into the output based on their opacity, including partial opacity?
2
0
540
4d
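The symptom described above (partial transparency collapsing to fully opaque or fully transparent) is usually what you see when the render pipeline has blending disabled, so the fragment's alpha is written but never mixed with the destination. A minimal sketch of enabling source-over alpha blending on the pipeline descriptor, extending the makePipeline shown in the post and assuming straight (non-premultiplied) alpha in the texture:

```swift
import Metal

// Sketch: same pipeline setup as in the post, plus source-over alpha blending on
// color attachment 0. Premultiplied content would use .one as the source RGB factor.
func makePipeline(device: MTLDevice) throws -> MTLRenderPipelineState {
    let library = device.makeDefaultLibrary()
    let desc = MTLRenderPipelineDescriptor()
    desc.vertexFunction = library?.makeFunction(name: "vertex_main")
    desc.fragmentFunction = library?.makeFunction(name: "fragment_main")

    let attachment = desc.colorAttachments[0]
    attachment.pixelFormat = .bgra8Unorm
    attachment.isBlendingEnabled = true
    attachment.rgbBlendOperation = .add
    attachment.alphaBlendOperation = .add
    attachment.sourceRGBBlendFactor = .sourceAlpha
    attachment.sourceAlphaBlendFactor = .sourceAlpha
    attachment.destinationRGBBlendFactor = .oneMinusSourceAlpha
    attachment.destinationAlphaBlendFactor = .oneMinusSourceAlpha

    return try device.makeRenderPipelineState(descriptor: desc)
}
```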
Best Way to Use MetalFX in Unreal Engine 5.7 for macOS Port?
Hi everyone, We’re currently porting a high-fidelity AA+ PC title built on Unreal Engine 5.7 to macOS (Apple Silicon), and we’re looking for guidance from anyone with experience in this area. At the moment, the game is already runnable on Mac, but not yet at a playable level — we’re seeing performance around 10–15 FPS on an M4 device. We’re actively analyzing and defining the work needed to reach production-quality performance on macOS. One of the key areas we’re exploring is leveraging MetalFX to improve frame rate. However, it seems there’s no official MetalFX plugin or direct integration available for Unreal Engine. Has anyone here successfully integrated MetalFX into a UE5 rendering pipeline, or found a recommended approach to do so? Any insights on best practices, workflows, or references (docs, samples, etc.) would be greatly appreciated. Thanks in advance!
2
0
257
2d
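Not aware of an official MetalFX plugin for Unreal either, but for anyone sizing up the Metal-side cost, a rough sketch of standing up a MetalFX spatial scaler (the simpler of the two upscalers) is below; the sizes and formats are illustrative, and the temporal scaler additionally needs motion vectors and depth, which is where most of the engine-integration work lives.

```swift
import Metal
import MetalFX

// Rough sketch: create a MetalFX spatial scaler that upscales 1280x720 to 2560x1440.
func makeSpatialScaler(device: MTLDevice) -> MTLFXSpatialScaler? {
    let desc = MTLFXSpatialScalerDescriptor()
    desc.inputWidth = 1280
    desc.inputHeight = 720
    desc.outputWidth = 2560
    desc.outputHeight = 1440
    desc.colorTextureFormat = .bgra8Unorm
    desc.outputTextureFormat = .bgra8Unorm
    desc.colorProcessingMode = .perceptual
    return desc.makeSpatialScaler(device: device)
}

// Per frame (sketch): assign scaler.colorTexture and scaler.outputTexture,
// then call scaler.encode(commandBuffer:) after the main render pass.
```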
iOS Metal system delayed one Vsync period to really display the frame on the screen
View Layout Add the following views in a view controller: Label View A, with a subview of the same size: MTKView A View B, with a subview of the same size: MTKView B Refresh Rates of Each View The label view refreshes at 60fps (driven by CADisplayLink). MTKView A and B refresh at 15fps. MTKView Implementation Details The corresponding CAMetalLayer's maximumDrawableCount is set to 2, changed to double buffering. The scheduling mechanism is modified; drawing is not driven by the internal loop but is done manually. The draw call is triggered immediately upon receiving a frame. self.metalView.enableSetNeedsDisplay = NO; self.metalView.paused = YES; A new high-priority queue is created for drawing, instead of handling it on the main queue. MTKView Latency Tracking The GPU completion time T1 is observed through the addCompletedHandler callback of the CommandBuffer. The presentation time T2 of the frame is observed through the addPresentedHandler callback of the currentDrawable in MTKView. Testing shows that T2 - T1 > 16.6ms (the Vsync period at 60Hz). This means that after the GPU rendering in the MTKView is finished, the frame is not actually displayed at the next Vsync but only at the Vsync after that. I believe there is an extra 16.6ms of latency here, which I want to eliminate by adjusting the rendering mechanism. Observation from Instruments From Instruments, the Surface presentation aligns with the above test results. After the Metal encoder finishes, the Surface in Display switches only after the next-next Vsync. See the image in the link for details. Questions According to a beginner's understanding, after MTKView's GPU rendering is finished, the next Vsync should officially display it (make it visible). However, this is not what is observed. Does the subview MTKView need to wait for another Vsync cycle to be drawn to the actual display buffer? The label updates its text at 60fps, so the entire interface should be displayed at 60fps. Is the content of MTKView not synchronized when the display happens? Explanation of the Reasoning Behind Some MTKView Code Details Changing from the default triple buffering to double buffering helps reduce the latency introduced by rendering. Not using MTKView's own scheduling mechanism but manually triggering the draw method is because MTKView's own scheduling mechanism is driven by CADisplayLink. Therefore, if a frame falls within a Vsync window, it needs to wait for the next Vsync window to trigger the draw operation, which introduces waiting latency.
3
0
603
Dec ’25
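For anyone trying to reproduce the T1/T2 measurement described above, a minimal sketch of instrumenting a single frame with addCompletedHandler and addPresentedHandler is below; this only observes the extra-vsync behavior, it does not change how the compositor schedules the frame.

```swift
import Metal
import QuartzCore

// Sketch: measure GPU-completion time (T1) vs. actual on-screen time (T2) for one frame.
func encodeAndPresent(commandBuffer: MTLCommandBuffer, drawable: CAMetalDrawable) {
    drawable.addPresentedHandler { presented in
        // presentedTime is the host time, in seconds, when the drawable hit the screen.
        print("T2 presented at \(presented.presentedTime)")
    }
    commandBuffer.addCompletedHandler { cb in
        // gpuEndTime is the host time, in seconds, when the GPU finished this command buffer.
        print("T1 GPU finished at \(cb.gpuEndTime)")
    }
    commandBuffer.present(drawable)
    commandBuffer.commit()
}
```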
Bug Report - Incorrect trackingAreaIdentifier in visionOS 26 Hover Effect Sample Code
Description: In the official visionOS 26 Hover Effect sample code project, I encountered an issue where the event.trackingAreaIdentifier returned by onSpatialEvent does not reset as expected. Steps to Reproduce: Select an object with trackingAreaID = 6 in the sample app. Look at a blank space (outside any tracking area) and perform a pinch gesture. Expected Behavior: The event.trackingAreaIdentifier should return 0 when interacting with a non-tracking area. Actual Behavior: The event.trackingAreaIdentifier still returns 6, even after restarting the app or killing the process. This persists regardless of where the pinch gesture is performed.
3
0
296
Jul ’25
CAMetalLayer nextDrawable crash
Hi, My application hits the crash backtrace below at a very low repro rate from public users; I do not see it being related to a specific iOS version or iPhone model. The last code line from my application is calling the CAMetalLayer nextDrawable API. I did some basic studying and suspect it may relate to a wrong CAMetalLayer configuration, like frame property w or h <= 0.0 bounds property w or h <= 0.0 drawableSize w or h <= 0.0 or w or h > max value (like 16384) Is my thinking above right or not? Could the UIView that my CAMetalLayer is attached to cause such a nextDrawable crash? Thanks a lot Main Thread - Crashed libsystem_kernel.dylib __pthread_kill libsystem_c.dylib abort libsystem_c.dylib __assert_rtn Metal MTLReportFailure.cold.1 Metal MTLReportFailure Metal _MTLMessageContextEnd Metal -[MTLTextureDescriptorInternal validateWithDevice:] AGXMetalA13 0x245b1a000 + 4522096 QuartzCore allocate_drawable_texture(id<MTLDevice>, __IOSurface*, unsigned int, unsigned int, MTLPixelFormat, unsigned long long, CAMetalLayerRotation, bool, NSString*, unsigned long) QuartzCore get_unused_drawable(_CAMetalLayerPrivate*, CAMetalLayerRotation, bool, bool) QuartzCore CAMetalLayerPrivateNextDrawableLocked(CAMetalLayer*, CAMetalDrawable**, unsigned long*) QuartzCore -[CAMetalLayer nextDrawable] SpaceApp -[MetalRender renderFrame:] MetalRenderer.mm:167 SpaceApp -[FrameBuffer acceptFrame:] VideoRender.mm:173 QuartzCore CA::Display::DisplayLinkItem::dispatch_(CA::SignPost::Interval<(CA::SignPost::CAEventCode)835322056>&) QuartzCore CA::Display::DisplayLink::dispatch_items(unsigned long long, unsigned long long, unsigned long long) QuartzCore CA::Display::DisplayLink::dispatch_deferred_display_links(unsigned int) UIKitCore _UIUpdateSequenceRun UIKitCore schedulerStepScheduledMainSection UIKitCore runloopSourceCallback CoreFoundation __CFRUNLOOP_IS_CALLING_OUT_TO_A_SOURCE0_PERFORM_FUNCTION__ CoreFoundation __CFRunLoopDoSource0 CoreFoundation __CFRunLoopDoSources0 CoreFoundation __CFRunLoopRun CoreFoundation CFRunLoopRunSpecific GraphicsServices GSEventRunModal UIKitCore -[UIApplication _run] UIKitCore UIApplicationMain
3
0
377
Jul ’25
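The crash above is an assertion inside texture-descriptor validation, which is at least consistent with the poster's suspicion about a degenerate layer size (not confirmed). A cheap defensive check before calling nextDrawable, as a sketch:

```swift
import QuartzCore
import Metal

// Sketch: skip the frame instead of requesting a drawable when the layer has a
// degenerate or oversized drawableSize (e.g. during view teardown or mid-resize).
// 16384 is used here as a typical maximum Metal texture dimension.
func safeNextDrawable(from layer: CAMetalLayer) -> CAMetalDrawable? {
    let size = layer.drawableSize
    guard size.width >= 1, size.height >= 1,
          size.width <= 16384, size.height <= 16384 else {
        return nil   // skip this frame; try again on the next display-link tick
    }
    return layer.nextDrawable()
}
```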
Metal useResource vs. MTLFence
Hello, I'm tracking down a bug where useResource doesn't seem to apply proper synchronization when a resource is produced by the render pass and then consumed by the compute pass, but when I use an MTLFence to signal and wait between the render/compute encoders, the artifact goes away. The resource is created with MTLHazardTrackingModeTracked and useResource is called on the compute encoder after the render pass. Metal API Validation doesn't report any warnings/errors. Am I misunderstanding the difference between the two APIs? I dug through the Metal documentation and it looks like useResource should handle synchronization given the resource has MTLHazardTrackingModeTracked, but on the other hand, MTLFence should be used to ensure proper synchronization between command encoders. Can someone clarify the difference between the two APIs and when to use each?
3
0
165
Jul ’25
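To make the render-to-compute fence pattern from the post above concrete, here is a minimal sketch (whether an explicit fence should be necessary with tracked hazards is exactly the open question; this only shows the fence mechanics):

```swift
import Metal

// Sketch: explicit fence between a render pass that writes a texture and a
// compute pass in the same command buffer that reads it.
func encodePasses(device: MTLDevice,
                  commandBuffer: MTLCommandBuffer,
                  renderDesc: MTLRenderPassDescriptor,
                  pipeline: MTLComputePipelineState,
                  producedTexture: MTLTexture) {
    let fence = device.makeFence()!

    let render = commandBuffer.makeRenderCommandEncoder(descriptor: renderDesc)!
    // ... draw calls that write producedTexture ...
    render.updateFence(fence, after: .fragment)   // signal once fragment work is done
    render.endEncoding()

    let compute = commandBuffer.makeComputeCommandEncoder()!
    compute.waitForFence(fence)                    // wait before reading the texture
    compute.setComputePipelineState(pipeline)
    compute.setTexture(producedTexture, index: 0)
    // ... dispatch threadgroups ...
    compute.endEncoding()
}
```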
How to use MetalPerformancePrimitives
I am trying to learn the new Metal Performance Primitives APIs. I have added the MetalPerformancePrimitives framework and included the header in my shader code as per the documentation #include <MetalPerformancePrimitives/MetalPerformancePrimitives.h> Unfortunately, Xcode complains that the header cannot be found. How do I include it properly? I am using Xcode 26 on Tahoe. The MetalPerformancePrimitives framework is present on my machine and I can inspect the headers in the filesystem.
3
1
789
Oct ’25
App Freezes on iPadOS 26.x - GPU Metal Errors
I work on a Qt/QML app that uses the Esri Maps SDK for Qt and that is deployed to both Windows and iPads. With a recent iPadOS upgrade to 26.1, many iPad users are reporting the application freezing after panning and/or identifying features in the map. It runs fine for our Windows users. I was able to reproduce this and grabbed the following error messages when the freeze happens: IOGPUMetalError: Caused GPU Address Fault Error (0000000b:kIOGPUCommandBufferCallbackErrorPageFault) IOGPUMetalError: Invalid Resource (00000009:kIOGPUCommandBufferCallbackErrorInvalidResource) Environment: Qt 6.5.4 (Qt for iOS) Esri Maps SDK for Qt 200.3 iPadOS 26.1 Because it appears to be a Metal error, I tried using OpenGL (Qt offers a way to easily set the target graphics API): QQuickWindow::setGraphicsApi(QSGRendererInterface::GraphicsApi::OpenGL) Which worked! No more freezing. But I'm seeing many posts that OpenGL has been deprecated by Apple. I've seen posts that Apple deprecated OpenGL ES, but it seems to still be available with iPadOS 26.1. If so, will this fix (above) just cause problems with a future iPadOS update? Any other suggestions to address this issue? Upgrading our version of Qt + Esri SDK to the latest version is not an option for us. We are in the process of upgrading the full application, but it is a year or two out. So, we just need a fix to buy us some time for now. Appreciate any thoughts/insights....
3
0
565
Dec ’25
Memory leak when no draw calls issued to encoder
I noticed that when the render command encoder adds no draw calls, an app's memory usage seems to grow unboundedly. Using a super simple MTKView-based drawing with the following delegate (code at end). If I add the simplest of draw calls, e.g., a single vertex, the app's memory usage is normal, around 100-ish MB. I am attaching a couple of screenshots, one from Xcode and one from Instruments. What's going on here? Is this an illegal program? If yes, why does it not crash, such as if the encoder or command buffer weren't ended? Or is there some race condition at play here due to the lack of draws? class Renderer: NSObject, MTKViewDelegate { var device: MTLDevice var commandQueue: MTL4CommandQueue var commandBuffer: MTL4CommandBuffer var allocator: MTL4CommandAllocator override init() { guard let d = MTLCreateSystemDefaultDevice(), let queue = d.makeMTL4CommandQueue(), let cmdBuffer = d.makeCommandBuffer(), let alloc = d.makeCommandAllocator() else { fatalError("unable to create metal 4 objects") } self.device = d self.commandQueue = queue self.commandBuffer = cmdBuffer self.allocator = alloc super.init() } func mtkView(_ view: MTKView, drawableSizeWillChange size: CGSize) {} func draw(in view: MTKView) { guard let drawable = view.currentDrawable else { return } commandBuffer.beginCommandBuffer(allocator: allocator) guard let descriptor = view.currentMTL4RenderPassDescriptor, let encoder = commandBuffer.makeRenderCommandEncoder( descriptor: descriptor ) else { fatalError("unable to create encoder") } encoder.endEncoding() commandBuffer.endCommandBuffer() commandQueue.waitForDrawable(drawable) commandQueue.commit([commandBuffer]) commandQueue.signalDrawable(drawable) drawable.present() } }
3
0
435
Jan ’26
Unable to find intelgpu_kbl_gt2r0 slice or a compatible one in binary archive
Unable to find intelgpu_kbl_gt2r0 slice or a compatible one in binary archive 'file:///System/Library/PrivateFrameworks/IconRendering.framework/Resources/binary.metallib' available slices: applegpu_g13g, applegpu_g13s, applegpu_g13d, applegpu_g14g, applegpu_g14s, applegpu_g14d, applegpu_g15g, applegpu_g15s, applegpu_g15d, applegpu_g16g, applegpu_g16s, applegpu_g17g, applegpu_g15g, applegpu_g15s, applegpu_g15d, applegpu_g16s Is it related to performance of applications in macOS 26.2 on Intel Macs?
3
0
305
Feb ’26
Customize the Metal Performance HUD on Apple TV
Hi there, Is it possible to customize the Metal Performance HUD on Apple TV, similar to how it can be done on iPhone & iPad? Would like to see things like compiled shaders for my apps on tvOS.
3
0
373
Aug ’25
Any way to save metrics presets as preferences in Metal HUD 4 on macOS?
I mean…I want to use defaults rather than launching apps via open with the saved environment variables. This is pretty easy on iOS and other platforms. So what about in macOS?
3
0
446
Aug ’25