Hi there,
Is it possible to customize the Metal Performance HUD on Apple TV, similar to how it can be done on iPhone & iPad?
I'd like to see things like compiled shaders for my apps on tvOS.
I haven't looked at screen savers in a long time because of Apple's lack of will (or resources?) to provide a public version of the private, modern SDK that Apple itself has been using for years now.
I'm now looking at the Screen Saver pane in System Settings (the What-If version of System Preferences in an alternate universe where all screens are in portrait mode).
In macOS Sequoia, it seems like 3rd party screen savers are not welcome: they are relegated to the "Other" section at the bottom of the list, and you have to click Show All before any 3rd party screen savers appear.
I also had a quick look at macOS Tahoe Beta 3, and it looks like all the real screen savers are gone (3rd party ones and the ones from Apple: Hello, Message, Flurry, etc.), or at least it takes a Nobel Prize winner to find them (and the Search field is not useful).
I tried to install a 3rd party screen saver on macOS Tahoe Beta 3; it doesn't show up in the list.
To summarize:
No public access to modern APIs AFAIK.
UI that is hostile to 3rd party screen savers on macOS Sequoia.
Apparently only screen savers that are slideshows or movies curated by Apple remain in macOS Tahoe Beta 3.
Hence the question:
Is there any future for screen savers on macOS?
Because if there's none, I won't waste my time trying to update some old screen savers.
Hello,
I'm currently working on my first SceneKit game and have encountered an issue related to moving an SCNNode using a UIPanGestureRecognizer.
When I deploy the game to my iPhone via Xcode in debug mode, all interactions are smooth. However, when I stop the debugging session and launch the game directly on the device (outside of Xcode), the SCNNode movement behaves inconsistently: sometimes it is smooth, and sometimes the interaction becomes choppy.
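For context, the movement handler looks roughly like this (a minimal sketch, not my exact code; sceneView and selectedNode are assumed properties):

import SceneKit
import UIKit

// Sketch of the pan-driven movement (sceneView and selectedNode are assumed
// properties of the view controller).
@objc func handlePan(_ gesture: UIPanGestureRecognizer) {
    guard gesture.state == .changed else { return }
    let location = gesture.location(in: sceneView)

    // Keep the node at its current depth: project its position to screen space,
    // then unproject the touch point back into the scene at that same depth.
    let projected = sceneView.projectPoint(selectedNode.position)
    let unprojected = sceneView.unprojectPoint(
        SCNVector3(Float(location.x), Float(location.y), projected.z))
    selectedNode.position = unprojected
}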
Do you have any ideas what might be causing the issue?
Hi,
What's the best way to handle drastic changes in scene characteristics with the new MTLFXTemporalDenoisedScaler?
Let's say a visible object in the scene radically changes its material properties. I can modify the albedo and roughness textures accordingly, but I suspect the history will be corrupted: blending visual information between the new frame and the previous ones might not make sense.
I guess the problem should be the same when objects appear or disappear instantly.
Does the upscaler manage these events for us (by lowering blending), or should we use the reactive mask or the denoise strength mask (or something like that) to handle them?
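One approach I'm wondering about, assuming the denoised scaler exposes a history reset flag like MTLFXTemporalScaler does, would be to drop the history on the frame where the change happens:

// Sketch: invalidate the accumulated history on the frame where the scene
// changes drastically. MTLFXTemporalScaler exposes a `reset` flag for this;
// whether MTLFXTemporalDenoisedScaler has the same property is an assumption.
scaler.reset = sceneChangedDrastically
scaler.encode(commandBuffer: commandBuffer)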
Following the documentation at https://developer.apple.com/documentation/realitykit/custommaterial it's simple to use shaders for materials and get uniforms and parameters from each vertex. However, CustomMaterial is not available on visionOS. Is there any alternative to use in this case? I want to write a shader that fills the material myself. (I have shader experience from the web and am familiar with fragment shaders.)
Recently, I adopted MetalFX for its upscaling feature.
However, I have encountered a persistent build failure for the iOS Simulator with the error message, 'MetalFX is not available when building for iOS Simulator.'
To address this, I modified the MetalFX.framework status to 'Optional' within Build Phases > Link Binary With Libraries, adding the linker option (-weak_framework). Despite this adjustment, the build process continues to fail.
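One workaround I'm also considering is keeping the MetalFX code path out of simulator builds entirely at the source level, roughly like this (a sketch; the descriptor values are placeholders):

import Metal

// Exclude the MetalFX code path from simulator builds.
#if canImport(MetalFX) && !targetEnvironment(simulator)
import MetalFX

func makeSpatialScaler(device: MTLDevice) -> MTLFXSpatialScaler? {
    let descriptor = MTLFXSpatialScalerDescriptor()
    descriptor.inputWidth = 960
    descriptor.inputHeight = 540
    descriptor.outputWidth = 1920
    descriptor.outputHeight = 1080
    descriptor.colorTextureFormat = .rgba16Float
    descriptor.outputTextureFormat = .rgba16Float
    return descriptor.makeSpatialScaler(device: device)
}
#endif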
Furthermore, I observed that the MetalFX sample application provided by Apple, specifically the one found at https://developer.apple.com/documentation/metalfx/applying-temporal-antialiasing-and-upscaling-using-metalfx, also fails to build for the iOS Simulator target.
Has anyone encountered this issue?
Hello everyone,
I must have missed something, but why isn't there a depthAttachmentPixelFormat on the new Metal 4 MTL4RenderPipelineDescriptor, unlike the old MTLRenderPipelineDescriptor?
So how do you set the depth pixel format?
Thanks in advance!
Our app is live, and it appears that since the iOS 18 update, the VideoMaterial renders a pink/purple color instead of the video (picture attached). The audio plays properly.
We found that it occurs on older devices: iPhone 11 and iPhone SE (2020).
I've found this thread from Andy Jazz on Stack Overflow:
Steps to Reproduce (a minimal sketch of this setup is included below):
Create a plane for the video screen.
Apply a VideoMaterial using AVPlayerItem.
Anchor the model entity to an ARImageAnchor.
Expected Outcome:
The video should play as a material on the plane in RealityKit.
Actual Outcome:
On iOS 18, the plane appears pink, indicating the VideoMaterial isn’t applied.
What I’ve Tried:
- Verified the video URL is correct.
- Checked that the AVPlayerItem and VideoMaterial are initialised correctly.
- Ensured the AVPlayer is playing the video.
I also tried different formats (mov/mp4/m4v) and verified that the video's status is readyToPlay.
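Here's a minimal sketch of the setup from the steps above (videoURL, arView, and the image anchor resources are placeholders, not our exact code):

import RealityKit
import AVFoundation

// Minimal sketch of the reproduction setup (resource names are placeholders).
let player = AVPlayer(playerItem: AVPlayerItem(url: videoURL))
let videoMaterial = VideoMaterial(avPlayer: player)

// A plane to act as the video screen.
let screen = ModelEntity(
    mesh: .generatePlane(width: 1.6, height: 0.9),
    materials: [videoMaterial]
)

// Anchor the plane to a reference image from an AR resource group.
let anchor = AnchorEntity(.image(group: "ARResources", name: "Poster"))
anchor.addChild(screen)
arView.scene.addAnchor(anchor)

player.play()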
Any suggestions?
Hi everyone,
I’m experiencing a critical issue with USDZ files created in Reality Composer on an iPad 9th Generation (iPadOS 18.3). The files work perfectly on iPads from the 10th Generation onwards and on iPad Pros. However, on older devices like the iPad 9th Generation and older iPhones, QuickLook (file preview) crashes when opening them.
This is a major issue because these USDZ files are part of an exhibition where artworks are extended with AR elements via a web page. If some visitors cannot view the 3D content, it significantly impacts the experience.
What’s puzzling is that two years ago, we exported USDZ files from Reality Composer, made them available via a website, and they worked flawlessly on all devices, including older iPads and iPhones. Now, with the latest iPadOS, they consistently crash on older devices.
Has anyone encountered a similar issue? Are there known limitations with QuickLook on older devices, or is there a way to optimize the USDZ files to prevent crashes? Could this be related to changes in iPadOS or RealityKit? Any advice or workaround would be greatly appreciated!
Thanks in advance!
During regular use, RealityKit generates an excessive amount of internal logging that is not actionable by third party developers. When developing an iOS RealityKit/ARKit app, this makes the Xcode console challenging to use for regular work.
(FB19173812)
See screenshots below.
Xcode does have an option for filtering out logging from specific SDKs, but enabling this feature to suppress the logging of RealityKit and related SDKs like PHASE is something developers have to do dozens of times each day. After a year of developing a RealityKit app, this process becomes frustrating.
If SDKs like Foundation, UIKit, and SwiftUI generated as much logging as RealityKit and related SDKs, Xcode's console would be unusable.
Is there any way to disable the logging of RealityKit and PHASE permanently?
Thank you for any help you provide.
My experience has been that ModelEntity(named:in:) can be used to load a USD file with a simple structure consisting of entities and model entities, and, critically, it will flatten the entity hierarchy down to a single ModelEntity, presumably reducing the number of draw calls.
However, can anyone verify that the following is true?
If ModelEntity(named:in:) is used to load a USD file from a RealityKit content bundle, it may fail when the USD file contains more complex data, such as shader graph material definitions, or perhaps for some other reason. I am not sure.
AND the error that ModelEntity(named:in:) throws in this case is
Cannot load RealityKitContent entity: Failed to find resource with name "<name>" in bundle
which would literally suggest that the file does not exist, instead of what I assume the error actually is, which is "the file exists but its entity hierarchy could not be flattened to a single ModelEntity"?
Is that an accurate description of the known behavior of ModelEntity(named:in:)?
I understand that I could use Entity(named:in:) instead, without the flattening feature. My question is really more about the seemingly misleading error message.
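For reference, the pattern I'm describing looks roughly like this (a minimal sketch; "Scene" and the realityKitContentBundle symbol from a generated RealityKitContent package are assumptions):

import RealityKit
import RealityKitContent

// Sketch: load from the RealityKit content bundle ("Scene" is a placeholder name).
func loadFlattened() async -> ModelEntity? {
    do {
        // Flattens the hierarchy into a single ModelEntity; this is the call that
        // reports "Failed to find resource" for the more complex USD files.
        return try await ModelEntity(named: "Scene", in: realityKitContentBundle)
    } catch {
        print("ModelEntity(named:in:) failed: \(error)")
        return nil
    }
}

// Entity(named:in:) loads the same file without the flattening behavior.
func loadUnflattened() async throws -> Entity {
    try await Entity(named: "Scene", in: realityKitContentBundle)
}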
Thank you for any clarification you can provide.
Hello!
I'm developing a GPU (shader) language, where I aim to target multiple backends with a common frontend. I wanted to avoid having to round-trip through Metal and instead go straight to IR, just like I do with SPIR-V, in order to have a fast and efficient compilation process.
I've been looking for a reference page where I can read about Metal's IR; as far as I'm aware it exists, but I can't seem to find it anywhere.
Furthermore, if such a reference is available, is there also a toolkit where I can run validation on the output IR, and perhaps even run optimizations, much like spv-tools for SPIR-V?
Any help would be appreciated!
Thanks,
Gustav
I am currently developing a mobile and server-side application using the new ObjectCaptureSession on iOS and PhotogrammetrySession on macOS.
I have two questions regarding the newly updated APIs.
From the WWDC23 session "Meet Object Capture for iOS," I know that the Object Capture API uses point cloud data captured from the iPhone LiDAR sensor. I want to know how to take the point cloud data captured by ObjectCaptureSession on iPhone and use it to create 3D models with PhotogrammetrySession on macOS.
From the WWDC21 example code, I know that PhotogrammetrySession utilizes the depth map from captured photos by embedding it into the HEIC images, and uses that data to create a 3D asset on macOS. I would like to know whether point cloud data is also embedded into the images to be used during 3D reconstruction and, if not, how else the point cloud data is passed in for use during reconstruction.
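For context, this is roughly how I'm feeding the captured images into the reconstruction session on macOS (a minimal sketch; imagesFolderURL and outputURL are placeholder paths):

import Foundation
import RealityKit

// Sketch: reconstruct a model on macOS from the folder of images produced by
// ObjectCaptureSession (imagesFolderURL and outputURL are placeholders).
func reconstruct(imagesFolderURL: URL, outputURL: URL) async throws {
    let session = try PhotogrammetrySession(
        input: imagesFolderURL,
        configuration: PhotogrammetrySession.Configuration()
    )

    // Listen for session messages while the request is processed.
    let outputs = Task {
        for try await output in session.outputs {
            switch output {
            case .processingComplete:
                print("Reconstruction finished")
                return
            case .requestError(let request, let error):
                print("Request \(request) failed: \(error)")
            default:
                break
            }
        }
    }

    try session.process(requests: [
        .modelFile(url: outputURL, detail: .full)
    ])

    // Keep the session and the listener alive until processing completes.
    try await outputs.value
}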
Another question: I know that point cloud data is returned as a result of a PhotogrammetrySession.Request. I would like to know whether this point cloud data is the same set of data captured during ObjectCaptureSession from WWDC23 that is used to create ObjectCapturePointCloudView.
Thank you to everyone for the help in advance. It's a real pleasure to be developing with all the updates to RealityKit and the Object Capture API.
I would love to use Background GPU Access to do some video processing in the background.
However the documentation of BGContinuedProcessingTaskRequest.Resources.gpu clearly states:
Not all devices support background GPU use. For more information, see Performing long-running tasks on iOS and iPadOS.
Is there a list available of currently released devices that do (or don't) support GPU background usage? That would help to understand what part of our user base can use this feature. (And what hardware we need to test this on as developers.)
For example, it seems that it isn't supported on an iPad Pro (M1) with the current iOS 26 beta. The simulators also don't seem to support the background GPU resource. So it would be great to understand what hardware is capable of using this feature!
I didn't find a suggestion box on Swift's website so I'll post it here.
Swift Charts is great but limited. I need more data on a single chart. Candlestick and OHLC-type charts would be an excellent addition. Hopefully, people at Apple with influence can make that happen.
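In the meantime, the closest I can get is approximating candlesticks by combining RuleMark and RectangleMark, roughly like this (a sketch; the OHLC sample type is a placeholder):

import SwiftUI
import Charts

// Placeholder OHLC sample type for the sketch.
struct OHLC: Identifiable {
    let id = UUID()
    let date: Date
    let open: Double
    let high: Double
    let low: Double
    let close: Double
}

struct CandlestickChart: View {
    let samples: [OHLC]

    var body: some View {
        Chart(samples) { sample in
            // High-low wick.
            RuleMark(
                x: .value("Date", sample.date),
                yStart: .value("Low", sample.low),
                yEnd: .value("High", sample.high)
            )
            // Open-close body, colored by direction.
            RectangleMark(
                x: .value("Date", sample.date),
                yStart: .value("Open", sample.open),
                yEnd: .value("Close", sample.close),
                width: .fixed(6)
            )
            .foregroundStyle(sample.close >= sample.open ? Color.green : Color.red)
        }
    }
}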
Thanks.
Currently the metal-cpp examples target the desktop and use AppKit, which is not supported on iOS. Are there any tips for developing with metal-cpp on a mobile device?
I'm new here, so I don't know which topic this feature belongs to... Sorry about that!
I watched the WWDC stream and I'm really interested in this feature; I'm wondering if it could be used in my apps.
I looked up the documentation, but I found it only supports visionOS (I'm not sure about that, but I saw that the demo is based on visionOS).
let dic: [AnyHashable: Any] = [
    kCGPDFXRegistryName: "http://www.color.org" as CFString,
    kCGPDFXOutputConditionIdentifier: "FOGRA43" as CFString,
    kCGPDFContextOutputIntent: "GTS_PDFX" as CFString,
    kCGPDFXOutputIntentSubtype: "GTS_PDFX" as CFString,
    kCGPDFContextCreateLinearizedPDF: "" as CFString,
    kCGPDFContextCreatePDFA: "" as CFString,
    kCGPDFContextAuthor: "Placeholder" as CFString,
    kCGPDFContextCreator: "Placeholder" as CFString
]
Hello,
Now I would like to export my PDFs as PDF/A. As far as I can tell, Core Graphics also has the right options for this.
Unfortunately, the documentation does not show what string value is required for 'kCGPDFContextCreatePDFA' or 'kCGPDFContextCreateLinearizedPDF'.
What I have already tried: GTS_PDFA1, PDF/A-1, and true as CFString.
(My CFDictionary is above; keys like kCGPDFContextAuthor, for example, are working perfectly.)
In the Finder you can see these two options, which I would also like to implement in my app.
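For completeness, this is roughly how I create the PDF context with that dictionary as auxiliary info (a sketch; outputURL and the media box are placeholders):

import CoreGraphics
import Foundation

// Sketch: create a PDF context using the auxiliary info dictionary from above.
// outputURL is a placeholder.
var mediaBox = CGRect(x: 0, y: 0, width: 595, height: 842) // A4 in points
if let context = CGContext(outputURL as CFURL, mediaBox: &mediaBox, dic as CFDictionary) {
    context.beginPDFPage(nil)
    // ... draw page content here ...
    context.endPDFPage()
    context.closePDF()
}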
Thank you in advance!
I am trying to convert a JPG image to a JP2 (JPEG 2000) format using the ImageMagick library on iOS. However, although the file extension is changing to .jp2, the format of the image does not seem to be changing. The output image is still being treated as a JPG file, and not as a true JP2 format.
Here is the code:
- (IBAction)convertButtonClicked:(id)sender {
    NSString *jpgPath = [[NSBundle mainBundle] pathForResource:@"Example" ofType:@"jpg"];
    NSString *tempFilePath = [NSTemporaryDirectory() stringByAppendingPathComponent:@"Converted.jp2"];

    MagickWand *wand = NewMagickWand();

    // Read the source JPG.
    if (MagickReadImage(wand, [jpgPath UTF8String]) == MagickFalse) {
        char *description;
        ExceptionType severity;
        description = MagickGetException(wand, &severity);
        NSLog(@"Error reading image: %s", description);
        MagickRelinquishMemory(description);
        DestroyMagickWand(wand);
        return;
    }

    // Ask ImageMagick to encode the output as JPEG 2000.
    if (MagickSetFormat(wand, "JP2") == MagickFalse) {
        char *description;
        ExceptionType severity;
        description = MagickGetException(wand, &severity);
        NSLog(@"Error setting image format to JP2: %s", description);
        MagickRelinquishMemory(description);
    }

    // Write the converted image to the temporary directory.
    if (MagickWriteImage(wand, [tempFilePath UTF8String]) == MagickFalse) {
        NSLog(@"Error writing JP2 image");
        DestroyMagickWand(wand);
        return;
    }

    NSLog(@"Image successfully converted.");
    DestroyMagickWand(wand);
}
@end
I'm a newbie at Vulkan and Xcode.
I have my project on github https://github.com/flocela/OrangeSpider/
Whenever I run, two windows open instead of only one.
I added testing, which means I have an OrangeSpider.xctestplan in the OrangeSpider/TestsOrangeSpider/ folder.
This is my first time adding testing to an Xcode project, so I think this may be where the problem is.
I also get this error message:
ViewBridge to RemoteViewService Terminated: Error Domain=com.apple.ViewBridge Code=18 "(null)" UserInfo={com.apple.ViewBridge.error.hint=this process disconnected remote view controller -- benign unless unexpected, com.apple.ViewBridge.error.description=NSViewBridgeErrorCanceled}