Delve into the world of graphics and game development. Discuss creating stunning visuals, optimizing game mechanics, and sharing resources for game developers.

Posts under Graphics & Games topic

Is migrating from ARView to RealityView recommended?
We're using RealityKit to create a science education AR app for iOS, iPadOS, and visionOS. In the WWDC25 session video "Bring your SceneKit project to RealityKit" (https://developer.apple.com/videos/play/wwdc2025/288, at 8:15), it's explained that when using RealityKit, RealityView should be used in all cases, whereas in the past SceneKit required SCNView, SceneView, or ARSCNView, depending on an app's requirements.

Because the initial development of our app on iOS predates iOS 18's RealityView, our app currently uses ARView to render RealityKit AR content on iOS and iPadOS. Is it recommended that we migrate to RealityView, or can we safely continue using our existing ARView implementation? We'd prefer to avoid unnecessary development cost. If migrating from ARView to RealityView is recommended, what specific benefits should we expect from this transition? Thank you.
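For reference, a minimal sketch of what an ARView-style setup looks like with RealityView on iOS 18+. The `makeSceneEntity()` helper is hypothetical — it stands in for whatever builds the app's existing RealityKit content:

```swift
import SwiftUI
import RealityKit

// Sketch: RealityView as a cross-platform replacement for ARView on iOS 18+.
struct ScienceARView: View {
    var body: some View {
        RealityView { content in
            // Drive the view from the device camera with spatial tracking,
            // comparable to ARView's AR camera mode.
            content.camera = .spatialTracking
            content.add(makeSceneEntity()) // hypothetical helper
        }
    }
}
```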
Replies: 1 · Boosts: 2 · Views: 193 · Activity: Jun ’25
CGEvent Not Working
I am trying to simulate a paste command and it seems to not want to paste. It worked at one point with the same code and now is causing issues. My code looks like this:

```swift
func simulatePaste() {
    guard let source = CGEventSource(stateID: .hidSystemState) else {
        print("Failed to create event source")
        return
    }
    let keyDown = CGEvent(keyboardEventSource: source, virtualKey: CGKeyCode(9), keyDown: true)
    let keyUp = CGEvent(keyboardEventSource: source, virtualKey: CGKeyCode(9), keyDown: false)
    keyDown?.flags = .maskCommand
    keyUp?.flags = .maskCommand
    keyDown?.post(tap: .cgAnnotatedSessionEventTap)
    keyUp?.post(tap: .cgAnnotatedSessionEventTap)
    print("Simulated Cmd + V")
}
```

I know that there are some issues around permissions, so in my Info.plist I have this:

```xml
<string>NSApplication</string>
<key>NSAppleEventsUsageDescription</key>
<string>This app requires permission to send keyboard input for pasting from the clipboard.</string>
```

I have also disabled sandbox. It does ask me if I want to give the app permissions, but after approving it, it still doesn't paste.
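One thing worth checking: posting CGEvents is gated by the Accessibility permission (System Settings > Privacy & Security > Accessibility), which is separate from the Apple Events permission that NSAppleEventsUsageDescription covers. A quick sketch for verifying the app is trusted at launch:

```swift
import ApplicationServices

// Sketch: confirm Accessibility trust before posting synthetic key events.
// If the app isn't trusted yet, this prompts the user to grant access.
func ensureAccessibilityPermission() -> Bool {
    let promptKey = kAXTrustedCheckOptionPrompt.takeUnretainedValue() as String
    let options = [promptKey: true] as CFDictionary
    return AXIsProcessTrustedWithOptions(options)
}
```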
Replies: 1 · Boosts: 0 · Views: 452 · Activity: Feb ’25
Broadcast Upload Extension
I am trying to use a Broadcast Upload Extension, but the broadcast picker starts its countdown and then stops (SwiftUI). Steps I followed:

- Added BroadcastUploadExtension as a target.
- Used the same App Group for the main app and the extension.
- Added packages using SPM.

It seems the extension's functions are not getting triggered. I also checked UIScreen.main.isCaptured, which always comes back false, and I tried using logs, which never appeared.
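A countdown that stops immediately is often a picker-wiring issue. Below is a sketch of hosting RPSystemBroadcastPickerView in SwiftUI with preferredExtension pointed at the upload extension; the bundle identifier is a placeholder and must match the extension's actual identifier:

```swift
import SwiftUI
import ReplayKit

// Sketch: host the system broadcast picker in SwiftUI. If preferredExtension
// doesn't match the upload extension's bundle ID, the countdown can run and
// then silently stop without ever invoking the extension.
struct BroadcastPicker: UIViewRepresentable {
    func makeUIView(context: Context) -> RPSystemBroadcastPickerView {
        let picker = RPSystemBroadcastPickerView(frame: CGRect(x: 0, y: 0, width: 60, height: 60))
        picker.preferredExtension = "com.example.app.BroadcastUploadExtension" // placeholder
        picker.showsMicrophoneButton = false
        return picker
    }
    func updateUIView(_ uiView: RPSystemBroadcastPickerView, context: Context) {}
}
```

Also worth confirming that the extension's principal class subclasses RPBroadcastSampleHandler and is set as NSExtensionPrincipalClass in the extension's Info.plist.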
Replies: 1 · Boosts: 0 · Views: 599 · Activity: Mar ’25
How to Configure angularLimitInYZ for PhysicsSphericalJoint in RealityKit (Pendulum/Swing Behavior)
Hello RealityKit developers,

I'm currently working on physics simulations in my visionOS app and am trying to adapt the concepts from the official sample "Simulating physics joints in your RealityKit app". In the sample, a sphere is connected to the ceiling using a PhysicsRevoluteJoint to create a hinge-like simulation. I've successfully modified this setup to use a PhysicsSphericalJoint instead. The basic replacement works as expected: pin1 (attached to the sphere) rotates freely around pin0 (attached to the ceiling), much like a ball-and-socket joint should, removing all translational degrees of freedom.

My challenge lies with the PhysicsSphericalJoint's angularLimitInYZ property. The documentation mentions that this property allows limiting the rotation around the Y and Z axes, defining an "elliptical cone shape around the x-axis of pin0." However, I'm struggling to understand how to specify these values to achieve a desired rotational limit.

If I have a sphere that is currently capable of rotating 360 degrees around pin0 (like a free-spinning ball on a string), how would I use angularLimitInYZ to restrict its rotation to a certain height or angular range, preventing it from completing a full circle? Specifically, I'm trying to achieve a "swing"-like behavior where the sphere oscillates back and forth but cannot rotate completely overhead or underfoot. What values or approach should I use for the angularLimitInYZ tuple to define such a restricted pendulum-like motion?

Any insights, code examples, or explanations on how to properly configure angularLimitInYZ for this kind of behavior would be incredibly helpful! The following code is modified from the sample:

```swift
extension MainView {
    func addPinsTo(ballEntity: Entity, attachmentEntity: Entity) throws {
        let hingeOrientation = simd_quatf(from: [1, 0, 0], to: [0, 0, 1])
        let attachmentPin = attachmentEntity.pins.set(
            named: "attachment_hinge",
            position: .zero,
            orientation: hingeOrientation
        )
        let relativeJointLocation = attachmentEntity.position(
            relativeTo: ballEntity
        )
        let ballPin = ballEntity.pins.set(
            named: "ball_hinge",
            position: relativeJointLocation,
            orientation: hingeOrientation
        )
        // Create a PhysicsSphericalJoint between the two pins.
        let revoluteJoint = PhysicsSphericalJoint(pin0: attachmentPin, pin1: ballPin)
        try revoluteJoint.addToSimulation()
    }
}
```

The following image is a screenshot of the operation when changing to PhysicsSphericalJoint. Thank you in advance for your assistance.
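Not an authoritative answer, but one way to read the "elliptical cone around the x-axis of pin0" description: the tuple gives the cone's half-angles (in radians) around pin0's y and z axes. Both the tuple semantics and the exact usage below are assumptions to verify against the current headers; the extra step that follows from the docs is that pin0's x-axis must point along the swing's rest direction so the cone opens the right way:

```swift
// Hedged sketch — assumes angularLimitInYZ takes half-angles in radians.
var sphericalJoint = PhysicsSphericalJoint(pin0: attachmentPin, pin1: ballPin)
// Limit swing to roughly +/-60 degrees around pin0's y and z axes: the ball
// oscillates like a pendulum but can't travel overhead or underfoot.
sphericalJoint.angularLimitInYZ = (.pi / 3, .pi / 3)
try sphericalJoint.addToSimulation()
```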
Replies: 1 · Boosts: 0 · Views: 269 · Activity: Jul ’25
'tangents' was deprecated in visionOS 2.0: Use cp_drawable_compute_projection instead
I'm using a class with tangents to render on RealityKit for visionOS, but on visionOS 26 it causes a crash in the app, and there is no documentation on how to implement cp_drawable_compute_projection. I have tried a few options but without success. Could you help me implement it? The relevant part of the code is:

```swift
return drawable.views.map { view in
    let userViewpointMatrix = (simdDeviceAnchor * view.transform).inverse
    let projectionMatrix = ProjectiveTransform3D(
        leftTangent: Double(view.tangents[0]),
        rightTangent: Double(view.tangents[1]),
        topTangent: Double(view.tangents[2]),
        bottomTangent: Double(view.tangents[3]),
        nearZ: Double(drawable.depthRange.y),
        farZ: Double(drawable.depthRange.x),
        reverseZ: true
    )
    let screenSize = SIMD2(x: Int(view.textureMap.viewport.width),
                           y: Int(view.textureMap.viewport.height))
    return ModelRendererViewportDescriptor(viewport: view.textureMap.viewport,
                                           projectionMatrix: .init(projectionMatrix),
                                           viewMatrix: userViewpointMatrix * translationMatrix * rotationMatrix * scalingMatrix * commonUpCalibration,
                                           screenSize: screenSize)
}
```
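A hedged sketch of the replacement path, assuming the Swift overlay exposes the projection through a computeProjection(convention:viewIndex:) method on the drawable and that it returns a simd_float4x4 the descriptor can take directly — verify both against the current Compositor Services headers before relying on this:

```swift
return drawable.views.indices.map { index in
    let view = drawable.views[index]
    let userViewpointMatrix = (simdDeviceAnchor * view.transform).inverse
    // Replaces the ProjectiveTransform3D built from the deprecated view.tangents.
    let projectionMatrix = drawable.computeProjection(convention: .rightUpBack,
                                                      viewIndex: index)
    let screenSize = SIMD2(x: Int(view.textureMap.viewport.width),
                           y: Int(view.textureMap.viewport.height))
    return ModelRendererViewportDescriptor(viewport: view.textureMap.viewport,
                                           projectionMatrix: projectionMatrix,
                                           viewMatrix: userViewpointMatrix * translationMatrix * rotationMatrix * scalingMatrix * commonUpCalibration,
                                           screenSize: screenSize)
}
```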
Replies: 1 · Boosts: 0 · Views: 140 · Activity: Jun ’25
Low Power Mode on macOS 26 Tahoe + Vsync fullscreen limits application to 30 fps
I'm experiencing a specific issue: on any of the macOS 26 Tahoe betas with Low Power Mode enabled, using Vsync in fullscreen limits my application's framerate to a hard 30 fps. I have not experienced this on any older OS. For example, Low Power Mode on 13.6 Ventura with Vsync fullscreen lets my application run at the full 60 fps without issues. Is this a bug or a change in the behavior of Low Power Mode on Tahoe? My application is 3D, runs at 60 fps, and is sensitive to tearing, so I need Vsync, and it is mostly used in fullscreen. Low Power Mode is the default on many Macs, so the default experience on Tahoe is currently a halved 30 fps. There also seem to be inconsistencies in which machines this happens on, but older OSes are always fine.
Replies: 1 · Boosts: 0 · Views: 262 · Activity: Aug ’25
SCNCamera SSAO on visionOS
Hi,

Looking at the documentation for screenSpaceAmbientOcclusionIntensity, I noticed that it says this is supported on visionOS 1.0+: https://developer.apple.com/documentation/scenekit/scncamera/screenspaceambientocclusionintensity

Could someone enlighten me as to how that would work? As far as I know, we don't use an SCNCamera on visionOS. So what's the idea here? Can we activate SSAO on visionOS?
Replies: 1 · Boosts: 0 · Views: 434 · Activity: Feb ’25
How to load a USDZ inside of a Reality Composer Pro package as ModelEntity in RealityKit?
I need a MeshResource from a ModelEntity to generate a box collider, but ModelEntity fails to load USDZ files from the Reality Composer Pro (RCP) bundle.

This code works for loading an Entity:

```swift
// Successfully loads as a generic Entity
previewEntity = try await Entity(named: fileName, in: realityKitContentBundle)
```

But this fails when trying to load a ModelEntity:

```swift
// Fails to load as a ModelEntity
modelEntity = try await ModelEntity(named: fileName, in: realityKitContentBundle)
```

I found this thread mentioning: "You'll likely go from USDZ to Entity which contains a MeshResource when you load/init the USDZ file." But it doesn't explain how to actually extract the MeshResource.

Could anyone advise:

- How to properly load USDZ files as ModelEntity from RCP bundles?
- How to extract a MeshResource from a successfully loaded Entity?
- Alternative approaches to generate box colliders if direct extraction isn't possible?

Sample code for extraction or workarounds would be greatly appreciated!
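A sketch of both routes, assuming the RCP scene contains at least one mesh-bearing entity: recursively search the loaded hierarchy for a ModelComponent and read its mesh, or skip mesh extraction entirely and build the box collider from the entity's visual bounds:

```swift
import RealityKit

// Sketch: find the first MeshResource anywhere in the loaded hierarchy.
func firstMeshResource(in entity: Entity) -> MeshResource? {
    if let model = entity.components[ModelComponent.self] {
        return model.mesh
    }
    for child in entity.children {
        if let mesh = firstMeshResource(in: child) { return mesh }
    }
    return nil
}

// Alternative sketch: a box collider straight from the visual bounds,
// with no MeshResource extraction needed.
let bounds = previewEntity.visualBounds(relativeTo: nil)
previewEntity.components.set(
    CollisionComponent(shapes: [.generateBox(size: bounds.extents)])
)
```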
Replies: 1 · Boosts: 0 · Views: 183 · Activity: Jun ’25
Value of type 'SCRecordingOutput' has no member 'delegate'
Hello, I am trying to capture a screen recording (output.mp4) using ScreenCaptureKit, along with the mouse positions during the recording (mouse.json). The recording and the mouse positions (tracked from mouse-movement events only) need to be perfectly synced in order to add effects in post editing.

I started off by calling await stream?.startCapture() and, after that, starting my mouse tracking function:

```swift
try await captureEngine.startCapture(configuration: config, filter: filter, recordingOutput: recordingOutput)
let captureStartTime = Date()
mouseTracker?.startTracking(with: captureStartTime)
```

But every time I tested, there was a clear inconsistency in sync between the recorded video and the recorded mouse positions. The only thing I want is to know when exactly the recording "actually" started, so that I can start the mouse capture at that same time. I therefore tried using the delegates, but haven't been able to set them up properly.

```swift
import Foundation
import AVFAudio
import ScreenCaptureKit
import OSLog
import Combine

class CaptureEngine: NSObject, @unchecked Sendable {
    private let logger = Logger()
    private(set) var stream: SCStream?
    private var streamOutput: CaptureEngineStreamOutput?
    private var recordingOutput: SCRecordingOutput?
    private let videoSampleBufferQueue = DispatchQueue(label: "com.francestudio.phia.VideoSampleBufferQueue")
    private let audioSampleBufferQueue = DispatchQueue(label: "com.francestudio.phia.AudioSampleBufferQueue")
    private let micSampleBufferQueue = DispatchQueue(label: "com.francestudio.phia.MicSampleBufferQueue")

    func startCapture(configuration: SCStreamConfiguration, filter: SCContentFilter, recordingOutput: SCRecordingOutput) async throws {
        // Create the stream output delegate.
        let streamOutput = CaptureEngineStreamOutput()
        self.streamOutput = streamOutput
        do {
            stream = SCStream(filter: filter, configuration: configuration, delegate: streamOutput)
            try stream?.addStreamOutput(streamOutput, type: .screen, sampleHandlerQueue: videoSampleBufferQueue)
            try stream?.addStreamOutput(streamOutput, type: .audio, sampleHandlerQueue: audioSampleBufferQueue)
            try stream?.addStreamOutput(streamOutput, type: .microphone, sampleHandlerQueue: micSampleBufferQueue)
            self.recordingOutput = recordingOutput
            recordingOutput.delegate = self
            try stream?.addRecordingOutput(recordingOutput)
            try await stream?.startCapture()
        } catch {
            logger.error("Failed to start capture: \(error.localizedDescription)")
            throw error
        }
    }

    func stopCapture() async throws {
        do {
            try await stream?.stopCapture()
        } catch {
            logger.error("Failed to stop capture: \(error.localizedDescription)")
            throw error
        }
    }

    func update(configuration: SCStreamConfiguration, filter: SCContentFilter) async {
        do {
            try await stream?.updateConfiguration(configuration)
            try await stream?.updateContentFilter(filter)
        } catch {
            logger.error("Failed to update the stream session: \(String(describing: error))")
        }
    }

    func stopRecordingOutputForStream(_ recordingOutput: SCRecordingOutput) throws {
        try self.stream?.removeRecordingOutput(recordingOutput)
    }
}

// MARK: - SCRecordingOutputDelegate
extension CaptureEngine: SCRecordingOutputDelegate {
    func recordingOutputDidStartRecording(_ recordingOutput: SCRecordingOutput) {
        let startTime = Date()
        logger.info("Recording output did start recording \(startTime)")
    }

    func recordingOutputDidFinishRecording(_ recordingOutput: SCRecordingOutput) {
        logger.info("Recording output did finish recording")
    }

    func recordingOutput(_ recordingOutput: SCRecordingOutput, didFailWithError error: any Error) {
        logger.error("Recording output failed with error: \(error.localizedDescription)")
    }
}

private class CaptureEngineStreamOutput: NSObject, SCStreamOutput, SCStreamDelegate {
    private let logger = Logger()

    override init() {
        super.init()
    }

    func stream(_ stream: SCStream, didOutputSampleBuffer sampleBuffer: CMSampleBuffer, of outputType: SCStreamOutputType) {
        guard sampleBuffer.isValid else { return }
        switch outputType {
        case .screen: break
        case .audio: break
        case .microphone: break
        @unknown default:
            logger.error("Encountered unknown stream output type:")
        }
    }

    func stream(_ stream: SCStream, didStopWithError error: Error) {
        logger.error("Stream stopped with error: \(error.localizedDescription)")
    }
}
```

I am getting the error:

Value of type 'SCRecordingOutput' has no member 'delegate'

even though I am targeting macOS 15+ (macOS 26, actually) and macOS only. What is the best way to achieve the desired result? Is there another or better way to do it?
Replies: 1 · Boosts: 0 · Views: 215 · Activity: Oct ’25
How can I simultaneously apply the drag gesture to multiple entities?
I want to drag EntityA while also dragging EntityB independently. I've tried to separate them by entity, but only the latest drag gesture is recognized:

```swift
RealityView { content, attachments in
    ...
}
.gesture(
    DragGesture()
        .targetedToEntity(EntityA)
        .onChanged { value in ... }
)
.gesture(
    DragGesture()
        .targetedToEntity(EntityB)
        .onChanged { value in ... }
)
```

I also tried using simultaneously, but that didn't work either; maybe I'm missing something:

```swift
.gesture(
    DragGesture()
        .targetedToEntity(EntityA)
        .onChanged { value in ... }
        .simultaneously(with: DragGesture()
            .targetedToEntity(EntityB)
            .onChanged { value in ... }
        )
)
```
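One pattern worth trying, sketched below: attach a single drag gesture via .simultaneousGesture so it isn't swallowed by an earlier .gesture modifier, target it at any entity, and branch on which entity was hit:

```swift
// Sketch: one drag handler for both entities, dispatched by hit entity.
RealityView { content in /* ... */ }
    .simultaneousGesture(
        DragGesture()
            .targetedToAnyEntity()
            .onChanged { value in
                if value.entity == entityA {
                    // move entityA
                } else if value.entity == entityB {
                    // move entityB
                }
            }
    )
```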
Replies: 1 · Boosts: 1 · Views: 694 · Activity: Mar ’25
macOS 26 Games app – Achievement description shows incorrect text before unlocking
Hello, I found an issue with the Games app on macOS 26 (Tahoe) when viewing achievements: In App Store Connect, each achievement has different values set for the pre-earned description and the post-earned description. When testing with GameKit directly (GKAchievementDescription), both values are returned correctly. However, in the macOS Games app, the post-earned description is shown even before the achievement is earned. This seems to be a display issue specific to the Games app on macOS. Could you confirm if this is a known bug in the Games app, or if there is a reason why pre-earned descriptions are not being shown? Thank you.
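For anyone reproducing the GameKit-side check mentioned above, a minimal sketch (assuming Game Center is already authenticated):

```swift
import GameKit

// Sketch: load all achievement descriptions and print both variants, to
// confirm App Store Connect returns distinct pre/post-earned strings.
GKAchievementDescription.loadAchievementDescriptions { descriptions, error in
    if let error { print("Load failed: \(error)"); return }
    for desc in descriptions ?? [] {
        print(String(describing: desc.identifier),
              "| unearned:", String(describing: desc.unachievedDescription),
              "| earned:", String(describing: desc.achievedDescription))
    }
}
```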
Replies: 1 · Boosts: 0 · Views: 323 · Activity: Sep ’25
PhotogrammetrySession fails with internal errors 4011 / 4012 when using iOS Object Capture (Area Mode) images
Hi all, I'm running into an issue when trying to reconstruct a 3D model using PhotogrammetrySession on macOS from a set of images captured via the iOS Object Capture sample app, specifically in Area mode.

When I attempt to create the model from these images (using the raw Images/ folder exported directly from the capture session), I get the following errors:

```
ERROR cv3dapi.pg: Internal error codes (2): 4011 4012
WARN cv3dapi.pg: Internal warning codes (1): 4507
Output error with code = -15
requestError: CoreOC.PhotogrammetrySession.Error.processError
```

I use the "Images" directory exported directly from Object Capture with my iPhone 12 Pro Max (which has lidar) set to "area mode" in the Object Capture app. Here is example metadata from one HEIC image in the sequence:

```
heif-info Images/00044.869568833.HEIC
MIME type: image/heic
main brand: heic
compatible brands: mif1, MiHE, MiPr, miaf, MiHB, heic
image: 3024x4032 (id=49), primary
  tiles: 6x8, tile size: 512x512
  colorspace: YCbCr, 4:2:0
  bit depth: 8
  thumbnail: 240x320
  color profile: nclx
  alpha channel: no
  depth channel: yes
    size: 192x256
    bits per pixel: 8
    z-near: 1.173828
    z-far: 2.552734
    d-min: undefined
    d-max: undefined
    representation: uniform Z
metadata:
  Exif: 960 bytes
  uri /tag:apple.com,2023:ObjectCapture#CameraTrackingState: 4 bytes
  uri /tag:apple.com,2023:ObjectCapture#CameraCalibrationData: 1015 bytes
  uri /tag:apple.com,2023:ObjectCapture#ObjectTransform: 48 bytes
  uri /tag:apple.com,2023:ObjectCapture#ObjectBoundingBox: 48 bytes
  uri /tag:apple.com,2023:ObjectCapture#RawFeaturePoints: 832 bytes
  uri /tag:apple.com,2023:ObjectCapture#PointCloudData: 23984 bytes
  uri /tag:apple.com,2023:ObjectCapture#BundleVersion: 5 bytes
  uri /tag:apple.com,2023:ObjectCapture#SegmentID: 4 bytes
  uri /tag:apple.com,2024:ObjectCapture#SessionUUID: 36 bytes
  uri /tag:apple.com,2024:ObjectCapture#CaptureMode: 4 bytes
  uri /tag:apple.com,2023:ObjectCapture#Feedback: 4 bytes
  uri /tag:apple.com,2023:ObjectCapture#WideToDepthCameraTransform: 48 bytes
  uri /tag:apple.com,2023:ObjectCapture#TemporalDepthPointClouds: 864026 bytes
transformations:
  angle (ccw): 270
region annotations: none
properties:
  camera intrinsic matrix:
    focal length: 2813.695557; 2813.695557
    principal point: 1522.338502; 2002.843018
    skew: 0.000000
  camera extrinsic matrix:
    rotation matrix:
      -0.695  0.344 -0.632
       0.007 -0.875 -0.483
      -0.719 -0.340  0.606
```

Questions:

- What do internal error codes 4011 and 4012 refer to?
- Is there something specific about Area mode captures that requires preprocessing before they're compatible with PhotogrammetrySession?
- Has anyone successfully reconstructed a model from an Area mode session using the stock Apple tools?

NOTE: I can provide the folder of images for debugging if that would help!
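To help isolate whether the failure lies in the images or the request setup, a minimal reconstruction sketch (paths are placeholders; run inside an async context):

```swift
import RealityKit

// Sketch: a bare-bones reconstruction pass over the exported Images/ folder.
let inputFolder = URL(fileURLWithPath: "/path/to/Images", isDirectory: true) // placeholder
let outputModel = URL(fileURLWithPath: "/path/to/model.usdz")                // placeholder

let session = try PhotogrammetrySession(input: inputFolder)
try session.process(requests: [.modelFile(url: outputModel, detail: .medium)])

for try await output in session.outputs {
    switch output {
    case .processingComplete:
        print("Reconstruction finished")
    case .requestError(let request, let error):
        print("Request \(request) failed: \(error)")
    default:
        break
    }
}
```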
Replies: 1 · Boosts: 2 · Views: 939 · Activity: Jul ’25
virtual game controller + SwiftUI warning
Hi, I've just moved my SpriteKit-based game from UIView to SwiftUI + SpriteView and I'm getting this message:

Adding 'GCControllerView' as a subview of UIHostingController.view is not supported and may result in a broken view hierarchy. Add your view above UIHostingController.view in a common superview or insert it into your SwiftUI content in a UIViewRepresentable instead.

Here's how I'm doing this:

```swift
struct ContentView: View {
    @State var alreadyStarted = false
    let initialScene = GKScene(fileNamed: "StartScene")!.rootNode as! SKScene

    var body: some View {
        ZStack {
            SpriteView(scene: initialScene,
                       transition: .crossFade(withDuration: 1),
                       isPaused: false,
                       preferredFramesPerSecond: 60)
                .edgesIgnoringSafeArea(.all)
                .onAppear {
                    if !self.alreadyStarted {
                        self.alreadyStarted.toggle()
                        initialScene.scaleMode = .aspectFit
                    }
                }
            VirtualControllerView()
                .onAppear {
                    let virtualController = BTTSUtilities.shared.makeVirtualController()
                    BTTSSharedData.shared.virtualGameController = virtualController
                    BTTSSharedData.shared.virtualGameController?.connect()
                }
                .onDisappear {
                    BTTSSharedData.shared.virtualGameController?.disconnect()
                }
        }
    }
}

struct VirtualControllerView: UIViewRepresentable {
    func makeUIView(context: Context) -> UIView {
        let result = PassthroughView()
        return result
    }

    func updateUIView(_ uiView: UIView, context: Context) {}
}

class PassthroughView: UIView {
    override func hitTest(_ point: CGPoint, with event: UIEvent?) -> UIView? {
        for subview in subviews.reversed() {
            let convertedPoint = convert(point, to: subview)
            if let hitView = subview.hitTest(convertedPoint, with: event) {
                return hitView
            }
        }
        return nil
    }
}
```
Replies: 1 · Boosts: 0 · Views: 326 · Activity: Sep ’25
SMAPI Malware blocking in Stardew Valley on macOS
Hello. At this point, I'm at my wit's end, because I've tried EVERYTHING just to be able to play a single game on my Mac, but the new update makes it impossible. So I'm just gonna ask for one game: I've been trying to play Stardew Valley, modded with SMAPI, for a week now. Despite playing the game with mods for almost 2 years, the OS is refusing to open the game because SMAPI "contains malware". I tried reinstalling the mod, but no dice. It just automatically deletes the terminal and blocks the game from opening. You can imagine my frustration, because the mod has been 100% safe for 2 years. There's no option to "Open Anyway" in Security Settings, either. I have no say in this. I've tried code signing it in Terminal (three times). Also no dice. Followed these two forums:

https://www.reddit.com/r/StardewValley/comments/1h071jl/mac_deleted_stardew_modding_api_because_of_malware/
https://www.reddit.com/r/SMAPI/comments/1h0fgv9/solution_for_mac_malware_issue_with_smapi_417/?utm_source=share&utm_medium=web3x&utm_name=web3xcss&utm_term=1&utm_content=share_button

NOTHING. Please tell me there's a way to override this??? Just let me install malware on my computer!😭
Replies: 1 · Boosts: 1 · Views: 775 · Activity: Jan ’25
Compute kernel fails to compile when calling texture.read()
If I compile a compute kernel with a call to texture.read(), it fails with the following error:

Error Domain=AGXMetalG13X Code=3 "Encountered unlowered function call to air.get_read_sampler" UserInfo={NSLocalizedDescription=Encountered unlowered function call to air.get_read_sampler}

This error occurs on both macOS and iOS 26 Beta 5, but not when running on a simulator or in a playground. It does not occur on a macOS Sequoia VM. It occurs whether I use the old Metal 3 or the new Metal 4 compilation method. A workaround would be to use a sampler, but according to the feature tables, all platforms support reading from textures of all formats. Below is a minimal example that produces the error:

```swift
let device = MTLCreateSystemDefaultDevice()!
let library = device.makeDefaultLibrary()!
let computeFunction = library.makeFunction(name: "compute_test")!
do {
    let pipeline = try device.makeComputePipelineState(function: computeFunction)
    debugPrint(pipeline)
} catch {
    debugPrint("Metal 3 failed with error:\n\(error)")
}
```

```metal
#import <metal_stdlib>
using namespace metal;

kernel void compute_test(uint2 gid [[thread_position_in_grid]],
                         texture2d<float, access::read> in [[texture(0)]],
                         texture2d<float, access::write> out [[texture(1)]])
{
    out.write(in.read(gid), gid);
}
```

I filed feedback FB19530049.
Replies: 1 · Boosts: 0 · Views: 200 · Activity: Aug ’25
Operation not permitted loading image in SDL2 with Xcode
Hello, I am making a project in SDL, and with that I am using SDL_Image. I am doing all of this in Xcode. I've been able to initialize everything just fine, but issues spring up when I try to load an image. When I give the code a path to look for an image, I get this error:

Unable to load image! IMG_Error: Couldn't open [Insert image path here]: Operation not permitted

Keep in mind "Unable to load image" is a general error I put in the code should loading said image fail; the specific error, which I retrieved with IMG_GetError(), is what we really need to know. I've seen before that this might occur if a program does not have full disk access. Because of this, I've tried giving Xcode full disk access, but this didn't work and I still got the same error.
Replies: 1 · Boosts: 0 · Views: 721 · Activity: Jan ’25
CGContext PDF/A intents
```swift
let dic: [AnyHashable: Any] = [
    kCGPDFXRegistryName: "http://www.color.org" as CFString,
    kCGPDFXOutputConditionIdentifier: "FOGRA43" as CFString,
    kCGPDFContextOutputIntent: "GTS_PDFX" as CFString,
    kCGPDFXOutputIntentSubtype: "GTS_PDFX" as CFString,
    kCGPDFContextCreateLinearizedPDF: "" as CFString,
    kCGPDFContextCreatePDFA: "" as CFString,
    kCGPDFContextAuthor: "Placeholder" as CFString,
    kCGPDFContextCreator: "Placeholder" as CFString
]
```

Hello, I would now like to export my PDFs as PDF/A. In my opinion, Core Graphics has the right option for this. Unfortunately, the documentation does not show what value is required for 'kCGPDFContextCreatePDFA' or 'kCGPDFContextCreateLinearizedPDF'. What I have already tried: GTS_PDFA1, PDF/A-1, true as CFString. (Above is my CFDictionary; ...Author etc. work perfectly.) In the Finder you can see these two options, which I would also like to implement in my app. Thank you in advance!
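Not verified against the headers, but by analogy with other CGPDFContext auxiliary options, these two keys plausibly take boolean values rather than strings, so a first thing to try would be:

```swift
// Hedged sketch: pass booleans instead of empty strings. Whether these
// keys actually take CFBoolean values is an assumption to verify.
let auxiliaryInfo: [AnyHashable: Any] = [
    kCGPDFContextCreatePDFA: true,
    kCGPDFContextCreateLinearizedPDF: true,
    kCGPDFContextAuthor: "Placeholder" as CFString,
    kCGPDFContextCreator: "Placeholder" as CFString
]
```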
Replies: 1 · Boosts: 0 · Views: 162 · Activity: Jun ’25
Implementing Scalable Order-Independent Transparency (OIT) in Metal
Hi, Apple’s documentation on Order-Independent Transparency (OIT) describes an approach using image blocks, where an array of size 4 is allocated per fragment to store depth and color in a tile shading compute pass. However, when increasing the scene’s depth complexity by adding more overlapping quads, the OIT implementation fails due to the fixed array size. Is there a way to dynamically allocate storage for fragments based on actual depth complexity encountered during rasterization, rather than using a fixed-size array? Specifically, can an adaptive array of fragments be maintained and sorted by depth, where the size grows as needed instead of being limited to 4 entries? Any insights or alternative approaches would be greatly appreciated. Thank you!
Replies: 1 · Boosts: 0 · Views: 563 · Activity: Mar ’25