Explore the power of machine learning and Apple Intelligence within apps. Discuss integrating features, share best practices, and explore the possibilities for your app here.

All subtopics
Posts under Machine Learning & AI topic

Post · Replies · Boosts · Views · Activity

DockKit .track() has no effect using VNDetectFaceRectanglesRequest
Hi, I'm testing DockKit with a very simple setup: I use VNDetectFaceRectanglesRequest to detect a face and then call dockAccessory.track(...) using the detected bounding box. The stand is correctly docked (state == .docked) and dockAccessory is valid. I'm calling .track(...) with a single observation and valid CameraInformation (including size, device, orientation, etc.). No errors are thrown. To monitor this, I added a logging utility: track(...) is being called 10–30 times per second, as recommended in the documentation.

However, the stand does not move at all; there is no visible reaction to the tracking calls. Is there anything I'm missing or doing wrong? Is VNDetectFaceRectanglesRequest supported for DockKit tracking, or are there hidden requirements? I would really appreciate any help or pointers. Thanks! Here is my complete code:

```swift
extension VideoFeedViewController: AVCaptureVideoDataOutputSampleBufferDelegate {
    func captureOutput(_ output: AVCaptureOutput,
                       didOutput sampleBuffer: CMSampleBuffer,
                       from connection: AVCaptureConnection) {
        guard let frame = CMSampleBufferGetImageBuffer(sampleBuffer) else { return }
        detectFace(image: frame)

        func detectFace(image: CVPixelBuffer) {
            let faceDetectionRequest = VNDetectFaceRectanglesRequest { vnRequest, error in
                guard let results = vnRequest.results as? [VNFaceObservation] else { return }
                guard let observation = results.first else { return }
                let boundingBoxHeight = observation.boundingBox.size.height * 100

                #if canImport(DockKit)
                if let dockAccessory = self.dockAccessory {
                    Task {
                        try? await trackObservation(
                            observation.boundingBox,
                            dockAccessory,
                            frame,
                            sampleBuffer
                        )
                    }
                }
                #endif
            }

            let imageResultHandler = VNImageRequestHandler(cvPixelBuffer: image, orientation: .up)
            try? imageResultHandler.perform([faceDetectionRequest])

            func combineBoundingBoxes(_ box1: CGRect, _ box2: CGRect) -> CGRect {
                let minX = min(box1.minX, box2.minX)
                let minY = min(box1.minY, box2.minY)
                let maxX = max(box1.maxX, box2.maxX)
                let maxY = max(box1.maxY, box2.maxY)
                return CGRect(x: minX, y: minY, width: maxX - minX, height: maxY - minY)
            }

            #if canImport(DockKit)
            func trackObservation(_ boundingBox: CGRect,
                                  _ dockAccessory: DockAccessory,
                                  _ pixelBuffer: CVPixelBuffer,
                                  _ cmSampleBuffer: CMSampleBuffer) throws {
                // Count the call
                TrackMonitor.shared.trackCalled()

                let invertedBoundingBox = CGRect(
                    x: boundingBox.origin.x,
                    y: 1.0 - boundingBox.origin.y - boundingBox.height,
                    width: boundingBox.width,
                    height: boundingBox.height
                )

                guard let device = captureDevice else {
                    fatalError("Camera not available")
                }

                let size = CGSize(width: Double(CVPixelBufferGetWidth(pixelBuffer)),
                                  height: Double(CVPixelBufferGetHeight(pixelBuffer)))

                var cameraIntrinsics: matrix_float3x3? = nil
                if let cameraIntrinsicsUnwrapped = CMGetAttachment(
                    sampleBuffer,
                    key: kCMSampleBufferAttachmentKey_CameraIntrinsicMatrix,
                    attachmentModeOut: nil
                ) as? Data {
                    cameraIntrinsics = cameraIntrinsicsUnwrapped.withUnsafeBytes {
                        $0.load(as: matrix_float3x3.self)
                    }
                }

                Task {
                    let orientation = getCameraOrientation()
                    let cameraInfo = DockAccessory.CameraInformation(
                        captureDevice: device.deviceType,
                        cameraPosition: device.position,
                        orientation: orientation,
                        cameraIntrinsics: cameraIntrinsics,
                        referenceDimensions: size
                    )
                    let observation = DockAccessory.Observation(
                        identifier: 0,
                        type: .object,
                        rect: invertedBoundingBox
                    )
                    let observations = [observation]

                    guard let image = CMSampleBufferGetImageBuffer(sampleBuffer) else {
                        print("no image")
                        return
                    }

                    do {
                        try await dockAccessory.track(observations, cameraInformation: cameraInfo)
                    } catch {
                        print(error)
                    }
                }
            }
            #endif

            func clearDrawings() {
                boundingBoxLayer?.removeFromSuperlayer()
                boundingBoxSizeLayer?.removeFromSuperlayer()
            }
        }
    }
}

@MainActor
private func getCameraOrientation() -> DockAccessory.CameraOrientation {
    switch UIDevice.current.orientation {
    case .portrait: return .portrait
    case .portraitUpsideDown: return .portraitUpsideDown
    case .landscapeRight: return .landscapeRight
    case .landscapeLeft: return .landscapeLeft
    case .faceDown: return .faceDown
    case .faceUp: return .faceUp
    default: return .corrected
    }
}
```
Replies: 1 · Boosts: 1 · Views: 324 · Activity: 2w
Is Image Playground On-device + Private Cloud?
Apple's Image Playground primarily performs image generation on-device, but can use secure Private Cloud Compute for more complex requests that require larger models.

Private Cloud Compute (PCC): For more complex tasks that require greater computational power than the device can provide, Image Playground leverages Apple's Private Cloud Compute. This system extends the privacy and security of the device to the cloud:
- Secure Environment: PCC runs on Apple silicon servers and uses a secure enclave to protect data, ensuring requests are processed in a verified, secure environment.
- No Data Storage: Data is never stored or made accessible to Apple when using PCC; it is used only to fulfill the specific request.
- Independent Verification: Independent experts are able to inspect the code running on these servers to verify Apple's privacy promises.
Replies: 3 · Boosts: 0 · Views: 704 · Activity: 2w
tensorflow-metal error
I'm using Python 3.9.6, tensorflow 2.20.0, and tensorflow-metal 1.2.0. When I try to run `import tensorflow as tf`, it gives:

```
Traceback (most recent call last):
  File "/Users/haoduoyu/Code/demo.py", line 1, in <module>
    import tensorflow as tf
  File "/Users/haoduoyu/Code/test/lib/python3.9/site-packages/tensorflow/__init__.py", line 438, in <module>
    _ll.load_library(_plugin_dir)
  File "/Users/haoduoyu/Code/test/lib/python3.9/site-packages/tensorflow/python/framework/load_library.py", line 151, in load_library
    py_tf.TF_LoadLibrary(lib)
tensorflow.python.framework.errors_impl.NotFoundError: dlopen(/Users/haoduoyu/Code/test/lib/python3.9/site-packages/tensorflow-plugins/libmetal_plugin.dylib, 0x0006): Library not loaded: @rpath/_pywrap_tensorflow_internal.so
  Referenced from: <8B62586B-B082-3113-93AB-FD766A9960AE> /Users/haoduoyu/Code/test/lib/python3.9/site-packages/tensorflow-plugins/libmetal_plugin.dylib
  Reason: tried: '/Users/haoduoyu/Code/test/lib/python3.9/site-packages/tensorflow-plugins/../_solib_darwin_arm64/_U@local_Uconfig_Utf_S_S_C_Upywrap_Utensorflow_Uinternal___Uexternal_Slocal_Uconfig_Utf/_pywrap_tensorflow_internal.so' (no such file), '/Users/haoduoyu/Code/test/lib/python3.9/site-packages/tensorflow-plugins/../_solib_darwin_arm64/_U@local_Uconfig_Utf_S_S_C_Upywrap_Utensorflow_Uinternal___Uexternal_Slocal_Uconfig_Utf/_pywrap_tensorflow_internal.so' (no such file)
```

As soon as I uninstall tensorflow-metal, nothing goes wrong. How can I fix this problem?
Replies: 0 · Boosts: 1 · Views: 77 · Activity: 2w
Core ML .mlpackage not found in bundle despite target membership and Copy Bundle Resources
Hi everyone, I’m working on an iOS app that uses a Core ML model to run live image recognition, and I’ve run into a persistent issue: the .mlpackage is not being turned into a Swift class. Xcode shows an error in the code, and under carDetection.mlpackage it says that the model class has not been generated yet.

What I’ve tried:
- Verified Target Membership is checked for carDetectionModel.mlpackage
- Confirmed the file is listed under Copy Bundle Resources (and removed from Compile Sources)
- Cleaned the build folder (Shift + Cmd + K) and rebuilt
- Renamed and re-added the .mlpackage file
- Restarted Xcode and re-added the file
- Logged bundle contents at runtime, but the .mlpackage still doesn’t appear

The .mlpackage is in Copy Bundle Resources and not in Compile Sources, and I just don't know why a Swift class is not being generated for it. Could someone please give me some guidance on how to resolve this issue? Sorry if my question is a bit naive; I'm pretty new to iOS app development.
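For reference, a minimal loading sketch, assuming the .mlpackage is moved back under Compile Sources so Xcode compiles it into the bundle and generates a model class; the carDetectionModel class name below is inferred from the file name in the post, and the Vision wiring is illustrative rather than the poster's code:

```swift
import CoreML
import Vision

// Assumes carDetectionModel.mlpackage is a compile-source member, so Xcode generates
// a `carDetectionModel` class and ships the compiled .mlmodelc in the app bundle.
func makeCarDetectionRequest() throws -> VNCoreMLRequest {
    let configuration = MLModelConfiguration()
    let coreMLModel = try carDetectionModel(configuration: configuration).model
    let visionModel = try VNCoreMLModel(for: coreMLModel)
    return VNCoreMLRequest(model: visionModel) { request, error in
        // Handle detection results here, e.g. as [VNRecognizedObjectObservation].
        print(request.results ?? [])
    }
}
```

In general, Xcode only generates the Swift class and the compiled model when the .mlpackage is in Compile Sources; placing it only in Copy Bundle Resources copies the raw package without generating any class.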
Replies: 3 · Boosts: 0 · Views: 469 · Activity: 3w
Apple's PCC + Foundation Models
Hi, I am developing an iOS application that utilizes Apple’s Foundation Models to perform certain summarization tasks. I would like to understand whether user data is transferred to Private Cloud Compute (PCC) in cases where the computation cannot be performed entirely on-device. This information is critical for our internal security and compliance reviews. I would appreciate your clarification on this matter. Thank you.
Replies: 1 · Boosts: 0 · Views: 315 · Activity: 3w
Python 3.13
Hello, are there any plans to compile a Python 3.13 version of tensorflow-metal? I just got my new Mac mini, and the version of Python that brew installs automatically is 3.13. If I were in a hurry I could get Python 3.12 installed and use the corresponding tensorflow-metal version, but I'm not in a hurry. Many thanks, Alan
Replies: 4 · Boosts: 4 · Views: 1.5k · Activity: 3w
Help with dates in Foundation Model custom Tool
I have an app that stores lots of data that is of interest to the user. Analogies would be the Photos apps or the Health app. I'm trying to use the Foundation Models framework to allow users to surface information they find interesting using natural language, for example, "Tell me about the widgets from yesterday" or "Tell me about the widgets for the last 3 days". Specifically, I'm trying to get a date range passed down to the Tool so that I can pull the relevant widgets from the database in the call function. What is the right way to set up the Arguments to get at a date range?
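One possible shape for this, sketched under the assumption that the Tool's @Generable Arguments carry the range as two ISO-8601 date strings that are parsed inside call(...); WidgetQueryTool, the guide wording, and the fetchWidgets stub are hypothetical, and the exact return type of call(...) should be checked against the current FoundationModels documentation:

```swift
import Foundation
import FoundationModels

// Hypothetical tool whose arguments carry a date range as ISO-8601 strings.
struct WidgetQueryTool: Tool {
    let name = "queryWidgets"
    let description = "Looks up the user's widgets for a given date range."

    @Generable
    struct Arguments {
        @Guide(description: "Start of the range as an ISO-8601 date, for example 2025-11-20")
        var startDate: String

        @Guide(description: "End of the range as an ISO-8601 date, for example 2025-11-23")
        var endDate: String
    }

    func call(arguments: Arguments) async throws -> String {
        let formatter = ISO8601DateFormatter()
        formatter.formatOptions = [.withFullDate]
        guard let start = formatter.date(from: arguments.startDate),
              let end = formatter.date(from: arguments.endDate),
              start <= end else {
            return "Could not parse the requested date range."
        }
        // Stand-in for the app's real database query.
        let widgets = try await fetchWidgets(in: start...end)
        return "Found \(widgets.count) widgets between \(arguments.startDate) and \(arguments.endDate)."
    }

    // Hypothetical stub; the real app would query its store here.
    private func fetchWidgets(in range: ClosedRange<Date>) async throws -> [String] { [] }
}
```

The idea is that the @Guide descriptions steer the model to resolve phrases like "yesterday" or "the last 3 days" into concrete dates before the tool is called, and the tool then converts them to Date values for the database query.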
Replies: 3 · Boosts: 0 · Views: 653 · Activity: 4w
jax-metal failing due to incompatibility with jax 0.5.1 or later.
Hello, I am interested in using jax-metal to train ML models on Apple Silicon. I understand this is experimental. After installing jax-metal according to https://developer.apple.com/metal/jax/, my Python code fails with the following error:

```
JaxRuntimeError: UNKNOWN: -:0:0: error: unknown attribute code: 22
-:0:0: note: in bytecode version 6 produced by: StableHLO_v1.12.1
```

My issue is identical to the one reported at https://github.com/jax-ml/jax/issues/26968#issuecomment-2733120325, and is fixed by pinning to jax-metal 0.1.1, jax 0.5.0, and jaxlib 0.5.0. Thank you!
Replies: 0 · Boosts: 0 · Views: 440 · Activity: Dec ’25
Getting Core ML to run inference on already allocated GPU buffers
I am running some experiments with WebGPU using the wgpu crate in Rust. I have some buffers already allocated on the GPU. Is it possible to use those existing buffers directly as inputs to a predict call in Core ML? I want to avoid GPU-to-CPU download time as much as possible. Or are there other ways to achieve something like this? Is this only possible using the latest Tensor object that came out with Metal 4?
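Not an answer for wgpu specifically, but one pattern on Apple platforms for avoiding extra CPU round-trips is to hand Core ML an IOSurface-backed, Metal-compatible CVPixelBuffer that a Metal pipeline can also write into. A rough sketch, where the "input" feature name is hypothetical:

```swift
import CoreML
import CoreVideo

// Creates a Metal-compatible, IOSurface-backed pixel buffer that can be shared
// between a Metal pipeline and a Core ML image input without an explicit CPU copy.
func makeSharedInputBuffer(width: Int, height: Int) -> CVPixelBuffer? {
    let attributes: [CFString: Any] = [
        kCVPixelBufferMetalCompatibilityKey: true,
        kCVPixelBufferIOSurfacePropertiesKey: [:] as [CFString: Any]
    ]
    var pixelBuffer: CVPixelBuffer?
    let status = CVPixelBufferCreate(kCFAllocatorDefault, width, height,
                                     kCVPixelFormatType_32BGRA,
                                     attributes as CFDictionary, &pixelBuffer)
    return status == kCVReturnSuccess ? pixelBuffer : nil
}

// "input" is a placeholder for the model's actual image feature name.
func predict(with model: MLModel, pixelBuffer: CVPixelBuffer) throws -> MLFeatureProvider {
    let features = try MLDictionaryFeatureProvider(
        dictionary: ["input": MLFeatureValue(pixelBuffer: pixelBuffer)]
    )
    return try model.prediction(from: features)
}
```

Whether a WebGPU/wgpu buffer can be bridged into such a surface is a separate question.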
Replies: 0 · Boosts: 0 · Views: 445 · Activity: Nov ’25
tensorflow-metal fails with tensorflow > 2.18.1
Also submitted as feedback (ID: FB20612561). tensorflow-metal fails on tensorflow versions above 2.18.1, but works fine on tensorflow 2.18.1. In a new Python 3.12 virtual environment:

```
pip install tensorflow
pip install tensorflow-metal
python -c "import tensorflow as tf"
```

prints the error:

```
Traceback (most recent call last):
  File "<string>", line 1, in <module>
  File "/Users//pt/venv/lib/python3.12/site-packages/tensorflow/__init__.py", line 438, in <module>
    _ll.load_library(_plugin_dir)
  File "/Users//pt/venv/lib/python3.12/site-packages/tensorflow/python/framework/load_library.py", line 151, in load_library
    py_tf.TF_LoadLibrary(lib)
tensorflow.python.framework.errors_impl.NotFoundError: dlopen(/Users//pt/venv/lib/python3.12/site-packages/tensorflow-plugins/libmetal_plugin.dylib, 0x0006): Library not loaded: @rpath/_pywrap_tensorflow_internal.so
  Referenced from: <8B62586B-B082-3113-93AB-FD766A9960AE> /Users//pt/venv/lib/python3.12/site-packages/tensorflow-plugins/libmetal_plugin.dylib
  Reason: tried: '/Users//pt/venv/lib/python3.12/site-packages/tensorflow-plugins/../_solib_darwin_arm64/_U@local_Uconfig_Utf_S_S_C_Upywrap_Utensorflow_Uinternal___Uexternal_Slocal_Uconfig_Utf/_pywrap_tensorflow_internal.so' (no such file), '/Users//pt/venv/lib/python3.12/site-packages/tensorflow-plugins/../_solib_darwin_arm64/_U@local_Uconfig_Utf_S_S_C_Upywrap_Utensorflow_Uinternal___Uexternal_Slocal_Uconfig_Utf/_pywrap_tensorflow_internal.so' (no such file), '/opt/homebrew/lib/_pywrap_tensorflow_internal.so' (no such file), '/System/Volumes/Preboot/Cryptexes/OS/opt/homebrew/lib/_pywrap_tensorflow_internal.so' (no such file)
```
Replies: 6 · Boosts: 4 · Views: 2k · Activity: Nov ’25
Inquiry Regarding Siri–AI Integration Capabilities
Hello, I’m seeking clarification on whether Apple provides any framework or API that enables deep integration between Siri and advanced AI assistants (such as ChatGPT), including system-level functions like voice interaction, navigation, cross-platform syncing, and operational access similar to Siri’s own capabilities. If no such option exists today, I would appreciate guidance on the recommended path or approved third-party solutions for building a unified, voice-first experience across Apple’s ecosystem. Thank you for your time and insight.
Replies: 0 · Boosts: 0 · Views: 52 · Activity: Nov ’25
App stuck “In Review” for several days after AI-policy rejection — need clarification
Hello everyone, I’m looking for guidance regarding my app review timeline, as things seem unusually delayed compared to previous submissions. My iOS app was rejected on November 19th due to AI-related policy questions. I immediately responded to the reviewer with detailed explanations covering:
- Model used (Gemini Flash 2.0 / 2.5 Lite)
- How the AI only generates neutral, non-directive reflective questions
- How the system prevents any diagnosis, therapy-like behavior or recommendations
- Crisis-handling limitations
- Safety safeguards at generation and UI level
- Internal red-team testing and results
- Data retention, privacy, and non-use of data for model training

After sending the requested information, I resubmitted the build on November 19th at 14:40. Since then:
- November 20th (7:30): status changed to In Review.
- November 21st, 22nd, 23rd, 24th, 25th: no movement, still In Review.
- My open case on App Store Connect is still pending without updates.

Because of the previous rejection, I expected a short delay, but this is now 5 days total and 3 business days with no progress, which feels longer than usual for my past submissions. I’m not sure whether:
- my app is in a secondary review queue due to the AI-related rejection,
- the reviewer is waiting for internal clarification,
- or something is stuck and needs to be escalated.

I don’t want to resubmit a new build unless necessary, since that would restart the queue. Could someone from the community (or Apple, if possible) confirm whether this waiting time is normal after an AI-policy rejection? And is there anything I should do besides waiting, for example contacting Developer Support again or requesting a follow-up? Thank you very much for your help. I appreciate any insight from others who have experienced similar delays.
Replies: 0 · Boosts: 0 · Views: 637 · Activity: Nov ’25
CoreML regression between macOS 26.0.1 and macOS 26.1 Beta causing scrambled tensor outputs
We’ve encountered what appears to be a CoreML regression between macOS 26.0.1 and macOS 26.1 Beta. In macOS 26.0.1, CoreML models run and produce correct results. However, in macOS 26.1 Beta, the same models produce scrambled or corrupted outputs, suggesting that tensor memory is being read or written incorrectly. The behavior is consistent with a low-level stride or pointer arithmetic issue, for example using 16-bit strides on 32-bit data or other mismatches in tensor layout handling.

Reproduction:
1. Install ON1 Photo RAW 2026 or ON1 Resize 2026 on macOS 26.0.1.
2. Use the newest Highest Quality resize model, which is Stable Diffusion–based and runs through CoreML.
3. Observe correct, high-quality results.
4. Upgrade to macOS 26.1 Beta and run the same operation again. The output becomes visually scrambled or corrupted.

We are also seeing similar issues with another Stable Diffusion UNet model that previously worked correctly on macOS 26.0.1. This suggests the regression may affect multiple diffusion-style architectures, likely due to a change in CoreML’s tensor stride, layout computation, or memory alignment between these versions.

Notes:
- The affected models are exported using standard CoreML conversion pipelines.
- No custom operators or third-party CoreML runtime layers are used.
- The issue reproduces consistently across multiple machines.

It would be helpful to know if there were changes to CoreML’s tensor layout, precision handling, or MLCompute backend between macOS 26.0.1 and 26.1 Beta, or if this is a known regression in the current beta.
Replies: 5 · Boosts: 3 · Views: 1.7k · Activity: Nov ’25
FoundationModel, context length, and testing
I am working on an app that uses FoundationModels to process web pages, and I am looking for ways to filter the input to fit within the token limits. I have unit tests, UI tests, and the app running on an iPad in the simulator. It appears that the different configurations of the test environment affect the token limits; that is, the same input in a unit test and a UI test will hit different token limits. Is this correct? Or is this an artifact of my test tooling?
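One way to make the limit observable in each environment, rather than inferring it from test behavior, is to catch the context-window error explicitly. A sketch, assuming the LanguageModelSession.GenerationError.exceededContextWindowSize case described in the FoundationModels documentation; summarizePage(_:) and the instructions text are illustrative:

```swift
import FoundationModels

// Tries to summarize prefiltered page text in a fresh session and reports whether
// the input exceeded the context window. Names and instructions are hypothetical.
func summarizePage(_ pageText: String) async -> String? {
    let session = LanguageModelSession(instructions: "Summarize the following web page text.")
    do {
        let response = try await session.respond(to: pageText)
        return response.content
    } catch let error as LanguageModelSession.GenerationError {
        if case .exceededContextWindowSize = error {
            // The input did not fit; the caller could retry with a shorter, filtered excerpt.
            print("Exceeded context window for input of \(pageText.count) characters")
        }
        return nil
    } catch {
        return nil
    }
}
```

Logging which inputs trip this error in the unit-test, UI-test, and simulator runs would show whether the limit itself differs or whether the test harness is altering the input.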
Replies: 5 · Boosts: 0 · Views: 945 · Activity: Nov ’25
Huge discrepancy in prediction confidence between PyTorch and Core ML example
I am following this tutorial: https://apple.github.io/coremltools/docs-guides/source/convert-a-torchvision-model-from-pytorch.html I have obtained similar results using the Python code. However, when I view the model in Xcode, the preview's prediction confidence percentage is way off. I suspect this is due to the output of the model, which is already a percentage, and Xcode multiplying it by 100 again, leading to this result. Please give me any feedback on how to fix this, thank you.
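One way to check that suspicion outside the Xcode preview is to run the converted model through Vision and print the raw confidences, which Vision normalizes to 0...1; "ConvertedModel" below is a placeholder for the generated class of the converted torchvision model:

```swift
import CoreML
import Vision

// Prints the top raw classification confidences so they can be compared with the
// PyTorch output; multiply by 100 only once when displaying a percentage.
func printTopPredictions(for cgImage: CGImage) throws {
    // "ConvertedModel" is a placeholder for the generated model class.
    let visionModel = try VNCoreMLModel(for: ConvertedModel(configuration: MLModelConfiguration()).model)
    let request = VNCoreMLRequest(model: visionModel) { request, _ in
        guard let results = request.results as? [VNClassificationObservation] else { return }
        for observation in results.prefix(5) {
            print(observation.identifier, observation.confidence) // confidence is in 0...1
        }
    }
    try VNImageRequestHandler(cgImage: cgImage).perform([request])
}
```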
Replies: 0 · Boosts: 0 · Views: 141 · Activity: Nov ’25
Do App Intent Domains work with Siri already?
Hi, guys. I'm writing about Apple Intelligence and I reached the point where I have to explain App Intent Domains (https://developer.apple.com/documentation/AppIntents/app-intent-domains), but I noticed there is a note explaining that these services are not available with Siri. I tried the example provided by Apple at https://developer.apple.com/documentation/AppIntents/making-your-app-s-functionality-available-to-siri and I can only make the intents work from the Shortcuts app, not from Siri. Is this correct? Are App Intent Domains still not available with Siri? Thanks
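For context, a plain App Intent like the sketch below is what currently shows up in the Shortcuts app (and can be run by invoking the shortcut's name through Siri); whether the schema-based App Intent Domains are surfaced directly by Siri is exactly what the documentation note in the post is about. OpenWidgetIntent is a hypothetical name:

```swift
import AppIntents

// Minimal intent that appears in Shortcuts and can be triggered by its shortcut name.
struct OpenWidgetIntent: AppIntent {
    static var title: LocalizedStringResource = "Open Widget"
    static var openAppWhenRun: Bool = true

    func perform() async throws -> some IntentResult {
        // Navigate to the relevant screen in the app here.
        return .result()
    }
}
```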
Replies: 0 · Boosts: 0 · Views: 379 · Activity: Nov ’25
videotoolbox superresolution
Hello, I'm using the VideoToolbox super-resolution API on macOS 26: https://developer.apple.com/documentation/videotoolbox/vtsuperresolutionscalerconfiguration/downloadconfigurationmodel(completionhandler:)?language=objc. When using Swift it works fine; when using Objective-C, I get an error when downloading the model with downloadConfigurationModelWithCompletionHandler:

```
[Auto] MA-auto{_failedLockContent} | failure reported by server | error:[com.apple.MobileAssetError.AutoAsset:MissingReference(6111)]
[Auto] MA-auto{_failedLockContent} | failure reported by server | error:[com.apple.MobileAssetError.AutoAsset:UnderlyingError(6107)_1_com.apple.MobileAssetError.Download:47]
Download completion handler called with error: The operation couldn't be completed. (VTFrameProcessorErrorDomain error -19743.)
```
Replies: 3 · Boosts: 1 · Views: 734 · Activity: Nov ’25