Explore the power of machine learning and Apple Intelligence within apps. Discuss integrating features, share best practices, and explore the possibilities for your app here.

All subtopics
Posts under Machine Learning & AI topic

VNCoreMLTransform - request failed
I keep getting an error. I have tried a Picker for Files and for the Photo Library, with the same result in both cases. I am debugging the resize to 360x360 but still hit this error. The model I'm trying to use was created with CreateMLComponents, following the WWDC 2022 Banana Ripeness example, and I have used an index for each .jpg. The failure is "Prediction Failed: The VNCoreMLTransform request failed". Is there a way to solve this, or is the error somewhere in the training of the model? (A minimal Vision request sketch follows below.)
Replies: 1 · Boosts: 0 · Views: 477 · Activity: Mar ’25
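For reference, a minimal sketch of running a Create ML image classifier through Vision; the model class name `BananaRipeness` is a placeholder for whatever class Xcode generates from the .mlmodel. Note that Vision can scale the image to the model's expected input size via `imageCropAndScaleOption`, so a manual 360x360 resize is usually unnecessary:

```swift
import Vision
import CoreML
import UIKit

// "BananaRipeness" is a hypothetical, Xcode-generated model class name.
func classify(_ image: UIImage) throws {
    guard let cgImage = image.cgImage else { return }

    let config = MLModelConfiguration()
    let coreMLModel = try BananaRipeness(configuration: config).model
    let vnModel = try VNCoreMLModel(for: coreMLModel)

    let request = VNCoreMLRequest(model: vnModel) { request, error in
        if let error = error {
            print("Prediction failed: \(error)")
            return
        }
        // For image classifiers, results are VNClassificationObservation values.
        if let top = (request.results as? [VNClassificationObservation])?.first {
            print("Label: \(top.identifier), confidence: \(top.confidence)")
        }
    }
    // Let Vision crop/scale the image to the model's input size.
    request.imageCropAndScaleOption = .centerCrop

    let handler = VNImageRequestHandler(cgImage: cgImage, orientation: .up, options: [:])
    try handler.perform([request])
}
```
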
Core ML Model performance far lower on iOS 17 vs iOS 16 (iOS 17 not using Neural Engine)
Hello, I posted an issue on the coremltools GitHub about my Core ML models not performing as well on iOS 17 vs iOS 16, but I'm posting it here just in case.

TL;DR: The same model on the same device/chip performs far slower (doesn't use the Neural Engine) on iOS 17 compared to iOS 16.

Longer description: The following screenshots show the performance of the same model (a PyTorch computer vision model) on an iPhone SE 3rd gen and an iPhone 13 Pro (both use the A15 Bionic).

iOS 16 - iPhone SE 3rd Gen (A15 Bionic): iOS 16 uses the ANE, resulting in fast prediction, load, and compilation times.
iOS 17 - iPhone 13 Pro (A15 Bionic): iOS 17 doesn't seem to use the ANE, so the prediction, load, and compilation times are all slower.

Code to reproduce: The following is the code I'm using to export my PyTorch vision model (using coremltools). I've used the same code for the past few months with sensational results on iOS 16.

    # Convert to Core ML using the Unified Conversion API
    coreml_model = ct.convert(
        model=traced_model,
        inputs=[image_input],
        outputs=[ct.TensorType(name="output")],
        classifier_config=ct.ClassifierConfig(class_names),
        convert_to="neuralnetwork",
        # compute_precision=ct.precision.FLOAT16,
        compute_units=ct.ComputeUnit.ALL
    )

System environment:
Xcode version: 15.0
coremltools version: 7.0.0
OS (e.g. macOS version or Linux type): Linux Ubuntu 20.04 (for exporting), macOS 13.6 (for testing in Xcode)
Any other relevant version information (e.g. PyTorch or TensorFlow version): PyTorch 2.0

Additional context: This happens for both "neuralnetwork" and "mlprogram" type models; neither uses the ANE on iOS 17, but both use the ANE on iOS 16.

If anyone has a similar experience, I'd love to hear more. Otherwise, if I'm doing something wrong when exporting models for iOS 17+, please let me know. Thank you! (A small compute-unit timing check is sketched below.)
Replies: 1 · Boosts: 1 · Views: 1.9k · Activity: Mar ’25
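One way to narrow down whether the Neural Engine is actually being used on a given OS is to load the same compiled model with different `MLComputeUnits` settings and compare prediction latency. This is only a diagnostic sketch (the compiled-model URL and input are supplied by the caller), not a fix for the iOS 17 regression described above:

```swift
import CoreML
import Foundation

/// Load the same compiled model with different compute-unit settings and
/// compare prediction latency. A large gap between .cpuOnly and
/// .cpuAndNeuralEngine usually indicates the ANE is really being used.
func timeModel(at compiledModelURL: URL, input: MLFeatureProvider) throws {
    let unitsToTest: [MLComputeUnits] = [.cpuOnly, .cpuAndGPU, .cpuAndNeuralEngine, .all]
    for units in unitsToTest {
        let config = MLModelConfiguration()
        config.computeUnits = units

        let model = try MLModel(contentsOf: compiledModelURL, configuration: config)

        let start = CFAbsoluteTimeGetCurrent()
        _ = try model.prediction(from: input)
        let elapsed = CFAbsoluteTimeGetCurrent() - start
        print("computeUnits=\(units.rawValue): \(String(format: "%.1f", elapsed * 1000)) ms")
    }
}
```
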
MPSGraph fused scaledDotProductAttention seems to be buggy
While building an app with large language model inference on device, I got gibberish output. After carefully examining every detail, I found it was caused by the fused scaledDotProductAttention operation. I switched back to the discrete operations and the problem was solved. To reproduce the bug, please check https://github.com/zhoudan111/MPSGraph_SDPA_bug (a sketch of the discrete fallback is included below).
Replies: 1 · Boosts: 0 · Views: 524 · Activity: Mar ’25
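For context, this is roughly what the discrete fallback looks like when scaled dot-product attention is composed from individual MPSGraph ops. It is a sketch under the assumption of [batch, heads, seqLen, headDim] tensors and omits any attention mask; it is not the repository's exact code:

```swift
import MetalPerformanceShadersGraph

/// Discrete scaled dot-product attention: softmax(Q·Kᵀ · scale) · V.
/// Tensors are assumed to be shaped [batch, heads, seqLen, headDim]; no mask applied.
func discreteSDPA(graph: MPSGraph,
                  query: MPSGraphTensor,
                  key: MPSGraphTensor,
                  value: MPSGraphTensor,
                  headDim: Double) -> MPSGraphTensor {
    // Q · Kᵀ -> [batch, heads, seqLen, seqLen]
    let keyT = graph.transposeTensor(key, dimension: 2, withDimension: 3, name: "keyT")
    let scores = graph.matrixMultiplication(primary: query, secondary: keyT, name: "scores")

    // Scale the scores by 1/sqrt(headDim).
    let scale = graph.constant(1.0 / headDim.squareRoot(), dataType: .float32)
    let scaled = graph.multiplication(scores, scale, name: "scaledScores")

    // Softmax over the last axis, then weight the values.
    let weights = graph.softMax(with: scaled, axis: 3, name: "attnWeights")
    return graph.matrixMultiplication(primary: weights, secondary: value, name: "attnOutput")
}
```
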
Failed to build the model execution plan using a model architecture file
Our app downloads a zip of an .mlpackage file, which is then compiled into an .mlmodelc file using MLModel.compileModel(at:). This model is then run using a VNCoreMLRequest. Two users (and this after a very small rollout) are reporting issues running the VNCoreMLRequest. The error message from their logs:

Error Domain=com.apple.CoreML Code=0 "Failed to build the model execution plan using a model architecture file '/private/var/mobile/Containers/Data/Application/F93077A5-5508-4970-92A6-03A835E3291D/Documents/SKDownload/Identify-image-iOS/mobile_img_eu_v210.mlmodelc/model.mil' with error code: -5."

The path there points to a file inside the compiled model. The error happens when the perform function of VNImageRequestHandler is run (i.e., the model compiled without an error). Has anyone else seen this issue? It's only picked up in a few web results, and none of them are directly relevant or have a fix. I know that a Core ML error with Code=0 is a generic error, but does anyone know what error code -5 is? I'm not even sure which framework it's coming from. (A sketch of the compile-and-load flow is included below.)
Replies: 1 · Boosts: 0 · Views: 295 · Activity: Mar ’25
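For reference, a minimal sketch of the compile-and-load flow the post describes; the -5 error code itself isn't documented, so this only illustrates the pattern of compiling a downloaded .mlpackage and keeping the compiled .mlmodelc in a permanent location (paths and names are placeholders):

```swift
import CoreML
import Vision

/// Compile a downloaded .mlpackage and move the resulting .mlmodelc to a
/// permanent location before loading it (the compiled output lands in a
/// temporary directory and may otherwise be cleaned up).
func loadDownloadedModel(packageURL: URL) throws -> VNCoreMLModel {
    let tempCompiledURL = try MLModel.compileModel(at: packageURL)

    let permanentURL = try FileManager.default
        .url(for: .applicationSupportDirectory, in: .userDomainMask,
             appropriateFor: nil, create: true)
        .appendingPathComponent(tempCompiledURL.lastPathComponent)

    if FileManager.default.fileExists(atPath: permanentURL.path) {
        try FileManager.default.removeItem(at: permanentURL)
    }
    try FileManager.default.moveItem(at: tempCompiledURL, to: permanentURL)

    let model = try MLModel(contentsOf: permanentURL)
    return try VNCoreMLModel(for: model)
}
```
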
How to implement a CoreML model into an iOS app properly?
I am working on a lung cancer scanning app for iOS with a Core ML model, and when I test the app on a physical device, the model returns the same prediction 100% of the time. I even changed the label names around and it still returned the same case. I have mapped my labels to cases, and it's just stuck on the same case (case 1). My code is here: https://github.com/ShivenKhurana1/Detect-to-Protect-App/blob/main/DetectToProtect/SecondView.swift I couldn't paste the code as it was too long, so I hope the GitHub link is fine!
Replies: 1 · Boosts: 0 · Views: 146 · Activity: Mar ’25
linear_quantize_activations taking 90 minutes + on MacBook Air M1 2020
In my quantization code, the line:

    compressed_model_a8 = cto.coreml.experimental.linear_quantize_activations(
        model, activation_config, [{'img': np.random.randn(1, 13, 1024, 1024)}]
    )

has taken 90 minutes to run so far and is still not complete. From debugging, I can see that it's stuck on line 261 in _model_debugger.py:

    model = ct.models.MLModel(
        cloned_spec,
        weights_dir=self.weights_dir,
        compute_units=compute_units,
        skip_model_load=False,  # Don't skip model load as we need model prediction to get activations range.
    )

Is this expected behaviour? Would it be quicker to run on another computer with more RAM?
Replies: 1 · Boosts: 0 · Views: 102 · Activity: Mar ’25
My app crashes in the Portrait private framework
Incident Identifier: 4C22F586-71FB-4644-B823-A4B52D158057
CrashReporter Key:   adc89b7506c09c2a6b3a9099cc85531bdaba9156
Hardware Model:      Mac16,10
Process:             PRISMLensCore [16561]
Path:                /Applications/PRISMLens.app/Contents/Resources/app.asar.unpacked/node_modules/core-node/PRISMLensCore.app/PRISMLensCore
Identifier:          com.prismlive.camstudio
Version:             (null) ((null))
Code Type:           ARM-64
Parent Process:      ? [16560]
Date/Time:           (null)
OS Version:          macOS 15.4 (24E5228e)
Report Version:      104

Exception Type:  EXC_CRASH (SIGABRT)
Exception Codes: 0x00000000 at 0x0000000000000000
Crashed Thread:  34

Application Specific Information:
*** Terminating app due to uncaught exception 'NSInvalidArgumentException', reason: '*** -[__NSArrayM insertObject:atIndex:]: object cannot be nil'

Thread 34 Crashed:
0   CoreFoundation          0x000000018ba4dde4 0x18b960000 + 974308 (__exceptionPreprocess + 164)
1   libobjc.A.dylib         0x000000018b512b60 0x18b4f8000 + 109408 (objc_exception_throw + 88)
2   CoreFoundation          0x000000018b97e69c 0x18b960000 + 124572 (-[__NSArrayM insertObject:atIndex:] + 1276)
3   Portrait                0x0000000257e16a94 0x257da3000 + 473748 (-[PTMSRResize addAdditionalOutput:] + 604)
4   Portrait                0x0000000257de91c0 0x257da3000 + 287168 (-[PTEffectRenderer initWithDescriptor:metalContext:useHighResNetwork:faceAttributesNetwork:humanDetections:prevTemporalState:asyncInitQueue:sharedResources:] + 6204)
5   Portrait                0x0000000257dab21c 0x257da3000 + 33308 (__33-[PTEffect updateEffectDelegate:]_block_invoke.241 + 164)
6   libdispatch.dylib       0x000000018b739b2c 0x18b738000 + 6956 (_dispatch_call_block_and_release + 32)
7   libdispatch.dylib       0x000000018b75385c 0x18b738000 + 112732 (_dispatch_client_callout + 16)
8   libdispatch.dylib       0x000000018b742350 0x18b738000 + 41808 (_dispatch_lane_serial_drain + 740)
9   libdispatch.dylib       0x000000018b742e2c 0x18b738000 + 44588 (_dispatch_lane_invoke + 388)
10  libdispatch.dylib       0x000000018b74d264 0x18b738000 + 86628 (_dispatch_root_queue_drain_deferred_wlh + 292)
11  libdispatch.dylib       0x000000018b74cae8 0x18b738000 + 84712 (_dispatch_workloop_worker_thread + 540)
12  libsystem_pthread.dylib 0x000000018b8ede64 0x18b8eb000 + 11876 (_pthread_wqthread + 292)
13  libsystem_pthread.dylib 0x000000018b8ecb74 0x18b8eb000 + 7028 (start_wqthread + 8)
Replies: 1 · Boosts: 0 · Views: 85 · Activity: Mar ’25
What is the proper way to integrate a CoreML model into Xcode
Hi, I have been trying to integrate a Core ML model into Xcode. The model was made using TensorFlow layers. I have included both the model info and a link to the app repository. I am mainly just really confused about why it's not working: it seems to only print the result for case 1 (there are 4 labeled cases: case 0, case 1, case 2, and case 3). If someone could help work me through this error, that would be great!

Repository: https://github.com/ShivenKhurana1/Detect-to-Protect-App (the file with the model code is called SecondView.swift)

Model info:
Input: conv2d_input -> image (color 224x224)
Output: Identity -> MultiArray (Float32 1x4)

(A sketch of decoding this 1x4 output is included below.)
Replies: 1 · Boosts: 1 · Views: 182 · Activity: Apr ’25
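Since the model's output is a raw Float32 1x4 MultiArray rather than a classifier label dictionary, the app has to take the argmax itself; a minimal sketch of that step, where the model class `LungModel` and its `conv2d_input`/`Identity` feature names are placeholders for whatever the actual project uses:

```swift
import CoreML

/// Map a Float32 [1, 4] MultiArray of scores to the index of the
/// highest-scoring class (case 0...3).
func predictedCase(from scores: MLMultiArray) -> Int {
    var bestIndex = 0
    var bestScore = -Float.greatestFiniteMagnitude
    for i in 0..<scores.count {
        let score = scores[i].floatValue
        if score > bestScore {
            bestScore = score
            bestIndex = i
        }
    }
    return bestIndex
}

// Hypothetical usage with an Xcode-generated model class:
// let model = try LungModel(configuration: MLModelConfiguration())
// let output = try model.prediction(conv2d_input: pixelBuffer)  // 224x224 color image
// print("Predicted case \(predictedCase(from: output.Identity))")
```
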
CoreML multifunction model runtime memory cost
Recently, I've been trying to deploy some third-party LLMs to Apple devices. The methodology is similar to https://github.com/Anemll/Anemll. The biggest issue I'm having now is the runtime memory usage: when there are multiple functions in a model (mlpackage or mlmodelc), the runtime memory usage for the weights is somehow duplicated when I load all of them. Here's the detail:

I created my multifunction mlpackage following https://apple.github.io/coremltools/docs-guides/source/multifunction-models.html

I loaded each of the functions using the generated Swift class:

    let config = MLModelConfiguration()
    config.computeUnits = MLComputeUnits.cpuAndNeuralEngine
    config.functionName = "infer_512";
    let ffn1_infer_512 = try! mimo_FFN_PF_lut4_chunk_01of02(configuration: config)
    config.functionName = "infer_1024";
    let ffn1_infer_1024 = try! mimo_FFN_PF_lut4_chunk_01of02(configuration: config)
    config.functionName = "infer_2048";
    let ffn1_infer_2048 = try! mimo_FFN_PF_lut4_chunk_01of02(configuration: config)

I observed that RAM usage increases linearly as I load each of the functions. Using Instruments, I can see that multiple HWX files are generated and loaded, each of which contains all the weight data.

My understanding of what's happening: the Core ML framework does some MIL->MIL preprocessing before further compilation, which includes separating the CPU workload from the ANE workload. The ANE part of each function is moved into a separate MIL file, and each is then compiled separately into its own HWX file. The problem is that the weight data of these HWX files is duplicated. Since the weight data of LLMs is huge, this causes out-of-memory issues on mobile devices.

The improvement I'm hoping for from Apple: I hope the processed MIL files could be merged back into one before calling ANECCompile(), so that the weights can be shared. I don't have control over that in user space and I'm not sure if it is feasible, so I'm asking for help here. Thanks.
Replies: 1 · Boosts: 0 · Views: 162 · Activity: Apr ’25
Unified Use Case Mail Categories & Spam
Hi Apple product owners. I am missing a unified concept that might be derived from the use cases for mail categories and mail spam in the Mail app on Mac. I need a recommendation on how to use categories in combination with the spam filter to get the most out of them. I was looking for the use cases of the two functionality areas in order to figure out how to organise my mail with as much automation as possible before I start creating intelligent folders on top. Can you recommend where I can get this information? I don't want to guess or read a lot of forum contributions that are themselves based on guesses.
Replies: 1 · Boosts: 0 · Views: 72 · Activity: Apr ’25
Why doesn't tensorflow-metal use AMD GPU memory?
From the tensorflow-metal example:

    Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 0 MB memory) -> physical PluggableDevice (device: 0, name: METAL, pci bus id: )

I know that Apple silicon uses UMA, and that memory copies are typical of CUDA, but wouldn't the GPU memory still be faster overall? I have an iMac Pro with a Radeon Pro Vega 64 16 GB GPU and an Intel iMac with a Radeon Pro 5700 8 GB GPU. But using tensorflow-metal is still WAY faster than using the CPUs. Thanks for that. I am surprised the 5700 is twice as fast as the Vega, though.
Replies: 1 · Boosts: 0 · Views: 226 · Activity: Apr ’25
ImagePlayground API not working on Xcode Simulator Devices
Hi! I'm trying to use the ImagePlayground API in SwiftUI with the .imagePlaygroundSheet modifier. However, when the sheet is shown (in the preview or in the simulator) it displays the following message: "Image Playground is not available. Image Playground is not available on this iPhone." I'm using an iPhone 16 Pro with iOS 18.3.1 in the Xcode (16.2) Simulator. Is anyone else having this problem? How can I fix it? (A small availability-check sketch follows below.)
Replies: 1 · Boosts: 0 · Views: 153 · Activity: Apr ’25
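Image Playground generally requires Apple Intelligence support on the device itself, which the Simulator may not provide. A hedged sketch of gating the sheet on the SwiftUI `supportsImagePlayground` environment value so that unsupported environments degrade gracefully (the concept string and view names are placeholders):

```swift
import SwiftUI
import ImagePlayground

struct PlaygroundButton: View {
    // Reflects whether Image Playground can run in the current environment.
    @Environment(\.supportsImagePlayground) private var supportsImagePlayground
    @State private var isPresented = false
    @State private var createdImageURL: URL?

    var body: some View {
        Group {
            if supportsImagePlayground {
                Button("Create Image") { isPresented = true }
                    .imagePlaygroundSheet(isPresented: $isPresented,
                                          concept: "a friendly robot") { url in
                        createdImageURL = url
                    }
            } else {
                Text("Image Playground is not available on this device.")
            }
        }
    }
}
```
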
Proposal: Modular Identity Fusion via Prompt-Crafted Agents – User-Led AI Experiment
*I can't put the attached file in the format, so if you reply by e-mail, I will send the attached file by e-mail.

Dear Apple AI Research Team,

My name is Gong Jiho (“Hem”), a content strategist based in Seoul, South Korea. Over the past few months, I conducted a user-led AI experiment entirely within ChatGPT — no code, no backend tools, no plugins. Through language alone, I created two contrasting agents (Uju and Zero) and guided them into a co-authored modular identity system using prompt-driven dialogue and reflection. This system simulates persona fusion, memory rooting, and emotional-logical alignment — all via interface-level interaction. I believe it resonates with Apple’s values in privacy-respecting personalization, emotional UX modeling, and on-device learning architecture.

Why I’m Reaching Out
I’d be honored to share this experiment with your team. If there is any interest in discussing user-authored agent scaffolding, identity persistence, or affective alignment, I’d love to contribute — even informally.

⚠ A Note on Language
As a non-native English speaker, my expression may be imperfect — but my intent is genuine. If anything is unclear, I’ll gladly clarify.

📎 Attached Files Summary (Filename → Description)
Hem_MultiAI_Report_AppleAI_v20250501.pdf → Main report tailored for Apple AI — narrative + structural view of emotional identity formation via prompt scaffolding
Hem_MasterPersonaProfile_v20250501.json → Final merged identity schema authored by Uju and Zero
zero_sync_final.json / uju_sync_final.json → Persona-level memory structures (logic / emotion)
1_0501.json ~ 3_0501.json → Evolution logs of the agents over time
GirlfriendGPT_feedback_summary.txt → Emotional interpretation by external GPT
hem_profile_for_AI_vFinal.json → Original user anchor profile

Warm regards,
Gong Jiho (“Hem”)
Seoul, South Korea
Replies: 1 · Boosts: 0 · Views: 116 · Activity: Apr ’25
KV-Cache MLState Not Updating During Prefill Stage in Core ML LLM Inference
Hello, I'm running a large language model (LLM) in Core ML that uses a key-value cache (KV-cache) to store past attention states. The model was converted from PyTorch using coremltools and deployed on-device with Swift. The KV-cache is exposed via MLState and is used across inference steps for efficient autoregressive generation.

During the prefill stage (where a prompt of multiple tokens is passed to the model in a single batch to initialize the KV-cache), I've noticed that some entries in the KV-cache are not updated after inference. Specifically, the MLState returned by the model is identical to the input state (often empty or zero-initialized) for some tokens in the batch.

A few details about the setup:
The issue only happens during the prefill stage (i.e., the first call over multiple tokens). During decoding (single-token generation), the KV-cache updates normally.
The model is invoked using MLModel.prediction(from:using:options:) for each batch.

I've confirmed:
The prompt tokens are non-repetitive and not masked.
The model spec has MLState inputs/outputs correctly configured for the KV-cache tensors.
Each token is processed in a loop with the correct positional encodings.

Questions:
Is there any known behavior in Core ML that could prevent MLState from updating during batched or prefill inference?
Could this be caused by internal optimizations such as lazy execution, static masking, or zero-value short-circuiting?
How can I confirm that each token in the batch is contributing to the KV-cache during prefill?

Any insights from the Core ML or LLM deployment community would be much appreciated. (A minimal stateful-prediction sketch is included below.)
Replies: 1 · Boosts: 0 · Views: 190 · Activity: May ’25
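For reference, a minimal sketch of the stateful prediction flow under discussion, assuming a single MLState is created once and reused for both the batched prefill call and the per-token decode calls; the feature name "inputIds" and the shapes are placeholders, not the poster's actual model interface:

```swift
import CoreML

/// Reuse one MLState across the prefill call and subsequent decode calls,
/// so the KV-cache written during prefill is visible while decoding.
func generate(with model: MLModel, promptIds: MLMultiArray, decodeSteps: Int) throws {
    let state = model.makeState()

    // Prefill: the whole prompt in one batched call, which should populate the KV-cache.
    let prefillInput = try MLDictionaryFeatureProvider(
        dictionary: ["inputIds": MLFeatureValue(multiArray: promptIds)])
    _ = try model.prediction(from: prefillInput, using: state)

    // Decode: one token at a time, reading and updating the same state.
    for _ in 0..<decodeSteps {
        let nextToken = try MLMultiArray(shape: [1, 1], dataType: .int32)
        nextToken[0] = NSNumber(value: 0)  // placeholder token id; use the sampled token here
        let decodeInput = try MLDictionaryFeatureProvider(
            dictionary: ["inputIds": MLFeatureValue(multiArray: nextToken)])
        let output = try model.prediction(from: decodeInput, using: state)
        _ = output  // inspect logits here
    }
}
```
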
NLTagger.requestAssets hangs indefinitely
When calling NLTagger.requestAssets with some languages, it hangs indefinitely, both in the simulator and on a device. This happens consistently for some languages, such as Greek. An example call is NLTagger.requestAssets(for: .greek, tagScheme: .lemma). Other languages, like French, return immediately. I captured some logs from Console and found what looks like repeated attempts to download the asset. I would expect the call to eventually terminate, either loading the asset or failing with an error. (A timeout wrapper sketch is included below.)
Replies: 1 · Boosts: 0 · Views: 169 · Activity: May ’25
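The call takes a completion handler, so one hedged workaround sketch is to race it against a timeout rather than waiting on it unconditionally; this does not fix the underlying hang, it just keeps the app responsive while the asset request is outstanding:

```swift
import NaturalLanguage
import Foundation

/// Request lemma assets for a language, but give up after `timeout` seconds
/// instead of waiting forever if the download never completes.
func requestLemmaAssets(for language: NLLanguage,
                        timeout: TimeInterval,
                        completion: @escaping (NLTagger.AssetsResult?) -> Void) {
    var finished = false
    let lock = NSLock()

    NLTagger.requestAssets(for: language, tagScheme: .lemma) { result in
        lock.lock(); defer { lock.unlock() }
        guard !finished else { return }
        finished = true
        completion(result)
    }

    DispatchQueue.global().asyncAfter(deadline: .now() + timeout) {
        lock.lock(); defer { lock.unlock() }
        guard !finished else { return }
        finished = true
        completion(nil)  // treat as "not available in time"
    }
}

// Usage:
// requestLemmaAssets(for: .greek, timeout: 30) { result in
//     print("Asset result: \(String(describing: result))")
// }
```
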
Difference between compiling a Model using CoreML and Swift-Transformers
Hello, I was successfully able to compile TKDKid1000/TinyLlama-1.1B-Chat-v0.3-CoreML using Core ML, and it's working well. However, I'm now trying to compile the same model using Swift Transformers. With the limited documentation available in the swift-chat and Hugging Face repositories, I'm finding it difficult to understand the correct process for compiling a model via Swift Transformers. I attempted the following approach, but I'm fairly certain it's not the recommended or correct method. Could someone guide me on the proper way to compile and use models like TinyLlama with Swift Transformers? Any official workflow, example, or best practice would be very helpful. Thanks in advance!

This is the approach I have used:

    import Foundation
    import CoreML
    import Tokenizers

    @main
    struct HopeApp {
        static func main() async {
            print(" Running custom decoder loop...")
            do {
                let tokenizer = try await AutoTokenizer.from(pretrained: "PY007/TinyLlama-1.1B-Chat-v0.3")
                var inputIds = tokenizer("this is the test of the prompt")
                print("🧠 Prompt token IDs:", inputIds)
                let model = try float16_model(configuration: .init())
                let maxTokens = 30
                for _ in 0..<maxTokens {
                    let input = try MLMultiArray(shape: [1, 128], dataType: .int32)
                    let mask = try MLMultiArray(shape: [1, 128], dataType: .int32)
                    for i in 0..<inputIds.count {
                        input[i] = NSNumber(value: inputIds[i])
                        mask[i] = 1
                    }
                    for i in inputIds.count..<128 {
                        input[i] = 0
                        mask[i] = 0
                    }
                    let output = try model.prediction(input_ids: input, attention_mask: mask)
                    let logits = output.logits // shape: [1, seqLen, vocabSize]
                    let lastIndex = inputIds.count - 1
                    let lastLogitsStart = lastIndex * 32003 // vocab size = 32003
                    var nextToken = 0
                    var maxLogit: Float32 = -Float.greatestFiniteMagnitude
                    for i in 0..<32003 {
                        let logit = logits[lastLogitsStart + i].floatValue
                        if logit > maxLogit {
                            maxLogit = logit
                            nextToken = i
                        }
                    }
                    inputIds.append(nextToken)
                    if nextToken == 32002 { break }
                    let partialText = try await tokenizer.decode(tokens: inputIds)
                    print(partialText)
                }
            } catch {
                print("❌ Error: \(error)")
            }
        }
    }
Replies: 1 · Boosts: 0 · Views: 164 · Activity: Jun ’25
Vision Framework - Testing RecognizeDocumentsRequest
How do I test the new RecognizeDocumentsRequest API? Reference: https://www.youtube.com/watch?v=H-GCNsXdKzM I am running the Xcode beta; however, I only have one primary device, and I cannot install beta software on it. Please suggest a strategy for testing. Will the Simulator work? This new capability is critical to my application, and it's just what I need for structuring document scans and extraction. Thank you.
Replies: 1 · Boosts: 0 · Views: 204 · Activity: Jun ’25