I'm developing a macOS application using the FoundationModels framework
(LanguageModelSession) and encountering issues with the content sanitizer
blocking legitimate text input.
**Issue Description:**
The content sanitizer is flagging text strings that contain certain
substrings, even when they represent legitimate technical content. For
example:
F_SEEL_SEX1S.wav (sE Electronics SEX1S microphone model)
Technical product identifiers
Serial numbers and version codes
**Broader Concern:**
The content sanitizer appears to be applying restrictions that seem
inappropriate for user-owned content. Even if a filename were something
like "human sex.wav", users should have the right to process their own
legitimate files on their own devices without content filtering
interference.
**Error Messages:**
SensitiveContentSettings: Sanitizer model found unsafe content in value
FoundationModels.LanguageModelSession.GenerationError error 2
**Questions:**
1. Is there a way to disable content sanitization for processing user-owned content?
2. What's the recommended approach for applications that need to handle arbitrary user text?
3. Are there APIs to process personal content without filtering restrictions?
**Environment:**
macOS 26.0
FoundationModels framework
LanguageModelSession
Any guidance would be appreciated.
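For reference, a minimal sketch of the call that triggers this (the helper name and prompt are illustrative, not the app's exact code):

import FoundationModels

// Illustrative helper, not the app's actual code. The guardrail rejection (error 2)
// surfaces from respond(to:) even for technically legitimate strings such as
// "F_SEEL_SEX1S.wav".
func describeFile(named filename: String) async {
    let session = LanguageModelSession()
    do {
        let response = try await session.respond(to: "Summarize the audio file named \(filename)")
        print(response.content)
    } catch let error as LanguageModelSession.GenerationError {
        print("Generation failed: \(error)")   // "Sanitizer model found unsafe content in value"
    } catch {
        print("Unexpected error: \(error)")
    }
}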
Environment
macOS 26
Xcode Version 26.0 beta 7 (17A5305k)
Simulator: iPhone 16 Pro
iOS: iOS 26
Problem
NLContextualEmbedding.load() fails with the following error
In simulator
Failed to load embedding from MIL representation: filesystem error: in create_directories: Permission denied ["/var/db/com.apple.naturallanguaged/com.apple.e5rt.e5bundlecache"]
filesystem error: in create_directories: Permission denied ["/var/db/com.apple.naturallanguaged/com.apple.e5rt.e5bundlecache"]
Failed to load embedding model 'mul_Latn' - '5C45D94E-BAB4-4927-94B6-8B5745C46289'
assetRequestFailed(Optional(Error Domain=NLNaturalLanguageErrorDomain Code=7 "Embedding model requires compilation" UserInfo={NSLocalizedDescription=Embedding model requires compilation}))
when running in a #Playground block
I'm new to this embedding model, so I'm not sure whether it's caused by my code or my environment.
Code snippet
import Foundation
import NaturalLanguage
import Playgrounds

#Playground {
    // Prefer initializing by script for broader coverage; returns NLContextualEmbedding?
    guard let embeddingModel = NLContextualEmbedding(script: .latin) else {
        print("Failed to create NLContextualEmbedding")
        return
    }
    print(embeddingModel.hasAvailableAssets)

    do {
        try embeddingModel.load()
        print("Model loaded")
    } catch {
        print("Failed to load model: \(error)")
    }
}
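One thing that may be worth trying before load() is explicitly requesting the embedding assets so the model can be downloaded and compiled first. This is a sketch from memory, intended to slot into the #Playground above; the requestAssets completion signature should be verified against the current SDK:

// Sketch: request the assets before load(), in case the simulator has not yet
// downloaded/compiled the 'mul_Latn' model. Completion signature is from memory.
embeddingModel.requestAssets { result, error in
    guard result == .available else {
        print("Assets not available: \(String(describing: error))")
        return
    }
    do {
        try embeddingModel.load()
        print("Model loaded after asset request")
    } catch {
        print("Still failed to load: \(error)")
    }
}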
Our app is downloading a zip of an .mlpackage file, which is then compiled into an .mlmodelc file using MLModel.compileModel(at:). This model is then run using a VNCoreMLRequest.
Two users – and this after a very small rollout – are reporting issues running the VNCoreMLRequest. The error message from their logs:
Error Domain=com.apple.CoreML Code=0 "Failed to build the model execution plan using a model architecture file '/private/var/mobile/Containers/Data/Application/F93077A5-5508-4970-92A6-03A835E3291D/Documents/SKDownload/Identify-image-iOS/mobile_img_eu_v210.mlmodelc/model.mil' with error code: -5."
The URL there points to a file inside the compiled model. The error happens when the perform function of VNImageRequestHandler is run (i.e., the model itself compiled without an error).
Has anyone else seen this issue? It only turns up in a few web results, and none of them are directly relevant or offer a fix.
I know that a Core ML error Code=0 is a generic error, but does anyone know what error code -5 is? I'm not even sure which framework it's coming from.
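For context, the flow looks roughly like this (a simplified sketch; file handling and error handling are trimmed, and the names are illustrative):

import CoreML
import Vision

// Simplified sketch of the download -> compile -> Vision flow described above.
// Paths and names are illustrative.
func classify(imageURL: URL, downloadedPackageURL: URL) throws {
    // Compile the downloaded .mlpackage into an .mlmodelc.
    let compiledURL = try MLModel.compileModel(at: downloadedPackageURL)
    let mlModel = try MLModel(contentsOf: compiledURL)
    let visionModel = try VNCoreMLModel(for: mlModel)

    let request = VNCoreMLRequest(model: visionModel)
    let handler = VNImageRequestHandler(url: imageURL)
    // The "-5" error is thrown from this perform call, after compilation succeeded.
    try handler.perform([request])
}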
Some of my users are experiencing crashes on instantiation of a CoreML model I've bundled with my app. I haven't been able to reproduce the crash on any of my devices. Crashes happen across all iOS 18 releases. Seems like something internal in CoreML is causing an issue.
Full stack trace:
6646631296fb42128ddc340b2d4322f7-symbolicated.crash
Topic: Machine Learning & AI
SubTopic: Core ML
Good morning all, has anyone encountered the issue of Siri reverting to its original user interface on iOS 26? I'm trying to figure out the cause. I've sent feedback via the Feedback app. Just seeing if anyone else has the same issue.
Hi everyone,
I am using Xcode 16.4 on macOS Sequoia 15.5 with Apple Intelligence turned on.
The following code gives the error message in the title:
import NaturalLanguage

@available(iOS 18.0, *)
func testSystemModel() {
    let model = SystemLanguageModel.default
    print(model)
}
What am I missing?
Hi, I just upgraded to macOS Tahoe Beta 2 and now I'm getting this error when I try to initialize my Foundation Models' session:
Error Resource (Local Sanitizer Asset) unavailable error.
import FoundationModels
import Playgrounds

#Playground {
    let session = LanguageModelSession()
    do {
        let result = try await session.respond(to: "Tell me 3 colors")
        print(result.content)
    } catch {
        print("Error", error)
    }
}
I couldn't find any resource guiding me on how to solve this. Any help/workaround?
Thank you!
Topic: Machine Learning & AI
SubTopic: Foundation Models
Hi everyone,
I'm developing an iOS app using Foundation Models and I've hit a critical limitation that I believe affects many developers and millions of users.
The Issue
Foundation Models requires the device system language to be one of the supported languages. If a user has their device set to an unsupported language (Catalan, Dutch, Swedish, Polish, Danish, Norwegian, Finnish, Czech, Hungarian, Greek, Romanian, and many others), SystemLanguageModel.isSupported returns false and the framework is completely unavailable.
Why This Is Problematic
Scenario: A Catalan user has their iPhone in Catalan (native language). They want to use an AI chat app in Spanish or English (languages they speak fluently).
Current situation:
❌ Foundation Models: Completely unavailable
✅ OpenAI GPT-4: Works perfectly
✅ Anthropic Claude: Works perfectly
✅ Any cloud-based AI: Works perfectly
The user must choose between:
Keep device in Catalan → Cannot use Foundation Models at all
Change entire device to Spanish → Can use Foundation Models but terrible UX
Impact
This affects:
Millions of users in regions where unsupported languages are official
Multilingual users who prefer their device in their native language but can comfortably interact with AI in English/Spanish
Developers who cannot deploy Foundation Models-based apps in these markets
Privacy-conscious users who are ironically forced to use cloud AI instead of on-device AI
What We Need
One of these solutions would solve the problem:
Option 1: Per-app language override (preferred)
// Proposed API
let session = try await LanguageModelSession(preferredLanguage: "es-ES")
Option 2: Faster rollout of additional languages (particularly EU languages)
Option 3: Allow fallback to user-selected supported language when system language is unsupported
Technical Details
Current behavior:
// Device in Catalan
let isAvailable = SystemLanguageModel.isSupported
// Returns false
// No way to override or specify alternative language
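For comparison, the only check available today is the all-or-nothing availability API; a minimal sketch (the handling of the unavailable reason is abbreviated):

import FoundationModels

// Sketch of the check apps are limited to today: availability is all-or-nothing,
// with no parameter to request a different output language.
let model = SystemLanguageModel.default
switch model.availability {
case .available:
    // Usable, but only in the device's system language.
    break
case .unavailable(let reason):
    // A device set to Catalan lands here; the app has to fall back to a cloud model
    // or disable the feature entirely.
    print("Foundation Models unavailable: \(reason)")
}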
Why This Matters
Apple Intelligence and Foundation Models are amazing for privacy and performance. But this language restriction makes the most privacy-focused AI solution less accessible than cloud alternatives. This seems contrary to Apple's values of accessibility and user choice.
Questions for the Community
Has anyone else encountered this limitation?
Are there any workarounds I'm missing?
Has anyone successfully filed feedback about this? (Please share the FB number so we can reference it.)
Are there any sessions or labs where this has been discussed?
Thanks for reading. I'd love to hear if others are facing this and how you're handling it.
I've encountered a few cases where the answer gets "stuck" (I am now on beta 6).
This is an example.
Topic: Machine Learning & AI
SubTopic: Foundation Models
I am working on a Core ML image classification model in Xcode, which takes a 299x299 image and attempts to classify hand-drawn sketches. The model was trained using Create ML and works perfectly when tested in the Create ML preview. However, when used in the Xcode application, the classification results are incorrect.
I have already verified that the image is correctly resized to 299x299 pixels, matching the input size of the model. The classification always returns incorrect results, even when using images that were correctly classified during training. I originally used kCVPixelFormatType_32ARGB, but I read that CoreML typically expects BGRA format. I updated my conversion function to use kCVPixelFormatType_32BGRA and CGImageAlphaInfo.premultipliedLast, but the issue persists. This makes me suspect that either the pixel format is still incorrect or that something went wrong during the .mlmodelc compilation.
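For comparison, here is a minimal sketch of the conversion I would expect for a 32BGRA input (the helper name and the use of UIImage are illustrative); note that BGRA normally pairs CGImageAlphaInfo.premultipliedFirst with byteOrder32Little rather than premultipliedLast:

import UIKit
import CoreVideo

// Illustrative helper (not the poster's actual code): draw into a 299x299
// 32BGRA pixel buffer. For BGRA, the CGContext uses premultipliedFirst plus
// 32-bit little-endian byte order, not premultipliedLast.
func pixelBuffer(from image: UIImage, width: Int = 299, height: Int = 299) -> CVPixelBuffer? {
    let attrs = [kCVPixelBufferCGImageCompatibilityKey: true,
                 kCVPixelBufferCGBitmapContextCompatibilityKey: true] as CFDictionary
    var buffer: CVPixelBuffer?
    guard CVPixelBufferCreate(kCFAllocatorDefault, width, height,
                              kCVPixelFormatType_32BGRA, attrs, &buffer) == kCVReturnSuccess,
          let pixelBuffer = buffer,
          let cgImage = image.cgImage else { return nil }

    CVPixelBufferLockBaseAddress(pixelBuffer, [])
    defer { CVPixelBufferUnlockBaseAddress(pixelBuffer, []) }

    guard let context = CGContext(data: CVPixelBufferGetBaseAddress(pixelBuffer),
                                  width: width, height: height,
                                  bitsPerComponent: 8,
                                  bytesPerRow: CVPixelBufferGetBytesPerRow(pixelBuffer),
                                  space: CGColorSpaceCreateDeviceRGB(),
                                  bitmapInfo: CGImageAlphaInfo.premultipliedFirst.rawValue
                                      | CGBitmapInfo.byteOrder32Little.rawValue) else { return nil }

    context.draw(cgImage, in: CGRect(x: 0, y: 0, width: width, height: height))
    return pixelBuffer
}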
Topic: Machine Learning & AI
SubTopic: Create ML
Hi
We're on TensorFlow 2.20, which now has support for Python 3.13 (finally!). tensorflow-metal still only supports 2.18, which is over a year old.
When can we expect to see support in tensorflow-metal for TF 2.20 (or later!)?
I bought a Mac thinking I would be able to get great performance from the M-series processors, but here I am using my CPU for my ML projects.
If it's taking so long to release it, why not open source it so the community can keep it more up to date?
cheers
Matt
It appears that there is a size limit when training a Tabular Classification model in Create ML. When the training data is small, training completes smoothly in the expected amount of time. However, as the data volume increases, the following issue occurs: initially the training indicates that it is in progress, but after approximately 24 hours it is automatically terminated within about an hour. I am certain that this is not a manual termination by myself or others, but rather an automatic termination by the machine. This issue persists despite numerous attempts, and the only message displayed is "Training Canceled." I would appreciate it if someone could explain the reason behind this behavior and provide a solution. Thank you for your assistance.
I am working on a lung cancer scanning app for iOS with a Core ML model, and when I test the app on a physical device, the model returns the same prediction 100% of the time. I even changed the names around and it still resulted in the same case. I have listed my labels as cases, and it's just stuck on the same case (case 1).
My code is below:
https://github.com/ShivenKhurana1/Detect-to-Protect-App/blob/main/DetectToProtect/SecondView.swift
I couldn't add the code as it was too long, so I hope the GitHub link is fine!
Can't import data in Create ML word tagging project
The training data is 100% correct, I guarantee it:
I mean, look, this one has only one entry in it:
[
  {
    "tokens": ["a", "august", "gruters"],
    "labels": ["BUILDER", "BUILDER", "BUILDER"]
  }
]
Topic: Machine Learning & AI
SubTopic: Create ML
In my quantization code, the line:
compressed_model_a8 = cto.coreml.experimental.linear_quantize_activations(
    model, activation_config, [{'img': np.random.randn(1, 13, 1024, 1024)}]
)
has taken 90 minutes to run so far and is still not completed. From debugging, I can see that the line it's stuck on is line 261 in _model_debugger.py:
model = ct.models.MLModel(
    cloned_spec,
    weights_dir=self.weights_dir,
    compute_units=compute_units,
    skip_model_load=False,  # Don't skip model load as we need model prediction to get activations range.
)
Is this expected behaviour? Would it be quicker to run on another computer with more RAM?
At WWDC25 we launched a new type of Lab event for the developer community - Group Labs. A Group Lab is a panel Q&A designed for a large audience of developers. Group Labs are a unique opportunity for the community to submit questions directly to a panel of Apple engineers and designers. Here are the highlights from the WWDC25 Group Lab for Machine Learning and AI Frameworks.
What are you most excited about in the Foundation Models framework?
The Foundation Models framework provides access to an on-device Large Language Model (LLM), enabling entirely on-device processing for intelligent features. This allows you to build features such as personalized search suggestions and dynamic NPC generation in games. The combination of guided generation and streaming capabilities is particularly exciting for creating delightful animations and features with reliable output. The seamless integration with SwiftUI and the new design material Liquid Glass is also a major advantage.
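As a rough illustration of guided generation (the NPCProfile type and prompt are invented here, and respond(to:generating:) is the signature as recalled, so check it against the current SDK):

import FoundationModels

// Rough sketch of guided generation; the type and prompt are illustrative only.
@Generable
struct NPCProfile {
    let name: String
    let backstory: String
}

func makeNPC() async throws -> NPCProfile {
    let session = LanguageModelSession()
    let response = try await session.respond(
        to: "Invent a friendly innkeeper for a small fantasy village.",
        generating: NPCProfile.self
    )
    return response.content
}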
When should I still bring my own LLM via CoreML?
It's generally recommended to first explore Apple's built-in system models and APIs, including the Foundation Models framework, as they are highly optimized for Apple devices and cover a wide range of use cases. However, Core ML is still valuable if you need more control or choice over the specific model being deployed, such as customizing existing system models or augmenting prompts. Core ML provides the tools to get these models on-device, but you are responsible for model distribution and updates.
Should I migrate PyTorch code to MLX?
MLX is an open-source, general-purpose machine learning framework designed for Apple Silicon from the ground up. It offers a familiar API, similar to PyTorch, and supports C, C++, Python, and Swift. MLX emphasizes unified memory, a key feature of Apple Silicon hardware, which can improve performance. It's recommended to try MLX and see if its programming model and features better suit your application's needs. MLX shines when working with state-of-the-art, larger models.
Can I test Foundation Models in Xcode simulator or device?
Yes, you can use the Xcode simulator to test Foundation Models use cases. However, your Mac must be running macOS Tahoe. You can test on a physical iPhone running iOS 26 by connecting it to your Mac and running Playgrounds or live previews directly on the device.
Which on-device models will be supported? any open source models?
The Foundation Models framework currently supports Apple's first-party models only. This allows for platform-wide optimizations, improving battery life and reducing latency. While Core ML can be used to integrate open-source models, it's generally recommended to first explore the built-in system models and APIs provided by Apple, including those in the Vision, Natural Language, and Speech frameworks, as they are highly optimized for Apple devices. For frontier models, MLX can run very large models.
How often will the Foundational Model be updated? How do we test for stability when the model is updated?
The Foundation Model will be updated in sync with operating system updates. You can test your app against new model versions during the beta period by downloading the beta OS and running your app. It is highly recommended to create an "eval set" of golden prompts and responses to evaluate the performance of your features as the model changes or as you tweak your prompts. Report any unsatisfactory or satisfactory cases using Feedback Assistant.
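A bare-bones version of such an eval harness could look like this (the GoldenCase type and the keyword-based pass/fail check are purely illustrative, not an Apple API):

import Foundation
import FoundationModels

// Illustrative golden-prompt case; not part of any Apple API.
struct GoldenCase {
    let prompt: String
    let expectedKeywords: [String]
}

// Run each golden prompt through a fresh session and flag outputs that miss expected keywords.
func runEvalSet(_ cases: [GoldenCase]) async {
    for testCase in cases {
        let session = LanguageModelSession()
        do {
            let output = try await session.respond(to: testCase.prompt).content
            let missing = testCase.expectedKeywords.filter { !output.localizedCaseInsensitiveContains($0) }
            print(missing.isEmpty ? "PASS: \(testCase.prompt)"
                                  : "CHECK (\(missing.joined(separator: ", "))): \(testCase.prompt)")
        } catch {
            print("ERROR: \(testCase.prompt) -> \(error)")
        }
    }
}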
Which on-device model/API can I use to extract text data from images such as: nutrition labels, ingredient lists, cashier receipts, etc? Thank you.
The Vision framework offers the RecognizeDocumentRequest which is specifically designed for these use cases. It not only recognizes text in images but also provides the structure of the document, such as rows in a receipt or the layout of a nutrition label. It can also identify data like phone numbers, addresses, and prices.
What is the context window for the model? What are max tokens in and max tokens out?
The context window for the Foundation Model is 4,096 tokens. The split between input and output tokens is flexible. For example, if you input 4,000 tokens, you'll have 96 tokens remaining for the output. The API takes in text, converting it to tokens under the hood. When estimating token count, a good rule of thumb is 3-4 characters per token for languages like English, and 1 character per token for languages like Japanese or Chinese. Handle potential errors gracefully by asking for shorter prompts or starting a new session if the token limit is exceeded.
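One simple way to act on that advice (a sketch; the catch is deliberately generic because the exact GenerationError case isn't named above, so a real implementation would match the specific context-window error):

import FoundationModels

// Sketch of "start a new session if the token limit is exceeded": retry once in a
// fresh session, which drops the accumulated transcript.
func respondWithRetry(prompt: String, in session: LanguageModelSession) async throws -> (reply: String, session: LanguageModelSession) {
    do {
        let reply = try await session.respond(to: prompt).content
        return (reply, session)
    } catch {
        // Assumption: treat the failure as a context-window overflow for brevity.
        let fresh = LanguageModelSession()
        let reply = try await fresh.respond(to: prompt).content
        return (reply, fresh)
    }
}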
Is there a rate limit for Foundation Models API that is limited by power or temperature condition on the iPhone?
Yes, there are rate limits, particularly when your app is in the background. A budget is allocated for background app usage, but exceeding it will result in rate-limiting errors. In the foreground, there is no rate limit unless the device is under heavy load (e.g., camera open, game mode). The system dynamically balances performance, battery life, and thermal conditions, which can affect the token throughput. Use appropriate quality of service settings for your tasks (e.g., background priority for background work) to help the system manage resources effectively.
Do the foundation models support languages other than English?
Yes, the on-device Foundation Model is multilingual and supports all languages supported by Apple Intelligence. To get the model to output in a specific language, prompt it with instructions indicating the user's preferred language using the locale API (e.g., "The user's preferred language is en-US"). Putting the instructions in English, but then putting the user prompt in the desired output language is a recommended practice.
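For example (a sketch; deriving the language tag from Locale.current is an assumption, and the instructions themselves stay in English as recommended):

import Foundation
import FoundationModels

// Sketch: state the user's preferred language in English-language instructions.
let preferredLanguage = Locale.current.identifier   // e.g. "en_US"; assumption for illustration
let session = LanguageModelSession(
    instructions: "You are a helpful assistant. The user's preferred language is \(preferredLanguage)."
)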
Are larger server-based models available through Foundation Models?
No, the Foundation Models API currently only provides access to the on-device Large Language Model at the core of Apple Intelligence. It does not support server-side models. On-device models are preferred for privacy and for performance reasons.
Is it possible to run Retrieval-Augmented Generation (RAG) using the Foundation Models framework?
Yes, it is possible to run RAG on-device, but the Foundation Models framework does not include a built-in embedding model. You'll need to use a separate database to store vectors and implement nearest neighbor or cosine distance searches. The Natural Language framework offers simple word and sentence embeddings that can be used. Consider using a combination of Foundation Models and Core ML, using Core ML for your embedding model.
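A minimal sketch of that retrieval step, using the Natural Language sentence embeddings mentioned above with a brute-force cosine search (a plain array stands in for the vector database):

import Foundation
import NaturalLanguage

// Brute-force nearest-neighbor search over sentence embeddings; a real app would
// store the vectors in a proper database instead of recomputing them per query.
func nearestDocuments(to query: String, in documents: [String], topK: Int = 3) -> [String] {
    guard let embedding = NLEmbedding.sentenceEmbedding(for: .english),
          let queryVector = embedding.vector(for: query) else { return [] }

    func cosine(_ a: [Double], _ b: [Double]) -> Double {
        let dot = zip(a, b).map(*).reduce(0, +)
        let norm = { (v: [Double]) in sqrt(v.map { $0 * $0 }.reduce(0, +)) }
        return dot / (norm(a) * norm(b))
    }

    return documents
        .compactMap { doc in embedding.vector(for: doc).map { (doc, cosine(queryVector, $0)) } }
        .sorted { $0.1 > $1.1 }
        .prefix(topK)
        .map { $0.0 }
}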
Topic: Machine Learning & AI
SubTopic: General
This is my code:
switch SystemLanguageModel.default.availability {
case .available:
    ContentView()
        .popover(isPresented: $showSettings) {
            SettingsView().presentationCompactAdaptation(.popover)
        }
case .unavailable(.modelNotReady):
    ContentUnavailableView("Apple Intelligence is unavailable",
                           systemImage: "apple.intelligence.badge.xmark",
                           description: Text("Please come back later."))
case .unavailable(.appleIntelligenceNotEnabled):
    ContentUnavailableView("Apple Intelligence is unavailable",
                           systemImage: "apple.intelligence.badge.xmark",
                           description: Text("Please turn on Apple Intelligence."))
case .unavailable(.deviceNotEligible):
    ContentUnavailableView("Apple Intelligence is unavailable",
                           systemImage: "apple.intelligence.badge.xmark",
                           description: Text("This device is not eligible for Apple Intelligence."))
case .unavailable:
    ContentUnavailableView("Apple Intelligence is unavailable",
                           systemImage: "apple.intelligence.badge.xmark")
}
When I switch off Apple Intelligence, I expected "Please turn on Apple Intelligence.", but instead I get "Please come back later."
This seems to be the wrong error?
Topic: Machine Learning & AI
SubTopic: Foundation Models
I’m building an app that generates images based on text input from a specific text field. However, I’m encountering a problem:
For short prompts like "a cat and a dog", the entire string is sent to the Image Playground, even when I use the extracted method. For longer inputs, the behavior is inconsistent. Sometimes it extracts keywords correctly, but other times it doesn’t extract anything at all.
Since my app relies on generating images based on the extracted keywords, this inconsistency negatively impacts the user experience in my app. How can I make sure that keywords are always extracted from the input string?
Button("Generate", systemImage: "apple.intelligence") {
isPresented = true
}
.imagePlaygroundSheet(isPresented: $isPresented, concepts: [ImagePlaygroundConcept.extracted(from: text, title: textTitle)]) { url in
imageURL = url
}
Topic: Machine Learning & AI
SubTopic: Apple Intelligence
iOS 18.2 includes a new feature called Visual Intelligence. If I hold down the Camera Control on my iPhone, I can take a photo of an object and use Google to look up items similar to what I've photographed.
Is there a way to programmatically open this interface within my app? If so, can I see which result the user selects?
Incident Identifier: 4C22F586-71FB-4644-B823-A4B52D158057
CrashReporter Key: adc89b7506c09c2a6b3a9099cc85531bdaba9156
Hardware Model: Mac16,10
Process: PRISMLensCore [16561]
Path: /Applications/PRISMLens.app/Contents/Resources/app.asar.unpacked/node_modules/core-node/PRISMLensCore.app/PRISMLensCore
Identifier: com.prismlive.camstudio
Version: (null) ((null))
Code Type: ARM-64
Parent Process: ? [16560]
Date/Time: (null)
OS Version: macOS 15.4 (24E5228e)
Report Version: 104
Exception Type: EXC_CRASH (SIGABRT)
Exception Codes: 0x00000000 at 0x0000000000000000
Crashed Thread: 34
Application Specific Information:
*** Terminating app due to uncaught exception 'NSInvalidArgumentException', reason: '*** -[__NSArrayM insertObject:atIndex:]: object cannot be nil'
Thread 34 Crashed:
0 CoreFoundation 0x000000018ba4dde4 0x18b960000 + 974308 (__exceptionPreprocess + 164)
1 libobjc.A.dylib 0x000000018b512b60 0x18b4f8000 + 109408 (objc_exception_throw + 88)
2 CoreFoundation 0x000000018b97e69c 0x18b960000 + 124572 (-[__NSArrayM insertObject:atIndex:] + 1276)
3 Portrait 0x0000000257e16a94 0x257da3000 + 473748 (-[PTMSRResize addAdditionalOutput:] + 604)
4 Portrait 0x0000000257de91c0 0x257da3000 + 287168 (-[PTEffectRenderer initWithDescriptor:metalContext:useHighResNetwork:faceAttributesNetwork:humanDetections:prevTemporalState:asyncInitQueue:sharedResources:] + 6204)
5 Portrait 0x0000000257dab21c 0x257da3000 + 33308 (__33-[PTEffect updateEffectDelegate:]_block_invoke.241 + 164)
6 libdispatch.dylib 0x000000018b739b2c 0x18b738000 + 6956 (_dispatch_call_block_and_release + 32)
7 libdispatch.dylib 0x000000018b75385c 0x18b738000 + 112732 (_dispatch_client_callout + 16)
8 libdispatch.dylib 0x000000018b742350 0x18b738000 + 41808 (_dispatch_lane_serial_drain + 740)
9 libdispatch.dylib 0x000000018b742e2c 0x18b738000 + 44588 (_dispatch_lane_invoke + 388)
10 libdispatch.dylib 0x000000018b74d264 0x18b738000 + 86628 (_dispatch_root_queue_drain_deferred_wlh + 292)
11 libdispatch.dylib 0x000000018b74cae8 0x18b738000 + 84712 (_dispatch_workloop_worker_thread + 540)
12 libsystem_pthread.dylib 0x000000018b8ede64 0x18b8eb000 + 11876 (_pthread_wqthread + 292)
13 libsystem_pthread.dylib 0x000000018b8ecb74 0x18b8eb000 + 7028 (start_wqthread + 8)
Topic: Machine Learning & AI
SubTopic: General