Hi everyone,
I am using Xcode 16.4 on macOS Sequoia 15.5 with Apple Intelligence turned on.
The following code gives the error message in the title:
import NaturalLanguage

@available(iOS 18.0, *)
func testSystemModel() {
    let model = SystemLanguageModel.default
    print(model)
}
What am I missing?
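For comparison, a minimal sketch of the same check built against the FoundationModels framework, which is where SystemLanguageModel is declared; this assumes the title error is about SystemLanguageModel not being found in scope. FoundationModels requires the iOS 26 / macOS 26 SDK, which ships with Xcode 26, not Xcode 16.4.

import FoundationModels  // SystemLanguageModel lives here, not in NaturalLanguage

// Assumption: Foundation Models needs the iOS 26 / macOS 26 SDK to build.
@available(iOS 26.0, macOS 26.0, *)
func checkSystemModel() {
    let model = SystemLanguageModel.default
    print(model.availability)
}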
Foundation Models
Discuss the Foundation Models framework, which provides access to Apple’s on-device large language model that powers Apple Intelligence to help you perform intelligent tasks specific to your app.
I'm seeing this error a lot in the console log of my iPhone 15 Pro (Apple Intelligence enabled):
com.apple.modelcatalog.catalog sync: connection error during call: Error Domain=NSCocoaErrorDomain Code=4099 "The connection to service named com.apple.modelcatalog.catalog was invalidated: failed at lookup with error 159 - Sandbox restriction." UserInfo={NSDebugDescription=The connection to service named com.apple.modelcatalog.catalog was invalidated: failed at lookup with error 159 - Sandbox restriction.} reached max num connection attempts: 1
Are there entitlements / permissions I need to enable in Xcode that I forgot to do?
Code example
Here's how I'm initializing the language model session:
private func setupLanguageModelSession() {
    if #available(iOS 26.0, *) {
        let instructions = """
        my instructions
        """
        do {
            languageModelSession = try LanguageModelSession(instructions: instructions)
            print("Foundation Models language model session initialized")
        } catch {
            print("Error creating language model session: \(error)")
            languageModelSession = nil
        }
    } else {
        print("Device does not support Foundation Models (requires iOS 26.0+)")
        languageModelSession = nil
    }
}
When I try to run visionOS 26 beta 2 on my device, the app crashes on launch:
dyld[904]: Symbol not found: _$s16FoundationModels10TranscriptV7entriesACSayAC5EntryOG_tcfC
Referenced from: <A71932DD-53EB-39E2-9733-32E9D961D186> /private/var/containers/Bundle/Application/53866099-99B1-4BBD-8C94-CD022646EB5D/VisionPets.app/VisionPets.debug.dylib
Expected in: <F68A7984-6B48-3958-A48D-E9F541868C62> /System/Library/Frameworks/FoundationModels.framework/FoundationModels
Symbol not found: _$s16FoundationModels10TranscriptV7entriesACSayAC5EntryOG_tcfC
Referenced from: <A71932DD-53EB-39E2-9733-32E9D961D186> /private/var/containers/Bundle/Application/53866099-99B1-4BBD-8C94-CD022646EB5D/VisionPets.app/VisionPets.debug.dylib
Expected in: <F68A7984-6B48-3958-A48D-E9F541868C62> /System/Library/Frameworks/FoundationModels.framework/FoundationModels
dyld config: DYLD_LIBRARY_PATH=/usr/lib/system/introspection DYLD_INSERT_LIBRARIES=/usr/lib/libLogRedirect.dylib:/usr/lib/libBacktraceRecording.dylib:/usr/lib/libMainThreadChecker.dylib:/usr/lib/libViewDebuggerSupport.dylib:/System/Library/PrivateFrameworks/GPUToolsCapture.framework/GPUToolsCapture
Message from debugger: Terminated due to signal 6
Topic: Machine Learning & AI
SubTopic: Foundation Models
It seems like there was an undocumented change that made the Transcript.init(entries: [Transcript.Entry]) initializer private, which broke my application; it relies on (manual) reconstruction of Transcript entries.
It worked fine on beta 1; on beta 2 there's this error:
dyld[72381]: Symbol not found: _$s16FoundationModels10TranscriptV7entriesACSayAC5EntryOG_tcfC
Referenced from: <44342398-591C-3850-9889-87C9458E1440> /Users/mika/experiments/apple-on-device-ai/fm
Expected in: <66A793F6-CB22-3D1D-A560-D1BD5B109B0D> /System/Library/Frameworks/FoundationModels.framework/Versions/A/FoundationModels
Is this part of an API transition? If so, Apple, please update your documentation.
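For context, a minimal sketch (assumed, not from the post) of the reconstruction pattern that the missing Transcript.init(entries:) symbol corresponds to; savedEntries is a hypothetical value persisted elsewhere.

import FoundationModels

// Hypothetical pattern: rebuild a Transcript from previously stored entries and
// resume a session from it. Transcript(entries:) is the initializer the missing
// symbol refers to, and it no longer links on beta 2.
func restoreSession(from savedEntries: [Transcript.Entry]) -> LanguageModelSession {
    let transcript = Transcript(entries: savedEntries)
    return LanguageModelSession(transcript: transcript)
}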
Topic: Machine Learning & AI
SubTopic: Foundation Models
I am using a contact tool to help get contacts from my address book, but the model isn't invoking my tool's call method. I even tried with a simple tool; the outcome is the same: my simple tool is not being invoked.
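For reference, a minimal sketch of a tool definition and how it is attached to a session, written against the beta-era Tool/ToolOutput API; the names, guide text, and instructions below are illustrative, not from the post. The model only invokes a tool when it decides the prompt calls for it, so instructions that never mention the tool's purpose can also result in it never being called.

import FoundationModels

// Illustrative tool: name, description, and arguments are placeholders.
struct FindContactTool: Tool {
    let name = "findContact"
    let description = "Looks up a contact in the user's address book by name."

    @Generable
    struct Arguments {
        @Guide(description: "The name of the contact to look up")
        let contactName: String
    }

    func call(arguments: Arguments) async throws -> ToolOutput {
        // A real implementation would query the Contacts framework here.
        return ToolOutput("No contact named \(arguments.contactName) was found.")
    }
}

// Tools must be supplied when the session is created.
let session = LanguageModelSession(
    tools: [FindContactTool()],
    instructions: "Use the findContact tool whenever the user asks about a contact."
)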
Topic: Machine Learning & AI
SubTopic: Foundation Models
For example:
I have a list of to-dos, each with a unique id (a GUID). I want to feed them to the LLM and have the model rewrite the items so they start with an action verb.
I'd like to get them back and identify which rewritten item corresponds to which original item. I obviously can't compare the text, as it has changed.
I've tried passing the original GUIDs in with each to-do, but the extra GUID characters pollute the input and confuse the model.
I've tried numbering them in order and adding an originalSortOrder field to my Generable type, but it doesn't work reliably.
Any suggestions?
I could do them one at a time, but I also have a use case where I'm asking for them to be organized in sections, and while I've instructed the model not to rename anything, it still happens. It's just all very nondeterministic.
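For what it's worth, a minimal sketch of the "carry an index through" shape described above; the type and field names are illustrative, and as noted, the model does not always copy the index back reliably.

import FoundationModels

// Illustrative output shape: the app sends each to-do with its position in the
// list, then maps originalSortOrder back to the GUID of the item it sent at
// that position, keeping the GUIDs out of the prompt entirely.
@Generable
struct RewrittenToDo {
    @Guide(description: "The position of the original to-do in the list, copied unchanged")
    let originalSortOrder: Int

    @Guide(description: "The to-do text, rewritten to start with an action verb")
    let text: String
}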
With respond() methods, the foundation model works well enough. With streamResponse() methods, the responses are very repetitive, verbose, and messy.
My app, which uses the foundation model, takes more than 500 MB of memory on an iPad Pro when running from Xcode. Devices supporting Apple Intelligence have at least 8 GB of memory. Should Apple use a bigger model (3 to 4 GB of memory) for better streaming responses?
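For reference, a minimal sketch of the two call styles being compared; the prompt is only an example.

import FoundationModels

func compareCallStyles() async throws {
    let session = LanguageModelSession()

    // One-shot: the full response arrives at once.
    let response = try await session.respond(to: "Suggest a name for a hiking app")
    print(response.content)

    // Streaming: partial snapshots arrive as the model generates them.
    for try await partial in session.streamResponse(to: "Suggest a name for a hiking app") {
        print(partial)
    }
}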
Topic: Machine Learning & AI
SubTopic: Foundation Models
When I initialize a session with an existing transcript using this initializer:
public convenience init(model: SystemLanguageModel = .default, guardrails: LanguageModelSession.Guardrails = .default, tools: [any Tool] = [], transcript: Transcript)
The tools get ignored. I noticed that when doing this, the model never uses the tools. When inspecting the transcript, I can see that the instructions entry does not have any tools available to it.
I tried this with both transcripts that already include an instructions entry and ones that don't; both yield the same result.
Is this the intended behavior / am I missing something here?
Topic: Machine Learning & AI
SubTopic: Foundation Models
Hey everyone
I'm Manish Mehta, Field CTO at Centific. I recently read Apple's white paper, The Illusion of Thinking, and it got me thinking about the current state of AI reasoning. Who here has read it?
The paper highlights how LLMs often rely on pattern recognition rather than genuine understanding. When faced with complex tasks, their performance can degrade significantly.
I was just thinking that to move beyond this problem, we need to explore approaches that combine Deeper Reasoning Architectures for true cognitive capability with Deep Human Partnership to guide AI toward better judgment and understanding.
The first part means fundamentally rewiring AI to reason. This involves advancing deeper architectures like World Models, which can build internal simulations to understand real-world scenarios, and Neurosymbolic systems, which combine neural networks with symbolic reasoning for deeper self-verification.
Additionally, we need to look at deep human partnership and scalable oversight. An AI cannot learn certain things from data alone; it lacks real-world judgment. Among other things, deep domain-expert human partners are needed to instill this wisdom, validate the AI's entire reasoning process, build its ethical guardrails, and act as skilled adversaries to find hidden flaws before they can cause harm.
What do you all think? Is this focus on a deeper partnership between advanced AI reasoning and deep human judgment the right path forward?
Agree? Disagree?
Thanks
Topic: Machine Learning & AI
SubTopic: Foundation Models
I'm new to Swift and was hoping the Playground would support loading adapters. When I tried, I got a permissions error; I'm thinking it's because the adapter isn't in the project and Playgrounds don't like going outside the project?
A tutorial and some sample code would be helpful.
Also some benchmarks on how long it's expected to take. Selfishly I'm on an M2 Mac Mini.
Topic: Machine Learning & AI
SubTopic: Foundation Models
Hi all,
I’m encountering an issue when trying to run Apple Foundation Models in a blank project targeting iOS 26.
Below are the details:
Xcode: Latest version with iOS 26 SDK
macOS: macOS 26 Tahoe (installed on main disk)
Mac: 16” MacBook Pro with M2 Pro chip
Apple Intelligence: Available and functional on this machine
Problem:
I created a new blank iOS project, set the deployment target to iOS 26, and ran the following minimal code using Foundation Models. However, I get no response at all in the output - not even an error. The app runs, but the model does not produce any output.
#Playground {
    let session = LanguageModelSession()
    let response = try await session.respond(to: "Tell me a story")
}
Then, I tried to catch an error with this code:
#Playground {
    let session = LanguageModelSession()
    do {
        let response = try await session.respond(to: "Tell me a story")
        print(response)
    } catch {
        print("Failed to get response:", error)
    }
    print("This line, never gets executed")
}
And got these results:
I’ve done further testing and discovered something important:
I tried running the Code Along sample project, and there the #Playground macro worked without issues. The only significant difference I noticed was the Canvas run destination:
In my original project, I was using iPhone 16 Pro (iOS 26) as the run target in Canvas. Apple Intelligence was enabled on the simulator, but no response was returned when executing the prompt.
In the sample project, the Canvas was running on My Mac.
I attempted to match that setup, but at first, my destination was My Mac (Designed for iPad), which still didn’t work. The macro finally executed properly once I switched to My Mac (AppKit).
So the question is ... it seems that for now, Foundation Models and the #Playground macro only run correctly when the canvas or destination is set to “My Mac (AppKit)”?
When the context window size is exceeded, this error is not thrown (another error shows up instead), so I can't use it to start a new session:
LanguageModelSession.GenerationError.exceededContextWindowSize
Or am I doing something wrong?
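For reference, a minimal sketch of the handling pattern the post seems to be aiming for: catch that specific case and fall back to a fresh session. The retry logic here is an assumption, not the poster's code.

import FoundationModels

func respondWithRecovery(session: inout LanguageModelSession, prompt: String) async throws -> String {
    do {
        return try await session.respond(to: prompt).content
    } catch LanguageModelSession.GenerationError.exceededContextWindowSize {
        // The context window overflowed: start a fresh session and retry once.
        session = LanguageModelSession()
        return try await session.respond(to: prompt).content
    }
}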
Topic: Machine Learning & AI
SubTopic: Foundation Models
Testing the Foundation Models framework with a health-focused recipe generation app. The on-device approach is appealing, but performance is rough: it takes 20+ seconds just to get a recipe name and description. The same content from the Claude API: 4 seconds.
I know it's beta and on-device has different tradeoffs, but this is approaching unusable territory for a real-time user experience. The streaming helps psychologically but doesn't mask the underlying latency. The privacy/cost benefits are compelling, but not if users abandon the feature before it completes.
Anyone else seeing similar performance? Is this expected for beta, or are there optimization techniques I'm missing?
Topic: Machine Learning & AI
SubTopic: Foundation Models
Is there anywhere we can reference error codes? I'm getting this error: "The operation couldn’t be completed. (FoundationModels.LanguageModelSession.GenerationError error 4.)" and I have no idea what it means or what to attempt to fix.
Topic: Machine Learning & AI
SubTopic: Foundation Models
Tags: Machine Learning, Create ML, Apple Intelligence
Hey,
I receive GenerableContent as follows:
let response = try await session.respond(to: "", schema: generationSchema)
And it wraps GeneratedJSON which seems to be private.
What is the best way to get a string / raw value out of it? I noticed it could theoretically be accessed via transcriptEntries but it's not ideal.
Topic: Machine Learning & AI
SubTopic: Foundation Models
I'm interested in using Foundation Models to act as an AI support agent for our extensive in-app documentation. We have many pages of in-app documents, which the user can currently search, but it would be great to use Foundation Models to let the user get answers to arbitrary questions.
Is this possible with the current version of Foundation Models? It seems like the way to add new context to the model is with the instructions parameter on LanguageModelSession. As I understand it, the combined instructions and prompt need to consume less than 4096 tokens.
That definitely wouldn't be enough for the amount of documentation I want the agent to be able to refer to. Is there another way of doing this, maybe as a series of recursive queries? If there is a solution based on multiple queries, should I expect this to be fast enough for interactive use?
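One multi-query shape that fits within that limit is to reuse the app's existing search as a retrieval step and only hand the model the most relevant excerpts. A rough sketch, where searchDocumentation is a hypothetical stand-in for the existing in-app search:

import FoundationModels

// Hypothetical placeholder for the app's existing documentation search.
func searchDocumentation(_ query: String, limit: Int) -> [String] {
    []  // a real implementation would return the best-matching doc excerpts
}

func answer(question: String) async throws -> String {
    let excerpts = searchDocumentation(question, limit: 3).joined(separator: "\n\n")
    let session = LanguageModelSession(instructions: """
        Answer the user's question using only the documentation excerpts below.
        If the answer is not in the excerpts, say you don't know.

        \(excerpts)
        """)
    return try await session.respond(to: question).content
}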
Topic: Machine Learning & AI
SubTopic: Foundation Models
I just tried to write a very simple test using Foundation Models, but it gave me an error like this:
"ModelManager received unentitled request. Expected entitlement com.apple.modelmanager.inference
establishment of session failed with Missing entitlement: com.apple.modelmanager.inference"
The simple code is listed below:
let session: LanguageModelSession = LanguageModelSession()
let response = try? await session.respond(to: "What is the capital of France?")
print("Response: \(response)")
So what's the problem here?
Topic: Machine Learning & AI
SubTopic: Foundation Models
According to the Tool documentation, the arguments to the tool are specified as a static struct type T, which is given to tool.call(argument: T). However, if the arguments are not known until runtime, is it still possible to create a Tool object with the proper parameters? Say a JSON-style dictionary is passed into the Tool init function to specify T; is this achievable?
Has Apple made any commitment to versioning the Foundation Models on device? What if you build a feature that works great on 26.0, but they change the model or guardrails in 26.1 and it breaks your feature? Is your only recourse filing Feedback or pulling the feature from the app? Will there be a way to specify a model version, as in all of the server-based LLM provider APIs? If not, it sounds risky to build on.
This is my code:
switch SystemLanguageModel.default.availability {
case .available:
    ContentView()
        .popover(isPresented: $showSettings) {
            SettingsView().presentationCompactAdaptation(.popover)
        }
case .unavailable(.modelNotReady):
    ContentUnavailableView("Apple Intelligence is unavailable",
                           systemImage: "apple.intelligence.badge.xmark",
                           description: Text("Please come back later."))
case .unavailable(.appleIntelligenceNotEnabled):
    ContentUnavailableView("Apple Intelligence is unavailable",
                           systemImage: "apple.intelligence.badge.xmark",
                           description: Text("Please turn on Apple Intelligence."))
case .unavailable(.deviceNotEligible):
    ContentUnavailableView("Apple Intelligence is unavailable",
                           systemImage: "apple.intelligence.badge.xmark",
                           description: Text("This device is not eligible for Apple Intelligence."))
case .unavailable:
    ContentUnavailableView("Apple Intelligence is unavailable",
                           systemImage: "apple.intelligence.badge.xmark")
}
When I switch off Apple Intelligence, I expected "Please turn on Apple Intelligence.", but instead I get "Please come back later."
Is this the wrong error?
Topic: Machine Learning & AI
SubTopic: Foundation Models