Foundation Models


Discuss the Foundation Models framework, which provides access to Apple’s on-device large language model that powers Apple Intelligence and helps you perform intelligent tasks specific to your app.

Foundation Models Documentation

Posts under the Foundation Models subtopic

Post · Replies · Boosts · Views · Activity

Model Rate Limits?
I'm trying out the Foundation Models framework, and when I run several sessions in a loop I get a thrown error saying I'm hitting a rate limit. Are these rate limits documented? What's the best practice here? I'm trying to run the model against new content downloaded from a web service, where a given download might contain ~200 items. Each item is relatively small, but there can be that many that need to be processed in a loop.
4 replies · 1 boost · 537 views · Jun ’25
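A minimal sketch of one way to approach the batching question above, assuming the rate limit surfaces as a thrown error: process items sequentially, create a fresh session per item so the transcript stays small, and back off before retrying. The Item type, retry count, and delay values are illustrative assumptions, not documented limits.

```swift
import FoundationModels

// Hedged sketch: summarize downloaded items one at a time, creating a fresh
// session per item so the transcript (and token count) stays small, and
// backing off when an error such as a rate limit is thrown.
// `Item`, the retry count, and the delay are illustrative assumptions.
struct Item {
    let text: String
}

func summaries(for items: [Item]) async -> [String] {
    var results: [String] = []

    for item in items {
        var attempt = 0
        while attempt < 3 {
            do {
                let session = LanguageModelSession(instructions: "Summarize the item in one sentence.")
                let response = try await session.respond(to: item.text)
                results.append(response.content)
                break
            } catch {
                // The rate limits are not documented, so the backoff delay is a guess.
                attempt += 1
                try? await Task.sleep(for: .seconds(2 * attempt))
            }
        }
    }
    return results
}
```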
Does Generable support recursive schemas?
I've run into an issue with a small Foundation Models test with Generable. I'm getting a strange error message with this Generable. I was able to get simpler ones to work. Is this because the Generable is recursive with a property of [HTMLDiv]? The error message is:

FoundationModels/SchemaAugmentor.swift:209: Fatal error: 'try!' expression unexpectedly raised an error: FoundationModels.GenerationSchema.SchemaError.undefinedReferences(schema: Optional("SafeResponse<HTMLDiv>"), references: ["HTMLDiv"], context: FoundationModels.GenerationSchema.SchemaError.Context(debugDescription: "Undefined types: [HTMLDiv]", underlyingErrors: []))

The code is:

import FoundationModels
import Playgrounds

@Generable
struct HTMLDiv {
    @Guide(description: "Optional named ID, useful for nicknames")
    var id: String? = nil

    @Guide(description: "Optional visible HTML text")
    var textContent: String? = nil

    @Guide(description: "Any child elements", .count(0...10))
    var children: [HTMLDiv] = []

    static var sample: HTMLDiv {
        HTMLDiv(
            id: "profileToolbar",
            children: [
                HTMLDiv(textContent: "Log in"),
                HTMLDiv(textContent: "Sign up"),
            ]
        )
    }
}

#Playground {
    do {
        let session = LanguageModelSession {
            "Your job is to generate simple HTML markup"
            "Here is an example response to the prompt: 'Make a profile toolbar':"
            HTMLDiv.sample
        }
        let response = try await session.respond(
            to: "Make a sign up form",
            generating: HTMLDiv.self
        )
        print(response.content)
    } catch {
        print(error)
    }
}
4 replies · 0 boosts · 168 views · Jul ’25
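The SchemaError above points at the recursive [HTMLDiv] property. A hedged workaround, assuming the limitation really is recursion, is to bound the nesting depth with a separate, non-recursive leaf type; the HTMLLeaf/HTMLContainer names and the single level of nesting are illustrative assumptions, not a documented pattern.

```swift
import FoundationModels

// Hedged workaround sketch: avoid the recursive reference by giving children
// a separate, non-recursive leaf type. Names and the one-level depth are assumptions.
@Generable
struct HTMLLeaf {
    @Guide(description: "Optional visible HTML text")
    var textContent: String? = nil
}

@Generable
struct HTMLContainer {
    @Guide(description: "Optional named ID, useful for nicknames")
    var id: String? = nil

    @Guide(description: "Child elements, one level deep only", .count(0...10))
    var children: [HTMLLeaf] = []
}
```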
Can I give additional context to Foundation Models?
I'm interested in using Foundation Models to act as an AI support agent for our extensive in-app documentation. We have many pages of in-app documents, which the user can currently search, but it would be great to use Foundation Models to let the user get answers to arbitrary questions. Is this possible with the current version of Foundation Models? It seems like the way to add new context to the model is with the instructions parameter on LanguageModelSession. As I understand it, the combined instructions and prompt need to consume less than 4096 tokens. That definitely wouldn't be enough for the amount of documentation I want the agent to be able to refer to. Is there another way of doing this, maybe as a series of recursive queries? If there is a solution based on multiple queries, should I expect this to be fast enough for interactive use?
4 replies · 0 boosts · 321 views · Jul ’25
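For the documentation question above, one hedged pattern is retrieve-then-ask: rank the in-app pages against the user's question and pass only the few most relevant excerpts with the prompt, so the combined instructions and prompt stay under the context limit. DocPage and the keyword-based ranking below are illustrative assumptions; a production app might use full-text search or embeddings instead.

```swift
import Foundation
import FoundationModels

// Hedged sketch of a retrieve-then-ask flow: the session's instructions stay
// small, and only the most relevant documentation excerpts travel with the
// prompt. `DocPage` and `relevantPages(for:in:limit:)` are assumptions.
struct DocPage {
    let title: String
    let body: String
}

func relevantPages(for question: String, in pages: [DocPage], limit: Int = 3) -> [DocPage] {
    // Placeholder relevance ranking based on keyword overlap.
    let terms = question.lowercased().split(separator: " ").map(String.init)
    let matches = pages.filter { page in
        let body = page.body.lowercased()
        return terms.contains { body.contains($0) }
    }
    return Array(matches.prefix(limit))
}

func answer(question: String, docs: [DocPage]) async throws -> String {
    let excerpts = relevantPages(for: question, in: docs)
        .map { "\($0.title)\n\($0.body)" }
        .joined(separator: "\n\n")

    let session = LanguageModelSession(
        instructions: "Answer the question using only the documentation excerpts included in the prompt."
    )
    let response = try await session.respond(to: "\(excerpts)\n\nQuestion: \(question)")
    return response.content
}
```

Whether this is fast enough for interactive use depends mostly on how many excerpts are injected; keeping it to a handful of short excerpts keeps the interaction to a single round trip.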
LanguageModelStream and collecting the final output
I have a Generable type with many elements. I am using a stream() to incrementally process the output (Generable.PartiallyGenerated?) content. At the end, I want to pass the final version (not partially generated) to another function. I cannot seem to find a good way to convert from a MyGenerable.PartiallyGenerated to a MyGenerable. Am I missing some functionality in the APIs?
4 replies · 0 boosts · 586 views · Jul ’25
All generations in #Playground macro are throwing "unsafe" Generation Errors
I'm using Xcode 26 Beta 5 and get errors on any generation I try, however harmless, when it's wrapped in the #Playground macro.

#Playground {
    let session = LanguageModelSession()
    let topic = "pandas"
    let prompt = "Write a safe and respectful story about \(topic)."
    let response = try await session.respond(to: prompt)
}

I'm not seeing any issues on the simulator or on device. Is anyone else seeing this, or have any ideas? Thanks for any help!

Version 26.0 beta 5 (17A5295f)
macOS 26.0 Beta (25A5316i)
4 replies · 0 boosts · 152 views · Aug ’25
Crash when testing Speech sample app with FoundationModels on macOS 26.0 beta and iOS 26.0 beta
Hello, I am testing the sample project provided here: Bringing advanced speech-to-text capabilities to your app. On both macOS 26.0 beta and iOS 26.0 beta, the app crashes immediately on launch with a dyld "Symbol not found" error related to FoundationModels.framework. It feels like this may be related to testing primarily on newer Apple Silicon devices, as I am seeing consistent crashes on an Intel MacBook and on an older iPhone. I would appreciate any insight, confirmation, or guidance on whether this is a known limitation or if there is a workaround. Is it planned to be resolved soon?

Environment

macOS:
Device: MacBook Pro (Intel)
Processor: 2 GHz Quad-Core Intel Core i5
Graphics: Intel Iris Plus Graphics 1536 MB
Memory: 16 GB 3733 MHz LPDDR4X
OS: macOS Tahoe Version 26.0 Beta (25A5338b)

iOS:
Device: iPhone 11
Model Number: MHDD3HN/A
OS: iOS 26.0

Xcode:
Version: 26.0 beta 3 (17A5276g)

Crash (macOS)

Abort signal received. Excerpt from crash dump:

dyld`__abort_with_payload:
    0x7ff80e3ad4a0 <+0>:  movl   $0x2000209, %eax
    0x7ff80e3ad4a5 <+5>:  movq   %rcx, %r10
    0x7ff80e3ad4a8 <+8>:  syscall
->  0x7ff80e3ad4aa <+10>: jae    0x7ff80e3ad4b4

Console:

dyld[9819]: Symbol not found: _$s16FoundationModels20LanguageModelSessionC5model10guardrails5tools12instructionsAcA06SystemcD0C_AC10GuardrailsVSayAA4Tool_pGAA12InstructionsVSgtcfC
Referenced from: /Users/userx/Library/Developer/Xcode/DerivedData/SwiftTranscriptionSampleApp-*/Build/Products/Debug/SwiftTranscriptionSampleApp.app/Contents/MacOS/SwiftTranscriptionSampleApp.debug.dylib
Expected in: /System/Library/Frameworks/FoundationModels.framework/Versions/A/FoundationModels

Crash (iOS)

Abort signal received. Excerpt from crash dump:

dyld`__abort_with_payload:
    0x18f22b4b0 <+0>: mov x16, #0x209
    0x18f22b4b4 <+4>: svc #0x80
->  0x18f22b4b8 <+8>: b.lo 0x18f22b4d8

Console:

dyld[2080]: Symbol not found: _$s16FoundationModels20LanguageModelSessionC5model10guardrails5tools12instructionsAcA06SystemcD0C_AC10GuardrailsVSayAA4Tool_pGAA12InstructionsVSgtcfC
Referenced from: /private/var/containers/Bundle/Application/.../SwiftTranscriptionSampleApp.app/SwiftTranscriptionSampleApp.debug.dylib
Expected in: /System/Library/Frameworks/FoundationModels.framework/FoundationModels

Questions

Is this crash expected on Intel Macs and older iPhone models with the beta SDKs?
Is there an official statement on whether macOS 26.x releases support Intel, or whether support exists only until macOS 26.1?
Are there any suggested workarounds for testing this sample project on current hardware?
Is this a known limitation of the 26.0 beta, and if so, should we expect a fix in 26.0 or only in subsequent releases?

Attaching screenshots for reference. Thank you in advance.
4 replies · 0 boosts · 537 views · Aug ’25
Error with guardrailViolation and underlyingErrors
Hi, I am a new iOS developer trying to learn to integrate the Apple Foundation Models framework. My setup is:

Mac M1 Pro
macOS 26 Beta, Version 26.0 beta 3
Apple Intelligence & Siri → On

Here is the code:

func generate() {
    Task {
        isGenerating = true
        output = "⏳ Thinking..."
        do {
            let session = LanguageModelSession(
                instructions: """
                Extract time from a message.
                Example
                Q: Golfing at 6PM
                A: 6PM
                """)
            let response = try await session.respond(to: "Go to gym at 7PM")
            output = response.content
        } catch {
            output = "❌ Error: \(error)"
            print(output)
        }
        isGenerating = false
    }
}

And I get this error:

guardrailViolation(FoundationModels.LanguageModelSession.GenerationError.Context(debugDescription: "Prompt may contain sensitive or unsafe content", underlyingErrors: [Asset com.apple.gm.safety_embedding_deny.all not found in Model Catalog]))

Can you help me get through this?
4 replies · 0 boosts · 489 views · 1w
Avoid hallucinations and information from training data
Hi. For certain tasks, such as qualitative analysis or tagging, it is advisable to give the AI the option to respond with a joker / wild-card answer when it has difficulty tagging or scoring. For instance, you can include a slot like this in the prompt: output must be "no data to score" when there isn't enough information to score. Without this kind of slot, the AI tends to provide an answer even when there is insufficient information. Foundation Models are meant to be prompted with simple prompts, so I wonder: Is it recommended to keep this slot even though it adds verbosity and complexity? Is the best place for it the description of a guided attribute? Any other tips? Another use case is when you want the AI to stick to the information provided in the prompt and not pull in information from its training data. What is the best approach for this? Thanks in advance for any suggestions.
4 replies · 0 boosts · 801 views · Oct ’25
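A hedged sketch of one answer to the first question above: the wild-card slot can live in the schema rather than in the prompt text, by making the score optional and stating the fallback in the guide description. The type and wording here are illustrative assumptions, not a documented pattern.

```swift
import FoundationModels

// Hedged sketch: let the model decline to score by design.
// Property names and guide wording are assumptions.
@Generable
struct TaggingResult {
    @Guide(description: "Relevance score from 1 to 5; leave unset when there is not enough information to score")
    var score: Int?

    @Guide(description: "True when the input does not contain enough information to score")
    var notEnoughData: Bool
}
```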
Missing module 'coremltools.libmilstoragepython'
Hello! I'm following the Foundation Models adapter training guide (https://developer.apple.com/apple-intelligence/foundation-models-adapter/) on my NVIDIA DGX Spark box. I'm able to train on my own data but the example notebook fails when I try to export the artifact as an fmadapter. I get the following error for the code block I'm trying to run. I haven't touched any of the code in the export folder. I tried exporting it on my Mac too and got the same error as well (given below). Would appreciate some more clarity around this. Thank you.

Code block:

from export.export_fmadapter import Metadata, export_fmadapter

metadata = Metadata(
    author="3P developer",
    description="An adapter that writes play scripts.",
)

export_fmadapter(
    output_dir="./",
    adapter_name="myPlaywritingAdapter",
    metadata=metadata,
    checkpoint="adapter-final.pt",
    draft_checkpoint="draft-model-final.pt",
)

Error:

---------------------------------------------------------------------------
ModuleNotFoundError                       Traceback (most recent call last)
Cell In[10], line 1
----> 1 from export.export_fmadapter import Metadata, export_fmadapter
      3 metadata = Metadata(
      4     author="3P developer",
      5     description="An adapter that writes play scripts.",
      6 )
      8 export_fmadapter(
      9     output_dir="./",
     10     adapter_name="myPlaywritingAdapter",
    (...)
     13     draft_checkpoint="draft-model-final.pt",
     14 )

File /workspace/export/export_fmadapter.py:11
      8 from typing import Any
     10 from .constants import BASE_SIGNATURE, MIL_PATH
---> 11 from .export_utils import AdapterConverter, AdapterSpec, DraftModelConverter, camelize
     13 logger = logging.getLogger(__name__)
     16 class MetadataKeys(enum.StrEnum):

File /workspace/export/export_utils.py:15
     13 import torch
     14 import yaml
---> 15 from coremltools.libmilstoragepython import _BlobStorageWriter as BlobWriter
     16 from coremltools.models.neural_network.quantization_utils import _get_kmeans_lookup_table_and_weight
     17 from coremltools.optimize._utils import LutParams

ModuleNotFoundError: No module named 'coremltools.libmilstoragepython'
4 replies · 0 boosts · 514 views · Oct ’25
InferenceError referencing context length in FoundationModels framework
I'm experimenting with downloading an audio file of spoken content, using the Speech framework to transcribe it, then using FoundationModels to clean up the formatting to add paragraph breaks and such. I have this code to do that cleanup:

private func cleanupText(_ text: String) async throws -> String? {
    print("Cleaning up text of length \(text.count)...")
    let session = LanguageModelSession(instructions: "The content you read is a transcription of a speech. Separate it into paragraphs by adding newlines. Do not modify the content - only add newlines.")
    let response = try await session.respond(to: .init(text), generating: String.self)
    return response.content
}

The content length is about 29,000 characters. And I get this error:

InferenceError::inferenceFailed::Failed to run inference: Context length of 4096 was exceeded during singleExtend.

Is 4096 a reference to a max input length? Or is this a bug? This is running on an M1 iPad Air, with iPadOS 26 Seed 1.
5 replies · 0 boosts · 365 views · Jul ’25
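The 4096 in the error above matches the model's context window, so the ~29,000-character transcript cannot be sent in one request. A hedged workaround is to split the transcript into chunks and clean each one in its own session; the 8,000-character chunk size below is a rough assumption based on a few characters per token, not a documented value.

```swift
import FoundationModels

// Hedged sketch: clean up a long transcript in context-window-sized pieces.
// The chunk size is an assumption, and splitting on a fixed character count
// can break sentences; splitting on sentence boundaries would be gentler.
func cleanupLongText(_ text: String) async throws -> String {
    let chunkSize = 8_000
    var cleaned: [String] = []
    var remaining = text[...]

    while !remaining.isEmpty {
        let chunk = String(remaining.prefix(chunkSize))
        remaining = remaining.dropFirst(chunk.count)

        // A fresh session per chunk keeps the transcript, and the token count, small.
        let session = LanguageModelSession(
            instructions: "The content is part of a speech transcription. Separate it into paragraphs by adding newlines. Do not modify the content - only add newlines."
        )
        let response = try await session.respond(to: chunk)
        cleaned.append(response.content)
    }
    return cleaned.joined(separator: "\n")
}
```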
visionOS 26 beta 2: Symbol Not Found on Foundation Models
When I try to run visionOS 26 beta 2 on my device, the app crashes on launch:

dyld[904]: Symbol not found: _$s16FoundationModels10TranscriptV7entriesACSayAC5EntryOG_tcfC
Referenced from: <A71932DD-53EB-39E2-9733-32E9D961D186> /private/var/containers/Bundle/Application/53866099-99B1-4BBD-8C94-CD022646EB5D/VisionPets.app/VisionPets.debug.dylib
Expected in: <F68A7984-6B48-3958-A48D-E9F541868C62> /System/Library/Frameworks/FoundationModels.framework/FoundationModels

Symbol not found: _$s16FoundationModels10TranscriptV7entriesACSayAC5EntryOG_tcfC
Referenced from: <A71932DD-53EB-39E2-9733-32E9D961D186> /private/var/containers/Bundle/Application/53866099-99B1-4BBD-8C94-CD022646EB5D/VisionPets.app/VisionPets.debug.dylib
Expected in: <F68A7984-6B48-3958-A48D-E9F541868C62> /System/Library/Frameworks/FoundationModels.framework/FoundationModels
dyld config: DYLD_LIBRARY_PATH=/usr/lib/system/introspection
DYLD_INSERT_LIBRARIES=/usr/lib/libLogRedirect.dylib:/usr/lib/libBacktraceRecording.dylib:/usr/lib/libMainThreadChecker.dylib:/usr/lib/libViewDebuggerSupport.dylib:/System/Library/PrivateFrameworks/GPUToolsCapture.framework/GPUToolsCapture

Symbol not found: _$s16FoundationModels10TranscriptV7entriesACSayAC5EntryOG_tcfC
Referenced from: <A71932DD-53EB-39E2-9733-32E9D961D186> /private/var/containers/Bundle/Application/53866099-99B1-4BBD-8C94-CD022646EB5D/VisionPets.app/VisionPets.debug.dylib
Expected in: <F68A7984-6B48-3958-A48D-E9F541868C62> /System/Library/Frameworks/FoundationModels.framework/FoundationModels
dyld config: DYLD_LIBRARY_PATH=/usr/lib/system/introspection
DYLD_INSERT_LIBRARIES=/usr/lib/libLogRedirect.dylib:/usr/lib/libBacktraceRecording.dylib:/usr/lib/libMainThreadChecker.dylib:/usr/lib/libViewDebuggerSupport.dylib:/System/Library/PrivateFrameworks/GPUToolsCapture.framework/GPUToolsCapture

Message from debugger: Terminated due to signal 6
5 replies · 0 boosts · 194 views · Jun ’25
Foundation Models / Playgrounds Hello World - Help!
I am using Foundation Models for the first time and no response is being provided to me.

Code:

import Playgrounds
import FoundationModels

#Playground {
    let session = LanguageModelSession()
    let result = try await session.respond(to: "List all the states in the USA")
    print(result.content)
}

Canvas output: nothing appears (screenshot attached).

What I did:
Created a new file.
Added the code above.
The canvas refreshes but nothing happens.

Am I missing a step or setup here? Please help; something this basic is not working and I do not know what to do. Running on a MacBook Pro (16-core CPU, 40-core GPU), with iOS 26 / Xcode 26 beta 2 / Tahoe in a Parallels VM allocated 8 CPUs and 48 GB of memory. (Screenshot of the Playgrounds settings in Xcode attached.) Thank you for your help in advance.
5 replies · 1 boost · 322 views · Jul ’25
Foundation Models performance reality check - anyone else finding it slow?
Testing the Foundation Models framework with a health-focused recipe generation app. The on-device approach is appealing, but performance is rough: it takes 20+ seconds just to get a recipe name and description. The same content from the Claude API takes 4 seconds. I know it's beta and on-device has different tradeoffs, but this is approaching unusable territory for a real-time user experience. Streaming helps psychologically but doesn't mask the underlying latency. The privacy/cost benefits are compelling, but not if users abandon the feature before it completes. Is anyone else seeing similar performance? Is this expected for the beta, or are there optimization techniques I'm missing?
5 replies · 0 boosts · 290 views · Jul ’25
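Two mitigations that may help with the latency described above, sketched under assumptions: prewarm the session before the user asks for a recipe, and request a deliberately small Generable first so something useful appears quickly. RecipeSummary is an illustrative type, not part of the framework, and actual gains will vary by device.

```swift
import FoundationModels

// Hedged sketch: load the model ahead of time and ask for a small schema first.
// `RecipeSummary` and the instructions text are assumptions.
@Generable
struct RecipeSummary {
    @Guide(description: "Short recipe name")
    var name: String

    @Guide(description: "One-sentence description")
    var summary: String
}

func makeSession() -> LanguageModelSession {
    let session = LanguageModelSession(instructions: "You create healthy recipes.")
    // Prewarm while the user is still typing, so the first request pays less startup cost.
    session.prewarm()
    return session
}

func quickSummary(for request: String, using session: LanguageModelSession) async throws -> RecipeSummary {
    let response = try await session.respond(to: request, generating: RecipeSummary.self)
    return response.content
}
```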
FoundationModel, context length, and testing
I am working on an app that uses FoundationModels to process web pages, and I am looking for ways to filter the input to fit within the token limits. I have unit tests, UI tests, and the app running on an iPad in the simulator. It appears that the different configurations of the test environment affect the token limits; that is, the same input in a unit test and in a UI test will hit different token limits. Is this correct? Or is this an artifact of my test tooling?
5 replies · 0 boosts · 947 views · Nov ’25
Converting GenerableContent to JSON string
Hey, I receive GenerableContent as follows:

let response = try await session.respond(to: "", schema: generationSchema)

It wraps GeneratedJSON, which seems to be private. What is the best way to get a string / raw value out of it? I noticed it could theoretically be accessed via transcriptEntries, but that's not ideal.
6 replies · 0 boosts · 218 views · Jul ’25
Proposal: Develop a Token Estimation Tool for Foundation Models
Dear Apple Foundation Models Development Team,

I am a developer integrating Apple Foundation Models (AFM) into my app and encountered the exceededContextWindowSize error when exceeding the 4096-token limit.

Proposal: I suggest Apple develop a tool to estimate the token count of a prompt before sending it to the model. This tool could be integrated into the FoundationModels framework for ease of use.

Benefits: A token estimation tool would help developers manage the context window limit and optimize performance.

I hope Apple considers this proposal soon. Thank you!
6 replies · 0 boosts · 353 views · Aug ’25
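Until a token-estimation API exists, a hedged stopgap is a rough client-side estimate so a prompt can be trimmed before it triggers exceededContextWindowSize. The characters-per-token ratio below is an assumption for English text, not a published tokenizer property.

```swift
// Hedged stopgap sketch: rough token estimate for budgeting a prompt.
// The 4-characters-per-token ratio and the 512-token output reserve are assumptions.
func estimatedTokenCount(for text: String) -> Int {
    let charactersPerToken = 4.0
    return Int((Double(text.count) / charactersPerToken).rounded(.up))
}

func fitsContextWindow(_ prompt: String, limit: Int = 4_096, reservedForOutput: Int = 512) -> Bool {
    estimatedTokenCount(for: prompt) + reservedForOutput <= limit
}
```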